Basics of computer memory

    Summary: Memory systems in computers. Access to memory (for storage of programs and data) is one of the most basic and lowest level activities of an operating system.


    Before discussing memory operations, here is a quick review of memory, from the web page on basic computer hardware:

memory hardware issues

main storage

    Main storage is also called memory or internal memory (to distinguish from external memory, such as hard drives). An older term is working storage.

    Main storage is fast (at least a thousand times faster than external storage, such as hard drives). Main storage (with a few rare exceptions) is volatile, the stored information being lost when power is turned off.

    All data and instructions (programs) must be loaded into main storage before the computer processor can use them.

    RAM is Random Access Memory, and is the basic kind of internal memory. RAM is called “random access” because the processor or computer can access any location in memory directly (as contrasted with sequential access devices, which must be accessed in order). RAM has been made from reed relays, transistors, integrated circuits, magnetic core, or anything else that can hold and store binary values (one/zero, plus/minus, open/close, positive/negative, high/low, etc.). Most modern RAM is made from integrated circuits. At one time the most common kind of memory in mainframes was magnetic core, so many older programmers will refer to main memory as core memory even when the RAM is made from more modern technology. Static RAM is called static because it holds its contents without needing to be continually refreshed; dynamic RAM is called dynamic because it must be refreshed many times per second or it loses its data. Note that both kinds of semiconductor RAM are volatile, losing their contents when power is removed, while magnetic core and reed relays are non-volatile, retaining their contents even without power. It is possible to add battery backup to normally volatile memory so that it keeps its contents when main power is turned off.

    ROM is Read Only Memory (it is also random access, but only for reads). ROM is typically used to store things that will never change for the life of the computer, such as low level portions of an operating system. Some processors (or variations within processor families) might have RAM and/or ROM built into the same chip as the processor (normally used for processors used in standalone devices, such as arcade video games, ATMs, microwave ovens, car ignition systems, etc.). EPROM is Erasable Programmable Read Only Memory, a special kind of ROM that can be erased and reprogrammed with specialized equipment (but not by the processor it is connected to). EPROMs allow makers of industrial devices (and other similar equipment) to have the benefits of ROM, yet also allow for updating or upgrading the software without having to buy new ROM and throw out the old (the EPROMs are collected, erased and rewritten centrally, then placed back into the machines).

    Registers and flags are a special kind of memory that exists inside a processor. Typically a processor will have several internal registers that are much faster than main memory. These registers usually have specialized capabilities for arithmetic, logic, and other operations. Registers are usually fairly small (8, 16, 32, or 64 bits for integer data, address, and control registers; 32, 64, 96, or 128 bits for floating point registers). Some processors separate integer data and address registers, while other processors have general purpose registers that can be used for both data and address purposes. A processor will typically have one to 32 data or general purpose registers (processors with separate data and address registers typically split the register set in half). Many processors have special floating point registers (and some processors have general purpose registers that can be used for either integer or floating point arithmetic). Flags are single bit memory used for testing, comparison, and conditional operations (especially conditional branching). For a much more advanced look at registers, see registers.

external storage

    External storage is any storage other than main memory. In modern times this is mostly hard drives and removable media (such as floppy disks, Zip disks, optical media, etc.). With the advent of USB and FireWire hard drives, the line between permanent hard drives and removable media is blurred. Other kinds of external storage include tape drives, drum drives, paper tape, and punched cards. Random access or indexed access devices (such as hard drives, removable media, and drum drives) provide an extension of memory (although usually accessed through logical file systems). Sequential access devices (such as tape drives, paper tape punch/readers, or dumb terminals) provide for off-line storage of large amounts of information (or back ups of data) and are often called I/O devices (for input/output).

buffers

    Buffers are areas in main memory that are used to store data (or instructions) being transferred to or from external memory.
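
    As a small illustration, here is a minimal C sketch of a buffer in this sense: a block of main memory that data from external storage is read into (the file name is an invented example):

        #include <stdio.h>

        int main(void)
        {
            unsigned char buffer[512];          /* one block-sized buffer in main memory */
            FILE *f = fopen("data.bin", "rb");  /* "data.bin" is an invented example file */
            if (f == NULL)
                return 1;

            /* external storage -> buffer; a write would go the other direction */
            size_t n = fread(buffer, 1, sizeof buffer, f);
            printf("read %zu bytes into the buffer\n", n);

            fclose(f);
            return 0;
        }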

basic memory software approaches

static and dynamic approaches

    There are two basic approaches to memory usage: static and dynamic.

    Static memory approaches assume that the addresses don’t change. This may be a virtual memory illusion, or may be the actual physical layout. The static memory allocation may be through absolute addresses or through PC relative addresses (to allow for relocatable, reentrant, and/or recursive software), but in either case, the compiler or assembler generates a set of addresses that cannot change for the life of a program or process.

    Dynamic memory approaches assume that the addresses can change (although change is often limited to predefined possible conditions). The two most common dynamic approaches are the use of stack frames and the use of pointers or handles. Stack frames are used primarily for temporary data (such as function or subroutine variables or loop counters). Handles and pointers are used for keeping track of dynamically allocated blocks of memory.
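
    As a rough illustration in C (a sketch, not tied to any particular operating system): a global variable is allocated statically, a local variable lives in a stack frame, and a malloc’d block is tracked through a pointer:

        #include <stdio.h>
        #include <stdlib.h>

        int global_count = 0;        /* static: address fixed when the program is built and loaded */

        int main(void)
        {
            int loop_counter = 7;    /* dynamic: lives in this call's stack frame */
            int *block = malloc(100 * sizeof *block);   /* dynamic: heap block tracked by a pointer */
            if (block == NULL)
                return 1;

            printf("static at %p, stack at %p, heap at %p\n",
                   (void *)&global_count, (void *)&loop_counter, (void *)block);

            free(block);             /* the heap block's address may differ on every run */
            return 0;
        }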

absolute addressing

    To look at memory use by programs and operating systems, let’s first examine the simpler problem of a single program with complete control of the computer (such as in a small-scale embedded system or in the earliest days of computing).

    The most basic form of memory access is absolute addressing, in which the program explicitly names the address that is going to be used. An address is a numeric label for a specific location in memory. The numbering system is usually in bytes and always starts counting with zero. The first byte of physical memory is at address 0, the second byte at address 1, the third byte at address 2, etc. Some processors use word addressing rather than byte addressing. The theoretical maximum address is determined by the address size of the processor (a 16 bit address space is limited to no more than 65,536 memory locations; a 32 bit address space is limited to approximately 4 GB of memory locations). The actual maximum is limited by the amount of RAM (and ROM) physically installed in the computer.
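
    In C on a bare machine (such as a small embedded system), absolute addressing amounts to casting a literal address to a pointer. The address below is invented for illustration, and on an operating system with protected memory this access would simply fault:

        #include <stdint.h>

        /* Hypothetical device status register at a fixed absolute address;
           0x40021018 is an invented example, not any real chip's register map. */
        #define STATUS_REG (*(volatile uint32_t *)0x40021018u)

        void wait_until_ready(void)
        {
            while ((STATUS_REG & 0x1u) == 0)   /* spin until the device sets its ready bit */
                ;
        }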

    A programmer assigns specific absolute addresses for data structures and program routines. These absolute addresses might be assigned arbitrarily or might have to match specific locations expected by an operating system. In practice, the assembler or compiler determines the absolute addresses through an orderly, predictable assignment scheme (with the ability for the programmer to override the compiler’s scheme to assign specific operating system mandated addresses).

    This simple approach takes advantage of the fact that the compiler or assembler can predict the exact absolute addresses of every program instruction or routine and every data structure or data element. For almost every processor, absolute addresses are the fastest form of memory addressing. The use of absolute addresses makes programs run faster and greatly simplifies the task of compiling or assembling a program.

    Some hardware instructions or operations rely on fixed absolute addresses. For example, when a processor is first turned on, where does it start? Most processors have a specific address that is used as the address of the first instruction run when the processor is first powered on. Some processors provide a method for the start address to be changed for future start-ups. Sometimes this is done by storing the start address internally (with some method for software or external hardware to change this value). For example, on power up the Motorola 680x0 loads the interrupt stack pointer with the longword value located at address 000 hex, loads the program counter with the longword value located at address 004 hex, then starts execution at the freshly loaded program counter location. Sometimes this is done by reading the start address from a data line (or other external input) at power-up (and in this case, there is usually fixed external hardware that always generates the same pre-assigned start address).

    Another common example of hardware related absolute addressing is the handling of traps, exceptions, and interrupts. A processor often has specific memory addresses set aside for specific kinds of traps, exceptions, and interrupts. As a specific example, a divide by zero exception on the Motorola 680x0 produces exception vector number 5, with the address of the exception handler being fetched by the hardware from memory address 014 hex.
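
    The arithmetic behind that address: each 680x0 exception vector entry is a longword (4 bytes), so vector n lives at address n × 4, and vector 5 is at 5 × 4 = 20 decimal = 014 hex. A trivial C check:

        #include <stdio.h>

        int main(void)
        {
            unsigned vector = 5;             /* divide by zero on the 680x0 */
            unsigned address = vector * 4;   /* each vector entry is one longword */
            printf("vector %u -> address 0x%03X\n", vector, address);   /* prints 0x014 */
            return 0;
        }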

    Some simple microprocessor operating systems relied heavily on absolute addressing. An example would be the MS-DOS expectation that the start of a program would always be located at absolute memory address 100h (hexadecimal 100, or decimal 256). A typical compiler or assembler directive for this would be the ORG directive (for “origin”).

    The key disadvantage of absolute addressing is that multiple programs clash with each other, competing to use the same absolute memory locations for (possibly different) purposes.

overlay

    So, how do you implement multiple programs on an operating system using absolute addresses? Or, for early computers, how do you implement a program that is larger than available RAM (especially at a time when computers rarely had more than 1K, 2K, or 4K of RAM)? The earliest answer was overlay systems.

    With an overlay system, each program or program segment is loaded into the exact same space in memory. An overlay handler exists in another area of memory and is responsible for swapping overlay pages or overlay segments (both are the same thing, but different operating systems used different terminology). When an overlay segment completes its work or needs to access a routine in another overlay segment, it signals the overlay handler, which then swaps out the old program segment and swaps in the next program segment.

    An overlay handler doesn’t take much memory. Typically, the memory space that contained the overlay handler was also padded out with additional routines. These might include key device drivers, interrupt handlers, exception handlers, and small commonly used routines shared by many programs (to save time instead of continual swapping of the small commonly used routines).
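
    A minimal C sketch of what an overlay handler does (the file name, flat segment layout, and segment size are all invented for illustration): keep one fixed region of memory and load whichever segment is needed into it on request:

        #include <stdio.h>

        #define OVERLAY_SIZE 4096   /* every overlay segment is built to fit this region */

        static unsigned char overlay_region[OVERLAY_SIZE];   /* the one shared load area */
        static int current_segment = -1;

        /* Load segment n from the overlay file into the shared region. */
        static int load_segment(int n)
        {
            FILE *f = fopen("program.ovl", "rb");
            if (f == NULL)
                return -1;
            if (fseek(f, (long)n * OVERLAY_SIZE, SEEK_SET) != 0 ||
                fread(overlay_region, 1, OVERLAY_SIZE, f) != OVERLAY_SIZE) {
                fclose(f);
                return -1;
            }
            fclose(f);
            current_segment = n;
            return 0;
        }

        /* The handler proper: make sure segment n is resident before it is used. */
        int overlay_ensure(int n)
        {
            if (n == current_segment)
                return 0;              /* already resident: no swap needed */
            return load_segment(n);    /* swap the old segment out, the new one in */
        }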

    In early systems, all data was global, meaning that it was shared by and available for both reads and writes by any running program (in modern times, global almost always means available to a single entire program, no longer meaning available to all software on a computer). A section of memory was set aside for shared system variables, device driver variables, and interrupt handler variables. An additional area would be set aside as “scratch” or temporary data. The temporary data area would be available for individual programs. Because the earliest operating systems were batch systems, only one program other than the operating system would be running at any one time, so it could use the scratch RAM any way it wanted, saving any long term data to files.

relocatable software

    As computer science advanced, hardware started to include support for relocatable programs and data. This allowed an operating system to load a program anywhere convenient in memory (including a different location each time the program was loaded). This was a necessary step for the jump to interactive operating systems, but was also useful in early batch systems to allow for multiple overlay segments.

demand paging and swapping

    Overlay systems were superseded by demand paging and swapping systems. In a swapping system, the operating system swaps out an entire program and its data (and any other context information).

    In a demand paging system, instead of having programs explicitly request overlays, programs are divided into pages. The operating system loads a program’s starting page and starts it running. When the program needs to access a data page or program page not currently in main memory, the hardware generates a page fault, and the operating system fetches the requested page from external storage. When all available page frames are filled, the operating system uses one of many schemes for figuring out which page to evict from memory to make room for the new page (and if the evicted page is a data page with changes, the operating system first has to write a copy of it back to external storage). The question of how to decide which page to evict is one of the major problems facing operating system designers.
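
    The eviction decision can be sketched in C with the simplest possible scheme, FIFO (evict the page that has been resident longest); real operating systems use more elaborate schemes such as LRU approximations or the clock algorithm:

        #include <stdio.h>

        #define NUM_FRAMES 4

        static int frames[NUM_FRAMES];   /* which page occupies each physical frame */
        static int used = 0;
        static int next_victim = 0;      /* FIFO: index of the oldest resident page */

        static int resident(int page)
        {
            for (int i = 0; i < used; i++)
                if (frames[i] == page)
                    return 1;
            return 0;
        }

        static void access_page(int page)
        {
            if (resident(page))
                return;                        /* hit: nothing to do */
            if (used < NUM_FRAMES) {
                frames[used++] = page;         /* free frame: just load the page */
            } else {
                printf("evicting page %d for page %d\n", frames[next_victim], page);
                frames[next_victim] = page;    /* fault: replace the oldest page */
                next_victim = (next_victim + 1) % NUM_FRAMES;
            }
        }

        int main(void)
        {
            int refs[] = { 1, 2, 3, 4, 1, 5, 2, 6 };   /* a sample reference string */
            for (int i = 0; i < 8; i++)
                access_page(refs[i]);
            return 0;
        }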

program counter relative

    One approach for making programs relocatable is program counter relative addressing. Instead of branching using absolute addresses, branches (including subroutine calls, jumps, and other kinds of branching) are based on a relative distance from the current program counter (which holds the address of the currently executing instruction). With PC relative addresses, the program can be loaded anywhere in memory and still work correctly. The location of routines, subroutines, functions, and constant data can be determined by the positive or negative distance from the current instruction.

    Program counter relative addressing can also be used for determining the address of variables, but then data and code get mixed in the same page or segment. At a minimum, mixing data and code in the same segment is bad programming practice, and in most cases it clashes with more sophisticated hardware systems (such as protected memory).
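
    The effect can be sketched with a little C arithmetic (the load addresses are invented): the branch instruction encodes only a signed displacement, so the same encoded instruction resolves to the correct target wherever the program is loaded:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            int16_t displacement = -24;            /* signed offset encoded in the branch */

            uint32_t pc_if_loaded_low  = 0x1000;   /* program loaded at one address... */
            uint32_t pc_if_loaded_high = 0x8000;   /* ...or somewhere else entirely */

            /* target = PC + displacement; only the base moves, never the offset */
            printf("target if loaded at 0x1000: 0x%X\n",
                   (unsigned)(pc_if_loaded_low + (int32_t)displacement));
            printf("target if loaded at 0x8000: 0x%X\n",
                   (unsigned)(pc_if_loaded_high + (int32_t)displacement));
            return 0;
        }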

base pointers

    Base pointers (sometimes called segment pointers or page pointers) are special hardware registers that point to the start (or base) of a particular page or segment of memory. Programs can then use an absolute address within a page and either explicitly add the absolute address to the contents of a base pointer or rely on the hardware to add the two together to form the actual effective address of the memory access. Which method is used depends on the processor capabilities and the operating system design. Hiding the base pointer from the application program both makes the program easier to compile and allows the operating system to implement program isolation, data/code isolation, protected memory, and other sophisticated services.

    As an example, the Intel 80x86 processor has a code segment pointer, a data segment pointer, a stack segment pointer, and an extra segment pointer. When a program is loaded into memory, an operating system running on the Intel 80x86 sets the segment pointers with the beginning of the pages assigned for each purpose for that particular program. If a program is swapped out, when it gets swapped back in, the operating system sets the segment pointers to the new memory locations for each segment. The program continues to run, without being aware that it has been moved in memory.
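
    In miniature (all values invented): the compiler fixes only the offset, the operating system owns the base, and the hardware forms the effective address as their sum, so moving a segment means changing only the base:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint32_t data_segment_base = 0x20000;   /* set by the OS at load time */
            uint32_t variable_offset   = 0x0042;    /* fixed by the compiler */

            printf("effective address: 0x%X\n",
                   (unsigned)(data_segment_base + variable_offset));

            /* After a swap out and swap back in, only the base changes; the
               program's offsets, and therefore the program itself, are untouched. */
            data_segment_base = 0x74000;
            printf("after relocation:  0x%X\n",
                   (unsigned)(data_segment_base + variable_offset));
            return 0;
        }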

indirection, pointers, and handles

    A method for making data relocatable is to use indirection. Instead of hard coding an absolute memory address for a variable or data structure, the program uses a pointer that gives the memory address of the variable or data structure. Many processors have address pointer registers and a variety of indirect addressing modes available for software.

    In the simplest use of address pointers, software generates the effective address for the pointer just before it is used. Pointers can also be stored, but then the data can’t be moved (unless there is additional hardware support, such as virtual memory or base/segment pointers).

    Closely related to pointers are handles. Handles are two levels of indirection, or a pointer to a pointer. Instead of the program keeping track of an address pointer to a block of memory that can’t be moved, the program keeps track of a pointer to a pointer. Now, the operating system or the application program can move the underlying block of data. As long as the program uses the handle instead of the pointer, the operating system can freely move the data block and update the pointer, and everything will continue to resolve correctly.

    Because it is faster to use pointers than handles, it is common for software to convert a handle into a pointer and use the pointer for data accesses. If this is done, there must be some mechanism to make sure that the data block doesn’t move while the program is using the pointer. As an example, the Macintosh uses a system where data blocks can only be moved at specific known times, and an application program can rely on pointers derived from handles remaining valid between those known, specified times.
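
    A bare C sketch of the idea of a handle as a pointer to a pointer (just the concept, not the actual Macintosh Memory Manager interface):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        typedef char **Handle;   /* a handle: pointer to a pointer to the data */

        int main(void)
        {
            char *block = malloc(32);   /* the movable data block */
            if (block == NULL)
                return 1;
            Handle h = &block;          /* the handle itself never changes */

            strcpy(*h, "hello");
            printf("%s at %p\n", *h, (void *)*h);

            /* The "memory manager" relocates the block and updates the inner
               pointer; the handle stays valid. */
            char *moved = malloc(32);
            if (moved == NULL) {
                free(block);
                return 1;
            }
            strcpy(moved, block);
            free(block);
            block = moved;

            printf("%s at %p\n", *h, (void *)*h);   /* same handle, new address */
            free(block);
            return 0;
        }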

stack frames

    Stack frames are a method for generating temporary variables, especially for subroutines, functions, and loops. An area of memory is temporarily allocated on the system or process stack. In a simple version, the variables in the stack frame are accessed by using the stack pointer and an offset to point to the actual location in memory. This simple approach has the problem that many hardware instructions change the stack pointer. The more sophisticated and stable approach is to have a second pointer called a frame pointer. The frame pointer can be set up in software using any address register. Many modern processors also have specific hardware instructions that allocate the stack frame and set up the frame pointer at the same time. Some processors have a specific hardware frame pointer register.
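
    A small C demonstration of the idea: each call gets its own stack frame, so each recursive activation has its own copy of the local variable, and printing the local’s address shows a fresh frame for every call:

        #include <stdio.h>

        static void descend(int depth)
        {
            int depth_local = depth;   /* lives in this call's stack frame */
            printf("depth %d, local at %p\n", depth_local, (void *)&depth_local);
            if (depth_local > 0)
                descend(depth_local - 1);   /* each call allocates a new frame */
        }

        int main(void)
        {
            descend(3);
            return 0;
        }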

virtual memory

    Virtual memory is a technique in which each process generates addresses as if it had sole access to the entire logical address space of the processor, but in reality memory management hardware remaps the logical addresses into actual physical addresses in physical address space. The DEC VAX-11 gets its name from this technique, VAX standing for Virtual Address eXtension.

    Virtual memory can go beyond just remapping logical addresses into physical addresses. Many virtual memory systems also include software for page or segment swapping, shuffling portions of a program to and from a hard disk, to give the software the impression of having much more RAM than is actually installed on the computer.
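
    The remapping itself can be sketched in a few lines of C (page size, table size, and table contents are all invented): split the logical address into a page number and an offset, translate the page number through a page table, and append the offset:

        #include <stdint.h>
        #include <stdio.h>

        #define PAGE_SIZE 4096u
        #define NUM_PAGES 16u

        static uint32_t page_table[NUM_PAGES];   /* page number -> physical frame base */

        int main(void)
        {
            /* An arbitrary toy mapping; a real table is maintained by the OS. */
            for (uint32_t i = 0; i < NUM_PAGES; i++)
                page_table[i] = (NUM_PAGES - 1 - i) * PAGE_SIZE;

            uint32_t logical  = 0x2A7C;
            uint32_t page     = logical / PAGE_SIZE;    /* which page */
            uint32_t offset   = logical % PAGE_SIZE;    /* where within the page */
            uint32_t physical = page_table[page] + offset;

            printf("logical 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
                   (unsigned)logical, (unsigned)page, (unsigned)offset, (unsigned)physical);
            return 0;
        }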

OS memory services

    Operating systems offer some kind of mechanism for software (both system and user) to access memory.

    In the simplest approach, the entire memory of the computer is turned over to programs. This approach is most common in single tasking systems (where only one program runs at a time). Even in this approach, there often will be certain portions of memory designated for certain purposes (such as low memory variables, areas for operating system routines, memory mapped hardware, video RAM, etc.).

    With hardware support for virtual memory, operating systems can give programs the illusion of having the entire memory to themselves (or even give the illusion there is more memory than there actually is, using disk space to provide the extra “memory”), when in reality the operating system is continually moving programs around in memory and dynamically assigning physical memory as needed. Even with this approach, it is possible that some virtual memory locations are mapped to their actual physical addresses (such as for access to low memory variables, video RAM, or similar areas).

    The task of dividing up the available memory space in both of these approaches is left to the programmer and the compiler. Many modern languages (including C and C++) have service routines for allocating and deallocating blocks of memory.
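
    In C, those service routines are malloc, realloc, and free:

        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            char *buf = malloc(64);            /* allocate a 64-byte block */
            if (buf == NULL)
                return 1;
            strcpy(buf, "allocated");

            char *bigger = realloc(buf, 256);  /* resize; may move to a new address */
            if (bigger == NULL) {
                free(buf);
                return 1;
            }

            free(bigger);                      /* return the block to the allocator */
            return 0;
        }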

    Some operating systems go beyond basic flat mapping of memory and provide operating system routines for allocating and deallocating memory. The Macintosh, for example, has two heaps (a system heap and an application heap) and an entire set of operating system routines for allocating, deallocating, and managing blocks of memory. The NeXT goes even further and creates an object oriented set of services for memory management.

    With hardware support for segments or demand paging, some operating systems (such as MVS and OS/2) provide operating system routines for programs to manage segments or pages of memory.

    Memory maps (not to be confused with memory mapped I/O) are diagrams or charts that show how an operating system divides up main memory. For more details, see memory maps.

    Low memory is the memory at the beginning of the address space. Some processors use designated low memory addresses during power on, exception processing, interrupt processing, and other hardware conditions. Some operating systems use designated low memory addresses for global system variables, global system structures, jump tables, and other system purposes. For more details, see low memory.






    Copyright © 2001, 2006 Milo

    Last Updated: September 11, 2006

    Created: March 15, 2001
