Trade-offs

While using a cache may improve memory latency, it may not always yield the expected improvement in the time taken to fetch data, because of the way caches are organized and traversed.

The tag contains the most significant bits of the address, which are checked against those of the current row (the row that has been retrieved by index) to see whether it is the one we need or another, irrelevant memory location that happened to have the same index bits as the one we want. The tag length in bits is: tag length = address length − index length − block offset length.
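As a rough sketch of that decomposition (the 32-bit address, 64-byte block, and 128-set geometry below are assumptions for illustration, not values fixed by the text):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical geometry: 32-bit addresses, 64-byte blocks, 128 sets.
       tag bits = address bits - index bits - offset bits = 32 - 7 - 6 = 19 */
    #define ADDR_BITS   32
    #define OFFSET_BITS 6                 /* log2(64-byte block) */
    #define INDEX_BITS  7                 /* log2(128 sets)      */
    #define TAG_BITS    (ADDR_BITS - INDEX_BITS - OFFSET_BITS)

    int main(void) {
        uint32_t addr   = 0x12345678u;
        uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
        uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);
        printf("tag=0x%x (%d bits), index=%u, offset=%u\n",
               (unsigned)tag, TAG_BITS, (unsigned)index, (unsigned)offset);
        return 0;
    }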

The valid bit indicates whether or not a cache block has been loaded with valid data. On power-up, the hardware sets all the valid bits in all the caches to "invalid". Some systems also set a valid bit to "invalid" at other times, such as when multi-master bus-snooping hardware in the cache of one processor hears an address broadcast from some other processor and realizes that certain data blocks in the local cache are now stale and should be marked invalid.

Having a dirty bit set indicates that the associated cache line has been changed since it was read from main memory ("dirty"), meaning that the processor has written data to that line and the new value has not yet propagated all the way to main memory.

Associativity

[Figure: an illustration of the different ways in which memory locations can be cached by particular cache locations.]

The replacement policy decides where in the cache a copy of a particular entry of main memory will go.

If the replacement policy is free to choose any entry in the cache to hold the copy, the cache is called fully associative.

At the other extreme, if each entry in main memory can go in just one place in the cache, the cache is direct mapped.
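A minimal sketch of a direct-mapped lookup, under an assumed geometry (64-byte blocks, 128 lines, 32-bit addresses; all names are illustrative):

    #include <stdint.h>

    /* Direct-mapped lookup: every address has exactly one candidate line. */
    typedef struct {
        int      valid;
        uint32_t tag;
        uint8_t  data[64];
    } cache_line;

    static cache_line cache[128];

    int lookup(uint32_t addr) {
        uint32_t index = (addr >> 6) & 127;  /* bits 6..12 select the line */
        uint32_t tag   = addr >> 13;         /* remaining high bits        */
        const cache_line *l = &cache[index];
        return l->valid && l->tag == tag;    /* 1 = hit, 0 = miss          */
    }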

Many caches implement a compromise in which each entry in main memory can go to any one of N places in the cache; these are described as N-way set associative. Choosing the right value of associativity involves a trade-off.

If there are ten places to which the replacement policy could have mapped a memory location, then to check whether that location is in the cache, ten cache entries must be searched. Checking more places takes more power and chip area, and potentially more time. On the other hand, caches with more associativity suffer fewer misses (see conflict misses, below), so the CPU wastes less time reading from the slow main memory.
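That search cost can be made concrete with a sketch of an N-way lookup; the 4-way geometry and names are assumptions, and real hardware compares all ways in parallel rather than in a loop:

    #include <stdint.h>

    /* N-way set-associative lookup: all N ways of the selected set are
       checked.  Assumed geometry: 4 ways, 64 sets, 64-byte blocks. */
    #define WAYS 4
    #define SETS 64

    typedef struct {
        int      valid;
        uint32_t tag;
    } way_entry;

    static way_entry cache[SETS][WAYS];

    int lookup(uint32_t addr) {
        uint32_t set = (addr >> 6) & (SETS - 1);  /* 6 index bits            */
        uint32_t tag = addr >> 12;                /* above offset + index    */
        for (int w = 0; w < WAYS; w++)            /* more ways, more compares */
            if (cache[set][w].valid && cache[set][w].tag == tag)
                return w;                         /* hit: matching way       */
        return -1;                                /* miss                    */
    }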

The general guideline is that doubling the associativity, from direct mapped to two-way, or from two-way to four-way, has about the same effect on raising the hit rate as doubling the cache size. However, increasing associativity beyond four does not improve the hit rate as much,[12] and is generally done for other reasons (see virtual aliasing, below).

Some CPUs can dynamically reduce the associativity of their caches in low-power states, which acts as a power-saving measure.

In this terminology, a direct-mapped cache can also be called a "one-way set associative" cache.

This means that if two locations map to the same entry, they may continually knock each other out. Although simpler, a direct-mapped cache needs to be much larger than an associative one to give comparable performance, and it is more unpredictable.

Two-way set associative cache

If each location in main memory can be cached in either of two locations in the cache, one logical question is: which of the two? Because part of the main memory address is already implied by the cache index, the stored tags have fewer bits, require fewer transistors, take less space on the processor circuit board or on the microprocessor chip, and can be read and compared faster. Also, LRU replacement is especially simple, since only one bit needs to be stored for each pair.
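A sketch of that one-bit LRU scheme for a single two-way set (the structure and names are illustrative):

    #include <stdint.h>

    /* One-bit LRU for a two-way set: the bit names the least recently used
       way, so it is both the victim on a miss and what flips on each access. */
    typedef struct {
        int      valid[2];
        uint32_t tag[2];
        int      lru;               /* 0 or 1: the least recently used way */
    } two_way_set;

    /* After a hit in `way`, the other way becomes least recently used. */
    void touch(two_way_set *s, int way) {
        s->lru = 1 - way;
    }

    /* On a miss, fill the LRU way; it then becomes most recently used. */
    int fill(two_way_set *s, uint32_t tag) {
        int victim = s->lru;
        s->valid[victim] = 1;
        s->tag[victim]   = tag;
        s->lru = 1 - victim;
        return victim;
    }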

Speculative execution

One of the advantages of a direct-mapped cache is that it allows simple and fast speculation. Once the address has been computed, the one cache index which might have a copy of that location in memory is known.

That cache entry can be read, and the processor can continue to work with that data before it finishes checking that the tag actually matches the requested address. The idea of having the processor use the cached data before the tag match completes can be applied to associative caches as well.

A subset of the tag, called a hint, can be used to pick just one of the possible cache entries mapping to the requested address.

The entry selected by the hint can then be used in parallel with checking the full tag. The hint technique works best when used in the context of address translation, as explained below.
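One way to picture the hint mechanism is the sketch below, which assumes a hypothetical 2-bit hint taken from the low tag bits; real designs vary in how the hint is chosen and stored:

    #include <stdint.h>

    /* Way-hint sketch: a few tag bits stored per way act as the hint; the
       first way whose hint matches is used speculatively while the full
       tag comparison completes. */
    #define WAYS 4

    typedef struct {
        int      valid;
        uint32_t tag;
        uint8_t  hint;                 /* low 2 bits of the tag */
    } way_entry;

    /* Pick one candidate way from the hint, or -1 if none matches. */
    int predict_way(const way_entry set[WAYS], uint32_t tag) {
        uint8_t hint = tag & 0x3u;
        for (int w = 0; w < WAYS; w++)
            if (set[w].valid && set[w].hint == hint)
                return w;              /* speculate on this way's data */
        return -1;
    }

    /* The full comparison later confirms or squashes the speculation. */
    int confirm(const way_entry set[WAYS], int w, uint32_t tag) {
        return w >= 0 && set[w].tag == tag;
    }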

Two-way skewed associative cache

Other schemes have been suggested, such as the skewed cache,[14] where the index for way 0 is direct, as above, but the index for way 1 is formed with a hash function.

A good hash function has the property that addresses which conflict with the direct mapping tend not to conflict when mapped with the hash function, and so it is less likely that a program will suffer from an unexpectedly large number of conflict misses due to a pathological access pattern.
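A sketch of skewed indexing for two ways; the XOR-fold hash below is only illustrative, not the function proposed in the cited work:

    #include <stdint.h>

    /* Two-way skewed indexing: way 0 uses the plain index bits, way 1
       hashes the block address, so blocks that collide in way 0 usually
       land in different sets of way 1. */
    #define SETS 64

    uint32_t index_way0(uint32_t block_addr) {
        return block_addr & (SETS - 1);                /* direct index       */
    }

    uint32_t index_way1(uint32_t block_addr) {
        uint32_t h = block_addr ^ (block_addr >> 6);   /* mix in higher bits */
        return h & (SETS - 1);
    }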

The downside is extra latency from computing the hash function. Nevertheless, skewed-associative caches have major advantages over conventional set-associative ones.

Memory hierarchy is the hierarchy of memory and storage devices found in a computer.

Often visualized as a triangle, the bottom of the triangle represents larger, cheaper and slower storage devices, while the top of the triangle represents smaller, more expensive and faster storage devices. A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory. A cache is a smaller, faster memory, closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have different independent caches, including instruction and data caches.

Cache hierarchy is a form and part of memory hierarchy, and can be considered a form of tiered storage. There are two policies which define the way in which a modified cache block will be updated in main memory: write-through and write-back.
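The difference shows up in how a store is handled; a minimal sketch, where memory_write and the line layout are placeholders rather than a real API:

    #include <stdint.h>

    typedef struct {
        int      valid, dirty;
        uint32_t tag;
        uint8_t  data[64];
    } cache_line;

    void memory_write(uint32_t addr, uint8_t byte);  /* assumed backend */

    /* Write-through: main memory is updated on every store. */
    void store_write_through(cache_line *l, uint32_t addr, uint8_t byte) {
        l->data[addr & 63] = byte;
        memory_write(addr, byte);
    }

    /* Write-back: only the dirty bit is set; memory is updated when the
       line is eventually evicted. */
    void store_write_back(cache_line *l, uint32_t addr, uint8_t byte) {
        l->data[addr & 63] = byte;
        l->dirty = 1;
    }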

If you understand how the system moves data up and down the memory hierarchy, then you can write your application programs so that their data items are stored higher in the hierarchy, where the CPU can access them more quickly.

This idea centers on a fundamental property of computer programs known as locality.
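Locality is easy to demonstrate: the two functions below compute the same sum over the same matrix, but the first traverses it in row-major order (matching the array's layout in memory) while the second does not; the array size and names are illustrative:

    /* The row-major loop touches consecutive addresses, so each fetched
       cache block is fully used; the column-major loop strides N doubles
       per step and misses far more often. */
    #define N 1024

    static double a[N][N];

    double sum_row_major(void) {           /* good spatial locality */
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    double sum_col_major(void) {           /* poor spatial locality */
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }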
