Memory is one of the most talked-about aspects of a modern PC's performance. A lot of noise has been made about "dual channel", "low latency", and other buzzwords, but many people don't quite understand what they mean or how they affect performance. This isn't an article meant to impress electrical engineers; still, memory is a pretty hard topic to bring down to a high school level. To do that I'm skipping some of the really technical jargon, and I'll use common analogies as much as possible.
"What IS memory, and what does it do?" This is a typical question from someone who has never opened their case, and who decided their current computer was good based on the 512MB or DDR400 rating in the ad that sold it to them. Even more people don't really know what it does; they just never ask. Well, here it is.
The memory most people talk about is the Random Access Memory, or RAM, that you plug into your mainboard. It acts as a high speed buffer for active programs being used by the CPU, an intermediary between the very slow hard disk drives and the full speed cache and registers inside the processor itself. Programs, and all the data they read and write, cannot be stored entirely in the varying amounts of onboard cache in a processor. This is due to cost and size limitations, and that is why storage systems are arranged in a hierarchy. Registers, cache, and RAM are also volatile: once the electricity that drives them is removed, so too is the data they had stored. Hard drives are a permanent storage solution; unfortunately, as they are mechanical devices, their access and transfer times are remarkably slower than those of their electrical counterparts. Now, when a CPU operates, it reads the instruction pointed to by the program counter, decodes it, performs an operation, then reads the next instruction. Many times, this breaks down simply to:
----->Retrieve data A
----->Retrieve data B
----->Add B to A
----->Store A to C
Notice how much of that dealt with reads and writes? It would be most efficient to have the instruction and data A, B, and C in the cache. However, that data has to come from somewhere, as nothing can be permanently stored in the cache, nor can it hold ALL the data required for the operation of a computer. Main memory is much larger, and hard drive space much larger still. So the CPU searches its closest area first, the cache. If there is a "hit", the data is immediately sent to the register where the CPU can perform the add. If a "miss" occurs, it has to move to the next largest space. That is where your RAM comes in. If your application is loaded in RAM, a read occurs, and the data is shipped to the cache, and from there into an internal register. If a "miss" is recorded again, the CPU has to search your hard drive, and even more time is lost before the instruction can be completed. This is a basic overview of how a CPU accesses data, and how your memory fits into that picture.
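That search order can be sketched in a few lines of Python. This is a toy model, not how real hardware works: the latency figures are made-up round numbers, and the "storage levels" are just dictionaries standing in for cache, RAM, and disk.

```python
# Toy model of the cache -> RAM -> hard drive search order described above.
# Latencies are illustrative round numbers in CPU cycles, not real figures.
CACHE_LATENCY = 3
RAM_LATENCY = 100
DISK_LATENCY = 1_000_000

def fetch(address, cache, ram, disk):
    """Return (value, cycles spent), searching the closest level first."""
    if address in cache:                      # cache "hit"
        return cache[address], CACHE_LATENCY
    if address in ram:                        # cache "miss", RAM "hit"
        cache[address] = ram[address]         # copy the data into the cache
        return ram[address], RAM_LATENCY
    ram[address] = disk[address]              # "miss" twice: out to the disk
    cache[address] = disk[address]
    return disk[address], DISK_LATENCY

cache, ram, disk = {}, {0x10: 7}, {0x10: 7, 0x20: 9}
print(fetch(0x20, cache, ram, disk))  # (9, 1000000): had to go to disk
print(fetch(0x20, cache, ram, disk))  # (9, 3): now a cache hit
```

Fetch the same address twice and the second access costs three cycles instead of a million, which is the whole point of the hierarchy.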
Another question is: "how does data get from memory to the CPU?" This is answered by one word: buses. A bus is a collection of data lines, each one capable of sending one bit (a 1 or a 0) in one direction at a time. Together, the lines send one "word", a combination of bits treated as an established whole. The speed of that interconnect is where numbers like 800MHz come into play; that is the rate at which data can be sent along the bus from the memory chips to the north bridge memory controller. Buses are one-way highways, and can only carry one "car", or word, at a time. They are found everywhere in your computer, the most quoted probably being the Front Side Bus, or FSB. This is the next path the data takes after leaving the north bridge; it brings data right into the CPU's cache, from which the data can take another bus into the processor's execution units.
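The "one bit per line, lines together forming a word" idea can be shown with a short sketch. The 8 line bus here is my own assumption to keep the output readable; the memory buses this article discusses are 64 bits wide.

```python
BUS_WIDTH = 8  # data lines on our imaginary bus; real memory buses have 64

def bits_on_bus(word):
    """The 1 or 0 each data line carries during one transfer, MSB first."""
    return [(word >> i) & 1 for i in reversed(range(BUS_WIDTH))]

# One transfer moves one whole word: every line carries its bit at once.
print(bits_on_bus(0b1011))  # [0, 0, 0, 0, 1, 0, 1, 1]
```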
Now, I should explain what DDR (double data rate) really means, and how it compares to SDR (single data rate). The name is pretty self-explanatory, but the mechanism is a little more complicated. Normally, a "clock" drives the rate at which information is transferred in an SDRAM (synchronous dynamic random access memory) module. That clock is merely a voltage signal that goes from 0 (ground) to the signal voltage (3.3V in SDR chips, 2.5V in DDR) at a set frequency. That frequency is the speed at which the module runs. On the rising edge, data can be sent in or out, and then the module waits for the next rising edge before repeating the procedure. In a DDR module, data is sent or received on both the rising AND the falling edges of the clock, hence the ability to send "double" the amount of data.
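The arithmetic behind the naming is simple enough to check: a DDR module's effective transfer rate is its clock frequency times two, which is why a 200MHz clock gets marketed as "DDR400".

```python
def effective_rate(clock_mhz, double_data_rate):
    """Millions of transfers per second for a given clock frequency."""
    transfers_per_cycle = 2 if double_data_rate else 1
    return clock_mhz * transfers_per_cycle

print(effective_rate(133, False))  # 133: PC133 SDR moves data once per cycle
print(effective_rate(200, True))   # 400: a 200MHz clock yields "DDR400"
```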
"How is memory addressed?" is another common question. This one is quite simple. Think of a matrix, a collection of rows and columns. Each cell containing a 1 or 0 is defined by the intersection of a specific row and column, within a certain "bank". The older i845 Brookdale chipsets only had 4 banks, and could only address 2GB of memory. The newer Springdale and Canterwood chipsets are capable of addressing 8 banks, with a maximum of 4GB of memory. What constitutes a bank is mostly defined by the memory used: most modules are "double sided", meaning they carry two banks. So in a Brookdale, you could only have one double sided module plus two single sided modules, one to four single sided modules, or two double sided ones. With an i865/i875 based chipset, there are many more possible combinations, especially where double sided modules are concerned.
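To make the matrix idea concrete, here is a sketch of how a flat address could be split into bank, row, and column fields. The bit widths below are hypothetical numbers of my own choosing; real chipsets use their own, more involved mappings.

```python
# Hypothetical field widths; actual chipsets document their own layouts.
COL_BITS, ROW_BITS, BANK_BITS = 10, 13, 2

def decode(address):
    """Split a flat address into (bank, row, column) fields."""
    column = address & ((1 << COL_BITS) - 1)
    row = (address >> COL_BITS) & ((1 << ROW_BITS) - 1)
    bank = (address >> (COL_BITS + ROW_BITS)) & ((1 << BANK_BITS) - 1)
    return bank, row, column

print(decode(5))              # (0, 0, 5): fifth cell in the first row
print(decode(1 << COL_BITS))  # (0, 1, 0): one full row further along
```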
Lastly, "what do all these buzzwords mean? Latency, bandwidth? I'm so confused!" Those are bigger topics. Let's get to them, shall we?
Dual Channel Memory
Most of the mainstream chipsets these days, short of the VIA KT/PX series and SiS 746/648, use one form or another of "dual channel" memory. This theoretically doubles the rate at which bits can be transferred from the memory to the memory controller in the north bridge. To explain this, think of a highway. When it is loaded with cars, they can all move at the same speed, and you can't fit any more cars through without increasing that speed. Unless you widen the highway: make it twice as wide, and now twice as many cars can travel on it. This is done by adding another memory controller to the north bridge, with an algorithm tying the two together. DDR SDRAM works with a 64 bit bus, so combining two gives you a 128 bit bus.
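The doubling is easy to verify on paper. Taking DDR400 as the example (these are theoretical peak figures), bandwidth is just bus width in bytes times the effective transfer rate:

```python
def peak_mb_per_s(width_bits, effective_mt_s=400):
    """Theoretical peak bandwidth: width in bytes x transfers per second."""
    return width_bits // 8 * effective_mt_s

print(peak_mb_per_s(64))   # 3200 MB/s: one DDR400 channel ("PC3200")
print(peak_mb_per_s(128))  # 6400 MB/s: two channels working together
```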
Now, for the platform dependent part. Think of a highway yet again, this time a 4 lane wide one, representing a dual channel DDR configuration. Now suppose that highway comes up to a bridge. If that bridge is also 4 lanes across, there is no choking off of the data; it can all ramp smoothly onto and over the bridge. This is what occurs in an Intel Pentium 4 configuration, where that bridge is the FSB. Because a P4's FSB is "quad pumped" for data (it's only "double pumped" for addresses, which are much shorter), it is capable of bringing in all the data from the 128 bit combined channels. "Quad pumped" means something very similar to the DDR improvement I mentioned earlier, only this time data is sent twice on each of the rising and falling edges. Pretend the signal voltage is 1.5V: on the way up, data is sent when the voltage reaches 0.7V, and again when it reaches 1.5V. On the down slope the same thing occurs, with data sent at 0.7V and at 0V. This explains why the P4 architecture is such a bandwidth hog. When using a single memory controller configuration, like the i845 chipset, or an i865/i875 with only one channel in use, the FSB can accept much more data than the memory controller can bring in from the RAM, causing inefficiency.
The situation is quite the opposite for a system based on the Athlon XP architecture. Again, think of our 4 lane highway, representing a dual channel DDR configuration. This time, though, our bridge is only 2 lanes wide. This is because the Athlon's FSB is only capable of DDR equivalent performance. So that second channel goes to waste, as you cannot force all the cars through the bridge at once. The only time it comes in useful is when latencies keep the RAM from delivering data as fast as the FSB can take it in... which leads nicely into the next section.
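The bridge analogy in the last two paragraphs works out numerically. Assuming a 200MHz base clock on both sides (DDR400 memory, an "800MHz" quad pumped P4 FSB, a "400MHz" double pumped Athlon XP FSB), the theoretical peaks come out as follows:

```python
def peak_mb_per_s(width_bits, base_mhz, transfers_per_cycle):
    """Theoretical peak bandwidth in MB/s for a pumped bus."""
    return width_bits // 8 * base_mhz * transfers_per_cycle

dual_channel_ddr400 = peak_mb_per_s(128, 200, 2)  # two 64 bit DDR channels
p4_fsb = peak_mb_per_s(64, 200, 4)                # quad pumped "800MHz" FSB
athlon_fsb = peak_mb_per_s(64, 200, 2)            # double pumped "400MHz" FSB

print(dual_channel_ddr400)  # 6400 MB/s of memory bandwidth...
print(p4_fsb)               # ...which the P4's FSB can absorb exactly,
print(athlon_fsb)           # but the Athlon's FSB can only take half of.
```

The 4 lane highway meets a 4 lane bridge on the P4 (6400 matches 6400), while on the Athlon XP half the memory bandwidth has nowhere to go.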