fsb speed - why does it matter?



 
 
  #41  
Old November 2nd 04, 10:24 PM
Alexander Ferrari

Hi peeps..

I have a P4 Prescott 560 3.4GHz, Socket 775, with an Asus P5AD2 Premium
mainboard. I can unlock my CPU's multiplier in the BIOS from 18x down to
14x, so I can run FSB 1066 with a CPU speed around 3.7GHz. It ROCKS.
With FSB 800 I get a CPU speed of about 4.2GHz.

greez


  #42  
Old November 3rd 04, 03:57 AM
David Maynard

James Hanley wrote:
David Maynard wrote in message ...

James Hanley wrote:


snip

David Maynard wrote in message


snip

Whilst increasing the multiplier increases only CPU cycles. (the
increase in the multiplier only serves the CPU speed. But increasing
the FSB serves not just the CPU, but the FSB, which is an integral
part of the system)


Not if you change the multiplier to keep the CPU speed the same. When
you're looking to what 'effect' something has you keep everything else the
same so the 'difference', if any, is the result OF that one thing.



by increasing one thing, like the multiplier, and nothing else, you
are just seeing the effect of increasing the multiplier. I wouldn't
call that 'keeping everything else the same', I would call it 'me
changing 1 thing and nothing else'; the result is that many things
can change.


You're being obtuse. Changing, as you call it, the "1 thing" *is* keeping
everything else 'the same' and observing the effect of changing that "1
thing."

How you come up with "many things changing" I don't know but, regardless,
even if many things 'changed' that *is* the 'effect' of altering the "1
thing" and observing what changed with everything else remaining the same.

In fact, in the case you mention, where you say "if you
change the multiplier to keep the CPU speed the same", you are
actually changing the multiplier (increasing it) and the fsb (lowering
it), so you are changing 2 things; that is a bad test.


No, it is the appropriate test if the purpose is to see the effect of the
FSB itself, and nothing else, on performance. And that was the specific
point I was trying to convey: that increasing the FSB, alone, improves
processor performance even at the same processor speed.

You're
suggesting changing 2 things (fsb and multiplier), by talking about
changing the multiplier to keep cpu cycles the same. Yet you then
write of the importance of changing 1 thing only, to see what effect
it has. I must be misunderstanding your paragraph, but it's not
important, 'cos I didn't mean changing the multiplier to keep cpu
cycles the same. I meant changing just the multiplier, thus letting
cpu cycles rise.


What you misunderstood is the difference between "keeping everything else
the same" vs what one has to do in order to keep "everything else the same."

I.E. The effects we want to observe, independent of each other, are the
results of CPU clock rate and FSB clock rate; not how one 'sets' them.

I think we're in agreement here, as you say
"a processor at speed X will perform better if it also has a faster
FSB (within reason)."
The value of the multiplier only serves to determine the CPU speed.
Unlike the FSB. (actually, for a given cpu speed, a higher multiplier
is worse because it implies a lower fsb)


'For a given speed' is the point. Yes, a higher FSB, with a lower
multiplier for the 'same CPU speed', is better. Which was the point of the
topic "FSB speed - why does it matter?"

It 'matters' on its own merit, not simply because it also increases the
CPU speed, and 'CPU speed' is not always in the equation. As in, should I
buy an XP3200+ 333 FSB or an XP3200+ 400 FSB? Or, my mobile maxes out at
2400 MHz but, since I can change the multiplier, which FSB would be best:
266, 333, 400?


absolutely.
(answer to your rhetorical Q is 400)
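To put rough numbers on that last question, here's a minimal sketch
(Python; the multiplier picks are illustrative, not from the thread)
showing each FSB paired with a multiplier that lands near the same
2400 MHz core clock, leaving the FSB as the only real difference:

# rated FSBs 266/333/400 are DDR-effective; the multiplier applies to
# the actual 133/166/200 MHz clocks
for rated, clock, multi in [(266, 133, 18.0), (333, 166, 14.5), (400, 200, 12.0)]:
    print(f"FSB {rated}: {clock} MHz x {multi} = {clock * multi:.0f} MHz core")
# 2394 / 2407 / 2400 MHz -- near-identical core clocks, so the 400 FSB
# option wins purely on bus (and synchronous memory) bandwidth.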


  #43  
Old November 3rd 04, 04:57 AM
P2B



James Hanley wrote:
it seems to me that nobody needs a high fsb, since they could just
push the multiplier really high.

I can see the greatness of ddr since the same speed processor can
read/write twice as much per cycle. (i assume that the cpu has to be
ddr to receive or write double)


Here's a real example of why FSB matters ;-)

I have a pair of Tualatin-S engineering samples, (no multiplier lock),
and a modified dual processor motherboard (Asus P2B-DS), so I can
benchmark the processors at various FSB and multiplier settings which
result in (close to) the same CPU clock speed. PCmark2002 results:

CPU MHz   FSB   Multi   CPU Test   RAM Test
950       100   9.5     3044       1711
931       133   7       3019       1948
910       140   6.5     2960       2010

Clearly RAM performance improves as FSB is increased, while CPU
performance remains proportional to CPU clock speed.
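To read the numbers another way, here's a quick Python pass over the
table (scores copied from above): the CPU score per core MHz barely
moves, while the RAM score climbs with the FSB.

rows = [(100, 9.5, 3044, 1711), (133, 7.0, 3019, 1948), (140, 6.5, 2960, 2010)]
for fsb, multi, cpu_score, ram_score in rows:
    mhz = fsb * multi  # core clock = FSB x multiplier
    print(f"{fsb} x {multi} = {mhz:.0f} MHz, CPU/MHz {cpu_score / mhz:.2f}, RAM {ram_score}")
# CPU/MHz comes out ~3.20, 3.24, 3.25 -- essentially flat -- while the
# RAM score goes 1711 -> 1948 -> 2010 as the FSB rises.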

More detailed results here:

http://tipperlinne.com/benchmark.htm

P2B
  #44  
Old November 4th 04, 03:16 AM
James Hanley

"Richard Hopkins" wrote in message ...
"James Hanley" wrote in message
In fact, in the case you mention, where you say "if you change the
multiplier to keep the CPU speed the same", you are
actually changing the multiplier (increasing it) and the fsb (lowering
it), so you are changing 2 things; that is a bad test.


No you're not. You're evaluating the performance difference inherent in
changing *one* thing (the FSB), because the CPU clock speed stays the same.
You are missing the point of what David is saying - you need to change *two*
parameters to measure the effect of *one* change.

You're suggesting changing 2 things (fsb and multiplier), by talking about
changing the multiplier to keep cpu cycles the same. Yet you then
write of the importance of changing 1 thing only,


You're missing the point of changing the CPU multiplier. All it does is
govern the CPU core frequency. If, for example, you ran a CPU at 100MHz x
20, or 133.33MHz x 15, the result would be 2000MHz, and the integer and
floating point performance of the CPU would be *exactly* the same, allowing
for any minor variations in the PLL. Thus, any changes you saw in benchmark
performance would be caused by the FSB change.

I must be misunderstanding your paragraph,


You are.


Thanks, I understand it now; I see what David meant when he said my
test was a bad one. Though my example wasn't a test to prove anything,
it was just an example where an increase in FSB improves system
performance for the 2 reasons. It definitely didn't prove anything.
If it were a test it would have been a bad one. I can now see where
David was coming from. I can see the benefits of David's test over my
example, since he proves that an FSB increase improves system
performance purely because the system has a faster FSB.


I meant changing just the multiplier, thus letting cpu cycles rise.


This is where your misunderstanding arises. Let's remind you that your
initial proposal was that FSB isn't, in itself, important. If you tested
this theory by raising the FSB and leaving the multiplier the same, you
would see an improvement in CPU integer and floating point performance, and,
if you didn't change the memory multiplier, you'd see an improvement in
memory performance too. By changing things like the CPU core and memory bus
multipliers, you can evaluate the effect of FSB changes without the results
being clouded by other issues.


right, thanks

which FSB would be best: 266, 333, 400?

absolutely.
(answer to your rhetorical Q is 400)


James, you came into this thread thinking that FSB doesn't matter. The
answer above indicates that you have revised your opinion. Is this correct?


yes.

note- I stated earlier in the thread that I had revised my opinion -
that I was wrong before.


thanks.
  #45  
Old November 4th 04, 03:58 AM
James Hanley

"Richard Hopkins" wrote in message ...
"James Hanley" wrote in message ...
memory frequency can be increased to a multiple of the FSB even
before DDR is 'applied'.


Of course it can. Trouble is that you can do what the hell you like to the
FSB:memory multiplier, but after a certain point the connection between the
memory and the processor can't keep up, so your increases in memory speed
are wasted. What is the point of having, say, 8GB/sec memory bandwidth if
the link between the memory controller and the CPU only runs at 4GB/sec?


I agree, but
what seems strange is an increase in bandwidth via dual inline
memory doubling the bandwidth of the memory bus. Well, as far as I
understand it, the FSB and memory bus are connected, and often
considered as one bus connecting cpu to memory. The FSB seems to be
the end towards the processor, and the memory bus seems to be the end
towards memory. (i'm aware that the athlon 64 has no fsb / gives a
different name to the bus[es] between cpu and memory) So dual inline
memory presumably doubles the bandwidth of the memory bus and fsb. So
it must be a special FSB/memory bus that doubles in bandwidth
when memory reads or writes (but not for the cpu reading and
writing!).
Here's the weird bit that seems very strange though. Dual inline
memory is not considered to increase the effective speed. Yet in each
cycle, the memory can r/w twice as much. So that means that the cpu
and memory cannot be efficiently synchronized when the cpu's access to
the FSB and the memory's access to the memory bus are at the same
effective speed. They are only efficiently synchronized when the cpu
is accessing the FSB at twice the speed that the memory is accessing
the memory bus / when the memory's effective speed is half that at
which the cpu accesses the fsb. To put it another way, only with the
dual inline memory accessing the memory bus at half the effective
speed that the cpu accesses the fsb would the bandwidth be equal, and
memory cycles go unwasted.

I have an option in my BIOS to set my DDR-SDRAM frequency,
I can set my FSB to 100 and my SDRAM to 266 (effective).


Virtually all motherboards do this nowadays. However, you are better off
keeping a synchronous memory bus and raising the FSB than you are clocking
the memory bus up and leaving the FSB slower. In both AMD (HyperTransport)
and Intel (NetBurst Bus) cases, the FSB directly controls the speed of the
internal processor to memory bus, and only by keeping the bandwidth of this
bus at least equal to the memory bandwidth can you take full advantage of
the memory speed.

This is why both AMD and Intel have been raising the effective FSB of their
motherboards and processors over the last few years. Look at the way Intel
went from a 100MHz FSB (effectively 400MHz QDR) to (soon) 266MHz
(effectively 1066MHz). The reason they've done it is to allow sufficient
headroom for ever faster memory to interface optimally with the processor.
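The FSB-100/DDR266 option quoted above makes a handy worked case. A
simplified model (Python; the double-pumped 64-bit processor bus is an
Athlon-style assumption for illustration, and latency is ignored):

def bw_mb(clock_mhz, transfers_per_clock, width_bytes):
    # peak bandwidth in MB/s: clock x transfers per clock x bus width
    return clock_mhz * transfers_per_clock * width_bytes

fsb_clock = 100                     # the BIOS FSB setting
cpu_bus   = bw_mb(fsb_clock, 2, 8)  # double-pumped 64-bit bus: 1600 MB/s
mem_async = bw_mb(133, 2, 8)        # DDR266 on an async ratio: ~2133 MB/s
print(min(cpu_bus, mem_async))      # 1600 -- the extra ~500 MB/s is stranded

On this model the async memory setting buys nothing once the processor
bus is the narrower pipe, which is the point about keeping the buses
synchronous and raising the FSB instead.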

So both RAM and CPU can operate at a frequency that is a multiple of
the FSB.
So memory frequency can be increased without increasing the FSB.


Of course it can. The processor bus speed, by contrast, can only be
increased by increasing the FSB.

(i'm assuming bandwidth=throughput, but I cannot check at this moment,
since I'm leaving in a minute, so I have to click Send now!!)


For the purposes of this conversation, bandwidth does equal throughput.
--

thanks
  #46  
Old November 4th 04, 05:44 AM
Richard Hopkins

"James Hanley" wrote in message...
I agree, but
what seems strange is an increase in bandwidth via dual inline
memory doubling the bandwidth of the memory bus.


What's strange about that?

Well, as far as I understand it, the FSB and Memory bus are connected,


They're connected, yes. Either synchronously or asynchronously, via a series
of multipliers/dividers. They don't *have* to be connected though. In theory
it's possible to run the two buses on separate PLLs; it's just more
practical to run them, as well as other timing-critical items, from a single
timebase.

and often considered as one bus connecting cpu to memory.


To be strictly accurate, they're two, closely interlinked buses, one between
CPU and memory controller, and the other between memory controller and
memory. Even though, in practice, these two buses are normally run off the
same clock generator, don't assume that they're the same thing, although
admittedly the dividing lines between the two are becoming more blurred all
the time. The AMD Socket 939 chips have their memory controller onboard the
CPU for example - shortening the physical connections, and thus boosting
performance.

Also, the FSB seems to be the end towards the processor, and
the memory bus seems to be the end towards memory.


Your terminology's not quite right, but you do generally have the right
idea.

(i'm aware that the athlon 64 has no fsb / gives a different
name to the bus[es] between cpu and memory)


The theory's the same, even though, as you say, the nomenclature is
different and the geography, as mentioned above, is quite confusing.

So dual inline memory presumably doubles the bandwidth of the
memory bus and fsb.


No. A dual channel memory controller doubles the *width* of the memory bus,
effectively reading from/writing to two banks of memory in parallel. It
doesn't, in itself, have any effect on the speed or bandwidth of the front
side bus.

So it must be a special FSB/memory bus that doubles in
bandwidth when memory reads or writes


You already know about the effect double data rate and quad data rate buses
have on bandwidth, so the rest should be pretty simple. You have to grasp
the difference between speed (i.e. clock cycles per second), width (number
of bytes written/read per cycle) and bandwidth (clocks per second multiplied
by width multiplied by the number of data transfers per clock).
An eight bit wide single data rate interface running at 1Hz has a bandwidth
of eight bits per second. A sixteen bit wide interface running at 1Hz has a
bandwidth of 16bps, and an eight bit wide interface running at 1Hz with two
accesses per clock (i.e. DDR) also has a bandwidth of 16bps. All three
parameters - speed, width and data rate - contribute to the overall bandwidth
of a digital connection.
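Those three cases drop straight into the formula; a minimal Python
rendering:

def bandwidth_bps(clock_hz, width_bits, transfers_per_clock):
    return clock_hz * width_bits * transfers_per_clock

print(bandwidth_bps(1, 8, 1))   # 8 bps:  eight bit SDR at 1Hz
print(bandwidth_bps(1, 16, 1))  # 16 bps: sixteen bit SDR at 1Hz
print(bandwidth_bps(1, 8, 2))   # 16 bps: eight bit DDR at 1Hz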

Here's the weird bit that seems very strange though. Dual
inline memory is not considered to increase the effective
speed.


It doesn't increase the speed in cycles per second, but doubles the
effective width of the memory interface. Remember that bandwidth=width x
speed, so if the speed stays the same but the width doubles, the bandwidth
also doubles.

Yet in each cycle, the memory can r/w twice as much. So that
means that the cpu and memory cannot be efficiently synchronized


Yes they can; it all depends on the geometry of the processor bus.

when the cpu's access to the FSB and the memory's access to the
memory bus are at the same effective speed. They are only efficiently
synchronized when the cpu is accessing the FSB at twice the speed
that the memory is accessing the memory bus


No, that's wrong. You're ass-u-ming that the geometry of the processor and
memory buses are the same. In most cases they're not.

To put it another way, only with the dual inline memory accessing
the memory bus at half the effective speed that the cpu accesses the
fsb would the bandwidth be equal, and memory cycles go unwasted.


Depending on what platform you are talking about, that's wrong. Take the
Pentium 4 as an example:

The bus between the memory controller and the processor is 8 bytes wide and
runs on a quad data rate cycle. At 200MHz FSB, that gives you an effective
800MHz bus, which, multiplied by 8 bytes, gives you an overall bandwidth of
6.4GB/sec.

The 800MHz Pentium 4's are designed to use PC3200 memory in a dual channel
configuration.

Each DDR-SDRAM interface is 64 bits wide, so in a dual channel configuration
you effectively have a 128 bit wide memory bus. PC3200 runs at a 200MHz base
clock, with a double data rate cycle. Multiply all this together, and you
get - surprise surprise, 6.4GB/sec. As you can see, in this instance the
bandwidth of the processor bus, and the bandwidth of the memory bus, is
equal.
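The two calculations, side by side (Python):

# processor bus vs dual-channel memory bus on an 800MHz-FSB P4, in MB/s
fsb_bw = 200 * 4 * 8      # 200 MHz clock x quad-pumped x 8 bytes     = 6400
mem_bw = 200 * 2 * 8 * 2  # 200 MHz clock x DDR x 8 bytes x 2 channels = 6400
print(fsb_bw / 1000, mem_bw / 1000)  # 6.4 6.4 GB/sec -- matched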

If you take the evolution of the Pentium 4 as an example, you can also see
why FSB *is* important when it comes to raising the performance of a fixed
architecture. The original P4's had a 400MHz QDR FSB, giving the processor
bus a bandwidth of 3.2GB/sec. This was designed to mate with PC800 Rambus
memory in a dual channel configuration, which also ran at 3.2GB/sec.

When the first single channel DDR P4 chipsets were introduced, memory
bandwidth actually lagged behind the processor for a while - when 533MHz FSB
P4's (4.2GB/sec processor bus) were being run with a single channel of
PC2700 (2.7GB/sec).

Continuing the P4 example, you can see why your original "FSB doesn't
matter" comment caused the reaction it did.

If, for example, you managed to build a system with, say, one of the
original Pentium 4 2.4GHz 400FSB parts and dual channel PC3200 memory
running on a 2:1 multiplier, you'd have 6.4GB/sec of memory bandwidth on
tap, but a processor bus that only ran at 3.2GB/sec.

If, on the other hand, you substituted a Pentium 4 2.4C, 800MHz FSB chip
(with Hyperthreading disabled) and synchronised the processor and memory
buses, the performance of the system would leap, even though the processor
itself was running at the same speed.

You can also now see why raising the FSB is the most effective way to
increase overall system performance. If you raise the FSB by 33%, you not
only raise the CPU core clock 33%, you raise the bandwidth of the processor
bus by 33% too. If you keep your memory synchronous, you also raise the
bandwidth of the memory bus by 33%. The overall effect will be, there or
thereabouts, a 33% improvement in performance.

If, OTOH, you keep the FSB the same, but raise the CPU and memory
multipliers by 33%, you get the faster memory and CPU clocks, but you don't
raise the processor bus, so the resulting bottleneck - assuming that the
bandwidths matched in the first place - will curtail your performance gain.
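The bottleneck argument reduces to a min() over the two bandwidths; a
crude sketch (Python, GB/sec figures from the examples above, ignoring
latency and protocol overhead):

def usable_gbs(cpu_bus_gbs, mem_bus_gbs):
    # sustained CPU<->memory traffic is capped by the slower bus
    return min(cpu_bus_gbs, mem_bus_gbs)

print(usable_gbs(3.2, 6.4))  # 400FSB P4 + dual channel PC3200 -> 3.2
print(usable_gbs(6.4, 6.4))  # 800FSB P4 + dual channel PC3200 -> 6.4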

Make sense now?
--


Richard Hopkins
Cardiff, Wales, United Kingdom
(replace .nospam with .com in reply address)

The UK's leading technology reseller www.dabs.com
Get the most out of your digital photos www.dabsxpose.com



  #48  
Old November 5th 04, 02:05 AM
James Hanley

"Richard Hopkins" wrote in message ...
"James Hanley" wrote in message...


snip my errant thinking which you corrected, where I thought that the
fsb and memory bus were to be considered parts of the same bus, and
where I was puzzled as to whether the fsb effectively doubled in width
with dual inline memory!!
I understand your response, about the fsb, memory bus, cpu and memory
controller.



Here's the weird bit that seems very strange though. Dual
inline memory is not considered to increase the effective
speed.


It doesn't increase the speed in cycles per second, but doubles the
effective width of the memory interface. Remember that bandwidth=width x
speed, so if the speed stays the same but the width doubles, the bandwidth
also doubles.


Well, of course neither dual inline nor DDR increases the actual speed.
However, DDR is considered to increase the effective speed, even
though it does not increase the speed in cycles per second. It just
writes twice as much per cycle, which has the same effect as working
at twice the frequency. Similarly, I would have expected dual inline
to increase the 'effective speed' even though, like DDR, it doesn't
increase the actual speed in cycles per second. It writes twice as
much - just not on the same wires/conductors, but on a new/unused set.

Yet in each cycle, the memory can r/w twice as much. So that
means that the cpu and memory cannot be efficiently synchronized


Yes they can, it all depends on the geometry of the processor bus


According to the table in the pcguide.com article "Memory Banks and
Package Bit Width", Pentiums have a 64-bit data bus. In Scott
Mueller's article, formerly on upgradingandrepairingpcs.com, now on
quepublishing.com, titled "Understanding PC2700 (DDR333) and PC3200
(DDR400) Memory", there's a table that says that DDR RAM from PC66 to
PC4300 has a 64-bit bus.
So a Pentium with DDR RAM DIMMs has a 64-bit data bus and a 64-bit
memory bus. Suppose the effective speeds are the same - for example,
the actual FSB clock is the same, the FSB is dual pumped and so is the
memory bus, so the effective speeds are the same. Then if a pair of
memory modules is used, the memory bus would 'effectively' be
128-bit, hence doubling the bandwidth, but would be said to be running
at the same effective speed as it was when it was 64-bit. If the
memory bus can throw around twice as much data as the FSB, then it is
not an efficient setup. It is only efficient if the DDR RAM is half
the speed of the FSB. Since in one memory bus clock cycle when
reading, there are 2 servings for the FSB, it would require 2 FSB
cycles to pick up the data. Similarly when writing.


when the cpu's access to the FSB and the memory's access to the
memory bus are at the same effective speed. They are only efficiently
synchronized when the cpu is accessing the FSB at twice the speed
that the memory is accessing the memory bus


No, that's wrong. You're ass-u-ming that the geometry of the processor and
memory buses are the same. In most cases they're not.


yeah, I was assuming that, but based on the tables I saw at
pcguide.com and Scott Mueller's article. Collectively they said that
Pentiums have 64-bit data buses and DDR RAM (at least PC66-PC4300) has
a 64-bit memory bus.
To me, that means that the geometry is the same.



To put it another way, only with the dual inline memory accessing
the memory bus at half the effective speed that the cpu accesses the
fsb would the bandwidth be equal, and memory cycles go unwasted.


Depending on what platform you are talking about, that's wrong. Take the
Pentium 4 as an example:

The bus between the memory controller and the processor is 8 bytes wide and
runs on a quad data rate cycle. At 200MHz FSB, that gives you an effective
800MHz bus, which, multiplied by 8 bytes, gives you an overall bandwidth of
6.4GB/sec.

The 800MHz Pentium 4's are designed to use PC3200 memory in a dual channel
configuration.

Each DDR-SDRAM interface is 64 bits wide, so in a dual channel configuration
you effectively have a 128 bit wide memory bus. PC3200 runs at a 200MHz base
clock, with a double data rate cycle. Multiply all this together, and you
get - surprise surprise, 6.4GB/sec. As you can see, in this instance the
bandwidth of the processor bus, and the bandwidth of the memory bus, is
equal.


But isn't that a beautiful real-world illustration of what I mean? In
that example, the effective speed of the FSB is 800=4*200, and the
effective speed of the memory bus is 400=2*200. It's efficient,
because the bandwidths are equal. Although that required the actual
clocks to be the same, it means that the memory bus's effective speed
is half that of the FSB.

If you take the evolution of the Pentium 4 as an example, you can also see
why FSB *is* important when it comes to raising the performance of a fixed
architecture. The original P4's had a 400MHz QDR FSB, giving the processor
bus a bandwidth of 3.2GB/sec. This was designed to mate with PC800 Rambus
memory in a dual channel configuration, which also ran at 3.2GB/sec.

When the first single channel DDR P4 chipsets were introduced, memory
bandwidth actually lagged behind the processor for a while - when 533MHz FSB
P4's (4.2GB/sec processor bus) were being run with a single channel of
PC2700 (2.7GB/sec).

Continuing the P4 example, you can see why your original "FSB doesn't
matter" comment caused the reaction it did.


yeah

If, for example, you managed to build a system with, say, one of the
original Pentium 4 2.4GHz 400FSB parts and dual channel PC3200 memory
running on a 2:1 multiplier, you'd have 6.4GB/sec of memory bandwidth on
tap, but a processor bus that only ran at 3.2GB/sec.

If, on the other hand, you substituted a Pentium 4 2.4C, 800MHz FSB chip
(with Hyperthreading disabled) and synchronised the processor and memory
buses, the performance of the system would leap, even though the processor
itself was running at the same speed.

You can also now see why raising the FSB is the most effective way to
increase overall system performance. If you raise the FSB by 33%, you not
only raise the CPU core clock 33%, you raise the bandwidth of the processor
bus by 33% too. If you keep your memory synchronous, you also raise the
bandwidth of the memory bus by 33%. The overall effect will be, there or
thereabouts, a 33% improvement in performance.


Lemme see!
P4 800MHz FSB (200*4). DDR-SDRAM PC3200 400MHz (200*2) - a 2:1
multiplier.
That system, as you explained, has the bandwidths equal, because it's
dual inline.

FSB actual speed upped by 33% puts it from 200MHz to 266MHz.
Effective FSB speed (quad pumped) = 1066MHz (266*4)
FSB bandwidth = 1066MHz * 8 = 8.5GB/s

DDR-SDRAM actual speed = 200MHz, upped by 33% = 266MHz
Effective DDR-SDRAM speed = 266*2 = 533MHz
Memory bus bandwidth before dual inline is applied = 4.3GB/s
Memory bus bandwidth after dual inline is applied = 8.5GB/s

I understand you there then. Yeah,
I can see. 200*4*8 = 200*2*8*2
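Here are the same sums as a quick Python check (8-byte bus widths as
in your example):

print(266 * 4 * 8 / 1000)      # 8.512 -- ~8.5 GB/s quad-pumped FSB
print(266 * 2 * 8 / 1000)      # 4.256 -- ~4.3 GB/s single channel DDR533
print(266 * 2 * 8 * 2 / 1000)  # 8.512 -- dual channel matches the FSB again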

When you say "If you keep your memory synchronous, you also raise the
bandwidth of the memory bus by 33%", you mean keeping the actual
memory clock equal to the FSB clock, right? The word
synchronous is a funny one to use, because SDRAM is always
synchronous; that's what the S stands for. If the actual speeds were
different, it would still be synchronous, and if the multiplier were
non-integer, it would be pseudo-synchronous / pseudo-sync, which is
still synchronized, but I've read that VIA may call it async for some
reason. There was a long thread on the subject of pseudo-sync and
related issues such as async mode in comp.sys.ibm.pc.hardware.chips
in September 2004.


If, OTOH, you keep the FSB the same, but raise the CPU and memory
multipliers by 33%, you get the faster memory and CPU clocks, but you don't
raise the processor bus, so the resulting bottleneck - assuming that the
bandwidths matched in the first place - will curtail your performance gain.


And in your earlier example -
P4 800MHz FSB (200*4), DDR-SDRAM PC3200 400MHz (200*2), a 2:1
multiplier -
there's no bottleneck, because the bandwidths of the fsb and memory bus
are equal. (Effective speeds being different and actual speeds being
the same are irrelevant.)


Make sense now?



yeah, and those calculations i did with the 33% increase all agree
with what you're saying.

thanks
  #49  
Old November 5th 04, 05:43 AM
Michael Brown

James Hanley wrote:
Richard Hopkins wrote:
James Hanley wrote:

[...]
Here's the weird bit that seems very strange though. Dual
inline memory is not considered to increase the effective
speed.


It doesn't increase the speed in cycles per second, but doubles the
effective width of the memory interface. Remember that
bandwidth=width x speed, so if the speed stays the same but the
width doubles, the bandwidth also doubles.


Well, of course neither dual inline nor DDR increases the actual speed.
However, DDR is considered to increase the effective speed, even
though it does not increase the speed in cycles per second. It just
writes twice as much per cycle, which has the same effect as working
at twice the frequency.


Not exactly. A DDR bus is the same as an SDR bus at the same speed but
double the width (not the same width and double the speed). There's a subtle
difference, mainly in latency-bound situations.
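A rough way to see it (Python; the 3-cycle access latency and 64-byte
line size are illustrative assumptions, not from the thread). All three
configurations below have the same 3200 MB/s peak bandwidth, but the
fixed latency is paid in bus clocks, so only a genuinely faster clock
shortens it:

def fetch_ns(clock_mhz, beats_per_clock, width_bytes, latency_cycles, line_bytes=64):
    # fixed latency (in bus clocks) plus the data beats needed to move
    # line_bytes at this width and data rate
    period_ns = 1000.0 / clock_mhz
    beats = line_bytes / width_bytes
    return latency_cycles * period_ns + (beats / beats_per_clock) * period_ns

print(fetch_ns(200, 2, 8, 3))   # DDR, 64-bit @ 200MHz  -> 35.0 ns
print(fetch_ns(200, 1, 16, 3))  # SDR, 128-bit @ 200MHz -> 35.0 ns (identical)
print(fetch_ns(400, 1, 8, 3))   # SDR, 64-bit @ 400MHz  -> 27.5 ns (faster)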

[...]

--
Michael Brown
www.emboss.co.nz : OOS/RSI software and more
Add michael@ to emboss.co.nz - My inbox is always open


 



