Some early benchmarks for P4EE



 
 
#72 - October 10th 03, 01:19 AM - Keith R. Williams

In article ,
says...
"wogston" wrote in message
...

The bytes in memory don't become 'fullwords' before they are loaded into ALU
registers.


No, sorry. A fullword is specifically data in memory, and has nothing to do
with registers. For example, I can do this:


Deano, you're talking 'Z' and woggy is talking 'x86'. I'm going
to say you're *both* wrong. In fact the wogster is wrong because
no current processor can access memory on even a "word"
boundary. The minimum addressable *memory* unit is a cache line
(whatever they define that as).

CLC 0(4,R3),=F'1'

This compares a fullword with a value of '1' to 4 bytes in storage. The
four bytes in storage are *not* a fullword, because there is no implied
alignment. So I am comparing a fullword to 4 bytes. The *only* reason it
is a fullword is because my program has *defined* it as such. My program
*expects* it to be 4 bytes and on a word boundary. Otherwise, it is only 4
bytes in storage.
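
A rough C analogue of the point (my own sketch; the buffer contents and
names are made up): the four bytes are only a "fullword" because the
program chooses to read them as one.

    /* Illustrative only: the bytes become a 32-bit value only when the
       program imposes that view on them. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char storage[8] = {0, 0, 0, 1, 0xAA, 0xBB, 0xCC, 0xDD};
        uint32_t w;

        memcpy(&w, storage, sizeof w);        /* program-imposed 4-byte view */
        printf("%08x\n", (unsigned)w);        /* value depends on byte order */
        return 0;
    }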


OTOH, x86 allows accesses across "word" boundaries. As Woggie
suggests this is ugly, but allowed. OTOH, all accesses to
*MEMORY* are in cache-line-sized chunks.
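
As a rough illustration (assuming a 64-byte cache line, which is typical
on current x86 parts but not architecturally guaranteed), a 4-byte load at
offset 62 of a line-aligned buffer is perfectly legal x86, yet it straddles
two cache lines, so the memory side of the machine touches two full lines
to satisfy one small load:

    /* Sketch only; aligned(64) is a GCC/Clang extension and the 64-byte
       line size is an assumption, not a guarantee. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static unsigned char buf[128] __attribute__((aligned(64)));

    int main(void)
    {
        uint32_t v;
        memcpy(&v, buf + 62, sizeof v);   /* bytes 62..65 span two lines */
        printf("%u\n", (unsigned)v);
        return 0;
    }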

snip

But bytes in memory are just bytes in
memory. They don't become words or fullwords or whatever until they are
accessed.


Yes, bytes are just bytes - except for the 'format' imposed upon them by the
program. This is where the 'fullword' concept applies - and nowhere else.
Refer to the description I gave earlier from the z/OS Principles of
Operation:


For the 'Z', this may be correct. However, the 'Z' isn't the
universe.

"Certain units of information must be on an integral boundary in storage. A
^^^^
^^^^^^^^
boundary is called integral for a unit of information when its storage
^^^^^^^^^^^^^^
^^^^^^
address is a multiple of the length of the unit in bytes. Special names are
^^^^^^ ^^^
given to fields of 2, 4, 8, and 16 bytes on an integral boundary. A halfword
^^^^
is a group of two consecutive bytes on a two-byte boundary and is the basic
building block of instructions. A word is a group of four consecutive bytes
on a four-byte boundary."

Emphasis mine. :-).
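
Put numerically (my own worked example, not from the manual): an address is
an integral boundary for an N-byte unit exactly when it is a multiple of N,
so 0x2004 is on a fullword boundary (0x2004 % 4 == 0) but not on a
doubleword boundary (0x2004 % 8 != 0). In C terms:

    /* Illustrative check of the "integral boundary" definition. */
    #include <stdint.h>
    #include <stdio.h>

    static int on_boundary(uint64_t addr, unsigned unit_bytes)
    {
        return addr % unit_bytes == 0;
    }

    int main(void)
    {
        printf("%d %d\n", on_boundary(0x2004, 4), on_boundary(0x2004, 8));
        return 0;                      /* prints "1 0" */
    }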


Sure, but this is highly processor dependent. This stuff varies
even within the implementation of an architecture. Between
architectures you cannot make any sort of comparison.

--
Keith
#73 - October 10th 03, 03:01 AM - Felger Carbon

"Keith R. Williams" wrote in message
. ..

OTOH, x86 allows accesses across "word" boundaries. As Woggie
suggests this is ugly, but allowed. OTOH, all accesses to
*MEMORY* are in cache-line-sized chunks.


Keith, the way I understand this is that the cache controller part of
the CPU accesses the DRAM (usually via a north bridge) as cache lines.
But the CPU proper can address the cache on arbitrary byte boundaries.

Do I have this wrong?




#74 - October 10th 03, 03:40 AM - Dean Kent

"Keith R. Williams" wrote in message
. ..

Deano, you're talking 'Z' and woggy is talking 'x86'. I'm going
to say you're *both* wrong. In fact the wogster is wrong because
no current processor can access memory on even a "word"
boundary. The minimum addressable *memory* unit is a cache line
(whatever they define that as).


OK - define 'addressable'. ;-). I claim that the minimum *addressable*
unit is a byte. It may be that the minimum *fetch* unit is a cache line...


OTOH, x86 allows accesses across "word" boundaries. As Woggie
suggests this is ugly, but allowed. OTOH, all accesses to
*MEMORY* are in cache-line-sized chunks.


Eeeww! I just looked it up in my 'old' PC assembly language manual - and
word alignment is for 'segments', while word data types indicate length
only. What a typical kludgy, confusing mess. ;-).


For the 'Z', this may be correct. However, the 'Z' isn't the
universe.


wh-wh-what? You mean, technology has passed me by?


Sure, but this is highly processor dependent. This stuff varies
even within the implementation of an architecture. Between
architectures you cannot make any sort of comparison.


Ah well. I can still address a single byte without worrying about it being
part of a word, however. At least that much of my world is stable. g.

Regards,
Dean






#76 - October 10th 03, 04:04 AM - Keith R. Williams

In article ,
says...
"Keith R. Williams" wrote in message
. ..

Deano, you're talking 'Z' and woggy is talking 'x86'. I'm going
to say you're *both* wrong. In fact the wogster is wrong because
no current processor can access memory on even a "word"
boundary. The minimum addressable *memory* unit is a cache line
(whatever they define that as).


OK - define 'addressable'. ;-). I claim that the minimum *addressable*
unit is a byte. It may be that the minimum *fetch* unit is a cache line...


Well, you two are doing a great job of munging these little
facts, so I thought I'd help you two out! ;-)

OTOH, x86 allows accesses across "word" boundaries. As Woggie
suggests this is ugly, but allowed. OTOH, all accesses to
*MEMORY* are in cache-line-sized chunks.


Eeeww! I just looked it up in my 'old' PC assembly language manual - and
word alignment is for 'segments', while word data types indicate length
only. What a typical kludgy, confusing mess. ;-).


Well, this is .chips! Basically, if you wish to screw your
performance (perhaps to justify the next release ;-), you're
welcome to do the nasty.

For the 'Z', this may be correct. However, the 'Z' isn't the
universe.


wh-wh-what? You mean, technology has passed me by?


LOL! Marchant calculators have gone the way of the dodo, Deano!

Sure, but this is highly processor dependent. This stuff varies
even within the implementation of an architecture. Between
architectures you cannot make any sort of comparison.


Ah well. I can still address a single byte without worrying about it being
part of a word, however. At least that much of my world is stable. g.


Certainly on most architectures. AIUI, the original Alpha had no
byte addressing. Byte oriented I/O simply wasted the other
three. ...a *bad* idea that was soon corrected.

From the old farts ;-), it seems that neither the CDCs nor the
early Crays had byte addressing. It wasn't deemed necessary. (I'm
sure I'll be shortly corrected here ;-)
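
For flavour, here's a rough C sketch of what software ends up doing on a
machine that can only load and store aligned words: storing a single byte
becomes a read-modify-write of the word that contains it. (The 32-bit word
size and little-endian byte numbering are assumptions for illustration
only.)

    /* Sketch: byte store emulated with aligned 32-bit loads/stores only. */
    #include <stdint.h>
    #include <stdio.h>

    static void store_byte(uint32_t *mem, uint32_t byte_addr, uint8_t value)
    {
        uint32_t word  = mem[byte_addr / 4];      /* aligned word load  */
        unsigned shift = (byte_addr % 4) * 8;     /* byte's position    */
        word = (word & ~(0xFFu << shift)) | ((uint32_t)value << shift);
        mem[byte_addr / 4] = word;                /* aligned word store */
    }

    int main(void)
    {
        uint32_t mem[2] = {0, 0};
        store_byte(mem, 5, 0xAB);
        printf("%08x %08x\n", (unsigned)mem[0], (unsigned)mem[1]);
        return 0;                                 /* 00000000 0000ab00 */
    }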

--
Keith
#77 - October 10th 03, 04:58 PM - Judd



Certainly I have a job, though I'm taking the week off trying to
get my house ready for winter. You really do need to look in
that mirror. You might be amazed at what you see, if you look.


And you're doing a damn good job sitting there on your butt chatting on your
computer all day. Why don't you take a year off to figure out how to
install a lightbulb.

Dumb! Save the lame responses and take a cold shower already.

No, I'm serious. You really need to look in the mirror.


You're seriously dumb? I didn't need confirmation on that, but thanks.
Can't wait to see what lame a$$ response you come up with next.


You really do need to look in that mirror. I'm seriously
serious.


Wow, never heard that one before. You are a witty one aren't you!


#79 - October 11th 03, 05:07 AM - George Macdonald

On Thu, 9 Oct 2003 23:04:12 -0400, Keith R. Williams
wrote:


Certainly on most architectures. AIUI, the original Alpha had no
byte addressing. Byte oriented I/O simply wasted the other
three. ...a *bad* idea that was soon corrected.


I didn't work with the Alpha enough to remember that, but did they change
it later? What sticks in my mind most with it was the lack of an integer
divide instruction.

From the old farts ;-), it seems that neither the CDCs nor the
early Crays had byte addressing. It wasn't deemed necessary. (I'm
sure I'll be shortly corrected here ;-)


Humph, puff, grump! Yep, the early CDCs were 60-bit words - they eventually
went to some kind of dual architecture with 64-bit words, but I never used
that (64-bit) side of it. IIRC the Data General 16-bit minis,
Novas/Eclipses, were basically word addressing too, with some kind of
kludge for a byte select with a few special instructions.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
#80 - November 13th 03, 09:40 PM - bill davidsen

In article ,
George Macdonald wrote:
| On Thu, 9 Oct 2003 23:04:12 -0400, Keith R. Williams
| wrote:
|
|
| Certainly on most architectures. AIUI, the original Alpha had no
| byte addressing. Byte oriented I/O simply wasted the other
| three. ...a *bad* idea that was soon corrected.
|
| I didn't work with the Alpha enough to remember that but did they change it
| later? What sticks in my mind most with it was the lack of an integer
| divide instruction.
|
| From the old farts ;-), it seems that neither the CDCs nor the
| early Crays had byte addressing. It wasn't deemed necessary. (I'm
| sure I'll be shortly corrected here ;-)
|
| Humph, puff, grump! Yep, the early CDCs were 60-bit words - they eventually
| went to some kind of dual architecture with 64-bit words, but I never used
| that (64-bit) side of it. IIRC the Data General 16-bit minis,
| Novas/Eclipses, were basically word addressing too, with some kind of
| kludge for a byte select with a few special instructions.

The Cray did not have byte addressing, and neither did the GE 600 series
(on which MULTICS was written), although by using "tally words" you could
have six- or nine-bit bytes, and one-, two-, or eight-word (four nine-bit
bytes) stacks with hardware bounds checking.

I was running a project on a Cray2, and troff ran faster on a VAX than
the Cray2, because it was fetching 128 bits and doing ANDs and ORs to
get the bytes. Some of the Crays didn't have virtual memory, either.
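
Roughly the kind of unpacking byte-oriented code is forced into on a
word-addressed machine (a made-up C sketch; the 64-bit word and big-endian
byte numbering are assumptions for illustration):

    /* Pull individual bytes out of a word with shifts and masks. */
    #include <stdint.h>
    #include <stdio.h>

    static unsigned get_byte(uint64_t word, unsigned i)   /* i = 0..7 */
    {
        return (unsigned)(word >> (8 * (7 - i))) & 0xFF;
    }

    int main(void)
    {
        uint64_t w = 0x4142434445464748ULL;   /* "ABCDEFGH" */
        for (unsigned i = 0; i < 8; i++)
            putchar((int)get_byte(w, i));     /* prints ABCDEFGH */
        putchar('\n');
        return 0;
    }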
--
Bill Davidsen CTO, TMR Associates
As we enjoy great advantages from inventions of others, we should be
glad of an opportunity to serve others by any invention of ours; and
this we should do freely and generously.
-Benjamin Franklin (who would have liked open source)
 



