#1
Endianness
Hello,
Simple question: with TCP/IP being big-endian, and most servers being big-endian, why did Intel choose little-endian? I hear it is because of an electronic advantage, but honestly, I wonder what advantage it has over a 68HC11, which is a big-endian machine. Can someone tell me specifically what the electronic advantage is to building a little-endian chip? Also, besides bit shifts, are there easy instructions to go from one byte order to the other (hopefully ones that don't require a special compiler)? Thanks!
#2
Intel developed the ancestors of the x86 architecture ca. 1971-1972 (the Intel 4004 chip).
TCP/IP was still ARPAnet back then.

When you are doing little narrow chips (4-bit, 8-bit), little-endian is a bit easier to implement. The hardware goes where the address points, fetches the first byte (or nibble, or even bit), does something with it (say, add), increments the address, then does the same thing again, and again, and again. With big-endian, it has to move to the right (an extra step) before fetching the first byte/nibble/bit.

Personally, though, I prefer big-endian because that's what I learned (Arabic number system) in kindergarten.

I was once involved in the conversion of a rather large software system (100K lines of C) from x86 to a big-endian chip. MOST things were handled by the compiler. There were a few HW-oriented things that were still little-endian, so we converted to C++ and had some little-endian objects that did the conversion. The conversion wasn't easy, but it wasn't nearly as hard as some folks predicted.

And it's a BYTE swap, not a bit shift.

--
Chuck Tribolet
http://www.almaden.ibm.com/cs/people/triblet
#3
Chuck Tribolet wrote:
> When you are doing little narrow chips (4-bit, 8-bit), little-endian is
> a bit easier to implement. [...] With big-endian, it has to move to the
> right (extra step) before fetching the first byte/nibble/bit.

Great, thanks! You are the only person (and I've been asking around a lot) who seemed to know the answer.

> The conversion wasn't easy, but it wasn't nearly as hard as some folks
> predicted.

Yes, I would imagine so. Is there an assembly instruction to help things along in that regard, or are the SHR and SHL operators good enough? Isn't there an instruction to go from a big-endian DWORD to a little-endian DWORD in one step?

> And it's a BYTE swap, not a bit shift.

Yeah, sorry -- in C you use the bit shift operators, hehe. Thank you for the reply.
#4
There aren't endian instructions in x86, AFAIK (but I'm not much
of an x86 assembler programmer). There are machines that do have them. Hmm, a 16-bit endian swap is an 8-bit rotate.

--
Chuck Tribolet
http://www.almaden.ibm.com/cs/people/triblet
#5
BSWAP - Byte Swap
Does an endian swap of a 32-bit register (introduced with the 486).

XCHG - Exchange
Can be used to do 16-bit swaps (e.g. XCHG AL, AH).
#6
"Chuck Tribolet" writes:
> Intel developed the x86 architecture ca. 1972 (Intel 4004 chip)
> TCP/IP was still ARPAnet back then. [...]

Actually, was TCP/IP even 'invented' back then? I thought ARPAnet's switch to IP occurred in the early '80s. Or am I misremembering?

--
David Magda dmagda at ee.ryerson.ca, http://www.magda.ca/
Because the innovator has for enemies all those who have done well
under the old conditions, and lukewarm defenders in those who may do
well under the new. -- Niccolo Machiavelli, _The Prince_, Chapter VI
#7
TCP, TELNET, and FTP were part of the original design.
TCP/IP became the standard protocol for ARPANET on Jan 1, 1983. There were about 1,000 hosts on-line at the time.