#31
On 2/23/2014, Yousuf Khan posted:
> On 23/02/2014 6:15 PM, BillW50 wrote:
>> On 2/23/2014 4:41 PM, charlie wrote:
>>> The front panel on many of the old mainframes and minicomputers allowed direct entry of machine code, and was usually used to manually enter such things as a "bootstrap", or loader program.
>> The way I recall it, any computer understands only machine code and nothing else. Anything else must be converted to machine code at some point.
> I know what Charlie is talking about. When he talks about directly entering machine code, he means typing in the binary codes directly, without even the niceness of an assembler to translate it partially into something English-readable. This would be entering numbers into memory directly, like 0x2C, 0x01, 0xFB, etc.
> Yousuf Khan

Not so recently, when I worked on what were then called minicomputers, the boot process went like this:

Set the front panel data switches to the bits of the first loader instruction (in machine language, of course).
Set the front panel address switches to the first location of the loader.
Enter the data into memory by pressing the Store button.
Set the data switches to the second instruction and the address switches to the second address. Press Store.
Repeat a dozen or two times to get the entire bootstrap loader into memory.
Load the main loader paper tape into the paper tape reader.
Set the address switches to the starting location of the bootstrap loader.
Press the Go button.
When the main loader is in, load the paper tape of the program you want to run into the reader.
Set the starting address to the main loader's first address.
Press Go.

That loader will load the final paper tape automatically, thank Silicon.

Over time the process was streamlined a bit, for example by letting the storage address autoincrement after each Store operation. Maybe you can guess how happy I was when BIOSes started to appear :-)

--
Gene E. Bloch (Stumbling Bloch)
#32
On Mon, 24 Feb 2014 14:09:02 -0500, " wrote in article:
> On Mon, 24 Feb 2014 13:38:40 -0500, Jason wrote:
>> On Mon, 24 Feb 2014 13:02:02 -0500, " wrote in article:
>>> On Sun, 23 Feb 2014 23:21:52 -0500, Jason wrote:
>>>> On Fri, 21 Feb 2014 14:23:01 +0000 (UTC), "Robert Redelmeier" wrote:
>>>> Now, divide that expenditure by the number manufactured.
>>> I worked in high-end microprocessor design for seven or eight years. Transistors are indeed treated as free, and getting cheaper every year. If you look at how programmers write, they think they're free, too. ;-)
>> Ok, transistors are indeed free in that regard. But as we've learned, there are limits to the absolute performance that can be had even with an unlimited transistor budget - hence multi-core machines. Programmers would be very happy if we could have figured out how to continuously boost uniprocessor performance, but it cannot happen, at least with silicon.

Taking advantage of parallel processors, for most tasks, is very hard.
#33
On Mon, 24 Feb 2014 12:11:05 -0800, "Gene E. Bloch" wrote in article leg90r$nln$1@news.albasani.net:
> [...]
> Not so recently, when I worked on what were then called minicomputers, the boot process went like this: set the front panel data switches to the bits of the first loader instruction... [long front-panel procedure snipped]
> Over time the process was streamlined a bit, for example by letting the storage address autoincrement after each Store operation. Maybe you can guess how happy I was when BIOSes started to appear :-)

lol I'm sure you were!

The first computer I used had the boot record on a single tab card. It used up about 75 of the 80 columns. We whippersnappers memorized the sequence and could type it in on the console teletypewriter. It was sometimes faster than tracking down the boot card.
#34
In comp.sys.ibm.pc.hardware.chips Jason wrote in part:
> On Fri, 21 Feb 2014 14:23:01 +0000 (UTC), "Robert Redelmeier" wrote in article le7ng5$jfq$:
>> In comp.sys.ibm.pc.hardware.chips Yousuf Khan wrote in part:
>>> But it goes to show why the age of compilers is well and truly upon us; there's no human way to keep track of these machine language instructions. Compilers just use a subset, and just repeat those instructions over and over again.
>> Hate to break it to you, but you are behind the times. Compilers are passe' -- "modern" systems use interpreters like JIT Java. How else do you think Android gets Apps to run on the dogs-breakfast of ARM processors out there? It is [nearly] all interpreted Java. So much so that Dell can get 'roid Apps to run on its x86 tablet! (AFAIK, iOS still runs compiled Apps, prob'cuz Apple _hatez_ Oracle)
> Compilers are NOT passe'.

I feel quoted-out-of-context. I was replying to Mr Khan (restored above) that compiled languages were in turn being supplanted by interpreted.

> The performance penalty for interpreted languages is a large factor. It's fine in many situations - scripting languages and the like - and modern processors are fast enough to make the performance hit tolerable. Large-scale applications are still compiled and heavily optimized. Time is money.

I am well aware of the performance penalty of interpreted languages (I once programmed in APL/360) and that compiling has been preferable for HPC. However, the differences between compilers are reducing to the quality of their libraries, especially SIMD and multi-threading. The flexibility of interpreters might have value.

-- Robert
#35
http://i272.photobucket.com/albums/j...SAovertime.jpg
About 700 instructions as of 2010. AVX, AVX2, BMI1, BMI2, XOP, FMA3, FMA4, and the post-32nm processor instruction extensions (RDRAND and F16C) should put that over 800.
#36
Robert Redelmeier redelm ev1.net.invalid wrote:
> [...]
> I am well aware of the performance penalty of interpreted languages (I once programmed in APL/360) and that compiling has been preferable for HPC. However, the differences between compilers are reducing to the quality of their libraries, especially SIMD and multi-threading. The flexibility of interpreters might have value.

Not talking about commercial stuff, but... I use speech and VC++. Speech-activated scripting involves (what I think is) an interpreted scripting language (Vocola) hooked into NaturallySpeaking (DNS) speech recognition. Additionally, I'm using a Windows system hook written in C++ that is compiled. The systemwide hook is for a few short SendInput() scripts activated by numeric keypad keys.

The much more involved voice-activated scripting is for a large number of longer scripts. It's a great combination for making Windows dance. I would say it's cumbersome, but I have the editors working efficiently here. Currently using that to play Age of Empires 2 HD.

Speech is on the one extreme. I suppose assembly language would be on the other, but C++ is at least compiled. That has nothing to do with any mass of programmers, but it's useful here, and it's a very wide-ranging mess of programming for one task.
#37
On Fri, 21 Feb 2014 05:55:02 -0000, Yousuf Khan wrote:
> On 20/02/2014 11:21 PM, Paul wrote:
>> At one time, a compiler would issue instructions from about 30% of the instruction set. It would mean a compiled program would never emit the other 70% of them. But a person writing assembler code would have access to all of them, at least as long as the mnemonic existed in the assembler.
> I think the original idea of the x86's large instruction count was to make an assembly language as full-featured as a high-level language. x86 even had string-handling instructions! I remember I designed an early version of the CPUID program that ran under DOS. The whole executable including its *.exe headers was something like 40 bytes! Got it down to under 20 bytes when I converted it to *.com (which had no headers)! Most of the space was used to store strings, like "This processor is a:" followed by generated strings like 386SX or 486DX, etc. You could make some really tiny assembler programs on x86. Of course, compiled programs ignored most of these useful high-level instructions and stuck with simple instructions to do everything.
> Yousuf Khan

Did you cater for all the early CPUs?

;This code assembles under nasm as 105 bytes of machine code, and will
;return the following values in ax:
;
;AX   CPU
;0    8088 (NMOS)
;1    8086 (NMOS)
;2    8088 (CMOS)
;3    8086 (CMOS)
;4    NEC V20
;5    NEC V30
;6    80188
;7    80186
;8    286
;0Ah  386 and higher

code segment
assume cs:code,ds:code
..radix 16
org 100

        mov ax,1
        mov cx,32
        shl ax,cl
        jnz x186
        ;pusha
        db '60'
        stc
        jc nec
        mov ax,cs
        add ax,01000h
        mov es,ax
        xor si,si
        mov di,100h
        mov cx,08000h
        ;rep es movsb
        rep es:movsb
        or cx,cx
        jz cmos
nmos:   mov ax,0
        jmp x8_16
cmos:   mov ax,2
        jmp x8_16
nec:    mov ax,4
        jmp x8_16
x186:   push sp
        pop ax
        cmp ax,sp
        jz x286
        mov ax,6
x8_16:  xor bx,bx
        mov byte [a1],043h
a1 label byte
        nop
        or bx,bx
        jnz t1
        or bx,1
t1:     jmp cpuid_end
x286:   pushf
        pop ax
        or ah,070h
        push ax
        popf
        pushf
        pop ax
        and ax,07000h
        jnz x386
        mov ax,8
        jmp cpuid_end
x386:   mov ax,0Ah
cpuid_end:
code ends
end

--
It's a money/life balance.
#38
On 25/04/2014 5:54 AM, Stanley Daniel de Liver wrote:
> On Fri, 21 Feb 2014 05:55:02 -0000, Yousuf Khan wrote:
>> I remember I designed an early version of the CPUID program that ran under DOS. The whole executable including its *.exe headers was something like 40 bytes! Got it down to under 20 bytes when I converted it to *.com (which had no headers)! [...]
> Did you cater for all the early CPUs?
> [105-byte nasm detection routine snipped]

I don't know if I still have my old program anymore, but I do remember that at the time it could distinguish 386SX from DX, and 486SX from DX as well.

Yousuf Khan
#39
On Sat, 26 Apr 2014 01:58:41 +0100, Yousuf Khan wrote:
> On 25/04/2014 5:54 AM, Stanley Daniel de Liver wrote:
>> On Fri, 21 Feb 2014 05:55:02 -0000, Yousuf Khan wrote:
>>> I remember I designed an early version of the CPUID program that ran under DOS. The whole executable including its *.exe headers was something like 40 bytes! Got it down to under 20 bytes when I converted it to *.com (which had no headers)! Most of the space was used to store strings, like "This processor is a:" followed by generated strings like 386SX or 486DX, etc.

I doubt the minimalism; a print rtn is 6 bytes, and the text "This processor is a:" is 20 on its own!

>>> You could make some really tiny assembler programs on x86. Of course, compiled programs ignored most of these useful high-level instructions and stuck with simple instructions to do everything. Yousuf Khan
>> Did you cater for all the early CPUs?
>> [105-byte nasm detection routine snipped]

(this wasn't my code, I probably had it from clax some years back)

> I don't know if I still have my old program anymore, but I do remember that at the time it could distinguish 386SX from DX, and 486SX from DX as well. Yousuf Khan

Here's the routine I boiled it down to:

test_cpu:               ; mike's shorter test for processor
        mov ax,07000h
        push ax
        popf
        sti
        pushf
        pop ax
        and ah,0C0h     ; isolate top 2 bits
        shr ah,1        ; avoid negative
        cmp ah,020h     ; anything greater means 8086 - but 80 = -1!
                        ; anything less means bit 4 off, i.e. 286
                        ; equal implies 386
        ret

Of course, when the CPUID instruction was introduced it made the later chips much easier to identify!

--
It's a money/life balance.