Recommend Book on Basic Video Card Design?

#21 - December 2nd 03, 10:07 AM - Thomas Engelmeier

Reinder Verlinde wrote:

For example, I've gathered that some or all video cards generate
an interrupt at the end of a vertical refresh cycle, but I have no idea
why.

That is so that programs can write to screen memory while the video card
is not reading from it.

Older hardware does not have dual-ported RAM; if one were to write to a
scanline at the time the video refresh hardware tried to read it, one
would see awful display artifacts. A clear example of this was the
Sinclair ZX80. Its Basic had 'Fast' and 'Slow' commands that toggled
whether screen output was sync'ed with the vertical refresh interrupt.


In this case, sorry, no: The ZX80 and 81 didn't have separate video
circuits. In order to generate an image, the processor had to output it.
To make calculations faster, the fast mode (without video output) could
be used.

Regards,
Tom"My first machine was an selfbuild ZX81"E

--
This address is valid in its unmodified form but expires soon.

#22 - December 2nd 03, 04:03 PM - David Phillip Oster

Jeff Walther wrote:

As a hobbyist project I would like to design and build a simple video
card. I would appreciate any recommendations of books (or other
references) that address this topic. Ideally, I would like something
which is thorough about all the things that need to be addressed, without
bogging down in too much specific detail--at least not in the first few
chapters.


Just for your education, you might be interested in looking at the spec
for the DVI interface (which references the VESA VGA interface). It is
available by following the link at the bottom of this web page:

http://www.ddwg.org/downloads.html

You might consider doing a bus interface between the mezzanine slot on
the SE/30 and a modern AGP video card. That way, the card looks like RAM
to the SE/30, and the card takes care of all the details of pumping out
a monitor-compatible analog signal. This is just a handwave; I don't know
how hard it would be to send the control signals to an arbitrary card to
enable it.
#24 - December 3rd 03, 02:49 AM - Keith R. Williams

ldo@geek-central.gen.new_zealand says...
Jeff Walther wrote:

Jan Panteltje wrote:

...in modern cards you can write to one part of memory
while the other part is displayed, then switch display
memory (the address, actually) and write to the other part and
display the first part.


This sounds like double-buffering, which is a technique used to avoid
flicker.

The problem happens when the program is rendering complex graphics, such
that it happens slowly enough that you can just barely make out the code
erasing parts of the image and drawing other bits on top, thus producing
a flickering effect.

To avoid this, the program does all its drawing into an offscreen buffer
that is not visible on-screen at all. Then, when the image is complete,
it tells the video hardware to exchange the offscreen buffer with the
onscreen one, which can be done simply by exchanging a couple of address
pointers, rather than swapping the entire contents of the buffers. Thus,
the new image appears on screen instantly in its entirety, without
flickering.

Funnily enough, even this kind of hardware support for double-buffering
isn't so important these days. Systems are now fast enough that a simple
QuickDraw CopyBits call (or its equivalent on other platforms) can move
the entire contents of the offscreen buffer into video RAM faster than
you can blink. Thus, you don't really need the hardware capability to
switch address pointers: just maintain your own offscreen buffers in
software. Which also means that the number of offscreen buffers you're
allowed to have is only limited by available RAM, not by numbers of
available video hardware mapping registers or whatever.

And what did folks do before there was dual-ported memory--or if one does
not wish to use dual-ported memory?


More "snow" than "flicker". But otherwise...

--
Keith
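
The pointer-swap double buffering described in the quoted explanation above can be sketched in a few lines of C. This is only an editor's illustration, not code from the thread: fb_base stands in for whatever base-address register the real video hardware latches at the next vertical blank, and the 512 x 342 size is just the SE/30's internal screen used as an example.

/* Double-buffering sketch (editor's illustration): draw off screen,
 * then swap a pointer instead of copying pixels. */
#include <stdint.h>
#include <string.h>

#define WIDTH  512
#define HEIGHT 342

static uint8_t buffer_a[WIDTH * HEIGHT];
static uint8_t buffer_b[WIDTH * HEIGHT];

/* Hypothetical stand-in for the card's base-address register. */
static uint8_t *fb_base = buffer_a;     /* currently scanned out */
static uint8_t *back    = buffer_b;     /* currently drawn into  */

static void render_frame(uint8_t *dst, int frame)
{
    /* Draw the whole frame off screen; artifacts here are never visible. */
    memset(dst, 0, WIDTH * HEIGHT);
    dst[(frame % HEIGHT) * WIDTH + (frame % WIDTH)] = 0xFF;  /* a moving dot */
}

static void swap_buffers(void)
{
    /* "Exchanging a couple of address pointers": the finished back buffer
     * becomes the displayed one.  Real hardware would latch the new base
     * address at the next vertical blank. */
    uint8_t *t = fb_base;
    fb_base = back;
    back = t;
}

int main(void)
{
    for (int frame = 0; frame < 3; frame++) {
        render_frame(back, frame);
        swap_buffers();
    }
    return 0;
}

Because only the pointers move, the swap costs a couple of writes no matter how large the frame is, which is why the finished image appears on screen all at once.
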
#25 - December 4th 03, 02:30 AM - Lawrence D'Oliveiro

Keith R. Williams wrote:

ldo@geek-central.gen.new_zealand says...

The problem happens when the program is rendering complex graphics, such
that it happens slowly enough that you can just barely make out the code
erasing parts of the image and drawing other bits on top, thus producing
a flickering effect.


More "snow" than "flicker". But otherwise...


No, it is flicker, caused by split-second changes to the graphics being
displayed. The effect can be demonstrated even on the latest
state-of-the-art video hardware; it just takes more complex graphics
rendering to do it.
#26 - December 6th 03, 05:33 AM - David M. Palmer

Jeff Walther wrote:

For my first cut, I'm considering only supporting grayscale on the
internal monitor (512 x 342). In a stock machine it's 1 bit, so this is a
modest improvement. The host CPU/68030 bus runs at 16 MHz with 32 bits of
data and 32 bits of address. So it's not running very fast. Of course, my
video processor may run considerably faster.

Anyway, to simplify my first cut, I'm considering just using SRAM for the
video memory. That would save me from needing to multiplex the addresses
and worry about refresh cycles. But dual-port SRAM is *expensive* so I'd
rather avoid going dual-port if I can.


The FPGA talks to the 68030 bus, to the SRAM and the D/A converters and
sync lines of the video display.

The 68030 writes to video RAM...it puts an address, data and other
signals on the bus. The FPGA reads those signals and stores the data
at the appropriate address in memory. As it whiles away the long
nanoseconds before the 68030 is ready to talk to it again, it grabs
data from the appropriate place in memory and sticks it on the D/A
converters.

With RAM from 1990 it took all sorts of double porting and fancy
circuitry to schedule the memory accesses and avoid flicker. Nowadays,
you just have the FPGA read a couple of pixels ahead, so that when the
68030 needs to access the RAM (an access the FPGA can complete in 20 ns
or so) you don't have to worry about hitting a tight timing window for
producing the next pixel.
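
A behavioural sketch of that scheme in C (an editor's illustration, not HDL and not from the thread; the FIFO depth, memory size, and timing ratios are made-up values): CPU accesses to the single-ported SRAM always win arbitration, and the idle memory cycles are used to prefetch pixels into a small FIFO, so the pixel side never has to read the SRAM directly.

/* Behavioural model (editor's sketch) of a simple video-memory arbiter:
 * the CPU port has priority, and spare cycles fill a small pixel FIFO. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define FIFO_DEPTH 8
#define VRAM_WORDS (512 * 342 / 8)       /* 1 bit per pixel, illustrative */

static uint8_t vram[VRAM_WORDS];

static uint8_t fifo[FIFO_DEPTH];
static int fifo_count = 0;
static uint32_t fetch_addr = 0;

/* One memory-clock tick of the arbiter. */
static void arbiter_tick(bool cpu_request, uint32_t cpu_addr, uint8_t cpu_data)
{
    if (cpu_request) {
        vram[cpu_addr % VRAM_WORDS] = cpu_data;             /* CPU always wins */
    } else if (fifo_count < FIFO_DEPTH) {
        fifo[fifo_count++] = vram[fetch_addr % VRAM_WORDS]; /* prefetch a word */
        fetch_addr++;
    }
}

/* One pixel-side tick: the D/A side only ever reads the FIFO. */
static int dac_tick(void)
{
    if (fifo_count == 0)
        return -1;                        /* would underflow: FIFO too small */
    uint8_t word = fifo[0];
    for (int i = 1; i < fifo_count; i++)  /* shift the FIFO down */
        fifo[i - 1] = fifo[i];
    fifo_count--;
    return word;
}

int main(void)
{
    /* Toy worst case: the CPU takes every other memory cycle, while the
     * display consumes a word only every fourth cycle, so the FIFO never
     * runs dry. */
    for (int cycle = 0; cycle < 64; cycle++) {
        arbiter_tick(cycle & 1, cycle, (uint8_t)cycle);
        if (cycle % 4 == 3)
            dac_tick();
    }
    printf("FIFO occupancy after burst: %d\n", fifo_count);
    return 0;
}

In real hardware the FIFO only needs to be deep enough to cover the longest run of CPU accesses between two pixel fetches; a 20 ns SRAM cycle against a much slower dot-clock word rate is what lets a small FIFO suffice in this kind of design.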

You should check out the old Addison-Wesley books Apple put out.
There were some on designing hardware devices for the Mac, and I think
one of them had a section on video cards.

I assume that you are doing this all for fun/education, because you
will certainly spend more than the $30 a Macintosh Color Classic will
cost on eBay. I assume you also know that you can buy video cards for
external monitors for the SE/30 (I used to have one).

Making something to interface to a Macintosh will be frustrating
because the Mac will already have ideas on how to do things. (Which
means that you have to dig through antique documentation to see how the
Mac wants to talk to a video card, and how you tell the Mac that your
video card is 640x480 8-bit color table 60Hz etc.). At the end of
this, you will know all the grungy interface details of a 15-year-old
computer that has less power than a $70 Palm.

If you want to learn how to do video electronics, you may want to start
with a project which just does video.

How about implementing pong with an FPGA or a microprocessor (Google
for 'pic pong' for an example)? Implement it for a VGA or high-resolution
monitor so you can work out what to do with the various connector wires.
Make it accept a TV feed so that you have a pong ball overlaid on the TV
show, and can bounce the ball off objects on the screen. Doing all that
will teach you a lot, and then you will find it much easier to make a
video card for an old Mac, if you still want to.

That's just my advice.

--
David M. Palmer (formerly @clark.net, @ematic.com)
#27 - December 6th 03, 05:41 PM - Jan Panteltje

On a sunny day (Fri, 05 Dec 2003 22:33:22 -0700) it happened that
"David M. Palmer" wrote:

How about implementing pong with an FPGA or a microprocessor (Google
for 'pic pong' for an example)? Implement it for a VGA or high-resolution
monitor so you can work out what to do with the various connector wires.
Make it accept a TV feed so that you have a pong ball overlaid on the TV
show, and can bounce the ball off objects on the screen. Doing all that
will teach you a lot, and then you will find it much easier to make a
video card for an old Mac, if you still want to.

That's just my advice.

Very good advice. You can add a 10 Mbit Ethernet circuit to the FPGA,
send data from the PC, and display it on the TV.
I just found a cool, super simple FPGA Ethernet circuit at
http://www.fpga4fun.com/10BASE-T.html
I am soldering it onto the board today :-)
The bandwidth is small, but that way you could do live video over
Ethernet (a 4500 kbps MPEG-2 stream from satellite, with MPEG-2 decoding
in the FPGA...).
This works very well on the PC, just video broadcast :-)
I can stream the DVB-S over the 10 Mbit Ethernet interface using dvbstream
in Linux
(see http://www.home.zonnet.nl/panteltje/dvd/ for some software).
Doing the MPEG-2 decoding in the FPGA (to YCrCb) will be very complicated
(well, for me at least).
Not that far yet in FPGA...
But a pong ball needs very little bandwidth, hehe, so that should be possible.
Uncompressed:
at 10 Mbit/s you get 400 kbit per frame (at 25 fps), so 50 kByte per frame;
in B&W that is 80 bytes per line for 625 lines...
Not very usable.
If you could use 100 Mbit/s Ethernet you would get 800 bytes per line, and
even uncompressed video would be possible.
A 10 ns bit time is a bit tight though... you need about 2.5 ns resolution
to hit the middle of each bit position.
Could be done in an FPGA.
It is such a wide area with so many possibilities, there is almost no end to it.
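
A quick check of that arithmetic (editor's sketch in C; the 25 fps, 625 lines, and raw bit rates are the figures assumed above, with Ethernet framing overhead ignored):

/* Compute the uncompressed per-frame and per-line byte budget for a raw
 * 10 or 100 Mbit/s link at 25 frames/s and 625 lines/frame. */
#include <stdio.h>

int main(void)
{
    const double fps = 25.0, lines = 625.0;
    const double rates[] = { 10e6, 100e6 };   /* bits per second */

    for (int i = 0; i < 2; i++) {
        double bits_per_frame  = rates[i] / fps;
        double bytes_per_frame = bits_per_frame / 8.0;
        double bytes_per_line  = bytes_per_frame / lines;
        printf("%3.0f Mbit/s: %.0f kbit/frame, %.0f kByte/frame, %.0f bytes/line\n",
               rates[i] / 1e6, bits_per_frame / 1e3,
               bytes_per_frame / 1e3, bytes_per_line);
    }
    return 0;
}

This prints 400 kbit/frame, 50 kByte/frame and 80 bytes/line for 10 Mbit/s, and ten times that for 100 Mbit/s, matching the figures above.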

#28 - December 8th 03, 08:14 AM - Jeff Walther

I want to thank all those who responded to my question--and if you're
seeing this thread only now, feel free to add to the discussion.

I have a wealth of references to check and new information to digest. I
could continue to respond post by post, with each new bit of info causing me
to think of three new specific questions, but I think I should go check
the recommended reading before I do that.

Thank you again and please continue to participate if there is more you
wish to share or discuss.

--
A friend will help you move. A real friend will help you move a body.
#29 - December 8th 03, 08:29 AM - Jeff Walther

In article , "David M. Palmer"
wrote:

With RAM from 1990 it took all sorts of double porting and fancy
circuitry to schedule the memory accesses and avoid flicker. Nowadays,
you just have the FPGA read a couple of pixels ahead, so that when the
68030 needs to access the RAM (an access the FPGA can complete in 20 ns
or so) you don't have to worry about hitting a tight timing window for
producing the next pixel.


This makes it sound as if one can predict where the host will next write
to memory. Does the host write video updates in an orderly, monotonic
fashion? I was guessing that it would just write updates for whichever
portion of the screen has changed, while leaving the unchanged bits
alone.

This would mean that the FPGA wouldn't necessarily be a couple of pixels
ahead of the update from the host; it would need to compare the host's
update with the current location of the pixel being displayed to know
whether the host write was ahead of or behind the displayed pixel.

You should check out the old Addison-Wesley books Apple put out.
There were some on designing hardware devices for the Mac, and I think
one of them had a section on video cards.


I have "Designing Cards and Drivers for the Macintosh Family" 3rd Edition
and the "Inside Macintosh" books. The former does indeed have a video
card example in it. The reason I asked for a recommended book here is
that the Apple book assumes that it is speaking to someone who understands
video cards and just needs the specifics of how to make a video card work
on a Mac.

I assume that you are doing this all for fun/education, because you
will certainly spend more than the $30 a Macintosh Color Classic will
cost on eBay. I assume you also know that you can buy video cards for
external monitors for the SE/30 (I used to have one).


Fun/education is what I stated in the post that started this thread, and
that's the case. The SE/30 is actually superior to the Color Classic in
several ways that probably look insignificant nowadays. But that's the
target. I do have a few video cards for the SE/30 for external
monitors. They occasionally turn up at the local Goodwill for $5.

However, thank you for the information, as it was possible that I did not
know that and it was kind and considerate of you to check.

Making something to interface to a Macintosh will be frustrating
because the Mac will already have ideas on how to do things. (Which
means that you have to dig through antique documentation to see how the
Mac wants to talk to a video card, and how you tell the Mac that your
video card is 640x480 8-bit color table 60Hz etc.).


That stuff is well documented, or appears well documented to my still
ignorant eye, in the Apple books that I have.

At the end of
this, you will know all the grungy interface details of a 15-year-old
computer that has less power than a $70 Palm.


True. That's why they call them hobbies... That's the fun side of this.

If you want to learn how to do video electronics, you may want to start
with a project which just does video.

How about implementing pong with an FPGA or a microprocessor (Google
for 'pic pong' for an example)? Implement it for a VGA or high-resolution
monitor so you can work out what to do with the various connector wires.
Make it accept a TV feed so that you have a pong ball overlaid on the TV
show, and can bounce the ball off objects on the screen. Doing all that
will teach you a lot, and then you will find it much easier to make a
video card for an old Mac, if you still want to.

That's just my advice.


Thank you for the advice. It is well taken. I'm going to stick with the
original project. While the specifics of the Mac video and bus interface
may be outdated today, I think that the learning process for the project
as a whole will be useful. There's a bit of a gap between having an EE
degree and knowing how to take that theoretical knowledge and turn it into
a real world product.

The pong project would be good, but I suspect there are many existing,
documented versions of that project available. It would be too easy to
short-circuit my learning process by wandering into a "reference" that
tells me exactly how to do everything. I want to go from spec to
product, with some advice on how to handle specific details. It may be an
outdated product, but I think it will still be valuable learning. But I
am happy to read countervailing arguments.

--
A friend will help you move. A real friend will help you move a body.
#30 - December 9th 03, 04:32 AM - Lawrence D'Oliveiro

"David M. Palmer" wrote:

With RAM from 1990 it took all sorts of double porting and fancy
circuitry to schedule the memory accesses and avoid flicker.


No, it didn't. 1987-vintage VRAM (as used in the "Toby" frame-buffer card
with the original Mac II) was quite good enough to deal with this. You
don't need a true double port; the second port is little more than a
shift register that feeds a continuous stream of bits to the D/A
converters to generate the video signal. This interface is designed for
fast, sequential, read-only access, which is less demanding than the
fast, random, read/write access required by the CPU interface.
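
As an editor's illustration of that arrangement (not from the thread; the row size is made up for a 512-pixel, 1-bit display), the serial port can be modelled as a row-wide shift register: a single row transfer loads it from the DRAM array, after which bits are clocked out sequentially toward the D/A converter while the random-access port stays free for the CPU.

/* Behavioural model (editor's sketch) of a VRAM-style serial port. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ROWS 342
#define ROW_BYTES 64            /* 512 pixels at 1 bit per pixel */

static uint8_t dram_array[ROWS][ROW_BYTES];   /* the ordinary DRAM core     */
static uint8_t shift_reg[ROW_BYTES];          /* serial-port shift register */
static int shift_pos = 0;

/* Row transfer: one DRAM cycle copies a whole row into the shift register. */
static void row_transfer(int row)
{
    memcpy(shift_reg, dram_array[row], ROW_BYTES);
    shift_pos = 0;
}

/* Serial clock: emit the next bit toward the D/A converter. */
static int serial_out(void)
{
    int byte = shift_pos / 8, bit = 7 - (shift_pos % 8);
    shift_pos = (shift_pos + 1) % (ROW_BYTES * 8);
    return (shift_reg[byte] >> bit) & 1;
}

int main(void)
{
    dram_array[0][0] = 0xA5;                  /* CPU-side random-access write */
    row_transfer(0);                          /* start scanning row 0         */
    for (int i = 0; i < 8; i++)
        printf("%d", serial_out());           /* prints 10100101              */
    printf("\n");
    return 0;
}

The point of the design is that the shift register only needs one row transfer per scanline, so the sequential read-out side barely competes with the CPU for the DRAM array.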

And when you had Mac models that didn't use proper VRAM, which meant
that both the video interface and the CPU had to share a single
memory port (as with the original compact Macs, and even with the later
IIci and IIsi models from 1989 and 1990 respectively), all that
happened was that the contention slowed the CPU down. It was not
in itself a source of any video artifacts.
 



