A computer components & hardware forum. HardwareBanter


RAM Slack?



 
 
  #1  
Old February 14th 18, 05:37 PM posted to alt.comp.hardware.pc-homebuilt
Davej
Default RAM Slack?

I have been reading some stuff about digital forensics and have seen several comments about "RAM slack," where supposedly some old OS versions were so idiotic that chunks of RAM larger than the file being saved were written to disk. To me this sounds utterly absurd.
  #2  
Old February 14th 18, 06:20 PM posted to alt.comp.hardware.pc-homebuilt
Paul[_28_]
Default RAM Slack?

Davej wrote:
> I have been reading some stuff about digital forensics
> and have seen several comments about "RAM slack," where
> supposedly some old OS versions were so idiotic that
> chunks of RAM larger than the file being saved
> were written to disk. To me this sounds utterly absurd.


The disk drive writes sectors, not bytes. Some utilities
"do the right thing" within those boundaries, by supporting
read-modify-write on sub-sector sized chunks.
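A sub-sector write has to be emulated in software. Here's a minimal Python sketch of the read-modify-write idea; the 512-byte sector size and the demo file name are assumptions for illustration, and data that spans a sector boundary would need a loop:

```python
SECTOR = 512  # assumed sector size, for illustration only

def write_subsector(f, offset, data):
    """Emulate a sub-sector write via read-modify-write:
    read the whole sector, patch the target bytes, write it back."""
    start = (offset // SECTOR) * SECTOR          # sector-aligned start
    f.seek(start)
    sector = bytearray(f.read(SECTOR).ljust(SECTOR, b"\x00"))
    rel = offset - start
    sector[rel:rel + len(data)] = data           # modify only the target bytes
    f.seek(start)
    f.write(sector)                              # write the full sector back

# usage: patch 4 bytes mid-sector without clobbering the rest
with open("demo.bin", "w+b") as f:
    f.write(b"\xAA" * SECTOR)
    write_subsector(f, 100, b"test")
    f.seek(0)
    out = f.read(SECTOR)
```

The surrounding 0xAA bytes survive the patch, which is exactly the "do the right thing within sector boundaries" behavior described above.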

The file system deals in clusters: two files cannot occupy the
same cluster, so allocating in clusters wastes space at the
end of a file's last cluster. If a file is 2KB and a cluster is 32KB,
that's 30KB "wasted" at the end, with somewhat random
information potentially in that tail.

The details are hidden, in the sense that the file system
keeps length information in units of bytes, and as long
as the file system doesn't read past the logical end of
a file, everything is OK.
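Paul's slack arithmetic is easy to put in code. A small sketch, assuming the 32KB cluster size from his example:

```python
def slack_bytes(file_size, cluster_size=32 * 1024):
    """Bytes wasted at the end of the last cluster of a file."""
    if file_size == 0:
        return 0  # many file systems allocate no cluster for an empty file
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder

# Paul's example: a 2 KB file in a 32 KB cluster wastes 30 KB
waste = slack_bytes(2 * 1024)
```

Only the slack past the logical end-of-file can hold leftover data; a file that exactly fills its clusters has none.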

But if you're looking at a partition with a hex editor,
you can see everything, including the bits you're not
supposed to be able to see. When reading at the
physical layer (the layer below the file system),
you see it all.

In one case, Microsoft Office may have made things worse
by rounding something at the program level to sector-sized
chunks, which left an undefined area at the end of
the file. A file-reading utility could see the leftovers
at the end, stuff that Office would ignore the next
time it read the file in. That's application-level mischief
and not as common. If the RAM buffer Microsoft had been
using was zeroed before use, you would not have noticed
anything was amiss. (I think when this issue was announced
many years ago, I checked at work and could see garbage at
the end of a file I examined.) Perhaps this was also an issue
if you sent a Word document as an email attachment - the person
at the other end would get that garbage at the end too, a
potential information leak. So yeah, that was very, very bad.

Paul
  #3  
Old February 15th 18, 05:17 AM posted to alt.comp.hardware.pc-homebuilt
Loren Pechtel[_2_]
Default RAM Slack?

On Wed, 14 Feb 2018 09:37:01 -0800 (PST), Davej wrote:

> I have been reading some stuff about digital forensics and have seen several comments about "RAM slack," where supposedly some old OS versions were so idiotic that chunks of RAM larger than the file being saved were written to disk. To me this sounds utterly absurd.


Disks **are** written in whole sectors. You can't write less than a sector.

All you can do is ensure that the extra space is wiped.
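Wiping the extra space amounts to zero-filling the final partial sector before it is written. A minimal sketch, assuming 512-byte sectors:

```python
SECTOR = 512  # assumed sector size

def pad_to_sector(data: bytes) -> bytes:
    """Round a buffer up to a whole number of sectors,
    filling the tail with zeros instead of leftover RAM contents."""
    remainder = len(data) % SECTOR
    if remainder == 0:
        return data
    return data + b"\x00" * (SECTOR - remainder)

padded = pad_to_sector(b"hello")
# the payload survives and the slack region is all zeros
```

Anything the drive must receive beyond the real payload is then known-zero rather than whatever happened to be in the buffer.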
  #4  
Old February 15th 18, 05:17 AM posted to alt.comp.hardware.pc-homebuilt
Loren Pechtel[_2_]
Default RAM Slack?

On Wed, 14 Feb 2018 13:20:02 -0500, Paul wrote:

> In one case, Microsoft Office may have made things worse
> by rounding something at the program level to sector-sized
> chunks, which left an undefined area at the end of
> the file. A file-reading utility could see the leftovers
> at the end, stuff that Office would ignore the next
> time it read the file in. That's application-level mischief
> and not as common. If the RAM buffer Microsoft had been
> using was zeroed before use, you would not have noticed
> anything was amiss. (I think when this issue was announced
> many years ago, I checked at work and could see garbage at
> the end of a file I examined.) Perhaps this was also an issue
> if you sent a Word document as an email attachment - the person
> at the other end would get that garbage at the end too, a
> potential information leak. So yeah, that was very, very bad.


Unless the file is to be distributed it doesn't even strike me as
mischief. I've got a program I wrote that may very well have this
"bug" in it. One file consists of millions of records that are
accessed semi-randomly. All I/O is done in chunks of records sized
to fill (IIRC) 4K pages, and the next chunk of records starts at
the next page, leaving a bit of slack. (This is actually more to
keep writes from spilling across pages.)
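Loren's layout, where each chunk of records starts on the next page, comes down to rounding offsets up to a page boundary. A sketch of that calculation; the 4K page size and the chunk lengths are illustrative, not taken from his program:

```python
PAGE = 4096  # assumed page size

def next_page_offset(offset: int) -> int:
    """Round an offset up to the next page boundary,
    so the following chunk never straddles a page."""
    return -(-offset // PAGE) * PAGE  # ceiling division

# lay out hypothetical chunks page by page
offsets = []
pos = 0
for chunk_len in (3000, 4096, 100):   # made-up chunk sizes in bytes
    offsets.append(pos)
    pos = next_page_offset(pos + chunk_len)
```

The gap between the end of one chunk and the next page boundary is exactly the slack being discussed.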

I can't see it being a leak, though--the file is useless on any
machine other than the one that created it so nobody would ever send
it to someone.
  #5  
Old February 15th 18, 07:28 AM posted to alt.comp.hardware.pc-homebuilt
Paul[_28_]
Default RAM Slack?

Loren Pechtel wrote:
> On Wed, 14 Feb 2018 13:20:02 -0500, Paul wrote:
>
>> In one case, Microsoft Office may have made things worse
>> by rounding something at the program level to sector-sized
>> chunks, which left an undefined area at the end of
>> the file. A file-reading utility could see the leftovers
>> at the end, stuff that Office would ignore the next
>> time it read the file in. That's application-level mischief
>> and not as common. If the RAM buffer Microsoft had been
>> using was zeroed before use, you would not have noticed
>> anything was amiss. (I think when this issue was announced
>> many years ago, I checked at work and could see garbage at
>> the end of a file I examined.) Perhaps this was also an issue
>> if you sent a Word document as an email attachment - the person
>> at the other end would get that garbage at the end too, a
>> potential information leak. So yeah, that was very, very bad.
>
> Unless the file is to be distributed it doesn't even strike me as
> mischief. I've got a program I wrote that may very well have this
> "bug" in it. One file consists of millions of records that are
> accessed semi-randomly. All I/O is done in chunks of records sized
> to fill (IIRC) 4K pages, and the next chunk of records starts at
> the next page, leaving a bit of slack. (This is actually more to
> keep writes from spilling across pages.)
>
> I can't see it being a leak, though--the file is useless on any
> machine other than the one that created it so nobody would ever
> send it to someone.


I would fix that if I were you.

What I do is review my handiwork with a hex editor,
testing as I go, to verify that I'm not making a mess.

https://mh-nexus.de/en/hxd/

Initializing a 4K buffer should not be expensive (at
least, compared to the speed of modern disk drives).

Or fill the tail of the buffer just before you do the write.

One reason for "keeping output clean" is actually to make
inspection with the hex editor easier.

Also, if you ever have to do data recovery on a damaged
image, you'll appreciate not having to "swim through crap".
The cleaner you keep things, the more downstream benefits
there will be.
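Paul's habit of checking output in a hex editor can also be approximated in-script. This crude hex dump (offset, hex bytes, printable ASCII) makes a dirty tail easy to spot; it is a generic illustration, not tied to any particular file format:

```python
def hexdump(data: bytes, width: int = 16) -> str:
    """Minimal hex dump: offset, hex bytes, printable ASCII."""
    lines = []
    for i in range(0, len(data), width):
        chunk = data[i:i + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{i:08x}  {hexpart:<{width * 3}} {text}")
    return "\n".join(lines)

# a zero-padded tail shows up as an obvious run of 00 bytes
dump = hexdump(b"data" + b"\x00" * 12)
```

A clean record ends in a visible run of `00` bytes; leftover buffer contents stand out immediately.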

Paul
  #6  
Old February 17th 18, 09:44 PM posted to alt.comp.hardware.pc-homebuilt
Loren Pechtel[_2_]
Default RAM Slack?

On Thu, 15 Feb 2018 02:28:56 -0500, Paul wrote:

> I would fix that if I were you.
>
> What I do is review my handiwork with a hex editor,
> testing as I go, to verify that I'm not making a mess.
>
> https://mh-nexus.de/en/hxd/
>
> Initializing a 4K buffer should not be expensive (at
> least, compared to the speed of modern disk drives).
>
> Or fill the tail of the buffer just before you do the write.
>
> One reason for "keeping output clean" is actually to make
> inspection with the hex editor easier.
>
> Also, if you ever have to do data recovery on a damaged
> image, you'll appreciate not having to "swim through crap".
> The cleaner you keep things, the more downstream benefits
> there will be.


The thing is, the whole file is a cache--without the file system
that created it, it is pure crap. And since the real data is tens
of megabytes of effectively random hex data, nobody's going to be
inspecting it.
  #7  
Old February 17th 18, 11:42 PM posted to alt.comp.hardware.pc-homebuilt
Paul[_28_]
Default RAM Slack?

Loren Pechtel wrote:
> On Thu, 15 Feb 2018 02:28:56 -0500, Paul wrote:
>
>> I would fix that if I were you.
>>
>> What I do is review my handiwork with a hex editor,
>> testing as I go, to verify that I'm not making a mess.
>>
>> https://mh-nexus.de/en/hxd/
>>
>> Initializing a 4K buffer should not be expensive (at
>> least, compared to the speed of modern disk drives).
>>
>> Or fill the tail of the buffer just before you do the write.
>>
>> One reason for "keeping output clean" is actually to make
>> inspection with the hex editor easier.
>>
>> Also, if you ever have to do data recovery on a damaged
>> image, you'll appreciate not having to "swim through crap".
>> The cleaner you keep things, the more downstream benefits
>> there will be.
>
> The thing is, the whole file is a cache--without the file system
> that created it, it is pure crap. And since the real data is tens
> of megabytes of effectively random hex data, nobody's going to be
> inspecting it.


Humans like patterns.

If you delineate items with a tail of zeros, it'll be
easier to spot the pattern. Your file will also be
easier to compress, if it ever needs to be
sent over the network to you for analysis. That's a
side benefit of "whitening" those areas.
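The compression benefit is easy to demonstrate with zlib. In this sketch the payload size and page size are arbitrary; the point is that a zero-filled tail compresses far better than leftover random data:

```python
import os
import zlib

PAGE = 4096
payload = os.urandom(1000)   # stand-in for 1000 bytes of real record data

clean = payload + b"\x00" * (PAGE - len(payload))   # zeroed ("whitened") slack
dirty = payload + os.urandom(PAGE - len(payload))   # leftover "RAM slack"

clean_size = len(zlib.compress(clean))
dirty_size = len(zlib.compress(dirty))
# the zero-tailed page compresses far better than the dirty one
```

The zero run costs almost nothing after compression, while random leftovers are essentially incompressible.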

I expect you're doing some kind of memory-mapped I/O and
that's why you like the 4K value. On a lot of disk drives,
the physical "quantum" is 512 bytes (a 512e or 512n drive),
so you could use a slightly smaller quantum if only the hard
drive side of things mattered. I don't use memory-mapped I/O,
so I don't know whether the page size is a big win
there or not. All I know of memory-mapped I/O is that it
can misbehave with really large files (I've seen some slow
I/O because of it). This may have something to do with
how the system deals with garbage collection or something,
but because I don't understand the mechanism, I expect
you know all these details and can explain them.
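For reference, memory-mapped I/O of the kind Paul is guessing at looks like this in Python; the file name and record contents are made up for the example. Mapping works in whole pages, which is one reason 4K-aligned records suit it:

```python
import mmap

PAGE = mmap.PAGESIZE  # usually 4096

# create a two-page file, then map it and write a record at a page boundary
with open("cache.bin", "w+b") as f:
    f.truncate(2 * PAGE)               # file must cover the mapped range
    with mmap.mmap(f.fileno(), 2 * PAGE) as m:
        m[PAGE:PAGE + 6] = b"record"   # lands exactly at the second page
        m.flush()                      # push dirty pages out to the file
    f.seek(PAGE)
    head = f.read(6)
```

Page-aligned records mean each write dirties exactly one page, which is the behavior Loren describes wanting.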

Paul
 



