A computer components & hardware forum. HardwareBanter



Leaving Dell Dimension 8300 running 24/7 ...?



 
 
  #11  
Old February 13th 05, 12:31 PM
Hank Arnold
external usenet poster
 
Posts: n/a
Default

1. The system is perfectly suited for staying on 24/7. Whether it's better
to keep it on vs. powering it off is an ongoing (decades, now) debate with
no clear winner... Do what is best for your situation.

2. Short answer is "Yes". However, the reality these days is that it can take
as little as 15-20 minutes (or less) for an unprotected system on the
internet to become infected with some kind of malware. Having the
firewall from SP1 is, I guess, better than nothing, but not by much. To
protect your system you should, at a minimum:
- Install a software firewall (a real one).
- Install Spyware detection programs. Several of them.

I'd also recommend you get a hardware router for the additional protection.

--
Regards,
Hank Arnold

"Thomas G. Marshall" . com
wrote in message news:d%rPd.31607$W16.29973@trndny07...

(XP SP1 / Dim 8300 / 3.0 GHz / 800 MHz FSB / 512 meg / bla bla...)

I seem to be getting a virus here and there found by NAV2003 that makes
its way in through the auto-protect. In this case I got a couple of circa
2003 Trojan.ByteVerifies. Don't know how the heck such a simple file can
land on my system, particularly since it's such a well understood virus.

I am considering leaving the system on 24/7 and establishing a daily viral
sweep.

Questions:

1. Is the 8300 cooled enough or otherwise built for staying on? I'll of
course use power options to shutdown unnecessary things and brown down
perhaps the motherboard or something. I'll have to learn more about
this---I'm half ignorant on all things power control except for
hibernation.

2. Am I leaving myself open statistically to more infection simply by
staying on? I'm running SP1's firewall. SP2 is not an option currently
because of software incompatibilities.

3. Any thoughts on what I might have to worry about, in general and/or
specifically to the 8300?

Thanks!

--
"Gentlemen, you can't fight in here! This is the War Room!"




  #12  
Old February 13th 05, 02:44 PM
Thomas G. Marshall
external usenet poster
 
Posts: n/a
Default

Roger Wilco coughed up:
"Thomas G. Marshall"
. com wrote in
message news:d%rPd.31607$W16.29973@trndny07...

(XP SP1 / Dim 8300 / 3.0 GHz / 800 MHz FSB / 512 meg / bla bla...)

I seem to be getting a virus here and there found by NAV2003 that
makes its way in through the auto-protect. In this case I got a
couple of circa 2003 Trojan.ByteVerifies. Don't know how the heck
such a simple file can land on my system, particularly since it's
such a well understood virus.


I well understand that it is not a virus. It is, as stated, a
"trojan". It is an exploit trojan (if your Java is up to date - no
worries) that gets downloaded as a result of normal browsing (to
evidently untrustworthy sites).



Ok. As an aside though, I no longer go through the effort to
conversationally differentiate between the various bad thangs, so long as I
identify any particular one by its proper NAV or McAV or KAV name.

"Virus", right or wrong, has conversationally become an umbrella term.

*Thanks* though. Your (and Kurt Wismer's) point underscores the fact that
this trojan is not as harmful as I might otherwise have thought.


--
Whyowhydidn'tsunmakejavarequireanuppercaseletterto startclassnames....


  #13  
Old February 13th 05, 02:51 PM
Thomas G. Marshall
external usenet poster
 
Posts: n/a
Default

User N coughed up:
"Thomas G. Marshall"
. com wrote in
message news:d%rPd.31607$W16.29973@trndny07...

1. Is the 8300 cooled enough or otherwise built for staying on?


If the machine is operating properly, free of cooling obstructions
(dust/lint), and operated in a room that is within environmental
requirements, it should be fine.


Fair enough. Is this advice specific to the 8300 though? Some machines are
not configured properly for internal airflow. Some of the earlier
Dimensions (my IT guy pointed out once) were known for not bringing enough
air past the default HD bay and the memory. Apparently in the memory case,
it was because the CPU heat sink was upstream. {shrug}.



I'll of course use power options to shutdown unnecessary things and brown
down perhaps the motherboard or something. I'll have to learn more about
this---I'm half ignorant on all things power control except for
hibernation.


There are a lot of net discussions regarding the pros/cons of leaving on
vs turning off. A google web and/or groups search (keywords: leave
computer on turn off) would be worth performing.



Always do---good advice. Doesn't replace a usenet discussion (nor should
it). Virtual *talking* with you all is by far the most informative.

....[rip]...


BTW, Microsoft's Baseline Security Analyzer can be a useful tool:

http://www.microsoft.com/technet/sec.../mbsahome.mspx


Perfect! Thanks for that!



Its MS newsgroup is: microsoft.public.security.baseline_analyzer




--
Whyowhydidn'tsunmakejavarequireanuppercaseletterto startclassnames....


  #14  
Old February 13th 05, 02:55 PM
Thomas G. Marshall
external usenet poster
 
Posts: n/a
Default

coughed up:
On Sat, 12 Feb 2005 23:09:22 GMT, ben_myers_spam_me_not @ charter.net
(Ben Myers) wrote:


6. The debate about leaving a computer powered up 24/7 or powered
down when not in use centers around wear-and-tear. Those who prefer
to leave a computer up 24/7 point to the wear-and-tear on system
electronics due to the zero-to-60 effect of a sudden surge of
current after a total absence of power. Those who prefer to power
down a computer point to the wear-and-tear of the bearings on
rotating motors, notably fans and the hard drives. For me, the hard
drive AND its contents are the most important part of my system,
even with regular backups. I can always replace a blown power
supply, motherboard, CD-ROM drive, memory, or ANY other part of a
computer. But I cannot replace the data. So I am in the
power-it-down camp... Ben Myers


For many, many years I worked for a company with thousands of
computers ranging from PCs up to high performance multi-processor
systems. The admin systems were switched off every night. The
development systems were left on all the time. The systems that were
never switched off had a much lower failure rate than the ones that
were switched off and on daily. The failure rate of hard drives was
also greatest in the systems that were switched off every day.


Clarification: You say "much lower" failure rate. Is this accurate, or
would you say it is more like "lower", sans the superlative?

BTW, your empirical evidence like this is incredibly useful--- *thanks* !



So I'm in the leave it switched on camp.




--
Whyowhydidn'tsunmakejavarequireanuppercaseletterto startclassnames....


  #15  
Old February 13th 05, 03:10 PM
Roger Wilco
external usenet poster
 
Posts: n/a
Default


ben_myers_spam_me_not @ charter.net (Ben Myers) wrote in message
...

6. The debate about leaving a computer powered up 24/7 or powered down
when not in use centers around wear-and-tear. Those who prefer to leave a
computer up 24/7 point to the wear-and-tear on system electronics due to
the zero-to-60 effect of a sudden surge of current after a total absence
of power. Those who prefer to power down a computer point to the
wear-and-tear of the bearings on rotating motors, notably fans and the
hard drives.


IIRC the maximum wear on the hard drive is during spinup and warmup.
Less, not more, wear occurs if it's left running.

....but that's for another group.


  #16  
Old February 13th 05, 03:15 PM
Roger Wilco
external usenet poster
 
Posts: n/a
Default


wrote in message
news
On Sat, 12 Feb 2005 23:09:22 GMT, ben_myers_spam_me_not @ charter.net
(Ben Myers) wrote:


6. The debate about leaving a computer powered up 24/7 or powered down
when not in use centers around wear-and-tear. Those who prefer to leave a
computer up 24/7 point to the wear-and-tear on system electronics due to
the zero-to-60 effect of a sudden surge of current after a total absence
of power. Those who prefer to power down a computer point to the
wear-and-tear of the bearings on rotating motors, notably fans and the
hard drives. For me, the hard drive AND its contents are the most
important part of my system, even with regular backups. I can always
replace a blown power supply, motherboard, CD-ROM drive, memory, or ANY
other part of a computer. But I cannot replace the data. So I am in the
power-it-down camp... Ben Myers


For many, many years I worked for a company with thousands of
computers ranging from PCs up to high performance multi-processor
systems. The admin systems were switched off every night. The
development systems were left on all the time. The systems that were
never switched off had a much lower failure rate than the ones that
were switched off and on daily. The failure rate of hard drives was
also greatest in the systems that were switched off every day.

So I'm in the leave it switched on camp.


Yes, the "zero to sixty" applies to motors and bearings as well as many
electronic parts. Your anectdotal evidence backs this up.


  #17  
Old February 13th 05, 03:23 PM
Roger Wilco
external usenet poster
 
Posts: n/a
Default


"Hank Arnold" wrote in message
...
1. The system is perfectly suited for staying on 24/7. Whether it's better
to keep it on vs. powering it off is an ongoing (decades, now) debate with
no clear winner... Do what is best for your situation.

2. Short answer is "Yes". However, reality these days is that it can take
as little as 15-20 minutes (or less) for an unprotected system on the
internet to become infected with some kind of malware.... Having the
firewall from SP1 is, I guess, better than nothing. But not by much. To
protect your system you should, at a minimum:
- Install a software firewall (a real one).


Almost a contradiction in terms - unless you are talking about something
like this:
http://www.smoothwall.org/

A "real" firewall is a separate device and not an application running on
the "protected" machine.


  #18  
Old February 13th 05, 04:15 PM
external usenet poster
 
Posts: n/a
Default

On Sun, 13 Feb 2005 14:55:50 GMT, "Thomas G. Marshall"
. com wrote:

coughed up:


For many, many years I worked for a company with thousands of
computers ranging from PCs up to high performance multi-processor
systems. The admin systems were switched off every night. The
development systems were left on all the time. The systems that were
never switched off had a much lower failure rate than the ones that
were switched off and on daily. The failure rate of hard drives was
also greatest in the systems that were switched off every day.


Clarification: You say "much lower" failure rate. Is this accurate, or
would you say it is more like "lower", sans the superlative?


Yes, it was "much lower". Computers that were left on 24/7 hardly
ever failed.


--
Steve Wolstenholme Neural Planner Software

EasyNN-plus. The easy way to build neural networks.
http://www.easynn.com
  #19  
Old February 13th 05, 06:09 PM
w_tom
external usenet poster
 
Posts: n/a
Default

Whether the heatsink was first or last in-line makes little
difference - it means only single-digit degrees. Case air flow
is mostly hyped by those who did not first learn the numbers -
numbers that must come from theory AND be confirmed by
experimentation. These are requirements as taught in junior
high school science.

A serious complication in airflow that causes heat problems is
dead space. Almost every component is cooled sufficiently by an
air flow so slight that your hand cannot detect it. The
difference between that airflow and dead space is a massive
increase in component temperature. Too often, without first
learning these basics, some will demand "more fans". One
80 mm fan of standard CFM is more than sufficient airflow through
a chassis.

But what makes it sufficient? That one fan is sufficient even
when room temperature is 100 degrees F. If your computer is
crashing due to heat, the solution is not more fans or relocating
a heatsink. The solution to hardware failure is heating the
suspect component with a hairdryer on high to find and remove the
100% defective hardware. Heat is not a problem in a chassis
with one 80 mm fan. And heat is a diagnostic tool to locate
defective components.

Again, with only one 80 mm fan, that system should operate
just fine in a 100 degree F room. Why more fans for a system
in a 70 degree room? Junk science reasoning.

The IT guy's conclusion was correct ... as long as we don't
apply numbers. Apply numbers: those few degrees of temperature
increase make no difference. IOW, without numbers, junk science
conclusions are easily assumed. That is the benchmark between
myth purveyors and those from the world of reality - one who
cannot provide the numbers is most often from the junk science
world. A few degrees of temperature difference means virtually
nothing to heatsink cooling, where tens of degrees are being
discussed, and where the critically necessary air flow is so
gentle as to not be detectable by a human hand.

"Thomas G. Marshall" wrote:
Fair enough. Is this advice specific to the 8300 though? Some
machines are not configured for internal air travel properly. Some
of the earlier dimensions (my IT guy pointed out once) were known
for not bringing enough air by the default HD bay, and memory.
Apparently in the memory case, it was because the CPU heat sink
was upstream. {shrug}.
...

  #20  
Old February 13th 05, 06:18 PM
w_tom
external usenet poster
 
Posts: n/a
Default

We would demonstrate this 24/7 solution as a myth and show why
they jumped to erroneous conclusions. Let's take fans as an
example. Why does a fan fail? Power-on surge? Myth. Unless the
person has performed a forensic analysis, he is only wildly
speculating that power-on caused the failure. Once we learned the
underlying facts, the '24/7 to preserve life expectancy' myth was
exposed.

Again, that fan. What causes it to fail? Hours of operation
cause bearing wear, dust buildup, and so-called 'power cycling'
damage. What is 'power cycling'? The number of times circuits
turn off and on. IOW, the fan that runs constantly is exposed to
far more power cycles, because it power cycles so often while on.

They ran the machines 24/7. Then, when the machines were
powered off, those machines did not restart. Does that prove that
turning machines off causes failure? Wrong. Failure from
excessive wear most often appears on startup. And when do fans
with too many hours most often fail? When powered on. Therefore
technicians *assumed* startup was destructive rather than first
learning *why* the failure occurred. Failures at power-up were
repeatedly traced to hours of operation. Excessive wear due to
leaving a machine always on was being misrepresented by
technicians who did not first learn the facts. They did not first
discover why the failure happened; they jumped to wild conclusions.

Why did that fan not start? The bearing was so worn from 24/7
operation that it could not restart after one power-off.

We know routinely that power cycling has minimal adverse
effect on electronics and their mechanical devices (ie fans).
Manufacturers say the same in their detailed spec sheets. That's
two sources - real world experience AND manufacturer data. Some
devices do have power cycling limits, but that means they fail 15
to 39 years later if power cycled 7 times every day. Who cares
after 15 years?

The best thing one can do for computer life expectancy is to
turn the system off (or put it to sleep or hibernate it) when
done. The 'never turn it off' myth comes from those who only see
when a failure happens and never learned why it happens. Without
underlying facts, those who advocate 'leave it on' demonstrate
why statistics without sufficient underlying facts produce lies.

The most wear and tear on computers clearly occurs during
excessive hours of operation. That even includes 'wear and tear'
inside the CPU; the CPU is constantly power cycling only while
running.

Power cycling can create failure. But then we apply numbers:
power cycling seven times every day should cause component
failure in as soon as 15 years. They are correct about the
destructive nature of power cycling until the numbers are
applied. After 15 years, who cares? Furthermore, startup problems
are often created by damage from too many hours of operation.
This became obvious once we dug into the technicians' claims -
and exposed the facts they never first learned.
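The "15 to 39 years" arithmetic above is easy to check: divide a part's rated power-cycle count by cycles per day. The cycle ratings below are hypothetical figures chosen to reproduce that range, not numbers from any manufacturer's spec sheet:

```python
# Sketch of the power-cycling arithmetic: if a part tolerates only a
# fixed number of on/off cycles, how many years until the limit is
# reached at a given duty pattern? Cycle ratings here are hypothetical
# illustrations, not manufacturer data.

def years_to_cycle_limit(rated_cycles: int, cycles_per_day: int) -> float:
    """Years of use before the rated power-cycle count is exhausted."""
    return rated_cycles / (cycles_per_day * 365.0)

# At 7 power cycles per day, a ~38,000-cycle part lasts about 15 years
# and a ~100,000-cycle part about 39 years -- the range quoted above.
print(round(years_to_cycle_limit(38_000, 7), 1))   # ~14.9
print(round(years_to_cycle_limit(100_000, 7), 1))  # ~39.1
```

The point of the exercise: even a real cycle limit only matters on a timescale far beyond the useful life of the machine.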

"Thomas G. Marshall" wrote:
Clarification: You say "much lower" failure rate. Is this accurate, or
would you say it is more like "lower", sans the superlative?

BTW, your empirical evidence like this is incredibly useful--- *thanks*

 



