Fine Tuning Linux for backup/production coexistence.



 
 
#1 | January 31st 07, 11:23 PM | Todd | posted to comp.os.linux.networking, comp.arch.storage

Hi,

I am running a CentOS 4 box with 4 terabytes of data on it. It houses
most of my users' data, so it is constantly being used.


I'm running the Networker 7.3.2 client on the box, as well. I have
established a separate backup network successfully - all data is
backed up over this secondary network, and not over the production
network. The secondary, like the primary, is all GigE, so we're as
fast as we can go there.


Even though I've made this change, machine response is VERY slow
during a backup. The CPU(s) get thrashed, since the backup client has
a lot of calculations to do in order to perform a backup.

Sadly, there are no throttling options in Legato Networker, since it's
really designed to do a backup as quickly as possible. What I'd like
to be able to do is back up the system without (significantly)
impacting server performance.

Now granted, I try to run backups during non-business hours, but
sometimes that's impossible. I tried to /bin/nice the save processes
on the file server, but it did not make a significant difference.
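
A minimal sketch of that renice approach, plus the I/O-priority
variant; ionice assumes the CFQ I/O scheduler (kernel 2.6.13 or
later, so not the stock CentOS 4 kernel), and 'save' is the Networker
client process name:

    # Drop all running Networker save processes to the lowest CPU
    # priority
    for pid in $(pgrep save); do
        renice 19 -p "$pid"
    done

    # On kernels with CFQ (2.6.13+), also drop their I/O priority to
    # the idle class; not available on the stock CentOS 4 kernel
    for pid in $(pgrep save); do
        ionice -c 3 -p "$pid"
    done

CPU niceness alone often doesn't help here because the stalls come
from I/O contention rather than CPU scheduling, which is what the
idle I/O class is meant to address.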

Is it possible to improve the performance (without completely swapping
out the backplane)? Are there any /proc settings that might help?
Would bumping up the number of NFS processes help? How about the
wsize/rsize of the exported directories? Anything else anyone can
think of?
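
On the /proc and NFS side, a hedged sketch of the usual knobs; the
values are illustrative starting points, not numbers measured on this
workload, and the server/path names are placeholders:

    # Make the kernel start writeback sooner and buffer less dirty
    # data, so interactive I/O doesn't stall behind huge flushes
    # during a backup
    echo 10 > /proc/sys/vm/dirty_ratio
    echo 5  > /proc/sys/vm/dirty_background_ratio

    # Bump the kernel NFS server thread count (on CentOS/RHEL this is
    # RPCNFSDCOUNT in /etc/sysconfig/nfs; it can also be set by hand)
    rpc.nfsd 32

    # On the NFS clients, larger rsize/wsize cuts per-RPC overhead;
    # 32k is a common choice for 2.6-era kernels
    mount -o rsize=32768,wsize=32768 server:/export /mnt/data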

If anyone has any thoughts, I would really appreciate it. I'm at a
loss, and I don't even see the advantage to a backup network now,
since my network is STILL impacted when a backup is running.



- Thanks,


Todd

#2 | February 1st 07, 06:13 AM | Raju Mahala | posted to comp.arch.storage

On Feb 1, 3:23 am, "Todd" wrote:
[snip]

I feel the network must not be the bottleneck. In most cases the
limits are disk read/write, and memory for performing the comparison
in the case of an incremental backup. I am not sure about the inner
workings of Legato Networker, but logically, for an incremental
backup it compares the current data against its database to find the
changed data, and that requires a lot of memory. So I suggest keeping
an eye on the RAM utilization of the server. Another workaround may
be to put the Networker client on another server (let's say, call it
a scan server) which can access the user data over NFS and take the
backup from there, as sketched below. In that case the heavy
comparison work moves off your primary storage server.
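
A hedged sketch of that scan-server layout; the host and path names
are placeholders. The export can be read-only, since a backup only
needs to read, and no_root_squash lets the root-owned save process
read every file:

    # On the storage server, in /etc/exports: export the data
    # read-only to the scan host alone ('scanhost' is a placeholder)
    /export/userdata  scanhost(ro,no_root_squash)

    # On the scan server: mount it and point the Networker client
    # at the mount point
    mount -t nfs storage:/export/userdata /mnt/userdata

The trade-off is that the storage server still serves every byte over
NFS during the backup, so its disks and network are not idle; only
the Networker scanning and comparison work moves to the scan host.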

#3 | February 3rd 07, 08:18 PM | Moojit | posted to comp.os.linux.networking, comp.arch.storage

"Todd" wrote in message
oups.com...
Hi,

I am running a CentOS 4 box with 4 terabytes of data on it. It houses
most of my users' data, so it is constantly being used.


I'm running the Networker 7.3.2 client on the box, as well. I have
established a seperate backup network successfully - all data is
backed up over this secondary network, and not over the production
network. The secondary, like the primary, is all GigE, so we're as
fast as we can go, there.


Even though I've made this change, machine response is VERY slow
during a backup. The CPU(s) get thrashed, since it has a lot of
calculations to do in order to perform a backup.


GigE is good, but GigE w/ TOE is better; it will offload the
processor. If the backup application is the culprit, there's nothing
you can do except ensure that your system has adequate memory.
Monitor the page faults if possible; if they're excessive, more RAM
may resolve the sluggishness. A way to watch paging is sketched
below.
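
A minimal sketch, assuming the sysstat package is installed; field
names vary a little between sysstat versions:

    # System-wide paging activity, 5-second samples for a minute
    # during a backup; a sustained majflt/s column is the sign that
    # the box is short on RAM
    sar -B 5 12

    # Without sysstat, vmstat's si/so columns show swap-in/swap-out
    # pressure
    vmstat 5 12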

Da Moojit




#4 | February 5th 07, 04:01 PM | Thor Lancelot Simon | posted to comp.os.linux.networking, comp.arch.storage

In article , Moojit wrote:
[snip]

GigE is good, but GigE w/ TOE is better; it will offload the
processor.


Do you have any rational reason to believe that a gigabit of TCP
throughput will saturate a modern processor?

The dirty little secret about "TOE" is that it often _slows things down_.
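
Before spending on TOE hardware, it may be worth checking whether TCP
processing is actually where the CPU time goes. A hedged sketch, with
eth1 standing in for the backup-network interface:

    # Most GigE NICs already offload checksums and TCP segmentation
    # in hardware without full TOE; see what's enabled
    ethtool -k eth1

    # Watch the CPU breakdown during a backup: high %iowait points
    # at the disks and filesystem, not the network stack
    mpstat 5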

--
Thor Lancelot Simon
"All of my opinions are consistent, but I cannot present them all
at once." -Jean-Jacques Rousseau, On The Social Contract
#5 | February 5th 07, 07:09 PM | Raju Mahala | posted to comp.arch.storage


Moojit wrote:
[snip]
Monitor the page faults if possible; if they're excessive, more RAM
may resolve the sluggishness.


Can you suggest how to find out the page fault rate? By sar, or are
there some other tools?

RajuMahala
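
One possible answer, as a hedged sketch: sar (from the sysstat
package) reports system-wide fault rates, and ps on procps of this
era can show per-process fault counts:

    # System-wide page fault rates, 5-second samples
    sar -B 5

    # Cumulative minor/major fault counts for the Networker save
    # processes
    ps -o pid,min_flt,maj_flt,args -C save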

 



