Blocksize and Compression on 2550 TBU

jason andrade jason at dstc.edu.au
Sun Aug 25 18:55:01 CDT 2002


On Sun, 25 Aug 2002, Ken Rice wrote:

> Box is a PE 2550, (Tag B694811)
>
> The TBU is:
> Vendor: ARCHIVE   Model: Python 06408-XXX  Rev: 8160
> Type:   Sequential-Access                  ANSI SCSI revision: 03

i am guessing this is a DDS4 drive (20G native, nominally 40G with hardware compression).

> Data to be backed up: ~60GB, averaging 20,000 bytes per file; the files are a mix of JPGs
> and very small HTMLs/XMLs.
> FileSystem: Currently Ext2, but soon to be a reiserfs.

you could also consider ext3 or xfs.

> I'm currently doing a backup with:
> cd "dirtobebackedup"
> star -cv bs=32k -f /dev/nst0 .
>
> 1. Is that blocksize proper, or should it not even be specified?
> (Tried www.linuxtapecert.org, no help for this drive)

i can't see it hurting. i think the default bs is 10k.
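
if you want to see what blocking the drive is actually using, something like
this should work (untested here; assumes the mt from the mt-st package and
that the drive really is /dev/nst0 as in your command):

  # show current block size, density and drive status
  mt -f /dev/nst0 status

  # either fix the drive's block size at 32k to match bs=32k in star,
  # or set 0 for variable-block mode and let star's blocking decide
  mt -f /dev/nst0 setblk 32768
  mt -f /dev/nst0 setblk 0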

> Also, could this enable me to possibly back up all 60GB of data to one tape, although I
> know I'm dealing with JPGs?

it would depend on how well the jpegs compress, but i doubt it.  with 20G of
native capacity on the tape, the 60G of data would have to compress to roughly
1/3 of its size, and jpegs are already compressed so they gain very little from
the drive's hardware compression.
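
if you want a rough idea of how well the data really compresses, a quick test
with gzip over a representative directory is close enough (gzip is not the
same algorithm as the drive's hardware compression, and the path below is just
a placeholder):

  # uncompressed size, in bytes
  du -sb /data/images

  # approximate compressed size, in bytes
  tar cf - /data/images | gzip -c | wc -c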

> 3. Is hardware compression used by default, without having to be specified?

i believe it's enabled, yep.
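
if you don't want to rely on the default you can set it explicitly with mt
(assuming mt-st again; some DAT drives want 'datcompression' rather than
'compression'):

  # turn hardware compression on (1) or off (0)
  mt -f /dev/nst0 compression 1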


I'm not familiar with star, but here are some general points about the backup that
might be useful.


o you could split the full backup across two tapes if you can swap them by hand
  (see the tar sketch after this list)

o you could do a full backup and then incrementals which would only back up
  the files that changed since the last backup (also covered in the tar sketch below)

o you could buy another tape unit and split your 60G of data across the
  two drives

o you could buy an autoloader instead

o instead of writing lots of small files straight to tape, can you do any kind
  of staged backup by tarring directories into something larger (e.g. 1G+ files)
  and backing those up instead? (see the staging sketch below)
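
i don't know star's syntax for multi-volume or incremental runs, but with
GNU tar the two-tape and incremental ideas above look roughly like this
(untested, and /data is just a placeholder for your directory):

  # multi-volume: tar prompts for the second tape when the first fills up
  cd /data
  tar -cvMf /dev/nst0 .

  # incremental: the first run with a fresh snapshot file is a full backup,
  # later runs with the same snapshot file only write the changed files
  tar -cvf /dev/nst0 --listed-incremental=/var/backups/data.snar .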
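
the staging idea could be as simple as a loop like this, assuming you have
scratch disk somewhere (/backup-stage and /data are placeholders, and the last
line just re-uses your star command against the staging area):

  #!/bin/sh
  # roll each top-level directory under /data into one big tar file in a
  # staging area, then write those large files to tape in a single pass
  STAGE=/backup-stage
  mkdir -p "$STAGE"
  for d in /data/*; do
      name=`basename "$d"`
      tar cf "$STAGE/$name.tar" "$d"
  done
  cd "$STAGE" && star -cv bs=32k -f /dev/nst0 .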


regards,

-jason



