Enterprise Mail Setup

jason andrade jason at rtfmconsult.com
Thu Mar 27 15:08:00 CST 2003


On Thu, 27 Mar 2003, Paul wrote:

> The current system is a PE2550 with 4 x 36GB U160 10,000rpm drives in a
> RAID5 array via a PERC3/Di, dual PIII 1.3GHz CPUs, 2GB of RAM and the
> standard Dell onboard equipment (gigabit ethernet etc.).
>
> The new server I'm going to set up needs to be able to handle a huge spool,
> i.e. 250GB of storage.
> It currently has approximately 140,000 users POPping their mail and
> receiving mail via SMTP.
>
> Mail is spooled to /var/spool/mail/x/y/username in a double level hashing
> setup.
>
> Has anybody had experience with this type of setup?  I need some specs or at
> least some ideas on server equipment from Dell.
>
> The server I was thinking of would be:
>
> Dual Xeon 2.4GHz, 3GB of DDR RAM, a single 18GB mirror for the OS and
> internal files (logs, db apps etc.) and then a PowerVault for the external
> spool.
> The PowerVault would need to do RAID5 for a disk spool of, say, 250GB, and
> would be connected via the external SCSI port on the new server.

to concur with what seth said, an 18G mirror (RAID1) for the OS is good.  i
would look at a separate 18G mirror for logs and db apps if you can afford it,
and then purchase a 5th 18G disk and make it a global hot spare for both
RAID1s.  this reduces your risk window a lot, since the rebuild onto the spare
starts as soon as a disk fails instead of waiting for a replacement to arrive.
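
a rough back-of-envelope of that risk window (python; the rebuild/replacement
times and the 3% annual failure rate are assumptions i've made up for the
illustration, not measured figures -- plug in your own):

# crude sketch of the time an 18G RAID1 spends degraded (exposed to a
# second-disk failure) with and without a global hot spare.  all of the
# constants below are assumptions for illustration only.
REBUILD_HOURS     = 2     # assumed time to resync the mirror onto the spare
REPLACEMENT_HOURS = 48    # assumed time to get a new disk on site and swapped
DISK_AFR          = 0.03  # assumed annual failure rate per disk (~3%)

def p_second_failure(window_hours):
    """crude chance the surviving disk also dies inside the window."""
    hourly_rate = DISK_AFR / (365.0 * 24)
    return 1 - (1 - hourly_rate) ** window_hours

no_spare   = REPLACEMENT_HOURS + REBUILD_HOURS  # degraded until new disk arrives and resyncs
with_spare = REBUILD_HOURS                      # degraded only while resyncing onto the spare

print("without spare: %2dh degraded, p(second failure) ~ %.1e" % (no_spare, p_second_failure(no_spare)))
print("with spare:    %2dh degraded, p(second failure) ~ %.1e" % (with_spare, p_second_failure(with_spare)))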

i am assuming the lifetime of this server will be at least 24-36 months, so
the cost of a few extra drives is not that onerous.

in terms of external storage, you could look at raid1+0 as a faster (and more
reliable) option: a PV220S populated with 14*73G drives will give you roughly
490G of usable storage (half the raw capacity, since every disk is mirrored).
you could use the space left over above your 250G spool either to expand or
to do "online/staging backups" of things like the db/logs, which will speed
up restores etc.
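
for what it's worth, the capacity arithmetic looks like this (a quick python
sketch using nominal drive sizes; formatted capacity comes out a bit lower,
which is why i quote ~490G rather than the nominal 511G):

# usable capacity on a 14-drive PV220S shelf, raid1+0 vs raid5.
# drive size is nominal -- formatted capacity will be a little lower.
DRIVES   = 14
DRIVE_GB = 73

def raid10_usable(n, size):
    # every disk is mirrored, so half the raw capacity is usable
    return (n // 2) * size

def raid5_usable(n, size, hot_spares=0):
    # one disk's worth of parity, optionally minus reserved hot spares
    return (n - hot_spares - 1) * size

print("raid1+0, 14 drives  : %dG usable" % raid10_usable(DRIVES, DRIVE_GB))
print("raid5,   14 drives  : %dG usable" % raid5_usable(DRIVES, DRIVE_GB))
print("raid5 + 1 hot spare : %dG usable" % raid5_usable(DRIVES, DRIVE_GB, 1))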

raid5 would work, but your write performance will definitely be lower than
raid1+0, so you just need to make sure that your app will be happy with that.
from other people's feedback i think RAID5 write is something like
20-25Mbyte/sec at best using the PERC, roughly double that using raid1+0, and
more again using software raid.
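
to put those rates in perspective, here's a rough idea of how long a full
250G spool would take to write back (say during a restore) at each of them.
the rates are the ballpark figures above, not benchmarks, the software raid
figure is just an assumption, and small-file mail traffic will be slower than
these sequential numbers suggest:

# rough time to write a 250G spool at the ballpark write rates above
SPOOL_GB = 250

rates_mb_s = [
    ("raid5 on PERC (~20-25MB/s)", 22),
    ("raid1+0 on PERC (~2x)",      45),
    ("software raid (assumed)",    60),
]

for label, mb_s in rates_mb_s:
    hours = SPOOL_GB * 1024.0 / mb_s / 3600
    print("%-27s ~%.1f hours for 250G" % (label, hours))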

regards,

-jason



