[Crowbar] Deploying Swift on 3TB Disks

Gregory_Althaus at Dell.com Gregory_Althaus at Dell.com
Wed Jul 25 07:56:28 CDT 2012

Dieter is correct about the OS partitions.

I think whether they work as storage drives currently depends on which service is using them.  Hadoop uses GPT-based partitions and should work.  I don't think we've updated the swift recipes to use GPT-based partitions, but I'm not completely sure.  This is part of the work to support larger-than-2TB disks for Crowbar and its services.
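For reference, the sector count in the quoted error is why GPT matters here: MBR/BSD-style partition tables store partition lengths as 32-bit sector counts, capping a partition at about 2 TiB, while GPT uses 64-bit counts. A quick sketch of the arithmetic, plus a hypothetical manual workaround (the device name is an example, not from this thread):

```shell
# MBR/BSD partition tables hold a 32-bit sector count, so with
# 512-byte sectors the largest expressible partition is:
echo $(( 4294967295 * 512 ))         # 2199023255040 bytes, about 2 TiB

# A 3 TB disk is reported as 5860509696 sectors, which overflows
# that 32-bit field (prints 1, i.e. true):
echo $(( 5860509696 > 4294967295 ))

# With a GPT label the limit goes away; hypothetical example of
# labeling a spare disk by hand (destructive -- device name assumed):
# parted -s /dev/sdb mklabel gpt
# parted -s /dev/sdb mkpart primary 1MiB 100%
```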


-----Original Message-----
From: crowbar-bounces On Behalf Of Plaetinck, Dieter
Sent: Wednesday, July 25, 2012 4:00 AM
To: Mike Dawson
Cc: crowbar
Subject: Re: [Crowbar] Deploying Swift on 3TB Disks

On Tue, 24 Jul 2012 16:27:52 -0400
Mike Dawson <mdawson at gammacode.com> wrote:

> All,
> It appears the barclamp to deploy Swift fails with 3TB or larger disks. 
> Looking deeper at the output of chef-client, I get:
> [Tue, 24 Jul 2012 14:32:31 -0500] INFO: Processing execute[creating 
> partition 0] action run 
> (/var/cache/chef/cookbooks/swift/providers/disk.rb line 239)
> Error: partition length of 5860509696 sectors exceeds the 
> bsd-partition-table-imposed maximum of 4294967295
> I have had success deploying to smaller disks. Has anyone else run 
> into this issue?

3TB storage disks or main OS disk? I was told 3TB OS disks are currently unsupported (something with needing EFI/GPT, and not being implemented yet in crowbar), whereas 3TB storage disks should be ok.


Crowbar mailing list
Crowbar at dell.com
For more information: https://github.com/dellcloudedge/crowbar/wiki
