@rockymadden
Last active December 17, 2015 05:58
One simple example of a 10u/13-node hardware private cloud infrastructure. Built specifically for colocation, based on how datacenters often slice up their racks (e.g. 10u/quarter rack is very common). Saves a few hundred thousand dollars over AWS in three years' time. Replace/repeat at that time and then put these into a Hadoop cluster or somethi…

This chassis houses up to 8 processing nodes. Each node is roughly equivalent to 30 elastic compute units. You can scale as needed by building and slotting more nodes into the chassis. Candidates for these nodes include: app servers of any kind, SSL termination/reverse proxies, cache servers, distributed/concurrent processing, cron jobs, etc.

Barebone:


Per node (up to 8):


  • Min cost: $4550 (1 node, $800 per additional)
  • Max cost: $10150
  • Max Realistic Amps (120v): 8
  • Each node has a 240GB SSD, 32GB RAM, and a 4 core/8 thread CPU.
  • Only non-critical/replicable data should be held on these servers! Intentionally no RAID.
  • Each node should be treated as a dumb processing/cache server.
  • The developer/ops team is expected to anticipate and code for individual node failure (which is always good practice anyway).
  • Though limited on cores, the E3 v3 processors are one generation ahead of the other Xeons. As a result, they are some of the highest performing Xeon processors available at an extremely low cost: http://www.cpubenchmark.net/high_end_cpus.html
  • If you need more processing power on the cheap, E3-1270v3 is a great way to do so for only ~$100 more per proc.
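Since the team is expected to code for individual node failure, here is a minimal client-side failover sketch. The hostnames and the `request_fn` callable are hypothetical placeholders, not part of the build above:

```python
import random

# Hypothetical node list; hostnames are placeholders.
NODES = ["node1.internal", "node2.internal", "node3.internal"]

def call_with_failover(request_fn, nodes, attempts=3):
    """Try a request against randomly chosen nodes, skipping any that fail.

    Because each processing node is a dumb, stateless worker, any node can
    serve any request; losing one node only costs a retry.
    """
    last_error = None
    for node in random.sample(nodes, min(attempts, len(nodes))):
        try:
            return request_fn(node)
        except ConnectionError as err:
            last_error = err  # node is down; try the next one
    raise RuntimeError("all attempted nodes failed") from last_error
```

In practice a load balancer or reverse proxy does this for you, but the same principle applies: no single processing node may be special.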

This chassis houses up to 4 storage nodes. You can scale as needed by building and slotting more nodes into the chassis. Candidates for these nodes include: databases (e.g. MongoDB, Redis, Neo4j, Postgres, HBase), static caches, and persisted data of any kind that needs to be accessed quickly.

Barebone:


Per node (up to 4):


  • Min cost: $5860 (1 node, $3060 per additional)
  • Max cost: $15040
  • Max Realistic Amps (120v): 5
  • Each node has 1440GB of SSD, 64GB RAM, and a 6 core/12 thread CPU.
  • Possible to double the RAM and CPU. NOTE: A second CPU must be installed in the motherboard before more RAM can be added.
  • Leveraging RAID 10 with 6 disks reduces writes to each individual disk by 2/3 (RAID 5/6 must do parity calculations, shortening the life of your SSDs).
  • The RAID 10 config has 720GB of usable space, 6x reads, and 3x writes. At least one drive failure is recoverable, possibly two.
  • Intel SSD firmware has automatic GC, which mitigates the usual problem of running SSDs in RAID configs (i.e. lack of TRIM).
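The RAID 10 numbers above can be sanity-checked with a small sketch. `raid10_profile` is a hypothetical helper, and the multipliers are idealized best-case figures:

```python
def raid10_profile(disks, disk_gb):
    """Back-of-envelope RAID 10 numbers for an even count of identical disks.

    Mirrored pairs halve usable capacity; reads can be striped across all
    disks, while each write lands on one mirrored pair.
    """
    assert disks % 2 == 0, "RAID 10 needs an even disk count"
    pairs = disks // 2
    return {
        "usable_gb": pairs * disk_gb,  # half of raw capacity
        "read_multiplier": disks,      # all disks can serve reads
        "write_multiplier": pairs,     # writes stripe across the pairs
    }

# The storage node above: 6 x 240GB SSDs.
profile = raid10_profile(6, 240)
```

This reproduces the 720GB usable / 6x read / 3x write figures, and also shows why each disk sees only a third of the total write volume.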

This chassis houses backups of the persisted data from the storage nodes (e.g. database snapshots). It could also be used to serve large amounts of data, though access is slower because it leverages 7.2k enterprise SATA drives.

Chassis:

Motherboard:


Per node (1):


  • Max/min cost: $4690
  • Max Realistic Amps (120v): 1.75
  • The node has 32000GB of 7.2k enterprise SATA HDD, 32GB RAM, and a 4 core/8 thread CPU.
  • Assumes FreeBSD with ZFS in a RAID-Z2 config.
  • The RAID-Z2 config has 24000GB of usable storage, 6x reads, and no write speedup. Two drive failures are recoverable.
Overall:

  • Min cost: $15100
  • Max cost: $29880
  • Add 10u colocation, which typically starts at ~$350 a month.
  • Add a hardware firewall, typically 1u.
  • Add a managed switch behind the firewall, typically 1u.
  • You have 1u to spare!
  • Add the same setup at another datacenter for failover and balancing if you want to get nuts.
  • NOT one-size-fits-all; you should do your own analysis/builds based upon your own platform needs.
  • Analyze your write patterns to ensure consumer-level SSDs (MLC) can last 3 years. Be sure to take RAID configs into consideration.
  • Ensure you have enough amps for your setup. Fully filled, 16 amps (120v) would be needed.
  • Most of my code is functional/concurrent and scales horizontally. My stack does as well. Your mileage may vary if your stack scales more vertically.
  • If you are using the Xen hypervisor, you can only move VMs between nodes within the same physical chassis, because the processors differ between the processing and storage builds. This could be a big con for some.
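As a sanity check on the power budget, the per-chassis worst-case amps above can be summed and compared against a circuit. The 80% continuous-load rule of thumb and the `fits_circuit` helper are assumptions, not part of the original notes:

```python
# Worst-case amperage per chassis at 120v, from the figures above.
CHASSIS_AMPS = {
    "processing (8 nodes)": 8.0,
    "storage (4 nodes)": 5.0,
    "backup (1 node)": 1.75,
}

def total_amps(chassis):
    """Sum worst-case draw across all chassis."""
    return sum(chassis.values())

def fits_circuit(chassis, circuit_amps, headroom=0.8):
    """Rule of thumb: keep continuous load under ~80% of the breaker rating."""
    return total_amps(chassis) <= circuit_amps * headroom
```

Fully filled, this sums to 14.75 amps, which is why the notes call for roughly 16 amps of provisioned power: 14.75A of continuous load needs a 20A circuit at 80% headroom.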

Amortized 50% filled cost over 3 years:

  • $480 per month for 8 amps and 50Mbps (95th percentile billing) colocation
  • $2000 budget for spare parts
  • $20560 for hardware
  • Total: $1107 per month
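The 50%-filled total can be reproduced with a quick amortization sketch (`amortized_monthly` is a hypothetical helper using the figures above):

```python
def amortized_monthly(hardware, spares, colo_per_month, months=36):
    """Spread one-time hardware and spares costs across the colo term."""
    return colo_per_month + (hardware + spares) / months

# 50%-filled scenario: $20560 hardware, $2000 spares, $480/month colo.
monthly = amortized_monthly(hardware=20560, spares=2000, colo_per_month=480)
# monthly rounds to the $1107 figure above
```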

Amortized 100% filled cost over 3 years:

  • $660 per month for 16 amps and 50Mbps (95th percentile billing) colocation
  • $3000 budget for spare parts
  • $29420 for hardware
  • Total: $1583 per month