I’ve been working with private cloud environments for the last few years. From what I gather, it’s a great way to save money and resources: architect it right the first time and expand as your vision grows. I’d like to get more into Amazon AWS and Rackspace, but I don’t want to shell out the cash to run a few VMs that I may or may not use on a day-to-day basis. So my solution is to set up my own private cloud using a mainstream framework.
Which cloud framework counts as mainstream is debatable. For a long time, I’ve been messing with CLOUDSTACK, an open source project that was acquired by CITRIX. CITRIX donated portions, if not all, of the project to Apache and continued its open source development. CLOUDSTACK is great on paper: it has tons of built-in APIs, ties into Amazon AWS through CLOUDBRIDGE, and provides a really nice native user interface. However, after playing around with CLOUDSTACK, I found it really difficult to manage the different types of storage mounts it needs, as well as the networking. I was able to get it up and running once but was not able to do it again; the common issues were with networking and secondary storage mounts.
After my personal CLOUDSTACK failure, I went with OPENSTACK. OPENSTACK seemed to be more widely adopted in commercial and Government spaces, so I thought it would be a great opportunity to learn its internal mechanics. So, moving forward, here’s my journey:
Objective: Quickly build out OPENSTACK (OS) in a proof of concept (POC) environment with the capability of quickly migrating GuestVMs to production.
The following capabilities will be configured and tested:
- House various system VMs in a designated “project” for the purpose of segregating roles: security, domain, malware, and “playground”
- Quickly promote a development VM into production after a successful test (create a process flow)
- Ability to create VMs as IDS sensors with a TAP attachment
- Ability to create isolated guestVMs for malware analysis
- OPENSTACK development (PKI authentication, API, etc)
- VNC into guestVMs (this should be available natively)
- Ability to leverage Citrix XenDesktop and guestVM provisioning with GLANCE via OS APIs
- Ability to recover guestVM from a host failure
- I plan on testing this by tearing down my POC and bringing it back up, using GLANCE to restore my guestVMs on the NAS back into the OS database.
- Utilize a single OS controller for multiple hosts dispersed across various regions
- I also have a server in another location I’d like to leverage
After the OS-POC is fielded, I’d like to do more testing on QUANTUM and CINDER: specifically, configuring security groups for multi-tenant VLAN isolation and dispersing CINDER blocks across locations.
In my POC environment, I’ll be using 2 servers: OS-POC and my existing NAS4Free instance.

OS-POC:
- Quad Xeon 2.33GHz, 32GB of memory, single 160GB drive + 1.1TB iSCSI for GLANCE and CINDER
- 2 NICs (public + private)
  - Private (guestVM isolation)
  - Public (Internet accessible)
- OS: Ubuntu 12.04.1 LTS Server x64 (for the initial POC); I’d like to use CentOS 6.3 eventually.

NAS4Free:
- Dual Xeon 3.02GHz, 4GB of memory, 4 drives spanning 3.5TB
- 500GB advertised as NFS for a central repository of ISOs
- 500GB advertised as an iSCSI target for nova-volumes
  - Formatted as LVM ext4
- 100GB advertised as an iSCSI target for glance-images
  - Formatted as LVM ext4
An important thing to note is that this standalone install puts ALL components on a single server. Using the STACKGEEK scripts, it seems QUANTUM and CINDER will not be fully utilized. In my initial research, I found that CINDER operates in userspace, interfacing with a volume group named “cinder-volumes”. I also found that glance images are stored in /var/lib/glance by default. I’ll be leveraging my NAS as much as possible to test: 1. data reliability, and 2. the performance of ZFS iSCSI with only 4GB of memory.
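For reference, creating that volume group on top of the NAS-backed LUN might look something like the sketch below. The device name /dev/sdb is an assumption; check lsblk or dmesg to see where your iSCSI LUN actually shows up.

```shell
# Assumes the 1.1TB iSCSI LUN from the NAS appears as /dev/sdb (verify with lsblk).
sudo pvcreate /dev/sdb                 # initialize the LUN as an LVM physical volume
sudo vgcreate cinder-volumes /dev/sdb  # "cinder-volumes" is the VG name the volume service looks for
sudo vgs cinder-volumes                # confirm the volume group was created
```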
The PREREQ is to run setup.sh. The script will create a setuprc file that is used for system variables and system passwords.
Download STACKGEEK scripts via git:
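Something like the following should pull the scripts down and kick off the prereq step. The repository path is my recollection of the STACKGEEK guide, so verify it before cloning:

```shell
# Assumed repository location; double-check against the STACKGEEK guide.
git clone git://github.com/StackGeek/openstackgeek.git
cd openstackgeek
sudo ./setup.sh   # generates the setuprc holding passwords and variables
```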
- # Configure passwords and other variables (setuprc)
- # Check to see if the system supports KVM
- # Add sources + perform system update
- # Install vlan, bridge-utils, libvirt, kvm, ntp, turn on forwarding
- /etc/init.d/networking restart
- # Manually connect to the iSCSI targets: https://help.ubuntu.com/11.10/serverguide/iscsi-initiator.html
- # Mount the glance-images target at /var/lib/glance
- # Message saying to create the VG as “nova-volumes”
- # Install MySQL, modify my.cnf, create the databases, and add service accounts to them. I’m not a MySQL admin and don’t have any experience tuning it, so I’ll install webmin for basic administration of users and databases: http://www.ubuntugeek.com/how-to-install-webmin-on-ubuntu-12-10-quantal-server.html. By default, webmin authenticates as root with root’s password, so you’ll probably want to run sudo passwd root to set one.
- # Install keystone; note that the token is randomly generated using rand hex.
- # Install glance and add an Ubuntu image
- # Install nova controller and compute
- # Install Horizon
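The manual iSCSI steps above (connecting to the targets and mounting the glance-images volume) might look roughly like the sketch below. The portal IP and target IQN are placeholders for illustration; substitute whatever your NAS4Free instance actually advertises.

```shell
# Install the iSCSI initiator tools.
sudo apt-get install -y open-iscsi
# Discover targets advertised by the NAS (192.168.1.50 is a placeholder IP).
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50
# Log in to the glance-images target (placeholder IQN; use the one discovery returned).
sudo iscsiadm -m node --targetname "iqn.2012-01.org.example:glance" -p 192.168.1.50 --login
# Once the LUN appears (check lsblk), format it and mount it where glance keeps images.
# WARNING: mkfs destroys any existing data on the device; adjust /dev/sdc to match your system.
sudo mkfs.ext4 /dev/sdc
sudo mount /dev/sdc /var/lib/glance
```

To survive reboots, you would also mark the target for automatic login and add the mount to /etc/fstab, as described in the Ubuntu iSCSI initiator guide linked above.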