Sunday, March 8, 2015

Manually Install OpenStack Juno on Ubuntu 14.04

1.1. What is this post for, and how to try it?

OpenStack has been getting more attention these days, and I'd like to share some of my experiences with deployment. This post is based on the instructions from the official apt-get installation guide.

Unlike other blog posts, I'm going to exclude fancy scripts and Python and just focus on the defaults. The official guide is too big to read comfortably, so this post trims it down and adds more human-readable explanation.

1.2. Who am I?

I'm a Java developer (mainly Spring-Framework-based web apps) and have recently been working on OpenStack deployment automation for data centers and NFVI using Chef. Four versions of OpenStack have passed through my hands: Grizzly, Havana, Icehouse, and Juno.

2.1. The architecture of the official guide

The official installation guide assumes a three-node architecture, which means a user should prepare three hosts and install the OpenStack services across them. With this architecture, OpenStack components are spread by role, depending on whether they are server processes or agents. For instance, Nova has many processes, such as the scheduler, VNC server, API server, and compute agent; by this role, the scheduler, VNC server, and API server are categorized as servers, and the compute agent as an agent. Services like Glance, which have no agent, only need the server side installed.

On the controller node, all the server processes run on one host, and these server processes control the agent processes, such as the Nova compute agent and the Neutron agents.


(figure: three-node architecture, from chapter 1, "Architecture", of the official guide)

As shown above, all the services that can act as a server exist on the controller node, while the agents for Nova, Neutron, etc. run on the network node, the compute nodes, and the storage nodes (block, object). You may notice that the compute and storage nodes are expressed in plural form, which means a user can add more of them for scale-out use cases.

2.2. Target architecture of this post

I'm going to change the architecture to make the test bed easier and lighter than the original: the network node will be merged into the controller node, the block storage node into the compute node, and Swift will be excluded.

Controller Node
- hostname: ubjuno-contnet
- control functions: mariadb, rabbitmq, keystone, glance, nova, neutron, horizon, cinder
- network functions: ovs agent, dhcp agent, l3 agent, metadata agent

Compute Node
- hostname: ubjuno-compute
- compute: ovs agent, nova-compute-agent
- storage: cinder volume agent


3.1 Prerequisites for creating an OpenStack test bed

To install OpenStack, you can use a bare-metal PC with no operating system on it, or simply use VirtualBox. I'm going to use VirtualBox to set up the OpenStack test bed, excluding VMware Workstation because it is commercial-only (correct me if I'm wrong).

Set up VirtualBox with three host-only networks. You may use NAT or bridged networking, but I prefer host-only because it is the simplest of all and gives a closed environment. With "host" meaning your PC, a virtual instance can be pinged or connected to from it.

The names of the host-only networks will vary depending on your host PC's OS; mine is CentOS 6.5.

vboxnet0     88.11.11.1     255.255.255.0     no DHCP
vboxnet1     88.22.22.1     255.255.255.0     no DHCP
vboxnet2     88.33.33.1     255.255.255.0     no DHCP

The first one will be used for external networking (with no real external connection), the second for management, and the last for the data (internal) networks that users will create once OpenStack is in operation.
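If you prefer the command line over the GUI, the same networks can be created with VBoxManage; this is a sketch assuming a fresh VirtualBox install where the interfaces come out as vboxnet0/1/2 (the IPs match the table above):

        # VBoxManage hostonlyif create        (run three times: creates vboxnet0, 1, 2)
        # VBoxManage hostonlyif ipconfig vboxnet0 --ip 88.11.11.1 --netmask 255.255.255.0
        # VBoxManage hostonlyif ipconfig vboxnet1 --ip 88.22.22.1 --netmask 255.255.255.0
        # VBoxManage hostonlyif ipconfig vboxnet2 --ip 88.33.33.1 --netmask 255.255.255.0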

3.2 Prepare your virtual machines

The things that should be done are listed below:
a) create two virtual machines with 64-bit Ubuntu
b) set up each VM with four network adapters: NAT, vboxnet0, vboxnet1, vboxnet2
     (see the VBoxManage sketch after this list). NAT will be used only for
     downloading packages from the internet, so it's not for OpenStack.
c) change each NIC adapter's promiscuous mode to "Allow All" for cloud networking
d) set up the hostname and /etc/hosts file on each node
    - hostname: ubjuno-contnet on the first one, ubjuno-compute on the second
           # hostnamectl set-hostname ubjuno-contnet
           # hostnamectl set-hostname ubjuno-compute
    - add host entries
           # vi /etc/hosts
                      88.22.22.11     ubjuno-contnet
                      88.22.22.12     ubjuno-compute
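For reference, steps b) and c) can also be done with VBoxManage instead of the GUI. A sketch for the controller VM follows; the adapter numbering (NAT first, then the three host-only networks) is my assumption, so adjust it to your setup and repeat for ubjuno-compute:

        # VBoxManage modifyvm ubjuno-contnet --nic1 nat
        # VBoxManage modifyvm ubjuno-contnet --nic2 hostonly --hostonlyadapter2 vboxnet0 --nicpromisc2 allow-all
        # VBoxManage modifyvm ubjuno-contnet --nic3 hostonly --hostonlyadapter3 vboxnet1 --nicpromisc3 allow-all
        # VBoxManage modifyvm ubjuno-contnet --nic4 hostonly --hostonlyadapter4 vboxnet2 --nicpromisc4 allow-all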

3.3. Install Infrastructure Software

OpenStack components heavily depend on two pieces of infrastructure software: a relational database and a message queue broker. This post uses MariaDB for the former and RabbitMQ for the latter.

Most of the server services use MariaDB as their persistent data store, so that everything happening in the OpenStack cluster is available to users any time they want, as in any server application.

Communication between servers and agents mostly goes over the message queue, which guarantees delivery of messages. With RESTful APIs and the message queue, OpenStack components work with each other in a loosely coupled way.

I'm going to exclude NTP, but it is also important; you can google why.
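A minimal installation sketch, following the official Juno guide (RABBIT_PASS is a placeholder to replace; 88.22.22.11 is the controller's management IP from section 3.1):

        # apt-get install mariadb-server python-mysqldb
        # vi /etc/mysql/my.cnf
                   [mysqld]
                   bind-address = 88.22.22.11
                   default-storage-engine = innodb
                   innodb_file_per_table
                   collation-server = utf8_general_ci
                   init-connect = 'SET NAMES utf8'
                   character-set-server = utf8
        # service mysql restart
        # mysql_secure_installation

        # apt-get install rabbitmq-server
        # rabbitmqctl change_password guest RABBIT_PASS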



4. Install OpenStack Controller Services on the Controller Node

4.1. Install Keystone Server

Keystone is the base component for authentication. This authentication service is used not only by users outside of OpenStack but also by components inside it. Although each component stands independently, cooperation is unavoidable: in the middle of creating an instance, one component calls another, and it uses Keystone to get tokens and the service catalog.
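A rough sketch of the server installation, using the package names and config keys from the official Juno guide (KEYSTONE_DBPASS and ADMIN_TOKEN are placeholders you should replace):

        # mysql -u root -p -e "CREATE DATABASE keystone; \
            GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS'; \
            GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';"
        # apt-get install keystone python-keystoneclient
        # vi /etc/keystone/keystone.conf
                   [DEFAULT]
                   admin_token = ADMIN_TOKEN
                   [database]
                   connection = mysql://keystone:KEYSTONE_DBPASS@ubjuno-contnet/keystone
                   [token]
                   provider = keystone.token.providers.uuid.Provider
                   driver = keystone.token.persistence.backends.sql.Token
        # su -s /bin/sh -c "keystone-manage db_sync" keystone
        # service keystone restart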



4.2 Setup Basic tenant and user with admin role

Like most server apps, Keystone needs core data populated before operation.
You can use the admin token as a service token to act as a super admin, but this should only be done for the initial setup; otherwise there is a big risk of the credential being exploited at operating time.
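A sketch of the initial data population using the admin token; Juno still ships the v2.0 keystone CLI, and ADMIN_TOKEN and ADMIN_PASS are placeholders:

        # export OS_SERVICE_TOKEN=ADMIN_TOKEN
        # export OS_SERVICE_ENDPOINT=http://ubjuno-contnet:35357/v2.0
        # keystone tenant-create --name admin --description "Admin Tenant"
        # keystone user-create --name admin --pass ADMIN_PASS
        # keystone role-create --name admin
        # keystone user-role-add --user admin --tenant admin --role admin
        # keystone tenant-create --name service --description "Service Tenant"
        # keystone service-create --name keystone --type identity
        # keystone endpoint-create \
            --service-id $(keystone service-list | awk '/ identity / {print $2}') \
            --publicurl http://ubjuno-contnet:5000/v2.0 \
            --internalurl http://ubjuno-contnet:5000/v2.0 \
            --adminurl http://ubjuno-contnet:35357/v2.0 --region regionOne
        # unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

Once the admin user exists, switch to normal credentials (for example an admin-openrc.sh exporting OS_USERNAME and OS_PASSWORD) and keep the admin token out of daily use.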



4.3 Install Glance Server

The word "image service" seems to be odd name at first glance, but you may notice that image is special word describing a disk file composed of data contents with specific file system. In most cases, golden image is used for booting up instance. Cirros image is used for simple test.



4.4 Install Nova Server

Nova is used for creating instances. Setting up Nova differs depending on which network type is used, legacy (nova-network) or Neutron, so at this point only the basic installation will be done.
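A sketch of the controller-side installation; NOVA_DBPASS, NOVA_PASS, and RABBIT_PASS are placeholders, and the database and keystone user are prepared as before:

        # apt-get install nova-api nova-cert nova-conductor nova-consoleauth \
            nova-novncproxy nova-scheduler python-novaclient
        # vi /etc/nova/nova.conf
                   [DEFAULT]
                   rpc_backend = rabbit
                   rabbit_host = ubjuno-contnet
                   rabbit_password = RABBIT_PASS
                   auth_strategy = keystone
                   my_ip = 88.22.22.11
                   vncserver_listen = 88.22.22.11
                   vncserver_proxyclient_address = 88.22.22.11
                   [database]
                   connection = mysql://nova:NOVA_DBPASS@ubjuno-contnet/nova
                   [keystone_authtoken]
                   auth_uri = http://ubjuno-contnet:5000/v2.0
                   identity_uri = http://ubjuno-contnet:35357
                   admin_tenant_name = service
                   admin_user = nova
                   admin_password = NOVA_PASS
        # su -s /bin/sh -c "nova-manage db sync" nova
        # for s in api cert consoleauth scheduler conductor novncproxy; do service nova-$s restart; done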



4.5 Install Neutron Server

This guide uses Neutron. Neutron works tightly with Nova, so some configuration must be done on the Nova side too. The network will be set up with the GRE and flat type drivers.
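A sketch of the server-side setup with the ML2 plugin (flat for the external network, GRE for tenant networks). NEUTRON_DBPASS is a placeholder, and the matching [neutron] settings in nova.conf are omitted here for brevity; see the official guide for them:

        # apt-get install neutron-server neutron-plugin-ml2 python-neutronclient
        # vi /etc/neutron/neutron.conf
                   [DEFAULT]
                   core_plugin = ml2
                   service_plugins = router
                   allow_overlapping_ips = True
                   rpc_backend = rabbit
                   rabbit_host = ubjuno-contnet
                   rabbit_password = RABBIT_PASS
                   auth_strategy = keystone
                   [database]
                   connection = mysql://neutron:NEUTRON_DBPASS@ubjuno-contnet/neutron
        # vi /etc/neutron/plugins/ml2/ml2_conf.ini
                   [ml2]
                   type_drivers = flat,gre
                   tenant_network_types = gre
                   mechanism_drivers = openvswitch
                   [ml2_type_gre]
                   tunnel_id_ranges = 1:1000
        # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
            --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron
        # service neutron-server restart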



4.6 Install horizon Web service

Horizon is based on Apache2 and Django, with a flexible session backend.
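The dashboard itself is a short install; OPENSTACK_HOST points at the keystone host:

        # apt-get install openstack-dashboard apache2 libapache2-mod-wsgi \
            memcached python-memcache
        # vi /etc/openstack-dashboard/local_settings.py
                   OPENSTACK_HOST = "ubjuno-contnet"
                   ALLOWED_HOSTS = ['*']
        # service apache2 restart; service memcached restart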



4.7 Install cinder Server

The Cinder scheduler and API server will be installed on the controller node.
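A sketch of the controller-side pieces; CINDER_DBPASS is a placeholder, and the database, keystone user, and endpoint preparation follow the same pattern as the other services:

        # apt-get install cinder-api cinder-scheduler python-cinderclient
        # vi /etc/cinder/cinder.conf
                   [DEFAULT]
                   rpc_backend = rabbit
                   rabbit_host = ubjuno-contnet
                   rabbit_password = RABBIT_PASS
                   auth_strategy = keystone
                   my_ip = 88.22.22.11
                   [database]
                   connection = mysql://cinder:CINDER_DBPASS@ubjuno-contnet/cinder
        # su -s /bin/sh -c "cinder-manage db sync" cinder
        # service cinder-scheduler restart; service cinder-api restart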



5. Install Network agents on Controller Node

Most of the neutron.conf settings were already done while setting up the controller functions. If you want to separate the network functions onto another node, check the official installation guide for how neutron.conf and the ML2 config should be set up there.
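A sketch of the agent installation on the merged controller/network node. Two assumptions here: eth1 is the NIC attached to vboxnet0 (the external network) and 88.33.33.11 is the controller's vboxnet2 (data network) IP; adjust both to your setup. The metadata agent settings are omitted for brevity:

        # apt-get install neutron-plugin-openvswitch-agent neutron-l3-agent \
            neutron-dhcp-agent neutron-metadata-agent
        # vi /etc/neutron/plugins/ml2/ml2_conf.ini
                   [ovs]
                   local_ip = 88.33.33.11
                   enable_tunneling = True
                   [agent]
                   tunnel_types = gre
        # vi /etc/neutron/l3_agent.ini
                   [DEFAULT]
                   interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
                   use_namespaces = True
                   external_network_bridge = br-ex
        # vi /etc/neutron/dhcp_agent.ini
                   [DEFAULT]
                   interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
                   dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
                   use_namespaces = True
        # ovs-vsctl add-br br-ex
        # ovs-vsctl add-port br-ex eth1
        # for s in plugin-openvswitch-agent l3-agent dhcp-agent metadata-agent; do service neutron-$s restart; done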



6. Set up the Compute Node with compute, L2 network, and Cinder agents

The compute node runs three agents: nova-compute, the Open vSwitch agent, and cinder-volume. Each has a self-describing name: compute creates computing instances, the Open vSwitch agent creates ports and networking with Open vSwitch, and volume creates volumes that are made available inside instances.

6.1 Install nova compute agent
There is no need to set up database settings for the compute agent.
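A sketch for the compute agent; the rabbit and keystone_authtoken settings are the same as on the controller, just without the [database] section. Note the qemu fallback, which matters inside VirtualBox because nested hardware virtualization is usually unavailable:

        # apt-get install nova-compute sysfsutils
        # vi /etc/nova/nova.conf
                   [DEFAULT]
                   my_ip = 88.22.22.12
                   vnc_enabled = True
                   vncserver_listen = 0.0.0.0
                   vncserver_proxyclient_address = 88.22.22.12
                   novncproxy_base_url = http://88.22.22.11:6080/vnc_auto.html
        # egrep -c '(vmx|svm)' /proc/cpuinfo      (if this prints 0, set qemu below)
        # vi /etc/nova/nova-compute.conf
                   [libvirt]
                   virt_type = qemu
        # service nova-compute restart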



6.2 Install ovs agent

The Open vSwitch agent prepares virtual network devices and networking for instances.
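A sketch for the L2 agent; 88.33.33.12 is my assumption for the compute node's vboxnet2 (data network) IP, and neutron.conf needs the same rabbit/keystone settings as on the controller, minus the database section:

        # apt-get install neutron-plugin-openvswitch-agent
        # vi /etc/neutron/plugins/ml2/ml2_conf.ini
                   [ovs]
                   local_ip = 88.33.33.12
                   enable_tunneling = True
                   [agent]
                   tunnel_types = gre
        # service openvswitch-switch restart
        # service neutron-plugin-openvswitch-agent restart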



6.3 Install cinder volume agent

The Cinder volume agent creates volumes on the compute node.
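A sketch for the volume agent, assuming a second disk /dev/sdb was attached to the compute VM to back the LVM volume group (that disk name is my assumption):

        # apt-get install lvm2 cinder-volume
        # pvcreate /dev/sdb
        # vgcreate cinder-volumes /dev/sdb
        # vi /etc/cinder/cinder.conf        (rabbit/keystone/database settings as on the controller)
                   [DEFAULT]
                   my_ip = 88.22.22.12
                   glance_host = ubjuno-contnet
        # service tgt restart; service cinder-volume restart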


