@ozgurakan
Last active August 29, 2015 13:56
Marconi Installation Guide.md

Configure Queuing Service

Contents

  • System Requirements
  • Queuing Service Concepts
  • Install Queuing Service
  • Configure Shards
  • Create A Queue

System Requirements

Before you install OpenStack Queuing Service, you must meet the following system requirements.

  • OpenStack Compute Installation.
  • Enable the Identity Service for user and project management.
  • Python 2.6 or 2.7

Queuing Service Concepts

The Queuing Service is a multi-tenant message queue implementation that uses a RESTful HTTP interface to provide an asynchronous communications protocol, one of the main requirements of today's scalable applications.

Queue

A queue is a logical entity that groups messages. Ideally, a queue is created per work type. For example, if you want to compress files, you would create a queue dedicated to that job. Any application that reads from this queue would only compress files.

Message

A message is stored in a queue and exists until it is deleted by a recipient or removed automatically by the system when its TTL (time-to-live) expires.
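For example, a producer can post a message with a 300-second TTL through the Marconi v1 API. The endpoint, token, queue name, and message body below are placeholders; substitute values from your own deployment:

```shell
# Hypothetical values; replace with your own endpoint, token, and Client-ID.
ENDPOINT=queues.example.com
TOKEN=my-auth-token
CLIENTID=c5a6114a-523c-4085-84fb-533c5ac40789

# Post a message that expires after 300 seconds unless a recipient deletes it.
curl -i -X POST "http://$ENDPOINT:80/v1/queues/fileupload/messages" \
  -H "X-Auth-Token: $TOKEN" \
  -H "Client-ID: $CLIENTID" \
  -H "Content-Type: application/json" \
  -d '[{"ttl": 300, "body": {"event": "compress", "file": "backup.tar"}}]'
```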

Worker

A worker is an application that reads one or more messages from the queue.

Producer

A producer is an application that creates messages in one or more queues.

Claim

A claim is a mechanism for marking messages so that other workers will not process the same message.
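As a sketch, a worker can claim a batch of messages in a single request; the TTL, grace, and limit values below are illustrative, and the endpoint and credentials are placeholders:

```shell
# Hypothetical values; replace with your own endpoint, token, and Client-ID.
ENDPOINT=queues.example.com
TOKEN=my-auth-token
CLIENTID=c5a6114a-523c-4085-84fb-533c5ac40789

# Claim up to 5 messages; the claim lasts 120 seconds, with a 60-second
# grace period added to the TTL of the claimed messages.
curl -i -X POST "http://$ENDPOINT:80/v1/queues/fileupload/claims?limit=5" \
  -H "X-Auth-Token: $TOKEN" \
  -H "Client-ID: $CLIENTID" \
  -H "Content-Type: application/json" \
  -d '{"ttl": 120, "grace": 60}'
```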

Publish - Subscribe

Publish - Subscribe is a pattern in which all worker applications have access to all messages in the queue. Workers cannot delete or update messages.

Producer - Consumer

Producer - Consumer is a pattern in which each worker application that reads the queue has to claim a message in order to prevent duplicate processing. When the work is done, the worker is responsible for deleting the message. If the message is not deleted within a predefined time (the claim TTL), it can be claimed by other workers.
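Once the work is done, the worker deletes the claimed message, passing the claim ID so the service can verify that this worker still holds the claim. The message ID, claim ID, endpoint, and credentials below are placeholders:

```shell
# Hypothetical values; replace with your own endpoint, token, and Client-ID.
ENDPOINT=queues.example.com
TOKEN=my-auth-token
CLIENTID=c5a6114a-523c-4085-84fb-533c5ac40789

# Delete a processed message; claim_id proves this worker holds the claim.
curl -i -X DELETE \
  "http://$ENDPOINT:80/v1/queues/fileupload/messages/51db6f78c508f17ddc924357?claim_id=51db7067821e727dc24df754" \
  -H "X-Auth-Token: $TOKEN" \
  -H "Client-ID: $CLIENTID"
```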

Message TTL

Message TTL is a time-to-live value that defines how long a message remains accessible.

Claim TTL

Claim TTL is a time-to-live value that defines how long a message remains in the claimed state. A message can be claimed by only one worker at a time.

Queues Database

The queues database stores information about the queues and the messages within them. The storage layer has to guarantee the durability and availability of the data.

Sharding

If sharding is enabled, the queuing service uses multiple queues databases in order to scale horizontally. A shard (queues database) can be added at any time without stopping the service. Each shard has a weight that is assigned at creation time but can be changed later. Sharding is done per queue, which means that all messages for a particular queue can be found in the same shard (queues database).

Catalog Database

If sharding is enabled, a catalog database has to be created. The catalog database maintains the mapping from queues to queues databases. The storage layer has to guarantee the durability and availability of the data.

Install Queuing Service

Before you install and configure the queuing service, make sure you meet the requirements in the section called "System Requirements".

Minimum Scalable HA Setup

OpenStack Queuing Service has two main layers. The first is the transport (queuing application) layer, which provides the RESTful interface; the second is the storage layer, which keeps all the data and metadata about queues and messages.

For an HA setup, a load balancer has to be placed in front of the web servers. Load balancer setup is out of the scope of this document.

For storage we will use MongoDB in order to provide high availability with minimum administration overhead. For transport, we will use uwsgi.

To keep a small footprint while providing HA, we will use two web servers, which will host the application, and three MongoDB servers (configured as a replica set), which will host the catalog and queues databases. At larger scale, it is advisable to host the catalog database and the queues databases on separate MongoDB replica sets.

Package Installation on Red Hat

Install MongoDB on Database Servers

Install MongoDB on the three database servers and set up the replica set.

Configure Package Management System (YUM)

Create a /etc/yum.repos.d/mongodb.repo file to hold the following configuration information for the MongoDB repository:

If you are running a 64-bit system, use the following configuration:

[mongodb]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/
gpgcheck=0
enabled=1

If you are running a 32-bit system, which is not recommended for production deployments, use the following configuration:

[mongodb]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/i686/
gpgcheck=0
enabled=1
Install Packages

Issue the following command (as root or with sudo) to install the latest stable version of MongoDB and the associated tools:

# yum install mongo-10gen mongo-10gen-server

Edit /etc/mongod.conf:

logpath=/var/log/mongo/mongod.log
logappend=true
fork = true
dbpath=/var/lib/mongo
pidfilepath = /var/run/mongodb/mongod.pid
replSet = catalog
nojournal = true
profile = 1
slowms = 200
oplogSize = 2048

Start MongoDB on all database servers.

# service mongod start
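Before configuring the replica set, you can verify that the daemon is up and accepting connections (this assumes the mongo shell is installed locally, as it is with the packages above):

```shell
# Should print "ok" : 1 if mongod is up and reachable on the default port.
mongo --eval "printjson(db.runCommand({ping: 1}))"
```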
Configure Replica Set

Assuming that the primary MongoDB server's hostname is mydb0.example-queues.net, once you have installed MongoDB on all three servers, go to mydb0 and run the commands below:

mydb0# mongo local --eval "printjson(rs.initiate())"
mydb0# mongo local --eval "printjson(rs.add('mydb1.example-queues.net'))"
mydb0# mongo local --eval "printjson(rs.add('mydb2.example-queues.net'))"

To check whether the replica set is established, run this command:

mydb0:~# mongo local --eval "printjson(rs.status())"

Install memcached on Web Servers

Install memcached on the web servers in order to cache Identity tokens and catalog mappings.

web# yum install memcached

Start memcached service.

web# service memcached start

Install uwsgi on Web Servers

web# yum -y install python-pip
web# pip install uwsgi

Configure OpenStack Marconi (Queuing Service)

On the web servers, run these commands:

web# git clone https://github.com/openstack/marconi.git .
web# pip install . -r ./requirements.txt --upgrade --log /tmp/marconi-pip.log

Create the /srv/marconi directory to store the related configuration files.

Create /srv/marconi/marconi_uwsgi.py with the following content:

from keystoneclient.middleware import auth_token
from marconi.transport.wsgi import app

app = auth_token.AuthProtocol(app.app, {})

Create /srv/marconi/uwsgi.ini file with the following content:

[uwsgi]
http = 192.168.192.168:80
daemonize = /var/log/marconi.log
pidfile = /var/run/marconi.pid
gevent = 2000
gevent-monkey-patch = true
listen = 1024
enable-threads = true
module = marconi_uwsgi:app
workers = 4

The uwsgi configuration options above can be modified for different performance requirements.

Create the Marconi configuration file /etc/marconi.conf:

[DEFAULT]
# Show more verbose log output (sets INFO log level output)
#verbose = False

# Show debugging output in logs (sets DEBUG log level output)
#debug = False

# Sharding and admin mode configs
sharding      = True
admin_mode    = True

# Log to this file!
log_file = /var/log/marconi-queues.log
debug    = False
verbose  = False

# This is taken care of in our custom app.py, so disable here
;auth_strategy = keystone

[keystone_authtoken]
admin_password = < admin password >
admin_tenant_name = < admin tenant name >
admin_user = < admin user >
auth_host = < identity service host >
auth_port = '443'
auth_protocol = 'https'
auth_uri = < identity service uri >
auth_version = < auth version >
token_cache_time = < token cache time >
memcache_servers = 'localhost:11211'

[oslo_cache]
cache_backend = memcached
memcache_servers = 'localhost:11211'

[drivers]
# Transport driver module (e.g., wsgi, zmq)
transport = wsgi
# Storage driver module (e.g., mongodb, sqlite)
storage = mongodb

[drivers:storage:mongodb]
uri = mongodb://mydb0,mydb1,mydb2:27017/?replicaSet=catalog&w=2&readPreference=secondaryPreferred
database = marconi
partitions = 8

# Maximum number of times to retry a failed operation. Currently
# only used for retrying a message post.
;max_attempts = 1000

# Maximum sleep interval between retries (actual sleep time
# increases linearly according to number of attempts performed).
;max_retry_sleep = 0.1

# Maximum jitter interval, to be added to the sleep interval, in
# order to decrease probability that parallel requests will retry
# at the same instant.
;max_retry_jitter = 0.005

# Frequency of message garbage collections, in seconds
;gc_interval = 5 * 60

# Threshold of number of expired messages to reach in a given
# queue, before performing the GC. Useful for reducing frequent
# locks on the DB for non-busy queues, or for worker queues
# which process jobs quickly enough to keep the number of in-
# flight messages low.
#
# Note: The higher this number, the larger the memory-mapped DB
# files will be.
;gc_threshold = 1000

[limits:transport]
queue_paging_uplimit = 1000
metadata_size_uplimit = 262144
message_paging_uplimit = 10
message_size_uplimit = 262144
message_ttl_max = 1209600
claim_ttl_max = 43200
claim_grace_max = 43200

[limits:storage]
default_queue_paging = 10
default_message_paging = 10

Start the queuing service:

#/usr/bin/uwsgi --ini /srv/marconi/uwsgi.ini
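After starting uwsgi, a quick way to confirm that the transport layer is up is to hit the v1 health resource; a 204 No Content response indicates the service is running:

```shell
# Expect "HTTP/1.1 204 No Content" if the service is healthy.
curl -i http://localhost:80/v1/health
```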

Configure Shards

To have a functional queuing service, we need to define a shard. On one of the web servers, run this command (the example uses the httpie client):

http put localhost:80/v1/shards/shard1 weight:=100 uri='mongodb://mydb0,mydb1,mydb2:27017/?replicaSet=catalog&w=2&readPreference=secondaryPreferred' options:='{"partitions": 8}' X-Auth-Token:$TOKEN

Above, $TOKEN is the authentication token retrieved from the Identity service. If you choose not to enable Keystone authentication, you do not have to pass a token.

Reminder: in larger deployments, the catalog database and the queues databases (shards) should be on separate MongoDB replica sets.
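To verify that the shard was registered, you can list the shards through the same admin API (available because admin_mode is enabled in /etc/marconi.conf; $TOKEN is your identity token as above):

```shell
# Lists registered shards; shard1 should appear with weight 100.
http get localhost:80/v1/shards X-Auth-Token:$TOKEN
```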

Create A Queue

Define these variables

# USERNAME=my identity username
# APIKEY=my-long-api-key
# ENDPOINT=test-queue.mydomain.com < keystone endpoint >
# QUEUE=test-queue
# CLIENTID=c5a6114a-523c-4085-84fb-533c5ac40789
# HTTP=http
# PORT=80
# TOKEN=9abb6d47de3143bf80c9208d37db58cf < your token here >

Create the queue

# curl -i -X PUT $HTTP://$ENDPOINT:$PORT/v1/queues/$QUEUE -H "X-Auth-Token: $TOKEN" -H "Client-ID: $CLIENTID"
HTTP/1.1 201 Created
content-length: 0
location: /v1/queues/test-queue

The HTTP/1.1 201 Created response shows that the service is functioning properly.
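As a further check, you can post a test message to the new queue and read it back, reusing the variables defined above (the message body is arbitrary JSON):

```shell
# Post a message with a 300-second TTL to the queue created above.
curl -i -X POST $HTTP://$ENDPOINT:$PORT/v1/queues/$QUEUE/messages \
  -H "X-Auth-Token: $TOKEN" -H "Client-ID: $CLIENTID" \
  -H "Content-Type: application/json" \
  -d '[{"ttl": 300, "body": {"check": "ok"}}]'

# List messages in the queue; echo=true also returns messages posted
# by this same Client-ID.
curl -i "$HTTP://$ENDPOINT:$PORT/v1/queues/$QUEUE/messages?echo=true" \
  -H "X-Auth-Token: $TOKEN" -H "Client-ID: $CLIENTID"
```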
