@pigeonflight
Last active January 4, 2016 19:27
Notes for migrating a standalone Zope/Plone to a zeo.cfg-powered Zope/Plone cluster.

We assume that you are starting with a standalone, Unified Installer-based configuration.

Add the following files to your buildout and edit the extends section of your buildout.cfg accordingly, then run bin/buildout -Nvv. You may customize the ports on which each service runs in zeo.cfg. The result will be an HAProxy-balanced cluster of two ZEO clients.

Install the new zeo files

From your buildout folder run the following command:

wget -qO- https://goo.gl/Svv16i | bash 

OR

Clone the repo:

git clone git@gist.github.com:5ea6ecf1e843f152a63c.git
cp 5ea6ecf1e843f152a63c/* ./

OR

Download the gist into your buildout folder:

wget https://gist.github.com/pigeonflight/5ea6ecf1e843f152a63c/archive/ad7afbdbe00b5c5ce6802115d631e3f10beeaa39.zip
unzip ad7afbdbe00b5c5ce6802115d631e3f10beeaa39.zip
mv 5ea6ecf1e843f152a63c-ad7afbdbe00b5c5ce6802115d631e3f10beeaa39/* ./

Edit the buildout.cfg file

Change your buildout.cfg in three places:

  1. Near line 38 (if you used the UI installer), add zeo.cfg, haproxy.cfg and supervisord.cfg to the extends section:

    extends =
        base.cfg
        versions.cfg
    #   http://dist.plone.org/release/5.0/versions.cfg
        zeo.cfg
        haproxy.cfg
        supervisord.cfg
  2. Near line 146 (if you used the UI installer), change parts = to parts += and comment out instance (or remove it):

    parts +=
    #   instance
        repozo
        backup
        zopepy
        unifiedinstaller
  3. In the [instance] section, make sure there is a zcml = line, even if it is left empty; zeo.cfg references ${instance:zcml} (and ${instance:eggs}), so the option must exist.
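
For step 3, the relevant part of the [instance] section might look like this (a sketch; the "..." stands for your existing options, which stay as they are — the zcml option just has to be present):

    [instance]
    ...
    zcml =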

Run buildout

Then run buildout:

bin/buildout

Start the server using supervisor

bin/supervisord

Stop the server using supervisor

bin/supervisorctl shutdown

Restarting the server

Restart everything:

bin/supervisorctl restart all

Restart a single program, e.g. zeo:

bin/supervisorctl restart zeo

Changing the default ports (important for use on c9)

On c9 your service MUST run on port 8080.

Edit zeo.cfg to change the default ports. (You'll want haproxy to run on port 8080 if you are completely replacing the instance.)

It will look like this when you're done:

[ports]
zeoserver = 12000
instance1 = 12030
instance2 = 12031 
instance-debug = 12038
haproxy = 8080
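
If you change these values, each service needs a distinct port. As a quick sanity check (my own sketch, not part of the gist), Python's configparser can read the [ports] section and flag collisions:

```python
from configparser import ConfigParser

# The [ports] section from zeo.cfg, inlined here for illustration.
PORTS_CFG = """
[ports]
zeoserver = 12000
instance1 = 12030
instance2 = 12031
instance-debug = 12038
haproxy = 8080
"""

parser = ConfigParser()
parser.read_string(PORTS_CFG)
ports = {name: int(value) for name, value in parser.items("ports")}

# Every service needs its own port, and c9 requires the public one on 8080.
assert len(set(ports.values())) == len(ports), "duplicate port assignment"
assert ports["haproxy"] == 8080, "c9 requires the public service on 8080"
print(ports)
```

To check your real file, point parser.read() at zeo.cfg instead of the inlined string.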

Then rerun buildout to update the configuration:

bin/buildout

You can reread your supervisor config and restart as follows:

bin/supervisorctl reread
bin/supervisorctl update
haproxy.cfg:

[buildout]
parts +=
    haproxy-build
    haproxy-conf

[haproxy-build]
recipe = plone.recipe.haproxy
target = linux26
pcre = 1

[haproxy-conf]
recipe = collective.recipe.template
input = ${buildout:directory}/haproxy.conf.in
output = ${buildout:directory}/haproxy.conf
maxconn = 200
ulimit-n = 65536
bind = 0.0.0.0:${ports:haproxy}

[versions]
collective.recipe.template = 1.5
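
The haproxy.conf.in file below is a template: collective.recipe.template copies it to haproxy.conf, filling in ${section:option} references from the buildout parts. Roughly like this (a simplified sketch; the real recipe delegates to zc.buildout's own interpolation machinery):

```python
import re

# Hypothetical option store standing in for buildout's resolved parts.
options = {
    ("haproxy-conf", "maxconn"): "200",
    ("haproxy-conf", "bind"): "0.0.0.0:8080",
}

def render(template: str) -> str:
    """Replace every ${section:option} with its value from `options`."""
    def sub(match: re.Match) -> str:
        section, option = match.group(1), match.group(2)
        return options[(section, option)]
    return re.sub(r"\$\{([\w-]+):([\w-]+)\}", sub, template)

template = "maxconn ${haproxy-conf:maxconn}\nbind ${haproxy-conf:bind}\n"
print(render(template))
```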
haproxy.conf.in:

global
  log 0.0.0.0 local6
  maxconn ${haproxy-conf:maxconn}
  nbproc 1
  # ulimit-n ${haproxy-conf:ulimit-n}

defaults
  mode http
  option httpclose
  # Remove requests from the queue if people press stop button
  option abortonclose
  # Try to connect this many times on failure
  retries 3
  # If a client is bound to a particular backend but it goes down,
  # send them to a different one
  option redispatch
  monitor-uri /haproxy-ping
  timeout connect 7s
  timeout queue 300s
  timeout client 300s
  timeout server 300s
  # Enable status page at this URL, on the port HAProxy is bound to
  stats enable
  stats uri /haproxy-status
  stats refresh 5s
  stats realm Haproxy statistics

frontend zopecluster
  bind ${haproxy-conf:bind}
  default_backend zope

# Load balancing over the zope instances
backend zope
  # Use Zope's __ac cookie as a basis for session stickiness if present.
  appsession __ac len 32 timeout 1d
  # Otherwise add a cookie called "serverid" for maintaining session stickiness.
  # This cookie lasts until the client's browser closes, and is invisible to Zope.
  cookie serverid insert nocache indirect
  # If no session found, use the roundrobin load-balancing algorithm to pick a backend.
  balance roundrobin
  # Use / (the default) for periodic backend health checks
  option httpchk
  # Server options:
  # "cookie" sets the value of the serverid cookie to be used for the server
  # "maxconn" is how many connections can be sent to the server at once
  # "check" enables health checks
  # "rise 1" means consider Zope up after 1 successful health check
  server plone0101 127.0.0.1:${ports:instance1} cookie p0101 check maxconn 20 rise 1
  server plone0102 127.0.0.1:${ports:instance2} cookie p0102 check maxconn 20 rise 1
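
The backend section above gives sticky sessions via the serverid cookie and falls back to roundrobin for new clients. A toy Python model of that selection logic (illustrative only; the cookie values come from the config above):

```python
from itertools import cycle

class Balancer:
    """Toy model of the sticky-cookie + roundrobin backend selection."""

    def __init__(self, servers):
        self.servers = set(servers)
        self._rr = cycle(sorted(servers))  # deterministic roundrobin order

    def pick(self, cookie=None):
        # A known serverid cookie pins the client to its backend;
        # "option redispatch" means an unknown/dead one is ignored.
        if cookie in self.servers:
            return cookie
        return next(self._rr)

lb = Balancer(["p0101", "p0102"])
first = lb.pick()                 # new client: roundrobin choice
assert lb.pick(first) == first    # sticky: same backend next time
```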
supervisord.cfg:

[buildout]
parts +=
    supervisor
    supervisor-conf
    supervisor-crontab

[supervisor]
recipe = zc.recipe.egg
eggs = supervisor

[supervisor-conf]
recipe = collective.recipe.template
input = ${buildout:directory}/supervisord.conf.in
output = ${buildout:directory}/supervisord.conf

[supervisor-crontab]
recipe = z3c.recipe.usercrontab
times = @reboot
command = ${buildout:bin-directory}/supervisord -c ${supervisor-conf:output}
supervisord.conf.in:

[unix_http_server]
file=${buildout:directory}/var/supervisor.sock
chmod=0600

[supervisorctl]
serverurl=unix://${buildout:directory}/var/supervisor.sock

[rpcinterface:supervisor]
supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface

[supervisord]
logfile=${buildout:directory}/var/log/supervisord.log
logfile_maxbytes=5MB
logfile_backups=10
loglevel=info
pidfile=${buildout:directory}/var/supervisord.pid
childlogdir=${buildout:directory}/var/log
nodaemon=false ; (start in foreground if true; default false)
minfds=1024 ; (min. avail startup file descriptors; default 1024)
minprocs=200 ; (min. avail process descriptors; default 200)
directory=${buildout:directory}

[program:zeo]
command = ${buildout:directory}/bin/zeoserver foreground
redirect_stderr = true
autostart = true
autorestart = true
directory = ${buildout:directory}
stdout_logfile = ${buildout:directory}/var/log/zeo-stdout.log
stderr_logfile = ${buildout:directory}/var/log/zeo-stderr.log

[program:1]
command = ${buildout:directory}/bin/instance1 console
redirect_stderr = true
autostart = true
autorestart = true
directory = ${buildout:directory}
stdout_logfile = ${buildout:directory}/var/log/instance1-stdout.log
stderr_logfile = ${buildout:directory}/var/log/instance1-stderr.log
[program:2]
command = ${buildout:directory}/bin/instance2 console
redirect_stderr = true
autostart = true
autorestart = true
directory = ${buildout:directory}
stdout_logfile = ${buildout:directory}/var/log/instance2-stdout.log
stderr_logfile = ${buildout:directory}/var/log/instance2-stderr.log

[group:instance]
programs = 1,2

[program:haproxy]
command = ${buildout:directory}/bin/haproxy -f haproxy.conf
redirect_stderr = true
autostart = true
autorestart = true
directory = ${buildout:directory}
stdout_logfile = ${buildout:directory}/var/log/haproxy-stdout.log
stderr_logfile = ${buildout:directory}/var/log/haproxy-stderr.log
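
A quick way to sanity-check an edited supervisord.conf (my own sketch, using a trimmed copy of the sections above) is to verify that every [program:x] defines a command and that the [group:instance] members actually exist:

```python
from configparser import ConfigParser

# A trimmed supervisord.conf, inlined for illustration; interpolation is
# disabled so literal values pass through untouched.
CONF = """
[program:zeo]
command = bin/zeoserver foreground

[program:1]
command = bin/instance1 console

[program:2]
command = bin/instance2 console

[group:instance]
programs = 1,2
"""

parser = ConfigParser(interpolation=None)
parser.read_string(CONF)

# Collect every program name and make sure each one has a command.
programs = {s.split(":", 1)[1] for s in parser.sections() if s.startswith("program:")}
for name in programs:
    assert parser.get(f"program:{name}", "command"), f"{name} has no command"

# Every member of the group must map to a [program:...] section.
group = parser.get("group:instance", "programs").split(",")
assert set(group) <= programs
print(sorted(programs))
```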
zeo.cfg:

[buildout]
parts +=
    zeoserver
    instance1
    instance2
    instance-debug

[ports]
zeoserver = 12000
instance1 = 12030
instance2 = 12031
instance-debug = 12038
haproxy = 12001

[config]
zeo-address = ${ports:zeoserver}
instance1-address = ${ports:instance1}
instance2-address = ${ports:instance2}
instance-debug-address = ${ports:instance-debug}
system-user =

[instance-settings]
user = admin:34adflllnereav
debug-mode = off
verbose-security = off
blob-storage = ${buildout:directory}/var/blobstorage
effective-user = ${config:system-user}
eggs =
    ${instance:eggs}
zcml =
    ${instance:zcml}
# resources = ${buildout:directory}/resources
event-log-max-size = 5 MB
event-log-old-files = 5
access-log-max-size = 20 MB
access-log-old-files = 10
environment-vars =
#   PTS_LANGUAGES en
    zope_i18n_compile_mo_files true

[zeo-instance-settings]
instance-clone = instance-settings
zeo-client = True
zeo-address = ${ports:zeoserver}
shared-blob = on

[zeoserver]
recipe = plone.recipe.zeoserver
zeo-address = ${config:zeo-address}
pack-days = 7
effective-user = ${config:system-user}

[instance1]
recipe = collective.recipe.zope2cluster
<= zeo-instance-settings
http-address = ${config:instance1-address}

# You can uncomment this line to add an additional instance to the zeocluster
[instance2]
recipe = collective.recipe.zope2cluster
<= zeo-instance-settings
http-address = ${config:instance2-address}

[instance-debug]
recipe = collective.recipe.zope2cluster
<= zeo-instance-settings
http-address = ${config:instance-debug-address}
debug-mode = on
verbose-security = on

[zopepy]
recipe = zc.recipe.egg
eggs = ${instance-settings:eggs}
interpreter = zopepy
scripts = zopepy