
Jenkins, NYC, May 2012

Conference page

keynote, Kohsuke Kawaguchi

  • people have come from all over: Israel, Prague, LA.
  • jenkins evolved from a few shell scripts, beginning in 2004, because the creator kept embarrassing himself with faulty builds.

news

  • plugin growth is steady
  • installation at ~40k servers
  • Jenkins has joined a non-profit org, so fundraising is tax-deductible
  • trying to open up governance, make administration transparent
  • more native jenkins packages (ubuntu, os x, et al.)
  • UI improvements
  • can now develop plugins in ruby

upcoming features

  • builds triggered from pull-requests on GitHub (surprised that's new)
  • Kohsuke wants to make test parallelization easier with slaves
  • distributed execution assistance. Jenkins helps you with distributed computing. (violation of toolbox principle? jenkins is for CI...)

Advanced Continuous Deployment with Jenkins, Andrew Phillips

  • CI: mostly unit-tests, sometimes also functional tests.

  • CI shortcomings:

    • deployment isn't often tested
    • app isn't tested on the target platform
    • (not convinced these are shortcomings relevant to Percolate, since we're on AWS. We should be replicating production on QA at some scale.)
  • building used to take dedicated engineers; now it's push-button. we want to do the same for deploys.

enter CD

  • strictest definition: every commit goes to production.
    • worthwhile goal

non-unit tests

  • notion of smoke tests: small tests to ensure the app actually runs instead of just running unit tests
  • functional tests on target platform with selenium
  • performance tests (Grinder or Mechanize or whatever)
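
A smoke test in this spirit can be tiny --- a sketch (the base URL and endpoint paths here are hypothetical):

```python
# Minimal smoke test: hit a few endpoints on the freshly deployed app and
# fail loudly if any of them don't answer with a 200.
import sys
import urllib.request

BASE_URL = "http://qa.example.com"          # hypothetical QA host
ENDPOINTS = ["/", "/health", "/api/ping"]   # hypothetical endpoints

for path in ENDPOINTS:
    try:
        resp = urllib.request.urlopen(BASE_URL + path, timeout=10)
    except Exception as exc:
        sys.exit("smoke FAILED on %s: %s" % (path, exc))
    if resp.status != 200:
        sys.exit("smoke FAILED on %s: HTTP %d" % (path, resp.status))
print("smoke passed")
```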

CI flow

  • dev

    • full nightly build
    • tag package as "ready"
  • QA

    • deploy "released" package to test env
    • do tests
      • (this sounds like we should have a QA branch; I'm thinking we only merge to master from QA.)
  • ops

    • deploy

"the dev commandments"

  • single package for deploy, independent of target

    • (we already have this, basically, though our treatment of settings files may be a violation of SPOT)
  • put all components needed for environment in a package (even environment preconditions).

    • (this is difficult for us because our system is distributed, but we have CFEngine for this.)

"the ops commandments"

  • provide fully configured infrastructure items (e.g. app, db, redis, rabbit boxes).

diy with jenkins

  • (TODO: cargo ssh?)

  • there are a lot of challenges associated with the diy approach

    • exposed security credentials?
    • how to get back to previous states?

no need to diy --- enter Deployit

  • (TODO: deployit jenkins plugin?)

  • step 1: package the build

    • specify SQL scripts to run, resources to include, etc.
      • i.e., specify components of an environment to include
    • specify smoke tests to run
      • wget or something similarly minimal
    • specify a target env
    • press go
  • deployit is analogous to an incremental compiler for deploys (sketch after this list)

    • only run new diffs
  • (I get the sense that this is more useful for JVM projects, where there's a more defined build process and more configuration involved)

  • main objective is to create a separation between users of the deploy process and deploy specialists who tune
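
The incremental idea, sketched (this is not Deployit's actual API, just the concept):

```python
# Diff the currently deployed component versions against the incoming
# package and emit steps only for what changed.
deployed = {"app": "1.4.0", "schema": "12", "assets": "1.4.0"}
incoming = {"app": "1.5.0", "schema": "12", "assets": "1.5.0"}

steps = [
    "upgrade %s %s -> %s" % (c, deployed.get(c, "none"), v)
    for c, v in incoming.items()
    if deployed.get(c) != v
]
steps += ["remove %s" % c for c in set(deployed) - set(incoming)]

print("\n".join(steps) or "nothing to do")
```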

what do I think of deployit?

  • impressive UI
  • I like that individual deploy steps are made explicit and automatically generated.
  • a deploy recipe is presented after specifications are made; only includes incremental changes.
  • a lot of process. may be valuable for org with distinct devs/qa/ops divisions. value is limited for us because we're all effectively devops
  • a(n exact) record of deploys is nice, and could be valuable. rollbacks are probably easier.

best practices

  1. build complete packages
  2. lock down credentials
  3. forge and share deployment patterns

TODO: visuwall

Nice display of deploy and build information

Massively Continuous Integration: A Case Study for Jenkins at Cloudscale, Jesse Dowdle & Joel Johnson

  • guys from AtTask
  • they're into project management software
  • 70 devs

their CI before

  • scripted scenario tests (selenium)

    • brittle
    • slow
    • 3-5 days for acceptance
  • ship monthly

their CI now

  • tight unit-like UI tests
  • Jenkins is SPOT
  • 30-45 mins to certify release for production

what is their def of ci

  • VCS into build service into test runner.
  • From test runner directly to app stack.

"true" CI

  • how often to integrate?
    • every commit, it sounds like
  • what tests indicate integration?
    • all tests
  • what to know for release?

attask's pipeline

  • push

    • jenkins immediately spins up parallel jobs
    • use yum for packaging
    • selenium grid
  • while installing remotely, jenkins slaves run unit and int tests

  • after installed remotely, crank selenium tests in parallel

  • teardown

queuing theory

  • more slaves imply faster execution --- duh
  • made dependency graph; anything independent is parallelized --- duh
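
The dependency-graph trick is easy to sketch with Python's stdlib (job names are made up; run_job stands in for triggering a Jenkins job):

```python
# Sketch: topologically sort the job graph and run every "ready" job in
# parallel. Requires Python 3.9+ for graphlib.
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

graph = {                       # job -> jobs it depends on
    "unit-tests": {"build"},
    "int-tests": {"build"},
    "deploy-qa": {"build"},
    "selenium": {"deploy-qa"},
}

def run_job(name):
    print("running", name)      # stand-in for kicking off a Jenkins job
    return name

ts = TopologicalSorter(graph)
ts.prepare()
with ThreadPoolExecutor() as pool:
    while ts.is_active():
        # everything whose dependencies are satisfied runs concurrently
        for finished in pool.map(run_job, ts.get_ready()):
            ts.done(finished)
```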

dynamic slave allocation

  • using the jenkins ec2 plugin, jenkins allocates slaves on the fly to handle job load (see the sketch after this list)

    • automatic teardown
  • use of selenium grids
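
The ec2 plugin does this from inside Jenkins, but conceptually an on-demand slave is just boot, use, terminate. A sketch with boto3 (AMI id, key pair, and instance type are placeholders):

```python
# Sketch of on-demand slave allocation; the ec2 plugin does the equivalent.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

def allocate_slave():
    (instance,) = ec2.create_instances(
        ImageId="ami-xxxxxxxx",      # hypothetical pre-baked slave image
        InstanceType="m5.large",
        KeyName="jenkins-slave",     # hypothetical key pair
        MinCount=1,
        MaxCount=1,
    )
    instance.wait_until_running()
    return instance

def teardown_slave(instance):
    instance.terminate()             # the "automatic teardown" step
```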

divide/conquer

  • test suites should support sharding
  • different environment requirements indicate tests that can be run in parallel
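
Sharding support can be as simple as deterministically hashing test names into buckets, so each slave computes its own slice of the suite. A sketch:

```python
# Deterministic sharding: hash each test name into one of num_shards buckets.
import zlib

def shard(tests, index, num_shards):
    return [t for t in tests if zlib.crc32(t.encode()) % num_shards == index]

tests = ["test_login", "test_signup", "test_billing", "test_search"]
for i in range(2):
    print("shard %d:" % i, shard(tests, i, 2))
```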

selenium

  • took three days to run/analyze tests

  • 1800 tests in selenium 1

    • 4hr runtime
    • someone went crazy with selenium ide
  • 750 tests in selenium 2

    • 30 mins to run
    • unit-style
  • test reduction by finding duplicate tests/code, refactoring UI

  • with selenium grid: 4x speedup

cloud formations for testing

  • json file describes stack
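
Roughly what that looks like: a minimal CloudFormation template, here generated from Python (the AMI id and instance type are placeholders):

```python
# A minimal CloudFormation template describing a throwaway test stack.
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "throwaway test stack",
    "Resources": {
        "TestAppServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-xxxxxxxx",   # hypothetical
                "InstanceType": "m5.large",
            },
        },
    },
}

with open("test-stack.json", "w") as f:
    json.dump(template, f, indent=2)
```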

use of jenkins

  • helps link changesets to bugs

  • description plugin

    • tell jenkins to show certain info for each build
  • view plugin

    • display relevant builds based on any criterion
  • developer accountability has hugely increased

  • plugin that has selenium take a screenshot of the browser at the time of test failure (sketch of the idea below)

  • really nice test history plugin

  • they've integrated attask with jenkins to link failures with tickets (very cool!)
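
The screenshot idea, sketched DIY-style with the Selenium Python bindings rather than the plugin they mentioned (URL and assertion are made up):

```python
# Wrap a UI test so any exception grabs the browser state before re-raising.
import contextlib
from selenium import webdriver

@contextlib.contextmanager
def screenshot_on_failure(driver, name):
    try:
        yield
    except Exception:
        driver.save_screenshot(name + ".png")   # archive as a build artifact
        raise

driver = webdriver.Firefox()
try:
    with screenshot_on_failure(driver, "test_login"):
        driver.get("http://qa.example.com/login")   # hypothetical URL
        assert "Login" in driver.title
finally:
    driver.quit()
```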

investment

  • To release 100x a year:
    • was: $2mil to certify
    • now, with true CI: $265k
      • roughly 1/8th of what it was

summary

Get cloud involved, scale testing horizontally

jenkins plugins used:

  • ec2
  • aws cloudformation
  • pipeline view
  • description setter
  • git
  • parameterized trigger plugin (to join parallel tests)

Scaling OpenStack Development with GIT, Gerrit and Jenkins, Monty Taylor

  • master always works
  • can't commit to master unless it works
  • process consistent across projects

systems

  • gerrit (code review, git)

  • jenkins

  • platform

    • python (pep8)
    • openstack.common
    • virtualenv/pip
    • irc
    • devstack

gated master

  • ensures code quality
  • protects devs
    • failing test cases are dealt with by their owners; no legacy failures

process flow

  1. write code, submit to gerrit for review
  2. run through automated code checks
  3. peer reviewed
  4. accepted/rejected by core team
  5. pre-merge automated tests
  6. code is merged or rejected
  7. code is run through post-merge automated checks

gerrit

  • developed by Google for Android
  • patch review system
  • integration points: hooks, json queries, ssh-based event stream
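
The ssh event stream is the interesting one: `gerrit stream-events` emits one JSON event per line. A sketch of a consumer (host and account are made up):

```python
# Consume Gerrit's ssh event stream: one JSON event per line
# (patchset-created, comment-added, ...).
import json
import subprocess

proc = subprocess.Popen(
    ["ssh", "-p", "29418", "jenkins@gerrit.example.com",
     "gerrit", "stream-events"],
    stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    event = json.loads(line)
    if event.get("type") == "patchset-created":
        print("would trigger a build for change",
              event["change"]["number"])
```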

pre-merge check

  • patch is approved, then tests are run
  • jenkins tells gerrit to merge after tests have passed
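
A pre-merge job boils down to: run the checks, then report a Verified vote back over ssh. A sketch (host, change number, and patchset are placeholders, and the check commands are just examples):

```python
# Run checks, then vote Verified +1/-1 on the change via gerrit's ssh CLI.
import subprocess

def checks_pass():
    return (subprocess.call(["pep8", "."]) == 0 and
            subprocess.call(["python", "-m", "pytest"]) == 0)

vote = "+1" if checks_pass() else "-1"
subprocess.check_call(
    ["ssh", "-p", "29418", "jenkins@gerrit.example.com",
     "gerrit", "review", "--verified", vote, "1234,5"])
```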

states of a patch

  1. submitted
  2. verified (that it works)
  3. reviewed (style, performance, whatever)
  4. accepted
  5. landed (would say merge but it's not always a merge)

review

  • gerrit review comments can trigger jenkins events
    • (assign asana tasks based on comments?)

not doing the github model

  • pushing directly to ref instead of PR
    • you want to submit a patch targeting this ref (usually master)
    • this is cumbersome, so they wrote git review
      • Wikipedia and clusterfs are using it

types of tests

  • unit

  • integration

    • should run on an environment as close to production (virtual or real) as possible
    • may be impossible for dev to run locally, but we want to mitigate that so that everything is locally repeatable
  • want to test results of the change, not the change itself

    • (not sure how they differ...)

problems

  • treat all network access as a failure waiting to happen
    • assume that anything retrieving an external network resource is going to fail
    • to get around this, snapshot dependencies and external resources
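
For a Python stack, snapshotting dependencies can be as simple as mirroring packages locally once and installing with the network off. A sketch (these pip flags are real):

```python
# Snapshot dependencies once, then install offline so builds never reach
# out to PyPI mid-test.
import subprocess

# one-time / nightly snapshot:
subprocess.check_call(
    ["pip", "download", "-r", "requirements.txt", "-d", "wheelhouse/"])

# every build installs only from the snapshot:
subprocess.check_call(
    ["pip", "install", "--no-index", "--find-links", "wheelhouse/",
     "-r", "requirements.txt"])
```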

jclouds

  • cross-cloud interaction for jenkins

how to cover an untested codebase

  • this guy tested from day 1

  • untested code is not reviewed positively, so all his devs are incentivized to write tests

How the guy who wrote jenkins does testing

  • Push directly to Jenkins, let jenkins merge upstream into the changeset, test the merged code, then push upstream if tests pass.

  • Basically, jenkins is the git server. Viable with JGit.

  • No need to then introduce extra infrastructure, gerrit, etc.
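
The flow he described, as plain git operations run by the CI server (remote and branch names and the test command are placeholders):

```python
# Merge upstream into the proposed change, test the *merged* tree, and
# only push upstream on success.
import subprocess

def sh(*cmd):
    subprocess.check_call(cmd)

sh("git", "fetch", "upstream")
sh("git", "checkout", "proposed-change")       # the pushed changeset
sh("git", "merge", "upstream/master")          # merge upstream into it
if subprocess.call(["python", "-m", "pytest"]) == 0:
    sh("git", "push", "upstream", "HEAD:master")   # land it
```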

Continuous Development with Chef, Vagrant, VeeWee and Jenkins; Emmanuel Mwangi, SendGrid

  • qa engineer

  • not much manual testing at sendgrid

  • sendgrid is big on transactional email

  • sometimes 100+ million emails a day go out

    • billions of events per day
  • SQL db -> json api

    • to abstract away from any specific datastore
  • split up reading, writing, arithmetic

  • got serious about testing: 90% code coverage

    • testing wasn't enough; configuration broke a lot of tests

introduction of chef

  • introduced chef to manage complexity of config

veewee

  • bad name, good product
  • lots of activity on gh
  • automates creation of vagrant boxes
  • .iso + chef script
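
Once veewee has produced the base box, the test loop on the Vagrant side might look like this (vagrant's up/ssh/destroy commands are real; the test command inside the VM is made up):

```python
# Boot the chef-provisioned Vagrant VM, run the suite inside, tear it down.
import subprocess

subprocess.check_call(["vagrant", "up"])       # boots the box + runs chef
try:
    subprocess.check_call(
        ["vagrant", "ssh", "-c", "cd /vagrant && python -m pytest"])
finally:
    subprocess.check_call(["vagrant", "destroy", "-f"])
```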