@djmitche
Last active July 25, 2016
Notes on running Android builds in TaskCluster (bug 1118394)

Goal

"Drive Android builds out of taskcluster", meaning: run builds of Firefox for Android (we build Firefox, not Android itself) within TaskCluster.

The idea is to get something non-b2g from releng running in taskcluster, both as a learning opportunity and as a proof of concept.

Task List

  • build a docker image for building Android using the tools in alder
  • build Android by hand / shell script in that docker image
  • do a basic Android build in a simple, docker-compatible way
    • handle tooltool cache
    • handle ccache for m-c and tools
    • encode the build process in a mozharness script
      • put that script in-tree
    • put all the make commands under 'mach build' (see below)
  • coordinate with mshal / @mrrrgn
    • post my docker patches for fb from @mrrrgn
    • refactor my docker patches to match https://github.com/mrrrgn/ff64-build/blob/master/Dockerfile
    • try running an fx desktop build in that container
    • Factor in some stuff from my first attempt
      • use tc-vcs (check out in the location mozharness will try to check it out later)
      • tc-vcs, tooltool caches
      • move some build.sh env's to docker
    • update tools repo (already installed in image) with tc-vcs in build.sh
    • Start Xvfb
    • try running an Android build in that container using https://bugzilla.mozilla.org/show_bug.cgi?id=1155349
    • parameterize other mozharness options
    • install java
      • newbug: do so via tooltool
  • rebase onto the new tooltool
    • allow specification of relengapi token via env variable
  • re-evaluate subsequent steps (maybe no in-tree MH script, maybe a script based on fx_desktop, etc.)
  • upload/export artifacts somewhere
  • use TC's usual mechanism for caching objdirs
  • generalize into a "mozharness runner" that takes a MH script and config as args
    • compare to build-emulator.sh
  • upload docker images somewhere they can be used by taskcluster
  • run a build in a hand-written, hand-submitted task
  • handle uploads / artifacts
  • figure out how to send a relengapi token without mozharness happily logging it everywhere -- just unset in build.sh?
  • hook up to try

Improvements after hooking up to try (all made into bugs)

  • move build-setup.sh from ubuntu-build to desktop-build; rename the latter to firefox-build
  • encrypt the relengapi token :( :( :(
  • handle ccache properly
  • handle symbol uploads
  • test that the APK actually works
  • figure out why compiler / binutils hacks are required and fix
    • zconf.h symlink
    • a.out.h symlink
    • LIBRARY_PATH setting
  • retry starting Xvfb in a loop
  • align input parameter names with b2g jobs (MOZHARNESS_REF, etc.)
  • figure out if we have to set MOZ_BUILD_DATE, and if so to what (comments in bug 1125973)
{
  "provisionerId": "aws-provisioner",
  "workerType": "b2g-desktop-opt",
  "created": "2015-05-06T16:23:12.808Z",
  "deadline": "2015-05-06T17:23:12.808Z",
  "payload": {
    "image": "quay.io/djmitche/desktop-build:0.0.1",
    "env": {
      "MOZHARNESS_SCRIPT": "mozharness/scripts/fx_desktop_build.py",
      "MOZHARNESS_CONFIG": "builds/releng_base_android_64_builds.py",
      "MOZHARNESS_HEAD_REPOSITORY": "https://bitbucket.org/djmitche/mozharness",
      "MOZHARNESS_REV": "bug1125973",
      "NEED_XVFB": "false",
      "MH_CUSTOM_BUILD_VARIANT_CFG": "api-11",
      "RELENGAPI_TOKEN": "..yeah, should encrypt this.."
    },
    "command": [
      "/bin/bash",
      "bin/build.sh"
    ],
    "maxRunTime": 36000
  },
  "metadata": {
    "name": "Test Task",
    "description": "Testing execution of a thing in my own docker image",
    "owner": "dustin@mozilla.com",
    "source": "http://tools.taskcluster.net/task-creator/"
  }
}
11:27 * lightsofapollo dustin: use the artifacts section of the payload
http://docs.taskcluster.net/docker-worker/
11:27 <~lightsofapollo> dustin: mshal has been using the apis to do the same
in buildbot itself... he is an example
11:27 <~lightsofapollo>
https://hg.mozilla.org/projects/alder/file/5b37e35e89fc/testing/taskcluster/tasks/build.yml#l33
11:28 < mshal> dustin: here's the mozharness bug I'm working on to use TC to
upload files: https://bugzilla.mozilla.org/show_bug.cgi?id=1112252
11:28 < mshal> it hasn't landed yet, but the patches there show how I'm
planning to use it

(from wcosta and garndt)

caches

Specify a scope, e.g.,

scopes:
    - 'docker-worker:cache:sources-gecko'

Then use the cache in the payload:

payload:
  cache:
    sources-gecko: '/home/worker/gecko'

which causes the docker-worker to mount the thing identified by sources-gecko at that path.

checkouts

The tc-vcs tool does reliable checkouts, using caches and falling back to bundles on S3. It can be run from shell scripts. The env vars there come from the task as generated by the decision task.

Wander added support for using tc-vcs from Mozharness.

In the future, we will be converting shell scripts to mozharness scripts.

11:31 < dustin> should I try to condense that to a mozharness script, do you
think?  And then the taskcluster task just invokes that script?
11:32 < dustin> if I do that, I have to put the mozharness script in the
mozharness repo, right?  In which case, the build is no longer reproducible if
that repo changes..
11:32 < dustin> I feel like that's a fatal, showstopper bug with mozharness :(
11:33 < mshal> dustin: I don't think the mozharness scripts have to be in the
mozharness repo, though most of them are there now. IIRC there's a bug to move
them in tree somewhere...
11:33 < dustin> ok
11:34 < dustin> anyone in particular I should talk to for guidance there?  I
guess ideally I'd have mozharness installed in the docker image, and the
mozharness script in-tree
11:34 < garndt> dustin, if they were in the mozharness repo, we have an
example in our phone build tasks that can pin a mozharness repo/revision
11:34 < mshal> dustin: as for uploading, my understanding is that whatever's
running inside docker doesn't actually do file uploads, but just puts the
build outputs into the artifacts directory
11:34 < dustin> mshal: right, I think that's the lead I'll follow
11:34 <~lightsofapollo> dustin: armenzg_mtg  has been doing some awesome work
to move us in that directly
11:34 <~lightsofapollo> direction**
11:34 < dustin> garndt: got a pointer?
11:34 < dustin> lightsofapollo: cool, I'll check with him
11:34 < mshal> dustin: jlund would be good to chat with about overall
mozharness direction too, though he might be out on PTO?
11:34 <~lightsofapollo> and wcosta just landed some code so we can set the
mozharness version / repo from graph generation
11:34 < garndt> dustin: here is what was done for the phone builds
http://hg.mozilla.org/projects/alder/file/5b37e35e89fc/testing/taskcluster/tasks/phone_build.yml#l45
11:35 < dustin> lightsofapollo: is that what you're talking about ^^?
11:35 <~lightsofapollo> dustin: yup
11:35 <~lightsofapollo> set in the mach taskcluster-graph command
11:35 < garndt> dustin:
http://hg.mozilla.org/projects/alder/file/5b37e35e89fc/testing/taskcluster/mach_commands.py#l149
11:35 < garndt> ^^ mach command for building a graph with mozharness revision
11:35 < dustin> thanks!
11:36 < dustin> I'll talk to Armen and decide which way is more futuristic
11:36 <~lightsofapollo> (gaia + tc does not submit to gaia-try but
gaia/gaia-master)
11:36 <~lightsofapollo> dustin: IMO I think the future is in tree mozharness
scripts + locked down version 

In b2g

  • task calls build-emulator.sh which is baked into the image, specifying mozharness repo and revision as env vars
  • that script checks out mozharness and B2G
  • invokes ./mozharness/scripts/whatever.py with arguments -- so, in the mozharness repo

My plan

  • task calls build.sh which is baked into the docker image, with gecko, mozharness info as env vars (with defaults)
  • script checks out mozharness, gecko, tools
  • script invokes something like ./build/mozharness/android-emulator.py with config -- so, in the gecko tree
15:18:24 <lightsofapollo> dustin: mrrrgn|brb : the docs are not amazing here yet (one of my Q2 goals) but it's pretty easy to add new jobs/tasks to TC for try
15:18:46 <lightsofapollo> you add a new file here https://dxr.mozilla.org/mozilla-central/source/testing/taskcluster/tasks/builds
15:19:05 <lightsofapollo> add your flag name https://dxr.mozilla.org/mozilla-central/source/testing/taskcluster/tasks/branches/base_job_flags.yml
15:19:18 <lightsofapollo> schedule it https://dxr.mozilla.org/mozilla-central/source/testing/taskcluster/tasks/branches/try/job_flags.yml
15:20:03 <dustin> "schedule it"?
15:20:21 <dustin> not sure what that means
15:21:40 <dustin> how are base_job_flags.yml and job_flags.yml related?
15:22:16 <•bhearsum> job_flags.yml inherits from base IIRC?
15:22:28 <•bhearsum> yeah
15:22:30 <•bhearsum> https://dxr.mozilla.org/mozilla-central/source/testing/taskcluster/tasks/branches/mozilla-central/job_flags.yml#6
15:25:18 <dustin> yep, so I'm not sure why I'd need to change both
15:25:29 <•bhearsum> yeah...i don't fully understand what goes where either
15:25:35 <•bhearsum> you might want to ask in #taskcluster about that part
15:25:45 <dustin> ok
15:25:47 <•bhearsum> jonas and garndt are my go tos for that :)
15:26:00 <dustin> I'm hoping lightsofapollo replies :)
15:26:07 <dustin> but I can futz with it too
15:26:10 <lightsofapollo> dustin: ah sorry
15:27:02 <lightsofapollo> dustin: so if you look here https://dxr.mozilla.org/mozilla-central/source/testing/taskcluster/tasks/branches/try/job_flags.yml#9
15:27:20 <lightsofapollo> you can see that linux_gecko is a try flag
15:27:28 <lightsofapollo> then we map the opt/debug types to specific tasks
15:27:51 <lightsofapollo> dustin: you can omit adding the flag to base but then your jobs won't show up when you do try: -p all
15:28:13 <lightsofapollo> (which is an unintentional feature we added which is actually a real use case =p)

djmitche commented May 5, 2015

That didn't mean I could get rid of the zconf.h symlink.

Disabling signing allowed the build to complete. That's progress!


djmitche commented May 6, 2015

I pushed the image to quay.io/djmitche/desktop-build:

docker tag quay.io/{mozilla,djmitche}/desktop-build:0.0.1 && docker push quay.io/djmitche/desktop-build:0.0.1

now to build a task


djmitche commented May 6, 2015

Tried again with

{
    "provisionerId": "aws-provisioner",
    "workerType": "b2gtest",
    "created": "2015-05-06T16:23:12.808Z",
    "deadline": "2015-05-07T17:23:12.808Z",
    "scopes": [
        "docker-worker:cache:tc-vcs"
    ],
    "payload": {
        "image": "quay.io/djmitche/desktop-build:0.0.1",
        "env": {
            "MOZHARNESS_SCRIPT": "mozharness/scripts/fx_desktop_build.py",
            "MOZHARNESS_CONFIG": "builds/releng_base_android_64_builds.py",
            "MOZHARNESS_HEAD_REPOSITORY": "https://bitbucket.org/djmitche/mozharness",
            "MOZHARNESS_REV": "bug1125973",
            "NEED_XVFB": "false",
            "MH_CUSTOM_BUILD_VARIANT_CFG": "api-11",
            "RELENGAPI_TOKEN": ".."
        },
        "cache": {
            "tc-vcs": "/home/worker/.tc-vcs"
        },
        "command": [
            "/bin/bash",
            "bin/build.sh"
        ],
        "maxRunTime": 36000
    },
    "metadata": {
        "name": "Test Task",
        "description": "Testing execution of a thing in my own docker image",
        "owner": "dustin@mozilla.com",
        "source": "http://tools.taskcluster.net/task-creator/"
    }
}

but the task is rejected:

{
  "message": "Authorization Failed",
  "error": {
    "info": "None of the scope-sets was satisfied",
    "scopesets": [
      [
        "docker-worker:cache:tc-vcs"
      ]
    ]
  }
}


djmitche commented May 6, 2015

It looks like my own credentials don't include any caches, so I'll need to get some credentials via some other technique.

@djmitche

I really screwed up the file paths by not linking /builds to workspace/build (which is an entirely different directory!)
