Editor note: Hey, if you're still reading or linking to this, you probably want our official documentation:
- setting up a build zone: https://github.com/joyent/triton/blob/master/docs/developer-guide/build-zone-setup.md
- building your bits: https://github.com/joyent/triton/blob/master/docs/developer-guide/building.md
Both are distilled from this original document, but we fixed a few bugs along the way, so read those instead.
This is a short guide to help users adapt to the changes we made to the build as part of TOOLS-2043.
You'll probably need to create a new dev zone for the component you want to build. The 'make validate-buildenv' target in converted repositories includes instructions on how to do that, but we'll go into more detail here since that's likely to be useful.
Each component should be built on a SmartOS/Triton image containing a set of base pkgsrc packages, some additional development pkgsrc packages, and a set of build tools. The production builds impose a further restriction: the platform image (that is, the kernel and userland bits running on the bare metal) needs to be at a specific minimum version, defined via min_platform in Makefile.defs.
At the time of writing, most components will build on more modern platform images, so for now, we'll leave aside the PI restriction, other than to say that you should set '$ENGBLD_SKIP_VALIDATE_BUILD_PLATFORM' to 'true' in your environment. We'll talk more about that in the 'Going retro' section later in this document.
For convenience, we maintain a set of images that already include the required pkgsrc packages and build tools. The table below lists those image names and their corresponding uuids:
pkgsrc version | base image name | devzone image uuid |
---|---|---|
2011Q4 | sdc-smartos@1.6.3 | 956f365d-2444-4163-ad48-af2f377726e0 |
2014Q2 | sdc-base@14.2.0 | 83708aad-20a8-45ff-bfc0-e53de610e418 |
2015Q4 | triton-origin-multiarch-15.4.1@1.0.1 | 1356e735-456e-4886-aebd-d6677921694c |
2018Q1 | minimal-multiarch@18.1.0 | 8b297456-1619-4583-8a5a-727082323f77 |
These image uuids are exactly what we build components on in our Jenkins infrastructure (you'll notice the images have 'jenkins-agent-..' as their default aliases).
For any component, you can see the expected image_uuid by running 'make show-buildenv':
```
$ cd /home/timf/projects/submodule/phase2-putback/sdc-manatee.git
$ make show-buildenv
2015Q4 triton-origin-multiarch-15.4.1@1.0.1 1356e735-456e-4886-aebd-d6677921694c
$
```
Taking one of the image uuids above, you need to download or import it, and create a VM from it. The mechanism to do that will depend on whether you're using a Triton instance or a SmartOS install to run your VM on. We'll describe both.
To retrieve an image on Triton, use:
```
# sdc-imgadm import -S 'https://updates.joyent.com?channel=experimental' 1356e735-456e-4886-aebd-d6677921694c
```
(Note that this requires your imgapi instance to have a nic on the external network, which can be achieved with 'sdcadm post-setup common-external-nics'.)
or if you have the manifest and gz files already downloaded, use:
```
# sdc-imgadm import -m 1356e735-456e-4886-aebd-d6677921694c.manifest -f 1356e735-456e-4886-aebd-d6677921694c-file.gz
```
Then create a json file to pass to sdc-vmadm on Triton, or vmadm on SmartOS. There are several ways to do this, but we'll describe some simple approaches below.
When hosting dev zones on Triton, first we need to get some details to construct our json VM payload. Here, we're on a newly set-up coal instance, so we're just looking for the admin user uuid, the external network uuid and the uuid of the headnode to provision to:
```
[root@headnode (uk-1) /zones/timf]# sdc-useradm search 'cn=*'
UUID                                  LOGIN  EMAIL           CREATED
930896af-bf8c-48d4-885c-6573a94b1853  admin  root@localhost  2019-02-13
[root@headnode (uk-1) /zones/timf]# sdc-network list
NAME      UUID                                  VLAN  SUBNET         GATEWAY
admin     680e5091-957b-4356-8bd9-67d7c9feebe5     0  10.99.99.0/24  -
external  2622e374-df99-491c-9c89-143ca9fdd4a4     0  10.88.88.0/24  10.88.88.2
[root@headnode (uk-1) /zones/timf]# sdc-server list
HOSTNAME  UUID                                  VERSION  SETUP  STATUS   RAM   ADMIN_IP
headnode  564d5104-226f-ec9a-b9d2-ed399ce525bc      7.0   true  running  8191  10.99.99.7
```
Now use this json to create the VM, or use the adminui. The key parts are to specify delegate_dataset, required to use the new image construction tooling, and to give yourself enough RAM to have a useful development environment.
```
{
    "brand": "joyent",
    "image_uuid": "1356e735-456e-4886-aebd-d6677921694c",
    "alias": "jenkins-agent-multiarch-15.4.1",
    "owner_uuid": "930896af-bf8c-48d4-885c-6573a94b1853",
    "server_uuid": "564d5104-226f-ec9a-b9d2-ed399ce525bc",
    "hostname": "jenkins-agent-multiarch-15.4.1",
    "ram": 4096,
    "quota": 100,
    "delegate_dataset": true,
    "resolvers": [
        "10.0.0.29",
        "208.67.220.220"
    ],
    "networks": [{"uuid": "2622e374-df99-491c-9c89-143ca9fdd4a4"}],
    "customer_metadata": {
        "root_authorized_keys": "ssh-rsa AAAAB3NzaC1y... me@myselfandi",
        "user-script": "/usr/sbin/mdata-get root_authorized_keys > ~root/.ssh/authorized_keys ; /usr/sbin/mdata-get root_authorized_keys > ~admin/.ssh/authorized_keys; svcadm enable manifest-import"
    }
}
```
Then create the VM:
```
[root@headnode (uk-1) ~]# sdc-vmadm create -f json
Creating VM 60802c6d-e458-612b-bcc5-b472fffad1a2 (job "db55fe3e-a631-4df5-a5e7-18ab9ed11afa")
[root@headnode (uk-1) ~]#
```
On SmartOS, use the following:
```
# curl -k -o img.manifest 'https://updates.joyent.com/images/1356e735-456e-4886-aebd-d6677921694c?channel=experimental'
# curl -k -o img.gz 'https://updates.joyent.com/images/1356e735-456e-4886-aebd-d6677921694c/file?channel=experimental'
```
and then do the following to add the image to your SmartOS instance:
```
# imgadm install -m img.manifest -f img.gz
```
To create the VM, use a json manifest similar to the following:
```
{
    "brand": "joyent",
    "image_uuid": "1356e735-456e-4886-aebd-d6677921694c",
    "alias": "jenkins-agent-multiarch-15.4.1",
    "hostname": "jenkins-agent-multiarch-15.4.1",
    "max_physical_memory": 4096,
    "quota": 100,
    "delegate_dataset": true,
    "fs_allowed": ["ufs", "pcfs"],
    "resolvers": [
        "10.0.0.29",
        "208.67.220.220"
    ],
    "nics": [
        {
            "nic_tag": "admin",
            "ip": "dhcp"
        }
    ],
    "customer_metadata": {
        "root_authorized_keys": "ssh-rsa AAAAB3NzaC1y... me@myselfandi",
        "user-script": "/usr/sbin/mdata-get root_authorized_keys > ~root/.ssh/authorized_keys ; /usr/sbin/mdata-get root_authorized_keys > ~admin/.ssh/authorized_keys; svcadm enable manifest-import"
    }
}
```
Then create the VM using vmadm:
```
[root@kura ~]# vmadm create -f json
Successfully created VM c1f04dfb-63c6-ca69-b04b-d68e5b4ffadc
[root@kura ~]#
```
You should then be able to log in to your devzone. The build will happily run as a non-root user; however, some parts of the build do need additional privileges. To add those to your non-root user inside your dev zone, run:

```
# usermod -P 'Primary Administrator' youruser
```
You should now be able to clone any of the repositories. The following Makefile targets are conventions used across most of Manta/Triton development:
target | description |
---|---|
show-buildenv | show the build environment and devzone image uuid for building this component |
validate-buildenv | check that the build machine is capable of building this component |
all | build all sources for this component |
release | build a tarball containing the bits for this component |
publish | publish a tarball containing the bits for this component |
buildimage | assemble a Triton/Manta image for this component |
bits-upload | post bits to Manta, and optionally updates.joyent.com for this component |
bits-upload-latest | just post the most recently built bits, useful in case of network outages, upload errors, etc. |
check | run build tests (e.g. xml validation, linting) |
prepush | additional testing that should occur before pushing to github |
For more details on the specifics of these targets, we do have commentary in eng.git:/Makefile and eng.git:/tools/mk/Makefile.defs. We hope to flesh out the documentation in this guide over time.
Typically, the following can be used to build any component, and will leave a zfs image and manifest in ./bits:

```
$ export ENGBLD_SKIP_VALIDATE_BUILD_PLATFORM=true
$ make all release publish buildimage
```
You can also upload bits to Manta by adding the 'bits-upload' target to the above. If you've already built bits, then 'bits-upload-latest' will publish those bits to Manta without rebuilding.
We mentioned before that most components will build on modern PIs. However, our production builds always build on the earliest possible platform we support, defined by min_platform. We do this because of the binary compatibility guarantee that comes with illumos: binaries compiled on older platforms are guaranteed to run on newer platforms, but the converse is not true.
In addition, when compiling binaries, constant values from the platform headers may be included in those binaries at build-time. If those constants change across platform images (which several have) then the binary will have different behaviour depending on which platform the source was built on.
For these reasons, when assembling the Manta/Triton images via the 'buildimage' target, we set the min_platform of the resulting image to be the version of the platform running on the build machine. Code in vmadm checks at provisioning-time that the platform hosting the VM is greater than, or equal to, the min_platform value baked into the Manta/Triton image. The implication of this is that your devzone must be running a platform equal to, or older than, the Triton instance you wish to test on.
As mentioned previously, the build system itself will report an error, via the validate-buildenv make target, if your build platform is not equal to min_platform.
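The check amounts to comparing platform stamps, which are timestamp strings and so sort lexicographically. A minimal sketch of the idea, using hypothetical stand-in values for the stamp that 'uname -v' would report:

```shell
# Platform stamps such as 20151126T062538Z sort lexicographically, so a
# min_platform check reduces to a string comparison. The values below are
# hypothetical stand-ins for the stamp extracted from "uname -v".
min_platform="20151126T062538Z"
build_platform="20180510T153535Z"
if [ "$build_platform" \< "$min_platform" ]; then
  result="build platform too old"
else
  result="build platform OK"
fi
echo "$result"   # prints "build platform OK"
```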
In order to exactly replicate the build environment used for our production builds, and produce images that can be installed on any supported platform, we install devzones on joyent-retro VMs, which are bhyve (or KVM) SmartOS instances that boot that old platform image. (See https://github.com/joyent/joyent-retro/blob/master/README.md)
At the time of writing, our min_platform is set to 20151126T062538Z. That image is available as joyent-retro-20151126T062538Z, uuid bd83a9b3-65cd-4160-be2e-f7c4c56e0606. See: https://updates.joyent.com/images/bd83a9b3-65cd-4160-be2e-f7c4c56e0606?channel=experimental
The retro image does not itself contain any devzone images, so those will have to be imported by hand.
The following example json would then be used to deploy it. Note that here we're adding a 60GB data disk (size 61440 MB) which will then host our dev zones.
```
{
    "brand": "bhyve",
    "alias": "retro-20151126T062538Z",
    "hostname": "retro-20151126T062538Z",
    "ram": 4096,
    "vcpus": 6,
    "quota": 100,
    "delegate_dataset": true,
    "fs_allowed": ["ufs", "pcfs"],
    "resolvers": [
        "10.0.0.29",
        "208.67.220.220"
    ],
    "nics": [
        {
            "nic_tag": "admin",
            "ip": "dhcp",
            "netmask": "255.255.255.0",
            "gateway": "10.0.0.1",
            "model": "virtio",
            "primary": "true"
        }
    ],
    "disks": [
        {
            "boot": true,
            "model": "virtio",
            "image_uuid": "bd83a9b3-65cd-4160-be2e-f7c4c56e0606",
            "image_name": "joyent-retro-20151126T062747Z"
        },
        {
            "boot": false,
            "model": "virtio",
            "size": 61440,
            "media": "disk"
        }
    ],
    "customer_metadata": {
        "root_authorized_keys": "ssh-rsa AAAAB3Nz... me@myselfandi"
    }
}
```
Having deployed this image on your Triton or SmartOS host, you can then ssh into the retro VM and proceed with creating devzones as described in the earlier section. In this case you do not need to set $ENGBLD_SKIP_VALIDATE_BUILD_PLATFORM in your environment.
Note that this retro image is a SmartOS instance rather than a Triton host.
To allow you to ssh directly into the devzones running in a retro VM, a simple ipnat configuration works fine. For example, this retro VM has the external IP address 10.0.0.180 and our devzones are all on the 172.16.9.0 network. We create a file /etc/ipf/ipnat.conf:
```
[root@27882aaa /etc/ipf]# cat ipnat.conf
map vioif0 172.16.9.0/24 -> 0/32 portmap tcp/udp auto
map vioif0 172.16.9.0/24 -> 0/32
rdr vioif0 10.0.0.180 port 2222 -> 172.16.9.2 port 22 tcp
rdr vioif0 10.0.0.180 port 2223 -> 172.16.9.3 port 22 tcp
rdr vioif0 10.0.0.180 port 2224 -> 172.16.9.4 port 22 tcp
rdr vioif0 10.0.0.180 port 2225 -> 172.16.9.5 port 22 tcp
rdr vioif0 10.0.0.180 port 2226 -> 172.16.9.6 port 22 tcp
```
and enable ip forwarding and the ipfilter service in the retro VM:
```
[root@27882aaa ~]# routeadm -e ipv4-forwarding
[root@27882aaa ~]# svcadm enable ipfilter
```
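The rdr rules follow a fixed pattern (host ports 2222 upwards map to successive devzone addresses), so as a sketch they could be generated with a small loop, reusing the interface name and addresses from the example configuration above:

```shell
# Generate the rdr rules shown above: host ports 2222-2226 on the retro
# VM's external address forward to port 22 on devzones 172.16.9.2-6.
ext_ip="10.0.0.180"
port=2222
for host in 2 3 4 5 6; do
  echo "rdr vioif0 $ext_ip port $port -> 172.16.9.$host port 22 tcp"
  port=$((port + 1))
done
```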
We can then ssh into our individual devzones with the following changes added to ~/.ssh/config (note that we manually added 'jenkins' non-root users to our zones here):
```
Host retro-kabuild2
    User jenkins
    Hostname 10.0.0.180
    Port 2222
Host retro-kbbuild2
    User jenkins
    Hostname 10.0.0.180
    Port 2223
Host retro-kcbuild2
    User jenkins
    Hostname 10.0.0.180
    Port 2224
Host retro-kdbuild2
    User jenkins
    Hostname 10.0.0.180
    Port 2225
Host retro-kebuild2
    User jenkins
    Hostname 10.0.0.180
    Port 2226
```
Here's us logging in:
```
timf@iorangi-eth0 (master) ssh retro-kabuild2
-bash-4.1$ ifconfig
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
net0: flags=40001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,L3PROTECT> mtu 1500 index 2
        inet 172.16.9.2 netmask ffffff00 broadcast 172.16.9.255
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
-bash-4.1$ id
uid=103(jenkins) gid=1(other) groups=1(other)
-bash-4.1$
```
In the future, TOOLS-2187 will allow you to bypass this min_platform check, though it's then up to you to ensure your testing with such bits remains valid (for example, are you sure those bits have no interactions with the platform from your build machine that would render them unusable when deployed on an older platform? If not, then building on a retro VM is always a safer option!)
We should draw your attention to a few things:
- bits-upload will publish bits to $MANTA_USER/public/builds/ so please do not set MANTA_USER to Joyent_Dev for now!
- Publishing bits to updates.joyent.com from bits-upload requires that imgapi instance to allow you to post bits there. By default, publishing to updates.joyent.com is disabled and will only happen if $ENGBLD_BITS_UPLOAD_IMGAPI=true in your shell environment. Note that using the default UPDATES_IMGADM_USER=mg requires your build user to have a copy of the automation.id_rsa private key.
- You can also publish bits to a local path instead of imgapi, allowing you to then import images directly to the imgapi service on your Triton instance. To do that, set $ENGBLD_DEST_OUT_PATH and $ENGBLD_BITS_UPLOAD_LOCAL. For example:

```
$ export ENGBLD_DEST_OUT_PATH=/home/timf/projects/bits
$ export ENGBLD_BITS_UPLOAD_LOCAL=true
```
The changes themselves generally follow the pattern laid down by the initial lullaby integration - for example, here's what we did for sdc-imgapi and sdc-papi:
- https://github.com/joyent/sdc-imgapi/commit/b6be5f7ab04b5cbd72509a0e933c0e5ba058205d
- https://github.com/joyent/sdc-papi/commit/4606502f651b7464abcf595388e991c2f351a837
Note that the first build of components on a new dev zone will likely take a little longer than usual, as the agent-cache framework has to build each agent to be included in the image, and the buildimage tool has to download and cache the base images for the component. See TOOLS-2063 and TOOLS-2066.
If you're reviewing changes that have not yet been integrated into the master branch of a component, you can build them by cloning a review from cr.joyent.us.
Here's an example of us building patch set 2 of the sdc-sapi.git component, which has the gerrit id 5538:
```
-bash-4.1$ cd /tmp
-bash-4.1$ git clone https://cr.joyent.us/joyent/sdc-sapi.git
Cloning into 'sdc-sapi'...
remote: Counting objects: 2180, done
remote: Finding sources: 100% (2180/2180)
remote: Total 2180 (delta 1464), reused 2175 (delta 1464)
Receiving objects: 100% (2180/2180), 540.64 KiB | 251.00 KiB/s, done.
Resolving deltas: 100% (1464/1464), done.
Checking connectivity... done.
-bash-4.1$ cd sdc-sapi
-bash-4.1$ git ls-remote | grep 5538
From https://cr.joyent.us/joyent/sdc-sapi.git
d2daf78578e3854069cbe194f5f9cf4c96571d22        refs/changes/38/5538/1
53f51b8e1b4e6088b22757ee230edc9b6974e46e        refs/changes/38/5538/2
-bash-4.1$ git fetch origin refs/changes/38/5538/2
remote: Counting objects: 13, done
remote: Finding sources: 100% (7/7)
remote: Total 7 (delta 6), reused 7 (delta 6)
Unpacking objects: 100% (7/7), done.
From https://cr.joyent.us/joyent/sdc-sapi
 * branch            refs/changes/38/5538/2 -> FETCH_HEAD
-bash-4.1$ git checkout FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at 53f51b8... TRITON-1131 convert sdc-sapi to engbld framework
-bash-4.1$ git describe --all --long
heads/master-1-g53f51b8
-bash-4.1$ make all release publish buildimage
fatal: ref HEAD is not a symbolic ref
/tmp/space/sdc-sapi/deps/eng/tools/validate-buildenv.sh
.
.
[ 29.00080643] Saving manifest to "/tmp/sapi-zfs--20190215T144650Z-g53f51b8.imgmanifest"
[ 30.24198958] Destroyed zones/3923c435-8688-47bb-a5f1-b213b010f826/data/b9f703a4-52e1-4c3d-b862-29b8dd047669
[ 30.29018650] Deleted /zoneproto-49345
[ 30.29080095] Build complete
cp /tmp/sapi-zfs--20190215T144650Z-g53f51b8.zfs.gz /tmp/space/sdc-sapi/bits/sapi
cp /tmp/sapi-zfs--20190215T144650Z-g53f51b8.imgmanifest /tmp/space/sdc-sapi/bits/sapi
pfexec rm /tmp/sapi-zfs--20190215T144650Z-g53f51b8.zfs.gz
pfexec rm /tmp/sapi-zfs--20190215T144650Z-g53f51b8.imgmanifest
pfexec rm -rf /tmp/buildimage-sapi--20190215T144650Z-g53f51b8
-bash-4.1$
```
Also note in the above that the $(BRANCH) used for the build artifacts looks a little unusual, because we checked out a gerrit ref that doesn't follow the same naming format as most git branches.
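As a sketch of what's going on (the exact naming logic lives in the eng.git makefiles, so treat this as an illustration only): the build derives the branch component of the artifact name from the current symbolic ref, and with a detached gerrit checkout that lookup fails, leaving the branch component empty. That is consistent with the doubled '--' in the artifact names and the 'fatal: ref HEAD is not a symbolic ref' message in the build log above.

```shell
# Illustration only: with a detached HEAD, resolving the symbolic ref
# fails, so the branch component of the artifact name comes out empty.
branch=""   # stand-in for "$(git symbolic-ref --short HEAD 2>/dev/null)"
stamp="20190215T144650Z"
githash="g53f51b8"
artifact="sapi-zfs-${branch}-${stamp}-${githash}.zfs.gz"
echo "$artifact"   # prints "sapi-zfs--20190215T144650Z-g53f51b8.zfs.gz"
```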