@oxlade39
Created August 6, 2018 16:50
trying out nomad example
# There can only be a single job definition per file. This job is named
# "example" so it will create a job with the ID and Name "example".
# The "job" stanza is the top-most configuration option in the job
# specification. A job is a declarative specification of tasks that Nomad
# should run. Jobs have a globally unique name, one or many task groups, which
# are themselves collections of one or many tasks.
#
# For more information and examples on the "job" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/job.html
#
job "example" {
# The "region" parameter specifies the region in which to execute the job. If
# omitted, this inherits the default region name of "global". Note that this example job
# is hard-coded to eu-west-1, so if you are running your example elsewhere, make
# sure to update this setting, as well as the datacenters setting.
region = "eu-west-2"
# The "datacenters" parameter specifies the list of datacenters which should
# be considered when placing this task. This must be provided. Note that this example job
# is hard-coded to eu-west-1, so if you are running your example elsewhere, make
# sure to update this setting, as well as the region setting.
datacenters = ["eu-west-2b", "eu-west-2a", "eu-west-2c"]
# The "type" parameter controls the type of job, which impacts the scheduler's
# decision on placement. This configuration is optional and defaults to
# "service". For a full list of job types and their differences, please see
# the online documentation.
#
# For more information, please see the online documentation at:
#
# https://www.nomadproject.io/docs/jobspec/schedulers.html
#
type = "batch"
# The "constraint" stanza defines additional constraints for placing this job,
# in addition to any resource or driver constraints. This stanza may be placed
# at the "job", "group", or "task" level, and supports variable interpolation.
#
# For more information and examples on the "constraint" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/constraint.html
#
# constraint {
# attribute = "${attr.kernel.name}"
# value = "linux"
# }
# The "update" stanza specifies the job update strategy. The update strategy
# is used to control things like rolling upgrades. If omitted, rolling
# updates are disabled.
#
# For more information and examples on the "update" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/update.html
#
# update {
# # The "stagger" parameter specifies to do rolling updates of this job every
# # 10 seconds.
# stagger = "10s"
# # The "max_parallel" parameter specifies the maximum number of updates to
# # perform in parallel. In this case, this specifies to update a single task
# # at a time.
# max_parallel = 1
# }
# The "group" stanza defines a series of tasks that should be co-located on
# the same Nomad client. Any task within a group will be placed on the same
# client.
#
# For more information and examples on the "group" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/group.html
#
group "cache" {
# The "count" parameter specifies the number of the task groups that should
# be running under this group. This value must be non-negative and defaults
# to 1.
count = 1
# The "restart" stanza configures a group's behavior on task failure. If
# left unspecified, a default restart policy is used based on the job type.
#
# For more information and examples on the "restart" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/restart.html
#
restart {
# The number of attempts to run the job within the specified interval.
attempts = 10
interval = "5m"
# The "delay" parameter specifies the duration to wait before restarting
# a task after it has failed.
delay = "25s"
# The "mode" parameter controls what happens when a task has restarted
# "attempts" times within the interval. "delay" mode delays the next
# restart until the next interval. "fail" mode does not restart the task
# if "attempts" has been hit within the interval.
mode = "delay"
}
# The "ephemeral_disk" stanza instructs Nomad to utilize an ephemeral disk
# instead of a hard disk requirement. Clients using this stanza should
# not specify disk requirements in the resources stanza of the task. All
# tasks in this group will share the same ephemeral disk.
#
# For more information and examples on the "ephemeral_disk" stanza, please
# see the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/ephemeral_disk.html
#
ephemeral_disk {
# When sticky is true and the task group is updated, the scheduler
# will prefer to place the updated allocation on the same node and
# will migrate the data. This is useful for tasks that store data
# that should persist across allocation updates.
# sticky = true
#
# Setting migrate to true results in the allocation directory of a
# sticky allocation directory to be migrated.
# migrate = true
# The "size" parameter specifies the size in MB of shared ephemeral disk
# between tasks in the group.
size = 300
}
# The "task" stanza creates an individual unit of work, such as a Docker
# container, web application, or batch processing.
#
# For more information and examples on the "task" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/task.html
#
task "hello_world" {
# The "driver" parameter specifies the task driver that should be used to
# run the task.
driver = "exec"
# The "config" stanza specifies the driver configuration, which is passed
# directly to the driver to start the task. The details of configurations
# are specific to each driver, so please see specific driver
# documentation for more information.
config {
command = "/bin/echo"
args = ["Hello, World!"]
}
# The "artifact" stanza instructs Nomad to download an artifact from a
# remote source prior to starting the task. This provides a convenient
# mechanism for downloading configuration files or data needed to run the
# task. It is possible to specify the "artifact" stanza multiple times to
# download multiple artifacts.
#
# For more information and examples on the "artifact" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/artifact.html
#
# artifact {
# source = "http://foo.com/artifact.tar.gz"
# options {
# checksum = "md5:c4aa853ad2215426eb7d70a21922e794"
# }
# }
# The "logs" stana instructs the Nomad client on how many log files and
# the maximum size of those logs files to retain. Logging is enabled by
# default, but the "logs" stanza allows for finer-grained control over
# the log rotation and storage configuration.
#
# For more information and examples on the "logs" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/logs.html
#
# logs {
# max_files = 10
# max_file_size = 15
# }
# The "resources" stanza describes the requirements a task needs to
# execute. Resource requirements include memory, network, cpu, and more.
# This ensures the task will execute on a machine that contains enough
# resource capacity.
#
# For more information and examples on the "resources" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/resources.html
#
resources {
cpu = 500 # 500 MHz
memory = 256 # 256MB
network {
mbits = 10
port "db" {}
}
}
# The "service" stanza instructs Nomad to register this task as a service
# in the service discovery engine, which is currently Consul. This will
# make the service addressable after Nomad has placed it on a host and
# port.
#
# For more information and examples on the "service" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/service.html
#
# service {
# name = "global-redis-check"
# tags = ["global", "cache"]
# port = "db"
# check {
# name = "alive"
# type = "tcp"
# interval = "10s"
# timeout = "2s"
# }
# }
# The "template" stanza instructs Nomad to manage a template, such as
# a configuration file or script. This template can optionally pull data
# from Consul or Vault to populate runtime configuration data.
#
# For more information and examples on the "template" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/template.html
#
# template {
# data = "---\nkey: {{ key \"service/my-key\" }}"
# destination = "local/file.yml"
# change_mode = "signal"
# change_signal = "SIGHUP"
# }
# The "vault" stanza instructs the Nomad client to acquire a token from
# a HashiCorp Vault server. The Nomad servers must be configured and
# authorized to communicate with Vault. By default, Nomad will inject
# The token into the job via an environment variable and make the token
# available to the "template" stanza. The Nomad client handles the renewal
# and revocation of the Vault token.
#
# For more information and examples on the "vault" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/vault.html
#
# vault {
# policies = ["cdn", "frontend"]
# change_mode = "signal"
# change_signal = "SIGHUP"
# }
# Controls the timeout between signalling a task it will be killed
# and killing the task. If not set a default is used.
# kill_timeout = "20s"
}
}
}
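
Before submitting, the job file can be checked locally. This is a minimal sketch, assuming the file is saved as example.nomad and using the cluster address that appears in the transcript further down; nomad plan does a scheduler dry-run without placing anything:

# Check the job file for syntax and validation errors.
nomad validate example.nomad

# Dry-run the scheduler to see whether and where the job would be placed.
nomad plan -address=http://35.176.91.104:4646 example.nomad

# Submit the job for real.
nomad run -address=http://35.176.91.104:4646 example.nomad
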
{
  "min_packer_version": "0.12.0",
  "variables": {
    "aws_region": "eu-west-2",
    "nomad_version": "0.7.1",
    "consul_module_version": "v0.3.1",
    "consul_version": "1.0.3"
  },
  "builders": [
    {
      "ami_name": "nomad-consul-amazon-linux-{{isotime | clean_ami_name}}",
      "ami_description": "An Amazon Linux AMI that has Nomad and Consul installed.",
      "instance_type": "t2.micro",
      "name": "amazon-linux-ami",
      "region": "{{user `aws_region`}}",
      "type": "amazon-ebs",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "architecture": "x86_64",
          "name": "*amzn-ami-hvm-*",
          "block-device-mapping.volume-type": "gp2",
          "root-device-type": "ebs"
        },
        "owners": [
          "amazon"
        ],
        "most_recent": true
      },
      "ssh_username": "ec2-user"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo yum install -y git"
      ],
      "only": [
        "amazon-linux-ami"
      ]
    },
    {
      "type": "shell",
      "inline": [
        "git clone --branch v0.4.2 https://github.com/hashicorp/terraform-aws-nomad.git /tmp/terraform-aws-nomad",
        "/tmp/terraform-aws-nomad/modules/install-nomad/install-nomad --version {{user `nomad_version`}}"
      ],
      "pause_before": "30s"
    },
    {
      "type": "shell",
      "environment_vars": [
        "NOMAD_VERSION={{user `nomad_version`}}",
        "CONSUL_VERSION={{user `consul_version`}}",
        "CONSUL_MODULE_VERSION={{user `consul_module_version`}}"
      ],
      "script": "{{template_dir}}/setup_nomad_consul.sh"
    }
  ]
}
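
The AMI used by the cluster can be built from this Packer template. A minimal sketch, assuming the template above is saved as nomad-consul.json (the filename is an assumption) and AWS credentials are exported as described in the environment-variable comments further down; the -var flags simply override the defaults declared in the "variables" block:

# Validate the template, then build the AMI in eu-west-2.
packer validate nomad-consul.json
packer build -var "aws_region=eu-west-2" -var "nomad_version=0.7.1" nomad-consul.json
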
dans-mbp:terraform-aws-nomad dan$ nomad run -address=http://35.176.91.104:4646 examples/nomad-examples-helper/example.nomad
==> Monitoring evaluation "814a173d"
    Evaluation triggered by job "example"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "814a173d" finished with status "complete" but failed to place all allocations:
    Task Group "cache" (failed to place 1 allocation):
      * Constraint "missing drivers" filtered 6 nodes
    Evaluation "323a522e" waiting for additional capacity to place remainder
dans-mbp:terraform-aws-nomad dan$
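
The failed placement can be dug into from the evaluation itself. A hedged sketch of the follow-up commands, assuming nomad eval-status is available in this Nomad version (0.7.1) and reusing the evaluation ID printed above:

# Show why the evaluation could not place the "cache" task group.
nomad eval-status -address=http://35.176.91.104:4646 814a173d

# Show the job's overall status, including queued and failed allocations.
nomad status -address=http://35.176.91.104:4646 example
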
dans-mbp:terraform-aws-nomad dan$ nomad node-status -address=http://35.176.91.104:4646
ID        DC          Name                 Class   Drain  Eligibility  Status
102d06ca  eu-west-2a  i-0399a0be8ad2fea8f  <none>  false  <none>       ready
7ed6b38e  eu-west-2c  i-09a37ca2e56de4a91  <none>  false  <none>       ready
79c809d2  eu-west-2a  i-045b549c3606367ee  <none>  false  <none>       ready
f2b524d8  eu-west-2c  i-00bb214d9672adfc3  <none>  false  <none>       ready
a1919d6b  eu-west-2b  i-0e1a6cff12d886b09  <none>  false  <none>       ready
e38670c4  eu-west-2b  i-08beb9ce9987a843c  <none>  false  <none>       ready
dans-mbp:terraform-aws-nomad dan$ nomad node-status -address=http://35.176.91.104:4646 102d06ca
error fetching node stats: Unexpected response code: 500 (node is not running a Nomad Client)
ID            = 102d06ca
Name          = i-0399a0be8ad2fea8f
Class         = <none>
DC            = eu-west-2a
Drain         = false
Eligibility   = <none>
Status        = ready
Driver Status = <none>

Node Events
Time  Subsystem  Message

Allocated Resources
CPU         Memory       Disk        IOPS
0/2400 MHz  0 B/985 MiB  0 B/48 GiB  0/0

Allocation Resource Utilization
CPU         Memory
0/2400 MHz  0 B/985 MiB

error fetching node stats: actual resource usage not present

Allocations
No allocations placed
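
"Driver Status = <none>" suggests the clients never fingerprinted any task drivers, which would explain why the "missing drivers" constraint filtered all six nodes. The exec driver is only fingerprinted when the Nomad client runs as root on Linux with cgroups available, so that is worth checking on the instances. A hedged way to confirm what was fingerprinted from the CLI (-verbose prints the full attribute list, including any driver.* entries such as driver.exec):

nomad node-status -address=http://35.176.91.104:4646 -verbose 102d06ca
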
# ---------------------------------------------------------------------------------------------------------------------
# ENVIRONMENT VARIABLES
# Define these secrets as environment variables
# ---------------------------------------------------------------------------------------------------------------------
# AWS_ACCESS_KEY_ID
# AWS_SECRET_ACCESS_KEY
# AWS_DEFAULT_REGION
# ---------------------------------------------------------------------------------------------------------------------
# REQUIRED PARAMETERS
# You must provide a value for each of these parameters.
# ---------------------------------------------------------------------------------------------------------------------
# None
# ---------------------------------------------------------------------------------------------------------------------
# OPTIONAL PARAMETERS
# These parameters have reasonable defaults.
# ---------------------------------------------------------------------------------------------------------------------
variable "ami_id" {
description = "The ID of the AMI to run in the cluster. This should be an AMI built from the Packer template under examples/nomad-consul-ami/nomad-consul.json. If no AMI is specified, the template will 'just work' by using the example public AMIs. WARNING! Do not use the example AMIs in a production setting!"
default = "ami-5eeb15b5"
}
variable "cluster_name" {
description = "What to name the cluster and all of its associated resources"
default = "nomad-example"
}
variable "instance_type" {
description = "What kind of instance type to use for the nomad clients"
default = "t2.micro"
}
variable "num_servers" {
description = "The number of server nodes to deploy. We strongly recommend using 3 or 5."
default = 3
}
variable "num_clients" {
description = "The number of client nodes to deploy. You can deploy as many as you need to run your jobs."
default = 6
}
variable "cluster_tag_key" {
description = "The tag the EC2 Instances will look for to automatically discover each other and form a cluster."
default = "nomad-servers"
}
variable "cluster_tag_value" {
description = "Add a tag with key var.cluster_tag_key and this value to each Instance in the ASG. This can be used to automatically find other Consul nodes and form a cluster."
default = "auto-join"
}
variable "ssh_key_name" {
description = "The name of an EC2 Key Pair that can be used to SSH to the EC2 Instances in this cluster. Set to an empty string to not associate a Key Pair."
default = ""
}
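
These variables can be left at their defaults or overridden on the command line. A minimal sketch of standing up the cluster, assuming this file belongs to the terraform-aws-nomad example root module and that the AMI built by the Packer template above is passed in via ami_id (the AMI ID and key pair name below are placeholders, not real values):

terraform init
terraform apply -var "ami_id=ami-0123456789abcdef0" -var "ssh_key_name=my-key" -var "cluster_name=nomad-example"
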