@smalleni, last active February 8, 2017
 health HEALTH_ERR
        256 pgs are stuck inactive for more than 300 seconds
        256 pgs stuck inactive
        no osds
 monmap e1: 3 mons at {overcloud-controller-0=172.18.0.15:6789/0,overcloud-controller-1=172.18.0.27:6789/0,overcloud-controller-2=172.18.0.21:6789/0}
        election epoch 20, quorum 0,1,2 overcloud-controller-0,overcloud-controller-2,overcloud-controller-1
 osdmap e7: 0 osds: 0 up, 0 in
        flags sortbitwise
  pgmap v8: 256 pgs, 7 pools, 0 bytes data, 0 objects
        0 kB used, 0 kB / 0 kB avail
             256 creating
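The report above shows the monitors in quorum (election epoch 20) but an empty OSD map (`osdmap e7: 0 osds: 0 up, 0 in`), so all 256 placement groups sit in `creating` indefinitely: the cluster itself formed, but no OSD ever registered. On TripleO deployments this is commonly caused by leftover partition data on the OSD disks, which is why the environment file below wires in a first-boot disk wipe. A minimal sketch of detecting this condition programmatically, assuming the Jewel-era layout of `ceph -s --format json` (the nested `osdmap` key and `overall_status` field are assumptions about that output, embedded here as a sample):

```python
import json

# Sample trimmed from what `ceph -s --format json` reports in this state;
# the exact JSON layout varies by Ceph release, so treat the nested
# "osdmap" structure below as an assumption, not a stable interface.
SAMPLE = json.loads("""
{"health": {"overall_status": "HEALTH_ERR"},
 "osdmap": {"osdmap": {"num_osds": 0, "num_up_osds": 0, "num_in_osds": 0}}}
""")

def osds_registered(status):
    """Return the number of OSDs present in the cluster's OSD map."""
    return status["osdmap"]["osdmap"]["num_osds"]

if osds_registered(SAMPLE) == 0:
    print("no osds registered: check ceph-osd activation on the storage nodes")
```

With zero OSDs registered, the fix is on the storage nodes (disk activation), not in the PG or pool configuration.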
## A Heat environment file which can be used to set up storage
## backends. Defaults to Ceph used as a backend for Cinder, Glance and
## Nova ephemeral storage.
resource_registry:
  OS::TripleO::Services::CephMon: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-mon.yaml
  OS::TripleO::Services::CephOSD: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-osd.yaml
  OS::TripleO::Services::CephClient: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-client.yaml
  OS::TripleO::NodeUserData: /home/stack/templates/firstboot/wipe-disks.yaml
  OS::TripleO::Services::CinderBackup: /usr/share/openstack-tripleo-heat-templates/puppet/services/pacemaker/cinder-backup.yaml
  OS::TripleO::Services::CephRgw: /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-rgw.yaml
  OS::TripleO::Services::SwiftProxy: OS::Heat::None
  OS::TripleO::Services::SwiftStorage: OS::Heat::None
  OS::TripleO::Services::SwiftRingBuilder: OS::Heat::None
parameter_defaults:
  #### BACKEND SELECTION ####
  ## Whether to enable iscsi backend for Cinder.
  CinderEnableIscsiBackend: false
  ## Whether to enable rbd (Ceph) backend for Cinder.
  CinderEnableRbdBackend: true
  ## Cinder Backup backend can be either 'ceph' or 'swift'.
  CinderBackupBackend: ceph
  ## Whether to enable NFS backend for Cinder.
  # CinderEnableNfsBackend: false
  ## Whether to enable rbd (Ceph) backend for Nova ephemeral storage.
  NovaEnableRbdBackend: true
  ## Glance backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GlanceBackend: rbd
  ## Gnocchi backend can be either 'rbd' (Ceph), 'swift' or 'file'.
  GnocchiBackend: rbd
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/sdb':
        journal: '/dev/nvme0n1'
      '/dev/sdc':
        journal: '/dev/nvme0n1'
      '/dev/sdd':
        journal: '/dev/nvme0n1'
      '/dev/sde':
        journal: '/dev/nvme0n1'
      '/dev/sdf':
        journal: '/dev/nvme0n1'
      '/dev/sdg':
        journal: '/dev/nvme0n1'
      '/dev/sdh':
        journal: '/dev/nvme0n1'
      '/dev/sdi':
        journal: '/dev/nvme0n1'
      '/dev/sdj':
        journal: '/dev/nvme0n1'
      '/dev/sdk':
        journal: '/dev/nvme0n1'
      '/dev/sdl':
        journal: '/dev/nvme0n1'
      '/dev/sdm':
        journal: '/dev/nvme0n1'
      '/dev/sdn':
        journal: '/dev/nvme0n1'
      '/dev/sdo':
        journal: '/dev/nvme0n1'
      '/dev/sdp':
        journal: '/dev/nvme0n1'
  #### CINDER NFS SETTINGS ####
  ## NFS mount options
  # CinderNfsMountOptions: ''
  ## NFS mount point, e.g. '192.168.122.1:/export/cinder'
  # CinderNfsServers: ''
  #### GLANCE NFS SETTINGS ####
  ## Make sure to set `GlanceBackend: file` when enabling NFS
  ##
  ## Whether to make Glance 'file' backend a NFS mount
  # GlanceNfsEnabled: false
  ## NFS share for image storage, e.g. '192.168.122.1:/export/glance'
  ## (If using IPv6, use both double- and single-quotes,
  ## e.g. "'[fdd0::1]:/export/glance'")
  # GlanceNfsShare: ''
  ## Mount options for the NFS image storage mount point
  # GlanceNfsOptions: 'intr,context=system_u:object_r:glance_var_lib_t:s0'
  #### CEPH SETTINGS ####
  ## When deploying Ceph Nodes through the oscplugin CLI, the following
  ## parameters are set automatically by the CLI. When deploying via
  ## heat stack-create or ceph on the controller nodes only,
  ## they need to be provided manually.
  ## Number of Ceph storage nodes to deploy
  # CephStorageCount: 0
  ## Ceph FSID, e.g. '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
  # CephClusterFSID: ''
  ## Ceph monitor key, e.g. 'AQC+Ox1VmEr3BxAALZejqeHj50Nj6wJDvs96OQ=='
  # CephMonKey: ''
  ## Ceph admin key, e.g. 'AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ=='
  # CephAdminKey: ''
  ## Ceph client key, e.g 'AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw=='
  # CephClientKey: ''
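Two parts of this environment file lend themselves to generation rather than hand-editing: the fifteen identical OSD entries (every data disk journaling to the single `/dev/nvme0n1` device), and the commented-out FSID/key values, which the CLI normally fills in automatically. A minimal Python sketch under those assumptions; note that real cephx keys should come from `ceph-authtool --gen-print-key`, and base64-encoding 28 random bytes below only mimics their 40-character format:

```python
import base64
import os
import string
import uuid

JOURNAL = '/dev/nvme0n1'
# The fifteen data disks listed under ceph::profile::params::osds: sdb..sdp.
DISKS = ['/dev/sd%s' % c for c in string.ascii_lowercase[1:16]]

# Same mapping as the hand-written block in the file above.
osds = {disk: {'journal': JOURNAL} for disk in DISKS}

def emit_osds():
    """Print the osds mapping as YAML at this file's indentation."""
    print('  ExtraConfig:')
    print('    ceph::profile::params::osds:')
    for disk in sorted(osds):
        print("      '%s':" % disk)
        print("        journal: '%s'" % osds[disk]['journal'])

# Placeholder values with the right *shape* for the Ceph parameters above.
# The FSID is a plain UUID; the key is NOT a valid cephx secret, merely
# 40 base64 characters like one -- use ceph-authtool for real deployments.
fsid = str(uuid.uuid4())
mon_key = base64.b64encode(os.urandom(28)).decode()

emit_osds()
print("  # CephClusterFSID: '%s'" % fsid)
print("  # CephMonKey: '%s'" % mon_key)
```

Putting all fifteen journals on one NVMe device keeps the block simple but makes that device a single point of failure for every OSD on the node, which is worth weighing when editing this mapping.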