@carroarmato0
Created January 28, 2015 07:11
Hiera should not return
---
# All Gluster nodes should advertise to first storage VM
glusterfs::server::peers:
  - '192.168.33.11'
---
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::hostname}"
  - "common"
:yaml:
  :datadir: '/vagrant/hieradata'
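With this hierarchy each key is resolved from nodes/<hostname>.yaml first and falls back to common.yaml. The same lookups can be reproduced outside Puppet with the hiera command line tool (a minimal sketch, assuming hiera.yaml sits next to the hieradata directory on the machine where you run it; -d prints which data sources were tried):

# hypothetical manual lookups mirroring the Puppet run below
hiera -d -c hiera.yaml glusterfs::server::peers ::hostname=storage01
hiera -d -c hiera.yaml glusterfs::volume ::hostname=storage01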
---
glusterfs::server::peers: []
lvm::volume_groups:
  vg_gluster:
    physical_volumes:
      - /dev/sdb
    logical_volumes:
      brick01:
        mountpath: /export/brick01
        size: '512M'
---
lvm::volume_groups:
  vg_gluster:
    physical_volumes:
      - /dev/sdb
    logical_volumes:
      brick01:
        mountpath: /export/brick01
        size: '512M'
glusterfs::volume:
  apache_gl:
    create_options: 'replica 2 192.168.33.11:/export/brick01/apache 192.168.33.12:/export/brick01/apache'
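Once the node definition at the bottom of this gist feeds this hash through create_resources(), it is roughly equivalent to declaring the volume directly (a sketch based on the data above; the exact parameter set depends on the glusterfs module in use):

glusterfs::volume { 'apache_gl':
  create_options => 'replica 2 192.168.33.11:/export/brick01/apache 192.168.33.12:/export/brick01/apache',
}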
[carroarmato0:~/Work/Vagrant/Gluster] master(+21/-10) 50s 1 ± vagrant provision
==> storage01: Running provisioner: shell...
storage01: Running: inline script
==> storage01: Running provisioner: puppet...
==> storage01: Running Puppet with site.pp...
==> storage01: Debug: Runtime environment: ruby_version=1.8.7, puppet_version=3.7.3, run_mode=user
==> storage01: Info: Loading facts
==> storage01: Debug: Loading facts from /tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/stdlib/lib/facter/pe_version.rb
==> storage01: Debug: Loading facts from /tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/stdlib/lib/facter/root_home.rb
==> storage01: Debug: Loading facts from /tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/stdlib/lib/facter/facter_dot_d.rb
==> storage01: Debug: Loading facts from /tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/stdlib/lib/facter/puppet_vardir.rb
==> storage01: Info: Loading facts
==> storage01: Debug: Loading facts from /tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/lvm/lib/facter/lvm_support.rb
==> storage01: Info: Loading facts
==> storage01: Debug: Loading facts from /tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/keepalived/lib/facter/keepalived_host.rb
==> storage01: Debug: Loading facts from /tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/keepalived/lib/facter/keepalived_simple.rb
==> storage01: Debug: Executing '/bin/rpm --version'
==> storage01: Debug: Executing '/bin/rpm --version'
==> storage01: Debug: Executing '/bin/rpm -ql rpm'
==> storage01: Debug: importing '/tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/glusterfs/manifests/client.pp' in environment production
==> storage01: Debug: Automatically imported glusterfs::client from glusterfs/client into production
==> storage01: Debug: importing '/tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/lvm/manifests/init.pp' in environment production
==> storage01: Debug: Automatically imported lvm from lvm into production
==> storage01: Debug: hiera(): Hiera YAML backend starting
==> storage01: Debug: hiera(): Looking up lvm::volume_groups in YAML backend
==> storage01: Debug: hiera(): Looking for data source nodes/storage01
==> storage01: Debug: hiera(): Found lvm::volume_groups in nodes/storage01
==> storage01: Debug: importing '/tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/lvm/manifests/volume_group.pp' in environment production
==> storage01: Debug: Automatically imported lvm::volume_group from lvm/volume_group into production
==> storage01: Debug: importing '/tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/glusterfs/manifests/server.pp' in environment production
==> storage01: Debug: Automatically imported glusterfs::server from glusterfs/server into production
==> storage01: Debug: hiera(): Looking up glusterfs::server::peers in YAML backend
==> storage01: Debug: hiera(): Looking for data source nodes/storage01
==> storage01: Debug: hiera(): Found glusterfs::server::peers in nodes/storage01
==> storage01: Debug: importing '/tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/glusterfs/manifests/peer.pp' in environment production
==> storage01: Debug: Automatically imported glusterfs::peer from glusterfs/peer into production
==> storage01: Debug: importing '/tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/glusterfs/manifests/volume.pp' in environment production
==> storage01: Debug: Automatically imported glusterfs::volume from glusterfs/volume into production
==> storage01: Debug: hiera(): Looking up glusterfs::volume in YAML backend
==> storage01: Debug: hiera(): Looking for data source nodes/storage01
==> storage01: Debug: hiera(): Looking for data source common
==> storage01: Debug: hiera(): Found glusterfs::volume in common
==> storage01: Debug: importing '/tmp/vagrant-puppet/modules-012267475b91075b2257330d2551685a/lvm/manifests/logical_volume.pp' in environment production
==> storage01: Debug: Automatically imported lvm::logical_volume from lvm/logical_volume into production
==> storage01: Debug: Adding relationship from Class[Lvm] to Glusterfs::Volume[apache_gl] with 'before'
==> storage01: Debug: Adding relationship from Logical_volume[brick01] to Filesystem[/dev/vg_gluster/brick01] with 'before'
==> storage01: Debug: Adding relationship from Filesystem[/dev/vg_gluster/brick01] to Mount[/export/brick01] with 'before'
==> storage01: Debug: Adding relationship from Exec[ensure mountpoint '/export/brick01' exists] to Mount[/export/brick01] with 'before'
==> storage01: Debug: Yumrepo[glusterfs-epel]: Adding default for descr
==> storage01: Debug: Yumrepo[glusterfs-epel]: Adding default for enabled
==> storage01: Debug: Yumrepo[glusterfs-epel]: Adding default for gpgkey
==> storage01: Debug: Yumrepo[glusterfs-epel]: Adding default for gpgcheck
==> storage01: Debug: Yumrepo[glusterfs-epel]: Adding default for notify
==> storage01: Debug: Yumrepo[glusterfs-epel]: Adding default for require
==> storage01: Debug: Yumrepo[glusterfs-noarch-epel]: Adding default for descr
==> storage01: Debug: Yumrepo[glusterfs-noarch-epel]: Adding default for enabled
==> storage01: Debug: Yumrepo[glusterfs-noarch-epel]: Adding default for gpgkey
==> storage01: Debug: Yumrepo[glusterfs-noarch-epel]: Adding default for gpgcheck
==> storage01: Debug: Yumrepo[glusterfs-noarch-epel]: Adding default for notify
==> storage01: Debug: Yumrepo[glusterfs-noarch-epel]: Adding default for require
==> storage01: Debug: Package[glusterfs-fuse]: Adding default for allow_virtual
==> storage01: Debug: Package[avahi]: Adding default for allow_virtual
==> storage01: Debug: Package[nss-mdns]: Adding default for allow_virtual
==> storage01: Debug: Package[glusterfs-server]: Adding default for allow_virtual
==> storage01: Notice: Compiled catalog for storage01.local in environment production in 0.80 seconds
==> storage01: Debug: Puppet::Type::Package::ProviderRug: file /usr/bin/rug does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderSun: file /usr/bin/pkginfo does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderPacman: file /usr/bin/pacman does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderPkgin: file pkgin does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderOpkg: file opkg does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderPorts: file /usr/sbin/pkg_info does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderFreebsd: file /usr/sbin/pkg_info does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderAptrpm: file apt-get does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderUrpmi: file urpmi does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderPkg: file /usr/bin/pkg does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderPortupgrade: file /usr/sbin/pkg_info does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderOpenbsd: file pkg_info does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderUp2date: file /usr/sbin/up2date-nox does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderFink: file /sw/bin/fink does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderPortage: file /usr/bin/eix does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderApt: file /usr/bin/apt-get does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderHpux: file /usr/sbin/swlist does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderAix: file /usr/bin/lslpp does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderAptitude: file /usr/bin/aptitude does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderSunfreeware: file pkg-get does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderZypper: file /usr/bin/zypper does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderDpkg: file /usr/bin/dpkg does not exist
==> storage01: Debug: Puppet::Type::Package::ProviderNim: file /usr/bin/lslpp does not exist
==> storage01: Debug: Puppet::Type::Service::ProviderSystemd: file systemctl does not exist
==> storage01: Debug: Puppet::Type::Service::ProviderGentoo: file /sbin/rc-update does not exist
==> storage01: Debug: Puppet::Type::Service::ProviderLaunchd: file /bin/launchctl does not exist
==> storage01: Debug: Puppet::Type::Service::ProviderRunit: file /usr/bin/sv does not exist
==> storage01: Debug: Puppet::Type::Service::ProviderOpenrc: file /bin/rc-status does not exist
==> storage01: Debug: Puppet::Type::Service::ProviderDaemontools: file /usr/bin/svc does not exist
==> storage01: Debug: Puppet::Type::Service::ProviderDebian: file /usr/sbin/update-rc.d does not exist
==> storage01: Debug: Creating default schedules
==> storage01: Debug: Using settings: adding file resource 'confdir': 'File[/etc/puppet]{:loglevel=>:debug, :links=>:follow, :backup=>false, :ensure=>:directory, :path=>"/etc/puppet"}'
==> storage01: Debug: Puppet::Type::User::ProviderLdap: true value when expecting false
==> storage01: Debug: Puppet::Type::User::ProviderPw: file pw does not exist
==> storage01: Debug: Puppet::Type::User::ProviderUser_role_add: file roledel does not exist
==> storage01: Debug: Puppet::Type::User::ProviderDirectoryservice: file /usr/bin/dsimport does not exist
==> storage01: Debug: Puppet::Type::Group::ProviderLdap: true value when expecting false
==> storage01: Debug: Puppet::Type::Group::ProviderPw: file pw does not exist
==> storage01: Debug: Puppet::Type::Group::ProviderDirectoryservice: file /usr/bin/dscl does not exist
==> storage01: Debug: Using settings: adding file resource 'requestdir': 'File[/var/lib/puppet/ssl/certificate_requests]{:loglevel=>:debug, :links=>:follow, :group=>"puppet", :backup=>false, :owner=>"puppet", :mode=>"755", :ensure=>:directory, :path=>"/var/lib/puppet/ssl/certificate_requests"}'
==> storage01: Debug: Using settings: adding file resource 'statedir': 'File[/var/lib/puppet/state]{:loglevel=>:debug, :links=>:follow, :backup=>false, :mode=>"1755", :ensure=>:directory, :path=>"/var/lib/puppet/state"}'
==> storage01: Debug: Using settings: adding file resource 'privatedir': 'File[/var/lib/puppet/ssl/private]{:loglevel=>:debug, :links=>:follow, :group=>"puppet", :backup=>false, :owner=>"puppet", :mode=>"750", :ensure=>:directory, :path=>"/var/lib/puppet/ssl/private"}'
==> storage01: Debug: Using settings: adding file resource 'rundir': 'File[/var/run/puppet]{:loglevel=>:debug, :links=>:follow, :group=>"puppet", :backup=>false, :owner=>"puppet", :mode=>"755", :ensure=>:directory, :path=>"/var/run/puppet"}'
==> storage01: Debug: Using settings: adding file resource 'certdir': 'File[/var/lib/puppet/ssl/certs]{:loglevel=>:debug, :links=>:follow, :group=>"puppet", :backup=>false, :owner=>"puppet", :mode=>"755", :ensure=>:directory, :path=>"/var/lib/puppet/ssl/certs"}'
==> storage01: Debug: Using settings: adding file resource 'clientyamldir': 'File[/var/lib/puppet/client_yaml]{:loglevel=>:debug, :links=>:follow, :backup=>false, :mode=>"750", :ensure=>:directory, :path=>"/var/lib/puppet/client_yaml"}'
==> storage01: Debug: Using settings: adding file resource 'logdir': 'File[/var/log/puppet]{:loglevel=>:debug, :links=>:follow, :group=>"puppet", :backup=>false, :owner=>"puppet", :mode=>"750", :ensure=>:directory, :path=>"/var/log/puppet"}'
==> storage01: Debug: Using settings: adding file resource 'libdir': 'File[/var/lib/puppet/lib]{:loglevel=>:debug, :links=>:follow, :backup=>false, :ensure=>:directory, :path=>"/var/lib/puppet/lib"}'
==> storage01: Debug: Using settings: adding file resource 'publickeydir': 'File[/var/lib/puppet/ssl/public_keys]{:loglevel=>:debug, :links=>:follow, :group=>"puppet", :backup=>false, :owner=>"puppet", :mode=>"755", :ensure=>:directory, :path=>"/var/lib/puppet/ssl/public_keys"}'
==> storage01: Debug: Using settings: adding file resource 'vardir': 'File[/var/lib/puppet]{:loglevel=>:debug, :links=>:follow, :group=>"puppet", :backup=>false, :owner=>"puppet", :ensure=>:directory, :path=>"/var/lib/puppet"}'
==> storage01: Debug: Using settings: adding file resource 'clientbucketdir': 'File[/var/lib/puppet/clientbucket]{:loglevel=>:debug, :links=>:follow, :backup=>false, :mode=>"750", :ensure=>:directory, :path=>"/var/lib/puppet/clientbucket"}'
==> storage01: Debug: Using settings: adding file resource 'privatekeydir': 'File[/var/lib/puppet/ssl/private_keys]{:loglevel=>:debug, :links=>:follow, :group=>"puppet", :backup=>false, :owner=>"puppet", :mode=>"750", :ensure=>:directory, :path=>"/var/lib/puppet/ssl/private_keys"}'
==> storage01: Debug: Using settings: adding file resource 'pluginfactdest': 'File[/var/lib/puppet/facts.d]{:loglevel=>:debug, :links=>:follow, :backup=>false, :ensure=>:directory, :path=>"/var/lib/puppet/facts.d"}'
==> storage01: Debug: Using settings: adding file resource 'graphdir': 'File[/var/lib/puppet/state/graphs]{:loglevel=>:debug, :links=>:follow, :backup=>false, :ensure=>:directory, :path=>"/var/lib/puppet/state/graphs"}'
==> storage01: Debug: Using settings: adding file resource 'hiera_config': 'File[/tmp/vagrant-puppet/hiera.yaml]{:loglevel=>:debug, :links=>:follow, :backup=>false, :ensure=>:file, :path=>"/tmp/vagrant-puppet/hiera.yaml"}'
==> storage01: Debug: Using settings: adding file resource 'ssldir': 'File[/var/lib/puppet/ssl]{:loglevel=>:debug, :links=>:follow, :group=>"puppet", :backup=>false, :owner=>"puppet", :mode=>"771", :ensure=>:directory, :path=>"/var/lib/puppet/ssl"}'
==> storage01: Debug: Using settings: adding file resource 'client_datadir': 'File[/var/lib/puppet/client_data]{:loglevel=>:debug, :links=>:follow, :backup=>false, :mode=>"750", :ensure=>:directory, :path=>"/var/lib/puppet/client_data"}'
==> storage01: Debug: /File[/var/lib/puppet/state]: Autorequiring File[/var/lib/puppet]
==> storage01: Debug: /File[/var/lib/puppet/state/graphs]: Autorequiring File[/var/lib/puppet/state]
==> storage01: Debug: /File[/var/lib/puppet/facts.d]: Autorequiring File[/var/lib/puppet]
==> storage01: Debug: /File[/var/lib/puppet/ssl/private_keys]: Autorequiring File[/var/lib/puppet/ssl]
==> storage01: Debug: /File[/var/lib/puppet/ssl/certs]: Autorequiring File[/var/lib/puppet/ssl]
==> storage01: Debug: /File[/var/lib/puppet/client_yaml]: Autorequiring File[/var/lib/puppet]
==> storage01: Debug: /File[/var/lib/puppet/ssl/public_keys]: Autorequiring File[/var/lib/puppet/ssl]
==> storage01: Debug: /File[/var/lib/puppet/ssl]: Autorequiring File[/var/lib/puppet]
==> storage01: Debug: /File[/var/lib/puppet/ssl/private]: Autorequiring File[/var/lib/puppet/ssl]
==> storage01: Debug: /File[/var/lib/puppet/clientbucket]: Autorequiring File[/var/lib/puppet]
==> storage01: Debug: /File[/var/lib/puppet/client_data]: Autorequiring File[/var/lib/puppet]
==> storage01: Debug: /File[/var/lib/puppet/lib]: Autorequiring File[/var/lib/puppet]
==> storage01: Debug: /File[/var/lib/puppet/ssl/certificate_requests]: Autorequiring File[/var/lib/puppet/ssl]
==> storage01: Debug: /File[/var/lib/puppet/facts.d]/ensure: created
==> storage01: Debug: /File[/var/lib/puppet/ssl]/ensure: created
==> storage01: Debug: /File[/var/lib/puppet/ssl/private_keys]/ensure: created
==> storage01: Debug: /File[/var/lib/puppet/ssl/public_keys]/ensure: created
==> storage01: Debug: /File[/var/lib/puppet/ssl/certs]/ensure: created
==> storage01: Debug: /File[/var/lib/puppet/client_data]/ensure: created
==> storage01: Debug: /File[/var/lib/puppet/state]/mode: mode changed '0750' to '1755'
==> storage01: Debug: /File[/var/lib/puppet/state/graphs]/ensure: created
==> storage01: Debug: /File[/var/lib/puppet/client_yaml]/ensure: created
==> storage01: Debug: /File[/var/lib/puppet/ssl/private]/ensure: created
==> storage01: Debug: /File[/var/lib/puppet/lib]/ensure: created
==> storage01: Debug: /File[/var/lib/puppet/clientbucket]/ensure: created
==> storage01: Debug: /File[/var/lib/puppet/ssl/certificate_requests]/ensure: created
==> storage01: Debug: Finishing transaction 70269389699920
==> storage01: Debug: /Stage[main]/Repos/Yumrepo[glusterfs-noarch-epel]/require: requires Stage[repositories]
==> storage01: Debug: /Stage[main]/Repos/Yumrepo[glusterfs-noarch-epel]/notify: subscribes to Exec[refresh-yum-cache]
==> storage01: Debug: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Filesystem[/dev/vg_gluster/brick01]/before: requires Mount[/export/brick01]
==> storage01: Debug: /Stage[main]/Main/Node[storage]/Glusterfs::Volume[apache_gl]/Exec[gluster volume create apache_gl]/require: requires Class[Glusterfs::Server]
==> storage01: Debug: /Stage[main]/Lvm/before: requires Glusterfs::Volume[apache_gl]
==> storage01: Debug: /Stage[main]/Glusterfs::Server/Service[glusterd]/require: requires Package[glusterfs-server]
==> storage01: Debug: /Stage[main]/Repos/Yumrepo[glusterfs-epel]/require: requires Stage[repositories]
==> storage01: Debug: /Stage[main]/Repos/Yumrepo[glusterfs-epel]/notify: subscribes to Exec[refresh-yum-cache]
==> storage01: Debug: /Stage[repositories]/before: requires Stage[main]
==> storage01: Debug: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Exec[ensure mountpoint '/export/brick01' exists]/before: requires Mount[/export/brick01]
==> storage01: Debug: /Stage[main]/Main/Node[storage]/Glusterfs::Volume[apache_gl]/Exec[/usr/sbin/gluster volume start apache_gl]/require: requires Exec[gluster volume create apache_gl]
==> storage01: Debug: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Logical_volume[brick01]/before: requires Filesystem[/dev/vg_gluster/brick01]
==> storage01: Debug: /Stage[main]/Common/Service[avahi-daemon]/require: requires Package[avahi]
==> storage01: Debug: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Logical_volume[brick01]: Autorequiring Volume_group[vg_gluster]
==> storage01: Info: Applying configuration version '1422428151'
==> storage01: Debug: Prefetching inifile resources for yumrepo
==> storage01: Notice: /Stage[main]/Repos/Yumrepo[glusterfs-noarch-epel]/ensure: created
==> storage01: Info: changing mode of /etc/yum.repos.d/glusterfs-noarch-epel.repo from 600 to 644
==> storage01: Debug: /Stage[main]/Repos/Yumrepo[glusterfs-noarch-epel]: The container Class[Repos] will propagate my refresh event
==> storage01: Info: /Stage[main]/Repos/Yumrepo[glusterfs-noarch-epel]: Scheduling refresh of Exec[refresh-yum-cache]
==> storage01: Debug: Prefetching yum resources for package
==> storage01: Debug: Executing '/bin/rpm --version'
==> storage01: Debug: Executing '/bin/rpm -qa --nosignature --nodigest --qf '%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\n''
==> storage01: Debug: Executing '/bin/rpm -q nss-mdns --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\n'
==> storage01: Debug: Executing '/bin/rpm -q nss-mdns --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\n --whatprovides'
==> storage01: Debug: Package[nss-mdns](provider=yum): Ensuring => present
==> storage01: Debug: Executing '/usr/bin/yum -d 0 -e 0 -y install nss-mdns'
==> storage01: Notice: /Stage[main]/Common/Package[nss-mdns]/ensure: created
==> storage01: Debug: /Stage[main]/Common/Package[nss-mdns]: The container Class[Common] will propagate my refresh event
==> storage01: Debug: Executing '/bin/rpm -q glusterfs-fuse --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\n'
==> storage01: Debug: Executing '/bin/rpm -q glusterfs-fuse --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\n --whatprovides'
==> storage01: Debug: Package[glusterfs-fuse](provider=yum): Ensuring => present
==> storage01: Debug: Executing '/usr/bin/yum -d 0 -e 0 -y install glusterfs-fuse'
==> storage01: Notice: /Stage[main]/Glusterfs::Client/Package[glusterfs-fuse]/ensure: created
==> storage01: Debug: /Stage[main]/Glusterfs::Client/Package[glusterfs-fuse]: The container Class[Glusterfs::Client] will propagate my refresh event
==> storage01: Debug: Class[Glusterfs::Client]: The container Stage[main] will propagate my refresh event
==> storage01: Debug: Executing '/bin/rpm -q avahi --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\n'
==> storage01: Debug: Executing '/sbin/service avahi-daemon status'
==> storage01: Debug: Executing '/sbin/chkconfig avahi-daemon'
==> storage01: Debug: Executing '/sbin/service avahi-daemon start'
==> storage01: Debug: Executing '/sbin/chkconfig avahi-daemon'
==> storage01: Notice: /Stage[main]/Common/Service[avahi-daemon]/ensure: ensure changed 'stopped' to 'running'
==> storage01: Debug: /Stage[main]/Common/Service[avahi-daemon]: The container Class[Common] will propagate my refresh event
==> storage01: Info: /Stage[main]/Common/Service[avahi-daemon]: Unscheduling refresh on Service[avahi-daemon]
==> storage01: Debug: Class[Common]: The container Stage[main] will propagate my refresh event
==> storage01: Debug: Executing '/sbin/pvs /dev/sdb'
==> storage01: Debug: Executing '/sbin/pvcreate /dev/sdb'
==> storage01: Notice: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Physical_volume[/dev/sdb]/ensure: created
==> storage01: Debug: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Physical_volume[/dev/sdb]: The container Lvm::Volume_group[vg_gluster] will propagate my refresh event
==> storage01: Debug: Executing '/sbin/vgs vg_gluster'
==> storage01: Debug: Executing '/sbin/vgcreate vg_gluster /dev/sdb'
==> storage01: Notice: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Volume_group[vg_gluster]/ensure: created
==> storage01: Debug: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Volume_group[vg_gluster]: The container Lvm::Volume_group[vg_gluster] will propagate my refresh event
==> storage01: Debug: Executing '/sbin/lvs vg_gluster'
==> storage01: Debug: Executing '/sbin/lvcreate -n brick01 --size 512M vg_gluster'
==> storage01: Notice: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Logical_volume[brick01]/ensure: created
==> storage01: Debug: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Logical_volume[brick01]: The container Lvm::Logical_volume[brick01] will propagate my refresh event
==> storage01: Debug: Exec[ensure mountpoint '/export/brick01' exists](provider=posix): Executing check 'test -d /export/brick01'
==> storage01: Debug: Executing 'test -d /export/brick01'
==> storage01: Debug: Exec[ensure mountpoint '/export/brick01' exists](provider=posix): Executing 'mkdir -p /export/brick01'
==> storage01: Debug: Executing 'mkdir -p /export/brick01'
==> storage01: Notice: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Exec[ensure mountpoint '/export/brick01' exists]/returns: executed successfully
==> storage01: Debug: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Exec[ensure mountpoint '/export/brick01' exists]: The container Lvm::Logical_volume[brick01] will propagate my refresh event
==> storage01: Notice: /Stage[main]/Repos/Yumrepo[glusterfs-epel]/ensure: created
==> storage01: Info: changing mode of /etc/yum.repos.d/glusterfs-epel.repo from 600 to 644
==> storage01: Debug: /Stage[main]/Repos/Yumrepo[glusterfs-epel]: The container Class[Repos] will propagate my refresh event
==> storage01: Info: /Stage[main]/Repos/Yumrepo[glusterfs-epel]: Scheduling refresh of Exec[refresh-yum-cache]
==> storage01: Debug: Exec[refresh-yum-cache](provider=posix): Executing 'yum makecache'
==> storage01: Debug: Executing 'yum makecache'
==> storage01: Notice: /Stage[main]/Repos/Exec[refresh-yum-cache]: Triggered 'refresh' from 2 events
==> storage01: Debug: /Stage[main]/Repos/Exec[refresh-yum-cache]: The container Class[Repos] will propagate my refresh event
==> storage01: Debug: Class[Repos]: The container Stage[main] will propagate my refresh event
==> storage01: Debug: Executing '/bin/rpm -q glusterfs-server --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\n'
==> storage01: Debug: Executing '/bin/rpm -q glusterfs-server --nosignature --nodigest --qf %{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\n --whatprovides'
==> storage01: Debug: Package[glusterfs-server](provider=yum): Ensuring => present
==> storage01: Debug: Executing '/usr/bin/yum -d 0 -e 0 -y install glusterfs-server'
==> storage01: Notice: /Stage[main]/Glusterfs::Server/Package[glusterfs-server]/ensure: created
==> storage01: Debug: /Stage[main]/Glusterfs::Server/Package[glusterfs-server]: The container Class[Glusterfs::Server] will propagate my refresh event
==> storage01: Debug: Executing '/sbin/service glusterd status'
==> storage01: Debug: Executing '/sbin/chkconfig glusterd'
==> storage01: Debug: Executing '/sbin/service glusterd start'
==> storage01: Debug: Executing '/sbin/chkconfig glusterd'
==> storage01: Notice: /Stage[main]/Glusterfs::Server/Service[glusterd]/ensure: ensure changed 'stopped' to 'running'
==> storage01: Debug: /Stage[main]/Glusterfs::Server/Service[glusterd]: The container Class[Glusterfs::Server] will propagate my refresh event
==> storage01: Info: /Stage[main]/Glusterfs::Server/Service[glusterd]: Unscheduling refresh on Service[glusterd]
==> storage01: Debug: Class[Glusterfs::Server]: The container Stage[main] will propagate my refresh event
==> storage01: Debug: Executing '/sbin/blkid /dev/vg_gluster/brick01'
==> storage01: Debug: Executing 'mkfs.ext4 /dev/vg_gluster/brick01'
==> storage01: Notice: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Filesystem[/dev/vg_gluster/brick01]/ensure: created
==> storage01: Debug: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Filesystem[/dev/vg_gluster/brick01]: The container Lvm::Logical_volume[brick01] will propagate my refresh event
==> storage01: Debug: Prefetching parsed resources for mount
==> storage01: Debug: Executing '/bin/mount'
==> storage01: Notice: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Mount[/export/brick01]/ensure: defined 'ensure' as 'mounted'
==> storage01: Debug: Flushing mount provider target /etc/fstab
==> storage01: Info: Computing checksum on file /etc/fstab
==> storage01: Debug: Executing '/bin/mount /export/brick01'
==> storage01: Debug: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Mount[/export/brick01]: The container Lvm::Logical_volume[brick01] will propagate my refresh event
==> storage01: Info: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Mount[/export/brick01]: Scheduling refresh of Mount[/export/brick01]
==> storage01: Info: Mount[/export/brick01](provider=parsed): Remounting
==> storage01: Debug: Executing '/bin/mount -o remount /export/brick01'
==> storage01: Notice: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Mount[/export/brick01]: Triggered 'refresh' from 1 events
==> storage01: Debug: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Mount[/export/brick01]: The container Lvm::Logical_volume[brick01] will propagate my refresh event
==> storage01: Info: /Stage[main]/Lvm/Lvm::Volume_group[vg_gluster]/Lvm::Logical_volume[brick01]/Mount[/export/brick01]: Scheduling refresh of Mount[/export/brick01]
==> storage01: Debug: Lvm::Logical_volume[brick01]: The container Lvm::Volume_group[vg_gluster] will propagate my refresh event
==> storage01: Debug: Lvm::Volume_group[vg_gluster]: The container Class[Lvm] will propagate my refresh event
==> storage01: Debug: Class[Lvm]: The container Stage[main] will propagate my refresh event
==> storage01: Debug: Exec[gluster volume create apache_gl](provider=posix): Executing '/usr/sbin/gluster volume create apache_gl replica 2 192.168.33.11:/export/brick01/apache 192.168.33.12:/export/brick01/apache'
==> storage01: Debug: Executing '/usr/sbin/gluster volume create apache_gl replica 2 192.168.33.11:/export/brick01/apache 192.168.33.12:/export/brick01/apache'
==> storage01: Notice: /Stage[main]/Main/Node[storage]/Glusterfs::Volume[apache_gl]/Exec[gluster volume create apache_gl]/returns: volume create: apache_gl: failed: Host 192.168.33.12 is not in 'Peer in Cluster' state
==> storage01: Error: /usr/sbin/gluster volume create apache_gl replica 2 192.168.33.11:/export/brick01/apache 192.168.33.12:/export/brick01/apache returned 1 instead of one of [0]
==> storage01: Error: /Stage[main]/Main/Node[storage]/Glusterfs::Volume[apache_gl]/Exec[gluster volume create apache_gl]/returns: change from notrun to 0 failed: /usr/sbin/gluster volume create apache_gl replica 2 192.168.33.11:/export/brick01/apache 192.168.33.12:/export/brick01/apache returned 1 instead of one of [0]
==> storage01: Notice: /Stage[main]/Main/Node[storage]/Glusterfs::Volume[apache_gl]/Exec[/usr/sbin/gluster volume start apache_gl]: Dependency Exec[gluster volume create apache_gl] has failures: true
==> storage01: Warning: /Stage[main]/Main/Node[storage]/Glusterfs::Volume[apache_gl]/Exec[/usr/sbin/gluster volume start apache_gl]: Skipping because of failed dependencies
==> storage01: Debug: Finishing transaction 70269389927460
==> storage01: Debug: Storing state
==> storage01: Info: Creating state file /var/lib/puppet/state/state.yaml
==> storage01: Debug: Stored state in 0.01 seconds
==> storage01: Notice: Finished catalog run in 129.35 seconds
==> storage01: Debug: Using settings: adding file resource 'rrddir': 'File[/var/lib/puppet/rrd]{:loglevel=>:debug, :links=>:follow, :group=>"puppet", :backup=>false, :owner=>"puppet", :mode=>"750", :ensure=>:directory, :path=>"/var/lib/puppet/rrd"}'
==> storage01: Debug: /File[/var/lib/puppet/rrd]/ensure: created
==> storage01: Debug: Finishing transaction 70269388493560
==> storage01: Debug: Received report to process from storage01.local
==> storage01: Debug: Processing report from storage01.local with processor Puppet::Reports::Store
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
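The run fails at the volume-create step: gluster on 192.168.33.11 reports that 192.168.33.12 is not yet in 'Peer in Cluster' state, so the replica-2 volume cannot be assembled. One way to confirm the peer state by hand (a sketch, run on storage01 after provisioning) is:

# list every known peer and its current state
gluster peer status
# probe the second storage node manually if it is missing or not yet 'Peer in Cluster'
gluster peer probe 192.168.33.12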
node /storage/ {
  #notify {'Hello, I\'m a storage box!':}
  include common
  include lvm
  include glusterfs::server

  Class['lvm'] -> GlusterFS::Volume['apache_gl']

  $glusterfs_volume_hash = hiera('glusterfs::volume', {})
  create_resources('glusterfs::volume', $glusterfs_volume_hash)
}
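Note that the only explicit ordering above is Class['lvm'] before the volume; nothing forces the peer relationship with 192.168.33.12 to exist before the volume-create exec runs, and on the first provision of storage01 it cannot, because storage02 has not been brought up yet. If the peering is managed by the module's glusterfs::peer defined type (the debug log shows it being imported), one possible way to express that dependency would be (a sketch, assuming glusterfs::peer is what establishes the peering):

# hypothetical ordering sketch: create the volume only after any declared peers
Glusterfs::Peer <| |> -> GlusterFS::Volume['apache_gl']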