@younga3 younga3/hyrax-notes
Created May 8, 2018

# Hyrax notes
## Local fullstack development install
- CentOS 7
- ruby 2.4.2
- rails 5.0.6
- fits 1.2.0
- hyrax v2.1.0.rc2
- fedora 4.7.3
- solr 6.6.2
- passenger 5.1.11
## Log files
- Hyrax - $HYRAX/log/production.log
- Redis - /var/log/redis/redis.log
- Sidekiq - /var/log/sidekiq.log (log is here because I created it here)
- Passenger - /var/log/httpd/error_log
- Fedora repo - /var/fedora-data/velocity.log (log is here because that is where I told tomcat to put fedora-data)
- Solr - /var/solr/logs/solr.log
## Backup/restore
For moving data around during development. (This is separate from backup/restore for disaster recovery and/or operations.)
This is mostly taken from
## Files/directories needed
- fcrepo data
- redis db
- postgresql db
- pg_dump of hyraxdb
- $HYRAX/tmp/derivatives (this is a symlink on actual production)
- solr indexes
## Detail: fcrepo
Option 1:
Use fedora's backup/restore instructions. Here's a cheatsheet.
$ mkdir /tmp/fcrepo_backup
$ sudo chgrp -R tomcat /tmp/fcrepo_backup
$ sudo -u tomcat curl -X POST -d "/tmp/fcrepo_backup" "localhost:8080/fedora/rest/fcr:backup"
Then, in the new place:
$ sudo -u tomcat curl -X POST -d "/pathto/fcrepo_backup" "localhost:8080/fedora/rest/fcr:restore"
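If you do this often, the two calls can be wrapped in small shell helpers. This is my own sketch — the function names and `FEDORA_BASE` are assumptions, not part of Fedora:

```shell
# Hypothetical helpers around the cheatsheet above; adjust FEDORA_BASE to your install.
FEDORA_BASE=${FEDORA_BASE:-http://localhost:8080/fedora/rest}

fcr_backup() {   # $1 = directory writable by the tomcat user
  sudo -u tomcat curl -X POST -d "$1" "$FEDORA_BASE/fcr:backup"
}

fcr_restore() {  # $1 = directory containing a previous fcr:backup
  sudo -u tomcat curl -X POST -d "$1" "$FEDORA_BASE/fcr:restore"
}

# Usage: fcr_backup /tmp/fcrepo_backup
```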
Option 2:
Install the fcrepo-import-export tool.
For CentOS 7: /usr/share/tomcat/webapps/fedora/WEB-INF/lib/
## Detail: redis
Stop redis before copying its data.
Make sure redis owns dump.rdb in the new place.
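Sketch of the copy step; `/var/lib/redis` is the assumed CentOS 7 data dir, and the chown is guarded so it is skipped on machines without a redis user:

```shell
# Sketch only; destination path and the redis:redis owner are assumptions.
redis_copy_dump() {  # $1 = source dump.rdb, $2 = destination data dir
  cp -p "$1" "$2/dump.rdb"
  if id redis >/dev/null 2>&1; then
    # Make sure redis owns the file in the new place.
    chown redis:redis "$2/dump.rdb" 2>/dev/null || echo "chown failed (need root)"
  fi
}
# Usage (with redis stopped on both ends):
#   redis_copy_dump /backup/dump.rdb /var/lib/redis
```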
## Detail: derivatives
Just untar in the new place.
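What that looks like end to end, demonstrated on a throwaway directory instead of the real `$HYRAX/tmp/derivatives` (the path is install-specific):

```shell
# Illustration of the tar round-trip; the file name is a stand-in derivative.
work=$(mktemp -d)
mkdir -p "$work/derivatives"
echo demo > "$work/derivatives/file.jp2"

# Tar from the parent dir so the archive contains a top-level "derivatives/".
tar -czf "$work/derivatives.tar.gz" -C "$work" derivatives

# In the new place, just untar.
newplace=$(mktemp -d)
tar -xzf "$work/derivatives.tar.gz" -C "$newplace"
ls "$newplace/derivatives"
```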
## Detail: postgresql db
The quick and dirty way. There are lots of ways to do this.
$ sudo -u postgres pg_dump -Fc [hyraxdbname] > /tmp/hyraxdb-backup/pgdump.[date]
In the new place:
$ sudo systemctl stop httpd # or you will get an error when dropping the existing db
$ sudo -u postgres dropdb [hyraxdbname]
$ sudo -u postgres createdb [hyraxdbname]
$ sudo -u postgres pg_restore -d [hyraxdbname] pgdump.[date]
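The same steps as reusable functions — a sketch, where the function names are mine and the db name is whatever your app's `config/database.yml` points at:

```shell
# Hypothetical one-shot helpers for the dump/restore cycle above.
pg_backup() {         # $1 = db name, $2 = output file (custom format)
  sudo -u postgres pg_dump -Fc "$1" > "$2"
}

pg_clean_restore() {  # $1 = db name, $2 = dump file
  sudo systemctl stop httpd   # avoid "database is being accessed" errors
  sudo -u postgres dropdb "$1"
  sudo -u postgres createdb "$1"
  sudo -u postgres pg_restore -d "$1" "$2"
}
```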
## Detail: solr
Option: Pie-in-the-sky
$ sudo bundle exec rails c production
irb(main):001:0> ActiveFedora::Base.reindex_everything
Option: When/if that fails
stop solr
untar /var/solr
start solr
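The cold-restore path as one function; `/var/solr`, the tarball layout, and the `solr` unit name are assumptions from this install and may differ on yours:

```shell
# Sketch only; assumes the tarball was created relative to / (i.e. contains var/solr).
solr_cold_restore() {  # $1 = tarball of /var/solr
  sudo systemctl stop solr
  sudo tar -xzf "$1" -C /
  sudo systemctl start solr
}
```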
## Problems: upload
If your work is created successfully but the bitstreams fail to upload, check that sidekiq is running.
If sidekiq is not running, then batch loads will also fail (both work creation and upload).
Once sidekiq is started, the queued load or creation/load jobs in redis should go through (I've seen this not work a few times).
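A quick way to check the pieces are up; the process names here are assumptions, adjust to your setup:

```shell
# Tiny health check: report whether a process matching the name is running.
svc_check() {
  pgrep -f "$1" >/dev/null && echo "$1: running" || echo "$1: NOT running"
}
svc_check sidekiq
svc_check redis
```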
# Other useful things
## Fedora: Delete container
To delete a Fedora container:
curl -X DELETE "http://localhost:8080/fedora/rest/[containername]"
curl -X DELETE "http://localhost:8080/fedora/rest/prod"
To delete the subsequent tombstone:
curl -X DELETE "http://localhost:8080/fedora/rest/prod/fcr:tombstone"
To create a new node with the old URL:
curl -X PUT "http://localhost:8080/fedora/rest/prod"
## Fedora: Delete a thing
curl -X DELETE "http://localhost:8080/rest/tx:83e34464-144e-43d9-af13-b3464a1fb9b5/path/to/resource/to/delete"
curl -X DELETE "http://localhost:8080/rest/prod/9c/67/wm/82/9c67wm82b"
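Since a full delete is always two calls (the resource, then its tombstone), a wrapper helps; `fcr_purge` and `FEDORA_BASE` are my own names, not Fedora's:

```shell
# Hypothetical convenience wrapper: delete a resource and its tombstone in one go.
FEDORA_BASE=${FEDORA_BASE:-http://localhost:8080/fedora/rest}

fcr_purge() {  # $1 = path under the rest endpoint, e.g. prod/9c/67/wm/82/9c67wm82b
  curl -X DELETE "$FEDORA_BASE/$1"
  curl -X DELETE "$FEDORA_BASE/$1/fcr:tombstone"
}
```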
## Solr: Delete core
sudo -u solr /opt/solr/bin/solr delete -c blacklight-core
sudo systemctl restart solr.service
## Solr: Create new index
sudo -u solr /opt/solr/bin/solr create -c blacklight-core -n data_driven_schema_configs
Alternatively, rerun ansible with the solr tag:
ansible-playbook -i inventory fullstack.yml --tags=solr