server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_read_timeout 100m;
    }
}
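After editing the proxy config, nginx has to pick up the change before it takes effect. A minimal sketch of checking and reloading, assuming a systemd-managed nginx (not part of the original notes):

sudo nginx -t                    # test the configuration for syntax errors first
sudo systemctl reload nginx      # reload workers without dropping open connections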
1. Log in to the VM as the user who will be changed.
2. Use the command 'id' to list the current user's IDs. Write down the current (old) UID and GID.
3. Determine what your new username's IDs will be. If you already have an account in the sgn network, ssh into solanine and run 'cat /etc/passwd' to see what your systemwide IDs are.
4. Change the password with the command 'passwd'.
5. 'sudo su' to become superuser.
6. For the following commands, it may be necessary to first stop all processes associated with the user to be changed, by running 'killall --user <OLDUSERNAME>'.
7. 'usermod -l <NEWUSERNAME> -m -d <PATH/TO/NEW/HOMEDIR> <OLDUSERNAME>' to rename the username and home directory.
8. 'groupmod -n <NEWUSERNAME> <OLDUSERNAME>' to rename the group.
9. 'usermod -u <NEWUID> <USERNAME>' to change the UID (a consolidated sketch of steps 5-9 follows this list).
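A consolidated sketch of steps 5-9 above, using 'olduser', 'newuser', a new home of /home/newuser, and UID 1500 purely as placeholder values:

sudo su                                            # become superuser
killall --user olduser                             # stop any remaining processes owned by the old account
usermod -l newuser -m -d /home/newuser olduser     # rename the account and move its home directory
groupmod -n newuser olduser                        # rename the matching group
usermod -u 1500 newuser                            # assign the new UID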
server {
    listen 80;
    server_name localhost;
    return 301 https://$host$request_uri;
}

server {
    listen 443 default_server;
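    # A rough sketch of how this truncated block is typically completed; the certificate
    # paths and the upstream are placeholders, not taken from the original config.
    # On current nginx the listen directive above would usually also include "ssl".
    ssl_certificate     /etc/ssl/certs/example.crt;        # placeholder path
    ssl_certificate_key /etc/ssl/private/example.key;      # placeholder path

    location / {
        proxy_pass http://localhost:3000;                  # placeholder upstream
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}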
## Ref seq
cd to /export/prod/jbrowse_cassavabase/JBrowse-1.11.6/data/json
sudo mkdir cassavaV6_1
cd into it
Downloaded Mesculenta_305_v6.fa.gz from JGI <http://genome.jgi.doe.gov/pages/dynamicOrganismDownload.jsf?organism=Mesculenta>
sudo bin/prepare-refseqs.pl --fasta Mesculenta_305_v6.fa.gz
sudo mv data/* .
sudo rm -r data
ssh production@172.30.2.197
cd /export/prod/jbrowse_cassavabase/current/data
sudo mkdir cassava_example
cd cassava_example/
sudo mkdir data_files
cd data_files/
sudo ln -s /export/prod/public_cassava/Manihot_v6.1/ .
cd ../
sudo ../../bin/prepare-refseqs.pl --fasta data_files/Manihot_v6.1/assembly/Mesculenta_305_v6.fa
sudo mv data/* .
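prepare-refseqs.pl writes everything into a data/ subdirectory, which is why the generated files are moved up a level afterwards. A quick sanity check that the reference sequences were generated (a sketch; exact file names can vary between JBrowse releases):

ls seq/                   # per-reference-sequence chunk files
less seq/refSeqs.json     # index of reference sequences that JBrowse will display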
to dump the production db:
from the command line: pg_dump -U postgres -h db5.sgn.cornell.edu cxgn_musabase | gzip > db5.cxgn_musabase.pgsql.gz
to rename the old test db:
ALTER DATABASE sandbox_musabase RENAME TO sandbox_musabase_backup;
if there is an error, check for active processes: SELECT * FROM pg_stat_activity;
and force disconnect of those processes (they should be idle!):
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE pid <> pg_backend_pid() AND datname = 'sandbox_musabase';
then run the ALTER command again, or drop the old database instead:
DROP DATABASE sandbox_musabase;
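After the old test database is renamed or dropped, the dump is presumably loaded into a fresh copy. A minimal sketch, assuming the new database reuses the name sandbox_musabase (the exact options are an assumption, not part of the original notes):

createdb -U postgres sandbox_musabase
gunzip -c db5.cxgn_musabase.pgsql.gz | psql -U postgres -d sandbox_musabase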
Make sure your VM is shut down!
1. In your VirtualBox folder, find the correct disk image. If it isn't already a .vdi, convert it:
`VBoxManage clonehd box-disk1.vmdk box-disk1.vdi --format vdi`
2. Enlarge the vdi file.
The number you give as the --resize argument is in MB, i.e. the desired size in GB times 1024. In this case 250 GB * 1024 MB/GB = 256000:
`VBoxManage modifyhd box-disk1.vdi --resize 256000`
3. Open VirtualBox manager and click on the New icon in the top left to create a new machine. After naming it and picking the amount of memory,
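choose "Use an existing virtual hard disk file" and point it at the resized .vdi rather than creating a new disk (a typical continuation; the original note breaks off above). The same attachment can also be scripted; a sketch with 'sgn-vm' as a placeholder machine name:
`VBoxManage storagectl sgn-vm --name "SATA" --add sata`
`VBoxManage storageattach sgn-vm --storagectl "SATA" --port 0 --device 0 --type hdd --medium box-disk1.vdi`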
Here's how I was able to create temp user accounts en masse for SPB workshops. Could prove useful for future workshops:
Step 1:
Manually create a dummy account through the test site's user interface. Give it username 'password_source' and the password that you want to use for all the temp user accounts.
Step 2:
Connect to the test database using psql and run:
begin;
insert into sgn_people.sp_person (first_name, last_name, username, password, private_email, user_type)
(select 'test' as first_name,
        'user' || i as last_name,
        'testuser' || i as username,
        (select password from sgn_people.sp_person where username = 'password_source') as password,
        'spbuser' || i || '@mailinator.com' as private_email,
        'submitter' as user_type
 from generate_series(0,50) as i);
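The insert runs inside the transaction opened with begin;, so nothing is permanent yet. A quick check before committing (51 rows matches generate_series(0,50); the LIKE pattern assumes no pre-existing testuser accounts):

select count(*) from sgn_people.sp_person where username like 'testuser%';   -- expect 51
commit;   -- or rollback; if the counts look wrong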
query for no missing values:
SELECT table1.accession_id, table1.accession_name, table1.trait1, table2.trait2, table3.trait3
FROM (SELECT accession_id, accession_name, avg(phenotype_value::real) as trait1 FROM materialized_phenoview WHERE trial_id = 122 AND trait_id = 70741 GROUP BY 1,2) as table1
JOIN (SELECT accession_id, accession_name, avg(phenotype_value::real) as trait2 FROM materialized_phenoview WHERE trial_id = 122 AND trait_id = 70691 GROUP BY 1,2) as table2 USING(accession_id)
JOIN (SELECT accession_id, accession_name, avg(phenotype_value::real) as trait3 FROM materialized_phenoview WHERE trial_id = 122 AND trait_id = 70762 GROUP BY 1,2) as table3 USING(accession_id)
ORDER BY 2;
query for allowing missing values:
SELECT table0.accession_id, table0.accession_name, table1.trait1, table2.trait2, table3.trait3
FROM (SELECT accession_id, accession_name FROM materialized_phenoview WHERE trial_id = 122 GROUP BY 1,2) as table0
FULL OUTER JOIN (SELECT accession_id, accession_name, avg(phenotype_value::real) as trait1 FROM materialized_phenoview WHERE trial_id = 122 AND trait_id = 70741 GROUP BY 1,2) as table1 USING(accession_id)
FULL OUTER JOIN (SELECT accession_id, accession_name, avg(phenotype_value::real) as trait2 FROM materialized_phenoview WHERE trial_id = 122 AND trait_id = 70691 GROUP BY 1,2) as table2 USING(accession_id)
FULL OUTER JOIN (SELECT accession_id, accession_name, avg(phenotype_value::real) as trait3 FROM materialized_phenoview WHERE trial_id = 122 AND trait_id = 70762 GROUP BY 1,2) as table3 USING(accession_id)
ORDER BY 2;
Commit and push, or pull, all branches to be included in the update into the master branch.
1. Run tests, including unit, unit_fixture, and selenium tests.
- for unit tests: `perl t/test_fixture.pl --noserver t/unit`
- for unit_fixture tests: `perl t/test_fixture.pl t/unit_fixture`
- for selenium tests:
run the selenium2 server in a separate terminal with `java -jar selenium-server-standalone-2.53.0.jar`. This jar file should be in the vagrant home directory
then run `perl t/test_fixture.pl t/selenium2/`
If any tests fail, they can be run individually (see the example below) and troubleshot to make any necessary fixes to the code on the master branch, or to update the test itself.
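For example, assuming test_fixture.pl accepts a single test file the same way it accepts a directory (the file name below is a placeholder):
`perl t/test_fixture.pl t/unit_fixture/CXGN/SomeFailing.t`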