{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tutorial: \n",
"### paired-end ddRAD w/ merged reads\n",
"#### (or other two-cutter based datatypes)\n",
"### _pyRAD_ v.3.0.4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"----------------------------- \n",
"\n",
"###__Topics__: \n",
"\n",
" + Setup of params file for paired-end ddRAD (pairddrad)\n",
" + Check for merge/overlap of paired reads\n",
" + Assemble simulated data set with merged reads\n",
" + Combine merged with non-merged assembly\n",
"\n",
"----------------------------- "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## What do the data look like?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The double digest library preparation method was developed and described by [Peterson et al. 2012](http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0037135). Here I will be talking about __ _paired-end ddRAD_ __ data and describe my recommendations for how to setup a params file to analyze them in _pyRAD_ when many of the data are __merged__. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A ddRAD library is prepared by cutting genomic DNA with two different restriction enzymes and selecting the intervening fragments that are within a certain size window. These will contain overhangs from the respective cut sites on either side. One side will have a barcode+illumina adapter ligated, and the other end will have only the reverse Illumina adapter ligated. The first reads may come in one or multiple files with \"\\_R1\\_\" in the name, and the second reads are in a separate file/s with \"\\_R2\\_\". Second read files will contain the corresponding pair from the first read files in the exact same order.\n",
"\n",
"-------------------"
]
},
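{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can spot-check that the reads in a pair of files are in the same order by comparing their headers. A minimal sketch (the file names here are hypothetical placeholders):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"## print the first few read headers from each file side by side;\n",
"## properly paired files show matching read IDs on every line\n",
"paste <(zcat xxxx_R1_001.fastq.gz | awk 'NR % 4 == 1' | head -n 5) \\\n",
"      <(zcat xxxx_R2_001.fastq.gz | awk 'NR % 4 == 1' | head -n 5)"
]
},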
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![alt](https://dl.dropboxusercontent.com/u/2538935/PYRAD_TUTORIALS/figures/diag_ddradpair.svg)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Which cutters did you use?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A feature of ddRAD data is that two different cutters are used to generate the data. There is typically a rare cutter and a common cutter. You will need to know what the overhang sequence is that these cutters leave on your sequences. This can easily be found by looking at the raw forward and reverse reads files. Find the invariant sequence near the beginning of R1 files for the first cutter, and invariant sequence at the end of the _R2_ files for the second cutter. You will list them in this order in the params file, discussed below."
]
},
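{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, once you have your raw files in hand (here I use the tutorial files downloaded below), a quick peek at the reads shows the invariant overhangs; a minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"## first cutter: invariant overhang follows the (variable) barcode at the start of R1 reads\n",
"zcat simpairddradsmerge_R1_.fastq.gz | awk 'NR % 4 == 2' | head -n 4 | cut -c 1-15\n",
"## second cutter: invariant overhang at the start of R2 reads\n",
"zcat simpairddradsmerge_R2_.fastq.gz | awk 'NR % 4 == 2' | head -n 4 | cut -c 1-15"
]
},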
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Barcodes\n",
"Your data will likely come to you non-demultiplexed (meaning not sorted by which individual the reads belong to). You can demultiplex the reads according to their barcodes using _pyRAD_ or separate software. If your reads are already de-multiplexed that is OK as well. \n",
"\n",
"#### Detecting merged reads\n",
"A failure to merge paired end reads that have overlapping sequences can lead to _major_ problems during the assembly. A number of external programs are available to check for overlap of paired end reads, and you can run your data through these programs before being input to _pyRAD_. At the time of writing this, I recommend the software PEAR (https://github.com/xflouris/PEAR), which I'll demonstrate below. \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The Example data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For this tutorial I simulated paired-end ddRAD reads on a 12 taxon species tree, shown below. You can download the data using the script below and assemble these data by following along with all of the instructions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](https://dl.dropboxusercontent.com/u/2538935/PYRAD_TUTORIALS/figures/fig_tree4sims.svg)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Archive: simpairddradsmerge.zip\n",
" inflating: simpairddradsmerge.barcodes \n",
" inflating: simpairddradsmerge_R1_.fastq.gz \n",
" inflating: simpairddradsmerge_R2_.fastq.gz \n"
]
}
],
"source": [
"%%bash\n",
"## download the data\n",
"wget -q https://github.com/dereneaton/dereneaton.github.io/raw/master/downloads/simpairddradsmerge.zip\n",
"\n",
"## unzip the data\n",
"unzip simpairddradsmerge.zip"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Where the example data come from (skip this section if you want)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can simply download the data, but it might also be worth describing how the data were generated. In this case I simulated ddRAD-like data using the egglib coalescent simulator with the following options using my program [simRRLs.py](http://dereneaton.com/software/simrrls/)\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"indels = 0.005 ## low rate of indels (prob. mutation is indel)\n",
"dropout = 0 ## if 1, mutations to restriction site can cause locus dropout.\n",
"nloci = 1000 ## Total Nloci simulated (less if locus dropout or merged reads) \n",
"ninds = 1 ## sampled individuals per tip taxon\n",
"shortfrag = 50 ## smallest size of digested fragments\n",
"longfrag = 300 ## largest size of digested fragments"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"480000 sequenced reads\n"
]
}
],
"source": [
"## Because the data are simulated at 20X coverage, \n",
"## and the reads are sequenced from both ends (paired-end)\n",
"## the total number of reads is: \n",
"reads = 12*ninds*nloci*20*2\n",
"print \"%s sequenced reads\" % reads"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here I execute the simulation script to simulate the data, and write it to a file called _simpairddradsmerge_. If you simply downloaded the data you do not need to do this step, I show it only for those interested. The relevant point is that the fragment size is randomly selected between 50 and 300 bp, meaning that around half of our fragments will be <200 bp long, which in the case of paired 100 bp reads will lead to overlap."
]
},
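{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick back-of-the-envelope check of that expectation. This is a sketch assuming fragment lengths are uniform on 50-300 bp and a minimum detectable overlap of 10 bp (PEAR's default, visible in its log further below):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"## paired 100 bp reads can be merged when the fragment is shorter than\n",
"## 2*100 - 10 = 190 bp, given a 10 bp minimum overlap\n",
"maxmerge = 2*100 - 10\n",
"frac = float(maxmerge - shortfrag) / (longfrag - shortfrag)\n",
"print \"expected fraction of merged reads ~ %.2f\" % frac"
]
},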
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\tsimulating pairddrad data\n",
"\tsimulating 1000 loci at 20X coverage across 12 tip taxa with 1 samples per taxon\n",
"\tindels arise at frequency of 0.005000 per mutation\n",
"\tmutations in restriction site = False\n",
"\tsequencing error rate = 0.0005\n",
"\ttheta=4Nu= 0.0014\n",
"\tmin fragment length allows read overlaps/adapter sequences \n",
"\tcreating new barcode map\n",
".\n"
]
}
],
"source": [
"%%bash\n",
"## download the simulation program\n",
"wget -q http://www.dereneaton.com/downloads/simRRLs.py\n",
"\n",
"## simulate data with param settings described above\n",
"## (requires the Python package `Egglib`) <---- !! \n",
"python simRRLs.py 0.005 0 1000 1 50,300 pairddrad simpairddradsmerge"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Assembling the data set with _pyRAD_"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We first create an empty params.txt input file for the _pyRAD_ analysis. \n",
"The following command will create a template which we will fill in with all relevant parameter settings for the analysis."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\tnew params.txt file created\n"
]
}
],
"source": [
"%%bash\n",
"pyrad -n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"------------- \n",
"\n",
"Take a look at the default options. Each line designates a parameter, and contains a \"##\" symbol after which comments can be added, and which includes a description of the parameter. The affected step for each parameter is shown in parentheses. The first 14 parameters are required. Numbers 15-37 are optional."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"==** parameter inputs for pyRAD version 3.0.4 **======================== affected step ==\n",
"./ ## 1. Working directory (all)\n",
"./*.fastq.gz ## 2. Loc. of non-demultiplexed files (if not line 16) (s1)\n",
"./*.barcodes ## 3. Loc. of barcode file (if not line 16) (s1)\n",
"vsearch ## 4. command (or path) to call vsearch (or usearch) (s3,s6)\n",
"muscle ## 5. command (or path) to call muscle (s3,s7)\n",
"TGCAG ## 6. Restriction overhang (e.g., C|TGCAG -> TGCAG) (s1,s2)\n",
"2 ## 7. N processors (parallel) (all)\n",
"6 ## 8. Mindepth: min coverage for a cluster (s4,s5)\n",
"4 ## 9. NQual: max # sites with qual < 20 (or see line 20)(s2)\n",
".88 ## 10. Wclust: clustering threshold as a decimal (s3,s6)\n",
"rad ## 11. Datatype: rad,gbs,ddrad,pairgbs,pairddrad,merged (all)\n",
"4 ## 12. MinCov: min samples in a final locus (s7)\n",
"3 ## 13. MaxSH: max inds with shared hetero site (s7)\n",
"c88d6m4p3 ## 14. Prefix name for final output (no spaces) (s7)\n",
"==== optional params below this line =================================== affected step ==\n",
" ## 15.opt.: select subset (prefix* only selector) (s2-s7)\n",
" ## 16.opt.: add-on (outgroup) taxa (list or prefix*) (s6,s7)\n",
" ## 17.opt.: exclude taxa (list or prefix*) (s7)\n",
" ## 18.opt.: loc. of de-multiplexed data (s2)\n",
" ## 19.opt.: maxM: N mismatches in barcodes (def= 1) (s1)\n",
" ## 20.opt.: phred Qscore offset (def= 33) (s2)\n",
" ## 21.opt.: filter: def=0=NQual 1=NQual+adapters. 2=strict (s2)\n",
" ## 22.opt.: a priori E,H (def= 0.001,0.01, if not estimated) (s5)\n",
" ## 23.opt.: maxN: max Ns in a cons seq (def=5) (s5)\n",
" ## 24.opt.: maxH: max heterozyg. sites in cons seq (def=5) (s5)\n",
" ## 25.opt.: ploidy: max alleles in cons seq (def=2;see docs) (s4,s5)\n",
" ## 26.opt.: maxSNPs: (def=100). Paired (def=100,100) (s7)\n",
" ## 27.opt.: maxIndels: within-clust,across-clust (def. 3,99) (s3,s7)\n",
" ## 28.opt.: random number seed (def. 112233) (s3,s6,s7)\n",
" ## 29.opt.: trim overhang left,right on final loci, def(0,0) (s7)\n",
" ## 30.opt.: output formats: p,n,a,s,v,u,t,m,k,g,* (see docs) (s7)\n",
" ## 31.opt.: maj. base call at depth>x<mindepth (def.x=mindepth) (s5)\n",
" ## 32.opt.: keep trimmed reads (def=0). Enter min length. (s2)\n",
" ## 33.opt.: max stack size (int), def= max(500,mean+2*SD) (s3)\n",
" ## 34.opt.: minDerep: exclude dereps with <= N copies, def=1 (s3)\n",
" ## 35.opt.: use hierarchical clustering (def.=0, 1=yes) (s6)\n",
" ## 36.opt.: repeat masking (def.=1='dust' method, 0=no) (s3,s6)\n",
" ## 37.opt.: vsearch max threads per job (def.=6; see docs) (s3,s6)\n",
"==== optional: list group/clade assignments below this line (see docs) ==================\n"
]
}
],
"source": [
"%%bash\n",
"cat params.txt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"--------------- \n",
"\n",
"### Edit the params file"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"I will use the script below to substitute new values, but you should simply __use any text editor__ to make changes. For this analysis I made the following changes from the defaults: \n",
"\n",
"-------------------- \n",
"\n",
" 6. set the two restriction enzymes used to generate the ddRAD data\n",
" 10. lowered clustering threshold to .85\n",
" 11. set datatype to pairddrad\n",
" 14. changed the output name prefix\n",
" 19. mismatches for demulitiplexing set to 0, exact match.\n",
" 24. Raised maxH. Lower is better for filtering paralogs.\n",
" 30. added additional (all) output formats (e.g., nexus,SNPs,STRUCTURE)\n",
"\n",
"-------------------- \n",
" "
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"sed -i '/## 6. /c\\TGCAG,AATT ## 6. cutsites... ' ./params.txt\n",
"sed -i '/## 10. /c\\.85 ## 10. lowered clust thresh' ./params.txt\n",
"sed -i '/## 11. /c\\pairddrad ## 11. datatype... ' ./params.txt\n",
"sed -i '/## 14. /c\\merged ## 14. prefix name ' ./params.txt\n",
"sed -i '/## 19./c\\0 ## 19. errbarcode... ' ./params.txt\n",
"sed -i '/## 24./c\\10 ## 24. maxH... ' ./params.txt\n",
"sed -i '/## 30./c\\* ## 30. outformats... ' ./params.txt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"----------------- \n",
"\n",
"Let's take a look at the edited params.txt file\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"==** parameter inputs for pyRAD version 3.0.4 **======================== affected step ==\n",
"./ ## 1. Working directory (all)\n",
"./*.fastq.gz ## 2. Loc. of non-demultiplexed files (if not line 16) (s1)\n",
"./*.barcodes ## 3. Loc. of barcode file (if not line 16) (s1)\n",
"vsearch ## 4. command (or path) to call vsearch (or usearch) (s3,s6)\n",
"muscle ## 5. command (or path) to call muscle (s3,s7)\n",
"TGCAG,AATT ## 6. cutsites... \n",
"2 ## 7. N processors (parallel) (all)\n",
"6 ## 8. Mindepth: min coverage for a cluster (s4,s5)\n",
"4 ## 9. NQual: max # sites with qual < 20 (or see line 20)(s2)\n",
".85 ## 10. lowered clust thresh\n",
"pairddrad ## 11. datatype... \n",
"4 ## 12. MinCov: min samples in a final locus (s7)\n",
"3 ## 13. MaxSH: max inds with shared hetero site (s7)\n",
"merged ## 14. prefix name \n",
"==== optional params below this line =================================== affected step ==\n",
" ## 15.opt.: select subset (prefix* only selector) (s2-s7)\n",
" ## 16.opt.: add-on (outgroup) taxa (list or prefix*) (s6,s7)\n",
" ## 17.opt.: exclude taxa (list or prefix*) (s7)\n",
" ## 18.opt.: loc. of de-multiplexed data (s2)\n",
"0 ## 19. errbarcode... \n",
" ## 20.opt.: phred Qscore offset (def= 33) (s2)\n",
" ## 21.opt.: filter: def=0=NQual 1=NQual+adapters. 2=strict (s2)\n",
" ## 22.opt.: a priori E,H (def= 0.001,0.01, if not estimated) (s5)\n",
" ## 23.opt.: maxN: max Ns in a cons seq (def=5) (s5)\n",
"10 ## 24. maxH... \n",
" ## 25.opt.: ploidy: max alleles in cons seq (def=2;see docs) (s4,s5)\n",
" ## 26.opt.: maxSNPs: (def=100). Paired (def=100,100) (s7)\n",
" ## 27.opt.: maxIndels: within-clust,across-clust (def. 3,99) (s3,s7)\n",
" ## 28.opt.: random number seed (def. 112233) (s3,s6,s7)\n",
" ## 29.opt.: trim overhang left,right on final loci, def(0,0) (s7)\n",
"* ## 30. outformats... \n",
" ## 31.opt.: maj. base call at depth>x<mindepth (def.x=mindepth) (s5)\n",
" ## 32.opt.: keep trimmed reads (def=0). Enter min length. (s2)\n",
" ## 33.opt.: max stack size (int), def= max(500,mean+2*SD) (s3)\n",
" ## 34.opt.: minDerep: exclude dereps with <= N copies, def=1 (s3)\n",
" ## 35.opt.: use hierarchical clustering (def.=0, 1=yes) (s6)\n",
" ## 36.opt.: repeat masking (def.=1='dust' method, 0=no) (s3,s6)\n",
" ## 37.opt.: vsearch max threads per job (def.=6; see docs) (s3,s6)\n",
"==== optional: list group/clade assignments below this line (see docs) ==================\n"
]
}
],
"source": [
"%%bash\n",
"cat params.txt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 1: De-multiplexing the data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---------------- \n",
" \n",
"Four examples of acceptable input file name formats for paired-end data: \n",
"\n",
" 1. xxxx_R1_001.fastq xxxx_R2_001.fastq\n",
" 2. xxxx_R1_001.fastq.gz xxxx_R2_001.fastq.gz\n",
" 3. xxxx_R1_100.fq.gz xxxx_R2_100.fq.gz\n",
" 4. xxxx_R1_.fq xxxx_R2_.fq\n",
"\n",
"+ The file ending can be .fastq, .fq, or .gz. \n",
"+ There should be a unique name or number shared by each pair and the characters \\_R1\\_ and \\_R2\\_. \n",
"+ For every file name with \\_R1\\_ there should be a corresponding \\_R2\\_ file. \n",
"\n",
"If your data are _already_ demultiplexed skip step 1 and see step 2 below. \n",
"\n",
"----------------- \n"
]
},
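{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before running step 1 you can confirm that every \\_R1\\_ file has a matching \\_R2\\_ file; a minimal sketch over the naming schemes above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"## report any first-read file that lacks a matching second-read file\n",
"for r1 in *_R1_*.fastq.gz; \n",
"  do r2=${r1/_R1_/_R2_};\n",
"  if [ ! -f $r2 ]; then echo \"missing pair for $r1\"; fi;\n",
"done"
]
},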
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we run step 1 of the analysis by designating the params file with the -p flag, and the step with the -s flag. "
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"\n",
" ------------------------------------------------------------\n",
" pyRAD : RADseq for phylogenetics & introgression analyses\n",
" ------------------------------------------------------------\n",
"\n",
"\n",
"\tstep 1: sorting reads by barcode\n",
"\t ."
]
}
],
"source": [
"%%bash\n",
"pyrad -p params.txt -s 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can look at the stats output for this below:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"file \tNreads\tcut_found\tbar_matched\n",
"simpairddradsmerge_.fastq.gz\t240000\t240000\t240000\n",
"\n",
"\n",
"sample\ttrue_bar\tobs_bars\tN_obs\n",
"1C0 \tAAAAGA \tAAAAGA\t20000 \n",
"2G0 \tAAGTGA \tAAGTGA\t20000 \n",
"1D0 \tAGAATG \tAGAATG\t20000 \n",
"2H0 \tAGGGTG \tAGGGTG\t20000 \n",
"1A0 \tCATCAT \tCATCAT\t20000 \n",
"3I0 \tGAATGG \tGAATGG\t20000 \n",
"3K0 \tGAGTAA \tGAGTAA\t20000 \n",
"2E0 \tGGAGAG \tGGAGAG\t20000 \n",
"1B0 \tGTAGTG \tGTAGTG\t20000 \n",
"3L0 \tGTTGAA \tGTTGAA\t20000 \n",
"2F0 \tTGATTT \tTGATTT\t20000 \n",
"3J0 \tTTAATG \tTTAATG\t20000 \n",
"\n",
"nomatch \t_ \t0\n"
]
}
],
"source": [
"%%bash\n",
"cat stats/s1.sorting.txt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The de-multiplexed reads are written to a new file for each individual in a new directory created within your working directory called fastq/"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1A0_R1.fq.gz\n",
"1A0_R2.fq.gz\n",
"1B0_R1.fq.gz\n",
"1B0_R2.fq.gz\n",
"1C0_R1.fq.gz\n",
"1C0_R2.fq.gz\n",
"1D0_R1.fq.gz\n",
"1D0_R2.fq.gz\n",
"2E0_R1.fq.gz\n",
"2E0_R2.fq.gz\n",
"2F0_R1.fq.gz\n",
"2F0_R2.fq.gz\n",
"2G0_R1.fq.gz\n",
"2G0_R2.fq.gz\n",
"2H0_R1.fq.gz\n",
"2H0_R2.fq.gz\n",
"3I0_R1.fq.gz\n",
"3I0_R2.fq.gz\n",
"3J0_R1.fq.gz\n",
"3J0_R2.fq.gz\n",
"3K0_R1.fq.gz\n",
"3K0_R2.fq.gz\n",
"3L0_R1.fq.gz\n",
"3L0_R2.fq.gz\n"
]
}
],
"source": [
"%%bash \n",
"ls fastq/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An individual file will look like this:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"@lane1_fakedata0_R1_0 1:N:0:\n",
"TGCAGCATGACAATCTATGGACCACAGAGCGCGAAATCTGCTCTCGGACATCAACGTCTGAAGTTCGGGCCCGTAACGCT\n",
"+\n",
"BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\n",
"@lane1_fakedata0_R1_1 1:N:0:\n",
"TGCAGCATGACAATCTATGGACCACAGAGCGCGAAATCTGCTCTCGGACATCAACGTCTGAAGTTCGGGCCCGTAACGCT\n",
"+\n",
"BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\n",
"@lane1_fakedata0_R1_2 1:N:0:\n",
"TGCAGCATGACAATCTATGGACCACAGAGCGCGAAATCTGCTCTCGGACATCAACGTCTGAAGTTCGGGCCCGTAACGCT\n",
"+\n",
"BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\n"
]
}
],
"source": [
"%%bash\n",
"## FIRST READS file -- \n",
"## I show only the first 12 lines and 80 characters to print it clearly here.\n",
"less fastq/1A0_R1.fq.gz | head -n 12 | cut -c 1-80"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"@lane1_fakedata0_R2_0 1:N:0:\n",
"AATTGTTGGTTGTTTTACATGCAGGATATGAACTAGTGGCACTGATAGGATATTATCCCGTGCGAGCGCGTATACCGTGG\n",
"+\n",
"BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\n",
"@lane1_fakedata0_R2_1 1:N:0:\n",
"AATTGTTGGTTGTTTTACATGCAGGATATGAACTAGTGGCACTGATAGGATATTATCCCGTGCGAGCGCGTATACCGTGG\n",
"+\n",
"BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\n",
"@lane1_fakedata0_R2_2 1:N:0:\n",
"AATTGTTGGTTGTTTTACATGCAGGATATGAACTAGTGGCACTGATAGGATATTATCCCGTGCGAGCGCGTATACCGTGG\n",
"+\n",
"BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\n"
]
}
],
"source": [
"%%bash\n",
"## SECOND READS file\n",
"less fastq/1A0_R2.fq.gz | head -n 12 | cut -c 1-80"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"-------------------------\n",
"\n",
"The reads were sorted into a separate file for the first (R1) and second (R2) reads for each individual. \n",
"\n",
"__If your data were previously de-multiplexed__ you need the following things before step 2: \n",
"\n",
"+ your sorted file names should be formatted similar to above, but with sample names substituted for 1A0, 1A1, etc.\n",
"+ file names should include \"_\\_R1._\" in first read files, and \"_\\_R2._\" in second read files (note this is different from the format for non de-multiplexed data files).\n",
"+ the files can be gzipped (.gz) or not (.fq or .fastq). \n",
"+ the barcode should be removed (not on left side of first reads) \n",
"+ the restriction site should _not_ be removed, but if it is, enter a '@' symbol before the location of your sorted data.\n",
"\n",
"+ __Enter on line 18 of the params file the location of your sorted data.__\n",
"\n",
"-------------------- \n"
]
},
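{
"cell_type": "markdown",
"metadata": {},
"source": [
"If your sorted files use a different naming scheme, a renaming loop can bring them into the expected format. A minimal sketch, assuming hypothetical input names like sampleA.1.fq.gz and sampleA.2.fq.gz:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"## hypothetical: convert sampleA.1.fq.gz / sampleA.2.fq.gz\n",
"## into sampleA_R1.fq.gz / sampleA_R2.fq.gz\n",
"for ffile in *.1.fq.gz; \n",
"  do mv $ffile ${ffile/.1.fq.gz/_R1.fq.gz};\n",
"done\n",
"for rfile in *.2.fq.gz; \n",
"  do mv $rfile ${rfile/.2.fq.gz/_R2.fq.gz};\n",
"done"
]
},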
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Before Step 2: Merge/overlap filtering"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Due to the use of a common cutter (e.g., ecoRI), paired ddRAD data often contains fragments that are shorter than the sequence length such that the first and second reads overlap. It is usually beneficial to remove these from your data. To do so, you could run the following script in the program PEAR. If you think your data don't have this problem then skip this step (but most data have overlaps).\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"## you first need to un-archive the files\n",
"for gfile in fastq/*.gz; \n",
" do gunzip $gfile;\n",
"done"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"## then run PEAR on each data file\n",
"for gfile in fastq/*_R1.fq; \n",
" do pear -f $gfile \\\n",
" -r ${gfile/_R1.fq/_R2.fq} \\\n",
" -o ${gfile/_R1.fq/} \\\n",
" -n 33 \\\n",
" -j 4 >> pear.log 2>&1\n",
"done"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### take a peek at the log file"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ____ _____ _ ____ \n",
"| _ \\| ____| / \\ | _ \\\n",
"| |_) | _| / _ \\ | |_) |\n",
"| __/| |___ / ___ \\| _ <\n",
"|_| |_____/_/ \\_\\_| \\_\\\n",
"\n",
"PEAR v0.9.7 [February 12, 2015]\n",
"\n",
"Citation - PEAR: a fast and accurate Illumina Paired-End reAd mergeR\n",
"Zhang et al (2014) Bioinformatics 30(5): 614-620 | doi:10.1093/bioinformatics/btt593\n",
"\n",
"Forward reads file.................: fastq/3L0_R1.fq\n",
"Reverse reads file.................: fastq/3L0_R2.fq\n",
"PHRED..............................: 33\n",
"Using empirical frequencies........: YES\n",
"Statistical method.................: OES\n",
"Maximum assembly length............: 999999\n",
"Minimum assembly length............: 33\n",
"p-value............................: 0.010000\n",
"Quality score threshold (trimming).: 0\n",
"Minimum read size after trimming...: 1\n",
"Maximal ratio of uncalled bases....: 1.000000\n",
"Minimum overlap....................: 10\n",
"Scoring method.....................: Scaled score\n",
"Threads............................: 4\n",
"\n",
"Allocating memory..................: 200,000,000 bytes\n",
"Computing empirical frequencies....: DONE\n",
" A: 0.256721\n",
" C: 0.240800\n",
" G: 0.247794\n",
" T: 0.254686\n",
" 0 uncalled bases\n",
"Assemblying reads: 0%\r",
"Assemblying reads: 100%\n",
"\n",
"Assembled reads ...................: 9,981 / 20,000 (49.905%)\n",
"Discarded reads ...................: 0 / 20,000 (0.000%)\n",
"Not assembled reads ...............: 10,019 / 20,000 (50.095%)\n",
"Assembled reads file...............: fastq/3L0.assembled.fastq\n",
"Discarded reads file...............: fastq/3L0.discarded.fastq\n",
"Unassembled forward reads file.....: fastq/3L0.unassembled.forward.fastq\n",
"Unassembled reverse reads file.....: fastq/3L0.unassembled.reverse.fastq\n"
]
}
],
"source": [
"%%bash \n",
"tail -n 42 pear.log"
]
},
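{
"cell_type": "markdown",
"metadata": {},
"source": [
"The log only shows the last sample, so to tally the merge rate across all samples we can count the reads in each PEAR output file (each fastq record is four lines); a minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"## number of merged (assembled) reads per sample\n",
"for afile in fastq/*.assembled.fastq; \n",
"  do echo -n \"$afile  \";\n",
"  echo $(( $(wc -l < $afile) / 4 ));\n",
"done"
]
},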
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Important: Assembling our merged reads"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As expected, about 1/2 of reads are merged since we simulated fragment data in a size range of 50-300 bp. We now have a number of decisions to make. We could (1) analyze only the merged reads, (2) analyze only the non-merged reads, or (3) analyze both. If most of your data are merged, it may not be worth your time to bother with the non-merged data, or vice-versa. And combining them may just introduce more noise rather than more signal. \n",
"\n",
"We could combine the merged and non-merged reads early in the analysis, however, I recommend assembling the two separately (if you choose to keep both), and combining them at the end, if desired. This way you can easily check whether the two give conflicting signals, and if one dataset or the other appears messy or noisy. \n",
"To see how to assemble the non-merged reads see the [paired ddRAD non-merged tutorial](link). For step 2 you would enter the location of the \".unassembled.\\*\" reads below instead of the \".assembled.\\*\", as we do here. I show at the end of this tutorial how to combine the data sets. \n",
"\n",
"__In this tutorial I focus on how to assemble the merged data:__\n",
"\n",
"For this, we need to make two changes to the params file:\n",
"\n",
"+ __Change param 11 datatype from 'pairddrad' to 'ddrad'__ (we do not use the 'merged' data type for ddRAD data because they do not have the problem of needing to be reverse-complement clustered because the forward and reverse adapters ligate to different cutters).\n",
"+ __Set the location of our _pear_ output files for param 18__. This tells pyrad to use these files instead of simply selected all files in the fastq/ directory. \n"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"## set location of demultiplexed data that are 'pear' filtered\n",
"sed -i '/## 11./c\\ddrad ## 11. datatype ' ./params.txt\n",
"sed -i '/## 18./c\\fastq/*.assembled.* ## 18. demulti data loc ' ./params.txt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 2: Quality filtering"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we apply the quality filtering. \n",
"\n",
"+ We set the filter (param 21) to its default of 0, meaning that filtering is based only on quality scores of base calls. In this step, low quality sites are converted to Ns, and any locus with more than X number of Ns is discarded, where X is the number set on line 9 of the params file. We do not need to filter for Illumina adapters because PEAR has already done this. "
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"\n",
" ------------------------------------------------------------\n",
" pyRAD : RADseq for phylogenetics & introgression analyses\n",
" ------------------------------------------------------------\n",
"\n",
"\tsorted .fastq from fastq/*.assembled.* being used\n",
"\tstep 2: editing raw reads \n",
"\t............"
]
}
],
"source": [
"%%bash\n",
"pyrad -p params.txt -s 2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Statistics for the number of reads that passed filtering can be found in the stats/ directory. You can see that not all samples have the same exact number of reads. This is because there is some variation around how well reads are merged by _pear_ when they overlap by only a few bases. \n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"sample \tNreads\tpassed\tpassed.w.trim\tpassed.total\n",
"1A0.assembled\t9960\t9960\t0\t9960\n",
"1B0.assembled\t9970\t9970\t0\t9970\n",
"1C0.assembled\t9961\t9961\t0\t9961\n",
"1D0.assembled\t9960\t9960\t0\t9960\n",
"2E0.assembled\t9961\t9961\t0\t9961\n",
"2F0.assembled\t9962\t9962\t0\t9962\n",
"2G0.assembled\t9980\t9980\t0\t9980\n",
"2H0.assembled\t9960\t9960\t0\t9960\n",
"3I0.assembled\t9960\t9960\t0\t9960\n",
"3J0.assembled\t9960\t9960\t0\t9960\n",
"3K0.assembled\t9961\t9961\t0\t9961\n",
"3L0.assembled\t9981\t9981\t0\t9981\n",
"\n",
" Nreads = total number of reads for a sample\n",
" passed = retained reads that passed quality filtering at full length\n",
" passed.w.trim= retained reads that were trimmed due to detection of adapters\n",
" passed.total = total kept reads of sufficient length\n",
" note: you can set the option in params file to include trimmed reads of xx length. \n"
]
}
],
"source": [
"%%bash \n",
"cat stats/s2.rawedit.txt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Assembled (merged) reads\n",
"\n",
"The filtered data files are converted to fasta format and written to a directory called edits/. Our merged data set will have the name \".assembled\" attached to it. This is important to keep it separate from our un-merged data set. \n"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1A0.assembled.edit\n",
"1B0.assembled.edit\n",
"1C0.assembled.edit\n",
"1D0.assembled.edit\n",
"2E0.assembled.edit\n",
"2F0.assembled.edit\n",
"2G0.assembled.edit\n",
"2H0.assembled.edit\n",
"3I0.assembled.edit\n",
"3J0.assembled.edit\n",
"3K0.assembled.edit\n",
"3L0.assembled.edit\n"
]
}
],
"source": [
"%%bash\n",
"ls edits/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Take a look at the merged reads \n",
"The merged data should generally have the first cutter at the beggining of the read, and the second cutter at the end of the read, like below:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
">1A0.assembled_0_r1\n",
"TGCAGCTCAAGCCTCTTAGGGGCCAGCGGTGCTACATGCTCCGCGTGCACTGGGTTCCTCCTGTTACCGTTCTGGGTGCTGCTACTACGTCTAGCGCTTCTGAAAGGTGGATGCCGGTTTGACCAAATATGCTACTGCGAGTATATACCTTGTGTGAGAGAATT\n",
">1A0.assembled_1_r1\n",
"TGCAGCTCAAGCCTCTTAGGGGCCAGCGGTGCTACATGCTCCGCGTGCACTGGGTTCCTCCTGTTACCGTTCTGGGTGCTGCTACTACGTCTAGCGCTTCTGAAAGGTGGATGCCGGTTTGACCAAATATGCTACTGCGAGTATATACCTTGTGTGAGAGAATT\n",
">1A0.assembled_2_r1\n",
"TGCAGCTCAAGCCTCTTAGGGGCCAGCGGTGCTACATGCTCCGCGTGCACTGGGTTCCTCCTGTTACCGTTCTGGGTGCTGCTACTACGTCTAGCGCTTCTGAAAGGTGGATGCCGGTTTGACCAAATATGCTACTGCGAGTATATACCTTGTGTGAGAGAATT\n",
">1A0.assembled_3_r1\n",
"TGCAGCTCAAGCCTCTTAGGGGCCAGCGGTGCTACATGCTCCGCGTGCACTGGGTTCCTCCTGTTACCGTTCTGGGTGCTGCTACTACGTCTAGCGCTTCTGAAAGGTGGATGCCGGTTTGACCAAATATGCTACTGCGAGTATATACCTTGTGTGAGAGAATT\n"
]
}
],
"source": [
"%%bash\n",
"head -n 8 edits/1A0.assembled.edit"
]
},
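{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also verify this across all reads in a sample by counting how many sequences are flanked by the two overhangs from our params file; a minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"## count sequences that start with TGCAG and end with AATT\n",
"grep -v '>' edits/1A0.assembled.edit | grep -c '^TGCAG.*AATT$'"
]
},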
{
"cell_type": "markdown",
"metadata": {},
"source": [
"-------------- \n",
"\n",
"## Step 3: Within-sample clustering"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Unlike for paired-GBS or ez-RAD data we do not need to perform reverse complement clustering for paired or single ddRAD data. There are some options for different ways to cluster paired-ddRAD data in the un-merged tutorial, but for merged data it is pretty straight forward, since the data act like single-end data. With our datatype set to 'ddrad' we simply run step 3 like below:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"\n",
" ------------------------------------------------------------\n",
" pyRAD : RADseq for phylogenetics & introgression analyses\n",
" ------------------------------------------------------------\n",
"\n",
"\n",
"\tde-replicating files for clustering...\n",
"\n",
"\tstep 3: within-sample clustering of 12 samples at \n",
"\t '.85' similarity. Running 2 parallel jobs\n",
"\t \twith up to 6 threads per job. If needed, \n",
"\t\tadjust to avoid CPU and MEM limits\n",
"\n",
"\tsample 3L0 finished, 500 loci\n",
"\tsample 2G0 finished, 499 loci\n",
"\tsample 1B0 finished, 499 loci\n",
"\tsample 2F0 finished, 499 loci\n",
"\tsample 1C0 finished, 499 loci\n",
"\tsample 3K0 finished, 499 loci\n",
"\tsample 2E0 finished, 499 loci\n",
"\tsample 2H0 finished, 498 loci\n",
"\tsample 1A0 finished, 498 loci\n",
"\tsample 3I0 finished, 498 loci\n",
"\tsample 1D0 finished, 498 loci\n",
"\tsample 3J0 finished, 498 loci\n"
]
}
],
"source": [
"%%bash\n",
"pyrad -p params.txt -s 3 "
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"taxa\ttotal\tdpt.me\tdpt.sd\td>5.tot\td>5.me\td>5.sd\tbadpairs\n",
"1A0.assembled\t498\t20.0\t0.0\t498\t20.0\t0.0\t0\n",
"1B0.assembled\t499\t19.98\t0.447\t499\t19.98\t0.447\t0\n",
"1C0.assembled\t499\t19.962\t0.85\t498\t20.0\t0.0\t0\n",
"1D0.assembled\t498\t20.0\t0.0\t498\t20.0\t0.0\t0\n",
"2E0.assembled\t499\t19.962\t0.85\t498\t20.0\t0.0\t0\n",
"2F0.assembled\t499\t19.964\t0.805\t498\t20.0\t0.0\t0\n",
"2G0.assembled\t499\t19.98\t0.447\t499\t19.98\t0.447\t0\n",
"2H0.assembled\t498\t20.0\t0.0\t498\t20.0\t0.0\t0\n",
"3I0.assembled\t498\t20.0\t0.0\t498\t20.0\t0.0\t0\n",
"3J0.assembled\t498\t20.0\t0.0\t498\t20.0\t0.0\t0\n",
"3K0.assembled\t499\t19.962\t0.85\t498\t20.0\t0.0\t0\n",
"3L0.assembled\t500\t19.962\t0.849\t499\t20.0\t0.0\t0\n",
"\n",
" ## total = total number of clusters, including singletons\n",
" ## dpt.me = mean depth of clusters\n",
" ## dpt.sd = standard deviation of cluster depth\n",
" ## >N.tot = number of clusters with depth greater than N\n",
" ## >N.me = mean depth of clusters with depth greater than N\n",
" ## >N.sd = standard deviation of cluster depth for clusters with depth greater than N\n",
" ## badpairs = mismatched 1st & 2nd reads (only for paired ddRAD data)\n",
"\n"
]
}
],
"source": [
"%%bash\n",
"head -n 23 stats/s3.clusters.txt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Steps 4 & 5: Consensus base calling"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We next make consensus base calls for each cluster within each individual. First we estimate the error rate and heterozygosity within each sample:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"\n",
" ------------------------------------------------------------\n",
" pyRAD : RADseq for phylogenetics & introgression analyses\n",
" ------------------------------------------------------------\n",
"\n",
"\n",
"\tstep 4: estimating error rate and heterozygosity\n",
"\t............"
]
}
],
"source": [
"%%bash\n",
"pyrad -p params.txt -s 4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Calling consensus sequences applies a number of filters, as listed in the params file and the general tutorial. "
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"\n",
" ------------------------------------------------------------\n",
" pyRAD : RADseq for phylogenetics & introgression analyses\n",
" ------------------------------------------------------------\n",
"\n",
"\n",
"\tstep 5: creating consensus seqs for 12 samples, using H=0.00129 E=0.00049\n",
"\t............"
]
}
],
"source": [
"%%bash\n",
"pyrad -p params.txt -s 5"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"taxon \tnloci\tf1loci\tf2loci\tnsites\tnpoly\tpoly\n",
"1A0.assembled\t498\t498\t498\t55339\t61\t0.0011023\n",
"2E0.assembled\t499\t498\t497\t55187\t83\t0.001504\n",
"3I0.assembled\t498\t498\t498\t55339\t83\t0.0014998\n",
"3L0.assembled\t500\t499\t499\t55508\t71\t0.0012791\n",
"2F0.assembled\t499\t498\t498\t55340\t88\t0.0015902\n",
"2H0.assembled\t498\t498\t498\t55339\t62\t0.0011204\n",
"3J0.assembled\t498\t498\t498\t55339\t73\t0.0013191\n",
"1D0.assembled\t498\t498\t498\t55340\t74\t0.0013372\n",
"2G0.assembled\t499\t499\t499\t55509\t83\t0.0014953\n",
"1C0.assembled\t499\t498\t498\t55339\t73\t0.0013191\n",
"1B0.assembled\t499\t499\t498\t55349\t70\t0.0012647\n",
"3K0.assembled\t499\t498\t498\t55339\t72\t0.0013011\n",
"\n",
" ## nloci = number of loci\n",
" ## f1loci = number of loci with >N depth coverage\n",
" ## f2loci = number of loci with >N depth and passed paralog filter\n",
" ## nsites = number of sites across f loci\n",
" ## npoly = number of polymorphic sites in nsites\n",
" ## poly = frequency of polymorphic sites\n"
]
}
],
"source": [
"%%bash\n",
"cat stats/s5.consens.txt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 6: Across-sample clustering"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This step will sometimes take the longest, depending on the size of your data set. Here it will go very quickly. "
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"vsearch v1.0.7_linux_x86_64, 7.5GB RAM, 4 cores\n",
"https://github.com/torognes/vsearch\n",
"\n",
"\n",
"\tfinished clustering\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"\n",
" ------------------------------------------------------------\n",
" pyRAD : RADseq for phylogenetics & introgression analyses\n",
" ------------------------------------------------------------\n",
"\n",
"\n",
"\tstep 6: clustering across 12 samples at '.85' similarity \n",
"\n",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 0% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 0% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 1% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 1% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 2% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 2% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 3% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 3% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 4% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 4% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 5% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 5% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 6% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 6% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 7% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 7% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 8% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 8% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 9% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 9% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 10% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 10% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 11% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 11% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 12% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 12% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 13% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 13% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 14% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 14% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 15% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 15% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 16% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 16% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 17% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 17% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 18% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 18% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 19% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 19% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 20% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 20% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 21% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 22% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 22% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 23% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 23% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 24% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 24% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 25% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 25% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 26% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 26% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 27% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 27% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 28% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 28% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 29% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 29% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 30% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 30% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 31% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 31% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 32% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 32% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 33% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 33% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 34% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 34% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 35% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 35% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 36% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 36% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 37% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 37% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 38% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 38% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 39% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 39% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 40% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 40% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 41% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 41% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 42% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 42% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 43% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 43% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 44% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 44% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 45% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 45% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 46% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 46% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 47% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 47% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 48% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 49% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 49% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 50% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 50% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 51% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 51% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 52% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 52% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 53% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 53% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 54% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 54% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 55% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 55% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 56% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 56% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 57% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 57% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 58% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 58% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 59% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 59% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 60% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 60% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 61% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 61% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 62% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 62% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 63% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 63% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 64% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 64% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 65% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 65% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 66% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 66% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 67% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 67% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 68% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 68% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 69% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 69% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 70% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 70% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 71% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 71% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 72% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 72% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 73% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 73% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 74% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 74% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 75% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 75% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 76% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 76% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 77% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 77% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 78% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 79% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 79% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 80% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 80% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 81% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 81% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 82% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 82% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 83% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 83% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 84% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 84% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 85% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 85% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 86% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 86% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 87% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 87% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 88% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 88% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 89% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 89% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 90% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 90% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 91% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 91% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 92% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 92% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 93% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 93% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 94% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 94% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 95% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 95% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 96% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 96% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 97% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 97% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 98% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 98% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 99% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 99% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 100% \r",
"Reading file /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/clust.85/cat.haplos_ 100%\n",
"724037 nt in 5977 seqs, min 59, max 184, avg 121\n",
"Indexing sequences 0% \r",
"Indexing sequences 0% \r",
"Indexing sequences 0% \r",
"Indexing sequences 1% \r",
"Indexing sequences 1% \r",
"Indexing sequences 2% \r",
"Indexing sequences 2% \r",
"Indexing sequences 3% \r",
"Indexing sequences 3% \r",
"Indexing sequences 4% \r",
"Indexing sequences 4% \r",
"Indexing sequences 5% \r",
"Indexing sequences 5% \r",
"Indexing sequences 6% \r",
"Indexing sequences 6% \r",
"Indexing sequences 7% \r",
"Indexing sequences 7% \r",
"Indexing sequences 8% \r",
"Indexing sequences 8% \r",
"Indexing sequences 9% \r",
"Indexing sequences 9% \r",
"Indexing sequences 10% \r",
"Indexing sequences 10% \r",
"Indexing sequences 11% \r",
"Indexing sequences 11% \r",
"Indexing sequences 12% \r",
"Indexing sequences 12% \r",
"Indexing sequences 13% \r",
"Indexing sequences 13% \r",
"Indexing sequences 14% \r",
"Indexing sequences 14% \r",
"Indexing sequences 15% \r",
"Indexing sequences 15% \r",
"Indexing sequences 16% \r",
"Indexing sequences 16% \r",
"Indexing sequences 16% \r",
"Indexing sequences 17% \r",
"Indexing sequences 17% \r",
"Indexing sequences 18% \r",
"Indexing sequences 18% \r",
"Indexing sequences 19% \r",
"Indexing sequences 19% \r",
"Indexing sequences 20% \r",
"Indexing sequences 20% \r",
"Indexing sequences 21% \r",
"Indexing sequences 21% \r",
"Indexing sequences 22% \r",
"Indexing sequences 22% \r",
"Indexing sequences 23% \r",
"Indexing sequences 23% \r",
"Indexing sequences 24% \r",
"Indexing sequences 24% \r",
"Indexing sequences 25% \r",
"Indexing sequences 25% \r",
"Indexing sequences 26% \r",
"Indexing sequences 26% \r",
"Indexing sequences 27% \r",
"Indexing sequences 27% \r",
"Indexing sequences 28% \r",
"Indexing sequences 28% \r",
"Indexing sequences 29% \r",
"Indexing sequences 29% \r",
"Indexing sequences 30% \r",
"Indexing sequences 30% \r",
"Indexing sequences 31% \r",
"Indexing sequences 31% \r",
"Indexing sequences 32% \r",
"Indexing sequences 32% \r",
"Indexing sequences 33% \r",
"Indexing sequences 33% \r",
"Indexing sequences 33% \r",
"Indexing sequences 34% \r",
"Indexing sequences 34% \r",
"Indexing sequences 35% \r",
"Indexing sequences 35% \r",
"Indexing sequences 36% \r",
"Indexing sequences 36% \r",
"Indexing sequences 37% \r",
"Indexing sequences 37% \r",
"Indexing sequences 38% \r",
"Indexing sequences 38% \r",
"Indexing sequences 39% \r",
"Indexing sequences 39% \r",
"Indexing sequences 40% \r",
"Indexing sequences 40% \r",
"Indexing sequences 41% \r",
"Indexing sequences 41% \r",
"Indexing sequences 42% \r",
"Indexing sequences 42% \r",
"Indexing sequences 43% \r",
"Indexing sequences 43% \r",
"Indexing sequences 44% \r",
"Indexing sequences 44% \r",
"Indexing sequences 45% \r",
"Indexing sequences 45% \r",
"Indexing sequences 46% \r",
"Indexing sequences 46% \r",
"Indexing sequences 47% \r",
"Indexing sequences 47% \r",
"Indexing sequences 48% \r",
"Indexing sequences 48% \r",
"Indexing sequences 49% \r",
"Indexing sequences 49% \r",
"Indexing sequences 49% \r",
"Indexing sequences 50% \r",
"Indexing sequences 50% \r",
"Indexing sequences 51% \r",
"Indexing sequences 51% \r",
"Indexing sequences 52% \r",
"Indexing sequences 52% \r",
"Indexing sequences 53% \r",
"Indexing sequences 53% \r",
"Indexing sequences 54% \r",
"Indexing sequences 54% \r",
"Indexing sequences 55% \r",
"Indexing sequences 55% \r",
"Indexing sequences 56% \r",
"Indexing sequences 56% \r",
"Indexing sequences 57% \r",
"Indexing sequences 57% \r",
"Indexing sequences 58% \r",
"Indexing sequences 58% \r",
"Indexing sequences 59% \r",
"Indexing sequences 59% \r",
"Indexing sequences 60% \r",
"Indexing sequences 60% \r",
"Indexing sequences 61% \r",
"Indexing sequences 61% \r",
"Indexing sequences 62% \r",
"Indexing sequences 62% \r",
"Indexing sequences 63% \r",
"Indexing sequences 63% \r",
"Indexing sequences 64% \r",
"Indexing sequences 64% \r",
"Indexing sequences 65% \r",
"Indexing sequences 65% \r",
"Indexing sequences 66% \r",
"Indexing sequences 66% \r",
"Indexing sequences 66% \r",
"Indexing sequences 67% \r",
"Indexing sequences 67% \r",
"Indexing sequences 68% \r",
"Indexing sequences 68% \r",
"Indexing sequences 69% \r",
"Indexing sequences 69% \r",
"Indexing sequences 70% \r",
"Indexing sequences 70% \r",
"Indexing sequences 71% \r",
"Indexing sequences 71% \r",
"Indexing sequences 72% \r",
"Indexing sequences 72% \r",
"Indexing sequences 73% \r",
"Indexing sequences 73% \r",
"Indexing sequences 74% \r",
"Indexing sequences 74% \r",
"Indexing sequences 75% \r",
"Indexing sequences 75% \r",
"Indexing sequences 76% \r",
"Indexing sequences 76% \r",
"Indexing sequences 77% \r",
"Indexing sequences 77% \r",
"Indexing sequences 78% \r",
"Indexing sequences 78% \r",
"Indexing sequences 79% \r",
"Indexing sequences 79% \r",
"Indexing sequences 80% \r",
"Indexing sequences 80% \r",
"Indexing sequences 81% \r",
"Indexing sequences 81% \r",
"Indexing sequences 82% \r",
"Indexing sequences 82% \r",
"Indexing sequences 82% \r",
"Indexing sequences 83% \r",
"Indexing sequences 83% \r",
"Indexing sequences 84% \r",
"Indexing sequences 84% \r",
"Indexing sequences 85% \r",
"Indexing sequences 85% \r",
"Indexing sequences 86% \r",
"Indexing sequences 86% \r",
"Indexing sequences 87% \r",
"Indexing sequences 87% \r",
"Indexing sequences 88% \r",
"Indexing sequences 88% \r",
"Indexing sequences 89% \r",
"Indexing sequences 89% \r",
"Indexing sequences 90% \r",
"Indexing sequences 90% \r",
"Indexing sequences 91% \r",
"Indexing sequences 91% \r",
"Indexing sequences 92% \r",
"Indexing sequences 92% \r",
"Indexing sequences 93% \r",
"Indexing sequences 93% \r",
"Indexing sequences 94% \r",
"Indexing sequences 94% \r",
"Indexing sequences 95% \r",
"Indexing sequences 95% \r",
"Indexing sequences 96% \r",
"Indexing sequences 96% \r",
"Indexing sequences 97% \r",
"Indexing sequences 97% \r",
"Indexing sequences 98% \r",
"Indexing sequences 98% \r",
"Indexing sequences 98% \r",
"Indexing sequences 99% \r",
"Indexing sequences 99% \r",
"Indexing sequences 100% \r",
"Indexing sequences 100%\n",
"Counting unique k-mers 0% \r",
"Counting unique k-mers 0% \r",
"Counting unique k-mers 0% \r",
"Counting unique k-mers 1% \r",
"Counting unique k-mers 1% \r",
"Counting unique k-mers 2% \r",
"Counting unique k-mers 2% \r",
"Counting unique k-mers 3% \r",
"Counting unique k-mers 3% \r",
"Counting unique k-mers 4% \r",
"Counting unique k-mers 4% \r",
"Counting unique k-mers 5% \r",
"Counting unique k-mers 5% \r",
"Counting unique k-mers 6% \r",
"Counting unique k-mers 6% \r",
"Counting unique k-mers 7% \r",
"Counting unique k-mers 7% \r",
"Counting unique k-mers 8% \r",
"Counting unique k-mers 8% \r",
"Counting unique k-mers 9% \r",
"Counting unique k-mers 9% \r",
"Counting unique k-mers 10% \r",
"Counting unique k-mers 10% \r",
"Counting unique k-mers 11% \r",
"Counting unique k-mers 11% \r",
"Counting unique k-mers 12% \r",
"Counting unique k-mers 12% \r",
"Counting unique k-mers 13% \r",
"Counting unique k-mers 13% \r",
"Counting unique k-mers 14% \r",
"Counting unique k-mers 14% \r",
"Counting unique k-mers 15% \r",
"Counting unique k-mers 15% \r",
"Counting unique k-mers 16% \r",
"Counting unique k-mers 16% \r",
"Counting unique k-mers 16% \r",
"Counting unique k-mers 17% \r",
"Counting unique k-mers 17% \r",
"Counting unique k-mers 18% \r",
"Counting unique k-mers 18% \r",
"Counting unique k-mers 19% \r",
"Counting unique k-mers 19% \r",
"Counting unique k-mers 20% \r",
"Counting unique k-mers 20% \r",
"Counting unique k-mers 21% \r",
"Counting unique k-mers 21% \r",
"Counting unique k-mers 22% \r",
"Counting unique k-mers 22% \r",
"Counting unique k-mers 23% \r",
"Counting unique k-mers 23% \r",
"Counting unique k-mers 24% \r",
"Counting unique k-mers 24% \r",
"Counting unique k-mers 25% \r",
"Counting unique k-mers 25% \r",
"Counting unique k-mers 26% \r",
"Counting unique k-mers 26% \r",
"Counting unique k-mers 27% \r",
"Counting unique k-mers 27% \r",
"Counting unique k-mers 28% \r",
"Counting unique k-mers 28% \r",
"Counting unique k-mers 29% \r",
"Counting unique k-mers 29% \r",
"Counting unique k-mers 30% \r",
"Counting unique k-mers 30% \r",
"Counting unique k-mers 31% \r",
"Counting unique k-mers 31% \r",
"Counting unique k-mers 32% \r",
"Counting unique k-mers 32% \r",
"Counting unique k-mers 33% \r",
"Counting unique k-mers 33% \r",
"Counting unique k-mers 33% \r",
"Counting unique k-mers 34% \r",
"Counting unique k-mers 34% \r",
"Counting unique k-mers 35% \r",
"Counting unique k-mers 35% \r",
"Counting unique k-mers 36% \r",
"Counting unique k-mers 36% \r",
"Counting unique k-mers 37% \r",
"Counting unique k-mers 37% \r",
"Counting unique k-mers 38% \r",
"Counting unique k-mers 38% \r",
"Counting unique k-mers 39% \r",
"Counting unique k-mers 39% \r",
"Counting unique k-mers 40% \r",
"Counting unique k-mers 40% \r",
"Counting unique k-mers 41% \r",
"Counting unique k-mers 41% \r",
"Counting unique k-mers 42% \r",
"Counting unique k-mers 42% \r",
"Counting unique k-mers 43% \r",
"Counting unique k-mers 43% \r",
"Counting unique k-mers 44% \r",
"Counting unique k-mers 44% \r",
"Counting unique k-mers 45% \r",
"Counting unique k-mers 45% \r",
"Counting unique k-mers 46% \r",
"Counting unique k-mers 46% \r",
"Counting unique k-mers 47% \r",
"Counting unique k-mers 47% \r",
"Counting unique k-mers 48% \r",
"Counting unique k-mers 48% \r",
"Counting unique k-mers 49% \r",
"Counting unique k-mers 49% \r",
"Counting unique k-mers 49% \r",
"Counting unique k-mers 50% \r",
"Counting unique k-mers 50% \r",
"Counting unique k-mers 51% \r",
"Counting unique k-mers 51% \r",
"Counting unique k-mers 52% \r",
"Counting unique k-mers 52% \r",
"Counting unique k-mers 53% \r",
"Counting unique k-mers 53% \r",
"Counting unique k-mers 54% \r",
"Counting unique k-mers 54% \r",
"Counting unique k-mers 55% \r",
"Counting unique k-mers 55% \r",
"Counting unique k-mers 56% \r",
"Counting unique k-mers 56% \r",
"Counting unique k-mers 57% \r",
"Counting unique k-mers 57% \r",
"Counting unique k-mers 58% \r",
"Counting unique k-mers 58% \r",
"Counting unique k-mers 59% \r",
"Counting unique k-mers 59% \r",
"Counting unique k-mers 60% \r",
"Counting unique k-mers 60% \r",
"Counting unique k-mers 61% \r",
"Counting unique k-mers 61% \r",
"Counting unique k-mers 62% \r",
"Counting unique k-mers 62% \r",
"Counting unique k-mers 63% \r",
"Counting unique k-mers 63% \r",
"Counting unique k-mers 64% \r",
"Counting unique k-mers 64% \r",
"Counting unique k-mers 65% \r",
"Counting unique k-mers 65% \r",
"Counting unique k-mers 66% \r",
"Counting unique k-mers 66% \r",
"Counting unique k-mers 66% \r",
"Counting unique k-mers 67% \r",
"Counting unique k-mers 67% \r",
"Counting unique k-mers 68% \r",
"Counting unique k-mers 68% \r",
"Counting unique k-mers 69% \r",
"Counting unique k-mers 69% \r",
"Counting unique k-mers 70% \r",
"Counting unique k-mers 70% \r",
"Counting unique k-mers 71% \r",
"Counting unique k-mers 71% \r",
"Counting unique k-mers 72% \r",
"Counting unique k-mers 72% \r",
"Counting unique k-mers 73% \r",
"Counting unique k-mers 73% \r",
"Counting unique k-mers 74% \r",
"Counting unique k-mers 74% \r",
"Counting unique k-mers 75% \r",
"Counting unique k-mers 75% \r",
"Counting unique k-mers 76% \r",
"Counting unique k-mers 76% \r",
"Counting unique k-mers 77% \r",
"Counting unique k-mers 77% \r",
"Counting unique k-mers 78% \r",
"Counting unique k-mers 78% \r",
"Counting unique k-mers 79% \r",
"Counting unique k-mers 79% \r",
"Counting unique k-mers 80% \r",
"Counting unique k-mers 80% \r",
"Counting unique k-mers 81% \r",
"Counting unique k-mers 81% \r",
"Counting unique k-mers 82% \r",
"Counting unique k-mers 82% \r",
"Counting unique k-mers 82% \r",
"Counting unique k-mers 83% \r",
"Counting unique k-mers 83% \r",
"Counting unique k-mers 84% \r",
"Counting unique k-mers 84% \r",
"Counting unique k-mers 85% \r",
"Counting unique k-mers 85% \r",
"Counting unique k-mers 86% \r",
"Counting unique k-mers 86% \r",
"Counting unique k-mers 87% \r",
"Counting unique k-mers 87% \r",
"Counting unique k-mers 88% \r",
"Counting unique k-mers 88% \r",
"Counting unique k-mers 89% \r",
"Counting unique k-mers 89% \r",
"Counting unique k-mers 90% \r",
"Counting unique k-mers 90% \r",
"Counting unique k-mers 91% \r",
"Counting unique k-mers 91% \r",
"Counting unique k-mers 92% \r",
"Counting unique k-mers 92% \r",
"Counting unique k-mers 93% \r",
"Counting unique k-mers 93% \r",
"Counting unique k-mers 94% \r",
"Counting unique k-mers 94% \r",
"Counting unique k-mers 95% \r",
"Counting unique k-mers 95% \r",
"Counting unique k-mers 96% \r",
"Counting unique k-mers 96% \r",
"Counting unique k-mers 97% \r",
"Counting unique k-mers 97% \r",
"Counting unique k-mers 98% \r",
"Counting unique k-mers 98% \r",
"Counting unique k-mers 98% \r",
"Counting unique k-mers 99% \r",
"Counting unique k-mers 99% \r",
"Counting unique k-mers 100% \r",
"Counting unique k-mers 100%\n",
"Clustering 0% \r",
"Clustering 0% \r",
"Clustering 1% \r",
"Clustering 1% \r",
"Clustering 2% \r",
"Clustering 3% \r",
"Clustering 3% \r",
"Clustering 4% \r",
"Clustering 4% \r",
"Clustering 5% \r",
"Clustering 6% \r",
"Clustering 6% \r",
"Clustering 7% \r",
"Clustering 7% \r",
"Clustering 8% \r",
"Clustering 9% \r",
"Clustering 9% \r",
"Clustering 10% \r",
"Clustering 10% \r",
"Clustering 11% \r",
"Clustering 11% \r",
"Clustering 12% \r",
"Clustering 13% \r",
"Clustering 13% \r",
"Clustering 14% \r",
"Clustering 14% \r",
"Clustering 15% \r",
"Clustering 15% \r",
"Clustering 16% \r",
"Clustering 17% \r",
"Clustering 17% \r",
"Clustering 18% \r",
"Clustering 18% \r",
"Clustering 19% \r",
"Clustering 19% \r",
"Clustering 20% \r",
"Clustering 20% \r",
"Clustering 21% \r",
"Clustering 22% \r",
"Clustering 22% \r",
"Clustering 23% \r",
"Clustering 23% \r",
"Clustering 24% \r",
"Clustering 24% \r",
"Clustering 25% \r",
"Clustering 25% \r",
"Clustering 26% \r",
"Clustering 26% \r",
"Clustering 27% \r",
"Clustering 27% \r",
"Clustering 28% \r",
"Clustering 29% \r",
"Clustering 29% \r",
"Clustering 30% \r",
"Clustering 30% \r",
"Clustering 31% \r",
"Clustering 31% \r",
"Clustering 32% \r",
"Clustering 32% \r",
"Clustering 33% \r",
"Clustering 33% \r",
"Clustering 34% \r",
"Clustering 34% \r",
"Clustering 35% \r",
"Clustering 35% \r",
"Clustering 36% \r",
"Clustering 37% \r",
"Clustering 37% \r",
"Clustering 38% \r",
"Clustering 38% \r",
"Clustering 39% \r",
"Clustering 40% \r",
"Clustering 40% \r",
"Clustering 41% \r",
"Clustering 41% \r",
"Clustering 42% \r",
"Clustering 43% \r",
"Clustering 43% \r",
"Clustering 44% \r",
"Clustering 44% \r",
"Clustering 45% \r",
"Clustering 46% \r",
"Clustering 46% \r",
"Clustering 47% \r",
"Clustering 47% \r",
"Clustering 48% \r",
"Clustering 48% \r",
"Clustering 49% \r",
"Clustering 50% \r",
"Clustering 50% \r",
"Clustering 51% \r",
"Clustering 51% \r",
"Clustering 52% \r",
"Clustering 52% \r",
"Clustering 53% \r",
"Clustering 53% \r",
"Clustering 54% \r",
"Clustering 55% \r",
"Clustering 55% \r",
"Clustering 56% \r",
"Clustering 56% \r",
"Clustering 57% \r",
"Clustering 57% \r",
"Clustering 58% \r",
"Clustering 58% \r",
"Clustering 59% \r",
"Clustering 59% \r",
"Clustering 60% \r",
"Clustering 60% \r",
"Clustering 61% \r",
"Clustering 61% \r",
"Clustering 62% \r",
"Clustering 62% \r",
"Clustering 63% \r",
"Clustering 63% \r",
"Clustering 64% \r",
"Clustering 64% \r",
"Clustering 65% \r",
"Clustering 66% \r",
"Clustering 66% \r",
"Clustering 67% \r",
"Clustering 67% \r",
"Clustering 68% \r",
"Clustering 69% \r",
"Clustering 69% \r",
"Clustering 70% \r",
"Clustering 70% \r",
"Clustering 71% \r",
"Clustering 71% \r",
"Clustering 72% \r",
"Clustering 72% \r",
"Clustering 73% \r",
"Clustering 74% \r",
"Clustering 74% \r",
"Clustering 75% \r",
"Clustering 75% \r",
"Clustering 76% \r",
"Clustering 76% \r",
"Clustering 77% \r",
"Clustering 77% \r",
"Clustering 78% \r",
"Clustering 78% \r",
"Clustering 79% \r",
"Clustering 79% \r",
"Clustering 80% \r",
"Clustering 80% \r",
"Clustering 81% \r",
"Clustering 81% \r",
"Clustering 82% \r",
"Clustering 83% \r",
"Clustering 83% \r",
"Clustering 84% \r",
"Clustering 84% \r",
"Clustering 85% \r",
"Clustering 85% \r",
"Clustering 86% \r",
"Clustering 86% \r",
"Clustering 87% \r",
"Clustering 87% \r",
"Clustering 88% \r",
"Clustering 88% \r",
"Clustering 89% \r",
"Clustering 90% \r",
"Clustering 90% \r",
"Clustering 91% \r",
"Clustering 91% \r",
"Clustering 92% \r",
"Clustering 92% \r",
"Clustering 93% \r",
"Clustering 93% \r",
"Clustering 94% \r",
"Clustering 94% \r",
"Clustering 95% \r",
"Clustering 95% \r",
"Clustering 96% \r",
"Clustering 96% \r",
"Clustering 97% \r",
"Clustering 97% \r",
"Clustering 98% \r",
"Clustering 99% \r",
"Clustering 99% \r",
"Clustering 100% \r",
"Clustering 100%\n",
"Writing clusters 0% \r",
"Writing clusters 0% \r",
"Writing clusters 0% \r",
"Writing clusters 1% \r",
"Writing clusters 1% \r",
"Writing clusters 2% \r",
"Writing clusters 2% \r",
"Writing clusters 3% \r",
"Writing clusters 3% \r",
"Writing clusters 4% \r",
"Writing clusters 4% \r",
"Writing clusters 5% \r",
"Writing clusters 5% \r",
"Writing clusters 6% \r",
"Writing clusters 6% \r",
"Writing clusters 7% \r",
"Writing clusters 7% \r",
"Writing clusters 8% \r",
"Writing clusters 8% \r",
"Writing clusters 9% \r",
"Writing clusters 9% \r",
"Writing clusters 10% \r",
"Writing clusters 10% \r",
"Writing clusters 11% \r",
"Writing clusters 11% \r",
"Writing clusters 12% \r",
"Writing clusters 12% \r",
"Writing clusters 13% \r",
"Writing clusters 13% \r",
"Writing clusters 14% \r",
"Writing clusters 14% \r",
"Writing clusters 15% \r",
"Writing clusters 15% \r",
"Writing clusters 16% \r",
"Writing clusters 16% \r",
"Writing clusters 16% \r",
"Writing clusters 17% \r",
"Writing clusters 17% \r",
"Writing clusters 18% \r",
"Writing clusters 18% \r",
"Writing clusters 19% \r",
"Writing clusters 19% \r",
"Writing clusters 20% \r",
"Writing clusters 20% \r",
"Writing clusters 21% \r",
"Writing clusters 21% \r",
"Writing clusters 22% \r",
"Writing clusters 22% \r",
"Writing clusters 23% \r",
"Writing clusters 23% \r",
"Writing clusters 24% \r",
"Writing clusters 24% \r",
"Writing clusters 25% \r",
"Writing clusters 25% \r",
"Writing clusters 26% \r",
"Writing clusters 26% \r",
"Writing clusters 27% \r",
"Writing clusters 27% \r",
"Writing clusters 28% \r",
"Writing clusters 28% \r",
"Writing clusters 29% \r",
"Writing clusters 29% \r",
"Writing clusters 30% \r",
"Writing clusters 30% \r",
"Writing clusters 31% \r",
"Writing clusters 31% \r",
"Writing clusters 32% \r",
"Writing clusters 32% \r",
"Writing clusters 33% \r",
"Writing clusters 33% \r",
"Writing clusters 33% \r",
"Writing clusters 34% \r",
"Writing clusters 34% \r",
"Writing clusters 35% \r",
"Writing clusters 35% \r",
"Writing clusters 36% \r",
"Writing clusters 36% \r",
"Writing clusters 37% \r",
"Writing clusters 37% \r",
"Writing clusters 38% \r",
"Writing clusters 38% \r",
"Writing clusters 39% \r",
"Writing clusters 39% \r",
"Writing clusters 40% \r",
"Writing clusters 40% \r",
"Writing clusters 41% \r",
"Writing clusters 41% \r",
"Writing clusters 42% \r",
"Writing clusters 42% \r",
"Writing clusters 43% \r",
"Writing clusters 43% \r",
"Writing clusters 44% \r",
"Writing clusters 44% \r",
"Writing clusters 45% \r",
"Writing clusters 45% \r",
"Writing clusters 46% \r",
"Writing clusters 46% \r",
"Writing clusters 47% \r",
"Writing clusters 47% \r",
"Writing clusters 48% \r",
"Writing clusters 48% \r",
"Writing clusters 49% \r",
"Writing clusters 49% \r",
"Writing clusters 49% \r",
"Writing clusters 50% \r",
"Writing clusters 50% \r",
"Writing clusters 51% \r",
"Writing clusters 51% \r",
"Writing clusters 52% \r",
"Writing clusters 52% \r",
"Writing clusters 53% \r",
"Writing clusters 53% \r",
"Writing clusters 54% \r",
"Writing clusters 54% \r",
"Writing clusters 55% \r",
"Writing clusters 55% \r",
"Writing clusters 56% \r",
"Writing clusters 56% \r",
"Writing clusters 57% \r",
"Writing clusters 57% \r",
"Writing clusters 58% \r",
"Writing clusters 58% \r",
"Writing clusters 59% \r",
"Writing clusters 59% \r",
"Writing clusters 60% \r",
"Writing clusters 60% \r",
"Writing clusters 61% \r",
"Writing clusters 61% \r",
"Writing clusters 62% \r",
"Writing clusters 62% \r",
"Writing clusters 63% \r",
"Writing clusters 63% \r",
"Writing clusters 64% \r",
"Writing clusters 64% \r",
"Writing clusters 65% \r",
"Writing clusters 65% \r",
"Writing clusters 66% \r",
"Writing clusters 66% \r",
"Writing clusters 66% \r",
"Writing clusters 67% \r",
"Writing clusters 67% \r",
"Writing clusters 68% \r",
"Writing clusters 68% \r",
"Writing clusters 69% \r",
"Writing clusters 69% \r",
"Writing clusters 70% \r",
"Writing clusters 70% \r",
"Writing clusters 71% \r",
"Writing clusters 71% \r",
"Writing clusters 72% \r",
"Writing clusters 72% \r",
"Writing clusters 73% \r",
"Writing clusters 73% \r",
"Writing clusters 74% \r",
"Writing clusters 74% \r",
"Writing clusters 75% \r",
"Writing clusters 75% \r",
"Writing clusters 76% \r",
"Writing clusters 76% \r",
"Writing clusters 77% \r",
"Writing clusters 77% \r",
"Writing clusters 78% \r",
"Writing clusters 78% \r",
"Writing clusters 79% \r",
"Writing clusters 79% \r",
"Writing clusters 80% \r",
"Writing clusters 80% \r",
"Writing clusters 81% \r",
"Writing clusters 81% \r",
"Writing clusters 82% \r",
"Writing clusters 82% \r",
"Writing clusters 82% \r",
"Writing clusters 83% \r",
"Writing clusters 83% \r",
"Writing clusters 84% \r",
"Writing clusters 84% \r",
"Writing clusters 85% \r",
"Writing clusters 85% \r",
"Writing clusters 86% \r",
"Writing clusters 86% \r",
"Writing clusters 87% \r",
"Writing clusters 87% \r",
"Writing clusters 88% \r",
"Writing clusters 88% \r",
"Writing clusters 89% \r",
"Writing clusters 89% \r",
"Writing clusters 90% \r",
"Writing clusters 90% \r",
"Writing clusters 91% \r",
"Writing clusters 91% \r",
"Writing clusters 92% \r",
"Writing clusters 92% \r",
"Writing clusters 93% \r",
"Writing clusters 93% \r",
"Writing clusters 94% \r",
"Writing clusters 94% \r",
"Writing clusters 95% \r",
"Writing clusters 95% \r",
"Writing clusters 96% \r",
"Writing clusters 96% \r",
"Writing clusters 97% \r",
"Writing clusters 97% \r",
"Writing clusters 98% \r",
"Writing clusters 98% \r",
"Writing clusters 98% \r",
"Writing clusters 99% \r",
"Writing clusters 99% \r",
"Writing clusters 100% \r",
"Writing clusters 100%\n",
"Clusters: 501 Size min 1, max 12, avg 11.9\n",
"Singletons: 3, 0.1% of seqs, 0.6% of clusters\n"
]
}
],
"source": [
"%%bash\n",
"pyrad -p params.txt -s 6"
]
},
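{
"cell_type": "markdown",
"metadata": {},
"source": [
"The log above ends with 501 across-sample clusters averaging ~11.9 sequences each, i.e. nearly all 12 samples are present in most clusters. As a quick sanity check you can list the across-sample clustering directory (a minimal sketch, not run in the original tutorial; file names inside clust.85/ can vary by pyRAD version):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"## list the across-sample clustering results (sketch; not executed here)\n",
"ls -lh clust.85/"
]
},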
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 7: Statistics and output files"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alignment of loci, statistics, and output of various formatted data files."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\tingroup 1A0.assembled,1B0.assembled,1C0.assembled,1D0.assembled,2E0.assembled,2F0.assembled,2G0.assembled,2H0.assembled,3I0.assembled,3J0.assembled,3K0.assembled,3L0.assembled\n",
"\taddon \n",
"\texclude \n",
"\t\n",
"\tfinal stats written to:\n",
"\t /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/stats/merged.stats\n",
"\toutput files being written to:\n",
"\t /home/deren/Dropbox/Public/PyRAD_TUTORIALS/tutorial_pairddRAD/outfiles/ directory\n",
"\n",
"\twriting nexus file\n",
"\twriting phylip file\n",
"\t + writing full SNPs file\n",
"\t + writing unlinked SNPs file\n",
"\t + writing STRUCTURE file\n",
"\t + writing geno file\n",
"\t ** must enter group/clade assignments for treemix output \n",
"\twriting vcf file\n",
"\twriting alleles file\n",
"\t ** must enter group/clade assignments for migrate-n output \n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"\n",
" ------------------------------------------------------------\n",
" pyRAD : RADseq for phylogenetics & introgression analyses\n",
" ------------------------------------------------------------\n",
"\n",
"..."
]
}
],
"source": [
"%%bash\n",
"pyrad -p params.txt -s 7"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Stats for this run. In this case ~490 loci were shared across all 12 samples."
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"498 ## loci with > minsp containing data\n",
"498 ## loci with > minsp containing data & paralogs removed\n",
"498 ## loci with > minsp containing data & paralogs removed & final filtering\n",
"\n",
"## number of loci recovered in final data set for each taxon.\n",
"taxon\tnloci\n",
"1A0.assembled\t498\n",
"1B0.assembled\t497\n",
"1C0.assembled\t498\n",
"1D0.assembled\t498\n",
"2E0.assembled\t497\n",
"2F0.assembled\t498\n",
"2G0.assembled\t498\n",
"2H0.assembled\t498\n",
"3I0.assembled\t498\n",
"3J0.assembled\t498\n",
"3K0.assembled\t498\n",
"3L0.assembled\t498\n",
"\n",
"\n",
"## nloci = number of loci with data for exactly ntaxa\n",
"## ntotal = number of loci for which at least ntaxa have data\n",
"ntaxa\tnloci\tsaved\tntotal\n",
"1\t-\n",
"2\t-\t\t-\n",
"3\t-\t\t-\n",
"4\t0\t*\t498\n",
"5\t0\t*\t498\n",
"6\t0\t*\t498\n",
"7\t0\t*\t498\n",
"8\t0\t*\t498\n",
"9\t0\t*\t498\n",
"10\t0\t*\t498\n",
"11\t2\t*\t498\n",
"12\t496\t*\t496\n",
"\n",
"\n",
"## nvar = number of loci containing n variable sites (pis+autapomorphies).\n",
"## sumvar = sum of variable sites (SNPs).\n",
"## pis = number of loci containing n parsimony informative sites.\n",
"## sumpis = sum of parsimony informative sites.\n",
"\tnvar\tsumvar\tPIS\tsumPIS\n",
"0\t9\t0\t107\t0\n",
"1\t21\t21\t150\t150\n",
"2\t53\t127\t125\t400\n",
"3\t78\t361\t63\t589\n",
"4\t69\t637\t33\t721\n",
"5\t82\t1047\t13\t786\n",
"6\t56\t1383\t3\t804\n",
"7\t45\t1698\t2\t818\n",
"8\t35\t1978\t2\t834\n",
"9\t21\t2167\t0\t834\n",
"10\t11\t2277\t0\t834\n",
"11\t9\t2376\t0\t834\n",
"12\t6\t2448\t0\t834\n",
"13\t2\t2474\t0\t834\n",
"14\t0\t2474\t0\t834\n",
"15\t1\t2489\t0\t834\n",
"total var= 2489\n",
"total pis= 834\n",
"sampled unlinked SNPs= 489\n",
"sampled unlinked bi-allelic SNPs= 465\n"
]
}
],
"source": [
"%%bash\n",
"cat stats/merged.stats"
]
},
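{
"cell_type": "markdown",
"metadata": {},
"source": [
"The stats file is plain text, so individual tables are easy to pull out with standard shell tools. For example, the per-taxon locus counts can be extracted like this (a minimal sketch, assuming the \"taxon\" header shown above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"## print from the 'taxon' header line through the next blank line (sketch)\n",
"awk '/^taxon/,/^$/' stats/merged.stats"
]
},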
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Output files"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"merged.alleles\n",
"merged.excluded_loci\n",
"merged.geno\n",
"merged.loci\n",
"merged.nex\n",
"merged.phy\n",
"merged.snps\n",
"merged.str\n",
"merged.unlinked_snps\n",
"merged.vcf\n"
]
}
],
"source": [
"%%bash\n",
"ls outfiles/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Stats files"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"merged.stats\n",
"Pi_E_estimate.txt\n",
"s1.sorting.txt\n",
"s2.rawedit.txt\n",
"s3.clusters.txt\n",
"s5.consens.txt\n"
]
}
],
"source": [
"%%bash\n",
"ls stats/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Take a look at the data"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
">1A0.assembled GGCTGCGTAAGAATGTCGACCTGACGACAAAGCTGCCCCCGCCCTAATATTTACCCCTATTA\n",
">1B0.assembled GGCTGCGTAAGAATGTCGACCTGACGACAAAGCTGCCCCCGCCCTAATATTTACCCCTATTA\n",
">1C0.assembled GGCTGCGTAAGAATGTCGACCTGACGACAAAGCTGCCCCCGCCCTAATATTTACCCCTATTA\n",
">1D0.assembled GGGTGCGTAAGAATGTCGACCTGACGACAAAGCTGCCCCCGCCCTAATATTTACCCCTATTA\n",
">2E0.assembled GGCTGCGTAAGAATGTCGACCTGACGACAAAGCTGCCCCCGCCCGAATATTTACCCCTATTA\n",
">2F0.assembled GGCTGCGTAAGAATGTCGACCTGACGACAAAGCTGCCCCCGCCCTAATATTTACCCCTATTA\n",
">2G0.assembled GGCTGCGTAAGAATGTCGACCTGACGACAAAGCTGCCCCCGCCCTAATATTTACCCCTATTA\n",
">2H0.assembled GGCTGCGTAAGAATGTCGACCTGACGACAAAGCTGCCCCCGCCCTAATATTTACCCCTATTA\n",
">3I0.assembled GGCTGCGTAAGAATGTCGACCTGACGACAAAGCTGCCCCCGCCCTAATATTTACCCCTATTA\n",
">3J0.assembled GGCTGCGTAAGAATGTCGACCTGACGACAAAGCTGCCCCCGCCCTAATATTTACCCCTATTA\n",
">3K0.assembled GGCTGCGTAAGAATGTCGACCTGACGACAAAGCTGCCCCCGCCCTAATATTTACCCCTATTA\n",
">3L0.assembled GGCTGCGTAAGAATGTCGACCTGACGACAAAGCTGCCCCCGCCCTAATATTTACCCCTATTA\n",
"// - - \n",
">1A0.assembled GCATGAGCGAAAAATCTTACGCTTAAGGAGTTCGATCTTTTTTCAGGGGCAGTTAGCCACTG\n",
">1B0.assembled GCATGAGCGAAAAATCTTACGCTTAAGGAGTTCGATCTTTTTTCAGGGGCAGTTAGCCACTG\n",
">1C0.assembled GCATGAGCGAAAAATCTTACGCTTAAGGAGTTCGATCTTTTTTCAGGGGCAGTTAGCCACTG\n",
">1D0.assembled GCGTGAGCGAAAAATCTTACGCTTAAGGAGTTCGATCTTTTTTCAGGGGCAGTTAGCCACTG\n",
">2E0.assembled GCATGAGCGAAAAATCTTACGCTTAAGGAGTTCGATCTTTWTTCAGGGGCAGTTAGCCACTG\n",
">2F0.assembled GCATGAGCGAAAAATCTTAMGCTTAAGGAGTTCGATCTTTTTTCAGGGGCAGTTAGCCACTG\n",
">2G0.assembled GCATGAGCGAAAAATCTTACGCTTAAGGAGTTCGATCTTTTTTCAGGGGCAGTTAGCCACTG\n",
">2H0.assembled GCATGAGCGAAAAATCTTACGCTTAAGGAGTTCGATCTTTTTTCAGGGGCAGTTAGCCACTG\n",
">3I0.assembled GCATGAGCGAAAAATCTTACGCTTAAGGAGTTCGATCTTTTTTCAGGGGCAGTTATCCACTG\n",
">3J0.assembled GCATGAGCGAAAAATCTTACGCTTAAGGAGTTCGATCTTTTTTCAGGGGCAGTTATCCACTG\n",
">3K0.assembled GCATGAGCGAAAAATCTTACGCTTAAGGAGTTCGATCTTTTTTCAGGGGCAGTTATCCACTG\n",
">3L0.assembled GCATGAGCGAAAAATCTTACGCTTAAGGAGTTCGATCTTTTTTCAGGGGCAGTTATCCACTG\n",
"// - - - * \n",
">1A0.assembled TCATCCGACCCGCGCCTCCATGTACCATGCCTGTATTTGGTACAAACTACGGTGAAGCCCGC\n",
">1B0.assembled TCATCCGACCCGCGCCTCCATGTACCATGCCTGTATTTGGTACAAACTACGGTGAAGCCCGC\n",
">1C0.assembled TCATCCGACCCGCGCCTCCATGTACCATGCCTGTATTTGGTACAAACTACGGTGAAGCCCGC\n",
">1D0.assembled TCATCCGACCCGCGCCTCCATGTACCATGCCTGTATTTGGTACAAACTACGGTGAAGCCCGC\n",
">2E0.assembled TCATCCGACCCGCGCCTCCATGTACCATGCCTGTATTTGGTACAAACTACGGTGAAGCCCGC\n",
">2F0.assembled TCATCCGACCCGCGCCTCCATGTACCATGCCTGTATTTGGTACAAACTACGGTGAAGCCCGC\n",
">2G0.assembled TCATCCGACCCGCGCCTCCATGTACCATGCCTGTATTTGGTACAAACTACGGTGAAGCCCGC\n",
">2H0.assembled TCATCCGACCCGCGCCTCCATGTACCATGCCTGTATTTGGTACAAACTACGGTGAAGCCCGC\n",
">3I0.assembled TCATCCGACCCGCGCCTCCATGTACCATGCCTGTATTTGGTACAAACTACGGTGAAGCCCGC\n",
">3J0.assembled TCATCCGACCCGCGCCTCCATGTACCATGCCTGTATTTGGTACAAACTACGGTGAAGCCCGC\n",
">3K0.assembled TCATCCGACCCGCGCCTCCATGTACCATGCCTGTATTTGGTACAAACTACGGTGAAGCCCGC\n",
">3L0.assembled TCATCCGACCCGCGCCTCCATGTACCATGCCTGTATTTGGTACAAACTMCGGTGAAGCCCRC\n",
"// - - \n"
]
}
],
"source": [
"%%bash\n",
"head -n 39 outfiles/merged.loci | cut -b 1-80"
]
},
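{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each locus in the .loci file ends with a \"//\" separator line (visible above), so counting those lines gives the total number of loci. A minimal sketch, not run as part of the original tutorial:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%%bash\n",
"## count locus separator lines in the .loci output (sketch)\n",
"grep -c \"^//\" outfiles/merged.loci"
]
},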
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## What to do with the non-merged reads"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Assemble them as described in the pairddrad non-merge tutorial [link](link)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Combining the two data sets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"(1) Concatenate the .loci output files from the merged and non-merged analyses to a tempfile."
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"%%bash\n",
"#cat outfiles/merged.loci outfiles/nonmerged.loci > outfiles/totaldata.loci"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"(2) Remove \".assembled.\" and \".unassembled\" from the names in the file."
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"%%bash\n",
"#sed -i s/.unassembled.//g outfiles/totaldata.loci\n",
"#sed -i s/.assembled.//g outfiles/totaldata.loci"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"(3) Set the output prefix (param 14) to the name of your concatenated data file. And ask for additional output formats."
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"%%bash\n",
"#sed -i '/## 14. /c\\totaldata ## 14. prefix name ' ./params.txt\n",
"#sed -i '/## 30. /c\\* ## 30. additional formats ' ./params.txt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"(4) Run step 7 to create additional output file formats using the concatenated .loci file"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"%%bash\n",
"#pyrad -p params.txt -s 7"
]
}
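,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once step 7 finishes on the concatenated file, the combined outputs appear in outfiles/ under the new prefix. A minimal verification sketch (commented out like the steps above, since the non-merged assembly was not run in this notebook):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"%%bash\n",
"## list the combined outputs written with the totaldata prefix (sketch)\n",
"#ls outfiles/totaldata.*"
]
}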
],
"metadata": {
"kernelspec": {
"display_name": "IPython (Python 2)",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 0
}