You can download it and unzip it.
Then, open your terminal (you can search for "terminal" at the upper right corner).
Go to the Preferences:
and import the color scheme for Terminal from the folder osx-terminal.app-colors-solarized, which is inside the unzipped solarized folder. I prefer Solarized Dark ansi.terminal, and set it as Default.
$ curl -L https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh | sh
You will have to install git first for the above command to work.
Now, when you fire up your terminal, it looks much prettier! (There are many other themes for oh-my-zsh; I find the default is good. You can change it by modifying the .zshrc file in your home directory.)
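For example, the theme is set by the ZSH_THEME variable in ~/.zshrc (agnoster below is just one of the bundled themes; pick whichever you like):

# in ~/.zshrc
ZSH_THEME="agnoster"

# then reload the configuration
source ~/.zshrc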
After running a microarray or RNA-seq analysis, one usually performs a Gene Set Enrichment Analysis (GSEA). There are many tools for this. One of the most commonly used is GSEA, developed at the Broad Institute.
It requires four data files to be loaded:
1. Expression dataset in res, gct, pcl or txt format
2. Phenotype labels in cls format
3. Gene sets in gmx or gmt format
4. Chip annotations
My first impression was: oh my, why are there so many different formats? Indeed, after immersing myself in the computational biology field for a while, I find that most of my time is spent on data formatting. That is consistent with many others' experience.
Well, for this post, I will specifically show you how to reformat the gene expression file output by affy (for microarray) into gct format using awk. For RNA-seq data, you can do the same with DESeq2 and edgeR outputs (using normalized counts).
Let's look at the expression file output by affy:
# R code
library(affy)
## read in the data
Data<- ReadAffy()
## RMA normalization and get the eset (expressionSet) object
eset<- rma(Data)
e<- exprs(eset)
write.table(e, "raw_expression.txt", quote=FALSE, sep="\t")
The file we have:
The required file format:
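For reference, a gct file starts with a version line and a dimensions line, followed by a header row with NAME, Description and the sample names. Sketched below with placeholders (columns are tab-separated):

#1.2
<number of probes>	<number of samples>
NAME	Description	<sample 1>	<sample 2>	...
<probe id>	na	<value>	<value>	...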
We see that in the raw file the first column contains the probe names and the other columns hold the expression values for the different samples. The first problem is that the header line is shifted by one field: the first column needs a header, "NAME". In addition, we need to add the two header lines required by the gct format, and insert a dummy Description column as the second column. We will fix it step by step:
compare:
cat raw_expression.txt | head
cat raw_expression.txt | awk -F"\t" '{if(NR==1) $1="NAME"FS$1}1' OFS="\t" | head
$1 is the first field of each line.
$1="NAME"FS$1 prepends "NAME" and a field separator (FS, set to a tab) to the first field of the header line (NR==1), so the header gains the missing NAME column.
The trailing 1 is always TRUE, so awk prints every line.
Then, add a dummy "na" as the second column:
awk -F"\t" '{$1=$1FS"na"}1' OFS="\t"
How many probes are in the file?

echo "$(cat raw_expression.txt | wc -l)-1" | bc
22277

bc is a command-line calculator.

How many samples are in the file?

cat raw_expression.txt | awk '{print NF; exit}'
14

There are 14 samples: after reading the first line and printing the number of columns, awk exits.
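Putting the pieces together, here is a sketch (not from the original post) that writes a complete gct file; it assumes the raw header line lacks the probe-name field, as described above:

# number of probes (lines minus the header) and number of samples (fields in the header line)
nrows=$(echo "$(cat raw_expression.txt | wc -l)-1" | bc)
ncols=$(cat raw_expression.txt | awk '{print NF; exit}')

# gct layout: version line, dimensions line, then the reformatted expression table
{
  echo "#1.2"
  printf "%s\t%s\n" "$nrows" "$ncols"
  cat raw_expression.txt | awk -F"\t" 'NR==1{$1="NAME"FS"Description"FS$1} NR>1{$1=$1FS"na"}1' OFS="\t"
} > expression.gct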
I was reading a thread on Stack Overflow and found the post very interesting. I will go through the problem and the awk solution. Again, awk is awesome! Note:
I do not want to use other blogging platforms for now, so just bear with it; you can go here to see the clean IPython notebook. GitHub now renders IPython notebooks.
I created some dummy files.
file_a is a tab-delimited bed file with 6 columns:
In [1]:
cat file_a.bed
chr1 123 aa b c d
chr1 234 a b c d
chr1 345 aa b c d
chr1 456 a b c d
file_b is the file that contains additional information, which we want to add to file_a:
In [2]:
cat file_b.bed
xxxx abcd chr1 123 aa c d e
yyyy defg chr1 345 aa e f g
We want to annotate file_a based on the fact that columns 3, 4 and 5 in file_b are the same as columns 1, 2 and 3 in file_a.
To do this, we are going to use an awk associative array; see a link here.
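As a quick illustration (not part of the original example), an associative array in awk is indexed by strings rather than integer positions; here we count how many times each chromosome appears in file_a.bed:

cat file_a.bed | awk '{count[$1]++} END{for (k in count) print k, count[k]}'
chr1 4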
Let me execute the awk one-liner first and then explain what's going on here:

awk 'NR==FNR{a[$3,$4,$5]=$1OFS$2;next}{$6=a[$1,$2,$3];print}' OFS="\t" file_b.bed file_a.bed
chr1 123 aa b c xxxx abcd
chr1 234 a b c
chr1 345 aa b c yyyy defg
chr1 456 a b c
We annotated file_a using file_b; that is, we added the first two columns of file_b to the matching lines of file_a.
There are several things happening here:
We see two built-in variables in awk here: NR and FNR. NR is the number of the record (line) currently being processed. When awk reads multiple files, NR gives the record number across all the input files, while FNR gives the record number within each input file. See a link here for all the built-in variables in awk.
Let's demonstrate the difference between NR and FNR:
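For instance (a minimal sketch, not the original notebook cell), printing FILENAME and NR for every line of the two files gives:

awk '{print FILENAME, NR}' file_a.bed file_b.bed
file_a.bed 1
file_a.bed 2
file_a.bed 3
file_a.bed 4
file_b.bed 5
file_b.bed 6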
FILENAME is another awk built-in variable that holds the name of the current input file.
There are 4 lines in file_a and 2 lines in file_b, and NR keeps incrementing across both files.
compare with FNR:
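The same sketch with FNR instead of NR:

awk '{print FILENAME, FNR}' file_a.bed file_b.bed
file_a.bed 1
file_a.bed 2
file_a.bed 3
file_a.bed 4
file_b.bed 1
file_b.bed 2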
Now, awk prints the line numbers with respect to each file.
In the awk code, we read file_b first. NR==FNR means: when NR equals FNR (which is true only while reading the first file, file_b), do the following: {a[$3,$4,$5]=$1OFS$2;next}.
We create an associative array named a, using columns 3, 4 and 5 of file_b as keys and columns 1 and 2 ($1"\t"$2) as values; OFS="\t" is set on the command line. next tells awk to skip to the next input line rather than execute the following { } code block.
When awk reads the second file (file_a.bed), NR==FNR is no longer true, so awk executes the second { } code block: {$6=a[$1,$2,$3];print}.
We look up the associative array a (built from file_b.bed) using the first three columns of file_a.bed as keys, assign the looked-up value to column 6, and print the whole line.
Conclusion: awk is very powerful for text wrangling. Once you get used to the syntax, you can do fairly complicated formatting in an awk one-liner. I strongly recommend learning it.
I use the command line a lot. It is awesome for data processing, text formatting and even exploratory data analysis.
Last week, one of my colleagues complained that she had forgotten how she obtained and processed her data. With a future "ME" in mind, one needs to document extensively where, when and how the data were downloaded and processed, and record the versions of the tools used in the analysis.
Although there are many ways to make command-line tasks reproducible, such as using Drake or GNU Make, it is still not as straightforward as using IPython Notebooks for python or R Markdown files for R.
Luckily, I learned from Jeroen Janssens, who wrote the "Data Science at the Command Line" book, that there is a bash_kernel for the IPython notebook, and I gave it a try.
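Installation is straightforward (a sketch, assuming the bash_kernel package on PyPI; check its documentation for the details of your setup):

pip install bash_kernel
python -m bash_kernel.install

After that, Bash shows up as a kernel choice when you create a new notebook.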
see a screenshot of the notebook:
Essentially, I copied the .ipynb file (it is a JSON file), pasted it into a gist, and inserted the gist link into the nbviewer website http://nbviewer.ipython.org/
With a bash notebook, one can document Linux commands in real time and make one's research more reproducible!