Creative Commons License
This blog by Tommy Tang is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


Tuesday, January 2, 2018

cancer gene census copy number

Set up

library(knitr)
library(here)
## here() starts at /Users/mtang1/projects/mixing_histology_lung_cancer
root.dir<- here()
opts_knit$set(root.dir = root.dir)

read in the data

I am going to check the COSMIC database on cancer genes. Specifically, I want to know which cancer-related genes are found amplified and which are deleted.
library(tidyverse)
## Loading tidyverse: ggplot2
## Loading tidyverse: tibble
## Loading tidyverse: tidyr
## Loading tidyverse: readr
## Loading tidyverse: purrr
## Loading tidyverse: dplyr
## Conflicts with tidy packages ----------------------------------------------
## filter(): dplyr, stats
## lag():    dplyr, stats
library(janitor)
library(stringr)
cancer_gene_census<- read_csv("data/COSMIC_Cancer_gene_Census/cancer_gene_census.csv", col_names = T)
## Parsed with column specification:
## cols(
##   .default = col_character(),
##   `Entrez GeneId` = col_integer(),
##   Tier = col_integer()
## )
## See spec(...) for full column specifications.

tidy the data with janitor::clean_names() and tidyr::unnest()

The column names contain spaces, and in many columns multiple values are packed into one string separated by `,`.
## dplyr after 0.7.0 uses pull() to get one column out as a vector; I was using .$
# https://stackoverflow.com/questions/27149306/dplyrselect-one-column-and-output-as-vector

#cancer_gene_census %>% pull(`Gene Symbol`) %>% unique()
#cancer_gene_census %>% distinct(`Gene Symbol`)

## extract genes that are amplified or deleted in cancers. In the `mutation types` column, search for A (amplification) and D (large deletion).

## use janitor::clean_names() to clean the column names with space.
cancer_gene_census %>% 
        clean_names() %>%
        mutate(mutation_type =strsplit(as.character(mutation_types), ",")) %>% 
        unnest(mutation_type) %>% tabyl(mutation_type)
##    mutation_type   n      percent valid_percent
## 1              N   1 0.0008257638  0.0008264463
## 2              A   8 0.0066061107  0.0066115702
## 3              D   8 0.0066061107  0.0066115702
## 4              F 139 0.1147811726  0.1148760331
## 5            Mis  68 0.0561519405  0.0561983471
## 6              N 147 0.1213872832  0.1214876033
## 7              O  37 0.0305532618  0.0305785124
## 8              S  76 0.0627580512  0.0628099174
## 9              T  19 0.0156895128  0.0157024793
## 10             A  18 0.0148637490  0.0148760331
## 11             D  35 0.0289017341  0.0289256198
## 12             F  32 0.0264244426  0.0264462810
## 13        F; Mis   1 0.0008257638  0.0008264463
## 14             M   3 0.0024772915  0.0024793388
## 15           Mis 240 0.1981833196  0.1983471074
## 16        Mis. N   1 0.0008257638  0.0008264463
## 17             N  24 0.0198183320  0.0198347107
## 18             O   7 0.0057803468  0.0057851240
## 19  Promoter Mis   1 0.0008257638  0.0008264463
## 20             S   2 0.0016515277  0.0016528926
## 21             T 343 0.2832369942  0.2834710744
## 22          <NA>   1 0.0008257638            NA
It turns out that the single mutation_types column is messier than you might think: there are multiple entries of A and D in different rows because of leading spaces. Spaces in the column are the devil.

trim the spaces with stringr::str_trim()

# trim the white space
cancer_gene_census %>% 
        clean_names() %>%
        mutate(mutation_type =strsplit(as.character(mutation_types), ",")) %>% 
        unnest(mutation_type) %>% 
        mutate(mutation_type = str_trim(mutation_type)) %>%
        filter(mutation_type == "A" | mutation_type == "D") %>% tabyl(mutation_type)
##   mutation_type  n   percent
## 1             A 26 0.3768116
## 2             D 43 0.6231884
cancer_gene_census %>% 
        clean_names() %>%
        mutate(mutation_type =strsplit(as.character(mutation_types), ",")) %>% 
        unnest(mutation_type) %>% 
        mutate(mutation_type = str_trim(mutation_type)) %>%
        filter(mutation_type == "A" | mutation_type == "D") %>% count(mutation_type)
## # A tibble: 2 x 2
##   mutation_type     n
##           <chr> <int>
## 1             A    26
## 2             D    43
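As an aside, the strsplit() + unnest() pattern used above can be written more compactly with tidyr::separate_rows(), which splits a delimited column into rows in one step. A minimal sketch on a made-up tibble (the column names mimic the census file; the rows are invented):

```r
library(tidyverse)
library(janitor)
library(stringr)

# toy data mimicking the census file (made-up rows)
toy <- tibble(
        `Gene Symbol`    = c("GENE1", "GENE2"),
        `Mutation Types` = c("A, Mis", "D")
)

toy %>%
        clean_names() %>%
        separate_rows(mutation_types, sep = ",") %>%
        mutate(mutation_types = str_trim(mutation_types)) %>%
        count(mutation_types)
# one row each for A, D and Mis, all with n = 1
```

separate_rows() could also absorb the whitespace with a regex separator like sep = ",\\s*", but keeping str_trim() as its own step makes the cleaning explicit.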

Sanity check

According to the website http://cancer.sanger.ac.uk/cosmic/census?genome=37#cl_sub_tables there should be 40 deletions and 24 amplifications, while I am getting 43 and 26, respectively.
cancer_gene_census %>% 
        clean_names() %>%
        mutate(mutation_type =strsplit(as.character(mutation_types), ",")) %>% 
        unnest(mutation_type) %>% 
        mutate(mutation_type = str_trim(mutation_type)) %>%
        filter(mutation_type == "A") 
## # A tibble: 26 x 21
##    gene_symbol
##          <chr>
##  1        AKT2
##  2        AKT3
##  3         ALK
##  4       CCNE1
##  5      DROSHA
##  6        EGFR
##  7       ERBB2
##  8         ERG
##  9        FLT4
## 10        GRM3
## # ... with 16 more rows, and 20 more variables: name <chr>,
## #   entrez_geneid <int>, genome_location <chr>, tier <int>,
## #   hallmark <chr>, chr_band <chr>, somatic <chr>, germline <chr>,
## #   tumour_types_somatic <chr>, tumour_types_germline <chr>,
## #   cancer_syndrome <chr>, tissue_type <chr>, molecular_genetics <chr>,
## #   role_in_cancer <chr>, mutation_types <chr>,
## #   translocation_partner <chr>, other_germline_mut <chr>,
## #   other_syndrome <chr>, synonyms <chr>, mutation_type <chr>
I checked by eye: AKT3 and GRM3 are missing from the table online http://cancer.sanger.ac.uk/cosmic/census/tables?name=amp, but when I checked the downloaded table, both AKT3 and GRM3 have Amplification in the mutation types column. I am a bit confused, but for now I will stick to the downloaded table.
It teaches me some important lessons:
  1. Do not trust data (blindly) from any resource, even COSMIC.
  2. Data are always messy. Tidying data is a big job.
  3. Be careful with the data and do sanity checks. Browsing the data with your eyes may reveal unexpected things.

are oncogenes always amplified and tumor suppressor genes always deleted?

In cancer, we tend to think oncogenes are amplified and tumor suppressor genes are deleted to promote tumor progression. Is it true in this setting?
#devtools::install_github("haozhu233/kableExtra")
library(kableExtra)
library(knitr)
cancer_gene_census %>% 
        clean_names() %>%
        mutate(mutation_type =strsplit(as.character(mutation_types), ",")) %>% 
        unnest(mutation_type) %>% 
        mutate(mutation_type = str_trim(mutation_type)) %>%
        filter(mutation_type == "A" | mutation_type == "D") %>% 
        mutate(role = strsplit(as.character(role_in_cancer), ",")) %>%
        unnest(role) %>%
        mutate(role = str_trim(role)) %>%
        filter(role == "TSG" | role == "oncogene") %>%
        count(role, mutation_type) %>% kable()
role       mutation_type    n
oncogene   A               25
oncogene   D                7
TSG        A                3
TSG        D               42

what are those genes?

cancer_gene_census %>% 
        clean_names() %>%
        mutate(mutation_type =strsplit(as.character(mutation_types), ",")) %>% 
        unnest(mutation_type) %>% 
        mutate(mutation_type = str_trim(mutation_type)) %>%
        filter(mutation_type == "A" | mutation_type == "D") %>% 
        mutate(role = strsplit(as.character(role_in_cancer), ",")) %>%
        unnest(role) %>%
        mutate(role = str_trim(role)) %>%
        filter(role == "TSG" | role == "oncogene") %>%
        filter((role == "TSG" & mutation_type == "A") | (role == "oncogene" & mutation_type == "D")) %>%
        select(gene_symbol, role_in_cancer, role, mutation_types, mutation_type) %>%
        kable(format = "html", booktabs = T, caption = "TSG and oncogene") %>%
        kable_styling(latex_options = c("striped", "hold_position"),
                full_width = F)
TSG and oncogene

gene_symbol   role_in_cancer          role       mutation_types       mutation_type
APOBEC3B      oncogene, TSG           oncogene   D                    D
BIRC3         oncogene, TSG, fusion   oncogene   D, F, N, T, Mis      D
DROSHA        TSG                     TSG        A, Mis, N, F         A
GPC3          oncogene, TSG           oncogene   D, Mis, N, F, S      D
KDM6A         oncogene, TSG           oncogene   D, N, F, S           D
MAP2K4        oncogene, TSG           oncogene   D, Mis, N            D
NKX2-1        oncogene, TSG           TSG        A                    A
NTRK1         oncogene, TSG, fusion   TSG        T, A                 A
PAX5          oncogene, TSG, fusion   oncogene   T, Mis, D, F, S      D
WT1           oncogene, TSG, fusion   oncogene   D, Mis, N, F, S, T   D
The majority of the genes in this output can function as either oncogene or tumor suppressor gene (TSG), which is not totally surprising. Cancer is such a complex disease that a gene's function depends on the context (e.g. the cancer type). Interestingly, DROSHA is the only pure TSG in the list, and it is amplified.

Sunday, December 31, 2017

The End of 2017

At the end of last year, I wrote a post summarizing 2016. Now it is time to write the same for 2017. How time flies!

Last year, I wrote:

For the coming 2017, I should be:
1. busy with Phoebe.
2. writing 1-2 papers.
3. writing a book chapter on ChIP-seq for the biostar handbook. It should come out in the middle of 2017.
4. writing a small R package for practice.
5. learning a piano song.

Looking back, it seems I only accomplished #1 and #3 :) I do have two first-author papers at the writing stage, but I have not finished them yet. I wish I could get them out in 2018.

The book chapter on ChIP-seq was published here. If you want a PDF of my chapter, you can download from https://drive.google.com/open?id=1UxB0uhsoWlPvymukP3em8v4vBDI-CKwK




I still have not gotten a chance to write an R package, which has been on my list for a long time. The coming 2018 is a good time for me to get my feet wet. Our epigenomic project was selected by the Data Science Road-Trip program!! I received the confirmation at the end of 2017. I look forward to learning more R and machine learning for 2 weeks, and the plan is to turn the work into an R package. Many thanks to , I saw this opportunity from his tweet. Developing R packages is becoming easier with Hadley Wickham's usethis and many other tools.

I totally failed the piano goal... I did not get any time to practice, and my wife is occupied with Phoebe and does not have time to teach me either.

I do have some other achievements worth celebrating though :)

I am proud to say that I am an official instructor for Data Carpentry! I went to the instructor training at UC Davis hosted by Dr. Titus Brown. I am excited to be part of this welcoming community.

I am also excited to be a Genomics Advisory committee member of Data Carpentry and a maintainer of the Wrangling Genomics course materials.

I got to practice some teaching skills learned from the instructor training in a ChIP-seq lab, which is part of the GS01 1143 Introduction to Bioinformatics course.

Together with Dr. Kunal Rai, I authored a book chapter on Computational Analysis of Epigenetic Modifications in Melanoma. It should be out in 2018.

The other thing I want to mention is that I wrote several NGS processing workflows using Snakemake, a Python-based workflow engine. It was my first time writing something serious in Python, and I like it a lot.

The most complex workflow I have written is the mutation calling pipeline (find it here).
I follow the GATK best practices and incorporate two somatic callers: mutect and lancet. In my current lab of Dr. Jianjun Zhang and Dr. Andrew Futreal, I deal with multi-region and longitudinal tumor samples from the same patient. In the pipeline, I implemented copy-number analysis and pyclone clonal architecture analysis. The Snakefile is over 1000 lines; I need to think about modularizing it.

Of course, my ChIP-seq Snakemake processing pipeline is used to uniformly process thousands of data sets generated in Dr. Kunal Rai's lab. I am happy that many wet-lab members are learning computation and are using it by themselves.

In addition to DNA-seq and ChIP-seq, I developed several other workflows:
RNA-seq
ATAC-seq
RRBSeq

I will need to better document all the workflows.

For the coming 2018:


1. I need to get out at least 2 papers. I enjoy public service such as my involvement with Data Carpentry, but I know that if I do not have papers, my academic future is doomed.

2. That does not mean I will spend less time on teaching. In fact, I plan to bring at least one Data Carpentry workshop to the Genomic Medicine department at MD Anderson Cancer Center.

3. Finish that R package of course!

4. I am expecting my second child, Noah, in April 2018 (we survived Hurricane Harvey 2017!!). I know it will be even busier with 2 kids :) I love my first kid Phoebe; she is now 17 months old. The joy she has brought me is not exchangeable for anything else. I endeavor to be a better researcher, but first I need to be a better husband and father.

Looking forward to a brand new 2018! Good luck everyone.

Wednesday, November 22, 2017

run gistic2 with sequenza segmentation output from whole exome sequencing

Convert sequenza output to gistic input

GISTIC was designed for SNP6 array data, but I have seen many papers use it for whole exome sequencing data as well.
I have the segment files from sequenza and want to convert them to gistic input.

Input format for gistic:

segment file:
(1) Sample (sample name)
(2) Chromosome (chromosome number)
(3) Start Position (segment start position, in bases)
(4) End Position (segment end position, in bases)
(5) Num markers (number of markers in segment)
(6) Seg.CN (log2(copy number) - 1)
  1. The conversion should be log2 (logarithm base 2) of the copy number minus 1, so that copy number 2 maps to 0.
  2. Every segment start and end in the segments file should appear in the markers file, but not the other way around.
  3. When the copy number is 0 (a homozygous deletion of both copies), log2(0) - 1 is undefined; just put a small number, e.g. -5.

markers file:
(1) Marker Name
(2) Chromosome
(3) Marker Position (in bases)
Note: gistic2 does not require a markers file anymore.
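The conversion rule above can be sketched as a tiny helper function (my own illustration, not part of gistic; the -5 floor for homozygous deletions is the arbitrary small number mentioned above):

```r
# Seg.CN = log2(copy number) - 1; log2(0) is undefined, so floor homozygous
# deletions (copy number 0) at a small number such as -5
seg_cn <- function(cn, floor = -5) ifelse(cn == 0, floor, log2(cn) - 1)

seg_cn(c(0, 1, 2, 4))
# -5 -1  0  1  (copy number 2 maps to 0, as gistic expects)
```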

output of sequenza

sequenza gives a segment file. Segmentation was done by the copynumber Bioconductor package.
The *segments.txt file has 13 columns:
"chromosome" "start.pos" "end.pos" "Bf" "N.BAF" "sd.BAF" "depth.ratio" "N.ratio" "sd.ratio" "CNt" "A" "B" "LPP"
We only need the chromosome, start.pos, end.pos, N.BAF and depth.ratio columns.
The depth.ratio column is the GC-content normalized depth ratio. A depth ratio of 1 means a copy number of 2 (the same as the normal blood control in my case).
UPDATED 12/17/2017: see a comment below. It is not log2(2^depth.ratio) - 1; rather:
To convert to gistic input, I have to do log2(2 * depth.ratio) - 1

UPDATED 01/03/2018
I have a bunch of segment files in the same folder.
Add the sample name as the final column and do the log2 math in R.
library(tidyverse)
library(readr)
seg_files<- list.files(".", pattern = "segments.txt$", full.names = F)

seg_dat_list <- lapply(seg_files, function(f) {
        dat<- read_tsv(f, col_names = T, col_types = cols(.default = col_character()))
        sample<- gsub("_vs_.*segments.txt", "", f)
        dat$sample<- sample
        return(dat)
})

seg_dat <- do.call(rbind, seg_dat_list)

gistic_input<- seg_dat %>% select(sample, chromosome, start.pos, end.pos, N.BAF, depth.ratio) %>% mutate(depth.ratio = as.numeric(depth.ratio)) %>% mutate(depth.ratio = log2(2 * depth.ratio) -1)

write_tsv(gistic_input, "all_segments.txt")
Back to bash:


## marker file:

cat all_segments.txt | sed '1d' | cut -f2,3 > markers.txt
cat all_segments.txt | sed '1d' | cut -f2,4 >> markers.txt

## sort the files by chromosome, take the unique ones and number the markers.

cat markers.txt | sort -V -k1,1 -k2,2nr | uniq | nl > markers_gistic.txt

Run gistic
Modify the gistic2 script a bit, e.g. change the MCR_ROOT folder path:
#!/bin/sh
## set MCR environment and launch GISTIC executable

## NOTE: change the line below if you have installed the Matlab MCR in an alternative location
MCR_ROOT=/scratch/genomic_med/apps/Matlab_Complier_runTime
MCR_VER=v83

echo Setting Matlab MCR root to $MCR_ROOT

## set up environment variables
LD_LIBRARY_PATH=$MCR_ROOT/$MCR_VER/runtime/glnxa64:$LD_LIBRARY_PATH
LD_LIBRARY_PATH=$MCR_ROOT/$MCR_VER/bin/glnxa64:$LD_LIBRARY_PATH
LD_LIBRARY_PATH=$MCR_ROOT/$MCR_VER/sys/os/glnxa64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
XAPPLRESDIR=$MCR_ROOT/$MCR_VER/MATLAB_Component_Runtime/v83/X11/app-defaults
export XAPPLRESDIR

## launch GISTIC executable
./gp_gistic2_from_seg $@

I removed ./ from the last line since I have put all executables in my path.
mkdir gistic_out
gistic2 -b gistic_out -seg all_segments.txt -refgene /scratch/genomic_med/apps/gistic/refgenefiles/hg19.mat -mk markers_gistic.txt  -genegistic 1 -smallmem 1 -broad 1 -brlen 0.5 -conf 0.90 -armpeel 1 -savegene 1 -gcm extreme

Wednesday, September 27, 2017

Use annovar to annotate variants

Annovar is one of the most widely used variant annotation tools. It has been cited over 2000 times, which is amazing. It annotates variants in a tabular format that is easy to parse. Other tools such as VEP and vcfanno are alternatives.
First, download the annovar package (you will need to register, and an email with the download link will be sent to you):
wget  http://www.openbioinformatics.org/annovar/download/0wgxR2rIVP/annovar.latest.tar.gz

tar xvzf annovar.latest.tar.gz

Download the databases

cd annovar


perl annotate_variation.pl -buildver hg19 -downdb -webfrom annovar refGene humandb/

perl annotate_variation.pl -buildver hg19 -downdb cytoBand humandb/

perl annotate_variation.pl -buildver hg19 -downdb -webfrom annovar snp129 humandb/

perl annotate_variation.pl -buildver hg19 -downdb -webfrom annovar exac03nontcga humandb/

perl annotate_variation.pl -buildver hg19 -downdb -webfrom annovar gnomad_genome humandb/

perl annotate_variation.pl -buildver hg19 -downdb -webfrom annovar cadd13 humandb/

perl annotate_variation.pl -buildver hg19 -downdb -webfrom annovar clinvar_20170130 humandb/

perl annotate_variation.pl -buildver hg19 -downdb -webfrom annovar cosmic70 humandb/

perl annotate_variation.pl -buildver hg19 -downdb -webfrom annovar dbnsfp33a humandb/

Annotate

Only 5 columns are needed for the input: chr, start, end, ref and alt. By default, annovar assumes the file is in 1-based format.
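For illustration, a toy input file might look like this (made-up coordinates, tab-separated; the file name is hypothetical):

```shell
# toy annovar input: chr, start, end, ref, alt (1-based coordinates)
printf '1\t1000000\t1000000\tC\tT\n7\t2500000\t2500000\tT\tG\n' > toy_variants.avinput

# every line should have exactly 5 columns
awk -F'\t' '{print NF}' toy_variants.avinput
# 5
# 5
```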


perl /scratch/genomic_med/apps/annovar/annovar/table_annovar.pl patient1_pre-treatment_vs_patient1_leukocyte_recount_alt_fill.txt /scratch/genomic_med/apps/annovar/annovar/humandb/ -buildver hg19 -out patient1_pre-treatment_vs_patient1_leukocyte_recount_annovar -remove -protocol refGene,cytoBand,snp129,cosmic70,dbnsfp33a,cadd13,clinvar_20170130,exac03nontcga,gnomad_genome  -operation g,r,f,f,f,f,f,f,f -nastring NA -polish -otherinfo -thread 2


perl /scratch/genomic_med/apps/annovar/annovar/table_annovar.pl patient1_pre-treatment_vs_patient1_leukocyte_recount_alt_fill.txt /scratch/genomic_med/apps/annovar/annovar/humandb/ -buildver hg19 -out patient1_pre-treatment_vs_patient1_leukocyte_recount_annovar -remove -protocol refGene,cytoBand,snp129  -operation g,r,f -nastring NA  -polish -otherinfo 

Thursday, August 10, 2017

bwa-mem multi-mapping reads

An error

I was using TEQC to do quality control of my WES bam files aligned with bwa-mem. My data are paired-end, so the function reads2pairs is called to merge each read pair into a single fragment. I then got this error:
> readpairs <- reads2pairs(reads)
Error in reads2pairs(reads) : read pair IDs do not seem to be unique
I asked on the Bioconductor support site and went to the source code of that function. It turns out that TEQC was complaining about reads that are not uniquely mapped.

How does BWA-MEM determine and mark not uniquely mapped reads?

Google directed me to Biostars again, not surprisingly. Please read these posts:
https://www.biostars.org/p/76893/
https://www.biostars.org/p/167892/
One can extract multi-mapped reads by looking for MAPQ < 23 and/or the XA tag on the reads:
samtools view -q 40 file.bam
samtools view -q 1 file.bam
samtools view -q 23 file.bam
BWA added tags (tags starting with ‘X’ are specific to BWA):

Tag   Meaning
NM    Edit distance
MD    Mismatching positions/bases
AS    Alignment score
BC    Barcode sequence
X0    Number of best hits
X1    Number of suboptimal hits found by BWA
XN    Number of ambiguous bases in the reference
XM    Number of mismatches in the alignment
XO    Number of gap opens
XG    Number of gap extensions
XT    Type: Unique/Repeat/N/Mate-sw
XA    Alternative hits; format: (chr,pos,CIGAR,NM;)*
XS    Suboptimal alignment score
XF    Support from forward/reverse alignment
XE    Number of supporting seeds
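For example, the XA:Z: tag can be pulled out of a SAM record and its alternative hits counted; this is a sketch on a single made-up read line, not on my real bam:

```shell
# a made-up SAM record carrying an XA tag with two alternative hits
line=$(printf 'r1\t0\tchr1\t100\t0\t10M\t*\t0\t0\tAAAAAAAAAA\tIIIIIIIIII\tXA:Z:chr2,+300,10M,0;chr5,-40,10M,1;')

# isolate the XA:Z: field, strip the tag name, and count the ;-separated hits
n_alt=$(printf '%s\n' "$line" | tr '\t' '\n' | grep '^XA:Z:' \
        | sed 's/^XA:Z://' | tr ';' '\n' | sed '/^$/d' | awk 'END{print NR}')
echo "$n_alt"
# 2
```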

Checking my own bam files

samtools view my.bam | grep "XA:" | head -2

E00514:124:H2FC7CCXY:1:1221:16741:50076 97 1 69019 0 151M = 69298 430 CTCCTTCTCTTCTTCAAGGTAACTGCAGAGGCTATTTCCTGGAATGAATCAACGAGTGAAACGAATAACTCTATGGTGACTGAATTCATTTTTCTGGGTCTCTCTGATTCTCAGGAACTCCAGACCTTCCTATTTATGTTGTTTTTTGTAT @@CC?=D@E@=F@>F>?FG==?FAGG?F?FFGA<=?>GFAGF?@=G>?=F?>E@>F=G>>>E?=>>;>D@E@;<EE<E=CAE<>;<D@<<<==D@ED@<D@D@E>E<<=D@E@E>=>C@DD@E=CD@<DD@<>=><>G>>G>>>=;;?;;> XA:Z:15,-102463184,151M,0;19,+110607,151M,1; MC:Z:151M BD:Z:NNNNNNNONONNONNONNOONNNOOONONOONONNNGNNNOONNNOONNOONNNOOOMONFNNONNNNNONONNOOOMOOOOONNNNOONGGGNOOOLOOONONOOOONNONOOONNNONNOONOONNNNNNNNGNNOMNNMNGGGGNMNN MD:Z:151 BI:Z:QQPQQQQPOPQQQQQQQQQQQQQQRQQRQQQQRQQQMQQQRRQQQRQQQRQQQQQQRQQQKQQQQQQQQQPQQQRRQQQQQRQQQQQQQQMMMQQRRPQRQPQPQRQRQQQPQRQQQQQPQQRQQQQQQQQQQQMQQRQQQQQMMMMQQQQ NM:i:0 MQ:i:0 AS:i:151 XS:i:151 RG:Z:F17042956987-KY290-2-WES
E00514:124:H2FC7CCXY:1:2219:12855:17641 81 1 69039 0 150M 13 83828251 0 AACTGCAGAGGCTATTTCCTGGAATGAATCAACGAGTGAAACGAATAACTCTATGGTGACTGAATTCATTTTTCTGGGTCTCTCTGATTCTCAGGAACTCCAGACCTTCCTATTTATGTTGTTTTTTGTATTCTATGGAGGAATCGTGTT =;C?FEAFAFGF<>>>=FF@DD=<@D=<;E<<?D@C@E=<<?D;<;<<E<E;=@DC@D<E@D=<==E<>>=><E@EDD<E=E=E@D<>>F>G@CF>=F>FGAF=GF@?FG===><=@E??E??????F==??F==?FF@EE=<=>D@B@@ XA:Z:15,+102463165,150M,0;19,-110627,150M,1; BD:Z:NOONOOONNNOONNFNNOOONNNOONNOONNNNNOMONGNNNNNNNNOONONOOOMOOOOONNNNOONFFFNNOOLOONONONOOONNNOOONNNNOONOOOOOONNNOONNFNNOMNNMNFFFFNMNNNNNONOONNNNNNOONMMNNN MD:Z:150 BI:Z:QRRQQRQPQQRQQQKQQQRQQQQQQQQRQQQQQPQPQQMQQQQQQQQRQQQQQQQPQRRRQQQQQQRQKKKQQRQPQQQQQQQRQRQQQQQRQQQQRQQRRQRQQQQQQQQQKQQQOQQOQKKKKQOQQQQQQQQQQPQQPPQQQPOQQQ NM:i:0 AS:i:150 XS:i:150 RG:Z:F17042956987-KY290-2-WES
Indeed, most of the reads with the XA: tag have a mapping quality of 0 (fifth column).
samtools view -q 1 my.bam | wc -l
7787794
samtools view my.bam | grep -v "XA:" | wc -l
7972777

## not exactly the same number, what's wrong?
samtools view my.bam | grep  "XA:" | awk '{print $5}' | sort | uniq -c 
201878 0
    463 1
    688 10
    666 11
    677 12
    693 13
    271 14
    777 15
    281 16
    414 17
    564 18
   1429 19
    192 2
    327 20
   3772 21
   1674 22
    742 23
   1543 24
   3106 25
    368 26
   6223 27
    498 28
    514 29
    760 3
    830 30
   1526 31
    954 32
   3726 33
    367 34
    343 35
    379 36
    641 37
    150 38
   1082 39
    442 4
   3058 40
    673 41
    866 42
   2570 43
   1285 44
   6374 45
   6885 46
   7669 47
  22571 48
  17666 49
    611 5
  16128 50
  13824 51
   9864 52
   6277 53
   4328 54
   2568 55
   1440 56
   4547 57
   1462 58
   1888 59
   1171 6
 169162 60
      2 61
      3 62
      1 64
      5 65
      1 67
      2 69
   1611 7
     86 70
   2234 8
   4013 9
So there are many reads with the XA tag that have mapping quality scores > 0!!

retain only uniquely mapped reads

## optional tags are not guaranteed to sit in column 17, so match the whole line instead of $17
samtools view -h my.bam | awk '$1 ~ /^@/ || $0 !~ /XA:Z:/' | samtools view -bS - > my_unique.bam
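Because optional SAM tags are not guaranteed to sit in a fixed column, a column-independent version of this filter can be sanity-checked on a toy SAM stream (made-up reads; with a real bam you would pipe `samtools view -h` output into the same awk):

```shell
# toy SAM: one header line, one unique read (r1), one multi-mapper with XA (r2)
printf '@SQ\tSN:chr1\tLN:1000\nr1\t0\tchr1\t100\t60\t10M\t*\t0\t0\tAAAAAAAAAA\tIIIIIIIIII\nr2\t0\tchr1\t200\t0\t10M\t*\t0\t0\tCCCCCCCCCC\tIIIIIIIIII\tXA:Z:chr2,+300,10M,0;\n' > toy.sam

# keep header lines and any read whose record does not mention XA:Z: anywhere
awk '$1 ~ /^@/ || $0 !~ /XA:Z:/' toy.sam
# the @SQ header and r1 survive; r2 is dropped
```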