diff --git a/fonts/OpenSans-Bold-webfont.eot b/fonts/OpenSans-Bold-webfont.eot new file mode 100644 index 0000000..e1c7674 Binary files /dev/null and b/fonts/OpenSans-Bold-webfont.eot differ diff --git a/fonts/OpenSans-Bold-webfont.svg b/fonts/OpenSans-Bold-webfont.svg new file mode 100644 index 0000000..364b368 --- /dev/null +++ b/fonts/OpenSans-Bold-webfont.svg @@ -0,0 +1,146 @@ + + + + +This is a custom SVG webfont generated by Font Squirrel. +Copyright : Digitized data copyright 20102011 Google Corporation +Foundry : Ascender Corporation +Foundry URL : httpwwwascendercorpcom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/fonts/OpenSans-Bold-webfont.ttf b/fonts/OpenSans-Bold-webfont.ttf new file mode 100644 index 0000000..2d94f06 Binary files /dev/null and b/fonts/OpenSans-Bold-webfont.ttf differ diff --git a/fonts/OpenSans-Bold-webfont.woff b/fonts/OpenSans-Bold-webfont.woff new file mode 100644 index 0000000..cd86852 Binary files /dev/null and b/fonts/OpenSans-Bold-webfont.woff differ diff --git a/fonts/OpenSans-BoldItalic-webfont.eot b/fonts/OpenSans-BoldItalic-webfont.eot new file mode 100644 index 0000000..f44ac9a Binary files /dev/null and b/fonts/OpenSans-BoldItalic-webfont.eot differ diff --git a/fonts/OpenSans-BoldItalic-webfont.svg b/fonts/OpenSans-BoldItalic-webfont.svg new file mode 100644 index 0000000..8392240 --- /dev/null +++ b/fonts/OpenSans-BoldItalic-webfont.svg @@ -0,0 +1,146 @@ + + + + +This is a custom SVG webfont generated by Font Squirrel. +Copyright : Digitized data copyright 20102011 Google Corporation +Foundry : Ascender Corporation +Foundry URL : httpwwwascendercorpcom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/fonts/OpenSans-BoldItalic-webfont.ttf b/fonts/OpenSans-BoldItalic-webfont.ttf new file mode 100644 index 0000000..f74e0e3 Binary files /dev/null and b/fonts/OpenSans-BoldItalic-webfont.ttf differ diff --git a/fonts/OpenSans-BoldItalic-webfont.woff b/fonts/OpenSans-BoldItalic-webfont.woff new file mode 100644 index 0000000..f3248c1 Binary files /dev/null and b/fonts/OpenSans-BoldItalic-webfont.woff differ diff --git a/fonts/OpenSans-Italic-webfont.eot b/fonts/OpenSans-Italic-webfont.eot new file mode 100644 index 0000000..277c189 Binary files /dev/null and b/fonts/OpenSans-Italic-webfont.eot differ diff --git a/fonts/OpenSans-Italic-webfont.svg b/fonts/OpenSans-Italic-webfont.svg new file mode 100644 index 0000000..29c7497 --- /dev/null +++ b/fonts/OpenSans-Italic-webfont.svg @@ -0,0 +1,146 @@ + + + + +This is a custom SVG webfont generated by Font Squirrel. 
+Copyright : Digitized data copyright 20102011 Google Corporation +Foundry : Ascender Corporation +Foundry URL : httpwwwascendercorpcom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/fonts/OpenSans-Italic-webfont.ttf b/fonts/OpenSans-Italic-webfont.ttf new file mode 100644 index 0000000..63f187e Binary files /dev/null and b/fonts/OpenSans-Italic-webfont.ttf differ diff --git a/fonts/OpenSans-Italic-webfont.woff b/fonts/OpenSans-Italic-webfont.woff new file mode 100644 index 0000000..469a29b Binary files /dev/null and b/fonts/OpenSans-Italic-webfont.woff differ diff --git a/fonts/OpenSans-Light-webfont.eot b/fonts/OpenSans-Light-webfont.eot new file mode 100644 index 0000000..837daab Binary files /dev/null and b/fonts/OpenSans-Light-webfont.eot differ diff --git a/fonts/OpenSans-Light-webfont.svg b/fonts/OpenSans-Light-webfont.svg new file mode 100644 index 0000000..bdb6726 --- /dev/null +++ b/fonts/OpenSans-Light-webfont.svg @@ -0,0 +1,146 @@ + + + + +This is a custom SVG webfont generated by Font Squirrel. +Copyright : Digitized data copyright 20102011 Google Corporation +Foundry : Ascender Corporation +Foundry URL : httpwwwascendercorpcom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/fonts/OpenSans-Light-webfont.ttf b/fonts/OpenSans-Light-webfont.ttf new file mode 100644 index 0000000..b50ef9d Binary files /dev/null and b/fonts/OpenSans-Light-webfont.ttf differ diff --git a/fonts/OpenSans-Light-webfont.woff b/fonts/OpenSans-Light-webfont.woff new file mode 100644 index 0000000..99514d1 Binary files /dev/null and b/fonts/OpenSans-Light-webfont.woff differ diff --git a/fonts/OpenSans-LightItalic-webfont.eot b/fonts/OpenSans-LightItalic-webfont.eot new file mode 100644 index 0000000..f0ebf2c Binary files /dev/null and b/fonts/OpenSans-LightItalic-webfont.eot differ diff --git a/fonts/OpenSans-LightItalic-webfont.svg b/fonts/OpenSans-LightItalic-webfont.svg new file mode 100644 index 0000000..60765da --- /dev/null +++ b/fonts/OpenSans-LightItalic-webfont.svg @@ -0,0 +1,146 @@ + + + + +This is a custom SVG webfont generated by Font Squirrel. 
+Copyright : Digitized data copyright 20102011 Google Corporation +Foundry : Ascender Corporation +Foundry URL : httpwwwascendercorpcom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/fonts/OpenSans-LightItalic-webfont.ttf b/fonts/OpenSans-LightItalic-webfont.ttf new file mode 100644 index 0000000..5898c8c Binary files /dev/null and b/fonts/OpenSans-LightItalic-webfont.ttf differ diff --git a/fonts/OpenSans-LightItalic-webfont.woff b/fonts/OpenSans-LightItalic-webfont.woff new file mode 100644 index 0000000..9c978dc Binary files /dev/null and b/fonts/OpenSans-LightItalic-webfont.woff differ diff --git a/fonts/OpenSans-Regular-webfont.eot b/fonts/OpenSans-Regular-webfont.eot new file mode 100644 index 0000000..dd6fd2c Binary files /dev/null and b/fonts/OpenSans-Regular-webfont.eot differ diff --git a/fonts/OpenSans-Regular-webfont.svg b/fonts/OpenSans-Regular-webfont.svg new file mode 100644 index 0000000..01038bb --- /dev/null +++ b/fonts/OpenSans-Regular-webfont.svg @@ -0,0 +1,146 @@ + + + + +This is a custom SVG webfont generated by Font Squirrel. +Copyright : Digitized data copyright 20102011 Google Corporation +Foundry : Ascender Corporation +Foundry URL : httpwwwascendercorpcom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/fonts/OpenSans-Regular-webfont.ttf b/fonts/OpenSans-Regular-webfont.ttf new file mode 100644 index 0000000..05951e7 Binary files /dev/null and b/fonts/OpenSans-Regular-webfont.ttf differ diff --git a/fonts/OpenSans-Regular-webfont.woff b/fonts/OpenSans-Regular-webfont.woff new file mode 100644 index 0000000..274664b Binary files /dev/null and b/fonts/OpenSans-Regular-webfont.woff differ diff --git a/fonts/OpenSans-Semibold-webfont.eot b/fonts/OpenSans-Semibold-webfont.eot new file mode 100644 index 0000000..289aade Binary files /dev/null and b/fonts/OpenSans-Semibold-webfont.eot differ diff --git a/fonts/OpenSans-Semibold-webfont.svg b/fonts/OpenSans-Semibold-webfont.svg new file mode 100644 index 0000000..cc2ca42 --- /dev/null +++ b/fonts/OpenSans-Semibold-webfont.svg @@ -0,0 +1,146 @@ + + + + +This is a custom SVG webfont generated by Font Squirrel. 
+Copyright : Digitized data copyright 2011 Google Corporation +Foundry : Ascender Corporation +Foundry URL : httpwwwascendercorpcom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/fonts/OpenSans-Semibold-webfont.ttf b/fonts/OpenSans-Semibold-webfont.ttf new file mode 100644 index 0000000..6f15073 Binary files /dev/null and b/fonts/OpenSans-Semibold-webfont.ttf differ diff --git a/fonts/OpenSans-Semibold-webfont.woff b/fonts/OpenSans-Semibold-webfont.woff new file mode 100644 index 0000000..4e47cb1 Binary files /dev/null and b/fonts/OpenSans-Semibold-webfont.woff differ diff --git a/fonts/OpenSans-SemiboldItalic-webfont.eot b/fonts/OpenSans-SemiboldItalic-webfont.eot new file mode 100644 index 0000000..50a8a6f Binary files /dev/null and b/fonts/OpenSans-SemiboldItalic-webfont.eot differ diff --git a/fonts/OpenSans-SemiboldItalic-webfont.svg b/fonts/OpenSans-SemiboldItalic-webfont.svg new file mode 100644 index 0000000..65b50e2 --- /dev/null +++ b/fonts/OpenSans-SemiboldItalic-webfont.svg @@ -0,0 +1,146 @@ + + + + +This is a custom SVG webfont generated by Font Squirrel. +Copyright : Digitized data copyright 20102011 Google Corporation +Foundry : Ascender Corporation +Foundry URL : httpwwwascendercorpcom + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/fonts/OpenSans-SemiboldItalic-webfont.ttf b/fonts/OpenSans-SemiboldItalic-webfont.ttf new file mode 100644 index 0000000..55ba312 Binary files /dev/null and b/fonts/OpenSans-SemiboldItalic-webfont.ttf differ diff --git a/fonts/OpenSans-SemiboldItalic-webfont.woff b/fonts/OpenSans-SemiboldItalic-webfont.woff new file mode 100644 index 0000000..0adc6df Binary files /dev/null and b/fonts/OpenSans-SemiboldItalic-webfont.woff differ diff --git a/images/bullet.png b/images/bullet.png new file mode 100644 index 0000000..0614eb6 Binary files /dev/null and b/images/bullet.png differ diff --git a/images/hr.gif b/images/hr.gif new file mode 100644 index 0000000..bdb4168 Binary files /dev/null and b/images/hr.gif differ diff --git a/images/nav-bg.gif b/images/nav-bg.gif new file mode 100644 index 0000000..4743965 Binary files /dev/null and b/images/nav-bg.gif differ diff --git a/index.html b/index.html index f5dbe61..69ee707 100644 --- a/index.html +++ b/index.html @@ -1,60 +1,1335 @@ - + - + - - - - - Bash-oneliner by onceupon + + + + + + + + - + -
-
-

Bash-oneliner

-

Bash Oneliner learning station. This blog will focus on bash commands for parsing biological data, which are tsv files(tab-separated values); some of the commands are for Ubuntu system maintaining. I apologize that there won't be any citation of the code, but they are probably from dear Google and Stackoverflow. Not all the code here are oneliner (if the ';' counts..). English and bash are not my first language, so... correct me anytime, tks!!

+ -
+
+
+

Bash-oneliner

+

Bash Oneliner learning station. This blog will focus on bash commands for parsing biological data, which are tsv files (tab-separated values); some of the commands are for Ubuntu system maintenance. I apologize that there won't be any citation of the code, but they are probably from dear Google and Stackoverflow. Not all the code here is a one-liner (if the ';' counts..). English and bash are not my first language, so... correct me anytime, tks!!

+
+ Project maintained by onceupon + Hosted on GitHub Pages — Theme by mattgraham +
-
-
-

-Welcome to GitHub Pages.

+

+Handy Bash oneliner commands for tsv file editing

-

This automatic page generator is the easiest way to create beautiful pages for all of your projects. Author your page content here using GitHub Flavored Markdown, select a template crafted by a designer, and publish. After your page is generated, you can check out the new gh-pages branch locally. If you’re using GitHub Desktop, simply sync your repository and you’ll see the new branch.

+ -

-Designer Templates

+

+Grep

-

We’ve crafted some handsome templates for you to use. Go ahead and click 'Continue to layouts' to browse through them. You can easily go back to edit your page before publishing. After publishing your page, you can revisit the page generator and switch to another theme. Your Page content will be preserved.

+
+extract text between words (e.g. w1,w2)
-

-Creating pages manually

+
grep -o -P '(?<=w1).*(?=w2)'
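
A quick sanity check of the lookaround pattern (hypothetical input string; -P needs GNU grep):

echo "start w1 keep this w2 end" | grep -o -P '(?<=w1).*(?=w2)'

// prints " keep this " (everything between w1 and w2, spaces included)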
-

If you prefer to not use the automatic generator, push a branch named gh-pages to your repository to create a page manually. In addition to supporting regular HTML content, GitHub Pages support Jekyll, a simple, blog aware static site generator. Jekyll makes it easy to create site-wide headers and footers without having to copy them across every page. It also offers intelligent blog support and other advanced templating features.

+
+grep lines without word (e.g. bbo)
-

-Authors and Contributors

+
grep -v bbo
-

You can @mention a GitHub username to generate a link to their profile. The resulting <a> element will link to the contributor’s GitHub Profile. For example: In 2007, Chris Wanstrath (@defunkt), PJ Hyett (@pjhyett), and Tom Preston-Werner (@mojombo) founded GitHub.

+
+grep and count (e.g. bbo)
-

-Support or Contact

+
grep -c bbo filename
-

Having trouble with Pages? Check out our documentation or contact support and we’ll help you sort it out.

+
+case-insensitive grep (e.g. bbo/BBO/Bbo)
+ +
grep -i "bbo" filename 
+ +
+count occurrences (e.g. a line with three matches counts three times)
+ +
grep -o bbo filename 
+ +
+COLOR the match (e.g. bbo)!
+ +
grep --color bbo filename 
+ +
+grep search all files in a directory(e.g. bbo)
+ +
grep -R bbo /path/to/directory 
+ +

or

+ +
grep -r bbo /path/to/directory 
+ +
+search all files in directory, only output file names with matches(e.g. bbo)
+ +
grep -Rl bbo /path/to/directory 
+ +

or

+ +
grep -rl bbo /path/to/directory 
+ +
+grep OR (e.g. A or B or C or D)
+ +
grep 'A\|B\|C\|D' 
+
+ +
+grep AND (e.g. A and B)
+ +
grep 'A.*B' 
+ +
+grep all content of a fileA from fileB
+ +
grep -f fileA fileB 
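
A minimal sketch of how -f behaves, using hypothetical file names:

printf 'apple\nberry\n' > patterns.txt
grep -f patterns.txt fruits.txt

// prints every line of fruits.txt containing "apple" or "berry";
// each line of patterns.txt is treated as a separate pattern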
+ +
+grep a tab
+ +
grep $'\t' 
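
A small test of the $'\t' quoting, on input built with printf (hypothetical):

printf 'a\tb\nc d\n' | grep $'\t'

// prints only the first line, the one that really contains a tab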
+ +

+Sed

+ +

[back to top]

+ +
+remove lines with word (e.g. bbo)
+ +
sed "/bbo/d" filename
+ +
+edit infile (edit and save)
+ +
sed -i "/bbo/d" filename
+ +
+when using variable (e.g. $i), use double quotes " "
+ +

e.g. add >$i to the first line (to make a FASTA file)

+ +
sed "1i >$i"  
+ +

//notice the double quotes! in other examples, you can use a single quote, but here, no way! +//'1i' means insert to first line
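
For example, a hypothetical loop that turns every .seq file into a FASTA-like file named after itself:

for i in *.seq; do sed -i "1i >$i" "$i"; done

// the double quotes let $i expand; with single quotes sed would insert the literal text >$i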

+ +
+delete empty lines
+ +
sed '/^\s*$/d' 
+ +

or

+ +
sed '/^$/d' 
+ +
+delete last line
+ +
sed '$d' 
+ +
+add \n every nth character (e.g. every 4th character)
+ +
sed 's/.\{4\}/&\n/g' 
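
A quick demonstration on a made-up sequence string (the \n in the replacement is a GNU sed feature):

echo "AAGGTTCCAAGG" | sed 's/.\{4\}/&\n/g'

// prints AAGG, TTCC and AAGG on separate lines (plus a blank line from the final \n)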
+ +
+substitution (e.g. replace A by B)
+ +
sed 's/A/B/g' filename 
+ +
+select lines that start with a string (e.g. @S)
+ +
sed -n '/^@S/p' 
+ +
+delete lines with string (e.g. bbo)
+ +
sed '/bbo/d' filename 
+ +
+print every nth line
+ +
sed -n '0~3p' filename
+ +

//first~step (GNU sed): 0 is the start, 3 is the step, so lines 3, 6, 9, ... are printed
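
For instance, against a hypothetical 10-line input:

seq 10 | sed -n '0~3p'

// prints 3, 6 and 9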

+ +
+print every odd-numbered line
+ +
sed -n '1~2p' 
+ +
+print every third line including the first line
+ +
sed -n '1p;0~3p' 
+ +
+remove leading whitespace and tabs
+ +
sed -e 's/^[ \t]*//'
+ +

//notice a whitespace before '\t'!!

+ +
+remove only leading whitespace
+ +
sed 's/ *//'
+ +

//notice a whitespace before '*'!!

+ +
+remove ending commas
+ +
sed 's/,$//g' 
+ +
+add a column to the end
+ +
sed "s/$/\t$i/"
+ +

//$i is the variable you want to add
+e.g. add the filename as the last column of every line of the file

+ +
for i in $(ls);do sed -i "s/$/\t$i/" $i;done
+ +
+remove newlines (join all lines into one)
+ +
sed ':a;N;$!ba;s/\n//g'
+ +
+print a number of lines (e.g. line 10th to line 33 rd)
+ +
sed -n '10,33p' <filename
+ +

+Awk

+ +

[back to top]

+ +
+set tab as field separator
+ +
awk -F $'\t'  
+ +
+output as tab separated (also as field separator)
+ +
awk -v OFS='\t' 
+ +
+pass variable
+ +
a=bbo;b=obb;
+awk -v a="$a" -v b="$b" '$1==a && $10==b' filename 
+ +
+print number of characters on each line
+ +
awk '{print length ($0);}' filename 
+ +
+find number of columns
+ +
awk '{print NF}' 
+ +
+reverse column order
+ +
awk '{print $2, $1}' 
+ +
+check if there is a comma in a column (e.g. column $1)
+ +
awk '$1~/,/ {print}'  
+ +
+split and do for loop
+ +
awk '{split($2, a,",");for (i in a) print $1"\t"a[i]}' filename 
+ +
+print all lines up to the nth occurrence of a string (e.g. stop printing after bbo has appeared 7 times)
+ +
awk -v N=7 '{print}/bbo/&& --N<=0 {exit}'
+ +
+add string to the beginning of a column (e.g add "chr" to column $3)
+ +
awk 'BEGIN{OFS="\t"}$3="chr"$3' 
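
A small sketch with made-up tab-separated input:

printf '1\tA\t7\n2\tB\t8\n' | awk 'BEGIN{OFS="\t"}$3="chr"$3'

// prints 1 A chr7 and 2 B chr8 (tab-separated);
// the assignment to $3 is non-empty, so awk's default print action fires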
+ +
+remove lines with string (e.g. bbo)
+ +
awk '!/bbo/' file 
+ +
+column subtraction
+ +
cat file| awk -F '\t' 'BEGIN {SUM=0}{SUM+=$3-$2}END{print SUM}'
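
For example, summing end minus start over a hypothetical 3-column interval file:

printf 'x\t10\t15\nx\t20\t22\n' | awk -F '\t' 'BEGIN {SUM=0}{SUM+=$3-$2}END{print SUM}'

// prints 7 (5 + 2, the total length of the two intervals)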
+ +
+usage and meaning of NR and FNR
+ +

e.g.
+fileA:
+a
+b
+c
+fileB:
+d
+e

+ +
awk '{print FILENAME, NR, FNR, $0}' fileA fileB 
+ +

fileA 1 1 a
+fileA 2 2 b
+fileA 3 3 c
+fileB 4 1 d
+fileB 5 2 e

+ +
+and gate
+ +

e.g.
+fileA:
+1 0

+ +

2 1

+ +

3 1

+ +

4 0

+ +

fileB:

+ +

1 0

+ +

2 1

+ +

3 0

+ +

4 1

+ +
awk -v OFS='\t' 'NR==FNR{a[$1]=$2;next} NF {print $1,((a[$1]==$2)? $2:"0")}' fileA fileB 
+ +

1 0

+ +

2 1

+ +

3 0

+ +

4 0

+ +
+round all numbers in a file (e.g. to 2 decimal places)
+ +
awk '{while (match($0, /[0-9]+\.[0-9]+/)){
+    printf "%s%.2f", substr($0,0,RSTART-1),substr($0,RSTART,RLENGTH)
+    $0=substr($0, RSTART+RLENGTH)
+    }
+    print
+    }'
+ +
+give number/index to every row
+ +
awk '{printf("%s\t%s\n",NR,$0)}'
+ +

+Xargs

+ +

[back to top]

+ +
+set tab as delimiter (default:space)
+ +
xargs -d '\t'
+ +
+display 3 items per line
+ +
echo 1 2 3 4 5 6| xargs -n 3
+ +

//1 2 3
+//4 5 6

+ +
+prompt before execution
+ +
echo a b c |xargs -p -n 3
+ +
+print command along with output
+ +
xargs -t abcd
+ +

///bin/echo abcd +//abcd

+ +
+with find and rm
+ +
find . -name "*.html"|xargs rm -rf
+ +

delete files with whitespace in the filename (e.g. "hello 2001")

+ +
find . -name "*.c" -print0|xargs -0 rm -rf
+ +
+show limits
+ +
xargs --show-limits
+ +
+move files to folder
+ +
find . -name "*.bak" -print0|xargs -0 -I {} mv {} ~/old
+ +

or

+ +
find . -name "*.bak" -print0|xargs -0 -I file mv file ~/old
+ +
+move the first 100 files to a directory (e.g. d1)
+ +
ls |head -100|xargs -I {} mv {} d1
+ +
+parallel
+ +
time echo {1..5} |xargs -n 1 -P 5 sleep
+ +

a lot faster than

+ +
time echo {1..5} |xargs -n1 sleep
+ +
+copy all files from A to B
+ +
find /dir/to/A -type f -name "*.py" -print0| xargs -0 -r -I file cp -v -p file --target-directory=/path/to/B
+ +

//-v: verbose
+//-p: preserve file attributes (e.g. owner)

+ +
+with sed
+ +
ls |xargs -n1 -I file sed -i '/^Pos/d' file
+ +
+add the file name to the first line of file
+ +
ls |sed 's/.txt//g'|xargs -n1 -I file sed -i -e '1 i\>file\' file.txt
+ +
+count lines of each file, one file at a time
+ +
ls |xargs -n1 wc -l
+ +
+to filter txt to a single line
+ +
ls -l| xargs
+ +
+count files within directories
+ +
echo mso{1..8}|xargs -n1 bash -c 'echo -n "$1:"; ls -la "$1"| grep -w 74 |wc -l' --
+ +

// "--" signals the end of options and disables further option processing

+ +
+download dependencies files and install (e.g. requirements.txt)
+ +
cat requirements.txt| xargs -n1 sudo pip install
+ +
+count lines in all files, and also show the total
+ +
ls|xargs wc -l
+ +

+Find

+ +

[back to top]

+ +
+list all subdirectories and files in the current directory
+ +
find .
+ +
+list all files under the current directory
+ +
find . -type f
+ +
+list all directories under the current directory
+ +
find . -type d
+ +
+edit all files under the current directory (e.g. replace 'www' with 'w')
+ +
find . -name '*.php' -exec sed -i 's/www/w/g' {} \;
+ +

if no subdirectory

+ +
replace "www" "w" -- *
+ +

//a space before *

+ +
+find and output only filename (e.g. "mso")
+ +
find mso*/ -name M* -printf "%f\n"
+ +
+find and delete file with size less than (e.g. 74 byte)
+ +
find . -name "*.mso" -size -74c -delete
+ +

//c: bytes; k: kilobytes; M: megabytes

+ +

+Loops

+ +

[back to top]

+ +
+while loop, column subtraction of a file (e.g. a 3 columns file)
+ +
while read a b c; do echo $(($c-$b));done < <(head filename)
+ +

//there is a space between the two '<'s

+ +
+while loop, sum up column subtraction
+ +
i=0; while read a b c; do ((i+=$c-$b)); echo $i; done < <(head filename)
+ +
+if statement
+ +
if (($j==$u+2))
+ +

//(( )) use for arithmetic operation

+ +
if [[ $age > 21 ]]
+ +

//[[ ]] is used for comparison; note that > here compares strings, use -gt for numbers

+ +
+for loop
+ +
for i in $(ls); do echo file $i;done
+ +

+Download

+ +

[back to top]

+ +
+download all from a page
+ +
wget -r -l1 -H -t1 -nd -N -np -A mp3 -e robots=off http://example.com
+ +

//-r: recursive and download all links on page

+ +

//-l1: only one level link

+ +

//-H: span host, visit other hosts

+ +

//-t1: number of retries

+ +

//-nd: don't make new directories, download to here

+ +

//-N: turn on timestamp

+ +

//-np: no parent

+ +

//-A: accepted file types (separated by ',')

+ +

//-e robots=off: ignore the robots.txt file, which would otherwise stop wget from crawling the site; sorry example.com

+ +

+Random

+ +

[back to top]

+ +
+randomly pick 100 lines from a file
+ +
shuf -n 100 filename
+ +
+random order (lucky draw)
+ +
for i in a b c d e; do echo $i; done| shuf
+ +
+echo series of random numbers between a range (e.g. generate 15 random numbers from 0-10)
+ +
shuf -i 0-10 -n 15
+ +
+echo a random number
+ +
echo $RANDOM
+ +
+random from 0-9
+ +
echo $((RANDOM % 10))
+ +
+random from 1-10
+ +
echo $(((RANDOM %10)+1))
+ +

+Others

+ +

[back to top]

+ +
+remove newlines
+ +
tr --delete '\n' <input.txt >output.txt
+ +
+replace newline
+ +
tr '\n' ' ' <filename
+ +
+compare files (e.g. fileA, fileB)
+ +
diff fileA fileB
+ +

//a: added; d: deleted; c: changed

+ +

or

+ +
sdiff fileA fileB
+ +

//side-to-side merge of file differences

+ +
+number a file (e.g. fileA)
+ +
nl fileA
+ +

or

+ +
nl -nrz fileA
+ +

//add leading zeros

+ +
+combine/ paste two files (e.g. fileA, fileB)
+ +
paste fileA fileB
+ +

//separated by tabs by default

+ +
+reverse string
+ +
echo 12345| rev
+ +
+read .gz file without extracting
+ +
zmore filename
+ +

or

+ +
zless filename
+ +
+run in background, output error file
+ +
(command here) 2>log &
+ +

or

+ +
(command here) 2>&1| tee logfile
+ +

or

+ +
(command here) >>outfile 2>&1
+ +

//0: standard input; 1: standard output; 2: standard error

+ +
+send mail
+ +
echo 'heres the content'| mail -A 'file.txt' -s 'mail.subject' me@gmail.com
+ +

//use -a flag to set send from (-a "From: some@mail.tld")

+ +
+.xls to csv
+ +
xls2csv filename
+ +
+append to file (e.g. hihi)
+ +
echo 'hihi' >>filename
+ +
+make BEEP sound
+ +
speaker-test -t sine -f 1000 -l1
+ +
+set beep duration
+ +
(speaker-test -t sine -f 1000) & pid=$!;sleep 0.1s;kill -9 $pid
+ +
+history edit/ delete
+ +
~/.bash_history
+ +

or

+ +
history -d [line_number]
+ +
+reuse the last argument of the previous command (e.g. a filename)
+ +
head !$
+ +
+clean screen
+ +
clear
+ +

or

+ +
Ctrl+l
+ +
+send data to last edited file
+ +
cat /directory/to/file
+echo 100 >!$
+ +
+run history number (e.g. 53)
+ +
!53
+ +
+run last command
+ +
!!
+ +
+run last command that began with (e.g. cat filename)
+ +
!cat
+ +

or

+ +
!c
+ +

//run cat filename again

+ +
+extract .xz
+ +
1.unxz filename.tar.xz
+2.tar -xf filename.tar
+
+ +
+install python package
+ +
pip install packagename
+ +
+Download file if necessary
+ +
data=file.txt
+url=http://www.example.com/$data
+if [ ! -s $data ];then
+    echo "downloading test data..."
+    wget $url
+fi
+ +
+wget to a filename (when a long name)
+ +
wget -O filename "http://example.com"
+ +
+wget files to a folder
+ +
wget -P /path/to/directory "http://example.com"
+ +
+delete current bash command
+ +
Ctrl+U
+ +

or

+ +
Ctrl+C
+ +

or

+ +
Alt+Shift+#
+ +

//to make it to history

+ +
+add things to history (e.g. "addmetohistory")
+ +
#addmetohistory
+ +

//just add a "#" before~~

+ +
+sleep awhile or wait for a moment or schedule a job
+ +
sleep 5;echo hi
+ +
+measure the execution time of a command
+ +
time echo hi
+ +
+backup with rsync
+ +
rsync -av filename filename.bak
+rsync -av directory directory.bak
+rsync -av --ignore-existing directory/ directory.bak
+rsync -av --update directory directory.bak
+ +

//skip files that are newer on receiver (i prefer this one!)

+ +
+make all directories at one time!
+ +
mkdir -p project/{lib/ext,bin,src,doc/{html,info,pdf},demo/stat}
+ +

//-p: make parent directory +//this will create project/doc/html/; project/doc/info; project/lib/ext ,etc

+ +
+run a command only if the previous command returns a zero exit status (succeeded)
+ +
cd tmp/ && tar xvf ~/a.tar
+ +
+run a command only if the previous command returns a non-zero exit status (failed)
+ +
cd tmp/a/b/c ||mkdir -p tmp/a/b/c
+ +
+extract to a path
+ +
tar xvf filename.gz -C /path/to/directory
+ +
+use backslash "\" to break long command
+ +
cd tmp/a/b/c \
+> || \
+>mkdir -p tmp/a/b/c
+ +
+get pwd
+ +
VAR=$PWD; cd ~; tar xvf file.tar -C $VAR
+ +

//PWD needs to be in capital letters

+ +
+list file type of file (e.g. /tmp/)
+ +
file /tmp/
+ +

//tmp/: directory

+ +
+bash script
+ +
#!/bin/bash
+file=${1#*.}
+ +

//remove everything up to and including the first "."

+ +
file=${1%.*}
+ +

//remove everything from the last "." onward
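
A short illustration with a hypothetical positional parameter, run outside a script for clarity:

set -- sample.tar.gz
echo ${1#*.}
echo ${1%.*}

// prints tar.gz (shortest match of *. removed from the front),
// then sample.tar (shortest match of .* removed from the back)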

+ +
+search from history
+ +
Ctrl+r
+ +
+python simple HTTP Server
+ +
python -m SimpleHTTPServer
+ +
+variables
+ +
${i/a/,}
+ +

e.g. replace all

+ +
${i//a/,}
+ +

//for variable i, replace all 'a' with a comma
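
For instance, with a hypothetical variable:

i=banana
echo ${i/a/,}
echo ${i//a/,}

// prints b,nana (first 'a' only), then b,n,n, (every 'a')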

+ +
+read user input
+ +
read input
+echo $input
+ +
+generate sequence 1-10
+ +
seq 10
+ +
+sum up input list (e.g. seq 10)
+ +
seq 10|paste -sd+|bc
+ +
+find average of input list/file
+ +
i=`wc -l filename|cut -d ' ' -f1`; cat filename| echo "scale=2;(`paste -sd+`)/"$i|bc
+ +
+generate all combination (e.g. 1,2)
+ +
echo {1,2}{1,2}
+ +

//1 1, 1 2, 2 1, 2 2

+ +
+generate all combination (e.g. A,T,C,G)
+ +
set={A,T,C,G}
+group=5
+for ((i=0; i<$group; i++));do
+    repetition=$set$repetition;done
+    bash -c "echo "$repetition""
+ +
+read file content to variable
+ +
foo=$(<test1)
+ +
+echo size of variable
+ +
echo ${#foo}
+ +
+associative array
+ +
declare -A array=()
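
A short sketch of filling and reading an associative array (bash 4+), with made-up keys:

declare -A array=()
array[gene1]=55
array[gene2]=98
echo ${array[gene1]}
echo ${!array[@]}

// prints 55, then the list of keys (gene1 and gene2, in no guaranteed order)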
+ +
+send a directory
+ +
scp -r directoryname user@ip:/path/to/send
+ +

+System

+ +

[back to top]

+ +
+snapshot of the current processes
+ +
ps 
+ +
+check graphics card
+ +
lspci
+ +
+show IP address
+ +
ip addr show
+ +

or

+ +
ifconfig
+ +
+check system version
+ +
cat /etc/*-release
+ +
+Linux Programmer's Manual: hier - description of the filesystem hierarchy
+ +
man hier
+ +
+list jobs
+ +
jobs -l
+ +
+export PATH
+ +
export PATH=$PATH:~/path/you/want
+ +
+make a file executable
+ +
chmod +x filename
+ +

//you can now ./filename to execute it

+ +
+detach (if needed) and reattach a screen session
+ +
screen -d -r
+ +
+list screen sessions
+ +
screen -ls
+ +
+check system (x86-64)
+ +
uname -i
+ +
+surf the net
+ +
links www.google.com
+ +
+add user, set passwd
+ +
useradd username
+passwd username
+ +
+edit the bash prompt variable (e.g. to display the whole path)
+ +
1. joe ~/.bash_profile 
+2. export PS1='\u@\h:\w\$' 
+ +

//$PS1 is a variable that defines the makeup and style of the command prompt

+ +
3. source ~/.bash_profile
+ +
+edit environment setting (e.g. alias)
+ +
1. joe ~/.bash_profile
+2. alias pd="pwd" //no more need to type that 'w'!
+3. source ~/.bash_profile
+ +
+list environment variables (e.g. PATH)
+ +
echo $PATH
+ +

//list of directories separated by a colon

+ +
+list all environment variables for current user
+ +
env
+ +
+show partition format
+ +
lsblk
+ +
+soft link program to bin
+ +
ln -s /path/to/program /home/usr/bin
+ +

//must be the whole path to the program

+ +
+show hexadecimal view of data
+ +
hexdump -C filename.class
+ +
+jump to different node
+ +
rsh node_name
+ +
+check port (active internet connection)
+ +
netstat -tulpn
+ +
+find which file a link points to
+ +
readlink filename
+ +
+check where a command links to (e.g. python)
+ +
which python
+ +
+list total size of a directory
+ +
du -hs .
+ +

or

+ +
du -sb
+ +
+copy directory with permission setting
+ +
cp -rp /path/to/directory
+ +
+store current directory
+ +
pushd .

// store the current directory on the stack; 'popd' jumps back to it, 'dirs -l' lists the stack
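
A hypothetical round trip showing the stack in action:

pushd /etc
dirs -l
popd

// pushd remembers where you were and moves to /etc, dirs -l shows both directories, popd jumps back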
+ +
+show disk usage
+ +
df -h 
+ +

or

+ +
du -h 
+ +

or

+ +
du -sk /var/log/* |sort -rn |head -10
+ +
+show current runlevel
+ +
runlevel
+ +
+switch runlevel
+ +
init 3 
+ +

or

+ +
telinit 3 
+ +
+permanently modify runlevel
+ +
1. edit /etc/init/rc-sysinit.conf 
+2. env DEFAULT_RUNLEVEL=2 
+ +
+become root
+ +
su
+ +
+become somebody
+ +
su somebody
+ +
+report user quotas on a device
+ +
repquota -auvs
+ +
+get entries in a number of important databases
+ +
getent database_name
+ +

(e.g. the 'passwd' database)

+ +
getent passwd
+ +

//list all user accounts (local and LDAP)
+(e.g. fetch the list of group accounts)

+ +
getent group
+ +

//store in database 'group'

+ +
+little xwindow tools
+ +
xclock
+xeyes
+ +
+change owner of file
+ +
chown user_name filename
+chown -R user_name /path/to/directory/
+ +

//chown user:group filename

+ +
+list current mount detail
+ +
df
+ +
+list current usernames and user-numbers
+ +
cat /etc/passwd
+ +
+get all username
+ +
getent passwd| awk -F: '{print $1}'
+ +
+show all users
+ +
compgen -u
+ +
+show all groups
+ +
compgen -g
+ +
+show group of user
+ +
groups username
+ +
+show uid, gid, group of user
+ +
id username
+ +
+check if it's root
+ +
if [ $(id -u) -ne 0 ];then
+    echo "You are not root!"
+    exit;
+fi
+ +

//'id -u' outputs 0 if you are root

+ +
+find out CPU information
+ +
more /proc/cpuinfo
+ +

or

+ +
lscpu
+ +
+set quota for user (e.g. disk soft limit: 120586240; hard limit: 125829120)
+ +
setquota username 120586240 125829120 0 0 /home
+ +
+show quota for user
+ +
quota -v username
+ +
+fork bomb
+ +
:(){ :|:& };:
+ +

//dont try this at home

+ +
+check user login
+ +
lastlog
+ +
+edit path for all users
+ +
joe /etc/environment
+ +

//edit this file

+ +
+show running processes
+ +
ps aux
+ +
+find maximum number of processes
+ +
cat /proc/sys/kernel/pid_max
+ +
+show and set user limit
+ +
ulimit -u
+ +

=-=-=-=-=-A lot more coming!! =-=-=-=-=-=-=-=-=-=waitwait-=-=-=-=-=-=-=-=-=-

-
+ + diff --git a/javascripts/respond.js b/javascripts/respond.js new file mode 100644 index 0000000..76bc260 --- /dev/null +++ b/javascripts/respond.js @@ -0,0 +1,779 @@ +if(typeof Object.create!=="function"){ +Object.create=function(o){ +function F(){ +}; +F.prototype=o; +return new F(); +}; +} +var ua={toString:function(){ +return navigator.userAgent; +},test:function(s){ +return this.toString().toLowerCase().indexOf(s.toLowerCase())>-1; +}}; +ua.version=(ua.toString().toLowerCase().match(/[\s\S]+(?:rv|it|ra|ie)[\/: ]([\d.]+)/)||[])[1]; +ua.webkit=ua.test("webkit"); +ua.gecko=ua.test("gecko")&&!ua.webkit; +ua.opera=ua.test("opera"); +ua.ie=ua.test("msie")&&!ua.opera; +ua.ie6=ua.ie&&document.compatMode&&typeof document.documentElement.style.maxHeight==="undefined"; +ua.ie7=ua.ie&&document.documentElement&&typeof document.documentElement.style.maxHeight!=="undefined"&&typeof XDomainRequest==="undefined"; +ua.ie8=ua.ie&&typeof XDomainRequest!=="undefined"; +var domReady=function(){ +var _1=[]; +var _2=function(){ +if(!arguments.callee.done){ +arguments.callee.done=true; +for(var i=0;i<_1.length;i++){ +_1[i](); +} +} +}; +if(document.addEventListener){ +document.addEventListener("DOMContentLoaded",_2,false); +} +if(ua.ie){ +(function(){ +try{ +document.documentElement.doScroll("left"); +} +catch(e){ +setTimeout(arguments.callee,50); +return; +} +_2(); +})(); +document.onreadystatechange=function(){ +if(document.readyState==="complete"){ +document.onreadystatechange=null; +_2(); +} +}; +} +if(ua.webkit&&document.readyState){ +(function(){ +if(document.readyState!=="loading"){ +_2(); +}else{ +setTimeout(arguments.callee,10); +} +})(); +} +window.onload=_2; +return function(fn){ +if(typeof fn==="function"){ +_1[_1.length]=fn; +} +return fn; +}; +}(); +var cssHelper=function(){ +var _3={BLOCKS:/[^\s{][^{]*\{(?:[^{}]*\{[^{}]*\}[^{}]*|[^{}]*)*\}/g,BLOCKS_INSIDE:/[^\s{][^{]*\{[^{}]*\}/g,DECLARATIONS:/[a-zA-Z\-]+[^;]*:[^;]+;/g,RELATIVE_URLS:/url\(['"]?([^\/\)'"][^:\)'"]+)['"]?\)/g,REDUNDANT_COMPONENTS:/(?:\/\*([^*\\\\]|\*(?!\/))+\*\/|@import[^;]+;)/g,REDUNDANT_WHITESPACE:/\s*(,|:|;|\{|\})\s*/g,MORE_WHITESPACE:/\s{2,}/g,FINAL_SEMICOLONS:/;\}/g,NOT_WHITESPACE:/\S+/g}; +var _4,_5=false; +var _6=[]; +var _7=function(fn){ +if(typeof fn==="function"){ +_6[_6.length]=fn; +} +}; +var _8=function(){ +for(var i=0;i<_6.length;i++){ +_6[i](_4); +} +}; +var _9={}; +var _a=function(n,v){ +if(_9[n]){ +var _b=_9[n].listeners; +if(_b){ +for(var i=0;i<_b.length;i++){ +_b[i](v); +} +} +} +}; +var _c=function(_d,_e,_f){ +if(ua.ie&&!window.XMLHttpRequest){ +window.XMLHttpRequest=function(){ +return new ActiveXObject("Microsoft.XMLHTTP"); +}; +} +if(!XMLHttpRequest){ +return ""; +} +var r=new XMLHttpRequest(); +try{ +r.open("get",_d,true); +r.setRequestHeader("X_REQUESTED_WITH","XMLHttpRequest"); +} +catch(e){ +_f(); +return; +} +var _10=false; +setTimeout(function(){ +_10=true; +},5000); +document.documentElement.style.cursor="progress"; +r.onreadystatechange=function(){ +if(r.readyState===4&&!_10){ +if(!r.status&&location.protocol==="file:"||(r.status>=200&&r.status<300)||r.status===304||navigator.userAgent.indexOf("Safari")>-1&&typeof r.status==="undefined"){ +_e(r.responseText); +}else{ +_f(); +} +document.documentElement.style.cursor=""; +r=null; +} +}; +r.send(""); +}; +var _11=function(_12){ +_12=_12.replace(_3.REDUNDANT_COMPONENTS,""); +_12=_12.replace(_3.REDUNDANT_WHITESPACE,"$1"); +_12=_12.replace(_3.MORE_WHITESPACE," "); +_12=_12.replace(_3.FINAL_SEMICOLONS,"}"); +return _12; +}; +var 
_13={mediaQueryList:function(s){ +var o={}; +var idx=s.indexOf("{"); +var lt=s.substring(0,idx); +s=s.substring(idx+1,s.length-1); +var mqs=[],rs=[]; +var qts=lt.toLowerCase().substring(7).split(","); +for(var i=0;i-1&&_23.href&&_23.href.length!==0&&!_23.disabled){ +_1f[_1f.length]=_23; +} +} +if(_1f.length>0){ +var c=0; +var _24=function(){ +c++; +if(c===_1f.length){ +_20(); +} +}; +var _25=function(_26){ +var _27=_26.href; +_c(_27,function(_28){ +_28=_11(_28).replace(_3.RELATIVE_URLS,"url("+_27.substring(0,_27.lastIndexOf("/"))+"/$1)"); +_26.cssHelperText=_28; +_24(); +},_24); +}; +for(i=0;i<_1f.length;i++){ +_25(_1f[i]); +} +}else{ +_20(); +} +}; +var _29={mediaQueryLists:"array",rules:"array",selectors:"object",declarations:"array",properties:"object"}; +var _2a={mediaQueryLists:null,rules:null,selectors:null,declarations:null,properties:null}; +var _2b=function(_2c,v){ +if(_2a[_2c]!==null){ +if(_29[_2c]==="array"){ +return (_2a[_2c]=_2a[_2c].concat(v)); +}else{ +var c=_2a[_2c]; +for(var n in v){ +if(v.hasOwnProperty(n)){ +if(!c[n]){ +c[n]=v[n]; +}else{ +c[n]=c[n].concat(v[n]); +} +} +} +return c; +} +} +}; +var _2d=function(_2e){ +_2a[_2e]=(_29[_2e]==="array")?[]:{}; +for(var i=0;i<_4.length;i++){ +_2b(_2e,_4[i].cssHelperParsed[_2e]); +} +return _2a[_2e]; +}; +domReady(function(){ +var els=document.body.getElementsByTagName("*"); +for(var i=0;i=_44)||(max&&_46<_44)||(!min&&!max&&_46===_44)); +}else{ +return false; +} +}else{ +return _46>0; +} +}else{ +if("device-height"===_41.substring(l-13,l)){ +_47=screen.height; +if(_42!==null){ +if(_43==="length"){ +return ((min&&_47>=_44)||(max&&_47<_44)||(!min&&!max&&_47===_44)); +}else{ +return false; +} +}else{ +return _47>0; +} +}else{ +if("width"===_41.substring(l-5,l)){ +_46=document.documentElement.clientWidth||document.body.clientWidth; +if(_42!==null){ +if(_43==="length"){ +return ((min&&_46>=_44)||(max&&_46<_44)||(!min&&!max&&_46===_44)); +}else{ +return false; +} +}else{ +return _46>0; +} +}else{ +if("height"===_41.substring(l-6,l)){ +_47=document.documentElement.clientHeight||document.body.clientHeight; +if(_42!==null){ +if(_43==="length"){ +return ((min&&_47>=_44)||(max&&_47<_44)||(!min&&!max&&_47===_44)); +}else{ +return false; +} +}else{ +return _47>0; +} +}else{ +if("device-aspect-ratio"===_41.substring(l-19,l)){ +return _43==="aspect-ratio"&&screen.width*_44[1]===screen.height*_44[0]; +}else{ +if("color-index"===_41.substring(l-11,l)){ +var _48=Math.pow(2,screen.colorDepth); +if(_42!==null){ +if(_43==="absolute"){ +return ((min&&_48>=_44)||(max&&_48<_44)||(!min&&!max&&_48===_44)); +}else{ +return false; +} +}else{ +return _48>0; +} +}else{ +if("color"===_41.substring(l-5,l)){ +var _49=screen.colorDepth; +if(_42!==null){ +if(_43==="absolute"){ +return ((min&&_49>=_44)||(max&&_49<_44)||(!min&&!max&&_49===_44)); +}else{ +return false; +} +}else{ +return _49>0; +} +}else{ +if("resolution"===_41.substring(l-10,l)){ +var res; +if(_45==="dpcm"){ +res=_3d("1cm"); +}else{ +res=_3d("1in"); +} +if(_42!==null){ +if(_43==="resolution"){ +return ((min&&res>=_44)||(max&&res<_44)||(!min&&!max&&res===_44)); +}else{ +return false; +} +}else{ +return res>0; +} +}else{ +return false; +} +} +} +} +} +} +} +} +}; +var _4a=function(mq){ +var _4b=mq.getValid(); +var _4c=mq.getExpressions(); +var l=_4c.length; +if(l>0){ +for(var i=0;i0){ +s[c++]=","; +} +s[c++]=n; +} +} +if(s.length>0){ +_39[_39.length]=cssHelper.addStyle("@media "+s.join("")+"{"+mql.getCssText()+"}",false); +} +}; +var _4e=function(_4f){ +for(var i=0;i<_4f.length;i++){ +_4d(_4f[i]); +} 
+if(ua.ie){ +document.documentElement.style.display="block"; +setTimeout(function(){ +document.documentElement.style.display=""; +},0); +setTimeout(function(){ +cssHelper.broadcast("cssMediaQueriesTested"); +},100); +}else{ +cssHelper.broadcast("cssMediaQueriesTested"); +} +}; +var _50=function(){ +for(var i=0;i<_39.length;i++){ +cssHelper.removeStyle(_39[i]); +} +_39=[]; +cssHelper.mediaQueryLists(_4e); +}; +var _51=0; +var _52=function(){ +var _53=cssHelper.getViewportWidth(); +var _54=cssHelper.getViewportHeight(); +if(ua.ie){ +var el=document.createElement("div"); +el.style.position="absolute"; +el.style.top="-9999em"; +el.style.overflow="scroll"; +document.body.appendChild(el); +_51=el.offsetWidth-el.clientWidth; +document.body.removeChild(el); +} +var _55; +var _56=function(){ +var vpw=cssHelper.getViewportWidth(); +var vph=cssHelper.getViewportHeight(); +if(Math.abs(vpw-_53)>_51||Math.abs(vph-_54)>_51){ +_53=vpw; +_54=vph; +clearTimeout(_55); +_55=setTimeout(function(){ +if(!_3a()){ +_50(); +}else{ +cssHelper.broadcast("cssMediaQueriesTested"); +} +},500); +} +}; +window.onresize=function(){ +var x=window.onresize||function(){ +}; +return function(){ +x(); +_56(); +}; +}(); +}; +var _57=document.documentElement; +_57.style.marginLeft="-32767px"; +setTimeout(function(){ +_57.style.marginTop=""; +},20000); +return function(){ +if(!_3a()){ +cssHelper.addListener("newStyleParsed",function(el){ +_4e(el.cssHelperParsed.mediaQueryLists); +}); +cssHelper.addListener("cssMediaQueriesTested",function(){ +if(ua.ie){ +_57.style.width="1px"; +} +setTimeout(function(){ +_57.style.width=""; +_57.style.marginLeft=""; +},0); +cssHelper.removeListener("cssMediaQueriesTested",arguments.callee); +}); +_3c(); +_50(); +}else{ +_57.style.marginLeft=""; +} +_52(); +}; +}()); +try{ +document.execCommand("BackgroundImageCache",false,true); +} +catch(e){ +} + diff --git a/params.json b/params.json index 8f92110..d935516 100644 --- a/params.json +++ b/params.json @@ -1,6 +1,6 @@ { "name": "Bash-oneliner", "tagline": "Bash Oneliner learning station. This blog will focus on bash commands for parsing biological data, which are tsv files(tab-separated values); some of the commands are for Ubuntu system maintaining. I apologize that there won't be any citation of the code, but they are probably from dear Google and Stackoverflow. Not all the code here are oneliner (if the ';' counts..). English and bash are not my first language, so... correct me anytime, tks!!", - "body": "### Welcome to GitHub Pages.\r\nThis automatic page generator is the easiest way to create beautiful pages for all of your projects. Author your page content here [using GitHub Flavored Markdown](https://guides.github.com/features/mastering-markdown/), select a template crafted by a designer, and publish. After your page is generated, you can check out the new `gh-pages` branch locally. If you’re using GitHub Desktop, simply sync your repository and you’ll see the new branch.\r\n\r\n### Designer Templates\r\nWe’ve crafted some handsome templates for you to use. Go ahead and click 'Continue to layouts' to browse through them. You can easily go back to edit your page before publishing. After publishing your page, you can revisit the page generator and switch to another theme. Your Page content will be preserved.\r\n\r\n### Creating pages manually\r\nIf you prefer to not use the automatic generator, push a branch named `gh-pages` to your repository to create a page manually. 
In addition to supporting regular HTML content, GitHub Pages support Jekyll, a simple, blog aware static site generator. Jekyll makes it easy to create site-wide headers and footers without having to copy them across every page. It also offers intelligent blog support and other advanced templating features.\r\n\r\n### Authors and Contributors\r\nYou can @mention a GitHub username to generate a link to their profile. The resulting `` element will link to the contributor’s GitHub Profile. For example: In 2007, Chris Wanstrath (@defunkt), PJ Hyett (@pjhyett), and Tom Preston-Werner (@mojombo) founded GitHub.\r\n\r\n### Support or Contact\r\nHaving trouble with Pages? Check out our [documentation](https://help.github.com/pages) or [contact support](https://github.com/contact) and we’ll help you sort it out.\r\n", + "body": "##Handy Bash oneliner commands for tsv file editing\r\n\r\n- [Grep](#grep)\r\n- [Sed](#sed)\r\n- [Awk](#awk)\r\n- [Xargs](#xargs)\r\n- [Find](#find)\r\n- [Loops](#loops)\r\n- [Download](#download)\r\n- [Random](#random)\r\n- [Others](#others)\r\n- [System](#system)\r\n\r\n##Grep\r\n#####extract text bewteen words (e.g. w1,w2)\r\n \r\n```bash\r\ngrep -o -P '(?<=w1).*(?=w2)'\r\n```\r\n\r\n#####grep lines without word (e.g. bbo)\r\n \r\n```bash\r\ngrep -v bbo\r\n```\r\n\r\n#####grep and count (e.g. bbo)\r\n \r\n```bash\r\ngrep -c bbo filename\r\n```\r\n\r\n#####insensitive grep (e.g. bbo/BBO/Bbo)\r\n \r\n```bash\r\ngrep -i \"bbo\" filename \r\n```\r\n\r\n#####count occurrence (e.g. three times a line count three times)\r\n \r\n```bash\r\ngrep -o bbo filename \r\n```\r\n\r\n#####COLOR the match (e.g. bbo)!\r\n \r\n```bash\r\ngrep --color bbo filename \r\n```\r\n\r\n#####grep search all files in a directory(e.g. bbo)\r\n \r\n```bash\r\ngrep -R bbo /path/to/directory \r\n```\r\n\r\nor\r\n \r\n```bash\r\ngrep -r bbo /path/to/directory \r\n```\r\n#####search all files in directory, only output file names with matches(e.g. bbo)\r\n \r\n```bash\r\ngrep -Rh bbo /path/to/directory \r\n```\r\nor\r\n```bash \r\ngrep -rh bbo /path/to/directory \r\n```\r\n\r\n#####grep OR (e.g. A or B or C or D)\r\n \r\n```\r\ngrep 'A\\|B\\|C\\|D' \r\n```\r\n\r\n#####grep AND (e.g. A and B)\r\n \r\n```bash\r\ngrep 'A.*B' \r\n```\r\n\r\n#####grep all content of a fileA from fileB\r\n \r\n```bash\r\ngrep -f fileA fileB \r\n```\r\n\r\n#####grep a tab\r\n \r\n```bash\r\ngrep $'\\t' \r\n```\r\n\r\n##Sed\r\n[[back to top](#handy-bash-oneliner-commands-for-tsv-file-editing)]\r\n\r\n#####remove lines with word (e.g. bbo)\r\n \r\n```bash\r\nsed \"/bbo/d\" filename\r\n```\r\n\r\n#####edit infile (edit and save)\r\n \r\n```bash\r\nsed -i \"/bbo/d\" filename\r\n```\r\n#####when using variable (e.g. $i), use double quotes \" \"\r\ne.g. add >$i to the first line (to make a FASTA file)\r\n \r\n```bash\r\nsed \"1i >$i\" \r\n```\r\n//notice the double quotes! in other examples, you can use a single quote, but here, no way! \r\n//'1i' means insert to first line\r\n\r\n\r\n#####delete empty lines\r\n \r\n```bash\r\nsed '/^\\s*$/d' \r\n``` \r\nor\r\n \r\n```bash\r\nsed 's/^$/d' \r\n```\r\n#####delete last line\r\n \r\n```bash\r\nsed '$d' \r\n```\r\n\r\n#####add \\n every nth character (e.g. every 4th character)\r\n \r\n```bash\r\nsed 's/.\\{4\\}/&\\n/g' \r\n```\r\n\r\n#####substitution (e.g. replace A by B)\r\n \r\n```bash\r\nsed 's/A/B/g' filename \r\n```\r\n#####select lines start with string (e.g. bbo)\r\n \r\n```bash\r\nsed -n '/^@S/p' \r\n```\r\n#####delete lines with string (e.g. 
bbo)\r\n \r\n```bash\r\nsed '/bbo/d' filename \r\n```\r\n#####print every nth lines\r\n \r\n```bash\r\nsed -n '0~3p' filename\r\n```\r\n//catch 0: start; 3: step\r\n\r\n\r\n#####print every odd # lines\r\n \r\n```bash\r\nsed -n '1~2p' \r\n```\r\n#####print every third line including the first line\r\n \r\n```bash\r\nsed -n '1p;0~3p' \r\n```\r\n#####remove leading whitespace and tabs\r\n \r\n```bash\r\nsed -e 's/^[ \\t]*//'\r\n```\r\n//notice a whitespace before '\\t'!!\r\n\r\n\r\n#####remove only leading whitespace\r\n \r\n```bash\r\nsed 's/ *//'\r\n```\r\n//notice a whitespace before '*'!!\r\n\r\n\r\n#####remove ending commas\r\n \r\n```bash\r\nsed 's/,$//g' \r\n```\r\n#####add a column to the end\r\n \r\n```bash\r\nsed \"s/$/\\t$i/\"\r\n```\r\n//$i is the valuable you want to add\r\ne.g. add the filename to every last column of the file\r\n \r\n```bash\r\nfor i in $(ls);do sed -i \"s/$/\\t$i/\" $i;done\r\n```\r\n\r\n#####remove newline\\ nextline\r\n \r\n```bash\r\nsed ':a;N;$!ba;s/\\n//g'\r\n```\r\n\r\n#####print a number of lines (e.g. line 10th to line 33 rd)\r\n \r\n```bash\r\nsed -n '10,33p' file\\' file.txt\r\n```\r\n\r\n#####count all files\r\n\r\n```bash\r\nls |xargs -n1 wc -l\r\n```\r\n\r\n#####to filter txt to a single line\r\n\r\n```bash\r\nls -l| xargs\r\n```\r\n\r\n#####count files within directories\r\n\r\n```bash\r\necho mso{1..8}|xargs -n1 bash -c 'echo -n \"$1:\"; ls -la \"$1\"| grep -w 74 |wc -l' --\r\n```\r\n// \"--\" signals the end of options and display further option processing\r\n\r\n\r\n#####download dependencies files and install (e.g. requirements.txt)\r\n \r\n```bash\r\ncat requirements.txt| xargs -n1 sudo pip install\r\n```\r\n\r\n#####count lines in all file, also count total lines\r\n\r\n```bash\r\nls|xargs wc -l\r\n```\r\n\r\n\r\n\r\n##Find \r\n[[back to top](#handy-bash-oneliner-commands-for-tsv-file-editing)]\r\n#####list all sub directory/file in the current directory\r\n \r\n```bash\r\nfind .\r\n```\r\n\r\n#####list all files under the current directory\r\n \r\n```bash\r\nfind . -type f\r\n```\r\n\r\n#####list all directories under the current directory\r\n \r\n```bash\r\nfind . -type d\r\n```\r\n\r\n#####edit all files under current directory (e.g. replace 'www' with 'ww')\r\n \r\n```bash\r\nfind . name '*.php' -exec sed -i 's/www/w/g' {} \\;\r\n```\r\nif no subdirectory\r\n \r\n```bash\r\nreplace \"www\" \"w\" -- *\r\n```\r\n//a space before *\r\n\r\n\r\n#####find and output only filename (e.g. \"mso\")\r\n \r\n```bash\r\nfind mso*/ -name M* -printf \"%f\\n\"\r\n```\r\n\r\n#####find and delete file with size less than (e.g. 74 byte)\r\n \r\n```bash\r\nfind . -name \"*.mso\" -size -74c -delete\r\n```\r\n//M for MB, etc\r\n\r\n\r\n##Loops\r\n[[back to top](#handy-bash-oneliner-commands-for-tsv-file-editing)]\r\n#####while loop, column subtraction of a file (e.g. 
a 3 columns file)\r\n \r\n```bash\r\nwhile read a b c; do echo $(($c-$b));done < <(head filename)\r\n```\r\n//there is a space between the two '<'s\r\n\r\n#####while loop, sum up column subtraction\r\n \r\n```bash\r\ni=0; while read a b c; do ((i+=$c-$b)); echo $i; done < <(head filename)\r\n```\r\n\r\n#####if loop\r\n \r\n```bash\r\nif (($j==$u+2))\r\n```\r\n//(( )) use for arithmetic operation\r\n \r\n```bash\r\nif [[$age >21]]\r\n```\r\n//[[ ]] use for comparison\r\n\r\n#####for loop\r\n \r\n```bash\r\nfor i in $(ls); do echo file $i;done\r\n```\r\n\r\n\r\n##Download\r\n[[back to top](#handy-bash-oneliner-commands-for-tsv-file-editing)]\r\n#####download all from a page\r\n \r\n```bash\r\nwget -r -l1 -H -t1 -nd -N -np -A mp3 -e robots=off http://example.com\r\n```\r\n//-r: recursive and download all links on page\r\n\r\n//-l1: only one level link\r\n\r\n//-H: span host, visit other hosts\r\n\r\n//-t1: numbers of retries\r\n\r\n//-nd: don't make new directories, download to here\r\n\r\n//-N: turn on timestamp\r\n\r\n//-nd: no parent\r\n\r\n//-A: type (seperate by ,)\r\n\r\n//-e robots=off: ignore the robots.txt file which stop wget from crashing the site, sorry example.com\r\n\r\n\r\n\r\n##Random\r\n[[back to top](#handy-bash-oneliner-commands-for-tsv-file-editing)]\r\n#####random pick 100 lines from a file\r\n \r\n```bash\r\nshuf -n 100 filename\r\n```\r\n\r\n#####random order (lucky draw)\r\n \r\n```bash\r\nfor i in a b c d e; do echo $i; done| shuf\r\n```\r\n\r\n#####echo series of random numbers between a range (e.g. generate 15 random numbers from 0-10)\r\n \r\n```bash\r\nshuf -i 0-10 -n 15\r\n```\r\n\r\n#####echo a random number\r\n \r\n```bash\r\necho $RANDOM\r\n```\r\n\r\n#####random from 0-9\r\n \r\n```bash\r\necho $((RANDOM % 10))\r\n```\r\n\r\n#####random from 1-10\r\n \r\n```bash\r\necho $(((RANDOM %10)+1))\r\n```\r\n\r\n\r\n\r\n##Others\r\n[[back to top](#handy-bash-oneliner-commands-for-tsv-file-editing)]\r\n#####remove newline / nextline\r\n \r\n```bash\r\ntr --delete '\\n' output.txt\r\n```\r\n\r\n#####replace newline\r\n \r\n```bash\r\ntr '\\n' ' ' log &\r\n```\r\n\r\nor\r\n\r\n```bash\r\n(command here) 2>&1| tee logfile\r\n```\r\n\r\nor\r\n\r\n```bash\r\n(command here) 2>&1 >>outfile\r\n```\r\n//0: standard input; 1: standard output; 2: standard error\r\n\r\n\r\n#####send mail\r\n \r\n```bash\r\necho 'heres the content'| mail -A 'file.txt' -s 'mail.subject' me@gmail.com\r\n```\r\n//use -a flag to set send from (-a \"From: some@mail.tld\")\r\n\r\n\r\n#####.xls to csv\r\n \r\n```bash\r\nxls2csv filename\r\n```\r\n#####append to file (e.g. hihi)\r\n \r\n```bash\r\necho 'hihi' >>filename\r\n```\r\n\r\n#####make BEEP sound\r\n \r\n```bash\r\nspeaker-test -t sine -f 1000 -l1\r\n```\r\n\r\n#####set beep duration\r\n \r\n```bash\r\n(speaker-test -t sine -f 1000) & pid=$!;sleep 0.1s;kill -9 $pid\r\n```\r\n\r\n#####history edit/ delete\r\n \r\n```bash\r\n~/.bash_history\r\n```\r\nor\r\n\r\n```bash\r\nhistory -d [line_number]\r\n```\r\n\r\n#####get last history/record filename\r\n \r\n```bash\r\nhead !$\r\n```\r\n\r\n#####clean screen\r\n \r\n```bash\r\nclear\r\n```\r\n\r\nor\r\n\r\n```bash\r\nCtrl+l\r\n```\r\n\r\n#####send data to last edited file\r\n \r\n```bash\r\ncat /directory/to/file\r\necho 100>!$\r\n```\r\n\r\n#####run history number (e.g. 53)\r\n \r\n```bash\r\n!53\r\n```\r\n\r\n#####run last command\r\n \r\n```bash\r\n!!\r\n```\r\n\r\n#####run last command that began with (e.g. 
cat filename)\r\n \r\n```bash\r\n!cat\r\n```\r\n\r\nor\r\n\r\n```bash\r\n!c\r\n```\r\n//run cat filename again\r\n\r\n\r\n#####extract .xf\r\n \r\n 1.unxz filename.tar.xz\r\n 2.tar -xf filename.tar\r\n\r\n#####install python package\r\n \r\n```bash\r\npip install packagename\r\n```\r\n\r\n\r\n#####Download file if necessary\r\n \r\n```bash\r\ndata=file.txt\r\nurl=http://www.example.com/$data\r\nif [! -s $data];then\r\n echo \"downloading test data...\"\r\n wget $url\r\nfi\r\n```\r\n\r\n#####wget to a filename (when a long name)\r\n \r\n```bash\r\nwget -O filename \"http://example.com\"\r\n```\r\n\r\n#####wget files to a folder\r\n\r\n```bash\r\nwget -P /path/to/directory \"http://example.com\"\r\n```\r\n\r\n#####delete current bash command\r\n \r\n```bash\r\nCtrl+U\r\n```\r\n\r\nor\r\n\r\n```bash\r\nCtrl+C\r\n```\r\n\r\nor\r\n\r\n```bash\r\nAlt+Shift+#\r\n```\r\n//to make it to history\r\n\r\n\r\n#####add things to history (e.g. \"addmetohistory\")\r\n \r\n```bash\r\n#addmetodistory\r\n```\r\n//just add a \"#\" before~~\r\n\r\n\r\n#####sleep awhile or wait for a moment or schedule a job\r\n \r\n```bash\r\nsleep 5;echo hi\r\n```\r\n\r\n#####count the time for executing a command\r\n \r\n```bash\r\ntime echo hi\r\n```\r\n\r\n#####backup with rsync\r\n \r\n```bash\r\nrsync -av filename filename.bak\r\nrsync -av directory directory.bak\r\nrsync -av --ignore_existing directory/ directory.bak\r\nrsync -av --update directory directory.bak\r\n```\r\n//skip files that are newer on receiver (i prefer this one!)\r\n\r\n\r\n#####make all directories at one time!\r\n \r\n```bash\r\nmkdir -p project/{lib/ext,bin,src,doc/{html,info,pdf},demo/stat}\r\n```\r\n//-p: make parent directory\r\n//this will create project/doc/html/; project/doc/info; project/lib/ext ,etc\r\n\r\n\r\n#####run command only if another command returns zero exit status (well done)\r\n \r\n```bash\r\ncd tmp/ && tar xvf ~/a.tar\r\n```\r\n\r\n#####run command only if another command returns non-zero exit status (not finish)\r\n \r\n```bash\r\ncd tmp/a/b/c ||mkdir -p tmp/a/b/c\r\n```\r\n\r\n#####extract to a path\r\n \r\n```bash\r\ntar xvf -C /path/to/directory filename.gz\r\n```\r\n\r\n#####use backslash \"\\\" to break long command\r\n \r\n```bash\r\ncd tmp/a/b/c \\\r\n> || \\\r\n>mkdir -p tmp/a/b/c\r\n```\r\n\r\n#####get pwd\r\n \r\n```bash\r\nVAR=$PWD; cd ~; tar xvf -C $VAR file.tar\r\n```\r\n//PWD need to be capital letter\r\n\r\n#####list file type of file (e.g. /tmp/)\r\n \r\n```bash\r\nfile /tmp/\r\n```\r\n//tmp/: directory\r\n\r\n\r\n#####bash script\r\n \r\n```bash\r\n#!/bin/bash\r\nfile=${1#*.}\r\n```\r\n//remove string before a \".\"\r\n\r\n```bash\r\nfile=${1%.*}\r\n```\r\n//remove string after a \".\"\r\n\r\n#####search from history\r\n \r\n```bash\r\nCtrl+r\r\n```\r\n\r\n#####python simple HTTP Server\r\n \r\n```bash\r\npython -m SimpleHTTPServer\r\n```\r\n\r\n#####variables\r\n \r\n```bash\r\n{i/a/,}\r\n```\r\ne.g. replace all\r\n```bash\r\n{i//a/,}\r\n```\r\n//for variable i, replace all 'a' with a comma\r\n\r\n#####read user input\r\n \r\n```bash\r\nread input\r\necho $input\r\n```\r\n\r\n#####generate sequence 1-10\r\n \r\n```bash\r\nseq 10\r\n```\r\n\r\n#####sum up input list (e.g. seq 10)\r\n \r\n```bash\r\nseq 10|paste -sd+|bc\r\n```\r\n\r\n#####find average of input list/file\r\n \r\n```bash\r\ni=`wc -l filename|cut -d ' ' -f1`; cat filename| echo \"scale=2;(`paste -sd+`)/\"$i|bc\r\n```\r\n\r\n#####generate all combination (e.g. 
1,2)\r\n \r\n```bash\r\necho {1,2}{1,2}\r\n```\r\n//1 1, 1 2, 2 1, 2 2\r\n\r\n#####generate all combination (e.g. A,T,C,G)\r\n \r\n```bash\r\nset = {A,T,C,G}\r\ngroup= 5\r\nfor ((i=0; i<$group; i++));do\r\n repetition=$set$repetition;done\r\n bash -c \"echo \"$repetition\"\"\r\n```\r\n\r\n#####read file content to variable\r\n```bash\r\nfoo=$(