bzip2(.bz2): Burrows-Wheeler block-sorting text compression algorithm plus Huffman coding; the most space-efficient of the tools listed here
bzip2 mydata.doc
bzip2 *.jpg

# interesting flags:
#   -d --decompress   force decompression
#   -z --compress     force compression
#   -k --keep         keep (don't delete) input files
#   -f --force        overwrite existing output files
#   -q --quiet        suppress noncritical error messages
#   -v --verbose      be verbose (a 2nd -v gives more)
#   -s --small        use less memory (at most 2500k)
#   -1 .. -9          set block size to 100k .. 900k
#   --fast            alias for -1
#   --best            alias for -9
decompression:
bzip2 -d mydata.doc.bz2
bunzip2 mydata.doc.bz2
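A quick sanity check of the round trip with `-k` (the file name `demo.txt` is a made-up placeholder):

```shell
# minimal bzip2 round trip; demo.txt is a hypothetical file name
printf 'hello compression\n' > demo.txt
bzip2 -kf demo.txt                      # compress, keep the original, overwrite any old .bz2
bzip2 -dc demo.txt.bz2 > roundtrip.txt  # -c sends the decompressed data to stdout
cmp demo.txt roundtrip.txt && echo "round trip OK"
```

Because of `-k`, both `demo.txt` and `demo.txt.bz2` exist afterwards; without it bzip2 deletes the input file.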
gzip(.gz): Lempel-Ziv coding (LZ77)
gzip mydata.doc
gzip *.jpg
decompression:
gzip -d mydata.doc.gz
gunzip mydata.doc.gz
compress a file and count the bytes:
uglifyjs yourlib.js | gzip -9f | wc -c
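The same pipe-to-`wc -c` trick works for comparing compression levels; a sketch on synthetic repetitive input (`sample.txt` is made up here, and the exact byte counts vary by gzip version):

```shell
# compare gzip level 1 vs level 9 on the same repetitive input
head -c 100000 /dev/zero | tr '\0' 'a' > sample.txt   # 100 kB of 'a'
fast=$(gzip -1 -c sample.txt | wc -c)
small=$(gzip -9 -c sample.txt | wc -c)
echo "level 1: $fast bytes, level 9: $small bytes"
```

`-c` writes the compressed stream to stdout, so `sample.txt` itself is left untouched.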
zip(.zip): very portable to Windows.
zip -r filename.zip files
zip -r9 filename.zip folder
zip mydata.zip mydata.doc
zip data.zip *.doc
decompression:
unzip filename.zip
unzip data.zip resume.doc
# create a gzip-compressed tar file
tar -cvzf filename.tar.gz files/directories
# extract
tar -xvf foo.tar
# -z: use gzip compression
# -j: use bzip2 compression
tar -zcvf data.tgz *.doc
tar -zcvf pics.tar.gz *.jpg *.png
tar -jcvf data.tbz2 *.doc
decompression:
tar -zxvf data.tgz
tar -zxvf pics.tar.gz *.jpg
tar -jxvf data.tbz2
list archive contents without extracting:
gzip -l mydata.doc.gz
unzip -l mydata.zip
tar -ztvf {.tar.gz}
tar -jtvf data.tbz2
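Putting the tar steps together, a minimal create/list/extract sketch (the names `docs/`, `out/`, and `docs.tgz` are hypothetical):

```shell
# create, list, and extract a .tar.gz; docs/ and out/ are made-up names
mkdir -p docs out
printf 'one\n' > docs/a.txt
tar -czf docs.tgz docs          # create the compressed archive
tar -tzf docs.tgz               # list contents without extracting
tar -xzf docs.tgz -C out        # extract into out/
cmp docs/a.txt out/docs/a.txt && echo "archive OK"
```

`-C` changes directory before extracting, which keeps the extracted copy from clobbering the original tree.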