What are all these file formats? How can I view them? Are they convertible? Where is the editor, or do I create them from scripts?
This is the standard GNU gzip format, which is open and efficient.
This is the traditional Unix compression. Many people prefer gzip, which makes files about 25% smaller than compress can manage.
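A quick round trip, to see the tool in action (the file name /tmp/demo.txt is just for illustration):

```shell
# Create a throwaway file, compress it, then restore it.
seq 1 10000 > /tmp/demo.txt
gzip -9 /tmp/demo.txt          # best compression; replaces it with /tmp/demo.txt.gz
ls -l /tmp/demo.txt.gz         # see how small it got
gunzip /tmp/demo.txt.gz        # restores the original /tmp/demo.txt
```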
This is the standard Unix tar file (tape archive). Most people use this to package up a subdirectory of files into one file.
tar archives are useful because they retain the original file modification dates, rather than giving everything today's date (cpio does the same).
This is the same as .tar.gz, i.e. a compressed tar archive.
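A minimal round trip with GNU tar (directory and file names invented for the example):

```shell
# Pack a directory, list the archive, and unpack it elsewhere.
mkdir -p /tmp/pkg/sub
echo hello > /tmp/pkg/sub/a.txt
tar czf /tmp/pkg.tgz -C /tmp pkg        # c=create, z=gzip, f=archive file
tar tzf /tmp/pkg.tgz                    # t=list contents without extracting
mkdir -p /tmp/unpack
tar xzf /tmp/pkg.tgz -C /tmp/unpack     # x=extract into /tmp/unpack
```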
The easiest way to view a .tgz file is to use mc (mc-3.1), and press Return over the filename.tgz file. Then browse inside the file. E.g. with the prep.ai.mit.edu = GNU = CDROM-disk3, browse to /cdrom/gnu/gnuplot.tgz, then browse inside to find ./gnuplot/docs/latextut/tutorial.tex.
View this file with F3, then copy it to /tmp using the F5 key. You will use it later to see how tex works.
Unlike tar, there is no standard name.ext for cpio archives.
cpio is an SVR3 / SVR4 sort of utility, but GNU supports it with its own cpio utility. Some people like it because it can recover from broken tapes, but I never had much luck when this happened to me. cpio is very good at holding /dev/ nodes and symbolic links.
I use cpio to duplicate directories on the disk, eg
FROM=/usr/local
DEST=/hdc2/local
cd $FROM || exit $?
find . -print | cpio -pvdm $DEST
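Besides the pass-through (-p) mode above, cpio can write an archive to a file and read it back; a sketch along the same lines, with invented paths:

```shell
# Make a small tree to archive (paths invented for the example).
mkdir -p /tmp/src/d
echo hi > /tmp/src/d/f.txt

# Write the tree into a cpio archive ...
cd /tmp/src
find . -print | cpio -ov > /tmp/backup.cpio   # -o: create an archive on stdout

# ... and restore it somewhere else.
mkdir -p /tmp/restore
cd /tmp/restore
cpio -idvm < /tmp/backup.cpio   # -i: extract, -d: make dirs, -m: keep mtimes
```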
A .shar file is a shell script that expands into several files. Beware of running these blindly, as you will be giving a trojan a free ride. Fortunately, being source, trojans get spotted quickly, and kicked off reputable systems.
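Since a shar is just a shell script, you can read it before trusting it, and unpack it in a scratch directory as an unprivileged user. Here is a tiny hand-made one (real shars are generated by the shar utility, and are more elaborate):

```shell
# A miniature .shar: a script that writes out the files it carries.
cat > /tmp/demo.shar <<'EOF'
#!/bin/sh
cat > hello.txt <<'SHAR_EOF'
hello from the archive
SHAR_EOF
EOF

less /tmp/demo.shar        # read it first - look for anything suspicious
mkdir -p /tmp/scratch
cd /tmp/scratch
sh /tmp/demo.shar          # unpack it - never as root
cat hello.txt
```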
These are much like .tgz files, except they have more of a DOS history. Netscape uses a .zip file to hold its Java extensions. Probably because the authors have all the .zip tools they need to make good sense of this format, even if the content varies between systems.
This is a file format that RedHat have licensed to the world, to package source and binary packages, and install them.
The format is not immediately readable by simple utilities, but I did manage to install an .rpm package, by copying the /rpm/ directory from the CD, and running it on one of its own files. For the latest version of RPM look on http://www.redhat.com. You can then install .rpm packages on a slackware (or other) system. If not, raise a bug.
.rpm packages are usually pre-compiled, and pre-configured to make sense in a RedHat Linux environment. I.e. they are the original package plus a few diffs to make it install smoothly. Mixing Slackware and RedHat system files may cause problems that neither side will help you with. Most of the files are "effectively the same" (e.g. Slackware "ls" vs RedHat "ls"), but there are different approaches to system configuration, especially that part which is meant to be done by the user. E.g. /etc/rc.d, and network configuration files.
However, if you keep to categories of application, you should be OK. E.g. installing scilab.rpm on a Slackware system might work - if you aren't the first to attempt it.
.srpm packages are the source, used to build .rpm's.
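Once you have the rpm tool itself, the basic operations look like this (the package file name is invented):

```shell
rpm -qpl scilab-4.0.rpm    # query: list the files a package would install
rpm -i scilab-4.0.rpm      # install it (as root)
rpm -qa                    # list every package currently installed
```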
These have been pre-formatted by a word processor into formats that are ready to be printed by a PostScript printer, or are in a portable format that gets the correct font selections.
PostScript can be viewed in X11 using ghostview or gs. It can also be converted to other formats (e.g. epson-dot-matrix) using gs.
PostScript is a full-blown language, capable of calling programs and reading files. Always use the -dSAFER option, to prevent trojans taking control of your system just because you viewed a .ps file.
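For example, a batch conversion to a bitmap with the safety option on (the device and file names are only illustrative; gs -h lists the devices your build actually supports):

```shell
# Render the page(s) of a PostScript file to portable bitmaps, with -dSAFER on.
printf '%%!PS\nshowpage\n' > /tmp/blank.ps   # a trivial one-page file to render
gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=pbmraw \
   -r75 -sOutputFile=/tmp/page%d.pbm /tmp/blank.ps
```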
.dvi files come from TeX. They are like PostScript files but smaller. Use the xdvi command to view them, and dvips to convert them to PostScript.
OK .ps and .dvi are online.
These are man pages for section (1). When installed, use the man or xman command to find and view them. They get formatted by groff using the -man macros:
groff -T ascii -man FILE.1
.me files are formatted using a command like nroff and the '.me' macros:
nroff -T ascii -me filename.me | less
nroff -T ascii -mandoc filename.man | less

Except of course read.me!
These are either totally plain, or mostly plain files, that need no formatting to view or edit.
The ez word processor creates .ez files for text and sheets. Use the ez word processor to view them, though they are almost plain text. The AUIS system can be found in /cdrom/slackwar/contrib and others.
There are plenty of graphics file formats, and I am well confused by them all. Some come from other machines, and may be less used.
Some swiss-army-knife utilities handle several different formats, and can convert between them. Try xpaint, xloadimage, xv, netscape, ...
These are X11 bitmaps, with N colours. You can view them with xpaint or xloadimage.
There are several types of bitmap, depending on how many colours there are, how they are selected and paletted, and whose system you are working on!
All I know is that FAX files get reduced in resolution by a factor of 4 when they get converted from .g3 to .pgm. The same happens when a .pbm gets converted to .pgm. This conversion is 'lossy', and ugly (possibly due to a library bug).
These are compressed graphics files, with a proprietary algorithm. Now that Unisys is charging for use of the licence, alternatives are appearing.
These are editable text files, with embedded TeX commands. You run the TeX processor on them to produce .dvi files. Then you either run xdvi or dvips and ghostscript/ghostview. If TeX finds an error in the text (not uncommon), it usually stops to give you a chance to correct the errors. Either press CTRL-D or quit, and it will go into batch mode.
You might have to run TeX twice, to get the indexing and x-referencing correct. Here is a worked example:
mkdir /tmp/temp
cd /tmp/temp
cp /usr/doc/TeX/Gentle.tex .
tex Gentle.tex          # takes 20 seconds
tex Gentle.tex          # takes 20 seconds - repeat for index
dvips Gentle.dvi        # takes 15 seconds
ghostview Gentle.ps &   # takes 10 seconds
xdvi Gentle.dvi &       # takes 3 minutes + 10 seconds
xdvi Gentle.dvi &       # takes 10 seconds

Actually this is doing things twice; you don't have to do both every time. This is so that you can see the difference between ghostview and xdvi. I.e. they both take the same time, but xdvi may have to create some fonts. If they were already on the disk it's just as fast. Also xdvi looks a lot easier to read than ghostview - but this may be due to the fonts.
(( is there a way to configure ghostscript to use the fonts that xdvi uses? ))
Personally, I think there is a bug in ghostscript, but it's probably only visible at 75 dpi, not at 300 dpi (or even 600 dpi if you have such a printer). At every step run ls, to see what files get created, and how big they are.
The xdvi command took ages, because it had to create all the fonts in the right format. The second time, the fonts already existed, so it took a few seconds.
The ghostview took about the same time as xdvi - but only when both have the fonts already prepared.
Both xdvi and ghostview allow you to select which page to view. They both give you a magnifier (click or hold a mouse button over the text, try another button), so you can see any hard-to-read bits. There is another 'gv' ghostview viewer available on the net somewhere, but I haven't tried it yet. It might have other command options, but will probably have the same display quality (unless ...).
What do you think?
These are viewable using lynx, arena, mosaic, netscape, chimera or your favourite web browser.
They contain cross-references ("URLs") to other pages, which may be on the same machine or on different remote machines.
The HyperText Markup Language (HTML) standard keeps changing, as the heavy-weights try to jostle it out of popularity. The next thing is style-sheets, where you select header fonts in a site-wide config file. Just a sniff of creeping functionality, with a strange SGML base format.
These look like html, but may get compiled to TeX or compiled into documents. The tag names are completely different from HTML, and could define any set of name tags.
These are not usually on-line browsable.
These are files that get interpreted or compiled to be programs, or structured data.
filename.c is a 'C' program source, which gcc can compile into filename.o. Other names are .cc or .cpp for 'C++' source, which you can compile with g++. If the source follows all the standards, the program should be portable to many machines and environments.
filename.h is a 'C' program header file (or other language header file). It usually contains constant declarations, type declarations and function declarations (that later get defined in .c files).
Other languages use other filename extensions for their sources, such as .pas (Pascal), .bas (BASIC), .m4 (m4 macro text), .cpp, .hpp (the list goes on). Some languages might have interpreters; 'C' has a compiler.
When you compile a .c file, it usually becomes a .o (object) file.
This is machine code (binary executable for the specific architecture of the CPU and system), along with a symbol table so that the linker can find all the pieces by name.
You can't run a .o file directly, since it doesn't have a program header.
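A minimal sketch of the compile-then-link cycle (file names invented):

```shell
# Write, compile and link a one-line program.
cat > /tmp/hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello\n"); return 0; }
EOF
cd /tmp
gcc -c hello.c         # compile only: produces hello.o, which you cannot run
gcc -o hello hello.o   # link: adds the program header and startup code
./hello
```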
You can pre-link a group of .o files into one libname.a file or a shared libname.so (shared object) which can be linked into a running program, either statically or dynamically. Shared object libraries are usually left as-is to be linked at run time.
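A sketch of building both kinds of library with gcc and ar (file names invented):

```shell
cd /tmp
cat > foo.c <<'EOF'
int foo(void) { return 42; }
EOF
cat > main.c <<'EOF'
#include <stdio.h>
int foo(void);
int main(void) { printf("%d\n", foo()); return 0; }
EOF

# Static library: an ar archive of .o files, copied into the program at link time.
gcc -c foo.c main.c
ar rcs libfoo.a foo.o
gcc -o prog-static main.o -L. -lfoo

# Shared library: position-independent code, linked in at run time.
gcc -fPIC -c foo.c
gcc -shared -o libfoo.so foo.o
```

To run a program linked against a .so in a non-standard directory, the loader has to be told where it is, e.g. via LD_LIBRARY_PATH or -Wl,-rpath.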
When a program that uses shared libraries is started, it loads the header index of each shared library it uses. You can get a list of .so's that a program uses with:
# (command forgotten)

Some programs load the libraries later (as required), but most print a list of absent ones and die.
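The usual command for this job is ldd (on a glibc-style system; the exact output format varies):

```shell
ldd /bin/ls    # prints one line per shared library, with the path it resolved to
```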