5. Pipes and Filters

Let’s Get Started

Now that we know a few basic commands, we can look at the shell’s most powerful feature: the ease with which it lets us combine existing programs in new ways. We’ll start with a directory called molecules that contains six files describing some simple organic molecules. The .pdb extension indicates that these files are in Protein Data Bank format, a simple text format that specifies the type and position of each atom in the molecule.

Let’s go into that directory with cd and run the command wc *.pdb. wc is the “word count” command: it counts the number of lines, words, and bytes in files (in that order, left to right).

The * in *.pdb matches zero or more characters, so the shell turns *.pdb into a list of all .pdb files in the current directory:
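The original listing is missing from this copy; here is a small sketch of the idea, using a scratch directory and two made-up stand-in files (the /tmp path and file contents below are invented for illustration, not the lesson's real data):

```shell
# Create a scratch directory with two stand-in .pdb files.
mkdir -p /tmp/molecules-demo && cd /tmp/molecules-demo
seq 1 5 > ethane.pdb    # a 5-line file
seq 1 9 > methane.pdb   # a 9-line file

# The shell expands *.pdb to "ethane.pdb methane.pdb" before wc runs,
# so wc reports lines, words, and bytes per file, plus a total line.
wc *.pdb
```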

If we run wc -l instead of just wc, the output shows only the number of lines per file:
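Continuing with the same invented stand-in files, a sketch of the -l option:

```shell
mkdir -p /tmp/molecules-demo && cd /tmp/molecules-demo
seq 1 5 > ethane.pdb
seq 1 9 > methane.pdb

# With -l, wc prints only the line count: one row per file, plus a total.
wc -l *.pdb
```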

Why Isn’t It Doing Anything?

What happens if a command is supposed to process a file, but we don’t give it a filename? For example, if we type wc -l on its own and press Enter, nothing appears to happen: with no filenames, wc waits to read input from the keyboard (its standard input). Press Ctrl-C to escape back to the prompt.

We can also use -w to get only the number of words, or -c to get only the number of bytes (use -m for a character count).

Which of these files contains the fewest lines? It’s an easy question to answer when there are only six files, but what if there were 6000? Our first step toward a solution is to run the command:
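The command itself was lost from this copy; a plausible reconstruction, again using the invented stand-in files in /tmp:

```shell
mkdir -p /tmp/molecules-demo && cd /tmp/molecules-demo
seq 1 5 > ethane.pdb
seq 1 9 > methane.pdb

# Redirect wc's output into a file instead of the screen:
wc -l *.pdb > lengths.txt
```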

The > symbol redirects command output to a file instead of displaying it on the screen. This explains the lack of screen output; everything that wc would have printed goes into the lengths.txt file. If the file doesn’t exist, the shell creates it. If it does, it’s silently overwritten, so be cautious. ls lengths.txt confirms that the file exists:

We can now send the content of lengths.txt to the screen using cat lengths.txt. cat stands for “concatenate”: it prints the contents of files one after another. There’s only one file in this case, so cat just shows us what it contains:
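A sketch with the same invented files:

```shell
mkdir -p /tmp/molecules-demo && cd /tmp/molecules-demo
seq 1 5 > ethane.pdb
seq 1 9 > methane.pdb
wc -l *.pdb > lengths.txt

# cat prints the saved wc output back to the screen.
cat lengths.txt
```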

Output Page by Page

We’ll use cat in this lesson for convenience and consistency, but it has the disadvantage of dumping the whole file onto your screen. In practice, the more useful command is less, which you run with less lengths.txt. This displays a screenful of the file, and then stops. You can go forward one screenful by pressing the spacebar, or back one by pressing b. Press q to quit.

Now let’s use the sort command to sort the contents of lengths.txt.

We will use the -n option to specify that the sort is numerical instead of alphabetical. This does not change the file; instead, it sends the sorted result to the screen:
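A sketch with invented contents for lengths.txt (note that without -n, the string "14" would sort before "5"):

```shell
mkdir -p /tmp/molecules-demo && cd /tmp/molecules-demo
printf '9 methane.pdb\n5 ethane.pdb\n14 total\n' > lengths.txt

# -n sorts by numeric value rather than character by character,
# so 5 comes first and 14 comes last.
sort -n lengths.txt
```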

We can put the sorted list of lines in another temporary file called sorted-lengths.txt by putting > sorted-lengths.txt after the command, just as we used > lengths.txt to put the output of wc into lengths.txt. Once we’ve done that, we can run another command called head to get the first few lines in sorted-lengths.txt:
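A sketch of those two steps, with the same invented lengths.txt:

```shell
mkdir -p /tmp/molecules-demo && cd /tmp/molecules-demo
printf '9 methane.pdb\n5 ethane.pdb\n14 total\n' > lengths.txt

# Save the sorted result, then peek at the first line:
sort -n lengths.txt > sorted-lengths.txt
head -n 1 sorted-lengths.txt    # the smallest line count comes first
```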

Using -n 1 with head tells it that we only want the first line of the file; -n 20 would get the first 20, and so on. Since sorted-lengths.txt contains the lengths of our files ordered from least to greatest, the output of head must be the file with the fewest lines.

What Does >> Mean?

We have seen the use of >, but there is a similar operator, >>, which works slightly differently. We can explore the difference between the two by printing some strings with the echo command.
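A small sketch of the difference (the /tmp path and filename are made up):

```shell
mkdir -p /tmp/redir-demo && cd /tmp/redir-demo

echo hello > greetings.txt    # > overwrites the file each time
echo hello > greetings.txt
wc -l < greetings.txt         # still 1 line

echo hello >> greetings.txt   # >> appends to the file instead
wc -l < greetings.txt         # now 2 lines
```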

Appending Data

We have already met the head command, which prints lines from the start of a file. tail is similar, but prints lines from the end of a file instead.
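A sketch of the pair, using one invented stand-in file:

```shell
mkdir -p /tmp/molecules-demo && cd /tmp/molecules-demo
seq 1 9 > methane.pdb   # a 9-line file containing the numbers 1..9

head -n 2 methane.pdb   # first two lines: 1 and 2
tail -n 2 methane.pdb   # last two lines: 8 and 9
```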

If you think this is confusing, you’re in good company: even once you understand what wc, sort, and head do, all those intermediate files make it hard to follow what’s going on. We can make it easier to understand by running sort and head together:
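A sketch of the combined command, with the same invented lengths.txt:

```shell
mkdir -p /tmp/molecules-demo && cd /tmp/molecules-demo
printf '9 methane.pdb\n5 ethane.pdb\n14 total\n' > lengths.txt

# No intermediate file: sort's output flows straight into head.
sort -n lengths.txt | head -n 1
```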

The vertical bar, |, between the two commands is called a pipe. It tells the shell that we want to use the output of the command on the left as the input to the command on the right.

Nothing prevents us from chaining pipes consecutively: we can, for example, send the output of wc directly to sort, and then the resulting output to head. We first use a pipe to send the output of wc to sort:
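With the invented stand-in files, the first stage looks like this:

```shell
mkdir -p /tmp/molecules-demo && cd /tmp/molecules-demo
seq 1 5 > ethane.pdb
seq 1 9 > methane.pdb

# wc's line counts go straight into sort, smallest first.
wc -l *.pdb | sort -n
</antml>```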

And now we send the output of this pipe, through another pipe, to head, so that the full pipeline becomes:
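The full pipeline, sketched with the same invented files:

```shell
mkdir -p /tmp/molecules-demo && cd /tmp/molecules-demo
seq 1 5 > ethane.pdb
seq 1 9 > methane.pdb

# Count lines, sort numerically, and keep only the first row:
# the file with the fewest lines, with no temporary files at all.
wc -l *.pdb | sort -n | head -n 1
```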

The redirection and pipes used in the last few commands are illustrated below:

[Figure: Node anatomy]

Piping Commands Together

The idea of linking programs together is why Unix is so successful. Instead of creating enormous programs that do many different things, Unix programmers focus on creating lots of simple tools that each do one job well, and work well with each other. This programming model is called “pipes and filters”.

We’ve already seen pipes; a filter is a program like wc or sort that transforms a stream of input into a stream of output. Most Unix tools operate similarly: unless told to do otherwise, they read from standard input, do something with what they’ve read, and write to standard output.

Any program that reads lines of text from standard input and writes lines of text to standard output can be combined with every other program that behaves this way. You can and should write your programs this way, so that you and other people can put those programs into pipes.
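As a quick illustration, each stage below is a filter: sort reads the lines printf writes, and head reads the lines sort writes. None of the stages knows or cares what the others are:

```shell
# Three molecule names flow through two filters.
printf 'pentane\nethane\nmethane\n' | sort | head -n 2
```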

Pipe Construction

Which Pipe?

Nelle’s Pipeline: Checking Files

Nelle has run her samples through the assay machines and created 17 files in the north-pacific-gyre/2012-07-03 directory described earlier. As a quick sanity check, starting from her home directory, Nelle types:

The output is 18 lines that look like this:
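Nelle's command was lost from this copy; a reconstruction, using a made-up /tmp stand-in for her data directory with only a few files in the lesson's NENE…A/B naming pattern (real data would have 17 files, giving 17 rows plus the total):

```shell
# Stand-in for north-pacific-gyre/2012-07-03 (fake contents).
mkdir -p /tmp/north-pacific-gyre/2012-07-03
cd /tmp/north-pacific-gyre/2012-07-03
for f in NENE01729A NENE01729B NENE01736A; do seq 1 300 > "$f.txt"; done

wc -l *.txt   # one line per file, plus a total
```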

Now she types this:
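A reconstruction with the stand-in directory, including one deliberately short file so it floats to the top of the sorted output:

```shell
mkdir -p /tmp/north-pacific-gyre/2012-07-03
cd /tmp/north-pacific-gyre/2012-07-03
seq 1 240 > NENE02018B.txt    # the short file
for f in NENE01729A NENE01729B NENE01736A; do seq 1 300 > "$f.txt"; done

# Smallest line counts appear first.
wc -l *.txt | sort -n | head -n 5
```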

Whoops: one of the files is 60 lines shorter than the others. When she goes back and checks it, she sees that she did that assay at 8:00 on a Monday morning; someone was probably in using the machine over the weekend, and she forgot to reset it. Before re-running that sample, she checks to see if any files have too much data:
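A reconstruction with the stand-in directory; tail shows the largest counts (and the total) instead of the smallest:

```shell
mkdir -p /tmp/north-pacific-gyre/2012-07-03
cd /tmp/north-pacific-gyre/2012-07-03
for f in NENE01729A NENE01729B NENE01736A NENE01971Z; do seq 1 300 > "$f.txt"; done

# Largest line counts at the bottom, with the total last.
wc -l *.txt | sort -n | tail -n 5
```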

Those numbers look good — but what’s that ‘Z’ doing there in the third-to-last line? All of her samples should be marked ‘A’ or ‘B’; by convention, her lab uses ‘Z’ to indicate samples with missing information. To find others like it, she does this:
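A reconstruction with the stand-in directory, seeding two made-up ‘Z’ files so the wildcard has something to find:

```shell
mkdir -p /tmp/north-pacific-gyre/2012-07-03
cd /tmp/north-pacific-gyre/2012-07-03
for f in NENE01729A NENE01971Z NENE02040Z; do seq 1 300 > "$f.txt"; done

# List every file whose name ends in Z.
ls *Z.txt
```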

When she checks the log on her laptop, she finds that no depth was recorded for those two samples. Since it’s too late to get the information now, she decides to exclude those files from her analysis. Rather than deleting them with rm, she may want to do some analysis later in which depth is irrelevant, so to be safe she will select only the good files, using the wildcard expression *[AB].txt.

As always, the * matches any number of characters; the expression [AB] matches either an ‘A’ or a ‘B’, so this matches all the valid data files she has.
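A sketch with the stand-in directory, showing that the character class skips the ‘Z’ file:

```shell
mkdir -p /tmp/north-pacific-gyre/2012-07-03
cd /tmp/north-pacific-gyre/2012-07-03
for f in NENE01729A NENE01729B NENE01971Z; do seq 1 300 > "$f.txt"; done

# [AB] matches a single 'A' or 'B', so only the valid files are listed.
ls *[AB].txt
```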