"The Linux Gazette...making Linux just a little more fun!"


(?) The Answer Guy (!)


By James T. Dennis, [email protected]
LinuxCare, http://www.linuxcare.com/


(?) Cross Comment: 2cent Tips

From Flavio Poletti on Mon, 06 Mar 2000

[Recently I've been going through 2cent Tips looking for ideas to fill out a "Linux Tips" fortune file. So this comment stems from a comment I made to the author of one script. You'll have to peruse the 2cent Tips to find out more]

(?) What an honor - am I answering the *Answer Guy*? ;)

(!) That's me.

(?) Flavio,

I was curious about your choice of PERL for this script. So I wrote a version in sh. It's attached.

I thank you very much for your interest. My choice of PERL for this script is very simple - I found a good Perl manual to download (Perl 5 by Example), and I have found nothing like it for sh/bash. I am able to write very simple sh scripts, but I must confess I've never used `read' or `case'. I hope that your script will help me to understand something.

(!) That's O.K. PERL is a much better job skill. Putting "sh" on your resume won't attract much attention. Putting PERL there, and being able to back it up with any reasonable code will get you work all over.
sh is like the "Don Rickles" of scripting languages. You'll get no respect. If you claim it as a skill people will assume that the most you can do with it is write little five-line abbreviation macros with a cd or pushd, a cp (to make a backup of some file), and a launch of the "real command" that you're using.
awk gets about the same appreciation.

(?) Do you know a good (and easy to read) source of information for learning sh programming?

(!) I like the O'Reilly "Learning the Bash Shell" and I'm also quite partial to Mark G. Sobell's "Practical Guide to Linux" (or any edition of Practical Guide to UNIX).
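Since `read' and `case' came up above, here's a purely illustrative taste of the two together (the input line is made up; any real script would read from a command's output):

```shell
#!/bin/sh
# Minimal taste of `read` and `case` together: read a line from a pipe,
# then branch on its contents using shell glob patterns.
printf 'yes\n' | while read answer; do
    case "$answer" in
        y|yes) echo "affirmative" ;;
        n|no)  echo "negative" ;;
        *)     echo "unrecognized: $answer" ;;
    esac
done
```

Note that the whole `while read ... done` body runs in a subshell when fed from a pipe, so variables set inside it don't survive past the loop in most Bourne-family shells.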

(?) Mine is about ten lines (30%) shorter.

And it seems to do much more!

(!) Well, it does a little more --- just because I wanted it to look a little different for my needs.

(?) The main difference is that I pipe my netstat output right into my main loop, and my fuser output into an inner loop. This simplifies the code path somewhat, and eliminates the need for any arrays and hashes.

PERL has native data structures like arrays and hashes - so why not use them?

(!) It's a philosophical issue.
Since we were using the same external programs (netstat and fuser), the question was how to use them most efficiently. The most efficient way to use your data is in a loop, as you receive it.
When you use the backtick operators to fill a hash, you allocate a bunch of memory, introduce latency (while collecting the data), and are left with the task of separately sifting back through the data to process it.
When you use the pipeline, each bit of output data is used as it becomes available and discarded when it's no longer required. Both processes (the netstat which is feeding our loop, and the bit of shell code which is reading it in a subshell) are able to work in parallel (particularly if we have an SMP system). The pipe blocks whenever its buffer is full and the other end isn't ready.
(So, when the receiving end is forking off a copy of fuser to process the current line, the netstat process is blocked; and when the netstat process is blocked waiting on system calls or on file I/O (parsing through /proc entries), the subshell just sleeps because it's on a blocking read()).
This is the way that UNIX was designed to work.
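The pattern being described might be sketched like this. (A sketch only: the real script parses actual netstat and fuser output, which varies by system, so `printf` stands in for netstat here.)

```shell
#!/bin/sh
# Streaming pattern: each line of the producer's output is consumed as it
# arrives in the read loop; no arrays, no temp files.
# `printf` stands in for `netstat -tln` output in this sketch.
printf '%s\n' 'tcp 0 0 0.0.0.0:22' 'tcp 0 0 0.0.0.0:80' |
while read proto recvq sendq laddr; do
    port=${laddr##*:}        # strip everything up to the last ':'
    # In the real script, an inner loop over fuser's output would run
    # here, one fuser invocation per line of netstat output.
    echo "port $port is listening"
done
```

While the loop body is busy (say, forking a helper for the current line), the producer fills the pipe buffer and then blocks, which is exactly the parallelism described above.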

(?) I'd really like to see more people learn this shell scripting pattern. It allows one to do quite a bit with shell without requiring awk, PERL etc. and without making lots of nasty temp files or using large variable/values.

You are surely right, but... I've been able to find (and effectively use) a PERL manual (the one cited above) and an awk manual (Effective AWK Programming), and nothing more than a bash man page!

I'll surely use your advice to get rid of temp files in the future... I felt they were a sign of weakness in my scripts!
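On getting rid of temp files: the usual move is to replace the stage-then-reread cycle with a single pipe. A contrived before/after sketch (the `produce` function here is just a stand-in for some data-generating command):

```shell
#!/bin/sh
# Contrived stand-in for some data-producing command:
produce() { printf '%s\n' 'alpha LISTEN' 'beta CLOSED'; }

# Temp-file style: works, but leaves litter and adds a cleanup step.
produce > /tmp/out.$$
grep LISTEN /tmp/out.$$
rm -f /tmp/out.$$

# Pipe style: same result, no file on disk, and both ends run in parallel.
produce | grep LISTEN
```

Both print the same matching line; the pipe version never touches the filesystem and can't leave a stale file behind if the script is interrupted.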

Thanks and bye, Flavio.

(!) Maybe I'll write one some day. Until then look at Kernighan and Pike's "The UNIX Programming Environment" and the two I mentioned up above.
More generally, you can find lots of good books by looking at Eric S. Raymond's "Linux Reading List HOWTO" http://www.linuxdoc.org/HOWTO/Reading-List-HOWTO.html
Though I note that he didn't add my book to that yet ;)
Looking at a couple of other categories:
Under TeX/LaTeX I noticed that Eric isn't listing:
"A Guide to LaTeX2e" by Helmut Kopka and Patrick W. Daly (Addison Wesley, 1993)
and under C programming I have to recommend:
"Beginnning Linux Programming 2nd Ed." Richard Stones and Neil Matthew (Wrox Press, 1996)
... This latter one has improved considerably since the first edition. I just bought it today.
[Eric, You could toss those into the next version of the reading HOWTO --- along with a link to the SAGE (USENIX) "SysAdmin's Bookshelf" at:
SAGE - The Sysadmin's Bookshelf
http://www.usenix.org/sage/sysadmins/books/booklist.html ]
[Flavio, With your permission I'll publish this in next month's column, with a link to your tip (February, tip #2) and any follow up that goes into next month's Tips].


Copyright © 2000, James T. Dennis
Published in The Linux Gazette Issue 52 April 2000
HTML transformation by Heather Stern of Tuxtops, Inc., http://www.tuxtops.com/



