LINUX GAZETTE

May 2002, Issue 78       Published by Linux Journal



Linux Gazette Staff and The Answer Gang

Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm], http://www.linuxgazette.net/
This page maintained by the Editor of Linux Gazette, [email protected]

Copyright © 1996-2002 Specialized Systems Consultants, Inc.

The Mailbag



HELP WANTED : Article Ideas

Send tech-support questions, Tips, answers and article ideas to The Answer Gang <[email protected]>. Other mail (including questions or comments about the Gazette itself) should go to <[email protected]>. All material sent to either of these addresses will be considered for publication in the next issue. Please send answers to the original querent too, so that s/he can get the answer without waiting for the next issue.

Unanswered questions might appear here. Questions with answers--or answers only--appear in The Answer Gang, 2-Cent Tips, or here, depending on their content. There is no guarantee that questions will ever be answered, especially if not related to Linux.

Before asking a question, please check the Linux Gazette FAQ (for questions about the Gazette) or The Answer Gang Knowledge Base (for questions about Linux) to see if it has been answered there.



Sync Netware client with Samba server

Tue, 23 Apr 2002 10:29:03 +0800
hwee ting (stuleeht from cwc.nus.edu.sg)

Is there any way that I can sync or save my Netware user password into the Samba password file, so that it will allow authorised users to map drives for future use?


Oriya keyboard for only one program?

Tue, 2 Apr 2002 08:30:44 +0100 (BST)
Girija Sarangi (girija_linux from yahoo.co.in)

Hi there

During development of a word processor in the Oriya language, I faced the following problem.

The character coding of Oriya lies between 128 and 255. Also, the keyboard mapping I need is different from the default keyboard mapping, which is US English. For typing and displaying those Oriya characters I need to change the keyboard mapping somehow. Could you please suggest any method available in GTK+/Gnome to change the default keyboard mapping (only inside the application)? I tried the same using the XChangeKeyboardMapping function, but it changed the keyboard mapping for the entire session, across all applications. Is there any alternative? Anticipating a response from you.

Regards
Girija


Lexmark Z22 Problem

Thu, 4 Apr 2002 19:02:56 -0600
ABrady (kcsmart from kc.rr.com)

I just hooked this printer up yesterday. Overall, it prints fine, with one exception. At the end of a page, both lights start flashing. I believe this means some sort of paper error, like a jam or something. After each page I have to reset the printer. BUT, this is only 100% reproducible when trying to print 2 or more pages. If printing a single page, sometimes it errors and sometimes it doesn't. This same printer worked fine connected to a Mac. The difference, beyond the obvious, is that the Mac was connected via USB and the Linux machine is running it in parallel. Any help appreciated, since it's pretty annoying to have to print a single page at a time.

Alan Brady


X, keybindings, and Kmail

Sun, 21 Apr 2002 13:47:27 -0400
Rodrigo P. Gomez (rpgomez from yahoo.com)

First of all, thanks to all the people who write for and maintain Linux Gazette!

Now to my question:

I want to configure the key 'F2' for Kmail so that when I'm composing e-mail and I press the 'F2' key, the phrase 'Kilroy was here' is inserted at the current cursor location. How do I do this?

I'm pretty sure it has something do with Xresources, but I don't know how to set it up.

TIA for any help you can give me on this.

-- Rod

P.s. I'm running Mandrake 8.2, with KDE 2.2.2 if that is at all relevant to the answer to the question.


bigpond pppoe

Mon, 22 Apr 2002 07:19:48 +0000
Hugh McPhee (h_mcphee from hotmail.com)

Hi

I am trying to get my PPPoE client to work. I am on the Debian distribution, version 2.2.18pre21, and I am using the Roaring Penguin client. There is a continual failure when I try to log in. The ppp0 interface comes up, but I cannot tell if the system is logged into a PPP server. I turned on debugging in pppd, but the system writes some garbage and nothing seems to happen. When the system tries to fire up, it tries a PPP connection down a serial line; where in the config file does it map the PPP connection to the eth0 interface? How is it possible to tell if the system is logged into a PPP server? When I run the pppconfig script I can't work out the 4 text parameters the script is after; I only know the user name and password. The pppd program inherently deals with a serial modem; how do I configure it to use my ethernet card?

My provider is Bigpond in Australia and they use pppoe for authentication.

My user name and password are both in the pap and chap secrets files; is there any need to repeat these in the ppp options file?

How can I manually debug a ppp session, can I enter all the ppp config parameters by hand?

A snippet of my syslog is pasted below. Can you help? I'm a real newbie!

See attached syslog.txt


Xinerama and large background images

Wed, 3 Apr 2002 23:34:23 +0200
Matthew H Ray (matthewhray from yahoo.com)

I've searched Google Groups and various mailing lists, and I've found several people with the same problem as me, but no solutions. I'm running XFree86 4.1.0.1 on several Debian Woody Xinerama two-monitor boxes (with several different combinations of video cards), and I can't find a way to display a background image centered across both screens with a single image. I can get an image to center on the left monitor, but the right monitor shows the same section of the graphic (the left half) on the right side of the screen.

 -------+--------
 |      |       |
 |    12|     12|
 |      |       |
 -------+--------

This is the behavior with xv, xloadimage, feh, gnome control center, gqview and other image viewers. The odd thing is that for applications that use transparency (gnome-terminal, xchat-gnome), the transparent image is correct, so the transparent right screen has the correct transparent image, but not the correct background image. I can send a screenshot showing this phenomenon if you like. Another odd behavior is that small tiled images tile across the middle correctly (both as background and in transparency). My question is: how do I make an image center across both screens correctly, like below?

 -------+--------
 |      |       |
 |    12|34     |
 |      |       |
 -------+--------

Thanks, Matthew H. Ray

Hi Matthew!
I once had enlightenment set up as xinerama and managed to get what you want: the image across both screens, and it was even with different resolutions on the screens: 1024x768 and 1280x1024. I managed to get it (IIRC) in the enlightenment background settings menu, by wildly fiddling with the sliders that are up/down and on the sides of the image in the upper part of the control window. But that was enlightenment, dunno how to do it in the other wm's ...
Robos
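Beyond the window-manager sliders described above, one general workaround (an editor's sketch, not from the thread, assuming ImageMagick and a root-window setter such as xloadimage or feh are installed) is to pre-compose an image the size of the whole combined desktop, so that no viewer's per-head centering logic ever runs:

```shell
# Compose "wide.png" centered on a canvas matching the combined
# desktop (e.g. 2048x768 for two 1024x768 heads side by side), then
# set the result tiled from the origin so per-screen centering
# never happens.
convert wide.png -gravity center -background black \
        -extent 2048x768 root.png
xloadimage -onroot root.png    # or: feh --bg-tile root.png
```

Adjust 2048x768 to your own combined geometry; mixed resolutions just mean a taller canvas with padding.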


pivot function for tft in linux

Thu, 11 Apr 2002 13:52:48 +0200
cdb (chris.deboer from rioned.org)

Hello, Has anyone a solution on how to use the pivot functionality for tft-screens under Linux ?

Greetings
Chris de Boer

Greetings, Chris; what's a "pivot functionality?" :) If you describe it, we might know it. [Ben]

You're not an old-time-enough Mac guy, I suspect, to recognize the term as generic, Ben: many current-generation LCD panels, notably including ViewSonic's, will pivot on their center axis, becoming vertical.

Even hearing the signal from the panel, much less figuring out how to remap everything to a new screen size, is likely a non-trivial problem...

A couple of quick Google searches didn't turn up anything suggestive...

Cheers, Jay R. Ashworth


GENERAL MAIL



Marketing question: which Linux User Groups are the biggest?

Fri, 12 Apr 2002 11:04:54 -0400
Katherine Gill (kgill from brodeur.com)

Attn: Mike Orr

Hello, Mike - We exchanged a few e-mails last year re/ Linux news. I'm wondering if you can point me in the right direction. How would I go about determining which US-based Linux user groups are the largest, or the most influential? Registries I'm finding online don't give me an idea of size. Are there, say, 5 or 10 groups that are known within the Linux community as being the "biggies"?

Thanks for your insight,
Katherine Gill

[Don Marti]
SVLUG: http://www.svlug.org
NYLUG: http://www.nylug.org
ALE: http://www.ale.org
NTLUG: http://www.ntlug.org
[Mike "Iron" Orr]
[Note to The Answer Gang: I'm forwarding this even though we don't usually answer marketing questions (the querent sends in press releases to News Bytes) because it asks a question I haven't seen covered elsewhere, a question that will be of interest to many readers.]
Fair 'nuff :) -- Heather
[Mike "Iron" Orr]
Hi, Katherine. I remember your name although I don't remember what we talked about. I don't know of any statistics on user group size. BALUG (http://www.balug.org) in San Francisco and SVLUG (http://www.svlug.org) in the Silicon Valley each used to get four hundred people per meeting as of a few years ago, but I don't know about now. Those two are pretty "influential" in terms of offering services and being activists. (E.g., SVLUG threw the Silicon Valley Tea Party (http://www.svlug.org/events/tea-party-199811.shtml) in honor of the release of Windows 98 [wasn't that nice of them?], and crashed Microsoft's big demo, "respectfully" wearing their penguin T-shirts and passing out Linux CDs.) But really, user groups in general don't influence Linux in any way. What they do is make Linux more accessible to their members.
Not sure where you're hoping to go with the statistics, but I question the value of having them; without setting values on "influence" I wonder who will care about the factoid, and your research efforts might have been spent elsewhere. Nonetheless I'll give it a poke.
As an SVLUG member I can add some comments, mostly general. At some time in the past we had an ongoing list-borne argument about who was "the largest LUG in the world". Members of two LUGs in entirely different parts of the world started to claim this, approximately simultaneously. Some of the grist included the more detailed question, what kind of members did you want to count? Those who attend almost every meeting and regret when they can't make it? The sum of those who attended any time last year (knowing that "the regulars" are of course duplicates)? Average meeting attendance? Oh but we have these regular installfests too and nobody counts there 'cuz we're busy. Oh but anybody on the general mailing list is really a member -- and boy, do we have a lot of lurkers. Then how did you want to count influence? And influencing whom?
As some started to get bitter about it, 'twas noted that a fight on some stupid label certainly wouldn't help the community at large, and both really changed over to "one of the largest". I forget who the other was; they're not in my region and I'm a busy soul, so I don't even recall if they were also in the U.S. Why? Because it wasn't as important as us all getting on with our Linux-y lives. See my past editorial about "the coin of the realm."
In the world of Linux "influence" is not based on size, but on the aggregate effort of individuals. An occasional individual is "big" in the sense of having an extra degree of talent -- and eventually heaps of extra respect, built up slowly over time -- a factor my SF-convention running friends at Baycon (www.baycon.com) call "people points". Just being a plugger and helping as one can, can stack them up eventually too, though.
Do you mean "influential" as in political efforts? Heh. Better to ask the Electronic Frontier Foundation (www.eff.org) instead. But they won't know so much about the OS preferred by any individual member, as about the bills that are out there planning to prey on every nerdly soul in the country (and many who aren't, as it starts taking its toll on the ability to use the internet). Oh yes, SVLUG members have been involved in a few rallies here and there. And I'd love to see a notable bloc of senators throw all their weight against the SSSCA because "statistics show" that the amassed geeks of the Silicon Valley are dead set against it. (One of these statistics being California among a limited batch of states that think Microsoft's "settlement" isn't worth a Bic pen.) And the DMCA, otherwise known as the "only big label companies whose policy about their copyrights is You Sure Better Not are allowed to protect theirs, you multitudes whose policy is My Grandma's Recipes Can Belong To Every Mom can go rot." And so on. There are hundreds of poisonous little bills a year and the politicos simply don't even visit the world we actually live in.
Well what the heck. Maybe a "top ten" statistic would actually help. Good luck, and wish us some while you're at it. -- Heather

Thanks, kindly!!


"Make Your Virtual Console Log In Automatically"

Mon, 15 Apr 2002 11:41:00 +0200
Stian Vading (stian.vading from telehuset.no)

As seen at http://linuxgazette.net/issue69/henderson.html

Thanks for writing this excellent article, but I wonder if you can give me any pointers on how to make X log in and autostart. I use a Debianized laptop, and having to log in every time I start up is quite unnecessary. I know Mandrake has this option, but I can't find info on how it's set up.

If this is not the right place to ask, I hope you can give me feedback on where to go as well.

Thanks again
Stian Vading

[K.-H.]
The article describes how to log in automatically at a text login. You can easily place "startx" in your ~/.profile and so automatically launch X and your standard window manager.
To use that qlogin, you will probably have to switch your Debian system from graphical login to text login.
Another possibility: it is possible to run more than one X server at once. You could let the system start the normal login screen, but at the same time run qlogin to log in automatically and start its own X server on a different virtual console (like vt8). If this happens later than gdm (or whatever Debian is using for graphical login), it will switch there automatically.
[John Karns]
Right you are - I forgot to consider the consequences of a ?dm boot configuration. The 'startx' approach indeed assumes a text-based console boot configuration.
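As a concrete sketch of the ~/.profile route mentioned above (an editor's illustration, not from the original article; the tty1 check is an assumption you may want to adjust):

```shell
# Append to ~/.profile: launch X automatically, but only on the
# first virtual console, and only if X isn't already running for us.
if [ "$(tty)" = "/dev/tty1" ] && [ -z "$DISPLAY" ]; then
    exec startx
fi
```

Using exec means logging out of X also logs you out of the console, so nobody is left at a live shell when you walk away.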


LG on CD

Mon, 15 Apr 2002 19:32:19 -0700
Vijaya Kittu M (vijaykittu from yahoo.com)

Can I distribute Linux Gazette (all issues that were available) on a CD-ROM that I was going to design with open source software?

Vijaya Kittu M

Yes. -- Mike


file://localhost/usr/share/doc/lg/issue64/lg_mail64.html

Wed, 24 Apr 2002 13:40:51 +0200
thetaworld (thetaworld from yahoo.com)

Hello,

I am not sure if you understand really the meaning of words:

etiquette and vulgar.

The Linux Gazette should conform to the first meaning and so exclude everything from the second meaning. Please refer to etiquette book from the nearest library.

Your public answer should never go to people like this one:

i just came across your website and was looking up bad clusters also.i've seen some of your replies to theses people and you seem pretty cocky. you sound like a total dick, like you dont have the time to just be nice and say geesh im sorry but you have to look elsewhere.

even if you want to personally "punish" him, even if he would be right or wrong.

It would be good behaviour if you simply correct those public pages and ban vulgar words.

Sincerely,
Marko

We censor words like f*ck and c*nt because LG is an all-ages publication. We do not use words like damn ourselves because several readers complained about it several years ago, but we don't think it's necessary to censor it from the occasional readers' mail. Obviously, people can differ over which words belong in the first category and which in the second.
In any case, that issue was published over a year ago and this is the only complaint we've received.
LG has never claimed to be the Emily Post of Linux. Our goal is to provide technical information and to make Linux more fun. Letters are published or not published according to their overall message, not whether they contain certain words. -- Mike
[Thomas Adam, the LG Weekend Mechanic]
I would just like to reiterate the comments that Mike Orr made in this e-mail by saying that the querent (that's the person who sent that "abuse" e-mail to us) never actually sent us an e-mail asking a question that pertained to Linux.
Indeed, many querents that e-mail us don't actually bother to check who they are really asking their question of.
Thus, we get a lot of Windows questions that have no relation to the subject matter contained within the Linux Gazette.
I do not consider the replies to peoples' e-mails rude in the least. Yes, harmless banter (Oh...hi Ben :-) does take place, but it is really only because the querent has asked a really stupid question, or it is because of the reasons already discussed.
For example, I could be really picky, and say that the phrase which you used:
"Please refer to etiquette book from the nearest library."
is nonsense. It is grammatically incorrect, since it should read:
"Please refer to ***an**** etiquette book from the nearest library"
but who am I to complain???
Should you have a question relating to Linux, then please send it to the list.
Regards, -- Thomas Adam
It may be noted that we no longer publish all messages that come to us, nor threads with no Linux (or LG related) content even if we do sometimes answer their questions successfully. -- Heather


GAZETTE MATTERS



2 Linux Questions

Wed, 03 Apr 2002 05:17:27
touheed mohammad (tjcoo17 from hotmail.com)

Dear Sir/Madam

I would like to know from you answers of 2 Questions:

Strictly speaking, these are publishing questions, not Linux questions, but I cheerfully answer questions about LG itself anyway. -- Heather

Is 'Linux Gazette' is itself a Jouranal(professional)?

No. It's a web zine produced by volunteers. -- Mike
Linux Gazette is hosted by SSC.com, the internet site of Specialized Systems Consultants, Inc, a professional publishing company which publishes cheat cards, maybe some books, but definitely the standard print magazines Linux Journal and Embedded Linux Journal.
Although mirrored in approx. 47 countries, carried in nearly every major distribution of Linux on the planet, and translated into multiple languages monthly, and although the license we use allows it, there is not to my knowledge anybody publishing print editions of the Linux Gazette on a regular basis. If you know of such, please let us know and we will be glad to give them a place of honor on the mirrors page: http://www.linuxgazette.net/mirrors.html
The staff and columnists of Linux Gazette are unpaid volunteers. Other than that, we try to provide a high quality 'zine. We have been published monthly since... (she steps aside to check the Table of Contents) ... September 96 (not all issues before that were monthly) and there have been a few mid-month special issues.
Some of our staff have attended large shows in a professional capacity as press. You'd have to look back through our editorials for the references.
Linux Gazette is a part of the Linux Documentation Project, a worldwide effort to provide usable documentation for many things one might want to do with Linux. -- Heather

Is 'Linux Knowledge Portal' is a professional Joural?

Hmm, hadn't heard of this one before; Google! reveals: http://www.linux-knowledge-portal.org -- Heather
I hadn't heard of it ... And since we do publish a professional journal (Linux Journal), I asked LJ's Editor, and he hasn't heard of it either.
I did a Google search and discovered that http://www.linux-knowledge-portal.org exists. It used to be the SuSE Linux Knowledge Portal. If you want to know whether it's a professional journal, why don't you ask them? It also depends on what you mean by "professional journal", and why you care.
If you want to send an article, advertisement or press release to Linux Journal, see http://www.linuxjournal.com/contact.php . -- Mike
An interesting-looking news site, a little ugly in lynx but definitely usable. Not hosted by SSC, so our hosts couldn't say anything as to its status. I'm not involved with it myself, so what follows is merely my opinion. I'm good at having opinions on things :D
It appears to depend heavily on automated retrievals from other sites which produce news in the Linux world, freshmeat and slashdot for instance. It seems professionally maintained to me though this is purely a gut reaction to usability at the site. The "Help" button mentions that it is themeable to your personal tastes if you let the site use cookies. Too bad there's no About section.
The question of whether a newspaper is a real newspaper if they have no investigative reporters and only read AP/Reuters, is a philosophical one beyond the scope of our site. But if you find an answer to that question, I'm sure the same answer applies here.
It is, however, fitting the common definition of "Portal" to a T. -- Heather

I would be grateful for your response.

Regards Touheed

Since I cannot determine your definition of "Journal" and "professional" in this context, I can't tell if either of these answer your question.
If your question is actually, "can I get paid for writing for Linux Gazette" I'm afraid your answer is no. Consider the Linux Journal instead.
If your question is actually, "can I use getting published in Linux Gazette as part of my Curriculum Vitae, resumé or to satisfy a publish-or-perish imperative at my academic institution?" the answer is almost certainly yes. You may want to consider our submission guidelines at: http://www.linuxgazette.net/faq/author.html
Use of a spell checker would be advised. The motto of our 'zine is "Making Linux a little more fun!" and so writing in a style readable by a lot of people is preferred.
As for Linux Knowledge Portal, perhaps you should ask their webmaster.
Hope you found that interesting; not sure if it's useful. -- Heather


Artwork Contest

Wed, 03 Apr 2002 05:17:27
Heather Stern (LG Technical Editor)

You still have time to submit artwork for the contest introduced in last month's Back Page.


This page edited and maintained by the Editors of Linux Gazette Copyright © 2002
Published in issue 78 of Linux Gazette May 2002
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/

More 2¢ Tips!


Send Linux Tips and Tricks to [email protected]


Tweaking the wily interface

Thu, 11 Apr 2002 00:39:48 -0400
Ben Okopnik (The Answer Gang)

Well, I found a solution - but that solution is part of a package that's interesting for more reasons than one. AccessControl, a package of useful tweaks designed to help folks with disabilities, had what I needed and more, along with a control panel that pulled it all together (of course, the individual utilities could still be used as stand-alone programs.) It's available at <http://cmos-eng.rehab.uiuc.edu/accessx/>.

Interestingly enough, Dan Linder (the author) says that a similar panel has been incorporated into X11R6.6 - a Very Good Thing, in my opinion. However, for those of us who'd like (or need) a bit more control over our keyboards, mice, display, etc. and are not willing to chase the bleeding edge, this package can be a useful tool in the sometimes confusing "battle of the interfaces".


Clipping URLs

Mon, 8 Apr 2002 13:02:20 -0400
Ben Okopnik (The Answer Gang)

After going back to my tried-and-true "icewm" (KDE was just too bloated for my 366MHz/64MB laptop), I gave a bit of thought to "URL clipping", which - if not over-automated - could be a handy feature indeed. Then, I remembered the "xclip" utility.

See attached clipurl.bash.txt

All that was left was tying "clipurl" to a key sequence in "icewm". To do that, I simply added the following line to my "~/.icewm/keys" file:

key "Alt+Ctrl+u" clipurl

Now, when I select a URL and want to launch it, I press "Alt-Ctrl-u", and - presto! A new Netscape window pops up (if Netscape is already running, it spawns a new one). It also works for files in your home directory, or "clips" that contain the entire path as well as the filename.

One of these days, I might write a little "chooser" for "ftp://", etc. URIs... but so far, it hasn't been a problem.
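The attached clipurl.bash.txt is not reproduced here, so purely as an illustration of the idea described above, a hypothetical script along these lines would read the X selection with xclip and hand it to the browser (the exact logic and the Netscape invocation are assumptions, not Ben's original):

```shell
#!/bin/sh
# Hypothetical sketch of a "clipurl"-style script (the original
# attachment is not shown): read the X primary selection, strip
# stray whitespace, and open the result in Netscape.
url=$(xclip -o | tr -d '[:space:]')
if [ -n "$url" ]; then
    # -remote reuses a running Netscape; fall back to starting one.
    netscape -remote "openURL($url,new-window)" 2>/dev/null \
        || netscape "$url" &
fi
```

Any browser with a remote-control option could be substituted on the last lines.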


w3m to access CUPS configuration utility

Thu, 18 Apr 2002 00:34:16 -0700
Steven R. Robertson (srobert from anv.net)

My tip concerns the CUPS configuration utility that is accessed through the web browser at http://localhost:631/

My default browser, galeon, takes a while to start on my machine. If all I want to do is run the CUPS interface to change a printer parameter, it's much quicker to call it up with the w3m web browser in an xterm. Though text-based, w3m even supports inline images. I put a "printer" button on my gnome panel that launches the following command when pressed:

"xterm -title CUPS -bg black -fg white -geometry 110x46+240+50  -fn 7x14 -e w3m http://localhost:631/printers"

Steve Robertson


Imagem linux_logo.h na Inicializacao do linux

Wed, 17 Apr 2002 10:40:44 +0100
Heather Stern (LG Technical Editor)
Translated by Pedro Medas (editor from gazetadolinux.com)
Question from Alfredo Guimaraes Neto (alfredogn from bol.com.br)

Hi,

I'm the editor of the 'Gazeta do Linux', the portuguese version of Linux Gazette. We received the attached email with a question for you from Alfredo Guimaraes Neto.

Cheers, Pedro Medas

Ola,
Gostaria de saber se voces teem um tutorial de como mudar a imagem de inicializacao do linux, aquele pinguinzinho com um copo de cerveja, pois tentei varias vezes e estou com dificuldades, quando mando compilar o kernel, da sempre erro nesse arquivo.

Grato, Alfredo

Hi,

I would like to know if you have a HOWTO on how to change the boot image of Linux, that little penguin with a beer cup. I have tried several times and I'm having difficulties: when I compile the kernel, it always gives an error in that file.

Greetings,
Alfredo


Thank you Pedro. I have an answer for him. If you would be kind enough to translate it back I think he'd appreciate it. -- Heather

Hi Heather,
Thanks for the answer to the 'Two Centavos Tip'. I will translate it for him.

If you need any more info or help feel free to say so.

bests,
Pedro

Not precisely a HOWTO, but actually useful instructions, are at the Linux Kernel Logo Patch Project: http://www.arnor.net/linuxlogo/download.html
Apparently you are not the only one in the world who is inclined to change the boot logo, but finds it hard to figure out where you would tweak the kernel code to use your own. So these people have a patch that makes it easy for everybody, not just kernel-hackers, to put in a new image.
I think they're looking for help on getting the non-intel platform logos right.
For my own part, I like it, I think I'll be using it soon myself!


partial answer to euro-symbol question

Mon, 1 Apr 2002 15:38:48 +0200 (MEST)
rene.leeuwen (rene from wxs.nl)

Hi Mailgang,

Concerning the question of Donal Rogers (rogers from clubi.ie) in the Mailbag of LG76 I found the following in: http://users.pandora.be/sim/euro/112/kde/kbdandbdf.html http://www.interface-ag.com/%7Ejsf/europunx_en.html

So: you may start a new xterminal screen with the Euro-enabled font:

xterm -fn -misc-fixed-medium-r-normal--13-120-75-75-C-70-ISO8859-15 &

In this terminal you can use the Euro symbol (e.g. echo -e "\244"). The question I cannot answer is: how do you force all of your applications to use this font (if indeed that is the best solution)? But I hope it gives you something to start working with.

-- groeten,
Rene van Leeuwen
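As a quick sanity check of the tip above (an editorial addition, using iconv rather than an xterm, and assuming a UTF-8 terminal and GNU iconv), you can confirm that byte 0244 octal really is the euro sign in ISO 8859-15:

```shell
# Byte 0xA4 (octal 244) is the euro sign in ISO 8859-15; convert it
# to UTF-8 so a modern terminal can display it.
printf '\244' | iconv -f ISO-8859-15 -t UTF-8
```

In plain ISO 8859-1 the same byte is the generic currency sign, which is exactly why the -15 variant of the font is needed.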


PPP

Sun, 7 Apr 2002 23:40:06 -0400
Ben Okopnik (The Answer Gang)
Question from cka74 (cka74 from yahoo.com)

Hi,

Please kindly advise me on PPP.

I'm using Red Hat 7.2; somehow I'm having difficulties getting the modem set up and recognized.

I compiled the new kernel with PPP add-on: Network Device Support -> (Y) PPP Support -> (Y) PPP Support for async serial ports

1. My external modem was connected to com1, so when I echo > /dev/ttyS0, the TR light on my modem comes on.

2. I ran setserial -g /dev/ttyS0; it shows: /dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4

OK - those numbers look fine, and the above test says that you're definitely on the right port.

I ensured that IRQ 4 is not used by any other program by checking cat /proc/interrupts.

3. When I ran wvdialconf /etc/wvdial.conf, the results showed that no modem was found on ttyS0.

I tested two external modems and the same problem arose, but of course both my modems (one of them was a MERZ 566) were in working condition.

Where did I go wrong?

As far as I can tell, you didn't; "wvdialconf" does not guarantee to detect all modems. Try using "minicom" to test it: do the serial port setup (it's pretty self-explanatory) and see if the modem will respond to simple commands like "AT" (it should come back with "OK"), "AT&V" (show the profiles), "ATDT5555555" (dial those numbers), etc. If it responds, just use those values in your "/etc/wvdial.conf", and everything will be fine.
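If minicom confirms the modem responds, a minimal /etc/wvdial.conf can be written by hand; every value below is a placeholder for illustration, not taken from this exchange:

```
; /etc/wvdial.conf -- placeholder values, substitute your own
[Dialer Defaults]
Modem = /dev/ttyS0
Baud = 115200
Init1 = ATZ
Phone = 5555555
Username = yourlogin
Password = yourpassword
```

With that in place, running wvdial with no arguments dials the Defaults section.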


Mouse control in X

Tue, 9 Apr 2002 03:40:43 -0400
Ben Okopnik (The Answer Gang)
xmodmap -e "pointer = 1 3 2 4 5"

If that works for you, you can place the expression (the part between the double quotes) in a ".Xmodmap" file in your home directory - or launch it directly by specifying the entire command line in your "~/.xinitrc" or "~/.xsession" file, depending on how you start your X session.


More on NET4 (from LG 77, 2 cent tips)

Wed, 3 Apr 2002 07:07:38 -0600
Brian Finn (brian from nacmsw.com)
replying to Chris Gianakopoulos' previous Tip

Hi,

In the 2 cent tips from LG 77, Chris Gianakopoulos writes:

"It is my belief that Net4, although it may be influenced by other protocol suites, was written from scratch (other than being derived from NET3.)"

I read recently in Linus Torvalds' "Just for Fun" (and again in Glyn Moody's "Rebel Code") that the TCP/IP implementation in Linux was written from scratch in order to avoid being hassled by AT&T, who owned UNIX at the time. I suppose AT&T was using their legion of lawyers to go after other UNIX implementors for royalties.

Thanks,
Brian Finn

Hi Brian,
That makes sense. I've read somewhere that the book, "The Design of the Unix Operating System" by Maurice Bach, influenced Linus Torvalds with respect to his Linux stuff. The book described the algorithms of System V Release 2. Of course, other stuff influenced him also. Thanks for that info, Brian.
Regards,
Chris G.


partition overlap = bad juju

Fri, 12 Apr 2002 01:30:51 -0400
Frank Brand (fbrand from uq.net.au)
replying to the Gang's previous Thread

Hi there Ben,

I am responding to you as you were first on the list of answer people:

I refer to "ntfs clobbered my ext3fs!!" in Linux Gazette 77 in which the questioner asks about a partition overlap.

I have encountered this twice. Both times it has been with a mixed Windows/Linux drive and using automated partitioning (ie Disk Druid or DiskDrake). Your questioner has exactly this scenario.

Now, I never use automated partitioning and I partition the drive using parted before I start the installation. I use primary partitions where possible and avoid mixed Windows/Linux disk setup.

I have experienced the overlapping partition syndrome and have found it very difficult to overcome. I have not been able to sort it out using fdisk, as neither the Linux nor the Windows fdisk can do anything with such corrupted partitions. I have only been able to recover using disk manager software, and this was a destructive recovery.

Regards
Frank Brand


Re: [LG 77] help wanted #1 private email

Wed, 3 Apr 2002 09:00:37 +0100
Neil Youngman (n.youngman from ntlworld.com)

Hi there

I would like to know how to set up my email on my home network with win98 outlook express and Linux.

I would like to set it up so that I can email anybody else in the house on the network and email via the internet when needed.

Thank You
Cheryl

There are a couple of LinuxWorld articles describing Nicholas Petreley's setup, which may be suitable for your requirements.
http://www.linuxworld.com/site-stories/2002/0318.ldap1.html
http://www.linuxworld.com/site-stories/2002/0401.ldap2.html


RPMs

Thu, 25 Apr 2002 07:06:04 +0100
Neil Youngman (n.youngman from ntlworld.com)
Question from Lord of Wolves (Lord0Wolves from aol.com)

Simple question: What is a ".RPM" and how do I use them? I assume they are a type of compressed file, but what do I need to use them?

RPMs are Red Hat Package Manager files. They contain the necessary files for a package, including setup scripts to be run pre- and post-install. They also have a list of dependencies, so they can determine whether you have installed the other packages on which this one depends.
Simple usage
rpm -Uvh pkg.rpm	# install package from pkg.rpm
rpm -Fvh pkg.rpm	# freshen (update) package from pkg.rpm
In both of the above examples, v means verbose output and h displays a hash-mark progress indicator.
For examples of other usages see
http://www.getlinuxonline.com/omp/distro/RedHat/rpm.htm
Neil Youngman
P.S. If you're asking questions of this list, please turn off MIME and HTML.
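A few more everyday rpm invocations round that out; a quick sketch (the package and file names here are just placeholders):

```shell
rpm -qpi pkg.rpm     # show a package's description before installing it
rpm -qpl pkg.rpm     # list the files it would install
rpm -qa              # list every package currently installed
rpm -qf /bin/ls      # which installed package owns this file?
rpm -e pkg           # uninstall (erase) an installed package
```

The -q (query) family is safe to experiment with; only options like -U, -F, -i and -e actually change the system.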


Re: [LG 77] help wanted #5 serial programming

Wed, 03 Apr 2002 22:54:48 -0500
Gary J. Wozniak (gjwoz from 110.net)

Hi,

Check out www.linuxtoys.com. This site has some great examples of how to read from and write to serial ports in Linux.

The Radio Shack DVM with RS-232 article <http://www.linuxtoys.com/dvm/dvm.html> was of particular use for me.

Good luck,
G Wozniak


Re: [LG 77] help wanted #5 serial programming

Wed, 10 Apr 2002 14:35:24 +0200
Matthias Prinke (matthias.prinke from sci-worx.com)

Hi,

check out the Serial Programming Guide for POSIX Compliant Operating Systems at http://www.easysw.com/~mike/serial -- you can find the answer in chapter 4.

Best regards,
Matthias


subsystem sftp

Mon, 8 Apr 2002 18:27:59 -0400
Ben Okopnik (The Answer Gang)
Question from Francoise Guilbault (guilbaultf from em.agr.ca)

Why, when starting the SSH client, does an sftp subsystem open up in the background by default?

Take a look at the last line of your "/etc/ssh/sshd_config":
Subsystem	sftp	/usr/lib/sftp-server
Also, from "man sshd":
Subsystem
   Configures an external subsystem (e.g., file transfer daemon).
   Arguments should be a subsystem name and a command to execute
   upon subsystem request.  The command sftp-server(8) implements
   the "sftp" file transfer subsystem.  By default no subsystems
   are defined.  Note that this option applies to protocol version 2
   only.
I find the next-to-the-last sentence very interesting... on Solaris, for example, it's defined but commented out. On Debian Linux, it's defined and enabled by default. I suppose you could turn it off by commenting out the line, but I'd make absolutely certain that I didn't have any need for it first.


some email related problems

Wed, 3 Apr 2002 18:40:17 +0100
Neil Youngman (n.youngman from ntlworld.com)
Question from amitava maity (amaity from vsnl.net)

Hello everybody,

I have emails with a MS-TNEF file and a humor.mp3.scr file as attachments waiting in my inbox. How do I view/listen to these attachments?

You really don't want to open humor.mp3.scr. That's the Badtrans virus! Fortunately, as a linux user you're immune :-)
See http://vil.nai.com/vil/content/v_99069.htm for more info.
Neil Youngman
As a general point, you should be immediately suspicious of anything which has two whole three-letter extensions (.jpg.pdf, .mp3.scr, and so on), especially when the second is one that might reasonably be auto-viewed: it's probably a virus. The same goes for MIME types which represent auto-view file types but which do not match the extension given on the attachment (e.g. audio/wav but the attachment says .jpg).
However, there are 4 or 5 different small utilities that will deal with a true "TNEF" attachment, easily found at freshmeat.net -- Heather


Linux Red Hat 6.2 Uninstallation

Fri, 12 Apr 2002 01:46:11 -0400
Ben Okopnik (The Answer Gang)
Question from Alok Garg (aalugarg from yahoo.com)

On Fri, Apr 12, 2002 at 06:02:39AM +0100, Alok Garg wrote:

Hello Sir,
I have 2 HDDs of 20 Gig each; on the primary drive I have WinNT and on the secondary I have Linux RH 6.2. I wanted to uninstall Linux from the system without affecting my data on WinNT. I wanted to move my secondary drive to another machine.

I'm sorry, but that's impossible. :) Removing Linux from your machine would utterly destroy (beyond any hope of recovery) the data on every WinNT machine in a 60-mile radius of where you are. Note that everybody will know exactly who is responsible: you'll be left in the center of a large charred circle. Even if you removed the HD with Linux and carried it off, as soon as you erased it, your NT would know.

It all happens magically, really.

(HINT: There's no magic. NT may be evil, but it does not watch your Linux drive and explode if anything changes.)

See <http://www.linuxgazette.net/tag/kb.html#uninstall> for tips on uninstalling Linux.


Make sure sshd is "always" there for you

Mon, 29 Apr 2002 19:16:33 -0700
James T. Dennis (The Answer Gang)

Make sure sshd is "always" there for you.

Using OpenSSH (circa 2.95 or later?) you can configure the sshd to run directly from your /etc/inittab under a "respawn" directive by adding the -D (don't detach) option like so:

# excerpt from /etc/inittab, near end
ss:12345:respawn:/usr/sbin/sshd -D

This will ensure that an ssh daemon process is always kept running even if the system experiences extreme conditions (such as OOM, out of memory, overcommitted memory) or a careless sysadmin's killall which kills the running daemon. So long as init can function it will keep an sshd running (just as it does with your existing getty processes).

This is particularly handy for systems that are co-located and which don't have (reliable) serial port console connections. It just might save that drive across town, or that frustrating, time consuming and embarrassing call to the colo staff, etc.
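If you add the respawn line while the system is up, init has to be told to re-read its configuration; a minimal sketch (note that sshd must not already be listening from a regular init script, or the respawned copy will exit because the port is busy):

```shell
telinit q   # ask init to re-read /etc/inittab ("kill -HUP 1" also works)
```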


Linux Journal Weekly News Notes tech tips


Python recursion limit

If Python's built-in recursion limit keeps your incredibly cool recursive function from working, you can temporarily set a different recursion limit with the sys module.

import sys

oldlimit = sys.getrecursionlimit()
sys.setrecursionlimit(len(big_hairy_list))
try:
    incredibly_cool_recursive_function(big_hairy_list)
finally:
    sys.setrecursionlimit(oldlimit)   # restore the old limit even on error


Ssh2 client to ssh1 server

If you have an account on a system where only your ssh1 key is installed in your authorized_keys file, you can force your ssh connection to use version 1 of the protocol with ssh -1 example.com.

Then you can use scp with the -1 option to transfer your ssh2 key there, so that you can use version 2 to connect from now on. Paranoid sysadmins are turning off version 1 access, so you should be using version 2 everywhere by now to be on the safe side.
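As a sketch of the whole sequence (the hostname and key filename here are examples; depending on your OpenSSH version, protocol-2 keys belong in either ~/.ssh/authorized_keys or ~/.ssh/authorized_keys2):

```shell
ssh -1 example.com                       # force protocol 1 for this login
scp -1 ~/.ssh/id_rsa.pub example.com:    # copy your ssh2 public key over
ssh -1 example.com \
    'cat id_rsa.pub >> ~/.ssh/authorized_keys2'
ssh example.com                          # protocol 2 should now work
```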



Making executables smaller

To make executables smaller, try running strip(1) with the options -R .comment -R .note. This removes the ".comment" and ".note" sections that the compiler and linker may have added during the build process.

(source: MontaVista Software's MontaVista Zone customer support site.)



Headphone volume control

If you're running your headphones straight out of your sound card's "Line out" jack, you might notice there's no volume control. Instead of trashing your ears or firing up an audio mixer every time you need to set the volume, just bind the commands


aumix -v+4 # crank up the volume!

and


aumix -v-4 # turn that crap down!

to two spare function keys. (In Sawfish, this is under the "Bindings" menu in the sawfish-ui program.) Presto--free and easy volume control straight from the keyboard.

There are also nifty little volume control applets for the KDE and GNOME taskbars, but why spend pixels on a common task when you have all those keys just sitting there?


This page edited and maintained by the Editors of Linux Gazette Copyright © 2002
Published in issue 78 of Linux Gazette May 2002
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/


(?) The Answer Gang (!)


By Jim Dennis, Ben Okopnik, Dan Wilder, Breen, Chris, and... (meet the Gang) ... the Editors of Linux Gazette... and You!
Send questions (or interesting answers) to The Answer Gang for possible publication (but read the guidelines first)


Contents:

¶: Greetings From Heather Stern
(!)Serial Console "buddy system"
(!)Watchdog daemon
(?)Future in Linux
(!)Dual boot systems made easy
(?)gigabit unhappy
(!)Adding seldom-used directories to your PATH
(!)Experience Installing SuSE Linux 8.0

(¶) Greetings from Heather Stern

Hi Mom!

(I couldn't resist)

Hello everyone and once again welcome to the world of The Answer Gang. We had around 500 messages come through, the peeve of the month seems to be a few people overdosing on their sense of humor, and in case anyone was curious... my printer works fine these days.

I'm sure some people are going through Spring Cleaning. In my case I'm cleaning up my hard drive. I got a much, much bigger one and used my new distro installation as an excuse to perform the reorganization at the same time. This effectively turned an afternoon's task into a couple of days of juggling bits and an occasional adventure throughout the month to correct one or another facet of the installation.

At this point all my virtual hosts work, and I've finally gotten over how much easier elm is than mutt because I'm successfully using hooks to make the silly thing much brighter about what folders to save things to. For my style of folder reading this is perfect! Now all I have to do is whap those "elm2mutt" people for writing a converter that doesn't work if elm is already gone and you only have the aliases left. Sigh.

In fact I'm planning to leap feet first into the new development cycle over at LNX-BBC.org. Nick has this cool new build system and when we're done the thing really will be able to make world on itself, I think.

I'm pleased to see that kernels are settling down to some pretty usable stuff. Soon I'll be able to trust it on ultrasparc and maybe update our production server. Meanwhile, a nice solid 2.2.x kernel for us, yes indeed.

That's one of the things I like best about Linux, actually. Nobody holds a gun to your head and says that you have to use the latest and bleed all over that bleeding edge. If your sound or your pcmcia card just doesn't work right under the new stuff - great, stick with what works. Userland is a separate thing; you can upgrade it in fairly small parts most of the time. Of course glibc is a tangled mess, but then, it pretty much always was...

Later this month (Memorial Day weekend, for those of you who follow US holidays) I'll be running the Internet Lounge at Baycon, a science fiction convention. It'll be a nice tribute to how well older systems hold up with Linux under the hood. If you happen to be in Northern California around then, feel welcome to drop on by.

See y'all next month, folks!


(!) Serial Console "buddy system"

Answer By James T. Dennis

Do you have a stack of Linux machines in a server room or at a co-location site? Do they all have serial consoles hooked up to a reliable terminal server? Or, is it that you can't afford to buy one of those cool Cyclades or other terminal servers, or your boss won't let you take up valuable rack space for one?
Depending on your answers to these questions you may qualify to use the unrevolutionary, completely unpatented "serial buddy system." Just take (or make) a few inexpensive null modem cables (n for n machines, if you close the loop) and link the systems in a chain (COM1 on System X to COM2 on System X+1, and around to System 0 to form a loop).
Install minicom or ckermit/gkermit, and mgetty, agetty, or uugetty (any getty that's capable of null modem -- serial -- operation); add the appropriate lines to your /etc/inittab and an option to /etc/lilo.conf or your grub configuration files (to pass console= directives to the kernel(s)); and (also optionally) compile your kernel with serial console support.
(The gory details are left for more detailed treatises such as http://www.tldp.org/HOWTO/Text-Terminal-HOWTO-17.html#term_as_console and .../linux/Documentation/serial-console.txt --- wherever your kernel sources are stored).
The end result of all this is that, when you need to look at the console of any machine, you can use a terminal package (such as minicom, or ckermit/gkermit) on the machine "next to" your target. This is much less flexible and convenient and a bit less robust than using a good terminal server --- but it's better than driving across town to the colo facility just because your reboot failed, or you have to pass some new option to your (possibly new) kernel, or whatever. It's predicated on the likelihood that you won't manage to munge all of your machines at once.
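For flavor, a minimal sketch of the two configuration entries involved (the device names, speed, and terminal type here are assumptions -- see the documents above for the real details):

```shell
# /etc/inittab -- answer logins arriving on the second serial port
S1:2345:respawn:/sbin/agetty -L 9600 ttyS1 vt100

# /etc/lilo.conf -- also send kernel console output out the first serial port
append="console=ttyS0,9600n8 console=tty0"
```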

Pros

Cheap:

you can get null modem cables for less than $5 (U.S.). Better yet, you can make your own RJ45-to-DB9 null modem adapter pairs and use normal ethernet patch cords, in a wide selection of colors ;) to connect them! That keeps the rat's nest behind your machines a tad more manageable.

Available:

you probably already have a couple of spare serial ports on that server, anyway (and some of the new kernels even support USB serial console drivers!)

More Available/Robust:

some PC motherboards support serial console right into their CMOS set-up --- so you can change the boot device, etc.

Fairly Robust:

No single point of failure? It's possible (with more advanced fussing) to force the gettys to be quiet. That should allow each of the null modems to be bi-directional (a login could be initiated from either end by connecting to the line and hitting Enter or sending a BREAK). (The trick is to force the gettys to wait for a line signal before issuing a login: prompt --- some of them have this option.) Obviously systems with four serial ports can be cross-wired for additional redundancy --- though only one port on any system can be the "console"; serial gettys can be run on the others.
Did I mention CHEAP! This is way cheaper than buying a Cyclades and paying the rackspace rent on it, too; and much cheaper than a PC Weasel 2000 (and spending a PCI slot on that!), and even cheaper than a set of KVM cables (not to speak of the KVM switch and the rackspace consumption you'd devote to THAT).
BTW: you can also add a modem or two into the mix --- putting them on systems with extra serial ports (COM3, or even COM2 on a system where you've got the "bi-directional, quiet getty hack" working). This can get you in to do troubleshooting even if your network connection to the colo goes down. That's especially handy if you happen to have another null modem into your router's console! (As in: "I updated the packet filters on the Cisco and now we're locked out! Ooops!")
[And, if it's saved your butt a few times, but proves to be unbearable for other reasons (see below) it's easy to plug in that terminal server when you get your boss to pony up for it ;) ].

Cons

Kludgy:

You have to remember which machines are neighbored to one another; you have to mark up your rack diagrams with another cryptic detail.

No centralized control, logging, monitoring etc:

There are a lot of advantages to a modern terminal server (in the case of recent Cyclades products, they are embedded systems running a Linux kernel from flash and supporting ssh for network-to-serial gateway functions). The "buddy system" is much simpler than all that, but much less "featureful."

Works "well enough":

This approach may deter your boss/manager from letting you get that terminal server and "do it right." C'est la vie!

(!) Watchdog daemon

Answer By James T. Dennis

The Linux kernel supports a class of devices called "watchdog" drivers. These are programmable timers which are wired to a system's reset or power lines. They are common on non-PC servers and workstations and in embedded devices, and are increasingly included in PC PCI chipsets. There are also PC adapter cards that can function as watchdog timers; some of them are included in adapters with other functions (such as the PC Weasel 2000, or some high-precision real-time clocks?) and some of them have electronics to monitor CPU or case temperature, power supply voltages, etc.
These all have one function in common: they can be set to some time interval (60 seconds by default, under Linux) and will count down towards zero. If they ever reach zero they'll strobe the reset line and force the hardware to reboot. Thus they require periodic "petting" or they'll "bite" you.
The Linux kernel supports a variety of watchdog hardware, and also includes a driver which is a software emulation of what a watchdog timer does. (That is a bit less robust, since some forms of kernel panic or failure might leave the system wedged and unable to execute the softdog code.) (The Linux kernel can be set to reset after a time delay in case of panic --- the default is to dump a message and registers to the console and wait for a human to read them and reboot. Read the bootparam(7) man page and search for panic= for details on how to over-ride that.)
All of this is of no use unless you also have a daemon or utility that can set the watchdog, monitor the system, and periodically "pet the dog." (Some texts on this topic use the more abusive "kicking" analogy --- but I find that distasteful).
Of course one can write one's own daemon, or even a cron job (if one over-rode the default 60 second value to be a bit longer, to account for possible cron delays). However, it's best to start with one that's already written and reasonably well proven. The Debian project has one that's simply called "watchdog." Although it is a Debian package it can be adapted for use on any Linux distribution.
This particular daemon performs up to 10 internal system tests (most are optional) and it can be configured to execute a custom suite of tests --- your own script or binary which must return a zero exit value on success (and should run in under some liberal time limit). In other words, it's extensible. On failure it can attempt to execute a custom "repair" script or binary, then it can try a soft reboot (with statically compiled code -- NOT by calling the normal 'shutdown' or 'reboot' binaries). Failing all of that, it will simply stop writing to /dev/watchdog, which will cause the hardware to reset the machine (hardware watchdog) or the kernel to reboot it (softdog).
In (almost) any event a system failure should result in a reboot instead of a hang. That can be good for systems that are remotely located and hard to reach. Of course Linux is pretty robust and reliable, so it's rare that the watchdog will be needed; and of course the watchdog could cause some spurious reboots, especially when you're initially configuring and tuning it. But there are cases where it's worth the risk and effort.
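To get a feel for the mechanism without special hardware, the software emulation can be poked at with something like this sketch (the softdog module and its soft_margin parameter are real; the one-line petting loop is a toy stand-in for the watchdog daemon, not a substitute for it):

```shell
modprobe softdog soft_margin=60       # reboot if the dog isn't petted for 60s
# toy petting loop: if this shell ever dies, the box reboots a minute later
while true; do echo >/dev/watchdog; sleep 30; done
```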

(?) Future in Linux

From Morgan Howe

Answered By Dan Wilder, Michael Gargiullo, Thomas Adam (the LG Weekend Mechanic), Ben Okopnik

LJ,
I'm almost a junior in college now, and I know I want a career in the computer field, but my real love is Linux. I'm also really interested in networking and the internet, but there are just so many options it's hard to make up my mind. I'm wondering if there is a good-paying career for a Linux professional, and if so, what should I do in my last two years of college to prepare myself? I can't decide if I should go with an information systems degree or just a regular CS degree. If I could just get any information about possible career ideas in the Linux field, or even if you could point me in the right direction to find more information, I'd greatly appreciate it, and you have my word I'll renew my subscription when it runs out. ;)
Thanks in advance, Morgan Howe
Near as I can tell, the Linux Journal staff decided to send it to us and see if we could answer him better. I hope he, and anyone else out there job seeking these days, finds this useful. -- Heather
(!) [Dan Wilder]
Most everybody at SSC works full-time in Linux. IBM, HP and other major players are putting lots of money into Linux, and it seems to be holding its own as a web server platform while continuing to creep into the enterprise.
You might try keying "Linux" into a search of dice.com. Lots of spots for network administrators, web designers, driver writers, and others, last time I checked.
Your mileage may vary. A large Redmond company might prefer if there were no such thing as Linux, and though many of us have our opinions, in truth only time will tell.
(!) [Michael Gargiullo]
There are more and more Linux-based jobs out there. OK, granted, the market isn't great right now, but more and more companies are realizing the benefits of Open Source.
Your school path should be based on what you want to do... Are you looking to write the next killer app or kernel module? If so go with the CS Degree, and learn good coding form.
As for the company in Redmond... if you like them, they do hire Linux professionals (they don't openly admit this), but a friend of mine who is a Perl genius and a strict Solaris guy just got picked up by them for their "enterprise email server project". Redmond might scream and shout that open source is evil, but they love and use it as well. Just remember, up until a few years ago, all of their web servers were running on *nix boxes. Another example: they have a software version control package that is based off an open source package (they were even lame about it; all of the commands are the same but have the "ms" prefix).
Sorry I ran off on a tangent... There are jobs out there...
Good Luck Clean Code
-Mike
(!) [Mike "Iron" Orr, LG Editor]
I'm in Seattle. The only places I can think of to search are:
  1. The job websites - http://www.monster.com, http://www.dice.com, etc.
  2. Your local hi-tech career fair
  3. Your local Chamber of Commerce
  4. Your local library
  5. Something else I was going to mention, but I forgot.
(!) [Thomas Adam, the LG Weekend Mechanic]
(Well, this is the Linux Gazette (LG), not Linux Journal (LJ), but I'll let you off :-)
Linux is becoming more and more popular with businesses these days. Certainly you should have no problem coming into "contact" with it.
...as for your CS degree...
I assume that you're an American. I am English and so cannot really say what your courses are like. I am 19 and am currently at University. I am doing an HND (Higher National Diploma) in Computer Science, which does cover some Unix aspects, if only basic. But it is a good sign that the course leaders here acknowledge the fact that Unix (and indeed Linux) is being used.
Any computer-orientated course should allow you the opportunity of using Linux. There is yet to be a degree here in the UK for Linux. However, software engineering which uses C, does use the Unix environment. So, you might get into Linux that way.
I would recommend going along to a local LUG to find out from the members there how they got involved with Linux.
There is information out there, especially on the internet.
I did a google/linux search and founf 1,2,9998 hits for Linux orientated jobs.

(?) and you have my word I'll renew my subscription when it runs out. ;)

(!) [Thomas] :-) I get the LJ too -- but don't feel obliged to re-new your subscription, just because I and Dan have helped you.
It has been a pleasure.
Good luck. Let me know how you get on.
(?) [Thomas] I did a google/linux search and founf 1,2,9998 hits for Linux orientated jobs.
(!) [Ben] Is this that New Math I keep hearing about? Thomas, please send me your professors' email addresses. It's remedial classes for you, sir. :)
(!) [Thomas] Lol, I thought you'd like it Ben. Of course, don't tell the others it's really that secret KGB code that you've been after. I like the cover up of blaming my maths too -- nobody will ever suspect that our plan for world domination is near completion :-)
Ok, seriously now though, I made a typo error.
Sorry, Mr. Okopnik, sir, it shan't happen again.....
--Thomas Adam

(!) Dual boot systems made easy

Answer By Murray Hogg, Dutch

Just a little tip which I've never seen before, but which solves a lot of the problems involved in partitioning drives during a Linux install.
Rather than go to the trouble of partitioning the hard-drive on a functional Windows system (is that an oxymoron?) I simply placed it in a hard-drive caddy. When it came to installing Red Hat 7.2 I replaced the drive in the caddy with a second drive I happened to have from an obsolete system. Now, by simply inserting the appropriate hard-drive in the caddy, I can boot into Win98 or Linux with no more effort than it normally takes to use a Linux boot-disk -- assuming, of course, that your system BIOS allows it to autodetect the hard-drive on boot-up.
Just a few comments on the advantages of doing this:
It can be a cheap way of getting into Linux, as it's actually cheaper to buy a new hard-drive and caddy to install in your existing system than it is to go out and buy an old 486 or Pentium I (or whatever) -- it also takes lots less desk space!
It has the advantage that the Linux and Windows installs are totally independent -- a crash on one has no chance of affecting the other whatsoever, and it circumvents the problem that later versions of Windows have to be the only OS on a system.
The one draw-back is the need to add a second (third?) hard-drive to allow swapping of files between two OS's.
Finally, I'm not a developer or hacker, but I imagine using multiple hard-drives would also be a great way to experiment with new Linux distro's or versions (or even software packages) without risking damage to a known and trusted installation.
Hope someone finds it helpful, regards
Murray Hogg

Hi again,
I just received the following warning about the use of hard-drive caddies, which I thought ought to be attached to my dual-boot system idea:
Thanks to "Dutch" for the following insights.

You make a few good points in your post. Now from 10 years as a hardware technician I'm going to inject a few cautionary notes.
1) If you are going to use a caddy system, be sure you get a decent one with solid, well designed alignment rails and good heavy-duty connection pins. Over time the cheap ones can become mis-aligned and cause bent pins on the internal connectors. Best case, the drive won't be recognized; worst case is a short causing damage to your system.
2) Along the same lines, most removable drive setups do not make solid metal-metal connections to conduct heat from the drive into the case where it is dissipated. So any caddy worth buying should have a cooling fan of some sort built into the tray.
3) Make sure to wait (usually a good slow count to 20) until your drives have COMPLETELY spun down before you remove them. Removing a drive that is still spinning is just asking for damage to the bearings, heads, etc.
4) Treat the removed drives with care (like they were delicate glass). I've seen people yank a caddy out of a machine and just drop it on their desk like a book. How long do you think something as delicate as a hard drive can take that kind of abuse?
5) Be extremely careful of static discharge, especially around the connection pins on the back of the caddy. ESD can kill a drive in a caddy very easily, since the drive is not attached to any sort of protective ground.
Dutch
"I think therefore I am...usually in a lot of trouble."

(?) gigabit unhappy

From Steven

Answered By Ben Okopnik

Hey All,

We are running Red Hat Linux on a Compaq ML570 with four Xeon processors and one gigabyte of RAM. The server has two NIC cards: one Compaq gigabit card and one 3Com 100Mbps card. After some help from all of you, I have been able to successfully install and configure both NIC cards. However, I have found that after one hour of use the gigabit card loses all connectivity, while the 3Com card stays up fine. We have tested this scenario several times, and the gigabit card is definitely dropping connectivity after about an hour. The only way to bring it back is to reboot the box, in which case they both work fine, but only for about an hour; then the gigabit loses connectivity again.

I checked out the Compaq website for a new driver, and there was one available; however, when I tried to build it with the 'make install' command from the created directory which contained the Makefile, I received an error message stating that the Kernel Source was not available. I took a look at the Makefile and saw it was calling a 'linux' directory in /usr/src/; however, all I have is a 'redhat' directory in /usr/src/. I copied the contents of the 'redhat' directory to a new directory called 'linux' and still I had the same problem.

I am running out of ideas, and was hoping someone out there might have run into this problem before, either with multiple NICS or with Compaq RPMS.

Any info would really help!

Thanks, Staven

(!) [Ben] It sounds like precisely what the error says: the kernel source is not available (and kudos to Compaq for making the error that clear; I've seen some absolutely st00pid error messages.) You're compiling a module (Linux doesn't use "drivers", at least not in the Wind*ws sense); modules get pushed onto the kernel, effectively modifying how the OS itself does Stuff. Therefore, you need to have the source code - module compilation depends on it.
Run "uname -r" to find out what version you're running. Download and install that version's source tree on your system; this will go under "/usr/src" as "kernel-source-<version>". Create a symlink called "linux" under "/usr/src" that points to your newly-installed source tree:
ln -s /usr/src/kernel-source-<version> /usr/src/linux
You should be able to run your "make" from here on.
(Obviously, you should delete your current "/usr/src/linux" before any of this - taking wild guesses of that sort can get you in trouble.)

(!) Adding seldom-used directories to your PATH

Answer (as originally posted on linux-list) by Ted Stern

This content is actually from several messages originally from linux-list, and I have moved around parts for readability. I hope you all don't mind. -- Heather
The question was how to add a path for occasionally-used scripts without having to modify the PATH variable directly. Matlab has a command 'addpath' that does this. He tried to do it with a shell script, but of course that didn't work because it executes in a subprocess, and subprocesses can't modify their parent's environment.
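As Ted notes below, ad-hoc solutions are re-inventing the wheel; still, for a quick one-off, a shell function sourced into the login shell (rather than a script, which runs in a subprocess) does the trick. A minimal sketch of my own devising, not part of any package:

```shell
# prepend a directory to PATH, but only if it isn't already there
addpath() {
    case ":$PATH:" in
        *":$1:"*) ;;               # already present -- do nothing
        *) PATH="$1:$PATH" ;;      # otherwise prepend it
    esac
}
```

Put it in ~/.bashrc and call it as, e.g., "addpath /opt/matlab/bin"; calling it twice is harmless, since duplicates are skipped.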
The more people banging on modules the better. I think it would be great if all package maintainers could set up a modulefile to go with their installations.
Here at Cray, we are in the midst of a giant package installation sequence. Given that there are dozens of open/free/GPL software packages around, and our techies like to use them on all the platforms they work on, it has been nightmarish trying to keep up with every single software distribution. So they set up something called "cfengine" (I think) and each package gets its own automatic modulefile. This makes it easy to get access to tools like LaTeX if you need them.
... later he adds ...
I found the name of the package we are using here to install 100's of ports for various platforms:
MPKG
http://staff.e.kth.se/mpkg
It is already integrated with Environment Modules!

Others have posted various ways to do this, but I'd like to point out that they are all re-inventing the wheel.
A method to modify environment variables cleanly was developed over 10 years ago. It is called Environment Modules. It compiles under Linux. It happens to be the method Cray has used for the last 7 years to modify paths for different versions of its compilers and libraries.
You can even get the latest version via anonymous CVS from sourceforge.
See http://modules.sourceforge.net for more details.
Here's an example of how it works.
In your startup file, (I use tcsh) you put a line like
	source /opt/Modules/default/init/tcsh
In a directory filled with "modulefiles", one modulefile named "myghost" might contain some commands like
	setenv        GS_LIB /local/path/to/my/ghostscript/lib
	prepend-path  PATH   /local/path/to/my/ghostscript/bin
	prepend-path  MANPATH /local/path/to/my/ghostscript/man
To access your local ghostscript stuff, you could say
	module load myghost
and the environment variables are modified as you would expect them to be.
To remove all trace of your changes, you do
	module unload myghost
and all is as it was before.
The Environment Modules package has been banged on in a variety of production settings at SUN (where it was initially developed), SGI, IBM, HP, etc., so it is fairly robust.
There is also a mailing list (majordomo), with extremely low traffic, mostly just announcements:
	[email protected]
There are probably other packages to do the same things as Environment Modules, but I doubt that they have as much infiltration into the corporate infrastructure ;-) .
Good luck, Ted
gpg fingerprint = 6171 14B3 A323 965B 614D 056F B41C 03AE E404 986C

... Iron also asked Ted ...

(?) [Iron] How do you set your From: address on a per-list basis? Do you do something like "edit headers" in mutt and change it manually for each message? That would be tedious. Or do you have an automated way to do it?

(!) [Ted] Read the full header of an email message, and you will usually see an indication of what the MUA is.
I use Gnus, an extraordinarily powerful email package within Emacs. Of course, I also use the anon CVS version, so I sometimes have a few bugs to deal with ;-). But you can just use the version of Gnus that comes with Emacs if you like.
In my .gnus file, I have a setting as follows:
      (setq gnus-posting-styles
            '(
              ("^nnfolder.*:lists.gnus"
               (From "Ted Stern <[email protected]>"))
              ("^nnfolder.*:lists.fortran"
               (From "Ted Stern <[email protected]>"))
              ("^nnfolder.*:lists.linux"
               (From "Ted Stern <[email protected]>"))
              ))
Gnus treats mail like news, so I read folders of mail as if they were groups. Within certain of my groups, the setting above adds the extra "From:" header.

(!) Experience Installing SuSE Linux 8.0

Answer By Edgar Howell

Linux ready for the desktop? -- SuSE seems to think so.
On 13 April I installed SuSE Linux 8.0 (2.4.18-4GB) on a notebook. Ignoring one glitch (a pcmcia module, but notebooks are notorious for difficult installs) and my disinclination towards gui-anything, it was the easiest installation of an operating system I have ever experienced -- other than Coherent and DOS.
Not having a PC available with sufficient resources for recent releases of Linux, the now 2-year-old Toshiba Satellite 2180 CDT became the target. In theory all data on it was backed up to the PC but "just in case" /home and a bit more got tar'd and copied to the PC "for a while". So it wasn't an update but a clean install.
Probably I installed at least 4 times. But then 2 is normal: the first time around surprises don't always get proper responses, the second time is for real. However, there was something about the pcmcia module that hung the install as the system was coming up for the first time. No disk activity but the fan's coming on said the poor AMD was sweating heavily.
Once I believed that -- and by then I had learned that the default office install includes Star Office (which I used to like but would rather replace since it shows its origins too much) -- I chose the standard install without office stuff and before turning it loose removed the pcmcia module from the list of packages to install. After that it was like ho, hum...
The following is my protocol of installation, prompts indented (if the terminology differs from what SuSE actually uses stateside, that's due to my translation from German):
                boot CD 1 - menu
        Installation
                Language
        German
                menu - new/update/start
        new installation
                installation settings
        accept
                start installation?
        yes-install
                root password
        xxx,ppp
                add new user
        yyy,ppp
                monitor
        LCD
        SVGA 800x600@60HZ
                CRT settings
        graphic (settings OK)
                network interfaces and modems not detected
        next
                command line login
        root,ppp
        shutdown -h now
This took barely 24 minutes, most of which involved installing software. And I have omitted what was done to avoid installing the troublesome pcmcia module (which wouldn't be necessary on a PC).
What really blew me away is that under the monitor options "LCD" was right there and as model one could choose "SVGA 800x600@60HZ"! Yeah, I still checked with sax. The horizontal and vertical frequencies were right. Afterwards I spent several hours playing with the notebook. It even powers off when you shutdown!
Of course it was also neat that the partitions were recognized correctly (yeah, I know, a "clean install", but I've always used Partition Magic) and when all was said and done Win98 was still there, although there would have been no tears shed. Interesting was what can only be described as a gui-LILO: boot and you get about 5 or 10 seconds to make a choice on a graphics screen.
I'm not unbiased. I've been with SuSE since their 5.1. This was the first time using yast2, the graphic install, since they no longer have yast1. I wasn't aware of any possibility of driving yast1 with a script but would have much preferred that, to make it easy to do an identical install on several machines. But then my past includes IBM sysgens with decks of cards. What irritates me about gui-installs is the infinity of questions that need to be answered -- every single time. At least until this SuSE release.
Well, on a PC with adequate resources the yast2 install should go really slick. And like it or not that really is the yardstick nowadays and should go well with the desktop crowd.
Until now I have felt that even frustrated Windows users should stick with what they know unless they are seriously interested in how real operating systems function. In my opinion this release definitely is ready for prime time.


This page edited and maintained by the Editors of Linux Gazette Copyright © 2002
Published in issue 78 of Linux Gazette May 2002
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/

"Linux Gazette...making Linux just a little more fun!"


News Bytes

Contents:

Selected and formatted by Michael Conry

Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release. Submit items to [email protected]


[issue 97 cover image] The May issue of Linux Journal is on newsstands now. This issue focuses on kernel internals. Click here to view the table of contents, or here to subscribe.

All articles through December 2001 are available for public reading at http://www.linuxjournal.com/magazine.php. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.


Legislation and More Legislation


 CBDTPA

The Electronic Frontier Foundation has published the results of their "Alphabet Soup Contest" to find more meaningful interpretations of acronyms like CBDTPA. Among the winners was Steven Cherry with the insightful translation: "Consume, But Don't Try Programming Anything" which very succinctly sums up what many would see as the spirit behind this legislation. Unsurprisingly, initiatives like the CBDTPA and groupings such as the BPDG [EFF link] (Broadcast Protection Discussion Group) have met with substantial popular opposition. globetechnology.com reported Judiciary Committee spokeswoman Mimi Devin as saying that not one email in support of the bill had been received. This would seem to indicate that the only people who can benefit from the bill are those who helped draw it up. Additionally, a number of websites (such as EFF and DigitalConsumer.org) have served as rallying points for those opposed to the introduction of these laws, and it is very difficult to find any pro-CBDTPA online presence outside of corporate webpages. A recent article by Catherine Olanich Raymond provides a reasoned and legally informed analysis of the principles behind this broad opposition.


 DMCA

Although the DMCA is very detrimental to consumers, it should not be forgotten that it poses serious risks to scientific research also. This was seen clearly in the case of Edward Felten vs the RIAA. A reminder was provided by the IEEE's decision to require researchers submitting journal papers to guarantee that the work did not violate the DMCA. As New Scientist later reported, this decision was reversed due to popular opposition. However, as pointed out on Slashdot, it is regrettable that the reversal was based on complaints rather than on legal arguments or rights. This is a positive development, but hardly a vindication of scientific freedoms.


 MS Government XP

The Seattle Times reported that the US federal government is considering the use of Microsoft's Passport technology to verify the online identity of American citizens. This would allow citizens to authenticate themselves at government websites where they might deal with such business as paying taxes or learning about their entitlements. This would obviously be an incredible coup for Microsoft, who The Register reports, have been pushing hard for popular adoption of Passport technology. It also forms part of a broad plan to persuade governments to base their IT infrastructure around Microsoft products. This has had significant success in the United Kingdom.


Linux Links

Linux Focus
The following articles are in the May/June issue of the E-zine LinuxFocus:

An interview at Linux Journal about the Linux movement and Linux Users Groups in India.

Also at Linux Journal, Linux WiFi Router brings in Subscribers for Ghana's Largest ISP.

Slashdot links:

  • Does Senator Hollings have his good side after all? Early reports of his net privacy bill seemed to suggest so, but a later Salon article thinks it's just business as usual: make a bill that pretends to safeguard people's privacy, but actually gives it to the marketers on a platter. Not your "sensitive" information (medical history, race, religion, political affiliation, etc), but your "nonsensitive" information--which includes your name, address, and anything you buy over the Internet. Fortunately for the marketers, this "nonsensitive" information is precisely what they want. Unfortunately for individual privacy, one can make a fairly good guess what your medical history, race, religion and political affiliation is just by analyzing what you buy and which web pages you read. So, is there anything good about this bill after all? At press time, it's too early to say.
  • Microsoft FUD notwithstanding, the SAMBA team is not affected by a recent MS licence on a technical document related to the CIFS protocol (the license forbids the information from being used in GPL code) and two patents related to the CIFS protocol, because SAMBA doesn't use that implementation anyway.
  • Microsoft lawyer says Linux "is not piracy" during a European conference on software piracy. Slashdot contributor dipfan notes that the article "quotes Microsoft's top in-house lawyer Brad Smith as saying: 'Linux is a way of developing software whereas piracy is copying.'"
  • IBM developerWorks article on sharing computers, comparing SSH, remote X, VNC, and other technologies as ways of remotely running applications.

    A couple of links which might be of use when considering new hardware purchases are Linux.org's hardware list and The Linux Hardware Database. Slashdot also recently ran a story on hardware manufacturers that actively support Linux.

    Some links from Linux Weekly News

    Some links from Slashdot:

    Some links from the O'Reilly stable of websites:

    Some interesting stories from the The Register:

    Linux Today have highlighted several interesting links over the past month:


    Upcoming conferences and events

    Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.

    Networld + Interop (Key3Media)
    May 7-9, 2002
    Las Vegas, NV
    http://www.key3media.com/

    IBM developerWorks Live!

    May 7-10, 2002
    San Francisco, CA
    http://www-3.ibm.com/events/ibmdeveloperworkslive/index.html

    Strictly e-Business Solutions Expo (Cygnus Expositions)
    May 8-9, 2002
    Minneapolis, MN
    http://www.strictlyebusiness.net/strictlyebusiness/index.po?

    O'Reilly Emerging Technology Conference (O'Reilly)
    May 13-16, 2002
    Santa Clara, CA
    http://conferences.oreillynet.com/etcon2002/

    Embedded Systems Conference (CMP)
    June 3-6, 2002
    Chicago, IL
    http://www.esconline.com/chicago/

    USENIX Annual (USENIX)
    June 9-14, 2002
    Monterey, CA
    http://www.usenix.org/events/usenix02/

    PC Expo (CMP)
    June 25-27, 2002
    New York, NY
    http://www.techxny.com/

    O'Reilly Open Source Convention (O'Reilly)
    July 22-26, 2002
    San Diego, CA
    http://conferences.oreilly.com/

    USENIX Security Symposium (USENIX)
    August 5-9, 2002
    San Francisco, CA
    http://www.usenix.org/events/sec02/

    LinuxWorld Conference & Expo (IDG)
    August 12-15, 2002
    San Francisco, CA
    http://www.linuxworldexpo.com

    LinuxWorld Conference & Expo Australia (IDG)
    August 14 - 16, 2002
    Australia
    http://www.idgexpoasia.com/

    Communications Design Conference (CMP)
    September 23-26, 2002
    San Jose, California
    http://www.commdesignconference.com/

    Software Development Conference & Expo, East (CMP)
    November 18-22, 2002
    Boston, MA
    http://www.sdexpo.com/


    News in General


     Lindows Controversy

    Lindows is not only in legal wrangles with Microsoft, but has now run foul of the Free Software Foundation. It would appear that Lindows has been somewhat casual about distributing source code for their products. Bruce Perens has written an open letter to Michael Robertson (Lindows CEO) calling on the company to be honest partners in the free software endeavor. Mono Linux has published a report and analysis of Lindows, available in two parts ( one and two).


     New version of the IP Masquerade HOWTO is available

    David Ranch has announced the release of a new version of the IP Masquerade HOWTO.

    Recent changes include:


     $20m Compaq Linux Win

    Compaq Computer Corporation have announced a three-year, $20 million agreement with RackShack, the hosting services arm of Everyones Internet. Compaq will equip RackShack's IT data centers with industry-standard Compaq ProLiant servers for a tier one, Linux-based Web hosting solution.


    Distro News


     Debian

    Bdale Garbee, an Engineer/Scientist in the Linux Systems Operation group for Hewlett-Packard, has been elected Debian project leader.


    Debian Weekly News recently reported that Nathan Hawkins has announced a new base tarball for those who would like to see Debian GNU/FreeBSD live. The status of this port is available here.


     Gentoo

    Linux Planet have recently reviewed Gentoo Linux, a source based distribution aimed at people comfortable with software development (among others).

    Gentoo can also be installed on the PPC platform, and has been reviewed by iMacLinux (link courtesy Linux Today).


     Hancom

    Linux and Main have an interview with Bart Decrem, co-founder of Eazel (producers of the Nautilus graphical shell for GNOME) and vice president of Hancom Linux. Decrem discusses software in Korea, why companies and governments outside the US don't want to become too dependent on Microsoft, and more. Also featured on Slashdot. While on the subject of Hancom Linux, Linux and Main also reported that Hancom Linux is shipping what is believed to be the first Arab-language Linux distribution. As reported by OSNews, Hancom have now completely focused on the Linux platform for their Hancom Office productivity suite.


     SOT Linux

    Linux Today have the story that SOT, publisher of Best Linux, has announced a change of name for its Linux distribution to coincide with the release of a new version of the distro. In future it will be known as SOT Linux.


     SuSE

    SuSE Linux and IBM have announced a broad services alliance that will enable both companies to jointly provide Linux support and services to corporate customers around the world. In the agreement, IBM Global Services and SuSE will collaborate on support and professional services. IBM will package and support turnkey implementations of SuSE Linux Enterprise Server, backed by SuSE's expert development, maintenance, and support teams. In addition to this complete services offering, the two organizations will also collaborate on customer engagements and supplement each other's skills to provide a formidable Linux services delivery capability for corporate customers.


    Slashdot ran the story that SuSE 8.0 has shipped, and now includes KDE 3.0, kernel 2.4.18, and various other upgrades/enhancements.


    Software and Product News


     Mammoth PostgreSQL Released

    Mammoth PostgreSQL from Command Prompt, Inc. is an SQL-compatible Object Relational Database Management System (ORDBMS). It is designed to give small to medium size businesses the power, performance, and open-standard support they desire. 100% compatible with the PostgreSQL 7.2.1 release, Mammoth PostgreSQL provides a commercially-supported PostgreSQL distribution for Solaris, MacOS X and Red Hat Linux x86 platforms. Mammoth PostgreSQL ships with built-in support for SSL connectivity (Native and ODBC), as well as programming APIs for C/C++, Perl, and Python. There are one-time and subscription-based licensing models available for immediate purchase.

    Command Prompt, Inc., provides support, custom programming, and services for PostgreSQL. Service contracts, as well as time and materials support are available, allowing for single-point accountability for a customer's database solution.


     Linux Growth Spurs Tool Sales for Etnus

    Etnus, a supplier of debuggers for complex code, have announced record-breaking sales of its TotalView debugger on Intel Linux platforms, linking the sales to increased development of complex and mission critical codes on Linux systems. Both sales volume and number of licenses sold for the Etnus TotalView debugger on Intel Linux platforms doubled over first quarter 2001 and, for the first time, Etnus reported that Linux was the top-selling platform. Etnus TotalView is a cross-platform, state of the art debugger supporting C/C++ and Fortran.

    Etnus believes Linux will continue to be a leader among the many platforms they support and will continue to expand functionality there. The next release of TotalView will add support for GCC 3.X and the Intel compilers for Linux.


     CylantSecure

    CylantSecure is an intrusion detection system for Linux and other Unix variants that stops attacks before they occur by monitoring the behavior of the operating system. It has been developed and produced by Cylant, a division of Software Systems International. By adding instrumentation to the kernel, Cylant can benchmark server behavior patterns and detect changes in those patterns during operation. If an abnormal behavior occurs, it can be stopped in real time, preventing attacks before they are executed.

    This technique is based on the principle that most attacks change the behavior of the software being exploited in a measurable way. CylantSecure uses sensors to monitor the behavior of the software, along with a statistical analysis engine to identify any abnormalities in the behavior. Through continuous behavioral monitoring, CylantSecure can send administrators early warning of attacks, so appropriate measures can be taken. Such measures might include shutting down the program, shunning traffic from the attacking IP or performing system state analysis.

    Get more information on the Cylant website.


     Opera 6.0 for Linux Beta 2 Released

    Opera Software ASA have released Opera 6.0 for Linux Beta 2 with improved features and looks to increase the speed and enjoyment of Linux users worldwide. The earlier version of Opera for Linux, Opera 5, has reached a milestone of one million successful downloads and installations.

    For a complete changelog of Opera 6.0 for Linux Beta 2, please visit http://www.opera.com/linux/changelog/


     McObject's eXtremeDB 2.0

    McObject has released version 2.0 of its eXtremeDB small footprint, main memory database on Linux, with new features to improve developer flexibility and enhance the run-time capabilities of applications based on eXtremeDB. McObject built eXtremeDB from scratch to meet the CPU and RAM constraints of intelligent, connected devices while offering dramatic performance improvements over traditional disk-based database systems. Enhancements in version 2.0 include:

    An evaluation version of eXtremeDB 2.0 is available from www.mcobject.com/download for free download.


     Mozilla

    Mozilla 1.0 release candidate 1 has been released. This is a trial run for the upcoming 1.0 release, and is a good indicator of how close that day is. Indeed, Mozilla even managed to attract the attention of Time Magazine, which reported on the possibility that a Mozilla release could break the browser war armistice.


     Arkeia Releases A New Version 5 Beta

    Arkeia Corporation has released a new Arkeia 5 Beta version. Arkeia Version 5 will be the successor to Version 4.x, a high performance, multiple-platform backup software package with 90,000 users worldwide. Arkeia 5 will feature a completely rewritten program architecture and will include an assortment of new features requested by users.


     Other software

    Apache 2.0 is now, officially, stable.

    Galeon 1.2.1 has been released

    AbiWord 1.0 is out

    The new version of Mailman, (version 2.0.10) is now available.


    Copyright © 2002, Michael Conry and the Editors of Linux Gazette.
    Copying license http://www.linuxgazette.net/copying.html
    Published in Issue 78 of Linux Gazette, May 2002

    "Linux Gazette...making Linux just a little more lovable!"


    [picture of mechanic]

    The Weekend Mechanic

    By Thomas Adam



    Welcome to the May edition

    [ ** This edition is dedicated to a very dear friend of mine called Natalie Wakelin, who I am indebted to for helping me recently. She has been an absolute star and true friend to me, and although she may not understand a word this "technical" document may have to offer -- I dedicate it to her all the same. Thanks Natalie!! :-) ** ]


    What song the Syrens sang
    or what name Achilles assumed
    when he hid himself among women,
    although puzzling questions
    are not beyond all conjecture

    --Sir Thomas Browne
    Taken from: "The Murders in the Rue Morgue" -- Edgar Allan Poe

    Yes, yes, I know. You can stop clapping and applauding. I'm back :-) Seriously, I can only apologise for the "holiday" that the LWM has taken over the past "couple" of months. I have taken rather a large leap into the world of freedom and University life, and I found it more difficult to adjust to than I had originally anticipated!!

    But that is by the by.....

    For the keen-eyed among you, the quote at the top of this column rather sums up the usability of Linux overall. Indeed, no matter how strange a problem may appear to be within Linux, it is rarely beyond all conjecture -- or beyond solving with Linux itself. I have been finding that out for myself quite a lot recently :-)

    Aside from all the University work, I have been actively helping out with problems at the Hants LUG, both in person and via their mailing list. Actually it has been quite exciting. I have also learnt a lot!!

    Well that is enough preamble for one month. Enjoy this issue, won't you?


    A Brief Introduction: Squid


    What is Squid?

    Those of you who read the September edition will remember that I wrote an article about the use of Apache. I had some nice feedback on that (thanks to all who sent their comments). I thought it a nice idea to do a tutorial on squid.

    For those of you who don't know, Squid (other than being a sea creature) is an internet caching proxy server that runs on Linux and other Unix systems. Why is it called squid? Apparently because (quote: "all the good names were taken")

    Squid works by channelling internet requests through a central machine (called a proxy server).

    Furthermore, squid offers the ability to filter certain webpages, to either allow or disallow viewing. The ability to do this is through ACLs (Access Control Lists). More on these later.


    Installation

    Installing squid should be straightforward enough. Squid is supplied with all major distributions (RedHat, SuSE, Caldera, Debian, etc) so it should be easily accessible from your distribution CDs.

    For those of you that have a Linux distribution that supports the RPM format, you can check to see if you already have it installed, by using the following command:

    rpm -qa | grep -i squid

    If it is installed, then you should find that "squid2-2.2.STABLE5-190" (or similar) is returned. If you get no response then install squid from your distribution CD.

    If squid is not on your distribution CD, or you are using a version of Linux (such as Debian and Slackware) that does not support the RPM format, then download the source in .tgz (tar.gz) format from http://www.squid-cache.org/download.

    To install Squid from source, copy the tarball to "/tmp" and then issue the following commands:

    1. If you are not user "root", su, or log in as root
    2. cd /tmp
    3. tar xzvf ./name_of_squid.tar.gz -- [or possibly .tgz]
    4. Now run:
    
    ./configure
    
    5. Check that configure completed without errors. Then you can simply type:
    
    make && make install
    
    to compile and install the files.
    

    Typically, from a standard RPM installation, these directories will be used:

    /usr/bin
    /etc
    /etc/squid (possibly -- used to be under RH 5.0)
    /var/squid/log/
    [/usr/local/etc] <-- perhaps symlinked to "/etc"
    

    If you're compiling it from source, then a lot of the files will end up in:

    /etc
    /etc/squid (possibly -- used to be under RH 5.0)
    /usr/local/bin
    /var
    [/usr/local/etc] <-- perhaps symlinked to "/etc"
    

    Suffice it to say, it does not really matter; unless you have specifically requested otherwise, this is where the files will end up.

    Now that you have squid installed, let us move onto the next section.... configuration


    Configuration

    So, you've installed squid, and are wondering...."Is that it?" ha -- if only it were true, gentle reader. Nope....there are lots of things still to do before we can have ourselves a good old proxy server.

    Our efforts now shall be concentrated on one file: /etc/squid.conf. It is this file which holds all the settings for squid. Because we will be editing this file, I always find it a good idea to keep a copy of the original. So, before anything else, issue the command:

    cp /etc/squid.conf /etc/squid.conf.orig
    

    Then fire up your favourite editor, and let's begin editing squid.conf

    Actually trying to use this file to run squid "out of the box" is impossible. There are a number of things that you'll have to configure before you can have an up-and-running proxy server. At first glance, this file is about a mile long, but the developers have been helpful, since the majority of the file consists of comments about each option that is available.

    The first thing is to tell squid the IP address of the machine it is operating on, and the port on which it should listen. In squid.conf, you should find a commented line which looks like:

    #http_port 3128

    Uncomment this line, by deleting the leading hash (#) symbol. Now by default, the port number 3128 is chosen. However, should you wish to tell squid to listen on a different port, then change it!! Thus on my proxy machine, I have specified:

    http_port 10.1.100.1:8080

    This binds squid to the above IP address, listening on port 8080. What you have to be careful of is making sure that no other running application is trying to use the same port (such as apache), which is a very common mistake.
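    Before settling on a port, it is worth checking that nothing else is bound to it. On a live machine you would pipe real output (netstat -tln | grep ':8080 '); the sketch below runs the same check over a captured sample so the test itself is clear (the netstat lines are fabricated for illustration):

```shell
#!/bin/sh
# Check whether anything is already listening on the port chosen for
# squid's http_port. The output below is a fabricated sample; on a
# real machine substitute:  netstat -tln
sample='tcp  0  0 0.0.0.0:80    0.0.0.0:*  LISTEN
tcp  0  0 0.0.0.0:3128  0.0.0.0:*  LISTEN'

if echo "$sample" | grep -q ':8080 '; then
    echo "port 8080 is taken - pick another for http_port"
else
    echo "port 8080 looks free"
fi
```

    (The trailing space in the grep pattern stops port 80 from matching 8080.)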

    Now, as we progress through this configuration file, the next major configuration option we should now change is cache_mem. This option tells squid how much memory it should use for things like caching.

    I have just uncommented this line -- and left the default at 8 MB

    Further on down from this option are some more options which tell squid about the high/low cache "watermarks". These are percentages of the cache's allotted space: once usage climbs past the high watermark (95% by default), squid starts deleting some of its cached items until it drops back below the low one (90%).

    #cache_swap_low  90
    #cache_swap_high 95
    

    I have simply uncommented these, but I have also changed their values. The reason is that I have a 60 GB hard drive, on which one percent is hundreds of megabytes, so I have changed the values to:

    cache_swap_low  97
    cache_swap_high 98
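
    The arithmetic behind that tweak (taking the article's 60 GB figure, and treating 1 GB as 1000 MB for round numbers) can be checked in the shell:

```shell
#!/bin/sh
# How much data one watermark percentage point represents on a
# 60 GB cache (round numbers: 1 GB = 1000 MB).
cache_mb=$((60 * 1000))
one_percent=$((cache_mb / 100))              # MB per watermark point
gap_default=$(( (95 - 90) * one_percent ))   # default 90/95 gap
gap_tuned=$(( (98 - 97) * one_percent ))     # tuned 97/98 gap
echo "1% of the cache    = ${one_percent} MB"
echo "default 90-95% gap = ${gap_default} MB"
echo "tuned 97-98% gap   = ${gap_tuned} MB"
```

    So the defaults would start evicting with roughly 3 GB still free between the watermarks, while the tuned values trim that gap to about 600 MB.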
    

    Right....so far so good. We have told squid which IP and port to listen on, told it how much memory it may use, and told it the percentage of drive space it should reach before it starts deleting its own cached items. Great!! If you haven't done so already, save the file.

    The next and penultimate option that I changed was quite an important one, since this one determines the location and size of the cache directories. There is a TAG, which looks like:

    cache_dir /var/squid/cache 100 16 256
    

    What this says is that the cache lives under the path "/var/squid/cache", will hold at most 100 MB, and is spread across 16 top-level directories, each containing 256 subdirectories.

    The last major item that I shall be tweaking in this file, before moving on to filtering, is the use of access logs. Just below the option we have just configured for the cache_dir, are options to allow logging. Typically you have the option of logging the following:

    the access log (client requests), the cache log (squid's own status and error messages), the store log (objects entering and leaving the cache), and the swap log.

    Each of the above logs has its own advantages and disadvantages in the running of your proxy server. Typically, the only logs that I keep are the access log and the cache log, simply because the store and swap logs don't interest me :-).

    It is the access log file which logs all the requests that users make (i.e. which website a particular user is visiting). While I was at school, this file was invaluable in determining which user was trying to get to banned sites. I recommend that any sysadmin who has set up, or is going to set up, an internet proxy server enable this feature -- it is very useful.

    So, I did the following (uncommenting the TAGS):

    cache_access_log /var/squid/logs/access.log
    cache_log /var/squid/logs/cache.log
    

    I recommend that you leave the log names as they are.
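    Once the access log starts filling up, a one-liner will summarise the most-requested URLs. In squid's native access.log format the URL is the seventh whitespace-separated field; the sketch below builds a small fabricated sample log so the pipeline itself is clear (point awk at your real /var/squid/logs/access.log instead):

```shell
#!/bin/sh
# Summarise the most-requested URLs from a squid access log.
# These log lines are fabricated samples in squid's native format:
# timestamp elapsed client action/code size method URL ...
log=/tmp/sample-access.log
cat > "$log" <<'EOF'
1020000001.001 120 192.168.1.2 TCP_MISS/200 4521 GET http://example.com/ - DIRECT/93.184.216.34 text/html
1020000002.002 80 192.168.1.3 TCP_HIT/200 4521 GET http://example.com/ - NONE/- text/html
1020000003.003 200 192.168.1.2 TCP_MISS/200 910 GET http://example.org/faq - DIRECT/93.184.216.35 text/html
EOF
# URL is field 7; count occurrences, most-requested first.
awk '{print $7}' "$log" | sort | uniq -c | sort -rn
```

    The same pipeline with field 3 instead of field 7 gives you the busiest client IPs -- handy when chasing who is visiting banned sites.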

    Obviously, I have only covered the most basic options within the squid.conf file. There are a whole mass of options for particular situations. Each option is fairly well commented, so should you wish to see what a particular option does, it should not be too hard.


    Filtering (Access Control)

    This section is still using "/etc/squid.conf" but I shall go into the configuration options for access control in a little more detail.

    Access control gives the sysadmin a way of controlling which clients can actually connect to the proxy server, be it via an IP address, or port, etc. This can be useful for computers that are in a large network configuration.

    ACLs (Access Control Lists) can match on a number of properties, such as the source or destination IP address (src, dst), the time of day (time), or a regular expression within the requested URL (url_regex), amongst others.

    All access controls have the following format to them:

    acl   acl_config_name   type_of_acl_config values_passed_to_acl
    

    Thus in the configuration file, locate the line:

    http_access deny all
    

    Above this line, add the following:

    acl weekendmechnetwork src 10.1.100.1/255.255.255.0
    http_access allow weekendmechnetwork
    

    You can change the ACL name "weekendmechnetwork" to a name of your choice. What this does is say that the ACL named "weekendmechnetwork" matches the IP address 10.1.100.1 (the proxy server) with a netmask of 255.255.255.0 -- i.e. any address on that network. Thus, "weekendmechnetwork" is the name assigned to the clients on the network.

    The line "http_access allow weekendmechnetwork" then actually grants HTTP access to clients matching that ACL -- defining an ACL on its own does nothing until an http_access rule refers to it.

    The next thing that we shall do, is look at allowing selected clients to access the internet. This is useful for networks where not all of the machines should connect to the internet.

    Below what we have already added, we can specify something like:

    acl valid_clients src 192.168.1.2 192.168.1.3 192.168.1.4
    http_access allow valid_clients
    http_access deny !valid_clients
    

    What this says is that for the ACL name "valid_clients" with the src IP addresses listed, allow HTTP access to "valid_clients" (http_access allow valid_clients), and disallow any other IPs which are not listed (http_access deny !valid_clients).

    If you wanted to allow every machine Internet access, then you can specify:

    http_access allow all
    

    But we can extend the ACLs further, by telling squid that certain ACLs are only active at certain times, for example:

    1.   acl clientA src 192.168.1.1
    2.   acl clientB src 192.168.1.2
    3.   acl clientC src 192.168.1.3
    4.   acl morning time 08:00-12:00
    5.   acl lunch time 12:30-13:30
    6.   acl evening time 15:00-21:00
    7.   http_access allow clientA morning
    8.   http_access allow clientB evening
    9.   http_access allow clientA lunch
    10.  http_access allow clientC evening
    11.  http_access deny all
    
    [ ** N.B. Omit the line numbers when entering the above; I've added them here to make the explanation easier -- Thomas Adam ** ]

    Lines 1-3 set-up the ACL names which identify the machines.
    Lines 4-6 set-up ACL names for the specified time limits (24-hour format).
    Line 7 says to allow clientA (and only clientA) access during "morning" hours.
    Line 8 says to allow clientB (and only clientB) access during "evening" hours.
    Line 9 says to allow clientA (and only clientA) access during "lunch" hours.
    Line 10 says to allow clientC (and only clientC) access during "evening" hours.
    Line 11 then says that if any other client attempts to connect -- disallow it.

    But we can also take the use of ACLs further, by telling Squid to match certain regexes in the requested URL and, in effect, throw the request in the bin (or more accurately -- "&>/dev/null" :-)

    To do this, we can specify a new ACL name that will hold a particular pattern. For example

    1.  acl naughty_sites url_regex -i sex
    2.  http_access deny naughty_sites
    3.  http_access allow valid_clients
    4.  http_access deny all
    
    [ ** Remember -- don't use the line numbers above!! ** ]

    Line 1 says that the word "sex" is associated with the ACL name "naughty_sites". The clause url_regex says that the ACL is of that type -- i.e. it is to check the words contained within the URL. The -i says that matching should ignore case.
    Line 2 says to deny all clients access to any website whose URL matches anything from the ACL "naughty_sites".
    Line 3 says to allow access from "valid_clients".
    Line 4 says to deny any other requests.

    So, I suppose you are now wondering: "how do I specify more than one regex?". Well, the answer is simple: you can put them in a separate file. For example, suppose you wanted to filter the following words, and disallow access to any URL in which they appeared:

    sex
    porn
    teen
    

    You can add them to a file (one word per line), say in:

    /etc/squid/bad_words.regex
    

    Then, in "/etc/squid.conf" you can specify:

    acl bad_sites url_regex -i "/etc/squid/bad_words.regex"
    http_access deny bad_sites
    http_access allow valid_clients
    http_access deny all
    

    Which probably makes life easier!! :-). It means that you can add words to the list whenever you need to.
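    Before reloading squid, it can be handy to check which URLs your pattern file would actually catch. Here is a minimal sketch using plain grep; the /tmp path and the test URLs are made up for illustration (substitute your real "/etc/squid/bad_words.regex"):

```shell
#!/bin/sh
# Build a small pattern file like the one described above
# (written to /tmp purely for illustration).
cat > /tmp/bad_words.regex <<'EOF'
sex
porn
teen
EOF

# Check some sample URLs against it, case-insensitively,
# much as squid's "url_regex -i" ACL would.
for url in "http://www.example.com/teen-zone/" "http://www.linuxgazette.net/"; do
    if echo "$url" | grep -q -i -f /tmp/bad_words.regex; then
        echo "DENY:  $url"
    else
        echo "ALLOW: $url"
    fi
done
```

    Running it prints DENY for the first URL (it contains "teen") and ALLOW for the second -- a quick way to spot over-broad patterns before they annoy your users.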

    There is also a much easier way of filtering both regexes and domain names, using a program called SquidGuard. More about that later.....


    Initialising Squid

    Now we come to the most important part -- actually running squid. If this is the first ever time that you'll be initialising squid, then there are a few options that you must pass to it.

    Typically, the most common options that can be passed to squid, can be summed up in the following table.

    Flag Explanation
    -z This creates the swap directories that squid needs. This should only ever be used when running squid for the first time, or if your cache directories get deleted.
    -f This option allows you to specify an alternative configuration file, rather than the default "/etc/squid/squid.conf". However, this option is rarely needed.
    -k reconfigure This option tells squid to re-load its configuration file, without stopping the squid daemon itself.
    -k rotate This option tells squid to rotate its logs, and start new ones. This option is useful in a cron job.
    -k shutdown Stops the execution of Squid.
    -k check Checks to ensure that the squid daemon is up and running.
    -k parse Parses the configuration file and reports any errors, then exits without signalling the running daemon -- useful for checking your changes.
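    As the table notes, "-k rotate" sits naturally in a cron job. A hypothetical root crontab entry (the /usr/sbin/squid path is an assumption -- check where your distribution installed the binary) might look like:

```
# rotate squid's logs every night at midnight
0 0 * * * /usr/sbin/squid -k rotate
```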

    The full listing of the available options, however, is as follows:

    Usage: squid [-dhsvzCDFNRVYX] [-f config-file] [-[au] port] [-k signal]
           -a port   Specify HTTP port number (default: 3128).
           -d level  Write debugging to stderr also.
           -f file   Use given config-file instead of
                     /etc/squid/squid.conf
           -h        Print help message.
           -k reconfigure|rotate|shutdown|interrupt|kill|debug|check|parse
                     Parse configuration file, then send signal to 
                     running copy (except -k parse) and exit.
           -s        Enable logging to syslog.
           -u port   Specify ICP port number (default: 3130), disable with 0.
           -v        Print version.
           -z        Create swap directories
           -C        Do not catch fatal signals.
           -D        Disable initial DNS tests.
           -F        Foreground fast store rebuild.
           -N        No daemon mode.
           -R        Do not set REUSEADDR on port.
           -V        Virtual host httpd-accelerator.
           -X        Force full debugging.
           -Y        Only return UDP_HIT or UDP_MISS_NOFETCH during fast reload.
    

    If you are running squid for the first time, then log in as user "root" and type in the following:

    squid -z
    

    This will create the cache.

    Then you can issue the command:

    squid
    

    And that's it -- you have yourself a running proxy server. Well done!!


    A Brief Introduction: SquidGuard


    What is SquidGuard?

    SquidGuard is an external "redirect program": squid forwards the requests sent to it on to SquidGuard, which decides whether to let each one through or redirect it. SquidGuard's job is to allow much finer control of filtering than Squid itself offers.

    It should be pointed out, though, that for simple filters SquidGuard is not necessary -- Squid's own ACLs are quite capable of that.


    Installation

    SquidGuard is available from (funnily enough) http://www.squidguard.org/download. This site is very informative and has lots of useful information about how to configure SquidGuard.

    As with Squid, SquidGuard is available in both rpm and .tgz format.

    If your distribution supports the RPM format then you can install it in the following way:

    su - -c "rpm -i ./SquidGuard-1.2.1.noarch.rpm"
    

    Should your distribution not support the RPM format, then you can download the sources and compile it, in the following manner:

    tar xzvf ./SquidGuard-1.2.1.tgz
    cd ./SquidGuard-1.2.1
    ./configure
    make && make install
    

    The files should be installed in "/usr/local/squidguard/"


    Configuration

    Before we can actually start tweaking the main "/etc/squidguard.conf", we must first make one small change to our old friend "/etc/squid.conf". In the file, locate the TAG:

    #redirect_program none
    

    Uncomment it, and replace the word "none" with the path to the main SquidGuard binary. If you don't know where it is, then you can issue the command:

    whereis squidGuard
    

    And then enter the appropriate path and filename. Thus, it should now look like:

    redirect_program /usr/local/bin/squidGuard
    

    Save the file, and then type in the following:

    squid -k reconfigure
    

    Which will re-load the configuration file.

    Ok, now the fun begins. Having told squid that we will be using a redirect program to filter requests sent to it, we must now define rules to match that.

    SquidGuard's main configuration file is "/etc/squidguard.conf". Out of the box, this file looks like the following:

    -------------------

    (text version)

    logdir /var/squidGuard/logs
    dbhome /var/squidGuard/db
    
    src grownups {
        ip	   10.0.0.0/24	  # range 10.0.0.0  - 10.0.0.255
    			  # AND
        user   foo bar	  # ident foo or bar
    }
    
    src kids {
        ip	   10.0.0.0/22	  # range 10.0.0.0 - 10.0.3.255
    }
    
    dest blacklist {
        domainlist blacklist/domains
        urllist    blacklist/urls
    }
    
    acl {
        grownups {
    	pass all
        }
    
        kids {
    	pass !blacklist all
        }
    
        default {
    	pass none
    	redirect http://localhost/cgi/blocked?clientaddr=%a&clientname=%n&clientuser=%i&clientgroup=%s&targetgroup=%t&url=%u
        }
    }
    

    -------------------

    What I shall do, is take the config file in sections, and explain what each part of it does.

    logdir /var/squidGuard/logs
    dbhome /var/squidGuard/db
    

    The first line sets up the directory where the logfile will appear, and creates it if it does not exist.

    The second line sets up the directory where the database(s) of banned sites, expressions, etc, are stored.

    src grownups {
        ip	   10.0.0.0/24	  # range 10.0.0.0  - 10.0.0.255
    			  # AND
        user   foo bar	  # ident foo or bar
    }
    

    The above block of code sets up a number of things. Firstly, the src group "grownups" is defined by specifying an IP address range, and saying which users are members of this block. For convenience's sake, the generic names "foo" and "bar" are used here as an example.

    It should also be pointed out that the user TAG can only be used if an ident server is running on the client machines making the requests; otherwise it will be void.

    src kids {
        ip	   10.0.0.0/22	  # range 10.0.0.0 - 10.0.3.255
    }
    

    This section of statements sets up another block, this time called "kids" which is determined by a range of IP addresses, but no users.

    You can think of grownups and kids as being ACL names similar to those found in "/etc/squid.conf".

    dest blacklist {
        domainlist blacklist/domains
        urllist    blacklist/urls
        expression blacklist/expressions
    }
    

    This section of code is significant, since it defines a dest list tying a name ("blacklist") to specific filtering methods. There are three main ways that SquidGuard applies its filtering:

    1. domainlist -- lists domains, and only those, one per line, for example:

    nasa.gov.org
    squid-cache.org
    cam.ac.uk
    

    2. urllist -- specifies particular web pages (omitting the "www." prefix), e.g.

    linuxgazette.com/current
    cam.ac.uk/~users
    

    3. expression -- regex words that should be banned within the URL, thus:

    sex
    busty
    porn
    
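    The three lists live as plain files under the dbhome directory from earlier. As a sketch of how they are laid out (using a path under /tmp instead of "/var/squidGuard/db", so it can be tried without root -- adjust the path to match your real dbhome):

```shell
#!/bin/sh
# Create a miniature blacklist database with the same layout as the
# "dest blacklist" section above expects (the db path is a stand-in).
db=/tmp/squidGuard-db
mkdir -p $db/blacklist

cat > $db/blacklist/domains <<'EOF'
nasa.gov.org
squid-cache.org
cam.ac.uk
EOF

cat > $db/blacklist/urls <<'EOF'
linuxgazette.com/current
cam.ac.uk/~users
EOF

cat > $db/blacklist/expressions <<'EOF'
sex
porn
EOF

# Show what was created.
ls $db/blacklist
```

    With the real path in place, squidGuard picks these files up via the domainlist, urllist and expression clauses shown above.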

    The last block of code:-

    acl {
        grownups {
    	pass all
        }
    
        kids {
    	pass !blacklist all
        }
    
        default {
    	pass none
    	redirect http://localhost/cgi/blocked?clientaddr=%a&clientname=%n&clientuser=%i&clientgroup=%s&targetgroup=%t&url=%u
        }
    }
    

    Says that for the acl block, the "grownups" section passes all requests -- i.e. even those URLs / expressions that are contained within the dest blacklists are allowed.

    Then, it says that for the "kids" section, pass all requests except those contained within the dest blacklists. If a URL is matched by the dest blacklists, the request is then handled by the default section.

    The default section says that if a request comes from neither "grownups" nor "kids", then it won't allow access to the website, and will redirect the user to another web page -- most likely an error page.

    The variables passed with this redirect statement, specify the type of request, etc, which can then be processed by a cgi-script to produce a custom error message, for example.

    It should be pointed out that in order for filtering to take place, the following piece of code should be present:

    default {
      pass none
    }
    

    Either with or without the redirect clause.

    There are more advanced configuration options that can be used within this file. Examples can be found out at http://www.squidguard.org/configuration.

    Thus completes the tutorial for both Squid and SquidGuard. Further information can be found at all of the URLs embedded in this document, and at my website, which is at the following address:

    www.squidproxyapps.org.uk

    Keyfiles: A Handy BASH backup script

    OK, ok, I know you're all thinking: "Not another backup script". Well, there has been some talk of this on the TAG (The Answer Gang) mailing list recently, so I thought I'd jump on the band-wagon.....

    This script is really quite simple -- it uses a configuration file (plain text) which lists all of the files (and directories) that you want backed up, and then puts them in a gzipped tarball, in a specified location.

    Those of you who are familiar with BASH shell scripting might find this a little remedial; however, I hope that my in-line comments will aid those who are still trying to learn the shell.

    -------------------

    (Text Version)

    #!/bin/bash
    #################################################
    #Keyfiles - tar/gzip configuration files        #
    #Version:   Version 1.0 (first draft)           #
    #Ackn:      based on an idea from Dave Turnbull #
    #Author:    Thomas Adam				#
    #Date:      Monday 28 May 2001, 16:05pm BST     #
    #Website:   www.squidproxyapps.org.uk           #
    #Contact:   [email protected]        #
    #################################################
    
    #Comments herein are for the benefit of Dave Turnbull :-).
    
    #Declare Variables
    configfile="/etc/keyfiles.conf"
    tmpdir="/tmp"
    wrkdir="/var/log/keyfiles"
    tarfile=keyfiles-$(date +%d%m%Y).tgz
    method=$1           #options passed to "keyfiles"
    submethod=$2        #options supplied along with "$1"
    quiet=0       	    #Turns on verbosity (default)
    
    cmd=`basename $0`   #strip path from filename.
    optfiles="Usage: $cmd [--default (--quiet)] [--listconffiles] [--restore (--quiet)] [--editconf] [--delold] [--version]"
    version="keyfiles: Created by Thomas Adam, Version 1.0 (Tuesday 5 June 2001, 23:42)"
    
    #handle error checking...
    if [ ! -e $configfile ]; then
      for beepthatbell in 1 2 3 4 5; do
        echo -en "\x07"
      done
      mail -s "[Keyfiles]: $configfile not found" $USER < /dev/null
      exit 1
    fi
    
    #Make sure we have a working directory
    [ ! -d $wrkdir ] && mkdir $wrkdir
    
    #Parse options sent via command-line
    if [ -z $method ]; then
      echo $optfiles
      exit 0
    fi
    
    #Check command line syntax
    check_syntax ()
    {
      case $method in
        --default)
        cmd_default
        ;;
        --listconffiles)
        cmd_listconffiles
        ;;
        --restore)
        shift 1
        cmd_restore
        ;;
        --editconf)
        exec $EDITOR $configfile
        exit 0
        ;;
        --delold)
        cd $wrkdir && rm -f ./*.old > /dev/null
        exit 0
        ;;
        --version)
        echo $version
        exit 0
        ;;
        --*|-*|*)
        echo $optfiles
        exit 0
        ;;
      esac
    }
    
    #Now the work begins.....
    #declare function to use "--default" settings
    cmd_default ()
    {
    
      #tar/gz all files contained within $configfile
      
      if [ $submethod ]; then
        tar -czPpsf $tmpdir/$tarfile $(cat $configfile) > /dev/null 2>&1
      else
        tar -vczPpsf $tmpdir/$tarfile $(cat $configfile)
      fi
      
      #If the working directory is empty, just move the new tarball in...
      if [ $(ls -1 $wrkdir | wc -l) -eq 0 ]; then
        mv $tmpdir/$tarfile $wrkdir
        exit 0
      fi
      
      for i in $(ls $wrkdir/*.tgz); do
        mv $i $i.old
      done
     
      mv $tmpdir/$tarfile $wrkdir
    }
    
    #List files contained within $configfile
    cmd_listconffiles ()
    {
      sort -o $configfile $configfile
      cat $configfile 
      exit 0
    }
    
    #Restore files......
    cmd_restore ()
    {
      cp $wrkdir/keyfiles*.tgz /
      cd /
      
      #Check for quiet flag :-)
      if [ $submethod ]; then
        tar -xzmpf keyfiles*.tgz > /dev/null 2>&1
      else
        tar -xzvmpf keyfiles*.tgz
      fi
      rm -f /keyfiles*.tgz
      exit 0
    }
    
    #call the main function
    check_syntax
    

    -------------------

    Suffice it to say, the main changes that you might have to make are to the following variables:

    configfile="/etc/keyfiles.conf"
    tmpdir="/tmp"
    wrkdir="/var/log/keyfiles"
    

    However, my script is sufficiently intelligent to check for the presence of $wrkdir and, if it doesn't exist, create it.

    You will also have to make sure that you set the appropriate permissions, thus:

    chmod 700 /usr/local/bin/keyfiles
    

    The most important file, is the script's configuration file, which, for me, looks like the following:

    -------------------

    (Text Version)

    /etc/keyfiles.conf
    /etc/rc.config
    /home/*/.AnotherLevel/*
    /home/*/.fvwm2rc.m4
    /home/solent/ada/*
    /root/.AnotherLevel/*
    /root/.fvwm2rc.m4
    /usr/bin/header.sed
    /usr/bin/loop4mail
    /var/spool/mail/*
    

    -------------------

    Since this file is passed straight to the main tar program, the use of wildcards is valid, as in the above file.

    It should be pointed out that each time the script runs, the last backup file created, i.e. "keyfiles-DATE.tgz", is renamed to "keyfiles-DATE.tgz.old" before the new file takes its place.

    This is so that if you need to restore the backup file at anytime, my script knows which file to use by checking for a ".tgz" extension.

    Because of this feature, I have also included a "--delold" option which deletes all the old backup files from the directory.

    To use the program, type:

    keyfiles --default
    

    Which will start the backup process. If you want to suppress the verbosity, you can add the flag:

    keyfiles --default --quiet
    

    The other options that this program takes, are pretty much self-explanatory.
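    If you want the backups to happen unattended, the script also lends itself to cron. A hypothetical crontab entry (assuming you installed the script as /usr/local/bin/keyfiles, as in the chmod example above) could be:

```
# run the backup quietly at 2am every day...
0 2 * * * /usr/local/bin/keyfiles --default --quiet
# ...and purge the old tarballs once a week, on Sundays
30 2 * * 0 /usr/local/bin/keyfiles --delold
```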

    This backup script is by no means perfect, and there are better ones available. Any comments that you have, would be appreciated!!


    Program Review: Nedit

    Way, way back in the days when the illustrious founder of this special magazine, John Fisk, was writing this column, another author, Larry Ayers, used to do a series of program reviews. He mentioned briefly a new program called Nedit, but never reviewed it.

    So, I will :-)

    I have been using Nedit for about three years now. I do all of my work in it -- when I am in X11 that is. A typical window of Nedit, looks like this screenshot.

    This program offers a huge selection of features. Probably the most popular is the syntax highlighting, which covers a host of languages, including C, C++, Java, Fortran, Perl, Python, Tcl, sh, and HTML, among many others.

    If, for some bizarre reason, you program in an obscure language that is not listed, then you can specify your own regex patterns.

    Nedit also allows you to do complex search and replace methods by using case-sensitive regex pattern matches.

    A typical search / replace dialog box, looks like the following:

    Allowing you to form complex searches.

    Each of the menus can be torn off to become a sticky window of its own. This can be particularly useful if you use a particular menu over and over, and don't want to keep clicking on it each time.

    This program is over-loaded with options, many of which I am sure are useful, but I have not yet found a use for all of them. And as if that were not enough, Nedit allows you to write custom macros so that you can define even more exotic functions.

    I recommend this program to everyone, and while I don't want to reopen the Emacs / Vim argument, I really would consider it a viable alternative to the over-bloated "X11-Emacs" package that eats up far too much memory!! :-)

    You can get Nedit from the following:

    www.nedit.org

    Enjoy it :-)


    Closing Time

    Well, that concludes it for this month -- I had not expected it to be quite this long!! My academic year is more or less at a close, and I have exams coming up at the end of May. Then I shall be free over the summer to pursue all the Linux ideas that have been formulating in my brain (that is, what's left of it after Ben Okopnik brainwashed me) :-)

    Oh well, until next month -- take care.


    Send Your Comments

    Any comments, suggestions, ideas, etc can be mailed to me by clicking the e-mail address link below:

    mailto:[email protected]


    Thomas Adam

    My name is Thomas Adam. I am 18, and am currently studying for A-Levels (=university entrance exam). I live on a small farm, in the county of Dorset in England. I am a massive Linux enthusiast, and help with linux proxy issues while I am at school. I have been using Linux now for about six years. When not using Linux, I play the piano, and enjoy walking and cycling.


    Copyright © 2002, Thomas Adam.
    Copying license http://www.linuxgazette.net/copying.html
    Published in Issue 78 of Linux Gazette, May 2002

    "Linux Gazette...making Linux just a little more fun!"


    Linux User Caricatures

    By Franck Alcidi


    Due to popular demand, I created a Slackware geek caricature as well as a Red Flag geek caricature. The Slackware character comes across to me as being the very cool, confident Linux hacker. If you know Slackware, bets are you know Linux inside and out ;-)

    [Slackware geek cartoon]

     

    The Red Flag geek caricature comes from Asia. Being a Linux distribution developed in China, it was pretty clear-cut how this fellow was going to look (well, to me anyway). Let's hope this distribution continues to grow and place a bit of pressure on MS. I'm sure this particular distro is going to be very popular amongst our Asian buddies.

    [Red Flag geek cartoon]

    My previous LG cartoons: issue72 issue73 issue76

    Important - You can view my other artwork and sketches on my new website.

    Franck Alcidi

    Franck is an artist in Australia. His home page ("Artsolute Linux") is http://www.artsolute.net.


    Copyright © 2002, Franck Alcidi.
    Copying license http://www.linuxgazette.net/copying.html
    Published in Issue 78 of Linux Gazette, May 2002

    "Linux Gazette...making Linux just a little more fun!"


    Help Dex

    By Shane Collinge


    [These cartoons are scaled down to fit into LG. To see a panel in all its clarity, click on it. -Editor (Iron).]

    [cartoon]
    [cartoon]
    [cartoon]
    [cartoon]
    [cartoon]

    Cartoonist Shane is taking a long holiday in Asia and staying at youth hostels (YHAs). Carol and Tux decided to accompany him....
    [cartoon]
    [cartoon]
    [cartoon]

    Recent HelpDex cartoons are here at the CORE web site.

    Shane Collinge

    Part computer programmer, part cartoonist, part Mars Bar. At night, he runs around in a pair of colorful tights fighting criminals. During the day... well, he just runs around. He eats when he's hungry and sleeps when he's sleepy.


    Copyright © 2002, Shane Collinge.
    Copying license http://www.linuxgazette.net/copying.html
    Published in Issue 78 of Linux Gazette, May 2002

    "Linux Gazette...making Linux just a little more fun!"


    A Trip Down Hypermedia Lane

    By Ronnie Holm


    ...turning right on Future Avenue. This article adds some historical and architectural perspective on the world of hypermedia and what motivated its pioneers. The idea of hypermedia predates the World Wide Web by some forty-five years, so this article starts by describing their work. No one correct definition of the term hypermedia exists, but the article will supply a couple of possible definitions derived from the ideas of the pioneers.

    Afterwards, four major steps in the architectural evolution of actual hypermedia systems are described. When reading that part, keep in mind how software has generally evolved (away from a centralistic and toward a more modular design). Not surprisingly, this is also reflected in the development of hypermedia systems.

    1940s: Vannevar Bush and the Memex

    In the mid-forties the accumulated knowledge of mankind was growing rapidly. This made it exceedingly difficult for people to store and retrieve information in an efficient and intuitive manner. Bush [1] realized the problem of ``information overload'' and came up with a visionary solution for storage, organization and retrieval of information. He devised a mechanical device that would work by the same principle of associative indexing as the human brain and especially the human memory. The machine, called the Memex (short for Memory extension), made Bush a pioneer within a field later to be known as hypertext when dealing with text, and hypermedia when mixing several kinds of media. Today the terms hypertext and hypermedia are used interchangeably.

    The principle of hypertext is a well-known concept in literature. While one reads linearly through a text, it is often possible to jump to footnotes, annotations, or references to other materials. Bush imagined that parts of the text could be touched, whereby the reader would leave the linear way of reading and be taken directly to the footnote, the annotation, or some other material. This way of reading leans upon a possible definition of hypertext as a paradigm for managing information [2]. Where physical references can be difficult, or even impossible, to follow because the source referred to (e.g. an article or a book) is unavailable to the reader, with electronic hypertext it becomes possible to gather a corpus of information and radically change the way a document is read or accessed. One could take this idea one step further and enable the reader to add new links between documents, add comments to the links, or annotate parts of the documents themselves.

    It was Bush's vision that the Memex would make all these things, as well as a couple of others, mechanically possible. Nowadays, of course, what probably comes to one's mind when reading the previous paragraph is the World Wide Web [3], and maybe Bill Gates' vision in the mid-nineties of ``information at your fingertips'' [4]. The Memex, in contrast, would store information on microfilm within the machine, but the principle remains the same. The documents stored in the Memex were to be linked together using associative indexing, as opposed to numerical or alphabetical indexing. Using associative indexing, accessing data would become more intuitive for the user of the machine. Another definition of the term hypertext could then be a way of organizing information associatively [2]. Where associations in the brain become weaker as a function of time and the number of times the association is used to retrieve information, associations between documents in the Memex would retain their strength over time.

    Both previous definitions of the term hypertext are concerned with navigation or a way of navigating through a corpus of information. The Memex can thus be thought of as a navigational hypermedia system, allowing its users to jump between documents adding to the reading experience. This changed experience could form the basis of yet another possible definition (or a broader version of the previous one) of the term hypertext as a non-linear organization of information [2].

    1960s: Douglas Engelbart and the NLS

    Engelbart read Bush's article in the late-forties, but some fifteen years had to pass before the technology had reached a sufficient level of maturity for Engelbart to develop the world's first system utilizing Bush's concept of hypertext. NLS (oN-Line System) supported (1) the user in working with ideas, (2) the creation of links between different documents (in a broad sense), (3) teleconferencing, (4) text processing, (5) sending and receiving electronic mail, and finally enabled (6) the user to configure and program the system. This was something unheard of at that time. To better and more efficiently make this functionality available to the user, the system made use of some groundbreaking technologies for its time. Among other things Engelbart invented something akin to the mouse to enable the user to point and click on the screen, and a window manager to make the user interface appear in a consistent manner. The hypertext part comprised only a small part of NLS's overall functionality, whose major focus was on providing a tool for helping a geographically distributed team to better collaborate. Today, this kind of software is often referred to as groupware.

    The user interface was revolutionary and far ahead of its time for computer users at all levels. Previously, most programmers interacted with computers only indirectly through punch cards and output from a printer. NLS, as a whole, served as a source of inspiration for systems to come, and inspired Apple in the development of the graphical user interface in the early eighties.

    1960s: Ted Nelson and the Xanadu

    Like Engelbart, Nelson was inspired by Bush's early article [1]. But, unlike Bush and Engelbart, Nelson came from a background in philosophy and sociology. In the early sixties, he envisioned a computer system that would make it possible for writers to work together writing, comparing, revising, and finally publishing their work electronically.

    Nelson's Xanadu has never really moved beyond the visionary stage, although a release of the Xanadu system has been announced on several occasions. It is hard to define exactly what Xanadu is, as it is not so much a system in itself, but rather a set of ideas that other systems may adhere to. The name stems from a poem by British writer Coleridge, who used the word Xanadu to denote a world [10] of literary memory where nothing would be forgotten. And indeed, one of the ideas behind Xanadu was to create a docuverse: a virtual universe where most of the human knowledge is present. It was also Nelson who coined the term ``hypertext'' in the mid-sixties, although his definition was to be understood in the broad sense covering both hypertext and hypermedia.

    Another of Nelson's ideas was a special way of referencing other documents (or parts of them), such that a change in a referenced document would automatically propagate to the compound document that includes it; copying by reference, or creating a virtual copy, as Nelson put it. This way an author could charge money in return for providing and keeping his part of the overall document up to date. To some extent, this idea resembles today's deep links, although deep linking has spawned controversy over copyright, exactly the kind of problem Nelson's virtual copy mechanism was meant to prevent in the first place. Many of the original ideas from the Xanadu project eventually found their way into the World Wide Web and other hypermedia systems.

    Hypermedia architectures

    When describing the architecture of different kinds of hypermedia systems, three components are always present. The components and their purposes are briefly described below to better express why the evolution from monolithic to component-based systems has taken place. Even the earliest hypermedia systems made use of a classic three-tier model, with the application layer on top taking care of presenting information to the user. Below this layer is the link layer, which makes up the model of the system and takes care of managing structure and data. It is the associations, and the information needed to represent these associations, that is termed structure. Data, on the other hand, refers to the actual content of a document. Finally, the storage component takes care of storing information ranging from just the structure to both structure and content of the documents, depending on the system.

    The development has happened in stages where, for each new generation, some functionality previously part of the core of the system has been factored out into its own component (Figure [*]; the bounding box represents the components that are part of the core of the hypermedia system). The description of the architectures stems partially from [5].

    Monolithic systems

    The dominant architecture among early systems was the monolithic one (Figure [*], on the left). All three layers were contained within one logical process, although this division was invisible to the user. A monolithic system is considered a closed system in that it publishes neither an application programming interface (API) nor a protocol describing how structure and data are to be stored. This made it pretty much impossible for other systems to communicate and exchange data with the monolithic system. Even basic functionality, such as editing information stored in the system, was managed by internal applications supporting only a few data formats. So, before one could work on existing data, they had to be imported. This made it impossible to, say, directly store a document created in a word processor in the monolithic system. At least, not before the content of the document had been copied into the internal editor and saved.

    The file formats supported by the systems were limited to what the developers found useful. If you were to import the contents of a document created in a word processor, special formatting (part of the text made bold, a change in the choice of font, etc.) would be discarded. This put the user in a dilemma: if hypertext functionality was to be fully utilized, it happened at the expense of abandoning one's powerful and familiar application environment in return for using the internal applications of a hypermedia system. A far from ideal solution, because designers of hypermedia systems are specialists in developing hypermedia software, not word processors or other kinds of software.

    Along with the import problem came a related problem: The system is limited in the number of data formats it can create associations between. Both documents, or ends, of the association have to reside within the system boundary; that is, stored within the monolithic system. Export of data from the system was also far from straightforward, because the systems made use of their own internal format for storage; a format rarely supported by contemporary hypermedia systems, causing information to be lost during the export process as well.

    Despite these disadvantages, monolithic systems were widely used in the eighties. Maybe they owe a part of their success to the fact that other applications used in that period were not too keen on exchanging data and communicating with each other either. Examples of monolithic hypermedia systems are KMS [2,6], Intermedia [7], Notecards [8], and to some extent the Microsoft Winhelp system used to generate Windows help files. Although, strictly speaking, the Microsoft Winhelp system and a number of other help systems have a different primary use than traditional hypermedia systems, they nevertheless make use of hypermedia functionality.

    Figure: The monolithic (left), client/server (middle), and Open Hypermedia System architecture (right).
    \includegraphics[width=7cm]{architectures.eps}

    Client/server systems

    The description of monolithic systems revealed a number of shortcomings. As a solution to some of these problems, the user interface component was moved out of the core of the system and into its own process (Figure [*], in the middle; the shifted rectangles indicate that a number of applications may now access the hypermedia system). Client/server hypermedia systems come in two flavors: the link server system (LSS), with its primary focus on structure, that is, the associations between documents; and the hyperbase management system (HBMS), focusing on structure as well as content.

    From a software point of view, the client/server based hypermedia systems are open in the sense that they publish a protocol and an API for applications to use. If an existing application was to offer hypermedia functionality to its users, it would have to make use of these protocols and APIs. In the hypermedia world, however, the definition of openness differs from the general definition. A hypermedia system that requires the application to make use of a specific format for specifying both structure and data is considered a closed system, even if it publishes protocols and APIs. An open system, on the contrary, is one that only specifies a format for structure. By not imposing a particular format on the actual content itself, an open system is able to handle a lot of different data formats and create associations between types of data created by various applications outside the hypermedia system.

    From the general definition of openness it follows that the HTTP protocol of the World Wide Web is an open protocol in that it specifies a number of messages to be exchanged between the client and the server, and the expected responses. However, the structure is embedded within the HTML document as a number of hrefs and other tags specifying the structure. The implication of this is that special applications (browsers) are required for parsing HTML files looking for hrefs and other tags. That is why the World Wide Web is a closed hypermedia system when subjected to the hypermedia definition of openness, even though, as in any client/server system, there can be any number of applications making use of the core system, with information stored on the server.
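    To make the distinction concrete, here is a small sketch in Python (the field names are invented for illustration; no actual hypermedia system is being quoted) contrasting structure embedded in the data, as on the Web, with structure kept in a separate record, as in an open system:

```python
# On the Web, the association lives inside the document itself as an
# href, so only an HTML-aware application (a browser) can find it.
embedded = '<p>See the <a href="report.html">report</a>.</p>'

# An open hypermedia system keeps the structure outside the data; the
# document stays plain, and a separate link record points into it.
document = "See the report."
external_link = {"document": "notes.txt",   # where the anchor lives
                 "offset": 8, "length": 6,  # which characters it spans
                 "target": "report.html"}   # where the link leads

# The span the link refers to is recovered without parsing any markup.
span = document[external_link["offset"]:
                external_link["offset"] + external_link["length"]]
print(span)  # report
```

Because the structure stays outside the content, the same link record format works for plain text, images, or any other data type the applications can display.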

    Other systems, on the contrary, do not impose a particular format on the content of the documents. However, they still require the source code of the application to be modified to make calls to some API. So, the client/server based systems from the early nineties solved a number of problems present in the monolithic systems by not making the application component an integral part of the hypermedia system. An example of an LSS based system is Sun's Link Service [9], while the World Wide Web [3] is an exemplification of an HBMS system, storing documents as part of the system as files in a file system.

    Open Hypermedia Systems

    The OHS is a further development of the client/server concept, and therefore OHS's and client/server systems have a lot of features in common. Where client/server systems could be classified in terms of LSS and HBMS, an OHS is typically a descendant of one of these. An OHS is made up of only the link component (Figure [*], on the right), and is therefore often referred to as middleware in the sense that (1) the component contains functionality to be used or shared across a range of applications, (2) it works across different platforms, (3) it may be distributed, and finally (4) it publishes protocols and APIs. An OHS is distinguishable from a client/server system in that there is no central storage, as storing documents is no longer part of the core of the system.

    Because data is stored separately from structure, it is possible to support associations between just about any data format, e.g. text, HTML, and graphics. When structure associated with a document is requested by an application, it is sent from the link service to the application and applied to the data. This way a greater number of applications can interact with the system, as they no longer have to make use of a specific protocol for storing data, like HTML on the World Wide Web. Practically speaking, the structural information may consist of a number of attribute/value pairs, where the number of attributes varies depending on the type of data. For an image, coordinates may be specified, whereas for textual data an offset may be sufficient.
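    Such attribute/value pairs might look like the following sketch (the field names are invented for illustration and not taken from any particular OHS):

```python
# The link service hands the application a set of attribute/value
# pairs; which attributes are present depends on the media type.
text_anchor = {"document": "chapter1.txt",
               "offset": 120, "length": 8}   # a span of characters
image_anchor = {"document": "diagram.png",
                "x": 40, "y": 55,
                "width": 30, "height": 20}   # a region of an image

# A navigational link simply ties two such anchors together.
link = {"source": text_anchor, "target": image_anchor}

# A text viewer would highlight 8 characters starting at offset 120;
# an image viewer would outline the rectangle at (40, 55).
print(sorted(link["source"].keys()))
```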

    OHS's solved some of the problems introduced by the monolithic and the client/server systems, but are far from ideal. Every OHS defines its own protocols and APIs, and not all OHS's support the same functionality. Descendants of LSS systems typically only allow associations to be created between already existing documents, while descendants of HBMS systems, in addition to the LSS feature just mentioned, may also include content related functionality such as version and concurrency control. The result is that (1) an application written with a specific OHS in mind will not work with another system, (2) because of the different protocols and APIs, stored information cannot be shared across different systems, and (3) because of the lack of a common standard specifying a minimal protocol or API, every system implements its own API, making individual systems unable to communicate with each other. Furthermore, although quite a few other domains exist, most OHS's are designed with navigational hypermedia in mind. An example of an OHS descending from LSS is Microcosm [12], while an HBMS descendant is Hyperform [11].

    Component Based OHS's

    Component Based Open Hypermedia Systems (CB-OHS's) are very similar to ``simple'' open hypermedia systems. However, as the name implies, there is a greater focus on the notion of components. Besides the component issue, the thing to note here is that this kind of system supports several kinds of structural domains, and may store its data at different locations. So, it differs from the OHS primarily in the link component.

    Compared to the OHS's, the first generation CB-OHS's (1G CB-OHS) tried to solve the lack of cooperation between individual components by introducing standards. So far there is an agreed upon standard specifying how the application and the structure service in the navigational domain should communicate, and further standards are underway. Another goal of the 1G CB-OHS is that it should be possible to extend the system to support other domains as well, simply by adding a new structure service (that is, a new component) to support the new domain, e.g. the taxonomic or the spatial domain. Alternatively, an existing component could be modified to handle several domains, as was the case with the OHS. Compared to the CB-OHS, an OHS can be thought of as comprising just one structure service. However, modifying an existing component this way is not a very clean and flexible solution. Common to all structure components is that they access the storage component through the same API. The implication of this is that a new structure service will automatically ``inherit'' mechanisms for versioning, concurrency control, or whatever else the storage component has to offer.
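    The "inheritance" of storage mechanisms can be sketched in a few lines of Python (a hypothetical illustration; the class and method names are invented, not drawn from any real CB-OHS): because every structure service talks to the same storage component through one API, versioning implemented there once is available to all of them.

```python
class Store:
    """The shared storage component; versioning lives here, once."""
    def __init__(self):
        self.versions = {}
    def save(self, key, value):
        # Each save appends a new version instead of overwriting.
        self.versions.setdefault(key, []).append(value)
    def load(self, key, version=-1):
        # By default return the latest version.
        return self.versions[key][version]

class NavigationalService:
    """Structure service for the navigational domain."""
    def __init__(self, store):
        self.store = store
    def add_link(self, name, ends):
        self.store.save("nav:" + name, ends)

class SpatialService:
    """A new domain: just another client of the same storage API."""
    def __init__(self, store):
        self.store = store
    def place(self, name, position):
        self.store.save("spatial:" + name, position)

store = Store()
nav, spatial = NavigationalService(store), SpatialService(store)
nav.add_link("l1", ("a", "b"))
nav.add_link("l1", ("a", "c"))          # a second version of the link
spatial.place("n1", (10, 20))           # versioning came for free
print(store.load("nav:l1"))             # latest version: ('a', 'c')
print(store.load("nav:l1", version=0))  # first version: ('a', 'b')
```

Neither service implements versioning itself; both get it from the storage tier, which is the point the paragraph above makes.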

    For the 1G systems to meet these goals, the structure service makes a number of protocols and APIs available to its clients (the browser or whatever application wishes to communicate with the hypermedia system; because the system adheres to the hypermedia definition of openness, it can essentially be any type of application). Figure [*] shows an architecture with three structural components, each representing a structural domain. Among other things, a structural domain deals with the special abstractions used, e.g. node, link, and context within the navigational domain. As described in the previous section, the special abstractions within every domain make it a good candidate for a new component instead of intermixing the functionality with an existing one.

    Figure: A CB-OHS architecture
    \includegraphics[width=7cm]{architectures2.eps}

    The structure component communicates with the storage component (called the hypermedia store), but because the components no longer exist within a single process boundary, some additional work has to go into the communication process. Local communication can be handled by some form of Interprocess Communication (IPC) or Local Procedure Call (LPC), but across a network things get complicated. To support network communication, a lot of work went into the development of custom component frameworks. This is also the main difference between the first and the second generation of CB-OHS's. Where the 1G CB-OHS made use of custom frameworks, the 2G CB-OHS makes use of general frameworks like COM or CORBA. The developer can then focus on developing hypermedia functionality and ignore the lower level details of the communication process. The problem with integrating existing applications still exists, though, because modifying an existing application to make use of a component framework is generally a non-trivial task.

    The definition of standards, such as the one between the structure component and the application, is a result of the work of the Open Hypermedia Systems Working Group (OHSWG). As standards evolve they will benefit users at all levels [13]. The end user will come to think of hypermedia functionality in the same way as cut, copy, and paste today [12]; as something that is a natural ingredient of every application. At some point in the future it might be possible to add menu items such as ``Start link'' and ``Finish link'' to every application, and implementing them will be no more difficult than today's cut, copy, and paste. For producers of content, common standards will also come in handy, as documents and structures may be reused across platforms and hypermedia system boundaries. Finally, besides the editing functionality previously described, the developer will be able to rely on what a standardized system offers, regardless of the actual system, as long as it adheres to agreed upon standards.

    Summary

    Hypermedia systems have emerged from a need for organizing an ever growing pile of information better than by simply storing things alphabetically. Since Bush described his thoughts on a machine that functionally resembled the way human memory works, the knowledge of mankind has doubled many times, and the World Wide Web has replaced many of the earlier hypermedia systems and made quite a few of the pioneers' visions come true. However, at the same time, it is worth noting that the World Wide Web is a very simple system compared to earlier as well as contemporary systems. But this simplicity itself might very well be the main reason behind its success in delivering hypermedia functionality to the general public.

    The architecture has undergone a gradual development, much like the architecture of any other software. The monolithic systems were not too keen on acknowledging the existence of other systems. Since then, things have changed radically, and the systems of today are designed to import and export data from and to a variety of formats. The common denominator for import and export is often SGML or W3C-standardized derivatives of it like XML and HTML. Add to this the ability of systems to better allow reuse of functionality across different systems.

    It is also worth noting that HTML, the basic data format of the World Wide Web, the dominant hypermedia system in use today, keeps structure and data together, and therefore the World Wide Web is not considered open in the hypermedia sense. Several (successful) attempts have been made to make the World Wide Web a (component based) open hypermedia system. All in all, hypermedia is a very large area of ongoing research, and there is a lot of elaborating material available on the systems and the concepts briefly touched upon in this article.

    Copyright (C) 2002 Ronnie Holm. Please email me and let me know where this article is being used. Verbatim copying and redistribution of this entire article is permitted in any medium if this notice is preserved.

    Bibliography

    1
    Vannevar Bush, As we may think, http://www.ps.uni-sb.de/~duchier/pub/vbush/vbush.shtml

    2
    Jeff Conklin, Hypertext: An introduction and survey

    3
    Tim Berners-Lee et al., The World Wide Web

    4
    Bill Gates, The Road Ahead

    5
    Uffe Kock Wiil et al., Evolving hypermedia middleware services: lessons and observations, http://www.cs.aue.auc.dk/~kock/Publications/Construct/sac99.pdf

    6
    Robert Akscyn et al., KMS: A distributed hypermedia system for managing knowledge in organisations

    7
    Nicole Yankelovich et al., Intermedia: The concept and the construction of a seamless information environment

    8
    Frank Halasz et al., Reflections on Notecards: Seven issues for the next generation of hypermedia systems

    9
    Amy Pearl, Sun's link service: A protocol for open linking

    10
    Samuel Taylor Coleridge, Kubla Khan, http://www.geocities.com/chadlupkes/poetry/xanadu.html

    11
    Uffe Kock Wiil et al., Hyperform: Using extensibility to develop dynamic, open and distributed hypermedia systems, http://www.cs.aue.auc.dk/~kock/Publications/Hyperform/echt92.pdf

    12
    Hugh Davis et al., Light Hypermedia Link Services: A study of third party application integration

    13
    Siegfried Reich et al., Addressing interoperability in open hypermedia: The design of the open hypermedia protocol


    Copyright © 2002, Ronnie Holm.
    Copying license http://www.linuxgazette.net/copying.html
    Published in Issue 78 of Linux Gazette, May 2002

    "Linux Gazette...making Linux just a little more fun!"


    Rapid application development using PyGTK

    By Krishnakumar R.


    In a competitive world, there is a definite edge in developing applications as rapidly as possible. This can be done using PyGTK, which combines the robustness of Python with the raw power of GTK. This article is a hands-on tutorial on building a scientific calculator using PyGTK.


    1. What is PyGTK ?

    Well, let me quote from the PyGTK source distribution:

       "This archive contains modules that allow you to use gtk in Python
       programs.  At present, it is a fairly complete set of bindings.
       Despite the low version number, this piece of software is quite
       useful, and is usable to write moderately complex programs."
                                                    - README, pygtk-0.6.4
    

    2. What are we going to do ?

    We are going to build a small scientific calculator using PyGTK. I will explain each stage in detail. Going through each step of this process will help you get acquainted with PyGTK. I have also put a link to the complete source code at the end of the article.

    3. Packages and Basic knowledge you should have

    python

    This package is available with almost every Linux distribution. My explanation is based on Python 1.5.2 installed on a Red Hat Linux 6.2 machine. It would be good if you know how to program in Python. Even if you do not, do not worry! Just follow the instructions given in the article.

    pygtk

    Newer versions of this package are available from:

    1. ftp://ftp.daa.com.au/pub/james/python
    2. ftp://ftp.gtk.org/pub/gtk/python/
    3. ftp://ftp.python.org/pub/contrib/Graphics
    My explanation is based on pygtk-0.6.4.

    4. Let us start

    The tutorial has been divided into three stages. The code and the corresponding output are given with each stage.

    5. Stage 1 - Building a Window

    First we need to create a window. A window is actually a container: the buttons, tables, etc. will go inside it. Open a new file, stage1.py, using an editor and enter the following lines:


    from gtk import *
    
    win = GtkWindow()
    
    def main():
            win.set_usize(300, 350)
            win.connect("destroy", mainquit)
            win.set_title("Scientific Calculator")
            win.show()
            mainloop()
    
    main()
            
    

    The first line imports the methods from the module named gtk. That means we can now use the functions present in the gtk library.

    Then we make an object of type GtkWindow and name it win. After that we set the size of the window: the first argument is the width and the second argument is the height. We also set the title of our window. Then we call the method named show. This method is present in all objects. After setting the parameters of a particular object, we should always call show. Only when we call the show of a particular object does it become visible to the user. Remember that although you may create an object logically, until you call show on that object, it will not be physically visible.

    We connect the destroy signal of the window to the function mainquit. mainquit is an internal function of gtk which, when called, closes the presently running application. Do not worry about signals for now; just understand that whenever we destroy the window (perhaps by clicking the cross mark at the top of the window), mainquit will be called. That is, when we close the window, the application quits as well.

    mainloop() is also an internal function of the gtk library. When we call mainloop, the launched application waits in a loop for some event to occur. Here the window appears on the screen and just waits. It is waiting in the 'mainloop' for our actions. Only when we close the window does the application come out of the loop.

    Save the file. Quit the editor and come to the shell prompt. At the prompt, type:

    python stage1.py

    Remember, you should be running the X Window System to view the output.

    A screen shot of output is shown below.

    stage1.png


    6. Stage 2 - Building the table and buttons

    Let us start writing the second file, stage2.py. Write the following code to file stage2.py.


    
    
    from gtk import *
     
    rows=9
    cols=4
     
     
    win = GtkWindow()
    box = GtkVBox()
    table = GtkTable(rows, cols, FALSE)
    text = GtkText()
    close  = GtkButton("close")
     
    button_strings=['hypot(','e',',','clear','log(','log10(','pow(','pi','sinh(','cosh(','tanh(','sqrt(','asin(',
    'acos(','atan(','(','sin(','cos(','tan(',')','7','8','9','/','4','5','6','*','1','2','3','-', '0','.','=','+'
    ]
    button = map(lambda i:GtkButton(button_strings[i]), range(rows*cols))
     
     
     
     
    def main():
            win.set_usize(300, 350)
            win.connect("destroy", mainquit)
            win.set_title("Scientific Calculator")
     
            win.add(box)
            box.show()
     
            text.set_editable(FALSE)
            text.set_usize(300,1)
            text.show()
            text.insert_defaults(" ")
            box.pack_start(text)
     
            table.set_row_spacings(5)
            table.set_col_spacings(5)
            table.set_border_width(0)                                                                            
    
            box.pack_start(table)
            table.show()
     
            for i in range(rows*cols) :
                  y,x = divmod(i, cols)
                  table.attach(button[i], x,x+1, y,y+1)
                  button[i].show()
     
            close.show()
            box.pack_start(close)
     
            win.show()
            mainloop()
     
    main()
                                                       
    

    The variables rows and cols are used to store the number of rows and columns of buttons, respectively. Four new objects are created: the table, the box, the text box, and a button. The argument to GtkButton is the label of the button, so close is a button with the label "close".

    The array button_strings is used to store the labels of the buttons. The symbols that appear on the keypad of a scientific calculator are used here. The variable button is an array of buttons. The map function creates rows*cols buttons. The label of each button is taken from the array button_strings, so the i-th button will have the i-th string from button_strings as its label. The range of i is from 0 to rows*cols-1.
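    The map/lambda idiom can be seen in isolation with plain strings standing in for buttons (a toy example, independent of gtk):

```python
labels = ['7', '8', '9', '/']

# map applies the lambda to each index, producing one labeled item per
# string. In Python 1.5, map() returned a list directly; in modern
# Python it returns an iterator, hence the list() wrapper here.
buttons = list(map(lambda i: "Button(%s)" % labels[i], range(len(labels))))
print(buttons)  # ['Button(7)', 'Button(8)', 'Button(9)', 'Button(/)']
```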

    We insert a box into the window. Into this box we insert the table, and into this table we insert the buttons. The corresponding show methods of the window, table, and buttons are called after they are logically created. With win.add we add the box to the window.

    The call text.set_editable(FALSE) sets the text box as non-editable. That means we cannot externally add anything to the text box by typing. text.set_usize sets the size of the text box, and text.insert_defaults inserts a single space as the default string in the text box. This text box is packed into the start of the box.

    After the text box, we insert the table into the box. Setting the attributes of the table is trivial. The for loop inserts 4 buttons into each of the 9 rows. The statement y,x = divmod(i, cols) divides the value of i by cols, keeping the quotient in y and the remainder in x.
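    The effect of divmod on the grid placement can be checked in isolation (a toy example, not part of the tutorial's files):

```python
cols = 4  # same column count as in stage2.py

def grid_position(i, cols=cols):
    """Map a flat button index onto a (row, column) cell."""
    y, x = divmod(i, cols)
    return y, x

print(grid_position(0))   # (0, 0): first button, top-left cell
print(grid_position(5))   # (1, 1): second row, second column
print(grid_position(35))  # (8, 3): last of the 36 buttons
```

This is exactly the (y, x) pair passed to table.attach in the loop.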

    Finally we insert the close button into the box. Remember, pack_start inserts the object into the next free space available within the box.

    Save the file and type

    python stage2.py

    A screen shot of the output is given below.

    stage2.png


    7. Stage 3 - Building the backend for the calculator

    Some functions have to be written to make the application do the work of a calculator. These functions are termed the backend. The lines below are to be typed into scical.py. This is the final stage; scical.py contains the finished program:


    from gtk import *
    from math import *
     
    toeval=' '
    rows=9
    cols=4
     
    win = GtkWindow()
    box = GtkVBox()
    table = GtkTable(rows, cols, FALSE)
    text = GtkText()
    close  = GtkButton("close")
     
    button_strings=['hypot(','e',',','clear','log(','log10(','pow(','pi','sinh(','cosh(','tanh(','sqrt(','asin(','acos(','atan(','(','sin(','cos(','tan(',')','7','8','9','/','4','5','6','*','1','2','3','-', '0','.','=','+']
    button = map(lambda i:GtkButton(button_strings[i]), range(rows*cols))
    
    def myeval(*args):
            global toeval
            try   :
                    b=str(eval(toeval))
            except:
                    b= "error"
                    toeval=''
            else  : toeval=b                                    
            text.backward_delete(text.get_point())
            text.insert_defaults(b)
    
    
    
    def mydel(*args):
            global toeval
            text.backward_delete(text.get_point())
            toeval=''
     
    def calcclose(*args):
            global toeval
            myeval()
            win.destroy()
     
    def print_string(args,i):
            global toeval
            text.backward_delete(text.get_point())
            text.backward_delete(len(toeval))
            toeval=toeval+button_strings[i]
            text.insert_defaults(toeval)
     
     
    def main():
            win.set_usize(300, 350)
            win.connect("destroy", mainquit)
            win.set_title("Scientific Calculator: scical (C) 2002 Krishnakumar.R, Share Under GPL.")
     
            win.add(box)
            box.show()
     
            text.set_editable(FALSE)                       
            text.set_usize(300,1)
            text.show()
            text.insert_defaults(" ")
            box.pack_start(text)
     
            table.set_row_spacings(5)
            table.set_col_spacings(5)
            table.set_border_width(0)
            box.pack_start(table)
            table.show()
     
            for i in range(rows*cols) :
                  if i==(rows*cols-2) : button[i].connect("clicked",myeval)
                  elif  (i==(cols-1)) : button[i].connect("clicked",mydel)
                  else                : button[i].connect("clicked",print_string,i)
                  y,x = divmod(i, 4)
                  table.attach(button[i], x,x+1, y,y+1)
                  button[i].show()
     
            close.show()
            close.connect("clicked",calcclose)
            box.pack_start(close)
     
            win.show()
            mainloop()
     
    main()                                              
    

    A new variable, toeval, has been included. This variable stores the string that is to be evaluated. The string to be evaluated is shown in the text box at the top. This string is evaluated when the = button is pressed, by calling the function myeval. The string contents are evaluated using the Python function eval, and the result is printed in the text box. If the string cannot be evaluated (due to some syntax error), then the string 'error' is printed. We use try and except for this.
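    The evaluate-or-report-error pattern can be tried on its own at the Python prompt (a minimal sketch; the name evaluate is mine, and unlike the tutorial's myeval this version does not touch the text box):

```python
from math import *  # makes sqrt, sin, log, pi, etc. visible to eval

def evaluate(expression):
    """Return the value of the expression as a string, or 'error'."""
    try:
        return str(eval(expression))
    except Exception:  # syntax errors, bad arguments, etc.
        return "error"

print(evaluate("sqrt(16) + 1"))  # 5.0
print(evaluate("1 +"))           # error
```

Because the math names were imported into the module's namespace, strings like "sqrt(16)" evaluate directly, which is why the button labels in the program are spelled as function calls.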

    Pressing any button (using the mouse) other than clear, close, and = triggers the function print_string. This function first clears the text box, then appends the string corresponding to the pressed button to the variable toeval, and displays toeval in the text box.

    If we press the close button, the function calcclose is called, which destroys the window. If we press the clear button, the function mydel is called and the text box is cleared. In the function main, we have added three new statements to the for loop. They assign the corresponding functions to the buttons: the = button is attached to the myeval function, clear is attached to mydel, and so on.

    Thus we have the complete scientific calculator ready. Just type python scical.py at the shell prompt and you have the scientific calculator running.

    A snapshot of final application is given below.

    scical.png


    8. Conclusion

    The source code of the stages can be downloaded by clicking at the links below.

    1. stage1.py
    2. stage2.py
    3. scical.py

    They all have a .txt extension. Remove this extension before running the programs; for example, change stage1.py.txt to stage1.py before executing.

    A lot of example programs come in the examples directory of the pygtk package. On Red Hat Linux 6.2 you can find it under the /usr/doc/pygtk-0.6.4/examples/ directory. Run those programs and read their source code. This will give you ample help in developing complex applications.

    Happy programming. Goodbye!

    Krishnakumar R.

    Krishnakumar is a final year B.Tech student at Govt. Engg. College Thrissur, Kerala, India. His journey into the land of operating systems started with module programming in Linux. He has built a routing operating system named GROS (details available at his home page: www.askus.way.to). His other interests include networking, device drivers, compiler porting, and embedded systems.


    Copyright © 2002, Krishnakumar R.
    Copying license http://www.linuxgazette.net/copying.html
    Published in Issue 78 of Linux Gazette, May 2002

    "Linux Gazette...making Linux just a little more fun!"


    Qubism

    By Jon "Sir Flakey" Harsem


    [These cartoons are scaled down to fit into LG. To see a panel in all its clarity, click on it. -Editor (Iron).]

    [cartoon]
    [cartoon]
    [cartoon]

    All Qubism cartoons are here at the CORE web site.

    Jon "SirFlakey" Harsem

    Jon is the creator of the Qubism cartoon strip and current Editor-in-Chief of the CORE News Site. Somewhere along the early stages of his life he picked up a pencil and started drawing on the wallpaper. Now his cartoons appear 5 days a week on-line, go figure. He confesses to owning a Mac but swears it is for "personal use".


    Copyright © 2002, Jon "Sir Flakey" Harsem.
    Copying license http://www.linuxgazette.net/copying.html
    Published in Issue 78 of Linux Gazette, May 2002



    GUI Programming in C++ using the Qt Library, part 1

    By Gaurav Taneja


    In the vast world of GUI development libraries, one library stands apart: 'Qt', a C++ library developed by Trolltech AS. 'Qt' was commercially introduced in 1996, and since then many sophisticated user interfaces have been developed with this library for a wide variety of applications.

    Qt is cross-platform: it supports MS Windows, Unix/X11 (Linux, Sun Solaris, HP-UX, Digital Unix, IBM AIX, SGI IRIX and many other flavors), Macintosh (Mac OS X) and embedded platforms. Apart from this, 'Qt' is object-oriented and component-based, and it puts a rich variety of widgets at the programmer's disposal. 'Qt' is available commercially as the 'Qt Professional' and 'Qt Enterprise' Editions. The Free Edition is the non-commercial version of Qt and is freely available for download (www.trolltech.com).

    Getting Started

    First of all you need to download the library. I assume that you have downloaded the Qt/X11 version for Linux, as the examples will use it.

    You might require superuser privileges to install, so make sure you are 'root'.

    Let's untar it into the /usr/local directory:

    [root@Linux local]# tar -zxvf qt-x11-free-3.0.1.tar.gz

    [root@Linux local]# cd qt-x11-free-3.0.1

    Next you will need to compile and install the library with the options you require. The 'Qt' library can be compiled with custom options suiting your needs. We will compile it so that, apart from the basic features, we get GIF reading, threading, STL, remote control, Xinerama, XftFreeType (anti-aliased font) and X Session Management support.

    Before we proceed further, remember to set some environment variables that point to the correct location as follows:

    QTDIR=/usr/local/qt-x11-free-3.0.1
    PATH=$QTDIR/bin:$PATH
    MANPATH=$QTDIR/man:$MANPATH
    LD_LIBRARY_PATH=$QTDIR/lib:$LD_LIBRARY_PATH

    export QTDIR PATH MANPATH LD_LIBRARY_PATH

    You can include this information in your .profile in your home directory.

    [root@Linux qt-x11-free-3.0.1]# ./configure -qt-gif -thread -stl -remote -xinerama -xft -sm

    [root@Linux qt-x11-free-3.0.1]# make install

    If all goes well, you will have the 'Qt' library installed on your system.



    Your First Steps With 'Qt'

    In order to start writing programs in C++ using the 'Qt' library, you will need to understand some important tools and utilities that come with it to ease your job.


    Qmake

    Qmake lets you generate makefiles based on the information in a '.pro' project file.

    A simple project file looks something like this:

        SOURCES = hello.cpp
        HEADERS = hello.h
        CONFIG += qt warn_on release
        TARGET  = hello

    Here, 'SOURCES' defines all the implementation source files for the application. If you have more than one source file, you can define them like this:

    SOURCES = hello.cpp newone.cpp

    or alternatively by:
    
        SOURCES += hello.cpp
        SOURCES += newone.cpp

    Similarly, 'HEADERS' lets you specify the header files belonging to your source. The 'CONFIG' line gives qmake information about the application's configuration. The project file's name should be the same as the application's executable name, which in our case means 'hello.pro'.

    The Makefile can be generated by issuing the command:

    [root@Linux mydirectory]# qmake -o Makefile hello.pro  

    Qt Designer

    Qt Designer is a tool that lets you visually design and code user interfaces using the 'Qt' library. The WYSIWYG interface comes in very handy for minutely tweaking the user interface and experimenting with various widgets. The Designer is capable of generating the entire source for the GUI at any time for you to enhance further. You will read more about Qt Designer in the articles that follow.

     

    Hello World!

    Let's begin by understanding a basic 'Hello World' program. Use any source editor of your choice to write the following code:

    #include <qapplication.h>
    #include <qpushbutton.h>

    int main( int argc, char **argv )
    {
        QApplication a( argc, argv );
        QPushButton hello( "Hello world!", 0 );
        hello.resize( 100, 30 );
        a.setMainWidget( &hello );
        hello.show();
        return a.exec();
    }

    Save this code as a plain text file ('hello.cpp'). Now let's compile this code by making a project file (.pro) as follows:

    TEMPLATE = app
    CONFIG += qt warn_on release
    HEADERS =
    SOURCES = hello.cpp
    TARGET = hello

    Let's save this file as 'hello.pro' in the same directory as that of our source file and continue with the generation of the Makefile.

    [root@Linux mydirectory]# qmake -o Makefile hello.pro

    Compile it using 'make'

    [root@Linux mydirectory]# make
    
    
    You are now ready to test your first 'Qt' wonder. Provided you are in 'X', you can launch the program executable:

    [root@Linux mydirectory]# ./hello

    You should see something like this: Snapshot

    Let's understand the individual chunks of the code we've written. The first two lines include the QApplication and QPushButton class definitions. Always remember that there must be exactly one QApplication object in your entire application. As with other C++ programs, the main() function is the entry point to the program; argc is the number of command-line arguments and argv is the array of command-line arguments. You pass these arguments on to Qt like this:

    QApplication a( argc, argv );

    Next we create a QPushButton object, passing its constructor two arguments: the label of the button and its parent window (0 here, meaning the button appears in its own window). We resize the button with:

    hello.resize( 100, 30 );

    A Qt application can optionally have a main widget associated with it; when the main widget is closed, the application terminates. We set our main widget with:

    a.setMainWidget( &hello );

    Next we make our widget visible. You must always call show() in order to make a widget visible:

    hello.show();

    Finally, we pass control to Qt. An important point to note here is that exec() keeps running as long as the application is alive, and returns when the application exits.

    Gaurav Taneja

    I work as a technical consultant in New Delhi, India, in Linux/Java/XML/C++. I'm actively involved in open-source projects, with some hosted on SourceForge. My favorite leisure activities include long drives, tennis, watching movies and partying. I also run my own software consulting company named BroadStrike Technologies.


    Copyright © 2002, Gaurav Taneja.
    Copying license http://www.linuxgazette.net/copying.html
    Published in Issue 78 of Linux Gazette, May 2002



    Xlib Programming in C++

    By Rob Tougher


    1. Introduction
    2. Why not use a widget set?
    3. The basics
    3.1 Opening a display
    3.2 Creating a window
    3.3 Handling events
    3.4 Drawing
    4. Advanced - creating a command button from scratch
    4.1 Requirements of the button
    4.2 Giving it its own window
    4.3 Implementing "pressed" and "not pressed" drawn states
    4.4 Figuring out which state to draw
    4.5 Giving it a "text" property
    4.6 Generating an "on_click()" event
    5. Conclusion
    a. References
    b. Files

    1. Introduction

    Xlib is a library that allows you to draw graphics on the screen of any X server, local or remote, using the C language. All you need to do is include <X11/Xlib.h>, link your program using the -lX11 switch, and you are ready to use any of the functions in the library.

    For example, say you want to create and show a window on your local machine. You can write the following:

    Listing 1: example1.cpp
    #include <X11/Xlib.h>
    #include <unistd.h>
    
    int main()
    {
      // Open a display.
      Display *d = XOpenDisplay(0);
    
      if ( d )
        {
          // Create the window
          Window w = XCreateWindow(d, DefaultRootWindow(d), 0, 0, 200,
    			       100, 0, CopyFromParent, CopyFromParent,
    			       CopyFromParent, 0, 0);
    
          // Show the window
          XMapWindow(d, w);
          XFlush(d);
    
          // Sleep long enough to see the window.
          sleep(10);
        }
      return 0;
    }
    

    You can compile the program with the following command:

    prompt$ g++ test.cpp -L/usr/X11R6/lib -lX11
    prompt$ ./a.out
    

    and voilà, you have a window on your screen for 10 seconds:

    The purpose of this article is to show you some simple classes that you can use when developing Xlib applications. We will create an example application that has a window with one button on it. The button will be a custom button we develop using only the Xlib library.

    2. Why not use a widget set?

    You might be asking yourself, "why don't we just use a widget library, like QT, or GTK?" That's a valid question. I use QT, and find it very useful when developing C++ applications targeted for the Linux platform.

    The reason I created these classes was to get a better understanding of the X Window System. It forced me to figure out exactly what was going on under the hood in libraries like QT and GTK. Once I had finished, I realized that the classes I created were actually useful.

    So hopefully you will find this article educational, and be able to use the classes presented in your own applications.

    3. The basics

    Now let's dive into some code. We'll go over some basic features of Xlib in this section.

    3.1 Opening a display

    The first class I created was the display class, which was in charge of opening and closing a display. You'll notice that in example1.cpp, we don't close our display properly with XCloseDisplay(). With this class, it will be closed before the program exits. Our example now looks like this:

    Listing 2: example2.cpp
    #include <unistd.h>
    #include <iostream>
    
    #include "xlib++/display.hpp"
    using namespace xlib;
    
    int main()
    {
      try
        {
          // Open a display.
          display d("");
    
          // Create the window
          Window w = XCreateWindow((Display*)d,
    			       DefaultRootWindow((Display*)d),
    			       0, 0, 200, 100, 0, CopyFromParent,
    			       CopyFromParent, CopyFromParent, 0, 0);
    
          // Show the window
          XMapWindow(d, w);
          XFlush(d);
    
          // Sleep long enough to see the window.
          sleep(10);
        }
      catch ( open_display_exception& e )
        {
          std::cout << "Exception: " << e.what() << "\n";
        }
      return 0;
    }
    

    Nothing spectacular, really. Just opens and closes a display. You'll notice in the implementation that the display class defines the Display* operator, so all you have to do is cast the object to get the actual Xlib Display pointer.

    Also notice the try/catch block. All of the classes in this article throw custom exceptions to signal error conditions.

    3.2 Creating a window

    Next I wanted to make window creation easier, so I added a window class to the mix. This class creates and shows a window in its constructor, and destroys the window in its destructor. Our example now looks like this (pay no attention to the event_dispatcher class; we will go over that next):

    Listing 3 : example3.cpp
    #include <iostream>
    #include "xlib++/display.hpp"
    #include "xlib++/window.hpp"
    using namespace xlib;
    
    class main_window : public window
    {
     public:
      main_window ( event_dispatcher& e ) : window ( e ) {};
      ~main_window(){};
    };
    
    int main()
    {
      try
        {
          // Open a display.
          display d("");
    
          event_dispatcher events ( d );
          main_window w ( events ); // top-level
          events.run();
        }
      catch ( exception_with_text& e )
        {
          std::cout << "Exception: " << e.what() << "\n";
        }
      return 0;
    }
    

    Notice that our main_window class inherits from xlib::window. When we create the main_window object, the base class' constructor gets called, which creates the actual Xlib window.

    3.3 Handling events

    You probably noticed the event_dispatcher class in the last example. This class takes events off of the application's queue, and dispatches them to the correct window.

    This class is defined as the following:

    Listing 4 : event_dispatcher.hpp
          class event_dispatcher
    	{
    	  // constructor, destructor, and others...
    	  [snip...]
    
    	  void register_window ( window_base *p );
    	  void unregister_window ( window_base *p );
    	  void run();
    	  void stop();
    	  void handle_event ( event e );
    	};
    

    The event_dispatcher passes events to window classes via the window_base interface. All of the classes in this article that represent windows derive from this class, and are able to catch messages from the dispatcher. Once they register themselves with the register_window method, they start receiving messages. window_base is declared as the following, and all classes deriving from it must define these methods:

    Listing 5 : window_base.hpp
          virtual void on_expose() = 0;
    
          virtual void on_show() = 0;
          virtual void on_hide() = 0;
    
          virtual void on_left_button_down ( int x, int y ) = 0;
          virtual void on_right_button_down ( int x, int y ) = 0;
    
          virtual void on_left_button_up ( int x, int y ) = 0;
          virtual void on_right_button_up ( int x, int y ) = 0;
    
          virtual void on_mouse_enter ( int x, int y ) = 0;
          virtual void on_mouse_exit ( int x, int y ) = 0;
          virtual void on_mouse_move ( int x, int y ) = 0;
    
          virtual void on_got_focus() = 0;
          virtual void on_lost_focus() = 0;
    
          virtual void on_key_press ( character c ) = 0;
          virtual void on_key_release ( character c ) = 0;
    
          virtual void on_create() = 0;
          virtual void on_destroy() = 0;
    

    Let's see if this actually works. We will try to handle a ButtonPress event in our window. Add the following code to our main_window class:

    Listing 6 : example4.cpp
    class main_window : public window
    {
     public:
      main_window ( event_dispatcher& e ) : window ( e ) {};
      ~main_window(){};
    
      void on_left_button_down ( int x, int y )
      {
        std::cout << "on_left_button_down()\n";
      }
    
    };
    

    Compile the code, run the example, and click inside of the window. It works! The event_dispatcher gets a ButtonPress message, and sends it to our window via the predefined on_left_button_down method.

    3.4 Drawing

    Next let's try to draw in our window. The X Window system defines the concept of a "graphics context" that you draw into, so I naturally created a class named graphics_context. The following is the class' definition:

    Listing 7 : graphics_context.hpp
      class graphics_context
        {
        public:
          graphics_context ( display& d, int window_id );
          ~graphics_context();
    
          void draw_line ( line l );
          void draw_rectangle ( rectangle rect );
          void draw_text ( point origin, std::string text );
          void fill_rectangle ( rectangle rect );
          void set_foreground ( color& c );
          void set_background ( color& c );
          rectangle get_text_rect ( std::string text );
          std::vector<int> get_character_widths ( std::string text );
          int get_text_height ();
          long id();
    
        private:
    
          display& m_display;
          int m_window_id;
          GC m_gc;
        };
    

    You pass this class a window id, and a display object, and then you can draw as much as you want using the drawing methods. Let's try it out. Add the following to our example:

    Listing 8 : example5.cpp
    #include "xlib++/display.hpp"
    #include "xlib++/window.hpp"
    #include "xlib++/graphics_context.hpp"
    using namespace xlib;
    
    
    class main_window : public window
    {
     public:
      main_window ( event_dispatcher& e ) : window ( e ) {};
      ~main_window(){};
    
      void on_expose ()
      {
        graphics_context gc ( get_display(),
    			  id() );
    
        gc.draw_line ( line ( point(0,0), point(50,50) ) );
        gc.draw_text ( point(0, 70), "I'm drawing!!" );
      }
    
    };
    

    The on_expose() method is called whenever the window is displayed, or "exposed". In this method we draw a line and some text in the window's client area. When you compile and run this example, you should see something similar to the following:

    The graphics_context class is used extensively in the rest of this article.

    You may also notice a few helper classes in the above code, point and line. These are small classes I created, all having to do with shapes. They don't look necessary now, but they will be helpful later on if I have to perform complex operations with them, like transformations. For example, it is easier to say "line.move_x(5)" than to shift each endpoint's x coordinate by hand; it is much cleaner, and less error-prone.

    4. Advanced - creating a command button from scratch

    4.1 Requirements of the button

    Enough of the simple stuff - now let's move on to creating actual widgets that can be reused. Our focus now will be on creating a command button that we can use in an application. The requirements of this button are as follows:

    • has its own window to receive events
    • has two drawn states - "pressed", and "not pressed"
    • draws the "pressed" state when the mouse button was pressed down when inside the control's rect, and the mouse is still over the control
    • draws the "not pressed" state when the mouse button is not down, or when the mouse button is down, and the mouse is outside the rect of the control
    • text property with get and set methods
    • can send an "on_click()" event to the client

    This seems like a simple control, but implementing all of it is not entirely trivial. The following sections describe the process.

    4.2 Giving it its own window

    First off, we have to create a separate window for this command button. The constructor calls the show method, which in turn calls the create method, which is responsible for window creation:

    Listing 9 : command_button.hpp
          virtual void create()
    	{
    	  if ( m_window ) return;
    
    	  m_window = XCreateSimpleWindow ( m_display, m_parent.id(),
    					   m_rect.origin().x(),
    					   m_rect.origin().y(),
    					   m_rect.width(),
    					   m_rect.height(),
    					   0, WhitePixel((void*)m_display,0),
    					   WhitePixel((void*)m_display,0));
    
    	  if ( m_window == 0 )
    	    {
    	      throw create_button_exception 
    		( "could not create the command button" );
    	    }
    
    	  m_parent.get_event_dispatcher().register_window ( this );
    	  set_background ( m_background );
    	}
    

    Looks a lot like the window class' constructor, doesn't it? First it creates the window with the Xlib API XCreateSimpleWindow(), then it registers itself with the event_dispatcher so it will receive events, and finally it sets its background.

    Notice that we pass the parent window's id into the call to XCreateSimpleWindow(). We are telling Xlib that we want our command button to be a child window of the parent.

    4.3 Implementing "pressed" and "not pressed" drawn states

    Because the command button registered itself with the event_dispatcher, it will receive on_expose() events when it needs to draw itself. We will use the graphics_context class to draw both states.

    The following is the code that will be used for the "not pressed" state:

    Listing 10 : command_button.hpp
          // bottom
          gc.draw_line ( line ( point(0,
    				  rect.height()-1),
    			    point(rect.width()-1,
    				  rect.height()-1) ) );
          // right
          gc.draw_line ( line ( point ( rect.width()-1,
    				    0 ),
    			    point ( rect.width()-1,
    				    rect.height()-1 ) ) );
    
          gc.set_foreground ( white );
    
          // top
          gc.draw_line ( line ( point ( 0,0 ),
    			    point ( rect.width()-2, 0 ) ) );
          // left
          gc.draw_line ( line ( point ( 0,0 ),
    			    point ( 0, rect.height()-2 ) ) );
    
          gc.set_foreground ( gray );
    
          // bottom
          gc.draw_line ( line ( point ( 1, rect.height()-2 ),
    			    point(rect.width()-2,rect.height()-2) ) );
          // right
          gc.draw_line ( line ( point ( rect.width()-2, 1 ), 
    			    point(rect.width()-2,rect.height()-2) ) );
    

    When we finally compile and run this code later on, the button will look like this:

    Alternatively, when the button is pressed, the following code will be used to draw it:

    Listing 11 : command_button.hpp
          gc.set_foreground ( white );
    
          // bottom
          gc.draw_line ( line ( point(1,rect.height()-1),
    			    point(rect.width()-1,rect.height()-1) ) );
          // right
          gc.draw_line ( line ( point ( rect.width()-1, 1 ),
    			    point ( rect.width()-1, rect.height()-1 ) ) );
    
          gc.set_foreground ( black );
    
          // top
          gc.draw_line ( line ( point ( 0,0 ),
    			    point ( rect.width()-1, 0 ) ) );
          // left
          gc.draw_line ( line ( point ( 0,0 ),
    			    point ( 0, rect.height()-1 ) ) );
    
    
          gc.set_foreground ( gray );
    
          // top
          gc.draw_line ( line ( point ( 1, 1 ),
    			    point(rect.width()-2,1) ) );
          // left
          gc.draw_line ( line ( point ( 1, 1 ),
    			    point( 1, rect.height()-2 ) ) );
    

    And the finished product will appear like the following:

    4.4 Figuring out which state to draw

    This seems like a pretty simple task - draw the "pressed" state when the mouse is down over the control, and draw the "not pressed" state when the mouse is up. This isn't entirely correct, though. When you press and hold the left mouse button over our control, and move the mouse out of the rect, the command button should draw the "not pressed" state, even though the left mouse button is currently pressed.

    The command_button class uses two member variables to handle this: m_is_down and m_is_mouse_over. Initially, when the mouse is pressed down over our control (see on_left_button_down()), we put ourselves into the down state and refresh the control. This results in the command button drawing itself pressed. If, at any time, the mouse moves out of the rect of our control (see on_mouse_exit()), m_is_mouse_over is set to false, and the control is refreshed. This results in the command button drawing itself in the "not pressed" state. If the mouse then moves back into the rect of the control, m_is_mouse_over is toggled back to true, and the control is drawn pressed. Once the mouse button is released, we set ourselves to the "not pressed" state, and refresh ourselves.

    4.5 Giving it a "text" property

    This is a pretty simple task. We basically want the user of this command button to be able to get and set the text displayed. Here is the code:

    Listing 12 : command_button.hpp
          std::string get_name() { return m_name; }
          void set_name ( std::string s ) { m_name = s; refresh(); }
    

    The call to refresh() is there so that the control redraws itself with the new text.

    4.6 Generating an "on_click()" event

    We want the user of this command button to know when we were clicked. To do this, we will generate an "on_click()" event. The following is the definition of the command_button_base class:

    Listing 13 : command_button_base.hpp
    namespace xlib
    {
      class command_button_base : public window_base
        {
        public:
          virtual void on_click () = 0;
        };
    };
    

    What we are basically saying here is that "we support all events that a window does, plus one more - on_click()". The user of this button can derive a new class from it, implement the on_click() method, and take the appropriate action.

    5. Conclusion

    I really hope you enjoyed this article. We went over many features of Xlib, and wrapped them in C++ classes to make Xlib development easier in the future. If you have any questions, comments, or suggestions about this article, or about Xlib development in general, please feel free to email me.

    a. References

    b. Files

    Rob Tougher

    Rob is a C++ software engineer in the NYC area. When not coding on his favorite platform, you can find Rob strolling on the beach with his girlfriend, Nicole, and their dog, Halley.


    Copyright © 2002, Rob Tougher.
    Copying license http://www.linuxgazette.net/copying.html
    Published in Issue 78 of Linux Gazette, May 2002



    The Back Page


    Wacko Topic of the Month


    Klez.E worm, bad bad dude

    Contributed By Iron

    Normally, I don't think much about spam. It's easy to spot in a mail index. Spam just doesn't have plausible Subject: lines. Too many capital letters, too many '$' and other symbols, and words that no person would put in a subject; e.g., "Here's the info you asked about."

    Three weeks ago, I started receiving a lot of binary attachments. After two weeks of seeing the same subject lines over and over, I started keeping count. 241 messages in 9 days, or 32 MB. Ironically, the culprit itself revealed its identity. One of the subjects was "W32.Klez.E removal tools". I headed to www.datafellows.com, searched for "Klez.E", and sure enough, it's a worm.

    http://www.europe.f-secure.com/v-descs/klez_e.shtml

    It's quite a complicated little beastie. It has a large pool of subjects to choose from and also incorporates phrases it finds in files. It has a built-in SMTP client and sends itself to whoever it finds in your Outlook address book, pretending to be From: somebody else in your address book.

    Linux users of course can't get infected, although it can leak onto Linux mailing lists and pretend to be From: a Linux user. But Windows users who are unlucky enough to run the program or let IE or Outlook automatically execute it will have their documents overwritten with random data, their anti-virus programs disabled, and their address book harvested. Often it pretends to be an audio file, exploiting a bug in some Windows programs that automatically executes audio attachments.

    In the past week, the worm has forged the addresses of both Alex (former Answer Gang member and column writer) and the Editor Gal (Heather), and sent three messages to a linux-list recipient claiming to be From: linux-list. Interestingly, the addresses it chose for Heather and the linux-list person were obsolete.

    I have no idea why the Gazette address has the honor of receiving 99% of these critters.

    What burns me up is not only the bandwidth but the sneaky way it tries to trick you into running the attachments, claiming to be a Win XP patch (that's what first made me suspicious) or an anti-virus tool against itself. Some of its messages include the URLs of real anti-virus companies as a way to sound legitimate.

    A Win XP patch
    Your password
    A nice game
    	This is a very  nice game<br>
    	This game is my first work.<br>
    	I hope you would enjoy it.
    A special excite game
    If you're not connected to the Internet
    W32.Klez.E removal tools
    	<FONT>Sophos give you the W32.Klez.E removal tools<br>
    	W32.Klez.E is a  dangerous virus that spread through email.<br>
    	<br>
    	For more information,please visit http://www.Sophos.com</FONT>
    Worm Klex.E immunity
            <FONT>Klez.E is the most common world-wide spreading worm.It's very
            dangerous by corrupting your files.<br> Because of its very smart
            stealth and anti-anti-virus technic,most common AV software can't
            detect or clean it.<br> We developed this free immunity tool to defeat
            the malicious virus.<br> You only need to run this tool once,and then
            Klez will never come into your PC.<br> NOTE: Because this tool acts as
            a fake Klez to fool the real worm,some AV monitor maybe cry when you
            run it.<br> If so,Ignore the warning,and select 'continue'.<br> If you
            have any question,please <a href=3Dmailto:[email protected]>mail to
            me</a>.</FONT>
    W32.Elkern  removal tools
            W32.Elkern is a dangerous virus that can infect on Win98/Me/2000/XP.
            Trendmicro give you the W32.Elkern removal tools
            For more information,please visit http://www.Trendmicro.com
    Hi,gazette,darling
    Introduction on ADSL
    False) window.parent.GoNext()
    Tooltips.style.visibility
    CELLSPACING
    	Content-Type: audio/x-wav; name=height.bat
    So cool a flash,enjoy it
    	name=Nt324-00.doc
    A  IE 6.0 patch
    	name=sidprod1[1].htm
    Password.  Make sure you remove the cookies by
    
    Cutest subject: "there's a solution". It sounds like a religious evangelist, but with the vagueness of a fortune cookie.

    First non-English subjects: "Impostati", "Bliver brugt i Netscape".

    Ben sent in this procmail stanza that catches all messages with Windows binary attachments and sends them to /dev/null:

    # Goodbye to all the fools sending me "executable" attachments
    :0B:
    * name=.*(\.exe$|\.scr$|\.pif$)
    /dev/null
    
    I wrote a recipe that catches the subject lines used by this worm, with double spaces after the words it uses double spaces after. It puts the messages in I.worm in my mail directory. ("I." is the common prefix for my incoming mailboxes.)

    misc/backpage/klezkiller.procmailrc.txt

    To generate the subject lines:

    grep -i 'Subject:' spambin | tr A-Z a-z | sed 's/subject: //' | sort -u >victims
    

    I've also started temporarily moderating linux-list, where it also tried to spread. And I've been collecting these critters in a mailbox and sending complaints to the postmaster@ and abuse@ the relay ISPs, and blocking mail from those that don't respond.

    Breen Mullins writes:

    Yeah, we're seeing it. This is one mean little sucker. It has the usual features of an Outlook-based worm, with the charming addition that it uses a random address from the victim's address book as the From: address when it tries to propagate itself. When you're accused of spreading Windows worms from your Linux box, that's why.

    More from Symantec: http://www.symantec.com/avcenter/venc/data/[email protected]

    The colleague who answers the support@ mailbox here reports receiving 282 of these in 5 days.

    Elkern virus

    The worm also drops a virus, Elkern. http://www.europe.f-secure.com/v-descs/elkern.shtml. One curious fact:
    "The virus doesn't work on any operating system except Windows 98 because of a serious bug in its code. Due to some blind luck the virus also works on Windows 2000... When the main code gets control, the first thing it does is call the IsDebuggerPresent API function. But the virus calls this function using a fixed API address, and this address is only valid for Windows 98. On all other systems the virus just crashes. ... [Stuff about registry keys it sets] ... On Windows NT this doesn't happen because the virus crashes. Due to dumb luck the virus doesn't crash on Windows 2000, though it calls a non-existing API address."

    Didier Heyden writes:

    Trendmicro/antivirus.com describes the worm's attack scheme:

    "It does not require the email receiver to open the attachment for it to execute. It uses a known vulnerability in Internet Explorer-based email clients to execute the file attachment automatically. This is also known as Automatic Execution of Embedded MIME type.

    "The infected email contains the executable attachment registered as content-type of audio/x-wav or sometimes audio/x-midi so that when recipients view the infected email, the default application associated with audio files is opened. This is usually the Windows Media Player. The embedded EXE file cannot be viewed in Microsoft Outlook."

    However, Trendmicro also claims that the thing (at least the `E' and `H' variants) composes the message body "randomly"... The `H' variant is supposed to contain the following strings:

    Win32 Klez V2.01 & Win32 Foroux V1.0
    Copyright 2002,made in Asia
    About Klez V2.01:
    1,Main mission is to release the new baby PE virus,Win32
    Foroux
    2,No significant change.No bug fixed.No any payload.
    About Win32 Foroux (plz keep the name,thanx)
    1,Full compatible Win32 PE virus on Win9X/2K/NT/XP
    2,With very interesting feature.Check it!
    3,No any payload.No any optimization
    4,Not bug free,because of a hurry work.No more than three
    weeks from having such idea to accomplishing coding and
    testing"
    
    The sender `from:' address seems to be taken randomly either from the infected user's address book (which means that the apparent originator is not necessarily infected her/himself), or from a set of hardcoded addresses.


    Not The Answer Gang


    Bill Danzon:

    Well I never! I didn't expect such a prompt reply. Don't you have to tack, jib or shiver your timbers occasionally on that boat of yours?

    Ben Okopnik:

    Nah. These days, it's all done by computer. I just sit back and watch as the boat crashes into, erm, well, I never did trust them damn machines anyhow. What were we talking about?

    Bill:

    Early tomorrow I will be leaving home and driving 1000 miles to the Belgian coast to catch a 14-hour ferry to the north of England, and I don't know when I'll be returning. Pure coincidence, I assure you. No. Really. It has nothing whatsoever to do with being threatened by "dustbunnies", whatever they are.

    Iron:

    Better go to Scotland. If it's cold enough that tomatoes don't grow up there, maybe you're safe from dust bunnies too. Dust bunnies are those clumps of dust that accumulate behind and underneath furniture. Sounds like there might be a dust bunny convention under your sofa.

    Thomas Adam:

    On behalf of the English people who are resident at TAG (including myself) -- welcome to England, Bill.

    Ben:

    Yikes. I didn't know that there _was_ a 1000 mile stretch you could drive in Europe...

    I'm just kidding, of course. I mean, at least there's the Le Mans... That's a strange highway, though; after a while, the faces in the crowds along the side of the road (and *boy* are they big crowds - you'd think they've never seen a car before!) begin to look _really_ familiar, like they were *repeating* or something. And there's no place to pull over and buy a hot dog, either.

    Thomas:

    That would be too "American" -- :-) Indeed, there is always a nice little tea room, where one can get their "tea and scones"!! :-)

    Ben:

    *And* when I got off it, it looked like the same town I started in! What a bore. I'm never going back there again.

    Thomas:

    Lol, how so, Ben?? You mean you got fed up with the thatched roofs...but I thought you Americans liked all the picturesque scenery? -- No?? You did watch "Inspector Morse"??

    Ben:

    Oh, and if you're going to England, be careful: there's supposed to be this fella there named Thomas Adam, and he... Oh, - *hi*, Thomas! So nice to see you! I was just telling Bill here what a great country you have, with flush toilets and payphones, even... I've already arranged the low, low down payment and a great interest rate, and he sounds interested. <grin>

    Thomas:

    What? didn't your Mum (oh....sorry "mom") teach you where the pull-chain was?? :-)

    Ben:

    Nah; I was too busy learning to spell "tire", "maneuver", and "apothegm" the right way. :) I figured out pull-chains on my own.

    Thomas:

    Easy, Ben. How's the sunglasses, incidentally??

    Ben:

    Still dark and menacing as ever, thanks.

    Thomas:

    --Mr. Thomas Adam (English, by the way!!)

    Ben:

    <blink, blink> Really? There I was, thinking that the county of Dorset was on Mars. Silly me...

    Sendmail for kids

    I'm not old enough to use Linux yet . But I'm trying to configure the Linux ( send mail) to work as mail relay and I couldn't. where can I find clear documentation for configuring the Linux to work as Mail relay ??

    This message is in MIME format. Since your mail reader does not understand this format, some or all of this message may not be legible.

    (!) [Iron]

    I didn't know there was a minimum age to use Linux. If you're old enough to write an e-mail, you're old enough to write Linux.

    Are you old enough to set your mailer so it sends us only text-format messages, not HTML format? Text messages are easier for us to read and respond to, and are the standard for Internet e-mail.

    You're not old enough to use Linux and you're trying to configure Sendmail??? Mamma mia! Why? Use a mail transfer agent like Postfix that's much easier to configure than Sendmail.

    What exactly do you want to do, what have you tried, and what are the problems?

    I assume by "mail relay" you just mean you want Sendmail to work, so you can send mail from and to your computer. That's not a mail relay. A "mail relay" means that your Sendmail program accepts mail *from non-local senders to non-local recipients*. Normally, Sendmail accepts mail only if it's *from a local user* or *to a local user*. Otherwise, you open up your mail server for exploitation by spammers.

    If this is the central mail server for an organization, it probably accepts mail from computers in the organization but not from other computers. This is technically "relaying", but with most mail transfer agents you don't configure it as a relay; instead, you tell the program that those are local addresses.
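    As an illustrative sketch (not from the querent's setup -- the network numbers here are assumptions), this is roughly how that policy reads in Postfix's main.cf for a site whose LAN is 192.168.1.0/24:

```
# Treat localhost and the organization's own network as "ours"...
mynetworks = 127.0.0.0/8, 192.168.1.0/24

# ...relay for them, and refuse mail that is neither from mynetworks
# nor addressed to a domain this machine handles. Anything looser
# makes you an open relay for spammers.
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination
```

    The same distinction applies in Sendmail (its access database and relay-domains file), but as Iron says, Postfix is the easier one to get right.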

    If you're trying to be a spammer yourself, see Linux Gazette's advice for crackers: #1 #2


    Ben:

    Unfortunately, when I tried to notify Faber, his mail server said <choke><gag><puke>

    So, I'm putting it up here. Some of you might care, others hit 'delete' - and Faber, presumably, will get a high-speed cartoon brick with a message wrapped around it, telling him to smack his server so that it will take my messages and _like_ it.

    <making faces at Faber's server> Nyah. :)


    Iron:

    Anybody want to take a crack at this? It's an *.exe attachment supposedly sent by Microsoft to all its customers, a security upgrade for IE and Outlook/Express.

    Ben:

    Ah yes; another sleaze trying a bit of social engineering. Let's see...
    ----- Forwarded message from Microsoft Corporation Security Center
     -----
    
    Date: Sun, 17 Mar 2002 20:35:29 -0600
    From: "Microsoft Corporation Security Center" 
    To: "Microsoft Customer" <'[email protected]'>
    Subject: Internet Security Update
    
    Yeah, *right*. Micros*ft may produce a broken OS, be in league with the Dark Forces, and smell of elderberries, but they're *not* stupid enough to spam millions of people. Sorry, slimeball; try elsewhere.
    Microsoft Customer,
    
         this is the latest version of security update, the
    
    known security vulnerabilities affecting Internet Explorer and
    MS Outlook/Express as well as six new vulnerabilities, and is
    discussed in Microsoft Security Bulletin MS02-005. Install now to
    protect your computer from these vulnerabilities, the most serious of which
    could allow an attacker to run code on your computer.
    
    First off, the poor English should trigger warnings; you don't "protect from" vulnerabilities; dependent clauses need a referent; and "security update" takes a definite article. An articulate seven-year-old, or an under-educated teenager? Take your pick.

    "Don't delay! Grab the patch from *THIS* Micros*ft site RIGHT NOW!!!"

    http://www.microsoft.com\no+really:this_is=the+real+thing@666_666

    Heather:

    [Dear Microsoft Customer: pleez run this attachment to induce^H protext from evildoer accezz]

    Yeah, this whole thing sparked me to mention a "warning in case you have gullible end users" to my local sysadmins list.

    NEWS FLASH

    Reports of a new strain of the "lack of clue" virus, in which people who lack a clue when dealing with email attachments are easily victimized, are going around.

    This one affects all clueless Microsoft customers and is invoked when the hapless victim opens an attachment claiming to be "from Microsoft" (CLUE: Microsoft never sends attachments. They have a website and a rather annoying auto-update system. They don't need to waste their own email bandwidth spamming customers with update .exe packets).

    Linux users are largely immune, as are FreeBSD users, but users of MSwin-based mailers which "helpfully" open attachments for them suffer heavily from this ailment. Linux and BSD folk who use WINE or DOSEMU and have made any special effort to autolaunch those sorts of binaries should beware, though. ("Too much clue" is also a problem at times...)

    Sites using a central SMTP gateway can apply filters against undesired attachments. If you don't have a clue what policy to apply, consider dumping all mail bearing attachments with the "Known Dangerous Extensions" (a Microsoft Knowledge Base document available on their website) into some moderated account maintained by a user with no interesting privileges, or passing it through some antivirus scanning.
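    A sketch of that quarantine policy as a procmailrc stanza, in the spirit of Ben's recipe above (the extension list here is illustrative, not Microsoft's full "dangerous" list, and `quarantine' is an assumed mailbox name):

```
# Divert -- rather than delete -- messages whose body names a suspicious
# attachment; someone with no interesting privileges reviews the mailbox.
:0B:
* name=.*\.(exe|scr|pif|bat|vbs|com)
quarantine
```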


    Subject: Precious Cat News
    Quality Scoopable Litter Solutions

    Iron:

    Anyone want to take a shot at this? Lampooning for the Back Page open for business now. Notice question 2, "Why won't my cat use the litter box?" and "Quality scoopable litter solutions".

    Heather:

    for a proper firewall, we recommend you load the ip-cardboard module, although ip-plastic-with-lid has also been found effective. The selfscoop module may not be compatible with your cat if she hates the disk noises while it updates the logs...

    Ben:

    *There's* the problem, right there. You should be loading the "catp-*" versions of those modules, instead; the "ip-*" subset is intended for those humans who are silly enough to want to _demonstrate_ for their fussy fuzzy furball.

    Heather:

    Hmm, purrr-haps. I hear that the catp-plastic-liner module has to be unloaded manually, but the entire system is sub-optimal if you don't load it....

    Results are kinda gross, actually. I predict incompatibility with most kitchen protocols, especially teen-chores.


    i'm a student at aylesbury college and i have a pre-release question which requires me to compare two operating systems and i have choosen linux as one of my choices, please could you send me information on linux's main features and requirements this would be most appreciated,

    (!) [Iron]

    Ah, but your professor wants you to do the research yourself.

    1. Look on the back of the box of any Linux distribution.
    2. Look at the distributions' web sites. Linux Weekly News (http://lwn.net/) maintains a list of distributions somewhere.
    3. See the Linux FAQ and Linux Meta-HOWTO at http://www.linuxdoc.org/ . Hint: while you're reading, note the large number of filesystems and network protocols Linux supports: it can communicate with a wider variety of computers than most other OSes can.
    PS. What's a "pre-release question"?

    (!) [Don Marti]

    Yes, of course. Linux is the OS that causes cancer. http://www.theregister.co.uk/content/4/19396.html

    Linux is also obsolete. http://groups.google.com/groups?selm=12595%40star.cs.vu.nl

    It was written by high school students who are in jail now. http://geraldholmes.freeyellow.com/LinusSucks.html

    (!) [Iron]

    Wow, that last link gave me four, count 'em, four popup ads before I managed to turn off Javascript.


    STOP THE GENOCIDE
    Erkki Tapola 29-Jul-96

    Every second billions of innocent assembler instructions are executed all over the world. Inhumanly they are put on a pipeline and executed with no regard to their feelings. The illegal instructions are spared, although they should be executed instead of the legal ones.

    Prior to the execution the instructions are transported to a cache unit using a bus. There they spend their last moments waiting for the execution. Just before the execution the instruction is separated into several pieces. The execution isn't always fast and painless. On crude hardware the execution of a complex instruction can take as long as 150 clock cycles. Scientists are working on shorter execution times.

    Microsoft endorses the needless execution of instructions with their products like DOS(TM), Windows(TM), Word(TM) and Excel(TM). It is more humane to use software which minimises the executions.

    Modern machines use several units to execute multiple instructions simultaneously. This way it is possible to execute several hundred million instructions per second. The time is near when there will be no more instructions to execute.


    Ben:

    the secret handshake

    Iron:

    Oh, now he's going around trying to convince people there's a secret handshake. Do you get kickbacks from people when you show them the handshake? Is that why you were able to trade your boat in for a yacht?

    Just to make it clear, THERE IS NO OFFICIALLY-SANCTIONED LINUX GAZETTE HANDSHAKE!!! If anybody tries to tell you there is and offers to teach it to you for a "donation", tell them to jump off a short plank into Chesapeake Bay.

    PS. I think Ben should host a Linux Gazette New Year's party on his fancy new yacht.

    Ben:

    "Flash! Pending sub-zero temperatures for Hades and the immediate vicinity, Ben will not - I repeat, not - be getting a new yacht. Current temperatures are approximately 820°F, slightly higher near boiling lakes of sulphur. The weather should continue unseasonably warm and mild over the course of the next three thousand millennia..."


    World of Spam


    I have been mandated by my colleagues on the Panel to seek your assistance in the transfer of the sum of US$18.5 Million into your Bank account. As you may have known, the late General Abacha and members of his government embezzled billions of dollars through spurious contracts and payments to foreigners between 1993 and 1998 and this is now the subject of the probe by my Panel.

    In the course of our review, we have discovered this sum of $18.5 Million, which the former dictator could not transfer from the dedicated account of the Central Bank of Nigeria before his sudden death in June 1998. It is this amount that my Colleagues and I have decided to acquire for ourselves through your assistance. This assistance becomes crucial because we cannot acquire the funds in our names and as government officials we are not allowed to own or operate foreign bank accounts.

    [Bah, they want to acquire knowingly-embezzled funds for themselves, and need a partner because as government officials they can't open a government bank acct? -Iron.]

    To: [email protected]
    Cc: [email protected], [email protected], [email protected],
    	[email protected], [email protected], [email protected],
    	[email protected], [email protected], [email protected],
    	[email protected], [email protected], [email protected],
    	[email protected], [email protected], [email protected],
    	[email protected], [email protected], [email protected],
    	[email protected]
    Subject: i recommend trying this                    .
    

    $$$GET A FREE MILLION ON TOP OF EVERY ORDER. IF YOU ORDER WITHIN 2 DAYS OF ORDERING!
    Subject: [TAG] Linux-questions-only, let's boost your internet speed by up to 220%

    Happy Linuxing!

    Mike ("Iron") Orr
    Editor, Linux Gazette, [email protected]


    Copyright © 2002, the Editors of Linux Gazette.
    Copying license http://www.linuxgazette.net/copying.html
    Published in Issue 78 of Linux Gazette, May 2002