"Linux Gazette...making Linux just a little more fun! "


More 2¢ Tips!


Send Linux Tips and Tricks to [email protected]



Monitoring FTP Downloads with ncftp

Date: Wed, 2 Jul 1997 18:18:11 -0400
From: Jon Cox [email protected]

I saw an article in July's LG that talked about using watch as a better way to monitor ftp downloads -- there's an even BETTER way: check out ncftp. It works much like ftp, but shows a progress bar, estimates the time to completion, and saves bookmarks of where you've been. I think ncftp is pretty standard on all distributions these days.
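A quick sketch of a session (the host and file names below are made up); ncftp logs you in anonymously by default, and get draws the progress bar and time estimate while the file transfers:

ncftp ftp.example.com              # hypothetical host
cd pub/Linux
get somefile.tar.gz                # progress bar and ETA appear here
quit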

Enjoy!
-Jon


Consider Glimpse Instead of Grep

Date: Wed, 2 Jul 1997 18:18:11 -0400
From: Jon Cox [email protected]

While grep works as a tool for searching through a big directory tree for a string, it's pretty slow for this kind of thing, and a much better tool exists: Glimpse. It even has an agrep-style stripped-down regexp capability for doing "fuzzy" searches, and it is astonishingly fast. Roughly speaking:
glimpse is to grep as
locate is to find
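A rough usage sketch (the directory name is just an example, and the option names are from memory, so check the man pages): you build the index once with glimpseindex, then search it with glimpse:

glimpseindex ~/src        # builds the .glimpse_* index files in your home directory
glimpse -i foo            # case-insensitive search of the indexed files
glimpse -1 foobr          # agrep-style fuzzy search allowing one error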

I believe the latest rpm version is glimpse-4.0-4.i386.rpm. You can find it on any site that mirrors Red Hat's contrib directory.

Enjoy!
-Jon


Diald Remote Control

Date: Wed, 2 Jul 1997 18:18:11 -0400
From: Wim Jongman [email protected]

I have hacked together a helpful utility. Please have a look at it.

Regards,
Wim Jongman


Diald Remote Control


I have been a satisfied diald user for quite some time. One of the things on my wish list was the ability to activate the link from another location. I have written a small shell script that waits for activity on my telephone line.

If activity is detected, the script runs ping, which causes diald to set up a link to my ISP. If the activity comes from the inside (diald does the dialing itself), the ping is still performed, but there is no harm in that.

My /etc/diald.conf looks like this:

mode cslip
connect /usr/local/bin/connect
device /dev/cua2
speed 115200
modem
lock
crtscts
local local.ip.ad.dres
remote ga.te.way.address
mtu 576
defaultroute
ip-up /usr/local/bin/getmail &
ip-down /usr/local/bin/waitmodem &
include /usr/lib/diald/standard.filter

The first time the link goes down, the waitmodem program is started. The script for /usr/local/bin/waitmodem is:

#!/bin/bash

# This script waits for data entering the modem. If data has arrived,
# then a host is pinged to make diald set up a connection (and let you
# telnet in).

if test -f /var/locks/waitmodem
then
   exit 0
else
   touch /var/locks/waitmodem
   sleep 5
   read myvar < /dev/cua2                    # change to your modem device
   ping -c 10 host.com > /dev/null 2>&1 &    # change host.com to a host at your ISP
   rm /var/locks/waitmodem
   exit 0
fi

If diald decides to drop the link, the ip-down keyword starts the waitmodem script. This creates a lock file in /var/locks and sleeps for five seconds to allow the modem buffers to flush. Then the modem device is read, and if activity occurs, the ping is run (change the host name and modem device in the scripts to match your setup). The lock file is removed and diald dials out. This allows you to access your machine from elsewhere. I guess you have to have a static IP for it to be useful.

Regards,

Wim Jongman


A New Tool for Linux

Date: Wed, 2 Jul 1997 18:18:11 -0400
From: Jordi Sanfeliu [email protected]

Hi!

This is my contribution to this beautiful gazette!! :))

tree is a simple script that lets you see the whole directory tree on your hard disk.

I think it is very cool, no?

#!/bin/sh
#         @(#) tree      1.1  30/11/95       by Jordi Sanfeliu
#                                         email: [email protected]
#
#         Initial version:  1.0  30/11/95
#         Next version   :  1.1  24/02/97   Now, with symbolic links
#
#         Tree is a tool to view the directory tree (obvious :-) )
#
search () {
   for dir in `echo *`
   do
      if [ -d $dir ] ; then
         zz=0
         while [ $zz != $deep ]
         do
            echo -n "|   "
            zz=`expr $zz + 1`
         done
         if [ -L $dir ] ; then
            echo "+---$dir" `ls -l $dir | sed 's/^.*'$dir' //'`
         else
            echo "+---$dir"
            cd $dir
            deep=`expr $deep + 1`
            search    # recursion ;-)
            numdirs=`expr $numdirs + 1`
         fi
      fi
   done
   cd ..
   if [ $deep ] ; then
      swfi=1
   fi
   deep=`expr $deep - 1`
}

# - Main -
if [ $# = 0 ] ; then
   cd `pwd`
else
   cd $1
fi
echo "Initial directory = `pwd`"
swfi=0
deep=0
numdirs=0
zz=0

while [ $swfi != 1 ]
do
   search
done
echo "Total directories = $numdirs"

Have fun !
Jordi


Hex Dump

Date: Wed, 18 Jun 1997 10:15:26 -0700
From: James Gilb [email protected]

I liked your gawk solution for displaying hex data. Two things (which people have probably already pointed out to you):

  1. If you don't want groups of identical lines to be replaced by a single asterisk, use the -v option to hexdump (a quick example follows this list). From the man page:

    -v: The -v option causes hexdump to display all input data. Without the -v option, any number of groups of output lines, which would be identical to the immediately preceding group of output lines (except for the input offsets), are replaced with a line comprised of a single asterisk.

  2. In emacs, you can get a similar display using ESC-x hexl-mode. The output looks something like this:
    00000000: 01df 0007 30c3 8680 0000 334e 0000 00ff  ....0.....3N....
    00000010: 0048 1002 010b 0001 0000 1a90 0000 07e4  .H..............
    00000020: 0000 2724 0000 0758 0000 0200 0000 0000  ..'$...X........
    00000030: 0000 0760 0004 0002 0004 0004 0007 0005  ...`............
    00000040: 0003 0003 314c 0000 0000 0000 0000 0000  ....1L..........
    00000050: 0000 0000 0000 0000 0000 0000 2e70 6164  .............pad
    00000060: 0000 0000 0000 0000 0000 0000 0000 0014  ................
    00000070: 0000 01ec 0000 0000 0000 0000 0000 0000  ................
    00000080: 0000 0008 2e74 6578 7400 0000 0000 0200  .....text.......
    00000090: 0000 0200 0000 1a90 0000 0200 0000 2a98  ..............*.

    (I don't suppose it is surprising that emacs does this; after all, emacs is not just an editor, it is its own operating system.)
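As a quick illustration of point 1 (the file name is made up), hexdump's canonical hex+ASCII format together with -v gives output much like the hexl-mode display above, with no repeated lines collapsed -- assuming your hexdump has the -C option:

hexdump -v -C somefile | less      # -C: hex plus ASCII; -v: show every line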


Hard Disk Duplication

Date: Tue, 24 Jun 1997 11:54:48 +0200
From: Jerko Golubovic [email protected]

A comment on article "HARD DISK DUPLICATION" written by [email protected] in Linux Gazette #18 (June 97).

What I did at my place is the following:

I set up a root-NFS system to boot a usable configuration over the network. I just need a floppy with the appropriate kernel command line and the system comes up.

When the system comes up, I mount on /root an NFS volume where I store the compressed images. That way I have them readily available when I log in.

With dmesg I find out the geometry of the target system's hard disk. Then, to take a new image, I do:

cat /dev/hda | gzip -9 > <somename>.gz

And to restore:

zcat <somename>.gz > /dev/hda
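A common variation (my own habit, not part of the original tip) is to use dd with an explicit block size instead of cat; the effect is the same:

dd if=/dev/hda bs=64k | gzip -9 > <somename>.gz     # take an image
gzip -dc <somename>.gz | dd of=/dev/hda bs=64k      # restore it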

Of course, I don't have to use such a system. It is enough to prepare one boot floppy containing just an FTP client and the network configuration. I made two shell scripts:

b:
----------------------
#!/bin/sh
cat /dev/hda | gzip -9

r:
----------------------
#!/bin/sh
gzip -d > /dev/hda

Then, from the FTP client you do:

put |./b <somename>.gz             - to save an image
get <somename>.gz |./r             - to restore an image

ANY FTP server on ANY platform can be used for storage.

Not only that: you don't have to use FTP at all. You can use smbclient instead and read or write directly from Windows or LAN Manager shares, doing basically the same thing.


More on Grepping Files in a Directory Tree

Date: Tue, 1 Jul 1997 13:12:34
From: Gene Gotimer [email protected]

In Linux Gazette Issue 18, Earl Mitchell ([email protected]) suggested

 grep foo `find . -name \*.c -print`

as a way to grep files in a directory tree. He warned about a command line character limit (potentially 1024 characters).

Another way to accomplish this, without the character limit, is to use the xargs command:

find . -name '*.c' -print | xargs grep foo

The xargs command accepts arguments on standard input, and tacks them on the end of the specified command (after any supplied parameters).

You can specify where in the command xargs will place the arguments (rather than just on the end) if you use the -i option and a pair of curly braces wherever you want the substitution:

ls srcdir | xargs -i cp srcdir/{} destdir/{}

xargs has a number of options worth looking at, including -p to confirm each command as it is executed. See the man page.
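For instance (hypothetical file pattern), -p is handy with destructive commands: xargs prints each constructed command and waits for a y/n answer before running it:

find . -name '*.bak' -print | xargs -p rm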

-- Gene Gotimer


More on Hard Disk Duplication

Date: Mon, 23 Jun 1997 08:45:48 +0200
From: Jean-Philippe CIVADE [email protected]

I've written a utility under Windows 95 that can copy from disk to disk in a binary way. It's called Disk2file, and it can be found on my web site under "tools". The primary purpose of this utility was to make ISO images from a hard disk (with a proprietary file system) in order to record them on a CD-ROM. I used it yesterday to duplicate an installed Red Hat 4.1 disk with success. The advantage of this method is that it is possible to produce a series of disks very quickly. The utility transfers up to 10 MB/s, so duplicating a 540 MB disk takes about 10 minutes.

The way to use it is:

  1. Start the program and select the SCSI controller.
  2. Select a disk and a file where the image will be put.
  3. Select the source disk.
  4. Select disk2file mode and click "Run".
  5. After completion, select the new disk the image has to be written to.
  6. Select file2disk mode.
  7. Click "Run".

It's referenced as shareware in the docs, but I grant it as freeware to the Linux community for disk duplication only.

-- Best Regards Jean-Philippe CIVADE


A Script to Update McAfee Virus Definitions

Date: Fri, 20 Jun 1997 00:05:33 -0500 (CDT)
From: Ralph [email protected]

Here is a script I hacked together (trust me, after you see it you'll understand why this is my first script hack) to ftp the McAfee virus definitions, unzip them, and run a test to make sure they are OK. You need to have vscan for Linux, located at ftp://ftp.mcafee.com/pub/antivirus/unix/linux

The first script does the work of pulling the file down, unzipping it, and testing it:

#!/bin/sh
# =====================================================================
# Name:         update-vscan
# Goal:         Auto-update McAfee's Virus Scan for Linux
# Who:          Ralph Sevy [email protected]
# Date:         June 19 1997
# ----------------------------------------------------------------------
# Run this file on the 15th of each month to insure that the file gets
# downloaded
# ======================================================================
datafile=dat-`date +%y%m`.zip
mcafeed=/usr/local/lib/mcafee
ftp -n ftp.mcafee.com << !
user anonymous [email protected]
binary
cd /pub/antivirus/datfiles/2.x
get $datafile
quit
!
if [ -f $mcafeed/*.dat ]; then
        rm $mcafeed/*.dat
fi
unzip $datafile *.DAT -d $mcafeed
for file in $mcafeed/*.DAT; do
        lconvert $file
done
uvscan $mcafeed/*
exit
---------------------------------------------------------------------------
CUT HERE

lconvert is a three-line script I stole while looking through the Gazette:

CUT HERE
--------------------------------------------------------------------------
#!/bin/tcsh
# script named lconvert - renames the files given as arguments to lowercase
foreach i ($*)
mv $i `echo $i | tr '[A-Z]' '[a-z]'`
end
-------------------------------------------------------------------------
CUT HERE

The last thing you want to do is add an entry to crontab to update your files once a month. I prefer the 15th, as it makes sure I get the file (dunno really how to check for errors yet; it's my next project).

# crontab command line
# update mcafee data files once a month on the 15th at 4am
0 4 15 * * /usr/local/bin/update-vscan

It's not pretty, I'm sure, but it works.

Ralph http://www.kyrandia.com/~ralphs


Handling Log Files

Date: Thu, 3 Jul 1997 11:13:56 -0400
From: Neil Schemenauer [email protected]

I have seen a few people wondering what to do with log files that keep growing. The easy solution is to trim them using:

cat </dev/null >some_filename
The disadvantage to this method is that all your logged data is gone, not just the old stuff. Here is a shell script I use to prevent this problem.
#!/bin/sh
#
# usage: logroll [ -d <save directory> ] [ -s <size> ] <logfile>


# where to save old log files
SAVE_DIR=/var/log/roll

# how large should we allow files to grow before rolling them
SIZE=256k

while :
do
	case $1 in
	-d)
		SAVE_DIR=$2
		shift; shift;;
		
	-s)
		SIZE=$2
		shift;shift;;
	-h|-?)
		echo  "usage: logroll [ -d <save directory> ] [ -s <size> ] <logfile>"
		exit;;

	*)
		break;;
	esac
done

if [ $# -ne 1 ]
then
	echo  "usage: logroll [ -d <save directory> ] [ -s <size> ] <logfile>"
	exit 1
fi


if [ -z "`find $1 -size +$SIZE -print`" ]
then
	exit 0
fi

file=`basename $1`
if [ -f $SAVE_DIR/$file.gz ]
then
	/bin/mv $SAVE_DIR/$file.gz $SAVE_DIR/$file.old.gz
fi

/bin/mv $1 $SAVE_DIR/$file
/bin/gzip -f $SAVE_DIR/$file
# this last command assumes the PID of syslogd is stored like RedHat
# if this is not the case, "killall -HUP syslogd" should work
/bin/kill -HUP `cat /var/run/syslog.pid`
Save this script as /root/bin/logroll and add the following to your /etc/crontab:
# roll log files
30 02 * * * root /root/bin/logroll /var/log/log.smb
31 02 * * * root /root/bin/logroll /var/log/log.nmb
32 02 * * * root /root/bin/logroll /var/log/maillog
33 02 * * * root /root/bin/logroll /var/log/messages
34 02 * * * root /root/bin/logroll /var/log/secure
35 02 * * * root /root/bin/logroll /var/log/spooler
36 02 * * * root /root/bin/logroll /var/log/cron
38 02 * * * root /root/bin/logroll /var/log/kernel
Now you can forget about log files. The old log files are stored in /var/log/roll and gzipped to conserve space, so you will still have plenty of old logging information if you ever have to track down a problem.
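To look at one of the saved logs later (file names are just examples), zcat or zgrep can read the compressed copies directly:

zcat /var/log/roll/messages.gz | less
zgrep named /var/log/roll/messages.old.gz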

Neil


Exciting New Hint on xterm Titles

Date: Fri, 27 Jun 1997 15:43:44 +1000 (EST)
From: Damian Haslam [email protected]

Hi. After searching (to no avail) for a way to display the currently executing process in the xterm's title bar, I resorted to changing the source of bash 2.0 to do what I wanted. Starting from line 117 of eval.c in the source, add the lines marked with # (but don't include the # itself):

 117: if (read_command () == 0) 
 118:        { 
#119:          if (strcmp(get_string_value("TERM"),"xterm") == 0) { 
#120:            printf("\033]0;%s\007",make_command_string(global_command));  /* ESC ] 0 ; title BEL */
#121:            fflush(stdout); 
#122:          } 
#123: 
 124:          if (interactive_shell == 0 && read_but_dont_execute) 
.....
You can then set PROMPT_COMMAND to reset the xterm title to the pwd, or whatever takes your fancy.
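For the PROMPT_COMMAND part, something like this in ~/.bashrc works (the exact title text is just an example); it prints the same kind of escape sequence the patch above uses:

# show user, host and current directory in the xterm title bar after each command
PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME}: ${PWD}\007"'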

cheers - damian


C Source with Line Numbers

Date: Sun, 29 Jun 1997 10:09:52 -0400 (EDT)
From: Tim Newsome [email protected]

Another way of getting a file numbered:

grep -n $ <filename>

The -n tells grep to number its output, and $ means end-of-line. Since every line in the file has an end (except possibly the last one), it'll stick a number in front of every line.

Tim


Another Reply to "What Packages Do I Need?"

Date: Wed, 02 Jul 1997 20:17:26 +0900
From: Matt Gushee [email protected]

About getting rid of X components, Michael Hammel wrote that "...you still need to hang onto the X applications (/usr/X11R6/bin/*)." We-e-ll, I think that statement needs to be qualified. Although I'm in no sense an X-pert, I've poked around and found quite a few non-essential components: multiple versions of xclock (a wristwatch is more accurate and gives your eyes a quick break); xedit (just use a text-mode editor in an xterm). Fonts? I could be wrong, but I don't see any reason to have both the 75dpi and 100dpi fonts; and some distributions include Chinese and Japanese fonts, which are BIG and which not everyone needs. Anyway, poking around for bits and pieces you can delete may not be the best use of your time, but the point is that X seems to be packaged with a very broad brush. By the way, I run Red Hat, but I just installed the new (non-rpm) XFree86 3.3 distribution -- and I notice that Red Hat packages many of the non-essential client programs in a separate contrib package, while the XFree86 group puts them all in the main bin/ package.

Here's another, maybe better idea for freeing up disk space: do you have a.out shared libraries? If you run only recent software, you may not need them. I got rid of my a.out libs several months ago, and have installed dozens of programs since then, and only one needed a.out (and that one turned out not to have the features I needed anyway). Of course, I have the RedHat CD handy so I can reinstall them in a moment if I ever really need them.

That's my $.02.
--Matt Gushee


Grepping Files in a Tree with -exec

Date: Wed, 2 Jul 1997 09:46:33 -0400 (EDT)
From: Clayton L. Hynfield [email protected]

Don't forget about find's -exec option:

find . -type f -exec grep foo {} \;

Clayton L. Hynfield


How Do You Un-Virtual a Virtual Screen?

Date: Mon, 07 Jul 97 15:08:39 +1000
From: Stuart Lamble [email protected]

With regard to changing the size of the X screen, I assume you're using XFree86. XFree86 will make your virtual screen size the larger of:

 * the specified virtual screen size
 * the _largest_ resolution you _might_ use with your video card (specified in 'Section "Screen"')

Open your XF86Config file in any text editor (ae, vi, emacs, jed, joe, ...) _as root_. (You need to be able to write it back out again.) Search for "Screen"; the keyword is, IIRC, case insensitive in the file, so under vi, for example, you'd type:

/[Ss][Cc][Rr][Ee][Ee][Nn]

(Yeah, yeah, I know there's some switch somewhere that makes the search itself case insensitive -- or if there isn't, there _should_ be :) -- but I can't remember it offhand; I don't have much use for such a thing.)

You'll see something like:

Section "Screen"
    Driver      "accel"
    Device      "S3 Trio64V+ (generic)"
    Monitor     "My Monitor"
    Subsection "Display"
        Depth       8
        Modes       "1024x768" "800x600" "640x480"
        ViewPort    0 0
        Virtual     1024 768
    EndSubsection
    Subsection "Display"
        Depth       16
        Modes       "800x600" "640x480"
        ViewPort    0 0
        Virtual     800 600
    EndSubsection
    Subsection "Display"
        Depth       24
        Modes       "640x480"
        ViewPort    0 0
        Virtual     640 480
    EndSubsection
EndSection
(this is taken from a machine I use on occasion at work.)

The first thing to check is the lines starting with Virtual. If you want the virtual resolution to be the same as the screen size, it's easy: just get rid of the Virtual line, and it will be set to the highest resolution listed in the relevant Modes line. (In this case, at 24 bpp it would be 640x480; at 16 bpp, 800x600; at 8 bpp, 1024x768.) Just be aware that if you've got a 1600x1200 mode listed at the relevant depth, the virtual screen size will stay at 1600x1200; you'd need to get rid of the higher-resolution modes in that case.
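For example, using the 16-bpp subsection above: if you want an 800x600 desktop with no panning, removing the Virtual line leaves simply:

    Subsection "Display"
        Depth       16
        Modes       "800x600" "640x480"
        ViewPort    0 0
    EndSubsection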

I would strongly recommend you make a backup of your XF86Config file before you mess around with it, though. It's working at the moment; you want to keep it that way :-)

All of this is, of course, completely incorrect for MetroX, or any other commercial X server for Linux.

Cheers.


File Size Again...

Date: Sun, 6 Jul 1997 13:13:29 -0400 (EDT)
From: Tim Newsome [email protected]

Since nobody has mentioned it yet: procps (at least version 1.01) comes with a very useful utility named watch. You can give it a command line, which it will execute every 2 seconds. So, to keep track of a file's size, all you really need is:

watch ls -l filename

Or if you're curious as to who's logged on:

watch w

You can change the interval with the -n flag, so to pop up a different fortune every 20 seconds, run:

watch -n 20 fortune

Tim


syslog Thing

Date: Fri, 04 Jul 1997 14:50:08 -0400
From: Ian Quick [email protected]

I don't know if this is very well known, but a friend once told me a way to put your syslog messages on a virtual console. First make sure that you have the device node for the console you want (I run Red Hat 4.0, and it has them up to tty12). Then edit your /etc/syslog.conf file and add a line like the one below (use tabs between the fields for formatting). Reboot and TA-DA! Just hit Alt-F12 and there are your messages, logged to a console.
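The added line would look something like this (tty12 is just the console I happen to use; any free one will do, and the gap between the fields should be tabs, not spaces):

# send every facility and priority to virtual console 12
*.*                                             /dev/tty12

Instead of a full reboot, making syslogd re-read its configuration with killall -HUP syslogd (as mentioned in the log-file tip above) should also do the trick.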

-Ian Quick


Ascii Problems with FTP

Date: Mon, 7 Jul 1997 15:59:39 -0600 (CST)
From: Terrence Martin [email protected]

This is a common problem that occurs with many of our Windows users when they upload HTML and Perl CGI stuff to our web server.

The real fix for this has been available for years in the ftp clients themselves. Every ftp client should have support for both binary (type I, for "image") and ASCII (type A) uploads and downloads. By selecting or toggling this option to ASCII mode (say, in WS_FTP), DOS-format text files are automagically translated to Unix style without the ^M. Note that you definitely do not want to transfer binary files like apps or programs in ASCII mode, as the translation will corrupt them.
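In the standard command-line ftp client the same toggle looks like this (file names are just examples):

ftp> ascii
ftp> put script.cgi
ftp> binary
ftp> put picture.gif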

Regards
Terrence Martin


Running Squake from Inside X

Date: Fri, 11 Jul 1997 00:27:49 -0400
From: Joey Hess [email protected]

I use X 99% of the time, and I was getting tired of the routine of CTRL-ALT-F1; log in; run squake; exit; switch back to X that I had to go through every time I wanted to run squake. So I decided to add an entry for squake to my fvwm menus. To make that work, I had to write a script. I hope someone else finds it useful; I call it runvc:

	#!/bin/sh
	# Run something on a VC, from X, and switch back to X when done.
	# GPL Joey Hess, Thu, 10 Jul 1997 23:27:08 -0400
	exec open -s -- sh -c "$* ; chvt `getvc`"
Now, I can just type runvc squake (or pick my fvwm menu entry that does the same) and instantly be playing squake, and as soon as I quit squake, I'm dumped back into X. Of course, it works equally well for any other program you need to run at the console.
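For the menu part, a hypothetical fvwm2 entry (the menu name and label are up to you) might look something like:

AddToMenu Utilities "Squake" Exec exec runvc squake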

Runvc is a one-liner, but it took me some time to get it working right, so here's an explanation of what's going on. First, the open -s command is used to switch to another virtual console (VC) and run a program. By default, it's going to switch to the next unused VC, which is probably VC 8 or 9. The -s has to be there to make open actually change to that console.

Next, the text after the -- is the command that open runs. I want open to run 2 commands, so I have to make a small shell script, and this is the sh -c "..." part. Inside the quotes, I place $*, which actually handles running squake or whatever program you told runvc to run.

Finally, we've run the command and nothing remains but to switch back to X. This is the hard part. If you're not in X, you can use something like open -w -s -- squake and open will run squake on a new VC, wait for it to exit, and then automatically switch back to the VC you ran it from. But if you try this from inside X, it just doesn't work. So I had to come up with another method to switch back to X. I found that the chvt command was able to switch back from the console to X, so I used it.

Chvt requires that you pass it the number of the VC to switch to. I could just hard-code the number of the VC that X runs on for my system and do chvt 7, but that isn't portable, and I'd have to update the script if it ever changed. So I wrote a program named 'getvc' that prints out the current VC. Getvc is actually run first, before any of the rest of the runvc command line, because it's enclosed in backticks. So getvc prints out the number of the VC that X is running on and that value is stored, then the rest of the runvc command line gets run, and eventually that value is passed to chvt, which finally switches you back into X.

Well, that's all there is to runvc. Here's where you can get the programs used by it: