"Linux Gazette...making Linux just a little more fun!"


More 2¢ Tips!


Send Linux Tips and Tricks to [email protected]


Contents:


RE: Photogrammetry tools for Linux? in Issue 30

Date: Wed, 29 Jul 1998 10:01:14 -0500
From: John Prindle, [email protected]

In the July 1998 issue of LG, this message was listed in the "Help Wanted" section.

From: Maurizio Ferrari, [email protected]
I am looking for a Linux program to do some close-range photogrammetry. Close-range photogrammetry is a technique that makes it possible to reconstruct 3D images from a series of 2D pictures. There are a few powerful (and relatively inexpensive) tools for Windows, but none so far for Linux that I know of. There was something once upon a time called Photo4D. Despite my massive Internet search, every occurrence of Photo4D seems to have been wiped from the face of the earth. It is listed in SAL, but all the links fail. I don't want to resort to buying and using Windows software for this. Help, anyone?
I tried to e-mail the user at his given address with some information I found about the company and product, but the address is not valid. So, here it is:

CompInt
712 Seyton Drive
Nepean, Ontario K2H 9R9
Canada
General e-mail : [email protected]
http://www.igs.net/~compint/
This page updated 8/15/97 at 5:45:19 AM ET.

I found this article about the product on Computer Graphics World's site.

http://www.cgw.com/cgw/Archives/1996/09/09prod1_05.html

Product Spotlight
New Motion-Capture Tool
CGW Magazine - September 1996

With CompInt's Photo4D-Pro, animators can now capture 2D and 3D motion based on video recordings. The Windows 95/NT-based program, available for $490, features auto-detection and auto-marking tools which use pattern recognition technology to automatically detect and mark similar feature points in images, making it possible to effectively digitize a large number of points. The software enables users to capture accurate 3D motion from multiple video recordings of a subject by tracking the feature points in videos and computing their x, y, and z coordinates in each frame. Furthermore, its advanced algorithms can synchronize recorded videos to sub-frame accuracy, allowing the use of low-cost home video cameras.

Coinciding with this product launch, the company is also releasing Photo4D-Lite V2.0, a $99 product designed for users who require only 3D digitizing and modeling capabilities. Both products will be available on Windows 95/NT, SGI, Sun, HP, and Linux platforms. (Nepean, Ontario; 613-721-1643)

The web page that is listed is not valid, but hopefully this may help people trying to locate this product.

John


Re: Suggestion for Article, simultaneous versions of Kernels

Date: Wed, 01 Jul 1998 10:39:21 +0100
From: Hans-Georg Esser, [email protected]

From: Renato Weiner, [email protected]
Recently I was looking at the Gazette, and I think I have a good suggestion for an article that would be very useful for the Linux community. I have had some technical difficulties keeping two simultaneous versions of the kernel on my system, I mean a stable one and a development one. I searched the net looking for information on how to make the two coexist, but it's completely fragmented. If somebody more experienced could put all this information together, it would certainly help a lot of people, from kernel developers to end-users.
Let me state the following:

HOW TO HAVE COEXISTING KERNELS

First let me assume that, with "coexisting kernels", you meant to have several different kernels (with different kernel numbers such as 2.0.34 and 2.1.101) each of which can be chosen at boot time to be started. (The point is: I suppose, you don't want to simultaneously __run__ different kernels, which of course is impossible.)

So, all you have to do is this:

For each kernel you want to use, get the kernel sources, e.g. as a .tgz file, cd to /usr/src, and do a

 
  tar xzf ../where/ever/it/is/package.tgz
then cd to /usr/src/linux-2.0.34 (e.g.) and do the ordinary kernel configuration / compilation, i.e.
 
  make config  (or menuconfig or xconfig, whatever you like)
  make zImage modules modules_install
  cp arch/i386/boot/zImage /linux-2.0.34  (e.g.)
The last bit of the make will generate a directory /lib/modules/2.0.34 (e.g.) where the modules are put.

Then edit /etc/lilo.conf. Copy the part that configures your "normal" system start and change the name (label) of the copied configuration. Also change the name of the kernel binary to /linux-2.0.34 (e.g.).
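As a sketch, the copied section in /etc/lilo.conf might look like this (the labels, image paths, and root device here are hypothetical; adjust them to your own setup):

```
# original entry, left untouched
image = /vmlinuz
  label = linux
  root = /dev/hda1
  read-only

# copied entry for the newly built kernel
image = /linux-2.0.34
  label = linux-2.0.34
  root = /dev/hda1
  read-only
```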

Then repeat the same procedure for the next kernel. Nothing can be overwritten during this process: each kernel is compiled in its own directory /usr/src/linux-2.x.y, the generated modules are put in a separate directory /lib/modules/2.x.y, and your zImage copy (residing in /) has a new name, since it is named after its kernel version.

When you're through with all your kernel versions and have added the last portion to the /etc/lilo.conf file, do a

 
  lilo
at the prompt, which makes lilo reinstall the boot manager with the changed values. Now reboot, press [TAB] at the LILO prompt, and choose a kernel to use. If you followed these steps, you will not have deleted your original entry in /etc/lilo.conf, so if none of your newly compiled kernels boots properly, you can still boot the old kernel.

Hope it helps,

H.-G. Esser


Secondary IDE interface CDROM detection/automounting tip

Date: Wed, 1 Jul 1998 14:09:24 -0400
From: Jim Reith, [email protected]
In the Linux Gazette #28 the question was asked:
Hello. I have the Walnut Creek Slackware with Linux 2.0.30. I installed it on a Pentium 200 MMX with a 24x CD-ROM. During the installation I had to type "ramdisk hdd=cdrom" for the CD-ROM to be read, but after the installation Linux doesn't see the CD-ROM. I have an ATAPI CD-ROM, and when I tried to recompile my kernel, I saw that ATAPI is the default!!! So I don't understand where the problem is. What can I do?
I ran into this same problem on my home machine. I found that the rc.cdrom script wasn't checking for my drive properly: it couldn't find /dev/hdc, and I had to add /dev/hd1a in order to get the master on the secondary IDE interface. Once I put that in the list it worked fine. I suspect you would use /dev/hd1b for the slave.

Jim Reith [email protected]


Re ext2 partitions

Date: Thu, 2 Jul 1998 21:25:27 +0100
From: Alex Hornby, [email protected]

A much simpler solution to Albert T. Croft's file-finding trouble of wanting to look only at ext2 drives, so as to exclude the vfat partitions, is:

 
find . -fstype ext2 -name foo
Replacing foo with whatever you are looking for.

Cheers, Alex.


pdf resumes: pdflatex

Date: 04 Jul 1998 11:42:17 -0700
From: Karl M. Hegbloom, [email protected]

Dave Cook, who wrote the 2-cent tip about creating a .pdf file of a resume, must not have the latest teTeX installed. Either that, or he hasn't explored it much. ;-)

There is a `pdflatex' now that creates .pdf files directly. It works really well. There are also `pdftex' and `pdftexinfo'. You can typeset texinfo documents with `info2pdf' now.

Last time I tried it, there was apparently an off-by-one bug: when you click a section heading in the table-of-contents panel, it jumps to the section one below the one you clicked. The bug has been reported to the Debian bug tracking system.

Karl


Re: CHAOS

Date: Fri, 03 Jul 1998 16:07:14 +0100
From: Dom Mitchell, [email protected]

A point to note: the IP addresses used for the network should probably be modified to be in one of the ranges set aside in RFC 1918. In summary, they are:

 
     10.0.0.0        -   10.255.255.255  (10/8 prefix)
     172.16.0.0      -   172.31.255.255  (172.16/12 prefix)
     192.168.0.0     -   192.168.255.255 (192.168/16 prefix)
These addresses are guaranteed not to be in use on the Internet, should you get connected later. See the RFC for the full rationale.
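As a minimal sketch of those three blocks in practice, the following shell function (a hypothetical helper, not from the RFC) checks whether a dotted-quad address falls in one of them, using glob prefix matching rather than a full netmask calculation:

```shell
# rough test of whether an IPv4 address is in an RFC 1918 block;
# prefix matching only, a sketch rather than real CIDR arithmetic
is_private() {
    case "$1" in
        10.*)                                  return 0 ;;  # 10/8
        172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;  # 172.16/12
        192.168.*)                             return 0 ;;  # 192.168/16
        *)                                     return 1 ;;
    esac
}

is_private 192.168.1.5 && echo "192.168.1.5 is private"
is_private 8.8.8.8     || echo "8.8.8.8 is public"
```

The 172.16/12 block is the awkward one, since it spans 172.16 through 172.31 and needs three glob patterns to cover.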

Dom Mitchell


Re: 3com network cards

Date: Fri, 03 Jul 1998 20:33:13 +1000
From: leon, [email protected]

Re: complaint about 3com network card being slow in 2c tips.

3Com 3c590, 3c900, and 3c905 cards have settings stored in them. Unlike traditional settings such as I/O port, interrupt, or media type, one of these settings is easy to overlook...

They have a setting that slows the card down so that CPU time isn't chewed up by a flood of network traffic.

There is also a maximum-throughput setting and a medium setting.

leon


ext2 Partitions

Date: Thu, 2 Jul 1998 17:58:32 -0700 (PDT)
From: David Rudder, [email protected]

In your 30th issue, Albert Croft wrote in with a script to search only ext2 partitions. I believe you can do the same thing by using

 
find / -fstype ext2
David Rudder


RE: Searching (somewhat in vain) for sources on shell scripting

Date: Mon, 06 Jul 1998 12:37:05 -0400
From: "Paul L. Lussier", [email protected]

Well, my 2-second search turned this up. In addition, www.oreilly.com is the only site you need for the definitive word on anything related to Unix.

Unix Shell Programming, Revised Ed.
Kochan, Stephen G.; Wood, Patrick H.
Hayden Books; ISBN 0-672-48448-X

Korn Shell Programming Tutorial
Rosenberg, Barry
Addison-Wesley; ISBN 0-201-56324-X

AWK Language Programming: A User's Guide for GNU AWK
Robbins, Arnold D.
Free Software Foundation; ISBN 1-882114-26-4

Learning Perl, 2nd Edition (July 1997)
Schwartz, Randal L.; Christiansen, Tom; foreword by Larry Wall
O'Reilly & Associates; ISBN 1-56592-284-0; 302 pages, $29.95

Programming Perl, 2nd Edition (September 1996)
Wall, Larry; Christiansen, Tom; Schwartz, Randal L.
O'Reilly & Associates; ISBN 1-56592-149-6; 670 pages, $39.95

Advanced Perl Programming, 1st Edition (August 1997)
Srinivasan, Sriram
O'Reilly & Associates; ISBN 1-56592-220-4; 434 pages, $34.95

Paul


Re: $.02 tips on ext2 Partitions

Date: Mon, 06 Jul 1998 13:23:42 -0400
From: "Paul L. Lussier", [email protected]
In the July 1998 issue of Linux Gazette, Albert T. Croft said:
We knew the files we were looking for would only be on the ext2 partitions. We tried writing a batch file, using grep and gawk to get the mount points of the ext2 partitions and hand them to find. This proved unworkable if we were looking for patterns, such as h2*. We then tried to write just a find command, using gawk and grep to get the mount points. This was somewhat better, but using a print statement in gawk to get the names of the mount points wouldn't work. Some help came with remembering that gawk has a printf statement. Our final product, which we found quite useful and now have in our .bashrc files as linuxfind, is the following:
find `mount|grep ext2|gawk '{printf "%s ", $3}'` -name
A quick perusal of the mount man page would have revealed the -t flag, which does away with the grep (gawk is still needed to pull the mount point out of each line of mount's output). The command could therefore have been shortened to:
 
	find `mount -t ext2 | gawk '{print $3}'` -name
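To see what the extraction step is doing, here is a minimal sketch using a sample line in the format `mount` prints (the device and mount point are made up):

```shell
# a sample line in the format `mount` prints
sample='/dev/hda1 on / type ext2 (rw)'

# the mount point is the third whitespace-separated field
echo "$sample" | awk '{print $3}'

# on a live system the pieces combine like this (not run here):
#   find `mount -t ext2 | awk '{print $3}'` -name 'h2*'
```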
The "locate" command is also available on Linux (and has been documented within the pages of LG and LJ a number of times). From the man page:
locate searches one or more databases of file names and displays the file names that contain the pattern.
In addition, one could use 'which', 'whence' and 'whereis' to assist in the location of files.

Paul


LG30 ext2 Partition tip

Date: Fri, 10 Jul 1998 21:31:03 +0100 (BST)
From: Simon Huggins, [email protected]

Thanks for your tip which I saw in the Linux Gazette.

I think you may want to add the -mount switch to your command line though.

That way find won't go onto other filesystems except those listed.

Since on my system / is ext2 and /hdd/c is vfat, without the -mount switch find *WOULD* search the vfat partitions too. The -mount switch limits the search to the partitions you list with your grep/gawk combination.

Hope that helps.


Modem Connecting Speed

Date: Wed, 22 Jul 1998 23:06:27 +0000
From: NP, [email protected]

What speed is my modem connecting at ?

Got a new 56K modem and wondering how it's doing ? Fed up with seeing "115200" ?

(This assumes Red Hat 5.0)...

Edit /etc/sysconfig/network-scripts/chat-ppp1 (or whatever chat file you use) and insert the line:

 
'REPORT' 'CONNECT'
Edit /etc/sysconfig/network-scripts/ifup-ppp

Change this line:

 
connect "/usr/sbin/chat $chatdbg -f $CHATSCRIPT"
to:
 
connect "/usr/sbin/chat $chatdbg -f $CHATSCRIPT" 2>/dev/console
- to log to the console, or:
 
connect "/usr/sbin/chat $chatdbg -r /var/log/modem-speed -f $CHATSCRIPT"
- to log to a file /var/log/modem-speed

You'll see entries like:

 
chat:  Jul 22 22:31:06 CONNECT 52000/ARQ/V90/LAPM/V42BIS
(If you're lucky!)
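Once the speeds are being logged, pulling the number back out is a one-liner. A small sketch, using the sample log line from above (against the real file you would read /var/log/modem-speed instead):

```shell
# a sample REPORT line as chat writes it to the log
log='chat:  Jul 22 22:31:06 CONNECT 52000/ARQ/V90/LAPM/V42BIS'

# strip everything but the numeric speed after CONNECT
echo "$log" | sed 's/.*CONNECT \([0-9]*\).*/\1/'

# against the real log file, e.g.:
#   tail -1 /var/log/modem-speed | sed 's/.*CONNECT \([0-9]*\).*/\1/'
```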

NP


Short Article on upgrading to SMP

Date: Mon, 27 Jul 1998 16:06:08 -0500 (CDT)
From: Andy Carlson, [email protected]

My son and I upgraded to an SMP machine this last weekend. We encountered some problems, and thought it might make an interesting short article. Use it if you can :).

About a month ago, I acquired two 4.3GB UW SCSI drives from IBM. At the time, I was running an old Adaptec 1542 SCSI card (with no problems, I might add), but it does not support Ultra Wide, and it is an ISA card. In the process of looking at PCI Ultra Wide SCSI cards (I was going to purchase an Adaptec 2940UW, since I had some experience with them), I came across a Micronics W6-LI motherboard: dual Pentium Pro, with a built-in Adaptec AIC7880 UW SCSI chip. This is the story of that project.

My son and I started at 8:00 Saturday morning. We took apart my existing ATX machine, which housed an Intel VS440 motherboard, a 2GB IDE drive, a 2GB SCSI drive, and a SCSI CD-ROM. We removed everything (motherboard, drives, power supply, etc.), because the Micronics board is big and we wanted as few obstructions as possible while putting the motherboard in. We then put the motherboard, the two 4.3GB UW drives, the CD-ROM, and the power supply back in. I only needed the data from the IDE drive, so we hooked that up as well, but did not install it in the case. We booted into the BIOS and set a few things, including telling it to use the MP 1.4 spec. We inserted the Slackware 3.4 boot and root disks, and it booted just fine. The hardware portion was a snap.

We set up the partitions on the two UW drives and copied the data from the IDE drive to a partition on the first UW drive. We then started the installation of Linux. We installed Slackware 3.4, with kernel 2.0.30. This went well. We booted, and the system came up. We were anxious to try SMP, so we compiled a kernel with SMP support, and this is where the problems started. The machine would hang after running about a minute in SMP mode. We decided to download a newer kernel, so we tried 2.0.34. There is apparently a nasty bug in 2.0.34 on SMP machines: the SCSI chip could not be reset, and the driver looped trying to do so. We also tried 2.0.35, with no luck. This behaviour occurred whether we compiled for a single processor or multiple processors. The next step was to try a development kernel (a first for me). We downloaded 2.1.107 and installed it. We found that to use this kernel we also needed updated binutils, modutils, libc, ld.so, procps, procinfo, and mount. The upshot was that 17 hours after we started, we had a running multiprocessor machine.

Some things to keep in mind:


Cross-platform Text Conversions

Date: Thu, 30 Jul 1998 14:28:37 +0900
From: Matt Gushee, [email protected]

Well, I had some text files that I needed to convert from UNIX to DOS format. I downloaded the 'unix2dos' program ... and discovered to my horror that it was an A.OUT BINARY! I thought they'd purged all of those from the archives ;-) But seriously, I couldn't run the program, so I came up with a Tcl script to do the job. It can convert text files in any direction between UNIX, DOS, and Mac formats. It has only been tested with Tcl 8.0, but since it's very simple, I imagine it'll work with earlier versions too. One caveat: when converting from DOS format, the DOS end-of-file character (^Z) needs to be handled, or an extra newline can end up at the end of the output file.

Why Tcl? Well ...

To use the script, you should:
  1. If necessary, edit the pathname for tclsh.
  2. Save it wherever you want to, with any name (I call it textconv.tcl), and make it executable.
  3. symlink it to any or all of the following names, depending on which conversions you want to do, in a directory in $PATH:
    d2m	d2u	m2d	m2u	u2d	u2m
    
    These names must be exactly as shown in order for the script to work.
  4. To use, type the appropriate command with a source file and destination file as arguments. For example, to convert a Mac text file to UNIX format:
    $ m2u macintosh.txt unix.txt
    
That's it! Hope you find it useful.
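Step 3 can be scripted. A sketch, assuming the script was saved as textconv.tcl in $HOME/bin (both names are just the examples from the steps above; adjust to taste):

```shell
# create the six conversion command names as symlinks to the script
bindir="$HOME/bin"
mkdir -p "$bindir"
for name in d2m d2u m2d m2u u2d u2m; do
    ln -sf textconv.tcl "$bindir/$name"
done
```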
------ cut below this line ------------------------------
#!/usr/bin/tclsh

# capture the command name that invoked us and the
# source and destination filenames
set convtype $argv0
set infile [lindex $argv 0]
set outfile [lindex $argv 1]

set inchannel [open $infile "r"]
set outchannel [open $outfile "w"]

# according to the command name, set the end-of-line
# and end-of-file characters to the appropriate values
switch -glob -- $convtype {

    *2d {
	fconfigure $outchannel -translation "crlf" -eofchar "\x1a"
    }

    *2m {
	fconfigure $outchannel -translation cr
    }

    *2u {
 	fconfigure $outchannel -translation lf -eofchar ""
    }

    default {
	error "Invalid command name. This script must be \n\
invoked through a symbolic link with\n one of the following \
names:\n d2m, d2u, m2d, m2u, u2d, or u2m."
    }
    
}

# when converting from DOS, stop reading at the DOS end-of-file
# character (^Z) so it is not copied out as a spurious final line
if {[string match "*d2*" $convtype]} {
    fconfigure $inchannel -eofchar "\x1a"
}

while {[gets $inchannel line] >= 0} {
    puts $outchannel $line
}

close $inchannel
close $outchannel
#------------ end Tcl script--------------------------------
Matt Gushee Oshamanbe, Hokkaido, Japan


Published in Linux Gazette Issue 31, August 1998




This page maintained by the Editor of Linux Gazette, [email protected]
Copyright © 1998 Specialized Systems Consultants, Inc.