Article jul2006.tar

Questions and Answers

Amy Rich

Q We have an old Alpha that we’re trying to decommission, which is running Digital Unix 5.1. Our plan is to move all of the people on it to an existing Solaris 8 server, but Digital Unix stores its passwords in a database instead of a flat file. Is there an easy way to pull the hashed password out of the database for each user so we can just migrate their current passwords over, or do we have to make everyone log in and set new passwords on the new machine?

A The answer to your question depends on whether your Digital Unix machine is using 3DES passwords or an extended password format for extra security. If your users’ passwords are all 3DES, then you can pull the necessary information out of the Digital Unix password database. If you’re using something other than 3DES, there’s no way (that I know of) to migrate from that format back to 3DES.

There are two files that hold password hashes:

/tcb/files/auth.db
  Enhanced security password database for system accounts.
/var/tcb/files/auth.db
  Enhanced security password database for user accounts. 
            
So, assuming you’re using 3DES, you can either try to parse each of those files using a script, or you can whip up a short C program to use the getespwuid() call. Here’s a quick and dirty example of a C program that will do what you want (minus decent error checking):
/********************************************************************
** getshadow.c
**
** Retrieve crypted passwd hash of specified UID from protected password
** databases on Digital Unix 5.1.
**
** Compile as cc getshadow.c -lsecurity -ldb -laud -lm -o getshadow
********************************************************************/

#include <sys/types.h>
#include <stdio.h>
#include <sys/security.h>
#include <prot.h>

main (argc, argv)
int        argc;
char      *argv[];
{
  struct es_passwd *acct;
  int uidnum;

  set_auth_parameters(argc, argv);
  initprivs();

  uidnum = atoi (argv[1]);
  acct = getespwuid (uidnum);

  /* print out a Solaris style /etc/shadow entry for a valid UID */

  if (acct == NULL || acct->ufld->fd_encrypt == NULL) {
    printf("BADENTRY:%d:NP:6445::::::\n", uidnum);
  }
  else if (strlen (acct->ufld->fd_encrypt) != 13) {
    printf("%s:NP:6445::::::\n", acct->ufld->fd_name);
  }
  else {
    printf("%s:%s:6445::::::\n", acct->ufld->fd_name,
           acct->ufld->fd_encrypt);
  }
  return 0;
}
 
To create an entire /etc/shadow file for a Solaris machine, you’d run the above program, feeding it the input of all the UIDs in the /etc/passwd file:
touch /tmp/shadow
chmod 600 /tmp/shadow
for i in `awk -F: '{print $3}' /etc/passwd`; do
  ./getshadow $i >> /tmp/shadow
done
 
The above program checks that each crypted password entry is exactly 13 characters long, verifying that it’s a 3DES password. Accounts whose password field is anything else, including NOLOGIN, get an NP (No Password) entry instead. You may want to change this to *LK* (Locked), depending on whether you want these accounts to be able to run cron jobs.
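If you decide you’d rather lock those accounts outright, a quick sed pass over the generated file does it. This is a sketch that assumes the /tmp/shadow file built by the loop above:

```shell
# Assumes /tmp/shadow was produced by the loop above; writes a copy
# with every NP (no password) field changed to *LK* (locked).
sed 's/:NP:/:*LK*:/' /tmp/shadow > /tmp/shadow.locked
```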

If you’re instead parsing the output of edauth -g <username>, you’ll want the u_pwd= field. If it’s a 3DES password, you can match it in perl with something like the following:

open (AUTH, "edauth -g $username |")
  or die "Cannot open edauth -g $username\n";
while (<AUTH>) {
  if (/u_pwd=/) {
    $_ =~ s/.*u_pwd=([\w\.\/]*):.*/$1/;
    chomp;            # $_ now holds the crypted hash
    last;
  }
}
close (AUTH);

Q At our company, we have a gateway machine that handles all incoming mail and hands it off to another machine acting as a mail hub. Both of these boxes are running sendmail 8.13.6. We want to use the redirect functionality to bounce messages for employees who are no longer with the company, but to do that, you need to consult the aliases file. Since the gateway host is forwarding everything to an internal mail hub, it never looks at the aliases file. We could put the redirect entries on the internal mail host, but the problem is that we wind up attempting to bounce a lot of forged spam that’s passed the initial gateway machine.

Is there any way we could use the redirect functionality on the gateway machine so we don’t wind up accepting a lot of spam and then either bouncing the message back to some innocent person who got joe jobbed (or just holding it in our mailq because it’s trying to bounce back to an invalid address)? For reference, here’s our current sendmail.mc:

divert(0)dnl
VERSIONID(`@(#)sendmail.mc       8.13.6.1 (gw.my.domain) 5/4/2006')
OSTYPE(aix5)dnl
FEATURE(`use_cw_file')dnl
FEATURE(`always_add_domain')dnl
FEATURE(`masquerade_envelope')dnl
FEATURE(`masquerade_entire_domain')dnl
FEATURE(`allmasquerade')dnl
MASQUERADE_AS(`my.domain')dnl
FEATURE(`nouucp',`reject')dnl
FEATURE(`redirect')dnl
FEATURE(`access_db')dnl
FEATURE(`blacklist_recipients')dnl
FEATURE(`dnsbl', `dul.dnsbl.sorbs.net', \
 `"550 5.7.1 ACCESS DENIED to "$&{client_addr}" \
 by dul.dnsbl.sorbs.net \
 (http://www.dnsbl.us.sorbs.net/cgi-bin/lookup?js&IP=)"', `')dnl
FEATURE(`dnsbl', `dnsbl.njabl.org', `"550 5.7.1 ACCESS DENIED to \
 "$&{client_addr}" by dnsbl.njabl.org (see http://njabl.org/)"', `')dnl
FEATURE(`dnsbl', `list.dsbl.org', `"550 5.7.1 ACCESS DENIED to \
 "$&{client_addr}" by list.dsbl.org (see http://dsbl.org/)"', `')dnl
FEATURE(`dnsbl', `sbl.spamhaus.org', `"550 5.7.1 ACCESS DENIED to \
 "$&{client_addr}" by sbl.spamhaus.org \
 (see http://www.spamhaus.org/sbl/index.lasso)"', `')dnl
define(`MAIL_HUB',`smtp')dnl
define(`confSAFE_QUEUE', `true')dnl
define(`confPRIVACY_FLAGS', ``authwarnings,noexpn,novrfy'')dnl
define(`confTO_IDENT', `0s')dnl
define(`confSMTP_LOGIN_MSG', `$j (WE DO NOT ACCEPT UCE OR UBE)')dnl
MAILER(local)dnl
MAILER(smtp)dnl

A As you pointed out, you can’t use redirect because the gateway never consults your aliases database once you’ve defined MAIL_HUB. What you can do, though, is use an access_db entry to accomplish more or less the same thing. Say you want to inform people that the user joe@my.domain is no longer with your company and has moved to jdoe@another.domain. You’d add something like the following entry to your access file:

To:joe@my.domain ERROR:"550 joe@my.domain is no longer with My
Company, use jdoe@another.domain to reach him regarding personal
matters and newcontact@my.domain for inquiries about My Company."
Be sure to rebuild the access.db file and HUP sendmail for the changes to take effect.
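The exact commands vary by platform, but with the usual layout under /etc/mail (an assumption about your install), the rebuild and HUP look something like this; the first line of sendmail.pid holds the daemon’s PID:

```shell
# Rebuild access.db from the access source file, then signal sendmail.
makemap hash /etc/mail/access < /etc/mail/access
kill -HUP `head -1 /etc/mail/sendmail.pid`
```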

Q We’re centralizing our home directories from various machines onto one Netapp. From here on out, we’ll be using automounter to mount home directories individually via NFS on all of these other machines.

On the current standalone machines, the user’s home directory is stored in a hashed directory structure based on the first two characters of the user’s username. For example, the user fred would have the home directory /home/f/r/fred and the user mary would use /home/m/a/mary. We don’t really have enough users to warrant such a scheme, so we want to flatten this to simply be /home/fred and /home/mary. This will enable us to use a very simple catchall & construct in the automounter map.

What’s the best way to migrate all the user home directories to the Netapp with minimal downtime?

A You can always write a simple script that specifies the location of the home directory on the fly and have automounter use that instead of the & expansion variable. Regardless of your choice to reorganize the directory structure, your best bet for minimal downtime is probably rsync. Since the Netapp doesn’t have rsync as part of its operating environment, you’ll need to do the data move on the client machine via NFS instead of over ssh or rsh.
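For the record, an executable automounter map is just a script that receives the lookup key (the username) as its argument and prints the map entry. A sketch that reproduces your hashed layout, assuming the netapp:/vol/vol0/homedirs path used elsewhere in this answer:

```shell
#!/bin/sh
# Hypothetical executable map: given a username as $1, emit the
# hashed location on the Netapp, e.g. fred -> f/r/fred.
key=$1
first=`echo $key | cut -c1`
second=`echo $key | cut -c2`
echo "netapp:/vol/vol0/homedirs/$first/$second/$key"
```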

To begin, mount the Netapp partition on the machine you want to copy the files from, making sure you have root-level access to the filesystem:

mkdir /mnt/netapp
mount netapp:/vol/vol0/homedirs /mnt/netapp
Now perform an initial rsync to copy all of the data over. At this point, we aren’t going to worry about reorganizing the directory structure, since we’ll be doing another rsync just before it’s time to swap the filesystems. The --delete flag will delete anything in /mnt/netapp that does not exist in /home:
rsync -av --delete /home/ /mnt/netapp
Just before you’re ready to swap the disks, make sure that the filesystem is quiescent and that no users are logged in. Run the rsync one final time, then move the directories around:
rsync -av --delete /home/ /mnt/netapp

# In case you have single character usernames that will clash with the
# top level of the hashed level directory structure, put accounts in a
# tmp directory first (this assumes you don't have a user named tmp).
mkdir /mnt/netapp/tmp
mv /mnt/netapp/*/*/* /mnt/netapp/tmp

# Use rmdir to prune the now empty directories in the hashed tree.
# You'll get errors for tmp and the user directories under it, since
# they're not empty.
rmdir /mnt/netapp/*/*
rmdir /mnt/netapp/*

# Move all of the accounts into their final home and remove the tmp
# directory.
mv /mnt/netapp/tmp/* /mnt/netapp/
rmdir /mnt/netapp/tmp
At this point you need to modify your fstab (or vfstab) files, removing the entry for the local home directory filesystem, and unmount both filesystems. You’ll also want to turn on automounter and make sure your auto_home maps are up to date. Presumably you’re doing something akin to the following in auto_home:
fred netapp:/vol/vol0/homedirs/&
mary netapp:/vol/vol0/homedirs/&
Have the following in auto_master (perhaps with the addition of the nosuid and/or nodev flags for security):
/home           auto_home       -nobrowse
Should any issues arise and you need to roll back the changes you’ve made (and assuming you haven’t added any new accounts since the migration), it’s best to work from the listings on the old disk. This assumes that you’ve disabled automounter and the Netapp disk is mounted as /home now while the old local home directory disk is mounted as /mnt/oldhome:
# Build up a list of users and copy their files back, ignoring the
# Netapp .snapshot directories.
#
# This assumes that all of your users are nested two directories down
# and have no weird embedded characters in their usernames.

cd /mnt/oldhome

for i in `ls`; do
  for j in `ls $i`; do
    for k in `ls $i/$j`; do
      rsync --delete -av --exclude .snapshot /home/$k/ /mnt/oldhome/$i/$j/$k
    done
  done
done
Depending on the reason you need to roll back, you may not want to use the --delete flag (e.g., missing files or certain types of corruption).

Q I’m attempting to create a Solaris package for a machine running 5.9. This particular package has a postinstall script that requires some user input, so I’ve tried using various forms of redirection and piping with the pkgadd command (using things like shift and/or read inside the script). No matter what I try, I can’t seem to make the script ask me the necessary questions before the package is installed. I know this is possible because I’ve seen other packages do it, but I’m stumped as to how they manage to take input. Could you write me a very simple postinstall script and an example of pkgadd to illustrate the proper syntax?

A The issue is that there’s no way for any of the scripts (preinstall, postinstall, preremove, postremove) to take user input, and you can’t use redirection or piping with pkgadd. The Solaris packaging suite does include a way to get information from the user outside of these scripts, though. If you take a look through Sun’s Solaris 9 Application Packaging Developer’s Guide, you’ll see that there’s a request script that will do what you need: http://docs.sun.com/app/docs/doc/817-1778.

Here’s an example request script that checks whether an old version of the package, under a different name, is installed. If so, and you choose not to overwrite it, the request script tells you how to remove the other package. It then asks whether you’d like to install documentation and/or example scripts and prompts for the location if you say yes:

OPKGNAME="oldpkg"
PKGNAME="newpkg"

DEF_DOC_DIR="${BASEDIR}/doc"
DEF_SCRIPT_DIR="${BASEDIR}/bin"
INST_DOC=NO
INST_SCRIPT=NO

pkginfo -q $OPKGNAME
OVERWRITE=$?
if [ $OVERWRITE -eq 0 ]; then
  echo "WARNING: Found previously-installed version of this package: "
  pkginfo -l ${OPKGNAME}
  echo "  Run the command 'pkgrm ${OPKGNAME}'."
  echo "  After ${OPKGNAME} has been successfully removed, \
       run 'pkgadd ${PKGNAME}' again."
  exit 1
fi

printf "\n%s" "Install user documentation? [y]: "
read ANSWER
ANS=`echo ${ANSWER} | tr '[:upper:]' '[:lower:]'`
if [ "X${ANS}" = "Xy" -o "X${ANS}" = "Xyes" -o "X${ANS}" = "X" ]; then
  DOC_DIR=`ckpath -aof -d $DEF_DOC_DIR -p \
  "Where would you like to install the documentation? \
  [${DEF_DOC_DIR}]: "` || exit $?
  INST_DOC=YES
fi

printf "\n%s" "Install example scripts? [y]: "
read ANSWER
ANS=`echo ${ANSWER} | tr '[:upper:]' '[:lower:]'`
if [ "X${ANS}" = "Xy" -o "X${ANS}" = "Xyes" -o "X${ANS}" = "X" ]; then
  SCRIPT_DIR=`ckpath -aof -d $DEF_SCRIPT_DIR -p \
  "Where would you like to install the example scripts? \
  [${DEF_SCRIPT_DIR}]: "` || exit $?
  INST_SCRIPT=YES
fi

cat >$1 <<!
DOC_DIR=${DOC_DIR}
SCRIPT_DIR=${SCRIPT_DIR}
INST_DOC=${INST_DOC}
INST_SCRIPT=${INST_SCRIPT}
!
exit 0
Now you can put something like the following in your postinstall script to install the documentation and/or example scripts:
umask 022

if [ ${INST_DOC} = YES ]; then
  echo "Extracting documentation into ${DOC_DIR}"
  mkdir -p ${DOC_DIR}
  cd ${DOC_DIR} && if [ -f ${BASEDIR}/${PKGNAME}/docs.zip ]; then
    unzip ${BASEDIR}/${PKGNAME}/docs.zip
  else
    echo "Can't find ${BASEDIR}/${PKGNAME}/docs.zip"
    exit 1
  fi
fi

if [ ${INST_SCRIPT} = YES ]; then
  echo "Extracting example scripts into ${SCRIPT_DIR}"
  mkdir -p ${SCRIPT_DIR}
  cd ${SCRIPT_DIR} && if [ -f ${BASEDIR}/${PKGNAME}/scripts.zip ]; then
    unzip ${BASEDIR}/${PKGNAME}/scripts.zip
  else
    echo "Can't find ${BASEDIR}/${PKGNAME}/scripts.zip"
    exit 1
  fi
fi
Include the following line in your prototype file before you build the package with pkgmk:
i request

If you later want to automate your package installation for use with JumpStart, you can use the pkgask program to generate a default response file to the questions in your request file.
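For example, with the hypothetical package name newpkg spooled in /var/spool/pkg (both placeholders for your environment):

```shell
# Answer the request script's questions once, saving the answers...
pkgask -r /tmp/response.newpkg -d /var/spool/pkg newpkg

# ...then replay them for a non-interactive install.
pkgadd -n -r /tmp/response.newpkg -d /var/spool/pkg newpkg
```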

Amy Rich has more than a decade of Unix systems administration experience in various types of environments. Her current roles include that of Senior Systems Administrator for the University Systems Group at Tufts University, Unix systems administration consultant, author, and charter member of LOPSA. She can be reached at: qna@oceanwave.com.