pass argument from shell script to awk
I have the following script:
Code:
#!/bin/sh
# converts a delimited file to a fixed-width file
# expects 3 parameters: input file delimiter,
#                       input file name,
#                       output file name

# variables
IFS=$1
INPUTFILE=$2
OUTPUTFILE=$3

awk -v FS=$IFS '
BEGIN {
    max1 = 0
    max2 = 0
    max3 = 0
    max4 = 0
    max5 = 0
}
{
    line[NR] = $0
    # dynamically set max field lengths to the lengths of the actual fields
    if (length($1) > max1) { max1 = length($1) }
    if (length($2) > max2) { max2 = length($2) }
    if (length($3) > max3) { max3 = length($3) }
    if (length($4) > max4) { max4 = length($4) }
    if (length($5) > max5) { max5 = length($5) }
    lines = NR
}
END {
    fmt = "%" max1 "s%" max2 "s%" max3 "s%" max4 "s%" max5 "s\n"
    for (i = 1; i <= lines; i++) {
        split(line[i], a)
        printf fmt, a[1], a[2], a[3], a[4], a[5]
    }
}' $INPUTFILE > $OUTPUTFILE
It interprets the two variables $INPUTFILE and $OUTPUTFILE correctly, but not $IFS. From the shell I pass a comma as the argument for $IFS, yet awk ends up using a space as the field separator, and for the life of me I cannot figure out why. If I hardcode the awk FS value as a comma, the script works great.
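Here is a minimal repro of the symptom, stripped down to just the suspect expansion. My working assumption is that the problem is the variable name itself: `IFS` is the shell's own field-separator variable, so the unquoted `$IFS` gets word-split using IFS, i.e. split on the very comma it holds.

```shell
#!/bin/sh
# Minimal repro (assumption: IFS is special to the shell, so an unquoted
# $IFS expansion is field-split on the comma it now contains).
IFS=','
printf 'quoted:   [%s]\n' "$IFS"   # prints [,]  -> quoting preserves the comma
printf 'unquoted: [%s]\n' $IFS     # prints []   -> the comma splits itself away
```

If that assumption is right, using a differently named variable and quoting it, e.g. `DELIM=$1` and `awk -v FS="$DELIM"`, should sidestep the clash, which would also be consistent with the hardcoded comma working.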