Quote:>let totalStrtSeconds=$(echo `convert2millisecs $starttime`)
If you're worried about performance, lose the UUOE/UUOBT (the useless use of echo and of backticks):
let totalStrtSeconds=$(convert2millisecs $starttime)
Quote:>I need to get the return value of the function and assign it to the
>variable totalStrtSeconds, rather than recurse. Anyway, thanks for
>any help you can offer, I really appreciate it.
I failed to find anything in the code you posted which did any
recursion, so I'm unclear on what the concern there was...
Quote:>function convert2millisecs {
> # Extract time field and convert to milliseconds
> hour=$(echo "$1" | cut -f1 -d ":")
> let hour2Millisecs=strtHour*60*60*1000
> minute=$(echo "$1" | cut -f2 -d ":")
> let minute2Millisecs=strtMinute*60*1000
> temp=$(echo "$1" | cut -f3 -d ":")
> second=$(echo "$temp" | cut -f1 -d ".")
> let sec2Millisecs=strtSecond*1000
> millisecs=$(echo "$temp" | cut -f2 -d ".")
> echo $((hour2Millisecs+minute2Millisecs+sec2Millisecs+millisecs))
>}
Avoid all those external calls to "cut":
function convert2millisecs {
    typeset IFS=":."
    set -A hmsl $1
    echo $(( ( (${hmsl[0]}*60 + ${hmsl[1]})*60 + ${hmsl[2]} )*1000 + ${hmsl[3]} ))
}
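For example (the HH:MM:SS.mmm timestamp format here is just my guess from your cut calls):
    ms=$(convert2millisecs "12:34:56.789")
    print "$ms"    # 45296789 == ((12*60+34)*60+56)*1000+789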
Quote:>let totallinecount=`wc -l ${FILENAME} | awk '{print $1}'`
It's a pointless optimization this far outside of any loop, but you
don't need the "awk" call:
let totallinecount=`wc -l < ${FILENAME}`
(Actually, as it will turn out later, you don't even need
"totallinecount" in the first place...)
Quote:> if (( $linecount % 2 == 0 )); then
> finishtime=`sed -n "${linecount}p" ${FILENAME} | awk '{print $4}'`
Better:
finishtime=`sed "${linecount}q;d" ${FILENAME} | awk '{print $4}'`
or better still:
finishtime=`awk 'NR=='"${linecount}"'{print $4;exit}' ${FILENAME}`
Or even better still yet: don't re-scan the file for the next
line each time around the loop --- redirect the file into the
loop and "read" each line in for processing.
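Something like this shape (just a sketch of the redirection idea; the
full version is further down):
    while read -r line; do
        :    # process "$line" here; no per-line sed/awk rescans
    done < "${FILENAME}"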
Quote:> finHour=$(echo "$finishtime" | cut -f1 -d ":")
...
> let totalFinMilliSecs=finHour2Millisecs+finMinute2Millisecs+finSec2Millisecs+finMillisecs
Instead of that mess we can use the convert2millisecs function:
let totalFinMilliSecs=$(convert2millisecs "$finishtime")
Quote:> # Extract start time
> nextline=`sed -n "${linecount}p" ${FILENAME} # | awk '{print $4}'`
Again, you don't want to be re-scanning the file for a specific
line number each time through the loop.
Quote:> starttime=$(echo "$nextline" | cut -f4 -d " ")
> name=$(echo "$nextline" | cut -f9 -d " ")
Depending on the exact format of the line you can probably avoid
the calls to "cut" by simply specifying 10 variable names to the
"read" command, or using the "set" command.
Quote:> # Extract time field and convert to milliseconds
> strtHour=$(echo "$starttime" | cut -f1 -d ":")
...
> let totalStrtSeconds=strtHour2Millisecs+strtMinute2Millisecs+strtSec2Millisecs+strtMillisecs
Again, making use of convert2millisecs:
let totalStrtSeconds=$(convert2millisecs "$starttime")
Putting it all together:
#!/bin/ksh

function convert2millisecs {
    typeset IFS=":."
    set -A hmsl $1
    echo $(( ((${hmsl[0]}*60+${hmsl[1]})*60+${hmsl[2]})*1000+${hmsl[3]} ))
}

FILENAME=${1##*/}
exec < "$FILENAME"
let linetype=1
while read a b c time e f g h curname j; do
    if (( linetype == 0 )); then
        # Extract finish time
        let totalFinMilliSecs=$(convert2millisecs "$time")
        let duration=totalFinMilliSecs-totalStrtMilliSecs
        # time stats
        print "$name\t\t$duration\t\t\t\t$starttime\t\t$time"
    else
        # Remember start time
        starttime=$time
        name=$curname
        # Extract time field and convert to milliseconds
        let totalStrtMilliSecs=$(convert2millisecs "$starttime")
    fi
    let linetype=1-linetype
done
# Is this next line really wanted?
rm "${FILENAME}.sorted"    # clean up
That should give you a substantial speed-up with even a
moderately large file.
--Ken Pizzini