## 1.0 < x < 2.0 decimation strategy ?

### 1.0 < x < 2.0 decimation strategy ?

Hi,

which strategy do you use to fast-decimate /
sub-sample a signal by a (fixed-point) factor less
than 2.0, i.e. 1.0 < factor < 2.0?

Constraints:
- dropping source / original samples is not allowed
- you can use a maximum of 4 original samples to
compute each subsample
- the algorithm must work in two dimensions too, i.e.
sub-sampling an image.

What are the pros (speed, works better near 1.0, etc.)
and cons (aliasing artifacts, not band-limited, etc.)?

If this has already been discussed (in this group),
where can I find a summary?

Stephan Arens

### 1.0 < x < 2.0 decimation strategy ?

Stephan Arens writes:
>which strategy do you use to fast-decimate /
>sub-sample a signal by a (fixed-point) factor less
>than 2.0, i.e. 1.0 < factor < 2.0?
>Constraints:
>- dropping source / original samples is not allowed
>- you can use a maximum of 4 original samples to
>  compute each subsample
>- the algorithm must work in two dimensions too, i.e.
>  sub-sampling an image.
>What are the pros (speed, works better near 1.0, etc.)
>and cons (aliasing artifacts, not band-limited, etc.)?

Hi Stephan,

The approach I favor is rational-rate resampling: express the decimation
factor as a ratio of integers. For example, to decimate by 1.181818, the
integer ratio 13/11 would be used. An approximation may be necessary, since
ratios involving large prime numbers lead to significant implementation
penalties.
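The ratio search can be sketched with Python's standard `fractions` module; the denominator bound of 20 is an arbitrary choice here, made to keep the resulting filter banks small:

```python
from fractions import Fraction

# Approximate the decimation factor by a small integer ratio.
# Bounding the denominator avoids large-prime ratios and keeps
# the polyphase filter bank small.
factor = 1.181818
ratio = Fraction(factor).limit_denominator(20)
print(ratio)  # 13/11
```

Tightening or loosening the denominator bound trades rate accuracy against filter complexity.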

The above calls for two steps: a rate increase by a factor of 11, followed by
decimation by a factor of 13, for a net rate change of 11/13 (i.e. decimation
by 13/11). In hardware, increasing the rate by 11 corresponds to
"zero stuffing" 10 sample values of zero after each original sample value.
This new signal is then low-pass filtered to remove "image" spectra; the
choice of LPF is based on performance criteria such as bandwidth and SNR.
The decimation by 13 requires a low-pass filter as well, in this case to
prevent aliasing; in practice a single filter with the tighter of the two
cutoffs serves both roles.
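A minimal pure-Python sketch of the two-step process for decimation by 13/11 (zero-stuff by 11, low-pass filter, keep every 13th sample), using one windowed-sinc filter cut at the tighter of the anti-imaging and anti-aliasing limits; the tap count and Hamming window are arbitrary choices, not something from the thread:

```python
import math

def lowpass_taps(num_taps, cutoff):
    """Hamming-windowed sinc low-pass FIR; cutoff in (0, 1] as a
    fraction of Nyquist, normalized to unity DC gain."""
    mid = (num_taps - 1) / 2.0
    taps = []
    for n in range(num_taps):
        x = n - mid
        ideal = cutoff if x == 0 else math.sin(math.pi * cutoff * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))
        taps.append(ideal * window)
    dc = sum(taps)
    return [t / dc for t in taps]

def resample_rational(signal, L, M, num_taps=121):
    """Resample by L/M: zero-stuff by L, low-pass, keep every M-th sample.
    One filter serves both anti-imaging (cutoff 1/L) and anti-aliasing
    (cutoff 1/M) duty, so it is cut at 1/max(L, M) of the raised rate."""
    taps = lowpass_taps(num_taps, 1.0 / max(L, M))
    # Zero stuffing: L-1 zeros after each sample; gain of L restores
    # the amplitude lost to the inserted zeros.
    up = []
    for s in signal:
        up.append(s * L)
        up.extend([0.0] * (L - 1))
    delay = (num_taps - 1) // 2  # compensate the filter's group delay
    out = []
    for i in range(0, len(up), M):  # filter only at the kept positions
        acc = 0.0
        for k, t in enumerate(taps):
            j = i + delay - k
            if 0 <= j < len(up):
                acc += t * up[j]
        out.append(acc)
    return out
```

For decimation by 13/11, call `resample_rational(x, 11, 13)`. A production implementation would fold the zero stuffing into a polyphase structure so the zeros are never multiplied at all.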

An early description of the theory can be found in: R. W. Schafer and
L. R. Rabiner, "A Digital Signal Processing Approach to Interpolation",
Proc. IEEE, June 1973. (I think it is a beautiful accomplishment.)

I can highly recommend the 'green book': "Multirate Digital Signal
Processing", Crochiere & Rabiner, Prentice-Hall, 1983. Both authors were
Bell Labs people. I've seen some of the algorithms on this topic coded in C.

It might be somewhat dubious to expect only 4 samples to suffice for
decimating an image. In my other life I designed video effects processors,
which are used extensively in the US TV market; interpolation and
decimation are frequent operations in handling those images. No one sells
beer with commercials made from images that have been manipulated with
4-point convolutions!!! Of course, you might place a different emphasis on
image quality matters.
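For reference, a 4-sample scheme like the one Stephan's constraints describe is typically a cubic-convolution resampler; a minimal 1-D sketch using Catmull-Rom weights (the kernel choice is my assumption, not something stated in the thread):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate the Catmull-Rom cubic between p1 and p2 at fraction t in [0, 1)."""
    return 0.5 * (
        2 * p1
        + (-p0 + p2) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
        + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t
    )

def resample_4tap(signal, factor):
    """Sub-sample by 1.0 < factor < 2.0: read the input at positions
    k * factor, interpolating each output from 4 neighboring samples."""
    out = []
    n = len(signal)
    k = 0
    while True:
        pos = k * factor
        i = int(pos)       # index of the sample just left of pos
        t = pos - i        # fractional offset within [signal[i], signal[i+1]]
        if i + 2 >= n:
            break          # not enough right-hand neighbors left
        p0 = signal[max(i - 1, 0)]  # clamp at the left edge
        out.append(catmull_rom(p0, signal[i], signal[i + 1], signal[i + 2], t))
        k += 1
    return out
```

Applied separably along rows and then columns, the same kernel handles the 2-D image case; the aliasing Dave objects to comes from the kernel's weak stopband, which no 4-tap filter can fully fix for factors approaching 2.0.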

-- Dave
Pixel Performance
Simsbury, Conn USA