While working with our image-processing software I have noticed the following:

If one convolves a sharp image with some PSF, what you really get is
a blurred image with an imaginary part. The imaginary part gets lost
when the result is stored as an ordinary image. That means you can't get
back to the exact (sharp) original, because you have lost information. In fact
(?), when dividing the FFT of the blurred image by the FFT of the
PSF, what I get is completely unusable; I have to restrict the division
to points not too far from the center of the FFTs. The result is
phenomenal (in my mind), but it's not as good as the original: it's
greyer and there is some ringing.
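To make the experiment concrete, here is a minimal NumPy sketch of the procedure described above (the square test image, the Gaussian PSF, and the cutoff value are illustrative choices of mine). Note that with a real image and a real PSF the blurred result comes out real up to rounding; what actually gets lost is the high-frequency content where the PSF's transform is nearly zero, which is why the division only works near the center of the spectrum:

```python
import numpy as np

# Toy "sharp" image: a bright square (illustrative choice).
n = 64
img = np.zeros((n, n))
img[16:48, 16:48] = 1.0

# Gaussian PSF, normalised to unit sum (sigma is an assumption).
y, x = np.mgrid[:n, :n]
sigma = 2.0
psf = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * sigma ** 2))
psf /= psf.sum()

# Blur = multiplication of the FFTs; ifftshift puts the PSF peak at (0, 0).
PSF = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * PSF))

# Naive inverse filter: divide the FFTs, but only where |PSF| is not
# tiny -- i.e. "not too far from the center" of the spectrum.
eps = 1e-3
mask = np.abs(PSF) > eps
EST = np.zeros_like(PSF)
EST[mask] = np.fft.fft2(blurred)[mask] / PSF[mask]
restored = np.real(np.fft.ifft2(EST))
```

The zeroed-out high frequencies are what produce the grey cast and the ringing.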

Now the questions are:

a) Is there some way of making a good guess to improve the process?

b) Perhaps it's (theoretically) possible to synthesize the imaginary part
of the image from the PSF (if one knows it a priori) and the real part of
the blurred image. At least when it comes to cameras and lenses (or a
box camera) it would be possible to KNOW the full info about the PSF.
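Regarding (a), one standard "good guess" is Wiener-style regularization: instead of cutting the division off at some radius, damp it wherever the PSF's transform is weak. A sketch, assuming the PSF is known and same-sized as the image, with the constant k chosen by hand:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Wiener-style inverse filter.

    Instead of dividing by the PSF's transform outright (and blowing up
    where it is near zero), weight each frequency by conj(PSF) and damp
    it by |PSF|^2 + k.  `k` plays the role of a noise-to-signal ratio;
    the constant value here is an assumption, not an estimate.
    `psf` is assumed centered and the same shape as `blurred`.
    """
    PSF = np.fft.fft2(np.fft.ifftshift(psf))
    H = np.conj(PSF) / (np.abs(PSF) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * H))
```

Smaller k recovers more detail but amplifies noise, so with real photographs it has to be tuned.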

Compare these two scenarios:

1: You have a blurred photo where one blur is known to come from a point
source.

2: You have a blurred photo and PSF info about the blurring process (you
also know the imaginary part).

Since in 2 you know more (namely, info about the imaginary part of the PSF),
it must be possible to deblur the image more effectively?
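The difference between the two scenarios can be demonstrated: with an asymmetric PSF, inverse filtering with its full complex transform (magnitude and phase, the imaginary part included) recovers the image, while filtering with only the magnitude does not. A sketch with an assumed 5-pixel horizontal motion blur (the test image and blur length are my own choices):

```python
import numpy as np

n = 64
img = np.zeros((n, n))
img[20:44, 20:44] = 1.0

# Asymmetric PSF: 5-pixel horizontal motion blur.  Its transform has a
# nontrivial phase -- the extra information scenario 2 assumes you have.
psf = np.zeros((n, n))
psf[0, :5] = 1.0 / 5.0
PSF = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * PSF))

def inverse_filter(blurred, H, eps=1e-3):
    """Divide the FFTs wherever |H| is above a small threshold."""
    B = np.fft.fft2(blurred)
    E = np.zeros_like(B)
    mask = np.abs(H) > eps
    E[mask] = B[mask] / H[mask]
    return np.real(np.fft.ifft2(E))

full_info = inverse_filter(blurred, PSF)               # complex transform known
magnitude_only = inverse_filter(blurred, np.abs(PSF))  # phase thrown away
```

With the full transform the restoration is essentially exact here (this toy blur has no exact spectral zeros); with magnitude only, the phase errors smear and shift the result.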

The question is: how?

Can someone fill me in?

Pelle

PS: If anyone wants to have a look at the software (with some example images),
you can download it (alpha state) at http://www.micro.se (look for quantimg).

Comments are very welcome!