looking for routines (algorithms) to do image comparison

Post by Jeffrey Tsao » Thu, 09 Oct 1997 04:00:00

> There are two
> square (2-dimensional) arrays whose elements contain 0 or 1; each array
> represents a black and white image, where 0 is a black pixel and 1 is a
> white pixel. Given two such arrays, produce a number (on some scale, such
> as 0% to 100%, or 0.0 to 1.0) which estimates how close those images are
> to each other, in terms of "looking the same".

Hi Mike,

This will no doubt be a disappointing reply.
What you are describing is the general problem of image registration.
In your case, you are registering the binary transform of two images.
Unfortunately, there is no known method of determining translation,
rotation, scale and image similarity without brute-force computation:
you compare your images by rotating, translating and scaling one image
and finding its difference from the second image. You iterate this
process until you can't find a better match (i.e. your optimization
process has converged). Then, you can derive the similarity from the
image difference.

> may even perform some degree of feature analysis
> (as a person looking at two pictures would be able to say (for example)
> "well, picture A has the same kind of pattern with circles in it as B, but
> it is a little smaller and off to the left").

Feature extraction, as you suggested, is feasible, but it requires
knowing the kind of images you'll be looking at so that you can program
detectors to look specifically for the inherent features. (Examples are
using the Hough transform and intensity tensors for these purposes.)
However, finding the correspondence between similar features on
different images will be a major headache.

Here's a possible NON-BRUTE-FORCE scheme, but it will ONLY WORK if you
satisfy the following assumption. Otherwise, you need a brute-force
scheme. (If you can figure out an alternative, please let me know,
thanks!)

ASSUMPTION: The binarized object in your images (i.e. the collection of
            '1' pixels) is NOT cropped by the image edges.

The idea is to use a principal component analysis and align the two
images along their principal axes. Then, you can take the difference of
the superposed and aligned images.

1. Calculate the covariance matrix for each image:
    C = [ Sum(Xi - X)(Xi - X)     Sum(Xi - X)(Yi - Y) ]
        [ Sum(Xi - X)(Yi - Y)     Sum(Yi - Y)(Yi - Y) ]
    where (Xi,Yi) = coordinates of each '1' pixel
          (X, Y ) = average position of all '1' pixels,
                    i.e. the centroid or center of mass
2. Find the eigenvectors and eigenvalues of C. You can find routines to
   solve this in Numerical Recipes in C.
3. The eigenvectors are the new X and Y axes that are "aligned" with
   your binarized object. (Just plot them and you'll see.) The
   eigenvalues measure the spread (i.e. the square of the size) of the
   object along the new X and Y axes (i.e. eigenvalue = size^2 of the
   projection onto the corresponding new axis).
4. Translate one image so that the centroid of its binarized object
   superposes the centroid of the second image.
5. Scale one image with respect to the other based on the ratios of the
   eigenvalues (take square roots, since each eigenvalue measures size
   squared).
6. Then, rotate one image so that its eigenvector with the larger
   eigenvalue is aligned with the eigenvector with the larger
   eigenvalue of the second image.
7. Compare this translated, scaled, and rotated version of one image
   with the other (original) image to determine the similarity.
8. NOW, because an eigenvector V is equivalent to the eigenvector -V,
   the new X and Y axes could be flipped to -X and -Y, so you have to
   check this. To check, rotate the already-rotated image by an
   ADDITIONAL 180 degrees and do the image comparison again. Pick the
   higher similarity value as the true similarity value; the smaller
   value corresponds to the two images being rotated 180 degrees apart.
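A minimal sketch of steps 1-3 in Python/NumPy, using `numpy.linalg.eigh` in place of the Numerical Recipes routines; the rectangle image below is a made-up example, not from the original post.

```python
import numpy as np

def principal_axes(img):
    """Steps 1-3: centroid, covariance matrix, and eigen-decomposition
    of the '1'-pixel coordinates of a binary image."""
    ys, xs = np.nonzero(img)                   # coordinates of '1' pixels
    pts = np.column_stack((xs, ys)).astype(float)
    centroid = pts.mean(axis=0)                # center of mass (X, Y)
    d = pts - centroid
    C = d.T @ d                                # 2x2 covariance (sum form, as above)
    evals, evecs = np.linalg.eigh(C)           # eigenvalues in ascending order
    return centroid, evals, evecs

# Hypothetical test image: an axis-aligned 3x5 rectangle of '1's.
img = np.zeros((10, 10), dtype=int)
img[4:7, 2:7] = 1
centroid, evals, evecs = principal_axes(img)
# centroid is the rectangle's center, (x, y) = (4, 5); the larger
# eigenvalue (evals[1]) belongs to the long (x) axis of the rectangle.
```

Steps 4-8 then use `centroid` for the translation, the square roots of `evals` for the scale factor, and the columns of `evecs` for the rotation (checking both signs of the major axis, as step 8 warns).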

Note:
This is the fastest method for image registration, but it doesn't work
too well unless the mass distributions of your objects are similar.
Otherwise, you have to resort to brute-force methods. Do a search on
"image registration".

Hope this helps!

Cordially,
Jeffrey Tsao
Biomedical Magnetic Resonance Laboratory
University of Illinois at Urbana-Champaign

 
 
 

looking for routines (algorithms) to do image comparison

Post by Stephen A.B. Jones » Tue, 14 Oct 1997 04:00:00


Mr Levin,

        There has been a great deal of research undertaken into exactly
this field, with no concrete results to my knowledge. I worked on this
problem for a while, with respect to detecting where an object had moved
between two images taken at different times. I first ran an edge
detection filter on each image and then examined intersections of edges.
Because I knew the original separation and approximate position of the
intersections, I could search for the same in the second image. If the
object had moved towards the camera, the ratio of separation distances
remained the same.

        For your application, looking merely for similarity instead of
positional information, you could perhaps base "similarity" on the mean
distance between apparently "matching" edge intersections in the two
images. If you care only that the images are of approximately the same
subject, you could even simply compare the number of edge intersections
found in the two images.
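A very rough Python/NumPy sketch of that last, coarsest idea: comparing how much edge structure each image has. The gradient-magnitude edge detector and the count-ratio score are my own crude stand-ins, not the filter or intersection matching Mr Jones actually used.

```python
import numpy as np

def edge_map(img, thresh=0.25):
    """Crude edge detector: threshold the gradient magnitude
    of the binary image."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def edge_count_similarity(a, b):
    """Coarse similarity in [0, 1]: ratio of edge-pixel counts
    (a stand-in for counting matched edge intersections)."""
    na, nb = edge_map(a).sum(), edge_map(b).sum()
    if max(na, nb) == 0:
        return 1.0                    # both images are featureless
    return min(na, nb) / max(na, nb)

small = np.zeros((16, 16), dtype=int); small[6:10, 6:10] = 1
large = np.zeros((16, 16), dtype=int); large[3:13, 3:13] = 1
print(edge_count_similarity(small, small))   # identical images: 1.0
print(edge_count_similarity(small, large))   # shorter perimeter: < 1.0
```

As the next paragraph notes, anything along these lines inherits the weaknesses of the edge detector itself in low-contrast scenes.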

        For a high-contrast scene, my method of object tracking worked
well. However, for a scene with relatively little contrast, accurate
edge detection can be very difficult and may not be useful. I hope this
helps in some way.

Stephen Jones
--------------
US Army Research Laboratory
Scientific Visualization Group


>    I am looking for pointers to routines (available by FTP on the net
> somewhere) or algorithms to solve the following problem. There are two
> square (2-dimensional) arrays whose elements contain 0 or 1; each array
> represents a black and white image, where 0 is a black pixel and 1 is a
> white pixel. Given two such arrays, produce a number (on some scale, such
> as 0% to 100%, or 0.0 to 1.0) which estimates how close those images are
> to each other, in terms of "looking the same". The simplest thing would be
> to just do a pixel by pixel comparison, but this is too simplistic in that
> it would be fooled by slight shifts or changes of scale. My ideal
> algorithm would be tolerant of some spatial shifts, changes of scale,
> rotations, etc., and may even perform some degree of feature analysis
> (as a person looking at two pictures would be able to say (for example)
> "well, picture A has the same kind of pattern with circles in it as B, but
> it is a little smaller and off to the left").  There probably is no
> algorithm as good as what a human observer could do, but surely there are
> image analysis routines out there more sophisticated than a simple
> pixel-by-pixel comparison. Note that I am *not* looking for simple noise
> reduction - it isn't that one of the images has extra speckles or
> degradation and I need to decide if it is still the same image. Anyhow, if
> anyone has pointers to algorithms that can be turned into code, or even
> better, existing routines (if they are part of some library or software
> package that's available on the net), please email me at

> Mike Levin