> There are two
> square (2-dimensional) arrays whose elements contain 0 or 1; each array
> represents a black and white image, where 0 is a black pixel and 1 is a
> white pixel. Given two such arrays, produce a number (on some scale, such
> as 0% to 100%, or 0.0 to 1.0) which estimates how close those images are
> to each other, in terms of "looking the same".
This will no doubt be a disappointing reply.
What you are describing is the general problem of image registration.
In your case, you are registering binarized versions of two images.
Unfortunately, there is no known method of determining translation,
rotation, scale and image similarity without brute-force computation:
you compare the images by rotating, translating and scaling one image
and measuring its difference from the second image, and you iterate this
process until you can't find a better match (i.e. your optimization
process has converged). Then you can derive your similarity measure
from the final image difference.
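As a rough illustration of that brute-force loop, here is a minimal sketch (assuming NumPy, square input arrays as in your question, and restricting the search to 90-degree rotations and small integer shifts; no scaling). The function names are my own, and a real implementation would search a finer grid of transforms:

```python
import numpy as np

def similarity(a, b):
    """Fraction of pixels on which two equal-sized binary images agree."""
    return float(np.mean(a == b))

def shift_image(img, dy, dx):
    """Shift `img` by (dy, dx), filling the exposed border with 0 (black)."""
    out = np.zeros_like(img)
    h, w = img.shape
    y0, y1 = max(dy, 0), min(h + dy, h)
    x0, x1 = max(dx, 0), min(w + dx, w)
    out[y0:y1, x0:x1] = img[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
    return out

def brute_force_register(a, b, max_shift=2):
    """Exhaustively rotate (by multiples of 90 degrees) and translate `a`,
    keeping the best agreement score against `b`."""
    best = 0.0
    for k in range(4):                      # 0, 90, 180, 270 degrees
        ra = np.rot90(a, k)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                best = max(best, similarity(shift_image(ra, dy, dx), b))
    return best
```

The nested loops make the cost obvious: each extra degree of freedom (finer angles, scaling) multiplies the search space, which is why practical methods wrap this in an optimizer rather than a full grid search.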
Quote:
> may even perform some degree of feature analysis
> (as a person looking at two pictures would be able to say (for example)
> "well, picture A has the same kind of pattern with circles in it as B, but
> it is a little smaller and off to the left").
Feature extraction, as you suggested, is feasible, but it requires knowing
the kind of images you'll be looking at so that you can program detectors
to look for the inherent features specifically. (Examples are using the
Hough transform and intensity tensors for these purposes.) However, finding
the correspondence between similar features on different images will be a
major headache.
Here's a possible NON-BRUTE-FORCE scheme, but it will ONLY WORK if you
satisfy the following assumption. Otherwise, you need a brute-force
scheme. (If you can figure out an alternative, please let me know,
thanks!)
ASSUMPTION: The binarized object in your images (i.e. the collection of
'1' pixels) is NOT cropped by the image edges.
The idea is to use a principal component analysis and align the 2 images
along their principal axes. Then you can take the difference of the
superposed and aligned images.
1. Calculate the covariance matrix for each image:

       C = [ Sum(Xi - X)(Xi - X)   Sum(Xi - X)(Yi - Y) ]
           [ Sum(Xi - X)(Yi - Y)   Sum(Yi - Y)(Yi - Y) ]

   where (Xi, Yi) = coordinates of each '1' pixel
         (X,  Y ) = average position of all '1' pixels,
                    i.e. the centroid or center of mass
2. Find the eigenvectors and eigenvalues of C. You can find routines to
   solve this in Numerical Recipes in C.
3. The eigenvectors are the new X and Y axes that are "aligned" with your
   binarized object. (Just plot them and you'll see.) The eigenvalues
   measure the spread (i.e. the square of the size) of the object along
   the new X and Y axes (i.e. eigenvalue = size^2 of the projection onto
   the corresponding new axis).
4. Translate one image so that the centroid of its binarized object
   superposes the centroid of the second image.
5. Scale one image with respect to the other based on the ratio of the
   eigenvalues.
6. Then rotate one image so that its eigenvector with the larger
   eigenvalue is aligned with the eigenvector with the larger eigenvalue
   of the second image.
7. Compare this translated, scaled, and rotated version of one image
   with the other (original) image to determine the similarity.
8. NOW, because an eigenvector V is equivalent to the eigenvector -V
   (i.e. +'ve = -'ve), the new X and Y axes could be flipped to -X and
   -Y, so you have to check this. To check, additionally rotate the
   rotated image by 180 degrees and do the image comparison again. Pick
   the higher similarity value as the true similarity value; the smaller
   value corresponds to the 2 images being rotated by 180 degrees
   relative to each other.
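The analysis part of the steps above (1-6) can be sketched as follows, assuming NumPy; it returns the translation, scale, and rotation that would align one image onto the other, and leaves the actual resampling of step 7 (and the 180-degree check of step 8) to an image-processing library. The function names are my own, and like the formula above, C uses unnormalized sums:

```python
import numpy as np

def principal_axes(img):
    """Steps 1-3: centroid, eigenvalues and eigenvectors (principal axes)
    of the covariance matrix of the '1' pixels in a binary image."""
    ys, xs = np.nonzero(img)                  # coordinates of the '1' pixels
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    d = pts - centroid
    C = d.T @ d                               # 2x2 covariance (sums, as above)
    eigvals, eigvecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    return centroid, eigvals, eigvecs

def alignment_params(img_a, img_b):
    """Steps 4-6: translation, scale and rotation aligning img_a onto img_b."""
    ca, va, ea = principal_axes(img_a)
    cb, vb, eb = principal_axes(img_b)
    translation = cb - ca                     # step 4: superpose centroids
    scale = float(np.sqrt(vb[-1] / va[-1]))   # step 5: eigenvalue ~ size^2
    # step 6: angle between the major eigenvectors (largest eigenvalue last)
    angle = lambda v: float(np.arctan2(v[1], v[0]))
    rotation = angle(eb[:, -1]) - angle(ea[:, -1])
    return translation, scale, rotation
```

Note that for a rotationally symmetric object (equal eigenvalues, e.g. a square or disk) the eigenvectors are not unique, so the recovered rotation is arbitrary; the eigenvector sign ambiguity of step 8 applies as well.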
Note:
This is the fastest method for image registration, but it doesn't work
too well unless the mass distributions of your objects are similar.
Otherwise, you have to resort to brute-force methods. Do a search on
"image registration".
Hope this helps!
Cordially,
Jeffrey Tsao
Biomedical Magnetic Resonance Laboratory
University of Illinois at Urbana-Champaign