> Does anybody know of any method to find a point on the
> surface f(x,y,z) = 0 which is closest to a fixed
> point in 3D space?
> Or specifically, I have a surface Axyz+Bxy+Cyz+Dxz+Ex+Fy+Gz+H=0
> and a 3D point (x0,y0,z0), and some restriction Min<x<Max, Min<y<Max,
> Min<z<Max, what is the simple way to find a point on the surface
> that is closest to (x0,y0,z0).
Here is an algorithm which is essentially a gradient descent (of distance)
whose iterates are constrained to the surface. Let P be the fixed 3-vector
whose distance to the surface is to be minimized. Let the surface be given
by F(X) = 0, where X is a 3-vector variable. Assume that F is at least
differentiable. Surface normals are given by N(X) = Grad(F(X))/|Grad(F(X))|,
where Grad() denotes the gradient of F and where |.| denotes the length
of a vector.
(1) Let's handle points which are interior to the surface patch. If M is an
interior point which yields locally extremal distance to P (minimum or
maximum), then the vector P-M is normal to the surface, so it must be parallel
to N(M). Consequently, P-M has no component tangential to the surface. Let
X be a point on the surface (so F(X) = 0). The idea is to "walk towards P"
while remaining on the surface. That is, your velocity should be the
tangential component of P-X given by
T(X) = (P-X) - ((P-X)*N(X)) N(X)
where * indicates dot product of vectors. The path is implicitly determined
by the system of differential equations
dX/dt = T(X), X(0) = X0
where X0 is your initial guess about where M is. The equilibrium solutions
occur when T(M) = 0 (which will include both local minima and maxima).
Theoretically the solution curves to this system must lie on the surface,
but numerically you may end up off the surface. If you get far enough off
the surface, you may want to make a correction to get back on the surface.
If Y0 is a point not on the surface, then a path back to the surface is
implicitly determined by the system of differential equations
dY/dt = -F(Y) N(Y), Y(0) = Y0
For a smooth surface where Grad(F) is never zero, N(Y) is never zero. The
only equilibrium points occur when F(Y) = 0, that is, when you are on the
surface. The minus sign makes sure that curves are "attracted" by points
where F = 0. Both systems of differential equations can be solved by some
standard algorithm, for example, a fourth-order Runge-Kutta method.
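As a concrete illustration, here is a minimal Python sketch of both flows:
the tangential descent dX/dt = T(X) and the correction flow dY/dt = -F(Y) N(Y),
applied to the surface Axyz+Bxy+Cyz+Dxz+Ex+Fy+Gz+H = 0 from the question. It
uses simple Euler steps rather than fourth-order Runge-Kutta, and all step
sizes, iteration counts, and function names are my own choices for the sketch,
not part of the original method description.

```python
import numpy as np

def make_surface(c):
    """Surface F(x,y,z) = c0*xyz + c1*xy + c2*yz + c3*xz
                          + c4*x + c5*y + c6*z + c7 = 0.
    Returns the pair (F, gradF)."""
    def F(X):
        x, y, z = X
        return (c[0]*x*y*z + c[1]*x*y + c[2]*y*z + c[3]*x*z
                + c[4]*x + c[5]*y + c[6]*z + c[7])
    def gradF(X):
        x, y, z = X
        return np.array([c[0]*y*z + c[1]*y + c[3]*z + c[4],
                         c[0]*x*z + c[1]*x + c[2]*z + c[5],
                         c[0]*x*y + c[2]*y + c[3]*x + c[6]])
    return F, gradF

def unit_normal(gradF, X):
    g = gradF(X)
    return g / np.linalg.norm(g)          # N(X) = Grad(F)/|Grad(F)|

def correct_to_surface(F, gradF, Y, steps=50, h=0.1):
    # Euler integration of dY/dt = -F(Y) N(Y): flows Y back onto F = 0.
    for _ in range(steps):
        Y = Y - h * F(Y) * unit_normal(gradF, Y)
    return Y

def closest_point(P, X0, F, gradF, iters=1000, dt=0.05, tol=1e-12):
    # Euler integration of dX/dt = T(X), T(X) = (P-X) - ((P-X)*N(X)) N(X),
    # with a correction back onto the surface after every step.
    P = np.asarray(P, float)
    X = correct_to_surface(F, gradF, np.asarray(X0, float))
    for _ in range(iters):
        N = unit_normal(gradF, X)
        V = P - X
        T = V - np.dot(V, N) * N          # tangential component of P - X
        if np.dot(T, T) < tol:
            break                         # equilibrium: T(M) ~ 0
        X = correct_to_surface(F, gradF, X + dt * T)
    return X
```

For example, the coefficients (0,0,0,0,1,0,0,-1) give the plane x = 1, and the
iteration started on that plane walks to the foot of the perpendicular from P.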
(2) Let's handle the boundary points. In your example the bounding box is
[xmin,xmax]X[ymin,ymax]X[zmin,zmax], a rectangular solid. For the sake of
argument, suppose that the plane x = xmin intersects the surface defined by
F(x,y,z) = 0. To compute the point on the intersection which is closest to
P, it is sufficient to compute the point on the intersection which is
closest to the projection of P onto x = xmin. Now you have the same type
of problem as in (1), but the dimension is reduced by one. The only technical
problem is that the intersection of a face with the surface may have multiple
connected components, so you will need to find the local extrema for each such
component using the method of (1) (now applied to curves rather than surfaces).
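To make the reduced problem concrete, here is a hedged Python sketch for one
face, x = xmin: the curve is g(y,z) = F(xmin,y,z) = 0 inside that face, and we
walk toward Q, the projection of P onto the face, using the same tangential
flow one dimension down. The coefficient layout matches the surface in the
question; the step sizes and names are again my own illustrative choices.

```python
import numpy as np

def closest_on_face_x(P, xmin, c, yz0, iters=1000, dt=0.05, tol=1e-12):
    """Closest point to P on the curve F(xmin, y, z) = 0 in the face x = xmin,
    where F = c0*xyz + c1*xy + c2*yz + c3*xz + c4*x + c5*y + c6*z + c7."""
    Q = np.array([P[1], P[2]], float)   # projection of P onto the face x = xmin

    def g(yz):                          # g(y,z) = F(xmin, y, z)
        y, z = yz
        return (c[0]*xmin*y*z + c[1]*xmin*y + c[2]*y*z + c[3]*xmin*z
                + c[4]*xmin + c[5]*y + c[6]*z + c[7])

    def grad_g(yz):                     # 2D gradient (dF/dy, dF/dz) at x = xmin
        y, z = yz
        return np.array([c[0]*xmin*z + c[1]*xmin + c[2]*z + c[5],
                         c[0]*xmin*y + c[2]*y + c[3]*xmin + c[6]])

    def correct(yz, steps=50, h=0.1):   # flow dY/dt = -g(Y) n(Y) back to g = 0
        for _ in range(steps):
            n = grad_g(yz)
            n = n / np.linalg.norm(n)
            yz = yz - h * g(yz) * n
        return yz

    yz = correct(np.asarray(yz0, float))
    for _ in range(iters):
        n = grad_g(yz)
        n = n / np.linalg.norm(n)
        V = Q - yz
        T = V - np.dot(V, n) * n        # component tangential to the curve
        if np.dot(T, T) < tol:
            break
        yz = correct(yz + dt * T)
    return np.array([xmin, yz[0], yz[1]])
```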
(3) The method described above is a local method, so it may not converge
for some initial guesses. What you can do is select a suitably dense set
of initial guesses of points on the surface, iterate for each guess, and
keep track of those that converge. Hopefully you will have found all the local
extrema, from which you can then select the one with the smallest distance.
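The multi-start strategy can be sketched generically. Here `local_min` stands
for any local solver of the kind described above; its name and signature
(returning a converged surface point, or None on failure) are my invention for
the sketch, not something prescribed by the method.

```python
import numpy as np
from itertools import product

def global_closest(P, local_min, bounds, n=4):
    # Seed an n x n x n grid of initial guesses inside the bounding box,
    # run the local method from each seed, and keep the converged point
    # with the smallest distance to P.
    P = np.asarray(P, float)
    axes = [np.linspace(lo, hi, n) for lo, hi in bounds]
    best, best_d = None, np.inf
    for seed in product(*axes):
        X = local_min(P, np.array(seed))
        if X is None:                   # this seed failed to converge
            continue
        d = np.linalg.norm(P - X)
        if d < best_d:
            best, best_d = X, d
    return best, best_d
```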