Claims:

1. A method for real time mapping of computer generated images when projected onto a display for presenting non-distorted images to a viewer in viewer space by predistorting the image on a projector image source in projector space, the projector space being divided into a plurality of spans with each of the spans encompassing a plurality of pixels, the method comprising the steps of:

determining coordinates identifying the location of each of the spans in projector space;

obtaining the transfer characteristics of a lens for projecting an image from projector space onto the display;

determining the coordinates of the display with respect to projector space using the lens transfer characteristics;

computing, for the coordinates identifying the location of each span, corresponding projected coordinates on the display;

computing the apparent location to the viewer of the projected coordinates on the display;

transforming the apparent location of selected points in viewer space to points in projector space;

determining the spans in which the selected points are located in projector space;

creating a pre-distorted image in projector space using the projected coordinates of the spans in viewer space; and

projecting the pre-distorted image on the display.
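The projector-to-viewer mapping recited in claim 1 can be sketched in code. The sketch below assumes an ideal, distortion-free lens, a flat viewscreen perpendicular to the projector axis, and a fixed viewer eyepoint; all names (`project_to_screen`, `apparent_direction`, `SCREEN_Z`, `VIEWER`) are illustrative and not drawn from the claims.

```python
import math

# Illustrative sketch of claim 1's forward mapping: span corners in
# projector space -> strike points on the display -> apparent location
# to the viewer. Assumes an ideal lens and a flat screen at z = SCREEN_Z.

SCREEN_Z = 10.0                   # flat screen, perpendicular to projector axis
VIEWER = (1.0, 0.5, 0.0)          # viewer eyepoint, offset from the projector

def project_to_screen(u, v):
    """Map a projector-raster point (u, v) to its strike point on the screen.

    With an ideal lens the output ray direction is simply (u, v, 1);
    a ray from the origin strikes the plane z = SCREEN_Z at t = SCREEN_Z.
    """
    t = SCREEN_Z
    return (u * t, v * t, SCREEN_Z)

def apparent_direction(p):
    """Unit direction of screen point p as seen from the viewer
    (claim 1's 'apparent location to the viewer')."""
    dx, dy, dz = p[0] - VIEWER[0], p[1] - VIEWER[1], p[2] - VIEWER[2]
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / n, dy / n, dz / n)

# Map the four corners of one span from projector space to viewer space.
span_corners = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
mapped = [apparent_direction(project_to_screen(u, v)) for u, v in span_corners]
```

A real system would replace the ideal-lens ray with the measured lens transfer characteristics and the flat plane with the actual display geometry.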

2. A method for generating an apparent non-distorted image on a viewscreen from a computer generated image source, the image being projected from a projector raster to a viewscreen oriented such that the projector raster appears distorted to the viewer, the method comprising the steps of:

(A) translating selected points from the raster to a projection plane;

(B) mapping the selected points on the projection plane to corresponding points on the viewscreen;

(C) determining for the selected points mapped on the viewscreen their apparent location to a viewer; and

(D) mapping other selected points from viewer space to projector space; and

(E) distorting the image in projector space using the computed locations of the selected points and other selected points for presenting a non-distorted image to the viewer.

3. A method for presenting non-distorted images to a randomly oriented viewer, the images being developed on a planar projection source in projection space and projected onto a viewscreen oriented such that a projector raster appears distorted to the viewer, each image being defined by a plurality of vertices, comprising the steps of:

(A) dividing the raster on the projection source into a plurality of spans defined by corners thereof, each span encompassing a plurality of pixels;

(B) mapping the span corners onto the viewscreen;

(C) computing the apparent location to the viewer of each of the mapped span corners;

(D) determining locations on the viewscreen of image vertices;

(E) mapping vertices on the viewscreen to corresponding locations on the projection source;

(F) locating, from step (E) of mapping, the spans in which each of the vertices is positioned;

(G) constructing an image on the projection source using the mapped span corners in viewer space and the mapped vertices in projector space; and

(H) projecting the constructed image onto the viewscreen.

4. The method of claim 3 wherein step (G) of constructing an image comprises bilinear interpolation of image parameters at each pixel location from values computed at the span corners.
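The bilinear interpolation of claim 4 can be sketched as follows. Corner naming and the `[0, 1]` fractional coordinates are illustrative assumptions.

```python
def bilerp(c00, c10, c01, c11, fx, fy):
    """Bilinearly interpolate an image parameter at fractional position
    (fx, fy) inside a span from values computed at its four corners
    (claim 4). c00/c10 are the upper corners, c01/c11 the lower corners;
    fx and fy lie in [0, 1]."""
    top = c00 + (c10 - c00) * fx          # interpolate along the top edge
    bottom = c01 + (c11 - c01) * fx       # interpolate along the bottom edge
    return top + (bottom - top) * fy      # then between the two edges
```

Evaluating `bilerp` at each pixel's fractional position within its span yields the per-pixel parameter from only four per-span corner computations.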

5. The method of claim 4 and including the steps of:

dividing each pixel into a matrix of subpixels;

defining the image parameters for each subpixel; and

averaging the image parameters of all subpixels in each pixel for obtaining image parameters for each pixel.
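The subpixel averaging of claim 5 can be sketched as a small loop. The `sample` callback and the default 4-by-4 matrix size are illustrative assumptions.

```python
def pixel_value(sample, n=4):
    """Average an image parameter over an n-by-n matrix of subpixels
    (claim 5). `sample(sx, sy)` returns the parameter at a subpixel
    centre, with (sx, sy) in [0, 1) pixel coordinates."""
    total = 0.0
    for j in range(n):
        for i in range(n):
            # evaluate the parameter at the centre of each subpixel
            total += sample((i + 0.5) / n, (j + 0.5) / n)
    return total / (n * n)
```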

6. A method for real time mapping of points from projector space to viewer space and from viewer space to projector space as one step in the computer image generation process of producing a predistorted image on a projector raster which, when projected onto a nonlinear screen through a wide-angle lens with high distortion, will appear correct to the viewer, comprising the steps of:

measuring the transfer characteristics of the lens to be used;

mapping points from projector space to viewer space by:

using the measured lens characteristics to define a projector output ray for each point;

using geometric relationships to determine where the projector ray strikes the screen;

determining geometrically where the ray strike point appears to the viewer; and

mapping points from viewer space to projector space by:

determining geometrically a view ray from the viewer to each ray strike point on the screen;

using geometric relationships to determine where this ray strikes the screen;

determining geometrically where a line from the center of the lens to the view ray strike point strikes a hypothetical projector output plane; and

utilizing the measured lens characteristics to determine the point on the projector raster necessary to produce a projector ray passing through the line strike point on the hypothetical projector output plane.
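Claim 6's use of measured lens transfer characteristics can be sketched as a lookup table mapped in both directions. The sample table values and the choice of linear interpolation are illustrative assumptions; a real system would use the lens's measured data.

```python
import bisect

# Illustrative model of measured lens transfer characteristics as a
# radially symmetric table: normalized raster radius -> output-ray angle.
RADII  = [0.0, 0.25, 0.5, 0.75, 1.0]   # normalized raster radius
ANGLES = [0.0, 0.35, 0.75, 1.20, 1.60] # output-ray angle (radians)

def raster_to_angle(r):
    """Forward mapping (projector space to viewer space): raster radius
    to projector output-ray angle, by linear interpolation in the table."""
    i = max(1, bisect.bisect_left(RADII, r))
    f = (r - RADII[i - 1]) / (RADII[i] - RADII[i - 1])
    return ANGLES[i - 1] + f * (ANGLES[i] - ANGLES[i - 1])

def angle_to_raster(a):
    """Inverse mapping (viewer space back to projector space): the same
    table searched the other way, as in claim 6's final step."""
    i = max(1, bisect.bisect_left(ANGLES, a))
    f = (a - ANGLES[i - 1]) / (ANGLES[i] - ANGLES[i - 1])
    return RADII[i - 1] + f * (RADII[i] - RADII[i - 1])
```

Because the two functions invert one another, a point can be round-tripped between projector space and viewer space, which is the core of the bidirectional mapping the claim recites.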

7. In an image generating system of the type for converting digital data into a sequence of display frames of image data in projector space suitable for display on a video image system in viewer space, the image system forming a display by individually illuminating each of a plurality of color pixels, each of the frames of image data defining a plurality of faces and each of the frames being divided into a plurality of spans, including an electronic control means for converting the digital data, a method for correcting for geometric distortion and optical distortion comprising the steps of:

(a) identifying data for a frame of display, the data defining face locations in viewer space, each of the faces associated with at least one span and being arranged in a descending order of priority;

(b) calculating transformation coefficients for mapping projector space span corners into viewer space;

(c) determining the highest priority face for each span;

(d) determining an area within a viewer space span covered by the highest priority face;

(e) computing pixel image data representative of the pixels within the projector space span covered by the face;

(f) repeating step (c) through step (e) until the last face is processed into pixel image data or until all areas of the spans are fully covered by faces; and

(g) transferring the pixel image data to the video image system.
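The priority-ordered span loop of claim 7 can be sketched as follows. The data layout (a coverage fraction per span, faces as flat tuples) is an illustrative assumption; the sketch shows only the occlusion logic, not the pixel computation.

```python
# Illustrative sketch of claim 7 steps (c)-(f): faces are visited in
# descending priority order, and each span accumulates coverage until it
# is fully covered or the faces are exhausted.

def process_spans(spans, faces):
    """spans: dict span_id -> uncovered fraction (1.0 = empty).
    faces: list of (priority, span_id, area), highest priority first.
    Returns, per span, the (priority, area) contributions actually used."""
    out = {s: [] for s in spans}
    for priority, span_id, area in faces:
        free = spans[span_id]
        if free <= 0.0:
            continue                   # span already fully covered; skip face
        used = min(area, free)         # only the still-uncovered part counts
        out[span_id].append((priority, used))
        spans[span_id] = free - used
    return out
```

A lower-priority face contributes only the area left uncovered by higher-priority faces, which is why the claim orders faces by descending priority.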

8. The method of claim 7, wherein the step (b) of calculating, further comprises the step of predetermining the transformation coefficients for a mapping for a fixed projection of the pixel image for a fixed view.

9. The method of claim 7, wherein the step (b) of calculating, further comprises the step of computing a variable mapping for a moving viewer or projection relative to the video image system.

10. The method of claim 7, wherein the step (c) of determining further comprises the step of determining an edge of the highest priority face for a projector space span.

11. The method of claim 10, wherein the step of determining an edge, further comprises the steps of:

(a) identifying data for a face, the data representative of an edge of a face in viewer space, each edge having a starting and a terminating vertex and a slope;

(b) locating the edge vertices in viewer space coordinates;

(c) transforming the edge vertices from viewer space to projector space for defining a starting projector space span and a terminating projector space span for an edge of the face;

(d) transforming the projector space span corners to viewer space span corners;

(e) computing a perpendicular distance in viewer space from the edge of the face to the viewer space span corners;

(f) determining from the viewer space distance a subsequent projector space span intersected by the edge of the face;

(g) storing in a memory each projector space span intersected by the edge;

(h) storing where the edge intersects the span boundaries in projector space;

(i) repeating step (d) of transforming through step (h) of storing until the last edge of the face is processed; and

(j) repeating step (a) through step (c) using the next face, until all the faces are processed.

12. The method of claim 11, wherein the step (f) of determining further comprises the step of searching along the edges of a face in a clockwise direction, beginning with a first edge of a face, for each span that an edge of a face passes through.

13. The method of claim 11, wherein the step (e) of computing further comprises the steps of:

(a) inputting, into the address lines of a read only memory, data representative of the slope of an edge and the endpoints;

(b) calculating the perpendicular distance from the corner of each span according to the formula D=LO+LI*I+LJ*J, where:

D is the perpendicular distance from a point (I,J) to an edge;

LO is an initial predetermined distance from a fixed reference point such as I=0 and J=0;

LI is the cosine of the edge slope; and

LJ is the sine of the edge slope; and

(c) outputting, from the data lines of the read only memory, data representative of the perpendicular distance from the corner of a span to an edge.
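The distance formula of claim 13 can be sketched directly. The parameter names follow the claim (LO, LI, LJ); expressing the slope as an angle `theta` is an illustrative assumption.

```python
import math

def edge_distance(i, j, l0, theta):
    """Perpendicular distance from span corner (I, J) to an edge, per
    claim 13's formula D = LO + LI*I + LJ*J, with LI = cos(theta) and
    LJ = sin(theta), theta being the edge slope angle and l0 the
    predetermined distance from the fixed reference point I=0, J=0."""
    return l0 + math.cos(theta) * i + math.sin(theta) * j
```

In the claimed hardware this evaluation is performed by a read only memory addressed by the edge slope and endpoints; the arithmetic itself is the same.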

14. The method of claim 11, further comprising the steps of:

(a) determining a projector space span to be processed based upon data representative of the area covered by previously processed faces;

(b) recalling from memory the edges of the highest priority non-processed face intersecting a projector space span;

(c) determining the portions of the span covered by the face defined by the recalled edges; and

(d) repeating steps (a) of determining through step (c) of determining until all spans are covered or until all faces are processed.

15. The method of claim 14, wherein step (c) of determining the portions of the span further comprises the steps of:

(a) recalling from memory where the edge intersects the span boundaries in projector space;

(b) calculating new edge coefficients for the projector space span;

(c) determining a pixel of a projector space span intersected by an edge of a face;

(d) dividing the pixel into a plurality of subpixel areas;

(e) computing a distance from the center of each subpixel area to the edge;

(f) determining the area of the subpixel areas covered by the face bounded by the edge;

(g) computing a weight for each subpixel area from the area covered by the face and the color of the face;

(h) repeating step (d) of dividing through step (g) of computing for each edge through the pixel;

(i) summing the weighted values for each pixel;

(j) generating a color for each pixel corresponding to the summed weighted values; and

(k) repeating step (c) of determining through step (j) of generating for each pixel intersected by an edge in the span.
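The subpixel weighting and summation of claim 15 (with the weight of claim 23: covered area times face color) can be sketched as follows. The flat data layout and the 16-subpixel default are illustrative assumptions, and a single scalar color channel stands in for full RGB.

```python
# Illustrative sketch of claim 15 steps (g) and (i): each subpixel area
# carries a weight equal to the fraction covered by a face times that
# face's color (claim 23); the weights are summed into the pixel color.

def shade_pixel(subpixel_coverages, n=16):
    """subpixel_coverages: one (covered_fraction, color) pair per subpixel.
    n: number of subpixel areas per pixel. Returns the blended color."""
    total = 0.0
    for covered, color in subpixel_coverages:
        total += covered * color    # claim 23: weight = covered area * color
    return total / n                # normalize over the subpixel grid
```

A face covering half of every subpixel therefore contributes half its color to the pixel, which is the antialiasing behavior the subpixel subdivision is for.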

16. The method of claim 15, wherein the step (i) of summing further comprises the steps of:

(a) receiving data representative of a translucency of a face in a subpixel area; and

(b) summing the translucency of the subpixel area of the face with the weighted value for the subpixel to obtain a translucency weighted value.

17. The method of claim 15, wherein the step (i) of summing further comprises the step of receiving data representative of a translucency of a face in a subpixel area based upon input from an external logic means.

18. The method of claim 15, wherein the step (e) of computing, further comprises the step of computing the distance from the center of a subpixel area to the edge by bilinear interpolation of the distances from the corners of the pixel to the edge.

19. The method of claim 15, wherein the step (f) of determining, further comprises the step of determining the subpixel areas assigned to the face as a function of the distance to the edge and slope of the edge.

20. The method of claim 15, wherein the step (e) of computing, further comprises the steps of:

(a) inputting the distance from each pixel to the edge, and the edge slope, into a read only memory having address lines for input and memory lines for output;

(b) calculating the perpendicular distance from the corner of each span according to the formula D=LO+LI*I+LJ*J, where:

D is the perpendicular distance from a point (I,J) to an edge;

LO is an initial predetermined distance from a fixed reference point such as I=0 and J=0;

LI is the cosine of the edge slope; and

LJ is the sine of the edge slope; and

(c) determining the subpixel areas assigned to the face from the distance to the edge and the slope of the edge; and

(d) outputting the subpixel areas assigned to the face on the memory lines of the read only memory.

21. The method of claim 20, further comprising the step of determining the accuracy of a subpixel area assigned to the face by a total area accuracy of at least one-half subpixel area.

22. The method of claim 21, further comprising the step of determining the accuracy of a subpixel area assigned to the face by a positional accuracy of at least one subpixel.

23. The method of claim 15, wherein the step (g) of computing, further comprises the step of computing a weight for each subpixel area equal to the area covered by a face times the color.

24. The method of claim 16, further comprising the step of receiving data representative of a translucency of a face in a subpixel area based upon a programmatic event representative of smoke, fog, environmental events, and events characteristic of simulated warfare.

25. The method of claim 7, wherein the step (g) of transferring, further comprises the steps of:

(a) receiving data for a pixel, the data including a haze control, an illumination control and a texture control;

(b) computing a color contribution for the pixel image data from the data and the previous pixel image data; and

(c) transferring the color contribution of the pixel image data to the video display.
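The per-pixel color contribution of claim 25 can be sketched as a simple modulate-and-blend. This is a partial sketch: it applies the texture, illumination, and haze controls to a single scalar channel, omits the previous pixel image data term recited in step (b), and the `haze_color` constant is an illustrative assumption.

```python
def color_contribution(base, texture, illumination, haze, haze_color=0.5):
    """Illustrative per-pixel modulation in the spirit of claim 25: the
    face color is modulated by texture and illumination controls, then
    blended toward a haze color by the haze control. All controls are
    assumed to lie in [0, 1]."""
    lit = base * texture * illumination         # texture + illumination controls
    return lit * (1.0 - haze) + haze_color * haze  # haze control blend
```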

26. In an image generating system of the type for converting digital data into a sequence of display frames of image data in projector space suitable for display on a video image system in viewer space, the image system forming a display by individually illuminating each of a plurality of color pixels, each of the frames of image data defining a plurality of faces and each of the frames being divided into a plurality of spans, including an electronic control means for converting the digital data, a method for correcting for geometric distortion and optical distortion comprising the steps of:

(a) identifying data for a frame of display, the data defining face locations in viewer space, each of the faces associated with at least one span;

(b) calculating transformation coefficients for mapping projector space span corners into viewer space;

(c) determining a face to be processed for each span;

(d) determining an area within a viewer space span covered by said face;

(e) computing pixel image data representative of the pixels within the projector space span covered by said face;

(f) repeating step (c) through step (e) until a last face is processed into pixel image data; and

(g) transferring the pixel image data to the video image system.