Document mosaicing is a process that stitches multiple, overlapping snapshot images of a document together to produce one large, high-resolution composite. The document is slid by hand under a stationary, over-the-desk camera until every part of it has passed through the camera's field of view. As the document is slid under the camera, its motion is coarsely tracked by the vision system. Snapshots are taken periodically so that successive snapshots overlap by about 50%. The system then finds the overlapping pairs and stitches them together repeatedly until the whole document is assembled as a single composite.[1]

Document mosaicing can be divided into four main processes.

Tracking (simple correlation process)

In this process, the motion of the document as it is slid under the camera is coarsely tracked by the system using a simple correlation process. In the first frame, a small patch is extracted from the center of the image to act as a correlation template, as shown in Figure 1. The correlation is computed over a search area in the next frame four times the size of the patch. The peak in the correlation function indicates the motion of the paper. The template is then resampled from this new frame and tracking continues until the template reaches the edge of the document. At that point another snapshot is taken, and the tracking process is repeated until the whole document has been imaged. The snapshots are stored in an ordered list to make it easier to pair the overlapping images in later processing.
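
The following is a minimal sketch of this correlation-based tracking step in Python, assuming OpenCV and NumPy are available; the function name, patch size and search-window handling are illustrative assumptions rather than details taken from the source.

    import cv2
    import numpy as np

    def track_motion(prev_frame, next_frame, patch_size=64):
        """Estimate document motion between two grayscale frames by template correlation."""
        h, w = prev_frame.shape
        cy, cx = h // 2, w // 2
        half = patch_size // 2
        # Template: a small patch extracted from the center of the previous frame.
        template = prev_frame[cy - half:cy + half, cx - half:cx + half]
        # Search window in the next frame: four times the area of the patch
        # (twice the side length), centered on the same position.
        search = next_frame[cy - patch_size:cy + patch_size,
                            cx - patch_size:cx + patch_size]
        # Normalised cross-correlation; the peak indicates the paper's motion.
        response = cv2.matchTemplate(search, template, cv2.TM_CCORR_NORMED)
        _, _, _, peak = cv2.minMaxLoc(response)
        # Convert the peak location into a displacement relative to the template's
        # original position inside the search window.
        dx = peak[0] - (patch_size - half)
        dy = peak[1] - (patch_size - half)
        return dx, dy

In practice the template would be re-extracted from each new frame, as described above, so that tracking follows the document across the page.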

Feature detecting for efficient matching

Feature detection is used to find the transformation that aligns one image with another. There are two main approaches:[2][3]

  • Feature-based approach: motion parameters are estimated from point correspondences. This approach is suitable when there is a plentiful supply of stable, detectable features.
  • Featureless approach: when the motion between the two images is small, the motion parameters are estimated using optical flow; when the motion is large, they are estimated using generalised cross-correlation. This approach, however, is computationally expensive.

Each image is segmented into a hierarchy of columns, lines and words so that organised sets of features can be matched across images. Skew angle estimation and the finding of columns, lines and words are examples of feature detection operations.

Skew angle estimation

First, the angle that the rows of text make with the image raster lines (the skew angle) is estimated; it is assumed to lie in the range ±20°. A small patch of text is selected at random and rotated through this range until the variance of its pixel intensities summed along the raster lines is maximised.[4] See Figure 2.

To ensure that the estimated skew angle is accurate, the system repeats this calculation over many image patches and derives the final estimate as the average of the individual angles, weighted by the variance of the pixel intensities of each patch.
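
A rough Python/NumPy sketch of this estimator is shown below; the angle step, patch selection and SciPy rotation routine are illustrative assumptions, not details from the source.

    import numpy as np
    from scipy.ndimage import rotate

    def estimate_patch_skew(patch, max_angle=20.0, step=0.5):
        """Return the angle (degrees) maximising the variance of row sums, plus that variance."""
        best_angle, best_var = 0.0, -1.0
        for angle in np.arange(-max_angle, max_angle + step, step):
            rotated = rotate(patch.astype(float), angle, reshape=False, order=1)
            row_sums = rotated.sum(axis=1)   # intensities summed along raster lines
            variance = row_sums.var()
            if variance > best_var:
                best_angle, best_var = angle, variance
        return best_angle, best_var

    def estimate_page_skew(patches):
        """Average the per-patch angles, weighted by each patch's row-sum variance."""
        results = [estimate_patch_skew(p) for p in patches]
        angles = np.array([a for a, _ in results])
        weights = np.array([v for _, v in results])
        return float((angles * weights).sum() / weights.sum())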

Columns, lines and words finding

In this operation, the de-skewed document is segmented into a hierarchy of columns, lines and words. Sensitivity to illumination and page coloration is removed by applying a Sobel operator to the de-skewed image and thresholding the output to obtain a binary gradient, de-skewed image.[5] See Figure 3.

The operation can be roughly separated into three steps: column segmentation, line segmentation and word segmentation.

  1. Columns are segmented from the binary gradient, de-skewed image by summing pixels vertically, as shown in Figure 4.
  2. The baselines of each row are then segmented in the same way, but by summing horizontally within each column.
  3. Finally, individual words are segmented by applying the vertical summation within each segmented row (a sketch of these projection-profile steps follows this list).
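
The sketch below illustrates these projection-profile steps in Python/NumPy; the Sobel thresholding value and the run-splitting helper are illustrative assumptions rather than the source's exact procedure.

    import numpy as np
    from scipy.ndimage import sobel

    def binary_gradient(deskewed, threshold=40):
        """Sobel gradient magnitude, thresholded to a binary image (threshold is illustrative)."""
        gx = sobel(deskewed.astype(float), axis=1)
        gy = sobel(deskewed.astype(float), axis=0)
        return (np.hypot(gx, gy) > threshold).astype(np.uint8)

    def segment_runs(profile):
        """Split a 1-D projection profile into runs of non-empty bins."""
        runs, start = [], None
        for i, occupied in enumerate(profile > 0):
            if occupied and start is None:
                start = i
            elif not occupied and start is not None:
                runs.append((start, i))
                start = None
        if start is not None:
            runs.append((start, len(profile)))
        return runs

    # Columns: sum pixels vertically; lines: sum horizontally within each column;
    # words: sum vertically again within each segmented line.
    def segment_columns(binary):
        return segment_runs(binary.sum(axis=0))

    def segment_lines(column_strip):
        return segment_runs(column_strip.sum(axis=1))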

These segmentations are important because the document mosaic is created by matching the lower right corners of words in each overlapping image pair. Moreover, the segmentation operation reliably organizes each image into a hierarchy of columns and rows.

The segmentation operation involves a considerable amount of summing over the binary gradient, de-skewed image. This is made efficient by constructing a matrix of partial sums[6] whose element at (x, y) is the sum of all binary gradient pixels at positions (i, j) with i ≤ x and j ≤ y.

The matrix of partial sums is calculated in one pass through the binary gradient, de-skewed image.[6]
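
A minimal NumPy sketch of such a partial-sum matrix (a summed-area table) and of how block sums are read back from it is given below; the helper names are illustrative.

    import numpy as np

    def partial_sums(binary):
        """Matrix of partial sums, built in one pass with two cumulative sums."""
        return binary.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

    def block_sum(S, top, left, bottom, right):
        """Sum of binary[top:bottom+1, left:right+1], read from the partial-sum matrix S."""
        total = S[bottom, right]
        if top > 0:
            total -= S[top - 1, right]
        if left > 0:
            total -= S[bottom, left - 1]
        if top > 0 and left > 0:
            total += S[top - 1, left - 1]
        return total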

Correspondences establishing

The two images are now organized into a hierarchy of linked lists with the following structure:

  • image=list of columns
  • column=list of rows
  • row=list of words
  • word=length (in pixels)

At the bottom of the structure, the length of each word is recorded. Correspondences between the two images are then established by searching only the corresponding structures for groups of words with matching lengths, which greatly reduces the search.
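
A minimal sketch of this hierarchy as Python dataclasses is shown below; the class and field names are illustrative, not the source's.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Word:
        length: int                      # word length in pixels

    @dataclass
    class Row:
        words: List[Word] = field(default_factory=list)

    @dataclass
    class Column:
        rows: List[Row] = field(default_factory=list)

    @dataclass
    class Image:
        columns: List[Column] = field(default_factory=list)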

Seed match finding

Seed match finding is done by comparing each row in image 1 with each row in image 2. The two rows are compared word by word. If the lengths (in pixels) of two words (one from each image) and of their immediate neighbours agree within a predefined tolerance (5 pixels, for example), the words are assumed to match. Two rows are assumed to match if they share three or more word matches. The seed match finding operation terminates when two consecutive pairs of matching rows have been found.
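
A sketch of this matching logic, assuming the hierarchy classes shown earlier, is given below; the tolerance and the three-word rule follow the description above, while the function names are illustrative.

    TOLERANCE = 5          # pixels, as in the example threshold above
    MIN_WORD_MATCHES = 3

    def words_match(words1, words2, i, j):
        """Word i and word j match if they and both immediate neighbours agree in length."""
        for offset in (-1, 0, 1):
            a, b = i + offset, j + offset
            if a < 0 or b < 0 or a >= len(words1) or b >= len(words2):
                return False
            if abs(words1[a].length - words2[b].length) > TOLERANCE:
                return False
        return True

    def rows_match(row1, row2):
        """Two rows match if they share at least three word matches."""
        matches = 0
        for i in range(len(row1.words)):
            if any(words_match(row1.words, row2.words, i, j)
                   for j in range(len(row2.words))):
                matches += 1
        return matches >= MIN_WORD_MATCHES

    def find_seed(rows1, rows2):
        """Stop as soon as two consecutive pairs of matching rows are found."""
        for i in range(len(rows1) - 1):
            for j in range(len(rows2) - 1):
                if rows_match(rows1[i], rows2[j]) and rows_match(rows1[i + 1], rows2[j + 1]):
                    return i, j
        return None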

Match list building

After a seed match has been found, the next step is to build the match list, which generates the correspondence points between the two images. This is done by searching for matching pairs of rows outwards from the seed rows.

Images mosaicing

Figure 5: Mosaicing of two document images. Blurring is evident in the affine mosaic (b), but not in the mosaic constructed using a plane-to-plane projectivity (a). Close-ups of typical seams of (a) and (b) are shown in (c) and (d) respectively.[1]

Given the list of corresponding points between the two images, the next step is to find the transformation that maps the overlapping portion of one image onto the other. Assuming a pinhole camera model, the transformation between pixels (u, v) of image 1 and pixels (u′, v′) of image 2 is a plane-to-plane projectivity.[7]
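
In a standard formulation of such a plane-to-plane projectivity (written here in generic notation; the source's own Eq. 1 is not reproduced), the mapping is

    u′ = (a·u + b·v + c) / (g·u + h·v + 1)
    v′ = (d·u + e·v + f) / (g·u + h·v + 1)

with eight unknown parameters a–h, which is why four pairs of matching points, each contributing two equations, suffice to determine it.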

The parameters of the projectivity are found from four pairs of matching points. The RANSAC[8] technique is used to reject outlying matches and to estimate the projectivity from the remaining good matches.

The projectivity is then fine-tuned using correlation at the corners of the overlapping portion to obtain four correspondences to sub-pixel accuracy. Image 1 is then transformed into image 2's coordinate system using Eq. 1. A typical result of the process is shown in Figure 5.
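
As an illustration of this estimation step, the sketch below uses OpenCV's RANSAC-based homography fitting and perspective warp; the compositing rule and canvas size are simplifying assumptions, not the source's procedure.

    import cv2
    import numpy as np

    def mosaic_pair(image1, image2, points1, points2):
        """Warp image1 into image2's frame using a RANSAC-estimated projectivity."""
        pts1 = np.float32(points1).reshape(-1, 1, 2)
        pts2 = np.float32(points2).reshape(-1, 1, 2)
        # Robustly estimate the plane-to-plane projectivity, rejecting outlying matches.
        H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, ransacReprojThreshold=3.0)
        # Transform image1 into image2's coordinate system (canvas limited to image2 here).
        h, w = image2.shape[:2]
        warped = cv2.warpPerspective(image1, H, (w, h))
        # Simple compositing: keep image2's pixels where present, fill the rest from the warp.
        mosaic = np.where(image2 > 0, image2, warped)
        return mosaic, H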

Many images compositing

Finally, the whole page composition is built up by mapping all the images into the coordinate system of an "anchor" image, which is normally the one nearest the page center. The transformations to the anchor frame are calculated by concatenating the pair-wise transformations found earlier. The raw document mosaic is shown in Figure 6.
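
A small sketch of this concatenation is given below, assuming the pairwise projectivities are stored as 3×3 NumPy matrices keyed by image-index pairs (an illustrative data layout).

    import numpy as np

    def to_anchor(pairwise, path):
        """Compose pairwise homographies along a path of image indices ending at the anchor.

        pairwise[(i, j)] maps image i's coordinates into image j's frame;
        path is e.g. [3, 2, 1, 0] to map image 3 via images 2 and 1 into anchor image 0.
        """
        H = np.eye(3)
        for src, dst in zip(path[:-1], path[1:]):
            H = pairwise[(src, dst)] @ H
        return H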

However, non-consecutive images may also overlap. This is handled by building hierarchical sub-mosaics. As shown in Figure 7, image 1 and image 2 are registered, as are image 3 and image 4, creating two sub-mosaics. These two sub-mosaics are then stitched together in a further mosaicing step.

Applied areas

Document mosaicing can be applied in various areas, such as:

  • Text segmentation of images of documents[5]
  • Document Recognition[4]
  • Interaction with paper on the digital desk[9]
  • Video mosaics for virtual environments[10]
  • Image registration techniques[3]


References

  1. Zappalá, Anthony; Gee, Andrew; Taylor, Michael (1999). "Document mosaicing". Image and Vision Computing. 17 (8): 589–595. doi:10.1016/S0262-8856(98)00178-4.
  2. Mann, S.; Picard, R. W. (1995). "Video orbits of the projective group: A new perspective on image mosaicing". Technical Report (Perceptual Computing Section), MIT Media Laboratory (338). CiteSeerX 10.1.1.56.6000.
  3. Brown, L. G. (1992). "A survey of image registration techniques". ACM Computing Surveys. 24 (4): 325–376. CiteSeerX 10.1.1.35.2732. doi:10.1145/146370.146374. S2CID 14576088.
  4. Bloomberg, Dan S.; Kopec, Gary E.; Dasari, Lakshmi (1995). "Measuring document image skew and orientation" (PDF). In Vincent, Luc M; Baird, Henry S (eds.). Document Recognition II. Proceedings of the SPIE. Vol. 2422. pp. 302–315. Bibcode:1995SPIE.2422..302B. doi:10.1117/12.205832. S2CID 5106427.
  5. Taylor, M. J.; Zappala, A.; Newman, W. M.; Dance, C. R. (1999). "Documents through cameras". Image and Vision Computing. 17 (11): 831–844. doi:10.1016/S0262-8856(98)00155-3.
  6. Preparata, F. P.; Shamos, M. I. (1985). Computational Geometry: An Introduction. Monographs in Computer Science. Springer–Verlag. ISBN 9780387961316.
  7. Mundy, J. L.; Zisserman, A. (1992). "Appendix: Projective geometry for machine vision". Geometric Invariance in Computer Vision. Cambridge, MA: MIT Press. CiteSeerX 10.1.1.17.1329. ISBN 9780262132855.
  8. Fischler, Martin A.; Bolles, Robert C. (1981). "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography" (PDF). Communications of the ACM. 24 (6): 381–395. doi:10.1145/358669.358692. S2CID 972888.
  9. Wellner, P. (1993). "Interacting with paper on the digital desk". Communications of the ACM. 36 (7): 87–97. CiteSeerX 10.1.1.53.7526. doi:10.1145/159544.159630. S2CID 207174911.
  10. Szeliski, R. (1996). "Video mosaics for virtual environments". IEEE Computer Graphics and Applications. 16 (2): 22–30. doi:10.1109/38.486677.
