ISRN Machine Vision 2013
Resection-Intersection Bundle Adjustment Revisited
DOI: 10.1155/2013/261956

Abstract: Bundle adjustment is one of the essential components of the computer vision toolbox. This paper revisits the resection-intersection approach, which has previously been shown to have inferior convergence properties. Modifications are proposed that greatly improve the performance of this method, resulting in a fast and accurate approach. Firstly, a linear triangulation step is added to the intersection stage, yielding higher accuracy and an improved convergence rate. Secondly, the effect of parameter updates is tracked in order to reduce wasteful computation; only variables coupled to significantly changing variables are updated. This leads to significant improvements in computation time, at the cost of a small, controllable increase in error. Loop closures are handled effectively without the need for additional network modelling. The proposed approach is shown experimentally to yield accuracy comparable to a full sparse bundle adjustment (20% error increase), while its computation time scales much better with the number of variables. Experiments on a progressive reconstruction system show the proposed method to be more efficient by a factor of 65 to 177, and 4.5 times more accurate (a margin that increases over time), than a localised sparse bundle adjustment approach.

1. Introduction

The nonlinear error minimisation step commonly referred to as bundle adjustment is a key step in many systems that recover projective geometry from image correspondences. While linear solutions are available in most cases, their accuracy is generally below requirements; linear methods usually serve only as good initialisation for bundle adjustment [1]. Bundle adjustment remains a computationally expensive part of the reconstruction process, despite a number of attempts to make it more efficient [2–4]. Resection-intersection is an implementation of bundle adjustment that interleaves the steps of refining camera parameters (resection) and 3D points (intersection); a minimal sketch of this alternating scheme is given below. In this configuration, each camera is treated as independent of the other cameras, and each point as independent of the other points. While this has a number of advantages that reduce the time spent on each iteration, the convergence rate is slow, primarily because interactions between variables are not accounted for and because of redundancies between the two steps. This paper proposes modifications to the classic resection-intersection algorithm that improve its accuracy and convergence rate and efficiently manage the propagation of parameter updates through a large, weakly overlapping network of cameras. The
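For illustration, the following is a minimal sketch of the classic resection-intersection loop described above. It is not the implementation evaluated in this paper and omits the proposed linear triangulation and update-tracking modifications; the pinhole parameterisation, function names, and use of scipy.optimize.least_squares are assumptions made purely for the example.

```python
# Sketch only: classic resection-intersection, alternating independent
# refinements of cameras (resection) and 3D points (intersection).
# Camera parameterisation [rvec(3), t(3), f] is an illustrative assumption.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(cam, X):
    """Project 3D points X (N,3) with camera cam = [rvec(3), t(3), f]."""
    R = Rotation.from_rotvec(cam[:3]).as_matrix()
    Xc = X @ R.T + cam[3:6]                  # world -> camera coordinates
    return cam[6] * Xc[:, :2] / Xc[:, 2:3]   # perspective division with focal f


def resect(cam, X, uv):
    """Refine one camera from its own observations, points held fixed."""
    return least_squares(lambda c: (project(c, X) - uv).ravel(), cam).x


def intersect(X, cams, uvs):
    """Refine one 3D point from the cameras observing it, cameras held fixed."""
    def residual(p):
        return np.concatenate([(project(c, p[None]) - uv).ravel()
                               for c, uv in zip(cams, uvs)])
    return least_squares(residual, X).x


def resection_intersection(cams, points, obs, n_iters=10):
    """cams: list of (7,) arrays; points: (M,3) array;
    obs[(i, j)] = 2D observation of point j in camera i (every camera and
    point is assumed to have at least one observation)."""
    for _ in range(n_iters):
        for i, cam in enumerate(cams):        # resection: cameras one by one
            js = [j for (ci, j) in obs if ci == i]
            uv = np.array([obs[(i, j)] for j in js])
            cams[i] = resect(cam, points[js], uv)
        for j in range(len(points)):          # intersection: points one by one
            cs = [ci for (ci, pj) in obs if pj == j]
            points[j] = intersect(points[j],
                                  [cams[ci] for ci in cs],
                                  [obs[(ci, j)] for ci in cs])
    return cams, points
```

Because each resection and each intersection sub-problem involves only a handful of parameters, the normal equations stay small, which is the source of the method's low per-iteration cost; the slow convergence noted above stems from ignoring the coupling between cameras and points.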