
ORIGINALS

    B. Wieneke

    Stereo-PIV using self-calibration on particle images

Received: 25 October 2004 / Revised: 14 February 2005 / Accepted: 1 March 2005 / Published online: 26 May 2005
© Springer-Verlag 2005

Abstract A stereo-PIV (stereo particle image velocimetry) calibration procedure has been developed based on fitting a camera pinhole model to the two cameras using single or multiple views of a 3D calibration plate. A disparity vector map is computed on the real particle images by cross-correlation of the images from cameras 1 and 2 to determine if the calibration plate coincides with the light sheet. From the disparity vectors, the true position of the light sheet in space is fitted and the mapping functions are corrected accordingly. It is shown that it is possible to derive accurate mapping functions, even if the calibration plate is quite far away from the light sheet, making the calibration procedure much easier. A modified 3-media camera pinhole model has been implemented to account for index-of-refraction changes along the optical path. It is then possible to calibrate outside closed flow cells and self-calibrate onto the recordings. This method allows stereo-PIV measurements to be taken inside closed measurement volumes, which was not previously possible. From the computed correlation maps, the position and thickness of the two laser light sheets can be derived to determine the thickness, degree of overlap and the flatness of the two sheets.

    1 Introduction

For stereo-PIV (stereo particle image velocimetry), correct calibration is an essential prerequisite for measuring the three velocity components accurately. Most often, an empirical approach is used by placing a planar calibration target with a regularly spaced grid of marks at exactly the position of the light sheet and moving the target by a specified amount in the out-of-plane direction to two or more z positions (Soloff et al. 1997). At each z position (light sheet plane defined by z=0), a calibration function with sufficient degrees of freedom maps the world xy plane to the camera planes, while the difference between the z planes provides the z derivatives of the mapping function necessary for reconstructing the three velocity components. This empirical approach has the advantage that all image distortions arising from imperfect lenses or light path irregularities (e.g. from air/glass/water interfaces) are compensated for automatically in one step.

Alternatively, one can use a 3D calibration plate with marks on two z levels, avoiding the need for rigid mechanical setups with accurate translation stages. Different mapping functions have been used, from a second-order or third-order polynomial in x and y (Soloff et al. 1997) to functions derived from the perspective equations (camera pinhole model) (Willert 1997). A major drawback of this empirical method is the need to position the calibration plate exactly at the same position as the light sheet, which is often very difficult to accomplish. A correction scheme based on a cross-correlation between the images of cameras 1 and 2 has been proposed (Willert 1997; Coudert and Schon 2001), which is also the basis for the calibration correction method proposed in this work.

In parallel, especially with the advent of inexpensive digital cameras, extensive work has been done in the field of computer vision and photogrammetry to compute accurate camera calibrations. While only a 2D mapping function with additional z derivatives is required for stereo-PIV with thin light sheets, computer vision, in general, requires a volume mapping function to map all xyz world points to the recorded xy pixel locations on one or more cameras. Usually, this is done with a camera pinhole model with added parameters for lens distortions (Tsai 1986).

There are six external projective parameters mapping the calibration plate by a rotation and translation to the world camera plane perpendicular to the optical axis, and internal camera parameters, like the focal length, the principal point, which is the foot point of the optical axis onto the CCD, the pixel size and radial lens distortion terms. The optical axis is defined as the line perpendicular to the CCD chip passing through the pinhole.

B. Wieneke, LaVision GmbH, Anna-Vandenhoeck-Ring 19, 37081 Göttingen, Germany. E-mail: [email protected]

Experiments in Fluids (2005) 39: 267–280. DOI 10.1007/s00348-005-0962-z

A variety of 2D and 3D calibration targets have been used successfully. A common calibration method consists of recording a known planar calibration target at a few (4–8) shifted and rotated positions. This is done either by moving a single camera or a stereo rig with two cameras around a fixed target (walk-around problem) or by having the cameras fixed and moving the target. All fixed parameters (internal camera parameters and the relative position and orientation of the two cameras), together with all parameters unique to the particular view, are fitted by a nonlinear least-squares fit (bundle adjustment). Since the external parameters unique to each view are independent of each other, the Hessian matrix is very sparsely populated, and special provisions are incorporated in the fit algorithm for better numerical convergence, less computing time and higher accuracy. A good overview of self-calibration methods and bundle adjustment fits is given by Hartley and Zisserman (2000).

In the present work, a stereo-PIV calibration method has been implemented and tested using the camera pinhole model. Instead of requiring a perfect alignment between the calibration plate and the light sheet, a correction scheme has been developed that provides accurate mapping functions, even when the calibration plate is quite far away or tilted relative to the light sheet. This makes the stereo-PIV calibration easier and more accurate. Similar work has been done before on a setup with telecentric lenses (Fournel et al. 2003) and with standard lenses with a Scheimpflug adapter (Fournel et al. 2004).

An important application of self-calibration is the case where it is difficult or even impossible to place a calibration plate inside a closed measurement volume. In these cases, one would ideally like to calibrate the cameras outside the flow apparatus and self-calibrate onto the light sheet inside. For these cases, different strategies are presented in Sect. 4, together with experimental validation.

    2 Self-calibration method

    2.1 Camera pinhole model

For stereo-PIV, two mapping functions need to be determined: M1 for camera 1 and M2 for camera 2, relating a world coordinate Xw=(Xw, Yw, Zw) to pixel locations x1=(x1, y1) and x2=(x2, y2) in the recorded images of cameras 1 and 2 (Fig. 1):

$x_1 = M_1(X_w)$ and $x_2 = M_2(X_w)$    (1)

In the empirical approach with a calibration plate at two or more z positions (Soloff et al. 1997), it is sufficient to know M(Xw, Yw, Zw=0) and the z derivatives dxi/dZw and dyi/dZw, i=1, 2, of the mapping function at (Xw, Yw, Zw=0), which do not change significantly across the thickness of the light sheet.

In contrast, the camera pinhole model provides a complete mapping of the volume as given by Eq. 1. The camera pinhole model used is based on Tsai's 11-parameter model (Tsai 1986). The six external camera parameters are given by the rotation R and translation T of the world coordinates Xw to the camera coordinates Xc=(Xc, Yc, Zc):

$X_c = R \cdot X_w + T$    (2)

The undistorted and distorted camera positions xu=(xu, yu) and xd=(xd, yd), respectively, are computed by:

$x_u = f \, X_c / Z_c, \quad y_u = f \, Y_c / Z_c$    (3)

with:

$x_d = x_u (1 + k_1 r + k_2 r^2), \quad y_d = y_u (1 + k_1 r + k_2 r^2)$    (4)

and:

$r^2 = x_d^2 + y_d^2$

where k1 and k2 are the first-order and second-order radial distortion terms, respectively. Usually, for good quality lenses, k1 is sufficient; for wide-angle lenses or consumer cameras, additional radial terms and even

Fig. 1 Camera pinhole model: the world coordinate system (Xw, Yw, Zw) is mapped by the rotation R and translation T to the camera system (Xc, Yc, Zc), characterised by the optical axis, the principal point (x0, y0) and the focal length f
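As an illustration of Eqs. 2–4, a minimal numerical sketch of the projection chain could look as follows. This is not the author's implementation; the conversion from sensor coordinates to pixels via a hypothetical pixel_size and principal point (x0, y0), and evaluating the distortion factor at the undistorted radius, are simplifying assumptions.

```python
import numpy as np

def project_pinhole(Xw, R, T, f, k1=0.0, k2=0.0, x0=0.0, y0=0.0, pixel_size=1.0):
    """Sketch of the pinhole projection of Eqs. 2-4 (hypothetical helper)."""
    Xc = R @ np.asarray(Xw, dtype=float) + T   # Eq. 2: world -> camera coordinates
    xu = f * Xc[0] / Xc[2]                     # Eq. 3: perspective projection
    yu = f * Xc[1] / Xc[2]
    r = np.hypot(xu, yu)                       # assumption: use the undistorted radius
    d = 1.0 + k1 * r + k2 * r**2               # radial distortion factor (Eq. 4)
    xd, yd = xu * d, yu * d
    # convert metric sensor coordinates to pixel coordinates (assumed form)
    return x0 + xd / pixel_size, y0 + yd / pixel_size
```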



Firstly, the disparity vector map of the two dewarped images is computed by a standard cross-correlation PIV technique. The particle pattern inside an interrogation window looks quite different when viewed from the two camera viewpoints, since the particles are dispersed throughout the light sheet. Therefore, it is usually insufficient to simply correlate a single image pair. Instead, an ensemble-averaging algorithm is used by summing the correlation planes of many image pairs (Meinhart et al. 1999). Depending on the particle density and the thickness of the light sheet, about 5–50 images are typically needed to compute an accurate disparity map from a well-shaped correlation peak. For large fields of view (e.g. in wind tunnels), a single view might be sufficient, which offers the potential to correct vibrational displacements of the laser sheet or the cameras (Willert 1997). Multi-pass algorithms with deformed interrogation windows can be applied to further enhance the accuracy of the vector map. This means that all n images are processed first to arrive at an initial guess of the disparity vector map, which is then used as a reference vector field to shift and deform the interrogation windows in the next pass, and so on.
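A minimal sketch of this ensemble-averaging (sum-of-correlation) step is given below. It returns only the integer-pixel peak location, without the sub-pixel peak fit or the window deformation described above, and the function name and interface are assumptions, not the author's code.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftshift

def ensemble_disparity(windows_cam1, windows_cam2):
    """Sum the correlation planes of many image pairs for one interrogation
    window position and return the integer-pixel disparity (dy, dx)."""
    acc = None
    for w1, w2 in zip(windows_cam1, windows_cam2):
        a = w1 - w1.mean()
        b = w2 - w2.mean()
        corr = fftshift(np.real(ifft2(fft2(a) * np.conj(fft2(b)))))
        acc = corr if acc is None else acc + corr   # ensemble sum of correlation planes
    peak = np.array(np.unravel_index(np.argmax(acc), acc.shape))
    return peak - np.array(acc.shape) // 2          # peak offset from the window centre
```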

Willert (1997) used these disparity vectors to correct the position where the vectors are computed. For small misalignments, this effectively removes the main error source of computing the vectors of cameras 1 and 2 at different positions. In the case of larger misalignments between the plate and the light sheet, a more advanced volume correction scheme must be used to compute a correct coordinate system of the light sheet plane with accurate spatial derivatives of the mapping function, as explained below.

Once the disparity map has been computed, a corresponding world point in the measurement plane is computed by a standard triangulation method for each vector (Fig. 2). Errors in the disparity vectors mean that the two reprojected lines from each camera do not exactly intersect at a single world point. It is optimal to find a point in space whose projections onto the two camera images are closest to the measured positions (Hartley and Sturm 1994). These criteria can be used to eliminate false disparity vectors using a sensible threshold (e.g. 0.5 pixels). Instead of working with dewarped images, one can also compute the disparities between the original images and use them in the triangulation step to arrive at the same world points. The advantage of using dewarped images is only that the PIV user can check for remaining disparities (non-zero vectors) more easily. Triangulation is only possible when one has a volume mapping function, but it need not necessarily be a pinhole mapping function. It can also be an accurate empirical third-order 2D polynomial function calculated for a number of parallel z planes, which cover enough of the volume to incorporate the laser light sheet to be fitted.
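For illustration, a simple midpoint triangulation of one disparity pair is sketched below. It assumes the camera centres and back-projected ray directions have already been obtained from the mapping functions, and it uses the midpoint between the two closest ray points rather than the optimal method of Hartley and Sturm (1994).

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Given camera centres c1, c2 and unit ray directions d1, d2 (one matched
    point pair), return the midpoint between the closest points of the two rays
    and the ray-to-ray distance as a simple triangulation-error measure."""
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # close to zero for (near-)parallel rays
    s = (b * e - c * d) / denom           # parameter of the closest point on ray 1
    t = (a * e - b * d) / denom           # parameter of the closest point on ray 2
    p1, p2 = c1 + s * d1, c2 + t * d2
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)
```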

A plane is then fitted through the world points in 3D space, and the mapping functions of cameras 1 and 2 are corrected by a corresponding transformation such that the fitted measurement plane becomes the z=0 plane. This is done by replacing, in Eq. 2, R by R·dR and T by T+(R·dT), where dR is the rotation of the fitted plane relative to the calibration plate and dT is the distance of the plane to the calibration plate. The freedom of choosing the new coordinate system due to in-plane rotation and choice of origin is reduced by setting the new origin as the point projected from the previous origin in camera 1 onto the measurement plane, and by requiring that the previous x axis of camera 1 coincides with the x axis of the measurement plane. This can be changed later by the user to set a new origin and x axis or y axis in a dewarped particle image. The complete correction scheme is shown in Fig. 3.
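A least-squares plane fit through the triangulated world points can be sketched with an SVD of the centred point cloud; this is a generic approach, not necessarily the author's exact fitting routine.

```python
import numpy as np

def fit_lightsheet_plane(points):
    """Fit a plane through an (N, 3) array of triangulated world points.
    Returns the centroid, the unit normal and the rms plane-fit error."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                         # direction of smallest variance
    residuals = (points - centroid) @ normal
    return centroid, normal, np.sqrt(np.mean(residuals**2))
```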

The whole procedure can be repeated to arrive at better fits. Usually, the process has converged and the disparity map does not get smaller after two or three passes. Good results have been achieved by using only a single-pass cross-correlation and repeating the complete correction process two or three times.

The triangulation error and the error from fitting a plane through the world points provide information about the quality of the fit. Both errors are affected by inaccuracies in the disparity vectors, which are mostly random errors due to an insufficient number of particles and the well-known bias errors of the correlation function, together with systematic calibration errors in case the computed mapping function becomes inaccurate when projected in space towards the light sheet plane. Calibration errors often lead to high triangulation errors, while the plane fit error might remain small.

Fig. 3 Flow chart of the self-calibration procedure: computation of the camera pinhole model using a calibration plate; computation of the disparity map by cross-correlation of cameras 1 and 2 using recorded (dewarped) particle images; triangulation of points on the light sheet; fitting a plane through the light sheet points; adjusting the mapping functions such that the new plane becomes z=0; defining a new xy origin and direction of the x axis
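Putting the steps of Fig. 3 together, the repeated correction could be organised as in the following sketch. The 'steps' object and its methods (compute_disparity_map, triangulate_points, fit_plane, apply_correction) are hypothetical placeholders for the operations described in the text, not a real API.

```python
def self_calibrate(mapping1, mapping2, image_pairs, steps, n_passes=3):
    """Driver loop of the self-calibration procedure (Fig. 3), written as a
    sketch around placeholder helpers supplied via 'steps'."""
    for _ in range(n_passes):             # two or three passes usually suffice
        # ensemble cross-correlation of dewarped images from cameras 1 and 2
        disparities = steps.compute_disparity_map(mapping1, mapping2, image_pairs)
        # triangulate a world point on the light sheet for every disparity vector
        points = steps.triangulate_points(mapping1, mapping2, disparities)
        # fit a plane through the points and make it the new z=0 plane
        centroid, normal, rms_error = steps.fit_plane(points)
        mapping1, mapping2 = steps.apply_correction(mapping1, mapping2,
                                                    centroid, normal)
    return mapping1, mapping2
```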



As shown later, with a good setup, it is possible to compute the position of the light sheet to within 0.1 pixels of the centre of the sheet, for a sheet thickness of typically 10–20 pixels, something that is hardly possible by simple visual inspection and manual placement of the plate.

The correction scheme above assumes that the internal camera parameters, as well as the position and orientation of camera 2 relative to camera 1, do not change. In the 22 parameters of the stereo Tsai model, one can substitute the external parameters R2 and T2 of camera 2 by the relative transformation R12 and T12 of camera 2 relative to camera 1. Then, the self-calibration procedure above is equivalent to newly fitting R1 and T1, whilst keeping R12 and T12 fixed. Since the coordinate origin and the x axis are freely selectable, this leaves three parameters of R1 and T1 to be fitted, which are the three parameters of the position and orientation of a plane in space. Therefore, the procedure of triangulation, plane fit and transformation can be replaced by a single nonlinear fit of the three free parameters of R1 and T1 using the relationship between the disparity and the mapping function parameters given by the well-known fundamental equation (Hartley and Zisserman 2000):

$x_1^T F \, x_2 = 0$    (7)

where F is the fundamental 3×3 matrix of rank 2 with 8 degrees of freedom, and x1=(x1, y1, 1) and x2=(x2, y2, 1) are the camera coordinates. It is, nevertheless, quite instructive to perform the three steps separately to identify the different error sources and to check the flatness of the light sheet.
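As a small illustration of Eq. 7, the consistency of measured disparity pairs with a given fundamental matrix can be checked as follows; this is a generic sketch, and F would have to be assembled from the fitted camera parameters.

```python
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    """Evaluate |x1^T F x2| (Eq. 7) for N matched pixel positions pts1, pts2
    of cameras 1 and 2, each given as an (N, 2) array. Residuals near zero
    indicate a calibration consistent with the measured disparities."""
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous coordinates
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    return np.abs(np.einsum('ni,ij,nj->n', x1, F, x2))
```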

Advanced self-calibration methods can be devised to fit more than the three plane parameters, since the fundamental equation has 8 degrees of freedom. One might fit user-adjusted focal lengths or Scheimpflug positions, or even the relative position between cameras 1 and 2 with some restrictions. This is a subject of further research.

    2.3 Stereo-PIV processing and 3C reconstruction

Different approaches for stereo vector computation have been proposed, as summarised e.g. by Prasad (2000) or Calluaud and David (2004). In all cases, a 2D2C vector field is computed for each camera, from which a 2D3C vector field is computed by stereoscopic reconstruction. One has the choice of:

1. Computing the 2D2C field on a regular grid in the raw images and using the two interpolated vectors to compute a 3C vector at regular world grid positions

2. Computing the 2D2C vectors in the raw image at a position corresponding to the correct world position

3. Dewarping the images first and computing the 2D2C vectors at the correct world grid position

Method 1 has the disadvantage that the vectors are not computed at the correct world position and, due to vector interpolation, false or inaccurate vectors affect four final 3C vectors. Method 2 has the disadvantage that the size and shape of the interrogation windows differ between the two cameras, due to the perspective viewing. For method 3, the computed 2D2C vectors are already computed at the correct world position and originate from the same interrogation window of equal size and shape, but a sub-pixel interpolation is required during the dewarping, which, together with the sub-pixel interpolation necessary for the multi-pass window deformation scheme, leads to added image degradation. Therefore, a modified method 3 approach is used here, where the dewarping and image deformation are done simultaneously before each step of the multi-pass iterative scheme.
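A sketch of this combined dewarping and image deformation in a single interpolation step is given below. Here world_to_pixel stands in for the calibrated mapping function of one camera, the half-displacement split follows the symmetric deformation described in the next paragraphs, and the names and interface are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def dewarp_and_deform(raw_image, world_to_pixel, Xw, Yw, disp, sign):
    """Sample the raw camera image once per world grid point (Xw, Yw) on the
    z=0 plane, after shifting each point by half of the predicted displacement
    field 'disp' (sign=-1 for the first frame, +1 for the second frame), so
    that dewarping and image deformation use a single interpolation."""
    xs, ys = world_to_pixel(Xw + sign * 0.5 * disp[..., 0],
                            Yw + sign * 0.5 * disp[..., 1])
    return map_coordinates(raw_image, [ys, xs], order=3, mode='nearest')
```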

For the first computational pass, the two frames (t0, t0+dt) of each camera are dewarped and evaluated. This already provides vectors at the correct position in the world coordinate system. Also, the size and shape of the interrogation windows for both cameras are the same, which means that the correlation is done on the same particles, apart from the effects due to the non-zero thickness of the light sheet. Then, a preliminary 3C reconstruction is done to remove corresponding vectors in the 2C vector fields for which the reconstruction error is too large (e.g. larger than 0.5 or 1 pixel). This method very effectively removes spurious vectors, since two false vectors with random directions are rarely correlated. At the end of the first pass, missing vectors are interpolated and the vector field is smoothed slightly for numerical stability.

The resulting vector field is used as a reference for deforming the interrogation windows in the next pass. Actually, not every interrogation window is deformed individually; rather, the complete image is deformed at once, with half the displacement in the backwards direction assigned to the first image at t0 and the other half in the forwards direction to the second image at t0+dt. Image deformation requires fewer floating-point operations, since, e.g. for an overlap of 75%, the same region would be deformed 16 times using window deformation. Image deformation is combined with the dewarping of the original image in one step. Usually, after three or four passes at the final interrogation window size, the 2D2C vector fields have converged sufficiently. Then, the 3C reconstruction is performed, which consists of solving a system of four linear equations with three unknowns (u, v, w). This is done by using the normal equations, which distribute the error evenly over all three components. Computing from (u, v, w) the (u1, v1) and (u2, v2) components, the deviation from the measured (u1, v1) and (u2, v2) can be calculated (reconstruction error). Usually, with a good calibration and 2C vector errors of less than 0.1 pixels, the reconstruction error is well below 0.5 pixels. This can be used as an efficient rejection of false random vectors, which usually produce large reconstruction errors. The complete flow chart is shown in Fig. 4. The 2D2C vector fields are computed separately for cameras 1 and 2. A typical multi-pass scheme consists, for example, of one or two passes with an interrogation window size of 64×64 pixels and 50% overlap, followed by four passes



with a window size of 32×32 pixels and 75% overlap. After each pass, a 2D3C reconstruction is done for the purpose of eliminating the 2D2C vectors with reconstruction errors above some predetermined threshold (e.g. 1 pixel), but the 2D3C vector field is not used further. Only at the end is the reconstructed 2D3C field taken and validated, e.g. by a median filter.
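The normal-equation reconstruction step mentioned above can be sketched as follows, assuming that the 2×3 image-plane derivatives (Jacobians) of the two mapping functions at the grid point are available; the interface is hypothetical.

```python
import numpy as np

def reconstruct_3c(J1, J2, uv1, uv2):
    """Solve the overdetermined system of four equations for the three unknown
    displacement components (u, v, w) with the normal equations. J1, J2 are the
    2x3 derivatives of the two mapping functions at this grid point, uv1/uv2
    the measured 2C pixel displacements. Also returns the rms residual in
    pixels, used as the 'reconstruction error' rejection criterion."""
    A = np.vstack([J1, J2])                      # 4 x 3 system matrix
    b = np.concatenate([uv1, uv2])               # 4 measured components
    uvw = np.linalg.solve(A.T @ A, A.T @ b)      # normal equations
    residual = A @ uvw - b
    return uvw, np.sqrt(np.mean(residual**2))
```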

    3 Experiments

In Sect. 3.1, 16 different calibrations have been taken and self-calibrated on a recording of a flat random pattern plate. A bundle adjustment of the closest eight calibrations serves as a reference. The corrected calibrations are compared in different ways to assess the different residual errors after self-calibration. In Sect. 3.2, a flat random pattern plate has been moved by a translation stage and the measured stereo-PIV displacements are compared to the true displacement. This is done for calibration and self-calibration in air, as well as in water, to verify the accuracy of the pinhole model when used with intermediate index-of-refraction changes. In Sect. 3.3, a real experiment in air has been performed and the vector fields before and after correction are computed and analysed.

Fig. 4 Flow chart of stereo-PIV vector field computation



    3.1 Experimental results with synthetic images

Sixteen views of a 3D calibration plate with a size of about 100×100 mm are recorded at different positions and orientations. The image size is 1,280×1,024 pixels, and a small aperture with an f-stop of 20 has been used to ensure a large depth of focus. From each view, a camera pinhole mapping function is calculated with fixed sx=1 and k2=0. Table 1 shows some of the pinhole parameters for camera 1.

A flat plate with a random dot pattern is recorded, defining a light sheet position. This image is used later for self-calibration. A reference mapping function is computed by a bundle adjustment of the eight calibration views closest to the random pattern plate. Note that the parameters for calibration 1 are almost the same as for the reference, since the first of the eight views for the bundle adjustment is taken as the reference coordinate system z=0.

Table 1 shows the average length of the disparity vectors computed by cross-correlation between the random dot images of cameras 1 and 2 for all calibrations. Even extreme disparities of up to 500 pixels are present, which means that only half of the calibration plate was visible to both cameras simultaneously.

The self-calibration procedure of Sect. 2.2 has been applied to the reference mapping function. In the following, the other mapping functions are compared to the corrected reference function in different ways.

A synthetic double-frame random pattern image has been generated based on a 2D2C velocity field of regular vortices with an average gradient of about 10% and a 5-pixel displacement. The seeding density is high, the particle diameter is around 2 pixels and no noise is included. The two frames are warped to z=0 using the corrected reference mapping function, leading to a 4-frame reference stereo-PIV image: two frames warped with the camera 1 mapping function, two with the camera 2 function. Since all particles are located at z=0, errors due to particles spaced throughout a light sheet are avoided. Evaluating this image with the stereo-PIV procedure using the corrected reference mapping function gives back the original 2D3C vector field with a zero out-of-plane w component.

The still uncorrected 16 mapping functions are used to compute a stereo-PIV 2D3C vector field from the synthetic 4-frame image in the usual way, including final vector validation. The final interrogation window size is 32×32 pixels, with an overlap of 75%. For a disparity of only a few pixels, the errors are still quite small and only a few false vectors occur. Calibration 3 with 9-pixel disparity shows an error of 0.44 pixels, roughly as expected from disparity × velocity gradient. For all larger disparities, a meaningful velocity field can no longer be calculated. All vectors are eliminated either by the reconstruction error filter, which throws out all vectors with a reconstruction error above 1 pixel, or by the final vector validation using a regional median filter.

In the next step, the mapping functions 1–16 are corrected using the self-calibration procedure on the recorded random pattern image. For the corrected mapping functions, the average residual disparity is computed again (Table 2). It is usually well below 0.1 pixels, even for very high initial misalignments. Most of the remaining errors are due to standard correlation errors. Only calibrations 10, 12, 14 and 15 have a higher residual misalignment relative to the reference calibration.

The corrected mapping functions are compared to the corrected reference mapping function, firstly with a direct comparison of the functional form of the mapping functions.

Table 1 Mapping function parameters before correction (the last two columns refer to the synthetic image test)

Calibration   f1 (mm)   Tz1 (mm)   Rx1 (°)   Ry1 (°)   Rz1 (°)   Disparity (pixel)   Error in V (pixel)   False vectors
Reference     60.66     449.4      6.2       29.2      13.5      1.0                 0.07                 0.0%
1             60.03     444.8      6.0       29.8      13.4      1.0                 0.07                 0.0%
2             60.29     443.4      2.1       29.6      6.2       8.2                 0.39                 6.7%
3             60.36     442.9      1.2       29.7      4.2       9.0                 0.44                 7.1%
4             61.23     447.3      2.7       33.6      0.6       38                  2.2                  60%
5             61.23     446.8      2.1       32.1      0.6       41                  2.2                  70%
6             60.89     446.6      5.7       29.7      0.3       49                  2.8                  74%
7             61.57     451.3      4.5       30.5      0.5       87                  3.4                  95%
8             60.32     435.4      1.2       30.2      0.0       126                 -                    100%
9             60.88     448.9      21.7      46.2      1.9       144                 -                    100%
10            60.28     456.3      25.9      32.4      7.8       166                 -                    100%
11            61.44     458.3      16.3      34.7      6.9       178                 -                    100%
12            62.68     465.2      12.6      8.6       0.1       182                 -                    100%
13            61.23     433.0      3.8       33.9      8.2       263                 -                    100%
14            62.82     447.9      13.6      30.6      1.6       298                 -                    100%
15            63.51     444.6      3.0       32.3      13.6      351                 -                    100%
16            62.11     482.1      7.6       26.0      2.8       502                 -                    100%



Using a grid of 10×10 image points in the z=0 plane, the values of the mapping functions are computed and compared to the values of the corrected reference mapping function after equalizing the still different coordinate origins, in-plane rotations and conversion scales from millimetres to pixels. This positional deviation from the reference mapping function is shown in the third column of Table 2. For higher initial disparities, the final positional errors are also larger but, in most cases, still acceptable. While the disparity signifies whether the vectors for cameras 1 and 2 are computed at the same position, which is most important for the errors of the final velocity field in regions of strong gradients, the positional errors relate to a general residual warping of the coordinate system. The vectors are then computed at an incorrect x/y position (typically


onto the random pattern target are shown. In both air and water, the small average residual disparity after self-calibration is mainly due to uncertainties in the detection of the correlation peak. The angle between the cameras is about 45°.

For water, the calculated pinhole parameters are quite different. The principal point lies far outside the chip and the skew factor is much less than 1, the value that would be expected for square pixels. This is due to the distortions of the air–glass–water interface, which can only be fitted by the camera pinhole model with non-physical values for sx, f and the principal point. Nevertheless, the calibration as a whole seems to be very accurate, which is indicated by the fact that, for the bundle adjustment in water, all calibration points in space were fitted with an accuracy better than 0.1 pixels.

In Table 4, the results of the measured displacements are shown. The second row shows, for air, the displacements using a triangulation and plane-fit method, as in the standard self-calibration procedure. First, the disparity of the images recorded at, e.g., position z=1 mm is computed relative to the corrected mapping function at z=0; then, using triangulation, the positions of points on the z=1 mm plane are determined, and a plane is fitted through those points. The measured positions of the z=1–5 mm planes agree with the translation stage movement to within 4 μm, which proves a high accuracy of the mapping function throughout the volume. This agrees well with the measurement errors of 2–4 μm reported by Fournel et al. (2004) for the same type of experiment.

The third row shows the stereo-PIV evaluation in air using the corrected 8-view bundle calibration. The vector computation was performed with multi-pass iterations and deformed interrogation windows of size 64×64 pixels. The measured displacements in air agree within the uncertainties of the translation stage for 1-mm and 2-mm displacements. For larger displacements, there seems to be a systematic bias. Given the good agreement of the triangulation and plane-fit method using the same mapping function, this error is probably due to the fact that the 3C reconstruction uses mapping function derivatives calculated only at the z=0 plane. These derivatives change over the quite large distance of 5 mm (equal to 100 pixels). A better way would be to compute the 3C reconstruction using the more accurate triangulation method. Further investigations are needed to look at this effect in detail. For practical purposes, a bias of 0.06 mm = 1.2 pixels at dz=5 mm = 100 pixels would lead to a bias of 0.06 pixels for typical displacements of 5 pixels, which is commonly less than that due to all other error sources.

The rms in the other u and v components always remains around 0.03 pixels. While the average v displacement is zero everywhere, the u components show a bias of 0.36 pixels at dz=1 mm, increasing linearly with larger displacements. Closer inspection revealed that the axis of the translation stage was not exactly perpendicular to the random pattern plate, resulting in a slight x movement.

The experiment has also been repeated with a translation in the x direction. Again, the measured displacements agree with the settings of the translation stage within its uncertainties.

In water, the results are similar. For up to 2 mm (40 pixels), the accuracy is high, with some bias effects for larger displacements. The rms values are larger than in air due to a residual incorrect warping of the coordinate system, which, e.g. for dz=1 mm, shows up as a gradient across the image in the displacement on the order of 0.03 pixels. The triangulation and plane fit accurately recovers the movement of the plate by the translation stage.

Table 3 Mapping function parameters

Case                       Initial disparity (pixel)   Residual disparity (pixel)   Camera   x0/y0 (pixel)   sx      f (mm)   Tx (mm)   Ty (mm)   Tz (mm)   Rx (°)   Ry (°)   Rz (°)
Air, eight-view bundle     65                          0.07                         1        586/552         0.998   60.8     16.1      1.3       411.7     1.4      20.3     2.7
                                                                                    2        706/548         0.998   60.9     9.2       0.2       435.2     0.2      24.3     2.2
Water, eight-view bundle   7.9                         0.16                         1        1882/560        0.957   81.0     113.9     10.3      537.6     2.7      27.0     4.4
                                                                                    2        4074/491        0.924   83.4     162.0     8.3       570.5     1.7      34.1     4.2

Table 4 Comparison of measured displacements with true displacements

Translation stage moved by   dz=1 mm               dz=2 mm               dz=3 mm               dz=4 mm               dz=5 mm
Air, triangulation           0.999 mm              2.002 mm              2.996 mm              4.002 mm              5.001 mm
Air, stereo-PIV              0.995±0.001 mm        1.992±0.003 mm        2.973±0.003 mm        3.962±0.004 mm        4.941±0.006 mm
                             (20.22±0.03 pixel)    (40.46±0.06 pixel)    (60.38±0.05 pixel)    (80.47±0.08 pixel)    (100.35±0.12 pixel)
Water, triangulation         1.001 mm              2.005 mm              2.995 mm              3.998 mm              4.992 mm
Water, stereo-PIV            0.997±0.004 mm        1.995±0.006 mm        2.977±0.007 mm        3.969±0.007 mm        4.949±0.009 mm
                             (20.59±0.06 pixel)    (41.18±0.13 pixel)    (61.45±0.15 pixel)    (81.92±0.15 pixel)    (102.15±0.19 pixel)



This proves that the camera pinhole model and the self-calibration method remain sufficiently accurate, even with strong distortions from refractive index changes.

    3.3 Experimental results with particle images

An experiment has been performed in air with water droplets of a few microns in size. Two jets of air pass upwards at high speed between two cylinders of 10-mm diameter, generating high-shear regions. The field of view is about 85×70 mm at a distance of 500 mm. The camera CCD array is 1,280×1,024 pixels. The angle between the cameras is about 50°. Three calibrations (1–3) are taken using a 3D calibration plate. The average disparity relative to the recorded particle images is 44, 82 and 252 pixels, respectively. The disparity map for calibration 2 is shown in Fig. 5c, which corresponds to a rotational misalignment around the y axis of about 13°, together with a z displacement of around 5 mm.

Vector fields are computed for the high-shear region (Fig. 5a). The final interrogation window size is 32×32 pixels with 75% overlap. The average velocity gradient is 3%, with maximum values around 20% close to the jets. Without self-calibration correction, large displacement errors are visible and many vectors are removed due to reconstruction errors larger than 1 pixel, as shown in Fig. 6a for calibration 1. For calibrations 2 and 3, no meaningful vector fields can be calculated.

Self-calibration has been performed using interrogation window sizes of 128×128 pixels with an overlap of 50% and a summation of 16 correlation maps, which provided correlation peak positions with an accuracy of better than 0.1 pixels, as can be deduced from the laser plane fit error, which is less than 0.1 pixels for all three mapping functions. After equalizing the coordinate origin, the direction of the x axis, and the global scale from pixels to millimetres between the three corrected mapping functions, the average difference in the mapped x/y position between the three corrected mappings is less than 0.4 pixels, as calculated directly from the functional form of the mapping functions.

With self-calibration correction, the vector fields of the three calibrations are almost identical (Fig. 6b-d, with background colour = vorticity). The average vector difference between calibrations 1 and 3 in the high-shear region is only 0.055 pixels (Fig. 6e). Note that the displayed vectors are enlarged by a factor of 50.

    4 Self-calibration into closed measurement volumes

In many applications, it is difficult or impossible to perform an accurate calibration inside the measurement volume. Here, it is necessary to calibrate from the outside and, somehow, compute a corrected mapping function for the measurement plane inside the volume using the disparity map and an appropriate correction scheme. Different strategies are investigated in Sects. 4.1 and 4.2, together with experimental verification.

Fig. 5a-c Particle image viewed by camera 1 (a) and camera 2 (b), dewarped with calibration 2. The white rectangle defines the high-shear area evaluated. c Corresponding disparity map, typical of a rotation around the y axis



    4.1 Calibration outside with similar optical setup

Given that, as shown in Sect. 3.2, the camera pinhole model without modifications can handle a refractive index change with sufficient accuracy, a straightforward strategy is to perform the calibration outside under conditions as similar to the real measurement conditions as possible. As shown in Fig. 7 (left and middle), this can be done by first focussing the cameras onto the light sheet plane, then retracting both cameras with a translation stage far enough that a small water basin can be placed in front of the water channel, and performing a calibration inside the water basin in the standard way with a single or multiple views of a 2D or 3D calibration plate. It is important that the distance between the cameras and the front side of the water basin is the same as that relative to the front side of the water channel in the real measurement position (L in Fig. 7). Finally, the cameras are moved back to the original position, the real experiment is performed and the standard self-calibration procedure can be applied to correct the mapping function onto the light sheet. With an accurate mechanical setup, the accuracy of this approach is the same as if both the calibration and the recording were done inside the measurement volume.

A scan in the z direction through the measurement volume can be done by moving the laser light sheet to a new z position and computing the self-calibration separately for each z position. If the travel distance is larger than the depth of focus, it is necessary to move the cameras and the light sheet simultaneously. In this case, it is required to perform a calibration outside in the water basin for each z-scan position separately by adjusting the distance between the cameras and the front side of the water basin accordingly.

    4.2 Calibration outside in air

Of course, it would be easier to perform the calibration outside in air without a water basin and to self-calibrate onto the recorded light sheet in water (Fig. 7, right). The results using this method are shown in Table 5.

Fig. 7 Left: recording position. Middle: calibration procedure outside but in a similar optical setup. Right: calibration outside in air and self-calibration onto the recording using the 3-media model

Fig. 6 a Vector field for uncorrected calibration 1; background colour = vorticity. b-d Velocity fields after self-calibration for calibrations 1-3. e Difference between the vector fields of calibrations 1 and 3; the vectors in e are enlarged by a factor of 50. Field of view is about 600×500 pixels



This approach leads to large errors for the standard pinhole model. The z displacement is shortened by a factor of about the index-of-refraction of water. During self-calibration, it is not possible to refit all parameters of the pinhole model to the distorted water case. In particular, the internal camera parameters, like sx and f, are not changed during the standard self-calibration procedure. In principle, the disparity map, as defined by the 3×3 fundamental matrix equation, has 8 degrees of freedom, and only the three plane parameters need to be refitted by the standard self-calibration method. Hence, one has extra degrees of freedom to fit parameters like sx or the relative camera orientation. But there are too many parameters to be fitted, so one must be restricted to a subset given by the particular experimental setup.

A better approach is to modify the camera pinhole model to accommodate the air–glass–water interface. This allows an accurate, physically motivated model. A 3-media model (e.g. air–glass–water) has been implemented according to Maas (1996). Using Snell's law and an iterative approach, the bending of light rays through glass and water is calculated. The thickness and refractive index of each medium can be specified. The thickness dZg-l of the last medium (water) is defined by the distance between the light sheet plane and the previous medium (glass). Currently, this value must still be measured, together with the two angles between the light sheet and the glass plate. The distance between the light sheet and the glass plate can be measured, for example, by focussing on the light sheet with a large aperture (small depth of focus) and then traversing the cameras backwards until a target mounted on the front side of the water channel is in focus.
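The elementary building block of such a ray-tracing calculation is Snell's-law refraction of a ray direction at a planar interface, sketched below in a generic vector form; applying it successively at the air–glass and glass–water interfaces, with the measured thicknesses, gives the kind of bent ray path used by a 3-media model.

```python
import numpy as np

def refract(direction, normal, n1, n2):
    """Refract a ray direction at a planar interface between media with
    refractive indices n1 -> n2. 'normal' is the unit interface normal
    pointing towards the incoming ray (into medium 1)."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    cos_i = -np.dot(normal, d)
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        raise ValueError("total internal reflection")   # no transmitted ray
    return eta * d + (eta * cos_i - np.sqrt(k)) * np.asarray(normal, dtype=float)
```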

The initial calibration in air is done without the 3-media model, which is switched on for the self-calibration (triangulation step) and the subsequent stereo-PIV vector computation.

The results are shown in Table 5. With the 3-media model, the movement of the random pattern plate is accurately computed both by the triangulation method and by the stereo-PIV evaluation. For larger displacements, again, some deviations are visible, showing the limits of the 3-media model for both the triangulation and the stereo-PIV method. This is not relevant for typical PIV experiments, since dz=5 mm corresponds to an extremely large displacement of 100 pixels. The only drawback is that one has to know the distance dZg-l between the light sheet and the glass plate. In the current experiment, dZg-l is 40 mm, which was measured conventionally. Ideally, one could also fit this from the disparity map, but initial tests indicate that the fit algorithm then becomes unstable and is not able to fit the particle plane and the glass plate at the same time.

Further work is needed to explore this possibility.

Table 5 also shows the results of the self-calibration procedure with assumed wrong distances dZg-l, to determine the sensitivity to inaccurate measurements of the position of the laser sheet with respect to the glass plate. The computed stereo-PIV displacements remain accurate to within 1% for offsets of dZg-l of up to 20 mm. The important feature of the 3-media model is that the z derivatives of the mapping functions, which are otherwise off by a factor given by the index-of-refraction of water, are again accurately calculated. Any remaining in-plane x disparities are compensated by the self-calibration procedure. Larger errors are visible for wrongly measured tilts of the light sheet. For an assumed tilt around the y axis of 10°, dz and dy (=0) are accurate to within 2%, but there is a systematic error in dx of 3%. For a tilt around the x axis, there is a systematic error in dy of 8% and residual y disparities of a few pixels, which cannot be compensated for by self-calibration. Of course, the 3-media model is also of advantage when calibrating in situ in water. It has been verified that the camera pinhole parameters then return to physically meaningful values, in comparison to the strange ones shown in Table 3.

Table 5 Measured displacements for calibration in air and self-calibration to the recording in water

Translation stage moved by          dz=1 mm        dz=2 mm        dz=3 mm        dz=4 mm        dz=5 mm
Standard model, triangulation       0.737          1.491          2.228          2.978          3.720
Standard model, stereo-PIV          0.743±0.004    1.486±0.003    2.217±0.006    2.957±0.007    3.688±0.006
3-media model, triangulation,
  dZg-l=40                          0.996          2.005          3.003          4.014          5.016
3-media model, stereo-PIV,
  dZg-l=40                          1.006±0.004    2.012±0.004    3.001±0.006    4.001±0.006    4.989±0.015
  dZg-l=30                          1.006±0.003    2.009±0.004    2.996±0.005    3.994±0.006    4.981±0.007
  dZg-l=20                          1.003±0.003    2.007±0.004    2.994±0.005    3.991±0.006    4.976±0.007
  dZg-l=0                           0.972±0.003    1.944±0.005    2.899±0.006    3.865±0.007    4.819±0.009
  dZg-l=40, ax=10°                  1.021±0.007    2.041±0.013    3.045±0.018    4.059±0.025    5.061±0.032
  dZg-l=40, ay=10°                  1.000±0.001    2.001±0.003    2.995±0.005    3.978±0.006    4.960±0.008

Displacements are given in millimetres; 1 mm = 21.1 pixels. The true distance between the glass plate and the light sheet is dZg-l = 40 mm.



    5 Laser light sheet thickness and relative position

As an added benefit of the correction scheme, the thickness and relative position of the two laser sheets of the double-pulse PIV laser can be deduced from the correlation maps. The correlation peaks are smeared out due to particles contributing throughout the light sheet, as shown in Fig. 8. Consequently, the light sheet thickness can be computed by simple geometric considerations from the correlation peak width. When the cross-correlation is done on dewarped images, the correlation peak width is given by:

$w_c = d \, (1/\tan\alpha_1 + 1/\tan\alpha_2)$    (8)

with d being the light sheet thickness and α1 and α2 the viewing angles of cameras 1 and 2 relative to the x axis in the case when the cameras are placed horizontally along the x axis. This assumes point particles. For real particles, the correlation peak is convolved with the particle point spread function, which could be calculated by auto-correlation. The width of the correlation peak in units of pixels is a function of the ratio of the thickness of the light sheet to the distance between the camera and the light sheet. For a typical stereo-PIV experiment with measured xy displacements of 5–10 pixels, one needs a light sheet at least twice as thick to measure z components of the same order. Therefore, typical correlation peak widths are on the order of 10–20 pixels.
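Inverting Eq. 8 gives a one-line estimate of the thickness from the measured peak width and the two viewing angles. This is a sketch only; in practice the peak width would come from the summed correlation planes, after accounting for the particle-image contribution mentioned above.

```python
import math

def light_sheet_thickness(peak_width, alpha1_deg, alpha2_deg):
    """Estimate the light sheet thickness d from the correlation peak width w_c
    on dewarped images and the viewing angles of cameras 1 and 2 (Eq. 8)."""
    geom = (1.0 / math.tan(math.radians(alpha1_deg)) +
            1.0 / math.tan(math.radians(alpha2_deg)))
    return peak_width / geom
```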

If this analysis is done for both laser light sheets separately, the positions of the two planes can be compared to determine the overlap of the two light sheets and the flatness of each sheet. Another method often used for determining the overlap of the two laser sheets, besides visual inspection, is to set the dt between the two laser shots as short as possible. Then, the two images of each camera show almost the same particle pattern (if the light sheets are well aligned) and a cross-correlation gives a high correlation coefficient, which is indicative of good light sheet overlap. The above method based on the cross-correlation of cameras 1 and 2 is computationally more intensive, but offers the advantage of showing the real position in space of each laser beam, together with a clear indication of which screw on the laser head should be adjusted.

    6 Summary

A self-calibration correction scheme has been developed to compensate for misalignment between the calibration plate and the light sheet. After fitting a camera pinhole model to a 3D calibration plate, a disparity vector map is calculated by cross-correlating the dewarped particle images of cameras 1 and 2 taken at the same time. For higher stability and accuracy, the correlation maps of many image pairs are summed up. The disparity vectors are used to calculate world points on the real measurement plane by triangulation. A plane is fitted through these points.

Finally, the mapping functions are transformed to the new plane. It is shown that this calibration scheme provides highly accurate mapping functions, with final displacement errors smaller than expected from the other error sources, like the basic particle image velocimetry (PIV) correlation algorithm for real images. This has been confirmed for different experimental setups. It has been shown that, with such a correction, the z=0 plane of the mapping function lies within 0.1 pixels of the middle of the light sheet.

The self-calibration scheme is advisable in any case to check the calibration and to improve the accuracy. Since it works well even for very large misalignments, it eliminates the need for an alignment of the calibration plate with the light sheet, which is often difficult and time consuming.

A modified 3-media camera pinhole model has been implemented to account for index-of-refraction changes along the optical path. It is then possible to calibrate from the outside of, e.g., a closed water channel and self-calibrate onto the recordings inside the channel. This method allows stereo-PIV measurements inside closed measurement volumes, which was not previously possible.

As a side benefit, the correlation maps can be analysed to yield the position and thickness of the two laser sheets and, therefore, the degree of overlap and the flatness of each sheet.

    References

Calluaud D, David L (2004) Stereoscopic particle image velocimetry measurements of the flow around a surface-mounted block. Exp Fluids 36:53–61

Fig. 8 Particles throughout the light sheet contribute to the correlation peak. From the peak width, the light sheet thickness can be computed



Coudert S, Schon JP (2001) Back-projection algorithm with misalignment corrections for 2D3C stereoscopic PIV. Meas Sci Technol 12:1371–1381

Fournel T, Coudert S, Lavest JM, Collange F, Schon JP (2003) Self-calibration of telecentric lenses: application to bubbly flow using moving stereoscopic camera. In: Proceedings of the 4th Pacific symposium on flow visualization and image processing (PSFVIP-4), Chamonix, France, June 2003

Fournel T, Lavest JM, Coudert S, Collange F (2004) Self-calibration of PIV video cameras in Scheimpflug condition. In: Stanislas M, Westerweel J, Kompenhans J (eds) Particle image velocimetry: recent improvements. Proceedings of the EUROPIV 2 workshop, Zaragoza, Spain, March/April 2003. Springer, Berlin Heidelberg New York, pp 391–405

Hartley R, Sturm P (1994) Triangulation. In: Proceedings of the ARPA image understanding workshop, Monterey, California, November 1994, pp 957–966

Hartley R, Zisserman A (2000) Multiple view geometry in computer vision, 1st edn. Cambridge University Press, Cambridge, UK

Maas HG (1996) Contributions of digital photogrammetry to 3D PTV. In: Dracos T (ed) Three-dimensional velocity and vorticity measuring and image analysis techniques. Kluwer, Dordrecht, The Netherlands, pp 191–208

Meinhart CD, Wereley ST, Santiago JG (1999) A PIV algorithm for estimating time-averaged velocity fields. In: Proceedings of optical methods and image processing in fluid flow, 3rd ASME/JSME joint fluids engineering conference, San Francisco, California, July 1999

Prasad AK (2000) Stereoscopic particle image velocimetry. Exp Fluids 29:103–116

Soloff SM, Adrian RJ, Liu ZC (1997) Distortion compensation for generalized stereoscopic particle image velocimetry. Meas Sci Technol 8:1441–1454

Tsai RY (1986) An efficient and accurate camera calibration technique for 3D machine vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR'86), Miami, Florida, June 1986, pp 364–374

Willert C (1997) Stereoscopic digital particle image velocimetry for application in wind tunnel flows. Meas Sci Technol 8:1465–1479
