
Institutionen för systemteknik
Department of Electrical Engineering

Examensarbete

Calibration of multispectral sensors

Master's thesis carried out in Image Processing at Tekniska högskolan i Linköping

by

Wilhelm Isoz

LITH-ISY-EX-3651-2005

Linköping 2005

Department of Electrical Engineering
Linköpings tekniska högskola, Linköpings universitet
SE-581 83 Linköping, Sweden


Calibration of multispectral sensors

Master's thesis carried out in Image Processing at Tekniska högskolan i Linköping

by

Wilhelm Isoz

LITH-ISY-EX-3651-2005

Supervisors: Thomas Svensson, Avdelningen för Sensorteknik, FOI

Ingmar Renhorn, Avdelningen för Sensorteknik, FOI

Examiner: Per-Erik Forssén, ISY, Linköpings universitet

Linköping, 13 December, 2005


Avdelning, Institution / Division, Department:
Bildbehandling, Department of Electrical Engineering, Linköpings universitet, S-581 83 Linköping, Sweden

Datum / Date: 2005-12-13

Språk / Language: Engelska/English
Rapporttyp / Report category: Examensarbete

URL för elektronisk version:
http://www.cvl.isy.liu.se/
http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5202

ISBN: —
ISRN: LITH-ISY-EX-3651-2005
Serietitel och serienummer / Title of series, numbering: —
ISSN: —

Titel / Title:
Kalibrering av multispektrala sensorer
Calibration of multispectral sensors

Författare / Author: Wilhelm Isoz

Sammanfattning / Abstract:

English: This thesis describes and evaluates a number of approaches and algorithms for nonuniform correction (NUC) and suppression of fixed pattern noise in an image sequence. The main task of this thesis work was to create a general NUC for infrared focal plane arrays. To create a radiometrically correct NUC, reference-based methods using polynomial approximation are used instead of the more common scene-based methods, which produce a merely cosmetic NUC.

The pixels that cannot be adjusted to give a correct value for the incoming radiation are defined as dead. Four separate methods are used to identify these dead pixels; both the scene sequence and calibration data are used in the identification.

The algorithms and methods have all been tested on real image sequences. A graphical user interface using the presented algorithms has been created in Matlab to simplify the correction of image sequences. A conversion of the corrected image values to radiance and temperature is also implemented.

Svenska (translated): This report describes and evaluates a number of different approaches and algorithms for nonuniform corrections and for reducing the dead pixels in an image sequence. The main task of this work was to create a general nonuniform correction for IR cameras. To create a radiometrically correct nonuniform correction, a reference-based method using polynomial equations was chosen, instead of the more common scene-based type, which creates a cosmetically correct nonuniform correction. The pixels that cannot be corrected to give a correct value for the incoming radiation are defined as dead. Four different methods are used to find and define these dead pixels. Both scene and calibration data are used in these methods.

All the algorithms and methods have been tested on real scene data sequences. A graphical user interface that uses the proposed algorithms has been created in Matlab to simplify the correction of the image sequences. A method to convert the corrected images to radiance levels and temperature has also been created.

Nyckelord / Keywords: nonuniform correction, infrared sensor, reference based, radiation, graphical user interface


Abstract

English: This thesis describes and evaluates a number of approaches and algorithms for nonuniform correction (NUC) and suppression of fixed pattern noise in an image sequence. The main task of this thesis work was to create a general NUC for infrared focal plane arrays. To create a radiometrically correct NUC, reference-based methods using polynomial approximation are used instead of the more common scene-based methods, which produce a merely cosmetic NUC.

The pixels that cannot be adjusted to give a correct value for the incoming radiation are defined as dead. Four separate methods are used to identify these dead pixels; both the scene sequence and calibration data are used in the identification.

The algorithms and methods have all been tested on real image sequences. A graphical user interface using the presented algorithms has been created in Matlab to simplify the correction of image sequences. A conversion of the corrected image values to radiance and temperature is also implemented.

Svenska (translated): This report describes and evaluates a number of different approaches and algorithms for nonuniform corrections and for reducing the dead pixels in an image sequence. The main task of this work was to create a general nonuniform correction for IR cameras. To create a radiometrically correct nonuniform correction, a reference-based method using polynomial equations was chosen, instead of the more common scene-based type, which creates a cosmetically correct nonuniform correction. The pixels that cannot be corrected to give a correct value for the incoming radiation are defined as dead. Four different methods are used to find and define these dead pixels. Both scene and calibration data are used in these methods.

All the algorithms and methods have been tested on real scene data sequences. A graphical user interface that uses the proposed algorithms has been created in Matlab to simplify the correction of the image sequences. A method to convert the corrected images to radiance levels and temperature has also been created.



Acknowledgements

This master's thesis was conducted at the Department of IR Systems, FOI, Linköping. I would like to thank the following people:

- The initiators of the thesis and my supervisors at the Department of IR Systems, FOI: Thomas Svensson and Ingmar Renhorn.

- The staff at the Department of IR Systems, FOI.

- Thomas Svensson, for the daily discussions and ideas about the thesis work.

- Martin Mileros, who introduced principal component analysis (PCA) to me.

- My examiner Per-Erik Forssén at the Department of Electrical Engineering, Linköpings universitet.

- My opponent Mikael Löfqvist.



Notation

Symbols

L      Spectral radiance [W/(m^2·sr·µm)]
M      Emittance [W/m^2]
M_e    Spectral exitance [W/(m^2·µm)]
y_ij   DN of pixel j from frame i
x_ij   Corrected DN of pixel j from frame i

Abbreviations

DN     Digital number
FPA    Focal plane array
FPN    Fixed pattern noise
MWIR   Mid wave infrared
NUC    Nonuniform correction
PCA    Principal component analysis
SWIR   Short wave infrared
TIR    Thermal infrared
TN     Temporal noise



Contents

1 Introduction
  1.1 Background
  1.2 Previous work
  1.3 Goal of the thesis
  1.4 Implementation
  1.5 Thesis overview

2 Basic concepts
  2.1 Basic radiometry
    2.1.1 Properties of radiators
    2.1.2 Lambertian radiator
  2.2 Infrared imaging
    2.2.1 Detectors
    2.2.2 From detector to an imaging system
    2.2.3 Responsivity
    2.2.4 Noise

3 Equipment used in thesis
  3.1 Cameras
    3.1.1 Multimir
    3.1.2 Emerald
    3.1.3 Information from files
  3.2 Radiation sources

4 Nonuniform correction of IR-images
  4.1 Scene based correction
    4.1.1 Statistic methods
    4.1.2 Registration-based
  4.2 Reference based correction
    4.2.1 Correction function
  4.3 Quality measurement

5 Identifying and replacing dead pixels
  5.1 Temporal noise
  5.2 Extreme values
  5.3 Value on polynomial coefficients
  5.4 Principal component analysis
  5.5 Replacement of bad pixels

6 Results
  6.1 NUC and bad pixel filtering
  6.2 Linear responsivity
  6.3 Correctability
  6.4 Measurement considerations
  6.5 The user-interface of Matlab program
  6.6 DN → physical unit

7 Conclusions
  7.1 Future work

Bibliography

A NIPALS algorithm

B Guide to the user interface
  B.1 Startup
    B.1.1 Select calibration file
  B.2 View image and set dead pixel limits
  B.3 View file without any correction
  B.4 Save file
  B.5 IrEval

C Radiometric calibration
  C.1 Atmospheric absorption
  C.2 Radiance analysis


Chapter 1

Introduction

1.1 Background

The image data obtained from an infrared sensor are matrices with digital pixel values, equivalent to the image data obtained from digital cameras in the visual region. The goal of all imaging is that the information in the image should represent the scene as closely as possible, which is affected by noise sources like the nonuniform response of each pixel in the detector. This type of noise is a much more serious concern in the infrared region than in the visual region. Therefore a nonuniform correction (NUC) of the elements is needed, leading to improved image quality. The need for a NUC becomes clear when viewing an uncorrected image where the nonuniformity appears, see figure 1.1. The image shows two types of nonuniformity: first, pixels with little or no response, which appear as black and white pixels; secondly, a Gaussian blur over the image due to the nonuniform response.

1.2 Previous work

Nonuniformity of focal plane arrays (FPA) is a well-known problem, and many articles on the subject are found in the literature, each describing a specific method to reduce the nonuniformity. All these methods have their own advantages and disadvantages. A number of them are selected and reviewed in chapter 4. A nonuniform correction method that had earlier been used at FOI, Linköping, was implemented as a set of Matlab scripts and worked reasonably well on single images, but the algorithms were not fully developed. The computation time was too long to correct series of images and the control of different parameters in the nonuniform correction was poor.




Figure 1.1: Uncorrected image from an infrared sensor

1.3 Goal of the thesis

The goal of this thesis was to design and evaluate nonuniform correction methods. Some of the wishes for this thesis were:

• An enhanced image quality.

• The method must be general: it has to work on different kinds of image data (e.g. static scenes, motion, high and low contrast).

• Efficiency concerning the computation time.

• The nonuniform correction should be user friendly.

No article in the literature fulfils all these requirements. The two last requirements were by no means the least important, since the amount of image data collected during field trials may be large, up to 200 GB, divided into a large number of data files.

1.4 Implementation

The focus of the thesis work was on the nonuniform correction. The algorithms were implemented in Matlab v7.0. The algorithms and utilities were tested on real infrared sequences registered with two high-performance infrared cameras, denoted Multimir and Emerald. Both are available at the Department of IR Systems and are routinely used in signature measurements. The computer used was a Pentium III at 3.0 GHz with 1024 MB RAM.



1.5 Thesis overview

Chapter 2 - Basic concepts
Gives an overview of some basic concepts and definitions used in the thesis and an introduction to radiometry.

Chapter 3 - Equipment used in thesis
Describes the IR cameras and other tools used in the thesis.

Chapter 4 - Nonuniform correction of IR-images
Describes different techniques for nonuniformity correction of images. Both scene based and reference based correction methods are described.

Chapter 5 - Identifying and replacing dead pixels
Different methods for identifying dead pixels are described and implemented.

Chapter 6 - Results
The proposed algorithms presented in previous chapters are merged together. The result of this is evaluated and a presentation of the created GUI is given.

Chapter 7 - Conclusions
A conclusion and discussion of the results from this thesis is presented and suggestions for further work are given.

Appendix A - NIPALS algorithm
The algorithm used to calculate the PCA.

Appendix B - Guide to the user interface
An overview of the created graphical user interface.

Appendix C - Radiometric calibration
Describes the transformation of the pixels' digital values into a radiometric unit.




Chapter 2

Basic concepts

In this chapter, basic concepts used in this thesis work are described and defined. Radiometry, which describes the energy transfer from a source to a detector, is treated in section 2.1. Concepts connected to infrared imaging are treated in section 2.2. The information presented here is mainly gathered from references [4], [5], [9], [12] and [16], if nothing else is specified.

2.1 Basic radiometry

Radiometry describes the energy or power transfer from a source to a detector. Passive remote sensing in the optical regime (visible through thermal) depends on two sources of radiation. In the visible to near-infrared band, the radiation collected by a remote sensing system originates with the sun. In the thermal infrared band, thermal radiation is emitted directly by materials on the earth. Part of the radiation received by a sensor has been reflected at the earth's surface and part has been scattered by the atmosphere, without ever reaching the earth.

All objects with a temperature above zero kelvin emit thermal radiation, according to Planck's law, eq. 2.1. The spectral emittance [W/(m^2·µm)] of an ideal blackbody is a function of the absolute temperature and the wavelength and is described by the Planck distribution law

M_λ(λ, T) = c_1/λ^5 · 1/(e^(c_2/(λT)) − 1)   [W/(m^2·µm)]   (2.1)

where

T = blackbody temperature [K]
λ = wavelength [µm]
c_1 = 3.7418 · 10^8 [W·µm^4/m^2]
c_2 = 1.43388 · 10^4 [µm·K]

In figure 2.1 the Planck distribution is plotted for five different temperatures.
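As a numerical check of eq. 2.1, a short sketch in Python with NumPy (standing in for the thesis's Matlab environment); the wavelength grid and temperature are illustrative:

```python
import numpy as np

# Radiation constants as given with eq. 2.1
C1 = 3.7418e8    # [W·µm^4/m^2]
C2 = 1.43388e4   # [µm·K]

def spectral_exitance(wavelength_um, temp_k):
    """Blackbody spectral exitance M_lambda(lambda, T) [W/(m^2·µm)], eq. 2.1."""
    lam = np.asarray(wavelength_um, dtype=float)
    # expm1 keeps the denominator accurate when c2/(lam*T) is small
    return C1 / lam**5 / np.expm1(C2 / (lam * temp_k))

# A 300 K blackbody: the curve peaks near 9.7 µm (cf. Wien's law, eq. 2.2)
lam = np.linspace(1.0, 30.0, 2901)
m300 = spectral_exitance(lam, 300.0)
peak_um = lam[np.argmax(m300)]
```

Raising the temperature shifts the peak towards shorter wavelengths, as figure 2.1 illustrates.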




Figure 2.1: The spectral emittance at five different temperatures (200 K, 300 K, 600 K, 1000 K and 2000 K). (a) Spectral emittance; (b) spectral emittance within the infrared region. Both panels plot spectral radiance [W/(cm^2·µm)] against wavelength [µm].

Two main features may be observed in the figure.

• The peak shifts to shorter wavelengths as the temperature is raised. The peak position is given by Wien's displacement law.

λ_max = 2898/T   [µm]   (2.2)

where

λ_max = wavelength at which the radiation is a maximum [µm]
T = temperature [K]

• The total energy emitted is strongly dependent on the temperature. In fact it is proportional to the fourth power of the temperature, as expressed by the Stefan-Boltzmann law.

M = σ · T^4   [W·m^-2]   (2.3)

where

σ = 2π^5 k^4 / (15 c^2 h^3) = 5.669 · 10^-8 [W·m^-2·K^-4]
k = Boltzmann constant = 1.380662 · 10^-23 [J·K^-1]
h = Planck's constant = 6.626176 · 10^-34 [J·s]
c = speed of light [m·s^-1]
T = temperature [K]
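Both laws are easy to verify numerically; a minimal sketch (function names are my own):

```python
SIGMA = 5.669e-8  # Stefan-Boltzmann constant [W·m^-2·K^-4], value used in the text

def wien_peak_um(temp_k):
    """Wavelength of maximum emission [µm], eq. 2.2."""
    return 2898.0 / temp_k

def total_emittance(temp_k):
    """Total emittance M = sigma * T^4 [W/m^2], eq. 2.3."""
    return SIGMA * temp_k ** 4

# Doubling the temperature halves the peak wavelength and raises the
# total emittance by a factor of 2^4 = 16.
```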

The most interesting spectral regions are shown in table 2.1. They contain relatively transparent atmospheric windows, see appendix C, and there are effective radiation detectors in these regions.

The thermal infrared region (TIR) is the part that may be used by infrared sensors to detect emitted radiation. Due to strong absorption bands in the atmosphere (mainly H2O and CO2) the transmission is very low between 5 and 8 µm and above 14 µm.



Name                         Wavelength range   Main radiation source
Visible                      0.4-0.7 µm         solar
Near Infrared (NIR)          0.7-1.1 µm         reflected solar
Short Wave Infrared (SWIR)   1.1-3 µm           reflected solar
Thermal Infrared (TIR)       3-14 µm
Mid Wave Infrared (MWIR)     3-5 µm             solar, thermal
Long Wave Infrared (LWIR)    8-14 µm            thermal

Table 2.1: Spectral regions and their radiation sources

2.1.1 Properties of radiators

The maximum emittance of a sample is given by Planck's law, eq. 2.1. The ratio between the emittance of a sample and the emittance from a blackbody at the same temperature is called the spectral emissivity and is given by

ε_s(λ) = M^s_λ(λ) / M^bb_λ(λ)   (2.4)

where

M^s_λ(λ) = spectral emittance of the sample
M^bb_λ(λ) = spectral emittance of a blackbody

Kirchhoff's law states that in equilibrium the emission and absorption are equal, and therefore

α(λ) = ε(λ)   (2.5)

The specular spectral reflectivity relates the reflected radiation of a surface tothe incident one.

ρ(λ) = M^r_λ(λ) / E^inc_λ(λ)   (2.6)

where

M^r_λ(λ) = the reflected radiation
E^inc_λ(λ) = the incoming irradiance

The spectral transmissivity relates the transmitted radiation to the incident radiation, is defined in the same way as the reflectivity, and is given by

σ(λ) = M^t_λ(λ) / E^inc_λ(λ)   (2.7)

where

M^t_λ(λ) = the transmitted radiation
E^inc_λ(λ) = the incoming irradiance

Kirchhoff's law states that the sum of the absorbed, reflected and transmitted power is equal to the incident power.

α(λ) + ρ(λ) + σ(λ) = 1 (2.8)



If α(λ), ρ(λ) and σ(λ) are independent of wavelength then the emitter is calleda graybody. If α = 1 the emitter is a perfect blackbody.
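Equations 2.5 and 2.8 allow one of the three quantities to be inferred from the other two; a minimal sketch in Python (the function name and values are illustrative, not from the thesis):

```python
def emissivity_from_balance(reflectivity, transmissivity):
    """Infer alpha (= emissivity, by eq. 2.5) from the energy balance of
    eq. 2.8: alpha + rho + sigma = 1."""
    alpha = 1.0 - reflectivity - transmissivity
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("reflectivity and transmissivity must sum to <= 1")
    return alpha

# An opaque surface (transmissivity 0) reflecting 20% of the incident
# radiation has emissivity 0.8; a perfect blackbody reflects nothing.
```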

2.1.2 Lambertian radiator

The radiance unit [W/(m^2·sr·µm)] is used to describe the exitance from objects that are resolved by the imaging sensor. If the object has a radiance that is independent of the angle, it is called a Lambertian radiator, for which the relation between the exitance and the radiance is given by

L(λ) = ε_s(λ) M_λ(λ) / π   [W/(m^2·sr·µm)]   (2.9)

where

ε_s(λ) = spectral emissivity
M_λ(λ) = the spectral exitance given by eq. 2.1 and 2.3

The power detected by a detector may be due to radiance emitted by the object or radiance reflected by the object. At temperatures around 25 °C the radiance detected below 3 µm is mainly due to reflected radiation and the radiance above 3 µm is mainly due to emitted radiation, see figure 2.1.

The digital number (DN) given by a sensor is the radiance seen by the detectorconverted to an electric signal and quantized to a digital number.
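Eq. 2.9 is a simple division by π; a minimal Python sketch (values illustrative, not from the thesis):

```python
import math

def lambertian_radiance(emissivity, spectral_exitance):
    """Radiance L = eps * M / pi [W/(m^2·sr·µm)] of a Lambertian
    radiator (eq. 2.9), given the blackbody spectral exitance M."""
    return emissivity * spectral_exitance / math.pi

# A graybody with emissivity 0.9 and blackbody exitance 31.4 W/(m^2·µm)
# radiates about 9 W/(m^2·sr·µm), independent of the viewing angle.
L = lambertian_radiance(0.9, 31.4)
```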

2.2 Infrared imaging

An electro-optical imaging system consists of at least five fundamental parts: optics, optical band-pass filter, detector, signal processing and monitor. The term "sensor" covers the optics, optical band-pass filter, detector and signal processing.

2.2.1 Detectors

The detector is the heart of the sensor. There are two main types of detectors:

• Thermal detectors

• Photon detectors

In thermal detectors the temperature of a response element, sensitive to the actionof infrared radiation, is raised. This in turn changes a temperature dependentparameter like the electrical conductivity. These detectors can operate at roomtemperature but the sensitivity is lower and the response time longer than for thephoton detectors.

High-performance IR detectors are cryogenically cooled photon detectors, which means that the working temperature is below 80 K; "cryogenically" here means a temperature down to that of liquid nitrogen, ≈ 77 K. Photon detectors are based on the interaction between photons and electrons in semiconductor materials. At longer wavelengths in the infrared region they require cooling to get rid of excessive noise. The sensitivity is high and the response time is short.

Systems based on uncooled thermal detectors have not been studied in this thesis; in the following, therefore, only photon detectors are discussed.

Detector material

Common materials in photon detectors are HgCdTe (known as MCT), InSb and QWIP (Quantum Well Infrared Photodetector). Advantages of MCT are a high sensitivity and quantum efficiency, a flexible wavelength tunability (1.5-26 µm) and the potential to operate above cryogenic temperatures. A drawback of MCT is that it is difficult to manufacture, leading to high costs and poor uniformity of the arrays. InSb is an equally sensitive alternative to MCT and is easier to reproduce, but its tunability is poorer (2-5.5 µm). QWIPs employ silicon and GaAs manufacturing procedures and are much more producible than MCT, especially for long-wavelength and large-format arrays. The QWIP also has lower sensitivity to nonuniformities. Drawbacks of QWIP are a lower operating temperature and a lower responsivity than MCT, ref [8].

Sensitivity

NETD, noise equivalent temperature difference [mK], is a common measure of the sensitivity. It is the smallest temperature difference that an IR camera can detect, and is defined as the temperature difference between a target and a uniform background that produces a signal-to-noise ratio equal to one. It is calculated as

NETD = σ_N / (ΔS/ΔT)   [mK]   (2.10)

where

σ_N = standard deviation of the noise
ΔS = signal difference
ΔT = temperature difference between target and background

NETD for systems based on cryogenically cooled photon detectors and uncooledthermal detectors are typically 20 mK and 100 mK respectively.
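Eq. 2.10 in code, with invented numbers chosen to reproduce the typical figures just quoted:

```python
def netd_mk(noise_std_dn, signal_diff_dn, temp_diff_k):
    """NETD [mK], eq. 2.10: noise standard deviation divided by the
    signal change per kelvin of target-background temperature difference."""
    return 1000.0 * noise_std_dn / (signal_diff_dn / temp_diff_k)

# 2 DN of noise against a 100 DN response to a 1 K difference gives
# 20 mK (cooled photon detector); the same noise against a 20 DN
# response gives 100 mK (uncooled thermal detector).
```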

2.2.2 From detector to an imaging system

Basically there are two types of infrared imaging systems: scanning systems and staring systems. A general drawback with scanning systems is the need for an expensive scanning arrangement. The first IR cameras consisted of one detector element that scanned the scene in two directions, figure 2.2a. The demand on the scanning velocity was high with these systems, and with many pixels the frame rate was low. With the development of linear arrays of detector elements, figure 2.2b, which performed parallel scanning of the scene in only one direction, the frame rate was increased. Improved manufacturing led to the development of focal plane arrays, FPA, where the number of detector elements is equal to the number of pixels, figure 2.2c. Due to the elimination of the scanning system these sensors are called "staring". The technique allows very high frame rates (≥ 1000 Hz) and longer integration times, leading to a higher sensitivity. Longer integration times have also made the use of uncooled thermal detectors possible.

Figure 2.2: Types of imaging systems. (a) Single detector, scanning; (b) linear array, scanning; (c) staring focal plane array.

The array sizes for MCT, InSb and QWIP are comparable, typically 640×512. A general trend in infrared imaging is that the arrays are growing. The largest MCT arrays available today consist of 4000×4000 detector elements.

Dynamic range and contrast

The dynamic range is defined as the relation between the maximum measurable signal and the minimum measurable signal. For digital systems it is usually defined as the relation between the largest digital number and the least significant bit (= 1). The (radiometric) contrast is defined as the difference between the maximum and minimum level, radiance in the scene or DN value in a single frame. The output is displayed with a visual contrast.

The dynamic range is a constant value for the camera; in this thesis 2^14 is used. The contrast changes depending on the integration time and the radiation from the scene, and may vary from a DN value of a few hundred to the full dynamic range. If the contrast is small, objects may be hard to detect visually, i.e. the visual contrast is small. However, increasing the visual contrast to enhance the visual result also increases the nonuniformity between the pixels in the detector.
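The distinction between the fixed dynamic range and the per-frame contrast can be sketched as follows (frame values invented):

```python
import numpy as np

DYNAMIC_RANGE = 2 ** 14   # fixed by the 14-bit camera

def radiometric_contrast(frame):
    """Image contrast: difference between the largest and smallest DN."""
    frame = np.asarray(frame)
    return int(frame.max()) - int(frame.min())

# A low-contrast frame occupies only a sliver of the dynamic range.
frame = np.array([[8100, 8230, 8150],
                  [8180, 8400, 8120]])
contrast = radiometric_contrast(frame)
fraction_used = contrast / DYNAMIC_RANGE
```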



Figure 2.3: Illustration of dynamic range and contrast. The dynamic range spans DN_min = 0 to DN_max = 2^14 − 1; the (radiometric) image contrast spans the minimum to the maximum DN value in the image.

2.2.3 Responsivity

Responsivity, R, is defined as the system response to a known, controlled signal input. The responsivity unit depends on the unit of the system's response.

R = U/φ   [V/W]
R = I/φ   [A/W]
R = DN/φ   [DN/W] or [DN/radiance]

If the sensor registers homogeneous blackbody surfaces, the incident radiation is approximately the same on all pixels; the mean value of the FPA is therefore proportional to the incident radiance level.
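This proportionality is what makes reference-based calibration possible: the responsivity can be estimated as the slope of a line fitted to FPA means at known radiance levels. A sketch with invented calibration points (Python/NumPy rather than the thesis's Matlab):

```python
import numpy as np

# Hypothetical calibration: FPA mean DN while staring at uniform
# blackbody surfaces of known radiance.
radiance = np.array([1.0, 2.0, 3.0, 4.0])            # [radiance units]
mean_dn = np.array([1050.0, 2050.0, 3050.0, 4050.0])  # [DN]

# Slope of the least-squares line is the responsivity R [DN/radiance];
# the intercept absorbs the detector offset.
R, offset = np.polyfit(radiance, mean_dn, 1)
```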

2.2.4 Noise

On a raw image from an infrared camera there is considerable distortion, some of which can be corrected and some of which is incorrigible. The noise categories commonly used in the literature and the types of noise discussed in this thesis are given here.

Temporal and spatial noise

Temporal noise is defined as the standard deviation of the DNs for one pixel on the IR sensor through time with a constant incoming radiation. Temporal noise is an uncorrectable noise, but if the noise is considered Gaussian, the images can be averaged and the S/N ratio is then raised √n times, where n is the number of frames averaged.

Spatial noise is defined as the standard deviation of the DNs for one image. Spatial noise can to a large extent be reduced by the NUC.

Usually the standard deviation is used to calculate the two types of noise, but it can also be described by some other statistical measure; in this thesis the standard deviation is used.
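The √n improvement from frame averaging can be checked numerically. The sketch below simulates Gaussian temporal noise on a constant-radiance frame; the signal level, noise level and frame count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values: a constant scene level with Gaussian temporal noise.
true_dn, sigma, n = 5000.0, 20.0, 16
frames = true_dn + rng.normal(0.0, sigma, size=(n, 64, 64))

# Noise of a single frame versus the n-frame average (the noise is i.i.d.,
# so the spatial spread of one frame equals the temporal noise level).
single_noise = frames[0].std()
avg_noise = frames.mean(axis=0).std()

# Averaging n frames raises S/N by roughly sqrt(n) = 4 here.
print(single_noise / avg_noise)
```

With n = 16 the printed ratio comes out close to 4, as the Gaussian model predicts.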


[Figure: (a) temporal noise, the DN variation along the time axis for a fixed pixel (x, y); (b) spatial noise, the DN variation over the (x, y) plane of a single frame.]

Figure 2.4: Temporal and spatial noise in the FPA

Background Noise

Background noise is one fundamental temporal noise source. It is due to the statistical fluctuations of the radiation inside the camera. This noise is independent of the performance of the detector and is not correctable.

Nonuniformity and fixed pattern noise

A drawback with focal plane arrays is that the responsivities of the detectors, as a function of the incident radiation, are nonuniform, especially for MCT. This leads to an extra noise source, spatial noise or pixel noise, often denoted fixed pattern noise (FPN). Figure 2.5 shows the response curves for nine randomly selected Multimir pixels.

Due to the manufacturing process each pixel has a unique response curve, with its own gain and offset, that differs from the other pixels in the FPA. The differences in the linearity, i.e. the gain and offset levels, may be significant between different detectors. A NUC is therefore needed, where a unique correction function corrects each pixel's value. The nonuniformity will be further discussed in chapter 4.

The most common fixed pattern noise sources include:

Fabrication errors Inaccuracies in the fabrication process give rise to variations in the geometry and substrate doping quality of the detector elements.

Cooling system Small deviations in the regulated temperature are hard to avoid but may have a large impact on the detector responsivity.

Electronics For detector arrays, variations in the read-out electronics are a common source of fixed pattern noise.

Optics The sensor optics may decrease the signal intensity at the edges of the images and create different kinds of circular image artefacts.

Response curve over the full dynamic range

A detector's response curve is not linear over the full dynamic range; instead it tends to have an S-shaped form, figure 2.6. The minimum level depends on the


[Figure: nine panels of deviation curves plotted against the mean DN (range 4000–8000), for pixels [10,10], [100,10], [200,10], [10,100], [100,100], [200,100], [10,200], [100,200] and [200,200].]

Figure 2.5: Response curves showing the pixel deviation (pixel DN − mean DN) for nine randomly selected pixels in the Multimir camera. For perfectly uniform detectors a constant difference equal to zero would have been obtained.

background noise and the surrounding temperature. At a high incident radiance the camera becomes saturated, which decreases the responsivity. A response curve therefore has a theoretical function like the one given in figure 2.6.

Figure 2.6: A theoretical S-shaped response curve for an infrared sensor

Drifting

When a camera is calibrated, there is still some incoming noise. The responsivity of each pixel is not constant but changes through time. This is due to several factors, but one main reason is the fact that the infrared cameras operate at a


temperature of 80 K, and are therefore very sensitive to temperature changes. The median value for any given radiation might be constant, but the unique responsivity for each pixel in the sensor is changing. Figure 2.7 shows six different corrections for one pixel, over a time period of seven hours.

[Figure: six curves of the DN difference to the mean value of the FPA, plotted against DN (range 4500–7500), for pixel [200,100].]

Figure 2.7: The change of responsivity for one pixel at six timepoints during 7 hours, using the Multimir camera. The curves are created using a second degree polynomial approximation.


Chapter 3

Equipment used in thesis

This chapter gives an overview of the equipment that is used in this thesis work. To calibrate the infrared cameras, uniform radiance sources have been used.

3.1 Cameras

This thesis is mainly based on two infrared cameras at FOI, Linköping, which are denoted the Multimir and the Emerald camera.

3.1.1 Multimir

Multimir (MULti-spectral Midwave IR) is a multi-spectral infrared sensor. Multimir is based on a spinning filter wheel, figure 3.1, with four optical band pass filters. The transmission curves for the four filters are shown in figure 3.1b. The rotation frequency of the wheel is 25 Hz, which gives a full frame rate of 4 · 25 = 100 Hz. The camera can be operated with the filter wheel in a non-rotating mode, where only one of the spectral bands is used.

The cut-on and cut-off wavelengths of the four optical filters are shown in table 3.1.

Band 1   1.55 − 1.75 µm
Band 2   2.05 − 2.45 µm
Band 3   3.45 − 4.15 µm
Band 4   4.55 − 5.2 µm

Table 3.1: Transmission bands for the optical filters in Multimir

The transmission curves for the four filters are provided by the manufacturer, and are shown in figure 3.1b.

The material of the detector is based on MCT, see section 2.2.1, and has an operating temperature below 85 K.


(a) Multimir camera. Note the filter wheel.

[Plot: "Responsivity of MMIR", responsivity (0–1) versus wavelength (1.5–6 µm), with bands 1–4 marked.]

(b) Transmission curves of the four filters in the Multimir camera.

(c) Image from the Multimir camera.

[Panel layout: Band 1, Band 2, Band 3 and Band 4 arranged in a 2 × 2 grid.]

(d) In the rotating mode the images are presented in the following way.

Figure 3.1: The Multimir camera

3.1.2 Emerald

Emerald is a multi-band sensor. Multi-band denotes a sensor that registers in several spectral bands, but not in real time.

Emerald is equipped with a filter wheel holding four different filters; at present only three of the positions are used. The transmission curves for the three filters are shown in figure 3.2b.

The detector is based on InSb and has an operating temperature below 80K.

Band 1   − 4 µm
Band 2   3.5 − 5 µm
Band 3   4.6 − 5.5 µm
Band 4   Not in use

Table 3.2: Spectral wavelengths for the different bands, Emerald.


(a) Emerald camera.

[Plot: responsivity (0–1) versus wavelength (3–6 µm), with bands 1–3 marked.]

(b) Transmission curves for the three filters of the Emerald camera.

Figure 3.2: The Emerald camera.

3.1.3 Information from files

When saving the registered scenes, the data is saved to different types of files depending on the camera. The file format from each of the two cameras contains information that is useful for the nonuniformity correction. A comparison of the information in the two file types is shown in table 3.3.

3.2 Radiation sources

When using the reference based correction, radiation sources with uniform radiation are needed as references. At FOI, two types of radiation references are used. For short wavelength radiation, < 3 µm, a small spotlight is used, see figure 3.3a. This radiation is scattered through three pieces of opal glass to create a homogeneous radiance; using opal glass a near Lambertian source can be achieved. For thermal infrared bands, > 3 µm, a Peltier radiator with a black coating is used, see figure 3.3b. The black coating creates an emissivity, ε, close to one. The temperature range is from −10 °C to 60 °C.

When calibrating, the voltage (= radiation) over the spotlight is kept constant; instead the integration time of the detector is changed to create unique radiation values. The temperature of the Peltier is measured by a handheld IR-thermometer, which gives the temperature with an accuracy of one decimal. The emitted radiance from the radiator at different temperatures is calculated by the Planck distribution law, eq 2.1.
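This radiance calculation can be sketched directly from Planck's law. The band limits below correspond to Multimir band 3, while the 30 °C Peltier setting and the unit emissivity are assumed example values:

```python
import numpy as np

# Physical constants (SI units).
H = 6.62607015e-34    # Planck constant [J s]
C = 2.99792458e8      # speed of light [m/s]
KB = 1.380649e-23     # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance [W / (m^2 sr m)] from Planck's law."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / np.expm1(H * C / (wavelength_m * KB * temp_k))

# Radiance in Multimir band 3 (3.45-4.15 um) for a Peltier surface at 30 C,
# integrated over the band with a simple trapezoidal sum (emissivity ~ 1).
wl = np.linspace(3.45e-6, 4.15e-6, 200)
r = planck_radiance(wl, 273.15 + 30.0)
band_radiance = np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(wl))
print(band_radiance)  # [W / (m^2 sr)]
```

Multiplying the spectral radiance by the measured filter transmission before integrating would give the in-band radiance actually seen by the sensor.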

Pre-calibration

When using the cameras a pre-calibration is usually performed, which is a one or two point calibration using the Peltier source. The two point calibration is a


                       Multimir                      Emerald
Detector               MCT (HgCdTe)                  InSb
Optics                 100 mm                        50 mm
Dynamic range          2^14                          2^14
Array size             288 × 384                     512 × 640
Frame size             576 × 768 (rotating wheel)
Frame size             288 × 384 (non-rotating)      512 × 640
Subframes              16, 32, . . . 288 rows        16, 32, . . . rows
                       32, 64, . . . 384 columns     16, 32, . . . columns
Integration time       50 µs − 2.6 ms in 256 steps   3 µs − 10 ms by steps of 1 µs
Frames per second      100 Hz rotating wheel         100 Hz full frame
Subframes              < 1000 Hz                     < 1000 Hz

Information from file header
Number of frames       X                             X
Size of image          X                             X
Used filter            X
Time of registration   X                             X
Integration time       X
Frames per second      X
Used lens              X
Temperature of camera  X

Table 3.3: Information contained in the scene files from the infrared cameras

(a) Spotlight and three pieces of opal glass (b) Peltier radiator

Figure 3.3: The correction utilities used at FOI when calibrating the IR-cameras


linear gain and offset calibration. Since the pre-calibration is a linear function it does not corrupt any of the information in the image.

Some of the reasons for this procedure are:

Real-time correction. The nonuniform responsivity of the sensors creates a highly disturbed image. The pre-calibration provides a real-time correction that enhances the visual output.

Scene in focus. Due to the low contrast and nonuniform responsivity, it might be hard to register objects in the image. The enhanced visual output from the real-time correction simplifies the work of putting the camera in focus.

Verifying the contrast. When registering a scene, it is important that the DNs are within a valid region. In a pre-calibration the two known radiation sources used correspond at least to the warmest and coldest objects in the scene. This verifies whether the DNs from the scene will be valid or not.

Function control of camera. A system check is performed at the pre-calibration, to verify that the mean DNs are a function of the incoming radiation.

When referring to raw data or uncorrected images, pre-calibration is assumed to have been performed.


Chapter 4

Nonuniform correction of IR-images

Figure 4.1 shows two uncorrected images (raw data), registered by the Multimir and the Emerald camera.

The images look noisy, especially the Multimir image, which is due to the nonuniformity of the FPA. All detector elements in an array have a unique responsivity, which will appear as an extra noise source. There are two kinds of pixel noise: random and non-random. If it is non-random then the noise is created by a fixed source and may produce image artefacts such as striping in the image, see figure 4.1b. Pixels that are very different from the other pixels appear as "salt and pepper". The salt and pepper may be due both to dead pixels, see chapter 5, and to pixels with very different responsivity. It should be noted that two images, registered by the same camera, may appear quite different in a visual evaluation concerning the pixel noise, depending on the following:

• the contrast in the registered scenes may be different

• the visual contrast operations may not be the same

• the physical sizes of the images, on a monitor or on paper, may not be the same

Small differences in the radiance levels in a scene will result in a low contrast. Increasing the visual contrast will enhance the visibility of the nonuniformity and cause pixels to appear as salt and pepper.

The goal of a nonuniform correction (NUC) is that all pixels should give the same digital signal, DN, for a certain incident radiance level. A correction function is therefore needed for each pixel and each spectral band, since the incident radiance level (due to the transmission of the filters and optics) and the detector sensitivity are wavelength dependent.

Basically, the NUC methods are divided into the scene based and the reference based correction methods.


(a) Uncorrected image from the Multimir camera

(b) Uncorrected image from the Emerald camera band 3

Figure 4.1: Uncorrected infrared images; upper: Multimir, band 1-4. Note the salt and pepper in the image; lower: Emerald, band 3. Note the striping in the image, the pixel noise is not randomly distributed.


4.1 Scene based correction

As the name implies, in scene based correction the pixels are calibrated by the scene. A correction function is created based on the DNs given in earlier frames. Scene based correction does not use any homogeneous reference radiance sources, making it time and cost efficient, which explains why much of the research performed on NUC methods is in this field. However, to get a good quality of the correction an image motion is needed in the scene. Another drawback is that the correction functions created for the pixels are relative to each other; thus, the pixels are not calibrated to any radiometric value.

More advanced scene based correction methods also involve methods for motion estimation. Since the main focus of this thesis work is on the reference based calibration, only a brief overview of scene based calibration is given. It is based on the references [3], [11] and [14].

4.1.1 Statistical methods

By using statistics of previous images in a sequence, parameters to perform NUC can be created. Depending on the type of statistics that is used, several different methods are available for creating parameters for the NUC.

Temporal highpass

The basis for the method is that the DNs vary between subsequent images in an image sequence, while the fixed pattern noise is approximately constant. Thus the information given in a scene is in the high frequency information, while the fixed pattern noise is in the low frequency information. By using a high pass filter, usually a recursive IIR filter to save memory, the temporal average value will be subtracted from the DN, creating an offset correction. Without motion the high pass filter tries to high pass the DN from the same pixel, which is constant plus some temporal noise. This creates a correction function that only passes the temporal noise.
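A minimal sketch of this idea follows; the update rate alpha, the toy scene and the noise levels are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def temporal_highpass_nuc(frames, alpha=0.05):
    """Offset-only scene based NUC sketch: a recursive IIR filter tracks the
    per-pixel temporal average (dominated by the fixed pattern) and subtracts
    it, so only the high-frequency scene content passes."""
    lowpass = frames[0].astype(float).copy()
    out = np.empty_like(frames, dtype=float)
    for t, frame in enumerate(frames):
        lowpass = (1.0 - alpha) * lowpass + alpha * frame  # recursive average
        out[t] = frame - lowpass                           # offset correction
    return out

# Toy sequence: a moving scene plus a static per-pixel offset pattern.
rng = np.random.default_rng(1)
fixed_pattern = rng.normal(0.0, 50.0, size=(32, 32))
scene = np.array([np.roll(np.outer(np.hanning(32), np.hanning(32)) * 500.0,
                          t, axis=1) for t in range(100)])
corrected = temporal_highpass_nuc(scene + fixed_pattern)

# After convergence the static offsets are largely suppressed: the temporal
# mean of the corrected frames is much flatter than the fixed pattern.
residual = corrected[-32:].mean(axis=0).std()
```

Setting the scene motion to zero in this toy reproduces the failure mode described above: the filter then subtracts the scene itself and only temporal noise remains.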

Constant statistics

The constant statistics method estimates both the offset and gain for each pixel. The method assumes that the incoming temporal means and variances of the radiation are the same for all pixels, which requires that all possible scene radiance levels will be observed by all pixels in an image sequence. To achieve this, an image motion must exist.

Let a linear model constitute the correction function

x = g · y + o (4.1)

where y is the incoming DN, g is the gain and o is the offset value of the correction function, and x is the corrected DN. The temporal mean value, mx, and standard


deviation, sx, are

mx = E[x] = E[g · y + o] = E[g · y] + o = g · E[y] + o = g · my + o    (4.2)

sx = √(E[(x − mx)²]) = √(E[g² · (y − my)²]) = g · sy    (4.3)

In the constant statistics method it is assumed that the temporal statistics of x is constant for all pixels. Then the expressions can be rewritten as

my = o    (4.4)
sy = g    (4.5)

A gain and offset correction has thus been performed.
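The method can be sketched in a few lines. The formulation below normalizes each pixel by its own temporal mean and standard deviation, one common way of realizing the constant statistics assumption; the toy data sizes and nonuniformity levels are assumptions:

```python
import numpy as np

def constant_statistics_nuc(frames):
    """Constant statistics NUC sketch: assuming all pixels observe the same
    scene statistics over time, each pixel's temporal mean acts as its offset
    estimate and its temporal standard deviation as its gain estimate;
    normalizing per pixel then removes the nonuniformity."""
    m_y = frames.mean(axis=0)            # per-pixel temporal mean
    s_y = frames.std(axis=0)             # per-pixel temporal std
    return (frames - m_y) / s_y

# Toy FPA: every pixel sees the full radiance range over the sequence,
# distorted by per-pixel gain and offset nonuniformity.
rng = np.random.default_rng(2)
radiance = rng.uniform(1000.0, 9000.0, size=(500, 16, 16))
gain = rng.normal(1.0, 0.1, size=(16, 16))
offset = rng.normal(0.0, 200.0, size=(16, 16))
raw = gain * radiance + offset

corrected = constant_statistics_nuc(raw)
# The per-pixel means of the raw frames differ strongly (fixed pattern);
# after correction they are identical (zero), i.e. the FPN is removed.
fpn_before = raw.mean(axis=0).std()
fpn_after = corrected.mean(axis=0).std()
```

Note that, exactly as stated in the text, the corrected values are only relative: the output is in normalized units, not calibrated to any radiometric value.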

Kalman filtering

A key limitation of all the scene-based NUC techniques published to date is that they do not exploit any temporal statistics of the drift in the nonuniformity. As a result, each time that a drift occurs, a full-scale NUC is performed, a process that may be greatly simplified and improved if statistical knowledge of the nature of the drift is exploited, especially in cases where the drift is small. In the articles [3] and [11] Kalman filtering and an inverse covariance method are used to simplify the process.

4.1.2 Registration-based

In registration based methods motion estimation is used in an image sequence, where each image in the sequence can be spatially related to previous images. The idea behind the registration based method is that the DN values of all pixels that represent the same position in the scene, through the image sequence, should give the same value.

The main task in this method is the motion estimation, which has to be able to estimate sub-pixel motion to create the correct coefficients for the NUC.

4.2 Reference based correction

Reference based correction is based on registrations of homogeneous radiance sources at one (or more) levels. The method allows the radiance and the temperature of the objects in the scene to be calculated, if the radiance sources are


well defined. Contrary to the scene based correction, the reference based correction does not need any image motion, motion estimation or any statistics for a good result. The method is also more efficient concerning the need for computational power. A drawback is the time duration between the registrations of the radiance sources and the scene images. During the time elapsed between the registrations the responsivity may change due to drift, and the correction function may therefore be valid only for a limited time period. Because there is no way to update the coefficients, as in the scene based method, the time difference between the registrations of the radiance sources and the scene should be as small as possible. In practice, especially in field trials, there is a limit to how frequently calibrations can be performed. The need for registrations of the radiance sources also depends on factors like the stability of the camera, the temperature stability of the air surrounding the camera and the quality of the calibration files. This will be further discussed in section 6.3.

4.2.1 Correction function

The pixels' responsivities need to be described by a correction function, which is used in the NUC. The correction function may simply be a constant (offset correction) or a more complex function; compare the two equations below.

x = y + o

x = g · [φ0 − (1/kφ) · ln( kx/(y − x0) − 1 )] − o

To find out which correction function should be used, the valid contrast has to be known. If the sensor's whole dynamic range is to be used, an S-shape correction is optimal. An S-shape correction function is however complex and there is no simple equation that can be used, making it computationally demanding. In addition five (or more) reference points are needed to determine an S-shape. If the contrast is restricted to a linear range the equation is simple; it may be as simple as an offset correction of each pixel. The demand for computational power is then reduced and the correction function is easier to implement in hardware.

Polynomial approximation

Today the polynomial approximation is the most common correction function. It approximates the pixels' responsivities well, and is easy to implement and computationally efficient, enabling the NUC to be performed in real-time.

In the section explaining the scene based correction, the only correction function used was the polynomial approximation. The temporal highpass used only an offset correction, which is a polynomial correction of degree zero, while most of the other correction methods use a first degree polynomial approximation. The degree of the reference based correction depends on the number of references, but usually a second degree approximation is quite sufficient. The polynomial approximation was first presented in [13], and later on improved by [10].


Assume an output value yij from the FPA, where the subscript j = 1 · · · m refers to the m individual detector elements on the FPA and the subscript i = 1 · · · n refers to the n individual reference sources. Ti is the temperature of reference source i. The mean signal output value, 〈yi〉, of the FPA is defined by

〈yi〉 = (1/m) ∑_{j=1}^{m} yij    (4.6)

Since m is a large number, the sensor uses > 1000 pixels for each frame, a good approximation of the mean response characteristics is obtained by averaging. The overall detector response characteristics R(Ti) is sampled at the n irradiation temperatures,

R(Ti) = 〈yi(Ti)〉 (4.7)

The nonuniformity and the temporal noise are described by the signal output deviation, the amplitude deviation ∆yij, of individual data points

∆yij = yij − 〈yi〉 (4.8)

By performing a standard least square curve fitting, with a polynomial defined by the order of correction, to the amplitude deviation of individual pixels, the amplitude deviation values ∆y^lsq_ij calculated by the curve fitting are

∆y^lsq_ij = c0j + c1j · 〈yi〉 + c2j · 〈yi〉² + · · ·    (4.9)

For an offset correction only one parameter, a constant c0j, is determined for each pixel; for a linear correction two parameters, a constant c0j and a gain c1j, are determined. The degree of correction may be arbitrarily chosen.

This type of calibration is sensitive to the temporal noise, i.e. the variations of the DN for constant radiation. To reduce this sensitivity, the calibration points are averaged over a number of frames from the same radiation source.

The least square curve fitting procedure is the adequate method to approximate the individual pixel characteristics because it minimizes the error and takes into account the errors due to the temporal noise. For the correction of the nonuniformity, the data values determined by the curve fit to the amplitude deviation are subtracted

∆y^c_ij = ∆yij − ∆y^lsq_ij    (4.10)

The amplitude values x^c_ij after correction are obtained by adding the unified response function again

x^c_ij = 〈yi〉 + ∆y^c_ij    (4.11)

The corrected pixel amplitude is determined by eliminating the linearized irradiation parameter 〈y〉 from equations 4.8 to 4.11 to obtain the relations for the


offset, the linear and the quadratic approximation.

x^c_j = yj − c0j    offset correction (4.12)

x^c_j = (yj − c0j) / (1 + c1j)    linear correction (4.13)

x^c_j = −(1 + c1j)/(2c2j) + √( (1 + c1j)²/(4c2j²) + (yj − c0j)/c2j )    quadratic correction (4.14)

For corrections higher than second order polynomial correction the mathematical relations are more complex.

This procedure is, due to its complexity, a time consuming function. R. Wang improved this procedure [10], which resulted in eliminating the square root in calculating the corrected DN. While Schultz tried to fit the data with the help of coefficients and the mean value, 〈y〉, Wang fits the data based on the actual DN, yj.

Wang's approach starts with first calculating the mean DN, 〈yi〉. The ∆yij is calculated as

∆yij = yij − 〈yi〉 (4.15)

By creating a least square polynomial approximation, ∆yij can be calculated as

∆yij = yij − 〈yi〉 ≈ ∑_{k=0}^{l} ckj · yij^k,   j = 1, 2, . . . , m    (4.16)

where l is the degree of the polynomial approximation, and it has to be < n. Correcting the DN with the approach suggested by Wang is then performed by

x^c_j = yj − ∑_{k=0}^{l} ckj · yj^k,   j = 1, 2, . . . , m    (4.17)

The obvious advantages of using the approach suggested by Wang instead of the Schultz approach are: 1. It has no division operations or extractions of roots. 2. Using the approach by Schultz, polynomial orders higher than third order cannot give an analytical correction function. 3. For polynomial orders higher than first order, the Schultz approach gives multiple roots, where the correct correction has to be selected.
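The Wang form of the correction can be sketched as follows; this is an illustrative implementation, not the thesis' Matlab code, and the simulated per-pixel responses and reference levels are assumptions:

```python
import numpy as np

def fit_wang_coeffs(ref_frames, degree=2):
    """Wang-style reference based NUC (a sketch): for each pixel j, fit the
    deviation y_ij - <y_i> as a polynomial in the pixel's own DN y_ij by
    least squares over the n reference registrations (eq 4.16)."""
    n, h, w = ref_frames.shape
    y = ref_frames.reshape(n, h * w)                 # y_ij
    mean_dn = y.mean(axis=1)                         # <y_i>
    dev = y - mean_dn[:, None]                       # delta y_ij
    coeffs = np.empty((degree + 1, h * w))
    for j in range(h * w):
        coeffs[:, j] = np.polynomial.polynomial.polyfit(y[:, j], dev[:, j], degree)
    return coeffs.reshape(degree + 1, h, w)

def apply_wang_nuc(frame, coeffs):
    """Corrected DN (eq 4.17 form): x_j = y_j - sum_k c_kj * y_j**k.
    Note: no divisions and no root extractions are needed."""
    return frame - sum(c * frame**k for k, c in enumerate(coeffs))

# Toy FPA with per-pixel quadratic responses, calibrated on five uniform
# reference levels (all values are illustrative assumptions).
rng = np.random.default_rng(3)
g = rng.normal(1.0, 0.05, (8, 8))
o = rng.normal(0.0, 100.0, (8, 8))
q = rng.normal(0.0, 1e-6, (8, 8))
levels = np.linspace(2000.0, 10000.0, 5)
refs = np.array([g * L + o + q * L**2 for L in levels])

coeffs = fit_wang_coeffs(refs, degree=2)
raw = g * 6000.0 + o + q * 6000.0**2          # uniform scene at one level
corrected = apply_wang_nuc(raw, coeffs)
# The spatial nonuniformity is strongly reduced after the correction.
```

Since eq 4.17 is a plain polynomial evaluation and subtraction, the per-frame correction is also straightforward to implement in hardware.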

The correction algorithm that is implemented in Matlab and used in the main correction uses the polynomial approximation. A result of this correction function can be seen in figure 4.2 and figure 4.3.

Analytical approximation

While the polynomial approximation gives a good result when the DNs are within the linear region of the S-shaped response curve (see section 2.2.4), it does not give an acceptable correction for the response curve when using the full dynamic


(a) Uncorrected image, Multimir

(b) Same image through a second order polynomial nonuniformity correction function, Multimir

Figure 4.2: A comparison between the raw image from the Multimir camera and the polynomial nonuniformity corrected image. Note the differences in band 3.


(a) Uncorrected image, Emerald

(b) Same image through a second order polynomial nonuniformity correction function, Emerald

Figure 4.3: A comparison between the raw image from the Emerald camera and the polynomial nonuniformity corrected image. Note the reduction of stripes and optical artefacts in the image.


range of the sensor, which for both the Emerald and the Multimir is DN ∈ [0, 2^14 − 1]. Finding an approximation for the correction that gives the proper S-shape may be hard work. In the article "A feasible approach for nonuniformity correction in IRFPA with nonlinear response" [15], an attempt is made to explain an analytical approximation of the curve.

Their assumptions are

1. The response function is a monotonically increasing function of the incident photo flux. That means that there always exists an inverse function for the response function.

2. The typical form of the sensor's response curve is S-shaped.

The analytical S-shaped curve that they present is

y = h(x) = x0 + kx / (1 + e^(−kφ · (x − φ0)))    (4.18)

where

h(x) is the response function with the incoming photo flux as the variable.
x0 and φ0 are the shift coefficients for the output signal and the incident flux respectively.
kx and kφ are the scale coefficients.

To correct a DN, the inverse of h(x) needs to be taken. This correction function for pixel j is

x^c_j = Gj(yj) ≈ g · [φ0,j − (1/kφ,j) · ln( kx,j/(yj − x0,j) − 1 )] − o    (4.19)

where

g and o are assigned the values of the spatial averages of the gain and offset coefficients over the sensor, respectively.
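The inverse relation can be verified numerically. The coefficient values below are arbitrary assumptions chosen only to illustrate the round trip between eq 4.18 and eq 4.19 (with g = 1 and o = 0 for simplicity):

```python
import math

# Assumed, illustrative coefficients for the S-curve of eq 4.18.
x0, k_x = 500.0, 15000.0      # output shift and scale
phi0, k_phi = 100.0, 0.05     # flux shift and scale

def response(phi):
    """S-shaped response h: DN as a function of the incident photo flux."""
    return x0 + k_x / (1.0 + math.exp(-k_phi * (phi - phi0)))

def inverse_response(dn):
    """Inverse of h (the bracketed part of eq 4.19 with g = 1, o = 0):
    recovers the photo flux from the DN."""
    return phi0 - (1.0 / k_phi) * math.log(k_x / (dn - x0) - 1.0)

# Round trip: correcting the DN recovers the original flux.
recovered = inverse_response(response(130.0))
print(recovered)  # ~ 130.0
```

The logarithm in the inverse is only defined for DNs strictly between x0 and x0 + kx, which mirrors the text's point that this correction is complex and demanding compared with a polynomial.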

4.3 Quality measurement

As has been mentioned in this chapter, the NUC is never perfect; the errors are due to the temporal noise and to nonlinearities which are not accurately approximated by the regression analysis. The minimum resolved signal is defined by the temporal noise; the fixed pattern noise further degrades the signal quality.

One way to estimate the quality of the correction is to relate the magnitude of the residual fluctuations in the array after correction to the temporal noise pattern [13]. The goodness of the curve fitting for an individual pixel is described by the χ² value of the standard deviation normalized to the mean temporal noise TN averaged over the FPA for the various radiation levels,

χ²_j = ∑_{i=1}^{n} (x^c_ij − 〈yi〉)² / TN    (4.20)


The χ²-distribution of a perfectly uniform FPA having only temporal (Gaussian) noise is well known in statistics.

Deviations of the χ²-data histogram from the ideal χ²-distribution are due to the residual spatial nonuniformities in the array. By subtracting the ideal χ²-distribution from the measured data for the array after correction, we obtain a single value to estimate the goodness of the correction

c = √( ∑_{j=1}^{m} χ²_j / (m − 1) ) − 1 = √( ∑_{j=1}^{m} ∑_{i=1}^{n} (x^c_ij − 〈yi〉)² / TN / (m − 1) ) − 1    (4.21)

The normalization is by m − 1 because one degree of freedom is consumed by the averaging over the whole FPA to obtain the unified photo response characteristics. For a perfect nonuniformity correction the goodness value, c, is equal to zero, i.e. there is only temporal noise in the array.

The magnitude of the temporal noise strongly affects the correctability value. A small temporal noise level increases the c-value, as the threshold for the fixed pattern noise is also small.
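These expressions can be turned into a small numerical check. In the sketch below TN is taken as the mean temporal noise variance (an interpretation assumed here), and a synthetic residual fixed pattern is shown to raise the goodness value c:

```python
import numpy as np

def goodness(corrected, mean_dn, tn_var):
    """Correctability value c, a sketch of eqs 4.20-4.21: chi-squared of the
    corrected values x^c_ij about the FPA means <y_i>, normalized by the mean
    temporal noise variance `tn_var` (interpretation assumed), summed over
    the n radiation levels and m pixels."""
    chi2_j = ((corrected - mean_dn[:, None]) ** 2 / tn_var).sum(axis=0)  # eq 4.20
    m = corrected.shape[1]
    return np.sqrt(chi2_j.sum() / (m - 1)) - 1.0                        # eq 4.21

# Synthetic corrected data: n levels, m pixels, Gaussian temporal noise.
rng = np.random.default_rng(4)
n, m, sigma = 5, 1000, 10.0
mean_dn = np.linspace(2000.0, 10000.0, n)         # FPA means at the n levels
noise_only = mean_dn[:, None] + rng.normal(0.0, sigma, (n, m))
with_fpn = noise_only + rng.normal(0.0, 25.0, m)  # residual fixed pattern

c_clean = goodness(noise_only, mean_dn, sigma**2)
c_resid = goodness(with_fpn, mean_dn, sigma**2)
# Residual spatial nonuniformity raises c toward larger values.
```

Comparing c_clean and c_resid also illustrates the final remark above: with a smaller sigma, the same residual fixed pattern yields a much larger c.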


Chapter 5

Identifying and replacing dead pixels

This chapter will study pixels that can not be corrected by a nonuniformity correction. These pixels are generally called dead pixels in this thesis. They give an erroneous response to the incident radiation and have to be identified and mapped. In many applications their digital numbers (DN) also have to be replaced, for example with the value of a nearest neighbour. The number of pixels defined as dead is usually less than 1% of the total number of pixels in the focal plane array (FPA). There are two kinds of dead pixels, described below.

1. Pixels with no responsivity belong to the first kind. The DNs are constant and independent of the incident radiant power on the detector. Most often the pixel value is one of the two extremes: 0 or the maximum value the sensor can give, which is 2^14 − 1 for both the Multimir and Emerald. Pixels of this kind are sometimes called truly dead pixels and are easy to identify.

2. Pixels with responsivities that are harder to describe belong to the second kind. A correction function, see chapter 4, does not work for them and the pixel noise will therefore remain after a nonuniform correction (NUC). Pixels of this kind are sometimes called weak pixels and are harder to identify.

The focus of this chapter is on dead pixels of the second kind. Some of them do respond to the incident radiation, but the response curve may be very weak or may differ between points in time. For some of the pixels the DN and the incident radiance level are only loosely correlated; one example is pixels whose DN vary even though the incident radiance level is constant. The response of the second kind of pixels is hard to describe by a correction function, and the pixel noise will therefore remain after a NUC, or may even be increased. The pixels may not be definitely dead but may be usable after some time, and therefore it is not possible to make a constant dead pixel map of these pixels. To some extent this depends on the scene characteristics: pixels with a weak response may be usable if the contrast of the scene is very high.


The number of pixels defined as dead also depends on the application. If the goal is only to present an image that looks visually perfect, the number may be raised until all salt-and-pepper noise is gone. In other applications the goal may be to keep the total number of dead pixels as low as possible, since dead pixels mean lost information. The definition and identification of dead pixels therefore has to be performed for each application.

The result of a NUC depends on the drift of the camera and the time elapsed between the three registrations of radiance sources and the scene registration. Defining a dead pixel is therefore more difficult in the reference based calibration, since the limit between a dead pixel and a pixel with a high drift might be difficult to draw. There are a number of methods for identifying the weak pixels; four of them are used in this thesis and are described in sections 5.1-5.4. However, the final decision whether a pixel should be defined as usable or not has to be made by the user.

5.1 Temporal noise

A pixel's temporal noise is the variability of its DN in a series of frames where the incident radiance on the detector is kept constant. The standard deviation is used as a measure of the temporal noise.

TN²_j = Σ_{i=1}^{m} (y_{ji} − ⟨y_j⟩)² / (m − 1),   i = 1…m, j = 1…n     (5.1)

where

TN_j = the temporal noise for pixel j in the sensor,
m = the number of frames,
y_{ji} = the DN for pixel j in frame number i,
⟨y_j⟩ = the mean value for pixel j in the file.

Since the NUC is based on reference based calibration, where surfaces at three constant radiance levels are routinely registered, suitable data is already available to calculate the temporal noise. Since there are three calibration files, three temporal noise values per pixel may be calculated. Figures 5.1a and 5.1b show the variability of the DN for nine randomly selected pixels in the Multimir and Emerald cameras, respectively. The differences in temporal noise, or variability, between the pixels are quite small. The temporal noise for dead pixels, however, is significantly higher than for the other pixels. The temporal noise can therefore be used to define and identify dead pixels.

The following criterion has proven to be a good starting point for most image sequences:

Dead pixel = TN_j ≥ 3 · ⟨TN⟩

where ⟨TN⟩ is the mean temporal noise of all pixels in the FPA. Results with the Multimir and Emerald are shown in figures 5.3 and 5.4.
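The per-pixel temporal noise of equation 5.1 and the criterion above can be sketched as follows in Python (the thesis implementation is in Matlab; the function name and the stacked-frame array layout are assumptions):

```python
import numpy as np

def temporal_noise_mask(frames, factor=3.0):
    """Per-pixel temporal noise (equation 5.1) and a dead-pixel mask (a sketch).

    frames: array of shape (m, rows, cols) -- m frames of a constant-radiance
            reference surface, as in a calibration file.
    A pixel is flagged dead when TN_j >= factor * mean(TN), the starting
    criterion suggested in the text.
    """
    # ddof=1 gives the (m - 1) denominator of equation 5.1.
    tn = frames.std(axis=0, ddof=1)
    dead = tn >= factor * tn.mean()
    return tn, dead
```

Since three calibration files are recorded, the mask could be computed per file and the results combined; this sketch shows a single file.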


[Figure 5.1a: DN plotted against frame number (0–100) for nine pixels in one calibration file, Multimir band 3, at positions (10, 10), (144, 10), (278, 10), (10, 192), (144, 192), (278, 192), (10, 374), (144, 374) and (278, 374).]

[Figure 5.1b: DN plotted against frame number (0–100) for nine pixels in one calibration file, Emerald, at positions (10, 10), (256, 10), (502, 10), (10, 320), (256, 320), (502, 320), (10, 630), (256, 630) and (502, 630).]

Figure 5.1: The variability of the pixel values of nine randomly selected pixels in the Multimir and Emerald cameras. Also note the differences in mean value between the pixels.


(a) Multimir

(b) Band three, Emerald

Figure 5.2: Histogram of the temporal noise for one calibration file. The y-axis has a logarithmic scale.

It should be noted that dead pixels will be identified only if their temporal noise is observed in the calibration files. Because there is both low frequency and high frequency temporal noise, the recorded sequence needs to be long enough; in this work the length of the sequences has been 100 images.


(a) Multimir, without masking the temporal noise

(b) The mask of dead pixels in the Multimir image

(c) Multimir, with masking the temporal noise

Figure 5.3: The corrected image, with and without applying the 5 · mean temporal noise threshold.

5.2 Extreme values

Extreme values are identified as DNs outside a valid region. Since the response curve is a non-linear S-shaped curve, which is hard to approximate using three calibration points, a working region should be set where the approximation is well defined. This region lies somewhere between the maximum and minimum DN that the camera can produce, where the responsivity is fairly linear. There are also the dead pixels that give a constant value. These pixels that


(a) Emerald, without masking the temporal noise

(b) The mask of dead pixels in the Emerald image

(c) Emerald, with masking the temporal noise

Figure 5.4: The corrected image, with and without applying the 5 · mean temporal noise threshold.

have no responsivity usually have a DN at one of the extreme points, either zero or the maximum value that the sensor gives; for both the Multimir and Emerald, DN_max = 2^14 − 1.

Looking at the histograms of the DN for both cameras, in figure 5.5, one can see that most of the DN are well within the minimum and maximum values. The pixels that deviate tend to lie at the minimum and maximum values of the dynamic range.


(a) Multimir, histogram of the DN

(b) Band three, Emerald, histogram of the DN

Figure 5.5: Histogram of the DN viewing a scene. The y-axis has a logarithmic scale.

With a dead pixel mask created from the extreme values, these pixels can be excluded. Using a regular scene, where there are no specifically heated or cooled objects, the valid range can be set to DN ∈ [500, 16000]. Results from both cameras are shown in figures 5.6 and 5.7.
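The extreme-value mask amounts to a simple range test; a Python sketch (the function name and defaults follow the text's suggested range for a regular scene, and are not the thesis' own code):

```python
import numpy as np

def extreme_value_mask(frame, lo=500, hi=16000):
    """Dead-pixel mask from DNs outside a valid working region (a sketch).

    The [500, 16000] default is the range suggested in the text for a regular
    scene; the 14-bit sensors saturate at 2**14 - 1 = 16383.
    """
    return (frame < lo) | (frame > hi)
```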

5.3 Value on polynomial coefficients

The polynomial coefficients obtained in the NUC using a polynomial approximation, section 4.2.1, may be used to identify the dead pixels. The polynomial coefficients describe the response curve of a single pixel, i.e. the single pixel values plotted against the median values of the pixels in the FPA. The polynomial coefficients therefore contain information about the single pixel response and can be used to identify the dead pixels. Consequently the dead pixels should have


(a) Multimir, without masking the extreme values

(b) Multimir, the extreme value mask

(c) Multimir, with masking the extreme values

Figure 5.6: The corrected image, with and without replacing the extreme values.

coefficients that are different from the other pixels' coefficients. Figure 5.8 shows histograms of the coefficients C0, C1 and C2 for images registered with the Multimir and Emerald cameras.

In the histograms, the main part of the pixels has coefficient values collected in a main peak at zero. Dead pixels may therefore be identified by their deviation from the mean pixel located at zero. The deviation may be expressed in units of standard deviation, and the pixels that are outside


(a) Band three, Emerald, without masking the extreme values

(b) Band three, Emerald, the extreme value mask. Note that no pixels are identified as dead.

(c) Band three, Emerald, with masking the extreme values

Figure 5.7: The corrected image, with and without replacing the extreme values.

this interval are defined as dead pixels.

Dead pixel = |Coefficient_{jl}| ≥ N · std(Coefficient_l)     (5.2)


(a) Multimir, histogram of the three polynomial coefficients

(b) Band three, Emerald, histogram of the three polynomial coefficients

Figure 5.8: Histogram of the coefficients. The y-axis has a logarithmic scale

where

N = an arbitrary value,
Coefficient_{jl} = the polynomial coefficient for pixel j and degree l,
std(Coefficient_l) = the standard deviation of the coefficients of degree l over the FPA.

Figures 5.9 and 5.10 show results with the Multimir and Emerald cameras.

5.4 Principal component analysis

Principal component analysis (PCA) is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of multiple dimensions. PCA involves


(a) Multimir, without masking the coefficients

(b) Multimir, the coefficients mask

(c) Multimir, with masking the coefficients

Figure 5.9: The corrected image on the Multimir camera, with and without applying the coefficients mask.

a mathematical procedure that transforms a number of (possibly) correlated variables into a (smaller) number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible.

Procedure to calculate the PCA

1. Collect the data.


(a) Band three, Emerald, without masking the coefficients

(b) Band three, Emerald, the coefficients mask

(c) Band three, Emerald, with masking the coefficients

Figure 5.10: The corrected image on the Emerald camera, with and without applying the coefficients mask.

2. Subtract the mean value from the data.

3. Calculate the covariance matrix.

4. Calculate the eigenvectors and eigenvalues of covariance matrix.

5. Sort the eigenvectors by the eigenvalues.


6. Project the mean-subtracted data onto the sorted eigenvectors to create the new dataset.
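The six steps above can be sketched as follows in Python. This is not the thesis' own implementation (that Matlab code is described in appendix A); the function name and the (samples, variables) layout are assumptions.

```python
import numpy as np

def pca(data, n_components=2):
    """The PCA procedure listed above, via eigendecomposition of the
    covariance matrix (a sketch).

    data: array of shape (samples, variables).
    Returns the projected data and the sorted eigenvectors (principal axes).
    """
    centred = data - data.mean(axis=0)           # step 2: subtract the mean
    cov = np.cov(centred, rowvar=False)          # step 3: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # step 4: eigenvectors and eigenvalues
    order = np.argsort(eigvals)[::-1]            # step 5: sort by eigenvalue
    axes = eigvecs[:, order[:n_components]]
    return centred @ axes, axes                  # step 6: project the data
```

For the dead-pixel application, each pixel's mean values over the calibration files would form one sample, and the deviating pixels are then found in the eigenimages.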

The calibration files have more than 30 frames, and most of them use 100 frames. This can easily create a need for large amounts of memory on the computer if the PCA is implemented directly in Matlab. Instead, to save time and memory at the price of possibly a little quality, we only let the PCA function look at the mean value for each pixel of the calibration file. Having three calibration files, we have three pixel values for each pixel. By calculating the first two eigenimages of these three images, more than 99% of the information from the three images is included. The first eigenimage gives information about the temporal noise, while the rest is the spatial noise [6], [7]. There are several ways of implementing the PCA; the implementation used in this thesis is described in appendix A.

(a) Multimir, histogram of the PCA values

(b) Band three, Emerald, histogram of the PCA values

Figure 5.11: Histogram of the PCA value. The y-axis has a logarithmic scale


As in the earlier sections of this chapter, if the dead pixels are identified by the standard deviation of the merged eigenimages, the result reduces the bad pixels quite successfully. If the dead pixel mask is set at 2 times the standard deviation, many pixels are identified as dead. Comparing the image that is only NUC'd with the image that is both NUC'd and filtered by the dead pixel mask, in figures 5.12 and 5.13, the pixels that deviate from the scene are successfully removed. This is especially clear in band three of the Multimir, where the noise in the lower right part of the image is corrected. This part has not been identified by any of the other dead pixel functions, at least not as completely.

(a) Multimir, without masking the PCA

(b) Multimir, the PCA mask

(c) Multimir, with masking the PCA

Figure 5.12: The corrected image on the Multimir camera, with and without applying the PCA mask.


(a) Band three, Emerald, without masking the PCA

(b) Band three, Emerald, the PCA mask

(c) Band three, Emerald, with masking the PCA

Figure 5.13: The corrected image on the Emerald camera, with and without applying the PCA mask.


5.5 Replacement of bad pixels

Replacement of the dead pixels improves the image quality and enhances the visual result substantially. The dead pixels may also be a nuisance in many applications, for example in detection algorithms and when objects' radiances are to be estimated.

The correct value of a dead pixel is not known, and methods for estimating a probable value have to be used. If a NUC method involving motion estimation has been used, the pixel value may be estimated from the same object in previous frames. Another method is based on replacing the dead pixel with the value of its nearest neighbours. This method has been studied and implemented in this thesis.

A function in the Matlab code creates a new DN based on the nearby non-dead pixels, using a Gaussian weight function of the distance to the current pixel [2]. The weight function is given as:

w(x, y) = { e^{−(x²+y²)/(2σ²)},   |x| ≤ (N−1)/2, |y| ≤ (N−1)/2
          { 0,                    otherwise                      (5.3)

The new DN that will replace the dead pixel is then given as:

DNnew =∑

kDNkwxmk∑kwkmk

(5.4)

where

DN_new = the newly calculated DN to replace the value from the dead pixel,
DN_k = the DN at position k,
w_k = the weight function created in equation 5.3,
m_k = the mask value for pixel k: 0 for a dead pixel, 1 otherwise.

The Matlab code uses N = 7, which gives a 7 × 7 square, to calculate the new DN.
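Equations 5.3 and 5.4 can be sketched in Python as below (the thesis' own replacement function is in Matlab/C). The function name and the value of σ are assumptions; the text specifies N = 7 but not σ.

```python
import numpy as np

def replace_dead_pixels(image, dead, n=7, sigma=1.5):
    """Replace dead pixels with a Gaussian-weighted mean of their non-dead
    neighbours (equations 5.3-5.4, a sketch; sigma is an assumed value)."""
    half = (n - 1) // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    w = np.exp(-(x**2 + y**2) / (2 * sigma**2))        # equation 5.3
    out = image.astype(float).copy()
    pad_img = np.pad(out, half, mode='edge')
    # m_k: 1 for usable pixels, 0 for dead ones; pixels outside the
    # image border are treated as dead.
    pad_ok = np.pad(~dead, half, mode='constant').astype(float)
    pad_img = pad_img * pad_ok                         # zero out dead contributions
    for r, c in zip(*np.nonzero(dead)):
        win = slice(r, r + n), slice(c, c + n)
        out[r, c] = (pad_img[win] * w).sum() / (pad_ok[win] * w).sum()  # eq. 5.4
    return out
```

Because m_k is zero for the dead pixel itself, its (erroneous) value never contributes to its own replacement.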


Chapter 6

Results

This chapter presents and discusses the results of applying the methods described in the previous chapters to the experimental data.

The main task of this thesis work was to create a nonuniform correction (NUC) of the infrared focal plane array (FPA) without any major concern for the optimization of the algorithms. Nevertheless, it may be interesting to compare the methods used at FOI before this thesis work with the methods used here. The computer used for these corrections was based on a P-III with a frequency of 3.0 GHz and 1024 MB of RAM.

All work is done in Matlab scripts except for the functions that perform the NUC of the image and the one that removes the dead pixels from the image; these are written in C.

Method    NUC   Coef mask   Extreme value mask   Temporal mask   PCA mask   Correct a frame [s]
New        X        X               X                  X             X              0.2
Previous   X        X                                                               150

Table 6.1: Differences between the new and old method for the NUC and bad pixel filtering at FOI.

The time it takes to do a NUC of an image from either the Multimir or Emerald using the Matlab program in use prior to this master thesis was 150 seconds per image. Even though optimization was of no concern, when correcting a sequence of more than 1000 frames, going from 150 s per frame to 0.2 s makes a difference.

All polynomial coefficients are set before the actual NUC starts; the work done during the nonuniform correction is therefore just a polynomial evaluation for each pixel and the bad pixel filtering. There are two main reasons why a frame still takes 0.2 seconds: first, Matlab does not use references but reallocates memory for each function call; second, reshaping and transposing a matrix


is very time-consuming in Matlab.

The old method uses the algorithms presented in [13], while the new methods use algorithms from [10] for the nonuniform correction; the only difference when visually viewing the images is therefore the dead pixels.

6.1 NUC and bad pixel filtering

When describing the different types of nonuniform corrections and dead pixel definitions, the result for each attempt was shown. Some pixels were defined as dead by more than one dead pixel definition, while some were not.

Figure 6.1: Dead pixels identified by the different definitions. The first row for each function is the number of dead pixels; the second row is the number of unique dead pixels that were not identified by any other definition.

Merging all the dead pixel masks into one mask results in a visually good image; the result is shown in figures 6.2 and 6.3.
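The merge is a logical OR of the four masks, and the per-definition counts of figure 6.1 follow from the same data. A Python sketch (the function name and the dict interface are assumptions, not the thesis' code):

```python
import numpy as np

def mask_statistics(masks):
    """Combine the dead-pixel masks and count, per definition, the total and
    the unique dead pixels (the two rows of figure 6.1) -- a sketch.

    masks: dict mapping a definition name to a boolean mask of equal shape.
    """
    merged = np.logical_or.reduce(list(masks.values()))
    stats = {}
    for name, m in masks.items():
        # Pixels flagged by every definition except this one.
        others = np.logical_or.reduce([v for k, v in masks.items() if k != name])
        stats[name] = {'total': int(m.sum()),
                       'unique': int((m & ~others).sum())}
    return merged, stats
```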

6.2 Linear responsivity

In appendix C.2 it is assumed that the relationship between the DN from the detector and the incoming radiation can be approximated with a second order polynomial. To verify this, 12 images were taken at different temperatures within a time period of less than an hour. A short time period is important to reduce the drift of the sensor, which is described in section 6.3. Only the bands in the MWIR region are verified against the radiance and temperature; the emitted radiance in the SWIR at these temperatures is too low to allow calibration through these correction procedures, which are therefore currently not used there. The result is shown in figure 6.4.

The figures show that the detectors of both the Multimir and Emerald cameras have an almost linear responsivity to the incoming radiation. The translation between the DN and the radiance, L, is, as approximated in appendix C:

L = c0 + c1 · DN + c2 · DN²


(a) Raw image from the Multimir camera

(b) Dead pixel mask

(c) Nonuniform corrected image using new methods, Multimir

Figure 6.2: Visual comparison between the new method and the raw uncorrected image from the Multimir camera. Note that almost all pixels that deviate from the scene are now filtered away.

The coefficients for the second degree polynomial approximation for the Multimir and Emerald are shown in table 6.2.

There are mainly two sources of error in the calibration that explain the deviations from linear responsivity: first, errors when measuring the temperature of the Peltier radiator; second, the Peltier radiator does not keep a constant temperature over its full area, but shows some small temperature differences.


(a) Raw image from the Emerald camera

(b) Dead pixel mask

(c) Nonuniform corrected image using new methods, Emerald

Figure 6.3: Visual comparison between the new method and the raw uncorrected image from the Emerald camera.

6.3 Correctability

When analysing the noise of the FPA, and especially when using the reference based correction method, a problem called drift occurs. Section 4.3 discussed a method to estimate the drift. Figure 6.5 shows the drift for bands three and four on the Multimir and band three on the Emerald. Note that a high correctability


[Figure 6.4, panels: (a) radiance as a function of DN, Multimir, band 3; (b) radiance as a function of DN, Multimir, band 4; (c) radiance as a function of DN, Emerald, band 3. Each panel shows DN on the x-axis and radiance and temperature (°C) on the y-axes.]

Figure 6.4: The relationship between the DN from the IR camera and the radiance. The straight line in the figures is the linear approximation of the data.

value is not equal to a bad NUC, but might be a result of low temporal noise, see section 4.3; a comparison of the correctability between the cameras therefore gives no new information. These measurements were taken in a lab where the air temperature was constant and the camera was not moved.

The lower the correctability value, the better the NUC is assumed to be, but there is no defined threshold for when the quality of the NUC is no longer high enough. What can be said about the result is that both cameras have a low correctability value for the first five measurements. After approximately one and a half hours both cameras give a high correctability value, and a new calibration would have been needed.

Not only does the responsivity of each pixel change over time; the mean response of the FPA to the incoming radiation also changes, see figure 6.6. This indicates that it is not only the responsivity of the individual pixels that may change: the full FPA has a changing responsivity.


Coefficient   Multimir, Band 4   Emerald, Band 3
c0            -0.316             -3.24
c1            1.00 · 10^-5       5.58 · 10^-4
c2            2.22 · 10^-8       -2.26 · 10^-9

Table 6.2: Coefficients for the DN → radiance translation.

[Figure 6.5, panels: (a) band 3, Multimir; (b) band 4, Multimir; (c) band 3, Emerald. Each panel shows the correctability value plotted against time of day (10.00–15.00).]

Figure 6.5: Correctability over time

6.4 Measurement considerations

When viewing the calibration points taken in the range 21–24 °C, there are some variations in the median value of the FPA for a constant incoming radiance. This is mainly because the reference source, the Peltier radiator, does not have a constant value over the full radiator but varies by 0.5–1 °C; the handheld IR thermometer might also not give an accurate value. With this in mind, to minimize the effect of false values in the measurements it is important that the temperature difference between the references is not too small: at least the same difference as in the scene, and a larger difference is better, assuming no saturation of the sensors occurs.


[Figure 6.6, panels: (a) band 3, Multimir; (b) band 4, Multimir; (c) band 3, Emerald. Each panel shows radiance and temperature (°C) as functions of DN, measured at two times of day.]

Figure 6.6: DN → radiance, taken at 10.00 am and 1.00 pm. Note the drift over time.

6.5 The user interface of the Matlab program

The theory collected in this thesis is applied in Matlab to perform the NUC and the filtering of dead pixels in the images from the two IR cameras. This resulted in a toolbox with many options and steps needed to perform the correction. To ease this, a user interface (UI) was created in Matlab; its main purpose is to let the user concentrate on the calibration and corrections instead of the scripts needed to perform a correction.

A more in-depth description of the UI is presented in appendix B. The options for the NUC are always set to a second order polynomial, unless there are only one or two calibration files per band; in that case the degree of the polynomial is set as high as possible (first order for two calibration files and zero order for a single calibration file). The main functions that control the definitions of the dead pixels are shown in figure 6.8.

6.6 DN → physical unit

The NUC is the first step in a correction process and the main focus of this thesis. In the second step a radiometric correction is performed, where the digital


[Figure 6.7, panels: (a) band 3, Multimir; (b) band 4, Multimir; (c) band 3, Emerald. Each panel shows the measured radiance plotted against the median value [DN].]

Figure 6.7: Measured radiance as a function of DN. The line indicates the approximated linear relationship between the DNs and the radiance.

numbers, DN, are translated into a radiometric unit like radiance [W/(m²·sr)]. The radiometric calibration is implemented in a Matlab program called IR Eval, which originally supported only one camera, the Thermovision 900. In this thesis work IR Eval has been modified to also support the Multimir and Emerald image formats.

Section C.1 discusses the factors that affect the transmission through the atmosphere. IR Eval uses a separate program called MODTRAN, which calculates the transmission and emittance through the atmosphere in the infrared band. The calculation is based on a number of parameters, among them distance, air temperature and humidity.

Based on the information from MODTRAN and appendix C.2, IR Eval then converts the DN to either temperature, °C, or radiance, W/(sr·m²). An output for the Multimir camera is shown in figure 6.9.


Figure 6.8: The main windows used to control the dead pixel mask and the result

Figure 6.9: The output from IR Eval using the Multimir camera converted to temperature; the unit of the colour bar is ◦C


Chapter 7

Conclusions

An infrared image may be severely distorted by fixed pattern noise, which arises because all detector elements have unique response curves, with gain and offset values varying over the focal plane array. The fixed pattern noise can to a large extent be reduced by a nonuniform correction. Two different approaches to performing a nonuniform correction are described in this thesis. Scene based correction works well for scenes with motion in the picture, but cannot be used for static scenes, and a radiometric calibration cannot be performed unless additional data are collected. In reference based correction at least one radiance source needs to be registered. Reference based correction is not dependent on any motion in the scene, and the radiance source data may be used to perform a radiometric calibration of the images. A drawback, however, is the time difference between the registrations of the radiance sources and the registration of the scene, which makes the method sensitive to the drift of pixel responsivity over time.

In this thesis work a nonuniform correction method is developed and implemented in Matlab v.7. The utilities and the graphical user interface have been tested on real IR sequences. The method is based on a reference based correction using a second order polynomial approximation. The reference based correction was preferred, since generality was one objective of this thesis work. A graphical user interface was designed, in which all operations in the nonuniform correction are performed. The focus of the user interface was on ease of use, without the need of any particular knowledge about correction methods. The image data and the parameters in the nonuniform correction are shown in windows, which gives the user a good overview of the nonuniform correction. At present the Matlab functions support two high performance infrared sensors, available at the Department of IR systems. The functionality of the user interface has been verified by the correction of a complete set of image data from a field trial, collected by the two infrared sensors. Dead pixels (pixels that give erroneous response to the incident radiance) are identified and replaced by a method based on the nearest neighbour. The identification of the dead pixels is performed in four different ways, which was found to be necessary since some of the dead pixels have responsivities that are hard to define, as the responsivities may vary over time and between


different scenes.

A nonuniform correction method that has earlier been used at FOI was also

a reference based correction, but the algorithms were not fully developed. The time needed to correct one image was about 150 s. On the same computer the new method only needs 0.2 s per frame, despite the implementation of the additional methods for identification of dead pixels. The question arises whether the time required may be further reduced. There are at least two main reasons for the 0.2 s per frame. The first reason is that Matlab does not use references, but reallocates memory for each function call. Another reason is that two operations, reshaping and transposing of matrices, are very time consuming in Matlab.
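The per-pixel second order polynomial correction described above can be sketched as follows. This is a NumPy illustration with simulated calibration frames and hypothetical blackbody levels, not the Matlab/C implementation of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 8, 8

# Hypothetical radiance levels of the three blackbody reference sources.
targets = np.array([0.2, 0.5, 0.8])

# Simulated mean calibration frames: every detector element has its own gain,
# offset and slight nonlinearity, i.e. fixed pattern noise.
gain = 1.0 + 0.1 * rng.standard_normal((rows, cols))
offset = 0.05 * rng.standard_normal((rows, cols))
cal = np.stack([gain * t + offset + 0.02 * (gain * t) ** 2 for t in targets])

# Fit a second order polynomial DN -> target level for every pixel
# (three reference points determine the quadratic exactly).
coeffs = np.empty((3, rows, cols))
for r in range(rows):
    for c in range(cols):
        coeffs[:, r, c] = np.polyfit(cal[:, r, c], targets, deg=2)

def nuc(frame):
    """Apply the per-pixel second order correction to one raw frame."""
    a2, a1, a0 = coeffs
    return a2 * frame ** 2 + a1 * frame + a0

# Correcting a calibration frame reproduces its uniform reference level.
corrected = nuc(cal[1])
```

The correction step itself is a few elementwise array operations per frame, which is why a vectorized implementation can reach fractions of a second per image.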

7.1 Future work

The correction is based on the mean value of the calibration files, and the temporal noise for each detector element is used in defining dead pixels, but there is possibly more information in the calibration files. The drift is the main problem when using the cameras in the field; one possible direction is to merge scene based correction with reference based correction to keep a radiometrically correct DN→radiation translation. The difficulty with using the scene based correction is that motion is a main component of the correction, and motion is not always present in the image sequences taken at FOI.

At present the identified dead pixels are given the value 1 in the dead pixel mask. In some applications it may be useful to classify the dead pixels, where a truly dead pixel is given the value 1 and the other dead pixels are given a probability number between 0 and 1, depending on the probability that the dead pixel contains some useful information.


Bibliography

[1] Rolf Carlson. Design and optimization in organic synthesis. Data handling in science and technology, volume 8. Elsevier Science Publishers, Amsterdam, 1992. English.

[2] Gunnar Farnebäck. The stereo problem. Course literature in Bilder & grafik at Linköping University, 2001.

[3] M. Hayat and S. Torres. Kalman filtering for adaptive nonuniformity correction in infrared focal-plane arrays. Journal of the optical society of America, 20(3):470–480, March 2003.

[4] Gerald C. Holst. Testing and evaluation of infrared imaging systems. JDC Publishing, Winter Park, FL, USA, 1st edition, 1993. English.

[5] Gerald C. Holst. Electro-optical imaging system performance. JDC Publishing, Winter Park, FL, USA, 1st edition, 1995. English.

[6] J. M. López-Alonso, J. Alda, and E. Bernabéu. Principal-component characterization of noise for infrared images. Applied optics, 41(2):320–331, January 2002.

[7] J. M. López-Alonso and J. Alda. Bad pixel identification by means of principal components analysis. Optical engineering, 41(9):2152–2157, September 2002.

[8] John Lester Miller. Principles of infrared technology. VNR, New York, NY, USA, 1st edition, 1994. English.

[9] Claes Nelsson and Pär Nilsson. Measurement equipment at the department of IR systems. Technical Report FOA-R-99-01111-615-SE, Division of sensor technology, FOI, Linköping, Sweden, April 1999.

[10] R. Wang, P. Chen, and P. Tsien. An improved nonuniformity correction algorithm for infrared focal plane arrays which is easy to implement. Infrared physics & technology, 39:15–21, 1998.

[11] S. Torres, J. Pezoa, and M. Hayat. Scene-based nonuniformity correction for focal plane arrays by the method of the inverse covariance form. Applied optics, 42(29):5872–5881, October 2003.


[12] Robert A. Schowengerdt. Remote sensing: Models and methods for image processing. Academic Press, Burlington, MA, USA, 2nd edition, 1997. English.

[13] M. Schultz and L. Caldwell. Nonuniformity correction and correctability of infrared focal plane arrays. Infrared physics & technology, 36:763–777, 1995.

[14] P. Torle. Scene-based correction of image sensor deficiencies. Master's thesis LiTH-ISY-EX-3350-2003, Department of Electrical Engineering, Linköping University, Linköping, Sweden, May 2003.

[15] Y. Shi, T. Zhang, Z. Cai, and L. Hui. A feasible approach for nonuniformity correction in IRFPA with nonlinear response. Infrared physics & technology, 2004.

[16] George J. Zissis, editor. The infrared & electro-optical systems handbook, volume 1. SPIE Press, Ann Arbor, MI, USA, 1st edition, 1993. English.


Appendix A

NIPALS algorithm

Although a matrix X can be factorized by singular value decomposition, or by computing the eigenvectors from X′X, this is not necessary when only a few principal components are desired, which was the case in this thesis work. In such cases it is more advantageous to use the NIPALS algorithm [1], by which the principal components can be determined one at a time in decreasing order, according to how much variance they describe. It is iterative and generally converges after few (< 20) iterations. It can be summarized as follows:

Let X be the centered (and scaled) data matrix.

1. Use any column vector of X as a starting score vector t(0).

2. Determine a first loading vector as

p′(0) = t′(0)X / (t′(0)t(0)) (A.1)

This affords a least squares estimation of the elements in p as the slopes of the linear regression of t(0) on the columns xi in X.

3. Normalise p(0) to unit length by multiplication with

1/|p(0)| = (p′(0)p(0))^(−1/2) (A.2)

4. Compute a new score vector t(1) = Xp(0). This gives an estimate of the scores as the regression coefficients of p on the corresponding row vectors in X.

5. Check the convergence by comparing t(1) to t(0). This can be done by computing the norm

|t(1) − t(0)| (A.3)

and comparing this value to a criterion of convergence, say

|t(1) − t(0)| < ε = 10^(−10) (A.4)

The value of ε can be set according to the precision of the calculations in the computer. If the score vectors have converged, go to 6; else go to 2.


6. Remove the variation described by the first component and form the matrix of residuals

E = X − t(1)p′(1) (A.5)

Use E as the starting matrix for the next principal component dimension, and return to 1.
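The six steps above can be sketched compactly in code. This is an illustrative NumPy version of the NIPALS iteration, not the implementation used in the thesis; the synthetic data matrix is made up:

```python
import numpy as np

def nipals(X, n_components, tol=1e-10, max_iter=500):
    """Extract principal components one at a time with the NIPALS iteration."""
    E = X - X.mean(axis=0)               # centred data matrix
    scores, loadings = [], []
    for _ in range(n_components):
        t = E[:, 0].copy()               # 1. any column as starting score vector
        for _ in range(max_iter):
            p = E.T @ t / (t @ t)        # 2. loading by regression of t on columns
            p /= np.linalg.norm(p)       # 3. normalise loading to unit length
            t_new = E @ p                # 4. new score vector
            converged = np.linalg.norm(t_new - t) < tol  # 5. convergence check
            t = t_new
            if converged:
                break
        scores.append(t)
        loadings.append(p)
        E = E - np.outer(t, p)           # 6. deflate; E is the residual matrix
    return np.array(scores).T, np.array(loadings).T

# Synthetic data with one dominant direction, so the iteration converges fast.
rng = np.random.default_rng(1)
true_scores = rng.standard_normal((50, 2)) * np.array([5.0, 2.0])
true_loadings, _ = np.linalg.qr(rng.standard_normal((5, 2)))
X = true_scores @ true_loadings.T + 0.1 * rng.standard_normal((50, 5))
T, P = nipals(X, 2)
```

At convergence the first loading coincides (up to sign) with the first right singular vector of the centred data matrix, and the extracted score vectors are mutually orthogonal.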


Appendix B

Guide to the user interface

The program is mainly written in Matlab; only the two functions that perform the NUC on the images and the dead pixel filtering are written in C. The code is written so that the program can be used through a Matlab script or function, or through the graphical user interface (GUI). The GUI is written with ease of use in mind, without the need of any particular knowledge of nonuniform correction (NUC), calibration data or Matlab.

B.1 Startup

The first window that shows is the file window, see figure B.1. In this window one chooses which scene files to correct and which reflectance files to use to correct them.

The first thing to do is to select a scene file to view and/or correct. When a scene file is selected, by pressing the ”Load...” button, number 1 in figure B.1, the window might change appearance depending on the number of bands in the file. For a Multimir file with four bands the window looks like figure B.2a; for files with only one band it looks like figure B.2b.

When the scene file has been selected there are three choices:

1. Select more scene files; to select more than one file at a time in the file selector, press the Ctrl-button while selecting files.

2. View the uncorrected scene file, by selecting ”View only”, number 5 in figure B.1, discussed in section B.3, on page 76.

3. Select calibration-files for the current scene file, discussed in section B.1.1.



(a) The first window that is shown when starting the GUI

1. Load files that will be viewed or NUC and dead pixel filtered.

2. Remove selected file from list.

3. Load calibration files.

4. Remove calibration files.

5. View the selected file without any NUC or filtering.

6. Select band, only for Multimir with 4 bands.

7. View the result of NUC and dead pixel filtering of image.

8. Settings used for IrEval.

9. Do a NUC and filtering and save the result to a new file.

10. Quit program.

(b) Available options

Figure B.1: The main window


(a) Window appearance with four bands in file

(b) Window appearance with one band in file

Figure B.2: The two different views after loading a scene file

B.1.1 Select calibration file

When the scene files have been selected it is time to select the calibration files. These are used to create the coefficients used in the NUC and to define the dead pixels. Usually a second degree polynomial correction is used; there should therefore be three calibration files for each band. If only two files are selected, a linear correction is used.

Since the Multimir may use four bands simultaneously, which operate at different frequencies and have different responses, all four bands need to be calibrated. The band selector, see number 6 in figure B.1, only appears when a four band Multimir IR sequence is selected.

The calibration files that are selected are applied to all the scene files in the scene file list. To select the calibration files press the ”Load ...” button, number 3 in figure B.1. Select the calibration files you wish to use; to select more than one file at a time press the Ctrl-button when selecting files. If using the Multimir with four bands it is important to select the proper calibration files


for the right band, i.e. bands one and two use calibration files with short wave radiation, and bands three and four use calibration files with MWIR (3−5 µm) radiation. The window now might look like figure B.3.

Figure B.3: Window appearance after selecting three calibration files

If using the Multimir with four bands, then select one of the other three bands in the band selector, number 6 in figure B.1, and repeat the procedure. This has to be done for all four filters.

When this is done there are three options:

1. View the file, both uncorrected and NUC with filtered bad pixels, and, if needed, change the default settings of the definitions of the dead pixel mask. This is done by pressing the ”View image...” button, button 7 in figure B.1, discussed in section B.2.

2. Correct the scene file(s) and save to file. This is done by pressing the ”Save ...” button, button 9 in figure B.1, discussed in section B.4.

3. Set the proper settings that were used when shooting the scene files and creating the calibration files, number 8 in figure B.1. That is, the aperture, integration time and frequency of the scene files, and the temperature of the calibration files. This is only needed when using the corrected file in IrEval, which is a separate program to view the radiation and temperature, discussed in section B.5.


B.2 View image and set dead pixel limits

When first selecting to view the images, by pressing the ”View image...” button in figure B.1, a wait bar comes up. During that time the program creates the dead pixel mask and the polynomial coefficients to be used in the NUC. When that is done, three new windows will appear, as shown in figure B.4.


(a) Window appearance when viewing file and setting values on dead pixel functions

The functions for the three windows are:

1. Controlling the frames: which frames to show and changing the appearance of the image.

2. The current image from the file.

3. Window to select and view the current settings for the dead pixel mask.

(b) Main functions for the windows

Figure B.4: The main figures to view the file and to control the dead pixels.


Showing frames

The window to control the frames, which appeared when pressing ”View image...”, has several different options, most of them self-explanatory. Figure B.5 gives a brief explanation of the objects.


(a) Window to control frames

1. Image number that is currently shown in the image window.

2. The magnification of the original size.

3. Increase the contrast in the images. Usually the contrast (the difference between the highest and lowest DN in the current image) is too small to show all the information in the image; by selecting the ”Stretch lim” option the image contrast is increased.

4. To compare the NUC and filtered image with the raw image.

5. Changing colours might be useful to enhance the contrast in the image.

6. A simple player function that plays the file in the direction of the arrow. The ”Step size” affects this player.

7. Slider to select the current image.

8. The number of frames to step between two shown frames, when using the player or slider.

9. Save the data to a MAT-file. Each time the button is pressed the current frame is saved to a MAT-file.

(b) Function of windows

Figure B.5: Window to control images to view, and its functions

Colour settings

If the checkbox ”Change colour settings” is selected then a new window appears and there is a possibility of changing the colours, see figure B.6.

There are two different types of colour settings. At all times one may change the colour map, using the maps predefined in Matlab. But when using a four band Multimir image one may merge three bands into one image, setting one RGB (Red, Green, Blue) colour for each band. The advantage of this is that


(a) Window appearance when selecting colour map

(b) Window appearance when selecting one band per colour

Figure B.6: The two different views of the colour settings window

objects are visually easier to view and find. Changing the colour map only affects the viewed image and files saved as AVI-movies; no changes are made when saving the scene file to the original format or a MAT-file.


Current image

This window has three different appearances. If the checkbox ”View only” in the startup window is selected, then this figure has the same properties as a regular figure, where zooming and resizing are possible. If viewing a NUC and filtered image, the image can have two separate appearances. The first appearance, as can be seen in figure B.7, shows the current corrected image, with the histogram of the one or four filters in the figure below. The X-axis goes from zero to the highest pixel value that the camera can give; for both Multimir and Emerald this is 2^14. The Y-axis is in logarithmic scale to show the few pixels that differ from the rest.

Figure B.7: Window with the current corrected image and its histogram

The second view is shown when the checkbox ”View uncorrected figure”, number 4 in figure B.5, is selected; it then shows the uncorrected image, as in figure B.8. The upper image is the uncorrected and the lower image is the corrected.


Figure B.8: Window with the current corrected image and the raw uncorrected image

Window to select and view the current settings for the dead pixel mask

This is the main window to control all the dead pixels, see figure B.9. By using the sliders and/or typing in new values for the different types of dead pixel functions


one controls which pixels are set as dead pixels. These filters are not correlated, so which filter removes which dead pixel has to be investigated. The default values usually give a good result, but sometimes they have to be changed.

The filters that appear initially are not all filters; currently there is also the PCA function, which is not shown. Also, not all filters may be needed for the current file. For example, warm objects in a file might give high DN and might therefore be defined as dead pixels, which in other words would erase the object of interest. This can be corrected by disabling the dead pixels defined by the DN function.

To be able to control both which filters to use and which filters to view, select the checkbox ”Select mask”, number 3 in figure B.9. Then a new window is shown, see figure B.10, that has two separate columns: one for deciding which filters are used to define dead pixels and one for selecting which filters should be edited.

To reduce the size of the window in figure B.9, a maximum of five filters can be viewed at one time. When a change has been made, the ”Evaluate” button in figure B.9 has to be pressed.

To view the number of dead pixels for each filter, select the checkbox ”View statistics”, number 2 in figure B.9. Then a new window is shown, see figure B.11. This window takes no inputs and only gives information.

The information is divided into each band and filter type. For each dead pixel function there are two rows of data: the first is the number of dead pixels for that specific band for that specific mask, and the second row is the number of unique dead pixels that are not defined in any other masks for that specific band.
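The two rows of statistics (total dead pixels per mask and pixels unique to that mask) can be computed with boolean masks. A minimal sketch with hypothetical masks and filter names, not the program's actual data:

```python
import numpy as np

# Hypothetical boolean dead pixel masks, one per filter, for a 2x2 band.
masks = {
    "temporal noise": np.array([[1, 0], [0, 1]], dtype=bool),
    "DN level":       np.array([[1, 0], [1, 0]], dtype=bool),
}

for name, mask in masks.items():
    # Union of every other filter's mask.
    others = np.zeros_like(mask)
    for other_name, other in masks.items():
        if other_name != name:
            others |= other
    total = int(mask.sum())               # row 1: dead pixels for this mask
    unique = int((mask & ~others).sum())  # row 2: not flagged by any other mask
    print(name, "total:", total, "unique:", unique)
```

A filter whose unique count is zero flags only pixels already caught by the other filters, which makes the statistics useful when deciding which filters to keep enabled.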

When any changes have been made in either of the two windows that take inputs, the ”Evaluate” button has to be pressed to activate the changes.

When done viewing the file and changing the settings, press the ”Done” button to go back to the main window.



(a) Window that controls the dead pixel definitions

1. Select which band to edit; the bands are all independent of each other.

2. Show a window with the information about the dead pixels.

3. Select which masks to use and edit.

4. The currents masks that can be edited.

5. Update the changes and show the updated image.

6. Close windows.

(b) Function of window

Figure B.9: Window to control the dead pixels.


Figure B.10: Window to select which filters are active, and which are editable

Figure B.11: Window that gives information about the current masks

B.3 View file without any correction

When the checkbox ”View only” in the main window, figure B.1, is selected, no correction files are needed. The figure then changes and hides the calibration list, since it is of no use, see figure B.12.

Figure B.12: Main window, when checkbox ”View only” is selected


The options here are the same as when using the NUC and filtering: view the file or save it, but now with some constraints and changes. By pressing the ”View file...” button the two windows to view the file, as described in section B.2, are shown. The third window, which edits the dead pixel mask, is not shown, since no file has been corrected and there are therefore no dead pixels.

The actions are the same as previously discussed.


B.4 Save file

If a scene file is selected in the main window, figure B.1, and the ”Save...” button is pressed, a new window is shown to select the format of the saved file. Currently one can save the scene file to the original format, an AVI-movie or a MAT-file, see figure B.13. This can be done with or without calibration files, but without calibration files there are only two output formats: the AVI-movie and the MAT-file.

Figure B.13: Window to save the scene-file

If there is more than one file to be saved at one time, one selects the destination directory, and the filenames will get a prepended prefix, ”c_” as in ”corrected”. If that filename already exists, a numeric postfix is added, counting upwards until a unique filename is found; this is to avoid overwriting existing files. If there is only one file to correct, one may select both the destination directory and the filename.
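The prefix-and-counter naming scheme described above can be sketched as a small helper. This is an illustrative version with a hypothetical function name, not the program's actual code:

```python
import os

def corrected_name(directory, filename):
    """Prepend 'c_' and, while the name is taken, append a counter until unique."""
    base, ext = os.path.splitext(filename)
    candidate = f"c_{base}{ext}"
    counter = 1
    # Count upwards until a filename is found that does not already exist,
    # so that no existing file is overwritten.
    while os.path.exists(os.path.join(directory, candidate)):
        candidate = f"c_{base}{counter}{ext}"
        counter += 1
    return candidate
```

For example, if "c_scene.dat" already exists in the destination directory, the helper would return "c_scene1.dat", then "c_scene2.dat", and so on.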

Saving to AVI-movie

When selecting to save to either the original format or an AVI-movie, all files in the scene file list will be corrected. This can be avoided in the AVI options by selecting the ”Change settings” option; then only the selected file is saved.

When saving to an AVI-movie the last used colour map will be used. If the scene file is a four-band Multimir sequence and the colour map was a three colour map, then the AVI-movie will be saved with those three colour settings.

There are two different settings that can be changed when saving to an AVI-movie. The first is the frames per second (fps); the default value is 15, but for short sequences or fast moving objects a lower fps might be used. Also, when using sub frames the fps of the camera might be as high as 1000 Hz, and an AVI-movie created at 15 Hz then tends to be too slow. The other option is which part of the sequence to use, if not the full sequence is of interest.


B.5 IrEval

When the goal is to use the package IrEval, which converts the digital numbers to radiation and blackbody temperature, a little more information is needed than just correcting the scene file. The information needed is given by the two parts shown in figure B.1, number 8.

The first part is the information about the scene file: the aperture, camera frequency and integration time. The other part sets the temperature for each calibration file. When using all four bands of the Multimir, only bands three and four are of any interest, which is the longer wave radiation. Currently there is no calibrated short wave radiation source, and calculating the radiation that is not a reflection from the sun requires very high temperatures, see section 2.1; therefore it is not used.

The camera settings should be the same for both the scene files and the calibration files. When the camera settings are done, the procedure for creating a corrected IR sequence is the same as described in this appendix. To be able to use IrEval, the corrected scene file must be saved in the original format, which is the only supported output format for this purpose.


Appendix C

Radiometric calibration

In a radiometric calibration the radiance and/or temperature of an object is calculated. A few calibration methods use scene based calibration to estimate the radiance, but these methods still need some reference in the image. Therefore it is more common to use reference based calibration methods, which are based on known radiance sources.

At the Department of infrared systems there are several infrared cameras routinely used in signature measurements. A special tool, IR Eval, is available for performing radiometric calibrations of the image data.

C.1 Atmospheric absorption

The radiation from the sun is the major contributor to the radiation in the range of 0.4−3 µm. Most materials absorb and reflect this radiation from the sun; some, like ordinary glass (for wavelengths < 2.6 µm) and water, even transmit it.

All radiation is affected by the medium it passes through: it is scattered and absorbed. A typical spectral transmittance curve through the atmosphere is shown in figure C.1; the figure is calculated by the modelling program MODTRAN.

The gaps in the transmittance at the different wavelengths are due to molecular absorption, some of which is due to water and carbon dioxide. Because of these gaps, not the entire infrared spectrum is of interest when measuring through the atmosphere.

Radiation components

The major radiation in the visible through the SWIR spectral regions can generally be divided into three separate components, see figure C.2:

1. Unscattered surface reflected radiation L^su_λ.

The radiation from the sun reflects off the object/surface directly to the observer.


[Figure C.1 plots transmission (0.0–1.0) against wavelength (0.2–20 µm), with absorption bands marked for O3, O2, CO2 and H2O.]

Figure C.1: Example of the transmittance of the atmosphere as calculated by MODTRAN

[Figure C.2 has three panels showing the paths to the sensor: (a) Unscattered, L^su_λ; (b) Down scattered, L^sd_λ; (c) Path scattered, L^sp_λ.]

Figure C.2: Reflected and scattered components. Primarily the visual and SWIR region.

2. Downscattered surface reflected radiation L^sd_λ.

The radiation that refracts in the atmosphere to the object/surface and finally reflects to the observer.

3. Upscattered path radiance, L^sp_λ.

The radiation that reflects in the atmosphere to the observer.

The total radiance to a sensor in the SWIR region is then given as

L^s_λ = L^su_λ + L^sd_λ + L^sp_λ (C.1)

Thermal radiation

As the wavelengths increase beyond SWIR and into the MWIR, the importance of the emitted thermal radiation increases.

Besides the reflected radiation described in the previous section, there are also three thermally emitted components, see figure C.3:

1. Surface emitted radiation from the objects/earth, L^eu_λ.

This is the emittance of interest in this thesis.

Page 97: Institutionen för systemteknik - DiVA portalliu.diva-portal.org/smash/get/diva2:21133/FULLTEXT01.pdfInstitutionen för systemteknik Department of Electrical Engineering Examensarbete


Figure C.3: Emitted components viewed by the observer: (a) surface emitted, L^{eu}_λ (ε, T); (b) down emitted, L^{ed}_λ; (c) path emitted, L^{ep}_λ.

2. Down-emitted, surface-reflected radiation from the atmosphere, L^{ed}_λ.

3. Path-emitted radiance, L^{ep}_λ.

The total emitted radiance to a sensor is then given as

L^e_λ = L^{eu}_λ + L^{ed}_λ + L^{ep}_λ   (C.2)

The total radiance to a sensor in the MWIR region is then given as

L_λ = L^s_λ + L^e_λ = L^{su}_λ + L^{sd}_λ + L^{sp}_λ + L^{eu}_λ + L^{ed}_λ + L^{ep}_λ   (C.3)
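As a sanity check, the component sums in equations (C.1)–(C.3) can be sketched in a few lines of Python. The function and the numeric values below are purely illustrative and not part of the thesis:

```python
def total_radiance(L_su, L_sd, L_sp, L_eu, L_ed, L_ep):
    """Total spectral radiance at the sensor, equation (C.3)."""
    L_s = L_su + L_sd + L_sp  # reflected (solar) components, equation (C.1)
    L_e = L_eu + L_ed + L_ep  # thermally emitted components, equation (C.2)
    return L_s + L_e

# Invented component values in W/(m^2 sr um):
print(total_radiance(1.0, 0.25, 0.25, 2.0, 0.25, 0.25))  # 4.0
```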

C.2 Radiance analysis

To be able to convert the DN from the detector to a physical radiance from an object, some aspects have to be taken into account. First, the results from the previous section have to be part of the equations, since all radiation used with the IR cameras is transmitted through the atmosphere. Secondly, the transmission of the bandpass filter of the camera and the responsivity of the detector element are wavelength dependent. The theory for putting these parts together is described in reference [9].

L_Camera = ∫_{λ1}^{λ2} R_Norm(λ) · L_inc(λ) dλ   (C.4)

where

L_Camera = total incoming radiation to the detector
λ1, λ2 = lower and upper limits of the optical bandpass filter
R_Norm(λ) = normalized response curve for the detector-filter combination
L_inc(λ) = incoming spectral radiation to the camera   (C.5)
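Numerically, equation (C.4) is a band integral of the response-weighted spectral radiance. The following is a minimal sketch assuming a hypothetical Gaussian response curve and a flat incoming spectrum; none of these values come from the thesis:

```python
import numpy as np

def band_integral(y, x):
    """Trapezoidal approximation of the integral of y(x) over x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Illustrative discretized spectra on a common wavelength grid [um]:
wl = np.linspace(3.0, 5.0, 201)             # lambda_1 .. lambda_2
R_norm = np.exp(-((wl - 4.0) / 0.6) ** 2)   # assumed response curve
R_norm /= R_norm.max()                      # normalized to peak 1
L_inc = np.full_like(wl, 2.5)               # flat spectral radiance [W/(m^2 sr um)]

# Equation (C.4): L_Camera = integral of R_Norm(lambda) * L_inc(lambda) dlambda
L_camera = band_integral(R_norm * L_inc, wl)
print(L_camera)
```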



The atmospheric effects are characterized by the atmospheric transmission τ_atm(λ) and the atmospheric path radiance L_atm(λ), described in section 3.2b.

The radiance detected by the detector can be written as

L_Camera = ∫_{λ1}^{λ2} R_Norm(λ) · L_inc(λ) dλ   (C.6)
         = ∫_{λ1}^{λ2} R_Norm(λ) · τ_atm(λ) · L_obj(λ) dλ + ∫_{λ1}^{λ2} R_Norm(λ) · L_atm(λ) dλ

The atmospheric transmission, τ_atm(λ), and radiance, L_atm(λ), are calculated with MODTRAN.
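The split in equation (C.6) into an attenuated object term and an additive path-radiance term can be illustrated with constant spectra. All values below are invented for illustration; in practice τ_atm(λ) and L_atm(λ) come from MODTRAN:

```python
import numpy as np

def band_integral(y, x):
    """Trapezoidal approximation of the integral of y(x) over x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

wl = np.linspace(3.0, 5.0, 201)   # wavelength grid [um]
R_norm = np.ones_like(wl)         # idealized flat, normalized response
tau_atm = np.full_like(wl, 0.7)   # assumed atmospheric transmission
L_obj = np.full_like(wl, 2.0)     # assumed object spectral radiance
L_atm = np.full_like(wl, 0.3)     # assumed atmospheric path radiance

# Equation (C.6): attenuated object term plus additive path term
term_obj = band_integral(R_norm * tau_atm * L_obj, wl)
term_path = band_integral(R_norm * L_atm, wl)
L_camera = term_obj + term_path
print(round(term_obj, 3), round(term_path, 3), round(L_camera, 3))  # 2.8 0.6 3.4
```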

Assuming that the DN as a function of the incoming radiation is close to quadratic, the detected radiance will be described by a second degree polynomial

L_Camera = c0 + c1 · DN + c2 · DN²   (C.7)

It is shown in chapter 6 that the responsivity is quite linear and that the c2 constant is usually very small.
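The coefficients c0, c1 and c2 of equation (C.7) are typically obtained by a least-squares fit against measurements of sources with known radiance. A sketch with invented calibration points (not the thesis data):

```python
import numpy as np

# Invented calibration points: known source radiances and measured digital numbers
L_known = np.array([0.5, 1.0, 1.5, 2.0, 2.5])       # [W/(m^2 sr)]
DN = np.array([210.0, 405.0, 598.0, 812.0, 1010.0])

# Fit equation (C.7): L = c0 + c1*DN + c2*DN^2
# (np.polyfit returns coefficients from the highest power down)
c2, c1, c0 = np.polyfit(DN, L_known, deg=2)

# Apply the calibration to a new measurement; c2 comes out small for a
# near-linear detector, as observed in chapter 6.
L_est = c0 + c1 * 700.0 + c2 * 700.0 ** 2
print(c2, L_est)
```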

R_Norm(λ) represents the response curve for the detector-filter combinations. The two cameras use different types of detector materials, MCT and InSb, but when using the IrEval program the responsivity of the two detectors is assumed to be the same, see figure C.4.

Figure C.4: The responsivity of the detector on the two cameras (responsivity, 0–1, versus wavelength, 1–7 µm).

The transmission through the band pass filter was shown in figures 3.1b and 3.2b.


På svenska

This document is held available on the Internet – or its possible future replacement – for a considerable time from the date of publication barring exceptional circumstances.

Access to the document implies permission for anyone to read, to download, to print out single copies for individual use and to use it unchanged for non-commercial research and for teaching. Transfer of the copyright at a later date cannot revoke this permission. All other use of the document requires the consent of the copyright owner. The publisher has taken technical and administrative measures to guarantee authenticity, security and accessibility. The author's moral rights include the right to be mentioned as author, to the extent required by good practice, when the document is used as described above, as well as protection against the document being altered or presented in such a form or context as is offensive to the author's literary or artistic reputation or character.

For additional information about Linköping University Electronic Press see the publisher's home page: http://www.ep.liu.se/

In English

The publishers will keep this document online on the Internet – or its possible replacement – for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/

© Wilhelm Isoz