
    IMPROVEMENTS IN RADAR OBSERVING CAPABILITIES

M.Tech (DC), BMSCE

1. INTRODUCTION

Radar is an object-detection system that uses electromagnetic waves, specifically radio waves, to determine the range, altitude, direction, or speed of both moving and fixed objects such as aircraft, ships, spacecraft, guided missiles, motor vehicles, weather formations, and terrain. The radar dish, or antenna, transmits pulses of radio waves or microwaves which bounce off any object in their path. The object returns a tiny part of the wave's energy to a dish or antenna which is usually located at the same site as the transmitter.

The modern uses of radar are highly diverse, including air traffic control, radar astronomy, air-defense systems, antimissile systems; nautical radars to locate landmarks and other ships; aircraft anti-collision systems; ocean-surveillance systems; outer-space surveillance and rendezvous systems; meteorological precipitation monitoring; altimetry and flight-control systems; guided-missile target-locating systems; and ground-penetrating radar for geological observations.

The applications of radar thus span fields as diverse as air traffic control, weather monitoring, astrometry, road speed control, and commercial marine use.


The information provided by radar includes the bearing and range (and therefore position) of the object from the radar scanner. It is thus used in many different fields where the need for such positioning is crucial. The first use of radar was for military purposes: to locate air, ground and sea targets. This evolved in the civilian field into applications for aircraft, ships, and roads.

In aviation, aircraft are equipped with radar devices that warn of obstacles in or approaching their path and give accurate altitude readings. They can land in fog at airports equipped with radar-assisted ground-controlled approach (GCA) systems, in which the plane's flight is observed on radar screens while operators radio landing directions to the pilot.

For video surveillance and monitoring, a robust system should not depend on careful placement of cameras. It should also be robust to whatever is in its visual field or whatever lighting effects occur. It should be capable of dealing with movement through cluttered areas, objects overlapping in the visual field, shadows, lighting changes, effects of moving elements of the scene (e.g. swaying trees), slow-moving objects, and objects being introduced or removed from the scene. Traditional approaches based on backgrounding methods typically fail in these general situations.

Our goal is to create a robust, adaptive tracking system that is flexible enough to handle variations in lighting, moving scene clutter, multiple moving objects and other arbitrary changes to the observed scene. The resulting tracker is primarily geared towards scene-level video surveillance applications.


2. PROBLEM DEFINITION

Computational barriers have limited the complexity of real-time video processing applications of radar. Most systems were either too slow to be practical, or succeeded only by restricting themselves to very controlled situations.

Recently, faster computers have enabled researchers to consider more complex, robust models for real-time analysis of streaming data. As a consequence, these new methods allow us to begin modeling real-world processes under varying conditions.

The sole aim of this work is to provide a tracking algorithm for the radar-fixed vision sensor (fixed camera) to track the moving objects from the acquired data within its range of view.


3. PROBLEM OBJECTIVES

The research objective is to develop a common method for real-time segmentation of moving regions in image sequences; such methods involve background subtraction, or thresholding the error between an estimate of the image without moving objects and the current image. The numerous approaches to this problem differ in the type of background model used and the procedure used to update the model.

Our approach includes:

• Modeling each pixel as a mixture of Gaussians and using an approximation to update the model (a minimal sketch of this update is given after the list).

• Evaluating the Gaussian distributions of the adaptive mixture model to determine which are most likely to result from a background process.

• Classifying each pixel based on whether the Gaussian distribution that represents it most effectively is considered part of the background model.

• Extending this to a stable, real-time outdoor tracker which reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes.
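The following is a minimal MATLAB sketch of the per-pixel update above, in the spirit of Stauffer and Grimson's adaptive background mixture model. It shows a single scalar pixel rather than a full frame, and the component count, learning rate, matching threshold and initial values are illustrative assumptions:

```matlab
% One online update of a per-pixel Gaussian mixture (scalar intensity).
% All parameter values below are illustrative assumptions.
alpha = 0.01;                         % learning rate (assumed)
mu    = [100 150 200];                % means of the three components
sig2  = [100 100 100];                % variances of the three components
w     = [0.5 0.3 0.2];                % component weights
x     = 127;                          % incoming pixel value

d = abs(x - mu) ./ sqrt(sig2);        % normalised distance to each mean
m = find(d < 2.5, 1);                 % first Gaussian within 2.5 sigma
if isempty(m)                         % no match: replace weakest component
    [~, m] = min(w);
    mu(m) = x; sig2(m) = 400; w(m) = 0.05;
else                                  % match: adapt the matched component
    mu(m)   = (1-alpha)*mu(m)   + alpha*x;
    sig2(m) = (1-alpha)*sig2(m) + alpha*(x - mu(m))^2;
end
w = (1-alpha)*w; w(m) = w(m) + alpha; % raise matched weight, decay others
w = w / sum(w);                       % renormalise the weights
[~, order] = sort(w ./ sqrt(sig2), 'descend');  % rank components by w/sigma
% The top-ranked components whose weights sum past a threshold form the
% background model; x is labelled background if it matched one of them.
```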


4. ACCOMPLISHMENTS OF PHASE-1

A detailed study was made of the bispectrum, the Fourier transform of the third-order cumulant. On the MATLAB-7 platform, bispectra of basic signals have been computed.

A three-point scatterer problem has been considered, and the bispectrum was applied to resolve the scattered signals.
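As a concrete illustration of the Phase-1 computation, the following MATLAB sketch forms the direct (single-record) bispectrum estimate B(f1, f2) = X(f1) X(f2) X*(f1 + f2) of a basic test signal; the signal and its length are assumptions for illustration:

```matlab
% Direct bispectrum estimate of a basic test signal.
N = 64; n = 0:N-1;
x = cos(2*pi*0.10*n) + cos(2*pi*0.15*n) + cos(2*pi*0.25*n); % 0.25 = 0.10+0.15
X = fft(x);
B = zeros(N, N);
for k1 = 1:N
    for k2 = 1:N
        k3 = mod((k1-1) + (k2-1), N) + 1;   % frequency bin of f1+f2 (wrapped)
        B(k1, k2) = X(k1) * X(k2) * conj(X(k3));
    end
end
imagesc(abs(B)); axis xy;                   % peaks mark coupled frequencies
xlabel('f_2 bin'); ylabel('f_1 bin');
```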


6. PHASE-1

1. BI-SPECTRAL METHOD FOR RADAR TARGET RECOGNITION

The basic idea is that the geometry of the target scatterers and their mutual interactions impose features in the reflected radar signal that are typical and unique to the target of interest. The bi-spectrum can be used to detect these multiple-interaction features, which may then be matched against a reference database containing signatures of different target types for recognition and identification.

Radar backscattered signal from a simple 3-point scatterers target

To illustrate mathematical modeling of the backscattered radar signal, consider a hypothetical target with three point scatterers at ranges R1, R2 and R3 from the radar, as shown in the figure below.

Figure: Direct and Indirect Propagation Paths of a 3-Point Scatterers Target.

The radar backscatter signal for a given frequency f is given as a sum of phase-delayed returns from the individual scatterers and their multiple interactions:

    s(f) = a1*exp(-j*4*pi*f*R1/c) + a2*exp(-j*4*pi*f*R2/c) + a3*exp(-j*4*pi*f*R3/c) + multiple-interaction terms,

where c is the speed of light and the a_i are scattering amplitudes.


When R1 = 0 m, R2 = 1.5 m and R3 = 3.6 m, the corresponding radar backscatter signal can be rewritten as:

    s(f) = C1 + C2*exp(-j*4*pi*f*(1.5)/c) + C3*exp(-j*4*pi*f*(3.6)/c) + C4*exp(-j*4*pi*f*(5.1)/c),

where C1, C2, C3 and C4 are constants.

Radial range profile of a simple 3-point scatterers target

To better understand and visualize the contributions of the individual scatterers and their multiple interactions in the radar backscattered signal, the radial range profile of the target is examined. The radial range profile of any target can be generated by evaluating the amplitudes and phases of the radar backscattered signals over a band of evenly spaced frequencies and performing a Fourier transform to convert the signals into the time domain. The radial range profile is a plot of the magnitude of the impulse response versus the projected down range along the line of sight between the radar system and the target, where scatterings from the individual scatterers and their multiple interactions appear as peak responses.

The radial range profile of the simple 3-point scatterers target, computed using MATLAB's Fast Fourier Transform of the associated


radar backscattered signals over a frequency band from 5.0 GHz to 6.0 GHz, is illustrated in the figure.

Figure: Radial Range Profile of 3-Point Scatterers Target.

From the figure, it can be observed that the first three peak impulse responses are consistent with the relative positions and geometry of the 3-point scatterers at ranges R1 = 0 m, R2 = 1.5 m and R3 = 3.6 m, while the fourth peak impulse response is a ghost artifact that does not map to any scatterer and represents the collective summation of the multiple-scatter interaction terms between the 2nd and 3rd scattering points, at a range equivalent to R2 + R3 = 1.5 m + 3.6 m = 5.1 m.
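A minimal MATLAB sketch of this range-profile computation for the 3-point target follows; the unit scattering amplitudes, the single interaction term and the number of frequency samples are assumptions for illustration:

```matlab
% Radial range profile of the 3-point scatterer target over 5.0-6.0 GHz.
c = 3e8;                                   % speed of light (m/s)
f = linspace(5e9, 6e9, 512).';             % evenly spaced frequency samples
R = [0 1.5 3.6];                           % scatterer ranges (m)
s = sum(exp(-1j*4*pi*f*R/c), 2);           % direct returns from R1, R2, R3
s = s + exp(-1j*4*pi*f*(R(2)+R(3))/c);     % 2nd-3rd scatterer interaction
h = abs(ifft(s));                          % impulse response magnitude
B = f(end) - f(1);                         % bandwidth (1 GHz)
r = (0:numel(f)-1).' * c/(2*B);            % approximate down-range axis (m)
plot(r, h); xlabel('Down range (m)'); ylabel('|h(r)|');
% Peaks appear near 0, 1.5 and 3.6 m, plus the ghost at R2+R3 = 5.1 m.
```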


2. EPIPOLAR GEOMETRY

Epipolar geometry is the geometry of stereo vision. When two cameras view a 3D scene from two distinct positions, there are a number of geometric relations between the 3D points and their projections onto the 2D images that lead to constraints between the image points. These relations are derived on the assumption that the cameras can be approximated by the pinhole camera model. The typical case for epipolar geometry is when two cameras take a picture of the same scene from different points of view. The epipolar geometry is then used to describe the relation between the two resulting views.

The figure below depicts two pinhole cameras looking at point X. In real cameras, the image plane is actually behind the focal point and produces a rotated image. Here, however, the projection problem is simplified by placing a virtual image plane in front of the focal point of each camera to produce an unrotated image. OL and OR represent the focal points of the two cameras. X represents the point of interest in both cameras. Points xL and xR are the projections of point X onto the image planes.

Each camera captures a 2D image of the 3D world. This conversion from 3D to 2D is referred to as a perspective projection and is described by the pinhole camera model. It is common to model this projection operation by rays that emanate from the camera, passing through its focal


point. Note that each emanating ray corresponds to a single point in the image.

If the points xL and xR are known, their projection lines are also known. If the two image points correspond to the same 3D point X, the projection lines must intersect precisely at X. This means that X can be calculated from the coordinates of the two image points, a process called triangulation (a minimal sketch is given below).
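The following MATLAB sketch shows linear (DLT) triangulation from two views; the camera matrices and matched image points below are hypothetical placeholders, not values from our calibration:

```matlab
% Linear triangulation of a 3D point X from two image projections.
PL = [eye(3) zeros(3,1)];                 % left camera at the origin (assumed)
PR = [eye(3) [-0.2; 0; 0]];               % right camera, 0.2 m baseline (assumed)
xL = [0.10; 0.05];  xR = [0.06; 0.05];    % matched image points (assumed)
A  = [xL(1)*PL(3,:) - PL(1,:);            % each row: one linear constraint
      xL(2)*PL(3,:) - PL(2,:);
      xR(1)*PR(3,:) - PR(1,:);
      xR(2)*PR(3,:) - PR(2,:)];
[~, ~, V] = svd(A);                       % least-squares null vector of A
X = V(:, end);  X = X / X(4);             % homogeneous -> Euclidean
disp(X(1:3).');                           % triangulated 3D point
```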

Results:

1. Calibration of the camera was successfully done, and the fundamental matrix between the two cameras was found from the two projections of the same world point on two different image planes.

Figure: Camera projection.


Figure: Calibrated image.

2. Image rectification is done to obtain the triangulated image, given the two images from two different views.

Figure: Normal image (left) and rectified image (right).


7. PHASE-2

1. VANISHING POINT

Given a single image of an arbitrary road that may not be well paved, may lack clearly delineated edges, and may have no a priori known color or texture distribution, is it possible for a computer to find this road? This work addresses the question by decomposing the road detection process into two steps:

1. The estimation of the vanishing point associated with the main (straight) part of the road, followed by

2. The segmentation of the corresponding road area based on the vanishing point detected on the straight part of the road.

The main technical contributions of the proposed approach are a novel adaptive soft-voting scheme based on variable-sized voting regions using confidence-weighted Gabor filters, which compute the dominant texture orientation at each pixel, and a new vanishing-point-constrained edge detection technique for detecting road boundaries. The proposed method has been implemented, and experiments with general road images demonstrate that it is both computationally efficient and effective at detecting road regions in challenging conditions.

In the texture orientation estimation, we not only compute the texture orientation at each pixel but also assign a confidence to each estimate. The confidence is then incorporated into the vanishing point estimation. Observing that higher image pixels tend to receive more votes than lower image pixels, which usually results in a wrong vanishing point estimate for road images where the true vanishing point is not in the upper part of the image, a locally adaptive soft-voting (LASV) scheme is proposed to overcome this problem.

The correctly estimated vanishing point provides a strong clue to the localization of the road region. Therefore, we propose a vanishing-point-constrained dominant edge detection method to find the two most dominant edges of the road.


Based on the two dominant edges, we can roughly segment the road area and update the vanishing point estimated by LASV with the joint point of the two most dominant edges. The proposed road segmentation strategy is to find the two most dominant edges by first finding one and then finding the other based on the first. We prefer not to use the color cue in finding these edges for the following three reasons:

1. Color usually changes with illumination variation.

2. For some road images there is very subtle or no change in color between the road and its surrounding areas, e.g., a road covered by snow or a desert road.

3. For some roads, color changes dramatically within the road area.

For ease of illustration, the Orientation Consistency Ratio (OCR) is defined on the top-left image of the figure, which shows a line consisting of a set of discrete oriented points (the orientation of these points is denoted by a black arrow in the figure). For each point, if the angle between the point's orientation and the line's direction is smaller than a threshold, the point is viewed as orientationally consistent with the line. OCR is defined as the ratio between the number of orientationally consistent points and the total number of points on the line (a minimal sketch of this computation is given below). In an image, each point corresponds to a pixel. We find that the estimated vanishing point coincides with the joint point of a few dominant edges of the road if this vanishing point is a correct estimate, while it usually falls on the extension of one of the most dominant edges if it is a wrong estimate. We therefore propose to use the initial vanishing point as a constraint to find the first most dominant edge of the road. The top-right image of the figure illustrates this search process, where the first most dominant edge is detected as the one with the largest OCR among the set of lines going through the initial vanishing point.

Figure: Illustration of detection of the two most dominant edges. Top left: line segments consisting of discrete oriented points. Top right: initially detected vanishing point. Bottom left: detection of the two most dominant edges based on the initial vanishing point. Bottom right: the two most dominant edges and the updated vanishing point.
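A minimal MATLAB sketch of the OCR computation for one candidate line; the pixel orientations, line direction and 5-degree threshold are assumed inputs for illustration (in the full method the orientations come from the confidence-weighted Gabor filters):

```matlab
% Orientation Consistency Ratio (OCR) of one candidate line.
theta_line = 0.60;                             % line direction (rad), assumed
theta_pix  = theta_line + 0.05*randn(1, 200);  % synthetic pixel orientations
tol = 5*pi/180;                                % angular threshold (assumed)
d = mod(theta_pix - theta_line + pi/2, pi) - pi/2;  % wrap to [-pi/2, pi/2)
ocr = mean(abs(d) < tol);                      % consistent points / total points
```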


The second most dominant edge is then found as follows:

1. Count the number of dominant edge candidates that deviate to the left and to the right, respectively.

2. If all deviate to the left or all to the right, the two most dominant edges correspond to the two candidates with the largest and smallest deviation angles, respectively.

3. Otherwise, find those dominant edges which have a deviation orientation different from that of the first dominant edge.

4. Divide these dominant edges into several clusters according to the angle between two neighboring dominant edges; e.g., if the angle is no smaller than 2°, the two neighboring dominant edges belong to different clusters.

5. Find the center of the largest cluster as the deviation angle of the second most dominant edge. If more than one cluster has an equal number of dominant edges, the center of these clusters is used.


Results:

Figure: Vanishing point estimation and dominant edge detection. (a) Vanishing point marked manually. (b) Dominant edges detected using edge detection. (c) Voting for the dominant edge. (d) Estimated vanishing point.


3. HORIZON BASED ROLL ANGLE ESTIMATION FOR UNMANNED AIR VEHICLES

Typical navigational systems for UAVs have been built on a combination of inertial navigation system (INS) and Global Positioning System (GPS) data. The uncertainties in the GPS-based estimates can sometimes become large, for example due to shadow-induced satellite carrier signal loss, multi-path interference effects with the ground, and the variable number of satellites visible in the sky. In such situations image-based information is very useful, since cameras are inexpensive and obtain data at a high temporal rate, and (due to advances in image processing power) they provide a rich source of information about the immediate surroundings and aid successful navigation of UAVs.

In many existing autonomous systems where GPS and INS are used for navigational purposes, it may also be easy to upgrade to vision-based navigation, since most autonomous vehicles already have visual sensors onboard for other tasks such as target recognition. The task in front of us is to develop a navigation system which merges the vision-based updates with the information from the IMU and GPS systems. As a first step towards this task, this paper deals with roll angle estimation for UAVs based on horizon sensing via image processing. In this technique, an on-board camera generates images of the views along the pitch and roll axes. The images are then processed digitally to estimate the orientation of the horizon with respect to the aircraft body axes, from which the roll angle can be estimated.

IMAGE PROCESSING SUBSYSTEM

Image processing is concerned with transforming an image in one storage area into a new image in another storage area. Image understanding basically involves the study of feature extraction, segmentation, classification and interpretation. A method used for both feature extraction and segmentation is edge detection. It is a fundamental problem in image understanding and the simplest way to identify objects and obtain most of the relevant information about them, such as shapes, locations and distances from the camera. The components of a typical image understanding system are shown in the figure.


Figure 1: Components of a typical image understanding system.

1. Feature extraction:

In feature extraction it is necessary to extract certain features of objects from the scene. In our example the feature of interest is the horizon, which is perceived as the line that separates the sky and the ground. The horizon serves as a landmark for UAV navigational tasks.

2. Segmentation:

The segmentation technique is required by a vision system in order to isolate the objects from the background. Segmentation is often imagined as the splitting of the image into a number of regions, each having a high level of uniformity in some designated parameter such as brightness, color or texture. One method of segmentation is the region-growing technique. The goal of region growing is to use image characteristics to map individual pixels in an input image to sets of pixels, called regions. In this technique, pixels are placed in a region on the basis of their similar intensity. Similar adjacent regions are then merged sequentially to form larger regions. This technique is found to be useful with the aid of edge detection.

3. Classification and interpretation:

The tasks of image understanding are concerned with the ability of the vision system to interpret the information obtained from segmentation or feature extraction. This is done to provide a description of the objects in an image scene in a useful way.

HORIZON DETECTION


ALGORITHM:

In this section we explain an algorithm for detecting the horizon line in digital images. Two methods are employed: the first employs the Hough transform, and the second is based on gradient-based pixel separation. In both methods, the horizon is considered the most prominent straight line in the image.

HOUGH TRANSFORM FOR HORIZON DETECTION:

The horizon line separates the sky area from the rest of the image. The difference in image properties on the two sides of the horizon line, such as color and texture, means that the horizon often forms a prominent edge in the image. Therefore, this method aims at detecting the principal edge in the image. The detector first computes a binary edge map and subsequently identifies the most prominent straight line in this edge map.

Figure 2: Edge-based horizon detection. (a) Y component of the input image, (b) result of the Canny edge detector, (c) Hough map, (d) horizon detection result.

Figure 2 depicts the steps involved in horizon detection using the Hough transform. In the first step, Canny edge detection is performed on the luminance component of the image. Since we are only interested in the most prominent straight edge, we choose a large variance for the low-pass filter of the Canny edge detector. This smooths the detailed edges and concentrates on the main structure of the image. This yields a binary image that contains the significant edges.


The next step aims at identifying straight lines in the edge map. For this purpose, we apply the Hough transform to the edge map. The Hough transform is a means of detecting straight lines in an image: it transforms the edge map from the spatial domain to the Hough space. The Hough transform uses the parametric representation of a line, described by:

    ρ = x·cos θ + y·sin θ,

where x and y are coordinates in the spatial domain, and ρ and θ are coordinates in the Hough space. Considering a line in the spatial domain, ρ stands for the distance from the image origin (the top-left corner of the image) to this line along a vector perpendicular to the line, and θ is the angle between the horizontal axis and this vector.

The map shown in Figure 2(c) can be seen as an intensity map in which coordinates with high values (bright positions) represent straight lines in the image (in our case, the edge map). Therefore, the most prominent straight line in the edge map can be found by identifying the point in the Hough map that has the highest value (indicated by the small square at the right side of Figure 2(c)). The result is shown in Figure 2(d).
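A minimal MATLAB sketch of this edge-based detector using the Image Processing Toolbox; the input file name, the Canny sigma and the threshold choices are assumptions for illustration:

```matlab
% Edge-based horizon detection: Canny edges + Hough transform.
I  = rgb2gray(imread('frame.png'));       % hypothetical input frame
BW = edge(I, 'canny', [], 8);             % large sigma keeps only main edges
[H, theta, rho] = hough(BW);              % rho = x*cos(theta) + y*sin(theta)
P  = houghpeaks(H, 1);                    % brightest Hough point = dominant line
L  = houghlines(BW, theta, rho, P);       % extract the corresponding segment
imshow(I); hold on;                       % overlay the detected horizon
plot([L(1).point1(1) L(1).point2(1)], ...
     [L(1).point1(2) L(1).point2(2)], 'r', 'LineWidth', 2);
```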

RESULTS:

Figure 3: Edge-based horizon detection results: (a) Y component of the input image, (b) result of the Canny edge detector, (c) Hough map, (d) horizon detection result.

The edge-based detector has the following characteristics:

• It is accurate in calculating the horizon line when other straight edges are less dominant than the horizon line.

• It is misled by non-horizon straight edges: the edge-based detector cannot distinguish between the edge corresponding to the actual horizon line and other prominent straight edges.


• It fails when no clear horizon line exists in the image. For example, this occurs when the horizon is occluded by objects such as mountains, trees or buildings, as shown in Figure 4.

Figure 4: Edge-based horizon detection results: (a) Y component of the input image, (b) result of the Canny edge detector, (c) Hough map, (d) false detection of the horizon.

GRADIENT BASED HORIZON DETECTION:

Although the results obtained from Hough-transform-based horizon detection are generally satisfactory, the Hough transform method fails when no clear horizon line exists in the image. In the gradient-based horizon detection method, the horizon is assumed to be a prominent straight edge in the image. The basic common step in detecting edges in a gray-scale image is to compute the gradient at each pixel. The partial derivatives ∂f/∂x and ∂f/∂y of the intensity function f of a pixel p with respect to x and y can be computed using a gradient operator. A gradient operator is represented by a pair of masks, x and y. Each mask is a square matrix of weights mapped onto a group of pixels around an origin (or centre) pixel. The x mask is used for computing the horizontal-direction gradient, and the y mask for the vertical-direction gradient. There are several edge operators that can be used in edge detection; Table 1 shows three of the well-known operators:

Operator   x mask                        y mask
Roberts    [ 1  0 ;  0 -1 ]              [ 0  1 ; -1  0 ]
Prewitt    [ -1 0 1 ; -1 0 1 ; -1 0 1 ]  [ -1 -1 -1 ; 0 0 0 ; 1 1 1 ]
Sobel      [ -1 0 1 ; -2 0 2 ; -1 0 1 ]  [ -1 -2 -1 ; 0 0 0 ; 1 2 1 ]

Table 1: Three well-known operators used for edge detection (standard mask forms).

Surrounding pixels about a centre pixel at location p = (x, y) are used to find the gradient information of pixel p. The gradient components gx(p) = ∂f/∂x and gy(p) = ∂f/∂y can be computed using the Sobel operator by multiplying each of the weights in each mask with the intensity value of the corresponding pixel.

The gradient magnitude g(p) is computed from the two orthogonal gradients gx(p) and gy(p) as follows:

    g(p) = sqrt( gx(p)^2 + gy(p)^2 ).

A pixel p is said to be significant if its gradient magnitude g(p) is greater than some threshold value. The gradient image (the image with only the pixels of significant gradient magnitude) obtained from the original gray-scale image in Figure 5 is shown in Figure 6 (a minimal sketch of this computation follows).

Figure 5: Original gray-scale image. Figure 6: The gradient image.
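A minimal MATLAB sketch of the gradient-image computation with the Sobel masks of Table 1; the input image and the threshold rule are assumptions for illustration:

```matlab
% Sobel gradient magnitude and significance threshold.
I  = double(rgb2gray(imread('frame.png'))); % hypothetical input image
sx = [-1 0 1; -2 0 2; -1 0 1];              % Sobel x-mask (horizontal gradient)
sy = sx.';                                  % Sobel y-mask (vertical gradient)
gx = conv2(I, sx, 'same');                  % gx(p)
gy = conv2(I, sy, 'same');                  % gy(p)
g  = sqrt(gx.^2 + gy.^2);                   % gradient magnitude g(p)
T  = 4 * mean(g(:));                        % assumed threshold rule
G  = g > T;                                 % binary gradient image
```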

The process of edge detection based on the image gradient is divided into two levels:


• Pixel-level operation: to decide whether each pixel is an edge pixel or not, and to decide in which region this pixel should be included.

• Region-level operation: to find the line segment information for the region with the maximum number of pixels.

IMAGE SCANNING:

The image is scanned starting from the pixel at (1,1), excluding the pixels at the image outer boundaries. For an image of size W x H pixels, the pixels on rows 0 and H-1 and the pixels on columns 0 and W-1 are excluded, because the Sobel edge operator we are using requires eight surrounding pixels about the central pixel. The scanning is performed from top to bottom and from left to right, as shown in Figure 7.

Figure 7: Scanning direction (along the X-axis and Y-axis of the image).

1. While scanning the image, the compute gradient function is used to apply the Sobel operator (Table 1) at each pixel p = (x, y) to compute its gradient magnitude. The Sobel operator provides good performance and is relatively insensitive to noise. A two-dimensional array I[x][y] represents the intensity values f(p) for p = (x, y). Pixels having gradients above some threshold value are selected to form a gradient image in which edges are prominently noticeable.

2. Connectivity test: the connectivity test of a pixel is intended to see if the pixel can be connected to some region containing one of its neighbors. This is done by the function connectivity test. If the connectivity test is done for the pixel p = (x, y), the 8 pixels surrounding it are included in the test, as shown in Figure 8.


Figure 8: Connectivity test.

If one of the neighbors of the current pixel p belongs to some region R, then the pixel should be added to that region R, as shown in Figure 9. If it is found that p does not belong to any existing region, a new region is created and the new pixel is considered the first pixel of the newly created region, as depicted in Figure 10.

Figure 9: A pixel is included in a region as one of its neighbors if it satisfies the connectivity test.

Figure 10: The current pixel starts a new region if it cannot belong to a region as one of its neighbors.

As a result of the connectivity test, all the pixels corresponding to a large prominent line in the gradient image are added to one region, as shown in Figure 11 (a minimal sketch follows).

Figure 11: (a) Gradient image. (b) Result of connectivity test.
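A minimal MATLAB sketch of the scan-and-merge connectivity test on a binary gradient image G (assumed to come from the thresholding step above); this single-pass version is a simplification, since regions that only touch later in the scan are not merged:

```matlab
% Single-pass 8-neighbour connectivity test on a binary gradient image G.
[h, w] = size(G);
L = zeros(h, w);                    % region labels, 0 = unassigned
next = 0;                           % number of regions created so far
for y = 2:h-1                       % outer boundary rows/columns excluded
    for x = 2:w-1
        if ~G(y, x), continue; end
        nb = L(y-1:y+1, x-1:x+1);   % labels of the 8 neighbours (and self)
        nb = nb(nb > 0);
        if isempty(nb)
            next = next + 1;        % no labelled neighbour:
            L(y, x) = next;         % current pixel starts a new region
        else
            L(y, x) = min(nb);      % join an existing neighbouring region
        end
    end
end
counts = accumarray(L(L > 0), 1);   % pixel count per region
[~, best] = max(counts);            % largest region ~ the horizon line
```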


3. Line segment: after performing the connectivity test on the gradient image, the region with the maximum number of pixels is selected and its end points are joined to form a line segment, which corresponds to the horizon line in the original image.

Results:

Figure: (a) Gray-scale image, (b) gradient image, (c) image after connectivity test, (d) horizon line plotted on the original image.


Roll angle estimation: once the horizon line is detected using one of the methods described above, the next step is to estimate the roll angle of the UAV. The angle between the horizon line and a horizontal reference line gives the roll angle estimate (a minimal sketch follows). Figure 13 shows the relative inclination of the UAV with respect to the horizon line.
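A minimal MATLAB sketch of the roll-angle estimate from the detected horizon end points; the end-point coordinates are illustrative, and the sign convention (image y grows downward) is an assumption:

```matlab
% Roll angle from the horizon line's end points (x1,y1) and (x2,y2).
x1 = 40;  y1 = 130;                       % illustrative end points
x2 = 600; y2 = 95;
roll = atan2(y1 - y2, x2 - x1) * 180/pi;  % degrees vs. horizontal reference
```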


8. SCOPE AND OBJECTIVES OF PHASE-3

The work includes tracking of a moving object in video taken from the vision sensor, which here is either a camera mounted on the MAV or a fixed terrestrial camera. Implementing it for the terrestrial camera, using MATLAB-7 as the programming tool, and applying it to radars is the sole aim of the work.

BLUEPRINT:

1. Conversion of the video clip obtained from the sensor into image frames.

2. Model each pixel as a mixture of Gaussians and use an approximation to update the model.

3. The Gaussian distributions of the adaptive mixture model are then evaluated to determine which are most likely to result from a background process.

4. Each pixel is classified based on whether the Gaussian distribution that represents it most effectively is considered part of the background model.

5. Use Kalman filtering to filter out the unconnected components (a minimal sketch follows the list).

6. Achieve a stable, real-time outdoor tracker which reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes.
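A minimal MATLAB sketch of one predict/update cycle of the constant-velocity Kalman filter planned in step 5; all noise parameters, the initial state and the measured centroid are illustrative assumptions:

```matlab
% One Kalman predict/update cycle for a tracked object centroid.
dt = 1;                                        % frame interval
F  = [1 0 dt 0; 0 1 0 dt; 0 0 1 0; 0 0 0 1];   % constant-velocity transition
Hm = [1 0 0 0; 0 1 0 0];                       % we observe position only
Q  = 0.01 * eye(4);  Rn = 4 * eye(2);          % process / measurement noise
x  = [120; 80; 0; 0]; P = 100 * eye(4);        % initial state and covariance
z  = [123; 82];                                % measured centroid (assumed)
x  = F * x;           P = F * P * F' + Q;      % predict
K  = P * Hm' / (Hm * P * Hm' + Rn);            % Kalman gain
x  = x + K * (z - Hm * x);                     % update with the innovation
P  = (eye(4) - K * Hm) * P;                    % updated covariance
```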
