
Hindawi Publishing Corporation
Journal of Electrical and Computer Engineering
Volume 2013, Article ID 598708, 12 pages
http://dx.doi.org/10.1155/2013/598708

Research Article

Enhancement of Background Subtraction Techniques Using a Second Derivative in Gradient Direction Filter

Farah Yasmin Abdul Rahman,1 Aini Hussain,1 Wan Mimi Diyana Wan Zaki,1 Halimah Badioze Zaman,2 and Nooritawati Md Tahir3

1 Smart Engineering Systems Laboratory, Department of Electrical, Electronic & Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor, Malaysia
2 Institute of Visual Informatics, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor, Malaysia
3 Faculty of Electrical Engineering, Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia

Correspondence should be addressed to Aini Hussain; [email protected]

Received 27 March 2013; Revised 5 September 2013; Accepted 24 September 2013

Academic Editor: Vijayan K. Asari

Copyright © 2013 Farah Yasmin Abdul Rahman et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A new approach was proposed to improve traditional background subtraction (BGS) techniques by integrating a gradient-based edge detector called a second derivative in gradient direction (SDGD) filter with the BGS output. The four fundamental BGS techniques, namely, frame difference (FD), approximate median (AM), running average (RA), and running Gaussian average (RGA), produced imperfect foreground pixels, specifically at the boundary: the pixel intensity was lower than the preset threshold value, and the blob size was smaller. The SDGD filter was introduced to enhance edge detection upon the completion of each basic BGS technique as well as to complement the missing pixels. The results proved that fusing the SDGD filter with each elementary BGS technique increased segmentation performance and suited postrecording video applications. The analysis using F-score and average accuracy percentage evidently proved this, and, as such, it can be concluded that this new hybrid BGS technique improves upon existing techniques.

1. Introduction

Object extraction is a technique used in suppressing the background of a video scene to detect subjects that appear in the frame. The technique involves comparing or subtracting the current frames from the background frame and treating the remaining pixels as foreground [1]. Prior research on background subtraction (BGS) used several parametric BGS techniques, such as running average [2–4], running Gaussian average [5–7], approximate median filter [7, 8], and Gaussian mixture model [9–11]. These parametric techniques determine the foreground and update the subsequent background based on the distribution of intensity values [12]. Aside from these techniques, other studies have introduced nonparametric models that detect foreground and background based on the statistical properties of the intensity [13]. Other nonparametric models include a kernel density estimator [14] and mean shift estimation [15].

This work focuses on basic BGS techniques, namely, frame differencing (FD), approximate median (AM), running average (RA), and running Gaussian average (RGA). The motivation of this work lies in the fact that most edge pixels are undetected after performing object extraction techniques based on FD, AM, RA, and RGA. In this study, however, we have overcome this limitation by detecting all edge pixels; hence, a perfect blob can be retrieved through morphological procedures.

This is done by applying an SDGD filter on the results of background suppression and combining the foreground pixels generated by BGS techniques with the detected edge as our extracted object. The edge pixels are expected to fill in the boundary gap, which creates better connections among pixels in the boundary. This leads to better foreground frame detection.

This paper is organized as follows. Section 2 presents an overview of several basic BGS techniques and the SDGD filter. Section 3 describes the methodology. Section 4 discusses the results. Finally, Section 5 concludes our paper.

2. Literature Review

This section provides a review of the literature on the four BGS techniques evaluated in this study, namely, frame differencing, approximate median, running average, and running Gaussian average. SDGD filter studies are also presented.

2.1. Frame Differencing. Frame differencing (FD) is the most fundamental technique in BGS. FD involves finding the absolute difference between the current frame and a previous or background frame [1]. The absolute difference is then compared with an appropriate threshold value A to detect the object, as shown in (1), where F_i is the current frame intensity value, B_i is the background intensity value, and Fg_i is the foreground intensity value. This technique uses the same background frame for all video sequences:

$$\mathrm{Fg}_i(x, y) = \begin{cases} 1, & \left|F_i(x, y) - B_i(x, y)\right| > A \\ 0, & \text{otherwise.} \end{cases} \tag{1}$$
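Equation (1) maps directly onto a few lines of array code. The following is a minimal NumPy sketch, not the authors' implementation; the function and parameter names are ours:

```python
import numpy as np

def frame_difference(frame, background, threshold):
    """Frame differencing per eq. (1): a pixel is foreground (1) when the
    absolute difference to the background frame exceeds threshold A."""
    # Cast to a signed type so the subtraction cannot underflow on uint8 input.
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return (diff > threshold).astype(np.uint8)
```

With a typical 8-bit grayscale frame, a threshold A of around 20–40 is a common starting point; the paper itself leaves A as a preset value.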

2.2. Approximate Median Filter. The approximate median (AM) algorithm is adaptive, dynamic, nonprobabilistic, and intuitive [8]. AM is obtained by calculating the difference between two video frames and using this difference in determining the perfect method for updating the background. AM is considered one of the most acceptable methods because it provides the most accurate pixel identification.

Several studies have evaluated the efficacy of the AM algorithm. He et al. [7] tested the effectiveness of the AM algorithm as part of their optimized algorithm for vehicle detection in an embedded system. Their approach yields highly accurate information with less computational time when detecting and tracking vehicles in a traffic scene. Equation (2) presents how AM updates the reference frame of every video sequence. The succeeding background frame B_{i+1} depends on the intensity values of both the present frame F_i and the background frame B_i:

$$B_{i+1}(x, y) = \begin{cases} B_i(x, y) + 1, & F_i(x, y) > B_i(x, y) \\ B_i(x, y) - 1, & \text{otherwise.} \end{cases} \tag{2}$$
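The update in eq. (2) nudges each background pixel one gray level toward the current frame, so over many frames the background converges to an approximate temporal median. A minimal sketch of that update (names are ours):

```python
import numpy as np

def approximate_median_update(frame, background):
    """Approximate median update per eq. (2): increment background pixels
    that are below the current frame by 1, decrement the rest by 1."""
    # Signed arrays are assumed so the -1 step cannot wrap around on uint8.
    step = np.where(frame > background, 1, -1)
    return background + step
```

Calling this once per incoming frame is all the background maintenance the AM technique requires, which is why it is popular in embedded settings.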

2.3. Running Average. Running average (RA) is another technique for updating a background image. A pixel is classified as background when the pixel value belongs to the corresponding distribution of the background model; if otherwise, the mean of the distribution is updated [4]. The updated image is then used in the changing scene. The computational effort of an RA background is lower because only the weighted sums of two images are computed, and thus low computational and space complexities are produced [3]. Moreover, several researchers have utilized this method to detect moving objects in video captured by a static camera.

Several studies were conducted to enhance the efficiency of BGS based on the RA method [2–4]. The outcome by Park et al. in [4] showed that the application of a hierarchical data structure significantly increased the processing speed with accurate motion detection. This outcome can be attributed to the updating of the background frame by the RA method. Equation (3) shows a specified learning rate based on the previous background frame, where α is the learning rate and A is the threshold value:

$$B_{i+1}(x, y) = \begin{cases} B_i(x, y), & F_i(x, y) > A \\ \alpha F_i(x, y) + (1 - \alpha)\, B_i(x, y), & \text{otherwise.} \end{cases} \tag{3}$$
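Equation (3) freezes the background at pixels flagged as foreground and blends elsewhere. A small NumPy sketch of one update step, under our own naming (the paper does not publish code):

```python
import numpy as np

def running_average_update(frame, background, alpha, threshold):
    """Running average update per eq. (3): foreground pixels (F_i > A)
    keep the old background value; all other pixels are blended with
    learning rate alpha."""
    frame = frame.astype(float)
    background = background.astype(float)
    blended = alpha * frame + (1.0 - alpha) * background
    return np.where(frame > threshold, background, blended)
```

Typical values of α are small (e.g. 0.01–0.1) so that the background adapts slowly and moving objects are not absorbed into it.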

2.4. Running Gaussian Average. This method combines both the Gaussian function and RA. Overall, the running Gaussian average (RGA) method has a significant advantage over other approaches because it requires less processing time and utilizes less memory compared with nonrecursive methods such as mixture of Gaussians (MoG) and kernel density estimation (KDE) [9]. Equation (4) shows how the reference frame, represented by the mean μ, is updated in each video sequence using this method. Unlike AM and RA, this method uses 2σ_i as a threshold value:

$$\mu_{i+1} = \alpha F_i + (1 - \alpha)\,\mu_i, \qquad \sigma^2_{i+1} = \alpha\,(F_i - \mu_i)^2 + (1 - \alpha)\,\sigma^2_i. \tag{4}$$
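The recursive mean/variance update of eq. (4), plus the 2σ decision rule, can be sketched as follows (a minimal NumPy illustration with our own function names, not the authors' code):

```python
import numpy as np

def running_gaussian_update(frame, mu, var, alpha):
    """Per-pixel running Gaussian update per eq. (4): blend the mean and
    variance recursively with learning rate alpha."""
    frame = frame.astype(float)
    new_mu = alpha * frame + (1.0 - alpha) * mu
    new_var = alpha * (frame - mu) ** 2 + (1.0 - alpha) * var
    return new_mu, new_var

def is_foreground(frame, mu, var):
    """Flag pixels deviating from the mean by more than 2*sigma."""
    return np.abs(frame - mu) > 2.0 * np.sqrt(var)
```

Only the two per-pixel statistics (μ, σ²) are stored, which is the memory advantage over MoG and KDE noted above.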

2.5. Second Derivative in Gradient Direction (SDGD) Filter. In image processing studies, researchers use first- and second-order derivatives to detect the edge of an object based on its gradient. With the first derivative, the edge location is defined at the maximum position of the steep ascent and descent [16]. Traditional edge detection methods, such as those by Prewitt, Sobel, and Roberts, convolve the image with a specific kernel [16, 17]. However, these techniques were reported to be sensitive to noise and inaccurate [17]. In 1986, the Canny edge detector was introduced, which represented an improvement over the traditional methods [17, 18]. The detector applies Gaussian smoothing to reduce noise, unwanted details, and textures, together with nonmaximum suppression and hysteresis thresholding, to find the edges [19].

The second-order derivative approach defines the edge pixels based on changes in brightness, or zero crossings, in the image area [19, 20]. SDGD is a nonlinear operator that can be expressed in terms of first and second derivatives. Additionally, similar to Canny, SDGD is combined with a Gaussian low-pass filter for smoothing purposes [21]. Moreover, a Laplace operator is used to simplify the SDGD operation [22].

The Laplacian is defined as

$$\nabla^2 a = \frac{\partial^2 a}{\partial x^2} + \frac{\partial^2 a}{\partial y^2} = (h_{2x} \otimes a) + (h_{2y} \otimes a), \tag{5}$$

where h_{2x} and h_{2y} are the second derivative filters.


The basic versions of the second derivative filters are given by

$$[h_{2x}] = [h_{2y}] = \begin{bmatrix} 1 & -2 & 1 \end{bmatrix}, \qquad [h_2] = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}. \tag{6}$$

Associating (5) with the Gaussian filter yields

$$b = g_\sigma \otimes (h_2 \otimes a) = (g_\sigma \otimes h_2) \otimes a, \tag{7}$$

where g_σ is the Gaussian low-pass filter.

Five partial derivatives are used in the SDGD filter, as follows:

$$A_{xx} = \frac{\partial^2 a}{\partial x^2}, \quad A_{xy} = \frac{\partial^2 a}{\partial x\,\partial y}, \quad A_{yy} = \frac{\partial^2 a}{\partial y^2}, \quad A_x = \frac{\partial a}{\partial x}, \quad A_y = \frac{\partial a}{\partial y}. \tag{8}$$

Therefore,

$$\mathrm{SDGD} = \frac{A_{xx} A_x^2 + 2 A_{xy} A_x A_y + A_{yy} A_y^2}{A_x^2 + A_y^2}. \tag{9}$$
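Equations (8) and (9) can be realized with any derivative estimator. The sketch below uses plain finite differences (np.gradient) in place of the Gaussian derivative filters of eq. (7), so the smoothing step is deliberately omitted; it illustrates the combination formula of eq. (9) only, and all names are ours:

```python
import numpy as np

def sdgd(image, eps=1e-12):
    """Second derivative in the gradient direction, eq. (9), using
    finite-difference derivatives (axis 0 = y, axis 1 = x)."""
    a = image.astype(float)
    a_y, a_x = np.gradient(a)          # first derivatives A_y, A_x
    a_yy, a_yx = np.gradient(a_y)      # A_yy and mixed derivative
    a_xy, a_xx = np.gradient(a_x)      # mixed derivative and A_xx
    num = a_xx * a_x**2 + 2.0 * a_xy * a_x * a_y + a_yy * a_y**2
    den = a_x**2 + a_y**2 + eps        # eps guards flat regions
    return num / den
```

Note that the response is zero on a linear intensity ramp (all second derivatives vanish), which is consistent with SDGD marking edges at zero crossings rather than at gradient maxima.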

A detailed explanation of SDGD can be found in [20, 21, 23]. In [19, 24], SDGD was presented as a filter used in finding edges and measuring objects. Several studies utilize the SDGD filter. For example, Aarnick et al. [25] used this filter in analyzing ultrasound images of male kidneys and prostates in their study on preprocessing algorithms for edge detection at multiple resolution scales. Their study [25] reported that detecting the contour of objects in grey medical images could be improved by applying an adaptive filter size in SDGD. Nader El-Glaly [24] used SDGD as part of her work on a digital inpainting algorithm. Hagara and Moravcik [23] introduced the PLUS operator, a combination of the SDGD filter and the Laplace operator, for edge detection. Similar findings were obtained using the PLUS and SDGD filters with a kernel size of nine or lower; PLUS yielded better results when the kernel size was greater than nine and was suitable for locating the edges of small objects. Similarly, Verbeek and Van Vliet [22] compared Laplace, SDGD, and PLUS operators derived from 2D and 3D images. Their research confirmed the findings in [23].

The idea of combining two methods in one algorithm was inspired by a study by Zheng and Fan [3], where RA and temporal differencing were combined to detect moving objects. Another example of hybrid research in BGS was conducted by Lu and Wang [26], who crossbred optical flow and double background filtering to detect moving objects. Zaki et al. [27] combined frame differencing with a scale invariant feature detector to detect moving objects in various environments.

Based on the study of Persoon et al. [19], the SDGD filter gave better surface localization, especially in highly curved areas, compared with the Canny edge detection technique. Thus, we adopted this filter in our present work. In addition, Persoon et al. showed that SDGD guaranteed minimal detail smoothing, which led to better visualization of polyps in computed tomography (CT) scan data. This finding is aligned with our results reported in [28]. Further, the study by Nader El-Glaly [24] used the SDGD filter in developing an enhanced partial-differential-equation-based digital inpainting algorithm to find the missing data in digital images.

To the best of our knowledge, this study is new because no prior work exists that integrates the SDGD filter with a BGS technique. Although Al-Garni and Abdennour used edge detection and the FD technique to find moving vehicles, no information was provided on the edge detection technique they utilized [29]. We used an SDGD filter to enhance the performance of existing background subtraction techniques by combining the foreground pixels generated by BGS techniques with the detected edge as our extracted object. The edge pixels are expected to fill in the boundary gap, which will create better connections among pixels in the boundary. This research is an extension of our previous work published in [28]; it uses more data from a variety of data sources, and a more detailed analysis is done.

3. Methodology

This section discusses the databases used and the proposed method.

3.1. Dataset. This study utilized datasets that were acquired from selected prerecorded video collections of several online databases.

(a) Smart Engineering System Research Group (SESRG) UKM Collections. This video collection consists of various human actions and activities recorded by students involved in SESRG studies on smart surveillance systems. Besides humans, this database also has a collection of moving cars that are used as nonhuman samples for classification, which will be explained further in Section 3.4.

(b) CMU Graphics Lab Motion Capture (MoCap) Database [30]. This database, which is owned by Carnegie Mellon University, contains 2506 trials in 6 categories and 23 subcategories. The videos were recorded in an indoor environment.

(c) CMU Motion of Body (MoBo) Database [31]. This database, which is also owned by Carnegie Mellon University, consists of videos showing six different angles of a subject walking on a treadmill.

(d) Multicamera Human Action Video Data (MuHAVi) [32]. This database is owned by the Digital Imaging Research Centre at Kingston University. It presents 17 action classes with 8 camera views.

(e) Human Motion Database (HMDB51) [33]. This database consists of collections of edited videos from digitized movies and YouTube. The collection contains 51 action categories with 7000 manually annotated clips.


Figure 1: Flowchart of the proposed algorithm. (Start → generate the reference frame → subtract the current frame from the reference frame to obtain Fdiff → perform thresholding to obtain Fx and, in parallel, perform the SDGD filter on Fdiff to produce Fsdgd → fuse Fsdgd with Fx to produce Fg → perform a morphological operation on Fg to obtain Ffinal → update the reference frame unless the technique is FD → repeat until the last frame → End.)
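The fusion step in Figure 1 combines the thresholded BGS mask Fx with the SDGD response Fsdgd. The paper does not specify the fusion operator; the sketch below assumes a pixel-wise OR after thresholding the SDGD response, which matches the stated goal of adding detected edge pixels to the foreground mask (names and the edge threshold are our assumptions):

```python
import numpy as np

def fuse_bgs_with_edges(fg_mask, sdgd_map, edge_threshold):
    """Fuse a binary BGS mask (Fx) with an SDGD response map (Fsdgd):
    threshold the SDGD magnitude into an edge mask, then OR it with the
    foreground mask to fill boundary gaps (producing Fg)."""
    edge_mask = np.abs(sdgd_map) > edge_threshold
    return np.logical_or(fg_mask.astype(bool), edge_mask).astype(np.uint8)
```

A morphological closing on the fused mask would then yield the final blob Ffinal, as in the flowchart.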

The initial background for videos taken from databases (a), (b), (d), and (e) was modeled using the median value from a selected frame interval. We used five frames from the video sequences with 10-frame intervals between them, that is, F1, F10, F20, F30, and F40. Next, the median value of these selected frames was used as the reference image. Details of this technique are presented in [34]. Meanwhile, database (c) provided the background reference frame. All datasets except the MuHAVi videos were manually segmented to obtain the ground truth.
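The median background modeling described above reduces to a per-pixel median over the selected frames. A minimal NumPy sketch (the frame-selection indices follow the text; the function name is ours):

```python
import numpy as np

def median_background(frames):
    """Model the initial background as the per-pixel median of a few
    selected frames (e.g. F1, F10, F20, F30, and F40)."""
    stack = np.stack([f.astype(float) for f in frames], axis=0)
    return np.median(stack, axis=0)
```

Because a moving object rarely covers the same pixel in a majority of the sampled frames, the median suppresses transient foreground and keeps the static scene.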

3.2. The Proposed Algorithm. The methodology of our proposed technique is as described in the flowchart shown in Figure 1.

First, the dataset was tested using the following basic parametric BGS techniques: FD, RA, AM, and RGA. Next, the SDGD filter was fused with each technique by combining the output of a background technique with the SDGD filter output. The SDGD filter was selected as a segmentation tool because it produced better results compared with other edge detection techniques (Sobel, Canny, and Roberts) [19, 28].

3.3. Evaluation Method. To evaluate the performance of each BGS technique, we calculated the average of recall, precision, F-score, and accuracy.

(a) Recall (Rcl) refers to the detection rate, which is calculated by comparing the total number of detected true positive pixels with the total number of true positive pixels in the ground truth frame [35]. This is also known as sensitivity. The following equation shows how recall is calculated:

$$\text{recall} = \frac{\Sigma\,\mathrm{TP}}{\Sigma\,\mathrm{TP} + \Sigma\,\mathrm{FN}}, \tag{10}$$

where TP is true positive and FN is false negative.

(b) Precision (Prcsn) is the ratio between the detected true positive pixels and the total number of positive pixels detected by the method [35, 36], also known as the positive predictive value:

$$\text{precision} = \frac{\Sigma\,\mathrm{TP}}{\Sigma\,\mathrm{TP} + \Sigma\,\mathrm{FP}}, \tag{11}$$

where FP is false positive.

(c) F-measure, or balanced F-score, is the weighted harmonic mean of recall and precision. It is used as a single measurement for comparing different methods [35, 36]:

$$F = \frac{2 \cdot \text{recall} \cdot \text{precision}}{\text{recall} + \text{precision}}. \tag{12}$$

(d) Accuracy is the percentage of correct data retrieval. It is calculated by dividing the number of true positive pixels plus true negative pixels by the total number of pixels in the frame. The following equation displays the calculation of accuracy [36]:

$$\text{accuracy} = \frac{\Sigma\,\mathrm{TP} + \Sigma\,\mathrm{TN}}{\Sigma\,\mathrm{TP} + \Sigma\,\mathrm{TN} + \Sigma\,\mathrm{FP} + \Sigma\,\mathrm{FN}} \times 100. \tag{13}$$

This study utilized videos with multiple frames. Hence, we present a comparison of the average F-score and average accuracy percentage as the overall performance benchmark for each BGS technique with and without the SDGD filter. The following equations are used to calculate the percentage of improvement:

$$\mathrm{im}(F\text{-score}) = \left(F\text{-score}_{\text{with}} - F\text{-score}_{\text{without}}\right) \times 100, \qquad \mathrm{im}(\text{accuracy}) = \text{accuracy}_{\text{with}} - \text{accuracy}_{\text{without}}. \tag{14}$$
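Equations (10)–(13) can be computed directly from a pair of binary masks. A compact sketch (our own helper, not the authors' evaluation script):

```python
import numpy as np

def segmentation_metrics(predicted, ground_truth):
    """Recall, precision, F-score, and accuracy (eqs. (10)-(13)) computed
    from two binary foreground masks of equal shape."""
    p = predicted.astype(bool)
    g = ground_truth.astype(bool)
    tp = int(np.sum(p & g))     # foreground correctly detected
    tn = int(np.sum(~p & ~g))   # background correctly rejected
    fp = int(np.sum(p & ~g))    # spurious foreground
    fn = int(np.sum(~p & g))    # missed foreground
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_score = 2 * recall * precision / (recall + precision)
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
    return recall, precision, f_score, accuracy
```

Running this per frame and averaging over a sequence reproduces the "average F-score" and "average accuracy percentage" used as the benchmark in Tables 1–3.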


Figure 2: Background images of the data used in this study: (a) A1 (outdoor), (b) A2 (outdoor), (c) E1 (outdoor), (d) B1 (indoor), (e) C1 (indoor), and (f) D1 (indoor).

Figure 3: Original image (a) and its ground truth (b) of a frame in A1 for an outdoor environment.

3.4. Classification. Using an artificial neural network (ANN) as the classifier, the segmented images from the proposed technique were subjected to classification testing. The training input of the ANN was extracted from 1500 randomly chosen segmented frames/images: 750 human blob images represented human samples, and another 750 car blob images represented non-human samples.

A scaled conjugate gradient and the backpropagation rules were chosen to train the classifier. The ANN was designed with one hidden layer containing ten hidden neurons and an output layer containing two neurons. Both layers used sigmoid as the activation function and the mean squared error value as the performance function. Images were classified as either human or non-human. Next, another set of 1000 frames/images was chosen for testing. We applied leave-one-out cross validation in our study. Ten experiments were run, and the average classification rate was used to evaluate the recognition of human versus non-human.

The evaluation was based on the segmented frames/images generated from the enhanced FD technique with the SDGD filter added to the algorithm. In this experiment, we used only FD instead of the other BGS techniques because it has the fastest processing time. We also performed statistical tests, such as recall, precision, and F-score, on the obtained classification results by taking a positive identification for the classification of a human and a negative one for the classification of a non-human.
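The network topology described above (ten sigmoid hidden neurons, two sigmoid output neurons) can be sketched as a plain forward pass. The training itself (scaled conjugate gradient backpropagation) is not reproduced here, and the 8-dimensional feature vector, random weights, and all names are placeholders of ours:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ann_forward(features, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of the described classifier: one hidden layer of 10
    sigmoid neurons, then an output layer of 2 sigmoid neurons whose
    activations score the human and non-human classes."""
    hidden = sigmoid(features @ w_hidden + b_hidden)
    return sigmoid(hidden @ w_out + b_out)

# Illustrative shapes only: 8 input features -> 10 hidden -> 2 outputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
scores = ann_forward(x,
                     rng.normal(size=(8, 10)), np.zeros(10),
                     rng.normal(size=(10, 2)), np.zeros(2))
```

The predicted class is simply the output neuron with the larger activation.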

4. Results and Discussion

This section discusses the robustness of the proposed technique based on videos taken from five different databases. To confirm the robustness of the proposed algorithm, we tested it using videos that presented different environments and camera angles. Specifically, the video sequences were recorded in multiple environments (indoors and outdoors) and from multiple views.


Figure 4: Original image (a) and its ground truth (b) of a frame in B1 for an indoor environment.

Figure 5: Segmentation performance using various background subtraction techniques (FD, AM, RA, and RGA; without and with SDGD) for an outdoor environment.


Figure 6: Segmentation performance using various background subtraction techniques (FD, AM, RA, and RGA; without and with SDGD) for an indoor environment.

Figures 2(a)–2(f) show some of the background frames used in this research. These background frames were generated by using the median value of selected frames in the video sequences, except for the videos obtained from the MoBo database. Rather than stating the filename, we assigned letters from A to E to identify the videos representing the data: A refers to SESRG, B to MoCap, C to MoBo, D to MuHAVi, and E to HMDB51. The words in brackets indicate whether the video environment was indoor or outdoor.

Next, we present the subjective results of our object extraction. Since this study involves video data with multiple frames, we depict only the results obtained for frame number 10 in data A1 and frame number 29 in data B1 to represent the outdoor and indoor scene samples. Figures 3(a) and 4(a) illustrate the original images of a frame in videos A1 and B1, respectively. Figures 3(b) and 4(b) depict the ground truth images.

Figures 5 and 6 present the extracted subjects in both indoor and outdoor environments using FD, AM, RA, and RGA with and without the SDGD filter.

The first column of Figures 5 and 6 shows that all the basic BGS techniques were capable of detecting the object in the scene of interest in the tested videos. However, many pixels were missing, which resulted in a smaller blob compared with the ground truth image. Our proposed technique solves the problem of missing pixels and reduced blob size because it combines the SDGD filter with FD, AM, RA, and RGA, as shown in column 2 of Figures 5 and 6. The pixel size of the extracted object is slightly enlarged, and the blob becomes more compound. Foreground detection showed significant improvement after the proposed technique was applied to all datasets.

Tables 1 and 2 show the numbers of TP, TN, FP, and FN for the selected indoor and outdoor samples with and without the


Figure 7: Graph of F-score versus frame number for the outdoor sample: (a) FD, (b) AM, (c) RA, and (d) RGA.

Table 1: Performance for frame 10 in the outdoor sample with and without the SDGD filter.

Method | TP | FP | FN | TN | Recall | Precision | F-score
FD without SDGD | 2046 | 1279 | 1091 | 80064 | 0.65 | 0.62 | 0.63
FD with SDGD | 2521 | 804 | 616 | 80539 | 0.80 | 0.76 | 0.78
AM without SDGD | 1986 | 1785 | 1151 | 79558 | 0.63 | 0.53 | 0.57
AM with SDGD | 2374 | 951 | 763 | 80392 | 0.76 | 0.71 | 0.73
RA without SDGD | 1959 | 1781 | 1178 | 79562 | 0.62 | 0.52 | 0.57
RA with SDGD | 2303 | 1022 | 834 | 80321 | 0.73 | 0.69 | 0.71
RGA without SDGD | 2441 | 912 | 696 | 80431 | 0.78 | 0.73 | 0.75
RGA with SDGD | 2654 | 499 | 483 | 80844 | 0.85 | 0.84 | 0.84

Table 2: Performance for frame 29 in the indoor sample with and without the SDGD filter.

Method | TP | FP | FN | TN | Recall | Precision | F-score
FD without SDGD | 2336 | 1372 | 809 | 79963 | 0.74 | 0.63 | 0.68
FD with SDGD | 2788 | 920 | 1165 | 79607 | 0.71 | 0.75 | 0.73
AM without SDGD | 2771 | 1437 | 808 | 79964 | 0.74 | 0.61 | 0.67
AM with SDGD | 2726 | 982 | 1077 | 79695 | 0.72 | 0.74 | 0.73
RA without SDGD | 2289 | 1419 | 858 | 79914 | 0.73 | 0.62 | 0.67
RA with SDGD | 2723 | 985 | 1106 | 79666 | 0.71 | 0.73 | 0.72
RGA without SDGD | 2564 | 1092 | 1175 | 79649 | 0.69 | 0.70 | 0.69
RGA with SDGD | 2979 | 677 | 1535 | 79289 | 0.67 | 0.81 | 0.73


Figure 8: Graph of F-score versus frame number for the indoor sample: (a) FD, (b) AM, (c) RA, and (d) RGA.

addition of the SDGD filter to the BGS techniques. The values of recall, precision, and F-score were then calculated based on (10)–(12).

Based on the findings shown in Tables 1 and 2, we can see that the number of TPs increased significantly, which proves that our technique is able to detect more of the compound blob than the original methods. This finding is also in line with the increase in precision values. Tables 1 and 2 also show that the F-score increased for both samples when we added the SDGD filter.

The graphs in Figures 7 and 8 show the F-score trends in both A1 and B1 for the four BGS techniques, namely, FD, AM, RA, and RGA. The solid lines represent the F-score results using our proposed technique, that is, with the SDGD filter, whereas the dashed lines represent the F-score results without the SDGD filter. Based on Figures 7 and 8, higher F-score values were noted for the A1 and B1 videos when using the proposed hybrid technique compared with those obtained using the basic BGS techniques. Thus, our proposed technique improves upon traditional BGS techniques.

To confirm the effectiveness of the proposed technique, we tested the algorithm on six different videos with six different backgrounds, obtained from the five different databases. Table 3 shows the performance of FD, AM, RA, and RGA with and without the SDGD filter in terms of the F-score and average accuracy percentage of all six video samples. Columns 5 and 8 show the percentage of improvement for both the F-score and the average accuracy percentage. Based on Table 3, the use of an SDGD filter improved the average F-score values for all data compared with the values produced by the methods without SDGD. Column 5 of Table 3 shows that the F-score values improved by 1% to 9%. Therefore, the proposed technique, compared with existing techniques, enhances object extraction.

Additionally, Table 3 depicts an increment in average accuracy percentage for each tested technique, except for videos A2 and E1. A2 had poor video quality because the footage was taken in a corridor without proper lighting. Because of the poor video quality and bad lighting conditions, the SDGD filter was unable to segment the foreground subjects properly and produced an unwanted shadow in the foreground. Nevertheless, our proposed technique is capable of detecting foreground pixels with over 90% accuracy on all videos tested.

Meanwhile, Table 4 exhibits the results of classifying human and non-human recognition based on the segmented frames generated by the proposed technique. In Table 4, the ANN successfully recognized human and non-human images from the frames generated using the improved FD technique. Incorrect classifications were minimal in all ten experiments.

Figure 9 presents a matrix describing the overall results of classification testing. The average recognition rates for the human and non-human categories were 98.78% and 98.72%, respectively. The rate is high for both categories because our algorithm provided good human and non-human blob images, which

10 Journal of Electrical and Computer Engineering

Table 3: Performance of each technique.

| Name | Method | F-score without SDGD | F-score with SDGD | im* | Accuracy (%) without SDGD | Accuracy (%) with SDGD | im* |
| A1 | FD  | 0.75 | 0.78 | 3 | 98.45 | 98.47 | 0.02 |
| A1 | AM  | 0.69 | 0.75 | 6 | 98.16 | 98.29 | 0.13 |
| A1 | RA  | 0.74 | 0.77 | 3 | 98.42 | 98.44 | 0.02 |
| A1 | RGA | 0.79 | 0.81 | 2 | 98.42 | 98.47 | 0.03 |
| A2 | FD  | 0.78 | 0.79 | 1 | 96.74 | 96.65 | -0.09 |
| A2 | AM  | 0.80 | 0.81 | 1 | 97.31 | 97.14 | -0.17 |
| A2 | RA  | 0.76 | 0.78 | 2 | 96.91 | 96.83 | -0.08 |
| A2 | RGA | 0.76 | 0.77 | 1 | 95.75 | 95.65 | -0.10 |
| B1 | FD  | 0.68 | 0.74 | 6 | 97.99 | 98.12 | 0.13 |
| B1 | AM  | 0.71 | 0.76 | 5 | 98.22 | 98.37 | 0.15 |
| B1 | RA  | 0.71 | 0.76 | 5 | 98.25 | 98.38 | 0.13 |
| B1 | RGA | 0.64 | 0.68 | 4 | 97.47 | 97.51 | 0.04 |
| C1 | FD  | 0.84 | 0.85 | 1 | 95.64 | 95.69 | 0.05 |
| C1 | AM  | 0.79 | 0.80 | 1 | 94.87 | 95.03 | 0.16 |
| C1 | RA  | 0.57 | 0.65 | 8 | 91.69 | 92.17 | 0.48 |
| C1 | RGA | 0.80 | 0.81 | 1 | 94.95 | 95.07 | 0.12 |
| D1 | FD  | 0.68 | 0.71 | 3 | 97.84 | 98.07 | 0.23 |
| D1 | AM  | 0.71 | 0.75 | 4 | 98.13 | 98.37 | 0.22 |
| D1 | RA  | 0.71 | 0.74 | 3 | 98.14 | 98.36 | 0.22 |
| D1 | RGA | 0.70 | 0.74 | 4 | 98.04 | 98.25 | 0.21 |
| E1 | FD  | 0.80 | 0.81 | 1 | 96.66 | 96.69 | 0.03 |
| E1 | AM  | 0.80 | 0.81 | 1 | 96.70 | 96.71 | 0.01 |
| E1 | RA  | 0.80 | 0.81 | 1 | 96.69 | 96.70 | 0.01 |
| E1 | RGA | 0.68 | 0.77 | 9 | 96.45 | 96.50 | 0.05 |

*im: percent improvement.

|                   | Target: Human | Target: Non-human | (%)   |
| Output: Human     | 4939          | 64                | 98.78 |
| Output: Non-human | 61            | 4936              | 98.72 |
| (%)               | 98.78         | 98.72             | 98.75 |

Figure 9: Confusion matrix of the overall performance.

allowed the ANN to distinguish between the two categories. Further, the overall performance for both classes was 98.75%. The findings confirm that the proposed algorithm, which combines FD with the SDGD filter, could generate better silhouette images, thereby facilitating the recognition of human and non-human images in the segmented images/frames.

From the matrix, the information on TP, TN, FP, and FN can be extracted; hence, Table 5 shows the results of the statistical analysis done on the classification findings by the ANN.

5. Conclusion

We presented a new hybrid approach that incorporates the SDGD filter with four basic BGS techniques, namely, FD, AM, RA, and RGA. This hybrid technique enhanced segmentation performance, as indicated by the F-score values and average accuracy percentages. The technique was tested on six different videos from five different databases; each video was taken either indoors or outdoors and showed a different scene. An ANN classifier was used to classify human and non-human images appearing in the segmented images generated by our algorithm. As the algorithm was capable of providing good blob images, ANN recognition of human and non-human images in the silhouette images was facilitated.

Although the computational time increased, this aspect is acceptable considering the enhancement and the characteristics of second-order derivatives. Therefore, this study is


Table 4: Classification results.

| Exp. | Result  | Human | Non-human | (%)  |
| 1    | Correct | 498   | 492       | 99.0 |
| 1    | Missed  | 2     | 8         | 1.0  |
| 2    | Correct | 499   | 491       | 99.0 |
| 2    | Missed  | 1     | 9         | 1.0  |
| 3    | Correct | 498   | 495       | 99.3 |
| 3    | Missed  | 2     | 5         | 0.7  |
| 4    | Correct | 495   | 500       | 99.5 |
| 4    | Missed  | 5     | 0         | 0.5  |
| 5    | Correct | 481   | 492       | 97.3 |
| 5    | Missed  | 19    | 8         | 2.7  |
| 6    | Correct | 497   | 490       | 98.7 |
| 6    | Missed  | 3     | 10        | 1.3  |
| 7    | Correct | 484   | 496       | 98.0 |
| 7    | Missed  | 16    | 4         | 2.0  |
| 8    | Correct | 493   | 498       | 99.1 |
| 8    | Missed  | 7     | 2         | 0.9  |
| 9    | Correct | 497   | 496       | 99.3 |
| 9    | Missed  | 3     | 4         | 0.7  |
| 10   | Correct | 497   | 486       | 98.3 |
| 10   | Missed  | 3     | 14        | 1.7  |

Table 5: Performance for ANN classification.

| TP   | TN   | FP | FN | Recall | Precision | F-score |
| 4939 | 4936 | 61 | 64 | 0.987  | 0.988     | 0.987   |
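As a quick cross-check, the Table 5 statistics follow directly from the confusion-matrix counts in Figure 9 (a sketch; the counts are reproduced from the matrix above):

```python
# Recompute the Table 5 statistics from the Figure 9 confusion-matrix counts.
TP, TN, FP, FN = 4939, 4936, 61, 64

recall = TP / (TP + FN)                                # sensitivity, eq. (10)
precision = TP / (TP + FP)                             # eq. (11)
f_score = 2 * recall * precision / (recall + precision)  # eq. (12)
```

Rounded to three decimals, these reproduce the tabulated 0.987/0.988 values.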

valid and suitable for implementation in non-real-time applications. The proposed hybrid technique can improve upon traditional BGS techniques, as indicated by the improved F-score, accuracy, and ANN recognition values after testing various data sources and data environments. The technique can also be considered for detecting moving objects in non-real-time applications, such as investigations of human actions or traffic conditions.

Acknowledgments

This work was supported by Universiti Kebangsaan Malaysia (UKM) research Grant DPP-2013-003 and Ministry of Higher Education (MoHE) research Grant LRGSTD2011ICT0402.

References

[1] A. M. McIvor, "Background subtraction techniques," in Proceedings of the Image and Vision Computing Conference, Auckland, New Zealand, 2000.

[2] M. H. Sigari, N. Mozayani, and H. M. Pourreza, "Fuzzy running average and fuzzy background subtraction: concepts and application," International Journal of Computer Science and Network Security, vol. 8, pp. 138–143, 2008.

[3] Y. Zheng and L. Fan, "Moving object detection based on running average background and temporal difference," in Proceedings of the IEEE International Conference on Intelligent Systems and Knowledge Engineering (ISKE '10), pp. 270–272, November 2010.

[4] J. Park, A. Tabb, and A. C. Kak, "Hierarchical data structure for real-time background subtraction," in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 1849–1852, October 2006.

[5] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, "Pfinder: real-time tracking of the human body," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780–785, 1997.

[6] Z. Tang, Z. Miao, and Y. Wan, "Background subtraction using running Gaussian average and frame difference," in Proceedings of the International Conference on Entertainment Computing (ICEC '07), vol. 4740 of Lecture Notes in Computer Science, pp. 411–414, 2007.

[7] Z. He, Y. Liu, H. Yu, and X. Ye, "Optimized algorithm for traffic information collection in an embedded system," in Proceedings of the IEEE Congress on Image and Signal Processing, May 2008.

[8] A. Singh, S. Sawan, M. Hanmandlu, V. K. Madasu, and B. C. Lovell, "An abandoned object detection system based on dual background segmentation," in Proceedings of the 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '09), pp. 352–357, IEEE Press, September 2009.

[9] S. Su and Y. Chen, "Moving object segmentation using improved running Gaussian average background model," in Proceedings of the International Conference on Digital Image Computing: Techniques and Applications, 2008.

[10] H. Kim, R. Sakamoto, I. Kitahara, T. Toriyama, and K. Kogure, "Background subtraction using generalised Gaussian family model," Electronics Letters, vol. 44, no. 3, pp. 189–190, 2008.

[11] Z. H. Huang and K. W. Chau, "A new image thresholding method based on Gaussian mixture model," Applied Mathematics and Computation, vol. 205, pp. 899–907, 2008.

[12] Y. Liu, H. Yao, W. Gao, X. Chen, and D. Zhao, "Nonparametric background generation," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 916–919, August 2006.

[13] P. Vu, V. Phong, T. H. Vu, and H. B. Le, "GPU implementation of extended Gaussian mixture model for background subtraction," in Proceedings of the 8th IEEE-RIVF International Conference on Computing and Communication Technologies: Research, Innovation and Vision for the Future (RIVF '10), November 2010.

[14] A. Mittal and N. Paragios, "Motion-based background subtraction using adaptive kernel density estimation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-302–II-309, July 2004.

[15] F. Porikli and O. Tuzel, "Human body tracking by adaptive background models and mean shift analysis," in Proceedings of the IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS '03), July 2003.

[16] I. T. Young, J. J. Gerbrands, and L. J. van Vliet, "Fundamentals of Image Processing," Version 2.3, 1995–2007.

[17] V. G. Narendra and K. S. Hareesh, "Study and comparison of various image edge detection techniques," International Journal of Image Processing, vol. 4, no. 2, article 83, 2009.

[18] J. F. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–697, 1986.

[19] M. P. Persoon, I. W. O. Serlie, F. H. Post, R. Truyen, and F. M. Vos, "Visualization of noisy and biased volume data using first and second order derivative techniques," in Proceedings of the 14th IEEE Visualization Conference, pp. 379–385, October 2003.

[20] I. T. Young, "Generalized convolutional filtering," in Proceedings of the 19th CERN School of Computing, pp. 51–65, 1996.

[21] "Image processing fundamentals: derivative-based operations," 2011, http://www.mif.vu.lt/atpazinimas/dip/FIP/fip-Derivati.html.

[22] P. W. Verbeek and L. J. van Vliet, "Location error of curved edges in low-pass filtered 2-D and 3-D images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 7, pp. 726–733, 1994.

[23] M. Hagara and J. Moravcik, "PLUS operator for edge detection in digital images," in Proceedings of the International Conference Radioelektronika, pp. 467–470, 2002.

[24] Y. Nader El-Glaly, Development of PDE-Based Digital Inpainting Algorithm Applied to Missing Data in Digital Images [M.S. thesis], Ain Shams University, 2007.

[25] R. G. Aarnick, J. de la Rosette, W. Feitz, F. Debruyne, and H. Wijkstra, "A preprocessing algorithm for edge detection with multiple scales of resolution," European Journal of Ultrasound, vol. 5, pp. 113–126, 1997.

[26] N. Lu and J. Wang, "Motion detection based on accumulative optical flow and double background filtering," in Proceedings of the World Congress on Engineering, vol. 1, 2007.

[27] W. M. D. W. Zaki, A. Hussain, and M. Hedayati, "Moving object detection using keypoints reference model," EURASIP Journal on Image and Video Processing, vol. 2011, 13 pages, 2011.

[28] F. Y. A. Rahman, A. Hussain, N. M. Tahir, S. A. Samad, and M. H. M. Saad, "Hybrid background subtraction techniques with second derivative on gradient direction filter," in Proceedings of the International Workshop on Advanced Image Technology (IWAIT '12), 2012.

[29] S. M. Al-Garni and A. A. Abdennour, "Moving vehicle detection using automatic background extraction," in Proceedings of the World Academy of Science, Engineering and Technology, vol. 24, pp. 82–86, Sydney, Australia, December 2006.

[30] "CMU graphics lab motion capture database," 2010, http://mocap.cs.cmu.edu.

[31] R. Gross and J. Shi, "The CMU motion of body (MoBo) database," Technical Report CMU-RI-TR-01-18, Robotics Institute, Carnegie Mellon University, 2001.

[32] S. Singh, S. A. Velastin, and H. Ragheb, "MuHAVi: a multicamera human action video dataset for the evaluation of action recognition methods," in Proceedings of the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '10), pp. 48–55, Boston, Mass, USA, September 2010.

[33] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, "HMDB: a large video database for human motion recognition," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2556–2563, November 2011.

[34] F. Y. A. Rahman, A. Hussain, N. M. Tahir, and W. M. D. Zaki, "Modeling of initial reference frame for background subtraction," in Proceedings of the 6th International Colloquium on Signal Processing and Its Applications (CSPA '10), pp. 125–128, 2010.

[35] E. Fauske, L. M. Eliassen, and R. H. Bakken, "A comparison of learning based background subtraction," in Proceedings of the Norwegian Artificial Intelligence Symposium (NAIS), pp. 181–192, 2009.

[36] H. Kim, B. Ku, D. K. Han, S. Kang, and H. Ko, "Adaptive selection model in block-based background subtraction," Electronics Letters, vol. 48, no. 8, 2012.



filter. Section 3 describes the methodology. Section 4 discusses the results. Finally, Section 5 concludes our paper.

2. Literature Review

This section provides a review of the literature on the four BGS techniques evaluated in this study, namely, frame differencing, approximate median, running average, and running Gaussian average. SDGD filter studies are also presented.

2.1. Frame Differencing. Frame differencing (FD) is the most fundamental technique in BGS. FD involves finding the absolute difference between the current frame and a previous or background frame [1]. The absolute difference is then compared with an appropriate threshold value $A$ to detect the object, as shown in (1), where $F_i$ is the current frame intensity value, $B_i$ is the background intensity value, and $\mathrm{Fg}_i$ is the foreground intensity value. This technique uses the same background frame for all video sequences:

\[
\mathrm{Fg}_i(x,y) =
\begin{cases}
1, & \left|F_i(x,y) - B_i(x,y)\right| > A\\
0, & \text{otherwise.}
\end{cases}
\tag{1}
\]
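Eq. (1) amounts to a per-pixel thresholded absolute difference; a minimal NumPy sketch (the threshold A = 30 and the toy frames are illustrative, not values from the paper):

```python
import numpy as np

def frame_difference(frame, background, threshold=30):
    """Eq. (1): binary foreground mask from a thresholded absolute difference."""
    # Cast to a signed type so the subtraction cannot wrap around in uint8.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = foreground, 0 = background

# Toy example: a bright 2x2 object appears against a flat background.
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200
mask = frame_difference(frame, background)
```

Because the same reference frame is reused, this is the cheapest of the four updates.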

2.2. Approximate Median Filter. The approximate median (AM) algorithm is adaptive, dynamic, nonprobabilistic, and intuitive [8]. AM is obtained by calculating the difference between two video frames and using this difference in determining the proper method for updating the background. AM is considered one of the most acceptable methods because it provides the most accurate pixel identification.

Several studies have evaluated the efficacy of the AM algorithm. He et al. [7] tested the effectiveness of the AM algorithm as part of their optimized algorithm for vehicle detection in an embedded system. Their approach yields highly accurate information with less computational time when detecting and tracking vehicles in a traffic scene. Equation (2) presents how AM updates the reference frame in every video sequence. The succeeding background frame $B_{i+1}$ depends on the intensity values of both the present frame $F_i$ and the background frame $B_i$:

\[
B_{i+1}(x,y) =
\begin{cases}
B_i(x,y) + 1, & F_i(x,y) > B_i(x,y)\\
B_i(x,y) - 1, & \text{otherwise.}
\end{cases}
\tag{2}
\]
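The update in eq. (2) nudges each background pixel one intensity level toward the current frame, so the background converges to the temporal median over many frames. A minimal NumPy sketch (toy values are illustrative):

```python
import numpy as np

def approximate_median_update(frame, background):
    """Eq. (2): step the background one intensity level toward the frame."""
    bg = background.astype(np.int16)
    bg = np.where(frame > background, bg + 1, bg - 1)
    return np.clip(bg, 0, 255).astype(np.uint8)

# One update step on a 2x2 toy background of value 100.
background = np.full((2, 2), 100, dtype=np.uint8)
frame = np.array([[120, 100],
                  [80, 100]], dtype=np.uint8)
updated = approximate_median_update(frame, background)
```

Pixels brighter than the model move it up by one level; all others move it down by one.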

2.3. Running Average. Running average (RA) is another technique for updating a background image. A pixel is classified as background when its value belongs to the corresponding distribution of the background model; otherwise, the mean of the distribution is updated [4]. The updated image is then used in the changing scene. The computational cost of an RA background is lower because only the weighted sums of two images are computed, yielding low computational and space complexities [3]. Moreover, several researchers have utilized this method to detect moving objects in video captured by a static camera.

Several studies have been conducted to enhance the efficiency of BGS based on the RA method [2–4]. The outcome by Park et al. in [4] showed that applying a hierarchical data structure significantly increased the processing speed with accurate motion detection. This outcome can be attributed to the updating of the background frame by the RA method. Equation (3) shows a specified learning rate based on the previous background frame, where $\alpha$ is the learning rate and $A$ is the threshold value:

\[
B_{i+1}(x,y) =
\begin{cases}
B_i(x,y), & F_i(x,y) > A\\
\alpha F_i(x,y) + (1-\alpha)\,B_i(x,y), & \text{otherwise.}
\end{cases}
\tag{3}
\]
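A sketch of the selective update in eq. (3), where likely-foreground pixels are left out of the blend (NumPy assumed; the values of α and A are illustrative):

```python
import numpy as np

def running_average_update(frame, background, alpha=0.05, threshold=30):
    """Eq. (3): blend the frame into the background, skipping pixels above A."""
    frame_f = frame.astype(np.float64)
    bg = background.astype(np.float64)
    blended = alpha * frame_f + (1 - alpha) * bg
    # Pixels exceeding the threshold leave the background model unchanged.
    return np.where(frame_f > threshold, bg, blended)

background = np.full((2, 2), 10.0)
frame = np.array([[10.0, 200.0],
                  [20.0, 10.0]])
updated = running_average_update(frame, background)
```

Only the weighted sum of two images is computed per pixel, which is why RA is cheap in both time and memory.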

2.4. Running Gaussian Average. This method combines both the Gaussian function and RA. Overall, the running Gaussian average (RGA) method has a significant advantage over other approaches because it requires less processing time and utilizes less memory compared with nonrecursive methods such as mixture of Gaussians (MoG) and kernel density estimation (KDE) [9]. Equation (4) shows how the reference frame, represented by the mean $\mu$, is updated in each video sequence using this method. Unlike AM and RA, this method uses $2\sigma_i$ as the threshold value:

\[
\mu_{i+1} = \alpha F_i + (1-\alpha)\,\mu_i,
\qquad
\sigma^2_{i+1} = \alpha\left(F_i - \mu_i\right)^2 + (1-\alpha)\,\sigma^2_i.
\tag{4}
\]
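Eq. (4) together with the 2σ_i test can be sketched per pixel as follows (NumPy assumed; α and the toy statistics are illustrative):

```python
import numpy as np

def running_gaussian_update(frame, mean, var, alpha=0.05):
    """Eq. (4) plus the 2*sigma_i foreground test described in the text."""
    frame = frame.astype(np.float64)
    # Classify against the current model before updating it.
    foreground = np.abs(frame - mean) > 2.0 * np.sqrt(var)
    new_mean = alpha * frame + (1 - alpha) * mean
    new_var = alpha * (frame - mean) ** 2 + (1 - alpha) * var
    return new_mean, new_var, foreground.astype(np.uint8)

mean = np.full((2, 2), 100.0)
var = np.full((2, 2), 4.0)
frame = np.array([[100.0, 100.0],
                  [100.0, 250.0]])
mean2, var2, fg = running_gaussian_update(frame, mean, var)
```

Only the per-pixel mean and variance are stored, which is the memory advantage over MoG and KDE noted above.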

2.5. Second Derivative in a Gradient Direction (SDGD) Filter. In image processing studies, researchers use first- and second-order derivatives to detect the edge of an object based on its gradient. Using the first derivative, the edge location is defined at the position of steepest ascent or descent [16]. Traditional edge detection methods, such as those by Prewitt, Sobel, and Roberts, convolve the image with a specific kernel [16, 17]. However, these techniques were reported to be sensitive to noise and inaccurate [17]. In 1986, the Canny edge detector was introduced, which represented an improvement over the traditional methods [17, 18]. The detector applies Gaussian smoothing to reduce noise, unwanted details, and textures, together with nonmaximum suppression and hysteresis thresholding, to find the edges [19].

The second-order derivative approach defines the edge pixels based on changes in brightness, or zero crossings, in the image area [19, 20]. SDGD is a nonlinear operator that can be expressed in terms of the first and second derivatives. Additionally, similar to Canny, SDGD is combined with a Gaussian low-pass filter for smoothing purposes [21]. Moreover, a Laplace operator is used to simplify the SDGD operation [22].

The Laplacian is defined as

\[
\nabla^2 a = \frac{\partial^2 a}{\partial x^2} + \frac{\partial^2 a}{\partial y^2}
= \left(h_{2x} \otimes a\right) + \left(h_{2y} \otimes a\right),
\tag{5}
\]

where $h_{2x}$ and $h_{2y}$ are the second derivative filters.


The basic versions of the second derivative filters are given by

\[
\left[h_{2x}\right] = \left[h_{2y}\right]^{T} = \left[\,1 \;\; {-2} \;\; 1\,\right],
\qquad
\left[h_{2}\right] =
\begin{bmatrix}
0 & 1 & 0\\
1 & -4 & 1\\
0 & 1 & 0
\end{bmatrix}.
\tag{6}
\]

Associating (5) with the Gaussian filter yields

\[
b = g_{\sigma} \otimes \left(h_{2} \otimes a\right)
= \left(g_{\sigma} \otimes h_{2}\right) \otimes a,
\tag{7}
\]

where $g_{\sigma}$ is the Gaussian low-pass filter.

Five partial derivatives are used in the SDGD filter, as follows:

\[
A_{xx} = \frac{\partial^2 a}{\partial x^2},\quad
A_{xy} = \frac{\partial^2 a}{\partial x\,\partial y},\quad
A_{yy} = \frac{\partial^2 a}{\partial y^2},\quad
A_{x} = \frac{\partial a}{\partial x},\quad
A_{y} = \frac{\partial a}{\partial y}.
\tag{8}
\]

Therefore,

\[
\mathrm{SDGD} =
\frac{A_{xx}A_{x}^{2} + 2A_{xy}A_{x}A_{y} + A_{yy}A_{y}^{2}}
{A_{x}^{2} + A_{y}^{2}}.
\tag{9}
\]

A detailed explanation of SDGD can be found in [20, 21, 23]. In [19, 24], SDGD was presented as a filter used in finding edges and measuring objects. Several studies utilize the SDGD filter. For example, Aarnick et al. [25] used this filter in analyzing ultrasound images of male kidneys and prostates in their study on preprocessing algorithms for edge detection at multiple resolution scales. Their study [25] reported that detecting the contour of objects in grey medical images could be improved by applying an adaptive filter size in SDGD. Nader El-Glaly [24] used SDGD as part of her work on a digital inpainting algorithm. Hagara and Moravcik [23] introduced the PLUS operator, a combination of the SDGD filter and the Laplace operator, for edge detection. Similar findings were obtained with the PLUS and SDGD filters for kernel sizes of nine or lower; PLUS yielded better results when the kernel size was greater than nine and was suitable for locating the edges of small objects. Similarly, Verbeek and Van Vliet [22] compared the Laplace, SDGD, and PLUS operators on 2D and 3D images. Their research confirmed the findings in [23].

The idea of combining two methods in one algorithm was inspired by a study by Zheng and Fan [3], in which RA and temporal differencing were combined to detect moving objects. Another example of hybrid research in BGS was conducted by Lu and Wang [26], who crossbred optical flow and double background filtering to detect moving objects. Zaki et al. [27] combined frame differencing with a scale-invariant feature detector to detect moving objects in various environments.

Based on the study of Persoon et al. [19], the SDGD filter gives better surface localization, especially in highly curved areas, compared with the Canny edge detection technique. Thus, we adopted this filter in our present work. In addition, Persoon et al. showed that SDGD guarantees minimal detail smoothing, which led to better visualization of polyps in computed tomography (CT) scan data. This finding is aligned with our results reported in [28]. Further, the study by Nader El-Glaly [24] used the SDGD filter in developing an enhanced partial-differential-equation-based digital inpainting algorithm to find missing data in digital images.

To the best of our knowledge, this study is new because no prior work exists that integrates the SDGD filter with the BGS technique. Although Al-Garni and Abdennour used edge detection and the FD technique to find moving vehicles, no information was provided on the edge detection technique they utilized [29]. We used an SDGD filter to enhance the performance of existing background subtraction techniques by combining the foreground pixels generated by BGS techniques with the detected edge as our extracted object. The edge pixels are expected to fill in the boundary gap, creating better connections among pixels along the boundary. This research extends our previous work published in [28]: it uses more data from a variety of data sources, and a more detailed analysis is performed.

3. Methodology

This section discusses the databases used and the proposed method.

3.1. Dataset. This study utilized datasets that were acquired from selected prerecorded video collections of several online databases.

(a) Smart Engineering System Research Group (SESRG) UKM Collections. This video collection consists of various human actions and activities recorded by students involved in SESRG studies on smart surveillance systems. Besides humans, this database also has a collection of moving cars that are used as nonhuman samples for the classification explained further in Section 3.4.

(b) CMU Graphics Lab Motion Capture (MoCap) Database [30]. This database, which is owned by Carnegie Mellon University, contains 2506 trials in 6 categories and 23 subcategories. The videos were recorded in an indoor environment.

(c) CMU Motion of Body (MoBo) Database [31]. This database, which is also owned by Carnegie Mellon University, consists of videos showing six different angles of a subject walking on a treadmill.

(d) Multicamera Human Action Video Data (MuHAVi) [32]. This database is owned by the Digital Imaging Research Centre at Kingston University. It presents 17 action classes with 8 camera views.

(e) Human Motion Database (HMDB51) [33]. This database consists of collections of edited videos from digitized movies and YouTube. The collection contains 51 action categories with 7000 manually annotated clips.


Figure 1: Flowchart of the proposed algorithm. (Start → generate reference frame → subtract the current frame from the reference frame to obtain Fdiff → perform thresholding to obtain Fx → perform the SDGD filter on Fdiff to produce Fsdgd → fuse Fsdgd with Fx to produce Fg → perform a morphological operation on Fg to obtain Ffinal → update the reference frame unless the FD technique is used → repeat until the last frame → End.)

The initial background for videos taken from databases (a), (b), (d), and (e) was modeled using the median value from a selected frame interval. We used five frames from the video sequences with 10 intervals between the frames, that is, $F_1$, $F_{10}$, $F_{20}$, $F_{30}$, and $F_{40}$. Next, the median value of these selected frames was used as the reference image. Details of this technique are presented in [34]. Meanwhile, database (c) provided the background reference frame. All datasets except the MuHAVi videos were manually segmented to obtain the ground truth.
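This initialization can be sketched as a per-pixel median over the five sampled frames (NumPy assumed; the 0-based frame indices and the toy sequence are illustrative):

```python
import numpy as np

def initial_background(frames, indices=(0, 9, 19, 29, 39)):
    """Model the initial background as the per-pixel median of the
    sampled frames F1, F10, F20, F30, and F40 (0-based indices assumed)."""
    stack = np.stack([frames[i] for i in indices]).astype(np.float64)
    return np.median(stack, axis=0).astype(np.uint8)

# Toy sequence: a static scene of value 50 with a transient bright object
# visible in only one of the sampled frames.
frames = [np.full((8, 8), 50, dtype=np.uint8) for _ in range(40)]
frames[9][2:4, 2:4] = 255
bg = initial_background(frames)
```

The median discards the transient object, so the reference frame contains only the static scene.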

3.2. The Proposed Algorithm. The methodology of our proposed technique is described in the flowchart shown in Figure 1.

First, the dataset was tested using the following basic parametric BGS techniques: FD, RA, AM, and RGA. Next, the SDGD filter was fused with each technique by combining the output of a background technique with the SDGD filter output. The SDGD filter was selected as a segmentation tool because it produced better results compared with other edge detection techniques (Sobel, Canny, and Roberts) [19, 28].
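The fusion step described above, combining the BGS foreground mask with the SDGD edge pixels, can be sketched as a logical OR of two binary masks (NumPy assumed; the function name and mask contents are illustrative, with the edge map standing in for a thresholded SDGD response):

```python
import numpy as np

def fuse_masks(bgs_mask, edge_mask):
    """Combine the BGS foreground mask with SDGD edge pixels (logical OR),
    so detected edges fill the gaps along the object boundary."""
    return np.logical_or(bgs_mask > 0, edge_mask > 0).astype(np.uint8)

# BGS misses part of the boundary; the edge map supplies the missing pixels.
bgs_mask = np.zeros((5, 5), dtype=np.uint8)
bgs_mask[1:4, 1:3] = 1           # partial blob from, e.g., FD
edge_mask = np.zeros((5, 5), dtype=np.uint8)
edge_mask[1:4, 3] = 1            # boundary column recovered by the edge filter
fused = fuse_masks(bgs_mask, edge_mask)
```

A morphological operation (e.g., closing) would then be applied to the fused mask to obtain the final blob.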

3.3. Evaluation Method. To evaluate the performance of each BGS technique, we calculated the averages of recall, precision, F-score, and accuracy.

(a) Recall (Rcl) refers to the detection rate, which is calculated by comparing the total number of detected true positive pixels with the total number of true positive pixels in the ground truth frame [35]. This is also known as sensitivity. The following equation shows how recall is calculated:

\[
\text{recall} = \frac{\sum \mathrm{TP}}{\sum \mathrm{TP} + \sum \mathrm{FN}},
\tag{10}
\]

where TP is true positive and FN is false negative.

(b) Precision (Prcsn) is the ratio between the detected true positive pixels and the total number of positive pixels detected by the method [35, 36]:

\[
\text{precision} = \frac{\sum \mathrm{TP}}{\sum \mathrm{TP} + \sum \mathrm{FP}},
\tag{11}
\]

where FP is false positive.

(c) F-measure, or balanced F-score, is the weighted harmonic mean of recall and precision. It is used as a single measurement for comparing different methods [35, 36]:

\[
F = \frac{2 \cdot \text{recall} \cdot \text{precision}}{\text{recall} + \text{precision}}.
\tag{12}
\]

(d) Accuracy is the percentage of correct data retrieval. It is calculated by dividing the number of true positive pixels plus true negative pixels by the total number of pixels in the frame. The following equation displays the calculation of accuracy [36]:

\[
\text{accuracy} = \frac{\sum \mathrm{TP} + \sum \mathrm{TN}}{\sum \mathrm{TP} + \sum \mathrm{TN} + \sum \mathrm{FP} + \sum \mathrm{FN}} \times 100.
\tag{13}
\]

This study utilized videos with multiple frames. Hence, we present a comparison of the average F-score and average accuracy percentage as the overall performance benchmark for each BGS technique with and without the SDGD filter. The following equations are used to calculate the percentage of improvement:

\[
\mathrm{im}(F\text{-score}) = \left(F\text{-score}_{\text{with}} - F\text{-score}_{\text{without}}\right) \times 100,
\qquad
\mathrm{im}(\text{accuracy}) = \text{accuracy}_{\text{with}} - \text{accuracy}_{\text{without}}.
\tag{14}
\]
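Eqs. (10)–(13) can be computed directly from a predicted foreground mask and its ground truth; a minimal sketch (NumPy assumed; the 1×4 masks are illustrative):

```python
import numpy as np

def segmentation_scores(predicted, truth):
    """Eqs. (10)-(13): recall, precision, F-score, and accuracy (%)
    from binary foreground masks."""
    predicted = predicted.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(predicted & truth)
    tn = np.sum(~predicted & ~truth)
    fp = np.sum(predicted & ~truth)
    fn = np.sum(~predicted & truth)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_score = 2 * recall * precision / (recall + precision)
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    return recall, precision, f_score, accuracy

truth = np.array([[1, 1, 0, 0]])
predicted = np.array([[1, 0, 1, 0]])
scores = segmentation_scores(predicted, truth)
```

Averaging these per-frame scores over a video gives the benchmark values reported in Tables 1–3.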


Figure 2: Background images of the data in this study: (a) A1 (outdoor), (b) A2 (outdoor), (c) E1 (outdoor), (d) B1 (indoor), (e) C1 (indoor), (f) D1 (indoor).

Figure 3: Original image (a) and its ground truth (b) of a frame in A1 for an outdoor environment.

3.4. Classification. Using an artificial neural network (ANN) as the classifier, the segmented images from the proposed technique were subjected to classification testing. The training input of the ANN was extracted from 1500 randomly chosen segmented frames/images: 750 human blob images represented human samples, and another 750 car blob images represented non-human samples.

The scaled conjugate gradient and backpropagation rules were chosen to train the classifier. The ANN was designed with one hidden layer containing ten hidden neurons and an output layer containing two neurons. Both layers used the sigmoid activation function and the mean squared error value as the performance function. Images were classified as either human or non-human. Next, another set of 1000 frames/images was chosen for testing. We applied leave-one-out cross validation in our study. Ten experiments were run, and the average classification rate was used to evaluate the human/non-human recognition performance.

The evaluation was based on the segmented frames/images generated from the enhanced FD technique, with the SDGD filter added to the algorithm. In this experiment, we used only FD instead of the other BGS techniques because it has the fastest processing time. We also performed statistical tests, such as recall, precision, and F-score, on the obtained classification results, taking a positive identification as the classification of human and a negative as the classification of non-human.

4. Results and Discussion

This section discusses the robustness of the proposed technique based on videos taken from five different databases. To confirm the robustness of the proposed algorithm, we tested it using videos that presented different environments and camera angles. Specifically, the video sequences were recorded in multiple environments (indoors and outdoors) and from multiple views.


Figure 4: Original image (a) and its ground truth (b) of a frame in B1 for an indoor environment.

Figure 5: Segmentation performance using various background subtraction techniques (FD, AM, RA, and RGA; without and with SDGD) for an outdoor environment.


Figure 6: Segmentation performance using various background subtraction techniques (FD, AM, RA, and RGA; without and with SDGD) for an indoor environment.

Figures 2(a)–2(f) show some of the background frames used in this research. These background frames were generated using the median value of selected frames in the video sequences, except for the videos obtained from the MoBo database. Rather than stating the filename, we assigned the letters A to E to identify the videos representing the data: A refers to SESRG, B to MoCap, C to MoBo, D to MuHAVi, and E to HMDB51. The words in brackets indicate whether the video environment was indoor or outdoor.

Next, we present the subjective results of our object extraction. Since this study involves video data with multiple frames, we depict only the results obtained for frame number 10 in data A1 and frame number 29 in data B1 to represent the outdoor and indoor scene samples. Figures 3(a) and 4(a) illustrate the original images of a frame in videos A1 and B1, respectively. Figures 3(b) and 4(b) depict the ground truth images.

Figures 5 and 6 present the extracted subjects in both indoor and outdoor environments using FD, AM, RA, and RGA with and without the SDGD filter.

The first column of Figures 5 and 6 shows that all the basic BGS techniques were capable of detecting the object in the scene of interest based on the tested videos. However, many pixels were missing, resulting in a smaller blob compared with the ground truth image. Our proposed technique solves the problems of missing pixels and reduced blob size because it combines the SDGD filter with FD, AM, RA, and RGA, as shown in column 2 of Figures 5 and 6. The pixel size of the extracted object is slightly enlarged and the blob becomes more complete. Foreground detection showed significant improvement after the proposed technique was applied to all datasets.

Tables 1 and 2 show the numbers of TP, TN, FP, and FN for the selected indoor and outdoor samples, with and without the SDGD filter.


Figure 7: F-score versus frame number (frames 1–29) for the outdoor sample: (a) FD, (b) AM, (c) RA, (d) RGA.

Table 1: Performance for frame 10 in the outdoor sample, with and without the SDGD filter.

Method              TP    FP    FN    TN     Recall  Precision  F-score
FD, without SDGD    2046  1279  1091  80064  0.65    0.62       0.63
FD, with SDGD       2521   804   616  80539  0.80    0.76       0.78
AM, without SDGD    1986  1785  1151  79558  0.63    0.53       0.57
AM, with SDGD       2374   951   763  80392  0.76    0.71       0.73
RA, without SDGD    1959  1781  1178  79562  0.62    0.52       0.57
RA, with SDGD       2303  1022   834  80321  0.73    0.69       0.71
RGA, without SDGD   2441   912   696  80431  0.78    0.73       0.75
RGA, with SDGD      2654   499   483  80844  0.85    0.84       0.84

Table 2: Performance for frame 29 in the indoor sample, with and without the SDGD filter.

Method              TP    FP    FN    TN     Recall  Precision  F-score
FD, without SDGD    2336  1372   809  79963  0.74    0.63       0.68
FD, with SDGD       2788   920  1165  79607  0.71    0.75       0.73
AM, without SDGD    2771  1437   808  79964  0.74    0.61       0.67
AM, with SDGD       2726   982  1077  79695  0.72    0.74       0.73
RA, without SDGD    2289  1419   858  79914  0.73    0.62       0.67
RA, with SDGD       2723   985  1106  79666  0.71    0.73       0.72
RGA, without SDGD   2564  1092  1175  79649  0.69    0.70       0.69
RGA, with SDGD      2979   677  1535  79289  0.67    0.81       0.73


Figure 8: F-score versus frame number (frames 1–29) for the indoor sample: (a) FD, (b) AM, (c) RA, (d) RGA.

addition of the SDGD filter. The values of recall, precision, and F-score were then calculated using (10)–(12).

Based on the findings shown in Tables 1 and 2, the number of TPs increased significantly, which shows that our technique detects more of the blob than the original methods. This finding is also in line with the increase in precision values. Tables 1 and 2 also show that the F-score increased for both samples when the SDGD filter was added.

The graphs in Figures 7 and 8 show the F-score trends in A1 and B1 for the four BGS techniques, namely, FD, AM, RA, and RGA. The solid lines represent the F-score results using our proposed technique, that is, with the SDGD filter, whereas the dashed lines represent the F-score results without the SDGD filter. Based on Figures 7 and 8, higher F-score values were noted for the A1 and B1 videos when using the proposed hybrid technique than when using the basic BGS techniques. Thus, our proposed technique improves upon traditional BGS techniques.

To confirm the effectiveness of the proposed technique, we tested the algorithm on six videos with six different backgrounds, obtained from the five databases. Table 3 shows the performance of FD, AM, RA, and RGA, with and without the SDGD filter, in terms of the F-score and average accuracy percentage for all six video samples; columns 5 and 8 show the percentage of improvement for each measure. Based on Table 3, the SDGD filter improved the average F-score values for all data compared with the methods without SDGD. Column 5 of Table 3 shows that the F-score values improved by 1 to 9 percentage points. Therefore, the proposed technique enhances object extraction compared with existing techniques.

Additionally, Table 3 shows an increase in average accuracy percentage for each tested technique, except for videos A2 and E1. A2 had poor video quality because the footage was taken in a corridor without proper lighting. Because of the poor video quality and bad lighting conditions, the SDGD filter was unable to segment the foreground subjects properly and produced an unwanted shadow in the foreground. Nevertheless, our proposed technique detected foreground pixels with over 90% accuracy on all videos tested.

Meanwhile, Table 4 shows the results of classifying human and non-human images based on the segmented frames generated by the proposed technique. In Table 4, the ANN successfully recognized human and non-human images from the frames generated using the improved FD technique; incorrect classifications were minimal in all ten experiments.

Figure 9 presents a matrix describing the overall results of the classification testing. The average recognition rates for the human and non-human categories were 98.78% and 98.72%, respectively. The rates are high for both categories because our algorithm provided good human and non-human blob images, which


Table 3: Performance of each technique (average F-score and accuracy, without and with the SDGD filter; im* denotes the improvement).

Data  Method  F-score w/o  F-score with  im*  Accuracy (%) w/o  Accuracy (%) with  im*
A1    FD      0.75         0.78          3    98.45             98.47              0.02
A1    AM      0.69         0.75          6    98.16             98.29              0.13
A1    RA      0.74         0.77          3    98.42             98.44              0.02
A1    RGA     0.79         0.81          2    98.42             98.47              0.03
A2    FD      0.78         0.79          1    96.74             96.65             -0.09
A2    AM      0.80         0.81          1    97.31             97.14             -0.17
A2    RA      0.76         0.78          2    96.91             96.83             -0.08
A2    RGA     0.76         0.77          1    95.75             95.65             -0.10
B1    FD      0.68         0.74          6    97.99             98.12              0.13
B1    AM      0.71         0.76          5    98.22             98.37              0.15
B1    RA      0.71         0.76          5    98.25             98.38              0.13
B1    RGA     0.64         0.68          4    97.47             97.51              0.04
C1    FD      0.84         0.85          1    95.64             95.69              0.05
C1    AM      0.79         0.80          1    94.87             95.03              0.16
C1    RA      0.57         0.65          8    91.69             92.17              0.48
C1    RGA     0.80         0.81          1    94.95             95.07              0.12
D1    FD      0.68         0.71          3    97.84             98.07              0.23
D1    AM      0.71         0.75          4    98.13             98.37              0.22
D1    RA      0.71         0.74          3    98.14             98.36              0.22
D1    RGA     0.70         0.74          4    98.04             98.25              0.21
E1    FD      0.80         0.81          1    96.66             96.69              0.03
E1    AM      0.80         0.81          1    96.70             96.71              0.01
E1    RA      0.80         0.81          1    96.69             96.70              0.01
E1    RGA     0.68         0.77          9    96.45             96.50              0.05

*im: percent improvement.

Output class \ Target class   Human   Non-human   (%)
Human                          4939      64       98.78
Non-human                        61    4936       98.72
(%)                           98.78   98.72       98.75

Figure 9: Confusion matrix of the overall performance.

allowed the ANN to distinguish between the two categories. Further, the overall performance across both classes was 98.75%. The findings confirm that the proposed algorithm, which combines FD with the SDGD filter, generates better silhouette images, thereby facilitating recognition of human and non-human subjects in the segmented images/frames.

From the matrix, the counts of TP, TN, FP, and FN can be extracted; Table 5 shows the results of the statistical analysis performed on the ANN classification findings.
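The Table 5 statistics follow directly from the Figure 9 counts (TP = 4939, TN = 4936, FP = 61, FN = 64). The snippet below recomputes recall, precision, and F-score from those counts using (10)–(12); the values agree with Table 5 to three decimal places (the F-score lands on the 0.9875 rounding boundary).

```python
# Recompute the Table 5 statistics from the confusion-matrix counts.
tp, tn, fp, fn = 4939, 4936, 61, 64

recall = tp / (tp + fn)        # 4939 / 5003, eq. (10)
precision = tp / (tp + fp)     # 4939 / 5000, eq. (11)
f_score = 2 * recall * precision / (recall + precision)  # eq. (12)
```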

5. Conclusion

We presented a new hybrid approach that incorporates the SDGD filter with four basic BGS techniques, namely, FD, AM, RA, and RGA. This hybrid technique enhanced segmentation performance, as indicated by the F-score values and average accuracy percentages. The technique was tested on six videos from five different databases; each video was taken either indoors or outdoors and showed a different scene. An ANN classifier was used to classify the human and non-human images appearing in the segmented images generated by our algorithm. As the algorithm was capable of providing good blob images, ANN recognition of human and non-human subjects in the silhouette images was facilitated.

Although computational time increased, this is acceptable considering the enhancement gained and the characteristics of second-order derivatives. Therefore, this study is


Table 4: Classification results.

Exp   Correct (Human / Non-human)   %      Missed (Human / Non-human)   %
1     498 / 492                     99.0   2 / 8                        1.0
2     499 / 491                     99.0   1 / 9                        1.0
3     498 / 495                     99.3   2 / 5                        0.7
4     495 / 500                     99.5   5 / 0                        0.5
5     481 / 492                     97.3   19 / 8                       2.7
6     497 / 490                     98.7   3 / 10                       1.3
7     484 / 496                     98.0   16 / 4                       2.0
8     493 / 498                     99.1   7 / 2                        0.9
9     497 / 496                     99.3   3 / 4                        0.7
10    497 / 486                     98.3   3 / 14                       1.7

Table 5: Performance of the ANN classification.

TP     TN     FP   FN   Recall  Precision  F-score
4939   4936   61   64   0.987   0.988      0.987

valid and suitable for implementation in non-real-time applications. The proposed hybrid technique can improve upon traditional BGS techniques, as indicated by the improved F-score, accuracy, and ANN recognition values after testing various data sources and environments. The technique can also be considered for detecting moving objects in non-real-time applications, such as investigations of human actions or traffic conditions.

Acknowledgments

This work was supported by Universiti Kebangsaan Malaysia (UKM) research grant DPP-2013-003 and Ministry of Higher Education (MoHE) research grant LRGSTD2011ICT0402.

References

[1] A. M. McIvor, "Background subtraction techniques," in Proceedings of the Image and Vision Computing Conference, Auckland, New Zealand, 2000.

[2] M. H. Sigari, N. Mozayani, and H. M. Pourreza, "Fuzzy running average and fuzzy background subtraction: concepts and application," International Journal of Computer Science and Network Security, vol. 8, pp. 138–143, 2008.

[3] Y. Zheng and L. Fan, "Moving object detection based on running average background and temporal difference," in Proceedings of the IEEE International Conference on Intelligent Systems and Knowledge Engineering (ISKE '10), pp. 270–272, November 2010.

[4] J. Park, A. Tabb, and A. C. Kak, "Hierarchical data structure for real-time background subtraction," in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 1849–1852, October 2006.

[5] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, "Pfinder: real-time tracking of the human body," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780–785, 1997.

[6] Z. Tang, Z. Miao, and Y. Wan, "Background subtraction using running Gaussian average and frame difference," in Proceedings of the International Conference on Entertainment Computing (ICEC '07), vol. 4740 of Lecture Notes in Computer Science, pp. 411–414, 2007.

[7] Z. He, Y. Liu, H. Yu, and X. Ye, "Optimized algorithm for traffic information collection in an embedded system," in Proceedings of the IEEE Congress on Image and Signal Processing, May 2008.

[8] A. Singh, S. Sawan, M. Hanmandlu, V. K. Madasu, and B. C. Lovell, "An abandoned object detection system based on dual background segmentation," in Proceedings of the 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '09), pp. 352–357, IEEE Press, September 2009.

[9] S. Su and Y. Chen, "Moving object segmentation using improved running Gaussian average background model," in Proceedings of the International Conference on Digital Image Computing: Techniques and Applications, 2008.

[10] H. Kim, R. Sakamoto, I. Kitahara, T. Toriyama, and K. Kogure, "Background subtraction using generalised Gaussian family model," Electronics Letters, vol. 44, no. 3, pp. 189–190, 2008.

[11] Z. H. Huang and K. W. Chau, "A new image thresholding method based on Gaussian mixture model," Applied Mathematics and Computation, vol. 205, pp. 899–907, 2008.

[12] Y. Liu, H. Yao, W. Gao, X. Chen, and D. Zhao, "Nonparametric background generation," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 916–919, August 2006.

[13] P. Vu, V. Phong, T. H. Vu, and H. B. Le, "GPU implementation of extended Gaussian mixture model for background subtraction," in Proceedings of the 8th IEEE-RIVF International Conference on Computing and Communication Technologies: Research, Innovation and Vision for the Future (RIVF '10), November 2010.

[14] A. Mittal and N. Paragios, "Motion-based background subtraction using adaptive kernel density estimation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-302–II-309, July 2004.

[15] F. Porikli and O. Tuzel, "Human body tracking by adaptive background models and mean shift analysis," in Proceedings of the IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS '03), July 2003.

[16] I. T. Young, J. J. Gerbrands, and L. J. van Vliet, "Fundamentals of Image Processing," version 2.3, 1995–2007.

[17] V. G. Narendra and K. S. Hareesh, "Study and comparison of various image edge detection techniques," International Journal of Image Processing, vol. 4, no. 2, article 83, 2009.

[18] J. F. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–697, 1986.

[19] M. P. Persoon, I. W. O. Serlie, F. H. Post, R. Truyen, and F. M. Vos, "Visualization of noisy and biased volume data using first and second order derivative techniques," in Proceedings of the 14th IEEE Visualization Conference, pp. 379–385, October 2003.

[20] I. T. Young, "Generalized convolutional filtering," in Proceedings of the 19th CERN School of Computing, pp. 51–65, 1996.

[21] "Image processing fundamentals: derivative-based operations," 2011, http://www.mif.vu.lt/atpazinimas/dip/FIP/fip-Derivati.html.

[22] P. W. Verbeek and L. J. van Vliet, "Location error of curved edges in low-pass filtered 2-D and 3-D images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 7, pp. 726–733, 1994.

[23] M. Hagara and J. Moravcik, "PLUS operator for edge detection in digital images," in Proceedings of the International Conference Radioelektronika, pp. 467–470, 2002.

[24] Y. Nader El-Glaly, Development of PDE-Based Digital Inpainting Algorithm Applied to Missing Data in Digital Images [M.S. thesis], Ain Shams University, 2007.

[25] R. G. Aarnick, J. de la Rosette, W. Feitz, F. Debruyne, and H. Wijkstra, "A preprocessing algorithm for edge detection with multiple scales of resolution," European Journal of Ultrasound, vol. 5, pp. 113–126, 1997.

[26] N. Lu and J. Wang, "Motion detection based on accumulative optical flow and double background filtering," in Proceedings of the World Congress on Engineering, vol. 1, 2007.

[27] W. M. D. W. Zaki, A. Hussain, and M. Hedayati, "Moving object detection using keypoints reference model," EURASIP Journal on Image and Video Processing, vol. 2011, 13 pages, 2011.

[28] F. Y. A. Rahman, A. Hussain, N. M. Tahir, S. A. Samad, and M. H. M. Saad, "Hybrid background subtraction techniques with second derivative on gradient direction filter," in Proceedings of the International Workshop on Advanced Image Technology (IWAIT '12), 2012.

[29] S. M. Al-Garni and A. A. Abdennour, "Moving vehicle detection using automatic background extraction," in Proceedings of the World Academy of Science, Engineering and Technology, vol. 24, pp. 82–86, Sydney, Australia, December 2006.

[30] "CMU Graphics Lab motion capture database," 2010, http://mocap.cs.cmu.edu.

[31] R. Gross and J. Shi, "The CMU motion of body (MoBo) database," Technical Report CMU-RI-TR-01-18, Robotics Institute, Carnegie Mellon University, 2001.

[32] S. Singh, S. A. Velastin, and H. Ragheb, "MuHAVi: a multicamera human action video dataset for the evaluation of action recognition methods," in Proceedings of the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '10), pp. 48–55, Boston, Mass, USA, September 2010.

[33] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, "HMDB: a large video database for human motion recognition," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2556–2563, November 2011.

[34] F. Y. A. Rahman, A. Hussain, N. M. Tahir, and W. M. D. Zaki, "Modeling of initial reference frame for background subtraction," in Proceedings of the 6th International Colloquium on Signal Processing and Its Applications (CSPA '10), pp. 125–128, 2010.

[35] E. Fauske, L. M. Eliassen, and R. H. Bakken, "A comparison of learning based background subtraction techniques," in Proceedings of the Norwegian Artificial Intelligence Symposium (NAIS), pp. 181–192, 2009.

[36] H. Kim, B. Ku, D. K. Han, S. Kang, and H. Ko, "Adaptive selection model in block-based background subtraction," Electronics Letters, vol. 48, no. 8, 2012.




The basic versions of the second-derivative filters are given by

\[ h_{2x} = h_{2y} = \begin{bmatrix} 1 & -2 & 1 \end{bmatrix}, \qquad h_2 = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}. \tag{6} \]

Associating (5) with the Gaussian filter yields

\[ b = g_\sigma \otimes (h_2 \otimes a) = (g_\sigma \otimes h_2) \otimes a, \tag{7} \]

where \(g_\sigma\) is the Gaussian low-pass filter.

Five partial derivatives are used in the SDGD filter, as follows:

\[ A_{xx} = \frac{\partial^2 a}{\partial x^2}, \quad A_{xy} = \frac{\partial^2 a}{\partial x\,\partial y}, \quad A_x = \frac{\partial a}{\partial x}, \quad A_{yy} = \frac{\partial^2 a}{\partial y^2}, \quad A_y = \frac{\partial a}{\partial y}. \tag{8} \]

Therefore,

\[ \mathrm{SDGD} = \frac{A_{xx} A_x^2 + 2 A_{xy} A_x A_y + A_{yy} A_y^2}{A_x^2 + A_y^2}. \tag{9} \]
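Equation (9) can be sketched with Gaussian derivative filters, each derivative in (8) obtained by filtering the image with the corresponding derivative-of-Gaussian kernel. The sigma value, the epsilon guard on the denominator, and the synthetic step image below are illustrative choices, not values from the paper.

```python
# Sketch of the SDGD response of eq. (9) using Gaussian derivative
# filters from scipy.ndimage; sigma and the test image are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def sdgd(a, sigma=2.0, eps=1e-12):
    """Second derivative in the gradient direction of image `a`."""
    a = a.astype(float)
    # First- and second-order Gaussian derivatives: A_x, A_y, A_xx, A_yy, A_xy.
    # scipy's `order` is per-axis: axis 0 = rows (y), axis 1 = columns (x).
    Ax  = gaussian_filter(a, sigma, order=(0, 1))
    Ay  = gaussian_filter(a, sigma, order=(1, 0))
    Axx = gaussian_filter(a, sigma, order=(0, 2))
    Ayy = gaussian_filter(a, sigma, order=(2, 0))
    Axy = gaussian_filter(a, sigma, order=(1, 1))
    num = Axx * Ax**2 + 2.0 * Axy * Ax * Ay + Ayy * Ay**2
    return num / (Ax**2 + Ay**2 + eps)   # eps guards flat regions

# The SDGD response changes sign across a step edge; the zero
# crossing marks the edge location.
step = np.zeros((32, 32))
step[:, 16:] = 1.0          # vertical step edge between columns 15 and 16
response = sdgd(step)
```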

A detailed explanation of SDGD can be found in [20, 21, 23]. In [19, 24], SDGD was presented as a filter for finding edges and measuring objects. Several studies have utilized the SDGD filter. For example, Aarnick et al. [25] used this filter to analyze ultrasound images of male kidneys and prostates in their study on preprocessing algorithms for edge detection at multiple resolution scales; they reported that detecting the contours of objects in grey medical images could be improved by applying an adaptive filter size in SDGD. Nader El-Glaly [24] used SDGD as part of her work on a digital inpainting algorithm. Hagara and Moravcik [23] introduced the PLUS operator, a combination of the SDGD filter and the Laplace operator, for edge detection. Similar findings were obtained with the PLUS and SDGD filters for kernel sizes of nine or lower; PLUS yielded better results when the kernel size was greater than nine and was suitable for locating the edges of small objects. Similarly, Verbeek and van Vliet [22] compared the Laplace, SDGD, and PLUS operators on 2D and 3D images; their research confirmed the findings in [23].

The idea of combining two methods in one algorithm was inspired by a study by Zheng and Fan [3], in which RA and temporal difference were combined to detect moving objects. Another example of hybrid BGS research was conducted by Lu and Wang [26], who crossbred optical flow and double background filtering to detect moving objects. Zaki et al. [27] combined frame differencing with a scale-invariant feature detector to detect moving objects in various environments.

Based on the study of Persoon et al. [19], the SDGD filter gives better surface localization, especially in highly curved areas, than the Canny edge detection technique; thus, we adopted this filter in our present work. In addition, Persoon et al. showed that SDGD guarantees minimal detail smoothing, which led to better visualization of polyps in computed tomography (CT) scan data. This finding is aligned with our results reported in [28]. Further, Nader El-Glaly [24] used the SDGD filter in developing an enhanced partial-differential-equation-based digital inpainting algorithm to find missing data in digital images.

To the best of our knowledge, this study is new because no prior work integrates the SDGD filter with BGS techniques. Although Al-Garni and Abdennour used edge detection and the FD technique to find moving vehicles, no information was provided on the edge detection technique they utilized [29]. We used an SDGD filter to enhance the performance of existing background subtraction techniques by combining the foreground pixels generated by BGS techniques with the detected edges as our extracted object. The edge pixels are expected to fill in the boundary gap, which creates better connections among the pixels at the boundary. This research extends our previous work published in [28] by using more data from a variety of sources and performing a more detailed analysis.

3 Methodology

This section discusses the databases used and the proposed method.

3.1. Dataset. This study utilized datasets acquired from selected prerecorded video collections of several online databases:

(a) Smart Engineering System Research Group (SESRG) UKM Collections: this video collection consists of various human actions and activities recorded by students involved in SESRG studies on smart surveillance systems. Besides humans, this database also has a collection of moving cars that are used as non-human samples for the classification explained further in Section 3.4.

(b) CMU Graphics Lab Motion Capture (MoCap) Database [30]: this database, owned by Carnegie Mellon University, contains 2506 trials in 6 categories and 23 subcategories. The videos were recorded in an indoor environment.

(c) CMU Motion of Body (MoBo) Database [31]: this database, also owned by Carnegie Mellon University, consists of videos showing six different angles of a subject walking on a treadmill.

(d) Multicamera Human Action Video Data (MuHAVi) [32]: this database is owned by the Digital Imaging Research Centre at Kingston University. It presents 17 action classes from 8 camera views.

(e) Human Motion Database (HMDB51) [33]: this database consists of collections of edited videos from digitized movies and YouTube. The collection contains 51 action categories with 7000 manually annotated clips.


Figure 1: Flowchart of the proposed algorithm. The steps are as follows:

1. Start; generate the reference frame.
2. Subtract the current frame from the reference frame to obtain Fdiff.
3. Perform the SDGD filter on Fdiff and produce Fsdgd.
4. Perform thresholding to obtain Fx.
5. Fuse Fsdgd with Fx to produce Fg.
6. Perform a morphological operation on Fg and obtain Ffinal.
7. If the technique is not FD, update the reference frame.
8. Repeat from step 2 until the last frame; then end.

The initial background for videos taken from databases (a), (b), (d), and (e) was modeled using the median value over a selected frame interval. We used five frames from the video sequences, with 10 intervals between the frames, that is, F1, F10, F20, F30, and F40. The median value of these selected frames was used as the reference image; details of this technique are presented in [34]. Meanwhile, database (c) provided the background reference frame. All datasets except the MuHAVi videos were manually segmented to obtain the ground truth.
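The reference-frame model above can be sketched as a pixel-wise median over the five sampled frames. The zero-based frame indices and the synthetic video below are illustrative assumptions; the real implementation operates on decoded video frames.

```python
# Sketch of the reference-frame (background) model: the pixel-wise
# median of five frames sampled ten frames apart (F1, F10, ..., F40,
# here as zero-based indices). The synthetic video is illustrative.
import numpy as np

def median_background(frames, indices=(0, 9, 19, 29, 39)):
    """Pixel-wise median over the selected frame indices."""
    stack = np.stack([frames[i] for i in indices], axis=0)
    return np.median(stack, axis=0)

# Static background of value 100 with a bright column that moves
# every two frames, so it occupies any given column in at most one
# of the five sampled frames; the median then recovers the background.
frames = [np.full((24, 32), 100.0) for _ in range(40)]
for i, f in enumerate(frames):
    f[:, (i // 2) % 32] = 200.0   # moving bright column ("foreground")
background = median_background(frames)
```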

3.2. The Proposed Algorithm. The methodology of our proposed technique is described in the flowchart shown in Figure 1.

First, each dataset was tested using the following basic parametric BGS techniques: FD, RA, AM, and RGA. Next, the SDGD filter was fused with each technique by combining the output of the background subtraction with the SDGD filter output. The SDGD filter was selected as the segmentation tool because it produced better results than other edge detection techniques (Sobel, Canny, and Roberts) [19, 28].
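For the FD case, the fusion just described can be sketched as follows. The difference threshold, sigma, gradient threshold, and the simplified one-axis zero-crossing edge test are illustrative assumptions; the paper does not state these values.

```python
# Sketch of the fusion step for the FD variant: threshold the frame
# difference (Fx), detect SDGD zero-crossing edges on the difference
# (Fsdgd), OR the two masks (Fg), and close small gaps (Ffinal).
import numpy as np
from scipy.ndimage import gaussian_filter, binary_closing

def sdgd_edges(a, sigma=1.5, grad_thresh=0.05):
    """Edge mask from sign changes of the SDGD response (x-axis only)."""
    a = a.astype(float)
    Ax  = gaussian_filter(a, sigma, order=(0, 1))
    Ay  = gaussian_filter(a, sigma, order=(1, 0))
    Axx = gaussian_filter(a, sigma, order=(0, 2))
    Ayy = gaussian_filter(a, sigma, order=(2, 0))
    Axy = gaussian_filter(a, sigma, order=(1, 1))
    r = (Axx*Ax**2 + 2*Axy*Ax*Ay + Ayy*Ay**2) / (Ax**2 + Ay**2 + 1e-12)
    # Edge = sign change of the response where the gradient is strong.
    edges = np.zeros_like(a, dtype=bool)
    edges[:, :-1] = np.sign(r[:, :-1]) != np.sign(r[:, 1:])
    return edges & (np.hypot(Ax, Ay) > grad_thresh)

def hybrid_fd(frame, reference, thresh=0.2):
    diff = np.abs(frame.astype(float) - reference.astype(float))
    f_x = diff > thresh                          # thresholded frame difference
    f_g = f_x | sdgd_edges(diff)                 # fuse foreground with edges
    return binary_closing(f_g, np.ones((3, 3)))  # fill boundary gaps

reference = np.zeros((40, 40))
frame = reference.copy()
frame[10:30, 10:30] = 1.0                        # a moving "object"
mask = hybrid_fd(frame, reference)
```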

3.3. Evaluation Method. To evaluate the performance of each BGS technique, we calculated the averages of recall, precision, F-score, and accuracy.

(a) Recall (Rcl) refers to the detection rate, calculated by comparing the number of detected true positive pixels with the total number of true positive pixels in the ground-truth frame [35]; it is also known as sensitivity. Recall is calculated as

\[ \text{recall} = \frac{\Sigma \mathrm{TP}}{\Sigma \mathrm{TP} + \Sigma \mathrm{FN}}, \tag{10} \]

where TP is true positive and FN is false negative.

(b) Precision (Prcsn) is the ratio between the detected true positive pixels and the total number of positive pixels detected by the method [35, 36]; it is also known as the positive predictive value:

\[ \text{precision} = \frac{\Sigma \mathrm{TP}}{\Sigma \mathrm{TP} + \Sigma \mathrm{FP}}, \tag{11} \]

where FP is false positive.

(c) F-measure, or balanced F-score, is the harmonic mean of recall and precision. It is used as a single measurement for comparing different methods [35, 36]:

\[ F = \frac{2 \cdot \text{recall} \cdot \text{precision}}{\text{recall} + \text{precision}}. \tag{12} \]

(d) Accuracy is the percentage of correct data retrieval, calculated by dividing the number of true positive plus true negative pixels by the total number of pixels in the frame [36]:

\[ \text{accuracy} = \frac{\Sigma \mathrm{TP} + \Sigma \mathrm{TN}}{\Sigma \mathrm{TP} + \Sigma \mathrm{TN} + \Sigma \mathrm{FP} + \Sigma \mathrm{FN}} \times 100. \tag{13} \]
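The four measures in (10)–(13) can be computed directly from a predicted foreground mask and its ground truth; the tiny masks below are illustrative.

```python
# Recall, precision, F-score, and accuracy of eqs. (10)-(13),
# computed from a predicted foreground mask and the ground truth.
import numpy as np

def segmentation_metrics(pred, truth):
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    recall = tp / (tp + fn)                                   # eq. (10)
    precision = tp / (tp + fp)                                # eq. (11)
    f_score = 2 * recall * precision / (recall + precision)   # eq. (12)
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100          # eq. (13)
    return recall, precision, f_score, accuracy

truth = np.zeros((10, 10), dtype=bool)
truth[2:6, 2:6] = True            # 16 ground-truth foreground pixels
pred = np.zeros_like(truth)
pred[3:7, 3:7] = True             # prediction overlaps truth in a 3x3 block
rcl, prcsn, f, acc = segmentation_metrics(pred, truth)
```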

This study utilized videos with many frames; hence, we present a comparison of the average F-score and average accuracy percentage as the overall performance benchmark for each BGS technique, with and without the SDGD filter. The percentage of improvement is calculated as

\[ \mathrm{im}(F\text{-score}) = \left( F_{\text{with}} - F_{\text{without}} \right) \times 100, \qquad \mathrm{im}(\text{accuracy}) = \text{accuracy}_{\text{with}} - \text{accuracy}_{\text{without}}. \tag{14} \]


Figure 2: Background images of the data in this study: (a) A1 (outdoor), (b) A2 (outdoor), (c) E1 (outdoor), (d) B1 (indoor), (e) C1 (indoor), (f) D1 (indoor).

Figure 3: Original image (a) and its ground truth (b) of a frame in A1 for an outdoor environment.

3.4. Classification. Using an artificial neural network (ANN) as the classifier, the segmented images from the proposed technique were subjected to classification testing. The ANN training input was extracted from 1500 randomly chosen segmented frames/images: 750 human blob images represented human samples, and another 750 car blob images represented non-human samples.

Scaled conjugate gradient and the backpropagation rules were chosen to train the classifier. The ANN was designed with one hidden layer containing ten hidden neurons and an output layer containing two neurons. Both layers used the sigmoid activation function and the mean squared error as the performance function. Images were classified as either human or non-human. Next, another set of 1000 frames/images was chosen for testing. We applied leave-one-out cross-validation in our study; ten experiments and the average classification rate were used to evaluate the recognition performance.
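The described network, one hidden layer of ten sigmoid neurons, can be approximated with scikit-learn's MLPClassifier. Note the assumptions: scikit-learn does not offer scaled conjugate gradient, so the quasi-Newton `lbfgs` solver stands in, and the two blob "features" and their cluster centers are synthetic stand-ins for real blob descriptors.

```python
# Sketch of the classifier: a feed-forward network with one hidden
# layer of 10 sigmoid (logistic) neurons. The solver and the synthetic
# features are stand-ins, not the paper's exact setup.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Two illustrative features per blob (e.g. area, aspect ratio),
# drawn from separated clusters for "human" vs "non-human".
X_human = rng.normal([2.5, 1.0], 0.3, size=(100, 2))
X_car   = rng.normal([1.0, 2.5], 0.3, size=(100, 2))
X = np.vstack([X_human, X_car])
y = np.array([1] * 100 + [0] * 100)   # 1 = human, 0 = non-human

clf = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                    solver="lbfgs", max_iter=2000, random_state=0)
clf.fit(X, y)
acc = clf.score(X, y)                 # training accuracy on the toy data
```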

The evaluation was based on the segmented frames/images generated by the enhanced FD technique with the SDGD filter added to the algorithm. In this experiment, we used only FD rather than the other BGS techniques because it has the fastest processing time. We also performed statistical tests, namely, recall, precision, and F-score, on the obtained classification results, taking human classifications as positive and non-human classifications as negative.

4. Results and Discussion

This section discusses the robustness of the proposed technique based on videos taken from five different databases. To confirm robustness, we tested videos presenting different environments and camera angles; specifically, the video sequences were recorded in multiple environments (indoors and outdoors) and from multiple views.


Figure 4: Original image (a) and its ground truth (b) of a frame in B1 for an indoor environment.

Method

FD

AM

RA

RGA

Without SDGD With SDGD

Figure 5 Segmentation performance using various background subtraction techniques for an outdoor environment

Journal of Electrical and Computer Engineering 7

Method

FD

AM

RA

RGA

Without SDGD With SDGD

Figure 6 Segmentation performance using various background subtraction techniques for an indoor environment

Figures 2(a)ndash2(f) show some of the background framesused in this research These background frames were gen-erated by using the median value of selected frames in thevideo sequences except for the videos obtained from theMoBo database Rather than stating the filename we assignedletters from A to E to identify the videos representing thedata A refers to SESRG B to MoCap C to MoBo D toMuHAVi and E to HMDB51 The words in brackets indicatewhether the video environment was either indoor or outdoor

Next we present the subjective results of our objectextraction Since this study involves video data with a multi-ple number of frames we only depict the results obtained forframenumber 10 in dataA1 and framenumber 29 in data B1 torepresent the outdoor and indoor scene samples Figures 3(a)and 4(a) illustrate the original images of a frame in videos A1and B1 respectively Figures 3(b) and 4(b) depict the groundtruth images

Figures 5 and 6 present the extracted subjects in bothindoor and outdoor environments using FD AM RA andRGA with and without the SDGD filter

The first column of Figures 5 and 6 show that all the basicBGS techniques were capable of detecting the object in thescene of interest based on the tested videos However manypixels were missing and resulted in a smaller blob comparedwith the ground truth image Our proposed technique solvesthe problem of missing pixels and reduced blob size becauseit combines the SDGD filter with FD AM RA and RGAas shown in column 2 of Figures 5 and 6 The pixel sizeof the extracted object is slightly enlarged and becomesmore compound Foreground detection showed significantimprovement after the proposed technique was applied to alldatasets

Tables 1 and 2 show the number of TP TN FP and FN forthe selected indoor andoutdoor sampleswith andwithout the

8 Journal of Electrical and Computer Engineering

0010203040506070809

1

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29

F-sc

ore

Frame number

(a) FD

F-sc

ore

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29Frame number

0010203040506070809

1

(b) AM

F-sc

ore

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29Frame number

0010203040506070809

1

(c) RA

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29Frame number

F-sc

ore

0010203040506070809

1

(d) RGA

Figure 7 Graph of F-score versus frame number for outdoor sample

Table 1: Performance for frame 10 in outdoor sample with and without SDGD filter.

Method  Variant        TP    FP    FN    TN     Recall  Precision  F-score
FD      without SDGD   2046  1279  1091  80064  0.65    0.62       0.63
FD      with SDGD      2521  804   616   80539  0.80    0.76       0.78
AM      without SDGD   1986  1785  1151  79558  0.63    0.53       0.57
AM      with SDGD      2374  951   763   80392  0.76    0.71       0.73
RA      without SDGD   1959  1781  1178  79562  0.62    0.52       0.57
RA      with SDGD      2303  1022  834   80321  0.73    0.69       0.71
RGA     without SDGD   2441  912   696   80431  0.78    0.73       0.75
RGA     with SDGD      2654  499   483   80844  0.85    0.84       0.84

Table 2: Performance for frame 29 in indoor sample with and without SDGD filter.

Method  Variant        TP    FP    FN    TN     Recall  Precision  F-score
FD      without SDGD   2336  1372  809   79963  0.74    0.63       0.68
FD      with SDGD      2788  920   1165  79607  0.71    0.75       0.73
AM      without SDGD   2771  1437  808   79964  0.74    0.61       0.67
AM      with SDGD      2726  982   1077  79695  0.72    0.74       0.73
RA      without SDGD   2289  1419  858   79914  0.73    0.62       0.67
RA      with SDGD      2723  985   1106  79666  0.71    0.73       0.72
RGA     without SDGD   2564  1092  1175  79649  0.69    0.70       0.69
RGA     with SDGD      2979  677   1535  79289  0.67    0.81       0.73


Figure 8: F-score (0–1) versus frame number (1–29) for the indoor sample: (a) FD, (b) AM, (c) RA, (d) RGA.

addition of the SDGD filter in the BGS techniques. The values of recall, precision, and F-score were then calculated using (10)–(12).

Based on the findings shown in Tables 1 and 2, we can see that the number of TPs increased significantly, which shows that our technique detects more of the complete blob than the original methods. This finding is also in line with the increase in precision values. Tables 1 and 2 also show that the F-score increased for both samples when the SDGD filter was added.

The graphs in Figures 7 and 8 show the F-score trends in both A1 and B1 for the four BGS techniques, namely, FD, AM, RA, and RGA. The solid lines represent the F-score results using our proposed technique, that is, with the SDGD filter, whereas the dashed lines represent the F-score results without the SDGD filter. Based on Figures 7 and 8, higher F-score values were noted for the A1 and B1 videos when using the proposed hybrid technique compared with the basic BGS techniques. Thus, our proposed technique improves upon traditional BGS techniques.

To confirm the effectiveness of the proposed technique, we tested the algorithm on six different videos with six different backgrounds, obtained from the five different databases. Table 3 shows the performance of FD, AM, RA, and RGA with and without the SDGD filter in terms of the F-score and average accuracy percentage for all six video samples. Columns 5 and 8 show the percentage of improvement for both the F-score and the average accuracy

percentage. Based on Table 3, the use of the SDGD filter improved the average F-score values for all data compared with the values produced by the methods without SDGD. Column 5 of Table 3 shows that the F-score values improved by 1% to 9%. Therefore, the proposed technique enhances object extraction compared with existing techniques.

Additionally, Table 3 shows an increase in average accuracy percentage for each tested technique except for videos A2 and E1. A2 had poor video quality because the footage was taken in a corridor without proper lighting. Because of the poor video quality and bad lighting conditions, the SDGD filter was unable to segment the foreground subjects properly and produced an unwanted shadow in the foreground. Nevertheless, our proposed technique is capable of detecting foreground pixels with over 90% accuracy on all videos tested.

Meanwhile, Table 4 exhibits the results of classifying human and non-human images based on the segmented frames generated by the proposed technique. As shown in Table 4, the ANN successfully recognized human and non-human images from the frames generated using the improved FD technique. Incorrect classifications were minimal in all ten experiments.

Figure 9 presents a matrix describing the overall results of classification testing. The average recognition rate for the human and non-human categories was 98.78% and 98.72%, respectively. The rate is high for both categories because our algorithm provided good human and non-human blob images, which


Table 3: Performance of each technique (evaluation based on average values).

                F-score                          Accuracy (%)
Name  Method  Without  With   im* (%)   Without  With    im*
A1    FD      0.75     0.78   3         98.45    98.47    0.02
      AM      0.69     0.75   6         98.16    98.29    0.13
      RA      0.74     0.77   3         98.42    98.44    0.02
      RGA     0.79     0.81   2         98.42    98.47    0.03
A2    FD      0.78     0.79   1         96.74    96.65   -0.09
      AM      0.80     0.81   1         97.31    97.14   -0.17
      RA      0.76     0.78   2         96.91    96.83   -0.08
      RGA     0.76     0.77   1         95.75    95.65   -0.10
B1    FD      0.68     0.74   6         97.99    98.12    0.13
      AM      0.71     0.76   5         98.22    98.37    0.15
      RA      0.71     0.76   5         98.25    98.38    0.13
      RGA     0.64     0.68   4         97.47    97.51    0.04
C1    FD      0.84     0.85   1         95.64    95.69    0.05
      AM      0.79     0.80   1         94.87    95.03    0.16
      RA      0.57     0.65   8         91.69    92.17    0.48
      RGA     0.80     0.81   1         94.95    95.07    0.12
D1    FD      0.68     0.71   3         97.84    98.07    0.23
      AM      0.71     0.75   4         98.13    98.37    0.22
      RA      0.71     0.74   3         98.14    98.36    0.22
      RGA     0.70     0.74   4         98.04    98.25    0.21
E1    FD      0.80     0.81   1         96.66    96.69    0.03
      AM      0.80     0.81   1         96.70    96.71    0.01
      RA      0.80     0.81   1         96.69    96.70    0.01
      RGA     0.68     0.77   9         96.45    96.50    0.05

*im: percent improvement.

                    Target class
Output class     Human   Non-human   (%)
Human            4939    64          98.78
Non-human        61      4936        98.72
(%)              98.78   98.72       98.75

Figure 9: Confusion matrix of the overall performance.

allowed the ANN to distinguish between the two categories. Further, the overall performance across both classes was 98.75%. The findings confirmed that the proposed algorithm, which combines FD with the SDGD filter, could generate better silhouette images, thereby facilitating recognition of human and non-human images in the segmented images/frames.

From the matrix, the TP, TN, FP, and FN values can be extracted; hence, Table 5 shows the results of the statistical analysis performed on the ANN classification findings.
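As a consistency check, the overall confusion matrix in Figure 9 is simply the sum of the per-experiment counts in Table 4, where each experiment tested 500 human and 500 non-human images. The counts below are transcribed from Table 4; the variable names are ours:

```python
# (correct human, correct non-human) per experiment, transcribed from Table 4;
# each of the ten experiments tested 500 human and 500 non-human images.
correct = [(498, 492), (499, 491), (498, 495), (495, 500), (481, 492),
           (497, 490), (484, 496), (493, 498), (497, 496), (497, 486)]

correct_human = sum(h for h, _ in correct)       # diagonal entry for Human
correct_nonhuman = sum(n for _, n in correct)    # diagonal entry for Non-human
missed_human = 10 * 500 - correct_human          # off-diagonal entry (row 2)
missed_nonhuman = 10 * 500 - correct_nonhuman    # off-diagonal entry (row 1)
overall = (correct_human + correct_nonhuman) / (10 * 1000) * 100
```

The sums reproduce the entries of Figure 9 (4939 and 4936 on the diagonal, 61 and 64 off it) and the overall recognition rate of 98.75%.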

5. Conclusion

We presented a new hybrid approach that incorporates the SDGD filter with four basic BGS techniques, namely, FD, AM, RA, and RGA. This hybrid technique enhanced segmentation performance, as indicated by the F-score values and average accuracy percentages. The technique was tested on six different videos from five different databases; each video was taken either indoors or outdoors and showed a different scene. An ANN classifier was used to classify human and non-human images appearing in the segmented images generated by our algorithm. As the algorithm was capable of providing good blob images, ANN recognition of human and non-human images in the silhouette images was facilitated.

Although the computational time increased, this aspect is acceptable considering the enhancement and the characteristics of second-order derivatives. Therefore, this study is


Table 4: Classification results.

Exp   Result   Human  Non-human  (%)
1     Correct  498    492        99.0
      Missed   2      8          1.0
2     Correct  499    491        99.0
      Missed   1      9          1.0
3     Correct  498    495        99.3
      Missed   2      5          0.7
4     Correct  495    500        99.5
      Missed   5      0          0.5
5     Correct  481    492        97.3
      Missed   19     8          2.7
6     Correct  497    490        98.7
      Missed   3      10         1.3
7     Correct  484    496        98.0
      Missed   16     4          2.0
8     Correct  493    498        99.1
      Missed   7      2          0.9
9     Correct  497    496        99.3
      Missed   3      4          0.7
10    Correct  497    486        98.3
      Missed   3      14         1.7

Table 5: Performance for ANN classification.

TP    TN    FP  FN  Recall  Precision  F-score
4939  4936  61  64  0.987   0.988      0.987

valid and suitable for implementation in non-real-time applications. The proposed hybrid technique can improve upon traditional BGS techniques, as indicated by the improved F-score, accuracy, and ANN recognition values after testing various data sources and environments. The technique can also be considered for detecting moving objects in non-real-time applications such as investigations of human actions or traffic conditions.

Acknowledgments

This work was supported by Universiti Kebangsaan Malaysia (UKM) research Grant DPP-2013-003 and Ministry of Higher Education (MoHE) research Grant LRGSTD2011ICT0402.

References

[1] A. M. McIvor, "Background subtraction techniques," in Proceedings of the Image and Vision Computing Conference, Auckland, New Zealand, 2000.
[2] M. H. Sigari, N. Mozayani, and H. M. Pourreza, "Fuzzy running average and fuzzy background subtraction: concepts and application," International Journal of Computer Science and Network Security, vol. 8, pp. 138–143, 2008.
[3] Y. Zheng and L. Fan, "Moving object detection based on running average background and temporal difference," in Proceedings of the IEEE International Conference on Intelligent Systems and Knowledge Engineering (ISKE '10), pp. 270–272, November 2010.
[4] J. Park, A. Tabb, and A. C. Kak, "Hierarchical data structure for real-time background subtraction," in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 1849–1852, October 2006.
[5] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, "Pfinder: real-time tracking of the human body," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780–785, 1997.
[6] Z. Tang, Z. Miao, and Y. Wan, "Background subtraction using running Gaussian average and frame difference," in Proceedings of the International Conference on Entertainment Computing (ICEC '07), vol. 4740 of Lecture Notes in Computer Science, pp. 411–414, 2007.
[7] Z. He, Y. Liu, H. Yu, and X. Ye, "Optimized algorithm for traffic information collection in an embedded system," in Proceedings of the IEEE Congress on Image and Signal Processing, May 2008.
[8] A. Singh, S. Sawan, M. Hanmandlu, V. K. Madasu, and B. C. Lovell, "An abandoned object detection system based on dual background segmentation," in Proceedings of the 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '09), pp. 352–357, IEEE Press, September 2009.
[9] S. Su and Y. Chen, "Moving object segmentation using improved running Gaussian average background model," in Proceedings of the International Conference on Digital Image Computing: Techniques and Applications, 2008.
[10] H. Kim, R. Sakamoto, I. Kitahara, T. Toriyama, and K. Kogure, "Background subtraction using generalised Gaussian family model," Electronics Letters, vol. 44, no. 3, pp. 189–190, 2008.
[11] Z. H. Huang and K. W. Chau, "A new image thresholding method based on Gaussian mixture model," Applied Mathematics and Computation, vol. 205, pp. 899–907, 2008.
[12] Y. Liu, H. Yao, W. Gao, X. Chen, and D. Zhao, "Nonparametric background generation," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 916–919, August 2006.
[13] P. Vu, V. Phong, T. H. Vu, and H. B. Le, "GPU implementation of extended Gaussian mixture model for background subtraction," in Proceedings of the 8th IEEE-RIVF International Conference on Computing and Communication Technologies: Research, Innovation, and Vision for the Future (RIVF '10), November 2010.
[14] A. Mittal and N. Paragios, "Motion-based background subtraction using adaptive kernel density estimation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-302–II-309, July 2004.
[15] F. Porikli and O. Tuzel, "Human body tracking by adaptive background models and mean shift analysis," in Proceedings of the IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS '03), July 2003.
[16] I. T. Young, J. J. Gerbrands, and L. J. van Vliet, Fundamentals of Image Processing, version 2.3, 1995–2007.
[17] V. G. Narendra and K. S. Hareesh, "Study and comparison of various image edge detection techniques," International Journal of Image Processing, vol. 4, no. 2, article 83, 2009.
[18] J. F. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–697, 1986.
[19] M. P. Persoon, I. W. O. Serlie, F. H. Post, R. Truyen, and F. M. Vos, "Visualization of noisy and biased volume data using first and second order derivative techniques," in Proceedings of the 14th IEEE Visualization Conference, pp. 379–385, October 2003.
[20] I. T. Young, "Generalized convolutional filtering," in Proceedings of the 19th CERN School of Computing, pp. 51–65, 1996.
[21] "Image processing fundamentals: derivative-based operations," 2011, http://www.mif.vu.lt/atpazinimas/dip/FIP/fip-Derivati.html.
[22] P. W. Verbeek and L. J. van Vliet, "Location error of curved edges in low-pass filtered 2-D and 3-D images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 7, pp. 726–733, 1994.
[23] M. Hagara and J. Moravcik, "PLUS operator for edge detection in digital images," in Proceedings of the International Conference Radioelektronika, pp. 467–470, 2002.
[24] Y. Nader El-Glaly, Development of PDE-Based Digital Inpainting Algorithm Applied to Missing Data in Digital Images [M.S. thesis], Ain Shams University, 2007.
[25] R. G. Aarnick, J. de la Rosette, W. Feitz, F. Debruyne, and H. Wijkstra, "A preprocessing algorithm for edge detection with multiple scales of resolution," European Journal of Ultrasound, vol. 5, pp. 113–126, 1997.
[26] N. Lu and J. Wang, "Motion detection based on accumulative optical flow and double background filtering," in Proceedings of the World Congress on Engineering, vol. 1, 2007.
[27] W. M. D. W. Zaki, A. Hussain, and M. Hedayati, "Moving object detection using keypoints reference model," EURASIP Journal on Image and Video Processing, vol. 2011, 13 pages, 2011.
[28] F. Y. A. Rahman, A. Hussain, N. M. Tahir, S. A. Samad, and M. H. M. Saad, "Hybrid background subtraction techniques with second derivative on gradient direction filter," in Proceedings of the International Workshop on Advanced Image Technology (IWAIT '12), 2012.
[29] S. M. Al-Garni and A. A. Abdennour, "Moving vehicle detection using automatic background extraction," in Proceedings of the World Academy of Science, Engineering and Technology, vol. 24, pp. 82–86, Sydney, Australia, December 2006.
[30] "CMU graphics lab motion capture database," 2010, http://mocap.cs.cmu.edu.
[31] R. Gross and J. Shi, "The CMU motion of body (MoBo) database," Technical Report CMU-RI-TR-01-18, Robotics Institute, Carnegie Mellon University, 2001.
[32] S. Singh, S. A. Velastin, and H. Ragheb, "MuHAVi: a multicamera human action video dataset for the evaluation of action recognition methods," in Proceedings of the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '10), pp. 48–55, Boston, Mass, USA, September 2010.
[33] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, "HMDB: a large video database for human motion recognition," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2556–2563, November 2011.
[34] F. Y. A. Rahman, A. Hussain, N. M. Tahir, and W. M. D. Zaki, "Modeling of initial reference frame for background subtraction," in Proceedings of the 6th International Colloquium on Signal Processing and Its Applications (CSPA '10), pp. 125–128, 2010.
[35] E. Fauske, L. M. Eliassen, and R. H. Bakken, "A comparison of learning based background subtraction techniques," in Proceedings of the Norwegian Artificial Intelligence Symposium (NAIS), pp. 181–192, 2009.
[36] H. Kim, B. Ku, D. K. Han, S. Kang, and H. Ko, "Adaptive selection model in block-based background subtraction," Electronics Letters, vol. 48, no. 8, 2012.



Figure 1: Flowchart of the proposed algorithm. Start; generate the reference frame; subtract the current frame from the reference frame to obtain Fdiff; perform thresholding to obtain Fx; perform the SDGD filter on Fdiff to produce Fsdgd; fuse Fsdgd with Fx to produce Fg; perform a morphological operation on Fg to obtain Ffinal; if the FD technique is used, update the reference frame; repeat until the last frame is processed, then end.

The initial background for videos taken from databases (a), (b), (d), and (e) was modeled using the median value from a selected frame interval. We used five frames from the video sequences with 10 intervals between the frames, that is, F1, F10, F20, F30, and F40. The median value of these selected frames was then used as the reference image. Details of this technique are presented in [34]. Meanwhile, database (c) provided the background reference frame. All datasets except the MuHAVi videos were manually segmented to obtain the ground truth.
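As an illustrative sketch (not the authors' code), this median-based background modeling is a few lines of NumPy; the 4×4 toy frames and the 0-based indices standing in for F1, F10, F20, F30, and F40 are assumptions for the example:

```python
import numpy as np

def median_background(frames, indices=(0, 9, 19, 29, 39)):
    """Model the initial background as the pixel-wise median of five frames
    sampled 10 frames apart (F1, F10, F20, F30, F40; 0-based here)."""
    selected = np.stack([frames[i] for i in indices], axis=0)
    return np.median(selected, axis=0).astype(frames[0].dtype)

# Toy sequence: 40 static 4x4 frames; one sampled frame contains a transient.
frames = [np.full((4, 4), 100, dtype=np.uint8) for _ in range(40)]
frames[9][:] = 255  # the transient pollutes only one of the five samples
background = median_background(frames)  # median rejects the outlier frame
```

Because the per-pixel median of five samples tolerates up to two outliers, a briefly passing foreground object does not leak into the reference frame.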

3.2. The Proposed Algorithm. The methodology of our proposed technique is described in the flowchart shown in Figure 1.

First, the dataset was tested using the following basic parametric BGS techniques: FD, RA, AM, and RGA. Next, the SDGD filter was fused with each technique by combining the output of a background subtraction technique with the SDGD filter output. The SDGD filter was selected as a segmentation tool because it produced better results than other edge detection techniques (Sobel, Canny, and Roberts) [19, 28].
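A minimal NumPy sketch of this fusion step follows; it is our illustration, not the authors' implementation. Derivatives are approximated with np.gradient rather than Gaussian-derivative kernels, the SDGD expression follows the standard form (fx²·fxx + 2·fx·fy·fxy + fy²·fyy)/(fx² + fy²) [22], both thresholds are illustrative, and the names Fdiff, Fx, Fsdgd, and Fg follow Figure 1:

```python
import numpy as np

def sdgd(f, eps=1e-8):
    """Second derivative in the gradient direction:
    (fx^2*fxx + 2*fx*fy*fxy + fy^2*fyy) / (fx^2 + fy^2)."""
    fy, fx = np.gradient(f)          # first derivatives (axis 0 = rows)
    fyy, _ = np.gradient(fy)         # second derivative along rows
    fxy, fxx = np.gradient(fx)       # mixed and column second derivatives
    num = fx**2 * fxx + 2.0 * fx * fy * fxy + fy**2 * fyy
    return num / (fx**2 + fy**2 + eps)

def hybrid_foreground(frame, background, bg_thresh=30.0, edge_thresh=5.0):
    """Fuse a thresholded difference mask (Fx) with SDGD edge pixels (Fsdgd)."""
    f_diff = np.abs(frame.astype(float) - background.astype(float))
    f_x = f_diff > bg_thresh                      # basic BGS foreground mask
    f_sdgd = np.abs(sdgd(f_diff)) > edge_thresh   # edge pixels missed by BGS
    return f_x | f_sdgd                           # Fg; morphology would follow
```

A morphological operation on the fused mask would then fill the blob, producing Ffinal as in Figure 1.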

3.3. Evaluation Method. To evaluate the performance of each BGS technique, we calculated the averages of recall, precision, F-score, and accuracy.

(a) Recall (Rcl) refers to the detection rate, calculated by comparing the total number of detected true positive pixels with the total number of true positive pixels in the ground truth frame [35]. This is also known as sensitivity. Recall is calculated as

recall = ΣTP / (ΣTP + ΣFN), (10)

where TP is true positive and FN is false negative.

(b) Precision (Prcsn) is the ratio between the detected true positive pixels and the total number of positive pixels detected by the method [35, 36], also known as the positive predictive value:

precision = ΣTP / (ΣTP + ΣFP), (11)

where FP is false positive.

(c) F-measure, or balanced F-score, is the weighted harmonic mean of recall and precision. It is used as a single measurement for comparing different methods [35, 36]:

F = (2 · recall · precision) / (recall + precision). (12)

(d) Accuracy is the percentage of correct data retrieval. It is calculated by dividing the number of true positive pixels plus true negative pixels by the total number of pixels in the frame [36]:

accuracy = (ΣTP + ΣTN) / (ΣTP + ΣTN + ΣFP + ΣFN) × 100. (13)
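Equations (10)–(13) can be checked numerically in a few lines; the helper below is ours, fed with the FD-without-SDGD row of Table 1 (frame 10 of the outdoor sample):

```python
def bgs_metrics(tp, fp, fn, tn):
    """Recall (10), precision (11), F-score (12), and accuracy in percent (13)
    from pixel-level confusion counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_score = 2 * recall * precision / (recall + precision)
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
    return recall, precision, f_score, accuracy

# FD without SDGD, frame 10 of the outdoor sample (Table 1)
r, p, f, acc = bgs_metrics(tp=2046, fp=1279, fn=1091, tn=80064)
# r, p, f round to 0.65, 0.62, 0.63, matching the tabulated values
```

The confusion counts sum to 84480 pixels, consistent with a 352 × 240 frame.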

This study utilized videos with multiple frames. Hence, we present a comparison of the average F-score and average accuracy percentage as the overall performance benchmark for each BGS technique with and without the SDGD filter. The following equations are used to calculate the percentage of improvement:

im(F-score) = (F-score_with − F-score_without) × 100,
im(accuracy) = accuracy_with − accuracy_without. (14)
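Applying (14) to the A1/FD row of Table 3 reproduces its improvement columns; the helper is our illustration:

```python
def improvement(f_without, f_with, acc_without, acc_with):
    """Percent improvement per (14): the F-score gain is scaled by 100,
    while the accuracy gain is already a difference of percentages."""
    return (f_with - f_without) * 100, acc_with - acc_without

# A1 with FD (Table 3): F-score 0.75 -> 0.78, accuracy 98.45% -> 98.47%
im_f, im_acc = improvement(0.75, 0.78, 98.45, 98.47)
```

This yields the 3% F-score improvement and 0.02 accuracy gain reported for that row.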


Figure 2: Background images of the data in this study: (a) A1 (outdoor), (b) A2 (outdoor), (c) E1 (outdoor), (d) B1 (indoor), (e) C1 (indoor), (f) D1 (indoor).

Figure 3: Original image (a) and its ground truth (b) of a frame in A1 for an outdoor environment.

3.4. Classification. Using an artificial neural network (ANN) as the classifier, the segmented images from the proposed technique were subjected to classification testing. The training input of the ANN was extracted from 1500 randomly chosen segmented frames/images: 750 human blob images represented human samples, and another 750 car blob images represented non-human samples.

A scaled conjugate gradient and the backpropagation rules were chosen to train the classifier. The ANN was designed with one hidden layer containing ten hidden neurons and an output layer containing two neurons. Both layers used the sigmoid activation function and the mean squared error as the performance function. Images were classified as either human or non-human. Next, another set of 1000 frames/images was chosen for testing. We applied leave-one-out cross-validation in our study. Ten experiments were run, and the average classification rate was used to evaluate recognition performance.

The evaluation was based on the segmented frames/images generated from the enhanced FD technique with the SDGD filter added to the algorithm. In this experiment, we used only FD instead of the other BGS techniques because it has the fastest processing time. We also performed statistical tests, such as recall, precision, and F-score, on the obtained classification results, taking the classification of human as positive and the classification of non-human as negative.

4. Results and Discussion

This section discusses the robustness of the proposed technique based on videos taken from five different databases. To confirm the robustness of the proposed algorithm, we tested it on videos presenting different environments and camera angles. Specifically, the video sequences were recorded in multiple environments (indoors and outdoors) and from multiple views.


Figure 4: Original image (a) and its ground truth (b) of a frame in B1 for an indoor environment.

Figure 5: Segmentation performance using various background subtraction techniques (FD, AM, RA, and RGA), without and with the SDGD filter, for an outdoor environment.


Figure 6: Segmentation performance using various background subtraction techniques (FD, AM, RA, and RGA), without and with the SDGD filter, for an indoor environment.

Figures 2(a)–2(f) show some of the background frames used in this research. These background frames were generated using the median value of selected frames in the video sequences, except for the videos obtained from the MoBo database. Rather than stating the filename, we assigned letters from A to E to identify the videos representing the data: A refers to SESRG, B to MoCap, C to MoBo, D to MuHAVi, and E to HMDB51. The words in brackets indicate whether the video environment was indoor or outdoor.


[4] J Park A Tabb and A C Kak ldquoHierarchical data structure forreal-time background subtractionrdquo in Proceedings of the IEEEInternational Conference on Image Processing (ICIP rsquo06) pp1849ndash1852 October 2006

[5] C RWrenAAzarbayeni TDarrel andA P Petland ldquoPfinderreal-time tracking of the human bodyrdquo IEEE Transaction onPattern Analysis andMachine Intelligence vol 19 no 7 pp 780ndash785 1997

[6] Z Tang Z Miao and Y Wan ldquoBackground subtraction usingrunning Gaussian average and frame differencerdquo in Proceedingsof the International Conference Entertainment Computing (ICECrsquo07) vol 4740 of Lecture Notes in Computer Science pp 411ndash4142007

[7] Z He Y Liu H Yu and X Ye ldquoOptimized algorithm for trafficinformation collection in an embedded systemrdquo in Proceedingsof the IEEE Congress on Image and Signal Processing May 2008

[8] A Singh S Sawan M Hanmandlu V K Madasu and BC Lovell ldquoAn abandoned object detection system based ondual background segmentationrdquo in Proceedings of the 6th IEEEInternational Conference on Advanced Video and Signal BasedSurveillance (AVSS rsquo09) pp 352ndash357 IEEE Press September2009

[9] S Su and Y Chen ldquoMoving object segmentation usingimproved running Gaussian average background modelrdquo inProceedings of the International conference on Digital ImageComputing Techniques and Applications 2008

[10] H Kim R Sakamoto I Kitahara T Toriyama and K KogureldquoBackground subtraction using generalised Gaussian familymodelrdquo Electronics Letters vol 44 no 3 pp 189ndash190 2008

[11] Z H Huang and K W Chau ldquoA new image thresholdingmethod based on Gaussian mixture modelrdquo Journal AppliedMathematics and Computation vol 205 pp 899ndash907 2008

[12] Y Liu H Yao W Gao X Chen and D Zhao ldquoNonparametricbackground generationrdquo in Proceedings of the 18th InternationalConference on Pattern Recognition (ICPR rsquo06) pp 916ndash919August 2006

[13] P Vu V Phong T H Vu and H B Le ldquoGPU implemen-tation of Extended Gaussian mixture model for backgroundsubtractionrdquo in Proceedings of the 8th IEEE-RIVF InternationalConference on Computing and Communication TechnologiesResearch Innovation and Vision for the Future (RIVF rsquo10)November 2010

[14] A Mittal and N Paragios ldquoMotion-based background subtrac-tion using adaptive kernel density estimationrdquo in Proceedings ofthe IEEE Computer Society Conference on Computer Vision andPattern Recognition (CVPR rsquo04) pp II302ndashII309 July 2004

[15] F Porikli and O Tuzel ldquoHuman body tracking by adaptivebackground models and mean shift analysisrdquo in Proceedings ofthe IEEE International Workshop on Performance Evaluation ofTracking and Surveillance (PETS rsquo03) July2003

[16] I J Young J J Gerbrands and L J van Vliet ldquoFundamentals ofImage Processingrdquo Version 2 3 1995ndash2007

[17] V G Narendra and K S Hareesh ldquoStudy and comparison ofvarious image edge detection techniquesrdquo International Journalof Image Processing vol 4 no 2 article 83 2009

[18] J F Canny ldquoA computational approach to edge detectionrdquo IEEETransaction Pattern Analysis and Machine Intelligent vol 8 no6 pp 679ndash697 1986

12 Journal of Electrical and Computer Engineering

[19] M P Persoon I W O Serlie F H Post R Truyen and F MVos ldquoVisualization of noisy and biased volume data using firstand second order derivative techniquesrdquo in Proceedings of the14th IEEE Visualization Conference pp 379ndash385 October 2003

[20] I T Young ldquoGeneralized convolutional filteringrdquo inProceedingsof the 19th CERN School of Computing pp 51ndash65 1996

[21] ldquoImage processing fundamentals derivative-based operationrdquo2011 httpwwwmifvultatpazinimasdipFIPfip-Derivatihtml

[22] PWVerbeek and L J vanVliet ldquoLocation error of curved edgesin low-pass filtered 2-D and 3-D imagesrdquo IEEE Transactions onPattern Analysis andMachine Intelligence vol 16 no 7 pp 726ndash733 1994

[23] M Hagara and J Moravcik ldquoPLUS operator for edge detectionin digital imagesrdquo in Proceedings of the International Conferenceof Radioelektonika pp 467ndash470 2002

[24] Y Nader El-GlalyDevelopment of PDE-Based Digital InpaintingAlgorithm Applied to Missing Data in Digital Images [MSthesis] Ain Shams University 2007

[25] R G Aarnick J de la Rosette W Feitz F Debruyne and HWijkstra ldquoA preprocessing algorithm for edge detection withmultiple scales of resolutionrdquo European Journal of Ultrasoundvol 5 pp 113ndash126 1997

[26] N Lu and J Wang ldquoMotion detection based on accumulativeoptical flow and double background filteringrdquo in Proceedings ofthe World Congress on Engineering vol 1 2007

[27] ZakiWM DW A Hussain andM Hedayati ldquoMoving objectdetection using Keypoints reference modelrdquo EURASIP Journalon Image and Video Processing vol 2011 13 pages 2011

[28] F Y A Rahman A Hussain N M Tahir S A Samad and MH M Saad ldquoHybrid background subtraction techniques withsecond derivative on gradient direction filterrdquo in Proceedingsof the International Workshop on Advanced Image Technology(IWAIT rsquo12) 2012

[29] SMAl-Garni andAAAbdennour ldquoMoving vehicle detectionusing automatic background extractionrdquo in proceedings of theWorld Academy of Science Engineering and Technology vol 24pp 82ndash86 Sydney Australia December 2006

[30] ldquoCMU graphic lab motion capture databaserdquo 2010 httpmocapcscmuedu

[31] R Gross and J Shi ldquoThe CMU motion of body (MoBo)databaserdquo Technical Report CMU-RI-TR-01-18 Robotics Insti-tute Carnegie Mellon University 2001

[32] S Singh S A Velastin and H Ragheb ldquoMuHAVi a mul-ticamera human action video dataset for the evaluation ofaction recognition methodsrdquo in Proceedings of the 7th IEEEInternational Conference on Advanced Video and Signal Based(AVSS rsquo10) pp 48ndash55 Boston Mass USA September 2010

[33] H Kuehne H Jhuang E Garrote T Poggio and T SerreldquoHMDB a large video database for humanmotion recognitionrdquoinProceedings of the IEEE International Conference onComputerVision (ICCV rsquo11) pp 2556ndash2563 November 2011

[34] F Y A Rahman A Hussain N M Tahir and W M DZaki ldquoModeling of initial reference frame for backgroundsubtractionrdquo in Proceedings of the 6th International Colloquiumon Signal Processing and Its Applications (CSPA rsquo10) pp 125ndash1282010

[35] E Fauske L M Eliassen and R H Bakken ldquoA comparison oflearning based background subtractionrdquo in Proceedings of theNorwegian Artificial Intelligens Symposium (NAIS) pp 181ndash1922009

[36] H Kim B Ku D K Han S Kang and H Ko ldquoAdaptive selec-tion model in block-based background subtractionrdquo ElectronicsLetter vol 48 no 8 2012

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 5: Research Article Enhancement of Background Subtraction ...downloads.hindawi.com/journals/jece/2013/598708.pdf · Research Article Enhancement of Background Subtraction Techniques

Journal of Electrical and Computer Engineering 5

Figure 2: Background images of the data used in this study: (a) A1 (outdoor), (b) A2 (outdoor), (c) E1 (outdoor), (d) B1 (indoor), (e) C1 (indoor), (f) D1 (indoor).

Figure 3: Original image (a) and its ground truth (b) for a frame in A1, an outdoor environment.

3.4. Classification. Using the artificial neural network (ANN) as the classifier, the segmented images produced by the proposed technique were subjected to classification testing. The training input of the ANN was extracted from 1500 randomly chosen segmented frames/images: 750 human blob images represented human samples, and another 750 car blob images represented non-human samples.

Scaled conjugate gradient and backpropagation rules were chosen to train the classifier. The ANN was designed with one hidden layer containing ten hidden neurons and an output layer containing two neurons. Both layers used the sigmoid activation function and the mean squared error as the performance function. Images were classified as either human or non-human. Next, another set of 1000 frames/images was chosen for testing. We applied leave-one-out cross-validation in our study. Ten experiments were run, and the average classification rate was used to evaluate the human versus non-human recognition performance.
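The network described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it substitutes plain batch gradient descent for scaled conjugate gradient and uses synthetic two-feature "blob" vectors in place of the real segmented images, but it keeps the stated architecture (one hidden layer of ten sigmoid neurons, a two-neuron sigmoid output layer, mean squared error).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical stand-in data: 2-D feature vectors for "human" and "non-human"
# blobs, drawn as two well-separated Gaussian clusters.
X = np.vstack([rng.normal(-1.0, 0.3, (100, 2)),
               rng.normal(+1.0, 0.3, (100, 2))])
Y = np.zeros((200, 2))
Y[:100, 0] = 1.0   # one-hot target: "human"
Y[100:, 1] = 1.0   # one-hot target: "non-human"

# One hidden layer with ten sigmoid neurons and a two-neuron sigmoid output
# layer, trained on mean squared error with plain batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 10)); b1 = np.zeros(10)
W2 = rng.normal(0.0, 0.5, (10, 2)); b2 = np.zeros(2)
lr = 0.5
for _ in range(3000):
    H = sigmoid(X @ W1 + b1)              # hidden activations
    O = sigmoid(H @ W2 + b2)              # network outputs
    dO = (O - Y) * O * (1.0 - O)          # MSE gradient through output sigmoid
    dH = (dO @ W2.T) * H * (1.0 - H)      # backpropagated hidden gradient
    W2 -= lr * (H.T @ dO) / len(X); b2 -= lr * dO.mean(axis=0)
    W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
accuracy = (pred == Y.argmax(axis=1)).mean()
```

On the separable synthetic clusters above, training accuracy approaches 100%; the actual experiment instead used 1500 training and 1000 test blob images across ten runs.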

The evaluation was based on the segmented frames/images generated from the enhanced FD technique with the SDGD filter added to the algorithm. In this experiment, we used only FD rather than the other BGS techniques because it has the fastest processing time. We also performed statistical tests, namely recall, precision, and F-score, on the obtained classification results, taking a positive identification as a classification of human and a negative as a classification of non-human.

4. Results and Discussion

This section discusses the robustness of the proposed technique based on videos taken from five different databases. To confirm the robustness of the proposed algorithm, we tested it using videos that presented different environments and camera angles. Specifically, the video sequences were recorded in multiple environments (indoors and outdoors) and from multiple views.


Figure 4: Original image (a) and its ground truth (b) for a frame in B1, an indoor environment.

Figure 5: Segmentation performance using various background subtraction techniques (FD, AM, RA, and RGA; columns: without SDGD, with SDGD) for an outdoor environment.


Figure 6: Segmentation performance using various background subtraction techniques (FD, AM, RA, and RGA; columns: without SDGD, with SDGD) for an indoor environment.

Figures 2(a)–2(f) show some of the background frames used in this research. These background frames were generated by taking the median value of selected frames in the video sequences, except for the videos obtained from the MoBo database. Rather than stating the filename, we assigned letters from A to E to identify the videos representing the data: A refers to SESRG, B to MoCap, C to MoBo, D to MuHAVi, and E to HMDB51. The words in brackets indicate whether the video environment was indoor or outdoor.
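The median-based background generation mentioned above can be sketched as follows (a simplified illustration under stated assumptions, not the authors' exact procedure): the per-pixel temporal median suppresses transient foreground as long as each pixel shows the background in more than half of the selected frames.

```python
import numpy as np

def median_background(frames):
    """Estimate a background image as the per-pixel temporal median of a frame stack."""
    return np.median(np.stack(frames, axis=0), axis=0).astype(np.uint8)

# Synthetic check: a static 50x50 background with a small bright "object"
# occupying a different column band in each of nine frames.
bg = np.full((50, 50), 100, dtype=np.uint8)
frames = []
for k in range(9):
    f = bg.copy()
    f[10:15, 5 * k:5 * k + 5] = 255   # moving object at a new position per frame
    frames.append(f)

estimate = median_background(frames)  # recovers bg: object appears in <= 1 of 9 frames per pixel
```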

Next, we present the subjective results of our object extraction. Since this study involves video data with multiple frames, we depict only the results obtained for frame number 10 in data A1 and frame number 29 in data B1 to represent the outdoor and indoor scene samples. Figures 3(a) and 4(a) illustrate the original images of a frame in videos A1 and B1, respectively, and Figures 3(b) and 4(b) depict the corresponding ground truth images.

Figures 5 and 6 present the extracted subjects in both indoor and outdoor environments using FD, AM, RA, and RGA, with and without the SDGD filter.

The first column of Figures 5 and 6 shows that all the basic BGS techniques were capable of detecting the object in the scene of interest in the tested videos. However, many pixels were missing, which resulted in a smaller blob compared with the ground truth image. Our proposed technique solves the problem of missing pixels and reduced blob size because it combines the SDGD filter with FD, AM, RA, and RGA, as shown in column 2 of Figures 5 and 6. The extracted object is slightly enlarged and becomes more complete. Foreground detection showed significant improvement after the proposed technique was applied to all datasets.
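As an illustrative sketch of this combination, the following assumes the standard SDGD formulation from the image-processing literature, a simple absolute-difference FD, and a logical OR as the fusion rule; the authors' exact thresholds and fusion step are described earlier in the paper.

```python
import numpy as np

def sdgd(f):
    """Second derivative in the gradient direction:
    (fx^2*fxx + 2*fx*fy*fxy + fy^2*fyy) / (fx^2 + fy^2)."""
    fy, fx = np.gradient(f.astype(float))   # first derivatives (rows, cols)
    fyy, _ = np.gradient(fy)                # second derivative along rows
    fxy, fxx = np.gradient(fx)              # mixed and column second derivatives
    den = fx * fx + fy * fy
    num = fx * fx * fxx + 2.0 * fx * fy * fxy + fy * fy * fyy
    return np.where(den > 1e-12, num / np.maximum(den, 1e-12), 0.0)

def frame_difference(frame, background, threshold=25):
    """Basic FD background subtraction: foreground where |frame - background| > threshold."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

def hybrid_mask(frame, background, fd_threshold=25, edge_threshold=1.0):
    """OR-fuse the FD foreground mask with thresholded SDGD edge responses (assumed fusion rule)."""
    edges = np.abs(sdgd(frame)) > edge_threshold
    return frame_difference(frame, background, fd_threshold) | edges
```

On the quadratic test image f(i, j) = j², the SDGD response is exactly 2 away from the array borders, matching the analytical second derivative along the gradient direction.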

Tables 1 and 2 show the numbers of TP, TN, FP, and FN for the selected indoor and outdoor samples with and without the


Figure 7: Graphs of F-score versus frame number for the outdoor sample: (a) FD, (b) AM, (c) RA, (d) RGA.

Table 1: Performance for frame 10 in the outdoor sample with and without the SDGD filter.

Method               TP    FP    FN    TN     Recall  Precision  F-score
FD   without SDGD    2046  1279  1091  80064  0.65    0.62       0.63
     with SDGD       2521   804   616  80539  0.80    0.76       0.78
AM   without SDGD    1986  1785  1151  79558  0.63    0.53       0.57
     with SDGD       2374   951   763  80392  0.76    0.71       0.73
RA   without SDGD    1959  1781  1178  79562  0.62    0.52       0.57
     with SDGD       2303  1022   834  80321  0.73    0.69       0.71
RGA  without SDGD    2441   912   696  80431  0.78    0.73       0.75
     with SDGD       2654   499   483  80844  0.85    0.84       0.84

Table 2: Performance for frame 29 in the indoor sample with and without the SDGD filter.

Method               TP    FP    FN    TN     Recall  Precision  F-score
FD   without SDGD    2336  1372   809  79963  0.74    0.63       0.68
     with SDGD       2788   920  1165  79607  0.71    0.75       0.73
AM   without SDGD    2771  1437   808  79964  0.74    0.61       0.67
     with SDGD       2726   982  1077  79695  0.72    0.74       0.73
RA   without SDGD    2289  1419   858  79914  0.73    0.62       0.67
     with SDGD       2723   985  1106  79666  0.71    0.73       0.72
RGA  without SDGD    2564  1092  1175  79649  0.69    0.70       0.69
     with SDGD       2979   677  1535  79289  0.67    0.81       0.73


Figure 8: Graphs of F-score versus frame number for the indoor sample: (a) FD, (b) AM, (c) RA, (d) RGA.

addition of the SDGD filter in the BGS techniques. The values of recall, precision, and F-score were then calculated based on mathematical equations (10)–(12).

Based on the findings shown in Tables 1 and 2, we can see that the number of TPs increased significantly, which proves that our technique is able to detect more of the complete blob than the original methods. This finding is also in line with the increase in precision values. Tables 1 and 2 also show that the F-score increased for both samples when the SDGD filter was added.
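These metrics follow the standard definitions (equations (10)–(12) in the earlier part of the paper): recall = TP/(TP + FN), precision = TP/(TP + FP), and F-score = 2·precision·recall/(precision + recall). A quick sketch reproduces, for example, the FD-without-SDGD row of Table 1:

```python
def recall(tp, fn):
    """Fraction of ground-truth foreground pixels that were detected."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Fraction of detected foreground pixels that are correct."""
    return tp / (tp + fp)

def f_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# FD without SDGD, frame 10 of the outdoor sample (Table 1): TP=2046, FP=1279, FN=1091.
r = recall(2046, 1091)          # ≈ 0.65
p = precision(2046, 1279)       # ≈ 0.62
f = f_score(2046, 1279, 1091)   # ≈ 0.63
```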

The graphs in Figures 7 and 8 show the F-score trends in both A1 and B1 for the four BGS techniques, namely FD, AM, RA, and RGA. The solid lines represent the F-score results using our proposed technique, that is, with the SDGD filter, whereas the dashed lines represent the F-score results without the SDGD filter. Based on Figures 7 and 8, higher F-score values were noted for the A1 and B1 videos when using the proposed hybrid technique compared with those obtained using the basic BGS techniques. Thus, our proposed technique improves upon traditional BGS techniques.

To confirm the effectiveness of the proposed technique, we tested the algorithm on six different videos with six different backgrounds obtained from the five databases. Table 3 shows the performance of FD, AM, RA, and RGA with and without the SDGD filter in terms of the F-score and average accuracy percentage for all six video samples. Columns 5 and 8 show the percentage of improvement for both the F-score and the average accuracy percentage. Based on Table 3, the use of the SDGD filter improved the average F-score values for all data compared with the values produced by the methods without SDGD. Column 5 of Table 3 shows that the F-score values improved by 1% to 9%. Therefore, the proposed technique, compared with existing techniques, enhances object extraction.

Additionally, Table 3 depicts an increase in the average accuracy percentage for each tested technique, except for videos A2 and E1. A2 had poor video quality because the footage was taken in a corridor without proper lighting. Because of the poor video quality and bad lighting conditions, the SDGD filter was unable to segment the foreground subjects properly and produced an unwanted shadow in the foreground. Nevertheless, our proposed technique is capable of detecting foreground pixels with over 90% accuracy on all videos tested.

Meanwhile, Table 4 exhibits the results of classifying human and non-human images based on the segmented frames generated by the proposed technique. In Table 4, the ANN successfully recognized human and non-human images from the frames generated using the improved FD technique. Incorrect classifications were minimal in all ten experiments.

Figure 9 presents a matrix describing the overall results of the classification testing. The average recognition rates for the human and non-human categories were 98.78% and 98.72%, respectively. The rate is high for both categories because our algorithm provided good human and non-human blob images, which


Table 3: Performance of each technique (evaluation based on average values).

                    F-score                            Accuracy (%)
Name  Method  Without SDGD  With SDGD  im*   Without SDGD  With SDGD  im*
A1    FD      0.75          0.78       3     98.45         98.47      0.02
      AM      0.69          0.75       6     98.16         98.29      0.13
      RA      0.74          0.77       3     98.42         98.44      0.02
      RGA     0.79          0.81       2     98.42         98.47      0.03
A2    FD      0.78          0.79       1     96.74         96.65      -0.09
      AM      0.80          0.81       1     97.31         97.14      -0.17
      RA      0.76          0.78       2     96.91         96.83      -0.08
      RGA     0.76          0.77       1     95.75         95.65      -0.10
B1    FD      0.68          0.74       6     97.99         98.12      0.13
      AM      0.71          0.76       5     98.22         98.37      0.15
      RA      0.71          0.76       5     98.25         98.38      0.13
      RGA     0.64          0.68       4     97.47         97.51      0.04
C1    FD      0.84          0.85       1     95.64         95.69      0.05
      AM      0.79          0.80       1     94.87         95.03      0.16
      RA      0.57          0.65       8     91.69         92.17      0.48
      RGA     0.80          0.81       1     94.95         95.07      0.12
D1    FD      0.68          0.71       3     97.84         98.07      0.23
      AM      0.71          0.75       4     98.13         98.37      0.22
      RA      0.71          0.74       3     98.14         98.36      0.22
      RGA     0.70          0.74       4     98.04         98.25      0.21
E1    FD      0.80          0.81       1     96.66         96.69      0.03
      AM      0.80          0.81       1     96.70         96.71      0.01
      RA      0.80          0.81       1     96.69         96.70      0.01
      RGA     0.68          0.77       9     96.45         96.50      0.05

*im: percent improvement.

Figure 9: Confusion matrix of the overall performance.

                          Target class
Output class      Human    Non-human   (%)
Human             4939     64          98.78
Non-human         61       4936        98.72
(%)               98.78    98.72       98.75

allowed the ANN to distinguish between the two categories. Further, the overall performance across both classes was 98.75%. The findings confirmed that the proposed algorithm, which combines FD with the SDGD filter, could generate better silhouette images, thereby facilitating recognition of human and non-human images in the segmented images/frames.

From the matrix, the information on TP, TN, FP, and FN can be extracted; hence, Table 5 shows the results of the statistical analysis performed on the classification findings of the ANN.

5. Conclusion

We presented a new hybrid approach that incorporates the SDGD filter into four basic BGS techniques, namely FD, AM, RA, and RGA. This hybrid technique enhanced segmentation performance, as indicated by the F-score values and average accuracy percentages. The technique was tested on six different videos from five different databases; each video was taken either indoors or outdoors and showed a different scene. An ANN classifier was used to classify human and non-human images appearing in the segmented images generated by our algorithm. As the algorithm was capable of providing good blob images, ANN recognition of human and non-human images in the silhouette images was facilitated.

Although the computational time increased, this aspect is acceptable considering the enhancement and the characteristics of second-order derivatives. Therefore, this study is


Table 4: Classification results.

Exp               Human  Non-human  (%)
1    Correct      498    492        99.0
     Missed       2      8          1.0
2    Correct      499    491        99.0
     Missed       1      9          1.0
3    Correct      498    495        99.3
     Missed       2      5          0.7
4    Correct      495    500        99.5
     Missed       5      0          0.5
5    Correct      481    492        97.3
     Missed       19     8          2.7
6    Correct      497    490        98.7
     Missed       3      10         1.3
7    Correct      484    496        98.0
     Missed       16     4          2.0
8    Correct      493    498        99.1
     Missed       7      2          0.9
9    Correct      497    496        99.3
     Missed       3      4          0.7
10   Correct      497    486        98.3
     Missed       3      14         1.7

Table 5: Performance of the ANN classification.

TP    TN    FP  FN  Recall  Precision  F-score
4939  4936  61  64  0.987   0.988      0.987

valid and suitable for implementation in non-real-time applications. The proposed hybrid technique can improve upon traditional BGS techniques, as indicated by the improved F-score, accuracy, and ANN recognition values after testing various data sources and environments. The technique can also be considered for detecting moving objects in non-real-time applications, such as investigations of human actions or traffic conditions.

Acknowledgments

This work was supported by Universiti Kebangsaan Malaysia (UKM) research Grant DPP-2013-003 and Ministry of Higher Education (MoHE) research Grant LRGSTD2011ICT0402.

References

[1] A. M. McIvor, "Background subtraction techniques," in Proceedings of the Image and Vision Computing Conference, Auckland, New Zealand, 2000.

[2] M. H. Sigari, N. Mozayani, and H. M. Pourreza, "Fuzzy running average and fuzzy background subtraction: concepts and application," International Journal of Computer Science and Network Security, vol. 8, pp. 138–143, 2008.

[3] Y. Zheng and L. Fan, "Moving object detection based on running average background and temporal difference," in Proceedings of the IEEE International Conference on Intelligent Systems and Knowledge Engineering (ISKE '10), pp. 270–272, November 2010.

[4] J. Park, A. Tabb, and A. C. Kak, "Hierarchical data structure for real-time background subtraction," in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 1849–1852, October 2006.

[5] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, "Pfinder: real-time tracking of the human body," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780–785, 1997.

[6] Z. Tang, Z. Miao, and Y. Wan, "Background subtraction using running Gaussian average and frame difference," in Proceedings of the International Conference on Entertainment Computing (ICEC '07), vol. 4740 of Lecture Notes in Computer Science, pp. 411–414, 2007.

[7] Z. He, Y. Liu, H. Yu, and X. Ye, "Optimized algorithm for traffic information collection in an embedded system," in Proceedings of the IEEE Congress on Image and Signal Processing, May 2008.

[8] A. Singh, S. Sawan, M. Hanmandlu, V. K. Madasu, and B. C. Lovell, "An abandoned object detection system based on dual background segmentation," in Proceedings of the 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '09), pp. 352–357, IEEE Press, September 2009.

[9] S. Su and Y. Chen, "Moving object segmentation using improved running Gaussian average background model," in Proceedings of the International Conference on Digital Image Computing: Techniques and Applications, 2008.

[10] H. Kim, R. Sakamoto, I. Kitahara, T. Toriyama, and K. Kogure, "Background subtraction using generalised Gaussian family model," Electronics Letters, vol. 44, no. 3, pp. 189–190, 2008.

[11] Z. H. Huang and K. W. Chau, "A new image thresholding method based on Gaussian mixture model," Applied Mathematics and Computation, vol. 205, pp. 899–907, 2008.

[12] Y. Liu, H. Yao, W. Gao, X. Chen, and D. Zhao, "Nonparametric background generation," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 916–919, August 2006.

[13] P. Vu, V. Phong, T. H. Vu, and H. B. Le, "GPU implementation of extended Gaussian mixture model for background subtraction," in Proceedings of the 8th IEEE-RIVF International Conference on Computing and Communication Technologies: Research, Innovation, and Vision for the Future (RIVF '10), November 2010.

[14] A. Mittal and N. Paragios, "Motion-based background subtraction using adaptive kernel density estimation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-302–II-309, July 2004.

[15] F. Porikli and O. Tuzel, "Human body tracking by adaptive background models and mean shift analysis," in Proceedings of the IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS '03), July 2003.

[16] I. T. Young, J. J. Gerbrands, and L. J. van Vliet, "Fundamentals of Image Processing," Version 2.3, 1995–2007.

[17] V. G. Narendra and K. S. Hareesh, "Study and comparison of various image edge detection techniques," International Journal of Image Processing, vol. 4, no. 2, article 83, 2009.

[18] J. F. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–697, 1986.

[19] M. P. Persoon, I. W. O. Serlie, F. H. Post, R. Truyen, and F. M. Vos, "Visualization of noisy and biased volume data using first and second order derivative techniques," in Proceedings of the 14th IEEE Visualization Conference, pp. 379–385, October 2003.

[20] I. T. Young, "Generalized convolutional filtering," in Proceedings of the 19th CERN School of Computing, pp. 51–65, 1996.

[21] "Image processing fundamentals: derivative-based operations," 2011, http://www.mif.vu.lt/atpazinimas/dip/FIP/fip-Derivati.html.

[22] P. W. Verbeek and L. J. van Vliet, "Location error of curved edges in low-pass filtered 2-D and 3-D images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 7, pp. 726–733, 1994.

[23] M. Hagara and J. Moravcik, "PLUS operator for edge detection in digital images," in Proceedings of the International Conference Radioelektronika, pp. 467–470, 2002.

[24] Y. Nader El-Glaly, Development of PDE-Based Digital Inpainting Algorithm Applied to Missing Data in Digital Images [M.S. thesis], Ain Shams University, 2007.

[25] R. G. Aarnick, J. de la Rosette, W. Feitz, F. Debruyne, and H. Wijkstra, "A preprocessing algorithm for edge detection with multiple scales of resolution," European Journal of Ultrasound, vol. 5, pp. 113–126, 1997.

[26] N. Lu and J. Wang, "Motion detection based on accumulative optical flow and double background filtering," in Proceedings of the World Congress on Engineering, vol. 1, 2007.

[27] W. M. D. W. Zaki, A. Hussain, and M. Hedayati, "Moving object detection using keypoints reference model," EURASIP Journal on Image and Video Processing, vol. 2011, 13 pages, 2011.

[28] F. Y. A. Rahman, A. Hussain, N. M. Tahir, S. A. Samad, and M. H. M. Saad, "Hybrid background subtraction techniques with second derivative on gradient direction filter," in Proceedings of the International Workshop on Advanced Image Technology (IWAIT '12), 2012.

[29] S. M. Al-Garni and A. A. Abdennour, "Moving vehicle detection using automatic background extraction," in Proceedings of the World Academy of Science, Engineering and Technology, vol. 24, pp. 82–86, Sydney, Australia, December 2006.

[30] "CMU graphics lab motion capture database," 2010, http://mocap.cs.cmu.edu.

[31] R. Gross and J. Shi, "The CMU motion of body (MoBo) database," Technical Report CMU-RI-TR-01-18, Robotics Institute, Carnegie Mellon University, 2001.

[32] S. Singh, S. A. Velastin, and H. Ragheb, "MuHAVi: a multicamera human action video dataset for the evaluation of action recognition methods," in Proceedings of the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '10), pp. 48–55, Boston, Mass, USA, September 2010.

[33] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, "HMDB: a large video database for human motion recognition," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2556–2563, November 2011.

[34] F. Y. A. Rahman, A. Hussain, N. M. Tahir, and W. M. D. Zaki, "Modeling of initial reference frame for background subtraction," in Proceedings of the 6th International Colloquium on Signal Processing and Its Applications (CSPA '10), pp. 125–128, 2010.

[35] E. Fauske, L. M. Eliassen, and R. H. Bakken, "A comparison of learning based background subtraction," in Proceedings of the Norwegian Artificial Intelligence Symposium (NAIS), pp. 181–192, 2009.

[36] H. Kim, B. Ku, D. K. Han, S. Kang, and H. Ko, "Adaptive selection model in block-based background subtraction," Electronics Letters, vol. 48, no. 8, 2012.


6 Journal of Electrical and Computer Engineering

Figure 4: Original image (a) and its ground truth image (b) for a frame in B1, an indoor environment.

Figure 5: Segmentation performance of each background subtraction technique (FD, AM, RA, RGA), without and with the SDGD filter, for an outdoor environment.


Figure 6: Segmentation performance of each background subtraction technique (FD, AM, RA, RGA), without and with the SDGD filter, for an indoor environment.

Figures 2(a)–2(f) show some of the background frames used in this research. These background frames were generated by using the per-pixel median value of selected frames in the video sequences, except for the videos obtained from the MoBo database. Rather than stating the filename, we assigned letters from A to E to identify the videos representing the data: A refers to SESRG, B to MoCap, C to MoBo, D to MuHAVi, and E to HMDB51. The words in brackets indicate whether the video environment was indoor or outdoor.
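The median-based background generation described above can be sketched as follows. This is not the authors' exact code; the function name and the synthetic example are illustrative, but the core idea — taking the per-pixel median over a stack of frames so that transient foreground objects are rejected — matches the description.

```python
import numpy as np

def median_background(frames):
    """Estimate a static background as the per-pixel median of selected frames."""
    stack = np.stack(frames, axis=0)       # shape (N, H, W) for grayscale frames
    return np.median(stack, axis=0).astype(np.uint8)

# Synthetic example: a mostly static scene with a transient bright "object".
frames = [np.full((4, 4), 100, dtype=np.uint8) for _ in range(5)]
frames[2][1:3, 1:3] = 255                  # object appears in only one frame
bg = median_background(frames)
assert np.all(bg == 100)                   # the median rejects the transient object
```

Because the object occupies any given pixel in only a minority of the selected frames, the median recovers the underlying background value at every pixel.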

Next, we present the subjective results of our object extraction. Since this study involves video data with many frames, we depict only the results obtained for frame number 10 in data A1 and frame number 29 in data B1 to represent the outdoor and indoor scene samples. Figures 3(a) and 4(a) illustrate the original images of a frame in videos A1 and B1, respectively; Figures 3(b) and 4(b) depict the corresponding ground truth images.

Figures 5 and 6 present the subjects extracted in the outdoor and indoor environments, respectively, using FD, AM, RA, and RGA with and without the SDGD filter.

The first column of Figures 5 and 6 shows that all the basic BGS techniques were capable of detecting the object in the scene of interest in the tested videos. However, many pixels were missing, resulting in a smaller blob compared with the ground truth image. Our proposed technique solves the problems of missing pixels and reduced blob size by combining the SDGD filter with FD, AM, RA, and RGA, as shown in the second column of Figures 5 and 6. The extracted object is slightly enlarged and more complete, and foreground detection showed significant improvement after the proposed technique was applied to all datasets.
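A minimal sketch of the hybrid idea follows. The SDGD response itself uses the standard second-derivative-in-gradient-direction formula, (fx²fxx + 2·fx·fy·fxy + fy²fyy)/(fx² + fy²); everything else — the frame-difference stage, the threshold values, and combining the two masks with a logical OR — is an assumption about how the fusion could be realized, not the paper's exact implementation.

```python
import numpy as np

def sdgd(image):
    """Second derivative in the gradient direction (SDGD):
    (fx^2*fxx + 2*fx*fy*fxy + fy^2*fyy) / (fx^2 + fy^2)."""
    f = image.astype(float)
    fy, fx = np.gradient(f)          # first derivatives along rows, columns
    fyy, _ = np.gradient(fy)
    fxy, fxx = np.gradient(fx)       # mixed and second derivatives
    num = fx**2 * fxx + 2 * fx * fy * fxy + fy**2 * fyy
    den = fx**2 + fy**2
    return np.where(den > 1e-9, num / np.maximum(den, 1e-9), 0.0)

def hybrid_foreground(frame, background, bgs_thresh=30, edge_thresh=5):
    """Frame-difference foreground mask OR'ed with SDGD edges of the difference.

    Thresholds are illustrative, not values from the paper."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    bgs_mask = diff > bgs_thresh                    # basic FD foreground
    edge_mask = np.abs(sdgd(diff)) > edge_thresh    # boundary pixels FD may miss
    return bgs_mask | edge_mask

# Tiny example: a bright square appearing against a flat background.
bg = np.zeros((8, 8), dtype=np.uint8)
frame = bg.copy()
frame[2:6, 2:6] = 100
mask = hybrid_foreground(frame, bg)
```

The OR-combination only ever adds pixels to the basic BGS output, which is consistent with the paper's observation that the fused blobs are slightly enlarged relative to the plain FD/AM/RA/RGA results.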

Tables 1 and 2 show the numbers of TP, TN, FP, and FN for the selected indoor and outdoor samples with and without the


Figure 7: F-score versus frame number for the outdoor sample: (a) FD, (b) AM, (c) RA, (d) RGA.

Table 1: Performance for frame 10 in the outdoor sample, with and without the SDGD filter.

Method               TP     FP     FN     TN      Recall   Precision   F-score
FD   without SDGD    2046   1279   1091   80064   0.65     0.62        0.63
     with SDGD       2521    804    616   80539   0.80     0.76        0.78
AM   without SDGD    1986   1785   1151   79558   0.63     0.53        0.57
     with SDGD       2374    951    763   80392   0.76     0.71        0.73
RA   without SDGD    1959   1781   1178   79562   0.62     0.52        0.57
     with SDGD       2303   1022    834   80321   0.73     0.69        0.71
RGA  without SDGD    2441    912    696   80431   0.78     0.73        0.75
     with SDGD       2654    499    483   80844   0.85     0.84        0.84

Table 2: Performance for frame 29 in the indoor sample, with and without the SDGD filter.

Method               TP     FP     FN     TN      Recall   Precision   F-score
FD   without SDGD    2336   1372    809   79963   0.74     0.63        0.68
     with SDGD       2788    920   1165   79607   0.71     0.75        0.73
AM   without SDGD    2771   1437    808   79964   0.74     0.61        0.67
     with SDGD       2726    982   1077   79695   0.72     0.74        0.73
RA   without SDGD    2289   1419    858   79914   0.73     0.62        0.67
     with SDGD       2723    985   1106   79666   0.71     0.73        0.72
RGA  without SDGD    2564   1092   1175   79649   0.69     0.70        0.69
     with SDGD       2979    677   1535   79289   0.67     0.81        0.73


Figure 8: F-score versus frame number for the indoor sample: (a) FD, (b) AM, (c) RA, (d) RGA.

addition of the SDGD filter to the BGS techniques. The values of recall, precision, and F-score were then calculated using (10)–(12).
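The metric computations behind Tables 1 and 2 follow the standard definitions: recall = TP/(TP + FN), precision = TP/(TP + FP), and the F-score as their harmonic mean. A small sketch (function name is mine) reproduces a row of Table 1:

```python
def recall_precision_fscore(tp, fp, fn):
    """Standard definitions used for Tables 1 and 2."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return recall, precision, f_score

# "FD with SDGD" row of Table 1: TP=2521, FP=804, FN=616
r, p, f = recall_precision_fscore(2521, 804, 616)
print(round(r, 2), round(p, 2), round(f, 2))  # 0.8 0.76 0.78
```

Note that TN does not enter any of the three metrics, which is why the F-score is a more discriminative summary than raw accuracy when the background (TN) dominates the frame.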

Based on the findings shown in Tables 1 and 2, the number of TPs increased significantly, which shows that our technique detects more of the complete blob than the original methods. This finding is in line with the increase in precision values. Tables 1 and 2 also show that the F-score increased for both samples when the SDGD filter was added.

The graphs in Figures 7 and 8 show the F-score trends in both A1 and B1 for the four BGS techniques, namely, FD, AM, RA, and RGA. The solid lines represent the F-score results using our proposed technique, that is, with the SDGD filter, whereas the dashed lines represent the F-score results without the SDGD filter. Based on Figures 7 and 8, higher F-score values were noted for the A1 and B1 videos when using the proposed hybrid technique than when using the basic BGS techniques. Thus, our proposed technique improves upon traditional BGS techniques.

To confirm the effectiveness of the proposed technique, we tested the algorithm on six different videos with six different backgrounds obtained from the five databases. Table 3 shows the performance of FD, AM, RA, and RGA with and without the SDGD filter in terms of the F-score and average accuracy percentage for all six video samples; columns 5 and 8 show the percentage of improvement for each metric. Based on Table 3, the use of the SDGD filter improved the average F-score values for all data compared with the values produced by the methods without SDGD. Column 5 of Table 3 shows that the F-score values improved by 1% to 9%. Therefore, the proposed technique enhances object extraction compared with existing techniques.
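Reading Table 3, the improvement columns appear to report the absolute gain in percentage points: for F-scores (on a 0–1 scale) the difference is scaled by 100, while accuracies are already percentages. This interpretation, and the helper names below, are my assumptions:

```python
def f_improvement(f_without, f_with):
    # F-scores are on a 0-1 scale, so the gain is scaled to percentage points
    return round((f_with - f_without) * 100)

def acc_improvement(acc_without, acc_with):
    # Accuracies are already percentages; the gain is a direct difference
    return round(acc_with - acc_without, 2)

# A1, FD row of Table 3
print(f_improvement(0.75, 0.78))      # 3
print(acc_improvement(98.45, 98.47))  # 0.02
```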

Additionally, Table 3 shows an increase in average accuracy percentage for each tested technique, except for videos A2 and E1. A2 had poor video quality because the footage was taken in a corridor without proper lighting. Because of the poor video quality and bad lighting conditions, the SDGD filter was unable to segment the foreground subjects properly and produced an unwanted shadow in the foreground. Nevertheless, our proposed technique is capable of detecting foreground pixels with over 90% accuracy in all videos tested.

Meanwhile, Table 4 shows the results of classifying human and non-human images based on the segmented frames generated by the proposed technique. In Table 4, the ANN successfully recognized human and non-human images from the frames generated using the improved FD technique; incorrect classifications were minimal in all ten experiments.

Figure 9 presents a confusion matrix describing the overall results of the classification testing. The average recognition rates for the human and non-human categories were 98.78% and 98.72%, respectively. The rates are high for both categories because our algorithm provided good human and non-human blob images, which


Table 3: Performance of each technique (evaluation based on average values).

                      F-score                       Accuracy (%)
Name   Method   Without   With   im*         Without   With    im*
A1     FD       0.75      0.78   3           98.45     98.47    0.02
       AM       0.69      0.75   6           98.16     98.29    0.13
       RA       0.74      0.77   3           98.42     98.44    0.02
       RGA      0.79      0.81   2           98.42     98.47    0.03
A2     FD       0.78      0.79   1           96.74     96.65   -0.09
       AM       0.80      0.81   1           97.31     97.14   -0.17
       RA       0.76      0.78   2           96.91     96.83   -0.08
       RGA      0.76      0.77   1           95.75     95.65   -0.10
B1     FD       0.68      0.74   6           97.99     98.12    0.13
       AM       0.71      0.76   5           98.22     98.37    0.15
       RA       0.71      0.76   5           98.25     98.38    0.13
       RGA      0.64      0.68   4           97.47     97.51    0.04
C1     FD       0.84      0.85   1           95.64     95.69    0.05
       AM       0.79      0.80   1           94.87     95.03    0.16
       RA       0.57      0.65   8           91.69     92.17    0.48
       RGA      0.80      0.81   1           94.95     95.07    0.12
D1     FD       0.68      0.71   3           97.84     98.07    0.23
       AM       0.71      0.75   4           98.13     98.37    0.22
       RA       0.71      0.74   3           98.14     98.36    0.22
       RGA      0.70      0.74   4           98.04     98.25    0.21
E1     FD       0.80      0.81   1           96.66     96.69    0.03
       AM       0.80      0.81   1           96.70     96.71    0.01
       RA       0.80      0.81   1           96.69     96.70    0.01
       RGA      0.68      0.77   9           96.45     96.50    0.05

*im: percent improvement.

Figure 9: Confusion matrix of the overall performance.

                        Target class
Output class      Human    Non-human    (%)
Human              4939        64       98.78
Non-human            61      4936       98.72
(%)               98.78     98.72       98.75

allowed the ANN to distinguish between the two categories. Further, the overall performance across both classes was 98.75%. The findings confirm that the proposed algorithm, which combines FD with the SDGD filter, generates better silhouette images, thereby facilitating the recognition of human and non-human images in the segmented frames.

From the matrix, the TP, TN, FP, and FN values can be extracted; Table 5 shows the results of the statistical analysis performed on the ANN classification findings.
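The derivation of Table 5 from Figure 9 is a direct application of the earlier definitions, with "human" treated as the positive class. The snippet below reproduces the reported values (the paper truncates the F-score to three digits):

```python
# Figure 9's confusion matrix, with "human" as the positive class
tp, tn, fp, fn = 4939, 4936, 61, 64

recall = tp / (tp + fn)
precision = tp / (tp + fp)
f_score = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f"{recall:.3f} {precision:.3f} {f_score:.4f}")  # 0.987 0.988 0.9875
print(round(100 * accuracy, 2))                        # 98.75
```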

5. Conclusion

We presented a new hybrid approach that incorporates the SDGD filter with four basic BGS techniques, namely, FD, AM, RA, and RGA. This hybrid technique enhanced segmentation performance, as indicated by the F-score values and average accuracy percentages. The technique was tested on six different videos from five different databases; each video was taken either indoors or outdoors and showed a different scene. An ANN classifier was used to classify human and non-human images appearing in the segmented images generated by our algorithm. As the algorithm was capable of providing good blob images, ANN recognition of human and non-human images in the silhouette images was facilitated.

Although the computational time increased, this is acceptable considering the enhancement and the characteristics of second-order derivatives. Therefore, this study is


Table 4: Classification results.

Exp               Human   Non-human    (%)
1    Correct       498       492      99.0
     Missed          2         8       1.0
2    Correct       499       491      99.0
     Missed          1         9       1.0
3    Correct       498       495      99.3
     Missed          2         5       0.7
4    Correct       495       500      99.5
     Missed          5         0       0.5
5    Correct       481       492      97.3
     Missed         19         8       2.7
6    Correct       497       490      98.7
     Missed          3        10       1.3
7    Correct       484       496      98.0
     Missed         16         4       2.0
8    Correct       493       498      99.1
     Missed          7         2       0.9
9    Correct       497       496      99.3
     Missed          3         4       0.7
10   Correct       497       486      98.3
     Missed          3        14       1.7

Table 5: Performance of the ANN classification.

TP     TN     FP   FN   Recall   Precision   F-score
4939   4936   61   64   0.987    0.988       0.987

valid and suitable for implementation in non-real-time applications. The proposed hybrid technique improves upon traditional BGS techniques, as indicated by the improved F-score, accuracy, and ANN recognition values across various data sources and environments. The technique can also be considered for detecting moving objects in non-real-time applications, such as investigations of human actions or traffic conditions.

Acknowledgments

This work was supported by Universiti Kebangsaan Malaysia (UKM) research grant DPP-2013-003 and Ministry of Higher Education (MoHE) research grant LRGSTD2011ICT0402.

References

[1] A. M. McIvor, "Background subtraction techniques," in Proceedings of the Image and Vision Computing Conference, Auckland, New Zealand, 2000.

[2] M. H. Sigari, N. Mozayani, and H. M. Pourreza, "Fuzzy running average and fuzzy background subtraction: concepts and application," International Journal of Computer Science and Network Security, vol. 8, pp. 138–143, 2008.

[3] Y. Zheng and L. Fan, "Moving object detection based on running average background and temporal difference," in Proceedings of the IEEE International Conference on Intelligent Systems and Knowledge Engineering (ISKE '10), pp. 270–272, November 2010.

[4] J. Park, A. Tabb, and A. C. Kak, "Hierarchical data structure for real-time background subtraction," in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 1849–1852, October 2006.

[5] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, "Pfinder: real-time tracking of the human body," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780–785, 1997.

[6] Z. Tang, Z. Miao, and Y. Wan, "Background subtraction using running Gaussian average and frame difference," in Proceedings of the International Conference on Entertainment Computing (ICEC '07), vol. 4740 of Lecture Notes in Computer Science, pp. 411–414, 2007.

[7] Z. He, Y. Liu, H. Yu, and X. Ye, "Optimized algorithm for traffic information collection in an embedded system," in Proceedings of the IEEE Congress on Image and Signal Processing, May 2008.

[8] A. Singh, S. Sawan, M. Hanmandlu, V. K. Madasu, and B. C. Lovell, "An abandoned object detection system based on dual background segmentation," in Proceedings of the 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '09), pp. 352–357, IEEE Press, September 2009.

[9] S. Su and Y. Chen, "Moving object segmentation using improved running Gaussian average background model," in Proceedings of the International Conference on Digital Image Computing: Techniques and Applications, 2008.

[10] H. Kim, R. Sakamoto, I. Kitahara, T. Toriyama, and K. Kogure, "Background subtraction using generalised Gaussian family model," Electronics Letters, vol. 44, no. 3, pp. 189–190, 2008.

[11] Z. H. Huang and K. W. Chau, "A new image thresholding method based on Gaussian mixture model," Applied Mathematics and Computation, vol. 205, pp. 899–907, 2008.

[12] Y. Liu, H. Yao, W. Gao, X. Chen, and D. Zhao, "Nonparametric background generation," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 916–919, August 2006.

[13] P. Vu, V. Phong, T. H. Vu, and H. B. Le, "GPU implementation of extended Gaussian mixture model for background subtraction," in Proceedings of the 8th IEEE-RIVF International Conference on Computing and Communication Technologies: Research, Innovation and Vision for the Future (RIVF '10), November 2010.

[14] A. Mittal and N. Paragios, "Motion-based background subtraction using adaptive kernel density estimation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-302–II-309, July 2004.

[15] F. Porikli and O. Tuzel, "Human body tracking by adaptive background models and mean shift analysis," in Proceedings of the IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS '03), July 2003.

[16] I. T. Young, J. J. Gerbrands, and L. J. van Vliet, Fundamentals of Image Processing, version 2.3, 1995–2007.

[17] V. G. Narendra and K. S. Hareesh, "Study and comparison of various image edge detection techniques," International Journal of Image Processing, vol. 4, no. 2, article 83, 2009.

[18] J. F. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–697, 1986.

[19] M. P. Persoon, I. W. O. Serlie, F. H. Post, R. Truyen, and F. M. Vos, "Visualization of noisy and biased volume data using first and second order derivative techniques," in Proceedings of the 14th IEEE Visualization Conference, pp. 379–385, October 2003.

[20] I. T. Young, "Generalized convolutional filtering," in Proceedings of the 19th CERN School of Computing, pp. 51–65, 1996.

[21] "Image processing fundamentals: derivative-based operations," 2011, http://www.mif.vu.lt/atpazinimas/dip/FIP/fip-Derivati.html.

[22] P. W. Verbeek and L. J. van Vliet, "Location error of curved edges in low-pass filtered 2-D and 3-D images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 7, pp. 726–733, 1994.

[23] M. Hagara and J. Moravcik, "PLUS operator for edge detection in digital images," in Proceedings of the International Conference Radioelektronika, pp. 467–470, 2002.

[24] Y. Nader El-Glaly, Development of PDE-Based Digital Inpainting Algorithm Applied to Missing Data in Digital Images [M.S. thesis], Ain Shams University, 2007.

[25] R. G. Aarnink, J. de la Rosette, W. Feitz, F. Debruyne, and H. Wijkstra, "A preprocessing algorithm for edge detection with multiple scales of resolution," European Journal of Ultrasound, vol. 5, pp. 113–126, 1997.

[26] N. Lu and J. Wang, "Motion detection based on accumulative optical flow and double background filtering," in Proceedings of the World Congress on Engineering, vol. 1, 2007.

[27] W. M. D. W. Zaki, A. Hussain, and M. Hedayati, "Moving object detection using keypoints reference model," EURASIP Journal on Image and Video Processing, vol. 2011, 13 pages, 2011.

[28] F. Y. A. Rahman, A. Hussain, N. M. Tahir, S. A. Samad, and M. H. M. Saad, "Hybrid background subtraction techniques with second derivative on gradient direction filter," in Proceedings of the International Workshop on Advanced Image Technology (IWAIT '12), 2012.

[29] S. M. Al-Garni and A. A. Abdennour, "Moving vehicle detection using automatic background extraction," in Proceedings of the World Academy of Science, Engineering and Technology, vol. 24, pp. 82–86, Sydney, Australia, December 2006.

[30] "CMU graphics lab motion capture database," 2010, http://mocap.cs.cmu.edu.

[31] R. Gross and J. Shi, "The CMU motion of body (MoBo) database," Technical Report CMU-RI-TR-01-18, Robotics Institute, Carnegie Mellon University, 2001.

[32] S. Singh, S. A. Velastin, and H. Ragheb, "MuHAVi: a multicamera human action video dataset for the evaluation of action recognition methods," in Proceedings of the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '10), pp. 48–55, Boston, Mass, USA, September 2010.

[33] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, "HMDB: a large video database for human motion recognition," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2556–2563, November 2011.

[34] F. Y. A. Rahman, A. Hussain, N. M. Tahir, and W. M. D. Zaki, "Modeling of initial reference frame for background subtraction," in Proceedings of the 6th International Colloquium on Signal Processing and Its Applications (CSPA '10), pp. 125–128, 2010.

[35] E. Fauske, L. M. Eliassen, and R. H. Bakken, "A comparison of learning based background subtraction," in Proceedings of the Norwegian Artificial Intelligence Symposium (NAIS), pp. 181–192, 2009.

[36] H. Kim, B. Ku, D. K. Han, S. Kang, and H. Ko, "Adaptive selection model in block-based background subtraction," Electronics Letters, vol. 48, no. 8, 2012.


[10] H Kim R Sakamoto I Kitahara T Toriyama and K KogureldquoBackground subtraction using generalised Gaussian familymodelrdquo Electronics Letters vol 44 no 3 pp 189ndash190 2008

[11] Z H Huang and K W Chau ldquoA new image thresholdingmethod based on Gaussian mixture modelrdquo Journal AppliedMathematics and Computation vol 205 pp 899ndash907 2008

[12] Y Liu H Yao W Gao X Chen and D Zhao ldquoNonparametricbackground generationrdquo in Proceedings of the 18th InternationalConference on Pattern Recognition (ICPR rsquo06) pp 916ndash919August 2006

[13] P Vu V Phong T H Vu and H B Le ldquoGPU implemen-tation of Extended Gaussian mixture model for backgroundsubtractionrdquo in Proceedings of the 8th IEEE-RIVF InternationalConference on Computing and Communication TechnologiesResearch Innovation and Vision for the Future (RIVF rsquo10)November 2010

[14] A Mittal and N Paragios ldquoMotion-based background subtrac-tion using adaptive kernel density estimationrdquo in Proceedings ofthe IEEE Computer Society Conference on Computer Vision andPattern Recognition (CVPR rsquo04) pp II302ndashII309 July 2004

[15] F Porikli and O Tuzel ldquoHuman body tracking by adaptivebackground models and mean shift analysisrdquo in Proceedings ofthe IEEE International Workshop on Performance Evaluation ofTracking and Surveillance (PETS rsquo03) July2003

[16] I J Young J J Gerbrands and L J van Vliet ldquoFundamentals ofImage Processingrdquo Version 2 3 1995ndash2007

[17] V G Narendra and K S Hareesh ldquoStudy and comparison ofvarious image edge detection techniquesrdquo International Journalof Image Processing vol 4 no 2 article 83 2009

[18] J F Canny ldquoA computational approach to edge detectionrdquo IEEETransaction Pattern Analysis and Machine Intelligent vol 8 no6 pp 679ndash697 1986

12 Journal of Electrical and Computer Engineering

[19] M P Persoon I W O Serlie F H Post R Truyen and F MVos ldquoVisualization of noisy and biased volume data using firstand second order derivative techniquesrdquo in Proceedings of the14th IEEE Visualization Conference pp 379ndash385 October 2003

[20] I T Young ldquoGeneralized convolutional filteringrdquo inProceedingsof the 19th CERN School of Computing pp 51ndash65 1996

[21] ldquoImage processing fundamentals derivative-based operationrdquo2011 httpwwwmifvultatpazinimasdipFIPfip-Derivatihtml

[22] PWVerbeek and L J vanVliet ldquoLocation error of curved edgesin low-pass filtered 2-D and 3-D imagesrdquo IEEE Transactions onPattern Analysis andMachine Intelligence vol 16 no 7 pp 726ndash733 1994

[23] M Hagara and J Moravcik ldquoPLUS operator for edge detectionin digital imagesrdquo in Proceedings of the International Conferenceof Radioelektonika pp 467ndash470 2002

[24] Y Nader El-GlalyDevelopment of PDE-Based Digital InpaintingAlgorithm Applied to Missing Data in Digital Images [MSthesis] Ain Shams University 2007

[25] R G Aarnick J de la Rosette W Feitz F Debruyne and HWijkstra ldquoA preprocessing algorithm for edge detection withmultiple scales of resolutionrdquo European Journal of Ultrasoundvol 5 pp 113ndash126 1997

[26] N Lu and J Wang ldquoMotion detection based on accumulativeoptical flow and double background filteringrdquo in Proceedings ofthe World Congress on Engineering vol 1 2007

[27] ZakiWM DW A Hussain andM Hedayati ldquoMoving objectdetection using Keypoints reference modelrdquo EURASIP Journalon Image and Video Processing vol 2011 13 pages 2011

[28] F Y A Rahman A Hussain N M Tahir S A Samad and MH M Saad ldquoHybrid background subtraction techniques withsecond derivative on gradient direction filterrdquo in Proceedingsof the International Workshop on Advanced Image Technology(IWAIT rsquo12) 2012

[29] SMAl-Garni andAAAbdennour ldquoMoving vehicle detectionusing automatic background extractionrdquo in proceedings of theWorld Academy of Science Engineering and Technology vol 24pp 82ndash86 Sydney Australia December 2006

[30] ldquoCMU graphic lab motion capture databaserdquo 2010 httpmocapcscmuedu

[31] R Gross and J Shi ldquoThe CMU motion of body (MoBo)databaserdquo Technical Report CMU-RI-TR-01-18 Robotics Insti-tute Carnegie Mellon University 2001

[32] S Singh S A Velastin and H Ragheb ldquoMuHAVi a mul-ticamera human action video dataset for the evaluation ofaction recognition methodsrdquo in Proceedings of the 7th IEEEInternational Conference on Advanced Video and Signal Based(AVSS rsquo10) pp 48ndash55 Boston Mass USA September 2010

[33] H Kuehne H Jhuang E Garrote T Poggio and T SerreldquoHMDB a large video database for humanmotion recognitionrdquoinProceedings of the IEEE International Conference onComputerVision (ICCV rsquo11) pp 2556ndash2563 November 2011

[34] F Y A Rahman A Hussain N M Tahir and W M DZaki ldquoModeling of initial reference frame for backgroundsubtractionrdquo in Proceedings of the 6th International Colloquiumon Signal Processing and Its Applications (CSPA rsquo10) pp 125ndash1282010

[35] E Fauske L M Eliassen and R H Bakken ldquoA comparison oflearning based background subtractionrdquo in Proceedings of theNorwegian Artificial Intelligens Symposium (NAIS) pp 181ndash1922009

[36] H Kim B Ku D K Han S Kang and H Ko ldquoAdaptive selec-tion model in block-based background subtractionrdquo ElectronicsLetter vol 48 no 8 2012


8 Journal of Electrical and Computer Engineering

Figure 7: Graph of F-score versus frame number (1–29) for the outdoor sample: (a) FD, (b) AM, (c) RA, (d) RGA.

Table 1: Performance for frame 10 in the outdoor sample with and without the SDGD filter.

Method               TP     FP     FN     TN      Recall  Precision  F-score
FD   without SDGD    2046   1279   1091   80064   0.65    0.62       0.63
     with SDGD       2521    804    616   80539   0.80    0.76       0.78
AM   without SDGD    1986   1785   1151   79558   0.63    0.53       0.57
     with SDGD       2374    951    763   80392   0.76    0.71       0.73
RA   without SDGD    1959   1781   1178   79562   0.62    0.52       0.57
     with SDGD       2303   1022    834   80321   0.73    0.69       0.71
RGA  without SDGD    2441    912    696   80431   0.78    0.73       0.75
     with SDGD       2654    499    483   80844   0.85    0.84       0.84

Table 2: Performance for frame 29 in the indoor sample with and without the SDGD filter.

Method               TP     FP     FN     TN      Recall  Precision  F-score
FD   without SDGD    2336   1372    809   79963   0.74    0.63       0.68
     with SDGD       2788    920   1165   79607   0.71    0.75       0.73
AM   without SDGD    2771   1437    808   79964   0.74    0.61       0.67
     with SDGD       2726    982   1077   79695   0.72    0.74       0.73
RA   without SDGD    2289   1419    858   79914   0.73    0.62       0.67
     with SDGD       2723    985   1106   79666   0.71    0.73       0.72
RGA  without SDGD    2564   1092   1175   79649   0.69    0.70       0.69
     with SDGD       2979    677   1535   79289   0.67    0.81       0.73


Figure 8: Graph of F-score versus frame number (1–29) for the indoor sample: (a) FD, (b) AM, (c) RA, (d) RGA.

addition of the SDGD filter to the BGS techniques. The values of recall, precision, and F-score were then calculated using (10)–(12).

Based on the findings shown in Tables 1 and 2, the number of TPs increased significantly, which shows that our technique detects more of the compound blob than the original methods do. This finding is also in line with the increase in precision values. Tables 1 and 2 likewise show that the F-score increased for both samples when the SDGD filter was added.
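The metrics in (10)–(12) follow the standard pixel-level definitions. As a quick sanity check (a sketch, not the authors' evaluation code), the following reproduces the FD-with-SDGD row of Table 1 from its raw counts:

```python
def bgs_metrics(tp, fp, fn, tn):
    """Standard pixel-level metrics, consistent with Tables 1, 2, and 5."""
    recall = tp / (tp + fn)                       # fraction of true foreground found
    precision = tp / (tp + fp)                    # fraction of detections that are correct
    f_score = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)    # overall pixel accuracy
    return recall, precision, f_score, accuracy

# Frame 10 of the outdoor sample, FD with SDGD (Table 1)
r, p, f, a = bgs_metrics(tp=2521, fp=804, fn=616, tn=80539)
# r, p, f round to 0.80, 0.76, 0.78, matching the table row
```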

The graphs in Figures 7 and 8 show the F-score trends for both A1 and B1 for the four BGS techniques, namely, FD, AM, RA, and RGA. The solid lines represent the F-score results obtained with our proposed technique, that is, with the SDGD filter, whereas the dashed lines represent the F-score results without the SDGD filter. Based on Figures 7 and 8, higher F-score values were noted for the A1 and B1 videos when using the proposed hybrid technique than when using the basic BGS techniques. Thus, our proposed technique improves upon traditional BGS techniques.

To confirm the effectiveness of the proposed technique, we tested the algorithm on six different videos with six different backgrounds, obtained from five different databases. Table 3 shows the performance of FD, AM, RA, and RGA, with and without the SDGD filter, in terms of the F-score and average accuracy percentage for all six video samples. Columns 5 and 8 show the percentage improvement for the F-score and the average accuracy percentage, respectively. Based on Table 3, the use of the SDGD filter improved the average F-score values for all data compared with the values produced by the methods without SDGD. Column 5 of Table 3 shows that the F-score values improved by 1 to 9 percentage points. Therefore, compared with existing techniques, the proposed technique enhances object extraction.

Additionally, Table 3 shows an increase in average accuracy percentage for each tested technique except for videos A2 and E1. A2 had poor video quality because the footage was taken in a corridor without proper lighting. Because of the poor video quality and bad lighting conditions, the SDGD filter was unable to segment the foreground subjects properly and produced an unwanted shadow in the foreground. Nevertheless, our proposed technique is capable of detecting foreground pixels with over 90% accuracy on all videos tested.

Meanwhile, Table 4 presents the results of classifying human and non-human images based on the segmented frames generated by the proposed technique. As shown in Table 4, the ANN successfully recognized human and non-human images in the frames generated using the improved FD technique. Incorrect classifications were minimal in all ten experiments.

Figure 9 presents a matrix describing the overall results of the classification testing. The average recognition rates for the human and non-human categories were 98.78% and 98.72%, respectively. The rates are high for both categories because our algorithm provided good human and non-human blob images, which


Table 3: Performance of each technique (evaluation based on average values).

                    F-score                          Accuracy (%)
Name  Method   Without SDGD  With SDGD  im*     Without SDGD  With SDGD  im*
A1    FD       0.75          0.78        3      98.45         98.47       0.02
      AM       0.69          0.75        6      98.16         98.29       0.13
      RA       0.74          0.77        3      98.42         98.44       0.02
      RGA      0.79          0.81        2      98.42         98.47       0.03
A2    FD       0.78          0.79        1      96.74         96.65      −0.09
      AM       0.80          0.81        1      97.31         97.14      −0.17
      RA       0.76          0.78        2      96.91         96.83      −0.08
      RGA      0.76          0.77        1      95.75         95.65      −0.10
B1    FD       0.68          0.74        6      97.99         98.12       0.13
      AM       0.71          0.76        5      98.22         98.37       0.15
      RA       0.71          0.76        5      98.25         98.38       0.13
      RGA      0.64          0.68        4      97.47         97.51       0.04
C1    FD       0.84          0.85        1      95.64         95.69       0.05
      AM       0.79          0.80        1      94.87         95.03       0.16
      RA       0.57          0.65        8      91.69         92.17       0.48
      RGA      0.80          0.81        1      94.95         95.07       0.12
D1    FD       0.68          0.71        3      97.84         98.07       0.23
      AM       0.71          0.75        4      98.13         98.37       0.22
      RA       0.71          0.74        3      98.14         98.36       0.22
      RGA      0.70          0.74        4      98.04         98.25       0.21
E1    FD       0.80          0.81        1      96.66         96.69       0.03
      AM       0.80          0.81        1      96.70         96.71       0.01
      RA       0.80          0.81        1      96.69         96.70       0.01
      RGA      0.68          0.77        9      96.45         96.50       0.05

*im: percent improvement.
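The im* columns are consistent with a simple percentage-point difference between the with- and without-SDGD values (with F-scores first scaled to percent); a quick check under that assumption:

```python
def pp_gain(before_pct, after_pct):
    """Percentage-point difference, matching the im* columns of Table 3."""
    return round(after_pct - before_pct, 2)

# A1 / FD: F-score 0.75 -> 0.78 is reported as im* = 3
fs_gain = pp_gain(0.75 * 100, 0.78 * 100)
# A1 / FD: accuracy 98.45% -> 98.47% is reported as im* = 0.02
acc_gain = pp_gain(98.45, 98.47)
```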

                          Target class
Output class        Human     Non-human    (%)
Human               4939      64           98.78
Non-human           61        4936         98.72
(%)                 98.78     98.72        98.75

Figure 9: Confusion matrix of the overall performance.

allowed the ANN to distinguish between the two categories. Furthermore, the overall performance across both classes was 98.75%. The findings confirm that the proposed algorithm, which combines FD with the SDGD filter, generates better silhouette images, thereby facilitating recognition of human and non-human images in the segmented images/frames.

From the matrix, the values of TP, TN, FP, and FN can be extracted; accordingly, Table 5 shows the results of the statistical analysis performed on the ANN classification findings.
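As a consistency check, the Table 5 statistics follow directly from the confusion-matrix counts; the computed F-score is approximately 0.9875, in line with the reported 0.987:

```python
# Counts extracted from the overall confusion matrix (Figure 9 / Table 5)
tp, tn, fp, fn = 4939, 4936, 61, 64

recall = tp / (tp + fn)       # 4939 / 5003, about 0.987
precision = tp / (tp + fp)    # 4939 / 5000, about 0.988
f_score = 2 * precision * recall / (precision + recall)  # about 0.9875
```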

5. Conclusion

We presented a new hybrid approach that incorporates the SDGD filter with four basic BGS techniques, namely, FD, AM, RA, and RGA. This hybrid technique enhanced segmentation performance, as indicated by the F-score values and average accuracy percentages. The technique was tested on six different videos from five different databases; each video was taken either indoors or outdoors and showed a different scene. An ANN classifier was used to classify the human and non-human images appearing in the segmented images generated by our algorithm. As the algorithm was capable of providing good blob images, ANN recognition of human and non-human images in the silhouette images was facilitated.
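As a concrete illustration (a sketch, not the authors' exact implementation), the FD variant of this fusion can be expressed in a few lines of NumPy. The thresholds are illustrative, and finite differences stand in for the Gaussian-derivative filters used to compute the SDGD response (fx²fxx + 2 fx fy fxy + fy²fyy) / (fx² + fy²):

```python
import numpy as np

def sdgd_response(image):
    """Second derivative in the gradient direction (SDGD), approximated here
    with finite differences instead of Gaussian derivative filters."""
    f = image.astype(float)
    fy, fx = np.gradient(f)          # first derivatives along rows and columns
    fyy, _ = np.gradient(fy)
    fxy, fxx = np.gradient(fx)
    grad_sq = fx**2 + fy**2
    num = fx**2 * fxx + 2.0 * fx * fy * fxy + fy**2 * fyy
    return num / np.maximum(grad_sq, 1e-12)  # avoid division by zero in flat areas

def hybrid_fd(frame, background, diff_thresh=25.0, edge_thresh=5.0):
    """Frame-difference BGS fused with SDGD edge evidence (illustrative thresholds)."""
    fg = np.abs(frame.astype(float) - background.astype(float)) > diff_thresh
    edges = np.abs(sdgd_response(frame)) > edge_thresh
    return fg | edges                # union of BGS foreground and detected edge pixels
```

The same fusion applies to AM, RA, and RGA by swapping in the corresponding background model before the differencing step.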

Although the computational time increased, this aspect is acceptable considering the enhancement and the characteristics of second-order derivatives. Therefore, this study is


Table 4: Classification results.

Exp              Human   Non-human   (%)
1    Correct     498     492         99.0
     Missed      2       8           1.0
2    Correct     499     491         99.0
     Missed      1       9           1.0
3    Correct     498     495         99.3
     Missed      2       5           0.7
4    Correct     495     500         99.5
     Missed      5       0           0.5
5    Correct     481     492         97.3
     Missed      19      8           2.7
6    Correct     497     490         98.7
     Missed      3       10          1.3
7    Correct     484     496         98.0
     Missed      16      4           2.0
8    Correct     493     498         99.1
     Missed      7       2           0.9
9    Correct     497     496         99.3
     Missed      3       4           0.7
10   Correct     497     486         98.3
     Missed      3       14          1.7
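The (%) column of Table 4 is consistent with 500 test images per class (correct plus missed equals 500 in every experiment), so each rate is simply the total correct count over 1000 images; for example, for experiment 5:

```python
correct_human, correct_nonhuman = 481, 492     # experiment 5, Table 4
total = 1000                                   # 500 test images per class (assumed)
accuracy = round((correct_human + correct_nonhuman) / total * 100, 1)  # 97.3
miss_rate = round(100 - accuracy, 1)                                   # 2.7
```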

Table 5: Performance for ANN classification.

TP     TN     FP   FN   Recall   Precision   F-score
4939   4936   61   64   0.987    0.988       0.987

valid and suitable for implementation in non-real-time applications. The proposed hybrid technique can improve upon traditional BGS techniques, as indicated by the improved F-score, accuracy, and ANN recognition values after testing various data sources and environments. The technique can also be considered for detecting moving objects in non-real-time applications, such as investigations of human actions or traffic conditions.

Acknowledgments

This work was supported by Universiti Kebangsaan Malaysia (UKM) research grant DPP-2013-003 and Ministry of Higher Education (MoHE) research grant LRGSTD2011ICT0402.

References

[1] A. M. McIvor, "Background subtraction techniques," in Proceedings of the Image and Vision Computing Conference, Auckland, New Zealand, 2000.

[2] M. H. Sigari, N. Mozayani, and H. M. Pourreza, "Fuzzy running average and fuzzy background subtraction: concepts and application," International Journal of Computer Science and Network Security, vol. 8, pp. 138–143, 2008.

[3] Y. Zheng and L. Fan, "Moving object detection based on running average background and temporal difference," in Proceedings of the IEEE International Conference on Intelligent Systems and Knowledge Engineering (ISKE '10), pp. 270–272, November 2010.

[4] J. Park, A. Tabb, and A. C. Kak, "Hierarchical data structure for real-time background subtraction," in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 1849–1852, October 2006.

[5] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, "Pfinder: real-time tracking of the human body," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780–785, 1997.

[6] Z. Tang, Z. Miao, and Y. Wan, "Background subtraction using running Gaussian average and frame difference," in Proceedings of the International Conference on Entertainment Computing (ICEC '07), vol. 4740 of Lecture Notes in Computer Science, pp. 411–414, 2007.

[7] Z. He, Y. Liu, H. Yu, and X. Ye, "Optimized algorithm for traffic information collection in an embedded system," in Proceedings of the IEEE Congress on Image and Signal Processing, May 2008.

[8] A. Singh, S. Sawan, M. Hanmandlu, V. K. Madasu, and B. C. Lovell, "An abandoned object detection system based on dual background segmentation," in Proceedings of the 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '09), pp. 352–357, IEEE Press, September 2009.

[9] S. Su and Y. Chen, "Moving object segmentation using improved running Gaussian average background model," in Proceedings of the International Conference on Digital Image Computing: Techniques and Applications, 2008.

[10] H. Kim, R. Sakamoto, I. Kitahara, T. Toriyama, and K. Kogure, "Background subtraction using generalised Gaussian family model," Electronics Letters, vol. 44, no. 3, pp. 189–190, 2008.

[11] Z. H. Huang and K. W. Chau, "A new image thresholding method based on Gaussian mixture model," Applied Mathematics and Computation, vol. 205, pp. 899–907, 2008.

[12] Y. Liu, H. Yao, W. Gao, X. Chen, and D. Zhao, "Nonparametric background generation," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 916–919, August 2006.

[13] P. Vu, V. Phong, T. H. Vu, and H. B. Le, "GPU implementation of extended Gaussian mixture model for background subtraction," in Proceedings of the 8th IEEE-RIVF International Conference on Computing and Communication Technologies: Research, Innovation and Vision for the Future (RIVF '10), November 2010.

[14] A. Mittal and N. Paragios, "Motion-based background subtraction using adaptive kernel density estimation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-302–II-309, July 2004.

[15] F. Porikli and O. Tuzel, "Human body tracking by adaptive background models and mean shift analysis," in Proceedings of the IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS '03), July 2003.

[16] I. T. Young, J. J. Gerbrands, and L. J. van Vliet, Fundamentals of Image Processing, Version 2.3, 1995–2007.

[17] V. G. Narendra and K. S. Hareesh, "Study and comparison of various image edge detection techniques," International Journal of Image Processing, vol. 4, no. 2, article 83, 2009.

[18] J. F. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–697, 1986.


[19] M. P. Persoon, I. W. O. Serlie, F. H. Post, R. Truyen, and F. M. Vos, "Visualization of noisy and biased volume data using first and second order derivative techniques," in Proceedings of the 14th IEEE Visualization Conference, pp. 379–385, October 2003.

[20] I. T. Young, "Generalized convolutional filtering," in Proceedings of the 19th CERN School of Computing, pp. 51–65, 1996.

[21] "Image processing fundamentals: derivative-based operations," 2011, http://www.mif.vu.lt/atpazinimas/dip/FIP/fip-Derivati.html.

[22] P. W. Verbeek and L. J. van Vliet, "Location error of curved edges in low-pass filtered 2-D and 3-D images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 7, pp. 726–733, 1994.

[23] M. Hagara and J. Moravcik, "PLUS operator for edge detection in digital images," in Proceedings of the International Conference Radioelektronika, pp. 467–470, 2002.

[24] Y. Nader El-Glaly, Development of PDE-Based Digital Inpainting Algorithm Applied to Missing Data in Digital Images [M.S. thesis], Ain Shams University, 2007.

[25] R. G. Aarnink, J. de la Rosette, W. Feitz, F. Debruyne, and H. Wijkstra, "A preprocessing algorithm for edge detection with multiple scales of resolution," European Journal of Ultrasound, vol. 5, pp. 113–126, 1997.

[26] N. Lu and J. Wang, "Motion detection based on accumulative optical flow and double background filtering," in Proceedings of the World Congress on Engineering, vol. 1, 2007.

[27] W. M. D. W. Zaki, A. Hussain, and M. Hedayati, "Moving object detection using keypoints reference model," EURASIP Journal on Image and Video Processing, vol. 2011, 13 pages, 2011.

[28] F. Y. A. Rahman, A. Hussain, N. M. Tahir, S. A. Samad, and M. H. M. Saad, "Hybrid background subtraction techniques with second derivative on gradient direction filter," in Proceedings of the International Workshop on Advanced Image Technology (IWAIT '12), 2012.

[29] S. M. Al-Garni and A. A. Abdennour, "Moving vehicle detection using automatic background extraction," in Proceedings of the World Academy of Science, Engineering and Technology, vol. 24, pp. 82–86, Sydney, Australia, December 2006.

[30] "CMU graphics lab motion capture database," 2010, http://mocap.cs.cmu.edu.

[31] R. Gross and J. Shi, "The CMU motion of body (MoBo) database," Tech. Rep. CMU-RI-TR-01-18, Robotics Institute, Carnegie Mellon University, 2001.

[32] S. Singh, S. A. Velastin, and H. Ragheb, "MuHAVi: a multicamera human action video dataset for the evaluation of action recognition methods," in Proceedings of the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '10), pp. 48–55, Boston, Mass, USA, September 2010.

[33] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, "HMDB: a large video database for human motion recognition," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2556–2563, November 2011.

[34] F. Y. A. Rahman, A. Hussain, N. M. Tahir, and W. M. D. Zaki, "Modeling of initial reference frame for background subtraction," in Proceedings of the 6th International Colloquium on Signal Processing and Its Applications (CSPA '10), pp. 125–128, 2010.

[35] E. Fauske, L. M. Eliassen, and R. H. Bakken, "A comparison of learning based background subtraction," in Proceedings of the Norwegian Artificial Intelligence Symposium (NAIS), pp. 181–192, 2009.

[36] H. Kim, B. Ku, D. K. Han, S. Kang, and H. Ko, "Adaptive selection model in block-based background subtraction," Electronics Letters, vol. 48, no. 8, 2012.


Journal of Electrical and Computer Engineering 9

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29Frame number

F-sc

ore

0010203040506070809

1

(a) FD

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29Frame number

F-sc

ore

0010203040506070809

1

(b) AM

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29Frame number

F-sc

ore

0010203040506070809

1

(c) RA

1 3 5 7 9 11 13 15 17 19 21 23 25 27 29Frame number

F-sc

ore

0010203040506070809

1

(d) RGA

Figure 8 Graph of F-score versus frame number for indoor sample

addition of the SDGD filter in the BGS techniques Then thevalues of recall precision and F-score were calculated basedon mathematical equations (10)ndash(12)

Based on the findings shown in Tables 1 and 2 we cansee that the number of TPs has increased significantly whichproves that our technique is able to detect more of thecompound blob than the original methods This finding isalso in line with the increment of precision values Tables 1and 2 also show that the F-score increased for both sampleswhen we add the SDGD filter

The graphs in Figures 7 and 8 show the F-score trends inboth A1 and B1 for the four BGS techniques namely FD AMRA and RGA The solid lines represent the F-score resultsusing our proposed technique that is with the SDGD filterwhereas the dashed lines represent the F-score results withoutusing the SDGD filter Based on Figures 7 and 8 higherF-score values were noted for the A1 and B1 videos whenusing the proposed hybrid technique compared with thosewhen using the basic BGS techniques Thus our proposedtechnique improves upon traditional BGS techniques

To confirm the effectiveness of the proposed techniquewe tested the algorithm in six different videos with sixdifferent backgrounds as obtained from the five differentdatabases Table 3 shows the performance of FD AMRA and RGA with and without the SDGD filter in termsof the F-score and average accuracy percentage of all sixvideo samples Columns 5 and 8 show the percentageof improvement for both F-score and average accuracy

percentage Based on Table 3 the use of an SDGD filterimproved the average F-score values for all data comparedwith the values produced by the methods without SDGDColumn 5 of Table 3 shows that the F-score values improvedby 1 to 9 Therefore the proposed technique comparedwith existing techniques enhances object extraction

Additionally Table 3 depicts an increment in averageaccuracy percentage for each tested technique except forvideos A2 and E1 A2 had poor video quality because thefootage was taken in a corridor without proper lightingBecause of the poor video quality and bad lighting conditionthe SDGD filter was unable to segment the foregroundsubjects properly and produced an unwanted shadow in theforeground Nevertheless our proposed technique is capableof detecting foreground pixels with over 90 accuracy on allvideos tested

Meanwhile Table 4 exhibits the results of classifyinghuman and non-human recognition based on the segmentedframe generated by the proposed technique In Table 4 ANNsuccessfully recognized human and non-human images fromthe generated frames using the improved FD techniqueIncorrect classifications were minimal in all ten experiments

Figure 9 presents amatrix describing the overall results ofclassification testingThe average recognition rate for humanand non-human categories was 9878 and 9872 respec-tively The rate is higher for both categories as our algorithmprovided good human and non-human blob images which

10 Journal of Electrical and Computer Engineering

Table 3 Performance of each technique

Name MethodEvaluation performance based on average value

119865-score Accuracy ()Without SDGD With SDGD imlowast Without SDGD With SDGD imlowast

A1

FD 075 078 3 9845 9847 002AM 069 075 6 9816 9829 013RA 074 077 3 9842 9844 002RGA 079 081 2 9842 9847 003

A2

FD 078 079 1 9674 9665 minus009AM 080 081 1 9731 9714 minus017RA 076 078 2 9691 9683 minus008RGA 076 077 1 9575 9565 minus010

B1

FD 068 074 6 9799 9812 013AM 071 076 5 9822 9837 015RA 071 076 5 9825 9838 013RGA 064 068 4 9747 9751 004

C1

FD 084 085 1 9564 9569 005AM 079 080 1 9487 9503 016RA 057 065 8 9169 9217 048RGA 08 081 1 9495 9507 012

D1

FD 068 071 3 9784 9807 023AM 071 075 4 9813 9837 022RA 071 074 3 9814 9836 022RGA 070 074 4 9804 9825 021

E1

FD 080 081 1 9666 9669 003AM 080 081 1 9670 9671 001RA 080 081 1 9669 9670 001RGA 068 077 9 9645 9650 005

lowastim percent improvement

                       Target class
Output class      Human    Non-human    (%)
Human             4939     64           98.78
Non-human         61       4936         98.72
(%)               98.78    98.72        98.75

Figure 9: Confusion matrix of the overall performance.

allowed the ANN to distinguish between the two categories. Further, the overall performance across both classes was 98.75%. The findings confirmed that the proposed algorithm, which combines FD with the SDGD filter, could generate better silhouette images, thereby facilitating recognition of human and non-human images in the segmented frames.

From the matrix, the TP, TN, FP, and FN counts can be extracted; hence, Table 5 shows the results of the statistical analysis done on the classification findings by the ANN.
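Table 5's statistics follow from the Figure 9 counts by the standard definitions of recall, precision, and F-score. A short sketch under that assumption (the `classification_stats` helper is illustrative):

```python
# Sketch: deriving Table 5's statistics from the Figure 9 confusion
# matrix counts (TP = 4939, TN = 4936, FP = 61, FN = 64).

def classification_stats(tp, tn, fp, fn):
    recall = tp / (tp + fn)                              # TP / (TP + FN)
    precision = tp / (tp + fp)                           # TP / (TP + FP)
    f_score = 2 * precision * recall / (precision + recall)
    return recall, precision, f_score

r, p, f = classification_stats(4939, 4936, 61, 64)
print(round(r, 3), round(p, 3), round(f, 3))  # -> 0.987 0.988 0.988
```

The exact F-score is about 0.9875; Table 5 reports it as 0.987, consistent to three figures under truncation.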

5. Conclusion

We presented a new hybrid approach that incorporates the SDGD filter with four basic BGS techniques, namely, FD, AM, RA, and RGA. This hybrid technique enhanced segmentation performance, as indicated by the F-score values and average accuracy percentages. The technique was tested on six different videos from five different databases; each video was taken either indoors or outdoors and showed a different scene. An ANN classifier was used to classify human and non-human images appearing in the segmented images generated by our algorithm. As the algorithm was capable of providing good blob images, ANN recognition of human and non-human images in the silhouette images was facilitated.

Although computational time increased, this aspect is acceptable considering the enhancement and the characteristics of second-order derivatives. Therefore, this study is


Table 4: Classification results.

Exp              Human   Non-human   (%)
1    Correct     498     492         99.0
     Missed      2       8           1.0
2    Correct     499     491         99.0
     Missed      1       9           1.0
3    Correct     498     495         99.3
     Missed      2       5           0.7
4    Correct     495     500         99.5
     Missed      5       0           0.5
5    Correct     481     492         97.3
     Missed      19      8           2.7
6    Correct     497     490         98.7
     Missed      3       10          1.3
7    Correct     484     496         98.0
     Missed      16      4           2.0
8    Correct     493     498         99.1
     Missed      7       2           0.9
9    Correct     497     496         99.3
     Missed      3       4           0.7
10   Correct     497     486         98.3
     Missed      3       14          1.7

Table 5: Performance of ANN classification.

TP     TN     FP    FN    Recall   Precision   F-score
4939   4936   61    64    0.987    0.988       0.987

valid and suitable for implementation in non-real-time applications. The proposed hybrid technique can improve upon traditional BGS techniques, as indicated by the improved F-score, accuracy, and ANN recognition values after testing various data sources and data environments. The technique can also be considered for detecting moving objects in non-real-time applications, such as investigations of human actions or traffic conditions.

Acknowledgments

This work was supported by Universiti Kebangsaan Malaysia (UKM) research Grant DPP-2013-003 and Ministry of Higher Education (MoHE) research Grant LRGSTD2011ICT0402.

References

[1] A. M. McIvor, "Background subtraction techniques," in Proceedings of the Image and Vision Computing Conference, Auckland, New Zealand, 2000.

[2] M. H. Sigari, N. Mozayani, and H. M. Pourreza, "Fuzzy running average and fuzzy background subtraction: concepts and application," International Journal of Computer Science and Network Security, vol. 8, pp. 138–143, 2008.

[3] Y. Zheng and L. Fan, "Moving object detection based on running average background and temporal difference," in Proceedings of the IEEE International Conference on Intelligent Systems and Knowledge Engineering (ISKE '10), pp. 270–272, November 2010.

[4] J. Park, A. Tabb, and A. C. Kak, "Hierarchical data structure for real-time background subtraction," in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 1849–1852, October 2006.

[5] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, "Pfinder: real-time tracking of the human body," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780–785, 1997.

[6] Z. Tang, Z. Miao, and Y. Wan, "Background subtraction using running Gaussian average and frame difference," in Proceedings of the International Conference on Entertainment Computing (ICEC '07), vol. 4740 of Lecture Notes in Computer Science, pp. 411–414, 2007.

[7] Z. He, Y. Liu, H. Yu, and X. Ye, "Optimized algorithm for traffic information collection in an embedded system," in Proceedings of the IEEE Congress on Image and Signal Processing, May 2008.

[8] A. Singh, S. Sawan, M. Hanmandlu, V. K. Madasu, and B. C. Lovell, "An abandoned object detection system based on dual background segmentation," in Proceedings of the 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '09), pp. 352–357, IEEE Press, September 2009.

[9] S. Su and Y. Chen, "Moving object segmentation using improved running Gaussian average background model," in Proceedings of the International Conference on Digital Image Computing: Techniques and Applications, 2008.

[10] H. Kim, R. Sakamoto, I. Kitahara, T. Toriyama, and K. Kogure, "Background subtraction using generalised Gaussian family model," Electronics Letters, vol. 44, no. 3, pp. 189–190, 2008.

[11] Z. H. Huang and K. W. Chau, "A new image thresholding method based on Gaussian mixture model," Applied Mathematics and Computation, vol. 205, pp. 899–907, 2008.

[12] Y. Liu, H. Yao, W. Gao, X. Chen, and D. Zhao, "Nonparametric background generation," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 916–919, August 2006.

[13] P. Vu, V. Phong, T. H. Vu, and H. B. Le, "GPU implementation of extended Gaussian mixture model for background subtraction," in Proceedings of the 8th IEEE-RIVF International Conference on Computing and Communication Technologies: Research, Innovation and Vision for the Future (RIVF '10), November 2010.

[14] A. Mittal and N. Paragios, "Motion-based background subtraction using adaptive kernel density estimation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II-302–II-309, July 2004.

[15] F. Porikli and O. Tuzel, "Human body tracking by adaptive background models and mean shift analysis," in Proceedings of the IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS '03), July 2003.

[16] I. T. Young, J. J. Gerbrands, and L. J. van Vliet, "Fundamentals of Image Processing," Version 2.3, 1995–2007.

[17] V. G. Narendra and K. S. Hareesh, "Study and comparison of various image edge detection techniques," International Journal of Image Processing, vol. 4, no. 2, article 83, 2009.

[18] J. F. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–697, 1986.


[19] M. P. Persoon, I. W. O. Serlie, F. H. Post, R. Truyen, and F. M. Vos, "Visualization of noisy and biased volume data using first and second order derivative techniques," in Proceedings of the 14th IEEE Visualization Conference, pp. 379–385, October 2003.

[20] I. T. Young, "Generalized convolutional filtering," in Proceedings of the 19th CERN School of Computing, pp. 51–65, 1996.

[21] "Image processing fundamentals: derivative-based operation," 2011, http://www.mif.vu.lt/atpazinimas/dip/FIP/fip-Derivati.html.

[22] P. W. Verbeek and L. J. van Vliet, "Location error of curved edges in low-pass filtered 2-D and 3-D images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 7, pp. 726–733, 1994.

[23] M. Hagara and J. Moravcik, "PLUS operator for edge detection in digital images," in Proceedings of the International Conference Radioelektronika, pp. 467–470, 2002.

[24] Y. Nader El-Glaly, Development of PDE-Based Digital Inpainting Algorithm Applied to Missing Data in Digital Images [M.S. thesis], Ain Shams University, 2007.

[25] R. G. Aarnick, J. de la Rosette, W. Feitz, F. Debruyne, and H. Wijkstra, "A preprocessing algorithm for edge detection with multiple scales of resolution," European Journal of Ultrasound, vol. 5, pp. 113–126, 1997.

[26] N. Lu and J. Wang, "Motion detection based on accumulative optical flow and double background filtering," in Proceedings of the World Congress on Engineering, vol. 1, 2007.

[27] W. M. D. W. Zaki, A. Hussain, and M. Hedayati, "Moving object detection using keypoints reference model," EURASIP Journal on Image and Video Processing, vol. 2011, 13 pages, 2011.

[28] F. Y. A. Rahman, A. Hussain, N. M. Tahir, S. A. Samad, and M. H. M. Saad, "Hybrid background subtraction techniques with second derivative on gradient direction filter," in Proceedings of the International Workshop on Advanced Image Technology (IWAIT '12), 2012.

[29] S. M. Al-Garni and A. A. Abdennour, "Moving vehicle detection using automatic background extraction," in Proceedings of the World Academy of Science, Engineering and Technology, vol. 24, pp. 82–86, Sydney, Australia, December 2006.

[30] "CMU graphics lab motion capture database," 2010, http://mocap.cs.cmu.edu.

[31] R. Gross and J. Shi, "The CMU motion of body (MoBo) database," Technical Report CMU-RI-TR-01-18, Robotics Institute, Carnegie Mellon University, 2001.

[32] S. Singh, S. A. Velastin, and H. Ragheb, "MuHAVi: a multicamera human action video dataset for the evaluation of action recognition methods," in Proceedings of the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '10), pp. 48–55, Boston, Mass, USA, September 2010.

[33] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, "HMDB: a large video database for human motion recognition," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2556–2563, November 2011.

[34] F. Y. A. Rahman, A. Hussain, N. M. Tahir, and W. M. D. Zaki, "Modeling of initial reference frame for background subtraction," in Proceedings of the 6th International Colloquium on Signal Processing and Its Applications (CSPA '10), pp. 125–128, 2010.

[35] E. Fauske, L. M. Eliassen, and R. H. Bakken, "A comparison of learning based background subtraction," in Proceedings of the Norwegian Artificial Intelligence Symposium (NAIS), pp. 181–192, 2009.

[36] H. Kim, B. Ku, D. K. Han, S. Kang, and H. Ko, "Adaptive selection model in block-based background subtraction," Electronics Letters, vol. 48, no. 8, 2012.


Page 10: Research Article Enhancement of Background Subtraction ...downloads.hindawi.com/journals/jece/2013/598708.pdf · Research Article Enhancement of Background Subtraction Techniques

10 Journal of Electrical and Computer Engineering

Table 3 Performance of each technique

Name MethodEvaluation performance based on average value

119865-score Accuracy ()Without SDGD With SDGD imlowast Without SDGD With SDGD imlowast

A1

FD 075 078 3 9845 9847 002AM 069 075 6 9816 9829 013RA 074 077 3 9842 9844 002RGA 079 081 2 9842 9847 003

A2

FD 078 079 1 9674 9665 minus009AM 080 081 1 9731 9714 minus017RA 076 078 2 9691 9683 minus008RGA 076 077 1 9575 9565 minus010

B1

FD 068 074 6 9799 9812 013AM 071 076 5 9822 9837 015RA 071 076 5 9825 9838 013RGA 064 068 4 9747 9751 004

C1

FD 084 085 1 9564 9569 005AM 079 080 1 9487 9503 016RA 057 065 8 9169 9217 048RGA 08 081 1 9495 9507 012

D1

FD 068 071 3 9784 9807 023AM 071 075 4 9813 9837 022RA 071 074 3 9814 9836 022RGA 070 074 4 9804 9825 021

E1

FD 080 081 1 9666 9669 003AM 080 081 1 9670 9671 001RA 080 081 1 9669 9670 001RGA 068 077 9 9645 9650 005

lowastim percent improvement

Target classHuman Nonhuman ()

Output class

Human 4939 64 9878

Non-human 61 4936 9872

() 9878 9872 9875

Figure 9 Confusion matrix of the overall performance

allowed ANN to distinguish between the two categoriesFurther the overall performance in both classes was 9875The findings confirmed that the proposed algorithm whichcombines FD with the SDGD filter could generate bettersilhouette images thereby facilitating recognition of humanand non-human images in the segmented imagesframes

From the matrix the information of TP TN FP and FNcan be extracted hence Table 5 shows results of the statisticalanalysis done on the classification findings by ANN

5 Conclusion

We presented a new hybrid approach that incorporates theSDGDfilterwith four basic BGS techniques namely FDAMRA and RGAThis hybrid technique enhanced segmentationperformance as indicated by the F-score values and averageaccuracy percentages The technique was tested on six differ-ent videos from five different databases and each video wastaken either indoors or outdoors and showed different scenesAnANNclassifierwas used to classify human andnonhumanimages appearing in the segmented images generated by ouralgorithm As the algorithm was capable of providing goodblob images ANN recognition of human and non-humanimages in the silhouette images was facilitated

Although computational time increased this aspect isacceptable considering the enhancement and the character-istics of second-order derivatives Therefore this study is

Journal of Electrical and Computer Engineering 11

Table 4 Classification results

Exp Human Non-human

1 Correct 498 492 99Missed 2 8 10

2 Correct 499 491 990Missed 1 9 10

3 Correct 498 495 993Missed 2 5 07

4 Correct 495 500 995Missed 5 0 05

5 Correct 481 492 973Missed 19 8 27

6 Correct 497 490 987Missed 3 10 13

7 Correct 484 496 980Missed 16 4 20

8 Correct 493 498 991Missed 7 2 09

9 Correct 497 496 993Missed 3 4 07

10 Correct 497 486 983Missed 3 14 17

Table 5 Performance for ANN classification

TP TN FP FN Recall Precision 119865-Score4939 4936 61 64 0987 0988 0987

valid and suitable for implementation in non-real-time appli-cations The proposed hybrid technique can improve upontraditional BGS techniques as indicated by the improved F-score accuracy andANNrecognition values after testing var-ious data sources and data environments The technique canalso be considered in detecting moving objects in non-real-time applications such as investigations of human actions ortraffic conditions

Acknowledgments

This work was supported by Universiti KebangsaanMalaysia (UKM) research Grant DPP-2013-003 and Minis-try of Higher Education (MoHE) research Grant LRGSTD2011ICT0402

References

[1] AMMcIvor ldquoBackground subtraction techniquesrdquo inProceed-ings of the Image and Vision Computing Conference AucklandNew Zealand 2000

[2] M H Sigari N Mozayani and H M Pourreza ldquoFuzzyrunning average and fuzzy background subtraction conceptsand applicationrdquo International Journal of Computer Science andNetwork Security vol 8 pp 138ndash143 2008

[3] Y Zheng and L Fan ldquoMoving object detection based onrunning average background and temporal differencerdquo in Pro-ceedings of the IEEE International Conference on Intelligent

Systems and Knowledge Engineering (ISKE rsquo10) pp 270ndash272November 2010

[4] J Park A Tabb and A C Kak ldquoHierarchical data structure forreal-time background subtractionrdquo in Proceedings of the IEEEInternational Conference on Image Processing (ICIP rsquo06) pp1849ndash1852 October 2006

[5] C RWrenAAzarbayeni TDarrel andA P Petland ldquoPfinderreal-time tracking of the human bodyrdquo IEEE Transaction onPattern Analysis andMachine Intelligence vol 19 no 7 pp 780ndash785 1997

[6] Z Tang Z Miao and Y Wan ldquoBackground subtraction usingrunning Gaussian average and frame differencerdquo in Proceedingsof the International Conference Entertainment Computing (ICECrsquo07) vol 4740 of Lecture Notes in Computer Science pp 411ndash4142007

[7] Z He Y Liu H Yu and X Ye ldquoOptimized algorithm for trafficinformation collection in an embedded systemrdquo in Proceedingsof the IEEE Congress on Image and Signal Processing May 2008

[8] A Singh S Sawan M Hanmandlu V K Madasu and BC Lovell ldquoAn abandoned object detection system based ondual background segmentationrdquo in Proceedings of the 6th IEEEInternational Conference on Advanced Video and Signal BasedSurveillance (AVSS rsquo09) pp 352ndash357 IEEE Press September2009

[9] S Su and Y Chen ldquoMoving object segmentation usingimproved running Gaussian average background modelrdquo inProceedings of the International conference on Digital ImageComputing Techniques and Applications 2008

[10] H Kim R Sakamoto I Kitahara T Toriyama and K KogureldquoBackground subtraction using generalised Gaussian familymodelrdquo Electronics Letters vol 44 no 3 pp 189ndash190 2008

[11] Z H Huang and K W Chau ldquoA new image thresholdingmethod based on Gaussian mixture modelrdquo Journal AppliedMathematics and Computation vol 205 pp 899ndash907 2008

[12] Y Liu H Yao W Gao X Chen and D Zhao ldquoNonparametricbackground generationrdquo in Proceedings of the 18th InternationalConference on Pattern Recognition (ICPR rsquo06) pp 916ndash919August 2006

[13] P Vu V Phong T H Vu and H B Le ldquoGPU implemen-tation of Extended Gaussian mixture model for backgroundsubtractionrdquo in Proceedings of the 8th IEEE-RIVF InternationalConference on Computing and Communication TechnologiesResearch Innovation and Vision for the Future (RIVF rsquo10)November 2010

[14] A Mittal and N Paragios ldquoMotion-based background subtrac-tion using adaptive kernel density estimationrdquo in Proceedings ofthe IEEE Computer Society Conference on Computer Vision andPattern Recognition (CVPR rsquo04) pp II302ndashII309 July 2004

[15] F Porikli and O Tuzel ldquoHuman body tracking by adaptivebackground models and mean shift analysisrdquo in Proceedings ofthe IEEE International Workshop on Performance Evaluation ofTracking and Surveillance (PETS rsquo03) July2003

[16] I J Young J J Gerbrands and L J van Vliet ldquoFundamentals ofImage Processingrdquo Version 2 3 1995ndash2007

[17] V G Narendra and K S Hareesh ldquoStudy and comparison ofvarious image edge detection techniquesrdquo International Journalof Image Processing vol 4 no 2 article 83 2009

[18] J F Canny ldquoA computational approach to edge detectionrdquo IEEETransaction Pattern Analysis and Machine Intelligent vol 8 no6 pp 679ndash697 1986

12 Journal of Electrical and Computer Engineering

[19] M P Persoon I W O Serlie F H Post R Truyen and F MVos ldquoVisualization of noisy and biased volume data using firstand second order derivative techniquesrdquo in Proceedings of the14th IEEE Visualization Conference pp 379ndash385 October 2003

[20] I T Young ldquoGeneralized convolutional filteringrdquo inProceedingsof the 19th CERN School of Computing pp 51ndash65 1996

[21] ldquoImage processing fundamentals derivative-based operationrdquo2011 httpwwwmifvultatpazinimasdipFIPfip-Derivatihtml

[22] PWVerbeek and L J vanVliet ldquoLocation error of curved edgesin low-pass filtered 2-D and 3-D imagesrdquo IEEE Transactions onPattern Analysis andMachine Intelligence vol 16 no 7 pp 726ndash733 1994

[23] M Hagara and J Moravcik ldquoPLUS operator for edge detectionin digital imagesrdquo in Proceedings of the International Conferenceof Radioelektonika pp 467ndash470 2002

[24] Y Nader El-GlalyDevelopment of PDE-Based Digital InpaintingAlgorithm Applied to Missing Data in Digital Images [MSthesis] Ain Shams University 2007

[25] R G Aarnick J de la Rosette W Feitz F Debruyne and HWijkstra ldquoA preprocessing algorithm for edge detection withmultiple scales of resolutionrdquo European Journal of Ultrasoundvol 5 pp 113ndash126 1997

[26] N Lu and J Wang ldquoMotion detection based on accumulativeoptical flow and double background filteringrdquo in Proceedings ofthe World Congress on Engineering vol 1 2007

[27] ZakiWM DW A Hussain andM Hedayati ldquoMoving objectdetection using Keypoints reference modelrdquo EURASIP Journalon Image and Video Processing vol 2011 13 pages 2011

[28] F Y A Rahman A Hussain N M Tahir S A Samad and MH M Saad ldquoHybrid background subtraction techniques withsecond derivative on gradient direction filterrdquo in Proceedingsof the International Workshop on Advanced Image Technology(IWAIT rsquo12) 2012

[29] SMAl-Garni andAAAbdennour ldquoMoving vehicle detectionusing automatic background extractionrdquo in proceedings of theWorld Academy of Science Engineering and Technology vol 24pp 82ndash86 Sydney Australia December 2006

[30] ldquoCMU graphic lab motion capture databaserdquo 2010 httpmocapcscmuedu

[31] R Gross and J Shi ldquoThe CMU motion of body (MoBo)databaserdquo Technical Report CMU-RI-TR-01-18 Robotics Insti-tute Carnegie Mellon University 2001

[32] S Singh S A Velastin and H Ragheb ldquoMuHAVi a mul-ticamera human action video dataset for the evaluation ofaction recognition methodsrdquo in Proceedings of the 7th IEEEInternational Conference on Advanced Video and Signal Based(AVSS rsquo10) pp 48ndash55 Boston Mass USA September 2010

[33] H Kuehne H Jhuang E Garrote T Poggio and T SerreldquoHMDB a large video database for humanmotion recognitionrdquoinProceedings of the IEEE International Conference onComputerVision (ICCV rsquo11) pp 2556ndash2563 November 2011

[34] F Y A Rahman A Hussain N M Tahir and W M DZaki ldquoModeling of initial reference frame for backgroundsubtractionrdquo in Proceedings of the 6th International Colloquiumon Signal Processing and Its Applications (CSPA rsquo10) pp 125ndash1282010

[35] E Fauske L M Eliassen and R H Bakken ldquoA comparison oflearning based background subtractionrdquo in Proceedings of theNorwegian Artificial Intelligens Symposium (NAIS) pp 181ndash1922009

[36] H Kim B Ku D K Han S Kang and H Ko ldquoAdaptive selec-tion model in block-based background subtractionrdquo ElectronicsLetter vol 48 no 8 2012

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 11: Research Article Enhancement of Background Subtraction ...downloads.hindawi.com/journals/jece/2013/598708.pdf · Research Article Enhancement of Background Subtraction Techniques

Journal of Electrical and Computer Engineering 11

Table 4 Classification results

Exp Human Non-human

1 Correct 498 492 99Missed 2 8 10

2 Correct 499 491 990Missed 1 9 10

3 Correct 498 495 993Missed 2 5 07

4 Correct 495 500 995Missed 5 0 05

5 Correct 481 492 973Missed 19 8 27

6 Correct 497 490 987Missed 3 10 13

7 Correct 484 496 980Missed 16 4 20

8 Correct 493 498 991Missed 7 2 09

9 Correct 497 496 993Missed 3 4 07

10 Correct 497 486 983Missed 3 14 17

Table 5 Performance for ANN classification

TP TN FP FN Recall Precision 119865-Score4939 4936 61 64 0987 0988 0987

valid and suitable for implementation in non-real-time appli-cations The proposed hybrid technique can improve upontraditional BGS techniques as indicated by the improved F-score accuracy andANNrecognition values after testing var-ious data sources and data environments The technique canalso be considered in detecting moving objects in non-real-time applications such as investigations of human actions ortraffic conditions

Acknowledgments

This work was supported by Universiti KebangsaanMalaysia (UKM) research Grant DPP-2013-003 and Minis-try of Higher Education (MoHE) research Grant LRGSTD2011ICT0402

References

[1] AMMcIvor ldquoBackground subtraction techniquesrdquo inProceed-ings of the Image and Vision Computing Conference AucklandNew Zealand 2000

[2] M H Sigari N Mozayani and H M Pourreza ldquoFuzzyrunning average and fuzzy background subtraction conceptsand applicationrdquo International Journal of Computer Science andNetwork Security vol 8 pp 138ndash143 2008

[3] Y Zheng and L Fan ldquoMoving object detection based onrunning average background and temporal differencerdquo in Pro-ceedings of the IEEE International Conference on Intelligent

Systems and Knowledge Engineering (ISKE rsquo10) pp 270ndash272November 2010

[4] J Park A Tabb and A C Kak ldquoHierarchical data structure forreal-time background subtractionrdquo in Proceedings of the IEEEInternational Conference on Image Processing (ICIP rsquo06) pp1849ndash1852 October 2006

[5] C RWrenAAzarbayeni TDarrel andA P Petland ldquoPfinderreal-time tracking of the human bodyrdquo IEEE Transaction onPattern Analysis andMachine Intelligence vol 19 no 7 pp 780ndash785 1997

[6] Z Tang Z Miao and Y Wan ldquoBackground subtraction usingrunning Gaussian average and frame differencerdquo in Proceedingsof the International Conference Entertainment Computing (ICECrsquo07) vol 4740 of Lecture Notes in Computer Science pp 411ndash4142007

[7] Z He Y Liu H Yu and X Ye ldquoOptimized algorithm for trafficinformation collection in an embedded systemrdquo in Proceedingsof the IEEE Congress on Image and Signal Processing May 2008

[8] A Singh S Sawan M Hanmandlu V K Madasu and BC Lovell ldquoAn abandoned object detection system based ondual background segmentationrdquo in Proceedings of the 6th IEEEInternational Conference on Advanced Video and Signal BasedSurveillance (AVSS rsquo09) pp 352ndash357 IEEE Press September2009

[9] S Su and Y Chen ldquoMoving object segmentation usingimproved running Gaussian average background modelrdquo inProceedings of the International conference on Digital ImageComputing Techniques and Applications 2008

[10] H Kim R Sakamoto I Kitahara T Toriyama and K KogureldquoBackground subtraction using generalised Gaussian familymodelrdquo Electronics Letters vol 44 no 3 pp 189ndash190 2008

[11] Z H Huang and K W Chau ldquoA new image thresholdingmethod based on Gaussian mixture modelrdquo Journal AppliedMathematics and Computation vol 205 pp 899ndash907 2008

[12] Y Liu H Yao W Gao X Chen and D Zhao ldquoNonparametricbackground generationrdquo in Proceedings of the 18th InternationalConference on Pattern Recognition (ICPR rsquo06) pp 916ndash919August 2006

[13] P Vu V Phong T H Vu and H B Le ldquoGPU implemen-tation of Extended Gaussian mixture model for backgroundsubtractionrdquo in Proceedings of the 8th IEEE-RIVF InternationalConference on Computing and Communication TechnologiesResearch Innovation and Vision for the Future (RIVF rsquo10)November 2010

[14] A Mittal and N Paragios ldquoMotion-based background subtrac-tion using adaptive kernel density estimationrdquo in Proceedings ofthe IEEE Computer Society Conference on Computer Vision andPattern Recognition (CVPR rsquo04) pp II302ndashII309 July 2004

[15] F Porikli and O Tuzel ldquoHuman body tracking by adaptivebackground models and mean shift analysisrdquo in Proceedings ofthe IEEE International Workshop on Performance Evaluation ofTracking and Surveillance (PETS rsquo03) July2003

[16] I J Young J J Gerbrands and L J van Vliet ldquoFundamentals ofImage Processingrdquo Version 2 3 1995ndash2007

[17] V G Narendra and K S Hareesh ldquoStudy and comparison ofvarious image edge detection techniquesrdquo International Journalof Image Processing vol 4 no 2 article 83 2009

[18] J F Canny ldquoA computational approach to edge detectionrdquo IEEETransaction Pattern Analysis and Machine Intelligent vol 8 no6 pp 679ndash697 1986

12 Journal of Electrical and Computer Engineering

[19] M. P. Persoon, I. W. O. Serlie, F. H. Post, R. Truyen, and F. M. Vos, "Visualization of noisy and biased volume data using first and second order derivative techniques," in Proceedings of the 14th IEEE Visualization Conference, pp. 379–385, October 2003.

[20] I. T. Young, "Generalized convolutional filtering," in Proceedings of the 19th CERN School of Computing, pp. 51–65, 1996.

[21] "Image processing fundamentals: derivative-based operations," 2011, http://www.mif.vu.lt/atpazinimas/dip/FIP/fip-Derivati.html.

[22] P. W. Verbeek and L. J. van Vliet, "Location error of curved edges in low-pass filtered 2-D and 3-D images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 7, pp. 726–733, 1994.

[23] M. Hagara and J. Moravcik, "PLUS operator for edge detection in digital images," in Proceedings of the International Conference Radioelektronika, pp. 467–470, 2002.

[24] Y. Nader El-Glaly, Development of PDE-Based Digital Inpainting Algorithm Applied to Missing Data in Digital Images [M.S. thesis], Ain Shams University, 2007.

[25] R. G. Aarnick, J. de la Rosette, W. Feitz, F. Debruyne, and H. Wijkstra, "A preprocessing algorithm for edge detection with multiple scales of resolution," European Journal of Ultrasound, vol. 5, pp. 113–126, 1997.

[26] N. Lu and J. Wang, "Motion detection based on accumulative optical flow and double background filtering," in Proceedings of the World Congress on Engineering, vol. 1, 2007.

[27] W. M. D. W. Zaki, A. Hussain, and M. Hedayati, "Moving object detection using keypoints reference model," EURASIP Journal on Image and Video Processing, vol. 2011, 13 pages, 2011.

[28] F. Y. A. Rahman, A. Hussain, N. M. Tahir, S. A. Samad, and M. H. M. Saad, "Hybrid background subtraction techniques with second derivative on gradient direction filter," in Proceedings of the International Workshop on Advanced Image Technology (IWAIT '12), 2012.

[29] S. M. Al-Garni and A. A. Abdennour, "Moving vehicle detection using automatic background extraction," in Proceedings of the World Academy of Science, Engineering and Technology, vol. 24, pp. 82–86, Sydney, Australia, December 2006.

[30] "CMU graphics lab motion capture database," 2010, http://mocap.cs.cmu.edu.

[31] R. Gross and J. Shi, "The CMU motion of body (MoBo) database," Technical Report CMU-RI-TR-01-18, Robotics Institute, Carnegie Mellon University, 2001.

[32] S. Singh, S. A. Velastin, and H. Ragheb, "MuHAVi: a multicamera human action video dataset for the evaluation of action recognition methods," in Proceedings of the 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '10), pp. 48–55, Boston, Mass, USA, September 2010.

[33] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, "HMDB: a large video database for human motion recognition," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2556–2563, November 2011.

[34] F. Y. A. Rahman, A. Hussain, N. M. Tahir, and W. M. D. Zaki, "Modeling of initial reference frame for background subtraction," in Proceedings of the 6th International Colloquium on Signal Processing and Its Applications (CSPA '10), pp. 125–128, 2010.

[35] E. Fauske, L. M. Eliassen, and R. H. Bakken, "A comparison of learning based background subtraction," in Proceedings of the Norwegian Artificial Intelligence Symposium (NAIS), pp. 181–192, 2009.

[36] H. Kim, B. Ku, D. K. Han, S. Kang, and H. Ko, "Adaptive selection model in block-based background subtraction," Electronics Letters, vol. 48, no. 8, 2012.
