IT 13 009
Examensarbete 45 hp
January 2013

Tracking individual bees in a beehive

ZI QUAN YU

Institutionen för informationsteknologi
Department of Information Technology




Teknisk-naturvetenskaplig fakultet, UTH-enheten
Besöksadress: Ångströmlaboratoriet, Lägerhyddsvägen 1, Hus 4, Plan 0
Postadress: Box 536, 751 21 Uppsala
Telefon: 018 – 471 30 03
Telefax: 018 – 471 30 00
Hemsida: http://www.teknat.uu.se/student

Abstract

Tracking individual bees in a beehive

ZI QUAN YU

Studying and analyzing interactions among bees requires tracking and identifying each individual among hundreds of them against a complex background. Automatic tracking and identification is challenging because of unreliable features and appearance changes. In order to map the bees' social interactions, a low-computational-cost algorithm needs to run for a long time, and the processing has to be done at the same time.

We present a comparison of several methods and describe how we stabilize the features and reduce the appearance changes. We have substantially improved the set-up and made a newly designed tag. Meanwhile, we have developed a prototype of this automatic algorithm to track and identify each individual bee among hundreds of bees in a beehive over time. The recording rate is 15 frames per second at this stage; the global detector takes around 21 s to process one frame, and the local detector around 11 s. The algorithm correctly detects 89% of around 300 tagged bees over hundreds of frames on average, but there are still around 11% misdetections.

Tryckt av: Reprocentralen ITC
IT 13 009
Examinator: Jarmo Rantakokko
Ämnesgranskare: Ida-Maria Sintorn
Handledare: Cris Luengo


Acknowledgements

I would like to express my deep gratitude to Dr. Cris Luengo, my research supervisor, for his patient guidance, enthusiastic encouragement and valuable discussions of this research work. I would also like to thank Vladimir Curic for his advice and assistance. My grateful thanks are also extended to my reviewer Ida-Maria Sintorn, who helped me with many valuable comments and suggestions on my work. I also want to thank everyone in Prof. Gunilla Borgefors's group.

I wish to thank my mother for her encouragement. I want to thank my friends who appreciated my work and supported my interests. I would also like to thank the colleagues at Dragon Palace for their help.

I would also like to extend my thanks to all the staff of the Centrum för Bildanalys for their help and for offering me the resources.

I also want to thank Dr. Olle Terenius, Dr. Barbara Locke-Grandér and Teatske Bakker for their help in collecting and preparing the bees and gluing the tags, and all the technicians who helped me in handling the instruments.


Contents

List of Figures

1 Introduction
  1.1 Background
  1.2 Previous work
  1.3 Motivation
  1.4 Limitation
  1.5 Materials

2 Image Acquisition
  2.1 Introduction
  2.2 Lighting system

3 1st Attempt – Histogram Matching
  3.1 Motivation
  3.2 Processing
  3.3 Results
  3.4 Reason to abandon

4 2nd Attempt – Modified Mean-Shift Method
  4.1 Motivation
  4.2 Processing
  4.3 Results
  4.4 Reason to abandon

5 3rd Attempt – Tag Based Tracking
  5.1 Motivation
  5.2 Design of Tag
  5.3 Processing
  5.4 Results

6 Conclusion and Future Work


List of Figures

1.1 Frame from previously acquired videos
1.2 Sketch of how the observation hive looks in theory

2.1 First attempt of improved lighting system
2.2 Second attempt of improved lighting system
2.3 Third attempt of improved lighting system
2.4 Acquired frame after first improvement
2.5 Acquired frame after second improvement
2.6 Acquired frame after third improvement

3.1 Comparison of histogram between bee and background
3.2 Masked images showing where we measure the template information
3.3 Images show how we measure the target's information
3.4 Pre-processed image shows dark regions that we are looking for
3.5 Images show how edge-based searching method works
3.6 Correlation result after one loop
3.7 Good result of this method shows reasonable detected bees
3.8 Good result of this method shows reasonable detected bees
3.9 Bad detections without any special reason

4.1 Manually assigned initial position
4.2 One frame of tracked bees in video2
4.3 One frame of tracked bees in video1
4.4 One frame shows losing track in video1

5.1 Some designs for the new tags
5.2 Old tags derived from video and one in theory
5.3 The new tag
5.4 Detected tag
5.5 Measuring grid on the tag
5.6 Result cropped from video
5.7 Mis-detected reflection in the process
5.8 New measuring suggestion


Chapter 1

Introduction

1.1 Background

A beehive is a suitable model for studying disease transmission in human society. The network of interactions in a beehive is presumed to be similar to the network of interactions in human society. Meanwhile, the network allows disease transmission to proceed in different ways, and some bees have more interactions with their peers than others. Researchers at the Department of Ecology, Swedish University of Agricultural Sciences, are developing methods to quantify certain types of interaction, deriving interaction networks encompassing all bees in a hive, and identifying the bees individually. There are a couple of previously recorded videos of bees in an observation hive; each individual bee was tagged with a unique identifier.

Before this project was started, researchers had to manually count and observe the different types of honey bee interactions. It took a very long time to manually analyze, for instance, video clips from an observed beehive, and the accuracy was not high. Thus, there is a strong need for automated analysis of these videos. This project aims to improve the acquisition method and develop an algorithm to detect and identify each tagged bee among the hundreds of bees in a beehive.

This project pursues a purpose similar to tracking multiple targets, but the algorithm will be implemented and developed in a more complicated and challenging environment: the system aims to track hundreds of bees and identify each individual at the same time.

There are various limitations and challenges in the experiment that directly affect the analysis, such as the frames per second (fps) of the web camera, the resolution of the camera, the distance between the camera and the observation hive, the illumination of the hive, the storage space on the computer, and the bees' active season. These are part of the reason why this project is challenging.

1.2 Previous work

The goal of previous work at SLU (Sveriges lantbruksuniversitet) was to determine whether artificial light has an impact on honey bees' (Apis mellifera) activities in an observation hive. The experiment concluded that "Strong peaks of bee's activity appeared when the white light was turned on and turned off. The observed activity was noted as 'nervous' behaviour" [1].

Some relevant work has been done by other researchers, which has inspired, more or less, how we formed our own framework for this project. For instance, single bees have been tracked before [2], which gives us ideas about tracking a single bee with an MCMC (Markov Chain Monte Carlo) method. A similar tracking method has been applied to ants [3]; this paper describes how researchers


manage to track multiple ants. Another paper describes a method to track flies [4], illustrating how researchers tried to track flies and analyze their behaviour. However, there are no published methods that track and identify all the bees in a beehive at the same time.

1.3 Motivation

I would like to mention what motivated us in forming this project, in order to improve on what other researchers have already done. Firstly, the image acquisition method and equipment did not perform as well as we expected visually; the difficulty can be seen in Figure 1.1. It is nearly impossible to apply any type of analysis method to the dark area. Secondly, the long-term goal for this project is to build a real-time system that detects honey bees in a beehive automatically, so a method with low computational cost is desirable. However, a low-cost method sometimes generates results with low accuracy, so keeping the balance between these two important factors becomes another issue. We illustrate later in this thesis how we address these issues.

1.4 Limitation

This project is currently a prototype that we are developing and studying. Therefore the accuracy is not yet as high as we would like, and the same is true for the computational performance. Meanwhile, the project aims to implement an algorithm that tracks each individual bee in a beehive over time. Although there are ways to improve the computational performance, we will only try to speed up the algorithm once the development is done; we will therefore do nothing more about reducing the computing cost at the current stage, although we keep this factor in mind during development. Modelling and analysing the interactions among tracked bees is not included in this thesis project.

1.5 Materials

The experiments were performed at the honey bee research facility, Bigården, at the Swedish University of Agricultural Sciences in Uppsala, Sweden. The observation hive consisted of a wooden frame (52.5 x 43.5 x 5.5 cm) with a plexiglas sheet on each side, see Figure 1.2 [5]. This project focused on only half of the whole frame: because this is currently prototype research, half the frame was enough. When the prototype is realized, the whole frame will be observed by two cameras at the same time.

The observed beehive was previously illuminated by four aHTI-760KCS1A1 IR illuminators. Each of these illuminators has 20 12 V IR LEDs with wavelength 850 nm (Vivotek, Taiwan). The LED light matched the camera's bandpass filter so that the camera records only the light from those LEDs, undisturbed by other light sources. There is also one white light, a PROX Light S30 Series (Smart Vision Lights, MI, USA), fitted with a 675 high-pass filter so that any light close to the 850 nm wavelength is cut. Therefore no other "noisy light" affects the illumination conditions in the video.

The camera is a Basler Scout scA1600-14gm, with a Fujinon HF16HA-1B lens and an 850 nm bandpass filter. The camera is placed in front of one plexiglas sheet, at a distance of about 80 cm. It is set up to record at 14 fps, with a resolution of 1628 x 1236 pixels per frame. The camera is controlled by Basler's Pylon driver, and video is recorded off-line by Virtual VCR, version 2.6.9.


Figure 1.1: Frame from previously acquired videos


Figure 1.2: Sketch of how the observation hive looks in theory


Chapter 2

Image Acquisition

2.1 Introduction

Image acquisition plays an important role in image analysis: if we do not have good images to start with, it is not possible to generate excellent results in the analysis. Although image analysis may seem powerful on any type of image of any quality, this is not true. It is still necessary to have the best possible source image to start with if we are aiming for the best results.

There are a couple of factors that we can improve in order to acquire a better source image. One factor is the distance between the camera and the observation hive, which is currently 80 cm. This was established by Lecocq [5]: it is the minimum distance at which this scientific-grade CCD camera can observe the whole hive. It is certainly possible to buy a wider-angle lens, but a lens with a different angle changes nothing except the distance to the hive, and the image would look the same as with the normal lens, so we left this factor as it was. Another factor is the resolution of the image: a higher resolution would make it easier to analyse the image in different ways. The third factor is the fps: a higher fps means that we can capture more details of the bees' movement, which makes the tracking algorithm easier to realize, since the physical limitations are weakened by a higher fps. The fourth factor is the frame the bees walk on: if we can constrain the bees' walking area, we can easily see the tags, and it will be easier to analyse and process the acquired images. The fifth factor is the illumination condition: we want a uniformly distributed illumination over the whole hive so that the intensities of all tags are as equal as possible, which is much more convenient for later processing.

The last factor, the illumination condition, is especially important in this case, because we are going to identify each tag based on the gray-value intensity of each sub-region of the tag. If the illumination is not evenly distributed, we will have to process different regions of one frame separately; it would be a waste to stick to such a low-quality source. This factor is also worth improving because it does not require spending much money, and it plays the most important role in the project. Hence, we plan to improve this factor as much as possible.

2.2 Lighting system

Figure 2.1 shows what the first attempt at an improved lighting system looks like. The idea behind this new lighting system is to diffuse the light beam so that we get an evenly distributed illumination. The frame acquired with this lighting system is shown in Figure 2.4. The illumination is distributed evenly, but the intensity is too weak, so the image looks really dark; Figure 1.1 shows the previous illumination condition for comparison. We then decided to enclose the diffused light in the lighting system, since the reflectors might diffuse too much light in unrelated directions in the open space.

Figure 2.1: First attempt of improved lighting system

In Figure 2.2, the second attempt at the lighting system is shown from the side. It does not have a fancy appearance, but the result is improved. Based on the insight gained from analysing the first attempt and its result, the second lighting box is an almost closed space around the lamps. We added three more cardboard sheets between the lighting system and the observation hive, forming a tunnel when seen from above, with the light source at one end and the target at the other. The light will in principle not escape from this closed space, so the light intensity on the hive will theoretically be stronger than in the first attempt. Figure 2.5 shows a frame acquired with the second lighting system. The intensity looks stronger than in the first attempt, but there is a dark area on one side. We want to prevent the light's reflection on the cover glass from affecting the detection results. However, the result was still not acceptable after we ran the algorithm on this video clip.

We finally decided to buy four more lamps of the same type as the ones we already had, and now use two equivalent smaller lighting boxes instead of the reflectors we tried previously. Figure 2.3 shows how we use these two lighting boxes in the scene. When we used one single light source in front of the beehive, we got either uniform illumination with weaker light intensity, or stronger light intensity with uneven illumination. We therefore considered splitting the light source into two parts placed at an angle in front of the beehive, so that they not only cancel each other's shadows but also compensate each other's intensity over the whole hive, like a simple version of an operating-room lamp. The angle of each lighting box is around 40 degrees to the hive. By putting the lighting boxes at this angle, we also prevent reflections from the cover glass reaching the camera. With this lighting system we have stronger light intensity, uniform illumination, and no reflection on the glass, and we can start to acquire video. Figure 2.6 shows the result, which was satisfactory: it meets both the light-intensity and the evenly-distributed-illumination requirements over the whole frame.


Figure 2.2: Second attempt of improved lighting system

Figure 2.3: Third attempt of improved lighting system


Figure 2.4: Acquired frame after first improvement


Figure 2.5: Acquired frame after second improvement


Figure 2.6: Acquired frame after third improvement


Chapter 3

1st Attempt – Histogram Matching

3.1 Motivation

The motivation for this attempt is based on a bee's special appearance. We considered that this special appearance should produce a histogram distinct from the hive's, i.e., the background's, histogram. There are different ways to represent an object, for instance using the center point, multiple points, a rectangular patch, a skeleton, the object contour, or the object silhouette/appearance; see [6] for a general overview of common methods used in tracking. In principle, the bee's histogram should be different from the histogram of the background and easily distinguished from it, see Figure 3.1. The correlation between these two histograms is 0.025, which means they are unrelated to each other. Therefore we decided to start with this histogram matching method.

Figure 3.1: Comparison of histogram between bee and background
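As a rough sketch of the comparison behind this decision, the following computes the Pearson correlation between two gray-value histograms (the measure for which the correlation 0.025 is reported above). The function and the bin counts are illustrative; they are neither the thesis code nor the measured data:

```python
def correlation(h1, h2):
    """Pearson correlation between two histograms: values near zero
    mean the histograms are unrelated, values near one mean they match."""
    n = len(h1)
    m1 = sum(h1) / n
    m2 = sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = (sum((a - m1) ** 2 for a in h1)
           * sum((b - m2) ** 2 for b in h2)) ** 0.5
    return num / den if den else 0.0

# Illustrative 8-bin gray-value histograms for a "bee" and the "background";
# comparing a histogram with itself gives 1.0 by construction.
bee = [120, 300, 80, 40, 30, 25, 20, 10]
background = [10, 20, 40, 60, 200, 220, 90, 30]
print(correlation(bee, bee), correlation(bee, background))
```

A higher correlation means the measured region looks more like the template, which is the matching signal used throughout this chapter.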

3.2 Processing

Introduction

Before we explain how this method works in detail, we would like to explain some concepts and problems related to this histogram matching method. The first concept is histogram matching, which means measuring the histogram information of the target image and comparing it to the template's histogram information. Based on the correlation between the target's and the template's histograms, we can determine how well the target matches the template: the better the match, the higher the correlation value. The second concept is kernel tracking, which means tracking an object by computing the motion of the kernel in consecutive frames; an object's appearance, shape or other features can be referred to as a kernel [6]. There are two important notions: "target" and "template". "Template" means we measure the object's information first and store it as the criterion for later comparison. "Target" means that before running the matching method, we need some information (target information) from the search region to compare with the criterion (template information). With these concepts in place, we need to solve several problems for this method to work. The first is how to measure and derive the target's and template's histogram information: although all bees have a similar shape, they have different orientations, which makes measuring the target histogram information difficult. The second problem is whether we can enhance the difference between target and background, so that they differ dramatically in the histogram matching.

Figure 3.2: Masked images showing where we measure the template information

Solutions

We manually choose some templates from the whole frame and put a mask on each region, as shown in Figure 3.2. We then measure the histogram information and remove the black mask from the histogram. This is how we derive the histogram information for the template. We then applied the first target-searching method, which is center-based (or tag-based) rotating searching: we try to find the bee's thorax and then rotate the mask (6 degrees per step) on that sub-image over 360 degrees. This gives different histogram information at different orientations. Ideally, we can locate the bee's position and also its orientation.
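The center-based rotating search can be sketched as follows. This is our own illustrative reconstruction, not the thesis implementation: an elliptical mask stands in for the bee-shaped template mask, and `oriented_mask`, `masked_histogram`, and `best_orientation` are hypothetical names; only the 6-degree step over 360 degrees comes from the text.

```python
import math

def oriented_mask(shape, center, axes, angle_deg):
    """Binary elliptical mask rotated by angle_deg about center,
    standing in for the bee-shaped template mask."""
    h, w = shape
    cy, cx = center
    a, b = axes
    t = math.radians(angle_deg)
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            u = dx * math.cos(t) + dy * math.sin(t)    # along the ellipse
            v = -dx * math.sin(t) + dy * math.cos(t)   # across the ellipse
            if (u / a) ** 2 + (v / b) ** 2 <= 1.0:
                mask[y][x] = 1
    return mask

def masked_histogram(img, mask, bins=16):
    """Gray-value histogram over the unmasked (mask == 1) pixels only,
    mirroring how the black mask is removed from the measurement."""
    hist = [0] * bins
    for img_row, mask_row in zip(img, mask):
        for v, m in zip(img_row, mask_row):
            if m:
                hist[min(v * bins // 256, bins - 1)] += 1
    return hist

def best_orientation(img, center, template_hist, similarity, axes=(8, 4)):
    """Rotate the mask 6 degrees per step over 360 degrees around the
    detected center and keep the angle whose masked histogram best
    matches the template histogram."""
    shape = (len(img), len(img[0]))
    return max(range(0, 360, 6),
               key=lambda angle: similarity(
                   masked_histogram(img, oriented_mask(shape, center, axes, angle)),
                   template_hist))
```

In the thesis the similarity is the histogram correlation; any histogram-similarity function can be plugged in for `similarity`.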


Figure 3.3: Images show how we measure the target’s information

We did not obtain good results with this method, so we considered changing how we search for the target. The center-based searching method has the drawback that the region around the center does not change at all, which means the correlation is high on average. Another rotating searching method was therefore applied: edge-based, or head/tail-based, rotating searching. Figure 3.4 shows how we pre-process and detect dark regions for later use; the reason we detect the dark regions is that they are a good feature for locating a bee's position, and the red dots in Figure 3.4 are the detected dark regions. Figure 3.5 shows how the edge-based searching method works: the rotating center is located at either the detected head point or the detected tail point. The image on the right-hand side shows the masked image; the detector measures only the non-black area for the histogram, as the target histogram information. Based on the correlation information in Figure 3.6, it is easy to see that the matched area has a higher correlation value than the unmatched ones.

In order to generate results as good as we can, we made some other improvements, such as increasing the number of template models and taking the average of those models' histograms, and a method similar to the eigenface method [7]: we generated an "eigenbee" from hundreds of bees and measured the histogram information of the eigenbee as the template information. However, it turned out that the results were not as good as we expected, because there were too many mis-detections when using the eigenbee's histogram as the template.
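The template-averaging improvement mentioned above can be sketched as a bin-wise mean over several measured template histograms (the function name is ours):

```python
def average_histograms(hists):
    """Bin-wise average of several template histograms, so that one
    averaged template stands in for many template models."""
    n = len(hists)
    return [sum(h[i] for h in hists) / n for i in range(len(hists[0]))]

# Two toy 3-bin templates averaged into one
avg = average_histograms([[0, 2, 4], [2, 0, 4]])
```

The averaged histogram is then used as the template criterion in the same matching loop as before.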

3.3 Results

In this part, a couple of images demonstrating the results of the edge-based histogram matching method are shown. The method can detect some simple regions, for instance a single separate bee, as seen in Figure 3.7 and Figure 3.8. However, it runs into problems when the bees sit closely together.


Figure 3.4: Pre-processed image shows dark regions that we are looking for

Figure 3.5: Images show how edge-based searching method works


Figure 3.6: Correlation result after one loop

Figure 3.7: Good result of this method shows reasonable detected bees


Figure 3.8: Good result of this method shows reasonable detected bees

3.4 Reason to abandon

The histogram matching method is simple and easily implemented, but it takes a long time (around 178 seconds) to process one single frame. The accuracy is also relatively low, as can be seen in Figure 3.7 and Figure 3.8. Another problem is shown in Figure 3.9: it happens without any apparent reason on a random frame that we picked for testing. This means the method is unreliable, with low accuracy and high computational cost. We therefore decided to abandon this method and try another approach.

Figure 3.9: Bad detections without any special reason


Chapter 4

2nd Attempt – Modified Mean-Shift Method

4.1 Motivation

The mean-shift method is a non-parametric feature-space analysis technique [8], a so-called mode-seeking algorithm. It is a well-known method that researchers have developed to track a moving object. The mean-shift method is iterative: it searches in a certain region, calculates the "similarity" between the target and the model, and stops when it reaches the stop criterion. The modification in our method is that we set a fixed number of iterations and a constrained search area. There is some similarity to the appearance-adaptive model (AAM) method, whose advantage is that it can update the appearance of the tracked object so as to reduce the error; see [9] and [10] for how researchers track facial animation and moving objects at different scales. We therefore decided to try this modified mean-shift method, combined with what we had already done with the histogram matching method. We update the template histogram information every 2 frames, which is the part similar to the AAM method.

4.2 Processing

In this part, we describe how the method works step by step. The method is based on the assumption that we have already found the position of each bee; for testing purposes, positions were manually assigned on several moving and non-moving bees. The reason for this assumption is that mean shift needs to be initialized in the first step. The method then searches iteratively within the search region until it meets the stop criterion. The newly designed tags (see the next chapter for more details) can also provide the initial location information for this method.

There is a useful constraint based on a bee's physical speed limit and the recording frame rate: bees cannot move more than a certain number of pixels from one frame to the next. This allows us to limit the search region, reduce the number of iterations, and thereby reduce the computational time.
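This bound amounts to a one-line computation. The sketch below illustrates it; the walking speed, frame rate and image scale are illustrative placeholders, not the thesis's calibrated values.

```python
# Bound the per-frame search radius from a bee's walking speed.
def max_displacement_px(speed_mm_per_s=20.0, fps=15.0, px_per_mm=10.0):
    """Largest distance (in pixels) a bee can move between two frames.
    All three defaults are illustrative placeholders."""
    return speed_mm_per_s / fps * px_per_mm

radius = max_displacement_px()  # 20/15 * 10 ≈ 13.3 px search radius
```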

The modified mean shift works as follows:

1) Shift the mask center within the constrained area to find the potential target.
2) Start the center-based rotating method to find the orientation of the bee's head.
3) Measure the histogram information of the masked image, as in Figure 3.5, and update the template histogram information for the next frame.
4) Measure the masked image and compare with the template information.
5) Check the stop criterion; if it is met, stop and search for the next bee, otherwise go back to 3) and repeat until the criterion is met.


Figure 4.1: Manually assigned initial position

The stop criterion squares the correlation value and checks whether the target's value exceeds 0.65; squaring makes the difference between good and bad matches more pronounced. It then compares the image difference and tries to find a minimum between 11 and 20 in gray-value intensity, after subtracting each image's own mean value.

In principle, we expected that this method, by using the previously tracked object's histogram information, would produce a more accurate result than the average model's or eigenbee's information does. However, the results turned out differently than we initially expected.

4.3 Results

Figure 4.1 shows the first step after we have assigned the initial positions. Figure 4.2 shows a better result obtained with this modified mean-shift method: the red circles represent moving bees and the blue ones show non-moving bees. In this run the moving bee was tracked for around 150 frames. Figure 4.3 shows another run of this method; it could still track moving bees for around 130 frames, but lost the target when the moving bee walked into a crowded cluster of bees, see Figure 4.4. The green figure shows the bee's orientation in radians. The yellow figure shows the maximum correlation value at the best matched position.


Figure 4.2: One frame of tracked bees in video 2

Figure 4.3: One frame of tracked bees in video 1


Figure 4.4: One frame showing loss of track in video 1

4.4 Reason to abandon

Although this method could track moving bees over time from manually assigned initial positions, the results were not as good as we expected: the accuracy is low, and the tracker easily loses its target when bees walk into a complicated or crowded situation. Since we need better accuracy and lower computational cost, we decided to implement the tag-based tracking method in the next step.


Chapter 5

3rd Attempt – Tag Based Tracking

5.1 Motivation

We tried several different methods previously, and none of them performed well enough. We want a reliable and computationally fast way to track and identify each single bee over time. We therefore decided to replace the old tags with a newly designed tag. The new tag provides not only the ID information, but also the bee's position (i.e. the object's center point) and the bee's orientation. These three pieces of information play a vital role in tracking a moving object.

5.2 Design of Tag

Because coding with only black and white does not give enough combinations for our case, we decided to add one more color when encoding the tags. We then investigated which gray-value intensities to use for encoding. We visually tested how different gray-value intensities look in our camera without infrared light, and then how they look under infrared light. Thereafter, we decided to use total black (0 out of 255) as the digit zero, gray value 65 out of 255 as the digit one, and 130 out of 255 as the digit two. We also tried several different designs, see Figure 5.1, and tested each of them on dead bees that we collected from SLU. Finally, we chose the tag depicted on the left in that figure as the final design for the rest of the project.

Figure 5.1: Some designs for the new tags

In Figure 5.2, we can see what the old tags look like in the video. They are not in a good lighting condition

Figure 5.2: Old tags derived from video and one in theory


Figure 5.3: The new tag

and there are several different colors for the tags, so they look very different under infrared light, making it very hard to recognize the digits on each tag automatically. The newly designed tags overcome these drawbacks of the old tags and help us realize our goal.

Here we introduce some parameters of the newly designed tag. The tag is a 3 x 3 mm square. The gray values we finally print are 26% and 51%, combined with pure white and black. These intensities are calibrated to the illumination wavelength and the printer, and should be re-calibrated when either changes. The new tag is shown in Figure 5.3; see [11] for more details.
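With three printed gray levels, decoding one code field reduces to nearest-level classification. A minimal sketch, using the encoding values 0, 65 and 130 named above (the function is ours, and a real reader would first correct the measured intensity for illumination):

```python
# Printed gray value -> code digit, as described for the new tag design.
LEVELS = {0: 0, 65: 1, 130: 2}

def intensity_to_digit(measured):
    """Return the digit whose printed gray level is nearest to `measured`."""
    return min(LEVELS.items(), key=lambda kv: abs(kv[0] - measured))[1]
```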

5.3 Processing

In this part, we describe how the tag reader algorithm works. Because we aim to run the algorithm in real time, we apply a Laplace filter combined with a minima filter to find local maxima in the original image, and thereby the tag's position. The current detector uses a Laplace filter of size 2 and a minima filter of size 3; both filters are rectangular. The red dot in Figure 5.4 is the detected tag position, which corresponds to the position of the bee's thorax. The red circle is a measuring circle that finds the direction of the thin white bar on the tag by searching for the highest intensity along the circle. The tag is placed on the bee such that the white bar points in the direction of the bee's head.
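The detection step can be sketched with SciPy's ndimage filters. This is an illustrative sketch only: the filter sizes and threshold below are not the calibrated values of the thesis, and we phrase the search as finding peaks of the negated Laplace response rather than the exact minima-filter formulation used in the project.

```python
import numpy as np
from scipy import ndimage

def detect_tag_candidates(img, size=3, threshold=5.0):
    """Find candidate tag positions: a Laplace filter highlights small
    bright blobs, and a local-extremum filter keeps only the peaks of the
    response that exceed a threshold."""
    response = -ndimage.laplace(img.astype(float))        # bright blobs -> positive peaks
    local_max = ndimage.maximum_filter(response, size=size)
    peaks = (response == local_max) & (response > threshold)
    return np.argwhere(peaks)                             # (row, col) candidates
```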

Figure 5.4: Detected tag

The second step is to distinguish whether the detected object is a tag or not. We have already detected the potential thin white bar on the tag pointing towards the bee's head. We then draw a line from the center to the circle's


Figure 5.5: Measuring grid on the tag

position where the intensity is highest; this is the line shown in Figure 5.4. The intensity along this line should be high enough, and the difference between the center point and the end point should not be too large. Under this condition, some of the shiny reflections on the bees' wings are recognized as false detections, and most of them can be removed at this stage. However, some large reflections on the bees' wings and at the edge of the honeycomb cannot be removed.

After we find a tag and select it as an expected tag in the scene, the next step is to identify each tag using a measurement grid, see Figure 5.5. At this stage, we calculate the average intensity along the white lines by convolving a 5x5 Gaussian kernel (sigma 0.33) with each pixel on the lines. We then apply the same method to the red lines. The reason is to compensate for and correct the result from the horizontal white lines, because the horizontal lines can inherit the error made in detecting the thin white bar. We also widen the red lines slightly towards the tail's direction: because there is a big white square in the middle of the tag, we do not want to cover that area when calculating the average intensity of the red lines. Thereafter, we collect the useful data from the detected area, which means removing some data at the edge and center of the tag: the marginal data at the edge also contains background information, and the central data is useless at this stage for comparing intensities among different areas. Decoding the tag is based on the information selected by the tag detector.

Based on the algorithm implemented so far, we developed a local detector for tracking purposes; for convenience, we call the previous detector the global detector. The advantage is that the local detector saves around 50% of the computational time compared to the global detector. The two detectors work alternately. First, we run the global detector to detect all the information we need and transfer it to the local detector, which then carries on tracking the bees over time. After a certain period, for example 100 frames, the global detector is run again to correct the errors the local detector made. For instance, the local detector might lose track of some moving bees; after 100 frames, the global detector picks up those untracked bees and the local detector starts tracking them again.
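The alternation between the two detectors can be sketched as a simple driver loop; `global_detect` and `local_track` are hypothetical stand-ins for the two detectors described above, and the period of 100 frames follows the example in the text.

```python
def process_stream(frames, global_detect, local_track, period=100):
    """Run the expensive global detector every `period` frames and the
    cheap local tracker in between, yielding the current tracks per frame."""
    tracks = None
    for i, frame in enumerate(frames):
        if i % period == 0:
            tracks = global_detect(frame)          # full detection + ID, slow
        else:
            tracks = local_track(frame, tracks)    # position update only, fast
        yield tracks
```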

There are some differences between the global and the local detector. The first is the size of the structuring element (SE): because the two detectors process images of different sizes (we enlarge the original tag image 5 times), the SE must differ. The filter sizes are 1 pixel and 7 pixels for the minima filter and the Laplace filter, respectively. Secondly, the local detector does not identify the tags, which saves much time.

5.4 Results

In this part, we illustrate some results. Figure 5.6 shows the detected tags together with the identification results. The algorithm can correctly detect 89% of around 300 tags over hundreds of frames on average, but there are still around 11% misdetections, most of which happen due to reflections from the wings, see Figure 5.7 [11]. We estimate the error as follows: we record the detected results, then manually check the errors over frames and take their average. The method generally works as we expected and achieved the first goal of the project, but some parts can still be improved. For instance, the measuring method for identifying tags can be sped up. The MATLAB built-in command "improfile" could be a good alternative to the one I implemented myself, because the current method probably uses too many control points: there are 20 points in each horizontal line, 80 control points in total, whereas with "improfile" only 4 control points in total would suffice. Reducing the number of control points would make obtaining the tag's ID much faster, see Figure 5.8. The local-minima detection method can also be improved by applying other methods, and the threshold for finding tag positions could be made adaptive for better detection.
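The suggested sparser sampling could look like the following sketch, a minimal nearest-pixel analogue of MATLAB's `improfile` (the function name, default of 4 control points, and rounding scheme are ours):

```python
import numpy as np

def sample_line(img, p0, p1, n=4):
    """Sample `n` evenly spaced control points along the line from p0 to p1
    (row, col), rounded to the nearest pixel."""
    rows = np.linspace(p0[0], p1[0], n).round().astype(int)
    cols = np.linspace(p0[1], p1[1], n).round().astype(int)
    return img[rows, cols]
```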

Figure 5.6: Result cropped from video

Figure 5.7: Misdetected reflection in the process


Figure 5.8: New measuring suggestion


Chapter 6

Conclusion and Future Work

This project aimed to track around 1000 bees; in the end, however, only around 300 bees appear in the result. We had actually prepared about 1000 bees, but they all died, including the queen, because of a mistake I made; I learned that bees are sensitive to isopropanol. What happened was this: we wanted to constrain the bees' walking area so that they would walk on the honeycomb rather than on the edge of the frame or on the plexiglass. We were advised to spray fluon dissolved in isopropanol onto the plexiglass and the edge of the frame, but we did not let the fluon dry properly, and the bees died gradually overnight. I then tagged around 300 bees, since we did not want to miss the bee season; otherwise we would have missed the main interactions that bees have when they are active enough. The project needs to be continued, and the video data plays an important role in the development stage; we recorded around 10 TB of video under different conditions over several days for development purposes.

It is not an easy task to track hundreds of bees simultaneously over time, and many problems need to be solved properly. We developed several important components in this project: a modified frame to hold the bees, which constrains their walking area; and an illumination and camera system that records bees in darkness and produces evenly distributed illumination on the tags, which makes the later processing easier and more accurate. We also developed a new tag, different from the ones bee researchers usually use. The new tags have many advantages over the standard ones: the surface is flat and not shiny, so there are no specular reflections; the bar code is easier for a computer to read than Arabic numerals; and the capacity is much larger, with 38 unique combinations. The new tags also have features the old tags lack entirely, the most important being orientation information. Based on the customized tag, we can detect the bee's orientation together with its identity and position. This is much more convenient than the standard tags and helps the later processing too: thanks to the customized tag, we do not need to spend time segmenting or identifying the actual bee, but can process only its tiny tag.

We have also developed the algorithm to detect, identify and track these tags on the honey bees. Although some parts still need to be optimized, the algorithm works and generates useful data as expected. Many parts of the project remain unfinished and can be completed gradually. I expect the next step to be developing an algorithm to detect bees' interactions and to classify the different types of interactions. Once all the components are in place, graphics processing unit (GPU) technology can be employed in some core parts [12] [13], so that a faster program can be used for a real-time, fully automatic system. Another possibility for future work is to place color reference bars on the frame in the beehive, so that we can compare the intensities of these bars to check whether the illumination is correct: if it is, the color bars should have the same intensities in the images; otherwise, we can adjust the lighting system after moving the set-up. These identical color bars have another advantage: the algorithm can use them as references for the colors used in encoding the tags, making tag identification adaptive and therefore more robust.
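The proposed illumination check could be sketched as a least-squares gain between the measured and printed bar intensities; a gain near 1.0 would indicate the lighting still matches the calibration. This formulation is our assumption, not an implemented part of the project.

```python
import numpy as np

def illumination_gain(measured_bars, reference_bars):
    """Least-squares gain g minimizing ||g*measured - reference||^2.
    A value near 1.0 means the illumination matches the calibration."""
    m = np.asarray(measured_bars, float)
    r = np.asarray(reference_bars, float)
    return float(m @ r / (m @ m))
```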


Bibliography

[1] A. Lecocq, C. L. Luengo Hendriks, B. Locke, and O. Terenius, “Increased artificial light intensity temporarily increases honey bee (Apis mellifera) activity in an observation hive,” submitted for publication, 2011.

[2] S. Oh, J. Rehg, T. Balch, and F. Dellaert, “Data-driven MCMC for learning and inference in switching linear dynamic systems,” in Proceedings of the National Conference on Artificial Intelligence, vol. 20, p. 944, AAAI Press / MIT Press, 2005.

[3] Z. Khan, T. Balch, and F. Dellaert, “MCMC-based particle filtering for tracking a variable number of interacting targets,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 11, pp. 1805–1819, 2005.

[4] K. Branson, A. Robie, J. Bender, P. Perona, and M. Dickinson, “High-throughput ethomics in large groups of Drosophila,” Nature Methods, vol. 6, no. 6, pp. 451–457, 2009.

[5] A. Lecocq, “The development of tool for network analysis in the honey bee (Apis mellifera L.),” Master’s thesis, Swedish University of Agricultural Sciences, Faculty of Natural Resources and Agricultural Sciences, 2011.

[6] A. Yilmaz, O. Javed, and M. Shah, “Object tracking: A survey,” ACM Computing Surveys (CSUR), vol. 38, no. 4, p. 13, 2006.

[7] https://en.wikipedia.org/wiki/Eigenface, 2012.

[8] https://secure.wikimedia.org/wikipedia/en/wiki/Mean-shift, 2012.

[9] F. Davoine and F. Dornaika, “Head and facial animation tracking using appearance-adaptive models and particle filters,” Real-Time Vision for Human-Computer Interaction, pp. 121–140, 2005.

[10] S. Zhou, R. Chellappa, and B. Moghaddam, “Visual tracking and recognition using appearance-adaptive models in particle filters,” IEEE Transactions on Image Processing, vol. 13, no. 11, pp. 1491–1506, 2004.

[11] C. L. Luengo Hendriks, Z. Q. Yu, A. Lecocq, T. Bakker, B. Locke, and O. Terenius, “Identifying all individuals in a honeybee hive: progress towards mapping all social interactions,” Proceedings of the Visual Observation and Analysis of Animal and Insect Behavior 2012 workshop (Tsukuba, Japan, in conjunction with ICPR), November 11, 2012.

[12] https://gpgpu.org/tag/matlab, 2012.

[13] J. Kong, M. Dimitrov, Y. Yang, J. Liyanage, L. Cao, J. Staples, M. Mantor, and H. Zhou, “Accelerating MATLAB image processing toolbox functions on GPUs,” in Proceedings of the 3rd Workshop on General-Purpose Computation on Graphics Processing Units, pp. 75–85, ACM, 2010.
