
Takuro Yonezawa · Hiroshi Sakakibara · Jin Nakazawa · Kazunori Takashio · Hideyuki Tokuda

Towards a better understanding of association between sensor nodes and daily objects

Received: date / Accepted: date

Abstract This paper introduces uAssociator, an image-based association tool for realizing sensor-attachment type smart object services that enables end-users to associate everyday objects with tiny sensor nodes easily. By attaching sensor nodes to everyday objects, users can augment the objects digitally and bring them into various services intuitively. When using such smart object services, a semantic connection between sensor nodes and objects must be made before the services can operate properly. At home, however, professional assistance with such installation may be either unavailable or too costly. This paper explores the design choices for realizing an easy association method, describing the trade-offs inherent in each choice. In addition, we present a spotlight-and-camera based association tool which can reduce association costs significantly.

Keywords Association · Interaction · Smart Object · Deployment · Application Model

1 Introduction

Our lives are filled with everyday objects, and we often have trouble with them (e.g., lost property). To achieve the pervasive computing environment, it is important to bring everyday objects into pervasive services. Sensor nodes, when attached to everyday objects, enable us to gather real-world information as context. Recently, many researchers have been focusing on services built around these smart objects [5, 11]. With smart objects, users would be able to enjoy the privileges of pervasive technology anytime and anywhere in their lives.

We consider that smart objects can be classified into two types: the sensor-builtin type and the sensor-attachment type. The difference between the two lies in their origins. While builtin-type smart objects are fully configured at the time of shipment, attachment-type smart objects are configured

Takuro Yonezawa
Keio University, Delta S213, Endou 5322, Fujisawa, Kanagawa, Japan
Tel.: +081-466-47-0836
Fax: +081-466-47-0835
E-mail: [email protected]

by users (i.e., users attach sensor nodes to their belongings). Each type of smart object has both advantages and disadvantages. For example, builtin-type smart objects require no complex configuration from users; once users buy these smart objects, they can leverage smart object services instantly. In addition, the builtin type looks good. However, users can buy and use pre-configured products only. In contrast, attachment-type smart objects give users freedom: users can use the many ordinary (i.e., not smart) belongings that already exist in daily life. However, users must perform software configuration to adapt those objects to various applications. In addition, objects may look bad with sensor nodes attached, because sensor nodes are still large with present MEMS technology. We consider, however, that the problem of bad looks will be solved in the future, because sensor nodes will become tiny as technology advances. Therefore, if the configuration cost can be reduced, attachment-type smart objects will play an important role in realizing the ubiquitous computing environment.

The goal of our research is to reduce this configuration cost in order to realize attachment-type smart object services. Successful installation of attachment-type smart object services requires a three-step process: 1) attaching sensor nodes to objects, 2) making semantic associations between the sensor nodes and the objects, and 3) configuring each application to a preferred setting. Of these, this paper focuses on reducing association costs. We assume that sensor nodes have limited computation power, only enough to transport sensor data, in order to keep their cost down, so that application software is implemented on high-performance machines such as desktop or laptop PCs. From this point of view, applications that provide smart object services need to know what object each sensor node is monitoring. This, in turn, requires association, or making a semantic relationship between a sensor node ID and its object information.

This paper explores the design choices for realizing an intuitive association method, describing the trade-offs inherent in each choice (in section 2). After that, we propose our scheme called uAssociator, a spotlight-and-camera based association tool (in section 3). Users can achieve the association task by the following process: obtaining the sensor node ID and the object image simultaneously using a digital camera, and entering optional information through a graphical user interface. Our association scheme can be used in various smart object services to monitor, notify the status of, or generate cooperative functions among smart objects. In addition, we discuss the pros and cons of our approach in comparison with related work (in section 4). Finally, we conclude this paper and show future work (in section 5).

2 Design space for association

Before discussing the design space for association, the application domain that we target should be made clear. As most common context-aware applications are described as a collection of rule-based conditions, the applications we target adopt if-then rules for providing smart object services. The difference between common context-aware applications and applications on attachment-type smart objects is that users can choose any domestic object as the target of an application. Simple example scenarios are "if a secret diary which mounts a sensor node is removed from the drawer, alert by sound" or "if a toothbrush which mounts a sensor node has not moved after a meal, tell the child to brush his teeth." The major requirement in these scenarios is a Do-It-Yourself (DIY) style of service usage; non-expert users must be able to register their preferred belongings with preferred services. This requirement can be divided into the following three, according to the operations needed for a registration.

– Coping with a variety of items: The user needs to attach sensor nodes to his/her belongings to use them in a service. The sensor node must be small enough to be attached to a wide range of items. In addition, it must have features suitable for daily use (e.g., a waterproof sensor cover). These are physical requirements on the sensor node itself, on which this paper does not focus.

– Easy association: The user needs to tell the system which object each sensor node is attached to. To do so, the user first needs to specify the sensor node that he or she wants to associate with an object. The user then needs to specify the object. These specifications can be done by different methods, each of which entails pros and cons that affect the system's intuitiveness and ease of use.

– Reusability of smart objects: To leverage smart objects in various services, the user needs to load the association information into the services. While the simple scenario above involves only one service and one object, there may be multiple different services operating simultaneously in a home. Therefore, the system needs to enable the user to use a smart object in those different services.

Based on our experience in creating a smart object services framework, the most important considerations for achieving easy sensor node-object association can be captured by two dimensions. A point along each of these dimensions embodies its own trade-offs. This section explores these dimensions, and the pros and cons associated with each.

2.1 Sensor node specification

One key dimension concerns how a sensor node, which a user wants to associate with an object, is specified. More precisely, it concerns how a user specifies the sensor node's ID to the system, since we assume that each sensor node has a unique identifier. The first approach is manual input from a keyboard. This approach assumes that users can somehow acquire the sensor node's ID. Tiny sensor nodes have no space for attaching a label or bar-code, let alone a display on which their IDs can be shown. Consequently, users are forced to rely on professional identification tools, or to simply estimate the ID based on the sensor data packets sent by the node to the network. Either way, the procedure could be highly inhibiting to end-users. The second approach is to mount a special chip for identifying sensor nodes. For example, if sensor nodes have IrDA, Bluetooth, or Near Field Communication (e.g., RFID) chips, users can obtain a sensor node's ID by using a special communication device that has the same chip. However, it is impractical to attach these chips to every kind of sensor node. In addition, mounting these chips would only increase the cost.

Another approach for identifying a sensor node is to use the signal strength of sensor nodes. However, there is a disadvantage to applying this method to association between sensor nodes and objects: a lack of general versatility. Proximity interaction [6] is an example of using signal strength. In that study, a series of experiments was conducted using Motes [1]: proximity was monitored based on radio frequency. For that purpose, each sensor node had to use a different frequency to avoid radio interference. Since sensor nodes of the same type emit the same radio frequency, we cannot use more than one sensor node of the same type with this method. This could be a major obstacle when many different sensor nodes co-exist in an environment.

The final approach is to characterize the sensor data transmitted by the node that the user wants to associate with an object. The system detects the characteristic among the data received from the sensor nodes in a network, and determines the node that the user wants to associate. For example, suppose sensor nodes n0, n1, and n2 each contain a light sensor, and a user wants to associate n0 with a mug. This approach lets the user do so by flashing a light on the node: the light sensor data from n0 quickly changes while that from the other nodes does not. The system detects this change and thereby learns that n0 is the sensor node that the user wants to associate with an object. If the nodes have other sensors, such as accelerometers, thermometers, or microphones, the characterization can be done by shaking, heating, or making sounds over the nodes. The advantage of this approach is that users can specify the sensor node as precisely as with manual input, and more easily.
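This data-characterization idea can be sketched as follows. The data layout, node IDs, and the jump threshold are illustrative assumptions, not part of the paper: the system simply looks for the one node whose light reading jumps sharply while all others stay flat.

```python
# Sketch of sensor-node specification by data characterization.
# readings: dict mapping node ID -> list of recent light-sensor values.
# The flashed node shows a sharp jump; the others do not.

def find_flashed_node(readings, threshold=200):
    """Return the single node whose latest light value jumped by more
    than `threshold` over its previous value, or None if ambiguous."""
    candidates = []
    for node_id, values in readings.items():
        if len(values) >= 2 and values[-1] - values[-2] > threshold:
            candidates.append(node_id)
    # Exactly one node must show the jump, otherwise we cannot decide.
    return candidates[0] if len(candidates) == 1 else None

readings = {
    "n0": [12, 11, 13, 950],   # flashed: light value spikes
    "n1": [40, 42, 41, 43],
    "n2": [7, 8, 8, 9],
}
print(find_flashed_node(readings))  # -> n0
```

If two nodes happen to spike at once, the function refuses to decide, mirroring the precision claim made in the text.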

2.2 Object specification

How to express a physical object in the digital world, to be combined with a sensor node ID, is the other dimension. Various expression forms of the object are conceivable, such as the object's name or properties. Ideally, the expression form should fulfill the following two conditions: 1) it is naturally understandable to humans, and 2) it is useful for realizing various applications.

Similarly to sensor node specification, the first approach is manual input from a keyboard; a user inputs an object's name. The advantage of this approach is almost unlimited flexibility; users can specify objects however they like, such as "my favorite Winnie the Pooh mug." There are a number of approaches to automating this input. The first is to use graphical codes or RF codes printed on or pasted onto objects. This approach can distinguish all the objects on which a code is printed or pasted. Assuming the existence of a bar-code database that contains object meta-data, the system can associate a sensor node with such detailed information as an object's name, manufacturer, manufacture date, and so on, by reading the bar-code on the object. The system can provide users with a flexible smart object lookup capability using this information. However, users suffer from a major limitation with this approach: there are many objects that do not have any graphical code or RF code printed on them.

Unlike these approaches, which associate sensor nodes with text representing objects, the second approach is simply to associate the nodes with object images. Generally, human beings recognize an object through the visual organs and form an impression in the brain, then understand the object's name or properties by combining the impression with their past memories. Therefore, the use of images to indicate objects is practical and intuitive. Users enjoy the following two advantages with this approach. One is that they can specify a particular object out of similar objects, just as with manual text input. For example, even when a user has a number of mugs, he can specify one of them (e.g., the Winnie the Pooh mug) by shooting its image. The second is that images can provide visual information to applications, which enriches their usability. The major disadvantage of this approach is a degraded smart object lookup capability: using images as the association target prevents users from searching for smart objects by name. However, images are also useful for the last approach, which is to extract objects' names from their images.

Many researchers have made efforts to extract the names of physical objects automatically. These studies can be divided into two lines: text extraction techniques and image comparison techniques. Text extraction is useful when text information (e.g., names or properties) is already printed on the objects that need to be identified, because this type of information can be easily extracted with Optical Character Recognition (OCR). However, as with bar-codes or RF codes, we cannot always assume that all objects have text information printed on them. The latter type of technique, image comparison, has an advantage over the text extraction method in that we are not confined to pre-printed text information. For instance, a content-based image retrieval (CBIR) system [2] enables us to search for similar images by comparing the content of the queried image itself (e.g., color, texture, shape, color layout, and segmentation) with that of the images in a large database. However, a problem with this type of technique is that we need a heavy database with a large number of object images linked to their meta-information in order to identify all the objects existing in our homes. In addition, though these techniques can work well for a small number of objects under specific conditions, there is still no method applicable under generic conditions.

Another approach is pattern-based node specification, which automates the specification by analyzing sensor data transitions [8]. This approach relies on a dictionary of sensor data transition patterns, each of which represents a typical data transition when a sensor node is attached to a certain object. For example, supposing a sensor node is attached to a sliding door, the x-axis data from the accelerometer on the node would typically transition between plus and minus (or vice versa). The dictionary contains such a pattern, enabling the system to determine the object's name, such as "door," by comparing the data with the patterns in the dictionary. This approach, however, can cause a number of false detections due to the limitations of the pattern dictionary. Unlike the door, many everyday objects, such as key chains, bags, and cell phones, are used without specific patterns. The dictionary cannot contain these randomly-used objects. Therefore, many of those objects, even with sensor nodes attached, cannot be recognized with this approach.
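A minimal sketch of such a pattern dictionary, under the sliding-door example above. The dictionary contents and the sign-change encoding are illustrative assumptions; the cited work [8] defines its own patterns.

```python
# Hedged sketch of pattern-based object naming: reduce an accelerometer
# trace to its sequence of sign changes and look it up in a small
# dictionary of typical transition patterns.

def sign_pattern(trace):
    """Reduce a numeric trace to its collapsed sequence of signs."""
    signs = [1 if v > 0 else -1 for v in trace if v != 0]
    collapsed = [signs[0]]
    for s in signs[1:]:
        if s != collapsed[-1]:
            collapsed.append(s)
    return collapsed

PATTERN_DICTIONARY = {
    # A sliding door's x-axis acceleration alternates plus/minus.
    "door": [1, -1, 1, -1],
}

def guess_object(trace):
    p = sign_pattern(trace)
    for name, pattern in PATTERN_DICTIONARY.items():
        if p == pattern:
            return name
    return None  # randomly-used objects match no pattern

print(guess_object([0.5, 0.7, -0.6, 0.4, -0.5]))  # -> door
print(guess_object([0.1, 0.2, 0.1]))              # -> None
```

The second call illustrates the limitation noted in the text: traces without a characteristic pattern simply fail to match.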

Since each approach has both pros and cons, the ideal scheme for object expression would include all the approaches above. However, that would require a high cost. The key design consideration is that an association technique requires sensor specification and object specification to work cooperatively. From this point of view, we propose a spotlight-and-camera based association tool. The details of our approach are described in the following section.

3 uAssociator: a spotlight-and-camera based association tool

This section describes uAssociator, a spotlight-and-camera based association technique for sensor nodes and everyday objects.

3.1 Design

Our goal for uAssociator is to create an efficient design that combines the two dimensions: sensor specification and object specification.

– Sensor node specification by data characterization: uAssociator enables users to specify a sensor node by flashing light on the node. It does not require any pre-configured database or additional constraints on sensor nodes. This advantage increases the system's feasibility for home environments.

– Image-based object specification: uAssociator enables users to specify an object by taking the object's picture with a digital camera. With images, users can specify a particular object out of similar objects without making manual key input.

Fig. 1 uAssociator's interaction, called Spot&Snap

The novelty of uAssociator is that it enables users to complete these two specifications with one interaction. The interaction of uAssociator (we call it the Spot&Snap interaction) enables users to associate sensor nodes with everyday objects in their homes with the help of a digital camera that has a spotlight. Since uAssociator basically requires only a spotlight and a camera as hardware, Bluetooth-enabled digital cameras and cellular phones, most of which include a camera unit these days, can be used for this purpose. Figure 1 shows a Spot&Snap interaction using our second prototype camera. As the sensor nodes, we use the uParts [4] sensor network system, since uParts are small enough to be mounted on personal effects (their size is 10x10mm) and have wireless communication, enabling the setup of high-density networks at low cost and with a long lifetime. Each uPart mounts a light, temperature, and movement sensor.

After attaching a sensor node to an object, for a successful first sensor-to-object association, users only need to: 1) direct the camera at the node and the object, 2) flash the spotlight on both for a second, and 3) turn off the spotlight. uAssociator recognizes the targeted sensor node ID by comparing the spotlight on/off times with the sensor data, and associates it with the image of the object obtained from the camera. To this extent, uAssociator requires no expert knowledge or skills. Users only need to keep in mind that whatever object they flash the spotlight on will be associated with the sensor node mounted on it. When uAssociator associates a sensor node and an object's image, an interface for associating optional information appears (see Figure 2). Using this interface, users can input the object's name manually. In addition, users can select various tags which describe attributes of the object. This means that the current uAssociator does not implement an object recognition function. However, since uAssociator uses images of objects as the first association target, it can serve as a basis for applying object recognition techniques.

The sensor specification method is simple. The uAssociator system monitors both the times at which the user turns the spotlight on and off, and the sets of sensor nodes that meet the conditions described below. When a user turns on the spotlight at time T1, the system obtains a set of sensor nodes S1 whose light sensor values start indicating the maximum within a certain time N. Conversely, when the user turns off the spotlight at time T2, it obtains another set of sensor nodes S2 whose light sensor values stop indicating the maximum within time N. If the intersection S1 ∩ S2 contains exactly one element, the system determines that s ∈ S1 ∩ S2 is the sensor node on which the spotlight was flashed by the user (in the case of Figure 3, S1 is the sensor node flashed by the user). If the element count is two or more, the system determines that identification has failed. The allowed time N is provided to cope with the packet sending interval of sensor nodes and the network latency between the moment the user spotlights a sensor node and the moment the computer receives the packet from the node. The value of N is calculated as N = 2i, where i is the average arrival interval of the latest five packets.

Fig. 2 Associating additional information to a smart object

Fig. 3 Example of sensor node identification
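The identification rule can be sketched directly from the description above. The packet-log layout, the sensor's saturation value, and the event timestamps are assumptions made for illustration; only the S1/S2 intersection logic and N = 2i come from the text.

```python
# Sketch of uAssociator's sensor node identification rule.

def identify_node(packets, t_on, t_off, interval):
    """packets: list of (timestamp, node_id, light_value) tuples.
    t_on / t_off: times the user switched the spotlight on / off.
    interval: average arrival interval of the latest five packets."""
    MAX_LIGHT = 255          # assumed saturation value of the light sensor
    n = 2 * interval         # allowed time N = 2i, as in the text
    # S1: nodes whose light value reaches maximum within [t_on, t_on + N]
    s1 = {nid for (t, nid, v) in packets
          if t_on <= t <= t_on + n and v == MAX_LIGHT}
    # S2: nodes whose light value leaves maximum within [t_off, t_off + N]
    s2 = {nid for (t, nid, v) in packets
          if t_off <= t <= t_off + n and v < MAX_LIGHT}
    both = s1 & s2
    # Exactly one candidate means success; otherwise identification fails.
    return both.pop() if len(both) == 1 else None

packets = [
    (10.0, "s1", 255), (10.1, "s2", 80), (10.2, "s3", 90),
    (20.0, "s1", 12),  (20.1, "s2", 80), (20.2, "s3", 90),
]
print(identify_node(packets, t_on=10.0, t_off=20.0, interval=0.5))  # -> s1
```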

The use of a light sensor to identify the sensor node increases the usability of uAssociator. Originally, a spotlight is a device to attract the spectators' attention to the stage. In uAssociator, the spotlight enables users to illuminate the target object correctly and plays the role of visual feedback. Thus, the spotlight provides intuitive manipulation to users. Other sensors (e.g., accelerometers, thermometers) do not meet the requirement of efficient combination with the camera. For example, accelerometers could be used instead of the light sensor by forcing users to shake the object. Thermometers could be used by forcing them to heat the object with hot air from a drier. These actions can individualize the sensor nodes similarly to the use of light sensors and spotlights. Yet, they require a number of interactions to finish the association: taking an image, and then shaking the object or operating a drier for a while. uAssociator requires only one quick interaction: clicking the shutter button of a camera.

The association information is stored in the image obtained from the camera, which is saved to a file system. The image is JPEG-formatted and contains an EXIF header. Association information is stored in this header field as an XML document (see the example shown in Figure 4). We call this image file a smart object image file. The format of the smart object image is inspired by that of u-Photo [10], past work in our laboratory. This approach increases the scalability of uAssociator, since association information is embedded in JPEG files instead of being stored on a centralized server. In addition, it increases the reusability of smart objects. With uAssociator, users can use smart objects in one or more services by putting their smart object images onto a service's GUI by drag-and-drop. It is also possible for users to share a smart object image file by exchanging it (e.g., via e-mail).

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE associationinfo SYSTEM "SmartObjectImage.dtd">
<associationinfo>
  <sensorinfo>
    <type>uPart</type>
    <id>1.2.3.4.0.1.0.12</id>
  </sensorinfo>
  <objectinfo>
    <name>Keys</name>
    <tags>
      <element>Important</element>
      <element>Security</element>
      <element>Home</element>
    </tags>
  </objectinfo>
  <timestamp>1159523569317</timestamp>
  <owner>Takuro Yonezawa</owner>
</associationinfo>

Fig. 4 Sample meta-data using XML in a smart object image file
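Once the XML document of Figure 4 has been extracted from the JPEG's EXIF header, a service can read the association information with a standard XML parser. The element names below follow the figure with the extraction spacing removed; the exact tag names are an assumption.

```python
# Sketch of reading smart-object-image meta-data (Figure 4 layout).
import xml.etree.ElementTree as ET

META = """<associationinfo>
  <sensorinfo><type>uPart</type><id>1.2.3.4.0.1.0.12</id></sensorinfo>
  <objectinfo>
    <name>Keys</name>
    <tags>
      <element>Important</element>
      <element>Security</element>
      <element>Home</element>
    </tags>
  </objectinfo>
  <timestamp>1159523569317</timestamp>
  <owner>Takuro Yonezawa</owner>
</associationinfo>"""

root = ET.fromstring(META)
sensor_id = root.findtext("sensorinfo/id")            # which node
object_name = root.findtext("objectinfo/name")        # which object
tags = [e.text for e in root.findall("objectinfo/tags/element")]
print(sensor_id, object_name, tags)
# -> 1.2.3.4.0.1.0.12 Keys ['Important', 'Security', 'Home']
```

Because the meta-data travels inside the JPEG itself, any service handed the file can recover the association without contacting a central server, which is the scalability point made above.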

3.2 Application Model

Figure 5 shows the conceptual model of applications with smart object images. Each application has its own if-then rules and applies them to the target objects registered by users through the GUI (e.g., by opening or dragging-and-dropping smart object images onto the application). The processor module in the application evaluates the rules against sensor data from the registered objects. If the sensor data satisfies a rule, the actuation operator module invokes an actuation such as a sound, an e-mail, or changing the function of an information appliance. For example, a smart light application can work as follows: when the application judges that an object (e.g., a chair) has not moved for a certain time, it makes a light on the desk turn off automatically.
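The if-then model can be sketched with the smart-light example from the text. The rule name, idle threshold, and actuation are all illustrative assumptions.

```python
# Minimal sketch of the if-then application model: a processor step
# evaluates a rule over sensor data and, if it holds, invokes an
# actuation through the (stand-in) actuation operator.

def chair_idle_rule(last_movement_time, now, idle_limit=600):
    """Rule: the chair has not moved for idle_limit seconds."""
    return now - last_movement_time >= idle_limit

actions = []  # stand-in for the actuation operator module

# One processor step on incoming sensor data.
if chair_idle_rule(last_movement_time=1000.0, now=1700.0):
    actions.append("turn desk light off")

print(actions)  # -> ['turn desk light off']
```

A real processor would run this evaluation continuously as sensor data packets arrive from the registered smart objects.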

Fig. 5 Conceptual model of applications with smart object images

As another example service, let us describe a smart caregiver application. Smart caregiver helps the family or caretaker of an elderly person living alone to provide care remotely. To use this service, the family or caretaker first registers household items that the monitored elderly person frequently uses. The smart caregiver application then starts to learn how frequently these items are used by monitoring the movement sensor of each smart object. After a while, family members or caretakers should be able to tell, by the extent to which the usage of a certain object (e.g., a mug) deviates from the usual pattern, whether something unusual might have happened to the monitored person. A great advantage of this service is that, by attaching a sensor node, household items instantly start interfacing the physical activity of the monitored person with the digital service. Since different people may use a range of different items in their homes, this advantage is important for adapting the service to each person's life.
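One simple way to detect such deviations is a mean/standard-deviation threshold over learned daily usage counts. The paper does not specify the learning model, so the sketch below is purely an assumption of how "deviates from the usual pattern" might be computed.

```python
# Hedged sketch of the smart-caregiver deviation check: learn an
# object's usual daily movement count, then flag days that fall far
# outside the learned range.
from statistics import mean, stdev

def unusual(history, today, k=2.0):
    """history: daily movement counts recorded during learning.
    Returns True when today's count is more than k sample standard
    deviations away from the learned mean."""
    m, s = mean(history), stdev(history)
    return abs(today - m) > k * s

mug_history = [14, 12, 15, 13, 14, 16, 13]  # learned usage of a mug
print(unusual(mug_history, today=2))   # -> True (far below usual use)
print(unusual(mug_history, today=14))  # -> False
```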

4 Discussion

In this section, we discuss the contributions of this paper to the smart object research area. Since many smart objects in past research seem to support no association method, users cannot easily use their belongings in those services. We have shown an approach to association which combines two lines of technology: sensor specification and object specification. Combining these technologies into an easy association method plays an important role in smart object research; with an association technique, the various smart object services that many researchers have proposed can be extended to the attachment-type style.

Let us now discuss our approach, uAssociator, in more detail. The features of uAssociator are the following two: using the characterized value of a light sensor for sensor specification, and using the object image as the initial association target. While we used uParts as the sensor nodes, our approach can be applied to other types of sensor nodes that mount a light sensor. Therefore, uAssociator can potentially cooperate with various smart object infrastructures. However, light sensors are greatly influenced by environmental conditions, especially natural light. To enable uAssociator to recognize the influence of the spotlight under such circumstances, users need to shadow the sensor node, for example, with their hands or by closing the curtains. Alternatively, a sensor cover that reduces the influence of environmental light is also useful.

Let us describe related work. One of the early studies of user-side sensor node-object association is the Sensor Installation Kit [3], which is designed to assist users when they deploy an application named Home Energy Tutor in their homes. Installation of this application requires nonexpert users to accurately attach sensor nodes to appliances. The Kit guides users through this process, as it contains a set of predefined association information. While this guide greatly helps nonexpert users to install sensor nodes correctly, they cannot attach a sensor node to an appliance if it is not predefined in the Kit. Thus, it cannot be applied to association for hundreds of daily objects. However, when the place to attach a sensor node is important for the application, this type of predefined association method is necessary. Suppose an application requires sensor nodes to be attached to the bottom of a certain type of cup (e.g., a jug): such an application must guide users to attach a sensor node to the cup.

To enhance the RFID method, some researchers have employed both light and radio frequency. FindIT Flashlight [7] and RFIG [9] are examples of these combined techniques. Both techniques expose light containing coded patterns to an RFID tag equipped with a light sensor. The tag, which is also equipped with an LED, returns its ID [7] or its ID together with its location [9]. With the help of these functions, users can easily find the items they need. These techniques are very similar to our approach to sensor node identification in that both use light as the medium. The major difference between these two methods and ours is that FindIT Flashlight and RFIG require more preparation; they require a special implementation on the sensor nodes. However, if RFIG's technique could be applied to association, it would realize association between multiple sensors and objects simultaneously, because RFIG uses the combination of light (a projector) and a camera in the same way as our approach.
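The coded-light idea behind FindIT Flashlight and RFIG can be sketched as follows. This is our own simplified illustration, not the published implementations: the light source blinks a bit pattern, the tag samples its light sensor once per bit slot, and thresholding the samples recovers the ID. The threshold and sample values are assumptions.

```python
# Illustrative sketch of decoding a light-coded ID: one light-sensor
# sample per bit slot, thresholded into bits, MSB first.

def decode_light_id(samples, threshold=500, bits=8):
    """Convert a sequence of light-sensor samples into an integer ID.
    A sample above `threshold` is read as a 1 bit, otherwise 0."""
    value = 0
    for s in samples[:bits]:
        value = (value << 1) | (1 if s > threshold else 0)
    return value

# Pattern bright,dark,dark,bright,dark,bright,dark,bright = 0b10010101.
samples = [900, 100, 120, 880, 90, 870, 110, 860]
print(decode_light_id(samples))  # -> 149
```

Because decoding runs on the tag itself, the tag needs firmware that samples and thresholds in sync with the light source; this is the "special implementation on the sensor nodes" that uAssociator avoids.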

Additionally, there are various systems that visualize environmental information using images. Among them, u-Photo [10] is an interactive digital image associated with networked appliances and sensors in a ubiquitous computing environment. We referred to this work for its use of the JPEG header as a place to embed sensor information. The big difference between u-Photo and uAssociator is that using the u-Photo system requires careful configuration by an environment developer; in this sense, u-Photo resembles builtin-type services. In contrast, uAssociator focuses on the bootstrap phase of services (i.e., the association between sensor nodes and everyday objects).
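The general technique of carrying association metadata inside an image file can be sketched as follows. This is a minimal illustration under our own assumptions (a COM comment segment rather than whatever header field u-Photo or uAssociator actually uses, and a hypothetical sensor ID): an XML association description is inserted as a JPEG segment right after the SOI marker, so the image remains a valid JPEG while carrying the association.

```python
# Illustrative sketch: embed and extract an XML association description
# as a JPEG COM (0xFFFE) segment placed just after the SOI marker.

def embed_xml_in_jpeg(jpeg_bytes, xml_text):
    """Insert xml_text as a COM segment after SOI. The segment length
    field counts its own two bytes plus the payload."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
    payload = xml_text.encode("utf-8")
    length = len(payload) + 2
    assert length <= 0xFFFF, "payload too large for one segment"
    segment = b"\xff\xfe" + length.to_bytes(2, "big") + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]

def extract_xml_from_jpeg(jpeg_bytes):
    """Return the payload of a COM segment directly after SOI, if any."""
    if jpeg_bytes[2:4] != b"\xff\xfe":
        return None
    length = int.from_bytes(jpeg_bytes[4:6], "big")
    return jpeg_bytes[6:4 + length].decode("utf-8")

# Hypothetical association record; the sensor ID is made up.
assoc = '<association sensor="uPart-0x1A2B" object="coffee mug"/>'
tiny_jpeg = b"\xff\xd8\xff\xd9"  # SOI + EOI: minimal stand-in image
tagged = embed_xml_in_jpeg(tiny_jpeg, assoc)
print(extract_xml_from_jpeg(tagged))
```

Keeping the association inside the image itself is what makes the smart object image portable: any service that can read the file can also recover the sensor-object binding.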

5 Conclusion

Providing an easy association method is the key challenge in realizing sensor-attachment type smart object services in the home environment. This paper explored the design space for association and presented our solution, uAssociator. It is an image-based association technique whose features are threefold. First, its simple and powerful interaction enables nonexpert users to specify a particular object and a sensor node for association without using classic PC interfaces such as a keyboard. Second, its scalability allows users to augment as many belongings as they have with digital services. We enabled this feature by introducing the smart object image coupled with an XML-based association description. Third, its portability enables users to take advantage of a variety of smart object services simultaneously using their smart objects.

Finally, we suggest two directions for future work. One is incorporating object recognition technologies. Our approach is not perfect because it still requires manual input with a keyboard. Therefore, uAssociator should be extended with object recognition techniques, such as those described in Section 2, to support services that cannot exist without having smart object information in text form. The other is implementation on cellular phones. By using cellular phones for the uAssociator interaction, we can deploy smart object services in our homes more easily, because many users already have cellular phones. In addition, a more interactive system can be developed using the phone's interface (e.g., display, sound, or vibration). For example, it can tell users to shield a sensor node from light if the environment is too bright for uAssociator to recognize the spotlight on the sensor node. One great advantage of uAssociator is that it can be implemented on cellular phones, since it only requires simple hardware components such as a spotlight and a camera.

Acknowledgements This research has been conducted as part of the Ubila Project supported by the Ministry of Internal Affairs and Communications, Japan.

References

1. Crossbow Technology Inc. http://www.xbow.com/.

2. S. Antani, R. Kasturi, and R. Jain. A survey on the use of pattern recognition methods for abstraction, indexing and retrieval of images and video. Pattern Recognition, 35(4):945–965, April 2002.

3. C. Beckmann, S. Consolvo, and A. LaMarca. Some assembly required: Supporting end-user sensor installation in domestic ubiquitous computing environments. In International Conference on Ubiquitous Computing, pages 107–124, 2004.

4. M. Beigl, C. Decker, A. Krohn, T. Riedel, and T. Zimmer. µParts: Low cost sensor networks at scale. In International Conference on Ubiquitous Computing, Demonstration, 2005.

5. M. Beigl, H.-W. Gellersen, and A. Schmidt. MediaCups: experience with design and use of computer-augmented everyday artifacts. Computer Networks, 35(4):401–409, 2001.

6. W. Brunette, C. Hartung, B. Nordstrom, and G. Borriello. Proximity interactions between wireless sensors and their application. In ACM International Conference on Wireless Sensor Networks and Applications, pages 30–37, New York, NY, USA, 2003. ACM Press.

7. H. Ma and J. A. Paradiso. The FindIT Flashlight: Responsive tagging based on optically triggered microprocessor wakeup. In International Conference on Ubiquitous Computing, pages 160–167, London, UK, 2002. Springer-Verlag.

8. T. Okadome, T. Hattori, K. Hiramatsu, and Y. Yanagisawa. Project Pervasive Association: Toward acquiring situations in sensor networked environments. International Journal of Computer Science and Network Security, 6(3B), 2006.

9. R. Raskar, P. Beardsley, J. van Baar, Y. Wang, P. Dietz, J. Lee, D. Leigh, and T. Willwacher. RFIG lamps: interacting with a self-describing world via photosensing wireless tags and projectors. In ACM SIGGRAPH 2004 Papers, pages 406–415. ACM Press, 2004.

10. G. Suzuki, S. Aoki, T. Iwamoto, D. Maruyama, T. Koda, N. Kohtake, K. Takashio, and H. Tokuda. u-Photo: Interacting with pervasive services using digital still images. In International Conference on Pervasive Computing, pages 190–207, 2005.

11. K.-K. Yap, V. Srinivasan, and M. Motani. MAX: human-centric search of the physical world. In International Conference on Embedded Networked Sensor Systems, pages 166–179. ACM Press, 2005.