

Using Home Automation to propose new automatic scenarios to disabled users


Abstract—The increasing cost of ageing population and dependency is an unquestionable and worrying trend. But the constant progress of information technologies also provides real opportunities for healthcare and assistance of dependent people. In this context, this article proposes an original proactive solution for home monitoring, by applying classification methods. The existing home automation and multimedia services are used as built-in sensors, whose data are the input for the analysis of user habits. From this analysis, assisted living services are automatically proposed to the user. Firstly, the architecture of the ambient assisted living system is presented. Then an original clustering procedure is performed for activity recognition. From this clustering, we aim to automate scenario identification based on a non-supervised classification method.

I. INTRODUCTION
It is widely admitted that in developed countries the number of aged people (over 80 years old) is increasing, augmenting the number of fragile people. One question is: can technology help fragile people gain more autonomy? This is important since, for aged people, it means staying at home longer, thus keeping more comfort and costing less to society. A lot of research has been carried out in the last decades under the Ambient Assisted Living or Smart Homes thematics; it is very well summed up in [3], [17]. This paper presents what we are doing in this field at the Lab-STICC Laboratory. Lab-STICC has been investigating this field, working tightly with the Kerpape Rehabilitation Center. These investigations mainly seek to enhance the use of home automation in the rehabilitation center itself. A first project named QuatrA (Aide Ambiante Ajustée Automatiquement) [1] gave a solution for controlling electrical devices through the KNX bus and multimedia through an infra-red controller, as well as helping disabled people with wheelchair navigation in their environment. The aim of this paper is to present a global solution for proposing automatic scenarios to the users based on their use of the home automation system, without adding additional sensors dedicated to measuring activity. This solution is composed of a set of probabilistic tools to analyze activities as well as the AAL architecture supporting these tools.
A. Related work
Population ageing induces increasing needs for places in hospitals and specialized institutions. Improving the quality of life of elderly and disabled people staying at home, which reduces the need for hospitalization, has therefore motivated growing interest in telemonitoring technology.

For instance, in [21] a method to measure circadian variability of activities using location sensors (infra-red sensors and magnetic door contacts) has been developed. As a result, the system can alert the medical provider of changes in user activity or of unusual user behavior. In [23] the authors construct a video monitoring framework, fed by a network of cameras and contact sensors, in order to automatically recognize specified elderly activities. The authors in [20] construct a remote monitoring system with moving sensors in order to detect user behaviors. One main principle of new solutions based on information technology is activity recognition, which is deduced from different kinds of sensors. This activity recognition is then used for different purposes, for example fall detection, anomaly detection, person localization, etc. For instance, the authors in [5] focus on the use of Support Vector Machines (SVM) to learn and classify the activities of daily living performed by the elderly, fusing a large number of sensors distributed inside the flat. In [16], based on information collected from all home devices equipped with sensors, the authors apply the functional relations (i.e. optional or mandatory relations) between different tasks within the activities of daily life for activity recognition. To limit the investment cost, the authors in [19] intend to use simple sensors that detect changes in the states of objects and devices to recognize the daily services in a complex home. More than 70 state-change sensors are used in this approach. Besides, different types of assistance for the elderly are presented in proposals from ambient intelligence, aiming to help people with disabilities to autonomously integrate into society. In order to assist the user to simultaneously activate the devices corresponding to the user preference, an intelligent UT-AGENT design for smart home environments is presented in [10]. In another proposal [6], the authors consider a potentially dangerous situation as a consequence of a series of temporal events. From this hypothesis, a simulation study was designed to evaluate the danger levels generated by the TempCRM (Temporal Context Reasoning Model) model. To improve people's quality of life, several designs of ambient intelligence environments have been proposed in [7], [4]. In the field of robotic assistance for disabled people, different robotic approaches have been presented, such as [15], [11]. However, these current approaches rely heavily on sensors, cameras and telemonitoring systems to collect the information related to the user situation, which means an equipment and installation cost that may be prohibitive in usual cases. Furthermore, input from users and professionals, including occupational therapists (OT), indicates that such intrusive methods can make people uncomfortable and therefore may not be easily accepted.
B. Methodology for service proposition
Based on the QuatrA results, proving the importance of automatic scenarios for people with disabilities, our contribution in this paper deals with the automatic identification of new scenarios adapted to the user habits. The global methodology is shown in Fig. 1. This work relies on the user's daily activities via home living devices to provide intelligent services for disabled people and the elderly.
So a first step, service modeling, is performed to formulate the database and basic terms from the HAS architecture in the user environment. After defining the user database of service use, we need to learn the user habits in order to propose new scenarios adapted to user needs.

Fig. 1. Scheme of our approach (service modeling, observation, service identification, scenario identification, scenario selection by the OT)

For this, an observation of the user's use of the HAS is first realized. Then a step of service identification is performed, clustering each service into different activities according to their occurrence times. Based on this step, a process to determine assisted service execution through the use of scenarios is proposed in the scenario identification step. Then, the obtained scenarios are proposed to the user or the OT, who makes the final decision. This methodology is supported by the use of an ambient assisted living architecture, which is presented in section II. Then service identification is introduced in section III, which helps to obtain the activities of the user. Constructing scenarios from the observations of service use is presented in section IV. Finally, results are presented in section V.
II. AMBIENT ASSISTED LIVING ARCHITECTURE

An AAL architecture is about providing services to the users. These services are mainly realized on electrical devices (shutters, lights, bed, ...) activated through the home automation system, or on multimedia devices (TV, DVD, phone, Internet, ...) controlled through an infrared remote or realized on a computer.
A. Services
To give some flexibility at execution time, we consider that a service is the realization of one or many operations, an operation being the realization of a function (what) of the system on a given resource (where). This provides reconfiguration capabilities in case of the unavailability of a resource. One of the facts highlighted by the occupational therapists (OT) from the QuatrA project is the need for scenarios to shorten the delivery time of services and to reduce the energy needed for the activation of the devices (adapted interfaces are generally difficult to handle).

A scenario is the execution of multiple services together and is itself a service. Each service is associated with a Quality of Service (QoS), to assess whether the delivery to the user was good or not. Management of scenarios is performed by the system; it can be seen as a service but we will call it a manager since it does not directly deliver its service to the user. Other managers can be encountered in an AAL system, such as the alert management system, which checks whether the user behavior deviates from what it used to be, or scenario identification, presented in this paper, which monitors the usage of devices to propose new scenarios to the user. In the QuatrA project, we also supported the user with navigation capabilities.
B. Hardware Architecture
The execution of the services is performed on a resource, but a service is triggered by some program code. A hardware architecture has been defined to support the execution of the services across different technologies and at the scale of a rehabilitation center such as Kerpape. Here are the devices we may find in such an architecture:

User terminals, embedded with the wheelchair-bound user, deal with interactions between the user and his environment. They are responsible for service publishing and for sending commands to the central server via a mobile terminal (Bluetooth connection).

Mobile terminals, also embedded in the wheelchair, act as a bridge between the user terminal and the central server via a stationary terminal (Bluetooth connection). They are also used for path calculation and wheelchair control using embedded sensors.

Stationary terminals are not only a bridge as mentioned above, but also a control unit for the local environment (infrared and KNX/EIB1 connection).

A central server performs all the centralized tasks such as service and scenario identification, alert management, and route management for wheelchairs.

Mobile and stationary terminals run on low-cost and low-power embedded boards (IGEPv2) with an ARM processor running a lightweight version of Ubuntu Linux; they can access the KNX bus through Ethernet and trigger IR commands through a USB dongle. The central server runs on a standard PC running a full desktop Linux, whereas the user terminals are smartphones or tablets associated with an adapted interface.
C. Software Architecture
A middleware called DANAH [12] has been developed to support the execution of services on the hardware architecture. Its role is to control the environment and activate the devices according to the user's wishes. DANAH is composed of two parts: a server and a client. The client runs on the PDAs and the server on the IGEP board. User wishes are expressed on the client and executed on the server, which triggers the devices that can either be on the KNX bus or controlled by infra-red. Scenario facilities are provided.
1 The KNX/EIB (European Installation Bus), born of a consortium, is an open standard for home and building control.

Fig. 2. Hardware architecture

D. Formalization of the service architecture
From the methodology presented before, the database is collected from the HAS use. In this section, we propose two levels of modeling for the global system. The first level defines the services proposed by the home automation and multimedia architecture, and is called the service architecture. Based on basic services, the scenario level is built up within the scenario identification step.
1) Services modelling: The term service is central to this approach. There is a need for a clear definition of the underlying concepts to be able to properly handle them and use them in the data structures. First we need to define the basic blocks:
- Operations are elementary tasks performed by the home automation system. These tasks are the realization of a function (what) by a resource (how) and are divided into two classes: activation operations trigger a service while action operations do the job.
Then a service can intuitively be defined as follows:
- A service is an operation or a set of mutually dependent operations carried out by the system for the user.
As stated in the definition, a service is related to the user who requests it. For instance, the open door service is composed of a launch open door from PDA activation operation and an open door action operation, which is automatically performed. The watching TV service consists of a launch TV from PDA activation operation and several action operations: turn on TV, change channel, +/- volume. However, a service can also be seen from the point of view of the HAS. Given the distinction between activation and action operations, a service is triggered by an activation operation; it then executes some action operations that provide an added value to the user. A service is called elementary if only one effect is perceived by the user. For instance, the listen to webradio service is considered as an elementary service; it contains a launch webradio from PDA operation, realizing through the PDA activation resource all the steps necessary to set up the webradio, as ordered functions performed on the PC: wake up PC, set up network, launch application and finally select the channel. The definition of the services in the HAS architecture is intentionally not complete. To decrease the number of distinct services, the activation resource associated with the service is not defined until the service is proposed to the user.

2) Scenario modelling: Services are created and composed from the HAS architecture. Elementary services are defined in the architecture itself, though their activation resources are not yet determined. More complex services are built up from elementary services, defining scenarios. The definition of scenarios can become complex; moreover, the pool of scenarios evolves in time and is different for each user, so we propose to define scenarios at a higher level, as an association of services. Defining a scenario consists in combining existing services together and allocating an activation resource to each of these services within the scenario. Since each service is launched by the scenario manager, the scenario manager becomes the activation resource. To enable the launch of a newly created scenario, an activation operation must be added to the scenario itself. From this definition, the execution of a scenario means the execution of all services within the scenario. In the case of automatic execution, these sets of services are automatically executed by the scenario manager. We distinguish two kinds of scenario activation.

Automatic. In this kind, no user activation is needed.
Semi-automatic. In this kind, we consider three ways to activate a scenario: parallel, where one activation simultaneously executes all services within the scenario; sequential, where one activation consecutively executes all services within the scenario; step by step, where a simple click on the HMI interface executes each service within the scenario.
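To make the notions of operation, service, scenario and activation mode concrete, here is a minimal data-model sketch in Python. All class names, fields and the example scenario are hypothetical illustrations, not part of the DANAH middleware described above.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class ActivationMode(Enum):
    """Scenario activation kinds described above."""
    AUTOMATIC = "automatic"        # no user activation needed
    PARALLEL = "parallel"          # one activation launches all services at once
    SEQUENTIAL = "sequential"      # one activation launches services one after another
    STEP_BY_STEP = "step_by_step"  # one click per service on the HMI


@dataclass
class Operation:
    """Elementary task: a function (what) performed by a resource."""
    function: str
    resource: str
    is_activation: bool = False    # activation operations trigger, action operations do the job


@dataclass
class Service:
    """An operation or a set of mutually dependent operations delivered to the user."""
    name: str
    operations: List[Operation] = field(default_factory=list)


@dataclass
class Scenario:
    """A scenario is an ordered association of services plus its own activation mode."""
    name: str
    services: List[Service] = field(default_factory=list)
    mode: ActivationMode = ActivationMode.STEP_BY_STEP


# Example: a hypothetical wake-up scenario combining two elementary services
wake_up = Scenario(
    name="wake-up",
    services=[
        Service("switch on light", [Operation("switch_on", "bedroom_light")]),
        Service("open shutter", [Operation("open", "bedroom_shutter")]),
    ],
    mode=ActivationMode.PARALLEL,
)
```

Separating activation operations from action operations in this way mirrors the distinction used throughout this section and lets the scenario manager substitute itself as the activation resource of the services it launches.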

We note that both kinds of scenario activation allow the user to save energy by reducing the effort and time needed to manually find each service on his HMI interface. Moreover, according to the previous QuatrA project investigations, different users need different scenarios. Given the interest of new scenarios for the improvement of user autonomy, and the fluctuation of scenarios across users, a method to automatically find new scenarios adapted to the user's habits is essential; we present such a method in the next sections.
III. SERVICE IDENTIFICATION

We rely on the testimonies of OTs from the Kerpape center, who stress the usually regular habits of the disabled patients they assist. Moreover, studies in geriatric contexts [18] show the same regularity and structuring role for certain categories of elderly people living alone. Therefore, we consider that regular daily services, modeled previously, are likely to represent the habitual user daily services. Actually, each activation of a service gives a timed value, which is considered as a kind of virtual sensor. But a service can be performed several times per day, and these distinct occurrences can be associated with different scenarios. For instance, the turn on light service can be associated with the wake up scenario in the morning and with other scenarios in the evening. Therefore, the first question is how to distinguish different groups of service occurrence times, when there are no preexisting tags marking specific daily use periods.

Answering this question requires discovering how to split the population into smaller clusters such that data points in a cluster are more similar to each other than to data points in different clusters. Such clusters express use periods of the related service, called activities. This is the main principle of service identification.
A. Service subset selection
From the above description, the database for the analysis is characterized by the names and occurrence times of the services. They correspond to the use of identified HAS devices and the call times respectively, which produce temporal relationships between different services. After a period of observation, a set of occurrence times for each service is obtained. Besides its temporal characteristic, a service can be classified into different types according to its use frequency. Typically, we have different kinds of services: a daily service is requested each day; a weekly service is requested each week; a k-days service is requested every k days. These data represent different types of services. In this work, we focus on the combination of different daily services to propose daily scenarios. Note that this is just a case study of the general method. The daily attribute of a service is measured by the number of days on which this service occurs: a service performed once a day for 10 days is more frequent than a service performed 10 times in only one day and absent for the nine other days. From this idea, a service is considered as a daily service if it is observed on more than a given ratio of the N observed days, for instance 50% (i.e. threshold = 0.5). The value of this threshold depends on the user context, user habits, and user capabilities. Consequently, it is considered as a user parameter to determine daily services corresponding to the user context, and will be advised by the OT or the user. This parameter allows us to distinguish two types of services: daily services and other services.
B. Clustering procedure
The clustering procedure consists of three steps. First, data are grouped into as many clusters as wished by means of a clustering method. Performing this task for a range of integers allows us, in a second step, to choose the most appropriate number of clusters according to a stability criterion. We finally perform an outlier detection within the resulting clusters.
1) Clustering: The collected data consist of the occurrence times of each service during an observation period. We intend to group these data into clusters corresponding to the different activities of this service. Since we usually have no a priori knowledge of the actual activities, we opted for the most popular unsupervised clustering technique, the K-means method [16]. For a given integer k, K-means clustering gives a partition of the input data into k sets {C1, ..., Ck}, so as to minimize the following objective function:
E(k) = \sum_{r=1}^{k} \sum_{x \in C_r} \| x - c_r \|^2    (1)

where c_r is the center of cluster C_r and \|\cdot\| stands for the Euclidean norm.

After an initialization step, K-means algorithm proceeds by alternating between assignment and update steps until convergence.

Initialization step: We choose randomly k cluster centers {c_1, ..., c_k} in the data set.
Assignment step: We assign each observation in the data set to the cluster with the closest center:

C_r^{(l)} = \{ x_i : \|x_i - c_r^{(l)}\| \le \|x_i - c_j^{(l)}\|, \; j = 1, \dots, k \}    (2)

Update step: We calculate the new cluster centers to be the means of the data in the clusters:

c_r^{(l+1)} = \frac{1}{n_r} \sum_{x_i \in C_r^{(l)}} x_i    (3)

where n_r = card(C_r^{(l)}).

Optimization step: Because of the monotonically non-increasing property of the objective function, convergence is always assured. However, there is no guarantee that the global optimum is reached. Furthermore, the result may depend on the initialization step. A heuristic is to run the algorithm from different starting points and to keep the best local optimum, i.e. the clustering that achieves the minimum of the objective function.
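As an illustration of this restart heuristic, the following minimal sketch runs a plain K-means several times on one-dimensional occurrence times and keeps the partition with the lowest objective value. The toy data and the number of restarts are assumptions made for the example, not values from the paper.

```python
import numpy as np


def kmeans(data, k, n_restarts=10, max_iter=100, rng=None):
    """Plain 1-D K-means with random restarts; returns labels, centers and the objective E(k)."""
    rng = np.random.default_rng(rng)
    best = None
    for _ in range(n_restarts):
        centers = rng.choice(data, size=k, replace=False)           # initialization step
        for _ in range(max_iter):
            dists = np.abs(data[:, None] - centers[None, :])        # distance to each center
            labels = dists.argmin(axis=1)                           # assignment step (Eq. 2)
            new_centers = np.array([data[labels == r].mean() if np.any(labels == r)
                                    else centers[r] for r in range(k)])  # update step (Eq. 3)
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        energy = sum(((data[labels == r] - centers[r]) ** 2).sum() for r in range(k))  # Eq. 1
        if best is None or energy < best[2]:
            best = (labels, centers, energy)
    return best


# Occurrence times (in hours) of a "turn on light" service over a few days (toy data)
times = np.array([7.9, 8.1, 8.0, 19.2, 19.5, 18.9, 8.2, 19.1])
labels, centers, energy = kmeans(times, k=2)
print(centers)   # roughly one center near 8h and one near 19h
```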

2) Determining the number of clusters: We generate a Gap curve for the input data by running the K-means algorithm for all values of k between 1 and k_max, and for each k we compute the Gap between the resulting clustering and the clusterings of data drawn from a null reference distribution. The largest jump in the Gap curve indicates the best choice of k. We chose the uniform distribution as the null reference distribution and we combined the weighted Gap and weighted DDGap methods [22] to analyze the Gap between clusterings. The distortion of a clustering {C_1, ..., C_k} is defined as:

W_k = \sum_{r=1}^{k} \frac{1}{2 n_r (n_r - 1)} D_r    (4)

with D_r = \sum_{i,j \in C_r} \|x_i - x_j\|.

The Gap between a clustering in k clusters of the data set and the corresponding clustering of uniformly distributed data is given by:

Gap(k) = E^*(\log(W_k)) - \log(W_k)    (5)

where E^*(\log(W_k)) is the expected log-distortion of uniformly distributed data sets. The weighted Gap method compares successive values of Gap(k), and the best choice of k corresponds to a maximum of Gap(k). On the other hand, the weighted DDGap method determines the best jump in the Gap curve by comparing the successive slopes:

DDGap(k) = DGap(k) - DGap(k+1),  with  DGap(k) = Gap(k) - Gap(k-1)    (6)

The best choice of k corresponds to a maximum of DDGap(k). The combination of these two Gap measures is motivated by the fact that DGap is more likely than DDGap to overestimate the number of clusters, while DDGap suffers from the constraint k ≥ 2.
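One possible implementation of these quantities is sketched below; it reuses the kmeans helper and the toy times array from the previous sketch, and the uniform reference range and the number of reference samples are arbitrary choices made for illustration.

```python
import numpy as np


def log_distortion(data, labels, k):
    """log(W_k) with the pairwise-distance distortion of Eq. 4."""
    w = 0.0
    for r in range(k):
        cluster = data[labels == r]
        n_r = len(cluster)
        if n_r < 2:
            continue
        d_r = np.abs(cluster[:, None] - cluster[None, :]).sum()    # D_r: sum of pairwise distances
        w += d_r / (2.0 * n_r * (n_r - 1))
    return np.log(w)


def gap_curve(data, k_max, n_ref=20, rng=None):
    """Gap(k) for k = 1..k_max against uniformly distributed reference data (Eq. 5)."""
    rng = np.random.default_rng(rng)
    gaps = []
    for k in range(1, k_max + 1):
        labels, _, _ = kmeans(data, k)                             # kmeans from the previous sketch
        ref_logs = []
        for _ in range(n_ref):
            ref = rng.uniform(data.min(), data.max(), size=len(data))
            ref_labels, _, _ = kmeans(ref, k)
            ref_logs.append(log_distortion(ref, ref_labels, k))
        gaps.append(np.mean(ref_logs) - log_distortion(data, labels, k))
    return np.array(gaps)


gaps = gap_curve(times, k_max=5)
dgap = np.diff(gaps)                      # DGap(k) = Gap(k) - Gap(k-1), defined for k >= 2
ddgap = dgap[:-1] - dgap[1:]              # DDGap(k) = DGap(k) - DGap(k+1), Eq. 6
print(gaps, dgap, ddgap)
```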

Hence, we propose the following strategy to determine the number of clusters: 1) Run the weighted Gap method. 2) If the best choice of k is 1, check the range of this unique cluster: a cluster with occurrence times separated by too long a lapse of time (greater than a fixed threshold) may not be useful for learning habits. In this case, we had better split the cluster and go to the next step. 3) If k ≥ 2, use the DDGap method to estimate the number of clusters and check the ranges of all resulting clusters. Large-range clusters are split by iterating steps 1 to 3. This procedure builds a top-down hierarchical tree of the data set and the resulting clusters correspond to different activities of the same service. Among the obtained clusters, some may correspond to rare activities and others may include rare events. In the next step, we intend to detect such abnormalities.
3) Outlier detection: An outlier is an observation which markedly deviates from the other members of the sample in which it occurs [2]. Outliers often contain useful information about underlying abnormal behaviors, and mining for outliers has a great number of applications in a wide variety of domains. However, in this section, we consider outliers as impediments to the learning process, and as such they must be identified and eliminated. At this point, a service is clustered into a set of activities and each activity is a set of occurrence times of the service. We are concerned with two types of outliers: outlying observations and outlying clusters. An outlying observation is an observation considerably dissimilar from the remainder of the data in its cluster; it corresponds to an abnormal event. An outlying cluster is a small cluster; it corresponds to a rare activity. The outlier mining method consists in: 1) Detecting small clusters: following [13], a small cluster is defined as a cluster with fewer points than half the average number of points in the k clusters. 2) Detecting outliers in the rest of the clusters (if any): we compute the median absolute deviation (MAD) of the current cluster, which is the mean of the distances between the median of the cluster and each of the points in the same cluster. On the other hand, for each point x_k, we compute the median absolute deviation of the current cluster deprived of x_k, termed MAD_k. If the ratio MAD_k / MAD is greater than a given threshold, x_k is declared an outlier. Both small clusters and outliers are eliminated before the construction of scenarios. The clustering procedure is illustrated in Fig. 3. Three clusters are identified, according to the occurrence times. After removing the outlier clusters and outliers, which are represented by red circles, two clusters are acquired. Each acquired cluster represents a daily activity of the related service. In other words, a daily activity is defined as the service at a given period of time, which allows us to distinguish the same service at different occurrence times. Then, these activities are used for the identification of new scenarios, which is presented thereafter.
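A rough sketch of this two-step pruning is given below. The ratio test is implemented here as "removing x_k markedly shrinks the cluster dispersion", which is one possible reading of the MAD criterion above; the threshold and the toy clusters are illustrative, not values from the paper.

```python
import numpy as np


def prune_clusters(clusters, ratio_threshold=3.0):
    """Remove small clusters, then remove MAD-based outliers inside the remaining ones."""
    # 1) Small clusters: fewer points than half the average cluster size
    avg_size = np.mean([len(c) for c in clusters])
    kept = [np.asarray(c, dtype=float) for c in clusters if len(c) >= avg_size / 2.0]

    pruned = []
    for cluster in kept:
        # MAD of the whole cluster: mean distance of the points to the cluster median
        mad = np.mean(np.abs(cluster - np.median(cluster)))
        keep_mask = np.ones(len(cluster), dtype=bool)
        for idx, x_k in enumerate(cluster):
            rest = np.delete(cluster, idx)                      # cluster deprived of x_k
            mad_k = np.mean(np.abs(rest - np.median(rest)))     # MAD without x_k
            # Flag x_k when removing it shrinks the dispersion a lot (interpretation of the ratio test)
            if mad_k > 0 and mad / mad_k > ratio_threshold:
                keep_mask[idx] = False
        pruned.append(cluster[keep_mask])
    return pruned


# Two daily activities of a service plus one abnormal late call (toy data, hours)
clusters = [[7.9, 8.0, 8.1, 8.2, 11.5], [19.0, 19.2, 19.4]]
print(prune_clusters(clusters))   # the 11.5 call is dropped as an outlier
```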


Fig. 3. Clustering results

IV. SCENARIO CONSTRUCTION

Carrying out everyday life activities, even the most harmless ones, may be very difficult or even impossible for certain elderly and disabled people. Enabling the automatic management of home facilities may provide them valuable support and contribute to giving them more autonomy and a better quality of life (entertainment, communication, self-esteem, ...). To go further, we aim to exploit automatic learning to provide a responsive domotic system which offers the user personalized scenarios tailored to his habits, able to anticipate his needs and able to adapt to changes in his habits. Let us first focus on the construction of personalized and anticipatory scenarios.
A. Activity clustering
The basic idea of this activity clustering is to group close activities, requested frequently together, into sets of activities which define new scenarios. First, we intend to turn the idea of time-closeness into a similarity metric usable by clustering methods. In the literature, many methods for data analysis have been adapted to use solely dissimilarities between data, such as K-means clustering, Kohonen's Self-Organizing Maps, HAC (Hierarchical Ascendant Classification), etc. In our context, to additionally account for the neighbor relation between activities in scenario construction, we chose the unsupervised DSOM algorithm (Dissimilarity Self-Organizing Maps) [9] for grouping close activities into clusters, each one being considered as a new scenario. The data in this algorithm are the dissimilarity values between different activities. A dissimilarity measure is therefore required. Illustrating the dissimilarity between different activities by different shapes, the method is shown in Fig. 4. Through the DSOM algorithm, similar shapes can be grouped together. From this figure, we see that there are two problems to solve before using the DSOM algorithm: i) the definition of the dissimilarity between activities; ii) the choice of a relevant number of clusters, also called neurons.


Fig. 4. Illustration of the Kohonen (DSOM) algorithm: after initialization and N iterations, activities with high similarity are grouped together

1) Dissimilarity measure: We define the cooccurrence frequency f_{\{i,j\}} of two activities i and j as:

f_{\{i,j\}} = \frac{f_{ij} + f_{ji}}{2}    (7)

where f_{ij} is the frequency of the sequence (activity i → activity j) during an observation period of N days. f_{\{i,j\}} measures the frequency of the pair of activities {i, j} independently of their occurrence order. The relative lapse of time between the pair of activities {i, j} is defined as:

\tau_{ij} = \frac{1}{2} \left( \frac{\delta_{ij}}{\delta_i} + \frac{\delta_{ij}}{\delta_j} \right)    (8)

where \delta_{ij} is the mean lapse of time between activities i and j, and \delta_i is the mean lapse of time between activity i and all other existing activities. The dissimilarity between activities i and j is then measured by:

d(i, j) = \tau_{ij} (1 - f_{\{i,j\}})  if i ≠ j,  and  d(i, j) = 0  otherwise    (9)
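As an illustration, the sketch below computes this dissimilarity on a tiny hand-made event log. The log format, activity names and lapse-of-time estimates are simplifying assumptions made for the example, not the actual data representation used by the system.

```python
import numpy as np

# Toy observation log: for each day, the (activity, time in hours) events in order.
days = [
    [("light_on_morning", 8.0), ("open_shutter", 8.1), ("tv_on_evening", 19.0)],
    [("light_on_morning", 7.9), ("open_shutter", 8.2), ("tv_on_evening", 19.3)],
    [("open_shutter", 8.0), ("light_on_morning", 8.1), ("tv_on_evening", 18.8)],
]
activities = sorted({a for day in days for a, _ in day})
n_days = len(days)

def seq_frequency(i, j):
    """f_ij: fraction of days on which activity i occurs before activity j."""
    count = 0
    for day in days:
        t = dict(day)
        if i in t and j in t and t[i] < t[j]:
            count += 1
    return count / n_days

def mean_lapse(i, j):
    """delta_ij: mean absolute lapse of time between activities i and j."""
    lapses = [abs(dict(day)[i] - dict(day)[j]) for day in days
              if i in dict(day) and j in dict(day)]
    return np.mean(lapses) if lapses else np.inf

def dissimilarity(i, j):
    """d(i, j) combining cooccurrence frequency (Eq. 7) and relative lapse of time (Eq. 8, 9)."""
    if i == j:
        return 0.0
    f_pair = (seq_frequency(i, j) + seq_frequency(j, i)) / 2.0           # Eq. 7
    d_ij = mean_lapse(i, j)
    d_i = np.mean([mean_lapse(i, k) for k in activities if k != i])      # mean lapse to all others
    d_j = np.mean([mean_lapse(j, k) for k in activities if k != j])
    tau = 0.5 * (d_ij / d_i + d_ij / d_j)                                # Eq. 8
    return tau * (1.0 - f_pair)                                          # Eq. 9

D = np.array([[dissimilarity(i, j) for j in activities] for i in activities])
print(activities)
print(np.round(D, 3))   # morning activities end up much closer to each other than to the evening one
```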

We built d so as to take into account both the lapse of time and the cooccurrence frequency, in such a way that when we fix the relative lapse of time \tau_{ij}, the similarity between the pair of activities {i, j} increases with the frequency f_{\{i,j\}}. On the other hand, for a fixed frequency, the smaller the relative lapse of time, the higher the similarity between activities.
2) Clustering by the DSOM algorithm: The SOM algorithm [8] is a non-linear projection technique for visualizing the underlying structure of high-dimensional vectorial data. Input vectors are represented by prototypes arranged along a regular low-dimensional map (usually 1D or 2D), in such a way that similar vectors in the input space become spatially close in the map. However, in several real applications, no vector data description is available; instead, the data are pairwise compared according to a dissimilarity measure. An adaptation of Kohonen's SOM to dissimilarity data, called DSOM, was proposed in [9]. The DSOM algorithm proceeds as follows:

Initialization step: Assign a prototype to each cell (or cluster) of the grid. The prototypes may be chosen randomly in the data set but other heuristics are possible.

Learning step: Alternate between affectation and representation phases until there is no change in the grid. At iteration l:
Affectation phase: Assign each observation in the data set to the cluster with the closest prototype with respect to the dissimilarity measure.
Representation phase: The new prototypes are calculated as the solutions of the minimization problems:

c_j^l = \arg\min_{y \in C_j^{l-1}} \sum_i \sum_{x \in C_i^{l-1}} h(i, j, l) \, d(x, y)    (10)

where d(·,·) is the dissimilarity measure, c_j^l is the prototype of the cell C_j at iteration l, and h is a neighborhood function which decreases with time.
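To make the loop above concrete, here is a toy sketch of a one-dimensional DSOM working on a precomputed dissimilarity matrix. It reuses D and activities from the dissimilarity sketch above, searches the new prototype over the whole data set rather than within each cell, and uses an arbitrary shrinking Gaussian neighborhood, so it illustrates the principle rather than reproducing the exact algorithm of [9].

```python
import numpy as np

def dsom(D, grid_size=3, n_iter=30, sigma0=1.5, rng=None):
    """Toy 1-D DSOM / median-SOM: D is an (n x n) dissimilarity matrix.

    Prototypes are data indices; returns the cell index assigned to each observation.
    """
    rng = np.random.default_rng(rng)
    n = D.shape[0]
    cells = np.arange(grid_size)
    prototypes = rng.choice(n, size=grid_size, replace=False)       # initialization step

    for l in range(1, n_iter + 1):
        sigma = sigma0 * (0.1 / sigma0) ** (l / n_iter)              # shrinking neighborhood width
        h = np.exp(-((cells[:, None] - cells[None, :]) ** 2) / (2 * sigma ** 2))   # h(i, j, l)

        # Affectation phase: each observation goes to the cell of its closest prototype
        assignment = D[:, prototypes].argmin(axis=1)

        # Representation phase: each cell j picks the data point y minimizing
        # sum_i sum_{x in C_i} h(i, j, l) d(x, y)   (spirit of Eq. 10, y searched over all data)
        cost = np.zeros((n, grid_size))
        for i in range(grid_size):
            members = np.where(assignment == i)[0]
            if len(members):
                cost += D[:, members].sum(axis=1)[:, None] * h[i][None, :]
        prototypes = cost.argmin(axis=0)

    return D[:, prototypes].argmin(axis=1)

# Group the activities of the dissimilarity sketch above into candidate scenarios
scenario_of = dsom(D, grid_size=2)
print(list(zip(activities, scenario_of)))
```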

A common choice for h is a Gaussian kernel. For a given grid size, several initializations of DSOM are tried and the map with the least quantization error is selected. We choose the best size of the grid according to the variation of information (VI) criterion proposed in [14]. It is a criterion for comparing partitions, measuring the distance between two partitions of the same data set. Applying this variation of information criterion to the evaluation of the DSOM algorithm, the idea is to compare the stability of a clustering through the variation of information between the clustering of the complete data set and the clustering of a reduced data set. The best estimate of the number of clusters is the one whose clustering is the most stable under the removal of some data points. The obtained clusters are sets of close activities called scenarios. The services within a scenario are automatically activated and we will see later the different possible modes of activation. For the moment, we need to order the activities within the scenarios in order to allow automatic activation.
B. Scenario ordering
A scenario can be described as a weighted directed acyclic graph whose vertices are the activities within the scenario and whose arcs are ordered pairs of activities. The arcs express the direct dependence between activities and, as such, the weight assigned to an arc (i, j) is the frequency of the sequence (activity i → activity j), namely f_{ij}. A path is a sequence of vertices connected by arcs and its length is set to be the product of the weights of all traversed arcs:

l_{ij} = \prod f_{kl}  over all the arcs (k, l) of the current path    (11)

A Hamiltonian path is a path which visits all vertices of the graph exactly once. We order the activities within a scenario by first choosing the starting and ending activities and then choosing the Hamiltonian path joining them:


Step 1: Given two activities i and j, we set the weight L_{ij} of the event "i is the starting activity and j is the ending activity" to be:

L_{ij} = \sum l_{ij}  over all Hamiltonian paths joining i to j    (12)

Step 2: The winning activities are the vertices which maximize L_{ij}.
Step 3: Once the starting and ending vertices are selected, we choose among all the Hamiltonian paths joining them the one (or possibly the ones) with maximal length l_{ij}. A small sketch of this ordering procedure is given below.
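Since the scenarios contain few activities, the exhaustive search can be written directly. The following sketch enumerates the Hamiltonian paths by brute force; the activity names and sequence frequencies are hypothetical.

```python
import itertools
import numpy as np

def order_scenario(activities_in_scenario, seq_freq):
    """Order the activities of a scenario via the three steps above.

    seq_freq[(a, b)] is the observed frequency f_ab of the sequence a -> b.
    """
    def path_length(path):
        # l_ij: product of the arc weights along the path (Eq. 11)
        return np.prod([seq_freq.get((a, b), 0.0) for a, b in zip(path, path[1:])])

    best_endpoints, best_weight = None, -1.0
    for i, j in itertools.permutations(activities_in_scenario, 2):
        inner = [a for a in activities_in_scenario if a not in (i, j)]
        # L_ij: total weight of all Hamiltonian paths from i to j (Eq. 12)
        weight = sum(path_length((i, *mid, j)) for mid in itertools.permutations(inner))
        if weight > best_weight:                       # Step 2: winning start/end activities
            best_endpoints, best_weight = (i, j), weight

    i, j = best_endpoints
    inner = [a for a in activities_in_scenario if a not in (i, j)]
    # Step 3: among the Hamiltonian paths joining i to j, keep the one with maximal length
    return max(((i, *mid, j) for mid in itertools.permutations(inner)), key=path_length)

# Toy wake-up scenario: hypothetical sequence frequencies over an observation period
freq = {("light", "shutter"): 0.9, ("shutter", "tv"): 0.8, ("light", "tv"): 0.4,
        ("shutter", "light"): 0.1, ("tv", "shutter"): 0.1, ("tv", "light"): 0.05}
print(order_scenario(["light", "shutter", "tv"], freq))   # ('light', 'shutter', 'tv')
```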

The scenarios we deal with consist of a small number of activities; thus the task of searching for Hamiltonian paths remains of very reasonable cost. Once the scenarios are ordered, they are proposed for validation to the user and the OT. Indeed, the user may be uninterested in a given scenario or, more problematically, a scenario may go against a therapeutic purpose. The expertise of both the user and the OT is required to make this solution an appropriate and useful answer to the automatic control of the HAS environment.
C. Dynamic adaptation
The behavior of the user may change over time due to changes in his health, private life or seasonality, for example. For instance, a change of user habits for the turn on light service between winter and summer is illustrated in Fig. 5. To prevent deterioration of the calculated scenarios over time, we have to refresh them periodically. There are two strategies for achieving the updates: the first one is to adapt the scenarios regularly without considering whether changes have really happened, and the second one is to first detect changes and then adapt the scenarios. We adopted the first strategy, as illustrated in Fig. 6. It consists, for a fixed time window of k days, in constructing scenarios after an observation period of N days and then constructing new scenarios based on the last N days. The choice of k is left to the OT.
V. RESULTS
In the experiments, one real-world data set and artificial data sets were used. The artificial data sets were generated to analyze the robustness of the approach in handling irregular habits and noise in the data.
A. Simulation design
We developed a simulator under SCILAB, a free software for numerical computation, to generate artificial data sets. To make them as realistic as possible, we followed the QuatrA project results and the OTs' advice to create the profile of a fictitious patient at the Kerpape center. This patient was given three typical timetables: one for working days, one for Saturdays and one for Sundays. A typical timetable is a set of services sorted by their typical call times, as shown in Tab. I. Predefined scenarios underlie the choice of the above activities. For the working days, for example, we have the following predefined scenarios:


"Turn on light" service in winter

"Turn on light" service in summer

"Turn on light" service after 8 days of changes

Fig. 5.

Example of user habit change

1) Wake-up scenario [1-2-3-4]: switch on light, open shutter, turn on TV, turn on hot water.
2) Go-out scenario in the morning [5-6-7-8]: turn off TV, open the door, switch off light, close the door.
3) Go-in scenario at noon [9-10-11]: open the door, setup bed, close the door.
4) Go-to-bed scenario in the evening [23-24-25-26]: setup bed, turn off TV, switch off out light, switch off light.
A data set of N days is generated by introducing noise into the typical timetables by adding:

variations around the typical call times, and rare events such as the non-call or abnormal calls of an activity.


Fig. 6. Principle of dynamic adaptation

TABLE I
WORKING DAY ACTIVITIES

Label  Time   Daily activity
E1     08:00  Switch on light
E2     08:05  Open shutter
E3     08:10  Turn on TV
E4     08:15  Turn on hot water
E5     08:45  Turn off TV
E6     08:55  Open door
E7     09:00  Switch off light
E8     09:05  Close door
E9     13:00  Open door
E10    13:10  Close door
E11    13:25  Setup bed
E12    14:30  Unset up bed
E13    15:00  Turn on computer
E14    17:00  Turn off computer
E15    19:00  Switch on light
E16    20:00  Turn on TV
E17    20:30  Watch DVD
E18    21:00  Turn on light ext
E19    21:15  Hang on telephone
E20    21:30  Hang up telephone
E21    21:50  Close shutter
E22    22:00  Turn off DVD
E23    22:00  Setup bed
E24    22:15  Turn off TV
E25    22:30  Switch off out light
E26    22:40  Switch off light

* wished service, but not yet available

We used a Gaussian law to create variations around the typical call times, a uniform law to generate activities occurring randomly in an interval of time, and a Poisson law to generate rare events. For instance, the patient, who usually goes to the rehabilitation room at 9:30 am on working days, is used to waking up around 8 am. His turn on light times for N days were generated via the Gaussian law with mean 8 and a variance expressing the regularity of the patient. The higher the variance, the more irregular his habits are for this particular activity. Sickness and other impediments may cause exceptional calls or the non-call of an activity. These events are generated via the Poisson law, whose parameter characterizes the frequency of such exceptions.
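A toy generator in this spirit might look as follows, written here in Python rather than SCILAB for illustration; the timetable excerpt, jitter and miss-rate values are arbitrary and do not reproduce the exact simulator parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Typical working-day timetable (hours); a small excerpt in the spirit of Table I, values illustrative
timetable = {"switch on light": 8.0, "open shutter": 8.08, "turn on TV": 8.17,
             "switch off light": 22.67, "setup bed": 23.0}

def simulate_days(n_days, std_hours=0.25, miss_rate=0.05):
    """Generate noisy call times: Gaussian jitter around typical times, Poisson-driven misses."""
    days = []
    for _ in range(n_days):
        events = []
        for service, t in timetable.items():
            if rng.poisson(miss_rate) > 0:                       # rare event: no call today
                continue
            events.append((service, rng.normal(t, std_hours)))   # jitter around the typical time
        days.append(sorted(events, key=lambda e: e[1]))
    return days

data = simulate_days(100)
print(len(data), data[0][:3])
```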


B. Simulation results
We present here the results for a data set simulating 100 days of the fictitious patient. The lowest coefficient of variation (standard deviation over mean) is 15 minutes and is associated with the activity open door (standard deviation = 4). The highest coefficient of variation is 70 minutes and is associated with the activity turn on light (mean = 7 pm, standard deviation = 4). The number of abnormal events represents 0.5% of the data set and 5% of the least dense activity. We are concerned with four matters:

the recognition of the profiles;
the recognition of the daily activities;
the recognition of the scenarios;
the dynamic adaptation.

1) Recognition of the profiles: all three profiles were identified. 2) Recognition of daily activities: all activities were correctly identified. Figure 7 shows the partition of the turn on light service into two activities: one in the morning around 8 am and one in the evening around 7 pm.

Fig. 7. Occurrence of the "turn on light" service (activity 1 and activity 2)

3) Scenario identification: DSOM clustering coupled with the variation of information criterion led to a stable clustering of the data in a 3 × 3 grid. We ordered the activities within the clusters as presented in section IV-B. The resulting scenarios are presented in Tab. II.


TABLE II
SCENARIOS: DSOM RESULTS FOR A 3 × 3 GRID

[E23 E24 E25 E26]   [E17 E18 E19]   [E16]
[E20 E21 E22]       [E13 E14 E15]   [E5 E6 E7 E8]
[E12]               [E9 E10 E11]    [E1 E2 E4 E3]

4) Dynamic adaptation: To follow the seasonality changes of the user, for instance, we chose a fixed window of 2 days for the dynamic adaptation. Observing the user habit changes for 25 days, after 8 days of changes the content of the wake-up scenario is changed to [2-3-4]: open shutter, turn on TV, turn on hot water. It adapts to the new user habit from winter to summer for the turn on light activity in the morning (cf. Fig. 5). The proposed method showed its robustness by successfully partitioning the data despite the presence of noise and variability in the habits. It is also capable of coping with a certain amount of outliers, thus avoiding the need for extensive preprocessing of the data.
C. Real experimentation
1) Data acquisition: The experimentation was performed in the room of an aged patient with reduced physical abilities at the Kerpape center, based on preexisting home living devices and IR control. An IR receptor connected to a PC allowed us to record the patient's daily activities through his use of HAS devices. The equipment controlled by IR remote control was: a television, a central light, a bed light, two shutters, a telephone and a nurse call switch. The first issue we faced with the system was its inability to signal changes in the on/off states of the bed light and the shutters. Another problem was the central light, which could be activated by switches other than the one used by the patient, and that was often the case. That made us consider the activation of the bed light (ON bed light) and the activation of the shutters (ON shutter) as whole services irrespective of their on/off states. The activation of the central light was too questionable to take into account. On the other hand, since the services turn on television and turn off television are related, we decided to only consider the service turn on television and to collect the durations during which the television was on. We ended up with seven services:

ON bed light
ON left shutter
ON right shutter
Turn on television
Call medical staff
Hang on phone
Hang off phone

We collected data for 21 days and, in a first step, we used them to label the services according to their regularity. Regular services are services which are requested by the user on a number of days greater than a predefined threshold.


Following the OTs' advice, we set the threshold to 0.5, meaning that a regular service is a service which is required at least every two days. It happened that the user requested the services related to the phone and to the staff call frequently on some days, but over the observation period he did not request them regularly, so they were eliminated from the rest of the study. The activity recognition, coupled with an activity pruning, gave us the 14 activities listed in Tab. III. The activities have been labeled according to their mean occurrence times. However, their order of presentation does not presume their occurrence order: activity E3, for example, does not necessarily occur after activity E2. They may not happen together, or activity E2 may sometimes occur before E3.
TABLE III
TABLE OF USER PROFILE IN REAL EXPERIMENTATION

Label  Time              Daily activity     Habitual duration (lower quartile - upper quartile)
E1     [5h01 - 6h25]     ON bed light       /
E2     [5h36 - 7h14]     ON left shutter    /
E3     [5h44 - 7h43]     ON right shutter   /
E4     [5h50 - 8h52]     ON television      22 - 48
E5     [9h49 - 12h33]    ON television      47 - 101
E6     [11h53 - 13h12]   ON bed light       /
E7     [13h12 - 15h35]   ON television      76 - 261
E8     [15h07 - 16h59]   ON bed light       /
E9     [16h02 - 18h31]   ON television      17 - 68
E10    [19h28 - 20h29]   ON bed light       /
E11    [19h13 - 23h21]   ON television      42 - 52
E12    [21h58 - 22h31]   ON bed light       /
E13    [22h16 - 22h42]   ON right shutter   /
E14    [22h17 - 22h57]   ON left shutter    /
The clustering procedure led to the 2 × 2 grid presented in Tab. IV. This grid corresponds to the most stable clustering of the data according to the variation of information criterion. The four scenarios in the map have been ordered following the ordering procedure presented earlier. The scenarios received a further validation from the patient, who found them relevant, especially:

the wake-up scenario: switch on light, open left shutter, open right shutter, turn on television;
the go-to-bed scenario: switch off light, turn off television, close left shutter, close right shutter.


TABLE IV
SCENARIOS: DSOM RESULTS FOR A 2 × 2 GRID

[E5 E7 E6] [E1 E3 E2 E4]

[E8 E9 E10] [E12 E11 E14 E13]

Not so anecdotally, these scenarios showed that the patient is used to opening and closing the shutters in a particular order, which turned out to be dictated by the layout of the room. Pragmatically, these scenarios can be useful for this patient, who has great difficulty activating a service on his own on the HMI interface. Indeed, if we design the HMI interface according to the scenarios, activating a whole scenario can be done with a single click. For scenarios requesting step-by-step activations, we still save the user's efforts since, at each step, the most likely activity is proposed for activation. To conclude, despite poor quality data and limited services, we managed to bring out relevant scenarios and preferences of the patient. The simplification of the control of HAS devices can improve his comfort, while easing access to the services. Moreover, the order between activities within scenarios, which is based on the use frequency of these activities, provides useful information for the design of an HMI interface adapted to the user, by placing activities that follow each other close together in the HMI. In consequence, the user saves effort when requesting sub-scenarios.
D. Discussion
In the simulated data, it is assumed that a complete HAS equips the user context. Thus, we obtained new interesting scenarios adapted to the user habits, and corresponding to the QuatrA results. Moreover, the automatic generation of user daily activities allows us to cover a large number of user situations, in order to test the strategies of the proposed approach. With real data, although the current HAS has some limits, an interesting scenario that the user frequently requests was obtained, which is quite useful for the patient. The positive opinion from the patient at the Kerpape center emphasizes the usefulness of automatic scenarios in improving the user's autonomy in his own environment. These first results of automatically obtained scenarios show the feasibility of a flexible approach to propose assisted services adapted to the user's habits. From these results on simulated data and the first results of the current experimentation, the use of HAS services as built-in sensors has the double advantage of transparently collecting user data and proposing added value to the user. On the other hand, since we consider highly regular HAS use as trustworthy user habits, we can identify deviations as indicative anomalies for alert detection, which is not presented in this paper. Although we have constructed a realistic simulator, for which the proposed solutions behave appropriately, only real data enable validation of the proposed approach. This validation consists in checking that the real threshold values are close to the chosen values used in the simulator. We also noted that different users call for different threshold values, even in the same environment. So, considering that no tool can fully self-adapt to human factors and specificities, we came to the conclusion that an efficient approach consists in a comprehensive methodology sustained by a flexible software framework.


Therefore, the validation of the OT/user is integrated in the system for adapting the solution to each user's context. In this sense, the proposed method cannot be applied to a user without any regular habits, and the installation of a complete HAS is needed for an effective implementation of the proposed approach in a real context. Considering the important role of user and OT feedback, adapted HMI interfaces for the user and the OT are necessary to update the system parameters, which makes the system more dynamically adapted to the user context, i.e. a more flexible software framework.
VI. CONCLUSION
This paper describes the set-up of an AAL architecture for providing services to the user. We then show how this architecture can be used to propose scenarios. These scenarios are important to the user because they shorten the time that would be needed to perform each action separately while fitting their exact needs. Our originality lies in the activity recognition technique, which does not rely on data collected by dedicated sensors: we want to use information that is already present in a home automation system. To cope with the information we had, we set up probabilistic tools. Two main contributions have been proposed in this sense. The first contribution deals with the learning phase of user habits through a clustering procedure. This phase aims to recognize different activities of the same service, according to the observed occurrence times. Based on this first contribution, the second contribution is the identification of new scenarios for the user with disabilities in an automated environment. The combination of activity clustering and an ordering of the activities within scenarios allows us to obtain automatic scenarios that improve the user's autonomy while facilitating the use of daily services with a single control. These two contributions allow elderly and disabled people to be more autonomous, while ethical concerns are reduced with built-in sensors. However, the current experimentation cannot completely adapt to our needs. So, in future work, we intend to install the complete embedded HAS architecture in the Kerpape center to validate the solutions. The design of adapted HMI interfaces is also essential to deliver a full and flexible system for home healthcare.
REFERENCES
[1] F. de Lamotte et al. Quatra: Final report (technical report, in french). Lab-STICC. See demo here: http://www.youtube.com/watch?v=T6GCFnkLTc0, June 2008. [2] V. Barnett and J. Lewis. Outliers in statistical data. Eds. TECHNIP, 1994. [3] M. Chan, D. Esteve, C. Escriba, and E. Campo. A review of smart homes - present state and future challenges. Computer Methods and Programs in Biomedecine, pages 5581, 2008. [4] F. Doctor and H. Hagras. A fuzzy embedded agent-based approach for realizing ambient intelligence in intelligent inhabited environnements. IEEE Trans. Systems, Man, and Cybernetics, Part A, 35(1):5565, January 2005. [5] A. Fleury, M. Vacher, and N. Noury. SVM-Based multimodal classication of activities of daily living in health smart homes: Sensors, algorithms, and rst experimental results. IEEE Transactions on Information Technology in Biomedicine, 14(2):274283, 2010. [6] L. Hsien-Chou and T. Chien-Chih. A RDF and OWL-Based temporal context reasoning model for smart home. J. Information Technology, 2007. [7] H. Jian, D. Yumin, Z. Yong, and H. Zhangqin. Creating an Ambient-Intelligence environment using Multi-Agent system. In Proc. Int. Conf. Embedded Software and Systems Symposia, pages 253258, 2008.


[8] T. Kohonen. Self Organizing Maps. Springer, 3 edition, 2001. [9] T. Kohonen and P.J. Somervuo. Self-organizing maps of symbol strings. Neurocomputing, 21:1930, 1998. [10] N. Kushwaha, M. Kim, D.Y. Kim, and W-D. Cho. An intelligent agent for ubiquitous computing environments: smart home UT-AGENT. In Second IEEE Workshop on Software Technologies for Future Embedded and Ubiquitous Systems, pages 157159, 2004. [11] H. Kwee, M. Thonninsen, G. Cremers, J. Duimel, and R. Westgeest. Conguring the MANUS system. Proc. RESNA Int., pages 584587, 1992. [12] S. Lankri, P. Berruet, A. Rossi, and J.L. Philippe. Architecture and models of the danah assistive system. In Proc. of the 3rd Workshop on Services Integration in Pervasive Environments (SIPE 2008), 2008. [13] A. Loureiro, L. Torgo, and C. Soares. Outlier detection using clustering methods: a data cleaning application. In In proceedings of the Data Mining for Business Workshop, 2004. [14] M. Meila. Comparing clusterings - an information based distance. J. Multivariate Analysis, 98(5):873895, May 2007. [15] M.J. Topping and J.K. Smith. The development of HANDY 1. A robotic system to assist the severely disabled. Technology and Disability, 10(2):95105, 1999. [16] U. Naeem, J. Bigham, and J. Wang. Recognising activities of daily life using hierarchical plans. Lecture Notes in Computer Science, 4793:175, 2007. [17] N. Noury, G. Virone, J. Ye, V. Riall, and J. Demongeot. New trends in health smart homes. ITBM-RBM, 24:122135, 2003. [18] B. Ostlund. Watching television in later life: a deeper understanding of TV viewing in the homes of old people and in geriatric care contexts. Scandinavian J. of Caring Sciences, dec 2009. [19] E. Tapia, S. Intille, and K. Larson. Activity recognition in the home using simple and ubiquitous sensors. In Pervasive Computing, pages 175158. 2004. [20] T.S. Barger, D.E. Brown, and M. Alwan. Health-status monitoring through analysis of behavioral patterns. IEEE Trans. Systems, Man, and Cybernetics, Part A, 35(1):2227, 2005. [21] G. Virone, N. Noury, and J. Demongeot. System for automatic measurement of circadian activity in telemedicine. IEEE Transactions on Biomedical Engineering, 49:14631469, 2002. [22] M. Yan and K. Ye. Determining the number of clusters using the weighted gap statistic. Biometrics, 63(4):10311037, 2007. [23] N. Zouba, F. Bremond, M. Thonnat, and V.T. Vu. Multi-sensors analysis for everyday activity monitoring. Int. Conf. Science of Electronic, Technololgy of Information and Telecommunication, 4, March 2007.
