


|MOBISERV – FP7 248434 | |

|An Integrated Intelligent Home Environment for the Provision of Health, Nutrition and Mobility Services to the Elderly | |

| | |

| | |

| |Final Deliverable | |

| |D2.2: MOBISERV Validation Plan (Issue 3) |

| |Date of delivery: Sept 30th, 2011 (Updated to Dec 16th 2011) |

| |Contributing Partners: UWE, ST, SMARTEX, CSEM, ROBS, AUTH, SMH |

| |Date Issued: 20th Dec 2011 |Version: Issue 3 v6.0 |

Document Control

|Title: |D2.2: MOBISERV Validation Plan |

|Project: |MOBISERV (FP7 248434) |

|Nature: |Report |Dissemination Level: Restricted |

|Authors: |UWE, SMH, SYSTEMA, AUTH, CSEM, SMARTEX, ROBOSOFT |

|Origin: |UWE |

|Doc ID: |MOBISERV D2.2vol3_v6.0 |

Amendment History

|Version |Date |Author |Description/Comments |

|v0.1 |2011-09-12 |UWE |First Version |

|v1.1 |2011-09-21 |UWE |Second Version |

|v2.0 |2011-09-26 |UWE, SMH |Third Version – Requesting Technical Partner input |

|v3.0 |2011-10-17 |UWE, SMH, ST, CSEM and Smartex |Input from Systema, CSEM and Smartex |

|v4.0 |2011-12-04 |UWE, SMH, ST, CSEM, Smartex, ROBS, AUTH |Input from Robosoft and Thessaloniki |

|v5.0 |2011-12-16 |UWE, SMH, ST, CSEM, Smartex, ROBS, AUTH |Final review |

|v6.0 |2011-12-20 |UWE, SMH, ST, CSEM, Smartex, ROBS, AUTH |Responses to Internal Moderation (LUT) incorporated |

Table of contents

Executive Summary 10

1 Introduction 12

1.1 System and scope 12

1.2 Objectives and constraints 13

1.3 Intended Audience 14

2 Qualification and Validation of MOBISERV system 15

2.1 Validation approach 15

2.2 WP 4 Component Validation (Nutrition Support System) 17

2.2.1 KPIs and Validation Plan 17

2.2.1.1 KPIs to be used 17

2.2.1.2 Benchmarking Tests to be carried out – Methodology 20

2.3 WP 5 Component Validation (Data Logger) 22

2.3.1 KPIs and Validation Plan 22

2.3.1.1 KPIs to be used 23

2.3.1.2 Benchmarking Tests to be carried out – Methodology 24

2.4 WP 5 Component Validation (Smart Garments) 26

2.4.1 KPIs and Validation Plan 26

2.4.1.1 KPIs to be used 26

2.4.1.2 Benchmarking Tests to be carried out – Methodology 27

2.4.1.3 Operational safety criteria and testing process 29

2.5 WP 6 Component Validation (Information and Coordination and Communication support system) 30

2.5.1 KPIs and Validation Plan 31

2.5.1.1 KPIs to be used 31

2.5.1.2 Benchmarking Tests to be carried out – Methodology 32

2.5.1.3 Operational safety criteria and testing process 37

2.6 WP 7 Component Validation (Robotic Platform) 40

2.6.1.1 KPIs to be used 40

2.6.1.2 Benchmarking Tests to be carried out – Methodology 42

2.6.1.3 Operational safety criteria and testing process 45

3 Usability Evaluation Plan (Field Trials at User Sites) 50

3.1 Scope of the first prototype evaluation 50

3.2 Scope of the second prototype evaluation 50

3.3 Key performance indicators 51

3.4 Part A Expert evaluation 51

3.4.1 Participation 51

3.4.2 Technical requirements 51

3.4.3 Aim 51

3.4.4 Activities 51

3.4.5 Outcome 52

3.5 Part B Small focus groups 52

3.5.1 Participation 52

3.5.2 Technical requirements 52

3.5.3 Aim 52

3.5.4 Activities 52

3.5.5 Outcome 53

3.6 Part C Field trials of individual components with users 54

3.6.1 Study Aims 54

3.6.2 Participants 54

3.6.3 Timescales 54

3.6.4 Functions to be evaluated 55

3.6.5 Session 1 (50 minutes) – at individual participant’s homes 55

3.6.5.1 Informed consent and explanation of session (10 minutes) 55

3.6.5.2 Pre-test interview (10 mins) 55

3.6.5.3 Voice training (30 mins) 55

3.6.6 Session 2 (50 minutes) – 2 focus groups (older people and carers) 55

3.6.6.1 Explanation of session (10 minutes) 55

3.6.6.2 Demonstration of PRU (30 minutes) 55

3.6.6.3 Demonstration of WHSU (UK only) (10 minutes) 56

3.6.7 Session 3 (90 minutes) – with individual participants at UK/NL test locations 56

3.6.7.1 Test scenarios (45 mins) 56

3.6.7.1.1 Primary users 56

3.6.7.1.2 Secondary users 58

3.6.7.2 Break (5 mins) 58

3.6.8 Post-session discussion (40 mins) 58

3.6.8.1 Level of user satisfaction 59

3.6.8.2 Level of ease of use (ergonomic) of the input devices for the function 59

3.6.8.3 User rating of function output/feedback (in relation to quality, utility and comprehensibility) 60

3.6.8.4 Acceptance criteria 61

3.6.9 Analysis Plan 61

3.6.9.1 Data analysis after sessions 61

3.6.9.1.1 Level of function usage by the user 61

3.6.9.1.2 System response time for the component/function to user input (voice and touch) 62

3.6.9.1.3 Success of system to adapt to change in environment (e.g. background noises, lighting) 62

3.6.9.2 Video analysis after the sessions 62

Ease of configurability of function settings 63

3.6.9.3 Outcome 63

3.7 Part D Hazard Analysis 64

4 Responsibilities and Documentation 66

4.1 WP leader responsibilities 66

5 References 67

6 Appendix 68

6.1 MOBISERV Project Objectives 68

6.2 Smart home environment 69

6.3 Heuristics for Expert Evaluation 70

6.4 Specification of the Evaluation Wizard 73

6.4.1 Introduction 73

6.4.2 Requirements for Wizard 73

6.4.2.1 Global navigation functions 73

6.4.2.2 Global HMI functions 73

6.4.3 Specific functions 74

6.4.3.1 Drinking 74

6.4.3.2 Eating 74

6.4.3.3 Front door 74

6.4.3.4 Exercise 74

6.4.3.5 Voice/video 74

6.4.4 Proposed design of the Wizard interface 75

6.4.5 Other issues 75

Table of Figures

Figure 1: Relation between project deliverables 13

Figure 2 USE and Validation as part of system development process 15

Figure 3 Validation curves for a continuous video sequence from MOBISERV-AIIA database. 22

Figure 3 2D Layout - Smartest Home of The Netherlands in Eindhoven 69

Figure 4 Possible wizard control panel layout 75

List of Tables

Table 1 Functional and Non-functional requirements 16

Table 2 KPI Table WP4 Components 18

Table 3 Confusion matrix for an LODO cross-validation experiment with 40 dynemes and classification accuracy rate 79.82%. 19

Table 4 KPI Measurement Table WP4 Components 20

Table 5 KPI Table WP5 Components (Data Logger) 24

Table 6 KPI Measurement Table WP5 Components (Data Logger) 25

Table 7 KPI Table WP5 Components (Smart Garments) 27

Table 8 KPI Measurement Table WP5 Components (Smart Garments) 29

Table 9 KPI Table WP6 Components 32

Table 10 KPI Measurement Table WP6 Components 36

Table 11 KPI Table WP7 Components 42

Table 12 KPI Measurement Table WP7 Components 45

Table 13 Technical Characteristics of Kompai 46

Table 14 Kompai Control Panel Components 48

Table 15 Scope of focus groups 53

Table 16 Target participants for field trials 54

Table 17 Field trial timeline 55

Table 18 Project Objectives with Targets 68

Table 19 Heuristics for Expert evaluation 72

Glossary

|Term |Explanation |

|MOBISERV |An Integrated Intelligent Home Environment for the Provision of Health, Nutrition and Mobility Services to the Elderly |

|KPI |Key Performance Indicator |

|USE |Usability Evaluation |

|ILAEXP |Independent Living & Ageing & cross-industrial committee of experts |

|PRU |Physical Robotic Unit |

|WHSU |Wearable Health Support Unit |

Executive Summary

This document, D2.2: MOBISERV Validation Plan - Issue 3, builds on the previous versions of D2.2 and D2.4 (available from the MOBISERV project website) by defining in more detail the VALIDATION PLAN for the MOBISERV system, its components and the required processes. The results will be documented in protocols and separate reports that clearly specify whether or not the component, the process or the entire MOBISERV system meets the pre-defined project functional and non-functional requirements.

In the present approach, User Evaluation and Validation will be considered against identified qualitative and quantitative key performance indicators (KPIs) to satisfy the functional and non-functional requirements identified at the start of the project as part of the user needs assessment (see document D2.3: Initial System Requirements Specification, Vols I and II) and the main MOBISERV project objectives outlined in Appendix section 6.1.

Controlled experiments for both laboratory and field-testing at user sites will support this validation process to measure the acceptability and usability of components and system functions. A complete list of criteria and methods is given in D2.4, along with descriptions in KPI tables in this document.

This version further updates the identified KPIs for specific functional requirements that the ILAEXP committee deemed high priority in relation to the core MOBISERV aims and objectives. It also includes qualitative KPIs and specific KPIs for MOBISERV components. The third update of the plan will complete this list when the first context-aware and nutrition support prototype, the first coordination and communication system prototype and the first robotic platform prototype are available to the Usability Evaluation and Validation experts.

In summary, the objectives of the Usability Evaluation and Validation process in MOBISERV are as follows:

1. Create evidence-based documents that clearly specify whether or not the component, the process or the entire MOBISERV system meets the pre-defined project requirements (see D2.3 vols I and II).

2. Identify possible hazards in the system (e.g. through a hazard analysis).

3. Identify appropriate and measurable KPIs for each of the MOBISERV system components (WP4, WP5 and WP6), both as stand-alone components and within the integrated overall MOBISERV system.

4. Verify that the MOBISERV equipment and their integration meet the functional and non-functional requirements identified in D2.3 vols I and II.

5. Show that MOBISERV can achieve its overall objective, which is to develop and use up-to-date technology in a coordinated, intelligent and easy-to-use way to support independent living of older persons.

Introduction

1 System and scope

This document presents the validation and field evaluation plans for the MOBISERV platform, providing an Integrated Intelligent Home Environment for the Provision of Health, Nutrition and Mobility Services to the Elderly. The system consists of various complex technical components and interacting processes that will be integrated into an automated system in a so-called ‘smart home environment’ in Eindhoven, NL (Appendix Figure 3), which will be one of the user evaluation sites; the other evaluations will be conducted in Bristol, UK. The main components of this system are:

• An autonomous mobile personal robotic unit (PRU) with an interactive GUI

• a multi-sensor system embedded in textiles for vital signs and activity monitoring

• a context-aware and nutrition support system (monitoring system) and

• a coordination and communication system

The present Validation Plan (D2.2) is based on the EU guideline for Good Manufacturing Practice (GMP)[1]. Annex 15 of this guideline outlines the principles for the Qualification and Validation of equipment and processes.

The plan will be maintained as a live document that will be reviewed throughout the project. Two updates of the initial plan (which was submitted at the end of the first quarter of the first year) were planned for the time periods when MOBISERV components, including their specific technical component specifications, became available to the Usability Evaluation (USE) experts. This is the second update, provided in calendar month M21, when the first context-aware and nutrition support prototype, the first coordination and communication system prototype and the first robotic platform prototype are available. Figure 1 outlines these three deliverables (D2.2 Issue 1, D2.2 Issue 2 and D2.2 Issue 3) with respect to other deliverables of the current project.

[pic]

Figure 1: Relation between project deliverables

2 Objectives and constraints

WP 2 of the overall project seeks to develop and employ a user-centred development process, where the key focus is to ensure that at each stage of the product design process there is a rigorous analysis and evaluation of each component from the end-user’s perspective. This is done to ensure that, by the time each component is integrated into the overall MOBISERV system, it provides the necessary functionality and is considered usable by the end users.

The objectives of the USE (Usability Evaluation) and Validation process can be summarised as follows:

1. Create evidence-based documents that clearly specify whether or not the component, the process or the entire MOBISERV system meets the pre-defined project requirements (see D2.3 vols I, II and III).

2. Identify possible hazards in the system (e.g. through a hazard analysis).

3. Identify appropriate and measurable Key Performance Indicators (KPIs) for each of the MOBISERV system components (WP4, WP5 and WP6), both as stand-alone components and within the integrated overall MOBISERV system.

4. Verify that the MOBISERV equipment and their integration meet the functional and non-functional requirements identified in D2.3.

5. Show that MOBISERV can achieve its overall objective which is to develop and use up-to-date technology in a coordinated, intelligent and easy to use way to support independent living of older persons.

3 Intended Audience

The Validation and Usability Evaluation Plan provides a means of communication to everyone associated with the project. All project members will be able to use this document as a guide to implement the functional and non-functional requirements identified as part of the user needs assessment. All technical WP leaders have provided information for this document as to how their individual and integrated components will be validated.

The document will be reviewed by the technology manager (TM) and committee and maintained by the USE (Usability Evaluation) representatives and USE management to ensure conformance to stated process and procedures and hence delivery against stated objectives.

Qualification and Validation of MOBISERV system

1 Validation approach

As already outlined in section 1.2, WP2 aims to employ a user-centred development process where the key focus is to ensure that at each stage of the product design process there is a rigorous analysis and evaluation of each component from the end-user’s perspective. The diagram below outlines a general product development process, with the stages that are part of our USE and Validation approach highlighted.

[pic]

Figure 2 USE and Validation as part of system development process

(adapted from Roozenburg et al., 1995)

Usability Evaluation and Validation will be considered against qualitative and quantitative key performance indicators (KPIs)[2] in relation to the functional and non-functional requirements identified at the start of the project as part of the user needs assessment (see document D2.3 vols I and II) and the main MOBISERV project objectives outlined in Appendix section 6.1. Functional requirements describe the behaviours (functions or services) of the system that support user/project goals, tasks and activities (Malan and Bredemeyer, 2001). A complete list of the identified functional requirements is given in D2.3 Vols I and II. The main functional system requirements of the MOBISERV system, selected by the ILAEXP committee, are given below in Table 1. Non-functional requirements, on the other hand, are important properties and characteristics of the system, i.e. qualities that the users care about and that will therefore affect their degree of satisfaction with the system. These non-functional requirements were also identified within WP 2 as part of the user assessment (D2.3 vol II and D2.4).

Table 1 Functional and Non-functional requirements

|Main Functional system requirements selected by ILAEXP |Selected Non-functional system requirements |

|Reminder and Encouragement to eat |Customisability and Support |

|Reminder and Encouragement to drink |Effectiveness and Support |

|Telemedicine/ self check platform |Security and privacy |

|Games for social and cognitive stimulation |Engagement and effectiveness |

|A mobile screen connected to the front door |Reliability |

|Response to call for help from user |Effectiveness |

|Voice/Video/SMS via Robot (to communicate with friends and relatives; for a social caregiver to access remotely) |Ease of communication |

|Encouragement to exercise |Adaptability to user’s individual patterns of behaviour |

|Report and communication to health professionals |Privacy and reliability |

2 WP 4 Component Validation (Nutrition Support System)

1 KPIs and Validation Plan

AUTH has focused its research on eating and drinking activity detection and recognition. Regarding the functionalities of the Nutrition Support System, the eating and drinking detection and recognition subsystem relates to the ‘Reminder and Encouragement to eat’ and ‘Reminder and Encouragement to drink’ components.

Eating/drinking activity recognition is performed by a module named NutritionActivityDetection, which will interact with a Microsoft Robotics Developer Studio (MRDS) service named NutritionActivity. Monitoring will be performed at pre-specified time intervals. Information about these time intervals will be provided by the older person or a secondary user (through a graphical user interface), stored in a database and retrieved by the NutritionAgenda module. The InteractionManager will inform the NutritionActivity service that the NutritionActivityDetection module should be initialized and start monitoring. Furthermore, the InteractionManager will interact with the NutritionActivity service in order to obtain information about the eating/drinking activity of the person under consideration. The NutritionActivityDetection module should be able to respond to questions posed by the NutritionActivity service, giving a measure of confidence.
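The interaction described above can be sketched as follows. This is a purely illustrative Python sketch of the message flow; the actual implementation is an MRDS service, and all class and method names below other than the module/service names from the text are invented:

```python
# Illustrative sketch of the module/service interaction described above.
# The real implementation is a Microsoft Robotics Developer Studio service;
# method names and signatures here are invented to show the flow only.

class NutritionActivityDetection:
    def start_monitoring(self, interval):
        self.interval = interval          # pre-specified monitoring interval

    def query(self, activity):
        # Returns (detected?, measure of confidence in [0, 1]).
        return True, 0.85                 # placeholder detection result

class NutritionActivityService:
    def __init__(self):
        self.detector = NutritionActivityDetection()

    def initialise(self, interval_from_agenda):
        # The InteractionManager asks the service to initialise the detector
        # with an interval retrieved by the NutritionAgenda module.
        self.detector.start_monitoring(interval_from_agenda)

    def ask(self, activity):
        # The InteractionManager asks about eating/drinking activity;
        # the answer carries a measure of confidence.
        return self.detector.query(activity)

service = NutritionActivityService()
service.initialise(interval_from_agenda=("12:00", "13:00"))
print(service.ask("eating"))  # (True, 0.85)
```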

For facial expression recognition, the developed module classifies a provided facial image, depicting a subject performing a facial action, into one of seven facial expression classes, namely: Anger, Disgust, Fear, Happiness, Sadness, Surprise and Neutral. For details, the reader is referred to D4.1 and D4.2.

1 KPIs to be used

In the validation plan we identify KPIs to give a qualitative and quantitative assessment of the performance of the Eating and Drinking Detection and Tracking Algorithms. The performance indicators will be based on various measures described below.

|WP |Sub ID |System/Sub-Component |Name of KPI |Type of Measure |Expected Target Value |Nature of Expected Result |

|4 |KPI4_1 |Eating and Drinking Detection Algorithm |Accuracy Rate |Accuracy |Maximized (e.g. > 80%) |number |

|4 |KPI4_2 |Eating and Drinking Detection Algorithm |Confusion Matrices |Classification Accuracy |Balanced accuracy between the classes (e.g. difference < 10%) |NxN matrices, N = number of classes |

|4 |KPI4_3 |Eating and Drinking Detection Algorithm |True Positives |Statistical Result |Maximized (e.g. > 80%) |number |

|4 |KPI4_4 |Eating and Drinking Detection Algorithm |False Positives |Statistical Result |Minimized |number |

|4 |KPI4_5 |Eating and Drinking Detection Algorithm |Average Tracking Accuracy |Accuracy |Maximized (e.g. > 0.6) |number |

|4 |KPI4_6 |Eating and Drinking Detection Algorithm |Measure of Confidence |Confidence of the activity detection results |Low confidence for the false positives (e.g. < 0.6) |number |

|4 |KPI4_7 |Facial Expression Recognition Algorithm |Accuracy Rate |Classification Accuracy |Maximized (e.g. > 80%) |number |

Table 2 KPI Table WP4 Components

Accuracy Rate

The classification accuracy rate is calculated as the number of samples classified in the class to which they actually belong, divided by the total number of samples and multiplied by 100. It is used in the various cross-validation experiments conducted to evaluate our algorithms.

Confusion Matrices

The confusion matrix shows the class in which each elementary activity video should be classified and the class to which it was finally assigned. This gives a clearer picture of the classification results.

|Actual \ Found |eat |drink |apraxia |

|eat |304 |45 |21 |

|drink |74 |218 |0 |

|apraxia |32 |10 |198 |

Table 3 Confusion matrix for an LODO cross-validation experiment with 40 dynemes and classification accuracy rate 79.82%.
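The accuracy-rate definition above can be checked directly against Table 3; a minimal Python sketch (the dictionary layout is just one convenient representation of the matrix):

```python
# Classification accuracy rate from a confusion matrix, as defined above:
# correctly classified samples (the diagonal) divided by all samples, x100.
# The matrix below reproduces Table 3 (rows: actual class, columns: found).

confusion = {
    "eat":     {"eat": 304, "drink": 45,  "apraxia": 21},
    "drink":   {"eat": 74,  "drink": 218, "apraxia": 0},
    "apraxia": {"eat": 32,  "drink": 10,  "apraxia": 198},
}

def accuracy_rate(matrix):
    correct = sum(matrix[c][c] for c in matrix)               # diagonal
    total = sum(sum(row.values()) for row in matrix.values())  # all samples
    return 100.0 * correct / total

print(f"{accuracy_rate(confusion):.2f}%")  # 79.82%
```

The diagonal holds the correctly classified videos (304 + 218 + 198 = 720 of 902 samples), which reproduces the 79.82% rate stated in the caption.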

True Positives

A sample is identified as belonging to the class (the primary class) to which it actually belongs.

False Positives

A sample is identified as belonging to a class to which it does not belong. For example, an elementary activity video is classified in the ‘eating activity’ class although it actually shows a different activity.

Average Tracking Accuracy

Average Tracking Accuracy (ATA) is calculated by averaging the Frame Detection Accuracy (FDA) measure over all video frames. FDA calculates the overlap area between the ground truth object G and the detected object D at a given video frame t. It takes values from 0 (when the object is lost) to 1 (when detection is accurate).
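The FDA/ATA computation can be sketched as follows. In this Python sketch the per-frame overlap is taken as intersection-over-union of bounding boxes, which is one common way to realise the overlap measure described above; the exact formulation used in MOBISERV may differ:

```python
# Sketch of Frame Detection Accuracy (FDA) and Average Tracking Accuracy (ATA).
# Boxes are (x1, y1, x2, y2). FDA here is intersection-over-union between the
# ground-truth object G and the detected object D at a given frame: it is 0
# when the object is lost and 1 when detection is exact.

def fda(g, d):
    if g is None or d is None:                    # object lost in this frame
        return 0.0
    ix1, iy1 = max(g[0], d[0]), max(g[1], d[1])
    ix2, iy2 = min(g[2], d[2]), min(g[3], d[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(g) + area(d) - inter
    return inter / union if union else 0.0

def ata(ground_truth, detections):
    """Average the FDA measure over all video frames."""
    scores = [fda(g, d) for g, d in zip(ground_truth, detections)]
    return sum(scores) / len(scores)

# Perfect overlap in frame 1, lost object in frame 2 -> ATA = 0.5
print(ata([(0, 0, 10, 10), (0, 0, 10, 10)],
          [(0, 0, 10, 10), None]))
```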

Measure of Confidence

To take a decision based on the activity recognition results, we post-process them based on the frequency of the recognized activities. In detail, the algorithm counts the eating atoms in the XML file as well as their occurrence timespan. A binary decision for eating and drinking activity is then taken against a threshold value and, based on the count of eating atoms and their timespan, a confidence level is extracted for this decision.
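The decision step above can be sketched as follows (an illustrative Python sketch: the thresholds and the confidence formula are invented for illustration, not the project's actual values):

```python
# Sketch of the post-processing decision described above: count the recognised
# "eating atoms" and their total timespan, take a binary decision against a
# threshold, and derive a confidence level for that decision.
# Thresholds and the confidence weighting are illustrative only.

def eating_decision(atom_timespans, count_threshold=5, span_threshold=10.0):
    """atom_timespans: durations (in seconds) of detected eating atoms."""
    count = len(atom_timespans)
    total_span = sum(atom_timespans)
    eating = count >= count_threshold and total_span >= span_threshold
    # Confidence grows with both sources of evidence, capped at 1.0.
    confidence = min(1.0, 0.5 * (count / count_threshold)
                          + 0.5 * (total_span / span_threshold))
    return eating, round(confidence, 2)

print(eating_decision([2.0, 3.5, 2.5, 4.0, 3.0]))  # (True, 1.0)
```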

|WP |Sub ID |System/Sub-Component |Name of KPI |Method of Measuring |Data set to be used for testing |Number of repetitions |

|4 |KPI4_1 |Eating and Drinking Detection Algorithm |Accuracy rate |Leave one day out cross validation (classification accuracy) |MOBISERV-AIIA Eating and Drinking Activity Recognition Database |4 |

|4 |KPI4_2 |Eating and Drinking Detection Algorithm |Accuracy rate |Leave one out cross validation (classification accuracy) |MOBISERV-AIIA Eating and Drinking Activity Recognition Database |12 |

|4 |KPI4_3 |Eating and Drinking Detection Algorithm |Accuracy rate |Leave one out cross validation (classification accuracy) |ANANZ Eating and Drinking Database |3 |

|4 |KPI4_4 |Eating and Drinking Detection Algorithm |Accuracy rate (frame by frame) |Validation Curves |MOBISERV-AIIA Eating and Drinking Activity Recognition Database |1 |

|4 |KPI4_5 |Eating and Drinking Detection Algorithm |Accuracy rate (frame by frame) |Validation Curves |ANANZ Eating and Drinking Database |1 |

|4 |KPI4_6 |Facial Expression Recognition Algorithm |Accuracy rate |Leave one person out cross validation |Cohn-Kanade Database |5 |

|4 |KPI4_7 |Facial Expression Recognition Algorithm |Accuracy rate |Leave one person out cross validation |JAFFE |5 |

Table 4 KPI Measurement Table WP4 Components

2 Benchmarking Tests to be carried out – Methodology

To evaluate the performance of our algorithms, various methods of measuring the KPIs were used.

Leave One Person Out Cross Validation

This generalized form of Leave One Out Cross Validation excludes the whole set of patterns belonging to a specific person from the training set and uses this set as validation data. The procedure is repeated as many times as the number of persons who participated in the recordings of the MOBISERV-AIIA database.

Leave One Day Out Cross Validation

This type of Cross Validation excludes the patterns belonging to the recordings of one day from the training set and uses this set as validation data. This is repeated 4 times, which is the number of distinct recordings of each person in the MOBISERV-AIIA database.
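Both procedures above are instances of grouped cross-validation, differing only in the grouping key (person or recording day). A minimal Python sketch, with placeholder train/evaluate functions standing in for the actual recognition pipeline:

```python
# Sketch of the leave-one-person-out / leave-one-day-out procedure: each fold
# holds out every sample sharing one group key, trains on the rest and
# validates on the held-out set. One repetition per person (or per day).

def grouped_cross_validation(samples, group_key, train, evaluate):
    """samples: list of dicts; group_key: e.g. 'person' or 'day'."""
    groups = sorted({s[group_key] for s in samples})
    accuracies = []
    for g in groups:
        held_out = [s for s in samples if s[group_key] == g]
        training = [s for s in samples if s[group_key] != g]
        model = train(training)
        accuracies.append(evaluate(model, held_out))
    return sum(accuracies) / len(accuracies)

# Toy usage: 3 persons x 4 recording days, dummy pipeline.
samples = [{"person": p, "day": d} for p in "ABC" for d in range(4)]
mean_acc = grouped_cross_validation(samples, "person",
                                    train=lambda t: None,
                                    evaluate=lambda m, h: 100.0)
print(mean_acc)  # 100.0
```

Grouping by "person" yields as many folds as participants (leave one person out); grouping by "day" yields one fold per recording day (leave one day out).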

Validation Curves

In order to evaluate the performance of an activity recognition algorithm with a trained model applied in a whole video sequence (continuous functionality) a specific procedure is followed.

Two vectors are created from the ‘cut’ annotation file for each video recorded from the frontal camera: a vector containing the number of every frame, and another vector containing the corresponding activity label for each of these frames: ‘0’ for drinking activity, ‘1’ for apraxia and ‘2’ for eating activity. For example, all the frames between the frame of ‘eat_sp1’ and the frame of ‘eat_sp4’ are assigned the value ‘2’. Given this continuous sequence, the ground truth of every video in the MOBISERV-AIIA database can be represented graphically by drawing a curve.

The application of our activity recognition algorithms to a continuous video produces a .txt file containing the number of each frame with the corresponding activity label next to it. Two vectors are created for the algorithm’s output as above: one vector with the frames of the .txt file and another with the labels for each class, as for the ground truth: ‘0’ for drinking, ‘1’ for apraxia and ‘2’ for eating. We apply a median filter to the second vector to smooth the resulting curve, replacing each label with the median value of a fixed-length sequence of labels.

We compare the vectors containing the labels of the ground truth and of the algorithm’s output frame by frame in order to compute the correct activity classification rate: when the labels are identical, a counter is increased by one, and this value is finally divided by the number of frames compared. Figure 3 presents the two curves drawn from the label vectors in order to visualise the results. The ground-truth curve is the wide one (red) and the curve from the algorithm’s output is the dashed one (blue). A median filter with a sequence length of 15 labels was applied, and the correct activity classification rate for this example, evaluated frame by frame, was 79.78%.

The horizontal axis indicates the frame index from the time a person sits on the chair until he/she stands up. The vertical axis indicates the indices of the three classes: drink (value ‘0’), apraxia (value ‘1’) and eat (value ‘2’). From these vectors and curves, various metrics can be computed, such as accuracy rates, true positives for each activity class, false positives, etc.
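The comparison procedure above can be sketched as follows (a simplified Python illustration; the filter length of 15 follows the example in the text, while the toy label sequences are invented):

```python
# Sketch of the frame-by-frame comparison described above. Labels are
# 0 = drink, 1 = apraxia, 2 = eat. The algorithm output is smoothed with a
# median filter before being compared against the ground-truth vector.

def median_filter(labels, length=15):
    half = length // 2
    out = []
    for i in range(len(labels)):
        window = sorted(labels[max(0, i - half):i + half + 1])
        out.append(window[len(window) // 2])
    return out

def frame_accuracy(ground_truth, output, filter_length=15):
    smoothed = median_filter(output, filter_length)
    correct = sum(1 for g, o in zip(ground_truth, smoothed) if g == o)
    return 100.0 * correct / len(ground_truth)

# A single spurious 'drink' frame inside an eating sequence is removed
# by the median filter, so the smoothed output matches the ground truth.
print(frame_accuracy([2] * 30, [2] * 14 + [0] + [2] * 15))  # 100.0
```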

[pic]

Figure 3 Validation curves for a continuous video sequence from MOBISERV-AIIA database.

All tests will take place in a laboratory according to the applicable safety rules.

3 WP 5 Component Validation (Data Logger)

1 KPIs and Validation Plan

The vital signs of the elderly person to be monitored are acquired by means of textile electrodes at the sensor level and by an embedded electronics device named the Data Logger. The latter consists of a small wearable device connected to the garment by means of a jack connector. The Data Logger also integrates a Wireless Body Area Network (WBAN) used to measure the skin temperature by means of an NTC sensor located on the body. The vital signs (ECG, respiration, 3-axis acceleration) and the extracted parameters (heart rate, breathing rate, and activity classification: posture (lying, standing) and activity (walking and running)) can be streamed in real time to the Kompai robot (or a GUI on a PC), or recorded in the Data Logger when stand-alone mode is selected. For more detail, please refer to D5.3: Multi-sensor system integrated into wearable fabrics.

1 KPIs to be used

In the validation plan we identify KPIs to give a qualitative and quantitative assessment of performance, with a garment integrating textile electrodes used as reference. The performance indicators will be based on orthostatic posture and reduced activities (walking). The measurements will be compared with the following gold standards:

• Heart Rate: Holter Lifecard CR

• Breathing Rate: MetaMax 3B

|WP |Sub ID |System/Sub-Component |Name of KPI |Type of Measure |Expected Target Value |Nature of Expected Result |

|5 |KPI5_1 |Data Logger |Posture |Streaming mode | |Reaction time: < 5 s |

|5 |KPI5_2 |Data Logger |Posture |Others |0 (Class) | |

|5 |KPI5_3 |Data Logger |Posture |Lying on the back |1 (Class) | |

|5 |KPI5_4 |Data Logger |Posture |Lying face down |1 (Class) | |

|5 |KPI5_5 |Data Logger |Posture |Lying on the right side |1 (Class) | |

|5 |KPI5_6 |Data Logger |Posture |Lying on the left side |1 (Class) | |

|5 |KPI5_7 |Data Logger |Posture |Standing |2 (Class) | |

|5 |KPI5_8 |Data Logger |Activity |Walking on treadmill (4 km/h) |3 (Class) | |

|5 |KPI5_9 |Data Logger |Activity |Running on treadmill (10 km/h) |4 (Class) | |

|5 |KPI5_10 |Data Logger |HR |Comparison with gold standard |Percent of time within ±3; percent of time within ±10 |Beats/min |

|5 |KPI5_11 |Data Logger |Respiration |Comparison with gold standard |Percent of time within ±3; percent of time within ±10 |Breaths/min |

Table 5 KPI Table WP5 Components (Data Logger)
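The class codes in Table 5 can be illustrated with a toy classifier. This is a purely illustrative Python sketch of how posture and activity classes might be assigned from 3-axis acceleration: the thresholds, the axis convention and the motion-energy feature are all invented for illustration and are not CSEM's actual algorithm:

```python
# Illustrative assignment of Table 5's class codes from 3-axis acceleration:
# the orientation of the gravity vector separates lying from upright, and a
# motion-energy feature separates standing / walking / running.
# All thresholds and conventions below are invented for illustration.
import math

def classify_posture(ax, ay, az, motion_energy):
    """ax, ay, az: static acceleration in g; az assumed along the trunk."""
    if motion_energy > 2.0:
        return 4                      # running on treadmill
    if motion_energy > 0.5:
        return 3                      # walking on treadmill
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, az))))
    if tilt < 30:
        return 2                      # standing (trunk near vertical)
    if tilt > 60:
        return 1                      # lying (back, front or either side)
    return 0                          # others / indeterminate

print(classify_posture(0.0, 0.0, 1.0, 0.1))  # 2 (standing)
print(classify_posture(1.0, 0.0, 0.0, 0.1))  # 1 (lying)
```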

2 Benchmarking Tests to be carried out – Methodology

Validation tests will be done at the CSEM premises in the biomedical laboratory. The gold standards will be used as the reference for the measurements, and the comparison of the values given by the gold standard and the Data Logger will be recorded. Three levels of accuracy will be defined based on the usual validation practice. A treadmill will be used for the activity measurement (walking), with the aim of defining an accurate level of activity, such as the walking speed.
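The "percent of time within ±3 / ±10 beats per minute" criterion from Table 5 can be computed as follows (an illustrative Python sketch; the sample heart-rate series are invented):

```python
# Sketch of the comparison against the gold standard: both devices are
# sampled on a common time base and the fraction of samples falling inside
# each tolerance band is reported as a percentage of time.

def percent_within(reference, measured, tolerance):
    inside = sum(1 for r, m in zip(reference, measured)
                 if abs(r - m) <= tolerance)
    return round(100.0 * inside / len(reference), 1)

holter = [72, 74, 75, 80, 78, 77]   # gold standard HR (beats/min), invented
logger = [73, 74, 79, 81, 90, 76]   # Data Logger HR (beats/min), invented

print(percent_within(holter, logger, 3))   # 66.7 (% of time within +/-3)
print(percent_within(holter, logger, 10))  # 83.3 (% of time within +/-10)
```

The same function applies unchanged to the breathing-rate comparison (KPI5_11), with tolerances expressed in breaths per minute.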

|WP |Sub ID |System/Sub-Component |Name of KPI |Method of Measuring |Data set to be used for testing |Number of repetitions |

|5 |KPI5_1 |CSEM monitor - accelerometer |Streaming mode |Posture state classification accuracy |Data from monitor placed in different positions |10 |

|5 |KPI5_2 |CSEM monitor - accelerometer |Others |Posture state classification accuracy |Data from monitor placed in different positions |10 |

|5 |KPI5_3 |CSEM monitor - accelerometer |Lying on the back |Posture state classification accuracy |Data from monitor placed in different positions |10 |

|5 |KPI5_4 |CSEM monitor - accelerometer |Lying face down |Posture state classification accuracy |Data from monitor placed in different positions |10 |

|5 |KPI5_5 |CSEM monitor - accelerometer |Lying on the right side |Posture state classification accuracy |Data from monitor placed in different positions |10 |

|5 |KPI5_6 |CSEM monitor - accelerometer |Lying on the left side |Posture state classification accuracy |Data from monitor placed in different positions |10 |

|5 |KPI5_7 |CSEM monitor - accelerometer |Standing |Posture state classification accuracy |Data from monitor placed in different positions |10 |

|5 |KPI5_8 |CSEM monitor - accelerometer |Walking on treadmill (4 km/h) |Activity state classification accuracy |Data from monitor in real conditions |10 |

|5 |KPI5_9 |CSEM monitor - accelerometer |Running on treadmill (10 km/h) |Activity state classification accuracy |Data from monitor in real conditions |10 |

|5 |KPI5_10 |CSEM monitor - vital signs |Walking on treadmill (4 km/h) |Heart rate |Data from monitor while user walks for 5 minutes |5 |

|5 |KPI5_11 |CSEM monitor - vital signs |Walking on treadmill (4 km/h) |Breathing rate |Data from monitor while user walks for 5 minutes |5 |

Table 6 KPI Measurement Table WP5 Components (Data Logger)

All tests will take place in a biomedical laboratory according to the applicable safety rules.

4 WP 5 Component Validation (Smart Garments)

Three different types of garments to monitor physiological parameters of the user during day and night are included in the first MOBISERV WHSU prototype.

The set of garments for daytime consists of a sensorised shirt and a band. The band is easier to wear and can be used as an alternative solution, at least while performing exercises or activities, if the user declines to wear the shirt all day long.

Night monitoring will be performed through the use of a sensorized pyjama.

Please refer to D5.1, D5.2 and D5.3 for further details on garments and their functionalities.

1 KPIs and Validation Plan

In the validation plan we identify KPIs to give a qualitative and quantitative assessment of performance on the two aspects that qualify the smart garments: on one side, strict adherence to the quality requirements for clothing, in terms of absence of toxic materials and of durability; on the other, the performance indicators that characterize the textile and the integrated sensors.

1 KPIs to be used

|WP |Sub ID |System/Sub-Component|Name of KPI |Type of Measure |Expected Target Value |Nature of Expected |

| | | | | | |Result |

|5 |KPI5_1 |Textile electrodes |TE Stability |Reliability |< 5% |Variation of |

| | | | | | |resistance value in|

| | | | | | |% |

|5 |KPI5_2 |Textile electrodes |TE |Quality |< 2 KΩ |Ω |

| | | |AC impedance | | | |

|5 |KPI5_3 |Textile electrodes |TE |Quality |< 100 mV |mV |

| | | |DC Offset | | | |

|5 |KPI5_4 |Piezo resistive |PR |Reliability |< 5% |Variation of |

| | |sensor |Stability | | |resistance value in|

| | | | | | |% |

|5 |KPI5_5 |Piezo resistive |PR |Accuracy |> 95% |Signal to Noise |

| | |sensor |SNR | | |ratio (%) |

|5 |KPI5_6 |Temperature |TS |Reliability |< 5% |Variation of |

| | |Sensor |Stability | | |resistance value |

| | | | | | |(%) |

|5 |KPI5_7 |Temperature |TS |Precision |< 5% |% of |

| | |Sensor |precision | | | |

|5 |KPI5_8 |Temperature |TS |Accuracy |< ±0.2°C |°C |

| | |Sensor |accuracy | | | |

|5 |KPI5_9 |Fabrics and clothes |FC |Toxicity |See | |

| | | |OEKO_TEX Standard 100| | | |

|5 |KPI5_10 |Fabrics and clothes |FC |Durability |< 5% |Physical dimensions|

| | | |Shrinkage resistance | | | |

Table 7 KPI Table WP5 Components (Smart Garments)
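The stability KPIs in the table above (KPI5_1, KPI5_4 and KPI5_6) are all expressed as a percentage variation of resistance against a < 5% target. As a minimal sketch of that pass/fail check (the function names and example values are ours, not part of the deliverable):

```python
def resistance_variation_pct(before_ohms, after_ohms):
    """Percentage variation of resistance after the 30 washing cycles
    (the measure behind KPI5_1, KPI5_4 and KPI5_6)."""
    return abs(after_ohms - before_ohms) / before_ohms * 100.0

def passes_stability(before_ohms, after_ohms, limit_pct=5.0):
    """True when the variation stays below the < 5% target of Table 7."""
    return resistance_variation_pct(before_ohms, after_ohms) < limit_pct

# e.g. a 100 ohm electrode drifting to 103 ohm after washing: 3% variation
print(passes_stability(100.0, 103.0))  # True: within the target
```

The same computation applies whether the measured quantity is an electrode impedance, a piezoresistive sensor resistance or a temperature-sensor resistance.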

2 Benchmarking Tests to be carried out – Methodology

|WP |Sub ID |System/Sub-Component |Name of KPI |Method of Measuring |Data set to be|Number of |

| | | | | |used for |repetitions |

| | | | | |testing | |

|5 |KPI5_1 |Textile electrodes |TE Stability |Measurements of impedance of electrodes | |4 |

| | | | |before and after 30 washing cycles (UNI | | |

| | | | |EN ISO 6330:2002) | | |

|5 |KPI5_2 |Textile electrodes |TE |Measurements of average value of 10 Hz |AC impedance |10 |

| | | |AC impedance |impedance at a level of impressed current| | |

| | | | |not exceeding 100 µA in a pair of | | |

| | | | |electrodes connected through hydrogel ( | | |

| | | | |Based on ANSI/AAMI EC12:2000 for | | |

| | | | |disposable ECG electrodes) | | |

|5 |KPI5_3 |Textile electrodes |TE |Measurements of the offset voltage |Offset voltage|10 |

| | | |DC Offset |across a pair of electrodes connected | | |

| | | | |trough hydro gel after 1- minute | | |

| | | | |stabilization period( | | |

| | | | |Based on ANSI/AAMI EC12:2000 for | | |

| | | | |disposable ECG electrodes) | | |

|5 |KPI5_4 |Piezo resistive sensor|PR |Resistance measurements of piezo |Resistance |4 |

| | | |Stability |resistive sensor at rest with no applied | | |

| | | | |mechanical stress before and after 30 | | |

| | | | |washing cycles (UNI EN ISO 6330:2002) | | |

|5 |KPI5_5 |Piezo resistive sensor|PR |Measurements of the Signal to Noise ratio|PSD of |10 |

| | | |SNR |(power of the signal output at 0.2Hz / |measured | |

| | | | |total power in the 0.05-0.5Hz bandwidth) |output voltage| |

| | | | |on a piezo resistive sensor with an | | |

| | | | |applied sinusoidal mechanical stress at | | |

| | | | |0.2 Hz | | |

|5 |KPI5_6 |Temperature |TS |Resistance measurements of temperature |Mean |4 |

| | |Sensor |Stability |sensor at 25°C before and after 30 |resistance | |

| | | | |washing cycles (UNI EN ISO 6330:2002) | | |

|5 |KPI5_7 |Temperature |TS |Repeated measurements of estimated |Reference |10 |

| | |Sensor |precision |temperature through calibrated |object | |

| | | | |temperature sensor at 35°C, 36°C, 37°C, |temperature | |

| | | | |38°C, 39°C after a 2-minute stabilization|and estimated | |

| | | | |period |temperature | |

|5 |KPI5_8 |Temperature |TS |Repeated measurements of estimated |Reference |10 |

| | |Sensor |accuracy |temperature through calibrated |object | |

| | | | |temperature sensor at 35°C, 36°C, 37°C, |temperature | |

| | | | |38°C, 39°C after a 2-minute stabilization|and estimated | |

| | | | |period |temperature | |

|5 |KPI5_9 |Fabrics and clothes |FC |Several testing procedures to identify |several |1 |

| | | |OEKO_TEX Standard |presence of toxic materials (please see | | |

| | | |100 |) | | |

|5 |KPI5_10 |Fabrics and clothes |FC |Measurement of dimensional variations on |Height and |4 |

| | | |Shrinkage |samples of fabrics or clothes according |length of the | |

| | | |resistance |to UNI EN ISO 3759, UNI EN 25077, UNI EN |samples | |

| | | | |ISO 6330 | | |

Table 8 KPI Measurement Table WP5 Components (Smart Garments)
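KPI5_5 defines the signal-to-noise ratio as the power of the sensor output at the 0.2 Hz stimulus frequency divided by the total power in the 0.05-0.5 Hz band. A minimal FFT-bin sketch of that computation (the sampling rate and synthetic signal are our own assumptions; the laboratory procedure may use a different PSD estimator):

```python
import numpy as np

def snr_percent(voltage, fs):
    """KPI5_5: power of the output at the 0.2 Hz stimulus frequency divided
    by the total power in the 0.05-0.5 Hz band, expressed in percent."""
    freqs = np.fft.rfftfreq(len(voltage), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(voltage - np.mean(voltage))) ** 2
    band = (freqs >= 0.05) & (freqs <= 0.5)   # analysis bandwidth
    stim = np.argmin(np.abs(freqs - 0.2))     # FFT bin closest to 0.2 Hz
    return 100.0 * psd[stim] / psd[band].sum()

# Synthetic sensor output: 0.2 Hz sinusoidal stress response plus mild noise
fs = 10.0                                     # assumed sampling rate
t = np.arange(0, 100, 1.0 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.2 * t) + 0.01 * rng.normal(size=t.size)
print(snr_percent(x, fs) > 95.0)  # True: meets the > 95% target
```

Because the stimulus bin is itself inside the analysis band, the ratio is bounded by 100%, matching the percentage form of the KPI.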

3 Operational safety criteria and testing process

Some textile fibres are highly hygroscopic and their properties change notably as a function of moisture content. Moisture content is particularly critical for properties such as yarn tenacity, elongation, yarn evenness, imperfections, count and electrical properties. Conditioning and testing must therefore be carried out under constant standard atmospheric conditions. The standard atmosphere for textile testing is a temperature of 20±2°C and a relative humidity of 65±2%. Prior to testing, the samples must be conditioned under this constant standard atmosphere until they attain moisture equilibrium; this requires at least 24 hours.
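As a trivial illustration of the stated tolerance window (the function name is ours):

```python
def within_standard_atmosphere(temp_c, rh_pct):
    """True when conditions match the stated standard textile-testing
    atmosphere: 20±2 °C and 65±2% relative humidity."""
    return abs(temp_c - 20.0) <= 2.0 and abs(rh_pct - 65.0) <= 2.0

print(within_standard_atmosphere(21.0, 64.0))  # True: inside both tolerances
print(within_standard_atmosphere(23.0, 65.0))  # False: too warm
```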

5 WP 6 Component Validation (Information and Coordination and Communication support system)

Within the scope of WP6 activities, the following components are being developed:

• InteractionManager: The InteractionManager is responsible for coordinating information across the various sensors/ modalities (i.e. Modality Components), which are used within MOBISERV for supporting the implementation scenarios identified.

It handles all events that the other Components generate; it is responsible for the synchronization of data and focus, etc., across different components as well as for the higher-level interaction flow that is independent of a specific (Modality) Component. It also maintains the high-level application data model.

The InteractionManager is being developed as a service in Microsoft Robotics Developer Studio (MRDS).

• HomeAutomationControl/ SMHInterface: This component implements a specific interface with the Smart Home Infrastructure, supporting the remote control of units/devices which are part of the Home Automation Infrastructure.

• WebInterface: This component enables secondary users such as carers or relatives to monitor and view selected data collected on the primary user such as medical information or user activity history.

The abovementioned components are further explained in D6.1.

1 KPIs and Validation Plan

1 KPIs to be used

Validation of WP6 components will be considered against the key performance indicators listed in the following table.

|WP |Sub ID |System/Sub-Component |Name of KPI |Type of Measure |Expected Target Value |Nature of Expected Result |

|6 |KPI6_1 |InteractionManager |Number of software defects |Performance |Minimised |Number of bugs or software defects |

| | | | |/Software quality | | |

| | | | | |(e.g., 10 for D6.1, 0 for | |

| | | | | |D6.2) | |

| | |HomeAutomationControl/ | | | | |

| | |SMHInterface | | | | |

| | |WebInterface | | | | |

|6 |KPI6_2 |InteractionManager |% of failed transactions (with other, |Performance / |Minimised |Percentage of failed transactions relative to |

| | | |collaborative components) |Reliability | |all transactions (with other, collaborative |

| | | | | |(e.g. 5% for D6.1, |components) within measurement period |

| | | | | |0% for D6.2) | |

| | |HomeAutomationControl/ | | | | |

| | |SMHInterface | | | | |

| | |WebInterface | | | | |

|6 |KPI6_3 |InteractionManager |Maximum response time of transactions (with |Performance / |Minimised |Maximum response time of transactions within |

| | | |other collaborative components) |Quality of service | |measurement period. |

| | | | | |(e.g., 8sec for D6.1, | |

| | | | | |5sec for D6.2) | |

| | |HomeAutomationControl/ | | | | |

| | |SMHInterface | | | | |

| | |WebInterface | | | | |

|6 |KPI6_4 |InteractionManager |Average response time of transactions (with |Performance / |Minimised |Average response time of transactions within |

| | | |other collaborative components) |Quality of service | |measurement period. |

| | | | | |(e.g., 4sec for D6.1 and 2sec| |

| | | | | |for D6.2) | |

| | |HomeAutomationControl/ | | | | |

| | |SMHInterface | | | | |

| | |WebInterface | | | | |

|6 |KPI6_5 |InteractionManager |Number of defects per test case (test cases |Performance / |Minimised |Number of defects detected in the software by |

| | | |concerning the MOBISERV high-level |Quality of service | |the total number of test cases |

| | | |requirements to be evaluated, as defined in | |(e.g., | |

| | | |D3.1) | |for D6.1-> | |

| | | | | | | |

| | | | | |F01 & F02: 3 | |

| | | | | |F08: 3 | |

| | | | | |F14: 3 | |

| | | | | |F11: 1 | |

| | | | | | | |

| | | | | |for D6.2-> | |

| | | | | |F01 & F02: 0 | |

| | | | | |F08: 0 | |

| | | | | |F14: 0 | |

| | | | | |F11: 0 | |

| | | | | | | |

| | |HomeAutomationControl/ |Number of defects per test case (test case | | | |

| | |SMHInterface |that requires communication with/ control of | | | |

| | | |units / devices in the smart home) | | | |

| | |WebInterface |Number of defects per test case (test cases | | | |

| | | |concerning the MOBISERV high-level | | | |

| | | |requirements to be evaluated, as defined in | | | |

| | | |D3.1) | | | |

Table 9 KPI Table WP6 Components

These KPIs have been selected from the KPI library [3].
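KPI6_2, KPI6_3 and KPI6_4 can all be derived from a single transaction log recorded during the measurement period. A minimal sketch (the log format and example values are our own illustration, not a MOBISERV data structure):

```python
def transaction_kpis(log):
    """Derive KPI6_2 (failed %), KPI6_3 (max response time) and KPI6_4
    (average response time) from a transaction log. Each entry is a
    (succeeded, response_time_s) pair."""
    times = [rt for _, rt in log]
    failed = sum(1 for ok, _ in log if not ok)
    return {
        "failed_pct": 100.0 * failed / len(log),   # KPI6_2
        "max_response_s": max(times),              # KPI6_3
        "avg_response_s": sum(times) / len(times), # KPI6_4
    }

log = [(True, 1.2), (True, 3.8), (False, 8.0), (True, 2.0)]
kpis = transaction_kpis(log)
print(kpis["failed_pct"], kpis["max_response_s"], kpis["avg_response_s"])
# 25.0 8.0 3.75
```

Comparing these figures against the staged targets (e.g. 5% failed transactions for D6.1, 0% for D6.2) then gives a direct pass/fail judgement per release.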

2 Benchmarking Tests to be carried out – Methodology

The methods to be used for measuring the identified KPIs are briefly described below:

|WP |Sub ID |System/Sub-Component |Name of KPI |Method of Measuring |Data set to be used for testing |Number of repetitions |

|6 |KPI6_1 |InteractionManager |Number of software |Component testing |Simulation of input data / events coming from the |5 repetitions for each input|

| | | |defects | |components that the InteractionManager communicates with. |event |

| | | | | |The process specification and sequence diagrams produced | |

| | | | | |for each one of the high-level functionalities to be | |

| | | | | |evaluated will be used as starting point. | |

| | |HomeAutomationControl/ | | |Simulation of input data/events coming from the components|5 repetitions for each input|

| | |SMHInterface | | |the HomeAutomationControl/ SMHInterface communicates with |event |

| | | | | |(i.e. door-related devices in the smart house/ AMX | |

| | | | | |controller and InteractionManager). The Sequence diagram | |

| | | | | |produced for F14 will be used as a starting point. | |

| | |WebInterface | |System Testing |Simulation of user's test data with respect to: |5 repetitions for each |

| | | | | |Nutrition Settings; |functionality provided to |

| | | | | |Exercises Settings and; |secondary users with respect|

| | | | | |View logs. |to: |

| | | | | | |Nutrition Settings; |

| | | | | | |Exercises Settings and; |

| | | | | | |View logs. |

|6 |KPI6_2 |InteractionManager |% of failed |Component testing |Simulation of input data / events coming from the |5 repetitions for each one |

| | | |transactions (with | |components that the InteractionManager communicates with. |of the high-level |

| | | |other, collaborative| |The Sequence diagrams produced for each one of the |functionalities, with |

| | | |components) | |high-level functionalities to be evaluated will be used as|different input events and |

| | | | | |starting point. |at different conditions. |

| | |HomeAutomationControl/ | | |Simulation of input data/events coming from the components|5 repetitions, with |

| | |SMHInterface | | |the HomeAutomationControl/ SMHInterface communicates with |different input events and |

| | | | | |(i.e. door-related devices in the smart house/ AMX |at different conditions. |

| | | | | |controller and InteractionManager). The Sequence diagram | |

| | | | | |produced for F14 will be used as a starting point. | |

| | |WebInterface | |System testing |Simulation of user's test data with respect to: |5 repetitions for each one |

| | | | | |Nutrition Settings; |of the functions listed |

| | | | | |Exercises Settings and; |below: |

| | | | | |View logs. |Nutrition Settings |

| | | | | | |Add new meal |

| | | | | | |Edit meal |

| | | | | | |Delete meal |

| | | | | | |Filtering meals |

| | | | | | |Editing reminder settings |

| | | | | | |Edit images for meal |

| | | | | | |reminders |

| | | | | | |Edit messages for meal |

| | | | | | |reminders |

| | | | | | |Edit meal videos for |

| | | | | | |reminders |

| | | | | | |Editing meal encouragements |

| | | | | | |settings |

| | | | | | |Edit images for meal |

| | | | | | |encouragements |

| | | | | | |Edit messages for meal |

| | | | | | |encouragements |

| | | | | | |Edit videos for meal |

| | | | | | |encouragements |

| | | | | | |Add new drink |

| | | | | | |Edit drink |

| | | | | | |Delete drink |

| | | | | | |Filtering drinks |

| | | | | | |Editing drink reminder |

| | | | | | |settings |

| | | | | | |Edit images for drink |

| | | | | | |reminders |

| | | | | | |Edit messages for drink |

| | | | | | |reminders |

| | | | | | |Edit drink videos for |

| | | | | | |reminders |

| | | | | | |Editing drink encouragements|

| | | | | | |settings |

| | | | | | |Edit images for drink |

| | | | | | |encouragements |

| | | | | | |Edit messages for drink |

| | | | | | |encouragements |

| | | | | | |Edit videos for drink |

| | | | | | |encouragements |

| | | | | | |Save changes |

| | | | | | |Exercises Settings |

| | | | | | |Add exercise |

| | | | | | |Edit exercise |

| | | | | | |Delete exercise |

| | | | | | |Add activity |

| | | | | | |Edit activity |

| | | | | | |Delete activity |

| | | | | | |Add encouragement |

| | | | | | |Edit encouragement |

| | | | | | |Delete encouragement |

| | | | | | |Editing encouragements |

| | | | | | |settings |

| | | | | | |Edit images for |

| | | | | | |encouragements |

| | | | | | |Edit messages for |

| | | | | | |encouragements |

| | | | | | |Edit videos for |

| | | | | | |encouragements |

| | | | | | |Save changes |

| | | | | | |Logs |

| | | | | | |View nutrition logs |

| | | | | | |View hydration logs |

| | | | | | |View exercises and |

| | | | | | |physiological logs |

|6 |KPI6_3 |InteractionManager |Maximum response |Component testing |As described for KPI6_2 |As described for KPI6_2 |

| | | |time of transactions| | | |

| | | |(with other | | | |

| | | |collaborative | | | |

| | | |components) | | | |

| | |HomeAutomationControl/ | | |As described for KPI6_2 |As described for KPI6_2 |

| | |SMHInterface | | | | |

| | |WebInterface | |System testing |As described for KPI6_2 |As described for KPI6_2 |

|6 |KPI6_4 |InteractionManager |Average response |Component testing |As described for KPI6_2 |As described for KPI6_2 |

| | | |time of transactions| | | |

| | | |(with other | | | |

| | | |collaborative | | | |

| | | |components) | | | |

| | |HomeAutomationControl/ | | |As described for KPI6_2 |As described for KPI6_2 |

| | |SMHInterface | | | | |

| | |WebInterface | |System testing |As described for KPI6_2 |As described for KPI6_2 |

|6 |KPI6_5 |InteractionManager |Number of defects |Component testing |As described for KPI6_1 |As described for KPI6_1 |

| | | |per test case | | | |

| | |HomeAutomationControl/ | | |As described for KPI6_1 |As described for KPI6_1 |

| | |SMHInterface | | | | |

| | |WebInterface | |System testing |As described for KPI6_1 |As described for KPI6_1 |

Table 10 KPI Measurement Table WP6 Components

3 Operational safety criteria and testing process

To validate the InteractionManager and HomeAutomationControl/ SMHInterface against the identified KPIs, the method of component testing will be used. This method focuses on ensuring that the components conform to their specifications, handle all exceptions as expected, and produce the appropriate alerts for error handling. The testing will be performed in the development environment and conducted by the software team that develops the code. The tests will validate the components' logic and their adherence to functional and technical requirements (see also D2.3, D3.1 and D6.1).

The test cases for the InteractionManager will be generated taking into account the process specification and sequence diagrams produced for each of the high-level functionalities to be evaluated in MOBISERV. Subsequently, the software team will build a separate programme (a simple service in MRDS) to trick the component into believing it is working in a fully functional system (according to the test cases). Through this, it will be possible to simulate input data/events coming from the components that the InteractionManager communicates with. Test outputs will be monitored through the MRDS "System Services" environment; testers can observe the states of the InteractionManager service, important messages in the Console Output, etc., which will allow them to properly document the test results.
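The "separate programme" approach can be illustrated in miniature (in Python rather than MRDS; every class, method and event name below is hypothetical, not a MOBISERV or MRDS API): a mock stands in for a collaborating Modality Component, a toy manager routes simulated events to it, and defects are tallied for KPI6_1.

```python
from unittest import mock

class ToyInteractionManager:
    """Illustrative stand-in for the service under test; the class, method
    and event names are hypothetical, not MOBISERV or MRDS APIs."""
    def __init__(self, modality):
        self.modality = modality
        self.defects = 0

    def handle_event(self, event):
        try:
            self.modality.notify(event["type"])  # route event to the component
            return "handled"
        except Exception:
            self.defects += 1                    # tally for KPI6_1
            return "error"

modality = mock.Mock()                           # simulated Modality Component
manager = ToyInteractionManager(modality)
for _ in range(5):                               # 5 repetitions per input event
    assert manager.handle_event({"type": "meal_reminder_due"}) == "handled"
print(modality.notify.call_count, manager.defects)  # 5 0
```

The mock plays the role of the driver programme's simulated collaborators, letting the component under test run as if it were inside a fully functional system.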

The test cases for the HomeAutomationControl/ SMHInterface will be generated taking into account the sequence diagram for F14. As in the case of the InteractionManager, the software team will build a separate programme (a simple service in MRDS) which will make it possible to simulate input data/events coming from either units/devices in the smart home infrastructure (through the AMX controller) or the InteractionManager. Test outputs will be monitored by means of a) the MRDS "System Services" environment; and b) the NetLinx program (to read logs).

To validate the WebInterface against the identified KPIs, the method of system testing will be used. This method focuses on ensuring that the WebInterface - being a 3-tier web application - complies with its specified (secondary users') requirements (as reported in D2.3). System testing requires no knowledge of the inner design of the code or logic; it seeks to detect defects both within the "inter-assemblages" (the 3 tiers) and within the system as a whole. The testing will be performed in the development environment and conducted by project members not necessarily involved in the software development process.

The test cases for the WebInterface will be generated taking into account the secondary users' functional requirements, i.e. requirements with respect to:

• Nutrition Settings;

• Exercises Settings and;

• Viewing logs.

Testers will be required to manually provide data through the WebInterface (as if they were secondary users) and test the following:

• Nutrition Settings

• Add new meal

• Edit meal

• Delete meal

• Filtering meals

• Editing reminder settings

• Edit images for meal reminders

• Edit messages for meal reminders

• Edit meal videos for reminders

• Editing meal encouragements settings

• Edit images for meal encouragements

• Edit messages for meal encouragements

• Edit videos for meal encouragements

• Add new drink

• Edit drink

• Delete drink

• Filtering drinks

• Editing drink reminder settings

• Edit images for drink reminders

• Edit messages for drink reminders

• Edit drink videos for reminders

• Editing drink encouragements settings

• Edit images for drink encouragements

• Edit messages for drink encouragements

• Edit videos for drink encouragements

• Save changes

• Exercises Settings

• Add exercise

• Edit exercise

• Delete exercise

• Add activity

• Edit activity

• Delete activity

• Add encouragement

• Edit encouragement

• Delete encouragement

• Editing encouragements settings

• Edit images for encouragements

• Edit messages for encouragements

• Edit videos for encouragements

• Save changes

• Logs

• View nutrition logs

• View hydration logs

• View exercises and physiological logs.

Test outputs will be monitored by checking what is stored in the data tier and what is presented in the presentation tier, so as to report on the tests.

6 WP 7 Component Validation (Robotic Platform)

The MOBISERV Physical Robotic Unit (PRU) is an indoor mobile platform with two propulsive wheels, designed to ease the development of advanced robotics solutions. It interacts with the elderly in the Smart Home. The PRU is an intelligent autonomous robot containing an embedded controller running Windows CE, data storage capability, various sensors for navigation (laser, odometers, etc.), an adjustable camera, and a touch-screen tablet-PC of adjustable height running Windows 7 for high-level applications, including a speech synthesis and recognition interface. The tablet-PC is connected to the internet.

For more detail, please refer to the D7.1 Robotic platform First Prototype.

1 KPIs to be used

In the validation plan we identify KPIs to give a qualitative and quantitative assessment of the performance of the PRU, which integrates several components: actuators, sensors, batteries and a tablet-PC. The performance indicators characterize the robustness over time of each of these components and of the system as a whole.

|WP |Sub ID |System/Sub-Component |Name of KPI |Type of Measure |Expected Target Value |Nature of Expected |

| | | | | | |Result |

|7 |KPI7_1 |Tablet Battery |Tablet Power supply |Reliability |Power supply switched on|Visual signal |

|7 |KPI7_2 |Tablet charge level |Tablet Charge level |Reliability |Power supply switched on|% charge level value |

| | |software | | | | |

|7 |KPI7_3 |Tablet battery |Tablet Battery Life |Reliability |3 hours without external|Time value in hours |

| | | | | |power supply | |

|7 |KPI7_4 |loudspeaker |Sound restitution |Quality |Media file played on the|Recognized sound |

| | | | | |laptop | |

|7 |KPI7_5 |microphone |Sound record |Quality |User voice recorded |Restitution of recorded |

| | | | | | |voice |

|7 |KPI7_6 |SOS pushbutton |SOS message |Reliability |SOS message sent |Illuminated red LEDs and|

| | | | | | |SOS message received on |

| | | | | | |the tablet-PC |

|7 |KPI7_7 |Charger |Robot Recharging |Reliability |About 6-7 hours | Level batteries on the |

| | | | | |recharging time |tablet PC and led on the|

| | | | | | |charger |

|7 |KPI7_8 |Robot batteries |Robot batteries life |Reliability |About 8 hours without |Time value in hours |

| | |(Li-ion 24 VDC – 20 | | |external power supply | |

| | |Ah) | | | | |

|7 |KPI7_9 |Pan-Tilt Camera |Video stream |Reliability |Video stream delivered |Video stream |

| | | |restitution | |in real time on the | |

| | | | | |laptop | |

|7 |KPI7_10 |Web Camera |Video stream |Reliability |Video stream delivered |Video stream |

| | | |restitution | |in real time on the | |

| | | | | |laptop | |

|7 |KPI7_11 |Axis IP camera |Video stream |Reliability |Video stream delivered |Video stream |

| | | |restitution | |in real time on the | |

| | | | | |laptop (through | |

| | | | | |Lokarria) | |

|7 |KPI7_12 |Sick laser |2D environment |Reliability |2D shape of the |Displayed figures |

| | | |geometry | |environment (through | |

| | | | | |Lokarria) | |

|7 |KPI7_13 |Joystick |Tele-operation |Reliability |Robot tele-operation |Robot movements |

| | | | | |with the joystick | |

|7 |KPI7_14 |Emergency button |Emergency stop |Reliability |Robot movements disabled|Stationary Robot |

|7 |KPI7_15 |PURE low-level |Low-level software |Reliability |Response frames received|UDP message frames |

| | |software |communication | | | |

|7 |KPI7_16 |Mobiserv Kompaï |High-level software |Reliability |Application successfully|Mobiserv Main GUI |

| | |high-level software |interface | |launched |displayed with |

| | | | | | |functional buttons and |

| | | | | | |speech interaction |

|7 |KPI7_16_1 |Navigation software |Robot autonomous |Reliability |Application successfully|Mobiserv Navigation GUI |

| | | |navigation | |launched |displayed with |

| | | | | | |functional buttons and |

| | | | | | |speech functionalities |

| |KPI7_16_1_1 |Path Following System |Robot autonomous |Performance |Robot displacements at |Performance results – |

| | | |navigation | |approximately 1m/s |Mean travel time |

|7 |KPI7_16_1_2 |Collision Avoidance |Collision Avoidance |Performance |Min and Max |Performance results – |

| | |System | | |size/shape/opacity of |Reliability – |

| | | | | |object |%Accuracy results for |

| | | | | | |different variables |

| | | | | | | |

| | | | | | | |

| | | | | | | |

|7 |KPI7_16_1_3 |Collision Avoidance |Collision Avoidance |Performance |Speed of response in ms |Performance results – |

| | |System |Response Rate | | |Mean Stopping/response |

| | | | | | |time depending on |

| | | | | | |approach speed |

Table 11 KPI Table WP7 Components

2 Benchmarking Tests to be carried out – Methodology

|WP |Sub ID |System/Sub-Component|Name of KPI |Method of Measuring |Data set to be used |Number of |

| | | | | |for testing |repetitions |

|7 |KPI7_1 |Tablet Battery |Tablet Power supply |Power supply switched | |3 |

| | | | |on/off | | |

|7 |KPI7_2 |Tablet charge level |Tablet Charge level |Observation of % | |3 |

| | |software | |charge level value | | |

|7 |KPI7_3 |Tablet battery |Tablet Battery Life |Running the following |Time of the battery |3 |

| | | | |services – operating |duration in hours | |

| | | | |system, wi-fi, | | |

| | | | |Bluetooth, Mobiserv | | |

| | | | |Application | | |

|7 |KPI7_4 |loudspeaker |Sound restitution |Playing a media file |Sound |3 |

|7 |KPI7_5 |microphone |Sound record |Speaking |Sound record |3 |

| | | | | |indicator | |

|7 |KPI7_6 |SOS pushbutton |SOS message |Pushing the SOS button|Illuminated red LEDs |3 |

| | | | | |and SOS message on | |

| | | | | |the tablet-PC | |

|7 |KPI7_7 |Charger |Robot Recharging |Recharging (led on the|Recharging time in | 3 |

| | | | |charger to indicate |hours | |

| | | | |the batteries level) | | |

| | | | | | | |

| | | | | | | |

|7 |KPI7_8 |Robot batteries |Robot batteries life |Running the following |Time value in hours |3 |

| | |(Li-ion 24 VDC – 20 | |services – Robot, | | |

| | |Ah) | |operating system, | | |

| | | | |wi-fi, Bluetooth, | | |

| | | | |Mobiserv Application | | |

|7 |KPI7_9 |Pan-Tilt Camera |Video stream restitution|Running the camera |Video stream |3 |

| | | | | |displayed on the | |

| | | | | |laptop | |

|7 |KPI7_10 |Web Camera |Video stream restitution|Running the camera |Video stream |3 |

| | | | | |displayed on the | |

| | | | | |laptop | |

|7 |KPI7_11 |Axis IP camera |Video stream restitution|Running the camera |Video stream |3 |

| | | | | |displayed on the | |

| | | | | |laptop (through | |

| | | | | |Lokarria) | |

|7 |KPI7_12 |Sick laser |2D environment geometry |Running the robot |Figures displayed on |3 |

| | | | | |the laptop (through | |

| | | | | |Lokarria) | |

|7 |KPI7_13 |Joystick |Tele-operation |Running the robot |Robot movements |3 |

| | | | | |checked | |

|7 |KPI7_14 |Emergency button |Emergency stop |Push the emergency |Robot movements |3 |

| | | | |button |disabled | |

|7 |KPI7_15 |PURE low-level |Low-level software |Send a UDP request |Receive a UDP |3 |

| | |software |communication |frame |response frame | |

|7 |KPI7_16 |Mobiserv Kompaï |High-level software |Launch the application|Touch and Speech |3 |

| | |high-level software |interface | | | |

|7 |KPI7_16_1 |Navigation software |Robot autonomous |Launch the application|Application launched |3 |

| | | |navigation | |with the map and | |

| | | | | |Robot movements | |

| | | | | |possible with touch | |

| | | | | |and speech | |

| |KPI7_16_1_1 |Path Following |Robot autonomous |Order a path |Performance results –|3 |

| | |System |navigation |following, using | | |

| | | | |touch or speech |Mean travel time | |

| | | | | | | |

|7 |KPI7_16_1_2 |Collision Avoidance |Collision Avoidance |Test collision |Object sizes, shapes,|3 |

| | |System | |avoidance system using|opacity | |

| | | | |a range of different | | |

| | | | |sized objects and | | |

| | | | |materials of varying | | |

| | | | |opacity | | |

|7 |KPI7_16_1_3 |Collision Avoidance |Collision Avoidance |Test collision |response time |3 |

| | |System |Response Rate |avoidance system with |depending on approach| |

| | | | |different approach |speed | |

| | | | |speeds | | |

|7 |KPI7_16_1_4 |Mapping System |Accuracy of Mapping |Accuracy of Maps |Resolution of map in |3 |

| | | |System |created in different |cms | |

| | | | |environments | | |

Table 12 KPI Measurement Table WP7 Components
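Most WP7 benchmarks are run three times, so each KPI reduces to a small summary over the repetitions. A minimal sketch (the sample values are illustrative only, not measured data):

```python
from statistics import mean, stdev

def summarise_runs(samples):
    """Mean and spread over the repetitions of one WP7 benchmark, e.g. the
    mean travel time of KPI7_16_1_1 or the stopping/response time of
    KPI7_16_1_3."""
    spread = stdev(samples) if len(samples) > 1 else 0.0
    return {"mean": mean(samples), "stdev": spread}

travel_times_s = [12.4, 11.9, 12.7]  # illustrative values, not measured data
summary = summarise_runs(travel_times_s)
print(round(summary["mean"], 2))  # 12.33
```

The same summary applies to battery-life runs (KPI7_3, KPI7_8) and to the collision-avoidance response times at different approach speeds.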

3 Operational safety criteria and testing process

Operating conditions

• Use only the battery charger supplied with the PRU.

• Do not wait until the batteries are empty before recharging them.

• Never recharge the battery pack while the PRU is powered ON.

• Working conditions: indoor, flat ground (maximum step = 0.5 cm).

Technical Characteristics

|Dimensions |L x l x h = 455 x 41 x 125 mm |

|Ground clearance |50 mm |

|Weight |31 kg |

|Number of wheels |2 propulsive wheels, 2 castor wheels |

|Direction |Differential type |

|Turning radius |Turn on the spot (middle point of propulsive wheels) |

| |needs an area diameter of 552 mm |

|Max safety speed |1 m/s (adjustable) |

|Stopping distance |User specified parameter |

|Payload |max 30 kg |

|Slope |Operation on flat surfaces; max slope 11% without |

| |payload |

|Using temperature |0°C - 40°C |

|Storing temperature |0°C - 60°C  |

|Humidity |0 - 90 % without condensing |

|Batteries |Li-ion 24 VDC – 20 Ah |

|Autonomy |about 8 hours |

|Recharging time |about 6-7h |

|Embedded controller (Low level) |PURE |

|Embedded controller (High level) |Mobiserv-Kompaï |

|Embedded computer OS |Windows CE 6.0  |

|Tablet PC OS |Windows Embedded standard 7 |

|Driving mode |Xbox 360 wireless Gamepad, other.. |

|Sensors |Sick laser  |

| |Axis IP camera |

| |Usb camera |

| |9 US sensors  |

| |16 IR sensors  |

| |2 bumpers (front and rear) |

 Table 13 Technical Characteristics of Kompai

Driving robot

The PRU can be operated in different ways:

• Manually (default mode), using the Xbox360 wireless Gamepad.

• Remotely and/or autonomously, using Mobiserv-Kompaï Application.

Robot control panel

[pic]

Composed of: 

|Label |Description |Function |

|1 |Gamepad receiver |Receive orders from Gamepad controller |

|2 |Ethernet |Provide access to the wired Ethernet network of the robot. |

|3 |L Drive |To configure Left drive |

|4 |R Drive |To configure Right drive |

|5 |CAN |To access CAN network |

|6 |RS232 |To access serial network |

|7 |ON/OFF |Switch used for powering ON and OFF the robot. When robot is ON, the switch led is ON. |

|8 |Valid. |To turn on power on drives. |

| | |If the emergency button has been pressed, once it is released the validation button must be pushed |

| | |to activate drive power. |

|9 |Batteries level |The 3-state LED indicates the battery level as follows: |

| |indicator |Green: Batteries are fully charged or batteries are under charge |

| | |Orange: Batteries are low and should be recharged |

| | |Red: Batteries are very low and must be recharged |

|10 |Batteries charger |Used to recharge the robot batteries. The robot includes a set of lithium-ion batteries, which can be |

| |plug |recharged at any time. |

|11 |Serial Number |  |

 

Table 14 Kompai Control Panel Components

Recharging robot batteries

Never recharge the PRU when it is turned ON.

1. Connect the charger to the robot.

2. Connect the power supply charger cable to the grid.

3. Turn on the charger using its main switch. The LED must be orange during the charging process and green when it is done.

4. Disconnect the charger from the grid and the robot.

Emergency situations 

The PRU has 2 bumpers (front and rear). When activated, they block movement in the corresponding direction (e.g. if the robot is driven against a wall).

In addition, an emergency button is located on the top rear of the robot. When activated, it shuts down power to the drives in order to stop the PRU's motion.

If the emergency button has been activated, it must be released once the dangerous situation has been resolved. To release it, turn it a quarter turn clockwise. The robot drives must then be powered ON again; to do so, press the validation button. The PRU is then ready to move again.

Turning ON the PRU

Press ON/OFF button to power on the robot (LED button must be ON), then press the Validation button to give power to drives. 

By default, the robot launches the software for Xbox 360 Gamepad control at system startup, so once the embedded PC has booted, you can drive the robot.

The boot time is about 5 seconds.

Note: When a bumper is activated, movement in the corresponding direction is blocked, while movement in the reverse direction remains allowed so that the obstacle can be avoided.

Press the tablet PC's power button to start it.

Turning OFF the PRU

Simply press the ON/OFF button to turn OFF the robot (LED button must be OFF).

Shut down the tablet PC using the Windows shutdown function.

The hand controller 

The hand controller is a conventional Xbox 360 hand controller. For more information on it, please refer to its documentation. 

The Xbox controller battery pack is composed of 2 standard alkaline LR6 AA batteries.

To drive the robot, press and hold the green “A” button. If the “A” button is released, the robuLAB-10 stops.

The following table gives actions and buttons: 

|Action |Button |

|Power ON the hand controller |Xbox button |

|Go forward |RT |

|Go backward |LT |

|Turn right |Right on Left thumb stick |

|Turn left |Left on Left thumb stick |

 To turn OFF the hand controller, remove the battery pack.
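The button-to-action mapping above can be sketched in code. This is an illustrative sketch only, not the actual Robosoft control software: the function name, the velocity scaling values and the sign convention are all assumptions made for illustration.

```python
# Hypothetical mapping from the gamepad inputs listed above to
# differential-drive velocity commands (illustrative sketch only).

def drive_command(a_pressed, rt, lt, left_stick_x,
                  max_linear=0.5, max_angular=1.0):
    """Return (linear, angular) velocities from the controller state.

    a_pressed    -- dead-man switch: the green "A" button must be held
    rt, lt       -- right/left trigger values in [0.0, 1.0]
    left_stick_x -- left thumb stick x-axis in [-1.0 (left), 1.0 (right)]
    max_linear, max_angular -- assumed speed limits (placeholders)
    """
    if not a_pressed:
        # Releasing "A" stops the robot, as described above.
        return (0.0, 0.0)
    linear = (rt - lt) * max_linear        # RT drives forward, LT backward
    angular = -left_stick_x * max_angular  # stick right => turn right
    return (linear, angular)
```

The dead-man check mirrors the requirement that the robot stops as soon as the “A” button is released.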

Launching Mobiserv-Kompaï Application

Please refer to D7.2 First MOBISERV System Prototype documentation for more explanations.

Usability Evaluation Plan (Field Trials at User Sites)

1 Scope of the first prototype evaluation

We anticipate that the five primary functions in the first prototype will be delivered individually. As such, we will evaluate these components individually. The five primary functions are:

• Eating (F01)

• Drinking (F02)

• Exercise (F08)

• Voice/video calling (F11)

• Front door control (F14)

We will undertake formative and summative evaluations, comprising three parts: a) an expert evaluation without users; b) focus groups with users; c) field trials of individual components with users.

At all times, ethical considerations towards the people participating in this research will be prioritised – with a focus on their psychological comfort, physical comfort, personal safety and the security of data recorded about them.

Afterwards, the evaluations will be described fully so as to be replicable, and reported in the format outlined in D2.4: Definition of the Evaluation Framework.

The outcome of these usability evaluation studies will comprise D2.5, Issues I and II.

2 Scope of the second prototype evaluation

In the second prototype evaluation, we will conduct a field trial of the updated nutrition detection and hydration detection as well as the remaining functions.

We anticipate that the remaining functions will be delivered in the second prototype, namely:

• Remote consultation with health professionals (F19)

• Tele-medicine / self-check platform (F17)

• Games for social and cognitive stimulation (F18)

• Panic responder (responding to falls / call for help from the user) (F6)

The outcome of these usability evaluation studies will comprise D2.6.

3 Key performance indicators

For each component and its related functions, the following KPIs will be applied as appropriate:

• Level of user satisfaction

• Level of function usage by the user

• Level of acceptance by the stakeholders

• Level of ease of use (ergonomic) of overall component

• Level of ease of use (ergonomic) of the input devices for the function

• Level of effectiveness of the messages (do they elicit the desired response?)

• Success of system to adapt to change in environment (e.g. background noises, obstacles, lighting)

• Ease of configurability of function settings

• System response time for the component/function to user input (voice and touch)

• Time for error recovery

• User rating of function output/feedback (in relation to quality, utility and comprehensibility)

4 Part A Expert evaluation

1 Participation

No users involved.

2 Technical requirements

None.

3 Aim

The aim of this activity will be to ensure the HMI of the PRU, SHACU and WHSU conforms to best practice in terms of interaction design.

4 Activities

This activity will be undertaken by HCI experts from UWE and SMH using relevant heuristics for human-computer interaction, human-robot interaction, speech interaction, human-to-human interaction and so on. A cognitive walkthrough will also be performed.

Both UWE and SMH will review the interactions for each of the use cases relating to the five functions that are in scope. Findings from both teams will be collated to identify and prioritise key issues that need resolving to ensure the key performance indicators are met.

The set of heuristics used for the expert evaluation is listed in Appendix 6.3.

5 Outcome

Afterwards, we will produce a short report with minor and major recommendations. We will update the HMI specification to include any minor design improvements, which Robosoft will immediately incorporate into a new HMI.

5 Part B Small focus groups

1 Participation

• Primary and secondary users.

• UWE will run seven small focus groups, each with 1-4 users.

• SMH and ANNA will run three small focus groups, each with 1-4 users.

2 Technical requirements

None.

3 Aim

The aim of this part is to deepen the understanding of how the system should fit its context. Practically, this will refine and further specify the requirements that have been identified so far.

4 Activities

We will facilitate the co-discovery of the pros, cons, effects, ethics and possibilities of the envisioned prototype as real users understand it in context. For each of the following target groups, we will recruit groups of representatives:

• older people

• carers / home-care visitors

• family (‘son’, ’daughter’)

• doctors / care call centre operators (e.g. Good Morning Service, NHS Direct)

• nutrition experts, physiotherapist / fitness coach

Each session will last 1–1.5 hours. A short introduction to the project will be given verbally. The main activity of the session (45 minutes) will be a small-group discussion of three use cases relevant to each user group (as shown in Table 15).

The main discussion points include: What was the old scenario that existed in the past (i.e. the baseline)? With the new scenarios: What is good? What is bad? What is the effect of this situation? Is it right or wrong? How could this scenario be different (e.g. extended or changed)?

|User group |Scenarios / Use cases to elaborate |Conducted at |

|Older people |Drinking |Eating |Voice/video calling |UK NL |

|Carers / home-care visitors |Drinking |Eating |Exercise |UK NL |

|Family (‘son’, ’daughter’) |Drinking |Voice/video calling |Front door control |UK |

|Doctors / care call centre |Drinking |Eating |Exercise |NL |

|operators | | | | |

|Nutrition experts, |Drinking |Eating |Exercise |UK NL |

|physiotherapists / fitness coach | | | | |

|Cognitive impairment specialist |Drinking |Voice/video calling |Front door control |NL |

Table 15. Scope of focus groups

5 Outcome

Findings will be reported in D2.5, Issues I and II, and subsequently we will update the D2.3 System Requirements deliverable.

6 Part C Field trials of individual components with users

1 Study Aims

• Gather qualitative assessments of users’ responses to the MOBISERV system.

• Gather information on users’ performance and expectations in relation to the range of functionalities.

• Find usability errors and interaction problems.

• Investigate the evaluation criteria outlined in D2.4 Definition of the Evaluation Framework.

• Gather qualitative information on users’ priorities for alterations and additions to the future concepts.

2 Participants

Participants will be recruited from the client base of Ananz (NL) and from older people's organisations in Bristol (UK). The focus will be on people aged between 70 and 80 years old who speak English. Participants will take part in three sessions (each one week apart).

Participants will be asked to come to the Smart Home in Eindhoven (NL) or to the UWE labs in Bristol (UK). At both locations, formal and informal carers will also be involved:

| |NL |UK |

|Older adults – no specific condition or disability |±6 |±6 |

|Informal carers |±3 |±3 |

|Formal carers |±3 |±3 |

Table 16. Target participants for field trials

3 Timescales

The following table provides the projected estimate of the field trial timeline.

|Session 1 NL |2 weeks |January 30 – February 10 |

|Session 1 UK |2 weeks |January 30 – February 10 |

|Session 2 NL |1 week |February 13 – February 17 |

|Session 3 NL |2 weeks |February 20 – March 2 |

|Sending PRU |1 week |March 5 – March 9 |

|Session 2 UK |1 week |March 12 – March 16 |

|Session 3 UK |2 weeks |March 19 – March 30 |

Table 17. Field trial timeline

4 Functions to be evaluated

In the UK, UWE will evaluate:

▪ Exercises (including the WHSU) + eating + drinking + video/voice calling

In NL, SMH will evaluate:

▪ Exercises (excluding the WHSU) + eating + drinking + video/voice calling + front door control

5 Session 1 (50 minutes) – at individual participant’s homes

1 Informed consent and explanation of session (10 minutes)

• Very short introduction of their rights

• Overview of the research session

2 Pre-test interview (10 mins)

A preliminary discussion in order to assess

• their experience with technology

• some basic demographic data

Questions

1. How old are you?

2. What is your gender?

3. Where do you live?

4. Do you live with someone or alone?

5. Do you use any technologies on a regular basis?

6. How would you describe your ethnic status?

3 Voice training (30 mins)

• Voice training for speech recognition

6 Session 2 (50 minutes) – 2 focus groups (older people and carers)

1 Explanation of session (10 minutes)

2 Demonstration of PRU (30 minutes)

Demonstration of Kompaï by UWE/SMH, including:

• Interaction by voice

• Interaction by touch

• Key functions

• Robot control

3 Demonstration of WHSU (UK only) (10 minutes)

Demonstration of WHSU by UWE, including:

• Putting on garments

• Start and stop recording

Volunteers will take the garments home with instructions, so that their experience of using them can be evaluated by:

• Keeping a diary of their experience with the garments

7 Session 3 (90 minutes) – with individual participants at UK/NL test locations

These sessions will include both primary and secondary stakeholders, with the focus on specific elements of the MOBISERV functions being evaluated. Breaks will be incorporated as necessary based on how the participant is feeling.

1 Test scenarios (45 mins)

The following sections give details of the tests, with guidelines for timing to ensure that the session proceeds smoothly while taxing the participant as little as possible.

1 Primary users

Primary users will interact with the PRU and WHSU. They will be asked NOT to ‘think aloud’ during these scenarios.

|Time |Facilitator |User |System |Notes |

|0000 | |Sit on sofa | |PRU hidden |

|0002 |Introduce user and robot |Sit on sofa |PRU approaches | |

| | | | | |

| |Let user have informal hello | | | |

| |session with PRU | | | |

| | | | | |

| |Ask user to adjust the screen | | | |

| |for their optimal viewing and | | | |

| |touch | | | |

|0010 |Scenario 1a: “Try and make a |Sit on sofa |PRU by sofa |Needs to have a person set up in the |

| |voice call to one of the people | | |contacts and ready to accept a call. |

| |in the system. Do this by using | | | |

| |the touchscreen.” | | | |

| | | | | |

| |Scenario 1b: “Try and make a | | | |

| |voice call to one of the people | | | |

| |in the system. Do this by using | | | |

| |voice interaction.” | | | |

|0015 |Scenario 2a (UWE): “Try and put |Stand up |PRU disappears |NOT DONE IN NL |

|(UWE) |on the smart belt. And check it | | | |

| |is recording. And then relax on |Sit on sofa | | |

| |the sofa for a while...” | | | |

| | | | | |

|0015 (SMH) |Scenario 2b (SMH): House control|Sit on sofa |PRU by sofa, issues a |NOT DONE IN UK |

| | | |front door message | |

|0017 |Scenario 3: Eating reminder |Sit on sofa |PRU approaches and |2 or 3 versions of different messages will |

| | | |issues an eating |be used. Users will also be asked about |

| | | |reminder |which versions they would find effective |

| | | | |and ideas for what would encourage them. |

|0022 |Scenario 4: Exercise reminder |Sit on sofa |PRU approaches and |2 or 3 versions of different messages will |

| | | |issues an exercise |be used. Users will also be asked about |

| | | |reminder |which versions they would find effective |

| | | | |and ideas for what would encourage them. |

| | | | |Exercise media will be provided by |

| | | | |registered physiotherapists |

|0025 |User does an exercise |Stands up |PRU by sofa | |

|0027 |User completes exercise |Sit on sofa |PRU disappears | |

|0030 |Scenario 5: Drinking reminder |Sit on sofa |PRU approaches and |2 or 3 versions of different messages will |

| | | |issues a drinking |be used. Users will also be asked about |

| | | |reminder |which versions they would find effective |

| | | | |and ideas for what would encourage them. |

|0035 |Scenario 6: Incoming call |Sit on sofa |PRU approaches and |Needs to have an incoming video call issued|

| | | |issues an incoming | |

| | | |call message | |

|0040 |Scenario 7: “Try and get help |Sit on sofa | | |

| |for using the system” | | | |

|0045 | |Sit on sofa |PRU says goodbye and | |

| | | |disappears | |

2 Secondary users

Secondary users will interact with the PRU from a stationary position, using the secondary user interface (not speech) for the following scenarios:

• Scenario 8: ”Set up eating reminder for an older person”

• Scenario 9: “Review an example exercise data log”

• Scenario 10: “Set up exercise regime for an older person”

• Scenario 11: “Add a contact who an older person could call”

Secondary users will be instructed to ‘think aloud’ during these scenarios.

2 Break (5 mins)

This will be extended if necessary, depending on how the participant is feeling, and refreshments will be provided. Other breaks will be incorporated as needed.

8 Post-session discussion (40 mins)

Generally

• What is it like being with a robot like this?

• What do you like and dislike about it?

• Can you imagine this in your home?

• What would you change?

• What would it have to do/not do to become a useful buddy for you?

Questions related to eating

• Do you understand the idea and implications?

• What do you think of the fact that you are observed by your home and robot?

• What do you think about the robot giving you reminders and suggestions to eat?

• What kind of reminders would you prefer? (content, by voice, on screen, …)

Questions related to drinking

• Do you understand the idea and implications?

• What do you think of the fact that you are observed by your home and robot?

• What do you think about the robot giving you reminders and suggestions to drink?

• What kind of reminders would you prefer? (content, by voice, on screen, …)

Questions related to controlling home

• What do you think of these functionalities?

• What kind of things would you like to control in your home?

• What could be improved?

Questions related to exercises

• Do you understand the idea and implications?

• What do you think about the fact that the robot knows how active you are?

• What do you think about wearing special clothes?

• What do you think about wearing a small monitor?

• What do you think about measuring your heartbeat, temperature, respiration and blood pressure?

• Would you like to get suggestions to stay fit?

• What if other people could see these data?

• What do you think about doing exercises coached by the robot?

Questions related to video communication

• What do you think about this function?

• Who would you call?

• What about an automatic video call when something is wrong with you?

• What could be improved?

1 Level of user satisfaction

I am satisfied with the system

|Very satisfied |Satisfied |Neutral |Unsatisfied |Very unsatisfied |

| | | | | |

3 Level of ease of use (ergonomic) of the input devices for the function

How is it physically interacting with the touchscreen on the robot?

|Very easy |Easy |Neutral |Difficult |Very difficult |

| | | | | |

What is it like working out what to do when interacting with the touchscreen on the robot?

|Very easy |Easy |Neutral |Difficult |Very difficult |

| | | | | |

How is it physically speaking with the robot?

|Very easy |Easy |Neutral |Difficult |Very difficult |

| | | | | |

What is it like working out what to do when speaking with the robot?

|Very easy |Easy |Neutral |Difficult |Very difficult |

| | | | | |

How is it physically putting on the smart garment?

|Very easy |Easy |Neutral |Difficult |Very difficult |

| | | | | |

What is it like working out what to do when putting on the smart garment?

|Very easy |Easy |Neutral |Difficult |Very difficult |

| | | | | |

How is it physically wearing the smart garment?

|Very easy |Easy |Neutral |Difficult |Very difficult |

| | | | | |

What is it like working out what to do when wearing the smart garment?

|Very easy |Easy |Neutral |Difficult |Very difficult |

| | | | | |

4 User rating of function output/feedback (in relation to quality, utility and comprehensibility)

For each key system output:

What do you think of the quality?

|Very satisfied |Satisfied |Neutral |Unsatisfied |Very unsatisfied |

| | | | | |

What do you think of the usefulness?

|Very satisfied |Satisfied |Neutral |Unsatisfied |Very unsatisfied |

| | | | | |

What do you think of the comprehensibility?

|Very satisfied |Satisfied |Neutral |Unsatisfied |Very unsatisfied |

| | | | | |

5 Acceptance criteria

For each/some of the following statements, I...

|Agree strongly |Agree |Neither agree nor disagree |Disagree |Disagree strongly |

| | | | | |

Performance Expectancy

• Using the system in my home would enable me to accomplish things more quickly

Effort expectancy

• Learning to operate the system would be easy for me

• I would find the system easy to use

• I would find the system to be flexible to interact with

• I believe that it is easy to get the system to do what I want it to do

Social influence

• People that influence my behaviour think that I should use the system

• People that are important to me think that I should use the system

• Having the system is a status symbol

Facilitating Conditions

• I have control over using the system

• I have the resources necessary to use the system (e.g. AT to help get out of a chair and fetch a drink, or go to the toilet, etc.)

• I have the knowledge necessary to use the system

• I think the system fits well with the way I live

• Guidance was available to me in the selection of the system

• Specialised instruction concerning the system was available to me

• A specific person/group is available for assistance with system

9 Analysis Plan

In order to conduct a comprehensive analysis of the sessions, there will be careful review of the various elements of the data gathered. This will include data that is logged by the system during the interaction, as well as video recording of the sessions.

1 Data analysis after sessions

The following sections provide details of the parameters that will be analysed after the test sessions.

1 Level of function usage by the user

Data logging of

• responses to drinking reminder

• responses to eating reminder

• voice/video contacts added

• voice/video calls made

• voice/video calls accepted

• voice/video calls rejected

• responses to exercise reminder

• putting on WHSU garment

• WHSU ecg quality

• exercises viewed

• exercises conducted

• front door visitor calls received

• front door visitor calls accepted

• front door visitor calls rejected

Data logging of speech recognition:

• Certainty of known words

• Number of unknown words
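The function-usage parameters listed above lend themselves to simple event counting. The sketch below is illustrative only: the event-name strings and the derived acceptance rate are assumptions, not the project's actual logging schema.

```python
# Illustrative sketch: summarising function-usage events from a session log.
# Event names are hypothetical, not the project's actual log vocabulary.
from collections import Counter

def usage_summary(events):
    """events: iterable of event-name strings logged during a session.

    Returns per-event counts plus an example derived metric
    (the voice/video call acceptance rate), or None if no calls occurred.
    """
    counts = Counter(events)
    accepted = counts["voice/video call accepted"]
    rejected = counts["voice/video call rejected"]
    total = accepted + rejected
    acceptance_rate = accepted / total if total else None
    return counts, acceptance_rate
```

Similar derived metrics could be computed for eating, drinking and exercise reminders.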

2 System response time for the component/function to user input (voice and touch)

Data logging of

• Response time for voice output (following end of speech command)

• Response time for visual output (following end of touch interaction)
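Measuring these response times amounts to timestamping the end of each user input and the start of the corresponding system output. The sketch below is an assumption-laden illustration: the class and method names are invented here, not the project's actual logging API.

```python
# Illustrative sketch: deriving the response-time KPI from paired
# input/output timestamps. All names are hypothetical.
import time

class ResponseTimeLog:
    def __init__(self):
        self.events = []    # (modality, input_end_time, output_start_time)
        self._pending = {}  # modality -> time the input ended

    def input_ended(self, modality):
        # Called when a speech command or touch interaction finishes.
        self._pending[modality] = time.monotonic()

    def output_started(self, modality):
        # Called when the corresponding voice/visual output begins.
        start = self._pending.pop(modality, None)
        if start is not None:
            self.events.append((modality, start, time.monotonic()))

    def response_times(self, modality):
        """Response times (seconds) for one modality, e.g. 'voice' or 'touch'."""
        return [out - inp for m, inp, out in self.events if m == modality]
```

A monotonic clock is used so that wall-clock adjustments during a trial cannot produce negative intervals.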

3 Success of system to adapt to change in environment (e.g. background noises, lighting)

During trials we will measure the background noise and light levels to analyse the impact of these on the interaction.

2 Video analysis after the sessions

Overall

• Vocal and facial expressions (using video) (are they impressed, surprised, puzzled, positive/negative, do they seem comfortable, relaxed, anxious, frustrated?)

o While it is far away / moving closer / nearby

o When listening to the robot

o When speaking to the robot

o When a robot fails in a task

o When the robot completes a task

• While interacting, when do they need prompts/reminders?

o Do they know which functions are available?

o Do they know what to say/do to activate these functions?

Speech interaction

• Physical distance people prefer

• Range of words and language used.

• Do users know when they should/can talk?

• Do they understand the voice of the robot?

• Do users change their pace/tone/volume of speech?

GUI interaction

• Physical distance people prefer

• Do they struggle to engage with the tablet from a seated position?

• Is it necessary to manually move the robot closer to the participant or does the participant have to get up in order to engage with the tablet PC? What is the preferred distance?

• Do they appear uncomfortable using the tablet while sat in a chair?

Ease of configurability of function settings

We will review the video recordings for frequency counts of:

• Recovered errors made when configuring settings

• Unrecovered errors made when configuring settings

• Requests for help when configuring settings

• Views of ‘guide’ section

• Number of settings not understood – uncertainty of settings

• Appropriateness of default values for settings

4 Outcome

Afterwards, we will produce a short test protocol report with recommendations for updated system requirements (including instructional help requirements) which will be shared with the project partners.

Subsequently, UWE and SMH will produce the following:

• An updated set of HMI screens (WP7)

• D2.5 Issues 1 and 2

• Updated D2.3 System Requirements

7 Part D Hazard Analysis

The safety requirements will be derived from a functional hazard analysis that will be performed on a high-level requirement specification of the MOBISERV system. Requirements derived from the functional hazard analysis must be satisfied if the MOBISERV system is to achieve an acceptable level of risk. Functional safety requirements for the MOBISERV system are of two distinct categories:

● Functional safety requirements for each component

● Requirements for safety of each identified task

Of all MOBISERV components, the Kompaï robot presents the highest risk. Several safety requirements have been identified that apply to the mechanisms of the robot, and hence to all mission and safety tasks in which those mechanisms are used. For example, speed control: robot speed shall be a function of proximity (safe values to be determined in the detailed design), and robot speed shall be sufficiently slow at the moment of contact that injuries are at worst minor. Another example is environment sensing: the robot is required to sense its own location, the locations of significant persons, and terrain features, both to avoid collisions and to perform its mission tasks.
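The speed-control requirement above (speed as a function of proximity) can be sketched as a simple limit curve. This is illustrative only; the thresholds below are placeholders, since the safe values are explicitly to be determined in the detailed design.

```python
# Illustrative sketch: a possible speed limit as a function of proximity.
# All numeric values are placeholders, not validated safe values.

def speed_limit(distance_m, stop_dist=0.3, full_dist=2.0, max_speed=0.7):
    """Linear ramp: 0 m/s at or below stop_dist, max_speed at or above full_dist."""
    if distance_m <= stop_dist:
        return 0.0
    if distance_m >= full_dist:
        return max_speed
    return max_speed * (distance_m - stop_dist) / (full_dist - stop_dist)
```

Any commanded speed would then be clamped to this limit so that speed falls to zero before contact.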

In order to perform the hazard analysis and specify safety requirements, a task model will first be compiled.

• Specifying tasks:

– Task Title (name of operation)

– Goal state(s) or conditions to be achieved

– Behaviour type: convergence, maintenance, avoidance

– Characteristics of the environment in which the task is to be performed (location, interacting agents/features)

– State(s) / condition(s) to be avoided (hazards) - we will only identify obvious examples here.

Many methods exist for specifying the externally-observed functionality of a system, including Use Case Design, User Stories, and Viewpoints-based Requirements Engineering. However, for this design study, a method called Hierarchical Task Analysis has been chosen.

Hierarchical Task Analysis (HTA) [1] is a system analysis method developed by the Human Factors Analysis community for eliciting the procedures and action sequences by which a system is used by human operators. The system and procedural models identified by HTA are then used as the basis for operator error analyses, to determine whether the system's functional or user interface design has an increased potential for hazards due to human error.

HTA proceeds by the identification of the tasks required of the system, and identification of plans, which describe the order in which tasks are to be performed. Tasks are described by the general activity to be performed and/or the desired end state of the system and its environment at the end of the activity. Each task is then successively decomposed into sub-tasks by the same procedure, as far as is reasonable for the purpose of the analysis. Each task is accompanied by its own plan specifying the ordering of the sub-tasks. The results can also be used in the construction of a hierarchical task diagram that presents the organisational structure of the tasks in a graphical format.

HTA Procedure

1. Begin with the highest-level task objective/goal; this is usually the ultimate goal to be achieved

2. Identify the set of sub-tasks needed to achieve that goal; keep the sub-task titles/descriptions as simple as possible

3. Develop a plan that specifies the order of processing of sub-tasks

- identify the main sequence of sub-tasks

- identify exception cases, conditional sub-tasks, alternate sequences etc., and note the enabling/initiating conditions

4. For each sub-task, repeat steps 2 and 3, identifying further hierarchical levels to the task decomposition, until a suitable stopping point is reached

- Stopping rules: common sense, ethological guidelines
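The decomposition produced by steps 1-4 above is naturally a tree of tasks, each carrying its own plan. The sketch below shows one possible representation; the structure follows the HTA description above, but the class design and the example task names are hypothetical.

```python
# Illustrative sketch: representing an HTA decomposition as a task tree.
# The example tasks are hypothetical, not the project's actual task model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    title: str                      # name of the operation (step 1/2)
    plan: str = ""                  # ordering and conditions for sub-tasks (step 3)
    subtasks: List["Task"] = field(default_factory=list)  # decomposition (step 4)

def count_tasks(task):
    """Total number of tasks in the hierarchy, including the root."""
    return 1 + sum(count_tasks(sub) for sub in task.subtasks)

# Hypothetical example: a top-level goal decomposed one level down.
remind = Task(
    title="Deliver drinking reminder",
    plan="1 then 2; if user absent, retry 1 after 5 minutes",
    subtasks=[
        Task(title="Navigate to user"),
        Task(title="Issue reminder message"),
    ],
)
```

Each node can be decomposed further by the same procedure until a stopping rule applies.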

Once the MOBISERV tasks are defined using HTA, a list of potential failures and the consequences they could cause will be derived. The hazards will also be categorised based on their severity and frequency. The preliminary hazard analysis will include:

• Failure Type

• Failure Description

• Operating Phase

• Consequence Description

• Consequence Severity

• Cause Description

• Initial Frequency
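The fields listed above map directly onto a record structure. The sketch below is illustrative only: the field types, the severity scale and the example entry are assumptions, not values from the actual analysis.

```python
# Illustrative sketch: a record mirroring the preliminary hazard analysis
# fields above. Scales and the example entry are hypothetical.
from dataclasses import dataclass

@dataclass
class HazardRecord:
    failure_type: str
    failure_description: str
    operating_phase: str
    consequence_description: str
    consequence_severity: int   # assumed scale, e.g. 1 (minor) to 4 (catastrophic)
    cause_description: str
    initial_frequency: str      # assumed scale, e.g. "rare", "occasional", "frequent"

# Hypothetical example entry for a robot speed-control failure.
example = HazardRecord(
    failure_type="Speed control failure",
    failure_description="Robot fails to reduce speed near a person",
    operating_phase="Autonomous navigation",
    consequence_description="Collision with user",
    consequence_severity=2,
    cause_description="Proximity sensor fault",
    initial_frequency="rare",
)
```

A table of such records, one per identified failure, would then feed the severity/frequency categorisation.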

Functional safety requirements for the MOBISERV functions will serve as input to the technical partners when refining the components’ design.

Responsibilities and Documentation

This validation study will be performed locally by work package (WP) leaders at their sites in the laboratory. It is the responsibility of the technical WP leaders to ensure that their components meet the required quality/performance level to enable reliable and accurate performance within the integrated MOBISERV system, and to report all results in WP-specific deliverables.

The usability evaluations will be carried out as field trials in the UK and Netherlands. In working with older persons, all due ethical procedures will be followed, as described in D2.3 issue 1. Ethical approval has been obtained for the evaluation studies in both countries. The results will be documented and reported in subsequent WP2 deliverables.

1 WP leader responsibilities

• Verification of the appropriate KPIs, and of the validation methodologies for these, for each of their sub-system components.

• Development of technical component specification and delivery of the validation results as part of these specifications.

• Risk assessment for their components and documentation of safety constraints and testing for each of their components – including details of electrical interference tests as appropriate.

References

Cairns, P. & Cox, AL. 2008. Research Methods for Human-Computer Interaction, Cambridge University Press.

EudraLex - Volume 4 Good manufacturing practice (GMP) Guidelines, available online at:

, accessed 17/12/10

Malan, R., Bredemeyer, D. 2001. Defining non-functional requirements, Bredemeyer Consulting, available online at: , accessed 07/01/11

Reh, F.J., Key Performance Indicators, available online at: , accessed 17/12/10

Roozenburg, N. F. M. & Eekels, J. 1995. Product design: fundamentals and methods, Chichester, Wiley.

Appendix

1 MOBISERV Project Objectives

Table 18: Project Objectives with Targets

|Objective # |Project Objective |

|OT1 |To produce an efficient system design of the Personal |

| |Robotic Platform |

|OT2 |Development of Health Status Monitoring System |

| |integrated into wearable fabrics |

|OT3 |Development of Secure tele-alarm and health reporting |

| |system |

|OT4 |Development of the Nutrition support system |

|OT6 |To apply innovative personal tracking techniques for |

| |monitoring nutrition habits and vital signs of the |

| |elderly citizens while they are accomplishing their |

| |daily activities. The project will also apply location |

| |tracking in cases of harmful situations (e.g. after a |

| |health related incident has occurred) |

|OT7 |To develop a reliable communication platform for |

| |various heterogeneous devices of MOBISERV by providing |

| |a unified interface to different wireless technologies |

| |ranging from mobile technologies to wireless LANs, |

| |enabling seamless connectivity and vertical handovers |

| |to provide maximum reliability. |

|OT8 |To develop methods for securing sensitive and private |

| |communicated information while taking computing |

| |requirements into account. |

|OR1 |To research, develop and implement self-learning |

| |techniques in relation to optical recognition, pattern |

| |recognition and autonomous navigation techniques. |

|OR2 |To research effective self-learning methods for |

| |predicting and detecting health-related adverse events |

| |of the older citizens from multiple sensors given the |

| |fact that single sensor based solutions are not robust |

| |and reliable enough (issuing false alarms). |

2 Smart home environment

[pic]

Figure 3 2D Layout - Smartest Home of The Netherlands in Eindhoven

3 Heuristics for Expert Evaluation

| | | |

|1. Consistency | | |

| | | |

|Icons, labels, buttons, and menus (i.e., elements) displayed on | | |

|screen should be consistent in location, terminology and meaning. | | |

| | | |

| | | |

| |- |Do the elements follow platform conventions? |

| | |(do as everyone else does) |

| | | |

| | | |

| |- |Are the elements directly understandable |

| | |(i.e., not ambiguous), in language and |

| | |visuals? |

| | | |

| | | |

| |- |Is a particular system action always displayed|

| | |in the same manner and always achievable by |

| | |one particular user action? |

| | | |

|2. Simplicity | | |

| | | |

|Elements displayed on screen should not contain functionalities or | | |

|information which is rarely needed or irrelevant. | | |

| | | |

| | | |

| |- |Do the rarely needed or irrelevant elements |

| | |compete with and diminish the visibility of |

| | |relevant units of information? |

| | | |

|3a. Feedback | | |

| | | |

|Elements displayed on screen should keep you informed about the | | |

|past, current, and future system status. | | |

| | | |

| | | |

| |- |Do these feedback elements keep you informed |

| | |about what is going on within a reasonable |

| | |time? |

| | | |

| |- |Do these feedback elements provide an answer |

| | |to the questions: Where am I? Where have I |

| | |been? & Where can I go? |

| | | |

|3b. Feedback | | |

| | | |

|Elements displayed on screen should keep you informed about the | | |

|past, current, and future system status. | | |

| | | |

| |- |Do these feedback elements also provide |

| | |information about how you’ve got here, how you|

| | |can go back, and how you can go somewhere |

| | |else? |

| | | |

| |- |Are the responses of elements that provide |

| | |feedback for minor and frequent actions |

| | |modest? |

| | | |

|4. Control | | |

| | | |

|Elements displayed on screen should provide you with control and | | |

|freedom. | | |

| |- |Is there a clearly marked ‘emergency exit’ |

| | |to leave an unwanted state? |

| | | |

| |- |Are there undo and redo options? |

| | | |

| |- |Do the elements respond to your actions? |

| | | |

|5. Error | | |

| | | |

|Elements displayed on screen should help you recognize, diagnose, | | |

|and recover from an error. | | |

| | | |

| | | |

| |- |Do these elements display the error in natural|

| | |language, indicate the problem, and suggest a |

| | |solution and what the effect of this will be? |

| | | |

| | | |

| |- |Do the displayed errors avoid blaming the |

| | |problem on user deficiencies? (the user is |

| | |always right) |

| | | |

|6a. Overload | | |

| | | |

|The elements displayed on screen should minimize the memory load of| | |

|the user. | | |

| | | |

| | | |

| |- |Are there elements that provide instructions |

| | |for use of the system and are these |

| | |instructions simple and understandable? |

| | | |

| | | |

| |- |Are the elements on screen static or at least |

| | |low in motion frequency? |

| | | |

| | | |

| |- |Are there more than 7 elements within a |

| | |(wide) action sequence, and more than 3 action|

| | |sequences necessary to perform a task? |

| | | |

| |- |Do the elements and action sequences contain |

| | |metaphors that are known by you? |

| | | |

| |- |Do you have to remember information from one |

| | |part of the system to another? |

| | | |

| | | |

| |- |Are there elements that provide shortcuts to |

| | |frequently made actions? |

Table 19 Heuristics for Expert evaluation

4 Specification of the Evaluation Wizard

1 Introduction

This section describes a very simple “control interface” – the wizard – that will be needed to perform the planned user evaluation tests for the MOBISERV project.

Using this wizard, test facilitators at UWE and SMH can control the robot (for minor adjustments, but also for safety), control the HMI of the robot, and trigger all possible dialogues / pop-ups / messages / reminders of the system.

Later, in real-life situations, most of these dialogues will only be triggered when the user has not been eating, or has not been physically active, for a couple of hours. To facilitate our tests, evaluate these dialogues, and evaluate the overall user experience, the wizard is needed to trigger these dialogues manually and thus speed up the procedure.

2 Requirements for Wizard

1 Global navigation functions

• Navigate / turn / stop the robot freely in space

• Adjust speed of robot

• Move robot to position A (e.g. sofa)

• Move robot to position B (e.g. robot’s resting position)

• Move robot to position C (e.g. near doorway)
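The global navigation functions above can be sketched as a small control class. This is an illustrative sketch only: the RobotLink class, its method names, and the preset coordinates are hypothetical and not part of the MOBISERV implementation.

```python
# Hypothetical sketch of the wizard's global navigation functions:
# preset destinations (A/B/C) mapped to named coordinates, plus a
# speed adjustment clamped to a safety cap.

PRESETS = {
    "A": (1.5, 0.8),   # e.g. sofa
    "B": (0.0, 0.0),   # e.g. robot's resting position
    "C": (3.2, 2.1),   # e.g. near doorway
}

class RobotLink:
    """Minimal stand-in for the wizard-to-robot control channel."""

    def __init__(self, max_speed=0.5):
        self.max_speed = max_speed  # m/s, safety cap for the facilitator
        self.speed = 0.3
        self.target = None

    def set_speed(self, speed):
        # Clamp to [0, max_speed] so the facilitator cannot exceed a safe limit.
        self.speed = max(0.0, min(speed, self.max_speed))

    def goto_preset(self, name):
        # Move robot to one of the preset positions A/B/C.
        if name not in PRESETS:
            raise ValueError(f"unknown preset {name!r}")
        self.target = PRESETS[name]

    def stop(self):
        # Safety stop: abandon the current target immediately.
        self.target = None
```

Clamping the speed in the wizard itself, rather than trusting the facilitator, reflects the safety role the wizard plays during the tests.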

2 Global HMI functions

• Adjust volume of speech

• Adjust speed of speech

• Repeat last action

• Repeat last command

• Set ‘user at home’ status

3 Specific functions

1 Drinking

• Issue a question about drinking (version A, version B)

• Issue a reminder to drink (version A, version B)

• Issue an encouragement to drink (version A, version B)

2 Eating

• Issue a question about eating (version A, version B)

• Issue a reminder to eat (version A, version B)

• Issue an encouragement to eat (version A, version B)

3 Front door

• Trigger door bell and pop-up message (version A, version B)

4 Exercise

• Issue persuading exercising message (version A, version B)

5 Voice/video

• Trigger incoming video call

• Trigger incoming audio call
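Since every specific function above comes in two versions (A, B), the wizard's triggers can be modelled as a lookup keyed on (category, kind, version). The sketch below is a hypothetical illustration; the message texts are placeholders, not the project's real dialogues.

```python
# Hypothetical sketch of the wizard's dialogue triggers. Each specific
# function (drinking/eating/front door/exercise) exists in versions A and B,
# so a (category, kind, version) tuple selects the message to issue.
# All texts below are placeholders for illustration only.

DIALOGUES = {
    ("drinking", "question", "A"): "Have you had something to drink recently?",
    ("drinking", "reminder", "A"): "Please remember to drink some water.",
    ("eating",   "question", "A"): "Have you eaten anything today?",
    ("door",     "popup",    "A"): "Someone is at the front door.",
    ("exercise", "persuade", "A"): "How about a short walk?",
}

def trigger_dialogue(category, kind, version="A"):
    """Return the message the robot HMI should display and speak."""
    try:
        return DIALOGUES[(category, kind, version)]
    except KeyError:
        # Unknown combination: surface it to the facilitator instead of
        # silently doing nothing during a test session.
        raise KeyError(f"no dialogue defined for {(category, kind, version)}")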

4 Proposed design of the Wizard interface

[pic]

Figure 4 Possible wizard control panel layout

5 Other issues

- Triggering a messaging action should take effect instantly - even if the robot is currently moving and has only partially completed a move to a new preset destination.

- Nice to have: is it possible to add a text field to the wizard, in which we can freely type some text, so that the robot will speak out this text?
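The two issues above - instant message firing even mid-move, and a free-text speech field - could be combined as follows. This is a sketch under assumptions: the Wizard class and its methods are invented for illustration and are not the MOBISERV API.

```python
# Hypothetical sketch of the "fire instantly" requirement: a triggered
# message must not wait for an in-progress move to finish. Here the
# wizard interrupts the motion, delivers the message, then resumes the
# interrupted move. All names are illustrative only.

class Wizard:
    def __init__(self):
        self.moving_to = None
        self.spoken = []          # log of messages the robot has voiced

    def start_move(self, destination):
        self.moving_to = destination

    def fire_message(self, text, resume=True):
        paused = self.moving_to
        self.moving_to = None     # preempt the motion immediately
        self.spoken.append(text)  # message is delivered without delay
        if resume and paused is not None:
            self.moving_to = paused  # continue the interrupted move

    def speak_free_text(self, text):
        # The "nice to have" free-text field: robot speaks arbitrary text.
        self.fire_message(text, resume=True)
```

Whether the robot should pause or keep moving while speaking is a design choice for the evaluation team; the sketch pauses and resumes, which keeps the message delivery instant either way.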

-----------------------

The information contained in this report is subject to change without notice and should not be construed as a commitment by any members of the MOBISERV Consortium. The MOBISERV Consortium assumes no responsibility for the use or inability to use any software or algorithms, which might be described in this report. The information is provided without any warranty of any kind and the MOBISERV Consortium expressly disclaims all implied warranties, including but not limited to the implied warranties of merchantability and fitness for a particular use.
