4 Experimental Results

This section summarizes the results of some of the experiments we performed to test the effectiveness of the proposed approach. These experiments aimed at:

(1) evaluating the ability to recognize LIS signs (input module); (2) tuning the remote control of the robotic hand (robotic hand module); (3) assessing the effectiveness of transmitting information over the whole pipeline; (4) gathering feedback in order to fix potential errors and problems. In particular, the input module was tested most intensively, as the ability to recognize SL signs reliably and quickly is of crucial relevance for the whole system.

4.1 Input Module Validation

Regarding the classification, we report both the average per-class accuracy and the hand gesture recognition accuracy. The first metric measures how often each pixel is labelled correctly by the classification layer. The results, presented in Fig. 5, show that our system is usually able to discriminate among fingers and reaches peak accuracy when discriminating the palm and the wrist. Small fingers, such as the ring and pinky, are obviously harder to track, mainly because of the self-occlusions that occur in many poses, and so the accuracy of their labelling is lower. The data presented in Fig. 5 represent the average accuracy of our system with respect to a ground truth set composed of 42 manually labelled depthmaps. A hand labelling example is given in Fig. 6.

Fig. 5. Average per-pixel classification accuracy for each hand part. On the x axis, for each finger, the palm subscript identifies the metacarpophalangeal joint (MCP), while indexes 1, 2 and 3 identify the proximal interphalangeal joint (PIP), the distal interphalangeal joint (DIP) and the fingertip, respectively. The y axis reports the percentage of times each hand part is correctly labelled.
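As an illustration, per-class per-pixel accuracy of the kind plotted in Fig. 5 can be computed by comparing a predicted label map against a manually labelled ground truth. The sketch below is a minimal NumPy example; the label conventions (integer hand-part labels, background marked as -1) are our own assumption for illustration, not taken from the system itself.

```python
import numpy as np

def per_class_accuracy(pred, truth, n_classes):
    """Fraction of pixels correctly labelled, per hand-part class.

    pred, truth: integer label maps of the same shape; background
    pixels are assumed to carry label -1 and are ignored.
    """
    acc = np.full(n_classes, np.nan)
    for c in range(n_classes):
        mask = truth == c            # ground-truth pixels of this part
        if mask.any():
            acc[c] = np.mean(pred[mask] == c)
    return acc

# Toy 3x3 label maps with two hand parts (0 = palm, 1 = wrist)
truth = np.array([[0, 0, 1], [0, 1, 1], [0, 0, 1]])
pred  = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
print(per_class_accuracy(pred, truth, 2))  # class 0: 4/5 = 0.8, class 1: 3/4 = 0.75
```

Averaging such per-class scores over all ground-truth depthmaps yields the per-part bars of Fig. 5.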

The average per-pixel classification accuracy obtained by our system is slightly worse than that achieved in [16], but this is due only to the fact that we used a much smaller training set, composed of fewer than 15'000 images, while 200'000 images are used in [16] (and the authors could not use more because of memory constraints).

However, the experiments confirm that the average accuracy reached by our approach is sufficient to track the hand effectively and to discriminate among hand gestures, even similar ones. To this extent, Fig. 7 shows a graph summarizing the hand gesture recognition accuracy. These data are computed with leave-one-subject-out cross-validation, a protocol in which the data from one subject are used for testing and the data from all the other subjects are used for training. This is the same metric used in [17], and it allowed us to compare our approach with the results obtained by Kuznetsova et al. on real data. The error rate the authors report for a multi-layered RF relying on decision trees with depth fixed to 20 does not go below 49%. Our approach outperforms this result, achieving an average error rate of 46% under the same operating conditions.

Fig. 6. Hand labelling example: (a) RGB image; (b) labelled hand image. The depthmap corresponding to the RGB image (a) is processed by the input block: in the labelled image (b), the background is removed and each identified sub-part of the hand is coloured with the corresponding colour from the model.
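The evaluation protocol above, in which one subject's data is held out for testing while all other subjects' data is used for training, can be sketched as a generic loop. Here `train_fn` and `test_fn` are hypothetical stand-ins for the actual training and error-rate routines, and the demo "model" is a trivial majority-label classifier.

```python
import numpy as np
from collections import defaultdict

def leave_one_subject_out(samples, train_fn, test_fn):
    """Leave-one-subject-out cross-validation.

    samples: list of (subject_id, features, label) tuples.
    train_fn(train_pairs) -> model; test_fn(model, test_pairs) -> error rate.
    Returns the error rate averaged over held-out subjects.
    """
    by_subject = defaultdict(list)
    for subj, x, y in samples:
        by_subject[subj].append((x, y))
    errors = []
    for held_out in by_subject:
        # Train on every subject except the held-out one.
        train = [p for s, pairs in by_subject.items() if s != held_out
                 for p in pairs]
        model = train_fn(train)
        errors.append(test_fn(model, by_subject[held_out]))
    return float(np.mean(errors))

# Toy demo: the "model" is just the majority label of the training set.
majority = lambda pairs: max(set(y for _, y in pairs),
                             key=[y for _, y in pairs].count)
error = lambda model, pairs: sum(y != model for _, y in pairs) / len(pairs)

samples = [('A', 0, 'x'), ('A', 1, 'x'),
           ('B', 0, 'x'), ('B', 1, 'x'), ('B', 2, 'y')]
print(leave_one_subject_out(samples, majority, error))  # 1/6 = 0.1666...
```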

As shown in Fig. 7, our approach is able to accurately recognize a sign most of the time. Even if accuracy is practically never over 90% in our experiments, we note that a precision of nearly 35% is always guaranteed, and this is sufficient to recognize signs accurately. For instance, Fig. 8 shows two examples of a hand labelled by our approach. As shown in Fig. 8d, the T letter is easily discriminated even though the average classification accuracy is only slightly more than 40%.
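For illustration only, one simple way to see why moderate per-pixel accuracy can still suffice for sign discrimination is to compare the histogram of hand-part labels in a labelled image against per-sign template histograms. This nearest-template scheme is our own toy sketch, not the classifier actually used by the system.

```python
import numpy as np

def part_histogram(label_map, n_parts):
    """Normalised histogram of hand-part labels; background (-1) is ignored."""
    pixels = label_map[label_map >= 0]
    hist = np.bincount(pixels, minlength=n_parts).astype(float)
    return hist / hist.sum()

def recognise(label_map, templates, n_parts):
    """Return the sign whose template histogram is closest in L1 distance."""
    h = part_histogram(label_map, n_parts)
    return min(templates, key=lambda sign: np.abs(templates[sign] - h).sum())

# Toy example with three hand parts and two made-up sign templates.
map_T = np.array([[0, 0, 1], [0, 0, 2]])   # mostly part 0
map_A = np.array([[2, 2, 2], [1, 1, 0]])   # mostly part 2
templates = {'T': part_histogram(map_T, 3), 'A': part_histogram(map_A, 3)}

query = np.array([[0, 0, 1], [0, 2, 2]])   # noisy labelling, still closer to T
print(recognise(query, templates, 3))      # 'T'
```

Even when a sizeable fraction of pixels is mislabelled, the aggregate label distribution can remain closest to the correct sign's template, which mirrors the behaviour observed for the T letter in Fig. 8d.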

 