Applying Saliency-Based Region of Interest Detection in Developing a Collaborative Active Learning System with Augmented Reality

Abstract. Learning activities need not be confined to traditional physical classrooms; they can also be set up in virtual environments. The authors therefore propose a novel augmented reality system to organize a class that supports real-time collaboration and active interaction between educators and learners. A pre-processing phase is integrated into a visual search engine, the heart of our system, to recognize printed materials with low computational cost and high accuracy. The authors also propose a simple yet efficient visual saliency estimation technique based on regional contrast to quickly filter out low-informative regions in printed materials. This technique not only reduces the unnecessary computational cost of keypoint descriptors but also increases the robustness and accuracy of visual object recognition. Our experimental results show that the whole visual object recognition process can be sped up 19 times and the accuracy can increase by up to 22%. Furthermore, this pre-processing stage is independent of the choice of features and matching model in a general recognition pipeline, so it can be used to boost existing systems to real-time performance.

Keywords: Smart Education, Active Learning, Visual Search, Saliency Image, Human-Computer Interaction.

1 Introduction

Skills for the 21st century require active learning, which places the responsibility of learning on learners [1] by stimulating their enthusiasm and involvement in various activities. As learning activities are no longer limited to traditional physical classrooms but can also be realized in virtual environments [2], we propose a new system with interaction via Augmented Reality (AR) to enhance attractiveness and collaboration for learners and educators in a virtual environment. To develop a novel AR system for education, we focus on two criteria as the main guidelines for our design: real-time collaboration and interaction, and naturalness of user experience.

The first property emphasizes real-time collaboration and active interaction between educators and learners via augmented multimedia and social media. Just by looking through a mobile device or AR glasses, an educator can monitor the progress of learners or groups via their interactions with augmented content in lectures. The educator also gets feedback from learners on the content and activities designed and linked to a specific page of a lecture note or a textbook, which helps improve the quality of lecture design. Learners can create comments, feedback, or other types of social media targeting a section of a lecture note or a page of a textbook for other learners or the educator. A learner can also be notified of social content created by other team members as teamwork progresses.

The second property of the system is the naturalness of user experience: the system is aware of the context, i.e. which section of a page in a lecture note or a textbook is being read, through natural images rather than artificial markers. Users can also interact with the related augmented content with their bare hands. This enhances the user experience by combining the aesthetic appeal of the analog material with immersive digital multisensory feedback from additional multimedia information.

The core component in developing an AR education environment is recognizing certain areas of printed materials, such as books or lecture handouts. Since a learner is easily attracted by figures or charts in books and lecture notes, we encourage educators to exploit learners' visual sensitivity to graphical areas and embed augmented content in such areas, rather than text regions, of printed materials. Therefore, in our proposed system, we use visual content recognition rather than optical character recognition to determine the context of readers as they read printed materials.

In practice, the graphical regions of interest that most attract readers do not cover a whole page. Other regions, such as small decorations or small text, do not provide much useful information for visual recognition. We therefore propose a novel method based on a saliency metric to quickly eliminate unimportant or noisy regions in printed lecture notes or textbooks and to speed up the visual context recognition process on mobile devices or AR glasses. Our experimental results show that the whole visual object recognition process can be sped up 19 times and the accuracy can increase by up to 22%.
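To make the idea concrete, the Python/OpenCV sketch below illustrates one possible regional-contrast filter of this kind: the page image is divided into a coarse grid, each cell is scored by its average colour contrast against all other cells, and keypoints are extracted only inside sufficiently salient cells. The grid size, the LAB colour space, the saliency threshold, and the use of ORB features are illustrative assumptions for this sketch, not the specific parameters or features of our system.

# Hedged sketch: grid-based regional-contrast saliency used to mask out
# low-informative areas of a page image before keypoint extraction.
# Grid size, colour space, threshold, and feature type are assumptions.
import cv2
import numpy as np

def regional_contrast_saliency(page_bgr, grid=(8, 8)):
    """Score each grid cell by its mean colour contrast against all other cells."""
    lab = cv2.cvtColor(page_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    h, w = lab.shape[:2]
    gh, gw = grid
    means = np.zeros((gh, gw, 3), np.float32)
    for i in range(gh):
        for j in range(gw):
            cell = lab[i * h // gh:(i + 1) * h // gh,
                       j * w // gw:(j + 1) * w // gw]
            means[i, j] = cell.reshape(-1, 3).mean(axis=0)
    flat = means.reshape(-1, 3)
    # Regional contrast: average colour distance of each cell to every other cell.
    saliency = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2).mean(axis=1)
    saliency = saliency.reshape(gh, gw)
    return saliency / (saliency.max() + 1e-6)

def salient_keypoints(page_bgr, grid=(8, 8), thresh=0.4):
    """Detect ORB keypoints only inside cells whose saliency exceeds `thresh`."""
    saliency = regional_contrast_saliency(page_bgr, grid)
    h, w = page_bgr.shape[:2]
    # Upsample the cell-level decision to a full-resolution binary mask.
    mask = cv2.resize((saliency > thresh).astype(np.uint8) * 255,
                      (w, h), interpolation=cv2.INTER_NEAREST)
    orb = cv2.ORB_create()
    gray = cv2.cvtColor(page_bgr, cv2.COLOR_BGR2GRAY)
    return orb.detectAndCompute(gray, mask)

Because the mask is applied before descriptor computation, the filter is agnostic to the particular feature detector and matching model used downstream, which is the property that lets this pre-processing stage be attached to existing recognition pipelines.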

This paper is structured as follows. In Section 2, we briefly present and analyze related work. The proposed system is presented in Section 3. In Section 4, we present the core component of our system, the visual search engine. The experiments and evaluations are shown in Section 5. We then discuss potential uses of the system in Section 6. Finally, Section 7 presents conclusions and ideas for future work.

 