Ambient Surface: Enhancing Interface Capabilities of Mobile Objects Aided by Ambient Environment
As I am involved in the C-Lab project, which builds on this work, I was asked to read this paper. The paper introduces a new kind of environment, the ambient environment, which provides two main capabilities.
1. It provides a wider screen for mobile devices.
2. It allows analog objects to interact dynamically with users.
Let me explain the second capability in more detail. There are three main scenarios.
1. Input: Image, Output: Information
A mobile object, such as a paper book or a smartphone, is placed on an ordinary table with no digital equipment. Information about the object, or about a user's interaction with it, is then captured by the 2D/3D cameras. The captured information is processed on the system PC, and the appropriate feedback images are projected onto the table.
2. Input: ID, Output: Information
The Ambient Surface application, installed on the system PC, detects the ID, position, and orientation of a mobile object placed on the table by recognizing customized color-coded mini markers (see the first sketch after this list).
3. Input: Touch events, Output: Information
It also detects a user's finger-touch interaction on the table with a 3D depth camera (see the second sketch below). The system PC analyzes the captured information and then generates the projected images.
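
To make the marker-driven pipeline concrete, here is a minimal sketch of the capture → detect → project loop, assuming each mini marker is a single, distinctive solid color whose hue doubles as its ID. This is my simplification of the paper's customized color-coded markers; the HSV ranges, camera index, and window name are placeholders I chose, not values from the paper.

```python
# Sketch of scenario 2 (marker ID/position/orientation) inside the
# capture -> process -> project loop of scenario 1.
import cv2
import numpy as np

# Hypothetical ID -> HSV range table for the color-coded mini markers.
MARKER_COLORS = {
    1: ((35, 80, 80), (85, 255, 255)),    # green marker
    2: ((100, 80, 80), (130, 255, 255)),  # blue marker
}

def detect_markers(frame):
    """Return (id, (cx, cy), angle_deg) for each color-coded marker found."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    found = []
    for marker_id, (lo, hi) in MARKER_COLORS.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 300:             # ignore small noise blobs
                continue
            (cx, cy), _, angle = cv2.minAreaRect(c)  # position + orientation
            found.append((marker_id, (int(cx), int(cy)), angle))
    return found

def render_feedback(shape, markers):
    """Build the image the projector casts back onto the table."""
    canvas = np.zeros(shape, dtype=np.uint8)
    for marker_id, (cx, cy), angle in markers:
        cv2.circle(canvas, (cx, cy), 60, (0, 255, 0), 2)
        cv2.putText(canvas, f"id={marker_id} {angle:.0f} deg", (cx + 70, cy),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
    return canvas

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)                  # 2D camera looking down at the table
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        feedback = render_feedback(frame.shape, detect_markers(frame))
        cv2.imshow("projector", feedback)      # full-screen this window on the projector
        if cv2.waitKey(1) == 27:               # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()
```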
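
For the touch path, here is a similar sketch, assuming the 3D depth camera delivers depth frames in millimeters. The 5–12 mm touch band and the fingertip blob sizes are illustrative guesses of mine; the paper does not specify them.

```python
# Sketch of scenario 3: finger-touch detection from a depth camera by
# comparing each frame against a background depth map of the empty table.
import cv2
import numpy as np

TOUCH_MIN_MM, TOUCH_MAX_MM = 5, 12   # fingertip height band above the table

def calibrate_table(depth_frames):
    """Average several frames of the empty table into a background depth map."""
    return np.mean(np.stack(depth_frames).astype(np.float32), axis=0)

def detect_touches(depth, table_depth):
    """Return (x, y) coordinates of fingertip-sized blobs touching the surface."""
    height_above = table_depth - depth.astype(np.float32)   # mm above the table
    touch_mask = ((height_above > TOUCH_MIN_MM) &
                  (height_above < TOUCH_MAX_MM)).astype(np.uint8)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(touch_mask)
    touches = []
    for i in range(1, n):                                    # label 0 is background
        area = stats[i, cv2.CC_STAT_AREA]
        if 20 < area < 400:                                  # fingertip-sized blob
            touches.append(tuple(centroids[i]))
    return touches
```

In a real setup the detected positions would still need to be mapped from camera pixels to projector pixels, for example with a homography calibrated once in advance.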