Abstract: Spatial Augmented Reality is a variation of Augmented Reality in which a Head-Mounted Display is not required. This variation is useful in cases where the Head-Mounted Display itself is a limitation. To achieve this, Spatial Augmented Reality techniques substitute the technological elements of Augmented Reality: the virtual world is projected onto a physical surface. To create an interactive spatial augmented experience, the application must be aware of the spatial relations between its core elements. Here, the core elements are a projection system and an input system, and the process that achieves this spatial awareness is called system calibration. The system is considered calibrated if the projected virtual world matches real-world scale, meaning that a virtual object maintains its perceived dimensions when projected into the real world, and if the application knows the relative position of a point on the projection plane with respect to the origin of the RGB-depth sensor. Any kind of projection technology can be used (light-based projectors, close-range projectors, and screens) as long as it complies with the defined constraints; the method was tested on different configurations. The proposed procedure does not rely on a physical marker, minimizing human intervention in the process. The tests were made using a Kinect V2 as the input sensor and several projection devices: the defined constraints were applied to a variety of physical configurations, and once the method was executed, several variables were measured to evaluate its performance. It was demonstrated that the method can solve different arrangements, giving the user a wide range of setup possibilities.
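The abstract leaves the calibration mathematics out. Assuming a planar projection surface, the mapping between RGB-depth sensor coordinates and projector pixels is often modeled as a homography estimated from point correspondences; the sketch below shows the standard DLT estimation (function names are illustrative, not from the paper).

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via DLT.

    src, dst: (N, 2) arrays of corresponding 2D points, N >= 4.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)
    # H is the null-space vector of A (smallest singular value).
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.column_stack([pts, np.ones(len(pts))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In a real calibration, `src` would be points detected by the depth sensor and `dst` the corresponding projector pixels; four non-collinear correspondences suffice, more improve robustness.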
Abstract: Over the past 20 years, technology has developed rapidly, often in directions no one expected. Advancements in technology open new opportunities for immersive learning environments, and there is a need to bring education to a level that makes it more effective for the student. Augmented Reality (AR) is one of the most popular technologies today. This paper reports our experience of applying AR technology, using a marker-based approach, in an e-learning system that inserts virtual objects into real-world scenes to explain information more clearly. We developed a mobile phone application implementing this approach and tested it on students to determine the extent to which it encouraged them to learn and understand the subjects. In this paper, we discuss the beginnings of AR, the fields that use it, its effectiveness in education, its current spread, and the architecture of our work. The aim of this paper is therefore to show that an interactive e-learning system built with AR technology encourages students to learn more.
Abstract: New sensors and technologies – such as microphones,
touchscreens or infrared sensors – are currently making their
appearance in the automotive sector, introducing new kinds of
Human-Machine Interfaces (HMIs). The interactions with such tools
might be cognitively expensive, thus unsuitable for driving tasks.
It could for instance be dangerous to use touchscreens with a
visual feedback while driving, as it distracts the driver’s visual
attention away from the road. Furthermore, new technologies in
car cockpits modify the interactions of the users with the central
system. In particular, touchscreens are preferred to arrays of buttons
for space improvement and design purposes. However, the buttons’
tactile feedback is no longer available to the driver, which makes
such interfaces more difficult to manipulate while driving. Gestures
combined with an auditory feedback might therefore constitute an
interesting alternative to interact with the HMI. Indeed, gestures can
be performed without vision, which means that the driver’s visual
attention can be totally dedicated to the driving task. Auditory
feedback can inform the driver both about the task performed on the
interface and about the gesture itself, which might compensate for
the lack of tactile information. As
audition is a relatively unused sense in automotive contexts, gesture
sonification can contribute to reducing the cognitive load thanks
to the proposed multisensory exploitation. Our approach consists
in using a virtual object (VO) to sonify the consequences of the
gesture rather than the gesture itself. This approach is motivated
by an ecological point of view: Gestures do not make sound, but
their consequences do. In this experiment, the aim was to identify
efficient sound strategies for transmitting dynamic information about
VOs to users through sound. The swipe gesture was chosen for this purpose,
as it is commonly used in current and new interfaces. We chose
two VO parameters to sonify, the hand-VO distance and the VO
velocity. Two kinds of sound parameters can be chosen to sonify the
VO behavior: Spectral or temporal parameters. Pitch and brightness
were tested as spectral parameters, and amplitude modulation as a
temporal parameter. Performances showed a positive effect of sound
compared to a no-sound situation, revealing the usefulness of sounds
to accomplish the task.
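As an illustration of the kind of mapping the study evaluates, the two VO parameters (hand-VO distance and VO velocity) can be mapped to a spectral cue (pitch) and a temporal cue (amplitude-modulation rate). The parameterization below is a hypothetical sketch, not the authors' actual design:

```python
def distance_to_pitch(distance, d_max=1.0, f_near=880.0, f_far=220.0):
    """Map hand-to-VO distance (m) to pitch (Hz): closer hand -> higher
    pitch, interpolated on a logarithmic (perceptually even) scale."""
    t = min(max(distance / d_max, 0.0), 1.0)   # normalize and clamp
    return f_near * (f_far / f_near) ** t

def velocity_to_am_rate(speed, s_max=2.0, r_min=2.0, r_max=20.0):
    """Map VO speed (m/s) to an amplitude-modulation rate (Hz),
    a temporal sound parameter."""
    t = min(max(speed / s_max, 0.0), 1.0)
    return r_min + t * (r_max - r_min)
```

The logarithmic pitch interpolation reflects that pitch perception is roughly logarithmic in frequency; the ranges here are arbitrary placeholders.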
Abstract: This paper describes a 3D modeling system in
Augmented Reality environment, named 3DARModeler. It can be
considered a simple version of 3D Studio Max with necessary
functions for a modeling system such as creating objects, applying
texture, adding animation, estimating real light sources and casting
shadows. The 3DARModeler introduces convenient and effective
human-computer interaction to build 3D models by combining both
the traditional input method (mouse/keyboard) and the tangible input
method (markers). It has the ability to align a new virtual object with
the existing parts of a model. The 3DARModeler targets nontechnical
users. As such, they do not need much knowledge of
computer graphics and modeling techniques. All they have to do is
select basic objects, customize their attributes, and put them together
to build a 3D model in a simple and intuitive way, as if they were
doing so in the real world. Using the hierarchical modeling technique,
the users are able to group several basic objects to manage them as a
unified, complex object. The system can also connect with other 3D
systems by importing and exporting VRML/3ds Max files. A
module of speech recognition is included in the system to provide
flexible user interfaces.
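The hierarchical modeling mentioned above is conventionally implemented as a scene graph, where each grouped object inherits its parent's transform so the group moves as one complex object. A minimal sketch (illustrative only, not the 3DARModeler's actual code):

```python
import numpy as np

def translate(tx, ty, tz):
    """Build a 4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

class Node:
    """Minimal scene-graph node: each child inherits its parent's
    transform, so grouped basic objects behave as one unified object."""

    def __init__(self, local=None):
        self.local = np.eye(4) if local is None else np.asarray(local, dtype=float)
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def world_transforms(self, parent=None):
        """Yield the world transform of this node, then of all descendants."""
        parent = np.eye(4) if parent is None else parent
        world = parent @ self.local
        yield world
        for child in self.children:
            yield from child.world_transforms(world)
```

Moving the group node's `local` transform moves every child with it, which is what lets a user manage several basic objects as one.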
Abstract: The modeling method is a key technology for
digital mockup (DMU). In the context of developing DMUs for
mechanical products, the theory, methods, and approaches for virtual
environments (VE) and virtual objects (VO) were studied. This paper
expounds the design goals and architecture of a DMU system,
analyzes DMU application methods, and investigates the general
process of physics modeling and behavior modeling.
Abstract: In an AR system, it is important to provide input information without extra devices. One solution is to use the hand as the interface for the augmented reality application, and many researchers have proposed solutions for hand interfaces in augmented reality; histogram analysis and connected-component analysis are examples. Multi-directional searching is a robust way to recognize the hand, but it takes too much computation time, and the background must be distinguishable from skin color. This paper proposes a hand tracking method to control a 3D object in augmented reality using a depth device and skin color. This work also discusses the relationship between several markers, which is based on the relationship between camera and marker: one marker is used for displaying the virtual object, and three markers are used for detecting hand gestures and manipulating the virtual object.
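Skin-color segmentation, a common first stage for hand trackers like the one described, can be sketched with a classic rule-based RGB classifier (Peer et al. style thresholds; the paper's actual thresholds and use of depth are not given, so this is illustrative):

```python
import numpy as np

def skin_mask(rgb):
    """Classify pixels as skin using a rule-of-thumb RGB test.

    rgb: (H, W, 3) uint8 array. Returns an (H, W) boolean mask.
    Thresholds are the classic "uniform daylight" heuristic.
    """
    rgb = rgb.astype(np.int32)                     # avoid uint8 overflow
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (
        (r > 95) & (g > 40) & (b > 20)
        & (rgb.max(axis=-1) - rgb.min(axis=-1) > 15)  # enough saturation
        & (np.abs(r - g) > 15) & (r > g) & (r > b)    # reddish dominance
    )
```

In practice the resulting mask would be combined with the depth image to reject skin-colored background, addressing the limitation the abstract points out.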
Abstract: Computer-aided design relies on parametric
software for the design of machine components as well as of any
other pieces of interest. The complexity of the element under study
sometimes poses difficulties to computer design, or may even
generate mistakes in the final body conception. Reverse engineering
techniques are based on the transformation of images of already
conceived bodies into a matrix of points that can be visualized by
the design software. The literature exhibits several techniques to
obtain the dimensional fields of machine components, such as contact
instruments (coordinate measuring machines, MMC), calipers, and
optical methods such as laser scanning, holography, and moiré
methods. The objective of this research work was to analyze the
moiré technique as an instrument of reverse engineering, applied to
bodies of non-complex geometry such as simple solid figures, creating
matrices of points. These matrices were forwarded to the parametric
software SolidWorks to generate the virtual object. The volume
obtained by mechanical means, i.e., by caliper, the volume obtained
through the moiré method, and the volume generated by the SolidWorks
software were compared and found to be in close agreement. This
research work suggests the application of phase-shifting moiré
methods as an instrument of reverse engineering, serving also to
support farm machinery element designs.
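The specific phase-shifting scheme used in the work is not stated; in the common four-step variant, four fringe images shifted by pi/2 each are captured, and the wrapped phase (from which surface height, and hence the point matrix, is derived) follows from a closed-form arctangent:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped phase from four fringe images shifted by
    pi/2 each: I_k = A + B*cos(phase + (k-1)*pi/2).

    Then I4 - I2 = 2B*sin(phase) and I1 - I3 = 2B*cos(phase),
    so atan2 of the two differences gives the phase in (-pi, pi].
    """
    return np.arctan2(i4 - i2, i1 - i3)
```

The inputs can be full images (numpy arrays), giving a per-pixel phase map; unwrapping and phase-to-height conversion are separate steps not shown here.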
Abstract: Augmented reality is a technique used to insert virtual objects into real scenes. One of the most used libraries in the area is the ARToolkit library. It is based on the recognition of markers in the form of squares with a pattern inside. This pattern, which is mostly textual, is a source of confusion. In this paper, we present the results of a classification of Latin characters used as patterns on ARToolkit markers, to determine which are the most distinguishable among them.
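One plausible way to score how distinguishable character patterns are (the paper's actual classifier is not described here) is normalized correlation of a detected pattern against a template per character; a small margin between the top two scores flags a confusable pair:

```python
import numpy as np

def best_match(pattern, templates):
    """Return the best-matching template name and all scores.

    pattern: 2D array (the binarized interior of a detected marker).
    templates: dict mapping a name (e.g. a Latin character) to a 2D
    array of the same shape. Scores are normalized correlations in
    [-1, 1]; identical patterns score 1.0.
    """
    p = pattern.astype(float).ravel()
    p = (p - p.mean()) / (p.std() + 1e-9)       # zero-mean, unit-variance
    scores = {}
    for name, t in templates.items():
        t = t.astype(float).ravel()
        t = (t - t.mean()) / (t.std() + 1e-9)
        scores[name] = float(np.dot(p, t) / len(p))
    best = max(scores, key=scores.get)
    return best, scores
```

Running this over all pairs of character templates would yield the kind of confusability ranking the abstract describes.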
Abstract: There is much research on detecting collisions between real and virtual objects in 3D space. In general, these techniques need huge computing power, so many of them are built on cloud computing, network computing, and distributed computing. For this reason, this paper proposes a novel fast 3D collision detection algorithm between real and virtual objects using 2D intersection areas. The proposed algorithm uses four cameras and a coarse-and-fine method to improve the accuracy and speed of collision detection. In the coarse step, the system examines the intersection area between the real and virtual object silhouettes from all camera views; the result of this step is the set of indices of virtual sensors that have a possibility of collision in 3D space. To decide collisions accurately, in the fine step, the system examines collision detection in 3D space using the visual hull algorithm. The performance of the algorithm is verified by comparison with an existing algorithm. We believe the proposed algorithm can help many other research, study, and application fields such as HCI, augmented reality, intelligent space, and so on.
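The coarse step can be sketched as a per-view silhouette overlap test: a 3D collision is possible only if the real and virtual silhouettes overlap in every camera view. This is a simplification of the paper's method (it returns a single possibility flag rather than per-sensor indices), assuming binary masks per view:

```python
import numpy as np

def coarse_collision(real_masks, virtual_masks):
    """Coarse collision screening over multiple camera views.

    real_masks, virtual_masks: parallel lists of (H, W) boolean
    silhouette masks, one pair per camera. If the silhouettes are
    disjoint in ANY view, a 3D collision is impossible, and the
    expensive fine step (visual hull) can be skipped.
    """
    return all(
        bool(np.any(r & v)) for r, v in zip(real_masks, virtual_masks)
    )
```

The speed gain comes from this early exit: the visual-hull fine step only runs for candidates that survive all four views.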
Abstract: Interactive public displays are an innovative
medium to promote enhanced communication between people and
information. However, digital public displays are subject to a few
constraints, such as content presentation: content needs to be
designed to be more interesting, to attract people’s attention and
motivate them to interact with the display. In this paper, we propose
an idea for implementing content with interaction elements for a
vision-based digital public display. Vision-based techniques are
applied as a sensor to detect passers-by, and themed content is
suggested to attract their attention and encourage them to interact
with the announcement content. A virtual object, gesture detection,
and a projection installation are applied to attract the attention of
passers-by. A preliminary study showed positive feedback on the
interactive content design for the public display. This new trend
would be a valuable innovation, as the delivery of announcement
content and information communication through this medium is
shown to be more engaging.
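The vision-based passer-by detection mentioned above can be sketched, in its simplest form, as frame differencing against a static background model; the paper's actual detector is not specified, and the thresholds here are illustrative:

```python
import numpy as np

def passerby_present(frame, background, diff_thresh=25, area_frac=0.01):
    """Flag a passer-by when enough pixels differ from the background.

    frame, background: (H, W) uint8 grayscale images.
    diff_thresh: per-pixel intensity difference counted as "changed".
    area_frac: fraction of changed pixels needed to trigger detection.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    changed = (diff > diff_thresh).mean()   # fraction of changed pixels
    return changed > area_frac
```

A deployed display would maintain the background model adaptively (e.g. a running average) to cope with lighting changes.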