Abstract: This paper presents a deep-learning mechanism for classifying computer-generated and photographic images. The proposed method incorporates a convolutional layer capable of automatically learning the correlation between neighbouring pixels. In its standard form, a Convolutional Neural Network (CNN) learns features based on an image's content rather than the structural features of the image. The proposed layer is specifically designed to suppress an image's content and robustly learn the sensor pattern noise features (usually inherited from in-camera image processing) as well as the statistical properties of images. The method was evaluated on recent natural and computer-generated images and was found to outperform current state-of-the-art methods.
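The idea of a content-suppressing layer can be illustrated with a fixed high-pass residual filter; this is our own minimal sketch (the paper's layer learns its weights), and the function name is ours:

```python
import numpy as np

# Illustrative sketch, not the paper's exact layer: a fixed high-pass
# kernel removes low-frequency image content so that a downstream CNN
# can focus on sensor-noise-like structural features.

HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float) / 8.0

def content_suppressing_conv(image):
    """Convolve a 2-D grayscale image with a high-pass kernel.

    Flat (low-frequency) content cancels out; what remains is the
    noise residual that carries camera-pipeline fingerprints.
    """
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i+3, j:j+3] * HIGH_PASS)
    return out
```

Because the kernel's weights sum to zero, any constant region produces a zero residual, which is exactly the content-suppression property the abstract describes.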
Abstract: Rendering light shafts is an important topic in computer gaming and interactive applications. The methods and models used to generate light shafts play a crucial role in making a scene more realistic in computer graphics. This article discusses the image-based and geometry-based shadow methods that contribute to generating volumetric shadows and light shafts, based on ray tracing, radiosity, and ray marching techniques. The main aim of this study is to give researchers background on the progress of light scattering methods, so that they can determine the technique best suited to their goals. It is also hoped that our classification helps researchers find solutions to the shortcomings of each method.
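Of the techniques surveyed, ray marching is the simplest to sketch. The following is our own minimal single-scattering illustration (not a method from any surveyed paper); the geometry, with a point light at a fixed perpendicular distance from the ray, is an assumption:

```python
import math

# Minimal single-scattering ray march through a homogeneous medium:
# accumulate in-scattered light at samples along the view ray while
# attenuating by the medium's transmittance.

def ray_march_light_shaft(ray_len, steps, sigma_s, light_intensity, light_dist):
    """Return scattered radiance along a ray of length ray_len."""
    dt = ray_len / steps
    radiance = 0.0
    transmittance = 1.0
    for i in range(steps):
        t = (i + 0.5) * dt
        # inverse-square falloff from the light to the sample point
        # (hypothetical geometry: light at perpendicular distance light_dist)
        d = math.hypot(light_dist, t)
        in_scatter = light_intensity / (d * d)
        radiance += transmittance * sigma_s * in_scatter * dt
        transmittance *= math.exp(-sigma_s * dt)
    return radiance
```

With a zero scattering coefficient the medium contributes nothing, which is a quick sanity check on the accumulation loop.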
Abstract: To determine whether one learning method yields better
learning outcomes than another, and to improve CAD courses for
enterprise design human-resource development, this research
applies two modes of learning to a practical computer graphics
software course. Using the Revit building information model as
the learning content, two different course modes were designed:
function-based learning and project-based learning. Their
effectiveness was compared through a post-test, questionnaires
and student interviews. Students attended a nine-hour course over
a period of three weeks, concluding with a written and a hands-on
test. Each student also completed a fifteen-item questionnaire
whose items fall into three categories: basic software operation,
software application, and software concept features. In addition
to the questionnaire, participants from both learning modes were
interviewed to better understand the students' views on the two
approaches. The study found that the ad hoc, project-based
short-term course produced better learning outcomes; on the other
hand, students were more satisfied with the function-based course
as a whole, and the ad hoc students found that style of learning
difficult to accept.
Abstract: We present an approach to triangle mesh simplification
designed to be executed on the GPU. We use a quadric error metric
to calculate an error value for each vertex of the mesh and order all
vertices based on this value. This step is followed by the parallel
removal of a number of vertices with the lowest calculated error
values. To allow for the parallel removal of multiple vertices we use
a set of per-vertex boundaries that prevent mesh foldovers even when
simplification operations are performed on neighbouring vertices. We
execute multiple iterations of the calculation of the vertex errors,
ordering of the error values and removal of vertices until either a
desired number of vertices remains in the mesh or a minimum error
value is reached. This parallel approach is used to speed up the
simplification process while maintaining mesh topology and avoiding
foldovers at every step of the simplification.
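The quadric error metric at the heart of the vertex ordering can be sketched briefly; this is a standard QEM formulation, not the authors' GPU code, and the function names are ours:

```python
import numpy as np

# Quadric error metric (QEM) sketch: the plane equations
# ax + by + cz + d = 0 of the faces around a vertex are accumulated
# into a 4x4 quadric Q; the cost of placing a point p = (x, y, z, 1)
# is p^T Q p. Vertices are then ordered by this error value.

def plane_quadric(a, b, c, d):
    """Fundamental quadric of one plane (outer product of its coefficients)."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(quadric, x, y, z):
    """Squared distance cost of a point under an accumulated quadric."""
    v = np.array([x, y, z, 1.0])
    return float(v @ quadric @ v)
```

Summing `plane_quadric` over a vertex's incident faces gives its quadric; a point lying on every incident plane has zero error, and error grows with squared distance from the planes, which is what makes the per-vertex ordering meaningful.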
Abstract: The traditional service channel is losing its edge to emerging service technology. To establish interaction with clients, the service industry uses effective mechanisms to give clients direct access to services through emerging technologies. Thus, as service science receives attention, special and unique consumption patterns evolve, leading to new market mechanisms and influencing attitudes toward life and consumption. The market demand for customized services is valued because of the emphasis on personal value, and it is gradually changing the demand-and-supply relationship in the traditional industry. In traditional interior design, a designer uses his/her professional knowledge and drawing tools to convert into a concrete form the concept generated from the ideas and needs dictated by a user (client). The final product is generated through iterations of communication and modification, which is a very time-consuming process. Although this process has been accelerated with the help of computer graphics software today, repeated discussions and confirmations with users are still required to complete the task. In consideration of the above, a space user's life model is analyzed with visualization techniques to create an interaction system modeled after interior design knowledge. The space user intuitively documents personal life experience in a model requirement chart, allowing a researcher to analyze the interrelations between analysis documents and identify the logic and substance of data conversion. The recurring documented data are then transformed into design information for reuse and sharing. A professional interior designer may sort out the correlations among a user's preferences, life pattern and design specification, thereby deciding the critical design elements in the process of service design.
Abstract: This paper describes a 3D modeling system in
Augmented Reality environment, named 3DARModeler. It can be
considered a simple version of 3D Studio Max with necessary
functions for a modeling system such as creating objects, applying
texture, adding animation, estimating real light sources and casting
shadows. The 3DARModeler introduces convenient and effective
human-computer interaction to build 3D models by combining both
the traditional input method (mouse/keyboard) and the tangible input
method (markers). It has the ability to align a new virtual object with
the existing parts of a model. The 3DARModeler targets nontechnical
users. As such, they do not need much knowledge of
computer graphics and modeling techniques. All they have to do is
select basic objects, customize their attributes, and put them together
to build a 3D model in a simple and intuitive way, as if they were
working in the real world. Using the hierarchical modeling technique,
the users are able to group several basic objects to manage them as a
unified, complex object. The system can also connect with other 3D
systems by importing and exporting VRML/3Ds Max files. A
module of speech recognition is included in the system to provide
flexible user interfaces.
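The grouping behaviour of the hierarchical modeling technique can be sketched as a tiny scene graph; this is our own illustration (not 3DARModeler code), with hypothetical names and translation-only transforms:

```python
# Minimal scene-graph sketch: basic objects grouped under a parent are
# managed as one complex object, and the parent's transform is applied
# to every child, as in the hierarchical modeling the abstract describes.

class Node:
    def __init__(self, name, offset=(0.0, 0.0, 0.0)):
        self.name = name
        self.offset = offset          # translation relative to the parent
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def world_positions(self, origin=(0.0, 0.0, 0.0)):
        """Flatten the hierarchy into absolute positions, name -> (x, y, z)."""
        pos = tuple(o + d for o, d in zip(origin, self.offset))
        result = {self.name: pos}
        for c in self.children:
            result.update(c.world_positions(pos))
        return result
```

Moving the parent node moves every grouped child with it, which is what lets users manage several basic objects as one unified, complex object.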
Abstract: The automatic construction of large, high-resolution
image vistas (mosaics) is an active area of research in the fields of
photogrammetry [1,2], computer vision [1,4], medical image
processing [4], computer graphics [3] and biometrics [8]. Image
stitching is one of the possible options to get image mosaics. Vista
Creation in image processing is used to construct an image with a
larger field of view than could be obtained with a single
photograph. It refers to transforming and stitching multiple images
into a new aggregate image without any visible seam or distortion in
the overlapping areas. The vista creation process aligns two partial
images over each other and blends them together. Image mosaics
allow one to compensate for differences in viewing geometry. Thus
they can be used to simplify tasks by simulating the condition in
which the scene is viewed from a fixed position with a single camera.
While obtaining partial images, geometric anomalies such as rotation
and scaling are bound to occur. To nullify the effect of rotation of
partial images on the vista creation process, we propose a
rotation-invariant vista creation algorithm in this paper. Rotation of
partial image parts in the proposed method may introduce a missing
region in the vista. To correct this error, that is, to fill the
missing region, we apply an image inpainting method to the created
vista. This missing-view regeneration method also
overcomes the problem of missing view [31] in vista due to cropping,
irregular boundaries of partial image parts and errors in digitization
[35]. The missing-view regeneration method generates the missing
view of the vista using the information present in the vista itself.
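The seam-free blending step of vista creation can be illustrated with simple feathering over the overlap region; this is our own sketch (the paper's alignment and inpainting stages are omitted), and the function name is ours:

```python
import numpy as np

# Feather blending of two horizontally overlapping grayscale strips:
# a linear weight ramp across the shared columns removes any visible
# seam in the overlap area, as the abstract requires.

def feather_blend(left, right, overlap):
    """left, right: 2-D arrays of equal height; overlap: shared columns."""
    h = left.shape[0]
    out_w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, out_w))
    out[:, :left.shape[1] - overlap] = left[:, :-overlap]
    out[:, left.shape[1]:] = right[:, overlap:]
    w = np.linspace(0.0, 1.0, overlap)          # 0 -> left, 1 -> right
    out[:, left.shape[1] - overlap:left.shape[1]] = (
        (1 - w) * left[:, -overlap:] + w * right[:, :overlap])
    return out
```

When the two strips agree in the overlap, the blend reproduces them exactly; when they differ slightly, the ramp hides the transition instead of leaving a hard seam.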
Abstract: The objective of the paper was to understand the use
of an important element of design, namely color in a Semiotic
system. Semiotics, the study of signs and sign processes, is often
divided into three branches: (i) Semantics, which deals with the
relation between signs and the things to which they refer; (ii)
Syntactics, which addresses the relations among signs in formal
structures; and (iii) Pragmatics, which concerns the effects signs
have on the people who use them to create a plan for an object or a
system, referred to as design. Cubism with its versatility
was the key design tool prevalent across the 20th century. In order to
analyze the user's understanding of interaction and appreciation of
color through the movement of Cubism, an exercise was undertaken
in Dept. of Design, IIT Guwahati. This included tasks to design a
composition using color and sign process to the theme 'Between the
Lines' on a given tessellation where the users relate their work to the
world they live in, which in this case was the college campus of IIT
Guwahati. The findings demonstrate the impact of the key design
element color on the principles of visual perception based on image
analysis of specific compositions.
Abstract: The algorithms of the convex hull have been extensively studied in the literature, principally because of their wide range of applications in different areas. This article presents an efficient algorithm to construct an approximate convex hull from a set of n points in the plane in O(n + k) time, where k is the approximation error control parameter. The proposed algorithm is suitable for applications that prefer to reduce computation time at the expense of some accuracy, such as animation and interaction in computer graphics, where rapid, real-time rendering is indispensable.
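A classic way to reach an O(n + k) approximate hull is strip-based point reduction in the style of Bentley, Faust and Preparata; the sketch below is our illustration of that idea, not the paper's exact algorithm:

```python
# The x-range is split into k strips; only the lowest and highest point
# of each strip is kept, then an exact monotone-chain hull runs on the
# reduced set. Scanning is O(n), the hull on O(k) points is cheap, and
# the approximation error shrinks as k grows.

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def monotone_chain(pts):
    """Exact 2-D convex hull (Andrew's monotone chain)."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def approx_hull(points, k):
    """Approximate convex hull via k vertical strips."""
    xmin = min(p[0] for p in points)
    xmax = max(p[0] for p in points)
    if xmin == xmax:
        return monotone_chain(points)
    width = (xmax - xmin) / k
    lo = [None] * k
    hi = [None] * k
    for p in points:                       # single O(n) pass
        i = min(int((p[0] - xmin) / width), k - 1)
        if lo[i] is None or p[1] < lo[i][1]:
            lo[i] = p
        if hi[i] is None or p[1] > hi[i][1]:
            hi[i] = p
    reduced = [p for p in lo + hi if p is not None]
    return monotone_chain(reduced)
```

The parameter k plays the same role as the abstract's error control parameter: a larger k keeps more candidate points and tightens the hull.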
Abstract: This paper describes a method for modeling shadow
play puppets using sophisticated computer graphics techniques
available in OpenGL, in order to allow interactive play in a real-time
environment as well as to produce realistic animation. A novel
real-time method is proposed for modeling a puppet and its shadow
image that allows interactive play of a virtual shadow
play using texture mapping and blending techniques. Special effects
such as lighting and blurring effects for virtual shadow play
environment are also developed. Moreover, the use of geometric
transformations and hierarchical modeling facilitates interaction
among the different parts of the puppet during animation. Based on the
experiments and the survey that were carried out, the respondents
involved were very satisfied with the outcomes of these techniques.
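The interaction among puppet parts via hierarchical transformations can be sketched in 2-D; this is our own illustration (not the paper's OpenGL code), with a hypothetical two-joint arm whose forearm inherits the upper arm's rotation:

```python
import math

# Hierarchical-transform sketch: rotating the shoulder carries the
# elbow and hand with it, exactly the parent-child coupling that
# hierarchical modeling gives the puppet's parts during animation.

def rotate(x, y, a):
    """Rotate (x, y) by angle a (radians) about the origin."""
    return (x*math.cos(a) - y*math.sin(a), x*math.sin(a) + y*math.cos(a))

def arm_joint_positions(shoulder, upper_len, fore_len, a_shoulder, a_elbow):
    """Return (elbow, hand) positions for a two-segment puppet arm."""
    ex, ey = rotate(upper_len, 0.0, a_shoulder)
    elbow = (shoulder[0] + ex, shoulder[1] + ey)
    # the forearm's world rotation is the sum of its own angle and the
    # inherited shoulder angle
    hx, hy = rotate(fore_len, 0.0, a_shoulder + a_elbow)
    hand = (elbow[0] + hx, elbow[1] + hy)
    return elbow, hand
```

In a rendering loop the same composition is done with matrix stacks (e.g. push, rotate, draw, recurse, pop), so animating one joint angle moves every dependent part consistently.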
Abstract: Color image quantization (CQ) is an important
problem in computer graphics and image processing. The aim of
quantization is to reduce the colors in an image with minimum
distortion. Clustering is a widely used technique for color
quantization: all colors in an image are grouped into a small number
of clusters. In this paper, we propose a new hybrid approach for color
quantization using the firefly algorithm (FA) and the K-means
algorithm. The firefly algorithm is a swarm-based algorithm that can
be used for solving optimization problems. The proposed method
overcomes the drawbacks of both algorithms, namely the local-optima
convergence problem of K-means and the early convergence of the
firefly algorithm. Experiments on three commonly used images and the
comparison results show that the proposed algorithm surpasses both the
baseline K-means clustering and the original firefly algorithm.
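The K-means half of the hybrid can be sketched directly; the firefly-based seeding that the paper adds is omitted here, so this is only the baseline that the hybrid improves on:

```python
import numpy as np

# Plain K-means color quantization: pixels are assigned to the nearest
# palette color, then each palette entry moves to the mean of its
# assigned pixels. Random seeding (used here) is what the firefly
# search would replace to avoid poor local optima.

def kmeans_quantize(pixels, k, iters=20, seed=0):
    """pixels: (n, 3) float array of RGB colors. Returns (palette, labels)."""
    rng = np.random.default_rng(seed)
    palette = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # distance of every pixel to every palette color
        d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            members = pixels[labels == c]
            if len(members):
                palette[c] = members.mean(axis=0)
    return palette, labels
```

Quantizing the image then amounts to replacing each pixel with `palette[labels]`, reducing the image to k colors with (locally) minimal distortion.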
Abstract: The L-system is a tool commonly used for modeling and simulating the growth of fractal plants. The aim of this paper is to join problems of computational geometry with fractal geometry by using the L-system technique to generate fractal plants in 3D. The L-system constructs the fractal structure by applying rewriting rules sequentially; this technique depends on a recursive process with a large number of iterations to obtain different shapes of 3D fractal plants. Here, instead, the rewriting was repeated for a specific number of iterations, up to three. The vertices generated in the last stage of the L-system rewriting process are used as input to a triangulation algorithm to construct the triangulated shape of these vertices. The resulting shapes can be used as covers for architectural objects and in other computer graphics fields. The paper presents a gallery of triangulation forms whose application in architecture offers an alternative to domes and other traditional types of roofs.
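The sequential rewriting at the core of an L-system is compact enough to sketch; the rule set below (Lindenmayer's algae system) is only an example, not one of the paper's plant grammars:

```python
# Minimal L-system rewriting: every symbol with a production rule is
# replaced in parallel each iteration; symbols without a rule are kept.
# The iteration count is capped, as in the paper (up to three).

def rewrite(axiom, rules, iterations=3):
    """Apply the production rules to the axiom a fixed number of times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

In the paper's pipeline, interpreting the final string (e.g. with turtle-style commands) yields the 3D vertices that are then fed to the triangulation algorithm.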
Abstract: The demand for higher performance graphics
continues to grow because of the incessant desire towards realism.
And, rapid advances in fabrication technology have enabled us to
build several processor cores on a single die. Hence, it is important to
develop single chip parallel architectures for such data-intensive
applications. In this paper, we propose an efficient PIM architecture
tailored for computer graphics, which requires a large number of
memory accesses. We then address the two important tasks necessary
for maximally exploiting the parallelism provided by the architecture,
namely, partitioning and placement of graphic data, which respectively
affect load balance and communication cost. Under the
constraints of uniform partitioning, we develop approaches for optimal
partitioning and placement, which significantly reduce search space.
We also present heuristics for identifying near-optimal placement,
since the search space for placement is impractically large despite our
optimization. We then demonstrate the effectiveness of our partitioning
and placement approaches via analysis of example scenes; simulation
results show considerable search space reductions, and our heuristics
for placement perform close to optimal: the average ratio of
communication overheads between our heuristics and the optimal was
1.05. Our uniform partitioning showed an average load-balance ratio of
1.47 for geometry processing and 1.44 for rasterization, which is
reasonable.
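The load-balancing goal behind the partitioning task can be illustrated with a common greedy baseline; this is our own sketch of the general idea, not the paper's partitioning or placement heuristic:

```python
# Greedy list scheduling: assign work items (e.g. screen tiles with
# estimated rendering costs) to the currently least-loaded core,
# processing the largest items first. The ratio max(load)/mean(load)
# corresponds to the load-balance ratio the abstract reports.

def greedy_partition(costs, n_cores):
    """Return per-core loads and a (cost, core) assignment list."""
    loads = [0.0] * n_cores
    assignment = []
    for c in sorted(costs, reverse=True):   # largest first
        i = loads.index(min(loads))         # least-loaded core
        loads[i] += c
        assignment.append((c, i))
    return loads, assignment
```

Placement, the paper's second task, would additionally weigh which cores the data's consumers sit on, trading some of this balance for lower communication cost.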
Abstract: Three-dimensional reconstruction of small objects has
been one of the most challenging problems over the last decade.
Computer graphics researchers and photography professionals have
been working on improving 3D reconstruction algorithms to fit the
high demands of various real life applications. Medical sciences,
animation industry, virtual reality, pattern recognition, tourism
industry, and reverse engineering are common fields where 3D
reconstruction of objects plays a vital role. Both lack of accuracy and
high computational cost are the major challenges facing successful
3D reconstruction. Fringe projection has emerged as a promising 3D
reconstruction direction that combines low computational cost with
both high precision and high resolution. It employs digital projection,
structured light systems and phase analysis on fringed pictures.
Research studies have shown that the system has acceptable
performance, and moreover it is insensitive to ambient light.
This paper presents an overview of fringe projection approaches. It
also presents an experimental study and implementation of a simple
fringe projection system. We tested our system using two objects
with different materials and levels of detail. Experimental results
have shown that, while our system is simple, it produces acceptable
results.
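The phase-analysis step of fringe projection is commonly done with four phase-shifted fringe images; the sketch below is the standard four-step formula, shown as an illustration of the approach rather than this paper's implementation:

```python
import math

# Four-step phase shifting: a pixel sees I_k = A + B*cos(phi + k*pi/2)
# for shifts k = 0..3. Subtracting opposite samples cancels the ambient
# term A (which is why the method is insensitive to ambient light) and
# isolates B*sin(phi) and B*cos(phi).

def wrapped_phase(i0, i90, i180, i270):
    """Recover the wrapped phase phi in (-pi, pi] at one pixel."""
    return math.atan2(i270 - i90, i0 - i180)
```

Applying this per pixel gives a wrapped phase map; phase unwrapping and a phase-to-height calibration then yield the 3D surface.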
Abstract: Computer graphics and digital camera technology are
now prevalent, and high-resolution displays and printers are widely
available. High-resolution images are therefore needed in order to
produce high-quality display images and high-quality prints. However,
since high-resolution images are not usually provided, there is a need
to magnify the original images. One
common difficulty in previous magnification techniques is preserving
details (i.e. edges) while at the same time smoothing the data so as
not to introduce spurious artefacts. A definitive solution to this
is still an open issue. In this paper an image magnification using
adaptive interpolation by pixel level data-dependent geometrical
shapes is proposed that tries to take into account information about
the edges (sharp luminance variations) and smoothness of the image.
It calculates a threshold, classifies interpolation regions in the form
of geometrical shapes, and then assigns suitable values inside each
interpolation region to the undefined pixels, while preserving
sharp luminance variations and smoothness at the same time.
The results of the proposed technique have been compared qualitatively
and quantitatively with five other techniques. The qualitative
results show that the proposed method clearly outperforms Nearest
Neighbour (NN), bilinear (BL) and bicubic (BC) interpolation, while
the quantitative results are competitive and consistent with NN, BL,
BC and others.
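Of the baseline techniques compared against, bilinear interpolation is the easiest to sketch; this is our own illustration of that baseline, not the proposed adaptive method:

```python
# Bilinear interpolation at a fractional coordinate: blend the four
# surrounding pixels with weights given by the fractional offsets.
# This smooths the data but blurs edges, which is exactly the
# detail-preservation difficulty the abstract describes.

def bilinear_sample(img, x, y):
    """img: 2-D list of grayscale values; (x, y) are fractional coords."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Magnifying an image by a factor s amounts to sampling `bilinear_sample(img, x / s, y / s)` for every output pixel; the adaptive, shape-classified interpolation of the paper replaces this uniform blend near edges.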
Abstract: This interdisciplinary study evaluates user interfaces in business administration. The study is implemented on two computerized business administration systems with two distinctive user interfaces, so that differences between the two systems can be determined. Both systems, a commercial one and a prototype developed for the purpose of this study, deal with ordering supplies, tendering procedures, issuing purchase orders, controlling the movement of stock against the actual balances on the shelves, and editing the corresponding tabulations. In the second, proposed system, modern computer graphics and multimedia considerations were taken into account to cover the drawbacks of the first system. To highlight differences between the two investigated systems with regard to chosen standard quality criteria, the study employs various statistical techniques and methods to evaluate the users' interaction with both systems. The study variables are divided into two groups: independent variables, representing the interfaces of the two systems, and dependent variables, comprising efficiency, effectiveness, satisfaction, error rate, etc.