Abstract: This paper describes a method for modeling shadow play puppets using sophisticated computer graphics techniques available in OpenGL, in order to allow interactive play in a real-time environment as well as to produce realistic animation. A novel real-time method is proposed for modeling the puppet and its shadow image that allows interactive play of virtual shadow play using texture mapping and blending techniques. Special effects such as lighting and blurring for the virtual shadow play environment are also developed. Moreover, the use of geometric transformations and hierarchical modeling facilitates interaction among the different parts of the puppet during animation. Based on the experiments and the survey that were carried out, the respondents involved were very satisfied with the outcomes of these techniques.
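As a minimal sketch of how hierarchical modeling propagates motion between puppet parts, the following generic parent-child transform chain (with hypothetical joint names and offsets; not the paper's OpenGL code) shows that rotating a parent joint carries every attached child along with it:

```python
# Hierarchical 2-D transform chain for an articulated puppet arm (illustrative).
import numpy as np

def rot(theta):
    """Homogeneous 2-D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def trans(tx, ty):
    """Homogeneous 2-D translation matrix."""
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

# Rotating the shoulder moves the whole arm; the elbow adds its own rotation.
shoulder = trans(0.0, 1.0) @ rot(np.deg2rad(30))
elbow = shoulder @ trans(0.0, -0.5) @ rot(np.deg2rad(45))
hand_tip = elbow @ trans(0.0, -0.4) @ np.array([0.0, 0.0, 1.0])  # world position
```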
Abstract: In comparison to the original SVM, which involves a quadratic programming task, LS-SVM simplifies the required computation, but unfortunately the sparseness of the standard SVM is lost. Another problem is that LS-SVM is only optimal if the training samples are corrupted by Gaussian noise. In Least Squares SVM (LS-SVM), the nonlinear solution is obtained by first mapping the input vector to a high-dimensional kernel space in a nonlinear fashion, where the solution is calculated from a linear equation set. In this paper a geometric view of the kernel space is introduced, which enables us to develop a new formulation to achieve a sparse and robust estimate.
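For reference, the standard LS-SVM training step reduces to a single linear solve. The sketch below shows this textbook formulation for regression with an RBF kernel (parameter names C and gamma_k are illustrative; this is the baseline method, not the paper's new sparse formulation):

```python
# Minimal LS-SVM regression sketch: one linear system instead of a QP.
import numpy as np

def rbf_kernel(A, B, gamma_k=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma_k * d2)

def lssvm_fit(X, y, C=10.0, gamma_k=1.0):
    """Solve the LS-SVM system:
        [ 0   1^T     ] [b]   [0]
        [ 1   K + I/C ] [a] = [y]
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma_k)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma_k=1.0):
    return rbf_kernel(X_new, X_train, gamma_k) @ alpha + b
```

Note that every training point typically receives a nonzero dual weight alpha, which is exactly the lost sparseness the abstract refers to.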
Abstract: This paper presents a robust method to detect obstacles in stereo images using a shadow removal technique and color information. Stereo vision based obstacle detection is an approach that aims to detect obstacles and compute their depth using stereo matching and a disparity map. The proposed method is divided into three phases: the first phase detects obstacles and removes shadows, the second performs matching, and the last computes depth. In the first phase, we propose a robust method for detecting obstacles in stereo images using a shadow removal technique based on color information in HIS space. We use Normalized Cross Correlation (NCC) matching with a 5 × 5 window, prepare an empty matching table τ, and grow disparity components by drawing a seed s from the seed set S, which is computed using the Canny edge detector, and adding it to τ. In this way we achieve higher performance than previous works [2,17]. A fast stereo matching algorithm is proposed that visits only a small fraction of the disparity space in order to find a semi-dense disparity map; it works by growing from a small set of correspondence seeds. The obstacles identified in phase one, which appear in the disparity map of phase two, enter the third phase of depth computation. Finally, experimental results are presented to show the effectiveness of the proposed method.
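To make the NCC matching step concrete, here is a minimal sketch with the 5 × 5 window from the abstract (the seed-growing table τ and the full pipeline are not reproduced; function names and the border assumption x ≥ w are illustrative):

```python
# Normalized cross-correlation patch matching over candidate disparities.
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps)

def best_disparity(left, right, y, x, d_max, w=2):
    """Scan disparities for pixel (y, x) with a 5x5 window (w=2); assumes
    the pixel is at least w away from every image border."""
    ref = left[y - w:y + w + 1, x - w:x + w + 1]
    scores = []
    for d in range(min(d_max, x - w) + 1):
        cand = right[y - w:y + w + 1, x - d - w:x - d + w + 1]
        scores.append(ncc(ref, cand))
    return int(np.argmax(scores))  # disparity with the highest NCC score
```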
Abstract: In today's world, where everything is rapidly changing and information technology is developing quickly, many aspects of culture, society, politics, and the economy have changed. The advent of information technology and electronic data transmission has led to easy communication, and fields like e-learning and e-commerce have become easily accessible to everyone. One of these technologies is virtual training, and the quality of such education systems is critical. In this study, 131 questionnaires were prepared and distributed among university students at Toba University. The research examined the factors that affect the quality of learning from the perspectives of staff, students, and professors at this type of university. It is concluded that the important factors in virtual training are the quality of the professors, the quality of the staff, and the quality of the university. These factors were the highest-priority factors in this education system and are necessary for improving virtual training.
Abstract: The sustainability of a place depends on a series of factors which contribute to the quality of life, sense of place, and recognition of identity. An activity like walking, which in itself is obviously "sustainable", can become unsustainable if the context in which it is carried out does not meet the conditions for an adequate quality of life. This work proposes the analytical method of Place Maker to identify the elements that do not feature in traditional mapping and which constitute the contemporary identity of places, together with the related complex map to represent those elements and support sustainable urban identity design. The method's potential for areas with a predominantly pedestrian vocation is illustrated by means of a case study of the Ramblas in Barcelona.
Abstract: A high performance computer includes a fast processor and millions of bytes of memory. During data processing, huge amounts of information are shuffled between the memory and the processor. Because of its small size and its speed, the cache has become a common feature of high performance computers. Enhancing cache performance has proved essential to speeding up cache-based computers. Most enhancement approaches can be classified as either software-based or hardware-controlled. The performance of the cache is quantified in terms of the hit ratio or miss ratio. In this paper, we optimize cache performance by enhancing the cache hit ratio. Optimum cache performance is obtained by modifying the cache hardware so that mismatched line tags are quickly rejected from the hit-or-miss comparison stage, and thus a low hit time for the wanted line in the cache is achieved. In the proposed technique, which we call Even-Odd Tabulation (EOT), the cache lines coming from main memory into the cache are classified into two types, lines with even tags and lines with odd tags, depending on the Least Significant Bit (LSB) of the tag. The EOT technique exploits this division to reject mismatched line tags in very little time compared to the time spent by the main comparator in the cache, giving an optimum hit time for the wanted cache line. The high performance of the EOT technique against the familiar mapping technique FAM is shown in the simulation results.
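A toy software sketch of the even/odd tag split described above (illustrative only; the paper's contribution is a hardware comparator arrangement, which Python cannot reproduce):

```python
# Classify incoming line tags by their LSB, as the abstract describes.
def eot_lookup(tag, even_tags, odd_tags):
    """Search only the sub-table whose parity matches the tag's LSB,
    skipping roughly half of the candidate tag comparisons."""
    table = odd_tags if tag & 1 else even_tags
    return tag in table  # hit if the tag is present in its parity class

tags = [0b1010, 0b0111, 0b1100, 0b0001]
even_tags = {t for t in tags if t & 1 == 0}
odd_tags = {t for t in tags if t & 1}
assert eot_lookup(0b1100, even_tags, odd_tags)       # even tag found
assert not eot_lookup(0b0011, even_tags, odd_tags)   # odd tag absent: miss
```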
Abstract: In this paper we present a novel approach to human body configuration estimation based on the silhouette. We propose to address this problem within the Bayesian framework. We use an effective model-based MCMC (Markov Chain Monte Carlo) method to solve the configuration problem, in which the best configuration is defined as the MAP (maximum a posteriori) estimate in the Bayesian model. This model-based MCMC utilizes the human body model to drive the MCMC sampling of the solution space. It converts the original high-dimensional space into a restricted sub-space constructed from the human model and uses a hybrid sampling algorithm. We choose an explicit human model and carefully select the likelihood functions to represent the best configuration solution. The experiments show that this method obtains accurate configurations in a time-saving manner for different humans from multiple views.
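As background, MAP search by MCMC can be sketched with a generic random-walk Metropolis-Hastings loop (the paper's model-driven proposals and silhouette likelihood are not reproduced; the quadratic log-posterior below is a stand-in):

```python
# Random-walk Metropolis-Hastings, tracking the best (MAP) sample seen so far.
import numpy as np

def metropolis_map(log_post, x0, n_iter=5000, step=0.1,
                   rng=np.random.default_rng(0)):
    x, lp = x0, log_post(x0)
    best_x, best_lp = x, lp
    for _ in range(n_iter):
        prop = x + step * rng.standard_normal(x.shape)  # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:         # accept w.p. min(1, ratio)
            x, lp = prop, lp_prop
            if lp > best_lp:
                best_x, best_lp = x, lp
    return best_x

# Toy posterior peaked at (1, 2) stands in for the silhouette likelihood.
map_est = metropolis_map(lambda z: -((z - np.array([1.0, 2.0])) ** 2).sum(),
                         np.zeros(2))
```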
Abstract: In this paper, we present a new algorithm for clustering data in large datasets using image processing approaches. First the dataset is mapped onto a binary image plane. The synthesized image is then processed using efficient image processing techniques to cluster the data in the dataset. Hence, the algorithm avoids an exhaustive search to identify clusters. The algorithm considers only a small subset of the data that contains the critical boundary information sufficient to identify the contained clusters. Compared to available data clustering techniques, the proposed algorithm produces results of similar quality and outperforms them in execution time and storage requirements.
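A minimal sketch of the map-to-binary-image idea for 2-D data (the grid size and the use of SciPy's connected-component labeling as the "image processing technique" are assumptions, not the paper's exact algorithm):

```python
# Rasterize points onto a binary grid, then label connected regions as clusters.
import numpy as np
from scipy import ndimage

def cluster_via_image(points, grid=64):
    lo = points.min(0)
    span = points.max(0) - lo
    p = (points - lo) / (span + 1e-12)                  # normalize to [0, 1]
    idx = np.minimum((p * grid).astype(int), grid - 1)
    img = np.zeros((grid, grid), dtype=bool)
    img[idx[:, 0], idx[:, 1]] = True                    # binary image plane
    labels, n_clusters = ndimage.label(img)             # image-processing step
    return labels[idx[:, 0], idx[:, 1]], n_clusters     # cluster id per point
```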
Abstract: Concept maps can be generated manually or automatically. It is important to recognize the differences between the two types of concept maps. Automatically generated concept maps are dynamic, interactive, and full of associations between the terms on the maps and the underlying documents. Through a specific concept mapping system, Visual Concept Explorer (VCE), this paper discusses how automatically generated concept maps differ from manually generated concept maps and how different applications and learning opportunities might be created with automatically generated concept maps. The paper presents several examples of learning strategies that take advantage of automatically generated concept maps for concept learning and exploration.
Abstract: These days people love to travel around the world. Regardless of their location and the time, Muslims in particular still need to perform their prayers. Travelers normally need to bring maps and a compass, and Muslims even have to bring a Qibla pointer when they travel. It is somewhat difficult to determine the Qibla direction and to know the time for each prayer. As technology advances, many PDAs are equipped with maps and GPS to determine the user's location. In this paper we present a new electronic device called the Mobile Qibla and Prayer Time Finder, which locates the Qibla direction and determines each prayer time based on the current user's location using a PDA. The device uses a PIC microcontroller equipped with a digital compass; it communicates with the PDA using Bluetooth technology and displays the exact Qibla direction and prayer time automatically at any place in the world. The device is reliable and accurate in determining the Qibla direction and prayer time.
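For context, the Qibla direction is the initial great-circle bearing toward the Kaaba (approximately 21.4225° N, 39.8262° E); the standard formula can be sketched as follows (a generic implementation, not the device's firmware):

```python
# Great-circle bearing from the user's position to the Kaaba.
import math

KAABA_LAT, KAABA_LON = math.radians(21.4225), math.radians(39.8262)

def qibla_bearing(lat_deg, lon_deg):
    """Initial bearing (degrees clockwise from true north) to the Kaaba."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    dlon = KAABA_LON - lon
    x = math.sin(dlon)
    y = math.cos(lat) * math.tan(KAABA_LAT) - math.sin(lat) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

# e.g. from Kuala Lumpur (3.14 N, 101.69 E) the bearing is roughly 292 degrees.
print(round(qibla_bearing(3.14, 101.69)))
```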
Abstract: Previous algorithms for 3D model texture generation and mapping from multi-view images have issues in texture chart generation, namely self-intersection and concentration of the texture in texture space. They may also suffer from problems due to occluded areas, such as the inner parts of the thighs. In this paper we propose a texture mapping technique for 3D models using multi-view images on the GPU. We perform texture mapping per pixel directly in the GPU fragment shader, without generating a texture map, and we resolve occluded areas using the 3D model's depth information. Our method requires more GPU computation than previous works, but it achieves real-time performance, and the previously mentioned problems do not occur.
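The depth-based occlusion test can be illustrated in NumPy as a stand-in for the fragment-shader logic (function name, tolerance, and the 3×4 camera matrix convention with a unit third row in K are all assumptions):

```python
# Reject a view if a nearer surface hides the 3-D point in that camera.
import numpy as np

def visible_in_view(point_h, P, depth_map, tol=1e-2):
    """Project homogeneous point with camera matrix P (3x4); the point is
    visible if its depth matches the view's rendered depth at that pixel."""
    x = P @ point_h                               # perspective projection
    u, v, z = x[0] / x[2], x[1] / x[2], x[2]
    ui, vi = int(round(u)), int(round(v))
    h, w = depth_map.shape
    if not (0 <= ui < w and 0 <= vi < h):
        return False                              # projects outside this image
    return abs(depth_map[vi, ui] - z) < tol       # occluded if nearer surface won
```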
Abstract: The present study was designed to investigate the cardioprotective role of chronic oral administration of an alcoholic extract of Terminalia arjuna in in-vivo ischemia-reperfusion injury and the induction of HSP72. Rabbits, divided into three groups, were administered the alcoholic extract of the bark powder of Terminalia arjuna (TAAE) by oral gavage [6.75 mg/kg (T1) and 9.75 mg/kg (T2), 6 days/week for 12 weeks]. In open-chest, ketamine-pentobarbitone-anaesthetized rabbits, the left anterior descending coronary artery was occluded for 15 min of ischemia followed by 60 min of reperfusion. In the vehicle-treated group, ischemia-reperfusion injury (IRI) was evidenced by depression of global hemodynamic function (MAP, HR, LVEDP, peak LV (+) and (−) dP/dt) along with depletion of HEP compounds. Oxidative stress in IRI was evidenced by raised levels of myocardial TBARS and depletion of the endogenous myocardial antioxidants GSH, SOD, and catalase. Western blot analysis showed a single band corresponding to 72 kDa in heart homogenates from rabbits treated with both doses. In the Terminalia arjuna bark extract treatment groups, both doses produced better recovery of myocardial hemodynamic function, with a significant reduction in TBARS and a rise in SOD, GSH, and catalase. The results of the present study suggest that the alcoholic extract of the bark powder of Terminalia arjuna induces myocardial HSP72 in the rabbit and augments myocardial endogenous antioxidants, without causing any cellular injury, and offers better cardioprotection against the oxidative stress associated with myocardial IR injury.
Abstract: Image compression using artificial neural networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network. Feedforward networks using the backpropagation algorithm, adopting the method of steepest descent for error minimization, are popular and widely adopted, and are directly applied to image compression. Various research works are directed towards achieving quick convergence of the network without loss of quality in the restored image. In general, the images used for compression are of different types, such as dark images and high intensity images. When these images are compressed using a backpropagation network, it takes a long time to converge. The reason is that the given image may contain a number of distinct gray levels with only narrow differences from their neighboring pixels. If the gray levels of the pixels in an image and their neighbors are mapped in such a way that the difference in the gray levels of the neighbors with the pixel is minimal, then both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the backpropagation neural network yields a high compression ratio and converges quickly.
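The CDF-based pixel remapping can be sketched as follows (essentially histogram equalization; the paper's exact mapping may differ, and 8-bit grayscale input is assumed):

```python
# Map gray levels through the image's empirical CDF before network training.
import numpy as np

def cdf_remap(img):
    """img: uint8 grayscale array. Returns the CDF-mapped image, spreading
    gray levels more uniformly across [0, 255]."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    cdf = hist.cumsum() / hist.sum()                 # empirical CDF per level
    lut = np.round(255.0 * cdf).astype(np.uint8)     # lookup table: level -> level
    return lut[img]
```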
Abstract: Let X be a connected space, let X̃ be a space, let p : X̃ → X be a continuous map, and let (X̃, p) be a covering space of X. In the first section we give some preliminaries on covering spaces and their automorphism groups. In the second section we derive some algebraic properties of both universal and regular covering spaces (X̃, p) of X, as well as of their automorphism groups A(X̃, p).
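For context, the textbook results relating these automorphism (deck transformation) groups to the fundamental group, under the usual connectedness hypotheses, are (standard facts, not claims taken from the paper itself):

```latex
% Assuming X is path-connected and locally path-connected, with basepoint
% x_0 and \tilde{x}_0 \in p^{-1}(x_0). For a regular covering space:
\[ A(\widetilde{X}, p) \;\cong\; \pi_1(X, x_0) \,/\, p_{*}\bigl(\pi_1(\widetilde{X}, \widetilde{x}_0)\bigr) \]
% In particular, for the universal covering space, \pi_1(\widetilde{X}) is
% trivial, so the automorphism group recovers the full fundamental group:
\[ A(\widetilde{X}, p) \;\cong\; \pi_1(X, x_0) \]
```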
Abstract: The research object was wheat bread. Experiments were carried out at the Faculty of Food Technology of the Latvia University of Agriculture. Active packaging in combination with a modified atmosphere (MAP, 60% CO2 and 40% N2) was examined and compared with traditional packaging in ambient air. Polymer Multibarrier 60, PP, and OPP bags were used. The influence of an iron-based oxygen absorber in 100 cc sachets, obtained from Mitsubishi Gas Chemical Europe (Ageless®), on quality during the shelf life of wheat bread was tested. Samples of 40±4 g were packaged in polymer pouches (110 mm x 120 mm), hermetically sealed by a MULTIVAC C300 vacuum chamber machine, and stored at a room temperature of +21.0±0.5 °C. The physicochemical properties (weight losses, moisture content, hardness, pH, colour, and changes of atmosphere content (CO2 and O2) in the headspace of packs) and the microbial condition were analysed before packaging and on the 7th, 14th, 21st, and 28th days of storage.
Abstract: Existing literature on design reasoning seems to give one-sided accounts of expert design behaviour based on internal processing. In the same way, ecological theories seem to focus one-sidedly on external elements, resulting in the lack of a unifying design cognition theory. Although current extended design cognition studies acknowledge the intellectual interaction between internal and external resources, there still seems to be insufficient understanding of the complexities involved in such interactive processes. As such, this paper proposes a novel multi-directional model for design researchers to map the complex and dynamic conduct-controlling behaviour in which both the computational and ecological perspectives are integrated in a vertical manner. A clear distinction between the identified intentional and emerging physical drivers, and the relationships between them during the early phases of experts' design processes, is demonstrated by presenting a case study in which the model was employed.
Abstract: This paper presents a new feature-based dense stereo matching algorithm that obtains the dense disparity map via dynamic programming. After extraction of suitable features, we apply matching constraints such as the epipolar line, the disparity limit, ordering, and a limit on the directional derivative of disparity. A coarse-to-fine multiresolution strategy is also used to decrease the search space and thereby increase accuracy and processing speed. The proposed method links the detected feature points into chains and compares some of the feature points from different chains to increase the matching speed. We also employ color stereo matching to increase the accuracy of the algorithm. After feature matching, we use dynamic programming to obtain the dense disparity map. The approach differs from classical DP methods in stereo vision, since it employs the sparse disparity map obtained from the feature-based matching stage. The DP is then performed on each scan line, between any two matched feature points on that scan line. Thus our algorithm is truly an optimization method. It offers a good trade-off between accuracy and computational efficiency. According to our experiments, the proposed algorithm increases accuracy by 20 to 70% and reduces the running time of the algorithm by almost 70%.
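The classic scan-line DP that the paper builds on can be sketched as follows (the textbook formulation, not the authors' feature-chain variant; the occlusion cost `occ` is an assumed parameter):

```python
# Scan-line stereo by dynamic programming with match/occlusion moves.
import numpy as np

def dp_scanline(left_row, right_row, occ=5.0):
    """Align one left/right scan line; along the matched path the per-pixel
    disparity is i - j. Backtracking from D[n, m] recovers the profile."""
    n, m = len(left_row), len(right_row)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = occ * np.arange(m + 1)              # leading occlusions
    D[:, 0] = occ * np.arange(n + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = abs(float(left_row[i - 1]) - float(right_row[j - 1]))
            D[i, j] = min(D[i - 1, j - 1] + match,  # pixels correspond
                          D[i - 1, j] + occ,        # left pixel occluded
                          D[i, j - 1] + occ)        # right pixel occluded
    return D
```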
Abstract: In this research, the three methods of Maximum Likelihood, Mahalanobis Distance, and Minimum Distance classification were analyzed for the western part of Isfahan province, Iran. For this purpose, IRS satellite images were used, and the various land uses in the region, including rangelands, irrigated farming, dry farming, gardens, and urban areas, were separated and identified. For these methods, the error matrix and Kappa index were calculated, and accuracies of 53.13%, 56.64%, and 48.44%, respectively, were obtained. Considering the low accuracy of these methods in separating land uses, owing to the dispersion of the land uses, visual interpretation is suggested for preparing the land use map of this region. A map prepared by visual interpretation has high accuracy if it is accompanied by field visits to the region.
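As a reference point, the Mahalanobis distance classifier used in such studies can be sketched per class as follows (illustrative; the study's IRS bands and training polygons are not reproduced, and each class is assumed to have enough samples for a non-singular covariance):

```python
# Per-class Mahalanobis-distance classification of pixel feature vectors.
import numpy as np

def fit_classes(X, y):
    """Per-class mean and inverse covariance from labeled training pixels."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(0), np.linalg.inv(np.cov(Xc, rowvar=False)))
    return stats

def mahalanobis_classify(x, stats):
    """Assign the class whose Mahalanobis distance to x is smallest."""
    def d2(c):
        mu, icov = stats[c]
        diff = x - mu
        return diff @ icov @ diff
    return min(stats, key=d2)
```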
Abstract: This work presents a neural network model for the clustering analysis of data based on Self-Organizing Maps (SOM). The model evolves during the training stage towards a hierarchical structure according to the input requirements. The hierarchical structure acts as a specialization tool that provides refinements of the classification process. The structure behaves like a single map with different resolutions depending on the region to be analyzed. The benefits and performance of the algorithm are discussed in application to the Iris dataset, a classical example for pattern recognition.
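The underlying flat SOM training loop can be sketched as follows (the paper's hierarchical growth step is omitted; the grid size and linear decay schedule are assumptions):

```python
# Minimal Self-Organizing Map: pull the best-matching unit's neighborhood
# toward each randomly drawn sample, shrinking the neighborhood over time.
import numpy as np

def train_som(X, grid=(8, 8), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((grid[0], grid[1], X.shape[1]))   # codebook vectors
    coords = np.stack(np.meshgrid(*map(np.arange, grid), indexing="ij"), -1)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), grid)
        frac = t / n_iter
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        h = np.exp(-((coords - bmu) ** 2).sum(-1) / (2 * sigma ** 2))
        W += lr * h[..., None] * (x - W)             # neighborhood update
    return W
```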
Abstract: This study aims to assess the vulnerability and risk of
the coastal areas of Taijiang to abnormal oceanographic phenomena.
In addition, this study aims to investigate and collect data regarding
the disaster losses, land utilization, and other social, economic, and
environmental issues in these coastal areas to construct a coastal
vulnerability and risk map based on the obtained climate-change risk
assessment results. Considering the indexes of the three coastal
vulnerability dimensions, namely, man-made facilities, environmental
geography, and social economy, this study adopted the equal
weighting process and Analytic Hierarchy Process to analyze the
vulnerability of these coastal areas to disasters caused by climatic
changes. Among the areas with high coastal vulnerability to climatic
changes, three towns had the highest coastal vulnerability and four had
the highest relative vulnerability. Areas with lower disaster risks were
found to be increasingly vulnerable to disasters caused by climatic
changes as time progresses.