Abstract: Over the past few years, XML (eXtensible Markup
Language) has emerged as the standard for information
representation and data exchange over the Internet. This paper
provides a kick-start for new researchers venturing into the field of
XML databases. We survey storage representations for XML
documents and review XML query processing and optimization
techniques with respect to each particular storage scheme. Various
optimization techniques have been developed to solve query
retrieval and update problems. In recent years, most researchers
have proposed hybrid optimization techniques; a hybrid system
opens the possibility of covering each technique's weaknesses with
another's strengths. This paper reviews the advantages and
limitations of these optimization techniques.
Abstract: We describe work with an evolutionary computing
algorithm for non-photorealistic rendering of a target image. The
renderings are produced by genetic programming. We have used two
different types of strokes, "empty triangle" and "filled triangle", in
color level. We compare empty and filled triangular strokes to
find which one generates more aesthetically pleasing images. We found
that filled triangular strokes achieve better fitness and generate more
aesthetic images than empty triangular strokes.
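As an illustration of the stroke-based evolutionary setup, the following minimal sketch renders a candidate list of filled triangles and scores it against the target image. The triangle genome (three vertices plus a grey level) and the pixel-difference fitness are illustrative assumptions, not the authors' exact representation.

```python
def _inside(p, a, b, c):
    """Point-in-triangle test via consistent cross-product signs."""
    def cross(o, u, v):
        return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def render(triangles, w, h):
    """Paint filled triangles (vertices + grey level) onto a blank canvas."""
    canvas = [[0] * w for _ in range(h)]
    for (a, b, c, grey) in triangles:
        for y in range(h):
            for x in range(w):
                if _inside((x, y), a, b, c):
                    canvas[y][x] = grey
    return canvas

def fitness(triangles, target):
    """Higher is better: negative sum of absolute pixel differences."""
    h, w = len(target), len(target[0])
    canvas = render(triangles, w, h)
    return -sum(abs(canvas[y][x] - target[y][x])
                for y in range(h) for x in range(w))
```

In a genetic-programming loop, individuals with higher fitness (renderings closer to the target) would be selected for reproduction.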
Abstract: Eye location is one of the most important problems to
solve for a driver fatigue monitoring system. This paper presents an
efficient method to achieve fast and accurate eye location in grey-level
images obtained under real-world driving conditions. The structure of
the eye region is used as a robust cue to find possible eye pairs.
Candidate eye pairs at different scales are selected by finding regions
that roughly match a binary eye-pair template. To identify the true
pair, all the candidates are then verified using support vector
machines. Finally, the eyes are precisely located using binary vertical
projection and an eye classifier on the eye-pair images. The proposed
method is robust to illumination changes, moderate rotations, the
wearing of glasses and different eye states. Experimental results
demonstrate its effectiveness.
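The binary vertical projection step can be sketched as follows: columns of a binarized eye-pair strip are summed, and the two strongest, sufficiently separated columns are taken as candidate eye x-coordinates. The `min_gap` parameter and this peak-picking rule are illustrative assumptions, not details given in the abstract.

```python
def vertical_projection(binary):
    """Column sums of a binarized eye-pair strip (1 = dark pixel)."""
    h = len(binary)
    return [sum(binary[y][x] for y in range(h))
            for x in range(len(binary[0]))]

def locate_eyes(binary, min_gap=3):
    """Pick the strongest projection column, then the strongest column
    at least `min_gap` pixels away, as candidate eye x-coordinates."""
    proj = vertical_projection(binary)
    first = max(range(len(proj)), key=proj.__getitem__)
    rest = [x for x in range(len(proj)) if abs(x - first) >= min_gap]
    second = max(rest, key=proj.__getitem__)
    return tuple(sorted((first, second)))
```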
Abstract: The reliability of distributed systems and computer
networks has been modeled by a probabilistic network, or graph G.
Computing the residual connectedness reliability (RCR), denoted by
R(G), under the node fault model is very useful but is an NP-hard
problem. Since computing the exact value of R(G) may require time
exponential in the network size, it is important to calculate a tight
approximation, especially a lower bound, within moderate
computation time. In this paper, we propose an efficient algorithm for
a reliability lower bound of distributed systems with unreliable nodes.
We also apply our algorithm to several typical classes of networks
to evaluate the lower bounds and show the effectiveness of our
algorithm.
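The quantity R(G) can be illustrated with a Monte Carlo sketch of the node fault model: each node is up independently with probability p, and the surviving nodes must induce a connected subgraph. Note that this yields only a statistical estimate, not the analytic lower bound the paper derives; treating empty or single-node residual graphs as connected is an assumed convention.

```python
import random

def _connected(nodes, adj):
    """DFS connectivity check on the subgraph induced by `nodes`."""
    if len(nodes) <= 1:
        return True
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v in nodes and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == nodes

def rcr_monte_carlo(adj, p, trials=2000, seed=0):
    """Estimate R(G): probability that the surviving nodes induce a
    connected subgraph when each node is up with probability p."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        up = [u for u in adj if rng.random() < p]
        if _connected(up, adj):
            ok += 1
    return ok / trials
```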
Abstract: In this paper, a new reversible watermarking method is presented that reduces the size of a stereoscopic image sequence while keeping its content viewable. The proposed technique embeds the residuals of the right frames into the corresponding frames of the left sequence, halving the total capacity required. The residual frames may result from a disparity-compensation procedure between the two video streams or from joint motion and disparity compensation. The residuals are usually lossy-compressed before embedding because of the limited embedding capacity of the left frames. The watermarked frames remain viewable at high quality, and at any instant the stereoscopic video may be recovered by the inverse process. In fact, the left frames can be recovered exactly, whereas the right ones are slightly distorted because the residuals are not embedded intact. The employed embedding method reorders each left frame into an array of consecutive pixel pairs and embeds a number of bits according to their intensity difference. In this way, it hides few bits in areas of smooth intensity and most of the data in textured areas, where the resulting distortions are less visible. The experimental evaluation demonstrates that the proposed scheme is quite effective.
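The pixel-pair embedding described resembles difference expansion. A minimal one-bit-per-pair sketch follows; the paper's scheme embeds a variable number of bits depending on the intensity difference and would also guard against pixel overflow, both of which are omitted here for brevity.

```python
def embed_pair(a, b, bit):
    """Difference expansion: reversibly hide one bit in a pixel pair."""
    l, h = (a + b) // 2, a - b          # integer mean and difference
    h2 = 2 * h + bit                    # expanded difference carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract_pair(a2, b2):
    """Recover the hidden bit and the original pixel pair exactly."""
    l, h2 = (a2 + b2) // 2, a2 - b2
    bit = h2 & 1
    h = h2 >> 1                         # undo the expansion
    return (l + (h + 1) // 2, l - h // 2), bit
```

Because the mean is preserved and the difference is expanded invertibly, the cover frame is recovered bit-exactly after extraction, which is what makes the watermarking reversible.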
Abstract: Here, a new idea to speed up the operation of
complex-valued time delay neural networks is presented. The whole
data are collected together in one long vector and then tested as a
single input pattern. The proposed fast complex-valued time delay
neural networks use cross-correlation in the frequency domain between
the tested data and the input weights of the neural networks. It is
proved mathematically that the number of computation steps required
by the presented fast complex-valued time delay neural networks is
less than that needed by classical time delay neural networks.
Simulation results using MATLAB confirm the theoretical computations.
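The frequency-domain trick rests on the identity that circular cross-correlation equals an inverse FFT of one spectrum multiplied by the conjugate of the other, replacing O(N²) sliding dot products with O(N log N) transforms. A sketch with a direct version for comparison (equal-length complex vectors are an illustrative simplification):

```python
import numpy as np

def corr_freq(x, w):
    """Circular cross-correlation via the frequency domain:
    one FFT product replaces all the sliding dot products."""
    return np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(w)))

def corr_time(x, w):
    """Direct O(N^2) circular cross-correlation, for comparison."""
    n = len(x)
    return np.array([sum(x[i] * np.conj(w[(i - k) % n])
                         for i in range(n)) for k in range(n)])
```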
Abstract: In this paper, we propose a high-capacity image hiding
technique based on pixel prediction and the difference of the modified
histogram. The approach uses pixel prediction and the difference of the
modified histogram to calculate the best embedding point. It improves
prediction accuracy and increases the pixel differences so as to raise
the hiding capacity. We also use histogram modification to prevent
overflow and underflow. Experimental results demonstrate that, at the
same average hiding capacity, our proposed method still maintains
high image quality and low distortion.
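The histogram-based core can be sketched with classic histogram shifting on a list of prediction errors. The choice of peak and empty (zero) bins below is an assumption for illustration; the paper's method additionally derives the embedding point from pixel prediction.

```python
def hs_embed(values, bits, peak, zero):
    """Histogram shifting (peak < zero): shift bins in (peak, zero)
    right by one to free the bin next to the peak, then absorb one
    payload bit into each value equal to the peak."""
    out, it = [], iter(bits)
    for v in values:
        if peak < v < zero:
            out.append(v + 1)
        elif v == peak:
            out.append(v + next(it, 0))
        else:
            out.append(v)
    return out

def hs_extract(values, peak, zero):
    """Read the bits back and undo the shift exactly."""
    bits, orig = [], []
    for v in values:
        if v == peak:
            bits.append(0); orig.append(peak)
        elif v == peak + 1:
            bits.append(1); orig.append(peak)
        elif peak + 1 < v <= zero:
            orig.append(v - 1)
        else:
            orig.append(v)
    return orig, bits
```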
Abstract: Visualizing a "Courses-Pre-Required-Architecture"
on screen has proven to be useful and helpful for
university actors, and especially for students. These students
can easily identify courses and their prerequisites, perceive which
courses to follow in the future, and then rapidly choose the
appropriate course in which to register. Given a set of courses and
their prerequisites, we present an algorithm for visualizing a graph,
entitled the "Courses-Pre-Required-Graph", that presents courses and
their prerequisites in order to help students recognize, on their own,
which courses to take in the future and perceive the content of all the
courses they will study. Our algorithm, using the "Force Directed
Placement" technique, visualizes the "Courses-Pre-Required-Graph"
in such a way that courses are easily identifiable. The time complexity
of our drawing algorithm is O(n²), where n is the number of courses
in the "Courses-Pre-Required-Graph".
Abstract: Identifying and classifying intersections according to
severity is very important for the implementation of safety-related
countermeasures, and effective models are needed to compare and
assess severity. Highway safety organizations have placed
intersection safety among their priorities. In spite of significant
advances in highway safety, large numbers of crashes with high
severity still occur on highways. Investigation of the factors
influencing crashes enables engineers to carry out calculations aimed
at reducing crash severity. Previous studies lacked a model capable of
simultaneously illustrating the influence of human factors, road,
vehicle, weather conditions and traffic features, including traffic
volume and flow speed, on crash severity. Thus, this paper aims to
develop models that illustrate the simultaneous influence of these
variables on crash severity in urban highways. The models
represented in this study have been developed using binary logit
models, and SPSS software has been used to calibrate them; the
backward regression method in SPSS was used to identify the
significant variables in the model.
From the obtained results it can be concluded that the main
factors increasing crash severity in urban highways are driver
age, movement in reverse gear, technical defects of the vehicle,
vehicle collisions with motorcycles and bicycles, collisions with
bridges, frontal impact collisions, frontal-lateral collisions and
multi-vehicle crashes.
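For reference, a binary logit model expresses the probability of a severe crash as a logistic function of a linear combination of the explanatory variables. The coefficients and variable names below are placeholders for illustration, not the SPSS-calibrated values from the study.

```python
import math

def logit_prob(x, beta, beta0):
    """Binary logit: P(severe | x) = 1 / (1 + exp(-(b0 + b . x)))."""
    z = beta0 + sum(b * v for b, v in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical use: x = [reverse_gear, frontal_impact] as 0/1 indicators,
# with made-up coefficients (NOT the paper's calibrated values).
p_severe = logit_prob([1, 0], [0.9, 1.4], -2.0)
```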
Abstract: Radio frequency identification (RFID) applications have grown rapidly in many industries, especially in indoor location identification. The advantage of using received signal strength indicator (RSSI) values as an indoor location measurement method is that it is cost-effective and requires no extra hardware. Because the accuracy of many positioning schemes using RSSI values is limited by interference factors and the environment, it is challenging to design RFID location techniques based on an integrated positioning algorithm. This study proposes a location estimation approach and analyzes a scheme relying on RSSI values to minimize location errors. In addition, this paper examines different factors that affect location accuracy by integrating a backpropagation neural network (BPN) with the LANDMARC algorithm in a training phase and an online phase. First, the training phase computes coordinates obtained from the LANDMARC algorithm, which uses the RSSI values and the real coordinates of reference tags, as training data for constructing an appropriate BPN architecture and training length. Second, in the online phase, the LANDMARC algorithm calculates the coordinates of the tracking tags, which are then used as BPN inputs to obtain the location estimates. The results show that the proposed scheme estimates locations more accurately than LANDMARC alone, without extra devices.
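The LANDMARC step referenced above locates a tracking tag by its Euclidean distance in RSSI space to each reference tag, then averages the k nearest references' known coordinates with weights proportional to 1/E². A minimal sketch (tag names, RSSI vectors, and positions are made-up inputs):

```python
def landmarc_estimate(tag_rssi, ref_rssi, ref_pos, k=3):
    """LANDMARC: Euclidean distance in RSSI space to each reference
    tag, then a 1/E^2-weighted average of the k nearest references'
    real coordinates."""
    dists = []
    for name, rssi in ref_rssi.items():
        e = sum((a - b) ** 2 for a, b in zip(tag_rssi, rssi)) ** 0.5
        dists.append((e, name))
    nearest = sorted(dists)[:k]
    weights = [1.0 / (e * e + 1e-9) for e, _ in nearest]  # avoid div by 0
    s = sum(weights)
    x = sum(w * ref_pos[n][0] for w, (_, n) in zip(weights, nearest)) / s
    y = sum(w * ref_pos[n][1] for w, (_, n) in zip(weights, nearest)) / s
    return x, y
```

In the proposed scheme these LANDMARC estimates, rather than raw RSSI values, become the BPN inputs.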
Abstract: Eye localization is necessary for face recognition and
related application areas. Most eye localization algorithms reported
so far still need improvement in precision and computation time for
successful application. In this paper, we propose an eye localization
method based on multi-scale Gabor feature vectors that is more
robust with respect to initial points. Eye localization based on Gabor
feature vectors first constructs an Eye Model Bunch for each eye
(left or right), consisting of n Gabor jets and the average eye
coordinates obtained from n model face images. It then localizes the
eyes in an incoming face image by exploiting the fact that the true
eye coordinates are most likely to be very close to the position whose
Gabor jet has the best similarity match with a Gabor jet in the Eye
Model Bunch. Similar ideas have already been proposed, for example
in EBGM (Elastic Bunch Graph Matching). However, the method
used in EBGM is known not to be robust with respect to initial values
and may need an extensive search range to achieve the required
performance, and extensive search ranges cause a much larger
computational burden. In this paper, we propose a multi-scale
approach with only a slightly increased computational burden: one
first localizes the eyes based on Gabor feature vectors in a coarse face
image obtained by downsampling the original face image, and then
localizes them in the original-resolution face image using the eye
coordinates found in the coarse-scale image as initial points. Several
experiments and comparisons with eye localization methods reported
in other papers show the efficiency of our proposed method.
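The jet-matching criterion can be sketched with the standard magnitude-based Gabor jet similarity, a normalized dot product of coefficient magnitudes. Whether the authors use the magnitude-only or a phase-augmented form is not stated in the abstract, so this choice is an assumption.

```python
import math

def jet_similarity(j1, j2):
    """Magnitude-based Gabor jet similarity: normalized dot product of
    the jets' coefficient magnitudes (1.0 = identical up to scale)."""
    a = [abs(c) for c in j1]
    b = [abs(c) for c in j2]
    num = sum(x * y for x, y in zip(a, b))
    den = (math.sqrt(sum(x * x for x in a))
           * math.sqrt(sum(y * y for y in b)))
    return num / den

def best_match(jet, bunch):
    """Index of the model jet in the Eye Model Bunch most similar
    to the probe jet."""
    return max(range(len(bunch)),
               key=lambda i: jet_similarity(jet, bunch[i]))
```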
Abstract: CFlow is flow chart software that contains facilities to
draw and evaluate flow charts. Flow chart evaluation applies a
simulation method to enable presentation of the work flow in a flow
chart solution. Flow chart simulation in CFlow is executed by
manipulating the CFlow data file, which is saved in a graphical vector
format. These text-based data are organised using a data
classification technique based on a library classification scheme. This
paper describes the file format of the CFlow flow chart simulation
software.
Abstract: In this paper, sonar signals are processed using a
Minimal Resource Allocation Network (MRAN) and a Probabilistic
Neural Network (PNN) to differentiate commonly encountered
features in indoor environments. The stability-plasticity behaviors
of both networks have been investigated. The experimental results
show that MRAN possesses lower network complexity but exhibits
higher plasticity than PNN. An enhanced version called parallel
MRAN (pMRAN) is proposed to solve this problem; it proves stable
in prediction and also outperforms the original MRAN.
Abstract: MicroRNAs (miRNAs) are a class of non-coding
RNAs that hybridize to mRNAs and induce either translation
repression or mRNA cleavage. Recently, it has been reported that
miRNAs may play an important role in human diseases. By
integrating information on miRNA target genes, cancer genes, and
miRNA and mRNA expression profiles, a database has been
developed to link miRNAs to cancer target genes. The database
provides experimentally verified human miRNA target gene
information, including oncogenes and tumor suppressor genes. In
addition, fragile-site information for miRNAs and the strength of the
correlation between miRNA and target mRNA expression levels for
nine tissue types are computed, which serve as indicators that
miRNAs could play a role in human cancer. The database is freely
accessible at http://ppi.bioinfo.asia.edu.tw/mirna_target/index.html.
Abstract: Although achieving a zero-defect software release is
practically impossible, software industries should take maximum
care to detect defects/bugs well ahead of time, allowing only a bare
minimum to creep into the released version. This is a clear indicator
of the important role time plays in bug detection. In addition,
software quality is a major factor in the software engineering
process, and early detection can be achieved only through
static code analysis, as opposed to conventional testing.
BugCatcher.Net is a static analysis tool that detects bugs in .NET®
languages through MSIL (Microsoft Intermediate Language)
inspection. The tool utilizes a parser based on finite state automata
to carry out bug detection. Once detected, bugs need to be
corrected immediately. BugCatcher.Net facilitates correction by
proposing a corrective solution for reported warnings/bugs to end
users, with minimum side effects. Moreover, the tool is also capable
of analyzing the bug trend of a program under inspection.
Abstract: Face recognition is a field with multidimensional
applications, and extensive work has been done on most of its
details; face recognition using PCA is one such approach. In this
paper, PCA features are used for feature extraction, and the face
under consideration is matched against the test image using
eigenface coefficients. The crux of the work lies in optimizing the
Euclidean distance and in testing the algorithm using Matlab, an
efficient tool with a powerful user interface and simplicity in
representing complex images.
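A minimal eigenface pipeline of the kind described (PCA via SVD, projection onto eigenfaces, nearest neighbor by Euclidean distance) can be sketched as follows. This is a generic illustration in Python, not the authors' Matlab implementation.

```python
import numpy as np

def train_pca(faces, n_components):
    """faces: (num_images, num_pixels). Returns the mean face and the
    top principal components (eigenfaces) of the centered set via SVD."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, eigenfaces):
    """Eigenface coefficients of a (flattened) face image."""
    return eigenfaces @ (face - mean)

def match(face, gallery, mean, eigenfaces):
    """Index of the nearest gallery face by Euclidean distance in
    eigenface coefficient space."""
    q = project(face, mean, eigenfaces)
    coeffs = [project(g, mean, eigenfaces) for g in gallery]
    return int(np.argmin([np.linalg.norm(q - c) for c in coeffs]))
```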
Abstract: Social learning network analysis has drawn the attention
of many researchers in the e-learning domain, owing to its
capability to identify students' behavior during their social
interactions within e-learning systems. Normally, social network
analysis (SNA) treats the students' interactions merely as nodes and
edges, with little meaning attached. This paper focuses on providing
an ontology structure for the Moodle e-learning platform that can
enrich the relationships among students, as well as between the
students and the teacher. This ontology structure brings great benefit
to the future development of e-learning systems.
Abstract: Reentry trajectory optimization is a multi-constraint
optimal control problem that is hard to solve. To tackle it, we
propose a new algorithm named CDEN (Constrained Differential
Evolution Newton-Raphson Algorithm), based on Differential
Evolution (DE) and Newton-Raphson. We transform the
infinite-dimensional optimal control problem into a finite-dimensional
parameter optimization by discretizing the control parameters. To
simplify the problem, we determine the control parameters' range
from the process constraints. To handle the constraints, we propose a
parameter-free constraint-handling process. Through a comprehensive
analysis of the problem, we use a new algorithm integrating DE and
Newton-Raphson to solve it. The approach is validated on the X-33
reentry vehicle; simulation results indicate that the algorithm is
effective and robust.
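The DE half of the proposed hybrid can be illustrated with one standard DE/rand/1/bin generation. The mutation factor, crossover rate, and greedy selection below are textbook defaults, not necessarily the CDEN settings, and the constraint-handling and Newton-Raphson refinement steps are omitted.

```python
import random

def de_step(pop, fitness, f=0.8, cr=0.9, rng=None):
    """One DE/rand/1/bin generation: for each target vector, build a
    mutant a + F*(b - c) from three distinct others, cross it with the
    target binomially, and keep whichever scores lower."""
    rng = rng or random.Random(0)
    dim = len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a[d] + f * (b[d] - c[d]) for d in range(dim)]
        jrand = rng.randrange(dim)          # guarantee one mutant gene
        trial = [mutant[d] if (rng.random() < cr or d == jrand)
                 else target[d] for d in range(dim)]
        new_pop.append(trial if fitness(trial) < fitness(target) else target)
    return new_pop
```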
Abstract: Authentication of multimedia content has gained much attention in recent times. In this paper, we propose a secure semi-fragile watermarking scheme with a choice of two watermarks to be embedded. The technique operates in the integer wavelet domain and makes use of semi-fragile watermarks to achieve better robustness. A self-recovering algorithm is employed that hides an image digest in selected wavelet subbands to detect possible malevolent object manipulation of the image (object replacement and/or deletion). The semi-fragility makes the scheme tolerant of JPEG lossy compression down to a quality factor of 70%, while locating the tampered area accurately. In addition, the system ensures greater security because the embedded watermarks are protected with private keys. The computational complexity is reduced using a parameterized integer wavelet transform. Experimental results show that the proposed scheme guarantees the safety of the watermark, image recovery, and accurate location of the tampered area.
Abstract: The ringing effect is one of the most annoying visual
artifacts in digital video and a significant factor in subjective quality
deterioration. However, there is a widely accepted misunderstanding
of its cause. In this paper, we propose a reasonable interpretation of
the cause of the ringing effect. Based on this interpretation, we
further suggest two methods to reduce ringing in DCT-based video
coding. The methods adaptively adjust quantizers according to video
features. Our experiments show that the methods can efficiently
improve subjective quality with acceptable additional computing cost.