Abstract: The H.264/AVC video coding standard contains a number of advanced features. One of the new features introduced in this standard is multiple intra-mode prediction, which exploits the directional spatial correlation with adjacent blocks for intra prediction. With this new feature, intra coding in H.264/AVC offers considerably higher coding efficiency than other compression standards, but computational complexity increases significantly when a brute-force rate-distortion optimization (RDO) algorithm is used. In this paper, we propose a new fast intra prediction mode decision method to reduce the complexity of H.264 video coding. For luma intra prediction, the proposed method consists of two steps. In the first step, we perform RDO for four modes of the intra 4x4 block and, based on the distribution of the RDO costs of those modes and on the strong correlation with adjacent modes, we select the best mode of the intra 4x4 block. In the second step, based on the fact that the dominant direction of a smaller block is similar to that of a bigger block, the candidate modes of 8x8 blocks and 16x16 macroblocks are determined. For chroma intra prediction, the variance of the chroma pixel values is much smaller than that of the luma ones, so our proposal uses only the DC mode. Experimental results show that the new fast intra mode decision algorithm increases the speed of intra coding significantly with negligible loss of PSNR.
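The two-step idea can be illustrated with a small sketch. All helper names are hypothetical, and a cheap SAD cost over three directional modes stands in for the paper's RDO evaluation of a four-mode subset:

```python
# Hypothetical sketch of a two-step fast intra mode decision (names and
# the SAD cost are illustrative; the paper evaluates RDO cost over a
# four-mode subset of the nine 4x4 luma modes).

def sad(block, pred):
    """Sum of absolute differences between a 4x4 block and a prediction."""
    return sum(abs(a - b) for rb, rp in zip(block, pred)
               for a, b in zip(rb, rp))

def vertical_pred(top, left):
    return [list(top) for _ in range(4)]      # copy top row downwards

def horizontal_pred(top, left):
    return [[left[r]] * 4 for r in range(4)]  # copy left column rightwards

def dc_pred(top, left):
    mean = round(sum(top + left) / (len(top) + len(left)))
    return [[mean] * 4 for _ in range(4)]

# Step 1: evaluate only a reduced subset of 4x4 modes.
MODE_SUBSET = {0: vertical_pred, 1: horizontal_pred, 2: dc_pred}

def best_4x4_mode(block, top, left):
    costs = {m: sad(block, f(top, left)) for m, f in MODE_SUBSET.items()}
    return min(costs, key=costs.get), costs

# Step 2: the dominant direction of the 4x4 winner selects candidate
# 16x16 modes (0 vertical, 1 horizontal, 2 DC, 3 plane), so the full
# RDO search runs only over these candidates.
DIRECTION_TO_16X16 = {0: [0, 2], 1: [1, 2], 2: [2, 3]}
```

A block with strong vertical structure thus picks mode 0 cheaply and restricts the 16x16 search to the vertical and DC candidates.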
Abstract: In a virtual organization, a Knowledge Discovery (KD)
service comprises distributed data resources and computing grid
nodes. The computational grid is integrated with the data grid to
form a Knowledge Grid, which implements the Apriori algorithm for
mining association rules on a grid network. This paper describes
the development of a parallel and distributed version of the
Apriori algorithm on the Globus Toolkit, using the Message Passing
Interface extended with Grid Services (MPICH-G2). The Knowledge
Grid is created on top of the data and computational grids to
support decision making in real-time applications. A case study
describes the design and implementation of local and global mining
of frequent item sets. The experiments were conducted on different
configurations of the grid network, and the computation time was
recorded for each operation. Analysis of the results across the
various grid configurations shows that the speedup in computation
time is almost superlinear.
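The local/global mining split can be sketched in plain Python (an assumed simplification; the paper's implementation runs over MPICH-G2 on the Globus Toolkit, which is not reproduced here). Each grid node counts candidate itemsets over its partition, and a coordinator merges the counts:

```python
# Sketch of local/global frequent-itemset mining (assumed simplification
# of the distributed Apriori step; message passing is elided).
from collections import Counter
from itertools import combinations

def local_counts(transactions, k):
    """One node's step: count all k-itemsets in its local partition."""
    counts = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            counts[itemset] += 1
    return counts

def global_frequent(partition_counts, min_support):
    """Coordinator step: sum the local counts and keep itemsets whose
    global support reaches the minimum support threshold."""
    total = Counter()
    for c in partition_counts:
        total.update(c)
    return {iset: n for iset, n in total.items() if n >= min_support}
```

In the grid setting, only the compact count tables travel between nodes, not the raw transactions, which is what makes the distributed formulation attractive.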
Abstract: New graph similarity methods are proposed in this work with the aim of refining the chemical information extracted from molecule matching. For this purpose, data fusion of the isomorphic and nonisomorphic subgraphs into a new similarity measure, the Approximate Similarity, was carried out by several approaches. The application of the proposed method to the development of quantitative structure-activity relationships (QSAR) has provided reliable tools for predicting several pharmacological parameters: the binding of steroids to the globulin-corticosteroid receptor, the activity of benzodiazepine receptor compounds, and the blood-brain barrier permeability. Acceptable results were obtained for the models presented here.
Abstract: In this paper we study different similarity-based approaches for the development of a QSAR model devoted to predicting the activity of antiobesity drugs. Classical similarity approaches are compared with dissimilarity models based on Euclidean distances between the nonisomorphic fragments extracted in the matching process. Combining the classical similarity and dissimilarity approaches into a new similarity measure, the Approximate Similarity, was also studied, and better results were obtained. The application of the proposed method to the development of quantitative structure-activity relationships (QSAR) has provided reliable tools for predicting the inhibitory activity of drugs. Acceptable results were obtained for the models presented here.
Abstract: The drug discovery process starts with protein
identification because proteins are responsible for many functions
required for maintenance of life. Protein identification further needs
determination of protein function. The proposed method develops a
classifier for human protein function prediction. The model uses a
decision tree for the classification process. The protein function
is predicted on the basis of matched sequence-derived features for
each protein function. The research work includes the development
of a tool which determines sequence-derived features by analyzing
different parameters. The remaining sequence-derived features are
determined using various web-based tools.
Abstract: This paper investigates the effect of fiberglass
clamping process improvement on drape simulation prediction.
Clamping has a great effect on the mould and the fiber during the
manufacturing process; it also improves the fiber strain and the
quality of the fiber orientation in the areas of folding and
wrinkle formation during the press-forming process. A drape
simulation software tool was used to digitalize the process,
noting the formation problems on the contour-sensitive part. This
was compared with real-life clamping processes using single- and
double-frame set-ups to observe the effects. In addition,
restraints were introduced using clips and G-clamps with a
predetermined revolution to restrain fabric deformation during
the forming process. Incorporating clamping and fabric
deformation restraint improved the prediction of the simulation
tool. Therefore, for an effective forming process, incorporating
the clamping process into the drape simulation will assist the
development of fiberglass applications in manufacturing.
Abstract: How to efficiently assign system resources to route
client demand through Gateway servers is a tricky predicament. In
this paper, we present an enhanced proposal for the autonomous
performance of Gateway servers under highly dynamic traffic
loads. We devise a methodology to calculate queue length and
waiting time from Gateway server information, in order to reduce
response time variance in the presence of bursty traffic.
The most widespread consideration is performance, because Gateway
servers must offer cost-effective and highly available services
over the long term, and thus have to be scaled to meet the
expected load. Performance measurements can be the basis for
performance modeling and prediction. With the help of performance
models, performance metrics (such as buffer size and waiting
time) can be determined during the development process.
This paper describes the queue models that can be applied to
estimate the queue length, and hence the final value of the
memory size. Both simulation and experimental studies using
synthesized workloads, together with analysis of real-world
Gateway servers, demonstrate the effectiveness of the proposed
system.
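As an illustration of how queue length and waiting time can be derived from server information, the standard M/M/1 model (an assumption; the abstract does not fix the queue model) gives closed-form metrics:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of an M/M/1 queue (an assumed model; the
    abstract does not specify the queueing discipline).
    Returns (utilization, mean number in system, mean time in system)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be < service rate")
    rho = arrival_rate / service_rate        # server utilization
    L = rho / (1.0 - rho)                    # mean number in system
    W = 1.0 / (service_rate - arrival_rate)  # mean response time
    return rho, L, W
```

Little's law (L = lambda * W) ties the two estimates together: for example, 50 requests/s against a 100 requests/s server gives a utilization of 0.5 and a 20 ms mean response time, and L then bounds the buffer memory to provision.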
Abstract: Although many researchers have studied the flow
hydraulics of compound channels, determining their flow rating
curves still poses many complicated problems. Many different
methods have been presented for these channels, but extending
them to all types of compound channels with different geometrical
and hydraulic conditions is certainly difficult. In this study,
flow discharge in compound channels was estimated using
artificial neural networks (ANNs) and nearly 400 laboratory and
field data sets of geometry and flow rating curves from 30
different straight compound sections. Thirteen dimensionless
input variables, including relative depth, relative roughness,
relative width, aspect ratio, bed slope, main channel side
slopes, flood plain side slopes and berm inclination, and one
output variable (flow discharge) were used in the ANNs.
Comparison of the ANN model with a traditional method (the
divided channel method, DCM) shows the high accuracy of the ANN
results. Sensitivity analysis showed that relative depth, with a
47.6 percent contribution, is the most effective input parameter
for flow discharge prediction. Relative width and relative
roughness have 19.3 and 12.2 percent importance, respectively. On
the other hand, the shape parameter, main channel side slopes and
flood plain side slopes, with 2.1, 3.8 and 3.8 percent
contributions, have the least importance.
Abstract: In this study, a classification-based video
super-resolution method using artificial neural network (ANN) is
proposed to enhance low-resolution (LR) to high-resolution (HR)
frames. The proposed method consists of four main steps:
classification, motion-trace volume collection, temporal adjustment,
and ANN prediction. A classifier is designed based on the edge
properties of a pixel in the LR frame to identify the spatial information.
To exploit the spatio-temporal information, a motion-trace volume is
collected using motion estimation, which can eliminate
untraceable object motion in the LR frames. In addition, a
temporal lateral process is employed for volume adjustment to
reduce unnecessary temporal features. Finally, an ANN is applied
to each class to learn the complicated
spatio-temporal relationship between LR and HR frames. Simulation
results show that the proposed method successfully improves both
peak signal-to-noise ratio and perceptual quality.
Abstract: Fatigue life prediction and evaluation are the key
technologies to assure the safety and reliability of automotive rubber
components. The objective of this study is to develop the fatigue
analysis process for vulcanized rubber components, which is
applicable to predict fatigue life at initial product design step. Fatigue
life prediction methodology of vulcanized natural rubber was
proposed by incorporating the finite element analysis and fatigue
damage parameter of maximum strain appearing at the critical location
determined from fatigue tests. In order to develop an appropriate
fatigue damage parameter for the rubber material, a series of
displacement-controlled fatigue tests was conducted using
three-dimensional dumbbell specimens with different levels of
mean displacement. The maximum strain was shown to be a proper
damage parameter, taking the mean displacement effects into
account. Nonlinear finite element analyses of the
three-dimensional dumbbell specimens were performed based on a
hyper-elastic material model determined from uni-axial tension,
equi-biaxial tension and planar tests. The fatigue analysis
procedure employed in this study could be used for approximate
fatigue design.
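A common way to turn a maximum-strain damage parameter into a life prediction is a power-law strain-life fit; the sketch below is an illustrative assumption, not the paper's reported equation, fitting N = a * strain^b by least squares in log-log space:

```python
# Illustrative strain-life fit (assumed power-law form, not the
# paper's reported model): N = a * strain**b.
import math

def fit_strain_life(strains, lives):
    """Least-squares fit of log(N) = log(a) + b * log(strain)."""
    xs = [math.log(s) for s in strains]
    ys = [math.log(n) for n in lives]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

def predict_life(a, b, strain):
    """Predicted cycles to failure at a given maximum strain."""
    return a * strain ** b
```

Once fitted to test data, such a curve lets the FE-computed maximum strain at the critical location be mapped directly to a predicted life at the design stage.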
Abstract: An adaptive software reliability prediction model
using an evolutionary connectionist approach based on a Recurrent
Radial Basis Function architecture is proposed. Based on the
currently available software failure time data, a Fuzzy Min-Max
algorithm is used to globally optimize the number of k Gaussian
nodes. The
corresponding optimized neural network architecture is iteratively
and dynamically reconfigured in real-time as new actual failure time
data arrives. The performance of our proposed approach has been
tested using sixteen real-time software failure data sets.
Numerical results show that our proposed approach is robust
across different software projects and offers better next-step
predictability than existing neural network models for failure
time prediction.
Abstract: In this paper, we propose a hybrid machine learning
system based on Genetic Algorithm (GA) and Support Vector
Machines (SVM) for stock market prediction. A variety of indicators
from the technical analysis field of study are used as input features.
We also make use of the correlation between stock prices of different
companies to forecast the price of a stock, making use of technical
indicators of highly correlated stocks, not only the stock to be
predicted. The genetic algorithm is used to select the set of most
informative input features from among all the technical indicators.
The results show that the hybrid GA-SVM system outperforms the
standalone SVM system.
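A minimal sketch of GA-driven feature selection follows. All names are hypothetical, and a simple correlation-based fitness serves as a cheap stand-in for the SVM validation accuracy the paper actually uses:

```python
# Sketch of GA feature selection over technical indicators.
# Assumption: fitness is a correlation proxy, not the paper's SVM accuracy.
import random

def corr(xs, ys):
    """Pearson correlation; 0.0 for a constant series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def fitness(mask, features, target, penalty=0.05):
    """Summed |correlation| of selected indicators with the target,
    minus a per-feature penalty to favour small subsets."""
    score = sum(abs(corr(col, target))
                for col, keep in zip(features, mask) if keep)
    return score - penalty * sum(mask)

def ga_select(features, target, pop_size=20, generations=30, seed=0):
    """Evolve bitmasks over the feature set; 1 = indicator selected."""
    rng = random.Random(seed)
    n = len(features)  # assumes n >= 2 for the crossover cut
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, features, target), reverse=True)
        survivors = pop[:pop_size // 2]            # elitist truncation
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                 # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, features, target))
```

In the paper's setting, each feature column would be a technical indicator series (possibly from a correlated stock), and evaluating a mask would mean training and validating an SVM on the selected columns.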
Abstract: Tandem mass spectrometry (MS/MS) is the engine
driving high-throughput protein identification. Protein mixtures possibly
representing thousands of proteins from multiple species are
treated with proteolytic enzymes, cutting the proteins into smaller
peptides that are then analyzed generating MS/MS spectra. The
task of determining the identity of the peptide from its spectrum
is currently the weak point in the process. Current approaches to de
novo sequencing are able to compute candidate peptides efficiently.
The problem lies in the limitations of current scoring functions. In this
paper we introduce the concept of proteome signature. By examining
proteins and compiling proteome signatures (amino acid usage) it is
possible to characterize likely combinations of amino acids and better
distinguish between candidate peptides. Our results strongly support
the hypothesis that a scoring function that considers amino acid usage
patterns is better able to distinguish between candidate peptides. This
in turn leads to higher accuracy in peptide prediction.
Abstract: Long term rainfall analysis and prediction is a
challenging task especially in the modern world where the impact of
global warming is creating complications in environmental issues.
These factors which are data intensive require high performance
computational modeling for accurate prediction. This research paper
describes a prototype which is designed and developed on grid
environment using a number of coupled software infrastructural
building blocks. This grid enabled system provides the demanding
computational power, efficiency, resources, user-friendly interface,
secured job submission and high throughput. The results obtained
using sequential execution and grid-enabled execution show that
computational performance was enhanced by 36% to 75% for a decade
of climate parameters. The large variation in performance can be
attributed to the varying degree of computational resources
available for job execution.
Grid computing enables the dynamic runtime selection, sharing and
aggregation of distributed and autonomous resources, which plays
an important role not only in business but also in scientific and
social domains. This research paper attempts to explore
grid-enabled computing capabilities on weather indices from HOAPS
data for climate impact modeling and change detection.
Abstract: An advanced Monte Carlo simulation method, called Subset Simulation (SS), for the time-dependent reliability prediction of underground pipelines is presented in this paper. SS provides better resolution at low failure probability levels by efficiently investigating the rare failure events that are commonly encountered in pipeline engineering applications. In the SS method, random samples leading to progressive failure are generated efficiently and used to compute the probabilistic performance via statistical variables. SS gains its efficiency by expressing a small failure probability as a product of larger conditional probabilities of a sequence of intermediate events. The efficiency of SS has been demonstrated by numerical studies, and attention in this work is devoted to scrutinising the robustness of the SS application in pipe reliability assessment. It is hoped that this development work can promote the use of SS tools for uncertainty propagation in the decision-making process for underground pipeline network reliability prediction.
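The product-of-conditional-probabilities idea behind SS can be sketched for a one-dimensional toy problem. The sketch makes several illustrative assumptions: a standard-normal input, a fixed conditional level probability p0, and a simple Metropolis resampler:

```python
# Toy Subset Simulation sketch (assumed 1-D standard-normal input;
# the paper's pipeline limit states are far more involved).
import math
import random

def subset_simulation(limit_state, threshold, n=1000, p0=0.1,
                      max_levels=10, seed=1):
    """Estimate P[limit_state(X) > threshold] for X ~ N(0, 1) by
    expressing the rare event as a chain of intermediate events,
    each with conditional probability roughly p0."""
    rng = random.Random(seed)
    density = lambda x: math.exp(-0.5 * x * x)   # unnormalized N(0,1) pdf
    samples = [rng.gauss(0.0, 1.0) for _ in range(n)]
    n_seed = int(p0 * n)
    prob = 1.0
    for _ in range(max_levels):
        g = sorted((limit_state(x) for x in samples), reverse=True)
        b = g[n_seed - 1]                        # intermediate threshold
        if b >= threshold:                       # final level reached:
            n_fail = sum(1 for x in samples if limit_state(x) > threshold)
            return prob * n_fail / n             # prod(p0) * MC estimate
        prob *= p0
        # seeds: samples already inside the intermediate failure region
        seeds = [x for x in samples if limit_state(x) >= b][:n_seed]
        samples = []
        for s in seeds:                          # Metropolis chain per seed
            x = s
            for _ in range(n // n_seed):
                cand = x + rng.gauss(0.0, 1.0)
                if (limit_state(cand) >= b and
                        rng.random() < min(1.0, density(cand) / density(x))):
                    x = cand
                samples.append(x)
    return prob
```

For example, with limit_state(x) = x and threshold 3.0 the true probability is about 1.3e-3, a level that 1000 plain Monte Carlo samples resolve poorly but that SS reaches with the same total sample budget per level.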
Abstract: Many studies have focused on the nonlinear analysis
of electroencephalography (EEG) mainly for the characterization of
epileptic brain states. It is assumed that at least two states of the
epileptic brain are possible: the interictal state characterized by a
normal apparently random, steady-state EEG ongoing activity; and
the ictal state that is characterized by paroxysmal occurrence of
synchronous oscillations and is generally called in neurology, a
seizure.
The spatial and temporal dynamics of the epileptogenic process
are still not completely clear, especially regarding the most
challenging aspect of epileptology: the anticipation of the
seizure. Despite all the efforts, we still do not know how, when
and why a seizure occurs. However, current studies bring strong
evidence that the interictal-ictal state transition is not an
abrupt phenomenon. Findings also indicate that it is possible to
detect a preseizure phase.
Our approach is to use neural networks to detect interictal
states and to predict from those states the upcoming seizure
(ictal state). Analysis of the EEG signal based on neural
networks is used to classify the EEG as either seizure or
non-seizure. By applying prediction methods it will be possible
to predict the upcoming seizure from non-seizure EEG.
We will study patients admitted to the epilepsy monitoring unit
for the purpose of recording their seizures. Preictal, ictal and
postictal EEG recordings are available for such patients for
analysis. The system will be trained using one body of samples
and validated using another. A third body of samples, distinct
from the first two, is used to test the network for the
achievement of optimum prediction. Several methods will be tried:
backpropagation ANN and RBF networks.
Abstract: Since primary school trips usually start from home, the
attention of many scholars has been focused on the home end for
data gathering, and category analysis has often been relied upon
when predicting school travel demand. In this paper, the school
end was relied on for data gathering, and multivariate regression
for future travel demand prediction. 9859 pupils were surveyed by
way
of questionnaires at 21 primary schools. The town was divided into 5
zones. The study was carried out in Skudai Town, Malaysia. Based
on the hypothesis that the number of primary school trip ends is
expected to be the same at both ends, because school trips are
fixed, the choice of trip end would have an inconsequential
effect on the outcome. The
study compared empirical data for home and school trip end
productions and attractions. Variance from both data results was
insignificant, although some claims from home based family survey
were found to be grossly exaggerated. Data from the school trip ends
was relied on for travel demand prediction because of its
completeness. Accessibility, trip attraction and trip production were
then related to school trip rates under daylight and dry weather
conditions. The paper concluded that accessibility is an
important parameter when predicting demand for future school trip
rates.
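A multivariate regression of trip rates on predictors such as accessibility and trip attraction can be fitted with ordinary least squares via the normal equations. A self-contained sketch (variable roles are illustrative; the paper's actual predictor set is not reproduced):

```python
# Sketch of multivariate regression by normal equations X'X beta = X'y.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def fit_regression(rows, y):
    """Ordinary least squares with an intercept column prepended;
    rows are predictor tuples, e.g. (accessibility, trip_attraction)."""
    X = [[1.0] + list(r) for r in rows]
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b_] for i in range(n)) for b_ in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)
```

The fitted coefficients then quantify how much each predictor (e.g. accessibility) contributes to the future trip rate, which is the sensitivity the paper's conclusion rests on.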
Abstract: It is important to predict yield in the semiconductor test process in order to increase yield. In this study, yield prediction means effectively identifying defective dies, wafers or lots. The semiconductor test process consists of several test steps, and each test includes various test items; in other words, test data are large and complicated. The data are also disproportionately distributed, as the number of data points belonging to the FAIL class is extremely low. For yield prediction, general data mining techniques are limited without data preprocessing, owing to the inherent properties of test data. Therefore, this study proposes an under-sampling method using a support vector machine (SVM) to eliminate the imbalanced characteristic. To evaluate performance, a random under-sampling method is compared with the proposed method using actual semiconductor test data. The results show that the SVM-based sampling method is effective in generating a robust model for yield prediction.
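The random under-sampling baseline used for comparison can be sketched as follows (the proposed SVM-based selection of majority samples is not reproduced; names are illustrative):

```python
# Baseline under-sampling sketch for the imbalanced PASS/FAIL data.
import random

def random_undersample(samples, labels, minority_label, seed=0):
    """Keep every minority (FAIL) sample and an equal-sized random
    subset of the majority class. The proposed method instead uses an
    SVM to pick informative majority samples, which is not shown here."""
    rng = random.Random(seed)
    minority = [(s, l) for s, l in zip(samples, labels) if l == minority_label]
    majority = [(s, l) for s, l in zip(samples, labels) if l != minority_label]
    kept = rng.sample(majority, len(minority))
    balanced = minority + kept
    rng.shuffle(balanced)
    return [s for s, _ in balanced], [l for _, l in balanced]
```

The balanced set then feeds whatever classifier is used for yield prediction; the paper's point is that choosing *which* majority samples to keep (via the SVM margin) beats choosing them at random.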
Abstract: Dust storms are one of the most costly and destructive
events in many desert regions. They can cause massive damages both
in natural environments and human lives. This paper is aimed at
presenting a preliminary study on dust storms, as a major natural
hazard in arid and semi-arid regions. As a case study, dust storm
events that occurred in Zabol city, located in the Sistan Region
of Iran, were analyzed to diagnose and predict dust storms. The
identification and prediction of dust storm events could have a
significant impact on damage reduction. Present models for this
purpose are complicated and not appropriate for many areas with
poor-data environments. The present study explores the Gamma test
for identifying the inputs of an ANN model for dust storm
prediction. Results indicate that more work must be carried out
on dust storm identification and on segregating the various dust
storm types.
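A simplified version of the Gamma test used for input selection can be sketched as follows. The sketch makes illustrative assumptions: squared Euclidean input distances, and the noise-variance estimate taken as the intercept of the delta-gamma regression:

```python
# Simplified Gamma test sketch (assumed form; implementations vary).
import random  # used only for the synthetic demo data below

def dist2(a, b):
    """Squared Euclidean distance between two input vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def gamma_test(inputs, outputs, p=10):
    """For k = 1..p, average the squared distance to each point's k-th
    nearest neighbour (delta) and half the squared output difference to
    that neighbour (gamma); regress gamma on delta. The intercept
    estimates the output noise variance: near zero means the chosen
    inputs can, in principle, predict the output smoothly."""
    n = len(inputs)
    deltas, gammas = [], []
    for k in range(1, p + 1):
        d_sum = g_sum = 0.0
        for i in range(n):
            order = sorted((j for j in range(n) if j != i),
                           key=lambda j: dist2(inputs[i], inputs[j]))
            j = order[k - 1]                     # k-th nearest neighbour
            d_sum += dist2(inputs[i], inputs[j])
            g_sum += 0.5 * (outputs[i] - outputs[j]) ** 2
        deltas.append(d_sum / n)
        gammas.append(g_sum / n)
    # least-squares line gamma = A * delta + G; intercept G ~ noise variance
    md, mg = sum(deltas) / p, sum(gammas) / p
    A = (sum((d - md) * (g - mg) for d, g in zip(deltas, gammas))
         / sum((d - md) ** 2 for d in deltas))
    return mg - A * md
```

Candidate input sets with a lower intercept are better predictors, which is how the test guides the choice of ANN inputs in a data-poor setting.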
Abstract: A numerical analysis of wave and hydrodynamic models
is used to investigate the influence of WAve and Storm Surge
(WASS) in the regional and coastal zones. The numerically
analyzed system consists of the WAve Model Cycle 4 (WAMC4) and
the Princeton Ocean Model (POM), which are used to solve the
energy balance and primitive equations, respectively. The results
of both models show that the incorporated surface wave in the
regional zone affected the coastal storm surge zone.
Specifically, the results indicated that the WASS approximation
generally underestimates not only the peak surge but also the
coastal water level drop, which can also cause a substantial
impact on the coastal environment. Accounting for the effect of
the wave-induced surface stress on the storm surge can
significantly improve storm surge prediction. Finally,
calibrating the wave module to the minimum error of the
significant wave height (Hs) does not necessarily result in the
optimum wave module in the WASS analyzed system for WASS
prediction.