Abstract: Global approximation of a complex mathematical function or computer model over a large variable domain using a metamodel is often needed in sensitivity analysis, computer simulation, optimal control, and global design optimization of complex, multiphysics systems. To overcome the limitations of existing response surface (RS), surrogate, or metamodel modeling methods for complex models over large variable domains, a new adaptive and regressive RS modeling method using quadratic functions and local model improvement schemes is introduced. The method applies an iterative, Latin hypercube sampling based RS update process, divides the entire domain of design variables into multiple cells, identifies rougher cells with large modeling error, and further divides these cells along the roughest dimension. A small number of additional sampling points from the original, expensive model are added over the small, isolated rough cells to improve the RS model locally until the model accuracy criteria are satisfied. The method then combines the local RS cells to regenerate a global RS model with satisfactory accuracy. An effective RS cell sorting algorithm is also introduced to improve the efficiency of model evaluation. Benchmark tests are presented, and the use of the new metamodeling method to replace a complex hybrid electric vehicle powertrain performance model in vehicle design optimization and optimal control is discussed.
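The Latin hypercube sampling step named in the abstract can be sketched as follows; the dimension count, bounds, and sample size below are illustrative assumptions, not values from the paper:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Draw n_samples points: each variable's range is split into
    n_samples equal strata, and each stratum is sampled exactly once."""
    rng = random.Random(seed)
    dim = len(bounds)
    samples = [[0.0] * dim for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)                      # random pairing across dimensions
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_samples   # uniform point inside stratum s
            samples[i][d] = lo + u * (hi - lo)
    return samples
```

Each added batch of points therefore covers every marginal stratum once, which is what makes the iterative RS update efficient with few model evaluations.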
Abstract: Laser interferometric methods have been utilized for the measurement of natural convection heat transfer from a heated vertical flat plate in the investigation presented here. The study mainly aims at comparing two different fringe orientations in the wedge fringe setting of a Mach-Zehnder interferometer (MZI) used for the measurements. The interference fringes are set in horizontal and vertical orientations with respect to the heated surface, and two different fringe analysis methods, namely the stepping method and the method proposed by Naylor and Duarte, are used to obtain the heat transfer coefficients. The experimental system is benchmarked against theoretical results, thus validating its reliability in heat transfer measurements. The interference fringe patterns are analyzed digitally using the MATLAB 7 and MOTIC Plus software packages, which ensures improved efficiency in fringe analysis, hence reducing the errors associated with conventional fringe tracing. The work also discusses the relative merits and limitations of the two methods used.
Abstract: Signal processing applications which are iterative in nature are best represented by data flow graphs (DFGs). In these applications, the maximum sampling frequency depends on the topology of the DFG, and on its cyclic dependencies in particular. The determination of the iteration bound, which is the reciprocal of the maximum sampling frequency, is critical in the hardware implementation of signal processing applications. In this paper, a novel technique to compute the iteration bound is proposed. This technique differs from all previously proposed techniques in that it is based on the natural flow of tokens through the DFG rather than on the topology of the graph. The proposed algorithm has lower run-time complexity than all known algorithms. The performance of the proposed algorithm is illustrated through an analytical derivation of its time complexity, as well as through simulation of some benchmark problems.
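For reference, the classical definition of the iteration bound (not the paper's new token-flow technique) is the maximum over all loops of the loop computation time divided by the loop's delay count. A brute-force sketch over a small, made-up DFG:

```python
# Toy DFG (made-up example): node computation times, and edges
# carrying delay (register) counts.
times = {"A": 2, "B": 3, "C": 1}
edges = [("A", "B", 1), ("B", "A", 0), ("B", "C", 0), ("C", "A", 1)]

def iteration_bound(times, edges):
    """Brute-force T_inf = max over all loops of (loop time / loop delays)."""
    adj = {}
    for u, v, d in edges:
        adj.setdefault(u, []).append((v, d))
    best = 0.0

    def walk(start, node, t, d, seen):
        nonlocal best
        for v, delays in adj.get(node, ()):
            if v == start:
                if d + delays > 0:            # a valid loop carries >= 1 delay
                    best = max(best, (t + times[start]) / (d + delays))
            elif v not in seen:
                walk(start, v, t + times[v], d + delays, seen | {v})

    for s in times:
        walk(s, s, 0.0, 0, {s})
    return best
```

Here the loop A-B has computation time 5 and one delay, and the loop A-B-C has time 6 and two delays, so the bound is 5. Cycle enumeration is exponential in general, which is why dedicated iteration-bound algorithms such as the one this paper proposes matter.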
Abstract: The pool of Internet Protocol version 4 (IPv4) addresses is being depleted, and a rapid transition method to the next-generation IP address (IPv6) should be established. This study aims to evaluate and select the best-performing IPv6 network transition mechanisms, such as IPv4/IPv6 dual stack, Transport Relay Translation (TRT), and Reverse Proxy with additional features. It also aims to prove that faster access can be achieved while ensuring optimal usage of the resources available during testing and actual implementation. This study used two test methods, Internet Control Message Protocol (ICMP) ping and Apache Benchmark (AB), to evaluate performance. Performance metrics for this study include the average number of accesses per second, the time taken for a single access, the data transfer speed, and the cost of additional requirements. Reverse Proxy with the caching feature is the most efficient mechanism because of its simpler configuration, and it was the best performer in the tests conducted.
Abstract: This paper is concerned with the production of an Arabic word semantic similarity (WSS) benchmark dataset. It is the first of its kind for Arabic, developed specifically to assess the accuracy of word semantic similarity measures. Semantic similarity is an essential component of numerous applications in fields such as natural language processing, artificial intelligence, linguistics, and psychology. Most of the reported work has been done for English. To the best of our knowledge, there is no word similarity measure developed specifically for Arabic. In this paper, an Arabic benchmark dataset of 70 word pairs is presented. The best available techniques have been used in this study to produce the Arabic dataset. This includes selecting and creating materials, collecting human ratings from a representative sample of participants, and calculating the overall ratings. This dataset will make a substantial contribution to future work in the field of Arabic WSS, and hopefully it will be considered a reference basis from which to evaluate and compare different methodologies in the field.
Abstract: The Minimum Vertex Cover (MVC) problem is a classic NP-complete graph optimization problem. In this paper, an efficient algorithm, called the Vertex Support Algorithm (VSA), is designed to find the smallest vertex cover of a graph. The VSA is tested on a large number of random graphs and DIMACS benchmark graphs. A comparative study of this algorithm against other existing methods has been carried out. Extensive simulation results show that the VSA can yield better solutions than other algorithms found in the literature for solving the minimum vertex cover problem.
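The abstract does not reproduce the VSA's pseudocode. A common support-based greedy heuristic for vertex cover (support of a vertex = sum of its neighbours' degrees) can be sketched as follows, as an illustration of the general idea rather than the authors' exact algorithm:

```python
def support_cover(adj):
    """Greedy cover: repeatedly take the vertex whose neighbours have the
    largest total degree (its 'support') until no edges remain."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    cover = set()
    while any(adj.values()):
        support = {v: sum(len(adj[u]) for u in ns)
                   for v, ns in adj.items() if ns}
        v = max(support, key=support.get)
        cover.add(v)
        for u in adj[v]:
            adj[u].discard(v)                     # delete v's incident edges
        adj[v] = set()
    return cover
```

On the path a-b-c-d this picks the two interior vertices, which is optimal; support-guided selection tends to cover many edges per chosen vertex.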
Abstract: In the last decade, energy-based control theory has achieved a significant breakthrough in dealing with underactuated mechanical systems, with two successful and similar tools: controlled Lagrangians and controlled Hamiltonians (IDA-PBC). However, because of the complexity of these tools, successful case studies are lacking, in particular MIMO cases. The seminal theoretical paper on controlled Lagrangians by Bloch and his colleagues presented a benchmark example, a 4-d.o.f. underactuated pendulum on a cart, but a detailed and complete design was not provided. To fill this gap, this note revisits their design idea by deriving explicit control functions for a similar device motivated by a vector-thrust body hovering in the air. To the best of our knowledge, this system is the first MIMO underactuated example stabilized using energy-based tools following the original design idea. Some observations are given based on computer simulation.
Abstract: The network for delivering commodities has been an important design problem in our daily lives and in many transportation applications. The delivery performance is evaluated based on the system reliability of delivering commodities from a source node to a sink node in the network. The system reliability is thus maximized to find the optimal routing. However, the design problem is not simple because (1) each path segment has randomly distributed attributes; (2) there are multiple commodities that consume various path capacities; and (3) the optimal routing must successfully complete the delivery process within the allowable time constraints. In this paper, we focus on the design optimization of the Multi-State Flow Network (MSFN) for multiple commodities. We propose an efficient approach to evaluate the system reliability in the MSFN with respect to randomly distributed path attributes and find the optimal routing subject to the allowable time constraints. The delivery rates, also known as delivery currents, of the path segments are evaluated, and the minimal-current arcs are eliminated to reduce the complexity of the MSFN. Accordingly, the correct optimal routing is found and the worst-case reliability is evaluated. It has been shown that the reliability of the optimal routing is at least as high as the worst-case measure. Two benchmark examples are utilized to demonstrate the proposed method. The comparisons between the original and the reduced networks show that the proposed method is very efficient.
Abstract: Naive Bayes Nearest Neighbor (NBNN) and its variants, i.e., local NBNN and the NBNN kernels, are local feature-based classifiers that have achieved impressive performance in image classification. By exploiting instance-to-class (I2C) distances (an instance being an image or video in image/video classification), they avoid the quantization errors of local image descriptors in the bag-of-words (BoW) model. However, the performance of NBNN, local NBNN and the NBNN kernels has not been validated on video analysis. In this paper, we introduce these three classifiers into human action recognition and conduct comprehensive experiments on the benchmark KTH and the realistic HMDB datasets. The results show that these I2C-based classifiers consistently outperform the SVM classifier with the BoW model.
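The I2C distance underlying NBNN can be sketched in a few lines of numpy; the descriptor dimensions and pool sizes below are toy assumptions, and real systems use approximate nearest-neighbour search for speed:

```python
import numpy as np

def nbnn_classify(query, class_pools):
    """query: (n, d) local descriptors of one image/video;
    class_pools: {label: (m, d) descriptors pooled from that class}."""
    best_label, best_i2c = None, float("inf")
    for label, pool in class_pools.items():
        # I2C distance: each query descriptor is matched to its nearest
        # neighbour in the class's descriptor pool, and distances are summed
        d2 = ((query[:, None, :] - pool[None, :, :]) ** 2).sum(axis=-1)
        i2c = d2.min(axis=1).sum()
        if i2c < best_i2c:
            best_label, best_i2c = label, i2c
    return best_label
```

Because descriptors are matched directly against the class pool, no codebook is built, which is how NBNN sidesteps the BoW quantization error the abstract mentions.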
Abstract: Al-Murabahah is an Islamic financing facility used in asset financing. The profit rate of the contract is determined by components which are also used in conventional banking, namely the cost of funds, overhead cost, risk premium cost, and the bank's profit margin. At the same time, the profit rate determined by the Islamic banking system also refers to the London Inter-Bank Offered Rate (LIBOR) as a benchmark. This practice has raised arguments among Muslim scholars regarding the validity of the contract, that is, whether the contract maintains Shariah compliance or not. This paper aims to explore the view of Shariah towards the above components as practiced by Islamic banking in determining the profit rate of al-murabahah asset financing in Malaysia. This is a comparative study which applied the views of Muslim scholars from all major mazahibs in Islamic jurisprudence and examined the practices of Islamic banks in Malaysia for the above components. The study found that Shariah accepts all the components, with conditions: the cost of funds is accepted as a portion of al-mudarabah's profit, the overhead cost is accepted as a cost of the product, the risk premium cost, consisting of business risk and mitigation risk, is accepted through the concept of al-ta'awun, and the bank's profit margin is accepted as a right of the bank after venturing into a risky investment.
Abstract: The Partitioned Global Address Space (PGAS) programming paradigm offers ease of use in expressing parallelism through a global shared address space, while emphasizing performance by providing locality awareness through the partitioning of this address space. Therefore, interest in PGAS programming languages is growing, and many new languages have emerged and are becoming ubiquitously available on nearly all modern parallel architectures. Recently, new parallel machines with multiple cores have been designed to target high-performance applications. Most of the effort so far has gone into benchmarking, but there are few examples of real high-performance applications running on multicore machines. In this paper, we present and evaluate a parallelization technique for implementing a local DNA sequence alignment algorithm using a PGAS-based language, UPC (Unified Parallel C), on a chip multithreading architecture, the UltraSPARC T1.
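The standard serial algorithm for local DNA sequence alignment is Smith-Waterman. A minimal Python sketch of its scoring recurrence follows (the sequential baseline, not the paper's UPC parallelization; the scoring parameters are illustrative):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between sequences a and b."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                    # local: scores never go negative
                          H[i - 1][j - 1] + s,  # match / mismatch
                          H[i - 1][j] + gap,    # gap in b
                          H[i][j - 1] + gap)    # gap in a
            best = max(best, H[i][j])
    return best
```

The cell dependencies (left, above, diagonal) mean anti-diagonals of H can be filled concurrently, which is the usual source of parallelism when mapping this algorithm onto a multithreaded architecture.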
Abstract: In the upstream region we place a ring and rotate it at 83 Hz, 166 Hz, 333 Hz, and 666 Hz to find the effect of the periodic distortion. In a physical experiment this type of perturbation would not be allowed, since mechanical failure of any part of the upstream equipment would destroy the blade system; this type of study is only possible by CFD. We use two pumps, NS32 (ENSAM) and a three-blade pump (Tamagawa Univ.). The benchmark computations were performed without perturbation parts and confirmed good agreement of the computational results in head versus flow rate. We obtained the pressure fluctuation growth rate, which represents the global instability of the turbo-system. The fluctuating torque components were 0.01 Nm (5000 rpm), 0.1 Nm (10000 rpm), 0.04 Nm (20000 rpm), and 0.15 Nm (40000 rpm), respectively. Only at 10000 rpm (166 Hz) was the output torque random, which implies that unsteady flow is created by separations on the blades and will reduce the pressure loss significantly.
Abstract: A solution to the unsteady Navier-Stokes equation by the splitting method in a physically orthogonal algebraic curvilinear coordinate system, also termed a 'non-linear grid system', is presented. The linear terms in the Navier-Stokes equation are solved by the Crank-Nicolson method, while the non-linear term is solved by the second-order Adams-Bashforth method. This work is meant to combine the advantage of the splitting method as a pressure-velocity solver of higher efficiency with the advantage of using a non-linear grid system, which produces more accurate results with a comparable number of grid points relative to a Cartesian grid. The validation of the splitting method as a solution of the Navier-Stokes equation in a non-linear grid system is done by comparison with the benchmark results for lid-driven cavity flow by Ghia, and some case studies including the backward-facing step flow problem.
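As background, the Crank-Nicolson scheme named above can be illustrated on 1-D diffusion, a toy problem rather than the paper's Navier-Stokes solver; the grid size, viscosity, and time step below are arbitrary:

```python
import numpy as np

# 1-D diffusion u_t = nu * u_xx on (0, 1) with u = 0 at both walls.
n, nu, dt, steps = 50, 0.1, 0.01, 10
dx = 1.0 / (n + 1)
x = np.linspace(dx, 1.0 - dx, n)
u = np.sin(np.pi * x)                       # initial condition

# Crank-Nicolson averages the diffusion operator over the old and new
# time levels: (I - r L) u_new = (I + r L) u_old, r = nu*dt/(2*dx^2).
r = nu * dt / (2.0 * dx ** 2)
off = np.full(n - 1, -r)
A = np.diag(np.full(n, 1.0 + 2.0 * r)) + np.diag(off, 1) + np.diag(off, -1)
B = np.diag(np.full(n, 1.0 - 2.0 * r)) - np.diag(off, 1) - np.diag(off, -1)

for _ in range(steps):
    u = np.linalg.solve(A, B @ u)
```

The scheme is second-order in time and unconditionally stable for this problem, which is why it is a common choice for the linear (viscous) terms while an explicit method such as Adams-Bashforth handles the non-linear term.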
Abstract: The physical methods for RNA secondary structure prediction are time-consuming and expensive; thus, methods for computational prediction are a proper alternative. Various algorithms have been used for RNA structure prediction, including dynamic programming and metaheuristic algorithms. Harmony search, inspired by musicians' behavior, is a recently developed metaheuristic algorithm which has been successful in a wide variety of complex optimization problems. This paper proposes a harmony search algorithm (HSRNAFold) to find RNA secondary structures with minimum free energy that are similar to the native structure. HSRNAFold is compared with the dynamic programming benchmark mfold and with metaheuristic algorithms (RnaPredict, SetPSO and HelixPSO). The results show that HSRNAFold is comparable to mfold and better than the metaheuristics in finding the minimum free energies and the number of correct base pairs.
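A generic harmony search loop (not HSRNAFold itself, whose encoding of RNA structures is not given here) can be sketched on a continuous toy objective; all parameter values are the commonly used defaults, assumed for illustration:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Minimize f over box bounds: hms = harmony memory size,
    hmcr = memory considering rate, par = pitch adjusting rate."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:              # reuse a remembered pitch
                x = rng.choice(memory)[d]
                if rng.random() < par:           # small pitch adjustment
                    x = min(hi, max(lo, x + rng.uniform(-0.05, 0.05) * (hi - lo)))
            else:                                # random improvisation
                x = rng.uniform(lo, hi)
            new.append(x)
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                    # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

For RNA folding the "pitches" would instead be discrete choices of candidate helices, with free energy as the objective.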
Abstract: As a popular rank-reduced vector space approach, Latent Semantic Indexing (LSI) has been used in information retrieval and other applications. In this paper, an LSI-based content vector model for text classification is presented, which constructs multiple augmented category LSI spaces and classifies texts by their content. The model integrates the class-discriminative information from the training data and is equipped with several pertinent feature selection and text classification algorithms. The proposed classifier has been applied to email classification, and experiments on a benchmark spam testing corpus (PU1) have shown that the approach represents a competitive alternative to other email classifiers based on the well-known SVM and naïve Bayes algorithms.
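The rank-reduced LSI space is built with a truncated SVD of the term-document matrix; a minimal numpy illustration on a tiny made-up matrix (the category-augmentation step of the paper is not shown):

```python
import numpy as np

# Tiny made-up term-document count matrix (rows = terms, cols = documents):
# documents 0 and 2 share a vocabulary, documents 1 and 3 share another.
A = np.array([[2., 0., 1., 0.],
              [1., 0., 2., 0.],
              [0., 3., 0., 1.],
              [0., 2., 0., 2.]])

k = 2                                         # dimensionality of the LSI space
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation
docs = (np.diag(s[:k]) @ Vt[:k, :]).T         # document vectors in LSI space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

Classification then reduces to comparing a text's projected vector against category representatives in the k-dimensional space, e.g. by cosine similarity.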
Abstract: The applications of VLSI circuits in high-performance computing, telecommunications, and consumer electronics have been expanding progressively, and at a very rapid pace. This paper describes a new model for partitioning a circuit using DBSCAN and a fuzzy ARTMAP neural network. The first step is concerned with feature extraction, where we make use of the DBSCAN algorithm. The second step is classification, performed by a fuzzy ARTMAP neural network. The performance of both approaches is compared using benchmark data provided by the MCNC standard cell placement benchmark netlists. Analysis of the experimental results shows that the fuzzy ARTMAP with DBSCAN model achieves greater performance than fuzzy ARTMAP alone in recognizing sub-circuits with the lowest number of interconnections between them; the recognition rate using fuzzy ARTMAP with DBSCAN is 97.7%.
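The DBSCAN step can be sketched in plain Python; the point coordinates and parameters below are toy assumptions, not circuit features from the paper:

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Basic DBSCAN; returns one label per point, -1 meaning noise."""
    n = len(points)
    labels = [None] * n                       # None = not yet visited
    region = lambda i: [j for j in range(n) if dist(points[i], points[j]) <= eps]
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = region(i)
        if len(seeds) < min_pts:
            labels[i] = -1                    # noise (may become a border point)
            continue
        cluster += 1                          # i is a core point: new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster           # noise reclaimed as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            js = region(j)
            if len(js) >= min_pts:            # j is also a core point: expand
                queue.extend(js)
    return labels
```

Because DBSCAN needs no preset cluster count and marks outliers as noise, it is a natural fit for grouping sub-circuit features before classification.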
Abstract: The purpose of this research is to characterize the levels of satisfaction of students in e-learning post-graduate courses, taking into account specific dimensions of the course which were considered benchmarks for the quality of this type of online learning initiative, as well as the levels of satisfaction towards each specific indicator identified in each dimension. It was also an aim of this study to understand how these dimensions relate to one another. Using a quantitative research approach in the collection and analysis of the data, the study involves the participation of the students who attended an e-learning course in 2010/2011. The conclusions of this study suggest that online students present relatively high levels of satisfaction, which points towards a positive experience during the course. It is possible to note that there is a correlation between the different dimensions studied, consequently leading to different improvement strategies. Ultimately, this investigation aims to contribute to the promotion of quality and the success of e-learning initiatives in Higher Education.
Abstract: The Spalart-Allmaras turbulence model has been implemented in a numerical code to study compressible turbulent flows, in which the system of governing equations is solved with a finite volume approach using a structured grid. The AUSM+ scheme is used to calculate the inviscid fluxes. Different benchmark problems have been computed to validate the implementation, and numerical results are shown. Special attention is paid to wall jet applications. In this study, the jet is subjected to various wall boundary conditions (adiabatic or uniform heat flux) in the forced convection regime, and both two-dimensional and axisymmetric wall jets are considered. The comparison between the numerical results and experimental data confirms the validity of this turbulence model for studying turbulent wall jets, especially in engineering applications.
Abstract: When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of shortest path length for larger numbers of Monte Carlo data points. The neural model shows strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. Therefore, the model can be considered a method of predicting path length complexities; this is expected to lead to minimal time complexity for very large-scale integrated circuits and the related computer-aided design tools that use binary decision diagrams.
Abstract: In this paper, we propose an adaptation of the Patricia tree for sparse datasets to generate non-redundant association rules. Using this adaptation, we can generate frequent closed itemsets, which are more compact than the frequent itemsets used in the Apriori approach. This adaptation has been tested experimentally on a set of benchmark datasets.
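The notion of a closed frequent itemset (a frequent itemset no proper superset of which has the same support) can be illustrated with a brute-force sketch; real miners such as the paper's Patricia-tree adaptation avoid this exponential enumeration:

```python
from itertools import combinations

def closed_frequent_itemsets(transactions, min_support):
    """Brute-force: enumerate all frequent itemsets, then keep only those
    with no proper superset of equal support (the closed ones)."""
    items = sorted({i for t in transactions for i in t})
    support = {}
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            s = sum(1 for t in transactions if set(cand) <= t)
            if s >= min_support:
                support[frozenset(cand)] = s
    return {X: s for X, s in support.items()
            if not any(X < Y and s == sY for Y, sY in support.items())}
```

On the transactions {a,b}, {a,b}, {a,c} with minimum support 2, the frequent set {b} is absorbed into its closure {a,b}, so only {a} and {a,b} are closed; this pruning is what makes closed-itemset representations more compact than Apriori's full frequent-itemset output.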