Abstract: This paper presents a comparison of H-ARQ techniques for OFDM systems using a new family of non-binary LDPC (NB-LDPC) codes developed within the EU FP7 DAVINCI project. The punctured NB-LDPC codes have been used in a simulated model of the transmission system. The link-level performance has been evaluated in terms of spectral efficiency, codeword error rate and average number of retransmissions. The NB-LDPC codes can be implemented easily and effectively with different retransmission methods, invoked when decoding of a codeword fails. Here the Optimal Symbol Selection method is proposed as a Chase Combining technique.
Abstract: The Connection Admission Control (CAC) problem is formulated in this paper as a discrete-time optimal control problem. The control variables account for the acceptance/rejection of new connections and the forced dropping of in-progress connections. These variables are constrained to meet suitable conditions which account for the QoS requirements (Link Availability, Blocking Probability, Dropping Probability). The performance index evaluates the total throughput. At each discrete time, the problem is solved as an integer-valued linear programming problem. The proposed procedure was successfully tested against suitably simulated data.
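The per-step integer program described in this abstract can be illustrated with a minimal sketch: a brute-force search over the binary acceptance vector, maximizing throughput under a single link-capacity constraint. The connection rates and capacity below are hypothetical, and the exhaustive search stands in for a real ILP solver.

```python
from itertools import product

def admit(rates, capacity):
    """Brute-force the 0/1 acceptance vector that maximizes total throughput
    subject to a single link-capacity constraint (stand-in for an ILP solver)."""
    best_x, best = None, 0
    for x in product((0, 1), repeat=len(rates)):
        load = sum(r * xi for r, xi in zip(rates, x))
        if load <= capacity and load > best:
            best_x, best = x, load
    return best_x, best

# hypothetical per-connection rates and link capacity
x, throughput = admit([3, 5, 2, 4], capacity=8)
```

In the paper's formulation the feasible set is further restricted by the QoS constraints; here only the capacity constraint is kept for brevity.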
Abstract: In this paper a combined feature selection method is proposed which takes advantage of sample-domain filtering, resampling and feature-subset evaluation methods to reduce the dimensionality of huge datasets and select reliable features. This method utilizes both the feature space and the sample domain to improve the feature selection process, and uses a combination of the Chi-squared and Consistency attribute evaluation methods to seek reliable features. The method consists of two phases. The first phase filters and resamples the sample domain, and the second phase adopts a hybrid procedure to find the optimal feature space by applying the Chi-squared and Consistency subset evaluation methods together with genetic search. Experiments on various-sized datasets from the UCI Repository of Machine Learning databases show that the performance of five classifiers (Naïve Bayes, Logistic, Multilayer Perceptron, Best-First Decision Tree and JRIP) improves simultaneously and the classification error for these classifiers decreases considerably. The experiments also show that this method outperforms other feature selection methods.
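The Chi-squared attribute scoring used in the second phase can be sketched in pure Python for discrete features: each feature is scored by comparing the observed feature/class contingency table against the counts expected under independence. The toy data below is hypothetical, not from the UCI experiments.

```python
from collections import Counter

def chi_squared(feature, labels):
    """Chi-squared score of one discrete feature against the class labels."""
    n = len(labels)
    obs = Counter(zip(feature, labels))       # observed contingency counts
    f_tot = Counter(feature)                  # feature-value marginals
    c_tot = Counter(labels)                   # class marginals
    score = 0.0
    for f in f_tot:
        for c in c_tot:
            expected = f_tot[f] * c_tot[c] / n
            score += (obs[(f, c)] - expected) ** 2 / expected
    return score
```

A perfectly class-aligned feature scores n (here 4), while an independent feature scores 0, which is the ranking behavior the hybrid phase exploits.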
Abstract: In this paper, the Tabu search algorithm is used to solve a transportation problem which consists of determining the shortest routes with the appropriate vehicle capacity to facilitate the travel of the students attending the University of Mauritius. The aim of this work is to minimize the total cost of the distance travelled by the vehicles in serving all the customers. An initial solution is obtained by the TOUR algorithm, which constructs a giant tour containing all the customers and partitions it in an optimal way so as to produce a set of feasible routes. The Tabu search algorithm then makes use of a search procedure, a swapping procedure and intensification and diversification mechanisms to find the best set of feasible routes.
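The swap-based tabu search described above can be sketched on a toy Euclidean instance. The identity ordering stands in for the TOUR initial solution, the neighborhood is all pairwise customer swaps, and a short tabu tenure with an aspiration criterion (a tabu move is allowed if it beats the global best) provides the intensification/diversification balance. The instance is hypothetical, not the Mauritius data.

```python
import math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def tabu_search(pts, iters=50, tenure=5):
    tour = list(range(len(pts)))          # stand-in for the TOUR initial solution
    best, best_len = tour[:], tour_length(tour, pts)
    tabu = {}                             # move -> iteration until which it stays tabu
    for it in range(iters):
        move, cand, cand_len = None, None, float("inf")
        for i in range(len(tour)):
            for j in range(i + 1, len(tour)):
                nxt = tour[:]
                nxt[i], nxt[j] = nxt[j], nxt[i]      # swap two customers
                length = tour_length(nxt, pts)
                # skip tabu moves unless they beat the global best (aspiration)
                if tabu.get((i, j), -1) >= it and length >= best_len:
                    continue
                if length < cand_len:
                    move, cand, cand_len = (i, j), nxt, length
        if cand is None:
            break
        tour = cand                       # always move, even uphill
        tabu[move] = it + tenure
        if cand_len < best_len:
            best, best_len = cand[:], cand_len
    return best, best_len

# four customers at the corners of a unit square, deliberately mis-ordered
pts = [(0.0, 0.0), (1.0, 1.0), (1.0, 0.0), (0.0, 1.0)]
best_tour, best_len = tabu_search(pts)
```

Because the search always moves to the best admissible neighbor, it escapes local optima while the recorded best solution is never lost.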
Abstract: In this paper a nonlinear model is presented to demonstrate the relation between the production and marketing departments. By introducing functions such as pricing cost and market-share loss, we attempt to show some aspects of market modelling that have not been considered before. The proposed model is a constrained signomial geometric programming model. To solve the model, after suitable variable modifications, an iterative technique based on the concept of the geometric mean is introduced for the resulting non-standard posynomial model; the technique can be applied to a wide variety of models in non-standard posynomial geometric programming form. Finally, a numerical analysis is presented to validate the proposed model.
Abstract: Clustering techniques have been used by many intelligent software agents to group similar access patterns of Web users into high-level themes which express users' intentions and interests. However, such techniques have mostly focused on one salient feature of the Web documents visited by the user, namely the extracted keywords. The major aim of these techniques is to come up with an optimal threshold for the number of keywords needed to produce more focused themes. In this paper we focus on both keyword and similarity thresholds to generate more concentrated themes, and hence build a sounder model of user behavior. The purpose of this paper is twofold: to use distance-based clustering methods to recognize overall themes from the proxy log file, and to suggest efficient cut-off levels for the keyword and similarity thresholds which tend to produce clusters with better focus and more efficient size.
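The interplay of the two thresholds can be sketched with a leader-style clustering over cosine similarity: each session is reduced to its top-k keywords (the keyword threshold) and joined to the first cluster whose leader it resembles at least as much as the similarity threshold. The sessions and threshold values below are hypothetical, and this simple single-pass scheme is only a stand-in for the paper's distance-based methods.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two keyword-count dictionaries."""
    num = sum(a[k] * b.get(k, 0) for k in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def cluster_sessions(sessions, top_k=3, sim_threshold=0.5):
    # keyword threshold: keep only the top_k keywords per session
    docs = [dict(Counter(s).most_common(top_k)) for s in sessions]
    clusters = []                      # each cluster is a list of session indices
    for i, d in enumerate(docs):
        for cl in clusters:
            # similarity threshold against the cluster leader (first member)
            if cosine(docs[cl[0]], d) >= sim_threshold:
                cl.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

Raising `sim_threshold` yields more, tighter themes; raising `top_k` admits noisier keywords, which is exactly the trade-off the paper's cut-off levels address.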
Abstract: Despite the extensive use of eLearning systems, there is no consensus on a standard framework for evaluating the quality of such systems. Hence, there is only a minimal set of tools that can support this judgment and give information about the value of course content. This paper presents two kinds of quality evaluation indicators for eLearning courses based on the computation of three well-known metrics: the Euclidean, Hamming and Levenshtein distances. The “distance” calculus is applied to standard evaluation templates (i.e. the European Commission Programme procedures vs. the AFNOR Z 76-001 Standard), determining a reference point in the evaluation of e-learning course quality vs. the optimal concept(s). The case study, based on the results of project(s) developed in the framework of the European Programme “Leonardo da Vinci” with Romanian contractors, attempts to demonstrate the benefits of such a method.
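The three metrics named above can be sketched as follows; how the evaluation templates are encoded as numeric vectors or strings is an assumption of this sketch, not taken from the paper.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two numeric vectors."""
    return math.dist(a, b)

def hamming(a, b):
    """Number of positions at which two equal-length sequences differ."""
    if len(a) != len(b):
        raise ValueError("equal-length sequences required")
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]
```

Hamming applies only to templates of identical length, while Levenshtein also tolerates missing or extra items, which motivates computing all three.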
Abstract: The batch nature of standard kernel principal component analysis (KPCA) limits its use in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for the online extraction of kernel principal components (KPC). The contribution of this paper is twofold. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and captures the KPC variation caused by new data. The proposed method not only alleviates the sub-optimality of KPCA for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in both computational speed and approximation accuracy.
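The recursion underlying such online schemes can be illustrated in the plain (non-kernel) case: when a new point arrives, the sample mean and covariance are updated in place rather than recomputed from the whole batch. The kernelized method applies the same idea in feature space; the code below is a simplified linear sketch, not the paper's algorithm.

```python
def update_cov(C, mean, n, x):
    """Rank-one recursive update of the population mean and covariance
    after observing a new d-dimensional point x (n = count so far)."""
    d = [xi - m for xi, m in zip(x, mean)]               # deviation from old mean
    n1 = n + 1
    new_mean = [m + di / n1 for m, di in zip(mean, d)]
    # C_{n+1} = n/(n+1) * C_n + n/(n+1)^2 * d d^T
    new_C = [[(n / n1) * C[i][j] + (n / n1 ** 2) * d[i] * d[j]
              for j in range(len(x))] for i in range(len(x))]
    return new_C, new_mean, n1
```

Each update costs O(d^2) and needs no stored history, which is the constant-speed, constant-memory property the abstract claims for the kernel version.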
Abstract: Based on the global kinetics of the direct dimethyl ether (DME) synthesis process from syngas, a steady-state one-dimensional mathematical model for the bubble column slurry reactor (BCSR) has been established. It was built on the assumptions of plug flow of the gas phase, a sedimentation-dispersion model for the catalyst grains, and isothermal operation neglecting the heats of reaction, for the design of an industrial-scale bubble column slurry reactor. The simulation results indicate that higher pressure and lower temperature are favorable to increasing CO conversion, DME selectivity, product yield and the height of the slurry bed, which is consistent with the characteristics of the DME synthesis reaction system, and that the height of the slurry bed decreases as the operating temperature increases in the range of 220-260 °C. Based on the simulated CO conversion, optimal operating conditions for the BCSR were proposed.
Abstract: Computing and maintaining network structures for efficient
data aggregation incurs high overhead for dynamic events
where the set of nodes sensing an event changes with time. Moreover,
structured approaches are sensitive to the waiting time that is used
by nodes to wait for packets from their children before forwarding
the packet to the sink. An optimal routing and data aggregation
scheme for wireless sensor networks is proposed in this paper. We
propose Tree on DAG (ToD), a semi-structured approach that uses Dynamic
Forwarding on an implicitly constructed structure composed of multiple
shortest-path trees to support network scalability. The key principle behind
ToD is that adjacent nodes in a graph will have low stretch in at least one
of these trees, thus resulting in early aggregation of packets. Based on
simulations of a 2,000-node Mica2-based network, we conclude that efficient
aggregation in large-scale networks can be achieved by our semi-structured
approach.
Abstract: Coagulation of water involves the use of coagulating agents to bring the suspended matter in the raw water together for the settling and filtration stages. The present study examines the effects of aluminum sulfate as coagulant, in conjunction with Moringa oleifera coagulant protein as coagulant aid, on turbidity, hardness, and bacteria in turbid water. A conventional jar test apparatus was employed for the tests. The best removal was observed at a pH of 7 to 7.5 for all turbidities. Turbidity removal efficiencies between 80% and 99% were obtained with Moringa oleifera coagulant protein as coagulant aid. The dosages of coagulant and coagulant aid decreased with increasing turbidity. In addition, Moringa oleifera coagulant protein significantly reduced the required dosage of primary coagulant. Residual Al3+ in the treated water was less than 0.2 mg/L, meeting the Environmental Protection Agency guidelines. The results showed that a turbidity reduction of 85.9-98%, paralleled by a primary Escherichia coli reduction of 1-3 log units (99.2-99.97%), was obtained within the first 1 to 2 h of treatment. In conclusion, Moringa oleifera coagulant protein as coagulant aid can be used for drinking water treatment without the risk of organic or nutrient release. We demonstrated that the optimal design method is an efficient approach for the optimization of the coagulation-flocculation process and is appropriate for raw water treatment.
Abstract: The Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques playing an increasing role in detection problems across various engineering fields, notably statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of the SVM, in terms of the bit error rate (BER) metric, are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to classical maximum-likelihood detection of binary signals using a matched filter driven by On-Off Keying (OOK) modulation. We found that the performance of the SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
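The classical baseline mentioned above can be sketched as a Monte-Carlo BER estimate for threshold detection of OOK in AWGN. The pulse amplitude and the SNR convention used here are assumptions of this sketch, not the paper's setup, and the SVM detector itself is not reproduced.

```python
import random

def ook_ber(snr_db, n_bits=20000, seed=1):
    """Monte-Carlo BER of matched-filter (threshold) detection of OOK in AWGN."""
    rng = random.Random(seed)
    amp = 1.0                                # "on" pulse amplitude (assumed)
    snr = 10 ** (snr_db / 10)
    sigma = amp / (2 * snr) ** 0.5           # assumed convention: SNR = amp^2 / (2 sigma^2)
    errors = 0
    for _ in range(n_bits):
        bit = rng.random() < 0.5             # equiprobable 0/1 source
        r = (amp if bit else 0.0) + rng.gauss(0.0, sigma)
        errors += (r > amp / 2) != bit       # ML threshold halfway between levels
    return errors / n_bits
```

With equiprobable bits and symmetric Gaussian noise, the midpoint threshold is the maximum-likelihood rule; at low SNR the error floor of this detector is what leaves room for the SVM's reported gains.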
Abstract: In the context of sensor networks, where every few dB of savings counts, novel node-cooperation schemes in which MIMO techniques play a leading role are reviewed. These methods can be treated as a joint approach to designing the physical layer of such communication scenarios. We then analyze the BER performance of transmission diversity schemes under a general fading channel model and propose a power allocation strategy for the transmitting sensor nodes. This approach is compared to an equal-power assignment method, and its performance enhancement is verified by simulation. Another key point of the contribution lies in the combination of optimal power allocation and sensor-node cooperation in a transmission diversity regime (MISO). Numerical results are given through figures to demonstrate the optimality and efficiency of the proposed combined approach.
Abstract: The applicability of tuning the controller gains of a Stewart manipulator using a genetic algorithm as an efficient search technique is investigated. Kinematic and dynamic models are introduced in detail for simulation purposes. A PD task-space control scheme is used. To demonstrate the feasibility of the technique, a numerical model of a Stewart manipulator was built. A genetic algorithm was then employed to search for the optimal controller gains. The controller was tested on a generic circular mission. The simulation results show that the technique converges rapidly and achieves superior performance for different payloads.
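GA-based PD gain tuning can be sketched on a toy plant: a point mass tracking a step reference, with the integrated absolute error as the fitness. The plant, cost function, gain ranges and GA operators below are illustrative assumptions; the actual Stewart-platform dynamics are far richer.

```python
import random

def step_cost(kp, kd, m=1.0, ref=1.0, dt=0.01, steps=500):
    """Integrated absolute tracking error of a PD-controlled point mass (toy plant)."""
    x = v = 0.0
    cost = 0.0
    for _ in range(steps):
        u = kp * (ref - x) - kd * v          # PD law in task space
        v += (u / m) * dt                    # explicit Euler integration
        x += v * dt
        cost += abs(ref - x) * dt
    return cost

def ga_tune(pop_size=20, gens=30, seed=0):
    """Search (kp, kd) minimizing step_cost via a simple genetic algorithm."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 50.0), rng.uniform(0.0, 20.0)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: step_cost(*g))
        parents = pop[: pop_size // 2]       # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            kp = 0.5 * (a[0] + b[0]) + rng.gauss(0.0, 1.0)   # blend crossover + mutation
            kd = 0.5 * (a[1] + b[1]) + rng.gauss(0.0, 0.5)
            children.append((max(kp, 0.0), max(kd, 0.0)))
        pop = parents + children
    return min(pop, key=lambda g: step_cost(*g))

best_gains = ga_tune()
```

Because the fitness only requires simulating the closed loop, the same scheme transfers directly to a numerical manipulator model with a circular reference trajectory.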
Abstract: The use of the oncologic index ISTER allows for a more effective planning of radiotherapy facilities in hospitals. Any change in the radiotherapy treatment due to unexpected stops may be accommodated by recalculating the doses for the new treatment duration while keeping the optimal prognosis. The results obtained in a simulation model on millions of patients allow the definition of optimal success-probability algorithms.
Abstract: To improve the classification rate of face recognition, a feature combination and a novel non-linear kernel are proposed. The feature vector concatenates local binary patterns at three different radii and Gabor wavelet features. The Gabor features are the mean, standard deviation and skew of each scaling and orientation parameter. The aim of the new kernel is to combine the power of kernel methods with an optimal balance between the features. To verify the effectiveness of the proposed method, numerous methods are tested using four datasets, which consist of various emotions, orientations, configurations, expressions and lighting conditions. Empirical results show the superiority of the proposed technique when compared to other methods.
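The local binary pattern component of such a feature vector can be sketched for the basic radius-1, 8-neighbor case; the multi-radius variant described above samples neighbors on larger circles but follows the same thresholding idea.

```python
def lbp_code(img, r, c):
    """8-neighbor local binary pattern code of pixel (r, c) in a 2-D grayscale
    image given as a list of lists; interior pixels only."""
    center = img[r][c]
    # clockwise neighbor offsets starting at the top-left pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offs):
        if img[r + dr][c + dc] >= center:    # threshold neighbor against center
            code |= 1 << bit
    return code
```

Histograms of these codes over image regions form the texture descriptor that is concatenated with the Gabor statistics.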
Abstract: An advanced composite flywheel rotor consisting of intra- and inter-hybrid rims was designed to optimally increase the energy capacity, and was manufactured using filament winding with in-situ curing. The flywheel has recently attracted considerable attention from many investigators since it possesses great potential in many energy storage applications, including electric utilities, hybrid or electric automobiles, and space vehicles. In this investigation, a comprehensive study was conducted with the intent to implement composites in high-performance flywheel applications. The inner two intra-hybrid rims (rims 1 and 2) were manufactured as a whole part through continuous filament winding under in-situ curing conditions, and so were the outer two rims (rims 3 and 4). The outer surface of rim 2 and the inner surface of rim 3 were CNC-tapered for press-fitting. The machined rims were finally press-fitted using a hydraulic press with a maximum compressive force of approximately 1,000 tons.
Abstract: NFκB activation plays a crucial role in anti-apoptotic responses to apoptotic signaling during tumor necrosis factor (TNFα) stimulation in Multiple Myeloma (MM). Although several drugs have been found effective for the treatment of MM, mainly by inhibiting the NFκB pathway, there are no quantitative or qualitative comparative assessments of the inhibition effects of different single drugs or drug combinations. Computational modeling is becoming increasingly indispensable for applied biological research, mainly because it can provide strong quantitative predictive power. In this study, a novel computational pathway modeling approach is employed to comparatively assess the inhibition effects of specific single drugs and drug combinations on the NFκB pathway in MM, and especially to predict synergistic drug combinations.
Abstract: This paper presents two simplified models to determine nodal voltages in power distribution networks. These models allow estimating the impact of installing reactive power compensation equipment such as fixed or switched capacitor banks. The procedure used to develop the models is similar to that used to develop linear power flow models of transmission lines, which have been widely used in optimization problems of operation planning and system expansion. The steady-state non-linear load flow equations are approximated by linear equations relating the voltage amplitudes and currents. The approximations of the linear equations are based on the high ratio of line resistance to line reactance (R/X), which is typical of power distribution networks. The performance and accuracy of the models are evaluated through comparisons with the exact results obtained from the solution of the load flow on two test networks: a hypothetical network with 23 nodes and a real network with 217 nodes.
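The kind of linearized drop calculation such models rely on can be sketched for a radial feeder: with nominal voltage 1 p.u., the drop over each section is approximated by R*P + X*Q for the power flowing through it. The per-unit values below are hypothetical, and this sketch does not reproduce the paper's two models exactly.

```python
def nodal_voltages(v0, lines, loads):
    """Approximate per-unit voltages along a radial feeder using the linearized
    drop dV ~= R*P + X*Q (V_nominal = 1 p.u., high-R/X network assumed).
    lines: (R, X) of each section; loads: (P, Q) drawn at the node ending
    each section."""
    v = [v0]
    for k, (r, x) in enumerate(lines):
        p = sum(pl for pl, _ in loads[k:])   # active power flowing through section k
        q = sum(ql for _, ql in loads[k:])   # reactive power through section k
        v.append(v[-1] - (r * p + x * q))
    return v
```

Replacing part of Q at a node with a capacitor bank's injection directly raises every downstream voltage in this formula, which is how the compensation impact can be estimated without re-solving the non-linear load flow.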
Abstract: In some real applications of Statistical Process Control it is necessary to design a control chart that does not react to small process shifts but maintains good performance in detecting moderate and large shifts in quality. In this work we develop a new quality control chart, the synthetic T2 control chart, that can be designed to meet this objective. A multi-objective optimization is carried out employing Genetic Algorithms, finding the Pareto-optimal front of non-dominated solutions for this optimization problem.