Abstract: The classification of Persian printed numeral characters is considered and a proposed system is introduced. In the representation stage, for the first time in Persian optical character recognition, extended moment invariants are utilized as the character-image descriptor. In the classification stage, four different classifiers are used, namely the minimum mean distance, the nearest neighbor rule, the multilayer perceptron, and the fuzzy min-max neural network. The first and second are traditional nonparametric statistical classifiers, the third is a well-known neural network, and the fourth is a kind of fuzzy neural network based on hyperbox fuzzy sets. A set of different experiments was carried out and a variety of results is presented. The results show that extended moment invariants are qualified as features for classifying Persian printed numeral characters.
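As an illustrative sketch (not the paper's exact feature set), the first few classical Hu moment invariants, the family on which extended moment invariants build, can be computed from a binary digit image as follows; the image arrays are hypothetical:

```python
import numpy as np

def hu_moments(img):
    """First four Hu moment invariants of a 2-D intensity image.

    Classical Hu invariants are shown as a stand-in; the paper's
    "extended" invariants add further combinations."""
    y, x = np.mgrid[: img.shape[0], : img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):  # normalized central moment (translation/scale invariant)
        mu = (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    return np.array([h1, h2, h3, h4])
```

Because the invariants depend only on moments taken about the image centroid, a translated copy of the same glyph yields the same feature vector.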
Abstract: It has been shown that a load discontinuity at the end of
an impulse will result in an extra impulse and hence an extra amplitude
distortion if a step-by-step integration method is employed to yield the
shock response. In order to overcome this difficulty, three remedies
are proposed to reduce the extra amplitude distortion. The first remedy
is to solve the momentum equation of motion instead of the force
equation of motion in the step-by-step solution of the shock response,
where an external momentum is used in the solution of the momentum
equation of motion. Since the external momentum is a resultant of the
time integration of external force, the problem of load discontinuity
will automatically disappear. The second remedy is to perform a single
small time step immediately upon termination of the applied impulse
while the other time steps can still be conducted by using the time step
determined from general considerations. This is because the extra
impulse caused by a load discontinuity at the end of an impulse is
almost linearly proportional to the step size. Finally, the third remedy
is to use the average value of the two different values at the integration
point of the load discontinuity to replace the use of one of them for
loading input. The basic motivation of this remedy originates from the
concept of no loading input error associated with the integration point
of load discontinuity. The feasibility of the three remedies is
analytically explained and numerically illustrated.
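A sketch of the idea behind the first remedy, in standard SDOF notation (mass m, damping c, stiffness k, displacement u, applied force F; these symbols are assumed here, not taken from the paper, and zero initial conditions are assumed): integrating the force equation of motion once in time gives a momentum form whose right-hand side is continuous even when F jumps.

```latex
m\ddot{u}(t) + c\dot{u}(t) + k\,u(t) = F(t)
\quad\Longrightarrow\quad
m\dot{u}(t) + c\,u(t) + k\!\int_0^t u(\tau)\,d\tau
  = \int_0^t F(\tau)\,d\tau \equiv p(t)
```

The external momentum p(t) is the running time integral of the force, so a finite jump in F produces no jump in p, and the load-discontinuity problem disappears automatically.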
Abstract: In this paper, a new face recognition method based on
PCA (Principal Component Analysis), LDA (Linear Discriminant
Analysis) and neural networks is proposed. This method consists of
four steps: i) preprocessing, ii) dimension reduction using PCA, iii) feature extraction using LDA, and iv) classification using a neural network. The combination of PCA and LDA is used to improve the capability of LDA when only a few image samples are available, and the neural classifier is used to reduce the number of misclassifications caused by non-linearly separable classes. The proposed method was tested on the Yale face database. Experimental results on this database
demonstrated the effectiveness of the proposed method for face
recognition with less misclassification in comparison with previous
methods.
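The dimension-reduction and feature-extraction steps can be sketched with plain NumPy (a hedged illustration: a nearest-centroid rule stands in for the neural classifier, and all data in the usage below are synthetic, not faces from the Yale database):

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:k].T

def lda(X, y, k):
    """Fisher LDA: project X onto the k most discriminant directions."""
    mu = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)          # within-class scatter
        d = (mc - mu)[:, None]
        Sb += len(Xc) * (d @ d.T)              # between-class scatter
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    W = evecs[:, np.argsort(-evals.real)[:k]].real
    return X @ W

def nearest_centroid(Z, y, Zq):
    """Assign each query point to the class with the nearest mean."""
    classes = np.unique(y)
    cents = np.array([Z[y == c].mean(axis=0) for c in classes])
    dist = np.linalg.norm(Zq[:, None] - cents[None], axis=-1)
    return classes[np.argmin(dist, axis=1)]
```

Applying PCA before LDA keeps the within-class scatter matrix well-conditioned when few samples are available, which is the motivation the abstract gives for the combination.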
Abstract: Among various testing methodologies, Built-in Self-
Test (BIST) is recognized as a low cost, effective paradigm. Also,
full adders are one of the basic building blocks of most arithmetic
circuits in all processing units. In this paper, an optimized testable 2-bit full adder as a test building block is proposed. Then, a BIST procedure is introduced to scale up the building block and to generate self-testable n-bit full adders. The target design achieves 100% fault coverage using an insignificant amount of hardware redundancy. Moreover, the overall test time is reduced by utilizing polymorphic gates and by testing the full-adder building blocks in parallel.
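To make the 100% fault-coverage claim concrete, a minimal single-stuck-at fault simulation of a gate-level 1-bit full adder can be sketched as follows (a standard textbook netlist, not the paper's optimized 2-bit cell):

```python
from itertools import product

# Gate-level 1-bit full adder netlist: (output net, gate type, inputs).
NETLIST = [
    ("t1", "xor", ("a", "b")),
    ("sum", "xor", ("t1", "cin")),
    ("t2", "and", ("a", "b")),
    ("t3", "and", ("t1", "cin")),
    ("cout", "or", ("t2", "t3")),
]
GATES = {"xor": lambda x, y: x ^ y, "and": lambda x, y: x & y, "or": lambda x, y: x | y}
NETS = ["a", "b", "cin"] + [net for net, _, _ in NETLIST]

def evaluate(a, b, cin, fault=None):
    """Evaluate the adder; fault=(net, stuck_value) forces one net."""
    v = {"a": a, "b": b, "cin": cin}
    if fault and fault[0] in v:
        v[fault[0]] = fault[1]
    for net, gate, (x, y) in NETLIST:
        v[net] = GATES[gate](v[x], v[y])
        if fault and fault[0] == net:
            v[net] = fault[1]
    return v["sum"], v["cout"]

def undetected_faults():
    """Single stuck-at faults not exposed by the 8 exhaustive input patterns."""
    missed = []
    for net, sv in product(NETS, (0, 1)):
        if all(evaluate(a, b, c, (net, sv)) == evaluate(a, b, c)
               for a, b, c in product((0, 1), repeat=3)):
            missed.append((net, sv))
    return missed
```

Because this netlist is irredundant, exhaustively applying all eight input patterns detects every single stuck-at fault, which is the property a BIST pattern generator for the adder block exploits.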
Abstract: Carbon Capture & Storage (CCS) is one of the various
methods that can be used to reduce the carbon footprint of the
energy sector. This paper focuses on the absorption of CO2 from
flue gas using packed columns, whose efficiency is highly dependent
on the structure of the liquid films within the column. To study the
characteristics of liquid films, the CFD solver OpenFOAM is utilised
to solve two-phase, isothermal film flow using the volume-of-fluid
(VOF) method. The model was validated using existing experimental
data and the Nusselt theory. It was found that smaller plate inclination
angles, with respect to the horizontal plane, resulted in larger wetted
areas on smooth plates. However, only a slight improvement in
the wetted area was observed. Simulations were also performed
using a ridged plate and it was observed that these surface textures
significantly increase the wetted area of the plate. This was mainly
attributed to the channelling effect of the ridges, which helped to
oppose the surface tension forces trying to minimise the surface area.
Rivulet formations on the ridged plate were also flattened out and
spread across a larger proportion of the plate width.
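For reference, the flat-film Nusselt solution used for validation relates film thickness to the flow rate per unit plate width; in standard notation (Γ mass flow rate per unit width, μ dynamic viscosity, ρ density, g gravitational acceleration, β plate inclination from the horizontal; symbols assumed here, not the paper's):

```latex
\delta_{\mathrm{Nusselt}} = \left( \frac{3\,\mu\,\Gamma}{\rho^{2}\,g\,\sin\beta} \right)^{1/3}
```

Smaller inclination angles β thicken the laminar film for a given flow rate, consistent with the observed influence of plate inclination on the wetted area.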
Abstract: Camera calibration is an important step in 3D reconstruction. Camera calibration methods may be classified into two major types: traditional calibration and self-calibration; calibration using a checkerboard is intermediate between the two. In this paper, a self-calibration method based on a single square is proposed. With only a square on a planar template, camera self-calibration can be completed from a single view. The proposed algorithm constructs a virtual circle and straight lines from the square on the planar template, and uses the circular points, the vanishing points of the straight lines, and the relations between them to obtain the image of the absolute conic (IAC) and thereby establish the camera intrinsic parameters. The calibration template is simpler than that of Zhang Zhengyou's method. Real experiments show that the algorithm is feasible and effective, with a certain precision and robustness.
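A minimal sketch of the underlying relations (standard multi-view geometry, not the paper's specific construction): the IAC is ω = (K·Kᵀ)⁻¹, vanishing points of orthogonal directions are conjugate with respect to ω, and K can be recovered from ω by a Cholesky factorization. The numeric intrinsics below are hypothetical:

```python
import numpy as np

# A hypothetical intrinsic matrix (focal lengths and principal point).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 780.0, 240.0],
              [  0.0,   0.0,   1.0]])
omega = np.linalg.inv(K @ K.T)        # image of the absolute conic (IAC)

# Vanishing points v_i = K r_i of orthogonal 3-D directions r_i are
# conjugate w.r.t. omega: v1^T omega v2 = 0.
r1 = np.array([1.0, 0.0, 0.0])
r2 = np.array([0.0, 1.0, 0.0])
v1, v2 = K @ r1, K @ r2

# Recover K from omega: omega = K^{-T} K^{-1}, so chol(omega) = K^{-T}.
L = np.linalg.cholesky(omega)
K_rec = np.linalg.inv(L).T
K_rec /= K_rec[2, 2]                  # fix the projective scale
```

Once enough conjugacy constraints from circular points and vanishing points determine ω, the final Cholesky step yields the intrinsic parameters.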
Abstract: The job shop scheduling problem (JSSP) is a
notoriously difficult problem in combinatorial optimization. This
paper presents a hybrid artificial immune system for the JSSP with the
objective of minimizing makespan. The proposed approach combines
the artificial immune system, which has a powerful global exploration
capability, with the local search method, which can exploit the optimal
antibody. The antibody coding scheme is based on the operation-based
representation. The decoding procedure limits the search space to the
set of full active schedules. In each generation, a local search heuristic
based on the neighborhood structure proposed by Nowicki and
Smutnicki is applied to improve the solutions. The approach is tested
on 43 benchmark problems taken from the literature and compared
with other approaches. The computational results validate the
effectiveness of the proposed algorithm.
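The operation-based representation can be sketched with a simple decoder (a semi-active decoder on a hypothetical 2-job instance; the paper's decoder additionally restricts the search to full active schedules):

```python
from collections import defaultdict

def decode_makespan(chromosome, jobs):
    """Decode an operation-based chromosome into a semi-active schedule.

    chromosome: job ids, each appearing once per operation of that job;
    jobs[j]: list of (machine, processing_time) in technological order."""
    next_op = defaultdict(int)       # next operation index per job
    job_ready = defaultdict(float)   # completion time of a job's last op
    mach_free = defaultdict(float)   # release time of each machine
    for j in chromosome:
        machine, ptime = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_free[machine])
        job_ready[j] = mach_free[machine] = start + ptime
        next_op[j] += 1
    return max(job_ready.values())
```

Every permutation of the repeated job ids decodes to a feasible schedule, which is why this representation is popular for antibody/chromosome coding in JSSP metaheuristics.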
Abstract: In this paper, a combined approach of two heuristic-based algorithms, the genetic algorithm and tabu search, is proposed. It has been developed to obtain the least cost for the split-pipe design of looped water distribution networks. The proposed combined algorithm has been applied to three well-known water distribution networks taken from the literature. The combination of these two heuristic-based algorithms aims to enhance their strengths and compensate for their weaknesses. Tabu search is rather systematic and deterministic, using adaptive memory in the search process, while the genetic algorithm is a probabilistic and stochastic optimization technique in which the solution space is explored by generating candidate solutions. Split-pipe design may not be realistic in practice, but for optimization purposes optimal solutions are always achieved with split-pipe design. The solutions obtained in this study show that the least-cost solutions obtained from the split-pipe design are always better than those obtained from the single-pipe design. The results obtained from the combined approach show its ability and effectiveness in solving combinatorial optimization problems. The solutions obtained are very satisfactory and of high quality; the solutions for two of the networks are the lowest-cost solutions yet presented in the literature. The combined approach proposed in this study is expected to be useful for diverse problems.
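Why split-pipe designs dominate single-pipe designs can be illustrated on a single link (hypothetical unit costs and hydraulic gradients, not data from the paper): with two candidate diameters and a head-loss budget, the cheapest feasible design splits the link so the head-loss constraint is exactly binding.

```python
def split_pipe_cost(L, h_max, j1, c1, j2, c2):
    """Least cost of a link of length L with head-loss budget h_max,
    split between a cheap high-gradient pipe (head-loss gradient j1,
    unit cost c1) and a costly low-gradient pipe (j2, c2), with
    j1 > j2 and c1 < c2."""
    if j1 * L <= h_max:          # the cheap pipe alone is feasible
        return c1 * L
    if j2 * L > h_max:           # even the big pipe cannot meet the budget
        return float("inf")
    x2 = (j1 * L - h_max) / (j1 - j2)   # length assigned to the larger pipe
    return c1 * (L - x2) + c2 * x2
```

Any single-diameter choice is either infeasible or pays the higher unit cost over the whole length, so the split solution's cost is a lower bound on the single-pipe cost.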
Abstract: The Indian subcontinent is facing a massive challenge with regard to energy security in its member countries, i.e. providing a reliable source of electricity to facilitate development across various sectors of the economy and thereby achieve the developmental targets it has set for itself. A highly precarious situation exists in the subcontinent, as observed in the series of system failures which often lead to system collapses (blackouts). To mitigate the issues related to energy security, as well as to keep the increasing supply-demand gap in check, a possible solution for the subcontinent is the deployment of an interconnected electricity 'Supergrid' designed to carry huge quanta of power across the subcontinent as well as to provide the infrastructure for RES integration. This paper assesses the need and conditions for Supergrid deployment and consequently proposes a meshed topology based on VSC-HVDC converters for the Supergrid model.
Abstract: Massive use of places with strong tourist attraction
with the consequent possibility of losing place-identity produces
harmful effects on cities and their users. In order to mitigate this risk,
areas close to such places can be identified so as to widen the
visitor's range of action and offer alternative activities integrated
with the main site. The cultural places and appropriate activities can
be identified using a method of analysis and design able to trace the
identity of the places, their characteristics and potential, and to
provide a sustainable improvement. The aim of this work is to
propose PlaceMaker as a method of urban analysis and design which
both detects elements that do not feature in traditional mapping and
which constitute the contemporary identity of the places, and
identifies appropriate project interventions. Two final complex maps
– the first of analysis and the second of design – respectively
represent the identity of places and project interventions. In order to
illustrate the method's potential, the results of the experimentation
carried out along the Trevi-Pantheon route in Rome and the appropriate
interventions to decongest the area are illustrated.
Abstract: In this paper, a new model predictive PID controller
design method for the slip suppression control of EVs (electric
vehicles) is proposed. The proposed method aims to improve the
maneuverability and the stability of EVs by controlling the wheel
slip ratio. The optimal control gains of the PID framework are derived
by the model predictive control (MPC) algorithm. Numerical simulation
results are also included to demonstrate the effectiveness of the
method.
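For reference, a common definition of the wheel slip ratio being controlled (standard notation, assumed here rather than taken from the paper: wheel radius r, wheel angular velocity ω, vehicle longitudinal velocity V):

```latex
\lambda = \frac{r\omega - V}{\max(r\omega,\, V)}
```

Keeping λ near the peak of the tire friction curve is what preserves traction, and hence the maneuverability and stability the controller targets.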
Abstract: During the last decades, the development of multi-objective evolutionary algorithms for optimization problems has attracted considerable attention. The flexible job shop scheduling problem, as an important scheduling optimization problem, has received this attention as well. However, most of the multi-objective algorithms developed for this problem use ad hoc approaches: they combine their objectives and then solve the multi-objective problem through single-objective techniques, with the exception of a few studies that use Pareto-based algorithms. Therefore, in this paper, a new Pareto-based algorithm called the controlled elitism non-dominated sorting genetic algorithm (CENSGA) is proposed for the multi-objective FJSP (MOFJSP). The considered objectives are makespan, critical machine workload, and total workload of machines. The proposed algorithm is statistically compared with one of the best Pareto-based algorithms in the literature on several multi-objective criteria.
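The non-dominated sorting at the core of NSGA-style algorithms can be sketched as follows (the controlled-elitism selection itself is not shown; the objective vectors in the usage are hypothetical minimization points, e.g. makespan, critical machine workload, total workload):

```python
def dominates(p, q):
    """p dominates q (minimization): no worse in every objective, better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    """Partition objective vectors into successive Pareto fronts (index lists)."""
    remaining = set(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts
```

Controlled elitism then limits how many individuals each front may contribute to the next generation, preserving diversity across fronts rather than filling the population from the first front alone.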
Abstract: The search for factors that influence user behavior has remained an important theme for both the academic and practitioner Information Systems communities. In this paper, we examine relevant user behaviors in the phase after adoption and investigate two factors that are expected to influence such behaviors, namely User Involvement (UI) and Personal Innovativeness in IT (PIIT). We conduct a field study to examine how these factors influence post-adoption behavior and how they are interrelated. Building on theoretical premises and prior empirical findings, we propose and test two alternative models of the relationship between these factors. Our results reveal that the best explanation of post-adoption behavior is provided by the model in which UI and PIIT independently influence post-adoption behavior. Our findings have important implications for research and practice, and we offer directions for future research.
Abstract: In this paper, a target signal detection method using
multiple signal classification (MUSIC) algorithm is proposed. The
MUSIC algorithm is a subspace-based direction of arrival (DOA)
estimation method. The algorithm detects the DOAs of multiple
sources using the inverse of the eigenvalue-weighted eigenspectra. To
apply the algorithm to target signal detection for GSC-based
beamforming, we utilize its spectral response for the target DOA in
noisy conditions. For evaluation of the algorithm, the performance of
the proposed target signal detection method is compared with that of
the normalized cross-correlation (NCC), the fixed beamforming, and
the power ratio method. Experimental results show that the proposed
algorithm significantly outperforms the conventional ones in terms of
receiver operating characteristic (ROC) curves.
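A minimal sketch of the MUSIC pseudospectrum for a half-wavelength uniform linear array (the GSC integration is not shown, and the scenario values below are hypothetical):

```python
import numpy as np

def music_spectrum(X, n_sources, grid_deg):
    """MUSIC pseudospectrum over candidate DOAs for a half-wavelength ULA.

    X: (n_sensors, n_snapshots) complex snapshot matrix."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    _, vecs = np.linalg.eigh(R)              # eigenvalues ascending
    En = vecs[:, : m - n_sources]            # noise-subspace eigenvectors
    thetas = np.deg2rad(np.asarray(grid_deg))
    # Steering matrix: a(theta)_k = exp(-j*pi*k*sin(theta))
    A = np.exp(-1j * np.pi * np.outer(np.arange(m), np.sin(thetas)))
    return 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2

# Hypothetical scenario: one source at 20 degrees, 8 sensors, light noise.
rng = np.random.default_rng(0)
m, n_snap, theta0 = 8, 200, np.deg2rad(20.0)
a0 = np.exp(-1j * np.pi * np.arange(m) * np.sin(theta0))
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.05 * (rng.standard_normal((m, n_snap))
                + 1j * rng.standard_normal((m, n_snap)))
X = np.outer(a0, s) + noise

grid = np.arange(-90.0, 90.5, 0.5)
theta_hat = grid[np.argmax(music_spectrum(X, 1, grid))]
```

The pseudospectrum peaks where the steering vector is nearly orthogonal to the noise subspace; for target signal detection, the spectral response at the target DOA serves as the detection statistic.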
Abstract: This article proposes a current-mode square-rooting circuit using a current follower transconductance amplifier (CFTA). The amplitude of the output current can be electronically controlled via the input bias current, with a wide input dynamic range. The proposed circuit consists of only a single CFTA. Requiring no matching conditions or external passive elements, the circuit is appropriate for an IC architecture. The magnitude of the output signal
appropriate for an IC architecture. The magnitude of the output signal
is temperature-insensitive. The PSpice simulation results are
depicted, and the given results agree well with the theoretical
anticipation. The power consumption is approximately 1.96mW at
±1.5V supply voltages.
Abstract: In this paper, a block code to minimize the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals is proposed. It is shown that cyclic shifts and codeword inversion cause no change to the peak envelope power. The encoding rule for the proposed code comprises searching for a seed codeword, shifting the register elements, and determining codeword inversion, eliminating the look-up table for one-to-one correspondence between the source and the coded data.
Simulation results show that OFDM systems with the proposed code
always have the minimum PAPR.
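The invariance properties underlying the encoding rule can be illustrated with a critically sampled sketch (a hypothetical BPSK codeword; a continuous-envelope PEP calculation would use oversampling): cyclically shifting the frequency-domain codeword only applies a linear phase to the time-domain samples, and inversion only flips the sign, so the PAPR is unchanged.

```python
import numpy as np

def papr_db(codeword):
    """PAPR (dB) of the OFDM symbol generated from one frequency-domain codeword."""
    x = np.fft.ifft(codeword)                # critically sampled OFDM symbol
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

c = np.array([1, -1, -1, 1, 1, 1, -1, 1], dtype=float)   # hypothetical BPSK codeword
```

This is why a whole equivalence class of codewords (all shifts and inversions of a seed) shares one PAPR, so only the seed codewords need to be searched.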
Abstract: Analysis and visualization of microarray data is very helpful for biologists and clinicians in the diagnosis and treatment of patients. It allows clinicians to better understand the structure of microarrays and facilitates understanding of gene expression in cells. However, a microarray dataset is complex, with thousands of features and a very small number of observations. This very high-dimensional data often contains noise, non-useful information, and only a small number of features relevant to disease or genotype. This paper proposes a non-linear dimensionality reduction algorithm, Local Principal Component (LPC), which aims to map high-dimensional data to a lower-dimensional space. The reduced data represents the most important variables underlying the original data. Experimental results and comparisons are presented to show the quality of the proposed algorithm. Moreover, experiments also show how this algorithm reduces high-dimensional data whilst preserving the neighbourhoods of the points in the low-dimensional space as in the high-dimensional space.
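A neighbourhood-preservation check of the kind these experiments describe can be sketched as follows (a minimal illustration, not the paper's LPC algorithm; the synthetic data in the usage are hypothetical):

```python
import numpy as np

def knn_preservation(X_high, X_low, k=5):
    """Fraction of each point's k nearest neighbours that survive the
    mapping from the high- to the low-dimensional space (1.0 = perfect)."""
    def knn(X):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)          # exclude each point itself
        return np.argsort(d, axis=1)[:, :k]
    hi, lo = knn(X_high), knn(X_low)
    return float(np.mean([len(set(h) & set(l)) / k for h, l in zip(hi, lo)]))
```

A score near 1.0 indicates that the embedding keeps nearby points nearby, the property the abstract claims for LPC.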
Abstract: A fast settling multipath CMOS OTA for high speed
switched capacitor applications is presented here. With the basic
topology similar to folded-cascode, bandwidth and DC gain of the
OTA are enhanced by adding extra paths for signal from input to
output. The designed circuit is simulated with HSPICE using level 49
parameters (BSIM 3v3) in a 0.35 μm standard CMOS technology. The DC
gain achieved is 56.7dB and Unity Gain Bandwidth (UGB) obtained
is 1.15GHz. These results confirm that adding extra paths for signal
can improve DC gain and UGB of folded-cascode significantly.
Abstract: This paper proposes a novel approach to lithofacies classification based on an assessment of the uncertainty in the classification results. The proposed approach uses multiple neural networks (NN) and interval neutrosophic sets (INS) to classify the input well-log data into multiple lithofacies classes. A pair of n-class neural networks is used to predict n degrees of truth membership and n degrees of false membership. Indeterminacy memberships, or uncertainties in the predictions, are estimated using a multidimensional interpolation method. These three memberships form the INS used to support confidence in the results of the multiclass classification. On the experimental data, our approach improves classification performance compared to an existing technique applied only to the truth membership. In addition, our approach can provide a measure of uncertainty in the multiclass classification problem.
Abstract: To investigate relations between higher mathematics scores in the Chinese graduate student entrance examination and calculus (resp. linear algebra, probability and statistics) scores in the corresponding course completion examinations at a Chinese university, we select 20 students as a sample, take the higher mathematics score as the decision attribute, and take the calculus, linear algebra, and probability and statistics scores as condition attributes. In this paper, we use rough-set theory (a logic-mathematical method proposed by Z. Pawlak which in recent years has been widely applied in many fields of natural and social science) to investigate the importance of the condition attributes with respect to the decision attribute and the strength with which the condition attributes support the decision attribute. The results of this investigation should help university students to raise their higher mathematics scores in the Chinese graduate student entrance examination.
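The rough-set quantities involved can be sketched on a toy decision table (scores discretized into hypothetical grades, not the paper's data): the dependency degree γ measures how well the condition attributes determine the decision attribute, and an attribute's significance is the drop in γ when it is removed.

```python
from collections import defaultdict

def partition(rows, attrs):
    """Indiscernibility classes induced by a set of condition-attribute indices."""
    blocks = defaultdict(list)
    for i, (cond, _) in enumerate(rows):
        blocks[tuple(cond[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(rows, attrs):
    """gamma(attrs -> decision): fraction of objects in the positive region."""
    pos = 0
    for block in partition(rows, attrs):
        if len({rows[i][1] for i in block}) == 1:  # block has a unique decision
            pos += len(block)
    return pos / len(rows)

def significance(rows, attrs, a):
    """Drop in dependency when attribute a is removed from attrs."""
    return dependency(rows, attrs) - dependency(rows, [b for b in attrs if b != a])
```

In the toy table below, attribute 1 carries all the discriminating power (removing it destroys the dependency) while attribute 0 carries none, which is the kind of conclusion the paper draws about the three course scores.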