Abstract: For over a decade, the Pulse Coupled Neural Network
(PCNN) based algorithms have been successfully used in image
interpretation applications including image segmentation. There are
several versions of the PCNN based image segmentation methods,
and the segmentation accuracy of all of them is very sensitive to the
values of the network parameters. Most methods treat PCNN
parameters such as the linking coefficient and primary firing threshold as
global parameters, and determine them by trial and error. The
automatic determination of appropriate values for the linking coefficient
and the primary firing threshold is a challenging problem that deserves
further research. This paper presents a method for obtaining global as
well as local values for the linking coefficient and the primary firing
threshold for neurons directly from the image statistics. Extensive
simulation results show that the proposed approach achieves
excellent segmentation accuracy comparable to the best accuracy
obtainable by trial-and-error for a variety of images.
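The neuron update that the linking coefficient and firing threshold control can be sketched as follows. This is a minimal, generic simplified-PCNN iteration in Python, not the paper's segmentation method; `beta` stands for the linking coefficient, `theta` for the dynamic firing threshold, and the decay constant `alpha_t` and threshold amplitude `V_t` are illustrative assumptions.

```python
import math

def pcnn_step(S, Y_prev, theta, beta, alpha_t=0.2, V_t=20.0):
    """One iteration of a simplified linking-modulation PCNN.

    S      -- 2-D list of normalized pixel intensities (feeding input)
    Y_prev -- 2-D list of 0/1 neuron firings from the previous iteration
    theta  -- 2-D list of dynamic firing thresholds (updated in place)
    beta   -- linking coefficient
    """
    rows, cols = len(S), len(S[0])
    Y = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # Linking input: sum of 8-neighbour firings from the last pass.
            L = sum(Y_prev[a][b]
                    for a in range(max(0, i - 1), min(rows, i + 2))
                    for b in range(max(0, j - 1), min(cols, j + 2))
                    if (a, b) != (i, j))
            U = S[i][j] * (1.0 + beta * L)       # modulated internal activity
            Y[i][j] = 1 if U > theta[i][j] else 0
            # Threshold decays exponentially, then jumps after a fire.
            theta[i][j] = theta[i][j] * math.exp(-alpha_t) + V_t * Y[i][j]
    return Y, theta
```

Deriving `beta` and the initial `theta` from image statistics, rather than fixing them globally by hand, is the idea the abstract describes.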
Abstract: the current study presents a modeling framework to determine the torsion strength of an induction hardened splined shaft by considering geometry and material aspects with the aim to optimize the static torsion strength by selection of spline geometry and hardness depth. Six different spline geometries and seven different hardness profiles including non-hardened and throughhardened shafts have been considered. The results reveal that the torque that causes initial yielding of the induction hardened splined shaft is strongly dependent on the hardness depth and the geometry of the spline teeth. Guidelines for selection of the appropriate hardness depth and spline geometry are given such that an optimum static torsion strength of the component can be achieved.
Abstract: This paper deals with a portfolio selection problem
based on the possibility theory under the assumption that the returns
of assets are LR-type fuzzy numbers. A possibilistic portfolio model
with transaction costs is proposed, in which the possibilistic mean
value of the return serves as the measure of investment return, and the
possibilistic variance of the return serves as the measure of investment
risk. With transaction costs included, traditional
optimization algorithms usually fail to find the optimal solution
efficiently, so heuristic algorithms are better suited. Therefore,
a particle swarm optimization algorithm is designed to solve the corresponding
optimization problem. Finally, a numerical example is given to
illustrate the proposed approach.
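A particle swarm optimizer of the kind used here can be sketched as below. This is a generic PSO for minimization (the possibilistic objective with transaction costs would be supplied as `f`), not the authors' exact design; the inertia weight `w`, acceleration coefficients `c1`/`c2`, and swarm size are illustrative defaults.

```python
import random

def pso_minimize(f, dim, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO sketch: each particle tracks its personal best,
    the swarm tracks a global best, and velocities blend the two."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity: inertia + pull toward personal and global bests.
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```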
Abstract: In this paper, a new learning approach for network
intrusion detection using naïve Bayesian classifier and ID3 algorithm
is presented, which identifies effective attributes from the training
dataset, calculates the conditional probabilities for the best attribute
values, and then correctly classifies all the examples of training and
testing dataset. Most of the current intrusion detection datasets are
dynamic, complex, and contain a large number of attributes. Some of
the attributes may be redundant or contribute little to
detection. It has been shown that selecting significant
attributes is important in designing a real-world intrusion detection
system (IDS). The purpose of this study is to identify effective
attributes from the training dataset to build a classifier for network
intrusion detection using data mining algorithms. The experimental
results on KDD99 benchmark intrusion detection dataset demonstrate
that this new approach achieves high classification rates and reduces
false positives using limited computational resources.
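The conditional-probability step of such a classifier can be sketched as follows: a categorical naive Bayes trainer with add-one (Laplace) smoothing. The attribute values in the usage example are hypothetical connection records, not actual KDD99 attributes, and this sketch omits the ID3-based attribute-selection stage.

```python
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Fit a categorical naive Bayes model: class priors plus per-attribute
    conditional probabilities with Laplace (add-one) smoothing."""
    n = len(labels)
    priors = Counter(labels)
    cond = defaultdict(Counter)      # (attr_index, label) -> value counts
    for row, y in zip(rows, labels):
        for j, v in enumerate(row):
            cond[(j, y)][v] += 1
    # Distinct values seen per attribute (for the smoothing denominator).
    values = [set(r[j] for r in rows) for j in range(len(rows[0]))]

    def classify(row):
        best, best_lp = None, float("-inf")
        for y, cy in priors.items():
            lp = math.log(cy / n)    # log prior
            for j, v in enumerate(row):
                lp += math.log((cond[(j, y)][v] + 1) / (cy + len(values[j])))
            if lp > best_lp:
                best, best_lp = y, lp
        return best

    return classify
```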
Abstract: Today, computer systems are more and more complex and face growing security risks. Security managers need effective security risk assessment methodologies that can model the increasing complexity of current computer systems while keeping the complexity of the assessment procedure low. This paper provides a brief analysis of common security risk assessment methodologies, leading to the selection of a methodology that fulfills these requirements. A detailed analysis of the most effective methodology is then presented, with numerical examples demonstrating its ease of use.
Abstract: This paper presents a conceptual model of agreement
options for negotiation support in multi-person decision on
optimizing high-rise building columns. The decision is complicated
since many parties are involved in choosing a single alternative from a
set of solutions. Concerns differ because of differing
preferences, experiences, and backgrounds. Such building columns as
alternatives are referred to as agreement options which are
determined by identifying the possible decision maker group,
followed by determining the optimal solution for each group. The
group in this paper comprises three decision makers: the
designer, the programmer, and the construction manager. Decision
techniques are applied to determine the relative value of the alternative
solutions in performing the function. The Analytic Hierarchy Process
(AHP) was applied for the decision process, and a game-theory-based agent
system for coalition formation. An n-person cooperative game is
represented by the set of all players. The proposed coalition
formation model enables each agent to individually select its allies or
coalition. It further emphasizes the importance of performance
evaluation in the design process and value-based decision.
Abstract: The Artificial Immune System (AIS) has been adopted as a
heuristic algorithm for solving combinatorial problems for decades.
Nevertheless, many of these applications take advantage of AIS for the
application at hand but seldom propose approaches for enhancing its
efficiency. In this paper, we continue our previous research and develop
a Self-evolving Artificial Immune System II (SEAIS II) by coordinating the T
and B cells of the immune system, and build a block-based artificial
chromosome to speed up computation and improve performance on
problems of differing complexity. The design of the
plasma cell and clonal selection, which relate to the
immune response, gives the AIS both global and local search
ability and prevents it from being trapped in local optima. The
experimental results validate that SEAIS II is effective in solving
permutation flow-shop problems.
Abstract: Since communications between tag and reader in RFID
system take place over radio, anyone can access the tag and obtain any of
its information. Moreover, a tag always replies with the same ID, so it is
hard to distinguish a real tag from a fake one. Thus, there are many
security problems in today's RFID systems. First, an unauthorized
reader can easily read the ID information of any tag. Second, an
adversary can easily cheat a legitimate reader using collected
tag ID information, impersonating a legitimate tag. These security
problems can be typically solved by encryption of messages
transmitted between Tag and Reader and by authentication for Tag.
In this paper, to solve these security problems on RFID system, we
propose the Tag Authentication Scheme based on self shrinking
generator (SSG). The SSG algorithm used in our scheme was proposed by
W. Meier and O. Staffelbach at EUROCRYPT '94. It consists of
only one LFSR and selection logic for generating a random
stream, so it is well suited to hardware implementation
on devices with extremely limited resources; since the output generated
by the SSG at each step acts as a random stream, it allows us
to design a lightweight authentication scheme secure against
some network attacks. Therefore, we propose a novel tag
authentication scheme which uses the SSG to encrypt the tag ID
transmitted from tag to reader and achieves authentication of the tag.
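For reference, the self-shrinking generator itself is simple enough to sketch: one LFSR is clocked in bit pairs (a, b), and b is output only when a = 1. The register length, tap positions, and seed below are illustrative, not parameters of the proposed scheme.

```python
def ssg_keystream(state, taps, n_out):
    """Self-shrinking generator (Meier & Staffelbach): clock a single
    LFSR twice per step to obtain a bit pair (a, b); output b only
    when a == 1, otherwise discard the pair."""
    reg = list(state)               # reg[0] is the newest bit, reg[-1] the oldest

    def clock():
        fb = 0
        for t in taps:              # feedback = XOR of the tapped stages
            fb ^= reg[t]
        out = reg.pop()             # the oldest bit is shifted out
        reg.insert(0, fb)
        return out

    stream = []
    while len(stream) < n_out:
        a, b = clock(), clock()     # take the LFSR output in pairs
        if a == 1:
            stream.append(b)        # keep b from a (1, b) pair
    return stream
```

The attraction for RFID is visible here: the whole generator is one shift register plus a pair-selection rule.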
Abstract: The design of a pattern classifier includes an attempt
to select, among a set of possible features, a minimum subset of
weakly correlated features that better discriminate the pattern classes.
This is usually a difficult task in practice, normally requiring the
application of heuristic knowledge about the specific problem
domain. The selection and quality of the features representing each
pattern have a considerable bearing on the success of subsequent
pattern classification. Feature extraction is the process of deriving
new features from the original features in order to reduce the cost of
feature measurement, increase classifier efficiency, and allow higher
classification accuracy. Many current feature extraction techniques
involve linear transformations of the original pattern vectors to new
vectors of lower dimensionality. While this is useful for data
visualization and increasing classification efficiency, it does not
necessarily reduce the number of features that must be measured
since each new feature may be a linear combination of all of the
features in the original pattern vector. In this paper a new approach is
presented to feature extraction in which feature selection, feature
extraction, and classifier training are performed simultaneously using
a genetic algorithm. In this approach each feature value is first
normalized by a linear equation, then scaled by the associated weight
prior to training, testing, and classification. A k-nearest-neighbor (knn) classifier is used to
evaluate each set of feature weights. The genetic algorithm optimizes
a vector of feature weights, which are used to scale the individual
features in the original pattern vectors in either a linear or a nonlinear
fashion. With this approach, the number of features used in classification
can be effectively reduced.
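A minimal sketch of the idea follows, assuming a real-valued weight vector evaluated by leave-one-out 1-NN accuracy. The GA operators and rates below are illustrative choices, not the paper's configuration, and the sketch covers only linear scaling (the paper also considers normalization and nonlinear scaling).

```python
import random

def knn_accuracy(weights, data, labels, k=1):
    """Leave-one-out accuracy of a k-NN classifier on weight-scaled features."""
    n, correct = len(data), 0
    for i in range(n):
        dists = []
        for j in range(n):
            if j == i:
                continue
            # Weighted squared Euclidean distance in the scaled space.
            d = sum(w * (a - b) ** 2
                    for w, a, b in zip(weights, data[i], data[j]))
            dists.append((d, labels[j]))
        dists.sort()
        votes = [lab for _, lab in dists[:k]]
        if max(set(votes), key=votes.count) == labels[i]:
            correct += 1
    return correct / n

def ga_feature_weights(data, labels, n_feat, pop=20, gens=30, seed=1):
    """Toy GA: tournament selection, uniform crossover, Gaussian mutation;
    fitness of a weight vector is its 1-NN leave-one-out accuracy."""
    rng = random.Random(seed)
    P = [[rng.random() for _ in range(n_feat)] for _ in range(pop)]
    best, best_fit = None, -1.0
    for _ in range(gens):
        fit = [knn_accuracy(w, data, labels) for w in P]
        for w, f in zip(P, fit):
            if f > best_fit:
                best, best_fit = w[:], f

        def tournament():
            i, j = rng.randrange(pop), rng.randrange(pop)
            return P[i] if fit[i] >= fit[j] else P[j]

        nxt = []
        while len(nxt) < pop:
            a, b = tournament(), tournament()
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < 0.3:          # mutate one gene, clipped to [0, 1]
                g = rng.randrange(n_feat)
                child[g] = min(1.0, max(0.0, child[g] + rng.gauss(0.0, 0.2)))
            nxt.append(child)
        P = nxt
    return best, best_fit
```

A weight driven to zero effectively deselects its feature, which is how selection, extraction, and classifier tuning collapse into one search.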
Abstract: A new approach, based on the consideration that electroencephalogram (EEG) signals are chaotic, was presented for automated diagnosis of electroencephalographic changes. This consideration was tested successfully using nonlinear dynamics tools such as the computation of Lyapunov exponents. This paper presents the use of statistics over the set of Lyapunov exponents to reduce the dimensionality of the extracted feature vectors. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as the basis for detection of electroencephalographic changes. Three types of EEG signals (recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures) were classified. The selected Lyapunov exponents of the EEG signals were used as inputs to the MLPNN, trained with the Levenberg-Marquardt algorithm. The classification results confirmed that the proposed MLPNN has potential in detecting electroencephalographic changes.
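For intuition about the Lyapunov-exponent feature, the exponent of a known chaotic map can be computed directly as the long-run average of the log-derivative along an orbit. This is only a toy illustration on the logistic map (whose exponent at r = 4 is analytically ln 2), not the estimation procedure applicable to recorded EEG time series.

```python
import math

def lyapunov_logistic(r, x0=0.3, n=100_000, burn=100):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1 - 2x)|."""
    x = x0
    for _ in range(burn):           # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return s / n
```

A positive value, as here, is the signature of chaos that motivates using Lyapunov exponents as EEG features.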
Abstract: Protective clothing limits heat transfer and hampers
task performance due to its increased weight. Military protective
clothing enables humans to operate in adverse environments. In the
selection and evaluation of military protective clothing, attention
should be given to heat strain, ergonomics, and fit, in addition to the
actual protection it offers.
Fifty healthy male subjects participated in the study. The subjects
were dressed in shorts, T-shirts, socks, sneakers, and four different
kinds of military protective clothing: CS, CSB, CS with
NBC protection, and CS with NBC protection added.
Ergonomic and physiological strains of each of the four outfits were
investigated while subjects walked on a treadmill (7 km/hour) with a
19.7 kg backpack. The tests showed that the
highest heart rate was found wearing the NBC-protection-added
outfit; the highest temperatures were observed wearing NBC protection
added, followed respectively by CS with NBC protection,
CSB, and CS; and the highest value for thermal comfort (implying
worst thermal comfort) was observed wearing NBC protection
added.
Abstract: Optimizing equipment selection in heavy earthwork
operations is critical to the success of any construction project.
The objective of this research was to develop
a computer model to assist contractors and construction
managers in estimating the cost of heavy earthwork operations.
An economic operation analysis was conducted for an equipment fleet
taking into consideration the owning and operating costs involved in
earthwork operations. The model is being developed in a Microsoft
environment and is capable of being integrated with other estimating
and optimization models. In this study, Caterpillar® Performance
Handbook [5] was the main resource used to obtain specifications of
selected equipment. The model yields an optimum selection of the
equipment fleet based not only on cost
effectiveness but also on versatility. To validate the model, a
case study of an actual dam construction project was selected to
quantify its degree of accuracy.
Abstract: This paper presents a novel approach for optimal
reconfiguration of radial distribution systems. Optimal
reconfiguration involves the selection of the best set of branches to
be opened, one each from each loop, such that the resulting radial
distribution system achieves the desired performance. In this paper, an
algorithm based on simple heuristic rules is proposed that identifies
an effective switch-status configuration of the distribution system for
minimum loss. The proposed algorithm consists of two
parts: the first determines the best switching combinations in all loops
with minimum computational effort, and the second performs a simple optimum
power-loss calculation, via load flows, for the best switching combination
found in the first part. To demonstrate the validity of the proposed
algorithm, computer simulations are carried out on a 33-bus system.
The results show that the performance of the proposed method is
better than that of the other methods.
Abstract: In this paper, we focus on the fusion of images from
different sources using multiresolution wavelet transforms. Based on
reviews of popular image fusion techniques used in data analysis,
different pixel- and energy-based methods are evaluated experimentally. A novel
architecture with a hybrid algorithm is proposed which applies pixel
based maximum selection rule to low frequency approximations and
filter mask based fusion to high frequency details of wavelet
decomposition. The key feature of hybrid architecture is the
combination of advantages of pixel and region based fusion in a
single image which can help the development of sophisticated
algorithms enhancing the edges and structural details. A Graphical
User Interface is developed for image fusion to make the research
outcomes available to the end user. To utilize GUI capabilities for
medical, industrial and commercial activities without MATLAB
installation, a standalone executable application is also developed
using Matlab Compiler Runtime.
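The hybrid fusion rule can be sketched on already-decomposed subbands. The version below is a generic stand-in (pixel-wise maximum on the approximations, 3x3 window-energy selection on the details, standing in for the paper's filter-mask rule) operating on plain nested lists rather than an actual wavelet decomposition; in practice the subbands would come from a 2-D DWT.

```python
def region_energy(C, i, j):
    """Sum of squared coefficients in the 3x3 window around (i, j)."""
    rows, cols = len(C), len(C[0])
    return sum(C[a][b] ** 2
               for a in range(max(0, i - 1), min(rows, i + 2))
               for b in range(max(0, j - 1), min(cols, j + 2)))

def fuse_subbands(approx_a, approx_b, detail_a, detail_b):
    """Hybrid rule: pixel-wise maximum selection on the low-frequency
    approximations; window-energy comparison on the high-frequency details."""
    fused_lo = [[max(x, y) for x, y in zip(ra, rb)]
                for ra, rb in zip(approx_a, approx_b)]
    fused_hi = [[detail_a[i][j]
                 if region_energy(detail_a, i, j) >= region_energy(detail_b, i, j)
                 else detail_b[i][j]
                 for j in range(len(detail_a[0]))]
                for i in range(len(detail_a))]
    return fused_lo, fused_hi
```

The window-energy test is what gives the region-based behaviour on edges and structural details, while the approximation rule stays a cheap per-pixel selection.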
Abstract: Molodtsov's soft set theory was originally proposed
as a general mathematical tool for dealing with uncertainty problems. The matrix form has been introduced in soft set theory and some of its
properties have been discussed. However, soft matrices have so far been
formulated for group decision-making problems only with equal importance
weights for the criteria, which does not reflect the decision makers' true opinion of each criterion. The aim of this paper is to propose a method
for solving group decision-making problems that incorporates the importance of the criteria by using soft matrices in a more objective manner. The weight of each criterion is calculated using the Analytic Hierarchy Process (AHP) method. An example of a house
selection process is given to illustrate the effectiveness of the proposed method.
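The AHP weight calculation can be sketched via the common column-normalization approximation of the principal eigenvector of the pairwise-comparison matrix; the comparison values in the test are hypothetical, chosen to be perfectly consistent so the answer is exact.

```python
def ahp_weights(M):
    """Approximate AHP priority vector: normalize each column of the
    pairwise-comparison matrix M, then average the normalized rows."""
    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    return [sum(M[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]
```

For an inconsistent matrix this is only an approximation of the eigenvector method, and a consistency-ratio check would normally accompany it.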
Abstract: Multi-criteria decision analysis (MCDA) covers both
data and experience. It is commonly used to solve problems with
many parameters and uncertainties. GIS-supported solutions improve
and speed up the decision process. Weighted grading, an MCDA
method, is employed here for solving geotechnical problems. In this
study, geotechnical parameters, namely soil type, SPT (N) blow
count, shear-wave velocity (Vs), and depth of the underground water
level (DUWL), have been combined in MCDA and GIS. In terms of
geotechnical aspects, the settlement suitability of the municipal area
was analyzed by the method. The MCDA results were compatible with
geotechnical observations and experience. The method can be
employed in geotechnically oriented microzoning studies if the criteria
are well evaluated.
Abstract: In this paper, a polymer electrolyte membrane (PEM)
fuel cell power system including a burner, steam reformer, heat
exchanger, and water heater has been considered to meet the
electrical, heating, cooling, and domestic hot water loads of a
residential building in Tehran. The system uses natural gas as
fuel and works in CHP mode. The design and operating conditions of a
PEM fuel cell system are considered in this study. The energy
requirements of the residential building and the number of fuel cell
stacks needed to meet them have been estimated. The method involves
exergy analysis and entropy generation throughout the months of the
year. Results show that all the energy needs of the building can be
met with 12 fuel cell stacks at a nominal capacity of 8.5 kW. Exergy
analysis of the CHP system shows that increasing the ambient air
temperature from 1 °C to 40 °C increases entropy
generation by 5.73%. The maximum entropy generation, at hour 15 on the 15th
of June and the 15th of July, is estimated at 12624 kW/K.
The entropy generation of this system over a year is estimated
at 1004.54 GJ/K.
Abstract: Evolvable hardware (EHW) refers to a self-reconfigurable
hardware design in which the configuration is under
the control of an evolutionary algorithm (EA). A lot of research has
been done in this area, and several different EAs have been introduced.
Every time a specific EA is chosen for solving a particular problem,
all its components, such as population size, initialization, selection
mechanism, mutation rate, and genetic operators, should be selected
in order to achieve the best results. In the last three decades, much
research has been carried out to identify the best parameters
of the EA's components for different test problems. However,
different researchers propose different solutions. In this paper, the
behaviour of the mutation rate in a (1+λ) evolution strategy (ES) for
designing logic circuits, which has not been studied before, is
deeply analyzed. The mutation operator in this EHW system modifies the
values of the logic-cell inputs, the cell type (for example, from AND
to NOR), and the circuit output. The behaviour of the mutation has
been analyzed based on the number of generations, genotype
redundancy and number of logic gates used for the evolved circuits.
The experimental results indicate the mutation
rate to be used during evolution for the design and optimization of
logic circuits. Research on the best mutation rate over the last
40 years is also summarized.
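The analyzed loop has this generic shape, shown on a plain bit-string with a OneMax-style fitness rather than a circuit genotype. The values of λ and the mutation rate, and the neutral-move acceptance rule, follow common EHW practice but are assumptions here, not the paper's measured configuration.

```python
import random

def one_plus_lambda(fitness, genome_len, lam=4, mut_rate=0.05,
                    max_gens=2000, target=None, seed=0):
    """(1+lambda) ES on a bit-string genotype: each generation produces
    lam mutants of the single parent; the best child replaces the parent
    if it is at least as fit (accepting neutral moves lets the search
    drift across redundant genotypes)."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(genome_len)]
    pfit = fitness(parent)
    for _ in range(max_gens):
        best_child, best_fit = None, None
        for _ in range(lam):
            # Flip each gene independently with probability mut_rate.
            child = [1 - g if rng.random() < mut_rate else g for g in parent]
            cfit = fitness(child)
            if best_fit is None or cfit > best_fit:
                best_child, best_fit = child, cfit
        if best_fit >= pfit:            # ">=" accepts neutral moves
            parent, pfit = best_child, best_fit
        if target is not None and pfit >= target:
            break
    return parent, pfit
```

In the circuit setting, each genome position would encode a cell input, cell type, or output connection, so `mut_rate` directly controls how often those are perturbed per generation.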
Abstract: Subdivision surfaces are applied to entire
meshes in order to produce smooth surface refinements from a coarse
mesh. Several schemes have been introduced in this area, each providing a
set of rules for converging to smooth surfaces. However, computing and
rendering all the vertices is very costly in terms of memory
consumption and runtime during the subdivision process, leading
to a heavy computational load, especially at higher levels of
subdivision. Adaptive subdivision is a method that subdivides only
certain areas of a mesh while the rest is kept at a lower
polygon count. Although adaptive subdivision acts only on selected areas,
the smoothness of the produced surfaces can be
preserved, matching that of regular subdivision. Nevertheless,
adaptive subdivision is burdened by two costs: calculations
are needed to define the areas that require subdivision, and
cracks created by the difference in subdivision depth
between selected and unselected areas must be removed. Unfortunately,
at higher levels of subdivision, adaptive subdivision still suffers
from high memory consumption.
This research introduces an iterative process for adaptive subdivision
that improves the previous adaptive method by reducing memory
consumption on triangular meshes. The iterative
process gave acceptable results in memory use and appearance,
producing fewer polygons while preserving smooth surfaces.
Abstract: This work discusses an innovative methodology for
deployment of service quality characteristics. Four groups of organizational features that may influence the quality of services are identified: human resources, technology, planning, and organizational
relationships. A House of Service Quality (HOSQ) matrix is built to
extract the desired improvement in the service quality characteristics
and to translate them into a hierarchy of important organizational
features. The Mean Square Error (MSE) criterion enables the
pinpointing of the few essential service quality characteristics to be
improved as well as selection of the vital organizational features. The
method was implemented in an engineering supply enterprise, where it
provided useful information on its vital service dimensions.