Abstract: This paper presents a new method that applies the artificial bee colony (ABC) algorithm to capacitor placement in distribution systems, with the objective of improving the voltage profile and reducing power loss. The ABC algorithm is a recent population-based metaheuristic inspired by the intelligent foraging behavior of honeybee swarms. One advantage of the ABC algorithm is that it does not require external parameters such as the crossover and mutation rates of genetic algorithms and differential evolution, which are difficult to determine in advance. Another advantage is that global search ability is built into the algorithm through a neighborhood source production mechanism, which is similar to a mutation process. To demonstrate the validity of the proposed algorithm, computer simulations are carried out on a 69-bus system and the results are compared with other approaches available in the literature. The proposed method outperforms the other methods in terms of solution quality and computational efficiency.
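The neighborhood source production mechanism named in the abstract is, in generic ABC formulations, the update v_ij = x_ij + φ·(x_ij − x_kj) applied to one dimension of one food source. The sketch below shows that generic step, not the paper's exact implementation; the function name, the sample sources, and the bounds are all illustrative.

```python
import random

def neighbor_source(food_sources, i, lower, upper):
    """Generic ABC neighborhood step: perturb one randomly chosen
    dimension j of food source i toward/away from another source k,
    v_ij = x_ij + phi * (x_ij - x_kj) with phi uniform in [-1, 1]."""
    x = food_sources[i]
    # pick a different food source k and a random dimension j
    k = random.choice([s for s in range(len(food_sources)) if s != i])
    j = random.randrange(len(x))
    v = list(x)
    phi = random.uniform(-1.0, 1.0)
    v[j] = x[j] + phi * (x[j] - food_sources[k][j])
    # keep the candidate inside the search bounds
    v[j] = min(max(v[j], lower[j]), upper[j])
    return v

# three hypothetical candidate capacitor placements (two decision variables)
sources = [[10.0, 20.0], [12.0, 18.0], [9.0, 25.0]]
cand = neighbor_source(sources, 0, lower=[0.0, 0.0], upper=[50.0, 50.0])
print(cand)
```

In the full algorithm the candidate replaces the original source only if it improves the objective (a greedy selection), which is what makes the mechanism resemble mutation plus selection.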
Abstract: Economic Load Dispatch (ELD) determines the most efficient, low-cost and reliable operation of a power system by dispatching the available electricity generation resources to supply the load on the system. The primary objective of economic dispatch is to minimize the total cost of generation while honoring the operational constraints of the available generation resources. In this paper an intelligent water drop (IWD) algorithm is proposed to solve the ELD problem with the objective of minimizing the total cost of generation. The intelligent water drop algorithm is a swarm-based, nature-inspired optimization algorithm modeled on natural rivers. A natural river often finds good paths among the many possible paths from its source to its destination, finally converging on a near-optimal path. These ideas are embedded into the proposed algorithm for solving the economic load dispatch problem. The main advantages of the proposed technique are that it is easy to implement and capable of finding a feasible near-global-optimal solution with little computational effort. To illustrate the effectiveness of the proposed method, it has been tested on 6-unit and 20-unit test systems with incremental fuel cost functions that take into account valve-point loading effects. Numerical results show that the proposed method has good convergence properties and yields better solution quality than other algorithms reported in the recent literature.
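The valve-point loading effect mentioned above is conventionally modeled by adding a rectified sinusoid to the quadratic fuel cost, F(P) = aP² + bP + c + |e·sin(f·(Pmin − P))|. The sketch below evaluates that standard cost model; the coefficients are illustrative, not the paper's 6-unit or 20-unit test data.

```python
import math

def fuel_cost(p, a, b, c, e, f, p_min):
    """Fuel cost ($/h) of one unit at output p (MW), with the standard
    valve-point term |e * sin(f * (p_min - p))| added to the quadratic."""
    return a * p**2 + b * p + c + abs(e * math.sin(f * (p_min - p)))

def total_cost(powers, coeffs):
    """Total generation cost for a dispatch across all units."""
    return sum(fuel_cost(p, *cf) for p, cf in zip(powers, coeffs))

# two hypothetical units dispatching a 300 MW load
coeffs = [(0.007, 7.0, 240.0, 300.0, 0.035, 100.0),
          (0.0095, 10.0, 200.0, 200.0, 0.042, 50.0)]
print(total_cost([180.0, 120.0], coeffs))
```

The absolute-value sinusoid makes the cost surface non-smooth and multimodal, which is exactly why derivative-free metaheuristics such as IWD are applied to this problem.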
Abstract: Modern highly automated production systems face reliability problems. Machine reliability directly affects the productivity rate and the efficient use of expensive industrial facilities. Reliability prediction has become an important research area and involves complex mathematical methods and calculations. The reliability of high-productivity automatic technological machines, which consist of complex mechanical, electrical and electronic components, is important because the failure of these units results in major economic losses for production systems. The reliability of the transport and feeding systems for automatic technological machines is also important, because a transport failure stops the technological machines. This paper presents reliability engineering of the feeding system and its components for transporting complex-shaped parts to automatic machines. It also discusses the calculation of the reliability parameters of the feeding unit by applying probability theory. Equations are derived for calculating the limits of the geometrical sizes of feeders and the probability that transported parts stick in the chute, representing the reliability of a feeder as a function of its geometrical parameters.
Abstract: The acoustic and articulatory properties of fricative speech sounds are being studied using magnetic resonance imaging (MRI) and acoustic recordings from a single subject. Area functions were derived from a complete set of axial and coronal MR slices using two different methods: the Mermelstein technique and the Blum transform. Area functions derived from the two techniques were shown to differ significantly in some cases. Such differences will lead to different acoustic predictions and it is important to know which is the more accurate. The vocal tract acoustic transfer function (VTTF) was derived from these area functions for each fricative and compared with measured speech signals for the same fricative and same subject. The VTTFs for /f/ in two vowel contexts and the corresponding acoustic spectra are derived here; the Blum transform appears to show a better match between prediction and measurement than the Mermelstein technique.
Abstract: Unsteady boundary layer flow of an incompressible
micropolar fluid over a stretching sheet when the sheet is stretched in
its own plane is studied in this paper. The stretching velocity is
assumed to vary linearly with the distance along the sheet. Two equal
and opposite forces are impulsively applied along the x-axis so that the
sheet is stretched, keeping the origin fixed in a micropolar fluid. The
transformed unsteady boundary layer equations are solved
numerically using the Keller-box method for the whole transient from
the initial state to final steady-state flow. Numerical results are
obtained for the velocity and microrotation distributions as well as the
skin friction coefficient for various values of the material parameter K.
It is found that there is a smooth transition from the small-time
solution to the large-time solution.
Abstract: Natural outdoor scene classification is an active and promising research area around the globe. In this study, the classification is carried out in two phases. In the first phase, features are extracted from the images by a wavelet decomposition method and stored in a database as feature vectors. In the second phase, neural classifiers, namely the back-propagation neural network (BPNN) and the resilient back-propagation neural network (RPNN), are employed for the classification of scenes. Four hundred color images of two classes, forest and street, are taken from the MIT database. A comparative study has been carried out on the performance of the two neural classifiers, BPNN and RPNN, for an increasing number of test samples. RPNN showed better classification results than BPNN on the larger test samples.
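The wavelet feature-extraction phase above can be illustrated with the simplest wavelet, the one-level Haar transform; the abstract does not name the wavelet family, so this sketch, its function names, and the texture statistic are all illustrative assumptions.

```python
def haar_1d(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (approximation) and pairwise half-differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def feature_vector(image_rows):
    """Mean absolute detail magnitude per row: a compact texture
    descriptor of the kind one might store as a feature vector."""
    feats = []
    for row in image_rows:
        _, detail = haar_1d(row)
        feats.append(sum(abs(d) for d in detail) / len(detail))
    return feats

# tiny illustrative grayscale "image" (rows of even length)
img = [[10, 12, 50, 52], [0, 8, 0, 8]]
print(feature_vector(img))  # [1.0, 4.0]
```

In a full pipeline the detail statistics at several decomposition levels would be concatenated into one vector per image and fed to the BPNN/RPNN classifiers.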
Abstract: Reliability Centered Maintenance (RCM) is one of the most widely used methods in modern power systems for scheduling maintenance cycles and determining inspection priorities. Before the RCM method can be applied to the Smart Grid, a precedence study of the rearranged system structure is needed, because of the introduction of additional installations such as renewable and sustainable energy resources, energy storage devices and advanced metering infrastructure. This paper proposes a new method to evaluate the maintenance and inspection priority of power system facilities in the Smart Grid using the Risk Priority Number. To calculate this risk index, the reliability block diagram of the Smart Grid system must be analyzed. Finally, a feasible technical method is discussed for estimating the risk potential as part of the RCM procedure.
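In classic FMEA practice, the Risk Priority Number is the product of severity, occurrence, and detection ratings, each on a 1–10 scale. The sketch below uses that textbook definition; the paper adapts RPN to Smart Grid facilities, so the facility names and ratings here are purely illustrative.

```python
def risk_priority_number(severity, occurrence, detection):
    """Classic FMEA Risk Priority Number: S * O * D, each rated 1-10."""
    for r in (severity, occurrence, detection):
        if not 1 <= r <= 10:
            raise ValueError("ratings must be between 1 and 10")
    return severity * occurrence * detection

# hypothetical facility ratings (severity, occurrence, detection)
facilities = {
    "transformer": (8, 3, 4),   # RPN = 96
    "storage unit": (6, 5, 5),  # RPN = 150
    "smart meter": (3, 4, 2),   # RPN = 24
}
# rank facilities for inspection: highest RPN first
ranked = sorted(facilities,
                key=lambda k: risk_priority_number(*facilities[k]),
                reverse=True)
print(ranked)
```

Ranking by RPN is what turns the reliability-block-diagram analysis into a concrete inspection priority list.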
Abstract: This research elaborates decision models for product innovation in the early phases, focusing on one of the most widely implemented methods in marketing research: conjoint analysis and the related conjoint-based models, with special focus on heuristic programming techniques for the development of optimal product innovations. The concept, potential, requirements and limitations of conjoint analysis and its conjoint-based heuristic successors are analysed, and the development of a conceptual framework for the Genetic Algorithm (GA), one of the most widely implemented heuristic methods for developing product innovations, is discussed.
Abstract: Grid computing provides a virtual framework for controlled sharing of resources across institutional boundaries. Recently, trust has been recognised as an important factor in the selection of optimal resources in a grid. We introduce a new method that provides a quantitative trust value based on past interactions and present environment characteristics. This quantitative trust value is used to select a suitable resource for a job, eliminating run-time failures arising from incompatible user-resource pairs. The proposed work acts as a tool to calculate the trust values of the various components of the grid, thereby improving the success rate of the jobs submitted to resources on the grid. Access to a resource depends not only on the identity and behaviour of the resource but also on its transaction context, transaction time, connectivity bandwidth, availability and load. The quality of a recommender is also evaluated, based on the accuracy of the feedback it provides about a resource. Jobs are submitted for execution to the selected resource after the overall trust value of the resource is found. The overall trust value is computed with respect to both subjective and objective parameters.
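One common way to combine past-interaction trust, recommender feedback, and present environment characteristics into a single value is a weighted blend. The sketch below is such a blend as a plausible illustration only: the weights, the averaging of context attributes, and all names are assumptions, not the paper's formula.

```python
def overall_trust(direct, recommended, context,
                  w_direct=0.5, w_rec=0.3, w_ctx=0.2):
    """Hypothetical overall trust in [0, 1]: weighted blend of direct
    past-interaction trust, recommender feedback, and the mean of
    present environment scores (bandwidth, availability, load)."""
    ctx_score = sum(context.values()) / len(context)
    return w_direct * direct + w_rec * recommended + w_ctx * ctx_score

# illustrative scores for one candidate resource, each normalized to [0, 1]
ctx = {"bandwidth": 0.9, "availability": 0.8, "load": 0.7}
print(overall_trust(direct=0.8, recommended=0.6, context=ctx))  # 0.74
```

A scheduler would compute this value for every candidate resource and submit the job to the highest-scoring one, which is the selection step the abstract describes.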
Abstract: One problem in evaluating recent computational models of human category learning is that there is no standardized method for systematically comparing the models' assumptions or hypotheses. In the present study, a flexible general model (called GECLE) is introduced that can be used as a framework to systematically manipulate and compare the effects and descriptive validities of a limited number of assumptions at a time. Two example simulation studies are presented to show how the GECLE framework can be useful in the field of human high-order cognition research.
Abstract: Tomato powder has good potential as a substitute for tomato paste and other tomato products. In order to preserve the physicochemical properties and nutritional quality of tomato during dehydration, an investigation was carried out using different drying methods and pretreatments. A solar drier and a continuous conveyor (tunnel) drier were used for dehydration, and calcium chloride (CaCl2), potassium metabisulphite (KMS), calcium chloride combined with potassium metabisulphite (CaCl2 + KMS), and sodium chloride (NaCl) were selected as pretreatments. Lycopene content, dehydration ratio, rehydration ratio and non-enzymatic browning (NEB), in addition to moisture, sugar and titratable acidity, were studied. The results show that pretreatment with CaCl2 and NaCl increased water removal and moisture mobility in tomato slices during drying. When CaCl2 was used together with KMS, the lowest NEB was recorded, and the best results overall were obtained when the two chemicals were used in combination. Storage studies in LDPE polymeric and metallized polyester films showed fewer changes in the products packed in metallized polyester pouches: even after 6 months the lycopene content did not decrease by more than 20% compared with the control sample, and the shelf life was extended to 6 months in acceptable condition. For most quality characteristics, the tunnel drier samples showed better values than the solar drier samples.
Abstract: The contact width is an important design parameter for optimizing the design of a new metal gasket to substitute for asbestos gaskets. The contact width is found to be related to the helium leak quantity: as the axial load increases, the helium leak quantity decreases and the contact width increases. This study provides a validation method using simulation analysis, whose results are compared with experiments using pressure-sensitive paper. The results show similar trends between the simulation and the experimental data. The final evaluation uses the helium leak quantity to check the leakage performance of the gasket design. Considering the change of position of the convex contact, the gasket design can be optimized by increasing the contact width.
Abstract: Organ motion, especially respiratory motion, is a technical challenge to radiation therapy planning and dosimetry. This motion induces displacements and deformation of the organ tissues within the irradiated region which need to be taken into account when simulating dose distribution during treatment. Finite element models (FEM) can provide great insight into the mechanical behavior of the organs, since they are based on the biomechanical material properties, complex geometry of organs, and anatomical boundary conditions. In this paper we present an original approach that offers the possibility to combine image-based biomechanical models with particle transport simulations. We propose a new method to map material density information issued from CT images to deformable tetrahedral meshes. Based on the principle of mass conservation, our method can correlate the density variation of organ tissues with geometrical deformations during the different phases of the respiratory cycle. The first results are particularly encouraging, as local error quantification of density mapping on organ geometry and density variation with organ motion are performed to evaluate and validate our approach.
Abstract: We present our original research on geometric moments for detecting mineral deficiencies in the fragile groundnut plant. This plant is prone to many deficiencies as a result of variance in the soil nutrients. By analyzing the leaves of the plant, we detect visual symptoms that are not recognizable to the naked eye. We collected about 160 leaf samples from nearby fields. The images were taken with each leaf placed in a black box to avoid external interference. For the first time, it has been possible to provide the farmer with the stages of the deficiencies. We have also applied the algorithms successfully to many other plants such as lady's finger, green bean, lablab bean, chilli and tomato, but we report results predominantly for groundnut. The accuracy of our algorithm and method is almost 93%. This could again pioneer a kind of green revolution in agriculture and be a boon to the field.
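The geometric moments named in the abstract are the standard raw image moments m_pq = Σ_x Σ_y x^p · y^q · I(x, y), from which descriptors such as the centroid are derived. The sketch below computes them on a tiny illustrative image; the abstract's actual feature pipeline is not specified, so only the moment definition itself is standard.

```python
def geometric_moment(image, p, q):
    """Raw geometric image moment m_pq = sum over pixels of
    x**p * y**q * intensity, for a 2-D list of pixel intensities."""
    return sum(x ** p * y ** q * val
               for y, row in enumerate(image)
               for x, val in enumerate(row))

# tiny illustrative binary "leaf mask"
img = [[0, 1],
       [1, 1]]
m00 = geometric_moment(img, 0, 0)       # total intensity (area for a mask)
cx = geometric_moment(img, 1, 0) / m00  # centroid x
cy = geometric_moment(img, 0, 1) / m00  # centroid y
print(m00, cx, cy)
```

Higher-order moments (p + q ≥ 2), usually centralized about (cx, cy), capture shape and intensity-distribution changes of the kind a deficiency could induce in a leaf image.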
Abstract: There are many problems associated with the World Wide Web: getting lost in hyperspace, web content that is still accessible only to humans, and difficulties of web administration. The solution to these problems is the Semantic Web, which is considered to be the extension of the current web that presents information in both human-readable and machine-processable form. The aim of this study is to reach a new generic foundation architecture for the Semantic Web, because there is no clear architecture for it: there are four versions, but up to now there is no agreement on any one of them, nor is there a clear picture of the relation between the different layers and technologies inside the architecture. This can be done by building on the ideas of the previous versions as well as Gerber's evaluation method, as a step toward agreement on one Semantic Web architecture.
Abstract: This study focuses on the development of triangular fuzzy numbers, the revising of triangular fuzzy numbers, and the constructing of a HCFN (half-circle fuzzy number) model which can be utilized to perform more plural operations. They are further transformed for trigonometric functions and polar coordinates. From half-circle fuzzy numbers we can conceive cylindrical fuzzy numbers, which work better in algebraic operations. An example of fuzzy control is given in a simulation to show the applicability of the proposed half-circle fuzzy numbers.
Abstract: This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as shared memory parallel mode we introduce a transformation of the training data that allows for the usage of an expensive generalized kernel without additional costs. We present experiments for the Gaussian kernel, but usage of other kernel functions is possible, too. In order to further speed up the decomposition algorithm we analyze the critical problem of working set selection for large training data sets. In addition, we analyze the influence of the working set sizes onto the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm and the improvement of overall support vector machine learning performance. Our method allows for using extensive parameter search methods to optimize classification accuracy.
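The Gaussian kernel used in the experiments above has the standard form k(x, y) = exp(−‖x − y‖² / (2σ²)). The sketch below is that plain kernel evaluation only; it deliberately omits the paper's data transformation, generalized-kernel trick, and decomposition machinery.

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """Standard Gaussian (RBF) kernel:
    k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2 * sigma ** 2))

print(gaussian_kernel([1.0, 0.0], [0.0, 0.0]))  # exp(-0.5)
```

In SVM decomposition training, this function is evaluated for every pair in the current working set, which is why working-set selection and size dominate the cost on large training sets, as the abstract discusses.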
Abstract: This paper presents a systematic approach for designing Unified Power Flow Controller (UPFC) based supplementary damping controllers for damping low-frequency oscillations in a single-machine infinite-bus power system. Detailed investigations have been carried out considering four alternative UPFC-based damping controllers, namely the modulating index of the series inverter (mB), the modulating index of the shunt inverter (mE), the phase angle of the series inverter (δB) and the phase angle of the shunt inverter (δE). The design problem of the proposed controllers is formulated as an optimization problem, and a Real-Coded Genetic Algorithm (RCGA) is employed to optimize the damping controller parameters. Simulation results are presented and compared with a conventional method of tuning the damping controller parameters to show the effectiveness and robustness of the proposed design approach.
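A real-coded GA of the kind named above can be sketched in a few lines: individuals are real-valued parameter vectors, recombined by blending and perturbed by Gaussian mutation. This is a generic RCGA skeleton with illustrative hyperparameters and a toy objective standing in for the damping-performance index; it is not the paper's tuned algorithm or power-system model.

```python
import random

def rcga_minimize(objective, bounds, pop_size=20, generations=50):
    """Minimal real-coded GA: rank the population, keep two elites,
    breed children by component-wise blending of two parents from the
    top ranks, then Gaussian-mutate one gene (clamped to bounds)."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        nxt = scored[:2]                          # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(scored[:10], 2)  # parents from top ranks
            child = [ai + random.random() * (bi - ai)
                     for ai, bi in zip(a, b)]     # blend crossover
            j = random.randrange(dim)             # mutate one gene
            lo, hi = bounds[j]
            child[j] = min(max(child[j] + random.gauss(0, 0.1 * (hi - lo)),
                               lo), hi)
            nxt.append(child)
        pop = nxt
    return min(pop, key=objective)

# toy quadratic objective standing in for the damping-performance index
best = rcga_minimize(lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2,
                     bounds=[(-1.0, 1.0), (-1.0, 1.0)])
print(best)
```

In the paper's setting the objective would be a time-domain or eigenvalue-based damping index evaluated by simulating the single-machine infinite-bus model for each candidate (mB, mE, δB, δE) parameter set.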
Abstract: We analyze the problem of decision making under ignorance with regrets. Recently, Yager has developed a new method for decision making in which, instead of regrets, he uses another type of transformation called negrets. Basically, the negret is considered the dual of the regret. We study this problem in detail and suggest the use of geometric aggregation operators in this method. To do so, we develop a different method for constructing the negret matrix in which all the values are positive. The main result obtained is that the model can now deal with negative numbers because of the transformation applied in the negret matrix. We further extend these results to another of Yager's models, which mixes valuations and negrets. Unfortunately, in this case we are not able to deal with negative numbers because the valuations can be either positive or negative.
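The reason the negret matrix must contain only positive values is visible in the definition of a geometric aggregation operator: a weighted geometric mean, prod(v_i ** w_i), is undefined for non-positive inputs. The sketch below shows that standard operator; the sample negrets and weights are illustrative, not taken from the paper.

```python
import math

def weighted_geometric_mean(values, weights):
    """Weighted geometric aggregation: prod(v_i ** w_i), with the
    weights summing to 1. Requires strictly positive values, which is
    why an all-positive negret matrix is needed before aggregating."""
    if any(v <= 0 for v in values):
        raise ValueError("geometric aggregation needs positive values")
    return math.prod(v ** w for v, w in zip(values, weights))

# illustrative negrets of one alternative across three states of nature
negrets = [4.0, 2.0, 8.0]
weights = [0.5, 0.25, 0.25]
print(weighted_geometric_mean(negrets, weights))
```

The decision maker would aggregate each row of the negret matrix this way and then choose the alternative with the best aggregated value.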
Abstract: Since dealing with high-dimensional data is computationally complex and sometimes even intractable, several feature reduction methods have recently been developed to reduce the dimensionality of the data and simplify analysis in applications such as text categorization, signal processing, image retrieval and gene expression analysis. Among feature reduction techniques, feature selection is one of the most popular methods because it preserves the original features.
In this paper, we propose a new unsupervised feature selection method that removes redundant features from the original feature space by using the probability density functions of the various features. To show the effectiveness of the proposed method, popular feature selection methods have been implemented and compared. Experimental results on several datasets from the UCI repository illustrate the effectiveness of our proposed method, in comparison with the other methods, in terms of both classification accuracy and the number of selected features.
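One simple way to realize density-based redundancy removal is to estimate each feature's density with a histogram and greedily drop any feature whose density overlaps an already-kept feature's too strongly. The sketch below uses histogram intersection as the overlap measure; both that measure and the threshold are illustrative stand-ins for the paper's unspecified PDF-based criterion.

```python
def histogram_pdf(values, bins=5, lo=0.0, hi=1.0):
    """Estimate a feature's probability density as a normalized histogram."""
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    total = len(values)
    return [c / total for c in counts]

def select_features(columns, threshold=0.9):
    """Greedy unsupervised redundancy removal: keep a feature only if
    its estimated density does not overlap any kept feature's density
    by more than `threshold` (overlap = histogram intersection)."""
    kept = []
    for j, col in enumerate(columns):
        pdf = histogram_pdf(col)
        redundant = any(
            sum(min(a, b) for a, b in zip(pdf, histogram_pdf(columns[k])))
            > threshold
            for k in kept)
        if not redundant:
            kept.append(j)
    return kept

# features 0 and 1 are near-duplicates; feature 2 lives elsewhere in [0, 1]
cols = [[0.1, 0.15, 0.2, 0.12],
        [0.1, 0.16, 0.21, 0.12],
        [0.8, 0.9, 0.85, 0.95]]
print(select_features(cols))  # [0, 2]
```

Being based only on the feature values themselves, this criterion needs no class labels, which is what makes the method unsupervised.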