Abstract: The evolutionary design of electronic circuits, or
evolvable hardware, is a discipline that allows the user to
automatically obtain the desired circuit design. The circuit
configuration is under the control of evolutionary algorithms. Several
researchers have used evolvable hardware to design electrical
circuits. Whenever a particular algorithm is selected to carry
out the evolution, its parameters, such as mutation rate,
population size, and selection mechanism, must be tuned to
achieve the best results during the evolution process. This
paper investigates the ability of an evolution strategy to evolve digital
logic circuits based on programmable logic array structures when
different mutation rates are used. Several mutation rates (fixed and
variable) are analyzed and compared with each other to outline the
most appropriate choice to be used during the evolution of
combinational logic circuits. The experimental results outlined in this
paper are important because they can guide any researcher who
needs to use an evolutionary algorithm to design digital logic
circuits.
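The comparison described above can be sketched abstractly. Below is a minimal, hypothetical (1+1) evolution strategy on a bitstring genotype (a stand-in for a PLA configuration string), in which fixed and variable mutation rates share one interface; all names and parameter values here are illustrative assumptions, not taken from the paper:

```python
import random

random.seed(0)  # for reproducibility of this sketch


def evolve(fitness, n_bits=64, generations=2000, rate=lambda g: 1 / 64):
    """Minimal (1+1) evolutionary loop on a bitstring genotype.

    `rate` maps the generation index to a per-bit mutation probability,
    so fixed and variable mutation schedules share one interface.
    """
    parent = [random.randint(0, 1) for _ in range(n_bits)]
    best = fitness(parent)
    for g in range(generations):
        p = rate(g)
        # Flip each bit independently with probability p.
        child = [b ^ (random.random() < p) for b in parent]
        f = fitness(child)
        if f >= best:  # elitist replacement: keep the better individual
            parent, best = child, f
    return parent, best


# Toy fitness: number of set bits (a stand-in for a circuit's score
# against its truth table).
ones = sum

fixed = lambda g: 1 / 64                             # fixed rate 1/L
decaying = lambda g: max(0.5 * 0.999 ** g, 1 / 64)   # variable: high -> low
_, f_fixed = evolve(ones, rate=fixed)
_, f_var = evolve(ones, rate=decaying)
```

The decaying schedule illustrates the common heuristic of exploring broadly early and exploiting late; which schedule wins on real circuit encodings is exactly the empirical question the abstract addresses.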
Abstract: Various forms of intelligence and inspiration have been
adopted into iterative searching processes called meta-heuristics.
They intelligently perform exploration and exploitation of the
solution space, aiming to efficiently seek near-optimal solutions.
In this work, the bee algorithm, inspired by the natural foraging
behaviour of honey bees, was adapted to find near-optimal solutions
to a transportation management problem, dynamic multi-zone
dispatching. This problem involves uncertain and changing customer
demand. In striving to remain competitive, a transportation system
must therefore be flexible enough to cope with changes in customer
demand, in terms of inbound and outbound goods, and with
technological innovations. To maintain a high service level at lower
management cost via a minimal-imbalance scenario, rearrangement
penalties for the areas in each zone, across time periods, are also
included. However, the performance of the algorithm depends on
appropriate parameter settings, which need to be determined and
analysed before implementation. BEE parameters are determined
through linear constrained response surface optimisation (LCRSOM)
and the weighted centroid modified simplex method (WCMSM).
Experimental results were analysed in terms of the best solutions
found, and the mean and standard deviation of the imbalance values,
including the convergence of the solutions obtained. It was found
that the results obtained with LCRSOM were better than those
obtained with WCMSM; however, the average execution time of an
experimental run using LCRSOM was longer. Finally, a
recommendation of proper level settings of BEE parameters for
selected problem sizes is given as a guideline for future applications.
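For readers unfamiliar with the underlying search, a minimal sketch of the bee algorithm on a toy one-dimensional objective may help. This is an illustrative simplification, not the dispatching model from the paper; every name and parameter value is an assumption:

```python
import random

random.seed(1)  # for reproducibility of this sketch


def bees_algorithm(f, lo, hi, n_scouts=20, n_best=5, n_recruits=10,
                   patch=0.1, iters=100):
    """Minimal bee algorithm for 1-D minimisation.

    Scout bees sample the space at random; the best patches receive
    recruit bees that search a shrinking neighbourhood around them,
    while the remaining bees keep scouting globally.
    """
    sites = [random.uniform(lo, hi) for _ in range(n_scouts)]
    for _ in range(iters):
        sites.sort(key=f)
        new_sites = []
        for s in sites[:n_best]:  # local (neighbourhood) search
            local = [min(hi, max(lo, s + random.uniform(-patch, patch)))
                     for _ in range(n_recruits)]
            new_sites.append(min(local + [s], key=f))  # keep site's best bee
        # Remaining bees scout new random sites (global search).
        new_sites += [random.uniform(lo, hi) for _ in range(n_scouts - n_best)]
        sites = new_sites
        patch *= 0.95  # shrink the neighbourhood over time
    return min(sites, key=f)


# Toy objective with a single minimum at x = 2.
x = bees_algorithm(lambda x: (x - 2) ** 2, -10, 10)
```

In the paper's setting the "site" would be a candidate zone/dispatch assignment and `f` the imbalance cost; the parameters being tuned by LCRSOM and WCMSM correspond to values such as `n_scouts`, `n_best`, and `n_recruits` here.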
Abstract: This paper shows the potential benefits of a simple
sun-tracking solar system using a stepper motor and a light sensor.
The method increases power-collection efficiency by means of a
device that tracks the sun so as to keep the panel at a right angle to
its rays. A solar tracking system is designed, implemented and
experimentally tested. The design details and the experimental
results are presented.
Abstract: This study reports the preparation of soft magnetic
ribbons of Fe-based amorphous alloys using the single-roller melt-spinning technique. Ribbon width varied from 142 mm to 213
mm, with a thickness of approximately 22 ± 2 μm. The microstructure and magnetic properties of the ribbons were
characterized by differential scanning calorimetry (DSC), X-ray diffraction (XRD), vibrating sample magnetometry (VSM), and electrical resistivity measurements (ERM). The dependence of the
amorphous material properties on the cooling rate and nozzle pressure, and the uneven surface across the ribbon thickness, are investigated. Magnetic
measurement results indicate that some regions of the ribbon exhibit good magnetic properties, with higher saturation induction and lower coercivity. However, due to the uneven surface of the 213 mm wide
ribbon, its magnetic response is not uniformly distributed. To
understand transformer magnetic performance, this study analyzes measurements of a three-phase 2 MVA amorphous-cored transformer. Experimental results confirm that the transformer with a
ribbon width of 142 mm has better magnetic properties in terms of lower core loss, exciting power, and audible noise.
Abstract: An evolutionary method whose selection and recombination
operations are based on generalization error-bounds of
support vector machine (SVM) can select a subset of potentially
informative genes for an SVM classifier very efficiently [7]. In this
paper, we will use the derivative of error-bound (first-order criteria)
to select and recombine gene features in the evolutionary process,
and compare the performance of the derivative of error-bound with
the error-bound itself (zero-order) in the evolutionary process. We
also investigate several error-bounds and their derivatives to compare
the performance, and find the best criteria for gene selection
and classification. We use 7 cancer-related human gene expression
datasets to evaluate the performance of the zero-order and first-order
criteria of the error-bounds. Although the two criteria are
theoretically based on the same strategy, the experimental results
identify the best criterion for microarray gene expression data.
Abstract: Texture information plays an increasingly important
role in remotely sensed imagery classification and in many pattern
recognition applications. However, selecting relevant textural
features to improve classification accuracy is not a straightforward
task. This work investigates the effectiveness of two Mutual
Information Feature Selector (MIFS) algorithms to select salient
textural features that contain highly discriminatory information for
multispectral imagery classification. The input candidate features are
extracted from a SPOT High Resolution Visible (HRV) image using
Wavelet Transform (WT) at levels (l = 1,2).
The experimental results show that the textural features selected by
the MIFS algorithms improve the classification accuracy more than
classical approaches such as Principal Components Analysis (PCA)
and Linear Discriminant Analysis (LDA).
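As a rough illustration of the selection criterion, the following hypothetical sketch follows Battiti's classical MIFS formulation (pick the feature maximising relevance to the class minus redundancy with already-selected features); it is not the exact variant evaluated in the paper, and all data and names are invented:

```python
import math
from collections import Counter


def mutual_information(xs, ys):
    """I(X;Y) in nats for two discrete value sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())


def mifs(features, labels, k, beta=0.5):
    """Greedy MIFS: repeatedly pick the feature maximising
    I(f; class) - beta * sum(I(f; s) for already-selected s)."""
    remaining = dict(features)  # name -> discrete value sequence
    selected = []
    for _ in range(k):
        best = max(remaining,
                   key=lambda f: mutual_information(remaining[f], labels)
                   - beta * sum(mutual_information(remaining[f], features[s])
                                for s in selected))
        selected.append(best)
        del remaining[best]
    return selected


# Toy data: f1 predicts the class perfectly, f3 is pure noise.
labels = [0, 0, 1, 1, 0, 1, 0, 1]
feats = {"f1": labels[:], "f2": labels[:], "f3": [0, 1, 0, 1, 1, 0, 0, 1]}
picked = mifs(feats, labels, k=2)
```

In the paper's setting the candidate features would be the wavelet-derived texture measures and the labels the land-cover classes; the `beta` redundancy weight is one of the design choices that distinguishes MIFS variants.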
Abstract: This work presents the experimental results obtained
at a pilot plant which works with a slow, wet and catalytic pyrolysis
process of dry fowl manure. This kind of process consists mainly of
the cracking of the organic matrix, followed by the reaction of
carbon with water, which is either already contained in the organic
feed or added, to produce carbon monoxide and hydrogen. Reactions
are conducted in a rotating reactor maintained at a temperature of
500°C; the required amount of water is about 30% of the dry organic
feed. This operation yields a gas containing about 59% (on a volume
basis) of hydrogen, 17% of carbon monoxide, and other products,
such as light hydrocarbons (methane, ethane, propane) and carbon
dioxide, in lesser amounts. The gas coming from the reactor can be
used to produce not only electricity, through internal combustion
engines, but also heat, through direct combustion in industrial
boilers. Furthermore, as the produced gas is devoid of both solid
particles and pollutant species (such as dioxins and furans), the
process (in this case applied to fowl manure) can be considered an
optimal way of disposing of organic materials while simultaneously
recovering their energy content, without damaging the
environment.
Abstract: Owing to the availability of powerful image-processing
software and growing computer literacy, it has become easy to
tamper with images. The manipulation of digital images in fields
such as courts of law and medical imaging creates a serious problem
nowadays. Copy-move forgery is one of the most common types
of forgery: part of an image is copied and pasted onto another part
of the same image, typically to cover an important scene. In
this paper, a copy-move forgery detection method based on the
Fourier transform is proposed. First, the image is divided into
blocks of equal size and the Fourier transform is computed for each
block. Similarity between the Fourier transforms of different blocks
provides an indication of a copy-move operation. The experimental
results show that the proposed method runs in reasonable time and
works well for both grayscale and colour images. The use of the
Fourier transform also reduces the method's computational complexity.
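The block-matching idea can be sketched in a few lines. The following is a simplified, hypothetical illustration (non-overlapping blocks, a brute-force pairwise comparison, a hand-rolled DFT, and an invented threshold); the paper's actual method may differ in all of these details:

```python
import cmath
import random


def dft2_magnitudes(block):
    """Magnitudes of the 2-D DFT of a square block (list of rows)."""
    n = len(block)
    return [abs(sum(block[y][x] * cmath.exp(-2j * cmath.pi * (u * y + v * x) / n)
                    for y in range(n) for x in range(n)))
            for u in range(n) for v in range(n)]


def detect_copy_move(img, block=8, threshold=1e-9):
    """Tile the image into equal-size blocks, describe each by its
    Fourier magnitudes, and report pairs of blocks whose descriptors
    are (near-)identical -- the signature of a copy-move operation."""
    h, w = len(img), len(img[0])
    descs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = [row[x:x + block] for row in img[y:y + block]]
            descs.append(((y, x), dft2_magnitudes(tile)))
    matches = []
    for i in range(len(descs)):
        for j in range(i + 1, len(descs)):
            (ca, fa), (cb, fb) = descs[i], descs[j]
            if sum((p - q) ** 2 for p, q in zip(fa, fb)) < threshold:
                matches.append((ca, cb))
    return matches


# Toy forgery: paste one 8x8 region of a random image over another.
random.seed(0)
img = [[random.random() for _ in range(32)] for _ in range(32)]
for dy in range(8):
    img[16 + dy][16:24] = img[dy][0:8]  # the "copy-move" operation
hits = detect_copy_move(img)
```

A production detector would use overlapping blocks, sort or hash the descriptors instead of comparing all pairs, and tolerate lossy re-compression via a looser threshold; this sketch only shows why identical Fourier descriptors flag duplicated regions.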
Abstract: Composite materials were prepared from sawdust, cassava starch and natural rubber (NR) latex. Mixtures of 15% w/v gelatinized cassava starch and 15% w/v PVOH were used as the binder of these composite materials. Concentrated rubber latex was added to the mixtures, which were then mixed vigorously with the treated sawdust in a ratio of 70:30 until a uniform dispersion was achieved. The batters were subjected to hot compression moulding at a temperature of 160°C and a pressure of 3,000 psi for 5 min. The experimental results showed that the mechanical properties of the composite materials containing gelatinized cassava starch and PVOH in a ratio of 2:1, 20% NR latex by weight of the dry starch, and sawdust treated with 5% NaOH or 1% BPO were the best. This formulation gave the maximal compression strength (341.10 ± 26.11 N), puncture resistance (8.79 ± 0.98 N/mm2) and flexural strength (3.99 ± 0.72 N/mm2). It was also found that the physicochemical and mechanical properties of the composites depend strongly on the quality of the interface between the sawdust, cassava starch and NR latex.
Abstract: Flexible macroblock ordering (FMO), adopted in the
H.264 standard, allows all macroblocks (MBs) in a frame to be
partitioned into separate groups of MBs called Slice Groups (SGs). FMO can not
only support error-resilience, but also control the size of video packets
for different network types. However, it is well-known that the number
of bits required for encoding the frame is increased by adopting FMO.
In this paper, we propose a novel algorithm that can reduce the bitrate
overhead caused by utilizing FMO. In the proposed algorithm, all MBs
are grouped in SGs based on the similarity of the transform
coefficients. Experimental results show that our algorithm can reduce
the bitrate as compared with conventional FMO.
Abstract: The main goal of this study is to analyze all relevant properties of electro-hydraulic systems and, based on that analysis, to make a proper choice of the neural network control strategy that may be used to control the mechatronic system. A combination of electronic and hydraulic systems is widely used since it combines the advantages of both. Hydraulic systems are widespread because of properties such as accuracy, flexibility, a high horsepower-to-weight ratio, fast starting, stopping and reversal with smoothness and precision, and simplicity of operation. On the other hand, the modern control of hydraulic systems is based on controlling the current fed to the inductive solenoid that sets the position of the hydraulic valve. Since this circuit can easily be driven by a PWM (Pulse Width Modulation) signal of a proper frequency, the combination of electrical and hydraulic systems has become very fruitful and usable in specific areas such as the aircraft and military industries. The study shows and discusses the experimental results obtained with the neural network control strategy using MATLAB and SIMULINK [1]. Finally, special attention was paid to the possibility of neuro-controller design, its application to the control of electro-hydraulic systems, and a comparison with other kinds of control.
Abstract: Measuring the quality of image compression is important for image-processing applications. In this paper, we propose an objective image quality assessment for measuring the quality of grayscale compressed images that correlates well with the subjective quality measurement (MOS) and takes the least time. The new objective image quality measurement is developed from a few fundamental objective measurements for evaluating the quality of images compressed with JPEG and JPEG2000. The reliability between each fundamental objective measurement and the subjective measurement (MOS) is determined. From the experimental results, we found that the Maximum Difference measurement (MD) and a newly proposed measurement, Structural Content Laplacian Mean Square Error (SCLMSE), are suitable measurements for evaluating the quality of JPEG2000 and JPEG compressed images, respectively. In addition, the MD and SCLMSE measurements are scaled to make them equivalent to MOS, giving a rating of compressed image quality from 1 to 5 (unacceptable to excellent quality).
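Of the measurements named above, MD has a simple standard form that can be shown directly; SCLMSE, the paper's own proposal, is deliberately omitted here because its exact formula is not given in the abstract. A minimal sketch with invented toy pixel data:

```python
def maximum_difference(original, compressed):
    """Maximum Difference (MD): the largest absolute pixel error
    between the original and compressed images (flat gray-level lists)."""
    return max(abs(o - c) for o, c in zip(original, compressed))


# Toy 4-pixel example (invented values): the worst pixel is off by 10.
orig = [10, 50, 200, 128]
comp = [12, 47, 190, 128]
md = maximum_difference(orig, comp)  # -> 10
```

Because MD reports only the single worst error, it is cheap to compute, which matches the abstract's emphasis on the least time taken.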
Abstract: We provide a detailed analysis of a waveguide-based Schottky barrier photodetector (SBPD) in which a thin silicide film is placed on top of a silicon-on-insulator (SOI) channel waveguide to absorb light propagating along the waveguide. Taking into account both the confinement factor of light absorption and the wall-scanning-induced gain of the photoexcited carriers, an optimized silicide thickness is extracted that maximizes the effective gain and thereby the responsivity. For typical lengths of the thin silicide film (10-20 μm), the optimized thickness is estimated to be in the range of 1-2 nm, and only about 50-80% of the light power needs to be absorbed to reach the maximum responsivity. Resonant waveguide-based SBPDs are proposed, which consist of a microloop, microdisc, or microring waveguide structure that allows light to propagate multiple times along the circular Si waveguide beneath the thin silicide film. Simulation results suggest that such resonant waveguide-based SBPDs have much higher responsivity at the resonant wavelengths compared with straight waveguide-based detectors. Some experimental results on Si waveguide-based SBPDs are also reported.
Abstract: In-place sorting algorithms play an important role in many fields, such as very large database systems, data warehouses, and data mining. Such algorithms maximize the size of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases: the first rearranges the input unsorted array in place, resulting in segments that are ordered relative to each other but whose elements are yet to be sorted. The first phase requires linear time, while in the second phase the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. In the worst case, for an array of size n, the algorithm performs O(n log z) element comparisons and O(n log z) element moves. Further, no auxiliary arithmetic operations with indices are required. Besides these theoretical achievements, the algorithm is of practical interest because of its simplicity. Experimental results also show that it outperforms other in-place sorting algorithms. Finally, the analysis of time and space complexity and the required number of moves are presented, along with the auxiliary storage requirements of the proposed algorithm.
Abstract: Fine-grained data replication over the Internet allows the duplication of frequently accessed data objects, as opposed to entire sites, at certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals, such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretic techniques, and in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game-theory-based mechanism studies what happens when independent agents act selfishly and how to control them to maximize overall performance. A bidding mechanism asks how one can design a system so that the agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution times. Comparisons are made against some well-known techniques, such as greedy, branch and bound, game-theoretic auctions, and genetic algorithms.
Abstract: The paper compares air velocity values reproduced by a laser Doppler anemometer (LDA) and an ultrasonic anemometer (UA) with values calculated from flow-rate measurements using a gas meter whose calibration uncertainty is ± (0.15 - 0.30)%. The investigation was performed in a channel installed in an aerodynamic facility used as part of the national standard of air velocity. The relations defined in this research confirm the LDA and UA to be the most advantageous means of air velocity reproduction. The results affirm the ultrasonic anemometer to be a reliable and favourable instrument for measuring mean velocity, or for controlling velocity stability, in the range of 0.05 m/s - 10 (15) m/s when the LDA is used. The main aim of this research is to investigate low-velocity regularities, starting from 0.05 m/s, covering the turbulent, laminar and transitional flow regions. Theoretical and experimental results and a brief analysis of them are given in the paper. Maximum and mean velocity relations for transitional air flow, which has a unique distribution, are represented. Transitional flow, whose characteristics are distinct from those of laminar and turbulent flow, has not yet been analysed experimentally.
Abstract: Automated discovery of hierarchical structures in
large data sets has been an active research area in the recent past.
This paper focuses on the issue of mining generalized rules with crisp
hierarchical structure using Genetic Programming (GP) approach to
knowledge discovery. The post-processing scheme presented in this
work uses flat rules as initial individuals of GP and discovers
hierarchical structure. Suitable genetic operators are proposed for the
suggested encoding. Based on the Subsumption Matrix (SM), an
appropriate fitness function is suggested. Finally, Hierarchical
Production Rules (HPRs) are generated from the discovered
hierarchy. Experimental results are presented to demonstrate the
performance of the proposed algorithm.
Abstract: There are two common types of operational research techniques: optimisation and metaheuristic methods. The latter may be defined as a sequential process that intelligently performs exploration and exploitation, adopted from natural intelligence and strong inspirations, to form iterative searches. The aim is to effectively determine near-optimal solutions in a solution space. In this work, a type of metaheuristic called Ant Colony Optimisation (ACO), inspired by the foraging behaviour of ants, was adapted to find optimal solutions of eight non-linear continuous mathematical models. Considering a solution space within a specified region of each model, sub-solutions may contain a global optimum or multiple local optima. Moreover, the algorithm has several common parameters, namely the number of ants, moves, and iterations, which act as the algorithm's drivers. A series of computational experiments for initialising the parameters was conducted using the Rigid Simplex (RS) and Modified Simplex (MSM) methods. Experimental results were analysed in terms of the best-so-far solutions, mean, and standard deviation. Finally, a recommendation of proper level settings of the ACO parameters for all eight functions is stated. These parameter settings can be applied as a guideline for future uses of ACO, promoting its ease of use in real industrial processes. It was found that the results obtained from MSM were rather similar to those gained from RS. However, when results with noise standard deviations of 1 and 3 are compared, MSM reaches optimal solutions more efficiently than RS in terms of speed of convergence.
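A toy sketch of ant-colony-style search on one continuous function may clarify the roles of the parameters mentioned (ants and iterations). This follows the archive-based continuous-ACO idea only in spirit; it is not the paper's implementation, and all names, values, and the evaporation rule are illustrative assumptions:

```python
import random

random.seed(2)  # for reproducibility of this sketch


def aco_continuous(f, lo, hi, n_ants=10, iters=200, evap=0.9):
    """Ant-colony-style minimisation of a 1-D function: ants sample
    around an archive of good solutions (the "pheromone"), and the
    sampling spread evaporates, narrowing the search over time."""
    archive = sorted((random.uniform(lo, hi) for _ in range(n_ants)), key=f)
    spread = (hi - lo) / 2
    for _ in range(iters):
        # Each ant samples near one of the three best archived solutions.
        ants = [min(hi, max(lo, random.gauss(random.choice(archive[:3]), spread)))
                for _ in range(n_ants)]
        archive = sorted(archive + ants, key=f)[:n_ants]  # keep the best
        spread *= evap  # "evaporation": exploration gives way to exploitation
    return archive[0]


# Toy non-linear model with a single optimum at x = 3.
best = aco_continuous(lambda x: (x - 3) ** 2, -10, 10)
```

On multimodal test functions like those in the paper, the settings of `n_ants`, `iters`, and the evaporation rate decide whether the colony escapes local optima before the spread collapses, which is exactly why their initialisation (here via RS or MSM) matters.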
Abstract: In this paper, a novel copyright protection scheme for digital images based on Visual Cryptography and Statistics is proposed. In our scheme, the theories and properties of the sampling distribution of the mean and of visual cryptography are employed to achieve the requirements of robustness and security. Our method does not need to alter the original image and can identify ownership without resorting to the original image. Besides, our method allows multiple watermarks to be registered for a single host image without causing any damage to the other hidden watermarks. Moreover, our scheme also makes it possible to cast a larger watermark into a smaller host image. Finally, experimental results show the robustness of our scheme against several common attacks.
Abstract: In most rule-induction algorithms, the only operator used with nominal attributes is the equality operator, =. In this paper, we first propose the use of the inequality operator, ≠, in addition to the equality operator, to increase the expressiveness of induced rules. Then, we present a new method, Binary Coding, which can be used along with an arbitrary rule-induction algorithm to make use of the inequality operator without any need to change the algorithm. Experimental results suggest that the Binary Coding method is promising enough for further investigation, especially in cases where a minimal number of rules is desirable.
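One plausible reading of such a coding, sketched here purely as an assumption (the paper's exact encoding may differ), is to give each nominal value a fixed-width binary code: equality tests on the new binary attributes can then express value subsets, including the complement sets that the inequality operator denotes, without changing the induction algorithm:

```python
def binary_code(values):
    """Map each distinct nominal value to a fixed-width tuple of bits.

    After recoding, an equality test on a single bit splits the value
    set in two, so subsets of values -- including the complements that
    the inequality operator (attr != v) denotes -- become expressible
    with equality tests alone.
    """
    distinct = sorted(set(values))
    width = max(1, (len(distinct) - 1).bit_length())
    code = {v: tuple((i >> b) & 1 for b in reversed(range(width)))
            for i, v in enumerate(distinct)}
    return [code[v] for v in values], code


# Toy nominal attribute with three values -> 2-bit codes.
colors = ["red", "green", "blue", "green"]
encoded, code = binary_code(colors)
```

For example, with three colour values a single test `bit0 = 0` covers two of them at once, something a pure equality rule on the original attribute would need two rules (or one ≠ test) to express.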