Abstract: This paper presents an approach based on a supervised feed-forward neural network, namely the multilayer perceptron (MLP), combined with the finite element method (FEM) to solve the inverse problem of parameter identification. The approach is used to identify unknown parameters of ferromagnetic materials. The methodology consists of simulating a large number of parameter combinations for a material under test using FEM, considering variations in both the relative magnetic permeability and the electrical conductivity of the material. The simulation results are then used to generate a set of training vectors for the MLP network. Finally, the trained network is used to evaluate a group of new materials, simulated by FEM but not belonging to the original dataset. Noise added to the probe measurements is used to enhance the robustness of the method. The results demonstrate the efficiency of the proposed approach and encourage future work on this subject.
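As a rough illustration of the pipeline in the abstract above, the sketch below trains an MLP on placeholder data standing in for the FEM simulations; all array shapes, network sizes, and the noise level are assumptions, and scikit-learn's MLPRegressor stands in for whatever implementation the authors used.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Placeholder for FEM-simulated probe responses: each row is one simulated
# material, each column one probe measurement (illustrative shapes only).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))   # simulated probe signals
y = rng.uniform(size=(1000, 2))   # [relative permeability, conductivity], scaled

# Noise added to the probe measurements, as in the abstract, for robustness.
X_noisy = X + rng.normal(scale=0.01, size=X.shape)

scaler = StandardScaler().fit(X_noisy)
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(scaler.transform(X_noisy), y)

# Evaluate on new FEM-simulated materials not in the training set.
X_new = rng.normal(size=(10, 16))
params_estimated = mlp.predict(scaler.transform(X_new))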
Abstract: Breast carcinoma is the most common form of cancer in women. Multicolour fluorescence in-situ hybridisation (m-FISH) is a common method for staging breast carcinoma. The interpretation of m-FISH images is complicated by two effects: (i) spectral overlap in the emission spectra of fluorochrome-marked DNA probes and (ii) tissue autofluorescence. In this paper, hyperspectral images of m-FISH samples are used and spectral unmixing is applied to produce false-colour images with higher contrast and better information content than standard RGB images. The spectral unmixing is realised by combinations of Orthogonal Projection Analysis (OPA), Alternating Least Squares (ALS), Simple-to-use Interactive Self-Modeling Mixture Analysis (SIMPLISMA), and VARIMAX, which are applied to the data to reduce tissue autofluorescence and resolve the spectral overlap in the emission spectra. The results show that spectral unmixing reduces the intensity caused by tissue autofluorescence by up to 78% and enhances image contrast by algorithmically reducing the overlap of the emission spectra.
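Of the listed techniques, the ALS step lends itself to a compact illustration. Below is a minimal sketch of non-negative alternating least squares unmixing on a pixels-by-bands matrix; the array shapes are illustrative, and the random initialisation stands in for the OPA/SIMPLISMA starting estimates used in practice.

import numpy as np

def als_unmix(D, n_components, n_iter=200, seed=0):
    """Alternating least squares factorisation D ~ C @ S with non-negativity clipping.
    D: (pixels, bands) hyperspectral matrix; C: abundances; S: component spectra."""
    rng = np.random.default_rng(seed)
    S = rng.random((n_components, D.shape[1]))       # random start (OPA/SIMPLISMA would refine this)
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S), 0, None)  # fix spectra, solve for abundances
        S = np.clip(np.linalg.pinv(C) @ D, 0, None)  # fix abundances, solve for spectra
    return C, S

# Illustrative data: a 100x100-pixel field, 31 spectral bands,
# four fluorochromes plus one autofluorescence component.
rng = np.random.default_rng(1)
D = rng.random((100 * 100, 31))
C, S = als_unmix(D, n_components=5)
false_colour = C[:, :3].reshape(100, 100, 3)         # map three abundance maps to RGB channels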
Abstract: In this paper, we propose a new image segmentation approach for colour textured images. The proposed method consists of two stages. In the first stage, textural features are computed using the grey-level co-occurrence matrix (GLCM) for the regions of interest (ROI) considered for each class; the ROIs act as ground truth for the classes. The Ohta colour model (I1, I2, I3) is used for segmentation, and the statistical mean of the I2 component at a certain inter-pixel distance (IPD) was found to be the optimal textural feature for further segmentation. In the second stage, the resulting feature matrix is assumed to be a degraded version of the image labels, and a Markov Random Field (MRF) model is used to model the unknown labels. The labels are estimated under the maximum a posteriori (MAP) criterion using the ICM algorithm. The performance of the proposed approach is compared with that of existing schemes: JSEG and a scheme that uses GLCM and MRF in the RGB colour space. The proposed method is found to outperform the existing ones in terms of segmentation accuracy with an acceptable rate of convergence. The results are validated on synthetic and real textured images.
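A minimal sketch of the first-stage feature extraction, using scikit-image's GLCM routines on a synthetic channel standing in for the Ohta I2 component; the inter-pixel distance and the mean-feature formula follow the textbook GLCM definitions, not necessarily the paper's exact choices.

import numpy as np
from skimage.feature import graycomatrix

# Illustrative stand-in for the Ohta I2 component of one ROI, quantised to 8 bits.
rng = np.random.default_rng(0)
i2_roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

ipd = 2  # inter-pixel distance (IPD); the paper selects an optimal value
glcm = graycomatrix(i2_roi, distances=[ipd], angles=[0], levels=256,
                    symmetric=True, normed=True)

# Statistical mean of the grey-level co-occurrence distribution at this IPD.
levels = np.arange(256)
mean_feature = float((levels[:, None] * glcm[:, :, 0, 0]).sum())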
Abstract: Using the entropy weight and TOPSIS methods, this paper presents a comprehensive evaluation of the development level of China's regional service industry. Firstly, based on existing research results, an evaluation index system is constructed covering the scale of development, the industrial structure, and the economic benefits. An evaluation model based on entropy weight and TOPSIS is then built, and an empirical analysis is conducted on the development level of the service industries in 31 Chinese provinces during 2006-2009, from the two dimensions of time series and cross section, which provides a new approach for assessing the regional service industry. Furthermore, the 31 provinces are classified into four categories based on the evaluation results, and an in-depth analysis of these results is carried out.
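A minimal sketch of the entropy weight and TOPSIS computation on a small decision matrix (provinces by indicators); the numbers are illustrative and all indicators are assumed to be benefit-type.

import numpy as np

def entropy_topsis(X):
    """X: (alternatives, criteria), benefit-type criteria. Returns closeness scores."""
    P = X / X.sum(axis=0)                          # share of each alternative per criterion
    logP = np.log(P, out=np.zeros_like(P), where=P > 0)
    e = -(P * logP).sum(axis=0) / np.log(len(X))   # entropy of each criterion
    w = (1 - e) / (1 - e).sum()                    # entropy weights
    V = w * X / np.sqrt((X ** 2).sum(axis=0))      # weighted, vector-normalised matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                 # relative closeness, higher is better

# Illustrative data: 5 provinces, 3 indicators (scale, structure, economic benefit).
X = np.array([[0.8, 0.3, 0.5],
              [0.6, 0.7, 0.4],
              [0.9, 0.5, 0.8],
              [0.4, 0.6, 0.3],
              [0.7, 0.4, 0.6]])
ranking = entropy_topsis(X).argsort()[::-1]        # provinces ranked by development level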
Abstract: In this paper, we suggest new product-type estimators for the population mean of the variable of interest, exploiting the first or the third quartile of the auxiliary variable. We obtain the mean square error equations and the bias of the estimators, and we study their properties under the simple random sampling (SRS) and ranked set sampling (RSS) methods. It is found that both SRS and RSS produce approximately unbiased estimators of the population mean; however, the RSS estimators are more efficient than those obtained using SRS based on the same number of measured units, for all values of the correlation coefficient.
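The SRS/RSS efficiency comparison can be illustrated with a small simulation; the sketch below estimates a plain population mean (not the proposed product-type estimators) under both designs, with perfect ranking assumed.

import numpy as np

rng = np.random.default_rng(0)

def srs_mean(pop, n):
    return rng.choice(pop, size=n, replace=False).mean()

def rss_mean(pop, m, cycles):
    """Ranked set sampling: in each cycle, draw m sets of m units and
    measure the i-th smallest unit of the i-th set (perfect ranking assumed)."""
    measured = []
    for _ in range(cycles):
        for i in range(m):
            s = rng.choice(pop, size=m, replace=False)
            measured.append(np.sort(s)[i])
    return np.mean(measured)

pop = rng.normal(loc=10, scale=2, size=10_000)
m, cycles = 4, 5                                 # 20 measured units under both designs
srs = [srs_mean(pop, m * cycles) for _ in range(1000)]
rss = [rss_mean(pop, m, cycles) for _ in range(1000)]
print(np.var(srs) / np.var(rss))                 # relative efficiency; values above 1 favour RSS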
Abstract: Most nonlinear equation solvers either do not always converge or require the derivatives of the function to approximate the root of such equations. Here, we give a derivative-free algorithm that guarantees convergence. The proposed two-step method, which is to some extent similar to the secant method, is accompanied by several numerical examples. The illustrative instances show that the rate of convergence of the proposed algorithm exceeds that of quadratically convergent iterative schemes.
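The paper's exact two-step iteration is not reproduced here, but the flavour of a derivative-free method with guaranteed convergence can be sketched as a secant step wrapped in a bisection safeguard.

def safeguarded_secant(f, a, b, tol=1e-12, max_iter=200):
    """Find a root of f in [a, b] with f(a)*f(b) < 0: try a secant step,
    fall back to bisection whenever the step leaves the bracket."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    for _ in range(max_iter):
        x = b - fb * (b - a) / (fb - fa)     # derivative-free secant step
        if not (min(a, b) < x < max(a, b)):
            x = 0.5 * (a + b)                # bisection safeguard keeps convergence guaranteed
        fx = f(x)
        if abs(fx) < tol or abs(b - a) < tol:
            return x
        if fa * fx < 0:
            b, fb = x, fx                    # keep the sign change inside [a, b]
        else:
            a, fa = x, fx
    return x

root = safeguarded_secant(lambda t: t**3 - 2*t - 5, 2.0, 3.0)  # ~2.0945514815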
Abstract: Moulded parts make up more than 70% of the components in many products, yet common defects such as warpage, shrinkage, sink marks, and weld lines persist, particularly in plastic injection moulding. In this paper, Taguchi experimental design methods are applied to reduce the warpage defect of a thin Acrylonitrile Butadiene Styrene (ABS) plate in two stages, namely Taguchi orthogonal arrays and the Analysis of Variance (ANOVA). Eight trials were run, from which the optimal parameters that minimize the warpage defect in the factorial experiment were obtained. The ANOVA results, compared with those derived from MINITAB, reveal the most significant factors causing warpage in the injection moulding process. Moreover, the ANOVA approach is more accurate than alternatives such as the S/N ratio, and by accounting for the interaction of factors it is possible to achieve better outcomes.
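A minimal sketch of the smaller-the-better signal-to-noise ratio that Taguchi analysis applies to a defect such as warpage; the eight-trial measurements below are fabricated placeholders, not the paper's data.

import numpy as np

def sn_smaller_is_better(y):
    """Taguchi S/N ratio (dB) for a 'smaller is better' response such as warpage."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Illustrative warpage measurements (mm), three repeats per trial of an L8 array.
trials = [[0.42, 0.45, 0.40], [0.31, 0.29, 0.33], [0.55, 0.52, 0.58], [0.27, 0.25, 0.28],
          [0.48, 0.50, 0.46], [0.36, 0.34, 0.37], [0.60, 0.63, 0.59], [0.22, 0.24, 0.21]]
sn = [sn_smaller_is_better(t) for t in trials]
best_trial = int(np.argmax(sn))   # highest S/N corresponds to the least warpage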
Abstract: This paper proposes a new method for image indexing and searching in databases using a color temperature histogram. The color temperature histogram improves the performance of content-based image retrieval by combining color temperature with a histogram representation. It can be represented by a range of 46 colors, which is more than either the color histogram or the dominant color temperature provides. Moreover, with our method, colors that share the same color temperature can be separated, whereas the dominant color temperature method cannot separate them. The results show that the color temperature histogram retrieves the correct image more often than either the dominant color temperature method or the color histogram method, and it also takes less time, so the color temperature histogram can be used for indexing and searching images.
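One plausible way to build such a descriptor is sketched below: per-pixel correlated colour temperature is computed with McCamy's approximation from linear sRGB values and binned into 46 ranges. The binning limits and the distance measure are assumptions; the paper's exact colour-temperature mapping is not reproduced.

import numpy as np

# Linear sRGB to CIE XYZ matrix.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def cct_histogram(rgb, n_bins=46, lo=1000.0, hi=20000.0):
    """Histogram of per-pixel correlated colour temperature (McCamy's formula)."""
    xyz = rgb.reshape(-1, 3) @ M.T
    s = xyz.sum(axis=1)
    valid = s > 1e-9
    x = xyz[valid, 0] / s[valid]
    y = xyz[valid, 1] / s[valid]
    n = (x - 0.3320) / (0.1858 - y)
    cct = 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33
    hist, _ = np.histogram(np.clip(cct, lo, hi), bins=n_bins, range=(lo, hi))
    return hist / hist.sum()                     # normalised 46-bin descriptor

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                    # illustrative linear-RGB image
query, candidate = cct_histogram(img), cct_histogram(img ** 0.9)
distance = np.abs(query - candidate).sum()       # L1 histogram distance for retrieval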
Abstract: The design of a gravity dam is performed through an
interactive process involving a preliminary layout of the structure
followed by a stability and stress analysis. This study presents a
method to define the optimal top width of gravity dam with genetic
algorithm. To solve the optimization task (minimize the cost of the
dam), an optimization routine based on genetic algorithms (GAs) was
implemented into an Excel spreadsheet. It was found to perform well
and GA parameters were optimized in a parametric study. Using the
parameters found in the parametric study, the top width of gravity
dam optimization was performed and compared to a gradient-based
optimization method (classic method). The accuracy of the results
was within close proximity. In optimum dam cross section, the ratio
of is dam base to dam height is almost equal to 0.85, and ratio of dam
top width to dam height is almost equal to 0.13. The computerized
methodology may provide the help for computation of the optimal
top width for a wide range of height of a gravity dam.
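A compact sketch of the kind of GA routine described, written in Python rather than an Excel spreadsheet; the cost model, bounds, and GA settings are placeholders rather than the paper's dam-cost formulation.

import numpy as np

rng = np.random.default_rng(0)

def dam_cost(top_width):
    """Hypothetical smooth cost model with a single minimum (placeholder only)."""
    return (top_width - 6.5) ** 2 + 40.0

def ga_minimise(f, lo, hi, pop_size=30, generations=100, sigma=0.3):
    pop = rng.uniform(lo, hi, size=pop_size)
    for _ in range(generations):
        fitness = np.array([f(x) for x in pop])
        parents = pop[np.argsort(fitness)][: pop_size // 2]       # truncation selection
        kids = rng.choice(parents, size=pop_size - len(parents))  # pick first parents
        mates = rng.choice(parents, size=kids.size)               # pick second parents
        kids = 0.5 * (kids + mates) + rng.normal(0, sigma, kids.size)  # blend crossover + mutation
        pop = np.clip(np.concatenate([parents, kids]), lo, hi)
    return pop[np.argmin([f(x) for x in pop])]

best_top_width = ga_minimise(dam_cost, lo=2.0, hi=12.0)  # metres, illustrative bounds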
Abstract: The k-nearest neighbors (kNN) algorithm is a simple but effective method of classification. In this paper, we present an extended version of this technique for chemical compounds used in High Throughput Screening, in which the distances to the nearest neighbors are taken into account. Our algorithm uses kernel weight functions as guidance for the process of defining activity in screening data. The proposed kernel weight function aims to combine properties of the graphical structure and molecular descriptors of screening compounds. We apply the modified kNN method to several experimental datasets from biological screens. The experimental results confirm the effectiveness of the proposed method.
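A minimal sketch of distance-weighted kNN classification with a Gaussian kernel; the paper's kernel combining graph structure with molecular descriptors is not reproduced, so generic feature vectors stand in for the compounds.

import numpy as np

def kernel_knn_predict(X_train, y_train, x, k=5, bandwidth=1.0):
    """Classify x by a kernel-weighted vote of its k nearest neighbours."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-(d[idx] / bandwidth) ** 2)   # Gaussian kernel weight per neighbour
    votes = {}
    for label, weight in zip(y_train[idx], w):
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)

# Illustrative screening data: 200 compounds, 8 descriptors, active/inactive labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
pred = kernel_knn_predict(X, y, rng.normal(size=8))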
Abstract: Because of the importance of energy, the optimization of power generation systems is necessary. Gas turbine cycles are well suited to fast power generation, but their efficiency is relatively low. To achieve higher efficiencies, several measures are adopted, such as recovering heat from the exhaust gases in a regenerator, using an intercooler in a multistage compressor, and injecting steam into the combustion chamber. Even with these components, however, thermodynamic optimization of the gas turbine cycle is necessary. In this article, multi-objective genetic algorithms are employed for Pareto optimization of the Regenerative-Intercooling Gas Turbine (RIGT) cycle. In multi-objective optimization, a number of conflicting objective functions are optimized simultaneously. The objective functions considered for optimization are the entropy generation of the RIGT cycle (Ns), derived using exergy analysis and the Gouy-Stodola theorem, the thermal efficiency, and the net output power of the cycle; these objectives usually conflict with one another. The design variables consist of thermodynamic parameters such as the compressor pressure ratio (Rp), the excess air in combustion (EA), the turbine inlet temperature (TIT), and the inlet air temperature (T0). First, single-objective optimization is investigated; the Non-dominated Sorting Genetic Algorithm (NSGA-II) is then used for multi-objective optimization. Optimization procedures are performed for two and three objective functions, and the results are compared for the RIGT cycle. To investigate the optimal thermodynamic behavior of two objectives, different sets, each comprising two of the output objectives, are considered individually, and a Pareto front is depicted for each set. The decision variables selected from these Pareto fronts yield the best possible combinations of the corresponding objective functions. No point on a Pareto front is superior to another, but all of them are superior to any other point. For the three-objective optimization, the results are given in tables.
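The Pareto-front logic referred to above (no front point dominates another, and the front dominates everything else) can be made concrete with a small non-dominated filter; the objective values below are illustrative, and a maximised objective is negated so that everything is minimised.

import numpy as np

def pareto_front(F):
    """Boolean mask of non-dominated rows of F (all objectives to be minimised)."""
    n = len(F)
    nd = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i if it is no worse everywhere and strictly better somewhere
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                nd[i] = False
                break
    return nd

# Illustrative two-objective set: minimise entropy generation Ns, maximise efficiency.
rng = np.random.default_rng(0)
Ns = rng.uniform(0.1, 1.0, 100)
eff = rng.uniform(0.2, 0.5, 100)
F = np.column_stack([Ns, -eff])   # negate efficiency to turn it into a minimisation
front = pareto_front(F)           # mask of the Pareto-optimal design points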
Abstract: For full support of Quality of Service in Grid computing, it is preferable that the environment itself predicts the resource requirements of a job using dedicated methods. Exact and correct prediction allows the required resources to be matched precisely with the available resources. After the execution of each job, the resources used are saved in an active database named "History". First, some attributes are extracted from the incoming job; then, according to a defined similarity algorithm, the most similar executed jobs are retrieved from "History", and the resource requirements are predicted using statistical tools such as linear regression or averaging. The new idea in this research is its basis in an active database with centralized history maintenance. Implementation and testing of the proposed architecture yield prediction accuracies of 96.68% for the CPU usage of jobs, 91.29% for memory usage, and 89.80% for bandwidth usage.
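A minimal sketch of the prediction step, with a fabricated "History" table of executed jobs; the attribute set, the similarity measure (Euclidean distance here), and the local linear regression are illustrative stand-ins for the paper's definitions.

import numpy as np

# Hypothetical "History": rows of [attr1, attr2, attr3, cpu_seconds].
rng = np.random.default_rng(0)
attrs = rng.random((500, 3))
cpu = 100 * attrs @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 1, 500)
history = np.column_stack([attrs, cpu])

def predict_cpu(job_attrs, history, k=20):
    """Find the k most similar executed jobs and fit a linear regression on them."""
    X, y = history[:, :3], history[:, 3]
    sim_idx = np.argsort(np.linalg.norm(X - job_attrs, axis=1))[:k]  # most similar jobs
    A = np.column_stack([X[sim_idx], np.ones(k)])
    coef, *_ = np.linalg.lstsq(A, y[sim_idx], rcond=None)            # local linear regression
    return np.append(job_attrs, 1.0) @ coef

new_job = np.array([0.4, 0.7, 0.2])
cpu_estimate = predict_cpu(new_job, history)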
Abstract: A two-dimensional moving mesh algorithm is developed to simulate the general motion of two rotating bodies with relative translational motion. The grid comprises a background grid and two sets of grids around the moving bodies. With this grid arrangement, the rotational and translational motions of the two bodies are handled separately, with no complications. Inter-grid boundaries are determined based on their distances from the two bodies. In this method, the overset concept is applied to a hybrid grid, and flow variables are interpolated using a simple stencil. To evaluate the moving mesh algorithm, unsteady Euler flow is solved for different cases using the dual-time method of Jameson. The numerical results show excellent agreement with experimental data and other numerical results. To demonstrate the capability of the present algorithm for the accurate solution of flow fields around moving bodies, some benchmark problems are defined in this paper.
Abstract: This paper estimates the economic value households place on enhanced solid waste disposal services in Malaysia. The contingent valuation (CV) method estimates an average additional monthly willingness-to-pay (WTP) in solid waste management charges of €0.77 to €0.80 for improved waste disposal service quality. The finding of a slightly higher WTP from the generic CV question than from the label-specific one further reveals a higher WTP for sanitary landfill, at €0.90, than for incineration, at €0.63, which suggests that sanitary landfill is the preferred alternative. The logistic regression estimation procedure reveals that households' concern about where their rubbish is disposed, age, home ownership, household income, and the format of the CV question are significant factors influencing WTP.
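The final estimation step can be sketched as an ordinary logistic regression of the yes/no WTP response on the listed covariates; the synthetic data below merely mimics the structure of such a survey and is not the paper's dataset.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 2, n),        # concern about disposal site (0/1)
    rng.uniform(18, 80, n),       # age
    rng.integers(0, 2, n),        # owns house (0/1)
    rng.lognormal(1.0, 0.5, n),   # household income (thousands, placeholder scale)
    rng.integers(0, 2, n),        # generic (1) vs label-specific (0) CV question
])
# Synthetic yes/no WTP response generated from a placeholder relationship.
logit = 0.8 * X[:, 0] - 0.02 * X[:, 1] + 0.5 * X[:, 2] + 0.3 * X[:, 3] + 0.3 * X[:, 4]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(dict(zip(["concern", "age", "owns_house", "income", "cv_format"],
               model.coef_[0].round(3))))   # each factor's estimated influence on WTP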
Abstract: Stair climbing is one of the critical capabilities field robots need in order to widen their range of application. This paper presents the optimal design of the kinematic parameters of a new robotic platform for stair climbing. The robotic platform climbs various stairs by body-flip locomotion with a caterpillar-type main platform. Kinematic parameters such as platform length, platform height, and caterpillar rotation speed are optimized to maximize stair-climbing stability. Three types of stairs are used to simulate typical user conditions. The optimal design process is conducted based on the Taguchi methodology, and the resulting parameters with the optimized objective function are presented. In the near future, a prototype will be assembled for real-environment testing.
Abstract: This paper shows that the application of probability-statistical methods is unfounded at the early stages of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the flight information is fuzzy, limited, and uncertain. Hence, the efficiency of applying the new Soft Computing technology at these diagnostic stages, using Fuzzy Logic and Neural Network methods, is considered. Fuzzy multiple linear and nonlinear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To build a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Investigation of the changes in the values of the skewness and kurtosis coefficients shows that the distributions of GTE operating parameters have a fuzzy character, so the consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of the basic characteristics of GTE operating parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for the preliminary identification of the engines' technical condition. Investigation of the changes in the correlation coefficients likewise shows their fuzzy character; therefore, the results of Fuzzy Correlation Analysis are offered for model selection. To check model adequacy, the Fuzzy Multiple Correlation Coefficient of the Fuzzy Multiple Regression is considered. When sufficient information is available, a recurrent algorithm for identifying the aviation GTE technical condition (using Hard Computing technology) is offered, based on measurements of the input and output parameters of the multiple linear and nonlinear generalised models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine's technical condition. As an application of the given technique, the temperature condition of a new operating aviation engine was estimated.
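The recursive least squares identification mentioned at the end can be illustrated with the textbook RLS update for a generic linear-in-parameters model under measurement noise; this is not the paper's fuzzy variant.

import numpy as np

def rls_step(theta, P, x, y, lam=0.99):
    """One recursive least squares update: model y ~ x @ theta, forgetting factor lam."""
    Px = P @ x
    k = Px / (lam + x @ Px)               # gain vector
    theta = theta + k * (y - x @ theta)   # correct parameters using the prediction error
    P = (P - np.outer(k, Px)) / lam       # covariance update
    return theta, P

# Illustrative engine model: an output parameter as a linear function of 3 inputs.
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0, 0.5])
theta, P = np.zeros(3), 1e3 * np.eye(3)
for _ in range(500):
    x = rng.normal(size=3)
    y = x @ true_theta + rng.normal(scale=0.1)   # noisy measurement
    theta, P = rls_step(theta, P, x, y)
# theta now approximates true_theta despite the measurement noise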
Abstract: Web applications have become complex and crucial for many firms, especially when combined with areas such as CRM (Customer Relationship Management) and BPR (Business Process Reengineering). The scientific community has focused attention on Web application design, development, analysis, and testing by studying and proposing methodologies and tools. Static and dynamic techniques may be used to analyze existing Web applications. Traditional static source code analysis may be very difficult because of the presence of dynamically generated code and the multi-language nature of the Web. Dynamic analysis may be useful, but it has an intrinsic limitation: the low number of program executions used to extract information. Our reverse engineering analysis, used in our WAAT (Web Applications Analysis and Testing) project, applies mutational techniques in order to exploit server-side execution engines to accomplish part of the dynamic analysis. This paper studies the effects of mutation source code analysis applied to Web software to build application models. Mutation-based generated models may contain more information than necessary, so a pruning mechanism is needed.
Abstract: This study aims to screen and optimize the major nutrients for maximum carotenoid production and antioxidant characteristics by Rhodotorula rubra. It was found that supplementing the medium with 10 g/l glucose as the carbon source, 1 g/l ammonium sulfate as the nitrogen source, and 1 g/l yeast extract as the growth factor provided the best carotenoid yield, at 30.39 μg/g cell dry weight. The antioxidant activities of Rhodotorula rubra measured by the DPPH, ABTS, and MDA methods were 1.463%, 34.21%, and 34.09 μmol/l, respectively.
Abstract: With the deepening of software reuse, component-related technologies have been widely applied in the development of large-scale complex applications. Component identification (CI) is one of the primary research problems in software reuse: analyzing domain business models to obtain a set of business components with high reuse value and good reuse performance, in support of effective reuse. Based on the concept and classification of CI, its technical stack is briefly discussed from four views, i.e., the form of the input business models, the identification goals, the identification strategies, and the identification process. The various CI methods presented in the literature are then classified into four types, i.e., domain-analysis-based methods, cohesion-coupling-based clustering methods, CRUD-matrix-based methods, and other methods, and these types are compared in terms of their advantages and disadvantages. Additionally, some shortcomings of current research on CI are discussed and their causes explained. Finally, some promising directions for research on this problem are outlined.
Abstract: We present in this paper a new approach to specific JPEG steganalysis and propose studying the statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both while also controlling the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding; this deviation is greater when the image is a cover medium than when it is a stego image. To observe this deviation, we introduce new statistical features and combine them with the Multiple Embedding Method. This approach is motivated by the avalanche criterion of the JPEG lossless compression step, which makes it possible to design detectors whose detection rates are independent of the payload. Finally, we designed a Fisher discriminant based classifier for the well-known steganographic algorithms Outguess, F5, and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work with low embedding rates (< 10^-5) and, according to the avalanche criterion of the RLE and Huffman compression steps, its efficiency is independent of the quantity of hidden information.
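A minimal sketch of the final classification stage: a Fisher (linear) discriminant trained on entropy-deviation features. The two features and their distributions are fabricated for illustration; the actual feature extraction and the Multiple Embedding Method are not reproduced.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Fabricated features: e.g. entropy of the compressed data and its deviation
# after a re-embedding, with stego images showing a smaller deviation.
rng = np.random.default_rng(0)
cover = np.column_stack([rng.normal(7.2, 0.1, 300), rng.normal(0.30, 0.05, 300)])
stego = np.column_stack([rng.normal(7.2, 0.1, 300), rng.normal(0.10, 0.05, 300)])
X = np.vstack([cover, stego])
y = np.array([0] * 300 + [1] * 300)   # 0 = cover, 1 = stego

fisher = LinearDiscriminantAnalysis().fit(X, y)   # Fisher discriminant classifier
print(fisher.score(X, y))                         # accuracy on the fabricated features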