Abstract: The design of a gravity dam is performed through an
iterative process involving a preliminary layout of the structure
followed by a stability and stress analysis. This study presents a
method to determine the optimal top width of a gravity dam using a
genetic algorithm. To solve the optimization task (minimizing the
cost of the dam), an optimization routine based on genetic
algorithms (GAs) was implemented in an Excel spreadsheet. It was
found to perform well, and the GA parameters were tuned in a
parametric study. Using the parameters found in the parametric
study, the top-width optimization was performed and compared with a
gradient-based optimization method (classic method); the results
agreed closely. In the optimum dam cross section, the ratio of dam
base to dam height is almost equal to 0.85, and the ratio of dam
top width to dam height is almost equal to 0.13. The computerized
methodology may assist in computing the optimal top width for a
wide range of gravity dam heights.
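The GA routine described above can be illustrated with a minimal, dependency-free sketch. Everything below -- the dam height, the unit cost, the trapezoidal cost model, and the minimum-crest penalty -- is an assumption for illustration, not the paper's Excel implementation:

```python
import random

random.seed(42)

# Illustrative cost model (NOT the paper's): a trapezoidal cross section of
# height H, base width B and top width b, costed per unit length of dam.
H = 100.0         # dam height in metres (assumed)
B = 0.85 * H      # base width, using the optimum ratio reported above
UNIT_COST = 60.0  # cost per cubic metre of concrete (assumed)

def dam_cost(b):
    """Section cost plus a penalty when the crest is narrower than an
    assumed 10 m stability/roadway minimum."""
    volume = 0.5 * (B + b) * H
    penalty = 1e6 * max(0.0, 10.0 - b) ** 2
    return UNIT_COST * volume + penalty

def genetic_algorithm(cost, lo, hi, pop_size=40, generations=80,
                      crossover_rate=0.8, mutation_rate=0.1):
    """Real-coded GA: tournament selection, arithmetic crossover,
    Gaussian mutation."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            a, b = random.sample(pop, 2)
            return a if cost(a) < cost(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            child = p1
            if random.random() < crossover_rate:
                alpha = random.random()
                child = alpha * p1 + (1 - alpha) * p2
            if random.random() < mutation_rate:
                child += random.gauss(0, 0.05 * (hi - lo))
                child = min(hi, max(lo, child))
            children.append(child)
        pop = children
    return min(pop, key=cost)

best_top_width = genetic_algorithm(dam_cost, lo=1.0, hi=30.0)
```

Under this toy cost model the GA settles near the 10 m crest minimum, where the penalty and the concrete cost balance.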
Abstract: The customary practice of identifying industrial sickness is a set of traditional techniques that rely upon manual monitoring and compilation of financial records. This makes the process tedious and time-consuming, and it is often susceptible to manipulation. Therefore, readily available tools are required that can deal with the uncertain situations arising out of industrial sickness. This is more significant for a country like India, where the fruits of development are rarely equally distributed. In this paper, we propose an approach based on an Artificial Neural Network (ANN) to deal with industrial sickness, with specific focus on a few such units taken from Assam, a less developed north-east (NE) Indian state. The proposed system provides decisions regarding industrial sickness using eight different parameters that are directly related to the stages of sickness of such units. The mechanism primarily uses certain signals and symptoms of industrial health to decide upon the state of a unit. Specifically, we formulate an ANN-based block with data obtained from a few selected units of Assam so that the required decisions related to industrial health can be taken. The system thus formulated could become an important part of planning and development. It can also contribute towards the computerization of decision support systems related to industrial health and help in better management.
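As a hedged sketch of the decision block described above, the following trains a single sigmoid neuron on eight synthetic indicators. The indicator semantics, the labelling rule, and the training data are all assumptions for illustration; the paper's actual ANN architecture and parameters are not reproduced here:

```python
import math
import random

random.seed(0)

FEATURES = 8  # e.g. liquidity, profitability, debt ratio, ... (assumed names)

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Estimated probability that a unit is sick, given its indicators."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def train(data, epochs=500, lr=0.5):
    """Online gradient descent on the log-loss of a single sigmoid neuron."""
    w, b = [0.0] * FEATURES, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Synthetic data: a unit is labelled "sick" (1) when its indicators,
# scaled so that higher is healthier, are poor on average (assumed rule).
data = []
for _ in range(200):
    x = [random.random() for _ in range(FEATURES)]
    data.append((x, 1 if sum(x) / FEATURES < 0.45 else 0))

w, b = train(data)
sick_unit = [0.1] * FEATURES
healthy_unit = [0.9] * FEATURES
```

A single neuron is of course only the smallest possible network; it suffices here because the assumed labelling rule is linear in the indicators.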
Abstract: The k-nearest neighbors (kNN) method is a simple but effective method of classification. In this paper we present an extended version of this technique for chemical compounds used in high-throughput screening, in which the distances of the nearest neighbors can be taken into account. Our algorithm uses kernel weight functions as guidance for the process of defining activity in screening data. The proposed kernel weight function aims to combine properties of the graphical structure and molecular descriptors of screening compounds. We apply the modified kNN method to several experimental data sets from biological screens. The experimental results confirm the effectiveness of the proposed method.
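A minimal sketch of a distance-weighted kNN of the kind described above, with a plain Gaussian kernel on descriptor distance standing in for the paper's combined structure/descriptor kernel (all data below is illustrative):

```python
import math
from collections import defaultdict

def gaussian_kernel(d, bandwidth=1.0):
    """Kernel weight function: closer neighbours contribute more to the vote."""
    return math.exp(-(d / bandwidth) ** 2)

def weighted_knn(train, query, k=3, kernel=gaussian_kernel):
    """train: list of (descriptor_vector, activity_label).
    Returns the label with the largest kernel-weighted vote among the
    k nearest neighbours of the query compound."""
    neighbours = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = defaultdict(float)
    for x, label in neighbours:
        votes[label] += kernel(math.dist(x, query))
    return max(votes, key=votes.get)

# Illustrative 2-D descriptors for screening compounds.
train_set = [((0.0, 0.0), "inactive"), ((0.2, 0.1), "inactive"),
             ((1.0, 1.0), "active"), ((1.1, 0.9), "active")]
```

Because a distant third neighbour receives a small weight, the weighted vote is less sensitive to the choice of k than a plain majority vote.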
Abstract: With the deepening of software reuse, component-related
technologies have been widely applied in the development of
large-scale complex applications. Component identification (CI) is
one of the primary research problems in software reuse: analyzing
domain business models to obtain a set of business components with
high reuse value and good reuse performance to support effective
reuse. Based on the concept and classification of CI, its technical
stack is briefly discussed from four views, i.e., the form of input
business models, identification goals, identification strategies,
and the identification process. Then the various CI methods
presented in the literature are classified into four types, i.e.,
domain-analysis-based methods, cohesion-coupling-based clustering
methods, CRUD-matrix-based methods, and other methods, and these
methods are compared with respect to their advantages and
disadvantages. Additionally, some shortcomings of the study of CI
are discussed, and their causes are explained. Finally, some
promising research directions for this problem are outlined.
Abstract: In this paper, a class of recurrent neural networks (RNNs) with variable delays is studied on almost periodic time scales, and some sufficient conditions are established for the existence and global exponential stability of the almost periodic solution. These results have important guiding significance for the design and application of RNNs. Finally, two examples and numerical simulations are presented to illustrate the feasibility and effectiveness of the results.
Abstract: Pipeline exploration is one of various bio-mimetic
robot applications. Such a robot may work in common buildings, for
example between ceilings and in ducts, in addition to the
complicated and massive pipeline systems of large industrial
plants. The bio-mimetic robot finds any troubled area or
malfunction and then reports its data. Importantly, it can not only
prepare for but also react to any abnormal routes in the pipeline.
Pipeline monitoring tasks require special types of mobile robots.
For effective movement along a pipeline, the movement of the robot
should be similar to that of insects or crawling animals. During
its movement along the pipelines, a pipeline monitoring robot has
the important task of recognizing the shape of the approaching path
on the pipes. In this paper we propose an effective solution to
this pipeline pattern recognition problem, based on fuzzy
classification rules for the measured IR distance data.
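A toy sketch of fuzzy classification rules over IR distance readings, in the spirit of the approach above. The linguistic terms, the rule base, and the path classes are assumptions for illustration, not the paper's actual rules:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Assumed linguistic terms for a normalised IR distance reading in [0, 1].
def near(d): return tri(d, -0.01, 0.0, 0.5)
def far(d):  return tri(d, 0.5, 1.0, 1.01)

def classify_path(left, right):
    """Toy rule base over left/right IR distances (assumed, not the paper's):
    R1: left near AND right near -> straight pipe
    R2: left far  AND right near -> left branch
    R3: left near AND right far  -> right branch
    Rule strength uses min for AND; the strongest rule decides the class."""
    rules = {
        "straight":     min(near(left), near(right)),
        "left branch":  min(far(left),  near(right)),
        "right branch": min(near(left), far(right)),
    }
    return max(rules, key=rules.get)
```

The min/max (Mamdani-style) combination used here is only one of several standard choices for fuzzy rule evaluation.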
Abstract: We present in this paper a new approach to specific JPEG steganalysis and propose studying the statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both and also control the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding. This deviation is greater when the image is a cover medium than when the image is a stego image. To observe this deviation, we introduced new statistical features and combined them with the Multiple Embedding Method. This approach is motivated by the avalanche criterion of the JPEG lossless compression step. This criterion makes it possible to design detectors whose detection rates are independent of the payload. Finally, we designed a Fisher discriminant based classifier for the well-known steganographic algorithms Outguess, F5, and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work with low embedding rates (< 10^-5) and, according to the avalanche criterion of the RLE and Huffman compression steps, its efficiency is independent of the quantity of hidden information.
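The final stage above is a Fisher discriminant classifier. A minimal two-feature sketch of such a classifier follows; the feature values below are illustrative clusters, not the paper's entropy-deviation statistics:

```python
def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def fisher_train(class0, class1):
    """Two-class Fisher discriminant in 2-D: w = Sw^-1 (m1 - m0), with the
    decision threshold halfway between the projected class means."""
    m0, m1 = mean(class0), mean(class1)
    Sw = [[0.0, 0.0], [0.0, 0.0]]          # within-class scatter matrix
    for cls, m in ((class0, m0), (class1, m1)):
        for v in cls:
            d0, d1 = v[0] - m[0], v[1] - m[1]
            Sw[0][0] += d0 * d0; Sw[0][1] += d0 * d1
            Sw[1][0] += d1 * d0; Sw[1][1] += d1 * d1
    det = Sw[0][0] * Sw[1][1] - Sw[0][1] * Sw[1][0]
    inv = [[ Sw[1][1] / det, -Sw[0][1] / det],
           [-Sw[1][0] / det,  Sw[0][0] / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    t = 0.5 * (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1]))
    return w, t

def classify(w, t, v):
    return "stego" if w[0] * v[0] + w[1] * v[1] > t else "cover"

# Illustrative 2-D feature clusters (e.g. entropy-deviation statistics).
cover = [(1.0, 1.2), (1.2, 0.9), (0.9, 1.1), (1.1, 0.8)]
stego = [(3.0, 3.1), (3.2, 2.9), (2.9, 3.2), (3.1, 3.0)]
w, t = fisher_train(cover, stego)
```

The 2x2 scatter matrix is inverted by hand to keep the sketch dependency-free; real feature vectors would use a linear-algebra library.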
Abstract: Structural redundancy is an important consideration in
the seismic design of structures. Initially, structural redundancy
was described as the degree of static indeterminacy of a system.
Although many definitions of redundancy in structures have been
presented, recently the definition of structural redundancy has
been related to the configuration of the structural system and the
number of lateral-load-transferring directions in the structure.
Steel frames with infill walls are common systems in the
construction of ordinary residential buildings in some countries.
It is well established that the performance of structures is
affected by adding masonry infill walls. In order to investigate
the effect of infill walls on the redundancy of steel frames
constructed with masonry walls, the components of redundancy,
including the redundancy variation index, the redundancy strength
index, and the redundancy response modification factor, were
extracted for frames with masonry infills. Several steel frames
with typical storey numbers and various numbers of bays were
designed and considered. The redundancy of frames with and without
infill walls was evaluated by the proposed method. The results
showed that the presence of infill walls increases redundancy.
Abstract: Classical Bose-Chaudhuri-Hocquenghem (BCH) codes C that contain their dual codes can be used to construct quantum stabilizer codes; this chapter studies the properties of such codes. It has been shown that a BCH code of length n which contains its dual code satisfies a bound on the weight of any non-zero codeword in C, and the converse is also true. One impressive difficulty in quantum communication and computation is to protect information-carrying quantum states against undesired interactions with the environment. To address this difficulty, many good quantum error-correcting codes have been derived as binary stabilizer codes. We were able to shed more light on the structure of dual-containing BCH codes. These results make it possible to determine the parameters of quantum BCH codes in terms of the weight of non-zero dual codewords.
Abstract: Support vector machines (SVMs) are considered to be
among the best machine learning algorithms for minimizing the
predictive probability of misclassification. However, their
drawback is that for large data sets the computation of the optimal
decision boundary is a time-consuming function of the size of the
training set. Hence, several methods have been proposed to speed up
the SVM algorithm. Here, three methods used to speed up the
computation of the SVM classifiers are compared experimentally
using a musical genre classification problem. The simplest method
pre-selects a random sample of the data before the application of
the SVM algorithm. Two additional methods use proximity graphs to
pre-select data that are near the decision boundary: one uses
k-nearest-neighbor graphs and the other uses relative neighborhood
graphs to accomplish the task.
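The simplest speed-up described above -- pre-selecting a random sample before training -- can be sketched as follows. A perceptron stands in for the SVM solver purely to keep the example dependency-free; the sampling step, not the classifier, is the point of interest here:

```python
import random

random.seed(1)

def presample(data, fraction=0.25):
    """Step 1 of the simplest speed-up: train on a random subsample of the
    data instead of the full training set."""
    k = max(1, int(len(data) * fraction))
    return random.sample(data, k)

def train_linear(data, epochs=50, lr=0.1):
    """Stand-in for the SVM training step (a perceptron is used here only
    to avoid external dependencies; the abstract's classifier is an SVM)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:            # y in {-1, +1}
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

# Toy two-genre feature set: well-separated clusters around (0,0) and (2,2).
full = [((random.gauss(0, 0.3), random.gauss(0, 0.3)), -1) for _ in range(200)]
full += [((random.gauss(2, 0.3), random.gauss(2, 0.3)), +1) for _ in range(200)]

# Train on only a quarter of the data, as in the random pre-selection method.
w, b = train_linear(presample(full))
```

On well-separated data the subsample usually preserves the boundary; the graph-based methods in the abstract instead pre-select points near the boundary, which is safer when classes overlap.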
Abstract: This paper deals with the extraction of information from experts to automatically identify and recognize Ganoderma infection in oil palm stems using tomography images. The experts' knowledge is used as rules in a Fuzzy Inference System to classify each individual pattern observed in the tomography image. The classification is done by defining membership functions which assign one of three possible hypotheses: Ganoderma infection (G), non-Ganoderma infection (N), or intact stem tissue (I) to every abnormality pattern found in the tomography image. A complete comparison between Mamdani and Sugeno styles; triangular, trapezoidal, and mixed triangular-trapezoidal membership functions; and different methods of aggregation and defuzzification is also presented and analyzed to select suitable Fuzzy Inference System methods to perform the above-mentioned task. The results showed that seven out of 30 possible combinations of the Fuzzy Inference methods available in the MATLAB Fuzzy Toolbox gave results close to the experts' estimates.
Abstract: The development of aid systems for medical diagnosis is
not easy because of inhomogeneities in MRI data, the variability of
the data from one sequence to another, and various other
distortions that accentuate this difficulty. A new automatic,
contextual, adaptive, and robust segmentation procedure based on
MRI brain tissue classification is described in this article. A
first phase consists of estimating the probability density of the
data by the Parzen-Rosenblatt method. The classification procedure
is completely automatic and makes no assumptions about either the
number of clusters or their prototypes, since these are detected
automatically by a mathematical morphology operator called skeleton
by influence zones (SKIZ). The problem of initializing the
prototypes, as well as their number, is thus transformed into an
optimization problem. Moreover, the procedure is adaptive, since it
takes into consideration the contextual information present at
every voxel through an adaptive and robust non-parametric Markov
random field (MRF) model. The number of misclassifications is
reduced by using the Maximum Posterior Marginal (MPM) criterion.
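The first phase above, Parzen-Rosenblatt density estimation, can be sketched in one dimension (grey-level intensity). The sample intensities and the bandwidth below are assumptions for illustration:

```python
import math

def parzen_density(samples, x, h=0.5):
    """Parzen-Rosenblatt kernel density estimate at point x, using a
    Gaussian window of bandwidth h over the observed samples."""
    n = len(samples)
    norm = 1.0 / (n * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

# Toy grey-level samples from two tissue classes (bimodal intensity histogram).
intensities = [1.0, 1.1, 0.9, 1.05, 0.95, 4.0, 4.1, 3.9, 4.05, 3.95]
```

The estimate is high near the two intensity modes and low in the valley between them, which is what lets the later SKIZ step locate cluster prototypes without fixing their number in advance.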
Abstract: Low-silica type X (LSX) zeolite is a useful material in
many manufacturing processes owing to its advantageous properties,
including high surface area, stability, a microporous crystalline
aluminosilicate structure, and extra-framework positive ions. The
LSX was prepared using rice husk as the silica source, obtained by
leaching with hydrochloric acid and calcination at 500 °C. To
improve the synthesis method, the LSX was crystallized in a
Teflon-lined autoclave to expedite the decrease of the amorphous
particles. A mixed gel with composition 5.5 Na2O : 1.65 K2O :
Al2O3 : 2.2 SiO2 : 122 H2O was crystallized in different containers
(a polypropylene bottle and a Teflon-lined autoclave). The obtained
powder was characterized by X-ray diffraction (XRD), X-ray
fluorescence spectrometry, N2 adsorption-desorption analysis (BET
surface area), scanning electron microscopy (SEM), and Fourier
transform infrared spectroscopy to assess the quality of the
zeolite. The results showed that the zeolite crystallized in the
Teflon-lined autoclave has a crystal size of 102.8 nm, a surface
area of 286 m2/g, and fewer round amorphous particles compared with
the zeolite crystallized in the polypropylene bottle.
Abstract: Steel surface defect detection is essentially one of
pattern recognition problems. Support Vector Machines (SVMs) are
known as one of the most proper classifiers in this application. In this
paper, we introduce a more accurate classification method by using
SVMs as the final classifier of the inspection system. In this
scheme, the multiclass classification task is performed based on
the "one-against-one" method, and different kernels are utilized
for each pair of classes in the multiclass classification of the
different defects.
In the proposed system, a decision tree is employed in the first
stage for two-class classification of the steel surfaces into
"defect" and "non-defect", in order to decrease the time
complexity. Based on
the experimental results, generated from over one thousand images,
the proposed multiclass classification scheme is more accurate than
the conventional methods and the overall system yields a sufficient
performance which can meet the requirements in steel manufacturing.
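The "one-against-one" voting scheme described above can be sketched generically. Simple nearest-mean rules stand in for the per-pair SVMs with their individual kernels; the defect class names and the 1-D features are assumptions for illustration:

```python
from itertools import combinations

# Hypothetical defect classes (assumed, not the paper's).
CLASSES = ["scratch", "pit", "roll_mark"]

def make_pairwise(train):
    """Train one binary decision rule per unordered class pair.
    Here: nearest class mean on a 1-D feature, standing in for the SVM
    that the paper trains (with its own kernel) for each pair."""
    means = {c: sum(xs) / len(xs) for c, xs in train.items()}
    def rule(a, b):
        return lambda x: a if abs(x - means[a]) < abs(x - means[b]) else b
    return {(a, b): rule(a, b) for a, b in combinations(CLASSES, 2)}

def one_against_one(classifiers, x):
    """Each pairwise classifier casts one vote; the majority class wins."""
    votes = {c: 0 for c in CLASSES}
    for clf in classifiers.values():
        votes[clf(x)] += 1
    return max(votes, key=votes.get)

train = {"scratch": [1.0, 1.2, 0.8], "pit": [3.0, 3.1, 2.9],
         "roll_mark": [5.0, 5.2, 4.8]}
classifiers = make_pairwise(train)
```

With N classes this scheme trains N(N-1)/2 binary classifiers, which is what makes a per-pair choice of kernel possible in the first place.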
Abstract: This paper focuses on a novel method for semantic
searching and retrieval of information about learning materials.
Metametadata encapsulate metadata instances by using the properties
and attributes provided by ontologies rather than describing learning
objects. A novel metametadata taxonomy has been developed which
provides the basis for a semantic search engine to extract, match and
map queries to retrieve relevant results. The use of ontological views
is a foundation for viewing the pedagogical content of metadata
extracted from learning objects by using the pedagogical attributes
from the metametadata taxonomy. Using the ontological approach
and metametadata (based on the metametadata taxonomy) we present
a novel semantic searching mechanism. These three strands – the
taxonomy, the ontological views, and the search algorithm – are
incorporated into a novel architecture (OMESCOD) which has been
implemented.
Abstract: Studies on the distribution of traffic demand have
proceeded by providing traffic information, in order to reduce
greenhouse gases and reinforce the competitiveness of roads in the
transport sector. However, since extensive studies on drivers'
route-changing behavior and its influencing factors are required
first, this study develops a discriminant model for route changing
that considers driving conditions, including the traffic conditions
of roads and drivers' preferences for information media. Drivers
are divided into three groups depending on driving conditions in a
group classification using CART analysis, which is statistically
meaningful. The extent to which driving conditions and preferred
media affect a route change is then examined through discriminant
analysis, and a discriminant model equation to predict a route
change is developed. As a result of building the discriminant model
equation, it is shown that driving conditions affect a route change
much more; the overall discriminant hit ratio is 64.2%, and the
discriminant equation thus shows a reasonably high discriminant
ability.
Abstract: We identify clawback triggers from firms' proxy
statements (Form DEF 14A) and use the likelihood of restatements to
proxy for financial reporting quality. Based on a sample of 578
U.S. firms that voluntarily adopted clawback provisions during
2003-2009, we decompose restatement-based triggers into two types,
fraud and unintentional error, and we observe evidence that using
fraud triggers is associated with high financial reporting quality.
The findings support that fraud triggers can enhance the deterrent
effect of clawback provisions by establishing a viable disincentive
against fraud, misconduct, and other harmful acts. These results
are robust to controlling for compensation components, to different
sample specifications, and to a number of sensitivity tests.
Abstract: A serial hierarchical support vector machine (SHSVM)
is proposed to discriminate three brain tissues: white matter
(WM), gray matter (GM), and cerebrospinal fluid (CSF). SHSVM
takes a novel classification approach by repeating the hierarchical
classification on the data set iteratively. It uses a radial basis
function (RBF) kernel with different tuning to obtain accurate
results. As a second approach, segmentation is also performed with
the DAGSVM method. In this article, eight univariate features are
extracted from the raw DTI data, and all possible 2D feature sets
are examined within the segmentation process. SHSVM succeeds in
obtaining DSI values higher than 0.95 for all three tissues, which
are higher than the DAGSVM results.
Abstract: It is an important task in Korean-English machine
translation to classify the gender of names correctly. When a
sentence is composed of two or more clauses and only one subject is
given as a proper noun, it is important to find the gender of the
proper noun for correct translation of the sentence. This is
because a singular pronoun has a gender in English while it does
not in Korean. Thus, in Korean-English machine translation, the
gender of a proper noun should be determined. More generally, this
task can be expanded to the classification of general Korean names.
This paper proposes a statistical method for this problem. By
considering a name as just a sequence of syllables, it is possible
to collect statistics for each name from a collection of names. An
evaluation of the proposed method shows an improvement in accuracy
over simple lookup in the collection: while the accuracy of the
lookup method is 64.11%, that of the proposed method is 81.49%.
This implies that the proposed method is more plausible for the
gender classification of Korean names.
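The syllable-statistics idea above can be sketched as a naive-Bayes-style scorer over per-syllable gender counts. The romanised syllables and their gender associations below are illustrative assumptions, not the paper's data:

```python
import math
from collections import defaultdict

def train_counts(names):
    """names: list of (syllable_tuple, gender). Counts how often each
    syllable occurs in male (M) and female (F) names."""
    counts = {"M": defaultdict(int), "F": defaultdict(int)}
    totals = {"M": 0, "F": 0}
    for syllables, g in names:
        for s in syllables:
            counts[g][s] += 1
            totals[g] += 1
    return counts, totals

def classify_gender(counts, totals, syllables, alpha=1.0):
    """Sum of log-likelihoods of each syllable under each gender, with
    Laplace smoothing; the higher-scoring gender wins."""
    vocab = set()
    for g in counts:
        vocab.update(counts[g])
    best, best_score = None, float("-inf")
    for g in counts:
        score = sum(
            math.log((counts[g][s] + alpha) / (totals[g] + alpha * len(vocab)))
            for s in syllables)
        if score > best_score:
            best, best_score = g, score
    return best

# Tiny illustrative collection (romanised syllables; associations assumed).
names = [(("min", "su"), "M"), (("young", "ho"), "M"), (("ji", "hoon"), "M"),
         (("mi", "kyung"), "F"), (("eun", "ji"), "F"), (("soo", "jin"), "F")]
counts, totals = train_counts(names)
```

Because the score is built from syllable statistics rather than whole names, the method can classify names absent from the collection, which is where it gains over simple lookup.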
Abstract: The problem of spam has been seriously troubling the Internet community during the last few years and has currently reached an alarming scale. Observations made at CERN (the European Organization for Nuclear Research, located in Geneva, Switzerland) show that spam mails can constitute up to 75% of daily SMTP traffic. A naïve Bayesian classifier based on a bag-of-words representation of an email is widely used to stop this unwanted flood, as it combines good performance with simplicity of the training and classification processes. However, facing the constantly changing patterns of spam, it is necessary to assure online adaptability of the classifier. This work proposes combining such a classifier with another NBC (naïve Bayesian classifier) based on pairs of adjacent words. Only the latter is retrained with examples of spam reported by users. Tests are performed on considerable sets of mails, both from public spam archives and from CERN mailboxes. They suggest that this architecture can increase spam recall without affecting the classifier's precision, as happens when only the NBC based on single words is retrained.
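The proposed two-classifier architecture -- a word-based NBC combined with a retrainable NBC over adjacent-word pairs -- can be sketched as follows. The corpus, tokenisation, and uniform priors are illustrative assumptions:

```python
import math
from collections import defaultdict

class NaiveBayes:
    """Multinomial naive Bayes over arbitrary tokens with Laplace smoothing."""
    def __init__(self):
        self.counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.totals = {"spam": 0, "ham": 0}
        self.vocab = set()

    def train(self, tokens, label):
        for t in tokens:
            self.counts[label][t] += 1
            self.totals[label] += 1
            self.vocab.add(t)

    def log_ratio(self, tokens):
        """log P(tokens|spam) - log P(tokens|ham), uniform priors assumed."""
        r, V = 0.0, len(self.vocab) or 1
        for t in tokens:
            ps = (self.counts["spam"][t] + 1) / (self.totals["spam"] + V)
            ph = (self.counts["ham"][t] + 1) / (self.totals["ham"] + V)
            r += math.log(ps) - math.log(ph)
        return r

def words(text):
    return text.lower().split()

def pairs(text):
    w = words(text)
    return list(zip(w, w[1:]))          # adjacent-word pairs

word_nbc, pair_nbc = NaiveBayes(), NaiveBayes()

corpus = [("buy cheap pills now", "spam"),
          ("cheap pills online offer", "spam"),
          ("meeting agenda for monday", "ham"),
          ("lunch meeting on monday", "ham")]
for text, label in corpus:
    word_nbc.train(words(text), label)
    pair_nbc.train(pairs(text), label)

def is_spam(text):
    """Combine both classifiers; positive total log-ratio means spam."""
    return word_nbc.log_ratio(words(text)) + pair_nbc.log_ratio(pairs(text)) > 0

# Online adaptation: a user reports new spam; only the pair model is retrained,
# leaving the word model untouched, as the architecture above proposes.
pair_nbc.train(pairs("exclusive prize waiting claim now"), "spam")
```

Freezing the word model while retraining only the pair model is the design point: the stable unigram statistics protect precision while the bigram model tracks changing spam patterns.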