Abstract: ABC classification is widely used by managers for
inventory control. The classical ABC classification is based on the
Pareto principle and uses the single criterion of annual use value.
Single-criterion classification is often insufficient for close
inventory control, so researchers have proposed multi-criteria
inventory classification models that take other important criteria
into account. Among these models, we consider a specific model in
order to carry out a sensitivity analysis of the composite score
calculated for each item. This score, a normalized average of a good
and a bad optimized index, can affect the ABC classification of an
item. We focus on items whose class assignment differs and then
propose a classification compromise.
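The classical single-criterion ABC classification that the multi-criteria models extend can be sketched in a few lines. The 80/15/5 cumulative-value cut-offs used below are the common textbook choice and are illustrative, not taken from the paper:

```python
# Sketch of classical single-criterion ABC classification by annual use value,
# using the common (but adjustable) 80/15/5 Pareto cut-offs.
def abc_classify(annual_use_values):
    """Return an A/B/C class for each item, keyed by item index."""
    total = sum(annual_use_values)
    # Rank items by descending annual use value.
    ranked = sorted(range(len(annual_use_values)),
                    key=lambda i: annual_use_values[i], reverse=True)
    classes, cumulative = {}, 0.0
    for i in ranked:
        cumulative += annual_use_values[i] / total
        if cumulative <= 0.80:
            classes[i] = "A"        # top ~80% of cumulative value
        elif cumulative <= 0.95:
            classes[i] = "B"        # next ~15%
        else:
            classes[i] = "C"        # remaining tail
    return classes
```

A multi-criteria model replaces the single ranking key with a composite score per item.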
Abstract: In the past few years, the amount of malicious software
has increased exponentially and, therefore, machine learning
algorithms have become instrumental in distinguishing clean from
malware files through (semi-)automated classification. When working with very large
datasets, the major challenge is to reach both a very high malware
detection rate and a very low false positive rate. Another challenge
is to minimize the time needed for the machine learning algorithm to
do so. This paper presents a comparative study of different
machine learning techniques, such as linear classifiers, ensembles,
decision trees, and various hybrids thereof. The training dataset consists
of approximately 2 million clean files and 200,000 infected files,
which is a realistic quantitative mixture. The paper investigates the
above-mentioned methods with respect to both their performance
(detection rate and false positive rate) and their practicability.
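The kind of comparison the abstract describes can be sketched on synthetic data. The dataset, model choices, and hyperparameters below are illustrative stand-ins, not the paper's 2.2-million-file setup:

```python
# Illustrative sketch (not the paper's setup): comparing a linear classifier,
# a decision tree, and an ensemble, reporting the two metrics the abstract
# emphasizes: detection rate (TPR) and false positive rate (FPR).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Imbalanced synthetic data, loosely mimicking a clean-vs-malware mixture.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, clf in [("linear", LogisticRegression(max_iter=1000)),
                  ("tree", DecisionTreeClassifier(random_state=0)),
                  ("ensemble", RandomForestClassifier(random_state=0))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    tp = np.sum((pred == 1) & (y_te == 1))   # detected malware
    fp = np.sum((pred == 1) & (y_te == 0))   # clean files flagged as malware
    results[name] = {
        "detection_rate": tp / max(np.sum(y_te == 1), 1),
        "false_positive_rate": fp / max(np.sum(y_te == 0), 1),
    }
```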
Abstract: The principle of seismic performance evaluation methods is to provide a measure of how susceptible a building, or set of buildings, is to damage by an earthquake. The common objective of many of these methods is to supply classification criteria. The purpose of this study is to present a method for assessing the seismic performance of structures based on the Pushover method; we are particularly interested in reinforced concrete frame structures, which represent a significant percentage of the structures damaged after a seismic event. The work is based on the characterization of the seismic motion of the various earthquake zones in terms of PGA and PGD, obtained by means of the SIMQK_GR and PRISM software, and a correlation between the performance points and the scalars characterizing the earthquakes will be developed.
Abstract: Subspace channel estimation methods have been
studied widely, where the subspace of the covariance matrix is
decomposed to separate the signal subspace from the noise subspace. The
decomposition is normally done by using either the eigenvalue
decomposition (EVD) or the singular value decomposition (SVD) of
the auto-correlation matrix (ACM). However, the subspace
decomposition process is computationally expensive. This paper
considers the estimation of the multipath slow frequency hopping
(FH) channel using a noise-subspace-based method. In particular, an
efficient method is proposed to estimate the multipath time delays by
applying the multiple signal classification (MUSIC) algorithm to the
null space extracted by the rank-revealing LU (RRLU) factorization.
The RRLU provides precise information about the numerical null space
and the rank, which makes it an important tool in linear algebra. The
simulation results demonstrate the effectiveness of the proposed
method, which roughly halves the computational complexity compared
with RRQR-based methods while maintaining the same performance.
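A minimal sketch of the delay-estimation idea: MUSIC locates multipath delays as peaks of a pseudospectrum built from the noise subspace. For brevity, this sketch extracts the noise subspace with the standard EVD of the auto-correlation matrix rather than the paper's RRLU factorization; the two-path channel and all sizes are illustrative:

```python
# MUSIC time-delay estimation sketch (EVD-based noise subspace; the paper's
# contribution is to obtain the null space via RRLU instead).
import numpy as np

rng = np.random.default_rng(0)
N, df = 64, 1.0               # frequency bins and bin spacing (normalized)
true_delays = [0.10, 0.25]    # two illustrative multipath delays

def steering(tau):
    """Frequency-domain steering vector for a path with delay tau."""
    return np.exp(-2j * np.pi * df * np.arange(N) * tau)

# Snapshots: channel responses with independent random path gains plus noise.
snapshots = np.array(
    [sum(rng.standard_normal() * steering(t) for t in true_delays)
     + 0.01 * rng.standard_normal(N)
     for _ in range(200)]).T

R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # auto-correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)                     # ascending eigenvalues
noise_subspace = eigvecs[:, :-len(true_delays)]          # all but the 2 largest

grid = np.linspace(0.0, 0.5, 501)
spectrum = np.array(
    [1.0 / np.linalg.norm(noise_subspace.conj().T @ steering(t)) ** 2
     for t in grid])
# Pick the two strongest local maxima of the pseudospectrum.
peaks = sorted(((spectrum[i], grid[i]) for i in range(1, len(grid) - 1)
                if spectrum[i - 1] < spectrum[i] > spectrum[i + 1]),
               reverse=True)[:2]
estimates = sorted(t for _, t in peaks)
```

The RRLU variant replaces the EVD step with a rank-revealing LU factorization of R, from which the null space is read off at lower cost.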
Abstract: A significant share of municipal electrical energy
consumption is related to decentralized air conditioning, which is
mostly provided by evaporative coolers. The aim is therefore to
optimize the design of air conditioners in order to increase their
efficiency. To achieve this goal, the results of standardized
practical tests for 40 evaporative coolers of different types were
collected, and in parallel results for the same coolers were computed
with one style of EER (Energy Efficiency Ratio) modeling. By
comparing the experimental results of the standardized tests with the
modeling results, the precision of the model is assessed. After
comparing this precision with international EER-based standards for
cooling capacity, aeration, and electrical energy consumption, an
energy label from A (most efficient) to G (least efficient) is
assigned; finally, methods for optimizing energy consumption and
classifying coolers are provided.
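The EER computation and A-to-G labelling step can be sketched as follows; the label thresholds below are made-up placeholders, not the actual values from the standard:

```python
# Sketch of EER-based energy labelling. The thresholds are illustrative
# placeholders, not the standard's real cut-offs.
def eer(cooling_capacity_w, power_input_w):
    """Energy Efficiency Ratio: cooling output per unit of electrical input."""
    return cooling_capacity_w / power_input_w

def energy_label(eer_value, thresholds=(3.1, 2.9, 2.7, 2.5, 2.3, 2.1)):
    """Map an EER value to a label from A (most efficient) to G (least)."""
    for label, cutoff in zip("ABCDEF", thresholds):
        if eer_value >= cutoff:
            return label
    return "G"
```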
Abstract: Traditional document representation for classification
follows the Bag of Words (BoW) approach to represent term weights.
The conventional method uses the Vector Space Model (VSM) to
exploit the statistical information of terms in the documents, but it
fails to address the semantic information as well as the order of the terms
present in the documents. The phrase-based approach preserves the
order of the terms in the documents but still ignores the semantics
behind the words. Therefore, a semantic concept-based approach that
incorporates ontology information is used in this paper to enhance
the semantics.
Abstract: In this paper, a novel method is proposed to forecast the
intraday stock market price directional movement based on sentiments
from Twitter and Money Control
news articles. Stock market forecasting is a difficult and highly
complicated task because it is affected by many factors, such as
economic conditions, political events, and investor sentiment.
The stock market series are generally dynamic, nonparametric, noisy
and chaotic by nature. The sentiment analysis along with wisdom of
crowds can automatically compute the collective intelligence of
future performance in many areas like stock market, box office sales
and election outcomes. The proposed method utilizes collective
sentiments about the stock market to predict stock price directional
movements. Using the Granger causality test, the collective
sentiments from these sources are shown to have strong predictive
power for up/down stock price directional movements.
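The Granger causality test behind the final claim compares two nested lag regressions; a self-contained sketch (with made-up sentiment/returns series in which returns are driven by the previous day's sentiment) is:

```python
# Minimal Granger causality F-test: does adding lagged x improve a lag
# regression of y? The synthetic series below are purely illustrative.
import numpy as np

def lagmat(v, p):
    """Matrix whose k-th column is v lagged by k steps (rows t = p..n-1)."""
    n = len(v)
    return np.column_stack([v[p - k: n - k] for k in range(1, p + 1)])

def rss(X, Y):
    """Residual sum of squares of an OLS fit of Y on X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return resid @ resid

def granger_f(y, x, p=1):
    """F-statistic for 'x Granger-causes y' with p lags (two nested fits)."""
    n = len(y)
    Y = y[p:]
    ones = np.ones((n - p, 1))
    X_restricted = np.hstack([ones, lagmat(y, p)])        # own lags only
    X_full = np.hstack([X_restricted, lagmat(x, p)])      # plus lags of x
    rss_r, rss_f = rss(X_restricted, Y), rss(X_full, Y)
    dof = (n - p) - X_full.shape[1]
    return ((rss_r - rss_f) / p) / (rss_f / dof)          # ~ F(p, dof)

# Synthetic illustration: returns driven by yesterday's sentiment.
rng = np.random.default_rng(0)
sentiment = rng.standard_normal(500)
returns = np.empty(500)
returns[0] = 0.0
returns[1:] = 0.8 * sentiment[:-1] + 0.1 * rng.standard_normal(499)
```

A large F for `granger_f(returns, sentiment)` (and a small one in the reverse direction) is the pattern the abstract reports.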
Abstract: An artificial neural network is a mathematical model
inspired by biological neural networks. There are several kinds of
neural networks, and they are widely used in many areas, such as
prediction, detection, and classification. Meanwhile, in day-to-day life,
people constantly have to make difficult decisions. For example,
the coach of a soccer club has to decide which offensive player
to select for a certain game. This work describes a novel neural
network combining the General Regression Neural Network and the
Probabilistic Neural Network to help a soccer coach make an informed
decision.
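The GRNN half of the proposed combination is, in essence, a Nadaraya-Watson kernel regressor; a minimal sketch (with an illustrative smoothing parameter sigma) is:

```python
# Minimal General Regression Neural Network (GRNN) sketch: the prediction is
# a Gaussian-weighted average of the training targets.
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """One-pass GRNN prediction for a single query point x."""
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to patterns
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel activations
    return np.dot(w, y_train) / np.sum(w)     # kernel-weighted target average
```

A PNN applies the same kernel machinery to class densities instead of a regression target.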
Abstract: This paper introduces an original method for
guaranteed estimation of the accuracy for an ensemble of Lipschitz
classifiers. The solution was obtained as a finite closed set of
alternative hypotheses, which contains the object of classification with
probability not less than the specified value. Thus, the
classification is represented by a set of hypothetical classes. In this
case, the smaller the cardinality of the discrete set of hypothetical
classes is, the higher the classification accuracy. Experiments have
shown that if the cardinality of the classifier ensemble is increased,
then the cardinality of this set of hypothetical classes is reduced. The
problem of the guaranteed estimation of the accuracy for an ensemble
of Lipschitz classifiers is relevant in multichannel classification of
target events in C-OTDR monitoring systems. Results of the practical
application of the suggested approach to accuracy control in C-OTDR
monitoring systems are presented.
Abstract: Data mining is rapidly growing in popularity. The foremost
aim of data mining methods is to extract, from a huge data set,
information in forms that can be understood for further use. Data
mining is a technology with rich potential, and it can support
industries and businesses that collect data in order to understand
their customers' behavior. Several methods are available for
extracting knowledge from data, such as classification, clustering,
association, discovery, and visualization, each with its own distinct
algorithms for fitting an appropriate model to the data. STATISTICA
mostly deals with very large groups of data that impose rigorous
computational constraints, and these challenges drove the emergence
of the powerful STATISTICA Data Mining technologies. This survey
gives an overview of the STATISTICA software and illustrates its
significant features.
Abstract: Previous studies on financial distress prediction adopt
the conventional failing/non-failing dichotomy; however, the extent
of distress differs substantially among different financial distress
events. To address this, the categories "non-distressed", "slightly
distressed", and "reorganization and bankruptcy" are used in our article
to approximate the continuum of corporate financial health. This paper
explains different financial distress events using the two-stage method.
First, this investigation adopts firm-specific financial ratios, corporate
governance and market factors to measure the probability of various
financial distress events based on multinomial logit models.
Specifically, the bootstrapping simulation is performed to examine the
difference of estimated misclassifying cost (EMC). Second, this work
further applies macroeconomic factors to establish the credit cycle
index and determines the distressed cut-off indicator of the two-stage
models using such index. Two different models, one-stage and
two-stage prediction models are developed to forecast financial
distress, and the results acquired from different models are compared
with each other and with the collected data. The findings show that
the one-stage model has a lower misclassification error rate, and is
thus more accurate, than the two-stage model.
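The first-stage multinomial logit can be sketched as follows; the two firm features and the synthetic three-class labels are invented for illustration and do not reproduce the paper's data:

```python
# Three-class multinomial logit sketch, mirroring the abstract's
# "non-distressed" / "slightly distressed" / "reorganization and bankruptcy"
# categories. Features and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
X = rng.standard_normal((n, 2))   # hypothetical features: leverage, ROA
# Latent distress score thresholded into three ordered classes:
# 0 = non-distressed, 1 = slightly distressed, 2 = reorganization/bankruptcy.
score = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.3 * rng.standard_normal(n)
y = np.digitize(score, [-0.5, 1.0])

model = LogisticRegression(max_iter=1000).fit(X, y)  # multinomial under lbfgs
probs = model.predict_proba(X)                       # one column per class
```

The estimated class probabilities are the quantities a misclassification-cost analysis would then act on.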
Abstract: In this paper, we use data mining to extract
biomedical knowledge. Complex biomedical data collected in
population studies are generally treated with statistical methods;
although these methods are robust, they are not sufficient by
themselves to harness the potential wealth of the data. For this
purpose, we used two learning algorithms: Decision Trees and Support
Vector Machines (SVM). These supervised classification methods are
applied to the diagnosis of thyroid disease. In this context, we
propose to promote the study and use of symbolic data mining
techniques.
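The two supervised classifiers named above can be sketched as follows; the synthetic dataset stands in for the thyroid data, which is not reproduced here:

```python
# Sketch of the two classifiers from the abstract on a synthetic 3-class
# stand-in for the thyroid data (e.g., normal / hyper / hypo).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)

# 5-fold cross-validated accuracy for each classifier.
tree_acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                           X, y, cv=5).mean()
svm_acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
```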
Abstract: Due to the rapid growth of the Internet, web opinion
sources are emerging dynamically; these are useful to both potential
customers and product manufacturers for prediction and decision
purposes. They are user-generated contents written in natural
language in an unstructured, free-text form. Opinion mining
techniques have therefore become popular for automatically processing
customer reviews, extracting product features and the user opinions
expressed about them. Since customer reviews may contain both
opinionated and factual sentences, a supervised machine learning
technique is applied for subjectivity classification to improve the
mining performance. In this paper, we dedicate our work to the task
of opinion summarization. Product feature and opinion extraction is
critical to opinion summarization, because its effectiveness
significantly affects the identification of semantic relationships.
The polarity and numeric score of all the features are determined
with the SentiWordNet lexicon. The problem of opinion summarization
is how to relate the opinion words to a certain feature. A
probabilistic supervised learning model improves the results and is
more flexible and effective.
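The per-feature polarity aggregation can be sketched with a toy lexicon; the scores below are invented stand-ins for SentiWordNet values:

```python
# Toy sketch of aggregating opinion-word polarities per product feature.
# The mini-lexicon stands in for SentiWordNet scores; all values are made up.
LEXICON = {"excellent": 0.8, "good": 0.6, "poor": -0.6, "terrible": -0.9}

def feature_polarity(feature_opinions):
    """Average the lexicon scores of the opinion words tied to each feature."""
    summary = {}
    for feature, words in feature_opinions.items():
        scores = [LEXICON[w] for w in words if w in LEXICON]
        summary[feature] = sum(scores) / len(scores) if scores else 0.0
    return summary
```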
Abstract: This paper proposes a rotationally invariant texture
feature based on the roughness property of the image for psoriasis
image analysis. In this work, we have applied this feature for image
classification and segmentation. The fuzzy concept is employed to
overcome the imprecision of roughness. Since the psoriasis lesion is
modeled by a rough surface, the feature is extended for calculating
the Psoriasis Area Severity Index value. For classification and
segmentation, the Nearest Neighbor algorithm is applied. We have
obtained promising results for identifying affected lesions by using
the roughness index and severity level estimation.
Abstract: In this paper, groundwater seepage into Amirkabir
tunnel has been estimated using analytical and numerical methods for
14 different sections of the tunnel. The Site Groundwater Rating (SGR)
method has also been applied for qualitative and quantitative
classification of the tunnel sections. The results of the
above-mentioned methods were compared with one another. The study
shows reasonable agreement among all methods except for two sections
of the tunnel. In these two sections there are significant
discrepancies between the numerical and analytical results,
originating mainly from the model geometry and the high overburden.
SGR and the analytical and numerical calculations confirm a high
concentration of seepage inflow in fault zones. The maximum seepage
flow into the tunnel, occurring in the crushed zone, is estimated at
0.425 lit/sec/m with the analytical method and 0.628 lit/sec/m with
the numerical method. Based on the SGR method, six of the 14 sections
along the Amirkabir tunnel axis are found to be in the "No Risk"
class, which is supported by analytical and numerical seepage values
of less than 0.04 lit/sec/m.
Abstract: Maize constitutes a major agrarian product for the vast
population, but despite its economic importance, it has not been
produced in quantities that meet the economic needs of the country. Achieving
optimum yield in maize can meaningfully be supported by land
suitability analysis in order to guarantee self-sufficiency for future
production optimization. This study examines land suitability for
maize production through the analysis of the physicochemical
variations in soil properties and other land attributes over space using
a Geographic Information System (GIS) framework.
Physicochemical parameters of importance selected include slope,
landuse, physical and chemical properties of the soil, and climatic
variables. Landsat imagery was used to categorize the landuse,
Shuttle Radar Topographic Mapping (SRTM) data were used to generate
the slope, and soil samples were analyzed for their physical and
chemical components.
Suitability was categorized into highly, moderately and marginally
suitable based on the Food and Agriculture Organization (FAO)
classification, using the Analytical Hierarchy Process (AHP)
technique in GIS. The result can be used by small-scale farmers for
efficient decision making in the allocation of land for maize
production.
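The AHP weighting step used in the suitability analysis can be sketched as follows; the 3x3 pairwise-comparison values (slope vs. landuse vs. soil) are invented for illustration:

```python
# AHP sketch: criterion weights are the normalized principal eigenvector of a
# pairwise-comparison matrix, with Saaty's consistency-ratio check.
import numpy as np

# pairwise[i, j] = how much more important criterion i is than criterion j
# (criteria here: slope, landuse, soil -- values invented for the sketch).
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(pairwise)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights = weights / weights.sum()        # normalized criterion weights

# Saaty consistency check: CR below 0.1 is the usual acceptance rule.
n = pairwise.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
cr = ci / 0.58                           # 0.58 = random index for n = 3
```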
Abstract: The aim of this work is to build a model based on
tissue characterization that is able to discriminate pathological and
non-pathological regions from three-phasic CT images. With our
research and based on a feature selection in different phases, we are
trying to design a neural network system with an optimal neuron
number in a hidden layer. Our approach consists of three steps:
feature selection, feature reduction, and classification. For each
region of interest (ROI), 6 distinct sets of texture features are
extracted: first order histogram parameters, absolute gradient,
run-length matrix, co-occurrence matrix, autoregressive model, and
wavelet, for a total of 270 texture features. When analyzing multiple
phases, we show that the injected liquid causes changes to the highly
relevant features in each region. Our results demonstrate that phase
3 is the best phase for detecting HCC tumors for most of the features
we feed to the classification algorithm. The detection accuracy
between the pathological and healthy classes achieved by our method
with first order histogram parameters is 85% in phase 1, 95% in phase
2, and 95% in phase 3.
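The "first order histogram parameters" feature set reported as most discriminative reduces to simple intensity statistics over a region of interest; a sketch (with a made-up ROI in the test) is:

```python
# First-order histogram features: intensity statistics over an ROI.
import numpy as np

def first_order_features(roi):
    """Mean, variance, skewness, and excess kurtosis of ROI intensities."""
    x = roi.astype(float).ravel()
    mu = x.mean()
    var = x.var()
    sd = np.sqrt(var) if var > 0 else 1.0    # guard against flat regions
    skew = np.mean(((x - mu) / sd) ** 3)
    kurt = np.mean(((x - mu) / sd) ** 4) - 3.0
    return {"mean": mu, "variance": var, "skewness": skew, "kurtosis": kurt}
```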
Abstract: The concept of technology, like technology itself, has
evolved continuously over time, such that, nowadays, this concept is
still marked by myths and realities. Even the concept of science is
frequently mistaken for technology. This paper therefore presents
different interpretations of the concept of technology over the
course of history, as well as the social and cultural aspects
associated with it, through an analysis drawing on insights from
sociological studies of science and technology and their multiple
relations with society. Through content analysis, the paper presents
a classification of how technology is interpreted in the social
sphere and seeks to show how a broader understanding can contribute
to better interpretations of how scientific and technological
development influences the environment in which we operate. The text
also presents a particular point of view on the interpretation of the
concept, drawn from the analysis carried out throughout the work.
Abstract: The goal of image segmentation is to cluster pixels
into salient image regions. Segmentation could be used for object
recognition, occlusion boundary estimation within motion or stereo
systems, image compression, image editing, or image database lookup.
In this paper, we present a color image segmentation using
support vector machine (SVM) pixel classification. Firstly, the pixel
level color and texture features of the image are extracted and they
are used as input to the SVM classifier. These features are extracted
using the homogeneity model and Gabor filters. With the extracted
pixel-level features, the SVM classifier is trained on labels
produced by FCM (Fuzzy C-Means) clustering. The image segmentation
thus takes advantage of both the pixel-level information of the image
and the ability of the SVM classifier. Experiments show that the
proposed method yields very good segmentation results with better
efficiency, and increases the quality of the image segmentation
compared with other segmentation methods proposed in the literature.
Abstract: This paper introduces an original method of
parametric optimization of the structure of a multimodal
decision-level fusion scheme, which combines the partial solutions of
the classification task obtained from an ensemble of mono-modal
classifiers. As a result, a multimodal fusion classifier with the
minimum value of the total error rate has been obtained.
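Decision-level fusion of mono-modal classifiers can be sketched as a weighted combination of their class-probability outputs; the weights below are fixed, made-up values, whereas the paper optimizes them to minimize the total error rate:

```python
# Decision-level fusion sketch: weighted average of per-classifier class
# probabilities, then argmax. Weights here are illustrative, not optimized.
import numpy as np

def fuse(prob_outputs, weights):
    """Convex combination of probability vectors; returns (class, fused)."""
    stacked = np.array(prob_outputs)             # (n_classifiers, n_classes)
    fused = weights @ stacked / np.sum(weights)  # weighted average per class
    return int(np.argmax(fused)), fused

# Three mono-modal classifiers voting on a 2-class problem.
pred, fused = fuse([[0.6, 0.4], [0.2, 0.8], [0.45, 0.55]],
                   weights=np.array([0.5, 0.3, 0.2]))
```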
Abstract: The web's increased popularity has brought with it a huge
amount of information, which makes automated web page classification
systems essential for improving search engine performance. Web pages
have many features, such as HTML or XML tags, hyperlinks, URLs, and
text contents, which can be considered during an automated
classification process. It is known that web page classification is
enhanced by hyperlinks, as they reflect web page linkages. The aim of
this study is to reduce the number of features to
be used to improve the accuracy of the classification of web pages. In
this paper, a novel feature selection method based on an improved
Particle Swarm Optimization (PSO) that uses principles of evolution
is proposed. The extracted features were tested on the WebKB dataset
using a parallel neural network to reduce the computational cost.
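A simplified binary-PSO feature selection loop is sketched below; the cheap correlation-based fitness stands in for the paper's parallel-neural-network evaluation, and the swarm size, coefficients, and synthetic data are arbitrary illustrative choices:

```python
# Binary PSO feature selection sketch. Each particle is a feature mask; bits
# are resampled with probability sigmoid(velocity), the standard binary-PSO
# update. The filter-style fitness replaces the paper's classifier evaluation.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 12
X = rng.standard_normal((n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # only features 0 and 1 matter

# Cheap filter criterion: mean |correlation with the label| of the selected
# features, minus a small penalty per selected feature.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(d)])

def fitness(mask):
    return corr[mask].mean() - 0.01 * mask.sum() if mask.any() else -1.0

particles = rng.random((10, d)) > 0.5       # binary positions (feature masks)
velocity = np.zeros((10, d))
pbest = particles.copy()
pbest_fit = np.array([fitness(p) for p in particles])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(40):
    r1, r2 = rng.random((10, d)), rng.random((10, d))
    velocity = (0.7 * velocity + 1.5 * r1 * (pbest * 1.0 - particles)
                               + 1.5 * r2 * (gbest * 1.0 - particles))
    particles = rng.random((10, d)) < 1.0 / (1.0 + np.exp(-velocity))
    for i, p in enumerate(particles):
        f = fitness(p)
        if f > pbest_fit[i]:
            pbest_fit[i], pbest[i] = f, p.copy()
    gbest = pbest[np.argmax(pbest_fit)].copy()
```

In the paper's setting, `fitness` would instead train and score the parallel neural network on the features selected by each mask.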