Abstract: The CMLP building was developed as a model for
sustainability, with strategies to reduce water and energy use and
pollution, and to provide a healthy environment for the building's
occupants. The aim of this paper is to investigate the environmental
effects of the energy used by this building. A life cycle analysis
(LCA) was conducted to measure the actual environmental effects
produced by the use of energy. The impact categories most affected
by energy use were found to be human health effects and ecotoxicity.
Natural gas extraction, uranium milling for nuclear energy
production, and blasting for mining and infrastructure construction
are the processes contributing most to the emissions in the human
health category. Comparing the LCA results of the CMLP building with
those of a conventional building showed that the energy used by the
CMLP building causes less damage to the environment and human health
than that used by a conventional building.
Abstract: The objective of this study is to identify the factors
that influence online purchasing loyalty for Thai herbal products.
Survey research was used to gather data for Thai herb online
merchants to assess the factors that enhance loyalty. Data were
collected from 300 online customers who had experience purchasing
Thai herbal products online. Prior experience comprises data on
previous online herb usage, herb purchases, and internet usage.
E-Quality comprises the information quality, system quality, service
quality, and product quality of the Thai herbal products sold
online. The results suggest that prior experience, E-Quality,
attitude toward purchase, and trust in the online merchant have
major impacts on loyalty. A good attitude toward purchasing Thai
herbal products online and E-Quality are the most significant
determinants of loyalty.
Abstract: Estimating the time and cost of work completion in a
project, and following them up during execution, contribute to the
success or failure of a project and are very important for the
project management team. Delivering on time and within the budgeted
cost requires managing and controlling projects well. To deal with
the complex task of controlling and modifying the baseline project
schedule during execution, earned value management systems have been
set up and are widely used to measure and communicate the real
physical progress of a project. However, earned value management
often fails to predict the total duration of the project. In this
paper, data mining techniques are used to predict the total project
duration in terms of the time estimate at completion, EAC(t). For
this purpose, we used a project with 90 activities, updated day by
day. The regular indexes from the literature and the Earned Duration
Method were then applied to calculate the time estimate at
completion, and these were used as input data for prediction; the
major parameters among them were identified using the Clem software.
Using data mining, the parameters that affect EAC(t) and the
relationships between them can be extracted, which is very useful
for managing a project with minimal delay risk. As we show, this can
be a simple, safe, and applicable method for predicting the
completion time of a project during execution.
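The abstract does not spell out the indexes it feeds to the
prediction step; purely as an illustration, the earned-schedule
variant of EAC(t), one of the regular formulas in the EVM
literature, can be sketched as follows (function and variable names
are ours, not the paper's):

```python
from bisect import bisect_right

def earned_schedule(pv_curve, ev):
    """Earned schedule ES: the time at which the cumulative planned
    value curve reaches the currently earned value (with linear
    interpolation between reporting periods)."""
    # pv_curve: cumulative planned value at periods 0, 1, 2, ...
    n = bisect_right(pv_curve, ev) - 1  # last period with PV <= EV
    if n >= len(pv_curve) - 1:
        return float(len(pv_curve) - 1)
    return n + (ev - pv_curve[n]) / (pv_curve[n + 1] - pv_curve[n])

def eac_t(pv_curve, ev, actual_duration, planned_duration):
    """Time estimate at completion EAC(t), assuming the current
    schedule efficiency SPI(t) = ES / AD persists."""
    es = earned_schedule(pv_curve, ev)
    spi_t = es / actual_duration
    return actual_duration + (planned_duration - es) / spi_t
```

For a project that has earned 20 units of value by period 3 against
a planned value curve reaching 20 at period 2, ES = 2 and the
schedule efficiency of 2/3 stretches the remaining work accordingly.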
Abstract: This paper presents a text clustering system developed on
the basis of a k-means-type subspace clustering algorithm to cluster
large, high-dimensional, and sparse text data. In this algorithm, a
new step is added to the k-means clustering process to automatically
calculate the weights of keywords in each cluster, so that the
important words of a cluster can be identified by their weight
values. To aid understanding and interpretation of the clustering
results, a few keywords that best represent the semantic topic are
extracted from each cluster. Two methods are used to extract the
representative words. The candidate words are first selected
according to their weights as calculated by our new algorithm. The
candidates are then fed to WordNet to identify the set of nouns and
to consolidate synonyms and hyponyms. Experimental results have
shown that the clustering algorithm is superior to other subspace
clustering algorithms, such as PROCLUS and HARP, and to a
k-means-type algorithm, namely Bisecting-KMeans. Furthermore, the
word extraction method is effective in selecting words to represent
the topics of the clusters.
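The abstract does not give the weight formula; as a sketch only,
entropy-style subspace weighting of the kind used in k-means-type
subspace clustering assigns each cluster's features weights that
decay with their within-cluster dispersion (the names and the
parameter gamma below are illustrative, not taken from the paper):

```python
import numpy as np

def update_weights(X, labels, centers, gamma=0.5):
    """Per-cluster feature weights for an entropy-weighted subspace
    k-means step: features with small within-cluster dispersion get
    large weights (a softmax of the negative dispersion)."""
    k, d = centers.shape
    W = np.empty((k, d))
    for l in range(k):
        members = X[labels == l]
        # per-feature dispersion of cluster l around its center
        disp = ((members - centers[l]) ** 2).sum(axis=0)
        e = np.exp(-disp / gamma)
        W[l] = e / e.sum()  # weights of cluster l sum to 1
    return W
```

In a text setting the columns of X are keyword features, so the
high-weight columns of W[l] are the candidate topic words of
cluster l.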
Abstract: In territories where high-intensity earthquakes are
frequent, attention is paid to solving seismic problems. The paper
describes two computational model variants, based on the finite
element method, of the structure with different subsoil simulations
(rigid or elastic subsoil). The ANSYS program system, based on the
finite element method, was used for the simulations and
calculations. Seismic response calculations of a residential
building structure were carried out for a loading characterized by
an accelerogram, for comparison with the response spectrum method.
Abstract: This paper details the application of a genetic
programming framework for induction of useful classification rules
from a database of income statements, balance sheets, and cash flow
statements for North American public companies. Potentially
interesting classification rules are discovered. Anomalies in the
discovery process merit further investigation of the application of
genetic programming to the dataset for the problem domain.
Abstract: The goal of data mining algorithms is to discover
useful information embedded in large databases. One of the most
important data mining problems is discovery of frequently occurring
patterns in sequential data. In a multidimensional sequence each
event depends on more than one dimension. The search space is quite
large and the serial algorithms are not scalable for very large
datasets. To address this, it is necessary to study scalable parallel
implementations of sequence mining algorithms.
In this paper, we present a model for multidimensional sequences
and describe a parallel algorithm based on data parallelism.
Simulation experiments show good load balancing and scalable,
acceptable speedup over different numbers of processors and problem
sizes, and demonstrate that our approach can work efficiently in a
real parallel computing environment.
Abstract: Chatter vibration has been a troublesome problem for
machine tools in high-precision, high-speed machining. Essentially,
machining performance is determined by the dynamic characteristics
of the machine tool structure and the dynamics of the cutting
process. The dynamic vibration behavior of the spindle tool system
therefore largely determines the performance of the machine tool.
The purpose of this study is to investigate the influence of the
machine frame structure on the dynamic frequency of the spindle tool
unit through a finite element modeling approach. To this end, a
realistic finite element model of the vertical milling system was
created by incorporating the spindle-bearing model into the spindle
head stock of the machine frame. Using this model, the dynamic
characteristics of milling machines with different structural
designs of the spindle head stock and an identical spindle tool unit
were demonstrated. The finite element modeling results reveal that
the spindle tool unit behaves more compliantly when the excitation
frequency approaches a natural mode of the spindle tool, while the
spindle tool shows a higher dynamic stiffness at the lower
frequencies that may be excited by the structural mode of the
milling head. It is therefore concluded that the structural
configuration of the spindle head stock, together with the vertical
column of the milling machine, plays an important role in
determining the machining dynamics of the spindle unit.
Abstract: In this paper a new approach to transmission pricing is
presented. The main idea is voltage angle allocation, i.e.,
determining the contribution of each contract to the voltage angle
of each bus. A DC power flow is used to compute a primary solution
for the angle decomposition. To account for the impact of system
non-linearity on the angle decomposition, the primary solution is
corrected over successive iterations of a decoupled Newton-Raphson
power flow. The contribution of each contract to the power flow of
each transmission line is then computed from the angle
decomposition. The contract-related flows are used as a measure of
the "extent of use" of the transmission network capacity and,
consequently, for transmission pricing. The presented approach is
applied to a 4-bus test system and the IEEE 30-bus test system.
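The angle decomposition itself is specific to the paper, but the DC
power flow that seeds it is standard: with line reactances x and net
injections P, solve the reduced system B'θ = P with the slack bus
angle fixed at zero, then read each line flow as the angle
difference over the reactance. A minimal sketch (bus numbering and
names are ours):

```python
import numpy as np

def dc_power_flow(n_bus, lines, injections, slack=0):
    """Solve the DC power flow B' * theta = P for the bus voltage
    angles (slack bus angle fixed at 0) and return the line flows.
    lines: list of (from_bus, to_bus, reactance x) tuples."""
    # Assemble the susceptance matrix B from the line reactances.
    B = np.zeros((n_bus, n_bus))
    for i, j, x in lines:
        b = 1.0 / x
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    keep = [k for k in range(n_bus) if k != slack]
    theta = np.zeros(n_bus)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)],
                                  np.asarray(injections)[keep])
    # Flow on each line is the angle difference over its reactance.
    flows = [(theta[i] - theta[j]) / x for i, j, x in lines]
    return theta, flows
```

In the paper's scheme, a solution of this kind is then refined
through Newton-Raphson iterations before the per-contract angle
shares are extracted.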
Abstract: In an era of intense competition, understanding and
satisfying customers' requirements are critical tasks for a company
seeking to make a profit. Customer relationship management (CRM) has
thus become an important business issue. With the help of data
mining techniques, managers can explore and analyze large quantities
of data to discover meaningful patterns and rules. Among all such
methods, the well-known association rule is the most commonly used.
This paper builds on the Apriori algorithm and uses genetic
algorithms combined with a data mining method to discover fuzzy
classification rules. The mined results can be applied in CRM to
help decision makers make correct business decisions for marketing
strategies.
Abstract: The purpose of this paper is to conceptualize a
future-oriented human work environment and organizational activity
in deep mines that entails a vision of a good and safe workplace.
Future-oriented technological challenges and the mental images
required for modern work organization design were appraised. It is
argued that an intelligent deep mine covering the entire value
chain, including environmental issues, and with a work organization
that supports good working and social conditions towards increased
human productivity, could be designed. With such an intelligent
system and work organization in place, the mining industry could be
seen as a place where cooperation, skills development, and gender
equality are key components. From this perspective, both young
people and women might view mining as an attractive job and the work
environment as safe, and this could go a long way towards redressing
the unequal gender balance that exists in most mines today.
Abstract: A research project dealing with the phytoremediation of a
soil polluted by several heavy metals is currently under way. The
case study is a mining area in Hamedan province in the central-west
part of Iran. The phytoextraction and phytostabilization potential
of the plants was evaluated considering the concentrations of heavy
metals in the plant tissues as well as the bioconcentration factor
(BCF) and the translocation factor (TF). Several established
criteria were also applied to identify hyperaccumulator plants in
the studied area. Results showed that none of the collected plant
species were suitable for phytoextraction of Cu, Zn, Fe, and Mn.
Among the plants, however, Euphorbia macroclada was the most
efficient in phytostabilization of Cu and Fe; Ziziphora
clinopodioides, Cousinia sp., and Chenopodium botrys were the most
suitable for phytostabilization of Zn; and Chondrila juncea and
Stipa barbata had potential for phytostabilization of Mn. By the
most common criterion, Euphorbia macroclada and Verbascum speciosum
were Fe hyperaccumulator plants. The present study showed that
native plant species growing on contaminated sites may have
potential for phytoremediation.
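The abstract uses BCF and TF without defining them; under the
definitions commonly used in phytoremediation studies, both are
simple concentration ratios (a minimal sketch with illustrative
names):

```python
def bioconcentration_factor(c_plant, c_soil):
    """BCF: metal concentration in plant tissue relative to the
    concentration in the surrounding soil."""
    return c_plant / c_soil

def translocation_factor(c_shoot, c_root):
    """TF: shoot-to-root concentration ratio. TF > 1 suggests
    efficient upward transport (a phytoextraction candidate);
    TF < 1 with strong root accumulation points instead to
    phytostabilization."""
    return c_shoot / c_root
```

Screening of the kind reported above combines these ratios with
thresholds on absolute tissue concentrations to flag
hyperaccumulators.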
Abstract: Soil organic carbon (SOC) plays a key role in soil
fertility, hydrology, and contaminant control, and acts as a sink or
source of terrestrial carbon that can affect the concentration of
atmospheric CO2. SOC supports the sustainability and quality of
ecosystems, especially in semi-arid regions. This study was
conducted to determine the relative importance of 13 exploratory
climatic, soil, and geometric factors for the SOC content in a
semi-arid watershed zone in Iran. Two methods, canonical
discriminant analysis (CDA) and feed-forward back-propagation neural
networks, were used to predict SOC. Stepwise regression and
sensitivity analysis were performed to identify the relative
importance of the exploratory variables. The sensitivity analysis
showed that a 7-2-1 neural network and a CDA model with 5 inputs had
the highest predictive ability, explaining 70% and 65% of the SOC
variability, respectively. Since the neural network model
outperformed the CDA model, it should be preferred for estimating
SOC.
Abstract: The paper discusses the mathematics of pattern
indexing and its applications to recognition of visual patterns that are
found in video clips. It is shown that (a) pattern indexes can be
represented by collections of inverted patterns, and (b) solutions
to pattern classification problems can be found as intersections and
histograms of inverted patterns, so that matching of the original
patterns can be avoided.
Abstract: In many data mining applications, it is a priori known
that the target function should satisfy certain constraints imposed
by, for example, economic theory or a human-decision maker. In this
paper we consider partially monotone prediction problems, where the
target variable depends monotonically on some of the input variables
but not on all. We propose a novel method to construct prediction
models, where monotone dependences with respect to some of
the input variables are preserved by construction. Our
method belongs to the class of mixture models. The basic idea is to
convolute monotone neural networks with weight (kernel) functions
to make predictions. Using simulation and real case studies,
we demonstrate the application of our method. To obtain a sound
assessment of the performance of our approach, we use standard
neural networks with weight decay and partially monotone linear
models as benchmark methods for comparison. The results show that
our approach outperforms partially monotone linear models in terms
of accuracy. Furthermore, the incorporation of partial monotonicity
constraints not only leads to models that are in accordance with the
decision maker's expertise, but also considerably reduces the model
variance in comparison to standard neural networks with weight
decay.
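The paper's convolution of monotone networks with kernel functions
is not detailed in the abstract; purely as an illustration of the
monotone building block, one common way to make a network
non-decreasing in every input is to force its weights positive via
an exponential reparameterization (all names here are ours, not the
paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def monotone_mlp(x, params):
    """A one-hidden-layer network that is non-decreasing in every
    input: weights enter through exp(.) so they are positive, and
    tanh is an increasing activation, so the composition is
    monotone."""
    W1, b1, W2, b2 = params
    h = np.tanh(x @ np.exp(W1) + b1)
    return h @ np.exp(W2) + b2

# Random unconstrained parameters; exp(.) makes them positive.
params = (rng.normal(size=(2, 8)), np.zeros(8),
          rng.normal(size=(8, 1)), np.zeros(1))
```

Gradient training proceeds on the unconstrained parameters, so the
monotonicity constraint never has to be enforced explicitly.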
Abstract: In this paper, parametric analytical studies have been
carried out to examine the intrinsic flow physics pertaining to the
liftoff time of solid propellant rockets. Idealized inert simulators
of solid rockets are selected for numerical studies examining the
pre-ignition chamber dynamics. Detailed diagnostic investigations
have been carried out using an unsteady two-dimensional k-omega
turbulence model. We conjecture from the numerical results that
variations in the igniter jet impingement angle, the turbulence
level, the time and location of first ignition, the flame spread
characteristics, and the overall chamber dynamics, including the
boundary layer growth history, all have a bearing on the time to
nozzle flow choking needed to establish the required thrust for
rocket liftoff. We conclude that an altered flow choking time of
strap-on motors with a pre-determined identical ignition time at the
liftoff phase will lead to malfunctioning of the rocket. We also
conclude that, in the light of space debris, an error in predicting
the liftoff time can lead to an unfavorable launch window, resulting
in satellite injection errors and/or mission failures.
Abstract: Bus network design is an important problem in public
transportation. The main step in this design is determining the
number of required terminals and their locations. This is a special
type of facility location problem, a large-scale combinatorial
optimization problem that requires a long time to solve. The genetic
algorithm (GA) is a search and optimization technique that works on
the evolutionary principles of natural genetics. Specifically, the
evolution of chromosomes through crossover, mutation, and natural
selection based on Darwin's survival-of-the-fittest principle is
artificially simulated to constitute a robust search and
optimization procedure. In this paper, we first state the problem as
a mixed integer programming (MIP) problem. We then design a new
crossover and mutation for the bus terminal location problem (BTLP).
We tested different parameter settings of the genetic algorithm on a
sample problem and obtained, by numerical trial and error, the
optimal parameters for solving the BTLP.
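The paper's custom crossover and mutation operators are not detailed
in the abstract; as a hedged sketch of the overall scheme, a generic
GA for choosing k terminal sites from a candidate set might look as
follows (the operators and all names below are illustrative, not the
paper's):

```python
import random

random.seed(1)

def fitness(sol, stops, dist):
    """Total distance from each stop to its nearest chosen terminal
    (to be minimized)."""
    return sum(min(dist[s][t] for t in sol) for s in stops)

def ga_terminals(stops, candidates, k, dist, pop=30, gens=50):
    """Generic GA for picking k terminal sites out of the candidate
    locations; a chromosome is simply a set of k site indices."""
    population = [random.sample(candidates, k) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda s: fitness(s, stops, dist))
        survivors = population[: pop // 2]   # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = random.sample(survivors, 2)
            genes = list(set(a) | set(b))     # crossover: merge parents
            child = random.sample(genes, k)
            if random.random() < 0.2:         # mutation: swap in a new site
                out = random.randrange(k)
                child[out] = random.choice([c for c in candidates
                                            if c not in child])
            children.append(child)
        population = survivors + children
    return min(population, key=lambda s: fitness(s, stops, dist))
```

On a toy instance with two clusters of stops, the GA settles on one
terminal per cluster, mirroring the facility-location structure of
the BTLP.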
Abstract: Accurate assessment of the primary tumor's response to
treatment is important in the management of breast cancer. This
paper introduces a new set of treatment evaluation indicators for
breast cancer cases based on the computation of three known metrics:
the Euclidean, Hamming, and Levenshtein distances. The distance
principles are applied to pairs of mammograms and/or echograms
recorded before and after treatment, providing a reference point for
judging the extent of evolution of the studied carcinoma. The
obtained numerical results are very transparent and indicate not
only the evolution or involution of the tumor under treatment, but
also give a quantitative measure of the benefit of the selected
method of treatment.
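For reference, the three metrics named above have standard
definitions; a minimal sketch follows (how the paper encodes the
mammograms and echograms as vectors or sequences is not given in the
abstract):

```python
def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def hamming(a, b):
    """Number of positions at which two equal-length sequences
    differ."""
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    turning sequence a into sequence b (dynamic programming over a
    rolling row of the edit matrix)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]
```

Applied to a pre-treatment and a post-treatment encoding, a shrinking
distance over time then serves as the evaluation indicator.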
Abstract: We present a method to create special domain
collections from news sites. The method only requires a single
sample article as a seed. No prior corpus statistics are needed and the
method is applicable to multiple languages. We examine various
similarity measures and the creation of document collections for
English and Japanese. The main contributions are as follows. First,
the algorithm can build special domain collections from as little as
one sample document. Second, unlike other algorithms it does not
require a second "general" corpus to compute statistics. Third, in our
testing the algorithm outperformed others in creating collections
made up of highly relevant articles.
Abstract: In this paper we propose a new criterion for solving the
problem of channel shortening in multi-carrier systems. In a
discrete multitone receiver, a time-domain equalizer (TEQ) reduces
intersymbol interference (ISI) by shortening the effective duration
of the channel impulse response. The minimum mean square error
(MMSE) method for TEQ design does not give satisfactory results. In
[1] a new criterion is introduced for partially equalizing severe
ISI channels to reduce the cyclic prefix overhead of the discrete
multitone transceiver (DMT), assuming a fixed transmission
bandwidth. Due to a specific constraint in that method (a unit norm
constraint on the target impulse response (TIR)), the freedom to
choose the optimum TIR vector is reduced. Better results can be
obtained by avoiding the unit norm constraint on the TIR. In this
paper we reformulate the cost function proposed in [1] as the
maximization of a determinant subject to a linear matrix inequality
(LMI) and a quadratic constraint, and solve the resulting
optimization problem. The usefulness of the proposed method is
demonstrated through simulations.