Abstract: Natural outdoor scene classification is an active and
promising research area around the globe. In this study, the
classification is carried out in two phases. In the first phase, the
features are extracted from the images by wavelet decomposition
method and stored in a database as feature vectors. In the second
phase, the neural classifiers such as back-propagation neural network
(BPNN) and resilient back-propagation neural network (RPNN) are
employed for the classification of scenes. Four hundred color images
are considered from MIT database of two classes as forest and street.
A comparative study has been carried out on the performance of the
two neural classifiers, BPNN and RPNN, as the number of
test samples increases. RPNN showed better classification results than
BPNN on large test samples.
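The practical difference between the two classifiers lies in the weight update rule. As a minimal, hypothetical sketch (function name and hyper-parameter defaults are illustrative, not from the paper; the weight-backtracking variants of Rprop are omitted), resilient back-propagation adapts a per-weight step size using only the sign of the gradient, which is what makes it robust to gradient magnitude:

```python
import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One resilient back-propagation (Rprop) update.

    Unlike plain back-propagation, only the *sign* of the gradient is
    used; each weight carries its own adaptive step size.
    """
    sign_change = grad * prev_grad
    # Grow the step where the gradient sign persists, shrink where it flips.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Move each weight against its gradient by the adapted step.
    delta_w = -np.sign(grad) * step
    return delta_w, step
```

A plain BPNN update would instead be `delta_w = -learning_rate * grad`, whose magnitude depends directly on the gradient.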
Abstract: This study proposes and tests a research model capturing elements of perceived gain and loss that, through the perceived value of medical tourism products, influence the behavioural intention of potential customers. Data from 301 usable questionnaires were tested against the research model using the structural equation modeling approach. The results indicated that perceived value is a key predictor of customer intentions. As for benefits, medical quality, service quality and enjoyment were the components that significantly influenced the perception of value. Regarding sacrifice, the effects of perceived risk on perceived value were significant. The findings can provide insights into how destination countries can make medical tourism a win-win for themselves and international patients. Keywords: Medical tourism, perceived value, intention.
Abstract: Grid computing provides a virtual framework for
controlled sharing of resources across institutional boundaries.
Recently, trust has been recognised as an important factor for
selection of optimal resources in a grid. We introduce a new method
that provides a quantitative trust value, based on the past interactions
and present environment characteristics. This quantitative trust value
is used to select a suitable resource for a job and eliminates run time
failures arising from incompatible user-resource pairs. The proposed
work acts as a tool to calculate the trust values of the various
components of the grid and thereby improves the success rate of the
jobs submitted to resources on the grid. Access to a resource depends
not only on the identity and behaviour of the resource but
also on its context of transaction, time of transaction, connectivity
bandwidth, availability of the resource and load on the resource. The
quality of the recommender is also evaluated based on the accuracy
of the feedback provided about a resource. The jobs are submitted for
execution to the selected resource after finding the overall trust value
of the resource. The overall trust value is computed with respect to
the subjective and objective parameters.
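The combination of past interactions, recommender feedback, and present environment characteristics into one value can be sketched as a weighted aggregate. This is a generic illustration only: the weights, the accuracy-weighted averaging of recommendations, and the context factors are assumptions, not the paper's actual formula.

```python
def overall_trust(direct, recommendations, context,
                  w_direct=0.5, w_rec=0.3, w_ctx=0.2):
    """Combine trust evidence into one value in [0, 1].

    direct          -- trust from past interactions with the resource.
    recommendations -- list of (trust_value, recommender_accuracy) pairs,
                       so inaccurate recommenders carry less weight.
    context         -- dict of present environment scores in [0, 1],
                       e.g. 'bandwidth', 'availability', 'load'.
    """
    if recommendations:
        acc_sum = sum(acc for _, acc in recommendations)
        rec = (sum(t * acc for t, acc in recommendations) / acc_sum
               if acc_sum > 0 else 0.0)
    else:
        rec = 0.0
    ctx = sum(context.values()) / len(context) if context else 0.0
    return w_direct * direct + w_rec * rec + w_ctx * ctx
```

A job scheduler would then submit the job to the resource with the highest overall trust value.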
Abstract: Emotion recognition is an important research field with many applications nowadays. This work focuses on recognizing different emotions from the speech signal. The extracted features are related to statistics of pitch, formants, and energy contours, as well as spectral, perceptual and temporal features, jitter, and shimmer. An artificial neural network (ANN) was chosen as the classifier. Our concern is finding a robust and fast ANN classifier suitable for different real-life applications. Several experiments were carried out on different ANNs to investigate the factors that impact the classification success rate. Using a database containing 7 different emotions, it is shown that with a proper and careful adjustment of the feature format, training data sorting, number of features selected and even the ANN type and architecture used, a success rate of 85% or more can be achieved without increasing the system complexity and the computation time.
Abstract: This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as shared memory parallel mode we introduce a transformation of the training data that allows for the usage of an expensive generalized kernel without additional costs. We present experiments for the Gaussian kernel, but usage of other kernel functions is possible, too. In order to further speed up the decomposition algorithm we analyze the critical problem of working set selection for large training data sets. In addition, we analyze the influence of the working set sizes onto the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm and the improvement of overall support vector machine learning performance. Our method allows for using extensive parameter search methods to optimize classification accuracy.
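The Gaussian kernel used in the experiments above can be evaluated for a whole training set at once. A minimal sketch (the vectorized expansion of pairwise distances into a single matrix product is a standard trick, not specific to this paper):

```python
import numpy as np

def gaussian_kernel_matrix(X, Y, sigma=1.0):
    """K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2)).

    Pairwise squared distances are expanded as
    ||x||^2 + ||y||^2 - 2 x.y, so the whole matrix reduces to one
    matrix multiplication -- the key to cheap kernel evaluation at scale.
    """
    sq_x = np.sum(X**2, axis=1)[:, None]
    sq_y = np.sum(Y**2, axis=1)[None, :]
    sq_dist = np.maximum(sq_x + sq_y - 2.0 * X @ Y.T, 0.0)  # clamp rounding
    return np.exp(-sq_dist / (2.0 * sigma**2))
```

Other kernel functions can be substituted by changing only the final line.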
Abstract: Stock portfolio selection is a classic problem in finance,
and it involves deciding how to allocate an institution's or an individual's
wealth to a number of stocks, with certain investment objectives
(return and risk). In this paper, we adopt the classical Markowitz
mean-variance model and consider an additional common realistic
constraint, namely, the cardinality constraint. Thus, stock portfolio
optimization becomes a mixed-integer quadratic programming problem
that is difficult to solve with exact optimization algorithms.
Chemical Reaction Optimization (CRO), which mimics the molecular
interactions in a chemical reaction process, is a population-based
metaheuristic method. Two different types of CRO, named canonical
CRO and Super Molecule-based CRO (S-CRO), are proposed to solve
the stock portfolio selection problem. We test both canonical CRO
and S-CRO on a benchmark and compare their performance under
two criteria: Markowitz efficient frontier (Pareto frontier) and Sharpe
ratio. Computational experiments suggest that S-CRO is promising
in handling the stock portfolio optimization problem.
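The objective a candidate portfolio (a "molecule" in CRO terms) is scored on can be sketched directly from the Markowitz model with the cardinality constraint; this is a generic illustration, with the risk-aversion weighting and penalty handling as stated assumptions rather than the paper's exact formulation:

```python
import numpy as np

def portfolio_objective(weights, mu, cov, risk_aversion=1.0, max_assets=10):
    """Mean-variance score of one candidate portfolio.

    A candidate violating the cardinality constraint (more than
    `max_assets` nonzero weights) or whose weights do not sum to 1 is
    infeasible; otherwise the score is expected return minus
    risk-weighted variance, which the metaheuristic maximises.
    """
    if np.count_nonzero(weights) > max_assets or not np.isclose(weights.sum(), 1.0):
        return False, -np.inf
    ret = float(weights @ mu)
    var = float(weights @ cov @ weights)
    return True, ret - risk_aversion * var

def sharpe_ratio(weights, mu, cov, risk_free=0.0):
    """Excess return per unit of portfolio standard deviation."""
    ret = float(weights @ mu)
    vol = float(np.sqrt(weights @ cov @ weights))
    return (ret - risk_free) / vol
```

The Sharpe ratio above is the second comparison criterion named in the abstract; the efficient frontier is traced by sweeping `risk_aversion`.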
Abstract: This article presents the implementation of several
different e/b-Learning collaborative activities, used to improve the
students' learning process in a high-school Polytechnic Institution. A
new learning model arises, based on a combination of face-to-face
and distance learning. Learning is now becoming centered on
the development of collaborative activities, and its actors (teachers
and students) have to be re-socialized to a new e/b-Learning
paradigm. Measuring approaches are proposed for this model and
results are presented, showing a prospective correlation between
students' learning success and the use of online collaborative
activities.
Abstract: This paper presents an alternative approach that uses
an artificial neural network to simulate the flood level dynamics in a
river basin. The algorithm was developed in a decision support
system environment in order to enable users to process the data. The
decision support system is found to be useful due to its interactive
nature, flexibility in approach and evolving graphical feature and can
be adopted for any similar situation to predict the flood level. The
main data processing includes the gauging station selection, input
generation, lead-time selection/generation, and length of prediction.
This program enables users to process the flood level data, to
train/test the model using various inputs and to visualize results. The
program code consists of a set of files, which can as well be modified
to match other purposes. This program may also serve as a tool for
real-time flood monitoring and process control. The results
indicate that the decision support system applied to the flood level
reached encouraging results for the river basin under
examination. The comparison of the model predictions with the
observed data was satisfactory: the model is able to forecast
the flood level up to 5 hours in advance with reasonable prediction
accuracy.
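The input generation and lead-time selection steps amount to turning a gauge time series into supervised training pairs. A minimal sketch, assuming hourly readings and the 5-hour-ahead horizon mentioned above (the window length `n_lags` is an illustrative choice, not the paper's):

```python
import numpy as np

def make_training_pairs(levels, n_lags=3, lead_time=5):
    """Turn a flood-level series into (input, target) pairs for an ANN.

    Each input is the last `n_lags` hourly readings; the target is the
    level `lead_time` hours after the last reading in the window,
    matching a 5-hour-ahead forecast.
    """
    X, y = [], []
    for t in range(n_lags, len(levels) - lead_time + 1):
        X.append(levels[t - n_lags:t])          # window of past readings
        y.append(levels[t + lead_time - 1])     # level lead_time steps ahead
    return np.array(X), np.array(y)
```

The same routine can be re-run with different `n_lags` and `lead_time` values to train/test the model with various inputs, as the abstract describes.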
Abstract: Cow milk is a product of the mammary gland, and
soymilk is a beverage made from soybeans; it is the liquid that
remains after soybeans are soaked. In this research effort, we
compared nutritional parameters of these two kinds of milk, such as total
fat, fiber, protein, minerals (Ca, Fe and P), fatty acids, carbohydrate,
lactose, water, total solids, ash, pH, acidity and calorie content in
one cup (245 g). Results showed soymilk contains 4.67 grams of fat,
0.52 of fatty acids, 3.18 of fiber, 6.73 of protein, 4.43 of
carbohydrate, 0.00 of lactose, 228.51 of water, 10.40 of total solids
and 0.66 of ash, as well as 9.80 milligrams of Ca, 1.42 of Fe, 120.05 of
P, 79 Kcal of calories, pH = 6.74, and an acidity of 0.24%. Cow milk
contains 8.15 grams of fat, 5.07 of fatty acids, 0.00 of fiber, 8.02 of
protein, 11.37 of carbohydrate, 4.27 of lactose, 214.69 of water,
12.90 of total solids and 1.75 of ash, as well as 290.36 milligrams of Ca, 0.12 of
Fe, 226.92 of P, 150 Kcal of calories, pH = 6.90, and an acidity of
0.21%. Soymilk is one of the plant-based complete proteins, and cow
milk is a rich source of nutrients as well. Cow milk contains nearly
twice as much fat as, and ten times more fatty acids than, soymilk. Cow
milk contains greater amounts of minerals (except Fe): it contains nearly
thirty times the amount of Ca and nearly twice the
amount of P as does soymilk, but soymilk contains more Fe (ten times
more) than does cow milk. Cow milk and soymilk contain nearly
identical amounts of protein and water, and fiber is a big plus for soymilk;
dairy has none. Although what we choose to drink is really a matter of
personal preference and our health objectives, looking at this
comparison, soymilk looks like the healthier choice.
Abstract: Customarily, the LMTD correction factor, FT, is used
to screen alternative designs for a heat exchanger. Designs with
unacceptably low FT values are discarded. In this paper, the authors
propose a more fundamental criterion, based solely on the
feasibility of a multipass exchanger, followed by economic
optimization. This criterion, coupled with asymptotic energy targets,
provides the complete optimization space in a heat exchanger network
(HEN), where cost-optimization of HEN can be performed with only
Heat Recovery Approach temperature (HRAT) and number-of-shells
as variables.
Abstract: With the increasing importance of symbiosis and cooperation among mobile communication industries, the mobile ecosystem has been especially highlighted in academia and practice. The structure of the mobile ecosystem is quite complex, and the ecological roles of its actors are important to understanding that structure. In this respect, this study aims to explore the structure of the mobile ecosystem in the case of Korea using inter-industry network analysis. Then, the ecological roles in the mobile ecosystem are identified using centrality measures from the network analysis: degree centrality, closeness, and betweenness. The result shows that the manufacturing and service industries are separate. Also, the ecological roles of some actors are identified based on the characteristics of ecological terms: keystone, niche, and dominator. Based on the results of this paper, we expect that policy makers can shape the future of the mobile industry and that a healthier mobile ecosystem can be constructed.
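Two of the three centrality measures can be sketched on a toy inter-industry graph. This is a generic illustration (the adjacency data is made up, and betweenness is omitted for brevity; it is usually computed with Brandes' algorithm):

```python
from collections import deque

def degree_centrality(adj):
    """Fraction of the other nodes each node is directly linked to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """Inverse of the average shortest-path distance to all other nodes,
    with distances found by breadth-first search on the unweighted graph."""
    n = len(adj)
    result = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total = sum(dist.values())
        result[src] = (n - 1) / total if total > 0 else 0.0
    return result
```

In an inter-industry network, a high-centrality node is a candidate "keystone" actor; a peripheral one is closer to a "niche" role.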
Abstract: In the package design industry, a great deal of tacit knowledge resides within each designer. The objectives were to capture it and compile it for use as a teaching resource, to create a video clip of the package design process, and to evaluate its quality and learning effectiveness. Interviews were used as the technique for capturing knowledge on brand design concept, differentiation, recognition, rank of recognition factors, consumer surveys, knowledge about marketing, research, graphic design, the effect of color, and law and regulation. A video clip about package design was created, consisting of both speech and footage of the actual process. The quality of the video in terms of media was ranked as good, while the content was ranked as excellent. The students' scores on the post-test were significantly greater than those on the pretest (p < 0.001).
Abstract: Measures of complexity and entropy have not converged to a single quantitative description of the levels of organization of complex systems. The need for such a measure is increasingly felt in all disciplines studying complex systems. To address this problem, starting from the most fundamental principle in Physics, a new measure for the quantity of organization and the rate of self-organization in complex systems, based on the principle of least (stationary) action, is applied here to a model system: the central processing unit (CPU) of computers. The quantity of organization for several generations of CPUs shows a double exponential rate of change of organization with time. The exact functional dependence has a fine, S-shaped structure, revealing some of the mechanisms of self-organization. The principle of least action helps to explain the mechanism of increase of organization through quantity accumulation and constraint and curvature minimization with an attractor: the least average sum of actions of all elements and for all motions. This approach can help describe, quantify, measure, manage, design and predict the future behavior of complex systems in order to achieve the highest rates of self-organization and improve their quality. It can be applied to other complex systems in Physics, Chemistry, Biology, Ecology, Economics, Cities, network theory and other fields where complex systems are present.
Abstract: B2E portals represent a new class of web-based
information technologies which many organisations have been introducing
in recent years to stay in touch with their distributed workforces and
enable them to perform value added activities for organisations.
However, actual usage of these emerging systems (measured using
suitable instruments) has not been reported in the contemporary
scholarly literature. We argue that many of the instruments to
measure usage of various types of IT-enabled information systems
are not directly applicable for B2E portals because they were
developed for the context of traditional mainframe and PC-based
information systems. It is therefore important to develop a new
instrument for web-based portal technologies aimed at employees. In
this article, we report on the development and initial qualitative
evaluation of an instrument that seeks to operationalise a set of
independent factors affecting the usage of portals by employees. The
proposed instrument is useful to IT/e-commerce researchers and
practitioners alike as it enhances their confidence in predicting
employee usage of portals in organisations.
Abstract: The impact force of a rockfall is mainly determined by
its moving behavior and velocity, which are contingent on the rock
shape, slope gradient, height, and surface roughness of the moving
path. It is essential to precisely calculate the moving path of the
rockfall in order to effectively minimize and prevent damages caused
by the rockfall. By applying the Colorado Rockfall Simulation
Program (CRSP) program as the analysis tool, this research studies the
influence of three shapes of rock (spherical, cylindrical and discoidal)
and surface roughness on the moving path of a single rockfall. As
revealed in the analysis, in addition to the slope gradient, the geometry
of the falling rock and the joint roughness coefficient (JRC) of the slope
are the main factors affecting the moving behavior of a rockfall. On a
single flat slope, both the rock's bounce height and moving velocity
increase as the surface gradient increases, with a critical gradient value
of 1:m = 1. Bouncing behavior and faster moving velocity occur more
easily when the rock geometry is more oval. A flat piece tends to cause
sliding behavior and is easily influenced by the change of surface
undulation. When JRC
Abstract: Both minimum energy consumption and
smoothness, which is quantified as a function of jerk, are generally
needed in many dynamic systems, such as the automobile and the
pick-and-place robot manipulator that handles fragile equipment.
Nevertheless, many researchers have concentrated either solely
on minimum energy consumption or on the minimum-jerk
trajectory. This paper considers the indirect minimum-jerk
method for higher-order differential equations in dynamic
optimization and proposes a simple yet very interesting indirect-jerk
approach to designing the time-dependent system, yielding an
alternative optimal solution. Extremal solutions for the cost functions
of indirect jerks are found using dynamic optimization methods
together with numerical approximation. The case considered is the
linear equation of a simple system of mass, spring and
damping, in which two masses are connected by
springs. The boundary conditions are defined with fixed end time and end
point. The higher-order differential equation is solved by Galerkin's
weighted-residual method. As a result, the sixth differential order shows
the fastest solving time.
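For reference, the smoothness term minimized in such problems is the standard jerk cost functional (a textbook form, stated here as an assumption since the abstract does not give the exact functional used):

```latex
J \;=\; \int_{t_0}^{t_f} \left( \dddot{x}(t) \right)^{2} \, dt
```

where the jerk $\dddot{x}(t)$ is the third time derivative of position. Minimizing $J$, possibly together with an energy term, leads to higher-order Euler-Lagrange (Euler-Poisson) conditions, which is why a higher-order differential equation must then be solved, here by the Galerkin weighted-residual method.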
Abstract: This paper presents a method for the detection of the OD in the retina which takes advantage of powerful preprocessing techniques such as contrast enhancement, the Gabor wavelet transform for vessel segmentation, mathematical morphology, and the Earth Mover's distance (EMD) as the matching process. The OD detection algorithm is based on matching the expected directional pattern of the retinal blood vessels. The vessel segmentation method produces segmentations by classifying each image pixel as vessel or non-vessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and 2D Gabor wavelet transform responses taken at multiple scales. A simple matched filter is proposed to roughly match the direction of the vessels in the OD vicinity using the EMD. The minimum distance provides an estimate of the OD center coordinates. The method's performance is evaluated on the publicly available DRIVE and STARE databases. On the DRIVE database the OD center was detected correctly in all of the 40 images (100%), and on the STARE database the OD was detected correctly in 76 out of the 81 images, even in rather difficult pathological situations.
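The oriented filters behind the vessel responses can be sketched as a plain 2D Gabor kernel. This is a generic construction (parameter values are illustrative; the paper's actual wavelet parameterisation and scales are not given in the abstract):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0):
    """Real part of a 2D Gabor filter: a Gaussian envelope modulating
    a plane wave oriented at angle `theta`.

    Convolving an image with a bank of such kernels at several
    orientations and scales yields the per-pixel responses used to
    classify each pixel as vessel or non-vessel.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the wave propagates along `theta`.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier
```

The maximum response over orientations, taken at each pixel, is what highlights the directional vessel pattern the OD detector then matches.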
Abstract: Brassinosteroids (BRs) regulate cell elongation,
vascular differentiation, senescence, and stress responses. BRs signal
through the BES1/BZR1 family of transcription factors, which
regulate hundreds of target genes involved in this pathway. In this
research a comprehensive genome-wide analysis was carried out in
BES1/BZR1 gene family in Arabidopsis thaliana, Cucumis sativus,
Vitis vinifera, Glycine max and Brachypodium distachyon.
Specifications of the desired sequences, dot plots and hydropathy plots
were analyzed in the protein and genome sequences of the five plant
species. The maximum amino acid length was attributed to protein
sequence Brdic3g, with 374 aa, and the minimum
to protein sequence Gm7g, with 163 aa. The maximum
instability index was attributed to protein sequence AT1G19350,
at 79.99, and the minimum to
protein sequence Gm5g, at 33.22. The aliphatic index of these
protein sequences ranged from 47.82 to 78.79 in Arabidopsis
thaliana, 49.91 to 57.50 in Vitis vinifera, 55.09 to 82.43 in Glycine
max, 54.09 to 54.28 in Brachypodium distachyon, and 55.36 to 56.83 in
Cucumis sativus. Overall, the data obtained from our investigation
contributes a better understanding of the complexity of the
BES1/BZR1 gene family and provides the first step towards directing
future experimental designs to perform systematic analysis of the
functions of the BES1/BZR1 gene family.
Abstract: The building of a factory can be a strategic investment
owing to its long service life. An evaluation that focuses only, for
example, on payments for the building, the technical equipment of
the factory, and the personnel of the enterprise is – considering the
complexity of the factory as a system – not sufficient for this long-term
view. The success of an investment is also secured, among other things,
by the attainment of non-monetary goals, such as transformability.
Such aspects are not considered in traditional investment calculations
like the net present value method. This paper closes this gap with the
enhanced economic evaluation (EWR) for factory planning. The
procedure and the first results of an application in a project are
presented.
Abstract: In this paper we present a detailed study of bio-medical
images, tagging them with some basic extracted features (e.g.
color, pixel value). The classification is done using a nearest
neighbor classifier with various distance measures, as well as the
automatic combination of classifier results. This process selects a
subset of relevant features from a group of features of the image. It
also helps to acquire a better understanding of the image by
describing which features are important. The accuracy can be
improved by increasing the number of features selected. Various
types of classifiers have evolved for medical images, such as the
Support Vector Machine (SVM), which is used for classifying
bacterial types. The Ant Colony Optimization method is used for
optimal results; it has high approximation capability and much faster
convergence. Texture feature extraction methods based on Gabor
wavelets are also used.
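The nearest-neighbor classification with various distance measures described above can be sketched as follows; this is a minimal illustration (the two metrics and the majority vote are standard choices, not necessarily the exact variants used in the paper):

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3, metric="euclidean"):
    """Classify feature vector `x` by majority vote of its k nearest
    training samples under the chosen distance measure."""
    diff = train_X - x
    if metric == "euclidean":
        d = np.sqrt((diff**2).sum(axis=1))
    elif metric == "manhattan":
        d = np.abs(diff).sum(axis=1)
    else:
        raise ValueError(f"unknown metric: {metric}")
    nearest = np.argsort(d)[:k]                 # indices of k closest samples
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]            # majority label wins
```

Running the same classifier under several metrics and combining the votes is one simple way to realise the "automatic combination of classifier results" mentioned in the abstract.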