Abstract: In this paper we present a Feed-Forward Neural Network Autoregressive (FFNN-AR) model whose training is optimized with genetic algorithms, in order to predict the gross domestic product growth of six countries. Specifically, we propose a kind of weighted regression, suitable for econometric purposes, in which the initial inputs are multiplied by the final optimum input-to-hidden-layer weights of the neural network obtained from the training process. The forecasts are compared with those of the ordinary autoregressive model, and we conclude that the proposed regression's forecasting results significantly outperform those of the autoregressive model. Moreover, this technique can be used in autoregressive moving-average models, with and without exogenous inputs, and the genetic-algorithm training can be replaced by the error back-propagation algorithm.
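As a minimal illustration of the proposed weighting (a sketch, not the authors' code: the lag order, the random series standing in for GDP growth, and the random matrix standing in for the GA-trained weights are all assumptions), each lagged input can be scaled by a summary of its trained input-to-hidden weights before an ordinary least-squares AR fit:

    import numpy as np

    def lagged_matrix(y, p):
        """Build the AR design matrix of a series y with p lags."""
        X = np.column_stack([y[p - k - 1 : len(y) - k - 1] for k in range(p)])
        return X, y[p:]

    rng = np.random.default_rng(0)
    y = rng.normal(0.02, 0.01, 120)        # stand-in for GDP growth rates
    p = 4
    X, target = lagged_matrix(y, p)

    # W would come from the GA- (or backprop-) trained FFNN-AR; random here.
    n_hidden = 3
    W = rng.normal(size=(p, n_hidden))     # input-to-hidden weight matrix
    scale = np.abs(W).sum(axis=1)          # one summary weight per input lag
    Xw = X * scale                         # the proposed weighted inputs

    A = np.column_stack([np.ones(len(Xw)), Xw])
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    print("weighted-AR coefficients:", beta)

The absolute row sum used as the per-input summary is one plausible reading of "multiplied by the final optimum weights"; other reductions of W would fit the description equally well.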
Abstract: This paper describes a newly designed decentralized nonlinear control strategy for a robot manipulator. The strategy, based on nonlinear state feedback theory and the decentralized concept, is developed to overcome the drawbacks of previous works, which involved complicated intelligent control and sensors of low cost-effectiveness. The control methodology is derived in the sense of the Lyapunov theorem, so that the stability of the control system is guaranteed. The decentralized algorithm does not require angle and velocity information from the other joints. Each joint controller is implemented on a digital processor located near its actuator, making it possible to achieve good dynamics and modularity. Computer simulations have been conducted to validate the effectiveness of the proposed control scheme under possible uncertainties and different reference trajectories. The merit of the proposed control system is shown by comparison with a classical control system.
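A minimal per-joint sketch in the spirit of the abstract (a generic Lyapunov-motivated PD law with a saturated robust term, not the authors' exact controller; all gains are hypothetical) shows how each joint can be controlled from its own angle and velocity only:

    import numpy as np

    def joint_control(q, qd, q_ref, qd_ref, kp=50.0, kd=10.0, rho=5.0, eps=0.01):
        """Decentralized law: uses only the joint's own state (q, qd)."""
        e, ed = q_ref - q, qd_ref - qd
        s = ed + (kp / kd) * e                  # sliding-type error variable
        # the saturated robust term bounds the effect of couplings from other joints
        return kp * e + kd * ed + rho * np.tanh(s / eps)

    # one independent controller instance per joint, no shared state
    print(joint_control(q=0.10, qd=0.0, q_ref=0.50, qd_ref=0.0))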
Abstract: Gas flaring is one of the largest GHG-emitting sources in the oil and gas industry. It also wastes energy that could be better utilized and even generate revenue. Minimizing flaring is an effective approach to reducing GHG emissions and conserving energy in flaring systems, and integrating waste and flared gases into the fuel gas networks (FGN) of refineries is an efficient way to do so. A fuel gas network collects fuel gases from various source streams, mixes them in an optimal manner, and supplies them to different fuel sinks such as furnaces, boilers, and turbines. In this article we use the fuel gas network model proposed by Hasan et al. as a base model, modify some of its features, and add constraints on flaring emissions to reduce GHG emissions as much as possible. Results for a refinery case study show that integrating the flare gas stream with the waste and natural gas streams to construct an optimal FGN can significantly reduce the total annualized cost and flaring emissions.
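A highly simplified linear sketch of the optimization idea (the model of Hasan et al. is far richer, with nonlinear fuel-quality constraints; all numbers and names below are illustrative) allocates natural, waste, and flare gas to two sinks at minimum cost under an emissions cap:

    from scipy.optimize import linprog

    cost   = [4.0, 1.0, 0.5]     # $/unit flow: natural gas, waste gas, flare gas
    heat   = [1.0, 0.7, 0.6]     # heating value per unit flow
    emis   = [0.05, 0.08, 0.10]  # emission factor per unit flow
    demand = [10.0, 6.0]         # energy demand: furnace, boiler
    EMIS_CAP = 2.0
    n_src, n_snk = 3, 2

    # decision variables x[i][j], flattened as index i*n_snk + j
    c = [cost[i] for i in range(n_src) for _ in range(n_snk)]
    # each sink's energy demand must be met exactly
    A_eq = [[heat[i] if j == jj else 0.0
             for i in range(n_src) for j in range(n_snk)] for jj in range(n_snk)]
    # total flaring-related emissions are capped
    A_ub = [[emis[i] for i in range(n_src) for _ in range(n_snk)]]

    res = linprog(c, A_ub=A_ub, b_ub=[EMIS_CAP], A_eq=A_eq, b_eq=demand)
    print(res.x.reshape(n_src, n_snk), res.fun)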
Abstract: In this research, a systematic investigation was carried out to determine the optimum conditions of a hydrodesulfurization (HDS) reactor. Moreover, a suitable model was developed for a rigorous real-time optimization (RTO) loop of the HDS process. A systematic series of experiments was designed based on central composite design (CCD) and carried out in the related pilot plant to tune the developed model. The design variables in the experiments were temperature, LHSV, and pressure, while the hydrogen-to-fresh-feed ratio was kept constant. The ranges of these variables were 320-380 °C, 1-2 hr⁻¹, and 50-55 bar, respectively. A power-law kinetic model was also developed for our further research. The reaction order (the power of the reactant concentration), activation energy, and frequency factor of this model were 1.4, 92.66 kJ/mol, and k0 = 2.7×10⁹, respectively.
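The reported kinetics can be evaluated directly; the sketch below computes r = k0 * exp(-Ea/(R*T)) * C**n with the abstract's values n = 1.4, Ea = 92.66 kJ/mol, and k0 = 2.7e9 (the units of k0 and of the concentration are not stated there, so the concentration used here is illustrative):

    import math

    R = 8.314                   # J/(mol*K)
    k0, Ea, n = 2.7e9, 92.66e3, 1.4

    def rate(T_celsius, C):
        T = T_celsius + 273.15
        return k0 * math.exp(-Ea / (R * T)) * C ** n

    for T in (320, 350, 380):   # the experimental temperature range
        print(T, rate(T, C=1.0))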
Abstract: Benchmarking cleaner production performance is an
effective way of pollution control and emission reduction in the
coal-fired power industry. A benchmarking method using two-stage
super-efficiency data envelopment analysis for coal-fired power plants
is proposed: first, the cleaner production performance of
DEA-inefficient or weakly DEA-efficient plants is improved, and then
the benchmark is selected from the performance-improved power plants. An empirical
study is carried out with the survey data of 24 coal-fired power plants.
The result shows that in the first stage the performance of 16 plants is
DEA-efficient and that of 8 plants is relatively inefficient. The target
values for improving DEA-inefficient plants are acquired by
projection analysis. In the second stage, efficient performance is
achieved for all 24 power plants and the benchmark plant is selected. The two-stage
benchmarking method is practical for selecting the optimal benchmark
for cleaner production in the coal-fired power industry and will
continuously improve plants' cleaner production performance.
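A minimal sketch of one building block, an input-oriented super-efficiency DEA score in CCR form solved as a linear program (the paper's exact two-stage formulation may differ; the plant data below are random placeholders, not the surveyed 24 plants):

    import numpy as np
    from scipy.optimize import linprog

    def super_efficiency(X, Y, k):
        """X: inputs (m x n), Y: outputs (s x n), k: evaluated plant."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]          # minimize theta over [theta, lambdas]
        A_in  = np.c_[-X[:, k], X]           # sum_j lam_j x_ij <= theta * x_ik
        A_out = np.c_[np.zeros(s), -Y]       # sum_j lam_j y_rj >= y_rk
        A_ub  = np.r_[A_in, A_out]
        b_ub  = np.r_[np.zeros(m), -Y[:, k]]
        # excluding plant k from its own reference set gives super-efficiency
        bounds = [(None, None)] + [(0, 0) if j == k else (0, None) for j in range(n)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        return res.fun if res.success else np.nan  # scores > 1 are super-efficient

    rng = np.random.default_rng(1)
    X = rng.uniform(1, 5, (2, 24))   # e.g. coal and water use of 24 plants
    Y = rng.uniform(1, 5, (2, 24))   # e.g. power output and abatement level
    print([round(super_efficiency(X, Y, k), 3) for k in range(24)])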
Abstract: The objective of this work was to investigate flow
properties of powdered infant formula samples. Samples were
purchased at a local pharmacy and differed in composition.
Lactose-free infant formula, gluten-free infant formula, and infant
formulas containing dietary fibers and probiotics were tested and compared
with a regular infant formula sample which did not contain any of
these supplements. Particle size and bulk density were determined
and their influence on flow properties was discussed. There were no
significant differences in the bulk densities of the samples;
therefore, a connection between flow properties and bulk density
could not be established. Lactose-free infant formula showed flow
properties different from those of the standard supplement-free
sample. Gluten-free infant formula with added probiotic
microorganisms and dietary fiber had the narrowest particle size
distribution range and exhibited the
best flow properties. All the other samples exhibited the same
tendency of a decreasing compaction coefficient with increasing flow
speed, which means they all become more free-flowing at higher flow
speeds.
Abstract: Many real-world data sets have a very high-dimensional feature space. Most clustering techniques use the distance or similarity between objects as a measure to build clusters. But in high-dimensional spaces, distances between points become relatively uniform; in such cases, density-based approaches may give better results. Subspace clustering algorithms automatically identify lower-dimensional subspaces of the higher-dimensional feature space in which clusters exist. In this paper, we propose a new clustering algorithm, ISC (Intelligent Subspace Clustering), which tries to overcome three major limitations of the existing state-of-the-art techniques. First, ISC determines input parameters such as the ε-distance at each level of subspace clustering, which helps in finding meaningful clusters; a uniform-parameter approach is not suitable for different kinds of databases. Second, ISC implements dynamic and adaptive determination of meaningful clustering parameters based on a hierarchical filtering approach. The third and most important feature of ISC is its ability to learn incrementally and to dynamically include and exclude subspaces, which leads to better cluster formation.
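The abstract does not spell out how ISC derives the ε-distance at each subspace level, so the sketch below uses a common stand-in heuristic (the knee of the sorted k-nearest-neighbour distance curve within the subspace); it is purely illustrative, not the ISC rule itself:

    import numpy as np

    def adaptive_epsilon(points, dims, k=4):
        """points: (n, d) array; dims: indices of the subspace considered."""
        P = points[:, dims]
        # pairwise distances within the chosen subspace
        D = np.sqrt(((P[:, None, :] - P[None, :, :]) ** 2).sum(-1))
        kdist = np.sort(np.sort(D, axis=1)[:, k])   # sorted k-th NN distances
        # knee: largest sag of the curve below its end-to-end chord
        x = np.linspace(0, 1, len(kdist))
        chord = kdist[0] + x * (kdist[-1] - kdist[0])
        return kdist[np.argmax(chord - kdist)]

    rng = np.random.default_rng(2)
    data = np.r_[rng.normal(0, 0.1, (50, 5)), rng.normal(3, 0.1, (50, 5))]
    print(adaptive_epsilon(data, dims=[0, 2]))      # epsilon for subspace {0, 2}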
Abstract: The Paced Auditory Serial Addition Test (PASAT) has been
used as a common research tool for different neurological disorders
such as multiple sclerosis. Recently, technology has allowed
researchers to introduce a visual version of the test, the Paced
Visual Serial Addition Test (PVSAT). In this paper, a computerized
version of these two tests is introduced. Besides interpreting the
number of true responses, the software also calculates each subject's
reaction time. We hypothesize that paying attention to reaction time
may be valuable. For this purpose, sixty-eight normal female subjects
and fifty-eight normal male subjects were enrolled in the study. We
investigated the similarity between PASAT3 and PVSAT3 in the number
of true responses and in the new criterion (the average reaction time
of each subject). The similarity between the two tests was rejected
(p-value = 0.000), which means that the two tests differ. No effect
of sex was found, since the p-values of the difference between
PASAT3 and PVSAT3 were the same for both sexes (p-value = 0.000),
which means that male and female subjects performed the tests at the
same level of performance. The new criterion shows a negative
correlation with age, which suggests that older normal subjects may
give the same number of true responses as young subjects but respond
with greater latency. This demonstrates the importance of reaction
time.
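The statistical comparisons described above can be sketched as follows (on synthetic numbers, since the study's data are not reproduced here; the simulated effect sizes and directions are illustrative only):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n = 126                                        # 68 female + 58 male subjects
    age = rng.uniform(20, 60, n)
    pasat_true = rng.normal(45, 5, n)              # synthetic true-response counts
    pvsat_true = pasat_true + rng.normal(3, 2, n)  # simulated systematic difference
    criterion = 1.5 - 0.005 * age + rng.normal(0, 0.05, n)  # reaction-time stand-in

    t, p = stats.ttest_rel(pasat_true, pvsat_true)  # similarity of the two tests
    r, p_r = stats.pearsonr(criterion, age)         # new criterion vs. age
    print(f"paired t-test p={p:.4f}; corr(criterion, age) r={r:.2f} (p={p_r:.4f})")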
Abstract: A noteworthy point in the advancement of Brain-Machine Interface (BMI) research is the ability to accurately extract features of the brain signals and to classify them into targeted control actions with the simplest possible procedures, since the expected beneficiaries are disabled people. In this paper, a new feature extraction method using the combination of adaptive band-pass filters and adaptive autoregressive (AAR) modelling is proposed and applied to the classification of right and left motor imagery signals extracted from the brain. The introduction of the adaptive band-pass filter improves the characterization of the autocorrelation functions of the AAR models, as it enhances and strengthens the EEG signal, which is noisy and stochastic in nature. Experimental results on the Graz BCI data set show that, with the proposed feature extraction method, LDA and SVM classifiers outperform the other AAR approaches of the BCI 2003 competition in terms of mutual information (the competition criterion) and misclassification rate.
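A sketch of the pipeline (with a fixed Butterworth band-pass standing in for the paper's adaptive filter, and a plain LMS update tracking the AAR coefficients; the order, band, and step size are assumptions):

    import numpy as np
    from scipy.signal import butter, lfilter

    def aar_features(eeg, fs=128.0, band=(8.0, 30.0), p=6, mu=0.004):
        """Band-pass the channel, then track AR coefficients sample by sample."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        x = lfilter(b, a, eeg)
        w = np.zeros(p)                    # adaptive AR coefficient vector
        for n in range(p, len(x)):
            past = x[n - p : n][::-1]      # x[n-1], ..., x[n-p]
            err = x[n] - w @ past          # one-step prediction error
            w += mu * err * past           # LMS update
        return w                           # feature vector for LDA/SVM

    rng = np.random.default_rng(4)
    trial = rng.normal(size=1152)          # stand-in for a 9 s Graz trial at 128 Hz
    print(aar_features(trial))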
Abstract: Prickly pear (Opuntia spp.) fruit has received renewed
interest since it contains a betalain pigment that has an attractive
purple colour for the production of juice. Prickly pear juice was
prepared by homogenizing the fruit and treating the pulp with 48 g of
pectinase from Aspergillus niger. Titratable acidity was determined
by diluting 10 ml of prickly pear juice with 90 ml of deionized water
and titrating to pH 8.2 with 0.1 N NaOH. Brix was measured using a
refractometer, and ascorbic acid content was assayed
spectrophotometrically. Colour variation was determined
colorimetrically (Hunter L, a, b). The Hunter L, a, b analysis showed
that the red-purple colour of prickly pear juice was affected by the
juice treatments. This was indicated by low colour-difference-meter
lightness (CDML*), hue, CDMa*, and CDMb* values. It was observed that
non-treated prickly pear juice had a high CDML* of 3.9 compared with
the treated juices (range 3.29 to 2.14). The CDML* significantly (p
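The titration described above reduces to simple arithmetic; the sketch below expresses the result as % citric acid (an assumption, since the abstract does not name the reference acid; 0.064 g/meq is citric acid's milliequivalent weight):

    def titratable_acidity(v_naoh_ml, n_naoh=0.1, sample_ml=10.0, meq_wt=0.064):
        """Percent acid (w/v) = V_NaOH * N * meq_wt * 100 / V_sample."""
        return v_naoh_ml * n_naoh * meq_wt * 100.0 / sample_ml

    print(titratable_acidity(5.2))  # e.g. 5.2 ml of NaOH used -> ~0.33 % citric acid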
Abstract: Mining frequent tree patterns has many useful
applications in XML mining, bioinformatics, network routing, etc.
Most frequent subtree mining algorithms (e.g., FREQT,
TreeMiner, and CMTreeMiner) use the anti-monotone property in the
phase of candidate subtree generation. However, none of these
algorithms has verified the correctness of this property for
tree-structured data. In this research, it is shown that
anti-monotonicity does not generally hold when weighted support is
used in tree pattern
discovery. As a result, tree mining algorithms that are based on this
property would probably miss some of the valid frequent subtree
patterns in a collection of trees. In this paper, we investigate the
correctness of the anti-monotone property for the problem of weighted
frequent subtree mining. In addition, we propose W3-Miner, a new
algorithm for full extraction of frequent subtrees. The experimental
results confirm that W3-Miner finds some frequent subtrees that the
previously proposed algorithms are not able to discover.
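A tiny numeric illustration of the failure of anti-monotonicity (under one common definition of weighted support, support times the mean label weight; the paper's exact definition may differ):

    weights = {"A": 0.2, "B": 1.0}

    def wsup(labels, support):
        return support * sum(weights[l] for l in labels) / len(labels)

    # pattern {A} occurs in 3 trees; its superpattern {A, B} occurs in 2 of them
    print(wsup(["A"], 3))        # 0.6 -> pruned if the threshold is, say, 1.0
    print(wsup(["A", "B"], 2))   # 1.2 -> frequent, yet unreachable after pruning

Pruning on the subpattern's score would discard {A} and never generate {A, B}, which is exactly the kind of valid pattern W3-Miner is designed to recover.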
Abstract: The paper deals with the possibilities of modelling the
vapour propagation of explosive substances in the FLUENT software.
Because of the very low vapour pressures of explosive substances, the
approach has been verified using mononitrotoluene as an example.
Either constant or time-variable meteorological conditions have been
used for the calculation. Further, it has been verified that the
vapour source may be time-dependent, reflecting a real situation, or
that the release rate may be constant. The execution and evaluation
of the experiment were straightforward, and the approach could also
be used for modelling the vapour and aerosol propagation of selected
explosive substances in the atmospheric boundary layer.
Abstract: In this paper, the melting of a semi-infinite body as a
result of a moving laser beam has been studied. Because the Fourier
heat transfer equation does not have sufficient accuracy at short
times and large dimensions, a non-Fourier form of the heat transfer
equation has been used. Because the beam moves in the x-direction,
the temperature distribution and the melt pool shape are not
symmetric. As a result, the problem is a transient three-dimensional
one. Moreover, thermophysical properties such as the heat
conductivity coefficient, density, and heat capacity are functions of
temperature and material state. The enthalpy technique, used for
the solution of phase change problems, has been used in an explicit
finite volume form for the hyperbolic heat transfer equation. This
technique has been used to calculate the transient temperature
distribution in the semi-infinite body and the growth rate of the melt
pool. In order to validate the numerical results, comparisons were
made with experimental data. Finally, the results of this paper were
compared with those of a similar problem solved using the Fourier
theory. The comparison shows the influence of the infinite speed of
heat propagation assumed in Fourier theory on the temperature
distribution and the melt pool size.
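For orientation, the constant-property form of the hyperbolic (non-Fourier) conduction equation underlying such analyses is the Cattaneo-Vernotte equation (the paper's enthalpy formulation additionally handles variable properties and phase change):

    \tau \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t} = \alpha \nabla^2 T

where \tau is the thermal relaxation time and \alpha the thermal diffusivity; setting \tau = 0 recovers the Fourier equation and its infinite speed of heat propagation.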
Abstract: In this study, the hydrogen transport phenomenon was
numerically evaluated by using hydrogen-enhanced localized
plasticity (HELP) mechanism. Two dominant governing equations,
namely the hydrogen transport model and the elasto-plastic model,
were introduced. In addition, implicit formulations of the governing
equations were implemented in ABAQUS UMAT user-defined subroutines.
The simulation results were compared with
published results to validate the proposed method.
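For orientation, a widely used form of the stress-assisted hydrogen transport equation in HELP-type analyses (the paper's exact statement may differ) is

    \frac{\partial C}{\partial t} = \nabla \cdot (D \nabla C) - \nabla \cdot \left( \frac{D C \bar{V}_H}{R T} \nabla \sigma_h \right)

where C is the lattice hydrogen concentration, D the diffusivity, \bar{V}_H the partial molar volume of hydrogen, and \sigma_h the hydrostatic stress; the second term drives hydrogen toward regions of high hydrostatic tension.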
Abstract: Checkpointing is one of the commonly used techniques to provide fault tolerance in distributed systems, so that the system can operate even if one or more components have failed. However, mobile computing systems are constrained by low bandwidth, mobility, lack of stable storage, frequent disconnections, and limited battery life. Hence, checkpointing protocols with fewer synchronization messages and fewer checkpoints are preferred in mobile environments. There are two different, though not orthogonal, approaches to checkpointing mobile computing systems: time-based and index-based. Our protocol is a fusion of these two approaches, though not the first of its kind. In the present exposition, an index-based checkpointing protocol is developed that uses time to indirectly coordinate the creation of consistent global checkpoints for mobile computing systems. The proposed algorithm is non-blocking, adaptive, and does not use any control messages. Compared to other contemporary checkpointing algorithms, it is computationally more efficient because it takes fewer checkpoints and does not need to compute dependency relationships. A brief account of important and relevant works in both fields, time-based and index-based, has also been included in the presentation.
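A minimal sketch of the time-coordinated, index-based idea (generic, not the authors' exact protocol): each host takes a basic checkpoint when its local timer expires and piggybacks its checkpoint index on every application message, so no dedicated control messages are needed:

    import time

    class Host:
        def __init__(self, name, interval=10.0):
            self.name, self.interval = name, interval
            self.index = 0
            self.deadline = time.monotonic() + interval

        def maybe_basic_checkpoint(self):
            if time.monotonic() >= self.deadline:   # local timer expired
                self.checkpoint()

        def checkpoint(self):
            self.index += 1
            self.deadline = time.monotonic() + self.interval  # resync the timer
            print(f"{self.name}: checkpoint #{self.index}")

        def send(self, dest, payload):
            dest.receive(self.index, payload)       # piggyback the index

        def receive(self, sender_index, payload):
            if sender_index > self.index:   # delivery would orphan the message
                self.index = sender_index - 1
                self.checkpoint()           # forced checkpoint; index catches up
            # ... deliver payload to the application ...

    a, b = Host("MH-a"), Host("MH-b")
    a.checkpoint()        # a's timer fired first
    a.send(b, "m1")       # b lags behind, so it checkpoints before delivering m1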
Abstract: We present a new algorithm for nonlinear dimensionality reduction that consistently uses global information, and that enables understanding the intrinsic geometry of non-convex manifolds. Compared to methods that consider only local information, our method appears to be more robust to noise. Unlike most methods that incorporate global information, the proposed approach automatically handles non-convexity of the data manifold. We demonstrate the performance of our algorithm and compare it to state-of-the-art methods on synthetic as well as real data.
Abstract: Many extensions have been made to the classic multi-layer perceptron (MLP) model. A notable number of them have been designed to speed up the learning process without considering the quality of generalization. This paper proposes a new MLP extension based on exploiting the topology of the network's input layer. Experimental results show that the extended model improves generalization capability in certain cases. The new model requires additional computational resources compared to the classic model; nevertheless, the loss in efficiency is not regarded as significant.
Abstract: This paper presents a new and efficient approach for
capacitor placement in radial distribution systems that determines
the optimal locations and sizes of capacitors with the objective of
improving the voltage profile and reducing power loss. The
solution methodology has two parts: in part one the loss sensitivity
factors are used to select the candidate locations for the capacitor
placement, and in part two a new algorithm that employs the Plant
Growth Simulation Algorithm (PGSA) is used to estimate the optimal sizes
of capacitors at the optimal buses determined in part one. The main
advantage of the proposed method is that it does not require any
external control parameters. The other advantage is that it handles
the objective function and the constraints separately, avoiding the
trouble of determining barrier factors. The proposed method is
applied to 9-bus and 34-bus radial distribution systems. The
solutions obtained by the
proposed method are compared with those of other methods. The
proposed method has outperformed the other methods in terms of
solution quality.
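Part one can be sketched as follows, using a standard loss-sensitivity expression, LSF = dP_loss/dQ = 2 * Q_eff * R / V^2, for the branch feeding each bus (the paper's exact expressions may differ; the branch data are illustrative, not the 9- or 34-bus feeders):

    import numpy as np

    R     = np.array([0.10, 0.08, 0.15, 0.12])  # branch resistance (p.u.)
    Q_eff = np.array([0.60, 0.45, 0.80, 0.30])  # reactive flow through branch (p.u.)
    V     = np.array([0.98, 0.97, 0.95, 0.96])  # receiving-end voltage (p.u.)

    lsf = 2.0 * Q_eff * R / V**2
    order = np.argsort(lsf)[::-1]               # most sensitive buses first
    print("candidate bus order:", order, "LSF:", np.round(lsf, 4))

The buses at the top of this ranking become the candidate locations passed to the PGSA sizing stage.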
Abstract: Low power consumption is a major constraint for battery-powered systems such as notebook computers or PDAs. In the past, specialists usually designed specially optimized hardware and code to address this concern. That approach worked for quite a long time; in the current era, however, there is another significant constraint: time to market. To satisfy the power constraint while launching products within shorter production periods, object-oriented programming (OOP) has stepped into this field. Although everyone knows that OOP carries much more overhead than assembly and procedural languages, the development trend still heads toward this new world, which conflicts with the goal of low power consumption. Most prior power-related software research reported that OOP consumes considerable resources; however, although industry had to accept it for business reasons, no papers have yet discussed how to choose the best OOP practice within this power-limited setting. This article is the first to specify and propose an optimized strategy for writing OOP software in an energy-constrained environment, based on quantitative measured results. The language chosen for the study is C# on .NET Framework 2.0, one of the current mainstream OOP development environments. The recommendations obtained from this research form a roadmap that can help developers write code that balances time to market against battery life.
Abstract: The paper deals with environmental metrics and assessment systems for small and medium-sized enterprises (SMEs). The authors present a proposed assessment model that is able to reveal the current environmental strengths and weaknesses of an SME. The suggested model also has the ambition to become a sustainability decision tool. The model is able to identify the "best environmental decision" in the company and to quantify how this decision contributed to the overall environmental improvement. The authors understand environmental improvements as environmental innovations (product, process, and organizational). The suggested model is based on its own concept; however, the authors also utilize already existing environmental assessment tools.