Abstract: High voltage generators are being subjected to higher voltage ratings and are being designed to operate in harsh conditions. Stator windings are the main component of generators, in which electrical, magnetic and thermal stresses remain the major causes of insulation degradation and accelerated aging. A large number of generators have failed due to stator winding problems, mainly insulation deterioration. Insulation degradation assessment therefore plays a vital role in asset life management. Stator failure is mostly catastrophic, causing significant damage to the plant; besides the loss of generation, it involves heavy repair or replacement costs. Electro-thermal analysis is the main tool for improving the design of stator slot insulation. Dielectric parameters such as insulation thickness, spacing, material types, and the geometry of the winding and slot are major design considerations. A very powerful method for analyzing electro-thermal performance is the Finite Element Method (FEM), which is used in this paper. The analysis of various stator coil and slot configurations is used to design a better dielectric system that reduces electrical and thermal stresses, in order to increase the power of the generator within the same core volume. This paper describes the process used to perform the classical design and improvement analysis of stator slot insulation.
Abstract: In general, class complexity is measured based on one of several factors, such as Lines of Code (LOC), Function Points (FP), Number of Methods (NOM), and Number of Attributes (NOA). Several new techniques, methods and metrics based on different factors have been developed by researchers for calculating the complexity of a class in Object Oriented (OO) software. Earlier, Arockiam et al. proposed a new complexity measure, Extended Weighted Class Complexity (EWCC), an extension of the Weighted Class Complexity proposed by Mishra et al. EWCC is the sum of the cognitive weights of the attributes and methods of a class and of its derived classes. In EWCC, the cognitive weight of each attribute is taken to be 1. The main limitation of EWCC is that every attribute carries the same weight, whereas in general the cognitive load of understanding different types of attributes is not the same. We therefore propose a new metric, Attribute Weighted Class Complexity (AWCC), in which cognitive weights are assigned to attributes based on the effort needed to understand their data types. Case studies and experiments show the proposed metric to be a better measure of the complexity of classes with attributes.
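The AWCC computation can be sketched as follows; the per-type attribute weights and the method weights used here are illustrative assumptions (the paper derives attribute weights from the effort needed to understand each data type), not values taken from the metric's definition:

```python
# Illustrative sketch of Attribute Weighted Class Complexity (AWCC).
# Hypothetical cognitive weights per attribute data type: primitives are
# assumed cheapest to understand, structured types costlier.
TYPE_WEIGHTS = {
    "int": 1, "float": 1, "bool": 1,
    "str": 2, "list": 3, "dict": 4,
}

def awcc(attribute_types, method_weights, derived_classes=()):
    """Sum of attribute cognitive weights and method cognitive weights,
    plus the AWCC of derived classes (mirroring the EWCC structure)."""
    total = sum(TYPE_WEIGHTS.get(t, 1) for t in attribute_types)
    total += sum(method_weights)
    for attrs, methods, derived in derived_classes:
        total += awcc(attrs, methods, derived)
    return total

# Example: a class with three attributes and two methods of weights 2 and 3.
print(awcc(["int", "str", "dict"], [2, 3]))  # 1 + 2 + 4 + 2 + 3 = 12
```

Under EWCC every attribute would contribute 1; weighting by data type is what lets AWCC distinguish a class of three integers from a class of three nested dictionaries.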
Abstract: With the proliferation of mobile computing technology, mobile learning (m-learning) will play a vital role in the rapidly growing electronic learning market. However, acceptance of m-learning by individuals is critical to the successful implementation of m-learning systems. There is thus a need to study the factors that affect users' intention to use m-learning. Based on an updated information system (IS) success model, data collected from 350 respondents in Taiwan were tested against the research model using the structural equation modeling approach. The data collected by questionnaire were analyzed to check the validity of the constructs. Hypotheses describing the relationships between the identified constructs and users' satisfaction were then formulated and tested.
Abstract: This paper describes a method to improve the robustness of a face recognition system based on the combination of two compensating classifiers. The face images are preprocessed by appearance-based statistical approaches, namely Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The LDA features of the face image are taken as the input of a Radial Basis Function Network (RBFN). The proposed approach has been tested on the ORL database. The experimental results show that the LDA+RBFN algorithm achieves a recognition rate of 93.5%.
Abstract: In this study, the effect of incorporating recycled glass-fibre reinforced polymer (GFRP) waste materials, obtained by means of milling processes, on the mechanical behaviour of polyester polymer mortars was assessed. For this purpose, different contents of recycled GFRP waste powder and fibres, with distinct size gradings, were incorporated into polyester-based mortars as sand aggregate and filler replacements. Flexural and compressive loading capacities were evaluated and found to be better than those of unmodified polymer mortars. GFRP-modified polyester-based mortars also show less brittle behaviour, retaining some loading capacity after the peak load. The results highlight the high potential of recycled GFRP waste materials as an efficient and sustainable reinforcement and admixture for polymer concrete and mortar composites, constituting an emerging waste management solution.
Abstract: Gradual patterns have been studied for many years as they contain precious information. They have been integrated into many expert systems and rule-based systems, for instance to reason on knowledge such as “the greater the number of turns, the greater the number of car crashes”. In many cases, this knowledge has been considered as a rule, “the greater the number of turns → the greater the number of car crashes”. Historically, work has thus focused on the representation of such rules, studying how implication could be defined, especially fuzzy implication. These rules were defined by experts who were in charge of describing the systems they were working on so that those systems could be operated automatically. More recently, approaches have been proposed for mining databases to discover such knowledge automatically. Several approaches have been studied, the main scientific topics being how to determine what a relevant gradual pattern is, and how to discover such patterns as efficiently as possible (in terms of both memory and CPU usage). However, in some cases end-users are not interested in raw, low-level knowledge, and are rather interested in trends. Moreover, it may be the case that no relevant pattern can be discovered at a low level of granularity (e.g. city), whereas some can be discovered at a higher level (e.g. county). In this paper, we thus extend gradual pattern approaches to consider multi-level gradual patterns. For this purpose, we consider two aggregation policies, namely horizontal and vertical.
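One widely used, pair-based notion of gradual-pattern support counts the fraction of object pairs ordered consistently with the pattern; this is an assumption here, since several support definitions exist in the literature and the paper does not restate its own in the abstract. A minimal Python sketch on hypothetical city-level data:

```python
from itertools import combinations

def gradual_support(rows, attr_a, attr_b):
    """Fraction of object pairs that are concordant with the gradual
    pattern 'the greater attr_a, the greater attr_b'."""
    pairs = list(combinations(rows, 2))
    concordant = sum(
        1 for r1, r2 in pairs
        if (r1[attr_a] - r2[attr_a]) * (r1[attr_b] - r2[attr_b]) > 0
    )
    return concordant / len(pairs)

# Toy data at the "city" level of granularity: turns vs. car crashes.
cities = [
    {"turns": 10, "crashes": 2},
    {"turns": 25, "crashes": 5},
    {"turns": 40, "crashes": 9},
    {"turns": 55, "crashes": 8},
]
# 5 of the 6 pairs are concordant, so the support is 5/6.
print(gradual_support(cities, "turns", "crashes"))
```

Aggregating the same rows to a coarser level (e.g. summing city rows per county) before calling `gradual_support` is the kind of step the multi-level extension formalizes.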
Abstract: In this paper, a wavelet-based neural network (WNN) classifier for recognizing EEG signals is implemented and tested on three sets of EEG signals (healthy subjects, patients with epilepsy, and patients with an epileptic syndrome during a seizure). First, the Discrete Wavelet Transform (DWT) with Multi-Resolution Analysis (MRA) is applied to decompose the EEG signal at the resolution levels of its components (δ, θ, α, β and γ), and Parseval's theorem is employed to extract the percentage distribution of the energy of the EEG signal at the different resolution levels. Second, a neural network (NN) classifies these extracted features to identify the EEG type according to the percentage distribution of energy features. The performance of the proposed algorithm has been evaluated using 300 EEG signals in total. The results showed that the proposed classifier is able to recognize and classify EEG signals efficiently.
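The energy-extraction step can be illustrated with the simplest wavelet, the orthonormal Haar DWT; this is only a sketch of the MRA/Parseval idea on a synthetic signal, since the abstract does not state which mother wavelet the paper uses:

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar DWT: approximation and detail."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def energy_distribution(signal, levels):
    """Percentage of total energy in each detail band and in the final
    approximation; by Parseval's theorem the sub-band energies sum to
    the energy of the original signal."""
    total = sum(x * x for x in signal)
    bands = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        bands.append(sum(x * x for x in detail))
    bands.append(sum(x * x for x in approx))
    return [100 * e / total for e in bands]

# A 16-sample toy segment decomposed over 3 levels -> 4 percentage features.
sig = [math.sin(2 * math.pi * k / 8) + 0.1 * (-1) ** k for k in range(16)]
pct = energy_distribution(sig, 3)
print([round(p, 2) for p in pct])
print(round(sum(pct), 2))  # 100.0, by Parseval's theorem
```

The resulting percentage vector is the kind of fixed-length feature that can be fed to a neural network classifier, regardless of the original signal length.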
Abstract: This paper describes a method for analyzing a product from the recycling point of view. The analysis is based on a set of measures that assess a product from the perspective of the final stages of its lifecycle. It is assumed that such an analysis will be performed at the design phase; to make this possible, a computer system that aids the designer during the design process has been developed. The structure of this computer tool, based on agent technology, and example results are also included in the paper.
Abstract: This paper presents the development techniques for a complete autonomous design model of an advanced train control system and gives a new approach to the implementation of a multi-agent based system. This research work proposes a novel control system to enhance the efficiency of vehicles under various operating constraints, and contributes to stability and controllability, considering relevant safety and operational requirements together with command, control and communication functions and various sensors to avoid accidents. An approach to speed scheduling, management and control in local and distributed environments is given to meet the needs of modern automation and to enhance vehicle control systems. These techniques build on state-of-the-art microelectronic technology, with accuracy and stability as the foremost goals.
Abstract: Sediment formation and transport along a river course are important considerations in river engineering. Their impact on river morphology on the one hand, and their importance in the design and construction of hydraulic structures on the other, have attracted the attention of experts in arid and semi-arid regions. Under certain conditions, where the momentum energy of the flow reaches a specific rate, sediment materials start to be transported with the flow; this is usually analyzed in the two categories of suspended load and bed load. Sedimentation along the waterways and the conveyance of large volumes of material into canal networks can influence water abstraction at the intake structures, posing a serious threat to operational sustainability and water delivery performance. The situation is serious where ineffective watershed management (poor vegetation cover in the water basin) is the underlying cause of the soil erosion that feeds material into the waterways, which in turn necessitates comprehensive study. The present paper provides an analytical investigation of the sediment process in the waterways on the one hand, and an estimation of the sediment load transported into the lined canals using the SHARC software on the other. The paper focuses on a comparative analysis of the hydraulic behaviour of the Sabilli main canal, which feeds the pumping station, and that of the Western canal in the Greater Dezful region, in order to identify the factors affecting sedimentation and ways of mitigating their impact on water abstraction in the canal systems. The method uses observational data from the Dezful Dastmashoon hydrometric station along a 6 km waterway of the Sabilli main canal, together with the SHARC software, to estimate the suspended load concentration and bed load materials.
Results showed that a significant volume of sediment is transported from the waterways into the canal system, which is assumed to arise from the absence of a stilling basin on the one hand and the gravity flow on the other, and has caused serious challenges. This is contrary to what occurs in the Sabilli canal, where the design feature that incorporates a settling basin just before the pumping station is the major cause of the reduced sediment load transported into the canal system. Results also showed that modifying the present design by constructing a settling basin just upstream of the Western intake structure can considerably reduce the entry of sediment material into the canal system. Not only can this improve the sustainability of the hydraulic structures, it can also improve the operational performance of the water conveyance and distribution system, all of which are prerequisites for a reliable and equitable water delivery regime for the command area.
Abstract: This research aims to compare the percentage of correct classifications of the Empirical Bayes (EB) method with that of the Classical method when the data are constructed as near-normal, short-tailed and long-tailed symmetric, and short-tailed and long-tailed asymmetric. The study uses a conjugate prior for a normal distribution with known mean and unknown variance. The hyper-parameters estimated by the EB method are substituted into the posterior predictive probability, which is then used to predict new observations. Data are generated, consisting of a training set and a test set, with sample sizes of 100, 200 and 500 for binary classification. The results showed that the EB method outperformed the Classical method in all situations under study.
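The classification step can be sketched as follows. For a normal likelihood with known mean and unknown variance, the conjugate prior is inverse-gamma, and the posterior predictive is a location-scale Student-t; the hyper-parameters below are fixed illustrative values rather than EB estimates, and the data are toy numbers, so this is only a sketch of the predictive machinery:

```python
import math

def t_logpdf(x, df, loc, scale):
    """Log-density of a location-scale Student-t distribution."""
    z = (x - loc) / scale
    return (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
            - 0.5 * math.log(df * math.pi) - math.log(scale)
            - (df + 1) / 2 * math.log1p(z * z / df))

def posterior_predictive(train, mu, alpha=1.0, beta=1.0):
    """Posterior predictive for N(mu, sigma^2) with known mean mu and an
    inverse-gamma(alpha, beta) prior on sigma^2: a Student-t log-density."""
    n = len(train)
    a = alpha + n / 2
    b = beta + sum((x - mu) ** 2 for x in train) / 2
    # Predictive: t with 2a degrees of freedom, location mu, scale sqrt(b/a).
    return lambda x: t_logpdf(x, 2 * a, mu, math.sqrt(b / a))

# Binary classification: assign the class with the higher predictive density.
class0 = posterior_predictive([-0.2, 0.1, -0.4, 0.3], mu=0.0)
class1 = posterior_predictive([4.8, 5.3, 4.6, 5.1], mu=5.0)
x_new = 4.7
label = 0 if class0(x_new) > class1(x_new) else 1
print(label)  # x_new lies near class 1's data, so label 1
```

In the EB setting, `alpha` and `beta` would themselves be estimated from the training data before being substituted into the predictive, which is the step the paper evaluates.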
Abstract: The planning community has long been discussing emerging paradigms within planning theory in the face of the changing conditions of the world order. The paradigm shift concept was introduced by Thomas Kuhn, in 1960, who claimed the necessity of shifting scientific knowledge boundaries; following him, in 1970, Imre Lakatos also gave priority to the emergence of multi-paradigm societies [24]. Multi-paradigm is changing our predetermined lifeworld through uncertainties. Those uncertainties have two sides: the first is uncertainty as a concept of possibility and creativity in the public sphere, and the second is uncertainty as a risk. It is therefore necessary to apply a resilience planning approach that is more dynamic in controlling the uncertainties which have the potential to transfigure present definitions of time and space. In this way, the stability of the system can be achieved. Uncertainty is not only an outcome of worldwide changes but also a place-specific issue, i.e. it changes from continent to continent, from country to country, and from region to region. Therefore, applying strategic spatial planning with respect to the resilience principle contributes to controlling, grasping and internalizing uncertainties through place-specific strategies. In today's fast-changing world, the planning system should follow strategic spatial projects to manage multi-paradigm societies with adaptive capacities. Here, we have selected two cases to demonstrate this: 1. Tehran (Iran) from the Middle East, and 2. Bath (United Kingdom) from Europe. The study elaborates the uncertainties and particularities in their strategic spatial planning processes in a comparative manner. Through the comparison, the study aims to assess place-specific priorities in strategic planning. The approach is a two-way stream, in which case cities from opposite ends of the spectrum can learn from each other.
The structure of this paper is to first compare the semi-periphery (Tehran) and core-periphery (Bath) cities, with a focus on revealing how they are equipped to face uncertainties according to their geographical locations and local particularities. Secondly, the key message to address is that "each locality requires its own strategic planning approach to be resilient."
Abstract: The problem of natural convection about a cone embedded in a porous medium at local Rayleigh numbers, based on the boundary layer approximation and Darcy's law, has been studied before. Similarity solutions for a full cone with a prescribed wall temperature or surface heat flux boundary condition, given as a power function of the distance from the vertex of the inverted cone, lead to a third-order nonlinear differential equation. In this paper, an approximate method for solving higher-order ordinary differential equations is proposed. The approach is based on a rational Chebyshev Tau (RCT) method. The operational matrices of the derivative and product of rational Chebyshev (RC) functions are presented. These matrices, together with the Tau method, are utilized to reduce the solution of the higher-order ordinary differential equations to the solution of a system of algebraic equations. We also compare this work with others and show that the present method is applicable.
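As background, one common construction of rational Chebyshev functions maps the Chebyshev polynomials $T_n$ onto the semi-infinite interval via an algebraic map with a parameter $L > 0$; this is an assumption here, and the paper's exact map may differ:

```latex
R_n(y) = T_n\!\left(\frac{y - L}{y + L}\right), \qquad y \in [0, \infty), \quad n = 0, 1, 2, \dots
```

Expanding the unknown in this basis and enforcing the residual conditions of the Tau method, with the operational matrices replacing differentiation and multiplication, is what reduces the boundary value problem to an algebraic system.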
Abstract: Electromagnetic interference (EMI) is one of the serious problems in most electrical and electronic appliances, including fluorescent lamps. The electronic ballast used to regulate the power flow through the lamp is the major cause of EMI; the interference is due to the high-frequency switching operation of the ballast. Formerly, some EMI mitigation techniques were in practice, but they were not satisfactory because of the hardware complexity of the circuit design, increased parasitic components, increased power consumption, and so on. The majority of researchers have focused only on EMI mitigation without considering other constraints such as cost and the effective operation of the equipment. In this paper, we propose a technique for EMI mitigation in fluorescent lamps that integrates Frequency Modulation and Evolutionary Programming. With the Frequency Modulation technique, switching at a single central frequency is spread over a range of frequencies, so that the power is distributed throughout that range, leading to EMI mitigation. However, to meet the operating frequency of the ballast and the operating power of the fluorescent lamps, an optimal modulation index is necessary for the Frequency Modulation; this optimal modulation index is determined using Evolutionary Programming. The proposed technique thereby mitigates the EMI to a satisfactory level without disturbing the operation of the fluorescent lamp.
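The optimization step can be sketched as a classical evolutionary programming loop over the modulation index. The objective function below is a hypothetical stand-in (a smooth trade-off between EMI level and deviation from an assumed operating point); the real objective would come from measured EMI spectra and the ballast's operating constraints:

```python
import random

random.seed(1)

def emi_objective(m):
    """Hypothetical stand-in objective: a term that falls as the modulation
    index m spreads the switching power, plus a penalty for drifting from
    an assumed operating point of the ballast."""
    emi_level = 1.0 / (1.0 + m)
    operating_penalty = (m - 0.3) ** 2
    return emi_level + operating_penalty

def evolve(pop_size=20, generations=60, sigma=0.05):
    """Classical evolutionary programming: each parent spawns one
    Gaussian-mutated offspring; the best half of parents and offspring
    survives to the next generation."""
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [min(1.0, max(0.0, m + random.gauss(0, sigma)))
                     for m in pop]
        pop = sorted(pop + offspring, key=emi_objective)[:pop_size]
    return pop[0]

m_opt = evolve()
print(round(m_opt, 3))
```

Evolutionary programming needs no derivative of the objective, which is convenient when the EMI figure of merit is only available from simulation or measurement.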
Abstract: The analysis of the Acoustic Emission (AE) signal generated by metal cutting processes has often been approached statistically. This is due to the stochastic nature of the emission signal, a result of the factors affecting the signal from its generation through transmission and sensing. Different techniques are applied in this manner, each of which is suitable for certain processes. In metal cutting, where the emission generated by the deformation process is rather continuous, an appropriate method for analysing the AE signal is based on the root mean square (RMS) of the signal; this method is often used and is suitable for conventional signal processing systems. The aim of this paper is to set out a strategy for tool failure detection in turning processes via statistical analysis of the AE generated in the cutting zone. The strategy is based on investigating the distribution moments of the AE signal at predetermined sampling intervals; the skewness and kurtosis of these distributions are the key elements in the detection. A normal (Gaussian) distribution was first suggested but was then ruled out as insufficient. The Beta distribution was then considered; used with an assumed β density function, it has given promising results with regard to chipping and tool breakage detection.
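The moment-based detection idea can be sketched as follows; the simulated records, the burst model for chipping, and the alarm thresholds are all illustrative assumptions, not the paper's data or calibrated limits:

```python
import math
import random

random.seed(0)

def moments(samples):
    """Sample skewness and kurtosis of an AE record."""
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n
    m3 = sum((x - mean) ** 3 for x in samples) / n
    m4 = sum((x - mean) ** 4 for x in samples) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2   # roughly 3 for a Gaussian record
    return skew, kurt

# Steady cutting: a near-Gaussian AE RMS record.
steady = [random.gauss(1.0, 0.1) for _ in range(2000)]
# Tool chipping: the same record plus a burst of high-amplitude events.
chipped = steady + [random.gauss(3.0, 0.2) for _ in range(40)]

for label, rec in (("steady", steady), ("chipped", chipped)):
    skew, kurt = moments(rec)
    alarm = abs(skew) > 1.0 or kurt > 5.0   # illustrative thresholds
    print(label, round(skew, 2), round(kurt, 2), alarm)
```

A short burst of emission events barely moves the mean and RMS but drives the third and fourth moments sharply upward, which is why skewness and kurtosis are the key elements of the detection.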
Abstract: If a possibility distribution and a probability distribution describe the values x of one and the same system or process x(t), can they be related to each other? Although in general the possibility and probability distributions might not be connected at all, we can assume that in some particular cases there is an association linking them. In the present paper, we consider distributions of the bloodstream concentrations of physiologically active substances and propose that the probability of observing a concentration x of a substance X can be derived from the possibility of the event X = x. The proposed assumptions and the resulting theoretical distributions are tested against data obtained from various panel studies of the bloodstream concentrations of different physiologically active substances in both patients and healthy adults.
Abstract: An analysis is made of the flow of an incompressible viscoelastic fluid (of small memory) over a porous plate subject to suction or blowing. It is found that the velocity at a point increases with increasing elasticity of the fluid. It is also shown that the wall shear stress depends only on the suction and is independent of the material of the fluid. No steady solution for the velocity distribution exists when there is blowing at the plate. The temperature distribution in the boundary layer is determined, and it is found that the temperature at a point decreases with increasing elasticity of the fluid.
Abstract: In this paper, a procedure for the split-pipe design of looped water distribution networks based on simulated annealing is proposed. Simulated annealing is a heuristic search algorithm, motivated by an analogy with physical annealing in solids, and is capable of solving combinatorial optimization problems. In contrast to split-pipe designs derived from a continuous-diameter design, as implemented in conventional optimization techniques, the split-pipe design proposed in this paper is derived from a discrete-diameter design, in which pipe diameters are chosen directly from a specified set of commercial pipes. The optimality and feasibility of the solutions are guaranteed by the proposed method. The performance of the proposed procedure is demonstrated by solving three well-known water distribution network problems taken from the literature. Simulated annealing provides very promising solutions, and the lowest-cost solutions are found for all of these test problems. The results obtained from these applications show that simulated annealing is able to handle the combinatorial optimization problem of the least-cost design of water distribution networks. The technique can be considered an alternative tool for similar areas of research, and further applications and improvements of the technique are expected.
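The discrete-diameter annealing search can be sketched as follows. The diameter set, unit costs, network size, and the single aggregate feasibility constraint are all illustrative stand-ins; a real design would evaluate hydraulic feasibility (nodal heads, head losses) with a network solver:

```python
import math
import random

random.seed(7)

# Commercial pipe diameters (mm) and unit costs per metre: illustrative values.
DIAMETERS = [100, 150, 200, 250, 300, 350, 400]
UNIT_COST = {100: 20, 150: 32, 200: 50, 250: 60, 300: 90, 350: 115, 400: 150}

LINKS = 6                  # pipes in the toy network
LENGTH = 1000.0            # metres per link
MIN_DIAMETER_SUM = 1500    # stand-in feasibility constraint (head-loss proxy)

def cost(design):
    """Pipe cost plus a large penalty for violating the constraint."""
    c = sum(UNIT_COST[d] * LENGTH for d in design)
    shortfall = max(0, MIN_DIAMETER_SUM - sum(design))
    return c + 1e4 * shortfall

def anneal(t0=1e5, cooling=0.995, steps=5000):
    design = [300] * LINKS            # feasible starting design
    best, best_cost, t = list(design), cost(design), t0
    for _ in range(steps):
        cand = list(design)           # neighbour: re-pick one link's diameter
        cand[random.randrange(LINKS)] = random.choice(DIAMETERS)
        delta = cost(cand) - cost(design)
        if delta < 0 or random.random() < math.exp(-delta / t):
            design = cand
            if cost(design) < best_cost:
                best, best_cost = list(design), cost(design)
        t *= cooling                  # geometric cooling schedule
    return best, best_cost

design, total = anneal()
print(design, total)
```

Because candidate diameters are drawn directly from the commercial set, every accepted solution is already a valid discrete design, which is the point of the discrete-diameter formulation.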
Abstract: A Personal Distributed Environment (PDE) is an example of an IP-based system architecture designed for future mobile communications. In a single PDE, several Sub-networks host devices located across the infrastructure, which inter-work with one another through the coordination of a Device Management Entity (DME). Some of these Sub-networks are fixed and some are mobile. In order to support Mobile Sub-network mobility in the PDE, the PDE-NEMO protocol was proposed. This paper presents a signalling cost analysis of PDE-NEMO using a detailed simulation model. The paper starts with an introduction to the protocol, followed by the experiments, the results, and a discussion.
Abstract: The adsorption of Toluidine Blue dye from aqueous solutions onto Neem Leaf Powder (NLP) has been investigated. The surface of this natural material was characterized by particle size analysis, Scanning Electron Microscopy (SEM), Fourier Transform Infrared (FTIR) spectroscopy and X-Ray Diffraction (XRD). The effects of process parameters such as initial concentration, pH, temperature and contact duration on the adsorption capacity were evaluated, among which pH was found to be the most influential. The equilibrium data were analyzed using the Langmuir and Freundlich isotherms, and kinetic models, namely the pseudo-first-order and pseudo-second-order models and the Elovich equation, were used to describe the kinetic data. The experimental data were well fitted by the Langmuir adsorption isotherm model and the pseudo-second-order kinetic model. The thermodynamic parameters, namely the free energy of adsorption (ΔG°), the enthalpy change (ΔH°) and the entropy change (ΔS°), were also determined and evaluated.
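The Langmuir fitting step can be illustrated via the common linearization Ce/qe = Ce/qm + 1/(qm*KL), fitted by ordinary least squares; the equilibrium data below are synthetic, generated from assumed parameters rather than taken from the study:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def langmuir_fit(ce, qe):
    """Fit the linearized Langmuir isotherm Ce/qe = Ce/qm + 1/(qm*KL)
    and return the monolayer capacity qm and the Langmuir constant KL."""
    slope, intercept = linear_fit(ce, [c / q for c, q in zip(ce, qe)])
    qm = 1 / slope
    kl = slope / intercept
    return qm, kl

# Synthetic equilibrium data generated from qm = 50 mg/g, KL = 0.2 L/mg.
ce = [2.0, 5.0, 10.0, 20.0, 40.0]                  # Ce, mg/L
qe = [50 * 0.2 * c / (1 + 0.2 * c) for c in ce]    # qe, mg/g
qm, kl = langmuir_fit(ce, qe)
print(round(qm, 2), round(kl, 3))  # recovers the generating parameters
```

The same linear-regression helper applies to the pseudo-second-order kinetic model, which is likewise usually fitted in a linearized form (t/qt versus t).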