Abstract: Using a neural network, we model an unknown function f for given input-output data pairs. The connection strength of each neuron is updated through learning. Repeated simulations of a crisp neural network produce different weight values, which are directly affected by changes in the network parameters. We propose that, for each neuron in the network, quasi-fuzzy weight sets (QFWS) can be obtained from repeated simulations of the crisp neural network. Such fuzzy weight functions may be applied where multivariate crisp input must be adjusted after iterative learning, as in claim amount distribution analysis. Since real data is subject to noise and uncertainty, QFWS may help simplify such complex problems. Moreover, QFWS provide a good initial solution for training fuzzy neural networks with reduced computational complexity.
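As a toy illustration of the idea (not the authors' experimental setup; the one-weight linear model, learning rate, and seeds below are assumptions), repeated trainings of a crisp model yield a spread of final weights from which a quasi-fuzzy weight set can be read off:

```python
import random

def train_once(data, seed, lr=0.1, epochs=50):
    """Fit y = w * x by stochastic gradient descent from a random start."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)           # random initial weight
    for _ in range(epochs):
        samples = data[:]
        rng.shuffle(samples)             # sample order changes per run
        for x, y in samples:
            w -= lr * (w * x - y) * x    # gradient step on squared error
    return w

# toy input-output pairs from an unknown function f(x) ~ 2x plus small noise
data = [(x / 10, 2 * x / 10 + 0.01 * ((-1) ** x)) for x in range(1, 11)]

# repeated simulations give a spread of weights: a quasi-fuzzy weight set
weights = [train_once(data, seed) for seed in range(30)]
support = (min(weights), max(weights))   # support interval of the QFWS
core = sum(weights) / len(weights)       # a crisp representative value
print(support, core)
```

The spread (`support`) captures the parameter-dependent variability the abstract describes, while `core` is one crisp summary that could seed a fuzzy network's initial weights.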
Abstract: HSDPA is a new feature introduced in the Release-5
specifications of the 3GPP WCDMA/UTRA standard to realize higher
data rates together with lower round-trip times. Moreover, the HSDPA
concept offers an outstanding improvement in packet throughput and
significantly reduces the packet call transfer delay compared to the
Release-99 DSCH. To date, the HSDPA system has used turbo coding,
which is among the best coding techniques for approaching the Shannon
limit. However, the main drawbacks of turbo coding are its high
decoding complexity and high latency, which make it unsuitable for
some applications such as satellite communications, where the
transmission distance itself introduces latency due to the finite speed
of light. Hence, this paper proposes using LDPC coding in place of
turbo coding for the HSDPA system, which decreases the latency and
decoding complexity. LDPC coding does, however, increase the encoding
complexity. Although the transmitter complexity at the NodeB
increases, the end user benefits in terms of receiver complexity and
bit error rate. In this paper, the LDPC encoder is implemented using a
sparse parity-check matrix H to generate codewords, and the belief
propagation algorithm is used for LDPC decoding. Simulation results
show that with LDPC coding the BER drops sharply as the number of
iterations increases, for only a small increase in Eb/No, which is not
possible with turbo coding. The same BER was also achieved with fewer
iterations, so the latency and receiver complexity are reduced with
LDPC coding. HSDPA increases the downlink data rate within a cell to
a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The
changes that HSDPA enables include better quality and more reliable,
more robust data services. In other words, while realistic data rates
are only a few Mbps, the achievable quality and number of users will
improve significantly.
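The encoding side mentioned above rests on the defining relation of a parity-check code: a vector c is a codeword exactly when H·cᵀ = 0 (mod 2). A minimal sketch with a small illustrative H (here the (7,4) Hamming code; a real HSDPA LDPC matrix is far larger and sparser):

```python
# Illustrative parity-check matrix H for the (7,4) Hamming code in
# [data | parity] form -- NOT an actual LDPC matrix, just the same relation.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(H, c):
    """Compute H * c^T over GF(2); an all-zero syndrome means c is a codeword."""
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

def encode(d):
    """Append parity bits p = (d1+d2+d4, d1+d3+d4, d2+d3+d4) mod 2."""
    p1 = (d[0] + d[1] + d[3]) % 2
    p2 = (d[0] + d[2] + d[3]) % 2
    p3 = (d[1] + d[2] + d[3]) % 2
    return d + [p1, p2, p3]

c = encode([1, 0, 1, 1])
print(syndrome(H, c))            # valid codeword: all-zero syndrome
corrupted = c[:]
corrupted[2] ^= 1                # flip one bit in the channel
print(syndrome(H, corrupted))    # nonzero syndrome flags the error
```

Belief propagation iterates on exactly this structure, passing probabilities along the nonzero entries of H until the syndrome is zero or an iteration limit is hit.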
Abstract: Results are presented from a combined experimental
and modeling study undertaken to understand the effect of fuel spray
angle on soot production in turbulent liquid spray flames. The
experimental work was conducted in a cylindrical laboratory furnace
at fuel spray cone angles of 30°, 45° and 60°. Soot concentrations
inside the combustor were measured by the filter paper technique. The
soot concentration was modeled using the soot particle number density
and mass density, based on acetylene concentrations. Soot oxidation
occurs through both hydroxide radicals and oxygen molecules. The
comparison of calculated results against experimental measurements
shows good agreement. Both the numerical and experimental results
show that the peak soot value and its location in the furnace depend
on the fuel spray cone angle. An increase in spray angle enhances the
evaporation rate and the peak temperature near the nozzle. Although
the peak soot concentration increases with fuel spray angle, soot
emission from the furnace decreases.
Abstract: Proper orthogonal decomposition (POD) is used to reconstruct spatio-temporal data of a fully developed turbulent channel flow with density variation at a Reynolds number of 150, based on the friction velocity and the channel half-width, and a Prandtl number of 0.71. To apply POD to this flow, the flow field (velocities, density, and temperature) is scaled by the corresponding root-mean-square (rms) values so that it becomes dimensionless. A five-vector POD problem is solved numerically. The second-order moments of velocity, temperature, and density reconstructed from the POD eigenfunctions compare favorably with the original direct numerical simulation (DNS) data.
Abstract: With today's fast lifestyles and busy schedules, nuclear
families are becoming common, and the elderly members of these
families are often neglected. This has led to the popularity of the
concept of community living for the aged. The elders reside at a
centre controlled by a manager, who takes responsibility for the
functioning of the centre. This includes taking care of the residents
as well as managing the centre's daily chores, which the manager
accomplishes with the help of a number of staff members and
volunteers. Often the manager is not an employee but a volunteer; in
such cases especially, time is an important constraint. A system that
provides an easy and efficient way of managing the working of an old
age home in detail will therefore prove to be of great benefit. We
have developed a PC-based organizer to monitor the various activities
of an old age home. It is an effective and easy-to-use system that
enables the manager to keep account of all the residents, their
accounts, the staff members, the volunteers, the centre's logistic
requirements, etc. It is thus a comprehensive 'Organizer' for old
age homes.
Abstract: X-ray computed tomography (CT) is a well-established
visualization technique in medicine and nondestructive testing.
However, since CT scanning requires sampling radiographic projections
from different viewing angles, common CT systems with mechanically
moving parts are too slow for dynamic imaging, for instance of
multiphase flows or live animals. A large number of X-ray projections
are needed to reconstruct CT images, so the collection and processing
of the projection data consume too much time and are harmful to the
patient. To address this problem, we propose a method for tomographic
reconstruction of a sample from a limited number of X-ray projections
using linear interpolation. In simulation, we present a reconstruction
from an experimental X-ray CT scan of an aluminum phantom that
follows two steps: the X-ray projections are first interpolated using
the linear interpolation method, and the interpolated projections are
then used for CT reconstruction based on the Ordered Subsets
Expectation Maximization (OSEM) method.
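The first step, synthesizing unmeasured projections between sampled angles, can be sketched as follows (the detector values and angles are illustrative, not the experimental data):

```python
def interpolate_projection(p0, p1, t):
    """Linearly interpolate between two measured projections p0 (at angle
    theta0) and p1 (at angle theta1), at fraction t in [0, 1] of the gap."""
    return [(1 - t) * a + t * b for a, b in zip(p0, p1)]

# hypothetical detector readings at two adjacent sampled angles
proj_0deg = [0.0, 1.0, 4.0, 1.0, 0.0]
proj_10deg = [0.0, 2.0, 6.0, 2.0, 0.0]

# synthesize the unmeasured projection at 5 degrees (t = 0.5)
proj_5deg = interpolate_projection(proj_0deg, proj_10deg, 0.5)
print(proj_5deg)   # [0.0, 1.5, 5.0, 1.5, 0.0]
```

The densified projection set is then fed to the OSEM iteration in place of a full angular scan.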
Abstract: Since the European renewable energy directives set the
target of 22.1% of electricity generation to be supplied by
renewables by 2010 [1], there has been increased interest in using
green technologies within the urban environment. The most commonly
considered installations are solar thermal and solar photovoltaics.
Nevertheless, as observed by Bahaj et al. [2], small-scale wind
turbines can also reduce the CO2 emissions related to the built
environment. Thus, in the last few years, an increasing number of
manufacturers have developed small wind turbines specifically
designed for the built environment. The present work focuses on the
integration of such installations into architectural systems and
presents a survey of successful case studies.
Abstract: The prediction of aerodynamic characteristics and
shape optimization of an airfoil under the ground effect have been
carried out by integrating computational fluid dynamics with a
multi-objective Pareto-based genetic algorithm. The main flow
characteristics around the airfoil of a WIG craft are the lift force,
the lift-to-drag ratio and the static height stability (HS). However,
they show a strong trade-off, so it is not easy to satisfy the design
requirements simultaneously. This difficulty can be resolved by
optimal design. These three characteristics are chosen as the
objective functions, and the NACA0015 airfoil is considered as the
baseline model in the present study. The profile of the airfoil is
constructed from Bezier curves with fourteen control points, and
these control points are adopted as the design variables. For
multi-objective optimization problems, the optimal solutions are not
unique but form a set of non-dominated optima, called Pareto
frontiers or Pareto sets. As a result of the optimization, forty
non-dominated Pareto optima were obtained after thirty generations.
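The Bezier parameterization mentioned above can be sketched with de Casteljau's algorithm (the control points below are illustrative placeholders, not the paper's fourteen design variables):

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t via de Casteljau's algorithm:
    repeatedly interpolate adjacent points until one point remains."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# hypothetical control points for an upper airfoil surface; an optimizer
# would perturb these coordinates as design variables
ctrl = [(0.0, 0.0), (0.1, 0.08), (0.5, 0.12), (0.9, 0.03), (1.0, 0.0)]
print(bezier_point(ctrl, 0.0))   # the curve starts at the first control point
print(bezier_point(ctrl, 1.0))   # ...and ends at the last
```

Because the curve is fully determined by the control points, moving them gives the GA a compact, smooth encoding of candidate airfoil shapes.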
Abstract: This paper introduces a modification to the Diffie-Hellman
protocol to make it applicable to decimal numbers, i.e., numbers
between zero and one. For this purpose we extend the theory of
congruence. The new congruence is over the set of real numbers and is
called the "real congruence" or the "real modulus"; we refer to the
existing congruence as the "integer congruence" or the "integer
modulus". This extension defines new terms and redefines existing
ones, and the properties and theorems of the integer modulus are
extended as well. The modified Diffie-Hellman key exchange protocol
produces a shared, secure, decimal secret key for cryptosystems that
depend on decimal numbers.
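For reference, the classical integer-modulus Diffie-Hellman exchange that the paper modifies can be sketched as follows (toy parameters only; the paper's real-modulus variant is not reproduced here):

```python
# Classical Diffie-Hellman over the integers mod p; real deployments
# use much larger primes than this toy example.
p, g = 23, 5          # public prime modulus and generator

a = 6                 # Alice's private key
b = 15                # Bob's private key

A = pow(g, a, p)      # Alice publishes g^a mod p
B = pow(g, b, p)      # Bob publishes g^b mod p

secret_alice = pow(B, a, p)   # (g^b)^a mod p
secret_bob = pow(A, b, p)     # (g^a)^b mod p
print(secret_alice == secret_bob)  # True: both derive the same shared key
```

The paper's contribution is to carry this same exchange over to a congruence defined on the reals, so the derived key is a decimal in (0, 1) rather than an integer residue.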
Abstract: Our study proposes an alternative method in building
Fuzzy Rule-Based System (FRB) from Support Vector Machine
(SVM). The first set of fuzzy IF-THEN rules is obtained through
an equivalence between the SVM decision network and the zero-order
Sugeno FRB type of the Adaptive Network Fuzzy Inference System
(ANFIS). The second set of rules is generated by combining the
first set based on strength of firing signals of support vectors using
Gaussian kernel. The final set of rules is then obtained from the
second set through input scatter partitioning. A distinctive advantage
of our method is the guarantee that the number of final fuzzy
IF-THEN rules is no more than the number of support vectors in the
trained SVM. The final FRB system obtained is capable of performing
classification with results comparable to its SVM counterpart, but it
has an advantage over the black-boxed SVM in that it may reveal
human comprehensible patterns.
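The firing strength of a support vector under a Gaussian kernel, used above to combine the first rule set, has the standard form exp(-||x − sv||² / 2σ²); a minimal sketch (the vectors and σ below are illustrative):

```python
import math

def gaussian_firing_strength(x, support_vector, sigma=1.0):
    """Gaussian-kernel firing strength of a support vector for input x:
    exp(-||x - sv||^2 / (2 sigma^2)), equal to 1 at the SV itself and
    decaying with distance."""
    dist2 = sum((xi - si) ** 2 for xi, si in zip(x, support_vector))
    return math.exp(-dist2 / (2 * sigma ** 2))

sv = [0.5, 0.5]                                  # a hypothetical support vector
print(gaussian_firing_strength(sv, sv))          # 1.0 at the support vector
print(gaussian_firing_strength([2.0, 2.0], sv))  # much weaker far from it
```

Rules whose support vectors fire strongly on overlapping regions are candidates for merging, which is what keeps the final rule count bounded by the number of support vectors.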
Abstract: This paper presents a new circuit arrangement for a
current-mode Wheatstone bridge that is suitable for low-voltage
integrated circuit implementation. Compared to other proposed
circuits, this circuit features a severe reduction in the number of
elements, a low supply voltage (1 V) and low power consumption (
Abstract: This paper simulates ad-hoc mesh networks in rural areas, where such networks receive great attention because of their low cost, since installing the infrastructure for regular networks in these areas is not feasible due to the high cost. The distance between communicating nodes is the main obstacle that an ad-hoc mesh network faces. For example, in Terranet technology, two nodes can communicate only if they are within one kilometer of each other. If the distance between them is greater, then intermediate nodes in the ad-hoc mesh network have to act as routers that forward the data they receive to other nodes. In this paper, we try to find the critical number of nodes that makes the network fully connected in a particular area, and then propose a method to encourage intermediate nodes to accept the router role and forward data from the sender to the receiver. Much work has been done on technological changes in peer-to-peer networks, but the focus of this paper is on another aspect: finding the minimum number of nodes needed for a particular area to be fully connected, and encouraging users to switch on their phones and accept working as routers for other nodes. Our method raises the rate of successful calls to 81.5% of attempted calls.
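The connectivity question posed above can be sketched as a random geometric graph experiment (a simplified model, not the paper's simulator; the area size, range, and node counts below are illustrative):

```python
import random

def is_connected(positions, comm_range):
    """Search over the geometric graph in which two nodes are linked when
    they lie within comm_range of each other; True if one component."""
    n = len(positions)
    seen, frontier = {0}, [0]
    while frontier:
        i = frontier.pop()
        xi, yi = positions[i]
        for j in range(n):
            xj, yj = positions[j]
            if j not in seen and (xi - xj) ** 2 + (yi - yj) ** 2 <= comm_range ** 2:
                seen.add(j)
                frontier.append(j)
    return len(seen) == n

def connectivity_rate(n_nodes, area_km, comm_range, trials=200, seed=1):
    """Fraction of random node placements that yield a fully connected network."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pos = [(rng.uniform(0, area_km), rng.uniform(0, area_km))
               for _ in range(n_nodes)]
        hits += is_connected(pos, comm_range)
    return hits / trials

# in a 5 km x 5 km area with a 1 km range, more nodes -> better connectivity
print(connectivity_rate(10, 5.0, 1.0), connectivity_rate(60, 5.0, 1.0))
```

Sweeping the node count until the connectivity rate crosses a chosen threshold gives an empirical estimate of the critical number of nodes for the area.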
Abstract: Finding minimal logical functions has important applications in the design of logic circuits. This task is solved by many different methods, but frequently they are not suitable for computer implementation. We briefly summarise the well-known Quine-McCluskey method, which gives a unique computational procedure and thus can be simply implemented but, even for simple examples, does not guarantee an optimal solution. Since the Petrick extension of the Quine-McCluskey method does not give a generally usable way of finding an optimum for logical functions with a high number of values, we focus on the interpretation of the result of the Quine-McCluskey method and show that it represents a set covering problem, which, unfortunately, is an NP-hard combinatorial problem. Therefore it must be solved by heuristic or approximation methods. We propose an approach based on genetic algorithms and show suitable parameter settings.
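The set covering instance produced by the Quine-McCluskey method (prime implicants covering minterms) can be sketched with the classical greedy approximation, one of the heuristic options alluded to above (the GA itself is not reproduced; the instance below is illustrative):

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation for set cover: repeatedly pick the subset that
    covers the most still-uncovered elements (ln n approximation bound)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("universe cannot be covered by the given subsets")
        chosen.append(best)
        uncovered -= set(best)
    return chosen

# hypothetical instance: minterms (the universe) vs. prime implicants
# (the subsets), the form in which the Quine-McCluskey output appears
minterms = {0, 1, 2, 3, 4, 5}
prime_implicants = [[0, 1], [1, 2, 3], [3, 4], [4, 5], [0, 2, 4]]
cover = greedy_set_cover(minterms, prime_implicants)
print(cover)
```

A genetic algorithm attacks the same instance globally, encoding a candidate cover as a bit string over the prime implicants and penalizing uncovered minterms in the fitness function.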
Abstract: Cooktop burners are widely used nowadays. In
cooktop burner design, nozzle efficiency and greenhouse gas
(GHG) emissions mainly depend on heat transfer from the
premixed flame to the impinging surface. This is a complicated
issue depending on the individual and combined effects of various
input combustion variables. Optimal operating conditions for
sustainable burner design were rarely addressed, especially in the
case of multiple slot-jet burners. Through evaluating the optimal
combination of combustion conditions for a premixed slot-jet
array, this paper develops a practical approach for the sustainable
design of gas cooktop burners. Efficiency, CO and NOx emissions
in respect of an array of slot jets using premixed flames were
analysed. Response surface experimental design was applied to
three controllable factors of the combustion process, viz.
Reynolds number, equivalence ratio and jet-to-vessel distance.
The Desirability Function Approach (DFA) is the analytic technique
used for the simultaneous optimization of the efficiency and
emission responses.
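The DFA combines per-response desirabilities into one score via a geometric mean; a minimal sketch using the standard Derringer-Suich form (all response values and limits below are illustrative, not the experimental results):

```python
def desirability_larger_is_better(y, low, target):
    """Desirability for a response to maximize (e.g. efficiency):
    0 below `low`, 1 above `target`, linear in between."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return (y - low) / (target - low)

def desirability_smaller_is_better(y, target, high):
    """Desirability for a response to minimize (e.g. CO, NOx emissions)."""
    if y >= high:
        return 0.0
    if y <= target:
        return 1.0
    return (high - y) / (high - target)

# hypothetical responses for one combination of Reynolds number,
# equivalence ratio and jet-to-vessel distance
d_eff = desirability_larger_is_better(y=0.62, low=0.40, target=0.70)
d_co = desirability_smaller_is_better(y=300.0, target=100.0, high=500.0)
d_nox = desirability_smaller_is_better(y=40.0, target=20.0, high=80.0)

# overall desirability: geometric mean of the individual desirabilities
D = (d_eff * d_co * d_nox) ** (1 / 3)
print(round(D, 3))
```

The optimizer then searches the three controllable factors for the settings that maximize D, trading efficiency against emissions in a single objective.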
Abstract: This paper presents the results of a comprehensive
investigation of five blackouts that occurred between 28 August and 8
September 2011 due to bushing failures of the 132/33 kV, 125 MVA
transformers at the JBB Ali Grid station. The investigation aims to
explore the root causes of the bushing failures and to provide
recommendations that help rectify the problem and avoid the
recurrence of similar incidents. The incident reports about
the failed bushings and the SCADA reports at this grid station were
examined and analyzed. Moreover, comprehensive power quality
field measurements at ten 33/11 kV substations (S/Ss) in JBB Ali
area were conducted, and frequency scans were performed to verify
any harmonic resonance frequencies due to power factor correction
capacitors. Furthermore, the daily operations of the on-load tap
changers (OLTCs) of both the 125 MVA and 20 MVA transformers
at JBB Ali Grid station have been analyzed. The investigation
showed that the five bushing failures were due to a local problem, i.e.
internal degradation of the bushing insulation. This has been
confirmed by analyzing the time interval between successive OLTC
operations of the faulty grid transformers. It was also found that
monitoring the number of OLTC operations can help in predicting
bushing failure.
Abstract: The present work addresses the problem of automatic enumeration and recognition of an unknown and time-varying number of environmental sound sources using a single microphone. The assumption made is that the recorded sound is a realization of sound sources belonging to a group of audio classes known a priori. We describe two variations of the same principle, which is to calculate the distance between the current unknown audio frame and all possible combinations of the classes assumed to span the sound scene. We concentrate on categorizing environmental sound sources, such as birds, insects, etc., in the task of monitoring the biodiversity of a specific habitat.
Abstract: The multiple traveling salesman problem (mTSP) can be used to model many practical problems. The mTSP is more complicated than the traveling salesman problem (TSP) because it requires determining which cities to assign to each salesman, as well as the optimal ordering of the cities within each salesman's tour. Previous studies proposed that Genetic Algorithms (GA), Integer Programming (IP) and several neural network (NN) approaches could be used to solve the mTSP. This paper compares results for the mTSP solved with a Genetic Algorithm (GA) and the Nearest Neighbor Algorithm (NNA). The cities are clustered into a few groups using the k-means clustering technique, with the number of groups depending on the number of salesmen. Each group is then solved with the NNA and the GA as an independent TSP. It is found that k-means clustering with the NNA is superior to the GA in terms of performance (evaluated by the fitness function) and computing time.
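The clustering-then-routing scheme described above can be sketched as follows (a simplified illustration assuming plain k-means and the greedy NNA; the GA comparison stage is omitted, and the city coordinates are random placeholders):

```python
import math, random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each city to the nearest centre, then update."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centres[c]))
            clusters[i].append(p)
        centres = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centres[i]                 # keep centre if cluster empties
            for i, cl in enumerate(clusters)
        ]
    return clusters

def nearest_neighbour_tour(cities):
    """Greedy NNA tour: always visit the closest unvisited city next."""
    tour, rest = [cities[0]], list(cities[1:])
    while rest:
        nxt = min(rest, key=lambda c: math.dist(tour[-1], c))
        rest.remove(nxt)
        tour.append(nxt)
    return tour

rng = random.Random(42)
cities = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(30)]
salesmen = 3
tours = [nearest_neighbour_tour(cl) for cl in kmeans(cities, salesmen) if cl]
print([len(t) for t in tours])   # each salesman gets one cluster's tour
```

Clustering first reduces the mTSP to independent single-salesman TSPs, which is what makes the per-group NNA and GA comparisons possible.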
Abstract: The increase in energy demand has raised concerns
over adverse impacts on the environment from energy generation. It
is important to understand the status of energy consumption for
institutions such as Curtin Sarawak to ensure the sustainability of
energy usage, and also to reduce its costs. In this study, a preliminary
audit framework was developed and conducted across the
Malaysian campus to obtain information such as the number and
specifications of electrical appliances, built-up area and ambient
temperature to understand the relationship of these factors with
energy consumption. It was found that the number and types of
electrical appliances, population and activities in the campus
impacted the energy consumption of Curtin Sarawak directly.
However, the built-up area and ambient temperature showed no clear
correlation with energy consumption. An investigation of the diurnal
and seasonal energy consumption of the campus was also carried out.
From the data, recommendations were made to improve the energy
efficiency of the campus.
Abstract: A numerical study is presented on convective heat transfer in enclosures. The results are addressed to automotive headlights containing new-age light sources such as Light Emitting Diodes (LED). The heat transfer from the heat source (LED) to the enclosure walls is investigated for mixed convection as the interaction of the forced convection flow from an inlet and an outlet port with the natural convection at the heat source. Unlike in existing studies, the inlet and outlet ports are thermally coupled and do not serve to remove hot fluid. The input power of the heat source is expressed by the Rayleigh number. The internal position of the heat source, the aspect ratio of the enclosure, and the inclination angle of one wall are varied. The results are given in terms of the global Nusselt number and the enclosure Nusselt number, which characterize the heat transfer from the source and from the interior fluid to the enclosure walls, respectively. It is found that the heat transfer from the source to the fluid can be maximized if the source is placed in the main stream from the inlet to the outlet port. In this case, the Reynolds number and the heat source position have the major impact on the heat transfer. A disadvantageous position has been found where natural and forced convection compete with each other. The overall heat transfer from the source to the wall increases with increasing Reynolds number as well as with increasing aspect ratio and decreasing inclination angle. The heat transfer from the interior fluid to the enclosure wall increases upon decreasing the aspect ratio and increasing the inclination angle. This counteracting behaviour is caused by the variation of the area of the enclosure wall. All mixed convection results are compared to the natural convection limit.
Abstract: In this paper, we introduce a new method for elliptical
object identification. The proposed method adopts a hybrid scheme
consisting of the eigenvalues of covariance matrices, the circular
Hough transform and Bresenham's raster scan algorithm. The approach
uses the fact that the large and small eigenvalues of a covariance
matrix are associated with the major and minor axial lengths of an
ellipse. The centre location of the ellipse is identified using the
circular Hough transform (CHT), and a sparse matrix technique is used
to perform the CHT. Since sparse matrices squeeze out zero elements
and store only a small number of nonzero elements, they save matrix
storage space and computational time. A neighborhood suppression
scheme is used to find the valid Hough peaks. The accurate positions
of circumference pixels are identified using the raster scan
algorithm, which exploits geometrical symmetry. This method does not
require the evaluation of tangents or the curvature of edge contours,
which are generally very sensitive to noise. The proposed method has
the advantages of small storage, high speed and accuracy in
identifying the feature. The new method has been tested on both
synthetic and real images. Several experiments have been conducted on
various images with considerable background noise to demonstrate its
efficacy and robustness. Experimental results on the accuracy of the
proposed method, and comparisons with the Hough transform, its
variants and other tangent-based methods, are reported.
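The eigenvalue observation underlying the method, that the covariance matrix of points on an ellipse has eigenvalues proportional to the squared semi-axes, can be checked with a short sketch (the ellipse parameters are illustrative):

```python
import math

# sample points on a hypothetical ellipse with semi-axes a = 5, b = 2
a, b = 5.0, 2.0
pts = [(a * math.cos(t), b * math.sin(t))
       for t in [2 * math.pi * i / 360 for i in range(360)]]

# 2x2 covariance matrix of the point cloud (mean = ellipse centre)
mx = sum(x for x, _ in pts) / len(pts)
my = sum(y for _, y in pts) / len(pts)
sxx = sum((x - mx) ** 2 for x, _ in pts) / len(pts)
syy = sum((y - my) ** 2 for _, y in pts) / len(pts)
sxy = sum((x - mx) * (y - my) for x, y in pts) / len(pts)

# closed-form eigenvalues of a symmetric 2x2 matrix
tr, det = sxx + syy, sxx * syy - sxy ** 2
lam_max = tr / 2 + math.sqrt((tr / 2) ** 2 - det)
lam_min = tr / 2 - math.sqrt((tr / 2) ** 2 - det)

# the eigenvalue ratio recovers the axial ratio: sqrt(lam_max/lam_min) = a/b
print(math.sqrt(lam_max / lam_min))   # close to 5/2 = 2.5
```

This is why the large and small eigenvalues estimate the major and minor axial lengths, leaving only the centre (from the CHT) and the boundary pixels (from the raster scan) to be found.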