Abstract: An on-demand routing protocol for wireless ad hoc
networks is one that searches for and attempts to discover a route to
some destination node only when a sending node originates a data
packet addressed to that node. In order to avoid the need for such a
route discovery to be performed before each data packet is sent, such
routing protocols must cache previously discovered routes. This paper presents an analysis of the effect of intelligent caching in a non-clustered network using on-demand routing protocols in wireless ad hoc networks. The analysis is based on the Dynamic Source Routing (DSR) protocol, which operates entirely on-demand. DSR uses a cache in every node to save the paths learnt during the route discovery procedure. In this implementation, we cache these paths only at intermediate nodes and use the paths from these caches when required. This technique allows more of the learnt routes to be stored without erasing existing cache entries to make room for newly learnt routes.
Simulation results on DSR show that this technique drastically increases the memory available for caching discovered routes without otherwise affecting the performance of the DSR routing protocol, apart from a small increase in end-to-end delay.
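The caching idea described above can be illustrated with a minimal per-node route cache. This is a hedged sketch, not DSR's actual data structures: the class, the capacity limit and the node labels are illustrative assumptions.

```python
class RouteCache:
    """Caches source routes learned during DSR route discovery."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.routes = {}  # destination -> full path (list of node ids)

    def learn(self, path):
        # An overheard path lets the node cache a route to every node on it.
        for i, dest in enumerate(path[1:], start=1):
            if dest not in self.routes and len(self.routes) < self.capacity:
                self.routes[dest] = path[:i + 1]

    def lookup(self, dest):
        # Returns a cached source route, or None (which would trigger
        # a fresh route discovery).
        return self.routes.get(dest)

cache = RouteCache()
cache.learn(["A", "B", "C", "D"])
print(cache.lookup("D"))  # ['A', 'B', 'C', 'D']
print(cache.lookup("C"))  # ['A', 'B', 'C']
```

Caching prefixes of each learnt path means one discovery can populate routes to several destinations at once, which is the memory-for-discovery trade-off the abstract describes.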
Abstract: This paper describes an authorization system architecture for pervasive grid environments. It discusses the characteristics of classical authorization systems as well as the requirements of an authorization system in a pervasive grid environment. Based on our analysis of current systems, and taking into account the main requirements of such pervasive environments, we propose a new authorization system architecture as an extension of existing grid authorization mechanisms. This architecture supports not only user attributes but also context attributes, which are the key concept behind context awareness. The architecture allows users to be authorized dynamically as the pervasive grid environment changes. To this end, we opt for a hybrid authorization method that integrates push and pull mechanisms to combine existing grid authorization attributes with dynamic context assertions. We investigate the proposed architecture in a real test environment comprising heterogeneous pervasive grid infrastructures mapped over multiple virtual organizations. Various scenarios are described in the last section of the article to illustrate how the proposed mechanism supports the authorization procedure.
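The hybrid push/pull idea can be sketched as follows: user attributes arrive pushed with the request, while context attributes are pulled from a context service at decision time. All names, the toy context store and the policy format below are illustrative assumptions, not the paper's actual interfaces.

```python
CONTEXT_SERVICE = {"location": "campus", "network": "trusted"}  # pulled side

def pull_context(keys):
    """Stand-in for querying a context-awareness service."""
    return {k: CONTEXT_SERVICE[k] for k in keys}

def authorize(pushed_user_attrs, policy):
    """Combine pushed user attributes with pulled context assertions."""
    attrs = dict(pushed_user_attrs)          # push: attributes sent by user
    attrs.update(pull_context(policy["context_keys"]))  # pull: live context
    return all(attrs.get(k) == v for k, v in policy["require"].items())

policy = {
    "context_keys": ["location"],
    "require": {"role": "researcher", "location": "campus"},
}
print(authorize({"role": "researcher"}, policy))  # True
print(authorize({"role": "student"}, policy))     # False
```

Because the context attributes are pulled at decision time, a change in the environment (say, the user leaving campus) changes the outcome on the next request, which is the dynamic re-authorization behavior the abstract describes.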
Abstract: Noise causes significant changes in human sensibility. This study investigated the effect of five different noises on the electroencephalogram (EEG) and on subjective evaluation. Six human subjects were exposed to classical piano, ocean waves, an army alarm, an ambulance siren and mosquito noise, and EEG data were collected during the experimental session. Alpha band activity under the mosquito noise was smaller than under the classical piano, decreasing by 43.4 ± 8.2 %. Beta band activity, on the other hand, was greater under the mosquito noise than under the classical piano, increasing by 60.1 ± 10.7 %. The advances from this study may aid the product design process through human sensibility engineering, and the results may provide useful information for designing human-oriented products that avoid such stress.
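Band activity of the kind compared above can be quantified as spectral power within a frequency band. The sketch below computes alpha (8-13 Hz) and beta (13-30 Hz) power of a synthetic signal with a plain discrete Fourier transform; the signal, sampling rate and band edges are illustrative assumptions rather than the study's recording setup.

```python
import cmath
import math

def band_power(signal, fs, lo, hi):
    """Sum of squared DFT magnitudes whose frequencies fall in [lo, hi)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq < hi:
            coeff = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                        for t, x in enumerate(signal))
            power += abs(coeff) ** 2
    return power

fs, n = 128, 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]  # 10 Hz tone
alpha = band_power(sig, fs, 8, 13)
beta = band_power(sig, fs, 13, 30)
print(alpha > beta)  # True: the 10 Hz tone lies in the alpha band
```

In practice an FFT library and windowing would be used, but the band-summing step is the same.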
Abstract: This study investigates the performance of radial basis function networks (RBFN) in forecasting the monthly CO2 emissions of an electric power utility. We also propose a method for input variable selection based on identifying the general relationships between groups of input candidates and the output. The effect that each input has on the forecasting error is examined by removing all inputs except the variable to be investigated from its group, calculating the network's parameters and performing the forecast. Finally, the new forecasting error is compared with that of the reference model. Eight input variables were identified as the most relevant, significantly fewer than in our reference model with 30 input variables. The simulation results demonstrate that the model with the 8 inputs selected using the method introduced in this study forecasts as accurately as the reference model, while also being the most parsimonious.
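The input-selection loop described above can be sketched as follows, under the assumption that a candidate is kept when the reduced model forecasts no worse than the reference; `forecast_error` is a placeholder standing in for training the RBFN and measuring its forecast error, and all variable names and numbers are made up.

```python
def forecast_error(inputs):
    # Placeholder error model: each variable contributes a fixed error
    # (purely illustrative numbers standing in for a trained RBFN's error).
    contribution = {"temp": 1.0, "load": 2.0, "price": 4.0, "wind": 5.0}
    return sum(contribution[i] for i in inputs) / len(inputs)

def select_inputs(groups, reference_inputs):
    """For each candidate, keep only that variable from its group and
    compare the resulting forecast error against the full reference model."""
    ref_error = forecast_error(reference_inputs)
    selected = []
    for group in groups:
        for candidate in group:
            reduced = [v for v in reference_inputs
                       if v == candidate or v not in group]
            if forecast_error(reduced) <= ref_error:
                selected.append(candidate)
    return selected

groups = [["temp", "load"], ["price", "wind"]]
print(select_inputs(groups, ["temp", "load", "price", "wind"]))
```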
Abstract: Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure, and you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection has a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural-network-based studies. In this study we explore the reasons for those disappointing results and implement different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies that used traditional metrics. To enable these comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part was collected according to the new metrics at a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on the Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is that data collection requires time and care. To make more thorough use of the samples collected, the k-fold cross-validation method is also applied. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied successfully in software cost estimation studies.
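The k-fold cross-validation step mentioned above can be sketched as a simple partitioning loop; the toy data is illustrative, and in practice each train split would be used to fit the MLP while the held-out fold measures estimation error.

```python
def k_fold_split(data, k):
    """Yield (train, test) partitions for k-fold cross-validation."""
    fold_size = len(data) // k
    for i in range(k):
        test = data[i * fold_size:(i + 1) * fold_size]   # held-out fold
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        yield train, test

data = list(range(10))          # stand-in for the collected project samples
folds = list(k_fold_split(data, 5))
print(len(folds))               # 5
print(folds[0][1])              # [0, 1]
```

Every sample appears in a test fold exactly once, which is what lets a small, hard-won dataset support a meaningful accuracy estimate.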
Abstract: The incorporation of renewable energy sources for sustainable electricity production is taking on a more prominent role in electric power systems. It is therefore inevitable that the characteristics of future power networks, such as their stability, will be influenced by the features of these sustainable energy sources. One of the distinctive attributes of sustainable energy sources is their stochastic behavior. This paper investigates the impact of this stochastic behavior on small-disturbance rotor angle stability in future electric power networks. Considering the various types of renewable energy sources and the vast variety of system configurations, sensitivity analysis offers an efficient route towards generalizing the effects of new energy sources on stability. In this paper, the definition of small-disturbance angle stability for future power systems and an iterative-stochastic way of analyzing it are presented. The effects of system parameters on this type of stability are also described by performing a sensitivity analysis on an electric power test system.
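As a toy illustration of the sensitivity analysis described above, the sketch below computes finite-difference sensitivities of the damping ratio of the classical single-machine swing equation; the parameter values (inertia M, damping D, synchronizing coefficient K) are illustrative assumptions, and the paper's system model is far more detailed than this.

```python
import math

def damping_ratio(M, D, K):
    """Damping ratio of the linearized swing equation M*x'' + D*x' + K*x = 0."""
    return D / (2 * math.sqrt(K * M))

def sensitivity(param, base, h=1e-6):
    """Central finite-difference sensitivity of the damping ratio
    with respect to one parameter."""
    up, down = dict(base), dict(base)
    up[param] += h
    down[param] -= h
    return (damping_ratio(**up) - damping_ratio(**down)) / (2 * h)

base = {"M": 6.0, "D": 1.0, "K": 1.5}  # illustrative per-unit values
for p in base:
    print(p, round(sensitivity(p, base), 4))
```

A positive sensitivity (here, to D) means increasing that parameter improves small-disturbance damping, while negative sensitivities (to M and K) mean the opposite, which is the kind of conclusion a system-level sensitivity analysis generalizes.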
Abstract: The protection of parallel transmission lines has been a challenging task due to mutual coupling between the adjacent circuits of the line. This paper presents a novel scheme for the detection and classification of faults on parallel transmission lines. The proposed approach uses a combination of wavelet transform and neural network to solve the problem. While the wavelet transform is a powerful mathematical tool that can be employed as a fast and very effective means of analyzing power system transient signals, an artificial neural network has the ability to classify non-linear relationships between measured signals by identifying the different patterns of the associated signals. The proposed algorithm consists of time-frequency analysis of fault-generated transients using the wavelet transform, followed by pattern recognition using an artificial neural network to identify the type of fault. MATLAB/Simulink is used to generate fault signals and verify the correctness of the algorithm. The adaptive discrimination scheme is tested by simulating different types of fault and varying the fault resistance, fault location and fault inception time on a given power system model. The simulation results show that the proposed fault diagnosis scheme is able to classify all the faults on the parallel transmission line rapidly and correctly.
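The time-frequency analysis step can be illustrated with one level of the Haar wavelet transform, the simplest wavelet decomposition; the abstract does not specify the mother wavelet or decomposition level used, so both the wavelet and the sample signal below are assumptions.

```python
def haar_step(signal):
    """One level of Haar decomposition: the approximation coefficients
    capture the low-frequency trend, the detail coefficients capture the
    high-frequency transients that fault signatures live in."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

sig = [4, 6, 10, 12, 8, 6, 5, 5]   # stand-in for a sampled fault transient
a, d = haar_step(sig)
print(a)  # [5.0, 11.0, 7.0, 5.0]
print(d)  # [-1.0, -1.0, 1.0, 0.0]
```

In a scheme like the one described, the detail coefficients (or energies derived from them) would form the feature vector fed to the neural network for fault-type classification.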
Abstract: Current technological advances pale in comparison to the changes in social behavior and 'sense of place' that the Internet has enabled since it arrived on the scene. Today's students view the Internet as both a source of entertainment and an educational tool. The development of virtual environments is a conceptual framework that needs to be addressed by educators, and it is important that they become familiar with who these virtual learners are and how they are motivated to learn. Massively multiplayer online role-playing games (MMORPGs), if well designed, could become the vehicle of choice for delivering learning content. We suggest that such games, in order to accomplish these goals, must begin with well-established instructional design principles that are co-aligned with established principles of video game design, and that they then have the opportunity to provide an instructional model of significant prescriptive power. The authors believe that game designers need to take advantage of the natural motivation player-learners have for playing games by developing them in such a way as to promote intrinsic motivation, content learning, transfer of knowledge, and naturalization.
Abstract: The goal of the study reported in this paper was to determine whether Ambient Occlusion Shading (AOS) has a significant effect on users' perception of American Sign Language (ASL) finger spelling animations. Seventy-one (71) subjects participated in the study; all subjects were fluent in ASL. The participants were asked to watch forty (40) sign language animation clips representing twenty (20) finger-spelled words. Twenty (20) clips did not show ambient occlusion, whereas the other twenty (20) were rendered using ambient occlusion shading. After viewing each animation, subjects were asked to type the word being finger-spelled and rate its legibility. Findings show that the presence of AOS had a significant effect on the subjects' perception of the signed words. Subjects were able to recognize the animated words rendered with AOS with a higher level of accuracy, and the legibility ratings of the animations showing AOS were consistently higher across subjects.
Abstract: The bridge is an architectural symbol in Iran, as are the badgir (wind catcher), fire temples, arches and vaults. From the earliest ages, therefore, bridge construction in Iran has been intertwined with architecture, social customs, alms-giving, charity and holiness. Since the times of the Medes, Achaemenids, Parthians and Sassanids, when bridge construction became inseparable from social life and architecture, bridges and dams have been given holy names, as with the Dokhtar castle and the Dokhtar bridges. This tradition continued even after Islam: whenever Iranians were free from political strife and the security of the roads was established, bridge construction also prospered. In ancient times bridge construction passed through its growth and completion process, and in the Sassanid era it in some ways reached a peak of art and glory; after Islam, especially during the 4th century (Islamic calendar), it went through another glorious period, and in the Safavid era it reached exceptional glory and magnificence with the construction of the glorious bridges over the Zayandeh Roud River in Isfahan.
Through their combined styles and their adaptability as bridge barriers, some of these bridges developed into magnificent constructions. The sustainable structures mentioned above were built for various reasons: connecting the two sides of a river, storing water, controlling floods, using water energy to operate water mills, channeling streams for farm use, and creating recreational places for people. The studies of these bridges reveal that their design and construction took many technological factors into consideration, such as flood levels in the rivers, the hydraulics and hydrology of the rivers and bridges, geology, foundations, structure, construction materials, and the adoption of appropriate execution methods, all of which are analyzed in this article.
Abstract: The aim of this paper is to continue the study of (T1, T2)-semi star generalized closed sets by introducing the concepts of (T1, T2)-semi star generalized locally closed sets and to study their basic properties in bitopological spaces.
Abstract: Automatic currency note recognition invariably depends on the currency note characteristics of a particular country, and the extraction of features directly affects the recognition ability. Sri Lanka has not previously been involved in research or implementation of this kind. The proposed system, "SLCRec", offers a solution focused on minimizing the false rejection of notes. Sri Lankan currency notes undergo severe changes in image quality during usage. Hence a special linear transformation function is adopted to wipe out noise patterns from the background without affecting the notes' characteristic images, and to recover the images of interest. The transformation maps the original gray-scale range into the smaller range of 0 to 125. Applying edge detection after the transformation provides better robustness to noise and a fair representation of edges for both new and old, damaged notes. A three-layer back-propagation neural network is presented with the number of edges detected in row order of the notes, and classification is performed for the four classes of interest: the 100, 500, 1000 and 2000 rupee notes. The experiments showed good classification results and proved that the proposed methodology is capable of separating the classes properly under varying image conditions.
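The gray-level compression described above can be sketched as a linear mapping of the 0-255 range into 0-125; the exact transformation function used by SLCRec is not given in the abstract, so a simple linear scaling is assumed here.

```python
def compress_gray(pixel, in_max=255, out_max=125):
    """Linearly map a gray level from [0, in_max] to [0, out_max].
    Compressing the dynamic range flattens background noise patterns
    before edge detection (the motivation given in the abstract)."""
    return round(pixel * out_max / in_max)

row = [0, 64, 128, 255]
print([compress_gray(p) for p in row])  # [0, 31, 63, 125]
```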
Abstract: This paper proposes a new approach for image encryption
using a combination of different permutation techniques.
The main idea behind the present work is that an image can be
viewed as an arrangement of bits, pixels and blocks. The intelligible
information present in an image is due to the correlations among the
bits, pixels and blocks in a given arrangement. This perceivable information
can be reduced by decreasing the correlation among the bits,
pixels and blocks using certain permutation techniques. This paper
presents an approach for a random combination of the aforementioned
permutations for image encryption. From the results, it is observed that the permutation of bits is effective in significantly reducing the correlation, thereby decreasing the perceptual information, whereas the permutations of pixels and blocks are better at providing a higher level of security than bit permutation. A random combination method employing all three techniques is thus observed to be useful for tactical security applications, where protection is needed only against a casual observer.
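The core of each permutation stage can be sketched as a keyed shuffle. The sketch below permutes a pixel sequence and inverts the permutation; the key-seeded pseudorandom shuffle is an assumption about the key schedule, and the same mechanic would apply at the bit and block levels.

```python
import random

def permute(values, key):
    """Permute a sequence with a key-seeded pseudorandom shuffle."""
    order = list(range(len(values)))
    random.Random(key).shuffle(order)   # the key determines the permutation
    return [values[i] for i in order], order

def unpermute(values, order):
    """Invert the permutation (decryption side regenerates `order`
    from the shared key)."""
    out = [0] * len(values)
    for pos, i in enumerate(order):
        out[i] = values[pos]
    return out

pixels = [10, 20, 30, 40, 50]
scrambled, order = permute(pixels, key=1234)
print(unpermute(scrambled, order) == pixels)  # True
```

Permutation only rearranges values, so histograms are preserved; that is why the abstract positions the method for tactical security against a casual observer rather than against cryptanalysis.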
Abstract: Design for cost (DFC) is a method that reduces life cycle cost (LCC) from the designer's perspective. A multiple domain feature mapping (MDFM) methodology for DFC is given: using MDFM, design features can be used to estimate the LCC. From the DFC perspective, the design features of family cars were obtained, such as overall dimensions, engine power and emission volume. At the conceptual design stage, the cars' LCC was estimated using both the back-propagation (BP) artificial neural network (ANN) method and case-based reasoning (CBR). Hamming distance was used to measure the similarity among cases in the CBR method, while the Levenberg-Marquardt (LM) algorithm and a genetic algorithm (GA) were used in the ANN. The differences between the CBR and ANN LCC estimation models are discussed: each method has its own shortcomings, and combining ANN and CBR improves the accuracy of the results. First, the ANN is used to select the design features that affect LCC. Second, the LCC estimates produced by the ANN are used to raise the accuracy of the LCC estimation in the CBR method. Third, the ANN is used to estimate the LCC errors and correct the errors in the CBR estimates when their accuracy is insufficient. Finally, LCC estimation cases for economy family cars and a sport utility vehicle (SUV) are given using this hybrid approach combining ANN and CBR.
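The Hamming-space similarity used for case retrieval can be sketched as the fraction of discretized design features on which two cases agree; the feature names and the tiny case base below are illustrative, not the paper's design data.

```python
def hamming_similarity(case_a, case_b):
    """Fraction of discretized design features on which two cases agree
    (1 minus the normalized Hamming distance)."""
    matches = sum(a == b for a, b in zip(case_a, case_b))
    return matches / len(case_a)

case_base = {
    "sedan": ("small", "low-power", "low-emission"),
    "suv":   ("large", "high-power", "high-emission"),
}
query = ("small", "low-power", "high-emission")
best = max(case_base, key=lambda k: hamming_similarity(case_base[k], query))
print(best)  # sedan
```

In the CBR step, the LCC of the retrieved nearest case would seed the estimate for the new design, which the ANN correction stage then refines.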
Abstract: In this paper, we propose a low-MAC FEC controller for the practical implementation of JPEG2000 image transmission over IEEE 802.15.4. The proposed low-MAC FEC controller has a very small hardware size and requires little computation to estimate the channel state. Because of this advantage, it is suitable for IEEE 802.15.4 devices, which have to operate for more than a year on battery power. For image transmission, we integrate the low-MAC FEC controller and an RCPC coder in the sensor node of an LR-WPAN. The modified sensor node is only 3% larger in hardware size than a conventional ZigBee sensor node.
Abstract: Variable channel conditions in underwater networks, and variable distances between sensors due to water currents, lead to a variable bit error rate (BER). This variability in BER has great effects on the energy efficiency of the error correction techniques used. In this paper an energy-efficient adaptive hybrid error correction technique (AHECT) is proposed. AHECT adaptively changes the error correction technique from pure retransmission (ARQ) in low BER cases to a hybrid technique with variable encoding rates (ARQ & FEC) in high BER cases. An adaptation algorithm is proposed that depends on a precalculated packet acceptance rate (PAR) look-up table, the current BER, the packet size and the error correction technique in use. Based on this adaptation algorithm, a 3-bit feedback field is periodically added to the acknowledgment packet to state which error correction technique is suitable for the current channel conditions and distance. Comparative studies were carried out between this technique and other techniques, and the results show that AHECT is more energy efficient and has a higher probability of success than all of those techniques.
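The adaptation step can be sketched as a table lookup followed by a threshold decision; the PAR values, the threshold and the 3-bit codes below are illustrative assumptions, not measured results from the paper.

```python
PAR_TABLE = {  # (BER bucket, packet size) -> PAR under plain ARQ; made-up values
    ("low", 256): 0.95,
    ("high", 256): 0.40,
}

def choose_scheme(ber_bucket, packet_size, par_threshold=0.8):
    """Return a 3-bit feedback code naming the scheme to use for the
    current channel state."""
    par = PAR_TABLE[(ber_bucket, packet_size)]
    if par >= par_threshold:
        return 0b000   # pure ARQ: cheapest when packets mostly get through
    return 0b011       # hybrid ARQ & FEC with a stronger encoding rate

print(choose_scheme("low", 256))   # 0
print(choose_scheme("high", 256))  # 3
```

The chosen code would ride back in the acknowledgment packet, so the sender switches schemes without any extra control traffic.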
Abstract: Natural convection heat transfer from a heated horizontal semi-circular cylinder (flat surface upward) has been investigated over ranges of Grashof number and Prandtl number. The governing partial differential equations (continuity, Navier-Stokes and energy equations) have been solved numerically using a finite volume formulation. In addition, the role of the type of thermal boundary condition imposed at the cylinder surface, namely constant wall temperature (CWT) and constant heat flux (CHF), is explored. The resulting flow and temperature fields are visualized in terms of the streamline and isotherm patterns in the proximity of the cylinder. The flow remains attached to the cylinder surface over the range of conditions spanned here, except at certain combinations of Grashof and Prandtl numbers, for which a separated flow region is observed when the constant wall temperature condition is prescribed on the surface of the cylinder. The heat transfer characteristics are analyzed in terms of the local and average Nusselt numbers. The maximum value of the local Nusselt number always occurs at the corner points, whereas the minimum is found at the rear stagnation point on the flat surface. Overall, the average Nusselt number increases with Grashof number and/or Prandtl number, in accordance with scaling considerations. The numerical results are used to develop simple correlations as functions of the Grashof and Prandtl numbers, thereby enabling interpolation of the present numerical results for intermediate values of the Prandtl or Grashof numbers for both thermal boundary conditions.
Abstract: Data mining can be described as a technique for extracting information from data. It is the process of obtaining hidden information and turning it into qualified knowledge by statistical and artificial intelligence techniques. One of its application areas is medicine, where decision support systems for diagnosis are formed by discovering meaningful information in given medical data. In this study a decision support system for the diagnosis of illness is built that makes use of data mining and three different artificial intelligence classifier algorithms, namely the Multilayer Perceptron, the Naive Bayes classifier and J48. The Pima Indians dataset of the UCI Machine Learning Repository was used. This dataset includes the urinary and blood test results of 768 patients, and these test results consist of 8 different features. The classification results obtained were compared with those of previous studies, and suggestions for future studies are presented.
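One of the three classifiers mentioned above, the Naive Bayes classifier, can be sketched in its Gaussian form; the study presumably uses an off-the-shelf implementation on the Pima Indians data, so the two-feature toy data below is purely illustrative.

```python
import math

def fit(rows, labels):
    """Per class, store the (mean, variance) of each feature."""
    model = {}
    for c in set(labels):
        cols = list(zip(*[r for r, l in zip(rows, labels) if l == c]))
        stats = []
        for v in cols:
            mu = sum(v) / len(v)
            var = sum((x - mu) ** 2 for x in v) / len(v) + 1e-9  # avoid 0
            stats.append((mu, var))
        model[c] = stats
    return model

def predict(model, row):
    """Pick the class with the highest Gaussian log-likelihood."""
    def loglik(c):
        return sum(-0.5 * math.log(2 * math.pi * var)
                   - (x - mu) ** 2 / (2 * var)
                   for x, (mu, var) in zip(row, model[c]))
    return max(model, key=loglik)

X = [(1.0, 1.1), (1.2, 0.9), (3.0, 3.2), (2.9, 3.1)]
y = [0, 0, 1, 1]
model = fit(X, y)
print(predict(model, (1.1, 1.0)))  # 0
print(predict(model, (3.1, 3.0)))  # 1
```

The "naive" part is the product over features inside `loglik`: each feature is treated as conditionally independent given the class, which keeps the model tractable on small medical datasets.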
Abstract: The aim of this research is to determine how pre-service Turkish teachers perceive themselves in terms of problem-solving skills. Students attending the Department of Turkish Language Teaching of the Gazi University Education Faculty in the 2005-2006 academic year constitute the study group (n = 270) of this research, in which a survey model was utilized. Data were obtained with the Problem Solving Inventory developed by Heppner & Peterson and a Personal Information Form. Within the settings of this research, the Cronbach alpha reliability coefficient of the scale was found to be .87. In addition, the reliability coefficient obtained by the split-half technique, which splits the odd- and even-numbered items of the scale, was found to be r = .81 (split-half reliability). The findings of the research revealed that the pre-service Turkish teachers were sufficiently qualified in problem-solving skills, and statistical significance was found in favor of male candidates for the "gender" variable. For the "grade" variable, statistical significance was found in favor of 4th graders.
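The split-half reliability mentioned above (odd- versus even-numbered items) can be sketched as a Pearson correlation between the two half-scores per respondent; the item scores below are made-up illustration data, not the study's responses.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def split_half(score_matrix):
    """Correlate each respondent's odd-item total with the even-item total."""
    odd = [sum(row[0::2]) for row in score_matrix]
    even = [sum(row[1::2]) for row in score_matrix]
    return pearson(odd, even)

scores = [          # one row of item scores per respondent (illustrative)
    [4, 5, 3, 4, 5, 4],
    [2, 2, 3, 2, 1, 2],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2],
]
print(round(split_half(scores), 2))  # 0.99
```

The reported r = .81 would come from this computation on the full 270-respondent inventory (typically followed by a Spearman-Brown correction for full-test length).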
Abstract: Modeling a distributed system allows us to represent its whole functionality. A working system instance, however, rarely exercises the whole functionality represented by the model; usually some parts of this functionality need only be accessible periodically. A reporting system based on the data warehouse concept seems to be an intuitive example of a system in which some functionality is required only from time to time. When analyzing the enterprise risk associated with periodical changes in system functionality, we should consider not only the inaccessibility of components (objects) but also of their functions (methods), and the impact of such a situation on the system functionality from the business point of view. In this paper we suggest that these risk attributes should be estimated from the risk attributes specified at the requirements level (Use Cases in the UML model), on the basis of information about the structure of the model (presented at other levels of the UML model). We argue that it is desirable to consider the influence of periodical changes in requirements on the enterprise risk estimation. Finally, a proposal for such a solution based on the UML system model is presented.