Abstract: In this paper, a neural tree (NT) classifier having a
simple perceptron at each node is considered. A new concept for
building a balanced tree is applied in the learning algorithm of the
tree. At each node, if the perceptron classification is inaccurate and
unbalanced, the perceptron is replaced by a new one. This separates
the training set in such a way that approximately equal numbers of
patterns fall into each of the classes. Moreover, each perceptron is
trained only for the classes present at its node, ignoring the other
classes. Splitting nodes are employed in the neural tree architecture
to divide the training set when the current perceptron node repeats
the classification of its parent node. A new error function based
on the depth of the tree is introduced to reduce the computational
time for training a perceptron. Experiments are performed to
evaluate the efficiency of the approach, and encouraging results are
obtained in terms of accuracy and computational cost.
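The balanced-split idea above can be sketched as follows. This is a minimal illustration rather than the authors' algorithm: the function names, the balance tolerance, and the median-projection fallback are assumptions made for the example.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Simple perceptron for a binary split at one tree node; y in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:      # misclassified pattern
                w, b = w + lr * yi * xi, b + lr * yi
    return w, b

def split_node(X, y, balance_tol=0.25):
    """Partition the node's patterns; if the perceptron split is too
    unbalanced, replace it by a balanced (median-projection) split so
    that roughly equal numbers of patterns go to each child."""
    w, b = train_perceptron(X, y)
    side = (X @ w + b) >= 0
    frac = side.mean()
    if frac < balance_tol or frac > 1 - balance_tol:
        proj = X @ w if np.any(w) else X[:, 0]
        side = proj >= np.median(proj)      # ~half the patterns per child
    return side                             # True -> right child
```

Splitting at the median projection guarantees the balance property at the cost of some purity, which matches the trade-off the abstract describes.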
Abstract: The ability of the brain to organize information and generate the functional structures we use to act, think and communicate is a common and easily observable natural phenomenon. In object-oriented analysis, these structures are represented by objects. Objects have been extensively studied and documented, but the process that creates them is not understood. In this work, a new class of discrete, deterministic, dissipative, host-guest dynamical systems is introduced. The new systems have extraordinary self-organizing properties. They can host information representing other physical systems and generate the same functional structures as the brain does. A simple mathematical model is proposed. The new systems are easy to simulate by computer, and the measurements needed to confirm the assumptions are abundant and readily available. Experimental results presented here confirm the findings. Applications are many, but among the most immediate are object-oriented engineering, image and voice recognition, search engines, and neuroscience.
Abstract: Support vector regression (SVR) has been regarded
as a state-of-the-art method for approximation and regression. The
importance of the kernel function, the so-called admissible support
vector kernel (SV kernel) in SVR, has motivated many studies
on its composition. The Gaussian kernel (RBF) is regarded as the
"best" choice of SV kernel by non-experts in SVR, although there
is no evidence to support this beyond its superior performance in
some practical applications. It is well known that a reproducing
kernel (R.K.) is also an SV kernel, and that it possesses many
important properties, e.g., positive definiteness, the reproducing
property, and the ability to compose complex R.K.s from simpler
ones. However, there are a limited number of R.K.s with explicit
forms and consequently few quantitative comparison studies in
practice. In this paper, two R.K.s, i.e., SV kernels, composed by
the sum and product of a translation-invariant kernel in a Sobolev
space are proposed. An exploratory study of the performance of
SVR based on the general R.K. is presented through a systematic
comparison to that of the RBF kernel using multiple criteria and
synthetic problems. The results show that the R.K. is an equivalent
or even better SV kernel than the RBF for problems with more
input variables (more than 5, especially more than 10) and higher
nonlinearity.
Abstract: The impact of noise on quality of life has become an
important aspect of both urban and environmental policy throughout
Europe and in Turkey. Concern over the quality of urban
environments, including noise levels and the declining quality of
green space, has grown over the past decade, with increasing
emphasis on designing livable and sustainable communities.
According to the World Health Organization, noise pollution is the
third most hazardous type of environmental pollution, preceded
only by air (gas emission) and water pollution. The research was
carried out in two phases: in the first stage, noise and the plant
types providing noise absorption were evaluated through a literature
study, and in the second stage, selected species (Juniperus
horizontalis L., Spirea vanhouetti Briot., Cotoneaster dammerii
C.K., Berberis thunbergii D.C., Pyracantha coccinea M., etc.) were
chosen for the city of Konya. Trials were conducted on the highway
of Konya. The largest noise reductions compared to the control
were 6.3 dB(A), 4.9 dB(A), and 6.2 dB(A) for the groups of bushes
located 7 m, 11 m, and 20 m from the source, with plant widths of
5 m, 9 m, and 20 m, respectively. In this paper, definitions of noise
and its sources are given, together with the adverse effects of noise
and the precautions to be taken against it. Plantation design
approaches and suggestions concerning the species to be used,
which are suited to roadsides, were developed to discuss the role
and function of plant material in reducing traffic noise.
Abstract: In the literature of information theory, there is a
need to compare different measures of fuzzy entropy, and this in
turn gives rise to the need for normalized measures of fuzzy
entropy. In this paper, we discuss this need and develop some
normalized measures of fuzzy entropy. It is also desirable to
maximize entropy and to minimize directed divergence or distance.
Keeping this idea in mind, we explain a method of optimizing
different measures of fuzzy entropy.
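As a concrete illustration of normalization, the classical De Luca-Termini fuzzy entropy can be divided by its maximum value n·ln 2 (attained when every membership equals 0.5) so that the measure lies in [0, 1] and fuzzy sets of different sizes become comparable; the paper's own normalized measures may differ.

```python
import numpy as np

def fuzzy_entropy(mu):
    """De Luca-Termini fuzzy entropy of a fuzzy set with memberships mu."""
    mu = np.clip(np.asarray(mu, dtype=float), 1e-12, 1 - 1e-12)
    return float(-(mu * np.log(mu) + (1 - mu) * np.log(1 - mu)).sum())

def normalized_fuzzy_entropy(mu):
    """Divide by the maximum n*ln(2), attained when every mu_i = 0.5,
    so the normalized measure lies in [0, 1]."""
    n = len(mu)
    return fuzzy_entropy(mu) / (n * np.log(2))
```

A crisp set (all memberships 0 or 1) scores 0, the maximally fuzzy set scores 1, and everything else falls strictly in between.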
Abstract: This paper describes the gain and noise performance
of a discrete Raman amplifier as a function of fiber length and
signal input power for different pump configurations. Simulations
were performed with the OptiSystem 7.0 software at a signal
wavelength of 1550 nm and a pump wavelength of 1450 nm. The
results showed that the gain is higher in bidirectional pumping than
in counter pumping, that the gain changes with increasing fiber
length while the noise figure remains the same for short fiber
lengths, and that the gain saturates differently for different pumping
configurations at different fiber lengths and signal power levels.
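The qualitative behavior described (gain growing with fiber length and eventually saturating) can be reproduced with the standard coupled power equations for co-directional Raman pumping. The coefficients below are illustrative textbook-style values, not those used in the OptiSystem simulation, and only the forward-pumped case is sketched.

```python
import numpy as np

def raman_net_gain(Ps0, Pp0, L, CR=0.6e-3, alpha_s=4.6e-5,
                   alpha_p=5.8e-5, lam_ratio=1550.0 / 1450.0, n=20000):
    """Euler integration of the coupled signal/pump power equations for
    co-directional Raman pumping. CR = gR/Aeff in 1/(W*m); the losses
    correspond to roughly 0.2 and 0.25 dB/km. Returns net signal gain
    in dB over a fiber of length L (meters)."""
    dz = L / n
    Ps, Pp = Ps0, Pp0
    for _ in range(n):
        dPs = CR * Pp * Ps - alpha_s * Ps               # Raman gain - loss
        dPp = -lam_ratio * CR * Pp * Ps - alpha_p * Pp  # depletion + loss
        Ps, Pp = Ps + dz * dPs, Pp + dz * dPp
    return 10.0 * np.log10(Ps / Ps0)
```

For a 1 uW signal and a 0.5 W pump over 10 km this yields a net gain of several dB, and sweeping `L` shows the gain saturating once pump loss outpaces the Raman transfer.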
Abstract: Testing is an activity required in both the
development and the maintenance phases of the software
development life cycle, in which Integration Testing (IT) is an
important activity. Integration testing is based on the specification
and functionality of the software and thus can be classified as a
black-box testing technique. The purpose of integration testing is
to test the interactions between software components. In function
or system testing, the concern is with overall behavior and whether
the software meets its functional specifications or performance
characteristics, or how well the software and hardware work
together. This explains the importance and necessity of IT, in which
the emphasis is on the interactions between modules and their
interfaces. Software errors should be discovered early during IT to
reduce the cost of correction. This paper introduces a new type of
integration error and presents an overview of integration testing
techniques, comparing each technique and identifying which
technique detects what type of error.
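A minimal sketch of one common integration-testing practice: testing the interface between two modules with the callee replaced by a stub, as done in top-down integration. The modules and values here are hypothetical, not from the paper.

```python
# Hypothetical units: an order module whose total depends on a pricing
# module that is not yet integrated, so a stub stands in for it.

def compute_total(items, pricing):
    """Module under test: depends only on the pricing component's interface."""
    return sum(pricing.unit_price(name) * qty for name, qty in items)

class PricingStub:
    """Stub replacing the real pricing module during top-down integration."""
    def __init__(self, prices):
        self.prices = prices
        self.calls = []                     # record interface usage

    def unit_price(self, name):
        self.calls.append(name)             # interactions can be asserted
        return self.prices[name]

# Integration test: exercises the interface between the two modules,
# not the internal logic of either one.
stub = PricingStub({"apple": 2.0, "pear": 3.0})
assert compute_total([("apple", 2), ("pear", 1)], stub) == 7.0
assert stub.calls == ["apple", "pear"]
```

Because the stub records every call, interface errors (wrong argument, missing call, wrong call order) surface here rather than later in system testing.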
Abstract: Information and Communication Technologies (ICT) in mathematical education is a very active field of research and innovation, where learning is understood to be meaningful and grasping multiple linked representation rather than rote memorization, a great amount of literature offering a wide range of theories, learning approaches, methodologies and interpretations, are generally stressing the potentialities for teaching and learning using ICT. Despite the utilization of new learning approaches with ICT, students experience difficulties in learning concepts relevant to understanding mathematics, much remains unclear about the relationship between the computer environment, the activities it might support, and the knowledge that might emerge from such activities. Many questions that might arise in this regard: to what extent does the use of ICT help students in the process of understanding and solving tasks or problems? Is it possible to identify what aspects or features of students' mathematical learning can be enhanced by the use of technology? This paper will highlight the interest of the integration of information and communication technologies (ICT) into the teaching and learning of mathematics (quadratic functions), it aims to investigate the effect of four instructional methods on students- mathematical understanding and problem solving. Quantitative and qualitative methods are used to report about 43 students in middle school. Results showed that mathematical thinking and problem solving evolves as students engage with ICT activities and learn cooperatively.
Abstract: Although backpropagation ANNs generally predict
better than decision trees do for pattern classification problems, they
are often regarded as black boxes, i.e., their predictions cannot be
explained as those of decision trees. In many applications, it is
desirable to extract knowledge from trained ANNs for the users to
gain a better understanding of how the networks solve the problems.
A new rule extraction algorithm, called rule extraction from artificial
neural networks (REANN) is proposed and implemented to extract
symbolic rules from ANNs. A standard three-layer feedforward ANN
is the basis of the algorithm. A four-phase training algorithm is
proposed for backpropagation learning. Explicitness of the extracted
rules is supported by comparing them to the symbolic rules generated
by other methods. Extracted rules are comparable with other methods
in terms of number of rules, average number of conditions for a rule,
and predictive accuracy. Extensive experimental studies on several
benchmark classification problems, such as breast cancer, iris,
diabetes, and season classification, demonstrate the effectiveness
of the proposed approach, which shows good generalization ability.
Abstract: The removal of hydrogen sulphide is required for reasons of health, odour, safety and corrosivity. The means of removing hydrogen sulphide depend mainly on its concentration and on the kind of medium to be purified. The paper deals with a method of hydrogen sulphide removal from air by its catalytic oxidation to elemental sulphur with the use of an Fe-EDTA complex. The possibility of obtaining fibrous filtering materials able to remove small concentrations of H2S from air was described. The base of these materials is a fibrous ion exchanger with the Fe(III)-EDTA complex immobilized on its functional groups. The complex of trivalent iron converts hydrogen sulphide to elemental sulphur. The bivalent iron formed in the reaction is oxidized by atmospheric oxygen, so the complex of trivalent iron is continuously regenerated and the overall process can be regarded as pseudocatalytic. In the present paper, the properties of several fibrous catalysts based on ion exchangers of different chemical nature (weak acid, weak base and strong base) were described. It was shown that the main parameters affecting the process of catalytic oxidation are: the concentration of hydrogen sulphide in the air, the relative humidity of the purified air, the process time and the content of the Fe-EDTA complex in the fibres. The data presented show that filtering layers with an anion exchange package are much more active in the catalytic removal of hydrogen sulphide than cation exchangers and inert materials. In addition to the nature of the fibres, relative air humidity is a critical factor determining the efficiency of the material in purifying air of H2S. It was proved that the most promising carriers of the Fe-EDTA catalyst for hydrogen sulphide oxidation are Fiban A-6 and Fiban AK-22 fibres.
Abstract: As the majority of faults are found in a few modules
of a software system, there is a need to investigate the modules
that are severely affected compared to other modules, so that
proper maintenance can be done in time, especially for critical
applications. Neural networks have already been applied in
software engineering to build reliability growth models and to
predict gross change or reusability metrics. Neural networks are
non-linear, sophisticated modeling techniques that are able to
model complex functions, and they are used when the exact nature
of the relationship between inputs and outputs is not known. A key
feature is that they learn this relationship through training. In the
present work, various neural network based techniques are explored
and a comparative analysis is performed for predicting the level of
maintenance needed, by predicting the severity level of faults
present in NASA's public domain defect dataset. The different
algorithms are compared on the basis of Mean Absolute Error, Root
Mean Square Error and accuracy values. It is concluded that the
Generalized Regression Network is the best algorithm for
classifying software components into different levels of severity of
fault impact. The algorithm can be used to develop a model for
identifying modules that are heavily affected by faults.
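The comparison criteria named above can be computed as follows; a straightforward sketch, with rounding used as a simple stand-in for mapping continuous network outputs to severity classes.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    """Root Mean Square Error."""
    d = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(d ** 2)))

def accuracy(y_true, y_pred):
    """Fraction of predictions landing in the correct severity class,
    with rounding as a simple stand-in for class assignment."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.round(y_pred) == y_true))
```

Ranking candidate algorithms then reduces to evaluating these three values for each model on a held-out set.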
Abstract: Inter-organizational Workflow (IOW) is commonly
used to support collaboration between the heterogeneous and
distributed business processes of different autonomous organizations
in order to achieve a common goal. E-government is considered an
application field of IOW. The coordination of the different
organizations is the fundamental problem in IOW and remains a
major cause of failure in e-government projects. In this paper, we
introduce a new coordination model for IOW that improves
collaboration between government administrations and respects the
IOW requirements of e-government. For this purpose, we adopt a
multi-agent approach, which deals more easily with the
characteristics of inter-organizational digital government:
distribution, heterogeneity and autonomy. Our model also integrates
different technologies to deal with semantic and technological
interoperability. Moreover, it preserves the existing systems of
government administrations by offering distributed coordination
based on interface communication. This is especially applicable in
developing countries, where administrations are not necessarily
equipped with workflow systems. The use of our coordination
techniques allows an easier and lower-cost migration to an
e-government solution. To illustrate the applicability of the proposed
model, we present a case study of identity card creation in Tunisia.
Abstract: In this paper, a PSO-based approach is proposed to
derive a digital controller for redesigned digital systems having an
interval plant, based on resemblance of the extremal gain/phase
margins (GM/PM). By combining the interval plant and a controller
as an interval system, the extremal GM/PM associated with the loop
transfer function can be obtained. The design problem is then
formulated as an optimization problem over an aggregated error
function revealing the deviation in the extremal GM/PM between
the redesigned digital system and its continuous counterpart, which
is subsequently optimized by the proposed PSO to obtain an optimal
set of parameters for the digital controller. Computer simulations
have shown that the frequency responses of the redesigned digital
system having an interval plant bear a better resemblance to its
continuous-time counterpart when the PSO-derived digital controller
is incorporated than when existing open-loop discretization methods
are used.
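A generic global-best PSO of the kind used here can be sketched as follows. The inertia and acceleration coefficients are common defaults, and the quadratic objective in the usage note merely stands in for the aggregated GM/PM error function, which the abstract does not specify in closed form.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best particle swarm optimization (generic sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, f(g)
```

For instance, `pso_minimize(lambda k: (k[0] - 1) ** 2 + (k[1] + 2) ** 2, dim=2)` drives the error close to zero; in the paper's setting `k` would be the digital controller's parameter vector and `f` the aggregated GM/PM deviation.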
Abstract: The recent global financial problem urges governments
to play a role in stimulating the economy, since the private sector
has little ability to purchase during a recession. A key question is
whether increased government spending crowds out private
consumption and whether it helps stimulate the economy. If the
government spending policy is effective, private consumption is
expected to increase and can compensate for the recent extra
government expense. In this study, government spending is
categorized into government consumption spending and government
capital spending. The study first examines consumer consumption
in line with the demand function in microeconomic theory. Three
categories of private consumption are used in the study: food
consumption, non-food consumption, and services consumption. A
dynamic Almost Ideal Demand System for the three categories of
private consumption is estimated using a Vector Error Correction
Mechanism model. The estimated model indicates substitution
effects (negative impacts) of government consumption spending on
the budget share of private non-food consumption and of
government capital spending on the budget share of private food
consumption, respectively. Nevertheless, the result does not
necessarily indicate that the negative effects on the budget shares of
non-food and food consumption mean a fall in total private
consumption. The microeconomic consumer demand analysis
clearly indicates changes in the component structure of aggregate
expenditure in the economy as a result of the government spending
policy. The macroeconomic concept of aggregate demand,
comprising consumption, investment, government spending
(government consumption spending and government capital
spending), exports, and imports, is used to estimate their
relationship using the Vector Error Correction Mechanism model.
The macroeconomic study found no effect of government capital
spending on either private consumption or GDP growth, while
government consumption spending has a negative effect on GDP
growth. Therefore, no crowding-out effect of government spending
on private consumption is found, but the spending is ineffective and
even inefficient, as it is found to reduce GDP growth in the context
of Thailand.
Abstract: This paper reports the results of an experimental work
conducted to investigate the effect of curing conditions on the
compressive strength of self-compacting geopolymer concrete
prepared by using fly ash as base material and combination of sodium
hydroxide and sodium silicate as alkaline activator. The experiments
were conducted by varying the curing time and curing temperature in
the range of 24-96 hours and 60-90°C respectively. The essential
workability properties of freshly prepared Self-compacting
Geopolymer concrete such as filling ability, passing ability and
segregation resistance were evaluated by using Slump flow,
V-funnel, L-box and J-ring test methods. The fundamental
requirements of high flowability and resistance to segregation as
specified by guidelines on Self-compacting Concrete by EFNARC
were satisfied. Test results indicate that longer curing times and
curing the concrete specimens at higher temperatures result in
higher compressive strength. There was an increase in compressive
strength with an increase in curing time; however, the increase in
compressive strength after 48 hours was not significant. Concrete
specimens cured at 70°C produced the highest compressive strength
compared to specimens cured at 60°C, 80°C and 90°C.
Abstract: NFκB is a transcription factor regulating many
functions of the vessel wall. Under normal conditions, NFκB shows
diffuse cytoplasmic expression, suggesting that the system is
inactive. Activated NFκB provides a potential pathway for the rapid
transcription of a variety of genes encoding cytokines, growth
factors, adhesion molecules and procoagulatory factors. It is likely
to play an important role in chronic inflammatory diseases,
including atherosclerosis. There are many stimuli with the potential
to activate NFκB, including hyperlipidemia. We used 24 mice,
which were divided into 6 groups. The high-fat diet (HFD) was
given ad libitum for 2, 4, and 6 months. The parameters in this
study were the amount of NFκB activation, H2O2 as a marker of
ROS, and VCAM-1 as a product of NFκB activation. The H2O2
colorimetric assay was performed directly using an Anti Rat H2O2
ELISA Kit. NFκB and VCAM-1 were detected in mouse aorta,
measured by ELISA kit and immunohistochemistry. There were
significant differences in H2O2, NFκB activation and VCAM-1
levels after HFD induction for 2, 4 and 6 months. This suggests that
the HFD induces ROS formation and increases the activation of
NFκB as a marker of atherosclerosis caused by hyperlipidemia, a
classical atherosclerosis risk factor.
Abstract: Road signs are elements of roads with great influence
on drivers' behavior. For signs to fulfill their function, they must
meet visibility and durability requirements, which are particularly
important at night, when the coefficient of retroreflection becomes
a decisive factor in ensuring road safety. Accepting that the
visibility of signage has implications for people's safety, we
recognize the importance of its function: to foster the highest
standards of service and safety for drivers. The usual conditions of
perception of any sign are determined by the age of the driver, the
reflective material, luminosity, vehicle speed and emplacement.
Accordingly, this paper evaluates different signs with a view to
increasing road safety.
Abstract: The purpose of this work is the measurement of the
system presampling MTF of a variable resolution x-ray (VRX) CT
scanner. In this paper, we used the parameters of an actual VRX CT
scanner to simulate and study the effect of different focal spot sizes
on the system presampling MTF with a Monte Carlo method (the
GATE simulation software). A focal spot size of 0.6 mm limited the
spatial resolution of the system to 5.5 cy/mm at incident angles
below 17º for cell #1. With a focal spot size of 0.3 mm, the spatial
resolution increased up to 11 cy/mm and the limiting effect of the
focal spot size appeared at incident angles below 9º. The focal spot
size of 0.3 mm could improve the spatial resolution to some extent,
but because of magnification non-uniformity there is a 10 cy/mm
difference between the spatial resolution of cell #1 and cell #256. A
focal spot size of 0.1 mm acted as an ideal point source for this
system: the spatial resolution increased to more than 35 cy/mm and,
at all incident angles, the spatial resolution was a function of the
incident angle. Moreover, the focal spot size of 0.1 mm minimized
the effect of magnification non-uniformity.
Abstract: This paper presents the applicability of artificial
neural networks for 24-hour-ahead solar power generation
forecasting of a 20 kW photovoltaic system; the developed
forecasting is suitable for reliable microgrid energy management. In
total, four neural networks were proposed, namely: a multi-layer
perceptron, a radial basis function network, a recurrent network and
a neural network ensemble consisting of bagged networks. The
forecasting reliability of the proposed neural networks was
evaluated in terms of forecasting error performance based on
statistical and graphical methods. The experimental results showed
that all the proposed networks achieved an acceptable forecasting
accuracy. In comparison, the neural network ensemble gives the
most precise forecasts among the networks considered. In fact, each
network of the ensemble over-fits to some extent, and this leads to a
diversity which enhances the noise tolerance and the forecasting
generalization performance compared to the conventional networks.
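The bagging idea, averaging models trained on bootstrap resamples so that individual over-fitting tends to cancel out, can be sketched as follows, with simple linear least-squares models standing in for the neural networks of the ensemble.

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit: a stand-in for training one ensemble member."""
    Xb = np.c_[X, np.ones(len(X))]
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def predict_linear(coef, X):
    return np.c_[X, np.ones(len(X))] @ coef

def bagged_forecast(X, y, X_new, n_models=25, seed=0):
    """Average the forecasts of models trained on bootstrap resamples of
    the training data; the diversity among members is what improves the
    ensemble's generalization over any single model."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))   # bootstrap sample
        coef = fit_linear(X[idx], y[idx])
        preds.append(predict_linear(coef, X_new))
    return np.mean(preds, axis=0)
```

In the paper's setting each member would be a trained network rather than a linear model, but the resample-train-average structure is the same.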
Abstract: Finding the interpolation function of a given set of nodes is an important problem in scientific computing. In this work, a kind of localization is introduced using radial basis functions, which finds a sufficiently smooth solution without consuming a large amount of time or computer memory. Some examples are presented to show the efficiency of the new method.
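One common form of RBF localization, solving a small system over only the k nearest nodes for each evaluation point instead of one large global system, can be sketched as follows. The Gaussian basis, the neighborhood size and the one-dimensional setting are assumptions made for illustration; the paper's exact scheme may differ.

```python
import numpy as np

def local_rbf_interpolate(x_nodes, f_nodes, x_eval, k=5, eps=1.0):
    """Local RBF interpolation: each evaluation point uses only its k
    nearest nodes, so only small k-by-k systems are solved, keeping
    time and memory costs low compared to a global interpolant."""
    phi = lambda r: np.exp(-(eps * r) ** 2)          # Gaussian basis
    out = np.empty(len(x_eval))
    for j, xe in enumerate(x_eval):
        idx = np.argsort(np.abs(x_nodes - xe))[:k]   # k nearest nodes
        xc, fc = x_nodes[idx], f_nodes[idx]
        A = phi(np.abs(xc[:, None] - xc[None, :]))   # small local system
        lam = np.linalg.solve(A, fc)
        out[j] = phi(np.abs(xe - xc)) @ lam
    return out
```

Each local system costs O(k^3) regardless of the total number of nodes, which is the source of the time and memory savings the abstract claims for localization.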