Abstract: In this study, a Multi-Layer Perceptron (MLP) with the Back-Propagation learning algorithm is used for the effective diagnosis of Parkinson's disease (PD), a challenging problem for the medical community. Typically characterized by tremor, PD occurs due to the loss of dopamine in the brain's thalamic region, which results in involuntary or oscillatory movement in the body. A feature selection algorithm is applied along with biomedical test values to diagnose the disease. Clinical diagnosis relies mostly on the doctor's expertise and experience, yet cases of wrong diagnosis and treatment are still reported. Patients are asked to take a number of tests for diagnosis, and in many cases not all of the tests contribute towards an effective diagnosis. Our work classifies the presence of Parkinson's disease with a reduced number of attributes. Originally, 22 attributes are involved in classification. We use Information Gain to select the attributes, which reduces the number of measurements that need to be taken from patients; the twenty-two attributes are reduced to sixteen. An artificial neural network is then used to classify the diagnosis of patients. The accuracy is 82.051% on the training data set and 83.333% on the validation data set.
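The attribute-reduction step described above can be sketched as follows. This is an illustrative toy example, not the paper's implementation: the dataset, labels, and the choice of two features are hypothetical, and only the use of Information Gain for ranking attributes comes from the abstract.

```python
# Illustrative sketch: ranking attributes by Information Gain on a toy
# binary dataset, then keeping only the highest-gain attributes.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy H(Y) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(Y; X) = H(Y) - sum_v p(X=v) * H(Y | X=v)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Toy data: rows are patients, columns are two hypothetical test attributes.
X = [[1, 0], [1, 1], [0, 0], [0, 1], [1, 0], [0, 1]]
y = [1, 1, 0, 0, 1, 0]          # 1 = PD present, 0 = healthy (toy labels)

gains = [information_gain([row[j] for row in X], y) for j in range(2)]
ranked = sorted(range(2), key=lambda j: gains[j], reverse=True)
print(gains)   # feature 0 perfectly predicts y here, so its gain is H(y) = 1.0
```

Keeping only the top-ranked attributes (here, feature 0) mirrors the reduction from twenty-two to sixteen attributes described in the abstract.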
Abstract: This paper deals with stakeholders' decisions within energy neutral urban redevelopment processes. The decisions of these stakeholders during the process will make or break energy neutral ambitions. An extensive-form game theory model gave insight into the behavioral differences of stakeholders regarding energy neutral ambitions and the effects of the changing legislation. The results show that new legislation regarding spatial planning slightly influences the behavior of stakeholders. An active behavior of the municipality will still result in the best outcome. Nevertheless, the municipality becomes more powerful when acting passively and can make use of planning tools to provide governance towards energy neutral urban redevelopment. Moreover, organizational support, recognizing the necessity of energy neutrality, keeping focused and collaboration among stakeholders are crucial elements to achieve the objective of an energy neutral urban (re)development.
Abstract: In this paper we use exponential particle swarm
optimization (EPSO) to cluster data. We then compare the EPSO
clustering algorithm, which relies on an exponential variation
of the inertia weight, with the particle swarm optimization (PSO)
clustering algorithm, which relies on a linear inertia weight. This
comparison is evaluated on five data sets. The experimental results
show that the EPSO clustering algorithm increases the possibility of
finding the optimal positions, as it decreases the number of failures.
They also show that the EPSO clustering algorithm has a smaller
quantization error than the PSO clustering algorithm, i.e., the EPSO
clustering algorithm is more accurate than the PSO clustering algorithm.
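A minimal sketch of the two inertia-weight schedules being compared follows. The exact exponential law used by EPSO is not given in the abstract, so the decay form and the constants below (w_max = 0.9, w_min = 0.4, decay rate 4.0) are assumptions for illustration only.

```python
# Sketch of the two inertia-weight schedules: linear (standard PSO) versus
# an assumed exponential decay (EPSO-style), over T_MAX iterations.
from math import exp

W_MAX, W_MIN, T_MAX = 0.9, 0.4, 100   # assumed bounds and iteration count

def linear_inertia(t):
    """Standard PSO: w decreases linearly from W_MAX to W_MIN."""
    return W_MAX - (W_MAX - W_MIN) * t / T_MAX

def exponential_inertia(t):
    """EPSO-style: w decays exponentially towards W_MIN (assumed form)."""
    return W_MIN + (W_MAX - W_MIN) * exp(-4.0 * t / T_MAX)

# Both schedules start at W_MAX; the exponential one drops faster early on,
# shifting the swarm from exploration to exploitation sooner.
print(linear_inertia(0), exponential_inertia(0))    # 0.9 0.9
print(round(linear_inertia(50), 3), round(exponential_inertia(50), 3))
```

In a full PSO clustering loop, the chosen schedule would supply the inertia term of the velocity update at each iteration.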
Abstract: This paper describes the gain and noise performance
of a discrete Raman amplifier as a function of fiber length and
signal input power for different pump configurations. Simulation has
been done using the OptiSystem 7.0 simulation software at a signal
wavelength of 1550 nm and a pump wavelength of 1450 nm. The
results showed that the gain is higher in bidirectional pumping than in
counter pumping, that the gain changes with increasing fiber length
while the noise figure remains the same for short fiber lengths, and
that the gain saturates differently for different pumping configurations
at different fiber lengths and signal power levels.
Abstract: Testing is an activity that is required both in the
development and maintenance phases of the software development life
cycle, in which Integration Testing (IT) is an important activity.
Integration testing is based on the specification and functionality of
the software and thus can be called a black-box testing technique. The
purpose of integration testing is to test the integration between
software components. In function or system testing, the concern is with
overall behavior and whether the software meets its functional
specifications or performance characteristics, or how well the software
and hardware work together. This explains the importance and necessity
of IT, in which the emphasis is on the interactions between modules and
their interfaces. Software errors should be discovered early during
IT to reduce the cost of correction. This paper introduces a new type
of integration error, presents an overview of Integration Testing
techniques with a comparison of each technique, and also identifies
which technique detects what type of error.
Abstract: Rapid advancement in computing technology is bringing
computers and humans to be seamlessly integrated in the future. The
emergence of the smartphone has driven the computing era towards
ubiquitous and pervasive computing. Recognizing human activity has
garnered a lot of interest and has raised significant research
concerns in identifying contextual information useful for human
activity recognition. Not only unobtrusive to users in daily life,
smartphones have embedded built-in sensors capable of sensing the
contextual information of their users, supported by a wide range
of network connection capabilities. In this paper, we discuss the
classification algorithms used in smartphone-based human activity
recognition. Existing technologies pertaining to smartphone-based
research in human activity recognition are highlighted and discussed.
Our paper also presents our findings and opinions to formulate
improvement ideas for current research trends. Understanding
research trends will enable researchers to have a clearer research
direction and a common vision of the latest smartphone-based human
activity recognition area.
Abstract: Information and Communication Technologies (ICT) in mathematical education form a very active field of research and innovation, in which learning is understood to be meaningful and to involve grasping multiple linked representations rather than rote memorization. A great amount of literature, offering a wide range of theories, learning approaches, methodologies and interpretations, generally stresses the potential of ICT for teaching and learning. Despite the utilization of new learning approaches with ICT, students experience difficulties in learning concepts relevant to understanding mathematics, and much remains unclear about the relationship between the computer environment, the activities it might support, and the knowledge that might emerge from such activities. Many questions arise in this regard: to what extent does the use of ICT help students in the process of understanding and solving tasks or problems? Is it possible to identify what aspects or features of students' mathematical learning can be enhanced by the use of technology? This paper highlights the interest of integrating ICT into the teaching and learning of mathematics (quadratic functions); it aims to investigate the effect of four instructional methods on students' mathematical understanding and problem solving. Quantitative and qualitative methods are used to report on 43 students in middle school. Results showed that mathematical thinking and problem solving evolve as students engage with ICT activities and learn cooperatively.
Abstract: A concern that researchers usually face in different
applications of Artificial Neural Networks (ANN) is the determination
of the size of the effective domain in time series. In this paper, the trial and
error method was used on groundwater depth time series to determine
the size of effective domain in the series in an observation well in
Union County, New Jersey, U.S. Different domains of 20, 40, 60, 80,
100, and 120 preceding days were examined, and 80 days was
considered the effective length of the domain. Data sets in different
domains were fed to a Feed Forward Back Propagation ANN with
one hidden layer and the groundwater depths were forecasted. Root
Mean Square Error (RMSE) and the correlation factor (R2) of
estimated and observed groundwater depths for all domains were
determined. In general, groundwater depth forecast improved, as
evidenced by lower RMSEs and higher R2s, when the domain length
increased from 20 to 120. However, 80 days was selected as the
effective domain because the improvement was less than 1% beyond
that. Forecasted ground water depths utilizing measured daily data
(set #1) and data averaged over the effective domain (set #2) were
compared. It was postulated that the more accurate nature of measured
daily data was the reason for the better forecast with lower RMSE
(0.1027 m compared to 0.255 m) in set #1. However, the size of the input
data in this set was 80 times the size of input data in set #2; a factor
that may increase the computational effort unpredictably. It was
concluded that data averaged over the 80-day effective domain may be
successfully utilized to lower the size of input data sets
considerably, while maintaining the effective information in the data set.
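The two goodness-of-fit measures used in the comparison above, RMSE and R², can be sketched as below; the observation and estimate values are hypothetical toy numbers, not the study's groundwater data.

```python
# Sketch of the two error measures used to compare forecasts: root mean
# square error (RMSE) and the squared correlation coefficient R^2.
from math import sqrt

def rmse(obs, est):
    """Root mean square error between observed and estimated values."""
    return sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def r_squared(obs, est):
    """Squared Pearson correlation between observed and estimated values."""
    n = len(obs)
    mo, me = sum(obs) / n, sum(est) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(obs, est))
    var_o = sum((o - mo) ** 2 for o in obs)
    var_e = sum((e - me) ** 2 for e in est)
    return cov * cov / (var_o * var_e)

obs = [3.10, 3.25, 3.40, 3.30, 3.55]   # hypothetical groundwater depths (m)
est = [3.05, 3.30, 3.35, 3.35, 3.50]   # hypothetical ANN forecasts (m)
print(round(rmse(obs, est), 4))
print(round(r_squared(obs, est), 4))
```

A lower RMSE together with a higher R² is the criterion by which the study judged one domain length better than another.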
Abstract: This is the first report revealing the investigation
and optimization of protein hydrolysate production from J.
curcas cake. Proximate analysis of the raw material showed 18.98%
protein, 5.31% ash, 8.52% moisture and 12.18% lipid. The
appropriate protein hydrolysate production process began with
grinding the J. curcas cake into small pieces. It was then suspended
in 2.5% sodium hydroxide solution at a solution to J. curcas cake
ratio of 80:1 (v/w). The hydrolysis reaction was controlled at a
temperature of 50 °C in a water bath for 45 minutes. After that, the
supernatant (protein hydrolysate) was separated by centrifugation at
8000 g for 30 minutes. The maximum yield of the resulting protein
hydrolysate was 73.27% with 7.34% moisture, 71.69% total protein,
7.12% lipid and 2.49% ash. The product was also readily
soluble in water.
Abstract: This study examines the perception of the environmental
approach in small and medium-sized enterprises (SMEs) – the
process by which firms integrate environmental concern into
business. Based on a review of the literature, the paper synthesizes
the focus on environmental issues with its reflection in a case study
in the Czech Republic. Two themes of corporate environmentalism are
discussed – corporate environmental orientation and corporate
stances toward environmental concerns. It provides theoretical
material on greening organizational culture that is helpful in
understanding the response of contemporary business to
environmental problems. We integrate theoretical predictions with
empirical findings. Scales to measure these
themes are tested in a survey of managers in 229 Czech firms. We
used the process of in-depth questioning. The research question was
derived and answered in the context of the corresponding literature
and conducted research. A case study showed us that the environmental
approach varies considerably (depending on the size of the firm) in
the SME sector. The results of the empirical mapping demonstrate
Czech companies' approach to the environment, define the problem
areas and pinpoint the main limitations in the expansion of
environmental aspects. We contribute to the debate for recognition of
the particular role of environmental issues in business reality.
Abstract: Cloud Computing is a new technology that helps us
use the Cloud to meet our computational needs. The Cloud refers to a
scalable network of computers that work together like the Internet. An
important element of Cloud Computing is that we shift the processing,
managing, storing and implementation of our data from local machines
into the Cloud, which helps us to improve efficiency. Because it is a
new technology, it has both advantages and disadvantages, which are
scrutinized in this article. Then some vanguards of this technology
are studied. Afterwards, we find that Cloud Computing will have an
important role in our future lives.
Abstract: Although backpropagation ANNs generally predict
better than decision trees do for pattern classification problems, they
are often regarded as black boxes, i.e., their predictions cannot be
explained as those of decision trees. In many applications, it is
desirable to extract knowledge from trained ANNs for the users to
gain a better understanding of how the networks solve the problems.
A new rule extraction algorithm, called rule extraction from artificial
neural networks (REANN), is proposed and implemented to extract
symbolic rules from ANNs. A standard three-layer feedforward ANN
is the basis of the algorithm. A four-phase training algorithm is
proposed for backpropagation learning. Explicitness of the extracted
rules is supported by comparing them to the symbolic rules generated
by other methods. Extracted rules are comparable with other methods
in terms of number of rules, average number of conditions for a rule,
and predictive accuracy. Extensive experimental studies on several
benchmark classification problems, such as breast cancer, iris,
diabetes, and season classification problems, demonstrate the
effectiveness of the proposed approach with good generalization
ability.
Abstract: During the last couple of years, the degree of dependence on IT systems has reached a dimension nobody imagined possible 10 years ago. The increased usage of mobile devices (e.g., smartphones), wireless sensor networks and embedded devices (the Internet of Things) are only some examples of the dependency of modern societies on cyberspace. At the same time, the complexity of IT applications, e.g., because of the increasing use of cloud computing, is rising continuously. Along with this, the threats to IT security have increased both quantitatively and qualitatively, as recent examples like STUXNET or the supposed cyber attack on the Illinois water system prove impressively. Once-isolated control systems are nowadays often publicly accessible - a fact that was never intended by their developers. Threats to IT systems do not care about areas of responsibility. Especially with regard to Cyber Warfare, IT threats are no longer limited to company or industry boundaries, administrative jurisdictions or state boundaries. One of the important countermeasures is increased cooperation among the participants, especially in the field of Cyber Defence. Besides political and legal challenges, there are technical ones as well. A better, at least partially automated exchange of information is essential to (i) enable sophisticated situational awareness and to (ii) counter the attacker in a coordinated way. Therefore, this publication performs an evaluation of state-of-the-art Intrusion Detection Message Exchange protocols in order to guarantee a secure information exchange between different entities.
Abstract: As the majority of faults are found in a few of a
system's modules, there is a need to investigate the modules that are
affected severely compared to other modules, and proper
maintenance needs to be done in time, especially for critical
applications. Neural networks have already been applied
in software engineering applications to build reliability growth
models and to predict gross change or reusability metrics. Neural
networks are non-linear, sophisticated modeling techniques that are
able to model complex functions. Neural network techniques are
used when the exact nature of the inputs and outputs is not known. A
key feature is that they learn the relationship between input and
output through training. In the present work, various neural network
based techniques are explored and a comparative analysis is performed
for predicting the level of maintenance required by predicting the
level of severity of faults present in NASA's public domain defect
dataset. The comparison of the different algorithms is made on the
basis of Mean Absolute Error, Root Mean Square Error and accuracy
values. It is concluded that the Generalized Regression Network is the
best algorithm for classifying the software components into different
levels of severity of impact of the faults. The algorithm can be used
to develop a model for identifying modules that are heavily affected
by faults.
Abstract: In this paper, a PSO-based approach is proposed to
derive a digital controller for redesigned digital systems having an
interval plant, based on resemblance of the extremal gain/phase
margins. By combining the interval plant and a controller as an
interval system, the extremal GM/PM associated with the loop transfer
function can be obtained. The design problem is then formulated as an
optimization problem over an aggregated error function revealing the
deviation of the extremal GM/PM between the redesigned digital
system and its continuous counterpart, and subsequently optimized by
the proposed PSO to obtain an optimal set of parameters for the
digital controller. Computer simulations have shown that the frequency
responses of the redesigned digital system having an interval plant
bear a better resemblance to its continuous-time counterpart through
the incorporation of the PSO-derived digital controller than those
obtained using existing open-loop discretization methods.
Abstract: The use of radar in Quantitative Precipitation Estimation (QPE) for radar-rainfall measurement is significantly beneficial. Radar has advantages in terms of high spatial and temporal resolution in rainfall measurement and also forecasting. In Malaysia, radar application in QPE is still new and needs to be explored. This paper focuses on the Z/R derivation work of radar-rainfall estimation based on rainfall classification. The work developed new Z/R relationships for the Klang River Basin in the Selangor area for three general classes of rain events, namely low (<10 mm/hr), moderate (10-30 mm/hr) and heavy (>30 mm/hr), and also for more specific rain types during monsoon seasons. Looking at the high potential of Doppler radar in QPE, the newly formulated Z/R equations will be useful in improving the measurement of rainfall for any hydrological application, especially flood forecasting.
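Z/R relationships of the power-law form Z = aR^b underlie such derivations. The sketch below inverts this form to convert radar reflectivity in dBZ to rain rate; the Marshall-Palmer coefficients a = 200, b = 1.6 are a classical default used here for illustration, not the basin-specific coefficients derived in the paper.

```python
# Sketch of applying a Z/R power law, Z = a * R**b, to convert radar
# reflectivity (dBZ) into rain rate R (mm/hr). a = 200, b = 1.6 are the
# classical Marshall-Palmer defaults, assumed here for illustration.
def rain_rate(dbz, a=200.0, b=1.6):
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity (mm^6/m^3)
    return (z / a) ** (1.0 / b)     # invert Z = a * R**b to get R

for dbz in (20, 35, 50):
    print(dbz, round(rain_rate(dbz), 2))
```

Deriving basin-specific a and b from paired radar and rain-gauge observations, per rain class, is the kind of work the abstract describes.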
Abstract: In the present study, Schwertmannite (an iron oxide
hydroxide) is selected as an adsorbent for defluoridation of water.
The adsorbent was prepared by wet chemical process and was
characterized by SEM, XRD and BET. The fluoride adsorption
efficiency of the prepared adsorbent was determined with respect to
contact time, initial fluoride concentration, adsorbent dose and pH of
the solution. The batch adsorption data revealed that the fluoride
adsorption efficiency was highly influenced by the studied factors.
Equilibrium was attained within one hour of contact time indicating
fast kinetics and the adsorption data followed pseudo second order
kinetic model. Equilibrium isotherm data fitted to both Langmuir and
Freundlich isotherm models for a concentration range of 5-30 mg/L.
The adsorption system followed Langmuir isotherm model with
maximum adsorption capacity of 11.3 mg/g. The high adsorption
capacity of Schwertmannite points towards the potential of this
adsorbent for fluoride removal from aqueous medium.
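The Langmuir fit reported above (maximum adsorption capacity of 11.3 mg/g) can be illustrated with the linearized form Ce/qe = Ce/q_max + 1/(K_L*q_max). In this sketch the data points are synthetic, generated from an assumed K_L; only the q_max value of 11.3 mg/g comes from the abstract.

```python
# Hedged sketch of a linearized Langmuir isotherm fit:
#   Ce/qe = Ce/q_max + 1/(K_L * q_max)
# Synthetic equilibrium data are generated from an assumed K_L, then the
# straight-line fit recovers q_max (slope = 1/q_max) and K_L (from the
# intercept).
Q_MAX, K_L = 11.3, 0.5                    # K_L (L/mg) is an assumed constant

ce = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]  # equilibrium concentrations (mg/L)
qe = [Q_MAX * K_L * c / (1.0 + K_L * c) for c in ce]  # Langmuir uptake (mg/g)

# Least-squares fit of y = Ce/qe against x = Ce.
x, y = ce, [c / q for c, q in zip(ce, qe)]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
intercept = my - slope * mx

q_max_fit = 1.0 / slope
k_l_fit = 1.0 / (intercept * q_max_fit)
print(round(q_max_fit, 2), round(k_l_fit, 2))   # recovers 11.3 and 0.5
```

With experimental batch data, the same regression over the 5-30 mg/L range would yield the reported maximum adsorption capacity.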
Abstract: The purpose of the study is to determine primary mathematics student teachers' views on the use of instructional technology tools in the learning process and to reveal how sample presentations of different mathematical concepts affect their views. This is a qualitative study involving twelve mathematics students from a public university. The data were gathered from two semi-structured interviews. The first one was conducted at the beginning of the study. After that, the representations prepared by the researchers were shown to the participants. These representations contain animations, Geometer's Sketchpad activities, video clips, spreadsheets, and PowerPoint presentations. The last interview was conducted at the end of these presentations. The data from the interviews and content analyses were transcribed and read and reread to explore the major themes. Findings revealed that the views of the students changed in this process and that they came to believe that instructional technology tools should be used in their classrooms.
Abstract: The purpose of this work is the measurement of the
system presampling MTF of a variable resolution x-ray (VRX) CT
scanner. In this paper, we used the parameters of an actual VRX CT
scanner to simulate and study the effect of different focal spot sizes
on the system presampling MTF by the Monte Carlo method (GATE
simulation software). A focal spot size of 0.6 mm limited the spatial
resolution of the system to 5.5 cy/mm at incident angles below 17º
for cell#1. With a focal spot size of 0.3 mm, the spatial resolution
increased up to 11 cy/mm and the limiting effect of the focal spot
size appeared at incident angles below 9º. The focal spot size of 0.3
mm could improve the spatial resolution to some extent, but because
of magnification non-uniformity there is a 10 cy/mm difference
between the spatial resolution of cell#1 and cell#256. The focal spot
size of 0.1 mm acted as an ideal point source for this system: the
spatial resolution increased to more than 35 cy/mm and at all incident
angles the spatial resolution was a function of the incident angle.
Moreover, the focal spot size of 0.1 mm minimized the effect of
magnification non-uniformity.
Abstract: The halophilic proteinase showed maximal activity
at 50°C and pH 9~10 in 20% NaCl, and was highly stabilized by
NaCl. It was able to hydrolyse natural actomyosin (NAM), collagen
and anchovy protein. For NAM hydrolysis, the myosin heavy chain
was completely digested by the halophilic proteinase, as evidenced
by the lowest remaining band intensity, while actin was only
partially hydrolysed. The SR5-3 proteinase was also capable of
effectively hydrolyzing the two major components of collagen, the
β- and α-components. The degree of hydrolysis (DH) of the halophilic
proteinase and of commercial proteinases (Novozyme, Neutrase,
chymotrypsin and Flavourzyme) on anchovy protein were compared, and
it was found that the proteinase showed a greater degree of
hydrolysis towards anchovy protein than the commercial proteinases.
The DH of the halophilic proteinase was sharply enhanced with the
increase in the concentration of enzyme from 0.035 U to 0.105 U.
The results warrant that accelerated production of fish sauce of
higher quality may be achieved by adding the halophilic proteinase
from this bacterium.