Abstract: This study investigates digestive diseases using sound as the sensing medium. After preprocessing, the extracted signal is registered in the cepstrum domain. Once the digestive diseases are classified, the system selects random samples based on their features and reconstructs the corresponding nonstationary, long-term signals via the inverse cepstral transform, presenting the output in both digital and audible form. The structure is also updatable: when a new signal is received, the corresponding disease class is updated in the feature domain.
Abstract: Much time-series data originates from continuous dynamical systems. This paper first studies the detection of nonlinearity in time series from continuous dynamical systems using the phase-randomized surrogate algorithm, and then introduces the Delay Vector Variance (DVV) method into the nonlinearity test. The results show that, under different sampling conditions, traditional test statistics (the third-order autocovariance and the asymmetry due to time reversal) yield contradictory detections of nonlinearity, whereas the DVV method reliably identifies the nonlinearity of the Lorenz signal. This indicates that the proposed method can characterize continuous dynamical signals effectively.
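The phase-randomized surrogate step mentioned above can be sketched in a few lines. The following is a minimal pure-Python illustration (function names are ours, and a naive O(N²) DFT is used for self-containment instead of an FFT): the surrogate keeps the amplitude spectrum, and hence the linear autocorrelation, while destroying any nonlinear structure.

```python
import cmath
import math
import random

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for small N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT; the imaginary part is dropped (input is conjugate-symmetric)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def phase_randomized_surrogate(x, seed=0):
    """Keep each bin's magnitude, draw fresh phases with conjugate symmetry."""
    rng = random.Random(seed)
    N = len(x)
    X = dft(x)
    Y = [0j] * N
    Y[0] = X[0]                      # DC bin: keep (preserves the mean)
    for k in range(1, N // 2):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        Y[k] = abs(X[k]) * cmath.exp(1j * phi)
        Y[N - k] = Y[k].conjugate()  # symmetry keeps the surrogate real
    if N % 2 == 0:
        Y[N // 2] = X[N // 2]        # Nyquist bin must stay real
    return idft(Y)
```

A surrogate produced this way shares the original's power spectrum exactly, so any statistic that differs significantly between the data and its surrogates points to nonlinearity.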
Abstract: Rapid advances in computing technology are bringing computers and humans into seamless integration. The emergence of the smartphone has driven the computing era towards ubiquitous and pervasive computing. Recognizing human activity has garnered considerable interest and has raised significant research concerns about identifying contextual information useful for human activity recognition. Besides being unobtrusive to users in daily life, the smartphone has built-in sensors capable of sensing the contextual information of its users, supported by a wide range of network connections. In this paper, we discuss the classification algorithms used in smartphone-based human activity recognition. Existing technologies pertaining to smartphone-based research in human activity recognition are highlighted and discussed. The paper also presents our findings and opinions to formulate ideas for improving current research trends. Understanding these trends will give researchers a clearer research direction and a common vision of the smartphone-based human activity recognition area.
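As a minimal illustration of the kind of classification pipeline surveyed here, the sketch below extracts two common time-domain features (mean and standard deviation of an accelerometer-magnitude window) and assigns a window to the nearest class centroid. The feature choice and the nearest-centroid rule are just one simple example from the literature, not a method proposed in this paper.

```python
import math

def window_features(samples):
    """Mean and standard deviation of one accelerometer-magnitude window."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return (mean, math.sqrt(var))

def nearest_centroid(train, x):
    """train: {activity_label: list of feature tuples}; return closest label."""
    best, best_d = None, float("inf")
    for label, feats in train.items():
        dim = len(feats[0])
        centroid = [sum(f[i] for f in feats) / len(feats) for i in range(dim)]
        d = sum((a - b) ** 2 for a, b in zip(centroid, x))  # squared distance
        if d < best_d:
            best, best_d = label, d
    return best
```

In practice, smartphone HAR systems use many more features (frequency-domain, correlation between axes) and stronger classifiers, but the window-features-classifier structure is the same.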
Abstract: This paper considers the development of a two-point
predictor-corrector block method for solving delay differential
equations. The formulae are represented in divided-difference form, and the algorithm is implemented using a variable-stepsize, variable-order technique. The block method produces two new values at a single
integration step. Numerical results are compared with existing
methods and it is evident that the block method performs very well.
Stability regions of the block method are also investigated.
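For readers unfamiliar with predictor-corrector schemes for delay differential equations, the sketch below shows the general idea with a fixed-step Euler predictor and a trapezoidal corrector. It is deliberately simpler than the divided-difference, variable-stepsize, variable-order two-point block method of the paper; the names and the test equation y'(t) = -y(t-1) are ours.

```python
def solve_dde(f, history, tau, t_end, h):
    """Fixed-step predictor-corrector for y'(t) = f(t, y(t), y(t - tau)).

    history(t) supplies y(t) for t <= 0; h must divide tau so the delayed
    value falls exactly on a stored grid point (a simplifying assumption)."""
    n_delay = round(tau / h)
    steps = round(t_end / h)
    ts, ys = [0.0], [history(0.0)]
    t, y = 0.0, history(0.0)
    for i in range(steps):
        def lag(j):
            k = j - n_delay
            return ys[k] if k >= 0 else history(k * h)
        f0 = f(t, y, lag(i))
        y_pred = y + h * f0                      # predictor: explicit Euler
        f1 = f(t + h, y_pred, lag(i + 1))
        y = y + h * (f0 + f1) / 2.0              # corrector: trapezoidal rule
        t += h
        ts.append(t)
        ys.append(y)
    return ts, ys
```

For y'(t) = -y(t-1) with y(t) = 1 for t <= 0, the exact solution is y(t) = 1 - t on [0, 1] and reaches -1/2 at t = 2, which the scheme reproduces closely.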
Abstract: Tensile armour wires provide a flexible pipe's resistance to longitudinal stresses. Flexible pipe manufacturers need to know the effect of defects such as scratches and cracks with dimensions below 0.2 mm, the limit of current non-destructive detection technology, on the fracture stress and fracture strain of the wire for quality assurance purposes. Recent research on the fracture strength of cracked wires employed laboratory testing and a classical fracture mechanics approach using non-standardised fracture mechanics specimens, because standard test specimens could not be manufactured from the wires owing to their size. In this work, the effect of miniature cracks on the fracture properties of tensile armour wires was investigated using laboratory tensile tests and finite element simulations with the phenomenological shear fracture model. The investigation revealed that cracks shallower than 0.2 mm have a markedly detrimental effect on the fracture strain of the wire.
Abstract: Information and Communication Technologies (ICT) in mathematics education is a very active field of research and innovation, in which learning is understood as meaningful engagement with multiple linked representations rather than rote memorization. A large body of literature, offering a wide range of theories, learning approaches, methodologies and interpretations, generally stresses the potential of ICT for teaching and learning. Yet despite new ICT-based learning approaches, students still experience difficulties in learning concepts essential to understanding mathematics, and much remains unclear about the relationship between the computer environment, the activities it might support, and the knowledge that might emerge from such activities. Several questions arise in this regard: to what extent does the use of ICT help students in the process of understanding and solving tasks or problems? Is it possible to identify which aspects or features of students' mathematical learning can be enhanced by the use of technology? This paper highlights the value of integrating information and communication technologies (ICT) into the teaching and learning of mathematics (quadratic functions) and investigates the effect of four instructional methods on students' mathematical understanding and problem solving. Quantitative and qualitative methods are used to report on 43 middle-school students. Results showed that mathematical thinking and problem solving evolve as students engage with ICT activities and learn cooperatively.
Abstract: It is observed that the weighted least-squares (WLS) technique, including its modifications, results in an equiripple error curve. The resultant error, as a percentage of the ideal value, is highly non-uniformly distributed over the range of frequencies for which the differentiator is designed. This paper proposes a modification to the technique so that the optimization procedure yields a lower maximum relative error with respect to the ideal values. Simulation results for first-order as well as higher-order differentiators are given to illustrate the excellent performance of the proposed method.
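As background, a basic weighted least-squares differentiator design can be sketched as follows. Weighting the squared error at each frequency by 1/ω² pushes the optimizer toward a uniform relative rather than absolute error, which is the spirit of the modification discussed here; the band edge, grid density and filter length below are arbitrary illustrative choices, not the paper's.

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def wls_differentiator(half_order=8, passband=0.5 * math.pi, grid=200):
    """Fit D(w) = sum_k c_k sin(k w) to the ideal response w on [0, passband].

    A type-III (odd-length, antisymmetric) FIR differentiator has exactly this
    amplitude response; the 1/w^2 weight minimizes *relative* error."""
    ws = [passband * (i + 1) / grid for i in range(grid)]
    wt = [1.0 / w ** 2 for w in ws]
    K = half_order
    # normal equations (A^T W A) c = A^T W d with d(w) = w
    G = [[sum(t * math.sin(j * w) * math.sin(k * w) for w, t in zip(ws, wt))
          for k in range(1, K + 1)] for j in range(1, K + 1)]
    rhs = [sum(t * math.sin(j * w) * w for w, t in zip(ws, wt))
           for j in range(1, K + 1)]
    return gauss_solve(G, rhs)

def response(c, w):
    """Amplitude response of the designed differentiator at frequency w."""
    return sum(ck * math.sin((k + 1) * w) for k, ck in enumerate(c))
```

The impulse response follows from the coefficients by antisymmetry (h[M+k] = -c_k/2, h[M-k] = c_k/2 around the center tap M); the proposed modification would replace the fixed 1/ω² weight with an iteratively adjusted one.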
Abstract: This paper presents non-destructive testing (NDT) by infrared thermography with a CO2 laser excitation source (wavelength 10.6 μm). The excitation is a controllable heating beam, confirmed by a preliminary test on a wooden plate of 1.2 m x 0.9 m x 1 cm. As a first application, the method is used to detect defects in CFRP heated by the 300 W laser for 40 s. Two samples of 40 cm x 40 cm x 4.5 cm are prepared, one with a defect and one without. The laser beam passes through the lens of a deflection device and heats the samples placed at a predetermined position and area. As a result, the absence of adhesive can be detected. The method demonstrates its applicability as NDT for composite materials, and this work provides a good basis for characterizing the laser beam, which will be very useful for subsequent detection campaigns.
Abstract: The effects of diverse carbon substrates on tabtoxin production were investigated for an isolated pathogenic Pseudomonas syringae pv. tabaci, the causal agent of tobacco wildfire, and are discussed in relation to the growth of the bacterium. The isolated organism was grown in batch culture on Woolley's medium (28°C, 200 rpm, for 5 days). Growth was measured by optical density (OD) at 620 nm, and tabtoxin production was quantified by the Escherichia coli (K-12) bioassay technique. Both growth and tabtoxin production were influenced by the substrates used (sugars, amino acids, organic acids), each as a sole carbon source or as a supplement. The most significant quantities of tabtoxin were obtained in the presence of certain amino acids used as the sole carbon source and/or as a supplement.
Abstract: This paper discusses an active power generator scheduling method for increasing the steady-state stability limit of power systems. Several generator optimization methods, namely Lagrange, PLN (Indonesian electricity company) operation, and the proposed Z-Thevenin-based method, are studied and compared with respect to their steady-state aspects. The proposed method is built upon the Thevenin equivalent impedance values between each load and each generator, and the steady-state stability index is obtained with the REI-DIMO method. The study considers the 500 kV Jawa-Bali interconnection system. The simulation results show that the proposed method yields the highest steady-state stability limit compared to the other optimization techniques, Lagrange and PLN operation. Thus, the proposed method can be used to determine the steady-state stability limit of the system, especially under peak load conditions.
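The Thevenin equivalent impedances underlying such a method can be read off the bus impedance matrix Zbus = inv(Ybus): the diagonal entry Zbus[i][i] is the Thevenin impedance seen from bus i, and the off-diagonal entries couple load and generator buses. The sketch below illustrates this on a hypothetical two-bus network; it is not the paper's full REI-DIMO procedure, and the admittance values are invented for illustration.

```python
def mat_inverse(A):
    """Invert a small (real or complex) matrix by Gauss-Jordan elimination."""
    n = len(A)
    M = [list(row) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivot
        M[col], M[p] = M[p], M[col]
        piv = M[col][col]
        M[col] = [v / piv for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

def thevenin_impedances(ybus):
    """Diagonal of Zbus = inv(Ybus): Thevenin impedance seen at each bus."""
    z = mat_inverse(ybus)
    return [z[i][i] for i in range(len(ybus))]
```

For a bus 1 with a shunt admittance of 2 S to ground and a 4 S line to bus 2, the Thevenin impedance at bus 2 is the series path 1/2 + 1/4 = 0.75 ohm, which the code reproduces; in a real study the entries of Ybus are complex.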
Abstract: Cloud Computing is a new technology that lets us use the Cloud to meet our computation needs. The Cloud refers to a scalable network of computers that work together like the Internet. An important element of Cloud Computing is that we shift the processing, managing, storing and implementation of our data from local machines into the Cloud, which helps us improve efficiency. Because it is a new technology, it has both advantages and disadvantages, which are scrutinized in this article. Some pioneers of this technology are then studied. We conclude that Cloud Computing will play an important role in our future lives.
Abstract: During the last couple of years, the degree of dependence on IT systems has reached a dimension nobody imagined possible 10 years ago. The increased usage of mobile devices (e.g., smartphones), wireless sensor networks and embedded devices (the Internet of Things) are only some examples of the dependency of modern societies on cyberspace. At the same time, the complexity of IT applications is rising continuously, for example because of the increasing use of cloud computing. Along with this, the threats to IT security have increased both quantitatively and qualitatively, as recent examples like STUXNET or the supposed cyber attack on the Illinois water system impressively prove. Once-isolated control systems are nowadays often publicly accessible - a fact never intended by their developers. Threats to IT systems do not respect areas of responsibility. Especially with regard to cyber warfare, IT threats are no longer limited to company or industry boundaries, administrative jurisdictions or state borders. One of the important countermeasures is increased cooperation among the participants, especially in the field of cyber defence. Besides political and legal challenges, there are technical ones as well. A better, at least partially automated, exchange of information is essential to (i) enable sophisticated situational awareness and (ii) counter the attacker in a coordinated way. Therefore, this publication evaluates state-of-the-art intrusion detection message exchange protocols in order to guarantee a secure information exchange between different entities.
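As a concrete illustration of the kind of message such exchange protocols carry, the sketch below shows a simplified alert in the IDMEF format (RFC 4765), one of the formats commonly evaluated in this context. The identifiers, timestamps, addresses and classification text are invented for illustration, and several details of the full schema are omitted.

```xml
<!-- Simplified IDMEF alert (RFC 4765); ids and addresses are illustrative only -->
<IDMEF-Message version="1.0" xmlns="urn:ietf:params:xml:ns:idmef">
  <Alert messageid="abc123">
    <Analyzer analyzerid="sensor-dmz-01"/>
    <CreateTime ntpstamp="0xbc723b45.0xef449129">2024-01-01T10:01:25Z</CreateTime>
    <Source>
      <Node>
        <Address category="ipv4-addr"><address>192.0.2.10</address></Address>
      </Node>
    </Source>
    <Target>
      <Node>
        <Address category="ipv4-addr"><address>192.0.2.50</address></Address>
      </Node>
    </Target>
    <Classification text="Possible port scan"/>
  </Alert>
</IDMEF-Message>
```

A structured, machine-readable format like this is what makes the partially automated, cross-organizational exchange discussed above feasible.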
Abstract: Since the majority of faults are found in a few of a system's modules, there is a need to investigate the modules that are affected more severely than others so that proper maintenance can be done in time, especially for critical applications. Neural networks have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics. Neural networks are sophisticated non-linear modeling techniques that are able to model complex functions; they are used when the exact nature of the input-output relationship is not known, and a key feature is that they learn this relationship through training. In the present work, various neural-network-based techniques are explored, and a comparative analysis is performed for predicting the level of maintenance needed by predicting the severity level of faults present in NASA's public domain defect dataset. The different algorithms are compared on the basis of Mean Absolute Error, Root Mean Square Error and accuracy values. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying software components into different levels of severity of fault impact. The algorithm can be used to develop a model for identifying modules that are heavily affected by faults.
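The Generalized Regression Neural Network singled out above is, at its core, a Nadaraya-Watson kernel estimator: every training pattern contributes to the prediction with a Gaussian weight that decays with distance. A minimal sketch follows; the feature vectors and the smoothing parameter are illustrative, not taken from the NASA dataset.

```python
import math

def grnn_predict(train_x, train_y, x, sigma=0.5):
    """GRNN prediction: Gaussian-weighted average of the training targets.

    train_x: list of feature vectors, train_y: list of targets (e.g. severity
    levels), x: query vector, sigma: smoothing parameter of the kernel."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(xi, x))
                        / (2.0 * sigma ** 2))
               for xi in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total
```

Because there is no iterative training, only the choice of sigma, the GRNN is quick to build, which is part of its appeal for small defect datasets; rounding the continuous output gives a severity class.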
Abstract: Inter-organizational Workflow (IOW) is commonly used to support collaboration between the heterogeneous and distributed business processes of different autonomous organizations in order to achieve a common goal. E-government is considered an application field of IOW. Coordinating the different organizations is the fundamental problem in IOW and remains a major cause of failure in e-government projects. In this paper, we introduce a new coordination model for IOW that improves collaboration between government administrations and respects the IOW requirements of e-government. For this purpose, we adopt a multi-agent approach, which deals more easily with the characteristics of inter-organizational digital government: distribution, heterogeneity and autonomy. Our model also integrates different technologies to deal with semantic and technological interoperability. Moreover, it preserves the existing systems of government administrations by offering distributed coordination based on communicating interfaces. This is especially relevant in developing countries, where administrations are not necessarily equipped with workflow systems. The use of our coordination techniques allows an easier and cheaper migration to an e-government solution. To illustrate the applicability of the proposed model, we present a case study of identity card creation in Tunisia.
Abstract: In this paper, the application of neural networks to the design of short-term temperature forecasting (STTF) systems for Kermanshah city, in western Iran, is explored. One important neural network architecture, the Multi-Layer Perceptron (MLP), is used to model STTF systems. The MLP was trained and tested using ten years (1996-2006) of meteorological data. The results show that the MLP network has the minimum forecasting error and can be considered a good method for modeling STTF systems.
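The MLP used for such forecasting is a one-hidden-layer network trained by backpropagation. The sketch below is a deliberately tiny pure-Python version for one scalar input; the paper's model uses multi-feature meteorological inputs, and the layer sizes, learning rate and training data here are illustrative assumptions.

```python
import math
import random

def train_mlp(data, hidden=8, epochs=2000, lr=0.05, seed=1):
    """One-hidden-layer tanh MLP for 1-D regression, batch gradient descent."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-2.0, 2.0) for _ in range(hidden)]   # input -> hidden
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]   # hidden -> output
    b2 = 0.0
    n = len(data)
    for _ in range(epochs):
        gw1 = [0.0] * hidden; gb1 = [0.0] * hidden
        gw2 = [0.0] * hidden; gb2 = 0.0
        for x, y in data:
            h = [math.tanh(w1[i] * x + b1[i]) for i in range(hidden)]
            e = sum(w2[i] * h[i] for i in range(hidden)) + b2 - y  # output error
            for i in range(hidden):
                gw2[i] += e * h[i]
                d = e * w2[i] * (1.0 - h[i] ** 2)  # backprop through tanh
                gw1[i] += d * x
                gb1[i] += d
            gb2 += e
        for i in range(hidden):            # mean-gradient descent step
            w1[i] -= lr * gw1[i] / n
            b1[i] -= lr * gb1[i] / n
            w2[i] -= lr * gw2[i] / n
        b2 -= lr * gb2 / n
    def predict(x):
        return sum(w2[i] * math.tanh(w1[i] * x + b1[i]) for i in range(hidden)) + b2
    return predict
```

In an STTF setting, x would be a normalized feature vector (previous temperatures, humidity, pressure) and the data would be split into training and test years, as done in the study.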
Abstract: A new low-voltage floating-gate MOSFET (FGMOS) based squarer, exploiting the square-law characteristic of the FGMOS, is proposed in this paper. The major advantages of the squarer are simplicity, rail-to-rail input dynamic range, low total harmonic distortion, and low power consumption. The proposed circuit is biased without body effect. The circuit is designed and simulated in SPICE using a 0.25 μm CMOS technology. The squarer operates at supply voltages of ±0.75 V. For a 0.75 Vpp input signal at 25 kHz, the total harmonic distortion (THD) and the maximum power consumption were found to be less than 1% and 319 μW, respectively.
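The square-law operation exploited by such a squarer can be written generically as follows. This is the textbook FGMOS saturation relation with capacitive coupling coefficients k_i = C_i/C_T; the symbols (k, V_B) are generic, not taken from the paper.

```latex
% Generic FGMOS square-law relation (illustrative symbols)
\begin{align}
V_{FG} &= \sum_i k_i V_i, \qquad k_i = \frac{C_i}{C_T},\\
I_D &= \beta \left(V_{FG} - V_T\right)^2
     = \beta \left(k\, v_{in} + V_B - V_T\right)^2\\
    &= \beta k^2 v_{in}^2
     + 2\beta k \left(V_B - V_T\right) v_{in}
     + \beta \left(V_B - V_T\right)^2.
\end{align}
```

A squarer keeps the quadratic term βk²v_in² and cancels the linear and constant terms, typically with a matched complementary branch; the capacitive attenuation k is also what enables the rail-to-rail input range at low supply voltages.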
Abstract: UML modeling of complex distributed systems is often a great challenge due to the large number of parallel real-time operating components. In this paper, the problems of verifying such systems are discussed. ECPN, an Extended Colored Petri Net, is defined to formally describe the state transitions of components and the interactions among components. The relationship between sequence diagrams and Free Choice Petri Nets is investigated, and Free Choice Petri Net theory helps verify the liveness of sequence diagrams. By converting sequence diagrams to ECPNs and then comparing the behaviors of the sequence diagram ECPNs and statecharts, the consistency among models is analyzed. Finally, a verification process for an example model is demonstrated.
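Underlying ECPN is the ordinary place/transition firing rule, which the sketch below illustrates without colors: a transition is enabled when every input place holds enough tokens, and firing it moves tokens from the pre-set to the post-set. ECPN additionally attaches colored tokens, guards and timing, which are omitted here; the data layout and names are ours.

```python
def enabled(marking, pre):
    """Transitions whose every input place holds enough tokens."""
    return [t for t, inputs in pre.items()
            if all(marking.get(p, 0) >= w for p, w in inputs.items())]

def fire(marking, pre, post, t):
    """Fire transition t: consume pre-set tokens, produce post-set tokens."""
    m = dict(marking)
    for p, w in pre[t].items():
        m[p] -= w
    for p, w in post[t].items():
        m[p] = m.get(p, 0) + w
    return m
```

Liveness analysis, as used above for sequence diagrams, asks whether from every reachable marking each transition can eventually fire again; for Free Choice nets this has well-developed structural criteria.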
Abstract: Large-scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in Data Grids is characterized by a strong decentralization of data across several sites, whose objective is to ensure the availability and reliability of the data while providing fault tolerance and scalability, which is not possible without the use of replication techniques. Unfortunately, these techniques have a high cost, because consistency must be maintained between the distributed replicas. Nevertheless, agreeing to live with certain imperfections can improve the performance of the system by improving concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large-scale systems. Our approach is based on a hierarchical representation model with three layers and serves a double purpose: it first makes it possible to reduce response times compared to a completely pessimistic approach, and it then improves the quality of service compared to an optimistic approach.
Abstract: The Power Spectral Density (PSD) of quasi-stationary processes can be efficiently estimated using the short-time Fourier transform (STFT). In this paper, an algorithm is proposed that computes the PSD of a quasi-stationary process efficiently using an offline autoregressive model order estimation algorithm, a recursive parameter estimation technique, and a modified sliding-window discrete Fourier transform algorithm. The main difference between this algorithm and the STFT is that the sliding window (SW) and the window for spectral estimation (WSA) are defined separately: the WSA is updated, and its PSD computed, only when a change in statistics is detected in the SW. The computational complexity of the proposed algorithm is found to be lower than that of the standard STFT technique.
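The sliding-window DFT building block referred to above updates all N bins in O(N) per incoming sample, instead of recomputing an O(N log N) FFT: each bin absorbs the difference between the newest and oldest sample and is rotated by one twiddle factor. The sketch below shows only this standard update, not the paper's change-detection logic or the separate SW/WSA windows.

```python
import cmath
import math

class SlidingDFT:
    """Sliding-window DFT: X_k(n) = (X_k(n-1) + x_new - x_old) * e^{j2πk/N}."""

    def __init__(self, N):
        self.N = N
        self.buf = [0.0] * N          # circular buffer of the last N samples
        self.pos = 0
        self.X = [0j] * N             # current DFT of the window
        self.tw = [cmath.exp(2j * math.pi * k / N) for k in range(N)]

    def update(self, x_new):
        x_old = self.buf[self.pos]    # sample leaving the window
        self.buf[self.pos] = x_new
        self.pos = (self.pos + 1) % self.N
        d = x_new - x_old
        self.X = [(Xk + d) * self.tw[k] for k, Xk in enumerate(self.X)]
        return self.X
```

After each update, X equals the DFT of the most recent N samples exactly (up to floating-point drift, which long-running implementations periodically correct); squaring the bin magnitudes gives the periodogram PSD estimate.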
Abstract: This study investigated a strategy of blending lead-laden sludge with Al-rich precursors to reduce the release of metals from the stabilized products. Using PbO as the simulated lead-laden sludge sintered with γ-Al2O3 at Pb:Al molar ratios of 1:2 and 1:12, PbAl2O4 and PbAl12O19, respectively, were formed as the final products of the sintering process. By firing the PbO + γ-Al2O3 mixtures with different Pb/Al molar ratios at 600 to 1000 °C, the lead transformation was determined from X-ray diffraction (XRD) data. In the Pb/Al molar ratio 1/2 system, the formation of PbAl2O4 is initiated at 700 °C, but effective formation was observed above 750 °C; an intermediate phase, Pb9Al8O21, was detected in the temperature range of 800-900 °C. However, different incorporation behavior was observed when sintering PbO with Al-rich precursors at a Pb/Al molar ratio of 1/12, leading to the formation of PbAl12O19. The effects of both temperature and time on the formation of the PbAl2O4 and PbAl12O19 phases during sintering were evaluated. Finally, a prolonged leaching test modified from the U.S. Environmental Protection Agency's toxicity characteristic leaching procedure (TCLP) was used to evaluate the durability of the PbO, Pb9Al8O21, PbAl2O4 and PbAl12O19 phases. Comparison of the leaching results of the four phases demonstrated the higher intrinsic resistance of PbAl12O19 against acid attack.
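The phase formation described above corresponds to the following overall stoichiometries, written for the oxide precursors; these balanced equations are inferred from the phases named in the abstract, not quoted from the paper.

```latex
% Overall stoichiometry of the phases identified by XRD
\begin{align}
\mathrm{PbO} + \gamma\text{-}\mathrm{Al_2O_3}
  &\rightarrow \mathrm{PbAl_2O_4} \\
9\,\mathrm{PbO} + 4\,\gamma\text{-}\mathrm{Al_2O_3}
  &\rightarrow \mathrm{Pb_9Al_8O_{21}} \\
\mathrm{PbO} + 6\,\gamma\text{-}\mathrm{Al_2O_3}
  &\rightarrow \mathrm{PbAl_{12}O_{19}}
\end{align}
```

Each equation is mass-balanced (e.g., 9 Pb, 8 Al and 9 + 12 = 21 O in the intermediate phase), consistent with the 1:2 and 1:12 Pb:Al blending ratios used in the study.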