Abstract: When the foundations of structures are subjected to cyclic loading with amplitudes below their permissible load, the uniform and non-uniform settlement of such structures is often a concern. Storage tank foundations subjected to numerous filling and discharging cycles, and railway ballast courses under repeated transportation loads, are examples of such conditions. This paper deals with the effects of using a new generation of reinforcement, the Grid-Anchor, to reduce the permanent settlement of these foundations under different proportions of the ultimate load. Other factors, such as the type and number of reinforcement layers and the number of loading cycles, are studied numerically. Numerical models were built with the Plaxis 3D Tunnel finite element code. The results show that by using grid-anchors and increasing the number of their layers in proportion to the applied cyclic load, the permanent settlement decreases by up to 42% relative to the unreinforced condition, depending on the number of reinforcement layers, the percentage of applied load and the number of loading cycles; furthermore, the number of loading cycles needed to reach a constant value of dimensionless settlement decreases by up to 20% relative to the unreinforced condition.
Abstract: In recent years, a number of works proposing the
combination of multiple classifiers to produce a single
classification have been reported in remote sensing literature. The
resulting classifier, referred to as an ensemble classifier, is
generally found to be more accurate than any of the individual
classifiers making up the ensemble. As accuracy is the primary
concern, much of the research in the field of land cover
classification is focused on improving classification accuracy. This
study compares the performance of four ensemble approaches
(boosting, bagging, DECORATE and random subspace) with a
univariate decision tree as the base classifier. Two training datasets,
one noise-free and the other containing 20 percent noise, were used to
judge the performance of the different ensemble approaches. Results
with the noise-free dataset suggest an improvement of about 4% in
classification accuracy with all ensemble approaches over the results
provided by the univariate decision tree classifier. The highest
classification accuracy, 87.43%, was achieved by the boosted decision
tree. A comparison of results with the noisy dataset suggests that the
bagging, DECORATE and random subspace approaches work well with these
data, whereas the performance of the boosted decision tree degrades to
a classification accuracy of 79.7%, which is even lower than the
80.02% achieved by the unboosted decision tree classifier.
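As a sketch of the bagging approach compared above, the following illustrates bootstrap aggregation with a one-level decision stump as a stand-in base classifier. The data, stump learner and ensemble size are illustrative only, not the study's remote sensing data or its tree induction algorithm.

```python
import random
from collections import Counter

def stump_fit(data):
    """Fit a one-level decision stump on (x, label) pairs: pick the
    threshold on x that best separates the two classes."""
    best = None
    for threshold, _ in data:
        preds = [(0 if x < threshold else 1) for x, _ in data]
        acc = sum(p == y for p, (_, y) in zip(preds, data)) / len(data)
        if best is None or acc > best[1]:
            best = (threshold, acc)
    return best[0]

def bagged_ensemble(data, n_models=11, seed=0):
    """Train n_models stumps, each on a bootstrap resample of the data."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]  # bootstrap resample
        stumps.append(stump_fit(sample))
    return stumps

def predict(stumps, x):
    """Majority vote over the ensemble members."""
    votes = Counter(0 if x < t else 1 for t in stumps)
    return votes.most_common(1)[0][0]

# Toy 1-D data: class 0 below 5, class 1 above, with one noisy label.
data = [(1, 0), (2, 0), (3, 0), (4, 1), (6, 1), (7, 1), (8, 1), (9, 0)]
stumps = bagged_ensemble(data)
print(predict(stumps, 2), predict(stumps, 8))
```

Averaging over resamples is what gives bagging its robustness to label noise, consistent with the noisy-dataset results reported above.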
Abstract: In this paper, full state feedback controllers capable of regulating and tracking a speed trajectory are presented. A fourth-order nonlinear mean value model of a 448 kW turbocharged diesel engine, published earlier, is used for this purpose. To design the controllers, the nonlinear model is linearized and represented in state-space form. Full state feedback controllers capable of meeting the varying speed demands of drivers are presented. The main focus here is to investigate the sensitivity of the controller to perturbations in the parameters of the original nonlinear model. The suggested controller is shown to be highly insensitive to parameter variations. This indicates that the controller is likely to perform with the same accuracy even after significant wear and tear of the engine due to years of use.
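As an illustration of the full state feedback principle used here (the actual fourth-order engine matrices are not reproduced in the abstract, so the A, B and K below are an invented second-order example), a gain K is chosen so that the closed-loop matrix A - BK has prescribed stable eigenvalues:

```python
import numpy as np

# Hypothetical linearized plant x' = A x + B u (illustrative values only).
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])   # open loop char. poly s^2 + s - 2: unstable
B = np.array([[0.0],
              [1.0]])

# Full state feedback u = -K x. Desired closed-loop eigenvalues -2 and -3
# give char. poly s^2 + 5 s + 6; matching coefficients yields K = [8, 4].
K = np.array([[8.0, 4.0]])

open_eigs = np.linalg.eigvals(A)
closed_eigs = np.linalg.eigvals(A - B @ K)
print(sorted(open_eigs.real), sorted(closed_eigs.real))
```

The same eigenvalue check, repeated while perturbing the entries of A, is one simple way to probe the sensitivity of such a controller to model parameter variations.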
Abstract: Heart failure is the most common cause of death nowadays, but if medical help is given promptly, the patient's life may be saved in many cases. Numerous heart diseases can be detected by analyzing electrocardiograms (ECG). Artificial Neural Networks (ANN) are computer-based expert systems that have proved to be useful in pattern recognition tasks. ANN can be used in different phases of the decision-making process, from classification to diagnostic procedures. This work consists of a review followed by a novel method.
The purpose of the review is to assess the evidence of healthcare benefits involving the application of artificial neural networks to the clinical functions of diagnosis, prognosis and survival analysis of ECG signals. The developed method is based on a compound neural network (CNN) that classifies ECGs as normal or as carrying an atrioventricular heart block (AVB). This method uses three different feed-forward multilayer neural networks. A single output unit encodes the probability of an AVB occurrence. A value between 0 and 0.1 is the desired output for a normal ECG; a value between 0.1 and 1 indicates an occurrence of an AVB. The results show that this compound network performs well in detecting AVBs, with a sensitivity of 90.7% and a specificity of 86.05%. The accuracy is 87.9%.
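The decision rule and the performance metrics described above can be sketched on hypothetical network outputs as follows; only the 0.1 threshold comes from the abstract, while the outputs and labels are invented for illustration:

```python
def classify(output, threshold=0.1):
    """The abstract's decision rule: outputs at or below the threshold
    are read as normal (0), outputs above it as AVB (1)."""
    return 1 if output > threshold else 0

def sensitivity_specificity(outputs, labels, threshold=0.1):
    """labels: 1 = AVB, 0 = normal. Returns (sensitivity, specificity)."""
    preds = [classify(o, threshold) for o in outputs]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical network outputs for 4 AVB ECGs and 4 normal ECGs.
outputs = [0.9, 0.7, 0.05, 0.8, 0.02, 0.04, 0.3, 0.08]
labels  = [1,   1,   1,    1,   0,    0,    0,   0]
sens, spec = sensitivity_specificity(outputs, labels)
print(sens, spec)  # 0.75 0.75
```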
Abstract: The study in this paper underlines the importance of the correct joint selection of the spreading codes at the transmitter side and the detector at the receiver side for the uplink of multicarrier code division multiple access (MC-CDMA) in the presence of nonlinear distortion due to a high power amplifier (HPA). The bit error rate (BER) of the system for different spreading sequences (Walsh code, Gold code, orthogonal Gold code, Golay code and Zadoff-Chu code) and different kinds of receivers (the minimum mean-square error receiver (MMSE-MUD) and the microstatistic multi-user receiver (MSF-MUD)) is compared by means of simulations for an MC-CDMA transmission system. Finally, the results of the analysis show that the application of MSF-MUD in combination with Golay codes significantly outperforms the other tested spreading codes and receivers for all commonly used HPA models.
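Of the spreading sequences compared, the Walsh codes are the simplest to construct. A minimal sketch of the Sylvester construction, with a check of the mutual orthogonality that makes them candidates for synchronous MC-CDMA spreading:

```python
import numpy as np

def walsh_codes(n):
    """Build a 2^n x 2^n Walsh-Hadamard matrix by the Sylvester
    construction; each row is one spreading code with entries +/-1."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh_codes(3)      # 8 codes of length 8
gram = codes @ codes.T      # cross-correlations at zero lag
print(gram)                 # 8 * identity: the codes are orthogonal
```

Gold, Golay and Zadoff-Chu sequences are constructed differently and trade strict orthogonality for other correlation properties, which is one reason their BER behaviour under HPA distortion differs.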
Abstract: This article presents the simulation, parameterization and optimization of an electromagnet with a C-shaped configuration, intended for the study of the magnetic properties of materials. The electromagnet studied consists of a C-shaped yoke, which provides self-shielding that minimizes losses of magnetic flux density, two poles of high magnetic permeability, and power coils wound on the poles. The main physical variable studied was the static magnetic flux density in a column within the gap between the poles, with a square cross section of 4 cm^2 and a length of 5 cm. We sought a set of parameters that achieves a uniform magnetic flux density of 1x10^4 Gauss or above in the column when the system operates at room temperature with a current consumption not exceeding 5 A. By means of a magnetostatic analysis using the finite element method, the magnetic flux density and the distribution of the magnetic field lines were visualized and quantified. From the results obtained by simulating an initial configuration of the electromagnet, a structural optimization of the geometry of the adjustable caps for the ends of the poles was performed. The effect of the magnetic permeability of the soft magnetic materials used in the pole system, namely low-carbon steel (0.08% C), Permalloy (45% Ni, 54.7% Fe) and Mu-metal (21.2% Fe, 78.5% Ni), was also evaluated. The intensity and uniformity of the magnetic field in the gap showed a strong dependence on the factors described above. The magnetic field achieved in the column was uniform, and its magnitude ranged between 1.5x10^4 Gauss and 1.9x10^4 Gauss depending on the pole material used, with the possibility of increasing the magnetic field further by choosing a suitable cap geometry, introducing a cooling system for the coils and adjusting the spacing between the poles.
This makes the device a versatile and scalable tool to generate the magnetic field necessary to perform magnetic characterization of materials by techniques such as vibrating sample magnetometry (VSM), Hall-effect, Kerr-effect magnetometry, among others. Additionally, a CAD design of the modules of the electromagnet is presented in order to facilitate the construction and scaling of the physical device.
Abstract: The increasing importance of data streams arising in a wide range of advanced applications has led to the extensive study of mining frequent patterns. Mining data streams poses many new challenges, among which are the one-scan nature, the unbounded memory requirement and the high arrival rate of data streams. In this paper, we propose a new approach for mining itemsets on data streams. Our approach, SFIDS, has been developed based on the FIDS algorithm. The main aims were to keep some advantages of the previous approach, resolve some of its drawbacks, and consequently improve run time and memory consumption. Our approach has the following advantages: it uses a lattice-like data structure for keeping frequent itemsets; it separates regions from each other by deleting common nodes, which decreases the search space, memory consumption and run time; and finally, considering the CPU constraint, when an increasing data arrival rate would overload the system, SFIDS automatically detects this situation and discards some of the unprocessed data. Based on a probabilistic technique, we guarantee that the error of the results is bounded by a user pre-specified threshold. Final results show that the SFIDS algorithm achieves about a 50% run time improvement over the FIDS approach.
Abstract: Border Gateway Protocol (BGP) is the standard routing protocol between the various autonomous systems (AS) in the Internet. In the event of a failure, empirical measurements have shown a considerable delay in BGP convergence. During the convergence time, BGP repeatedly advertises new routes to some destination and withdraws old ones until it reaches a stable state. It has been found that the KEEPALIVE message timer and the HOLD time are two parameters affecting the convergence speed. This paper aims to find the optimal values of the KEEPALIVE timer and the HOLD time that maximally reduce the convergence time without increasing the traffic. The optimal KEEPALIVE timer value found in this paper is 30 seconds instead of 60 seconds, and the optimal value for the HOLD time is 90 seconds instead of 180 seconds.
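The 30/90 second pair found here preserves the conventional 1:3 ratio between the keepalive interval and the hold time (RFC 4271 suggests sending KEEPALIVEs at one third of the hold time). On a Cisco-style router, the recommended values could be applied per neighbor roughly as follows; the AS numbers and neighbor address are placeholders:

```
router bgp 65001
 neighbor 192.0.2.1 remote-as 65002
 ! keepalive 30 s, hold time 90 s, instead of the 60/180 defaults
 neighbor 192.0.2.1 timers 30 90
```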
Abstract: This paper describes an algorithm to estimate vehicle velocity in real time using an image processing technique and known camera calibration parameters. The presented algorithm involves several main steps. First, the moving object is extracted using a frame differencing technique. Second, an object tracking method is applied, and the speed is estimated from the displacement of the object's centroid. Several assumptions are listed to simplify the mapping between the 2D image plane and 3D real-world coordinates. The results obtained from the experiment have been compared to the estimated ground truth. The experiment shows that the proposed algorithm achieves a velocity estimation accuracy of about ±1.7 km/h.
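The displacement-to-speed step can be sketched as follows. The pixel scale and frame rate are hypothetical placeholders, since the actual calibration parameters are not given in the abstract:

```python
import math

# Hypothetical calibration: metres spanned by one pixel on the road
# plane, and the camera frame rate (illustrative values only).
METRES_PER_PIXEL = 0.05
FPS = 25.0

def speed_kmh(centroid_prev, centroid_curr, frames_elapsed=1):
    """Estimate vehicle speed from the displacement of the tracked
    object's centroid between two frames."""
    dx = centroid_curr[0] - centroid_prev[0]
    dy = centroid_curr[1] - centroid_prev[1]
    pixels = math.hypot(dx, dy)
    metres = pixels * METRES_PER_PIXEL    # image scale to world scale
    seconds = frames_elapsed / FPS
    return metres / seconds * 3.6         # m/s -> km/h

# Centroid moved 10 pixels between two consecutive frames.
print(speed_kmh((100, 200), (110, 200)))  # 45.0
```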
Abstract: With the advent of emerging personal computing paradigms such as ubiquitous and mobile computing, Web contents are becoming accessible from a wide range of mobile devices. Since these devices do not have the same rendering capabilities, Web contents need to be adapted for transparent access from a variety of client agents. Such content adaptation is applied either to an individual element or to a set of consecutive elements in a Web document and results in better rendering and faster delivery to the client device. Nevertheless, Web content adaptation sets new challenges for semantic markup. This paper presents an advanced components platform, called SMC, enabling the development of mobility applications and services according to a channel model based on the principles of Service-Oriented Architecture (SOA). It then goes on to describe the potential for integration with the Semantic Web through a novel framework of external semantic annotation that prescribes a scheme for representing semantic markup files and a way of associating Web documents with these external annotations. The role of semantic annotation in this framework is to describe the contents of the individual documents themselves, assuring the preservation of the semantics during the process of adapting content rendering. Semantic Web content adaptation is a way of adding value to Web contents and facilitates the repurposing of Web contents (enhanced browsing, Web Services location and access, etc.).
Abstract: The ability of UML to handle the modeling process of complex industrial software applications has increased its popularity to the extent of becoming the de facto language serving the design purpose. Although its rich graphical notation, naturally oriented towards the object-oriented concept, facilitates understandability, it hardly succeeds in capturing all domain-specific aspects in a satisfactory way. OCL, as the standard language for expressing additional constraints on UML models, has great potential to help improve expressiveness. Unfortunately, it suffers from a weak formalism due to its poor semantics, resulting in many obstacles to the building of tool support and thus to its application in industry. For this reason, much research has been devoted to formalizing OCL expressions using a more rigorous approach. Our contribution joins this work in a complementary way, since it focuses specifically on OCL predefined properties, which constitute an important part of the construction of OCL expressions. Using formal methods, we succeed in rigorously expressing OCL predefined functions.
Abstract: The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed that of conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small block lengths of up to 800 bits and up to 2000 iterations, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving as the number of iterations increases.
Abstract: In order to make surfing the Internet faster, and to avoid the redundant processing load incurred by each request for the same web page, many caching techniques have been developed to reduce the latency of retrieving data on the World Wide Web. In this paper, we give a quick overview of existing web caching techniques for dynamic web pages, and then introduce a design and implementation model that takes advantage of the "URL Rewriting" feature of some popular web servers, e.g. Apache, to provide an effective approach to caching dynamic web pages.
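As a sketch of the URL Rewriting idea on Apache (the cache directory layout is hypothetical and not taken from the paper), mod_rewrite can serve a previously generated static copy of a dynamic page whenever one exists, and otherwise fall through to the dynamic script:

```apache
RewriteEngine On
# If a cached static copy of the requested page exists on disk...
RewriteCond %{DOCUMENT_ROOT}/cache/$1.html -f
# ...serve it instead of regenerating the dynamic page.
RewriteRule ^/?(.*)$ /cache/$1.html [L]
```

The application itself remains responsible for writing the static copies into the cache directory and invalidating them when the underlying data changes.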
Abstract: Active Vibration Control (AVC) is an important problem in structures. One way to tackle this problem is to make the structure smart, adaptive and self-controlling. The objective of active vibration control is to reduce the vibration of a system by automatic modification of the system's structural response. This paper features the modeling and design of a Periodic Output Feedback (POF) control technique for the active vibration control of a flexible Timoshenko cantilever beam, for a multivariable case with 2 inputs and 2 outputs, retaining the first 2 dominant vibratory modes and using the smart structure concept. The entire structure is modeled in state-space form using piezoelectric theory, Timoshenko beam theory, the Finite Element Method (FEM) and state-space techniques. Simulations are performed in MATLAB. The effect of placing the sensor/actuator at 2 finite element locations along the length of the beam is observed. The open loop responses, closed loop responses and the tip displacements with and without the controller are obtained, and the performance of the smart system in active vibration control is evaluated.
Abstract: This paper demonstrates a bus location system for route buses through an experiment in a real environment. A bus location system provides information such as bus delays and positions. The system uses the actual service and position data of buses, and that information should match the data in the database. The system has two potential problems. First, it could be costly to prepare the devices that obtain bus positions. Second, it could be difficult to match the service data of the buses. To avoid these problems, we developed the system at low cost and in a short time by using smartphones with GPS together with the bus route system. The system realizes path planning that takes bus delays into account and displays the positions of the buses on a map. The bus location system was demonstrated on route buses with smartphones for two months.
Abstract: The purpose of this study is to examine the self-esteem and decision-making levels of students receiving education in schools of physical education and sports. The population of the study consisted of 258 students, of whom 152 were male and 106 were female (mean age = 19.3713 ± 1.6968), receiving education in the schools of physical education and sports of Selcuk University, Inonu University, Gazi University and Karamanoglu Mehmetbey University. To achieve the purpose of the study, the Melbourne Decision Making Questionnaire developed by Mann et al. (1998) [1] and adapted to Turkish by Deniz (2004) [2], and the Self-Esteem Scale developed by Aricak (1999) [3], were utilized. For analyzing and interpreting the data, the Kolmogorov-Smirnov test, t-test and one-way ANOVA were used, while for determining the differences between the groups the Tukey test and multiple linear regression were employed, and significance was accepted at P
Abstract: A comparative analysis of Wald's and Bayes-type sequential methods for testing hypotheses is offered. The merits of the new sequential test are: universality, which consists in optimality (with respect to given criteria) and uniformity of the decision-making regions for any number of hypotheses; simplicity, convenience and uniformity of the algorithms for their realization; reliability of the obtained results; and the possibility of keeping the error probabilities at desired values. Computation results for concrete examples are given, which confirm the above-stated characteristics of the new method and characterize the considered methods relative to each other.
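As background, Wald's classical sequential probability ratio test (SPRT), which serves as the baseline here, accepts or rejects a simple hypothesis once the cumulative log-likelihood ratio crosses thresholds set by the desired error probabilities. A minimal sketch for two Bernoulli hypotheses (the parameter values and error targets are illustrative, not taken from the paper):

```python
import math

def sprt_bernoulli(samples, p0=0.3, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on a Bernoulli stream.
    Returns ('H0' or 'H1', samples used), or ('undecided', n) if the
    stream ends before either threshold is crossed."""
    upper = math.log((1 - beta) / alpha)   # crossing above accepts H1
    lower = math.log(beta / (1 - alpha))   # crossing below accepts H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio contribution of a single observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

print(sprt_bernoulli([1, 1, 1, 1, 1]))  # a run of ones favours H1
print(sprt_bernoulli([0, 0, 0, 0, 0]))  # a run of zeros favours H0
```

With these settings the decisive thresholds are at ±log(19) ≈ ±2.944, so four consistent observations (each contributing ±log(7/3) ≈ ±0.847) settle the test.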
Abstract: In this paper, an analytical model is presented to describe the channel noise in GME SGT/CGT MOSFETs, based on explicit functions of MOSFET geometry and biasing conditions for all channel lengths down to the deep submicron regime, and it is verified against experimental data. The results show the impact of various parameters such as gate bias, drain bias, channel length, device diameter and gate material work function difference on the drain current noise spectral density of the device, reflecting its applicability to circuit design.
Abstract: Computerized alarm systems have been applied increasingly to nuclear power plants. For existing plants, an add-on computer alarm system is often installed in the control rooms. Alarm avalanches during plant transients are a major problem for the alarm systems in nuclear power plants. Computerized alarm systems can process alarms to reduce their number during plant transients. This paper describes various alarm processing methods, an alarm cause tracking function, and various alarm presentation schemes for showing alarm information to the operators effectively. These were considered during the development of several computerized alarm systems for Korean nuclear power plants and were found to be helpful to the operators.
Abstract: The emerging concept of e-government transforms the way in which citizens deal with their governments. Citizens can execute the intended services online, anytime and anywhere. This results in great benefits for both governments (a reduced number of officers) and citizens (more flexibility and time saving). Therefore, building a maturity model to assess e-government portals is desirable to help in the improvement process of such portals. This paper proposes an e-government maturity model based on measuring the presence of best practices. The main benefit of such a maturity model is that it provides a way to rank an e-government portal based on the best practices used, and also gives a set of recommendations for advancing to a higher stage of the maturity model.