Abstract: Long Term Evolution (LTE) is a 4G wireless broadband technology developed by the Third Generation Partnership Project (3GPP) in Release 8, intended to ensure the competitiveness of the Universal Mobile Telecommunications System (UMTS) for the next 10 years and beyond. The concepts for LTE systems were introduced in 3GPP Release 8 with the objectives of a high-data-rate, low-latency and packet-optimized radio access technology. In this paper, the performance of different TCP variants over an LTE network is investigated. The performance of TCP over LTE is affected mostly by the links of the wired network and the total bandwidth available at the serving base station. This paper describes an NS-2 based simulation analysis of TCP-Vegas, TCP-Tahoe, TCP-Reno, TCP-NewReno, TCP-SACK, and TCP-FACK, with full modeling of all traffic of the LTE system. The evaluation of network performance with all TCP variants is based mainly on throughput, average delay and packet loss. The analysis of TCP performance over LTE shows that all TCP variants achieve similar throughput, with TCP-Vegas performing better than the other variants.
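The three evaluation metrics named above (throughput, average delay, packet loss) can be computed directly from an NS-2 ASCII trace. The sketch below assumes the standard ns-2 wired trace column layout (event, time, from-node, to-node, packet type, size, ..., packet id); the function name and the sink-node id are illustrative, not taken from the paper.

```python
def tcp_metrics(trace_lines, sink="2"):
    """Throughput, average delay and packet loss from an ns-2 trace."""
    send_time, delays = {}, []
    bytes_rx, drops = 0, 0
    first_rx = last_rx = None
    for line in trace_lines:
        f = line.split()
        if len(f) < 12 or f[4] != "tcp":
            continue
        event, t, size, pid = f[0], float(f[1]), int(f[5]), f[11]
        if event == "+" and pid not in send_time:
            send_time[pid] = t                 # first enqueue = send time
        elif event == "r" and f[3] == sink:    # arrival at the sink node
            bytes_rx += size
            if pid in send_time:
                delays.append(t - send_time[pid])
            first_rx = t if first_rx is None else first_rx
            last_rx = t
        elif event == "d":                     # packet dropped at a queue
            drops += 1
    span = last_rx - first_rx if last_rx is not None and last_rx > first_rx else 1.0
    return {
        "throughput_bps": 8 * bytes_rx / span,
        "avg_delay_s": sum(delays) / len(delays) if delays else 0.0,
        "lost_packets": drops,
    }
```

The same routine would be run once per TCP variant's trace file to produce the comparison figures.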
Abstract: In this article, a modification of the fuzzy ART network algorithm is carried out, aiming at making it supervised. It consists of searching for the comparison, training and vigilance parameters that give the minimum quadratic distances between the outputs of the training base and those obtained by the network. The same process is applied to determine the parameters of the fuzzy ARTMAP giving the best-performing network. The modification consists in having the fuzzy ARTMAP learn a base of examples not just once, as is customary, but as many times as its architecture keeps evolving or the objective error has not been reached. In this way, we do not have to worry about the values to impose on the eight (08) parameters of the network. To evaluate each of these three modified networks, a comparison of their performances is carried out. As an application, we carried out a classification of an image of the Bay of Algiers taken by SPOT XS. We use as evaluation criteria the training duration, the mean square error (MSE) in the control step, and the rate of good classification per class. The results of this study, presented as curves, tables and images, show that the modified fuzzy ARTMAP presents the best quality/computing-time compromise.
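A minimal sketch of the fuzzy ART building blocks that the modified algorithm tunes (choice function, vigilance test, learning rule), assuming the standard fuzzy ART formulation; the parameter names alpha, rho and beta follow the textbook notation, not values from the paper.

```python
def fuzzy_and(a, b):
    """Component-wise fuzzy AND (minimum)."""
    return [min(x, y) for x, y in zip(a, b)]

def norm(v):
    """City-block norm |v|."""
    return sum(v)

def choice(I, w, alpha):
    """Choice function T_j = |I ^ w_j| / (alpha + |w_j|)."""
    return norm(fuzzy_and(I, w)) / (alpha + norm(w))

def passes_vigilance(I, w, rho):
    """Match criterion |I ^ w_j| / |I| >= rho."""
    return norm(fuzzy_and(I, w)) / norm(I) >= rho

def learn(I, w, beta):
    """Learning rule w' = beta*(I ^ w) + (1 - beta)*w."""
    return [beta * m + (1 - beta) * x for m, x in zip(fuzzy_and(I, w), w)]
```

The modification described in the abstract would sweep these parameters, retraining until the objective error is reached, rather than fixing them in advance.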
Abstract: This paper concerns the experimental and numerical investigation of the energy absorption and axial tearing behaviour of aluminium 6060 circular thin-walled tubes under static axial compression. The tubes are received in the T66 heat treatment condition with a fixed outer diameter of 42 mm, thickness of 1.5 mm and length of 120 mm. The primary variables are the conical die angles (15°, 20° and 25°). Numerical simulations are carried out with the ANSYS/LS-DYNA software tool to investigate the effect of friction between the tube and the die.
Abstract: A dent is a gross distortion of the pipe cross-section.
Dent depth is defined as the maximum reduction in the diameter of
the pipe compared to the original diameter. Pipeline dent finite
element (FE) simulation and theoretical analysis are conducted in this
paper to develop an understanding of the geometric characteristics
and strain distribution in the pressurized dented pipe. Based on the
results, the magnitude of the denting force increases significantly with increasing internal pressure, and the maximum circumferential and longitudinal strains increase with both the internal pressure and the dent depth. The results can be used for
characterizing dents and ranking their risks to the integrity of a
pipeline.
Abstract: Term Extraction, a key data preparation step in Text Mining, extracts terms, i.e. relevant collocations of words, attached to specific concepts (e.g. genetic-algorithms and decision-trees are terms associated with the concept "Machine Learning"). In this paper, the task of extracting interesting collocations is achieved through a supervised learning algorithm, exploiting a few collocations manually labelled as interesting/not interesting. From these examples, the ROGER algorithm learns a numerical function inducing a ranking on the collocations. This ranking is optimized using genetic algorithms, maximizing the trade-off between the false positive and true positive rates (Area Under the ROC Curve). The approach uses a particular representation for the word collocations, namely the vector of values of the standard statistical interestingness measures attached to each collocation. As this representation is general (across corpora and natural languages), generality tests were performed by applying the ranking function learned from an English corpus in biology to a French corpus of curricula vitae, and vice versa, showing good robustness of the approach compared to the state-of-the-art Support Vector Machine (SVM).
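The fitness the genetic algorithm maximizes, the Area Under the ROC Curve, equals the Wilcoxon-Mann-Whitney statistic: the probability that a randomly chosen interesting collocation is ranked above a randomly chosen uninteresting one. A minimal sketch (the function name is illustrative):

```python
def auc(scores, labels):
    """AUC of a scoring function; labels: 1 = interesting, 0 = not."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Wilcoxon-Mann-Whitney: count positive/negative pairs ranked correctly,
    # with ties counted as half a win.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A GA in ROGER's spirit would evolve the weights of the numerical ranking function and use such an AUC value as the fitness of each candidate.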
Abstract: This paper deals with the modeling and evaluation of the influence of multiplicative phase noise on the bit error rate in a general space communication system. Our research focuses on systems with multi-state phase shift keying modulation techniques, and it turns out that the phase noise significantly affects the bit error rate, especially at higher signal-to-noise ratios. These results come from a system model created in the Matlab environment and are shown in the form of constellation diagrams and bit error rate dependencies. The change of the user data bit rate is also considered and included in the simulation results. The obtained outcomes confirm the theoretical presumptions.
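The natural baseline for measuring the phase-noise degradation is the textbook M-PSK symbol error rate over AWGN, P_s ≈ 2·Q(√(2·Es/N0)·sin(π/M)) for M > 2. The sketch below implements only this standard approximation, not the paper's phase-noise model:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def mpsk_ser(es_n0_db, M):
    """Approximate M-PSK symbol error rate over AWGN (M > 2)."""
    es_n0 = 10 ** (es_n0_db / 10)          # dB -> linear
    return 2 * q_func(math.sqrt(2 * es_n0) * math.sin(math.pi / M))
```

Comparing a simulated curve with phase noise against this baseline makes the extra degradation at high SNR visible.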
Abstract: The permanent magnet synchronous motor (PMSM) is very useful in many applications, and vector control is a popular way of controlling it. In this paper, an optimal vector control for the PMSM is first designed, and the results are compared with conventional vector control. Then, it is assumed that the measurements are noisy, and the linear quadratic Gaussian (LQG) methodology is used to filter the noise. The results of the noisy optimal vector control and the filtered optimal vector control are compared with each other. The nonlinearity of the PMSM and the presence of an inverter in its control circuit make the system nonlinear and time-variant. By deriving an average model, the system becomes nonlinear but time-invariant, and the nonlinear system is then converted to a linear one by linearization of the model around the average values. This model is used to optimize the vector control, and the two optimal vector controls are compared with each other. Simulation results show that the performance and robustness to noise of the control system are highly improved.
Abstract: Network management techniques have long been of interest to the networking research community. Queue size plays a critical role in network performance: an adequately sized queue maintains Quality of Service (QoS) requirements within limited network capacity for as many users as possible. Appropriate estimation of the queuing model parameters is crucial both for initial size estimation and during the process of resource allocation, and an accurate resource allocation model for the management system increases network utilization. The present paper demonstrates the results of empirical observation of memory allocation for packet-based services.
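A common starting point for sizing such a queue is the M/M/1/K model, whose blocking probability has a closed form. The sketch below is a generic illustration of that sizing calculation, not the paper's empirical model:

```python
def mm1k_blocking(rho, K):
    """Blocking probability of an M/M/1/K queue (utilization rho != 1)."""
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

def min_queue_size(rho, target_loss, k_max=10_000):
    """Smallest buffer K whose blocking probability meets target_loss."""
    for K in range(1, k_max):
        if mm1k_blocking(rho, K) <= target_loss:
            return K
    return None
```

For example, at 50% utilization a buffer of only a few packets already keeps the loss probability at the 1% level.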
Abstract: Developments in the scientific and technical arena have led to the use of new methods and techniques in education, as is the case in all fields. In particular, the Internet contributes a variety of new methods for designing virtual and real-time laboratory applications in education. In this study, a real-time virtual laboratory is designed and implemented for analog and digital communications laboratory experiments using the LabVIEW program for the Marmara University Electronics-Communication Department. In this application, students can access the virtual laboratory web site and perform their experiments without any limitation of time and location, so that they can observe the signals by changing the parameters of the experiment and evaluate the results.
Abstract: In 2002 an amendment to SOLAS opened the way for lightweight material constructions in vessels, provided the same fire safety as in steel constructions can be obtained. FISPAT (FIreSPread
Analysis Tool) is a computer application that simulates fire spread
and fault injection in cruise vessels and identifies fire sensitive areas.
It was developed to analyze cruise vessel designs and provides a
method to evaluate network layout and safety of cruise vessels. It
allows fast, reliable and deterministic exhaustive simulations and
presents the result in a graphical vessel model. By performing the
analysis iteratively while altering the cruise vessel design it can be
used along with fire chamber experiments to show that the
lightweight design can be as safe as a steel construction and that
SOLAS regulations are fulfilled.
Abstract: The influence of full-fat soy flour (FFSF) and extrusion conditions on the mechanical characteristics of dry spaghetti was evaluated. The process was performed with a screw speed of 10-40 rpm and a water circulating temperature of 35-70°C. Data analysis using a mixture design showed that this enrichment resulted in significant differences in mechanical strength.
Abstract: In this paper the main objective is to analyze the
quality of service of the bus companies operating in the city of
Campos, located in the state of Rio de Janeiro, Brazil. This analysis,
based on the opinion of the bus customers, will help to determine
their degree of satisfaction with the service provided by the bus
companies. The result of this assessment shows that the bus
customers are displeased with the quality of service supplied by the
bus companies. Therefore, it is necessary to identify alternative
solutions to minimize the consequences of the main problems related
to customers' dissatisfaction identified in our evaluation and to help the bus companies operating in Campos better fulfill their riders' needs.
Abstract: The research presented in this paper is an on-going project applying neural network and fuzzy models to evaluate the sociological factors which affect the educational performance of students in Sri Lanka. One of its major goals is to prepare the grounds to devise a counseling tool which helps these students perform better at their examinations, especially at their G.C.E. O/L (General Certificate of Education-Ordinary Level) examination. Closely related sociological factors are collected as raw data, the noise in these data is filtered through the fuzzy interface, and a supervised neural network is utilized to recognize the performance patterns against the chosen social factors.
Abstract: While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error (RMSE) and peak signal-to-noise ratio (PSNR). The method of image compression analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In standard JPEG, both chroma components are downsampled simultaneously, but in this paper we compare the results when the compression is done by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when the blue chrominance is downsampled than when the red chrominance is downsampled, whereas the peak signal-to-noise ratio is higher when the red chrominance is downsampled. In particular, we use the image hats.jpg as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visual differences under both methods.
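The chroma-subsampling experiment described above can be sketched in a few lines: downsample a single chroma plane, reconstruct it, and score the result with PSNR. The 2x2 averaging and the PSNR formula below are the standard textbook operations, not code from the paper; planes are plain lists of rows for clarity.

```python
import math

def downsample_2x2(plane):
    """Average non-overlapping 2x2 blocks (4:2:0-style subsampling)."""
    h, w = len(plane), len(plane[0])
    return [[(plane[y][x] + plane[y][x + 1]
              + plane[y + 1][x] + plane[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)] for y in range(0, h, 2)]

def upsample_2x2(plane):
    """Nearest-neighbour reconstruction back to full resolution."""
    out = []
    for row in plane:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                    # duplicate each row
    return out

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio between two planes, in dB."""
    n = mse = 0.0
    for r1, r2 in zip(orig, recon):
        for a, b in zip(r1, r2):
            mse += (a - b) ** 2
            n += 1
    mse /= n
    return float("inf") if mse == 0 else 10 * math.log10(peak**2 / mse)
```

Applying this pipeline to the Cb plane only, then to the Cr plane only, while leaving the other planes untouched, reproduces the kind of comparison the paper makes.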
Abstract: The utilization of cheese whey as a fermentation substrate to produce bio-ethanol is an effort to supply the demand for bio-ethanol as a renewable energy source. As for other process systems, modeling is required for fermentation process design, optimization and plant operation. This research aims to study the fermentation process of cheese whey by applying mathematics and fundamental concepts of chemical engineering, and to investigate the characteristics of the cheese whey fermentation process. Steady-state simulation results for inlet substrate concentrations of 50, 100 and 150 g/l and various values of the hydraulic retention time showed that the maximum ethanol productivity values were 0.1091, 0.3163 and 0.5639 g/l.h respectively. Those values were achieved at a hydraulic retention time of 20 hours, which was the minimum value used in this modeling, showing that operating the reactor at a low hydraulic retention time is favorable. A model of bio-ethanol production from cheese whey will enhance the understanding of what really happens in the fermentation process.
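A generic steady-state chemostat balance with Monod kinetics illustrates why productivity rises as the hydraulic retention time (HRT) falls: the dilution rate D = 1/HRT multiplies the product concentration. All kinetic constants below are illustrative placeholders, not the paper's fitted values.

```python
def steady_state(hrt, s_in, mu_max=0.3, ks=2.0, y_ps=0.45):
    """Steady-state chemostat sketch: ethanol titre and productivity.

    hrt in hours, s_in (inlet substrate) in g/l; constants are
    illustrative (mu_max 1/h, ks g/l, y_ps g ethanol / g substrate).
    """
    D = 1.0 / hrt                     # dilution rate, 1/h
    if D >= mu_max:
        return None                   # washout: no non-trivial steady state
    s = ks * D / (mu_max - D)         # residual substrate from Monod balance
    s = min(s, s_in)
    p = y_ps * (s_in - s)             # ethanol concentration, g/l
    return {"ethanol_g_l": p, "productivity_g_l_h": D * p}
```

With these placeholder constants the model reproduces the qualitative trend in the abstract: lower HRT gives higher productivity, until washout is reached.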
Abstract: Batch adsorption of recalcitrant melanoidin using abundantly available coal fly ash was carried out. The fly ash had a low specific surface area (SBET) of 1.7287 m2/g and a pore volume of 0.002245 cm3/g, while qualitative evaluation of its predominant phases was done by XRD analysis. Colour removal efficiency was found to depend on the various factors studied. Maximum colour removal was achieved around pH 6, whereas increasing the sorbent mass from 10 g/L to 200 g/L enhanced colour reduction from 25% to 86% at 298 K. Spontaneity of the process was suggested by the negative Gibbs free energy, while positive values of the enthalpy change showed the endothermic nature of the process. Non-linear optimization of error functions showed that the Freundlich and Redlich-Peterson isotherms described the sorption equilibrium data best. The coal fly ash had a maximum sorption capacity of 53 mg/g and could thus be used as a low-cost adsorbent for melanoidin removal.
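The Freundlich isotherm that best described the equilibrium data has the standard form q_e = K_F · C_e^(1/n), and the "non-linear optimization of error functions" amounts to minimizing a residual such as the sum of squared errors over (C_e, q_e) pairs. A sketch with placeholder constants (the paper reports the 53 mg/g capacity, but K_F and n here are illustrative):

```python
def freundlich(c_e, k_f, n):
    """Freundlich isotherm q_e = K_F * C_e**(1/n)."""
    return k_f * c_e ** (1.0 / n)

def sse(c_data, q_data, k_f, n):
    """Sum of squared errors, the kind of objective minimized when
    fitting isotherm parameters non-linearly."""
    return sum((q - freundlich(c, k_f, n)) ** 2
               for c, q in zip(c_data, q_data))
```

Fitting would vary k_f and n to minimize sse over the measured equilibrium data.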
Abstract: This paper investigates the potential of support vector machines and Gaussian process based regression approaches to model the oxygen-transfer capacity from experimental data of multiple plunging jet oxygenation systems. The results suggest the utility of both modeling techniques in predicting the overall volumetric oxygen transfer coefficient (KLa) from the operational parameters of a multiple plunging jets oxygenation system. Correlation coefficient, root mean square error and coefficient of determination values of 0.971, 0.002 and 0.945 respectively were achieved by the support vector machine, in comparison to values of 0.960, 0.002 and 0.920 respectively achieved by Gaussian process regression. Further, the performances of both regression approaches in predicting the overall volumetric oxygen transfer coefficient were compared with the empirical relationship for multiple plunging jets. The comparison suggests that the support vector machines approach works well relative to both the empirical relationship and the Gaussian process approach, and could successfully be employed in modeling oxygen transfer.
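The three goodness-of-fit measures quoted above (correlation coefficient, root mean square error, coefficient of determination) are all computed from observed versus predicted KLa values. A minimal sketch of those standard formulas (function name is illustrative):

```python
import math

def fit_metrics(obs, pred):
    """Correlation coefficient r, RMSE, and R^2 for obs vs pred."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    sq_err = sum((o - p) ** 2 for o, p in zip(obs, pred))
    return {
        "r": cov / (so * sp),
        "rmse": math.sqrt(sq_err / n),
        "r2": 1 - sq_err / sum((o - mo) ** 2 for o in obs),
    }
```

Running this on each model's predictions yields the triplets (0.971, 0.002, 0.945) and (0.960, 0.002, 0.920) reported in the abstract.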
Abstract: The process capability index Cpk is the most widely used index in managerial decision making, since it provides bounds on the process yield for normally distributed processes. However, existing methods for assessing process performance that are constructed by statistical inference may unfortunately lead to unreliable results, because uncertainties exist in most real-world applications. Thus, this study adopts fuzzy inference to deal with the testing of Cpk. A brief score is obtained for assessing a supplier's process instead of a severe evaluation.
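The crisp index the fuzzy procedure starts from is the standard definition Cpk = min(USL − μ, μ − LSL) / (3σ), where USL and LSL are the specification limits. A minimal sketch with illustrative values:

```python
def cpk(mu, sigma, lsl, usl):
    """Process capability index Cpk = min(USL - mu, mu - LSL) / (3*sigma)."""
    return min(usl - mu, mu - lsl) / (3.0 * sigma)
```

A perfectly centred process with specification limits six sigma from the mean gives Cpk = 2; shifting the mean toward one limit lowers the index, which is the behaviour the fuzzy score grades smoothly instead of thresholding.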
Abstract: In this paper, by using a fixed point theorem, the upper and lower solutions method, and the monotone iterative technique, we prove the existence of maximal and minimal solutions of differential equations with delay, which improves and generalizes the results of related papers.
Abstract: A mammography image is composed of low-contrast areas in which breast tissues and breast abnormalities such as microcalcifications can hardly be differentiated by the medical practitioner. This paper presents the application of active contour models (snakes) for the segmentation of microcalcifications in mammography images. The microcalcification areas segmented by the Balloon Snake, the Gradient Vector Flow (GVF) Snake, and the Distance Snake are compared against the true value of the microcalcification area, where the true value is the average microcalcification area in the original mammography image traced by expert radiologists. For the fifty images tested, the results show that the accuracies of the Balloon Snake, GVF Snake, and Distance Snake in segmenting the boundaries of microcalcifications are 96.01%, 95.74%, and 95.70% respectively. This implies that the Balloon Snake is a better segmentation method for locating the exact boundary of a microcalcification region.