Abstract: Cyprus' offshore aquaculture industry has promising
prospects, taking into account that Cyprus is an island. Its production
trend is increasing, overtaking larger countries such as Greece and Italy.
However, current mooring systems appear to be under-performing,
acting as obstacles to the industry's future development. Furthermore,
the scarcity of shallow coastal waters due to competing industries
dictates that future development must come from moving further
offshore, exposing fish farms, and subsequently mooring systems, to
harsher environmental loadings. It is therefore of paramount
importance to design mooring systems based on engineering and
scientific principles and to leave behind the present "trial and error"
methods. This paper presents the current state of Cyprus' offshore
aquaculture industry and focuses on its mooring designs, proposing a
new methodology for designing more reliable systems and hence
ensuring the industry's future.
Abstract: The paper presents a multimodal approach for biometric authentication based on multiple classifiers. The proposed solution uses a post-classification biometric fusion method in which the outputs of the biometric data classifiers are combined in order to improve the overall biometric system performance by decreasing the classification error rates. The paper also shows how the biometric recognition task is improved by means of careful feature selection, since not all of the feature vector components support the accuracy improvement.
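The fusion rule itself is not specified in the abstract. As a minimal sketch of post-classification (score-level) fusion, the snippet below min-max normalizes each modality's match score and combines the results with a weighted sum before thresholding; all function names, weights, score ranges, and the threshold are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: score-level fusion of two biometric classifiers.
# Weights, score ranges, and the threshold are assumed values.

def minmax_normalize(score, lo, hi):
    """Map a raw classifier score into [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def fuse_scores(scores, weights):
    """Weighted-sum fusion of normalized per-modality match scores."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def authenticate(raw_scores, ranges, weights, threshold=0.5):
    """Accept if the fused, normalized score clears the decision threshold."""
    normalized = [minmax_normalize(s, lo, hi)
                  for s, (lo, hi) in zip(raw_scores, ranges)]
    return fuse_scores(normalized, weights) >= threshold

# e.g. a face score of 0.82 (range 0-1) and a voice score of 640 (range 0-1000)
decision = authenticate([0.82, 640.0], [(0.0, 1.0), (0.0, 1000.0)],
                        weights=[0.6, 0.4])
```

Reducing the error rate then comes down to choosing the weights, for example in proportion to each classifier's accuracy on held-out data, which is one common design choice for such fusion schemes.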
Abstract: Incremental forming is a complex forming process in which
continuously accumulating local deformation takes place, and
springback, which affects forming quality, occurs. A springback
evaluation method based on forming error compensation is proposed,
in which springback is defined as the difference between the
theoretical and the actual amount of compensation along the
measured direction. Experiments were designed and implemented
according to this forming-error-compensation evaluation method.
The results show that the average springback magnitude (δE) of the
formed parts was very small, and that forming precision could be
significantly improved by adopting the compensation method. Based
on the double tensile stress state in the main deformation area, a
hypothesis is proposed that little springback arises from bending
behavior in the formed parts.
Abstract: We deal with the numerical solution of time-dependent convection-diffusion-reaction equations. We combine the local projection stabilization method for the space discretization with two different time discretization schemes: the continuous Galerkin-Petrov (cGP) method and the discontinuous Galerkin (dG) method with polynomials of degree k. We establish optimal error estimates and present numerical results which show that the cGP(k) and dG(k) methods are both accurate of order k+1 in the whole time interval. Moreover, at the discrete time points the cGP(k) method is superconvergent of order 2k and the dG(k) method of order 2k+1. Furthermore, the dependence of the results on the choice of the stabilization parameter is discussed and compared.
Abstract: The current speech interfaces in many military
applications may be adequate for native speakers. However,
the recognition rate drops considerably for non-native speakers
(people with foreign accents). This is mainly because non-native
speakers exhibit large temporal and intra-phoneme
variations when they pronounce the same words. This
problem is also complicated by the presence of large
environmental noise such as tank noise, helicopter noise, etc.
In this paper, we propose a novel continuous acoustic feature
adaptation algorithm for on-line accent and environmental
adaptation. Implemented by incremental singular value
decomposition (SVD), the algorithm captures local acoustic
variation and runs in real-time. This feature-based adaptation
method is then integrated with the conventional model-based
maximum likelihood linear regression (MLLR) algorithm.
Extensive experiments have been performed on the NATO
non-native speech corpus with baseline acoustic model trained
on native American English. The proposed feature-based
adaptation algorithm improved the average recognition
accuracy by 15%, while MLLR model-based adaptation achieved an
11% improvement. The corresponding word error rate (WER)
reductions were 25.8% and 2.73%, respectively, compared to the
system without adaptation. The combined adaptation achieved an
overall recognition accuracy improvement of 29.5% and a WER
reduction of 31.8%, compared to the system without adaptation.
Abstract: In this case study analyzing the status quo of extremist
dominance in Egypt, the author uses a qualitative research method to
examine the evolution of extreme Islamist groups in Egypt. In
conducting this qualitative research, the author applies several lenses
to understand the rise and evolution of the hegemony of extremist
groups, such as the Muslim Brotherhood and other groups in Egypt.
Therefore, unless he intends to show an important nexus between the
Egyptian groups and their sister groups in other countries, he
intentionally excludes extreme Islamism of non-Egyptian origins
from the analysis. This case study relies on moral disengagement
theory to shed light on the ideological evolution of extremism in
Egypt. The goal of this case study is to help in understanding extreme
Islamism as adverse to mainstream Islam; understanding this concept
should therefore help in preventing similar groups from threatening
the international community.
Abstract: In current practice, indigenous maps based on GIS are mostly produced by professional GIS personnel. Given that such persons maintain control over data collection and authoring, errors due to misrepresentation or cognitive misunderstanding can arise, causing map production inconsistencies. In order to avoid such issues, this research into tribal GIS interfaces focuses not on customizing interfaces for individual tribes, but rather on generalizing the interface and features based on indigenous tribal user needs. The methods employed differ from the traditional expert top-down approach, instead gaining a deeper understanding of indigenous mappings and user needs prior to applying mapping techniques and developing features.
Abstract: Most of the collision warning systems currently
available in the automotive market are mainly designed to warn
against imminent rear-end and lane-changing collisions. No collision
warning system is commercially available to warn against imminent
turning collisions at intersections, especially for left-turn collisions
when a driver attempts to make a left-turn at either a signalized or
non-signalized intersection, conflicting with the path of other
approaching vehicles traveling on the opposite-direction traffic
stream. One of the major factors leading to left-turn collisions is
human error and misjudgment by the driver of the turning vehicle
when perceiving the speed and acceleration of other vehicles
traveling on the opposite-direction traffic stream; therefore, using a
properly-designed collision warning system will likely reduce, or
even eliminate, this type of collision by reducing human error. This
paper introduces a perceptual framework for a proposed collision
warning system that can detect imminent left-turn collisions at
intersections. The system utilizes a commercially-available detection
sensor (either a radar sensor or a laser detector) to detect approaching
vehicles traveling on the opposite-direction traffic stream and
calculate their speeds and acceleration rates to estimate the
time-to-collision and compare that time to the time required for the turning
vehicle to clear the intersection. When calculating the time required
for the turning vehicle to clear the intersection, consideration is given
to the perception-reaction time of the driver of the turning vehicle,
which is the time required by the driver to perceive the message
given by the warning system and react to it by engaging the throttle.
A regression model was developed to estimate perception-reaction
time based on age and gender of the driver of the host vehicle.
Desired acceleration rate selected by the driver of the turning vehicle,
when making the left-turn movement, is another human factor that is
considered by the system. Another regression model was developed
to estimate the acceleration rate selected by the driver of the turning
vehicle based on the driver's age and gender as well as on the location
and speed of the nearest approaching vehicle along with the
maximum acceleration rate provided by the mechanical
characteristics of the turning vehicle. By comparing time-to-collision
with the time required for the turning vehicle to clear the intersection,
the system displays a message to the driver of the turning vehicle
when departure is safe. An application example is provided to
illustrate the logic algorithm of the proposed system.
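The core comparison the system performs can be sketched as follows; the kinematics assume constant acceleration, and every parameter value (distances, rates, safety margin) is illustrative rather than taken from the paper's regression models.

```python
# Illustrative sketch of the time-to-collision vs. clearance-time comparison.
# All numeric values are assumptions for demonstration only.

import math

def time_to_collision(distance_m, speed_mps, accel_mps2):
    """Solve distance = v*t + 0.5*a*t**2 for t under constant acceleration."""
    if abs(accel_mps2) < 1e-9:
        return distance_m / speed_mps
    disc = speed_mps ** 2 + 2.0 * accel_mps2 * distance_m
    return (-speed_mps + math.sqrt(disc)) / accel_mps2

def clearance_time(perception_reaction_s, path_m, turn_accel_mps2):
    """Perception-reaction time plus time to accelerate through the turn path."""
    return perception_reaction_s + math.sqrt(2.0 * path_m / turn_accel_mps2)

def departure_is_safe(ttc_s, clear_s, margin_s=1.0):
    """Advise departure only if the opposing vehicle arrives with time to spare."""
    return ttc_s > clear_s + margin_s

# Opposing vehicle 120 m away at 15 m/s, gaining 0.5 m/s^2; turning driver
# needs 1.2 s to react and 20 m of path at 2 m/s^2.
ttc = time_to_collision(120.0, 15.0, 0.5)
clear = clearance_time(1.2, 20.0, 2.0)
safe = departure_is_safe(ttc, clear)
```

In the actual system the perception-reaction time and the turn acceleration would come from the paper's regression models on driver age, gender, and the approaching vehicle's state, rather than the fixed values used here.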
Abstract: Many metrics have been proposed to evaluate the
characteristics of the analysis and design model of a given product,
which in turn help to assess the quality of the product. The function
point metric is a measure of the 'functionality' delivered by the software.
This paper presents an analysis of a set of programs of a project
developed in Cµ through Function Points metric. Function points
are measured for a Data Flow Diagram (DFD) of the case developed
at the initial stage. Lines of code (LOC) and possible errors are
calculated with the help of the measured Function Points (FPs). The
calculations are performed using suitable established functions.
Calculated LOC and errors are compared with the actual LOC and
errors found at the time of analysis and design review, implementation,
and testing. It has been observed that the actual errors found exceed
the calculated errors. On the basis of this analysis and these
observations, the authors conclude that function points provide useful
insight and help to analyze the drawbacks in the development process.
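The established functions used for the calculations are not reproduced in the abstract. As a hedged sketch of the general idea, FP counts are commonly "backfired" into size and defect estimates with language-dependent conversion factors; the constants below are placeholders, not the paper's figures.

```python
# Placeholder conversion factors, not the paper's established functions.
LOC_PER_FP = 128      # assumed average lines of code per function point
DEFECTS_PER_FP = 1.2  # assumed potential defects per function point

def estimate_loc(function_points):
    """Backfire a function point count into an estimated code size."""
    return function_points * LOC_PER_FP

def estimate_errors(function_points):
    """Estimate the number of potential errors from the function point count."""
    return function_points * DEFECTS_PER_FP

fp = 24  # e.g. function points counted from the DFD
loc_estimate, error_estimate = estimate_loc(fp), estimate_errors(fp)
```

Comparing such estimates against the errors actually found in review and testing is what exposes gaps in the development process, as the abstract observes.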
Abstract: This paper addresses the problem of peak-to-average
power ratio (PAPR) in orthogonal frequency division multiplexing
(OFDM) systems. It also introduces a new PAPR reduction technique
based on adaptive square-rooting (SQRT) companding process. The
SQRT process of the proposed technique changes the statistical
characteristics of the OFDM output signals from Rayleigh
distribution to a Gaussian-like distribution. This change in statistical
distribution results in changes to both the peak and average power
values of the OFDM signals, and consequently reduces the PAPR
significantly. For the 64QAM OFDM system using 512 subcarriers, up to 6
dB of PAPR reduction was achieved by the square-rooting technique,
with a fixed degradation in bit error rate (BER) equal to 3 dB.
However, the PAPR is reduced at the expense of only -15 dB of
out-of-band spectral shoulder re-growth below the in-band signal level. The
proposed adaptive SQRT technique is superior in BER performance
to the original, non-adaptive square-rooting technique when the
required reduction in PAPR is no more than 5 dB. It also provides a
fixed amount of PAPR reduction, which is not available in the
original SQRT technique.
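A minimal sketch of the non-adaptive square-rooting idea (the adaptive variant is not reproduced): each time-domain sample's magnitude is replaced by its square root with the phase preserved, which compresses peaks and thus lowers the PAPR. The toy multicarrier signal below is an assumption for demonstration.

```python
# Sketch of SQRT companding on a toy multicarrier signal; parameters are
# illustrative, and the paper's adaptive variant is not reproduced.

import cmath, math, random

def papr_db(samples):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10.0 * math.log10(max(powers) / (sum(powers) / len(powers)))

def sqrt_compand(samples):
    """Square-root each sample's magnitude while keeping its phase."""
    return [math.sqrt(abs(s)) * cmath.exp(1j * cmath.phase(s))
            for s in samples]

random.seed(1)
n, k = 512, 64
# Toy OFDM-like signal: 64 unit subcarriers with random phases, 512 samples.
phases = [random.uniform(0, 2 * math.pi) for _ in range(k)]
signal = [sum(cmath.exp(1j * (2 * math.pi * c * t / n + phases[c]))
              for c in range(k)) / k
          for t in range(n)]
```

Since |s| never exceeds max|s|, we have E[|s|^2] <= max|s| * E[|s|], so the companded PAPR never exceeds the original one for a non-constant envelope.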
Abstract: In this paper, we present a novel objective non-reference
performance assessment algorithm for image fusion. It takes
into account local measurements to estimate how well the important
information in the source images is represented by the fused image.
The metric is based on the Universal Image Quality Index and uses
the similarity between blocks of pixels in the input images and the
fused image as the weighting factors for the metric. Experimental
results confirm that the values of the proposed metric correlate well
with the subjective quality of the fused images, giving a significant
improvement over standard measures based on mean squared error
and mutual information.
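The Universal Image Quality Index underlying the metric can be sketched compactly for two flat pixel blocks; the block-wise weighting of the proposed metric is not reproduced here, and the sample blocks are invented for illustration.

```python
# Universal Image Quality Index over two flat pixel blocks (sketch only;
# the paper's block-similarity weighting is not reproduced).

def uiqi(x, y):
    """Universal Image Quality Index between two equal-length pixel blocks."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return 4 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

source_block = [52.0, 61.0, 58.0, 49.0, 60.0, 55.0]
fused_block = [50.0, 62.0, 57.0, 50.0, 59.0, 56.0]
quality = uiqi(source_block, fused_block)  # near 1 for similar blocks
```

The index jointly penalizes loss of correlation, luminance distortion, and contrast distortion, reaching its maximum of 1 only for identical blocks.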
Abstract: Designing, implementing, and debugging concurrency
control algorithms in a real system is a complex, tedious, and
error-prone process. Further, understanding concurrency control
algorithms and distributed computations is itself a difficult task.
Visualization can help with both of these problems. Thus, we have
developed an exploratory environment in which people can prototype
and test various versions of concurrency control algorithms, study
and debug distributed computations, and view performance statistics
of distributed systems. In this paper, we describe the exploratory
environment and show how it can be used to explore concurrency
control algorithms for the interactive steering of distributed
computations.
Abstract: An extensive number of engineering drawings is referred to during the planning process, and changes to them produce good engineering designs that meet the demand for new models. The advantage of reusing engineering designs is that it allows continuous product development, further improving the quality of product development and thus reducing development costs. However, retrieving existing engineering drawings is time-consuming, complex, and error-prone. An engineering drawing file searching system is proposed to solve this problem. It is essential for engineers and designers to have some sort of medium that enables them to search for drawings in the most effective way. This paper lays out the proposed research project in the area of information extraction from engineering drawings.
Abstract: This paper presents the decoder design for the single error correcting and double error detecting code proposed by the authors in an earlier paper. The speed of error detection and correction of a code is largely dependent upon the associated encoder and decoder circuits. The complexity and speed of such circuits are determined by the number of 1s in the parity check matrix (PCM). The number of 1s in the parity check matrix of the code proposed by the authors is fewer than in any currently known single error correcting/double error detecting code. This results in simplified encoding and decoding circuitry for error detection and correction.
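The authors' specific low-density parity check matrix is not given in this abstract. As a generic illustration of how a SEC-DED decoder uses a PCM, the sketch below decodes an extended Hamming(8,4) code: the syndrome locates a single error for correction, and the overall parity bit separates single from double errors.

```python
# Generic SEC-DED illustration (extended Hamming(8,4)), not the authors' PCM.
H = [
    [1, 0, 1, 0, 1, 0, 1, 0],  # check covering positions 1,3,5,7
    [0, 1, 1, 0, 0, 1, 1, 0],  # check covering positions 2,3,6,7
    [0, 0, 0, 1, 1, 1, 1, 0],  # check covering positions 4,5,6,7
    [1, 1, 1, 1, 1, 1, 1, 1],  # overall parity over all 8 bits
]

def encode(d3, d5, d6, d7):
    """Encode 4 data bits into an 8-bit extended Hamming codeword."""
    p1, p2, p4 = d3 ^ d5 ^ d7, d3 ^ d6 ^ d7, d5 ^ d6 ^ d7
    word = [p1, p2, d3, p4, d5, d6, d7]
    word.append(sum(word) % 2)  # overall parity bit
    return word

def decode(r):
    """Return (word, status) with status 'ok', 'corrected', or 'double_error'."""
    s = [sum(h * b for h, b in zip(row, r)) % 2 for row in H]
    pos = s[0] + 2 * s[1] + 4 * s[2]   # Hamming syndrome as a bit position
    if s == [0, 0, 0, 0]:
        return r, "ok"
    if s[3] == 1:                       # odd overall parity => single error
        word = list(r)
        word[pos - 1 if pos else 7] ^= 1  # pos 0 means the parity bit itself
        return word, "corrected"
    return r, "double_error"            # even parity but nonzero syndrome
```

The decoder's work is dominated by the parity sums over the rows of H, which is why a PCM with fewer 1s, as the paper proposes, directly simplifies and speeds up the circuitry.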
Abstract: This paper attempts to establish that Multi
State Network Classification is essential for the performance
enhancement of Transport protocols over Satellite based Networks. A
model to classify Multi State network condition taking into
consideration both congestion and channel error is evolved. In order
to arrive at such a model an analysis of the impact of congestion and
channel error on RTT values has been carried out using ns2. The
analysis results are also reported in the paper. The inference drawn
from this analysis is used to develop a novel statistical RTT based
model for multi state network classification.
An Adaptive Multi State Proactive Transport Protocol consisting
of Proactive Slow Start, State based Error Recovery, Timeout Action
and Proactive Reduction is proposed which uses the multi state
network state classification model. This paper also confirms through
detailed simulation and analysis that prior knowledge of the
overall characteristics of the network helps in enhancing the
performance of the protocol over satellite channel which is
significantly affected due to channel noise and congestion.
The necessary augmentation of ns2 simulator is done for
simulating the multi state network classification logic. This
simulation has been used in a detailed evaluation of the protocol under
varied levels of congestion and channel noise. The performance
enhancement of this protocol with reference to established protocols
namely TCP SACK and Vegas has been discussed. The results as
discussed in this paper clearly reveal that the proposed protocol
always outperforms its peers and shows a significant improvement in
very high error conditions as envisaged in the design of the protocol.
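The statistical RTT model itself is not given in the abstract. As a loose illustration of the classification idea, one common heuristic distinguishes the cause of a loss by whether the smoothed RTT is inflated relative to its baseline; the state names and threshold factor below are assumptions, not the paper's model.

```python
# Illustrative heuristic only; the paper's statistical RTT model is not given.
def classify_state(rtt_ms, base_rtt_ms, loss_seen, congestion_factor=1.5):
    """Classify the network state from smoothed RTT and a loss indication."""
    if not loss_seen:
        return "normal"
    if rtt_ms > congestion_factor * base_rtt_ms:
        return "congestion"   # queueing delay inflates RTT before drops
    return "channel_error"    # loss without RTT inflation suggests corruption
```

A sender can then pick its recovery action per state, which matches the abstract's state-based error recovery at a high level: back off for congestion, retransmit aggressively for channel error.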
Abstract: In this paper, the C1-conforming finite element method is analyzed for a class of nonlinear fourth-order hyperbolic partial differential equations. Some a priori bounds are derived using a Lyapunov functional, and existence, uniqueness, and regularity of the weak solutions are proved. Optimal error estimates are derived for both semidiscrete and fully discrete schemes.
Abstract: In this paper, we consider the problem of identifying the unknown source in the Poisson equation. A modified Tikhonov regularization method is presented to deal with the ill-posedness of the problem, and error estimates are obtained with an a priori strategy and an a posteriori choice rule for finding the regularization parameter. Numerical examples show that the proposed method is effective and stable.
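To illustrate the regularization idea only (ordinary Tikhonov filtering in a spectral basis, not the paper's modified method or its parameter-choice rules): for an operator with rapidly decaying singular values sigma_k, the naive coefficients b_k/sigma_k amplify data noise, while the filtered coefficients sigma_k*b_k/(sigma_k^2 + alpha) stay bounded. The toy numbers below are assumptions.

```python
# Ordinary Tikhonov filtering sketch; the paper's modified method and its
# a priori / a posteriori parameter rules are not reproduced.

def tikhonov_filter(sigma, b, alpha):
    """Regularized spectral coefficients sigma_k * b_k / (sigma_k**2 + alpha)."""
    return [s * bk / (s * s + alpha) for s, bk in zip(sigma, b)]

# Ill-posed toy problem: decaying singular values, exact b_k = sigma_k
# (so the true x_k = 1), with noise on the smallest-sigma data coefficient.
sigma = [1.0, 0.1, 0.01, 0.001]
b_noisy = [1.0, 0.1, 0.01, 0.001 + 0.01]
naive = [bk / s for s, bk in zip(sigma, b_noisy)]        # last entry blows up
regularized = tikhonov_filter(sigma, b_noisy, alpha=1e-4)
```

Choosing alpha is the crux of the method, which is exactly what the paper's a priori strategy and a posteriori rule address.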
Abstract: Cerium-doped lanthanum bromide LaBr3:Ce(5%)
crystals are considered to be one of the most advanced scintillator
materials used in PET scanning, combining a high light yield, fast
decay time and excellent energy resolution. Apart from the correct
choice of scintillator, it is also important to optimise the detector
geometry, not least in terms of source-to-detector distance in order to
obtain reliable measurements and efficiency. In this study a
commercially available 25 mm x 25 mm BrilLanCeTM 380 LaBr3: Ce
(5%) detector was characterised in terms of its efficiency at varying
source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and
137Cs were separately acquired at distances of 5, 10, 15, and 20 cm. As
a result of the change in solid angle subtended by the detector, the
geometric efficiency decreased with increasing distance.
High efficiencies at low distances can cause pulse pile-up when
subsequent photons are detected before previously detected events
have decayed. To reduce this systematic error, the source-to-detector
distance should balance efficiency against pulse pile-up
suppression, as pile-up corrections would otherwise be
necessary at short distances. In addition to the experimental
measurements Monte Carlo simulations have been carried out for the
same setup, allowing a comparison of results. The advantages and
disadvantages of each approach have been highlighted.
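The solid-angle dependence mentioned above can be made concrete with the standard on-axis point-source geometric efficiency of a circular detector face, eps = 0.5 * (1 - d / sqrt(d^2 + r^2)); treating the 25 mm detector as a flat disc of radius 12.5 mm is a simplifying assumption, not the paper's calculation.

```python
# On-axis point-source geometric efficiency of a flat circular detector
# face; the flat-disc model is a simplifying assumption.

import math

def geometric_efficiency(distance_cm, radius_cm=1.25):
    """Fraction of 4*pi subtended by a circular detector face on the axis."""
    d, r = distance_cm, radius_cm
    return 0.5 * (1.0 - d / math.sqrt(d * d + r * r))

# Efficiency at the measured source-to-detector distances (cm)
efficiencies = {d: geometric_efficiency(d) for d in (5, 10, 15, 20)}
```

The monotone fall-off with distance reproduces the trend observed in the measurements, and quantifies the efficiency/pile-up trade-off when choosing the working distance.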
Abstract: The quality of 2D and 3D cross-sectional images produced
by Computed Tomography primarily depends upon the degree of
precision of primary and secondary X-Ray intensity detection. The
traditional method of primary intensity detection is prone to errors.
Recently, an X-Ray intensity measurement system with smart X-Ray
sensors has been developed by our group, which is able to detect
primary X-Ray intensity accurately. In this study, a new smart X-Ray
sensor is developed using the Light-to-Frequency converter TSL230
from Texas Instruments, which has numerous advantages in terms of
noiseless data acquisition and transmission. The TSL230 is based
on a silicon photodiode, which converts the incoming X-Ray
radiation into a proportional current signal. A current-to-frequency
converter is attached to this photodiode on a single monolithic CMOS
integrated circuit, which provides a frequency count proportional to
the incoming current signal in the form of a pulse train. The frequency
count is delivered to the PICDEM FS USB board with a
PIC18F4550 microcontroller mounted on it. With its highly compact
electronic hardware, this demo board efficiently reads the smart
sensor output data. The frequency-output approach overcomes the
nonlinear behavior of sensors with analog output, so that unattenuated
X-Ray intensities can be measured precisely and better
normalization can be achieved in order to attain high resolution.
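The readout arithmetic can be sketched as a gated pulse count converted to a frequency and then to a relative intensity; the gate time and sensitivity constant below are placeholders, not TSL230 datasheet values.

```python
# Placeholder sensitivity; the real TSL230 scaling depends on the selected
# sensitivity and frequency-divider settings, which are not reproduced here.
def counts_to_intensity(pulse_count, gate_time_s, hz_per_unit=500.0):
    """Convert a gated pulse count to a frequency and a relative intensity."""
    frequency_hz = pulse_count / gate_time_s
    return frequency_hz / hz_per_unit

relative_intensity = counts_to_intensity(12_500, 0.5)  # 25 kHz pulse train
```

Because the quantity of interest is a digital count rather than an analog voltage, the conversion stays linear and immune to transmission noise, which is the advantage the abstract highlights.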
Abstract: Images of the human iris contain specular highlights due
to the reflective properties of the cornea. This corneal reflection
causes many errors, not only in iris and pupil center estimation but
also in locating the iris and pupil boundaries, especially for methods
that use active contours. Each iris recognition system has four steps:
segmentation, normalization, encoding, and matching. In order to
address the corneal reflection, a novel reflection removal method is
proposed in this paper. Comparative experiments with two existing
reflection removal methods are evaluated on the CASIA V3 iris
image database. The experimental results reveal that the
proposed algorithm provides higher performance in reflection
removal.