Abstract: Terrorism is widely recognized as one of the foremost threats to present-day international security, and effective and efficient tools for confronting it can only be built on an objective assessment of the phenomenon. To that end, this paper pursues three main objectives. First, it examines the reasons that have prevented the establishment of a universally accepted definition of terrorism, and then outlines the main features of the terrorist threat in order to identify the fundamental goals of what is now a serious blight on world society. Second, it explains the differences between a terrorist movement and a terrorist organisation, and the reasons why a terrorist movement may transform itself into an organisation; after analysing these motivations and the characteristics of a terrorist organisation, an example of the latter is briefly analysed to illustrate the ideas presented. Lastly, it identifies the factors that can lead to the appearance of terrorist tendencies and discusses the most efficient and effective responses to this global security threat.
Abstract: To construct a lumped spring-mass model including the occupants for the offset frontal crash, the SISAME software and NHTSA test data were used. The data from a 56 kph, 40% offset frontal vehicle-to-deformable-barrier crash test of a MY2007 Mazda 6 4-door sedan were obtained from the NHTSA test database. The overall behaviors of the B-pillar and engine in the simulation models agreed very well with the test data. The trends of the accelerations at the driver and passenger heads were similar, but there were large differences in peak values. These peak-value differences caused large errors in the HIC36 and 3 ms chest g's. To predict the behavior of the dummies well, the spring-mass model for the offset frontal crash needs to be improved.
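The SISAME formulation itself is not reproduced here, but the idea behind a lumped spring-mass crash model can be sketched with a minimal two-mass system: the vehicle compresses a crush spring against a rigid barrier while the occupant couples to the vehicle through a restraint spring. All masses, stiffnesses, and the integration scheme below are illustrative assumptions, not values from the cited test.

```python
def simulate_crash(m_vehicle=1500.0, m_occupant=75.0,
                   k_crush=400e3, k_restraint=60e3,
                   v0=15.56, dt=1e-4, t_end=0.2):
    """Semi-implicit Euler integration of a two-mass crash model.

    v0 = 15.56 m/s corresponds to the 56 kph impact speed; every
    other parameter value is illustrative only.
    """
    xv = xo = 0.0          # vehicle / occupant displacements (m)
    vv = vo = v0           # both start at impact speed (m/s)
    peak_occ_acc = 0.0
    for _ in range(int(t_end / dt)):
        # crush spring only pushes back while the vehicle deforms the barrier
        f_crush = -k_crush * xv if xv > 0 else 0.0
        # restraint couples occupant to vehicle
        f_restraint = -k_restraint * (xo - xv)
        av = (f_crush - f_restraint) / m_vehicle   # reaction of restraint on vehicle
        ao = f_restraint / m_occupant
        vv += av * dt
        vo += ao * dt
        xv += vv * dt
        xo += vo * dt
        peak_occ_acc = max(peak_occ_acc, abs(ao))
    return vo, peak_occ_acc
```

Extending this sketch toward the paper's model would mean adding masses (engine, B-pillar) and fitting nonlinear spring characteristics to the measured crash pulse.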
Abstract: Any signal transmitted over a channel is corrupted by noise and interference. A host of channel coding techniques has been proposed to alleviate the effect of such noise and interference. Among these, Turbo codes are recommended because of their increased capacity at higher transmission rates and their superior performance over convolutional codes. Multimedia elements, which involve large amounts of data, are best protected by Turbo codes. A Turbo decoder employs the Maximum A-posteriori Probability (MAP) and Soft Output Viterbi Algorithm (SOVA) decoding algorithms. Conventional Turbo coded systems employ Equal Error Protection (EEP), in which all the data in an information message receive uniform protection. Some applications call for Unequal Error Protection (UEP), in which important information bits receive a higher level of protection than the remaining bits. In this work, the traditional Log MAP decoding algorithm is enhanced by using optimized scaling factors for both component decoders. The error-correcting performance with UEP in the Additive White Gaussian Noise (AWGN) channel and in Rayleigh fading is analyzed for image transmission, with the Discrete Cosine Transform (DCT) as the source coding technique. This paper compares the performance of the Log MAP, Modified Log MAP (MlogMAP), and Enhanced Log MAP (ElogMAP) algorithms for image transmission. The MlogMAP algorithm is found to be best at lower Eb/N0 values, but at higher Eb/N0 the ElogMAP with optimized scaling factors performs better. The performance comparison of the AWGN channel with the fading channel indicates the robustness of the proposed algorithm. Based on the performance of the three message classes, class 3 should be protected more strongly than the other two classes. From the performance analysis, it is observed that the ElogMAP algorithm with UEP is best for image transmission compared with the Log MAP and MlogMAP decoding algorithms.
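The core of Log MAP decoding is the Jacobian logarithm (the max* operation), and the enhancement described above scales the extrinsic LLRs exchanged between the two component decoders. A minimal sketch follows; the scaling value 0.7 is a commonly cited default for scaled decoding, not the paper's optimized factor.

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm used in Log-MAP:
    max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: drops the correction term."""
    return max(a, b)

def scale_extrinsic(llrs, s=0.7):
    """Scale the extrinsic LLRs passed between component decoders.
    s = 0.7 is a commonly used illustrative value; the paper's
    optimized factors are not reproduced here."""
    return [s * x for x in llrs]
```

In a full decoder, `max_star` replaces the `max` in the forward/backward recursions, and `scale_extrinsic` is applied once per half-iteration before interleaving.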
Abstract: Security can be defined as the degree of resistance to, or protection from, harm. It applies to any vulnerable and valuable asset, such as persons, dwellings, communities, nations, or organizations. Cybercrime is any crime committed or facilitated via the Internet; it is any criminal activity involving computers and networks. It can range from fraud to unsolicited emails (spam), and includes the remote theft of government or corporate secrets through criminal trespass into systems around the globe. Nigeria, like other nations of the world, is currently having its own share of this menace, which has even been used as a tool by terrorists. This paper is an attempt at presenting cyber security as an issue that requires a coordinated national response. It also acknowledges and advocates the key roles to be played by stakeholders and the importance of forging strong partnerships to prevent and tackle cybercrime in Nigeria.
Abstract: This paper presents a study of three algorithms: a channel equalization algorithm under the ZF and MMSE criteria, applied to the BRAN A channel, and the adaptive filtering algorithms LMS and RLS used to estimate the parameters of the equalizer filter, i.e., to track the channel estimate, reflect the temporal variations of the channel, and reduce the error in the transmitted signal. The performance of the equalizer under the ZF and MMSE criteria is evaluated in the noiseless case, and the performance of the LMS and RLS algorithms is compared.
Abstract: This paper addresses a cutting-edge method of business demand forecasting based on an empirical probability function when the historical behavior of the data is random. Additionally, it presents error determination based on the numerical-method technique of propagation of errors. The methodology began with a characterization and diagnosis of the demand-planning process as part of production management; then, new ways to predict demand through probability techniques, and to calculate the associated error using numerical-method tools, were investigated, all based on the behavior of the data. The analysis considered the specific business circumstances of a company in the communications sector located in the city of Bogota, Colombia. In conclusion, this application made it possible to obtain the adequate stock of the products required by the company to provide its services, helping the company reduce its service time, increase its client satisfaction rate, reduce stock that has not rotated for a long time, code its inventory, and plan reorder points for the replenishment of stock.
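The two numerical ingredients named above can be sketched directly: a forecast read off the empirical distribution of historical demand, and first-order propagation of errors. Both functions are illustrative simplifications, not the paper's exact methodology.

```python
import math

def empirical_quantile(history, p):
    """Forecast demand as the p-quantile of the empirical distribution
    of past demand -- no distributional assumption is made."""
    data = sorted(history)
    idx = min(int(p * len(data)), len(data) - 1)
    return data[idx]

def propagated_error(partials, sigmas):
    """First-order propagation of errors:
    sigma_f = sqrt(sum((df/dx_i * sigma_i)^2))."""
    return math.sqrt(sum((d * s) ** 2 for d, s in zip(partials, sigmas)))
```

For example, a service-level target of 0.9 would read the 90th-percentile demand from history, while `propagated_error` combines the uncertainties of the quantities the forecast depends on.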
Abstract: Cerebellar ataxia is a steadily progressive
neurodegenerative disease associated with loss of motor control,
leaving patients unable to walk, talk, or perform activities of daily
living. Direct motor instruction in cerebellar ataxia patients has limited
effectiveness, presumably because an inappropriate closed-loop
cerebellar response to the inevitable observed error confounds motor
learning mechanisms. Could the use of an EEG-based BCI provide
advanced biofeedback to improve motor imagery and provide a
"backdoor" to improving motor performance in ataxia patients? In
order to determine the feasibility of using EEG-based BCI control in
this population, we compare the ability to modulate mu-band power
(8-12 Hz) during a cued motor imagery task in an ataxia patient and
a healthy control.
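Mu-band modulation is typically quantified as spectral power in the 8-12 Hz band. A minimal sketch using a plain DFT follows; the sampling rate and signal are illustrative, not the study's recording setup.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power in the [f_lo, f_hi] Hz band from a plain DFT.
    O(N^2), which is fine for short EEG epochs."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n                      # frequency of bin k
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                      for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power
```

A BCI would compare mu power during imagery against a rest baseline; event-related desynchronization shows up as a drop in this value.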
Abstract: The US Consumer Price Indices (CPIs) measure hundreds of items in the US economy, and many social programs and government benefits are indexed to the CPIs. The purpose of this project is to modernize an existing process. This paper shows the development of a small, visual software product that documents the Economic Price Adjustment (EPA) for long-term contracts. The existing workbook does not provide the flexibility to calculate EPAs where the base month and the option month are different, nor does it provide automated error checking. The small, visual software product provides the additional flexibility and error checking. This paper also presents user feedback on the project.
Abstract: Orthogonal Frequency Division Multiplexing (OFDM), with its high data rate, high spectral efficiency, and ability to mitigate the effects of multipath, is well suited to wireless applications. Impulsive noise distorts OFDM transmission, so methods to suppress this noise must be investigated. In this paper, an adaptive impulsive noise suppressor for OFDM communication systems based on a State Space Recursive Least Squares (SSRLS) algorithm is proposed, and a comparison with another adaptive algorithm is conducted. The state-space, model-dependent recursive parameters of the proposed scheme enable it to achieve a lower steady-state mean squared error (MSE), a lower bit error rate (BER), and faster convergence than some existing algorithms.
Abstract: The margin-based principle was proposed long ago, and it has been shown, both theoretically and practically, to reduce structural risk and improve performance. Meanwhile, the feed-forward neural network is a traditional classifier that is currently very popular in its deeper-architecture form. However, the training algorithm of feed-forward neural networks derives from the Widrow-Hoff principle, which minimizes the squared error. In this paper, we propose a new training algorithm for feed-forward neural networks based on the margin-based principle, which can effectively improve the accuracy and generalization ability of neural network classifiers with fewer labelled samples and a flexible network. We have conducted experiments on four UCI open datasets and achieved good results as expected. In conclusion, our model can handle sparsely labelled, high-dimensional datasets with high accuracy, while migrating from the old ANN method to our method is easy and requires almost no work.
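The contrast between the Widrow-Hoff (squared-error) update and a margin-based update can be sketched for a single linear unit. The paper's full network algorithm is not reproduced; this only illustrates the principle being swapped in.

```python
def widrow_hoff_step(w, x, y, lr=0.1):
    """LMS / squared-error update: adjusts on every sample,
    even on confidently correct ones."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    e = y - pred
    return [wi + lr * e * xi for wi, xi in zip(w, x)]

def margin_step(w, x, y, lr=0.1, margin=1.0):
    """Hinge-loss (margin-based) update: adjusts only when
    y * score < margin, pushing samples outside the margin
    (the structural-risk view)."""
    score = sum(wi * xi for wi, xi in zip(w, x))
    if y * score < margin:
        return [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w
```

The key behavioral difference: the margin rule leaves a confidently correct classifier untouched, while Widrow-Hoff keeps pulling the score toward the label value.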
Abstract: The problems arising from unbalanced data sets
generally appear in real world applications. Due to unequal class
distribution, many researchers have found that the performance of
existing classifiers tends to be biased towards the majority class. The
k-nearest neighbors’ nonparametric discriminant analysis is a method
that was proposed for classifying unbalanced classes with good
performance. In this study, the methods of discriminant analysis are
of interest in investigating misclassification error rates for class-imbalanced
data of three diabetes risk groups. The purpose of this
study was to compare the classification performance between
parametric discriminant analysis and nonparametric discriminant
analysis in a three-class classification of class-imbalanced data of
diabetes risk groups. Data from a project maintaining healthy
conditions for 599 employees of a government hospital in Bangkok
were obtained for the classification problem. The employees were
divided into three diabetes risk groups: non-risk (90%), risk (5%),
and diabetic (5%). The original data including the variables of
diabetes risk group, age, gender, blood glucose, and BMI were
analyzed and bootstrapped for 50 and 100 samples, 599 observations
per sample, for additional estimation of the misclassification error
rate. Each data set was explored for the departure of multivariate
normality and the equality of covariance matrices of the three risk
groups. Both the original data and the bootstrap samples showed non-normality
and unequal covariance matrices. The parametric linear
discriminant function, quadratic discriminant function, and the
nonparametric k-nearest neighbors’ discriminant function were
performed over 50 and 100 bootstrap samples and applied to the
original data. In searching for the optimal classification rule, the choices of
prior probabilities were set to equal proportions (0.33:0.33:0.33)
and unequal proportions of (0.90:0.05:0.05), (0.80:0.10:0.10),
and (0.70:0.15:0.15). The results from 50 and 100 bootstrap samples
indicated that the k-nearest neighbors approach with k=3 or k=4 and
the prior probabilities of non-risk:risk:diabetic defined as
0.90:0.05:0.05 or 0.80:0.10:0.10 gave the smallest error rate of
misclassification. The k-nearest neighbors approach is therefore
suggested for classifying three-class imbalanced data of diabetes
risk groups.
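The prior-weighted k-nearest-neighbors rule described above can be sketched as follows. This is a simplified reading of the nonparametric discriminant: each class's neighbor count is weighted by prior_j / n_j, so a 0.90:0.05:0.05 prior can still favor a minority class that is locally dense.

```python
import math

def knn_prior_classify(train, labels, x, k=3, priors=None):
    """k-NN discriminant with class priors: class j scores
    prior_j * k_j / n_j, where k_j is j's count among the k
    nearest neighbors and n_j its training-set size."""
    classes = sorted(set(labels))
    n_c = {c: labels.count(c) for c in classes}
    if priors is None:
        priors = {c: 1.0 / len(classes) for c in classes}
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], x))[:k]
    votes = {c: 0 for c in classes}
    for i in nearest:
        votes[labels[i]] += 1
    return max(classes, key=lambda c: priors[c] * votes[c] / n_c[c])
```

With equal priors this reduces to majority vote; skewed priors shift the decision toward the majority class, which is exactly the trade-off the bootstrap study explores.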
Abstract: In more complex systems, such as an automotive gearbox, rigorous treatment of the data is necessary because there are several moving parts (gears, bearings, shafts, etc.) and, consequently, several possible sources of error and noise. The basic objective of this work is the detection of damage in automotive gearboxes. The detection methods used are the wavelet method, the bispectrum, advanced (selective) filtering techniques for vibration signals, and mathematical morphology. Vibration tests were performed on gearboxes (in good condition and with defects) from the production line of a large vehicle assembler. The vibration signals were obtained using five accelerometers at different positions on the sample. The results obtained using kurtosis, the bispectrum, wavelets, and mathematical morphology showed that it is possible to identify the existence of defects in automotive gearboxes.
Abstract: In the present study, the kinetics of thermal degradation of a phenolic foam, a lignin-reinforced phenolic foam, and the lignin used as reinforcement were studied, and the activation energies of their degradation processes were obtained with a DAEM model. The average values over five heating rates of the mean activation energies obtained were 99.1, 128.2, and 144.0 kJ/mol for the phenolic foam; 109.5, 113.3, and 153.0 kJ/mol for the lignin reinforcement; and 82.1, 106.9, and 124.4 kJ/mol for the lignin-reinforced phenolic foam. The standard deviation ranges calculated for each sample were 1.27-8.85, 2.22-12.82, and 3.17-8.11 kJ/mol for the phenolic foam, the lignin, and the reinforced foam, respectively. The DAEM model showed low mean square errors.
Abstract: Micro-electromechanical system (MEMS) accelerometers and gyroscopes are suitable for the inertial navigation systems (INS) of many applications due to their low price, small dimensions, and light weight. Their main disadvantage compared with classic sensors is worse long-term stability. The estimation accuracy is mostly affected by the time-dependent growth of inertial sensor errors, especially the stochastic errors. In order to eliminate the negative effects of these random errors, they must be accurately modeled. In this paper, the Allan variance technique is used to model the stochastic errors of the inertial sensors. By performing a simple operation on the entire length of data, a characteristic curve is obtained whose inspection provides a systematic characterization of the various random errors contained in the inertial-sensor output data.
Abstract: An adaptive nonparametric method is proposed for
stable real-time detection of seismoacoustic sources in multichannel
C-OTDR systems with a significant number of channels. This
method guarantees given upper bounds on the probabilities of Type I
and Type II errors. Properties of the proposed method are rigorously
proved. The results of practical applications of the proposed method
in a real C-OTDR-system are presented in this report.
Abstract: Recently, there has been a lot of interest in short-range underwater optical wireless communication because of its high bandwidth. However, most previous works consider only line-of-sight propagation or single scattering of photons. In practice this is not realistic, because beams can be blocked underwater and multiple scattering also occurs as photons propagate through water. In this paper we consider a non-line-of-sight underwater wireless optical communication system with multiple scattering and examine the performance of the system using Monte Carlo simulation. The scattering-angle distribution of the photons is modeled by the Henyey-Greenstein method. The average bit error rate is calculated using on-off keying modulation for different water types.
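In such a Monte Carlo photon simulation, the Henyey-Greenstein phase function has a closed-form inverse CDF for the scattering-angle cosine. A minimal sampler follows; the asymmetry value g = 0.9 used in the usage note is typical of ocean water, not a value taken from the paper.

```python
import random

def sample_hg_cos_theta(g, rng=random):
    """Inverse-CDF sample of cos(theta) from the Henyey-Greenstein
    phase function with asymmetry parameter g (E[cos theta] = g)."""
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0            # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)
```

Each simulated photon draws a new cos(theta) at every scattering event (plus a uniform azimuth) until it is absorbed or reaches the receiver aperture.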
Abstract: This paper proposes the application of the Smart
Security Concept in the East Mediterranean. Smart Security aims to
secure critical infrastructure, such as hydrocarbon platforms, against
asymmetrical threats. The concept is based on Anti-Asymmetrical Area Denial (A3D), which requires limiting the freedom of action of maritime terrorists and pirates by establishing safe and secure maritime areas along sea lines of communication using short-range capabilities.
Abstract: We propose a Hyperbolic Gompertz Growth Model (HGGM), developed by introducing an allometric shape parameter. This was achieved by convoluting a hyperbolic sine function with the intrinsic rate of growth in the classical Gompertz growth equation. The resulting integral solution, obtained deterministically, was reprogrammed into a statistical model and used to model the height and diameter of pines (Pinus caribaea). Its predictive ability was compared with that of the classical Gompertz growth model using goodness-of-fit tests and model selection criteria; the proposed approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check the compliance of the error term with the normality assumption, while the independence of the error term was confirmed using the runs test. The mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely under the hyperbolic Gompertz growth model than under the source model (the classical Gompertz growth model), and the R2, adjusted R2, MSE, and AIC results confirmed the predictive power of the Hyperbolic Gompertz growth model over its source model.
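The classical Gompertz curve is standard; the hyperbolic modification below is only one plausible reading of the construction described above (the paper's exact equation is not reproduced), with a hypothetical shape parameter theta.

```python
import math

def gompertz(t, a, b, c):
    """Classical Gompertz growth: H(t) = a * exp(-b * exp(-c * t)),
    with asymptote a, displacement b, and intrinsic rate c."""
    return a * math.exp(-b * math.exp(-c * t))

def hyperbolic_gompertz(t, a, b, c, theta):
    """HYPOTHETICAL form for illustration only: the time argument of
    the intrinsic-rate term is passed through sinh, adding a shape
    parameter theta. The paper's actual equation may differ."""
    return a * math.exp(-b * math.exp(-c * math.sinh(theta * t)))
```

At theta such that sinh(theta t) ≈ t for small t, the two curves coincide early and diverge later, which is the kind of extra flexibility a shape parameter buys.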
Abstract: The use of energy dissipation systems for seismic applications has increased worldwide; thus, it is necessary to develop practical and modern criteria for their optimal design. Here, a direct displacement-based seismic design approach for frame buildings with hysteretic energy dissipation systems (HEDS) is applied. The building is constituted by two individual structural systems: 1) a main elastic structural frame designed for service loads; and 2) a secondary system, corresponding to the HEDS, that controls the effects of lateral loads. The procedure controls two design parameters: a) the stiffness ratio (α = Kframe/Ktotal system), and b) the strength ratio (γ = Vdamper/Vtotal system). The proposed damage-controlled approach contributes to the design of a more sustainable and resilient building because the structural damage is concentrated in the HEDS. The reduction of the design displacement spectrum is achieved by means of a recently published damping factor for elastic structural systems with HEDS located in Mexico City. Two limit states are verified: serviceability and near collapse. Instead of the traditional trial-and-error approach, a procedure that allows the designer to establish the preliminary sizes of the structural elements of both systems is proposed. The design methodology is applied to an 8-story steel building with buckling-restrained braces, located in the soft soil of Mexico City. With the aim of choosing the optimal design parameters, a parametric study is developed considering different values of α and γ. The simplified methodology is intended for the preliminary sizing, design, and evaluation of the effectiveness of HEDS, and it constitutes a modern and practical tool that enables the structural designer to select the best design parameters.
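The two design parameters can be computed directly from the subsystem stiffnesses and strengths. The sketch below takes the totals as the sums of the two subsystems, which is an assumption of this illustration rather than a statement of the paper's formulation.

```python
def design_ratios(k_frame, k_damper, v_frame, v_damper):
    """The two design parameters of the procedure:
    stiffness ratio  alpha = K_frame  / K_total
    strength ratio   gamma = V_damper / V_total
    (totals assumed to be the sums of the two subsystems)."""
    alpha = k_frame / (k_frame + k_damper)
    gamma = v_damper / (v_frame + v_damper)
    return alpha, gamma
```

A parametric study then sweeps these two ratios and checks both limit states (serviceability and near collapse) for each candidate pair.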
Abstract: The purposes of this study are 1) to study the effects of a participatory error correction process and 2) to find out the students' satisfaction with such an error correction process. This study is quasi-experimental research with a single group, in which data were collected 5 times preceding and following 4 experimental rounds of the participatory error correction process, including providing coded indirect corrective feedback in the students' texts along with error treatment activities. The sample consisted of 52 2nd-year English major students of the Faculty of Humanities and Social Sciences, Suan Sunandha Rajabhat University. The tool for the experimental study was the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tools for data collection were 5 writing tests of short texts and a questionnaire. Based on formative evaluation of the students' writing ability prior to and after each of the 4 experiments, the research findings reveal higher student scores with a statistically significant difference at the 0.00 level. Moreover, in terms of the effect size of the process, the means of the students' scores prior to and after the 4 experiments give d equal to 0.6801, 0.5093, 0.5071, and 0.5296, respectively. It can be concluded that the participatory error correction process enables all of the students to learn equally well and improves their ability to write short texts. Finally, the students' overall satisfaction with the participatory error correction process is at a high level (Mean = 4.39, S.D. = 0.76).
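The reported effect sizes are Cohen's d values. The standard pooled-standard-deviation computation can be sketched as follows; the study's raw scores are not available, so the test data below are illustrative.

```python
import math

def cohens_d(pre, post):
    """Cohen's d effect size for pre/post scores: the mean
    difference divided by the pooled standard deviation."""
    n1, n2 = len(pre), len(post)
    m1 = sum(pre) / n1
    m2 = sum(post) / n2
    v1 = sum((x - m1) ** 2 for x in pre) / (n1 - 1)     # sample variances
    v2 = sum((x - m2) ** 2 for x in post) / (n2 - 1)
    pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled
```

By the usual rule of thumb, the reported values of roughly 0.5-0.7 correspond to medium effects.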