Abstract: One of the main challenges for MIMO-OFDM systems in
achieving the expected performance in terms of data rate and
robustness against multi-path fading channels is channel estimation.
Several methods have been proposed in the literature based on either
least squares (LS) or minimum mean squared error (MMSE) estimators.
These methods have high implementation complexity, as they require
the inversion of large matrices. To overcome this problem and reduce
the complexity, this paper presents a solution that benefits from the
use of the STBC encoder and transforms the channel estimation process
into a set of simple linear operations. The proposed method is
evaluated via simulation over an AWGN-Rayleigh fading channel.
Simulation results show a maximum deviation of 6.85% in the bit error
rate (BER) from the one obtained in the ideal case where the receiver
has perfect knowledge of the channel.
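As an illustrative sketch of how an orthogonal STBC reduces channel estimation to linear operations (a noise-free 2x1 Alamouti case with invented pilot and channel values, not necessarily the paper's exact scheme):

```python
# Hypothetical sketch: with an Alamouti STBC, two known pilot symbols let
# the receiver recover both channel gains by linear combinations only --
# no matrix inversion is needed.
def alamouti_channel_estimate(r1, r2, s1, s2):
    """Estimate (h1, h2) from two received pilot samples.

    Alamouti transmission over two symbol periods (noise-free):
        r1 =  h1*s1       + h2*s2
        r2 = -h1*conj(s2) + h2*conj(s1)
    The code matrix is orthogonal, so the LS solution reduces to
    simple multiply-accumulate operations.
    """
    p = abs(s1) ** 2 + abs(s2) ** 2                 # pilot energy
    h1 = (s1.conjugate() * r1 - s2 * r2) / p
    h2 = (s2.conjugate() * r1 + s1 * r2) / p
    return h1, h2

# Usage: known QPSK-like pilots, a made-up channel, noise-free reception.
h1, h2 = 0.8 + 0.3j, -0.2 + 0.5j
s1, s2 = 1 + 1j, 1 - 1j
r1 = h1 * s1 + h2 * s2
r2 = -h1 * s2.conjugate() + h2 * s1.conjugate()
est1, est2 = alamouti_channel_estimate(r1, r2, s1, s2)
```

The orthogonality of the Alamouti code matrix is what removes the inversion: the least-squares solution collapses to two complex multiply-accumulates per channel tap.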
Abstract: The primary focus of this paper is the generation of
energy-optimal speed trajectories for heterogeneous electric vehicle
platoons in urban driving conditions. Optimal speed trajectories are
generated for individual vehicles and for an entire platoon under
the assumption that they can be executed without errors, as would
be the case for self-driving vehicles. It is then shown that the
optimization for the “average vehicle in the platoon” generates similar
transportation energy savings to optimizing speed trajectories for
each vehicle individually. The introduced approach only requires the
lead vehicle to run the optimization software while the remaining
vehicles are only required to have adaptive cruise control capability.
The achieved energy savings are typically between 30% and 50%
for stop-to-stop segments in cities. The prime motivation for urban
platooning comes from the fact that urban platoons efficiently utilize
the available space, and the minimization of transportation energy in
cities is important for many reasons, e.g., environmental, power,
and range considerations.
Abstract: The issue of high blood sugar levels, which may end in diabetes mellitus, is becoming a rampant disorder in our community. In recent times, a lack of awareness among most people has made this disease a silent killer. The situation calls for urgency, hence the need to design a device that serves as a monitoring tool, such as a wrist watch, to alert those living with high blood glucose to danger ahead of time, as well as to introduce a mechanism of checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons at the input stage including a bias, 15 neurons in the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of obtaining an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in Java and run for 1000 epochs to reduce the errors to the barest minimum. The internal circuitry of the device comprises hardware compatible with the nature of each of the input neurons. Light-emitting diodes (LEDs) of red, green, and yellow are used as the neural network outputs to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that a neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than the conventional methods of carrying out blood tests.
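The XOR training loop described above can be sketched as follows (a Python toy with a reduced 2-4-1 architecture instead of the paper's Java 8-15-10 network; hidden-layer size, learning rate, and seed are invented):

```python
import math, random

# Minimal sketch, assuming a reduced 2-4-1 network: a feedforward net
# trained by plain online backpropagation on the XOR truth table, for the
# paper's 1000 epochs.
random.seed(0)

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]                              # XOR targets

sig = lambda z: 1.0 / (1.0 + math.exp(-z))
H = 4                                          # hidden neurons (assumption)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sig(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w1, b1)]
    return h, sig(sum(w * hi for w, hi in zip(w2, h)) + b2)

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

first = loss()
lr = 2.0
for _ in range(1000):                          # 1000 epochs, as in the abstract
    for x, y in zip(X, Y):
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)            # output delta
        for j in range(H):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])   # hidden delta
            w2[j] -= lr * d_o * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_o
final = loss()
```

After training, the mean squared error over the four XOR patterns is well below its initial value, which is the "barest minimum" criterion the abstract describes.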
Abstract: The aim of the study was to assess the diets of nursing home residents. Ten-day menus provided by a social welfare home were entered into the computer program Diet 5 and analyzed with respect to protein, fat, carbohydrate, energy, vitamin D, and calcium content. The resulting mean values of the 10-day menus were compared with the existing Nutrition Standards for the Polish population. The analysis of the menus showed that the average amount of energy supplied by the food is not sufficient. The carbohydrate supply is too high, at 257% of the norm. The average amounts of fat and protein supplied with food are adequate: 85.2 g/day and 75.2 g/day, respectively. The calcium content of the diet is 513.9 mg/day. The amount of vitamin D supplied in the 51-65 age group is 2.3 µg/day. The dietary errors identified are due to the lack of detailed nutritional guidelines for nursing homes, as well as for state-owned care facilities in general.
Abstract: Producing a text in a language which is not one’s mother tongue can be a demanding task for language learners. Examining lexical errors committed by EFL learners is a challenging area of investigation which can shed light on the process of second language acquisition. Despite the considerable number of investigations into grammatical errors, few studies have tackled formal and semantic errors of lexis committed by EFL learners. The current study aimed at examining Persian learners’ formal and semantic errors of lexis in English. To this end, 60 students at three different proficiency levels were asked to write on 10 different topics in 10 separate sessions. Finally, 600 essays written by Persian EFL learners were collected, acting as the corpus of the study. An error taxonomy comprising formal and semantic errors was selected to analyze the corpus. The formal category covered misselection and misformation errors, while the semantic errors were classified into lexical, collocational and lexicogrammatical categories. Each category was further classified into subcategories depending on the identified errors. The results showed that there were 2583 errors in the corpus of 9600 words, among which, 2030 formal errors and 553 semantic errors were identified. The most frequent errors in the corpus included formal error commitment (78.6%), which were more prevalent at the advanced level (42.4%). The semantic errors (21.4%) were more frequent at the low intermediate level (40.5%). Among formal errors of lexis, the highest number of errors was devoted to misformation errors (98%), while misselection errors constituted 2% of the errors. Additionally, no significant differences were observed among the three semantic error subcategories, namely collocational, lexical choice and lexicogrammatical. The results of the study can shed light on the challenges faced by EFL learners in the second language acquisition process.
Abstract: Terrorism and radicalization have become a common threat to every nation in this world. As part of the asymmetric warfare threat, terrorism and radicalization need a complex strategy as the problem solver. One such way is collaboration with the international community. The Our Eyes Initiative (OEI), for example, is a cooperation pact in the field of intelligence information exchange related to terrorism and radicalization, initiated by the Indonesian Ministry of Defence. The pact has been signed by Indonesia, the Philippines, Malaysia, Brunei Darussalam, Thailand, and Singapore. This cooperation mostly engages the military in a central role, but it still requires the involvement of various parties such as the police, intelligence agencies, and other government institutions. This paper uses a qualitative content analysis method to address the opportunities for, and enhance the optimization of, OEI. As a result, it explains how OEI can seize these opportunities as a counter-terrorism strategy by building itself up as a regional cooperation, building the legitimacy of governments, and creating a legal framework for the information-sharing system.
Abstract: Link adaptation is an important strategy for achieving robust wireless multimedia communications based on quality of service (QoS) demand. Scheme switching in multiple-input multiple-output (MIMO) systems is an aspect of link adaptation, and it involves selecting among different MIMO transmission schemes or modes so as to adapt to varying radio channel conditions for the purpose of achieving QoS delivery. However, finding the most appropriate switching method in MIMO links is still a challenge, as existing methods are either computationally complex or not always accurate. This paper presents an intelligent switching method for a MIMO system consisting of two schemes - transmit diversity (TD) and spatial multiplexing (SM) - using a fuzzy logic technique. In this method, two channel quality indicators (CQI), namely the average received signal-to-noise ratio (RSNR) and the received signal strength indicator (RSSI), are measured and passed as inputs to the fuzzy logic system, which then gives a decision - an inference. The switching decision of the fuzzy logic system is fed back to the transmitter to switch between the TD and SM schemes. Simulation results show that the proposed fuzzy logic-based switching technique outperforms a conventional static switching technique in terms of bit error rate and spectral efficiency.
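A minimal sketch of such a fuzzy switching rule (the membership functions, universes, and rule base below are invented for illustration and are not the paper's tuned system):

```python
# Illustrative sketch, assuming triangular membership functions: two CQIs
# (RSNR, RSSI) are fuzzified, combined by min-max rules, and the crisp
# decision selects spatial multiplexing (SM) or transmit diversity (TD).
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def switch_scheme(rsnr_db, rssi_dbm):
    # Fuzzify (assumed universes: RSNR around 0..30 dB, RSSI -100..-40 dBm).
    snr_hi = tri(rsnr_db, 10, 30, 50)
    snr_lo = tri(rsnr_db, -20, 0, 20)
    rssi_hi = tri(rssi_dbm, -80, -40, 0)
    rssi_lo = tri(rssi_dbm, -140, -100, -60)
    # Rules: good channel on both indicators -> SM, otherwise -> TD.
    sm = min(snr_hi, rssi_hi)
    td = max(min(snr_lo, rssi_lo),
             min(snr_lo, rssi_hi),
             min(snr_hi, rssi_lo))
    return "SM" if sm > td else "TD"
```

A strong channel (e.g. 25 dB RSNR, -50 dBm RSSI) fires the SM rule, while a weak one (5 dB, -90 dBm) falls back to TD; the inference itself is just a handful of min/max operations, which is the low-complexity appeal of the approach.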
Abstract: In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models utilize second-order polynomials to model the shapes within a training set. To estimate the regression models, we need to extract the coefficients which describe the variations for a set of shape classes. Hence, a least squares method is used to estimate such models. We then proceed by training these coefficients using the Expectation-Maximization (EM) algorithm. Recognition is carried out by finding the least landmark-displacement error with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
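The recognition step can be sketched as follows (a simplified stand-in: plain least-squares quadratic fits per class, without the paper's EM training; the landmark data are invented):

```python
# Simplified sketch, assuming each class is summarized by one quadratic:
# fit y = a*x^2 + b*x + c to a class's landmarks by least squares, then
# assign a query shape to the class with the smallest displacement error.
def quad_fit(points):
    """Least-squares coefficients (a, b, c) of y = a*x^2 + b*x + c."""
    # Build the 3x3 normal equations A^T A w = A^T y explicitly.
    s = [sum(x ** k for x, _ in points) for k in range(5)]
    t = [sum(y * x ** k for x, y in points) for k in range(3)]
    m = [[s[4], s[3], s[2], t[2]],
         [s[3], s[2], s[1], t[1]],
         [s[2], s[1], s[0], t[0]]]
    for i in range(3):                      # Gaussian elimination w/ pivoting
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            m[r] = [v - f * u for v, u in zip(m[r], m[i])]
    w = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back-substitution
        w[i] = (m[i][3] - sum(m[i][j] * w[j] for j in range(i + 1, 3))) / m[i][i]
    return w

def residual(coeffs, points):
    a, b, c = coeffs
    return sum((y - (a * x * x + b * x + c)) ** 2 for x, y in points)

def classify(query, models):
    """Pick the class whose fitted curve best explains the landmarks."""
    return min(models, key=lambda name: residual(models[name], query))

# Usage with toy landmark sets for two "classes".
parab = [(x, float(x * x)) for x in range(-2, 3)]
line = [(x, float(x)) for x in range(-2, 3)]
models = {"parabola": quad_fit(parab), "line": quad_fit(line)}
```

In the paper the coefficients are further trained with EM over a whole training set; this sketch keeps only the least-squares estimation and least-error recognition steps.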
Abstract: In recent decades, medical imaging has been dominated by
the use of costly film media for the review and archival of medical
investigations. However, due to developments in network technologies
and the common acceptance of the Digital Imaging and Communications
in Medicine (DICOM) standard, an alternative approach based on the
World Wide Web has emerged. Web technologies have been used
successfully in telemedicine applications, and here the combination
of web technologies with DICOM is used to design a web-based,
open-source DICOM viewer. The web server allows the query and
retrieval of images, and the images are viewed and manipulated inside
a web browser without the need to preinstall any software. The
dynamic site pages for medical image visualization and processing
were created using JavaScript and HTML5. XAMPP (Apache server) is
used to create a local web server for testing and deployment of the
dynamic site. The web-based viewer is connected to multiple devices
through a local area network (LAN) to distribute the images inside
healthcare facilities. The system offers a few advantages over
ordinary picture archiving and communication systems (PACS): it is
easy to install and maintain, platform-independent, allows images to
be displayed and manipulated efficiently, and is user-friendly and
easy to integrate with existing systems that already make use of web
technologies. A wavelet-based image compression technique is
employed, in which the 2-D discrete wavelet transform is used to
decompose the image; the wavelet coefficients are then transmitted
with entropy encoding after thresholding to decrease transmission
time, storage cost, and capacity. The compression performance was
estimated using image quality metrics such as the mean square error
(MSE), peak signal-to-noise ratio (PSNR), and compression ratio (CR),
which reached 83.86% when the 'coif3' wavelet filter was used.
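The compression pipeline can be sketched as follows (a 1-D Haar transform standing in for the paper's 2-D 'coif3' decomposition; the signal, threshold, and CR definition here are illustrative assumptions):

```python
import math

# Illustrative sketch, assuming a one-level 1-D Haar transform: transform,
# zero small detail coefficients (hard threshold), reconstruct, then report
# the same quality metrics the abstract uses -- MSE, PSNR, and a
# compression ratio taken here as the percentage of coefficients dropped.
def haar(signal):
    avg = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    det = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return avg, det

def ihaar(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def compress(signal, thr):
    avg, det = haar(signal)
    det = [0.0 if abs(d) < thr else d for d in det]   # hard threshold
    kept = sum(1 for d in det if d) + len(avg)        # nonzero coefficients
    rec = ihaar(avg, det)
    mse = sum((x - y) ** 2 for x, y in zip(signal, rec)) / len(signal)
    psnr = float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)
    cr = 100 * (1 - kept / len(signal))               # % coefficients dropped
    return rec, mse, psnr, cr

# Usage on an invented 8-sample "scanline" of pixel values.
signal = [100.0, 102.0, 50.0, 50.0, 200.0, 198.0, 90.0, 91.0]
rec, mse, psnr, cr = compress(signal, 2.0)
```

Zeroing the near-zero detail coefficients halves the coefficient count here at a small reconstruction error, which is the trade-off the PSNR/CR figures in the abstract quantify.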
Abstract: Driven by the demand of intelligent monitoring in
rehabilitation centers or hospitals, a high accuracy real-time location
system based on UWB (ultra-wideband) technology was proposed.
The system measures precise location of a specific person, traces his
movement and visualizes his trajectory on the screen for doctors or
administrators. Therefore, doctors can view the position of a patient
at any time and find them immediately and precisely when an emergency
occurs. In the design process, different algorithms were discussed
and their errors analyzed. In addition, a simple but effective way of
correcting the antenna delay error is discussed. By choosing the best
algorithm and correcting errors with the corresponding methods, the
system attained good accuracy. Experiments indicated that the
ranging error of the system is lower than 7 cm, the locating error is
lower than 20 cm, and the refresh rate exceeds 5 times per second. In
future works, by embedding the system in wearable IoT (Internet of
Things) devices, it could provide not only physical parameters, but
also the activity status of the patient, which would help doctors a lot in
performing healthcare.
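A common way to turn UWB ranges into a position, sketched here with an invented anchor layout (the paper compares several such algorithms; this is only the linearized least-squares variant for three anchors):

```python
import math

# Minimal trilateration sketch, assuming three fixed anchors: subtracting
# the first range equation from the others cancels the quadratic terms,
# leaving a 2x2 linear system for the tag position (x, y).
def trilaterate(anchors, dists):
    (x0, y0), d0 = anchors[0], dists[0]
    a, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        a.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(d0 ** 2 - di ** 2 + xi ** 2 + yi ** 2 - x0 ** 2 - y0 ** 2)
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]       # Cramer's rule
    x = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    y = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return x, y

# Usage: invented anchor positions and a noise-free tag at (1, 2) meters.
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
tag = (1.0, 2.0)
dists = [math.dist(a, tag) for a in anchors]
est = trilaterate(anchors, dists)
```

With real UWB ranges the distances carry the centimeter-level errors the abstract reports, and the antenna delay correction it mentions would be applied to `dists` before solving.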
Abstract: A Flight Data Monitoring (FDM) program assists an
operator in the aviation industry to identify, quantify, assess, and
address operational safety risks in order to improve the safety of
flight operations. FDM is a powerful tool that, integrated into the
operator's Safety Management System (SMS), allows the operator to
detect, confirm, and assess safety issues and to check the
effectiveness of corrective actions associated with human errors.
This article proposes a model for assessing the safety risk level of
flight data, focused on different aspects of events, based on fuzzy
set values. It permits the operational safety level to be evaluated
from the point of view of flight activities. The main advantage of
this method is the proposed qualitative safety analysis of flight
data. This research applies the opinions of aviation experts,
collected through a number of questionnaires related to flight data,
in four categories of occurrence that can take place during an
accident or incident: Runway Excursion (RE), Controlled Flight Into
Terrain (CFIT), Mid-Air Collision (MAC), and Loss of Control in
Flight (LOC-I). By weighting each one (by F-TOPSIS) and applying it
to the number of risks of the event, the safety risk of each related
event can be obtained.
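The ranking step can be illustrated with crisp TOPSIS standing in for the paper's fuzzy F-TOPSIS (the decision matrix, criteria, and weights below are invented, not the experts' questionnaire data):

```python
import math

# Crisp TOPSIS sketch, assuming benefit-type criteria: each occurrence
# category is scored by its closeness to the ideal solution after
# vector normalization and weighting.
def topsis(matrix, weights):
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]
    v = [[w * x / n for x, n, w in zip(row, norms, weights)]
         for row in matrix]
    best = [max(c) for c in zip(*v)]        # ideal solution
    worst = [min(c) for c in zip(*v)]       # anti-ideal solution
    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))
    return [dist(r, worst) / (dist(r, worst) + dist(r, best)) for r in v]

# Rows: RE, CFIT, MAC, LOC-I; columns: three assumed risk criteria.
m = [[7, 6, 8],
     [9, 8, 9],
     [5, 4, 6],
     [8, 9, 7]]
scores = topsis(m, [0.5, 0.3, 0.2])
ranking = sorted(range(4), key=lambda i: -scores[i])
```

In the fuzzy variant each matrix entry and weight would be a fuzzy number aggregated across experts; the closeness-coefficient ranking logic is the same.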
Abstract: This paper presents the prediction performance of
feedforward Multilayer Perceptron (MLP) and Echo State Network (ESN)
models trained with an extended Kalman filter. Feedforward neural
networks and ESN are powerful neural networks which can track and
predict nonlinear signals. However, their tracking performance
depends on the specific signals or data sets, having the risk of
instability accompanied by large error. In this study we explore this
process by applying different network size and leaking rate for
prediction of nonlinear or chaotic signals in MLP neural networks.
Major problems of ESN training such as the problem of initialization
of the network and improvement in the prediction performance are
tackled. The influence of coefficient of activation function in the
hidden layer and other key parameters are investigated by simulation
results. Extended Kalman filter is employed in order to improve the
sequential and regulation learning rate of the feedforward neural
networks. This training approach has vital features in the training of
the network when signals have chaotic or non-stationary sequential
pattern. Minimization of the variance in each step of the computation
and hence smoothing of tracking were obtained by examining the
results, indicating satisfactory tracking characteristics for certain
conditions. In addition, simulation results confirmed the
satisfactory performance of both neural networks with modified
parameterization in tracking nonlinear signals.
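The idea of treating network weights as Kalman states can be reduced to a scalar toy (one weight instead of a full MLP; the model, parameters, and data below are invented for illustration):

```python
import math

# Scalar sketch, assuming a one-weight "network" y = tanh(w*x): an extended
# Kalman filter treats the weight w as the state and each training pair as
# a measurement of the nonlinear output.
def ekf_train(samples, w=0.0, p=1.0, q=1e-4, r=1e-3):
    for x, y in samples:
        p += q                                   # predict: weight is static
        out = math.tanh(w * x)
        h = x * (1.0 - out * out)                # Jacobian d(out)/dw
        k = p * h / (h * p * h + r)              # Kalman gain
        w += k * (y - out)                       # correct with innovation
        p *= (1.0 - k * h)                       # covariance update
    return w

# Usage: noise-free sweep generated from an assumed true weight.
true_w = 1.5
xs = [0.25 * i for i in range(-8, 9) if i != 0]
data = [(x, math.tanh(true_w * x)) for x in xs] * 20
w_hat = ekf_train(data)
```

The process-noise term `q` keeps the gain from collapsing, which is what gives EKF training its ability to keep adapting on non-stationary sequences; the paper applies the same recursion to the full weight vector with matrix covariances.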
Abstract: In this work, we present an efficient approach for
solving variable-order time-fractional partial differential
equations, based on Legendre and Laguerre polynomials. First, we
introduce the pseudo-operational matrices of integer- and
variable-fractional-order integration by using some properties of
the Riemann-Liouville fractional integral. These matrices are then
applied, together with the collocation method and Legendre-Laguerre
functions, to solve variable-order time-fractional partial
differential equations. An estimation of the error is also
presented. Finally, we investigate numerical examples arising in
physics to demonstrate the accuracy of the present method. A
comparison of the results obtained by the present method with the
exact solution and with other methods reveals that the method is
very effective.
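For reference, the variable-order operator involved is typically the Riemann-Liouville integral with a time-dependent order; one common form (notation assumed here, not taken verbatim from the paper) is

```latex
\left(I^{\alpha(t)} f\right)(t)
  \;=\; \frac{1}{\Gamma\!\left(\alpha(t)\right)}
        \int_{0}^{t} (t-s)^{\,\alpha(t)-1}\, f(s)\, ds,
  \qquad \alpha(t) > 0,
```

and it is the action of this operator on the Legendre-Laguerre basis functions that the pseudo-operational integration matrices approximate, so that collocation turns the fractional PDE into an algebraic system.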
Abstract: For precise geoid determination, we use a reference field to subtract the long and medium wavelengths of the gravity field from observation data when using the remove-compute-restore technique. Therefore, a comparison study between candidate models should be made in order to select the optimal reference gravity field. In this context, two recent global geopotential models have been selected to perform this comparison study over Northern Algeria: the Earth Gravitational Model (EGM2008) and the Global Gravity Model (GECO), the latter conceived as a combination of the first model with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the study area have been used to compute residual data using both gravity field models, with a Digital Terrain Model (DTM) used to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions, which were fitted to the closed form in order to compare their statistical behavior in both cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS-levelled points on benchmarks using least squares adjustment. The results described in detail in this paper point to a slight overall advantage of the GECO model, through both error degree variance comparison and ground-truth evaluation.
Abstract: The Wiener and Lévy driving processes are known
self-standing Gaussian-Markov processes for fitting the non-linear
dynamical Vasicek model. In this paper, a coincidental Gaussian
density stationarity condition and autocorrelation function of the
two driving processes were established. This led to the conflation
of the Wiener and Lévy processes in order to investigate the
efficiency of the estimates incorporated into the one-dimensional
Vasicek model, estimated via the Maximum Likelihood (ML) technique.
The conditional laws of the drift, diffusion, and stationarity
process were ascertained for the individual Wiener and Lévy
processes, as well as for the commingling of the two processes, for
a fixed-effect and autoregressive-like Vasicek model when applied to
a financial series: the exchange rate of the Naira to the CFA Franc.
In addition, the model performance error of the merged driving
process was small compared to that of the self-standing Wiener and
Lévy driving processes.
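The estimation step can be sketched for the Wiener-driven case (the parameters and the simulated series below are invented, not the Naira-CFA Franc data):

```python
import math, random

# Minimal sketch, assuming a Wiener-driven Vasicek model: its exact
# discretization is an AR(1) process, so least-squares regression of
# x[t+1] on x[t] -- the ML estimate under Gaussian noise -- recovers the
# mean-reversion speed theta and long-run mean mu.
def fit_vasicek(series, dt):
    x, y = series[:-1], series[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
         sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    theta = -math.log(a) / dt          # mean-reversion speed
    mu = b / (1.0 - a)                 # long-run mean
    return theta, mu

# Usage: simulate an AR(1)/Vasicek path with assumed true parameters.
rng = random.Random(3)
theta_true, mu_true, dt = 0.8, 1.0, 0.1
a = math.exp(-theta_true * dt)
x = [mu_true]
for _ in range(2000):
    x.append(a * x[-1] + mu_true * (1 - a) + rng.gauss(0.0, 0.02))
theta_hat, mu_hat = fit_vasicek(x, dt)
```

The Lévy-driven and commingled cases in the paper change the innovation law but the drift estimation remains a regression of this form.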
Abstract: The Earth system generates different phenomena that are observable at the surface of the Earth, such as mass deformations and displacements leading to plate tectonics, earthquakes, and volcanism. The dynamic processes associated with the interior, surface, and atmosphere of the Earth affect the three pillars of geodesy: the shape of the Earth, its gravity field, and its rotation. Geodesy establishes a characteristic structure in order to define, monitor, and predict the behaviour of the whole Earth system. The traditional and new instruments, observables, and techniques in geodesy are related to the gravity field. Therefore, geodesy monitors the gravity field and its temporal variability in order to transform the geodetic observations made on the physical surface of the Earth into the geometrical surface on which positions are mathematically defined. In this paper, the main components of gravity field modeling, the Free-air and Bouguer gravity anomalies, are calculated via recent global models (EGM2008, EIGEN6C4, and GECO) over a selected study area. The model-based gravity anomalies are compared with the corresponding terrestrial gravity data in terms of standard deviation (SD) and root mean square error (RMSE) to determine the best-fitting global model for the study area at a regional scale in Turkey. The lowest SD (13.63 mGal) and RMSE (15.71 mGal) were obtained by EGM2008 for the Free-air gravity anomaly residuals. For the Bouguer gravity anomaly residuals, EIGEN6C4 provides the lowest SD (8.05 mGal) and RMSE (8.12 mGal). The results indicate that EIGEN6C4 can be a useful tool for modeling the gravity field of the Earth over the study area.
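The evaluation statistics used above are straightforward to compute; as a small sketch (the residual values below are invented, not the paper's data):

```python
import math

# Minimal sketch: model-minus-terrestrial gravity anomaly residuals are
# summarized by their standard deviation (spread about the mean) and RMSE
# (spread about zero), the two metrics compared in the abstract.
def sd_rmse(residuals):
    n = len(residuals)
    mean = sum(residuals) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in residuals) / n)
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    return sd, rmse

res = [2.0, -1.0, 3.0, -2.0, 0.5]   # mGal, illustrative only
sd, rmse = sd_rmse(res)
```

SD and RMSE coincide only when the residuals have zero mean; a model with a systematic bias shows an RMSE noticeably larger than its SD, which is why both are reported.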
Abstract: Article 3 of the European Convention for the Protection of Human Rights and Fundamental Freedoms (E.C.H.R.) proclaims that no one may be subjected to torture or to inhuman or degrading treatment or punishment. The legislative correlate in Spain is embodied in Article 15 of the Spanish Constitution, and there must be an overlapping interpretation of both precepts on the ideal plane. While it is true that there are not many cases in which the European Court of Human Rights (E.C.t.H.R., the Strasbourg Court) has sanctioned Spain for its failure to investigate complaints of torture, it must be emphasized that the tendency to violate Article 3 of the Convention appears to be on the rise, making it necessary to identify the possible factors that may be affecting it. This paper addresses the analysis of judgments that directly or indirectly reveal the violation of Article 3 of the European Convention. To carry out the analysis, judgments of the Strasbourg Court from 2012 to 2016 have been consulted, with judgments prior to this period also addressed when they provided information necessary for the study. After the review, it becomes clear that there are two key groups of subjects that request a response from the Strasbourg Court on the understanding that they have been tortured or degradingly treated: immigrants and terrorists. Both phenomena, immigration and terrorism, respond to patterns that have mutated in recent years, and it is important for this study to know whether national regulations are beginning to be dysfunctional.
Abstract: A tax authority wants to take actions it knows will foster
the greatest degree of voluntary taxpayer compliance to reduce the
“tax gap.” This paper suggests that even if a tax authority could attain
a state of complete knowledge, there are constraints on whether and
to what extent such actions would result in reducing the macro-level
tax gap. These limits are not merely a consequence of finite agency
resources. They are inherent in the system itself. To show that this is
one possible interpretation of the tax gap data, the paper formulates
known results in a different way by analyzing tax compliance as a
population with a single covariate. This leads to a standard use of the
logistic map to analyze the dynamics of non-compliance growth or
decay over a sequence of periods. This formulation gives the same
results as the tax gap studies performed over the past fifty years
in the U.S. given the published margins of error. Limitations and
recommendations for future work are discussed, along with some
implications for tax policy.
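The dynamical device referenced above can be sketched directly (the growth parameter and starting rate below are illustrative, not fitted to the U.S. tax gap studies):

```python
# Minimal sketch, assuming the non-compliance rate x in [0, 1] evolves
# from one period to the next under the logistic map x' = r * x * (1 - x).
def logistic_path(x0, r, periods):
    xs = [x0]
    for _ in range(periods):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# For 1 < r < 3 the rate settles to the stable fixed point 1 - 1/r,
# i.e. a persistent tax gap that enforcement nudges cannot drive to zero.
path = logistic_path(0.15, 2.5, 60)
```

This is the sense in which the limits are "inherent in the system itself": for a wide band of parameters the map has a nonzero attractor, so the macro-level non-compliance rate converges to (or oscillates around) a positive level regardless of the starting point.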
Abstract: The development of a quantum key distribution (QKD) system on a field-programmable gate array (FPGA) platform is the subject of this paper. A quantum cryptographic protocol is designed based on the properties of quantum information and the characteristics of FPGAs. The proposed protocol performs key extraction, reconciliation, error correction, and privacy amplification tasks to generate a perfectly secret final key. We modeled the presence of the spy in our system with a strategy to reveal some of the exchanged information without being noticed. Using an FPGA card with a 100 MHz clock frequency, we have demonstrated the evolution of the error rate as well as the amounts of mutual information (between the two interlocutors and that of the spy) passing from one step to another in the key generation process.
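The error-rate behaviour described above can be illustrated with a software toy (a BB84-style simulation with an intercept-resend eavesdropper; all parameters are invented and no FPGA specifics are modeled):

```python
import random

# Toy sketch, assuming BB84-style preparation: random bits and bases on
# both sides, an intercept-resend spy on a fraction of the qubits, basis
# sifting, then the quantum bit error rate (QBER) on the sifted key.
def bb84_qber(n, eve_fraction, seed=1):
    rng = random.Random(seed)
    errors = sifted = 0
    for _ in range(n):
        bit = rng.randint(0, 1)
        basis_a = rng.randint(0, 1)           # sender's basis
        basis_b = rng.randint(0, 1)           # receiver's basis
        if rng.random() < eve_fraction:       # intercept-resend attack
            basis_e = rng.randint(0, 1)
            meas_e = bit if basis_e == basis_a else rng.randint(0, 1)
            bit_sent, basis_sent = meas_e, basis_e
        else:
            bit_sent, basis_sent = bit, basis_a
        meas_b = bit_sent if basis_b == basis_sent else rng.randint(0, 1)
        if basis_a == basis_b:                # sifting: keep matching bases
            sifted += 1
            errors += (meas_b != bit)
    return errors / sifted

qber_clean = bb84_qber(20000, 0.0)
qber_eve = bb84_qber(20000, 1.0)   # full interception -> roughly 25% QBER
```

The jump from near-zero to roughly 25% QBER under full interception is what lets the reconciliation stage detect the spy; the measured error rate also bounds the mutual information the spy can have obtained before privacy amplification.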
Abstract: The Semi-Interlocking Masonry (SIM) system has been developed by the Masonry Research Group at the University of Newcastle, Australia. The main purpose of this system is to enhance the seismic resistance of framed structures with masonry panels. In this system, SIM panels dissipate energy through the sliding friction between rows of SIM units during earthquake excitation. This paper aims to establish the applicability of an artificial neural network (ANN) to predicting the displacement behaviour of the SIM panel under out-of-plane loading. The network needs to be trained with related force-displacement data of the SIM panel. The overall data used to train and test the network are 70 force-displacement increments from three tests, which comprise nine input nodes. The input data contain the height and length of the panel; the height, length, and width of the brick; the friction and geometry angles of the brick; the compressive strength of the brick; and the lateral load applied to the panel. The aim of the designed network is to predict the displacement of the SIM panel with a Multi-Layer Perceptron (MLP). The mean square error (MSE) of the network was 0.00042, and the coefficient of determination (R2) was 0.91. The results revealed that the ANN shows significant agreement in predicting the behaviour of the SIM panel.
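The two reported metrics can be computed as follows (the displacement values below are invented, not the SIM test data):

```python
# Minimal sketch: MSE and the coefficient of determination R^2 compare a
# network's predicted displacements against the measured ones; R^2 = 1
# would mean the predictions explain all of the measured variance.
def mse_r2(measured, predicted):
    n = len(measured)
    mse = sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n
    mean = sum(measured) / n
    ss_tot = sum((m - mean) ** 2 for m in measured)
    r2 = 1.0 - mse * n / ss_tot
    return mse, r2

measured = [0.0, 1.0, 2.0, 3.0, 4.0]      # illustrative displacements
predicted = [0.1, 0.9, 2.0, 3.2, 3.9]     # illustrative ANN outputs
mse, r2 = mse_r2(measured, predicted)
```

Reporting both is informative because MSE depends on the displacement units while R^2 is scale-free, which is why the abstract quotes the pair 0.00042 and 0.91 together.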