Abstract: Spectrum usage monitoring and signal identification with cognitive radio are carried out to identify vacant frequencies for reuse. Internet of Things devices operate on free secondary frequencies and therefore face interference from other users, even while some primary frequencies go unutilised. The design analysed a specific frequency spectrum, checking whether all stations in the 87.5-108 MHz band are presently in use in Benin City, Edo State, Nigeria. The results show that, using Software Defined Radio and Simulink, vacant frequencies in the band under consideration could be identified. An energy detection threshold was then used to decide when a vacant channel can be reused: when the cognitive radio outputs zero (decision H0), the channel is unoccupied. The analysis was thus able to find spectrum holes and show how they can be reused.
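The H0/H1 decision described above can be sketched in a few lines. This is an illustrative energy detector, not the paper's Simulink model; the sample count, noise power, and threshold are assumed values for the demo.

```python
import numpy as np

# Energy detection sketch: decide H0 (vacant, output 0) when the measured
# energy falls below a threshold set above the noise floor, H1 otherwise.
rng = np.random.default_rng(0)

def energy_detect(samples, threshold):
    """Return 0 (H0, channel vacant) or 1 (H1, occupied) from sample energy."""
    energy = np.mean(np.abs(samples) ** 2)
    return 1 if energy > threshold else 0

n = 4096
noise_power = 1.0
threshold = noise_power * 1.2                       # assumed detection threshold

vacant = rng.normal(0, np.sqrt(noise_power / 2), n)  # noise-only channel
tone = np.cos(2 * np.pi * 0.1 * np.arange(n))        # stand-in for an FM carrier
occupied = vacant + 2.0 * tone

print(energy_detect(vacant, threshold))    # 0 -> spectrum hole, reusable
print(energy_detect(occupied, threshold))  # 1 -> channel in use
```

In practice the threshold is chosen from the noise statistics to bound the false-alarm probability; here it is simply a margin above the known noise power.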
Abstract: Near-field synthetic aperture radar (SAR) imaging is an advanced nondestructive testing and evaluation (NDT&E) technique. This paper investigates the complex-valued signal processing involved in a near-field SAR imaging system, where the measurement data turn out to be noncircular and improper, meaning that the complex-valued data are correlated with their complex conjugate. Furthermore, we find that the degrees of impropriety of the measurement data and of the target image can be highly correlated in near-field SAR imaging. Based on these observations, a modified generalized sparse Bayesian learning algorithm is proposed that takes impropriety and noncircularity into account. Numerical results show that the proposed algorithm provides a performance gain by exploiting the noncircularity of the signals.
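The impropriety the abstract relies on can be quantified with a standard circularity coefficient. This is a minimal sketch on synthetic data, not near-field SAR measurements: a complex signal z is improper when its complementary covariance E[z^2] is nonzero, and the coefficient below is near 0 for proper (circular) data and near 1 for maximally improper data.

```python
import numpy as np

# Circularity coefficient: |E[z^2]| / E[|z|^2], in [0, 1].
rng = np.random.default_rng(1)

def circularity_coefficient(z):
    return abs(np.mean(z ** 2)) / np.mean(np.abs(z) ** 2)

n = 100_000
# Proper: i.i.d. circular complex Gaussian (real and imaginary parts independent).
proper = rng.normal(size=n) + 1j * rng.normal(size=n)
# Improper: all samples on one rotated line in the complex plane.
improper = rng.normal(size=n) * (1 + 0.2j)

print(round(circularity_coefficient(proper), 3))    # near 0
print(round(circularity_coefficient(improper), 3))  # near 1
```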
Abstract: Communication signal modulation recognition is one of the key technologies of modern information warfare. Current automatic modulation recognition methods fall into two major categories: maximum likelihood hypothesis testing based on decision theory, and statistical pattern recognition based on feature extraction. The most commonly used is statistical pattern recognition, which comprises feature extraction and classifier design. As the electromagnetic environment of communications grows increasingly complex, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a topic of active research. To address this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature, and an extreme learning machine (ELM), chosen for the real-time demands of modern warfare, is used to classify the extracted features. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, improving performance. It addresses the difficulty that a simple feature extraction algorithm based on the Holder coefficient has in recognizing signals at low SNR, and achieves better recognition accuracy. Simulation results show that the approach retains good classification performance at low SNR: even at an SNR of -15 dB, the recognition accuracy reaches 76%.
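The real-time appeal of the ELM classifier is that only its output layer is trained, in closed form. The sketch below shows that core idea on 2-D toy features standing in for Holder cloud features; the data, sizes, and activation are assumptions, not the paper's setup.

```python
import numpy as np

# Extreme learning machine sketch: random, fixed hidden weights; output
# weights solved via pseudo-inverse (no iterative training).
rng = np.random.default_rng(2)

def elm_fit(X, Y, hidden=50):
    W = rng.normal(size=(X.shape[1], hidden))   # random input weights (fixed)
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Two separable toy clusters standing in for two modulation classes.
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])
Y = np.repeat(np.eye(2), 100, axis=0)           # one-hot labels
W, b, beta = elm_fit(X, Y)
acc = np.mean(elm_predict(X, W, b, beta) == np.repeat([0, 1], 100))
print(acc)   # separable toy data: accuracy near 1.0
```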
Abstract: Over the last decade, wireless sensor networks (WSNs) have been used in many areas, such as health care, agriculture, defense, the military, and disaster-hit areas. A WSN consists of a base station (BS) and a number of wireless sensors that monitor temperature, pressure, or motion under different environmental conditions. The key parameter in designing a WSN protocol is energy efficiency: energy is the scarcest resource of sensor nodes and determines their lifetime, so maximizing node lifetime is a central issue in the design of WSN applications and protocols. Clustering sensor nodes is an effective topology control approach toward this goal. This paper presents an energy-efficient protocol to prolong network lifetime based on an energy-efficient clustering algorithm. Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower energy consumption and improve the lifetime of WSNs. The proposed system maximizes WSN lifetime by choosing the farthest cluster head (CH) instead of the closest one and forming clusters according to parameter metrics such as node density, residual energy, and inter-cluster distance. Comparisons between the proposed protocol and reference protocols in different scenarios have been carried out, and the simulation results show that the proposed protocol outperforms the comparative protocols in the various scenarios.
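For context, the standard LEACH cluster-head election rule that such protocols build on can be stated in a few lines. This sketch shows only the baseline rotating threshold T(n); the proposed protocol's residual-energy, density, and distance weighting is not reproduced, and all parameter values are illustrative.

```python
import numpy as np

# LEACH election sketch: each round r, a node that has not recently served
# as cluster head becomes CH with probability T(n), so every node serves
# as CH once per 1/p rounds on average.
rng = np.random.default_rng(3)

def leach_threshold(p, r, was_ch_recently):
    """LEACH threshold T(n); nodes that already served as CH this epoch get 0."""
    if was_ch_recently:
        return 0.0
    return p / (1 - p * (r % int(1 / p)))

p = 0.1                        # desired fraction of cluster heads
n_nodes = 100
round_no = 0
t = leach_threshold(p, round_no, False)
is_ch = rng.random(n_nodes) < t
print(t, int(is_ch.sum()))     # threshold 0.1; about 10 of 100 nodes elected
```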
Abstract: A rain cell ratio model is proposed that computes the attenuation of the smallest rain cell, which corresponds to the maximum rain rate value, i.e. the cell size when the rainfall rate is exceeded 0.01% of the time, R0.01, and predicts attenuation for other cells as a ratio with this maximum. The model incorporates the dependence of the path factor r on the ellipsoidal variation of the Fresnel zone along the path at different frequencies. In addition, the inhomogeneity of rainfall is modeled by a raindrop packing density factor. To derive the model, two empirical methods for finding the rain cell size distribution Dc are presented. Attenuation measurements from different climatic zones, for terrestrial radio links at frequencies of 7-38 GHz, are then used to test the proposed model. Prediction results show that the path factor computed by the rain cell ratio technique is more reliable than other path factor and effective rain rate models, including the 2013 ITU-R P.530-15 model.
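As a worked baseline, the path-attenuation calculation that such models refine multiplies the ITU-R power-law specific attenuation by path length and a path reduction factor r. The k and alpha values below are illustrative round numbers, not the official coefficients, and r here is a plain placeholder for the proposed Fresnel-zone-dependent factor.

```python
# Baseline rain attenuation: gamma = k * R^alpha (dB/km), scaled by path
# length d and path reduction factor r. All numbers are assumptions.
def rain_attenuation_dB(R_mmh, k, alpha, d_km, r):
    gamma = k * R_mmh ** alpha        # specific attenuation, dB/km
    return gamma * d_km * r           # total path attenuation, dB

# Hypothetical link: R0.01 = 60 mm/h, 10 km path, reduction factor 0.8.
A = rain_attenuation_dB(R_mmh=60.0, k=0.05, alpha=1.1, d_km=10.0, r=0.8)
print(round(A, 2))
```

The proposed model replaces the fixed r with a frequency- and cell-size-dependent ratio, which is where its improvement over effective-rain-rate approaches comes from.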
Abstract: The separation of speech signals has become a research
hotspot in the field of signal processing in recent years. It has
many applications and influences in teleconferencing, hearing aids,
speech recognition of machines and so on. The sounds received are
usually noisy. The issue of identifying the sounds of interest and
obtaining clear sounds in such an environment becomes a problem
worth exploring, that is, the problem of blind source separation.
This paper focuses on the under-determined blind source separation
(UBSS). Sparse component analysis is generally used for the problem
of under-determined blind source separation. The method is mainly
divided into two parts. Firstly, the clustering algorithm is used to
estimate the mixing matrix according to the observed signals. Then
the signal is separated based on the known mixing matrix. In this
paper, the problem of mixing matrix estimation is studied. This paper
proposes an improved algorithm to estimate the mixing matrix for
speech signals in the UBSS model. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response, this paper develops an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms, but also improves the estimation accuracy of the mixing matrix. This
paper takes the mixing of four speech signals into two channels as
an example. The results of simulations show that the approach in this
paper not only improves the accuracy of estimation, but also applies
to any mixing matrix.
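The clustering stage described above can be illustrated compactly: with two mixtures of sparse sources, the scatter points concentrate along the directions of the mixing-matrix columns, and peaks of a potential function over the angle axis recover those columns. This is the basic potential-function idea, not the paper's improved variant, and the sources are synthetic sparse signals rather than speech; all parameters are assumptions.

```python
import numpy as np

# Mixing matrix estimation for 2-mixture, 3-source UBSS via a Gaussian
# potential function over scatter-point angles.
rng = np.random.default_rng(4)

n, n_src = 5000, 3
S = rng.laplace(size=(n_src, n)) * (rng.random((n_src, n)) < 0.1)  # sparse sources
angles_true = np.array([0.3, 1.0, 2.2])                            # column directions (rad)
A = np.vstack([np.cos(angles_true), np.sin(angles_true)])          # 2 x 3 mixing matrix
X = A @ S                                                          # underdetermined mixtures

keep = np.linalg.norm(X, axis=0) > 0.5            # drop near-silent frames
theta = np.mod(np.arctan2(X[1, keep], X[0, keep]), np.pi)

grid = np.linspace(0, np.pi, 360)
# Accumulate a Gaussian potential over the angle grid; peaks mark columns.
potential = np.exp(-((theta[:, None] - grid[None, :]) ** 2) / (2 * 0.02 ** 2)).sum(axis=0)
peaks = [g for i, g in enumerate(grid[1:-1], 1)
         if potential[i] > potential[i - 1] and potential[i] > potential[i + 1]
         and potential[i] > 0.2 * potential.max()]
print(np.round(sorted(peaks), 2))   # should lie near the true angles 0.3, 1.0, 2.2
```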
Abstract: The capacity of conventional cellular networks has reached its upper bound, and the load can be handled by introducing low-cost, easy-to-deploy femtocells. Spectrum interference becomes more critical as value-added multimedia services grow in two-tier cellular networks, and spectrum allocation is one of the effective methods of interference mitigation. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game in which the femto base stations are the players and the available frequency channels are the strategies. The scheme takes full account of competitive behavior and fairness among stations, and the utility function reflects interference essentially from the standpoint of channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function of the distance weight ratio to suppress co-channel interference within the same network layer. The scenario suits actual network deployments, and the system possesses high robustness. Under the proposed mechanism, interference exists only when players employ the same channel for data communication, and spectrum allocation is implemented in a distributed fashion. Numerical results show that the signal to interference plus noise ratio is clearly improved by the spectrum allocation scheme and that the users' downlink quality of service can be satisfied. Simulation results also show that the average spectrum efficiency of the cellular network is significantly improved.
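A distributed non-cooperative game of this kind can be sketched as repeated best response: each femto base station picks the channel that minimizes the co-channel interference it receives from the players currently sharing that channel. The topology, path-loss model, and parameters below are invented for illustration; the paper's negative-logarithm utility is not reproduced.

```python
import numpy as np

# Best-response dynamics for a channel-selection game: players = base
# stations, strategies = channels, cost = received co-channel interference.
rng = np.random.default_rng(5)

n_bs, n_ch = 8, 3
pos = rng.random((n_bs, 2)) * 100                     # BS positions, metres
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2) + 1e-9

def interference(i, ch_of, k):
    """Interference BS i would see on channel k (co-channel players only)."""
    co = [j for j in range(n_bs) if j != i and ch_of[j] == k]
    return sum(1.0 / dist[i, j] ** 2 for j in co)     # simple path-loss model

ch = rng.integers(0, n_ch, n_bs)                      # random initial strategies
for _ in range(100):                                  # best-response sweeps
    changed = False
    for i in range(n_bs):
        best = min(range(n_ch), key=lambda k: interference(i, ch, k))
        if interference(i, ch, best) < interference(i, ch, ch[i]) - 1e-12:
            ch[i] = best
            changed = True
    if not changed:                                   # Nash equilibrium reached
        break
print(ch)
```

Because the symmetric pairwise interference acts as an exact potential, these sweeps terminate at a pure-strategy Nash equilibrium where no station can lower its interference by switching channels.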
Abstract: Using magnetocardiography (MCG) signals to assess cardiac electrical function is a technology developed in recent years. The MCG signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). Extracting the MCG signal, which is buried in noise, is a critical issue for cardiac monitoring systems and MCG applications. To remove the severe background noise, a Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms denoising into a minimization problem, which is solved iteratively with the majorization-minimization algorithm. However, the traditional TV regularization method tends to cause a staircase effect and lacks constraint adaptability. This paper proposes an improved TV regularization method that raises the denoising precision of MCG signals. The improvement has three parts. First, higher-order TV is applied to reduce the staircase effect, substituting the second-order derivative matrix for the first-order one. Second, the positions of the non-zero elements in the second-order derivative matrix are determined from the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving the peak characteristics of the signal. Theoretical analysis and experimental results show that the algorithm effectively improves the output signal-to-noise ratio and delivers superior performance.
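The baseline TV denoiser with the majorization-minimization iteration can be written compactly. This sketch minimizes 0.5*||y - x||^2 + lam*||Dx||_1 with D the first-difference operator; the test signal is a synthetic square pulse (not an MCG trace), and the paper's higher-order and adaptive refinements are not included.

```python
import numpy as np

# 1-D TV denoising via MM: each iteration majorizes |Dx| with a weighted
# quadratic at the current estimate and solves the resulting linear system.
rng = np.random.default_rng(6)

def tv_denoise_mm(y, lam, n_iter=50):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n first-difference matrix
    DDT = D @ D.T
    x = y.copy()
    for _ in range(n_iter):
        # MM update: x <- y - D^T (diag(|Dx|)/lam + D D^T)^-1 D y
        F = np.diag(np.abs(D @ x) / lam + 1e-12) + DDT
        x = y - D.T @ np.linalg.solve(F, D @ y)
    return x

n = 200
clean = np.zeros(n)
clean[60:140] = 2.0                           # piecewise-constant "peak"
y = clean + rng.normal(0, 0.3, n)
x = tv_denoise_mm(y, lam=1.5)
print(round(np.mean((y - clean) ** 2), 3), round(np.mean((x - clean) ** 2), 3))
```

The second printed mean-squared error (denoised) should be well below the first (noisy), while the edges of the pulse stay sharp, which is exactly the TV property the paper exploits.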
Abstract: Frequency diverse array (FDA) beamforming is a technology developed in recent years; its antenna pattern has a unique angle-range-dependent characteristic. The beam is required to have strong concentration, high resolution, and a low sidelobe level to form point-to-point interference at the focused spot. To eliminate the angle-range coupling of the traditional FDA and concentrate the beam energy further, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving both the array structure and the frequency offsets of the traditional FDA. Simulation results show that the resulting pattern forms a dot-shaped beam with more concentrated energy and improved resolution and sidelobe level. However, the signal covariance matrix in the traditional adaptive beamforming algorithm is estimated from finite snapshot data. When the number of snapshots is limited, the covariance matrix is underestimated, and the resulting estimation error distorts the beam so that the output pattern cannot form a dot-shaped beam; main-lobe deviation and high sidelobe levels also arise. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve robustness. The steps are as follows: first, the multi-carrier FDA beamformer is formed under the linearly constrained minimum variance (LCMV) criterion. Then an eigenvalue decomposition of the covariance matrix yields the interference subspace, the noise subspace, and the diagonal matrix of corresponding eigenvalues.
Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their spread and improving beamforming performance. Theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shaped beam with limited snapshots, reduces the sidelobe level, improves the robustness of beamforming, and achieves better overall performance.
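The eigenvalue-correction step can be sketched generically: eigendecompose the sample covariance, raise the small noise-subspace eigenvalues to a correction exponent so their spread shrinks, rebuild the matrix, and form distortionless weights. The array geometry, exponent, and signals below are illustrative assumptions; the multi-carrier FDA steering model itself is not reproduced.

```python
import numpy as np

# Exponential eigenvalue correction before MVDR/LCMV-style weighting,
# for a small-snapshot sample covariance.
rng = np.random.default_rng(7)

m, snapshots = 8, 10                        # few snapshots -> poor covariance estimate
a = np.exp(1j * np.pi * np.arange(m) * np.sin(0.3))    # assumed ULA steering vector
noise = (rng.normal(size=(m, snapshots)) +
         1j * rng.normal(size=(m, snapshots))) / np.sqrt(2)
interf = np.exp(1j * np.pi * np.arange(m) * np.sin(-0.6))[:, None] \
         * (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)) * 3
X = interf + noise
R = X @ X.conj().T / snapshots              # sample covariance

def corrected_weights(R, a, n_interf=1, gamma=0.5):
    lam, U = np.linalg.eigh(R)              # eigenvalues ascending
    lam = lam.copy()
    lam[:-n_interf] = lam[:-n_interf] ** gamma   # compress noise eigenvalue spread
    Rc = (U * lam) @ U.conj().T             # rebuilt covariance
    w = np.linalg.solve(Rc, a)
    return w / (a.conj() @ w)               # distortionless normalization

w = corrected_weights(R, a)
print(abs(w.conj() @ a))   # response at the look direction: 1 (distortionless)
```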
Abstract: Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to improve the bit-error rate (BER) of PACE-DCO-OFDM. Results show that the DCO-OFDM system with the PACE scheme achieves better BER performance than the conventional system without pilot-assisted channel estimation. Simulations further show that the proposed LMMSE-based PACE-DCO-OFDM estimates the channel more accurately and achieves a better BER than both the LS-based PACE-DCO-OFDM and the traditional system without PACE. At the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10^-4 with LMMSE-PACE and 4.2×10^-3 with LS-PACE, while it is about 2×10^-1 for the system without the PACE scheme.
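The two estimators compared above differ in one line. This sketch uses a generic frequency-domain pilot model Y = X·H + W rather than a full DCO-OFDM chain: LS divides by the known pilot, while LMMSE additionally weights by the channel correlation and noise level. The channel statistics here are synthetic assumptions.

```python
import numpy as np

# LS vs LMMSE pilot-assisted channel estimation on a correlated channel.
rng = np.random.default_rng(8)

n_p = 64                                    # pilot subcarriers
X = np.ones(n_p)                            # unit-amplitude pilots (known)
# Assumed channel statistics: exponential correlation across subcarriers.
Rhh = 0.9 ** np.abs(np.subtract.outer(np.arange(n_p), np.arange(n_p)))
L = np.linalg.cholesky(Rhh)
H = L @ (rng.normal(size=n_p) + 1j * rng.normal(size=n_p)) / np.sqrt(2)
sigma2 = 0.1
W = np.sqrt(sigma2 / 2) * (rng.normal(size=n_p) + 1j * rng.normal(size=n_p))
Y = X * H + W

H_ls = Y / X                                               # least squares
W_lmmse = Rhh @ np.linalg.inv(Rhh + sigma2 * np.eye(n_p))  # Wiener smoothing
H_lmmse = W_lmmse @ H_ls

mse_ls = np.mean(np.abs(H_ls - H) ** 2)
mse_lmmse = np.mean(np.abs(H_lmmse - H) ** 2)
print(round(mse_ls, 3), round(mse_lmmse, 3))   # LMMSE error should be lower
```

The LMMSE gain comes entirely from the prior correlation Rhh; with an uncorrelated channel the two estimators would coincide up to the noise-regularization term.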
Abstract: In this paper, a simple method is presented for measuring power frequency deviations. A phase-locked loop (PLL) multiplies the frequency of the signal under test by a factor of 100. The number of pulses in the resulting pulse train is counted over a stable, known period using decade driving assemblies (DDAs) and flip-flops. These signals are combined using logic gates and passed through decade counters to give a unique combination of pulses or levels, which is further encoded. The pulses are equally suitable for control applications and display units. The experimental circuit gives a resolution of 1 Hz within a measurement period of 20 ms. The proposed circuit is also described in the Verilog hardware description language and implemented on Field Programmable Gate Arrays (FPGAs). A mixed-signal oscilloscope (MSO) is used to observe the results of the FPGA implementation, and these are compared with the results of the discrete-component circuit. The proposed system is useful for frequency deviation measurement and control in power systems.
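The measurement principle reduces to simple arithmetic: the PLL multiplies the input frequency by 100, and counting pulses over a fixed 20 ms gate converts frequency directly into a count, so a deviation from the nominal 50 Hz shows up as a count offset. This is back-of-the-envelope arithmetic, not the hardware design; the 50 Hz nominal is an assumed mains frequency.

```python
# Pulse-counting frequency measurement: count = f * multiplier * gate time.
def pulse_count(f_in_hz, mult=100, gate_s=0.020):
    return round(f_in_hz * mult * gate_s)

nominal = pulse_count(50.0)          # 50 Hz * 100 * 20 ms = 100 pulses
print(nominal)                       # 100
print(pulse_count(51.0) - nominal)   # a +1 Hz deviation adds 2 counts
```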
Abstract: Much essential work takes place underground, such as mining, tunnel construction, and subways, all vital to the development of society. When accidents occur in these places, the interruption of traditional wired communication hampers rescue work. To provide positioning, early warning, and command functions for underground personnel and improve rescue efficiency, an emergency communication system for underground environments must be developed. Conventional underground communication is easily subjected to narrowband interference, which spread spectrum communication can counter. However, general spread spectrum methods such as direct-sequence spreading are inefficient, so parallel combined spread spectrum (PCSS) communication is proposed to improve efficiency. PCSS communication retains the anti-interference ability and good concealment of a traditional spread spectrum system while offering a relatively high frequency band utilization rate and strong information transmission capability, and the technology has been widely used in practice. This paper presents a PCSS communication model: the multiple-detection parallel combined spread spectrum (MDPCSS) communication system. The principle of MDPCSS is that the sequences at the transmitter are processed in blocks and cyclically shifted to facilitate multiple detection at the receiver. Block diagrams of the transmitter and receiver are introduced, along with the formula for the system bit error rate (BER), and the BER is simulated and analyzed. Comparison with common parallel PCSS communication shows that the proposed system indeed reduces the BER and improves system performance.
Furthermore, the influence of the selected pseudo-code length on the system BER is simulated and analyzed; the conclusion is that the longer the pseudo-code, the lower the system error rate.
Abstract: In data-driven prognostic methods, the accuracy of remaining-useful-life estimation for bearings mainly depends on the performance of the health indicators, which are usually fused from statistical features extracted from vibration signals. Existing health indicators have two drawbacks: (1) statistical features with different ranges contribute differently to the health indicator, so expert knowledge is required to extract the features; (2) when convolutional neural networks are used on time-frequency features, the time-series nature of the signals is not considered. To overcome these drawbacks, this study combines a convolutional neural network with a gated recurrent unit to extract time-frequency image features, which are then used to construct the health indicator and predict the remaining useful life of bearings. First, the original signals are converted into time-frequency images by the continuous wavelet transform to form the original feature sets. Second, the convolutional and pooling layers of the network select the most sensitive time-frequency image features from the original sets. Finally, the selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method outperforms related studies that used the same bearing dataset provided by PRONOSTIA.
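The first step of the pipeline, turning a 1-D signal into the 2-D time-frequency image the CNN front end consumes, can be sketched directly from the continuous wavelet transform definition. The Morlet wavelet, scale grid, and synthetic chirp below are assumptions for illustration; this is not PRONOSTIA bearing data, and no network is trained here.

```python
import numpy as np

# CWT via FFT-based correlation with a complex Morlet wavelet.
fs, n = 1024, 1024
t = np.arange(n) / fs
signal = np.cos(2 * np.pi * (20 * t + 100 * t ** 2))   # chirp, 20 -> 220 Hz

def cwt_morlet(x, scales, w0=6.0):
    out = np.empty((len(scales), len(x)))
    k = np.arange(len(x)) - len(x) // 2
    Xf = np.fft.fft(x)
    for i, s in enumerate(scales):
        tau = k / s
        psi = np.exp(1j * w0 * tau) * np.exp(-tau ** 2 / 2) / s   # Morlet, L1-normalized
        # Circular correlation of the signal with the centred wavelet.
        out[i] = np.abs(np.fft.ifft(Xf * np.conj(np.fft.fft(np.fft.ifftshift(psi)))))
    return out

# Map target frequencies (10..300 Hz) to Morlet scales: s = fs*w0 / (2*pi*f).
scales = fs * 6.0 / (2 * np.pi * np.linspace(10, 300, 64))
image = cwt_morlet(signal, scales)
print(image.shape)    # (64, 1024): the time-frequency image fed to the CNN
```

For the chirp, the ridge of the image climbs from low to high frequency over time, which is the kind of structure the convolutional layers then compress into features.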
Abstract: This paper presents a design scheme for selecting the number of Modular Multilevel Converter (MMC) levels that improves the practical compromise between accuracy and computational efficiency. The whole process is benchmarked against a Thevenin-equivalent 133-level MMC model. First, a computation scheme for the minimum simulation time step is offered so as to faithfully represent each voltage level of the waveforms. Second, the previously developed Improved Analytic Hierarchy Process (IAHP) is adopted to integrate the relative errors of all the input electrical quantities into one composite index for each converter level. Third, the steady-state AC and DC responses and the transient responses under simulated faults are normalized for all cases and fitted into curves against the benchmark model. Finally, the optimal MMC level is obtained from the fitted curves, with individual weights accounting for both precision and efficiency. The effectiveness of the scheme is validated by a model in MATLAB Simulink.
Abstract: A compact planar monopole antenna with dual-band operation suitable for wireless local area network (WLAN) applications is presented in this paper. The antenna occupies an overall area of 18 × 12 mm2. It is fed by a coplanar waveguide (CPW) transmission line and combines two folded strips that radiate at 2.4 and 5.2 GHz. By optimally selecting the antenna dimensions, dual-band resonant modes with much wider impedance matching at the higher band can be produced. Prototypes of the optimized design have been simulated using an EM solver. The simulated results show good dual-band operation, with -10 dB impedance bandwidths of 50 MHz and 2400 MHz at 2.4 and 5.2 GHz, respectively, covering the 2.4/5.2/5.8 GHz WLAN operating bands. Good antenna performance, in terms of radiation patterns and antenna gains over the operating bands, has also been observed. The antenna, with a compact size of 18 × 12 × 1.6 mm3, is designed on an FR4 substrate with a dielectric constant of 4.4.
Abstract: Compared with terrestrial networks, the traffic of a spatial information network exhibits both self-similarity and short-range correlation. Studying traffic prediction methods can improve the resource utilization of spatial information networks and provide an important basis for their traffic planning. Considering both accuracy and complexity, this paper decomposes spatial information network traffic into an approximation component with long-range correlation and detail components with short-range correlation, and proposes a time-series hybrid prediction model based on wavelet decomposition. First, the original traffic data are decomposed into approximation and detail components using the wavelet decomposition algorithm. According to the tailing and truncation characteristics of the autocorrelation and partial autocorrelation of each component, a corresponding model (AR/MA/ARMA) is established directly for each detail component, while the approximation component is modeled with an ARIMA model after smoothing. Finally, the predictions of the individual models are combined to obtain the prediction for the original data. The method considers both the self-similarity of a spatial information network and the short-range correlation caused by bursty network traffic, and it is verified using measured data from a backbone network released by the MAWI working group in 2018. Compared with typical time-series models, the predictions of the hybrid model are closer to the real traffic data, with a smaller relative root mean square error, making it more suitable for spatial information networks.
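The decomposition step can be illustrated with the simplest wavelet: a one-level Haar split of a traffic series into a smooth approximation component (the candidate for ARIMA after differencing) and a detail component (the AR/MA/ARMA candidate). The series is synthetic trend-plus-burst data, not the MAWI backbone trace, and no ARIMA fit is performed here.

```python
import numpy as np

# One-level Haar wavelet decomposition and its exact inverse.
rng = np.random.default_rng(10)

n = 256
trend = 10 + 0.05 * np.arange(n)                 # long-correlation part
burst = rng.normal(0, 1, n)                      # short-correlation bursts
traffic = trend + burst

def haar_decompose(x):
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # smooth component
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # bursty component
    return approx, detail

def haar_reconstruct(approx, detail):
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

a, d = haar_decompose(traffic)
rec = haar_reconstruct(a, d)
print(np.allclose(rec, traffic))     # perfect reconstruction: True
```

In the hybrid model, each component would be forecast separately and the forecasts recombined through exactly this inverse transform.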
Abstract: In this paper, we present a low-dropout (LDO) regulator whose power supply rejection ratio (PSRR) is enhanced by a noise sensing circuit built around a constant current source generated through a bandgap reference (BGR). The BGR-derived current source maintains a constant current value even as the applied voltage varies. The noise sensing circuit, composed of this current source, operates between the error amplifier and the gate of the pass transistor of the LDO regulator. As a result, the LDO regulator achieves a PSRR of -68.2 dB at 1 kHz, -45.85 dB at 1 MHz, and -45 dB at 10 MHz, while the other performance metrics of the proposed LDO remain at the level of a conventional LDO regulator.
Abstract: In this paper, a microstrip antenna array is designed for 5G applications. A corporate-series feed is used, operating with a center frequency between 27 and 28 GHz to cover the 5G frequency bands 24.25-27.5 GHz, 26.5-29.5 GHz, and 27.5-28.35 GHz. The substrate is Rogers RT/Duroid 6002. The corporate-series 5G antenna array is designed stage by stage, starting from a conventional antenna at 28 GHz and a 2×1 antenna array, before arriving at the final 4-element corporate-series feed antenna array. The S11 parameter, gain, and voltage standing wave ratio (VSWR) of the design stages are discussed, and all important findings are tabulated. The proposed antenna array's S11 parameter was found to be -29.00 dB at 27.39 GHz, with a good directional gain of 12.12 dB.
Abstract: Visual search and identification of immunohistochemically stained meningioma tissue is performed manually in pathology laboratories to detect and diagnose cancerous meningioma, a task that is tedious and time-consuming. Moreover, because of the complex nature of cells, automatically segmenting them from the background and analyzing them remains challenging. In this paper, we develop and test a computerized scheme that automatically identifies cells in microscopic images of meningioma and classifies them into positive (proliferative) and negative (normal) cells. A dataset of 150 images is used to test the scheme. The scheme uses the fuzzy C-means algorithm for color clustering in the perceptually uniform hue, saturation, value (HSV) color space. Since the cells are distinguishable by the human eye, the accuracy and stability of the algorithm are quantitatively evaluated through application to a wide variety of real images.
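The color-clustering core of such a scheme is the standard fuzzy C-means (FCM) update, sketched below on synthetic 3-D "HSV-like" feature vectors rather than real microscopy pixels: memberships and centers are updated alternately, with m the usual fuzzifier. The data and initialization are assumptions for the demo.

```python
import numpy as np

# Fuzzy C-means: alternate fuzzy-membership and weighted-center updates.
rng = np.random.default_rng(11)

def fcm(X, c=2, m=2.0, n_iter=50):
    # Deterministic farthest-point initialization for reproducibility.
    centers = X[[0, int(np.argmax(np.linalg.norm(X - X[0], axis=1)))]]
    for _ in range(n_iter):
        dist = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)             # fuzzy memberships
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, centers

# Two synthetic pixel populations standing in for positive/negative cells.
X = np.vstack([rng.normal(0.2, 0.05, (200, 3)), rng.normal(0.7, 0.05, (200, 3))])
U, centers = fcm(X)
labels = np.argmax(U, axis=1)
acc = max(np.mean(labels[:200] == 0), np.mean(labels[:200] == 1))
print(round(acc, 2))    # well-separated toy clusters: near 1.0
```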
Abstract: This paper presents a hybrid solar cell antenna system for 5G mobile communication networks. We propose a solar cell antenna with either a front-face collection grid or a meshed patch. The solar cell antenna of our contribution handles both optical and radio-frequency signals. We propose two solar cell antenna structures in the frequency bands of the future 5G standard, at 2.6 and 3.5 GHz respectively. Simulation with the Advanced Design System (ADS) software allows us to analyze and determine the antenna parameters proposed in this work, such as the reflection coefficient (S11), gain, directivity, and radiated power.