Abstract: Spectrum sensing is the main feature of cognitive
radio technology: it detects the presence of primary users in a
licensed spectrum. In this paper we compare the theoretical
detection probability in different fading environments, namely
Rayleigh, Rician, and Nakagami-m fading channels, with
simulation results using energy-detection-based spectrum
sensing. The numerical results are plotted as Pf vs. Pd for
different SNR values and fading parameters. It is observed that
the Nakagami fading channel performs better than the other
fading channels under energy-detection spectrum sensing. A
MATLAB simulation test bench has been implemented to evaluate
the performance of energy detection in the different fading
channel environments.
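The energy-detection decision rule underlying this comparison can be sketched as a Monte Carlo experiment. The paper's test bench is in MATLAB, so this Python sketch is only illustrative; the BPSK primary-user signal model, the SNR, and the Gaussian-approximation threshold are assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_detect(signal_present, threshold, n_samples=1000,
                  snr_db=-5.0, trials=2000):
    """Monte Carlo estimate of the probability that the received
    energy exceeds the threshold (Pd if a signal is present, Pf
    under noise only), for a plain AWGN channel."""
    snr = 10 ** (snr_db / 10)
    hits = 0
    for _ in range(trials):
        x = rng.normal(0.0, 1.0, n_samples)  # unit-variance noise
        if signal_present:
            # hypothetical BPSK primary-user signal at the given SNR
            x = x + np.sqrt(snr) * rng.choice([-1.0, 1.0], n_samples)
        if np.sum(x ** 2) > threshold:
            hits += 1
    return hits / trials

# Threshold from the Gaussian approximation of the chi-square
# test statistic, targeting a false-alarm rate of roughly 5%.
n = 1000
threshold = n + 1.64 * np.sqrt(2 * n)
pf = energy_detect(False, threshold, n_samples=n)
pd = energy_detect(True, threshold, n_samples=n)
```

Sweeping the threshold (or SNR) and recording (pf, pd) pairs yields Pf vs. Pd curves of the kind discussed above; a fading channel would additionally scale the signal amplitude by a random channel gain in each trial.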
Abstract: Higher-order modulations combined with different
coding schemes allow sending more bits per symbol, thus
achieving higher throughputs and better spectral efficiencies.
However, when using a modulation technique such as 64-QAM with
fewer overhead bits, better signal-to-noise ratios (SNRs) are
needed to overcome inter-symbol interference (ISI) and maintain
a given bit error ratio (BER). The use of adaptive modulation
allows wireless technologies to yield higher throughputs while
also covering long distances. The aim of this paper is to
implement the Adaptive Modulation and Coding (AMC) features of
the WiMAX PHY in MATLAB and to analyze the performance of the
system in different channel conditions (AWGN, Rayleigh, and
Rician fading channels) with channel estimation and blind
equalization. Simulation results demonstrate that increasing
the modulation order increases both throughput and BER. These
results reveal a trade-off among modulation order, FFT length,
throughput, BER, and spectral efficiency. The BER changes
gradually for the AWGN channel and erratically for the Rayleigh
and Rician fading channels.
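The core of adaptive modulation, picking the highest-order scheme the measured SNR can support, can be sketched as a threshold lookup. The SNR thresholds and bits-per-symbol figures below are illustrative assumptions, not actual WiMAX profile values:

```python
# Hypothetical SNR thresholds (dB) for switching modulation and
# coding; real WiMAX link-adaptation tables differ.
AMC_TABLE = [
    (21.0, "64-QAM 3/4", 4.5),
    (16.0, "16-QAM 3/4", 3.0),
    (9.0,  "QPSK 3/4",   1.5),
    (6.0,  "QPSK 1/2",   1.0),
    (3.0,  "BPSK 1/2",   0.5),
]

def select_mcs(snr_db):
    """Return the highest-rate (scheme, bits/symbol) pair whose
    SNR threshold is met; fall back to the most robust scheme."""
    for threshold, scheme, bits in AMC_TABLE:
        if snr_db >= threshold:
            return scheme, bits
    return AMC_TABLE[-1][1], AMC_TABLE[-1][2]
```

This captures the trade-off stated above: a higher reported SNR selects a denser constellation and higher throughput, at the cost of a tighter BER margin.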
Abstract: Load Forecasting plays a key role in making today's
and future's Smart Energy Grids sustainable and reliable. Accurate
power consumption prediction allows utilities to organize in advance
their resources or to execute Demand Response strategies more
effectively, which enables several features such as higher
sustainability, better quality of service, and affordable electricity
tariffs. Applying Load Forecasting at a smaller geographic
scale, i.e. in Smart Micro Grids, is challenging yet effective,
as the lower available grid flexibility makes accurate
prediction more critical in Demand Response applications. This
paper analyses the application of
short-term load forecasting in a concrete scenario, proposed within the
EU-funded GreenCom project, which collects load data from single
loads and households belonging to a Smart Micro Grid. Three
short-term load forecasting techniques, i.e. linear regression, artificial
neural networks, and radial basis function network, are considered,
compared, and evaluated through absolute forecast errors and training
time. The influence of weather conditions in Load Forecasting is also
evaluated. A new definition of Gain is introduced in this paper,
which innovatively serves as an indicator of short-term
prediction capability in terms of time span consistency. Two
models, for 24- and 1-hour-ahead forecasting, are built to
comprehensively compare these three techniques.
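A 1-hour-ahead linear-regression forecaster of the kind compared above can be sketched on lagged load values alone. The 24-lag window and the synthetic sinusoidal load profile are illustrative assumptions; the paper's models also consider weather features:

```python
import numpy as np

def fit_load_model(load, n_lags=24):
    """Least-squares linear model predicting next-hour load from
    the previous n_lags hourly readings (a minimal sketch of a
    linear-regression baseline)."""
    X = np.array([load[i:i + n_lags] for i in range(len(load) - n_lags)])
    y = np.array(load[n_lags:])
    X = np.hstack([X, np.ones((len(X), 1))])  # intercept column
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(load, coef, n_lags=24):
    """One-step-ahead forecast from the latest n_lags readings."""
    x = np.append(load[-n_lags:], 1.0)
    return float(x @ coef)

# synthetic two-week hourly load with a daily cycle (illustrative)
t = np.arange(24 * 14)
load = 100 + 20 * np.sin(2 * np.pi * t / 24)
coef = fit_load_model(load)
pred = predict_next(load, coef)
```

The absolute forecast error used above for comparison would here simply be the gap between `pred` and the actual next reading.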
Abstract: In this research work, neural networks were applied to
classify two types of hip joint implants based on the relative hip joint
implant side speed and three components of each ground reaction
force. The condition of walking gait at normal velocity was
used, carried out with each of the two hip joint implants
assessed. The kinetic temporal changes of the ground reaction
forces were considered in the first approach but discarded in
the second. Ground reaction force components were obtained from
eighteen patients under this gait condition, half of whom had a
hip implant of type I-II, whilst the other half had the hip
implant defined as type III by Orthoload®.
After pre-processing raw gait kinetic data and selecting the time
frames needed for the analysis, the ground reaction force components
were used to train a MLP neural network, which learnt to distinguish
the two hip joint implants in the abovementioned condition. Further
to training, unknown hip implant side and ground reaction force
components were presented to the neural networks, which assigned
those features to the correct class with reasonably high
accuracy for both the type I-II and the type III implants. The
results suggest that
neural networks could be successfully applied in the performance
assessment of hip joint implants.
Abstract: Over the past years, many efforts and studies have
been carried out to develop proficient tools for performing
various tasks on big data. Recently, big data has received
considerable publicity, and for good reason. Large and complex
collections of datasets are difficult to process with
traditional data-processing applications, which makes the
development of dedicated big data tools all the more necessary.
Moreover, the main aim of big data analytics is to apply
advanced analytic techniques to very large, diverse datasets,
ranging in size from terabytes to zettabytes and spanning
diverse types such as structured or unstructured and batch or
streaming. Big data techniques are useful for data sets whose
size or type exceeds the capability of traditional relational
databases to capture, manage, and process the data with low
latency. These challenges have led to the emergence of powerful
big data tools. In this survey, a varied collection of big data
tools is illustrated and compared in terms of their salient
features.
Abstract: Opportunistic routing is used where the network
exhibits features such as dynamic topology changes and
intermittent connectivity. The opportunistic forwarding
technique is widely used in Delay Tolerant Networks and
Disruption Tolerant Networks. The key idea of opportunistic
routing is to select forwarding nodes to forward data packets
and to coordinate among these nodes to avoid duplicate
transmissions. This paper analyses the pros and cons of various
opportunistic routing techniques used in MANETs.
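The two steps named above, selecting forwarding nodes and coordinating to avoid duplicate transmissions, can be sketched generically. The per-neighbor delivery-probability metric and the seen-set coordination below are illustrative simplifications, not a specific protocol from the surveyed literature:

```python
def select_forwarders(neighbors, delivery_prob, k=2):
    """Pick the k neighbors with the highest estimated delivery
    probability as the candidate forwarder set (hypothetical
    per-node metric)."""
    ranked = sorted(neighbors, key=lambda n: delivery_prob[n],
                    reverse=True)
    return ranked[:k]

def forward(packet_id, node, forwarders, seen):
    """Suppress duplicate transmissions: a node forwards a packet
    only the first time it handles it."""
    if (packet_id, node) in seen:
        return []          # already forwarded here, stay silent
    seen.add((packet_id, node))
    return forwarders

probs = {"a": 0.9, "b": 0.4, "c": 0.7}
fwd = select_forwarders(["a", "b", "c"], probs)
seen = set()
first = forward(1, "a", fwd, seen)   # forwards to the candidates
second = forward(1, "a", fwd, seen)  # duplicate: suppressed
```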
Abstract: The system is designed to show images which are
related to the query image. Extracting color, texture, and shape
features from an image plays a vital role in content-based image
retrieval (CBIR). Initially, the RGB image is converted into the
HSV color space due to its perceptual uniformity. From the HSV
image, color features are extracted using a block color
histogram, texture features
using Haar transform and shape feature using Fuzzy C-means
Algorithm. Then, the characteristics of the global and local color
histogram, texture features through co-occurrence matrix and Haar
wavelet transform and shape are compared and analyzed for CBIR.
Finally, the best method of each feature is fused during similarity
measure to improve image retrieval effectiveness and accuracy.
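The first feature step, RGB-to-HSV conversion followed by a block color histogram, can be sketched as below. The 2x2 block grid and 8 hue bins are illustrative choices, and only the hue channel is histogrammed here for brevity:

```python
import colorsys
import numpy as np

def block_color_histogram(rgb, blocks=2, h_bins=8):
    """Normalised hue histogram per spatial block after RGB->HSV
    conversion; rgb is an (H, W, 3) float array in [0, 1]."""
    h, w, _ = rgb.shape
    feats = []
    for bi in range(blocks):
        for bj in range(blocks):
            patch = rgb[bi * h // blocks:(bi + 1) * h // blocks,
                        bj * w // blocks:(bj + 1) * w // blocks]
            hues = [colorsys.rgb_to_hsv(*px)[0]
                    for row in patch for px in row]
            hist, _ = np.histogram(hues, bins=h_bins, range=(0, 1))
            feats.extend(hist / hist.sum())  # normalise per block
    return np.array(feats)

# toy image: left half pure red (hue 0), right half pure blue
img = np.zeros((8, 8, 3))
img[:, :4] = [1.0, 0.0, 0.0]
img[:, 4:] = [0.0, 0.0, 1.0]
f = block_color_histogram(img)
```

Concatenating this vector with the texture and shape features, then comparing query and database vectors under a similarity measure, gives the retrieval pipeline described above.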
Abstract: An ultra-low-power capacitor-less low-dropout (LDO)
voltage regulator with improved transient response using
gain-enhanced feed-forward path compensation is presented in
this paper. It is based on a cascade of a voltage amplifier and
a transconductor stage in the feed-forward path with a regular
error amplifier, forming a composite gain-enhanced feed-forward
stage. This broadens the gain bandwidth and thus improves the
transient response without a substantial increase in power
consumption. The proposed LDO, designed for a maximum output
current of 100 mA in UMC 180 nm, requires a quiescent current of
69 µA. An undershoot of 153.79 mV is observed for a load current
change from 0 mA to 100 mA, and an overshoot of 196.24 mV for a
change from 100 mA to 0 mA. The settling time is approximately
1.1 µs for the output voltage undershoot case. The load
regulation is 2.77 µV/mA at a load current of 100 mA. The
reference voltage is generated by an accurate 0.8 V band-gap
reference circuit. The costly resources of an SoC, such as total
chip area and power consumption, are drastically reduced by
using a total compensation capacitance of only 6 pF while
consuming only 0.096 mW.
Abstract: The most important part of modern lean low-NOx combustors is the premixer, where swirlers are often used to intensify mixing and to form the required flow pattern in the combustor liner. Swirling flow leads to the formation of complex eddy structures that cause flow perturbations and can give rise to combustion instability. Therefore, at the design phase, great attention must be paid to the aerodynamics of premixers. Analysis based on unsteady CFD modeling of the swirling flow in a production combustor swirler showed the presence of a large number of different eddy structures, which can be conditionally divided into three types according to their location of origin and propagation path. The features of each eddy type were subsequently defined. Comparison of calculated and experimental pressure-fluctuation spectra verified the correctness of the computations.
Abstract: The article is devoted to the problem of political
discourse and its reflection in mass cognition. This article is
dedicated to describing the myth as one of the main features of
political discourse. The dominance of an expressional and
emotional component in the myth is shown. Precedent phenomena
play an important role in distinguishing the myth from the
linguistic point of view. Precedent phenomena reflect linguistic
cognition and are characterized by their fame and recognition.
Four types of myths
such as master myths, a foundation myth, sustaining myth,
eschatological myths are observed. The myths about the national idea
are characterized by national specificity. The main aim of the
political discourse with the help of myths is to influence mass
consciousness in order to motivate the addressee to certain
actions so
that the target purpose is reached owing to unity of forces.
Abstract: This research proposes a novel reconstruction protocol
for restoring missing surfaces and low-quality edges and shapes in
photos of artifacts at historical sites. The protocol starts with the
extraction of a cloud of points. This extraction process is
based on four subordinate algorithms, which differ in robustness
and in the number of resultant points. Moreover, they apply
different, but complementary, degrees of accuracy to related
features and to the way they build a quality mesh. The
performance of our proposed protocol
is compared with other state-of-the-art algorithms and toolkits. The
statistical analysis shows that our algorithm significantly outperforms
its rivals in the resultant quality of its object files used to reconstruct
the desired model.
Abstract: In this study, a comparative analysis of the approaches
associated with the use of neural network algorithms for effective
solution of a complex inverse problem – the problem of identifying
and determining the individual concentrations of inorganic salts in
multicomponent aqueous solutions by the spectra of Raman
scattering of light – is performed. It is shown that application of
artificial neural networks provides the average accuracy of
determination of concentration of each salt no worse than 0.025 M.
The results of comparative analysis of input data compression
methods are presented. It is demonstrated that use of uniform
aggregation of input features allows decreasing the error of
determination of individual concentrations of components by 16-18%
on average.
Abstract: Frequent pattern mining is the process of finding a
pattern (a set of items, subsequences, substructures, etc.) that occurs
frequently in a data set. It was proposed in the context of frequent
itemsets and association rule mining. Frequent pattern mining is used
to find inherent regularities in data. What products were often
purchased together? Its applications include basket data analysis,
cross-marketing, catalog design, sale campaign analysis, Web log
(click stream) analysis, and DNA sequence analysis. However, one
of the bottlenecks of frequent itemset mining is that, as the
data grow, the time and resources required to mine them increase
at an exponential rate. In this investigation a new algorithm is
proposed which can be used as a pre-processor for frequent
itemset mining. FASTER (FeAture SelecTion using Entropy and
Rough sets) is a hybrid pre-processor algorithm which utilizes
entropy and rough sets to carry out record reduction and feature
(attribute) selection respectively. FASTER can produce a
speed-up of 3.1 times for frequent itemset mining compared to
the original algorithm while maintaining an accuracy of 71%.
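The entropy half of such a pre-processor can be sketched as an attribute ranking. The rough-set record-reduction step is omitted and the toy transactions are invented for illustration, so this is only a sketch of the idea, not the FASTER algorithm itself:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a sequence of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(values).values())

def rank_features(records):
    """Rank attribute indices from lowest to highest entropy, so a
    caller can keep or drop attributes by information content."""
    n_attrs = len(records[0])
    cols = [[row[i] for row in records] for i in range(n_attrs)]
    return sorted(range(n_attrs), key=lambda i: entropy(cols[i]))

# toy records: attribute 0 is constant (zero entropy),
# attribute 2 takes a distinct value in every record (max entropy)
data = [("a", "x", 1), ("a", "y", 2), ("a", "x", 3), ("a", "y", 4)]
ranking = rank_features(data)
```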
Abstract: Two types of commercial cylindrical lithium ion
batteries (Panasonic 3.4 Ah NCR-18650B and Samsung 2.9 Ah
INR-18650), were investigated experimentally. The capacities of these
samples were individually measured using constant current-constant
voltage (CC-CV) method at different ambient temperatures (-10°C,
0°C, 25°C). Their internal resistance was determined by
electrochemical impedance spectroscopy (EIS) and pulse discharge
methods. The cells with different configurations of parallel connection
NCR-NCR, INR-INR and NCR-INR were charged/discharged at the
aforementioned ambient temperatures. The results showed that the
difference in internal resistance between cells is much more
evident at low temperatures. Furthermore, the parallel
connection of NCR-NCR exhibits the most uniform temperature
distribution across cells at -10°C; this feature is quite
favorable for the safety of the battery pack.
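Why a resistance mismatch matters more at low temperature can be seen from the current split between two paralleled cells. Modelling each cell as a source with a fixed internal resistance is a simplification, and the resistance values below are hypothetical, not the measured EIS values:

```python
def parallel_current_split(i_total, r1, r2):
    """Current through each of two paralleled cells, modelled as
    ideal sources with internal resistances r1 and r2; current
    divides inversely to resistance. Real cells also differ in
    open-circuit voltage and capacity."""
    i1 = i_total * r2 / (r1 + r2)
    i2 = i_total * r1 / (r1 + r2)
    return i1, i2

# hypothetical resistances (ohms): nearly equal at 25 °C,
# diverging strongly at -10 °C as in the trend reported above
i25 = parallel_current_split(4.0, 0.040, 0.042)
i_cold = parallel_current_split(4.0, 0.120, 0.180)
```

The widened current imbalance at low temperature concentrates heat in one cell, which is why the uniformity of the NCR-NCR pairing at -10 °C is noted as a safety advantage.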
Abstract: Concerns on corrosion and effective coating
protection of double hull tankers and bulk carriers in service have
been raised, especially in water ballast tanks (WBTs). Test
protocols/methodologies, specifically those incorporated in the
International Maritime Organisation (IMO) Performance Standard
for Protective Coatings for dedicated sea water ballast tanks
(PSPC), are being used to assess and evaluate the performance of
coatings for type approval prior to their application in WBTs.
However, some
of the type approved coatings may be applied as very thick films to
less than ideally prepared steel substrates in the WBT. As such films
experience hygrothermal cycling from operating and environmental
conditions, they become embrittled which may ultimately result in
cracking. This embrittlement of the coatings is identified as an
undesirable feature in the PSPC but is not mentioned in the test
protocols within it. There is therefore renewed industrial research
aimed at understanding this issue in order to eliminate cracking and
achieve the intended coating lifespan of 15 years in good condition.
This paper will critically review test protocols currently used for
assessing and evaluating coating performance, particularly the IMO
PSPC.
Abstract: Consumer-to-Consumer (C2C) E-commerce has been
growing at a very high speed in recent years. Since identical or
nearly identical kinds of products compete with one another
through keyword search in C2C E-commerce, some sellers describe
their products with spam keywords that are popular but not
related to their products. Though such products get more chances
to be retrieved and selected by consumers than those without
spam keywords, the spam keywords mislead consumers and waste
their time. This problem has been reported in many commercial
services such as eBay and Taobao, but there has been little
research to solve it. As a solution, this paper proposes a method
to classify whether keywords of a product are spam or not. The
proposed method assumes that a keyword for a given product is
more reliable if the keyword is observed commonly in specifications
of products which are the same or the same kind as the given
product. This is because the hierarchical category of a product
is in general determined precisely by the seller of the product,
and so is the specification of the product. Since higher layers
of the hierarchical category represent more general kinds of
products, a reliability degree is determined differently
according to the layer. Hence, reliability degrees from
different layers of a hierarchical category become features for
keywords, and they are used together with features derived only
from specifications for classification of the keywords. Support
Vector Machines are adopted as the basic classifier using these
features, since they are powerful and widely used in many
classification tasks. In
the experiments, the proposed method is evaluated with a
gold-standard dataset from Yi-han-wang, a Chinese C2C E-commerce
site,
and is compared with a baseline method that does not consider
the hierarchical category. The experimental results show that the
proposed method outperforms the baseline in F1-measure, which
proves that spam keywords are effectively identified by a hierarchical
category in C2C E-commerce.
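The layer-wise reliability degree described above can be sketched as the fraction of same-category products whose specifications mention the keyword. The `spec_index` data layout and the toy catalogue are assumptions for illustration, not the paper's exact formulation:

```python
def keyword_reliability(keyword, category_path, spec_index):
    """One reliability degree per layer of the hierarchical
    category: the fraction of specifications under that layer
    that mention the keyword. spec_index maps a category-path
    tuple to a list of specification strings (hypothetical)."""
    degrees = []
    for depth in range(1, len(category_path) + 1):
        layer = tuple(category_path[:depth])
        specs = spec_index.get(layer, [])
        if not specs:
            degrees.append(0.0)
            continue
        hits = sum(1 for s in specs if keyword in s)
        degrees.append(hits / len(specs))
    return degrees

spec_index = {
    ("electronics",): ["usb cable 1m", "hdmi cable", "wool scarf"],
    ("electronics", "cables"): ["usb cable 1m", "hdmi cable"],
}
d = keyword_reliability("cable", ["electronics", "cables"], spec_index)
```

The resulting per-layer degrees would then be concatenated with specification-only features and fed to the SVM classifier mentioned above.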
Abstract: In this paper the issue of dimensionality reduction is
investigated in finger vein recognition systems using kernel Principal
Component Analysis (KPCA). One aspect of KPCA is to find the
most appropriate kernel function on finger vein recognition as there
are several kernel functions which can be used within PCA-based
algorithms. In this paper, however, another side of PCA-based
algorithms -particularly KPCA- is investigated. The aspect of
dimension of feature vector in PCA-based algorithms is of
importance especially when it comes to the real-world applications
and usage of such algorithms. It means that a fixed dimension of
feature vector has to be set to reduce the dimension of the input and
output data and extract the features from them. Then a
classifier is applied to classify the data and make the final
decision. We analyze KPCA (with Polynomial, Gaussian, and
Laplacian kernels) in detail in this paper and investigate the
optimal feature extraction dimension in
finger vein recognition using KPCA.
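The three kernels compared in the paper can be plugged into a standard KPCA projection as sketched below. The `gamma` and `degree` values are illustrative, and the centring/eigendecomposition follows the usual KPCA recipe rather than any paper-specific variant:

```python
import numpy as np

def kpca(X, n_components, kernel="gaussian", gamma=0.5, degree=2):
    """Project training samples onto the top kernel principal
    components; X has one sample per row."""
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    if kernel == "gaussian":
        K = np.exp(-gamma * sq)
    elif kernel == "laplacian":
        K = np.exp(-gamma * np.abs(X[:, None] - X[None, :]).sum(-1))
    else:  # polynomial
        K = (X @ X.T + 1.0) ** degree
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one  # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    # scale eigenvectors so projected features have unit norm per
    # component; guard against near-zero eigenvalues
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))       # stand-in for finger vein features
Z = kpca(X, 2)
```

Varying `n_components` here corresponds to the feature-vector dimension whose optimal value the paper investigates.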
Abstract: ‘Steganalysis’ is one of the challenging and attractive interests for researchers, given the development of information hiding techniques. It is the procedure of detecting hidden information in a stego-object created by a known steganographic algorithm. In this paper, a novel feature-based image steganalysis technique is proposed. Various statistical moments are used along with some similarity metrics. The proposed steganalysis technique is designed around transformations in four wavelet domains: Haar, Daubechies, Symlets, and Biorthogonal. Each domain is subjected to various classifiers, namely K-nearest-neighbour, the K* classifier, locally weighted learning, the naive Bayes classifier, neural networks, decision trees, and support vector machines. The experiments are performed on a large set of pictures freely available in image databases. The system also predicts different embedded message lengths.
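The moment-based wavelet features can be sketched for the Haar case (one decomposition level, three moments per subband). A full system would use all four wavelet families and deeper decompositions, so treat this as an illustrative sketch under those simplifying assumptions:

```python
import numpy as np

def haar_moment_features(img):
    """Mean, variance, and third central moment of each one-level
    2-D Haar subband of a grayscale image with even dimensions."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4   # approximation subband
    lh = (a - b + c - d) / 4   # detail along columns
    hl = (a + b - c - d) / 4   # detail along rows
    hh = (a - b - c + d) / 4   # diagonal detail
    feats = []
    for band in (ll, lh, hl, hh):
        feats += [band.mean(), band.var(),
                  float(np.mean((band - band.mean()) ** 3))]
    return np.array(feats)

# toy grayscale image: a simple intensity ramp
img = np.arange(64, dtype=float).reshape(8, 8)
f = haar_moment_features(img)
```

Feature vectors like `f`, computed per wavelet family and augmented with similarity metrics, would form the input to the classifiers listed above.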
Abstract: We present our approach on using continuous delivery
pattern for release management. One of the key practices of agile and
lean teams is the continuous delivery of new features to stakeholders.
The main benefits of this approach lie in the ability to release new
applications rapidly which has real strategic impact on the
competitive advantage of an organization. Organizations that
successfully implement Continuous Delivery have the ability to
evolve rapidly to support innovation, provide stable and
reliable software in more efficient ways, decrease the amount of
resources needed for maintenance, and lower software delivery
time and costs. One of the objectives of this paper is to
elaborate a case study in which the IT division of the Central
Securities Depository Institution (MKK) of Turkey applies the
Continuous Delivery pattern to improve its release management
process.
Abstract: An extensive amount of work has been done in data
clustering research under the unsupervised learning technique in Data
Mining during the past two decades. Moreover, several approaches
and methods have emerged focusing on clustering diverse data
types, features of cluster models, and similarity measures of
clusters. However, no single clustering algorithm performs best
at extracting efficient clusters in every case. Consequently, in
order to rectify this issue, a new and challenging technique
called the Cluster Ensemble method has emerged. This new
approach tends to be an alternative method for the cluster
analysis problem. The main objective of the Cluster Ensemble is
to aggregate diverse clustering solutions in such a way as to
attain accuracy and also to improve upon the individual
clustering algorithms. Due to the massive and rapid development
of new methods in the field of data mining, it is highly
necessary to carry out a critical analysis of existing
techniques and future novelties. This paper presents a
comparative analysis of different cluster ensemble methods along
with their methodologies and salient features. Henceforth this
unambiguous analysis will be very useful to the community of
clustering experts and will also help in deciding the most
appropriate method to resolve the problem at hand.