Abstract: This paper discusses simulation and experimental work on a small Smart Grid containing ten consumers. A Smart Grid is characterized by a two-way flow of real-time information and energy. A Real-Time Pricing (RTP) based tariff is implemented in this work to reduce peak demand, the peak-to-average ratio (PAR), and the cost of energy consumed. In the experimental work described here, the working of the Smart Plug, the Home Energy Controller (HEC), the Home Area Network (HAN), and the communication link between consumers and the utility server are explained. Algorithms for the Smart Plug, the HEC, and the utility server are presented and explained. After receiving the real-time price for the different time slots of the day, the HEC reacts automatically by running an algorithm based on the Linear Programming Problem (LPP) method to find the optimal energy consumption schedule. The algorithm developed for the utility server can handle more than one off-peak period during the day. Simulation and experimental work are carried out for different cases. Finally, a comparison between simulation and experimental results is presented to show the effectiveness of the minimization method adopted.
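As a toy illustration of the LPP-style scheduling the HEC performs, the sketch below allocates a shiftable energy demand across time slots under a hypothetical RTP tariff. For this simple box-constrained linear program, filling the cheapest slots first yields the optimal schedule; the prices, per-slot capacity, and demand are invented for illustration.

```python
def schedule_load(prices, total_energy, slot_cap):
    # Allocate a shiftable energy demand across time slots to minimize cost.
    # For this box-constrained linear program (0 <= x_t <= slot_cap,
    # sum x_t = total_energy), filling the cheapest slots first is optimal.
    order = sorted(range(len(prices)), key=lambda t: prices[t])
    alloc = [0.0] * len(prices)
    remaining = total_energy
    for t in order:
        take = min(slot_cap, remaining)
        alloc[t] = take
        remaining -= take
        if remaining <= 0:
            break
    return alloc

# Hypothetical RTP tariff for six slots (currency units per kWh)
prices = [5.0, 3.0, 8.0, 2.0, 6.0, 4.0]
plan = schedule_load(prices, total_energy=6.0, slot_cap=2.0)
cost = sum(p * x for p, x in zip(prices, plan))
```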
Abstract: This work presents a new class of affine projection (AP) algorithms that incorporates the sparsity of a system. To exploit this sparsity, a weighted l1-norm regularization term is imposed on the cost function of the AP algorithm. By minimizing the cost function with a subgradient calculus and choosing two distinct weightings for the l1-norm, two stochastic-gradient-based sparsity-regularized AP (SR-AP) algorithms are developed. Experimental results show that the SR-AP algorithms outperform their conventional AP counterparts in identifying sparse systems.
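A minimal sketch of the sparsity-regularization idea, using a simplified stochastic-gradient update (NLMS-style, i.e. projection order one) with a reweighted l1 zero-attraction term in place of the full AP recursion; the step size, attraction strength, and reweighting constant are illustrative, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
w_true = np.zeros(N)
w_true[3], w_true[10] = 1.0, -0.5      # sparse system to identify
w = np.zeros(N)
mu, rho, eps = 0.5, 1e-3, 10.0         # step size, attraction strength, reweighting

for _ in range(2000):
    x = rng.standard_normal(N)
    d = w_true @ x + 0.01 * rng.standard_normal()
    e = d - w @ x
    # Normalized gradient step plus a reweighted l1 (zero-attracting) term;
    # the reweighting shrinks small taps strongly while leaving large taps
    # nearly untouched, mimicking the weighted l1 penalty's subgradient
    w += mu * e * x / (x @ x + 1e-8) - rho * np.sign(w) / (1.0 + eps * np.abs(w))

mse = np.mean((w - w_true) ** 2)
```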
Abstract: This paper proposes a cost-effective private grid using an Object-based Grid Architecture (OGA). In OGA, data-processing privacy and intercommunication are improved through an object-oriented concept. The limitation of the existing grid is that a user can enter or leave the grid at any time, without a schedule and without dedicated resources. To overcome these limitations, a cost-effective private grid and appropriate algorithms are proposed. In this design, each system contains two platforms: a grid platform and a local platform. The grid manager service running on a local personal computer allows it to act as a grid resource. When a system is switched on, this is reported to the Monitoring and Information System (MIS), and its details are maintained in a Resource Object Table (ROT). The MIS is responsible for selecting the resource where a file or its replica should be stored. Storage is performed within the virtual single private grid nodes using random object addressing to prevent theft attacks. If any grid resource goes down, its resource ID is removed from the ROT, and resource recovery is efficiently managed through the replicas. This random addressing technique makes the grid storage behave as a single store, so the user views the entire grid network as a single system.
Abstract: The standards IEC 60076-2 and IEC 60076-7 suggest three different hot-spot temperature estimation methods. In this study, the algorithms used in hot-spot temperature calculations are analyzed by comparing them with results from an experimental set-up built around a Transformer Monitoring System (TMS) in service. In the tested system, the TMS uses only the top-oil temperature and the load ratio for the hot-spot temperature calculation. It also uses constants taken from the agreed-statement tables of the standards. During the tests, it emerged that the hot-spot temperature calculation method performs only a simple calculation and does not use other significant variables that could affect the hot-spot temperature.
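A steady-state hot-spot estimate in the spirit of IEC 60076-7 can be sketched from exactly the two quantities the TMS measures, top-oil temperature and load ratio; the rated gradient and winding exponent below are illustrative placeholders, not constants of the tested transformer.

```python
def hot_spot_temperature(top_oil, load_ratio, grad_rated=23.0, y=1.6):
    # Steady-state sketch: measured top-oil temperature plus the rated
    # hot-spot-to-top-oil gradient scaled by the load ratio raised to the
    # winding exponent y. grad_rated (in K) and y are illustrative values.
    return top_oil + grad_rated * load_ratio ** y

full_load = hot_spot_temperature(65.0, 1.0)   # full gradient at rated load
half_load = hot_spot_temperature(65.0, 0.5)   # reduced gradient below rated load
```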
Abstract: Data mining is the procedure of discovering interesting patterns in large amounts of data. Clustering is one of the most important supporting processes for faster data access. Clustering is the process of identifying similarity between data according to the characteristics present in the data and grouping associated data objects into clusters. A cluster ensemble is a technique for combining multiple runs of different clustering algorithms to obtain a general partition of the original dataset, aiming to consolidate the outcomes of a collection of individual clusterings. The performance of a clustering ensemble is mainly affected by two principal factors: diversity and quality. This paper presents an overview of different cluster ensemble algorithms, along with the methods they use to improve diversity and quality; it also shows a comparative analysis of different cluster ensembles and summarizes various cluster ensemble methods. This analysis should be useful to clustering practitioners and help in selecting the most appropriate method for the problem at hand.
Abstract: Rough set theory is used to handle uncertainty and incomplete information by applying two approximation sets, the lower approximation and the upper approximation. In this paper, rough clustering algorithms are improved by adopting similarity-, dissimilarity-similarity- and entropy-based initial centroid selection methods in three clustering algorithms, namely Entropy-based Rough K-Means (ERKM), Similarity-based Rough K-Means (SRKM) and Dissimilarity-Similarity-based Rough K-Means (DSRKM), which were developed and evaluated on a yeast dataset. The rough clustering algorithms are validated using the Rand and Adjusted Rand cluster validity indexes. Experimental results show that the ERKM clustering algorithm performs effectively and delivers better results than the other clustering methods. Outlier detection is an important task in data mining; outliers are objects very different from the rest of the objects in the clusters. The Entropy-based Rough Outlier Factor (EROF) method is suitable for detecting outliers effectively in the yeast dataset. In the Rough K-Means method, tuning the epsilon (ε) value from 0.8 to 1.08 allows outliers in the boundary region to be detected, and the RKM algorithm delivers better results when epsilon is chosen in this range. Experimental results show that the EROF method applied to the clustering algorithms performs very well and is suitable for detecting outliers effectively across all datasets. Further, the experimental readings show that the ERKM clustering method outperforms the other methods.
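One Rough K-Means assignment step (in the style of Lingras-West rough clustering) can be sketched as follows; the eps threshold plays the role of the tunable epsilon discussed above, and the one-dimensional data are invented.

```python
def rkm_assign(x, centroids, eps=1.05):
    # One Rough K-Means assignment step (Lingras-West style sketch).
    # If a second centroid is nearly as close as the nearest one
    # (distance ratio <= eps), the object goes to the boundary region,
    # i.e. the upper approximations of both clusters; otherwise it
    # belongs to the lower approximation of the nearest cluster.
    d = [abs(x - c) for c in centroids]
    i = min(range(len(centroids)), key=lambda k: d[k])
    close = {j for j in range(len(centroids))
             if j != i and d[i] > 0 and d[j] / d[i] <= eps}
    if close:
        return None, close | {i}    # boundary object: no lower approximation
    return i, {i}                   # certain object: lower approximation of i

lower, upper = rkm_assign(5.0, [0.0, 10.0])    # equidistant -> boundary region
lower2, upper2 = rkm_assign(1.0, [0.0, 10.0])  # clearly nearest to cluster 0
```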
Abstract: The purpose of this paper is to study and compare two maximum power point tracking (MPPT) algorithms in a photovoltaic simulation system: the perturb-and-observe (P&O) algorithm and the incremental conductance algorithm. MPPT plays an important role in photovoltaic systems because it maximizes the power output of a PV system for a given set of conditions, thereby maximizing array efficiency and minimizing overall system cost. Since the maximum power point (MPP) varies with irradiation and cell temperature, appropriate algorithms must be used to track the MPP and keep the system operating at it. MATLAB/Simulink is used to establish a model of the photovoltaic system with the MPPT function. This system is developed by combining the models of the solar PV module and a DC-DC boost converter. The system is simulated under different climate conditions. Simulation results show that the photovoltaic simulation system can track the maximum power point accurately.
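The perturb-and-observe logic can be sketched in a few lines: perturb the operating voltage, keep the direction while power rises, and reverse it when power falls. The P-V curve below is a hypothetical concave stand-in for a real module model.

```python
def pv_power(v):
    # Hypothetical concave P-V curve with its maximum power point at 30 V
    return max(0.0, 100.0 - 0.2 * (v - 30.0) ** 2)

def perturb_and_observe(v0=20.0, step=0.5, iters=100):
    # Keep perturbing the operating voltage in the same direction while the
    # measured power increases; reverse the perturbation when power drops.
    v, direction = v0, 1.0
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:
            direction = -direction   # overshot the MPP: turn back
        p_prev = p
    return v

v_final = perturb_and_observe()      # settles within one step of the MPP
```

As the sketch shows, P&O never stops exactly at the MPP: it oscillates around it within one perturbation step, which is why the step size trades tracking speed against steady-state ripple.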
Abstract: Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers. Such machines make it possible to run numerical algorithms quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance, and statistical tools have shown their relevance for modeling practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results in the presence of skewed data, since normal distributions are symmetric about their means. In order to visualize and quantify this aspect, nine statistical distributions (symmetric and skewed) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Based on this analysis, it is possible to explicitly derive the failure probability when the friction angle is considered a random variable. Furthermore, it is possible to compare the stability analyses obtained when the friction angle is modeled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison reveals relevant differences when analyzed in light of risk management.
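For the normal-distribution case the abstract questions, the failure probability has a closed form: if the friction angle is the only random variable, P(FS < 1) = P(φ < φ_crit) = Φ((φ_crit − μ)/σ). The sketch below uses illustrative values, not the Brasilia data.

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def failure_probability(mu_phi, sigma_phi, phi_critical):
    # P(FS < 1) = P(phi < phi_critical) under the normal assumption
    # (the assumption the abstract argues may be inadequate for skewed data)
    z = (phi_critical - mu_phi) / sigma_phi
    return normal_cdf(z)

# Illustrative numbers: mean friction angle 30 deg, s.d. 3 deg, and a
# hypothetical slope that fails once phi drops below 24 deg
pf = failure_probability(mu_phi=30.0, sigma_phi=3.0, phi_critical=24.0)
```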
Abstract: Vertex enumeration algorithms explore the methods and procedures for generating the vertices of general polyhedra formed by systems of equations or inequalities. The problem of enumerating the extreme points (vertices) of general polyhedra is shown to be NP-hard, which leads to exploring how to count the vertices of general polyhedra without listing them; this counting problem is in turn shown to be #P-complete. Some fully polynomial randomized approximation schemes (FPRAS) for counting the vertices of special classes of polyhedra associated with down-sets, independent sets, 2-knapsack problems, and 2 × n transportation problems are presented, together with some open problems discovered in the process.
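A naive vertex enumeration sketch for the 2-D case makes the combinatorial cost concrete: every pair of constraints is solved as equalities and the feasible intersections are kept. The instance below, a unit square, is chosen purely for illustration.

```python
from itertools import combinations

def vertices_2d(A, b, tol=1e-9):
    # Naive vertex enumeration for {x : A x <= b} in 2-D: solve every pair
    # of constraints as equalities and keep the intersection points that
    # satisfy all remaining inequalities. This brute force is exponential
    # in general dimension, which is why counting vertices is hard.
    verts = []
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < tol:
            continue  # parallel constraints: no unique intersection
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(ai[0] * x + ai[1] * y <= bi + tol for ai, bi in zip(A, b)):
            verts.append((round(x, 9), round(y, 9)))
    return sorted(set(verts))

# Unit square: x >= 0, y >= 0, x <= 1, y <= 1
A = [(-1.0, 0.0), (0.0, -1.0), (1.0, 0.0), (0.0, 1.0)]
b = [0.0, 0.0, 1.0, 1.0]
corners = vertices_2d(A, b)
```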
Abstract: We present a new framework for data-reusing (DR)
adaptive algorithms by incorporating a constraint on noise, referred
to as a noise constraint. The motivation behind this work is that the
use of the statistical knowledge of the channel noise can contribute
toward improving the convergence performance of an adaptive filter
in identifying a noisy linear finite impulse response (FIR) channel.
By incorporating the noise constraint into the cost function of the
DR adaptive algorithms, the noise constrained DR (NC-DR) adaptive
algorithms are derived. Experimental results clearly indicate their
superior performance over the conventional DR ones.
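The data-reusing idea can be sketched as an inner loop that applies the same input/desired pair several times per iteration; this simplified normalized-LMS version omits the noise constraint of the NC-DR algorithms, and the channel, step size, and noise level are invented.

```python
import numpy as np

N = 8
w_true = np.random.default_rng(2).standard_normal(N)   # unknown FIR channel

def dr_lms(reuse, mu=0.1, iters=60, seed=4):
    # Data-reusing (normalized) LMS sketch: the same (x, d) pair is applied
    # `reuse` times per iteration, trading extra computation per sample for
    # faster convergence; the NC-DR noise constraint is not modeled here.
    r = np.random.default_rng(seed)
    w = np.zeros(N)
    for _ in range(iters):
        x = r.standard_normal(N)
        d = w_true @ x + 0.01 * r.standard_normal()
        for _ in range(reuse):        # data-reusing inner loop
            e = d - w @ x
            w += mu * e * x / (x @ x)
    return np.linalg.norm(w - w_true)

# Same data sequence (same seed), differing only in the reuse count
err_plain, err_reuse = dr_lms(reuse=1), dr_lms(reuse=3)
```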
Abstract: We present a new subband adaptive filter (R-SAF)
which is robust against impulsive noise in system identification. To
address the vulnerability of adaptive filters based on the L2-norm
optimization criterion against impulsive noise, the R-SAF comes from
the L1-norm optimization criterion with a constraint on the energy
of the weight update. Minimizing the L1-norm of the a posteriori error in each subband under a constraint on minimum disturbance gives rise to robustness against impulsive noise together with good convergence performance. Experimental results clearly demonstrate that the proposed R-SAF outperforms classical adaptive filtering algorithms when impulsive noise as well as background noise are present.
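The robustness mechanism can be illustrated with a simplified, single-band normalized sign algorithm: because the update uses sign(e) rather than the raw error, an impulsive sample moves the weights by a bounded amount. The system, step size, and noise model below are illustrative, not the paper's full subband structure.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
w_true = rng.standard_normal(N)   # unknown system to identify
w = np.zeros(N)
mu = 0.02

for k in range(10000):
    x = rng.standard_normal(N)
    noise = 0.01 * rng.standard_normal()
    if k % 100 == 0:                      # occasional large impulsive outlier
        noise += 50.0 * rng.standard_normal()
    d = w_true @ x + noise
    e = d - w @ x
    # l1-style update: sign(e) bounds the step, so an impulsive sample
    # cannot throw the weights far off, unlike a raw-error (l2) update
    w += mu * np.sign(e) * x / (x @ x + 1e-8)

rel_err = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
```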
Abstract: The material selection problem is concerned with the
determination of the right material for a certain product to optimize
certain performance indices in that product such as mass, energy
density, and power-to-weight ratio. This paper is concerned with
optimizing the selection of the manufacturing process along with the
material used in the product under performance-index and
availability constraints. In this paper, the material selection problem
is formulated using binary programming and solved by genetic
algorithm. The objective function of the model is to minimize the
total manufacturing cost under performance indices and material and
manufacturing process availability constraints.
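A minimal genetic-algorithm sketch for a binary selection of one material and one manufacturing process, with an infeasibility penalty standing in for the performance-index constraint; the cost and performance numbers are invented and the encoding is simplified to two genes.

```python
import random

random.seed(42)

# Hypothetical candidates: (cost, performance index) per material / process
materials = [(10, 5), (7, 3), (12, 9), (8, 6)]
processes = [(4, 2), (6, 7), (9, 8)]
REQUIRED = 12   # minimum combined performance index

def fitness(ind):
    m, p = ind
    cost = materials[m][0] + processes[p][0]
    perf = materials[m][1] + processes[p][1]
    return cost + 100 * max(0, REQUIRED - perf)   # penalize infeasibility

def genetic_search(pop_size=20, gens=40, pmut=0.3):
    pop = [(random.randrange(len(materials)), random.randrange(len(processes)))
           for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            pa = min(random.sample(pop, 2), key=fitness)   # tournament selection
            pb = min(random.sample(pop, 2), key=fitness)
            child = (pa[0], pb[1])                         # recombine the two genes
            if random.random() < pmut:                     # mutate material gene
                child = (random.randrange(len(materials)), child[1])
            if random.random() < pmut:                     # mutate process gene
                child = (child[0], random.randrange(len(processes)))
            nxt.append(child)
        pop = nxt
        gen_best = min(pop, key=fitness)
        if fitness(gen_best) < fitness(best):
            best = gen_best
    return best

best = genetic_search()   # best (material, process) pair found
```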
Abstract: MicroRNAs are small non-coding RNAs found in many different species. They play crucial roles in cancer-related biological processes such as apoptosis and proliferation. The identification of microRNA target genes can be an essential first step towards revealing the role of microRNAs in various cancer types. In this paper, we predict miRNA target genes for lung cancer by integrating prediction scores from the miRanda and PITA algorithms, used as a feature vector of miRNA-target interaction. Machine-learning algorithms were then implemented to make the final prediction. The
approach developed in this study should be of value for future studies
into understanding the role of miRNAs in molecular mechanisms
enabling lung cancer formation.
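A minimal stand-in for the machine-learning stage: logistic regression trained by stochastic gradient descent on two-component feature vectors playing the role of the miRanda and PITA scores. The data are synthetic and illustrative, not real target-prediction scores.

```python
import math
import random

random.seed(0)

# Synthetic two-score feature vectors standing in for (miRanda, PITA)
# prediction scores; real interactions would supply these values
data = ([((random.gauss(1.0, 0.5), random.gauss(1.0, 0.5)), 1) for _ in range(100)]
        + [((random.gauss(-1.0, 0.5), random.gauss(-1.0, 0.5)), 0) for _ in range(100)])

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression on the two-score feature vector, trained by SGD:
# a minimal stand-in for the classifier making the final prediction
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(50):
    for (s1, s2), y in data:
        g = sigmoid(w[0] * s1 + w[1] * s2 + b) - y
        w[0] -= lr * g * s1
        w[1] -= lr * g * s2
        b -= lr * g

accuracy = sum((sigmoid(w[0] * s1 + w[1] * s2 + b) > 0.5) == (y == 1)
               for (s1, s2), y in data) / len(data)
```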
Abstract: Mumbai has traditionally been the epicenter of India's trade and commerce, and the existing major ports, Mumbai Port and Jawaharlal Nehru (JN) Port, situated in the Thane estuary, are developing their waterfront facilities. Various developments in this region over the passage of decades have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water owing to the advancement of the shoreline, while the jetty near Ulwe faces ship-scheduling problems due to the shallower depths between JN Port and Ulwe Bunder. Solving these problems requires information about tide levels over a long duration, normally obtained by field measurements. However, field measurement is a tedious and costly affair; instead, artificial intelligence was applied to predict water levels by training a network on measured tide data for one lunar tidal cycle. A two-layer feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi over one lunar tidal cycle (2013) were used to train, validate, and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimation of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using a neural network trained on the measured tide data (2000) of Apollo and Pir-Pau. The analysis of the measured data and the study reveal the following. The measured tidal data at Pir-Pau, Vashi, and Ulwe indicate a maximum amplification of the tide of about 10-20 cm with a phase lag of 10-20 minutes with reference to the tide at Apollo Bunder (Mumbai). The LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer. The tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels, which can be used to plan pumping operations at Pir-Pau and improve ship scheduling at Ulwe.
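The back-propagation training described above can be sketched with a toy two-layer feed-forward network fitted by plain gradient descent to a synthetic tidal sinusoid. The paper also uses Levenberg-Marquardt, which is not sketched here; the network size, learning rate, and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "tide": one sinusoidal cycle stands in for measured water levels
t = np.linspace(0.0, 1.0, 200)
tide = np.sin(2 * np.pi * t)

# Two-layer feed-forward network: tanh hidden layer, linear output
H = 12
W1 = 0.5 * rng.standard_normal((H, 1)); b1 = np.zeros((H, 1))
W2 = 0.5 * rng.standard_normal((1, H)); b2 = np.zeros((1, 1))
X, Y = t.reshape(1, -1), tide.reshape(1, -1)
lr = 0.1

def forward(X):
    h = np.tanh(W1 @ X + b1)          # hidden layer
    return W2 @ h + b2, h             # linear output

loss0 = np.mean((forward(X)[0] - Y) ** 2)
for _ in range(5000):                 # plain gradient descent (back-propagation)
    out, h = forward(X)
    err = out - Y
    n = X.shape[1]
    gW2 = err @ h.T / n; gb2 = err.mean(axis=1, keepdims=True)
    dh = (W2.T @ err) * (1 - h ** 2)  # back-propagate through tanh
    gW1 = dh @ X.T / n; gb1 = dh.mean(axis=1, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

loss = np.mean((forward(X)[0] - Y) ** 2)   # training error after fitting
```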
Abstract: Patient-specific models are instance-based learning
algorithms that take advantage of the particular features of the patient
case at hand to predict an outcome. We introduce two patient-specific algorithms based on the decision-tree paradigm that use the AUC as the metric to select an attribute. We apply the patient-specific algorithms
to predict outcomes in several datasets, including medical datasets.
Compared to the patient-specific decision path (PSDP) entropy-based
and CART methods, the AUC-based patient-specific decision path
models performed equivalently on area under the ROC curve (AUC).
Our results provide support for patient-specific methods being a
promising approach for making clinical predictions.
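The AUC-based attribute selection can be sketched directly from the rank-sum formulation of the AUC; the data below are invented, with one informative attribute and one noise attribute.

```python
def auc(scores, labels):
    # Area under the ROC curve via the rank-sum (Mann-Whitney) statistic:
    # the fraction of positive/negative pairs ranked correctly (ties = 0.5)
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_attribute(X, y):
    # Choose the attribute whose values best rank the outcome by AUC,
    # the selection metric used here instead of entropy (simplified sketch)
    scores = [auc([row[j] for row in X], y) for j in range(len(X[0]))]
    return max(range(len(scores)), key=lambda j: scores[j]), scores

# Toy data: attribute 0 separates the classes, attribute 1 is noise
X = [[0.1, 5], [0.2, 1], [0.8, 4], [0.9, 2]]
y = [0, 0, 1, 1]
j_best, auc_scores = best_attribute(X, y)
```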
Abstract: Recently, traffic monitoring has attracted the attention
of computer vision researchers. Many algorithms have been
developed to detect and track moving vehicles. In fact, vehicle tracking in daytime and in nighttime cannot be approached with the same techniques, due to the extremely different illumination conditions. Consequently, traffic-monitoring systems need a component to differentiate between daytime and nighttime scenes. In this paper, an HSV-based day/night detector is proposed for traffic
monitoring scenes. The detector employs the hue-histogram and the
value-histogram on the top half of the image frame. Experimental
results show that the extraction of the brightness features along with
the color features within the top region of the image is effective for
classifying traffic scenes. In addition, the detector achieves high precision and recall rates and is feasible for real-time applications.
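The brightness cue can be sketched as follows: compute the HSV value channel (the maximum RGB component) over the top half of the frame and threshold its mean. This collapses the detector's value histogram to a single statistic; the threshold and toy frames are illustrative.

```python
def hsv_value(pixel):
    # The HSV value channel of an RGB pixel is its maximum component
    return max(pixel)

def is_daytime(image, threshold=100):
    # Mean HSV value over the TOP half of the frame (mostly sky), as in the
    # detector described above; the threshold here is illustrative, not a
    # tuned parameter from the paper
    top = image[: len(image) // 2]
    vals = [hsv_value(px) for row in top for px in row]
    return sum(vals) / len(vals) > threshold

# Toy 4x2 "frames": bright sky over dark road vs. dark sky over lit road
day_frame = [[(200, 210, 230)] * 2, [(190, 200, 220)] * 2,
             [(60, 60, 60)] * 2, [(50, 50, 50)] * 2]
night_frame = [[(10, 10, 30)] * 2, [(15, 15, 35)] * 2,
               [(120, 120, 90)] * 2, [(110, 110, 80)] * 2]
```

Using only the top half makes the statistic depend on the sky rather than on headlights or street lamps in the lower half of the scene.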
Abstract: Ant algorithms are well-known metaheuristics which have been widely used for two decades. In most of the literature, an ant is a constructive heuristic able to build a solution from scratch. However, other types of ant algorithms have recently emerged: the discussion is thus not limited to the common framework of constructive ant algorithms. Generally, at each generation of an ant
algorithm, each ant builds a solution step by step by adding an
element to it. Each choice is based on the greedy force (also called the
visibility, the short term profit or the heuristic information) and the
trail system (central memory which collects historical information of
the search process). Usually, all the ants of the population have the
same characteristics and behaviors. In contrast in this paper, a new
type of ant metaheuristic is proposed, namely SMART (for Solution
Methods with Ants Running by Types). It relies on the use of different populations of ants, where each population has its own personality.
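The constructive step described above, combining the trail system with the greedy force, can be sketched for a small tour-building instance; the distance matrix, the weights alpha and beta, and the uniform initial pheromone are all illustrative, and no pheromone update is shown.

```python
import random

random.seed(7)

# Symmetric distance matrix for a 4-city tour (hypothetical instance)
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
n = len(D)
tau = [[1.0] * n for _ in range(n)]   # trail system (central memory)
alpha, beta = 1.0, 2.0                # trail weight vs. greedy-force weight

def construct_tour():
    # One constructive ant: build a tour step by step, choosing the next
    # city with probability proportional to tau^alpha * (1/distance)^beta,
    # i.e. trail information combined with the visibility (greedy force)
    tour = [0]
    while len(tour) < n:
        i = tour[-1]
        cand = [j for j in range(n) if j not in tour]
        weights = [tau[i][j] ** alpha * (1.0 / D[i][j]) ** beta for j in cand]
        tour.append(random.choices(cand, weights=weights)[0])
    return tour

def tour_length(tour):
    return sum(D[tour[k]][tour[(k + 1) % n]] for k in range(n))

best = min((construct_tour() for _ in range(50)), key=tour_length)
```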
Abstract: This paper outlines the development of an
experimental technique in quantifying supersonic jet flows, in an
attempt to avoid seeding particle problems frequently associated with
particle-image velocimetry (PIV) techniques at high Mach numbers.
Based on optical flow algorithms, the idea behind the technique
involves using high speed cameras to capture Schlieren images of the
supersonic jet shear layers, before they are subjected to an adapted
optical flow algorithm based on the Horn-Schunck method to
determine the associated flow fields. The proposed method is capable
of offering full-field unsteady flow information with potentially
higher accuracy and resolution than existing point-measurements or
PIV techniques. Preliminary study via numerical simulations of a
circular de Laval jet nozzle successfully reveals flow and shock
structures typically associated with supersonic jet flows, which serve
as useful data for subsequent validation of the optical-flow-based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 4 bar and an exit Mach number of 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As the application of optical flow techniques to supersonic flows remains rare, the current focus revolves around
methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further refined to improve robustness and accuracy. Despite these challenges, this supersonic flow
measurement technique may potentially offer a simpler way to
identify and quantify the fine spatial structures within the shock shear
layer.
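A minimal Horn-Schunck iteration can be sketched as below; it recovers a near-uniform one-pixel horizontal flow from a synthetic shifted pattern, mirroring the synthetic-image validation described above. The smoothness weight and iteration count are illustrative.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=0.5, iters=500):
    # Image gradients averaged over both frames, plus the temporal difference
    Ix = 0.5 * (np.gradient(I1, axis=1) + np.gradient(I2, axis=1))
    Iy = 0.5 * (np.gradient(I1, axis=0) + np.gradient(I2, axis=0))
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(iters):
        # 4-neighbour averages implement the global smoothness term
        u_bar = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                        + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        v_bar = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                        + np.roll(v, 1, 1) + np.roll(v, -1, 1))
        # Standard Horn-Schunck update: project the smoothed flow onto the
        # brightness-constancy constraint, weighted by alpha^2
        shared = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_bar - Ix * shared
        v = v_bar - Iy * shared
    return u, v

# Synthetic pattern shifted one pixel to the right between the two frames
N = 32
I1 = np.tile(np.sin(2 * np.pi * np.arange(N) / N), (N, 1))
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)   # expect u near +1 everywhere, v near 0
```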
Abstract: Orthogonal Frequency Division Multiplexing
(OFDM) has been used in many advanced wireless communication
systems due to its high spectral efficiency and robustness to
frequency-selective fading channels. However, the major concern with OFDM systems is the high peak-to-average power ratio (PAPR)
of the transmitted signal. Some of the popular techniques used for
PAPR reduction in OFDM system are conventional partial transmit
sequences (CPTS) and clipping. In this paper, a parallel
combination/hybrid scheme of PAPR reduction using clipping and
CPTS algorithms is proposed. The proposed method intelligently applies both algorithms in order to reduce both the PAPR and the computational complexity. The proposed scheme slightly degrades bit error rate (BER) performance due to the clipping operation; this degradation can be reduced by selecting an appropriate value of the clipping ratio (CR). The simulation results show that the proposed algorithm
achieves significant PAPR reduction with much reduced
computational complexity.
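The clipping half of the hybrid scheme can be sketched directly: generate one OFDM symbol, measure its PAPR, and clip the amplitude at a chosen clipping ratio above the RMS level. The subcarrier count and CR are illustrative, and the PTS stage is not sketched.

```python
import numpy as np

rng = np.random.default_rng(5)

def papr_db(x):
    # Peak-to-average power ratio of a complex baseband signal, in dB
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# One OFDM symbol: 64 QPSK subcarriers brought to the time domain by IFFT
N = 64
bits = rng.integers(0, 2, (N, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)

# Amplitude clipping at CR times the RMS level: samples above the limit are
# scaled back to it, lowering the PAPR at the cost of the BER penalty noted
# above (which an appropriate CR keeps small)
cr = 1.4
limit = cr * np.sqrt(np.mean(np.abs(x) ** 2))
clipped = np.where(np.abs(x) > limit, limit * x / np.maximum(np.abs(x), 1e-12), x)

papr_before, papr_after = papr_db(x), papr_db(clipped)
```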
Abstract: The current web has become a modern encyclopedia,
where people share their thoughts and ideas on various topics around
them. This kind of encyclopedia is very useful for other people who
are looking for answers to their questions. However, with the
growing popularity of social networking and blogging and ever
expanding network services, there has also been a growing diversity
of technologies along with a different structure of individual web
sites. It is therefore difficult for a common Internet user to find a relevant answer directly. This paper presents a web application for the
real-time end-to-end analysis of selected Internet trends where the
trend can be whatever the people post online. The application
integrates fully configurable tools for data collection and analysis
using selected webometric algorithms, and for their chronological visualization to the user. The application can thus help users to evaluate the quality of various products that are mentioned online.