Abstract: Advancing technology in the field of photogrammetry has
replaced analog cameras with digital aerial cameras integrated with
onboard GPS/IMU systems. In such a system, the position of the
camera is determined with GPS while the camera rotations are
determined by the IMU. Digital aerial cameras have been used for
photogrammetric applications all around the world for the last ten
years. In this way, photogrammetric work can use time effectively,
reduce costs to a minimum, and be carried out quickly and accurately.
Geo-referencing techniques, which are the cornerstone of GPS/INS
systems, bring flexibility to the photogrammetric triangulation of
images (interior and exterior orientation) required for adjustment.
The geo-referencing process also helps to reduce the number of
ground control points needed in photogrammetric applications. In
this study, the effect of direct and indirect geo-referencing
techniques on point accuracy in photogrammetric map production
was investigated.
Abstract: Load Forecasting plays a key role in making today's
and future's Smart Energy Grids sustainable and reliable. Accurate
power consumption prediction allows utilities to organize in advance
their resources or to execute Demand Response strategies more
effectively, which enables several features such as higher
sustainability, better quality of service, and affordable electricity
tariffs. Applying Load Forecasting at a smaller geographic scale,
i.e. in Smart Micro Grids, is challenging yet valuable, since the
lower available grid flexibility makes accurate prediction more
critical for Demand Response applications. This paper analyses the
application of short-term load forecasting in a concrete scenario,
proposed within the EU-funded GreenCom project, which collects load
data from single loads and households belonging to a Smart Micro
Grid. Three
short-term load forecasting techniques, i.e. linear regression, artificial
neural networks, and radial basis function network, are considered,
compared, and evaluated through absolute forecast errors and training
time. The influence of weather conditions in Load Forecasting is also
evaluated. A new definition of Gain is introduced in this paper,
which serves as a novel indicator of the time-span consistency of
short-term prediction capabilities. Two models, for 24-hour-ahead
and 1-hour-ahead forecasting, are built to comprehensively compare
the three techniques.
Abstract: Over the past decade, considerable effort has gone into
developing proficient tools for performing various tasks on big
data. Big data has recently received much publicity, and for good
reason: large and complex collections of datasets are difficult to
process with traditional data processing applications, which makes
dedicated big data tools all the more necessary. The main aim of
big data analytics is to apply advanced analytic techniques to very
large, heterogeneous datasets, ranging in size from terabytes to
zettabytes and varying in type from structured to unstructured and
from batch to streaming. Big data tools are useful for datasets
whose size or type is beyond the capability of traditional
relational databases to capture, manage and process with low
latency. These challenges have led to the emergence of powerful big
data tools. In this survey, a varied collection of big data tools
is described and compared with respect to their salient features.
Abstract: Opportunistic routing is used where the network has
features such as dynamic topology changes and intermittent network
connectivity. The opportunistic forwarding technique is widely used
in Delay Tolerant Networks, also called Disruption Tolerant
Networks. The key idea of opportunistic routing is to select
forwarding nodes to forward data packets and to coordinate among
these nodes to avoid duplicate transmissions. This paper analyses
the pros and cons of various opportunistic routing techniques used
in MANETs.
Abstract: This paper aims at finding a suitable neural network
for monitoring congestion level in electrical power systems. In this
paper, the input data has been framed properly to meet the target
objective through supervised learning mechanism by defining normal
and abnormal operating conditions for the system under study. The
congestion level, expressed as line congestion index (LCI), is
evaluated for each operating condition and is presented to the NN
along with the bus voltages to represent the input and target data.
Once training is successful, the NN learns how to deal with a
set of newly presented data through validation and testing.
The crux of the results presented in this paper rests on
performance comparison of a multi-layered feed forward neural
network with eleven types of back propagation techniques so as to
evolve the best training criteria. The proposed methodology has been
tested on the standard IEEE-14 bus test system with the support of
MATLAB based NN toolbox. The results presented in this paper
signify that the Levenberg-Marquardt backpropagation algorithm
gives the best training performance of all eleven cases considered,
thus validating the proposed methodology.
Abstract: The color histogram is considered the oldest method
used by CBIR systems for indexing images. Global histograms,
however, do not capture spatial information, which is why later
techniques have attempted to overcome this limitation by
introducing segmentation as a preprocessing step. Local histograms
employ weak segmentation, while other methods such as the Color
Coherence Vector (CCV) are based on strong segmentation. Indexation
based on local histograms consists of splitting the image into N
overlapping blocks, or sub-regions, and computing the histogram of
each block. The dissimilarity between two images is thereby reduced
to computing the distances between the N local histograms of both
images, yielding N*N values; generally, the lowest value is used to
rank images, i.e. the lowest value designates which sub-region is
used to index the images of the queried collection. In this paper,
we examine the local histogram indexation method and compare its
results against those given by the global histogram. We also
address another noteworthy issue that arises when relying on local
histograms, namely which of the N*N values to trust when comparing
images, in other words, which of the N*N sub-region pairs to use as
a basis for indexing. Based on the results achieved here, relying
on local histograms, which imposes an extra overhead on the system
through the additional segmentation preprocessing step, does not
necessarily produce better results. In addition, we propose some
ideas for selecting the local histogram used to encode the image,
rather than simply relying on the local histogram with the lowest
distance to the query histograms.
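As a minimal illustration of the indexation scheme described in this abstract (not the authors' implementation), the sketch below splits a grayscale image, represented as a 2-D list, into a 2x2 grid of non-overlapping blocks, computes a normalized histogram per block, and reduces the dissimilarity of two images to the lowest of the N*N block-to-block L1 distances. The grid size, bin count, and distance metric are illustrative assumptions.

```python
# Sketch of local-histogram indexation. Assumptions (not from the
# paper): grayscale images as 2-D lists, a 2x2 grid of blocks,
# 4-bin histograms, and the L1 distance between histograms.

def block_histogram(img, r0, r1, c0, c1, bins=4, levels=256):
    """Normalized histogram of the sub-region img[r0:r1][c0:c1]."""
    hist = [0] * bins
    for row in img[r0:r1]:
        for v in row[c0:c1]:
            hist[v * bins // levels] += 1
    total = sum(hist) or 1           # normalize for comparability
    return [h / total for h in hist]

def local_histograms(img, grid=2):
    """Split the image into grid*grid blocks; return their histograms."""
    rows, cols = len(img), len(img[0])
    hists = []
    for i in range(grid):
        for j in range(grid):
            hists.append(block_histogram(
                img,
                i * rows // grid, (i + 1) * rows // grid,
                j * cols // grid, (j + 1) * cols // grid))
    return hists

def l1(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

def dissimilarity(img_a, img_b, grid=2):
    """N*N block-to-block distances; the lowest value ranks images."""
    ha, hb = local_histograms(img_a, grid), local_histograms(img_b, grid)
    return min(l1(x, y) for x in ha for y in hb)
```

The open question the abstract raises is precisely whether taking the minimum over the N*N values, as `dissimilarity` does, is the right choice of sub-region pair.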
Abstract: In this paper, strontium ferrite (SrO.6Fe2O3) was
synthesized by the sol-gel auto-combustion process. The thermal
behavior of powder obtained from self-propagating combustion of
initial gel was evaluated by simultaneous differential thermal
analysis (DTA) and thermogravimetric analysis (TG), from room
temperature to 1200°C. The as-burnt powder was calcined at various
temperatures from 700 to 900°C to achieve the single-phase
Sr-ferrite. Phase
composition, morphology and magnetic properties were investigated
using X-ray diffraction (XRD), transmission electron microscopy
(TEM) and vibrating sample magnetometry (VSM) techniques.
Results showed that the single-phase and nano-sized hexagonal
strontium ferrite particles were formed at calcination temperature of
800°C with crystallite size of 27 nm and coercivity of 6238 Oe.
Abstract: Recent perceived climate variability raises concerns
with unprecedented hydrological phenomena and extremes.
Distribution and circulation of the waters of the Earth become
increasingly difficult to determine because of additional uncertainty
related to anthropogenic emissions. The worldwide observed
changes in the large-scale hydrological cycle have been related to
an increase in the observed temperature over several decades. Although
the effect of change in climate on hydrology provides a general
picture of possible hydrological global change, new tools and
frameworks for modelling hydrological series with nonstationary
characteristics at finer scales are required for assessing climate
change impacts. Of the downscaling techniques, dynamic
downscaling is usually based on the use of Regional Climate Models
(RCMs), which generate finer resolution output based on atmospheric
physics over a region using General Circulation Model (GCM) fields
as boundary conditions. However, RCMs are not expected to capture
the observed spatial precipitation extremes at a fine cell scale or at a
basin scale. Statistical downscaling derives a statistical or empirical
relationship between the variables simulated by the GCMs, called
predictors, and station-scale hydrologic variables, called predictands.
The main focus of the paper is on the need for using statistical
downscaling techniques for projection of local hydrometeorological
variables under climate change scenarios. The projections can then
serve as input to various hydrologic models to obtain streamflow,
evapotranspiration, soil moisture and other hydrological variables
of interest.
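The statistical downscaling step this abstract describes, deriving an empirical relationship between a GCM-simulated predictor and a station-scale predictand, can be sketched in its simplest form as an ordinary least-squares fit. The single predictor and the data below are illustrative assumptions; real applications use many predictors and more careful models.

```python
# Sketch of statistical downscaling as a simple linear relationship
# between one GCM predictor (e.g. large-scale temperature) and one
# station-scale predictand (e.g. local precipitation).

def fit_linear(predictor, predictand):
    """Ordinary least-squares fit: predictand ~ a + b * predictor."""
    n = len(predictor)
    mx = sum(predictor) / n
    my = sum(predictand) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(predictor, predictand))
    sxx = sum((x - mx) ** 2 for x in predictor)
    b = sxy / sxx
    a = my - b * mx
    return a, b

def downscale(a, b, gcm_values):
    """Project station-scale values from new GCM output."""
    return [a + b * x for x in gcm_values]
```

The projected values returned by `downscale` would then feed the hydrologic models mentioned above.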
Abstract: This paper is concerned with the stability problem of
neural networks with two additive time-varying delay components. By
choosing an augmented Lyapunov-Krasovskii functional, using some
new zero equalities, and combining linear matrix inequality (LMI)
techniques, two new sufficient criteria ensuring the global
asymptotic stability of the delayed neural networks (DNNs) are
obtained. These stability criteria are presented in terms of linear
matrix inequalities and can be easily checked. Finally, some
examples are given to demonstrate the effectiveness and reduced
conservatism of the proposed method.
Abstract: Lightning protection systems (LPS) for wind power
generation are becoming an important public issue. Serious blade
damage and accidents in which low-voltage and control circuits
break down occur frequently in many wind farms. A grounding system
is one of the most important components of an appropriate LPS for
wind turbines (WTs). Proper design of a wind turbine grounding
system is demanding, and several factors must be taken into account
for proper and effective implementation. In this paper, a procedure
for the proper design of grounding systems for a wind turbine is
introduced. The procedure relies on measuring the ground current of
a simulated wind farm under lightning, taking soil ionization into
consideration. It also evaluates the Ground Potential Rise (GPR),
the voltage distribution at ground surface level, and the touch
potential. In particular, the contribution of mitigating
techniques, such as rings and rods, and of the proposed design was
investigated.
Abstract: In this paper, an edge-strength guided multiscale
retinex (EGMSR) approach is proposed for color image contrast
enhancement. In EGMSR, the pixel-dependent weight associated with
each pixel of the single-scale retinex output image is computed
according to the edge strength around that pixel, in order to
prevent over-enhancement of the noise contained in the smooth
dark/bright regions. Further, by fusing together the enhanced
results of EGMSR
and adaptive multiscale retinex (AMSR), we can get a natural fused
image having high contrast and proper tonal rendition. Experimental
results on several low-contrast images have shown that our proposed
approach can produce natural and appealing enhanced images.
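The single-scale retinex output that EGMSR weights can be sketched as follows: the illumination is estimated by smoothing the image, and the retinex output is the log-ratio of image to estimate. A 3x3 box filter stands in here for the Gaussian surround, and the edge-strength weighting that distinguishes EGMSR is not shown; both are illustrative simplifications.

```python
# Sketch of single-scale retinex (SSR) on a grayscale 2-D list.
# A box blur approximates the Gaussian surround of true SSR.
import math

def box_blur(img):
    """3x3 mean filter with clamped borders."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [img[rr][cc]
                    for rr in range(max(r - 1, 0), min(r + 2, rows))
                    for cc in range(max(c - 1, 0), min(c + 2, cols))]
            out[r][c] = sum(vals) / len(vals)
    return out

def single_scale_retinex(img):
    """SSR: log of image minus log of estimated illumination."""
    blur = box_blur(img)
    return [[math.log(img[r][c] + 1.0) - math.log(blur[r][c] + 1.0)
             for c in range(len(img[0]))]
            for r in range(len(img))]
```

On a perfectly uniform region the output is zero everywhere, which is why smooth dark/bright regions carry mostly noise after enhancement, the problem EGMSR's edge-strength weights address.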
Abstract: The Semantic Web grows day by day with the rapid
increase in the number of web pages. Many standard formats are
available for storing semantic web data, the most popular being the
Resource Description Framework (RDF). Querying large RDF graphs
becomes tedious as the amount of data increases, and query
optimization becomes an issue. Choosing the best query plan reduces
query execution time. To address this problem, nature-inspired
algorithms can be used as an alternative to traditional query
optimization techniques. In this research, the optimal query plan
is generated by the proposed SAPSO algorithm, a hybrid of the
Simulated Annealing (SA) and Particle Swarm Optimization (PSO)
algorithms. The proposed SAPSO algorithm is able to refine locally
promising solutions while avoiding entrapment in local minima.
Experiments were performed on different datasets by varying the
number of predicates and the amount of data. The proposed algorithm
gives improved results compared to existing algorithms in terms of
query execution time.
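A generic sketch of the SA/PSO hybrid idea behind SAPSO is given below. The paper applies it to RDF query plans; here, purely for illustration, a standard PSO update is combined with a simulated-annealing acceptance step on each particle's personal best, minimizing a toy 1-D cost. The update rules, parameters, and cooling schedule are textbook choices, not the authors' exact formulation.

```python
# Sketch of a hybrid SA/PSO minimizer: PSO position/velocity updates
# plus SA-style acceptance of occasionally worse personal bests,
# which helps particles escape local minima while cooling.
import math
import random

def sapso(cost, lo, hi, particles=20, iters=200, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(particles)]
    vs = [0.0] * particles
    best = list(xs)                      # personal bests
    gbest = min(xs, key=cost)            # global best
    temp = 1.0
    for _ in range(iters):
        for i in range(particles):
            # Standard PSO velocity and position update.
            vs[i] = (0.7 * vs[i]
                     + 1.5 * rng.random() * (best[i] - xs[i])
                     + 1.5 * rng.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            delta = cost(xs[i]) - cost(best[i])
            # SA acceptance: keep improvements, and sometimes accept
            # a worse personal best while the temperature is high.
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                best[i] = xs[i]
            if cost(best[i]) < cost(gbest):
                gbest = best[i]
        temp *= 0.95                     # cooling schedule
    return gbest
```

In the paper's setting the "position" would encode a candidate query plan and `cost` its estimated execution time.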
Abstract: ‘Steganalysis’ is one of the challenging and attractive interests for researchers, given the development of information hiding techniques. It is the procedure of detecting hidden information in a stego object created by a known steganographic algorithm. In this paper, a novel feature-based image steganalysis technique is proposed. Various statistical moments are used along with some similarity metrics. The proposed steganalysis technique is designed based on transformation in four wavelet domains: Haar, Daubechies, Symlets and Biorthogonal. Each domain is subjected to various classifiers, namely K-nearest-neighbor, K* classifier, locally weighted learning, the Naive Bayes classifier, neural networks, decision trees and support vector machines. The experiments are performed on a large set of images freely available in image databases. The system also predicts different message lengths.
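The kind of wavelet-domain feature extraction this abstract refers to can be sketched with a one-level 1-D Haar transform followed by simple statistical moments of the detail coefficients. The paper uses 2-D transforms in four wavelet families and richer features; this illustrates only the idea.

```python
# Sketch of wavelet-domain feature extraction for steganalysis:
# one-level 1-D Haar DWT, then moments of the coefficients.
import math

def haar_1d(signal):
    """One-level Haar DWT: (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def moments(coeffs):
    """Mean and variance, two simple features for a classifier."""
    n = len(coeffs)
    mean = sum(coeffs) / n
    var = sum((c - mean) ** 2 for c in coeffs) / n
    return mean, var
```

Feature vectors built this way per wavelet domain would then be fed to the classifiers listed in the abstract.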
Abstract: An extensive amount of work has been done in data
clustering research, under the unsupervised learning branch of Data
Mining, during the past two decades. Several approaches and methods
have emerged, focusing on clustering diverse data types, features
of cluster models and similarity measures between clusters.
However, no single clustering algorithm extracts efficient clusters
in every setting. To rectify this issue, a new and challenging
technique called the Cluster Ensemble method has emerged as an
alternative approach to the cluster analysis problem. The main
objective of a Cluster Ensemble is to aggregate diverse clustering
solutions in such a way as to attain accuracy and to improve on the
quality of the individual clustering algorithms. Given the massive
and rapid development of new methods in the field of data mining, a
critical analysis of existing techniques and future directions is
highly necessary. This paper presents a comparative analysis of
different cluster ensemble methods along with their methodologies
and salient features. This analysis will be useful for the
community of clustering experts and will help in deciding on the
most appropriate method to resolve the problem at hand.
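One common way to aggregate diverse clustering solutions, as the Cluster Ensemble methods surveyed here aim to do, is a co-association (evidence accumulation) matrix: count how often each pair of points is clustered together across the base clusterings, then merge strongly co-associated points. The labelings and threshold below are illustrative; the surveyed methods differ in detail.

```python
# Sketch of cluster-ensemble aggregation via a co-association matrix.

def co_association(labelings):
    """Fraction of base clusterings placing each pair together."""
    n = len(labelings[0])
    m = len(labelings)
    ca = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    ca[i][j] += 1.0 / m
    return ca

def consensus(labelings, threshold=0.5):
    """Merge points whose co-association exceeds the threshold
    (single-link union-find over the thresholded matrix)."""
    ca = co_association(labelings)
    n = len(ca)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(n):
            if ca[i][j] > threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]      # consensus labels
```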
Abstract: The classroom of the 21st century is an ever-changing forum for new and innovative thoughts and ideas. With increasing technology and opportunity, students have rapid access to information that only decades ago would have taken weeks to obtain. Unfortunately, new techniques and technology are not a cure for the fundamental problems that have plagued the classroom ever since education was established. Class size has long been debated in academia. While it is difficult to pinpoint an exact number, it is clear that in this case more does not mean better. By looking into the successes and pitfalls of classroom size, the true advantages of smaller classes become clear. Previously, one class comprised 50 students. Being seventeen- and eighteen-year-old students, it was sometimes quite difficult for them to stay focused. To help them understand and gain knowledge, the researcher introduced the “Theory of Multiple Intelligences”, and this, in fact, enabled students to learn according to their own learning preferences no matter how they were being taught. In this lesson, the researcher designed a cycle of learning activities involving all intelligences so that everyone had equal opportunities to learn.
Abstract: In this paper, the design problem of state estimators for
neural networks with mixed time-varying delays is investigated by
constructing appropriate Lyapunov-Krasovskii functionals and using
some effective mathematical techniques. In order to derive
conditions guaranteeing that the estimation error systems are
globally exponentially stable, we transform the considered systems
into neutral-type time-delay systems. Then, with a set of linear
matrix inequalities (LMIs), we obtain the stability criteria.
Finally, three numerical examples are given to show the
effectiveness and reduced conservatism of the proposed criteria.
Abstract: The objectives of this research are to study patterns of fire location distribution and to develop techniques for applying a Geographic Information System (GIS) to fire risk assessment for fire planning and management. Fire risk assessment was based on two groups of factors: vulnerability factors, such as building material type, building height and building density, and mitigation capacity factors, such as accessibility by road, distance to the fire station and distance to hydrants. The factors were obtained from four groups of stakeholders: firemen, city planners, local government officers and local residents. The factors obtained from all stakeholders were converted into GIS raster data and then superimposed in order to prepare a fire risk map of the area showing levels of fire risk ranging from high to low. The level of fire risk was obtained from the weighted mean of each factor based on the stakeholders, and the weight for each factor was obtained by the Analytic Hierarchy Process (AHP).
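The weighted-mean overlay step this abstract describes can be sketched as follows: raster factor layers (2-D grids scored, say, from 1 = low risk to 5 = high risk) are combined cell by cell using AHP-derived factor weights. The layers, scores, and weights below are illustrative assumptions, not values from the study.

```python
# Sketch of a GIS weighted overlay: cell-wise weighted mean of
# equally sized raster factor layers.

def weighted_overlay(layers, weights):
    """Combine raster layers into a risk surface by weighted mean."""
    total = sum(weights)
    rows, cols = len(layers[0]), len(layers[0][0])
    risk = [[0.0] * cols for _ in range(rows)]
    for layer, w in zip(layers, weights):
        for r in range(rows):
            for c in range(cols):
                risk[r][c] += w * layer[r][c] / total
    return risk

# Illustrative 2x2 rasters: building density and (inverted) road access.
building_density = [[5, 1], [3, 3]]
road_access      = [[1, 1], [5, 1]]
risk = weighted_overlay([building_density, road_access], [0.6, 0.4])
```

Each cell of `risk` then falls on the high-to-low scale used to draw the fire risk map.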
Abstract: The study investigated the implementation of the
Neural Network (NN) techniques for prediction of the loading of Cu
ions onto clinoptilolite. An experimental design using analysis of
variance (ANOVA) was chosen for testing the adequacy of the
Neural Network and for optimizing the effective input parameters
(pH, temperature and initial concentration). A feed-forward, multi-layer
perceptron (MLP) NN successfully tracked the non-linear behavior of
the adsorption process versus the input parameters with mean squared
error (MSE), correlation coefficient (R) and minimum squared error
(MSRE) of 0.102, 0.998 and 0.004, respectively. The results showed
that NN modeling techniques can effectively predict and simulate
highly complex and non-linear processes such as ion exchange.
Abstract: Chaotic analysis was performed on river flow time series before and after applying wavelet-based de-noising techniques, in order to investigate the effect of noise content on the chaotic nature of the flow series. In this study, 38 years of monthly runoff data from three gauging stations were used. The gauging stations are located in the Ghar-e-Aghaj river basin, Fars province, Iran. The noise level of the time series was estimated with the aid of a Gaussian kernel algorithm. This step was found to be crucial in preventing the removal of vital information, such as memory, correlation and trend, from the time series along with the noise during the de-noising process.
Abstract: Advances in the field of image processing envision a
new era of evaluation techniques and procedures applied in many
different fields. One such field is biomedicine, for the prognosis
as well as the diagnosis of diseases. Although this plethora of
methods provides a wide range of options to select from, it also
creates confusion in selecting the apt process and in finding which
one is more suitable. Our objective is to use a series of
techniques on bone scans so as to detect the occurrence of
rheumatoid arthritis (RA) as accurately as possible. Among the
techniques existing in the field, our proposed system tends to be
more effective, as it depends on new methodologies that have proved
to be better and more consistent than others. Computer-aided
diagnosis provides a more accurate and reliable rate of consistency
that helps to improve the efficiency of the system. The image first
undergoes histogram smoothing and specification, a morphing
operation, boundary detection by an edge-following algorithm and
finally image subtraction to determine the presence of rheumatoid
arthritis in a more efficient and effective way. Using
preprocessing, noise is removed from the images; using
segmentation, the region of interest is found; and histogram
smoothing is applied to a specific portion of the images.
Gray-level co-occurrence matrix (GLCM) features such as mean,
median, energy and correlation, along with Bone Mineral Density
(BMD), are then extracted. After all the features are found, they
are stored in the database. This dataset is trained with inflamed
and non-inflamed values and, with the help of a neural network, all
new images are checked for their status; a rough set is implemented
for further reduction.
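The GLCM feature extraction step this abstract mentions can be sketched as follows. Assumptions (not from the paper): a grayscale image as a 2-D list of quantized levels, a horizontal offset of one pixel, and two standard features (energy and contrast) computed from the normalized matrix; the paper's full feature set also includes mean, median, correlation and BMD.

```python
# Sketch of gray-level co-occurrence matrix (GLCM) features.

def glcm(img, levels=4):
    """Normalized co-occurrence matrix for horizontally adjacent
    pixel pairs in a quantized grayscale image."""
    m = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
            pairs += 1
    return [[v / pairs for v in row] for row in m]

def energy(m):
    """Sum of squared probabilities; high for uniform textures."""
    return sum(p * p for row in m for p in row)

def contrast(m):
    """Intensity difference weighted by co-occurrence probability."""
    return sum(p * (i - j) ** 2
               for i, row in enumerate(m)
               for j, p in enumerate(row))
```

Vectors of such features, computed per scan, would populate the database that the neural network classifier is trained on.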