Abstract: In this paper, we present a novel objective non-reference performance assessment algorithm for image fusion. It takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is based on the Universal Image Quality Index and uses the similarity between blocks of pixels in the input images and the fused image as the weighting factor for the metric. Experimental results confirm that the values of the proposed metric correlate well with the subjective quality of the fused images, giving a significant improvement over standard measures based on mean squared error and mutual information.
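As a minimal sketch of the idea, the following code computes the Universal Image Quality Index per block and weights each source's score by its local saliency. The block size, the choice of variance as saliency, and all names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def uiqi(x, y, eps=1e-12):
    """Universal Image Quality Index between two equally sized blocks."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2) + eps)

def fusion_quality(a, b, f, block=8):
    """Per-block UIQI of each source image (a, b) against the fused image f,
    weighted toward the locally more salient (higher-variance) source."""
    h, w = a.shape
    total, count = 0.0, 0
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            wa = a[i:i+block, j:j+block]
            wb = b[i:i+block, j:j+block]
            wf = f[i:i+block, j:j+block]
            sa, sb = wa.var(), wb.var()            # local saliency = variance
            lam = sa / (sa + sb + 1e-12)           # weight for source a
            total += lam * uiqi(wa, wf) + (1 - lam) * uiqi(wb, wf)
            count += 1
    return total / count

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
f = 0.5 * (a + b)                                  # toy "fused" image
print(fusion_quality(a, b, f))
```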
Abstract: Stochastic modeling of network traffic is an area of
significant research activity for current and future broadband
communication networks. Multimedia traffic is statistically
characterized by a bursty variable bit rate (VBR) profile. In this
paper, we develop an improved model for uniform activity level
video sources in ATM using a doubly stochastic autoregressive
model driven by an underlying spatial point process. We then
examine a number of burstiness metrics such as the peak-to-average
ratio (PAR), the temporal autocovariance function (ACF) and the
traffic measurements histogram. We found that the first of these measures is most suitable for capturing the burstiness of single-scene video traffic. In the last phase of this work, we analyse the statistical multiplexing of several constant-scene video sources. As expected, this proved advantageous in reducing the burstiness of the traffic, as long as the sources are statistically independent. We observed that the burstiness diminished rapidly, with the largest gain occurring when only around 5 sources are multiplexed. The novel model used in this paper for characterizing uniform activity video was thus found to be accurate.
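To make the multiplexing result concrete, the sketch below simulates a plain AR(1) bit-rate source as a stand-in for the paper's doubly stochastic model (the spatial point-process driver is omitted) and reports the peak-to-average ratio (PAR) of the aggregate; all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_source(n, a=0.9, mean=1.0, sigma=0.1):
    """Simple AR(1) bit-rate source: x[t] = a*x[t-1] + (1-a)*mean + noise."""
    x = np.empty(n)
    x[0] = mean
    for t in range(1, n):
        x[t] = a * x[t - 1] + (1 - a) * mean + sigma * rng.standard_normal()
    return np.clip(x, 0, None)

def par(x):
    """Peak-to-average ratio, the burstiness measure favoured above."""
    return x.max() / x.mean()

# PAR drops quickly as independent sources are multiplexed; most of the
# gain appears by roughly 5 sources, matching the observation in the paper.
for n_src in (1, 2, 5, 10, 20):
    agg = sum(ar1_source(10_000) for _ in range(n_src))
    print(n_src, round(par(agg), 3))
```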
Abstract: Use of the Internet and the World-Wide-Web (WWW) has become widespread in recent years, and mobile agent technology has proliferated at an equally rapid rate. In this scenario, load balancing becomes important for P2P systems. Moreover, P2P systems can be highly heterogeneous, i.e., they may consist of peers that range from old desktops to powerful servers connected to the Internet through high-bandwidth lines. Various load balancing policies have been proposed. A primitive one is the Message Passing Interface (MPI). Its wide availability and portability make it an attractive choice; however, the communication requirements are sometimes inefficient when implementing the primitives provided by MPI. We therefore use the concept of mobile agents, because the mobile agent (MA) based approach has the merits of high flexibility, efficiency, low network traffic, and low communication latency, and is highly asynchronous. In this study we present a decentralized load balancing scheme using mobile agent technology, in which, when a node is overloaded, tasks migrate to less-utilized nodes so as to share the workload. The decision of which nodes receive migrating tasks is made in real time by defining certain load balancing policies. These policies are executed on PMADE (A Platform for Mobile Agent Distribution and Execution) in a decentralized manner using JuxtaNet, and various load balancing metrics are discussed.
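The abstract does not spell out the migration policies, so the following is a generic sender-initiated threshold policy sketched under stated assumptions: an overloaded node ships its smallest task to the least-utilized node, the kind of decision a PMADE mobile agent could carry out; thresholds and names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capacity: float
    tasks: list = field(default_factory=list)    # task sizes (> 0)

    def load(self):
        return sum(self.tasks) / self.capacity   # utilization

def balance(nodes, high=0.8, low=0.5):
    """One round of a threshold-based policy: while a node is above `high`,
    move its smallest task to the least-loaded node still below `low`."""
    for src in nodes:
        while src.load() > high and src.tasks:
            dst = min(nodes, key=Node.load)
            if dst is src or dst.load() >= low:
                break                             # nobody can absorb more work
            task = min(src.tasks)
            src.tasks.remove(task)
            dst.tasks.append(task)                # in PMADE this move would be
                                                  # carried by a mobile agent

nodes = [Node("a", 1.0, [0.5, 0.4, 0.3]), Node("b", 1.0, [0.1]), Node("c", 2.0, [])]
balance(nodes)
print([(n.name, round(n.load(), 2)) for n in nodes])
```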
Abstract: The cost of developing software from scratch can be saved by identifying and extracting reusable components from already developed and existing software systems or legacy systems [6]. But the issue of how to identify reusable components from existing systems has remained relatively unexplored. We have used a metric-based approach for characterizing a software module. In the present work, the values of McCabe's Cyclomatic Complexity Measure for complexity measurement, the Regularity Metric, the Halstead Software Science Indicator for volume indication, the Reuse Frequency metric, and the Coupling Metric for a software component are used as input attributes to different types of neural network systems, and the reusability of the software component is calculated. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE).
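A minimal sketch of this pipeline, assuming a feed-forward regressor stands in for the paper's neural network systems: the five metrics feed a scikit-learn MLP and MAE/RMSE are reported. The data here is random placeholder data, not the study's dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Columns: cyclomatic complexity, regularity, Halstead volume,
# reuse frequency, coupling -- the five input attributes named above.
X = np.random.rand(200, 5)          # placeholder metric values
y = np.random.rand(200)             # placeholder reusability score in [0, 1]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:150], y[:150])
pred = model.predict(X[150:])

mae = mean_absolute_error(y[150:], pred)
rmse = mean_squared_error(y[150:], pred) ** 0.5
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}")
```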
Abstract: The requirement to improve software productivity has promoted research on software metric technology. There are metrics for identifying the quality of reusable components, but the function that makes use of these metrics to determine the reusability of software components is still not clear. If these metrics are identified in the design phase, or even in the coding phase, they can help us reduce rework by improving the quality of reuse of the component and hence improve productivity due to a probabilistic increase in the reuse level. The CK metric suite is the most widely used set of metrics for object-oriented (OO) software; we critically analyzed the CK metrics, tried to remove the inconsistencies, and devised a framework of metrics to obtain a structural analysis of OO-based software components. A neural network can learn new relationships from new input data and can be used to refine fuzzy rules to create an adaptive fuzzy system. Hence, a neuro-fuzzy inference engine can be used to evaluate the reusability of an OO-based component using its structural attributes as inputs. In this paper, an algorithm is proposed in which tuned WMC, DIT, NOC, CBO and LCOM values of the OO software component are given as inputs to the neuro-fuzzy system, and the output is obtained in terms of reusability. The developed reusability model has produced high-precision results, as expected by the human experts.
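The fuzzy side of such an engine can be illustrated with a toy Mamdani-style rule over three of the CK inputs. The membership ranges, rules, and output singletons below are illustrative assumptions; in a real neuro-fuzzy system these parameters would be tuned by the network.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def reusability(wmc, cbo, lcom):
    """Toy fuzzy rules over WMC, CBO and LCOM (ranges are assumptions)."""
    low_wmc  = tri(wmc,  -1, 0, 20)      # few weighted methods
    low_cbo  = tri(cbo,  -1, 0, 8)       # loosely coupled
    low_lcom = tri(lcom, -1, 0, 0.5)     # cohesive
    high = min(low_wmc, low_cbo, low_lcom)         # rule: all favourable -> high
    low  = 1.0 - high                              # complementary rule
    return (high * 0.9 + low * 0.2) / (high + low) # centroid of two singletons

print(reusability(wmc=5, cbo=2, lcom=0.1))   # 0.725: leaning toward reusable
```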
Abstract: There is significant interest in achieving technology
innovation through new product development activities. It is
recognized, however, that traditional project management practices
focused only on performance, cost, and schedule attributes, can often
lead to risk mitigation strategies that limit new technology
innovation. In this paper, a new approach is proposed for formally
managing and quantifying technology innovation. This approach uses
a risk-based framework that simultaneously optimizes innovation
attributes along with traditional project management and system
engineering attributes. To demonstrate the efficacy of the new risk-based approach, a comprehensive product development experiment
was conducted. This experiment simultaneously managed the
innovation risks and the product delivery risks through the proposed
risk-based framework. Quantitative metrics for technology
innovation were tracked and the experimental results indicate that the
risk-based approach can simultaneously achieve both project
deliverable and innovation objectives.
Abstract: Much work has been done on predicting the fault proneness of software systems. But the severity of the faults matters more than the number of faults existing in the developed system, since the major faults matter most for a developer and need immediate attention. In this paper, we try to predict the level of impact of the existing faults in software systems. A neuro-fuzzy based predictor model is applied to NASA's public domain defect dataset, coded in the C programming language. Correlation-based Feature Selection (CFS) evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them; CFS is therefore used to select the metrics that correlate most highly with the level of severity of faults. The results are compared with the prediction results of Logistic Model Trees (LMT), earlier reported as the best technique in [17]. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). They show that the neuro-fuzzy based model provides relatively better prediction accuracy than the other models and can hence be used for modeling the level of impact of faults in function-based systems.
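The CFS merit that drives this selection has a compact form, sketched below with Pearson correlation standing in for the symmetric-uncertainty measure that standard CFS implementations (e.g., WEKA's) use; the greedy search and all data are illustrative.

```python
import numpy as np

def cfs_merit(X, y, subset):
    """CFS merit of a feature subset:
    merit = k * mean|r_cf| / sqrt(k + k*(k-1)*mean|r_ff|)."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                    for i in subset for j in subset if i < j])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def greedy_cfs(X, y):
    """Forward selection: add features while the subset merit improves."""
    chosen, remaining = [], list(range(X.shape[1]))
    while remaining:
        best = max(remaining, key=lambda j: cfs_merit(X, y, chosen + [j]))
        if chosen and cfs_merit(X, y, chosen + [best]) <= cfs_merit(X, y, chosen):
            break                      # no candidate improves the merit
        chosen.append(best)
        remaining.remove(best)
    return chosen

X = np.random.rand(100, 8)
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(float)   # severity driven by 2 metrics
print(greedy_cfs(X, y))                              # typically picks columns 0 and 3
```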
Abstract: Wireless sensor networks (WSNs) consist of a number of tiny, low-cost and low-power sensor nodes that monitor some physical phenomenon. The major limitation in these networks is the use of non-rechargeable batteries with a limited power supply. The main source of energy consumption in such networks is the communication subsystem. This paper presents an energy-efficient Cluster Cooperative Caching at Sensor (C3S) scheme based upon grid-type clustering. Sensor nodes belonging to the same cluster/grid form a cooperative cache system for each node, since the cost of communication among them is low in terms of both energy consumption and message exchanges. The proposed scheme uses cache admission control and a utility-based data replacement policy to ensure that more useful data is retained in the local cache of a node. Simulation results demonstrate that the C3S scheme performs better on various performance metrics than NICoCa, an existing cooperative caching protocol for WSNs.
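A sketch of what utility-based replacement with admission control can look like; the utility formula (popularity times hop distance over size) and the one-hop admission rule are illustrative assumptions, not the exact C3S policy.

```python
import time

class UtilityCache:
    """Evict the item with the lowest utility when the cache is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}                # key -> (hits, hops, size, birth)

    def utility(self, key):
        hits, hops, size, birth = self.items[key]
        age = time.time() - birth + 1e-9
        return (hits / age) * hops / size     # access rate x re-fetch cost / size

    def access(self, key):
        hits, hops, size, birth = self.items[key]
        self.items[key] = (hits + 1, hops, size, birth)

    def admit(self, key, hops, size):
        if hops <= 1:                  # admission control: a neighbour already
            return                     # holds it cheaply, so don't cache
        if len(self.items) >= self.capacity:
            victim = min(self.items, key=self.utility)
            del self.items[victim]
        self.items[key] = (0, hops, size, time.time())
```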
Abstract: Retinal vascularity assessment plays an important role in the diagnosis of ophthalmic pathologies. The use of digital images for this purpose makes a computerized approach possible and has motivated the development of many methods for automated vascular tree segmentation. Metrics based on contingency tables for binary classification have been widely used for evaluating the performance of these algorithms and, concretely, accuracy has mostly been used as the measure of global performance in this topic. However, this metric matches human perception very poorly and has other notable deficiencies. Here, a new similarity function for measuring the quality of retinal vessel segmentations is proposed. This similarity function is based on characterizing the vascular tree as a connected structure with a measurable area and length. Our tests indicate that this new approach behaves better than the current one. Generalizing, this concept of measuring descriptive properties may be used to design functions that measure the segmentation quality of other complex structures more successfully.
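A minimal sketch of the idea of comparing descriptive properties rather than per-pixel contingency counts: area as the pixel count and length as the skeleton pixel count of the vessel tree. The geometric-mean combination is an assumption, not the paper's exact function.

```python
from skimage.morphology import skeletonize

def vessel_similarity(seg, ref):
    """Compare a binary vessel segmentation `seg` to a reference `ref`
    (both boolean arrays) via two descriptive properties of the tree."""
    area_ratio = min(seg.sum(), ref.sum()) / max(seg.sum(), ref.sum())
    len_seg = skeletonize(seg).sum()           # total centreline length
    len_ref = skeletonize(ref).sum()
    length_ratio = min(len_seg, len_ref) / max(len_seg, len_ref)
    return (area_ratio * length_ratio) ** 0.5  # geometric mean of the two
```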
Abstract: System testing is done on the entire system against the Functional Requirement Specification and/or the System Requirement Specification. Moreover, it is an investigatory testing phase, where the focus is to take an almost destructive attitude and test not only the design, but also the behavior, and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specifications. In Motorola®, Automated Testing is one of the testing methodologies used by GSG-iSGT (Global Software Group - iDEN™ Subscriber Group-Test) to increase testing volume and productivity and to reduce test cycle-time in iDEN™ phone testing. Such testing produces more robust products before release to the market. In this paper, iHopper is proposed as a tool to perform stress tests on iDEN™ phones. We discuss the value that automation has brought to iDEN™ phone testing, such as improving software quality in the iDEN™ phone, together with some metrics. We also look into the advantages of the proposed system and discuss future work.
Abstract: Recently, data mining has been applied to scientific bibliographic databases to analyze the pathways of knowledge or the core scientific relevance of a Nobel laureate or a country. This specific case of data mining has been named citation mining, and it is the integration of citation bibliometrics and text mining. In this paper we present an improved web implementation of statistical physics algorithms to perform the text mining component of citation mining. In particular, we use an entropy-like distance between compressed texts as an indicator of the similarity between them. Finally, we have included the recently proposed h-index to characterize scientific production. We have used this web implementation to identify users, applications and the impact of the Mexican scientific institutions located in the State of Morelos.
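One standard compression-based "entropic" distance is the Normalized Compression Distance; the paper's exact variant may differ, so the sketch below is illustrative, together with the h-index definition it mentions.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: ~0 for very similar texts, ~1 for
    unrelated ones. NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    c = sorted(citations, reverse=True)
    return sum(1 for i, v in enumerate(c, start=1) if v >= i)

doc1 = b"statistical physics algorithms for the text mining of citations"
doc2 = b"citation mining integrates citation bibliometrics and text mining"
print(ncd(doc1, doc2))
print(h_index([10, 8, 5, 4, 3]))   # -> 4
```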
Abstract: In the modern telecommunications industry, demand & supply chain management (DSCM) needs reliable design and versatile tools to control the material flow. The objective of efficient DSCM is to reduce inventory, lead times and related costs in order to assure reliable and on-time deliveries from manufacturing units to customers. In this paper, a multi-rate, expert-system-based methodology is proposed for developing simulation tools that enable optimal DSCM in a multi-region, high-volume and high-complexity manufacturing environment.
Abstract: Traditional software product and process metrics are neither suitable nor sufficient for measuring the complexity of software components, which is ultimately necessary for quality and productivity improvement within organizations adopting CBSE. Researchers have proposed a wide range of complexity metrics for software systems. However, these metrics are not sufficient for components and component-based systems, being restricted to module-oriented and object-oriented systems. This study proposes to determine the complexity of JavaBean software components as a reflection of their quality, so that a component can be adapted accordingly to make it more reusable. The proposed metric involves only the design issues of the component and does not consider packaging and deployment complexity. In this way, the complexity of software components can be kept within certain limits, which in turn helps enhance quality and productivity.
Abstract: This paper presents a hand vein authentication system using fast spatial correlation of hand vein patterns. In order to evaluate the system performance, a prototype was designed, and a dataset was acquired from 50 persons of different ages (above 16) and of different genders, with 10 images per person captured at different intervals: 5 images of the left hand and 5 of the right hand. In the verification testing analysis, we used 3 images as templates and 2 images for testing; each of the 2 images is matched against the existing 3 templates. An FAR of 0.02% and an FRR of 3.00% were reported at a threshold of 80. The system efficiency at this threshold was found to be 99.95%. The system can operate at a 97% genuine acceptance rate and a 99.98% genuine reject rate at the corresponding threshold of 80. The EER was reported as 0.25% at a threshold of 77. We verified that no similarity exists between the right and left hand vein patterns of the same person over the acquired dataset sample. Finally, this dataset sample of 100 distinct hand vein patterns can be accessed by researchers and students upon request for testing other hand vein matching methods.
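For readers unfamiliar with these error rates, here is a minimal sketch of how FAR, FRR, and the EER are computed from matcher scores; the score distributions below are placeholders, not the paper's data.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """Scores are similarities: accept when score >= threshold.
    FAR = impostors wrongly accepted; FRR = genuine users wrongly rejected."""
    far = np.mean(np.asarray(impostor) >= threshold)
    frr = np.mean(np.asarray(genuine) < threshold)
    return far, frr

def eer(genuine, impostor):
    """Sweep thresholds and return the point where FAR and FRR are closest."""
    ts = np.linspace(min(impostor), max(genuine), 1000)
    rates = [far_frr(genuine, impostor, t) for t in ts]
    far, frr = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (far + frr) / 2

genuine  = np.random.normal(85, 5, 1000)   # placeholder similarity scores
impostor = np.random.normal(60, 8, 1000)
print(far_frr(genuine, impostor, threshold=80))
print(eer(genuine, impostor))
```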
Abstract: A Mobile Ad-hoc Network (MANET) is a collection of self-configuring and rapidly deployed mobile nodes (routers) without any central infrastructure. Routing is one of its key issues. Many routing protocols have been reported, but it is difficult to decide which one is best in all scenarios. In this paper, the on-demand routing protocols DSR and DYMO, based on the IEEE 802.11 DCF MAC protocol, are examined, and a summary of the characteristics of these routing protocols is presented. Their performance is analyzed and compared on the following performance metrics: throughput, packets dropped due to non-availability of routes, duplicate RREQs generated for route discovery, and normalized routing load, while varying the CBR data traffic load using the QualNet 5.0.2 network simulator.
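Two of the comparison metrics reduce to simple ratios over simulator counters; a short sketch under the usual definitions (the counts below are placeholders):

```python
def normalized_routing_load(routing_pkts, delivered_data_pkts):
    """Control packets transmitted per data packet delivered; lower is better."""
    return routing_pkts / delivered_data_pkts

def throughput_bps(bytes_delivered, duration_s):
    """Application-level throughput in bits per second."""
    return 8 * bytes_delivered / duration_s

print(normalized_routing_load(routing_pkts=4200, delivered_data_pkts=9800))
print(throughput_bps(bytes_delivered=5_000_000, duration_s=300))
```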
Abstract: Complex networks have been intensively studied across
many fields, especially in Internet technology, biological engineering,
and nonlinear science. Software is built up out of many interacting
components at various levels of granularity, such as functions, classes,
and packages, representing another important class of complex networks.
It can thus also be studied using complex network theory. Over the last decade, many papers on interdisciplinary research between software engineering and complex networks have been published. This research provides a different dimension to our understanding of software and is also very useful for the design and development of software systems. This paper explores how to use complex network theory to analyze software structure, and briefly reviews the main advances in the corresponding areas.
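As a minimal sketch of the kind of analysis the survey covers, the snippet below treats a toy dependency graph as a complex network and computes a few typical metrics; in practice the nodes would be functions, classes, or packages extracted from real source code.

```python
import networkx as nx

# Hypothetical component-dependency graph (edges point to the dependency).
G = nx.DiGraph([("app", "http"), ("app", "db"), ("http", "utils"),
                ("db", "utils"), ("app", "utils"), ("cli", "app")])

print(nx.density(G))                                   # how interconnected it is
print(sorted(G.in_degree(), key=lambda kv: -kv[1]))    # heavily reused modules
print(nx.average_clustering(G.to_undirected()))        # typical complex-net metric
```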
Abstract: The practical implementation of audio-video coupled speech recognition systems is mainly limited by the hardware complexity of integrating two radically different information capturing devices with good temporal synchronisation. In this paper, we propose a solution based on a smart CMOS image sensor in order to ease the hardware integration difficulties. Using on-chip image processing, this smart sensor can calculate the X/Y projections of the captured image in real time. This on-chip projection considerably reduces the volume of the output data. The data-volume reduction permits transmission of the condensed visual information via the same audio channel, using a stereophonic input available on most standard computation devices such as PCs, PDAs and mobile phones. A prototype called VMIKE (Visio-Microphone) has been designed and realised using a standard 0.35 µm CMOS technology. A preliminary experiment gives encouraging results. Its efficiency will be further investigated in a large variety of applications, such as biometrics, speech recognition in noisy environments, and vocal control for military use or for disabled persons.
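The data reduction at the heart of the sensor is easy to illustrate in software: the X/Y projections of an HxW frame are just its row and column sums, shrinking H*W pixels to H+W values. The frame size below is arbitrary.

```python
import numpy as np

def xy_projections(frame: np.ndarray):
    """Row and column sums of a grayscale frame: the condensed visual
    signal the smart sensor would push through the audio channel."""
    return frame.sum(axis=1), frame.sum(axis=0)   # per-row, per-column totals

frame = np.random.randint(0, 256, (120, 160), dtype=np.uint16)
proj_x, proj_y = xy_projections(frame)
print(proj_x.shape, proj_y.shape)   # (120,) and (160,) instead of 19200 pixels
```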
Abstract: This paper proposes an implementation of the directed diffusion paradigm that aids in studying the paradigm's operation, and evaluates its behavior under this implementation. Directed diffusion is evaluated with respect to loss percentage, lifetime, end-to-end delay, and throughput. From these evaluations, suggestions and modifications are proposed to improve the behavior of directed diffusion with respect to these metrics. The proposed modifications reflect the effect of local path repair by introducing a technique called Loop-free Local Path Repair (LLPR), which improves the behavior of directed diffusion, especially with respect to packet loss percentage, by about 92.69%. LLPR also improves the throughput and end-to-end delay by about 55.31% and 14.06% respectively, while the lifetime decreases by about 29.79%.
Abstract: The evaluation and measurement of human body dimensions are achieved by physical anthropometry. This research was conducted in view of the importance of anthropometric indices of the face in forensic medicine, surgery, and medical imaging. The main goal of this research is to optimize facial feature points by establishing a mathematical relationship among facial features, and to use the optimized feature points for age classification. Since the selected facial feature points are located in the areas of the mouth, nose, eyes and eyebrows on the facial images, all desired facial feature points are extracted accurately. According to the proposed method, sixteen Euclidean distances are calculated from the eighteen selected facial feature points, vertically as well as horizontally. Mathematical relationships among the horizontal and vertical distances are established. Moreover, it is also discovered that the facial feature distances follow a constant ratio during age progression: the distances between the specified feature points increase with the age of a human from his or her childhood, but the ratio of the distances does not change (d = 1.618). Finally, according to the proposed mathematical relationship, four independent feature distances, related to eight feature points, are selected from the sixteen distances and eighteen feature points respectively. These four feature distances are used for age classification using the Support Vector Machine (SVM) with the Sequential Minimal Optimization (SMO) algorithm, showing around 96% accuracy. Experimental results show that the proposed system is effective and accurate for age classification.
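A sketch of the classification stage under stated assumptions: distances between hypothetical feature-point pairs feed an SVM (scikit-learn's SVC is trained by an SMO-type solver). The point pairs, features, and labels below are placeholders, not the paper's landmarks or data.

```python
import numpy as np
from sklearn.svm import SVC

def feature_distances(points, pairs):
    """Euclidean distances between selected facial feature points.
    `points` is an (18, 2) array of landmarks; `pairs` lists index pairs."""
    return np.array([np.linalg.norm(points[i] - points[j]) for i, j in pairs])

pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]        # four independent distances (hypothetical)
X = np.random.rand(100, len(pairs))             # placeholder distance features
y = np.random.randint(0, 4, 100)                # placeholder age-group labels

clf = SVC(kernel="rbf")                         # libsvm under the hood uses SMO
clf.fit(X[:80], y[:80])
print(clf.score(X[80:], y[80:]))
```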
Abstract: Identity verification of authentic persons by their multiview faces is a real-valued problem in machine vision. Multiview faces are difficult to handle due to their non-linear representation in the feature space. This paper illustrates the usability of the generalization of LDA, in the form of the canonical covariate, for multiview face recognition. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality and orientation. The Gabor face representation captures a substantial amount of the variation in face instances that often occurs due to illumination, pose and facial expression changes. Convolution of the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. The canonical covariate is then applied to the Gabor faces to reduce the high-dimensional feature space to low-dimensional subspaces. Finally, support vector machines are trained on the canonical subspaces, which contain a reduced set of features, and perform the recognition task. The proposed system is evaluated on the UMIST face database. The experimental results demonstrate the efficiency and robustness of the proposed system, with high recognition rates.
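A minimal sketch of the feature-extraction half of this pipeline, with scikit-learn's LDA playing the role of the canonical covariate projection; the filter-bank parameters, downsampling factor, and dataset shapes are illustrative assumptions, not the paper's exact configuration.

```python
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def gabor_features(img, scales=(7, 11), orientations=4):
    """Convolve a face image with a small Gabor filter bank and coarsely
    downsample each response, yielding one long feature vector."""
    feats = []
    for k in scales:
        for i in range(orientations):
            kern = cv2.getGaborKernel((k, k), sigma=2.0,
                                      theta=i * np.pi / orientations,
                                      lambd=8.0, gamma=0.5)
            resp = cv2.filter2D(img.astype(np.float32), -1, kern)
            feats.append(resp[::8, ::8].ravel())   # coarse downsampling
    return np.concatenate(feats)

img = (np.random.rand(112, 92) * 255).astype(np.uint8)   # face-sized placeholder
print(gabor_features(img).shape)

# With X = stacked Gabor-face vectors and y = subject labels (e.g., UMIST's
# 20 subjects), LDA projects to at most n_classes - 1 dimensions and an SVM
# performs recognition on the reduced subspace:
#   lda = LinearDiscriminantAnalysis(n_components=19).fit(X, y)
#   clf = SVC(kernel="rbf").fit(lda.transform(X), y)
```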