Abstract: The success of an electronic system in a System-on-Chip is highly dependent on the efficiency of its interconnection network, which is constructed from routers and channels (the routers move data across the channels between nodes). Since neither classical bus-based nor point-to-point architectures can provide scalable solutions and satisfy the tight power and performance requirements of future applications, the Network-on-Chip (NoC) approach has recently been proposed as a promising solution. Indeed, in contrast to the traditional solutions, the NoC approach can provide large bandwidth with moderate area overhead. The selected topology of the component interconnects plays a prime role in the performance of a NoC architecture, as do the routing and switching techniques that can be used. In this paper, we present two generic NoC architectures that can be customized to the specific communication needs of an application in order to reduce area with minimal degradation of system latency. An experimental study is performed to compare these structures with basic NoC topologies represented by the 2D mesh, Butterfly-Fat Tree (BFT) and SPIN. It is shown that the Cluster Mesh (CMesh) and MinRoot schemes achieve significant improvements in network latency and energy consumption with only negligible area overhead and complexity over existing architectures. Compared with the basic NoC topologies, CMesh and MinRoot also provide substantial savings in area, because they require fewer routers. The simulation results show that the CMesh and MinRoot networks outperform MESH, BFT and SPIN in the main performance metrics.
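As a hedged illustration of why clustering saves routers (the network and cluster sizes below are assumptions for demonstration, not the paper's results), the following sketch compares router counts for a plain 2D mesh, which dedicates one router per node, with a CMesh-style layout that shares one router among each cluster of nodes:

```python
# Illustrative sketch: router counts for a 2D mesh versus a clustered mesh.
# Cluster size and network sizes are assumptions for demonstration only.

def mesh_routers(side):
    """A 2D mesh dedicates one router per node."""
    return side * side

def cmesh_routers(side, cluster_side=2):
    """A CMesh shares one router among each cluster_side x cluster_side block."""
    assert side % cluster_side == 0
    return (side // cluster_side) ** 2

for side in (4, 8, 16):
    print(f"{side}x{side} nodes: mesh = {mesh_routers(side):3d} routers, "
          f"cmesh = {cmesh_routers(side):3d} routers")
```

Fewer routers directly reduce area; the paper's claim is that this can be achieved with negligible latency degradation.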
Abstract: In this study, a novel approach to image embedding is introduced. The proposed method consists of three main steps. First, the edges of the image are detected using Sobel mask filters. Second, the least significant bit (LSB) of each pixel is used. Finally, gray-level connectivity is applied using a fuzzy approach, and the ASCII code is used for information hiding. The bit adjacent to the LSB represents the edged image after gray-level connectivity, and the remaining six bits represent the original image with very little difference in contrast. The proposed method embeds three images in one image and includes, as a special case of data embedding, information hiding, identifying and authenticating text embedded within the digital images. Image embedding is considered a good compression method in terms of saving memory space. Moreover, information hiding within a digital image can be used for secure information transfer. The creation and extraction of three embedded images, and the hiding of text information, are discussed and illustrated in the following sections.
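As a minimal sketch of the first two steps (Sobel edge detection and LSB replacement), assuming NumPy and an illustrative edge threshold; the fuzzy gray-level connectivity step is not reproduced here:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(img, thresh=128.0):
    """Binary edge map from the gradient magnitude of the two Sobel masks."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx, gy = np.zeros(img.shape), np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * SOBEL_X)
            gy[i, j] = np.sum(win * SOBEL_Y)
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

def embed_lsb(cover, bits):
    """Replace the LSB of the first len(bits) pixels with the hidden bits."""
    flat = cover.flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(cover.shape)

# Usage: hide the ASCII bits of a short message in a synthetic cover image.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
bits = np.unpackbits(np.frombuffer(b"hi", dtype=np.uint8))
stego = embed_lsb(cover, bits)
assert bytes(np.packbits(stego.flatten()[:16] & 1)) == b"hi"
edges = sobel_edges(stego)
```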
Abstract: Visual information is very important in human perception of the surrounding world, and video is one of the most common ways to capture it. Video capability has many benefits and can be used in various applications. For the most part, video information is used to bring entertainment and help people relax; moreover, it can improve the quality of life of deaf people. Visual information is crucial for hearing-impaired people: it allows them to communicate personally using sign language, and some parts of the person being spoken to are more important than others (e.g. hands, face). Therefore, information about the visually relevant parts of the image allows us to design an objective metric for this specific case. In this paper, we present an example of an objective metric based on human visual attention and detection of the salient object in the observed scene.
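The abstract does not give the metric's formula; as a hedged sketch of the general idea, a full-reference metric can weight the per-pixel error by a saliency map so that distortions on visually attended regions (hands, face) are penalized more. The weighting constant below is an assumption:

```python
import numpy as np

def saliency_weighted_mse(ref, dist, saliency, alpha=4.0):
    """Weighted MSE: 'saliency' is a per-pixel map in [0, 1] marking the
    salient object; alpha controls how much salient-region errors dominate."""
    w = 1.0 + alpha * saliency                    # weight ranges 1 .. 1+alpha
    err = (ref.astype(float) - dist.astype(float)) ** 2
    return float(np.sum(w * err) / np.sum(w))

# Usage with synthetic frames and a box-shaped 'face/hands' saliency region.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (72, 96)).astype(float)
dist = ref + rng.normal(0, 5, ref.shape)          # uniformly noisy copy
sal = np.zeros(ref.shape); sal[20:50, 30:70] = 1.0
print(saliency_weighted_mse(ref, dist, sal))
```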
Abstract: The back-propagation (BP) algorithm calculates the weight changes of an artificial neural network, and a two-term algorithm with a dynamically optimal learning rate and a momentum factor is commonly used. Recently, the addition of an extra term, called a proportional factor (PF), to the two-term BP algorithm was proposed. The third term increases the speed of the BP algorithm. However, the PF term can also degrade the convergence of the BP algorithm, so optimization approaches for evaluating the learning parameters are required to facilitate the application of the three-term BP algorithm. This paper considers the optimization of the new back-propagation algorithm by using derivative information. A family of approaches exploiting the derivatives with respect to the learning rate, momentum factor and proportional factor is presented. These autonomously compute the derivatives in the weight space, using information gathered from the forward and backward procedures. The three-term BP algorithm and the optimization approaches are evaluated using the benchmark XOR problem.
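As a hedged sketch of the three-term idea on XOR (the exact form of the PF term and all parameter values here are assumptions; the paper's derivative-based optimization of the learning parameters is not reproduced), each weight change combines a gradient term, a momentum term, and a term proportional to a scalar measure of the output error:

```python
import numpy as np

# Assumed three-term update: dW = eta*grad + alpha*dW_prev + gamma*e(t),
# with e(t) a scalar output-error measure. Parameters are illustrative.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)      # 2-2-1 network
W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)
dW1 = np.zeros_like(W1); dW2 = np.zeros_like(W2)
eta, alpha, gamma = 0.5, 0.8, 0.05   # learning rate, momentum, PF (assumed)

for _ in range(5000):
    H = sigmoid(X @ W1 + b1)             # forward pass
    Y = sigmoid(H @ W2 + b2)
    E = T - Y
    d2 = E * Y * (1 - Y)                 # output-layer delta
    d1 = (d2 @ W2.T) * H * (1 - H)       # hidden-layer delta
    pf = gamma * np.mean(np.abs(E))      # proportional-factor term (scalar)
    dW2 = eta * (H.T @ d2) + alpha * dW2 + pf
    dW1 = eta * (X.T @ d1) + alpha * dW1 + pf
    W2 += dW2; W1 += dW1
    b2 += eta * d2.sum(axis=0); b1 += eta * d1.sum(axis=0)

print(np.round(Y.ravel(), 2))            # should approach [0, 1, 1, 0]
```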
Abstract: The decision to outsource information technology (IT) requires close attention to the evaluation of the supplier selection process, because the selection decision involves conflicting multiple criteria and is replete with complex decision-making problems. Selecting the most appropriate suppliers is considered an important strategic decision that may impact the performance of outsourcing engagements. The objective of this paper is to aid decision makers in evaluating and assessing possible IT outsourcing suppliers. An axiomatic design based fuzzy group decision-making approach is adopted to evaluate supplier alternatives. Finally, a case study is given to demonstrate the potential of the methodology.
Keywords: IT outsourcing, Supplier selection, Multi-criteria decision making, Axiomatic design, Fuzzy logic.
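The abstract does not detail the scoring; as a hedged sketch, axiomatic design typically ranks alternatives by information content I = log2(system range / common range), where the common range is the overlap between a supplier's fuzzy rating and the design requirement, and lower total I is better. The numbers below are illustrative:

```python
import math

def information_content(system_range, common_range):
    """Axiomatic design: I = log2(system range / common range).
    No overlap between rating and requirement means infinite information."""
    if common_range <= 0:
        return float("inf")
    return math.log2(system_range / common_range)

# Two hypothetical suppliers rated on one criterion (areas of fuzzy sets):
for name, sys_r, common_r in [("supplier A", 1.0, 0.4),
                              ("supplier B", 1.0, 0.7)]:
    print(name, round(information_content(sys_r, common_r), 3))
# Supplier B has lower information content, so it better satisfies the
# functional requirement on this criterion.
```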
Abstract: The efficiency of an image watermarking technique depends on the preservation of visually significant information. This is attained by embedding the watermark transparently with the maximum possible strength. The current paper presents an approach for still image digital watermarking in which the watermark embedding process employs the wavelet transform and incorporates Human Visual System (HVS) characteristics. The sensitivity of a human observer to contrast with respect to spatial frequency is described by the Contrast Sensitivity Function (CSF). The strength of the watermark within the decomposition subbands, which occupy an interval on the spatial frequencies, is adjusted according to this sensitivity. Moreover, the watermark embedding process is carried out on the subband coefficients that lie on edges, where distortions are less noticeable. The experimental evaluation of the proposed method shows very good results in terms of robustness and transparency.
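A minimal sketch of the embedding step, assuming the PyWavelets package; the per-level strengths standing in for the CSF weighting and the edge-selection quantile are illustrative assumptions, not the paper's values:

```python
import numpy as np
import pywt

def embed_watermark(img, key=0, wavelet="db2", level=3,
                    level_strength=(4.0, 2.0, 1.0), edge_quantile=0.9):
    """Illustrative CSF-guided wavelet watermarking: a pseudo-random
    sequence is added to the largest-magnitude detail coefficients
    (i.e. edges), with an assumed per-level strength standing in for
    the CSF weighting of each subband's spatial-frequency interval."""
    rng = np.random.default_rng(key)
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    for lvl, bands in enumerate(coeffs[1:]):          # skip approximation
        strength = level_strength[lvl]                # coarse -> fine
        new_bands = []
        for band in bands:                            # (cH, cV, cD)
            mask = np.abs(band) > np.quantile(np.abs(band), edge_quantile)
            wm = rng.standard_normal(band.shape)
            new_bands.append(band + strength * wm * mask)
        coeffs[lvl + 1] = tuple(new_bands)
    return pywt.waverec2(coeffs, wavelet)
```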
Abstract: The biodiversity crisis is one of the many crises that started at the turn of the millennium. Its concrete form of expression is still disputed, but there is relatively high consensus regarding the high rate of degradation and the urgent need for action. The strategy of action outlines a strong economic component, together with the recognition of market mechanisms as the most effective policies to protect biodiversity. In this context, biodiversity and ecosystem services are natural assets that play a key role in economic strategies and technological development to promote development and prosperity. Developing and strengthening policies for the transition to an economy based on the efficient use of resources is the way forward. To emphasize the co-viability specific to the connection between the economy and ecosystem services, the scientific approach addressed, on the one hand, how to implement policies for nature conservation and, on the other hand, the concepts underlying the economic expression of the value of ecosystem services in the context of current technology. Following the analysis of business opportunities associated with changes in ecosystem services, it was concluded that the development of market mechanisms for nature conservation is a trend that has become increasingly distinct in recent years. Although there are still many controversial issues that have already given rise to an obvious bias, international organizations and national governments have initiated and implemented such mechanisms, in cooperation or independently. Consequently, they have created the conditions for convergence between private interests and the social interest in nature conservation, so there are opportunities for ongoing business development that leads, among other things, to positive effects on biodiversity. Finally, it is pointed out that markets fail to quantify the value of most ecosystem services. Existing price signals reflect, at best, only a proportion of the total value corresponding to the provision of food, water or fuel.
Abstract: Quantum cryptography offers a way of key agreement which is unbreakable by any external adversary. Authentication is of crucial importance, as perfect secrecy is worthless if the identity of the addressee cannot be ensured before sending important information. Message authentication has been studied thoroughly, but no approach seems able to explicitly counter meet-in-the-middle impersonation attacks. The goal of this paper is the development of an authentication scheme that is resistant to active adversaries controlling the communication channel. The scheme is built on top of a key-establishment protocol and is unconditionally secure if built upon quantum cryptographic key exchange. In general, its security is the same as that of the underlying key-agreement protocol.
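The paper's concrete scheme is not reproduced here; as a hedged illustration of the kind of unconditionally secure primitive such schemes build on, the sketch below implements a one-time polynomial-evaluation MAC (Wegman-Carter style), whose keys would be supplied by the quantum key-agreement layer and used for a single message:

```python
import secrets

# Information-theoretically secure one-time MAC via polynomial-evaluation
# universal hashing. Keys k1, k2 must be fresh, secret, and single-use.
P = (1 << 127) - 1          # a Mersenne prime defining the field GF(P)

def poly_mac(message: bytes, k1: int, k2: int, p: int = P) -> int:
    """Tag = m_1*k1^n + m_2*k1^(n-1) + ... + m_n*k1 + k2 (mod p),
    where m_i are 64-bit message blocks."""
    tag = 0
    for i in range(0, len(message), 8):
        block = int.from_bytes(message[i:i + 8], "big")
        tag = (tag * k1 + block) % p
    return (tag * k1 + k2) % p

k1, k2 = secrets.randbelow(P), secrets.randbelow(P)
msg = b"meet at dawn"
tag = poly_mac(msg, k1, k2)
assert poly_mac(msg, k1, k2) == tag              # receiver verifies
assert poly_mac(b"meet at noon", k1, k2) != tag  # forgery fails w.h.p.
```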
Abstract: In this paper, an intelligent algorithm for optimal document archiving is presented. It is known that electronic archives are very important for information system management. Minimizing the size of the data stored in an electronic archive is a key issue in reducing the physical storage area. Here, the effect of different types of Arabic fonts on electronic archive size is discussed. Simulation results show that PDF is the best file format for storing Arabic documents in an electronic archive. Furthermore, fast information detection in a given PDF file is introduced. This approach uses fast neural networks (FNNs) implemented in the frequency domain. The operation of these networks relies on performing cross-correlation in the frequency domain rather than the spatial domain. It is proved mathematically and practically that the number of computation steps required by the presented FNNs is less than that needed by conventional neural networks (CNNs). Simulation results using MATLAB confirm the theoretical computations.
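The FNN architecture itself is not reproduced here; as a hedged sketch of the core speed-up, the code below performs template detection by cross-correlation computed in the frequency domain (O(N log N) via the FFT) instead of by a sliding spatial dot product (O(N·M)):

```python
import numpy as np

def xcorr_fft(signal, template):
    """Linear cross-correlation via the frequency domain: multiply the
    spectrum of the signal by the conjugate spectrum of the template."""
    n = len(signal) + len(template) - 1   # zero-pad to avoid wrap-around
    S = np.fft.rfft(signal, n)
    T = np.fft.rfft(template, n)
    return np.fft.irfft(S * np.conj(T), n)

# Usage: plant a template at a known offset and detect it at the peak.
rng = np.random.default_rng(0)
sig = rng.standard_normal(4096)
tpl = sig[1000:1032].copy()               # template planted at offset 1000
corr = xcorr_fft(sig, tpl)
print(int(np.argmax(corr)))               # -> 1000
```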
Abstract: Reachability graph (RG) generation suffers from the problem of exponential space and time complexity. To alleviate the more critical problem of time complexity, this paper presents a new approach to RG generation for Petri net (PN) models of parallel processes. Independent RGs for each parallel process in the PN structure are generated in parallel, and the cross-product of these RGs yields the exhaustive state space from which the RG of the given parallel system is determined. The complexity analysis of the presented algorithm shows a significant decrease in the time complexity of RG generation. The proposed technique is applicable to parallel programs having multiple threads with synchronization.
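A minimal sketch of the composition step, with two toy reachability graphs as assumed inputs (the PN-specific generation of each RG and the pruning implied by synchronization constraints are not shown):

```python
from itertools import product

# Each toy RG maps a local state to its successor states; names and the
# tiny example processes are illustrative only.
rg_a = {"a0": ["a1"], "a1": ["a0"]}          # process A: two-state loop
rg_b = {"b0": ["b1"], "b1": []}              # process B: one transition

def cross_product(rg1, rg2):
    """Exhaustive state space: from each pair of local states, either
    process may take one step while the other stays put (interleaving)."""
    combined = {}
    for s1, s2 in product(rg1, rg2):
        succ = [(n1, s2) for n1 in rg1[s1]]    # A moves, B stays
        succ += [(s1, n2) for n2 in rg2[s2]]   # B moves, A stays
        combined[(s1, s2)] = succ
    return combined

for state, succ in cross_product(rg_a, rg_b).items():
    print(state, "->", succ)
```

The RG of the overall system is then extracted from this exhaustive space by discarding the interleavings forbidden by the synchronization structure.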
Abstract: Fingerprint-based identification is one of the best-known biometric systems in the area of pattern recognition and has long been studied for its important role in forensic science, where it can help the government criminal justice community. In this paper, we propose a framework for identifying individuals by means of fingerprints. Unlike most conventional fingerprint identification frameworks, the extracted geometrical element features (GEFs) go through a discretization process. The intention of discretization in this study is to obtain individually unique features that reflect the variance between individuals, in order to discriminate one person from another. Previously, discretization has been shown to be particularly effective for identification, with an accuracy of 99.9% on English handwriting and 98% on the discrimination of twins' handwriting. Due to its high discriminative power, this method is adopted into this framework as an independent method to assess the accuracy of fingerprint identification. The experimental results show that the identification accuracy of the proposed system using discretization is 100% for FVC2000, 93% for FVC2002 and 89.7% for FVC2004, which is much better than the conventional, existing fingerprint identification system (72% for FVC2000, 26% for FVC2002 and 32.8% for FVC2004). The results indicate that the discretization approach boosts classification effectively, and may therefore prove suitable for other biometric features besides handwriting and fingerprints.
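The paper's exact discretization of the GEFs is not reproduced here; as a generic stand-in, the hedged sketch below shows the basic idea of mapping continuous feature vectors to discrete codes by equal-width binning, with illustrative data and bin counts:

```python
import numpy as np

def discretize(features, n_bins=8):
    """Equal-width binning of continuous feature vectors into integer codes
    in [0, n_bins-1]; each column (feature) is binned over its own range."""
    lo, hi = features.min(axis=0), features.max(axis=0)
    width = (hi - lo) / n_bins
    safe_width = np.where(width == 0, 1.0, width)    # guard constant columns
    codes = ((features - lo) / safe_width).astype(int)
    return np.clip(codes, 0, n_bins - 1)

rng = np.random.default_rng(0)
gefs = rng.normal(size=(10, 5))     # 10 fingerprints x 5 illustrative GEFs
print(discretize(gefs)[:3])
```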
Abstract: High-level synthesis (HLS) is a process which generates a register-transfer-level design for digital systems from a behavioral description. There are many HLS algorithms and commercial tools. However, most of these algorithms consider a behavioral description of the system in which a single token is presented to the system. This approach does not exploit extra hardware efficiently, especially in the design of digital filters, where common operations may exist between successive tokens. In this paper, we modify the behavioral description to process multiple tokens in parallel. Unlike full parallel processing, however, this approach does not require full hardware replication; it exploits the presence of common operations between successive tokens. The performance of the proposed approach is better than sequential processing and approaches that of full parallel processing as the hardware resources are increased.
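A hedged illustration of a common operation between successive tokens (the moving-sum filter below is an assumed example, not from the paper): two consecutive outputs share a partial sum, so one adder can be shared instead of replicating the full datapath per token:

```python
# Two consecutive outputs of a length-3 moving-sum filter:
#   y[n]   = x[n]   + x[n-1] + x[n-2]
#   y[n+1] = x[n+1] + x[n]   + x[n-1]
# Both contain the partial sum x[n] + x[n-1], a common operation.

def two_tokens_naive(x, n):
    y0 = x[n] + x[n - 1] + x[n - 2]      # 2 adds
    y1 = x[n + 1] + x[n] + x[n - 1]      # 2 adds -> 4 adders if replicated
    return y0, y1

def two_tokens_shared(x, n):
    s = x[n] + x[n - 1]                  # shared sub-expression: 1 add
    return s + x[n - 2], x[n + 1] + s    # 2 more adds -> 3 adders total

x = [3, 1, 4, 1, 5, 9]
assert two_tokens_naive(x, 3) == two_tokens_shared(x, 3)
```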
Abstract: The objective of this paper is to investigate a new approach, based on the idea of pictograms, for representing food portion sizes. This approach adopts the model of the United States Pharmacopeia Drug Information (USP-DI). The representation of each food portion size is composed of three parts: the frame, the connotation of dietary portion sizes, and the layout. To investigate users' comprehension of this approach, two experiments were conducted involving 122 Taiwanese people, 60 male and 62 female, aged between 16 and 64 (divided into age groups of 16-30, 31-45 and 46-64). In Experiment 1, the mean correct rate for the understanding level of food items is 48.54% (S.D. = 95.08) and the mean response time 2.89 s (S.D. = 2.14). The difference in the correct rates for different age groups is significant (P*=0.00
Abstract: This paper proposes a novel hybrid algorithm for feature selection based on a binary ant colony and an SVM. The final subset selection is attained through the elimination of the features that produce noise or are strictly correlated with other already selected features. Our algorithm can improve classification accuracy with a small and appropriate feature subset. The proposed algorithm is easy to implement, and because it uses a simple filter, its computational complexity is very low. The performance of the proposed algorithm is evaluated on a real rotary cement kiln dataset. The results show that our algorithm outperforms existing algorithms.
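A compact sketch of the wrapper loop under stated assumptions: synthetic data stands in for the cement kiln dataset, the pheromone parameters, ant count, and simple best-subset reinforcement are illustrative, and the paper's correlation filter is omitted:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

n_feat, n_ants, n_iter, rho = X.shape[1], 10, 15, 0.2
tau = np.full(n_feat, 0.5)                       # pheromone per feature

def fitness(mask):
    """SVM cross-validation accuracy of the candidate feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

best_mask, best_fit = None, -1.0
for _ in range(n_iter):
    masks = rng.random((n_ants, n_feat)) < tau   # each ant samples a subset
    fits = np.array([fitness(m) for m in masks])
    k = fits.argmax()
    if fits[k] > best_fit:
        best_mask, best_fit = masks[k], fits[k]
    tau = (1 - rho) * tau                        # pheromone evaporation
    tau += rho * best_mask * best_fit            # reinforce best subset
    tau = np.clip(tau, 0.05, 0.95)

print(best_mask.nonzero()[0], round(best_fit, 3))
```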
Abstract: We propose a new approach for obtaining approximate solutions of Hamilton-Jacobi (HJ) equations. The approximation process consists of two steps. The first step is to transform the HJ equations into virtual-time based HJ equations (VT-HJ) by introducing a new idea of 'virtual time'. The second step is to construct the approximate solutions of the HJ equations through a computationally iterative procedure based on the VT-HJ equations. It should be noted that the approximate feedback solutions evolve by themselves as the virtual time goes by. Finally, we demonstrate the effectiveness of our approximation approach by means of simulations with linear and nonlinear control problems.
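The abstract does not state the equations; as a hedged reading of the transformation, the stationary HJ equation can be relaxed into an evolution in a virtual time whose steady state recovers the original solution:

```latex
% Assumed form of the virtual-time relaxation (not stated in the abstract):
\[
  H\bigl(x,\nabla_x V(x)\bigr) = 0
  \quad\longrightarrow\quad
  \frac{\partial V}{\partial \tau}(x,\tau)
    + H\bigl(x,\nabla_x V(x,\tau)\bigr) = 0,
  \qquad V(x,0) = V_0(x).
\]
% Iterating forward in the virtual time \tau, the approximate feedback
% solution V(\cdot,\tau) evolves by itself; at a steady state
% \partial V/\partial\tau = 0, so V solves the original HJ equation.
```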
Abstract: Chikungunya virus (CHICKV) is an arbovirus belonging to the family Togaviridae and is transmitted to humans through the bite of mosquitoes (Aedes aegypti and Aedes albopictus). A large outbreak of chikungunya was reported in India between 2006 and 2007, along with several other countries in South-East Asia and, for the first time, in Europe. It was the first time that a CHICKV outbreak had been reported with mortality, from Reunion Island, together with increased mortality from Asian countries. CHICKV affects all age groups, and currently there are no specific drugs or vaccines to cure the disease. The need for antiviral agents for the treatment of CHICKV infection and the success of virtual screening against many therapeutically valuable targets led us to carry out structure-based drug design against the Chikungunya nsP2 protease (PDB: 3TRK). High-throughput virtual screening of the publicly available databases ZINC12 and BindingDB has been carried out using the OpenEye tools and Schrodinger LLC software packages. The OpenEye FILTER program was used to filter the databases, and the filtered outputs were docked using the HTVS protocol implemented in the GLIDE package of Schrodinger LLC. The top hits were further used for enriching similar molecules from the databases through vROCS, a shape-based screening protocol implemented in OpenEye. The adopted approach provided different scaffolds as hits against the CHICKV protease. Three scaffolds, indole, pyrazole and sulphone derivatives, were selected based on the docking score and synthetic feasibility. Derivatives of pyrazole were synthesized and submitted for antiviral screening against CHICKV.
Abstract: The fault-proneness of a software module is the probability that the module contains faults. A correlation exists between the fault-proneness of the software and the measurable attributes of the code (i.e., the static metrics) and of the testing (i.e., the dynamic metrics). Early detection of fault-prone software components enables verification experts to concentrate their time and resources on the problem areas of the software system under development. This paper introduces genetic algorithm based software fault prediction models with object-oriented metrics. The contribution of this paper is that it uses metric values of the JEdit open-source software to generate rules for classifying software modules into the categories of faulty and non-faulty modules, and thereafter an empirical validation is performed. The results show that the genetic algorithm approach can be used for finding fault-proneness in object-oriented software components.
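The paper's rule encoding is not reproduced here; as a hedged sketch of GA-generated classification rules of the form "faulty if any metric exceeds its threshold", the code below evolves threshold vectors over synthetic metric data (the metric names, rule shape, and GA settings are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (300, 3))                   # e.g. WMC, CBO, RFC
y = ((X[:, 0] > 6) | (X[:, 2] > 8)).astype(int)    # synthetic ground truth

def predict(thresholds, X):
    """Rule: module is faulty if any metric exceeds its threshold."""
    return (X > thresholds).any(axis=1).astype(int)

def fitness(thresholds):
    return (predict(thresholds, X) == y).mean()    # classification accuracy

pop = rng.uniform(0, 10, (30, 3))                  # population of rules
for _ in range(50):
    fits = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fits)[-10:]]          # elitist selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 3)
        child = np.concatenate([a[:cut], b[cut:]]) # one-point crossover
        child += rng.normal(0, 0.3, 3)             # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(np.round(best, 2), round(fitness(best), 3))
```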
Abstract: In recent years, researchers have developed various tools and methodologies for effective clinical decision making. Among those decisions, chest pain diseases have been one of the most important diagnostic issues, especially in an emergency department. To improve the diagnostic ability of physicians, many researchers have developed diagnostic intelligence using machine learning and data mining. However, most of the conventional methodologies have generally been based on a single classifier for disease classification and prediction, which shows moderate performance. This study utilizes an ensemble strategy to combine multiple different classifiers to help physicians diagnose chest pain diseases more accurately than before. Specifically, the ensemble strategy is applied by integrating decision trees, neural networks, and support vector machines. The ensemble models are applied to real-world emergency data. This study shows that the performance of the ensemble models is superior to that of each single classifier.
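A minimal sketch of the ensemble idea using scikit-learn (decision tree + neural network + SVM). Synthetic data stands in for the emergency-department records, which are not public, and the hyperparameters are defaults rather than the paper's:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

members = [("dt", DecisionTreeClassifier(random_state=0)),
           ("nn", MLPClassifier(max_iter=1000, random_state=0)),
           ("svm", SVC(probability=True, random_state=0))]

# Baseline: each single classifier on its own.
for name, clf in members:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))

# Ensemble: soft voting averages the members' class probabilities.
ensemble = VotingClassifier(members, voting="soft")
print("ensemble", cross_val_score(ensemble, X, y, cv=5).mean().round(3))
```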
Abstract: This paper investigates the problem of tracking spatiotemporal changes in a satellite image through the use of Knowledge Discovery in Databases (KDD). The purpose of this study is to help a given user effectively discover interesting knowledge and then build prediction and decision models. Unfortunately, the KDD process for spatiotemporal data is always marked by several types of imperfection. In our paper, we take these imperfections into consideration in order to provide more accurate decisions. To achieve this objective, different KDD methods are used to discover knowledge in satellite image databases. Each method presents a different point of view on the spatiotemporal evolution of a query model (which represents an object extracted from a satellite image). In order to combine these methods, we use evidence fusion theory, which considerably improves the spatiotemporal knowledge discovery process and increases our belief in the spatiotemporal model change. Experimental results on satellite images representing the region of Auckland, New Zealand show an improvement in overall change detection compared to classical methods.
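As a hedged sketch of the fusion step, the code below implements Dempster's rule of combination, the standard operation of evidence (Dempster-Shafer) theory; the frame of discernment and mass values are illustrative, standing in for the change hypotheses produced by the different KDD methods:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal elements
    are frozensets of hypotheses; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    k = 1.0 - conflict                      # normalization constant
    return {h: w / k for h, w in combined.items()}

CHANGE, STABLE = frozenset({"change"}), frozenset({"stable"})
BOTH = CHANGE | STABLE                      # ignorance: either hypothesis
m_method1 = {CHANGE: 0.6, BOTH: 0.4}        # evidence from KDD method 1
m_method2 = {CHANGE: 0.5, STABLE: 0.2, BOTH: 0.3}
print(dempster_combine(m_method1, m_method2))
# -> belief in 'change' rises to ~0.77 once both sources agree
```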
Abstract: Industrial robots play a vital role in automation; however, little effort has been devoted to the application of robots in machining work such as grinding, cutting, milling, drilling and polishing. Parallel robot manipulators offer high stiffness, rigidity and accuracy, which cannot be provided by conventional serial robot manipulators. The aim of this paper is to perform the modeling and workspace analysis of a 3-DOF parallel manipulator (3-DOF PM). The 3-DOF PM was modeled and simulated using ADAMS. The concept involved is based on the transformation of motion from a screw joint to a spherical joint through a connecting link. This work models the parallel manipulator (PM) using screw joints for very accurate positioning. A workspace analysis has been done to determine the work volume of the 3-DOF PM. The positions of the spherical joints connected to the moving platform and the circumferential points of the moving platform were considered for finding the workspace. After the simulation, the positions of the joints of the moving platform were recorded with respect to simulation time, and these points were given as input to MATLAB to obtain the work envelope. AUTOCAD was then used to determine the work volume. The obtained values were compared with an analytical approach using the Pappus-Guldinus theorem. The analysis considered the parameters link length and radius of the moving platform. From the results, it is found that the work volume is directly proportional to the radius of the moving platform at constant link length, and directly proportional to the link length at constant radius of the moving platform.
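A hedged check of the Pappus-Guldinus step: the theorem gives the volume of a solid of revolution as V = 2*pi*r_c*A, where A is the area of the revolved planar section and r_c is the distance of its centroid from the axis. The rectangular section below stands in for a slice of the work envelope and is illustrative, not the 3-DOF PM's actual cross-section:

```python
import numpy as np

def pappus_volume(area, centroid_radius):
    """Pappus-Guldinus: volume swept by revolving a planar area about an
    external axis equals 2*pi * (centroid distance) * (section area)."""
    return 2 * np.pi * centroid_radius * area

# Rectangle with radial extent [2, 3] and height 1: A = 1, centroid r = 2.5.
v_pappus = pappus_volume(area=1.0, centroid_radius=2.5)

# Numerical cross-check by integrating cylindrical shells: V = int 2*pi*r*h dr.
r = np.linspace(2, 3, 10_001)
mid = (r[:-1] + r[1:]) / 2
v_shells = np.sum(2 * np.pi * mid * 1.0) * (r[1] - r[0])
print(v_pappus, v_shells)    # both ~15.708 (= 5*pi)
```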