Abstract: Various security APIs (Application Programming Interfaces) are used in a variety of application areas that require information security functions. However, these standards are not mutually compatible, and developers must choose among the APIs depending on the application environment or the programming language. To resolve this problem, we propose a draft standard for an information security component, and SSL (Secure Sockets Layer) has been implemented using the confidentiality and integrity component interfaces to verify the validity of the proposed standard. The implemented SSL uses the lower-level SSL component when establishing RMI (Remote Method Invocation) communication between components, as if the security algorithm had been added as one more layer on top of TCP/IP.
Abstract: Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced to compute all backward and forward static slices of a computer program by traversing the program representation graph once. The program representation graph used in this paper is the Program Dependence Graph (PDG). We have conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices over single-point slicing approaches when computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of execution time and the number of traversed PDG edges. The comparison study results indicate that the introduced algorithm considerably reduces the slicing time and effort required to measure module parallelism and functional cohesion.
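As a minimal sketch (not the paper's one-traversal algorithm for all slices), a single-point backward slice over a toy PDG can be computed by reverse reachability along dependence edges:

```python
from collections import deque

def backward_slice(pdg, criterion):
    """All statements on which `criterion` (transitively) depends,
    found by reverse reachability over dependence edges."""
    sliced, worklist = {criterion}, deque([criterion])
    while worklist:
        node = worklist.popleft()
        for dep in pdg.get(node, ()):   # pdg maps node -> nodes it depends on
            if dep not in sliced:
                sliced.add(dep)
                worklist.append(dep)
    return sliced

# Toy PDG for  s1: x = 1;  s2: y = 2;  s3: z = x + 1;  s4: print(z)
deps = {"s3": ["s1"], "s4": ["s3"]}
print(sorted(backward_slice(deps, "s4")))  # ['s1', 's3', 's4']
```

Running every criterion through such a routine repeats work; the paper's contribution is avoiding that repetition by computing all slices in one graph traversal.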
Abstract: We present the results of a comparative study of techniques from the literature for the relevance feedback mechanism in the case of short-term learning. Only one of the methods considered here, the K-nearest neighbors algorithm (KNN), belongs to the data mining field; the remaining methods come purely from the information retrieval field and fall under three major axes: query shifting, feature weighting, and optimization of the parameters of the similarity metric. As a contribution, in addition to the comparison, we propose a new version of the KNN algorithm, referred to as incremental KNN, which differs from the original in that the rating of the current target image is influenced not only by the seeds but also by the images already rated. The results presented here were obtained from experiments conducted on the Wang database for one iteration, using color moments on the RGB space. This compact descriptor, color moments, is adequate for the efficiency required by interactive systems. The results obtained allow us to claim that the proposed algorithm performs well; it even outperforms a wide range of techniques available in the literature.
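A hedged sketch of the KNN side of such a scheme (the paper's exact incremental weighting is not reproduced here): an unrated image is scored from its k nearest already-rated images, where the feature vectors would be e.g. 9-dimensional color moments (three moments per RGB channel), and images rated in earlier rounds are simply appended to the rated pool so they influence later scores alongside the seeds:

```python
import math

def knn_relevance(target, rated, k=3):
    """Score an unrated image from its k nearest already-rated images.
    `rated` is a list of (feature_vector, label) pairs with label +1/-1;
    the returned score in [-1, 1] averages the neighbours' labels."""
    nearest = sorted(rated, key=lambda fl: math.dist(target, fl[0]))[:k]
    return sum(label for _, label in nearest) / k
```

Feature vectors and labels here are illustrative; any fixed-length descriptor with a meaningful Euclidean distance would fit the same interface.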
Abstract: This paper presents a 5-bit 125-MS/s successive approximation register (SAR) analog-to-digital converter (ADC) implemented on an FPGA. The design and modeling of the high-performance SAR ADC are based on a monotonic capacitor-switching procedure. A Spartan 3 FPGA is chosen for implementing the SAR algorithm. The SAR logic is written in VHDL using Xilinx tools, and ModelSim is used to display the results.
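The core SAR logic, independent of the FPGA realization, is a bit-by-bit binary search of the DAC code from MSB to LSB. A behavioural sketch (a Python stand-in for the VHDL, with an idealized comparator and DAC) is:

```python
def sar_convert(vin, vref=1.0, bits=5):
    """Successive-approximation conversion: binary-search the DAC code,
    testing one bit per cycle from MSB to LSB."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)                # tentatively set this bit
        if trial * vref / (1 << bits) <= vin:    # ideal comparator decision
            code = trial                         # keep the bit
    return code

print(sar_convert(0.4))  # 0.4 V with Vref = 1 V -> code 12 (0b01100)
```

A 5-bit converter thus needs only five comparator decisions per sample, which is what makes the 125-MS/s rate a matter of clocking rather than algorithm.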
Abstract: In this paper, a solution is presented for a robotic
manipulation problem in industrial settings. The problem is sensing
objects on a conveyor belt, identifying the target, planning and
tracking an interception trajectory between end effector and the
target. Such a problem could be formulated as combining object
recognition, tracking and interception. For this purpose, we integrated
a vision system to the manipulation system and employed tracking
algorithms. The control approach is implemented on a real industrial
manipulation setting, which consists of a conveyor belt, objects
moving on it, a robotic manipulator, and a visual sensor above the
conveyor. The trajectory for robotic interception at a rendezvous point on the conveyor belt is analytically calculated. Test results show that tracking the target along this trajectory results in interception and grabbing of the target object.
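The rendezvous calculation can be sketched under simple assumptions (target moving at constant speed along the belt's x-axis at height 0, end effector limited to a constant speed; the paper's analytic formulation may differ). Because the target moves no faster than the belt speed, the reachability gap below is monotone in time, so bisection finds the earliest interception time:

```python
import math

def rendezvous_time(target_x0, belt_speed, ee_pos, ee_speed, t_max=60.0):
    """Earliest time t at which an end effector at ee_pos = (x, y), moving
    at up to ee_speed, can meet a target that starts at (target_x0, 0) and
    moves along the belt x-axis at belt_speed."""
    def gap(t):
        tx = target_x0 + belt_speed * t          # target position at time t
        return ee_speed * t - math.hypot(tx - ee_pos[0], -ee_pos[1])
    lo, hi = 0.0, t_max
    if gap(hi) < 0:
        return None  # target too fast or too far to intercept within t_max
    for _ in range(60):                           # bisection on the gap's root
        mid = (lo + hi) / 2
        if gap(mid) >= 0:
            hi = mid
        else:
            lo = mid
    return hi
```

With ee_speed > belt_speed an interception always exists; the returned time pins down the rendezvous point target_x0 + belt_speed * t on the belt.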
Abstract: This paper presents how real-time chatter prevention can be realized through feedback of the acoustic cutting signal, and the efficacy of the proposed adaptive spindle speed tuning algorithm is verified by intensive experiments. A pair of
microphones, perpendicular to each other, is used to acquire the
acoustic cutting signal resulting from milling chatter. A real-time
feedback control loop is constructed for spindle speed compensation
so that the milling process can be ensured to be within the stability
zone of stability lobe diagram. Acoustic Chatter Signal Index (ACSI)
and Spindle Speed Compensation Strategy (SSCS) are proposed to
quantify the acoustic signal and actively tune the spindle speed
respectively. By converting the acoustic feedback signal into ACSI,
an appropriate Spindle Speed Compensation Rate (SSCR) can be
determined by SSCS based on real-time chatter level or ACSI.
Accordingly, the compensation command, referred to as Added-On
Voltage (AOV), is applied to increase/decrease the spindle motor
speed. By inspecting the precision and quality of the workpiece
surface after milling, the efficacy of the real-time chatter prevention
strategy via acoustic signal feedback is further assured.
Abstract: This contribution presents an innovative platform that integrates intelligent agents into legacy e-learning environments. It introduces the design and development of a scalable and interoperable integration platform supporting various assessment agents for e-learning environments. The agents provide intelligent assessment services based on computational intelligence techniques such as Bayesian Networks and Genetic Algorithms. The utilization of new and emerging technologies such as web services allows the provided services to be integrated into any web-based legacy e-learning environment.
Abstract: Selecting, from a set of target language words, the translation that conveys the correct sense of the source word and produces more fluent target language output is one of the core problems in machine translation. In this paper we compare three methods of estimating word translation probabilities for selecting the translation word in Thai-English machine translation: (1) a method based on the frequency of the word translation, (2) a method based on the collocation of the word translation, and (3) a method based on the Expectation Maximization (EM) algorithm. For evaluation we used Thai-English parallel sentences generated by NECTEC. The method based on the EM algorithm performs best in the comparison and gives satisfactory results.
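Method (3) can be illustrated with a minimal IBM-Model-1-style EM sketch (an assumption for illustration; the paper does not specify its exact model) that re-estimates word translation probabilities from parallel sentence pairs:

```python
from collections import defaultdict

def em_translation_probs(pairs, iters=10):
    """Estimate t(target_word | source_word) by EM over
    (source_words, target_words) sentence pairs, IBM Model 1 style."""
    src_vocab = {s for src, _ in pairs for s in src}
    t = defaultdict(lambda: 1.0 / len(src_vocab))  # uniform initialisation
    for _ in range(iters):
        count, total = defaultdict(float), defaultdict(float)
        for src, tgt in pairs:             # E-step: expected alignment counts
            for tw in tgt:
                norm = sum(t[(tw, sw)] for sw in src)
                for sw in src:
                    c = t[(tw, sw)] / norm
                    count[(tw, sw)] += c
                    total[sw] += c
        for (tw, sw), c in count.items():  # M-step: re-normalise the counts
            t[(tw, sw)] = c / total[sw]
    return t

# A classic two-sentence toy corpus; EM pulls "house" towards "maison".
pairs = [(["maison"], ["house"]), (["maison", "bleue"], ["blue", "house"])]
t = em_translation_probs(pairs)
```

The word pairs here are a toy French-English example; the same routine applies unchanged to Thai-English sentence pairs.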
Abstract: Clustering techniques have received attention in many areas including engineering, medicine, biology and data mining. The purpose of clustering is to group together data points that are close to one another. The K-means algorithm is one of the most widely used clustering techniques. However, K-means has two shortcomings: it depends on the initial state and converges to local optima, and the global solution of large problems cannot be found with a reasonable amount of computational effort. Many clustering studies have been carried out to overcome the local optima problem. This paper presents an efficient hybrid evolutionary optimization algorithm that combines Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), called PSO-ACO, for optimally clustering N objects into K clusters. The new PSO-ACO algorithm is tested on several data sets, and its performance is compared with those of ACO, PSO and K-means clustering. The simulation results show that the proposed evolutionary optimization algorithm is robust and suitable for handling data clustering.
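For reference, the K-means baseline whose sensitivity to initial centroids motivates such hybrids can be sketched as Lloyd's algorithm:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain K-means (Lloyd's algorithm): alternate assignment and update
    steps; the result depends on the randomly chosen initial centroids,
    which is the weakness PSO/ACO hybrids target."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: sq_dist(p, centroids[j]))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for j, cluster in enumerate(clusters):
            if cluster:
                centroids[j] = tuple(sum(xs) / len(cluster) for xs in zip(*cluster))
    return centroids, clusters
```

For well-separated data such as {(0,0), (0,1), (10,10), (10,11)} with k=2, the iteration recovers the two obvious groups regardless of which pair seeds it; on harder data, different seeds reach different local optima.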
Abstract: An iterative algorithm is proposed and tested on Cournot game models; it is based on the convergence of sequential best responses and the use of a genetic algorithm to determine each player's best response to a given strategy profile of its opponents. An extra outer loop is used to address the problem of finite accuracy, which is inherent in genetic algorithms, since the set of feasible values in such an algorithm is finite. The algorithm is tested on five Cournot models: three with a convergent sequence of best replies, one with a divergent sequence of best replies, and one with "local NE traps" [14], where classical local search algorithms fail to identify the Nash equilibrium. After a series of simulations, we conclude that the proposed algorithm converges to the Nash equilibrium, with any level of accuracy needed, in all but the case where the sequential best replies process diverges.
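The underlying best-reply iteration can be illustrated on a linear Cournot duopoly, where the best response happens to be analytic (the paper replaces this argmax with a genetic algorithm so that no closed form is needed):

```python
def cournot_best_response_dynamics(a=100.0, b=1.0, c=10.0, iters=50):
    """Sequential best replies in a linear Cournot duopoly with inverse
    demand P = a - b*(q1 + q2) and constant marginal cost c. Each firm's
    best response to the other's quantity is (a - c - b*q_other) / (2b),
    clipped at zero."""
    br = lambda q_other: max(0.0, (a - c - b * q_other) / (2 * b))
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = br(q2)   # firm 1 best-responds to firm 2's current quantity
        q2 = br(q1)   # then firm 2 best-responds to the updated q1
    return q1, q2     # converges to the Nash equilibrium (a - c) / (3b)
```

In this linear case each reply halves the distance to equilibrium, so the sequence converges; the divergent and "local NE trap" models in the paper are precisely the cases where such plain dynamics break down.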
Abstract: The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction, as described in the following steps: (1) modification of the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculation of the gradient descent of the error with respect to the weights and gain values, and (3) determination of a new search direction using the information calculated in step (2). The performance of the proposed method is demonstrated by comparing its accuracy and computation time with those of the conjugate gradient algorithm used in the MATLAB neural network toolbox. The results show that the computational efficiency of the proposed method is better than that of the standard conjugate gradient algorithm.
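Step (2) can be illustrated for a single sigmoid neuron with a gain term (a simplified sketch; the paper's network-level derivation is not reproduced here). With squared error E = 0.5*(out - target)^2 and out = sigmoid(gain * w.x), the chain rule gives gradients with respect to both the weights and the gain:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron_gradients(x, w, gain, target):
    """Gradients of E = 0.5*(out - target)^2 for one sigmoid neuron with a
    gain term, out = sigmoid(gain * net), net = sum(w_i * x_i)."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    out = sigmoid(gain * net)
    delta = (out - target) * out * (1.0 - out)  # dE/d(gain*net)
    grad_w = [delta * gain * xi for xi in x]    # dE/dw_i = delta * gain * x_i
    grad_gain = delta * net                     # dE/dgain = delta * net
    return grad_w, grad_gain
```

These per-parameter gradients are the quantities that feed the conjugate gradient search direction in step (3).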
Abstract: An alternative approach to the use of the Discrete Fourier Transform (DFT) for Magnetic Resonance Imaging (MRI) reconstruction is the use of a parametric modeling technique. This method is suitable for problems in which the image can be modeled by explicit known source functions with a few adjustable parameters. Despite the success reported in the use of the modeling technique as an alternative MRI reconstruction technique, two important problems remain challenges to its applicability: model order estimation and model coefficient determination. In this paper, five suggested methods of evaluating the model order are assessed: the Final Prediction Error (FPE), the Akaike Information Criterion (AIC), the Residual Variance (RV), the Minimum Description Length (MDL), and the Hannan and Quinn (HNQ) criterion. These criteria were evaluated on MRI data sets based on the Transient Error Reconstruction Algorithm (TERA), and the result for each criterion was compared to the result obtained by a fixed-order technique using three measures of similarity. The results show that MDL gives the highest measure of similarity to the fixed-order technique.
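The criteria themselves are standard functions of the residual variance of each candidate model. A sketch using common textbook forms (the paper's exact variants may differ) is:

```python
import math

def order_criteria(rv, n, k):
    """Model-order criteria from the residual variance rv of a k-parameter
    model fitted to n data points; the order minimising a criterion is
    selected. These are standard textbook forms."""
    return {
        "FPE": rv * (n + k) / (n - k),                       # Final Prediction Error
        "AIC": n * math.log(rv) + 2 * k,                     # Akaike
        "MDL": n * math.log(rv) + k * math.log(n),           # Minimum Description Length
        "HNQ": n * math.log(rv) + 2 * k * math.log(math.log(n)),  # Hannan-Quinn
    }

# Pick the order minimising one criterion over candidate orders, e.g.:
# best_k = min(candidates, key=lambda k: order_criteria(rv_of(k), n, k)["MDL"])
```

Note that MDL's k*log(n) penalty grows faster than AIC's 2k, which is why MDL tends to select more parsimonious model orders. (rv_of above is a hypothetical helper naming the model-fitting step.)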
Abstract: Electromagnetic flowmeters with DC excitation are used for a wide range of fluid measurement tasks, but are rarely found in dosing applications with short measurement cycles because of the achievable accuracy. This paper identifies a number of factors that influence the accuracy of this sensor type when it is used for short-term measurements. Based on these results, a new signal-processing algorithm is described that overcomes the identified problems to some extent. The new method in principle allows electromagnetic flowmeters with DC excitation to achieve higher accuracy than traditional methods.
Abstract: The rapid expansion of the web is causing constant growth of information, leading to several problems such as the increased difficulty of extracting potentially useful knowledge. Web content mining addresses this problem by gathering explicit information from different web sites for access and knowledge discovery. Query interfaces of web databases share common building blocks. After extracting information with a parsing approach, we use a new data mining algorithm to match a large number of database schemas at a time. Using this algorithm increases the speed of information matching, and instead of simple 1:1 matching it performs complex (m:n) matching between query interfaces. In this paper we present a novel correlation mining algorithm that matches correlated attributes at lower cost. The algorithm uses the Jaccard measure to distinguish positively and negatively correlated attributes. The system then matches the user query against the query interfaces of a specific domain and chooses the interface nearest to the user query to answer it.
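The Jaccard measure used to separate correlated attributes can be sketched as follows, representing each attribute by the set of query interfaces in which it occurs (an illustrative encoding, not necessarily the paper's):

```python
def jaccard(attr_a, attr_b):
    """Jaccard measure between two attributes, each given as the set of
    query interfaces it occurs in. Values near 1 indicate frequent
    co-occurrence (positive correlation); values near 0 indicate the
    attributes rarely appear together (negative correlation)."""
    a, b = set(attr_a), set(attr_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# e.g. two attributes seen on overlapping sets of interfaces i1..i4:
print(jaccard({"i1", "i2", "i3"}, {"i2", "i3", "i4"}))  # 0.5
```

Thresholding this score splits attribute pairs into positively and negatively correlated groups, which is the input the (m:n) matching step works on.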
Abstract: Wireless sensor networks consist of small battery-powered devices with limited energy resources. Once deployed, the small sensor nodes are usually inaccessible to the user, so replacement of the energy source is not feasible. Hence, energy efficiency is one of the most important issues that must be improved to extend the life span of the network. Much research has been done to overcome this limitation, and clustering is one of the representative approaches: cluster heads gather data from their nodes and send them to the base station. In this paper, we introduce a dynamic clustering algorithm based on a genetic algorithm. The algorithm takes different parameters into consideration to increase the network lifetime. To prove the efficiency of the proposed algorithm, we simulated it and compared it with the LEACH algorithm using MATLAB.
Abstract: This paper presents a general trainable framework for fast and robust upright human face and non-human object detection and verification in static images. To enhance the performance of the detection process, the technique we develop combines a fast neural network (FNN) and a classical neural network (CNN). In the FNN, a useful correlation between the input image and the weights of the hidden neurons is exploited to sustain a high level of detection accuracy; this enables the use of the Fourier transform, which significantly speeds up detection. The CNN is responsible for verifying the face region. A bootstrap algorithm is used to collect non-human objects, adding false detections to the training process for human and non-human objects. Experimental results on test images with both simple and complex backgrounds demonstrate that the proposed method achieves a high detection rate and a low false positive rate in detecting both human faces and non-human objects.
Abstract: We depend upon explanation in order to "make sense" out of our world. And making sense is all the more important when
dealing with change. But, what happens if our explanations are
wrong? This question is examined with respect to two types of
explanatory model. Models based on labels and categories we shall refer to as "representations." More complex models involving stories, multiple algorithms, rules of thumb, questions, and ambiguity we shall refer to as "compressions." Both compressions and
representations are reductions. But representations are far more
reductive than compressions. Representations can be treated as a set
of defined meanings – coherence with regard to a representation is
the degree of fidelity between the item in question and the definition
of the representation, of the label. By contrast, compressions contain
enough degrees of freedom and ambiguity to allow us to make
internal predictions so that we may determine our potential actions in
the possibility space. Compressions are explanatory via mechanism.
Representations are explanatory via category. Managers often confuse their evocation of a representation (category inclusion) with the creation of a context of compression (description of mechanism). When this type of explanatory error occurs, more errors follow. In the drive for efficiency, such substitutions are all too often proclaimed, at the manager's peril.
Abstract: Resource discovery is one of the chief services of a grid. This article proposes a new approach to discovering resources in a grid through learning automata. The objective of the resource-discovery service is to select a resource based on the user's application and on economic criteria, that is, to choose a provider that can accomplish the user's tasks in the most economical manner. The service operates in two phases. First, we propose an application-based categorization by means of an artificial neural network: the user presents his or her application as the input vector of the network, and the output vector describes the suitability of each resource for the presented task. Second, the most economical of the options put forward in the previous stage that can fulfill the task is selected. The resource choice is carried out by the presented algorithm based on learning automata.
Abstract: In this paper, the synthesis of a phase-controlled antenna array is presented. The problem is formulated as a constrained optimization problem that imposes nulls at prescribed levels while maintaining the sidelobes at a prescribed level. The Accelerated Particle Swarm Optimization (APSO), which uses memory more efficiently than the well-known Particle Swarm Optimization (PSO), is used to estimate the phase parameters of the synthesized array. The
objective function is formed using a main objective and a set of
constraints with penalty factors that measure the violation of each
feasible solution in the search space to each constraint. In this case
the obtained feasible solution is guaranteed to satisfy all the
constraints. Simulation results show a significant performance increase and decreased randomness in the parameter search space compared to a conventional single-objective particle swarm optimization.
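The penalty-augmented objective described above can be sketched generically (names and values here are illustrative assumptions, not the paper's formulation): each constraint function returns its violation (zero when satisfied), scaled by a penalty factor and added to the main objective.

```python
def penalized_objective(main_obj, constraints, penalties):
    """Build an objective of the form main(x) + sum_i p_i * violation_i(x).
    Each constraint returns a violation >= 0 (0 when satisfied), so a
    feasible minimiser of the combined function satisfies all constraints
    when the penalties are large enough."""
    def f(x):
        return main_obj(x) + sum(p * g(x) for p, g in zip(penalties, constraints))
    return f

# Hypothetical example: minimise x^2 subject to x >= 1, penalty factor 10.
f = penalized_objective(lambda x: x * x,
                        [lambda x: max(0.0, 1.0 - x)],  # violation of x >= 1
                        [10.0])
```

Any swarm optimizer, APSO included, can then minimise f directly, which is how the constraint handling is decoupled from the search algorithm.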
Abstract: A potentially serious problem with current payment systems is that their underlying hard problems from number theory may be solved by either a quantum computer or unanticipated future advances in algorithms and hardware. A new quantum payment system is proposed in this paper. The suggested system makes use of fundamental principles of quantum mechanics to ensure unconditional security without prior arrangements between customers and vendors. More specifically, the new system uses Greenberger-Horne-Zeilinger (GHZ) states and Quantum Key Distribution to authenticate the vendors and guarantee the transaction integrity.