Abstract: Wireless mesh networks (WMNs) are an emerging technology in wireless networking, as they can provide large-scale, high-speed Internet access. Owing to their wireless multi-hop nature, WMNs are prone to many attacks, such as denial-of-service (DoS) attacks. We consider a special case of DoS attack, the selective forwarding attack (a.k.a. gray hole attack), in which a misbehaving mesh router selectively drops the packets it receives from its predecessor mesh router. It is very hard to determine whether packet loss is due to medium-access collision, bad channel quality, or a selective forwarding attack. In this paper, we present a review of detection algorithms for the selective forwarding attack and discuss their advantages and disadvantages. Finally, we conclude the paper with open research issues and challenges.
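The detection problem described above can be sketched with a toy watchdog-style monitor: an upstream router counts how many packets its neighbor actually forwards and flags the neighbor only when the drop ratio exceeds what normal collisions and channel loss would explain. The class name, counters, and thresholds below are illustrative assumptions, not any of the surveyed algorithms.

```python
class ForwardingMonitor:
    """Toy watchdog: tracks packets a neighbor router received vs. forwarded."""

    def __init__(self, normal_loss=0.10, drop_threshold=0.30):
        # normal_loss models loss from collisions / bad channel quality;
        # only drop rates well above it are attributed to an attack.
        self.normal_loss = normal_loss
        self.drop_threshold = drop_threshold
        self.received = 0
        self.forwarded = 0

    def record(self, was_forwarded):
        self.received += 1
        if was_forwarded:
            self.forwarded += 1

    def is_suspicious(self):
        if self.received == 0:
            return False
        drop_ratio = 1 - self.forwarded / self.received
        return drop_ratio > self.normal_loss + self.drop_threshold

monitor = ForwardingMonitor()
for i in range(100):
    monitor.record(was_forwarded=(i % 2 == 0))  # neighbor drops every other packet
print(monitor.is_suspicious())  # True: a 50% drop rate exceeds the tolerated loss
```

The hard part the abstract highlights is precisely choosing `normal_loss`: set it too low and benign links are accused, too high and a gray hole hides inside normal loss.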
Abstract: Text categorization - the assignment of natural language documents to one or more predefined categories based on their semantic content - is an important component in many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and architecture. This paper discusses the use of decision tree classifiers to initialize multilayer neural networks for improving text categorization accuracy. An adaptation of the algorithm is proposed in which a decision tree, from the root node down to a final leaf, is used to initialize the multilayer neural network. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with multilayer neural networks initialized by the traditional random method and with decision tree classifiers.
Abstract: Visual secret sharing (VSS) was proposed by Naor and Shamir in 1995. Visual secret sharing schemes encode a secret image into two or more share images, and a single share image reveals no information about the secret image. When the shares are superimposed, the secret can be restored by human vision. Traditional VSS has problems such as pixel expansion and high computational cost, and it can encode only one secret image. Schemes for encrypting multiple secret images into two shares by random grids were proposed by Chen et al. in 2008. However, the restored secret images suffer from considerable distortion, so those schemes are limited in decoding. In other words, if there is too much distortion, not much information can be encrypted; conversely, if the distortion can be made very small, more secret images can be encrypted. In this paper, four new algorithms based on the scheme of Chang et al. proposed in 2010 are presented. The first algorithm reduces the distortion to a very small amount. The second distributes the distortion across the two restored secret images. The third achieves no distortion for special secret images. The fourth encrypts three secret images, which not only retains the advantages of VSS but also improves on the decoding problems.
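For readers unfamiliar with random-grid VSS, the classic single-image (2, 2) scheme that this line of work builds on can be sketched in a few lines (this is the textbook construction, not any of the four new algorithms). One share is pure random bits; the second equals or complements it depending on the secret pixel, so stacking (a Boolean OR, since black wins on transparencies) makes black pixels fully black while white pixels stay black only half the time - the contrast loss ("distortion") that the paper's algorithms aim to reduce.

```python
import random

def encrypt_random_grid(secret):
    """(2,2) random-grid VSS: secret is a 2-D list of 0 (white) / 1 (black)."""
    share1 = [[random.randint(0, 1) for _ in row] for row in secret]
    share2 = [[s1 if p == 0 else 1 - s1          # copy for white, flip for black
               for p, s1 in zip(row, r1)]
              for row, r1 in zip(secret, share1)]
    return share1, share2

def superimpose(share1, share2):
    # Stacking transparencies: a pixel is black if it is black on either share.
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(share1, share2)]

secret = [[1, 0], [0, 1]]
s1, s2 = encrypt_random_grid(secret)
stacked = superimpose(s1, s2)
# Every black secret pixel is black in the stack; white pixels are black only
# about half the time, which is the distortion discussed in the abstract.
assert all(stacked[i][j] == 1
           for i in range(2) for j in range(2) if secret[i][j] == 1)
```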
Abstract: An automatic human tracking system using mobile
agent technology is realized, in which a mobile agent moves in
accordance with the migration of a target person. In this paper, we
propose a method for determining the neighbor node in consideration
of the imaging range of the cameras.
Abstract: Demand for Web services is growing with the increasing number of Web users. Web services are delivered by Web applications, whose size is affected by their users' requirements and interests. Differences in requirements and interests lead to growth in Web application size. An efficient way to save storage space for more data and information is to apply algorithms that compress the contents of Web application documents. This paper introduces an algorithm to reduce Web application size based on reducing the contents of HTML files. It removes unimportant content regardless of the HTML file size, without discarding any character that is needed in the HTML building process.
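A minimal sketch of this kind of HTML reduction is shown below - it strips comments and collapses redundant whitespace, two classes of content that do not affect rendering. The exact reduction rules of the paper's algorithm are not reproduced; the regular expressions here are illustrative assumptions.

```python
import re

def shrink_html(html):
    """Remove content that does not affect rendering: comments and
    redundant whitespace (a simplified illustration)."""
    html = re.sub(r"<!--.*?-->", "", html, flags=re.DOTALL)  # strip comments
    html = re.sub(r">\s+<", "><", html)    # whitespace between tags
    html = re.sub(r"[ \t]+", " ", html)    # runs of spaces/tabs inside text
    return html.strip()

page = ("<html>  <!-- draft -->\n"
        "  <body>\n    <p>Hello   world</p>\n  </body>\n</html>")
small = shrink_html(page)
print(small)                    # <html><body><p>Hello world</p></body></html>
print(len(small) < len(page))   # True: the reduced file is strictly smaller
```

Note that a production reducer must exempt whitespace-sensitive elements such as `<pre>` and `<textarea>`, which this sketch ignores.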
Abstract: We describe a novel method for removing noise (in the wavelet domain) of unknown variance from microarrays. The method is based on the following procedure: we apply 1) the bidimensional discrete wavelet transform (DWT-2D) to the noisy microarray, 2) scaling and rounding to the coefficients of the highest subbands (to obtain integer and positive coefficients), 3) bit-slicing to the new highest subbands (to obtain bit-planes), 4) the Systolic Boolean Orthonormalizer Network (SBON) to the input bit-plane set, obtaining two orthonormal output bit-plane sets (in a Boolean sense), and projecting one set onto the other by means of an AND operation; we then apply 5) re-assembling and 6) rescaling. Finally, 7) we apply the inverse DWT-2D and reconstruct a microarray from the modified wavelet coefficients. The denoising results compare favorably to most methods currently in use.
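The outer steps of such a pipeline - forward DWT-2D, modify the detail subbands, inverse DWT-2D - can be sketched with a one-level Haar transform and simple hard thresholding. This is a generic wavelet-shrinkage sketch only; the paper's SBON bit-plane projection (steps 2-5) is not reproduced here.

```python
def haar2d_denoise(img, threshold):
    """One-level 2-D Haar transform on 2x2 blocks, hard-threshold the detail
    coefficients (LH, HL, HH), then invert. img: 2-D list with even dims."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll = (a + b + c + d) / 2          # approximation coefficient
            lh = (a - b + c - d) / 2          # detail coefficients
            hl = (a + b - c - d) / 2
            hh = (a - b - c + d) / 2
            # Hard thresholding: small detail coefficients are treated as noise.
            lh, hl, hh = (x if abs(x) > threshold else 0 for x in (lh, hl, hh))
            out[i][j]         = (ll + lh + hl + hh) / 2   # inverse Haar step
            out[i][j + 1]     = (ll - lh + hl - hh) / 2
            out[i + 1][j]     = (ll + lh - hl - hh) / 2
            out[i + 1][j + 1] = (ll - lh - hl + hh) / 2
    return out

flat = [[10.0, 10.0], [10.0, 10.1]]          # nearly constant patch + noise
print(haar2d_denoise(flat, threshold=0.5))   # details zeroed -> smoothed patch
```

With `threshold=0` the transform round-trips exactly, which is the "truly lossless" property real integer-to-integer lifting schemes also guarantee.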
Abstract: Like other external sorting algorithms, the presented
algorithm is a two-step algorithm comprising internal and external
steps. The first part of the algorithm is like that of other similar
algorithms, but the second part includes a new, easily implemented
method which markedly reduces the vast number of input-output
operations. Since decreasing processor operating time does not have
any effect on the main algorithm's speed, any improvement should be
made by decreasing the number of input-output operations. This paper
proposes a simple algorithm for choosing the correct record location
in the final list. This decreases the time complexity and makes the
algorithm faster.
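The two-step structure referred to above can be sketched as a standard external merge sort: an internal step that sorts fixed-size runs and spills them to disk, and an external step that k-way merges the run files. This is the generic baseline, not the paper's new record-location method; `run_size` and the file format are illustrative assumptions.

```python
import heapq
import os
import tempfile

def external_sort(numbers, run_size=4):
    """Two-step external sort: (1) internal step - sort fixed-size runs and
    spill each to a temp file; (2) external step - k-way merge of the runs.
    heapq.merge streams the runs, so only one record per run is in memory."""
    run_files = []
    for start in range(0, len(numbers), run_size):
        run = sorted(numbers[start:start + run_size])   # internal step
        f = tempfile.NamedTemporaryFile("w+", delete=False)
        f.write("\n".join(map(str, run)))
        f.seek(0)
        run_files.append(f)
    runs = [(int(line) for line in f) for f in run_files]
    merged = list(heapq.merge(*runs))                   # external step
    for f in run_files:
        f.close()
        os.unlink(f.name)
    return merged

print(external_sort([9, 4, 7, 1, 8, 2, 6, 3, 5]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The I/O cost the abstract targets lives in the merge step: every reduction in passes over the run files saves a full read and write of the data.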
Abstract: This research uses computational linguistics, an area of study that employs a computer to process natural language, and aims at discerning the patterns that exist in declarative sentences used in technical texts. The approach is mathematical, and the focus is on instructional texts found on web pages. The technique developed by the author, named the MAYA Semantic Technique, is used here and organized into four stages. In the first stage, the parts of speech in each sentence are identified. In the second stage, the subject of the sentence is determined. In the third stage, MAYA performs a frequency analysis on the remaining words to determine the verb and its object. In the fourth stage, MAYA performs statistical analysis to determine the content of the web page. The advantage of the MAYA Semantic Technique lies in its use of mathematical principles to represent grammatical operations, which assists processing and accuracy when performed on unambiguous text. The MAYA Semantic Technique is part of a proposed architecture for an entire web-based intelligent tutoring system. On a sample set of sentences, partial semantics derived using the MAYA Semantic Technique were approximately 80% accurate. The system currently processes technical text in one domain, namely Cµ programming. In this domain all the keywords and programming concepts are known and understood.
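The frequency analysis underlying stages three and four can be sketched with a simple word-count profile over a page's sentences. The tokenizer and stop-word list below are illustrative assumptions, not MAYA's actual grammar machinery.

```python
from collections import Counter

# Words that carry little content for frequency analysis (illustrative list).
STOP_WORDS = {"the", "a", "an", "to", "is", "of", "and", "in", "before"}

def frequency_profile(sentences):
    """Count content words across the sentences of a page."""
    words = [w.lower().strip(".,;")
             for s in sentences for w in s.split()]
    return Counter(w for w in words if w and w not in STOP_WORDS)

page = ["Declare the variable before use.",
        "Assign a value to the variable.",
        "Print the variable to the screen."]
print(frequency_profile(page).most_common(2))  # [('variable', 3), ...]
```

A dominant content word ('variable' here) is the kind of signal a statistical stage can use to infer what a page is about.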
Abstract: Nonlinear system identification is becoming an important tool which can be used to improve control performance. This paper describes the application of an adaptive neuro-fuzzy inference system (ANFIS) model for controlling a car. The vehicle must follow a predefined path by supervised learning. The backpropagation gradient descent method was used to train the ANFIS system. The performance of the ANFIS model was evaluated in terms of training performance and classification accuracy, and the results confirmed that the proposed ANFIS model has potential for controlling the nonlinear system.
Abstract: Group key management is an important functional
building block for any secure multicast architecture and has
accordingly been studied extensively in the literature.
In this paper we present the relevant group key management
protocols and then compare them against some pertinent
performance criteria.
Abstract: There exists an injective, information-preserving function
that maps a semantic network (i.e., a directed labeled network)
to a directed network (i.e., a directed unlabeled network). The edge
label in the semantic network is represented as a topological feature
of the directed network. Also, there exists an injective function that
maps a directed network to an undirected network (i.e., an undirected
unlabeled network). The edge directionality in the directed network
is represented as a topological feature of the undirected network.
Through function composition, there exists an injective function that
maps a semantic network to an undirected network. Thus, aside from
space constraints, the semantic network construct does not have any
modeling functionality that is not possible with either a directed
or undirected network representation. Two proofs of this idea will
be presented. The first is a proof of the aforementioned function
composition concept. The second is a simpler proof involving an
undirected binary encoding of a semantic network.
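One toy construction in the spirit of the first proof can make the idea concrete: each labeled edge (u, label, v) is replaced by an unlabeled gadget u → m → v, where the intermediate node m carries a tail path whose length encodes the label's index. Decoding the tail lengths recovers the original semantic network, demonstrating injectivity. This particular gadget is an illustrative assumption, not the paper's exact function.

```python
def encode(semantic_edges, labels):
    """Map labeled edges to an unlabeled directed edge set."""
    index = {l: i + 1 for i, l in enumerate(labels)}
    edges, fresh = set(), 0
    for u, label, v in semantic_edges:
        m = ("m", fresh); fresh += 1          # intermediate node for this edge
        edges |= {(u, m), (m, v)}
        prev = m
        for _ in range(index[label]):         # tail of length index(label)
            t = ("t", fresh); fresh += 1
            edges.add((prev, t)); prev = t
    return edges

def decode(edges, labels):
    """Recover the semantic network by measuring each gadget's tail length."""
    succ = {}
    for a, b in edges:
        succ.setdefault(a, []).append(b)
    recovered = set()
    for a, b in edges:
        if isinstance(b, tuple) and b[0] == "m":
            m = b
            v = next(x for x in succ[m]
                     if not (isinstance(x, tuple) and x[0] == "t"))
            t = next(x for x in succ[m]
                     if isinstance(x, tuple) and x[0] == "t")
            length = 1
            while t in succ:                  # walk the tail to its end
                t = succ[t][0]; length += 1
            recovered.add((a, labels[length - 1], v))
    return recovered

labels = ["knows", "works_with"]
net = {("alice", "knows", "bob"), ("bob", "works_with", "carol")}
assert decode(encode(net, labels), labels) == net   # information preserved
```

The space cost of the tails is exactly the "aside from space constraints" caveat in the abstract: labels are free in a semantic network but cost extra nodes and edges when pushed into topology.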
Abstract: Existing image-based virtual reality applications
allow users to view an image-based 3D virtual environment in a more
interactive manner. Users can "walk through"; look left, right, up
and down; and even zoom into objects in these virtual worlds of
images. However, what the user sees during a "zoom in" is just a
close-up view of the same image, which was taken from a distance.
Thus, this does not give the user an accurate view of the object from
the actual distance. In this paper, a simple technique for zooming in
on an object in a virtual scene is presented. The technique is based
on the 'hotspot' concept in existing applications. Instead of
navigating between two different locations, the hotspots are used to
focus on an object in the scene. For each object, several hotspots
are created, and a different picture is taken for each hotspot. Each
consecutive hotspot takes the user closer to the object. This
provides the user with a correct view of the object based on his
proximity to the object. Implementation issues and the relevance of
this technique in potential application areas are highlighted.
Abstract: The set covering problem is a classical problem in
computer science and complexity theory. It has many applications,
such as the airline crew scheduling problem, the facility location
problem, vehicle routing, and the assignment problem. In this paper,
three different techniques are applied to solve the set covering
problem. First, a mathematical model of the set covering problem is
introduced and solved using the optimization solver LINGO. Second,
the Genetic Algorithm Toolbox available in MATLAB is used to solve
the problem. Finally, an ant colony optimization method is
programmed in the MATLAB programming language. Results obtained from
these methods are presented in tables. In order to assess the
performance of the techniques used in this project, benchmark
problems available in the open literature are used.
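For reference, the problem itself has a classic greedy baseline - repeatedly pick the subset covering the most still-uncovered elements - that any of the three techniques above can be compared against. The crew-scheduling framing of the toy instance below is an illustrative assumption; the paper itself uses LINGO, MATLAB's GA toolbox, and ant colony optimization rather than this greedy method.

```python
def greedy_set_cover(universe, subsets):
    """Classic greedy approximation for set covering: at each step choose
    the subset that covers the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("universe cannot be covered by the given subsets")
        chosen.append(best)
        uncovered -= set(best)
    return chosen

flights = {1, 2, 3, 4, 5}                    # e.g. flight legs needing a crew
crews = [[1, 2, 3], [2, 4], [3, 4], [4, 5]]  # candidate crew pairings
print(greedy_set_cover(flights, crews))      # [[1, 2, 3], [4, 5]]
```

The greedy method guarantees a cover within a logarithmic factor of optimal, which is why metaheuristics such as GA and ACO are attractive when tighter solutions are needed on large benchmarks.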
Abstract: Although many researchers have studied flow hydraulics in
compound channels, there are still many complicated problems in
determining their flow rating curves. Many different methods have
been presented for these channels, but extending them to all types
of compound channels with different geometrical and hydraulic
conditions is certainly difficult. In this study, with the aid of
nearly 400 laboratory and field data sets of geometry and flow
rating curves from 30 different straight compound sections, and
using artificial neural networks (ANNs), flow discharge in compound
channels was estimated. Thirteen dimensionless input variables,
including relative depth, relative roughness, relative width, aspect
ratio, bed slope, main channel side slopes, flood plain side slopes
and berm inclination, and one output variable (flow discharge) have
been used in the ANNs. Comparison of the ANN model with the
traditional divided channel method (DCM) shows the high accuracy of
the ANN model's results. Sensitivity analysis showed that relative
depth, with a 47.6 percent contribution, is the most effective input
parameter for flow discharge prediction. Relative width and relative
roughness have 19.3 and 12.2 percent importance, respectively. On
the other hand, the shape parameter, main channel side slopes and
flood plain side slopes, with 2.1, 3.8 and 3.8 percent
contributions, have the least importance.
Abstract: LabVIEW and SIMULINK are two of the most widely used
graphical programming environments for designing digital signal
processing and control systems. Unlike conventional text-based
programming languages such as C, Cµ and MATLAB, graphical
programming involves block-based code development, allowing a
more efficient mechanism to build and analyze control systems. In
this paper a LabVIEW environment has been employed as a
graphical user interface for monitoring the operation of a controlled
distillation column, by visualizing both the closed-loop performance
and the user-selected control conditions, while the column dynamics
have been modeled in the SIMULINK environment. This tool has
been applied to the PID-based decoupled control of a binary
distillation column. By means of such integrated environments, the
control designer is able to monitor and control the plant behavior
and optimize the response when both the quality improvement of
distillation products and operation efficiency are considered.
Abstract: Minimization methods for training feed-forward networks with backpropagation are compared. Feed-forward network training is a special case of functional minimization, where no explicit model of the data is assumed. Due to the high dimensionality of the data, linearization of the training problem through the use of orthogonal basis functions is therefore not desirable. The focus is functional minimization on any basis. A number of methods based on local gradients and Hessian matrices are discussed. Modifications of many first- and second-order training methods are considered. Using share-rate data, it is shown experimentally that conjugate gradient and quasi-Newton methods outperform gradient descent methods. The Levenberg-Marquardt algorithm is of special interest in financial forecasting.
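The gap between first- and second-order methods can be illustrated on a tiny ill-conditioned quadratic, f(x, y) = x² + 10y², whose stretched Hessian is exactly the situation where curvature information pays off. The learning rate and step counts below are illustrative assumptions; real network training adds stochasticity and non-convexity on top of this picture.

```python
def grad(x, y):
    """Gradient of f(x, y) = x**2 + 10*y**2."""
    return 2 * x, 20 * y

def gradient_descent(x, y, lr=0.04, steps=100):
    # First-order: many small steps, rate limited by the worst curvature.
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - lr * gx, y - lr * gy
    return x, y

def newton(x, y):
    # Second-order: the Hessian is diag(2, 20), so one Newton step
    # x - H^{-1} grad lands exactly on the minimum of this quadratic.
    gx, gy = grad(x, y)
    return x - gx / 2, y - gy / 20

print(newton(3.0, 3.0))             # (0.0, 0.0) in a single step
print(gradient_descent(3.0, 3.0))   # only close to (0, 0) after 100 steps
```

Quasi-Newton and Levenberg-Marquardt methods approximate this Hessian correction without forming the full matrix, which is why they dominate gradient descent in the experiments the abstract reports.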
Abstract: Classifying data hierarchically is an efficient approach
to analyzing data. Data is usually classified into multiple
categories, or annotated with a set of labels. To analyze
multi-labeled data, such data must be specified by giving a set of
labels as a semantic range. There are certain purposes for analyzing
data. This paper shows which multi-labeled data should be targeted
for analysis for those purposes, and discusses the role of a label
with respect to a set of labels by investigating the change that
occurs when a label is added to the set. These discussions yield
methods for the advanced analysis of multi-labeled data, based on
the role of a label with respect to a semantic range.
Abstract: Wavelets have provided researchers with significant
positive results in the texture defect detection domain. The weak
point of wavelets is that they are one-dimensional by nature, so
they are not efficient enough to describe and analyze
two-dimensional functions. In this paper we present a new method to
detect defects in texture images by using the curvelet transform.
Simulation results of the proposed method on a set of standard
texture images confirm its correctness. Comparison of the obtained
results indicates the superior ability of the curvelet transform,
relative to the wavelet transform, in describing discontinuities in
two-dimensional functions.
Abstract: Lossless compression schemes with secure transmission
play a key role in telemedicine applications, where they help in
accurate diagnosis and research. Traditional cryptographic
algorithms for data security are not fast enough to process vast
amounts of data. Hence, the novel secure lossless compression
approach proposed in this paper is based on the reversible integer
wavelet transform, the EZW algorithm, a new modified run-length
coding for character representation, and selective bit scrambling.
The use of the lifting scheme allows generating truly lossless
integer-to-integer wavelet transforms. Images are compressed and
decompressed by the well-known EZW algorithm. The proposed modified
run-length coding greatly improves the compression performance and
also increases the security level. This work employs a scrambling
method which is fast, simple to implement, and provides security.
The lossless compression ratios and distortion performance of the
proposed method are found to be better than those of other lossless
techniques.
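As background for the coding stage, basic run-length coding can be sketched as follows: runs of identical bytes become (count, value) pairs, and decoding expands them back, so the round trip is exactly lossless. This is the textbook baseline only; the paper's modified run-length coder and its EZW and scrambling stages are not reproduced here.

```python
def rle_encode(data):
    """Basic run-length coding of a byte string into (count, value) pairs."""
    if not data:
        return []
    runs, count = [], 1
    for prev, cur in zip(data, data[1:]):
        if cur == prev and count < 255:   # cap runs so counts fit in one byte
            count += 1
        else:
            runs.append((count, prev))
            count = 1
    runs.append((count, data[-1]))        # flush the final run
    return runs

def rle_decode(runs):
    return bytes(b for count, b in runs for _ in range(count))

row = bytes([0, 0, 0, 0, 255, 255, 7])
packed = rle_encode(row)
print(packed)                      # [(4, 0), (2, 255), (1, 7)]
assert rle_decode(packed) == row   # lossless round trip
```

Run-length coding pays off exactly on the long zero runs that wavelet detail subbands produce after quantization, which is why it pairs naturally with EZW.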
Abstract: A social network is a set of people, organizations, or other social entities connected by some form of relationship. Analysis of a social network broadly involves the visual and mathematical representation of those relationships. The Web can also be considered a social network. This paper presents an innovative approach to analyzing a social network using a variant of the existing ant colony optimization algorithm, called the Clever Ant Colony Metaphor. Experiments were performed, and interesting findings and observations have been inferred based on the proposed model.