Abstract: Software reuse can be considered the most realistic
and promising way to improve software engineering productivity and
quality. Automated assistance for software reuse involves the
representation, classification, retrieval and adaptation of components.
The representation and retrieval of components are important to
software reuse in Component-Based Software Development (CBSD).
However, current industrial component models mainly focus on
implementation techniques and ignore the semantic information
about components, so it is difficult to retrieve components that
satisfy users' requirements. This paper presents a method of business
component retrieval based on specification matching to support
software reuse in enterprise information systems. First, a
reuse-oriented business component model is proposed. In our model, the
business data type is represented as a sign data type based on XML,
which can express variable business data types and thus describe the
variety of business operations. Based on this model, we propose
specification match relationships at two levels: the business operation
level and the business component level. At the business operation level, we
use input business data types, output business data types, and a
taxonomy of business operations to evaluate the similarity between
business operations. At the business component level, we propose five
specification matches between business components. To retrieve
reusable business components, we propose a measure of similarity
degrees to calculate the similarities between business components.
Finally, an SQL-like business component retrieval command is
proposed to help users retrieve approximate business components
from a component repository.
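One way to ground the operation-level matching described above: compare two business operations by the overlap of their input and output business data types. The Jaccard measure, weights, and operation names below are illustrative assumptions, not the similarity degrees actually defined in the paper:

```python
# Sketch of one plausible operation-level similarity measure: compare
# the input/output business data types of two operations with a
# weighted Jaccard overlap. All names and weights are illustrative.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two sets of type names."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def operation_similarity(op1: dict, op2: dict,
                         w_in: float = 0.5, w_out: float = 0.5) -> float:
    """Weighted combination of input-type and output-type similarity."""
    sim_in = jaccard(set(op1["inputs"]), set(op2["inputs"]))
    sim_out = jaccard(set(op1["outputs"]), set(op2["outputs"]))
    return w_in * sim_in + w_out * sim_out

create_order = {"inputs": {"Customer", "Product"}, "outputs": {"Order"}}
place_order = {"inputs": {"Customer", "Product", "Discount"},
               "outputs": {"Order"}}
print(operation_similarity(create_order, place_order))  # 0.8333...
```

A component-level similarity could then aggregate such scores over the best pairing of operations between two components.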
Abstract: The increasing volume of information on the
Internet creates a growing need for new (semi-)automatic
methods for retrieving documents and ranking them according to
their relevance to the user query. In this paper, after a brief review
on ranking models, a new ontology based approach for ranking
HTML documents is proposed and evaluated in various
circumstances. Our approach is a combination of conceptual,
statistical and linguistic methods. This combination preserves the
precision of ranking without sacrificing speed. Our approach
exploits natural language processing techniques for extracting
phrases and stemming words. Then an ontology based conceptual
method will be used to annotate documents and expand the query.
To expand a query, the spread activation algorithm is improved so
that the expansion can be done in various aspects. The annotated
documents and the expanded query will be processed to compute
the relevance degree exploiting statistical methods. The outstanding
features of our approach are (1) combining conceptual, statistical
and linguistic features of documents, (2) expanding the query with
its related concepts before comparing to documents, (3) extracting
and using both words and phrases to compute relevance degree, (4)
improving the spread activation algorithm to do the expansion based
on weighted combination of different conceptual relationships and
(5) allowing variable document vector dimensions. A ranking
system called ORank is developed to implement and test the
proposed model. The test results will be included at the end of the
paper.
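The query expansion step can be made concrete with a minimal, generic spread-activation pass over a concept graph. The ontology, edge weights, decay factor, and threshold below are illustrative assumptions, not ORank's actual parameters:

```python
# Minimal sketch of spread-activation query expansion: activation
# starts at the query concepts and flows along weighted ontology
# edges, attenuated by a decay factor, until it falls below a
# threshold. Graph, weights and constants are illustrative only.

def spread_activation(graph, seeds, decay=0.5, threshold=0.1):
    """graph: {concept: [(neighbor, edge_weight), ...]}.
    seeds: initial query concepts, each with activation 1.0.
    Returns all concepts whose final activation reaches the threshold."""
    activation = {c: 1.0 for c in seeds}
    frontier = list(seeds)
    while frontier:
        concept = frontier.pop()
        for neighbor, weight in graph.get(concept, []):
            new_act = activation[concept] * weight * decay
            if new_act > activation.get(neighbor, 0.0) and new_act > threshold:
                activation[neighbor] = new_act
                frontier.append(neighbor)
    return {c: a for c, a in activation.items() if a >= threshold}

ontology = {
    "car": [("vehicle", 0.9), ("engine", 0.8)],
    "vehicle": [("transport", 0.7)],
}
print(spread_activation(ontology, ["car"]))
```

Weighting edges differently per relationship type (hypernym, meronym, etc.), as the abstract's improved algorithm does, amounts to choosing the `edge_weight` values per relation.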
Abstract: A manufacturing feature can be defined simply as a
geometric shape and its manufacturing information to create the shape.
In a feature-based process planning system, the feature library, which
consists of pre-defined manufacturing features and the manufacturing
information needed to create their shapes, plays an important role
in extracting manufacturing features with their proper
manufacturing information. To manage this manufacturing
information flexibly, it is important to build a feature library that can
be easily modified. In this paper, the implementation of a Semantic Wiki
for the development of the feature library is proposed.
Abstract: Most file systems overwrite modified file data and
metadata in their original locations, while the Log-structured File
System (LFS) dynamically relocates them to other locations. We
design and implement the Evergreen file system, which can select
between overwriting and relocation for each block of a file or its metadata.
Therefore, the Evergreen file system can achieve superior write
performance by sequentializing write requests (similar to LFS-style
relocation) when space utilization is low and overwriting when
utilization is high. Another challenging issue is identifying
performance benefits of LFS-style relocation over overwriting on a
newly introduced SSD (Solid State Drive) which has only
Flash-memory chips and control circuits without mechanical parts.
Our experimental results measured on an SSD show that relocation
outperforms overwriting when space utilization is below 80% and vice
versa.
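At its simplest, the hybrid write policy the abstract describes reduces to a utilization test. The sketch below encodes the reported 80% crossover; the function and constant names are illustrative, not Evergreen's actual interface:

```python
# Sketch of the hybrid write policy: LFS-style relocation when space
# utilization is low, in-place overwriting when it is high. The 80%
# crossover comes from the SSD measurements reported in the abstract;
# names are illustrative only.

RELOCATION_THRESHOLD = 0.80  # below this utilization, relocation wins

def choose_write_mode(space_utilization: float) -> str:
    """Return the write strategy for the current space utilization."""
    if not 0.0 <= space_utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    if space_utilization < RELOCATION_THRESHOLD:
        return "relocate"   # sequentialize writes, LFS-style
    return "overwrite"      # update blocks in place

print(choose_write_mode(0.30))  # relocate
print(choose_write_mode(0.95))  # overwrite
```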
Abstract: The set covering problem is a classical problem in
computer science and complexity theory. It has many applications,
such as the airline crew scheduling problem, the facilities location
problem, vehicle routing, the assignment problem, etc. In this paper,
three different techniques are applied to solve the set covering problem.
First, a mathematical model of the set covering problem is introduced
and solved using the optimization solver LINGO. Second, the
Genetic Algorithm Toolbox available in MATLAB is used to solve
the set covering problem. Lastly, an ant colony optimization method
is programmed in the MATLAB programming language. Results
obtained from these methods are presented in tables. In order to
assess the performance of the techniques used in this project, the
benchmark problems available in the open literature are used.
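To make the set covering problem concrete, here is the classic greedy heuristic in Python. It is not one of the three techniques compared in the paper; it only illustrates the problem being solved: repeatedly pick the subset with the best cost per newly covered element until the universe is covered.

```python
# Greedy heuristic for weighted set cover (a log-factor approximation),
# shown only to illustrate the problem; the paper itself compares
# LINGO, a genetic algorithm, and ant colony optimization.

def greedy_set_cover(universe, subsets, costs):
    """subsets: {name: set_of_elements}; costs: {name: cost}.
    Repeatedly pick the subset with the lowest cost per newly
    covered element until everything is covered."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (s for s in subsets if subsets[s] & uncovered),
            key=lambda s: costs[s] / len(subsets[s] & uncovered),
        )
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

universe = {1, 2, 3, 4, 5}
subsets = {"A": {1, 2, 3}, "B": {2, 4}, "C": {3, 4, 5}, "D": {5}}
costs = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 0.5}
print(greedy_set_cover(universe, subsets, costs))
```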
Abstract: QoS routing aims to find paths between senders and
receivers that satisfy the QoS requirements of the application while
using network resources efficiently; the underlying routing
algorithm must be able to find low-cost paths that satisfy the given
QoS constraints. The problem of finding least-cost constrained routes
is known to be NP-hard, and several algorithms have been proposed to
find near-optimal solutions. However, these heuristics either
impose relationships among the link metrics to reduce the complexity
of the problem, which may limit the general applicability of the
heuristic, or are too costly in terms of execution time to be applicable
to large networks. In this paper, we analyze two algorithms,
Characterized Delay Constrained Routing (CDCR) and Optimized
Delay Constrained Routing (ODCR). The CDCR algorithm provides an
approach to delay-constrained routing that captures the trade-off
between cost minimization and the risk level regarding the delay
constraint. ODCR uses an adaptive path weight function
together with an additional constraint imposed on the path cost
to restrict the search space, and hence finds a near-optimal solution
much more quickly.
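The delay-constrained least-cost problem that both algorithms approximate can be stated in a few lines. The brute-force search below is neither CDCR nor ODCR; it simply makes the objective explicit: among all paths meeting the delay bound, find the cheapest.

```python
# Brute-force statement of the delay-constrained least-cost (DCLC)
# routing problem, for illustration only: enumerate simple paths,
# pruning any that exceed the delay bound or the best cost so far.

def dclc_path(graph, src, dst, delay_bound):
    """graph: {u: [(v, cost, delay), ...]}. Returns (cost, path) for
    the cheapest path whose total delay stays within the bound."""
    best = (float("inf"), None)  # (cost, path)

    def dfs(node, path, cost, delay):
        nonlocal best
        if delay > delay_bound or cost >= best[0]:
            return  # prune: over the bound or already worse than best
        if node == dst:
            best = (cost, path)
            return
        for nxt, c, d in graph.get(node, []):
            if nxt not in path:  # keep paths simple
                dfs(nxt, path + [nxt], cost + c, delay + d)

    dfs(src, [src], 0, 0)
    return best

net = {
    "s": [("a", 1, 5), ("b", 4, 1)],
    "a": [("t", 1, 5)],
    "b": [("t", 4, 1)],
}
# The cheap path s-a-t violates the delay bound, so the costlier
# but faster path s-b-t is chosen.
print(dclc_path(net, "s", "t", delay_bound=4))
```

Since path enumeration is exponential, practical algorithms such as CDCR and ODCR trade exactness for tractable running time.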
Abstract: In this paper, we propose a hybrid machine learning
system based on Genetic Algorithm (GA) and Support Vector
Machines (SVM) for stock market prediction. A variety of indicators
from the technical analysis field of study are used as input features.
We also make use of the correlation between stock prices of different
companies to forecast the price of a stock, making use of technical
indicators of highly correlated stocks, not only the stock to be
predicted. The genetic algorithm is used to select the set of most
informative input features from among all the technical indicators.
The results show that the hybrid GA-SVM system outperforms the
stand-alone SVM system.
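The GA feature-selection loop can be sketched generically. The snippet below substitutes a toy fitness function for the SVM validation score the paper uses, so the selection mechanics are visible without training a model; all names, parameters, and the "informative" set are illustrative:

```python
# Sketch of GA-based feature selection. A bitmask chromosome marks
# which indicators are kept; fitness here is a stand-in for the SVM
# validation score the paper would compute. Everything is illustrative.
import random

random.seed(0)
N_FEATURES = 10          # e.g. number of technical indicators
INFORMATIVE = {0, 2, 5}  # pretend these indicators carry the signal

def fitness(mask):
    """Stand-in for an SVM score: reward informative features,
    lightly penalize carrying uninformative ones."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & INFORMATIVE) - 0.1 * len(chosen - INFORMATIVE)

def evolve(pop_size=30, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = parents + children                   # elitist replacement
    return max(pop, key=fitness)

best = evolve()
print([i for i, bit in enumerate(best) if bit])
```

In the actual system, `fitness` would train an SVM on the masked indicator set and return its validation accuracy.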
Abstract: In this paper, we propose a method to classify each
type of natural rock texture. Our goal is to classify 26 classes of rock
textures. First, we extract five features for each class using
principal component analysis combined with applied
spatial frequency measurement. Next, the effective number of nodes
for the neural network was determined experimentally, and the most
effective network was used in the classification process. The results
from this system yield a quite high recognition rate, showing that
high recognition rates can be achieved in separating the 26 stone classes.
Abstract: In this paper, we propose a fixed formatting method for PPX (Pretty Printer for XML). PPX is a query language for XML databases with extensive formatting capability that produces HTML as the result of a query. The fixed formatting method completely specifies the combination of variables and layout specification operators within the layout expression of the GENERATE clause of PPX. In our experiment, a quick comparison shows that PPX requires far less description than XSLT or XQuery programs performing the same tasks.
Abstract: The requirement to improve software productivity has
promoted the research on software metric technology. There are
metrics for identifying the quality of reusable components, but the
function that uses these metrics to determine the reusability of
software components is still not clear. If identified in the design
phase, or even in the coding phase, these metrics can help reduce
rework by improving the quality of component reuse and hence
improve productivity through a probabilistic increase in the reuse
level.
level. CK metric suit is most widely used metrics for the objectoriented
(OO) software; we critically analyzed the CK metrics, tried
to remove the inconsistencies and devised the framework of metrics
to obtain the structural analysis of OO-based software components.
Neural networks can learn new relationships from new input data and
can be used to refine fuzzy rules to create an adaptive fuzzy system.
Hence, a neuro-fuzzy inference engine can be used to evaluate the
reusability of an OO-based component using its structural attributes as
inputs. In this paper, an algorithm has been proposed in which tuned
WMC, DIT, NOC, CBO, and LCOM values of the OO software component
are given as inputs to the neuro-fuzzy system, and the output is
obtained in terms of reusability. The developed
reusability model has produced high precision results as expected by
the human experts.
Abstract: This paper analyzes the patterns of the Monte Carlo
data for a large number of variables and minterms, in order to
characterize the circuit path length behavior. We propose models
trained on shortest path lengths derived from a wide range of binary
decision diagram (BDD) simulations. The model was created using a
feed-forward neural network (NN) modeling methodology. Experimental results
for ISCAS benchmark circuits show an RMS error of 0.102 for the
shortest path length complexity estimation predicted by the NN
model (NNM). Use of such a model can help reduce the time
complexity of very large scale integration (VLSI) circuit design and
related computer-aided design (CAD) tools that use BDDs.
Abstract: The paper describes a new approach for fingerprint
classification, based on the distribution of local features (minute
details or minutiae) of the fingerprints. The main advantage is that
fingerprint classification provides an indexing scheme to facilitate
efficient matching in a large fingerprint database. A set of rules based
on a heuristic approach has been proposed. The area around the core
point is treated as the area of interest for extracting the minutiae
features as there are substantial variations around the core point as
compared to the areas away from it. The core point of a
fingerprint is located at the point of maximum
curvature. The experimental results report an overall average
accuracy of 86.57% in fingerprint classification.
Abstract: In this article, a modification of the fuzzy ART network algorithm, aiming at making it supervised, is carried out. It consists of searching for the comparison, training, and vigilance parameters that give the minimum quadratic distance between the outputs of the training base and those obtained by the network. The same process is applied to determine the parameters of the fuzzy ARTMAP giving the most powerful network. The modification consists in having the fuzzy ARTMAP learn a base of examples not just once, as is usual, but as many times as its architecture keeps evolving or the objective error is not reached. In this way, we do not have to worry about the values to impose on the eight parameters of the network. To evaluate each of these three modified networks, a comparison of their performances is carried out. As an application, we carried out a classification of an image of the bay of Algiers taken by SPOT XS. As evaluation criteria, we use the training duration, the mean square error (MSE) on the control set, and the rate of good classification per class. The results of this study, presented as curves, tables, and images, show that the modified fuzzy ARTMAP presents the best quality/computing-time compromise.
Abstract: In this article, we discuss improving multi-class
classification using a multilayer perceptron. The considered approach
consists in breaking the n-class problem down into two-class
subproblems. Each two-class subproblem is trained independently;
in the test phase, the vector to be classified is confronted with all
the two-class models, and the elected class is the strongest one that
does not lose any competition against the other classes. Recognition
rates obtained with the multi-class approach by two-class
decomposition are clearly better than those obtained with the simple
multi-class approach.
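The pairwise decomposition and voting scheme can be sketched with stand-in classifiers in place of trained perceptrons: one binary model per class pair, with the elected class being the one that loses no duel. The nearest-centroid "models" below are illustrative only:

```python
# Sketch of one-vs-one decomposition: one binary model per class pair,
# and the winner is the class that loses no pairwise competition.
# Trivial nearest-centroid "models" stand in for trained MLPs.
from itertools import combinations

def make_pairwise_models(centroids):
    """Build one (i, j) model per class pair: each votes for whichever
    class centroid is nearer to the 1-D input."""
    models = {}
    for i, j in combinations(centroids, 2):
        models[(i, j)] = lambda x, a=i, b=j: (
            a if abs(x - centroids[a]) <= abs(x - centroids[b]) else b
        )
    return models

def classify(x, models, classes):
    """Elect the class that loses no duel; None if every class loses one."""
    losses = {c: 0 for c in classes}
    for (i, j), model in models.items():
        winner = model(x)
        losses[j if winner == i else i] += 1
    undefeated = [c for c in classes if losses[c] == 0]
    return undefeated[0] if undefeated else None

centroids = {"low": 0.0, "mid": 5.0, "high": 10.0}
models = make_pairwise_models(centroids)
print(classify(6.0, models, list(centroids)))  # mid
```

An n-class problem yields n(n-1)/2 such binary models, each trained on only two classes' data, which is what makes the decomposition attractive.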
Abstract: An image texture analysis and target recognition approach using an improved image texture feature coding method (TFCM) and a Support Vector Machine (SVM) for target detection is presented. With our proposed target detection framework, targets of interest can be detected accurately. A Cascade-Sliding-Window technique was also developed for automated target localization. Application to mammograms showed that over 88% of normal mammograms and 80% of abnormal mammograms can be correctly identified. The approach was also successfully applied to Synthetic Aperture Radar (SAR) and Ground Penetrating Radar (GPR) images for target detection.
Abstract: A method is presented for the construction of arbitrary
even-input sorting networks exhibiting better properties than the
networks created using a conventional technique of the same type.
The method was discovered by means of a genetic algorithm combined
with an application-specific development. As with human
inventions in the area of theoretical computer science, the evolved
invention was analyzed: its generality was proven, and its area and
time complexities were determined.
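The correctness claim for a comparator network can be checked mechanically via the zero-one principle: a network sorts all inputs iff it sorts every 0/1 input. The network verified below is the standard 5-comparator 4-input network, not the paper's evolved design:

```python
# Verifying a sorting network with the zero-one principle: if a
# comparator network sorts all 2^n binary inputs, it sorts all inputs.
# The example network is the classic 4-input, 5-comparator network,
# shown for illustration (not the evolved networks from the paper).
from itertools import product

def apply_network(network, values):
    v = list(values)
    for i, j in network:  # each comparator orders v[i] <= v[j]
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

def sorts_all_binary_inputs(network, n):
    return all(apply_network(network, bits) == sorted(bits)
               for bits in product((0, 1), repeat=n))

classic4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
print(sorts_all_binary_inputs(classic4, 4))  # True
```

This exhaustive check is how a candidate network produced by a genetic algorithm can be validated before its generality is proven analytically.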
Abstract: As chip density increases, designers are
trying to put more and more computational and storage facilities on a
single chip. With the growing complexity of computational and storage
circuits, designing, testing, and debugging become more and more
complex and expensive. Hence, hardware designs are built using
very high speed hardware description languages, which is more
efficient and cost effective. This paper focuses on the
implementation of 32-bit ALU design based on Verilog hardware
description language. The adder and subtractor operate correctly on both
unsigned and positive numbers. In an ALU, addition takes most of the
time if it uses the ripple-carry adder. The general strategy for
designing fast adders is to reduce the time required to form carry
signals. Adders that use this principle are called carry-lookahead
adders. The carry-lookahead adder is designed as a combination
of 4-bit adders. The syntax of Verilog HDL is similar to the C
programming language. This paper proposes a unified approach to
ALU design in which both simulation and formal verification can
co-exist.
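The carry-lookahead idea can be illustrated outside Verilog: carries are formed from per-bit generate (a AND b) and propagate (a XOR b) signals instead of rippling bit by bit. Below is a 4-bit slice modeled in Python as a sketch; the paper's actual design is written in Verilog HDL:

```python
# Behavioral model of a 4-bit carry-lookahead adder slice. In hardware,
# each carry c[i+1] = g[i] | (p[i] & c[i]) is expanded into two-level
# logic so no carry waits on a ripple; here the recurrence is just
# evaluated in a loop to show the arithmetic.

def cla_4bit(a, b, cin=0):
    """4-bit carry-lookahead addition; a, b in range(16)."""
    g = [(a >> i & 1) & (b >> i & 1) for i in range(4)]  # generate
    p = [(a >> i & 1) ^ (b >> i & 1) for i in range(4)]  # propagate
    c = [cin]
    for i in range(4):
        c.append(g[i] | (p[i] & c[i]))
    total = sum((p[i] ^ c[i]) << i for i in range(4)) + (c[4] << 4)
    return total

print(cla_4bit(9, 7))   # 16
print(cla_4bit(5, 10))  # 15
```

A 32-bit ALU like the one in the paper can chain such 4-bit slices, with group generate/propagate signals letting the carries between slices also be computed ahead of time.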
Abstract: This paper presents a new system developed in Java®
for pattern recognition and pattern summarisation in multi-band
(RGB) satellite images. The system design is described in some
detail. Results of testing the system to analyse and summarise
patterns in SPOT MS images and LANDSAT images are also
discussed.
Abstract: This paper presents an improved image segmentation
model with edge preserving regularization based on the
piecewise-smooth Mumford-Shah functional. A level set formulation
is considered for the Mumford-Shah functional minimization in
segmentation, and the corresponding partial differential equations are
solved by the backward Euler discretization. Aiming at encouraging
edge preserving regularization, a new edge indicator function is
introduced in the level set framework, in which all the grid points used
to locate the level set curve are considered in order to avoid blurring
the edges, and a nonlinear smooth constraint function is applied as a
regularization term to smooth the image in the isophote direction
instead of the gradient direction. In implementation, strategies such as a new
scheme for extension of u+ and u- computation of the grid points and
speedup of the convergence are studied to improve the efficacy of the
algorithm. The resulting algorithm has been implemented and
compared with previous methods, and has been shown to be efficient
in several cases.
Abstract: Aspect Oriented Programming promises many
advantages at the programming level by separating crosscutting
concerns into units of their own, called aspects. Join points are
distinguishing features of Aspect Oriented Programming as they
define the points where core requirements and crosscutting concerns
are (inter)connected. Currently, there is a problem of composing
multiple aspects at the same join point, which introduces issues
such as the ordering and control of these superimposed aspects.
Dynamic strategies are required to handle these issues as early as
possible. The state chart is an effective modeling tool for capturing
dynamic behavior at the high-level design stage. This paper provides a
methodology to formulate strategies for multiple aspect composition at
a high level, which helps to better implement these strategies at the
coding level. It
also highlights the need to design shared join points at a high level
by providing solutions to these issues using state chart diagrams
in UML 2.0. A high-level design representation of shared join points
also helps to implement the designed strategy in a systematic way.