Abstract: Fast changing knowledge systems on the Internet can
be accessed more efficiently with the help of automatic document
summarization and updating techniques. The aim of multi-document
update summarization is to construct a summary that conveys the main
content of a collection of documents, under the assumption that the
user has already read a set of earlier documents.
To capture richer semantic information, deeper linguistic and
semantic analysis of the source documents is used instead of relying
only on document word frequencies to select important concepts.
Producing a responsive summary requires meaning-oriented structural
analysis. To address this issue, the proposed system presents a
document summarization approach based on sentence annotation with
aspects, prepositions and named entities. A semantic element
extraction strategy selects important concepts from the documents,
which are used to generate an enhanced semantic summary.
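As a toy illustration of this kind of semantic element extraction, the sketch below scores sentences by average content-word frequency plus a crude named-entity proxy (capitalised non-initial tokens). The function names and scoring formula are illustrative assumptions, not the paper's method.

```python
import re
from collections import Counter

def summarize(sentences, top_n=2):
    """Score each sentence by average content-word frequency plus a crude
    named-entity proxy (capitalised non-initial tokens), then keep the
    top-scoring sentences in their original order."""
    words = [w.lower() for s in sentences for w in re.findall(r"[A-Za-z]+", s)]
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[A-Za-z]+", sentence)
        base = sum(freq[t.lower()] for t in tokens) / max(len(tokens), 1)
        entities = sum(1 for t in tokens[1:] if t[0].isupper())  # NE proxy
        return base + entities

    ranked = sorted(sentences, key=score, reverse=True)[:top_n]
    return [s for s in sentences if s in ranked]
```

A real system would replace the capitalisation heuristic with a proper named-entity recogniser and annotate aspects and prepositions as well.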
Abstract: Construction cost estimation is one of the most
important aspects of construction project design. For generations, the
process of cost estimating has been manual, time-consuming and
error-prone. This has partly led to most cost estimates being unclear
and riddled with inaccuracies that at times lead to over- or
under-estimation of construction cost. The development of standard
sets of measurement rules that are understandable by all those
involved in a construction project has not fully solved these
challenges. Emerging
Building Information Modelling (BIM) technologies can exploit
standard measurement methods to automate the cost estimation process
and improve accuracy. This requires standard measurement methods to
be structured in an ontological, machine-readable format so that BIM
software packages can easily read them. Most standard measurement
methods are still text-based in textbooks and require manual transfer
into tables or spreadsheets during cost estimation. The
aim of this study is to explore the development of an ontology based
on New Rules of Measurement (NRM) commonly used in the UK for
cost estimation. The methodology adopted is Methontology, one of
the most widely used ontology engineering methodologies. The
challenges encountered in this exploratory study are also reported,
and recommendations for future studies are proposed.
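To illustrate what a machine-readable measurement rule might look like, the sketch below encodes hypothetical NRM-style items as plain Python records. The fields and the example entries are illustrative assumptions, not actual NRM text; a full ontology would add classes, relations and axioms on top of such records.

```python
from dataclasses import dataclass

@dataclass
class MeasurementRule:
    """Hypothetical machine-readable form of one NRM-style measurement rule."""
    work_section: str       # work section the rule belongs to
    item: str               # item to be measured
    unit: str               # unit of measurement (m, m2, m3, nr, ...)
    measurement_basis: str  # how the quantity is taken off

# Illustrative entries only, not actual NRM content:
rules = [
    MeasurementRule("Masonry", "Walls", "m2", "area measured on the centre line"),
    MeasurementRule("In-situ concrete", "Slabs", "m3", "volume as placed"),
]

def estimate(rule, quantity, rate):
    """Cost of one item = measured quantity x unit rate, once the rule fixes the unit."""
    return quantity * rate
```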
Abstract: The increasing demand for gallium, indium and
rare-earth elements for the production of electronics, e.g. solid-state
lighting, photovoltaics, integrated circuits, and liquid crystal
displays, will exceed the world-wide supply according to current
forecasts. Recycling systems to reclaim these materials are not yet in
place, which challenges the sustainability of these technologies. This
paper proposes a multispectral imaging system as a basis for a vision
based recognition system for valuable components of electronics
waste. Multispectral imaging is intended to enhance the contrast of
images of printed circuit boards (single components as well as labels)
for further analysis, such as optical character recognition and entire
printed circuit board recognition. The results show that higher
contrast is achieved in the near infrared than in the ultraviolet and
visible ranges.
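The band comparison can be sketched with RMS contrast (the standard deviation of intensities) computed per spectral band; the pixel values below are made up and merely stand in for UV and NIR captures of the same board region.

```python
import numpy as np

def rms_contrast(band):
    """RMS contrast (standard deviation of intensities) of one spectral band."""
    return float(np.asarray(band, dtype=float).std())

# Made-up pixel values standing in for two captures of the same board region:
uv_band = np.array([[0.48, 0.50], [0.52, 0.50]])    # small spread -> low contrast
nir_band = np.array([[0.10, 0.90], [0.85, 0.15]])   # large spread -> high contrast
```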
Abstract: Clustering involves the partitioning of n objects into k
clusters. Many clustering algorithms use hard-partitioning techniques
where each object is assigned to one cluster. In this paper we propose
an overlapping algorithm MCOKE which allows objects to belong to
one or more clusters. The algorithm is different from fuzzy clustering
techniques because objects that overlap are assigned a membership
value of 1 (one) as opposed to a fuzzy membership degree. The
algorithm is also different from other overlapping algorithms that
require a similarity threshold to be defined a priori, which can be
difficult for novice users to determine.
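A minimal sketch of the idea as we read it: run ordinary k-means, take the largest point-to-own-centroid distance (maxdist) as a data-derived threshold, and give a hard membership of 1 to every cluster whose centroid lies within that distance. The function names and the exact use of maxdist are our assumptions, not MCOKE's published details.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm) used as the base hard partitioning."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                              else centroids[j] for j in range(k)])
    labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
    return centroids, labels

def overlapping_memberships(X, centroids, labels):
    """Give membership 1 to every cluster whose centroid lies within maxdist,
    the largest point-to-own-centroid distance seen in the k-means result,
    so no threshold has to be supplied a priori."""
    d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    maxdist = d[np.arange(len(X)), labels].max()
    return (d <= maxdist).astype(int)  # hard 0/1 values; rows may have several 1s
```

Unlike fuzzy c-means, a point that falls inside several clusters gets a full membership of 1 in each, not a graded degree.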
Abstract: Image search engines rely on the surrounding textual
keywords for the retrieval of images. It is tedious for search engines
like Google and Bing to interpret the user's search intention and
provide the desired results. Recent research also indicates that
Google image search does not work well on all images. This has led to
the emergence of efficient image retrieval techniques that interpret
the user's search intention and show the desired results.
Accomplishing this task requires an efficient image re-ranking
framework, and a new such framework is evaluated in this paper. The
implemented framework provides improved image retrieval from the
image dataset by re-ranking the retrieved images based on the user's
desired images. The framework operates in two sections, one offline
and one online. In the offline section, it learns different reference
classes (semantic spaces) for different user query keywords; semantic
signatures are generated by combining the textual and visual features
of the images. In the online section, images are re-ranked by
comparing the semantic signatures obtained from the reference classes
with the user-specified image query keyword. This re-ranking
methodology increases image retrieval efficiency and yields results
that are effective for the user.
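The online comparison step can be sketched as follows, with concatenation of L2-normalised textual and visual vectors standing in for the signature construction (an assumption on our part) and cosine similarity as the ranking score.

```python
import numpy as np

def _unit(v):
    """L2-normalise a feature vector."""
    v = np.asarray(v, dtype=float)
    return v / (np.linalg.norm(v) + 1e-12)

def semantic_signature(textual, visual):
    """Combine textual and visual features into one signature by
    concatenating the normalised parts."""
    return np.concatenate([_unit(textual), _unit(visual)])

def rerank(query_sig, image_sigs):
    """Indices of images ordered by cosine similarity to the query signature."""
    sims = [float(_unit(query_sig) @ _unit(s)) for s in image_sigs]
    return sorted(range(len(image_sigs)), key=sims.__getitem__, reverse=True)
```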
Abstract: Iris codes contain bits with different entropy. This
work investigates different strategies to reduce the size of iris
code templates with the aim of reducing storage requirements and
computational demand in the matching process. Besides simple
sub-sampling schemes, a binary multi-resolution representation as
used in the JBIG hierarchical coding mode is also assessed. We find that
iris code template size can be reduced significantly while maintaining
recognition accuracy. Besides, we propose a two-stage identification
approach, using small-sized iris code templates in a pre-selection
stage, and full resolution templates for final identification, which
shows promising recognition behaviour.
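A minimal sketch of the simplest sub-sampling scheme and the fractional Hamming distance used to compare iris codes; the factor-th-bit selection and function names are our illustration, not the paper's exact procedure.

```python
import numpy as np

def subsample(code, factor):
    """Keep every factor-th bit of a binary iris code (simple sub-sampling)."""
    return code[::factor]

def hamming_distance(a, b):
    """Fractional Hamming distance used to compare two iris codes."""
    return float(np.mean(a != b))
```

In the two-stage scheme, `hamming_distance` on sub-sampled codes cheaply prunes the gallery, and only the surviving candidates are compared at full resolution.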
Abstract: Image and document encryption is needed for e-government
databases. In this paper we consider two image matrices: one public
and one secret (the original). Each matrix is analyzed using the
singular value decomposition, i.e. factored into three matrices: a
row orthogonal basis, a column orthogonal basis, and a diagonal
matrix of singular values. The product of the two row bases is
calculated, and similarly the product of the two column bases.
Finally, the public image, the row product and the column product are
saved as files. In the decryption stage, the original image is
recovered by combining the three public files.
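One way to realise such a scheme with NumPy, under our reading of the abstract: fold the secret's singular values into the row-basis product, so that three public files (the public image plus the two products) suffice to reconstruct the secret. The exact placement of the singular values is our assumption, not necessarily the paper's.

```python
import numpy as np

def encrypt(secret, public):
    """Decompose both images with a full SVD and publish only the basis
    products; the secret's singular values are folded into the row product
    (our assumed placement) so three public files suffice."""
    Us, ss, Vts = np.linalg.svd(secret, full_matrices=True)
    Up, _, Vtp = np.linalg.svd(public, full_matrices=True)
    Sigma_s = np.zeros(secret.shape)
    np.fill_diagonal(Sigma_s, ss)
    row_product = Up.T @ Us @ Sigma_s   # product of the two row bases (+ spectrum)
    col_product = Vts @ Vtp.T           # product of the two column bases
    return row_product, col_product     # published together with `public`

def decrypt(public, row_product, col_product):
    """Recompute the public bases and let their orthogonality cancel out:
    Up (Up^T Us Sigma) (Vs^T Vp) Vp^T = Us Sigma Vs^T = secret."""
    Up, _, Vtp = np.linalg.svd(public, full_matrices=True)
    return Up @ row_product @ col_product @ Vtp
```

Decryption works because the full-SVD bases of the public image are orthogonal, so multiplying by them and their transposes cancels exactly.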
Abstract: This article reports the construction and some
properties of the 5iD viewer, a system that simultaneously records
five views of a given experimental object. The properties of the
system are demonstrated on the analysis of fish schooling behaviour.
A method of instrument calibration that accounts for image distortion
is demonstrated, and a method of distance assessment for the case
where only two opposite cameras are available is proposed and partly
tested. Finally, we demonstrate how the state trajectory of the
behaviour of the fish school may be constructed from the entropy of
the system.
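One plausible entropy measure for such a state trajectory (our illustrative choice, not necessarily the paper's definition) is the Shannon entropy of the occupancy grid of fish positions, evaluated once per frame.

```python
import numpy as np

def school_entropy(positions, bins=4, extent=1.0):
    """Shannon entropy (bits) of the occupancy grid of fish positions;
    evaluating it per frame yields one coordinate of a state trajectory."""
    hist, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                bins=bins, range=[[0, extent], [0, extent]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

A tightly schooling group concentrates in few cells (low entropy), while a dispersed group spreads over many (high entropy).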
Abstract: In this paper we consider the rule reduct generation
problem. The well-known Rule Reduct Generation (RG) and Modified
Rule Generation (MRG) algorithms are used to solve it. As an
alternative to these algorithms, we develop a Pruning Rule Generation
(PRG) algorithm and compare it with RG and MRG.
Abstract: This paper introduces novel approaches to partitioning
and mapping in terms of model-based embedded multicore system
engineering and further discusses benefits, industrial relevance and
features in common with existing approaches. In order to assess
and evaluate the results, both approaches have been applied to a real
industrial application as well as to various prototypical
demonstration applications that were developed and implemented for
different purposes. Evaluations show that such applications improve
significantly with respect to performance, energy efficiency, meeting
timing constraints and maintainability when the AMALTHEA platform
and the implemented approaches are used.
Furthermore, the model-based design provides an open, expandable,
platform independent and scalable exchange format between
OEMs, suppliers and developers on different levels. Our proposed
mechanisms provide meaningful multicore system utilization since
load balancing by means of partitioning and mapping is effectively
performed with regard to the modeled systems including hardware,
software, operating system, scheduling, constraints, configuration and
more data.
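As a generic illustration of load balancing via partitioning and mapping (not the AMALTHEA algorithms themselves), the sketch below maps the heaviest remaining task to the currently least-loaded core; the task-cost model is deliberately simplistic.

```python
import heapq

def map_tasks(task_costs, n_cores):
    """Greedy longest-processing-time mapping: repeatedly place the heaviest
    remaining task on the currently least-loaded core."""
    cores = [(0.0, c) for c in range(n_cores)]  # (load, core id) min-heap
    heapq.heapify(cores)
    mapping = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, core = heapq.heappop(cores)
        mapping[task] = core
        heapq.heappush(cores, (load + cost, core))
    return mapping
```

A model-based toolchain would additionally respect the modeled constraints (affinities, deadlines, communication costs) that this sketch omits.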
Abstract: Job Scheduling plays an important role for efficient
utilization of grid resources available across different domains and
geographical zones. Scheduling of jobs is challenging and
NP-complete. Evolutionary and swarm intelligence algorithms have been
extensively used to address this NP-complete problem in grid
scheduling.
Artificial Bee Colony (ABC) has been proposed for optimization
problems based on foraging behaviour of bees. This work proposes a
modified ABC algorithm, the Cluster Heterogeneous Earliest First
Min-Min Artificial Bee Colony (CHMM-ABC), to optimally schedule
jobs for the available resources. The proposed model utilizes a novel
Heterogeneous Earliest Finish Time (HEFT) Heuristic Algorithm
along with Min-Min algorithm to identify the initial food source.
Simulation results show the performance improvement of the
proposed algorithm over other swarm intelligence techniques.
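The Min-Min component used to seed the initial food source can be sketched as follows; `etc` holds expected-time-to-compute values, and the heuristic repeatedly commits the job with the smallest earliest completion time. The data layout is our own, and CHMM-ABC's clustering and HEFT stages are not shown.

```python
def min_min(etc):
    """Min-Min heuristic. etc[j][m] is the expected time to compute job j on
    machine m; the job with the smallest earliest completion time is
    committed first. Returns the job->machine schedule and the makespan."""
    n_machines = len(next(iter(etc.values())))
    ready = [0.0] * n_machines          # when each machine becomes free
    schedule, jobs = {}, set(etc)
    while jobs:
        best = {j: min(range(n_machines), key=lambda m: ready[m] + etc[j][m])
                for j in jobs}
        job = min(jobs, key=lambda j: ready[best[j]] + etc[j][best[j]])
        machine = best[job]
        ready[machine] += etc[job][machine]
        schedule[job] = machine
        jobs.remove(job)
    return schedule, max(ready)
```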
Abstract: In medical imaging, segmentation of different areas of
human body like bones, organs, tissues, etc. is an important issue.
Image segmentation allows isolating the object of interest for further
processing that can lead for example to 3D model reconstruction of
whole organs. The difficulty of this procedure varies from trivial
for bones to quite difficult for organs like the liver, which is
considered one of the most difficult human organs to segment, mainly
because of its complexity, shape variability and proximity to other
organs and tissues. Due to these facts, substantial user effort
usually has to be applied to obtain satisfactory segmentation
results, and the process then deteriorates from automatic or
semi-automatic to a largely manual one. In this paper, an overview of
selected available software applications that can handle
semi-automatic image segmentation with subsequent 3D volume
reconstruction of the human liver is presented. The applications are
evaluated based on their segmentation results for several consecutive
DICOM images covering the abdominal area of the human body.
Abstract: Digital image correlation (DIC) is a contactless
full-field displacement and strain reconstruction technique commonly
used in the field of experimental mechanics. Compared with physical
measuring devices such as strain gauges, which provide only very
restricted coverage and are expensive to deploy widely, the DIC
technique provides results with full-field coverage and relatively
high accuracy using an inexpensive and simple experimental setup.
Studying the effect of natural patterns on the DIC technique is
important because the preparation of artificial patterns is a
time-consuming process. The objective of this research is to study
the effect of using images with natural patterns on the performance
of DIC. A systematic simulation method is used to build the simulated
deformed images used in DIC. A DIC parameter, the subset size, can
affect the processing and accuracy of DIC and can even cause it to
fail. Regarding the picture parameters (the correlation coefficient),
high similarity between subsets can cause the DIC process to fail and
make the results inaccurate. Pictures of good and bad quality for DIC
methods are presented and, more importantly, a systematic way is
given to evaluate the quality of pictures with natural patterns
before the measurement devices are installed.
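Subset matching in DIC commonly uses a zero-normalised correlation coefficient; a minimal sketch of the standard ZNCC (our formulation, not necessarily the paper's exact criterion):

```python
import numpy as np

def zncc(f, g):
    """Zero-normalised cross-correlation between a reference subset f and a
    candidate subset g; 1.0 is a perfect match, and the measure is invariant
    to affine intensity changes."""
    f = np.asarray(f, dtype=float) - np.mean(f)
    g = np.asarray(g, dtype=float) - np.mean(g)
    denom = np.sqrt((f * f).sum() * (g * g).sum())
    return float((f * g).sum() / denom) if denom else 0.0
```

Because ZNCC is invariant to affine intensity changes, a low-texture natural pattern yields many candidate subsets with near-identical scores, which is exactly the ambiguity that can make DIC fail.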
Abstract: This paper presents an evolutionary algorithm for
designing artificial neural networks (ANNs) for solving
multi-objective optimization problems. The multi-objective
evolutionary algorithm used in this study is a genetic algorithm,
while the ANN used is a radial basis function network (RBFN). The
proposed algorithm is named the memetic elitist Pareto non-dominated
sorting genetic algorithm-based RBFN (MEPGAN) and is applied to
medical disease problems. The experimental results indicate that the
proposed algorithm is viable, and provides an effective means to
design multi-objective RBFNs with good generalization capability
and compact network structure. This study shows that MEPGAN
generates RBFNs with an appropriate balance between accuracy and
simplicity, compared to the other algorithms found in the literature.
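The elitist Pareto machinery rests on non-dominated sorting; a minimal sketch of the first front under two minimisation objectives (say, prediction error and network size; illustrative labels, not the paper's exact objectives):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better
    in at least one (both objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """First non-dominated front: the elite set that NSGA-II-style sorting
    starts from."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```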
Abstract: Text mining techniques are generally applied to
classify text and to find fuzzy relations and structures in data
sets. This research provides a range of text mining capabilities. One
common application is text classification and event extraction,
which encompasses deducing specific knowledge concerning incidents
referred to in texts. The main contribution of this paper is the
clarification of a concept-graph generation mechanism based on text
classification and optimal fuzzy relationship extraction.
Furthermore, the work presented in this paper explains the
application of fuzzy relationship extraction and the branch and bound
(BB) method to simplify texts.
Abstract: Leukaemia is a blood cancer disease that contributes
to the increment of mortality rate in Malaysia each year. There are
two main categories for leukaemia, which are acute and chronic
leukaemia. The production and development of acute leukaemia cells
occur rapidly and uncontrollably. Therefore, if acute leukaemia cells
could be identified quickly and effectively, proper treatment and
medicine could be delivered. Due to the requirement for prompt and
accurate diagnosis of leukaemia, the current study proposes
unsupervised pixel segmentation based on clustering algorithms in
order to obtain fully segmented abnormal white blood cells (blasts)
in acute leukaemia images. To obtain the segmented blast, three
clustering algorithms, namely k-means, fuzzy c-means and moving
k-means, are applied to the saturation component image. Then median
filtering and seeded region growing area extraction are applied to
smooth the region of the segmented blast and to remove large unwanted
regions from the image, respectively. The three clustering algorithms
are compared in order to measure the performance of each on
segmenting the blast area. Based on the good sensitivity values
obtained, the results indicate that the moving k-means clustering
algorithm successfully produces a fully segmented blast region in
acute leukaemia images, indicating that the resultant images could
help haematologists in the further analysis of acute leukaemia.
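A minimal sketch of the saturation-based clustering step: compute the HSV saturation of each pixel and run plain k-means on it, taking the higher-mean cluster as the blast region (that choice is our simplifying assumption; the moving k-means variant and the post-processing steps are not shown).

```python
import numpy as np

def saturation(rgb):
    """Saturation component of an RGB image (HSV definition)."""
    rgb = np.asarray(rgb, dtype=float)
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    return np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1), 0.0)

def kmeans_1d(values, k=2, iters=30):
    """Plain k-means on the pixel saturation values."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.abs(values[..., None] - centers).argmin(axis=-1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = values[labels == j].mean()
    return labels, centers
```

The blast mask is then `labels == centers.argmax()`, reshaped back to the image grid, before median filtering and region growing are applied.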
Abstract: Motion tracking and stereo vision are complicated,
albeit well-understood, problems in computer vision. Existing
software packages that combine the two approaches to perform stereo
motion tracking typically employ complicated and computationally
expensive procedures. The purpose of this study is to create a simple
and effective solution capable of combining the two approaches. The
study aims to explore a strategy combining two techniques:
two-dimensional motion tracking using a Kalman filter, and depth
detection of objects using stereo vision. In conventional approaches,
objects in the scene of interest are observed using a single camera.
For stereo motion tracking, however, the scene of interest is
observed using video feeds from two calibrated cameras. Using two
simultaneous measurements from the two cameras, the depth of the
object from the plane containing the cameras is calculated.
The approach attempts to capture the entire three-dimensional spatial
information of each object at the scene and represent it through a
software estimator object. At discrete intervals, the estimator
tracks object motion in the plane parallel to the plane containing
the cameras and updates the object's perpendicular distance from that
plane as its depth. The ability to efficiently track
the motion of objects in three-dimensional space using a simplified
approach could prove to be an indispensable tool in a variety of
surveillance scenarios. The approach may find application from high
security surveillance scenes such as premises of bank vaults, prisons
or other detention facilities; to low cost applications in supermarkets
and car parking lots.
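For rectified cameras, the depth calculation from two simultaneous measurements reduces to triangulation from disparity; a minimal sketch with hypothetical focal-length and baseline values:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Perpendicular distance from the camera plane for rectified cameras:
    Z = f * B / d with disparity d = x_left - x_right (in pixels)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("object at infinity or cameras not rectified")
    return focal_px * baseline_m / disparity
```

In the estimator described above, this depth value is what gets refreshed at each discrete interval alongside the Kalman-filtered in-plane position.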
Abstract: A Reverse Logistics (RL) network is a complex and
dynamic network involving many stakeholders, such as suppliers,
manufacturers, warehouses, retailers and customers; this complexity
is inherent in the process due to a lack of perfect knowledge and to
conflicting information. Ontologies, on the other hand, can be
considered an approach to overcoming the problems of sharing
knowledge and communication among the various reverse logistics
partners. In this paper we propose a semantic representation based on
a hybrid architecture for building the ontologies in a bottom-up way;
this method facilitates semantic reconciliation between the
heterogeneous information systems that support reverse logistics
processes and product data.
Abstract: Content-based image retrieval (CBIR) uses the
content of images to characterize and index them. This paper focuses
on retrieving images by separating each image into its three colour
components, R, G and B, to which the Discrete Wavelet Transform is
applied. A wavelet-based Generalized Gaussian Density (GGD) model is
then used to model the coefficients from the wavelet transforms.
After that, the Histogram of Oriented Gradients (HOG) is applied to
extract feature vectors, and a relevance feedback technique is used.
The performance of this approach is evaluated in terms of precision,
and the results confirm that this method is efficient for image
retrieval.
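The per-channel wavelet step can be sketched with a one-level 2-D Haar transform (the abstract does not specify the wavelet, so Haar is our assumption):

```python
import numpy as np

def haar_dwt2(channel):
    """One-level 2-D Haar DWT of a single colour channel (even-sized input);
    returns the approximation (LL) and detail (LH, HL, HH) sub-bands."""
    c = np.asarray(channel, dtype=float)
    a = (c[0::2] + c[1::2]) / 2   # row averages
    d = (c[0::2] - c[1::2]) / 2   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH
```

Applying `haar_dwt2` to each of the R, G and B channels yields the sub-band coefficients to which a GGD model would then be fitted.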
Abstract: It is widely believed that the mobile device is a promising technology for enabling the third wave of electronic commerce. Mobile devices have changed the way companies do business. Many applications are under development or being incorporated into business processes, and mobile applications are today a vital component of any industry strategy. One of the greatest benefits of selling merchandise and providing services through a mobile application is that it widens a company's customer base significantly. Mobile applications are accessible to interested customers across regional and international borders in different electronic business (e-business) areas. But there is a dark side to this success story: the security risks associated with mobile devices and applications are very significant. This paper introduces a broad risk analysis of the various threats, vulnerabilities, and risks in mobile e-business applications and presents some important risk mitigation approaches. It reviews and compares two different frameworks for security assurance in mobile e-business applications and, based on the comparison, suggests some recommendations for application developers and business owners in the mobile e-business application development process.