Abstract: Mobile systems are powered by batteries, so reducing power consumption is key to increasing their autonomy. Most such systems deal with time-varying signals. We therefore aim to achieve power efficiency by adapting the system's processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain and adopting signal-driven sampling and processing. In this context, a signal-driven filtering technique based on level-crossing sampling is devised. It adapts the sampling frequency and the filter order by analysing the local variations of the input signal, thereby correlating the processing activity with the signal variations. This leads to a drastic computational gain of the proposed technique compared to the classical one.
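The core principle can be illustrated with a minimal level-crossing sampler (our sketch of the general idea on a discrete time grid, not the paper's adaptive-order filtering chain; the signal, levels and spacing below are illustrative):

```python
# Minimal sketch of level-crossing sampling: a sample is taken only when
# the signal crosses one of a set of uniformly spaced quantization
# levels, so quiet signal segments generate almost no samples.
import math

def level_crossing_sample(signal, times, delta=0.25):
    """Return (t, value) pairs captured whenever the signal crosses a
    quantization level; levels are spaced `delta` apart."""
    samples = []
    prev_level = math.floor(signal[0] / delta)
    for t, x in zip(times, signal):
        level = math.floor(x / delta)
        if level != prev_level:           # a level boundary was crossed
            samples.append((t, level * delta))
            prev_level = level
    return samples

times = [i / 1000 for i in range(1000)]
# Bursty test signal: flat, then an active 50 Hz sine burst, then flat.
sig = [math.sin(2 * math.pi * 50 * t) if 0.4 <= t < 0.6 else 0.0
       for t in times]
samples = level_crossing_sample(sig, times)
```

All samples fall inside the active burst, so the processing activity tracks the signal's local variations, which is the source of the computational gain over uniform-rate sampling.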
Abstract: In this paper we propose two- and three-stage still grayscale image compressors based on Block Truncation Coding (BTC). Our schemes combine four techniques to reduce the bit rate: quadtree segmentation, bit-plane omission, bit-plane coding using 32 visual patterns, and interpolative bit-plane coding. Experimental results show that the proposed schemes achieve an average bit rate of 0.46 bits per pixel (bpp) on standard grayscale images with an average PSNR of 30.25, which is better than the results of existing similar BTC-based methods.
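For context, the classical first stage can be sketched as plain BTC on a single 4x4 block (the standard moment-preserving quantizer; the paper's multi-stage scheme layers quadtree segmentation, bit-plane omission and pattern coding on top of this, and the pixel values below are illustrative):

```python
# Classical Block Truncation Coding of one 4x4 block: keep the bit plane
# plus two reconstruction levels chosen to preserve block mean and
# standard deviation.

def btc_encode(block):
    """block: flat list of 16 pixel values. Returns (low, high, bits)."""
    n = len(block)
    mean = sum(block) / n
    std = (sum((p - mean) ** 2 for p in block) / n) ** 0.5
    bits = [1 if p >= mean else 0 for p in block]
    q = sum(bits)                       # pixels at or above the mean
    if q in (0, n):                     # flat block: one level suffices
        return round(mean), round(mean), bits
    low = round(mean - std * (q / (n - q)) ** 0.5)
    high = round(mean + std * ((n - q) / q) ** 0.5)
    return low, high, bits

def btc_decode(low, high, bits):
    return [high if b else low for b in bits]

block = [12, 14, 200, 202, 13, 15, 198, 201,
         12, 16, 199, 200, 14, 13, 202, 203]
low, high, bits = btc_encode(block)
recon = btc_decode(low, high, bits)
```

Plain BTC costs 2 bpp for 8-bit images (a 16-bit plane plus two 8-bit levels per 4x4 block); the combination of the four techniques above is what pushes the rate down toward 0.46 bpp.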
Abstract: A new method that identifies a coupled fluid-structure system with a reduced set of state variables is presented. Assuming that the structural model is known a priori, either from an analysis or a test, and using linear transformations between structural and aeroelastic states, it is possible to deduce aerodynamic information from sampled time histories of the aeroelastic system. More specifically, given a finite set of structural modes, the method extracts the generalized aerodynamic force matrix corresponding to these mode shapes. Once the aerodynamic forces are known, an aeroelastic reduced-order model can be constructed in a discrete-time, state-space format by coupling the structural model and the aerodynamic system. The resulting reduced-order model is suitable for constant-Mach, varying-density analysis.
Abstract: Benefits to the organisation are just as important as technical ability when it comes to software success. The challenge is to provide industry with professionals who understand this. In other words: how do we teach computer engineering students to look beyond technology and consider the benefits of software to organisations? This paper reports on the conceptual design of a section of the computer networks module aimed at sensitising students to the organisational context.
Checkland focuses on the different worldviews represented by the various role players in an organisation. He developed the Soft Systems Methodology, which guides purposeful action in organisations while incorporating different worldviews into the modelling process. If we can sensitise students to these methods, they are likely to appreciate the wider context in which system software is applied. This paper provides literature on these concepts as well as detail on how the students will be guided to adopt them.
Abstract: A software metric is a measure of some property of a piece of software or its specification. The aim of this paper is to present an application of evolutionary decision trees in software engineering to classify software modules according to whether or not they have one or more reported defects. To this end, several metrics are used to detect the class of modules with or without defects.
Abstract: E-mail has become an important means of electronic communication, but the viability of its usage is marred by Unsolicited Bulk E-mail (UBE) messages. UBE comes in many types, including pornographic, virus-infected and 'cry-for-help' messages, as well as fake and fraudulent offers for jobs, winnings and medicines. UBE poses technical and socio-economic challenges to the usage of e-mail. To meet this challenge and combat this menace, we need to understand UBE. Towards this end, the current paper presents a content-based textual analysis of more than 2700 body-enhancement medicinal UBE messages. Technically, this is an application of text parsing and tokenization to unstructured textual documents, which we approach using Bag of Words (BOW) and Vector Space Document Model techniques. We attempt to identify the most frequently occurring lexis in UBE documents that advertise various body-enhancement products, and an analysis of the top 100 such lexis is presented. We exhibit the relationship between the occurrence of a word from the identified lexis set in a given UBE message and the probability that the message advertises a fake medicinal product. To the best of our knowledge and our survey of the related literature, this is the first formal attempt to identify the most frequently occurring lexis in such UBE through textual analysis. Finally, this is a sincere attempt to raise alertness against, and to mitigate the threat of, such luring but fake UBE.
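The tokenization and Bag-of-Words pipeline described above can be sketched as follows (a toy four-message corpus of our own invention, not the paper's 2700-message dataset; the word-level probability estimate mirrors the lexis-to-fake-UBE relationship the abstract mentions):

```python
# Tokenize messages, build the Bag-of-Words term-frequency table, list
# the most frequent lexis, and estimate P(fake-medicinal | word occurs).
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

corpus = [  # (message, is_fake_medicinal) -- illustrative only
    ("Amazing pills for instant body enhancement, order pills now", True),
    ("Cheap enhancement pills, guaranteed results, order today", True),
    ("Meeting moved to Monday, please confirm attendance", False),
    ("Project status report attached, review before Monday", False),
]

bow = Counter()                 # corpus-wide term frequencies
word_in_fake = Counter()        # messages containing word that are fake
word_total = Counter()          # messages containing word at all
for text, fake in corpus:
    tokens = tokenize(text)
    bow.update(tokens)
    for w in set(tokens):
        word_total[w] += 1
        if fake:
            word_in_fake[w] += 1

top_lexis = [w for w, _ in bow.most_common(5)]

def p_fake_given(word):
    """Fraction of messages containing `word` that are fake-medicinal."""
    return word_in_fake[word] / word_total[word] if word_total[word] else 0.0
```

On a real corpus the same counts feed a Vector Space Document Model, with each message represented by its term-frequency vector.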
Abstract: The Bond Graph, as a unified multidisciplinary tool, is widely used not only for dynamic modelling but also for Fault Detection and Isolation because of its structural and causal properties. A binary Fault Signature Matrix can be generated systematically, but making the final binary decision is not always feasible because of the problems revealed by such a method. The purpose of this paper is to introduce a methodology that improves the classical binary decision-making method, so that unknown and identical failure signatures can be treated and robustness improved. The approach consists of associating the evaluated residuals with component reliability data to build a Hybrid Bayesian Network. This network is used in two distinct inference procedures: one for the continuous part and the other for the discrete part. The continuous nodes of the network yield the prior probabilities of component failures, which are used by the inference procedure on the discrete part to compute the posterior probabilities of the failures. The developed methodology is applied to a real steam-generator pilot process.
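The discrete inference step can be illustrated with a much-simplified Bayes update (our sketch, not the paper's hybrid network: it assumes single-fault hypotheses and conditionally independent residuals, and every number below is illustrative rather than taken from the pilot process):

```python
# Posterior fault probabilities from residual evidence via Bayes' rule.
# Priors play the role of the component reliability data; signatures
# play the role of rows of the binary Fault Signature Matrix.

def fault_posteriors(priors, signatures, residuals,
                     p_fire=0.95, p_false=0.05):
    """priors: {fault: prior prob}; signatures: {fault: expected 0/1
    residual pattern}; residuals: observed 0/1 tuple."""
    unnorm = {}
    for fault, prior in priors.items():
        like = 1.0
        for expected, observed in zip(signatures[fault], residuals):
            p_one = p_fire if expected else p_false  # P(residual=1 | fault)
            like *= p_one if observed else (1.0 - p_one)
        unnorm[fault] = prior * like
    z = sum(unnorm.values())
    return {f: v / z for f, v in unnorm.items()}

priors = {"pump": 0.02, "valve": 0.01, "sensor": 0.005}
signatures = {"pump": (1, 1, 0), "valve": (1, 0, 1), "sensor": (0, 1, 1)}
post = fault_posteriors(priors, signatures, residuals=(1, 1, 0))
```

Unlike a hard binary match against the signature matrix, the posterior remains informative even when the observed pattern matches no signature exactly, which is precisely the robustness problem the methodology addresses.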
Abstract: This paper presents two novel techniques for skew estimation of binary document images. The algorithms are based on connected-component analysis and the Hough transform, and both focus on reducing the amount of input data fed to the Hough transform. In the first method, referred to as the word-centroid approach, the centroids of selected words are used for skew detection. In the second method, referred to as the dilate-and-thin approach, selected characters are blocked and dilated to obtain word blocks, after which thinning is applied; the final image fed to the Hough transform contains only the thinned coordinates of the word blocks. The methods succeed in reducing the computational complexity of Hough-transform-based skew estimation algorithms. Promising experimental results are provided to demonstrate the effectiveness of the proposed methods.
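The word-centroid idea can be sketched in a few lines (our illustration of the principle, not the paper's exact algorithm: a Hough-style accumulator is run over word centroids only, instead of over every foreground pixel, which is where the data reduction comes from; the synthetic page below is an assumption):

```python
# Estimate document skew from word centroids: for each candidate angle,
# project the centroids perpendicular to the text direction and count
# votes per quantized distance; the true skew aligns centroids of one
# text line into a single accumulator bin.
import math
from collections import Counter

def estimate_skew(centroids, angles_deg, rho_step=1.0):
    """Return the candidate angle (degrees) with the strongest peak."""
    best_angle, best_votes = angles_deg[0], -1
    for a in angles_deg:
        t = math.radians(a)
        acc = Counter()
        for x, y in centroids:
            rho = y * math.cos(t) - x * math.sin(t)  # line offset
            acc[round(rho / rho_step)] += 1
        votes = max(acc.values())
        if votes > best_votes:
            best_angle, best_votes = a, votes
    return best_angle

# Synthetic page skewed by 5 degrees: word centroids on three text lines.
s = math.radians(5.0)
centroids = [(x * math.cos(s) - y0 * math.sin(s),
              x * math.sin(s) + y0 * math.cos(s))
             for y0 in (0, 40, 80) for x in range(0, 400, 25)]
angles = [a / 2.0 for a in range(-20, 21)]  # -10..10 deg, 0.5 deg steps
```

With 48 centroids instead of thousands of foreground pixels, the accumulator work per angle drops by orders of magnitude.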
Abstract: Legionella pneumophila is involved in more than 95% of cases of severe atypical pneumonia. Infection occurs mainly by inhalation of indoor aerosols from water-coolant systems. Because some Legionella strains may be viable but not culturable, Taq-polymerase DNA amplification and semi-nested PCR were carried out to detect the Legionella-specific 16S rDNA sequence. For this purpose, 1.5-litre water samples from 77 water-coolant systems were collected from four hospitals, two nursing homes and one student hostel in Kerman city, Iran, each in a brand-new plastic bottle, during the summer season of 2006 (from April to August). The samples were filtered under sterile conditions through a Millipore membrane filter. DNA was extracted from the membrane and used for PCR to detect Legionella spp. The PCR product was then subjected to semi-nested PCR for detection of L. pneumophila. Of the 77 water samples tested by PCR, 30 (39%) were positive for Legionella spp. L. pneumophila was detected in 14 (18.2%) water samples by semi-nested PCR. From these results it can be concluded that the water-coolant systems of hospitals and nursing homes in Kerman city are highly contaminated with L. pneumophila and pose a serious concern. We therefore recommend avoiding this type of coolant system in hospitals and nursing homes.
Abstract: Exact expressions for the bit-error probability (BEP) of coherent square detection of uncoded and coded M-ary quadrature amplitude modulation (MQAM), using an array of antennas with maximal ratio combining (MRC) in a flat-fading, interference-limited system in a Nakagami-m fading environment, are derived. The analysis assumes an arbitrary number of independent and identically distributed Nakagami interferers. The results for coded MQAM are computed numerically for the (24,12) extended Golay code and compared with uncoded MQAM by plotting error probabilities versus average signal-to-interference ratio (SIR) for various values of the diversity order N and the number of distinct symbols M, in order to examine the effect of cochannel interferers on the performance of the digital communication system. The diversity gains and net gains are also tabulated to examine the performance of the system in the presence of interferers as the diversity order increases. The analytical results presented in this paper are expected to provide useful information for the design and analysis of digital communication systems with space diversity in wireless fading channels.
Abstract: Load forecasting plays a paramount role in the operation and management of power systems. Accurate estimation of future power demand for various lead times facilitates the task of generating power reliably and economically. The forecasting of future loads for a relatively large lead time (months to a few years), i.e. long-term load forecasting, is studied here. Among the various techniques used in load forecasting, artificial-intelligence techniques provide greater accuracy than conventional techniques. Fuzzy logic, a very robust artificial-intelligence technique, is applied in this paper to forecast load on a long-term basis. The paper gives a general algorithm for long-term load forecasting. The algorithm extends a short-term load-forecasting method to the long term and concentrates not only on the forecast values of load but also on the errors incorporated into the forecast; by correcting these errors, forecasts with very high accuracy have been achieved. The algorithm is demonstrated with data collected for the residential sector (LT2(a)-type load: domestic consumers). Load is determined for three consecutive years (from April 2006 to March 2009) in order to demonstrate the efficiency of the algorithm, and forecasts are made for the next two years (from April 2009 to March 2011).
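The error-correction idea can be sketched with a tiny Mamdani-style fuzzy system (our illustration only: the membership functions, rule consequents and load figures below are assumptions, not the paper's actual rule base):

```python
# Correct a raw long-term load forecast using fuzzy logic: fuzzify the
# recent percentage forecast error with triangular membership functions,
# fire three rules, and defuzzify a percentage correction by the
# weighted-average (centroid of singletons) method.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_correction(error_pct):
    mu = {
        "neg": tri(error_pct, -20.0, -10.0, 0.0),   # forecast too high
        "zero": tri(error_pct, -10.0, 0.0, 10.0),
        "pos": tri(error_pct, 0.0, 10.0, 20.0),     # forecast too low
    }
    out = {"neg": -10.0, "zero": 0.0, "pos": 10.0}  # rule consequents (%)
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den else 0.0

raw_forecast = 520.0   # MW, hypothetical figure for one forecast year
corrected = raw_forecast * (1 + fuzzy_correction(error_pct=6.0) / 100)
```

Feeding the observed error of each forecast period back through such a rule base is what lets the algorithm refine long-term forecasts beyond the raw extrapolation.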
Abstract: This paper investigates the feasibility of constructing a software multi-agent-based monitoring and classification system and using it to provide automated and accurate classification of end users developing applications in the spreadsheet domain. The agents function autonomously to provide continuous and periodic monitoring of Excel spreadsheet workbooks, resulting in the development of the Multi-Agent Classification System (MACS), which complies with the specifications of the Foundation for Intelligent Physical Agents (FIPA). Different technologies have been brought together to build MACS. The strength of the system is the integration of agent technology and the FIPA specifications with other technologies, namely Windows Communication Foundation (WCF) services, Service-Oriented Architecture (SOA), and Oracle Data Mining (ODM). Microsoft's .NET Windows-service-based agents were used to develop the monitoring agents of MACS; the .NET WCF services, together with the SOA approach, allow distribution of and communication between agents over the Web, in order to satisfy the multiple-developer aspect of monitoring and classification; and ODM was used to automate the classification phase of MACS.
Abstract: In this paper, we propose a hybrid machine learning system based on a Genetic Algorithm (GA) and Support Vector Machines (SVM) for stock market prediction. A variety of indicators from the field of technical analysis are used as input features. We also exploit the correlation between the stock prices of different companies: to forecast the price of a stock, we use technical indicators of highly correlated stocks, not only of the stock to be predicted. The genetic algorithm is used to select the most informative input features from among all the technical indicators. The results show that the hybrid GA-SVM system outperforms the stand-alone SVM system.
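The GA feature-selection loop can be sketched as follows (our dependency-free illustration: in the paper the fitness of a feature subset is SVM prediction accuracy, which we replace here with a stand-in scoring function over a hypothetical set of informative indicators):

```python
# GA over bit-mask chromosomes: each bit switches one technical
# indicator on or off; selection, one-point crossover and mutation
# evolve the most informative subset.
import random

random.seed(0)
N_FEATURES = 10
INFORMATIVE = {1, 4, 7}  # hypothetical "useful" indicators

def fitness(mask):
    # Stand-in for cross-validated SVM accuracy: reward informative
    # features, penalize extras (parsimony pressure).
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & INFORMATIVE) - 0.1 * len(chosen - INFORMATIVE)

def evolve(pop_size=30, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Swapping the stand-in fitness for SVM accuracy on held-out data turns this skeleton into the wrapper-style selection the hybrid system uses.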
Abstract: In this paper, we propose a new framework that incorporates an intelligent software agent into a crisis communication portal (CCNet) in order to send alert news to subscribed users via e-mail and other mobile services such as the Short Message Service (SMS), Multimedia Messaging Service (MMS) and General Packet Radio Service (GPRS). The mobile content can be delivered to either a mobile phone or a Personal Digital Assistant (PDA). This research shows that, with the proposed framework, the embodied conversational agent system can handle questions intelligently through our multilayer architecture. At the same time, the extended framework takes care of content delivery through a more humanoid interface on mobile devices.
Abstract: In the last decade, digital watermarking procedures have been increasingly applied to implement copyright protection of multimedia digital content distributed on the Internet. To this end, it is worth noting that many of the watermarking procedures for images and videos proposed in the literature are based on spread-spectrum techniques. However, some scepticism about the robustness and security of such watermarking procedures has arisen because of documented attacks which claim to render the inserted watermarks undetectable. On the other hand, web content providers wish to exploit watermarking procedures with flexible and efficient implementations that can be easily integrated into their existing web-service frameworks or platforms. This paper presents how a simple spread-spectrum watermarking procedure for MPEG-2 videos can be modified for use in web contexts. To this end, the proposed procedure has been made secure and robust against some well-known and dangerous attacks. Furthermore, its basic scheme has been optimized by making the insertion procedure adaptive with respect to the terminals used to view the videos and the network transactions carried out to deliver them to buyers. Finally, two different implementations of the procedure have been developed: the former is a high-performance parallel implementation, whereas the latter is a portable Java- and XML-based implementation. The paper thus demonstrates that a simple spread-spectrum watermarking procedure, with limited and appropriate modifications to the embedding scheme, can still represent a valid alternative to many other well-known and more recent watermarking procedures proposed in the literature.
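The underlying spread-spectrum mechanism can be sketched generically (the textbook additive scheme on a stand-in coefficient sequence, not the paper's adaptive MPEG-2 procedure; key, strength and signal below are illustrative):

```python
# Spread-spectrum watermarking in two steps: embed by adding a
# key-seeded pseudo-random +/-1 sequence, scaled by alpha, to transform
# coefficients; detect by normalized correlation against the same
# key-regenerated sequence.
import math
import random

def prn_sequence(key, n):
    rng = random.Random(key)            # key seeds the spreading sequence
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(coeffs, key, alpha=2.0):
    w = prn_sequence(key, len(coeffs))
    return [c + alpha * wi for c, wi in zip(coeffs, w)]

def detect(coeffs, key, threshold=0.5):
    w = prn_sequence(key, len(coeffs))
    corr = sum(c * wi for c, wi in zip(coeffs, w)) / len(coeffs)
    return corr > threshold             # correlation peak => mark present

host = [math.sin(0.1 * i) * 10 for i in range(4096)]  # stand-in coeffs
marked = embed(host, key="owner-42")
```

Detection succeeds only with the embedding key, because an uncorrelated sequence averages out over the coefficients; the paper's contribution lies in making this basic scheme adaptive and attack-resistant for web delivery.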
Abstract: In mobile computing environments, many problems arise that do not exist in distributed systems consisting of stationary hosts, because of host mobility, sudden disconnection caused by handoff in wireless networks, voluntary disconnection for efficient power consumption of a mobile host, and so on. To solve these problems, we propose the architecture of a Partial Connection Manager (PCM) in this paper. PCM creates a limited number of mobile agents according to priority, sends them to servers in parallel, and combines the results in order to process the user's request rapidly. By applying the proposed PCM to a mobile market agent service, we show that the mobile-agent technique is well suited to the mobile computing environment and to managing the partial-connection problem.
Abstract: In this paper, a model for an information retrieval system is proposed which takes into account that knowledge about documents and about the information needs of users is dynamic. Two methods are combined, one qualitative (symbolic) and the other quantitative (numeric), both deemed suitable for many clustering contexts, data analysis, concept exploration and knowledge discovery. These two methods may be classified as inductive learning techniques. In this model, they are introduced to build "long-term" knowledge about past queries and about concepts in a collection of documents. The "long-term" knowledge can guide and assist the user in formulating an initial query and can be exploited in the process of retrieving relevant information. The different kinds of knowledge are organized into different points of view. This may be considered an enrichment of the exploration level which is coherent with the concept of document/query structure.
Abstract: This paper presents an effective method for detecting traffic lights at night. First, candidate blobs of traffic lights are extracted from the RGB colour image: the input image is represented in a dominant-colour domain using the colour transform proposed by Ruta, and red- and green-dominant regions are selected as candidates. After candidate-blob selection, a shape filter is applied for noise reduction using blob information such as length, area, bounding-box area, etc. A multi-class classifier based on a Support Vector Machine (SVM) is then applied to the candidates. Three kinds of features are used: basic features such as blob width, height, centre coordinates and area; brightness-based stochastic features; and, in particular, geometric moment values between a candidate region and its adjacent region, which are proposed to improve detection performance. The proposed system is implemented on an Intel Core CPU at 2.80 GHz with 4 GB of RAM and tested on urban and rural road videos. The tests show that the proposed method, using PF, BMF and GMF, reaches a detection rate of up to 93% with an average computation time of 15 ms per frame.
Abstract: OpenMP is an API for the shared-memory multiprocessor parallel programming model. Novice OpenMP programmers often produce code containing human errors that the compiler cannot find. We investigated how the compiler copes with the common mistakes that can occur in OpenMP code, using the latest version (4.4.3) of GCC, and found that GCC compiled the codes without any errors or warnings. In this paper, a programming aid tool for OpenMP programs is presented. It can check for 12 common mistakes that a novice programmer can commit while writing OpenMP code. We demonstrate that the programming aid tool can detect various common mistakes that GCC fails to detect.
Abstract: Today, the central role of industrial robots in automation in general, and in material handling in particular, is crystal clear. Given the current status of photovoltaics (PV), and with a focus on lightweight material handling, the PV industry has become a potential candidate for introducing fresh "pick and place" robot technology. Thus, to examine the industry's needs in this regard, the best-suited applications for such robotic automation, and then the essential prerequisites in the PV industry, must first be identified. The objective of this paper is to present a holistic view of the industry trends, the general automation status, and the existing challenges facing lightweight robotic material handling in PV silicon-wafer and thin-film technologies. The results of this study show that currently no uniform pick-and-place solution prevails among PV silicon-wafer manufacturers, and that the industry calls for a new robot solution to satisfy its needs in new directions.