Abstract: The increasing complexity of software development based on peer-to-peer networks makes it necessary to create new frameworks in order to simplify the developer's task. Additionally, some applications, e.g. fire detection or security alarms, may require real-time constraints, and a high-level definition of these features eases application development. In this paper, a service model based on a component model with real-time features is proposed. The high-level model abstracts developers from implementation tasks such as discovery, communication, security and real-time requirements. The model is oriented to deploying services on small mobile devices, such as sensors, mobile phones and PDAs, where computation is lightweight. Services can be composed with one another by means of the port concept to form complex ad-hoc systems, and their implementation is carried out using a component language called UM-RTCOM. In order to apply our proposals, a fire detection application is described.
Abstract: In this paper a new method for increasing the speed of the SAGCM-APD is proposed. Utilizing carrier rate equations in different regions of the structure, a circuit model of the structure is obtained. In addition to the frequency response, the effect of the newly added charge layer on transient parameters such as slew rate and rise and fall times has been considered. Finally, by trading off physical parameters such as the widths and dopings of the different layers, a noticeable decrease in breakdown voltage has been achieved. Simulation results illustrate the improvements of the proposed structure in comparison with conventional SAGCM-APD structures.
Abstract: In this paper we analyze the core issues affecting software architecture in enterprise projects, where a large number of people with different backgrounds are involved and complex business, management and technical problems exist. We first give the general features of typical enterprise projects and then present the foundations of
software architectures. The detailed analysis of core issues affecting
software architecture in software development phases is given. We
focus on three main areas in each development phase: people,
process, and management related issues, structural (product) issues,
and technology related issues. After we point out core issues and
problems in these main areas, we give recommendations for
designing good architecture. We have observed these core issues, and the importance of following best software development practices, in many large enterprise commercial and military projects over about 10 years of experience, during which we also developed some novel practices.
Abstract: An electrocardiogram (ECG) feature extraction system based on the calculation of complex resonance frequencies employing Prony's method is developed. Prony's method is applied to five different classes of ECG arrhythmia signals, modeling each as a finite sum of exponentials that depends on the signal's poles and resonant complex frequencies. Those poles and resonance frequencies are evaluated for a large number of examples of each arrhythmia. The ECG signals of lead II (ML II) were taken from the MIT-BIH database for five different types: ventricular couplet (VC), ventricular tachycardia (VT), ventricular bigeminy (VB), ventricular fibrillation (VF) and normal (NR). This novel method can be extended to any number of arrhythmias.
Different classification techniques were tried using neural networks
(NN), K nearest neighbor (KNN), linear discriminant analysis (LDA)
and multi-class support vector machine (MC-SVM).
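The pole extraction at the heart of this approach can be sketched as follows. This is an illustrative Prony-style implementation (linear prediction followed by polynomial root finding), not the authors' code; the function name and signature are our own. Resonance frequencies and damping factors follow from the angles and magnitudes of the returned poles.

```python
import numpy as np

def prony(x, p):
    """Estimate the p poles of signal x modeled as a finite sum of
    exponentials (Prony's method: linear prediction + root finding)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Linear-prediction system: x[n] + a1*x[n-1] + ... + ap*x[n-p] = 0
    X = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, -x[p:N], rcond=None)
    # Poles are the roots of 1 + a1*z^-1 + ... + ap*z^-p
    return np.roots(np.concatenate(([1.0], a)))
```

For a sampled signal, a pole z maps to a complex resonance frequency via f = fs * angle(z) / (2*pi) and damping fs * log|z|.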
Abstract: In this paper a multivariable predictive PID controller has been implemented on a multi-input multi-output control problem, the quadruple tank system, and compared with a simple multiloop PI controller. One of the salient features of this system is an adjustable transmission zero, which can be tuned to operate in both minimum and non-minimum phase configurations through the flow distribution to the upper and lower tanks. Stability and performance analyses have also been carried out for this highly interactive two-input two-output system, in both the minimum and non-minimum phase cases. Simulations of the control system revealed that better performance is obtained with the predictive PID design.
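The multiloop PI baseline can be illustrated with a minimal sketch: one discrete PI loop regulating a simplified first-order tank-level model. The gains, model constants and the single-tank simplification are our own illustrative assumptions, not the quadruple-tank parameters used in the paper.

```python
class PI:
    """One discrete PI loop, as used per channel in a multiloop scheme."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.dt          # accumulate integral action
        return self.kp * e + self.ki * self.integral

# Closed loop on a simplified tank-level model dh/dt = k*u - c*h
dt, k, c = 0.1, 1.0, 0.5
ctrl = PI(kp=2.0, ki=1.0, dt=dt)
h = 0.0
for _ in range(500):
    u = ctrl.step(1.0, h)           # track a level setpoint of 1.0
    h += dt * (k * u - c * h)       # Euler integration of the tank dynamics
```

The integral term drives the steady-state error to zero, so the level settles at the setpoint.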
Abstract: In current common research reports, salient regions
are usually defined as those regions that could present the main
meaningful or semantic contents. However, there is no uniform saliency metric that describes the saliency of implicit image regions. Most common metrics treat as salient those regions that exhibit many abrupt changes or unpredictable characteristics, but such metrics fail to detect salient, useful regions with flat textures. In fact, according to human semantic perception, color and texture distinctions are the main characteristics that distinguish different regions. Thus, we present a novel saliency
metric coupled with color and texture features, and its corresponding
salient region extraction methods. In order to evaluate the
corresponding saliency values of implicit regions in one image, three
main colors and multi-resolution Gabor features are respectively used
for color and texture features. For each region, the saliency value is computed as the sum of its Euclidean distances to all other regions in the color and texture spaces. A special synthesized image
and several practical images with main salient regions are used to
evaluate the performance of the proposed saliency metric and other
several common metrics, i.e., scale saliency, wavelet transform
modulus maxima point density, and important index based metrics.
Experimental results verify that the proposed saliency metric achieves more robust performance than these common saliency metrics.
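The region-saliency computation described above, the sum of a region's Euclidean distances to all other regions in feature space, can be sketched as follows. The flat feature layout is a hypothetical stand-in for the paper's three main colors plus multi-resolution Gabor texture features.

```python
import numpy as np

def region_saliency(features):
    """Saliency of each region = sum of Euclidean distances from its
    (color + texture) feature vector to all other regions' vectors."""
    F = np.asarray(features, dtype=float)      # shape (n_regions, n_features)
    diff = F[:, None, :] - F[None, :, :]       # pairwise differences
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise Euclidean distances
    return dist.sum(axis=1)                    # row sums: per-region saliency
```

A region whose features lie far from everything else (e.g. a flat but distinctly colored patch) scores high even without abrupt local changes, which is the point of the metric.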
Abstract: Complexity, as a theoretical background, has made it easier to understand and explain the features and dynamic behavior
of various complex systems. As the common theoretical background
has confirmed, borrowing the terminology for design from the
natural sciences has helped to control and understand urban
complexity. Phenomena like self-organization, evolution and
adaptation are appropriate to describe the formerly inaccessible
characteristics of the complex environment in unpredictable bottom-up systems. Increased computing capacity has been a key element in
capturing the chaotic nature of these systems.
A paradigm shift in urban planning and architectural design has forced us to give up the illusion of total control over the urban environment, and consequently to seek novel methods for steering its development. New methods using dynamic modeling
have offered a real option for more thorough understanding of
complexity and urban processes. At best new approaches may renew
the design processes so that we get a better grip on the complex
world via more flexible processes, support urban environmental
diversity and respond to our needs beyond basic welfare by liberating
ourselves from the standardized minimalism.
A complex system and its features are as such beyond human ethics. Self-organization and evolution are neither good nor bad; their mechanisms are by nature devoid of reason. They are common to urban dynamics and natural processes alike. They are features of a complex system, and they cannot be prevented. Yet their dynamics can be studied and supported.
The paradigm of complexity and the new design approaches have been
criticized for a lack of humanity and morality, but the ethical
implications of scientific or computational design processes have not
been much discussed. It is important to distinguish the (unexciting)
ethics of the theory and tools from the ethics of computer aided
processes based on ethical decisions. Urban planning and architecture
cannot be based on the survival of the fittest; however, the natural
dynamics of the system cannot be impeded on grounds of being
“non-human".
In this paper the ethical challenges of using dynamic models are contemplated in light of a few examples from new architecture, dynamic urban models, and the literature. It is suggested that ethical
challenges in computational design processes could be reframed
under the concepts of responsibility and transparency.
Abstract: The speech signal conveys information about the
identity of the speaker. The area of speaker identification is
concerned with extracting the identity of the person speaking the
utterance. As speech interaction with computers becomes more pervasive, in activities such as telephony, financial transactions and information retrieval from speech databases, the utility of automatically identifying a speaker based solely on vocal characteristics grows. This paper focuses on text-dependent speaker identification, which deals with detecting a particular speaker from a known population. The system prompts the user to provide a speech utterance, identifies the user by comparing the codebook of that utterance with those stored in the database, and lists the most likely speakers who could have produced the utterance. The speech signal is recorded for N speakers and the features are then extracted. Feature extraction is done by means of LPC coefficients, by calculating the AMDF, and by the DFT. A neural network is
trained by applying these features as input parameters. The features
are stored in templates for later comparison. The features of the speaker to be identified are extracted and compared with the stored templates using the back-propagation algorithm: the input to the trained network is the extracted features of the speaker to be identified, the network performs its weight adjustment, and the best match is found to identify the speaker. The number of epochs required to reach the target determines the network performance.
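Of the features mentioned, the AMDF (average magnitude difference function) is simple to sketch: for each lag it measures the mean absolute difference between a frame and its shifted copy, and its dips mark the pitch period. This is an illustrative implementation, not the authors' code.

```python
import numpy as np

def amdf(frame, max_lag):
    """Average Magnitude Difference Function of one speech frame.
    Dips in the AMDF indicate the pitch period (in samples)."""
    frame = np.asarray(frame, dtype=float)
    N = len(frame)
    return np.array([np.mean(np.abs(frame[:N - k] - frame[k:]))
                     for k in range(1, max_lag + 1)])
```

For a periodic frame the AMDF dips to near zero at the period, so the lag of the minimum is a pitch estimate.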
Abstract: The goal of this project is to design a system to recognize voice commands. Most voice recognition systems contain two main modules, "feature extraction" and "feature matching". In this project, the MFCC algorithm is used to implement the feature extraction module: the cepstral coefficients are calculated on the mel frequency scale. Vector quantization (VQ) is used to reduce the amount of data and decrease computation time. In the feature matching stage, Euclidean distance is applied as the similarity criterion. Owing to the high accuracy of the algorithms used, the accuracy of this voice command system is high: with at least five repetitions of each command in a single training session, and then two repetitions in each testing session, a zero error rate in command recognition is achieved.
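The feature-matching stage, nearest-codeword Euclidean distortion against each command's VQ codebook, can be sketched like this. The tiny codebooks and command names are illustrative placeholders, not trained MFCC codebooks.

```python
import numpy as np

def vq_distortion(features, codebook):
    """Mean Euclidean distance from each feature vector to its nearest codeword."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def match_command(features, codebooks):
    """Return the command whose VQ codebook yields the lowest distortion."""
    return min(codebooks, key=lambda name: vq_distortion(features, codebooks[name]))
```

In a real system `features` would be the MFCC vectors of the utterance and each codebook the VQ-trained centroids for one command.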
Abstract: Investment in a constructed facility represents a cost in
the short term that returns benefits only over the long term use of the
facility. Thus, the costs occur earlier than the benefits, and the owners
of facilities must obtain the capital resources to finance the costs of
construction. A project cannot proceed without adequate financing, and the cost of providing adequate financing can be quite large. For these reasons, attention to project finance is an important aspect of project management. Finance is also a concern for the other organizations involved in a project, such as the general contractor and material suppliers. Unless an owner immediately and completely covers the costs incurred by each participant, these organizations face financing problems of their own. At a more general level, project finance is only one aspect of the general problem of corporate finance: if numerous projects are considered and financed together, then the net cash flow requirements constitute the corporate financing problem for capital investment. Whether project finance is performed at the project or at the corporate level does not alter the basic financing problem. In this paper, we first
consider facility financing from the owner's perspective, with due
consideration for its interaction with other organizations involved in a
project. Later, we discuss the problems of construction financing
which are crucial to the profitability and solvency of construction
contractors. The objective of this paper is to present the steps utilized to determine the best combination for minimum project financing. The proposed model considers financing, schedule and maximum net area, and is called Project Financing and Schedule Integration using Genetic Algorithms (PFSIGA). The model is intended to determine further steps (maximum net area) for any project with subprojects. An illustrative example demonstrates the features of this technique, and model verification and testing are also considered.
Abstract: To improve the classification rate of the face
recognition, features combination and a novel non-linear kernel are
proposed. The feature vector concatenates local binary patterns at three different radii with Gabor wavelet features. The Gabor features are
the mean, standard deviation and the skew of each scaling and
orientation parameter. The aim of the new kernel is to incorporate
the power of the kernel methods with the optimal balance between
the features. To verify the effectiveness of the proposed method, numerous methods are tested on four datasets, which consist of various emotions, orientations, configurations,
expressions and lighting conditions. Empirical results show the
superiority of the proposed technique when compared to other
methods.
Abstract: The AL-MAJIRI school system is a variant of private
Arabic and Islamic schools which cater for the religious and moral development of Muslims. In the past, the system produced clerics,
scholars, judges, religious reformers, eminent teachers and great men who are worthy of emulation, particularly in northern Nigeria.
Gradually, the system lost its glory but continued to discharge its
educational responsibilities to a certain extent. This paper takes a
look at the activities of the AL-MAJIRI schools. The introduction
provides background information about Nigeria where the schools
operate. This is followed by an overview of the Nigerian educational system, the nature and the features of the AL-MAJIRI school system,
its weaknesses and the current challenges facing the schools. The paper concludes with emphasis on the urgent need for a comprehensive reform of the curriculum content of the schools. The step by step procedure required for the reform is discussed.
Abstract: This paper describes a CMOS four-quadrant
multiplier intended for use in the front-end receiver by utilizing the
square-law characteristic of the MOS transistor in the saturation
region. The circuit is based on 0.35 um CMOS technology simulated
using HSPICE software. The mixer's third-order intermodulation performance is characterized; its power consumption is 271 uW from a single 1.2 V power supply. One of the features of the proposed design is the use of a two-MOS-transistor limiting technique to reduce the supply voltage, which in turn reduces the power consumption. This technique provides a GHz-bandwidth
response and low power consumption.
Abstract: The necessity of accurate and timely field data is
shared among organizations engaged in fundamentally different
activities, public services or commercial operations. Basically, there
are three major components in the process of the qualitative research:
data collection, interpretation and organization of data, and analytic
process. Representative technological advancements have been made in mobile devices (mobile phones, PDAs, tablets, laptops, etc.), resources that can potentially be applied to the data collection activity of field research in order to improve this process.
This paper presents and discusses the main features of a mobile phone based solution for field data collection, composed of three modules: a survey editor, a server web application and a client mobile application. The data gathering process begins with the survey creation module, which enables the production of tailored questionnaires. The field workforce receives the questionnaire(s) on their mobile phones, collects the interview responses, and sends them back to a server for immediate analysis.
Abstract: Increasing growth of information volume in the
internet causes an increasing need to develop new (semi)automatic
methods for retrieval of documents and ranking them according to
their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods; this combination preserves ranking precision without losing speed. The approach exploits natural language processing techniques to extract phrases from the documents and the query and to stem words. Then an ontology-based conceptual method is used to annotate documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be done flexibly and along various aspects. The annotated documents and the
expanded query will be processed to compute the relevance degree
exploiting statistical methods. The outstanding features of our
approach are (1) combining conceptual, statistical and linguistic
features of documents, (2) expanding the query with its related
concepts before comparing to documents, (3) extracting and using
both words and phrases to compute relevance degree, (4) improving
the spread activation algorithm to do the expansion based on
weighted combination of different conceptual relationships and (5)
allowing variable document vector dimensions. A ranking system
called ORank is developed to implement and test the proposed
model. The test results will be included at the end of the paper.
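The spreading activation step used for query expansion can be illustrated with a minimal sketch over a weighted concept graph. The decay factor, iteration count and max-combination rule here are our own simplifications of the weighted-relationship scheme the abstract describes, not ORank's actual parameters.

```python
def spread_activation(graph, seeds, decay=0.5, iters=2):
    """Weighted spreading activation over a concept graph for query expansion.
    graph: {concept: [(neighbor, relation_weight), ...]}
    seeds: initial query concepts, each starting with activation 1.0."""
    act = {c: 1.0 for c in seeds}
    for _ in range(iters):
        new = dict(act)
        for concept, a in act.items():
            for neighbor, w in graph.get(concept, []):
                # Propagate damped activation; keep the strongest path
                new[neighbor] = max(new.get(neighbor, 0.0), a * w * decay)
        act = new
    return act
```

Concepts whose activation exceeds a threshold would then be added to the expanded query, weighted by their activation.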
Abstract: In this paper a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient, yielding better recognition results, and outperforming the common DCT technique of face recognition. In pattern
recognition techniques, discriminative information of image
increases with increase in resolution to a certain extent, consequently
face recognition results change with change in face image resolution
and provide optimal results when arriving at a certain resolution
level. In the proposed model, an image decimation algorithm is first applied to the face image for dimension reduction to the resolution level that provides the best recognition results. Owing to the computational speed and feature extraction potential of the Discrete Cosine Transform (DCT), the DCT is then applied to the decimated face image, and a subset of its coefficients from low to mid frequencies, which represents the face adequately and provides the best recognition results, is retained. A tradeoff between the decimation factor,
number of DCT coefficients retained and recognition rate with
minimum computation is obtained. Preprocessing of the image is
carried out to increase its robustness against variations in poses and
illumination level. This new model has been tested on different databases, including the ORL, Yale and EME color databases.
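The decimate-then-DCT feature pipeline can be sketched in a few lines. The decimation factor, the coefficient-block size and the hand-rolled orthonormal DCT matrix are illustrative choices, not the paper's tuned values.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II transform matrix of size N x N."""
    k, n = np.arange(N)[:, None], np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C

def dct_features(image, decimate=2, block=8):
    """Decimate the face image, apply the 2-D DCT, and keep the
    low-to-mid frequency block of coefficients as the feature vector."""
    small = np.asarray(image, dtype=float)[::decimate, ::decimate]
    C = dct_matrix(small.shape[0])      # assumes a square decimated image
    coeffs = C @ small @ C.T            # separable 2-D DCT
    return coeffs[:block, :block].ravel()
```

The low-frequency block captures the coarse face structure; the trade-off in the paper is over `decimate` and `block` against recognition rate.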
Abstract: The basic objective of this study is to create a regression analysis method that can estimate the length of a plastic hinge, an important design parameter, by making use of the outcomes (lateral load-lateral displacement hysteretic curves) of experimental studies conducted on square reinforced concrete columns. For this aim, the results of 170 different square reinforced concrete column tests have been collected from the existing literature. The parameters thought to affect the plastic hinge length, such as cross-section properties, features of the material used, axial loading level,
confinement of the column, longitudinal reinforcement bars in the
columns etc. have been obtained from these 170 different square
reinforced concrete column tests. In the study, regression analyses based on the experimental test results have been separately applied to determine the plastic hinge length and compared with each other. In addition, the outcomes of the mentioned methods for determining the plastic hinge length of reinforced concrete columns have been compared to other methods available in the literature.
Abstract: In most of the cases, natural disasters lead to the
necessity of evacuating people. The quality of evacuation
management is dramatically improved by the use of information
provided by decision support systems, which become indispensable
in the case of large-scale evacuation operations. This paper presents a best-practice case study. In November 2007, officers from the Emergency Situations Inspectorate "Crisana" of Bihor County, Romania, participated in a cross-border evacuation exercise in which 700 people were evacuated from the Netherlands to Belgium. One of the main objectives of the exercise was to test four different decision support systems. Afterwards, based on that experience, a software system called TEVAC (Trans-Border Evacuation) was developed "in house" by the experts of this institution. This original
software system was successfully tested in September 2008, during
the deployment of the international exercise EU-HUROMEX 2008,
the scenario involving real evacuation of 200 persons from Hungary
to Romania. Based on the lessons learned and the results, since April 2009 the TEVAC software has been used by all Emergency Situations Inspectorates across Romania.
Abstract: A major requirement for Grid application developers is ensuring the performance and scalability of their applications. Predicting the performance of an application demands understanding its specific features. This paper discusses performance modeling and prediction of multi-agent based simulation (MABS) applications on the Grid. An experiment conducted using a synthetic MABS workload explains the key features to be included in the performance model. The results obtained from the experiment show that the prediction model developed for the synthetic workload can be used as a guideline for estimating the performance characteristics of real-world simulation applications.
Abstract: HSDPA is a new feature which is introduced in
Release-5 specifications of the 3GPP WCDMA/UTRA standard to
realize higher speed data rate together with lower round-trip times.
Moreover, the HSDPA concept offers outstanding improvement of
packet throughput and also significantly reduces the packet call
transfer delay as compared to the Release-99 DSCH. Until now the HSDPA system has used turbo coding, one of the best coding techniques for approaching the Shannon limit. However, the main drawbacks of turbo coding are high decoding complexity and high latency, which make it unsuitable for some applications such as satellite communications, where the transmission distance itself introduces latency due to the limited speed of light. Hence, in this paper it is proposed to use LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity. LDPC coding does increase the encoding complexity: though the complexity of the transmitter at the NodeB increases, the end user is at an advantage in terms of receiver complexity and bit-error rate. In this paper the LDPC encoder is implemented using a sparse parity-check matrix H to generate the codeword, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that with LDPC coding the BER drops sharply as the number of iterations increases, with only a small increase in Eb/No, which is not possible with turbo coding. The same BER was also achieved using fewer iterations, and hence the latency and receiver complexity are decreased for LDPC coding.
HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better-quality, more reliable and more robust data services. In other words, while realistic data rates are only a few Mbps, the actual quality and the number of users served will improve significantly.
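The sparse-H decoding loop can be illustrated with a toy example. For brevity this sketch uses hard-decision bit-flipping decoding, a simpler relative of the belief propagation algorithm the paper actually employs; the parity-check matrix H and codeword below are toy values, not an HSDPA code.

```python
import numpy as np

# Toy sparse parity-check matrix H (rows = checks, columns = codeword bits)
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iters=10):
    """Hard-decision bit-flipping LDPC decoding: repeatedly flip the bit
    involved in the most failed parity checks until H @ c = 0 (mod 2)."""
    c = r.copy()
    for _ in range(max_iters):
        syndrome = H @ c % 2
        if not syndrome.any():
            break                       # valid codeword reached
        fails = H.T @ syndrome          # failed-check count per bit
        c[np.argmax(fails)] ^= 1        # flip the most suspect bit
    return c
```

Full belief propagation replaces the hard flips with iterative exchange of soft log-likelihood messages between bit and check nodes over the same sparse H, which is what gives the sharp BER drop with iterations reported above.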