Abstract: In this paper, we first consider the quality of service
problems that arise when sending video data, whose real-time
requirements are pronounced, over heterogeneous wireless networks.
We then present a method for ensuring end-to-end quality of service
at the application-layer level for adaptive transmission of video
data over heterogeneous wireless networks. To do this, mechanisms in
different layers have been used: the stop mechanism, the adaptation
mechanism and graceful degradation at the application layer; the
multi-level congestion feedback mechanism in the network layer; and
the connection cut-off decision mechanism in the link layer. Finally,
the proposed method is simulated in the NS-2 software and the
achieved improvement is presented.
Abstract: As the enormous amount of on-line text on the World-Wide
Web grows, the development of methods for automatically summarizing
this text becomes more important. The primary goal of this research
is to create an efficient tool that is able to summarize large
documents automatically. We propose an Evolving Connectionist
System: an adaptive, incremental learning and knowledge
representation system that evolves its structure and functionality.
In this paper, we propose a novel approach to part-of-speech
disambiguation using a recurrent neural network, a paradigm capable
of dealing with sequential data. We observed that the connectionist
approach to text summarization has a natural way of learning
grammatical structures through experience. Experimental results show
that our approach achieves acceptable performance.
Abstract: We present a simplified equalization technique for a
π/4 differential quadrature phase shift keying (π/4-DQPSK) modulated
signal in a multipath fading environment. The proposed equalizer is
realized as a fractionally spaced adaptive decision feedback equalizer
(FS-ADFE), employing the exponential step-size least mean square
(LMS) algorithm as the adaptation technique. The main advantage of
the scheme stems from the use of the exponential step-size LMS
algorithm in the equalizer, which achieves convergence behavior
similar to that of a recursive least squares (RLS) algorithm with
significantly reduced computational complexity. To investigate the
finite-precision performance of the proposed equalizer along with the
π/4-DQPSK modem, the entire system is evaluated in a 16-bit
fixed-point digital signal processor (DSP) environment. The proposed
scheme is found to be attractive even for those cases where
equalization must be performed within a restricted number of training
samples.
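The exponential step-size idea can be illustrated with a minimal sketch, assuming a simple real-valued linear LMS equalizer and a decay schedule of our own choosing; the paper's FS-ADFE structure and exact schedule are not reproduced here.

```python
import math
import random

def lms_equalize(received, desired, n_taps=5, mu0=0.1, tau=300.0):
    """Linear LMS equalizer with an exponentially decaying step size
    (illustrative schedule, not the paper's exact algorithm)."""
    w = [0.0] * n_taps          # equalizer taps
    buf = [0.0] * n_taps        # delay line of received samples
    errs = []
    for n, (x, d) in enumerate(zip(received, desired)):
        buf = [x] + buf[:-1]    # shift the new sample in
        y = sum(wi * xi for wi, xi in zip(w, buf))
        e = d - y
        mu = mu0 * math.exp(-n / tau)   # exponential step-size schedule
        w = [wi + mu * e * xi for wi, xi in zip(w, buf)]
        errs.append(e * e)
    return w, errs

# Toy experiment: known +/-1 symbols through a mild 2-tap channel.
random.seed(0)
symbols = [random.choice((-1.0, 1.0)) for _ in range(800)]
channel = [1.0, 0.4]
rx = [sum(h * symbols[n - k] for k, h in enumerate(channel) if n - k >= 0)
      for n in range(len(symbols))]
w, errs = lms_equalize(rx, symbols)
```

The large early step accelerates initial convergence while the decayed late step reduces misadjustment, which is the trade-off the exponential schedule targets.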
Abstract: In this paper, we propose a reversible watermarking
scheme based on histogram shifting (HS) that embeds watermark bits
into H.264/AVC standard videos by modifying the last nonzero
level in the context adaptive variable length coding (CAVLC) domain.
The proposed method collects all of the last nonzero coefficients
(also called last-level coefficients) of the 4×4 sub-macroblocks in
a macroblock and uses predictions of the current last level from the
neighboring blocks' last levels to embed watermark bits. The proposed
method has low computational cost and supports reversible recovery.
The experimental results demonstrate that our scheme causes only
acceptable degradation in video quality and output bit-rate for most
test videos.
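The histogram-shifting principle behind such schemes can be illustrated on a plain list of integer coefficients. This is a generic HS sketch, not the paper's CAVLC-domain embedding; the peak/zero-bin selection rule is an assumption.

```python
from collections import Counter

def hs_embed(values, bits):
    """Classic histogram shifting: shift values between the peak bin
    and the first empty bin, then embed one bit at each peak value."""
    hist = Counter(values)
    peak = max(hist, key=hist.get)          # most frequent value
    zero = peak + 1                         # first empty bin above the peak
    while hist.get(zero, 0) != 0:
        zero += 1
    out, it = [], iter(bits)
    for v in values:
        if peak < v < zero:
            out.append(v + 1)               # shift to make room
        elif v == peak:
            out.append(v + next(it, 0))     # embed one bit at the peak
        else:
            out.append(v)
    return out, peak, zero

def hs_extract(marked, peak, zero):
    """Recover the bits and restore the original values exactly."""
    bits, restored = [], []
    for v in marked:
        if v == peak:
            bits.append(0); restored.append(peak)
        elif v == peak + 1:
            bits.append(1); restored.append(peak)
        elif peak + 1 < v <= zero:
            restored.append(v - 1)          # undo the shift
        else:
            restored.append(v)
    return bits, restored

vals = [3, 3, 2, 3, 4, 3, 5, 3, 6]
bits = [1, 0, 1, 1, 0]
marked, peak, zero = hs_embed(vals, bits)
out_bits, restored = hs_extract(marked, peak, zero)
```

The exact restoration of `vals` is what makes the scheme reversible; the paper applies the same idea to predicted last-level coefficients instead of raw values.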
Abstract: Classification of video sequences based on their content is a vital process for adaptation techniques. It helps decide which adaptation technique best fits the resource reduction requested by the client. In this paper, we use the principal feature analysis algorithm to select a reduced subset of video features. The main idea is to select only one feature from each class, based on the similarities between the features within that class. Our results show that, using this feature reduction technique, the source video features can be completely omitted from future classification of video sequences.
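As an illustration of the one-feature-per-class idea, here is a simplified, hypothetical stand-in that groups features by absolute correlation and keeps one representative per group; the actual principal feature analysis algorithm clusters principal component loadings, which is not reproduced here.

```python
def select_representatives(features, names, threshold=0.9):
    """Greedy grouping: a feature whose absolute correlation with an
    already-kept feature exceeds `threshold` joins that feature's class
    and is dropped; otherwise it becomes a new class representative.
    (Simplified stand-in for principal feature analysis.)"""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0

    kept = []
    for i, f in enumerate(features):
        if all(abs(corr(f, features[j])) < threshold for j in kept):
            kept.append(i)
    return [names[i] for i in kept]

# Hypothetical feature vectors over five video sequences.
names = ["motion", "motion_scaled", "texture"]
features = [[1, 2, 3, 4, 5],
            [2, 4, 6, 8, 10],    # perfectly correlated with "motion"
            [5, 3, 1, 4, 2]]
picked = select_representatives(features, names)
```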
Abstract: One of the main concerns in the Information Technology field is the adoption of new technologies in organizations, which may increase the pace at which these technologies are used. This study looks at the role of culture in accepting and using new technologies in organizations, and examines the effect of culture on acceptance of, and intention to use, new technology. Studies show that culture is one of the most important barriers to adopting new technologies. The model used for accepting and using new technology is the Technology Acceptance Model (TAM), while for culture and its dimensions the well-known theory by Hofstede was used. Results of the study show a significant effect of culture on intention to use new technologies. All four dimensions of culture were tested to find the strength of their relationship with behavioral intention to use new technologies. The findings indicate the important role of culture in the level of intention to use new technologies, and the different role each dimension plays in improving the adaptation process. The study suggests that efforts to transfer new technologies are most likely to be successful if the parties are culturally aligned.
Abstract: Subjective loneliness describes people who feel a
disagreeable or unacceptable lack of meaningful social relationships,
both at the quantitative and qualitative level. The studies to be
presented tested an Italian 18-item self-report loneliness measure,
that included items adapted from scales previously developed,
namely a short version of the UCLA (Russell, Peplau and Cutrona,
1980), and the 11-item Loneliness scale by De Jong-Gierveld &
Kamphuis (JGLS; 1985). The studies aimed at testing the developed
scale and at verifying whether loneliness is better conceptualized as a
unidimensional (so-called 'general loneliness') or a bidimensional
construct, namely comprising the distinct facets of social and
emotional loneliness. The loneliness questionnaire included two
single-item criterion measures of sad mood and social contact, and asked
participants to supply information on a number of socio-demographic
variables. Factorial analyses of responses obtained in two
preliminary studies, with 59 and 143 Italian participants respectively,
showed good factor loadings and subscale reliability and confirmed
that perceived loneliness clearly has two components, a social and an
emotional one, the latter measured by two subscales, a 7-item
'general' loneliness subscale derived from UCLA, and a 6–item
'emotional' scale included in the JGLS. Results further showed that
type and amount of loneliness are related, negatively, to frequency of
social contacts, and, positively, to sad mood. In a third study, data
were obtained from a nation-wide sample of 9,097 Italian subjects,
aged 12 to about 70, who completed the test on-line, on the Italian
web site of a large-audience magazine, Focus. The results again
confirmed the reliability of the component subscales, namely social,
emotional, and 'general' loneliness, and showed that they were
highly correlated with each other, especially the latter two.
Loneliness scores were significantly predicted by sex, age, education
level, sad mood and social contact, and, less so, by other variables –
e.g., geographical area and profession. The scale validity was
confirmed by the results of a fourth study, with elderly men and
women (N = 105) living at home or in residential care units. The three
subscales were significantly related, among others, to depression, and
to various measures of the extension of, and satisfaction with, social
contacts with relatives and friends. Finally, a fifth study with 315
career-starters showed that social and emotional loneliness correlate
with life satisfaction, and with measures of emotional intelligence.
Altogether, the results showed good validity and reliability, in the
tested samples, of the entire scale and of its components.
Abstract: The analysis required to detect arrhythmias and
life-threatening conditions is highly essential in today's world, and
such analysis can be accomplished by advanced non-linear processing
methods for the accurate analysis of the complex signals of heartbeat
dynamics. In this perspective, recent developments in the field of
multiscale information content have led to the Microcanonical
Multiscale Formalism (MMF). We show that this framework provides
several signal analysis techniques that are especially adapted to the
study of heartbeat dynamics. In this paper, we present first-hand
results on whether the considered heartbeat dynamics signals have
multiscale properties, by computing local predictability exponents
(LPEs) and the Unpredictable Points Manifold (UPM), and thereby
computing the singularity spectrum.
Abstract: A mobile ad hoc network consists of a set of mobile
nodes. It is a dynamic network that does not have a fixed topology.
This network does not have any infrastructure or central
administration, hence it is called an infrastructure-less network.
The changing topology makes the route from source to destination
dynamic rather than fixed: it changes with respect to time. The
nature of the network requires the algorithm to perform route
discovery, maintain routes and detect failures along the path between
two nodes [1]. This paper presents enhancements to ARA [2] to improve
the performance of the routing algorithm. ARA [2] finds routes
between nodes in a mobile ad-hoc network. The algorithm is an
on-demand source-initiated routing algorithm based on the principles
of swarm intelligence. The algorithm is adaptive, scalable and favors
load balancing. The improvements suggested in this paper are the
handling of lost ants and resource reservation.
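The swarm-intelligence routing idea can be sketched with a toy pheromone table; the class name, constants and update rules below are illustrative assumptions, not ARA's actual parameters.

```python
import random

class AntRoutingTable:
    """Toy per-destination pheromone table in the spirit of ant-based
    routing (ARA-like; all constants here are illustrative)."""

    def __init__(self, neighbors, evaporation=0.1):
        self.pheromone = {n: 1.0 for n in neighbors}
        self.evaporation = evaporation

    def reinforce(self, neighbor, amount=1.0):
        # An ant returning over `neighbor` strengthens that link.
        self.pheromone[neighbor] += amount

    def evaporate(self):
        # Periodic decay lets stale routes fade (supports adaptivity).
        for n in self.pheromone:
            self.pheromone[n] *= (1.0 - self.evaporation)

    def next_hop(self, rng=random):
        # Probabilistic choice proportional to pheromone: most traffic
        # follows strong links, but weaker links still carry some load.
        total = sum(self.pheromone.values())
        r = rng.uniform(0, total)
        acc = 0.0
        for n, p in self.pheromone.items():
            acc += p
            if r <= acc:
                return n
        return n

table = AntRoutingTable(["A", "B"])
for _ in range(5):
    table.reinforce("A")     # five ants returned via neighbor A
table.evaporate()
```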
Abstract: In an unsupervised segmentation context, we propose a bi-dimensional hidden Markov chain model (X,Y) that we adapt to the image segmentation problem. The bi-dimensional observed process Y = (Y1, Y2) is such that Y1 represents the noisy image and Y2 represents noisy supplementary information on the image, for example a noisy proportion of pixels of the same type in a neighborhood of the current pixel. The proposed model can be seen as a competitive alternative to the Hilbert-Peano scan. We propose a Bayesian algorithm to estimate the parameters of the considered model. The performance of this algorithm is globally favorable compared to that of the bi-dimensional EM algorithm, as shown through numerical and visual evaluations.
Abstract: Compliance requires effective communication
within an enterprise as well as towards a company's external
environment. This requirement commences with the
implementation of compliance within large-scale compliance
projects and persists in compliance reporting within
standard operations. On the one hand, the understanding of
compliance necessities within the organization is promoted;
on the other hand, a reduction of asymmetric information with
compliance stakeholders is achieved. To reach this goal,
central reporting must provide a consolidated view of the
statuses of different compliance efforts. A concept that could
be adapted for this purpose is the balanced scorecard by
Kaplan and Norton. This concept has not yet been analyzed in
detail concerning its adequacy for holistic compliance
reporting, starting in compliance projects and continuing into
regular compliance operations.
First, this paper evaluates whether holistic compliance
reporting can be designed using the balanced scorecard
concept. The current status of compliance reporting clearly
shows that scorecards are generally accepted as a compliance
reporting tool and are already used for corporate governance
reporting. Specialized compliance IT solutions also exist
in the market. After the scorecard's adequacy is thoroughly
examined and proven, an example strategy map is defined as
the basis from which to derive a compliance balanced
scorecard. This definition answers the question of how to
proceed in designing a compliance reporting tool.
Abstract: School physical education, through its objectives and
contents, efficiently valorizes pupils' abilities and develops them,
especially the coordinative skill component, which is the basis of
movement learning, of the development of everyday motility, and of
the special, refined motility required by the practice of certain
sports. Middle school age offers the nervous and motor substratum
needed for the acquisition of complex motor habits, a substratum that
is essential for coordinative skill. Individuals differ in the level
at which this function is performed and in the extent to which it
turns an individual into a person who is adapted and adaptable to
complex and varied situations. Spatio-temporal orientation, together
with movement combination and coupling, and with kinesthetic
differentiation, balance, motor reaction, movement transformation and
rhythm, forms the coordinative skills. From our viewpoint, these are
characteristic features with high levels of manifestation in a
complex psychomotor act, valorizing the quality of one's talent, as
well as indices pertaining to one's psychomotor intelligence and
creativity.
Abstract: This paper presents a forgetting factor scheme for variable step-size affine projection algorithms (APA). The proposed scheme uses a forgetting-processed input matrix as the projection matrix of the pseudo-inverse to estimate system deviation. This method introduces temporal weights into the projection matrix, which is typically a better model of the real error's behavior than homogeneous temporal weights. Regularization overcomes the ill-conditioning introduced by both the forgetting process and the increasing size of the input matrix. The algorithm is tested by independent trials with coloured input signals and various parameter combinations. Results show that the proposed algorithm is superior in terms of convergence rate and misadjustment compared to existing algorithms. As a special case, a variable step-size NLMS with forgetting factor is also presented in this paper.
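A minimal sketch of the special case, a variable step-size NLMS driven by a forgetting-factor estimate of the error power, is shown below. The step-size rule `mu = p / (p + 1)` and all constants are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def vss_nlms(x, d, n_taps=4, lam=0.95, eps=1e-6):
    """NLMS whose step size follows a forgetting-factor (lam) estimate
    of the error power: large recent errors give a large step, small
    errors shrink it (illustrative rule, not the paper's)."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    p = 1.0                                  # smoothed error power
    errs = []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))
        e = dn - y
        p = lam * p + (1.0 - lam) * e * e    # exponential forgetting
        mu = p / (p + 1.0)                   # large error -> larger step
        norm = sum(xi * xi for xi in buf) + eps
        w = [wi + (mu / norm) * e * xi for wi, xi in zip(w, buf)]
        errs.append(e * e)
    return w, errs

# Toy system identification: recover a known 4-tap FIR response.
random.seed(3)
x = [random.choice((-1.0, 1.0)) for _ in range(2000)]
h = [0.5, -0.3, 0.2, 0.1]
d = [sum(hk * x[n - k] for k, hk in enumerate(h) if n - k >= 0)
     for n in range(len(x))]
w, errs = vss_nlms(x, d)
```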
Abstract: This paper presents a recognition system for isolated
words such as robot commands, carried out with Time Delay Neural
Networks (TDNN), to teleoperate a robot for specific tasks such as
turn, close, etc., in an industrial environment and taking into
account the noise coming from the machines. The choice of TDNN is
based on its generalization in terms of accuracy; moreover, it acts
as a filter that allows the passage of certain desirable frequency
characteristics of speech. The goal is to determine the parameters
of this filter so as to make the system adaptable to the variability
of the speech signal and, especially, to noise; to this end, the
back-propagation technique was used in the learning phase. The
approach was applied to commands pronounced in two languages
separately, French and Arabic. The results for two test sets of 300
spoken words each are 87% and 97.6% in a neutral environment, and
77.67% and 92.67% when white Gaussian noise was added at an SNR of
35 dB.
Abstract: In this study, the use of silicon NAM (Non-Audible
Murmur) microphone in automatic speech recognition is presented.
NAM microphones are special acoustic sensors, which are attached
behind the talker's ear and can capture not only normal (audible)
speech, but also very quietly uttered speech (non-audible murmur).
As a result, NAM microphones can be applied in automatic speech
recognition systems when privacy is desired in human-machine communication.
Moreover, NAM microphones show robustness against
noise and they might be used in special systems (speech recognition,
speech conversion etc.) for sound-impaired people. Using a small
amount of training data and adaptation approaches, 93.9% word
accuracy was achieved for a 20k Japanese vocabulary dictation
task. Non-audible murmur recognition in noisy environments is also
investigated. In this study, further analysis of the NAM speech has
been made using distance measures between hidden Markov model
(HMM) pairs. Using a metric distance, it has been shown that the
spectral space of NAM speech is reduced; however, the locations of
the different phonemes of NAM are similar to those of normal speech,
and the NAM sounds are well discriminated.
Promising results in using nonlinear features are also introduced,
especially under noisy conditions.
Abstract: In this paper, a robust statistics based filter to remove salt and pepper noise in digital images is presented. The algorithm first detects the corrupted pixels, since impulse noise affects only certain pixels in the image while the remaining pixels are uncorrupted. The corrupted pixels are then replaced by an estimated value using the proposed robust statistics based filter. The proposed method performs well in removing low- to medium-density impulse noise, with detail preservation up to a noise density of 70%, compared to the standard median filter, weighted median filter, recursive weighted median filter, progressive switching median filter, signal-dependent rank-ordered mean filter, adaptive median filter and a recently proposed decision-based algorithm. The visual and quantitative results show that the proposed algorithm outperforms these filters in restoring the original image, with superior preservation of edges and better suppression of impulse noise.
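The detect-then-replace structure described above can be sketched as follows, using the median of uncorrupted neighbours as a stand-in for the paper's robust estimator; the detection rule (extreme values 0 and 255 only) is likewise a simplification.

```python
def remove_salt_pepper(img):
    """Detect-then-replace filtering (sketch): pixels at the extremes
    (0 or 255) are treated as impulses and replaced by the median of
    their uncorrupted 3x3 neighbours; all other pixels pass through
    unchanged, which is what preserves detail."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            if img[i][j] in (0, 255):            # likely impulse
                good = [img[a][b]
                        for a in range(max(0, i - 1), min(h, i + 2))
                        for b in range(max(0, j - 1), min(w, j + 2))
                        if img[a][b] not in (0, 255)]
                if good:
                    good.sort()
                    out[i][j] = good[len(good) // 2]
    return out

# Toy 3x3 image with one salt (255) and one pepper (0) pixel.
img = [[10, 12, 255],
       [11, 0, 13],
       [14, 12, 10]]
clean = remove_salt_pepper(img)
```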
Abstract: Influence diagrams (IDs) are one of the most commonly used graphical decision models for reasoning under uncertainty. The quantification of IDs, which consists in defining conditional probabilities for chance nodes and utility functions for value nodes, is not always obvious. In fact, decision makers cannot always provide exact numerical values, and in some cases it is easier for them to specify qualitative preference orders. This work proposes an adaptation of standard IDs to the qualitative framework, based on possibility theory.
Abstract: A large body of research conducted during the last 15
years has shown that the quantification of springback plays a
significant role in the sheet metal forming industry. These studies
were made with the objective of finding techniques and methods to
minimize or completely avoid this permanent physical variation.
Moreover, the use of steel and aluminum alloys in the car industry
and in aviation raises the problem of springback every day.
Determining the amount of springback in advance consequently allows
the proper design and manufacture of the tooling. The aim of this
paper is to study experimentally the influence of the blank holder
force (BHF) and of the radius of curvature of the die on the
springback, as well as their influence on the strain in various zones
of the specimen. The originality of our approach consists in tests
performed by adapting a U-type stretch-bending device to a tensile
testing machine, with which we studied and quantified the variation
of the springback according to displacement.
Abstract: Evolutionary robotics is concerned with the design of
intelligent systems with life-like properties by means of simulated
evolution. Approaches in evolutionary robotics can be categorized
according to the control structures that represent the behavior and the
parameters of the controller that undergo adaptation. The basic idea
is to automatically synthesize behaviors that enable the robot to
perform useful tasks in complex environments. The evolutionary
algorithm searches through the space of parameterized controllers
that map sensory perceptions to control actions, thus realizing a
specific robotic behavior. Further, the evolutionary algorithm
maintains and improves a population of candidate behaviors by
means of selection, recombination and mutation. A fitness function
evaluates the performance of the resulting behavior according to the
robot's task or mission. In this paper, the focus is on the use of
genetic algorithms to solve a multi-objective optimization problem
representing robot behaviors; in particular, the A-Compander Law is
employed in selecting the weight of each objective during the
optimization process. Results using an adaptive fitness function show
that this approach can efficiently react to complex tasks under
variable environments.
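The evolutionary loop described above (selection, recombination, mutation, fitness evaluation) can be sketched generically. The adaptive, A-Compander-weighted multi-objective fitness of the paper is not reproduced here; the function names, constants and toy fitness below are illustrative assumptions.

```python
import random

def evolve(fitness, genome_len=8, pop=20, gens=40, mut=0.1, rng=None):
    """Generic GA loop: evaluate, select, recombine, mutate, repeat.
    A fixed scalar fitness stands in for the paper's adaptive one."""
    rng = rng or random.Random(1)
    population = [[rng.random() for _ in range(genome_len)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:pop // 2]                 # truncation selection
        children = []
        while len(children) < pop:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut:                  # point mutation
                child[rng.randrange(genome_len)] = rng.random()
            children.append(child)
        population = children
    return max(population, key=fitness)

# Toy fitness: genomes whose genes are near 1.0 score highest.
def fitness(g):
    return -sum((x - 1.0) ** 2 for x in g)

best = evolve(fitness)
```

In the evolutionary-robotics setting the genome would encode controller parameters and the fitness would come from simulating the robot's behavior on its task.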
Abstract: This paper presents the application of an artificial
intelligence technique called adaptive tabu search to design the
controller of a buck converter. The averaging model derived from the
DQ and generalized state-space averaging methods is applied to
simulate the system during the search process. Simulations using
such an averaging model require less computational time than the
full topology model from software packages. The reported model is
therefore suitable for the work in this paper, in which repeated
calculation is needed to search for the best solution. The results
show that the proposed design technique can provide better output
waveforms than those designed with the classical method.
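A plain tabu search skeleton conveys the repeated-evaluation structure that the fast averaging model must support. The adaptive mechanisms of adaptive tabu search (e.g. back-tracking, adaptive search radius) are not shown, and the toy quadratic cost standing in for a converter simulation is an assumption.

```python
def tabu_search(cost, start, neighbors, tabu_len=5, iters=100):
    """Basic tabu search: always move to the best admissible neighbor,
    keep a short memory of recent moves to escape local minima, and
    remember the best solution ever visited."""
    current = best = start
    best_cost = cost(start)
    tabu = []
    for _ in range(iters):
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=cost)      # best admissible move
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)                     # expire the oldest tabu entry
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

# Toy use: tune a 2-D controller gain pair against a quadratic cost
# (a stand-in for the cost a converter simulation would return).
cost = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
step = 0.5
neighbors = lambda p: [(p[0] + dx, p[1] + dy)
                       for dx in (-step, 0.0, step)
                       for dy in (-step, 0.0, step)
                       if (dx, dy) != (0.0, 0.0)]
best, bc = tabu_search(cost, (0.0, 0.0), neighbors)
```

Every candidate move requires one cost evaluation, which is why a cheap averaging model matters when the cost is a full converter simulation.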