Abstract: The promises of component-based technology can only be fully realized when the system's design incorporates the necessary level of separation of concerns. The authors propose to focus on the concerns that emerge throughout the life cycle of the system and to use them as an architectural foundation for the design of a component-based framework. The proposed model comprises a set of superimposed views of the system describing its functional and non-functional concerns. This approach is illustrated by the design of a specific framework for data analysis and data acquisition, and supplemented with experience from using systems developed with this framework at the Fermi National Accelerator Laboratory.
Abstract: This paper presents an approach based on the adoption of a distributed cognition framework and a non-parametric multicriteria evaluation methodology (DEA) designed specifically to compare e-commerce websites from the consumer/user viewpoint. In particular, the framework considers a website's relative efficiency as a measure of its quality and usability. A website is modelled as a black box capable of providing the consumer/user with a set of functionalities. When the consumer/user interacts with the website to perform a task, he/she is involved in a cognitive activity, sustaining a cognitive cost to search, interpret and process information, and experiencing a sense of satisfaction. The degree of ambiguity and uncertainty he/she perceives and the required search time determine the size of the effort, and hence the amount of cognitive cost, he/she has to sustain to perform the task. Conversely, task completion and result achievement induce a sense of gratification, satisfaction and usefulness. In total, 9 variables are measured, classified into a set of 3 website macro-dimensions (user experience, site navigability and structure). The framework is applied to compare 40 websites of businesses engaged in electronic commerce in the information technology market. A questionnaire to collect subjective judgements for the websites in the sample was purposely designed and administered to 85 university students enrolled in computer science and information systems engineering undergraduate courses.
Abstract: This paper presents a hybrid approach for solving the n-queen problem by combining PSO and SA. PSO is a population-based heuristic method that sometimes becomes trapped in local maxima; SA can be used to overcome this. Although SA suffers from many iterations and long convergence times on some problems, with proper adjustment of the initial parameters, such as the temperature and the length of the temperature stages, SA guarantees convergence. In this article we use discrete PSO (owing to the nature of the n-queen problem) to reach a good local maximum, and then apply SA to escape from it. The experimental results show that our hybrid method converges faster than SA alone, especially for high-dimensional n-queen problems.
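The SA half of the hybrid can be sketched in a few lines. The sketch below is a simplified illustration, not the authors' implementation: the discrete-PSO stage is omitted and all parameter values are assumptions. It uses a permutation encoding, so rows and columns are conflict-free by construction and only diagonal attacks remain; worsening swaps are accepted with probability exp(-delta/T) to escape local optima.

```python
import math
import random

def conflicts(perm):
    """Count diagonal attacks; perm[i] is the row of the queen in
    column i, so rows and columns are conflict-free by construction."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if abs(perm[i] - perm[j]) == j - i)

def sa_refine(perm, t0=10.0, cooling=0.95, steps_per_t=50, t_min=0.01):
    """Refine a board by simulated annealing: swap two columns and
    accept worsening moves with probability exp(-delta / T)."""
    cur, best = perm[:], perm[:]
    t = t0
    while t > t_min and conflicts(best) > 0:
        for _ in range(steps_per_t):
            i, j = random.sample(range(len(cur)), 2)
            cand = cur[:]
            cand[i], cand[j] = cand[j], cand[i]
            delta = conflicts(cand) - conflicts(cur)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                cur = cand
                if conflicts(cur) < conflicts(best):
                    best = cur[:]
        t *= cooling
    return best

random.seed(0)
start = list(range(8))
random.shuffle(start)
solution = sa_refine(start)
```

In the hybrid, `start` would be the best particle found by the discrete PSO rather than a random shuffle.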
Abstract: The identification and elimination of bad
measurements is one of the basic functions of a robust state estimator
as bad data have the effect of corrupting the results of state
estimation according to the popular weighted least squares method.
However, this is a difficult problem to handle, especially when dealing with multiple interacting, conforming errors. In this paper, a self-adaptive genetic-based algorithm is proposed. The algorithm utilizes the results of the classical linearized normal residuals approach to tune the genetic operators; instead of a randomized search throughout the whole search space, the search is therefore directed, and the optimum solution is obtained at very early stages (a maximum of 5 generations). The algorithm also utilizes an accumulating database of already computed cases to reduce the computational burden to a minimum. Tests are conducted with reference to the standard IEEE test systems, and the test results are very promising.
Abstract: This work presents an approach for the measurement of mutual inductance in near-field inductive coupling. The mutual inductance between inductive circuits allows the simulation of energy transfer from reader to tag, which can be used in RFID and powerless implantable devices. It also allows one to predict the maximum voltage in the tag of the radio-frequency system.
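As a numerical illustration of the quantities involved, the mutual inductance and the induced tag voltage follow from the standard relations M = k * sqrt(L1 * L2) and V = omega * M * I. The component values and helper names below are illustrative assumptions, not taken from the paper.

```python
import math

def mutual_inductance(l_reader, l_tag, k):
    """M = k * sqrt(L1 * L2): the coupling coefficient k in [0, 1]
    relates the two self-inductances to the mutual inductance."""
    return k * math.sqrt(l_reader * l_tag)

def tag_voltage(mutual, current_amp, freq_hz):
    """Peak open-circuit voltage induced in the tag coil for a
    sinusoidal reader current: V = omega * M * I."""
    return 2 * math.pi * freq_hz * mutual * current_amp

# Hypothetical 13.56 MHz RFID link: 2 uH reader coil, 1 uH tag
# coil, weak coupling (k = 0.05), 100 mA reader current.
m = mutual_inductance(2e-6, 1e-6, 0.05)   # henries
v = tag_voltage(m, 0.1, 13.56e6)          # volts
```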
Abstract: We consider linear regression models where both the input data (the values of the independent variables) and the output data (the observations of the dependent variable) are interval-censored. We introduce a possibilistic generalization of the least squares estimator, the so-called OLS-set for the interval model. This set captures the impact on the OLS estimator of the loss of information caused by interval censoring and provides a tool for quantifying this effect. We study complexity-theoretic properties of the OLS-set. We also deal with restricted versions of the general interval linear regression model, in particular the crisp input – interval output model. We argue that natural descriptions of the OLS-set in the crisp input – interval output model cannot be computed in polynomial time. We then derive easily computable approximations of the OLS-set which can be used instead of the exact description, and illustrate the approach with an example.
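For the crisp input – interval output case, the OLS slope of a simple regression is a linear function of the output vector, so its exact range over the interval box is attained at interval endpoints chosen by coefficient sign. The sketch below is my own illustration of that idea for one predictor, not the paper's construction.

```python
def slope_bounds(x, y_lo, y_hi):
    """Crisp input, interval output: the OLS slope is
    sum_i c_i * y_i with c_i = (x_i - xbar) / Sxx, so its exact
    range over the interval box [y_lo, y_hi] is found by picking,
    per term, the endpoint matching the sign of c_i."""
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    c = [(xi - xbar) / sxx for xi in x]
    lo = sum(ci * (l if ci > 0 else h) for ci, l, h in zip(c, y_lo, y_hi))
    hi = sum(ci * (h if ci > 0 else l) for ci, l, h in zip(c, y_lo, y_hi))
    return lo, hi

# Toy data: y is roughly 2x, observed only up to +/- 0.2.
lo, hi = slope_bounds([1, 2, 3, 4],
                      [1.8, 3.8, 5.8, 7.8],
                      [2.2, 4.2, 6.2, 8.2])
```

The resulting interval [lo, hi] contains the slope 2 that generated the data, quantifying how much interval censoring blurs the estimator.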
Abstract: This paper presents a novel graphical approach for the stability analysis of a multi-induction-motor drive controlled by a single inverter. Stability issues arise in parallel-connected induction motors under unbalanced load conditions. Two powerful, widely accepted modeling and simulation software packages, MATLAB and LabVIEW, are selected to perform the stability analysis. The stability investigation is performed for different load conditions and for differences in stator and rotor resistances between the two motors. The approach is simpler and more effective than previously presented techniques for determining the stability of the parallel-connected induction motor drive under unbalanced load conditions. Approximate transfer functions are used to model the induction motors, load dynamics, speed controllers and inverter. Simulink library tools are utilized to model the entire drive scheme in MATLAB, and the stability study is carried out in LabVIEW using the control design and simulation toolkits. Simulation results are illustrated for various running conditions to demonstrate the effectiveness of the transfer function method.
Abstract: The shortest path routing problem is a multiobjective nonlinear optimization problem with constraints. This problem has been addressed by considering the quality-of-service parameters, delay and cost, as separate objectives or as a weighted sum of both. Multiobjective evolutionary algorithms can find multiple Pareto-optimal solutions in a single run, and this ability makes them attractive for solving problems with multiple and conflicting objectives. This paper uses an elitist multiobjective evolutionary algorithm based on the Non-dominated Sorting Genetic Algorithm (NSGA) for solving the dynamic shortest path routing problem in computer networks. A priority-based encoding scheme is proposed for population initialization. Elitism ensures that the best solution does not deteriorate in subsequent generations. Results for a sample test network are presented to demonstrate the ability of the proposed approach to generate well-distributed Pareto-optimal solutions of the dynamic routing problem in a single run. The results obtained by NSGA are compared with the single-objective weighting factor method, for which a Genetic Algorithm (GA) was applied.
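The core of non-dominated sorting is the Pareto-dominance test between objective vectors. The sketch below extracts the first non-dominated front from illustrative (delay, cost) pairs, both minimized; it is the building block of NSGA, not the paper's full algorithm, and the route data are invented.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and
    strictly better in at least one (both objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated points: the first front produced
    by non-dominated sorting."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidate routes as (delay, cost) pairs.
routes = [(4, 10), (6, 6), (5, 8), (7, 7), (8, 5)]
front = pareto_front(routes)
```

Route (7, 7) is dominated by (6, 6) and drops out; the remaining routes trade delay against cost and all survive into the front.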
Abstract: Traditional higher-education classrooms allow lecturers to observe students' behaviours and responses to a particular pedagogy during learning, in a way that can inform changes to the pedagogical approach. Within current e-learning systems it is difficult to perform continuous analysis of the cohort's behavioural tendency, making real-time pedagogical decisions difficult. This paper presents a Virtual Learning Process Environment (VLPE) based on the Business Process Management (BPM) conceptual framework. Within the VLPE, course designers can model various education pedagogies in the form of learning process workflows using an intuitive flow diagram interface. These diagrams are used to visually track the learning progress of a cohort of students. This helps assess the effectiveness of the chosen pedagogy, providing the information required to improve course design. A case scenario of a cohort of students is presented, and quantitative statistical analysis of their learning process performance is gathered and displayed in real time using dashboards.
Abstract: The Artificial Immune System (AIS) has been adopted as a heuristic algorithm for solving combinatorial problems for decades. Nevertheless, many of these applications exploited its benefits for particular problems but seldom proposed approaches for enhancing its efficiency. In this paper, we continue our previous research and develop a Self-evolving Artificial Immune System II (SEAIS II) that coordinates the T and B cells of the immune system and builds a block-based artificial chromosome to speed up computation and improve performance on problems of different complexity. The design of the plasma cell and clonal selection, which model the immune response, gives the AIS both global and local search ability and prevents it from becoming trapped in local optima. The experimental results validate that SEAIS II is effective in solving permutation flow-shop problems.
Abstract: Cosmic showers, during their transit through space, produce sub-products as a result of interactions with the intergalactic or interstellar medium, which, after entering the Earth's atmosphere, generate secondary particles called Extensive Air Showers (EAS). Detection and analysis of high energy particle showers involve a plethora of theoretical and experimental work with a host of constraints, resulting in measurement inaccuracies. There is therefore a need for a readily available system based on soft-computational approaches that can be used for EAS analysis, since soft computational tools such as Artificial Neural Networks (ANNs) can be trained as classifiers to adapt to and learn the surrounding variations. However, single classifiers fail to reach optimal decision making in many situations, for which Multiple Classifier Systems (MCSs) are preferred to enhance the ability of the system to make decisions that adjust to finer variations. This work describes the formation of an MCS using a Multi-Layer Perceptron (MLP), a Recurrent Neural Network (RNN) and a Probabilistic Neural Network (PNN), with data inputs from correlation-mapping Self-Organizing Map (SOM) blocks and the output optimized by another SOM. The results show that the setup can be adopted in real-time practical applications for predicting the primary energy and location of EAS from density values captured using detectors in a circular grid.
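The paper combines the individual classifier outputs with a SOM; as a simpler point of reference, the most basic MCS combiner is a majority vote over the classifiers' labels. The sketch below shows that general idea only, not the authors' SOM-based combiner, and the label strings are invented.

```python
from collections import Counter

def majority_vote(labels):
    """Combine class labels from several classifiers; ties are
    broken in favour of the label that appears first."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical outputs of the MLP, RNN and PNN for one EAS event.
combined = majority_vote(["high_energy", "high_energy", "low_energy"])
```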
Abstract: This paper explores the actual classroom setting and examines its role in students' learning and attitudes in class. It presents a theoretical approach to the classroom as a system to be explored, and examines the concrete reality of Greek secondary education students in the light of this approach. Based on the findings of quantitative and qualitative research, the authors propose a rather ontological approach to the classroom and underline what the key elements of such an approach should be. The paper explores extensively the theoretical dimensions of the required change of paradigm and addresses the new issues to be considered.
Abstract: In this paper, we present an approach for soccer video editing using multimodal annotation. We propose to associate with each video sequence of a soccer match a textual document to be used for further exploitation such as search, browsing and abstract editing. The textual document contains video metadata, match metadata, and match data. This document, generated automatically while the video is analyzed, segmented and classified, can be enriched semi-automatically according to the user type and/or a specialized recommendation system.
Abstract: The design of a pattern classifier includes an attempt
to select, among a set of possible features, a minimum subset of
weakly correlated features that better discriminate the pattern classes.
This is usually a difficult task in practice, normally requiring the
application of heuristic knowledge about the specific problem
domain. The selection and quality of the features representing each
pattern have a considerable bearing on the success of subsequent
pattern classification. Feature extraction is the process of deriving
new features from the original features in order to reduce the cost of
feature measurement, increase classifier efficiency, and allow higher
classification accuracy. Many current feature extraction techniques
involve linear transformations of the original pattern vectors to new
vectors of lower dimensionality. While this is useful for data
visualization and increasing classification efficiency, it does not
necessarily reduce the number of features that must be measured
since each new feature may be a linear combination of all of the
features in the original pattern vector. In this paper a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach each feature value is first normalized by a linear equation, then scaled by the associated weight prior to training, testing, and classification. A k-NN classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. With this approach, the number of features used in classification can be substantially reduced.
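The fitness evaluation the abstract describes (scale each feature by a candidate weight vector, then score a k-NN classifier) can be sketched as follows; the toy data set and parameter values are illustrative, and the GA loop around this evaluation is omitted.

```python
import math

def knn_accuracy(weights, train, test, k=3):
    """GA fitness: accuracy of a k-NN classifier that measures
    weighted Euclidean distance, so a zero weight deselects a
    feature and other weights rescale it."""
    def dist(a, b):
        return math.sqrt(sum(w * (p - q) ** 2
                             for w, p, q in zip(weights, a, b)))
    correct = 0
    for features, label in test:
        neighbours = sorted(train, key=lambda t: dist(t[0], features))[:k]
        votes = {}
        for _, l in neighbours:
            votes[l] = votes.get(l, 0) + 1
        correct += max(votes, key=votes.get) == label
    return correct / len(test)

# Feature 0 separates the two classes; feature 1 is noise.
train = [([0, 5], "a"), ([1, 9], "a"), ([9, 0], "b"), ([10, 7], "b")]
test = [([0.5, 8], "a"), ([9.5, 1], "b")]
fitness = knn_accuracy([1, 0], train, test)
```

A GA individual here is just the weight vector; the weights [1, 0] keep the informative feature, drop the noisy one, and score perfectly on this toy set.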
Abstract: A new approach based on the consideration that electroencephalogram (EEG) signals are chaotic signals was presented for the automated diagnosis of electroencephalographic changes. This consideration was tested successfully using nonlinear dynamics tools such as the computation of Lyapunov exponents. This paper presents the use of statistics over the set of Lyapunov exponents in order to reduce the dimensionality of the extracted feature vectors. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as the basis for detection of electroencephalographic changes. Three types of EEG signals (EEG signals recorded from healthy volunteers with eyes open, epilepsy patients in the epileptogenic zone during a seizure-free interval, and epilepsy patients during epileptic seizures) were classified. The selected Lyapunov exponents of the EEG signals were used as inputs of the MLPNN trained with the Levenberg-Marquardt algorithm. The classification results confirmed that the proposed MLPNN has potential in detecting electroencephalographic changes.
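The dimensionality-reduction step described above, replacing the full set of Lyapunov exponents with summary statistics, can be sketched as follows; the particular statistics chosen and the example exponent values are illustrative assumptions, not the paper's.

```python
import statistics

def summarize_exponents(exponents):
    """Reduce a set of Lyapunov exponents to a fixed-length
    feature vector of summary statistics for the classifier input."""
    return [
        max(exponents),
        min(exponents),
        statistics.mean(exponents),
        statistics.stdev(exponents),
    ]

# Illustrative exponent spectrum; a positive largest exponent is
# the usual signature of chaotic dynamics in a signal.
features = summarize_exponents([0.42, 0.11, -0.05, -0.38])
```

Whatever the length of the exponent spectrum, the classifier always receives the same four-dimensional input.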
Abstract: Corporate Social Responsibility (CSR) performance has garnered significant interest during the last two decades, as numerous methodologies have been proposed by Socially Responsible Investment (SRI) indexes. The weight of each indicator is a crucial component of CSR measurement procedures. Based on a previous study, the appropriate weight of each proposed indicator for the Greek telecommunication sector is specified using rank reciprocal weighting. The Kendall's Coefficient of Concordance and Spearman Correlation Coefficient non-parametric tests are adopted to determine the level of consensus among the experts concerning the importance rank of the indicators. The results show that there is no consensus regarding the rank of indicators in most of the stakeholders' domains. Equal weights for all indicators could be proposed as a solution to the lack of consensus among the experts. The study recommends three different equations concerning the adopted weight approach.
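Rank reciprocal weighting assigns each indicator the weight w_i = (1/r_i) / sum_j (1/r_j), where r_i is the indicator's importance rank. A minimal sketch of this standard formula (the four-rank example is illustrative, not data from the study):

```python
def rank_reciprocal_weights(ranks):
    """w_i = (1/r_i) / sum_j (1/r_j) for importance ranks 1..n;
    lower ranks (more important indicators) get larger weights."""
    recip = [1.0 / r for r in ranks]
    total = sum(recip)
    return [x / total for x in recip]

# Four indicators ranked 1 (most important) to 4.
w = rank_reciprocal_weights([1, 2, 3, 4])
```

The weights sum to 1 and decrease with rank, which is what makes the consensus on ranks (tested here with Kendall's W and Spearman correlation) matter so much.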
Abstract: Productivity has been one of the major concerns given the increasingly high cost of software development. Choosing a development language with high productivity is one approach to reducing development costs. Working with a large database of 4,106 completed projects, we identified the factors significant to productivity. After removing the effects of the other factors on productivity, we compare the productivity differences of ten general development languages. The study supports the finding that fourth-generation languages are more productive than third-generation languages.
Abstract: In this paper the problem of estimating the time delay
between two spatially separated noisy sinusoidal signals by system
identification modeling is addressed. The system is assumed to be
perturbed by both input and output additive white Gaussian noise. The
presence of input noise introduces bias in the time delay estimates.
Normally the solution requires a priori knowledge of the input-output
noise variance ratio. We utilize the cascade of a self-tuned filter with
the time delay estimator, thus making the delay estimates robust to
input noise. Simulation results are presented to confirm the superiority
of the proposed approach at low input signal-to-noise ratios.
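The paper's estimator is built by system identification with a self-tuned filter; for context, the conventional baseline it improves on estimates the delay as the lag maximizing the cross-correlation of the two sensor signals. The sketch below shows that baseline on synthetic noiseless data; the signal parameters are assumptions.

```python
import math

def xcorr_delay(x, y, max_lag):
    """Estimate the delay of y relative to x as the integer lag
    that maximizes their cross-correlation."""
    best_lag, best_val = 0, float("-inf")
    n = len(x)
    for lag in range(-max_lag, max_lag + 1):
        val = sum(x[i] * y[i + lag]
                  for i in range(n) if 0 <= i + lag < n)
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# Sinusoid delayed by 3 samples; the search window must stay
# below the 20-sample period to avoid the periodic ambiguity.
x = [math.sin(2 * math.pi * i / 20) for i in range(200)]
y = [x[i - 3] if i >= 3 else 0.0 for i in range(200)]
delay = xcorr_delay(x, y, 5)
```

With input noise added to x this estimator becomes biased, which is exactly the situation the self-tuned filter cascade is designed to handle.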
Abstract: The demand for higher-performance graphics continues to grow because of the incessant desire for realism, and rapid advances in fabrication technology have enabled us to build several processor cores on a single die. Hence, it is important to develop single-chip parallel architectures for such data-intensive applications. In this paper, we propose an efficient PIM architecture tailored for computer graphics, which requires a large number of memory accesses. We then address the two tasks necessary for maximally exploiting the parallelism provided by the architecture, namely partitioning and placement of graphics data, which respectively affect load balance and communication cost. Under the constraint of uniform partitioning, we develop approaches for optimal partitioning and placement which significantly reduce the search space. We also present heuristics for identifying near-optimal placements, since the search space for placement remains impractically large despite our optimization. We then demonstrate the effectiveness of our partitioning and placement approaches via analysis of example scenes; simulation results show considerable search space reductions, and our placement heuristics perform close to optimal: the average ratio of communication overheads between our heuristics and the optimal was 1.05. Our uniform partitioning showed an average load-balance ratio of 1.47 for geometry processing and 1.44 for rasterization, which is reasonable.
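The load-balance figures quoted above can be computed with the usual definition of the metric; the definition (maximum partition load over mean load, 1.0 being perfect balance) and the per-core counts below are my assumptions, not details from the paper.

```python
def load_balance_ratio(loads):
    """Maximum partition load divided by the mean load; 1.0 means
    the work is spread perfectly evenly across the cores."""
    return max(loads) / (sum(loads) / len(loads))

# Hypothetical per-core primitive counts for a 4-core partition.
ratio = load_balance_ratio([20, 10, 10, 10])
```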
Abstract: Only recently have water ethics received focused interest in the international water community. Because water is metabolically basic to life, an ethical dimension persists in every decision related to water. Water ethics at once express human society's approach to water and act as guidelines for behaviour. Ideas around water are often implicit and embedded as assumptions. They can be entrenched in behaviour and difficult to contest because they are difficult to "see". By explicitly revealing the ethical ideas underlying water-related decisions, human society's relationship with water, and with the natural systems of which water is part, can be contested and shifted, or accepted with conscious intention by human society. In recent decades, improved understanding of water's importance for ecosystem functioning and for the ecological services on which human survival depends has been moving us beyond the growth-driven, supply-focused management paradigm. Environmental ethics challenge this paradigm by extending the ethical sphere to the environment and thus to water and water resources management per se. An ethical approach is a legitimate, important, and often ignored way to effect change in environmental decision making. This qualitative research explores principles of water ethics and examines the underlying ethical precepts of selected water policy examples. The constructed water ethic principles act as a set of criteria against which a policy comparison can be established. This study shows that water resources management is becoming a progressive issue by embracing full public participation, a new planning model, and knowledge-generation initiatives.