Abstract: Advertising has today become an integral part of human
life as a building block of the consumer society. As a component of
the media value chain, the advertising sector is struggling ever
harder to find new methods of reaching consumers. The tendency
towards experiential marketing practices is increasing day by day,
especially to divert consumers from the idea "They are selling
something to me." It is therefore worthwhile to investigate the trust
that consumers, who are today exposed to a great volume of
information from the advertising sector, place in advertising media.
In this study, the current value of advertising media for the young
consumer will be investigated. Data on the reliability of various
advertising media will be comparatively analyzed, and young
consumers will be represented by the university students included in
the study. In this research, which will be performed on students of
Selçuk University (Turkey) selected by random sampling, data will be
obtained by the survey technique and evaluated by statistical
analysis.
Abstract: In the context of spectrum surveillance, a new method
to recover the code of a spread spectrum signal is presented, where
the receiver has no knowledge of the transmitter's spreading
sequence. In our previous paper, we used a genetic algorithm (GA) to
recover the spreading code. Although genetic algorithms are well
known for their robustness in solving complex optimization problems,
increasing the length of the code often leads to unacceptably slow
convergence. To solve this problem we introduce Particle Swarm
Optimization (PSO) into code estimation for spread spectrum
communication systems. In the search process for code estimation,
the PSO algorithm has the merits of rapid convergence to the global
optimum without being trapped in local suboptima, and good
robustness to noise. In this paper we describe how to implement PSO
as a component of a search algorithm for code estimation. Swarm
intelligence offers a number of advantages due to the use of mobile
agents, among them scalability, fault tolerance, adaptation, speed,
modularity, autonomy, and parallelism. These properties make swarm
intelligence very attractive for spread spectrum code estimation, and
also suitable for a variety of other kinds of channels. Our results
compare swarm-based algorithms with genetic algorithms, and also
show the performance of the PSO algorithm in the code estimation
process.
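To make the search component concrete, the following is a minimal PSO sketch for code estimation under deliberately simplified, assumed conditions: a hypothetical ±1 chip sequence observed in Gaussian noise, with the fitness taken as the correlation between a candidate code and the received chips. It illustrates the PSO search only, not the paper's full receiver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a +/-1 spreading code of length 15 observed chip-by-chip
# in additive Gaussian noise (a stand-in for the real despreading problem).
N = 15
true_code = rng.choice([-1.0, 1.0], size=N)
received = true_code + 0.3 * rng.standard_normal(N)

def fitness(x):
    # Correlation of the (clipped) candidate with the received chips.
    return float(np.clip(x, -1, 1) @ received)

# Plain global-best PSO; the sign of the best position is the code estimate.
n_particles, iters = 40, 300
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n_particles, N))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, N))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()].copy()

estimate = np.sign(gbest)
print(int((estimate == true_code).sum()), "of", N, "chips recovered")
```

At this noise level the fitness maximum coincides with the sign pattern of the received chips, so the swarm typically recovers most or all chips.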
Abstract: In recent years, the "bottom-up planning approach" has been widely accepted and expanded by planning theorists, and citizen participation is becoming more important in decision-making for informal settlements. Many previous projects and strategies failed in dealing with informal settlements because they ignored citizen participation, and in some cases they led to the physical expansion of these neighbourhoods. According to recent experience, the new participatory approach has been somewhat successful. This paper focuses on local experiences in Iran, where a considerable number of people live in informal settlements. With the previous methods, the government could not solve the problems of these settlements. It is time to examine new methods, such as empowering local citizens and involving them in solving the current physical, social, and economic problems. The paper aims to address the previous and new strategies for dealing with informal settlements, the conditions under which citizens can be involved in the planning process, the limits and potentials of this process, the main actors and issues, and finally the motivations that can promote citizen participation. Documentary studies, observation, interviews, and questionnaires have been used to achieve the above-mentioned objectives. Nearly 80 percent of respondents in the Saadi Community are ready to participate in regularising their neighbourhood if the pre-conditions of citizen involvement are provided. These pre-conditions include the kind of problem and its severity, the importance of the issue, the existence of a short-term solution, etc. Moreover, confirmation of dwellers' ownership can promote citizen engagement in participatory projects.
Abstract: Fuzzy Cognitive Maps (FCMs) have been applied
successfully in numerous domains to show the relations between
essential components. Some FCMs contain many interrelated nodes,
and more nodes make system behavior and its analysis more
complex. In this paper, a novel learning method is used to construct
FCMs from historical data, and a new method based on data mining
and the DEMATEL method is defined to reduce the number of nodes.
This method clusters the nodes of an FCM according to their cause-
and-effect behavior.
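For context, a minimal sketch of standard FCM inference is given below, with a hypothetical four-node weight matrix (the paper's DEMATEL-based node-reduction step itself is not reproduced here): activations are repeatedly propagated through the weight matrix and squashed with a sigmoid until they settle.

```python
import numpy as np

# Hypothetical 4-node FCM: entry W[i, j] is the causal weight of node i on j.
W = np.array([
    [ 0.0, 0.6, 0.0, -0.3],
    [ 0.0, 0.0, 0.8,  0.0],
    [-0.4, 0.0, 0.0,  0.5],
    [ 0.0, 0.2, 0.0,  0.0],
])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Standard FCM inference: propagate activations through W until convergence.
state = np.array([0.8, 0.2, 0.5, 0.1])
for _ in range(50):
    state = sigmoid(state + state @ W)

print(np.round(state, 3))  # converged activation levels of the four nodes
```

With more nodes the weight matrix and the number of such interacting activations grow quickly, which is the complexity the node-reduction method targets.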
Abstract: This paper presents a new method for estimating the mean curve of impulse voltage waveforms recorded during impulse tests. In practice, these waveforms are distorted by noise, oscillations, and overshoot. The task is formulated as an estimation problem, and the signal parameters are estimated using a fast and accurate technique based on the discrete dynamic filtering (DDF) algorithm. The main advantage of the proposed technique is its ability to produce the estimates in a very short time and with a very high degree of accuracy. The algorithm uses sets of digital samples of the recorded impulse waveform. The proposed technique has been tested using simulated data of practical waveforms, and the effects of the number of samples and the data window size are studied. Results are reported and discussed.
Abstract: This paper explores a new method to improve the
teaching of algorithmics to beginners. It is well known that
algorithmics is a difficult field for teachers to teach and a complex
one for learners to assimilate. These difficulties are due to the
intrinsic characteristics of the field and to the manner in which most
teachers apprehend its foundations. Moreover, in a Technology
Enhanced Learning (TEL) environment, assessment, which is
important and indispensable, is the most delicate phase to implement
because of all the problems it generates (noise, etc.). Our objective
lies at the confluence of these two axes. For this purpose, EASEL
focuses essentially on elaborating an approach for assessing
algorithmic competences in a TEL environment. This approach
consists in modeling an algorithmic solution in terms of basic,
elementary operations, which lets learners build their own steps with
full autonomy and independently of any programming language. The
approach supports a threefold assessment: summative, formative,
and diagnostic.
Abstract: In this paper, a new method is proposed to find the fuzzy optimal solution of fuzzy assignment problems in which all the parameters are represented as triangular fuzzy numbers. The advantages of the proposed method are also discussed. To illustrate the proposed method, a fuzzy assignment problem is solved with it and the results obtained are discussed. The proposed method is easy to understand and to apply for finding the fuzzy optimal solution of fuzzy assignment problems occurring in real-life situations.
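A minimal sketch of the kind of problem treated, under assumed conventions (triangular fuzzy numbers as (a, b, c) triples, component-wise addition, and a common (a + 2b + c)/4 ranking index, which may differ from the paper's own ranking): a small fuzzy assignment instance with hypothetical costs, solved by brute force over permutations.

```python
from itertools import permutations

# A triangular fuzzy number (a, b, c): support [a, c], peak at b.
def rank(tfn):
    a, b, c = tfn
    return (a + 2 * b + c) / 4.0  # a common ranking/defuzzification index

def add(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

# Hypothetical fuzzy cost matrix: cost[i][j] = cost of assigning worker i to job j.
cost = [
    [(1, 2, 3), (4, 5, 6), (7, 8, 9)],
    [(2, 3, 4), (1, 2, 3), (5, 6, 7)],
    [(6, 7, 8), (2, 3, 4), (1, 2, 3)],
]
n = len(cost)

best_perm, best_total = None, None
for perm in permutations(range(n)):
    total = (0, 0, 0)
    for i, j in enumerate(perm):
        total = add(total, cost[i][j])
    if best_total is None or rank(total) < rank(best_total):
        best_perm, best_total = perm, total

print("assignment:", best_perm, "fuzzy total cost:", best_total)
```

For real-sized instances the brute-force search would be replaced by an assignment algorithm; the point here is only the fuzzy arithmetic and ranking.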
Abstract: The prediction of transmembrane helical segments
(TMHs) in membrane proteins is an important field of bioinformatics
research. In this paper, a new method based on the discrete wavelet
transform (DWT) has been developed to predict the number and
location of TMHs in membrane proteins. The protein with PDB code
1KQG is taken as an example to illustrate how the method predicts
the number and location of TMHs. To assess the performance of the
method, 80 proteins with known 3D structure (containing 325 TMHs
in total) were chosen at random from the MPtopo database as test
objects; 308 of the TMHs were predicted accurately, for an average
prediction accuracy of 96.3%. In addition, the 80 membrane proteins
were divided into 13 groups according to their function and type; the
prediction results for each of the 13 groups are also satisfactory.
Abstract: We study different types of aggregation operators and
the decision making process with minimization of regret. We analyze
the original work developed by Savage and the recent work
developed by Yager that generalizes the MMR method creating a
parameterized family of minimal regret methods by using the ordered
weighted averaging (OWA) operator. We suggest a new method that
uses different types of geometric operators such as the weighted
geometric mean or the ordered weighted geometric operator (OWG)
to generalize the MMR method obtaining a new parameterized family
of minimal regret methods. The main result obtained with this method
is that it allows negative numbers to be aggregated in the OWG
operator.
Finally, we give an illustrative example.
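For concreteness, the standard OWG operator can be sketched as follows (positive arguments only; the paper's contribution is precisely a construction that also admits negative values, which this sketch does not reproduce). The regret values and weights below are hypothetical.

```python
import math

def owg(values, weights):
    """Ordered weighted geometric (OWG) operator: sort the arguments in
    descending order, then take the weighted geometric mean, with the
    weights attached to positions rather than to particular arguments."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(v > 0 for v in values)
    ordered = sorted(values, reverse=True)
    return math.prod(b ** w for b, w in zip(ordered, weights))

# Hypothetical regrets of one alternative under three states of nature.
regrets = [4.0, 1.0, 9.0]
print(owg(regrets, [0.5, 0.3, 0.2]))  # heavier weight on the largest regret
```

Choosing the weight vector interpolates between attitudes, from pure worst-case regret (all weight on the first position) to the plain geometric mean.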
Abstract: The Generator of Hypotheses is a new method for data mining. It makes it possible to classify the source data automatically and produces a particular enumeration of patterns. A pattern is an expression (in a certain language) describing facts in a subset of facts. The goal is to describe the source data via patterns and/or IF...THEN rules. The evaluation criteria used are deterministic (not probabilistic). The search results are trees, a form that is easy to comprehend and interpret. The Generator of Hypotheses uses a very efficient algorithm based on the theory of monotone systems (MS), named MONSA (MONotone System Algorithm).
Abstract: In the last 15 years, a number of methods have been proposed for forecasting based on fuzzy time series. Most fuzzy time series methods have been presented for forecasting enrollments at the University of Alabama; however, the forecasting accuracy rates of the existing methods are not good enough. In this paper, we compare our proposed new method of fuzzy time series forecasting with the existing methods. Our method is based on frequency-density-based partitioning of the historical enrollment data, and it belongs to the kth-order, time-variant class of methods. The proposed method achieves a better forecasting accuracy rate for enrollments than the existing methods.
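One possible reading of frequency-density-based partitioning, sketched with hypothetical enrollment-like data (the paper's exact partitioning rule may differ): the universe of discourse is split so that each interval holds a similar number of historical observations, instead of using equal-width intervals.

```python
def frequency_partition(data, n_intervals):
    # Interval boundaries chosen so observations are spread evenly across
    # intervals (denser data regions get narrower intervals).
    s = sorted(data)
    size, rem = divmod(len(s), n_intervals)
    bounds, idx = [s[0]], 0
    for k in range(n_intervals):
        idx += size + (1 if k < rem else 0)
        bounds.append(s[min(idx, len(s) - 1)])
    return bounds

# Hypothetical enrollment-like historical data.
data = [13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861, 16807, 16919]
print(frequency_partition(data, 4))
```

Each interval then defines one fuzzy set over the universe of discourse, so regions with many observations get finer fuzzification.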
Abstract: This paper introduces a new method called ARPDC (Advanced Robust Parallel Distributed Compensation) for the automatic control of nonlinear systems. The method improves the quality of robust control by interpolating between a robust and an optimal controller; the weight of each controller is determined by an original criterion function for model validity and disturbance appreciation. ARPDC is based on nonlinear Takagi-Sugeno (T-S) fuzzy systems and the Parallel Distributed Compensation (PDC) control scheme. Relaxed stability conditions for ARPDC control of the nominal system have been derived. The advantages of the presented method are demonstrated on the inverted pendulum benchmark problem. A comparison of three different controllers (robust, optimal, and ARPDC) shows that ARPDC control is almost optimal, with robustness close to that of the robust controller. The results indicate that the ARPDC algorithm can be a good alternative not only to robust control but, in some cases, also to adaptive control of nonlinear systems.
Abstract: This paper presents a new method that applies the
artificial bee colony (ABC) algorithm to capacitor placement in
distribution systems, with the objectives of improving the voltage
profile and reducing power loss. The ABC algorithm is a new
population-based metaheuristic approach inspired by the intelligent
foraging behavior of a honeybee swarm. One advantage of the ABC
algorithm is that it does not require external parameters, such as the
crossover and mutation rates of genetic algorithms and differential
evolution, which are hard to determine in advance. Another
advantage is that global search ability is implemented in the
algorithm by a neighborhood source production mechanism, which is
similar to a mutation process. To demonstrate the validity of the
proposed algorithm, computer simulations are carried out on a
69-bus system and the results are compared with another approach
available in the literature. The proposed method outperforms the
other methods in terms of solution quality and computational
efficiency.
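The neighborhood source production step mentioned above can be sketched as follows: a candidate food source is produced by perturbing one randomly chosen dimension of a solution toward or away from a randomly chosen neighbor, v_ij = x_ij + phi * (x_ij - x_kj). The food sources here are hypothetical two-dimensional points, not capacitor placements.

```python
import random

random.seed(1)

def produce_neighbor(sources, i):
    """Generic ABC neighborhood step: perturb one dimension of source i
    using a random other source k and a random factor phi in [-1, 1]."""
    n_dims = len(sources[i])
    k = random.choice([s for s in range(len(sources)) if s != i])
    j = random.randrange(n_dims)
    phi = random.uniform(-1.0, 1.0)
    candidate = list(sources[i])
    candidate[j] = sources[i][j] + phi * (sources[i][j] - sources[k][j])
    return candidate

sources = [[0.2, 0.8], [0.5, 0.1], [0.9, 0.4]]  # hypothetical food sources
cand = produce_neighbor(sources, 0)
print(cand)
```

Note that no crossover or mutation rate appears anywhere: the step size adapts implicitly through the spread of the current food sources, which is the parameter-free property the abstract highlights.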
Abstract: Reliability Centered Maintenance (RCM) is one of the
most widely used methods in modern power systems for scheduling
maintenance cycles and determining inspection priorities. In order
to apply the RCM method to the Smart Grid, a preliminary study of
the rearranged system structure should be performed, owing to the
introduction of additional installations such as renewable and
sustainable energy resources, energy storage devices, and advanced
metering infrastructure. This paper proposes a new method to
evaluate the priority of maintenance and inspection of power
system facilities in the Smart Grid using the Risk Priority Number.
In order to calculate this risk index, the reliability block diagram of
the Smart Grid system must be analyzed. Finally, a feasible
technical method is discussed for estimating the risk potential as
part of the RCM procedure.
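As an illustration of ranking by Risk Priority Number, here is a minimal sketch using the usual FMEA form RPN = severity x occurrence x detection; the facility names and scores below are hypothetical, and the paper's exact index construction may differ.

```python
# Hypothetical Smart Grid facilities with (severity, occurrence, detection)
# scores on a 1-10 scale, as in a standard FMEA worksheet.
facilities = {
    "transformer":      (8, 3, 4),
    "storage inverter": (6, 5, 5),
    "smart meter":      (3, 6, 2),
}

# RPN = severity * occurrence * detection; higher RPN = inspect first.
rpn = {name: s * o * d for name, (s, o, d) in facilities.items()}
for name in sorted(rpn, key=rpn.get, reverse=True):
    print(name, rpn[name])
```

The ranking, not the absolute numbers, drives the inspection schedule, which is why the reliability block diagram analysis feeding these scores matters.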
Abstract: Grid computing provides a virtual framework for the
controlled sharing of resources across institutional boundaries.
Recently, trust has been recognised as an important factor in the
selection of optimal resources in a grid. We introduce a new method
that provides a quantitative trust value based on past interactions
and present environment characteristics. This quantitative trust value
is used to select a suitable resource for a job and eliminates run-time
failures arising from incompatible user-resource pairs. The proposed
work acts as a tool to calculate the trust values of the various
components of the grid, thereby improving the success rate of the
jobs submitted to resources on the grid. Access to a resource depends
not only on the identity and behaviour of the resource but also on the
context of the transaction, the time of the transaction, the
connectivity bandwidth, and the availability of and load on the
resource. The quality of a recommender is also evaluated, based on
the accuracy of the feedback it provides about a resource. Jobs are
submitted for execution to the selected resource after the overall
trust value of the resource is found; this overall trust value is
computed with respect to both subjective and objective parameters.
Abstract: Organ motion, especially respiratory motion, is a technical challenge for radiation therapy planning and dosimetry. This motion induces displacements and deformation of the organ tissues within the irradiated region, which need to be taken into account when simulating the dose distribution during treatment. Finite element modeling (FEM) can provide great insight into the mechanical behavior of the organs, since it is based on the biomechanical material properties, the complex geometry of the organs, and anatomical boundary conditions. In this paper we present an original approach that offers the possibility of combining image-based biomechanical models with particle transport simulations. We propose a new method to map material density information derived from CT images onto deformable tetrahedral meshes. Based on the principle of mass conservation, our method can correlate the density variation of organ tissues with geometrical deformations during the different phases of the respiratory cycle. The first results are particularly encouraging: local error quantification of the density mapping on the organ geometry, and of the density variation with organ motion, is performed to evaluate and validate our approach.
Abstract: We analyze the problem of decision making under
ignorance with regrets. Recently, Yager has developed a new method
for decision making where instead of using regrets he uses another
type of transformation called negrets. Basically, the negret is
considered as the dual of the regret. We study this problem in detail
and we suggest the use of geometric aggregation operators in this
method. For doing this, we develop a different method for
constructing the negret matrix where all the values are positive. The
main result obtained is that now the model is able to deal with
negative numbers because of the transformation done in the negret
matrix. We further extend these results to another model developed
also by Yager about mixing valuations and negrets. Unfortunately, in
this case we are not able to deal with negative numbers because the
valuations can be either positive or negative.
Abstract: In this paper we propose a new method for
simultaneously generating multiple quantiles, corresponding to given
probability levels, from data streams and massive data sets. The
method provides a basis for the development of single-pass,
low-storage quantile estimation algorithms, which differ in
complexity, storage requirements, and accuracy. We demonstrate that
such algorithms may perform well even for heavy-tailed data.
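One member of the single-pass, low-storage family described above can be sketched as a stochastic-approximation update, one per observation and per requested probability level; this is an illustrative algorithm of that family, not necessarily the paper's own. The uniform test stream and step size are assumptions.

```python
import random

random.seed(0)

def stream_quantiles(stream, probs, step=0.01):
    """One pass over the stream, storing only one running estimate per
    requested probability level (constant memory, no sorting)."""
    est = None
    for x in stream:
        if est is None:
            est = [x] * len(probs)  # seed every estimate with the first item
            continue
        for i, p in enumerate(probs):
            # Drift up with probability ~p and down with probability ~(1 - p),
            # so the estimate settles where a fraction p of the data lies below.
            est[i] += step * (p - (1.0 if x <= est[i] else 0.0))
    return est

data = [random.random() for _ in range(20000)]  # hypothetical uniform stream
q = stream_quantiles(data, [0.25, 0.5, 0.75])
print([round(v, 2) for v in q])
```

The constant step trades accuracy for adaptivity on drifting streams; a decaying step would converge more tightly on stationary data.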
Abstract: A new method, based on NormalShrink and a modified
version of the Katsaggelos and Lay algorithm, is proposed for
multiscale blind image restoration. The method deals with both noise
and blur in the images. It is shown that NormalShrink gives the
highest S/N (signal-to-noise ratio) in the image denoising process.
The multiscale blind image restoration is divided into two parts: the
first part of this paper proposes NormalShrink for image denoising,
and the second part proposes a modified version of the Katsaggelos
and Lay algorithm for blur estimation, combining both methods to
achieve multiscale blind image restoration.
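A minimal 1-D, single-level Haar stand-in for the denoising part (the paper works multiscale on 2-D images): detail coefficients are soft-thresholded with a NormalShrink-style threshold T = beta * sigma^2 / sigma_y. The test signal, noise level, and the simplified single-level beta are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean signal plus Gaussian noise.
n = 1024
clean = np.sin(np.linspace(0, 8 * np.pi, n))
noisy = clean + 0.3 * rng.standard_normal(n)

# One-level orthonormal Haar analysis.
approx = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)
detail = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)

# NormalShrink-style threshold: T = beta * sigma_noise^2 / sigma_subband.
sigma_n = np.median(np.abs(detail)) / 0.6745       # robust noise estimate
beta = np.sqrt(np.log(detail.size))                # simplified beta for 1 level
t = beta * sigma_n ** 2 / max(float(detail.std()), 1e-12)
detail = np.sign(detail) * np.maximum(np.abs(detail) - t, 0.0)  # soft threshold

# Haar synthesis.
denoised = np.empty(n)
denoised[0::2] = (approx + detail) / np.sqrt(2)
denoised[1::2] = (approx - detail) / np.sqrt(2)

def snr(ref, sig):
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - sig) ** 2))

print(round(snr(clean, noisy), 2), "->", round(snr(clean, denoised), 2))
```

In the full method the same shrinkage is applied per subband across several decomposition levels before the blur-estimation stage.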
Abstract: This paper presents a new method to detect high impedance faults in radial distribution systems. The magnitudes of the third and fifth harmonic components of voltages and currents are used as a feature vector for fault discrimination. The proposed methodology uses a learning vector quantization (LVQ) neural network as a classifier for identifying high impedance arc-type faults. The network learns from data obtained from simulations of a simple radial system under different fault and system conditions. Compared to a feed-forward neural network, a properly tuned LVQ network gives a quicker response.
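A minimal LVQ1 sketch of the classifier family used above, with toy 2-D data standing in for the harmonic feature vectors (the data, prototypes, and learning rate are hypothetical): the winning prototype is attracted toward a correctly classified sample and repelled from a misclassified one.

```python
import random

random.seed(0)

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Two prototypes per class, initialized near the data (hypothetical values).
protos = [([0.2, 0.2], 0), ([0.3, 0.1], 0), ([0.8, 0.9], 1), ([0.7, 0.8], 1)]

def train_step(x, label, lr=0.1):
    i = min(range(len(protos)), key=lambda k: dist2(protos[k][0], x))
    w, c = protos[i]
    sign = 1.0 if c == label else -1.0  # attract if correct class, else repel
    protos[i] = ([wi + sign * lr * (xi - wi) for wi, xi in zip(w, x)], c)

# Toy training data: class 0 clustered near (0.1, 0.1), class 1 near (0.9, 0.9).
data = ([([random.gauss(0.1, 0.1), random.gauss(0.1, 0.1)], 0) for _ in range(100)]
        + [([random.gauss(0.9, 0.1), random.gauss(0.9, 0.1)], 1) for _ in range(100)])
random.shuffle(data)
for epoch in range(5):
    for x, y in data:
        train_step(x, y)

def predict(x):
    i = min(range(len(protos)), key=lambda k: dist2(protos[k][0], x))
    return protos[i][1]

acc = sum(predict(x) == y for x, y in data) / len(data)
print("training accuracy:", acc)
```

Classification at run time is a single nearest-prototype lookup, which is why a tuned LVQ network responds faster than a feed-forward network.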