Abstract: This paper presents a highly efficient algorithm for detecting and tracking humans and objects in video surveillance sequences. Mean shift clustering is applied to background-differenced image sequences. For efficiency, all calculations are performed on integral images. Novel corresponding exponential integral kernels are introduced to allow the application of non-uniform kernels for clustering, which dramatically increases robustness without giving up the efficiency of the integral data structures. Experimental results demonstrating the power of this approach are presented.
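As an illustration of the integral-image primitive this abstract relies on, the following is a minimal sketch covering only the uniform box kernel (the paper's exponential integral kernels are not reproduced here):

```python
import numpy as np

def integral_image(img):
    # Cumulative 2-D sum with a zero border row/column,
    # so box sums need no boundary special-casing.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] in O(1) via four table lookups,
    # regardless of the box size -- the key to the paper's efficiency.
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

Each mean-shift iteration over a box window then costs four lookups per channel instead of a full window scan.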
Abstract: The urban centers of northeastern Brazil are strongly
affected by intense rainfalls, which can occur after long periods
of drought and trigger flood events. This paper therefore studies
the rainfall frequencies in this region through the wavelet
transform. Wavelet analysis is applied to long time series of
total monthly rainfall at the capital cities of northeastern
Brazil. The main frequency components in the time series are
identified by the global wavelet spectrum, and the modulation in
separate periodicity bands is examined to extract additional
information; e.g., the 8-16 month band is averaged over all
scales, giving a measure of the average annual variance versus
time, from which periods of low or high variance can be
identified. Important increases in the average variance were
identified for some periods, e.g., 1947 to 1952 at Teresina city,
which can be considered high wet periods. Although the
precipitation at those sites showed similar global wavelet
spectra, the individual wavelet spectra revealed particular
features. This approach can be an important tool for time series
analysis and can support studies on flood control, especially
when applied together with rainfall-runoff simulations.
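The global wavelet spectrum used in this abstract (time-averaged wavelet power per scale) can be sketched as follows; this assumes a Morlet mother wavelet with center frequency 6, a common but not stated choice:

```python
import numpy as np

def morlet_cwt_power(x, scales, omega0=6.0):
    # Continuous wavelet transform of a demeaned series with a
    # Morlet mother wavelet, computed in the Fourier domain.
    n = len(x)
    xhat = np.fft.fft(x - x.mean())
    w = 2.0 * np.pi * np.fft.fftfreq(n)           # angular frequencies
    power = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Fourier transform of the Morlet wavelet at scale s (w > 0 only).
        psi = np.where(w > 0, np.exp(-0.5 * (s * w - omega0) ** 2), 0.0)
        norm = np.sqrt(2.0 * np.pi * s)           # scale normalization
        wave = np.fft.ifft(xhat * norm * psi)
        power[i] = np.abs(wave) ** 2
    return power

def global_wavelet_spectrum(power):
    # Time average of wavelet power at each scale.
    return power.mean(axis=1)
```

For a monthly rainfall series, the peak of the global spectrum near a scale corresponding to 12 months would confirm the annual cycle discussed above.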
Abstract: There are three main ways of categorizing capital in banking operations: accounting, regulatory and economic capital. However, the 2008-2009 global crisis has shown that none of these categories adequately reflects the real risks of bank operations, especially in light of the failures of Bear Stearns, Lehman Brothers and Northern Rock. This paper deals with the economic capital allocation of global banks. In theory, economic capital should reflect the real risks of a bank and should be publicly available. Yet, as discovered during the global financial crisis, even when economic capital information was publicly disclosed, the underlying assumptions rendered the information useless. Specifically, some global banks that reported relatively high levels of economic capital before the crisis went bankrupt or had to be bailed out by their governments. Moreover, only 15 out of 50 global banks reported their economic capital during the 2007-2010 period. In this paper, we analyze the changes in reported bank economic capital disclosure during this period. We conclude that the relative shares of credit and business risks increased in 2010 compared to 2007, while the shares of operational and market risks in the total economic capital of top-rated global banks both decreased. Generally speaking, higher levels of disclosure and transparency of bank operations are required to gain more confidence from stakeholders. Moreover, additional risks such as liquidity risk should be included in these disclosures.
Abstract: Classifying data hierarchically is an efficient approach
to data analysis. Data are usually classified into multiple
categories, or annotated with a set of labels. To analyze
multi-labeled data, such data must be specified by giving a set of
labels as a semantic range. Data are analyzed for certain
purposes. This paper shows which multi-labeled data should be
targeted for analysis for those purposes, and discusses the role
of a single label relative to a set of labels by investigating the
change that occurs when a label is added to the set. These
discussions yield methods for the advanced analysis of
multi-labeled data based on the role of a label with respect to a
semantic range.
Abstract: Minimization methods for training feed-forward networks with backpropagation are compared. Feed-forward network training is a special case of functional minimization, where no explicit model of the data is assumed. Due to the high dimensionality of the data, linearization of the training problem through the use of orthogonal basis functions is not desirable; the focus is therefore functional minimization on any basis. A number of methods based on local gradients and Hessian matrices are discussed, and modifications of many first- and second-order training methods are considered. Using share-rate data, it is shown experimentally that conjugate gradient and quasi-Newton methods outperform gradient descent methods; the Levenberg-Marquardt algorithm is of special interest in financial forecasting.
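The comparison described above can be reproduced in miniature; the following hedged sketch trains a tiny feed-forward network (a made-up regression task, not the paper's share-rate data) with SciPy's CG and BFGS routines, which stand in for the conjugate gradient and quasi-Newton methods:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # illustrative target function

def loss(theta, n_hidden=5):
    # Mean squared error of a one-hidden-layer tanh network.
    W1 = theta[:2 * n_hidden].reshape(2, n_hidden)
    b1 = theta[2 * n_hidden:3 * n_hidden]
    w2 = theta[3 * n_hidden:4 * n_hidden]
    b2 = theta[-1]
    pred = np.tanh(X @ W1 + b1) @ w2 + b2
    return np.mean((pred - y) ** 2)

theta0 = rng.normal(scale=0.5, size=4 * 5 + 1)
# Gradients are estimated by finite differences; no data model is assumed.
results = {m: minimize(loss, theta0, method=m, options={"maxiter": 200})
           for m in ("CG", "BFGS")}
```

Comparing `results["CG"].fun` and `results["BFGS"].fun` against a fixed-step gradient descent baseline reproduces the kind of experiment the abstract reports.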
Abstract: Applying the idea of soft set theory to lattice implication algebras, the novel concept of (implicative) filteristic soft lattice implication algebras, which are related to (implicative) filters (for short, (IF-)F-soft lattice implication algebras), is introduced. Basic properties of (IF-)F-soft lattice implication algebras are derived. Two kinds of fuzzy filters of L, i.e., (∈, ∈∨q_k)- and (∈̄, ∈̄∨q̄_k)-fuzzy (implicative) filters, are introduced, which are generalizations of fuzzy (implicative) filters. Some characterizations for a soft set to be an (IF-)F-soft lattice implication algebra are provided. Analogously, this idea can be used in other types of filteristic lattice implication algebras (such as fantastic (positive implicative) filteristic soft lattice implication algebras).
Abstract: RoboCup Rescue simulation, as a large-scale multi-agent
system (MAS), is a challenging environment for maintaining
coordination between agents in pursuit of common objectives
despite sensing and communication limitations. The dynamics of
the environment and the strong dependencies between the actions
of different kinds of agents make the problem even more complex.
This encouraged us to use learning-based methods to adapt our
decision making to different situations. Our approach utilizes
reinforcement learning, which has been the subject of several
studies in rescue simulation in recent years. In this paper we
present an innovative learning method implemented for the Police
Force (PF) agent. This method copes with the main difficulties
present in other learning approaches. Different methods used in
the literature have been examined; their drawbacks and possible
improvements led us to the fast and accurate method proposed in
this paper. The Brain Emotional Learning Based Intelligent
Controller (BELBIC) is our solution for learning in this
environment. BELBIC is a physiologically motivated approach based
on a computational model of the amygdala and the limbic system.
The paper presents the results obtained with the proposed
approach, showing the power of BELBIC as a decision-making tool
in complex and dynamic situations.
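The amygdala-orbitofrontal learning rules underlying BELBIC can be sketched as follows; the rule shapes follow the Moren-Balkenius computational model the controller is based on, but the gains, the stimulus/reward signals and the convergence demonstration are illustrative assumptions, not the paper's rescue-simulation implementation:

```python
import numpy as np

class BELBIC:
    # Schematic amygdala-orbitofrontal model; gains are illustrative.
    def __init__(self, n_inputs, alpha=0.2, beta=0.1):
        self.V = np.zeros(n_inputs)   # amygdala weights (only grow)
        self.W = np.zeros(n_inputs)   # orbitofrontal (inhibitory) weights
        self.alpha, self.beta = alpha, beta

    def step(self, S, reward):
        A = self.V @ S                # amygdala excitation
        O = self.W @ S                # orbitofrontal inhibition
        E = A - O                     # model (controller) output
        # Amygdala learning is monotonic: it only grows toward the reward.
        self.V += self.alpha * S * max(0.0, reward - A)
        # Orbitofrontal learning tracks the mismatch and inhibits
        # over-response, letting the output follow a changed reward.
        self.W += self.beta * S * (E - reward)
        return E
```

With a constant stimulus and reward, the output converges to the reward level, which is the adaptive behavior exploited for decision making.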
Abstract: One of the most basic tasks of control engineers is the
tuning of controllers; there are always several process loops in
a plant that require tuning. Auto-tuned Proportional Integral
Derivative (PID) controllers are designed for applications where
large load changes are expected or where extreme accuracy and
fast response times are needed. The algorithm presented in this
paper tunes a PID controller, obtaining its parameters with
minimal computational complexity. It requires continuous analysis
of the variation of only a few parameters, and it lets the
program perform the plant test and calculate the controller
parameters, adjusting and optimizing the variables for the best
performance. The developed algorithm needs less time than a
normal step-response test for continuous tuning of the PID
through gain scheduling.
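For comparison, the classical step-response baseline mentioned above can be sketched as follows; this is the standard Ziegler-Nichols open-loop procedure with a two-point FOPDT fit, not the paper's algorithm:

```python
import numpy as np

def zn_open_loop_pid(t, y, u_step=1.0):
    # Ziegler-Nichols open-loop tuning from a recorded step response.
    # A first-order-plus-dead-time (FOPDT) model is fitted via the
    # 28.3% / 63.2% response times (two-point method).
    K = y[-1] / u_step                        # process gain
    y_frac = y / y[-1]
    t1 = t[np.searchsorted(y_frac, 0.283)]    # 28.3% rise time
    t2 = t[np.searchsorted(y_frac, 0.632)]    # 63.2% rise time
    T = 1.5 * (t2 - t1)                       # time constant
    L = t2 - T                                # apparent dead time
    Kp = 1.2 * T / (K * L)                    # classic ZN PID rules
    Ti, Td = 2.0 * L, 0.5 * L
    return Kp, Ti, Td
```

The full step test must settle before the rules apply, which is the time cost the proposed algorithm is claimed to reduce.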
Abstract: This analysis concentrates on the productivity trend of
the knowledge management literature, i.e., publications indexed
under the subject "knowledge management" in the SSCI database.
Its purpose is to summarize trend information for knowledge
management researchers, since core knowledge tends to be
concentrated in core categories. The results indicate that the
productivity of literature on the topic of "knowledge management"
is still increasing strongly; the trend is demonstrated across
different categories, including author, country/territory,
institution name, document type, language, publication year, and
subject area. By focusing on the right categories, researchers
can capture the core research information. This implies that the
phenomenon of "success breeds success" is more common in
higher-quality publications.
Abstract: In this paper, a wavelet-based method is proposed to
identify the constant coefficients of a second-order linear
system, and it is compared with the least-squares method. The
proposed method shows improved accuracy of parameter estimation
compared to the least-squares method. Additionally, it has the
advantage of smaller data and storage requirements.
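The least-squares baseline against which the wavelet method is compared can be sketched as follows; the wavelet-based estimator itself is not reproduced, and the finite-difference derivatives are an assumption of this sketch:

```python
import numpy as np

def identify_second_order(x, dt, f):
    # Least-squares estimate of (m, c, k) in  m x'' + c x' + k x = f,
    # from sampled displacement x and forcing f. Velocities and
    # accelerations are obtained by central finite differences.
    v = np.gradient(x, dt)
    a = np.gradient(v, dt)
    Phi = np.column_stack([a, v, x])          # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, f, rcond=None)
    return theta                              # (m, c, k)
```

The full time record must be stored to form the regression, which is the data/storage cost the wavelet method is claimed to reduce.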
Abstract: The set covering problem is a classical problem in
computer science and complexity theory. It has many applications,
such as the airline crew scheduling problem, the facility
location problem, vehicle routing, and the assignment problem. In
this paper, three different techniques are applied to solve the
set covering problem. Firstly, a mathematical model of the set
covering problem is introduced and solved using the optimization
solver LINGO. Secondly, the Genetic Algorithm Toolbox available
in MATLAB is used. Lastly, an ant colony optimization method is
programmed in the MATLAB programming language. The results
obtained from these methods are presented in tables. To assess
the performance of the techniques used, benchmark problems
available in the open literature are used.
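As a point of reference for the problem itself, the classic greedy heuristic for set covering (a simple baseline with a ln n approximation guarantee, not one of the three solvers used in the paper) can be sketched as:

```python
def greedy_set_cover(universe, subsets, costs=None):
    # Repeatedly pick the subset with the best cost per newly
    # covered element until the universe is covered.
    costs = costs or [1.0] * len(subsets)
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i = min((j for j in range(len(subsets)) if subsets[j] & uncovered),
                key=lambda j: costs[j] / len(subsets[j] & uncovered))
        chosen.append(i)
        uncovered -= subsets[i]
    return chosen
```

Metaheuristics such as the genetic algorithm and ant colony optimization used in the paper aim to beat this kind of greedy solution on the benchmark instances.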
Abstract: Existing image-based virtual reality applications allow
users to view an image-based 3D virtual environment in an
interactive manner. Users can "walk through"; look left, right,
up and down; and even zoom into objects in these virtual worlds
of images. However, what the user sees during a "zoom in" is just
a close-up view of the same image, which was taken from a
distance. Thus, this does not give the user an accurate view of
the object from the actual distance. In this paper, a simple
technique for zooming in on an object in a virtual scene is
presented. The technique is based on the 'hotspot' concept of
existing applications. Instead of navigating between two
different locations, the hotspots are used to focus on an object
in the scene. For each object, several hotspots are created, and
a different picture is taken for each hotspot. Each consecutive
hotspot takes the user closer to the object. This provides the
user with a correct view of the object based on his proximity to
it. Implementation issues and the relevance of this technique to
potential application areas are highlighted.
Abstract: This research documents a qualitative study of selected
Native Americans who have successfully graduated from mainstream
higher education institutions. The research framework explored
the Bicultural Identity Formation Model as a means of
understanding the expressions of the students' adaptations to
mainstream education. This approach led to an awareness of how
the participants in the study used specific cultural and social
strategies to enhance their educational success, and of how they
coped with cultural dissonance to achieve a new academic
identity. The research implications extend to a larger audience
of bicultural, foreign, and international students experiencing
cultural dissonance.
Abstract: This research presents a fuzzy multi-objective model
for a machine selection problem in the flexible manufacturing
system of a tire company. The two main objectives are
minimization of the average machine error and minimization of the
total setup time. Conventionally, the working team uses trial and
error to select a pressing machine for each task due to the
complexity and constraints of the problem, so both objectives may
not be satisfied; moreover, trial and error takes a long time to
reach a final decision. Therefore, in this research a preemptive
fuzzy goal programming model is developed for solving this
multi-objective problem. The proposed model can obtain
appropriate results with which the decision maker (DM) is
satisfied for both objectives. Besides, alternative choices can
easily be generated by varying the satisfaction level.
Additionally, decision time can be reduced by using the model,
which includes all constraints of the system when generating
solutions. A numerical example illustrates the effectiveness of
the proposed model.
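The fuzzy goal-programming idea behind the model can be illustrated on a toy problem; this sketch uses the max-min (Zimmermann-type) formulation rather than the paper's preemptive variant, with made-up objectives, aspiration levels and constraints:

```python
import numpy as np
from scipy.optimize import linprog

# Two linear objectives z1 = c1.x and z2 = c2.x to be minimized, with
# assumed aspiration (L) and tolerance (U) levels. Linear memberships
# mu_i = (U_i - z_i) / (U_i - L_i); maximize lam with mu_i >= lam.
c1, c2 = np.array([2.0, 1.0]), np.array([1.0, 3.0])
L1, U1 = 4.0, 10.0      # z1 fully satisfactory at <= 4, unacceptable at 10
L2, U2 = 6.0, 12.0
# Feasible region: x1 + x2 >= 4, 0 <= x <= 5. Variables: (x1, x2, lam).
A_ub = [
    list(c1 / (U1 - L1)) + [1.0],   # lam <= (U1 - z1)/(U1 - L1)
    list(c2 / (U2 - L2)) + [1.0],   # lam <= (U2 - z2)/(U2 - L2)
    [-1.0, -1.0, 0.0],              # x1 + x2 >= 4
]
b_ub = [U1 / (U1 - L1), U2 / (U2 - L2), -4.0]
res = linprog(c=[0.0, 0.0, -1.0],   # maximize lam
              A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 5), (0, 5), (0, 1)])
```

Varying the aspiration levels (or, in the preemptive variant, the priority order of the goals) regenerates alternative choices at different satisfaction levels, as described above.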
Abstract: This paper presents work characterizing finite element
performance boundaries within which live, interactive finite element
modeling is feasible on current and emerging systems. These results
are based on wide-ranging tests performed using a prototype finite
element program implemented specifically for this study, thereby enabling
the unified investigation of numerous direct and iterative solver
strategies and implementations in a variety of modeling contexts.
The results are intended to be useful for researchers interested in
interactive analysis by providing baseline performance estimates, to
give guidance in matching solution strategies to problem domains,
and to spur further work addressing the challenge of extending the
present boundaries.
Abstract: The purpose of this study was to investigate the
relationship between parent involvement and the development of
preschool children with disabilities. Parents of 3-year-old
(N=440) and 5-year-old (N=937) children with disabilities
participating in the Special Needs Education Longitudinal Study
were interviewed or answered a web-based questionnaire about
their parenting practices. The children's development was also
evaluated by their teachers. Data were analyzed using structural
equation modeling, and the results are presented in tables and
figures. Based on the results, the researcher makes some
suggestions for future studies.
Abstract: Rutting is one of the major load-related distresses in airport flexible pavements. Rutting in paving materials develops gradually with an increasing number of load applications, usually appearing as longitudinal depressions in the wheel paths, possibly accompanied by small upheavals to the sides. Significant research has been conducted to determine the factors that affect rutting and how they can be controlled. Using experimental design concepts, a series of tests can be conducted while varying the levels of different parameters that could cause rutting in airport flexible pavements. If a proper experimental design is used, the results obtained from these tests can give better insight into the causes of rutting and into the presence of interactions and synergisms among the system variables that influence rutting. Although laboratory experiments are traditionally conducted in a controlled fashion to understand the statistical interaction of variables in such situations, this study attempts to identify the critical system variables influencing airport flexible pavement rut depth from a statistical DoE perspective using real field data from a full-scale test facility. The test results strongly indicate that the response (rut depth) contains too much noise to allow determination of a good model. From a statistical DoE perspective, two major changes are proposed for this experiment: (1) actual replication of the tests is definitely required, and (2) nuisance variables need to be identified and blocked properly. Further investigation is necessary to determine possible sources of noise in the experiment.
Abstract: This paper presents the application of discrete-time
variable structure control with sliding mode, based on the
'reaching law' method, for robust control of a 'simple inverted
pendulum on a moving cart', a standard nonlinear benchmark
system. The controllers designed using this technique are
completely insensitive to parametric uncertainty and external
disturbances. The controller design uses the pole placement
technique to find the state feedback gain matrix, which
determines the dynamic behavior of the system during sliding
mode. This is followed by feedback gain realization using a
control law synthesized from Gao's reaching law. The model of a
single inverted pendulum and the discrete variable structure
controller are developed and simulated in MATLAB-SIMULINK, and
the results are presented. The response of this simulation is
compared with that of the discrete linear quadratic regulator
(DLQR), and the advantages of the sliding mode controller over
the DLQR are also presented.
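Gao's reaching law for the sliding variable s, which the control law above enforces, can be iterated directly; this sketch abstracts away the pendulum-cart plant and shows only the sliding-variable dynamics, with illustrative gains:

```python
import numpy as np

def reaching_law_traj(s0, q=2.0, eps=0.5, T=0.1, steps=100):
    # Gao's discrete reaching law:
    #   s[k+1] = (1 - q*T) * s[k] - eps * T * sign(s[k])
    # The sliding variable decays geometrically toward s = 0 and then
    # chatters inside a quasi-sliding band of width on the order of eps*T.
    s = np.empty(steps + 1)
    s[0] = s0
    for k in range(steps):
        s[k + 1] = (1.0 - q * T) * s[k] - eps * T * np.sign(s[k])
    return s
```

The finite-step reaching phase followed by a bounded quasi-sliding band is what distinguishes the discrete-time behavior from ideal continuous sliding mode.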
Abstract: During the last decade, ultrafine-grained (UFG) and nano-structured (NS) materials have experienced rapid development. In this work, finite element analysis was carried out to investigate the plastic strain distribution in the equal channel angular pressing (ECAP) process. The magnitudes of the standard deviation (S.D.) and the inhomogeneity index (Ci) were compared for different ECAP passes, and a three-dimensional finite element model was verified against experimental tests. Finally, the mechanical properties, including impact energy, of ultrafine-grained commercially pure aluminum produced by this severe plastic deformation method were examined. For this purpose, an equal channel angular pressing die with a channel angle of 90°, an outer corner angle of 20° and a channel diameter of 20 mm was designed and manufactured. Commercially pure aluminum billets were ECAPed for up to four passes by route BC at ambient temperature. The results indicated a great improvement in hardness, yield strength and ultimate tensile strength after the ECAP process: the hardness reaches 67 HV, from 21 HV, after the final pass, and enhancements of about 330% and 285% in the YS and UTS values, respectively, were obtained after the fourth pass compared to the as-received condition. On the other hand, the elongation to failure and the impact energy were reduced by 23% and 50%, respectively, after four passes of ECAP.
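The two strain-homogeneity measures compared across ECAP passes can be computed as follows; the definition Ci = (max - min) / mean is the one commonly used for ECAP strain fields and is assumed here:

```python
import numpy as np

def strain_inhomogeneity(eps):
    # Standard deviation and inhomogeneity index Ci of the equivalent
    # plastic strain sampled over the billet cross-section.
    eps = np.asarray(eps, dtype=float)
    sd = eps.std(ddof=1)
    ci = (eps.max() - eps.min()) / eps.mean()
    return sd, ci
```

Lower values of both measures after successive passes would indicate a more homogeneous strain distribution, which is what the FE comparison above quantifies.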
Abstract: The aim of this investigation is to study the
performance of a new generation of PVD-coated grades and to map
the influence of cutting conditions on tool life in the milling
of ADI (Austempered Ductile Iron). The results show that chipping
is the main wear mechanism determining tool life in dry
conditions, and notch wear in wet conditions, for this
application. This is due to the different stress mechanisms and
to preexisting cracks in the coating. The wear development
clearly shows that the new PVD coating (C20) has the best ability
to delay chipping growth. It was also found that the high Al
content of the new coating (C20) was especially favorable
compared to a TiAlN multilayer with lower Al content (C30) or a
CVD coating. This is due to the fine grains and low compressive
stress level in the coating, which increase its ability to
withstand mechanical and thermal impact. It was also found that
the use of coolant decreases tool life by 70-80% compared to dry
milling.