Abstract: The design of Infinite Impulse Response (IIR) digital filters is
usually formulated as the minimization of the complex magnitude error, which
includes both magnitude and phase information. However, the group delay of the
filter obtained by solving such a design problem may be far from the desired
group delay. In this paper, we propose a design method for stable IIR digital
filters with prespecified maximum group delay errors. In the proposed method,
the approximation problems for the magnitude-phase and the group delay are
defined separately, and these two approximation problems are solved
alternately using successive projections. As a result, by alternately
executing the coefficient updates for the magnitude-phase and group delay
approximations, the proposed method can design IIR filters that satisfy the
prespecified allowable errors not only for the complex magnitude but also for
the group delay. The usefulness of the proposed method is verified through
several examples.
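The mechanism of successive projections can be illustrated with a minimal numerical sketch (this is not the authors' filter-design procedure; the two constraint sets and the starting point below are arbitrary): projecting a point alternately onto two sets drives it into their intersection, the same principle used to satisfy the magnitude-phase and group-delay constraints simultaneously.

```python
import numpy as np

def project_onto_line(p, a, d):
    # orthogonal projection of point p onto the line through a with direction d
    d = d / np.linalg.norm(d)
    return a + np.dot(p - a, d) * d

# two toy constraint sets: the line y = x and the line y = 2
x = np.array([5.0, -3.0])
for _ in range(100):
    x = project_onto_line(x, np.array([0.0, 0.0]), np.array([1.0, 1.0]))  # onto y = x
    x = project_onto_line(x, np.array([0.0, 2.0]), np.array([1.0, 0.0]))  # onto y = 2
# x converges to the intersection point (2, 2)
```

Each cycle halves the remaining distance to the intersection, so the iterate satisfies both constraints in the limit.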
Abstract: The prediction of financial time series is a very complicated
process. If the efficient market hypothesis holds, then the predictability of
most financial time series would be a rather controversial issue, because the
current price already contains all of the information available in the market.
This paper extends the Adaptive Neuro Fuzzy Inference System (ANFIS) for High
Frequency Trading, an expert system capable of combining fuzzy reasoning with
the pattern-recognition capability of neural networks for financial
forecasting and trading at high frequency. In order to eliminate unnecessary
input in the training phase, a new event-based volatility model is proposed.
Taking volatility and the scaling laws of financial time series into
consideration led to the development of the Intraday Seasonality Observation
Model. This new model allows the observation of specific events and
seasonalities in the data and subsequently removes any unnecessary data. This
new event-based volatility model provides the ANFIS system with more accurate
input and has increased the overall performance of the system.
Abstract: Gene expression profiling is rapidly evolving into a powerful
technique for investigating tumor malignancies. Researchers are overwhelmed
with microarray-based platforms and methods that give them the freedom to
conduct large-scale gene expression profiling measurements. Simultaneously,
investigations into cross-platform integration methods have started gaining
momentum due to their underlying potential to help address a myriad of broad
biological issues in tumor diagnosis, prognosis, and therapy. However,
comparing results from different platforms remains a challenging task, as
various inherent technical differences exist between the microarray platforms.
In this paper, we describe a simple ratio-transformation method which can
provide common ground for the cDNA and Affymetrix platforms toward
cross-platform integration. The method is based on the characteristic data
attributes of the Affymetrix and cDNA platforms. In this work, we considered
seven childhood leukemia patients and their gene expression levels on each
platform. With a dataset of 822 differentially expressed genes from both
platforms, we carried out a specific ratio treatment of the Affymetrix data,
which subsequently improved its relationship with the cDNA data.
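As a hedged illustration of the general idea (the paper's specific ratio treatment is not reproduced here), one common way to put Affymetrix intensities on a ratio scale comparable to cDNA log-ratios is to divide each gene's intensities by its mean across samples; the intensity matrix below is hypothetical.

```python
import numpy as np

# hypothetical Affymetrix intensity matrix: rows are genes, columns are samples
affy = np.array([[120.0, 340.0, 200.0],
                 [1500.0, 900.0, 1200.0]])

# ratio transformation: divide each gene's intensities by its mean across
# samples, yielding ratio-style values closer in nature to cDNA two-channel data
ratios = affy / affy.mean(axis=1, keepdims=True)
log_ratios = np.log2(ratios)  # log-ratios, the usual scale for cDNA arrays
```

By construction each gene's transformed values average to one, removing the platform-specific intensity scale before cross-platform comparison.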
Abstract: It is well known that a linear dynamic system including
a delay will exhibit limit cycle oscillations when a bang-bang sensor
is used in the feedback loop of a PID controller. A similar behaviour
occurs when a delayed feedback signal is used to train a neural
network. This paper develops a method of predicting this behaviour
by linearizing the system, which can be shown to behave in a manner
similar to an integral controller. Using this procedure, it is possible
to predict the characteristics of the neural network driven limit cycle
to varying degrees of accuracy, depending on the information known
about the system. An application is also presented: the intelligent
control of a spark ignition engine.
Abstract: This article analyzes the thermal buckling of a functionally graded beam with piezoelectric layers, based on third-order shear deformation theory, under various boundary conditions. The beam properties are assumed to vary continuously from the lower surface to the upper surface of the beam. The equilibrium equations are derived using the total potential energy equations, the Euler equations, the piezoelectric material constitutive equations, and the assumptions of third-order shear deformation theory. To this end, a functionally graded beam with piezoelectric layers is first analyzed thoroughly with clamped-clamped boundary conditions using third-order shear deformation theory; then, after verifying the correctness of all the equations, the same beam with piezoelectric layers is analyzed under simply supported-simply supported and simply supported-clamped boundary conditions. In this article, the critical buckling temperature of the functionally graded beam is derived in two different ways, without and with the piezoelectric layers, and the results are compared. Finally, all the results obtained are compared and contrasted with the same samples under identical and different conditions through tables and charts. It is noteworthy that the software MAPLE has been used for the numerical calculations.
Abstract: This paper describes a method for modeling shadow play puppets using
sophisticated computer graphics techniques available in OpenGL, in order to
allow interactive play in a real-time environment as well as to produce
realistic animation. A novel real-time method is proposed for modeling the
puppet and its shadow image that allows interactive play of virtual shadow
play using texture mapping and blending techniques. Special effects such as
lighting and blurring for the virtual shadow play environment are also
developed. Moreover, the use of geometric transformations and hierarchical
modeling facilitates interaction among the different parts of the puppet
during animation. Based on the experiments and the survey that were carried
out, the respondents involved were very satisfied with the outcomes of these
techniques.
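The blending step can be sketched with standard "over" alpha compositing, which is what OpenGL's glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) computes per pixel; the grayscale backdrop and silhouette mask below are illustrative placeholders, not the paper's actual puppet assets.

```python
import numpy as np

def blend_shadow(screen, shadow_alpha, shadow_color=0.0):
    # standard "over" alpha blending: out = src * alpha + dst * (1 - alpha),
    # matching OpenGL's glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
    return shadow_color * shadow_alpha + screen * (1.0 - shadow_alpha)

screen = np.full((4, 4), 0.9)                    # bright backdrop
alpha = np.zeros((4, 4)); alpha[1:3, 1:3] = 0.8  # puppet silhouette mask
out = blend_shadow(screen, alpha)                # darkened shadow over backdrop
```

Varying the alpha mask (e.g. blurring its edges) is what produces the soft, semi-transparent shadow look described above.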
Abstract: A multicriteria linear programming problem with integer variables and a parameterized optimality principle, "from lexicographic to Slater", is considered. A situation in which the initial coefficients of the penalty cost functions are not fixed but may be subject to variations is studied. For any efficient solution, appropriate quality measures are introduced which incorporate information about variations of the penalty cost function coefficients. These measures correspond to the so-called stability and accuracy functions defined earlier for efficient solutions of a generic multicriteria combinatorial optimization problem with Pareto and lexicographic optimality principles. Various properties of such functions are studied, and the maximum norms of perturbations for which an efficient solution preserves the property of being efficient are calculated.
Abstract: While compressing text files is useful, compressing still image
files is almost a necessity. A typical image takes up much more storage than a
typical text message, and without compression images would be extremely clumsy
to store and distribute. The amount of information required to store pictures
on modern computers is quite large in relation to the amount of bandwidth
commonly available to transmit them over the Internet. Image compression
addresses the problem of reducing the amount of data required to represent a
digital image. The performance of any image compression method can be
evaluated by measuring the root-mean-square error (RMSE) and the peak
signal-to-noise ratio (PSNR). The method of image compression analyzed in this
paper is based on the lossy JPEG image compression technique, the most popular
compression technique for color images. JPEG compression is able to greatly
reduce file size with minimal image degradation by throwing away the least
"important" information. In JPEG, both chroma components are downsampled
simultaneously, but in this paper we compare the results when the compression
is done by downsampling a single chroma component. We demonstrate that a
higher compression ratio is achieved when the blue chrominance is downsampled,
compared with downsampling the red chrominance, in JPEG compression. However,
the peak signal-to-noise ratio is higher when the red chrominance is
downsampled, compared with downsampling the blue chrominance. In particular,
we use the image hats.jpg as a demonstration of JPEG compression using a
low-pass filter and show that, with both methods, the image is compressed with
barely any visual differences.
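The single-chroma comparison can be sketched as follows; this is a simplified illustration (BT.601 color conversion, 2×2 averaging, nearest-neighbour upsampling, and a random test image) rather than the full JPEG pipeline or the hats.jpg experiment.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # BT.601 full-range RGB -> YCbCr conversion
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def down_up(c):
    # 2x2 average downsampling followed by nearest-neighbour upsampling
    small = (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2]) / 4.0
    return np.kron(small, np.ones((2, 2)))

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
rgb = rng.uniform(0, 255, size=(8, 8, 3))
y, cb, cr = rgb_to_ycbcr(rgb)
psnr_cb = psnr(cb, down_up(cb))  # quality loss when only Cb is downsampled
psnr_cr = psnr(cr, down_up(cr))  # quality loss when only Cr is downsampled
```

Comparing the two PSNR values on a real image is exactly the measurement underlying the Cb-versus-Cr comparison described above.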
Abstract: Despite many years of development, mainstream workflow solutions from the IT industry have not made ad-hoc workflow support easy or inexpensive in MIS. Moreover, most academic approaches tend to make the resulting BPM (Business Process Management) more complex and clumsy, since they usually necessitate workflow modeling. To cope well with various ad-hoc or casual requirements on workflows while still keeping things simple and inexpensive, the author first puts forth the TSM design pattern, which can provide flexible workflow control while minimizing the need for predefinitions and workflow modeling, and which introduces a generic approach for building BPM in workflow-aware MISs (Management Information Systems) with low development and running expenses.
Abstract: An implant elicits a biological response in the surrounding tissue
which determines the acceptance and long-term function of the implant. Dental
implants have become one of the main clinical therapy methods after tooth
loss. A successful implant is in contact with bone and with soft tissue,
represented here by fibroblasts. In our study we focused on the interaction of
six different chemically and physically modified titanium implants (Tis-MALP,
Tis-O, Tis-OA, Tis-OPAAE, Tis-OZ, Tis-OPAE) with alveolar fibroblasts as well
as with five types of microorganisms (S. epidermidis, S. mutans, S. gordonii,
S. intermedius, C. albicans). Microorganism adhesion was determined by CFU
(colony forming unit) counts and biofilm formation. The expression of α3β1 and
vinculin on alveolar fibroblasts was demonstrated using a phospho-specific
cell-based ELISA (PACE). Alveolar fibroblasts showed the highest expression of
these proteins on Tis-OPAAE and Tis-OPAE. This corresponds with the results
from bacterial adhesion and biofilm formation, and it was related to the
lowest production of collagen I by alveolar fibroblasts on the Tis-OPAAE
titanium disc.
Abstract: A framework is presented for estimating the state of a dynamically
varying environment where data are generated from heterogeneous sources
possessing partial knowledge about the environment. It is derived entirely
within the Dempster-Shafer and Evidence Filtering frameworks. The belief about
the current state is expressed through belief and plausibility functions. In
addition to the Single Input Single Output Evidence Filter, a Multiple Input
Single Output Evidence Filtering approach is introduced. A variety of
applications, such as situational estimation of an emergency environment, can
be developed successfully within the framework. A fire propagation scenario is
used to validate the proposed framework, and simulation results are presented.
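The belief and plausibility functions used to express the state estimate can be computed directly from a basic mass assignment; the mass values and the {fire, smoke, safe} frame of discernment below are hypothetical, chosen only to echo the fire propagation scenario.

```python
def belief(mass, hypothesis):
    # Bel(A): total mass committed to subsets of A (evidence implying A)
    return sum(m for s, m in mass.items() if s <= hypothesis)

def plausibility(mass, hypothesis):
    # Pl(A): total mass of all sets intersecting A (evidence not contradicting A)
    return sum(m for s, m in mass.items() if s & hypothesis)

# hypothetical mass function over the frame {fire, smoke, safe}
mass = {
    frozenset({'fire'}): 0.5,
    frozenset({'fire', 'smoke'}): 0.3,
    frozenset({'fire', 'smoke', 'safe'}): 0.2,  # residual ignorance
}
A = frozenset({'fire'})
bel, pl = belief(mass, A), plausibility(mass, A)  # Bel(A) <= Pl(A)
```

The interval [Bel(A), Pl(A)] is the uncertainty band the framework reports for each candidate state.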
Abstract: Carbon disulfide is widely used for the production of viscose rayon,
rubber, and other organic materials, and it is a feedstock for the synthesis
of sulfuric acid. The objective of this paper is to analyze possibilities for
efficient production of CS2 from sour natural gas reformation (H2SMR):
2H2S + CH4 = CS2 + 4H2. The effects of the H2S to CH4 feed ratio and of the
reaction temperature on carbon disulfide production are also investigated
numerically in a reforming reactor. The chemical reaction model is based on an
assumed Probability Density Function (PDF) parameterized by the mean and
variance of the mixture fraction and a β-PDF shape. The results show that the
major factor influencing CS2 production is reactor temperature. The yield of
carbon disulfide increases with increasing H2S to CH4 feed gas ratio
(H2S/CH4 ≤ 4). The yield of C(s) also increases with increasing temperature
until the temperature reaches 1000 K; then, due to increasing CS2 production
and consumption of C(s), the yield of C(s) drops with further increase in
temperature. The predicted CH4 and H2S conversions and the yield of carbon
disulfide are in good agreement with the results of Huang and T-Raissi.
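From the reaction 2H2S + CH4 = CS2 + 4H2 alone, a stoichiometric upper bound on the CS2 yield can be sketched; this ignores equilibrium, kinetics, and carbon deposition, so it only bounds (rather than reproduces) the reported trend with feed ratio.

```python
def cs2_stoich_limit(h2s_moles, ch4_moles):
    # 2 H2S + CH4 -> CS2 + 4 H2: each mole of CS2 consumes 2 mol H2S
    # and 1 mol CH4, so the limiting reagent caps the attainable CS2
    return min(h2s_moles / 2.0, ch4_moles)

# stoichiometric CS2 limit per mole of CH4 as the H2S/CH4 feed ratio varies
limits = {ratio: cs2_stoich_limit(ratio, 1.0) for ratio in (1.0, 2.0, 4.0)}
```

Below the stoichiometric ratio of 2, H2S is limiting; above it, CH4 limits and the bound saturates, so the observed yield gains at higher ratios come from shifting the reaction equilibrium rather than from stoichiometry.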
Abstract: This paper considers the problem of finding a low-cost chip set for
a minimum-cost partitioning of a large logic circuit. Chip sets are selected
from a given library. Each chip in the library has a different price, area,
and I/O pin count. We propose a low-cost chip set selection algorithm. The
inputs to the algorithm are a netlist and the chip information in the library.
The output is a list of chip sets that satisfy the area and maximum partition
count constraints, sorted by cost. The algorithm finds the sorted list of chip
sets from minimum cost to maximum cost. We used MCNC benchmark circuits for
the experiments. The experimental results show that all of the chip sets found
satisfy the multiple partitioning constraints.
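A minimal sketch of the selection idea follows; it is restricted, for brevity, to homogeneous chip sets and to the area and partition-count checks (the library entries are hypothetical, and the paper's actual algorithm also uses the netlist and I/O pin constraints).

```python
import math

# hypothetical chip library: (name, price, area, io_pins)
library = [
    ('A', 5.0, 100, 40),
    ('B', 8.0, 180, 64),
    ('C', 3.0, 60, 24),
]

def feasible_chip_sets(library, required_area, max_partitions):
    # enumerate chip sets that cover the required circuit area without
    # exceeding the partition limit, sorted from minimum to maximum cost
    sets = []
    for name, price, area, pins in library:
        count = math.ceil(required_area / area)  # chips (= partitions) needed
        if count <= max_partitions:
            sets.append((price * count, name, count))
    return sorted(sets)

result = feasible_chip_sets(library, 500, 8)
```

Sorting the feasible sets by total cost yields exactly the minimum-to-maximum cost list described above.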
Abstract: Sparse representation, which can represent high-dimensional data
effectively, has been successfully used in computer vision and pattern
recognition problems. However, it does not consider the label information of
data samples. To overcome this limitation, we develop a novel dimensionality
reduction algorithm, namely discriminatively regularized sparse subspace
learning (DR-SSL), in this paper. The proposed DR-SSL algorithm can not only
make use of the sparse representation to model the data, but can also
effectively employ the label information to guide the dimensionality reduction
procedure. In addition, the presented algorithm can effectively deal with the
out-of-sample problem. Experiments on gene-expression data sets show that the
proposed algorithm is an effective tool for dimensionality reduction and
gene-expression data classification.
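Sparse representation of a sample over a dictionary can be sketched with Orthogonal Matching Pursuit, one common way to compute sparse codes (the paper does not specify its solver, and the identity dictionary below is purely illustrative).

```python
import numpy as np

def omp(D, x, k):
    # Orthogonal Matching Pursuit: greedy k-sparse code of x over dictionary D
    residual, idx = x.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        idx.append(j)
        # refit coefficients on all selected atoms, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

D = np.eye(4)  # trivial orthonormal dictionary for illustration
x = np.array([0.0, 3.0, 0.0, -1.0])
code = omp(D, x, 2)  # recovers the two nonzero entries of x
```

With an orthonormal dictionary the code simply selects the largest-magnitude components; with a learned, overcomplete dictionary the same routine yields the sparse codes such methods build on.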
Abstract: This paper presents a robust method to detect obstacles in stereo images using a shadow removal technique and color information. Stereo-vision-based obstacle detection aims to detect obstacles and compute their depth using stereo matching and a disparity map. The proposed method is divided into three phases: the first phase detects obstacles and removes shadows, the second performs matching, and the last computes depth. In the first phase, we propose a robust method for detecting obstacles in stereo images using a shadow removal technique based on color information in HSI space. For matching, we use the Normalized Cross Correlation (NCC) function with a 5 × 5 window: we prepare an empty matching table τ and grow disparity components by drawing a seed s from a seed set S, computed using the Canny edge detector, and adding it to τ. In this way we achieve higher performance than previous works [2,17]. A fast stereo matching algorithm is proposed that visits only a small fraction of the disparity space in order to find a semi-dense disparity map; it works by growing from a small set of correspondence seeds. The obstacles identified in phase one, which appear in the disparity map of phase two, enter the third phase of depth computation. Finally, experimental results are presented to show the effectiveness of the proposed method.
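The NCC window scoring at the heart of the matching phase can be sketched as follows; the synthetic image strip and fixed 3-pixel shift below are illustrative stand-ins for a real rectified stereo pair.

```python
import numpy as np

def ncc(a, b):
    # normalized cross-correlation of two equally sized patches
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(1)
left = rng.uniform(size=(5, 20))
right = np.roll(left, 3, axis=1)  # simulate a uniform 3-pixel disparity

# score candidate disparities for a 5x5 window starting at column 10 of left
patch = left[:, 10:15]
scores = {d: ncc(patch, right[:, 10 + d:15 + d]) for d in range(5)}
best = max(scores, key=scores.get)  # disparity with the highest NCC score
```

In the seed-growing scheme, high-scoring matches like this one are added to the table τ and their neighbours are explored next, so only a small fraction of the disparity space is ever visited.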
Abstract: Little attention has been paid to information transmission between
the portfolios of large stocks and small stocks in the Korean stock market.
This study investigates the return and volatility transmission mechanisms
between large and small stocks on the Korea Exchange (KRX). This study also
explores whether bad news in the large-stock market leads to volatility in the
small-stock market that is larger than the volatility following good news in
the large-stock market. By employing the Granger causality test, we find
unidirectional return transmission from large stocks to medium and small
stocks. This evidence indicates that past information about the large stocks
has a better ability to predict the returns of the medium and small stocks in
the Korean stock market. Moreover, by using the asymmetric GARCH-BEKK model,
we observe a unidirectional relationship of asymmetric volatility transmission
from large stocks to the medium and small stocks. This finding suggests that
volatility in the medium and small stocks following a negative shock in the
large stocks is larger than that following a positive shock in the large
stocks.
Abstract: The primary purpose of this article is to examine the implications
of globalization for education. Globalization plays an important role as a
process in the economic, political, cultural, and technological dimensions of
contemporary human life. Education has its effects on this process: while
influencing it by educating global citizens with universal human features and
characteristics, it has been influenced by this phenomenon too. Nowadays, the
role of education is not just to develop in students the knowledge and skills
necessary for new kinds of jobs. If education is to help students be prepared
for the new global society, it has to make them engaged, productive, and
critical citizens for the global era, so that they can reflect on their roles
as key actors in a dynamic, often uneven, matrix of economic and cultural
exchanges. If education is to reinforce and strengthen the national identity
and the value system of children and teenagers, it should make them ready for
living in the global era of this century. The method used in this research is
documentary analysis. Studies in this field show that globalization influences
the processes of production, distribution, and consumption of knowledge. In
the information era, this has not only provided the necessary opportunities
for educational exchanges worldwide but also offers advantages to developing
countries, enabling them to strengthen the educational bases of their
societies and take an important step toward their future.
Abstract: In today's world, where everything is rapidly changing and
information technology is highly developed, many features of culture, society,
politics, and the economy have changed. The advent of information technology
and electronic data transmission has made communication easy, and fields such
as e-learning and e-commerce have become easily accessible to everyone. One of
these technologies is virtual training, and the "quality" of such education
systems is critical. A total of 131 questionnaires were prepared and
distributed among university students at Toba University. The research thus
examined the factors that affect the quality of learning from the perspective
of the staff, students, and professors of this type of university. It is
concluded that the important factors in virtual training are the quality of
the professors, the quality of the staff, and the quality of the university.
These factors had the highest priority in this education system and are
necessary for improving virtual training.
Abstract: The widely used Total Variation de-noising algorithm can preserve sharp edges while removing noise. However, since a fixed regularization parameter is used over the entire image, small details and textures are often lost in the process. In this paper, we propose a modified Total Variation algorithm to better preserve smaller-scale features. This is done by allowing an adaptive regularization parameter to control the amount of de-noising in each region of the image, according to the relative information of the local feature scale. Experimental results demonstrate the effectiveness of the proposed algorithm. Compared with standard Total Variation, our algorithm better preserves smaller-scale features and shows better performance.
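The adaptive-parameter idea can be sketched with a simple explicit gradient-descent scheme for a smoothed TV energy, where the fidelity weight λ is a per-pixel array raised near strong gradients so edges and textures are smoothed less; the step image, noise level, and threshold below are illustrative and not the paper's actual algorithm.

```python
import numpy as np

def tv_denoise(f, lam, n_iter=200, dt=0.05, eps=0.1):
    # explicit gradient descent on the smoothed TV energy
    #   E(u) = sum |grad u|_eps + (lam / 2) * (u - f)^2,
    # where lam may be a per-pixel array (adaptive regularization)
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        u += dt * (div - lam * (u - f))
    return u

rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0   # step edge
noisy = clean + rng.normal(0.0, 0.2, clean.shape)

# adaptive fidelity weight: trust the data more where the gradient is strong,
# so edges and fine features are de-noised less than flat regions
gx = np.abs(np.roll(noisy, -1, axis=1) - noisy)
lam = 0.5 + 4.0 * (gx > 0.5)
denoised = tv_denoise(noisy, lam)
```

A scalar λ reduces to standard TV de-noising; making λ depend on a local feature-scale estimate is the modification the abstract describes.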
Abstract: The broadcast problem, including the plan design, is considered. The
data are inserted and numbered in a predefined order into relations of
customized size. The server's ability to create a full, Regular Broadcast Plan
(RBP) with single and multiple channels after some data transformations is
examined. The Regular Geometric Algorithm (RGA) prepares an RBP and enables
users to catch their items while avoiding energy waste on their devices.
Moreover, the Grouping Dimensioning Algorithm (GDA), based on integrated
relations, can guarantee the discrimination of services with a minimum number
of channels. This last property, along with self-monitoring and
self-organizing, can be offered by servers today, also providing channel
availability and lower energy consumption through a smaller number of
channels. Simulation results are provided.