Abstract: While compressing text files is useful, compressing
still image files is almost a necessity. A typical image takes up far
more storage than a typical text message, and without compression
images would be extremely cumbersome to store and distribute. The
amount of information required to store pictures on modern
computers is large relative to the bandwidth commonly available to
transmit them over the Internet. Image compression addresses the problem of reducing
the amount of data required to represent a digital image. The performance
of any image compression method can be evaluated by measuring the
root-mean-square error (RMSE) and peak signal-to-noise ratio (PSNR). The method of
image compression that will be analyzed in this paper is based on the
lossy JPEG image compression technique, the most popular
compression technique for color images. JPEG compression is able to
greatly reduce file size with minimal image degradation by discarding
the least "important" information. In standard JPEG, both chroma
components are downsampled simultaneously; in this paper we
compare the results when only a single chroma component is
downsampled. We demonstrate that a higher compression ratio is achieved when the
blue chrominance (Cb) is downsampled than when the red chrominance (Cr)
is downsampled, whereas the peak signal-to-noise
ratio is higher when the red chrominance is downsampled. In
particular, we use the hats.jpg image as a demonstration of JPEG
compression using a low-pass filter and show that, with both methods,
the image is compressed with barely any visible difference.
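As a concrete illustration of the evaluation measures mentioned above, the following Python sketch computes RMSE and PSNR between an original and a compressed image. It is a generic formulation, not the paper's own code, and the file names in the usage comment are placeholders.

```python
import numpy as np

def rmse_psnr(original, compressed, max_val=255.0):
    """Root-mean-square error and peak signal-to-noise ratio (dB)
    between two images of equal shape, assuming 8-bit pixel values."""
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    mse = np.mean(diff ** 2)
    rmse = np.sqrt(mse)
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
    return rmse, psnr

# Illustrative usage (file names are placeholders):
# from PIL import Image
# original = np.array(Image.open("hats_original.png"))
# compressed = np.array(Image.open("hats.jpg"))
# print(rmse_psnr(original, compressed))
```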
Abstract: Despite many years of development, mainstream workflow solutions from the IT industry have not made ad-hoc workflow support easy or inexpensive in MIS. Moreover, most academic approaches tend to make the resulting BPM (Business Process Management) more complex and clumsy, since they typically require the workflow to be modeled. To cope with various ad-hoc or casual workflow requirements while keeping things simple and inexpensive, the author first puts forth the TSM design pattern, which provides flexible workflow control while minimizing the demand for predefinition and workflow modeling, and which introduces a generic approach for building BPM into workflow-aware MISs (Management Information Systems) with low development and running expenses.
Abstract: A framework is presented to estimate the state of a dynamically
varying environment in which data are generated from heterogeneous
sources possessing only partial knowledge about the environment.
The framework is derived entirely within the Dempster-Shafer and Evidence
Filtering frameworks. The belief about the current state is expressed
through belief and plausibility functions. In addition to the Single Input
Single Output Evidence Filter, a Multiple Input Single Output Evidence
Filtering approach is introduced. A variety of applications, such as
situational estimation of an emergency environment, can be developed
successfully within the framework. A fire propagation scenario is used
to justify the proposed framework, and simulation results are presented.
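To make the belief and plausibility functions referred to above concrete, the following Python sketch computes Bel(A) and Pl(A) from a basic mass assignment over a small, invented frame of discernment; the frame and mass values are illustrative and not taken from the paper.

```python
def belief(mass, hypothesis):
    """Bel(A) = sum of masses of all non-empty subsets of A."""
    return sum(m for s, m in mass.items() if s and s <= hypothesis)

def plausibility(mass, hypothesis):
    """Pl(A) = sum of masses of all focal sets intersecting A."""
    return sum(m for s, m in mass.items() if s & hypothesis)

# Illustrative frame of discernment for a fire-propagation state
frame = frozenset({'no_fire', 'smouldering', 'burning'})
mass = {
    frozenset({'burning'}): 0.5,
    frozenset({'smouldering', 'burning'}): 0.3,
    frame: 0.2,                     # mass assigned to total ignorance
}
A = frozenset({'burning'})
print(belief(mass, A), plausibility(mass, A))   # 0.5, 1.0
```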
Abstract: This paper considers the problem of finding a low-cost
chip set for a minimum-cost partitioning of a large logic circuit. Chip
sets are selected from a given library. Each chip in the library has a
different price, area, and I/O pin count. We propose a low-cost chip set
selection algorithm. The inputs to the algorithm are a netlist and the chip
information in the library. The output is a list of chip sets that satisfy the
area and maximum-partitioning-number constraints, sorted by cost. The
algorithm finds the sorted list of chip sets from minimum cost to
maximum cost. We used MCNC benchmark circuits for the experiments.
The experimental results show that all of the chip sets found satisfy the
multiple partitioning constraints.
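The following Python sketch illustrates, under strong simplifying assumptions, the kind of cost-sorted enumeration described above: it lists chip sets that cover a required area within a maximum number of chips, sorted from minimum to maximum cost. It ignores I/O pin limits and the actual netlist partitioning, and the library values are invented.

```python
from itertools import combinations_with_replacement

def chip_sets_sorted_by_cost(library, required_area, max_partitions):
    """Enumerate chip sets (up to max_partitions chips) whose total area
    covers the required circuit area, sorted from minimum to maximum cost.
    'library' is a list of (name, price, area) tuples; this sketch ignores
    I/O pin limits and the actual partitioning step."""
    feasible = []
    for k in range(1, max_partitions + 1):
        for combo in combinations_with_replacement(library, k):
            if sum(area for _, _, area in combo) >= required_area:
                cost = sum(price for _, price, _ in combo)
                feasible.append((cost, [name for name, _, _ in combo]))
    return sorted(feasible)

# Invented library: (name, price, area)
library = [('A', 10, 400), ('B', 25, 1200), ('C', 60, 3000)]
print(chip_sets_sorted_by_cost(library, required_area=2000, max_partitions=3)[:3])
```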
Abstract: Sparse representation, which can represent high-dimensional
data effectively, has been successfully used in computer vision
and pattern recognition problems. However, it does not consider the
label information of data samples. To overcome this limitation,
we develop in this paper a novel dimensionality reduction algorithm, namely
discriminatively regularized sparse subspace learning (DR-SSL).
The proposed DR-SSL algorithm not only makes use of
the sparse representation to model the data, but also effectively
employs the label information to guide the dimensionality
reduction procedure. In addition, the presented algorithm can effectively deal
with the out-of-sample problem. The experiments on gene-expression
data sets show that the proposed algorithm is an effective tool for
dimensionality reduction and gene-expression data classification.
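As a sketch of the sparse representation step mentioned above (not the DR-SSL algorithm itself, whose discriminative regularization is not reproduced here), a sample can be coded over a dictionary of other samples with L1-regularized least squares, for example via scikit-learn's Lasso. All data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_code(x, dictionary, alpha=0.05):
    """Sparse representation of sample x (shape (d,)) over a dictionary
    (shape (d, n)) of atoms, via L1-regularized least squares."""
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    model.fit(dictionary, x)
    return model.coef_          # mostly-zero coefficient vector of length n

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 200))              # 200 atoms in 50 dimensions
x = D[:, 3] + 0.01 * rng.standard_normal(50)    # sample close to atom 3
w = sparse_code(x, D)
print(np.count_nonzero(w), w[3])                # few non-zeros, weight close to 1 on atom 3
```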
Abstract: This paper presents a robust method to detect obstacles in stereo images using a shadow removal technique and color information. Stereo-vision-based obstacle detection is an algorithm that aims to detect obstacles and compute their depth using stereo matching and a disparity map. The proposed method is divided into three phases: the first phase detects obstacles and removes shadows, the second performs matching, and the last computes depth. In the first phase, we propose a robust method for detecting obstacles in stereo images using a shadow removal technique based on color information in HIS space. For matching we use the Normalized Cross Correlation (NCC) function with a 5 × 5 window, prepare an empty matching table τ, and grow disparity components by drawing a seed s from the seed set S, which is computed using the Canny edge detector, and adding it to τ. In this way we achieve higher performance than previous works [2,17]. A fast stereo matching algorithm is proposed that visits only a small fraction of the disparity space in order to find a semi-dense disparity map; it works by growing from a small set of correspondence seeds. The obstacles identified in phase one, which appear in the disparity map of phase two, enter the third phase of depth computation. Finally, experimental results are presented to show the effectiveness of the proposed method.
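The NCC window matching used in the second phase can be sketched as follows. This is a generic formulation with a 5 × 5 window (half-width 2), not the paper's seed-growing implementation, and the per-pixel disparity scan shown is only illustrative.

```python
import numpy as np

def ncc(left, right, y, xl, xr, half=2):
    """Normalized cross-correlation between a (2*half+1)^2 window centred
    at (y, xl) in the left image and (y, xr) in the right image."""
    a = left[y-half:y+half+1, xl-half:xl+half+1].astype(np.float64).ravel()
    b = right[y-half:y+half+1, xr-half:xr+half+1].astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b / denom)

def best_disparity(left, right, y, x, max_disp=32, half=2):
    """Brute-force disparity for one pixel (illustrative only; the paper
    instead grows matches from a small set of correspondence seeds)."""
    scores = [ncc(left, right, y, x, x - d, half)
              for d in range(max_disp) if x - d - half >= 0]
    return int(np.argmax(scores))
```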
Abstract: Little attention has been paid to information
transmission between the portfolios of large stocks and small stocks in the Korean stock market. This study investigates the return and volatility transmission mechanisms between large and small stocks in
the Korea Exchange (KRX). This study also explores whether bad news in the large stock market leads to greater volatility in the small stock
market than good news in the large stock market does. By employing the Granger causality test, we found
unidirectional return transmissions from the large stocks to medium
and small stocks. This evidence indicates that past information about
the large stocks has a better ability to predict the returns of the medium and small stocks in the Korean stock market. Moreover, using the
asymmetric GARCH-BEKK model, we observed a unidirectional relationship of asymmetric volatility transmission from large stocks to
the medium and small stocks. This finding suggests that volatility in
the medium and small stocks following a negative shock in the large
stocks is larger than that following a positive shock in the large stocks.
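A minimal sketch of the Granger causality step, using statsmodels on two synthetic return series, is shown below; the asymmetric GARCH-BEKK estimation is not reproduced, and the data are invented for illustration.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic daily returns: small-cap returns depend on lagged large-cap returns.
rng = np.random.default_rng(0)
large = rng.standard_normal(500)
small = 0.4 * np.roll(large, 1) + rng.standard_normal(500)

# statsmodels tests whether the series in the second column Granger-causes
# the series in the first column.
data = np.column_stack([small, large])
results = grangercausalitytests(data, maxlag=5)
```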
Abstract: The primary purpose of this article is to examine the
implications of globalization for education. Globalization plays
an important role as a process in the economic, political, cultural
and technological dimensions of contemporary human life, which
has been affected by it. Education has its effects in this
process: while influencing it by educating global citizens
with universal human features and characteristics, it has been
influenced by this phenomenon as well. Nowadays, the role of education
is not just to develop in students the knowledge and skills
necessary for new kinds of jobs. If education is to help
students prepare for the new global society, it has to make them
engaged, productive and critical citizens for the global era, so that
they can reflect on their roles as key actors in a dynamic, often
uneven, matrix of economic and cultural exchanges. If education
is to reinforce and strengthen the national identity and value system
of children and teenagers, it should make them ready to live
in the global era of this century. The method used in this research is
documentary analysis of the relevant documents. Studies in this field show
that globalization influences the processes of producing,
distributing and consuming knowledge. Its occurrence in the
information era has not only provided the necessary
opportunities for worldwide educational exchange but also
offers advantages to developing countries, enabling them to
strengthen the educational bases of their societies and take an important
step toward their future.
Abstract: In today's world, where everything is rapidly changing
and information technology is developing quickly, many features of culture, society, politics and the economy have changed. The advent of
information technology and electronic data transmission has led to easy communication, and fields like e-learning and e-commerce have become
easily accessible to everyone. One of these technologies is virtual
training. The "quality" of such education systems is critical. 131 questionnaires were prepared and distributed among university
students at Toba University. The research thus examined the factors that affect the quality of learning from the perspectives of the staff, students, professors and this type of university. It is concluded that the important factors in virtual training are the quality of the professors, the
quality of the staff, and the quality of the university. These factors were the highest-priority factors in this education system and are
necessary for improving virtual training.
Abstract: The widely used Total Variation de-noising algorithm can preserve sharp edges while removing noise. However, since a fixed regularization parameter is used over the entire image, small details and textures are often lost in the process. In this paper, we propose a modified Total Variation algorithm that better preserves smaller-scale features. This is done by allowing an adaptive regularization parameter to control the amount of de-noising in each region of the image, according to information about the local feature scale. Experimental results demonstrate the effectiveness of the proposed algorithm. Compared with standard Total Variation, our algorithm better preserves smaller-scale features and shows better performance.
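A minimal sketch of Total Variation de-noising with a spatially varying regularization parameter is given below. It uses a simple gradient-descent scheme and an invented per-pixel weight map, and does not reproduce the paper's rule for deriving the parameter from local feature scale.

```python
import numpy as np

def tv_denoise_adaptive(f, lam, n_iter=200, tau=0.1, eps=1e-3):
    """Gradient-descent minimisation of sum(lam*(u-f)^2/2) + TV(u),
    where lam is a per-pixel regularization weight (same shape as f).
    Larger lam keeps u closer to the noisy input, i.e. less smoothing."""
    u = f.astype(np.float64).copy()
    for _ in range(n_iter):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux**2 + uy**2 + eps)
        # divergence of the normalised gradient field (curvature term)
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u -= tau * (lam * (u - f) - div)
    return u

# Example: smooth most of the image strongly, protect an assumed detail region.
noisy = np.random.default_rng(0).normal(0.0, 0.1, (64, 64))
lam = np.full(noisy.shape, 0.5)
lam[20:40, 20:40] = 5.0          # assumed small-scale feature region
clean = tv_denoise_adaptive(noisy, lam)
```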
Abstract: Transcription factors are a group of proteins that
help interpret the genetic information in DNA.
Protein-protein interactions play a major role in the execution
of key biological functions of a cell. These interactions are
represented in the form of a graph with nodes and edges.
Studies have shown that some nodes have a high degree of
connectivity; such nodes, known as hub nodes, are
indispensable parts of the network. In the present paper a method
is proposed to identify hub transcription factor proteins using
sequence information. On a complete data set of transcription
factor proteins available from the APID database, the
proposed method showed an accuracy of 77%, sensitivity of
79% and specificity of 76%.
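For reference, the reported accuracy, sensitivity and specificity follow the usual definitions for a binary hub / non-hub classifier, as in this sketch; the labels below are invented for illustration.

```python
def accuracy_sensitivity_specificity(y_true, y_pred):
    """Binary classification metrics: label 1 = hub protein, 0 = non-hub."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)      # fraction of true hubs recovered
    specificity = tn / (tn + fp)      # fraction of non-hubs correctly rejected
    return accuracy, sensitivity, specificity

print(accuracy_sensitivity_specificity([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))
```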
Abstract: In Orthogonal Frequency Division Multiplexing (OFDM) systems, the Peak to Average power Ratio (PAR) is high. The signal clipping scheme is a useful and simple method to reduce the PAR. However, it introduces additional noise that degrades the system's performance. We propose an oversampling scheme that processes the received signal in order to reduce the clipping noise by using a Finite Impulse Response (FIR) filter. The filter coefficients are obtained from the correlation function of the received signal and the oversampling information at the receiver. The performance of the proposed technique is evaluated for a frequency-selective channel. Results show that the proposed scheme can mitigate the clipping noise significantly for OFDM systems and that, in order to maintain the system's capacity, the clipping ratio should be larger than 2.5.
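The peak-to-average power ratio and amplitude clipping at a given clipping ratio can be sketched as follows. This is a generic illustration with random QPSK subcarriers; the proposed FIR-based clipping-noise mitigation at the receiver is not reproduced.

```python
import numpy as np

def ofdm_symbol(n_subcarriers=256, rng=None):
    """Random QPSK OFDM symbol in the time domain (illustrative)."""
    rng = rng or np.random.default_rng(0)
    bits = rng.integers(0, 4, n_subcarriers)
    qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
    return np.fft.ifft(qpsk)

def par_db(x):
    """Peak-to-average power ratio in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def clip(x, clipping_ratio):
    """Clip the signal amplitude at clipping_ratio times the RMS value,
    keeping the phase unchanged."""
    threshold = clipping_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
    mag = np.abs(x)
    return np.where(mag > threshold, x * (threshold / np.maximum(mag, 1e-12)), x)

x = ofdm_symbol()
print(par_db(x), par_db(clip(x, clipping_ratio=2.5)))
```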
Abstract: This paper aims to (1) analyze the profiles of
transgressors (detected evaders); (2) examine reason(s) that triggered a
tax audit, causes of tax evasion, audit timeframe and tax penalty
charged; and (3) assess whether tax auditors followed the guidelines as
stated in the 'Tax Audit Framework' when conducting tax audits. In
2011, the Inland Revenue Board Malaysia (IRBM) had audited and
finalized 557 company cases. With official permission, data of all the
557 cases were obtained from the IRBM. Of these, a total of 421 cases
with complete information were analyzed. About 58.1% were small and
medium corporations, and 32.8% were from the construction industry. The
selection for tax audit was based on risk analysis (66.8%), information
from third parties (11.1%), and firms with low profitability or a fluctuating
profit pattern (7.8%). The three persistent causes of tax evasion by
firms were over-claimed expenses (46.8%), fraudulent reporting of
income (38.5%) and overstated purchases (10.5%). These findings
are consistent with past literature. Results showed that tax auditors
took six to 18 months to close audit cases. More than half of tax
evaders were fined 45% on additional tax raised during audit for the
first offence. The study found that tax auditors did follow the guidelines in
the 'Tax Audit Framework' in audit selection, settlement and penalty
imposition.
Abstract: One important problem in today's organizations is the
existence of non-integrated information systems, inconsistency, and a
lack of suitable correlation between legacy and modern systems.
One main solution is to transfer the local databases into a global one.
In this regard we need to extract the data structures from the legacy
systems and integrate them with the new-technology systems. In
legacy systems, huge amounts of data are stored in legacy
databases. These require particular attention, since more effort is
needed to normalize, reformat and move them to modern
database environments. Designing the new integrated (global)
database architecture and applying reverse engineering require
data normalization. This paper proposes the use of database reverse
engineering in order to integrate legacy and modern databases in
organizations. The suggested approach consists of methods and
techniques for generating data transformation rules needed for the
data structure normalization.
Abstract: This paper proposes a modeling methodology for the
development of a data analysis solution. The author introduces an
approach to address data warehousing issues at the enterprise level.
The methodology covers the requirements elicitation and
analysis stage as well as the initial design of the data warehouse. The paper
reviews an extended business process model that satisfies the needs of
data warehouse development. The author considers the use of
business process models necessary, as they reflect both enterprise
information systems and business functions, which are important for
data analysis. The described approach divides development into
three steps with different levels of model elaboration. The
described approach makes it possible to gather requirements and
present them to business users in an easy manner.
Abstract: A high performance computer includes a fast
processor and millions of bytes of memory. During data processing,
huge amounts of information are shuffled between the memory and the
processor. Because of its small size and high speed, the cache
has become a common feature of high performance computers.
Enhancing cache performance has proved to be essential in speeding up
cache-based computers. Most enhancement approaches can be
classified as either software based or hardware controlled. The
performance of the cache is quantified in terms of the hit ratio or miss
ratio. In this paper, we optimize cache performance by
enhancing the cache hit ratio. Optimum cache performance is
obtained by modifying the cache hardware so that mismatching
line tags are rejected quickly in the hit-or-miss comparison stage,
and thus a low hit time for the wanted line in
the cache is achieved. In the proposed technique, which we call
Even-Odd Tabulation (EOT), the cache lines that come from main
memory into the cache are classified into two types, even line tags and
odd line tags, depending on their Least Significant Bit (LSB). This
division is exploited by the EOT technique to reject mismatching line
tags in very little time compared to the time spent by the main
comparator in the cache, giving an optimum hit time for the
wanted cache line. The high performance of the EOT technique against
the familiar mapping technique FAM is shown in the simulation
results.
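A software model of the even/odd tag split described above is sketched below. It counts tag comparisons in a fully associative lookup rather than modelling hardware timing, and the tag values are illustrative.

```python
class EOTCache:
    """Software model of the Even-Odd Tabulation idea for a fully
    associative cache: stored tags are split by their least significant
    bit, so a lookup only compares against tags of matching parity
    (roughly halving the comparisons; 'cost' here is the comparison count)."""

    def __init__(self):
        self.groups = {0: set(), 1: set()}   # even tags, odd tags

    def insert(self, tag):
        self.groups[tag & 1].add(tag)

    def lookup(self, tag):
        group = self.groups[tag & 1]          # the other parity is rejected outright
        comparisons = len(group)
        return tag in group, comparisons

cache = EOTCache()
for t in (0b1010, 0b0111, 0b1100, 0b0001):
    cache.insert(t)
hit, comparisons = cache.lookup(0b1100)
print(hit, comparisons)    # hit found after comparing only the even-tag group
```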
Abstract: This paper presents an interval-based multi-attribute
decision making (MADM) approach in support of the decision
process with imprecise information. The proposed decision
methodology is based on the model of linear additive utility function
but extends the problem formulation with the measure of composite
utility variance. A sample study concerning the evaluation of
electric generation expansion strategies is provided, showing how the
imprecise data may affect the choice toward the best solution and
how a set of alternatives, acceptable to the decision maker (DM),
may be identified with certain confidence.
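One simple way to read the composite utility and its variance is sketched below: a linear additive utility over interval-valued attribute utilities, assuming, purely for illustration, that each attribute utility is independent and uniformly distributed over its interval. The weights and intervals are invented.

```python
def composite_utility(weights, intervals):
    """Expected value and variance of a linear additive utility
    U = sum_j w_j * u_j, assuming each attribute utility u_j is
    independent and uniformly distributed over its interval [a_j, b_j]."""
    mean = sum(w * (a + b) / 2 for w, (a, b) in zip(weights, intervals))
    var = sum(w**2 * (b - a)**2 / 12 for w, (a, b) in zip(weights, intervals))
    return mean, var

# Two illustrative expansion alternatives scored on three attributes
weights = [0.5, 0.3, 0.2]
alt_A = [(0.6, 0.8), (0.4, 0.9), (0.7, 0.7)]
alt_B = [(0.5, 0.9), (0.6, 0.7), (0.2, 0.8)]
print(composite_utility(weights, alt_A))
print(composite_utility(weights, alt_B))
```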
Abstract: Microcirculation is essential for the proper supply of
oxygen and nutritive substances to the biological tissue and the
removal of waste products of metabolism. The determination of
blood flow in the capillaries is therefore of great interest to clinicians.
A comparison has been carried out between the developed non-invasive,
non-contact, whole-field laser speckle contrast imaging (LSCI)
based technique and a commercially available laser
Doppler blood flowmeter (LDF) to evaluate blood flow at the fingertip
and elbow, and the results are presented here. The LSCI technique gives more
quantitative information on the velocity of blood when compared to
the perfusion values obtained using the LDF. Measurement of blood
flow in capillaries can be of great interest to clinicians in the
diagnosis of vascular diseases of the upper extremities.
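The quantity underlying LSCI measurements, the local speckle contrast K = σ/μ computed over a sliding window, can be sketched as follows. This is a generic formulation applied to synthetic data, not the developed instrument's processing chain.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(image, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window.
    Lower K corresponds to more blurring of the speckle pattern and
    hence faster flow."""
    image = image.astype(np.float64)
    mean = uniform_filter(image, window)
    mean_sq = uniform_filter(image**2, window)
    variance = np.maximum(mean_sq - mean**2, 0.0)
    return np.sqrt(variance) / np.maximum(mean, 1e-12)

# Illustrative use on a raw speckle frame loaded as a 2-D array.
frame = np.random.default_rng(0).exponential(1.0, (128, 128))
K = speckle_contrast(frame)
print(K.mean())
```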
Abstract: PARADIGMA (PARticipative Approach to DIsease
Global Management) is a pilot project which aims to develop and
demonstrate an Internet based reference framework to share scientific
resources and findings in the treatment of major diseases.
PARADIGMA defines and disseminates a common methodology and
optimised protocols (Clinical Pathways) to support service functions
directed to patients and individuals on matters like prevention,
post-hospitalisation support and awareness. PARADIGMA will provide a
platform of information services - user oriented and optimised
against social, cultural and technological constraints - supporting the
Health Care Global System of the Euro-Mediterranean Community
in a continuous improvement process.
Abstract: The objective of this study was to improve our
understanding of vulnerability and environmental change, its causes,
its intensity, its distribution and the human-environment
effect on the ecosystem in the Apodi Valley Region. This paper
identifies, assesses and classifies vulnerability and environmental change
in the Apodi Valley region using a combined approach of landscape
pattern and ecosystem sensitivity. Models were developed using the
following five thematic layers: geology, geomorphology, soil,
vegetation and land use/cover, by means of a Geographical
Information System (GIS) based on hydro-geophysical parameters.
In spite of data problems and shortcomings, using ESRI's ArcGIS
9.3 program, the vulnerability score used to classify, weight and combine
15 separate land cover classes into a single indicator
provides a reliable measure of differences (6 classes) among regions
and communities that are exposed to similar ranges of hazards.
Indeed, the ongoing and active development of vulnerability
concepts and methods has already produced some tools to help
overcome common issues, such as acting in a context of high
uncertainty, taking into account the dynamics and spatial scale of
a social-ecological system, or gathering viewpoints from different
sciences to combine human- and impact-based approaches. Based on
this assessment, this paper proposes concrete perspectives and
possibilities to benefit from existing commonalities in the
construction and application of assessment tools.
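The weighting and combination of thematic layers into a single classed vulnerability indicator, as described above, can be illustrated with a simple weighted-overlay sketch. This is a generic GIS operation, not the study's actual ArcGIS 9.3 model; the class codes, scores and weights below are invented for illustration.

```python
import numpy as np

# Hypothetical per-class vulnerability scores for a land-cover raster
# (class codes and scores are illustrative, not the study's values).
class_scores = {1: 0.2, 2: 0.5, 3: 0.8}

def weighted_overlay(layers, weights):
    """Combine thematic score rasters into one vulnerability indicator
    as a weighted sum, then bin it into 6 ordinal classes."""
    combined = sum(w * layer for w, layer in zip(weights, layers))
    combined /= sum(weights)
    return np.digitize(combined, np.linspace(0, 1, 7)[1:-1]) + 1   # classes 1..6

rng = np.random.default_rng(0)
landcover = rng.integers(1, 4, (100, 100))
landcover_score = np.vectorize(class_scores.get)(landcover)
soil_score = rng.random((100, 100))               # placeholder soil-sensitivity layer
vuln_classes = weighted_overlay([landcover_score, soil_score], weights=[0.6, 0.4])
print(np.unique(vuln_classes))
```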