Abstract: An electrocardiogram (ECG) data compression algorithm
is needed that reduces the amount of data to be transmitted, stored,
and analyzed without losing the clinical information content. A
wavelet ECG data codec based on the Set Partitioning In Hierarchical
Trees (SPIHT) compression algorithm is proposed in this paper. The
SPIHT algorithm has achieved notable success in still image coding.
We modified the algorithm for the one-dimensional (1-D) case and
applied it to compression of ECG data.
This compression method achieves a small percent root mean square
difference (PRD) and a high compression ratio with low
implementation complexity. Experiments on selected
records from the MIT-BIH arrhythmia database revealed that the
proposed codec is significantly more efficient in compression and in
computation than previously proposed ECG compression schemes.
Compression ratios of up to 48:1 for ECG signals yield results
acceptable for visual inspection.
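As a hedged illustration, the PRD distortion measure can be computed as follows (a minimal sketch assuming the common definition without baseline correction; the paper's exact normalization is not restated here):

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root mean square difference between an ECG signal and
    its reconstruction (common definition, no baseline correction)."""
    x = np.asarray(original, dtype=float)
    y = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))
```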
Abstract: Gradual patterns have been studied for many years as
they contain precious information. They have been integrated in
many expert systems and rule-based systems, for instance to reason
on knowledge such as “the greater the number of turns, the greater
the number of car crashes”. In many cases, this knowledge has been
considered as a rule: “the greater the number of turns → the greater
the number of car crashes”. Historically, works have thus been
focused on the representation of such rules, studying how implication
could be defined, especially fuzzy implication. These rules were
defined by experts who were in charge of describing the systems they
were working on so that those systems could operate automatically. More
recently, approaches have been proposed in order to mine databases
for automatically discovering such knowledge. Several approaches
have been studied, the main scientific topics being how to determine
what a relevant gradual pattern is, and how to discover such patterns as
efficiently as possible (in terms of both memory and CPU usage).
However, in some cases, end-users are not interested in raw, low-level
knowledge, but rather in trends. Moreover, it may be
the case that no relevant pattern can be discovered at a low level of
granularity (e.g. city), whereas some can be discovered at a higher
level (e.g. county). In this paper, we thus extend gradual pattern
approaches in order to consider multiple-level gradual patterns. For
this purpose, we consider two aggregation policies, namely
horizontal and vertical.
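As a hedged, minimal sketch of the underlying notion (the abstract does not fix a support definition; counting concordant row pairs is one common choice in the gradual-pattern literature and is an assumption here):

```python
from itertools import combinations

def gradual_support(rows, attr_a, attr_b):
    """Fraction of object pairs that are concordant with the pattern
    "the greater attr_a, the greater attr_b" (one common definition,
    not necessarily the paper's)."""
    pairs = list(combinations(rows, 2))
    concordant = sum(
        1 for r1, r2 in pairs
        if (r1[attr_a] - r2[attr_a]) * (r1[attr_b] - r2[attr_b]) > 0
    )
    return concordant / len(pairs) if pairs else 0.0

# Example: "the greater the number of turns, the greater the number of crashes"
roads = [{"turns": 2, "crashes": 1}, {"turns": 5, "crashes": 4},
         {"turns": 8, "crashes": 7}]
print(gradual_support(roads, "turns", "crashes"))  # 1.0
```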
Abstract: This paper examines the impact of information and
communication technology (ICT) usage, internal relationship,
supplier-retailer relationship, logistics services and inventory
management on convenience store suppliers' performance. Data was
collected from 275 convenience store managers in Malaysia using a
questionnaire. The multiple linear regression results indicate
that inventory management, supplier-retailer relationship, logistics
services and internal relationship are predictors of supplier
performance as perceived by convenience store managers. However,
ICT usage is not a predictor of supplier performance. The study
focuses only on convenience stores and petrol station convenience
stores and concentrates only on managers. The results provide
insights to suppliers who serve convenience stores, and possibly
similar retail formats, on factors to consider in improving their
service to retailers. The results also provide insights to the
government, in its aspiration to improve the business operations of
convenience stores, on ways to enhance the adoption of ICT by
retailers and suppliers.
Abstract: The state of the art in instructional design for
computer-assisted learning has been strongly influenced by advances
in information technology, Internet and Web-based systems. The
emphasis of educational systems has shifted from training to
learning. Course delivery has also changed from large, inflexible
content to sequential, small chunks of learning objects. The
concepts of learning objects together with the advanced technologies
of Web and communications support the reusability, interoperability,
and accessibility design criteria currently exploited by most learning
systems. These concepts enable just-in-time learning. We propose to
extend these design criteria further to include the learnability
concept, which will help adapt content to the needs of learners. The
learnability concept offers better personalization, leading to the
creation and delivery of course content more appropriate to the
performance and interests of each learner. In this paper we present a
new framework of learning environments containing knowledge
discovery as a tool to automatically learn patterns of learning
behavior from learners' profiles and history.
Abstract: Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is nowadays considered fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, question answering systems and others. This paper reports on the development of a NER system for Bengali and Hindi using Support Vector Machines (SVM). Though this state-of-the-art machine learning technique has been widely applied to NER in several well-studied languages, its use for Indian languages (ILs) is very new. The system makes use of the different contextual information of the words along with a variety of features that are helpful in predicting the four different named entity (NE) classes: Person name, Location name, Organization name and Miscellaneous name. We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi, tagged with the twelve different NE classes defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm in order to generate lexical context patterns from a part of the unlabeled Bengali news corpus. The lexical patterns have been used as features of the SVM in order to improve system performance. The NER system has been tested with gold standard test sets of 35K and 60K tokens for Bengali and Hindi, respectively. Evaluation results have demonstrated recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. Results show an improvement in the f-score of 5.13% with the use of context patterns. A statistical analysis of variance (ANOVA) is also performed to compare the performance of the proposed NER system with that of an existing HMM-based system for both languages.
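As a quick, hedged sanity check on the reported figures, the f-score follows from the stated precision and recall via the standard balanced F1 definition (the shared task's exact scorer is not reproduced here):

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall (balanced F1)."""
    return 2 * precision * recall / (precision + recall)

print(round(f_score(80.12, 88.61), 2))  # 84.15 (Bengali)
print(round(f_score(74.34, 80.23), 2))  # 77.17 (Hindi)
```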
Abstract: In recent years, rapid advances in software and hardware in the field of information technology, along with a digital imaging revolution in the medical domain, have facilitated the generation and storage of large collections of images by hospitals and clinics. Searching these large image collections effectively and efficiently poses significant technical challenges and raises the necessity of constructing intelligent retrieval systems. Content-based Image Retrieval (CBIR) consists of retrieving the most visually similar images to a given query image from a database of images [5]. Medical CBIR applications pose unique challenges but at the same time offer many new opportunities: while one can easily understand news or sports videos, a medical image is often completely incomprehensible to untrained eyes.
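As an illustrative, hedged sketch of the CBIR idea (grayscale histogram intersection is one of the simplest visual similarity measures; it is an assumption for illustration, not the feature set of any particular medical retrieval system):

```python
import numpy as np

def histogram_similarity(img_a, img_b, bins=64):
    """Histogram intersection between two grayscale images (values 0-255).
    Returns 1.0 for identical intensity distributions, 0.0 for disjoint."""
    h_a, _ = np.histogram(img_a, bins=bins, range=(0, 255))
    h_b, _ = np.histogram(img_b, bins=bins, range=(0, 255))
    h_a = h_a / h_a.sum()
    h_b = h_b / h_b.sum()
    return np.minimum(h_a, h_b).sum()

def retrieve(query, database, k=5):
    """Rank database images by visual similarity to the query image."""
    scores = [(histogram_similarity(query, img), name)
              for name, img in database.items()]
    return sorted(scores, reverse=True)[:k]
```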
Abstract: We study a new technique for optimal data compression
subject to conditions of causality and different types of memory. The
technique is based on the assumption that some information about
compressed data can be obtained from a solution of the associated
problem without constraints of causality and memory. This allows
us to consider two separate problems related to compression and decompression
subject to those constraints. Their solutions are given
and the analysis of the associated errors is provided.
Abstract: The importance of inter-organizational system (IOS)
has been increasingly recognized by organizations. However, IOS
adoption has proved to be difficult and, at this stage, the reasons for
this are not fully understood. In practice, benefits have often remained
concentrated, primarily accruing to the dominant party, resulting in
low rates of adoption and usage, and often culminating in the failure
of the IOS. The main research question is why organizations initiate
or join IOS and what factors influence their adoption and use levels.
This paper reviews the literature on IOS adoption and proposes a
theoretical framework in order to identify the critical factors to
capture a complete picture of IOS adoption. With our proposed
critical factors, we are able to investigate their relative contributions
to IOS adoption decisions. Our findings suggest that
there are five groups of factors that significantly affect the adoption
and use decision of IOS in the Supply Chain Management (SCM)
context: 1) inter-organizational context, 2) organizational context, 3)
technological context, 4) perceived costs, and 5) perceived benefits.
Abstract: This research proposes a methodology for patent-citation-based technology input-output analysis by applying patent information to the input-output analysis originally developed for the dependencies among different industries. For this analysis, a technology relationship matrix and its components, as well as input and technology inducement coefficients, are constructed using patent information. A technology inducement coefficient is then calculated by normalizing the degree of citation from certain International Patent Classification (IPC) classes to different or to the same IPC classes. Finally, we construct a Dependency Structure Matrix (DSM) based on the technology inducement coefficient to suggest a useful application for this methodology.
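A minimal sketch of the normalization step, assuming the inducement coefficient is a row-normalized share of citations from one IPC class to another (the IPC codes and counts below are hypothetical, and the paper's exact normalization may differ):

```python
import numpy as np

# Hypothetical citation counts: rows = citing IPC class, cols = cited IPC class
ipc = ["A61K", "G06F", "H04L"]
citations = np.array([[10,  4,  1],
                      [ 2, 20,  8],
                      [ 0,  6, 12]], dtype=float)

# Technology inducement coefficients: each row sums to 1, so entry (i, j)
# is the share of class i's citations flowing to class j.
inducement = citations / citations.sum(axis=1, keepdims=True)
print(inducement.round(3))
```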
Abstract: A data warehouse is a repository of information
integrated from source data. Information stored in a data warehouse
takes the form of materialized views in order to provide better
performance for answering queries. Deciding which views to
materialize is an important problem. To address it, constructing a
search space close to optimal is a necessary task, as it provides
effective results for selecting the views to be materialized. In this
paper we propose an approach to re-optimize a Multiple View
Processing Plan (MVPP) by using global common subexpressions:
the merged queries whose query processing cost is not close to
optimal are rewritten. Experiments show that our approach helps to
improve the total query processing cost of the MVPP, and that the
sum of query processing cost and materialized view maintenance
cost is reduced as well after the views to be materialized are selected.
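As a hedged illustration of the trade-off being optimized (a toy cost model with hypothetical views, frequencies, and costs; the actual MVPP cost functions are not specified in the abstract):

```python
def total_cost(materialized, queries, maintenance_cost):
    """Toy MVPP-style objective: sum of query processing costs plus
    maintenance costs of the materialized views (all values hypothetical).
    A query costs less if the view it can reuse is materialized."""
    processing = sum(
        q["freq"] * (q["cost_with_view"] if q["view"] in materialized
                     else q["cost_without_view"])
        for q in queries
    )
    maintenance = sum(maintenance_cost[v] for v in materialized)
    return processing + maintenance

queries = [
    {"view": "v1", "freq": 10, "cost_with_view": 2, "cost_without_view": 9},
    {"view": "v2", "freq": 3,  "cost_with_view": 1, "cost_without_view": 5},
]
maintenance_cost = {"v1": 15, "v2": 20}

# Compare candidate view sets: materializing v1 pays off, v2 does not.
for views in [set(), {"v1"}, {"v1", "v2"}]:
    print(sorted(views), total_cost(views, queries, maintenance_cost))
```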
Abstract: In order to balance an environmental test centrifuge
automatically and accurately, and to reduce the unbalanced
centrifugal force, a balance adjusting system for the centrifuge is
designed. The new balance adjusting system comprises a
motor-reducer, a timing belt, a screw pair, a slider-guideway, and
four rocker force sensors. Based on the information obtained from
the four rocker force sensors, the unbalanced value at both ends of
the big arm is computed, and a heavy block is moved to achieve the
balance adjustment. In this paper, the motor power and torque
required to move the heavy block are calculated. The stress and
strain of the screw pair, composed of the adjusting nut and the big
arm, are analyzed for full-load running of the centrifuge. A
successful application of the balance adjusting system is also
presented. The results show that the balance adjusting system can
satisfy the balancing requirements of the environmental test
centrifuge.
Abstract: There are various approaches to implement quality
improvements. Organizations aim for a management standard which
is capable of providing customers with quality assurance on their
product/service via continuous process improvement. Carefully
planned steps are necessary to ensure the right quality improvement
methodology (QIM) and business operations are consistent, reliable
and truly meet the customers' needs. This paper traces the evolution
of QIM in Malaysia's Information Technology (IT) industry across its
past, present and future, and highlights some of the thoughts of
researchers who contributed to the science and practice of quality,
and identifies leading methodologies in use today. Some of the
misconceptions and mistakes leading to quality system failures will
also be examined and discussed. This paper aims to provide a general
overview of different types of QIMs available for IT businesses in
maximizing business advantages, enhancing product quality,
improving process routines and increasing performance earnings.
Abstract: With the rapid growth in business size, today's businesses are oriented towards electronic technologies; Amazon.com and eBay.com are some of the major stakeholders in this regard. Unfortunately, the enormous size and hugely unstructured nature of the data on the web, even for a single commodity, has become a cause of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve the desired information from the seemingly impossible-to-search unstructured data on the Internet. Applications of web content mining can be very encouraging in the areas of Customer Relations Modeling, billing records, logistics investigations, product cataloguing and quality management. In this paper we present a review of some very interesting, efficient yet implementable techniques from the field of web content mining and study their impact in areas specific to business user needs, focusing on both the customer and the producer. The techniques we review include: mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, using a graph-based approach for the development of a web crawler, and filtering information for personalized search using website captions. These techniques have been analyzed and compared on the basis of their execution time and the relevance of the results they produce for a particular search.
Abstract: Cognizant of the fact that enterprise systems involve
organizational change and their implementation is overshadowed by a
high failure rate, it is argued that there is a need to focus attention on
employees' perceptions of such organizational change when
explaining the adoption behavior of enterprise systems. For this purpose,
the research incorporates a conceptual construct of attitude toward
change that captures views about the need for organizational change.
Centered on this conceptual construct, the research model includes
beliefs regarding the system and behavioral intention as its
consequences, and the personal characteristics of organizational
commitment and perceived personal competence as its antecedents.
Structural equation analysis using LISREL provides significant
support for the proposed relationships. Theoretical and practical
implications are discussed along with limitations.
Abstract: This paper describes an approach to predicting the
incoming and outgoing data rates in a network system by using
association rule discovery, one of the data mining techniques. The
incoming and outgoing data volumes at each time and the network
bandwidth are the network performance parameters that need to be
addressed in the traffic problem, since congestion and data loss are
important network problems. The results of this technique can
predict future network traffic. In addition, this research is useful for
network routing selection and network performance improvement.
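As a hedged, minimal sketch of the association-rule machinery the abstract relies on (the discretized traffic records and the example rule are hypothetical):

```python
# Hypothetical discretized traffic log: one dict per time slot.
records = [
    {"in_rate": "high", "out_rate": "high", "bandwidth": "low"},
    {"in_rate": "high", "out_rate": "high", "bandwidth": "low"},
    {"in_rate": "low",  "out_rate": "low",  "bandwidth": "high"},
    {"in_rate": "high", "out_rate": "low",  "bandwidth": "low"},
]

def support_confidence(records, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent."""
    matches_a = [r for r in records
                 if all(r[k] == v for k, v in antecedent.items())]
    matches_both = [r for r in matches_a
                    if all(r[k] == v for k, v in consequent.items())]
    support = len(matches_both) / len(records)
    confidence = len(matches_both) / len(matches_a) if matches_a else 0.0
    return support, confidence

# Rule: high incoming rate -> high outgoing rate
print(support_confidence(records, {"in_rate": "high"}, {"out_rate": "high"}))
# (0.5, 0.666...)
```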
Abstract: Recent advancements in sensor technologies and
Wireless Body Area Networks (WBANs) have led to the
development of cost-effective healthcare devices which can be used
to monitor and analyse a person's physiological parameters from
remote locations. These advancements provide a unique opportunity
to overcome current healthcare challenges: low-quality service
provisioning, lack of easy access to a variety of services, high costs
of services, and the growing elderly population experienced
globally. This paper reports on a prototype implementation of an
architecture that seamlessly integrates Wireless Body Area Network
(WBAN) with Web services (WS) to proactively collect
physiological data of remote patients to recommend diagnostic
services. Technologies based upon WBAN and WS can provide
ubiquitous accessibility to a variety of services by allowing
distributed healthcare resources to be massively reused to provide
cost-effective services without individuals physically moving to the
locations of those resources. In addition, these technologies can
reduce costs of healthcare services by allowing individuals to access
services to support their healthcare. The prototype uses WBAN body
sensors, implemented on Arduino Fio platforms, worn by the
patient, and an Android smartphone as a personal server. The
physiological data are collected and uploaded through GPRS/Internet
to the Medical Health Server (MHS) to be analysed. The prototype
monitors the activities, location, and physiological parameters, such
as SpO2 and heart rate, of the elderly and of patients in rehabilitation.
Medical practitioners would have real time access to the uploaded
information through a web application.
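A hedged sketch of the personal server's upload step (the MHS endpoint URL, payload fields, and JSON format are assumptions for illustration; the prototype's actual interface is not specified in the abstract):

```python
import json
import urllib.request

def upload_reading(spo2, heart_rate, lat, lon,
                   url="https://mhs.example.org/readings"):  # hypothetical endpoint
    """Send one physiological reading from the personal server to the
    Medical Health Server (MHS) as a JSON POST."""
    payload = json.dumps({
        "spo2": spo2,              # % oxygen saturation
        "heart_rate": heart_rate,  # beats per minute
        "location": {"lat": lat, "lon": lon},
    }).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

# e.g. upload_reading(97, 72, 6.45, 3.39)
```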
Abstract: Software complexity metrics are used to predict
critical information about reliability and maintainability of software
systems. Object oriented software development requires a different
approach to software complexity metrics. Object Oriented Software
Metrics can be broadly classified into static and dynamic metrics.
Static Metrics give information at the code level whereas dynamic
metrics provide information about actual runtime behavior. In this
paper we discuss the various complexity metrics and compare static
and dynamic complexity.
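As a hedged illustration of a static, code-level metric (a simple method count per class, in the spirit of the Weighted Methods per Class metric with unit weights; the paper's metric set is not enumerated in the abstract):

```python
import ast

def methods_per_class(source):
    """Static metric: count the methods defined in each class (WMC with
    unit complexity weights), computed from the source code alone."""
    tree = ast.parse(source)
    return {
        node.name: sum(isinstance(n, ast.FunctionDef) for n in node.body)
        for node in ast.walk(tree)
        if isinstance(node, ast.ClassDef)
    }

src = """
class Stack:
    def push(self, x): ...
    def pop(self): ...
"""
print(methods_per_class(src))  # {'Stack': 2}
```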
Abstract: Natural disasters, including earthquakes, kill many people around the world every year. Search and rescue actions, which start after an earthquake and are abbreviated LAST, include locating, access, stabilization and transportation. In the present article, we have studied the process of gaining access to the injured locally and transporting them to health care centers. Given the heavy traffic load caused by an earthquake, the destruction of connecting roads and bridges, and the heavy debris in alleys and streets, which put the lives of the injured and of the people buried under the debris in danger, accelerating the rescue actions and facilitating access are obviously of great importance. Tehran, the capital of Iran, is among the most crowded cities in the world and is the center of extensive economic, political, cultural and social activities. Tehran has a population of about 9.5 million, which continues to grow because of immigration from the surrounding cities. Furthermore, considering the fact that Tehran is located on two important and large faults, an earthquake of magnitude 6 on the Richter scale in this city could lead to the greatest catastrophe in all of human history. The present study is a review, and a major part of the information required for it has been obtained from libraries. All of the rescue vehicles used around the world, including rescue helicopters, ambulances, fire-fighting vehicles and rescue boats, together with their applied technology, as well as the robots specifically designed for rescue systems and their advantages and disadvantages, have been investigated. The studies show that there is a significant relationship between the rescue team's arrival time at the incident zone and the number of people saved: if the duration of burial under debris is 30 minutes, the probability of survival is 99.3%; after one day it is 81%; after 2 days, 19%; and after 5 days, 7.4%. The existing transport systems all have some defects; if these defects were removed, more people could be saved each hour and preparedness against natural disasters would be increased. In this study, a transport system has been designed for the rescue team and the injured, which can carry the rescue team to the incident zone and the injured to health care centers. In addition, this system is able to fly in the air as well as move on the ground, so that the destruction of roads and the heavy traffic load cannot prevent the rescue team from arriving early at the incident zone. The system also has the equipment required for debris removal, optimal transport of the injured and first aid.
Abstract: Combinatorial optimization problems arise in many scientific and practical applications. Therefore, many researchers try to find or improve methods for solving these problems with high-quality results in less time. Genetic Algorithms (GA) and Simulated Annealing (SA) have been used to solve optimization problems. Both GA and SA search a solution space through a sequence of iterative states. However, there are also significant differences between them: the GA mechanism works in parallel on a set of solutions and exchanges information using the crossover operation, whereas SA works on a single solution at a time. In this work, SA and GA are combined using a new technique in order to overcome the disadvantages of both algorithms.
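A hedged sketch of one way GA and SA can be combined (an SA-style acceptance test decides whether a child replaces its better parent; this is an illustrative hybridization under stated assumptions, not necessarily the paper's exact technique, and fitness, crossover, and mutate are user-supplied callables):

```python
import math
import random

def hybrid_ga_sa(init_pop, fitness, crossover, mutate,
                 generations=200, t0=1.0, cooling=0.95):
    """GA population search with an SA acceptance rule: a child that is
    worse than its better parent may still survive with a
    temperature-dependent probability that decays as the system cools.
    Assumes an even-sized population."""
    pop = list(init_pop)
    t = t0
    for _ in range(generations):
        random.shuffle(pop)
        next_pop = []
        for a, b in zip(pop[0::2], pop[1::2]):
            child = mutate(crossover(a, b))
            parent = max(a, b, key=fitness)
            delta = fitness(child) - fitness(parent)
            # SA acceptance: always keep improvements; sometimes keep worse.
            if delta >= 0 or random.random() < math.exp(delta / t):
                next_pop.extend([child, parent])
            else:
                next_pop.extend([a, b])
        pop = next_pop
        t *= cooling  # cool the temperature between generations
    return max(pop, key=fitness)
```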
Abstract: Electronic government is one of the notable concepts
that have been implemented successfully within recent decades.
Electronic government is a digital, wall-free government with a
virtual organization for delivering online governmental services
and furthering cooperation in different political and social activities.
In order to implement an electronic government strategy
successfully, to benefit from its full potential, and, more generally,
to establish and apply electronic government, it is necessary to have
various infrastructures as the foundations of electronic government,
without which it is impossible to benefit from the mentioned
services. To this end, in this paper we identify the relevant obstacles
to the establishment of electronic government in Iran. All data
required for identifying the obstacles were collected, through a
questionnaire, from a statistical population of specialists at the
Ministry of Communications & Information Technology of Iran and
the Information Technology Organization of Tehran Municipality.
Then, using a five-point Likert scale with μ = 3 as the threshold for
the relevant factors of the proposed model, we specify the current
obstacles to electronic government in Iran, along with some
guidelines and proposals in this regard. According to the results, the
obstacles to applying electronic government in Iran are as follows:
technical and technological problems; legal, judicial and safety
problems; economic problems; and humanistic problems.
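A hedged sketch of the screening step, assuming an item counts as a relevant obstacle when its mean Likert score exceeds μ = 3 (the questionnaire items and responses shown are hypothetical):

```python
import statistics

# Hypothetical five-point Likert responses per candidate obstacle.
responses = {
    "technical/technological": [4, 5, 4, 3, 5],
    "legal/judicial/safety":   [4, 4, 3, 5, 4],
    "economic":                [3, 4, 4, 4, 3],
    "humanistic":              [4, 3, 4, 5, 3],
    "cultural (example)":      [2, 3, 2, 3, 2],
}

MU = 3  # threshold from the proposed model
relevant = {item: statistics.mean(scores)
            for item, scores in responses.items()
            if statistics.mean(scores) > MU}
print(relevant)  # items whose mean exceeds MU are retained as obstacles
```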