Abstract: Wireless capsule endoscopy provides real-time images of the digestive tract. Capsule images are usually low resolution and vary widely as the capsule travels through different regions of the human body. Color information has been the primary reference in predicting abnormalities such as bleeding, but color alone is often not sufficient. In this study, we took morphological shape into account as an additional but important criterion. First, we processed gastric images to identify the objects in each image. Then, we analyzed the color information within each object. In this way, we could remove irrelevant information and increase accuracy. Compared with our previous investigations, we could handle images of varying brightness and improve our diagnostic algorithm.
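A minimal sketch of the two-stage idea follows, assuming OpenCV and a BGR input frame: morphology first isolates candidate objects, then color is analyzed only inside each object. The thresholds, kernel size, and red-dominance rule are illustrative assumptions, not the paper's actual parameters.

    import cv2
    import numpy as np

    def find_bleeding_candidates(bgr_image):
        # Step 1: morphological object extraction
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        n_labels, labels = cv2.connectedComponents(mask)
        # Step 2: color analysis restricted to each extracted object
        candidates = []
        for label in range(1, n_labels):  # label 0 is the background
            pixels = bgr_image[labels == label]
            red, green = pixels[:, 2].mean(), pixels[:, 1].mean()
            if red > 1.5 * green:  # red-dominant object: bleeding candidate
                candidates.append(label)
        return candidates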
Abstract: Visual secret sharing (VSS) was proposed by Naor and Shamir in 1995. A VSS scheme encodes a secret image into two or more share images, and a single share image reveals no information about the secret image. When the shares are superimposed, the secret can be restored by human vision alone. Traditional VSS suffers from problems such as pixel expansion and the cost of sophisticated codebooks, and it can encode only one secret image. Schemes that encrypt multiple secret images into two shares by random grids were proposed by Chen et al. in 2008, but the restored secret images suffer considerable distortion, which limits decoding. In other words, if there is too much distortion, we cannot encrypt much information; if the distortion can be made very small, more secret images can be encrypted. In this paper, four new algorithms based on Chang et al.'s scheme of 2010 are proposed. The first algorithm reduces the distortion to a very small level. The second algorithm distributes the distortion between the two restored secret images. The third algorithm achieves no distortion for special secret images. The fourth algorithm encrypts three secret images, which not only retains the advantages of VSS but also improves the decoding problems.
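For context, here is a minimal sketch of the underlying random-grid encryption of a single binary secret (1 = black), in the style the multi-secret schemes build on; the paper's multi-secret and distortion-control algorithms are more involved than this.

    import numpy as np

    def encrypt(secret, rng=None):
        # secret: binary array, 1 = black, 0 = white; shares keep its size
        rng = rng or np.random.default_rng()
        share1 = rng.integers(0, 2, size=secret.shape)      # pure random grid
        share2 = np.where(secret == 0, share1, 1 - share1)  # white: copy, black: flip
        return share1, share2

    def superimpose(share1, share2):
        # stacking transparencies is a pixelwise OR: black stays black
        return share1 | share2

A white secret pixel yields share2 = share1, so the stacked result is black only half the time (appearing gray), while a black pixel always stacks to black; this contrast lets the eye recover the secret with no pixel expansion.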
Abstract: Text categorization - the assignment of natural language documents to one or more predefined categories based on their semantic content - is an important component in many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and architecture. This paper discusses initializing a multilayer neural network with a decision tree classifier to improve text categorization accuracy. An adaptation of the algorithm is proposed in which each decision tree path, from the root node to a final leaf, is used to initialize the multilayer neural network. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with a multilayer neural network initialized by the traditional random method and with decision tree classifiers.
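A hedged sketch of one classic way to perform such an initialization (entropy-net style): each internal split of a learned tree becomes a first-hidden-layer neuron whose weights encode the test. The paper's exact root-to-leaf mapping is not reproduced here; a second layer would encode the leaf paths analogously.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def first_layer_from_tree(tree: DecisionTreeClassifier, n_features: int):
        t = tree.tree_
        internal = [i for i in range(t.node_count) if t.children_left[i] != -1]
        W = np.zeros((len(internal), n_features))
        b = np.zeros(len(internal))
        for row, node in enumerate(internal):
            W[row, t.feature[node]] = 1.0  # neuron tests a single feature
            b[row] = -t.threshold[node]    # sigmoid(x[f] - threshold) ~ the split
        return W, b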
Abstract: In this article, a single application is proposed to determine the position of vehicles using Geographical Information Systems (GIS) and the Global Positioning System (GPS). Part of the article covers mapping three-dimensional coordinates to two-dimensional coordinates using the UTM or Lambert projection methods, and the algorithm for converting GPS information onto GIS maps is studied. Suggestions are also given for implementing this system as a web-based system. To apply this system in Iran, the relevant authorities are introduced and their duties are explained. Finally, an economic analysis is presented based on Iran's communication infrastructure.
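A minimal sketch of the 3-D-to-2-D mapping step, assuming the pyproj library; UTM zone 39N is chosen here because it covers much of Iran, but the zone choice is an assumption.

    from pyproj import Transformer

    # WGS84 geographic coordinates (GPS output) -> UTM zone 39N plane coordinates
    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32639", always_xy=True)

    lon, lat = 51.389, 35.689  # roughly Tehran
    easting, northing = to_utm.transform(lon, lat)
    # (easting, northing) can be drawn directly on a 2-D GIS map layer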
Abstract: This article describes an automatic Web page filtering system. It is an open and dynamic system based on a multi-agent architecture. The system is built from a set of agents, each with a precise filtering task to carry out (the filtering process is broken into several elementary treatments, each producing a partial solution). New criteria can be added to the system without stopping its execution or modifying its environment. We want to show the applicability and adaptability of the multi-agent approach to the automatic filtering of network information. In practice, most existing filtering systems are based on modular design approaches limited to centralized applications whose role is to solve static data-flow problems, whereas Web page filtering systems are characterized by a data flow which varies dynamically.
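A hedged sketch of the agent pipeline idea: each agent applies one elementary criterion and contributes a partial score, and agents can be registered at run time without touching the others. All names and interfaces here are illustrative assumptions, not the paper's design.

    from typing import Callable

    class FilteringAgent:
        """One agent = one elementary filtering treatment (a partial solution)."""
        def __init__(self, name: str, criterion: Callable[[str], float]):
            self.name, self.criterion = name, criterion

        def evaluate(self, page_text: str) -> float:
            return self.criterion(page_text)  # score in [0, 1]

    agents = [FilteringAgent("keyword", lambda t: float("sport" in t.lower()))]
    # a new criterion can be registered while the system keeps running
    agents.append(FilteringAgent("length", lambda t: min(len(t) / 1000.0, 1.0)))

    def filter_page(text: str, threshold: float = 0.5) -> bool:
        score = sum(agent.evaluate(text) for agent in agents) / len(agents)
        return score >= threshold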
Abstract: The purpose of this paper is to propose an integrated
consumer health informatics utilization framework that can be used
to gauge the online health information needs and usage patterns
among Malaysian women. The proposed framework was developed
based on four theories/models: the Uses and Gratifications Theory, the Technology Acceptance Model 3, the Health Belief Model, and the Multi-level Model of Information Seeking. The relevant constructs and research hypotheses are also presented in this paper. The framework will be tested so that it can be used to identify Malaysian women's preferences for online health information resources and their health information seeking activities.
Abstract: Research and development (R&D) work involves an enormous amount of data measurement and collection. This process evolves as new information is fed in, new
technologies are utilized, and eventually new knowledge is created
by the stakeholders, i.e., researchers, clients, and end-users. When new knowledge is created, procedures of R&D work should evolve and produce better results through improved research skills and improved methods of data measurement and collection. This
measurement improvement should then be benchmarked against a
metric that should be developed at the organization. In this paper, we
are suggesting a conceptual metric for R&D work performance
improvement (PI) at the Kuwait Institute for Scientific Research
(KISR). This PI is to be measured against a set of variables in the
suggested metric, which are more closely correlated to organizational
output, as opposed to organizational norms. The paper also mentions
and discusses knowledge creation and management as an added value to R&D work and measurement improvement. The research
methodology followed in this work is qualitative in nature, based on
a survey that was distributed to researchers and interviews held with
senior researchers at KISR. The research and analysis in this paper also include a review of KISR's literature.
Abstract: We investigated the statistical performance of Bayesian inference using maximum entropy and MAP estimation for several models that approximate wave-fronts in remote sensing using SAR interferometry. Using Monte Carlo simulation for a set of wave-fronts generated by an assumed true prior, we found that the method of maximum entropy achieved optimal performance around the Bayes-optimal conditions by using the model of the true prior and the likelihood representing the optical measurement due to the interferometer. We also found that MAP estimation, regarded as a deterministic limit of maximum entropy, achieved almost the same performance as the Bayes-optimal solution for the set of wave-fronts. We then clarified that MAP estimation carried out phase unwrapping perfectly without using prior information, and that it realized accurate phase unwrapping using the conjugate gradient (CG) method when the model of the true prior was assumed appropriately.
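As a concrete illustration of the deterministic limit, here is a minimal sketch of MAP estimation solved by CG, assuming a Gaussian likelihood y = A x + noise and a Gaussian smoothness prior with operator D; the paper's interferometric likelihood and true prior are more specific than this.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def map_estimate(A, y, D, lam):
        # Under these assumptions, the MAP estimate minimizes
        # 0.5*||A x - y||^2 + 0.5*lam*||D x||^2, whose normal equations
        # (A^T A + lam D^T D) x = A^T y are well suited to CG.
        n = A.shape[1]
        matvec = lambda x: A.T @ (A @ x) + lam * (D.T @ (D @ x))
        op = LinearOperator((n, n), matvec=matvec)
        x_hat, info = cg(op, A.T @ y)
        return x_hat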
Abstract: Nowadays, organizations and businesses have several motivations to protect an individual's privacy. Confidentiality concerns how information is shared with third parties. It always refers to private information, especially personal information that usually needs to be kept private. Because of the importance of privacy concerns today, we need to design database systems with privacy in mind. Agrawal et al. introduced the Hippocratic Database (HD), which we refer to here as a privacy-aware database. This paper explains how HD can become a future trend for web-based applications, enhancing their level of privacy and trustworthiness among Internet users.
Abstract: In this paper, a novel scheme is proposed for ownership identification and color image authentication by deploying cryptography and digital watermarking. The color image is first transformed from RGB to the YST color space, designed exclusively for watermarking. Following the color space transformation, each channel is divided into 4×4 non-overlapping blocks, and the central 2×2 sub-block of each is selected. Depending upon the channel selected, two to three LSBs of each central 2×2 sub-block are set to zero to hold the ownership, authentication, and recovery information. The size and position of the sub-block are important for correct localization, enhanced security, and fast computation. As the T channel is orthogonal to Y and S (YS ⊥ T), it is suitable for embedding the recovery information apart from the ownership and authentication information; therefore, the 4×4 block of the T channel, together with the ownership information, is processed by SHA-160 to compute a content-based hash that is unique and invulnerable to the birthday attack and hash collisions, instead of using MD5, which may admit the condition H(m) = H(m'). For recovery, the intensity mean of each 4×4 block of each channel is computed and encoded in eight bits. For watermark embedding, key-based mapping of blocks is performed using a 2D torus automorphism. Our scheme is oblivious, generates highly imperceptible images with correct localization of tampering within reasonable time, and can recover the original work with probability near one.
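A minimal sketch of the key-based block mapping via a 2D torus automorphism; the parameter k and the iteration count r stand in for the secret key, and both values are illustrative assumptions.

    def torus_map(x, y, k, n, r=1):
        # r iterations of the 2D torus automorphism
        #   [x']   [1   1 ] [x]
        #   [y'] = [k  k+1] [y]   (mod n)
        # The matrix has determinant 1, so the map permutes the n-by-n grid.
        for _ in range(r):
            x, y = (x + y) % n, (k * x + (k + 1) * y) % n
        return x, y

    # e.g. a 512x512 channel with 4x4 blocks gives a 128x128 block grid:
    # block (i, j)'s recovery data would be embedded at torus_map(i, j, k, 128, r)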
Abstract: In recent years, the development of e-learning is very
rapid. E-learning is an attractive and efficient way for computer
education. Student interaction and collaboration also play an
important role in e-learning. In this paper, a collaborative web-based
e-learning environment is presented. A wide range of interactive and
collaborative methods are integrated into a web-based environment.
This e-learning environment is designed for information security
curriculum.
Abstract: Processing data by computer and performing reasoning tasks are important aims in Computer Science, and the Semantic Web is one step towards them. Using ontologies to enrich information semantically is the current trend. Huge amounts of domain-specific, unstructured online data need to be expressed in a machine-understandable and semantically searchable format. Currently, users are often forced to search manually through the results returned by keyword-based search services. They also want to use their native languages to express what they search for. In this paper, an ontology-based automated question answering system for the software test document domain is presented. The system allows users to enter a question about the domain in natural language and returns the exact answer to the question. Converting the natural language question into an ontology-based query is the challenging part of the system. To achieve this, a new algorithm for converting free text into an ontology-based search engine query is proposed. The algorithm is based on identifying the question type and parsing the words of the question sentence.
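A hedged sketch of the conversion's two steps, question-type detection and word parsing against an ontology lexicon, emitting a SPARQL query. The lexicon, namespace, and property names are invented for illustration; the paper's actual algorithm and ontology differ.

    QUESTION_TYPES = {"what": "definition", "who": "responsible", "when": "date"}

    def question_to_sparql(question: str, lexicon: dict) -> str:
        words = question.lower().rstrip("?").split()
        qtype = QUESTION_TYPES.get(words[0], "definition")       # question-type step
        terms = [lexicon[w] for w in words[1:] if w in lexicon]  # parsing step
        clauses = " . ".join(f"?answer st:{t} ?value" for t in terms)
        return (f"PREFIX st: <http://example.org/softwaretest#> "
                f"SELECT ?answer WHERE {{ {clauses} }}")

    # question_to_sparql("What is regression testing?",
    #                    {"regression": "coversTopic", "testing": "hasCategory"})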
Abstract: In this study, a classification-based video
super-resolution method using artificial neural network (ANN) is
proposed to enhance low-resolution (LR) to high-resolution (HR)
frames. The proposed method consists of four main steps:
classification, motion-trace volume collection, temporal adjustment,
and ANN prediction. A classifier is designed based on the edge
properties of a pixel in the LR frame to identify the spatial information.
To exploit the spatio-temporal information, a motion-trace volume is
collected using motion estimation, which can eliminate unfathomable
object motion in the LR frames. In addition, a temporal lateral process is employed for volume adjustment to reduce unnecessary temporal features. Finally, an ANN is applied to each class to learn the complicated
spatio-temporal relationship between LR and HR frames. Simulation
results show that the proposed method successfully improves both
peak signal-to-noise ratio and perceptual quality.
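To make the classification step concrete, a minimal sketch of one plausible edge-property classifier follows; the threshold and the four orientation bins are assumptions, since the paper does not specify its rule. A separate ANN would then be trained per class label.

    import numpy as np
    from scipy import ndimage

    def classify_pixels(lr_frame, strength=0.1):
        # Hypothetical edge-based classifier: bucket each pixel by gradient
        # magnitude (flat vs. edge) and quantized gradient orientation.
        gx = ndimage.sobel(lr_frame, axis=1)
        gy = ndimage.sobel(lr_frame, axis=0)
        mag = np.hypot(gx, gy)
        ori = np.arctan2(gy, gx)
        flat = mag < strength * mag.max()
        bins = ((ori + np.pi) / (np.pi / 2)).astype(int) % 4
        return np.where(flat, 0, 1 + bins)  # 0 = flat, 1..4 = edge directions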
Abstract: The evolution of information and communication technology has provided powerful support for improving online learning platforms in the creation of courses. This paper presents a study that explores a new web architecture for creating an online learning system that adapts to learner profiles, using the Web as a source for the automatic creation of courses for the online training platform. This architecture will reduce the time and effort spent by the authors of the current e-learning platform, and the direct adaptation of Web content will greatly enrich the quality of online training courses.
Abstract: In this paper, an artificial neural network simulator is
employed to carry out diagnosis and prognosis on an electric motor, as rotating machinery, based on predictive maintenance. Vibration data from motors with primary failures, including unbalance, misalignment, and bearing faults, were collected for training the neural network. Neural
network training was performed for a variety of inputs and the motor
condition was used as the expert training information. The main
purpose of applying the neural network as an expert system was to
detect the type of failure and to apply preventive maintenance. This study benefits machinery industries by supporting appropriate maintenance, an essential activity for keeping the production process running throughout the machinery industry. Proper maintenance is pivotal to preventing possible failures in an operating system and to increasing the availability and effectiveness of a system through vibration monitoring and expert-system development.
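A minimal sketch of the pipeline under stated assumptions: statistical features are extracted from each vibration record and fed to a small multilayer network. The feature set, label coding, and network size are illustrative, as the paper does not specify them.

    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.neural_network import MLPClassifier

    def vibration_features(signal):
        # Hypothetical time-domain features; the paper does not list its inputs
        rms = np.sqrt(np.mean(signal ** 2))
        peak = np.max(np.abs(signal))
        return np.array([rms, peak, peak / rms, kurtosis(signal)])

    # X: one feature row per vibration record; y: fault labels, e.g.
    # 0 = healthy, 1 = unbalance, 2 = misalignment, 3 = bearing fault
    # clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
    # clf.predict(...) then triggers the matching preventive-maintenance action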
Abstract: The World Wide Web, coupled with the ever-increasing sophistication of online technologies and software applications, puts greater emphasis on the need for more sophisticated and consistent quality requirements modeling than traditional software applications do. Web sites and Web applications (WebApps) are
becoming more information driven and content-oriented raising the
concern about their information quality (InQ). The consistent and
consolidated modeling of InQ requirements for WebApps at different
stages of the life cycle still poses a challenge. This paper proposes an
approach to specify InQ requirements for WebApps by reusing and
extending the ISO 25012:2008(E) data quality model. We also discuss the learnability aspect of information quality for WebApps. The proposed ISO 25012-based InQ framework is a step towards a standardized approach to evaluating WebApp InQ.
Abstract: Location-aware computing is a type of pervasive computing that utilizes a user's location as a dominant factor for providing urban services and application-related usages. One of the important urban services is navigation instruction for wayfinders in a city, especially when the user is a tourist. The services presented to tourists should provide adapted, location-aware instructions. In order to achieve this goal, the main challenge is to find spatially relevant objects and location-dependent information. The
aim of this paper is the development of a reusable location-aware
model to handle spatial relevancy parameters in urban location-aware
systems. To this end, we utilize an ontology as an approach that can manage spatial relevancy by defining a generic model. Our
contribution is the introduction of an ontological model based on the
directed interval algebra principles. Indeed, it is assumed that the
basic elements of our ontology are the spatial intervals for the user
and his/her related contexts. The relationships between them would
model the spatial relevancy parameters. The implementation language
for the model is OWL, the Web Ontology Language. The achieved
results show that our proposed location-aware model and the
application adaptation strategies provide appropriate services for the
user.
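A hedged sketch of interval-based spatial relevancy, with the user and a context object modelled as intervals along the travelled route; the relation names echo directed interval algebra, but the class and rules here are illustrative, not the OWL model itself.

    from dataclasses import dataclass

    @dataclass
    class SpatialInterval:
        start: float  # distance along the route, in metres
        end: float

    def relation(a: SpatialInterval, b: SpatialInterval) -> str:
        if a.end < b.start:
            return "before"
        if b.end < a.start:
            return "after"
        if a.start <= b.start and b.end <= a.end:
            return "contains"
        return "overlaps"

    # user = SpatialInterval(120.0, 150.0); shop = SpatialInterval(140.0, 160.0)
    # relation(user, shop) == "overlaps", so the shop is spatially relevant now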
Abstract: In the course of this paper we define Location-based Intelligence (LBI), which is growing out of the amalgamation of geolocation and Business Intelligence.
Amalgamating geolocation with traditional Business Intelligence (BI)
results in a new dimension of BI named Location-based Intelligence.
LBI is defined as leveraging unified location information for business
intelligence. Collectively, enterprises can transform location data into
business intelligence applications that will benefit all aspects of the
enterprise. Expectations of this new dimension of business intelligence are high, and its future appears bright.
Abstract: This paper investigates the performance of a Multiple-Input Multiple-Output (MIMO) feedback system combined with Orthogonal Frequency Division Multiplexing (OFDM). Two types of codebook-based channel feedback techniques are used in this work. The first feedback technique uses a combination of both the long-term and short-term channel state information (CSI) at the transmitter, whereas the second technique uses only the short-term CSI. The long-term and short-term CSI at the transmitter is used for efficient channel utilization. OFDM is a powerful technique employed in communication systems suffering from frequency selectivity. Combined with multiple antennas at the transmitter and receiver, OFDM proves to be robust against delay spread. Moreover, it leads to significant data rates with improved bit error performance over links having only a single antenna at both the transmitter and receiver. The effectiveness of these techniques has been demonstrated through the simulation of a MIMO-OFDM feedback system. The results have been evaluated for 4×4 MIMO channels. Simulation results indicate the benefits of the MIMO-OFDM channel feedback system over the one without incorporating OFDM. A performance gain of about 3 dB is observed for the MIMO-OFDM feedback system compared to the one without OFDM. Hence, MIMO-OFDM is an attractive approach for future high-speed wireless communication systems.
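A minimal sketch of short-term codebook feedback for one OFDM subcarrier, under stated assumptions: the receiver picks the codebook precoder that maximizes capacity and feeds back only its index. The random unitary codebook is an assumption; deployed systems use standardized codebooks.

    import numpy as np

    rng = np.random.default_rng(0)
    nt, nr, codebook_size, snr = 4, 4, 16, 10.0  # 4x4 MIMO, linear SNR

    def random_unitary(n):
        a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        q, _ = np.linalg.qr(a)
        return q

    codebook = [random_unitary(nt) for _ in range(codebook_size)]
    # one Rayleigh-fading channel realization for a single subcarrier
    H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

    def capacity(H, F):
        He = H @ F
        M = np.eye(nr) + (snr / nt) * (He @ He.conj().T)
        return np.log2(np.linalg.det(M).real)  # det is real for Hermitian M

    best = max(range(codebook_size), key=lambda i: capacity(H, codebook[i]))
    # feed back only `best`: log2(16) = 4 bits per subcarrier, not the full CSI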
Abstract: The objective of the present paper is a numerical
analysis of the flow forces acting on spool surfaces of a pressure
regulated valve. The transient, compressible and turbulent flow
structures inside the valve are simulated using ANSYS FLUENT
coupled with a special user-defined function (UDF). Here, the valve inlet pressure is varied in a
stepwise manner. For every value of inlet pressure, transient analysis
leads to a quasi-static flow through the valve. Spool forces are calculated for the different inlet pressures. From this spool-force information, the pressure characteristic of the passive control circuit is derived.
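As a hedged illustration of the force calculation in post-processing: the axial flow force on the spool is the surface integral of pressure over the spool walls, which discretizes to a sum over exported CFD wall faces. The function and argument names are assumptions; FLUENT can also report wall forces directly.

    import numpy as np

    def axial_spool_force(p, area, axial_normal):
        # p: face pressures [Pa]; area: face areas [m^2];
        # axial_normal: axial component of each face's unit normal vector
        return np.sum(p * area * axial_normal)  # F = sum of p*A*n_axial over faces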