Abstract: This paper describes a fast and efficient method for page segmentation of documents containing non-rectangular blocks. The segmentation is based on an edge-following algorithm using a small window of 16 by 32 pixels. This segmentation is very fast since only the border pixels of each paragraph are used, without scanning the whole page. Still, the segmentation may contain errors if the space between blocks is smaller than the window used in edge following. Consequently, this paper reduces this error by first identifying the missed segmentation points using direction information from edge following, and then applying an X-Y cut at the missed segmentation points to separate the connected columns. The advantage of the proposed method is the fast identification of missed segmentation points. This methodology is faster, with less overhead, than other algorithms that need to access many more pixels of a document.
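The X-Y cut mentioned above can be sketched as follows. This is an illustrative toy only: the paper applies the cut locally at the missed segmentation points found via edge following, whereas this version recursively splits a whole binary region at its widest projection gap; all names are hypothetical.

```python
# Hedged sketch of a recursive X-Y cut on a binary image (lists of 0/1).
# Illustrative only; the paper applies the cut locally, not page-wide.

def projection_gaps(profile, min_gap):
    """Return (start, end) runs of zero bins at least min_gap wide."""
    gaps, start = [], None
    for i, v in enumerate(profile):
        if v == 0 and start is None:
            start = i
        elif v != 0 and start is not None:
            if i - start >= min_gap:
                gaps.append((start, i))
            start = None
    if start is not None and len(profile) - start >= min_gap:
        gaps.append((start, len(profile)))
    return gaps

def xy_cut(image, top, left, bottom, right, min_gap=2):
    """Recursively split the region at the widest interior whitespace gap,
    trying the horizontal (row) projection first, then the vertical one."""
    rows = [sum(image[y][left:right]) for y in range(top, bottom)]
    cols = [sum(image[y][x] for y in range(top, bottom)) for x in range(left, right)]
    for profile, horizontal in ((rows, True), (cols, False)):
        gaps = [g for g in projection_gaps(profile, min_gap)
                if g[0] > 0 and g[1] < len(profile)]  # interior gaps only
        if gaps:
            s, e = max(gaps, key=lambda g: g[1] - g[0])
            if horizontal:
                return (xy_cut(image, top, left, top + s, right, min_gap) +
                        xy_cut(image, top + e, left, bottom, right, min_gap))
            return (xy_cut(image, top, left, bottom, left + s, min_gap) +
                    xy_cut(image, top, left + e, bottom, right, min_gap))
    return [(top, left, bottom, right)]
```

For example, a page region with two text columns separated by a vertical gap at least `min_gap` pixels wide is split into the two column boxes; each leaf box is returned as `(top, left, bottom, right)`.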
Abstract: The goal of this paper is to examine the effects of laser
radiation on skin wound healing, using infrared thermography as a
non-invasive method for monitoring skin temperature changes during
laser treatment. Thirty Wistar rats were used in this study. A skin
lesion was made on the leg of each rat. The animals were exposed to
laser radiation (λ = 670 nm, P = 15 mW, DP = 16.31 mW/cm2) for 600 s.
Thermal images of the wound were acquired before and after laser
irradiation. The results demonstrate that the tissue temperature
decreases from 35.5±0.50°C on the first treatment day to 31.3±0.42°C
after the third treatment day. This value is close to the normal
value of the skin temperature and indicates the end of the skin
repair process. In conclusion, the improvements in wound healing
following exposure to laser radiation have been revealed by infrared
thermography.
Abstract: Controlling software evolution requires a deep
understanding of changes and of their impact on the system's
different heterogeneous artifacts, and a descriptive knowledge of
the developed software artifacts is a prerequisite for the success
of the evolutionary process.
Implementing an evolutionary process means making changes, more or
less significant, to many heterogeneous software artifacts such as
source code, analysis and design models, unit tests, XML deployment
descriptors, user guides, and others. These changes can degrade the
modified software in functional, qualitative, or behavioral terms.
Hence the need for a unified approach to the extraction and
representation of the different heterogeneous artifacts, ensuring a
unified and detailed description of them that is exploitable by
several software tools and that allows those responsible for the
evolution to reason about the changes concerned.
Abstract: One of the major features of hypermedia learning is its non-linear structure, which allows learners the flexibility to navigate according to their own needs. Nevertheless, such flexibility can also cause problems, such as navigation difficulties and disorientation, for some learners, especially those with a Field Dependent (FD) cognitive style. As a result, students' learning performance can deteriorate and, in turn, they can develop negative attitudes towards hypermedia learning systems. It has been suggested that visual elements can be used to compensate for these dilemmas. However, it is unclear whether these visual elements improve learning or whether the problems still exist. The aim of this study is to investigate the effect of students' cognitive styles and of visual elements on students' learning performance and attitudes in a hypermedia learning environment. The Cognitive Style Analysis (CSA), learning outcome measures in the form of a pre- and post-test and a practical task, and an Attitude Questionnaire (AQ) were administered to a sample of 60 university students. The findings revealed that FD students performed equally to Field Independent (FI) students. However, FD students experienced more disorientation in the hypermedia learning system and depended heavily on the visual elements for navigation and orientation purposes. Furthermore, they had more positive attitudes towards the visual elements, which saved them from experiencing the navigation and disorientation dilemmas. In contrast, FI students were more comfortable and did not get disturbed by, or did not need, some of the visual elements in the hypermedia learning system.
Abstract: Nowadays, more engineering systems are using some
kind of Artificial Intelligence (AI) for the development of their
processes. Some well-known AI techniques include artificial neural
nets, fuzzy inference systems, and neuro-fuzzy inference systems
among others. Furthermore, many decision-making applications base
their intelligent processes on Fuzzy Logic; due to the Fuzzy
Inference Systems (FIS) capability to deal with problems that are
based on user knowledge and experience. Also, knowing that users
have a wide variety of distinctiveness, and generally, provide
uncertain data, this information can be used and properly processed
by a FIS. To properly consider uncertainty and inexact system input
values, FIS normally use Membership Functions (MF) that represent
a degree of user satisfaction on certain conditions and/or constraints.
In order to define the parameters of the MFs, the knowledge from
experts in the field is very important. This knowledge defines the MF
shape to process the user inputs and through fuzzy reasoning and
inference mechanisms, the FIS can provide an “appropriate" output.
However an important issue immediately arises: How can it be
assured that the obtained output is the optimum solution? How can it
be guaranteed that each MF has an optimum shape? A viable solution
to these questions is through the MFs parameter optimization. In this
Paper a novel parameter optimization process is presented. The
process for FIS parameter optimization consists of the five simple
steps that can be easily realized off-line. Here the proposed process
of FIS parameter optimization it is demonstrated by its
implementation on an Intelligent Interface section dealing with the
on-line customization / personalization of internet portals applied to
E-commerce.
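The abstract does not spell out the five optimization steps, so the sketch below is only a hedged illustration of the underlying idea: a parameterized membership function whose shape can be tuned against calibration data, here by a naive grid search over the peak of a triangular MF. All parameter names and data values are hypothetical, not from the paper.

```python
# Minimal sketch: a triangular membership function and a naive grid search
# over its peak parameter. Illustrative only -- the paper's five-step
# off-line optimization process is not described in the abstract.

def tri_mf(x, a, b, c):
    """Triangular MF with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fit_peak(samples, a=0.0, c=10.0, steps=50):
    """Pick the peak b in (a, c) minimizing squared error on (x, target) pairs."""
    best_b, best_err = None, float("inf")
    for i in range(1, steps):
        b = a + (c - a) * i / steps
        err = sum((tri_mf(x, a, b, c) - t) ** 2 for x, t in samples)
        if err < best_err:
            best_b, best_err = b, err
    return best_b

# Hypothetical calibration data: user satisfaction peaks near x = 4.
samples = [(2, 0.5), (4, 1.0), (6, 0.75)]
```

In a full FIS the same idea would be applied to every MF parameter, with the error measured on the system's final output rather than on a single membership degree.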
Abstract: Today advertising is actively penetrating many spheres of our lives. We cannot imagine the existence of many economic activities without advertising; this mostly concerns trade and services. Every one of us should take a closer look at everyday communication and carefully consider the amount and quality of the information we receive, as well as its influence on our behaviour. Special attention should be paid to the young generation. Theoretical and practical research has proved the ever-growing influence of information (especially that contained in advertising) on society: on its economy, culture, religion, politics and even people's private lives and behaviour. Children have plenty of free time and, therefore, see a lot of different advertising. Though the education of children is in the hands of parents and schools, the makers and commissioners of advertising should think responsibly about the selection of time slots and transmission channels for child-targeted advertising. The purpose of the present paper is to investigate the influence of advertising upon the consumer views and behaviour of children in different age groups. The present investigation clarifies the influence of advertising, as a means of information, on the group of society that is most vulnerable in the modern information society: children. In this paper we assess children's perception and understanding of advertising.
Abstract: This paper proposes a stroke extraction method for use in off-line signature verification. After giving a brief overview of current ongoing research, an algorithm is introduced for detecting and following strokes in static images of signatures. Problems such as the handling of junctions and variations in line width and line intensity are discussed in detail. Results are validated both by using an existing on-line signature database and by employing image registration methods.
Abstract: A novel behavioral detection framework is proposed
to detect zero-day buffer overflow vulnerabilities, based on network
behavioral signatures generated from zero-day exploits, instead of
the signature-based or anomaly-based detection solutions currently
available for IDPS techniques. First we present the detection model,
which uses a shadow honeypot. Our system is used for the online
processing of network attacks and for generating a behavior
detection profile. The detection profile is a dataset of 112 metrics
describing the exact behavior of malware in the network. In this
paper we present examples of generating behavioral signatures for
two attacks: a buffer overflow exploit on an FTP server and the
well-known Conficker worm. We demonstrate the visualization of
important aspects by showing the differences between valid behavior
and the attacks. Based on these metrics we can detect attacks with a
very high probability of success; the detection process is, however,
very expensive.
Abstract: Defect prevention is the most vital but habitually
neglected facet of software quality assurance in any project. If
practiced at all stages of software development, it can reduce the
time, overheads and resources required to engineer a high-quality
product. The key challenge for the IT industry is to engineer a
software product with a minimum of post-deployment defects.
This effort is an analysis based on data obtained for five selected
projects from leading software companies of varying software
production competence. The main aim of this paper is to provide
information on the various methods and practices supporting defect
detection and prevention that lead to successful software
production. The defect prevention technique unearths 99% of defects.
Inspection is found to be an essential technique for generating
ideal software through enhanced methodologies of aided and unaided
inspection schedules. On average, 13%-15% of whole project effort
time on inspection and 25%-30% on testing are required to eliminate
99%-99.75% of defects.
A comparison of the end results for the five selected projects
between the companies is also presented, throwing light on the
possibility for a particular company to position itself with an
appropriate complementary ratio of inspection to testing.
Abstract: Synchronous cooperative systems (SCS) bring together users that are geographically distributed and connected through a network to carry out a task. Examples of SCS include Tele-Immersion and Tele-Conferences. In SCS, coordination is the core of the system, and it has been defined as the act of managing interdependencies between activities performed to achieve a goal. Some of the main problems that SCS present concern the management of constraints between simultaneous activities and the execution ordering of these activities. In order to resolve these problems, orderings based on Lamport's happened-before relation have been used, namely causal, Δ-causal, and causal-total orderings. They differ mainly in the degree of asynchronous execution allowed. One of the most important orderings is the causal order, which establishes that events must be seen in the cause-effect order in which they occur in the system. In this paper we show that for certain SCS (e.g. videoconferences, tele-immersion) where some degradation of the system is allowed, enforcing the causal order is still too rigid, which can have negative effects on the system. We illustrate how a more relaxed ordering, which we call Fuzzy Causal Order (FCO), is useful for such systems by allowing a more asynchronous execution than the causal order. The benefit of the FCO is illustrated by applying it to a particular scenario of intermedia synchronization in an audio-conference system.
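For context, the classical causal order that FCO relaxes is commonly enforced with vector clocks. The following minimal sketch (the standard deliverability test, not the paper's FCO mechanism) shows the rigidity in question: a message is held back until every causally preceding message has been delivered.

```python
# Sketch of the classic vector-clock causal-delivery test that FCO relaxes.
# Not the paper's algorithm -- just the causal order it builds on.

def deliverable(msg_vc, sender, local_vc):
    """A message with vector clock msg_vc from `sender` is deliverable when
    it is the next expected message from that sender and no causally earlier
    message from any other process is still missing."""
    return (msg_vc[sender] == local_vc[sender] + 1 and
            all(msg_vc[k] <= local_vc[k]
                for k in range(len(msg_vc)) if k != sender))

def deliver(msg_vc, local_vc):
    """Merge the message's clock into the local clock on delivery."""
    return [max(a, b) for a, b in zip(local_vc, msg_vc)]
```

For example, at a fresh site with clock [0, 0, 0], a message from process 1 carrying [1, 1, 0] must wait until the message from process 0 carrying [1, 0, 0] arrives; under a relaxed ordering such as FCO, some such waits could be skipped when degradation is tolerable.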
Abstract: Spatial outliers in remotely sensed imagery represent
observed quantities showing unusual values compared to their
neighboring pixel values. Various methods for detecting spatial
outliers based on spatial autocorrelation have been developed in
statistics and data mining. These methods may be applied to
detecting forest fire pixels in MODIS imagery from NASA's AQUA
satellite, because forest fire detection can be cast as finding
spatial outliers using the spatial variation of brightness
temperature. This point is what distinguishes our approach from
traditional fire detection methods. In this paper, we propose a
graph-based forest fire detection algorithm based on spatial outlier
detection methods and test the proposed algorithm to evaluate its
applicability. For this, the ordinary scatter plot and Moran's
scatter plot were used. In order to evaluate the proposed algorithm,
the results were compared with the MODIS fire product provided by
the NASA MODIS Science Team, and the comparison showed the potential
of the proposed algorithm for detecting fire pixels.
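A simple member of the spatial-outlier family referred to above can be sketched as follows. This is a hedged illustration using 4-neighbor means on a toy grid of brightness temperatures, not the paper's graph-based algorithm or the Moran scatter plot.

```python
# Hedged sketch of a neighborhood-difference spatial outlier test, in the
# spirit of the autocorrelation-based methods cited; illustrative only.
from statistics import mean, pstdev

def spatial_outliers(grid, threshold=2.0):
    """Flag cells whose deviation from their 4-neighbor mean, standardized
    over all such deviations, exceeds `threshold`."""
    h, w = len(grid), len(grid[0])
    diffs = {}
    for y in range(h):
        for x in range(w):
            nbrs = [grid[y + dy][x + dx]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            diffs[(y, x)] = grid[y][x] - mean(nbrs)
    mu, sigma = mean(diffs.values()), pstdev(diffs.values())
    return [cell for cell, d in diffs.items()
            if sigma > 0 and abs(d - mu) / sigma > threshold]
```

On a uniform 300 K background, a single 340 K pixel stands far above its neighborhood mean and is flagged, which mirrors how a fire pixel shows up as a spatial outlier in brightness temperature.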
Abstract: To derive the fractional flow equation, oil
displacement will be assumed to take place under the so-called
diffusive flow condition. The constraint is that fluid saturations
at any point in the linear displacement path are uniformly
distributed with respect to thickness; this allows the displacement
to be described mathematically in one dimension. The simultaneous
flow of oil and water can be modeled using thickness-averaged
relative permeabilities along the centerline of the reservoir. The
condition for fluid potential equilibrium is simply that of
hydrostatic equilibrium, for which the saturation distribution can
be determined as a function of capillary pressure and therefore
height; that is, the fluids are distributed in accordance with
capillary-gravity equilibrium.
This paper focuses on the fractional flow of water versus
cumulative oil recovery using the Buckley-Leverett method. Several
field cases have been developed to aid the analysis. The producing
water cut (at surface conditions) will be compared with the
cumulative oil recovery at breakthrough for the flowing fluid.
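Neglecting gravity and capillary terms, the water fractional flow underlying the Buckley-Leverett analysis reduces to f_w = 1 / (1 + (k_ro/k_rw)(μ_w/μ_o)). The sketch below evaluates it with assumed Corey-type relative permeability curves; all parameter values are illustrative, not from the paper's field cases.

```python
# Sketch of the simplified water fractional-flow equation
#   f_w = 1 / (1 + (k_ro / k_rw) * (mu_w / mu_o))
# with gravity and capillary terms neglected. Corey-type curves and all
# parameter values are assumed for illustration only.

def corey_krw(sw, swc=0.2, sor=0.2, krw_max=0.4, n=2.0):
    """Water relative permeability (Corey model, assumed parameters)."""
    s = (sw - swc) / (1.0 - swc - sor)
    return krw_max * s ** n

def corey_kro(sw, swc=0.2, sor=0.2, kro_max=0.9, n=2.0):
    """Oil relative permeability (Corey model, assumed parameters)."""
    s = (1.0 - sor - sw) / (1.0 - swc - sor)
    return kro_max * s ** n

def fw(sw, mu_w=0.5, mu_o=2.0):
    """Water fractional flow at water saturation sw (viscosities in cp)."""
    krw, kro = corey_krw(sw), corey_kro(sw)
    if krw == 0.0:
        return 0.0  # no mobile water below connate saturation
    return 1.0 / (1.0 + (kro / krw) * (mu_w / mu_o))
```

As expected, f_w rises monotonically from 0 at connate water saturation to 1 at residual oil saturation; the Buckley-Leverett construction then relates this curve to water cut and cumulative recovery at breakthrough.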
Abstract: The activities of alkaline phosphatase and Ca2+-ATPase in the mud crab (Scylla serrata), collected from a soft-shell crab farm in Chantaburi Province, Thailand, were observed at several stages of the molting cycle. The results showed that the activity of alkaline phosphatase in the gill after molting was highly significant (p
Abstract: Maximal length sequences (m-sequences) are also
known as pseudo-random sequences or pseudo-noise sequences
because they closely follow Golomb's popular randomness properties:
(P1) balance, (P2) run, and (P3) ideal autocorrelation. Apart from
these, there also exist certain other, less known properties of such
sequences, all of which are discussed in this tutorial paper.
Comprehensive proofs of each of these properties are provided
towards a better understanding of such sequences. A simple test is
also proposed at the end of the paper to distinguish pseudo-noise
sequences from truly random sequences such as Bernoulli sequences.
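As an illustration of the balance property (P1), an m-sequence can be generated with a maximal-length LFSR. The sketch below uses a 4-stage register with feedback taps corresponding to the primitive polynomial x^4 + x + 1, a standard textbook example rather than a construction taken from the paper.

```python
# Sketch: generate one period of an m-sequence with a 4-stage LFSR
# (taps on stages 4 and 1, i.e. x^4 + x + 1) and check Golomb's
# balance property (P1): the 1s outnumber the 0s by exactly one.

def lfsr_msequence(taps=(4, 1), degree=4, seed=0b1000):
    """One full period (2**degree - 1 bits) of the LFSR output."""
    state, out = seed, []
    for _ in range(2 ** degree - 1):
        out.append(state & 1)                        # output the low bit
        fb = 0
        for t in taps:                               # XOR of the tapped stages
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (degree - 1))  # shift in the feedback
    return out

seq = lfsr_msequence()                  # period 2^4 - 1 = 15
ones, zeros = seq.count(1), seq.count(0)
```

For this register the period is 15 with eight 1s and seven 0s, exactly the imbalance of one that property (P1) prescribes; the run and autocorrelation properties can be checked on `seq` in the same spirit.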
Abstract: This work presents a methodology for the selection
and design of propellers, oriented to the experimental verification
of theoretical results. The problem of propeller selection and
design usually presents itself in the following manner: a certain
air volume and static pressure are required for a certain system.
Once the necessity of fan design on a theoretical basis has been
recognized, it is possible to determine the dimensions of a fan unit
so that it will perform in accordance with a certain set of
specifications. The procedures in this work can then be applied to
other propeller selections.
Abstract: A Decentralized Tuple Space (DTS) implements the tuple
space model among a series of decentralized hosts and provides a
logical global shared tuple repository. Replication has been
introduced to alleviate the performance problems incurred by remote
tuple access. In this paper, we propose a replication approach for
DTS that allows replication policies to self-adapt. The accesses
from users or other nodes are monitored and collected to contribute
to the decision making. The replication policy may be changed if
better performance is expected. The experiments show that this
approach suitably adjusts the replication policies while introducing
negligible overhead.
Abstract: In the 1980s, companies began to feel the effect of three major influences on their product development: newer and innovative technologies, increasing product complexity and larger organizations; companies were therefore forced to look for new product development methods. This paper focuses on two such methods: Design for Manufacturability (DFM) and Concurrent Engineering (CE). The aim of this paper is to examine and analyze these product development methods, by which companies can benefit through minimizing the product life cycle and cost and meeting delivery schedules. This paper also presents simplified models that can be modified and used by different companies based on their objectives and requirements. The methodology followed in this research is the case study: two companies were selected and analyzed with respect to their product development processes. Historical data and interviews were collected from these companies; in addition, a survey of the literature and previous research on similar topics was carried out. This paper also presents a cost-benefit analysis of implementation and estimates the implementation time. From this research, it has been found that the two companies did not achieve the delivery time to the customer: some of their most frequently produced products were analyzed, and 50% to 80% of these products are not delivered on time. The companies follow the traditional, sequential design-then-production way of product development, which strongly affects time to market. The case study found that by implementing these new methods and by forming multidisciplinary teams for design and quality inspection, a company can reduce its workflow from 40 steps to 30.
Abstract: Antioxidants contribute to endogenous photoprotection
and are important for the maintenance of skin health. This study
was carried out to compare the skin hydration and transepidermal
water loss (TEWL) effects of a stable cosmetic preparation
containing flavonoids, applied twice a day over a period of ten
weeks. The skin trans-epidermal water loss and skin hydration were
measured at the beginning and at the end of the ten-week study
period. Any effect produced was measured with a Corneometer and a
TEWA meter (non-invasive probes).
Two formulations were developed for this study. The first was the
control formulation, in which no apple juice extract (flavonoids)
was incorporated, while the second was the active formulation, in
which apple juice extract (3%) containing flavonoids was
incorporated into a water-in-oil emulsion using Abil EM 90 as an
emulsifier. The stable formulations (control and active) were
applied to the cheeks of human volunteers (n = 12) for a study
period of 10 weeks. Skin hydration and TEWL were measured for each
volunteer with the Corneometer and the TEWA meter. Using ANOVA and
the paired-sample t-test for statistical evaluation, the results for
the base and the active formulation were compared. Statistically
significant results (p≤0.05) were observed for skin hydration and
TEWL when the two creams, control and active formulation, were
compared. This shows that the active formulation may have
interesting applications as a moisturizing cream on healthy skin.
Abstract: With a growing number of digital libraries and other
open education repositories being made available throughout the
world, effective search and retrieval tools that surpass the
effectiveness of traditional, all-inclusive search engines are
necessary to access the desired materials. This paper discusses the
design and use of
Folksemantic, a platform that integrates OpenCourseWare search,
Open Educational Resource recommendations, and social network
functionality into a single open source project. The paper describes
how the system was originally envisioned, its goals for users, and
data that provides insight into how it is actually being used. Data
sources include website click-through data, query logs, web server
log files and user account data. Based on a descriptive analysis of its
current use, modifications to the platform's design are recommended
to better address goals of the system, along with recommendations
for additional phases of research.
Abstract: Post-disaster reconstruction projects offer
opportunities to facilitate physical, social and economic
development and to reduce future hazard vulnerability long after the
disaster. The sustainability of the post-disaster reconstruction
project conducted in the villages of Dinar following the 1995
earthquake is investigated in this paper. Government officials who
were involved in the project were interviewed. In addition, two
field surveys were carried out in 12 villages of Dinar in the winter
months of 2008. Beneficiaries were interviewed, and the physical,
socio-cultural and economic impacts of the reconstruction were
examined. The research revealed that the post-disaster
reconstruction project has negative aspects from the point of view
of sustainability: the physical, socio-cultural and economic factors
were not considered during the decision-making process of the
project.