Abstract: In this paper, we propose a robust moving-object detection
method for handling light effects in night street images, based on a
block-wise updating reference background model and block-state
analysis. The experimental images are color video sequences acquired
from a stationary camera. When artificial illumination such as street
lights or sign lights appears suddenly, the reference background
model is updated with this information. Natural illumination
generally changes gradually over time, whereas artificial
illumination appears suddenly, so the proposed method detects
artificial illumination through a two-stage process. The first stage
compares the current image with the reference background block by
block, identifying the changed blocks. The second stage compares the
edge map of the current image with the edge map of the reference
background, making it possible to estimate the illumination in any
block. This information allows objects and artificial illumination to
be detected accurately and a cleaner reference background to be
generated. Each block is classified by block-state analysis into one
of four states: transient, stationary, background, and artificial
illumination. Fig. 1 shows the characteristics of each block state
[1].
[1]. Experimental results show that the presented approach works well
in the presence of illumination variance.
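The two-stage, block-based test described above can be sketched as follows; the function names, block size, and thresholds here are illustrative assumptions, not the paper's actual values.

```python
def block_diff(current, background, block=8):
    """Mean absolute intensity difference per block between two
    grayscale frames (given as lists of rows)."""
    h, w = len(current), len(current[0])
    diffs = {}
    for by in range(0, h, block):
        for bx in range(0, w, block):
            total, n = 0, 0
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    total += abs(current[y][x] - background[y][x])
                    n += 1
            diffs[(by // block, bx // block)] = total / n
    return diffs

def classify_block(intensity_diff, edge_diff, t_int=20, t_edge=10):
    """Assign one of the four block states from the two tests."""
    if intensity_diff < t_int:
        return "background"
    # A large intensity change with little edge change suggests a
    # light turning on rather than a moving object.
    if edge_diff < t_edge:
        return "artificial illumination"
    return "transient"  # would become "stationary" once it stops changing
```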
Abstract: The oil debris signal generated by the inductive oil
debris monitor (ODM) provides useful information for machine condition
monitoring but is often spoiled by background noise. To improve the
reliability in machine condition monitoring, the high-fidelity signal
has to be recovered from the noisy raw data. Considering that the noise
components with large amplitude often have higher frequency than
that of the oil debris signal, the integral transform is proposed to
enhance the detectability of the oil debris signal. To cancel out the
baseline wander resulting from the integral transform, the empirical
mode decomposition (EMD) method is employed to identify the trend
components. An optimal reconstruction strategy including both
de-trending and de-noising is presented to detect the oil debris signal
with less distortion. The proposed approach is applied to detect the oil
debris signal in the raw data collected from an experimental setup. The
result demonstrates that this approach is able to detect the weak oil
debris signal with acceptable distortion from noisy raw data.
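The pipeline above can be illustrated with a minimal sketch. Note that a simple moving average stands in here for the EMD-identified trend, purely for illustration; all names and window sizes are assumptions.

```python
def integral_transform(signal):
    """Running integral (cumulative sum). It attenuates high-frequency
    noise relative to the lower-frequency debris pulses but introduces
    a baseline wander that must be removed afterwards."""
    out, acc = [], 0.0
    for s in signal:
        acc += s
        out.append(acc)
    return out

def moving_average(signal, window):
    """Crude trend estimate (stand-in for the EMD-identified trend)."""
    half, n, out = window // 2, len(signal), []
    for i in range(n):
        seg = signal[max(0, i - half):min(n, i + half + 1)]
        out.append(sum(seg) / len(seg))
    return out

def detrend(signal, window=25):
    """Subtract the estimated trend to cancel the baseline wander."""
    trend = moving_average(signal, window)
    return [s - t for s, t in zip(signal, trend)]
```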
Abstract: DNA microarray technology is widely used by
geneticists to diagnose or treat diseases through gene expression.
This technology is based on the hybridization of a tissue's DNA
sequence onto a substrate and the subsequent analysis of the image
formed by the thousands of genes in the DNA as green, red, or yellow
spots. DNA microarray image analysis involves finding the locations
of the spots and quantifying their expression levels. In this paper,
a tool to perform DNA microarray image analysis is presented,
including a spot-addressing method based on the image projections,
spot segmentation through contour-based segmentation, and the
extraction of information relevant to gene expression.
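A minimal sketch of projection-based spot addressing, assuming a grayscale image given as a list of rows; the local-minima rule and names are illustrative, not the tool's actual implementation.

```python
def projections(image):
    """Row and column intensity sums of a grayscale microarray image;
    spots produce peaks, inter-spot gaps produce valleys."""
    rows = [sum(r) for r in image]
    cols = [sum(image[y][x] for y in range(len(image)))
            for x in range(len(image[0]))]
    return rows, cols

def grid_cuts(profile):
    """Local minima of a projection profile: candidate boundaries
    between adjacent spot rows or columns."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i] < profile[i - 1] and profile[i] <= profile[i + 1]]
```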
Abstract: High Voltage (HV) transmission mains running into the community necessitate an earthing design that ensures the safety compliance of the system. Concrete poles are widely used within HV transmission mains and can have an impact on the earth grid impedance and on the input impedance of the system seen from the fault point. This paper provides information on concrete pole earthing to enhance the split factor of the system; further, it discusses the deployment of concrete structures in high-soil-resistivity areas to reduce the earth grid system of the plant. The paper introduces the cut-off soil resistivity ρc for replacing timber poles with concrete ones.
Abstract: After Apple first introduced its smartphone, the iPhone,
in Korea at the end of 2009, the number of Korean smartphone users
increased so rapidly that half of the Korean population had become
smartphone users as of February 2012. Smartphones are currently
positioned as a major digital medium with powerful influence in
Korea, and Koreans now learn new information, enjoy games, and
communicate with other people anytime and anywhere. As smartphone
performance increased, the number of usable services grew, and
adequate GUI development was required to implement the various
functions of smartphones. The strategy of providing familiar
experiences on smartphones through features drawn from existing
media, in connection with smartphones' iconic GUIs, contributed
greatly to their popularization.
The spread of smartphones has increased mobile web access, and
attempts to bring the PC web to the smartphone web continue to be
made. The mobile web GUI provides familiar experiences to users
through designs that adequately utilize the smartphone's GUIs. As
users grow familiar with smartphones and mobile web GUIs, the usual
direction of remediation is reversing: PCs are starting to adopt
smartphone GUIs.
This study defines this phenomenon as reversed remediation and
reviews cases in which PCs adopt the characteristics of smartphone
GUIs. For this purpose, the study issues established are as follows:
· What is reversed remediation?
· What are the characteristics of smartphone GUIs?
· What kind of interrelationship exists between the smartphone and
the PC's web site?
This study is meaningful for forecasting future GUI changes through
an understanding of the characteristics of the paradigm shifts in PC
and smartphone GUI design. It will also help establish strategies for
the development and design of digital devices.
Abstract: The emergence of the Internet has brought about a
revolution in information storage and retrieval. As most of the
data on the web is unstructured and contains a mix of text,
video, audio, etc., there is a need to mine information to cater to
the specific needs of users without losing important hidden
information. Developing user-friendly, automated tools for providing
relevant information quickly has thus become a major challenge in
web mining research. Most existing web mining algorithms
concentrate on finding frequent patterns while neglecting the less
frequent ones, which are likely to contain outlying data such as
noise and irrelevant or redundant data. This paper focuses on the
Signed approach with full-word matching against an organized domain
dictionary for mining web content outliers. The Signed approach
yields the relevant web documents as well as the outlying web
documents. Because the dictionary is organized by the number of
characters in a word, searching and document retrieval take less
time and space.
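The length-organized dictionary can be sketched as follows; the bucketing scheme and the relevance score are illustrative assumptions, not the paper's exact Signed computation.

```python
def build_dictionary(words):
    """Bucket domain-dictionary words by character count, so a
    full-word lookup only scans candidates of the same length."""
    by_length = {}
    for w in words:
        by_length.setdefault(len(w), set()).add(w.lower())
    return by_length

def relevance(document_words, by_length):
    """Fraction of document words found in the domain dictionary;
    documents scoring below some threshold would be flagged as
    content outliers."""
    hits = sum(1 for w in document_words
               if w.lower() in by_length.get(len(w), ()))
    return hits / len(document_words) if document_words else 0.0
```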
Abstract: The proliferation of user-generated content (UGC) results in huge opportunities to explore event patterns. However, existing event recommendation systems primarily focus on advanced information technology users. Little work has been done to address novice and low-literacy users. The next billion users providing and consuming UGC are likely to include communities from developing countries who are ready to use affordable technologies for subsistence goals. Therefore, we propose a design framework for providing event recommendations to address the needs of such users. Grounded in information integration theory (IIT), our framework advocates that effective event recommendation is supported by systems capable of (1) reliable information gathering through structured user input, (2) accurate sense making through spatial-temporal analytics, and (3) intuitive information dissemination through interactive visualization techniques. A mobile pest management application is developed as an instantiation of the design framework. Our preliminary study suggests a set of design principles for novice and low-literacy users.
Abstract: In order to achieve better road utilization and traffic
efficiency, there is an urgent need for a travel information delivery
mechanism to assist the drivers in making better decisions in the
emerging intelligent transportation system applications. In this paper,
we propose a relayed multicast scheme under heterogeneous networks
for this purpose. In the proposed system, travel information consisting
of summarized traffic conditions, important events, real-time traffic
videos, and local information service contents is formed into layers
and multicasted through an integration of WiMAX infrastructure and
Vehicular Ad hoc Networks (VANET). With the support of adaptive
modulation and coding in WiMAX, the radio resources can be
optimally allocated during multicast so as to dynamically adjust
the number of data layers received by each user. In addition to the
WiMAX-supported multicast, a knowledge propagation and information
relay scheme over VANET is designed. The experimental
results validate the feasibility and effectiveness of the proposed
scheme.
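As one illustration of layered multicast under adaptive modulation and coding (a generic sketch, not the paper's allocation algorithm): each additional layer costs extra rate, so a user decodes only as many consecutive layers as its channel supports.

```python
def layers_received(user_rate, layer_rates):
    """Number of consecutive layers a user can decode, given the
    cumulative rate required up to and including each layer."""
    cumulative, count = 0.0, 0
    for r in layer_rates:
        cumulative += r
        if user_rate >= cumulative:
            count += 1
        else:
            break
    return count
```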
Abstract: Nowadays the population increasingly makes use of
Information Technology (IT). In recent years the Portuguese
government has therefore increased its focus on using IT to improve
people's lives and has developed a set of measures to enable the
modernization of the Public Administration, reducing the gap
between the Public Administration and citizens. To this end the
Portuguese Government launched the Simplex Program. However, the
SIMPLEX eGov measures implemented over the years present a serious
challenge: how to forecast their impact on the existing Information
Systems Architecture (ISA). This research therefore addresses the
problem of automating the evaluation of the actual impact of
implementing eGov simplification and modernization measures on the
Information Systems Architecture. To realize this evaluation we
propose a framework supported by key concepts such as quality
factors, ISA modeling, a multicriteria approach, polarity profiles,
and quality metrics.
Abstract: Medical image registration is the key technology in image-guided radiation therapy (IGRT) systems. Building on previous work on our IGRT prototype with a biorthogonal x-ray imaging system, this paper describes a 2D/2D rigid-body registration method using multiresolution-pyramid-based mutual information. The method involves three key steps: first, four 2D images are obtained as input for the registration, namely two x-ray projection images and two digitally reconstructed radiographs (DRRs); second, each pair of corresponding x-ray and DRR images is matched using multiresolution-pyramid-based mutual information under the ITK registration framework; third, the final couch offset is obtained through a coordinate transformation by combining the translations acquired from the two image pairs. A simulated parotid gland tumor case and a clinical anthropomorphic head phantom were employed in the verification tests. In addition, the influence of different CT slice thicknesses was tested. The simulation results showed positioning errors of 0.068±0.070, 0.072±0.098, and 0.154±0.176 mm along the lateral, longitudinal, and vertical axes. The clinical test indicated average positioning errors of the planned isocenter of 0.066, 0.07, and 2.06 mm with a CT slice thickness of 2.5 mm. It can be concluded that the method, with its verified accuracy and robustness, can be effectively used in IGRT systems for patient setup.
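The mutual-information measure at the core of such registration can be sketched as follows. This is a single-resolution toy version computed from a joint intensity histogram; the paper uses ITK's multiresolution-pyramid implementation, and the bin count here is an assumption.

```python
import math

def mutual_information(img_a, img_b, bins=8, max_val=256):
    """Mutual information between two equally sized grayscale images,
    computed from their joint intensity histogram."""
    joint, n = {}, 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            key = (a * bins // max_val, b * bins // max_val)
            joint[key] = joint.get(key, 0) + 1
            n += 1
    # Marginal histograms from the joint one.
    pa, pb = {}, {}
    for (i, j), c in joint.items():
        pa[i] = pa.get(i, 0) + c
        pb[j] = pb.get(j, 0) + c
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log(pij / ((pa[i] / n) * (pb[j] / n)))
    return mi
```

Registration then amounts to searching over rigid-body transforms for the one that maximizes this measure between each x-ray image and its DRR.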
Abstract: The development of information and communication
technology, the increased use of the internet, and the effects of
the recession in recent years have led to the increased use of
cloud computing based solutions, also called on-demand solutions.
These solutions offer a large number of benefits to organizations,
but also challenges and risks, mainly arising from data being
stored in different geographic locations on the internet. As far as
the specific risks of the cloud environment are concerned, data
security is still considered a major barrier to adopting cloud
computing. The present study offers an approach to ensuring the
security of cloud data oriented towards the whole data life cycle.
The final part of the study focuses on the assessment of data
security in the cloud, which represents the basis for determining
potential losses and the premise for subsequent improvements and
continuous learning.
Abstract: Noise causes significant changes in human sensibility. This study investigated the effect of five different noises on the electroencephalogram (EEG) and on subjective evaluation. Six human subjects were exposed to classic piano, ocean wave, army alarm, ambulance, and mosquito noises, and EEG data were collected during the experimental sessions. Alpha band activity in the mosquito noise was smaller than that in the classic piano, decreasing by 43.4 ± 8.2%. On the other hand, Beta band activity in the mosquito noise was greater than that in the classic piano, increasing by 60.1 ± 10.7%. The advances from this study may aid the product design process with human sensibility engineering, and the results may provide useful information for designing human-oriented products that avoid such stress.
Abstract: In the world of Peer-to-Peer (P2P) networking
different protocols have been developed to make the resource sharing
or information retrieval more efficient. The SemPeer protocol is a
new layer on Gnutella that transforms the connections of the nodes
based on semantic information to make information retrieval more
efficient. However, this transformation causes high clustering in the
network, which decreases the number of nodes reached and therefore
also the probability of finding a document. In this paper we
describe a mathematical model for the Gnutella and SemPeer
protocols that captures clustering-related issues, followed by a
proposition to modify the SemPeer protocol to achieve moderate
clustering. This modification is a sort of link management for the
individual nodes that allows the SemPeer protocol to be more
efficient, because the probability of a successful query in the P2P
network is reasonably increased. To validate the models, we ran a
series of simulations, which supported our results.
Abstract: Network security remains a priority for almost all companies. Existing security systems have shown their limits; thus a new type of security system was born: the honeypot. Honeypots are programs or dedicated servers intended to attract attackers so that their behaviour can be studied. It is in this context that the leurre.com project, gathering about twenty platforms, was born. This article aims to specify a model of honeypot attacks. Our model describes, on a given platform, the evolution of attacks according to the hour at which they occur. Afterward, we identify the most attacked services by studying the attacks on the various ports. Note that this article was elaborated within the framework of the research projects on honeypots within LABTIC (Laboratory of Information Technologies and Communication).
Abstract: In this paper we present a novel approach to face image coding. The proposed method makes use of features of video encoders such as motion prediction. First, the encoder selects an appropriate prototype from the database and warps it according to the features of the face being encoded. The warped prototype is placed as the first frame, an I frame, and the face being encoded is placed as the second frame, a P frame. Information about the feature positions, color changes, the selected prototype, and the data flow of the P frame is sent to the decoder; the precondition is that both the encoder and the decoder own the same database of prototypes. We ran an experiment with the H.264 video encoder and compared the results to those achieved by JPEG and JPEG2000. The results show that our approach achieves a three times lower bitrate and a two times higher PSNR in comparison with JPEG. Compared with JPEG2000, the bitrate was very similar, but the subjective quality achieved by the proposed method is better.
Abstract: The stereophotogrammetry modality is gaining more widespread use in the clinical setting. Registration and visualization of this data, in conjunction with conventional 3D volumetric image modalities, provides virtual human data with textured soft tissue and internal anatomical and structural information. In this investigation computed tomography (CT) and stereophotogrammetry data is acquired from 4 anatomical phantoms and registered using the trimmed iterative closest point (TrICP) algorithm. This paper fully addresses the issue of imaging artifacts around the stereophotogrammetry surface edge using the registered CT data as a reference. Several iterative algorithms are implemented to automatically identify and remove stereophotogrammetry surface edge outliers, improving the overall visualization of the combined stereophotogrammetry and CT data. This paper shows that outliers at the surface edge of stereophotogrammetry data can be successfully removed automatically.
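The trimming idea at the heart of TrICP can be sketched in a few lines: only the best-matching fraction of point correspondences contributes to the alignment error, so surface-edge outliers are excluded. The trim fraction below is an illustrative assumption.

```python
def trimmed_pairs(distances, trim_fraction=0.4):
    """Indices of the (1 - trim_fraction) closest correspondences."""
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    keep = max(1, int(len(distances) * (1.0 - trim_fraction)))
    return order[:keep]

def trimmed_mse(distances, trim_fraction=0.4):
    """Mean squared error over the kept correspondences only; this is
    the quantity an ICP iteration would minimize."""
    kept = trimmed_pairs(distances, trim_fraction)
    return sum(distances[i] ** 2 for i in kept) / len(kept)
```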
Abstract: In this paper we compare four content-based objective metrics with the results of subjective tests on 80 video sequences. We also include two objective metrics, VQM and SSIM, in our comparison to serve as “reference” objective metrics, because their pros and cons have already been published. Each video sequence was preprocessed by the region recognition algorithm, and then the particular objective video quality metrics were calculated, i.e., mutual information, angular distance, moment of angle, and the normalized cross-correlation measure. The Pearson coefficient was calculated to express each metric's relationship to the accuracy of the model, and the Spearman rank-order correlation coefficient to represent its relationship to monotonicity. The results show that the model with mutual information as the objective metric provides the best results and is suitable for evaluating the quality of video sequences.
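The two validation statistics used above can be sketched as follows; for brevity this rank computation ignores ties, which a full Spearman implementation would average.

```python
import math

def pearson(x, y):
    """Pearson correlation: linear agreement between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank-order correlation: monotonic agreement, computed
    as Pearson correlation of the ranks (ties ignored)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))
```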
Abstract: This paper presents the development of a Bayesian
belief network classifier for prediction of graft status and survival
period in renal transplantation using the patient profile information
prior to the transplantation. The objective was to explore the
feasibility of developing a decision-making tool for identifying the
most suitable
recipient among the candidate pool members. The dataset was
compiled from University of Toledo Medical Center patients as
reported to the United Network for Organ Sharing (UNOS), and
contained 1228 patient records for the period covering 1987 through
2009. The
Bayes net classifiers were developed using the Weka machine
learning software workbench. Two separate classifiers were induced
from the data set, one to predict the status of the graft as either failed
or living, and a second classifier to predict the graft survival period.
The classifier for graft status prediction performed very well with a
prediction accuracy of 97.8% and true positive rates of 0.967 and
0.988 for the living and failed classes, respectively. The second
classifier to predict the graft survival period yielded a prediction
accuracy of 68.2% and a true positive rate of 0.85 for the class
representing those instances with kidneys failing during the first year
following transplantation. Simulation results indicated that it is
feasible to develop a successful Bayesian belief network classifier
for the prediction of graft status, but not of the graft survival
period, using the information in the UNOS database.
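The paper induces its classifiers with Weka's Bayes net implementation; purely to illustrate the kind of probabilistic classifier involved, a minimal naive Bayes over categorical features (a simpler relative of a Bayes net, with all names here being assumptions) might look like this:

```python
from collections import defaultdict

def train(examples):
    """examples: list of (feature_tuple, label) pairs."""
    label_counts = defaultdict(int)
    feature_counts = defaultdict(int)  # (label, position, value) -> count
    for features, label in examples:
        label_counts[label] += 1
        for pos, value in enumerate(features):
            feature_counts[(label, pos, value)] += 1
    return label_counts, feature_counts

def predict(features, label_counts, feature_counts):
    """Pick the label maximizing P(label) * prod P(feature | label)."""
    total = sum(label_counts.values())
    best, best_p = None, -1.0
    for label, c in label_counts.items():
        p = c / total
        for pos, value in enumerate(features):
            # Laplace smoothing keeps unseen values from zeroing the product.
            p *= (feature_counts[(label, pos, value)] + 1) / (c + 2)
        if p > best_p:
            best, best_p = label, p
    return best
```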
Abstract: Different types of Islamic debt have been
increasingly utilized as a preferred means of debt funding by
Malaysian private firms in recent years. This study examines the
impact of Islamic debt announcements on these firms' stock
returns. Our sample includes forty-five companies listed on Bursa
Malaysia that issued Islamic debt during 2005 to 2008.
The abnormal returns and cumulative average abnormal returns are
calculated and tested using standard event study methodology. The
results show that a significant, negative abnormal return occurs one
day before the announcement date. This negative abnormal return
represents market participants' adverse attitude toward Islamic
private debt announcements during the research period.
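The standard event-study quantities referred to above can be sketched as follows, assuming market-model parameters (alpha, beta) estimated over a pre-event window; this is a generic sketch, not the paper's exact estimation procedure.

```python
def abnormal_returns(stock_returns, market_returns, alpha, beta):
    """AR_t = R_t - (alpha + beta * R_m,t), i.e. actual return minus
    the market-model expected return."""
    return [r - (alpha + beta * m)
            for r, m in zip(stock_returns, market_returns)]

def cumulative_average_abnormal_return(ar_per_firm):
    """CAAR over the event window: average ARs across firms for each
    day, then cumulate across days."""
    n_firms = len(ar_per_firm)
    n_days = len(ar_per_firm[0])
    aar = [sum(firm[t] for firm in ar_per_firm) / n_firms
           for t in range(n_days)]
    caar, acc = [], 0.0
    for a in aar:
        acc += a
        caar.append(acc)
    return caar
```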
Abstract: This paper proposes a new approach for image encryption
using a combination of different permutation techniques.
The main idea behind the present work is that an image can be
viewed as an arrangement of bits, pixels and blocks. The intelligible
information present in an image is due to the correlations among the
bits, pixels and blocks in a given arrangement. This perceivable information
can be reduced by decreasing the correlation among the bits,
pixels and blocks using certain permutation techniques. This paper
presents an approach for a random combination of the aforementioned
permutations for image encryption. From the results, it is observed
that the permutation of bits is effective in significantly reducing
the correlation, thereby decreasing the perceptual information,
whereas the permutations of pixels and blocks are better at providing
a higher level of security than bit permutation. A random combination
method employing all three techniques is thus observed to be useful
for tactical security applications, where protection is needed only
against a casual observer.
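A minimal sketch of the bit-permutation stage, assuming a key-seeded pseudorandom permutation of the 8 bits in each pixel byte; the paper's actual permutation generation may differ.

```python
import random

def keyed_permutation(n, key):
    """Key-derived permutation of n positions (reproducible per key)."""
    order = list(range(n))
    random.Random(key).shuffle(order)
    return order

def permute_bits(byte, order):
    """Output bit dst takes the input bit at position order[dst],
    scrambling the bit planes that carry the perceptual information."""
    out = 0
    for dst, src in enumerate(order):
        out |= ((byte >> src) & 1) << dst
    return out

def invert(order):
    """Inverse permutation, used for decryption."""
    inv = [0] * len(order)
    for dst, src in enumerate(order):
        inv[src] = dst
    return inv
```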