Abstract: To illustrate the diversity of methods used to extract relevant visual data (where the concept of relevance can be defined differently for different applications), the paper discusses three groups of such methods. They have been selected from a range of alternatives to highlight how hardware and software tools can be used in a complementary way to achieve various functionalities for different specifications of “relevant data”. First, the principles of gated imaging are presented (where relevance is determined by range). The second methodology is intended for intelligent intrusion detection, while the last one is used for content-based image matching and retrieval. All methods have been developed within projects supervised by the author.
Abstract: A secure electronic payment system is presented in this
paper. The system is intended to be secure for clients such as
customers and shop owners. The security architecture of the
system is based on the RC5 encryption/decryption algorithm. This
eliminates the fraud that occurs today with stolen credit card
numbers. The symmetric-key cryptosystem RC5 can protect
conventional transaction data such as account numbers, amounts and
other information. This process can be performed electronically using an
RC5 encryption/decryption program written in Microsoft Visual Basic
6.0. There is no danger of any data sent within the system being
intercepted and replaced. The alternative is to use the existing
network and to encrypt all data transmissions. The system with
encryption is acceptably secure, but the level of encryption has
to be stepped up as computing power increases. To secure
the system, the communication between modules is
encrypted using the symmetric-key cryptosystem RC5. The system
uses a simple user name, password, user ID, user type and cipher
authentication mechanism for identification when the user first
enters the system. This is the most common method of authentication in
most computer systems.
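The cipher named above is standard RC5. As an illustration only (the paper's own implementation is in Visual Basic 6.0, and the key below is a made-up placeholder), a minimal sketch of RC5-32/12/16 key expansion and one-block encryption/decryption can be written as:

```python
# Sketch of RC5-32/12/16 (32-bit words, 12 rounds, 16-byte key).
# Illustrative only; the key and plaintext are arbitrary examples.
W, R = 32, 12                          # word size in bits, rounds
MASK = (1 << W) - 1
P32, Q32 = 0xB7E15163, 0x9E3779B9      # RC5 magic constants for w = 32

def _rotl(x, s):
    s %= W
    return ((x << s) | (x >> (W - s))) & MASK

def _rotr(x, s):
    s %= W
    return ((x >> s) | (x << (W - s))) & MASK

def expand_key(key: bytes):
    """RC5 key schedule: mix the secret key into 2(R+1) round subkeys."""
    c = max(1, (len(key) + 3) // 4)
    L = [0] * c
    for i, b in enumerate(key):        # pack key bytes little-endian into words
        L[i // 4] |= b << (8 * (i % 4))
    t = 2 * (R + 1)
    S = [(P32 + i * Q32) & MASK for i in range(t)]
    A = B = i = j = 0
    for _ in range(3 * max(t, c)):     # mix key words and subkeys together
        A = S[i] = _rotl((S[i] + A + B) & MASK, 3)
        B = L[j] = _rotl((L[j] + A + B) & MASK, A + B)
        i, j = (i + 1) % t, (j + 1) % c
    return S

def encrypt_block(pt, S):
    A = (pt[0] + S[0]) & MASK
    B = (pt[1] + S[1]) & MASK
    for r in range(1, R + 1):
        A = (_rotl(A ^ B, B) + S[2 * r]) & MASK
        B = (_rotl(B ^ A, A) + S[2 * r + 1]) & MASK
    return (A, B)

def decrypt_block(ct, S):
    A, B = ct
    for r in range(R, 0, -1):          # undo the rounds in reverse order
        B = _rotr((B - S[2 * r + 1]) & MASK, A) ^ A
        A = _rotr((A - S[2 * r]) & MASK, B) ^ B
    return ((A - S[0]) & MASK, (B - S[1]) & MASK)

S = expand_key(b"demo-16-byte-key")
ct = encrypt_block((0x12345678, 0x9ABCDEF0), S)
assert decrypt_block(ct, S) == (0x12345678, 0x9ABCDEF0)
```

Being symmetric, the same expanded key protects account numbers and amounts on both ends of the transaction, which is why key distribution between modules matters in such a system.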
Abstract: This questionnaire-based study aimed to measure and
compare the awareness of English reading strategies among EFL
learners at Bangkok University (BU), classified by their gender, field
of study, and English learning experience. Proportional stratified
random sampling was employed to formulate a sample of 380 BU
students. The data were statistically analyzed in terms of the mean
and standard deviation. A t-test analysis was used to find differences in
awareness of reading strategies between two groups (male and
female; science and social-science students). In addition, one-way
analysis of variance (ANOVA) was used to compare reading strategy
awareness among BU students with different lengths of English
learning experience. The results of this study indicated that the
overall awareness of reading strategies of EFL learners at BU was at
a high level (mean = 3.60) and that there was no statistically significant
difference between males and females, or among students with
different lengths of English learning experience, at the significance
level of 0.05. However, significant differences among students
from different fields of study were found at the same level of
significance.
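The two tests used in the study can be sketched with hypothetical Likert-scale scores (the numbers below are illustrative, not the study's data): an independent-samples t statistic for the two-group comparisons and a one-way ANOVA F statistic for the experience bands.

```python
# Worked sketch of an independent-samples t statistic and a one-way
# ANOVA F statistic on hypothetical 5-point strategy-awareness scores.
from statistics import mean, variance

def t_statistic(a, b):
    """Student's t for two independent samples (pooled variance)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def anova_f(*groups):
    """One-way ANOVA F = between-group mean square / within-group mean square."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

male   = [3.4, 3.8, 3.5, 3.9, 3.6, 3.7]   # hypothetical scores
female = [3.6, 3.5, 3.9, 3.7, 3.8, 3.4]
print(t_statistic(male, female))           # 0.0 here: identical sample means

short, medium, long_ = [3.2, 3.5, 3.4], [3.6, 3.7, 3.5], [3.8, 3.6, 3.9]
print(anova_f(short, medium, long_))
```

The resulting t or F value is then compared against the critical value at the 0.05 level to decide significance, as in the study.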
Abstract: Grid computing is a group of clusters connected over
high-speed networks that involves coordinating and sharing
computational power, data storage and network resources operating
across dynamic and geographically dispersed locations. Resource
management and job scheduling are critical tasks in grid computing.
Resource selection becomes challenging due to heterogeneity and
dynamic availability of resources. Job scheduling is an NP-complete
problem, and different heuristics may be used to reach an optimal or
near-optimal solution. This paper proposes a model for resource and
job scheduling in a dynamic grid environment. The main focus is to
maximize resource utilization and minimize the processing time of
jobs. The grid resource selection strategy is based on a Max Heap Tree
(MHT), which is well suited to large-scale applications; the root node of
the MHT is selected for job submission. The job grouping concept is used to
maximize resource utilization when scheduling jobs in grid
computing. The proposed resource selection model and job grouping
concept enhance the scalability, robustness, efficiency and
load-balancing ability of the grid.
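The two ideas can be sketched together. The resource names, MIPS capacities and greedy grouping rule below are illustrative assumptions, not the paper's model: resources sit in a max-heap keyed on computing power (so the root is always the candidate for job submission), and fine-grained jobs are merged until a group fills the selected resource's capacity.

```python
# Max-heap resource selection plus greedy job grouping (illustrative data).
import heapq

resources = [("R1", 400), ("R2", 900), ("R3", 650)]   # (name, MIPS), hypothetical
heap = [(-mips, name) for name, mips in resources]    # negate: heapq is a min-heap
heapq.heapify(heap)

def select_resource():
    """Return the root of the max-heap: the most powerful resource."""
    neg_mips, name = heap[0]
    return name, -neg_mips

def group_jobs(jobs, capacity):
    """Greedily merge small jobs into groups no larger than `capacity`."""
    groups, current, size = [], [], 0
    for job in jobs:
        if size + job > capacity and current:
            groups.append(current)
            current, size = [], 0
        current.append(job)
        size += job
    if current:
        groups.append(current)
    return groups

name, mips = select_resource()
jobs = [120, 300, 250, 500, 80, 410]                  # job lengths in MI, hypothetical
print(name, group_jobs(jobs, mips))                   # groups sized for the root resource
```

Grouping many short jobs into one submission reduces per-job scheduling overhead, which is how the grouping concept raises resource utilization.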
Abstract: Traffic Engineering (TE) is the process of controlling
how traffic flows through a network in order to facilitate efficient and
reliable network operations while simultaneously optimizing network
resource utilization and traffic performance. TE improves the
management of data traffic within a network and provides better
utilization of network resources. Many research works consider intra-
and inter-AS Traffic Engineering separately, but in reality each influences
the other. Hence the network performance of both inter- and
intra-Autonomous System (AS) traffic is not optimized properly. To
achieve better joint optimization of both intra- and inter-AS TE, we
propose a joint optimization technique that considers intra-AS
features during inter-AS TE and vice versa. This work considers an
important criterion, namely latency, both within an AS and between ASes,
and proposes a bi-criteria latency optimization model. Overall
network performance can thus be improved, in terms of latency, by this
joint optimization technique.
Abstract: This paper presents an intrusion detection system based on a hybrid neural network model combining RBF and Elman networks. It is used for both anomaly detection and misuse detection. Because the model has a memory function, it can effectively detect discrete and temporally related attack behavior. The RBF network acts as a real-time pattern classifier, while the Elman network provides memory of former events. The intrusion detection system based on the hybrid model is evaluated on the DARPA data set, and ROC curves are used to display the test results intuitively. The experiments show that this hybrid-model intrusion detection system can effectively improve the detection rate and reduce the false-alarm and miss rates.
Abstract: Electrocardiogram (ECG) segmentation is necessary to help reduce the time-consuming task of manually annotating ECGs. Several algorithms have been developed to segment the ECG automatically. We first review several such methods, and then present a new single-lead segmentation method based on adaptive piecewise constant approximation (APCA) and piecewise derivative dynamic time warping (PDDTW). The results are tested on the QT database. We compared our results to Laguna's two-lead method. Our proposed approach has a comparable mean error, but yields a slightly higher standard deviation than Laguna's method.
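The dynamic-time-warping core that PDDTW builds on can be sketched as follows. The APCA reduction and the exact derivative transform of the actual method are omitted, and the toy sequences are illustrative assumptions, not ECG data:

```python
# Classic DTW, plus a derivative variant aligning first differences.
def dtw(a, b):
    """Minimal cumulative alignment cost between two sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def ddtw(a, b):
    """Derivative DTW: warp the first differences instead of raw samples."""
    da = [a[i + 1] - a[i] for i in range(len(a) - 1)]
    db = [b[i + 1] - b[i] for i in range(len(b) - 1)]
    return dtw(da, db)

template = [0, 0, 1, 4, 1, 0, 0]      # toy "QRS-like" shape
beat     = [0, 0, 0, 1, 4, 1, 0, 0]   # same shape, time-shifted
print(dtw(template, beat))            # 0.0: warping absorbs the shift
```

Warping a segmented template onto a new beat lets the template's boundary annotations be transferred to the beat, which is the basis of such segmentation methods.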
Abstract: This article applies the monthly final
energy yield and failure data of 202 PV systems installed in Taiwan to
analyze PV operational performance and system availability. These
data were collected by the Industrial Technology Research Institute through
manual records. Bad-data detection and failure-data estimation
approaches are proposed to guarantee the quality of the received
information. The performance ratio and system availability are
then calculated and compared with those of other countries. The results
indicate that the average performance ratio of Taiwan's PV systems
is 0.74 and the availability is 95.7%. These results are similar to
those of Germany, Switzerland, Italy and Japan.
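The performance ratio can be sketched as the final yield divided by the reference yield. The energy, rated-power and irradiation figures below are illustrative assumptions chosen to reproduce the reported 0.74, not the paper's measurements:

```python
# Performance ratio and availability of a PV system (illustrative numbers).
G_STC = 1.0   # reference irradiance at Standard Test Conditions, kW/m^2

def performance_ratio(energy_kwh, p_rated_kwp, irradiation_kwh_m2):
    final_yield = energy_kwh / p_rated_kwp        # kWh produced per kWp installed
    reference_yield = irradiation_kwh_m2 / G_STC  # equivalent full-sun hours
    return final_yield / reference_yield

def availability(uptime_h, total_h):
    return uptime_h / total_h

pr = performance_ratio(energy_kwh=333.0, p_rated_kwp=3.0, irradiation_kwh_m2=150.0)
print(round(pr, 2))                          # 0.74 for these assumed inputs
print(round(availability(700.8, 732.0), 3))  # 0.957 for these assumed hours
```

A ratio below 1 captures all real-world losses (temperature, soiling, inverter efficiency, downtime) relative to ideal operation, which is why it is the standard cross-country comparison metric.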
Abstract: Primary and secondary data from the Bauchi abattoir were utilized to determine the relative contributions of different livestock species to meat supply in Bauchi Metropolis. Daily livestock slaughter figures for five months (June – October 2011) indicated that more goats (64.0) were slaughtered than either sheep (47.3) or cattle (41.30) each day (P
Abstract: The transient shape variation of a rotating liquid droplet is
simulated numerically. The three-dimensional Navier-Stokes
equations were solved using the level set method. The shape
variation from the sphere to the rotating ellipsoid, and then to the two-lobed
shape, is simulated, and the elongation of the two-lobed droplet is
discussed. The two-lobed shape after the initial transient is found to be
stable, and the elongation is almost the same for cases with different
initial rotation rates. The relationship between the elongation and the
rotation rate is obtained by averaging the transient shape variation. It is
shown that the elongation of the two-lobed shape is in good agreement
with existing experimental data. It is found that transient
numerical simulation is necessary for analyzing the largely elongated
two-lobed shape of a rotating droplet.
Abstract: In this paper, a new descent-projection method with a
new search direction for monotone structured variational inequalities
is proposed. The method is simple, requiring only projections
and some function evaluations, so its computational load is very small.
Under mild conditions on the problem's data, the method is proved to
converge globally. Some preliminary computational results are also
reported to illustrate the efficiency of the method.
Abstract: In the present study, the anti-inflammatory and
antinociceptive effects of a Vitex hydro-alcoholic extract were evaluated in
male mice. In the inflammation test, mice were divided into 7 groups:
the first group was the control. The second group, the positive control,
received dexamethasone (15 mg/kg), and the other five groups
received different doses of the hydro-alcoholic extract of Vitex fruit (265,
365, 465, 565, and 665 mg/kg). Inflammation was induced by
xylene-induced ear edema. The formalin test was used to evaluate the
antinociceptive effect of the extract. In this test, mice were divided into 7
groups: control, morphine (10 mg/kg) as the positive control, and
Vitex extract groups (265, 365, 465, 565, and 665 mg/kg). All drugs
were administered intraperitoneally, 30 min before each test. The data
were analyzed using one-way ANOVA followed by the Tukey-Kramer
multiple comparison test. The results showed significant anti-inflammatory
effects of the extract at all doses as compared with the control
(P
Abstract: This paper shows a simple and effective approach to
the design and implementation of Industrial Information Systems
(IIS) oriented to controlling the characteristics of each individual product manufactured in a production line, as well as its manufacturing conditions. The particular products considered in this work are large steel strips that are coiled just after their manufacture. However, the approach is directly applicable to coiled strips in other industries, such as
paper, textile, aluminum, etc. These IIS provide very detailed information on each manufactured product, which complements the general information managed by the ERP system of the production line. In spite of the high importance of this type of IIS for guaranteeing and improving the quality of the products manufactured in many industries, there are very few works about them in the technical literature. For this reason, this paper represents an important contribution to the development of this type of IIS, providing guidelines for their design, implementation and exploitation.
Abstract: Sickness absence represents a major economic and
social issue. Analysis of sick leave data is a recurrent challenge to analysts because of the complexity of the data structure which is
often time dependent, highly skewed and clumped at zero. Ignoring these features to make statistical inference is likely to be inefficient
and misguided. Traditional approaches do not address these problems. In this study, we discuss model methodologies in terms of statistical techniques for addressing the difficulties with sick leave data. We also introduce and demonstrate a new method by performing a longitudinal assessment of long-term absenteeism using
a large registration dataset as a working example available from the Helsinki Health Study for municipal employees from Finland during the period of 1990-1999. We present a comparative study on model
selection and a critical analysis of the temporal trends, the occurrence
and degree of long-term sickness absences among municipal employees. The strengths of this working example include the large
sample size over a long follow-up period, providing strong evidence in support of the new model. Our main goal is to propose a way to
select an appropriate model and to introduce a new methodology for analysing sickness absence data as well as to demonstrate model
applicability to complicated longitudinal data.
Abstract: This paper describes a computer-aided design approach
for the design of a concave globoidal cam with cylindrical rollers and a
swinging follower. Four models with different modeling methods are
made from the same input data. The input data are the angular input and
output displacements of the cam and the follower and some other
geometrical parameters of the globoidal cam mechanism. The best
cam model is the one that has no interference with the rollers
when their motions are simulated under assembly conditions. The
angular output displacement of the follower for the best cam is also
compared with that in the input data to check for errors. In this study,
Pro/ENGINEER® Wildfire 2.0 is used for modeling the cam,
simulating motions, and checking interference and errors of the
system.
Abstract: Rule discovery is an important technique for mining knowledge from large databases. The use of objective measures for discovering interesting rules leads to another data mining problem, although of reduced complexity. Data mining researchers have studied subjective measures of interestingness to reduce the volume of discovered rules and ultimately improve the overall efficiency of the KDD process. In this paper we study the novelty of discovered rules as a subjective measure of interestingness. We propose a hybrid approach that uses objective and subjective measures to quantify the novelty of discovered rules in terms of their deviations from known rules. We analyze the types of deviation that can arise between two rules and categorize the discovered rules according to a user-specified threshold. We implement the proposed framework and experiment with some public datasets. The experimental results are quite promising.
Abstract: In the present study, fracture behavior of woven
fabric-reinforced glass/epoxy composite laminates under mode III
crack growth was experimentally investigated and numerically
modeled. Two methods were used for the calculation of the strain
energy release rate: the experimental compliance calibration (CC)
method and the Virtual Crack Closure Technique (VCCT). To
achieve this aim ECT (Edge Crack Torsion) was used to evaluate
fracture toughness in mode III loading (out of plane-shear) at
different crack lengths. Load-displacement curves and the associated energy
release rates were obtained for the various cases of interest. To
calculate the fracture toughness JIII, two criteria were considered:
the non-linearity point and the maximum point of the load-displacement
curve. It is observed that JIII increases as the crack length
increases. Both the experimental compliance method and the virtual
crack closure technique proved applicable for the interpretation of the
fracture mechanics data of woven glass/epoxy laminates in mode III.
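The compliance calibration idea can be sketched numerically: fit (or finite-difference) the compliance C as a function of crack length a, then evaluate G_III = P^2/(2b) * dC/da. The loads, dimensions and compliance values below are illustrative assumptions, not the paper's test data:

```python
# Compliance-calibration estimate of the mode III energy release rate
# (all numerical values are hypothetical ECT-style inputs).
def strain_energy_release_rate(P, b, dC_da):
    """G_III = P^2 / (2 b) * dC/dA for load P (N) and specimen width b (m)."""
    return P ** 2 / (2.0 * b) * dC_da

# Finite-difference slope of compliance vs. crack length from two tests
a1, C1 = 0.020, 4.0e-6    # crack length (m), compliance (m/N), hypothetical
a2, C2 = 0.030, 5.5e-6
dC_da = (C2 - C1) / (a2 - a1)

G = strain_energy_release_rate(P=300.0, b=0.038, dC_da=dC_da)
print(G)   # J/m^2 for these assumed inputs
```

In practice the compliance is fitted over several crack lengths rather than two points, and the VCCT result from the finite-element model is compared against this experimental estimate.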
Abstract: The purposes of this research are to study and develop
an algorithm for Thai spoonerism using a semi-automatic computer
program: in the data-input stage, syllables are given already
separated, and in the spoonerism stage the developed algorithm is
applied. The algorithm establishes rules and mechanisms for Thai
spoonerism of bi-syllabic words by analyzing the elements of
the syllables, namely the cluster consonant, vowel, intonation mark and
final consonant. The study found that bi-syllabic Thai
spoonerism has one spoonerism mechanism, namely the
transposition of the vowel, intonation mark and final consonant of
the two syllables, while keeping the consonant value and cluster (if
any).
The rules and mechanisms identified were then implemented as
Thai spoonerism software in PHP. A performance test of the
software found that the program performs bi-syllabic Thai
spoonerism correctly for 99% of the words used in the test. The
remaining 1% of faults occur because the words obtained from
spoonerism may not be spelled in conformity with Thai grammar,
and because more than one spoonerism answer may be possible.
Abstract: In this paper, we apply and compare two generalized estimating equation approaches to the analysis of car breakdown data in Mauritius. The number of breakdowns experienced by a machine is a highly under-dispersed count random variable, and its value can be attributed to factors related to the mechanical input and output of that machine. Analyzing such under-dispersed count observations as a function of explanatory factors has been a challenging problem. In this paper, we aim at estimating the effects of various factors on the number of breakdowns experienced by a passenger car, based on a study performed in Mauritius over a year. We note that the number of passenger car breakdowns is highly under-dispersed. These data are therefore modelled and analyzed using the Com-Poisson regression model. We use two types of quasi-likelihood estimation approaches to estimate the parameters of the model: the marginal and joint generalized quasi-likelihood estimating equation approaches. The under-dispersion parameter is estimated to be around 2.14, justifying the appropriateness of the Com-Poisson distribution in modelling the under-dispersed count responses recorded in this study.
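The Com(Conway-Maxwell)-Poisson distribution used here has pmf P(Y = y) proportional to lambda^y / (y!)^nu, where nu > 1 produces under-dispersion. A small sketch (the rate lambda is an arbitrary choice; nu = 2.14 follows the estimate in the abstract) checks that the variance indeed falls below the mean:

```python
# Com-Poisson pmf by direct normalization; verify under-dispersion for nu > 1.
from math import factorial

def cmp_pmf(lam, nu, ymax=30):
    """Truncated Com-Poisson pmf: weights lambda^y / (y!)^nu, normalized."""
    w = [lam ** y / factorial(y) ** nu for y in range(ymax + 1)]
    Z = sum(w)
    return [p / Z for p in w]

lam, nu = 3.0, 2.14          # lam is illustrative; nu as estimated in the paper
pmf = cmp_pmf(lam, nu)
m = sum(y * p for y, p in enumerate(pmf))
v = sum((y - m) ** 2 * p for y, p in enumerate(pmf))
print(m, v)                  # variance < mean: under-dispersed counts
```

For nu = 1 the weights reduce to the ordinary Poisson pmf (variance equal to the mean), which is why nu estimated well above 1 justifies moving from Poisson to Com-Poisson regression.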
Abstract: The traditional Failure Mode and Effects Analysis
(FMEA) uses Risk Priority Number (RPN) to evaluate the risk level
of a component or process. The RPN index is determined by
calculating the product of severity, occurrence and detection indexes.
The most critically debated disadvantage of this approach is that
various sets of these three indexes may produce an identical value of
RPN. This research paper seeks to address the drawbacks in
traditional FMEA and to propose a new approach to overcome these
shortcomings. The Risk Priority Code (RPC) is used to prioritize
failure modes, when two or more failure modes have the same RPN.
A new method is proposed to prioritize failure modes, when there is a
disagreement in ranking scale for severity, occurrence and detection.
An Analysis of Variance (ANOVA) is used to compare means of
RPN values. SPSS (Statistical Package for the Social Sciences)
statistical analysis package is used to analyze the data. The results
presented are based on two case studies. It is found that the proposed
methodology resolves the limitations of the traditional
FMEA approach.
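The clash the paper addresses is easy to demonstrate: the RPN is the product severity × occurrence × detection, so different index triples collapse to the same priority. The two failure modes below are hypothetical examples:

```python
# Two distinct (severity, occurrence, detection) triples with the same RPN.
def rpn(severity, occurrence, detection):
    """Traditional FMEA Risk Priority Number."""
    return severity * occurrence * detection

mode_a = (9, 2, 5)   # hypothetical: very severe but rare failure
mode_b = (5, 6, 3)   # hypothetical: moderate, frequent failure
print(rpn(*mode_a), rpn(*mode_b))   # both 90, yet the risks differ
```

Ranking by RPN alone cannot separate these modes, which motivates a tie-breaking Risk Priority Code when two or more failure modes share the same RPN.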