Abstract: Psoriasis is a chronic inflammatory skin condition
that affects 2-3% of the population worldwide. The Psoriasis Area
and Severity Index (PASI) is the gold standard for assessing psoriasis
severity as well as treatment efficacy. Although it is the gold standard,
PASI is rarely used because it is tedious and complex. In practice,
the PASI score is determined subjectively by dermatologists; therefore,
inter- and intra-rater variations in assessment can occur even
among expert dermatologists. This research develops an algorithm to
assess psoriasis lesions for objective PASI scoring. The focus of this
research is thickness assessment, one of the four PASI parameters
besides area, erythema and scaliness. Psoriasis lesion thickness is
measured by averaging the total elevation from lesion base to lesion
surface. Thickness values of 122 3D images taken from 39 patients
are grouped into 4 PASI thickness scores using K-means clustering.
Validation of the lesion base construction is performed using twelve
body curvature models and shows good results, with a coefficient of
determination (R²) equal to 1.
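As a rough illustration of the clustering step, a one-dimensional K-means (Lloyd's algorithm) can group thickness values into four PASI thickness scores. The thickness values below are synthetic stand-ins, not the paper's 122 measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic lesion thickness values (mm), four loose severity groups
thickness = np.concatenate([rng.normal(m, 0.05, 30) for m in (0.2, 0.6, 1.1, 1.7)])

def kmeans_1d(x, k, iters=100):
    """Lloyd's algorithm in 1-D, initialized at data quantiles."""
    centers = np.quantile(x, np.linspace(0, 1, k))
    for _ in range(iters):
        # Assign each value to the nearest cluster centre
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[labels == j].mean() if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

labels, centers = kmeans_1d(thickness, 4)
# Map clusters to PASI thickness scores 1..4 in order of increasing centre
rank = {c: s + 1 for s, c in enumerate(np.argsort(centers))}
pasi = np.array([rank[l] for l in labels])
```

Ordering the clusters by their centres is what turns arbitrary cluster indices into an ordinal severity score.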
Abstract: This paper presents an implementation of an object tracking system for video sequences. Object tracking is an important task in many vision applications. Video analysis involves two main steps: detection of interesting moving objects and tracking of such objects from frame to frame. Most tracking algorithms use pre-specified methods for preprocessing. In our work, we have implemented several object tracking algorithms (Meanshift, Camshift, Kalman filter) with different preprocessing methods. We have then evaluated the performance of these algorithms on different video sequences. The results obtained show good performance according to the degree of applicability and the evaluation criteria.
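As one example of the trackers mentioned, a minimal constant-velocity Kalman filter tracking a 1-D position can be sketched as follows. The measurements are synthetic; the paper's implementation operates on video frames, so this is only an illustration of the filter itself.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 1e-3 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

x = np.zeros((2, 1))                    # initial state estimate
P = np.eye(2)                           # initial state covariance

rng = np.random.default_rng(1)
true_pos = np.arange(50) * 0.8                       # object moving at 0.8 units/frame
measurements = true_pos + rng.normal(0, 0.5, 50)     # noisy detections

estimates = []
for z in measurements:
    # Predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step with the new measurement
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    estimates.append(x[0, 0])
```

The filter smooths the noisy detections while predicting through frames, which is why it pairs well with appearance-based trackers such as Meanshift or Camshift.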
Abstract: The focus in this work is to assess which method
allows better forecasting of malaria cases in Bujumbura (Burundi)
when taking into account association between climatic factors and
the disease. For the period 1996-2007, real monthly data on both
malaria epidemiology and climate in Bujumbura are described and
analyzed. We propose a hierarchical approach to achieve our
objective. We first fit a Generalized Additive Model to malaria cases
to obtain an accurate predictor, which is then used to predict future
observations. Various well-known forecasting methods are compared
leading to different results. Based on the in-sample mean absolute
percentage error (MAPE), the exponential smoothing state space
model with multiplicative error and multiplicative seasonality
performed best.
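For reference, the in-sample MAPE used to compare the forecasting methods can be computed as below. The monthly case counts here are hypothetical, not the Bujumbura data.

```python
def mape(actual, forecast):
    """In-sample mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

cases  = [120, 150, 180, 160]   # hypothetical monthly malaria case counts
fitted = [110, 155, 170, 165]   # hypothetical in-sample fitted values
error = mape(cases, fitted)
```

A lower MAPE indicates a model whose fitted values track the observed series more closely in relative terms.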
Abstract: This paper describes the evolution of strategies to
evaluate ePortfolios in an online Master's of Education (M.Ed.)
degree in Instructional Technology. The ePortfolios are required as a
culminating activity for students in the program. By using Web 2.0
tools to develop the ePortfolios, students are able to showcase their
technical skills, integrate national standards, demonstrate their
professional understandings, and reflect on their individual learning.
Faculty have created assessment strategies to evaluate student
achievement of these skills. To further develop ePortfolios as a tool
promoting authentic learning, faculty are moving toward integrating
transparency as part of the evaluation process.
Abstract: The Radio Frequency Identification (RFID) system is
regarded as one of the top ten important technologies of the 20th
century and finds applications in many fields, such as the car industry.
Intelligent cars are an important part of this industry, which is
always trying to develop new and more satisfying intelligent cars.
The purpose of this paper is to introduce an intelligent car based on
RFID. By storing movement control commands such as turn right,
turn left, speed up and slow down in RFID tags beforehand and
sticking the tags on the track, the car can read the movement control
commands from the tags and perform the appropriate actions.
Abstract: This paper presents the development of a wavelet-based
algorithm for distinguishing between magnetizing inrush
currents and power system fault currents, which is an adequate,
reliable, fast and computationally efficient tool. The proposed
technique consists of a preprocessing unit based on discrete wavelet
transform (DWT) in combination with an artificial neural network
(ANN) for detecting and classifying fault currents. The DWT acts as
an extractor of distinctive features in the input signals at the relay
location. This information is then fed into an ANN for classifying
fault and magnetizing inrush conditions. A 220/55/55 V, 50 Hz
laboratory transformer connected to a 380 V power system was
simulated using ATP-EMTP. The DWT was implemented in
MATLAB, and a Coiflet mother wavelet was used to analyze primary
currents and generate training data. The simulation results presented
clearly show that the proposed technique can accurately discriminate
between magnetizing inrush and fault currents in transformer
protection.
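The core idea, using wavelet detail coefficients as distinctive features before classification, can be illustrated with a one-level Haar DWT. The paper uses a Coiflet mother wavelet and an ANN classifier; the Haar wavelet and synthetic waveforms below are simplifications to show why detail-coefficient energy separates a smooth inrush-like current from a fault-like current with high-frequency content.

```python
import numpy as np

def haar_dwt(signal):
    """One-level discrete Haar wavelet transform: (approximation, detail)."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def detail_energy(x):
    """Energy of the detail coefficients, a simple distinctive feature."""
    _, d = haar_dwt(x)
    return float(np.sum(d ** 2))

# Synthetic relay-location currents (illustrative only)
t = np.linspace(0, 0.1, 256, endpoint=False)
inrush = np.sin(2 * np.pi * 50 * t)                         # smooth 50 Hz waveform
fault = inrush + 0.5 * np.sign(np.sin(2 * np.pi * 650 * t)) # added high-frequency distortion
```

In a full scheme, vectors of such wavelet features would be fed to the ANN for inrush/fault classification.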
Abstract: Zirconium diamine and triamine complexes can possess biological activities. These complexes were synthesised via the reaction of equimolar quantities of 1,10-phenanthroline {NC3H3(C6H2)NC3H3} (L1), 4-aminophenazone {ONC6H5(NH)CH(NH2)} (L2) or diphenylcarbazone {HNNCO(NH)2(C6H5)} (L3) with a zirconium salt {ZrOCl2} in a 1:1 ratio to form the complexes [{NC3H3(C6H2)NC3H3}ZrOCl2] [ZrOCl2L1], [{O2NC6H4(NH)(NH2)}ZrOCl2] [ZrOCl2L2] and [{HNNCO(NH)2(C6H5)}ZrOCl2] [ZrOCl2L3], respectively. They were characterised using Fourier Transform Infrared (FT-IR) and UV-Visible spectroscopy. A variable-temperature study of these complexes was also carried out, using UV-Visible spectroscopy to observe electronic transitions under temperature control. In addition, a DFT study was performed on these complexes using information from the FT-IR and UV-Visible spectra.
These complexes were found to show different degrees of inhibition of the growth of bacterial strains of Bacillus spp., Klebsiella spp., E. coli, Proteus spp. and Pseudomonas spp. at different concentrations (0.001, 0.2 and 1 M). For a better understanding, these complexes were also examined using Density Functional Theory (DFT) calculations.
Abstract: This paper argues that fostering mutual understanding in landscape planning is as much about the planners educating stakeholder groups as the stakeholders educating the planners. In other words, it requires an epistemological agreement as to the meaning and nature of place, especially where an effort is made to go beyond the quantitative aspects, which can be achieved through the phenomenological experience of a Virtual Reality (VR) environment. This education needs to be a bi-directional process in which distance can be temporal as well as spatial separation of participants. There needs to be a common framework of understanding in which neither 'side' is disadvantaged during the process of information exchange, and a medium such as VR offers an effective way of overcoming some of the shortcomings of traditional media by taking advantage of continuing advances in Information and Communications Technology (ICT). In this paper we make particular reference to this as an extension of Geographical Information Systems (GIS). VR as a two-way communication tool offers considerable potential, particularly in the area of Public Participation GIS (PPGIS). Information-rich virtual environments that can operate over broadband networks are now possible and thus allow for the representation of large amounts of qualitative and quantitative information side by side. Therefore, with broadband access becoming standard for households and enterprises alike, distributed virtual reality environments have great potential to contribute to enabling stakeholder participation and mutual learning within the planning context.
Abstract: Requirements management is critical to software
delivery success throughout the project lifecycle. Requirements management
and their traceability provide assistance for many software
engineering activities like impact analysis, coverage analysis,
requirements validation and regression testing. In addition,
requirements traceability is a recognized component of many
software process improvement initiatives. Requirements traceability
also helps to control and manage evolution of a software system.
This paper aims to provide an evaluation of current requirements
management and traceability tools. Project managers and test managers
require an appropriate tool for the software under test. We hope
the evaluation presented here will help in selecting an efficient and
effective tool.
Abstract: Cytogenetic analysis still remains the gold standard method for prenatal diagnosis of trisomy 21 (Down syndrome, DS). Nevertheless, conventional cytogenetic analysis needs live cultured cells and is too time-consuming for clinical application. In contrast, molecular methods such as FISH, QF-PCR, MLPA and quantitative real-time PCR are rapid assays with results available in 24 h. In the present study, we have successfully used a novel MGB TaqMan probe-based real-time PCR assay for rapid diagnosis of trisomy 21 status in Down syndrome samples. We have also compared the results of this molecular method with corresponding results obtained by cytogenetic analysis. Blood samples obtained from DS patients (n=25) and normal controls (n=20) were tested by quantitative real-time PCR in parallel to standard G-banding analysis. Genomic DNA was extracted from peripheral blood lymphocytes. A high-precision TaqMan probe quantitative real-time PCR assay was developed to determine the gene dosage of DSCAM (target gene on 21q22.2) relative to PMP22 (reference gene on 17p11.2). The DSCAM/PMP22 ratio was calculated according to the formula ratio = 2^(-ΔΔCt). The quantitative real-time PCR was able to distinguish between trisomy 21 samples and normal controls, with gene ratios of 1.49±0.13 and 1.03±0.04 respectively (p value
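The gene-dosage calculation follows the standard 2^(-ΔΔCt) relative quantification method, which can be sketched directly. The Ct values below are hypothetical, chosen only to show the arithmetic.

```python
def ddct_ratio(ct_target_sample, ct_ref_sample, ct_target_calib, ct_ref_calib):
    """Relative target/reference gene dosage via ratio = 2^(-ΔΔCt)."""
    ddct = (ct_target_sample - ct_ref_sample) - (ct_target_calib - ct_ref_calib)
    return 2.0 ** (-ddct)

# A trisomic sample carrying 1.5x the target gene dosage amplifies
# roughly log2(1.5) ≈ 0.585 cycles earlier than a disomic calibrator.
ratio = ddct_ratio(23.415, 24.0, 24.0, 24.0)
```

A ratio near 1.5 indicates three copies of the target gene relative to the two-copy reference, while a ratio near 1.0 indicates a normal disomic sample.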
Abstract: In this paper a new fast simplification method is
presented. The method realizes Karnaugh maps with a large
number of variables. In order to accelerate the operation of the
proposed method, a new approach for fast detection of groups
of ones is presented. This approach is implemented in the
frequency domain. The search operation relies on performing
cross correlation in the frequency domain rather than in the time domain.
It is proved mathematically and practically that the number of
computation steps required for the presented method is less
than that needed by conventional cross correlation. Simulation
results using MATLAB confirm the theoretical computations.
Furthermore, a powerful solution for the realization of complex
functions is given. The simplified functions are implemented
using a new design for neural networks. Neural networks
are used because they are fault tolerant and, as a result, they
can recognize signals even with noise or distortion. This is
very useful for logic functions used in data and computer
communications. Moreover, the implemented functions are
realized with minimum amount of components. This is done
by using modular neural nets (MNNs) that divide the input
space into several homogeneous regions. This approach is
applied to implement the XOR function, 16 logic functions on one
bit level, and a 2-bit digital multiplier. Compared to previous
non-modular designs, a clear reduction in the order of
computations and hardware requirements is achieved.
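The frequency-domain search can be illustrated with a toy example: cross-correlating a binary vector against a pattern of ones via the FFT, which costs O(N log N) rather than the O(N²) of direct correlation. The mapping of Karnaugh-map rows to the vector below is a simplification, not the paper's exact encoding.

```python
import numpy as np

def xcorr_fft(x, h):
    """Cross-correlation via the frequency domain, zero-padded to full length."""
    n = len(x) + len(h) - 1
    X = np.fft.rfft(x, n)
    H = np.fft.rfft(h, n)
    # IFFT of X * conj(H) gives the cross-correlation at each lag
    return np.fft.irfft(X * np.conj(H), n)

# Locate a run of three ones inside a longer binary vector (a toy stand-in
# for detecting a group of ones in a Karnaugh-map row).
row = np.array([0, 1, 1, 1, 0, 0, 1, 0], dtype=float)
pattern = np.array([1, 1, 1], dtype=float)
scores = xcorr_fft(row, pattern)
```

The lag with the maximum score marks where the pattern of ones begins, and a score equal to the pattern length indicates an exact match.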
Abstract: In this study, possible gender differences in the mathematics beliefs and mathematics teaching anxiety of prospective elementary mathematics teachers have been investigated. For this purpose, 1st-, 2nd-, 3rd- and 4th-year students from a government university in Turkey were selected as the sample. The Mathematics Teaching Anxiety Scale (MATAS) and the Beliefs About Mathematics Survey (BAMS) have been used as data collection tools. As a result of the study, it has been observed that prospective male teachers take a more instrumentalist approach to learning mathematics than females, according to their mathematical beliefs. On the other hand, females have more mathematics teaching anxiety than males, especially regarding subject knowledge in mathematics and self-confidence.
Abstract: In this paper, a new learning algorithm based on a
hybrid metaheuristic integrating Differential Evolution (DE) and
Reduced Variable Neighborhood Search (RVNS) is introduced to train
the classification method PROAFTN. To apply PROAFTN, values of
several parameters need to be determined prior to classification. These
parameters include boundaries of intervals and relative weights for
each attribute. Based on these requirements, the hybrid approach,
named DEPRO-RVNS, is presented in this study. In some cases, the
major problem when applying DE to classification problems
was the premature convergence of some individuals to local optima.
To eliminate this shortcoming and to improve the exploration and
exploitation capabilities of DE, such individuals are iteratively
re-explored using RVNS. Based on the generated results on
both training and testing data, it is shown that the performance of
PROAFTN is significantly improved. Furthermore, the experimental
study shows that DEPRO-RVNS outperforms well-known machine
learning classifiers in a variety of problems.
Abstract: This work presents a new approach to securing a
wireless network. The configuration is focused on securing and
protecting wireless network traffic for a small network such as a
home or dorm room. The security mechanism provides both
authentication, allowing only known authorized users access to the
wireless network, and encryption, preventing anyone from reading
the wireless traffic. The solution utilizes the open-source
FreeS/WAN software, which implements Internet Protocol
Security (IPsec). In addition to the wireless components, a wireless NIC
in the PC and a wireless access point, the setup needs a machine running
Linux to act as a security gateway. While the current configuration assumes
that the wireless PC clients are running Linux, Windows XP/Vista/7
machines equipped with VPN software can also interface
with this configuration.
Abstract: The dynamics of a Bertrand duopoly game are analyzed, where players use different production methods and choose their prices with bounded rationality. The equilibria of the corresponding discrete dynamical system are investigated. The stability conditions of the Nash equilibrium under a local adjustment process are studied. As some parameters of the model are varied, the Nash equilibrium loses stability, giving rise to complex dynamics such as cycles of higher order and chaos. On this basis, we find that an increase in the adjustment speed of the boundedly rational player can make the Bertrand market sink into a chaotic state. Finally, the complex dynamics, bifurcations and chaos are displayed by numerical simulation.
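A generic boundedly rational price-adjustment map illustrates the mechanism: each player moves its price along its own marginal profit, and a larger adjustment speed destabilizes the Nash equilibrium. The linear demand specification and all parameter values below are hypothetical, not the paper's exact model.

```python
# Boundedly rational Bertrand adjustment: p_i(t+1) = p_i + alpha * p_i * dπ_i/dp_i
# with profit π_i = (p_i - c_i) * (a - b*p_i + d*p_j). Parameters are hypothetical.
a, b, d = 10.0, 1.0, 0.5
c1, c2 = 1.0, 1.2
alpha = 0.05          # adjustment speed; sufficiently large alpha destabilizes
                      # the fixed point, producing cycles and chaos

def step(p1, p2):
    # Marginal profits dπ_i/dp_i of the linear-demand Bertrand game
    m1 = a - 2 * b * p1 + d * p2 + b * c1
    m2 = a - 2 * b * p2 + d * p1 + b * c2
    return p1 + alpha * p1 * m1, p2 + alpha * p2 * m2

p1, p2 = 2.0, 2.0
for _ in range(500):
    p1, p2 = step(p1, p2)
# With this small alpha the map settles at the Nash equilibrium (7.36, 7.44),
# where both marginal profits vanish.
```

Sweeping alpha upward and plotting the attractor would reproduce the period-doubling route to chaos that the abstract describes.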
Abstract: Medical image segmentation based on image smoothing followed by edge detection assumes a great degree of importance in the field of image processing. In this regard, this paper proposes a novel algorithm for medical image segmentation based on vigorous smoothing, by identifying the type of noise, and an edge detection methodology, which promises to be a boon for medical image diagnosis. The main objective of this algorithm is to take a particular medical image as input, perform preprocessing to remove the noise content by employing a suitable filter after identifying the type of noise, and finally carry out edge detection for image segmentation. The algorithm consists of three parts. First, the type of noise present in the medical image is identified as additive, multiplicative or impulsive by analysis of local histograms, and the image is denoised by employing a Median, Gaussian or Frost filter accordingly. Second, edge detection of the filtered medical image is carried out using the Canny edge detection technique. The third part is the segmentation of the edge-detected medical image by the method of Normalized Cut eigenvectors. The method is validated through experiments on real images. The proposed algorithm has been simulated on the MATLAB platform. The simulation results show that the proposed algorithm is very effective and can deal with low-quality or marginally vague images that have high spatial redundancy, low contrast and significant noise, and it has potential for practical use in medical image diagnosis.
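The first stage, flagging impulsive noise from histogram extremes and removing it with a median filter, might be sketched as follows. The threshold and the synthetic 8-bit test image are toy choices, not the authors' exact local-histogram procedure.

```python
import numpy as np

def looks_impulsive(img, frac=0.05):
    """Flag impulsive (salt-and-pepper) noise from the histogram extremes."""
    img = np.asarray(img)
    return np.mean((img == 0) | (img == 255)) > frac

def median3(img):
    """3x3 median filter, the usual remedy for impulsive noise."""
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    # Stack the nine shifted views of the padded image and take the median
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

clean = np.full((10, 10), 100.0)
noisy = clean.copy()
for r, c, v in [(1, 1, 0), (2, 6, 0), (4, 7, 255),
                (6, 2, 255), (8, 3, 255), (9, 9, 0)]:
    noisy[r, c] = v   # isolated salt-and-pepper impulses
```

For Gaussian-like additive noise the same dispatch logic would instead select a Gaussian filter, and for multiplicative speckle a Frost filter, as the abstract describes.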
Abstract: Postgraduate education is generally aimed at providing in-depth knowledge and understanding that includes general philosophy in the world of sciences, management, technologies, applications and other elements closely related to specific areas. In most universities, besides core and non-core subjects, a thesis is one of the requirements a postgraduate student must fulfil before graduating. This paper reports on an empirical investigation into the attributes associated with obstacles to thesis accomplishment among postgraduate students. Using a quantitative approach, the experiences of postgraduate students were tapped. The findings clearly revealed that information seeking, writing skills and other factors, referring to the supervisor and time management in particular, are recognized as contributory factors that positively or negatively influence postgraduates' thesis accomplishment. Among these, the writing skills dimension was found to be the most difficult part of thesis accomplishment, compared to information seeking and the other factors. This pessimistic indication has implications not only for students but also for supervisors and institutions as a whole.
Abstract: Consider the mass production of HDD arms, where
hundreds of CNC machines are used to manufacture the arms.
Given the overwhelming number of machines and arm models,
constructing a separate control chart for monitoring each HDD
arm model on each machine is not feasible. This research proposes a
strategy to optimize SPC management on the shop floor. The
procedure starts by identifying clusters of machines with
similar manufacturing performance using a clustering technique. A
three-way control chart (I-MR-R) is then applied to each
clustered group of machines. The proposed approach is
advantageous to the manufacturer in terms of not only better
SPC performance but also the quality management paradigm.
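For reference, the individuals (I) chart component of such a scheme derives its control limits from the average moving range. This is a standard SPC sketch with hypothetical measurements, not the paper's full three-way I-MR-R chart.

```python
def i_chart_limits(x):
    """Control limits for an individuals (I) chart from the average moving range."""
    mr = [abs(a - b) for a, b in zip(x[1:], x[:-1])]   # moving ranges of size 2
    mr_bar = sum(mr) / len(mr)
    center = sum(x) / len(x)
    sigma_hat = mr_bar / 1.128    # d2 constant for moving ranges of size 2
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

# Hypothetical arm dimension measurements from one machine cluster
lcl, cl, ucl = i_chart_limits([10.0, 10.2, 9.8, 10.1, 9.9])
```

In the clustered scheme, one such chart per machine group replaces hundreds of per-machine, per-model charts.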
Abstract: Estimation of runoff water quality parameters is required to determine appropriate water quality management options. Various models are used to estimate runoff water quality parameters. However, most models provide event-based estimates of water quality parameters for specific sites. The work presented in this paper describes the development of a model that continuously simulates the accumulation and wash-off of water quality pollutants in a catchment. The model allows estimation of pollutant build-up during dry periods and pollutant wash-off during storm events. The model was developed by integrating two individual models: a rainfall-runoff model and a catchment water quality model. The rainfall-runoff model is based on the time-area runoff estimation method. The model allows users to estimate the time of concentration using a range of established methods. The model also allows estimation of continuing runoff losses using any of the available estimation methods (i.e., constant, linearly varying or exponentially varying). Pollutant build-up in a catchment is represented by one of three pre-defined functions: power, exponential or saturation. Similarly, pollutant wash-off is represented by one of three different functions: power, rating-curve or exponential. The developed runoff water quality model was set up to simulate the build-up and wash-off of total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The application of the model was demonstrated using available runoff and TSS field data from road and roof surfaces in the Gold Coast, Australia. The model provided an excellent representation of the field data, demonstrating the simplicity yet effectiveness of the proposed model.
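The three pre-defined build-up forms and one of the wash-off forms can be written out directly. The functional shapes below are the standard ones named in the abstract, but all parameter values are hypothetical placeholders.

```python
import math

def buildup_power(t, a=5.0, b=0.5):
    """Power build-up: load grows as a * t^b over t dry days."""
    return a * t ** b

def buildup_exponential(t, b_max=10.0, k=0.3):
    """Exponential build-up: load approaches the ceiling b_max."""
    return b_max * (1 - math.exp(-k * t))

def buildup_saturation(t, b_max=10.0, k_half=3.0):
    """Saturation build-up: half of b_max is reached after k_half dry days."""
    return b_max * t / (k_half + t)

def washoff_exponential(b0, r, k=0.1):
    """Exponential wash-off: load remaining after cumulative runoff r (mm)."""
    return b0 * math.exp(-k * r)
```

During simulation, the dry-period build-up value feeds into the wash-off function at the start of each storm event, which is what makes the model continuous rather than event-based.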
Abstract: AAM has been successfully applied to face alignment,
but its performance is very sensitive to the initial values. If the initial
values are far from the global optimum, AAM-based face alignment
is likely to converge to a local minimum. In this paper, we propose a progressive
AAM-based face alignment algorithm which first finds the feature
parameter vector fitting the inner facial feature points of the face and
then localizes the feature points of the whole face using this
information. The proposed progressive AAM-based face alignment
algorithm utilizes the fact that the feature points of the inner part of the
face are less variant and less affected by the background surrounding
the face than those of the outer part (like the chin contour). The
proposed algorithm consists of two stages: a modeling and relation
derivation stage, and a fitting stage. The modeling and relation derivation
stage first constructs two AAM models, the inner face AAM
model and the whole face AAM model, and then derives a relation matrix
between the inner face AAM parameter vector and the whole face
AAM model parameter vector. In the fitting stage, the proposed
algorithm aligns face progressively through two phases. In the first
phase, the proposed algorithm finds the feature parameter vector
fitting the inner face AAM model to a new input face image; in the
second phase, it localizes the whole facial feature points of
the new input face image based on the whole face AAM model, using
the initial parameter vector estimated from the inner feature
parameter vector obtained in the first phase and the relation matrix
obtained in the first stage. Through experiments, it is verified that the
proposed progressive AAM-based face alignment algorithm is more
robust with respect to pose, illumination, and face background than the
conventional basic AAM-based face alignment algorithm.
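The relation matrix linking the inner face and whole face AAM parameter vectors can be estimated by least squares over a training set. The sketch below uses synthetic parameter vectors with hypothetical dimensions, only to show the derivation step.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training set: 200 faces, 8 inner-face AAM parameters each,
# mapped to 12 whole-face parameters by an unknown linear relation.
inner = rng.normal(size=(200, 8))
R_true = rng.normal(size=(8, 12))
whole = inner @ R_true + 0.01 * rng.normal(size=(200, 12))

# Least-squares estimate of the relation matrix R with inner @ R ≈ whole
R, *_ = np.linalg.lstsq(inner, whole, rcond=None)

# At fitting time, the inner-face parameters of a new image give the
# initial whole-face parameter vector for the second-phase fit.
init_whole = inner[0] @ R
```

This initialization is what lets the second-phase whole-face fit start near the global optimum rather than at a generic mean shape.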