Abstract: Motor imagery classification provides an important basis for designing Brain Machine Interfaces (BMI). A BMI captures and decodes brain EEG signals and transforms human thought into action. The ability of individuals to control their EEG through imaginary mental tasks enables them to control devices through the BMI. This paper presents a method to design a four-state BMI using EEG signals recorded from the C3 and C4 locations. Principal features extracted through principal component analysis of the segmented EEG are analyzed using two novel classification algorithms based on the Elman recurrent neural network and the functional link neural network. The performance of both classifiers is evaluated using a particle swarm optimization (PSO) training algorithm; results are also compared with the conventional back-propagation (BP) training algorithm. EEG motor imagery recorded from two subjects is used in the offline analysis. The overall classification performance shows that the BP algorithm achieves a higher average classification accuracy of 93.5%, while the PSO algorithm offers better training time and maximum classification accuracy. The proposed methods promise to provide a useful alternative general procedure for motor imagery classification.
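Principal-component feature extraction of segmented EEG can be sketched as below; the window count, dimensionality, and number of retained components are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def principal_features(segments, n_comp=4):
    """Project segmented EEG windows onto their top principal
    components. `segments` is an (n_windows, n_samples) matrix of
    flattened windows (e.g. from C3/C4); dimensions are illustrative."""
    centered = segments - segments.mean(axis=0)
    # SVD of the centred data gives the principal directions in vt
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_comp].T  # (n_windows, n_comp) feature matrix
```

The reduced feature matrix would then be fed to a classifier such as the Elman or functional link network described above.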
Abstract: Person-to-person information sharing is easily realized
by P2P networks, in which servers are not essential. Information
leakage caused by malicious access to P2P networks has become a
new social issue. To prevent information leakage, it is necessary
to detect and block the traffic of P2P software. Since some P2P
applications can spoof port numbers, it is difficult to detect
their traffic using port numbers alone. Devising effective
countermeasures is even more difficult because their protocols
are not public. In this paper, a method for discriminating
network applications based on the communication characteristics
of application messages, without using port numbers, is proposed.
The proposed method is based on the assumption that there are
rules governing the time intervals between application-layer
message transmissions and the number of packets needed to send
one message. By extracting these rules from network traffic, the
proposed method can discriminate applications without port
numbers.
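The assumed per-flow rule (message inter-transmission intervals plus packets per message) can be turned into simple numeric features; the MTU value and the ceiling-division packet estimate are illustrative assumptions, not the paper's exact definitions.

```python
def message_features(timestamps, msg_sizes, mtu=1500):
    """Features for discriminating applications without port numbers:
    mean and standard deviation of the time intervals between
    application-layer messages, and the average number of packets
    needed per message (size / MTU, rounded up)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = sum(gaps) / len(gaps)
    std_gap = (sum((g - mean_gap) ** 2 for g in gaps) / len(gaps)) ** 0.5
    pkts = [-(-size // mtu) for size in msg_sizes]  # ceiling division
    return mean_gap, std_gap, sum(pkts) / len(pkts)
```

A classifier could then compare such feature tuples against per-application profiles extracted from known traffic.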
Abstract: This research was conducted in the Pua Watershed, located in the Upper Nan River Basin in Nan province, Thailand. The Nan River Basin originates in Nan province and comprises many tributary streams that produce the inflow to the Sirikit Dam, whose large reservoir has a storage capacity of 9,510 million cubic meters. The problems common to most watersheds were found here, i.e., shortage of water supply for consumption and agricultural use, deterioration of water quality, floods and landslides including debris flows, and riverbank instability. The Pua Watershed is one of several small river basins that flow into the Nan River Basin. The watershed covers 404 km2, representing 61.5% of the Pua District, 18.2% of the Upper Nan Basin, and 1.2% of the whole Nan River Basin. The Pua River is the main stream, producing year-round streamflow that supplies the Pua District and provides inflow to the Upper Nan Basin. Its length is approximately 56.3 km, with an average channel slope of 1.9%. A diversion weir, the Pua weir, bounds the plain and mountainous areas; the upstream watershed, with a drainage area of 149 km2, has a very steep riverbed slope of 2.9%, while a mild riverbed slope of 0.2% is found in a 20.3 km river reach downstream of the weir, which is considered a gauged basin. However, the major branch streams of the Pua River, namely the Nam Kwang and Nam Koon, are ungauged catchments with drainage areas of 86 and 35 km2 respectively. These upstream watersheds produce runoff through the three weirs downstream: the Pua, Jao, and Kang weirs, with an average annual runoff of 578 million cubic meters. The flows were analyzed using both statistical data at the Pua weir and data simulated with the hydrologic modeling system (HEC-HMS), which was applied to the remaining ungauged basins, since the Kwang and Koon catchments lack hydrological data, including streamflow and rainfall.
Therefore, the mathematical model HEC-HMS, with Snyder's synthesized and transposed hydrograph methods, was applied to those areas using hydrological parameters calibrated on the area upstream of the Pua weir, where streamflow and rainfall were recorded continuously on a daily basis during 2008-2011. The results showed that the simulated daily streamflows, summed to annual runoff in 2008, 2010, and 2011, fitted the observed annual runoff at the Pua weir using simple linear regression, with satisfactory correlations (R2) of 0.64, 0.62, and 0.59, respectively. The sensitivity of the simulation results stems from the difficulty of using calibrated parameters, i.e., lag time, peaking coefficient, initial losses, and uniform loss rates, and from missing daily observations. These calibrated parameters were then applied to simulate the two ungauged catchments and the downstream catchments.
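The goodness of fit between simulated and observed annual runoff can be expressed as a coefficient of determination; the form below (1 - SS_res/SS_tot) is one common definition of R2, shown as a sketch rather than the study's exact regression procedure, and the runoff values are made-up.

```python
def r_squared(observed, simulated):
    """Coefficient of determination: 1 - SS_res / SS_tot, comparing
    simulated annual runoff against observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```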
Abstract: Multi-user interference (MUI) is the main cause of system deterioration in the Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) system. MUI increases with the number of simultaneous users, resulting in a higher bit-error probability and limiting the maximum number of simultaneous users. In addition, phase induced intensity noise (PIIN), which originates from the spontaneous emission of the broadband source under MUI, severely limits system performance and must be addressed as well. Since MUI is caused by the interference of simultaneous users, reducing its value as far as possible is desirable. In this paper, an extensive study of system performance in terms of MUI and PIIN reduction is presented. Vector Combinatorial (VC) code families are adopted as the signature sequence for the performance analysis, and a comparison with reported codes is performed. The results show that, as the received power increases, the PIIN noise for all the codes increases linearly. They also show that the effect of PIIN can be minimized by increasing the code weight, which preserves an adequate signal-to-noise ratio over the bit-error probability. A comparison between the proposed code and existing codes such as Modified Frequency Hopping (MFH) and Modified Quadratic Congruence (MQC) has been carried out.
Abstract: A new code for the spectral-amplitude coding optical
code-division multiple-access system, called the Random Diagonal
(RD) code, is proposed. This code is constructed using a code
segment and a data segment. One of the important properties of
this code is that the cross correlation at the data segment is
always zero, which means that Phase Induced Intensity Noise
(PIIN) is reduced. For the performance analysis, the effects of
phase-induced intensity noise, shot noise, and thermal noise are
considered simultaneously. Bit-error rate (BER) performance is
compared with the Hadamard and Modified Frequency Hopping (MFH)
codes. It is shown that a system using the new code matrices not
only suppresses PIIN but also allows a larger number of active
users compared with other codes. Simulation results show that,
for point-to-point transmission with three encoded channels, the
RD code has better BER performance than the other codes; at 0 dBm
the PIIN noise is 10^-10 and 10^-11 for RD and MFH respectively.
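The zero cross-correlation property at the data segment can be checked directly on codeword vectors; the two toy codewords below are illustrative only, not the actual RD construction.

```python
def cross_correlation(u, v):
    """In-phase cross-correlation of two 0/1 spectral code sequences."""
    return sum(a * b for a, b in zip(u, v))

# toy two-part codewords: a code segment (first 3 chips) followed by
# a data segment (last 3 chips) whose '1' positions never overlap,
# so the data-segment cross-correlation is zero
code1 = [1, 0, 1] + [1, 0, 0]
code2 = [0, 1, 1] + [0, 1, 0]
```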
Abstract: An adaptive spatial Gaussian mixture model is proposed for clustering-based color image segmentation. A new clustering objective function that incorporates spatial information is introduced in the Bayesian framework. The weighting parameter controlling the importance of spatial information is made adaptive to the image content, in order to augment smoothness toward piecewise-homogeneous regions and diminish the edge-blurring effect; hence the name adaptive spatial finite mixture model. The proposed approach is compared with the spatially variant finite mixture model for pixel labeling. Experimental results on synthetic images and the Berkeley dataset demonstrate that the proposed method is effective in improving segmentation, and it can be employed in various practical image content understanding applications.
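A minimal sketch of the underlying idea is shown below: a two-component Gaussian mixture for grayscale pixels whose responsibilities are smoothed by a neighbour average. The paper's adaptive weighting and Bayesian objective are replaced here by a constant `beta`, so this is an illustration of spatially regularized mixture segmentation, not the proposed model itself.

```python
import numpy as np

def spatial_gmm_segment(img, n_iter=30, beta=0.5):
    """Toy 2-component Gaussian mixture segmentation of a grayscale
    image; pixel responsibilities are blended with their 4-neighbour
    mean (weight `beta`) as a crude stand-in for the spatial prior."""
    x = img.astype(float)
    mu = np.array([x.min(), x.max()], dtype=float)   # init from extremes
    var = np.array([x.var() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: pixel-wise posterior of each component
        lik = np.stack([pi[k] / np.sqrt(2 * np.pi * var[k])
                        * np.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                        for k in range(2)])
        r = lik / lik.sum(axis=0, keepdims=True)
        # spatial smoothing of responsibilities (4-neighbour mean)
        for k in range(2):
            pad = np.pad(r[k], 1, mode='edge')
            nb = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                  pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
            r[k] = (1 - beta) * r[k] + beta * nb
        r /= r.sum(axis=0, keepdims=True)
        # M-step: update means, variances, mixing weights
        for k in range(2):
            w = r[k].sum()
            mu[k] = (r[k] * x).sum() / w
            var[k] = (r[k] * (x - mu[k]) ** 2).sum() / w + 1e-6
            pi[k] = w / x.size
    return r.argmax(axis=0)
```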
Abstract: This paper describes an automatic algorithm to restore
the shape of three-dimensional (3D) left ventricle (LV) models created
from magnetic resonance imaging (MRI) data using a geometry-driven
optimization approach. Our basic premise is to restore the LV shape
such that the LV epicardial surface is smooth after the restoration. A
geometrical measure known as the Minimum Principal Curvature (κ2)
is used to assess the smoothness of the LV. This measure is used to
construct the objective function of a two-step optimization process.
The objective of the optimization is to achieve a smooth epicardial
shape by iterative in-plane translation of the MRI slices.
Quantitatively, this yields a minimum sum of the magnitudes of κ2
where κ2 is negative. A limited memory quasi-Newton algorithm,
L-BFGS-B, is used to solve the optimization problem. We tested our
algorithm on an in vitro theoretical LV model and 10 in vivo
patient-specific models which contain significant motion artifacts. The
results show that our method is able to automatically restore the shape
of LV models back to smoothness without altering the general shape of
the model. The magnitudes of in-plane translations are also consistent
with existing registration techniques and experimental findings.
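The optimization step can be illustrated with a 1-D toy version: slice centres perturbed by in-plane offsets, a roughness objective built from second differences (a crude stand-in for the κ2-based smoothness measure), and SciPy's L-BFGS-B solver. The data and objective here are illustrative, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

def restore_shifts(observed):
    """Find per-slice in-plane translations that make the stack of
    slice centres smooth, by minimising the sum of squared second
    differences of the shifted centre line."""
    def roughness(shifts):
        c = observed + shifts.reshape(observed.shape)
        d2 = c[:-2] - 2 * c[1:-1] + c[2:]   # discrete second difference
        return float((d2 ** 2).sum())
    res = minimize(roughness, np.zeros(observed.size), method="L-BFGS-B")
    return res.x.reshape(observed.shape), res.fun
```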
Abstract: Statistical distributions are used to model and explain
the nature of various types of data sets. Although these
distributions are mostly uni-modal, it is quite common to see
multiple modes in the observed distribution of the underlying
variables, which makes precise modeling unrealistic. Observed
data may lack smoothness not only because of randomness, but also
because of non-randomness, resulting in zigzag curves,
oscillations, humps, etc. The present paper argues that
trigonometric functions, which have not been used in the
probability functions of distributions so far, have the potential
to capture such behavior if incorporated in the distribution
appropriately. A simple distribution involving trigonometric
functions (named the Sinoform Distribution) is illustrated in the
paper with a data set. The paper demonstrates the importance of
trigonometric functions, whose characteristics can make
statistical distributions exotic: it is possible to have multiple
modes, oscillations, and zigzag curves in the density, which may
be suitable for explaining the underlying nature of selected
data sets.
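The idea of a trigonometric density can be sketched with a simple example of the same flavour. The abstract does not give the exact Sinoform form, so the density below is an illustration: f(x) = (1 + a sin(bx)) / 2π on [0, 2π], a proper multi-modal density whenever |a| < 1 and b is an integer.

```python
import math

def sinoform_like_pdf(x, a=0.8, b=3):
    """Illustrative trigonometric density on [0, 2*pi]: the sine term
    adds b humps to an otherwise uniform density. Requires |a| < 1
    (non-negativity) and integer b (so the sine integrates to zero
    and the normalising constant stays 1 / (2*pi))."""
    if not 0.0 <= x <= 2 * math.pi:
        return 0.0
    return (1.0 + a * math.sin(b * x)) / (2 * math.pi)
```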
Abstract: Model Predictive Control (MPC) is increasingly being
proposed for real-time applications and embedded systems.
However, compared to the PID controller, implementation of MPC in
miniaturized devices such as Field Programmable Gate Arrays
(FPGAs) and microcontrollers has historically been very limited,
owing to its implementation complexity and computation time
requirements. At the same time, such embedded technologies have
become an enabler for future manufacturing enterprises as well as
a transformer of organizations and markets. Recently, advances in
microelectronics and software have allowed such techniques to be
implemented in embedded systems. In this work, we take advantage
of these recent advances to deploy one of the most studied and
applied control techniques in industrial engineering.
Specifically, we propose an efficient framework for implementing
Generalized Predictive Control (GPC) on the STM32
microcontroller. The STM32 Keil starter kit, based on a JTAG
interface and the STM32 board, was used to implement the proposed
GPC firmware. Besides the GPC, an anti-windup PID algorithm was
also implemented using the Keil development tools designed for
ARM processor-based microcontroller devices, programming in the C
language. A performance comparison study was carried out between
both firmwares, showing good execution speed and low
computational burden. These results encourage the development of
simple predictive algorithms to be programmed on
industry-standard hardware. The main features of the proposed
framework are illustrated through two examples and compared with
the anti-windup PID controller.
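The anti-windup PID baseline can be sketched in a hardware-independent way. Below is a conditional-integration variant driving a toy first-order plant; the gains, actuator limits, and plant are illustrative, not the STM32 firmware.

```python
def pid_antiwindup(setpoint, y0=0.0, kp=2.0, ki=1.0, kd=0.0,
                   u_min=-1.0, u_max=1.0, dt=0.01, steps=5000):
    """Discrete PID with conditional-integration anti-windup driving
    a first-order plant dy/dt = -y + u. The integrator is frozen
    while the actuator output is saturated, preventing windup."""
    y, integ, prev_e = y0, 0.0, None
    for _ in range(steps):
        e = setpoint - y
        deriv = 0.0 if prev_e is None else (e - prev_e) / dt
        u_raw = kp * e + ki * integ + kd * deriv
        u = min(max(u_raw, u_min), u_max)   # actuator saturation
        if u == u_raw:                      # integrate only when unsaturated
            integ += e * dt
        prev_e = e
        y += (-y + u) * dt                  # Euler step of the plant
    return y
```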
Abstract: This paper proposes the use of Bayesian belief
networks (BBN) as a higher level of health risk assessment for
the dumping site of a lead-battery smelter factory. On the basis
of epidemiological studies, actual hospital attendance records,
and expert experience, the BBN is capable of capturing the
probabilistic relationships between the hazardous substances and
their adverse health effects, and accordingly inferring the
morbidity of those effects. The provision of morbidity rates for
the related diseases is more informative and alleviates the
drawbacks of conventional methods.
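Inferring morbidity from a hazardous-substance node can be illustrated with a two-node belief network; the probabilities below are made-up illustrations, not values from the study.

```python
def morbidity(p_exposure, p_disease_given_exposure, p_disease_given_none):
    """Marginal morbidity of an adverse health effect in a two-node
    Bayesian belief network, Exposure -> Disease, by summing over
    the exposure states."""
    return (p_exposure * p_disease_given_exposure
            + (1.0 - p_exposure) * p_disease_given_none)

# hypothetical numbers: 30% of residents highly exposed to lead,
# disease risk 20% if exposed and 5% otherwise
rate = morbidity(0.3, 0.2, 0.05)
```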
Abstract: Image enhancement is one of the most important and challenging preprocessing steps for almost all applications of image processing. Various methods, such as the median filter and the α-trimmed mean filter, have been suggested; it has been shown that the α-trimmed mean filter is a modification of the median and mean filters. On the other hand, ε-filters have shown excellent performance in suppressing noise: despite their simplicity, they achieve good results. However, the conventional ε-filter is based on a moving average. In this paper, we propose a new ε-filter that utilizes the α-trimmed mean. We argue that this new method gives better outcomes than previous ones, and the experimental results confirm this claim.
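A 1-D sketch of the proposed combination (window size, ε, and α values are illustrative): the ε rule keeps only neighbours within ε of the centre sample, and an α-trimmed mean replaces the plain moving average.

```python
def eps_trimmed_filter(x, eps=2.0, alpha=0.2, win=5):
    """1-D epsilon-filter whose averaging step uses an alpha-trimmed
    mean instead of a plain moving average (a sketch of the idea,
    not the paper's exact formulation)."""
    half = win // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        # epsilon rule: only neighbours close to the centre take part
        close = sorted(v for v in x[lo:hi] if abs(v - x[i]) <= eps)
        t = int(alpha * len(close))          # trim t smallest and t largest
        trimmed = close[t:len(close) - t] or [x[i]]
        out.append(sum(trimmed) / len(trimmed))
    return out
```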
Abstract: Lightning surges cause traveling waves and temporary voltage increases in the transmission line system. Lightning is highly destructive to transmission lines and installed equipment, so it is necessary to study and analyze the temporary voltage increase in order to design and place surge arresters. This analysis characterizes the lightning wave in a 115 kV transmission line in Thailand, using the ATP/EMTP program to model the transmission line and the lightning surge. Because of the limitations of this program, the transmission line geometry and surge parameters must be calculated from the manual for the closest parameter values. Furthermore, for the effects on the surge protector under a lightning strike, the surge arrester model must be correct and standardized according to the Metropolitan Electricity Authority's standard. The calculated results were also compared with real data. The results of the analysis show that the temporary voltage on the struck line rises to 326.59 kV when no surge arrester is installed in the system, whereas it is limited to 182.83 kV when a surge arrester is installed, and the duration of the traveling wave is also reduced. The surge arrester should be placed as near to the transformer as possible. Moreover, it is necessary to know the right placement distance and the size of the surge arrester to prevent the temporary voltage increase effectively.
Abstract: A computer model of Quantum Theory (QT) has been
developed by the author. The major goal of the computer model was
to support and demonstrate as large a scope of QT as possible.
This includes simulations of the major QT (Gedanken) experiments,
such as, for example, the famous double-slit experiment.
Besides the anticipated difficulties with (1) transforming
exacting mathematics into a computer program, two further types
of problems showed up: (2) areas where QT provides a complete
mathematical formalism, but where, for concrete applications, the
equations are not solvable at all, or only with extremely high
effort; (3) QT rules which are formulated in natural language and
which do not seem to be translatable into precise mathematical
expressions, nor into a computer program.
The paper lists problems in all three categories and also
describes the possible solutions or workarounds developed for the
computer model.
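One of the simulated experiments, the double slit, reduces to a two-path amplitude superposition in the idealized case (point slits, monochromatic source) — the kind of "exacting mathematics" a QT simulator must turn into code. The geometry values below are illustrative.

```python
import cmath
import math

def double_slit_intensity(x, d=1e-3, L=1.0, lam=500e-9):
    """Screen intensity for an idealised double slit: add the complex
    amplitudes for the two path lengths and square the magnitude.
    x is the screen position; slit spacing d, screen distance L, and
    wavelength lam are illustrative values in metres."""
    r1 = math.hypot(L, x - d / 2)   # path length via slit 1
    r2 = math.hypot(L, x + d / 2)   # path length via slit 2
    amp = cmath.exp(2j * math.pi * r1 / lam) + cmath.exp(2j * math.pi * r2 / lam)
    return abs(amp) ** 2
```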
Abstract: Face recognition is a technique to automatically
identify or verify individuals. It receives great attention in
identification, authentication, security, and many other
applications. Diverse methods have been proposed for this
purpose, and many comparative studies have been performed;
however, researchers have not reached a unified conclusion. In
this paper, we report an extensive quantitative accuracy analysis
of four of the most widely used face recognition algorithms:
Principal Component Analysis (PCA), Independent Component
Analysis (ICA), Linear Discriminant Analysis (LDA), and Support
Vector Machine (SVM), using the AT&T, Sheffield, and Bangladeshi
people face databases under diverse conditions such as
illumination, alignment, and pose variations.
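The PCA pipeline (the classical "eigenfaces" approach) can be sketched with a nearest-neighbour match in the reduced space; the data below are synthetic stand-ins for face vectors, while a real benchmark would use the databases named above.

```python
import numpy as np

def eigenface_fit(train, n_comp=2):
    """PCA on flattened face vectors: centre, then keep the top
    principal components from the SVD ('eigenfaces')."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:n_comp]

def eigenface_predict(mean, comps, train, labels, query):
    """Project onto the eigenface basis and return the label of the
    nearest training face in that space."""
    proj = (train - mean) @ comps.T
    q = (query - mean) @ comps.T
    return labels[int(np.argmin(np.linalg.norm(proj - q, axis=1)))]
```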
Abstract: This research proposes a Preemptive Possibilistic
Linear Programming (PPLP) approach for solving the multiobjective
Aggregate Production Planning (APP) problem with interval demand
and imprecise unit price and related operating costs. The
proposed approach attempts to maximize profit and minimize
workforce changes. It transforms the total-profit objective,
which carries imprecise information, into three crisp objective
functions: maximizing the most possible value of profit,
minimizing the risk of obtaining a lower profit, and maximizing
the opportunity of obtaining a higher profit. The workforce-level
change objective is converted likewise. The problem is then
solved according to objective priorities, which is easier than
solving the multiobjective problem simultaneously as in existing
approaches. The possible range of the interval demand is also
used to increase the flexibility of obtaining a better production
plan. A practical application to an electronics company is
presented to show the effectiveness of the proposed model.
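The "solve by objective priorities" idea can be shown on a deliberately tiny discrete plan space: the first priority (profit at the most possible price) is maximized, and ties are broken by the second priority (workforce change). The capacity rule, price, and cost below are hypothetical; the paper solves a continuous possibilistic LP instead.

```python
from itertools import product

def preemptive_plan(demand_lo, demand_hi, price_most, cost, wf0):
    """Lexicographic (preemptive) search over small integer plans:
    priority 1 maximises profit at the most possible unit price,
    priority 2 minimises the workforce change from level wf0."""
    best = None
    for prod, wf in product(range(demand_lo, demand_hi + 1), range(0, 11)):
        if prod > wf * 10:            # capacity: 10 units per worker
            continue
        profit = price_most * prod - cost * wf
        key = (profit, -abs(wf - wf0))   # tuple order encodes priority
        if best is None or key > best[0]:
            best = (key, (prod, wf))
    return best[1]
```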
Abstract: The purpose of this study is to analyze the Green IT industry in major developed countries and to suggest overall directions for the IT-Energy convergence industry. Recently, the IT industry has been blamed for problems such as environmental pollution, energy exhaustion, and high energy consumption; Green IT has therefore gained attention as a solution to these problems. However, since this convergence area is at an early stage, there are only a few studies of the IT-Energy convergence industry. Accordingly, this study examines the major developed countries in terms of institutional arrangements, resources, markets, and companies, based on Van de Ven's (1999) social system framework, which shows the relationships among the key components of industrial infrastructure. Finally, directions for future studies of convergence between the IT and Energy industries are proposed.
Abstract: The number of cross-border students between Hong Kong
and mainland China is increasing, owing to the rise in
cross-border marriages between Hong Kong and mainland China.
Since the Hong Kong education system differs from that of
mainland China, and since all children who have the right of
abode in Hong Kong are entitled to free education there, many
cross-border families prefer to send their children back to Hong
Kong for their education.
Abstract: Neighborhood Rough Sets (NRS) have been proven to be
an efficient tool for heterogeneous attribute reduction. However,
most research focuses on complete and noiseless data. In fact,
most information systems are noisy, i.e., filled with incomplete
and inconsistent data. In this paper, we introduce a generalized
neighborhood rough set model, called VPTNRS, to deal with the
problem of heterogeneous attribute reduction in noisy systems. We
generalize the classical NRS model with a tolerance neighborhood
relation and probability theory. Furthermore, we use the
neighborhood dependency to evaluate the significance of a subset
of heterogeneous attributes and construct a forward greedy
algorithm for attribute reduction based on it. Experimental
results show that the model deals with noisy data efficiently.
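The dependency measure that drives the forward greedy search can be sketched for the classical NRS case: the fraction of samples whose entire δ-neighbourhood shares their decision class. The paper's VPTNRS relaxes this strict rule with tolerance and probabilistic thresholds; the version below is the unrelaxed baseline.

```python
import numpy as np

def neighborhood_dependency(X, y, delta=0.3):
    """Dependency of decision y on attributes X under a delta-
    neighbourhood relation: the proportion of samples belonging to
    the lower approximation (all neighbours share the sample's
    class)."""
    n = len(X)
    consistent = 0
    for i in range(n):
        nb = np.linalg.norm(X - X[i], axis=1) <= delta
        if np.all(y[nb] == y[i]):
            consistent += 1
    return consistent / n
```

A greedy reduction would add, at each step, the attribute whose inclusion raises this dependency the most.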
Abstract: With the fast evolution of digital data exchange, information security becomes very important in data storage and transmission. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. In this paper, we analyze the Advanced Encryption Standard (AES) and add a key stream generator (A5/1, W7) to AES to improve encryption performance, mainly for images characterised by reduced entropy. Both techniques have been implemented for experimental purposes, and detailed results in terms of security analysis and implementation are given. A comparative study with traditional encryption algorithms shows the superiority of the modified algorithm.
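The entropy gain from keystream mixing can be illustrated with a single maximal-length LFSR. The default taps below match A5/1's R1 register, but the real A5/1 combines three irregularly clocked registers, so this is only a sketch; the seed and "image" data are illustrative.

```python
import math
from collections import Counter

def lfsr_stream(seed, n_bytes, taps=(18, 17, 16, 13), width=19):
    """Keystream bytes from a Fibonacci LFSR: XOR the tap bits,
    shift the feedback in at the LSB, and pack 8 bits per byte."""
    state = seed & ((1 << width) - 1)
    out = bytearray()
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):
            bit = 0
            for t in taps:
                bit ^= (state >> t) & 1
            state = ((state << 1) | bit) & ((1 << width) - 1)
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def entropy_bits_per_byte(data):
    """Shannon entropy of the byte histogram, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# a 'reduced entropy image': 4096 identical bytes
plain = bytes([7]) * 4096
key = lfsr_stream(0x5A5A5, 4096)
cipher = bytes(p ^ k for p, k in zip(plain, key))
```

XORing the low-entropy plaintext with the keystream lifts the byte-histogram entropy from 0 toward the 8-bit maximum, which is the effect the modified cipher targets for low-entropy images.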
Abstract: Acute kidney injury (AKI) is a new worldwide public
health problem. Diagnosing this disease using creatinine is still
problematic in clinical practice. Therefore, the measurement of
biomarkers responsible for AKI has received much attention in the
past couple of years. The cytokine interleukin-18 (IL-18) has
been reported as one of the early biomarkers for AKI. The most
commonly used method to detect this biomarker is an immunoassay.
This study used a planar platform to perform an immunoassay with
fluorescence detection. Anti-IL-18 antibody was immobilized onto
a microscope slide using a covalent binding method. Make-up
samples were diluted to concentrations between 10 and 1000 pg/ml
to create a calibration curve. The precision of the system was
determined using the coefficient of variability (CV), which was
found to be less than 10%. The performance of this immunoassay
system was compared with measurements from ELISA.
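The precision figure can be computed directly from replicate readings; the values below are made-up replicates, not data from the study.

```python
def coefficient_of_variability(values):
    """CV (%) = 100 * sample standard deviation / mean, the usual
    precision measure for replicate assay readings."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return 100.0 * sd / mean
```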