Abstract: Appropriate ventilation in a classroom helps enhance the air exchange rate and student concentration. This study focuses on the effects of fenestration in a four-story school building by performing numerical simulations of the building that consider the indoor and outdoor environments simultaneously. The wind profile function embedded in the PHOENICS code was set as the inlet boundary condition for a suburban environment. Sixteen fenestration combinations were compared in a classroom containing thirty seats. This study evaluates the mean age of air (AGE) and the airflow pattern of a classroom on different floors. Considering both the wind profile and fenestration effects, the airflow on higher floors is channeled toward the area near the ceiling of the room and causes an older mean age of air in the breathing zone. The results of this study serve as a useful guide for enhancing natural ventilation in a typical school building.
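The mean age of air used above is commonly obtained from a tracer-gas step-down (decay) test as AGE = ∫ C(t)/C(0) dt. A minimal sketch of that post-processing step, using illustrative exponential-decay data rather than results from the study:

```python
import math

def mean_age_of_air(times, concentrations):
    """Local mean age of air from a tracer-gas decay test:
    AGE = integral of C(t)/C(0) dt, evaluated with the trapezoidal rule."""
    c0 = concentrations[0]
    age = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        age += 0.5 * (concentrations[i - 1] + concentrations[i]) / c0 * dt
    return age

# Illustrative decay: C(t) = C0 * exp(-t / tau) gives AGE close to tau.
tau = 600.0  # nominal time constant in seconds (made up for this sketch)
times = [float(i) for i in range(20000)]
conc = [math.exp(-t / tau) for t in times]
age = mean_age_of_air(times, conc)
```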
Abstract: Much work has been done on predicting the fault proneness of software systems. However, the severity of the faults present in a developed system matters more than their number, since major faults matter most to a developer and need immediate attention. In this paper, we attempt to predict the level of impact of the faults present in software systems. Neuro-fuzzy-based predictor models are applied to NASA's public-domain defect dataset, coded in the C programming language. Correlation-based Feature Selection (CFS) evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them; CFS is therefore used to select the metrics most highly correlated with the fault-severity level. The results are compared with the prediction results of Logistic Model Trees (LMT), earlier quoted as the best technique in [17], and are recorded in terms of accuracy, Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). They show that the neuro-fuzzy-based model provides relatively better prediction accuracy than the other models and can therefore be used to model the level of impact of faults in function-based systems.
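The CFS criterion mentioned above scores a candidate subset with the standard merit heuristic, merit = k·r̄_cf / sqrt(k + k(k−1)·r̄_ff). A minimal sketch with illustrative metric data (not the NASA dataset):

```python
import math

def pearson(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cfs_merit(features, target):
    """CFS merit: k * avg|r_cf| / sqrt(k + k*(k-1) * avg|r_ff|), where
    r_cf are feature-class and r_ff feature-feature correlations."""
    k = len(features)
    r_cf = sum(abs(pearson(f, target)) for f in features) / k
    if k == 1:
        return r_cf
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    r_ff = sum(abs(pearson(features[i], features[j])) for i, j in pairs) / len(pairs)
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)
```

Redundant features raise the denominator, so a subset of mutually correlated metrics scores lower than an equally predictive but diverse one.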
Abstract: This paper presents a new fingerprint coding technique
based on contourlet transform and multistage vector quantization.
Wavelets have shown their ability to represent natural images that
contain smooth areas separated by edges. However, wavelets
cannot efficiently exploit the fact that the edges usually
found in fingerprints are smooth curves. This issue is addressed by
directional transforms, known as contourlets, which have the
property of preserving edges. The contourlet transform is a new
extension to the wavelet transform in two dimensions using
nonseparable and directional filter banks. The computation and
storage requirements are the major difficulty in implementing a
vector quantizer. In the full-search algorithm, the computation and
storage complexity is an exponential function of the number of bits
used in quantizing each frame of spectral information. The storage
requirement of multistage vector quantization is lower than that of
full-search vector quantization. The coefficients of the contourlet
transform are quantized by multistage vector quantization, and the
quantized coefficients are encoded by Huffman coding. The results
obtained are tabulated and compared with those of existing
wavelet-based techniques.
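The final entropy-coding stage above (Huffman coding of the quantized coefficients) can be sketched as follows; the symbol stream here is illustrative, standing in for quantizer indices:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table (symbol -> bit string) from symbol counts."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single symbol gets code "0"
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# Frequent quantizer indices receive shorter codes.
stream = [3, 3, 3, 3, 1, 1, 2, 0]
codes = huffman_code(stream)
encoded = "".join(codes[s] for s in stream)
```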
Abstract: This research project studies the managerial styles of modern Thai executives. A thorough understanding will lead to continuous improvement and efficient performance of Thai business organizations. Regarding managerial skills, Thai executives focus heavily upon human skills, and the negotiator role is the most emphasized in their management. In addition, Thai executives pay most attention to the fundamental management principles of Harmony and Unity of Direction in their organizations. Moreover, the management techniques of teamwork and career planning are their main concern. Finally, Thai executives wish to enhance their firms' image and employees' morale by conducting ethical and socially responsible activities. The major tactic deployed to stimulate employees' ethical behavior and mindset is the development of a Code of Ethics.
Abstract: The design of an active leg orthosis for tumble protection is proposed in this paper. The orthosis is intended to help the elderly or invalids regain balance when they fall unexpectedly. We observe the balance-recovery motion of healthy young people and identify how it differs from that of the elderly or invalids. First, a physical model of the leg is established; the leg motions are considered to be achieved through four joints (phalanx stem, ankle, knee, and hip) and five links (phalanges, talus, tibia, femur, and hip bone). To formulate the dynamic equations, coordinates that clearly describe positions in 3D space are first defined in accordance with the movement of the human leg, and the kinematics and dynamics of the leg movement are then formulated based on robotics. To assist the elderly and invalids in avoiding tumbles, the posture variations of the unbalanced and balance-recovery motions are recorded by a motion-capture system, and the resulting trajectory is taken as the desired one. We then calculate the force and moment at each joint from the leg motion model using MATLAB code. The results provide primary information for the design of an active leg orthosis for tumble protection.
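The joint force/moment computation above uses a four-joint, five-link model. As a drastically simplified illustration of the underlying inverse-dynamics step, the torque at one joint of a single rigid link swinging in a vertical plane can be sketched as follows (all symbols and values are illustrative, not the paper's model):

```python
import math

def joint_torque(theta, theta_ddot, m, l_c, I_c, g=9.81):
    """Torque at the pivot of a single rigid link (e.g. a shank about the
    knee); theta is measured from the downward vertical, l_c is the
    distance from pivot to center of mass, I_c the centroidal inertia.

    tau = (I_c + m*l_c**2) * theta_ddot + m*g*l_c*sin(theta)
    """
    return (I_c + m * l_c ** 2) * theta_ddot + m * g * l_c * math.sin(theta)
```

In the full model, the motion-capture trajectory supplies theta and theta_ddot for every joint, and the recursion couples the links.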
Abstract: In this paper, a new encoding algorithm for the spectral envelope, based on NLMS, is proposed for G.729.1 in VoIP. In the TDAC part of G.729.1, the spectral envelope and the MDCT coefficients extracted from the weighted CELP coding error (lower band) and the higher-band input signal are encoded. To reduce the bits allocated to spectral envelope coding, a new quantization algorithm based on NLMS is proposed, and the saved bits are used to enhance sound quality. The performance of the proposed algorithm is evaluated by sound quality and bit reduction rates in clean and frame-loss conditions.
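The NLMS update underlying the proposed quantizer is the standard normalized-LMS rule, w ← w + μ·e·u / (ε + ||u||²). A generic one-step-predictor sketch (not the G.729.1 bit allocation itself):

```python
import math

def nlms_predict(x, order=4, mu=0.5, eps=1e-6):
    """One-step NLMS linear predictor over a sample sequence x.
    Returns the per-sample prediction errors; the weight update is the
    normalized LMS rule  w += mu * e * u / (eps + ||u||^2)."""
    w = [0.0] * order
    errors = []
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]          # most recent samples first
        y = sum(wi * ui for wi, ui in zip(w, u))
        e = x[n] - y
        norm = eps + sum(ui * ui for ui in u)
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
        errors.append(e)
    return errors

# A sinusoid is linearly predictable, so the error shrinks as w adapts;
# in envelope coding it is the small residual that needs fewer bits.
x = [math.sin(0.3 * n) for n in range(400)]
err = nlms_predict(x)
```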
Abstract: In the present study, the lattice Boltzmann method (LBM) is applied to simulate natural convection in an inclined open-ended cavity. The horizontal walls of the cavity are insulated, while the west wall is maintained at a uniform temperature higher than the ambient. The Prandtl number is fixed at 0.71 (air), while the Rayleigh number and the aspect ratio of the cavity are varied in the ranges of 10^3 to 10^4 and 1-4, respectively. The numerical code is validated against previously published results for open-ended cavities, and results for an inclined open-ended cavity at various rotation angles are then presented. The results show that as the aspect ratio increases, the average Nusselt number on the hot wall decreases for all rotation angles. When the direction of gravitational acceleration is opposite to the standard gravity direction, convective heat transfer behaves much like conduction.
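The thermal LBM used in such studies is two-dimensional (typically a D2Q9/D2Q5 lattice with a Boussinesq buoyancy term). As a much-reduced illustration of the collide-and-stream structure only, a one-dimensional D1Q2 diffusion sketch with periodic boundaries (an assumption of this sketch, not the paper's scheme):

```python
def lbm_diffusion_step(f_plus, f_minus, tau):
    """One collide-and-stream step of a D1Q2 lattice Boltzmann scheme for
    pure diffusion (BGK collision; equilibrium splits the density evenly)."""
    n = len(f_plus)
    rho = [a + b for a, b in zip(f_plus, f_minus)]
    # BGK collision toward the equilibrium feq = rho / 2 in both directions
    fp = [a + (r / 2 - a) / tau for a, r in zip(f_plus, rho)]
    fm = [b + (r / 2 - b) / tau for b, r in zip(f_minus, rho)]
    # Streaming with periodic boundaries: f_plus moves right, f_minus left
    f_plus_new = [fp[(i - 1) % n] for i in range(n)]
    f_minus_new = [fm[(i + 1) % n] for i in range(n)]
    return f_plus_new, f_minus_new
```

Collision relaxes each population toward local equilibrium; streaming shifts populations to neighboring nodes. The 2D thermal model adds more velocities and a forcing term but keeps this two-phase structure.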
Abstract: This paper is about hiding the RFID tag identifier (ID) using a handheld device such as a cellular phone. By modifying the tag IDs of objects periodically or manually, using a cellular phone with a built-in RFID reader chip or an external RFID reader device, we can prevent other people from gathering object-related information by querying an information server (such as an EPC IS) with a tag ID, from deriving information from the tag ID's code structure, or from tracking the location of the objects and their owner. In this paper, we use a cryptographic algorithm for modifying and restoring the RFID tag ID, so that for one original tag ID there are several different temporary tag IDs, changed periodically.
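The paper's specific cryptographic algorithm is not reproduced here; as one concrete possibility, a keyed pseudonym scheme can be sketched with an HMAC, where only the key holder can regenerate (and thus recognize and restore) the temporary IDs. The 96-bit truncation and lookup-based restoration are assumptions of this sketch:

```python
import hmac
import hashlib

def temporary_tag_id(original_id: bytes, key: bytes, period: int) -> bytes:
    """Derive a 96-bit temporary tag ID from the original ID, a secret key,
    and a period counter; outsiders without the key see unlinkable values."""
    mac = hmac.new(key, original_id + period.to_bytes(8, "big"), hashlib.sha256)
    return mac.digest()[:12]  # truncate to a 96-bit, EPC-sized ID

def restore_original_id(candidate_ids, temp_id, key, period):
    """Restore by re-deriving pseudonyms for the IDs the owner manages."""
    for oid in candidate_ids:
        if hmac.compare_digest(temporary_tag_id(oid, key, period), temp_id):
            return oid
    return None
```

Because the HMAC output changes with the period counter, successive temporary IDs for the same tag are unlinkable to an eavesdropper, which matches the periodic-modification goal described above.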
Abstract: The purpose of this study is to introduce a new interface program for calculating dose distributions with the Monte Carlo method in complex heterogeneous systems, such as organs or tissues, in proton therapy. The interface program was developed in MATLAB and includes a friendly graphical user interface with several tools, such as image-property adjustment and results display. The quadtree decomposition technique was used as an image segmentation algorithm to create optimum geometries from computed tomography (CT) images for proton-beam dose calculations. The result of this technique is a set of non-overlapping squares of different sizes in every image. In this way, the resolution of the image segmentation is high enough in and near heterogeneous areas to preserve the precision of the dose calculations, and low enough in homogeneous areas to directly reduce the number of cells. Furthermore, a cell-reduction algorithm can be used to combine neighboring cells of the same material. This method has been validated in two ways: first, against experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) at Tohoku University, and second, against data based on the polybinary tissue calibration method, performed at CYRIC. These results are presented in this paper. The program can read the output file of the Monte Carlo code while a region of interest is selected manually, and it plots the dose distribution of the proton beam superimposed onto the CT images.
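Quadtree decomposition, as described above, recursively splits a square block whenever it is too heterogeneous. A minimal sketch using an intensity-range splitting criterion (the study's actual criterion and CT preprocessing may differ):

```python
def quadtree(img, x, y, size, threshold, leaves):
    """Recursively split the square block at (x, y) with side `size` until
    the intensity range within it is <= `threshold`; collect the resulting
    non-overlapping blocks as (x, y, size) tuples in `leaves`."""
    block = [img[j][x:x + size] for j in range(y, y + size)]
    vals = [v for row in block for v in row]
    if size == 1 or max(vals) - min(vals) <= threshold:
        leaves.append((x, y, size))
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree(img, x + dx, y + dy, h, threshold, leaves)
```

Uniform (homogeneous) regions stay as one large cell, while a block containing an interface keeps splitting down to single pixels, which is exactly the resolution behavior the abstract describes.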
Abstract: Motor imagery classification provides an important basis for designing Brain Machine Interfaces (BMIs). A BMI captures and decodes brain EEG signals and transforms human thought into actions. The ability of an individual to control his EEG through imaginary mental tasks enables him to control devices through the BMI. This paper presents a method to design a four-state BMI using EEG signals recorded from the C3 and C4 locations. Principal features extracted through principal component analysis of the segmented EEG are analyzed using two novel classification algorithms based on an Elman recurrent neural network and a functional link neural network. The performance of both classifiers is evaluated using a particle swarm optimization (PSO) training algorithm, and the results are compared with the conventional back-propagation (BP) training algorithm. EEG motor imagery recorded from two subjects is used in the offline analysis. From the overall classification performance, it is observed that the BP algorithm achieves a higher average classification rate of 93.5%, while the PSO algorithm achieves better training time and maximum classification rate. The proposed method promises to provide a useful alternative general procedure for motor imagery classification.
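The principal-feature extraction step can be sketched with plain power iteration on the sample covariance matrix. This is a generic PCA illustration (segmented EEG epochs would be the rows of `data`), not the authors' pipeline:

```python
import math

def first_principal_component(data, iters=200):
    """First PCA direction via power iteration on the covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[k] for row in data) / n for k in range(d)]
    centered = [[row[k] - means[k] for k in range(d)] for row in data]
    # Sample covariance matrix (d x d)
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v  # unit-length direction of maximum variance

def project(data, v):
    """Principal features: projection of each sample onto the component."""
    return [sum(a * b for a, b in zip(row, v)) for row in data]

# Toy data lying along the (1, 1) direction
v = first_principal_component([[1, 1], [2, 2], [3, 3], [-1, -1]])
```

Further components follow by deflating the covariance matrix and repeating; the resulting low-dimensional features are what the neural classifiers consume.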
Abstract: Multi-user interference (MUI) is the main cause of system deterioration in Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) systems. MUI increases with the number of simultaneous users, resulting in a higher bit-error probability and limiting the maximum number of simultaneous users. In addition, phase-induced intensity noise (PIIN), which originates from the spontaneous emission of the broadband source interacting with MUI, also severely limits system performance and must be addressed. Since MUI is caused by interference among simultaneous users, keeping the MUI as small as possible is desirable. In this paper, an extensive study of system performance in terms of MUI and PIIN reduction is carried out. Vector Combinatorial (VC) code families are adopted as the signature sequence for the performance analysis, and a comparison with reported codes is performed. The results show that as the received power increases, the PIIN for all codes increases linearly. They also show that the effect of PIIN can be minimized by increasing the code weight, which preserves an adequate signal-to-noise ratio and bit-error probability. A comparative study between the proposed code and existing codes such as Modified Frequency Hopping (MFH) and Modified Quadratic Congruence (MQC) has been carried out.
Abstract: A new code for spectral-amplitude coding optical code-division multiple-access systems, called the Random Diagonal (RD) code, is proposed. This code is constructed using a code segment and a data segment. One of its important properties is that the cross-correlation at the data segment is always zero, which means that phase-induced intensity noise (PIIN) is reduced. For the performance analysis, the effects of phase-induced intensity noise, shot noise, and thermal noise are considered simultaneously. Bit-error rate (BER) performance is compared with that of the Hadamard and Modified Frequency Hopping (MFH) codes. It is shown that a system using the new code matrices not only suppresses PIIN but also supports a larger number of active users compared with other codes. Simulation results show that, for point-to-point transmission with three encoded channels, the RD code has better BER performance than the other codes; it is also found that at 0 dBm the PIIN is 10^-10 and 10^-11 for the RD and MFH codes, respectively.
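BER figures in SAC-OCDMA analyses are typically obtained from the photocurrent signal-to-noise ratio through a Gaussian approximation, BER = (1/2)·erfc(sqrt(SNR/8)). A one-line sketch of that final step (the full SNR expression combining PIIN, shot, and thermal noise terms is what the paper derives; it is not reproduced here):

```python
import math

def ber_from_snr(snr):
    """Gaussian-approximation bit-error rate commonly used in SAC-OCDMA
    analysis: BER = 0.5 * erfc(sqrt(SNR / 8))."""
    return 0.5 * math.erfc(math.sqrt(snr / 8.0))
```

The monotone mapping makes the comparison in the abstract concrete: a code that keeps PIIN lower yields a higher SNR and therefore a lower BER at the same received power.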
Abstract: In this work, we develop the concept of supercompression, i.e., compression above the compression standard used; in this context, the two compression rates are multiplied. Supercompression is based on super-resolution: it is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. If the compression ratio is very high, a convolutive mask inside the decoder restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. Specifically, the mentioned mask is coded inside the texture memory of a GPGPU.
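The edge-restoring convolutive mask in the decoder is, in essence, a sharpening kernel convolved with the decompressed image. A CPU-side sketch (the paper's actual mask coefficients and its GPGPU texture-memory implementation are not reproduced; the classic 3x3 sharpening kernel below is an illustrative stand-in):

```python
def convolve2d(img, kernel):
    """Valid-region 2D convolution (correlation form) with a 3x3 kernel."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            s = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                    for j in range(3) for i in range(3))
            row.append(s)
        out.append(row)
    return out

# Identity plus a Laplacian term: flat areas pass through, edges are boosted.
SHARPEN = [[0, -1, 0],
           [-1, 5, -1],
           [0, -1, 0]]
```

Because the kernel coefficients sum to one, homogeneous regions are left unchanged while intensity transitions blurred by aggressive compression are amplified.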
Abstract: In this work, a new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. The coder is a continuous-tone still-image compression system that combines lossy and lossless compression using finite-arithmetic reversible transforms: both the color-space transform and the wavelet transform are reversible. The transformed coefficients are coded based on a subdivision into smaller components (CFDS), similar to bit-importance codification. The subcomponents so obtained are reordered by a highly configurable alignment system that, depending on the application, makes it possible to reconfigure the elements of the image and to obtain different importance levels from which the bit stream is generated. The subcomponents of each importance level are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream by itself encodes a compressed still image; however, applying a packing system to the bit stream after the VBLm yields a final, highly scalable bit stream consisting of a basic image level and one or several improvement levels.
Abstract: This paper is intended to examine and apply ethics, i.e., moral principles and rules, in the marketing environment. The ethical behavior of selected pharmaceutical companies in the Slovak Republic is the object of our research. The aim of our research is to determine how the ethical behavior of the pharmaceutical industry in Slovakia is perceived by medical representatives in comparison with the assessments of doctors and patients. The experimental sample included 90 participants divided into three groups: medical representatives of pharmaceutical companies (N=30), doctors (N=30), and patients (N=30). The research method was a questionnaire of ethical behavior, created by us, that covers the individual areas included in the Code of Ethics of the pharmaceutical industry in Slovakia. The results showed an influence of professional status, but not gender, on the perception of ethical behavior; perception was higher among patients than among doctors and medical representatives.
Abstract: The present work presents a method for calculating the ductility of rectangular beam sections, considering the nonlinear behavior of concrete and steel. This calculation procedure allows us to trace the curvature of the section as a function of the bending moment and, consequently, to deduce the ductility. It also allows us to study the various parameters that affect the value of the ductility. A comparison was made of the effect on the ductility of the maximum tension-steel ratios adopted by the ACI [1], EC8 [2], and RPA [3] codes. It was concluded that the maximum steel ratios permitted by the ACI [1] and RPA [3] codes have almost similar effects on the ductility and are too high; the ductility mobilized in the event of an earthquake is therefore low, unlike with code EC8 [2]. Recommendations have been made in this direction.
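Curvature ductility, as used above, is the ratio of ultimate to yield curvature, and under the usual plane-sections assumption each curvature is an extreme-fiber strain divided by the corresponding neutral-axis depth. A minimal sketch with purely illustrative section values (not results from the paper):

```python
def curvature(strain, neutral_axis_depth):
    """Plane-sections curvature: extreme-fiber strain / neutral-axis depth."""
    return strain / neutral_axis_depth

def curvature_ductility(phi_ultimate, phi_yield):
    """Ductility factor mu = phi_u / phi_y."""
    return phi_ultimate / phi_yield

# Illustrative numbers only: strains and neutral-axis depths in consistent
# units (strain dimensionless, depth in meters).
phi_y = curvature(0.002, 0.25)   # at yield: strain 0.002, c = 0.25 m
phi_u = curvature(0.0035, 0.08)  # at ultimate: strain 0.0035, c = 0.08 m
mu = curvature_ductility(phi_u, phi_y)
```

A higher tension-steel ratio raises the neutral-axis depth at ultimate, reducing phi_u and hence mu, which is the mechanism behind the codes' limits discussed above.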
Abstract: Vector quantization is a powerful tool for speech coding applications. This paper deals with LPC coding of speech signals using a new technique called Multi Switched Split Vector Quantization (MSSVQ), a hybrid of multistage, switched, and split vector quantization techniques. The spectral distortion performance, computational complexity, and memory requirements of MSSVQ are compared to those of split vector quantization (SVQ), multistage vector quantization (MSVQ), and switched split vector quantization (SSVQ). The results show that MSSVQ has better spectral distortion performance, lower computational complexity, and lower memory requirements than all of the above product-code vector quantization techniques. Computational complexity is measured in floating-point operations (flops), and memory requirements are measured in floats.
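The spectral distortion figure of merit used above is conventionally the RMS difference, in dB, between the original and quantized LPC power spectra, averaged over frames. A per-frame sketch (the spectra here are illustrative):

```python
import math

def spectral_distortion(p_ref, p_quant):
    """Spectral distortion (dB) between two power spectra sampled on the
    same frequency grid: RMS of the log-spectral difference."""
    diffs = [10 * math.log10(a) - 10 * math.log10(b)
             for a, b in zip(p_ref, p_quant)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

In LPC coding practice, the per-frame values are averaged over a test set, and the fraction of outlier frames above 2 dB and 4 dB is also reported.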
Abstract: Obtaining labeled data in supervised learning is often difficult and expensive, so a learning algorithm trained on a small amount of data tends to overfit. As a result, some researchers have focused on using unlabeled data, which need not follow the same generative distribution as the labeled data, to construct high-level features that improve performance on supervised learning tasks. In this paper, we investigate the impact of the relationship between unlabeled and labeled data on classification performance. Specifically, we apply unlabeled datasets with different degrees of relation to the labeled data to a handwritten-digit classification task based on the MNIST dataset. Our experimental results show that the higher the degree of relation between unlabeled and labeled data, the better the classification performance. Although unlabeled data drawn from a generative distribution completely different from that of the labeled data yields the lowest classification performance, we still achieve high classification performance. This expands the applicability of supervised learning algorithms through unsupervised learning.
Abstract: An analysis of the elastic scattering of protons on 6,7Li nuclei has been performed in the framework of the optical model at beam energies up to 50 MeV. Differential cross sections for 6,7Li + p scattering were measured over the proton laboratory-energy range from 400 to 1050 keV. The elastic-scattering 6,7Li + p data at different proton incident energies have been analyzed using the single-folding model. In each case, the real potential obtained from the folding model was supplemented by a phenomenological imaginary potential; during the fitting process, the real potential was normalized and the imaginary potential optimized. The normalization factor NR is calculated to lie in the range between 0.70 and 0.84.
Abstract: Higher-order statistics (HOS), also known as cumulants and cross moments, and their frequency-domain counterparts, known as polyspectra, have emerged as a powerful signal-processing tool for the synthesis and analysis of signals and systems. Algorithms used for the computation of cross moments are computationally intensive and require high computational speed for real-time applications. For efficiency and high speed, it is often advantageous to realize computation-intensive algorithms in hardware. A promising solution that combines high flexibility with the speed of traditional hardware is the Field Programmable Gate Array (FPGA). In this paper, we present an FPGA-based parallel architecture for the computation of third-order cross moments. The proposed design is coded in the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) and functionally verified by implementing it on a Xilinx Spartan-3 XC3S2000FG900-4 FPGA. Implementation results are presented, showing that the proposed design can operate at a maximum frequency of 86.618 MHz.
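The VHDL design itself is not reproduced here, but the quantity the architecture computes, the third-order cross moment of three sequences at lags (tau1, tau2), can be sketched as a software reference model (non-negative lags assumed, biased normalization by the full record length):

```python
def third_order_cross_moment(x, y, z, tau1, tau2):
    """Biased sample estimate of the third-order cross moment
    c_xyz(tau1, tau2) = E[x(n) * y(n + tau1) * z(n + tau2)],
    for non-negative lags tau1, tau2."""
    n_max = min(len(x), len(y) - tau1, len(z) - tau2)
    total = sum(x[n] * y[n + tau1] * z[n + tau2] for n in range(n_max))
    return total / len(x)  # biased estimator: normalize by record length
```

Such a reference model is the usual way to functionally verify a hardware implementation: the FPGA outputs are compared lag by lag against the software estimate.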