Abstract: SDMA (Space-Division Multiple Access) is a MIMO
(Multiple-Input, Multiple-Output) based wireless communication
network architecture that has the potential to significantly increase
spectral efficiency and system performance. Maximum
likelihood (ML) detection provides optimal performance, but its
complexity grows exponentially with the modulation constellation
size and the number of users. The QR decomposition (QRD)
MUD can be a substitute for ML detection due to its low complexity
and near-optimal performance. The minimum mean-squared-error
(MMSE) multiuser detector (MUD) minimises the mean square
error (MSE), which does not guarantee that the BER of the
system is also minimised. The minimum bit error rate (MBER)
MUD, by contrast, outperforms the classic MMSE MUD in terms of
probability of error by directly minimising the BER cost
function. The MBER MUD is also able to support more users than
the number of receiving antennas, a scenario in which the other
MUDs fail. In this paper the performance of various MUD
techniques is evaluated for correlated MIMO channel models based
on the IEEE 802.16n standard.
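As a concrete illustration of the linear MMSE detector that the MBER approach is compared against, the following is a minimal sketch. It is our own example, not the paper's simulation setup: the user count, antenna count, noise power and BPSK modulation are all assumed values.

```python
import numpy as np

# Hedged sketch of a linear MMSE multiuser detector for an SDMA uplink:
# y = H s + n, with U users and R receive antennas (assumed dimensions).
rng = np.random.default_rng(0)
U, R, sigma2 = 4, 4, 0.1

H = (rng.standard_normal((R, U)) + 1j * rng.standard_normal((R, U))) / np.sqrt(2)
s = rng.choice([-1.0, 1.0], size=U)          # BPSK symbols, one per user
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(R) + 1j * rng.standard_normal(R))
y = H @ s + n

# MMSE weight matrix: W = (H^H H + sigma2 I)^-1 H^H
W = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(U), H.conj().T)
s_hat = np.sign((W @ y).real)                # hard BPSK decisions
```

Note that this linear detector needs the channel matrix to be well conditioned, which is why, as the abstract states, it fails when the number of users exceeds the number of receive antennas while the MBER MUD does not.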
Abstract: Most simple nonlinear thresholding rules for
wavelet-based denoising assume that the wavelet coefficients are independent. However, wavelet coefficients of natural images have significant dependencies. This paper attempts to give a recipe for selecting among the popular image-denoising algorithms
VisuShrink, SureShrink, OracleShrink, BayesShrink and BiShrink, and also compares different bivariate models used in image-denoising applications. The first part of the paper
compares different shrinkage functions used for image denoising.
The second part compares different bivariate models,
and the third part uses the bivariate model with a modified marginal variance based on a Laplacian assumption. The paper gives an experimental comparison on six commonly used 512×512 images: Lenna, Barbara, Goldhill,
Clown, Boat and Stonehenge. Noise powers of 25 dB, 26 dB, 27 dB, 28 dB and 29 dB are added to the six standard images, and the corresponding Peak Signal to Noise Ratio (PSNR) values
are calculated for each noise level.
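The soft-thresholding rule and the BayesShrink threshold at the center of this comparison can be sketched as follows. This is a minimal single-subband illustration with synthetic Laplacian-like coefficients; the subband data and noise level are assumptions, not the paper's images.

```python
import numpy as np

# Soft thresholding with the BayesShrink threshold T = sigma_n^2 / sigma_x,
# applied to one wavelet subband (synthetic data, assumed noise std).
def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t (soft thresholding)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def bayes_shrink_threshold(subband, sigma_n):
    """BayesShrink: signal std estimated from the noisy-subband variance."""
    sigma_y2 = np.mean(subband ** 2)                 # noisy-subband variance
    sigma_x = np.sqrt(max(sigma_y2 - sigma_n ** 2, 1e-12))
    return sigma_n ** 2 / sigma_x

rng = np.random.default_rng(1)
clean = rng.laplace(scale=2.0, size=4096)            # Laplacian-like subband
noisy = clean + rng.normal(scale=0.5, size=4096)
t = bayes_shrink_threshold(noisy, sigma_n=0.5)
denoised = soft_threshold(noisy, t)
```

In practice the noise std sigma_n is itself estimated, commonly from the median absolute deviation of the finest diagonal subband; here it is passed in directly.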
Abstract: One of the popular methods for recognition of facial
expressions such as happiness, sadness and surprise is based on the
deformation of facial features. Motion vectors that show these
deformations can be obtained from the optical flow. In this method,
emotions are detected by comparing the resulting set of motion vectors
with standard deformation templates caused by facial expressions.
In this paper, a new method is introduced to compute a measure of
likeness, in order to make the decision based on the importance of
the vectors obtained from an optical flow approach. For finding the
vectors, the efficient optical flow method developed by
Gautama and Van Hulle [17] is used. The suggested method has been
evaluated on the Cohn-Kanade AU-Coded Facial Expression Database,
one of the most comprehensive collections of test images available.
The experimental results show that our method correctly
recognizes the facial expressions in 94% of case studies. The results
also show that only a small number of image frames (three frames) is
sufficient to detect facial expressions with a success rate of about
83.3%. This is a significant improvement over the available methods.
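One plausible way to compare a flow field against a deformation template, weighting each vector by its importance, is a magnitude-weighted cosine similarity. The weighting scheme and template below are our assumptions for illustration only, not the paper's exact likeness measure.

```python
import numpy as np

# Hedged sketch: compare an observed optical-flow field against a
# per-expression deformation template with magnitude-weighted cosine
# similarity (weighting and template are assumptions, not the paper's).
def likeness(flow, template, eps=1e-9):
    # flow, template: (H, W, 2) motion-vector fields
    dot = (flow * template).sum(axis=-1)
    w = np.linalg.norm(flow, axis=-1)                # weight by vector magnitude
    cos = dot / (w * np.linalg.norm(template, axis=-1) + eps)
    return (w * cos).sum() / (w.sum() + eps)

rng = np.random.default_rng(6)
template_smile = rng.standard_normal((8, 8, 2))      # hypothetical template
observed = template_smile + 0.1 * rng.standard_normal((8, 8, 2))
score = likeness(observed, template_smile)           # near +1 when aligned
```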
Abstract: The identification and elimination of bad
measurements is one of the basic functions of a robust state estimator,
as bad data corrupt the results of state estimation under the popular
weighted least squares method. However, this is a difficult problem
to handle, especially when dealing with multiple interacting
conforming errors. In this paper, a self-adaptive genetic-algorithm-based
approach is proposed. The algorithm uses the results of the classical
linearized normalized residuals approach to tune the genetic operators;
instead of a randomized search throughout the whole search space, the
search becomes directed, so the optimum solution is obtained at very
early stages (a maximum of 5 generations). The algorithm also exploits
an accumulating database of already computed cases to reduce the
computational burden to a minimum. Tests are conducted with reference
to the standard IEEE test systems, and the test results are very
promising.
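The classical normalized-residual test that seeds the genetic operators can be sketched on a small linear measurement model. The model, dimensions and injected error below are made-up illustration values, not one of the IEEE test systems.

```python
import numpy as np

# Normalized-residual bad-data test for WLS state estimation:
# z = H x + e, estimate x by WLS, then r_N,i = |r_i| / sqrt(Omega_ii),
# flagging measurements with r_N > 3 (all numbers here are made up).
rng = np.random.default_rng(7)
H = rng.standard_normal((8, 3))                 # measurement Jacobian
x_true = np.array([1.0, -2.0, 0.5])
sigma = 0.01
z = H @ x_true + rng.normal(0, sigma, 8)
z[5] += 0.5                                     # inject one gross error

R = sigma ** 2 * np.eye(8)                      # measurement covariance
W = np.linalg.inv(R)
G = H.T @ W @ H                                 # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)
r = z - H @ x_hat                               # residuals
Omega = R - H @ np.linalg.solve(G, H.T)         # residual covariance
r_n = np.abs(r) / np.sqrt(np.diag(Omega))
suspect = np.flatnonzero(r_n > 3.0)             # candidates for the GA search
```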
Abstract: As mobile ad hoc networks (MANETs) have different
characteristics from wired networks and even from standard wireless
networks, there are new security challenges that need to be addressed.
Due to their unique features, such as open nature, lack of
infrastructure and central management, node mobility and dynamic
topology changes, attack-prevention methods alone are not enough.
Intrusion detection is therefore one of the possible ways of
recognizing an attack before the system can be penetrated. However,
intrusion detection techniques developed for traditional wireless
networks are not suitable for MANETs. In this paper, we classify the
architectures of intrusion detection systems that have so far been
introduced for MANETs, and then present and compare existing
intrusion detection techniques for MANETs. We also indicate
important future research directions.
Abstract: Policies that support entrepreneurship are key to the
generation of new business. In Brazil, seed capital, the installation of
technology parks, zero-interest financing programs and economic
subsidies such as the First Innovative Company Program (PRIME) are
examples of incentive policies. For the implementation of PRIME in
particular, the Brazilian Innovation Agency (FINEP) decentralized
its operation so that business incubators could select innovative
projects. This paper analyzes the PRIME program at the Business
Incubator Center of the State of Sergipe (CISE) by calculating the
mean and standard deviation of the grades obtained by the companies
on the factors of innovation, market potential, economic and financial
return, market strategy and staff, and by applying the Mann-Whitney
test.
Abstract: The increasing globalization of capital markets raises the
requirements on the financial information provided to investors,
who look for highly comparable information. This paper deals with the
advantages and limitations of applying International Financial
Reporting Standards (IFRS) in the Czech Republic and Ukraine. The
greatest obstacles to full adoption of IFRS are the strong connection
of continental accounting to the tax system and the enormously high
administrative burden placed on IFRS appliers.
Abstract: In image processing, image compression can improve
the performance of digital systems by reducing the cost and
time of image storage and transmission without significant reduction
of image quality. This paper describes a low-complexity hardware
architecture for the Discrete Cosine Transform (DCT) for
image compression [6]. In this DCT architecture, common computations
are identified and shared to remove redundant computations
in the DCT matrix operation. Vector processing is used for the
implementation of the DCT. This reduction in the computational
complexity of the 2D DCT reduces power consumption. The 2D DCT is
performed on an 8×8 matrix using two 1-dimensional DCT blocks and a
transposition memory [7]. The inverse discrete cosine transform (IDCT)
is performed to obtain the image matrix and reconstruct the original
image. The proposed image compression algorithm is modeled in MATLAB,
and the VLSI design of the architecture is implemented in Verilog HDL.
The proposed hardware architecture for image compression employing the
DCT was synthesized using RTL Compiler and mapped using 180 nm
standard cells, with simulation done in ModelSim. The simulation
results from MATLAB and Verilog HDL are compared. Detailed power and
area analysis was done using RTL Compiler from CADENCE. The power
consumption of the DCT core is reduced to 1.027 mW with minimum
area [1].
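The row-column decomposition that the described hardware uses (two 1D DCT passes with a transposition in between) can be sketched in software. This is a reference-model sketch of the standard 8×8 DCT-II, not the paper's RTL.

```python
import numpy as np

# 8x8 2D DCT computed as two 1D DCT passes with a transposition between
# them: Y = C X C^T, where C is the orthonormal DCT-II basis matrix.
N = 8
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)                 # DC row normalization

def dct2_8x8(block):
    # 1D DCT on rows, transpose (the "transposition memory"), 1D DCT again
    return C @ block @ C.T

def idct2_8x8(coeffs):
    # C is orthogonal, so the inverse transform simply uses C^T
    return C.T @ coeffs @ C

rng = np.random.default_rng(2)
x = rng.integers(0, 256, size=(8, 8)).astype(float)   # one pixel block
x_rec = idct2_8x8(dct2_8x8(x))                        # perfect reconstruction
```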
Abstract: Background: Tissue Doppler Echocardiography
(TDE) assesses diastolic function more accurately than routine pulsed
Doppler echo. Assessing the effects of dynamic and static exercise
on the heart by using TDE can provide new information about the
athlete's heart syndrome. Methods: This study was conducted on 20
elite wrestlers, 14 endurance runners at national level and 21
non-athletes as the control group. Participants underwent
two-dimensional echocardiography, standard Doppler and TDE.
Results: Wrestlers had the highest left ventricular mass index,
end-diastolic interventricular septum thickness and left ventricular
posterior wall thickness. Runners had the highest left ventricular
end-diastolic volume, LV ejection fraction, stroke volume and
cardiac output. In TDE, the ratio of the early to the late diastolic
velocity of the mitral annulus in the athletic groups was greater than
in the controls, though without a significant difference. Conclusion:
In spite of cardiac morphological changes in athletes, TDE shows that
cardiac diastolic function is not adversely affected.
Abstract: Bagging and boosting are among the most popular resampling ensemble methods that generate and combine a diversity of regression models using the same learning algorithm as base learner. Boosting algorithms are considered stronger than bagging on noise-free data; however, there are strong empirical indications that bagging is much more robust than boosting in noisy settings. For this reason, in this work we built an ensemble that averages bagging and boosting ensembles, each with 10 sub-learners. We performed a comparison with simple bagging and boosting ensembles with 25 sub-learners on standard benchmark datasets, and the proposed ensemble gave better accuracy.
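The averaging idea can be sketched on a toy 1D regression problem with decision stumps as base learners. The base learner, data and learning rate are our assumptions for illustration; the paper's datasets and base algorithm are not reproduced here.

```python
import numpy as np

# Sketch: combine a bagging ensemble and a (gradient) boosting ensemble of
# decision stumps by simple averaging of their predictions.
def fit_stump(x, y):
    """Return (threshold, left_mean, right_mean) minimizing squared error."""
    best, best_err = (x.min() - 1.0, y.mean(), y.mean()), np.inf
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if err < best_err:
            best_err, best = err, (t, left.mean(), right.mean())
    return best

def predict_stump(stump, x):
    t, lo, hi = stump
    return np.where(x <= t, lo, hi)

def bagging(x, y, n_learners, rng):
    stumps = []
    for _ in range(n_learners):
        idx = rng.integers(0, len(x), len(x))      # bootstrap resample
        stumps.append(fit_stump(x[idx], y[idx]))
    return lambda q: np.mean([predict_stump(s, q) for s in stumps], axis=0)

def boosting(x, y, n_learners, lr=0.5):
    stumps, resid = [], y.copy()
    for _ in range(n_learners):                    # fit stumps to residuals
        s = fit_stump(x, resid)
        stumps.append(s)
        resid = resid - lr * predict_stump(s, x)
    return lambda q: lr * np.sum([predict_stump(s, q) for s in stumps], axis=0)

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)
bag, boost = bagging(x, y, 10, rng), boosting(x, y, 10)
combined = lambda q: 0.5 * (bag(q) + boost(q))     # the averaged ensemble
```

Because squared error is convex in the prediction, the averaged ensemble's error is at most the mean of the two constituent ensembles' errors, which is the intuition behind averaging a noise-robust method (bagging) with a stronger but noise-sensitive one (boosting).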
Abstract: This paper introduces a new signal-denoising method based on the Empirical Mode Decomposition (EMD) framework. The method is a fully data-driven approach. The noisy signal is decomposed adaptively into oscillatory components called Intrinsic Mode Functions (IMFs) by means of a process called sifting. EMD denoising involves filtering or thresholding each IMF and reconstructing the estimated signal from the processed IMFs. The EMD can be combined with a filtering approach or with a nonlinear transformation; in this work the Savitzky-Golay filter and soft thresholding are investigated. For thresholding, IMF samples are shrunk or scaled below a threshold value. The standard deviation of the noise is estimated for every IMF, and the threshold is derived for Gaussian white noise. The method is tested on simulated and real data and compared with averaging, median and wavelet approaches.
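The thresholding stage alone can be sketched as follows. Real IMFs would come from an EMD implementation (the sifting process); here placeholder "IMFs" stand in, the per-IMF noise std is estimated with the robust MAD estimator, and the universal threshold for Gaussian white noise is applied with soft thresholding. These specific estimator choices are common practice, not necessarily the paper's exact derivation.

```python
import numpy as np

# Soft-threshold each (stand-in) IMF with a per-IMF noise estimate, then
# reconstruct by summing the processed IMFs.
def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise_imfs(imfs):
    n = imfs.shape[1]
    out = np.zeros(n)
    for imf in imfs:
        sigma = np.median(np.abs(imf)) / 0.6745      # robust noise std (MAD)
        t = sigma * np.sqrt(2.0 * np.log(n))          # universal threshold
        out += soft(imf, t)
    return out

rng = np.random.default_rng(4)
n = 1024
burst = np.zeros(n)
burst[200:264] = 2.0 * np.sin(2 * np.pi * np.arange(64) / 16)
imfs = np.stack([rng.normal(0, 0.2, n),              # noise-dominated IMF
                 burst + rng.normal(0, 0.05, n)])    # signal-carrying IMF
clean_est = denoise_imfs(imfs)
```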
Abstract: The objective of this study is to introduce estimators of the parameters and survival function of the Weibull distribution using three different methods: maximum likelihood estimation, standard Bayes estimation and modified Bayes estimation. We then compare the three methods using a simulation study to find the best one based on MPE and MSE.
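The first of the three methods, maximum likelihood, can be sketched as follows; the Bayes variants are not shown, and the sample is our own simulated data. The shape k solves 1/k = sum(x^k ln x)/sum(x^k) - mean(ln x), after which the scale is lam = (mean(x^k))^(1/k) and the survival function is S(t) = exp(-(t/lam)^k).

```python
import math
import random

# Weibull MLE via a damped fixed-point iteration on the shape equation.
def weibull_mle(x, iters=500):
    n = len(x)
    mean_lnx = sum(math.log(v) for v in x) / n
    k = 1.0                                   # initial guess
    for _ in range(iters):
        num = sum(v ** k * math.log(v) for v in x)
        den = sum(v ** k for v in x)
        k_new = 1.0 / (num / den - mean_lnx)
        k = 0.5 * k + 0.5 * k_new             # damping for stable convergence
    lam = (sum(v ** k for v in x) / n) ** (1.0 / k)
    return k, lam

def weibull_survival(t, k, lam):
    return math.exp(-((t / lam) ** k))

# Simulated sample from Weibull(k=2, lam=1) via inverse-CDF sampling
random.seed(5)
k_true, lam_true = 2.0, 1.0
xs = [lam_true * (-math.log(max(1.0 - random.random(), 1e-12))) ** (1.0 / k_true)
      for _ in range(2000)]
k_hat, lam_hat = weibull_mle(xs)
```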
Abstract: The paper examines the performance of bit-interleaved parity (BIP) methods in error rate monitoring, and in declaration and clearing of alarms in those transport networks that employ automatic protection switching (APS). The BIP-based error rate monitoring is attractive for its simplicity and ease of implementation. The BIP-based results are compared with exact results and are found to declare the alarms too late, and to clear the alarms too early. It is concluded that the standards development and systems implementation should take into account the fact of early clearing and late declaration of alarms. The window parameters defining the detection and clearing thresholds should be set so as to build sufficient hysteresis into the system to ensure that BIP-based implementations yield acceptable performance results.
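The BIP-8 primitive behind such monitoring is simple to sketch: bit i of the parity byte gives even parity over bit i of every byte in the monitored block, which reduces to XOR-ing all the bytes together. The example frames below are made up; the sketch also shows why BIP undercounts (two errors in the same bit lane mask each other), which is consistent with the late alarm declaration discussed in the abstract.

```python
from functools import reduce

# BIP-8: one parity byte per monitored block, computed as the XOR of all bytes.
def bip8(block: bytes) -> int:
    return reduce(lambda a, b: a ^ b, block, 0)

def bip_violations(block: bytes, received_parity: int) -> int:
    """Parity bits that disagree: at most one detectable error per bit lane,
    which is why BIP underestimates the true error count at high error rates."""
    return bin(bip8(block) ^ received_parity).count("1")

frame = bytes([0b10110010, 0b01100110])             # made-up frame
parity = bip8(frame)
one_bit_error = bytes([0b10110011, 0b01100110])     # detected: 1 violation
two_same_lane = bytes([0b10110011, 0b01100111])     # masked: 0 violations
```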
Abstract: Silver nanoparticles were prepared by a chemical reduction method, with silver nitrate as the metal precursor and hydrazine hydrate as the reducing agent. The formation of the silver nanoparticles was monitored using UV-Vis absorption spectroscopy, which revealed their formation by exhibiting the typical surface plasmon absorption maximum at 418-420 nm. Comparison of theoretical (Mie light scattering theory) and experimental results showed that the diameter of the silver nanoparticles in colloidal solution is about 60 nm. We used energy-dispersive spectroscopy (EDX), X-ray diffraction (XRD), transmission electron microscopy (TEM) and UV-Vis spectroscopy to characterize the nanoparticles obtained. EDX of the nanoparticle dispersion confirmed the presence of the elemental silver signal; no peaks of other impurities were detected. The average size and morphology of the silver nanoparticles were determined by TEM. TEM photographs indicate that the nanopowders consist of well dispersed agglomerates of grains with a narrow size distribution (40 to 60 nm), whereas the radii of the individual particles are between 10 and 20 nm. The synthesized nanoparticles were structurally characterized by X-ray diffraction and transmission high-energy electron diffraction (HEED). The peaks in the XRD pattern are in good agreement with the standard values of the face-centered-cubic form of metallic silver (ICDD-JCPDS card no. 4-0787), and no peaks of other impurity crystalline phases were detected. Additionally, the antibacterial activity of the nanoparticle dispersion was measured by the Kirby-Bauer method. The silver nanoparticles showed high antimicrobial and bactericidal activity against the gram-negative bacteria Escherichia coli and Pseudomonas aeruginosa, as well as against Staphylococcus aureus, including a highly methicillin-resistant strain.
Abstract: In recent years, there has been increasing interest in using daylight to save energy in buildings. In tropical regions, daylighting is always an energy saver; on the other hand, daylight also provides visual comfort. Standards show that many criteria should be taken into consideration in order to achieve daylight utilization and visual comfort. The current standard in Malaysia, MS 1525, does not provide a sufficient guideline, hence more research on daylight performance is needed. If architects do not consider daylight design, it not only causes inconvenience in working spaces but also leads to more energy consumption as well as environmental pollution. This research surveyed daylight performance in 5 selected office buildings from different areas of Malaysia through an experimental method. Several parameters of daylight quality, such as the daylight factor, surface luminance and surface luminance ratio, were measured in different rooms in each building. The results demonstrate that most of the buildings were not designed for daylight utilization. It is therefore very important that architects follow daylight design recommendations to reduce the consumption of electric power for artificial lighting while sufficient quality daylight is available.
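The daylight factor, one of the quality parameters measured in such surveys, is the ratio of indoor illuminance to the simultaneous unobstructed outdoor horizontal illuminance under an overcast sky, expressed as a percentage. The illuminance readings below are made-up illustration values, not the survey's measurements.

```python
# Daylight factor: DF(%) = 100 * E_indoor / E_outdoor (overcast sky).
def daylight_factor(e_indoor_lux: float, e_outdoor_lux: float) -> float:
    return 100.0 * e_indoor_lux / e_outdoor_lux

# Three hypothetical measurement points against a 10,000 lx overcast sky
readings = [(150.0, 10000.0), (420.0, 10000.0), (90.0, 10000.0)]
dfs = [daylight_factor(ei, eo) for ei, eo in readings]
```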
Abstract: Industrial surveys show that manufacturing
companies judge the quality of a thermal cutting process by the
dimensions and physical appearance of the cut material surface.
Therefore, the roughness of the surface cut by the plasma arc
cutting process and the rate of material removed by the manual
plasma arc cutting machine were the quantities of interest. A
Selco Genesis 90 plasma arc cutter was used to cut standard
AISI 1017 steel of 200 mm × 100 mm × 6 mm manually, based on the
selected parameter settings. The material removal rate (MRR) was
measured by determining the weight of the specimens before and
after the cutting process. The surface roughness (SR) analysis was
conducted using a Mitutoyo CS-3100 to determine the average
roughness value (Ra). The Taguchi method was utilized to achieve
the optimum condition for both outputs studied. The microstructure
in the region of the cut surface was analyzed using SEM. The results
reveal that the SR values are inversely proportional to the MRR
values, and that the quality of the surface roughness depends on the
dross peaks that occur after the cutting process.
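The two Taguchi signal-to-noise ratios relevant to this optimization can be sketched directly: larger-the-better for MRR and smaller-the-better for Ra. The replicate values below are hypothetical, not the paper's measurements.

```python
import math

# Taguchi S/N ratios: the optimum parameter setting maximizes the S/N ratio.
def sn_larger_the_better(ys):
    """For responses to maximize (e.g. MRR): -10 log10(mean(1/y^2))."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def sn_smaller_the_better(ys):
    """For responses to minimize (e.g. Ra): -10 log10(mean(y^2))."""
    return -10.0 * math.log10(sum(y ** 2 for y in ys) / len(ys))

mrr_replicates = [2.4, 2.6, 2.5]     # g/min, hypothetical replicates
ra_replicates = [3.1, 2.9, 3.0]      # micrometres, hypothetical replicates
sn_mrr = sn_larger_the_better(mrr_replicates)
sn_ra = sn_smaller_the_better(ra_replicates)
```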
Abstract: In this paper, a new dependable algorithm based on an adaptation of the standard variational iteration method (VIM) is used for analyzing the transition from steady convection to chaos for low-to-intermediate Rayleigh number convection in porous media. The solution trajectories show that the transition from steady convection to chaos occurs at a slightly subcritical value of the Rayleigh number, the critical value being associated with the loss of linear stability of the steady convection solution. The VIM is applied over a sequence of intervals to find accurate approximate solutions to the considered model and other dynamical systems; we call this technique the piecewise VIM. Numerical comparisons between the piecewise VIM and classical fourth-order Runge-Kutta (RK4) solutions reveal that the proposed technique is a promising tool for nonlinear chaotic and nonchaotic systems.
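The RK4 baseline against which the piecewise VIM is compared can be sketched on a Lorenz-type truncated convection model. The equations and parameter values below (the classical chaotic ones) are assumptions for illustration; the paper's porous-media model and its Rayleigh-number scaling are not reproduced.

```python
# Lorenz-type system: x' = sigma(y - x), y' = R x - y - x z, z' = x y - b z
def lorenz(state, sigma=10.0, R=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), R * x - y - x * z, x * y - b * z)

def rk4_step(f, state, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6.0 * (a + 2 * b2 + 2 * c + d)
                 for s, a, b2, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
h = 0.01
trajectory = [state]
for _ in range(1000):                 # integrate to t = 10
    state = rk4_step(lorenz, state, h)
    trajectory.append(state)
```

The piecewise VIM works analogously at the top level: the time axis is split into short intervals and an accurate approximate solution is produced on each, with the end state of one interval seeding the next.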
Abstract: In this paper, the implementation of low-power,
high-throughput convolutional filters for the one-dimensional
Discrete Wavelet Transform (DWT) and its inverse is presented. The
analysis filters have already been used for the implementation of a
high-performance DWT encoder [15] with minimum memory
requirements for the JPEG 2000 standard. This paper presents the
design techniques and the implementation of the convolutional filters
included in the JPEG 2000 standard for the forward and inverse DWT,
targeting low-power operation, high performance and reduced
memory accesses. Moreover, the filters are able to perform
progressive computations so as to minimize the buffering between
the decomposition and reconstruction phases. The experimental
results illustrate the filters' low-power, high-throughput
characteristics as well as their memory-efficient operation.
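The reversible 5/3 analysis/synthesis pair from JPEG 2000 that such filter hardware implements can be sketched in lifting form. This is a software sketch with a simplified clamp boundary extension rather than the standard's full symmetric extension; the hardware pipelining and memory organization discussed in the paper are not modeled.

```python
# LeGall 5/3 lifting for a 1D signal of even length:
#   d[i] = odd[i] - floor((even[i] + even[i+1]) / 2)   (predict)
#   s[i] = even[i] + floor((d[i-1] + d[i] + 2) / 4)    (update)
# Each step is an invertible integer addition, so synthesis is exact.
def dwt53_analysis(x):
    even, odd = x[0::2], x[1::2]
    n = len(even)
    e = lambda i: even[min(i, n - 1)]          # clamp extension at right edge
    d = [odd[i] - (e(i) + e(i + 1)) // 2 for i in range(n)]
    dd = lambda i: d[max(i, 0)]                # clamp extension at left edge
    s = [even[i] + (dd(i - 1) + d[i] + 2) // 4 for i in range(n)]
    return s, d                                # lowpass, highpass

def dwt53_synthesis(s, d):
    n = len(s)
    dd = lambda i: d[max(i, 0)]
    even = [s[i] - (dd(i - 1) + d[i] + 2) // 4 for i in range(n)]   # undo update
    e = lambda i: even[min(i, n - 1)]
    odd = [d[i] + (e(i) + e(i + 1)) // 2 for i in range(n)]         # undo predict
    return [v for pair in zip(even, odd) for v in pair]             # interleave
```

Because both lifting steps are exactly invertible integer operations, reconstruction is lossless, which is the property the reversible path of JPEG 2000 relies on.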
Abstract: An experimental investigation was performed on pulp
liquid flow in straight ducts with a square cross section. Fully
developed steady flow was visualized and the fiber concentration was
obtained using a light-section method developed by the authors.
The obtained results reveal quantitatively, in a definite form, the
distribution of the fiber concentration. From these results and
measurements of pressure loss, it is found that the flow characteristics
of pulp liquid in ducts can be classified into five patterns. The
relationships among the distributions of the mean and the fluctuation
of fiber concentration, the pressure loss and the flow velocity are
discussed, and the features of each pattern are extracted. The degree
of nonuniformity of the fiber concentration, as indicated by the
standard deviation of its distribution, decreases from 0.3 to 0.05
with an increase in the velocity of the tested pulp liquid from 0.4 to
0.8%.
Abstract: This paper describes a platform that addresses the main
research areas for e-learning educational contents. Reusability
concerns the possibility of using contents in different courses,
reducing costs and exploiting available data from repositories; in
our approach, the production of educational material is based on
templates for reusing learning objects. In terms of interoperability,
the main challenge lies in reaching the audience through different
platforms. E-learning solutions must also track the evolution of
social consumption, where nowadays many multimedia contents are
accessed through social networks. Our work addresses this by
implementing a platform for the generation of multimedia
presentations focused on the new social media paradigm. The system
produces video courses on top of the web standard SMIL (Synchronized
Multimedia Integration Language), ready to be published and shared.
Regarding interfaces, it is essential to satisfy user needs and ease
communication; to this end, the platform deploys virtual teachers
that provide natural interfaces, while multimodal features remove
barriers for pupils with disabilities.