Abstract: The SOM has several beneficial features that make it a useful method for data mining. One of the most important is its ability to preserve the topology of the data in the projection. Several measures can be used to quantify the goodness of the map in order to obtain the optimal projection, including the average quantization error and various topological errors. Many researchers have studied how topology preservation should be measured. One option is the topographic error, which considers the ratio of data vectors for which the first and second best-matching units (BMUs) are not adjacent. In this work we present a study of the behaviour of the topographic error in different kinds of maps. We have found that this error penalizes rectangular maps, and we have studied the reasons why this happens. Finally, we suggest a new topological error that remedies this deficiency of the topographic error.
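The topographic error described above (the fraction of data vectors whose first and second best-matching units are not adjacent on the map) can be sketched as follows. This is an illustrative implementation, not the authors' code; it assumes a rectangular row-major grid and 8-neighbourhood adjacency.

```python
import numpy as np

def topographic_error(data, codebook, grid_shape):
    """Fraction of data vectors whose first and second best-matching
    units (BMUs) are not adjacent on the map grid (8-neighbourhood)."""
    rows, cols = grid_shape
    # grid coordinates of each codebook unit, in row-major order
    coords = np.array([(i // cols, i % cols) for i in range(rows * cols)])
    errors = 0
    for x in data:
        d = np.linalg.norm(codebook - x, axis=1)
        bmu1, bmu2 = np.argsort(d)[:2]            # first and second BMU
        if np.abs(coords[bmu1] - coords[bmu2]).max() > 1:
            errors += 1
    return errors / len(data)

# toy 2x2 map whose codebook matches the grid layout: error is 0
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
data = np.array([[0.1, 0.0], [0.9, 0.9]])
print(topographic_error(data, codebook, (2, 2)))  # prints 0.0
```

A map whose codebook ordering does not follow the grid (a "twisted" map) yields a nonzero error, since the two BMUs of a data vector can then land on distant grid units.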
Abstract: This paper introduces a new signal denoising method based on the empirical mode decomposition (EMD) framework. The method is a fully data-driven approach. The noisy signal is decomposed adaptively into oscillatory components called intrinsic mode functions (IMFs) by means of a process called sifting. EMD denoising involves filtering or thresholding each IMF and reconstructing the estimated signal from the processed IMFs. The EMD can be combined with a filtering approach or with a nonlinear transformation; in this work the Savitzky-Golay filter and soft thresholding are investigated. For thresholding, IMF samples below a threshold value are shrunk or scaled. The standard deviation of the noise is estimated for every IMF, and the threshold is derived for Gaussian white noise. The method is tested on simulated and real data and compared with averaging, median and wavelet approaches.
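The thresholding-and-reconstruction step can be sketched as follows, assuming the IMFs have already been obtained by sifting. The MAD-based noise estimate and the universal threshold shown here are common stand-ins, not necessarily the paper's derived threshold.

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink samples toward zero; samples below t in magnitude become 0."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise_imfs(imfs):
    """Threshold each IMF, then reconstruct the signal estimate by
    summing the processed IMFs."""
    out = []
    for imf in imfs:
        # robust per-IMF noise std estimate (median absolute deviation)
        sigma = np.median(np.abs(imf)) / 0.6745
        t = sigma * np.sqrt(2.0 * np.log(len(imf)))  # universal threshold
        out.append(soft_threshold(imf, t))
    return np.sum(out, axis=0)
```

In a full pipeline, a filtering variant would replace `soft_threshold` with, e.g., a Savitzky-Golay filter applied to the noisiest IMFs.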
Abstract: An implicit block method based on the backward differentiation formulae (BDF) for the solution of stiff initial value problems (IVPs) using variable step size is derived. We construct a variable step size block method that stores all the coefficients of the method, with a simplified strategy for controlling the step size, with the intention of optimizing performance in terms of precision and computation time. The strategy keeps the step size constant, halves it, or increases it to 1.9 times the previous step size. The decision to change the step size is determined by the local truncation error (LTE). Numerical results are provided to support the improvement achieved by the method.
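The keep/halve/increase-by-1.9 strategy driven by the LTE can be sketched as follows; the acceptance tolerance and the safety threshold for enlarging the step are illustrative assumptions, not the paper's values.

```python
def next_step_size(h, lte, tol):
    """Step-size control: halve h when the local truncation error (LTE)
    exceeds the tolerance, increase h by a factor of 1.9 when the LTE is
    well below the tolerance, and otherwise keep h constant.
    The 0.1*tol threshold for enlarging is an illustrative choice."""
    if lte > tol:            # step rejected: halve and retry
        return 0.5 * h
    if lte < 0.1 * tol:      # very accurate: enlarge the step
        return 1.9 * h
    return h                 # otherwise keep the step constant
```

In a variable step size BDF code, the new step size would then select the corresponding stored block coefficients before the next integration step.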
Abstract: IEEE 802.11e is the enhanced version of the IEEE 802.11 MAC designed to provide Quality of Service (QoS) in wireless networks. It supports QoS through service differentiation and prioritization mechanisms: data traffic receives a different priority based on its QoS requirements. Fundamentally, applications are divided into four Access Categories (ACs). Each AC has its own buffer queue and behaves as an independent backoff entity, and every frame with a specific data traffic priority is assigned to one of these access categories. IEEE 802.11e EDCA (Enhanced Distributed Channel Access) is designed to enhance the IEEE 802.11 DCF (Distributed Coordination Function) mechanisms by providing a distributed access method that can support service differentiation among different classes of traffic. The performance of the IEEE 802.11e MAC layer with different ACs is evaluated to understand the actual benefits deriving from the MAC enhancements.
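The frame-to-AC assignment described above follows the standard 802.1D user-priority mapping, which can be illustrated as:

```python
# IEEE 802.11e EDCA: 802.1D user priority -> access category
# (AC_BK background, AC_BE best effort, AC_VI video, AC_VO voice)
UP_TO_AC = {
    1: "AC_BK", 2: "AC_BK",   # background
    0: "AC_BE", 3: "AC_BE",   # best effort
    4: "AC_VI", 5: "AC_VI",   # video
    6: "AC_VO", 7: "AC_VO",   # voice
}

def access_category(user_priority):
    """Assign a frame with a given 802.1D user priority to one of the
    four EDCA access categories, each with its own backoff queue."""
    return UP_TO_AC[user_priority]

print(access_category(6))  # prints AC_VO
```

Each AC then contends for the medium with its own EDCA parameters (AIFS, CWmin/CWmax, TXOP limit), which is what realizes the differentiation.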
Abstract: The objective of this study is to introduce estimators of the parameters and survival function of the Weibull distribution using three different methods: maximum likelihood estimation, standard Bayes estimation and modified Bayes estimation. We then compare the three methods in a simulation study to find the best one based on MPE and MSE.
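A sketch of the maximum likelihood part for the two-parameter Weibull follows. Solving the profile likelihood equation for the shape by bisection is one common approach, not necessarily the one used in the study.

```python
import numpy as np

def weibull_mle(x, iters=100):
    """Maximum likelihood estimates of the Weibull shape k and scale lam.
    The shape solves sum(x^k log x)/sum(x^k) - 1/k - mean(log x) = 0
    (a function increasing in k), found here by bisection."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)

    def g(k):
        xk = x ** k
        return np.sum(xk * logx) / np.sum(xk) - 1.0 / k - logx.mean()

    lo, hi = 1e-3, 100.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = np.mean(x ** k) ** (1.0 / k)   # scale, given the shape MLE
    return k, lam

def survival(t, k, lam):
    """Weibull survival function S(t) = exp(-(t/lam)^k)."""
    return np.exp(-((t / lam) ** k))
```

The Bayes and modified Bayes estimators would replace this point estimate with posterior summaries under the chosen priors and loss functions.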
Abstract: In this paper a way of hiding a text message (steganography) in a gray-scale image is presented. The method first finds the binary value of each character of the text message, and then locates the dark (black) places of the gray image by converting the original image to a binary image and labeling each object in the image using 8-connectivity. These images are then converted to RGB images in order to find the dark places, because in this way each sequence of gray levels turns into an RGB colour and the dark level of the gray image can be found; if the gray image is very light, the histogram must be adjusted manually so that only dark places are found. In the final stage, each group of 8 pixels in the dark places is treated as a byte, and the binary value of each character is put into the low-order bit of each byte formed from the dark-place pixels, which increases the security of the underlying steganographic method (LSB).
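The final embedding stage (one message bit into the least significant bit of each selected dark pixel) can be sketched as follows. Selecting dark pixels by a simple intensity threshold is a simplified stand-in for the object-labeling procedure described; the threshold value is an assumption.

```python
import numpy as np

def embed_message(gray, message, dark_threshold=30):
    """Hide message bits in the least significant bit (LSB) of the dark
    pixels (value below dark_threshold) of a gray image, in scan order."""
    img = gray.copy().ravel()
    dark = np.flatnonzero(img < dark_threshold)      # candidate pixels
    bits = [int(b) for ch in message for b in format(ord(ch), "08b")]
    if len(bits) > len(dark):
        raise ValueError("message too long for available dark pixels")
    for bit, idx in zip(bits, dark):
        img[idx] = (img[idx] & 0xFE) | bit           # overwrite the LSB
    return img.reshape(gray.shape)

def extract_message(stego, n_chars, dark_threshold=30):
    """Read back n_chars characters from the LSBs of the dark pixels."""
    img = stego.ravel()
    dark = np.flatnonzero(img < dark_threshold)
    bits = [str(img[i] & 1) for i in dark[: 8 * n_chars]]
    return "".join(chr(int("".join(bits[i:i + 8]), 2))
                   for i in range(0, len(bits), 8))
```

Because changing an LSB alters a pixel value by at most 1, a pixel below the threshold stays below it, so the receiver recovers the same set of dark pixels.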
Abstract: This paper tests the level of integration of the Malaysian and Singaporean stock markets with the world market. The Kalman filter (KF) methodology is applied to the International Capital Asset Pricing Model (ICAPM), and the pricing errors estimated within the ICAPM framework are used as a measure of market integration or segmentation. The advantage of the KF technique is that it allows for time-varying coefficients in estimating the ICAPM and hence is able to capture a varying degree of market integration. Empirical results show clear evidence of a varying degree of market integration for both Malaysia and Singapore. Furthermore, the changes in the level of market integration are found to coincide with certain economic events that have taken place. The findings provide evidence of the practicability of the KF technique for estimating stock market integration. Comparing the Malaysian and Singaporean stock markets, the trends of the market integration indices look similar through time, but their magnitudes are notably different, with the Malaysian stock market showing a greater degree of market integration. Finally, the significant evidence of a varying degree of market integration shows that OLS is inappropriate for estimating the level of market integration.
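The core technique, a time-varying-coefficient regression estimated by a Kalman filter, can be sketched in its simplest form: a single-regressor model with a random-walk coefficient. The noise variances q and r and the one-factor form are illustrative assumptions, not the paper's full ICAPM specification.

```python
import numpy as np

def kalman_tv_beta(y, x, q=1e-4, r=1e-2):
    """Kalman filter for y_t = beta_t * x_t + e_t, where beta_t follows
    a random walk (state noise variance q, observation noise variance r).
    Returns the filtered path of the time-varying coefficient."""
    beta, p = 0.0, 1.0                 # state mean and variance
    betas = []
    for yt, xt in zip(y, x):
        p = p + q                      # predict: random-walk state
        s = xt * p * xt + r            # innovation variance
        k = p * xt / s                 # Kalman gain
        beta = beta + k * (yt - xt * beta)
        p = (1.0 - k * xt) * p
        betas.append(beta)
    return np.array(betas)
```

In the ICAPM setting, the filtered innovations (pricing errors) rather than the coefficient path itself would serve as the integration measure, and an OLS fit would instead force a single constant coefficient over the whole sample.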
Abstract: Contaminants are most often not taken seriously into consideration, a behavior that stems directly from the lack of monitoring and professional reporting on pollution in printing facilities in Serbia. The goal of planned and systematic ozone measurements in the ambient air of screen printing facilities in Novi Sad is to examine its impact on employees' health and to track trends in its concentration. In this study, ozone concentrations were determined using a discontinuous and a continuous method during the automatic and the manual screen printing process. The results indicate that the average ozone concentrations measured during the automatic process were almost 3 to 28 times higher for the discontinuous method and 10 times higher for the continuous method (1.028 ppm) compared with the values prescribed by OSHA. In the manual process, the average ozone concentrations were within the prescribed values for the discontinuous method and almost 3 times higher for the continuous method (0.299 ppm).
Abstract: Economic dispatch (ED) is considered to be one of the key functions in electric power system operation. This paper presents a new hybrid approach to economic dispatch problems based on a genetic algorithm (GA). The GA is a widely used optimization algorithm predicated on the principle of natural evolution. Using a chaotic queue with the GA generates several neighborhoods of near-optimal solutions to maintain solution variety, which keeps the search process from converging prematurely. For chaotic queue generation, using the tent map instead of the logistic map improves the iteration speed. The results of the proposed approach are compared, in terms of fuel cost, with differential evolution and other methods in the literature.
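The tent map used for chaotic queue generation can be sketched as follows. Its piecewise-linear form avoids the multiplication of two state-dependent factors that the logistic map requires, which is the iterative-speed advantage referred to. The slope value 1.99 (rather than the canonical 2) is an illustrative choice that avoids degenerate collapse in floating point.

```python
def tent_map_sequence(x0, n, mu=1.99):
    """Generate a chaotic sequence with the tent map
    x_{k+1} = mu * min(x_k, 1 - x_k), with x_k in (0, 1)."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * min(x, 1.0 - x)
        seq.append(x)
    return seq

# e.g. values to perturb a near-optimal dispatch solution into a
# neighborhood of candidate solutions
chaos = tent_map_sequence(0.37, 5)
```

In the hybrid GA, such sequences would seed the chaotic queue from which neighborhood solutions around the current best dispatch are generated.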
Abstract: This paper attempts to solve the problem of searching for and retrieving similar MRI images via Internet services using morphological features sourced from the original image. This study aims to provide an additional tool for search and retrieval methods; until now, the main search mechanism has been syntactic, based on keywords. The proposed technique aims to serve the new requirements of libraries, one of which is the development of computational tools for the control and preservation of the intellectual property of digital objects, especially digital images. For this purpose, this paper proposes the use of a serial number extracted by a previously tested semantic-properties method. This method, centered on the multiple layers of a set of arithmetic points, assures two properties: the uniqueness of the final extracted number and the semantic dependence of this number on the image used as the method's input. The major advantage of the method is that it can control, to a reliable degree, the authentication of a published image or its partial modification. It also improves on the known hash functions used by digital signature schemes, producing alphanumeric strings both for authentication checking and for measuring the degree of similarity between an unknown image and an original image.
Abstract: Speckle noise affects all coherent imaging systems, including medical ultrasound. In medical images, noise suppression is a particularly delicate and difficult task: a tradeoff between noise reduction and the preservation of actual image features has to be made in a way that enhances the diagnostically relevant image content. Even though wavelets have been extensively used for denoising speckle images, we have found that denoising using contourlets gives much better performance in terms of SNR, PSNR, MSE, variance and correlation coefficient. The objective of the paper is to determine the number of levels of Laplacian pyramidal decomposition, the number of directional decompositions to perform on each pyramidal level, and the thresholding schemes that yield optimal despeckling of medical ultrasound images in particular. In the proposed method, the log-transformed original ultrasound image is subjected to the contourlet transform to obtain contourlet coefficients. The transformed image is then denoised by applying thresholding to the individual band-pass subbands using a Bayes shrinkage rule. We quantify the achieved performance improvement.
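The per-subband Bayes shrinkage rule can be sketched as follows, applied to a generic coefficient array. The BayesShrink threshold T = sigma_n^2 / sigma_x with a MAD noise estimate from the finest subband is the standard formulation; applying it to contourlet rather than wavelet subbands is the substitution the abstract describes.

```python
import numpy as np

def bayes_shrink(coeffs, sigma_noise):
    """Soft-threshold one subband of transform coefficients with the
    BayesShrink threshold T = sigma_n^2 / sigma_x, where sigma_x is the
    estimated signal std of the subband."""
    var_total = np.mean(coeffs ** 2)
    var_signal = max(var_total - sigma_noise ** 2, 0.0)
    if var_signal == 0.0:
        return np.zeros_like(coeffs)      # subband is pure noise
    t = sigma_noise ** 2 / np.sqrt(var_signal)
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def estimate_noise_std(finest_subband):
    """Robust noise estimate from the finest subband (MAD / 0.6745)."""
    return np.median(np.abs(finest_subband)) / 0.6745
```

In the full pipeline, these functions would run on each band-pass directional subband of the contourlet transform of the log-transformed image, followed by the inverse transform and exponentiation.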
Abstract: In this paper, a new method of image edge detection and characterization is presented. The "parametric filtering" method uses a judiciously defined filter that preserves the correlation structure of the input signal in the autocorrelation of the output. This reveals the evolution of the image correlation structure, as well as various distortion measures that quantify the deviation between two zones of the signal (the two Hamming signals), for the protection of an image edge.
Abstract: This paper presents a novel approach for the optimal reconfiguration of radial distribution systems. Optimal reconfiguration involves selecting the best set of branches to be opened, one from each loop, such that the resulting radial distribution system achieves the desired performance. In this paper an algorithm based on simple heuristic rules is proposed that identifies an effective switch-status configuration of the distribution system for minimum loss. The proposed algorithm consists of two parts: the first determines the best switching combinations in all loops with minimum computational effort, and the second computes, via load flows, the optimum power loss of the best switching combination found in the first part. To demonstrate the validity of the proposed algorithm, computer simulations are carried out on a 33-bus system. The results show that the performance of the proposed method is better than that of the other methods.
Abstract: This research studies the application of an immobilized TiO2 layer and a Cu-TiO2 layer on a graphite substrate as a negative electrode (anode) for a Li-ion battery. The titania layer was produced by chemical bath deposition, while the Cu particles were deposited electrochemically. A material can be used as an electrode if it is capable of intercalating Li ions into its crystal structure. Li intercalation into TiO2/graphite and Cu-TiO2/graphite was analyzed from the changes in their XRD patterns after use as electrodes during the discharging process. The XRD patterns were refined by the Le Bail method in order to determine the crystal structure of the prepared materials. Specific capacity and cycle ability measurements were carried out to study the performance of the prepared materials as the negative electrode of the Li-ion battery. The specific capacity was measured during the discharging process from fully charged to the cut-off voltage, with a 300 Ω load. The results show that the specific capacity of the Li-ion battery with TiO2/graphite as the negative electrode is 230.87 ± 1.70 mAh·g-1, which is higher than that of the battery with pure graphite as the negative electrode, i.e. 140.75 ± 0.46 mAh·g-1. Meanwhile, the deposition of Cu onto the TiO2 layer does not increase the specific capacity; the value is even lower than that of the battery with TiO2/graphite as the electrode. The cycle ability of the prepared battery is only two cycles, because the Li ribbon used as the cathode became fragile and broke easily.
Abstract: The complexity of lignocellulosic biomass requires a pretreatment step to improve the yield of fermentable sugars. The efficient pretreatment of corn cobs using microwave irradiation and potassium hydroxide, followed by enzymatic hydrolysis, was investigated. The objective of this work was to characterize the optimal conditions for microwave/potassium hydroxide pretreatment of corn cobs to enhance enzymatic hydrolysis. Corn cobs were submerged in different potassium hydroxide concentrations at various temperatures and residence times. The pretreated corn cobs were hydrolyzed to produce reducing sugar for analysis. The morphology and microstructure of the samples were investigated by thermogravimetric analysis (TGA), scanning electron microscopy (SEM) and X-ray diffraction (XRD). The results showed that lignin and hemicellulose were removed by the microwave/potassium hydroxide pretreatment, and the crystallinity of the pretreated corn cobs was higher than that of the untreated material. The method was compared with autoclave and conventional heating methods. The results indicated that microwave-alkali treatment is an efficient way to improve the enzymatic hydrolysis rate by increasing the accessibility of the substrate to hydrolysis enzymes.
Abstract: The IDR(s) method, based on an extended IDR theorem, was proposed by Sonneveld and van Gijzen. The original IDR(s) method has excellent properties compared with conventional iterative methods in terms of efficiency and its small memory requirement. The IDR(s) method, however, has the unexpected property that the relative residual 2-norm stagnates at a level below 10^-12. In this paper, an effective strategy for stagnation detection, stagnation avoidance that adaptively uses information on the parameter s, and an improvement of the convergence rate of the IDR(s) method itself are proposed in order to obtain a highly accurate approximate solution from the IDR(s) method. Through numerical experiments, the effectiveness of the adaptively tuned IDR(s) method is verified and demonstrated.
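The stagnation-detection idea (noticing when the relative residual norm stops decreasing over a window of iterations) can be sketched in its simplest form. The window length and decay ratio below are illustrative assumptions, not the authors' criterion.

```python
def stagnated(residual_history, window=10, ratio=0.99):
    """Flag stagnation when the relative residual norm has failed to
    drop by at least a factor `ratio` over the last `window` iterations.
    `residual_history` holds the relative residual 2-norm per iteration."""
    if len(residual_history) <= window:
        return False
    return residual_history[-1] > ratio * residual_history[-1 - window]
```

When such a test fires, an adaptive scheme could, for example, switch the parameter s or restart the recurrences to push the residual below the stagnation level.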
Abstract: The paper examines the performance of bit-interleaved parity (BIP) methods in error rate monitoring, and in declaration and clearing of alarms in those transport networks that employ automatic protection switching (APS). The BIP-based error rate monitoring is attractive for its simplicity and ease of implementation. The BIP-based results are compared with exact results and are found to declare the alarms too late, and to clear the alarms too early. It is concluded that the standards development and systems implementation should take into account the fact of early clearing and late declaration of alarms. The window parameters defining the detection and clearing thresholds should be set so as to build sufficient hysteresis into the system to ensure that BIP-based implementations yield acceptable performance results.
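Bit-interleaved parity itself is simple to implement, which is part of the appeal noted above. A sketch of a BIP-8 computation over a frame and of counting parity violations at the receiver follows; the framing details are generic, not tied to a specific transport standard.

```python
def bip8(frame):
    """BIP-8: even parity computed bit-interleaved over all bytes of a
    frame; bit i of the result is the parity of bit i of every byte."""
    code = 0
    for byte in frame:
        code ^= byte          # XOR accumulates even parity per bit slot
    return code

def bip8_errors(sent_bip, received_frame):
    """Number of parity-bit violations detected at the receiver (0..8),
    used to estimate the block/bit error rate for alarm decisions."""
    return bin(sent_bip ^ bip8(received_frame)).count("1")

frame = bytes([0b10110010, 0b01100001])
print(bin(bip8(frame)))  # prints 0b11010011
```

The monitoring limitation the paper analyzes follows from this structure: each parity slot only detects an odd number of errors in its bit position, so counts of violations per window understate high error rates, shifting when detection and clearing thresholds are crossed.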
Abstract: This research investigates the effects of superplasticizer dosage and the molarity of the sodium hydroxide alkaline solution on the workability, microstructure and compressive strength of self-compacting geopolymer concrete (SCGC). SCGC is an improved form of concrete that requires no compaction during placement and is made by completely eliminating ordinary Portland cement. The parameters studied were the superplasticizer (SP) dosage and the molarity of the NaOH solution. The SCGC was synthesized from low-calcium fly ash, activated by combinations of sodium hydroxide and sodium silicate solutions, and incorporated a superplasticizer for self-compactability. The workability properties, such as filling ability, passing ability and resistance to segregation, were assessed using the slump flow, T-50, V-funnel, L-box and J-ring test methods. It was found that the essential workability requirements for self-compactability according to EFNARC were satisfied. The results showed that the workability and compressive strength improved with increasing superplasticizer dosage. An increase in strength and a decrease in workability were observed as the molarity of the NaOH solution increased from 8M to 14M. Improvement of the interfacial transition zone (ITZ) and the microstructure with increasing SP dosage and with an increase in concentration from 8M to 12M was also identified.
Abstract: Mercury is a naturally occurring element present in various concentrations in the environment. Because of its toxic effects, it is desirable to investigate mercury-sensitive materials for adsorbing mercury. This paper describes the preparation of Au nanoparticles for mercury adsorption using a microwave (MW)-polyol method in the presence of three different sodium chloride (NaCl) concentrations (10, 20 and 30 mM). Mixtures of spherical, triangular, octahedral and decahedral particles and 1-D products were obtained using this rapid method. The sizes and shapes were found to depend strongly on the NaCl concentration. Without NaCl, spherical, triangular-plate, octahedral and decahedral nanoparticles and a 1-D product were produced. At the lower NaCl concentration (10 mM), spherical, octahedral and decahedral nanoparticles were present, while spherical and decahedral nanoparticles formed preferentially at a NaCl concentration of 20 mM. Spherical, triangular-plate, octahedral and decahedral nanoparticles were obtained at the highest NaCl concentration (30 mM). The amount of mercury adsorbed from a 20 ppm mercury solution was highest (67.5%) at the NaCl concentration of 30 mM; a high yield of polygonal particles increases mercury adsorption. In addition, the adsorption of mercury also depends on the sizes of the particles: the particles become smaller with increasing NaCl concentration (size range 5-16 nm) than those synthesized without the addition of NaCl (size range 11-32 nm). It is concluded that the NaCl concentration affects the sizes and shapes of the Au nanoparticles formed and thus affects mercury adsorption.
Abstract: Many companies have switched to project-oriented processes in recent years. This brings new possibilities and effectiveness not only in external processes connected with product delivery but also in internal processes. However, centralized project organization, based on the role of the project manager in the team, has proved insufficient in some cases. Agile methods of project organization try to solve this problem by bringing a new view of project organization, roles, processes and competences. Scrum is one such method, building on the principles of knowledge management to drive the project to effectiveness from all angles. Using this method to organize internal and delivery projects helps the organization create and share knowledge throughout the company. It also supports the formation of the unique competences of individuals and project teams and drives innovation in the company.