Abstract: Compression algorithms reduce the redundancy in
data representation to decrease the storage required for that data.
Lossless compression researchers have developed highly
sophisticated approaches, such as Huffman encoding, arithmetic
encoding, the Lempel-Ziv (LZ) family, Dynamic Markov
Compression (DMC), Prediction by Partial Matching (PPM), and
Burrows-Wheeler Transform (BWT) based algorithms.
Decompression is then required to retrieve the original data losslessly.
A compression scheme for text files is proposed, coupled with the
principle of dynamic decompression, which decompresses only the
section of the compressed text file required by the user instead of
decompressing the entire file. Dynamically decompressed files offer
better disk-space utilization due to higher compression ratios
compared with most currently available text file formats.
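The abstract gives no file-format details; the following Python sketch illustrates the general idea of dynamic decompression under one plausible design, in which the text is compressed as independently decodable fixed-size blocks with an offset index (the block size, the use of zlib, and the index layout are all assumptions, not the paper's format):

```python
import zlib

BLOCK_SIZE = 4096  # uncompressed bytes per independently compressed block

def compress_blocks(text: bytes):
    """Compress text as independent blocks so any block can be
    decompressed on its own (the essence of dynamic decompression)."""
    blocks = [zlib.compress(text[i:i + BLOCK_SIZE])
              for i in range(0, len(text), BLOCK_SIZE)]
    # Index: byte offset of each compressed block within the archive.
    offsets, pos = [], 0
    for b in blocks:
        offsets.append(pos)
        pos += len(b)
    return b"".join(blocks), offsets

def read_range(archive: bytes, offsets, start: int, end: int) -> bytes:
    """Return uncompressed bytes [start, end) by decompressing only
    the blocks that overlap the requested range."""
    first, last = start // BLOCK_SIZE, (end - 1) // BLOCK_SIZE
    out = b""
    for i in range(first, last + 1):
        hi = offsets[i + 1] if i + 1 < len(offsets) else len(archive)
        out += zlib.decompress(archive[offsets[i]:hi])
    skip = start - first * BLOCK_SIZE
    return out[skip:skip + (end - start)]
```

Only the blocks overlapping the requested range are decompressed, so access cost grows with the requested span rather than with the whole file.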
Abstract: In H.264/AVC video encoding, rate-distortion
optimization for mode selection plays a significant role in achieving
outstanding compression efficiency and video quality. However, this
mode selection process also makes encoding extremely complex,
especially the computation of the rate-distortion cost function, which
includes the sum of squared differences (SSD) between the original and
reconstructed image blocks and context-based entropy coding of the
block. In this
paper, a transform-domain rate-distortion optimization accelerator
based on fast SSD (FSSD) and a VLC-based rate estimation algorithm is
proposed. This algorithm can significantly simplify the hardware
architecture for the rate-distortion cost computation with only
marginal performance degradation. An efficient hardware structure for
implementing the proposed transform-domain rate-distortion
optimization accelerator is also proposed. Simulation results
demonstrate that the proposed algorithm reduces total encoding time by
about 47% with negligible degradation of coding performance. The
proposed method can be easily applied to mobile video applications
such as digital cameras and DMB (Digital Multimedia Broadcasting)
phones.
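The FSSD algorithm itself is not reproduced in the abstract, but the transform-domain idea rests on a standard identity: for an orthonormal transform, Parseval's relation makes the coefficient-domain SSD equal to the pixel-domain SSD, so distortion can be evaluated before the inverse transform and reconstruction. A minimal numpy illustration, with scipy's orthonormal DCT as a stand-in for the H.264 integer transform:

```python
import numpy as np
from scipy.fft import dctn

def ssd_pixel(orig, recon):
    return float(np.sum((orig - recon) ** 2))

def ssd_transform(orig, recon):
    # With an orthonormal transform, Parseval's relation makes the
    # coefficient-domain SSD equal to the pixel-domain SSD, so the
    # inverse transform / reconstruction step can be skipped.
    d = dctn(orig, norm="ortho") - dctn(recon, norm="ortho")
    return float(np.sum(d ** 2))

rng = np.random.default_rng(0)
o, r = rng.random((4, 4)), rng.random((4, 4))
assert np.isclose(ssd_pixel(o, r), ssd_transform(o, r))
# Rate-distortion cost: J = SSD + lambda * rate (rate from a VLC table lookup)
```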
Abstract: When a high DC voltage is applied to a capacitor with
strongly asymmetrical electrodes, it generates a mechanical force that
affects the whole capacitor. This phenomenon is most likely to be
caused by the motion of ions generated around the smaller of the two
electrodes and their subsequent interaction with the surrounding
medium. A method to measure this force has been devised and applied,
and a formula describing the force has been derived. Comparison of the
experimental data with values predicted by the theoretical formula
revealed a discrepancy above a certain value of current. This paper
also discusses the reasons for this discrepancy.
Abstract: This article presents a computationally tractable probabilistic model for the relation between the complex wavelet coefficients of two images of the same scene, acquired at distinct moments in time, from distinct viewpoints, or by distinct sensors. By means of the introduced probabilistic model, we argue that the similarity between the two images is controlled not by the values of the wavelet coefficients, which can be altered by many factors, but by the nature of the wavelet coefficients, which we model with the help of hidden state variables. We integrate this probabilistic framework into the construction of a new image registration algorithm. This algorithm has sub-pixel accuracy and is robust to noise and to other variations such as local illumination changes. We present the performance of our algorithm on various image types.
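The exact probabilistic model is not given in the abstract; a toy sketch of the underlying idea, in which each coefficient's hidden state is approximated by a binary significance label and similarity counts state agreement rather than comparing coefficient values, might look as follows (PyWavelets with a real DWT stands in for the complex wavelet transform, and the integer-shift search omits the paper's sub-pixel refinement):

```python
import numpy as np
import pywt

def state_map(img, wavelet="db2", level=2, q=0.8):
    """Binary hidden-state proxy: 1 where a wavelet coefficient is
    'significant' (large magnitude), 0 otherwise."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)
    return np.abs(arr) > np.quantile(np.abs(arr), q)

def state_agreement(a, b):
    """Similarity driven by the *nature* of the coefficients (their
    states), not their values, which may differ between acquisitions."""
    return np.mean(state_map(a) == state_map(b))

def register_shift(ref, mov, max_shift=3):
    """Exhaustive integer-shift search maximizing state agreement."""
    best, best_score = (0, 0), -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            s = state_agreement(ref, shifted)
            if s > best_score:
                best_score, best = s, (dy, dx)
    return best, best_score
```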
Abstract: The 2H NMR signal of water on the surface of the nano-silica material MCM-41 consists of two overlapping resonances. The 2H water spectrum shows a superposition of a Lorentzian line shape and the familiar NMR powder-pattern line shape, indicating the existence of two spin components. Chemical exchange occurs between these two groups, and decomposition of the two signals is a crucial starting point for studying the exchange process. In this article we determine these spin component populations, along with other important parameters of the 2H water NMR signal, over a temperature range between 223 K and 343 K.
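The decomposition step can be illustrated with a least-squares fit of a two-component line-shape model; in this sketch a broad Gaussian stands in for the 2H powder-pattern component (a real analysis would use a Pake line shape), and the spectrum is synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, a, x0, g):
    return a * g**2 / ((x - x0)**2 + g**2)

def broad(x, a, x0, s):
    # Stand-in for the 2H powder-pattern component; a real analysis
    # would use a Pake pattern line shape here.
    return a * np.exp(-0.5 * ((x - x0) / s)**2)

def two_component(x, a1, x1, g1, a2, x2, s2):
    return lorentzian(x, a1, x1, g1) + broad(x, a2, x2, s2)

# Synthetic spectrum standing in for experimental (x, y) data
x = np.linspace(-50, 50, 512)
y = two_component(x, 1.0, 0.0, 2.0, 0.3, 0.0, 15.0)

popt, _ = curve_fit(two_component, x, y, p0=[1, 0, 1, 0.5, 0, 10])
# Relative spin populations from the integrated areas of the components
area_l = np.trapz(lorentzian(x, *popt[:3]), x)
area_b = np.trapz(broad(x, *popt[3:]), x)
print(area_l / (area_l + area_b))
```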
Abstract: This paper describes an efficient and practical method
for the economic dispatch problem in one- and two-area electrical
power systems, considering the tie transmission line capacity
constraint. The direct search method (DSM) is used with some
equality and inequality constraints of the production units with any
kind of fuel cost function. With this method, several inequality
constraints can be handled without difficulty, even for complex cost
functions or when the derivative of the cost function is unavailable.
To minimize the total number of iterations in the search process,
multi-level convergence is incorporated into the DSM. An enhanced
direct search method (EDSM) for the two-area power system is also
investigated, and the initial calculation step size that yields fewer
iterations, and hence less calculation time, is presented. The effect
of the transmission tie-line capacity between areas on the economic
dispatch problem and on total generation cost is studied; line
compensation and combined active and reactive power dispatch are
proposed to overcome the high generation costs in this multi-area
system.
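As a rough illustration of a derivative-free direct search with multi-level step refinement for economic dispatch, consider the sketch below; the quadratic fuel costs, unit limits, and demand are hypothetical, and power is traded between unit pairs so the balance (equality) constraint stays satisfied while the unit limits (inequality constraints) are checked directly:

```python
import numpy as np

# Hypothetical quadratic fuel costs for three units: a + b*P + c*P^2
a = np.array([100.0, 120.0, 90.0])
b = np.array([2.0, 1.8, 2.2])
c = np.array([0.01, 0.012, 0.008])
p_min = np.array([50.0, 50.0, 50.0])
p_max = np.array([300.0, 250.0, 200.0])
demand = 500.0  # MW

def cost(p):
    return float(np.sum(a + b * p + c * p**2))

def direct_search(p, steps=(16.0, 4.0, 1.0, 0.25)):
    """Derivative-free search: trade power between unit pairs so the
    power balance is preserved; refine the step size multi-level."""
    for step in steps:                      # multi-level convergence
        improved = True
        while improved:
            improved = False
            for i in range(len(p)):
                for j in range(len(p)):
                    if i == j:
                        continue
                    q = p.copy()
                    q[i] += step
                    q[j] -= step            # total generation unchanged
                    if np.all(q >= p_min) and np.all(q <= p_max) \
                            and cost(q) < cost(p):
                        p, improved = q, True
    return p

p = direct_search(np.full(3, demand / 3))   # feasible starting point
print(p, cost(p))
```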
Abstract: This paper presents a comparison of the average outgoing
quality limit (AOQL) of the MCSP-2-C plan, which was developed from
MCSP-C, with that of MCSP-C. The parameters used in MCSP-2-C are: i
(the clearance number), c (the acceptance number), m (the number of
conforming units required before c non-conforming units are allowed
in the sampling inspection), and f1 and f2 (the sampling frequencies
at levels 1 and 2, respectively). The AOQL values of the two plans
were compared, and for all sets of i, r, and c values, MCSP-2-C gives
higher values than MCSP-C. For all sets of i, r, and c values, the
average outgoing quality values of MCSP-C and MCSP-2-C are similar
when p is low or high but differ when p is moderate.
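The MCSP-C and MCSP-2-C expressions are not reproduced in the abstract. For orientation only, the sketch below computes the average outgoing quality (AOQ) and its limit (AOQL) for Dodge's classical CSP-1 plan, whose AOQ has the well-known closed form AOQ(p) = p(1-f)q^i / (f + (1-f)q^i) with q = 1-p; the paper's plan-specific formulas would replace the aoq function:

```python
import numpy as np

def aoq_csp1(p, i, f):
    """Average outgoing quality of Dodge's CSP-1 plan (classical
    formula); MCSP-C / MCSP-2-C expressions would be substituted here."""
    q = 1.0 - p
    return p * (1 - f) * q**i / (f + (1 - f) * q**i)

def aoql(i, f):
    """AOQL = maximum of AOQ over the incoming quality level p."""
    ps = np.linspace(1e-4, 0.5, 5000)
    return float(np.max(aoq_csp1(ps, i, f)))

print(aoql(i=38, f=0.1))  # example clearance number and sampling frequency
```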
Abstract: Interior brick-infill partitions are usually considered
non-structural components, and only their weight is accounted for in
practical structural design. In this study, the brick-infill panels are
simulated by compression struts to clarify their effect on the
progressive collapse potential of an earthquake-resistant RC building.
Three-dimensional finite element models are constructed for the RC
building subjected to sudden column loss. Linear static analyses are
conducted to investigate the variation of demand-to-capacity ratio
(DCR) of beam-end moment and the axial force variation of the beams
adjacent to the removed column. Study results indicate that the
effect of the brick infill depends on its location with respect to
the removed column. When the infill is placed in a structural bay
with a shorter span adjacent to the column-removal line, a more
significant reduction of the DCR can be achieved. However, under
certain conditions, the brick infill may increase the axial tension
of the two-span beam bridging the removed column.
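For reference, the demand-to-capacity ratio used in such linear static progressive-collapse checks is conventionally defined (e.g., in the GSA guidelines) as

$$\mathrm{DCR} = \frac{Q_{UD}}{Q_{CE}}$$

where $Q_{UD}$ is the member force (here, the beam-end moment) from the linear static analysis of the column-removed model and $Q_{CE}$ is the expected member strength.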
Abstract: This paper proposes a Fuzzy Expert System design to
determine the wear properties of nitrided and non-nitrided steel.
The proposed Fuzzy Expert System approach helps the user and the
manufacturer forecast the wear properties of nitrided and
non-nitrided steel under specified laboratory conditions. Surfaces of
engineering components are often nitrided to improve their wear,
corrosion, and fatigue properties. A major benefit of the nitriding
process is reduced distortion and wear of metallic alloys. A Fuzzy
Expert System was developed for determining the wear and durability
properties of nitrided and non-nitrided steels tested under different
loads and different sliding speeds under laboratory conditions.
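The abstract does not specify the rule base or membership functions; a minimal sketch of such a system with the scikit-fuzzy control API, using hypothetical universes for load, sliding speed, and wear and a three-rule base, might look as follows:

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Hypothetical universes: load (N), sliding speed (m/s), wear (arb. units)
load = ctrl.Antecedent(np.arange(0, 101, 1), "load")
speed = ctrl.Antecedent(np.arange(0.0, 2.01, 0.01), "speed")
wear = ctrl.Consequent(np.arange(0, 101, 1), "wear")

for var in (load, speed, wear):
    u = var.universe
    mid = u[len(u) // 2]
    var["low"] = fuzz.trimf(u, [u[0], u[0], mid])
    var["medium"] = fuzz.trimf(u, [u[0], mid, u[-1]])
    var["high"] = fuzz.trimf(u, [mid, u[-1], u[-1]])

rules = [
    ctrl.Rule(load["low"] & speed["low"], wear["low"]),
    ctrl.Rule(load["medium"] | speed["medium"], wear["medium"]),
    ctrl.Rule(load["high"] & speed["high"], wear["high"]),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["load"] = 60.0    # test load from a wear experiment
sim.input["speed"] = 1.2    # sliding speed
sim.compute()
print(sim.output["wear"])   # defuzzified wear prediction
```

A separate rule base, or an extra antecedent, would distinguish nitrided from non-nitrided specimens.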
Abstract: The ever increasing product diversity and competition on the market of goods and services has dictated the pace of growth in the number of advertisements. Despite their admittedly diminished effectiveness over the recent years, advertisements remain the favored method of sales promotion. Consequently, the challenge for an advertiser is to explore every possible avenue of making an advertisement more noticeable, attractive and impellent for consumers. One way to achieve this is through invoking celebrity endorsements. On the one hand, the use of a celebrity to endorse a product involves substantial costs, however, on the other hand, it does not immediately guarantee the success of an advertisement. The question of how celebrities can be used in advertising to the best advantage is therefore of utmost importance. Celebrity endorsements have become commonplace: empirical evidence indicates that approximately 20 to 25 per cent of advertisements feature some famous person as a product endorser. The popularity of celebrity endorsements demonstrates the relevance of the topic, especially in the context of the current global economic downturn, when companies are forced to save in order to survive, yet simultaneously to heavily invest in advertising and sales promotion. The issue of the effective use of celebrity endorsements also figures prominently in the academic discourse. The study presented below is thus aimed at exploring what qualities (characteristics) of a celebrity endorser have an impact on the ffectiveness of the advertisement in which he/she appears and how.
Abstract: This paper suggests a new Affine Projection (AP) algorithm with variable data-reuse factor using the condition number as a decision factor. To reduce computational burden, we adopt a recently reported technique which estimates the condition number of an input data matrix. Several simulations show that the new algorithm has better performance than that of the conventional AP algorithm.
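The paper's fast condition-number estimator is not reproduced in the abstract; the sketch below shows a standard affine projection update together with a naive decision rule in its place, with numpy's exact cond() standing in for the estimated condition number:

```python
import numpy as np

def ap_update(w, X, d, mu=0.5, delta=1e-4):
    """One affine-projection (AP) update. X is the (M, K) matrix of the
    K most recent input vectors (K = data-reuse factor), d the (K,)
    desired samples, w the (M,) adaptive filter weights."""
    e = d - X.T @ w                              # a priori error vector
    G = X.T @ X + delta * np.eye(X.shape[1])     # regularized Gram matrix
    return w + mu * X @ np.linalg.solve(G, e)

def choose_K(G, K_max, threshold=1e3):
    """Naive variable data-reuse rule: use a large K only when the Gram
    matrix is well conditioned (i.e., the data are informative);
    np.linalg.cond is an exact stand-in for the low-cost estimator."""
    return K_max if np.linalg.cond(G) < threshold else 1
```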
Abstract: Nowadays there is growing interest in biofuel production in most countries because of increasing concerns about hydrocarbon fuel shortages and global climate change, as well as the desire to strengthen agricultural economies and meet local transportation-fuel needs. Ethanol can be produced from biomass by hydrolysis and sugar fermentation. In this study, ethanol was produced from sugarcane bagasse without using expensive commercial enzymes. Alkali pretreatment was used to prepare the biomass before enzymatic hydrolysis. A comparison of NaOH, KOH, and Ca(OH)2 showed that NaOH is the most effective on bagasse. The enzymes required for biomass hydrolysis were produced by solid-state fermentation of sugarcane via two fungi: Trichoderma longibrachiatum and Aspergillus niger. The results show that the enzyme solution produced via A. niger performed better than that of T. longibrachiatum. Ethanol was produced by simultaneous saccharification and fermentation (SSF) with the crude enzyme solution from T. longibrachiatum and Saccharomyces cerevisiae yeast. To evaluate this procedure, SSF of pretreated bagasse was also carried out using Celluclast 1.5L by Novozymes. The ethanol yields with the commercial enzyme and with the enzyme solution produced via T. longibrachiatum were 81% and 50%, respectively.
Abstract: Feature selection has recently been the subject of intensive research in data mining, especially for datasets with a large number of attributes. Recent work has shown that feature selection can have a positive effect on the performance of machine learning algorithms. The success of many learning algorithms in their attempts to construct models of data hinges on the reliable identification of a small set of highly predictive attributes. The inclusion of irrelevant, redundant, and noisy attributes in the model-building phase can result in poor predictive performance and increased computation. In this paper, a novel feature search procedure that utilizes Ant Colony Optimization (ACO) is presented. ACO is a metaheuristic inspired by the behavior of real ants in their search for the shortest paths to food sources. It looks for optimal solutions by considering both local heuristics and previous knowledge. When applied to two different classification problems, the proposed algorithm achieved very promising results.
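As a hedged illustration of ACO-based feature search (the paper's pheromone update, heuristic, and subset-size rules are not given in the abstract, so the ones below are assumptions), pheromone levels and a correlation heuristic bias each ant's feature choices, and cross-validated accuracy scores the resulting subsets:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
n_feat = X.shape[1]
rng = np.random.default_rng(0)

tau = np.ones(n_feat)                        # pheromone per feature
eta = np.abs(np.corrcoef(X.T, y)[:-1, -1])   # local heuristic: |corr with y|

def fitness(mask):
    """Subset quality: 3-fold cross-validated accuracy of a simple KNN."""
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

best_mask, best_fit = None, 0.0
for it in range(20):                         # iterations
    for ant in range(10):                    # ants per iteration
        prob = (tau * eta) / (tau * eta).sum()
        chosen = rng.choice(n_feat, size=8, replace=False, p=prob)
        mask = np.zeros(n_feat, bool)
        mask[chosen] = True
        f = fitness(mask)
        if f > best_fit:
            best_fit, best_mask = f, mask
    tau *= 0.9                               # pheromone evaporation
    tau[best_mask] += best_fit               # reinforce the best subset
print(best_fit, np.flatnonzero(best_mask))
```

The fixed subset size of 8 features is a simplification; letting each ant decide its subset length is a common variant.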
Abstract: Metal matrix composites (MMC) are generating
extensive interest in diverse fields like defense, aerospace, electronics
and automotive industries. In the present investigation, material
removal rate (MRR) modeling has been carried out using an
axisymmetric model of an Al-SiC composite during electrical discharge
machining (EDM). An FEA model of a single-spark EDM discharge was
developed to calculate the temperature distribution. Further, the
single-spark model was extended to simulate a second discharge. For
multi-discharge machining, material removal was determined by
calculating the number of pulses. The model was validated by
comparing the analytical results with experimental results obtained
under the same process parameters. Good agreement was found between
the experimental results and the theoretical values.
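The FEA details are not given in the abstract; a compact finite-difference sketch of the single-spark step, solving the axisymmetric heat equation with a Gaussian spark flux on the surface, conveys the idea (material constants, peak flux, and spark radius are nominal stand-ins, not the paper's calibrated inputs):

```python
import numpy as np

# Explicit finite differences for dT/dt = alpha*(T_rr + T_r/r + T_zz)
# on an r-z grid, with a Gaussian spark flux applied at the surface z=0.
alpha, k = 6.0e-5, 150.0           # diffusivity (m^2/s), conductivity (W/m K)
nr, nz, dr, dz = 60, 60, 1e-6, 1e-6
dt = 0.2 * min(dr, dz) ** 2 / alpha  # stable explicit time step
r = (np.arange(nr) + 0.5) * dr       # offset grid avoids the r=0 singularity
T = np.full((nz, nr), 300.0)         # initial temperature (K)

q0, sigma = 2e11, 5e-6               # peak flux (W/m^2), spark radius (m)
flux = q0 * np.exp(-r**2 / (2 * sigma**2))

for step in range(400):              # one spark-on period (~1.3 us here)
    Tp = np.pad(T, 1, mode="edge")   # ghost cells (zero-gradient default)
    Trr = (Tp[1:-1, 2:] - 2 * T + Tp[1:-1, :-2]) / dr**2
    Tr = (Tp[1:-1, 2:] - Tp[1:-1, :-2]) / (2 * dr) / r
    Tzz = (Tp[2:, 1:-1] - 2 * T + Tp[:-2, 1:-1]) / dz**2
    T = T + dt * alpha * (Trr + Tr + Tzz)
    T[0, :] = T[1, :] + flux * dz / k   # conducted flux = spark flux
    T[-1, :] = 300.0                    # far-field boundaries at ambient
    T[:, -1] = 300.0

# Cells above the melting point approximate the crater (removed) volume
print(int((T > 933.0).sum()), "grid cells above the Al melting point")
```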
Abstract: This paper presents a new strategy for the identification
and classification of pathological voices using a hybrid method
based on wavelet transforms and neural networks. After speech
acquisition from a patient, the speech signal is analysed to extract
acoustic parameters such as pitch, formants, jitter, and shimmer. The
results obtained are compared with normal and standard values stored
in a programmable database. Sounds are collected from normal people
and patients and then classified into two categories. The speech
database consists of several pathological and normal voices collected
from the national hospital "Rabta-Tunis". The speech processing
algorithm is conducted in a supervised mode for discrimination
between normal and pathological voices, and then for classification
among neural and vocal pathologies (Parkinson, Alzheimer, laryngeal,
dyslexia...). Several simulation results are presented as a function
of the disease and compared with the clinical diagnosis in order to
provide an objective evaluation of the developed tool.
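A sketch of the acoustic-parameter extraction stage using Parselmouth (a Python interface to Praat) follows; the file name is hypothetical and the analysis settings are common defaults rather than the paper's values:

```python
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("patient_voice.wav")   # hypothetical recording

# Pitch (F0) and formants
pitch = snd.to_pitch(pitch_floor=75.0, pitch_ceiling=500.0)
mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")
formants = snd.to_formant_burg()
f1 = call(formants, "Get mean", 1, 0, 0, "hertz")
f2 = call(formants, "Get mean", 2, 0, 0, "hertz")

# Jitter and shimmer from the glottal pulse train
pp = call(snd, "To PointProcess (periodic, cc)", 75.0, 500.0)
jitter = call(pp, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer = call([snd, pp], "Get shimmer (local)",
               0, 0, 0.0001, 0.02, 1.3, 1.6)

# Feature vector a neural-network classifier could consume
features = [mean_f0, f1, f2, jitter, shimmer]
print(features)
```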
Abstract: The purpose of this work is to present a method for
rigid registration of medical images using 1D binary projections
when a part of one of the two images is missing. We use 1D binary
projections and we adjust the projection limits according to the
reduced image in order to perform accurate registration. We use the
variance of the weighted ratio as a registration function which we
have shown is able to register 2D and 3D images more accurately and
robustly than mutual information methods. The function is computed
explicitly for n=5 Chebyshev points in a [-9,+9] interval and it is
approximated using Chebyshev polynomials for all other points. The
images used are MR scans of the head. We find that the method is
able to register the two images with an average accuracy of 0.3
degrees for rotations and 0.2 pixels for translations, for a y
dimension of 156 out of an initial dimension of 256. For a y
dimension of 128 out of 256, the accuracy decreases to 0.7 degrees
for rotations and 0.6 pixels for translations.
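The Chebyshev step can be illustrated with numpy's polynomial module: evaluate the (expensive) registration function at 5 Chebyshev points mapped into [-9, +9], fit a degree-4 Chebyshev series, and evaluate the cheap approximation everywhere else; the stand-in function below is synthetic, not the paper's variance of the weighted ratio:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def registration_function(x):
    """Synthetic stand-in for the expensive cost computed from the
    1D binary projections."""
    return np.cos(0.4 * x) + 0.05 * x**2

a, b = -9.0, 9.0
# 5 Chebyshev points of the first kind, mapped from [-1, 1] to [-9, 9]
nodes = 0.5 * (b - a) * C.chebpts1(5) + 0.5 * (a + b)
vals = registration_function(nodes)

# Fit a degree-4 Chebyshev series on the normalized interval,
# then evaluate it cheaply at any other point in [-9, 9].
coeffs = C.chebfit(2 * (nodes - a) / (b - a) - 1, vals, deg=4)
approx = lambda x: C.chebval(2 * (x - a) / (b - a) - 1, coeffs)

x = np.linspace(a, b, 7)
print(np.max(np.abs(approx(x) - registration_function(x))))
```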
Abstract: In this paper, a method for matching image segments
using triangle-based (geometrical) regions is proposed. Triangular
regions are formed from triples of vertex points obtained from a
keypoint detector (SIFT). However, triangle regions are subject to
noise and distortion around the edges and vertices (especially at
acute angles). Therefore, these triangles are expanded into
parallelogram-shaped regions. The extracted image segments inherit an
important triangle property: invariance to affine distortion. Given
two images, matching of corresponding regions is conducted by
computing the relative affine matrix, rectifying one of the regions
with respect to the other, and then calculating the similarity
between the reference and the rectified region. Experimental tests
show the efficiency and robustness of the proposed algorithm against
geometric distortion.
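A minimal OpenCV sketch of the matching step, assuming grayscale images and three matched SIFT keypoints per region (the parallelogram expansion is omitted and the triangle itself serves as the comparison mask, with normalized cross-correlation as the similarity score):

```python
import cv2
import numpy as np

def match_regions(img1, img2, tri1, tri2, thresh=0.8):
    """Rectify the region around triangle tri2 (a 3x2 array of matched
    keypoint coordinates) onto tri1's frame, then score similarity by
    normalized cross-correlation inside the triangle mask."""
    # Relative affine matrix from the three vertex correspondences
    A = cv2.getAffineTransform(tri2.astype(np.float32),
                               tri1.astype(np.float32))
    h, w = img1.shape[:2]
    rectified = cv2.warpAffine(img2, A, (w, h))

    # Compare pixels only inside the reference triangle region
    mask = np.zeros((h, w), np.uint8)
    cv2.fillConvexPoly(mask, tri1.astype(np.int32), 255)
    a = img1[mask > 0].astype(np.float64)
    b = rectified[mask > 0].astype(np.float64)
    ncc = np.corrcoef(a, b)[0, 1]   # normalized cross-correlation
    return ncc > thresh, ncc
```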
Abstract: Segmentation, filtering out of measurement errors and
identification of breakpoints are integral parts of any analysis of
microarray data for the detection of copy number variation (CNV).
Existing algorithms designed for these tasks have had some successes
in the past, but they tend to be O(N^2) in computation time or memory
requirement, or both, and the rapid advance of microarray resolution
has practically rendered such algorithms useless. Here we propose an
algorithm, SAD, that is much faster, far less memory-hungry (O(N) in
both computation time and memory requirement), and more accurate. The
two key ingredients of SAD are the fundamental assumption in
statistics that measurement errors are normally distributed and the
mathematical fact that the product of two Gaussian functions is
another Gaussian function. We have produced a
computer program for analyzing CNV based on SAD. In addition to
being fast and small, it offers two important features: quantitative
statistics for its predictions and, with only two user-set
parameters, ease of use. Its speed shows little dependence on the
genomic profile. Running on an average modern computer, it completes
CNV analysis for a 262-thousand-probe array in about 1 second and for
a 1.8-million-probe array in 9 seconds.
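The Gaussian-product identity the abstract refers to is the standard closed form

$$
\mathcal{N}(x;\mu_1,\sigma_1^2)\,\mathcal{N}(x;\mu_2,\sigma_2^2)
\propto \mathcal{N}(x;\mu,\sigma^2),\qquad
\sigma^2=\Bigl(\tfrac{1}{\sigma_1^2}+\tfrac{1}{\sigma_2^2}\Bigr)^{-1},\quad
\mu=\sigma^2\Bigl(\tfrac{\mu_1}{\sigma_1^2}+\tfrac{\mu_2}{\sigma_2^2}\Bigr),
$$

which lets two normally distributed estimates be combined in closed form in constant time per probe, consistent with the O(N) claims.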
Abstract: I/O workload is a critical factor in analyzing I/O
patterns and file system performance. However, tracing I/O operations
on the fly in a distributed parallel file system is non-trivial due
to collection overhead and the large volume of data. In this paper,
we design and implement a parallel file system logging method for
high-performance computing using a shared-memory-based multi-layer
scheme. It minimizes overhead with reduced logging-operation response
time and provides an efficient post-processing scheme through shared
memory. A separate logging server can collect sequential logs from
multiple clients in a cluster through packet communication.
Implementation and evaluation results show the low overhead and high
scalability of this architecture for high-performance parallel
logging analysis.
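Implementation details are not given in the abstract; a toy Python sketch of the shared-memory layer, with clients appending fixed-size records to a ring buffer and a logging server draining them for post-processing, might look as follows (the record layout and sizes are assumptions, and the single-writer index update omits locking and overwrite handling):

```python
from multiprocessing import shared_memory
import struct
import time

SLOT, NSLOTS, HDR = 64, 1024, 8   # record size, ring length, index header

def create_log_buffer(name="pfs_log"):
    # Clients attach with SharedMemory(name=name); close()/unlink() omitted.
    return shared_memory.SharedMemory(name=name, create=True,
                                      size=HDR + SLOT * NSLOTS)

def log_event(shm, op_code, latency_us):
    """Client fast path: append one record to the shared-memory ring;
    no disk or network I/O happens on the traced request path."""
    idx = struct.unpack_from("<Q", shm.buf, 0)[0]
    off = HDR + (idx % NSLOTS) * SLOT
    struct.pack_into("<dII", shm.buf, off, time.time(), op_code, latency_us)
    struct.pack_into("<Q", shm.buf, 0, idx + 1)  # publish new write index

def drain(shm, last_seen):
    """Logging-server side: collect records appended since last_seen and
    forward them (e.g., over a packet connection) for analysis."""
    idx = struct.unpack_from("<Q", shm.buf, 0)[0]
    records = [struct.unpack_from("<dII", shm.buf,
                                  HDR + (i % NSLOTS) * SLOT)
               for i in range(last_seen, idx)]
    return records, idx
```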
Abstract: Food control measures are critical in fostering the food safety management of a nation. However, no academic study has been undertaken to assess the food control system of Myanmar up to now. The objective of this research paper was to assess the food control system with an in-depth examination of five key components, using desktop analysis and a short survey of related food safety program organizations, including regulators and inspectors. The study showed that the existing food control system is conventional, mainly focusing on a primary health care approach while relying on reactive measures. The achievements of food control work have been limited to a certain extent by insufficient technical capacity, which is needed to upgrade staff, laboratory equipment, and technical assistance across the various sectors involved. Since assessing food control measures is the first step in the integration of food safety management, this paper could assist policy makers by providing information for enhancing the safety and quality of food produced and consumed in Myanmar.