Abstract: Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for patients with sensorineural loss. Several investigations of speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal-hearing subjects. This paper describes a Discrete Cosine Transform Power-Normalized Least Mean Square (DCT-LMS) algorithm to improve the SNR and the convergence rate of the LMS filter for sensorineural loss patients. Since it requires only real arithmetic, it achieves a faster convergence rate than the time-domain LMS, and the transformation also improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter. The DCT has good orthonormality, separability, and energy compaction properties. Although the DCT does not separate frequencies, it is a powerful signal decorrelator. It is a real-valued transform and thus can be used effectively in real-time operation. The advantages of DCT-LMS over the standard LMS algorithm are shown via SNR and eigenvalue ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [AN] can be factored into a series of ±1 butterflies and rotation angles. This factorization yields one of the fastest DCT implementations. There are different ways to obtain such factorizations; this work uses the fast factored DCT algorithm developed by Chen et al. Computer simulation results show the superior convergence characteristics of the proposed algorithm: the SNR is improved by at least 10 dB for input SNRs at or below 0 dB, with faster convergence speed and better time- and frequency-domain characteristics.
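As a concrete illustration, a minimal power-normalized transform-domain LMS update can be sketched as follows. This is a generic sketch of the technique, not the paper's implementation: it uses a direct O(N^2) orthonormal DCT for clarity (where the paper would use Chen's fast factored algorithm), and the filter length, step size, and forgetting factor are illustrative.

```python
import math

def dct(x):
    # Orthonormal DCT-II of a real vector (direct O(N^2) form for clarity;
    # a fast factored DCT would replace this in practice).
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(c * s)
    return out

def dct_lms_step(w, p, x_taps, d, mu=0.1, beta=0.9, eps=1e-8):
    """One update of a power-normalized transform-domain LMS filter.
    w: filter weights; p: per-bin running power estimates (initialise
    to ones); x_taps: current tap vector; d: desired sample."""
    u = dct(x_taps)                      # decorrelate the tap vector
    y = sum(wi * ui for wi, ui in zip(w, u))
    e = d - y                            # output error
    for k in range(len(w)):
        p[k] = beta * p[k] + (1 - beta) * u[k] ** 2  # running power per bin
        w[k] += mu * e * u[k] / (p[k] + eps)         # normalized update
    return y, e
```

Because each DCT bin is normalized by its own running power estimate, the modes of the decorrelated input converge at comparable speeds, which is the mechanism behind the improved eigenvalue spread.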
Abstract: This paper presents an application of particle swarm optimization (PSO) to grounding grid planning and compares it with the application of a genetic algorithm (GA). First, based on IEEE Std. 80, the cost function of the grounding grid and the constraints on ground potential rise, step voltage, and touch voltage are constructed to formulate the grounding grid planning optimization problem. Second, GA and PSO algorithms for obtaining the optimal grounding grid solution are developed. Finally, a grounding grid planning case demonstrates the superiority and applicability of the PSO algorithm and of the proposed planning results in terms of cost and computational time.
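A generic PSO loop of the kind used for such planning problems can be sketched as below. The grounding-grid cost and the IEEE Std. 80 constraints are represented here only by a toy penalized surrogate; `grid_cost`, its coefficients, and the voltage limit are hypothetical, not values from the paper or the standard.

```python
import random

def pso_minimize(cost, lo, hi, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Plain global-best PSO over a box [lo, hi] in each dimension."""
    rng = random.Random(seed)
    dim = len(lo)
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])
            f = cost(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

def grid_cost(x):
    # Hypothetical penalized cost: conductor material cost plus a penalty
    # when a toy "touch voltage" surrogate exceeds a limit.  NOT IEEE Std. 80.
    spacing, depth = x
    material = 1000.0 / spacing + 50.0 * depth   # denser grid costs more
    touch_v = 800.0 * spacing / (1.0 + depth)    # toy surrogate only
    return material + (1e3 * (touch_v - 500.0) if touch_v > 500.0 else 0.0)
```

In practice the constraint penalties would come from the ground potential rise, step voltage, and touch voltage formulas of IEEE Std. 80 rather than this surrogate.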
Abstract: The aim of this study is to emphasize the opportunities that space design offers as a performance area for HCI. HCI is a multidisciplinary approach that can be identified in many different fields. The aesthetic reflections of HCI through virtual reality in space design are high-tech solutions that combine new computational capabilities with artistic features. The method of this paper is to treat the subject in three main parts. The first part gives a general approach to, and definition of, interactivity on the basis of space design; the second part discusses the concept of multimedia interactive theatre through selected examples from around the world together with interactive design aspects; in the third part, examples from Turkey are examined in terms of stage design principles. In conclusion, it can be stated that the multimedia database is the virtual counterpart of theatre stage design with respect to interactive means, computational facilities, and aesthetic aspects. HCI is most often identified in theatre stages as computational intelligence under the influence of interactivity.
Abstract: The paper discusses the mathematics of pattern indexing and its applications to the recognition of visual patterns found in video clips. It is shown that (a) pattern indexes can be represented by collections of inverted patterns, and (b) solutions to pattern classification problems can be found as intersections and histograms of inverted patterns, so that matching of the original patterns is avoided.
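The idea of classifying through an inverted index rather than direct pattern matching can be sketched as follows; the feature representation and the histogram-based decision rule here are simplified illustrations, not the paper's actual construction.

```python
def build_inverted_index(patterns):
    """Inverted pattern index: map each feature to the set of pattern ids
    that contain it, so a query touches posting sets instead of patterns."""
    index = {}
    for pid, feats in patterns.items():
        for f in feats:
            index.setdefault(f, set()).add(pid)
    return index

def match(index, query_feats):
    """Classify by histogramming the posting sets of the query's features;
    the original patterns are never matched directly."""
    hist = {}
    for f in query_feats:
        for pid in index.get(f, ()):
            hist[pid] = hist.get(pid, 0) + 1
    return max(hist, key=hist.get) if hist else None
```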
Abstract: Sensory nerves in the foot play an important part in the diagnosis of various neuropathy disorders, especially in diabetes mellitus. However, a detailed description of the anatomical distribution of these nerves is currently lacking. A computational model of the afferent nerves in the foot may be a useful tool for the study of diabetic neuropathy. In this study, we present the development of an anatomically based model of the major sensory nerves of the sole and dorsal sides of the foot. In addition, we present an algorithm for generating synthetic somatosensory nerve networks in the big-toe region of a right foot model. The algorithm is based on a modified Monte Carlo algorithm, with the capability to vary the intra-epidermal nerve fiber density in different regions of the foot model. Preliminary results from the combined model show the realistic anatomical structure of the major nerves as well as of the smaller somatosensory nerves of the foot. The model may now be developed further to investigate the functional outcomes of structural neuropathy in diabetic patients.
Abstract: Wireless sensor networks (WSNs) consist of many sensor nodes placed in unattended environments, such as military sites, in order to collect important information. It is very important to implement a secure protocol that prevents the forwarding of forged data and the modification of aggregated data, while keeping the delay and the communication, computation, and storage overhead low. This paper presents a new protocol for concealed data aggregation (CDA). In this protocol, the network is divided into virtual cells, and the nodes within each cell produce a shared key to send and receive concealed data among themselves. Because data aggregation within each cell is local and a secure authentication mechanism is implemented, the data aggregation delay is very low and malicious nodes cannot inject false data into the network. To evaluate the performance of the proposed protocol, we present computational models that show its performance and low overhead.
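One well-known way to realize concealed yet aggregatable data, in the spirit of additively homomorphic CDA schemes, is modular addition of a key stream. The sketch below illustrates only that generic mechanism and is not the paper's protocol: cell formation, key agreement, and the authentication mechanism are omitted.

```python
M = 1 << 32  # modulus; in practice sized to the aggregate's range

def conceal(value, key):
    """Conceal a sensor reading with an additive key-stream value.
    Ciphertexts can be aggregated by modular addition without decryption."""
    return (value + key) % M

def aggregate(ciphers):
    """In-network aggregation over concealed values (no decryption needed)."""
    return sum(ciphers) % M

def reveal(agg, keys):
    """The sink removes the sum of the contributing keys to recover
    the aggregate of the plaintext readings."""
    return (agg - sum(keys)) % M
```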
Abstract: The novel idea of this research is the application of a new fault detection and isolation (FDI) technique for the supervision of sensor networks in transportation systems. In measurement systems, it is necessary to detect all types of faults and failures based on a predefined algorithm. Recent advances in artificial neural networks (ANNs) have led to their use for several FDI purposes. In this paper, new probabilistic neural network features for data approximation and data classification are applied to plausibility checks in temperature measurement. For this purpose, a two-phase FDI mechanism is used for residual generation and evaluation.
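A probabilistic neural network is essentially a Parzen-window classifier, which can be sketched in a few lines; the residual patterns and the smoothing parameter below are illustrative, not values from the described measurement system.

```python
import math

def pnn_classify(x, classes, sigma=0.5):
    """Parzen-window PNN: score each class by the averaged Gaussian kernel
    over its training patterns and return the best-scoring label.  Here it
    would classify a residual vector as, e.g., 'healthy' vs 'faulty'."""
    best, best_s = None, -1.0
    for label, patterns in classes.items():
        s = sum(math.exp(-sum((a - b) ** 2 for a, b in zip(x, p))
                         / (2 * sigma ** 2))
                for p in patterns) / len(patterns)
        if s > best_s:
            best, best_s = label, s
    return best
```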
Abstract: The H.264/AVC standard is a highly efficient video codec providing high-quality video at low bit-rates. Because it employs advanced coding techniques, its computational complexity is high, and this complexity is the major obstacle to implementing a real-time encoder and decoder. Parallelism is one approach to the problem, and it can be exploited on multi-core systems. We analyze macroblock-level parallelism, which preserves the bit rate while achieving high processor concurrency. In order to reduce the encoding time, a dynamic data partition based on macroblock regions is proposed. This data partition has advantages in load balancing and data-communication overhead. Using the data partition, the encoder obtains more than a 3.59x speed-up on a four-processor system. This work can also be applied to other multimedia processing applications.
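The concurrency pattern behind macroblock-level parallelism can be sketched as a wavefront: in H.264, a macroblock depends on its left, top-left, top, and top-right neighbours, so all macroblocks with equal x + 2y are mutually independent. The helper below, a generic illustration rather than the paper's dynamic partitioner, enumerates those concurrent waves.

```python
def wavefront_order(w, h):
    """Group macroblocks of a w x h frame into wavefronts.  MB(x, y) can
    start once (x-1, y) and (x+1, y-1) are done, so every MB with the same
    key x + 2*y is independent and the whole wave can run concurrently."""
    waves = {}
    for y in range(h):
        for x in range(w):
            waves.setdefault(x + 2 * y, []).append((x, y))
    return [waves[k] for k in sorted(waves)]
```

The size of the largest wave bounds the usable concurrency, which is why dynamic partitioning over macroblock regions helps balance load across cores.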
Abstract: Emerging bio-engineering fields such as brain-computer interfaces, neuroprosthesis devices, and the modeling and simulation of neural networks have led to increased research activity in algorithms for the detection, isolation, and classification of action potentials (APs) from noisy data trains. Current techniques in the field of 'unsupervised, no-prior-knowledge' biosignal processing include energy operators, wavelet detection, and adaptive thresholding. These tend to be biased towards larger AP waveforms; APs may be missed because of deviations in spike shape and frequency, and correlated noise spectra can cause false detections. Such algorithms also tend to incur large computational expense.
A new signal detection technique based upon the ideas of phase-space diagrams and trajectories is proposed, built on the use of a delayed copy of the AP to highlight discontinuities relative to background noise. This idea has been used to create algorithms that are computationally inexpensive and address the above problems.
Distinct APs have been picked out and manually classified from real physiological data recorded from a cockroach. To facilitate testing of the new technique, an autoregressive moving average (ARMA) noise model has been constructed based upon the background noise of the recordings. Together with the classified APs, this model enables the generation of realistic neuronal data sets at arbitrary signal-to-noise ratios (SNRs).
Abstract: In this paper, a fast motion compensation algorithm is
proposed that improves coding efficiency for video sequences with
brightness variations. We also propose a cross entropy measure
between histograms of two frames to detect brightness variations. The
framewise brightness variation parameters, a multiplier and an offset
field for image intensity, are estimated and compensated. Simulation
results show that the proposed method yields a higher peak signal to
noise ratio (PSNR) compared with the conventional method, with a
greatly reduced computational load, when the video scene contains
illumination changes.
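The two ingredients of the method can be illustrated as follows: a cross-entropy measure between frame histograms to flag a brightness variation, and a least-squares estimate of the framewise multiplier and offset. This is a simplified sketch of those ideas (a global gain/offset fit rather than the paper's exact estimator).

```python
import math

def cross_entropy(h_prev, h_cur, eps=1e-12):
    """Cross entropy between two intensity histograms (normalized inside);
    a large value relative to recent frames flags a brightness change."""
    n1, n2 = sum(h_prev), sum(h_cur)
    return -sum((p / n1) * math.log(c / n2 + eps)
                for p, c in zip(h_prev, h_cur))

def estimate_gain_offset(prev, cur):
    """Least-squares multiplier a and offset b so that cur ~= a*prev + b,
    estimated from co-located intensities of the two frames."""
    n = len(prev)
    mp = sum(prev) / n
    mc = sum(cur) / n
    cov = sum((p - mp) * (c - mc) for p, c in zip(prev, cur)) / n
    var = sum((p - mp) ** 2 for p in prev) / n
    a = cov / var if var else 1.0
    return a, mc - a * mp
```

Once a and b are known, the reference frame can be compensated as a*I + b before motion search, which is what removes the PSNR loss under illumination change.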
Abstract: This paper presents applications of computational intelligence techniques to economic load dispatch problems. The fuel cost of a thermal plant is generally expressed as a continuous quadratic function, but in real situations the fuel cost function can be discontinuous. In view of this, both continuous and discontinuous fuel cost functions are considered in the present paper. First, a genetic algorithm is applied to a 6-generator, 26-bus test system with continuous fuel cost functions; the results are compared with the conventional quadratic programming method to show the superiority of the proposed computational intelligence technique. Further, a 10-generator system, each generator with three fuel options, distributed over three areas is considered, and particle swarm optimization is employed to minimize the cost of generation. To show the superiority of the proposed approach, the results are compared with other published methods.
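For the continuous quadratic case, the conventional baseline can be sketched as an equal incremental-cost (lambda-iteration) dispatch, solving dF_i/dP_i = b_i + 2*c_i*P_i = lambda by bisection. The generator coefficients and the lambda search interval below are illustrative, not data from the paper's test systems.

```python
def dispatch_lambda(units, demand, tol=1e-9):
    """Equal incremental-cost dispatch for quadratic costs
    F_i(P_i) = a_i + b_i*P_i + c_i*P_i^2 subject to limits [pmin, pmax].
    units: list of (a, b, c, pmin, pmax).  Bisects lambda until the
    clamped optimal outputs meet the demand."""
    lo, hi = 0.0, 1000.0   # lambda bracket (illustrative)
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        total = sum(min(max((lam - b) / (2 * c), pmin), pmax)
                    for a, b, c, pmin, pmax in units)
        if total < demand:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    sched = [min(max((lam - b) / (2 * c), pmin), pmax)
             for a, b, c, pmin, pmax in units]
    cost = sum(a + b * p + c * p * p
               for (a, b, c, _, _), p in zip(units, sched))
    return sched, cost
```

This closed-form baseline breaks down when the cost curves are discontinuous (multiple fuel options, valve points), which is exactly where GA and PSO remain applicable.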
Abstract: In this paper, a recursive algorithm for the
computation of 2-D DCT using Ramanujan Numbers is proposed.
With this algorithm, the floating-point multiplication is completely
eliminated and hence the multiplierless algorithm can be
implemented using shifts and additions only. The orthogonality of
the recursive kernel is well maintained through matrix factorization
to reduce the computational complexity. The inherent parallel
structure yields simpler programming and hardware implementation
and provides (3/2)N^2 log2(N) - N + 1 additions and (N^2/2) log2(N) shifts, which is much less complex than other recent multiplierless algorithms.
Abstract: Grid computing is a high-performance computing environment for solving large-scale computational applications. It involves resource management, job scheduling, security, information management, and so on. Job scheduling is a fundamental and important issue for achieving high performance in grid computing systems, yet designing and implementing an efficient scheduler remains a big challenge. Job scheduling in grid computing can be further improved by grouping light-weight (small) jobs into coarse-grained job groups, which reduces communication time and processing time and enhances resource utilization. The grouping strategy considers the processing power, memory size, and bandwidth requirements of each job in order to model a realistic grid system. The experimental results demonstrate that the proposed scheduling algorithm efficiently reduces the processing time of jobs in comparison to other approaches.
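The grouping step can be sketched as follows: light-weight jobs are packed into a group until the group would exceed the target resource's processing power, memory size, or bandwidth. The job fields and capacity units are illustrative, not the paper's exact model.

```python
def group_jobs(jobs, mips, memory, bandwidth):
    """Pack fine-grained jobs into coarse-grained groups bounded by the
    target resource's capacities (processing power in MI, memory, bandwidth).
    jobs: dicts with 'mi', 'mem', 'bw' requirements (illustrative fields)."""
    groups = []
    cur = {"mi": 0, "mem": 0, "bw": 0, "jobs": []}
    for job in jobs:
        over = (cur["mi"] + job["mi"] > mips
                or cur["mem"] + job["mem"] > memory
                or cur["bw"] + job["bw"] > bandwidth)
        if over and cur["jobs"]:
            groups.append(cur)                    # close the full group
            cur = {"mi": 0, "mem": 0, "bw": 0, "jobs": []}
        cur["mi"] += job["mi"]
        cur["mem"] += job["mem"]
        cur["bw"] += job["bw"]
        cur["jobs"].append(job)
    if cur["jobs"]:
        groups.append(cur)
    return groups
```

Each group is then dispatched as a single coarse job, so the per-job scheduling and transfer overhead is paid once per group instead of once per small job.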
Abstract: Accurate assessment of the primary tumor's response to treatment is important in the management of breast cancer. This paper introduces a new set of treatment evaluation indicators for breast cancer cases based on the computation of three well-known metrics: the Euclidean, Hamming, and Levenshtein distances. The distance principles are applied to pairs of mammograms and/or echograms recorded before and after treatment, providing a reference point for judging the evolution of the studied carcinoma. The numerical results obtained are indeed very transparent and indicate not only the evolution or involution of the tumor under treatment, but also a quantitative measurement of the benefit of using the selected method of treatment.
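The three distances underlying the indicators can be sketched in a few lines each; how the before/after image pairs are encoded into the vectors or strings being compared is outside this sketch.

```python
def euclidean(u, v):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

def hamming(u, v):
    """Hamming distance: number of positions at which the sequences differ."""
    return sum(x != y for x, y in zip(u, v))

def levenshtein(a, b):
    """Edit distance between two sequences via the standard two-row DP:
    minimum number of insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]
```

Unlike the first two, the Levenshtein distance tolerates sequences of different lengths, which is useful when the pre- and post-treatment encodings do not align position by position.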
Abstract: A computational study of two-dimensional supersonic reacting hydrogen-air flows is performed to investigate the effect of nitrogen on the ignition delay time for premixed and diffusion flames. The chemical reactions are treated using detailed kinetics, and the advection upstream splitting method is used to calculate the numerical inviscid fluxes. The results show that only under stoichiometric conditions, for both premixed and diffusion flames, does the ignition delay time depend monotonically on the nitrogen addition. In other situations, the optimal condition from an ignition viewpoint must be found through numerical investigation.
Abstract: In distributed multiview video coding (DMVC), more than one source is available for the construction of side information. Newer techniques make use of both temporal and multiview sources simultaneously by constructing a bitmask that determines the source of every block or pixel of the side information, and considerable computation is spent determining each bit of that bitmask. In this paper, we have tried to delineate areas that can only be well predicted by temporal interpolation and not by multiview interpolation or synthesis. We argue that areas not covered by two cameras cannot be appropriately predicted by multiview synthesis, so if such areas can be identified in the first place, the full set of computations need not be carried out for the pixels that lie in them. Moreover, this paper also defines a technique based on the KLT to mark the above-mentioned areas before any other processing is done on the side view.
Abstract: A subsea hydrocarbon production system can undergo planned and unplanned shutdowns during the life of the field. Thermal FEA is used to simulate the cool down and verify the insulation design of the subsea equipment, and it is also used to derive an acceptable insulation design for the cold spots. The driving factors of subsea analyses require fast-responding yet accurate models of the equipment cool down. This paper presents a cool down analysis carried out with a Krylov subspace reduction method and compares this approach to commonly used FEA solvers. The model considered represents a typical component of a subsea production system: a closed valve on a dead leg. The results from the Krylov reduction method exhibit the smallest error and require the shortest computational time to reach the solution. These findings make the Krylov model order reduction method very suitable for the above-mentioned subsea applications.
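The core of a Krylov subspace reduction is building an orthonormal basis of span{b, Ab, A^2 b, ...} (for example by Arnoldi iteration) and projecting the large thermal system onto it, so transients are integrated in a space of dimension m instead of the full FEA dimension. The sketch below shows only that basis-building step for a generic matrix-vector product; it is not the solver used in the paper.

```python
def arnoldi(matvec, b, m):
    """Build an orthonormal Krylov basis {b, Ab, ..., A^(m-1) b} by
    Arnoldi iteration with modified Gram-Schmidt.  Projecting a large
    thermal model onto this basis yields the reduced-order model used
    for fast cool-down transients."""
    norm = sum(x * x for x in b) ** 0.5
    V = [[x / norm for x in b]]
    for _ in range(m - 1):
        w = matvec(V[-1])
        for v in V:                       # modified Gram-Schmidt sweep
            h = sum(a * c for a, c in zip(w, v))
            w = [a - h * c for a, c in zip(w, v)]
        n = sum(x * x for x in w) ** 0.5
        if n < 1e-12:                     # breakdown: basis is complete
            break
        V.append([x / n for x in w])
    return V
```

With V assembled column-wise, the reduced system matrices are V^T A V and V^T b, and the reduced state is lifted back with V; the accuracy/cost trade-off is controlled by m.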
Abstract: Recently, Genetic Algorithms (GA) and the Differential
Evolution (DE) algorithm have attracted considerable attention
among modern heuristic optimization techniques.
Since the two approaches are supposed to find a solution to a given
objective function but employ different strategies and computational
effort, it is appropriate to compare their performance. This paper
presents the application and performance comparison of DE and GA
optimization techniques, for flexible ac transmission system
(FACTS)-based controller design. The design objective is to enhance
the power system stability. The design problem of the FACTS-based
controller is formulated as an optimization problem, and both the DE
and GA optimization techniques are employed to search for optimal
controller parameters. The performance of both optimization
techniques has been compared. Further, the optimized controllers are
tested on a weakly connected power system subjected to different
disturbances, and their performance is compared with the
conventional power system stabilizer (CPSS). The eigenvalue
analysis and non-linear simulation results are presented and
compared to show the effectiveness of both the techniques in
designing a FACTS-based controller, to enhance power system
stability.
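A plain DE/rand/1/bin loop of the kind compared here can be sketched as follows. The FACTS controller parameters and the stability objective would enter through the `cost` callable, which is left generic; the population size and control parameters are illustrative.

```python
import random

def de_minimize(cost, lo, hi, pop_size=20, iters=100, F=0.8, CR=0.9, seed=2):
    """DE/rand/1/bin: mutate with a scaled difference of two random
    members, crossover with the target, and keep the trial if it is
    no worse.  lo/hi are per-dimension parameter bounds."""
    rng = random.Random(seed)
    dim = len(lo)
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(pop_size)]
    fit = [cost(p) for p in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)       # guaranteed-crossover index
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jr) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo[d]), hi[d]) for d, t in enumerate(trial)]
            f = cost(trial)
            if f <= fit[i]:               # greedy selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

For the controller design, each individual would encode the FACTS controller gains and time constants, and `cost` would return an eigenvalue-based damping objective from the linearized power system model.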
Abstract: The H.264/AVC standard uses intra prediction with 9 directional modes for 4x4 and 8x8 luma blocks and 4 directional modes for 16x16 luma macroblocks and 8x8 chroma blocks. This means that, for one macroblock, 736 different RDO calculations have to be performed before the best RDO mode is determined. With this multiple intra-mode prediction, the intra coding of H.264/AVC offers considerably higher coding efficiency than other compression standards, but the computational complexity increases significantly. This paper presents a fast intra prediction algorithm for H.264/AVC based on homogeneity information. In this study, a gradient prediction method is used to predict homogeneous areas, and a quadratic prediction function is used to predict non-homogeneous areas. Based on the correlation between homogeneity and block size, smaller blocks are predicted by both gradient and quadratic prediction, while bigger blocks are predicted by gradient prediction only. Experimental results show that the proposed method reduces the complexity by up to 76.07% while maintaining similar PSNR quality, with an average bit-rate increase of about 1.94%.
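The homogeneity cue driving such fast mode decisions can be sketched as a simple gradient-magnitude test on a block; the threshold and the decision rule below are illustrative, not the paper's exact classifier.

```python
def is_homogeneous(block, thresh=40):
    """Classify a block as homogeneous from the sum of absolute
    horizontal and vertical gradients; homogeneous blocks can skip
    most of the candidate RDO mode evaluations."""
    g = 0
    rows, cols = len(block), len(block[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                g += abs(block[r][c + 1] - block[r][c])  # horizontal gradient
            if r + 1 < rows:
                g += abs(block[r + 1][c] - block[r][c])  # vertical gradient
    return g < thresh
```

In a fast encoder this test costs a few additions per pixel, versus a full transform-quantize-entropy cycle per candidate mode in exhaustive RDO, which is where the complexity saving comes from.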
Abstract: To study the impact of inter-module ventilation (IMV) on the space station, a computational fluid dynamics (CFD) model under the influence of IMV is first established, together with the mathematical model, boundary conditions, and calculation method, to analyze the influence of IMV on cabin air flow characteristics and velocity distribution. An integrated overall thermal mathematical model of the space station is then used to assess the impact of IMV on thermal management. The results show that IMV has a significant influence on the cabin air flow: an IMV flowrate within a certain range can effectively improve the air velocity distribution in the cabin, whereas too high a flowrate may degrade it. IMV can also affect the heat distribution among the different modules of the space station, and thus its thermal management; the use of IMV can effectively maintain the temperature levels of the different modules and help the space station dissipate its waste heat.