Transient Population Dynamics of Phase Singularities in the 2D Beeler-Reuter Model

This paper presents the transient population dynamics of phase singularities in the 2D Beeler-Reuter model. Two stochastic modeling approaches are examined: (i) the master equation approach with transition rates λ(n, t) = λ(t)n and μ(n, t) = μ(t)n, and (ii) the nonlinear Langevin equation approach with multiplicative noise. The exact general solution of the master equation with arbitrary time-dependent transition rates is given, followed by the exact solution of the mean-field equation for the nonlinear Langevin equation. It is demonstrated that the transient population dynamics is successfully described by a generalized logistic equation with a fractional higher-order nonlinear term. It is also demonstrated that a time-dependent transition rate must be introduced into the master equation approach to incorporate the effect of nonlinearity.
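As an illustration of the dynamics named above, the following is a minimal sketch that numerically integrates a generalized logistic equation with a fractional higher-order nonlinear term; the rate constants alpha, beta and the exponent nu are illustrative assumptions, not values from the paper.

```python
# A minimal sketch (not the paper's exact solution): numerically integrating a
# generalized logistic equation dn/dt = alpha*n - beta*n**nu with a fractional
# higher-order exponent nu, the form the abstract identifies as governing the
# transient population dynamics. All parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, nu = 1.0, 0.05, 1.5   # hypothetical rates and fractional exponent

def generalized_logistic(t, n):
    return alpha * n - beta * n**nu

sol = solve_ivp(generalized_logistic, (0.0, 20.0), [5.0],
                t_eval=np.linspace(0.0, 20.0, 200))
print(sol.y[0][-1])                # approaches (alpha/beta)**(1/(nu-1)) = 400
```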

Alternative to M-Estimates in Multisensor Data Fusion

This paper addresses the problem of multisensor data fusion under non-Gaussian channel noise. M-estimates are a known robust solution, but they trade off some accuracy. To improve estimation accuracy while maintaining equivalent robustness, a two-stage robust fusion algorithm is proposed, using preliminary rejection of outliers followed by optimal linear fusion. Numerical experiments show that the proposed algorithm matches the M-estimates in the case of uncorrelated local estimates and significantly outperforms the M-estimates when local estimates are correlated.
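The two-stage idea can be sketched as follows; the median/MAD rejection test and the threshold k are assumptions standing in for the paper's preliminary rejection stage, while the fusion step is the standard inverse-variance (optimal linear) combination for uncorrelated estimates.

```python
# A minimal sketch under stated assumptions: stage 1 rejects outliers with a
# median/MAD test, stage 2 fuses the surviving local estimates by
# inverse-variance weighting. The threshold k is illustrative.
import numpy as np

def two_stage_fusion(estimates, variances, k=3.0):
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    med = np.median(estimates)
    mad = 1.4826 * np.median(np.abs(estimates - med))  # robust scale estimate
    keep = np.abs(estimates - med) <= k * max(mad, 1e-12)
    w = 1.0 / variances[keep]                          # inverse-variance weights
    return np.sum(w * estimates[keep]) / np.sum(w)

# Five sensors, one gross outlier from a heavy-tailed channel
print(two_stage_fusion([10.1, 9.8, 10.3, 55.0, 10.0], [1.0, 1.2, 0.9, 1.0, 1.1]))
```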

An Edge Detection and Filtering Mechanism of Two Dimensional Digital Objects Based on Fuzzy Inference

The general idea behind the filter is to average a pixel using other pixel values from its neighborhood, while simultaneously taking care of important image structures such as edges. The main concern of the proposed filter is to distinguish between variations of the captured digital image due to noise and those due to image structure. Edges give the image its appearance of depth and sharpness; a loss of edges makes the image appear blurred or unfocused. However, noise smoothing and edge enhancement are traditionally conflicting tasks: since most noise filtering behaves like a low-pass filter, blurring of edges and loss of detail is a natural consequence, and techniques that remedy this inherent conflict often generate new noise during enhancement. In this work a new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of three stages: (1) define fuzzy sets in the input space to compute a fuzzy derivative for eight different directions, (2) construct a set of IF-THEN rules to perform fuzzy smoothing according to the contributions of neighboring pixel values, and (3) define fuzzy sets in the output space to obtain the filtered and edge-detected image. Experimental results are presented to show the feasibility of the proposed approach on two-dimensional objects.
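Stage (1) can be sketched as follows, assuming a simple triangular membership function for a "small" derivative; the threshold T and the membership shape are illustrative assumptions.

```python
# A minimal sketch of stage (1): the derivative is taken in eight compass
# directions at each pixel, and a fuzzy degree in [0, 1] expresses how likely
# the local variation is due to noise rather than an edge.
import numpy as np

DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]   # NW, N, NE, W, E, SW, S, SE

def fuzzy_derivatives(img, T=30.0):
    img = img.astype(float)
    padded = np.pad(img, 1, mode='edge')
    degrees = []
    for dy, dx in DIRECTIONS:
        shifted = padded[1 + dy:1 + dy + img.shape[0],
                         1 + dx:1 + dx + img.shape[1]]
        deriv = np.abs(shifted - img)
        degrees.append(np.clip(1.0 - deriv / T, 0.0, 1.0))  # membership "small"
    return degrees  # one fuzzy-degree map per direction

img = np.random.randint(0, 256, (64, 64))
print(fuzzy_derivatives(img)[0].shape)  # (64, 64)
```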

Image Enhancement using α-Trimmed Mean ε-Filters

Image enhancement is one of the most important and challenging preprocessing steps for almost all applications of image processing. Various methods, such as the median filter and the α-trimmed mean filter, have been suggested; it has been shown that the α-trimmed mean filter is a generalization of the median and mean filters. On the other hand, ε-filters have shown excellent performance in suppressing noise and, in spite of their simplicity, achieve good results. However, the conventional ε-filter is based on a moving average. In this paper, we propose a new ε-filter that utilizes the α-trimmed mean. We argue that this new method gives better outcomes than previous ones, and the experimental results confirm this claim.
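A minimal 1-D sketch of the proposed combination is given below, under the assumption that the ε-filter averages only the window samples within ε of the center sample and that this average is replaced by an α-trimmed mean; the window size, ε and α values are illustrative.

```python
# A minimal 1-D sketch, assuming the epsilon selection keeps window samples
# within eps of the center pixel and an alpha-trimmed mean replaces the
# conventional moving average over the selected samples.
import numpy as np

def alpha_trimmed_eps_filter(x, half_window=3, eps=20.0, alpha=0.2):
    x = np.asarray(x, float)
    y = x.copy()
    for n in range(len(x)):
        lo, hi = max(0, n - half_window), min(len(x), n + half_window + 1)
        win = x[lo:hi]
        close = np.sort(win[np.abs(win - x[n]) <= eps])  # epsilon selection
        t = int(alpha * len(close))                      # samples trimmed per side
        trimmed = close[t:len(close) - t] if len(close) > 2 * t else close
        y[n] = trimmed.mean()
    return y

noisy = np.array([10, 11, 9, 80, 10, 12, 11, 10], float)
print(alpha_trimmed_eps_filter(noisy, eps=15.0))
```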

Quantitative Analysis of PCA, ICA, LDA and SVM in Face Recognition

Face recognition is a technique for automatically identifying or verifying individuals. It receives great attention in identification, authentication, security and many other applications. Diverse methods have been proposed for this purpose, and many comparative studies have been performed; however, researchers have not reached a unified conclusion. In this paper, we report an extensive quantitative accuracy analysis of four of the most widely used face recognition algorithms: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM), using the AT&T, Sheffield and Bangladeshi people face databases under diverse conditions such as illumination, alignment and pose variations.

Application of Artificial Intelligence for Tuning the Parameters of an AGC

This paper deals with the tuning of parameters for Automatic Generation Control (AGC). A two-area interconnected hydrothermal system with a PI controller is considered. Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) algorithms are applied to optimize the controller parameters. Two objective functions, namely the Integral Square Error (ISE) and the Integral of Time-multiplied Absolute value of the Error (ITAE), are considered for optimization. The effectiveness of an objective function is judged by the variation in tie-line power and the change in frequency in both areas. MATLAB/SIMULINK was used as the simulation tool. Simulation results reveal that ITAE is a better objective function than ISE. The performances of the optimization algorithms are also compared, and it is found that the genetic algorithm gives better results than the particle swarm optimization algorithm for this AGC problem.
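The two objective functions can be written down directly. In the sketch below, err is assumed to be the error signal (frequency or tie-line power deviation) sampled from the SIMULINK model at interval dt; the GA or PSO would minimize one of these values over the PI gains.

```python
# A minimal sketch of the two cost functions the optimizers minimize; the
# damped error signal used here is purely illustrative.
import numpy as np

def ise(err, dt):
    return np.sum(err**2) * dt              # Integral Square Error

def itae(err, t, dt):
    return np.sum(t * np.abs(err)) * dt     # Integral of Time-multiplied Abs Error

t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]
err = np.exp(-t) * np.sin(2 * np.pi * t)    # illustrative damped error response
print(ise(err, dt), itae(err, t, dt))
```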

A Robust LS-SVM Regression

In Least Squares SVM (LS-SVM), the nonlinear solution is obtained by first mapping the input vector nonlinearly to a high-dimensional kernel space, where the solution is calculated from a set of linear equations. In comparison with the original SVM, which involves a quadratic programming task, LS-SVM thus simplifies the required computation, but unfortunately the sparseness of the standard SVM is lost. Another problem is that LS-SVM is optimal only if the training samples are corrupted by Gaussian noise. In this paper a geometric view of the kernel space is introduced, which enables us to develop a new formulation that achieves a sparse and robust estimate.
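For reference, the standard (non-robust) LS-SVM regression reduces to a single linear system; the sketch below uses an RBF kernel, with illustrative values for the regularization parameter gamma and the kernel width sigma.

```python
# A minimal sketch of standard LS-SVM regression: the dual variables come from
# one linear system [0, 1^T; 1, K + I/gamma][b; alpha] = [0; y] instead of a QP.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                    # bias b, dual weights alpha

X = np.linspace(0, 4, 30).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.randn(30)
b, alpha = lssvm_fit(X, y)
y_hat = rbf_kernel(X, X) @ alpha + b          # in-sample prediction
print(np.mean((y_hat - y)**2))
```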

Adaptive Total Variation Based on Feature Scale

The widely used Total Variation denoising algorithm can preserve sharp edges while removing noise. However, because a fixed regularization parameter is used over the entire image, small details and textures are often lost in the process. In this paper, we propose a modified Total Variation algorithm that better preserves smaller-scale features. This is done by allowing an adaptive regularization parameter to control the amount of denoising in each region of the image, according to local feature scale information. Experimental results demonstrate the effectiveness of the proposed algorithm: compared with standard Total Variation, it better preserves smaller-scale features and shows better overall performance.
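A minimal sketch of the idea follows, with a crude local-variance proxy standing in for the paper's feature-scale measure: the fidelity weight lam becomes a per-pixel map, larger where local variance is high (textures, small features) so those regions are smoothed less.

```python
# A minimal sketch of TV denoising by gradient descent, with the single change
# the abstract describes: a spatially varying regularization map lam. The
# local-variance feature proxy and all constants are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_tv(f, n_iter=100, dt=0.1, eps=1e-6):
    f = f.astype(float)
    u = f.copy()
    # Crude feature-scale proxy: high local variance -> stronger data fidelity
    local_var = uniform_filter(f**2, 5) - uniform_filter(f, 5)**2
    lam = 0.1 * local_var / (local_var.mean() + eps)   # per-pixel fidelity weight
    for _ in range(n_iter):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux**2 + uy**2 + eps)
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u += dt * (div - lam * (u - f))                # curvature flow + fidelity
    return u

f = np.random.rand(64, 64)
print(adaptive_tv(f).shape)
```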

Development and Assessment of Measuring/Rehabilitation Device for Myelopathy Patients with Lower Extremity Function

Impaired function of the fingers and difficulty with ambulation occur when a person suffers damage to the spinal cord. Cervical spondylotic myelopathy, one form of myelopathy, arises not only from external factors but also from advancing age. In addition, diagnosis is difficult, since cervical spondylotic myelopathy is evaluated through a doctor's neurological examination and imaging findings. As a quantitative method for measuring the degree of disability, the hand-operated triangle step test (TST) has been formulated. In this research, a fully automatic triangle step counter apparatus is designed and developed to measure the degree of disability accurately according to the principle of the TST. The step counter apparatus, shaped as a low triangular pole, displays the number of steps onto each corner. Furthermore, the apparatus has two modes of operation: one for measuring the degree of disability and the other for rehabilitation exercise. To assess its usefulness, clinical trials should be conducted in the near future.

Modeling of Knowledge-Intensive Business Processes

Knowledge development in companies relies on knowledge-intensive business processes, which are characterized by high complexity in their execution, weak structuring, communication-oriented tasks, high decision autonomy, and often the need for creativity and innovation. A foundation of knowledge development is provided, based on a new conception of knowledge and knowledge dynamics. This conception consists of a three-dimensional model of knowledge with types, kinds and qualities. Built on this knowledge conception, knowledge dynamics is modeled with the help of general knowledge conversions between knowledge assets. Here knowledge dynamics is understood to cover the acquisition, conversion, transfer, development and usage of knowledge. Through this conception we gain a sound basis for knowledge management and development in an enterprise. Especially the type dimension of knowledge, which categorizes it according to its internality and externality with respect to the human being, is crucial for enterprise knowledge management and development, because knowledge should be made available by converting it to more external types. Built on this conception, a modeling approach for knowledge-intensive business processes is introduced, be they human-driven, e-driven or task-driven processes. As an example of this approach, a model of the creative activity for the renewal planning of a product is given.

Image Adaptive Watermarking with Visual Model in Orthogonal Polynomials based Transformation Domain

In this paper, an image-adaptive, invisible digital watermarking algorithm based on an Orthogonal Polynomials based Transformation (OPT) is proposed for the copyright protection of digital images. The proposed algorithm utilizes a visual model to determine the watermarking strength necessary to invisibly embed the watermark in the mid-frequency AC coefficients of the cover image, chosen with a secret key. The visual model is designed to generate a Just Noticeable Distortion (JND) mask by analyzing low-level image characteristics such as textures, edges and luminance of the cover image in the orthogonal polynomials based transformation domain. Since the secret key is required for both embedding and extraction of the watermark, it is not possible for an unauthorized user to extract the embedded watermark. The proposed scheme is robust to common image processing distortions such as filtering, JPEG compression and additive noise. Experimental results show that the quality of OPT domain watermarked images is better than that of their DCT counterparts.
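The embedding step can be sketched with two explicit substitutions, since the OPT is specific to the paper: the 2-D DCT stands in for the orthogonal polynomials transform, and a constant strength stands in for the JND mask. The secret key seeds the choice of mid-frequency coefficients.

```python
# A minimal sketch of key-driven mid-frequency embedding. Substitutions from
# the paper's method: DCT instead of OPT, constant strength instead of a JND
# mask. Block size, band limits and strength are illustrative.
import numpy as np
from scipy.fft import dctn, idctn

def embed(block, bits, key, strength=4.0):
    C = dctn(block.astype(float), norm='ortho')
    rng = np.random.default_rng(key)                  # secret key -> positions
    # candidate mid-frequency positions in an 8x8 block
    mid = [(u, v) for u in range(8) for v in range(8) if 5 <= u + v <= 9]
    idx = rng.choice(len(mid), size=len(bits), replace=False)
    for b, i in zip(bits, idx):
        u, v = mid[i]
        C[u, v] += strength if b else -strength       # additive embedding
    return idctn(C, norm='ortho')

block = np.random.randint(0, 256, (8, 8))
marked = embed(block, [1, 0, 1, 1], key=1234)
print(np.abs(marked - block).max())                   # small, near-invisible change
```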

A Novel Fuzzy Technique for Image Noise Reduction

A new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of two stages. In the first stage, all the pixels of the image are processed to determine noisy pixels; for this, a fuzzy rule-based system associates a degree with each pixel. The degree of a pixel is a real number in the range [0, 1], which denotes the probability that the pixel is not a noisy pixel. In the second stage, another fuzzy rule-based system is employed. It uses the output of the previous fuzzy system to perform fuzzy smoothing by weighting the contributions of neighboring pixel values. Experimental results are presented to show the feasibility of the proposed filter. These results are also compared to other filters by numerical measures and visual inspection.
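The second stage can be sketched as follows, assuming the first stage has already produced deg, a per-pixel map in [0, 1] where 1 means "almost surely not noisy"; the 3x3 neighborhood is an illustrative choice.

```python
# A minimal sketch of stage two: each pixel becomes a weighted average of its
# 3x3 neighborhood, weighting each neighbor by its own not-noisy degree so
# that pixels judged noisy in stage one contribute little.
import numpy as np

def fuzzy_smooth(img, deg):
    img, deg = img.astype(float), deg.astype(float)
    out = img.copy()
    H, W = img.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            w = deg[y - 1:y + 2, x - 1:x + 2]
            if w.sum() > 0:
                out[y, x] = (w * win).sum() / w.sum()
    return out

img = np.random.randint(0, 256, (32, 32))
deg = np.random.rand(32, 32)        # stage-one output would go here
print(fuzzy_smooth(img, deg).shape)
```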

Teaching Students the Black Magic of Electromagnetic Compatibility

Introducing Electromagnetic Interference and Electromagnetic Compatibility, or "The Art of Black Magic", to engineering students can be a terrifying experience for both students and tutors. Removing the obstacle of large, expensive facilities such as a fully fitted EMC laboratory and hours of complex theory, this paper presents the design of a laboratory setup for student exercises, giving students experience with the basics of the EMC/EMI problems that may challenge the functionality and stability of embedded system designs. This is done using a simple laboratory installation and basic measurement equipment, such as a medium-cost digital storage oscilloscope, at the cost of not knowing the exact magnitude of the noise components, but rather whether the noise is significant or not, as well as the source of the noise. A group of students has performed a trial exercise, with good results and feedback.

Self-tuned LMS Algorithm for Sinusoidal Time Delay Tracking

In this paper the problem of estimating the time delay between two spatially separated noisy sinusoidal signals by system identification modeling is addressed. The system is assumed to be perturbed by both input and output additive white Gaussian noise. The presence of input noise introduces bias in the time delay estimates. Normally the solution requires a priori knowledge of the input-output noise variance ratio. We utilize the cascade of a self-tuned filter with the time delay estimator, thus making the delay estimates robust to input noise. Simulation results are presented to confirm the superiority of the proposed approach at low input signal-to-noise ratios.
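The core delay-tracking update (without the self-tuned prefilter the paper adds for input-noise robustness) can be sketched as direct LMS adaptation of the delay parameter, using the known derivative of the sinusoidal model; the frequency, step size and noise levels below are illustrative.

```python
# A minimal sketch: the delay estimate D descends the gradient of the squared
# error between the observed delayed channel and a shifted-sinusoid model.
import numpy as np

w = 0.2                                        # known angular frequency (rad/sample)
n = np.arange(5000)
D_true = 7.3                                   # unknown delay to be tracked
y = np.sin(w * (n - D_true)) + 0.05 * np.random.randn(len(n))  # noisy delayed channel

D, mu = 0.0, 0.05
for k in range(len(n)):
    e = y[k] - np.sin(w * (k - D))             # instantaneous model error
    D -= mu * e * w * np.cos(w * (k - D))      # gradient step on e**2 w.r.t. D
print(D)                                       # near 7.3 (ambiguous modulo 2*pi/w)
```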

EMD-Based Signal Noise Reduction

This paper introduces a new signal denoising method based on the Empirical Mode Decomposition (EMD) framework. The method is a fully data-driven approach. The noisy signal is decomposed adaptively into oscillatory components called Intrinsic Mode Functions (IMFs) by means of a process called sifting. EMD denoising involves filtering or thresholding each IMF and reconstructing the estimated signal from the processed IMFs. The EMD can be combined with a filtering approach or with a nonlinear transformation; in this work the Savitzky-Golay filter and soft thresholding are investigated. For thresholding, IMF samples below a threshold value are shrunk or scaled. The standard deviation of the noise is estimated for every IMF, and the threshold is derived for Gaussian white noise. The method is tested on simulated and real data and compared with averaging, median and wavelet approaches.
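The thresholding variant can be sketched as follows, assuming an EMD implementation such as the third-party PyEMD package for the sifting step; the per-IMF MAD noise estimate and the universal Gaussian threshold are standard choices consistent with the abstract, not necessarily the paper's exact rules.

```python
# A minimal sketch of EMD + soft thresholding: each IMF's noise level is
# estimated with the MAD rule and shrunk with the universal threshold, then
# the signal is rebuilt from the processed IMFs.
import numpy as np
from PyEMD import EMD               # third-party package (EMD-signal), assumed available

def emd_denoise(x):
    imfs = EMD().emd(x)             # sifting: x -> oscillatory IMFs (last row ~ trend)
    N = len(x)
    out = np.zeros_like(x, dtype=float)
    for imf in imfs[:-1]:
        sigma = np.median(np.abs(imf)) / 0.6745          # MAD noise estimate per IMF
        thr = sigma * np.sqrt(2.0 * np.log(N))           # universal Gaussian threshold
        out += np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0)  # soft shrink
    return out + imfs[-1]           # keep the residual trend unthresholded

t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.randn(len(t))
print(np.mean((emd_denoise(noisy) - clean)**2))          # below the input noise power
```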

The Alterations of Some Pancreas Gland Hormones after an Aerobic Strenuous Exercise in Male Students

The purpose of this study was to examine the alterations in pancreatic hormone secretion following aerobic, exhausting exercise. Sixteen healthy men participated in the study. Blood samples were taken from the participants in four stages under fasting conditions: the first sample before the exhausting aerobic Bruce test, the second after the Bruce exercise, and the third and fourth samples 24 and 48 hours after the exercise, respectively. The results indicated that strenuous aerobic exercise can have a significant effect on the glucagon and insulin concentrations of blood serum. The increase in blood serum insulin was higher after 24 and 48 hours. It seems that intensive exercise has little effect on changes in the glucagon concentration of blood serum. Also, disordered secretion of serum glucagon and insulin disturbs athletes' exercise.

A Semi-One-Time Pad Using Blind Source Separation for Speech Encryption

We propose a new perspective on speech communication using blind source separation. The original speech is mixed with key signals, which consist of a mixing matrix, chaotic signals and random noise. However, parts of the keys (the mixing matrix and the random noise) are not needed for decryption. In a practical implementation, one can encrypt the speech by changing the noise signal every time. Hence, the present scheme obtains the advantages of a one-time pad encryption while avoiding its drawbacks in key exchange. It is demonstrated that the proposed scheme is immune to traditional attacks.
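The encryption side can be sketched as follows, under the abstract's construction; the logistic map standing in for the chaotic key signal is an illustrative choice, and a BSS algorithm at the receiver would separate the sources again.

```python
# A minimal sketch: the speech is stacked with key signals (a chaotic sequence
# and fresh random noise) and multiplied by a secret mixing matrix; the noise
# row is regenerated for every message, as the abstract describes.
import numpy as np

def logistic_map(n, x0=0.37, r=3.99):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])   # chaotic key signal
    return x - 0.5

def encrypt(speech, A, seed=None):
    n = len(speech)
    rng = np.random.default_rng(seed)
    sources = np.vstack([speech,
                         logistic_map(n),
                         rng.standard_normal(n)])  # noise changed every session
    return A @ sources                             # transmitted mixtures

speech = np.sin(2 * np.pi * 0.01 * np.arange(1000))
A = np.random.default_rng(7).uniform(-1, 1, (3, 3))  # secret mixing matrix
cipher = encrypt(speech, A)
print(cipher.shape)                                  # (3, 1000) channel signals
```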

Adaptive Anisotropic Diffusion for Ultrasonic Image Denoising and Edge Enhancement

By utilizing the echoic intensity and distribution from different organs and the local details of the human body, ultrasonic images can capture important pathological changes, but they are unfortunately affected by ultrasonic speckle noise. A feature-preserving ultrasonic image denoising and edge enhancement scheme is put forth, which includes two terms, anisotropic diffusion and edge enhancement, controlled by the optimum smoothing time. In this scheme, the anisotropic diffusion is governed by a local coordinate transformation and the first- and second-order normal derivatives of the image, while the edge enhancement is performed by the hyperbolic tangent function. Experiments on real ultrasonic images indicate that our scheme effectively preserves edges, local details and ultrasonic echoic bright strips during denoising.
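As a simplified stand-in for the paper's coordinate-transform scheme, a classical Perona-Malik-style diffusion step illustrates the mechanism: the conduction coefficient falls off across strong gradients, so edges diffuse less than speckle. The parameter values are illustrative.

```python
# A minimal Perona-Malik-style sketch (not the paper's scheme): kappa sets the
# gradient magnitude treated as an edge, and the exponential conduction term
# suppresses diffusion across such edges while smoothing speckle elsewhere.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.15):
    u = img.astype(float)
    for _ in range(n_iter):
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # conduction: small across strong gradients, so edges are preserved
        u = u + dt * sum(np.exp(-(d / kappa)**2) * d for d in (dN, dS, dE, dW))
    return u

img = np.random.rand(64, 64) * 255
print(anisotropic_diffusion(img).std())   # lower than the input's std
```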

Concept for a Multidisciplinary Design Process–An Application on High Lift Systems

This paper presents a concept for a multidisciplinary process supporting effective task transitions between different technical domains during the architectural design stage. A challenge in system configuration is the enlarged solution space driven by multifunctional requirements. As a consequence, more iterations are needed to find a global optimum, i.e. a compromise between the involved disciplines, without a negative impact on development time. Since state-of-the-art standards like ISO 15288 and VDI 2206 do not provide a detailed methodology for the multidisciplinary design process, higher uncertainties regarding final specifications arise. This leads to the need for more detailed and standardized concepts or processes that could mitigate risks. The work performed is based on an analysis of multidisciplinary interactions and of modeling and simulation techniques. To demonstrate and prove the applicability of the presented concept, it is applied to the design of aircraft high lift systems, in the context of the engineering disciplines kinematics, actuation, monitoring, installation and structural design.

The Performance Analysis of Error Saturation Nonlinearity LMS in Impulsive Noise based on Weighted-Energy Conservation

This paper introduces a new approach to the performance analysis of adaptive filters with an error saturation nonlinearity in the presence of impulsive noise. The performance analysis of adaptive filters includes both transient analysis, which shows how fast a filter learns, and steady-state analysis, which shows how well a filter learns. Recursive expressions for the mean-square deviation (MSD) and the excess mean-square error (EMSE) are derived based on weighted-energy conservation arguments, which provide the transient behavior of the adaptive algorithm. The steady-state analysis for correlated input regressor data is also carried out, so this approach leads to new performance results without restricting the input regression data to be white.
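The analyzed filter itself (not the weighted-energy analysis) can be sketched as a standard LMS update whose error passes through a saturation nonlinearity; here a scaled tanh stands in for the saturation function, and the impulse-contamination model is illustrative.

```python
# A minimal sketch of error-saturation LMS: saturating the error bounds the
# damage a single impulsive-noise sample can do to the weight update, and the
# mean-square deviation (MSD) is logged at each step.
import numpy as np

rng = np.random.default_rng(0)
N, M, mu, sat = 20000, 8, 0.01, 1.0
w_true = rng.standard_normal(M)
w = np.zeros(M)
msd = np.empty(N)
for k in range(N):
    x = rng.standard_normal(M)                  # white input regressor
    noise = rng.standard_normal()
    if rng.random() < 0.05:                     # 5% impulsive contamination
        noise += 50.0 * rng.standard_normal()
    e = (w_true - w) @ x + noise                # a-priori error plus noise
    w += mu * np.tanh(e / sat) * sat * x        # saturated-error LMS update
    msd[k] = np.sum((w_true - w)**2)            # mean-square deviation sample
print(msd[-1])                                  # stays small despite impulses
```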