A New H.264-Based Rate Control Algorithm for Stereoscopic Video Coding

Based on an investigation of the impact of the complexity of stereoscopic frame pairs on stereoscopic video coding and transmission, a new rate control algorithm is presented. The proposed rate control algorithm operates on three levels: the stereoscopic group of pictures (SGOP) level, the stereoscopic frame (SFrame) level, and the frame level. A temporal-spatial frame complexity model is first established; in the bit allocation stage, the frame complexity, the position significance, and the reference relationship between the left and right frames are taken into account. Meanwhile, the target buffer level is set according to the frame complexity. Experimental results show that the proposed method controls the bitrate efficiently and outperforms the fixed quantization parameter method from the rate-distortion perspective, with an average PSNR gain between rate-distortion curves (BDPSNR) of 0.21 dB.
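
To illustrate the complexity-weighted bit allocation idea, the Python sketch below distributes an SGOP-level bit budget across frames in proportion to a complexity-times-position weight. It is a minimal sketch under assumed weights; the function and its inputs are hypothetical stand-ins, not the paper's actual model.

    def allocate_bits(frame_complexities, position_weights, sgop_budget):
        """Distribute an SGOP-level bit budget across frames in proportion
        to complexity * positional significance.
        (Hypothetical weighting; the paper's model may differ.)"""
        weights = [c * w for c, w in zip(frame_complexities, position_weights)]
        total = sum(weights)
        return [sgop_budget * w / total for w in weights]

    # Example: 4 frame pairs; right frames weighted lower because they can
    # reference the already-coded left frames.
    print(allocate_bits([2.0, 1.5, 1.2, 1.8], [1.0, 0.8, 1.0, 0.8], 48000))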

Learning Classifier Systems Approach for Automated Discovery of Crisp and Fuzzy Hierarchical Production Rules

This research presents a system for the post-processing of data that takes mined flat rules as input and discovers crisp as well as fuzzy hierarchical structures using the Learning Classifier System approach. A Learning Classifier System (LCS) is a machine learning technique that combines evolutionary computing, reinforcement learning, supervised or unsupervised learning, and heuristics to produce adaptive systems. An LCS learns by interacting with an environment from which it receives feedback in the form of a numerical reward; learning is achieved by trying to maximize the amount of reward received. A crisp description of a concept usually cannot represent human knowledge completely and practically. In the proposed Learning Classifier System, the initial population is constructed as a random collection of HPR-trees (related production rules), and crisp/fuzzy hierarchies are evolved. A fuzzy subsumption relation is suggested for the proposed system, and based on a Subsumption Matrix (SM), a suitable fitness function is proposed. Suitable genetic operators are proposed for the chosen chromosome representation method. To implement reinforcement, a suitable reward and punishment scheme is also proposed. Experimental results are presented to demonstrate the performance of the proposed system.

Control of a DC Servomotor Using Fuzzy Logic Sliding Mode Model Following Controller

A DC servomotor position control system using a Fuzzy Logic Sliding Mode Model Following Control (FLSMFC) approach is presented. The FLSMFC structure consists of an integrator and a variable structure system. The integral control is introduced to eliminate the steady-state error due to step and ramp command inputs and to improve control precision, while the fuzzy control maintains insensitivity to parameter variations and disturbances. The FLSMFC strategy is implemented and applied to the position control of a DC servomotor drive. Experimental results indicate that the sensitivity of the FLSMFC system to parameter variations is greatly reduced; the controller also achieves excellent control performance and avoids the chattering phenomenon.
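
As background, a sliding surface augmented with integral action has the form s = de/dt + lambda*e + ki*integral(e)dt, and a fuzzy rule base typically replaces the hard switching term to suppress chattering. A minimal Python sketch, with tanh standing in for the fuzzy inference and all gains chosen arbitrarily for illustration:

    import math

    # Sliding surface with integral action: s = de + lam*e + ki*int(e).
    # The tanh-shaped gain stands in for a fuzzy rule base that softens
    # the switching term to reduce chattering (illustrative only).
    def sliding_control(e, de, e_int, lam=2.0, ki=0.5, k_sw=1.0, phi=0.1):
        s = de + lam * e + ki * e_int
        fuzzy_gain = math.tanh(s / phi)   # smooth stand-in for sign(s)
        return -k_sw * fuzzy_gain

    print(sliding_control(e=0.2, de=-0.1, e_int=0.05))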

A Robust Visual Tracking Algorithm with Low-Rank Region Covariance

The region covariance (RC) descriptor is an effective and efficient feature for visual tracking. Current RC-based tracking algorithms use the whole RC matrix directly to track the target in video. However, these whole-RC-based algorithms have some issues. If some features are contaminated, the whole RC matrix becomes unreliable, which can cause the tracker to lose the object. In addition, even when a few features already discriminate the target from the background well, the remaining features are still processed, which reduces efficiency. In this paper, a new robust tracking method is proposed in which the whole RC matrix is decomposed into several low-rank matrices. These matrices are dynamically chosen and processed so as to achieve a good trade-off between discriminability and complexity. Experimental results show that, compared with other RC-based methods, our method is more robust to complex environmental changes, especially when occlusion happens or when the background is similar to the target.
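
For reference, a region covariance descriptor is the covariance matrix of per-pixel feature vectors over a region. The NumPy sketch below computes one from an assumed feature set (coordinates, intensity, gradient magnitudes); the paper's low-rank decomposition step is not reproduced here.

    import numpy as np

    def region_covariance(gray_region):
        """Covariance of per-pixel feature vectors [x, y, I, |Ix|, |Iy|]
        over a region (this feature set is illustrative)."""
        h, w = gray_region.shape
        ys, xs = np.mgrid[0:h, 0:w]
        iy, ix = np.gradient(gray_region.astype(float))
        feats = np.stack([xs.ravel(), ys.ravel(),
                          gray_region.ravel(),
                          np.abs(ix).ravel(), np.abs(iy).ravel()])
        return np.cov(feats)               # 5x5 RC matrix

    print(region_covariance(np.random.rand(16, 16)).shape)  # (5, 5)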

Fast Search for MPEG Video Clips Using Adjacent Pixel Intensity Difference Quantization Histogram Feature

In this paper, we propose a novel fast search algorithm for retrieving short MPEG video clips from a video database. The algorithm is based on adjacent pixel intensity difference quantization (APIDQ), which has previously been applied reliably to human face recognition. An APIDQ histogram is utilized as the feature vector of each frame image. Instead of fully decompressed video frames, partially decoded data, namely DC images, are utilized. Combined with active search [4], a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on 6 hours of video, searching for 200 given MPEG video clips, each 15 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 80 ms with an equal error rate (EER) of 3%, which is more accurate and robust than the conventional fast video search algorithm.
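
The APIDQ feature can be sketched as follows: compute intensity differences between horizontally and vertically adjacent pixels, quantize them, and histogram the resulting codes. In this minimal NumPy version the quantization levels are illustrative assumptions, not the values used in the paper.

    import numpy as np

    def apidq_histogram(dc_image, levels=(-16, -4, 0, 4, 16)):
        """Histogram of quantized adjacent-pixel intensity differences.
        The bin edges ('levels') are illustrative, not the paper's values."""
        img = dc_image.astype(int)
        dx = (img[:, 1:] - img[:, :-1]).ravel()   # horizontal differences
        dy = (img[1:, :] - img[:-1, :]).ravel()   # vertical differences
        diffs = np.concatenate([dx, dy])
        codes = np.digitize(diffs, levels)        # quantize differences
        return np.bincount(codes, minlength=len(levels) + 1)

    print(apidq_histogram(np.random.randint(0, 256, (18, 22))))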

A Novel Non-Uniformity Correction Algorithm Based On Non-Linear Fit

Owing to their high sensitivity, high frame rate and simple structure, infrared focal plane array (IRFPA) sensors have become the most widely used detectors in military applications. However, they suffer from a common problem called fixed pattern noise (FPN), which severely degrades image quality and limits infrared imaging applications. It is therefore necessary to perform non-uniformity correction (NUC) on IR images. Non-uniformity correction algorithms are classified into two main categories, calibration-based and scene-based, and both have shortcomings. Hence, a novel non-uniformity correction algorithm based on non-linear fitting is proposed, which combines the advantages of the two categories. Experimental results show that the proposed algorithm achieves good NUC performance with a lower non-uniformity ratio.
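
A fit-based NUC can be sketched as fitting, for each pixel, a low-order polynomial that maps its raw response onto a common reference (here, the frame mean) over several calibration frames. The sketch below is only a plausible reading of the approach; the polynomial order and reference choice are assumptions, not the paper's exact formulation.

    import numpy as np

    def fit_nuc(frames, order=2):
        """Fit, per pixel, a polynomial mapping raw response -> reference
        level (here the frame mean), yielding non-linear correction
        coefficients. Order and reference choice are illustrative."""
        t, h, w = frames.shape
        ref = frames.reshape(t, -1).mean(axis=1)      # target per frame
        flat = frames.reshape(t, -1)
        coeffs = np.empty((order + 1, h * w))
        for p in range(h * w):
            coeffs[:, p] = np.polyfit(flat[:, p], ref, order)
        return coeffs.reshape(order + 1, h, w)

    frames = np.random.rand(8, 4, 4) * 100
    print(fit_nuc(frames).shape)   # (3, 4, 4): quadratic coeffs per pixel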

Effect of Chloroform on Aerobic Biodegradation of Organic Solvents in Pharmaceutical Wastewater

In this study, the cometabolic biodegradation of chloroform by mixed cultures was investigated in the presence of various organic solvents, namely methanol, ethanol, isopropanol, acetone, acetonitrile and toluene, as these are the predominant discharges of pharmaceutical industries. Toluene and acetone showed higher specific chloroform degradation rates than the other compounds. Cometabolic degradation of chloroform was further confirmed by the observation of free chloride ions in the medium. An extended Haldane model, incorporating the inhibition due to chloroform and the competitive inhibition between primary substrates, was developed to predict the biodegradation of the primary substrates, the cometabolic degradation of chloroform, and the biomass growth. The proposed model is based on biokinetic parameters obtained from single-substrate degradation studies, and it satisfactorily predicted the experimental results for ternary and quaternary mixtures. The proposed model can be used for predicting the performance of bioreactors treating discharges from pharmaceutical industries.
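
For reference, the Haldane substrate-inhibition form underlying such models is mu(S) = mu_max * S / (Ks + S + S^2/Ki), and competitive inhibition by a co-substrate is commonly modeled by inflating the Ks term. A minimal sketch with illustrative parameter values (not the fitted biokinetic constants from the study):

    def haldane_rate(S, mu_max=0.4, Ks=20.0, Ki=150.0, S_comp=0.0, Kc=50.0):
        """Haldane specific growth rate with a simple competitive-inhibition
        term for a co-substrate (all parameter values are illustrative)."""
        return mu_max * S / (Ks * (1.0 + S_comp / Kc) + S + S * S / Ki)

    print(haldane_rate(30.0))                 # no competing substrate
    print(haldane_rate(30.0, S_comp=40.0))    # rate lowered by competitor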

T-DOF PI Controller Design for a Speed Control of Induction Motor

This paper presents the design and implementation of a T-DOF PI controller for the speed control of an induction motor. The drive system uses a voltage source inverter with the space vector pulse width modulation technique. This scheme makes it possible to adjust the speed of the motor by controlling the frequency and amplitude of the input voltage, keeping the ratio of stator voltage to frequency constant. A T-DOF PI controller designed by the root locus technique is introduced to the system to regulate and track the speed response. Experimental results on a 120 W induction motor, from the no-load condition to the rated condition, show the effectiveness of the proposed control scheme.
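
To make the constant V/f idea concrete, the sketch below shows one step of a PI speed regulator whose frequency command sets the inverter voltage through a fixed V/f ratio. The gains, limits and ratings are placeholders, not the tuned values from the paper.

    def pi_vf_step(speed_ref, speed_meas, state, kp=0.5, ki=2.0,
                   dt=0.001, vf_ratio=380.0 / 50.0, f_max=50.0):
        """One PI step producing a frequency command, with the voltage
        command tied to it by a constant V/f ratio (values illustrative)."""
        err = speed_ref - speed_meas
        state['integral'] += err * dt
        f_cmd = kp * err + ki * state['integral']
        f_cmd = max(0.0, min(f_cmd, f_max))     # clamp the frequency command
        return f_cmd, vf_ratio * f_cmd          # (frequency, voltage)

    state = {'integral': 0.0}
    print(pi_vf_step(100.0, 95.0, state))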

Watermarking Scheme for Color Images using Wavelet Transform based Texture Properties and Secret Sharing

In this paper, a new secure watermarking scheme for color images is proposed. It splits the watermark into two shares using a (2, 2)-threshold Visual Cryptography Scheme (VCS) with an Adaptive Order Dithering technique and embeds one share into a highly textured subband of the luminance channel of the color image. The other share is used as the key and is available only to the super-user or the author of the image; in this scheme, only the super-user can reveal the original watermark. The proposed scheme is dynamic in the sense that the selected subband coefficients are modified by varying the watermark scaling factor so as to maintain the perceptual similarity between the original and the watermarked image. The experimental results demonstrate the effectiveness of the proposed scheme. Further, the proposed scheme is able to resist all common attacks, even those of strong amplitude.
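
The embedding step for one share can be sketched as an additive modification of selected subband coefficients scaled by the watermark scaling factor. In this minimal NumPy sketch, picking the largest-magnitude coefficients as a proxy for high texture is an illustrative assumption; a real implementation would first compute the wavelet subband (e.g., with PyWavelets) and use the paper's texture criterion.

    import numpy as np

    def embed_share(subband, share_bits, alpha=0.05):
        """Additively embed one binary VCS share into subband coefficients;
        alpha is the watermark scaling factor (selection rule and alpha
        are illustrative, not the paper's)."""
        flat = subband.ravel().copy()
        idx = np.argsort(-np.abs(flat))[:share_bits.size]  # 'textured' coeffs
        flat[idx] += alpha * np.abs(flat[idx]) * (2.0 * share_bits - 1.0)
        return flat.reshape(subband.shape), idx

    sb = np.random.randn(8, 8)
    wm, idx = embed_share(sb, np.random.randint(0, 2, 16))
    print(wm.shape, idx[:4])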

Quantification of Technology Innovation Using a Risk-Based Framework

There is significant interest in achieving technology innovation through new product development activities. It is recognized, however, that traditional project management practices, focused only on performance, cost, and schedule attributes, can often lead to risk mitigation strategies that limit new technology innovation. In this paper, a new approach is proposed for formally managing and quantifying technology innovation. This approach uses a risk-based framework that simultaneously optimizes innovation attributes along with traditional project management and systems engineering attributes. To demonstrate the efficacy of the new risk-based approach, a comprehensive product development experiment was conducted in which the innovation risks and the product delivery risks were managed simultaneously through the proposed framework. Quantitative metrics for technology innovation were tracked, and the experimental results indicate that the risk-based approach can simultaneously achieve both project deliverable and innovation objectives.

Shape Error Concealment for Shape Independent Transform Coding

Arbitrarily shaped video objects are an important concept in modern video coding methods. The techniques presently used are based not on image elements but on video objects having an arbitrary shape. In this paper, spatial shape error concealment techniques for object-based images in error-prone environments are proposed. We consider a geometric shape representation consisting of the object boundary, which can be extracted from the α-plane. Three different approaches are used to replace a missing boundary segment: Bézier interpolation, Bézier approximation and NURBS approximation. Experimental results on object shapes of varying concealment difficulty demonstrate the performance of the proposed methods. Comparisons among the proposed methods are also presented.
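
For reference, a cubic Bézier segment is B(t) = sum_i C(3,i) (1-t)^(3-i) t^i P_i for t in [0,1]. The sketch below samples such a segment between the two intact boundary endpoints; the choice of the inner control points here is an assumption (in practice they would be derived from the boundary tangents at the break).

    def cubic_bezier(p0, p1, p2, p3, steps=20):
        """Sample a cubic Bezier curve; p0/p3 are the intact boundary
        endpoints, p1/p2 inner control points (their choice is
        illustrative)."""
        pts = []
        for k in range(steps + 1):
            t = k / steps
            u = 1.0 - t
            x = u**3*p0[0] + 3*u*u*t*p1[0] + 3*u*t*t*p2[0] + t**3*p3[0]
            y = u**3*p0[1] + 3*u*u*t*p1[1] + 3*u*t*t*p2[1] + t**3*p3[1]
            pts.append((x, y))
        return pts

    print(cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0))[:3])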

A New Method for Contour Approximation Using Basic Ramer Idea

This paper presents two new efficient algorithms for contour approximation. The proposed algorithms are compared with the Ramer (good quality), Triangle (faster) and Trapezoid (fastest) methods, which are briefly described. The Cartesian coordinates of an input contour are processed so that the contour is finally represented by a set of selected vertices of its edge. The paper outlines the main idea of the analyzed contour compression procedures. For comparison, the mean square error and signal-to-noise ratio criteria are used, and the computational time of the analyzed methods is estimated from the number of numerical operations. Experimental results are reported in terms of image quality, compression ratio, and speed. The main advantage of the analyzed algorithms is the small number of arithmetic operations compared to existing algorithms.
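
The basic Ramer idea referenced in the title keeps, between two retained vertices, the contour point farthest from their chord whenever its distance exceeds a tolerance, and recurses on the two halves. A compact Python sketch of that baseline (not of the two new algorithms):

    import math

    def ramer(points, eps):
        """Classic Ramer polyline simplification: split at the point
        farthest from the chord if it deviates by more than eps."""
        (x1, y1), (x2, y2) = points[0], points[-1]
        chord = math.hypot(x2 - x1, y2 - y1) or 1e-12
        d_max, i_max = 0.0, 0
        for i in range(1, len(points) - 1):
            x, y = points[i]
            d = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / chord
            if d > d_max:
                d_max, i_max = d, i
        if d_max > eps:
            return (ramer(points[:i_max + 1], eps)[:-1]
                    + ramer(points[i_max:], eps))
        return [points[0], points[-1]]

    pts = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
    print(ramer(pts, 1.0))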

Studies on Automatic Measurement Technology for Surface Braided Angle of Three-Dimensional Braided Composite Material Preforms

This paper describes a new measurement algorithm for three-dimensional (3-D) braided composite material. The braided angle is an important parameter of braided composites, and the objective of this paper is to present an automatic measuring system. The algorithm is implemented in VC++ 6.0 on a PC. An advanced filtering algorithm for images of 3-D braided composite material preforms has been developed. The procedure is completely automatic and relies on the gray-scale information content of the images and their local wavelet transform modulus maxima. Experimental results show that the proposed method is feasible. The algorithm was tested on both carbon-fiber and glass-fiber preforms.

MEGSOR Iterative Scheme for the Solution of 2D Elliptic PDEs

Recent findings on the MEG iterative scheme have demonstrated that it accelerates the convergence rate in solving systems of linear equations generated from the approximation equations of boundary value problems. Based on the same scheme, the aim of this paper is to investigate the capability of a family of four-point block iterative methods with a weighted parameter ω, namely 4-Point EGSOR, 4-Point EDGSOR, and 4-Point MEGSOR, in solving two-dimensional elliptic partial differential equations using the second-order finite difference approximation. The formulation and implementation of the three four-point block iterative methods are also presented. Finally, the experimental results show that the 4-Point MEGSOR iterative scheme is superior to the existing four-point block schemes.
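
As a point of reference for the block variants, the sketch below applies standard point SOR with relaxation weight ω to the five-point finite difference discretization of a 2D Poisson problem; the 4-point block groupings of the EG/EDG/MEG schemes themselves are not reproduced here.

    import numpy as np

    def sor_poisson(f, h, omega=1.8, tol=1e-8, max_iter=20000):
        """Point SOR with relaxation weight omega for the five-point
        discretization of u_xx + u_yy = f, zero Dirichlet boundary."""
        u = np.zeros_like(f)
        n, m = f.shape
        for _ in range(max_iter):
            diff = 0.0
            for i in range(1, n - 1):
                for j in range(1, m - 1):
                    gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1]
                                 + u[i, j+1] - h * h * f[i, j])
                    new = (1.0 - omega) * u[i, j] + omega * gs
                    diff = max(diff, abs(new - u[i, j]))
                    u[i, j] = new
            if diff < tol:
                break
        return u

    f = np.ones((17, 17))
    print(sor_poisson(f, h=1.0 / 16)[8, 8])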

3D Locomotion and Fractal Analysis of Goldfish for Acute Toxicity Bioassay

The biological reactions of individual test animals to a toxic substance are distinctive and can be used as an indication of the presence of that substance. However, detecting such phenomena requires a very complicated system, and analyzing the data in three dimensions is even more complicated. In this paper, a system is introduced that evaluates biological responses to acute toxicity by applying fractal analysis to the stochastic, self-affine, non-stationary signal of 3D goldfish swimming. Regular digital camcorders, driven by the proposed 3DCCPC algorithm, are used to capture and reconstruct the 3D movements of the fish. The Critical Exponent Method (CEM) is adopted as the fractal estimator. The hypothesis was that the swimming of goldfish exposed to an acute toxicant would show fractal properties related to the toxicant concentration. The experimental results support the hypothesis, showing that the swimming of goldfish under different toxicant concentrations has fractal properties, and that the fractal dimension of the swimming is related to the pH value as FD ≈ 0.26 pH + 0.05. With the proposed system, the fish is allowed to swim freely in all directions while reacting to the toxicant. In addition, the trajectories are precisely evaluated by fractal analysis with the Critical Exponent Method, and hence the results carry a much higher degree of confidence.

Speckle Characterization in Laser Projector Display

Speckle arises when coherent radiation is reflected from a rough surface. Characterization of the speckle depends strongly on the measurement conditions and the experimental setup. In this paper we report experimental results produced with different parameters of the setup. We investigated the factors that affect the speckle contrast, such as the F-number, gamma value and exposure time of the camera, rather than geometric factors such as the distance from the projector lens to the screen, the viewing distance, etc. The measurement results show that the speckle contrast decreases with decreasing F-number and with increasing gamma value, and is only slightly affected by the exposure time and the gain value of the camera.
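
Speckle contrast is conventionally quantified as C = std(I) / mean(I) over a nominally uniform patch of the captured image, which the following NumPy snippet computes:

    import numpy as np

    def speckle_contrast(patch):
        """Speckle contrast C = std(I) / mean(I) over a uniform patch."""
        patch = patch.astype(float)
        return patch.std() / patch.mean()

    # Fully developed speckle (exponential intensity statistics) should
    # give C close to 1; camera settings reduce it in practice.
    patch = np.random.exponential(scale=100.0, size=(256, 256))
    print(round(speckle_contrast(patch), 3))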

TFRank: An Evaluation of User Importance with Fractal Views in Social Networks

One of the research issues in social network analysis is to evaluate the position/importance of users in social networks. Because information diffusion in social networks keeps evolving, it is difficult to evaluate the importance of users with traditional approaches. In this paper, we propose an approach to evaluating user importance with fractal views in social networks. In this approach, both the global importance (fractal importance) and the local importance (topological importance) of nodes are considered. The basic idea is that the larger the product of the fractal importance and the topological importance of a node, the more important the node is. We devise an algorithm called TFRank corresponding to the proposed approach. Finally, we evaluate TFRank by experiments. The experimental results demonstrate that TFRank correlates highly with the PageRank algorithm and a potential ranking algorithm, which shows the effectiveness and advantages of our approach.
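
The scoring rule itself can be sketched directly: rank nodes by the product of a fractal (global) and a topological (local) importance. In the sketch below, degree stands in for topological importance and a distance-decay sum for fractal importance; both are illustrative placeholders, not the paper's definitions.

    from collections import deque

    def tfrank(adj):
        """Score each node by fractal_importance * topological_importance.
        Both importance functions here are illustrative stand-ins."""
        def bfs_dist(src):
            dist, q = {src: 0}, deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return dist
        scores = {}
        for n in adj:
            ti = len(adj[n])                                 # local: degree
            fi = sum(2.0 ** -d for v, d in bfs_dist(n).items() if v != n)
            scores[n] = fi * ti
        return sorted(scores.items(), key=lambda kv: -kv[1])

    adj = {'a': ['b', 'c'], 'b': ['a', 'c', 'd'], 'c': ['a', 'b'], 'd': ['b']}
    print(tfrank(adj))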

Learning Monte Carlo Data for Circuit Path Length

This paper analyzes patterns in Monte Carlo data for large numbers of variables and minterms in order to characterize circuit path length behavior. We propose models determined by a training process on shortest path lengths derived from a wide range of binary decision diagram (BDD) simulations. The model is built using a feed-forward neural network (NN) modeling methodology. Experimental results on ISCAS benchmark circuits show an RMS error of 0.102 for the shortest path length complexity estimates predicted by the NN model (NNM). Such a model can help reduce the time complexity of very large scale integration (VLSI) circuits and of related computer-aided design (CAD) tools that use BDDs.
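
A minimal sketch of the modeling step with scikit-learn: fit a small feed-forward network to (number of variables, number of minterms) -> shortest path length samples. The synthetic training data, feature set and network size below are assumptions for illustration only, not the paper's setup.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical training data: (num_variables, num_minterms) pairs with
    # shortest BDD path lengths collected from Monte Carlo simulations.
    rng = np.random.default_rng(0)
    X = rng.integers(4, 64, size=(500, 2)).astype(float)
    y = np.log2(X[:, 1] + 1) + 0.1 * X[:, 0]   # stand-in for measured data

    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                         random_state=0)
    model.fit(X, y)
    rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
    print(f"training RMS error: {rmse:.3f}")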

An Edge Detection and Filtering Mechanism of Two Dimensional Digital Objects Based on Fuzzy Inference

The general idea behind the filter is to average a pixel using other pixel values from its neighborhood, while simultaneously taking care of important image structures such as edges. The main concern of the proposed filter is to distinguish between variations in the captured digital image that are due to noise and those due to image structure. Edges give an image its appearance of depth and sharpness, and a loss of edges makes the image appear blurred or unfocused. However, noise smoothing and edge enhancement are traditionally conflicting tasks: since most noise filtering behaves like a low-pass filter, the blurring of edges and loss of detail seem a natural consequence, and techniques to remedy this inherent conflict often generate new noise during enhancement. In this work a new fuzzy filter is presented for the reduction of additive noise in images. The filter consists of three stages: (1) fuzzy sets are defined in the input space to compute a fuzzy derivative in eight different directions; (2) a set of IF-THEN rules is constructed to perform fuzzy smoothing according to the contributions of neighboring pixel values; and (3) fuzzy sets are defined in the output space to obtain the filtered and edge-detected image. Experimental results show the feasibility of the proposed approach on two-dimensional objects.
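
Stage (1) can be sketched as computing, for each of the eight compass directions, the degree to which the neighbor difference belongs to a fuzzy set "small". The triangular membership function and its width below are illustrative assumptions, not the paper's tuned fuzzy sets.

    import numpy as np

    DIRS = {'N': (-1, 0), 'S': (1, 0), 'E': (0, 1), 'W': (0, -1),
            'NE': (-1, 1), 'NW': (-1, -1), 'SE': (1, 1), 'SW': (1, -1)}

    def small(diff, width=30.0):
        """Triangular membership of |diff| in the fuzzy set 'small'
        (width is an illustrative parameter)."""
        return max(0.0, 1.0 - abs(diff) / width)

    def fuzzy_derivative(img, i, j):
        """Degree to which the derivative is 'small' in 8 directions."""
        return {name: small(float(img[i + di, j + dj]) - float(img[i, j]))
                for name, (di, dj) in DIRS.items()}

    img = np.random.randint(0, 256, (5, 5))
    print(fuzzy_derivative(img, 2, 2))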

Integrating Security Indifference Curve to Formal Decision Evaluation

Decisions are made regularly during a project and in daily life. Some decisions are critical and have a direct impact on project or human success, so formal evaluation is required, especially for crucial decisions, to arrive at the optimal solution among the alternatives. According to microeconomic theory, all of a person's decisions can be modeled as indifference curves. The proposed approach supports formal analysis and decision-making by constructing an indifference curve model from experts' previous decision criteria. The knowledge embedded in the system can be reused and can help naïve users select alternative solutions to similar problems. Moreover, the method is flexible enough to cope with an unlimited number of factors influencing the decision. The preliminary experimental results show that the selected alternatives accurately match the experts' decisions.