Gradual Shot Boundary Detection and Classification Based on Fractal Analysis

Shot boundary detection is a fundamental step in the organization of large video collections. In this paper, we propose a new method for detecting and classifying gradual shot transitions in video, exploiting the advantages of fractal analysis and an AIS-based classifier. The proposed features are the "vertical intercept" and the "fractal dimension" of each video frame, computed from Fourier transform coefficients. We also use a classifier based on the Clonal Selection Algorithm. We implemented our solution and evaluated it on the TRECVID 2006 benchmark dataset.
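
A minimal sketch of how such Fourier-based fractal features can be computed, assuming the common power-spectrum approach in which the radially averaged log power of a frame is regressed against log frequency; the intercept of the fit plays the role of the "vertical intercept" and the slope determines the fractal dimension. The paper's exact estimator may differ.

```python
import numpy as np

def fourier_fractal_features(frame):
    """Estimate a 'vertical intercept' and 'fractal dimension' of a grayscale
    frame from the radially averaged Fourier power spectrum (illustrative
    sketch; the paper's exact estimator may differ)."""
    f = np.fft.fftshift(np.fft.fft2(frame.astype(float)))
    power = np.abs(f) ** 2
    h, w = frame.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx).astype(int)
    # Radially average the power spectrum
    radial = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(cy, cx))            # skip the DC component
    log_f, log_p = np.log(freqs), np.log(radial[freqs])
    slope, intercept = np.polyfit(log_f, log_p, 1)
    # For a fractal surface, power ~ f^(-beta) with D = (8 - beta) / 2
    fractal_dim = (8.0 + slope) / 2.0            # slope is negative, beta = -slope
    return intercept, fractal_dim
```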

The Willingness of Business Students toward Innovative Behavior within the Theory of Planned Behavior

Classes on creativity, innovation, and entrepreneurship are becoming quite popular at universities throughout the world. However, it is not easy for business students to get involved in innovative activities, especially patent applications. The present study investigated how to enhance business students' intention to participate in innovative activities and which incentives universities should consider. A 22-item research scale was used, and confirmatory factor analysis was conducted to verify its reliability and validity. Multiple regression and discriminant analyses were also conducted. The results demonstrate the effect of growth-need strength on innovative behavior and indicate that the theory of planned behavior can explain and predict business students' intention to participate in innovative activities. Additionally, the results suggest that applying our proposed model in practice would effectively strengthen business students' intentions to engage in innovative activities.

Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis

This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output-layer unit. Compared with more recent neural networks such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure. When the learning objects are complicated, problems such as unsuccessful learning or the significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics of an output-layer unit oscillate during the learning process for complicated problems. The multi-stage learning method proposed by the authors for function approximation problems classifies the learning data in a phased manner, focusing on their learnability prior to learning in the multilayer neural network, and this paper demonstrates the validity of the multi-stage learning method. Specifically, it is verified by computer experiments that both learning accuracy and learning time are improved when the BP method is used as the learning rule of the multi-stage learning method. In learning, the oscillatory phenomena of the learning curve play an important role in learning performance. The authors also discuss the mechanisms by which these oscillatory phenomena occur during learning. Furthermore, by observing behavior during learning, the authors discuss why the errors of some data remain large even after learning.
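
The phased idea can be illustrated as follows: a hypothetical sketch of multi-stage training with a backpropagation-based MLP, where "hard" samples (those with large current errors) are deferred to later stages. The staging criterion, stage count and model are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # backpropagation-based MLP

def multi_stage_train(X, y, n_stages=3, hidden=(20,)):
    """Illustrative multi-stage (phased) learning: train first on the
    'easiest' samples, then progressively add harder ones, where hardness
    is measured by the current model's per-sample error."""
    model = MLPRegressor(hidden_layer_sizes=hidden, max_iter=500, warm_start=True)
    # Stage 0: bootstrap on a random subset to obtain an initial error estimate
    idx = np.random.choice(len(X), size=max(len(X) // n_stages, 1), replace=False)
    model.fit(X[idx], y[idx])
    for stage in range(1, n_stages + 1):
        errors = (model.predict(X) - y) ** 2
        # Keep the fraction of data with the smallest errors for this stage
        keep = np.argsort(errors)[: int(len(X) * stage / n_stages)]
        model.fit(X[keep], y[keep])   # warm_start continues from previous weights
    return model
```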

On Symmetries and Exact Solutions of Einstein Vacuum Equations for Axially Symmetric Gravitational Fields

The Einstein vacuum equations, a system of nonlinear partial differential equations (PDEs), are derived from the Weyl metric using the relation between the Einstein tensor and the metric tensor. The symmetries of the Einstein vacuum equations for static axisymmetric gravitational fields are obtained using the Lie classical method. We examine the optimal system of vector fields, which is then used to reduce the nonlinear PDEs to nonlinear ordinary differential equations (ODEs). Some exact solutions of the Einstein vacuum equations in general relativity are also obtained.
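
For orientation, the standard form of the setting the abstract refers to is sketched below; this is the textbook Weyl line element and the PDE system it produces, and the paper's own conventions (signs, function names) may differ.

```latex
% Static axisymmetric Weyl metric (standard form):
\[
  ds^{2} = -e^{2\psi}\,dt^{2}
           + e^{2(\gamma-\psi)}\bigl(d\rho^{2}+dz^{2}\bigr)
           + \rho^{2}e^{-2\psi}\,d\varphi^{2},
  \qquad \psi=\psi(\rho,z),\ \gamma=\gamma(\rho,z).
\]
% The vacuum equations G_{\mu\nu}=0 then reduce to the nonlinear PDE system
\[
  \psi_{\rho\rho}+\frac{1}{\rho}\,\psi_{\rho}+\psi_{zz}=0,\qquad
  \gamma_{\rho}=\rho\bigl(\psi_{\rho}^{2}-\psi_{z}^{2}\bigr),\qquad
  \gamma_{z}=2\rho\,\psi_{\rho}\psi_{z},
\]
% to which the Lie classical symmetry reduction described above is applied.
```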

Incidence, Occurrence, Classification and Outcome of Small Animal Fractures: A Retrospective Study (2005-2010)

A retrospective study was undertaken to record the occurrence and pattern of fractures in small animals (dogs and cats) from 2005 to 2010. A total of 650 cases were presented to the small animal surgery unit, out of which 116 (dogs and cats) presented with a history of fractures of different bones. Fracture cases thus accounted for 17.8% (116/650) of presentations, of which 67% were dogs and 23% were cats. The majority of animals were intact. Trauma in the form of roadside accidents was the principal cause of fractures in dogs, whereas in cats it was falls from height. The ages of the fractured dogs ranged from 4 months to 12 years, whereas in cats they ranged from 4 weeks to 10 years. Femoral fractures represented 37.5% and 25% of cases in dogs and cats respectively. The diaphysis, distal metaphysis and supracondylar region were the most affected sites in dogs and cats. Tibial fractures in dogs and cats represented 21.5% and 10%, while humeral fractures were 7.9% and 14% in dogs and cats respectively. Humeral condylar fractures were most commonly seen in puppies aged 4 to 6 months. The incidence of radius-ulna fractures was 19% and 14% in dogs and cats respectively. Other fractures recorded involved the lumbar vertebrae, mandible, metacarpals, etc. Management comprised external and internal fixation in both species. The most common internal fixation technique was intramedullary fixation of long bones, followed by other methods such as stack or cross pinning and wiring, as dictated by the findings in each case. Cast bandages were the main means of external coaptation. The paper discusses the outcomes of the cases according to the technique employed.

Variance Based Component Analysis for Texture Segmentation

This paper presents a comparative analysis of a new unsupervised PCA-based technique for steel plate texture segmentation aimed at defect detection. The proposed scheme, called Variance Based Component Analysis (VBCA), employs PCA for feature extraction, applies a feature reduction algorithm based on the variance of the eigenpictures, and classifies the pixels as defective or normal. While classic PCA uses a clusterer such as K-means for pixel clustering, VBCA employs thresholding and some post-processing operations to label pixels as defective or normal. The experimental results show that the proposed VBCA algorithm is 12.46% more accurate and 78.85% faster than classic PCA.
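
The pipeline can be sketched roughly as follows, assuming a conventional eigenpicture formulation: PCA on image patches, variance-based selection of the retained eigenpictures, and a simple threshold on reconstruction error instead of a clusterer. The function, threshold rule and parameter names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def vbca_segment(patches, var_keep=0.95, thresh_sigma=2.0):
    """VBCA-style sketch: PCA on image patches, keep eigenpictures by
    cumulative variance, then threshold the reconstruction error to label
    patches as defective or normal."""
    X = patches.reshape(len(patches), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigen-decomposition of the covariance via SVD
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_ratio = np.cumsum(S ** 2) / np.sum(S ** 2)
    k = int(np.searchsorted(var_ratio, var_keep)) + 1  # eigenpictures kept by variance
    proj = Xc @ Vt[:k].T                                # project onto kept components
    recon = proj @ Vt[:k] + mean
    err = np.linalg.norm(X - recon, axis=1)             # reconstruction error per patch
    # Threshold instead of running a clusterer such as K-means
    labels = err > err.mean() + thresh_sigma * err.std()
    return labels   # True = defective, False = normal
```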

TRS: System for Recommending Semantic Web Service Composition Approaches

A large number of semantic web service composition approaches have been developed by the research community, and each is more efficient than the others depending on the particular situation of use. A close look at the requirements of one's particular situation is therefore necessary to find a suitable approach. In this paper, we present a Technique Recommendation System (TRS) which, using a classification of state-of-the-art semantic web service composition approaches, provides the user with recommendations on which composition approach to use, based on parameters describing the situation of use. TRS has a modular architecture and uses production rules for knowledge representation.
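
A minimal sketch of the production-rule idea follows. The rule conditions, situation parameters and recommended approach names are hypothetical examples, not the actual TRS knowledge base.

```python
# Minimal production-rule recommender in the spirit of TRS (illustrative only).
RULES = [
    (lambda s: s.get("qos_constraints") and s.get("large_service_pool"),
     "AI-planning-based composition"),
    (lambda s: s.get("user_interaction_required"),
     "template/workflow-based composition"),
    (lambda s: True,                       # default rule
     "ontology-driven matchmaking composition"),
]

def recommend(situation: dict) -> str:
    """Fire the first rule whose condition matches the situation parameters."""
    for condition, approach in RULES:
        if condition(situation):
            return approach

print(recommend({"qos_constraints": True, "large_service_pool": True}))
```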

An Improved Quality Adaptive Rate Filtering Technique Based on the Level Crossing Sampling

Most systems deal with time-varying signals. Power efficiency can be achieved by adapting the system activity to the input signal variations. In this context, an adaptive rate filtering technique based on level-crossing sampling is devised. It adapts the sampling frequency and the filter order by following the local variations of the input signal, thus correlating the processing activity with the signal variations. Interpolation is required in the proposed technique, and a drastic reduction in the interpolation error is achieved by exploiting symmetry during the interpolation process. The processing error of the proposed technique is calculated. The computational complexity of the proposed filtering technique is derived and compared to that of the classical approach. The results promise a significant gain in computational efficiency and hence in power consumption.
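
To make the sampling idea concrete, here is a minimal level-crossing sampler: samples are taken only where the signal crosses predefined levels, so the local sampling rate follows the local signal activity. This is a generic sketch, not the paper's specific scheme or parameter choices.

```python
import numpy as np

def level_crossing_sample(t, x, levels):
    """Record a sample whenever the signal crosses one of the predefined
    quantization levels (crossing instants found by linear interpolation)."""
    samples_t, samples_x = [], []
    for i in range(1, len(x)):
        for lv in levels:
            if (x[i - 1] - lv) * (x[i] - lv) < 0:        # sign change = crossing
                frac = (lv - x[i - 1]) / (x[i] - x[i - 1])
                samples_t.append(t[i - 1] + frac * (t[i] - t[i - 1]))
                samples_x.append(lv)
    return np.array(samples_t), np.array(samples_x)

# Example: a slowly varying tone plus a burst of faster activity
t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
ts, xs = level_crossing_sample(t, x, levels=np.linspace(-1.2, 1.2, 9))
```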

Granulation using Clustering and Rough Set Theory and its Tree Representation

Granular computing deals with the representation of information in the form of aggregates and with related methods for their transformation and analysis in problem solving. A granulation scheme based on clustering and Rough Set Theory, with a focus on the structured conceptualization of information, is presented in this paper. Experiments with the proposed method on four labeled datasets exhibit good results with respect to the classification problem. The proposed granulation technique is semi-supervised, incorporating both global and local information during granulation. A tree structure is also proposed to represent the results of the attribute-oriented granulation.
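
The rough-set building block used by such granulation schemes is the pair of lower and upper approximations of a target concept with respect to an indiscernibility partition; a minimal sketch is given below. The clustering step and the tree representation of the paper are omitted here.

```python
def approximations(equivalence_classes, target_set):
    """Return (lower, upper) approximations of target_set with respect to the
    indiscernibility partition given by equivalence_classes (the granules)."""
    target = set(target_set)
    lower, upper = set(), set()
    for block in equivalence_classes:
        block = set(block)
        if block <= target:        # fully contained -> certainly in the concept
            lower |= block
        if block & target:         # overlaps -> possibly in the concept
            upper |= block
    return lower, upper

# Example: objects 0..5 partitioned by some attribute values
blocks = [{0, 1}, {2, 3}, {4, 5}]
low, up = approximations(blocks, {0, 1, 2})
print(low, up)   # {0, 1}  {0, 1, 2, 3}
```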

Methods for Manufacture of Corrugated Wire Mesh Laminates

Corrugated wire mesh laminates (CWML) are a class of engineered open-cell structures with potential applications in many areas, including aerospace and biomedical engineering. Two different methods of fabricating corrugated wire mesh laminates from stainless steel, one using a high-temperature Lithobraze alloy and the other using a low-temperature eutectic solder for joining the corrugated wire meshes, are described herein. Their implementation is demonstrated by manufacturing CWML samples of 304 and 316 stainless steel (SST). It is seen that, owing to the facility of employing wire meshes of different densities and wire diameters, it is possible to create CWML with a wide range of effective densities. The fabricated laminates are tested under uniaxial compression. The variation of the compressive yield strength with the relative density of the CWML is compared to the theory developed by Gibson and Ashby for open-cell structures [22]. It is shown that the compressive strength of the corrugated wire mesh laminates can be described by the same equations, using an appropriate value for the linear coefficient in the Gibson-Ashby model.
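
For reference, the textbook Gibson-Ashby scaling for the plastic collapse strength of an open-cell structure is shown below; the abstract's claim is that the CWML data follow this form once the linear coefficient C is fitted appropriately (the usual foam value is about 0.3, but the paper's fitted value may differ).

```latex
\[
  \frac{\sigma^{*}_{\mathrm{pl}}}{\sigma_{\mathrm{ys}}}
  \;=\; C\left(\frac{\rho^{*}}{\rho_{\mathrm{s}}}\right)^{3/2},
\]
% where \rho^{*}/\rho_{\mathrm{s}} is the relative density of the CWML and
% \sigma_{\mathrm{ys}} is the yield strength of the solid wire material.
```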

Investigating the Treatability of a Compost Leachate in a Hybrid Anaerobic Reactor: An Experimental Study

Compost manufacturing plants are among the facilities where wastewater is produced in significantly large amounts. The wastewater produced in these plants contains high amounts of substrate (organic load) and is classified as a high-strength waste that creates significant pollution when discharged into the environment without treatment. A compost production plant in one of Iran's provinces, treating 200 tons/day of waste, is one of the most important sources of environmental pollution in this zone. The main objectives of this paper are to investigate the treatability of compost wastewater in hybrid anaerobic reactors with an upflow-downflow arrangement, to determine the kinetic constants, and eventually to obtain an appropriate mathematical model. After start-up of the hybrid anaerobic reactor at the compost production plant, the average COD removal efficiency was 95%.
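
As an illustration of what "determining the kinetic constants" involves, the sketch below fits a simple first-order substrate-removal model to effluent COD versus retention time. The model form, parameter names and data are placeholders for illustration only; the paper's actual kinetic model and measurements may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical first-order substrate-removal model: S_e = S_0 * exp(-k * HRT)
def first_order(hrt, k, s0):
    return s0 * np.exp(-k * hrt)

hrt = np.array([1.0, 2.0, 3.0, 4.0, 5.0])              # hydraulic retention time, days
cod_eff = np.array([30000, 12000, 5200, 2300, 1000])    # effluent COD, mg/L (placeholder data)
(k, s0), _ = curve_fit(first_order, hrt, cod_eff, p0=(0.5, 60000))
print(f"first-order rate constant k = {k:.2f} 1/day")
```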

Robust Clustering with Dimension Reduction

Clustering is the process of identifying homogeneous groups of objects, called clusters, and is an interesting topic in data mining; objects in a group or class share similar characteristics. This paper discusses a robust clustering process for image data with two dimension reduction approaches: two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard approach to high-dimensional data is dimension reduction, which transforms the data into a lower-dimensional space with limited loss of information, and one of the most common forms of dimensionality reduction is principal component analysis (PCA). 2DPCA is often called a variant of PCA: the image matrices are treated directly as 2D matrices and do not need to be transformed into vectors, so the image covariance matrix can be constructed directly from the original image matrices. However, the classical covariance matrix being decomposed is very sensitive to outlying observations. The objective of this paper is to compare the performance of robust minimizing vector variance (MVV) under the two-dimensional projection (2DPCA) and under PCA for clustering arbitrary image data when outliers are hidden in the data set. The simulation aspects of robustness and an illustration of clustering images are discussed at the end of the paper.
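
The 2DPCA step described above can be sketched as follows, in its standard (non-robust) form; the paper's robust MVV variant replaces the classical covariance estimate, which is not shown here.

```python
import numpy as np

def two_d_pca(images, k=5):
    """Standard 2DPCA sketch: the image covariance matrix is built directly
    from the image matrices, without vectorizing them."""
    imgs = np.asarray(images, dtype=float)      # shape (n, h, w)
    mean = imgs.mean(axis=0)
    # Image covariance matrix G (w x w), accumulated over centered images
    G = np.zeros((imgs.shape[2], imgs.shape[2]))
    for A in imgs:
        D = A - mean
        G += D.T @ D
    G /= len(imgs)
    eigvals, eigvecs = np.linalg.eigh(G)
    proj = eigvecs[:, -k:]                      # top-k projection axes
    # Each image is represented by an (h x k) feature matrix
    return np.array([A @ proj for A in imgs]), proj
```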

Functional Near Infrared Spectroscope for Cognition Brain Tasks by Wavelets Analysis and Neural Networks

Research on Brain-Computer Interfaces (BCI) has recently increased. Functional Near-Infrared Spectroscopy (fNIRs) is one of the latest technologies that utilize light in the near-infrared range to determine brain activity. Because near-infrared technology allows the design of safe, portable, wearable, non-invasive and wireless monitoring systems, fNIRs monitoring of brain hemodynamics can be valuable in helping to understand brain tasks. In this paper, we present results of fNIRs signal analysis indicating that there exist distinct patterns of hemodynamic responses that allow brain tasks to be recognized, a step toward developing a BCI. We applied two different mathematical tools: wavelet analysis for preprocessing, as signal filters and for feature extraction, and neural networks as the classification module for cognitive brain tasks. We also compare our approach with other methods; our proposal performs better, with an average classification accuracy of 99.9%.
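
A compact sketch of the wavelet-feature plus neural-network pipeline follows. The wavelet family, decomposition level, sub-band statistics, network size and the placeholder data are all assumptions for illustration; the paper's configuration may differ.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(signal, wavelet="db4", level=4):
    """Decompose an fNIRs signal and summarize each sub-band by simple statistics."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([f(c) for c in coeffs for f in (np.mean, np.std)])

# Hypothetical usage with placeholder data: (n_trials x n_samples), one label per task
X_raw = np.random.randn(40, 512)
y = np.random.randint(0, 2, size=40)
X = np.vstack([wavelet_features(s) for s in X_raw])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)
```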

Extremal Properties of Generalized Class of Close-to-convex Functions

Let $G_{\alpha,\beta}(\gamma,\delta)$ denote the class of functions $f(z)$ normalized by $f(0) = f'(0) - 1 = 0$ which satisfy $\operatorname{Re}\bigl\{e^{i\delta}\bigl(\alpha f'(z) + \beta z f''(z)\bigr)\bigr\} > \gamma$ in the open unit disk $D = \{z \in \mathbb{C} : |z| < 1\}$ for some $\alpha \in \mathbb{C}$ ($\alpha \neq 0$), $\beta \in \mathbb{C}$ and $\gamma \geq 0$. In this paper, we determine some extremal properties, including a distortion theorem and the argument of $f'(z)$.

Advanced Stochastic Models for Partially Developed Speckle

Speckled images arise when coherent microwave, optical, or acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object- or target-induced noise that results when the surface of the object is Rayleigh-rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise are complicated by the nature of the noise and are not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves a partially developed speckle model in which an underlying Poisson point process modulates a Gram-Charlier series of Laguerre-weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in closed canonical form. It is observed that as the mean number of scatterers in a resolution cell increases, the probability density function approaches an exponential distribution. This is consistent with fully developed speckle noise, as predicted by the Central Limit Theorem.
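
For reference, the fully developed limit the abstract alludes to is the standard negative-exponential intensity law; the partially developed model derived in the paper reduces to this form as the mean number of scatterers per resolution cell grows.

```latex
\[
  p(I) \;=\; \frac{1}{\bar{I}}\,\exp\!\left(-\frac{I}{\bar{I}}\right),
  \qquad I \ge 0,
\]
% where \bar{I} is the mean intensity of the fully developed speckle.
```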

Classification of Non Stationary Signals Using Ben Wavelet and Artificial Neural Networks

The automatic classification of non-stationary signals is an important practical goal in several domains. An essential classification task is to allocate the incoming signal to a group associated with the kind of physical phenomenon producing it. In this paper, we present a modular system composed of three blocks: 1) representation, 2) dimensionality reduction, and 3) classification. The originality of our work lies in the use of a new wavelet, called the "Ben wavelet", in the representation stage. For the dimensionality reduction, we propose a new algorithm based on random projection and principal component analysis.
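
The dimensionality-reduction block can be sketched as a random projection followed by PCA; the ordering, dimensions and placeholder data below are assumptions, since the paper's algorithm may chain the two differently.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

def reduce_dimension(features, rp_dim=128, pca_dim=20):
    """Cheap random projection first, then PCA for decorrelation."""
    rp = GaussianRandomProjection(n_components=rp_dim, random_state=0)
    reduced = rp.fit_transform(features)
    pca = PCA(n_components=pca_dim)
    return pca.fit_transform(reduced)

X = np.random.randn(200, 1024)   # placeholder wavelet-representation features
Z = reduce_dimension(X)
print(Z.shape)                   # (200, 20)
```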

Association Rule and Decision Tree based Methods for Fuzzy Rule Base Generation

This paper focuses on the data-driven generation of fuzzy IF...THEN rules. The resulting fuzzy rule base can be applied to build a classifier or a model used for prediction, or it can be used to form a decision support system. Among the wide range of possible approaches, decision tree and association rule based algorithms are reviewed, and two new approaches are presented based on an a priori fuzzy-clustering-based partitioning of the continuous input variables. An application study is also presented, in which the developed methods are tested on the well-known Wisconsin Breast Cancer classification problem.
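
A minimal sketch of the decision-tree route to rule generation on the Wisconsin Breast Cancer data is given below: the crisp IF...THEN rules read off the tree would then be fuzzified via the a priori partitioning of the inputs. The fuzzification step itself, and the paper's specific tree settings, are not shown.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)
# Each root-to-leaf path reads as a crisp IF...THEN rule
print(export_text(tree, feature_names=list(data.feature_names)))
```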

Human Facial Expression Recognition using MANFIS Model

Facial expression analysis plays a significant role in human-computer interaction. Automatic analysis of human facial expressions is still a challenging problem with many applications. In this paper, we propose a neuro-fuzzy-based automatic facial expression recognition system to recognize human facial expressions such as happiness, fear, sadness, anger, disgust and surprise. Initially, the facial image is segmented into three regions, from which uniform Local Binary Pattern (LBP) texture feature distributions are extracted and represented as histogram descriptors. The facial expressions are then recognized using a Multiple Adaptive Neuro-Fuzzy Inference System (MANFIS). The proposed system is designed and tested with the JAFFE face database, and the proposed model achieves 94.29% classification accuracy.
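
The uniform-LBP histogram descriptor for a face region can be sketched as follows; the neighborhood parameters and binning are typical defaults, not necessarily the paper's, and the MANFIS classifier itself is not shown.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(region, P=8, R=1):
    """Normalized histogram of uniform LBP codes for one face region."""
    codes = local_binary_pattern(region, P, R, method="uniform")
    n_bins = P + 2                              # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# The three region descriptors would be concatenated and fed to the MANFIS classifier.
```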

Efficient Implementation of Serial and Parallel Support Vector Machine Training with a Multi-Parameter Kernel for Large-Scale Data Mining

This work deals with aspects of support vector learning for large-scale data mining tasks. Based on a decomposition algorithm that can be run in serial or parallel mode, we introduce a data transformation that allows an expensive generalized kernel to be used without additional cost. In order to speed up the decomposition algorithm, we analyze the problem of working set selection for large data sets and study the influence of the working set size on the scalability of the parallel decomposition scheme. Our modifications and settings lead to improved support vector learning performance and thus allow extensive parameter search methods to be used to optimize classification accuracy.
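
One way such a data transformation can work, sketched under the assumption that the multi-parameter kernel is a per-feature weighted Gaussian kernel: pre-scaling the data once makes the generalized kernel equal to a plain RBF kernel, so no extra cost is paid per kernel evaluation. The paper's exact kernel parametrization may differ.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def multi_param_rbf(x, z, w):
    return np.exp(-np.sum(w * (x - z) ** 2))    # one width per feature

rng = np.random.default_rng(0)
x, z = rng.standard_normal(5), rng.standard_normal(5)
w = rng.uniform(0.1, 2.0, size=5)
# Pre-scale once: x_i -> sqrt(w_i) * x_i, then the plain RBF kernel is equivalent
assert np.isclose(multi_param_rbf(x, z, w), rbf(np.sqrt(w) * x, np.sqrt(w) * z))
```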

Classification of Soil Aptness for Establishment of Panicum virgatum in Mississippi using Sensitivity Analysis and GIS

During the last decade, Panicum virgatum, known as switchgrass, has been broadly studied because of its remarkable attributes as a substitute pasture and as a functional biofuel source. The objective of this investigation was to establish soil suitability for switchgrass in the State of Mississippi. A linear weighted additive model was developed to forecast soil suitability, and multicriteria analysis and sensitivity analysis were used to adjust and optimize the model. The model was fit using seven years of field data together with soil characteristics collected from the Natural Resources Conservation Service of the United States Department of Agriculture (NRCS-USDA). The best model was selected by correlating calculated biomass yield with each model's soil-based output for switchgrass suitability; the coefficient of determination (r²) was the decisive factor used to select the 'best' soil suitability model. The coefficients associated with the 'best' model were implemented within a Geographic Information System (GIS) to create a map of relative soil suitability for switchgrass in Mississippi. A geodatabase of the associated soil parameters was built and is available for future GIS use.
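
A minimal sketch of a linear weighted additive suitability model of the kind described above is given below. The factor names and weights are hypothetical placeholders, not the coefficients selected in the study.

```python
# Hypothetical factors, each pre-scaled to [0, 1]; suitability is the weighted sum.
WEIGHTS = {"ph_score": 0.30, "drainage_score": 0.25,
           "awc_score": 0.25, "slope_score": 0.20}

def suitability(soil: dict) -> float:
    """Linear weighted additive suitability score for one soil map unit."""
    return sum(w * soil[k] for k, w in WEIGHTS.items())

print(suitability({"ph_score": 0.8, "drainage_score": 0.6,
                   "awc_score": 0.9, "slope_score": 0.7}))
```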