New Approaches for Seismic Signal Discrimination

The automatic discrimination of seismic signals is an important practical goal for earth-science observatories because of the large amount of information they receive continuously. An essential discrimination task is to allocate an incoming signal to a group associated with the kind of physical phenomenon producing it. In this paper, we present new techniques for seismic signal classification: local, regional and global discrimination. These techniques were tested on seismic signals from the database of the National Geophysical Institute of the Centre National pour la Recherche Scientifique et Technique (Morocco) using the Moroccan software for seismic signal analysis.

Operational Modal Analysis Implementation on a Hybrid Composite Plate

In aerospace applications, interactions of airflow with aircraft structures can result in undesirable structural deformations. This structural deformation, in turn, can be predicted if the natural modes of the structure are known. This can be achieved through conventional modal testing, which requires a known excitation force in order to extract these dynamic properties. That technique can be experimentally complex because of the need for artificial excitation, and it also does not represent actual operational conditions. The current work presents part of a research effort that addresses the practical implementation of operational modal analysis (OMA) applied to a cantilevered hybrid composite plate, employing a single contactless sensing system via a laser vibrometer. The OMA technique extracts the modal parameters based only on measurements of the dynamic response. The OMA results were verified with impact hammer modal testing and good agreement was obtained.
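
As a rough illustration of the response-only idea behind OMA, the sketch below picks candidate natural frequencies from the peaks of a Welch power spectral density of a single output record. This is not the modal identification pipeline used in the paper; the signal, sampling rate and peak-prominence threshold are assumptions.

```python
# Minimal response-only natural-frequency estimate via PSD peak-picking.
# Illustrative sketch only; the synthetic signal stands in for a
# laser-vibrometer record, and the threshold is an assumption.
import numpy as np
from scipy.signal import welch, find_peaks

def estimate_natural_frequencies(response, fs, prominence=5.0):
    """Return candidate natural frequencies (Hz) from an output-only record."""
    freqs, psd = welch(response, fs=fs, nperseg=4096)
    psd_db = 10.0 * np.log10(psd + 1e-20)        # work in dB for stable peak picking
    peaks, _ = find_peaks(psd_db, prominence=prominence)
    return freqs[peaks]

fs = 2048.0
t = np.arange(0, 20, 1 / fs)
response = (np.sin(2 * np.pi * 12.5 * t) + 0.4 * np.sin(2 * np.pi * 78.0 * t)
            + 0.1 * np.random.randn(t.size))
print(estimate_natural_frequencies(response, fs))
```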

Web Application Security, Attacks and Mitigation

Today's technology is heavily dependent on web applications, which are being adopted by users at a very rapid pace. They have made our work more efficient and include webmail, online retail, online gaming, wikis, train and flight departure and arrival information, and much more. Such applications are developed in languages like PHP, Python, C# and ASP.NET, together with client-side technologies such as HTML and JavaScript. Attackers develop tools and techniques to exploit web applications and legitimate websites. This has led to the rise of web application security, which can be broadly classified into declarative security and program security. The most common attacks on these applications are SQL injection and cross-site scripting (XSS), which give unauthorized users access that can damage or destroy the system. This paper presents a detailed literature description and analysis of web application security, examples of attacks, and steps to mitigate the vulnerabilities.
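
A minimal sketch of the two standard mitigations the abstract alludes to: parameterized queries against SQL injection and output escaping against reflected XSS. The table name, column names and the rendering function are hypothetical examples, not taken from the paper.

```python
# Illustrative mitigation of SQL injection and reflected XSS in Python.
import html
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: user input is bound as data, never concatenated
    # into the SQL string, which blocks classic SQL injection payloads.
    cur = conn.execute("SELECT id, username FROM users WHERE username = ?",
                       (username,))
    return cur.fetchone()

def render_greeting(username: str) -> str:
    # Escape user-controlled text before inserting it into HTML so that
    # script tags are rendered as text instead of executed (XSS mitigation).
    return "<p>Welcome, {}!</p>".format(html.escape(username))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("INSERT INTO users (username) VALUES (?)", ("alice",))
print(find_user(conn, "alice' OR '1'='1"))   # None: the injection is neutralized
print(render_greeting("<script>alert(1)</script>"))
```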

Performance Comparison of Particle Swarm Optimization with Traditional Clustering Algorithms used in Self-Organizing Map

The self-organizing map (SOM) is a well-known data reduction technique used in data mining. It can reveal structure in data sets through data visualization that is otherwise hard to detect from raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters of code vectors found by a SOM, but they generally do not take into account the distribution of the code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of an adaptive heuristic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from the SOM. The application of our method to several standard data sets demonstrates its feasibility. The PSO algorithm utilizes the U-matrix of the SOM to determine cluster boundaries; the results of this novel automatic method compare very favorably with boundary detection by the traditional algorithms, namely k-means and hierarchical clustering, that are normally used to interpret the output of a SOM.
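
For readers unfamiliar with PSO, the sketch below shows the bare-bones optimizer the paper builds on: a swarm of particles minimizing a fitness function. The adaptive heuristics and the U-matrix-based boundary objective used by the authors are not reproduced; the quadratic fitness and all parameter values are placeholders.

```python
# Generic particle swarm optimizer (minimization).
import numpy as np

def pso_minimize(fitness, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()             # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy fitness with its minimum at (1, 2), standing in for a
# U-matrix-derived cluster-boundary objective.
best, val = pso_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, dim=2)
print(best, val)
```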

Effect of Laser Power and Powder Flow Rate on Properties of Laser Metal Deposited Ti6Al4V

Laser Metal Deposition (LMD) is an additive manufacturing process with capabilities that include producing a new part directly from a three-dimensional Computer Aided Design (3D CAD) model, building a new part on an existing component, and repairing existing high-value parts that would previously have been discarded. Despite these capabilities and its advantages over other additive manufacturing techniques, the underlying physics of the LMD process is not yet fully understood, largely because of the strong interaction between processing parameters; studying many parameters at the same time makes the process even more complex to analyse. In this study, the effects of laser power and powder flow rate on the physical properties (deposition height and deposition width), metallurgical property (microstructure) and mechanical property (microhardness) of laser-deposited Ti6Al4V, the most widely used aerospace alloy, are studied. Also, because Ti6Al4V is very expensive and LMD is capable of reducing the buy-to-fly ratio of aerospace parts, the material utilization efficiency is studied as well. Four sets of experiments were performed and repeated to establish repeatability, using laser powers of 1.8 kW and 3.0 kW and powder flow rates of 2.88 g/min and 5.67 g/min, while keeping the gas flow rate and scanning speed constant at 2 l/min and 0.005 m/s respectively. The deposition height and width are found to increase with increasing laser power and powder flow rate. Material utilization is favoured by higher laser power, while a higher powder flow rate reduces material utilization. The results are presented and fully discussed.

ROI Based Embedded Watermarking of Medical Images for Secured Communication in Telemedicine

Medical images require special safety and confidentiality because critical judgments are made on the information they provide. Transmission of medical images via the internet or mobile phones demands strong security and copyright protection in telemedicine applications. Here, a highly secure and robust watermarking technique is proposed for the transmission of image data via the internet and mobile phones. The Region of Interest (ROI) and the Non Region of Interest (RONI) of the medical image are separated, and only the RONI is used for watermark embedding. The technique achieves exact recovery of the watermark on standard 512x512 medical database images, giving a correlation factor equal to 1. The correlation factor for different attacks such as noise addition, filtering, rotation and compression ranges from 0.90 to 0.95. The PSNR with a weighting factor of 0.02 is up to 48.53 dB. The presented scheme is non-blind and embeds a 64x64 hospital logo.
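
The sketch below illustrates the RONI-only embedding idea in the simplest possible form: an additive spatial-domain watermark with the weighting factor 0.02 quoted in the abstract, applied only outside an ROI mask. The image, logo and mask are synthetic placeholders, and the paper's actual embedding domain and extraction procedure are not reproduced.

```python
# Spatial-domain sketch of RONI-only additive watermark embedding.
import numpy as np

ALPHA = 0.02  # weighting factor from the abstract

def embed_roni(image: np.ndarray, logo: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Add the (tiled) logo only where roi_mask is False (the RONI)."""
    h, w = image.shape
    tiled = np.tile(logo, (h // logo.shape[0] + 1, w // logo.shape[1] + 1))[:h, :w]
    watermarked = image.astype(np.float64) + ALPHA * tiled * (~roi_mask)
    return np.clip(watermarked, 0, 255).astype(np.uint8)

image = np.random.randint(0, 256, (512, 512), dtype=np.uint8)   # stand-in medical image
logo = np.random.randint(0, 256, (64, 64))                      # 64x64 hospital logo
roi_mask = np.zeros((512, 512), dtype=bool)
roi_mask[128:384, 128:384] = True                               # assumed diagnostic ROI
print(embed_roni(image, logo, roi_mask).shape)
```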

Classifier Based Text Mining for Neural Network

Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining or text mining. In neural networks that address classification problems, the training set, the testing set and the learning rate are key elements: the training and testing sets are collections of input/output patterns used to train the network and to assess its performance, and the learning rate sets the rate of weight adjustment. This paper describes a proposed back-propagation neural network classifier that adds cross-validation to the original neural network in order to optimize classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated by means of five data sets: contact-lenses, cpu, weather-symbolic, weather and labor-nega-data. It is shown that, compared with the existing neural network, training time is reduced by more than a factor of ten when the data set is larger than cpu or the network has many hidden units, while accuracy ('percent correct') was the same for all data sets except contact-lenses, the only one with missing attributes. For contact-lenses, the accuracy of the proposed neural network was on average around 0.3% lower than that of the original neural network. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
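
As a minimal sketch of the general setup, a back-propagation-trained multilayer perceptron evaluated with k-fold cross-validation is shown below using scikit-learn. This is not the authors' implementation, and the iris data set merely stands in for data sets such as contact-lenses or cpu mentioned in the abstract.

```python
# Cross-validated back-propagation classifier (illustrative sketch).
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), learning_rate_init=0.01,
                  max_iter=2000, random_state=0),   # back-propagation training
)
scores = cross_val_score(clf, X, y, cv=5)           # 5-fold cross-validation
print("percent correct per fold:", scores)
print("mean accuracy:", scores.mean())
```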

Energy Efficient Resource Allocation in Distributed Computing Systems

The problem of mapping tasks onto a computational grid with the aim of minimizing power consumption and makespan, subject to deadline and architectural constraints, is considered in this paper. To solve this problem, we propose a solution from cooperative game theory based on the concept of the Nash Bargaining Solution. The proposed game-theoretic technique is compared against several traditional techniques. The experimental results show that when the deadline constraints are tight, the proposed technique achieves superior performance and reports competitive performance relative to the optimal solution.
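
To make the bargaining concept concrete, the toy sketch below selects, from a handful of candidate allocations, the one that maximizes the product of the players' utility gains over a disagreement point, which is the defining property of the Nash Bargaining Solution. The two "players" (an energy objective and a makespan objective) and the candidate utilities are illustrative, not the paper's grid-scheduling model.

```python
# Toy Nash Bargaining Solution over a discrete set of candidate schedules.

# Candidate schedules, each scored as (energy utility, makespan utility).
candidates = [(3.0, 1.0), (2.0, 2.5), (1.0, 4.0), (2.8, 1.8)]
disagreement = (0.5, 0.5)   # utilities if no agreement is reached

def nash_product(utilities, disagreement_point):
    gains = [u - d for u, d in zip(utilities, disagreement_point)]
    if any(g <= 0 for g in gains):        # must improve on the disagreement point
        return float("-inf")
    product = 1.0
    for g in gains:
        product *= g
    return product

best = max(candidates, key=lambda u: nash_product(u, disagreement))
print("NBS-selected schedule utilities:", best)
```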

A Supervised Text-Independent Speaker Recognition Approach

We present a supervised text-independent speaker recognition technique in this paper. In the feature extraction stage we propose a mel-cepstral based approach. Our feature vector classification method uses a special nonlinear metric, derived from the Hausdorff distance for sets, and a minimum mean distance classifier.
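
The sketch below shows the general shape of such a classifier: a set-to-set distance between collections of mel-cepstral vectors, used to assign a test utterance to the closest enrolled speaker. The paper uses a special nonlinear metric derived from the Hausdorff distance; only the classical symmetric Hausdorff distance is shown here, and the random features stand in for real MFCCs.

```python
# Minimum-distance speaker classification with the classical Hausdorff distance.
import numpy as np

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two sets of feature vectors."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)   # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def classify(test_features: np.ndarray, enrolled: dict) -> str:
    """Assign the test utterance to the enrolled speaker at smallest distance."""
    return min(enrolled, key=lambda spk: hausdorff(test_features, enrolled[spk]))

rng = np.random.default_rng(0)
enrolled = {"speaker_A": rng.normal(0.0, 1.0, (50, 13)),
            "speaker_B": rng.normal(2.0, 1.0, (50, 13))}
test = rng.normal(2.0, 1.0, (40, 13))    # 13-dimensional MFCC-like vectors
print(classify(test, enrolled))          # expected: speaker_B
```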

Analysis of the Ambient Media Approach of Advertisement Samples from the Adman Awards and Symposium under the Category of Outdoor and Ambience

This research studies the types of products and services that employ ambient media, and the respective techniques used, in their advertising materials. Data were collected by analysing a total of 62 advertisements that employed the ambient media approach in Thailand during the years 2004 to 2011. The 62 advertisements were qualifying entries of the Adman Awards & Symposium under the category of Outdoor & Ambience. The analysis reveals that a total of 14 product and service types chose to utilize ambient media in their advertising. Amongst all ambient media techniques, 'intrusion', which uses the value of a medium in its representation of content, is employed most often. It is followed by 'interaction', where consumers are invited to participate and interact with the advertising materials. 'Illusion' ranks third in its ability to subject viewers to distortions of reality that make the division between reality and fantasy less clear.

A Technique for Improving the Performance of Median Smoothers at the Corners Characterized by Low Order Polynomials

Median filters with larger windows offer greater smoothing and are more robust than median filters with smaller windows. However, the larger median smoothers (median filters with larger windows) fail to track low-order polynomial trends in the signal. As a result, constant regions are produced at the signal corners, leading to the loss of fine details. In this paper, an algorithm that combines the ability of the 3-point median smoother to preserve low-order polynomial trends with the superior noise-filtering characteristics of the larger median smoother is introduced. The proposed algorithm (called the combiner algorithm in this paper) is evaluated on a test image corrupted with different types of noise, and the results obtained are included.
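
A minimal 1-D sketch of the combining idea is given below: keep the large-window median's smoothing where it agrees with the 3-point median, and fall back to the 3-point median where the two disagree strongly (i.e. near a corner). The switching threshold, window sizes and test signal are assumptions for illustration; the paper's actual combining rule may differ.

```python
# Sketch of a 3-point / large-window median combiner on a corner signal.
import numpy as np
from scipy.ndimage import median_filter

def combined_median(signal, large_window=21, threshold=0.05):
    small = median_filter(signal, size=3)
    large = median_filter(signal, size=large_window)
    near_corner = np.abs(large - small) > threshold   # large window is distorting the trend
    return np.where(near_corner, small, large)

t = np.linspace(0.0, 1.0, 200)
clean = np.where(t < 0.5, 4.0 * t, 4.0 * (1.0 - t))   # ramp with a corner at t = 0.5
noisy = clean + 0.02 * np.random.default_rng(0).standard_normal(t.size)

print("large-window median, max error:", np.abs(median_filter(noisy, size=21) - clean).max())
print("combined smoother,   max error:", np.abs(combined_median(noisy) - clean).max())
```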

A New Maximum Power Point Tracking Algorithm for Photovoltaic Systems

In this paper a new maximum power point tracking (MPPT) algorithm for photovoltaic arrays is proposed. The algorithm detects the maximum power point of the PV array, and the computed maximum power is used as the reference value (set point) of the control system. An ON/OFF power controller with a hysteresis band is used to control the operation of a buck chopper so that the PV module always operates at the maximum power computed by the MPPT algorithm. The major difference between the proposed algorithm and other techniques is that it directly controls the power drawn from the PV array. The proposed MPPT has several advantages: simplicity, high convergence speed, and independence from the PV array characteristics. The algorithm is tested under various operating conditions, and the obtained results show that the MPP is tracked even under sudden changes in irradiation level.
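
A toy sketch of the control idea described above follows: the measured PV power is compared with the reference (maximum) power, and an ON/OFF decision with a hysteresis band gates the buck chopper. The power "plant" model and the band width below are simplified assumptions, not the paper's system.

```python
# ON/OFF hysteresis controller tracking a reference power (illustrative only).
def hysteresis_mppt_step(p_measured: float, p_reference: float,
                         switch_on: bool, band: float = 2.0) -> bool:
    """Return the new state of the buck chopper switch."""
    if p_measured < p_reference - band:
        return True            # below the band: turn the chopper ON to draw more power
    if p_measured > p_reference + band:
        return False           # above the band: turn it OFF
    return switch_on           # inside the band: keep the previous state

# Toy closed-loop run: drawn power creeps up while ON and decays while OFF.
p_ref, p, on = 100.0, 60.0, False
for step in range(10):
    on = hysteresis_mppt_step(p, p_ref, on)
    p += 8.0 if on else -3.0
    print(f"step {step}: switch={'ON' if on else 'OFF'}, power={p:.1f} W")
```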

A Novel Approach to Image Compression of Colour Images by Plane Reduction Technique

Several methods have been proposed for color image compression, but the reconstructed images had very low signal-to-noise ratios, which made them inefficient. This paper describes a lossy compression technique for color images that overcomes these drawbacks. The technique works in the spatial domain, where the pixel values of the RGB planes of the input color image are mapped onto two-dimensional planes. The proposed technique produced better results than JPEG2000 and 2DPCA; a comparative study is reported based on image quality measures such as PSNR and MSE. Experiments on real images compare this methodology with previous ones and demonstrate its advantages.
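
For reference, the two quality measures cited in the abstract are sketched below for 8-bit images; the plane-reduction codec itself is not reproduced, and the noisy image simply stands in for a reconstruction.

```python
# MSE and PSNR for 8-bit images (metrics only, not the compression method).
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    return float(np.mean((original.astype(np.float64)
                          - reconstructed.astype(np.float64)) ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    err = mse(original, reconstructed)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
noisy = np.clip(img + rng.normal(0, 3, img.shape), 0, 255).astype(np.uint8)
print("MSE:", mse(img, noisy), "PSNR (dB):", psnr(img, noisy))
```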

A Propagator Method-like Algorithm for Estimation of Multiple Real-Valued Sinusoidal Signal Frequencies

In this paper a novel method for estimating the frequencies of multiple one-dimensional real-valued sinusoidal signals in the presence of additive Gaussian noise is postulated. A computationally simple frequency estimation method with efficient statistical performance is attractive in many array signal processing applications. The prime focus of this paper is to combine a subspace-based technique with a simple peak-search approach. The paper presents a variant of the Propagator Method (PM) in which a collaborative approach of SUMWE and the Propagator Method is applied to estimate multiple real-valued sine-wave frequencies. A new data model is proposed in which the dimension of the signal subspace is equal to the number of frequencies present in the observation, whereas in the conventional MUSIC method for real-valued sinusoids the signal subspace dimension is twice the number of frequencies. A statistical analysis of the proposed method is carried out, and an explicit expression for the asymptotic (large-sample) mean squared error (MSE), i.e. the variance of the estimation error, is derived. The performance of the method is demonstrated, and the theoretical analysis is substantiated through numerical examples. The proposed method achieves high estimation accuracy and frequency resolution at lower SNR, which is verified by simulations comparing it with the conventional MUSIC, ESPRIT and Propagator Method.
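
To illustrate the generic "subspace plus peak search" idea that the abstract refers to, the sketch below implements a MUSIC-style pseudo-spectrum peak search for real sinusoids, i.e. one of the baselines the paper compares against. The proposed PM/SUMWE combination and its reduced-dimension data model are not reproduced; window length, grid size and the test signal are assumptions.

```python
# Generic subspace pseudo-spectrum peak search for real sinusoid frequencies.
import numpy as np
from scipy.signal import find_peaks

def subspace_frequencies(x, n_freqs, m=40, fs=1.0, grid=4096):
    """Estimate real sinusoid frequencies (Hz) from noisy samples x."""
    # Build an m x K matrix of overlapping snapshots and its sample covariance.
    snapshots = np.array([x[i:i + m] for i in range(len(x) - m)]).T
    R = snapshots @ snapshots.T / snapshots.shape[1]
    # Real sinusoids occupy a signal subspace of dimension 2 * n_freqs here.
    eigvals, eigvecs = np.linalg.eigh(R)
    noise_subspace = eigvecs[:, : m - 2 * n_freqs]
    freqs = np.linspace(0.0, 0.5 * fs, grid)
    steering = np.exp(-2j * np.pi * np.outer(np.arange(m), freqs / fs))
    spectrum = 1.0 / np.sum(np.abs(noise_subspace.conj().T @ steering) ** 2, axis=0)
    peaks, _ = find_peaks(spectrum)
    top = peaks[np.argsort(spectrum[peaks])[-n_freqs:]]
    return np.sort(freqs[top])

fs, n = 1.0, 1000
t = np.arange(n) / fs
x = (np.sin(2 * np.pi * 0.12 * t) + np.sin(2 * np.pi * 0.21 * t)
     + 0.1 * np.random.default_rng(0).standard_normal(n))
print(subspace_frequencies(x, n_freqs=2, fs=fs))   # expect roughly [0.12, 0.21]
```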

Grouping and Indexing Color Features for Efficient Image Retrieval

Content-based Image Retrieval (CBIR) aims at searching image databases for images that are similar to a given query image, based on matching features derived from the image content. This paper focuses on a low-dimensional color-based indexing technique for achieving efficient and effective retrieval performance. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique, and the cluster (region) mode is used as the representative of the image in 3-D color space. The feature descriptor consists of the representative color of a region and is indexed using a spatial indexing method based on the R*-tree, thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. Alternatively, the images in the database are clustered based on region feature similarity using the Euclidean distance, and only the representative (centroid) features of these clusters are indexed with the R*-tree, thus improving efficiency. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color. The results of these methods are compared. A Java-based query engine supporting query-by-example is built to retrieve images by color.
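
The feature-extraction step described above can be sketched as follows: run mean shift on the pixels of an image and keep the cluster modes as the image's representative colors. The R*-tree indexing and the Java query engine are not reproduced, and the synthetic two-tone image is a placeholder for a database image.

```python
# Representative region colours via mean shift clustering (scikit-learn).
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def representative_colors(image_rgb: np.ndarray) -> np.ndarray:
    """Return the cluster modes (representative colours) of an RGB image."""
    pixels = image_rgb.reshape(-1, 3).astype(np.float64)
    bandwidth = estimate_bandwidth(pixels, quantile=0.1, n_samples=500)
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(pixels)
    return ms.cluster_centers_          # one 3-D colour mode per region

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 5, (32, 64, 3)),
                      rng.normal(200, 5, (32, 64, 3))], axis=0).clip(0, 255)
print(representative_colors(img))       # roughly one dark and one bright mode
```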

Work Structuring and the Feasibility of Application to Construction Projects in Vietnam

Design should be viewed concurrently in three ways: as transformation, flow and value generation. An innovative approach to solving design-related problems is integrated product-process design. Work Structuring has been developed as the foundation of a formal framework of organizing principles and techniques to guide integration efforts that enhance the development of operation and process design in alignment with product design. Construction projects in Vietnam face many delays and cost overruns, caused mostly by design-related problems. Better design management that integrates product and process design could resolve these problems. A questionnaire survey and in-depth interviews were used to investigate the feasibility of applying Work Structuring to construction projects in Vietnam. The purpose of this paper is to present the research results and to illustrate the possible problems and potential solutions when Work Structuring is implemented in construction projects in Vietnam.

Single Input ANC for Suppression of Breath Sound

Auscultation sound contains various sounds generated in the chest. The Adaptive Noise Canceller (ANC) is a useful technique for biomedical signals, but the conventional ANC is not suitable for auscultation sound because it needs two input channels, a primary signal and a reference signal, whereas a stethoscope provides only one input. Therefore, this paper proposes a Single Input ANC (SIANC) for the suppression of breath sound in cardiac auscultation. For the SIANC, a reference generation system that includes a Heart Sound Detector, Control and Reference Generator is proposed. Experiments and comparisons confirmed that the proposed SIANC is effective for heart sound enhancement and is independent of variations in the heartbeat.
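
The sketch below shows the classical two-input LMS noise canceller that such a scheme builds on. The paper's single-input reference generation (heart sound detector, control, reference generator) is replaced by a synthetic reference signal, and the heart and breath components are simple sinusoidal stand-ins, so this is only an illustration of the cancellation step.

```python
# Classical LMS adaptive noise canceller (illustrative sketch).
import numpy as np

def lms_anc(primary, reference, n_taps=16, mu=0.01):
    """Return the enhanced signal: primary minus the adaptively estimated noise."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]      # most recent reference samples
        noise_est = w @ x
        e = primary[n] - noise_est             # error = desired (heart) component
        w += 2 * mu * e * x                    # LMS weight update
        out[n] = e
    return out

fs = 1000
t = np.arange(0, 5, 1 / fs)
heart = np.sin(2 * np.pi * 1.2 * t)                      # stand-in heart sound
breath = 0.8 * np.sin(2 * np.pi * 0.3 * t + 0.5)         # stand-in breath sound
primary = heart + breath                                  # single stethoscope input
reference = np.sin(2 * np.pi * 0.3 * t)                   # assumed generated reference
enhanced = lms_anc(primary, reference)
print(np.corrcoef(enhanced[1000:], heart[1000:])[0, 1])   # correlation with clean heart sound
```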

Defining a Semantic Web-based Framework for Enabling Automatic Reasoning on CIM-based Management Platforms

CIM is the standard formalism for modeling management information, developed by the Distributed Management Task Force (DMTF) in the context of its WBEM proposal and designed to provide a conceptual view of the managed environment. In this paper, we propose the inclusion of formal knowledge representation techniques, based on Description Logics (DLs) and the Web Ontology Language (OWL), in CIM-based conceptual modeling, and then we examine the benefits of such a decision. The proposal is specified as a mapping at the CIM metamodel level to a highly expressive subset of DLs capable of capturing all the semantics of the models. The paper shows how the proposed mapping provides CIM diagrams with precise semantics and can be used for automatic reasoning about management information models, as a design aid, by means of new-generation CASE tools, thanks to state-of-the-art automatic reasoning systems that support the proposed logic and use algorithms that are sound and complete with respect to the semantics. Such a CASE tool framework has been developed by the authors and its architecture is also introduced. The proposed formalization is useful not only at design time but also at run time, through the use of rational autonomous agents, in response to a need recently recognized by the DMTF.

Feedback-Controlled Server for Scheduling Aperiodic Tasks

This paper proposes a scheduling scheme that uses feedback control to reduce the response time of aperiodic tasks with soft real-time constraints. We design an algorithm based on the proposed scheduling scheme and the Total Bandwidth Server (TBS), a conventional server technique for scheduling aperiodic tasks. We then describe the feedback controller of the algorithm and give the control parameter tuning methods. The simulation study demonstrates that the algorithm can reduce the mean response time by up to 26% compared with TBS, in exchange for slight deadline misses.
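
For context, the classic TBS rule assigns each aperiodic task k the absolute deadline d_k = max(r_k, d_{k-1}) + C_k / U_s, where r_k is the release time, C_k the execution time and U_s the server bandwidth. The sketch below implements only this baseline rule; the paper's feedback controller, which would adjust U_s at run time, is not reproduced, and the arrival trace is illustrative.

```python
# Classic Total Bandwidth Server deadline assignment (baseline, no feedback).
class TotalBandwidthServer:
    def __init__(self, utilization: float):
        self.u_s = utilization       # server bandwidth reserved for aperiodic tasks
        self.last_deadline = 0.0

    def assign_deadline(self, release_time: float, exec_time: float) -> float:
        # d_k = max(r_k, d_{k-1}) + C_k / U_s
        deadline = max(release_time, self.last_deadline) + exec_time / self.u_s
        self.last_deadline = deadline
        return deadline

tbs = TotalBandwidthServer(utilization=0.25)
for release, cost in [(0.0, 1.0), (2.0, 0.5), (2.5, 1.0)]:
    print(f"task released at {release}: absolute deadline "
          f"{tbs.assign_deadline(release, cost):.2f}")
```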

Usability Evaluation Framework for Computer Vision Based Interfaces

Human-computer interaction has progressed considerably from the traditional modes of interaction. Vision-based interfaces are a revolutionary technology, allowing interaction through human actions and gestures. Researchers have developed numerous accurate techniques; however, with a few exceptions, these techniques are not evaluated using standard HCI methods. In this paper we present a comprehensive framework to address this issue. Our evaluation of a computer vision application shows that, in addition to accuracy, it is vital to address human factors.