Network Analysis in a Natural Perturbed Ecosystem

The objective of this work is to make explicit the interactions between chlorophyll-a and nine meroplankton larvae of epibenthic fauna. The case studied is the Arraial do Cabo upwelling system, southeastern Brazil, which provides a range of environmental conditions. To extract this information, a network approach based on probability estimation was used. Comparisons among the generated graphs are made in light of the different water masses, the Shannon biodiversity index, and the closeness and betweenness centrality measures. Our results show the main patterns across the different water masses and how the core organisms belonging to the network skeleton correlate with the main environmental variable. We conclude that the complex-network approach is a promising tool for environmental diagnostics.
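As a rough illustration of the centrality measures named above, a minimal NetworkX sketch (the nodes and edges are placeholders, not the network inferred in the paper):

```python
import networkx as nx

# Illustrative interaction graph: chlorophyll-a plus a few larval taxa.
G = nx.Graph()
G.add_edges_from([("chl-a", "taxon1"), ("chl-a", "taxon2"),
                  ("taxon1", "taxon3"), ("taxon2", "taxon3"),
                  ("taxon3", "taxon4")])

closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)

# High-centrality nodes form the network "skeleton" discussed above.
core = max(betweenness, key=betweenness.get)
print(core, closeness[core], betweenness[core])
```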

Evaluation Method for Information Security Levels of CIIP (Critical Information Infrastructure Protection)

As the information age matures, major social infrastructures such as communication, finance, the military, and energy have become ever more dependent on information and communication systems. Since these infrastructures are connected to the Internet, electronic intrusions such as hacking and viruses have become a new security threat. In particular, the disturbance or neutralization of a major social infrastructure can result in extensive material damage and social disorder. To address this issue, many nations around the world are researching and developing various techniques and information security policies in government-wide efforts to protect their infrastructures from newly emerging threats. This paper proposes an evaluation method for the information security levels of CIIP (Critical Information Infrastructure Protection), which can raise the security level of critical information infrastructure by assessing the current security status and establishing security measures accordingly, thereby protecting infrastructures effectively.

Characterization of Carbon Based Nanometer Scale Coil Growth

Carbon-based coils at the nanometer scale have a three-dimensional helical geometry. We synthesized carbon nano-coils by chemical vapor deposition using iron and tin as catalysts. The fabricated coils have external diameters ranging from a few hundred to a few thousand nanometers. Scanning electron microscopy (SEM) and tunneling electron microscopy revealed detailed images of the coils' structure. The carbon nano-coils can be grown on metal and non-metal substrates, such as stainless steel and silicon. Besides growth on flat substrates, they can also be grown on stainless steel wires. After synthesis, the mechanical and electromechanical properties of the coils were measured, and the experimental results are reported.

Performance Analysis of a Series of Adaptive Filters in Non-Stationary Environment for Noise Cancelling Setup

Noise cancellation is an essential component of many DSP applications. Real-time signals change rapidly and abruptly. In noise cancellation, a reference signal that approximates the noise corrupting the original information signal is obtained and then subtracted from the noise-bearing signal to obtain a noise-free signal. This approximation of the noise signal is obtained through adaptive filters, which are self-adjusting. Because changes in real-time signals are abrupt, this calls for an adaptive algorithm that converges fast and is stable. Least mean square (LMS) and normalized LMS (NLMS) are two widely used algorithms because of their computational simplicity and ease of implementation, but their convergence rates are low. Adaptive averaging (AFA) filters are also used because they converge quickly, but they are less stable. This paper provides a comparative study of LMS, NLMS, AFA, and a new enhanced average adaptive (Average NLMS, ANLMS) filter for noise cancelling applications using speech signals.
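As a minimal sketch of the two baseline update rules being compared, assuming a reference noise input x and a noise-corrupted signal d (all names and step sizes here are illustrative, not the authors' implementation):

```python
import numpy as np

def lms(x, d, n_taps=8, mu=0.01):
    """Basic LMS adaptive filter: x is the noise reference, d the
    noise-corrupted signal; the error signal approximates the clean
    signal in a noise-cancelling setup."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]       # most recent samples first
        y = w @ u                       # filter output (noise estimate)
        e[n] = d[n] - y                 # error = cleaned-signal estimate
        w += 2 * mu * e[n] * u          # LMS weight update
    return e

def nlms(x, d, n_taps=8, mu=0.5, eps=1e-8):
    """NLMS: the same loop, but the step size is normalized by the
    input power, which speeds convergence on nonstationary inputs."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]
        y = w @ u
        e[n] = d[n] - y
        w += mu * e[n] * u / (eps + u @ u)  # power-normalized update
    return e
```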

Error Propagation of the Hidden-Point Bar Method: Effect of Bar Geometry

The hidden-point bar method is useful in many surveying applications. The method determines the coordinates of a hidden point as a function of the horizontal and vertical angles measured to three fixed points on the bar. Using these measurements, the procedure calculates the slant angles, the distances from the station to the fixed points, the coordinates of the fixed points, and finally the coordinates of the hidden point. The propagation of measurement errors through this complex process has not been fully investigated in the literature. This paper evaluates the effect of the bar geometry on the position accuracy of the hidden point, which depends on the measurement errors of the horizontal and vertical angles. The results are used to establish guidelines regarding the inclination angle of the bar and the location of the observed points that provide the best accuracy.
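The paper's closed-form propagation is not reproduced here, but the underlying idea can be sketched generically: first-order (Jacobian-based) propagation of the angle variances into a position covariance, with coord_fn standing in for the actual hidden-point solution:

```python
import numpy as np

def propagate(coord_fn, angles, sigma):
    """First-order (Jacobian) error propagation: coord_fn maps measured
    angles (radians) to hidden-point coordinates; sigma holds the angle
    standard deviations. Returns the position covariance matrix."""
    angles = np.asarray(angles, float)
    p0 = np.asarray(coord_fn(angles), float)
    J = np.zeros((p0.size, angles.size))
    h = 1e-6
    for j in range(angles.size):
        a = angles.copy()
        a[j] += h
        J[:, j] = (np.asarray(coord_fn(a), float) - p0) / h  # numeric Jacobian
    return J @ np.diag(np.asarray(sigma, float) ** 2) @ J.T

# Toy stand-in geometry (not the paper's solution), ~1 arc-second angle noise:
cov = propagate(lambda a: [np.cos(a[0]) + np.tan(a[1]), np.sin(a[0]) * a[2]],
                angles=[0.4, 0.1, 0.2], sigma=[5e-6, 5e-6, 5e-6])
print(np.sqrt(np.diag(cov)))   # per-coordinate standard deviations
```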

Deixis and Personalization in Ad Slogans

This study examines the use of the persuasive strategy of deixis and personalization in advertising slogans. This rhetorical/stylistic and linguistic strategy has been widely used in advertising slogans for over a century. A total of five hundred advertising slogans of multinational companies in both product and service sectors were collected. The analysis reveals three main components of this strategy: deictic words, absolute uniqueness, and personal pronouns. The percentage and mean of the use of the three components are tabulated. The findings show that advertisers have used this persuasive strategy in creative ways to persuade consumers to buy their products and services.

Fundamental Equation of Complete Factor Synergetics of Complex Systems with Normalization of Dimension

Motivated by the need for a unified measure of diverse resources and a unified treatment of their disposal, this paper puts forth three closely related new basic models: the resource assembly node, the disposition integration node, and the intelligent organizing node. Three closely related quantities of integrative analytical mechanics are defined, including the disposal intensity, the disposal-weighted intensity, and the resource charge; the resource assembly space, the disposition integration space, and the intelligent organizing space are then introduced. A system of fundamental equations and a model of complete factor synergetics are preliminarily developed for the general case, forming the analytical basis of complete factor synergetics. The essential variables constituting this system of equations comprise twenty variables relating, respectively, to the essential dynamical effects, the external synergetic actions, and the internal synergetic actions of the system.

Application of Pearson Parametric Distribution Model in Fatigue Life Reliability Evaluation

The aim of this paper is to introduce a parametric distribution model for fatigue life reliability analysis that deals with variation in material properties. Service loads, in the form of the response-time history signal of a Belgian pave, were replicated on a multi-axial spindle-coupled road simulator, and the stress-life method was used to estimate the fatigue life of an automotive stub axle. A PSN curve was obtained by a monotonic tension test, and a two-parameter Weibull distribution function was used to acquire the mean life of the component. A Pearson system was developed to evaluate the fatigue life reliability by considering the stress range intercept and the slope of the PSN curve as random variables. Assuming a normal distribution of fatigue strength, the fatigue life of the stub axle is found to have the highest reliability between 10,000 and 15,000 cycles. Taking into account the variation of material properties associated with the size effect and with machining and manufacturing conditions, the method described in this study can be effectively applied to determine the probability of failure of mass-produced parts.
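As a small illustration of the two-parameter Weibull step, a SciPy sketch with made-up fatigue lives (the paper's stub-axle data are not reproduced):

```python
import numpy as np
from scipy import stats

# Hypothetical cycles-to-failure sample, for illustration only.
lives = np.array([8200., 9400., 11000., 12500., 13100., 14800., 16900.])

# Fit a two-parameter Weibull (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(lives, floc=0)

# Mean life from the fitted parameters: scale * Gamma(1 + 1/shape).
mean_life = stats.weibull_min.mean(shape, loc=loc, scale=scale)

# Reliability (survival) function at a given cycle count.
R_10k = stats.weibull_min.sf(10000., shape, loc=loc, scale=scale)
print(f"shape={shape:.2f} scale={scale:.0f} mean={mean_life:.0f} R(10k)={R_10k:.2f}")
```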

A Study of the Influence of MP3 Sound on Human EEG

Currently, many types of non-reversible (lossy) compressed sound sources, represented by MP3 (MPEG Audio Layer-3), are popular around the world and are widely used to make music files smaller. Sound data created in this way carries less information than the pre-compression data. The objective of this study is to determine, by analyzing EEG, whether people can perceive this difference as a difference in sound. A measurement system that can record and analyze EEG while a subject listens to music was experimentally developed, and ten subjects were studied with this system. In this experiment, a WAVE-formatted music file and an MP3-compressed file made from the WAVE data were prepared, and each subject heard both sources at the same volume. The results of this experiment confirmed clear differences between the two sound sources.

Text Mining Technique for Data Mining Application

Text mining applies knowledge discovery techniques to unstructured text and is also termed knowledge discovery in text (KDT) or text data mining. The decision tree approach is most useful in classification problems: a tree is constructed to model the classification process, in two basic steps of building the tree and applying it to the database. This paper describes a proposed C5.0 classifier that uses rulesets, cross-validation, and boosting on top of the original C5.0 in order to reduce the error rate. The feasibility and benefits of the proposed approach are demonstrated on a medical data set, the hypothyroid data. The performance of a classifier on the training cases from which it was constructed gives a poor estimate of its accuracy; by sampling or by using a separate test file, the classifier is instead evaluated on cases that were not used to build it. If the cases in hypothyroid.data and hypothyroid.test were shuffled and divided into a new 2,772-case training set and a 1,000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to generate classifiers called rulesets; the ruleset has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of the results. One way to get a more reliable estimate of predictive accuracy is f-fold cross-validation: the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner. Trials over numerous data sets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
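C5.0/See5 is a proprietary tool, but the cross-validation and boosting ideas can be sketched with an analogous scikit-learn pipeline (a decision tree with AdaBoost as a stand-in for C5.0, and synthetic data in place of the hypothyroid set):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 3,772-case hypothyroid data (not the real set).
X, y = make_classification(n_samples=3772, n_features=20, random_state=0)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
# `estimator` is the scikit-learn >= 1.2 name (formerly `base_estimator`).
boosted = AdaBoostClassifier(estimator=tree, n_estimators=10, random_state=0)

# 10-fold cross-validation: each error estimate comes from held-out folds.
tree_err = 1 - cross_val_score(tree, X, y, cv=10).mean()
boost_err = 1 - cross_val_score(boosted, X, y, cv=10).mean()
print(f"single tree: {tree_err:.3f}  10-classifier boosting: {boost_err:.3f}")
```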

A Hidden Markov Model for Modeling Pavement Deterioration under Incomplete Monitoring Data

In this paper, the potential use of an exponential hidden Markov model to model a hidden pavement deterioration process, i.e. one that is not directly measurable, is investigated. It is assumed that the evolution of the physical condition, which is the hidden process, and the evolution of the values of pavement distress indicators can be adequately described using discrete condition states and modeled as Markov processes. It is also assumed that condition data can be collected by visual inspections over time and represented continuously using an exponential distribution. The advantage of using such a model in the decision-making process is illustrated through an empirical study using real-world data.
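A minimal sketch of such a model, assuming three condition states, a one-way (deteriorating) transition matrix, and exponential emissions; the numbers are illustrative, not the paper's estimates:

```python
import numpy as np
from scipy import stats

A = np.array([[0.9, 0.1, 0.0],      # transitions between discrete
              [0.0, 0.8, 0.2],      # condition states; deterioration
              [0.0, 0.0, 1.0]])     # is one-way (no repair modeled)
pi = np.array([1.0, 0.0, 0.0])      # start in the best condition
rates = np.array([2.0, 1.0, 0.5])   # exponential emission rate per state

def log_likelihood(obs):
    """Scaled forward algorithm: log P(observed distress values)."""
    alpha = pi * stats.expon.pdf(obs[0], scale=1 / rates)
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * stats.expon.pdf(o, scale=1 / rates)
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

print(log_likelihood(np.array([0.3, 0.6, 1.1, 1.8])))
```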

Multiwavelet and Biological Signal Processing

In this paper, we find the optimum multiwavelet for compression of electrocardiogram (ECG) signals and then select it for use with the SPIHT codec. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. To assess the performance of the different multiwavelets in compressing ECG signals, in addition to factors known from the compression literature such as compression ratio (CR), percent root difference (PRD), distortion (D), and root mean square error (RMSE), we also employ the cross correlation (CC) criterion, for studying the morphological relation between the reconstructed and the original ECG signal, and the signal-to-reconstruction-noise ratio (SNR). The simulation results show the cardinal balanced multiwavelet (cardbal2), with the identity (Id) prefiltering method, to be the most effective transformation. After finding the most efficient multiwavelet, we apply the SPIHT coding algorithm to the signal transformed by this multiwavelet.
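The quality measures listed above have standard definitions that can be sketched compactly (formulas as commonly used in the ECG compression literature; conventions such as mean removal in PRD vary):

```python
import numpy as np

def compression_metrics(x, x_rec):
    """Common ECG compression quality measures between an original
    signal x and its reconstruction x_rec (illustrative definitions)."""
    err = x - x_rec
    rmse = np.sqrt(np.mean(err ** 2))                      # root mean square error
    prd = 100 * np.sqrt(np.sum(err ** 2) / np.sum(x ** 2)) # percent root difference
    snr = 10 * np.log10(np.sum(x ** 2) / np.sum(err ** 2)) # reconstruction SNR, dB
    cc = np.corrcoef(x, x_rec)[0, 1]                       # morphological similarity
    return {"RMSE": rmse, "PRD": prd, "SNR_dB": snr, "CC": cc}
```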

Process and Supply-Chain Optimization for Testing and Verification of Formation Tester/Pressure-While-Drilling Tools

Applying a rigorous process to optimize the elements of a supply-chain network resulted in reduced waiting time for the service provider and the customer. Different sources of downtime of the hydraulic pressure controller/calibrator (HPC) were causing interruptions in operations. The process examined all the issues to drive greater efficiencies. The issues included inherent design issues with the HPC pump, contamination of the HPC with impurities, and the lead time required for annual calibration in the USA. The HPC is used for the mandatory testing/verification of formation tester/pressure measurement/logging-while-drilling tools by oilfield service providers, including Halliburton. After a market study and analysis, it was concluded that the current HPC model is best suited to the oilfield industry. To use the existing HPC model effectively, the design and contamination issues were addressed through design and process improvements. An optimum network is proposed after comparing different supply-chain models for calibration lead-time reduction.

Wiener Filter as an Optimal MMSE Interpolator

The ideal sinc filter, which ignores the noise statistics, is often applied to generate an arbitrary sample of a bandlimited signal from uniformly sampled data. In this article, an optimal interpolator is proposed; it attains the minimum mean square error (MMSE) at its output in the presence of noise. The resulting interpolator is thus a Wiener filter, and both the optimal infinite impulse response (IIR) and finite impulse response (FIR) filters are presented. The mean square errors (MSEs) for interpolators with impulse responses of different lengths are obtained by computer simulation; the results show that the MSEs of the proposed interpolators with a reasonable length are improved by about 0.4 dB under flat power spectra in a noisy environment with a signal-to-noise power ratio (SNR) of 10 dB. As expected, the results also demonstrate improvements in the MSEs, at various fractional delays, of the optimal interpolator over the ideal sinc filter under a fixed-length impulse response.
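A sketch of the FIR case under the stated assumptions (flat bandlimited signal spectrum, white noise): the Wiener taps solve the normal equations (R + sigma^2 I) w = r, where R is the signal autocorrelation matrix and r the cross-correlation to the desired fractionally delayed sample. All parameter values below are illustrative:

```python
import numpy as np

def wiener_fir_interpolator(delay, n_taps=8, snr_db=10, bw=0.9):
    """FIR Wiener interpolator for a flat, bandlimited signal spectrum
    (bandwidth fraction bw of the sampling rate) in white noise."""
    n = np.arange(n_taps)
    R = bw * np.sinc(bw * (n[:, None] - n[None, :]))  # signal autocorrelation
    sigma2 = R[0, 0] * 10 ** (-snr_db / 10)           # noise power from SNR
    d = n_taps // 2 + delay                           # desired sample position
    r = bw * np.sinc(bw * (n - d))                    # cross-correlation vector
    return np.linalg.solve(R + sigma2 * np.eye(n_taps), r)

# The noise-ignoring ideal sinc interpolator would use taps sinc(n - d) instead.
print(wiener_fir_interpolator(0.3))
```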

Capacity Enhancement in Wireless Networks using Directional Antennas

One of the biggest drawbacks of the wireless environment is its limited bandwidth, yet the number of users sharing this limited bandwidth has been increasing considerably. The SDMA technique, which entails using directional antennas, makes it possible to increase the capacity of a wireless network by spatially separating users in the medium. In this paper, we show how capacity can be enhanced and mean delay reduced by using directional antennas in wireless networks employing a TDMA/FDD MAC. Computer modeling and simulation of the studied wireless system are carried out using OPNET Modeler. Preliminary simulation results are presented, and the performance of the model using directional antennas is evaluated and consistently compared with that of the one using omnidirectional antennas.

Probabilistic Model Development for Project Performance Forecasting

In this paper, a model for forecasting project cost performance is developed based on past project cost and time performance. This study presents a probabilistic project control concept to ensure an acceptable forecast of project cost performance. In this concept, project activities are classified into sub-groups called control accounts. A stochastic S-curve (SS-curve) is then obtained for each sub-group, and the project SS-curve is obtained by summing the sub-groups' SS-curves. In this model, project cost uncertainty is captured through Beta distribution functions, extracted from a variety of sources, of the activity costs required to complete the project at each selected time section over the course of the project. Based on this model, after a given percentage of project progress, project performance is measured via Earned Value Management to adjust the initial cost probability distribution functions. The future project cost performance is then predicted using Monte Carlo simulation.
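A minimal Monte Carlo sketch of summing Beta-distributed control-account costs into a project-level cost distribution (the account parameters are hypothetical, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical control accounts: (alpha, beta, min_cost, max_cost) of a
# Beta-distributed remaining cost at one selected time section.
accounts = [(2.0, 5.0, 10.0, 40.0),
            (3.0, 3.0, 20.0, 60.0),
            (4.0, 2.0, 5.0, 25.0)]

def simulate_project_cost(n_draws=100_000):
    """Sample each account's Beta cost and sum across accounts,
    analogous to summing the sub-groups' SS-curves."""
    total = np.zeros(n_draws)
    for a, b, lo, hi in accounts:
        total += lo + (hi - lo) * rng.beta(a, b, n_draws)
    return total

costs = simulate_project_cost()
print(np.percentile(costs, [10, 50, 90]))  # e.g. P10/P50/P90 project cost
```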

Identifying Significant Factors of Brick Laying Process through Design of Experiment and Computer Simulation: A Case Study

Improving performance measures in construction processes has been a major concern for managers and decision makers in the industry. They seek ways to recognize the key factors that have the largest effect on the process; identifying such factors can guide them to focus on the right parts of the process in order to gain the best possible result. In the present study, design of experiments (DOE) has been applied to a computer simulation model of the brick-laying process to determine significant factors, with productivity chosen as the response of the experiment. To this end, four controllable factors and their interactions have been experimented with, and the best level has been calculated for each factor. The results indicate that three factors, namely, brick labor, mortar labor, and mortar inter-arrival time, along with the interaction of brick labor and mortar labor, are significant.
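As a rough template for this kind of analysis, a sketch of a two-level 2^4 factorial with main-effect estimation (the response values are made up, not outputs of the paper's simulation model):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# 2^4 full factorial in coded (-1/+1) units; columns could stand for
# brick labor, mortar labor, mortar inter-arrival time, and a fourth factor.
design = np.array(list(product([-1, 1], repeat=4)))

# Made-up productivity response: two active main effects, one interaction,
# plus noise (a stand-in for the simulation model's output).
y = (50 + 4 * design[:, 0] + 3 * design[:, 1]
     + 2 * design[:, 0] * design[:, 1]
     + rng.normal(0, 0.5, len(design)))

# Main effect of factor j: mean response at +1 minus mean response at -1.
effects = [y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
           for j in range(4)]
print(np.round(effects, 2))   # large magnitudes flag candidate significant factors
```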

Investigating Simple Multipath Compensation for Frequency Modulated Signals at Lower Frequencies

Radio propagation from point to point is affected by the physical channel in many ways. A signal arriving at a destination travels through a number of different paths, referred to as multipaths. Research in this area of wireless communications has progressed well over the years, taking different angles of focus: some researchers focus on ways of reducing or avoiding multipath effects, while others focus on mitigating the effects of multipath through compensation schemes. Baseband processing is seen as one field of signal processing that is cardinal to the advancement of software-defined radio technology, which has led to wide research into carrying out certain algorithms at baseband. This paper considers compensating for multipath for frequency modulated signals. The compensation process is carried out at radio frequency (RF) and at quadrature baseband (QBB), and the results are compared. Simulations are carried out using MATLAB to show the benefits of working at lower QBB frequencies rather than at RF.
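As a toy illustration of baseband multipath compensation, a two-path channel equalized by frequency-domain zero forcing, assuming the channel response is known and using a circular-convolution channel model for simplicity (all values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

N = 1024
h = np.zeros(N)
h[0] = 1.0                                   # direct path
h[5] = 0.4                                   # attenuated echo, 5 samples later

x = rng.standard_normal(N)                   # stand-in baseband signal
H = np.fft.fft(h)                            # channel frequency response
y = np.fft.ifft(np.fft.fft(x) * H).real      # multipath (circular) channel output

x_hat = np.fft.ifft(np.fft.fft(y) / H).real  # zero-forcing compensation
print(np.max(np.abs(x - x_hat)))             # residual near machine precision
```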

Modeling of Dielectric Heating in Radio-Frequency Applicator Optimized for Uniform Temperature by Means of Genetic Algorithms

The paper presents an optimization study based on genetic algorithms (GAs) for a radio-frequency applicator used in heating dielectric band products. The weakly coupled electro-thermal problem is analyzed using 2D FEM. The design variables in the optimization process are the voltage of a supplementary "guard" electrode and six geometric parameters of the applicator. Two objective functions are used: temperature uniformity and the total active power absorbed by the dielectric. Both mono-objective and multi-objective formulations are implemented in the GA optimization.
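A minimal real-coded GA sketch (tournament selection, blend crossover, Gaussian mutation, elitism); the evaluate function stands in for the 2D-FEM electro-thermal solver, and all GA settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def evaluate(x):
    """Toy fitness standing in for the FEM-based objective; higher is better."""
    return -np.sum((x - 0.3) ** 2)

def ga(n_vars=7, pop_size=30, n_gen=50, sigma=0.05):
    """Simple real-coded GA over 7 normalized design variables
    (one voltage plus six geometric parameters, as in the paper)."""
    pop = rng.uniform(0, 1, (pop_size, n_vars))
    for _ in range(n_gen):
        fit = np.array([evaluate(ind) for ind in pop])
        new = [pop[fit.argmax()].copy()]           # elitism: keep the best
        while len(new) < pop_size:
            i, j = rng.integers(pop_size, size=2)  # tournament of two
            p1 = pop[i] if fit[i] > fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            p2 = pop[i] if fit[i] > fit[j] else pop[j]
            a = rng.uniform(size=n_vars)           # blend crossover
            child = a * p1 + (1 - a) * p2
            child += rng.normal(0, sigma, n_vars)  # Gaussian mutation
            new.append(np.clip(child, 0, 1))
        pop = np.array(new)
    return pop[np.argmax([evaluate(ind) for ind in pop])]

print(ga())
```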

Face Recognition with Image Rotation Detection, Correction and Reinforced Decision using ANN

Rotation or tilt present in an image captured by digital means can be detected and corrected using an Artificial Neural Network (ANN) for application in a Face Recognition System (FRS). Principal Component Analysis (PCA) features of faces at different angles are used to train an ANN that detects the rotation of an input image; the rotation is then corrected using a set of operations implemented in another ANN-based system. The work also deals with the recognition of human faces using features from the forehead, eyes, nose, and mouth as decision support entities of the system, configured using a Generalized Feed Forward Artificial Neural Network (GFFANN). These features are combined to provide a reinforced decision for verifying a person's identity despite illumination variations. The complete system, performing facial image rotation detection, correction, and recognition with reinforced decision support, achieves a success rate in the high 90s (percent).
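As a sketch of the PCA feature-extraction stage feeding the rotation-detecting ANN (random arrays stand in for face images; the network and data here are not the paper's):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)

# Random arrays stand in for flattened 32x32 grayscale face images
# captured at different rotations; labels are the rotation angles.
faces = rng.random((200, 32 * 32))
angles = rng.choice([0, 90, 180, 270], size=200)

pca = PCA(n_components=40)            # eigenface-style dimensionality reduction
features = pca.fit_transform(faces)   # inputs for the rotation-detecting ANN

# A small MLP plays the role of the rotation-detecting neural network.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(features, angles)
print(clf.predict(features[:5]))
```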