AJcFgraph - AspectJ Control Flow Graph Builder for Aspect-Oriented Software

The ever-growing use of the aspect-oriented development methodology in software engineering calls for tool support in both research environments and industry. So far, tool support for many activities in aspect-oriented software development has been proposed in order to automate and facilitate development; AJaTS, for instance, provides a transformation system to support aspect-oriented development and refactoring. In particular, it is well established that the abstract interpretation of programs, in any paradigm, pursued in static analysis is best served by a high-level program representation such as the Control Flow Graph (CFG): with such a representation, the analysis can more easily locate common programmatic idioms for which helpful transformations are already known, and the association between the input program and the intermediate representation can be maintained more closely. However, although current research defines, to some extent, sound concepts and foundations for the control flow analysis of aspect-oriented programs, it does not provide a concrete tool that can by itself construct the CFG of these programs. Furthermore, most of these works focus on other issues in Aspect-Oriented Software Development (AOSD), such as testing or data flow analysis, rather than on the CFG itself. Therefore, this study is dedicated to building an aspect-oriented control flow graph construction tool called AJcFgraph Builder. The tool can be applied in many software engineering tasks in the context of AOSD, such as software testing, software metrics, and so forth.
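
To make the role of the representation concrete, the following minimal sketch shows the kind of structure a CFG builder produces: nodes for statements, join points, and advice bodies linked by directed edges. The CFGNode class, the node kinds, and the tiny example are illustrative assumptions, not AJcFgraph's actual API.

class CFGNode:
    """One node of a control flow graph for an aspect-oriented program."""
    def __init__(self, label, kind="statement"):
        self.label = label
        self.kind = kind            # e.g. "statement", "joinpoint", "advice"
        self.successors = []        # directed edges to the nodes that may execute next

    def add_edge(self, target):
        self.successors.append(target)

# Tiny example: a before() advice woven ahead of a method-call join point.
advice = CFGNode("before(): advice body", kind="advice")
call = CFGNode("call foo()", kind="joinpoint")
advice.add_edge(call)               # control flows through the advice into the call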

An Ensemble of Weighted Support Vector Machines for Ordinal Regression

Instead of traditional (nominal) classification, we investigate ordinal classification, or ranking. An enhanced method based on an ensemble of Support Vector Machines (SVMs) is proposed. Each binary classifier is trained with specific weights for each object in the training data set. Experiments on benchmark datasets and synthetic data indicate that the performance of our approach is comparable to state-of-the-art kernel methods for ordinal regression. The ensemble method, which is straightforward to implement, provides a very good sensitivity-specificity trade-off for the highest and lowest ranks.
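
A minimal sketch of such an ensemble is shown below, assuming the common reduction of K ordinal ranks to K-1 weighted binary subproblems of the form "rank greater than k?". The helper names fit_ordinal_ensemble and predict_rank are hypothetical, and because the abstract does not specify the per-object weights, uniform weights appear as a placeholder.

import numpy as np
from sklearn.svm import SVC

def fit_ordinal_ensemble(X, y, ranks, sample_weights=None):
    """Train one weighted binary SVM per threshold between consecutive ranks.
    sample_weights, if given, maps each threshold k to a per-object weight vector."""
    models = []
    for k in ranks[:-1]:
        target = (y > k).astype(int)                  # binary task: "rank greater than k?"
        w = np.ones(len(y)) if sample_weights is None else sample_weights[k]
        models.append(SVC(kernel="rbf").fit(X, target, sample_weight=w))
    return models

def predict_rank(models, X, ranks):
    """Predicted rank = lowest rank plus the number of thresholds exceeded."""
    votes = np.sum([m.predict(X) for m in models], axis=0)
    return ranks[0] + votes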

Multiscale Analysis and Change Detection Based on a Contrario Approach

Automatic methods of detecting changes through satellite imaging are the object of growing interest, especially because of the numerous applications linked to analysis of the Earth’s surface or the environment (monitoring vegetation, updating maps, risk management, etc.). This work implemented spatial analysis techniques using images with different spatial and spectral resolutions acquired on different dates. The work was based on the principle of control charts in order to set the upper and lower limits beyond which a change would be noted. The a contrario approach was then applied by testing different thresholds at which the difference calculated between two pixels was significant. Finally, labeled images were considered, giving a particularly low difference, which meant that the number of “false changes” could be estimated according to a given limit.
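
As a rough sketch of an a contrario test on a pixel-difference image, the number of pixels whose difference exceeds each candidate threshold can be compared with what noise alone would produce; thresholds whose exceedance count is very unlikely under the noise model mark meaningful change. The function meaningful_thresholds, the Gaussian noise model, and the significance level epsilon are assumptions for illustration, not the authors' exact formulation.

import numpy as np
from scipy.stats import norm, binom

def meaningful_thresholds(img_t1, img_t2, thresholds, sigma_noise, epsilon=1.0):
    diff = np.abs(img_t1.astype(float) - img_t2.astype(float))
    n_pixels = diff.size
    meaningful = []
    for t in thresholds:
        k = int(np.sum(diff > t))                            # observed exceedances
        p = 2.0 * norm.sf(t, scale=sigma_noise)              # P(|noise difference| > t)
        nfa = len(thresholds) * binom.sf(k - 1, n_pixels, p) # expected number of false alarms
        if nfa < epsilon:
            meaningful.append((t, nfa))                      # threshold marks a meaningful change
    return meaningful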

A New Self-Adaptive EP Approach for ANN Weights Training

Evolutionary Programming (EP) is a branch of Evolutionary Algorithms (EA) in which mutation is considered the main reproduction operator. This paper presents a novel EP approach to Artificial Neural Network (ANN) learning. The proposed strategy consists of two components: a self-adaptive component, which carries phenotype information, and a dynamic component, which is described by the genotype. Self-adaptation is achieved by adding a value, called the network weight, which depends on the total number of hidden layers and the average number of neurons in the hidden layers. The dynamic component changes its value depending on the fitness of the chromosome exposed to mutation. Thus, the mutation step size is controlled by two components, encapsulated in the algorithm, which adjust it according to the characteristics of the predefined ANN architecture and the fitness of the particular chromosome. A comparative analysis of the proposed approach and classical EP (Gaussian mutation) showed that a significant acceleration of the evolution process is achieved by using both phenotype and genotype information in the mutation strategy.
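
An illustrative sketch of the two-component mutation is given below. The abstract does not state the exact formulas, so the network_weight function (depending on the number of hidden layers and the mean hidden-layer size) and the fitness-dependent dynamic factor are plausible placeholders, not the authors' equations.

import numpy as np

def network_weight(n_hidden_layers, avg_neurons_per_layer):
    # Phenotype (self-adaptive) component: scales the step size to the ANN architecture.
    return 1.0 / np.sqrt(n_hidden_layers * avg_neurons_per_layer)

def mutate(weights, fitness, best_fitness, n_hidden_layers, avg_neurons_per_layer):
    # Dynamic (genotype) component: chromosomes far from the best fitness mutate
    # strongly, while good chromosomes are perturbed only slightly.
    dynamic = 1.0 - fitness / (best_fitness + 1e-12)
    step = network_weight(n_hidden_layers, avg_neurons_per_layer) * (0.1 + dynamic)
    return weights + step * np.random.standard_normal(weights.shape)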

Fuel Reserve Tanks Dynamic Analysis Due to Earthquake Loading

In this paper, the dynamic analysis of fuel storage tanks is studied, and equations are presented for the fluid waves created by storage tank motions. The finite element equations for fluid-structure interaction, and the boundary conditions governing the structure and the fluid, are also examined. A numerical simulation is performed for the dynamic analysis of a storage tank containing a fluid. The simulation was carried out in the ANSYS software using the FSI (Fluid-Structure Interaction) solver, considering the fluid dynamic motions induced by earthquake loading and based on the velocities and displacements of the structure and the fluid under all governing boundary conditions.

Orchestra/Percussion Classification Algorithm for Unified Speech Audio Coding System

Unified Speech and Audio Coding (USAC), the latest MPEG standard for unified speech and audio coding, uses a speech/audio classification algorithm to distinguish the speech and audio segments of the input signal. Owing to a shortcoming of this system, introducing a well-designed orchestra/percussion classification and modifying the subsequent processing can considerably increase the quality of the recovered audio. This paper proposes an orchestra/percussion classification algorithm for the USAC system that extracts only 3 Mel-Frequency Cepstral Coefficients (MFCCs) rather than the traditional 13, and uses an Iterative Dichotomiser 3 (ID3) decision tree rather than more complex learning methods; the proposed algorithm therefore has lower computational complexity than most existing algorithms. Considering that frequent changing of attributes may lead to quality loss in the recovered audio signal, this paper also designs a modified subsequent process that helps the whole classification system reach an accuracy as high as 97%, comparable to the 99% of classical approaches.
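
A small sketch of the classification stage under these assumptions is shown below: three MFCCs, summarised per segment, feed an entropy-based decision tree. Scikit-learn's DecisionTreeClassifier with criterion="entropy" stands in for ID3 here, and the helper functions, per-segment averaging, and label encoding are illustrative choices rather than the paper's exact pipeline.

import numpy as np
import librosa
from sklearn.tree import DecisionTreeClassifier

def mfcc3_features(path):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=3)   # only 3 coefficients per frame
    return mfcc.mean(axis=1)                            # one 3-dimensional vector per segment

def train_classifier(paths, labels):
    """paths: audio files of known segments; labels: 0 = orchestra, 1 = percussion."""
    X = np.vstack([mfcc3_features(p) for p in paths])
    return DecisionTreeClassifier(criterion="entropy").fit(X, np.asarray(labels))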

Mathematical Analysis of EEG of Patients with Non-fatal Nonspecific Diffuse Encephalitis

Diffuse viral encephalitis may lack fever and other cardinal signs of infection, and hence its distinction from other acute encephalopathic illnesses is challenging. Often, the EEG changes seen routinely are nonspecific and reflect only diffuse encephalopathic changes. The aim of this study was to use nonlinear dynamic mathematical techniques to analyse the EEG data in order to look for characteristic diagnostic patterns in diffuse forms of encephalitis. Diffuse encephalitis was diagnosed on clinical, imaging, and cerebrospinal fluid criteria in three young male patients. Metabolic and toxic encephalopathies were ruled out through appropriate investigations. Digital EEGs were recorded on the 3rd to 5th day of onset. The digital EEGs of 5 male and 5 female age- and sex-matched healthy volunteers served as controls. A two-sample t-test indicated that there was no statistically significant difference in average amplitude between the two groups. However, the standard deviation (or variance) of the EEG signals at FP1-F7 and FP2-F8 was significantly higher for the patients than for the normal subjects. The regularisation dimension was significantly lower for the patients (average between 1.24-1.43) than for the normal subjects (average between 1.41-1.63) for the EEG signals from all locations except the Fz-Cz signal. Similarly, the wavelet dimension was significantly lower (P = 0.05) for the patients (1.122) than for the normal subjects (1.458). Compared with the normal subjects, the patients' EEGs were subdued, with uniform patterns manifested in the values of the regularisation and wavelet dimensions, indicating a decrease in chaotic nature.
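
The amplitude and variance comparison described above can be sketched as follows; the function name, channel handling, and array shapes are illustrative, and the nonlinear regularisation and wavelet dimension computations are not reproduced here.

import numpy as np
from scipy.stats import ttest_ind

def compare_channel(patient_eeg, control_eeg):
    """patient_eeg, control_eeg: arrays of shape (n_subjects, n_samples) for one channel,
    e.g. FP1-F7.  Returns the t-test on mean amplitude and the ratio of average variability."""
    t, p = ttest_ind(patient_eeg.mean(axis=1), control_eeg.mean(axis=1))
    var_ratio = patient_eeg.std(axis=1).mean() / control_eeg.std(axis=1).mean()
    return {"t": t, "p_mean_amplitude": p, "std_ratio_patients_vs_controls": var_ratio}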

Prototype for Enhancing Information Security Awareness in Industry

Human-related information security breaches within organizations are primarily caused by employees who have not been made aware of the importance of protecting the information they work with. Information security awareness is accordingly attracting more attention from industry, because stakeholders are held accountable for the information with which they work. The authors developed an Information Security Retrieval and Awareness model – entitled “ISRA” – tailored specifically towards enhancing information security awareness in industry amongst all users of information, in order to address shortcomings in existing information security awareness models. This paper is principally aimed at presenting a prototype for the ISRA model to highlight the advantages of utilizing the model. The prototype will focus on the non-technical, human-related information security issues in industry. The prototype will ensure that all stakeholders in an organization are part of an information security awareness process, and that these stakeholders are able to retrieve specific information related to information security issues relevant to their job category, preventing them from being overburdened with redundant information.

Surgical Theater Utilization and PACU Staffing

In this work, the surgical theater of a local hospital in KSA was analyzed using simulation. The focus was on answering questions related to how many Operating Rooms (ORs) to open, and on analyzing the performance of the surgical theater in general and the Post Anesthesia Care Unit (PACU) in particular, to assist in making decisions regarding PACU staffing. The surgical theater consists of ten operating rooms and the PACU, which has a maximum capacity of fifteen beds. Different rules for sequencing the surgical cases were tested, and Longest Case First (LCF) proved superior to the others. The results of the different alternatives developed and tested can be used by the manager as a tool to plan and manage the OR and the PACU.
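
The Longest Case First rule itself is simple to sketch: cases are sorted by expected duration, longest first, and each is assigned to the operating room that becomes free earliest. The function name, the durations, the number of open rooms, and the greedy room assignment are illustrative assumptions, not the hospital's actual scheduling logic.

import heapq

def schedule_lcf(case_durations, n_open_rooms):
    """Return (case_id, room, start, end) tuples under the Longest Case First rule."""
    cases = sorted(enumerate(case_durations), key=lambda c: c[1], reverse=True)
    rooms = [(0.0, r) for r in range(n_open_rooms)]        # (time the room becomes free, room id)
    heapq.heapify(rooms)
    schedule = []
    for case_id, duration in cases:
        free_at, room = heapq.heappop(rooms)                # earliest-available room
        schedule.append((case_id, room, free_at, free_at + duration))
        heapq.heappush(rooms, (free_at + duration, room))
    return schedule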

Optimization Approaches for a Complex Dairy Farm Simulation Model

This paper describes the optimization of a complex dairy farm simulation model using two quite different methods of optimization: the Genetic Algorithm (GA) and the Lipschitz Branch-and-Bound (LBB) algorithm. These techniques were used to improve an agricultural system model developed by Dexcel Limited, New Zealand, which gives a detailed representation of pastoral dairying scenarios and contains an 8-dimensional parameter space. The model incorporates sub-models of pasture growth and animal metabolism, which are themselves complex in many cases. Each evaluation of the objective function, a composite 'Farm Performance Index (FPI)', requires simulation of at least a one-year period of farm operation with a daily time-step, and is therefore computationally expensive. The problem of visualizing the objective function (response surface) in high-dimensional spaces is also considered in the context of the farm optimization problem. Adaptations of the Sammon mapping and parallel coordinates visualizations are described which help to visualize some important properties of the model's output topography. From this study, it is found that the GA requires fewer function evaluations in optimization than the LBB algorithm.
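
As a compact, illustrative sketch of the GA side of this comparison, a real-coded GA over the 8-dimensional parameter space could look like the following. The population size, generation count, and the truncation-selection-plus-Gaussian-mutation scheme are placeholder choices, and fpi() stands in for the expensive Dexcel farm simulation (one year of farm operation at a daily time-step per call).

import numpy as np

def ga_maximize(fpi, bounds, pop_size=40, generations=50, mut_scale=0.1):
    """bounds: list of 8 (low, high) pairs; fpi: objective function to maximize."""
    lo, hi = np.array(bounds).T
    pop = np.random.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fitness = np.array([fpi(ind) for ind in pop])          # expensive simulation calls
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]    # keep the better half
        children = parents + mut_scale * (hi - lo) * np.random.standard_normal(parents.shape)
        pop = np.clip(np.vstack([parents, children]), lo, hi)  # stay inside the parameter box
    fitness = np.array([fpi(ind) for ind in pop])
    return pop[np.argmax(fitness)], fitness.max()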

Clustering Categorical Data Using Hierarchies (CLUCDUH)

Clustering large populations is an important problem, particularly when the data contain noise and clusters of different shapes. A good clustering algorithm or approach should be efficient enough to detect clusters sensitively. Besides space complexity, time complexity also gains importance as the data size grows. Using hierarchies, we developed a new algorithm that splits attributes according to the values they take, choosing the splitting dimension so as to divide the database into parts as nearly equal as possible. At each node we calculate certain descriptive statistics of the data residing there, and by pruning we generate the natural clusters with a complexity of O(n).
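
A rough sketch of this splitting idea for categorical records is given below: at each node the attribute whose most frequent value divides the remaining records most evenly is chosen, the node keeps simple descriptive counts, and small nodes are kept as cluster leaves. The build_tree function and the minimum-size pruning rule are illustrations of the strategy described above, not the published algorithm.

from collections import Counter

def build_tree(records, min_size=50):
    """records: list of tuples of categorical attribute values."""
    node = {"size": len(records),
            "value_counts": [Counter(col) for col in zip(*records)]}  # descriptive statistics
    if len(records) <= min_size:
        return node                                         # pruned: keep as a cluster leaf
    best = None
    for d, counts in enumerate(node["value_counts"]):
        value, freq = counts.most_common(1)[0]
        imbalance = abs(freq - (len(records) - freq))       # how uneven this split would be
        if best is None or imbalance < best[0]:
            best = (imbalance, d, value)
    _, d, value = best
    left = [r for r in records if r[d] == value]
    right = [r for r in records if r[d] != value]
    if not left or not right:                               # no informative split remains
        return node
    node["split"] = (d, value)
    node["children"] = [build_tree(left, min_size), build_tree(right, min_size)]
    return node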

Early Onset Neonatal Sepsis Pathogens in Malaysian Hospitals: Determining Empiric Antibiotic

Information regarding early onset neonatal sepsis (EONS) pathogens may vary between regions. Global perspectives show Group B Streptococcus (GBS) as the most common causative pathogen, but the widespread use of intrapartum antibiotics has shifted the pathogen pattern towards gram-negative microorganisms, especially E. coli. The objective of this study was to describe the pathogens isolated and to assess current treatment and the risk factors of EONS. Records of 899 neonates born in three general hospitals between 2009 and 2012 were retrospectively reviewed. Proven EONS was found in 22 (3%) neonates. The majority, 17 (2.3%), were isolated with gram-positive organisms. All gram-positive and most gram-negative organisms showed sensitivity to the tested antibiotics; only two rare gram-negative organisms showed total resistance. Male gender was a possible risk factor for proven EONS. Although proven EONS remains uncommon in Malaysia, the effect of intrapartum antibiotics still requires continuous surveillance.

A Trainable Neural Network Ensemble for ECG Beat Classification

This paper illustrates the use of a combined neural network model for the classification of electrocardiogram (ECG) beats. We present a trainable neural network ensemble approach to developing a customized ECG beat classifier, in an effort to further improve the performance of ECG processing and to offer individualized health care. We propose a three-stage technique, comprising denoising, feature extraction, and classification, for the detection of premature ventricular contractions (PVC) among normal beats and other heart diseases. First, we investigate the application of the stationary wavelet transform (SWT) for noise reduction of the ECG signals. A feature extraction module then extracts 10 ECG morphological features and one timing interval feature. Finally, a number of multilayer perceptron (MLP) neural networks with different topologies are designed. The performance of the different combination methods, as well as the efficiency of the whole system, is presented. Among them, stacked generalization, as the proposed trainable combined neural network model, attains the highest recognition rate of around 95%. This network therefore proves to be a suitable candidate for ECG signal diagnosis systems. ECG samples attributed to the different ECG beat types were extracted from the MIT-BIH arrhythmia database for the study.
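
A sketch of the stacked-generalization combiner is shown below: several MLPs with different hidden-layer topologies are trained on the 11 extracted features, and a second-level learner combines their outputs. Scikit-learn's StackingClassifier stands in for the trainable combiner; the hidden-layer topologies and the logistic-regression meta-learner are examples rather than the authors' exact configuration, and the SWT denoising and feature extraction stages are omitted.

from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression

# Base MLPs with different topologies (example sizes only).
base_mlps = [("mlp%d" % i, MLPClassifier(hidden_layer_sizes=h, max_iter=1000))
             for i, h in enumerate([(10,), (20,), (10, 5)])]

# Second-level model learns how to combine the base MLP predictions.
ensemble = StackingClassifier(estimators=base_mlps,
                              final_estimator=LogisticRegression(max_iter=1000))

# X: (n_beats, 11) feature matrix, y: beat labels (e.g. PVC vs. normal) -- placeholders
# ensemble.fit(X, y)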

Feasibility Investigation of Near Infrared Spectrometry for Particle Size Estimation of Nano Structures

Determination of nanoparticle size is important, since particle size exerts a significant effect on various properties of nanomaterials. Accordingly, proposing non-destructive, accurate, and rapid techniques for this purpose is of high interest. There are conventional techniques for investigating the morphology and grain size of nanoparticles, such as scanning electron microscopy (SEM), atomic force microscopy (AFM), and X-ray diffractometry (XRD). Vibrational spectroscopy is used to characterize different compounds and has been applied to evaluate average particle size based on the relationship between particle size and near infrared spectra [1,4], but it has never been applied in the quantitative morphological analysis of nanomaterials. So far, the potential of near-infrared (NIR) spectroscopy, with its ability to analyse powdered materials rapidly and with minimal sample preparation, has been suggested for particle size determination of powdered pharmaceuticals. The relationship between particle size and diffuse reflectance (DR) spectra in the near infrared region was applied here to introduce a method for estimating particle size. A back-propagation artificial neural network (BP-ANN), as a nonlinear model, was applied to estimate average particle size from the near infrared diffuse reflectance spectra. Thirty-five nano-TiO2 samples with different particle sizes were analyzed by DR-FTNIR spectrometry, and the obtained data were processed by the BP-ANN.
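
A minimal sketch of the calibration step follows, assuming scikit-learn's MLPRegressor as the back-propagation network and random placeholder arrays in place of the 35 DR-FTNIR spectra and reference particle sizes; the network size and train/test split are illustrative choices.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the measured spectra and reference sizes.
spectra = np.random.rand(35, 500)        # 35 samples x 500 wavelength points (illustrative)
sizes = np.random.uniform(10, 100, 35)   # average particle size, nm (illustrative)

X_train, X_test, y_train, y_test = train_test_split(spectra, sizes, test_size=0.3, random_state=0)
bp_ann = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=5000)
bp_ann.fit(X_train, y_train)
print("R^2 on held-out samples:", bp_ann.score(X_test, y_test))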

Emotion Classification by Incremental Association Language Features

Major Depressive Disorder has become a burden of medical expense in Taiwan, as it has around the world. Major Depressive Disorder can be divided into different categories according to previous human activities. Using machine learning, we can classify the emotion expressed in text in advance, which can help medical diagnosis recognize the variants of Major Depressive Disorder automatically. Incremental association language features capture the characteristics of, and relationships between, the words discovered in sentences. Classification, however, suffers from an overlapping-category problem. In this paper, we aim to improve classification performance under the principle of avoiding overlapping categories. We present an approach, called Association Language Features by its Category (ALFC), that discovers words in sentences which occur with high frequency within a category but do not overlap between categories. Experimental results show that ALFC distinguishes Major Depressive Disorder well and achieves better performance. We also compare the approach with a baseline and with mutual information methods that use single words alone or a correlation measure.
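
A rough sketch of this feature-selection idea: for each category, keep the words that occur frequently in that category's sentences but do not appear in any other category's vocabulary. The alfc_features helper, the whitespace tokenisation, and the frequency cut-off are simplifying assumptions, not the paper's exact procedure.

from collections import Counter

def alfc_features(sentences_by_category, min_count=5):
    """sentences_by_category: dict mapping a category name to a list of sentences."""
    counts = {c: Counter(w for s in sents for w in s.lower().split())
              for c, sents in sentences_by_category.items()}
    features = {}
    for c, counter in counts.items():
        # Vocabulary of every other category: words there are excluded to avoid overlap.
        others = set().union(*(counts[o].keys() for o in counts if o != c))
        features[c] = {w for w, n in counter.items() if n >= min_count and w not in others}
    return features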

Effective Online Staff Training: Is This Possible?

The purpose of this paper is to consider the introduction of online courses to replace the current classroom-based staff training. The current training is practical, and must be completed before access to the financial computer system is authorized. The long term objective is to measure the efficacy, effectiveness and efficiency of the training, and to establish whether a transfer of knowledge back to the workplace has occurred. This paper begins with an overview explaining the importance of staff training in an evolving, competitive business environment and defines the problem facing this particular organization. A summary of the literature review is followed by a brief discussion of the research methodology and objective. The implementation of the alpha version of the online course is then described. This paper may be of interest to those seeking insights into, or new theory regarding, practical interventions of online learning in the real world.

Research on Transformer Condition-based Maintenance System using the Method of Fuzzy Comprehensive Evaluation

This study adopted previous fault patterns, the results of detection analysis, historical records and data, and experts' experience to establish fuzzy principles and estimate the failure probability index of the components of a power transformer. Considering that actual parameters and their limiting conditions may differ, this study used the standard data of IEC, IEEE, and CIGRE as condition parameters. According to the characteristics of each condition parameter, relative degradation was introduced to reflect the degree of influence of each factor on the transformer condition. The method of fuzzy mathematics was adopted to determine the membership function of the transformer condition. The calculation used the MATLAB Fuzzy Toolbox and selected the condition parameters of the coil winding, iron core, bushing, OLTC, insulating oil, and other auxiliary components and factors (e.g., load records, performance history, and maintenance records) of the transformer to establish the fuzzy principles. Examples are presented to support the rationality and effectiveness of this evaluation of power transformer performance conditions based on fuzzy comprehensive evaluation.
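
The fuzzy comprehensive evaluation step itself can be sketched as a weighted composition of a membership matrix; the weights, condition grades, and membership values below are purely illustrative and are not the IEC/IEEE/CIGRE limits or the weights used in the study.

import numpy as np

grades = ["good", "fair", "poor", "critical"]
# Rows: condition parameters (e.g. winding, core, bushing, OLTC, insulating oil);
# columns: membership degree of each parameter in each condition grade.
R = np.array([[0.7, 0.2, 0.1, 0.0],
              [0.5, 0.3, 0.2, 0.0],
              [0.2, 0.4, 0.3, 0.1],
              [0.6, 0.3, 0.1, 0.0],
              [0.4, 0.4, 0.2, 0.0]])
w = np.array([0.30, 0.25, 0.15, 0.15, 0.15])   # parameter weights, summing to 1

B = w @ R                                       # weighted-average fuzzy composition
print("Overall condition:", grades[int(np.argmax(B))], B)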

Alternative Approach in Ground Vehicle Wake Analysis

In this paper, an alternative approach to visualising the wake behind different vehicle body shapes, with simplified and fully detailed underbodies, is proposed and analysed. This allows a clearer distinction among the different wake regions. The visualisation is based on a transformation of the cartesian coordinates of a chosen wake plane to polar coordinates, using as a filter only velocities lower than the freestream. This transformation produces a polar wake plot that enables the wake to be divided and quantified in a number of sections. Local drag is used to visualise the drag contribution of the flow in the different sections. Visually, a balanced wake appears as concentric behaviour in the polar plots. Alternatively, integrating the local drag of each angular section as a ratio of the total local drag yields a quantitative measure of wake uniformity, in which the different sections contribute equally to the local drag, with the exception of the wheels.
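
A sketch of the transformation is given below: points of a chosen wake plane are converted from cartesian to polar coordinates about a chosen centre, only points slower than the freestream are kept, and the local-drag contribution is accumulated per angular sector. The polar_wake_sections helper, the centre location, and the sector width are assumptions for illustration.

import numpy as np

def polar_wake_sections(y, z, u, local_drag, u_inf, centre=(0.0, 0.0), sector_deg=10):
    """Return each angular sector's share of the total local drag on the wake plane."""
    y0, z0 = centre
    mask = u < u_inf                                       # keep only the wake (slower than freestream)
    theta = np.degrees(np.arctan2(z[mask] - z0, y[mask] - y0)) % 360.0
    bins = (theta // sector_deg).astype(int)               # angular sector index per point
    totals = np.zeros(int(360 // sector_deg))
    np.add.at(totals, bins, local_drag[mask])              # local-drag contribution per sector
    return totals / totals.sum()                           # sector shares of the total local drag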

Exponential Stability and Periodicity of a Class of Cellular Neural Networks with Time-Varying Delays

The problem of exponential stability and periodicity for a class of cellular neural networks with time-varying delays (DCNNs) is investigated. By dividing the network state variables into subgroups according to the characteristics of the neural networks, some sufficient conditions for exponential stability and periodicity are derived via the method of variation of parameters and inequality techniques. These conditions are expressed in terms of blocks of the interconnection matrices. Compared with some previous methods, the method used in this paper does not resort to any Lyapunov function, and the results derived here improve and generalize some earlier criteria established in the cited literature. Two examples are discussed to illustrate the main results.
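
For reference, a commonly studied form of such a delayed cellular neural network (the abstract does not state the authors' exact model, so the equation below is an assumption) is

\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t - \tau_{ij}(t))) + I_i, \qquad i = 1, \dots, n,

where c_i > 0, A = (a_{ij}) and B = (b_{ij}) are the interconnection matrices, \tau_{ij}(t) are bounded time-varying delays, and f_j are the activation functions. Exponential stability of an equilibrium x^* then means that there exist constants M \ge 1 and \lambda > 0 such that \|x(t) - x^*\| \le M e^{-\lambda t} \sup_{s \le 0} \|\phi(s) - x^*\| for every solution with initial function \phi.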

Appraisal of Energy Efficiency of Urban Development Plans: The Fidelity Concept on Izmir-Balcova Case

Design and land use are closely linked to the energy efficiency levels of an urban area. Current city planning practice does not involve an effective land use-energy evaluation in its 'blueprint' urban plans. The study proposes an appraisal method, which can be embedded in GIS programs, that uses five planning criteria to measure how far a planner may depart from the planning principles (criteria) in exchange for the greatest energy output obtainable. The case of Balcova, a district in the Izmir metropolitan area, is used to evaluate the proposed master plan and the use of geothermal energy (heating only) in the district. If the land use design were revised for the greatest energy efficiency (a 30% gain was obtained), mainly by increasing density around the geothermal wells and proposing more mixed-use zones, the result would be a 17% distortion (infidelity to the main planning principles) relative to the original plan. The proposed method can be an effective simulation tool for planners, whose calculations can be made with ready GIS tools, for evaluating the efficiency levels of different plan proposals and showing how much energy saving causes how much deviation from the other planning ideals. Lower energy use may be possible for different land use proposals under various policy trials.