Application of a Time-Frequency-Based Blind Source Separation to an Instantaneous Mixture of Secondary Radar Sources

In Secondary Surveillance Radar (SSR) systems, locating and recognising aircraft in the neighbourhood of civil airports is becoming more difficult as air traffic grows. Here, we propose to apply a recent Blind Source Separation (BSS) algorithm based on time-frequency analysis in order to separate messages sent by different aircraft and falling in the same radar beam at reception. The source separation method involves joint diagonalization of a set of smoothed versions of spatial Wigner-Ville distributions. The technique exploits the differences in the time-frequency signatures of the nonstationary sources to be separated. Since the SSR sources emit different messages at different frequencies, the method is well suited to this new application. We applied the technique in simulation to separate SSR replies. Results are provided at the end of the paper.
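
To make the idea concrete, here is a minimal Python sketch of time-frequency-based separation of an instantaneous two-source mixture. It is not the authors' algorithm: it substitutes short-window spatial outer products for smoothed Wigner-Ville distributions and jointly diagonalizes only two such matrices via a generalized eigendecomposition; the chirp waveforms and mixing matrix are invented for illustration.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)

# Two nonstationary sources with distinct time-frequency signatures
# (hypothetical chirp-like waveforms, for illustration only).
n = 4096
t = np.arange(n) / n
s1 = np.cos(2 * np.pi * (50 * t + 200 * t**2))    # up-chirp
s2 = np.cos(2 * np.pi * (300 * t - 150 * t**2))   # down-chirp
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.6], [0.4, 1.0]])            # unknown mixing matrix
X = A @ S                                         # instantaneous mixture

# Whitening from the sample covariance.
Rx = X @ X.T / n
d, E = np.linalg.eigh(Rx)
W = E @ np.diag(d ** -0.5) @ E.T
Z = W @ X

# Crude spatial "time-frequency" matrices: short-window outer products
# at two time locations where the sources differ in energy (a stand-in
# for smoothed spatial Wigner-Ville points).
def stfd(Z, center, width=256):
    seg = Z[:, center - width // 2: center + width // 2]
    return seg @ seg.T / width

D1 = stfd(Z, n // 4)
D2 = stfd(Z, 3 * n // 4)

# Exact joint diagonalization of two symmetric matrices via the
# generalized eigendecomposition D1 v = lambda D2 v.
_, V = eig(D1, D2)
V = np.real(V)

S_hat = V.T @ Z   # recovered sources, up to permutation and scale
```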

A Comparative Study of Turbulence Models Performance for Turbulent Flow in a Planar Asymmetric Diffuser

This paper presents a computational study of the separated flow in a planar asymmetric diffuser. The steady RANS equations for turbulent incompressible fluid flow and six turbulence closures are used. The commercial code FLUENT 6.3.26 was used to solve the set of governing equations with the various turbulence models. Five of the turbulence models are available directly in the code, while the v2-f turbulence model was implemented via User Defined Scalars (UDS) and User Defined Functions (UDF). A series of computational analyses was performed to assess the performance of the turbulence models at different grid densities. The results show that the standard k-ω, SST k-ω and v2-f models clearly performed better than the other models in the presence of an adverse pressure gradient. The RSM model shows acceptable agreement with the velocity and turbulent kinetic energy profiles, but it failed to predict the locations of the separation and reattachment points. The standard k-ε and low-Re k-ε models delivered very poor results.
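
For reference, the standard k-ε closure, one of the baselines compared here, has the well-known textbook form below (standard model constants; nothing in this block is specific to this paper's FLUENT setup):

```latex
\begin{aligned}
\frac{\partial (\rho k)}{\partial t} + \frac{\partial (\rho k u_i)}{\partial x_i}
&= \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon,\\
\frac{\partial (\rho\varepsilon)}{\partial t} + \frac{\partial (\rho\varepsilon u_i)}{\partial x_i}
&= \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
 + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\,\rho\frac{\varepsilon^2}{k},\\
\mu_t &= \rho\, C_\mu \frac{k^2}{\varepsilon},\qquad
C_\mu = 0.09,\; C_{1\varepsilon} = 1.44,\; C_{2\varepsilon} = 1.92,\; \sigma_k = 1.0,\; \sigma_\varepsilon = 1.3,
\end{aligned}
```

where P_k denotes the turbulence production term. The adverse-pressure-gradient weakness reported above is a known limitation of this closure.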

Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)

There is a world-wide need for the development of sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Concentration and time of exposure are both important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes in R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, thus motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available. Hence it became possible to explicitly include the mortalities of the different genotypes in the model. In this paper we describe how we used two generalized linear models (GLMs), probit and logistic, to fit the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than traditional maximum likelihood estimation, to estimate the model parameters. The results show that both the probit and the logistic model fit the data sets well, but the former is much better in terms of smaller least-squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation, while being less time-consuming and less computationally demanding.
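
A minimal sketch of the direct algebraic idea, with invented illustrative data: transform the observed mortalities to probit units, then solve the resulting linear system with a Moore-Penrose pseudoinverse instead of iterative maximum likelihood. The log-dose response form and the numbers are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical (C, t, mortality) triples for one genotype:
# C = phosphine concentration (mg/L), t = exposure time (days).
C = np.array([0.1, 0.1, 0.5, 0.5, 1.0, 1.0])
t = np.array([2.0, 6.0, 2.0, 6.0, 2.0, 6.0])
p = np.array([0.08, 0.35, 0.30, 0.80, 0.55, 0.97])

# Probit model: Phi^{-1}(p) = b0 + b1*log(C) + b2*log(t).
# Clip p away from 0/1 so the probit transform stays finite.
z = norm.ppf(np.clip(p, 1e-6, 1 - 1e-6))
X = np.column_stack([np.ones_like(C), np.log(C), np.log(t)])

# Direct algebraic estimate via the generalized (Moore-Penrose)
# inverse, i.e. the least-squares solution b = X^+ z, no iteration.
b = np.linalg.pinv(X) @ z
print("coefficients:", b)

# Predicted mortality at a new concentration/exposure combination.
def mortality(conc, days):
    return norm.cdf(b[0] + b[1] * np.log(conc) + b[2] * np.log(days))

print("predicted mortality at C=0.3, t=4:", mortality(0.3, 4.0))
```

The logistic variant is identical except that the probit transform is replaced by the logit.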

Migration Loneliness and Family Links: A Case Narrative

Culture and family structure provide a sense of security. Further, the chrono-, macro- and micro-contexts of development influence developmental transitions and timetables, particularly owing to variations in the macrosystem associated with non-normative life events such as migration. Migration threatens family links, security and attachment bonds. Rising migratory trends have prompted increased interest in the consequences of migration for familial bonds, developmental autonomy, the socialization process, and the sense of security. This paper takes a narrative approach and applies the attachment paradigm from a lifespan perspective to examine the settlement experiences of an India-born migrant student in Sydney, Australia. It focuses on her quest to preserve family ties, her remote secure base, and her continual struggle to balance dependency and autonomy, a major developmental milestone. As positional parental power is culturally more potent in Indian society, the paper raises some important concerns related to cultural expectations, adaptation, acculturative stress and the sense of security.

A Generic e-Tutor for Graphical Problems

For a variety of safety and economic reasons, engineering undergraduates in Australia have experienced diminishing access to the real hardware that is typically the embodiment of their theoretical studies. This trend will delay the development of practical competence, decrease the ability to model and design, and suppress motivation. The author has attempted to address this concern by creating a software tool that contains both photographic images of real machinery, and sets of graphical modeling 'tools'. Academics from a range of disciplines can use the software to set tutorial tasks, and incorporate feedback comments for a range of student responses. An evaluation of the software demonstrated that students who had solved modeling problems with the aid of the electronic tutor performed significantly better in formal examinations with similar problems. The 2-D graphical diagnostic routines in the Tutor have the potential to be used in a wider range of problem-solving tasks.

An Investigation into the Application of Artificial Neural Networks to the Prediction of Injuries in Sport

Artificial Neural Networks (ANNs) have been used successfully in many scientific, industrial and business domains as a method for extracting knowledge from vast amounts of data. However, the use of ANN techniques in the sporting domain has been limited. In professional sport, data is stored on many aspects of teams, games, training and players. Sporting organisations have begun to realise that there is a wealth of untapped knowledge contained in this data, and there is great interest in techniques to utilise it. This study uses player data from the elite Australian Football League (AFL) competition to train and test ANNs with the aim of predicting the onset of injuries. The ANNs achieved a prediction accuracy of 82.9% across all examples, with 94.5% of all injuries correctly predicted. These initial findings suggest that ANNs may have the potential to assist sporting clubs in the prediction of injuries.
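
As a rough illustration of the approach (the AFL player data is not public, so the features and labels below are synthetic stand-ins), a small feed-forward ANN can be trained and scored on the two figures reported above, overall accuracy and injury recall:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(1)

# Hypothetical training-load features per player-week, e.g. session
# count, distance covered, age, prior-injury flag (all synthetic).
n = 1000
X = rng.normal(size=(n, 4))
# Synthetic rule: higher load plus prior injury raises injury risk.
risk = 1 / (1 + np.exp(-(1.5 * X[:, 0] + 1.0 * X[:, 3] - 0.5)))
y = (rng.uniform(size=n) < risk).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("injury recall:", recall_score(y_te, pred))  # cf. 94.5% above
```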

Integrated Reasoning Approach for Car Fault Diagnosis

This paper presents an integrated case-based and rule-based reasoning method for car fault diagnosis. The method extracts past cases from the Proton Service Center and compares them with preset rules to deduce a diagnosis/solution for a car service case. New cases are stored in the knowledge base. The test case examples illustrate the effectiveness of the proposed integrated reasoning, which has proven comparable in accuracy to the same reasoning carried out by a service advisor from the service center.
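
A minimal sketch of how such an integrated reasoner can be structured, with entirely hypothetical symptoms, cases and rules: retrieve the most similar past case first, fall back to preset rules when no case is close enough, and retain newly solved cases:

```python
from dataclasses import dataclass

@dataclass
class Case:
    symptoms: frozenset
    diagnosis: str

case_base = [
    Case(frozenset({"engine_cranks", "no_start", "fuel_smell"}),
         "flooded engine"),
    Case(frozenset({"no_crank", "dash_lights_dim"}), "weak battery"),
]

rules = [
    (frozenset({"no_crank", "no_dash_lights"}), "dead battery or main fuse"),
    (frozenset({"overheating", "coolant_loss"}), "cooling system leak"),
]

def jaccard(a, b):
    return len(a & b) / len(a | b)

def diagnose(symptoms, threshold=0.6):
    # 1. Case retrieval: nearest past case by symptom similarity.
    best = max(case_base, key=lambda c: jaccard(symptoms, c.symptoms))
    if jaccard(symptoms, best.symptoms) >= threshold:
        return best.diagnosis
    # 2. Fall back to preset rules when no case is close enough.
    for condition, conclusion in rules:
        if condition <= symptoms:
            return conclusion
    return "refer to service advisor"

def retain(symptoms, diagnosis):
    # 3. Retain the solved case in the knowledge base.
    case_base.append(Case(frozenset(symptoms), diagnosis))

print(diagnose(frozenset({"no_crank", "dash_lights_dim"})))
```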

Citizen Participation in Informal Settlements: Potentials and Obstacles - The Case of the Saadi Community, Shiraz, Iran

In recent years, the "bottom-up planning approach" has been widely accepted and expanded by planning theorists, and citizen participation has become more important in decision-making for informal settlements. Many previous projects and strategies for informal settlements have failed because they ignored citizen participation, and in some cases they led to the physical expansion of these neighbourhoods. According to recent experiences, the new participatory approach has been somewhat successful. This paper focuses on local experiences in Iran. A considerable number of people in Iran live in informal settlements, and the government could not solve the problems of these settlements with the previous methods. It is time to examine new methods, such as empowering local citizens and involving them in solving the current physical, social and economic problems. The paper aims to address the previous and new strategies for dealing with informal settlements, the conditions under which citizens can be involved in the planning process, the limits and potentials of this process, the main actors and issues, and finally the motivations that can promote citizen participation. Documentary studies, observation, interviews and questionnaires were used to achieve these objectives. Nearly 80 percent of respondents in the Saadi Community are ready to participate in regularising their neighbourhood if the preconditions of citizen involvement are provided. These preconditions include the kind of problem and its severity, the importance of the issue, and the existence of a short-term solution. Moreover, confirmation of dwellers' ownership can promote citizen engagement in participatory projects.

Feature Selection with the Kohonen Self-Organizing Classification Algorithm

In this paper, a one-dimensional Self-Organizing Map (SOM) algorithm for feature selection is presented. The algorithm is based on a first classification of the input dataset in a similarity space. From this classification, a set of positive and negative features is computed for each class; this set of features is selected as the result of the procedure. The procedure is evaluated on an in-house dataset from a Knowledge Discovery from Text (KDT) application and on a set of publicly available datasets used in international feature selection competitions. These datasets come from KDT applications, drug discovery, and other applications. The correct classifications available for the training and validation datasets are used to optimize the parameters for positive and negative feature extraction. The procedure becomes feasible for large and sparse datasets, such as those obtained in KDT applications, by using compression techniques to store the similarity matrix together with speed-up techniques for the Kohonen algorithm that exploit the sparsity of the input matrix. These improvements, combined with grid computing, make it feasible to apply the methodology to massive datasets.
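
The sketch below illustrates the two stages on toy data: a one-dimensional SOM trained on the input rows, followed by per-class positive/negative feature extraction. The selection criterion used here (features whose class mean deviates strongly from the global mean) is an assumption for illustration, not the paper's exact rule:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 100 samples, 20 features (real KDT datasets are far larger).
X = rng.random((100, 20))

# --- 1-D SOM: a line of k units trained on the rows of X. ---
k, iters, lr0, sigma0 = 5, 2000, 0.5, 2.0
W = rng.random((k, X.shape[1]))
for it in range(iters):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))    # best-matching unit
    lr = lr0 * (1 - it / iters)
    sigma = sigma0 * (1 - it / iters) + 0.5
    h = np.exp(-((np.arange(k) - bmu) ** 2) / (2 * sigma**2))
    W += lr * h[:, None] * (x - W)                 # neighbourhood update

# --- Per-class positive/negative features (illustrative criterion):
# a feature is "positive" for a class if its class mean clearly exceeds
# the global mean, "negative" if clearly below; both get selected.
labels = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
selected = set()
for c in range(k):
    members = X[labels == c]
    if len(members) == 0:
        continue
    delta = members.mean(axis=0) - X.mean(axis=0)
    selected |= set(np.where(np.abs(delta) > 0.5 * X.std(axis=0))[0])

print("selected features:", sorted(int(i) for i in selected))
```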

Enhancement of Biogas Production from Bakery Waste by Pseudomonas aeruginosa

Production of biogas from bakery waste was enhanced by the addition of bacterial cells. The study was divided into two steps. In the first step, grease waste from a bakery's grease trap was degraded by Pseudomonas aeruginosa. The concentration of byproducts, especially glycerol, was determined, and the glycerol concentration was found to increase from 12.83% to 48.10%. In the second step, three biodigesters were set up with three different substrates: non-degraded waste in the first biodigester, degraded waste in the second, and degraded waste mixed with swine manure at a 1:1 ratio in the third. The highest biogas concentration was found in the third biodigester, at 44.33% methane and 63.71% carbon dioxide. Lower concentrations of 24.90% methane and 18.98% carbon dioxide were exhibited in the second biodigester, whereas the lowest were found in the non-degraded waste biodigester. It was demonstrated that biogas production was greatly increased by initial degradation of the grease waste with Pseudomonas aeruginosa.

Performance of Heat Pump Dryer for Kaffir Lime Leaves and Quality of Dried Products under Different Temperatures and Media

This research studies the performance of a heat pump dryer for drying kaffir lime leaves under different media and compares the color values and essential oil content of the final products after drying. In the experiments, kaffir lime leaves were dried in a closed-loop system at drying temperatures of 40, 50 and 60 °C. The drying media used in this study were hot air, CO2 and N2. The velocity of the drying medium in the drying chamber was 0.4 m/s with a bypass ratio of 30%. The initial moisture content of the kaffir lime leaves was approximately 180-190% d.b., and they were dried to a final moisture content of 10% d.b. The results showed that the drying rate, the coefficient of performance (COP) and the specific energy consumption (SEC) depended on the drying temperature, while the drying medium did not affect the drying rate. The drying time at 40, 50 and 60 °C was 10, 5 and 3 hours, respectively. The COP of the heat pump system decreased with drying temperature, ranging from 2.20 to 3.51. As for final product color, greenness and overall color changed more when drying at 60 °C than at 40 and 50 °C. Comparing drying media, products dried with hot air at 60 °C showed greater change in greenness and overall color than those dried with CO2 and N2.
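
For reference, the two performance indices follow their conventional definitions in drying studies (standard forms, assumed here rather than quoted from the paper):

```latex
\mathrm{COP} = \frac{Q_{\mathrm{cond}}}{W_{\mathrm{comp}}},
\qquad
\mathrm{SEC} = \frac{E_{\mathrm{total}}}{m_{w}}
```

where Q_cond is the heat delivered by the condenser, W_comp the compressor work input, E_total the total energy consumed by the dryer, and m_w the mass of water removed from the product, so SEC is typically reported in MJ per kg of water removed.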

Using Multi-Objective Particle Swarm Optimization for the Bi-Objective Multi-Mode Resource-Constrained Project Scheduling Problem

In this paper, the multi-mode resource-constrained project scheduling problem with discounted cash flows is considered. Minimizing the makespan and maximizing the net present value (NPV) are the two common objectives investigated in the literature. We apply an evolutionary algorithm, multi-objective particle swarm optimization (MOPSO), to find Pareto-front solutions, using standard sets of instances from the project scheduling problem library (PSPLIB). The results are compared computationally with respect to different metrics taken from the literature on evolutionary multi-objective optimization.
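
The core of any MOPSO for this problem is the Pareto-dominance test and the external archive of nondominated solutions. A minimal sketch, with a toy stand-in for schedule evaluation (a real implementation would decode mode/activity lists against a PSPLIB instance; NPV is stored negated so both objectives are minimized):

```python
import random

def dominates(f, g):
    """True if objective vector f Pareto-dominates g (minimization)."""
    return (all(a <= b for a, b in zip(f, g))
            and any(a < b for a, b in zip(f, g)))

def update_archive(archive, candidate):
    f = candidate[1]
    if any(dominates(g, f) for _, g in archive):
        return archive                       # candidate is dominated
    archive = [(x, g) for x, g in archive if not dominates(f, g)]
    archive.append(candidate)
    return archive

def evaluate(x):
    """Hypothetical objectives: (makespan, -NPV)."""
    makespan = sum(x) + random.random()
    npv = sum(xi * (0.9 ** i) for i, xi in enumerate(x))
    return (makespan, -npv)

random.seed(0)
archive = []
for _ in range(200):
    x = [random.randint(0, 3) for _ in range(5)]  # toy "mode" vector
    archive = update_archive(archive, (x, evaluate(x)))

print("Pareto front size:", len(archive))
```

In a full MOPSO, particle velocities are guided by a global leader drawn from this archive rather than a single global best.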

Using Heuristic Rules from Sentence Decomposition of Experts' Summaries to Detect Students' Summarizing Strategies

Summarizing skills have been introduced into the English syllabus in secondary schools in Malaysia to evaluate students' comprehension of a given text, and producing a summary requires students to employ several strategies. This paper reports on our effort to develop a computer-based summarization assessment system that detects the strategies used by students in producing their summaries. Sentence decomposition of expert-written summaries is used to analyze how experts produce their summary sentences. From this analysis, we identified seven summarizing strategies and transformed them into a set of heuristic rules for determining which strategies were used. We developed an algorithm based on these heuristic rules and performed experiments to evaluate and support the proposed technique.
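
A toy illustration of the flavour of such heuristic rules (the actual seven strategies and their rules come from the authors' analysis of expert summaries and are not reproduced here): classify a summary sentence by its word overlap with the closest source sentence:

```python
def words(s):
    return [w.strip(".,;").lower() for w in s.split()]

def classify(summary_sent, source_sents):
    sw = set(words(summary_sent))
    best = max(source_sents, key=lambda s: len(sw & set(words(s))))
    overlap = len(sw & set(words(best))) / max(len(sw), 1)
    if overlap > 0.8:
        return "copy-verbatim"
    if overlap > 0.4:
        return "paraphrase"
    return "invention/generalization"   # new wording or combined ideas

source = ["The cat sat on the mat near the door.",
          "It watched the birds outside all afternoon."]
print(classify("The cat sat on the mat.", source))          # copy-verbatim
print(classify("A cat relaxed by the entrance.", source))   # invention
```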

The Theoretical Framework of the Necessity of Conducting Operational Auditing in Iran

Nowadays, efficiency, effectiveness and economy are regarded as the main objectives of managers and the secret of an organization's survival in a competitive economy. In such competitive settings, it is essential that the management of an organization not be negligent, but be able to identify quickly the opportunities for improving the organization's operation and to remove the shortcomings of the system it manages in order to exploit opportunities for development. Operational auditing is a useful tool for system adjustment and for leading an organization toward its objectives. Operational auditing is, indeed, a viewpoint that identifies the causes of the insufficiencies, weaknesses and deficiencies of a system and plans to eliminate them. Operational auditing contributes to the effectiveness and optimization of executive managers' decisions, increases the efficiency and economy of their future performance, and prevents the waste and incorrect use of resources. Evidence shows that operational auditing is used only at a limited level in Iran, which raises several questions: Why do only a limited number of corporations use operational auditing? Which factors can guarantee its full implementation? What obstacles stand in the way of its implementation? The purpose of this article is to determine the executive objectives, the domain of operation, the components, and the executive obstacles of operational auditing in Iran.

Groebner Bases Computation in Boolean Rings is PSPACE

The theory of Groebner bases, which has recently been honored with the ACM Paris Kanellakis Theory and Practice Award, has become a crucial building block of computer algebra and is widely used in science, engineering, and computer science. It is well known that Groebner bases computation is EXPSPACE in a general polynomial ring setting. However, for many important applications in computer science, such as satisfiability and automated verification of hardware and software, computations are performed in a Boolean ring. In this paper, we give an algorithm showing that Groebner bases computation is PSPACE in Boolean rings. We also show that, with this discovery, the Groebner bases method can theoretically be as efficient as other methods for automated verification of hardware and software. Additionally, Groebner bases enjoy many useful and interesting properties, including the ability to efficiently convert a basis for one order of variables into a basis for another, making them a promising method in automated verification.
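
A small experiment in this setting can be run with sympy, assuming its groebner function's modulus option: work over GF(2) and adjoin the field equations v² = v, which is exactly what makes the quotient a Boolean ring (the example system is invented):

```python
from sympy import symbols, groebner

x, y, z = symbols("x y z")

# Boolean-ring encoding: coefficients in GF(2), plus the field
# equations v**2 - v for every variable, so x**2 = x etc. holds.
field_eqs = [v**2 - v for v in (x, y, z)]

# Hypothetical system: (x AND y) XOR z = 1 and (x OR y) = 1.
# AND -> product, XOR -> sum, OR -> a + b + a*b over GF(2);
# "= 1" becomes "polynomial + 1 = 0".
system = [x*y + z + 1, x + y + x*y + 1]

G = groebner(system + field_eqs, x, y, z, order="lex", modulus=2)
print(G)   # the basis encodes all satisfying Boolean assignments
```

The field equations keep every polynomial multilinear, which is the structural reason the Boolean case admits the polynomial-space bound claimed above.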

Quality Monitoring and Dynamic Pricing in Cold Chain Management

This paper presents a cold chain monitoring system which focuses on the assessment of quality and on dynamic pricing information for food in the cold chain. A cold chain is composed of many actors and stages; however, it can be seen as a single entity, since a breakdown in temperature control at any stage can impact the final quality of the product. In a cold chain, the shelf life, quality, and safety of perishable food throughout the supply chain are greatly impacted by environmental factors, especially temperature. In this paper, a prototype application is implemented to retrieve the time-temperature history and the current quality, and to set prices dynamically in real time according to quality changes caused by temperature fluctuations.
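
A minimal sketch of the quality/price logic (the formulas below are generic assumptions, not the paper's models): accumulate first-order quality loss with an Arrhenius temperature dependence over the recorded time-temperature legs, then scale the price by the remaining quality:

```python
import math

EA_OVER_R = 8000.0   # activation energy / gas constant [K], assumed
K_REF = 0.02         # quality-loss rate per hour at T_REF, assumed
T_REF = 277.15       # 4 degC reference temperature, in kelvin

def remaining_quality(history):
    """history: list of (hours, temp_degC) legs in the cold chain."""
    q = 1.0
    for hours, temp_c in history:
        T = temp_c + 273.15
        # Arrhenius scaling of the first-order loss rate.
        k = K_REF * math.exp(-EA_OVER_R * (1.0 / T - 1.0 / T_REF))
        q *= math.exp(-k * hours)
    return q

def dynamic_price(base_price, history, floor=0.2):
    q = remaining_quality(history)
    return base_price * max(q, floor), q

# A pallet that spent 6 h at 10 degC during a transfer breakdown:
price, q = dynamic_price(10.0, [(48, 4.0), (6, 10.0), (24, 4.0)])
print(f"quality={q:.2f}, price={price:.2f}")
```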

Using Knowledge Management and Critical Thinking to Understand Thai Perceptions and Decisions towards Work-Life Balance in a Multinational Software Development Firm

Work-life balance has been acknowledged and promoted for the sake of employee retention. It is essential for a manager to understand the human resources situation within a company in order to help employees work happily and perform at their best. This paper suggests that knowledge management and critical thinking are useful for motivating employees to think about their work-life balance. A qualitative case study is presented, which aimed to discover the meaning of work-life balance from the perspective of Thai knowledge workers and how it affects their decision-making towards work resignation. The results revealed three types of work-life balance dimensions: a work-life balance spanning both the workplace and the private-life setting, an organizational work-life balance only, and a work-life balance in the private-life setting only. These aspects all influenced the employees' decision-making. Factors within the theme of organizational work-life balance involved systematic administration, fair treatment, employee recognition, challenging assignments for gaining work experience, assignment engagement, teamwork, relationships with superiors, and the working environment, while factors concerning the private-life setting related to personal demands such as increasing one's salary or starting one's own business.

Evolutionary Eigenspace Learning using CCIPCA and IPCA for Face Recognition

Traditional principal component analysis (PCA) techniques for face recognition are based on batch-mode training using a pre-available image set. Real-world applications require the training set to be dynamic and evolving: within a framework of continuous learning, new training images are continually added to the original set, which with batch methods triggers a costly re-computation of the eigenspace representation by repeating the entire batch-based training on both the old and new images. Incremental PCA methods allow new images to be added and the PCA representation to be updated. In this paper, two incremental PCA approaches, CCIPCA and IPCA, are examined and compared. In addition, different learning and testing strategies are proposed and applied to the two algorithms. The results suggest that batch PCA is inferior to both incremental approaches, and that the CCIPCA variants are practically equivalent.
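
A compact sketch of the CCIPCA update (following Weng et al.'s candid covariance-free rule, without the amnesic parameter) shows how each new sample refines the eigenvector estimates without any batch re-computation:

```python
import numpy as np

def ccipca_update(V, n, u):
    """V: (k, d) current (unnormalized) eigenvector estimates,
    n: number of samples seen so far, u: new mean-centred sample."""
    u = u.copy()
    k = V.shape[0]
    for i in range(min(k, n + 1)):
        if i == n:
            V[i] = u                     # initialize component i
            break
        w = 1.0 / (n + 1)
        V[i] = (1 - w) * V[i] + w * (u @ V[i]) / np.linalg.norm(V[i]) * u
        vi = V[i] / np.linalg.norm(V[i])
        u = u - (u @ vi) * vi            # deflate before next component
    return V

rng = np.random.default_rng(3)
d, k = 50, 3
X = rng.normal(size=(500, d)) @ rng.normal(size=(d, d)) * 0.1
X -= X.mean(axis=0)

V = np.zeros((k, d))
for n, u in enumerate(X):
    V = ccipca_update(V, n, u)

# Compare against batch PCA directions (sign-invariant cosine).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
for i in range(k):
    vi = V[i] / np.linalg.norm(V[i])
    print(f"|cos angle|, component {i}:", abs(vi @ Vt[i]))
```

Each update costs O(kd), so the eigenspace can track a growing face database one image at a time.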

Certain Data-Dimension Reduction Techniques for Application with ANN-Based MCS for the Study of High-Energy Showers

Cosmic ray showers, arriving from their places of origin in space, generate secondary particles called Extensive Air Showers (EAS) upon entering the Earth's atmosphere. Detection and analysis of EAS and similar high-energy particle showers involve a plethora of experimental setups with constraints for which soft-computational tools like Artificial Neural Networks (ANNs) can be adopted. The performance of ANN classifiers can be enhanced further by the use of a Multiple Classifier System (MCS) and certain data-dimension reduction techniques. This work describes the performance of data-dimension reduction techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Self-Organizing Map (SOM) approximators, for application with an MCS formed using a Multi-Layer Perceptron (MLP), a Recurrent Neural Network (RNN) and a Probabilistic Neural Network (PNN). The data inputs are obtained from an array of detectors placed in a circular arrangement resembling a practical detector grid; they are high-dimensional and strongly correlated with one another. The PCA, ICA and SOM blocks reduce this correlation and generate a form suitable for real-time practical applications for predicting the primary energy and location of EAS from the density values captured by the detectors in the circular grid.
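
A minimal sketch of the PCA branch of such a pipeline, with a synthetic stand-in for the correlated detector densities (the detector count, latent parameters and network size below are simplifying assumptions, not the paper's setup): decorrelate and compress the readings before the neural estimator:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

# Hypothetical detector grid: 64 correlated density readings per
# event, driven by 3 latent shower parameters (e.g. energy, core x/y).
n_events, n_detectors = 2000, 64
latent = rng.normal(size=(n_events, 3))
mixing = rng.normal(size=(3, n_detectors))
densities = latent @ mixing + 0.1 * rng.normal(size=(n_events, n_detectors))
energy = latent[:, 0]            # target: primary-energy proxy

# PCA decorrelates the readings and shrinks the input dimension
# before the neural-network stage of the MCS.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=5),
                      MLPRegressor(hidden_layer_sizes=(16,),
                                   max_iter=3000, random_state=0))
model.fit(densities[:1500], energy[:1500])
print("R^2 on held-out events:",
      model.score(densities[1500:], energy[1500:]))
```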

Concept Indexing using Ontology and Supervised Machine Learning

Nowadays, ontologies are the only widely accepted paradigm for the management of sharable and reusable knowledge in a way that allows its automatic interpretation. They are collaboratively created across the Web and used to index, search and annotate documents. The vast majority of ontology-based approaches, however, focus on indexing texts at document level. Recently, with the advances in ontological engineering, it became clear that information indexing can largely benefit from the use of general-purpose ontologies, which aid the indexing of documents at word level. This paper presents a concept indexing algorithm which adds ontology information to words and phrases and allows full text to be searched, browsed and analyzed at different levels of abstraction. The algorithm uses a general-purpose ontology, OntoRo, and an ontologically tagged corpus, OntoCorp, both developed for the purpose of this research. OntoRo and OntoCorp are used in a two-stage supervised machine learning process aimed at generating ontology tagging rules. The first experimental tests show a tagging accuracy of 78.91%, which is encouraging with a view to further improvement of the algorithm.
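
The shape of such tagging rules can be illustrated with a tiny hypothetical example (OntoRo, OntoCorp and the learned rule format belong to the authors; everything below is invented for illustration): a rule maps a word plus a simple context feature to a concept tag, and the highest-weighted applicable rule wins:

```python
rules = [
    # (word, context word that must appear nearby, concept tag, weight)
    ("bank", "river", "NaturalFeature", 0.9),
    ("bank", "loan", "FinancialInstitution", 0.95),
    ("bank", None, "FinancialInstitution", 0.6),   # default sense
]

def tag(word, context_words):
    candidates = []
    for w, ctx, concept, weight in rules:
        if w != word:
            continue
        if ctx is None or ctx in context_words:
            candidates.append((weight, concept))
    return max(candidates)[1] if candidates else None

sentence = "they moored the boat at the bank of the river".split()
print(tag("bank", set(sentence)))   # -> NaturalFeature
```

In the two-stage learning process described above, the weights and context conditions would be induced from the tagged corpus rather than written by hand.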