Multidimensional Data Mining by Means of Randomly Travelling Hyper-Ellipsoids

This study presents a new approach to automatic clustering and classification in large, complex databases that, at the same time, derives explicit rules describing each cluster. The method works well in both sparse and dense multidimensional data spaces, and the members of the data space may be of the same nature or represent different classes. A number of N-dimensional ellipsoids are used to enclose the data clouds; because an ellipsoid can rotate freely in space, cluster detection becomes very efficient. The method uses genetic algorithms to optimize the location, orientation, and geometric characteristics of the hyper-ellipsoids. The proposed approach can serve as a basis for general knowledge systems that discover hidden patterns and unexpected rules in large databases.
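As a concrete illustration of the geometry involved (not the paper's implementation), the following sketch tests membership of points in a freely rotated N-dimensional ellipsoid; a genetic algorithm such as the one described would evolve the centre, semi-axes, and rotation matrix to maximize the data enclosed. All names and values here are hypothetical.

```python
import numpy as np

def inside_ellipsoid(points, center, axes, R):
    """Test which points lie inside an N-D ellipsoid with semi-axis
    lengths `axes` and rotation matrix `R` (columns = principal
    directions). A point x is inside when sum_i (u_i / a_i)^2 <= 1,
    where u = R^T (x - c) is x expressed in the ellipsoid frame."""
    u = (points - center) @ R          # rotate into the ellipsoid frame
    return np.sum((u / axes) ** 2, axis=1) <= 1.0

# Hypothetical 3-D example with an axis-aligned (identity) rotation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
mask = inside_ellipsoid(pts, center=np.zeros(3),
                        axes=np.array([2.0, 1.0, 0.5]),
                        R=np.eye(3))
coverage = mask.mean()                 # fraction of samples enclosed
```

A GA individual would encode `(center, axes, R)`, with fitness rewarding ellipsoids that enclose many points in a small volume.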

Robust H∞ State-Feedback Control for Uncertain Fuzzy Markovian Jump Systems: LMI-Based Design

This paper investigates the design of a robust state-feedback controller for a class of uncertain Markovian jump nonlinear systems that guarantees an L2-gain from the exogenous input to the regulated output of at most a prescribed value. First, this class of uncertain Markovian jump nonlinear systems is approximated by a class of uncertain Takagi-Sugeno fuzzy models with Markovian jumps. Then, LMI-based sufficient conditions for the uncertain Markovian jump nonlinear systems to achieve an H∞ performance are derived. An illustrative example demonstrates the effectiveness of the proposed design technique.
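For reference, the prescribed L2-gain requirement mentioned above is conventionally written as the following disturbance-attenuation inequality (standard form, with γ the prescribed bound, w the exogenous input, and z the regulated output; this is the textbook statement, not taken from the paper):

```latex
% For zero initial state and all w \in L_2[0,\infty):
\int_0^{\infty} z^{\top}(t)\, z(t)\, dt \;\le\; \gamma^{2} \int_0^{\infty} w^{\top}(t)\, w(t)\, dt
```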

Study on the Derivatization Process Using N,O-bis(trimethylsilyl)trifluoroacetamide, N-(tert-butyldimethylsilyl)-N-methyltrifluoroacetamide, and Trimethylsilyldiazomethane for the Determination of Fecal Sterols by Gas Chromatography-Mass Spectrometry

Fecal sterols have been proposed as chemical indicators of human fecal pollution even when fecal coliform populations have diminished owing to water chlorination or the toxic effects of industrial effluents. This paper describes an improved derivatization procedure for the simultaneous determination of four fecal sterols (coprostanol, epicholestanol, cholesterol, and cholestanol) by gas chromatography-mass spectrometry (GC-MS), via an optimization study of silylation with N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA) and N-(tert-butyldimethylsilyl)-N-methyltrifluoroacetamide (MTBSTFA), which form trimethylsilyl (TMS) and tert-butyldimethylsilyl (TBS) derivatives, respectively. Two derivatization processes, injection-port derivatization and water-bath derivatization (60 °C, 1 h), were examined and compared. Furthermore, methylation at 25 °C for 2 h with trimethylsilyldiazomethane (TMSD) was also studied for fecal sterol analysis. Most of the TMS derivatives showed the highest sensitivities, followed by the methylated derivatives. For BSTFA or MTBSTFA, the simple injection-port derivatization achieved the same efficiency as the tedious water-bath procedure.

A Growing Neural Gas Approach for Evaluating the Quality of Software Modules

Predicting software quality during the development life cycle helps the development organization make efficient use of available resources to produce a product of the highest quality. A "whether a module is faulty or not" formulation can be used to predict the quality of a software module. A number of software quality prediction models in the literature are based on genetic algorithms, artificial neural networks, and other data mining algorithms. One promising direction for quality prediction is clustering: most clustering-based prediction models use the K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas, or fuzzy K-means algorithms. All of these techniques require a predefined structure, that is, the number of neurons or clusters must be known before the clustering process starts. Growing Neural Gas, in contrast, needs no predetermined number of neurons or topology: it starts from a minimal structure that grows during training until a user-defined limit on the number of clusters is reached. Hence, in this work we use Growing Neural Gas as the underlying clustering algorithm: it produces an initial set of labeled clusters from the training data, and this set of clusters is then used to predict the quality of software modules in the test data. The best testing results show 80% accuracy in evaluating the quality of software modules, so the proposed technique can be used by programmers to evaluate module quality during software development.

Design of Liquids Mixing Control System using Fuzzy Time Control Discrete Event Model for Industrial Applications

This paper presents a time-controlled liquid-mixing system for tanks as an application of a fuzzy time-control discrete-event model. The system is designed for a wide range of industrial applications. The simulated control system has three inputs (volume, viscosity, and product selection) along with three external control adjustments for system calibration or for taking over control of the system autonomously in a local or distributed environment. There are four controlling elements, each with a time-frame limit: a rotary motor, a grinding motor, heating and cooling units, and valve selection. Three controlled variables are measured through the system's sensing mechanism for feedback control. The design also enables the system to grind certain materials in the tanks and mix them with fluids in a temperature-controlled environment to achieve a required viscosity. The design of the fuzzifier, inference engine, rule base, defuzzifier, and discrete-event control system is discussed. Time-control fuzzy rules are formulated, applied, and tested in a MATLAB simulation of the system.
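The fuzzifier/defuzzifier stages named above can be sketched as follows, assuming triangular membership functions, Mamdani min-inference, and centre-of-gravity defuzzification; the output universe (a mixing time in seconds) and rule strengths are hypothetical, not taken from the paper's rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def centroid_defuzzify(x, mu):
    """Centre-of-gravity defuzzification over a sampled output universe."""
    return np.sum(x * mu) / np.sum(mu)

# Hypothetical output universe: mixing time 0..60 s, sampled finely.
x = np.linspace(0.0, 60.0, 601)
# Two fired rules, clipped (Mamdani min-inference) at strengths 0.3 and 0.7:
# "short mixing" fired weakly, "long mixing" fired strongly.
mu = np.maximum(np.minimum(tri(x, 0, 15, 30), 0.3),
                np.minimum(tri(x, 30, 45, 60), 0.7))
t = centroid_defuzzify(x, mu)   # crisp mixing time, biased toward 45 s
```

The crisp value `t` is what a timer element in the discrete-event layer would then enforce as the time-frame limit.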

Effect of Process Parameters on the Proximate Composition, Functional and Sensory Properties of Extruded Meat Analogue

Flour from Mucuna beans (Mucuna pruriens) was used to produce a texturized meat analogue with a single-screw extruder, in order to monitor changes in proximate composition and functional properties at high moisture levels. Response surface methodology based on a Box-Behnken design was used with three levels of barrel temperature (110, 120, 130 °C), screw speed (100, 120, 140 rpm), and feed moisture (44, 47, 50%) in 17 runs. Regression models describing the effect of the variables on the product responses were obtained. Descriptive profile analyses and a consumer acceptability test were carried out on the optimized flavoured extruded meat analogue. Responses were affected mostly by barrel temperature and moisture level, and to a lesser extent by screw speed. Optimization based on the desirability concept indicated that a barrel temperature of 120.15 °C, feed moisture of 47%, and screw speed of 119.19 rpm would produce a meat analogue with preferable proximate composition and functional and sensory properties, reflecting consumers' liking for the product.

A Computational Model of Minimal Consciousness Functions

Interest in human consciousness was revived in the late 20th century across several scientific disciplines. Consciousness studies involve both understanding consciousness and applying it. In this paper, a computational model of the minimum consciousness functions necessary, in the author's view, for Artificial Intelligence applications is presented, with the aim of improving how computation will be done in the future. Section I briefly describes human consciousness within the scope of this paper. Section II defines, based on the literature reviewed, a minimum set of consciousness functions to be modelled; a computational model of these functions is then presented in Section III. Section IV analyzes the model and describes its functioning in detail.

Study of Single Network Adjustment Using QOCA Software in Korea

For this study, a precision network adjustment was conducted with QOCA, the precision network adjustment software developed by the Jet Propulsion Laboratory, to perform an integrated network adjustment on the Unified Control Points managed by the National Geographic Information Institute. To this end, 275 Unified Control Points observed in 2008 were selected and a network adjustment was performed on them. The RMSE of the coordinate discrepancies, compared with the GLOBK results, was ±6.07 mm along the N axis, ±2.68 mm along the E axis, and ±6.49 mm along the U axis.

HIV Modelling - Parallel Implementation Strategies

We report on the development of a model to understand why the range of experience with HIV infection is so diverse, especially with respect to the latency period. To investigate this, an agent-based approach is used to extract high-level behaviour, which cannot be described analytically, from the set of interaction rules at the cellular level. A network of independent matrices mimics the chain of lymph nodes. Dealing with massively multi-agent systems requires major computational effort; however, parallelisation methods are a natural consequence and advantage of the multi-agent approach and are implemented, tested, and optimized here using the MPI library. Our current focus is on the various implementations of data transfer across the network. Three communication strategies are proposed and tested, showing that the most efficient approach is communication based on the natural lymph-network connectivity.
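A back-of-envelope illustration (not the paper's MPI implementation) of why exchanging agents only along the lymph-node chain is attractive: per exchange step, it needs far fewer point-to-point messages than an all-to-all strategy. The chain topology and rank count are assumptions made for the example.

```python
# Message counts per exchange step for two communication strategies over
# a chain of lymph nodes (one MPI rank per node in this hypothetical setup).
def chain_edges(n):
    """Lymph nodes connected in a chain: node i talks to i-1 and i+1."""
    return [(i, i + 1) for i in range(n - 1)]

def messages_all_to_all(n):
    """Every rank sends one message to every other rank."""
    return n * (n - 1)

def messages_neighbours(n):
    """One message each way per physical lymph-network connection."""
    return 2 * len(chain_edges(n))

n = 16
saving = messages_all_to_all(n) / messages_neighbours(n)
# For 16 nodes: 240 messages all-to-all vs 30 along the chain.
```

The saving grows linearly with the number of nodes, which is consistent with the connectivity-based strategy being the most efficient of the three tested.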

The Influence of Electrode Heating on the Force Generated on a High-Voltage Capacitor with Asymmetrical Electrodes

When a high DC voltage is applied to a capacitor with strongly asymmetrical electrodes, it generates a mechanical force acting on the whole capacitor. This is caused by the motion of ions generated around the smaller of the two electrodes and their subsequent interaction with the surrounding medium. If one of the electrodes is heated, the conditions around the capacitor change and the ionisation process is affected, thus changing the magnitude of the generated force. This paper describes these changes and explains their causes. Experimental results are then given as evidence for the ionic mechanism of the phenomenon.

A Design of an Electronically Tunable Voltage-Mode Universal Filter with High Input Impedance

This article presents a voltage-mode universal biquadratic filter that simultaneously realizes three standard functions (low-pass, high-pass, and band-pass), employing a differential difference current conveyor (DDCC) and current-controlled current conveyors (CCCIIs) as active elements. The features of the circuit are that the quality factor and pole frequency can be tuned independently via the input bias currents, and that the circuit description is very simple, consisting of one DDCC, two CCCIIs, two electronic resistors, and two grounded capacitors. Requiring no component-matching conditions, the proposed circuit is well suited to development into an integrated circuit. PSPICE simulation results are presented and agree well with the theoretical predictions.
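For reference, the three standard biquadratic transfer functions realized by such a filter, in textbook form with pole frequency ω0 and quality factor Q (these are the generic forms, not the paper's circuit equations):

```latex
H_{LP}(s) = \frac{\omega_0^{2}}{D(s)}, \qquad
H_{BP}(s) = \frac{(\omega_0/Q)\, s}{D(s)}, \qquad
H_{HP}(s) = \frac{s^{2}}{D(s)}, \qquad
D(s) = s^{2} + \frac{\omega_0}{Q}\, s + \omega_0^{2}
```

Independent tunability means the bias currents set ω0 and Q without affecting each other.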

Construction of Water Electrolyzer for Single Slice O2/H2 Polymer Electrolyte Membrane Fuel Cell

In the first part of this work, an electrolyzer (10.16 cm diameter, 24.13 cm height) producing hydrogen and oxygen was constructed for a single-slice O2/H2 fuel cell using a cation exchange membrane. The electrolyzer was tested with 23% NaOH, 30% NaOH, 30% KOH, and 35% KOH electrolyte solutions (1.5 L) at a current input of 4 A and 2.84 V from the rectifier. The hydrogen production rates were 0.159, 0.155, 0.169, and 0.163 cm3/s, and the oxygen production rates were 0.212, 0.201, 0.227, and 0.219 cm3/s, respectively, for the 23% NaOH, 30% NaOH, 30% KOH, and 35% KOH solutions. Although higher electrolyte concentrations were tested, the gas production rate did not change significantly, so the inexpensive 23% NaOH solution was chosen as the electrolyte. In the second part of the work, graphite serpentine flow plates, fiberglass end plates, stainless steel screen electrodes, and silicone rubbers were made to assemble the single-slice O2/H2 polymer electrolyte membrane fuel cell (PEMFC).
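As a sanity check on the reported gas rates, Faraday's law gives the theoretical ceilings for a 4 A cell at 100% current efficiency; the assumed molar volume corresponds roughly to 25 °C and 1 atm (an assumption, since the measurement conditions are not stated). Measured rates below these ceilings reflect collection losses and non-ideal efficiency.

```python
# Theoretical (100% Faradaic efficiency) gas production rates at 4 A.
F = 96485.0          # Faraday constant, C/mol
I = 4.0              # cell current, A
VM = 24465.0         # ideal-gas molar volume, cm^3/mol (assumed 25 degC, 1 atm)

h2_rate = I / (2 * F) * VM   # H2: 2 electrons per molecule, ~0.507 cm^3/s
o2_rate = I / (4 * F) * VM   # O2: 4 electrons per molecule, ~0.254 cm^3/s
```

Because both ceilings are set by the current alone, the near-identical rates across electrolyte concentrations are consistent with the cell being current-limited, which supports choosing the cheapest electrolyte.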

Organization Model of Semantic Document Repository and Search Techniques for Studying Information Technology

Nowadays, organizing a repository of documents and learning resources for a specialized field such as Information Technology (IT), together with search techniques based on domain knowledge or document content, is an urgent practical need in teaching, learning, and research. Several works have addressed methods of organization and content-based search, but their results remain too limited to meet users' demand for semantic document retrieval. This paper presents a solution for organizing a repository that supports semantic representation and processing in search. The proposed solution is a model integrating an ontology describing domain knowledge, a database for the document repository, semantic representations of documents, and a file system, together with semantic processing techniques and advanced search techniques based on measuring semantic similarity. The solution was applied to build an IT learning-materials management system for a university, with a semantic search function serving students, teachers, and managers alike. The application has been implemented and tested at the University of Information Technology, Ho Chi Minh City, Vietnam, with good results.
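A minimal sketch of similarity-based ranking of the kind such a search component might build on (a plain bag-of-words cosine measure, not the paper's ontology-based semantic similarity); the documents and query are hypothetical.

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["ontology based semantic search",
        "file system storage layout",
        "semantic similarity search over ontology concepts"]
query = "semantic search ontology"
ranked = sorted(docs, key=lambda d: cosine_sim(query, d), reverse=True)
```

An ontology-aware measure would additionally score matches between different terms that denote related concepts, which is precisely what lifts retrieval from keyword search to semantic search.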

Estimation of Load Impedance in Presence of Harmonics

This paper presents a fast and efficient on-line technique for estimating the impedance of unbalanced loads in power systems. The technique applies a discrete-time dynamic filter based on stochastic estimation theory, which is suitable for estimating parameters in noisy environments. The algorithm uses sets of digital samples of the distorted voltage and current waveforms of the non-linear load to estimate the harmonic contents of these two signals; the non-linear load impedance is then calculated from these contents. The method is tested on practical data, and results are reported and compared with those obtained using the conventional least-error-squares technique. In addition to producing very accurate results, the method can detect and reject bad measurements, an important advantage over conventional static estimation methods such as least error squares.
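The two steps described can be sketched as follows, assuming a plain batch least-squares harmonic fit in place of the paper's stochastic dynamic filter: estimate the voltage and current phasors at each harmonic from the samples, then divide them to get the per-harmonic load impedance. The signal parameters below are synthetic.

```python
import numpy as np

def harmonic_phasors(samples, t, f0, harmonics):
    """Least-squares fit of cosine/sine amplitudes at the given harmonic
    orders; returns complex phasors A_h = C_h - j*S_h, so that
    |A_h| is the amplitude and angle(A_h) the phase of each harmonic."""
    cols = []
    for h in harmonics:
        w = 2 * np.pi * h * f0
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return {h: coef[2 * k] - 1j * coef[2 * k + 1]
            for k, h in enumerate(harmonics)}

# Hypothetical distorted load waveforms: 50 Hz fundamental plus 3rd harmonic.
t = np.arange(0.0, 0.2, 1e-4)
v = 100 * np.cos(2 * np.pi * 50 * t) + 10 * np.cos(2 * np.pi * 150 * t)
i = 5 * np.cos(2 * np.pi * 50 * t - 0.5) + 1 * np.cos(2 * np.pi * 150 * t - 0.9)
V = harmonic_phasors(v, t, 50, [1, 3])
I = harmonic_phasors(i, t, 50, [1, 3])
Z = {h: V[h] / I[h] for h in V}   # per-harmonic load impedance
```

A dynamic (e.g. Kalman-type) filter would update these estimates recursively per sample, and its innovation statistics are what allow bad measurements to be detected and rejected.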

Quantum Computing: A New Era of Computing

Nature conducts its actions in a very private manner, and classical science has made great efforts to reveal them. But classical science can experiment only with what can be observed directly; beyond its scope, quantum science works very well. Quantum computing is based on postulates such as the qubit, the superposition of states, entanglement, measurement, and the evolution of states, which are briefly described in this paper. One application of quantum computing, the implementation of a novel quantum evolutionary algorithm (QEA) to automate the timetabling problem of the Dayalbagh Educational Institute (Deemed University), is also presented. Constructing a good timetable is a scheduling problem: it is an NP-hard, multi-constrained, complex combinatorial optimization problem for which no polynomial-time solution is known. The QEA applies genetic operators to the Q-bits as well as a quantum-gate updating operator, introduced as a variation operator, to converge toward better solutions.
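The Q-bit machinery can be sketched as follows (a generic QEA fragment, not the authors' implementation): each bit is an amplitude pair (α, β) with α² + β² = 1, observation collapses it to a classical bit with P(1) = β², and a rotation gate nudges the amplitudes toward the best solution found so far. Here a fixed target bitstring stands in for the best-evaluated timetable; in a real QEA it would come from a fitness function scoring constraint violations.

```python
import math, random

def observe(q):
    """Collapse each Q-bit (alpha, beta) to a classical bit: P(1) = beta^2."""
    return [1 if random.random() < b * b else 0 for _, b in q]

def rotate(q, best, delta=0.05):
    """Rotation-gate update: nudge each Q-bit's amplitudes toward the
    corresponding bit of the best solution found so far."""
    out = []
    for (a, b), bit in zip(q, best):
        theta = math.atan2(b, a) + (delta if bit == 1 else -delta)
        theta = min(max(theta, 0.0), math.pi / 2)   # keep amplitudes >= 0
        out.append((math.cos(theta), math.sin(theta)))
    return out

random.seed(0)
n = 8
q = [(1 / math.sqrt(2), 1 / math.sqrt(2))] * n   # uniform superposition
target = [1, 0, 1, 1, 0, 0, 1, 0]                # stand-in "best timetable"
for _ in range(60):
    q = rotate(q, target)
p_ones = [b * b for _, b in q]                   # P(bit = 1) per position
```

After enough rotations the population of observations concentrates on the best solution, which is the convergence mechanism the abstract refers to.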

Regional Analysis of Streamflow Drought: A Case Study for Southwestern Iran

Droughts are complex natural hazards that, to a varying degree, affect some part of the world every year. The range of drought impacts is related to drought occurring at different stages of the hydrological cycle, and usually different types of drought are distinguished: meteorological, agricultural, hydrological, and socioeconomic. Streamflow drought was analyzed by the truncation-level method (at the 70% level) on daily discharges measured at 54 hydrometric stations in southwestern Iran. Frequency analysis was carried out for the annual maximum series (AMS) of drought deficit volume and duration. Physiographic, climatic, geologic, and vegetation-cover factors were studied as influential factors in the regional analysis. According to the results of factor analysis, the six most effective factors were identified as area, rainfall from December to February, the percent of area with Normalized Difference Vegetation Index (NDVI)
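The truncation-level method can be sketched as follows, under the common convention that the 70% level means Q70, the discharge exceeded 70% of the time (i.e. the 30th percentile of the flow record); the daily series below is synthetic.

```python
import numpy as np

def drought_events(q, threshold):
    """Threshold-level (truncation) method: a drought event is a maximal
    run of days with discharge below the threshold. Returns a list of
    (duration_days, deficit_volume) pairs, with deficit volume the sum
    of daily shortfalls below the threshold."""
    events, dur, vol = [], 0, 0.0
    for qi in q:
        if qi < threshold:
            dur += 1
            vol += threshold - qi
        elif dur:
            events.append((dur, vol))
            dur, vol = 0, 0.0
    if dur:
        events.append((dur, vol))
    return events

# Hypothetical one-year daily discharge record.
rng = np.random.default_rng(0)
q = rng.lognormal(mean=2.0, sigma=0.6, size=365)
q70 = np.percentile(q, 30)          # Q70 truncation level
events = drought_events(q, q70)
max_duration = max(d for d, _ in events)
max_deficit = max(v for _, v in events)
```

The annual maxima of duration and deficit volume extracted this way are exactly the AMS on which the frequency analysis would be performed.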

Generalized Differential Quadrature Nonlinear Consolidation Analysis of Clay Layer with Time-Varied Drainage Conditions

In this article, the phenomenon of nonlinear consolidation in a saturated, homogeneous clay layer is studied. Considering a time-varied drainage model, the excess pore water pressure through the layer depth is calculated. The Generalized Differential Quadrature (GDQ) method is used for the modeling and numerical analysis. For the analysis, the domain of the independent variables (time and clay-layer depth) is first discretized by the Chebyshev-Gauss-Lobatto series, and the nonlinear system of equations obtained from the GDQ method is then solved by the Newton-Raphson approach. The results indicate that the GDQ method, in addition to being simple to apply, achieves very high accuracy in calculating the excess pore water pressure.
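A minimal sketch of the GDQ building blocks named above: Chebyshev-Gauss-Lobatto nodes and the first-derivative weighting coefficients in the standard product formulation (Shu's GDQ), checked on a polynomial, which differential quadrature reproduces exactly at the nodes. This illustrates the discretization only, not the paper's consolidation equations.

```python
import numpy as np

def cgl_points(n):
    """Chebyshev-Gauss-Lobatto points on [-1, 1], in ascending order."""
    return -np.cos(np.pi * np.arange(n) / (n - 1))

def dq_matrix(x):
    """GDQ first-derivative weighting coefficients:
    a_ij = P(x_i) / ((x_i - x_j) P(x_j)) for i != j, where
    P(x_i) = prod_{k != i} (x_i - x_k); the diagonal follows from the
    rows summing to zero (derivative of a constant is zero)."""
    n = len(x)
    P = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
                  for i in range(n)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = P[i] / ((x[i] - x[j]) * P[j])
        A[i, i] = -np.sum(A[i])
    return A

x = cgl_points(9)
A = dq_matrix(x)
# Exact for polynomials up to degree n-1: d/dx (x^3) = 3 x^2 at the nodes.
err = np.max(np.abs(A @ x**3 - 3 * x**2))
```

In the consolidation problem, applying such matrices in both time and depth turns the nonlinear PDE into the algebraic system that Newton-Raphson then solves.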

The Role Played by Swift Change of the Stability Characteristic of Mean Flow in Bypass Transition

The scenario of bypass transition is generally described as follows: low-frequency disturbances in the free stream may generate long streamwise streaks in the boundary layer, which may later trigger secondary instability, leading to a rapid increase of high-frequency disturbances; turbulent spots then possibly emerge and, through their merging, lead to fully developed turbulence. This description, however, is insufficient, because it does not provide the inherent mechanism of transition: during transition, a large number of waves with different frequencies and wave numbers appear almost simultaneously, producing a Reynolds stress large enough that the mean flow profile changes rapidly from laminar to turbulent. In this paper, such a mechanism is identified by analyzing DNS data of transition.

Decision Trees for Predicting Risk of Mortality using Routinely Collected Data

Logistic regression is widely regarded as the gold-standard method for predicting clinical outcomes, especially risk of mortality. In this paper, the decision tree method is proposed for problems that commonly use logistic regression as a solution. The Biochemistry and Haematology Outcome Model (BHOM) dataset, obtained from Portsmouth NHS Hospital for 1 January to 31 December 2001, was divided into four subsets. One subset of training data was used to generate a model, which was then applied to the three test datasets. The performance of each model from both methods was compared using calibration (the χ² goodness-of-fit test) and discrimination (area under the ROC curve, or c-index). The experiments showed that both methods give reasonable results for the c-index, although in some cases the calibration statistic (χ²) was quite high. After conducting the experiments and weighing the advantages and disadvantages of each method, we conclude that decision trees are a worthy alternative to logistic regression in the area of data mining.
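The discrimination measure used above can be computed directly: the c-index equals the probability that a randomly chosen positive case (death) receives a higher predicted risk than a randomly chosen negative case (survival). The data below are hypothetical.

```python
def c_index(y_true, scores):
    """Concordance index (equivalently, area under the ROC curve):
    the fraction of positive/negative pairs in which the positive case
    has the higher predicted risk; ties count as half a concordance."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    pairs = len(pos) * len(neg)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / pairs

# Hypothetical predicted mortality risks (from either model) and outcomes.
y = [0, 0, 1, 0, 1, 1, 0, 1]
risk = [0.1, 0.3, 0.35, 0.2, 0.8, 0.7, 0.4, 0.9]
auc = c_index(y, risk)   # 0.5 = no discrimination, 1.0 = perfect
```

Calibration is assessed separately, by comparing observed versus predicted event counts across risk groups with a χ² goodness-of-fit statistic; a model can discriminate well yet calibrate poorly, which matches the pattern the experiments report.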

Technology Trend and Level Assessment Using Patent Data for Preliminary Feasibility Studies on R&D Programs

The Korean government has applied preliminary feasibility studies to new, large R&D programs since 2008. Each study is carried out from the viewpoints of technology, policy, and economics; the separate analyses are then integrated to arrive at a definite result: whether a program is feasible or not. This paper describes the concept and method of the feasibility analysis, focused on technological viability assessment for the technical analysis. It consists of technology trend assessment and technology level assessment. Through this analysis, we can determine the chance that schedule delays or cost overruns will occur in the proposed plan.