Grid-Based Supervised Clustering (GBSC)

This paper presents a supervised clustering algorithm, Grid-Based Supervised Clustering (GBSC), which can identify clusters of arbitrary shape and size without assuming any canonical form for the data distribution. GBSC requires no prespecified number of clusters, is insensitive to the order of the input data objects, and can handle outliers. Built on a combination of grid-based and density-based clustering, and aided by the downward closure property of density used in bottom-up subspace clustering, GBSC notably reduces its search space and avoids running out of memory during execution. On two-dimensional synthetic datasets, GBSC correctly identifies clusters of different shapes and sizes. It also outperforms five other supervised clustering algorithms in experiments on several UCI datasets.
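The abstract does not reproduce the algorithm itself, but the core idea it combines (bin points into grid cells, keep cells whose density passes a threshold, merge adjacent dense cells, treat the rest as outliers) can be sketched as follows; `cell_size` and `min_pts` are illustrative parameters, not GBSC's actual ones:

```python
from collections import defaultdict, deque

def grid_density_cluster(points, cell_size=1.0, min_pts=3):
    """Toy grid/density clustering: bin 2D points into cells, keep
    dense cells, and merge 4-connected dense cells into clusters.
    Points in sparse cells are labeled -1 (outliers)."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(points):
        cells[(int(x // cell_size), int(y // cell_size))].append(i)
    dense = {c for c, idx in cells.items() if len(idx) >= min_pts}
    labels, cluster_id = {}, 0
    for cell in dense:
        if cell in labels:
            continue
        labels[cell] = cluster_id
        queue = deque([cell])
        while queue:             # flood-fill over adjacent dense cells
            cx, cy = queue.popleft()
            for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nb in dense and nb not in labels:
                    labels[nb] = labels[(cx, cy)]
                    queue.append(nb)
        cluster_id += 1
    point_labels = [-1] * len(points)
    for cell, idx in cells.items():
        if cell in dense:
            for i in idx:
                point_labels[i] = labels[cell]
    return point_labels
```

Merging only dense cells is what lets such schemes follow arbitrary cluster shapes while discarding isolated outliers.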

Interpolation of Geofield Parameters

Various methods for restoring geofield parameters (algebraic polynomials; filters; rational fractions; interpolation splines; geostatistical methods such as kriging; nearest-point search methods such as inverse distance, minimum curvature and local polynomial interpolation; and neural networks) are analyzed, and some errors that can arise during geofield surface modeling are presented.
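Of the listed methods, inverse distance weighting is the simplest to illustrate: each known sample contributes to the estimate with weight 1/distance^p. A minimal sketch (the power parameter and sample layout are hypothetical):

```python
def idw_interpolate(known, x, y, power=2.0):
    """Inverse-distance-weighted estimate of a geofield value at (x, y)
    from known samples given as (xi, yi, zi) triples."""
    num = den = 0.0
    for xi, yi, zi in known:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return zi               # exact hit on a sample point
        w = d2 ** (-power / 2.0)    # weight = 1 / distance**power
        num += w * zi
        den += w
    return num / den
```

The estimate is always bounded by the sampled values, which is one source of the modeling errors the paper discusses: IDW cannot extrapolate peaks or troughs between sparse samples.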

Incidence of Trihalomethanes in Drinking Water

Trihalomethanes are the most significant byproducts of the reaction of disinfection agents with organic precursors naturally present in ground and surface waters. Their incidence negatively affects the quality of drinking water owing to their nephrotoxic, hepatotoxic and genotoxic effects on human health. Given the considerable volatility of the monitored contaminants, it could be assumed that their incidence in drinking water depends on the distance of the sampling point from the site of disinfection. Based on trihalomethane concentrations determined by gas chromatography with a mass detector, and on analysis of variance (ANOVA), this dependence has been shown to be statistically significant. The results will be used to assess the non-carcinogenic and genotoxic risks to consumers.
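The ANOVA used here reduces to comparing between-group and within-group variability. A minimal sketch of the one-way F statistic, with concentrations grouped by (hypothetical) sampling distance:

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for k groups of measurements,
    e.g. THM concentrations grouped by distance from disinfection."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares about each group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) is what supports the paper's conclusion that concentration depends on sampling distance.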

Debye Layer Confinement of Nucleons in Nuclei by Laser Ablated Plasma

Following the laser ablation studies that led to a theory of nucleon confinement by a Debye layer mechanism, we present numerical evaluations for the known stable nuclei, in which the Coulomb repulsion enters as a rather minor component, especially for larger nuclei. We investigate the physical conditions required for the formation and stability of nuclei, particularly endothermic nuclei (those heavier than iron), whose formation is an open astrophysical question. Using the Debye layer mechanism together with the nuclear surface energy, the Fermi energy and the Coulomb repulsion energy, it is possible to find conditions under which nucleation was permitted in the early universe. Our numerical calculations indicate that about 200 seconds after the big bang, at a temperature of about 100 keV and in the subrelativistic regime, with nucleon density nearly equal to normal nuclear density (of order 10^38 cm^-3), all endothermic and exothermic nuclei had been formed.
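The confinement argument rests on two standard quantities, the Debye screening length and the Fermi energy of the nucleon gas. Their textbook forms (given here for orientation; the paper's specific confinement condition is not reproduced) are:

```latex
\lambda_D = \sqrt{\frac{\varepsilon_0 k_B T}{n\, e^{2}}},
\qquad
E_F = \frac{\hbar^{2}}{2m}\left(3\pi^{2} n\right)^{2/3},
```

where $n$ is the particle number density, $T$ the temperature and $m$ the nucleon mass. In the Debye layer picture, the balance of these energies with the surface and Coulomb terms sets the conditions under which a nucleus of a given mass number can remain bound.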

Analysis of Reflectance Photoplethysmograph Sensors

Photoplethysmography is a simple measurement of the variation of blood volume in tissue. It detects the pulse signal of the heart beat as well as the low-frequency signal of vasoconstriction and vasodilation. Transmission-type measurement is limited to a few specific positions, for example the index finger, that provide a short path length for light, whereas reflectance-type measurement can be conveniently applied to most parts of the body surface. This study analyzed the factors that determine the quality of the reflectance photoplethysmograph signal, including the emitter-detector distance, wavelength, light intensity, and the optical properties of skin tissue. Light-emitting diodes (LEDs) of four different visible wavelengths were used as the light emitters, and a phototransistor was used as the light detector. A micro translation stage adjusted the emitter-detector distance from 2 mm to 15 mm, and the reflectance photoplethysmograph signals were measured at different sites. The optimal emitter-detector distance was chosen to give a large dynamic range for low-frequency drifting without signal saturation and a high perfusion index. Among the four wavelengths, yellowish-green (571 nm) light with an emitter-detector distance of 2 mm was the most suitable for obtaining a steady and reliable reflectance photoplethysmograph signal.
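Signal quality in reflectance PPG is often summarized by the perfusion index: the pulsatile (AC) amplitude relative to the static (DC) level, as a percentage. A simplified sketch (real devices band-pass filter the trace before forming the ratio):

```python
def perfusion_index(signal):
    """Perfusion index of a PPG trace: pulsatile (AC) peak-to-peak
    amplitude over the static (DC) mean level, in percent."""
    dc = sum(signal) / len(signal)      # DC component: mean level
    ac = max(signal) - min(signal)      # AC component: peak-to-peak swing
    return 100.0 * ac / dc
```

A large emitter-detector distance lowers the detected intensity (small DC, weak AC); a very short one saturates the detector. The optimum described in the abstract trades these off to maximize this index.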

Improvement of New Government R&D Program Plans through Preliminary Feasibility Studies

As part of its evaluation system for R&D programs, the Korean Government applies preliminary feasibility studies to new government R&D program plans. The fundamental purpose of the preliminary feasibility study is to decide whether the government should invest in a new R&D program. Beyond that, the study can contribute to improving R&D program plans. Two cases of new R&D program plans subjected to such studies are explained in this paper, and these programs are expected to yield better performance than they would have without the study. The key point of the preliminary feasibility study is thus not only an effective decision-making process for R&D programs but also a genuine opportunity to improve the R&D program plans themselves.

Low Power Digital System for Reconfigurable Neural Recording System

This paper proposes a digital system for a low-power 100-channel neural recording system consisting of 100 amplifiers, 100 analog-to-digital converters (ADCs), a digital controller and baseband, and a transceiver for the data link and RF command link. The proposed system is designed in 0.18 μm and 65 nm CMOS processes.

Categorical Data Modeling: Logistic Regression Software

Matlab-based software for logistic regression has been developed to enhance the teaching of quantitative topics and to assist researchers in the wide range of applications that involve categorical data. The software offers an option to perform stepwise logistic regression to select the most significant predictors. It includes a feature for detecting influential observations in the data and for investigating the effect of dropping or misclassifying an observation on a predictor variable. The input data may be supplied either as a set of individual (yes/no) responses with the predictor variables or as grouped records summarizing the categories for each unique combination of predictor values. Graphical displays present the various statistical results and help assess the goodness of fit of the logistic regression model. The software also recognizes possible convergence problems present in the data and notifies the user accordingly.
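The described software is Matlab-based; as a language-neutral illustration of the underlying fit, a one-predictor logistic model can be estimated by gradient ascent on the log-likelihood (learning rate, step count and the toy data are arbitrary choices, not the software's algorithm, which may use Newton-type iterations):

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Fit p = sigmoid(b0 + b1*x) to 0/1 responses by batch gradient
    ascent on the Bernoulli log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p           # gradient w.r.t. intercept
            g1 += (y - p) * x     # gradient w.r.t. slope
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1
```

Note that with perfectly separable data (as in any tiny example) the maximum-likelihood coefficients diverge, which is exactly the kind of convergence problem the software is said to detect and report.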

Applying Gibbs Sampler for Multivariate Hierarchical Linear Model

Among the various HLM techniques, the Multivariate Hierarchical Linear Model (MHLM) is desirable, particularly when multivariate criterion variables are collected and the covariance structure carries information valuable for the analysis. To reflect prior information, or to obtain stable results when the sample size and the number of groups are not sufficiently large, Bayesian methods have often been employed in hierarchical data analysis. Although the Markov chain Monte Carlo (MCMC) method is a powerful tool for parameter estimation in these cases, MCMC procedures had not been formulated for the MHLM. This research therefore presents concrete procedures for parameter estimation using Gibbs samplers. Finally, several future topics in the use of MCMC approaches for HLM are discussed.
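A Gibbs sampler alternates draws from each parameter's full conditional distribution. The MHLM conditionals are involved; a toy stand-in on a standard bivariate normal shows the mechanics (the target, correlation `rho`, iteration counts and seed are all illustrative):

```python
import random

def gibbs_bivariate_normal(rho, n_iter=20000, burn=1000, seed=1):
    """Gibbs sampling from a standard bivariate normal with
    correlation rho, alternating the two full conditionals
    x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2)."""
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x = y = 0.0
    draws = []
    for t in range(n_iter):
        x = rng.gauss(rho * y, sd)   # draw x from its full conditional
        y = rng.gauss(rho * x, sd)   # draw y from its full conditional
        if t >= burn:                # discard burn-in draws
            draws.append((x, y))
    return draws
```

In the MHLM case the same alternation runs over regression coefficients and covariance matrices (normal and inverse-Wishart conditionals), but the burn-in/collect structure is identical.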

Application of Feed Forward Neural Networks in Modeling and Control of a Fed-Batch Crystallization Process

This paper focuses on nonlinear dynamic process modeling and model-based predictive control of a fed-batch sugar crystallization process, using artificial neural networks as computational tools. The control objective is to force the operation to follow an optimal supersaturation trajectory; this is achieved by manipulating the feed flow rate of sugar liquor/syrup, considered as the control input. A feed-forward neural network (FFNN) model of the process is first built as part of the controller structure to predict the process response over a specified prediction horizon. The predictions are supplied to an optimization procedure that determines the control actions over a specified control horizon by minimizing a predefined performance index. The control task is challenging due to the strong nonlinearity of the process dynamics and variations in the crystallization kinetics. Nevertheless, the simulation results demonstrate smooth control actions and satisfactory reference tracking.
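The predict-then-optimize loop described above can be sketched generically: a process model (standing in for the trained FFNN) is rolled out over the prediction horizon for each candidate control value, and the candidate minimizing the tracking cost is applied. The exhaustive search over a candidate grid is a deliberate simplification of the paper's optimization procedure:

```python
def mpc_step(model, state, ref, candidates, horizon=5):
    """One move of a simple model-predictive controller: simulate each
    candidate (constant) feed rate over the prediction horizon with the
    one-step process model and pick the cheapest in squared tracking
    error against the reference trajectory value."""
    best_u, best_cost = None, float("inf")
    for u in candidates:
        s, cost = state, 0.0
        for _ in range(horizon):
            s = model(s, u)          # roll the model forward one step
            cost += (s - ref) ** 2   # accumulate tracking error
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u
```

At each sampling instant only the first move is applied and the optimization repeats with the newly measured state, which is what gives receding-horizon control its feedback character.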

An Experimental Investigation of the Effect of Deep Cold Rolling Parameters on the Surface Roughness and Hardness of AISI 4140 Steel

Deep cold rolling (DCR) is a cold-working process that readily produces a smooth, work-hardened surface through plastic deformation of surface irregularities. In the present study, the influence of the main deep cold rolling process parameters on the surface roughness and hardness of AISI 4140 steel was studied using a fractional factorial design of experiments. Surface integrity was assessed in terms of identifying the predominant factor among the selected parameters, their order of significance, and the factor levels that minimize surface roughness and/or maximize surface hardness. Ball diameter, rolling force, initial surface roughness and number of tool passes were found to be the most pronounced parameters, with great effect on the workpiece surface during deep cold rolling. A simple, inexpensive, newly developed DCR tool with an interchangeable collet accommodating different ball diameters was used throughout the experimental work presented in this paper.
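A fractional factorial design of the kind used here studies k two-level factors in a fraction of the full 2^k runs. A sketch of a 2^(4-1) half fraction with generator D = ABC, plus a simple main-effect estimate (the column labels A-D are only stand-ins for the paper's four parameters, and the design actually used may differ):

```python
from itertools import product

def half_fraction_design():
    """2^(4-1) fractional factorial: a full 2^3 design in factors
    A, B, C, with the fourth column generated as D = A*B*C."""
    return [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]

def main_effect(runs, responses, col):
    """Main effect of a factor: mean response at its +1 level
    minus mean response at its -1 level."""
    hi = [y for r, y in zip(runs, responses) if r[col] == 1]
    lo = [y for r, y in zip(runs, responses) if r[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)
```

The half fraction needs only 8 of the 16 full-factorial runs, at the cost of aliasing D with the ABC interaction; ranking the absolute main effects is how the predominant factor and order of significance are identified.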

Antioxidant and Antimicrobial Properties of Peptides as Bioactive Components in Beef Burger

Dried soy protein hydrolysate powder was added to beef burgers to enhance oxidative stability and reduce microbial spoilage. The soybean bioactive compounds (soy protein hydrolysate), acting as antioxidants and antimicrobials, were added at levels of 1, 2 and 3%. Chemical analysis and physical properties were affected by the protein hydrolysate addition. TBA values were significantly affected (P < 0.05) by the storage period and the level of soy protein hydrolysate. All the tested soy protein hydrolysate additives showed strong antioxidant properties, and the hydrolysate-treated samples showed the lowest (P < 0.05) TBA values at each storage time. The counts of all the determined microbiological indicators were significantly (P < 0.05) affected by the addition of the soy protein hydrolysate, with decreasing trends of varying extent observed in the treated samples for total viable counts, coliforms, Staphylococcus aureus, and yeasts and molds. Storage period also significantly (P < 0.05) affected the microbial counts in all samples. Staphylococcus aureus was the most sensitive microbe, followed by the coliform group, in the samples containing protein hydrolysate, while mold and yeast counts showed a decreasing but non-significant trend until the end of the storage period compared with the control sample. Sensory evaluation was also performed; the added protein hydrolysate imparted a beany flavor, which was most evident in the samples with 3% protein hydrolysate.

A Mark-Up Approach to Add Value

This paper presents a mark-up approach to service creation in Next Generation Networks. The approach derives added value from network functions exposed through Parlay/OSA (Open Service Access) interfaces. With OSA interfaces, service logic scripts may be executed on both call-related and call-unrelated events. To illustrate the approach, XML-based language constructions for data and method definitions, flow control, time measurement and supervision, and database access are given, and an example OSA application is considered.

Support Vector Machine Approach for Classification of Cancerous Prostate Regions

The objective of this paper is to apply the support vector machine (SVM) approach to the classification of cancerous and normal regions of prostate images. Three kinds of textural features are extracted and used for the analysis: parameters of the Gauss-Markov random field (GMRF), the correlation function and relative entropy. Prostate images are acquired by a system consisting of a microscope, a video camera and a digitizing board. Cross-validated classification over a database of 46 images is used to evaluate performance. With SVM classification, sensitivity and specificity of 96.2% and 97.0%, respectively, are achieved for 32x32-pixel block data, with an overall accuracy of 96.6%. Classification performance is compared with artificial neural network and k-nearest neighbor classifiers, and the experimental results demonstrate that the SVM approach gives the best performance.
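The reported figures follow from the standard confusion-matrix definitions, which can be stated in a few lines (the counts used in any example are hypothetical, not the paper's):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and overall accuracy from confusion
    counts: tp/fn on the positive (cancerous) class, tn/fp on the
    negative (normal) class."""
    sensitivity = tp / (tp + fn)              # true positive rate
    specificity = tn / (tn + fp)              # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy
```

Reporting sensitivity and specificity separately matters in this setting because the cost of missing a cancerous block (1 - sensitivity) is not symmetric with flagging a normal one (1 - specificity).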

Project Management Success for Contractors

The aim of this paper is to provide a better understanding of the implementation of project management practices by UiTM contractors to ensure project success. A questionnaire survey was administered to 120 UiTM contractors in Malaysia to gather information on the contractors' project backgrounds and project management skills. It was found that all of the contractors had a basic knowledge and understanding of project management skills. The results suggest that a reasonable project plan and an appropriate organizational structure are influential factors for project success, and it is recommended that contractors maintain an effective programme of work and an up-to-date information system.

The Traffic Prediction Multi-path Energy-aware Source Routing (TP-MESR) in Ad hoc Networks

The purpose of this study is to propose energy-efficient routing for ad hoc networks, which are composed of nodes with limited energy. Among the diverse problems arising from this limited energy supply, node energy management has received particular attention, and a number of protocols have been proposed for energy conservation and efficiency. This study improves on the EA-MPDSR, an energy-efficient routing scheme whose critical limitation is that it uses only two paths. The proposed TP-MESR combines a multi-path routing technique with a traffic prediction function to increase the number of paths beyond two, and its efficiency is verified against EA-MPDSR using the NS-2 network simulator. To give the work academic rigor and to describe the protocol systematically, the research guidelines suggested by Hevner (2004) are applied. TP-MESR solves the overhead, radio interference and packet reassembly problems of existing multi-path routing, and its contribution to the effective use of energy in ad hoc networks is confirmed.
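TP-MESR's actual selection rule is not given in the abstract; a simplified stand-in for multi-path, energy-aware selection is to enumerate candidate source routes and rank them by bottleneck residual node energy, keeping the top k for load sharing (the graph, energies and k below are illustrative):

```python
def find_paths(graph, src, dst, path=None):
    """Enumerate simple paths src -> dst in an adjacency-dict graph."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    paths = []
    for nxt in graph.get(src, ()):
        if nxt not in path:               # avoid loops
            paths.extend(find_paths(graph, nxt, dst, path))
    return paths

def top_energy_paths(graph, energy, src, dst, k=3):
    """Keep the k paths whose weakest node has the most residual
    energy: a route dies when its most depleted node does."""
    paths = find_paths(graph, src, dst)
    paths.sort(key=lambda p: min(energy[n] for n in p), reverse=True)
    return paths[:k]
```

Spreading predicted traffic across several such paths, rather than exhausting a single best route, is the intuition behind using more than two paths.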

FPGA Implementation of Generalized Maximal Ratio Combining Receiver Diversity

In this paper, we study the FPGA implementation of a novel supra-optimal receiver diversity combining technique, generalized maximal ratio combining (GMRC), for wireless transmission over fading channels in SIMO systems. Previously published results for ML-detected GMRC diversity with BPSK signaling showed superior bit-error-rate performance to the widely used MRC scheme under imperfect channel estimation (ICE); under perfect channel estimation, the performance of GMRC and MRC is identical. The main drawback of the earlier GMRC study was that it was purely theoretical, so a successful FPGA implementation using pipeline techniques is needed as a wireless communication test-bed for practical, real-life situations. Simulation results show that the hardware implementation is efficient in terms of both speed and area. Since diversity combining is especially effective in small femto- and picocells, internet-connected wireless peripheral systems stand to benefit most from GMRC, and many spin-off applications can be made to the hardware of IP-based 4th-generation networks.
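GMRC generalizes classical MRC, whose combining rule weights each diversity branch by the conjugate of its channel estimate so that branch contributions add coherently. A minimal MRC sketch for orientation (GMRC's exact weighting is in the cited work and is not reproduced here):

```python
def mrc_combine(h, r):
    """Maximal ratio combining: weight each received branch sample r_i
    by the conjugate of its channel estimate h_i and sum.  With perfect
    estimates this yields sum(|h_i|^2) * s for a transmitted symbol s."""
    return sum(hi.conjugate() * ri for hi, ri in zip(h, r))
```

For BPSK, the sign of the real part of the combined sample gives the ML symbol decision; it is the sensitivity of the weights h_i to estimation error that opens the performance gap GMRC exploits under ICE.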

Modeling and Analysis of the Effects of Nephrolithiasis in Kidney Using a Computational Tactile Sensing Approach

Motivated by the tactile sensing and palpation a surgeon uses to detect kidney stones during open surgery, we present a two-dimensional model of nephrolithiasis (a 2D model of a kidney containing a simulated stone) and determine the effects of the stone that appear on the kidney surface when a mechanical load is applied. Using the finite element method, it is shown that the stress patterns created on the surface of the kidney, and the corresponding stress graphs, not only reveal the existence of a stone inside the kidney but also indicate its exact location.

Control and Simulation of FOPDT Food Processes with Constraints Using a PI Controller

The most common controller in industry is the PI(D) controller, in use since 1945 and still widely applied owing to its efficiency and simplicity. In most cases, however, the PI(D) controller is tuned without taking actuator saturation into account. In real processes, the most common actuator, a valve, acts as a constraint and restricts the controller output. Since the controller is not designed to handle saturation, the integral term may wind up, resulting in large oscillations or even instability. Usually, an anti-windup compensator is added to the feedback control loop to reduce the deterioration caused by integral windup. This research aims specifically at controlling processes with such constraints. The proposed method was applied to two different food processes, blending and spray drying. Simulations were performed in MATLAB, and the performance of the proposed method was compared with conventional methods. The proposed technique was able to control the processes and avoid saturation, so that no anti-windup compensator was needed.
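Back-calculation is one common anti-windup scheme of the kind referred to above: when the actuator saturates, the difference between the saturated and unsaturated controller outputs bleeds the integrator so it cannot wind up. A sketch on a discretized FOPDT (first-order plus dead time) process; all gains and process constants are illustrative, not the paper's blending or spray-drying models:

```python
def simulate_pi_fopdt(kp, ki, sp=1.0, u_max=0.6, kaw=1.0,
                      K=2.0, tau=5.0, delay=3, dt=0.1, steps=2000):
    """PI control of a discretized FOPDT process y' = (K*u_d - y)/tau
    with a dead time of `delay` samples, actuator saturation at
    +/- u_max, and back-calculation anti-windup with gain kaw."""
    y, integ = 0.0, 0.0
    buf = [0.0] * delay                    # dead-time buffer on the input
    ys = []
    for _ in range(steps):
        e = sp - y
        u = kp * e + ki * integ            # raw PI output
        u_sat = max(-u_max, min(u_max, u)) # valve constraint
        # back-calculation: the (u_sat - u) term drains the integrator
        # whenever the actuator is saturated
        integ += dt * (e + kaw * (u_sat - u))
        buf.append(u_sat)
        u_d = buf.pop(0)                   # delayed input reaches the plant
        y += dt * (K * u_d - y) / tau      # Euler step of the FOPDT model
        ys.append(y)
    return ys
```

With kaw = 0 the same loop exhibits the classic windup overshoot: the integrator keeps accumulating while the valve is pinned at its limit and must unwind after the setpoint is crossed.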

Limiting Fiber Extensibility as Parameter for Damage in Venous Wall

An inflation–extension test on a human vena cava inferior was performed with the aim of fitting a material model. The vein was modeled as a thick-walled tube loaded by internal pressure and axial force. The material was assumed to be an incompressible hyperelastic fiber-reinforced continuum, with fibers arranged in two families of anti-symmetric helices; the resulting anisotropy corresponds to local orthotropy. The strain energy density function used was based on the concept of limiting fiber extensibility. The pressurization comprised four pre-cycles under physiological venous loading (0–4 kPa) and four cycles under nonphysiological loading (0–21 kPa), with each overloading cycle performed under a different axial weight. The overloading data were used in a regression analysis to fit the material model. The model did not fit the experimental data well; in particular, the predictions of axial force failed. It was hypothesized that, owing to the nonphysiological loading pressures and the varying axial weights, the material was not sufficiently preconditioned and some damage occurred inside the wall. The limiting fiber extensibility parameter Jm was assumed to be related to this supposed damage. Each overloading cycle was therefore fitted separately with a different value of Jm, while all other parameters were held fixed. This approach turned out to be successful: a variable Jm can describe the changes in the axial force–axial stretch response while simultaneously satisfying the pressure–radius dependence.
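For orientation, one common Gent-type limiting-extensibility form for the fiber contribution to the strain energy is shown below; this is a representative expression from the limiting-extensibility literature, not necessarily the exact function fitted in the paper:

```latex
W_{\mathrm{fib}} = -\frac{\mu\, J_m}{2}\,
\ln\!\left[1 - \frac{\left(I_4 - 1\right)^{2}}{J_m}\right],
\qquad
I_4 = \mathbf{a}_0 \cdot \mathbf{C}\,\mathbf{a}_0,
```

where $\mu$ is a stiffness-like parameter, $\mathbf{C}$ the right Cauchy–Green tensor and $\mathbf{a}_0$ the referential fiber direction. The energy diverges as $(I_4 - 1)^2 \to J_m$, so $J_m$ caps the admissible fiber stretch; allowing $J_m$ to grow from cycle to cycle is consistent with interpreting it as a damage-sensitive parameter, since a damaged wall locks up at larger stretches.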