Diagnosis of Ovarian Cancer with Proteomic Patterns in Serum using Independent Component Analysis and Neural Networks

We propose a method for the discrimination and classification of ovarian tissue as benign, malignant, or normal using independent component analysis and neural networks. The method was tested on a serum proteomic pattern dataset using probabilistic and radial basis function neural networks. The best performance was obtained with probabilistic neural networks, resulting in a 99% success rate, with 98% specificity and 100% sensitivity.
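As an illustration of this kind of pipeline (not the authors' code), the sketch below extracts independent components from placeholder "spectra" and feeds them to a simple Gaussian classifier standing in for the probabilistic neural network; the data shapes, labels, and classifier choice are all assumptions.

```python
# Illustrative sketch: ICA feature extraction followed by a simple classifier
# (a stand-in for the probabilistic neural network used in the paper).
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))      # placeholder serum spectra: 200 samples x 500 m/z bins
y = rng.integers(0, 3, size=200)     # placeholder labels: 0 = normal, 1 = benign, 2 = malignant

model = make_pipeline(
    FastICA(n_components=20, random_state=0, max_iter=1000),  # unmix spectra into independent components
    GaussianNB(),                                             # stand-in classifier on the ICA features
)
print(cross_val_score(model, X, y, cv=5).mean())              # cross-validated success rate on the toy data
```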

Dynamical Analysis of Circadian Gene Expression

The microarray technique allows the simultaneous measurement of the expression levels of thousands of mRNAs. By mining these data, one can identify the dynamics of gene expression time series. Using principal component analysis (PCA), we uncover the circadian rhythmic patterns underlying the gene expression profiles of the cyanobacterium Synechocystis. We applied PCA to reduce the dimensionality of the data set; examination of the components also provides insight into the underlying factors measured in the experiments. Our results suggest that the entire rhythmic content of the data can be reduced to three main components.
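A minimal sketch of this dimensionality reduction step, on a made-up expression matrix with an injected 24-hour rhythm (the real Synechocystis data and time points are not reproduced here):

```python
# Illustrative PCA on a placeholder gene-expression time series.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
t = np.arange(48)                                                 # hypothetical 48 hourly time points
genes = rng.normal(size=(3000, 48)) + np.sin(2 * np.pi * t / 24)  # 3000 genes with a shared 24 h rhythm

pca = PCA(n_components=3)
scores = pca.fit_transform(genes - genes.mean(axis=1, keepdims=True))
print(pca.explained_variance_ratio_)   # variance captured by the three main components
print(pca.components_.shape)           # (3, 48): the three dominant temporal patterns
```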

Algorithmic Method for Efficient Cruise Program

One of the major problems in programming a cruise circuit is deciding which destinations to include and which to leave out. A decision problem thus emerges that can be addressed with a linear and goal programming approach. The problem becomes more complex when several boats in the fleet must be programmed within a limited schedule, trying to match their capacity to seasonal demand as closely as possible while also minimizing operating costs. Moreover, the company's scheduler should treat passenger time as a limited asset and seek to maximize its use. The aim of this work is to design a method that, using linear and goal programming techniques, provides a circuit-design model with which the cruise company's decision maker can reach an optimal solution within the fleet schedule.
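To make the flavour of such a model concrete, here is a deliberately tiny 0/1 selection program (hypothetical scores, durations and schedule limit, not the paper's model) in the spirit of the linear programming formulation:

```python
# Toy destination-selection program: maximize attractiveness subject to a schedule limit.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

value = np.array([8.0, 5.0, 6.0, 4.0, 7.0])   # assumed attractiveness score per destination
days = np.array([2.0, 1.0, 2.0, 1.0, 3.0])    # assumed port time each destination consumes
max_days = 6.0                                # assumed schedule limit for one circuit

res = milp(
    c=-value,                                              # milp minimizes, so negate to maximize value
    constraints=LinearConstraint(days.reshape(1, -1), ub=max_days),  # total time fits the schedule
    integrality=np.ones_like(value),                       # x_i in {0, 1}: include the destination or not
    bounds=Bounds(0, 1),
)
print(res.x)                                               # 1 = destination included in the circuit
```

A goal programming variant would add deviation variables for targets such as capacity utilization and penalize them in the objective.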

Decision Making with Dempster-Shafer Theory of Evidence Using Geometric Operators

We study the problem of decision making with a Dempster-Shafer belief structure. We analyze the previous work by Yager on using the ordered weighted averaging (OWA) operator in the aggregation step of the Dempster-Shafer decision process. We discuss the possibility of aggregating with an ascending order in the OWA operator for cases where the smallest value is the best result. We then suggest introducing the ordered weighted geometric (OWG) operator into the Dempster-Shafer framework. In this case, we also discuss aggregation with an ascending order and find that it is strictly necessary, since the OWG operator cannot aggregate negative numbers. Finally, we give an illustrative example showing the different results obtained with the OWA, ascending OWA (AOWA), OWG and ascending OWG (AOWG) operators.
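The four operators themselves are simple to state; the sketch below implements them on a hypothetical payoff vector and weighting vector (the example in the paper is not reproduced):

```python
# OWA/OWG operators: weights are applied to the *ordered* arguments, not to fixed positions.
import numpy as np

def owa(values, weights):
    """Ordered weighted average: weights applied to values sorted in descending order."""
    return float(np.dot(np.sort(values)[::-1], weights))

def aowa(values, weights):
    """Ascending OWA: weights applied to values sorted in ascending order."""
    return float(np.dot(np.sort(values), weights))

def owg(values, weights):
    """Ordered weighted geometric operator (descending order); arguments must be positive,
    which is why an ascending variant is needed when smaller payoffs are better."""
    v = np.sort(values)[::-1]
    return float(np.prod(v ** np.asarray(weights)))

def aowg(values, weights):
    v = np.sort(values)
    return float(np.prod(v ** np.asarray(weights)))

payoffs = [70.0, 40.0, 90.0]   # hypothetical payoffs of one alternative under a focal element
w = [0.5, 0.3, 0.2]            # hypothetical weighting vector (sums to 1)
print(owa(payoffs, w), aowa(payoffs, w), owg(payoffs, w), aowg(payoffs, w))
```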

Inferences on Compound Rayleigh Parameters with Progressively Type-II Censored Samples

This paper considers inference under progressive type-II censoring with a compound Rayleigh failure-time distribution. The maximum likelihood (ML) and Bayes methods are used to estimate the unknown parameters as well as some lifetime characteristics, namely the reliability and hazard functions. We obtain Bayes estimators using conjugate priors for the shape and scale parameters. When both parameters are unknown, closed-form expressions for the Bayes estimators cannot be obtained, so we use Lindley's approximation to compute the Bayes estimates. Another Bayes estimator is obtained based on a continuous-discrete joint prior for the unknown parameters. An example with real data is discussed to illustrate the proposed method. Finally, we compare these estimators with the maximum likelihood estimators using a Monte Carlo simulation study.
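For the ML part, a minimal numerical sketch is shown below, assuming the common compound Rayleigh parameterization S(t) = (β/(β+t²))^α, f(t) = 2αβ^α t (β+t²)^(-(α+1)), and a made-up progressively censored sample (neither the data nor the exact likelihood setup of the paper):

```python
# Numerical ML for the compound Rayleigh model under progressive type-II censoring:
# L(a, b) is proportional to prod f(t_i) * S(t_i)^{R_i}.
import numpy as np
from scipy.optimize import minimize

t = np.array([0.4, 0.7, 1.1, 1.6, 2.3])   # hypothetical observed failure times (m = 5)
R = np.array([1, 0, 2, 0, 3])             # hypothetical progressive censoring scheme R_i

def neg_log_lik(params):
    a, b = np.exp(params)                 # optimize on the log scale so alpha, beta stay positive
    log_f = np.log(2 * a) + a * np.log(b) + np.log(t) - (a + 1) * np.log(b + t**2)
    log_S = a * (np.log(b) - np.log(b + t**2))
    return -(log_f + R * log_S).sum()

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
print(alpha_hat, beta_hat)                # ML estimates; reliability and hazard follow by plug-in
```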

A Numerical Strategy to Design Maneuverable Micro-Biomedical Swimming Robots Based on Biomimetic Flagellar Propulsion

Medical applications are among the most impactful areas of microrobotics. The ultimate goal of medical microrobots is to reach currently inaccessible areas of the human body and carry out a host of complex operations such as minimally invasive surgery (MIS), highly localized drug delivery, and screening for diseases at their very early stages. Miniature, safe and efficient propulsion systems hold the key to maturing this technology, but they pose significant challenges. A recently developed type of propulsion uses a multi-flagella architecture inspired by the motility mechanism of prokaryotic microorganisms. Efficient methods for designing this type of propulsion system are still lacking; to fill this gap, a numerical strategy for designing multi-flagella propulsion systems is proposed. The strategy is based on regularized stokeslet and rotlet theory, resistive force theory (RFT), and a new "local corrected velocity" approach. The effects of the shape parameters and angular velocities of each flagellum on the overall flow field and on the net forces and moments acting on the robot are considered. A multi-layer perceptron artificial neural network is then designed and employed to adjust the angular velocities of the motors for propulsion control. The proposed method is applied successfully to a sample configuration, and useful demonstrative results are obtained.

On Identity Disclosure Risk Measurement for Shared Microdata

Probability-based identity disclosure risk measurement may give the same overall risk for different anonymization strategies applied to the same dataset. Some entities in the anonymized dataset may have higher identification risks than others. Individuals are more concerned about risks above the average and want to know whether they may be exposed to such higher risk. The notion of overall risk used in the above measurement method does not indicate whether some of the involved entities have a higher identity disclosure risk than others. In this paper, we introduce an identity disclosure risk measurement method that not only conveys the overall risk but also indicates whether some members carry a higher risk than others. The proposed method quantifies the overall risk based on the individual risk values, the percentage of records whose risk value exceeds the average, and how much larger those higher risk values are compared to the average. We analyze the disclosure risks for different disclosure control techniques applied to original microdata and present the results.
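The following is a purely hypothetical illustration of the kind of summary statistics the method combines (average per-record risk, share of records above it, and how far they exceed it); it is not the paper's formula:

```python
# Hypothetical per-record risk summary, not the proposed measure itself.
import numpy as np

record_risks = np.array([0.02, 0.03, 0.02, 0.25, 0.40, 0.03])   # assumed per-record identification risks

avg = record_risks.mean()
above = record_risks[record_risks > avg]
share_above = above.size / record_risks.size                 # fraction of records riskier than average
excess_ratio = above.mean() / avg if above.size else 1.0     # how much larger those risks are

print(f"average risk={avg:.3f}, share above average={share_above:.2f}, excess ratio={excess_ratio:.2f}")
```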

A New Heuristic Algorithm for the Dynamic Facility Layout Problem with Budget Constraint

In this research, we developed a new efficient heuristic algorithm for the dynamic facility layout problem with budget constraint (DFLPB). The heuristic combines two methods, discrete event simulation and linear integer programming (IP), to obtain a near-optimal solution. In the proposed algorithm, the non-linear model of the DFLP is first converted to a pure integer programming (PIP) model. The optimal solution of the PIP model is then used in a simulation model, designed analogously to the DFLP, to determine the probability of assigning a facility to a location. After a sufficient number of runs, the simulation model yields near-optimal solutions. Finally, several test problems were solved to verify the performance of the algorithm. The results show that the proposed algorithm is more efficient, in terms of speed and accuracy, than other heuristic algorithms presented in previous works in the literature.

Program Camouflage: A Systematic Instruction Hiding Method for Protecting Secrets

This paper proposes an easy-to-use instruction hiding method to protect software from malicious reverse engineering attacks. Given a source program (original) to be protected, the proposed method (1) takes a modified version (fake) of the program as input, (2) analyzes the differences in assembly-code instructions between the original and the fake, and (3) introduces self-modification routines so that fake instructions are restored to the correct (i.e., original) instructions before they are executed and revert to fake ones after execution. The proposed method adds a certain amount of security to a program, since the fake instructions in the resulting program confuse attackers, and significant effort is required to discover and remove all the fake instructions and self-modification routines. The method is also easy to use, because all the user has to do is prepare a fake source code by modifying the original source code.

Axisymmetric Vibration of Pyrocomposite Hollow Cylinder

The axisymmetric vibration of an infinite pyrocomposite circular hollow cylinder, made of inner and outer pyroelectric layers of the 6mm class bonded together by a linear elastic material with voids (LEMV) layer, is studied. The exact frequency equation is obtained for traction-free surfaces with continuity conditions at the interfaces. Numerical results, in the form of data and dispersion curves for the first and second axisymmetric vibration modes of a BaTiO3 / adhesive / BaTiO3 cylinder, taking the adhesive layer as an existing carbon fibre reinforced polymer (CFRP), are compared with those for a hypothetical LEMV layer with and without voids, as well as with a pyroelectric hollow cylinder. Damping is analyzed through the imaginary parts of the complex frequencies.

A Two-Channel Secure Communication Using Fractional Chaotic Systems

In this paper, a two-channel secure communication scheme using fractional chaotic systems is presented. Conditions for chaos synchronization are investigated theoretically using the Laplace transform. To illustrate the effectiveness of the proposed scheme, a numerical example is presented. The keys, key space, key selection rules and sensitivity to keys are discussed in detail. Results show that the original plaintexts are well masked in the ciphertexts yet recovered faithfully and efficiently by the proposed scheme.

Development of Integrated GIS Interface for Characteristics of Regional Daily Flow

This paper primarily aims to develop a GIS interface for estimating stream-flow sequences at ungauged stations based on known flows at gauged stations. The integrated GIS interface is composed of three major steps. In the first step, statistical analysis of precipitation characteristics is used to build a multiple linear regression equation for the long-term mean daily flow at ungauged stations; the independent variables in the regression equation are mean daily flow and drainage area. Traditionally, mean flow data are generated with the Thiessen polygon method, but the user may also choose Kriging, IDW (Inverse Distance Weighted) or Spline methods as well as other traditional methods. In the second step, the flow duration curve (FDC) at an ungauged station is computed from the FDCs at gauged stations, and the mean annual daily flow is computed by a spatial interpolation algorithm. The third step obtains the watershed/topographic characteristics, which are the most important factors governing stream-flows. Finally, the simulated daily flow time series are compared with the observed time series. The results obtained with the integrated GIS interface closely match the observations, and the relationship between the topographic/watershed characteristics and the stream-flow time series is highly correlated.
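As one of the interpolation options the interface lets the user select, a minimal IDW sketch is shown below with made-up gauge coordinates and flows (not the study's data):

```python
# Minimal Inverse Distance Weighted (IDW) interpolation of mean daily flow.
import numpy as np

def idw(xy_gauged, values, xy_target, power=2.0):
    """Estimate a value at xy_target from gauged stations by inverse-distance weighting."""
    d = np.linalg.norm(xy_gauged - xy_target, axis=1)
    if np.any(d == 0):                       # target coincides with a gauge: return its value
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.dot(w, values) / w.sum())

gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # hypothetical gauge coordinates (km)
mean_flow = np.array([12.0, 8.0, 15.0])                     # hypothetical mean daily flows (m^3/s)
print(idw(gauges, mean_flow, np.array([3.0, 4.0])))         # estimate at an ungauged location
```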

Simulation of Organic Matter Variability on a Sugarbeet Field Using the Computer Based Geostatistical Methods

Computer-based geostatistical methods offer effective data analysis possibilities for agricultural areas by using vectorial data and their attribute information. These methods help detect spatial changes across different locations of large agricultural lands, which leads to effective fertilization for optimal yield with reduced environmental pollution. In this study, topsoil (0-20 cm) and subsoil (20-40 cm) samples were taken from a sugar beet field on a 20 x 20 m grid. Plant samples were also collected from the same plots. Some physical and chemical analyses of these samples were performed by routine methods. According to the derived coefficients of variation, topsoil organic matter (OM) varied more than subsoil OM; the highest C.V. value, 17.79%, was found for topsoil OM. The data were analyzed comparatively using kriging methods, which are widely used in geostatistics. Several interpolation methods (ordinary, simple and universal kriging) and semivariogram models (spherical, exponential and Gaussian) were tested in order to choose the most suitable ones. The average standard deviations of the values estimated by the simple kriging interpolation method were less than the average standard deviations (topsoil OM ± 0.48, N ± 0.37, subsoil OM ± 0.18) of the measured values. The most suitable combination was simple kriging with an exponential semivariogram model for topsoil, whereas simple kriging with a spherical semivariogram model was optimal for subsoil. The results also show that these computer-based geostatistical methods should be tested and calibrated for different experimental conditions and semivariogram models.
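The three semivariogram models compared in the study can be written compactly; the sketch below uses one common convention (practical-range factor of 3 for the exponential and Gaussian forms) and placeholder nugget, sill and range values rather than the fitted field parameters:

```python
# Candidate semivariogram models gamma(h) evaluated at a few lag distances.
import numpy as np

def spherical(h, nugget, sill, rng):
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, sill)

def exponential(h, nugget, sill, rng):
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * np.asarray(h) / rng))

def gaussian(h, nugget, sill, rng):
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * (np.asarray(h) / rng) ** 2))

lags = np.array([20.0, 40.0, 60.0, 80.0])      # lag distances matching the 20 m sampling grid
print(spherical(lags, 0.02, 0.15, 60.0))       # one candidate model with placeholder parameters
```

Whichever model minimizes the estimation variance against the empirical semivariogram would be chosen for the kriging step.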

XML Data Management in Compressed Relational Database

XML is an important standard for data exchange and representation. Using a mature relational database system to support XML data can bring several advantages, but storing XML in a relational database introduces obvious redundancy that wastes disk space, bandwidth and disk I/O when XML data are queried. For efficient storage and querying of XML, it is necessary to keep the XML data compressed in the relational database. In this paper, a compressed relational database technology supporting XML data is presented. The original relational storage structure is well suited to XPath query processing, and the compression method preserves this feature. In addition to traditional relational database techniques, query processing technologies for compressed relations and for the special structures of XML are presented, together with technologies for XQuery processing in the compressed relational database.

Investigation and Congestion Management to Solve the Overload Problem of the Shiraz Substation in FREC

In this paper, the transformer overload problem of the Shiraz substation in the Fars Regional Electric Company (FREC) is investigated over a three-year planning horizon. Suggestions for using a phase-shifting transformer (PST) and a unified power flow controller (UPFC) to solve this problem are examined in detail, and finally some economical and practical designs are given to solve the related problems. The main advantages of this research are its practical considerations and its use of the fundamental concepts of power flow in transmission lines to find an economical design. Simulation results of the integrated overall system with the different designs are compared on economical and practical grounds with respect to overload relief and loss reduction.

Mass Transfer Modeling of Nitrate in an Ion Exchange Selective Resin

The rate of nitrate adsorption by a nitrate-selective ion exchange resin was investigated in well-stirred batch experiments. The kinetic experimental data were simulated with diffusion models including external mass transfer, particle diffusion and chemical adsorption. Particle pore-volume diffusion and particle surface diffusion were taken into consideration both separately and simultaneously in the modeling. The model equations were solved numerically using the Crank-Nicolson scheme, and an optimization technique was employed to fit the model parameters. All nitrate concentration decay data were well described by all of the diffusion models. The results indicate that the kinetic process is initially controlled by external mass transfer and then by particle diffusion. The external mass transfer coefficients and the coefficients of pore-volume diffusion and surface diffusion were close to each other in all experiments, with an average external mass transfer coefficient of 8.3×10⁻³ cm/s. In addition, the models are more sensitive to the mass transfer coefficient than to particle diffusion. Moreover, surface diffusion appears to be the dominant particle diffusion mechanism compared with pore-volume diffusion.
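A minimal Crank-Nicolson sketch for a 1-D diffusion equation dc/dt = D d²c/dx² is shown below; the grid, diffusivity and boundary conditions are illustrative placeholders, not the paper's particle model:

```python
# Crank-Nicolson time stepping for 1-D diffusion: A c_new = B c_old at each step.
import numpy as np

N, M = 50, 200                  # spatial grid points, time steps
D, L, T = 1e-3, 1.0, 50.0       # diffusivity, domain length, total time (arbitrary units)
dx, dt = L / (N - 1), T / M
r = D * dt / (2 * dx**2)

A = np.diag((1 + 2 * r) * np.ones(N)) + np.diag(-r * np.ones(N - 1), 1) + np.diag(-r * np.ones(N - 1), -1)
B = np.diag((1 - 2 * r) * np.ones(N)) + np.diag(r * np.ones(N - 1), 1) + np.diag(r * np.ones(N - 1), -1)

# Dirichlet boundaries: fixed concentration 1 at the left edge, 0 at the right edge.
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0

c = np.zeros(N)
c[0] = 1.0
for _ in range(M):
    rhs = B @ c
    rhs[0], rhs[-1] = 1.0, 0.0  # re-impose the boundary values on the right-hand side
    c = np.linalg.solve(A, rhs)
print(c[:5])                    # concentration profile near the boundary after time T
```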

The Game of Synchronized Quadromineering

In synchronized games players make their moves simultaneously rather than alternately. Synchronized Quadromineering is the synchronized version of Quadromineering, a variant of the classical two-player combinatorial game Domineering. Experimental results for small m × n boards (with m + n < 15) and some theoretical results for general k × n boards (with k = 4, 5, 6) are presented. Moreover, some Synchronized Quadromineering variants are also investigated.

Classification of Initial Stripe Height Patterns using Radial Basis Function Neural Network for Proportional Gain Prediction

This paper aims to improve the fine lapping process of hard disk drive (HDD) lapping machines by removing material from each slider while keeping the stripe height (SH) variation to a minimum. The standard deviation is the key parameter for evaluating the stripe height variation, and hence it is minimized. In this paper, a design of experiments (DOE) with factorial analysis by two-way analysis of variance (ANOVA) is adopted to obtain statistical information. The results reveal that the initial stripe height patterns affect the final SH variation. Therefore, initial SH classification using a radial basis function neural network is implemented to achieve the proportional gain prediction.
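A sketch of an RBF-network-style classifier is shown below (k-means centres, Gaussian hidden-layer features, and a linear readout); the pattern data, class labels and hyperparameters are placeholders rather than the paper's setup:

```python
# RBF-network-style classification of initial stripe-height patterns (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 16))              # placeholder initial stripe-height patterns
y = rng.integers(0, 3, size=300)            # placeholder pattern classes

centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
Phi = rbf_kernel(X, centers, gamma=0.5)     # hidden-layer radial basis activations
readout = LogisticRegression(max_iter=1000).fit(Phi, y)
print(readout.score(Phi, y))                # each predicted class would map to a proportional gain setting
```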

Evolutionary Algorithms for the Multiobjective Shortest Path Problem

This paper presents an overview of the multiobjective shortest path problem (MSPP) and a review of essential and recent issues regarding methods for its solution. The paper further explores a multiobjective evolutionary algorithm as applied to the MSPP and describes its behavior in terms of diversity of solutions, computational complexity, and optimality of solutions. Results show that the evolutionary algorithm can find diverse solutions to the MSPP in polynomial time (based on several network instances) and can be an alternative when other methods are trapped by the tractability problem.
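At the heart of any such multiobjective search is a Pareto-dominance test; the sketch below filters a made-up set of candidate paths with (cost, delay) objectives down to its non-dominated front, which is the set an evolutionary algorithm tries to approximate:

```python
# Pareto-dominance filter for multiobjective path costs (hypothetical data).
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a dominates b if it is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(paths: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    return [p for p in paths if not any(dominates(q, p) for q in paths if q != p)]

candidate_paths = [(10.0, 4.0), (8.0, 6.0), (12.0, 3.0), (9.0, 5.0), (11.0, 7.0)]  # (cost, delay) per path
print(pareto_front(candidate_paths))   # the diverse non-dominated solutions the EA aims to return
```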

Adaptive Fuzzy Control of Stewart Platform under Actuator Saturation

A novel adaptive fuzzy trajectory tracking algorithm for a Stewart-platform-based motion platform is proposed to compensate for path deviation and degradation of controller performance due to actuator torque limits. The algorithm is divided into two parts: a real-time trajectory shaping part and a joint-space adaptive fuzzy controller part. For a reference trajectory in task space, whenever any of the actuators is saturated, the desired acceleration of the reference trajectory is modified on-line using the dynamic model of the motion platform. Meanwhile, an additional action based on the difference between the nominal and modified trajectories is applied in the non-saturated region of the actuators to reduce the path error. Using the modified trajectory as input, the joint-space controller combines a computed torque controller, a leg velocity observer and a fuzzy disturbance observer with saturation compensation. It ensures the stability and tracking performance of the controller in the presence of external disturbances and position-only measurement. Simulation results verify the effectiveness of the proposed control scheme.