Performance Evaluation of Routing Protocols for High Density Ad Hoc Networks Based on Energy Consumption by the GloMoSim Simulator

Ad hoc networks are characterized by multihop wireless connectivity, frequently changing network topology, and the need for efficient dynamic routing protocols. We compare the performance of three routing protocols for mobile ad hoc networks: Dynamic Source Routing (DSR), Ad Hoc On-Demand Distance Vector Routing (AODV), and Location-Aided Routing (LAR1). Our evaluation is based on energy consumption in mobile ad hoc networks. The performance differentials are analyzed under varying network load, mobility, and network size. We simulate the protocols with the GloMoSim simulator. Based on the observations, we make recommendations about when each protocol performs best.

The Effect of Granule Size on the Digestibility of Wheat Starch Using an in vitro Model

Wheat has a bimodal starch granule population, and the dependence of the rate of enzymatic hydrolysis on particle size has been investigated. Ungelatinised wheat starch granules were separated into two populations by sedimentation and decantation. Particle size was analysed by laser diffraction, and morphological characteristics were examined using SEM. The sedimentation technique, though lengthy, gave satisfactory separation of the granules. Samples (the small-granule fraction and the original starch) were digested with α-amylase using a dialysis model. Granules smaller than 10 μm were hydrolysed faster than granules larger than 10 μm. Moreover, the digestion rate was dependent on particle size, whereby smaller granules produced a higher rate of release. The methodology and results reported here can be used as a basis for further evaluations designed to delay the release of glucose during the digestion of native starches.

The Effects of TiO2 Nanoparticles on Tumor Cell Colonies: Fractal Dimension and Morphological Properties

Semiconductor nanomaterials such as TiO2 nanoparticles (TiO2-NPs), approximately less than 100 nm in diameter, have become a new generation of advanced materials due to their novel and interesting optical, dielectric, and photo-catalytic properties. Despite the increasing use of NPs in commerce, to date few studies have investigated the toxicological and environmental effects of NPs. Motivated by the importance of TiO2-NPs to the cancer research field, especially from the treatment perspective, together with the fractal analysis technique, we have investigated the effect of TiO2-NPs on colony morphology in the dark, using fractal dimension as a key morphological characterization parameter. The aim of this work is mainly to investigate the cytotoxic effects of TiO2-NPs in the dark on the growth of human cervical carcinoma (HeLa) cell colonies from the morphological aspect. The in vitro studies were carried out together with image processing and fractal analysis. It was found that the treated colonies were abnormal in shape and size. Moreover, the control colonies appeared to be larger than those of the treated group. The mean Df ± SEM of the colonies in untreated cultures was 1.085 ± 0.019 (N = 25), while that of the cultures treated with TiO2-NPs was 1.287 ± 0.045. The circularity of the control group (0.401 ± 0.071) was higher than that of the treated group (0.103 ± 0.042). The same tendency was found in the diameter parameters, which were 1161.30 ± 219.56 μm and 852.28 ± 206.50 μm for the control and treated groups, respectively. Possible explanations of the results are discussed, though more work needs to be done in terms of mechanism. Finally, our results indicate that fractal dimension can serve as a useful feature, by itself or in conjunction with other shape features, in the classification of cancer colonies.
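The fractal dimension Df used above is typically estimated by box counting. A minimal sketch of that estimator (the grid of box sizes and the synthetic test images are illustrative, not the paper's data):

```python
# Box-counting estimate of the fractal dimension (Df) of a binary image,
# as used to characterise colony morphology.
import numpy as np

def box_count(img, size):
    """Count boxes of side `size` containing at least one foreground pixel."""
    h, w = img.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if img[i:i + size, j:j + size].any():
                count += 1
    return count

def fractal_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Fit log N(s) = -Df * log s + c over a range of box sizes s."""
    counts = [box_count(img, s) for s in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity checks: a filled square has Df close to 2, a line close to 1.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
```

In practice the colony image would first be thresholded to a binary outline before the box counting is applied.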

Research on the Grey Incidence among the Macroscopic Agents in the Logistics Industry System

Quantitative research on the degree of incidence between the logistics industry and the relevant macroscopic system elements is the basis of reasonable and scientific policy on industrial development. At the macro level, the logistics industry system consists of multiple macroscopic agents such as the macro-economy, infrastructure, the social environment, market demand, the traditional industries, the industry life cycle, and policy and institutional factors. This paper studies the grey incidence among the macroscopic agents in the logistics industry system. It is demonstrated that the release of logistics services by logistics-outsourcing enterprises determines the growth of the logistics industry's size. Although information and communication technology is able to promote the formation of the modern logistics industry to some extent, the development of the modern logistics industry depends more on the development of the national economy and on investment in the capital assets of the logistics industry.
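Grey incidence between a reference series (e.g. logistics industry size) and a factor series (e.g. GDP) is commonly quantified with Deng's grey relational grade. A minimal sketch, assuming initial-value normalisation and the conventional distinguishing coefficient ρ = 0.5 (the paper's exact normalisation may differ):

```python
# Deng's grey relational grade between a reference series and one
# comparison series; values near 1 indicate similar growth patterns.
def grey_relational_grade(reference, comparison, rho=0.5):
    # Initial-value normalisation puts both series on a common scale.
    ref = [x / reference[0] for x in reference]
    cmp_ = [x / comparison[0] for x in comparison]
    deltas = [abs(r - c) for r, c in zip(ref, cmp_)]
    dmin, dmax = min(deltas), max(deltas)
    # Grey relational coefficient at each time point, then the grade.
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

Ranking the factor series by their grades against the logistics series is what orders the macroscopic agents by incidence.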

Constraint Based Frequent Pattern Mining Technique for Solving GCS Problem

The Generalized Center String (GCS) problem generalizes the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard; the difficulty lies in the explosion of potential candidates, since the sequences that contain the motifs are not known in advance in any particular biological gene process. GCS can be solved by frequent-pattern-mining techniques and is known to be fixed-parameter tractable with respect to the input sequence length and symbol set size. Efficient methods known as Bpriori algorithms can solve GCS with reasonable time/space complexity; the Bpriori 2 and Bpriori 3-2 algorithms find center strings of any length together with the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm with a Constraint-Based Frequent Pattern mining (CBFP) technique that integrates the ideas of constraint-based mining and FP-tree mining. CBFP solves the GCS problem not only for center strings of any length, but also recovers the positions of all their mutated copies in the input sequences. It constructs a TRIE-like FP-tree to represent the mutated copies of center strings of any length, and uses constraints to restrain the growth of the consensus tree. The complexity of the CBFP technique and of the Bpriori algorithm is analysed for both the worst case and the average case, and the algorithm's correctness is demonstrated by comparison with the Bpriori algorithm on artificial data.
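For intuition, the GCS condition itself can be checked by brute force on toy inputs: a center string c of length L must be within Hamming distance d of some substring of every input sequence. The enumeration below is exponential in L and only sketches the problem definition, not the Bpriori or CBFP algorithms; the DNA alphabet is an illustrative choice:

```python
# Brute-force check of the Generalized Center String condition.
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def center_strings(seqs, length, d, alphabet="ACGT"):
    """All strings c of `length` such that every sequence in `seqs`
    contains a substring within Hamming distance d of c."""
    found = []
    for cand in product(alphabet, repeat=length):
        c = "".join(cand)
        if all(any(hamming(c, s[i:i + length]) <= d
                   for i in range(len(s) - length + 1)) for s in seqs):
            found.append(c)
    return found
```

The candidate explosion visible here (|alphabet|^L strings) is exactly what the FP-tree with growth constraints is meant to prune.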

Post Mining: Discovering Valid Rules from Different Sized Data Sources

A big organization may have multiple branches spread across different locations. Processing the data from these branches becomes a huge task when innumerable transactions take place. Moreover, branches may be reluctant to forward their data for centralized processing but are willing to pass on their association rules. Local mining may also generate a large number of rules. Further, it is not practically possible for all local data sources to be of the same size. A model is proposed for discovering valid rules from different sized data sources, where the valid rules are the high-weighted rules. These rules can be obtained from the high-frequency rules generated at each of the data sources. A data source selection procedure is introduced in order to synthesize rules efficiently. Support Equalization is another proposed method, which focuses on eliminating low-frequency rules at the local sites themselves, thus reducing the number of rules by a significant amount.
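Synthesizing rules from different sized sources can be sketched as a size-weighted average of the per-branch supports. The proportional weighting below is the simplest possible scheme and is an assumption, not necessarily the paper's exact model:

```python
# Size-weighted synthesis of association-rule supports from branches.
def synthesize(rule_supports, source_sizes):
    """rule_supports: {rule: {source: local support}}.
    source_sizes: {source: number of transactions}.
    Returns {rule: weighted global support}; a rule missing at a
    source is treated as having support 0 there."""
    total = sum(source_sizes.values())
    weights = {s: n / total for s, n in source_sizes.items()}
    return {rule: sum(weights[s] * sup.get(s, 0.0) for s in source_sizes)
            for rule, sup in rule_supports.items()}
```

A rule is then declared valid when its synthesized support clears a global threshold, so a rule strong only in a tiny branch does not dominate.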

The Performance Improvement of the Target Position Determining System in Laser Tracking Based on a 4Q Detector Using a Neural Network

One method for detecting target position error in laser tracking systems is to use Four-Quadrant (4Q) detectors. If the coordinates of the target center are computed through the usual relations between the detector outputs, the results are nonlinear and depend on the shape and size of the target and on its position on the detector screen. In this paper we design a neural-network-based algorithm in which the coordinates of the target center in laser tracking systems are calculated from detector outputs obtained from visual modeling. With this method the results, except for the part related to the detector's intrinsic limitations, are linear and independent of the shape and size of the target.
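The "usual relations" for a 4Q detector are the sum-and-difference formulas, whose nonlinear output the neural network is trained to replace. A sketch of the conventional estimate (the quadrant labelling A upper-right, B upper-left, C lower-left, D lower-right is an assumption):

```python
# Conventional sum-and-difference spot-position estimate on a
# four-quadrant detector; A, B, C, D are the quadrant signals.
def spot_position(A, B, C, D):
    """Normalised (x, y) estimate of the spot centre in [-1, 1]."""
    s = A + B + C + D
    x = ((A + D) - (B + C)) / s   # right half minus left half
    y = ((A + B) - (C + D)) / s   # top half minus bottom half
    return x, y
```

The estimate is exact only for symmetric spots well inside the detector, which is precisely the shape- and size-dependence the paper's learned mapping removes.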

The Surface Adsorption of Nano-pore Template

This paper aims to fabricate high-quality anodic aluminum oxide (AAO) films by anodization. AAO pore size, pore density, and film thickness can be controlled within 10~500 nm, 10^8~10^11 pores·cm^-2, and 1~100 μm, respectively. AAO volume and surface area can be computed from structural parameters such as thickness, pore size, pore density, and sample size. Based on the theoretical calculation, a 100 μm thick AAO film with pore diameters of 15 nm, 60 nm, and 500 nm has surface areas of 1225.2 cm2, 3204.4 cm2, and 549.7 cm2, respectively. The large surface area per unit sample is useful for adsorption applications. When AAO adsorbed the pH indicator bromophenol blue, it provided sensitive detection of pH changes in solution. This testing method can further be used for precise measurements in biotechnology and convenient measurements in industrial engineering.
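The geometric surface-area calculation can be sketched as follows: each cylindrical pore of diameter d in a film of thickness t contributes π·d·t of inner wall area. The paper's reported values also depend on the measured pore density at each diameter, which is not restated here, so the numbers below are illustrative:

```python
# Geometric pore-wall area of an AAO film.
import math

def aao_wall_area(sample_area_cm2, pore_density_per_cm2,
                  pore_diameter_nm, thickness_um):
    """Inner wall area: (number of pores) * pi * d * t."""
    d_cm = pore_diameter_nm * 1e-7   # 1 nm = 1e-7 cm
    t_cm = thickness_um * 1e-4       # 1 um = 1e-4 cm
    return sample_area_cm2 * pore_density_per_cm2 * math.pi * d_cm * t_cm

# Example: 1 cm^2 sample, 1e10 pores/cm^2, 60 nm pores, 100 um thick.
area = aao_wall_area(1.0, 1e10, 60, 100)
```

Because the wall area scales with density times diameter, a small-pore film can beat a large-pore film only if its pore density is correspondingly higher, which is why the 500 nm film has the smallest area in the paper's figures.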

Complexity Analysis of Some Known Graph Coloring Instances

Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs where the optimal solution is already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was based on the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size, and number of block diagonal zeros are calculated for each graph. The results showed that the most difficult problems have low mobility (in the range of 0.2 to 0.5) and relatively little deviation from uniform color class size.
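For reference, the density measure and a simple colouring baseline are easy to compute for such instances. The greedy heuristic below is a generic largest-first baseline, not the adjacency-matrix-squaring method used in the paper:

```python
# Graph density and a largest-first greedy colouring baseline.
def greedy_coloring(adj):
    """adj: dict node -> set of neighbours. Returns node -> colour index."""
    colors = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):   # high degree first
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in used)
    return colors

def density(adj):
    """Fraction of possible edges present in the graph."""
    n = len(adj)
    m = sum(len(ns) for ns in adj.values()) / 2
    return 2 * m / (n * (n - 1))
```

Running such a baseline over the 50 DIMACS-style instances gives a colour count to compare against the known optimum when judging instance difficulty.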

Effects of Stream Tube Numbers on Flow and Sediments Using GSTARS-3: A Case Study of the Karkheh Reservoir Dam in Western Dezful

Simulation of flow and sedimentation processes in reservoir dams can be carried out by two methods: physical and mathematical modeling. The study area extended from the Jelogir hydrometric station to the Karkheh reservoir dam, with the aim of investigating the effects of stream tubes on the behavior of the GSTARS-3 model. The methodology was to run the model with up to 5 stream tubes in order to observe the influence of each scenario on longitudinal profiles, cross-sections, flow velocity, and bed-load sediment size. The results suggest that using two or more stream tubes, which yields a semi-two-dimensional model, produces results closer to the observational data than single-stream-tube modeling. Moreover, modeling with three stream tubes was shown to yield results relatively close to the observational data. The overall conclusion of the paper is that the number of stream tubes applied has a significant influence on the model behavior vis-à-vis the bed-load sediment size.

Modeling of Radiofrequency Nerve Lesioning in Inhomogeneous Media

Radiofrequency (RF) lesioning of nerves is commonly used to alleviate chronic pain: the RF current prevents transmission of pain signals through the nerve by heating the nerve that causes the pain. Several factors affect the temperature distribution and the nerve lesion size; one of them is inhomogeneity in the tissue medium. Our objective is to calculate the temperature distribution and the nerve lesion size in an inhomogeneous medium surrounding the RF electrode. Two 3-D finite element models are used to compare the temperature distributions in homogeneous and inhomogeneous media. The effect of temperature-dependent electric conductivity on the maximum temperature and lesion size is also examined. Results show that the presence of an inhomogeneous medium around the RF electrode has a considerable effect on the temperature distribution and lesion size, and that the dependence of electric conductivity on tissue temperature increases the lesion size.
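The qualitative effect of an inhomogeneous medium shows up even in a 1-D steady-state conduction sketch: a jump in conductivity shifts the temperature profile away from the linear homogeneous solution. The geometry, conductivities, and boundary temperatures below are illustrative, not the paper's 3-D RF model:

```python
# 1-D steady heat conduction, d/dx( k(x) dT/dx ) = 0, solved by finite
# differences with fixed end temperatures. A conductivity jump at the
# midpoint stands in for an inhomogeneous tissue medium.
import numpy as np

def steady_temperature(k, t_left=90.0, t_right=37.0):
    """k: conductivity at each of the n-1 interfaces between n nodes."""
    n = len(k) + 1
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundary conditions
    b[0], b[-1] = t_left, t_right
    for i in range(1, n - 1):          # flux balance at interior nodes
        A[i, i - 1] = k[i - 1]
        A[i, i] = -(k[i - 1] + k[i])
        A[i, i + 1] = k[i]
    return np.linalg.solve(A, b)

# Homogeneous medium: linear profile, midpoint at (90 + 37) / 2 = 63.5.
T_hom = steady_temperature(np.full(100, 0.5))
# Inhomogeneous medium: midpoint shifts toward the better-conducting side.
T_inh = steady_temperature(np.concatenate([np.full(50, 0.2),
                                           np.full(50, 0.6)]))
```

The same flux-continuity mechanism, in 3-D and coupled to the RF heat source, is what reshapes the lesion in the paper's inhomogeneous model.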

Exergy Analysis of Combined Cycle of Air Separation and Natural Gas Liquefaction

This paper presents a novel combined cycle of air separation and natural gas liquefaction. The idea is that natural gas can be liquefied while gaseous or liquid nitrogen and oxygen are produced in one combined cryogenic system. Cycle simulation and exergy analysis were performed to evaluate the process and thereby reveal the influence of the crucial parameter, the flow-rate ratio through the two-stage expanders, β, on the heat transfer temperature difference, its distribution, and the consequent exergy loss. Composite curves for the combined hot streams (feed natural gas and recycled nitrogen) and the cold stream showed the degree of optimization available in this process if an appropriate β is chosen. The results indicated that increasing β reduces the temperature difference and the exergy loss in the heat exchange process. However, the maximum value of β should be confined in terms of the minimum temperature difference proposed in heat exchanger design standards and the heat exchanger size. The optimal βopt under different operating conditions, corresponding to the required minimum temperature differences, was investigated.
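The exergy loss in the heat-exchange step follows from the Gouy-Stodola relation: heat Q transferred from a hot stream at Th to a cold stream at Tc destroys exergy T0·Q·(1/Tc - 1/Th), which shrinks as the temperature gap shrinks, matching the reported effect of increasing β. A minimal sketch (temperatures in kelvin; the dead-state temperature T0 = 298.15 K is an assumed value):

```python
# Gouy-Stodola sketch: exergy destroyed by heat transfer across a
# finite temperature difference.
def exergy_loss(Q, Th, Tc, T0=298.15):
    """Ex_loss = T0 * Q * (1/Tc - 1/Th); all temperatures in kelvin."""
    return T0 * Q * (1.0 / Tc - 1.0 / Th)
```

A larger β narrows the gap between the composite curves, which this relation translates directly into a lower exergy loss, at cryogenic temperatures especially strongly because of the 1/T terms.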

Estimation of Buffer Size of Internet Gateway Server via G/M/1 Queuing Model

Efficiently assigning system resources to route client demand through gateway servers is a difficult problem. In this paper, we present an improved approach for autonomous performance of gateway servers under highly dynamic traffic loads. We devise a methodology to calculate queue length and waiting time from gateway server information in order to reduce response time variance in the presence of bursty traffic. The central consideration is performance: gateway servers must offer cost-effective and highly available services in the long run, so they have to be scaled to meet the expected load. Performance measurements can serve as the basis for performance modeling and prediction; with the help of performance models, performance metrics (such as buffer size estimates and waiting time) can be determined during the development process. This paper describes the queueing models that can be applied to the estimation of queue length in order to estimate the required memory size. Both simulation and experimental studies, using synthesized workloads and analysis of real-world gateway servers, demonstrate the effectiveness of the proposed system.
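For the G/M/1 model, the key quantity is σ, the unique root in (0, 1) of σ = A*(μ(1 - σ)), where A* is the Laplace-Stieltjes transform (LST) of the interarrival-time distribution and μ is the service rate; the mean waiting time is then W = σ/(μ(1 - σ)). A sketch using fixed-point iteration (the sanity check uses exponential interarrivals, for which G/M/1 reduces to M/M/1 and σ equals the utilisation ρ):

```python
# G/M/1 queue: solve sigma = A*(mu * (1 - sigma)) by fixed-point
# iteration, then derive the mean waiting time.
def gm1_sigma(lst, mu, iters=500):
    """lst: the interarrival-time LST A*(s); mu: service rate."""
    sigma = 0.5
    for _ in range(iters):
        sigma = lst(mu * (1.0 - sigma))
    return sigma

def gm1_wait(lst, mu):
    """Mean waiting time in queue: W = sigma / (mu * (1 - sigma))."""
    s = gm1_sigma(lst, mu)
    return s / (mu * (1.0 - s))

# Sanity check: exponential interarrivals at rate lam make this M/M/1,
# where sigma should equal rho = lam / mu.
lam, mu = 4.0, 5.0
exp_lst = lambda s: lam / (lam + s)
```

For bursty arrivals one substitutes the appropriate LST, and since the queue length seen by arrivals is geometric, P(N = n) = (1 - σ)σ^n, a buffer size can be provisioned from the tail of that distribution.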

Fast and Accurate Control Chart Pattern Recognition Using a New Cluster-k-Nearest Neighbor

By taking advantage of k-NN, which is highly accurate, and of k-means clustering, which is able to reduce classification time, we introduce Cluster-k-Nearest-Neighbor as a "variable k"-NN that deals with the centroid, or mean point, of each subclass generated by a clustering algorithm. In general the k-means algorithm is not stable in terms of accuracy, so we develop another algorithm for clustering the space which gives higher accuracy than k-means, fewer subclasses, stability, and a bounded classification time with respect to the data size. We obtain between 96% and 99.7% accuracy in the classification of 6 different types of time series using the k-means algorithm, and 99.7% using the new clustering algorithm.
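The centroid-based classification step can be sketched with the simplest possible reduction, one centroid per class; the paper's method instead generates several subclasses per class with a clustering step, so this is only the skeleton of the idea:

```python
# Cluster-k-NN skeleton: classify by distance to subclass centroids
# instead of scanning every training point.
import math
from collections import defaultdict

def class_centroids(X, y):
    """Mean point of each class (the degenerate one-subclass case)."""
    groups = defaultdict(list)
    for p, label in zip(X, y):
        groups[label].append(p)
    return {label: tuple(sum(c) / len(pts) for c in zip(*pts))
            for label, pts in groups.items()}

def classify(p, centroids):
    """Nearest-centroid decision over the reduced set of points."""
    return min(centroids, key=lambda label: math.dist(p, centroids[label]))
```

Classification cost drops from O(n) distance computations to O(number of subclasses), which is the bounded classification time the abstract refers to.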

An Optimal Algorithm for HTML Page Building Process

Demand for web services grows with the increasing number of Web users. A web service is provided by a Web application, and the size of a Web application is affected by its users' requirements and interests. Differences in requirements and interests lead to growth in Web application size. An efficient way to save storage space for more data and information is to apply algorithms that compress the contents of Web application documents. This paper introduces an algorithm to reduce Web application size based on reducing the contents of HTML files. It removes unimportant content regardless of the HTML file size, without discarding any character that is required in the HTML page building process.
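A minimal sketch of such content reduction: strip comments and collapse whitespace runs outside `<pre>` blocks, so that no character the browser needs to rebuild the rendered page is lost. This illustrates the general approach only, not the paper's algorithm:

```python
# Whitespace/comment reduction for HTML that preserves rendering.
import re

def reduce_html(html):
    """Drop HTML comments and collapse whitespace runs, leaving
    whitespace-sensitive <pre> blocks untouched."""
    html = re.sub(r"<!--.*?-->", "", html, flags=re.S)
    # Split keeps the <pre>...</pre> chunks as separate parts.
    parts = re.split(r"(<pre.*?</pre>)", html, flags=re.S | re.I)
    return "".join(p if p.lower().startswith("<pre")
                   else re.sub(r"\s+", " ", p)
                   for p in parts).strip()
```

Real minifiers also handle `<script>`, `<style>`, and quoted attribute values, which this sketch deliberately ignores.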

Shrinkage of High Strength Concrete

This paper presents the results of an experimental investigation carried out to evaluate the shrinkage of High Strength Concrete. The High Strength Concrete is made by partial replacement of cement with fly ash and silica fume. The shrinkage of High Strength Concrete has been studied using different types of coarse and fine aggregates, i.e. sandstone and granite of 12.5 mm size and Yamuna and Badarpur sand. The mix proportion of the concrete is 1:0.8:2.2 with a water-cement ratio of 0.30. A superplasticizer dose of 2% by weight of cement is added to achieve the required degree of workability in terms of compaction factor. From the test results of the above investigation it can be concluded that the shrinkage strain of High Strength Concrete increases with age. The shrinkage strains of concrete with 10% of the cement replaced by fly ash and by silica fume, respectively, are higher (by 6 to 10%) at various ages than the shrinkage strain of concrete without fly ash and silica fume. The shrinkage strain of concrete with Badarpur sand as fine aggregate at 90 days is slightly less (10%) than that of concrete with Yamuna sand. Further, the shrinkage strain of concrete with granite as coarse aggregate at 90 days is slightly less (6 to 7%) than that of concrete with sandstone aggregate of the same size. The shrinkage strain of High Strength Concrete is also compared with that of normal strength concrete; test results show that the shrinkage strain of high strength concrete is less than that of normal strength concrete.

A Green Chemical Technique for the Synthesis of Magnetic Nanoparticles by Magnetotactic Bacteria

Bacterial magnetic nanoparticles have great potential in biotechnological and biomedical applications. In this study, a liquid growth medium was modified for cultivating a fastidious magnetotactic bacterium that had been isolated from Anzali lagoon, Iran, in our previous research. These modifications included changes in vitamins, minerals, carbon sources, and other components. In our experiments, serum bottles and specially designed air-tight laboratory bottles were used to create microaerobic conditions in order to develop a method for scale-up experiments. This information may serve as a guide to green-chemistry-based biological protocols for the synthesis of magnetic nanoparticles with control over chemical composition, morphology, and size.

Methods for Case Maintenance in Case-Based Reasoning

Case-Based Reasoning (CBR) is a machine learning approach to problem solving and learning that has caught a lot of attention over the last few years. In general, CBR is composed of four main phases: retrieve the most similar case or cases, reuse the case to solve the problem, revise or adapt the proposed solution, and retain the learned case in the case base for future learning. Unfortunately, in many cases this retention step causes uncontrolled case-base growth, which affects the competence and performance of CBR systems. This paper proposes a competence-based maintenance method for CBR based on a deletion policy. There are three main steps in this method: Step 1, formulate the problem; Step 2, determine the coverage and reachability sets based on coverage values; Step 3, reduce the case base size. The results obtained show that the proposed method performs better than the existing methods discussed in the literature.
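Coverage and reachability (in the sense of Smyth and Keane's competence model) and a deletion pass over them can be sketched as follows; the `solves` predicate, which decides whether one case can be adapted to solve another, is an assumed input:

```python
# Competence-based case-base reduction sketch.
def coverage(case, cases, solves):
    """Cases that `case` can be adapted to solve."""
    return {c for c in cases if solves(case, c)}

def reachability(case, cases, solves):
    """Cases that can be adapted to solve `case`."""
    return {c for c in cases if solves(c, case)}

def reduce_case_base(cases, solves):
    """Drop auxiliary cases: those whose coverage is contained in the
    coverage of another retained case, smallest coverage first."""
    kept = list(cases)
    for c in sorted(cases, key=lambda c: len(coverage(c, cases, solves))):
        others = [k for k in kept if k != c]
        cov_c = coverage(c, cases, solves)
        if any(cov_c <= coverage(o, cases, solves) for o in others):
            kept.remove(c)
    return kept
```

Because only subsumed cases are removed, every problem solvable by the original case base remains solvable by the reduced one under the same `solves` predicate.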

A Hybrid Approach for Selection of Relevant Features for Microarray Datasets

Developing an accurate classifier for high dimensional microarray datasets is a challenging task due to the small available sample size. It is therefore important to determine a set of relevant genes that classify the data well. Traditional gene selection methods often select the top-ranked genes according to their discriminatory power, but these genes are often correlated with each other, resulting in redundancy. In this paper, we propose a hybrid method using feature ranking and a wrapper method (a genetic algorithm with a multiclass SVM) to identify a set of relevant genes that classify the data more accurately. A new fitness function for the genetic algorithm is defined that focuses on selecting the smallest set of genes providing maximum accuracy. Experiments have been carried out on four well-known datasets. The proposed method provides better results, in terms of both classification accuracy and the number of genes selected, than those found in the literature.
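A fitness function of the kind described, rewarding accuracy first and smaller gene sets second, can be sketched as a weighted sum; the weight `alpha` and the injected `accuracy` callback (which in the paper's setting would wrap cross-validated multiclass SVM accuracy) are assumptions for illustration:

```python
# GA fitness sketch for gene-subset selection: maximise accuracy, and
# among equally accurate subsets prefer fewer selected genes.
def fitness(mask, accuracy, total_genes, alpha=0.9):
    """mask: 0/1 list over genes; accuracy: callable mask -> [0, 1]."""
    n_selected = sum(mask)
    if n_selected == 0:
        return 0.0           # an empty subset cannot classify anything
    size_reward = 1.0 - n_selected / total_genes
    return alpha * accuracy(mask) + (1.0 - alpha) * size_reward
```

The GA then evolves a population of masks under this fitness, with the feature-ranking step pre-filtering the genes that are allowed into the masks at all.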

Performance Evaluation of Routing Protocols for High Density Ad Hoc Networks Based on QoS by the GloMoSim Simulator

Ad hoc networks are characterized by multihop wireless connectivity, frequently changing network topology, and the need for efficient dynamic routing protocols. We compare the performance of three routing protocols for mobile ad hoc networks: Dynamic Source Routing (DSR), Ad Hoc On-Demand Distance Vector Routing (AODV), and Location-Aided Routing (LAR1). The performance differentials are analyzed under varying network load, mobility, and network size. We simulate the protocols with the GloMoSim simulator. Based on the observations, we make recommendations about when each protocol performs best.