Experimental Investigation of Phase Distributions of Two-phase Air-silicone Oil Flow in a Vertical Pipe

This paper reports the results of an experimental study conducted to characterise the gas-liquid multiphase flows experienced within a vertical riser transporting a range of gas-liquid flow rates. The scale experiments were performed using an air/silicone oil mixture within a 6 m long riser. The superficial air velocities studied ranged from 0.047 to 2.836 m/s, whilst the liquid superficial velocity was maintained at 0.047 m/s. Measurements of the mean cross-sectional and time-averaged radial void fraction were obtained using a wire mesh sensor (WMS). The data were recorded at an acquisition frequency of 1000 Hz over an interval of 60 seconds. For the range of flow conditions studied, the average void fraction was observed to vary between 0.1 and 0.9. Analysis of the data showed that the observed void fraction was strongly affected by the superficial gas velocity: the higher the superficial gas velocity, the higher the observed average void fraction. The average void fraction distributions were in good agreement with the results obtained by other researchers. When the air-silicone oil flows were fully developed, reasonably symmetric profiles were observed, with the shape of the profile being strongly dependent on the superficial gas velocity.
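
As an illustration of the kind of post-processing involved, the sketch below (Python, with a hypothetical sensor grid and simulated data rather than the actual WMS output) averages a stack of wire mesh sensor frames into a mean cross-sectional void fraction and a time-averaged local map from which a radial profile can be read.

    import numpy as np

    # Hypothetical WMS data: 60 s at 1000 Hz, 24x24 crossing points,
    # each entry a local instantaneous void fraction in [0, 1].
    frames = np.random.rand(60_000, 24, 24)

    # Mean cross-sectional void fraction for each frame, then time average.
    cross_sectional = frames.mean(axis=(1, 2))
    mean_void_fraction = cross_sectional.mean()

    # Time-averaged local void fraction at every crossing point, from which
    # a radial profile can be extracted along a diameter.
    time_averaged_map = frames.mean(axis=0)
    radial_profile = time_averaged_map[12, :]   # centre row as an example diameter

    print(f"time- and area-averaged void fraction: {mean_void_fraction:.3f}")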

Artificial Neural Network Model for a Low Cost Failure Sensor: Performance Assessment in Pipeline Distribution

This paper describes an automated event detection and location system for water distribution pipelines based upon low-cost sensor technology and signature analysis by an Artificial Neural Network (ANN). A low-cost failure sensor which measures the opacity, or cloudiness, of the local water flow has been designed, developed and validated, and an ANN-based system is then described which uses the time series data produced by the sensors to construct an empirical model for time series prediction and classification of events. These two components have been installed, tested and verified at an experimental site in a UK water distribution system. Verification of the system has been achieved through a series of simulated burst trials which have provided real data sets. It is concluded that the system has potential in water distribution network management.
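
A minimal sketch of the classification idea, assuming a simple sliding-window representation and simulated opacity data (the sensor format, window length and network architecture here are illustrative, not those of the deployed system):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Hypothetical opacity time series: baseline noise, with short turbid
    # spikes standing in for burst events.
    def make_window(event):
        w = rng.normal(0.1, 0.02, 60)          # one minute of samples
        if event:
            w[30:40] += rng.uniform(0.3, 0.6)  # simulated burst signature
        return w

    X = np.array([make_window(i % 2) for i in range(400)])
    y = np.array([i % 2 for i in range(400)])  # 1 = event, 0 = normal

    # Small feed-forward ANN classifying each window as normal or event.
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    clf.fit(X[:300], y[:300])
    print("held-out accuracy:", clf.score(X[300:], y[300:]))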

In Vitro Antibacterial and Antifungal Effects of a 30 kDa D-Galactoside-Specific Lectin from the Demosponge, Halichondria okadai

The present study was undertaken to screen the in vitro antimicrobial activities of the D-galactose-binding sponge lectin (HOL-30). HOL-30 was purified from the marine demosponge Halichondria okadai by affinity chromatography. The molecular mass of the lectin was determined to be 30 kDa, with a single polypeptide, by SDS-PAGE under non-reducing and reducing conditions. HOL-30 agglutinated trypsinized and glutaraldehyde-fixed rabbit and human erythrocytes, with a preference for type O erythrocytes. The lectin was evaluated for inhibition of microbial growth by the disc diffusion method against eleven human pathogenic gram-positive and gram-negative bacteria. The lectin exhibited strong antibacterial activity against gram-positive bacteria, such as Bacillus megaterium and Bacillus subtilis. However, it had no effect against gram-negative bacteria such as Salmonella typhi and Escherichia coli. The largest zones of inhibition were recorded for Bacillus megaterium (12 mm in diameter) and Bacillus subtilis (10 mm in diameter) at a lectin concentration of 250 μg/disc. The antifungal activity of the lectin was also investigated against six phytopathogenic fungi using the food poisoning technique. The lectin showed maximum inhibition (22.83%) of mycelial growth of Botrydiplodia theobromae at a concentration of 100 μg/mL of media. These findings indicate that the lectin may be of importance to clinical microbiology and may have therapeutic applications.

Effect of Using Stone Cutting Waste on the Compression Strength and Slump Characteristics of Concrete

The aim of this work is to study the possible use of stone cutting sludge waste in concrete production, which would reduce both the environmental impact and the production cost. Slurry sludge, obtained from the Samara factory in Jordan, was used as a source of water in concrete production. Physico-chemical and mineralogical characterization of the sludge was carried out to identify its major components and to compare it with the typical sand used to produce concrete. Sample analysis showed that 96% of the slurry sludge volume is water, so it should be considered an important source of water. Results indicated that the use of slurry sludge as a water source in concrete production has an insignificant effect on compression strength, while it has a sharp effect on the slump values. Using slurry sludge at 25% of the total water content produced concrete samples that passed both the slump and compression tests. To clarify the slurry sludge, a settling process can be used to remove the suspended solids; a settling period of 30 min achieved 99% removal efficiency. The clarified water is suitable for use in concrete mixes, which reduces water consumption, conserves water resources, increases profit, reduces operating cost and protects the environment. Additionally, the dry sludge could be used in the mix design in place of fine materials with sizes < 160 μm. This application could conserve natural materials and solve the environmental and economic problems caused by sludge accumulation.

Elastic-Plastic Contact Analysis of Single Layer Solid Rough Surface Model using FEM

Evaluation of the contact pressure and of the surface and subsurface contact stresses is essential for understanding the functional response of surface coatings, and the contact behavior depends mainly on surface roughness, material properties, layer thickness and the manner of loading. Contact parameter evaluation of real rough surface contacts mostly relies on statistical single-asperity contact approaches. In this work, a three-dimensional layered solid rough surface in contact with a rigid flat is modeled and analyzed using the finite element method. The rough surface of the layered solid is generated by an FFT approach. The generated rough surface is exported to the finite element based ANSYS package, in which bottom-up solid modeling is employed to create a deformable solid model with a layered rough surface on top. The discretization and contact analysis are carried out using the same ANSYS package. Unlike many other contact models, the elastic, elastoplastic and plastic deformation regimes are continuous in the present finite element method. The Young's modulus to yield strength ratio of the layer is varied in the present work to observe its effect on the contact parameters, while keeping the surface roughness and substrate material properties constant. The contacting asperities attain elastic, elastoplastic and plastic states continuously, and the asperity interaction phenomenon is inherently included. The resulting contact parameters show that neighboring asperity interaction and the Young's modulus to yield strength ratio of the layer influence the bulk deformation and consequently affect the interface strength.
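
The FFT-based surface generation step can be illustrated with a short sketch; the grid size, spacing, roughness exponent and target RMS value below are assumptions for illustration, not the parameters used in the paper.

    import numpy as np

    # Minimal spectral (FFT) rough-surface generator: random phases filtered in
    # the frequency domain by a power-law spectrum, then inverse-transformed.
    n = 256                 # grid points per side (assumed)
    dx = 1e-6               # grid spacing in metres (assumed)

    kx = np.fft.fftfreq(n, d=dx)
    ky = np.fft.fftfreq(n, d=dx)
    kxx, kyy = np.meshgrid(kx, ky)
    k = np.sqrt(kxx**2 + kyy**2)
    k[0, 0] = np.inf        # suppress the mean (zero-frequency) component

    # Self-affine amplitude spectrum ~ k^-(1+H); H is a roughness exponent.
    H = 0.8
    amplitude = k**-(1.0 + H)

    rng = np.random.default_rng(1)
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    surface = np.fft.ifft2(amplitude * phase).real

    # Rescale to a target RMS roughness before exporting nodes to the FE model.
    surface *= 0.5e-6 / surface.std()
    print("RMS roughness (m):", surface.std())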

High Level Synthesis of Kahn Process Networks (KPN) for Streaming Applications

Streaming applications usually consist of blocks, running in parallel or in series, that incrementally transform a stream of input data. Breaking such an application into distinguishable blocks and then mapping them onto independent hardware processing elements poses a design challenge. This requires a generic controller that automatically maps such a stream of data onto independent processing elements without any dependencies or manual intervention. In this paper, a Kahn Process Network (KPN) model for such streaming applications is designed and developed to be mapped onto an MPSoC. It is designed in such a way that a generic C-based compiler takes the mapping specifications as input from the user, automates these design constraints and automatically generates synthesized, optimized RTL code for the specified application.
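
A minimal sketch of the KPN execution model itself, using Python threads and an unbounded FIFO queue in place of hardware processing elements (the mapping and RTL generation steps are not shown):

    import threading, queue

    # Two Kahn processes connected by an unbounded FIFO: a producer that
    # streams data tokens and a consumer that transforms them incrementally.
    fifo = queue.Queue()

    def producer():
        for x in range(10):
            fifo.put(x)       # non-blocking write, as in a KPN
        fifo.put(None)        # end-of-stream token

    def consumer():
        while True:
            x = fifo.get()    # blocking read: the defining KPN rule
            if x is None:
                break
            print("processed", x * x)

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads: t.start()
    for t in threads: t.join()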

Modality and Redundancy Effects on Music Theory Learning Among Pupils of Different Anxiety Levels

The purpose of this study was to investigate the effects of the modality and redundancy principles on music theory learning among pupils of different anxiety levels. The music theory lesson was developed in three different modes: audio and image (AI), text with image (TI) and audio with image and text (AIT). The independent variable was the courseware mode, the moderator variable was the anxiety level, and the dependent variable was the post-test score. The study sample consisted of 405 third-grade pupils. Descriptive and inferential statistics were conducted to analyze the collected data. Analysis of covariance (ANCOVA) and post hoc tests were carried out to examine the main effects as well as the interaction effects of the independent variables on the dependent variable. The findings of this study showed that medium-anxiety pupils performed significantly better than low- and high-anxiety pupils in all three treatment modes. The AI mode was found to help pupils with high anxiety significantly more than the TI and AIT modes.
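
For readers unfamiliar with the analysis, the sketch below runs a two-way ANCOVA of the kind described, on simulated scores and with an assumed pretest covariate (the study's actual covariate and data are not reproduced here).

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 405

    # Hypothetical data frame mirroring the study design: treatment mode,
    # anxiety level and post-test score (all values here are simulated).
    df = pd.DataFrame({
        "mode":    rng.choice(["AI", "TI", "AIT"], n),
        "anxiety": rng.choice(["low", "medium", "high"], n),
        "pretest": rng.normal(50, 10, n),
    })
    df["post"] = 40 + 0.5 * df["pretest"] + rng.normal(0, 5, n)

    # Two-way ANCOVA: main effects and interaction, with pretest as covariate.
    model = smf.ols("post ~ C(mode) * C(anxiety) + pretest", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))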

Gene Expressions Associated with Ultrastructural Changes in Vascular Endothelium of Atherosclerotic Lesion

Attachment of circulating monocytes to the endothelium is the earliest detectable event during the formation of atherosclerosis. Adhesion molecule, chemokine and matrix protease genes have been identified as being expressed in atherogenesis, and expression of these genes may influence the structural integrity of the luminal endothelium. The aim of this study is to relate changes in the ultrastructural morphology of the aortic luminal surface to the expression of endothelial surface molecule, chemokine and MMP-12 genes in normal and hypercholesterolemic rabbits. The luminal endothelial surface of rabbit aortic tissue was examined by scanning electron microscopy (SEM) in low vacuum mode to ascertain ultrastructural changes during the development of atherosclerotic lesions. Gene expression of adhesion molecules, MCP-1 and MMP-12 was studied by real-time PCR. Ultrastructural observations of the aortic luminal surface showed changes from a normal, regular, smooth, intact endothelium to an irregular luminal surface with a marked globular appearance and ruptures of the membrane layer. Real-time PCR demonstrated differential expression of the studied genes in atherosclerotic tissues. The ultrastructural changes seen in the aortic tissue of hypercholesterolemic rabbits are suggested to be related to underlying changes in endothelial surface molecule, chemokine and MMP-12 gene expression.

Finite Element Application to Estimate In-service Material Properties Using a Miniature Specimen

This paper presents a method for determining uniaxial tensile properties such as Young's modulus, yield strength and the flow behaviour of a material in a virtually non-destructive manner. To achieve this, a new dumb-bell shaped miniature specimen has been designed, which avoids the removal of large material samples from the in-service component for the evaluation of current material properties. The proposed miniature specimen has an advantage in finite element modelling with respect to computational time and memory space. Test fixtures have been developed to enable tension tests on the miniature specimen in a testing machine. The studies have been conducted on a chromium (H11) steel and an aluminum alloy (AR66). The output from the miniature test, viz. the load-elongation diagram, is obtained, and a finite element simulation of the test is carried out using a 2D plane stress analysis. The results are compared with the experimental results. It is observed that the results from the finite element simulation agree well with the miniature test results. The approach appears to have the potential to predict the mechanical properties of materials, which could be used in remaining life estimation of various in-service structures.
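
As a simple illustration of how a tensile property can be recovered from the miniature-test output, the sketch below fits the initial linear portion of an idealised load-elongation record to estimate Young's modulus; the gauge dimensions and the curve are assumed values, not the paper's data.

    import numpy as np

    # Hypothetical miniature-test record: load (N) vs. elongation (mm).
    elongation = np.linspace(0.0, 0.2, 50)
    load = np.minimum(4000.0 * elongation, 500.0)      # idealised elastic-plastic curve

    # Assumed specimen gauge dimensions (not the paper's actual values).
    gauge_length = 5.0      # mm
    area = 1.0              # mm^2

    strain = elongation / gauge_length
    stress = load / area                               # MPa

    # Fit the initial linear portion to estimate Young's modulus.
    elastic = strain < 0.02
    E = np.polyfit(strain[elastic], stress[elastic], 1)[0]
    print(f"estimated Young's modulus: {E/1000:.1f} GPa")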

Statistical Models of Network Traffic

Model-based approaches have been applied successfully to a wide range of tasks such as specification, simulation, testing, and diagnosis. But one bottleneck often prevents the introduction of these ideas: manual modeling is a non-trivial, time-consuming task. Automatically deriving models by observing and analyzing running systems is one possible way to remove this bottleneck. To derive a model automatically, some a priori knowledge about the model structure, i.e. about the system, must exist. Such a model formalism would be used as follows: (i) by observing the network traffic, a model of the long-term system behavior could be generated automatically; (ii) test vectors could be generated from the model; (iii) while the system is running, the model could be used to diagnose abnormal system behavior. The main contribution of this paper is the introduction of a model formalism called 'probabilistic regression automaton' suitable for the tasks mentioned above.
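
As a much-simplified illustration of the idea (not the probabilistic regression automaton formalism itself), the sketch below learns first-order transition probabilities from an observed event sequence and flags transitions that the learned model considers unlikely.

    from collections import defaultdict

    # Learn first-order transition probabilities from an observed event
    # sequence, then report transitions whose probability is below a threshold.
    observed = ["req", "ack", "data", "ack", "data", "ack", "fin",
                "req", "ack", "data", "ack", "fin"]

    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(observed, observed[1:]):
        counts[a][b] += 1

    prob = {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

    def diagnose(sequence, threshold=0.05):
        """Report transitions that the learned model considers abnormal."""
        for a, b in zip(sequence, sequence[1:]):
            p = prob.get(a, {}).get(b, 0.0)
            if p < threshold:
                print(f"unusual transition {a} -> {b} (p={p:.2f})")

    diagnose(["req", "fin"])       # never observed directly: reported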

Sorting Primitives and Genome Rearrangement in Bioinformatics: A Unified Perspective

Bioinformatics and computational biology involve the use of techniques from applied mathematics, informatics, statistics, computer science, artificial intelligence, chemistry, and biochemistry to solve biological problems, usually at the molecular level. Research in computational biology often overlaps with systems biology. Major research efforts in the field include sequence alignment, gene finding, genome assembly, protein structure alignment, protein structure prediction, prediction of gene expression and protein-protein interactions, and the modeling of evolution. Various global rearrangements of permutations, such as reversals and transpositions, have recently become of interest because of their applications in computational molecular biology. A reversal is an operation that reverses the order of a substring of a permutation. A transposition is an operation that swaps two adjacent substrings of a permutation. The problem of determining the smallest number of reversals required to transform a given permutation into the identity permutation is called sorting by reversals. Similar problems can be defined for transpositions and other global rearrangements. In this work we study several genome rearrangement primitives. We show how a genome is modelled by a permutation, introduce some of the existing primitives together with lower and upper bounds on them, and then provide a comparison of the introduced primitives.
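
The primitives are easy to state in code; the sketch below implements a reversal, a transposition and the breakpoint count that underlies a standard lower bound on the reversal distance.

    # Basic genome-rearrangement primitives on a permutation (list of ints).

    def reversal(perm, i, j):
        """Reverse the substring perm[i..j] (inclusive)."""
        return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

    def transposition(perm, i, j, k):
        """Swap the adjacent substrings perm[i..j-1] and perm[j..k-1]."""
        return perm[:i] + perm[j:k] + perm[i:j] + perm[k:]

    def breakpoints(perm):
        """Count non-consecutive adjacencies, with the permutation extended by 0 and n+1."""
        ext = [0] + perm + [len(perm) + 1]
        return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

    p = [3, 1, 2, 4]
    print(reversal(p, 0, 2))          # [2, 1, 3, 4]
    print(transposition(p, 0, 1, 3))  # [1, 2, 3, 4]
    # Each reversal removes at most two breakpoints, so b(p)/2 is a simple
    # lower bound on the reversal distance.
    print(breakpoints(p) / 2)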

Eclectic Rule-Extraction from Support Vector Machines

Support vector machines (SVMs) have shown superior performance compared to other machine learning techniques, especially in classification problems. Yet one limitation of SVMs is their lack of an explanation capability, which is crucial in some applications, e.g. in the medical and security domains. In this paper, a novel approach for eclectic rule-extraction from support vector machines is presented. This approach utilizes the knowledge acquired by the SVM and represented in its support vectors, as well as the parameters associated with them. The approach comprises three stages: training, propositional rule-extraction and rule quality evaluation. Results from four different experiments demonstrate the value of the approach for extracting comprehensible rules of high accuracy and fidelity.
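
A simplified sketch in the spirit of eclectic rule extraction, though not necessarily the paper's exact algorithm: an SVM is trained, its support vectors are relabelled with the SVM's own predictions, and a shallow decision tree fitted to them yields propositional rules whose fidelity to the SVM can then be measured.

    from sklearn.datasets import load_iris
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Train the SVM whose behaviour is to be explained.
    X, y = load_iris(return_X_y=True)
    svm = SVC(kernel="rbf", gamma="scale").fit(X, y)

    # Relabel the support vectors with the SVM's own predictions.
    sv = svm.support_vectors_
    sv_labels = svm.predict(sv)

    # Fit a shallow tree to obtain human-readable if-then rules.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(sv, sv_labels)
    print(export_text(tree))

    # Fidelity: how often the extracted rules agree with the SVM on the full data.
    fidelity = (tree.predict(X) == svm.predict(X)).mean()
    print("fidelity:", round(fidelity, 3))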

Modulation Identification Algorithm for Adaptive Demodulator in Software Defined Radios Using Wavelet Transform

A generalized digital modulation identification algorithm for an adaptive demodulator has been developed and is presented in this paper. The algorithm uses the wavelet transform and histogram computation to identify QPSK and QAM, together with GMSK and M-ary FSK modulations, and it has been found that the histogram peaks simplify the identification procedure. The simulated results show that correct modulation identification is possible down to lower bounds of 5 dB and 12 dB SNR for GMSK and QPSK respectively. When the SNR is above 5 dB, the throughput of the proposed algorithm is more than 97.8%. The receiver operating characteristic (ROC) has been computed to measure the performance of the proposed algorithm, and the analysis shows that the probability of detection (Pd) drops rapidly when the SNR is 5 dB, while the probability of false alarm (Pf) is smaller than 0.3. The performance of the proposed algorithm has been compared with existing methods and it was found to identify all the digital modulation schemes at low SNR.
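
As a toy illustration of the feature used for identification, the sketch below histograms the magnitudes of first-level Haar wavelet detail coefficients for two synthetic signals; the signal parameters and the use of a plain Haar transform are assumptions for illustration, not the paper's exact processing chain.

    import numpy as np

    # Different digital modulations tend to produce different peak patterns in
    # the histogram of wavelet-detail magnitudes (all values here are synthetic).
    fs, fc, n = 8000, 1000, 4096
    t = np.arange(n) / fs
    rng = np.random.default_rng(0)

    def haar_detail(x):
        return np.abs(x[0::2] - x[1::2]) / np.sqrt(2)

    # Synthetic QPSK: random phase from {45, 135, 225, 315} degrees per symbol.
    sym = rng.integers(0, 4, n // 32)
    phase = np.repeat((2 * sym + 1) * np.pi / 4, 32)
    qpsk = np.cos(2 * np.pi * fc * t + phase)

    # Synthetic 2-FSK: frequency toggles between two tones per symbol.
    bits = rng.integers(0, 2, n // 32)
    freq = np.repeat(np.where(bits == 0, 800, 1200), 32)
    fsk = np.cos(2 * np.pi * freq * t)

    for name, sig in [("QPSK", qpsk), ("FSK", fsk)]:
        hist, _ = np.histogram(haar_detail(sig), bins=20)
        print(name, "histogram:", hist)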

Global Electricity Consumption Estimation Using Particle Swarm Optimization (PSO)

An integrated Artificial Neural Network-Particle Swarm Optimization (ANN-PSO) approach is presented for analyzing global electricity consumption. To this end, the following steps are taken. Step 1: PSO is applied to determine the world's oil, natural gas, coal and primary energy demand equations based on socio-economic indicators. The world's population, gross domestic product (GDP), oil trade movement and natural gas trade movement are used as socio-economic indicators in this study. For each socio-economic indicator, a feed-forward back-propagation artificial neural network is trained and projected into the future time domain. Step 2: global electricity consumption is projected based on the oil, natural gas, coal and primary energy consumption using PSO, and is forecast up to the year 2040.
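
The PSO fitting step can be sketched as follows, with a basic swarm estimating the weights of a linear demand equation from simulated indicator data (the equation form, swarm settings and data are illustrative assumptions, not the paper's):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: four socio-economic indicators and a demand series to
    # be fitted by a linear equation whose weights are estimated with PSO.
    X = rng.random((30, 4))
    true_w = np.array([2.0, 1.5, 0.5, 1.0])
    demand = X @ true_w + rng.normal(0, 0.05, 30)

    def cost(w):
        return np.mean((X @ w - demand) ** 2)

    # Minimal PSO: positions, velocities, personal and global bests.
    n_particles, n_iter = 40, 200
    pos = rng.uniform(-5, 5, (n_particles, 4))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 4))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()

    print("estimated weights:", np.round(gbest, 2))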

Techniques for Video Mosaicing

Video mosaicing is the stitching of selected frames of a video by estimating the camera motion between the frames and thereby registering successive frames of the video to arrive at the mosaic. Different techniques have been proposed in the literature for video mosaicing. Despite the large number of papers dealing with techniques to generate mosaics, only a few authors have investigated the conditions under which these techniques produce good estimates of the motion parameters. In this paper, these techniques are studied on different videos, and the reasons for their failures are identified. We propose algorithms that incorporate outlier removal for better estimation of the motion parameters.
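
A common way to register two frames with outlier removal is feature matching followed by RANSAC homography estimation; the OpenCV sketch below illustrates this general approach (the frame filenames are placeholders, and this is not necessarily the paper's specific pipeline).

    import cv2
    import numpy as np

    # Two frames of the video to be registered (placeholder filenames).
    f1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    f2 = cv2.imread("frame_010.png", cv2.IMREAD_GRAYSCALE)

    # Detect and match ORB features between the frames.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(f1, None)
    k2, d2 = orb.detectAndCompute(f2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched (outlier) correspondences while estimating H.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp one frame into the other's coordinate frame and paste to form a mosaic.
    h, w = f2.shape
    mosaic = cv2.warpPerspective(f1, H, (2 * w, h))
    mosaic[0:h, 0:w] = f2
    cv2.imwrite("mosaic.png", mosaic)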

Isobaric Vapor-Liquid Equilibrium of Binary Mixture of Methyl Acetate with Isopropylbenzene at 97.3 kPa

Isobaric vapor-liquid equilibrium measurements are reported for the binary mixture of methyl acetate and isopropylbenzene at 97.3 kPa. The measurements were performed using a vapor-recirculating (modified Othmer) equilibrium still. The mixture shows positive deviation from ideality and does not form an azeotrope. The activity coefficients have been calculated taking into consideration the vapor-phase nonideality. The data satisfy the thermodynamic consistency tests of Herington and Black. The activity coefficients have been satisfactorily correlated by means of the Margules, NRTL, and Black equations. The activity coefficients obtained from the experimental data have also been compared with those predicted by the UNIFAC model.
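
The basic calculation of activity coefficients from T-x-y data can be sketched with modified Raoult's law; the Antoine constants and the equilibrium point below are placeholders, and the vapor-phase correction used in the paper is omitted for brevity.

    import numpy as np

    # Modified Raoult's law estimate of activity coefficients from T-x-y data:
    #   gamma_i = y_i * P / (x_i * Psat_i(T))
    P = 97.3          # system pressure, kPa

    def psat(T, A, B, C):
        """Antoine equation, T in deg C, result in kPa (constants assumed)."""
        return 10 ** (A - B / (T + C))

    antoine = {"methyl acetate":   (6.25, 1190.0, 230.0),   # placeholder constants
               "isopropylbenzene": (6.10, 1460.0, 208.0)}   # placeholder constants

    # One hypothetical equilibrium point: temperature and mole fractions.
    T, x1, y1 = 70.0, 0.40, 0.75
    x2, y2 = 1 - x1, 1 - y1

    g1 = y1 * P / (x1 * psat(T, *antoine["methyl acetate"]))
    g2 = y2 * P / (x2 * psat(T, *antoine["isopropylbenzene"]))
    print(f"gamma1 = {g1:.3f}, gamma2 = {g2:.3f}")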

Optimization of Ethanol Fermentation from Pineapple Peel Extract Using Response Surface Methodology (RSM)

Ethanol has been known for a long time, being perhaps the oldest product obtained through traditional biotechnological fermentation. Agricultural waste as a fermentation substrate is widely discussed as an alternative to edible feedstocks and as a way of utilizing organic material. Pineapple peel, a by-product of the pineapple processing industry, is a highly potential substrate. Bio-ethanol production from pineapple (Ananas comosus) peel extract was carried out by controlled fermentation without any pretreatment. Saccharomyces ellipsoideus was used as the inoculum in this fermentation process, as it is naturally found on the pineapple skin. In this study, the capability of Response Surface Methodology (RSM) for optimizing ethanol production from pineapple peel extract using Saccharomyces ellipsoideus in a batch fermentation process was investigated. The effects of five test variables over defined ranges, namely inoculum concentration (6-14% v/v), pH (4.0-6.0), sugar concentration (14-22°Brix), temperature (24-32°C) and incubation time (30-54 h), on ethanol production were evaluated. Data obtained from the experiments were analyzed with the RSM module of the MINITAB software (Version 15), whereby an optimum ethanol concentration of 8.637% (v/v) was determined at the optimum conditions of 14% (v/v) inoculum concentration, pH 6, 22°Brix, 26°C and 30 hours of incubation. A regression model significant at the 5% level, with a correlation value of 99.96%, was also obtained.
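
A reduced illustration of the response-surface step, fitting a second-order model to two of the five factors with simulated responses and locating the predicted optimum (the paper's own analysis used MINITAB over all five factors):

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated runs over two factors (temperature and incubation time) with a
    # simulated ethanol response; all values here are illustrative.
    temp = rng.uniform(24, 32, 30)          # deg C
    time = rng.uniform(30, 54, 30)          # hours
    ethanol = (8 - 0.05 * (temp - 27) ** 2 - 0.01 * (time - 40) ** 2
               + rng.normal(0, 0.1, 30))    # simulated % (v/v)

    # Design matrix for a full quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2.
    X = np.column_stack([np.ones_like(temp), temp, time,
                         temp**2, time**2, temp * time])
    beta, *_ = np.linalg.lstsq(X, ethanol, rcond=None)

    # Search the fitted surface over the experimental region for its maximum.
    t_grid, h_grid = np.meshgrid(np.linspace(24, 32, 81), np.linspace(30, 54, 97))
    pred = (beta[0] + beta[1]*t_grid + beta[2]*h_grid
            + beta[3]*t_grid**2 + beta[4]*h_grid**2 + beta[5]*t_grid*h_grid)
    i = np.unravel_index(pred.argmax(), pred.shape)
    print(f"predicted optimum: {t_grid[i]:.1f} C, {h_grid[i]:.1f} h, "
          f"{pred[i]:.2f}% (v/v)")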

Establishing a Methodology for Testing and Optimizing GPRS Performance (Case Study: Libya GSM)

The main goal of this paper is to establish a methodology for testing and optimizing GPRS performance over the Libyan GSM network, and to propose a suitable optimization technique to improve that performance. Measurements of download, upload, throughput, round-trip time, reliability, handover, security enhancement and packet loss over a GPRS access network were carried out, and the measured values are compared to theoretical values calculated beforehand. The test data are processed and delivered by the server across the wireless network to the client, which processes the received pieces of data on the fly. We also illustrate the results by describing the main parameters that affect the quality of service. Finally, Libya's two mobile operators, Libyana Mobile Phone and Al-Madar Al-Jadeed Company, are selected as a case study to validate our methodology.
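
A client-side throughput and packet-loss measurement of the kind used in such trials might look like the sketch below; the test URL and the ping counts are placeholders, not measurements from the Libyan networks.

    import time
    import urllib.request

    # Download a test file across the wireless link and compute the effective
    # data rate. The URL is a placeholder for the operator's test server.
    url = "http://example.com/testfile.bin"

    start = time.perf_counter()
    data = urllib.request.urlopen(url, timeout=60).read()
    elapsed = time.perf_counter() - start

    throughput_kbps = 8 * len(data) / elapsed / 1000
    print(f"downloaded {len(data)} bytes in {elapsed:.1f} s "
          f"-> {throughput_kbps:.1f} kbit/s")

    # Packet loss estimate from sent/received counts reported by a ping tool.
    sent, received = 100, 92          # placeholder counts
    print(f"packet loss: {100 * (sent - received) / sent:.1f}%")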

A High Speed 8-Transistor Full Adder Design Using Novel 3-Transistor XOR Gates

The paper proposes a novel design of a 3T XOR gate combining complementary CMOS with pass-transistor logic. The design has been compared with earlier proposed 4T and 6T XOR gates, and a significant improvement in silicon area and power-delay product has been obtained. An eight-transistor full adder has been designed using the proposed three-transistor XOR gate and its performance has been investigated using 0.15 μm and 0.35 μm technologies. Compared to the earlier 10-transistor full adder, the proposed adder shows a significant improvement in silicon area and power-delay product. All simulations have been carried out using HSPICE.
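
The logic realised by such an 8T adder can be checked behaviourally; the sketch below verifies the XOR-plus-multiplexer formulation of the sum and carry over the full truth table (a gate-level view only, not the transistor-level circuit).

    # Behavioural model: sum = a XOR b XOR cin, carry selected by the
    # intermediate XOR output acting as a 2:1 multiplexer control.
    def full_adder(a, b, cin):
        p = a ^ b                    # output of the first (3T) XOR gate
        s = p ^ cin                  # output of the second XOR gate
        cout = b if not p else cin   # 2:1 multiplexer select on p
        return s, cout

    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                assert full_adder(a, b, cin) == ((a + b + cin) % 2, (a + b + cin) // 2)
    print("truth table verified")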

Adjustment of a PET Scanner for PEPT

Positron emission particle tracking (PEPT) is a technique in which a single radioactive tracer particle can be accurately tracked as it moves. A limitation of PET is that, in order to reconstruct a tomographic image, it is necessary to acquire a large volume of data (millions of events), so it is difficult to study rapidly changing systems; PEPT, by contrast, is a very fast process. In PEPT, detecting both photons defines a line, and the annihilation is assumed to have occurred somewhere along this line. The location of the tracer can be determined to within a few mm by triangulation from the coincident detection of a small number of pairs of back-to-back gamma rays. This can be done many times per second, and the track of a moving particle can be reliably followed. The technique was invented at the University of Birmingham [1]. The aim of PEPT is not to form an image of the tracer particle but simply to determine its location over time. If the tracer is followed for a long enough period within a closed, circulating system, it explores all possible types of motion. The application of PEPT to industrial process systems carried out at the University of Birmingham falls into two areas: the behaviour of granular materials and of viscous fluids. Granular materials are processed in industry, for example in the manufacture of pharmaceuticals, ceramics, food and polymers, and PEPT has been used in a number of ways to study the behaviour of these systems [2]; in particular, it allows a single particle to be tracked within the bed [3]. PEPT has also been used to study systems such as fluid flow and viscous fluids in mixers [4], using a neutrally buoyant tracer particle [5].
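
The triangulation step can be sketched as a least-squares problem: find the point minimising the summed squared distance to the lines defined by the coincidence events. The sketch below uses synthetic lines around an assumed tracer position, not real detector data.

    import numpy as np

    # Each coincidence event defines a line in space between the two detection
    # points; the tracer location is estimated as the point minimising the
    # summed squared distance to those lines.
    def locate(points, directions):
        """points: (N,3) points on each line; directions: (N,3) line directions."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, d in zip(points, directions):
            d = d / np.linalg.norm(d)
            M = np.eye(3) - np.outer(d, d)   # projector perpendicular to the line
            A += M
            b += M @ p
        return np.linalg.solve(A, b)

    # Synthetic test: lines passing near a known tracer position with jitter.
    rng = np.random.default_rng(0)
    true_pos = np.array([10.0, -5.0, 250.0])          # mm (assumed)
    dirs = rng.normal(size=(50, 3))
    pts = true_pos + rng.normal(0, 0.5, (50, 3))      # detection noise, mm
    print("estimated position (mm):", np.round(locate(pts, dirs), 2))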