CFD Study of the Fluid Viscosity Variation and Effect on the Flow in a Stirred Tank

Stirred tanks are widely used in all industrial sectors. The need for further studies of the mixing operation and its various aspects arises from the diversity of agitation tools and implemented geometries, in addition to the specific characteristics of each application. Viscous fluids are frequently encountered in industry and represent the majority of treated cases, as in the polymer, food processing, pharmaceutical and cosmetics sectors. This paper therefore presents a three-dimensional numerical study, carried out with the Fluent software, of the effect of varying the fluid viscosity in a tank stirred by a Rushton turbine. The viscosity was varied by adding carboxymethylcellulose (CMC) to the fluid (water) in the vessel. We first studied the flow generated in the tank by the Rushton turbine, and then the effect of the viscosity variation on the thermodynamic quantities defining the flow. For this, three CMC concentrations (0.9%, 1.1% and 1.7%), corresponding to three viscosities, were considered.
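
For reference, the dimensionless group conventionally used to characterize how viscosity controls the flow regime in a stirred tank is the impeller Reynolds number; the abstract does not quote it, so the standard definition is recalled below rather than any expression specific to the study.

```latex
% Impeller Reynolds number (standard definition, not taken from the paper):
%   \rho : fluid density,  N : impeller rotational speed (rev/s),
%   D    : impeller diameter,  \mu : dynamic (apparent) viscosity.
Re = \frac{\rho \, N \, D^{2}}{\mu}
```

Increasing the CMC concentration raises the apparent viscosity, lowering Re and shifting the flow toward the transitional or laminar regime.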

Finite Element Analysis of Composite Frames in Wheelchair under Upward Loading

Finite element analysis is adopted in this preliminary study. Using the Tsai-Wu criterion and a delamination criterion, the stacking sequence [45/0₄/-45₄/90₄]ₛ is the final optimal design for the wheelchair frame. In contrast, the uni-directional laminates, i.e. [90₁₃]ₛ, [45₁₃]ₛ and [-45₁₃]ₛ, are poor designs owing to their higher failure indices.
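
For readers unfamiliar with the failure measure involved, the plane-stress form of the Tsai-Wu criterion is recalled below; this is the standard textbook expression, not the specific coefficient values used in the study. Ply failure is predicted when the failure index reaches unity.

```latex
% Tsai-Wu failure index for a ply under plane stress (standard form):
%   \sigma_1, \sigma_2 : in-plane normal stresses in the material axes,
%   \tau_{12}          : in-plane shear stress,
%   F_i, F_{ij}        : strength parameters obtained from the ply tensile,
%                        compressive and shear strengths.
F_{1}\sigma_{1} + F_{2}\sigma_{2}
  + F_{11}\sigma_{1}^{2} + F_{22}\sigma_{2}^{2} + F_{66}\tau_{12}^{2}
  + 2 F_{12}\sigma_{1}\sigma_{2} \;\ge\; 1
```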

Genetic Algorithm with Fuzzy Genotype Values and Its Application to Neuroevolution

The author proposes an extension of genetic algorithm (GA) for solving fuzzy-valued optimization problems. In the proposed GA, values in the genotypes are not real numbers but fuzzy numbers. Evolutionary processes in GA are extended so that GA can handle genotype instances with fuzzy numbers. The proposed method is applied to evolving neural networks with fuzzy weights and biases. Experimental results showed that fuzzy neural networks evolved by the fuzzy GA could model hidden target fuzzy functions well despite the fact that no training data was explicitly provided.
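
As an illustration of how the genetic operators can be extended to fuzzy-valued genes, the sketch below assumes symmetric triangular fuzzy numbers encoded as (center, width) pairs, with blend crossover and Gaussian mutation; the paper does not specify its representation or operators, so every detail here is an assumption.

```python
# Illustrative sketch only (not the paper's algorithm): a GA whose genes are
# fuzzy numbers, assumed here to be symmetric triangular and encoded as
# [center, width >= 0] pairs.
import random

def random_gene():
    # One gene = one fuzzy number: [center, width].
    return [random.uniform(-1.0, 1.0), random.uniform(0.0, 0.5)]

def crossover(parent_a, parent_b):
    # Blend crossover applied to centers and widths alike.
    child = []
    for (ca, wa), (cb, wb) in zip(parent_a, parent_b):
        t = random.random()
        child.append([t * ca + (1 - t) * cb, t * wa + (1 - t) * wb])
    return child

def mutate(genotype, rate=0.1, sigma=0.1):
    # Gaussian perturbation; widths are clipped so each gene stays a
    # valid fuzzy number (non-negative spread).
    for gene in genotype:
        if random.random() < rate:
            gene[0] += random.gauss(0.0, sigma)
            gene[1] = max(0.0, gene[1] + random.gauss(0.0, sigma))
    return genotype

def evolve(fitness, n_genes, pop_size=30, generations=100):
    # fitness maps a genotype (list of fuzzy genes) to a value to minimize,
    # e.g. the error of a neural network with fuzzy weights and biases.
    pop = [[random_gene() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # lower = better
        survivors = pop[:pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)
```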

Development of Regression Equation for Surface Finish and Analysis of Surface Integrity in EDM

Electrical discharge machining (EDM) is a relatively modern machining process that has distinct advantages over other machining processes and can machine Ti-alloys effectively. The present study focuses on the development of a regression equation, based on response surface methodology (RSM), for correlating the interactive and higher-order influences of the machining parameters on the surface finish of the titanium alloy Ti-6Al-4V. The process parameters selected in this study are discharge current, pulse-on time, pulse-off time and servo voltage. Machining was accomplished using a graphite electrode with negative polarity. Analysis of variance is employed to ascertain the adequacy of the developed regression model. Experiments based on the central composite design of the response surface method were carried out. Scanning electron microscopy (SEM) analysis was performed to investigate the surface topography of the EDMed workpiece. The results show that the proposed regression equation can predict the surface roughness effectively. Lower discharge current and shorter pulse-on time yield a better surface finish.
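
The generic form of the second-order model on which such an RSM regression equation is built is recalled below; the fitted coefficient values for surface roughness versus the four parameters are reported in the paper itself and are not reproduced here.

```latex
% Second-order response surface model (generic form, k = 4 parameters here):
%   Y : response (surface roughness),  x_i : coded process parameters,
%   \beta : regression coefficients,   \varepsilon : residual error.
Y = \beta_{0} + \sum_{i=1}^{k}\beta_{i}x_{i}
    + \sum_{i=1}^{k}\beta_{ii}x_{i}^{2}
    + \sum_{i<j}\beta_{ij}x_{i}x_{j} + \varepsilon
```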

Performance Analysis of Wavelet Based Multiuser MIMO OFDM

Wavelet analysis has some strong advantages over Fourier analysis, as it allows a time-frequency domain analysis with optimal resolution and flexibility. As a result, wavelets have been applied successfully in almost all fields of communication systems, including OFDM, which is a strong candidate for the next generation of wireless technology. In this paper, the performance of wavelet based Multiuser Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MU-MIMO OFDM) systems is analyzed in terms of BER. It is shown that the wavelet based systems outperform the classical FFT based systems. The analysis also reveals an interesting result: with Regularized Channel Inversion (RCI) beamforming, the wavelet based OFDM system maintains a constant error performance for any number of users and outperforms the FFT based system in all possible multiuser scenarios. Extensive computer simulations show that a PAPR reduction of up to 6.8 dB can be obtained with M = 64.
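
For context, Regularized Channel Inversion beamforming is commonly written in the form below; the abstract does not give its exact expression, so this is the generic textbook formulation rather than the authors' own.

```latex
% Regularized Channel Inversion (RCI) precoder (generic form):
%   H      : aggregate multiuser downlink channel matrix,
%   \alpha : regularization factor (often tied to the noise level and
%            the number of users),
%   c      : scaling chosen to satisfy the transmit power constraint.
\mathbf{W} = c\,\mathbf{H}^{H}\left(\mathbf{H}\mathbf{H}^{H}
             + \alpha\,\mathbf{I}\right)^{-1}
```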

Leakage Reduction ONOFIC Approach for Deep Submicron VLSI Circuits Design

Minimizing power dissipation and chip area while achieving higher circuit performance are the key requirements in the deep submicron regime. Leakage current increases sharply in this regime and directly affects the power dissipation of logic circuits. With the increasing importance of portable systems, power dissipation as well as high performance is a crucial concern in deep submicron design. A number of leakage reduction techniques have been employed to reduce the leakage current in the deep submicron region, but each involves trade-offs in controlling the leakage current. The ONOFIC approach gives an excellent compromise between power dissipation and propagation delay for designing efficient CMOS logic circuits. In this article, the ONOFIC approach is compared with the LECTOR technique, and the results show that ONOFIC significantly reduces the power dissipation and enhances the speed of the logic circuits. The lower power-delay product is the main outcome of this approach and makes it an effective leakage reduction technique.

Consumer Online Shopping Behavior: The Effect of Internet Marketing Environment, Product Characteristics, Familiarity and Confidence, and Promotional Offer

Online shopping enables consumers to search for information and purchase products or services through direct interaction with an online store. This study aims to examine the effect of the Internet marketing environment, product characteristics, familiarity and confidence, and promotional offers on consumer online shopping behavior. Using simple random sampling, 200 questionnaires were distributed to respondents, who are students and staff at a public university in the Federal Territory of Labuan, Malaysia. Multiple regression analysis was used as the statistical measure to determine the strength of the relationship between the dependent variable and a series of independent variables. Results revealed that familiarity and confidence had the greatest influence on consumer online shopping behavior, followed by promotional offers. A clear understanding of consumer online shopping behavior can help marketing managers predict the online shopping rate and evaluate the future growth of online commerce.
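
For readers who want to reproduce this kind of analysis, a minimal sketch of a multiple regression fit is given below; the data file, column names and the use of the statsmodels library are illustrative assumptions and do not come from the study.

```python
# Minimal sketch of a multiple regression analysis (not the authors' code).
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")

predictors = ["internet_marketing_environment", "product_characteristics",
              "familiarity_confidence", "promotional_offer"]
X = sm.add_constant(df[predictors])        # add intercept term
y = df["online_shopping_behavior"]

model = sm.OLS(y, X).fit()
print(model.summary())                     # coefficients, t-statistics, R-squared
```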

Exploring Additional Intention Predictors within Dietary Behavior among Type 2 Diabetes

Objective: This study explored the possibility of integrating Health Belief concepts as additional predictors of intention to adopt a recommended diet-category within the Theory of Planned Behavior (TPB). Methods: The study adopted a Sequential Exploratory Mixed Methods approach. Qualitative data were generated on attitude, subjective norm, perceived behavioral control and perceptions of predetermined diet-categories, including perceived susceptibility, perceived benefits, perceived severity and cues to action. Synthesis of the qualitative data was done using a constant comparative approach during phase 1. A survey tool developed from the qualitative results was used to collect information on the same concepts from 237 eligible Type 2 diabetics. Data analysis included the use of Structural Equation Modeling in Analysis of Moment Structures to explore the possibility of including perceived susceptibility, perceived benefits, perceived severity and cues to action as additional intention predictors in a single nested model. Results: Two models passed the goodness-of-fit tests based on the common fit indicators used: one nested on the traditional TPB model {χ2 = 223.3, df = 77, p = .02, χ2/df = 2.9; TLI = .93; CFI = .91; RMSEA (90% CI) = .090 (.039, .146)} and the newly proposed Planned Behavior Health Belief (PBHB) model {χ2 = 743.47, df = 301, p = .019; TLI = .90; CFI = .91; RMSEA (90% CI) = .079 (.031, .14)}. Conclusion: The newly developed PBHB model ranked higher than the traditional TPB model with reference to the chi-square ratios (PBHB: χ2/df = 2.47, p = 0.19 against TPB: χ2/df = 2.9, p = 0.02). The integrated model can be used to motivate Type 2 diabetics towards healthy eating.

Study of the Effects of Ceramic Nano-Pigments in Cement Mortar Corrosion Caused by Chlorine Ions

Superfine pigments, consisting of natural and artificial pigments made of mineral soil with special characteristics, are used in cementitious materials for various purposes. These pigments can decrease the amount of cement needed without loss of performance and strength, and can also change the monotonous and turbid colours of concrete into various attractive and light colours. In this study, the mechanical strength of cement mortars containing ceramic nano-pigments and their resistance to chloride and halogen attack in an aggressive environment are studied. This research suggests utilising ceramic nano-pigments between 50 and 1000 nm in size to obtain full-depth coloured concrete, prevent chlorine penetration into the concrete up to a certain depth, and control corrosion in steel rebar, as measured with a Potentiostat (EG&G) apparatus.

Time Series Regression with Meta-Clusters

This paper presents a preliminary attempt to apply classification of time series using meta-clusters in order to improve the quality of regression models. Here, clustering was performed to obtain normally distributed subgroups of time series from wastewater treatment plant inflow data, which are composed of several groups differing in mean value. Two simple algorithms, K-means and EM, were chosen as the clustering methods. The Rand index was used to measure the similarity. After simple meta-clustering, a regression model was built for each subgroup. The final model was a sum of the subgroup models. The quality of the obtained model was compared with that of a regression model built using the same explanatory variables but without clustering of the data. Results were compared using the coefficient of determination (R2), the mean absolute percentage error (MAPE) as a measure of prediction accuracy, and comparison on a linear chart. The preliminary results allow us to foresee the potential of the presented technique.
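
A minimal sketch of the subgroup-regression idea is given below; it is not the authors' code, it uses scikit-learn's K-means (the EM variant would use a Gaussian mixture instead), and it assumes X and y are plain NumPy arrays with no zero targets.

```python
# Sketch of per-cluster regression: cluster the data, fit one linear model
# per subgroup, then evaluate the combined prediction with R2 and MAPE.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def cluster_regression(X, y, n_clusters=3):
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(X)
    models = {c: LinearRegression().fit(X[labels == c], y[labels == c])
              for c in np.unique(labels)}

    # The "final model" is the union of the subgroup models: each sample is
    # predicted by the model of the cluster it belongs to.
    y_hat = np.empty_like(y, dtype=float)
    for c, m in models.items():
        y_hat[labels == c] = m.predict(X[labels == c])

    r2 = r2_score(y, y_hat)
    mape = np.mean(np.abs((y - y_hat) / y)) * 100.0   # assumes y has no zeros
    return models, r2, mape
```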

Molecular Detection and Characterization of Infectious Bronchitis Virus from Libya

Infectious bronchitis virus (IBV) is a very dynamic and evolving virus, causing major economic losses to the global poultry industry. Recently, the Libyan poultry industry faced a severe outbreak of respiratory distress associated with high mortality and a dramatic drop in egg production. Tracheal and cloacal swabs were analyzed for several poultry viruses. IBV was detected using SYBR Green I real-time PCR based on the nucleocapsid (N) gene. Sequence analysis of the partial N gene indicated high similarity (~94%) to IBV strain 3382/06, which was isolated in Taiwan. Even though strain 3382/06 is most similar to the Mass-type H120, the isolate has been implicated as an intertypic recombinant of three putative parental IBV strains, namely H120, Taiwan strain 1171/92 and China strain CK/CH/LDL/97I. Complete sequencing and antigenicity studies of the Libyan IBV strains are currently underway to determine the evolution of the virus and its importance for vaccine-induced immunity. In this paper we document, for the first time, the presence of a possible variant IBV strain from Libya, which requires a substantial change in the vaccination program.

A Novel Application of Network Equivalencing Method in Time Domain to Precise Calculation of Dead Time in Power Transmission Lines

Various studies have shown that about 90% of single line-to-ground faults occurring on high-voltage transmission lines are transient in nature. This type of fault is cleared by a temporary outage (single-phase auto-reclosure). The interval between the opening and reclosing of the faulted-phase circuit breakers is called the “dead time” and typically lasts several hundred milliseconds. To adjust traditional single-phase auto-reclosures, which are usually not intelligent, the dead time must be calculated precisely off-line. If the dead time used to adjust the single-phase auto-reclosure is less than the real dead time, the reclosing of the circuit breakers seriously threatens the power system. Therefore, this paper presents a novel approach for the precise calculation of dead time in power transmission lines based on network equivalencing in the time domain. This approach offers considerably higher precision than the traditional method based on the Thevenin equivalent circuit. To compare the proposed approach with the traditional method, a comprehensive EMTP-ATP simulation is performed on an extensive power network.

The Visual Inspection of Surgical Tasks Using Machine Vision: Applications to Robotic Surgery

In this paper, the feasibility of using machine vision to assess task completion in a surgical intervention is investigated, with the aim of incorporating vision based inspection in robotic surgery systems. The visually rich operative field presents a good environment for the development of automated visual inspection techniques in these systems, enabling a more comprehensive approach when performing a surgical task. As a proof of concept, machine vision techniques were used to distinguish the two possible outcomes, i.e. satisfactory or unsatisfactory, of three primary surgical tasks involved in creating a burr hole in the skull, namely incision, retraction, and drilling. Encouraging results were obtained for the three tasks under consideration, as demonstrated by experiments on cadaveric pig heads. These findings suggest the potential of machine vision to validate successful task completion in robotic surgery systems. Finally, the potential of using machine vision in the operating theatre, and the challenges that must be addressed, are identified and discussed.

Adaptive WiFi Fingerprinting for Location Approximation

WiFi has become an essential and widely used technology, largely because of its convenience for mobile devices, and it serves Internet users worldwide. Many location based services available today use Wireless Fidelity (WiFi) signal fingerprinting; a common example gaining popularity is Foursquare. In this work, the WiFi signal is used to estimate the user's or client's location. As with GPS, the fingerprinting method needs a floor plan to increase the accuracy of location estimation. Still, inconsistency of the WiFi signal makes the estimate differ at different time intervals, so an adaptive method is needed to obtain the most accurate signal at all times. WiFi signals are heavily distorted by external factors such as physical objects, radio frequency interference, electrical interference, and environmental factors, to name a few. Because of these factors, this work reduces the signal noise and estimates the location using the Nearest Neighbour method based on past activities of the signal, increasing the accuracy to more than 80%. The repository further increases the accuracy by using Artificial Neural Network (ANN) pattern matching, and it acts as the server that supports the client-side application's decisions. Numerous previous works have collected signal strengths into such repositories over the years, but these were mostly static. In this work, proposed solutions for adaptively matching the received signal to the data in the repository are highlighted; with this approach, location estimation can be done more accurately. The adaptive update allows the latest location fingerprint to be stored in the repository; redundant location fingerprints are removed and only the updated version of each fingerprint is kept. How the user's location is estimated is described further in the proposed solution section. A review of previous works found the Artificial Neural Network to be the most feasible method for updating the repository and making it adaptive; its function is to match the pattern of the received WiFi signal against the existing data in the repository.
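
As a concrete illustration of the nearest-neighbour step only (the ANN pattern matching and the adaptive repository update are not reproduced), the sketch below matches a live RSSI scan against stored fingerprints; the access-point names, locations and signal values are invented examples.

```python
# Simplified WiFi fingerprinting by nearest neighbour in signal space.
import math

# Repository: location label -> averaged RSSI (dBm) per access point.
repository = {
    "room_a": {"ap1": -45, "ap2": -70, "ap3": -60},
    "room_b": {"ap1": -68, "ap2": -48, "ap3": -75},
    "hall":   {"ap1": -60, "ap2": -62, "ap3": -50},
}

def rssi_distance(sample, fingerprint):
    # Euclidean distance over the access points seen in both scans.
    shared = set(sample) & set(fingerprint)
    return math.sqrt(sum((sample[ap] - fingerprint[ap]) ** 2 for ap in shared))

def estimate_location(sample):
    # Nearest neighbour: the stored fingerprint closest to the live scan.
    return min(repository, key=lambda loc: rssi_distance(sample, repository[loc]))

print(estimate_location({"ap1": -47, "ap2": -69, "ap3": -61}))   # -> room_a
```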

Improved Ant Colony Optimization for Solving Reliability Redundancy Allocation Problems

This paper presents an improved ant colony optimization (IACO) for solving the reliability redundancy allocation problem (RAP) in order to maximize system reliability. To improve the performance of the ACO algorithm, two additional techniques, i.e. a neighborhood search and a re-initialization process, are introduced. To show its efficiency and effectiveness, the proposed IACO is applied to solve three RAPs. Additionally, the results of the proposed IACO are compared with those of conventional heuristic approaches, i.e. the genetic algorithm (GA), particle swarm optimization (PSO) and ant colony optimization (ACO). The experimental results show that the proposed IACO is capable of obtaining higher quality solutions in less computational time.
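
To make the two add-ons concrete, the sketch below grafts a simple neighborhood search and a stagnation-triggered pheromone re-initialization onto a bare-bones ACO for a small series-parallel RAP; the component reliabilities, costs, budget and parameter values are invented, and the construction and update rules are deliberately simplified rather than those of the paper.

```python
# Rough illustration only: simplified ACO for redundancy allocation with a
# neighborhood search and a re-initialization process (invented problem data).
import random

r = [0.80, 0.85, 0.90, 0.75]          # component reliability per subsystem
c = [3.0, 4.0, 2.0, 5.0]              # component cost per subsystem
budget = 40.0
levels = [1, 2, 3, 4]                 # allowed redundancy levels

def reliability(x):
    out = 1.0
    for ri, ni in zip(r, x):
        out *= 1.0 - (1.0 - ri) ** ni
    return out

def fitness(x):
    cost = sum(ci * ni for ci, ni in zip(c, x))
    return reliability(x) if cost <= budget else 0.0   # penalize infeasible

def construct(tau):
    # Each ant picks a redundancy level per subsystem, biased by pheromone.
    return [random.choices(levels, weights=tau[i])[0] for i in range(len(r))]

def neighborhood_search(x):
    # Try +/-1 on every subsystem and keep the best feasible neighbor.
    best = x[:]
    for i in range(len(x)):
        for d in (-1, 1):
            y = x[:]
            y[i] = min(max(y[i] + d, levels[0]), levels[-1])
            if fitness(y) > fitness(best):
                best = y
    return best

def iaco(n_ants=20, iters=200, rho=0.1, stall_limit=30):
    tau = [[1.0] * len(levels) for _ in range(len(r))]
    best, stall = None, 0
    for _ in range(iters):
        iter_best = max((construct(tau) for _ in range(n_ants)), key=fitness)
        iter_best = neighborhood_search(iter_best)
        if best is None or fitness(iter_best) > fitness(best):
            best, stall = iter_best, 0
        else:
            stall += 1
        if stall > stall_limit:           # re-initialization when stagnating
            tau = [[1.0] * len(levels) for _ in range(len(r))]
            stall = 0
            continue
        for i in range(len(r)):           # evaporation + reinforcement
            for k in range(len(levels)):
                tau[i][k] *= (1.0 - rho)
            tau[i][levels.index(best[i])] += rho * fitness(best)
    return best, reliability(best)

print(iaco())
```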

Fractional Masks Based On Generalized Fractional Differential Operator for Image Denoising

This paper introduces an image denoising algorithm based on the generalized Srivastava-Owa fractional differential operator for removing Gaussian noise from digital images. The algorithm constructs the structures of n×n fractional masks. Experiments show that the fractional differential-based denoising algorithm efficiently smooths Gaussian-noisy images at different noise levels. The denoising performance is measured using the peak signal-to-noise ratio (PSNR) of the denoised images. The results show improved performance (higher PSNR values) compared with the standard Gaussian smoothing filter.
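
The PSNR figure of merit used to quantify the denoising performance is the standard one recalled below; this definition is generic, not specific to the paper.

```latex
% Peak signal-to-noise ratio for an M x N image:
%   I        : reference (clean) image,   \hat{I} : denoised image,
%   MAX_I    : maximum possible pixel value (255 for 8-bit images).
\mathrm{MSE}  = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}
                \bigl(I(i,j)-\hat{I}(i,j)\bigr)^{2},
\qquad
\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{MAX_I^{2}}{\mathrm{MSE}}\right)
```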

PAPR Reduction in OFDM Systems Using Orthogonal Eigenvector Matrix

OFDM systems are known to have a high PAPR (Peak-to-Average Power Ratio) compared with single-carrier systems. In fact, the high PAPR is one of the most detrimental aspects of the OFDM system, as it can cause power degradation (in-band distortion) and spectral spreading (out-of-band radiation). In this paper, building on an analysis of the PAPR, an effective PAPR reduction method based on the Orthogonal Eigenvector Matrix (OEM) transform is proposed. Extensive computer simulations show that a PAPR reduction of up to 4.4 dB can be obtained without introducing in-band distortion or out-of-band radiation in the system.
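
For reference, the quantity being reduced is defined in the usual way below (the standard definition, not an expression taken from the paper); the reported reduction is the decrease in this ratio expressed in dB.

```latex
% Peak-to-average power ratio of a transmitted OFDM symbol x(t), 0 <= t < T:
\mathrm{PAPR}(x) =
  \frac{\displaystyle\max_{0 \le t < T}\lvert x(t)\rvert^{2}}
       {\mathbb{E}\!\left[\lvert x(t)\rvert^{2}\right]},
\qquad
\mathrm{PAPR_{dB}} = 10\,\log_{10}\mathrm{PAPR}(x)
```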

Comparative Analysis of Turbulent Plane Jets from a Sharp-Edged Orifice, a Beveled-Edge Orifice and a Radially Contoured Nozzle

This article experimentally investigates the flow characteristics of plane jets issuing from a sharp-edged orifice plate, a beveled-edge orifice and a radially contoured nozzle. The first two configurations exhibit saddle-backed velocity profiles while the third shows a top-hat profile. A vena contracta is found for the jet emanating from the orifice at x/h ≈ 3, while the contoured case displays a potential core extending to x/h = 5. A spurt in jet pressure on the centerline supports the vena contracta for the orifice jet. Momentum thicknesses and integral length scales grow linearly with x, although the growth of the shear layer and large-scale eddies for the orifice is greater than for the contoured case. The near-field spectrum exhibits a higher frequency of the primary eddies, concurring with the enhanced turbulence intensity. Importantly, the highly “turbulent” state of the orifice jet prevails in the far field, where the spectra confirm more energetic secondary eddies associated with the greater flapping amplitude of the orifice jet.

Analysis of Meteorological Drought Using Standardized Precipitation Index – A Case Study of Puruliya District, West Bengal, India

Drought is universally acknowledged as a phenomenon associated with scarcity of water. The Standardized Precipitation Index (SPI) expresses the actual rainfall as a standardized departure with respect to a rainfall probability distribution function. In this study, the severity and spatial pattern of meteorological drought were analyzed in Puruliya District, West Bengal, India using multi-temporal SPI. Daily gridded data for the period 1971-2005 from 4 rainfall stations surrounding the study area were collected from IMD, Pune, and used in the analysis. A Geographic Information System (GIS) was used to generate drought severity maps for the different time scales and months of the year. Temporal SPI graphs show that the maximum SPI value (extreme drought) occurred at station 3 in the year 1993. Mild and moderate droughts occur in the central portion of the study area. Severe and extreme droughts were mostly found in the northeast, northwest and southwest parts of the region.
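
In its simplest form the index is a standardized anomaly, as recalled below; in practice the rainfall series for each timescale is first fitted to a probability distribution (commonly a gamma distribution) and transformed to the standard normal, a step omitted from this simplified expression, which is given for orientation only.

```latex
% Simplified SPI for a chosen timescale:
%   P_i      : precipitation total for the period of interest,
%   \bar{P}  : long-term mean for that period,
%   \sigma_P : long-term standard deviation for that period.
\mathrm{SPI} = \frac{P_i - \bar{P}}{\sigma_P}
```

By convention, increasingly negative SPI values correspond to more severe drought categories.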

Design of a Novel Inclination Sensor Utilizing Grayscale Image

Several recent research works have utilized grayscale images for the measurement of many physical phenomena. In the present paper, we have designed an embedded inclination sensor utilizing a grayscale image, with a resolution of 0.3°. The sensor module consists of a circular metal disc laminated with a grayscale image, and an optical transceiver. The sensing principle is based on temporal changes in light intensity caused by the movement of the grayscale image with the inclination of the target surface; the variation in light intensity is detected as a voltage by the signal processing circuit (SPC). The output of the SPC is fed to a microcontroller, which displays the inclination angle digitally. The experimental results show a satisfactory performance of the sensor over a small inclination measuring range of -40° to +40°, with a sensitivity of 62 mV/°.