Carbon Nanotubes Synthesized Using Sugar Cane as a Precursor

This article deals with carbon nanotubes (CNTs) synthesized from a novel precursor, sugar cane, together with anodic aluminum oxide (AAO). The objective was to produce CNTs for use as catalyst supports in proton exchange membranes. The influence of temperature, inert gas flow rate, and precursor concentration is presented. The prepared CNTs were characterized using TEM, XRD, and Raman spectroscopy, and the surface area was determined by BET. The results show that it is possible to form CNTs from sugar cane by pyrolysis and that the product consists of multi-walled carbon nanotubes (MWCNTs). The MWCNTs are short, closed at both ends, and have a very small surface area of S_BET = 3.691 m²/g.

Optimal Trailing Edge Flap Positions of Helicopter Rotor for Various Thrust Coefficients to Solidity (Ct/σ) Ratios

This study aims to determine how the optimal locations of dual trailing-edge flaps change with the thrust coefficient to solidity (Ct/σ) ratio of a helicopter rotor, so as to achieve minimum hub vibration levels with a low penalty in required trailing-edge flap control power. Polynomial response functions are used to approximate the hub vibration and flap power objective functions. Single-objective and multi-objective optimizations are carried out with the objective of minimizing hub vibration and flap power. The optimization results show that, for minimum hub vibration, the optimal inboard flap location moves farther from the baseline value at low Ct/σ ratios and towards the blade root at high Ct/σ ratios.
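The response-surface step described above can be sketched in a few lines; the sample flap locations, vibration values, and the quadratic form below are illustrative assumptions, not the paper's actual data or surrogate:

```python
import numpy as np

# Hypothetical samples: inboard flap location (fraction of rotor radius)
# versus a normalized hub-vibration objective at one Ct/sigma ratio.
locations = np.array([0.60, 0.65, 0.70, 0.75, 0.80])
vibration = np.array([1.20, 0.95, 0.82, 0.88, 1.10])

# Fit a quadratic polynomial response function J(x) = a*x^2 + b*x + c.
a, b, c = np.polyfit(locations, vibration, deg=2)

# The surrogate's minimizer (valid while a > 0) approximates the
# vibration-optimal flap location without further rotor simulations.
x_opt = -b / (2 * a)
```

In practice the response surface would be multivariate (both flap locations) and re-fitted at each Ct/σ ratio, but the minimization step is the same.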

Chatter Stability Characterization of Full-Immersion End-Milling Using a Generalized Modified Map of the Full-Discretization Method, Part 1: Validation of Results and Study of Stability Lobes by Numerical Simulation

The objective of this work is to generate and discuss stability results for the fully immersed end-milling process with the parameters: tool mass m = 0.0431 kg, tool natural frequency ωn = 5700 rad s^-1, damping factor ξ = 0.002, and workpiece cutting coefficient C = 3.5×10^7 N m^-7/4. Different numbers of teeth are considered for the end-milling. Both 1-DOF and 2-DOF chatter models of the system are generated on the basis of a non-linear force law. Chatter stability analysis is carried out using a modified form (generalized for both the 1-DOF and 2-DOF models) of the recently developed full-discretization method. The full-immersion three-tooth end-milling process, together with higher-toothed end-milling processes, has secondary Hopf bifurcation lobes (SHBLs) that exhibit one turning (minimum) point each. Each such SHBL is demarcated by its minimum point into two portions: (i) the lower spindle speed portion (LSSP), in which bifurcations occur in the right half of the unit circle centred at the origin of the complex plane, and (ii) the higher spindle speed portion (HSSP), in which bifurcations occur in the left half of the unit circle. Comments are made on why bifurcation lobes should generally become bigger and more visible as spindle speed increases, and why flip bifurcation lobes (FBLs) may be invisible in the low-speed stability chart yet visible in the high-speed stability chart of the fully immersed three-tooth miller.

Transient Three Dimensional FE Modeling for Thermal Analysis of Pulsed Current Gas Tungsten Arc Welding of Aluminum Alloy

This paper presents the results of a study aimed at establishing the temperature distribution during the welding of aluminum alloy plates by Pulsed Current Gas Tungsten Arc Welding (PCGTAW) and Constant Current Gas Tungsten Arc Welding (CCGTAW). Pulsing of the GTA welding current influences the dimensions and solidification rate of the fused zone; it also reduces the weld pool volume and hence produces a narrower bead. In this investigation, the base material considered was aluminum alloy AA 6351 T6, which finds use in aircraft, automobile, and high-speed train components. A finite element analysis was carried out using ANSYS, and the FEA results were compared with the experimental results. It is evident from the study that finite element analysis using ANSYS can be used effectively to model the PCGTAW process and obtain the temperature distribution.

Numerical Analysis of the Fracture Process in Locomotive Steel Wheels

Railway vehicle wheels are designed to operate in harsh environments and to withstand high hydrostatic contact pressures. This situation may result in critical circumstances, in particular wheel breakage. This paper presents a time history of a series of broken wheels observed during 2007-2008 in the locomotive fleet of Iranian Railways. Such fractures in locomotive wheels had never been reported before. Due to the importance of this issue, a research study was launched to find the potential causes of the problem. The authors introduce an FEM model to indicate how and where the wheels could have been affected during operation. The modeling results are then presented and discussed in detail.

CFD Study of the Fluid Viscosity Variation and Effect on the Flow in a Stirred Tank

Stirred tanks are widely used in all industrial sectors. The need for further studies of the mixing operation and its different aspects comes from the diversity of agitation tools and implemented geometries, in addition to the specific characteristics of each application. Viscous fluids are often encountered in industry and represent the majority of treated cases, as in the polymer sector, food processing, pharmaceuticals, and cosmetics. In this paper, we therefore present a three-dimensional numerical study using the software Fluent to examine the effect of varying the fluid viscosity in a tank stirred by a Rushton turbine. The viscosity variation was obtained by adding carboxymethylcellulose (CMC) to the fluid (water) in the vessel. We first studied the flow generated in the tank by the Rushton turbine, and then the effect of the fluid viscosity variation on the thermodynamic quantities defining the flow. For this, three viscosities (0.9% CMC, 1.1% CMC, and 1.7% CMC) were considered.

Genetic Algorithm with Fuzzy Genotype Values and Its Application to Neuroevolution

The author proposes an extension of the genetic algorithm (GA) for solving fuzzy-valued optimization problems. In the proposed GA, the values in the genotypes are not real numbers but fuzzy numbers, and the evolutionary operators are extended so that the GA can handle genotype instances containing fuzzy numbers. The proposed method is applied to evolving neural networks with fuzzy weights and biases. Experimental results showed that fuzzy neural networks evolved by the fuzzy GA could model hidden target fuzzy functions well, even though no training data were explicitly provided.
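As a minimal illustration of the idea (the encoding, operators, and toy objective below are assumptions for the sketch, not the paper's actual design), a GA can carry triangular fuzzy numbers (left, mode, right) in its genotypes and evolve them with shape-preserving crossover and mutation:

```python
import random

TARGET = 3.0  # toy goal: gene centroids should approach this value

def centroid(gene):
    l, m, r = gene
    return (l + m + r) / 3.0  # defuzzified value of a triangular number

def fitness(genotype):
    return -sum((centroid(g) - TARGET) ** 2 for g in genotype)

def random_gene():
    m = random.uniform(0.0, 6.0)
    spread = random.uniform(0.0, 1.0)
    return (m - spread, m, m + spread)

def mutate(gene, sigma=0.3):
    l, m, r = (x + random.gauss(0, sigma) for x in gene)
    l, m, r = sorted((l, m, r))  # restore l <= m <= r after perturbation
    return (l, m, r)

def crossover(a, b):
    # Elementwise average keeps the triangular shape well-formed.
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

random.seed(0)
pop = [[random_gene() for _ in range(2)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: len(pop) // 2]          # elitist selection
    children = []
    for _ in range(len(pop) - len(elite)):
        p1, p2 = random.sample(elite, 2)
        children.append([mutate(crossover(g1, g2))
                         for g1, g2 in zip(p1, p2)])
    pop = elite + children
best = max(pop, key=fitness)
```

The key point mirrored from the abstract is that selection, crossover, and mutation operate on whole fuzzy numbers rather than on single real values.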

Performance Analysis of Wavelet Based Multiuser MIMO OFDM

Wavelet analysis has some strong advantages over Fourier analysis: it allows time-frequency domain analysis with optimal resolution and flexibility. As a result, wavelets have been applied successfully in almost all fields of communication systems, including OFDM, which is a strong candidate for the next generation of wireless technology. In this paper, the performance of wavelet-based Multiuser Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MU-MIMO OFDM) systems is analyzed in terms of BER. It is shown that the wavelet-based systems outperform classical FFT-based systems. The analysis also reveals an interesting result: the wavelet-based OFDM system has constant error performance under Regularized Channel Inversion (RCI) beamforming for any number of users, and it outperforms the FFT-based system in all multiuser scenarios considered. Extensive computer simulations show that a PAPR reduction of up to 6.8 dB can be obtained with M = 64.
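The PAPR figure quoted above can be made concrete with a small sketch. The FFT-based QPSK symbol below is a generic baseline illustration of how PAPR is measured for M = 64 subcarriers, not the paper's wavelet transceiver:

```python
import numpy as np

rng = np.random.default_rng(1)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Generic FFT-based OFDM symbol with M = 64 QPSK subcarriers.
M = 64
qpsk = (rng.choice([-1, 1], M) + 1j * rng.choice([-1, 1], M)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk) * np.sqrt(M)  # scaled to unit average power
papr = papr_db(ofdm_symbol)
```

A reported reduction of 6.8 dB means the wavelet system's measured `papr` is 6.8 dB lower than the FFT baseline computed this way.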

Dual-Network Memory Model for Temporal Sequences

In neural networks, when new patterns are learned, they radically interfere with previously stored patterns; this drawback is called catastrophic forgetting. We have previously proposed a biologically inspired dual-network memory model that greatly reduces this forgetting for static patterns. In this model, information is first stored in the hippocampal network and thereafter transferred to the neocortical network using pseudopatterns. Because temporal sequence learning is more important than static pattern learning in the real world, in this study we improve our dual-network memory model so that it can handle temporal sequences without catastrophic forgetting. Computer simulation results show the effectiveness of the proposed dual-network memory model.
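The pseudopattern transfer mentioned above can be sketched in a few lines: a trained network's responses to random probe inputs serve as surrogate training pairs, so old knowledge can be rehearsed without storing the original patterns. The linear sign network and sizes below are stand-ins for the model's actual hippocampal network:

```python
import numpy as np

rng = np.random.default_rng(0)

def hippocampal_net(x, W):
    # Stand-in for the trained fast-learning network: a fixed linear map
    # followed by a sign nonlinearity, as in binary-pattern models.
    return np.sign(W @ x)

n = 8
W = rng.standard_normal((n, n))

def make_pseudopatterns(num, W):
    # Random binary probe inputs paired with the network's responses.
    probes = rng.choice([-1.0, 1.0], size=(num, n))
    return [(p, hippocampal_net(p, W)) for p in probes]

pseudo = make_pseudopatterns(5, W)  # rehearsal set for the slow network
```

The neocortical network would then be trained on `pseudo` interleaved with new material, which is what protects the old traces from being overwritten.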

Exploring Additional Intention Predictors within Dietary Behavior among Type 2 Diabetics

Objective: This study explored the possibility of integrating health belief concepts as additional predictors of the intention to adopt a recommended diet category within the Theory of Planned Behavior (TPB). Methods: The study adopted a sequential exploratory mixed-methods approach. Qualitative data were generated on attitude, subjective norm, perceived behavioral control, and perceptions of predetermined diet categories, including perceived susceptibility, perceived benefits, perceived severity, and cues to action. The qualitative data were synthesized using a constant comparative approach during phase 1. A survey tool developed from the qualitative results was used to collect information on the same concepts from 237 eligible Type 2 diabetics. Data analysis employed structural equation modeling in Analysis of Moment Structures (AMOS) to explore the possibility of including perceived susceptibility, perceived benefits, perceived severity, and cues to action as additional intention predictors in a single nested model. Results: Two models, a nested model based on the traditional TPB {χ² = 223.3, df = 77, p = .02, χ²/df = 2.9; TLI = .93; CFI = .91; RMSEA (90% CI) = .090 (.039, .146)} and the newly proposed Planned Behavior Health Belief (PBHB) model {χ² = 743.47, df = 301, p = .019; TLI = .90; CFI = .91; RMSEA (90% CI) = .079 (.031, .14)}, passed the goodness-of-fit tests based on the commonly used fit indicators. Conclusion: The newly developed PBHB model ranked higher than the traditional TPB model with reference to the chi-square ratios (PBHB: χ²/df = 2.47, p = .019 against TPB: χ²/df = 2.9, p = .02). The integrated model can be used to motivate Type 2 diabetics towards healthy eating.

Time Series Regression with Meta-Clusters

This paper presents a preliminary attempt to apply classification of time series using meta-clusters in order to improve the quality of regression models. Clustering was performed to obtain normally distributed subgroups of time series data from wastewater treatment plant inflow data, which are composed of several groups differing in mean value. Two simple algorithms, K-means and EM, were chosen as clustering methods, and the Rand index was used to measure their similarity. After this simple meta-clustering, a regression model was built for each subgroup, and the final model was the sum of the subgroup models. The quality of the obtained model was compared with that of a regression model built with the same explanatory variables but without clustering of the data. The results were compared using the coefficient of determination (R²), the mean absolute percentage error (MAPE) as a measure of prediction accuracy, and comparison on a linear chart. The preliminary results allow us to foresee the potential of the presented technique.
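The pipeline above (cluster the series into mean-level subgroups, fit one regression per subgroup, compare MAPE against a single global model) can be sketched on synthetic data. The two-regime inflow series and the tiny 1-D k-means below are illustrative stand-ins for the paper's data and tooling:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the inflow data: two regimes with different
# mean levels, each with its own linear trend plus noise.
t = np.arange(200, dtype=float)
regime = (rng.random(200) < 0.5).astype(int)
y = np.where(regime == 0, 10 + 0.02 * t, 30 + 0.05 * t) + rng.normal(0, 0.5, 200)

def kmeans_1d(x, k, iters=50):
    # Minimal 1-D k-means: assign to nearest center, recompute means.
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return labels

labels = kmeans_1d(y, k=2)

def mape(actual, pred):
    return np.mean(np.abs((actual - pred) / actual)) * 100

# One regression model per subgroup; the final prediction is their union.
pred_split = np.empty_like(y)
for j in (0, 1):
    mask = labels == j
    coef = np.polyfit(t[mask], y[mask], deg=1)
    pred_split[mask] = np.polyval(coef, t[mask])

# Baseline: one global regression with no clustering.
pred_global = np.polyval(np.polyfit(t, y, deg=1), t)
improvement = mape(y, pred_global) - mape(y, pred_split)
```

Because the global line has to straddle both mean levels, the per-subgroup models achieve a markedly lower MAPE, which is the effect the paper exploits.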

Optical Switching Based on Bragg Solitons in a Nonuniform Fiber Bragg Grating

In this paper, we consider nonlinear pulse propagation through a nonuniform birefringent fiber Bragg grating (FBG) whose index modulation depth varies along the propagation direction. Here, the pulse propagation is governed by the nonlinear birefringent coupled mode (NLBCM) equations. To form a Bragg soliton outside the photonic bandgap (PBG), the NLBCM equations are reduced to the well-known NLS-type equation by multiple scale analysis. Since we consider pulse propagation in a nonuniform FBG, propagation outside the PBG is governed by the inhomogeneous NLS (INLS) equation rather than the NLS. We then discuss the formation of a soliton in the FBG, known as a Bragg soliton, whose central frequency lies outside but close to the PBG of the grating structure. Further, we discuss Bragg soliton compression arising from a delicate balance between SPM and the varying grating-induced dispersion. In addition, Bragg soliton collisions, Bragg soliton switching, and possible logic gates are also discussed.

Molecular Detection and Characterization of Infectious Bronchitis Virus from Libya

Infectious bronchitis virus (IBV) is a very dynamic and evolving virus, causing major economic losses to the global poultry industry. Recently, the Libyan poultry industry faced a severe outbreak of respiratory distress associated with high mortality and a dramatic drop in egg production. Tracheal and cloacal swabs were analyzed for several poultry viruses. IBV was detected using SYBR Green I real-time PCR based on the nucleocapsid (N) gene. Sequence analysis of the partial N gene indicated high similarity (~94%) to IBV strain 3382/06, which was isolated in Taiwan. Although strain 3382/06 is most similar to the Mass-type strain H120, it has been implicated as an intertypic recombinant of three putative parental IBV strains, namely H120, Taiwan strain 1171/92, and China strain CK/CH/LDL/97I. Complete sequencing and antigenicity studies of the Libyan IBV strains are currently underway to determine the evolution of the virus and its importance for vaccine-induced immunity. In this paper we document for the first time the presence of a possible variant IBV strain in Libya, which requires a dramatic change in the vaccination program.

A Novel Application of the Network Equivalencing Method in the Time Domain to the Precise Calculation of Dead Time in Power Transmission Lines

Various studies have shown that about 90% of single line-to-ground faults occurring on high-voltage transmission lines are transient in nature. This type of fault is cleared by a temporary outage (single-phase auto-reclosure). The interval between the opening and reclosing of the faulted phase's circuit breakers is called the “dead time” and typically amounts to several hundred milliseconds. To adjust traditional single-phase auto-reclosures, which are usually not intelligent, it is necessary to calculate the dead time precisely off-line. If the dead time used to adjust the single-phase auto-reclosure is less than the real dead time, the reclosing of the circuit breakers seriously threatens the power system. In this paper, therefore, a novel approach to the precise calculation of the dead time in power transmission lines, based on network equivalencing in the time domain, is presented. This approach is considerably more precise than the traditional method based on the Thevenin equivalent circuit. To compare the proposed approach with the traditional method, a comprehensive simulation with EMTP-ATP is performed on an extensive power network.

The Visual Inspection of Surgical Tasks Using Machine Vision: Applications to Robotic Surgery

In this paper, the feasibility of using machine vision to assess task completion in a surgical intervention is investigated, with the aim of incorporating vision-based inspection into robotic surgery systems. The visually rich operative field presents a good environment for developing automated visual inspection techniques in these systems, enabling a more comprehensive approach to performing a surgical task. As a proof of concept, machine vision techniques were used to distinguish the two possible outcomes, satisfactory or unsatisfactory, of three primary surgical tasks involved in creating a burr hole in the skull, namely incision, retraction, and drilling. Encouraging results were obtained for the three tasks in experiments on cadaveric pig heads. These findings suggest the potential of machine vision for validating successful task completion in robotic surgery systems. Finally, the potential of using machine vision in the operating theatre, and the challenges that must be addressed, are identified and discussed.

Adaptive WiFi Fingerprinting for Location Approximation

WiFi has become an essential and widely used technology, popular because of its convenience for mobile devices; this is especially true for Internet users worldwide. Many location-based services available today use Wireless Fidelity (WiFi) signal fingerprinting; a common example that is gaining popularity is Foursquare. In this work, the WiFi signal is used to estimate the user's (client's) location. As with GPS, the fingerprinting method needs a floor plan to increase the accuracy of the location estimate. However, the inconsistency of WiFi signals makes the estimate differ between time intervals, so an adaptive method is needed to obtain the most accurate signal at all times. WiFi signals are heavily distorted by external factors such as physical objects, radio frequency interference, electrical interference, and environmental conditions, to name a few. To counter these factors, this work reduces the signal noise and estimates the location using a nearest-neighbour method based on the signal's past behaviour, increasing the accuracy to more than 80%. The repository further increases the accuracy through Artificial Neural Network (ANN) pattern matching, and it acts as the server supporting the client-side application's decisions. Numerous previous works have adopted methods of collecting signal strengths in a repository over the years, but most were static. In this work, we highlight proposed solutions for adaptively matching the received signal to the data in the repository. With this approach, location estimation can be done more accurately. Adaptive updating allows the latest location fingerprint to be stored in the repository; redundant location fingerprints are removed so that only the updated version of each fingerprint is kept.
How the user's location is estimated is detailed in the proposed solution section. A study of previous works found the Artificial Neural Network to be the most feasible method for updating the repository and making it adaptive; its function is to match patterns in the WiFi signal against the data already in the repository.
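A minimal sketch of the fingerprint matching and adaptive update described above. The access points, RSSI values, and the blending rule are illustrative assumptions, not the system's actual data or update policy:

```python
import math

# Hypothetical fingerprint repository: location -> RSSI vector (dBm)
# from three access points; all values are illustrative.
repository = {
    "lobby":   [-45, -70, -80],
    "lab":     [-72, -48, -66],
    "library": [-80, -65, -50],
}

def estimate_location(scan, repo):
    # Nearest-neighbour match of the live scan against stored
    # fingerprints, using Euclidean distance in signal space.
    return min(repo, key=lambda loc: math.dist(scan, repo[loc]))

def adaptive_update(location, scan, repo, alpha=0.3):
    # Blend the stale fingerprint with the latest scan so the repository
    # tracks current signal conditions instead of growing redundant entries.
    old = repo[location]
    repo[location] = [(1 - alpha) * o + alpha * s for o, s in zip(old, scan)]

scan = [-47, -68, -79]            # a live scan taken near the lobby
loc = estimate_location(scan, repository)
adaptive_update(loc, scan, repository)
```

In the described system an ANN would replace the plain distance comparison for pattern matching, but the repository lookup and overwrite cycle is the same.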

Improved Ant Colony Optimization for Solving Reliability Redundancy Allocation Problems

This paper presents an improved ant colony optimization (IACO) algorithm for solving the reliability redundancy allocation problem (RAP) so as to maximize system reliability. To improve the performance of the ACO algorithm, two additional techniques, a neighborhood search and a re-initialization process, are introduced. To show its efficiency and effectiveness, the proposed IACO is applied to three RAPs, and its results are compared with those of conventional heuristic approaches, i.e. the genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO). The experimental results show that the proposed IACO obtains higher-quality solutions in less computational time.
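The neighborhood-search technique can be illustrated on a toy series-parallel RAP; the component reliabilities, costs, and budget below are invented for the sketch, and the pheromone loop and re-initialization of the full IACO are omitted:

```python
import itertools

# Toy RAP: choose redundancy level n_i per subsystem to maximize the
# series-system reliability under a cost budget (all numbers illustrative).
r = [0.80, 0.85, 0.90]   # component reliabilities
c = [3, 4, 2]            # component costs
BUDGET = 25

def reliability(n):
    prod = 1.0
    for ri, ni in zip(r, n):
        prod *= 1 - (1 - ri) ** ni   # parallel redundancy per subsystem
    return prod

def feasible(n):
    return (all(ni >= 1 for ni in n)
            and sum(ci * ni for ci, ni in zip(c, n)) <= BUDGET)

def neighborhood_search(n):
    # Local improvement around an ant's solution: try +/-1 on each
    # subsystem, keep any better feasible neighbour, repeat to a local optimum.
    best = list(n)
    improved = True
    while improved:
        improved = False
        for i, d in itertools.product(range(len(best)), (-1, 1)):
            cand = list(best)
            cand[i] += d
            if feasible(cand) and reliability(cand) > reliability(best):
                best, improved = cand, True
    return best

start = [1, 1, 1]
tuned = neighborhood_search(start)
```

In the full IACO this refinement would polish each constructed solution before the pheromone update, while re-initialization resets the pheromone trails when the search stagnates.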

Influence of Composition and Austempering Temperature on Machinability of Austempered Ductile Iron

The present investigation involves a systematic study of the machinability of austempered ductile irons (ADI) developed from four commercially viable ductile irons alloyed with 0, 0.1, 0.3, and 0.6 wt.% Ni. The influence of the Ni content, the amount of retained austenite, and the hardness of the ADI on machining behavior was studied systematically. Austempering heat treatment was carried out for 120 minutes at four temperatures (270 °C, 320 °C, 370 °C, or 420 °C) after austenitization at 900 °C for 120 min. Milling tests were performed, and the machinability index, cutting forces, and surface roughness measurements were used to evaluate machinability. The higher cutting forces, lower machinability index, and poorer surface roughness of the samples austempered at lower temperatures indicated that austempering at higher temperatures results in better machinability. The machinability of the samples austempered at 420 °C, which contained a higher fraction of retained austenite, was superior to that of the samples austempered at lower temperatures, indicating that hardness is an important factor in assessing machinability in addition to the high-carbon austenite content. The ADI with 0.6% Ni, austempered at 420 °C for 120 minutes, demonstrated the best machinability.

Application of De-Laval Nozzle Transonic Flow Field Computation Approaches

A supersonic expansion cannot be achieved within a convergent-divergent nozzle if the flow velocity does not reach the speed of sound at the throat. Computing the flow field characteristics at the throat is thus essential to the thrust developed by the nozzle, and therefore to the aircraft or rocket it propels. Several approaches have been developed to describe the transonic expansion that takes place through the throat of a De-Laval convergent-divergent nozzle. They all yield good results but share a major shortcoming: an inability to describe the transonic flow field for nozzles having a small throat radius. The approach initially developed by Kliegel & Levine expands the velocity as a series in the normalized throat radius plus unity, instead of solely the normalized throat radius or the traditional small-disturbance theory approach. The present investigation applies these three approaches for different throat radii of curvature. The method using the normalized throat radius plus unity shows better results when applied to geometries with small throat radii.

Active Segment Selection Method in EEG Classification Using Fractal Features

A BCI (Brain Computer Interface) is a communication machine that translates brain messages into computer commands. With the help of computer programs, these machines can recognize imagined tasks. Feature extraction is an important stage of EEG classification that affects both the accuracy and the computation time of signal processing. In this study we process the signal in three steps: active segment selection, fractal feature extraction, and classification. One of the great challenges in BCI applications is to improve classification accuracy and computation time together. In this paper, we use Student's two-sample t-statistics on continuous wavelet transforms for active segment selection to reduce the computation time. Next, features are extracted using well-known fractal dimension estimators of the signal, namely the Katz and Higuchi methods. In the classification stage we used the ANFIS (Adaptive Neuro-Fuzzy Inference System), FKNN (Fuzzy K-Nearest Neighbors), LDA (Linear Discriminant Analysis), and SVM (Support Vector Machine) classifiers. We found that the active segment selection method reduces the computation time, and that fractal dimension features with ANFIS analysis on the selected active segments performed best among the investigated methods for EEG classification.
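Of the two fractal features named above, the Katz dimension has a particularly compact form. The sketch below follows one common formulation of Katz's formula, treating the sampled waveform as a planar curve with unit sample spacing; the test signals are illustrative, not EEG data:

```python
import math

def katz_fd(signal):
    """Katz fractal dimension of a sampled waveform (unit sample spacing).

    L is the total curve length, d the maximal distance from the first
    point, n the number of steps, and
    D = log10(n) / (log10(n) + log10(d / L)).
    """
    n = len(signal) - 1
    L = sum(math.hypot(1.0, signal[i + 1] - signal[i]) for i in range(n))
    d = max(math.hypot(i, signal[i] - signal[0])
            for i in range(1, len(signal)))
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

# Illustrative check: a smooth ramp stays close to dimension 1, while a
# strongly alternating trace yields a higher value.
ramp = [0.1 * i for i in range(100)]
jagged = [(-1) ** i * 0.5 + 0.01 * i for i in range(100)]
```

Computed over the selected active segments, a value like this (together with the Higuchi estimate) forms the feature vector passed to the classifiers.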