Modification of RK Equation of State for Liquid and Vapor of Ammonia by Genetic Algorithm

Cubic equations of state such as the Redlich–Kwong (RK) EOS have proved to be very reliable tools for the prediction of phase behavior. Despite their good performance in compositional calculations, they usually suffer from weaknesses in the prediction of saturated liquid density. In this research, the RK equation was modified for ammonia using a genetic algorithm. The results of this study show that the modified equation is in good agreement with experimental data.
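For reference, a minimal sketch of the baseline that such modifications start from: the classical RK EOS in its standard form. The specific modified terms fitted by the genetic algorithm in this work are not reproduced in the abstract, so only the unmodified equation is shown.

```latex
% Classical Redlich-Kwong equation of state (unmodified baseline form)
P = \frac{RT}{V_m - b} - \frac{a}{\sqrt{T}\, V_m \left(V_m + b\right)},
\qquad
a = 0.42748\,\frac{R^2 T_c^{2.5}}{P_c},
\qquad
b = 0.08664\,\frac{R T_c}{P_c}
```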

A Source Point Distribution Scheme for Wave-Body Interaction Problem

A two-dimensional linear wave-body interaction problem can be solved using a desingularized integral method by placing free-surface Rankine sources over the calm water surface and satisfying boundary conditions at prescribed collocation points on the calm water surface. A new free-surface Rankine source distribution scheme, determined by the intersection points of the free surface and the body surface, is developed to reduce the numerical computation cost. Associated with this, a new treatment is given to the intersection point. Results from the present scheme are in good agreement with traditional numerical results and measurements.
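As an illustration of the general desingularized Rankine source idea (not the authors' specific distribution scheme), a minimal sketch follows: isolated 2D Rankine sources (potential proportional to ln r) are raised a small distance above the calm water surface, and their strengths are found by enforcing prescribed boundary values at collocation points on that surface. The grid, desingularization distance, and boundary values below are illustrative assumptions.

```python
import numpy as np

# Collocation points on the calm water surface z = 0 (illustrative grid)
n = 40
xc = np.linspace(-5.0, 5.0, n)
colloc = np.column_stack([xc, np.zeros(n)])

# Desingularized Rankine sources raised a small distance above the surface
ld = 1.5 * (xc[1] - xc[0])                 # assumed desingularization distance
sources = np.column_stack([xc, np.full(n, ld)])

# 2D Rankine source potential: G(x, xs) = ln|x - xs| / (2*pi)
r = np.linalg.norm(colloc[:, None, :] - sources[None, :, :], axis=-1)
A = np.log(r) / (2.0 * np.pi)              # influence matrix (r > 0 everywhere, no singularity)

# Prescribed boundary values at the collocation points (placeholder data)
phi_bc = np.sin(np.pi * xc / 5.0)

# Solve for source strengths so the combined potential matches the boundary values
sigma = np.linalg.solve(A, phi_bc)
print("max residual:", np.max(np.abs(A @ sigma - phi_bc)))
```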

The Analysis of TRACE/FRAPTRAN in the Fuel Rods of Maanshan PWR for LBLOCA

The Fuel Rod Analysis Program Transient (FRAPTRAN) code was used to study fuel rod performance during a postulated large break loss of coolant accident (LBLOCA) in the Maanshan nuclear power plant (NPP). Previous transient results from the thermal-hydraulic code TRACE, with the same LBLOCA scenario, were used as input boundary conditions for FRAPTRAN. The simulation results showed that the peak cladding temperatures and the fuel centerline temperatures were all below the 10 CFR 50.46 LOCA criteria. In addition, the maximum hoop stress was 18 MPa and the oxide thickness was 0.003 mm for the present simulation cases, which are all within the safe operating ranges. The present study confirms that this analysis method, the FRAPTRAN code combined with TRACE, is an appropriate approach for predicting fuel integrity under LBLOCA with operational ECCS.

Manufacturing of a Fully Automatic Carwash Using Intelligent Control Algorithms

In this paper, the intelligent control of a fully automatic carwash using a programmable logic controller (PLC) has been investigated and designed to perform all steps of carwashing. The intelligent controller has the ability to identify and profile the geometrical dimensions of the vehicle chassis. Vehicle dimension identification is an important feature of this control system for adjusting the position of the washing brushes and the washing duration. The study also designs a control set for simulating and building the automatic carwash. The main purpose of the simulation is to develop criteria for designing and building this type of carwash in actual size and to overcome the challenges of automation. The results of this research indicate that the proposed process control method not only increases productivity, speed, accuracy and safety, but also reduces the time and cost of washing based on a dynamic model of the vehicle. A laboratory prototype based on advanced intelligent control has been built to study the validity of the design and simulation, and its appropriate performance confirms the validity of this study.

The Effect of Multipass Cutting in Grinding Operation

Grinding requires a high specific energy, and the consequent development of high temperature at the tool-workpiece contact zone impairs workpiece quality by inducing thermal damage to the surface. The finish grinding process requires the component to be cut in more than one pass. This paper investigates the effect of multipass cutting on grinding performance in terms of surface roughness and surface defects. An experimental set-up was developed for this purpose, and a detailed comparison was made between a single pass and various numbers of cutting passes. Results showed that surface roughness increases with the number of cutting passes. A good surface finish of 0.26 μm was obtained for single-pass cutting, compared with 0.73 μm for twenty-pass cutting. It was also observed that the thickness of the white layer increased with the number of cutting passes.

Recycled Plastic Fibers for Minimizing Plastic Shrinkage Cracking of Cement Based Mortar

The development of new construction materials using recycled plastic is important to both the construction and the plastic recycling industries. Manufacturing fibers from industrial or post-consumer plastic waste is an attractive approach with benefits such as enhanced concrete performance and reduced need for landfilling. The main objective of this study is to investigate the effect of plastic fibers, obtained locally from recycled waste, on plastic shrinkage cracking of ordinary cement-based mortar. Parameters investigated include fiber length, ranging from 20 to 50 mm, and fiber volume fraction, ranging from 0% to 1.5% by volume. The test results showed significant improvement in the crack-arresting mechanism and a substantial reduction in the surface area of cracks for the mortar reinforced with recycled plastic (RP) fibers compared to plain mortar. Furthermore, the test results indicated a slight decrease in the compressive strength of mortar reinforced with different lengths and contents of recycled fibers compared to plain mortar. This study suggests that adding more than 1% of RP fibers to mortar can effectively control plastic shrinkage cracking of cement-based mortar, resulting in waste reduction and resource conservation.

Effect of Jet Diameter on Surface Quenching at Different Spatial Locations

An experimental investigation has been carried out to study the cooling of a hot horizontal stainless steel surface of 3 mm thickness with an initial temperature of 800 ± 10 °C. A round water jet at a temperature of 22 ± 1 °C was injected over the hot surface through straight tube-type nozzles of 2.5-4.8 mm diameter and 250 mm length. The experiments were performed for a jet-exit-to-target-surface spacing of 4 times the jet diameter and jet Reynolds numbers of 5000-24000. The effect of the change in jet Reynolds number on the surface quenching has been investigated from the stagnation point to a spatial location of 16 mm.
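For context, the quoted jet Reynolds number follows the usual definition Re = ρVd/μ based on the jet exit velocity and nozzle diameter. The sketch below uses room-temperature water properties and an assumed exit velocity purely for illustration; it is not taken from the study's data.

```python
# Jet Reynolds number Re = rho * V * d / mu (water near 22 C; illustrative values)
rho = 997.8        # density, kg/m^3
mu = 9.5e-4        # dynamic viscosity, Pa.s
d = 2.5e-3         # nozzle diameter, m (smallest nozzle in the study)
V = 2.0            # assumed jet exit velocity, m/s

Re = rho * V * d / mu
print(f"Re = {Re:.0f}")   # ~5300, at the lower end of the 5000-24000 range studied
```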

Disparity of Learning Styles and Cognitive Abilities in Vocational Education

This study investigates the disparity between learning styles and cognitive abilities specifically in vocational education. The Felder and Silverman Learning Styles Model (FSLSM) was applied to measure the students' learning styles, while the content of the Building Construction subject, comprising knowledge, skills and problem solving, was taken into account in constructing the elements of cognitive abilities. Building Construction is one of the vocational courses offered in the Vocational Education structure. There are four dimensions of learning styles proposed by Felder and Silverman, intended to capture student learning preferences with regard to processing (active or reflective), perception (sensing or intuitive), input of information (visual or verbal) and understanding of information (sequential or global). The Felder-Solomon Learning Styles Index was developed based on FSLSM, and its questions were used to identify each student's learning preferences. The index consists of 44 items characterizing the learning style dimensions in FSLSM. An achievement test was developed to determine the students' cognitive abilities. The quantitative data were analyzed with descriptive and inferential statistics, including Multivariate Analysis of Variance (MANOVA). The study found that students tend to be visual learners and that there are significant differences between learner types, whereas for cognitive abilities the findings differ for each type of learner in knowledge, skills and problem solving. This study illustrates the gap between learner type and cognitive abilities and explains how the connection is made. The findings may help teachers facilitate students more effectively and boost students' cognitive abilities.

Carbon Nanotubes Synthesized Using Sugar Cane as a Precursor

This article deals with carbon nanotubes (CNTs) synthesized from a novel precursor, sugar cane, and Anodic Aluminum Oxide (AAO). The objective was to produce CNTs to be used as catalyst supports for Proton Exchange Membranes. The influence of temperature, inert gas flow rate and precursor concentration is presented. The prepared CNTs were characterized using TEM, XRD and Raman spectroscopy, and the surface area was determined by BET. The results show that it is possible to form CNTs from sugar cane by pyrolysis and that the CNTs are multi-walled carbon nanotubes (MWCNTs). The MWCNTs are short and closed at both ends, with a very small surface area of S_BET = 3.691 m²/g.

CFD Study of the Fluid Viscosity Variation and Effect on the Flow in a Stirred Tank

Stirred tanks are widely used in all industrial sectors. The need for further studies of the mixing operation and its different aspects comes from the diversity of agitation tools and implemented geometries, in addition to the specific characteristics of each application. Viscous fluids are often encountered in industry and represent the majority of treated cases, as in the polymer sector, food processing, pharmaceuticals and cosmetics. For this reason, this paper presents a three-dimensional numerical study using the software Fluent to examine the effect of varying the fluid viscosity in a stirred tank equipped with a Rushton turbine. The viscosity variation was achieved by adding carboxymethylcellulose (CMC) to the fluid (water) in the vessel. In this work, we first studied the flow generated in the tank with the Rushton turbine. Second, we studied the effect of the fluid viscosity variation on the thermodynamic quantities defining the flow. For this, three viscosities (0.9% CMC, 1.1% CMC and 1.7% CMC) were considered.

Development of Regression Equation for Surface Finish and Analysis of Surface Integrity in EDM

Electrical discharge machining (EDM) is a relatively modern machining process that has distinct advantages over other machining processes and can machine Ti-alloys effectively. The present study emphasizes the development of a regression equation based on response surface methodology (RSM) for correlating the interactive and higher-order influences of machining parameters on the surface finish of the titanium alloy Ti-6Al-4V. The process parameters selected in this study are discharge current, pulse-on time, pulse-off time and servo voltage. Machining was accomplished using a graphite electrode with negative polarity. Analysis of variance was employed to ascertain the adequacy of the developed regression model. Experiments based on the central composite design of the response surface method were carried out. Scanning electron microscopy (SEM) analysis was performed to investigate the surface topography of the EDMed workpiece. The results show that the proposed regression equation can predict the surface roughness effectively. Lower discharge current and shorter pulse-on time yield a better surface finish.
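A minimal sketch of how a second-order RSM regression of this kind can be fitted follows. The factor names mirror the abstract, but the parameter ranges, data values and resulting coefficients are placeholders, not the study's experimental results or its final regression equation.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder design matrix: discharge current (A), pulse-on (us), pulse-off (us), servo voltage (V)
X = rng.uniform([5, 50, 20, 30], [20, 400, 100, 80], size=(30, 4))

# Placeholder response: surface roughness Ra (um) -- not the paper's measured data
Ra = 1.0 + 0.08 * X[:, 0] + 0.004 * X[:, 1] + rng.normal(0, 0.1, 30)

# Full quadratic model: linear, interaction and squared terms (central-composite style RSM fit)
poly = PolynomialFeatures(degree=2, include_bias=False)
Xq = poly.fit_transform(X)

model = LinearRegression().fit(Xq, Ra)
print("R^2 =", model.score(Xq, Ra))
print("Predicted Ra at (10 A, 100 us, 50 us, 50 V):",
      model.predict(poly.transform([[10, 100, 50, 50]])))
```

Analysis of variance on the fitted terms (e.g. with statsmodels) would then be used to check the adequacy of the model, as described in the abstract.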

Performance Analysis of Wavelet Based Multiuser MIMO OFDM

Wavelet analysis has some strong advantages over Fourier analysis, as it allows time-frequency domain analysis with optimal resolution and flexibility. As a result, wavelets have been applied satisfactorily in almost all fields of communication systems, including OFDM, which is a strong candidate for the next generation of wireless technology. In this paper, the performance of wavelet-based Multiuser Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MU-MIMO OFDM) systems is analyzed in terms of BER. It is shown that the wavelet-based systems outperform the classical FFT-based systems. The analysis also reveals an interesting result: with Regularized Channel Inversion (RCI) beamforming, the wavelet-based OFDM system has a constant error performance for any number of users and outperforms the FFT-based system in all the multiuser scenarios considered. Extensive computer simulations show that a PAPR reduction of up to 6.8 dB can be obtained with M = 64.
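As background on the beamformer named above, Regularized Channel Inversion precoding takes the standard form W ∝ Hᴴ(HHᴴ + αI)⁻¹. The sketch below applies it to a randomly generated flat channel; the dimensions and regularization value are illustrative assumptions, not the simulation settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
K, Nt = 4, 8                       # single-antenna users and transmit antennas (assumed)
snr_linear = 10.0

# Random flat Rayleigh channel: one row per user
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

# Regularized Channel Inversion (RCI) precoder: W = H^H (H H^H + alpha I)^-1
alpha = K / snr_linear             # common choice of regularization
W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
W /= np.linalg.norm(W)             # normalize total transmit power

# Effective channel after precoding; off-diagonal entries are residual inter-user interference
print(np.round(np.abs(H @ W), 3))
```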

Dual-Network Memory Model for Temporal Sequences

In neural networks, when new patterns are learned by a network, they radically interfere with previously stored patterns. This drawback is called catastrophic forgetting. We have previously proposed a biologically inspired dual-network memory model that can greatly reduce this forgetting for static patterns. In this model, information is first stored in the hippocampal network and thereafter transferred to the neocortical network using pseudopatterns. Because temporal sequence learning is more important than static pattern learning in the real world, in this study we improve our conventional dual-network memory model so that it can deal with temporal sequences without catastrophic forgetting. Computer simulation results show the effectiveness of the proposed dual-network memory model.
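A minimal sketch of the pseudopattern transfer idea for sequences, not the authors' exact architecture: a "hippocampal" network first learns the new sequence as x_t → x_{t+1} pairs, pseudopatterns are generated by probing both networks with random inputs, and the "neocortical" network is retrained on the combined pseudopattern sets. Network sizes, sequence data and probe counts below are assumed placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
dim = 10

def seq_pairs(seq):
    """Turn a temporal sequence into (x_t, x_{t+1}) training pairs."""
    return seq[:-1], seq[1:]

# Neocortical network already stores an old sequence (placeholder data)
old_seq = rng.uniform(-1, 1, size=(50, dim))
neocortex = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
neocortex.fit(*seq_pairs(old_seq))

# 1) A new sequence is first stored in the hippocampal network
new_seq = rng.uniform(-1, 1, size=(50, dim))
hippocampus = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
hippocampus.fit(*seq_pairs(new_seq))

# 2) Pseudopatterns: random probes fed to each network; hippocampal responses carry the
#    new sequence, neocortical responses approximate the previously stored memories
probes = rng.uniform(-1, 1, size=(200, dim))
pp_new_targets = hippocampus.predict(probes)
pp_old_targets = neocortex.predict(probes)

# 3) Transfer: retrain the neocortical network on both pseudopattern sets, so the new
#    sequence is consolidated without catastrophically forgetting the old one
X_mix = np.vstack([probes, probes])
y_mix = np.vstack([pp_new_targets, pp_old_targets])
neocortex.fit(X_mix, y_mix)
```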

Leakage Reduction ONOFIC Approach for Deep Submicron VLSI Circuits Design

Minimization of power dissipation and chip area together with higher circuit performance are key requirements in the deep submicron regime. The leakage current increases sharply in the deep submicron regime and directly affects the power dissipation of logic circuits. In this regime, power dissipation as well as high performance is a crucial concern because of the increasing importance of portable systems. A number of leakage reduction techniques have been employed to reduce the leakage current in the deep submicron region, but they involve trade-offs in controlling the leakage current. The ONOFIC approach provides an excellent compromise between power dissipation and propagation delay for designing efficient CMOS logic circuits. In this article, the ONOFIC approach is compared with the LECTOR technique, and the results show that the ONOFIC approach significantly reduces the power dissipation and enhances the speed of the logic circuits. The lower power-delay product is the major outcome of this approach and makes it an effective leakage reduction technique.

Study of the Effects of Ceramic Nano-Pigments in Cement Mortar Corrosion Caused by Chlorine Ions

Superfine pigments, consisting of natural and artificial pigments made of mineral soil with special characteristics, are used in cementitious materials for various purposes. These pigments can decrease the amount of cement needed without loss of performance and strength, and also change the monotonous and turbid colours of concrete into various attractive and light colours. In this study, the mechanical strength and the resistance against chloride and halogen attacks of cement mortars containing ceramic nano-pigments in an affected environment are studied. This research investigates the use of ceramic nano-pigments between 50 and 1000 nm, obtaining full-depth coloured concrete, preventing chlorine penetration into the concrete up to a certain depth, and controlling corrosion of the steel rebar, measured with a potentiostat (EG&G) apparatus.

The Automated Selective Acquisition System

To support the design process for launching a product on time, the reverse engineering (RE) process has been introduced for quickly generating a 3D CAD model from the physical object. The accuracy of the 3D CAD model depends upon the data acquisition technique selected: contact or non-contact methods. In order to reduce the time spent acquiring the surface and eliminating noise, an automated selective acquisition system has been developed and is presented in this research as an alternative non-contact acquisition technique in which the data are selectively and locally scanned contour by contour without performing a data reduction process. The results are organized contour points that are directly used to generate the 3D virtual model. A comparison between the proposed technique and another non-contact scanning technique is presented and discussed.

Time Series Regression with Meta-Clusters

This paper presents a preliminary attempt to apply classification of time series using meta-clusters in order to improve the quality of regression models. In this case, clustering was performed to obtain normally distributed subgroups of time series data from wastewater treatment plant inflow data, which are composed of several groups differing in mean value. Two simple algorithms, K-means and EM, were chosen as clustering methods. The Rand index was used to measure the similarity. After simple meta-clustering, a regression model was built for each subgroup. The final model was the sum of the subgroup models. The quality of the obtained model was compared with a regression model built using the same explanatory variables but with no clustering of the data. Results were compared using the coefficient of determination (R²), the mean absolute percentage error (MAPE) as a measure of prediction accuracy, and a comparison on a linear chart. Preliminary results allow us to foresee the potential of the presented technique.
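A minimal sketch of the cluster-then-regress idea using scikit-learn follows. The synthetic series standing in for the wastewater inflow data, the use of K-means only, and the simple linear per-cluster models are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Hypothetical inflow series composed of two regimes differing in mean value
t = np.arange(200, dtype=float).reshape(-1, 1)
y = np.where(t[:, 0] < 100, 50.0, 80.0) + 0.05 * t[:, 0] + rng.normal(0, 2, 200)

# Meta-clustering step: split the observations into subgroups (here by level of the target)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(y.reshape(-1, 1))

# Fit one regression model per subgroup; predictions are assembled back per observation
y_hat = np.empty_like(y)
for c in np.unique(labels):
    mask = labels == c
    model = LinearRegression().fit(t[mask], y[mask])
    y_hat[mask] = model.predict(t[mask])

# Compare with a single regression fitted on the unclustered data
y_hat_plain = LinearRegression().fit(t, y).predict(t)
mape = lambda a, b: np.mean(np.abs((a - b) / a)) * 100
print(f"MAPE clustered:   {mape(y, y_hat):.2f}%")
print(f"MAPE unclustered: {mape(y, y_hat_plain):.2f}%")
```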

A Novel Application of Network Equivalencing Method in Time Domain to Precise Calculation of Dead Time in Power Transmission Lines

Various studies have shown that about 90% of single line-to-ground faults occurring on high voltage transmission lines are transient in nature. This type of fault is cleared by a temporary outage (by the single-phase auto-reclosure). The interval between the opening and reclosing of the faulted-phase circuit breakers is called the "dead time" and varies over several hundred milliseconds. To adjust traditional single-phase auto-reclosures, which are usually not intelligent, it is necessary to calculate the dead time precisely in the off-line condition. If the dead time used in the adjustment of the single-phase auto-reclosure is less than the real dead time, the reclosing of the circuit breakers threatens the power system seriously. Therefore, in this paper a novel approach for the precise calculation of the dead time in power transmission lines, based on network equivalencing in the time domain, is presented. This approach has much higher precision than the traditional method based on the Thevenin equivalent circuit. To compare the proposed approach with the traditional method, comprehensive simulations with EMTP-ATP are performed on an extensive power network.

The Visual Inspection of Surgical Tasks Using Machine Vision: Applications to Robotic Surgery

In this paper, the feasibility of using machine vision to assess task completion in a surgical intervention is investigated, with the aim of incorporating vision-based inspection in robotic surgery systems. The visually rich operative field presents a good environment for the development of automated visual inspection techniques in these systems, for a more comprehensive approach when performing a surgical task. As a proof of concept, machine vision techniques were used to distinguish the two possible outcomes, i.e. satisfactory or unsatisfactory, of three primary surgical tasks involved in creating a burr hole in the skull, namely incision, retraction and drilling. Encouraging results were obtained for the three tasks under consideration, as demonstrated by experiments on cadaveric pig heads. These findings suggest the potential of machine vision to validate successful task completion in robotic surgery systems. Finally, the potential of using machine vision in the operating theatre, and the challenges that must be addressed, are identified and discussed.

Adaptive WiFi Fingerprinting for Location Approximation

WiFi has become an essential technology that is widely used nowadays. It is popular due to its convenience for use with mobile devices, which is especially true for Internet users worldwide who rely on WiFi connections. Many location-based services available nowadays use Wireless Fidelity (WiFi) signal fingerprinting; a common example that is gaining popularity is Foursquare. In this work, the WiFi signal is used to estimate the user's or client's location. Similar to GPS, the fingerprinting method needs a floor plan to increase the accuracy of location estimation. Still, the inconsistency of the WiFi signal makes the estimation differ at different time intervals, so an adaptive method is needed to obtain the most accurate signal at all times. WiFi signals are heavily distorted by external factors such as physical objects, radio frequency interference, electrical interference and environmental factors, to name a few. Due to these factors, this work reduces the signal noise and performs estimation with the Nearest Neighbour method based on past signal activity, increasing the accuracy to more than 80%. The repository further increases the accuracy by using Artificial Neural Network (ANN) pattern matching. The repository acts as the server supporting the client-side application's decisions. Numerous previous works have adopted methods of collecting signal strengths in a repository over the years, but most of them were static. In this work, proposed solutions for how the adaptive method matches the received signal to the data in the repository are highlighted. With this approach, location estimation can be done more accurately. Adaptive updating allows the latest location fingerprint to be stored in the repository; any redundant location fingerprints are removed and only the updated version of the fingerprint is kept. How the user's location is estimated is explained further in the proposed solution section. After a study of previous works, the Artificial Neural Network was found to be the most feasible method for updating the repository and making it adaptive; its function is to match the pattern of the received WiFi signal to the existing data in the repository.
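A minimal sketch of the nearest-neighbour fingerprint matching step described above: a small repository maps RSSI vectors (one value per access point) to known positions, and a new reading is located from its k closest fingerprints. The access points, RSSI values and positions are invented placeholders, not data from this work, and the simple refit stands in for the adaptive repository update.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Fingerprint repository: RSSI (dBm) from 3 access points -> (x, y) position in metres
# (placeholder survey data)
rssi_db = np.array([
    [-45, -70, -80],
    [-50, -65, -78],
    [-70, -48, -72],
    [-75, -52, -68],
    [-80, -74, -46],
    [-78, -70, -50],
], dtype=float)
positions = np.array([
    [1.0, 1.0], [2.0, 1.5], [6.0, 1.0],
    [7.0, 2.0], [6.5, 8.0], [7.5, 7.0],
])

# k-nearest-neighbour matching of a live reading against the repository
knn = KNeighborsRegressor(n_neighbors=3, weights="distance").fit(rssi_db, positions)
live_reading = np.array([[-72, -50, -70]], dtype=float)
print("estimated position:", knn.predict(live_reading)[0])

# Adaptive update: a newly confirmed fingerprint is added to the repository and the model refit
rssi_db = np.vstack([rssi_db, live_reading])
positions = np.vstack([positions, [[6.4, 1.6]]])
knn.fit(rssi_db, positions)
```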