Injection Forging of Splines Using Numerical and Experimental Study

Injection forging is a net-shape manufacturing process in which one or two punches move axially, causing radial flow into a die cavity in a form prescribed by the exit geometry, such as pulleys, flanges, gears and splines on a shaft. This paper presents an experimental and numerical study of the injection forging of splines in terms of load requirement and material flow. Three-dimensional finite element analyses are used to investigate the effect of several important parameters in this process. The experiments were carried out using solid commercial lead billets with two different billet diameters and four different dies.

A Novel Method Based on Monte Carlo for Simulation of Variable Resolution X-ray CT Scanner: Measurement of System Presampling MTF

The purpose of this work is to measure the system presampling MTF of a variable resolution x-ray (VRX) CT scanner. In this paper, we used the parameters of an actual VRX CT scanner to simulate and study the effect of different focal spot sizes on the system presampling MTF by the Monte Carlo method (GATE simulation software). A focal spot size of 0.6 mm limited the spatial resolution of the system to 5.5 cy/mm at incident angles below 17° for cell #1. With a focal spot size of 0.3 mm, the spatial resolution increased to 11 cy/mm, and the limiting effect of the focal spot size appeared at incident angles below 9°. The 0.3 mm focal spot size thus improved the spatial resolution to some extent, but because of magnification non-uniformity there is a 10 cy/mm difference between the spatial resolution of cell #1 and cell #256. The focal spot size of 0.1 mm acted as an ideal point source for this system: the spatial resolution increased to more than 35 cy/mm and was a function of the incident angle at all incident angles. Moreover, the 0.1 mm focal spot size minimized the effect of magnification non-uniformity.
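
As a rough illustration of the measurement itself, the sketch below computes a presampling MTF from an oversampled line spread function via a Fourier transform, a standard detector-characterization step; the Gaussian LSF and its width are placeholder assumptions, not output of the GATE simulation described above.

```python
import numpy as np

def presampling_mtf(lsf, pixel_pitch_mm):
    """Compute a presampling MTF from an oversampled line spread function.

    lsf            : 1-D array, oversampled line spread function (a.u.)
    pixel_pitch_mm : sampling interval of the oversampled LSF in mm
    Returns spatial frequencies (cy/mm) and the normalized MTF.
    """
    lsf = lsf - lsf.min()                 # remove baseline offset
    lsf = lsf / lsf.sum()                 # normalize area to 1
    mtf = np.abs(np.fft.rfft(lsf))        # magnitude of the Fourier transform
    mtf = mtf / mtf[0]                    # normalize to unity at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
    return freqs, mtf

# Illustrative use: a Gaussian LSF whose width mimics a finite focal spot.
x = np.arange(-5.0, 5.0, 0.01)            # mm, 10 um sampling
lsf = np.exp(-x**2 / (2 * 0.05**2))       # hypothetical 50 um sigma
f, mtf = presampling_mtf(lsf, 0.01)
```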

Polymerisation Shrinkage of Light-Cured Hydroxyapatite (HA)-Reinforced Dental Composites

Dental composites are preferred as filling materials because of their esthetic appearance. Nevertheless, one of the major problems during their application is the shape change known as "polymerisation shrinkage", which occurs during photo-polymerisation and affects the clinical success of the dental restoration. Polymerisation shrinkage arises essentially from the transformation of the monomers composing the organic matrix phase into a polymer. Throughout this study, it was sought to detect and evaluate the structural polymerisation shrinkage of prepared dental composites in order to assess the effects of the various fillers included in hydroxyapatite (HA)-reinforced dental composites, and hence to find a means of modifying the properties of these composites prepared with defined parameters. As a result, the shrinkage values of the experimental dental composites decreased with increasing filler content, and the composition of the different fillers used had an effect on the shrinkage of the prepared composite systems.

Green Synthesis of Butyl Acetate, A Pineapple Flavour via Lipase-Catalyzed Reaction

Butyl acetate, a pineapple flavour, is nowadays widely applied in the food, beverage, cosmetic and pharmaceutical industries. In this study, butyl acetate, a flavour ester, was successfully synthesized via a green, enzymatic reaction route. Commercial immobilized lipase from Rhizomucor miehei (Lipozyme RMIM) was used as the biocatalyst in the esterification reaction between acetic acid and butanol. Various reaction parameters, namely reaction time (RT), temperature (T) and amount of enzyme (E), were chosen to optimize the synthesis in a solvent-free system. The optimum conditions to produce butyl acetate were a reaction time (RT) of 18 hours, a temperature (T) of 37°C and an enzyme loading of 25% (w/w of total substrate). Analysis of the yield showed that, at the optimum conditions, >78% of butyl acetate was produced. The product was confirmed as butyl acetate by FTIR analysis, whereby the presence of an ester group was observed at a wavenumber of 1742 cm⁻¹.

Neuro-Fuzzy Networks for Identification of Mathematical Model Parameters of Geofield

A new fuzzy neural network technology for identifying the parameters of mathematical models of geofields is proposed and validated. The effectiveness of this soft computing technology is demonstrated, especially in the early stages of modeling, when information is uncertain and limited.

Identification of Ductile Damage Parameters for Austenitic Steel

The modeling of the inelastic behavior of plastic materials requires measurements providing information on the material response to different multiaxial loading conditions. Different triaxiality conditions and values of the Lode parameter have to be covered for a complete description of the material's plastic behavior. Sample geometries providing material plastic behavior over the range of interest are proposed with the use of FEM analysis. Round samples with three different notches and a smooth surface are used together with butterfly-type samples tested at angles ranging from 0 to 90°. Identification of the ductile damage parameters is carried out on the basis of the experimental data obtained for an austenitic stainless steel. The obtained plastic damage parameters are subsequently applied to an FEM simulation of the notched CT samples normally used for fracture mechanics testing, and the simulation results are compared with real tests.

Delay and Energy Consumption Analysis of Conventional SRAM

The energy consumption and delay in the read/write operations of conventional SRAM are investigated analytically as well as by simulation. Explicit analytical expressions for the energy consumption and delay in read and write operations, as functions of the device parameters and supply voltage, are derived. The expressions are useful for predicting the effect of parameter changes on the energy consumption and speed, as well as for optimizing the design of conventional SRAM. HSPICE simulation in a standard 0.25 μm CMOS technology confirms the precision of the analytical expressions derived in this paper.
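
The paper's explicit expressions are not reproduced here; as a hedged illustration of the kind of first-order relations such an analysis yields, the sketch below uses the generic bitline-discharge delay t ≈ C·ΔV/I and dynamic energy E ≈ C·V_dd·ΔV, with all numeric values assumed.

```python
def bitline_delay(C_bl, dV, I_cell):
    """First-order read delay: time for the cell current I_cell to
    discharge the bitline capacitance C_bl by the sense margin dV."""
    return C_bl * dV / I_cell

def dynamic_energy(C_sw, V_dd, dV=None):
    """First-order dynamic energy of one access. A full-swing node
    dissipates C*Vdd^2; a limited-swing bitline dissipates C*Vdd*dV."""
    swing = V_dd if dV is None else dV
    return C_sw * V_dd * swing

# Illustrative numbers for a 0.25 um-class technology (assumed, not the paper's):
print(bitline_delay(C_bl=500e-15, dV=0.1, I_cell=50e-6))   # ~1 ns
print(dynamic_energy(C_sw=500e-15, V_dd=2.5, dV=0.1))      # ~0.125 pJ
```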

Application of Spreadsheet and Queuing Network Model to Capacity Optimization in Product Development

Modeling a manufacturing system enables one to identify the effects of key design parameters on the system performance and, as a result, to make correct decisions. This paper proposes a manufacturing system modeling approach using a spreadsheet model based on queuing network theory, in which a static capacity planning model and a stochastic queuing model are integrated. The model was used to improve the existing system utilization in relation to product design. The model incorporates parameters such as utilization, cycle time, throughput, and batch size. The study also showed that the validity of the developed model is good enough for application: the maximum relative error is 10%, far below the limit value of 32%. Therefore, the model developed in this study is a valuable alternative for evaluating a manufacturing system.
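
A minimal sketch of the queuing side of such a spreadsheet model, assuming a single M/M/1 workstation (the paper's integrated model is richer): utilization, cycle time, throughput and WIP follow from the arrival and service rates, and a relative-error check mirrors the 10%-versus-32% validation criterion quoted above.

```python
def station_metrics(arrival_rate, service_rate, batch_size=1):
    """Steady-state M/M/1 workstation metrics of the kind a spreadsheet
    queuing model tabulates (illustrative, not the paper's exact model)."""
    lam = arrival_rate / batch_size          # batches arriving per unit time
    u = lam / service_rate                   # utilization
    if u >= 1.0:
        raise ValueError("station is overloaded (u >= 1)")
    wq = u / (service_rate - lam)            # mean waiting time in queue
    ct = wq + 1.0 / service_rate             # cycle time = queue + service
    wip = lam * ct                           # work in process (Little's law)
    return {"utilization": u, "cycle_time": ct,
            "throughput": lam, "wip": wip}

def relative_error(model, observed):
    """Validation criterion quoted in the study: max relative error 10%,
    below the commonly used 32% limit."""
    return abs(model - observed) / observed

print(station_metrics(arrival_rate=8.0, service_rate=10.0))
```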

Computer Software Applicable in Rehabilitation, Cardiology and Molecular Biology

We have developed a computer program consisting of six subtests assessing children's hand dexterity, applicable in rehabilitation medicine. We carried out a normative study on a representative sample of 285 children aged from 7 to 15 (mean age 11.3) and proposed clinical standards for three age groups (7-9, 9-11, 12-15 years). We have shown the statistical significance of the differences among the corresponding mean task completion times. We also found a strong correlation between the task completion time and the age of the subjects, and performed test-retest reliability checks on a sample of 84 children, yielding high Pearson coefficients for the dominant and non-dominant hand in the ranges 0.74-0.97 and 0.62-0.93, respectively. A new MATLAB-based programming tool aimed at the analysis of cardiologic RR intervals and blood pressure descriptors has also been worked out. For each set of data, ten different parameters are extracted: 2 in the time domain, 4 in the frequency domain and 4 from Poincaré plot analysis. In addition, twelve different parameters of baroreflex sensitivity are calculated. All these data sets can be visualized in the time domain together with their power spectra and Poincaré plots. If available, the respiratory oscillation curves can also be plotted for comparison. Another application processes biological data obtained from BLAST analysis.
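
As a small illustration of one of the Poincaré parameter groups mentioned above, the sketch below computes the standard SD1/SD2 descriptors from an RR-interval series; the function name and the sample intervals are illustrative, not the tool's actual interface.

```python
import numpy as np

def poincare_descriptors(rr_ms):
    """Standard Poincaré plot descriptors of an RR-interval series."""
    rr = np.asarray(rr_ms, dtype=float)
    x, y = rr[:-1], rr[1:]                   # successive RR pairs
    sd1 = np.std(y - x) / np.sqrt(2.0)       # short-term variability
    sd2 = np.std(y + x) / np.sqrt(2.0)       # long-term variability
    return sd1, sd2

rr = [812, 795, 830, 841, 808, 790, 820]     # hypothetical RR intervals (ms)
print(poincare_descriptors(rr))
```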

Noise Analysis of Single-Ended Input Differential Amplifier using Stochastic Differential Equation

In this paper, we analyze the effect of noise in a single-ended input differential amplifier working at high frequencies. Both extrinsic and intrinsic noise are analyzed using a time-domain method employing techniques from stochastic calculus. Stochastic differential equations are used to obtain the autocorrelation functions of the output noise voltage and other solution statistics such as the mean and variance. The analysis leads to important design implications and suggests changes in the device parameters for improved noise characteristics of the differential amplifier.
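
A minimal sketch of the time-domain approach, assuming the output noise can be caricatured as an Ornstein-Uhlenbeck process (the paper's SDEs for the amplifier are more detailed): an Euler-Maruyama simulation whose stationary variance can be checked against the closed-form value σ²τ/2.

```python
import numpy as np

# Euler-Maruyama simulation of a first-order output-noise model:
#   dV = -(V / tau) dt + sigma dW   (an Ornstein-Uhlenbeck sketch)
rng = np.random.default_rng(0)
tau, sigma = 1e-9, 1e-3            # assumed time constant and noise intensity
dt, n = 1e-11, 200_000
v = np.zeros(n)
for k in range(1, n):
    dW = rng.normal(0.0, np.sqrt(dt))
    v[k] = v[k-1] - (v[k-1] / tau) * dt + sigma * dW

# The stationary variance should approach sigma^2 * tau / 2.
print(v[n // 2:].var(), sigma**2 * tau / 2)
```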

Numerical Optimization within Vector of Parameters Estimation in Volatility Models

In this paper, the usefulness of the quasi-Newton iteration procedure of the BHHH algorithm for estimating the parameters of the conditional variance equation is presented. Analytical maximization of the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm over other optimization algorithms is that it requires no third derivatives and has assured convergence. To simplify the optimization procedure, the BHHH algorithm uses an approximation of the matrix of second derivatives based on the information identity. However, parameter estimation in an (a)symmetric GARCH(1,1) model assuming a normal distribution of returns is not simple, i.e. it is difficult to solve analytically. The maximum of the likelihood function can be found by iterating until no further increase is achieved. Because the solutions of the numerical optimization are very sensitive to the initial values, starting parameters for the GARCH(1,1) model are defined; the number of iterations can be reduced by using starting values close to the global maximum. The optimization procedure is illustrated in the framework of modeling the daily volatility of the most liquid stocks on the Croatian capital market: Podravka (food industry), Petrokemija (fertilizer industry) and Ericsson Nikola Tesla (information and communications industry).
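
A minimal sketch of the procedure, assuming a plain symmetric GARCH(1,1) with Gaussian innovations: per-observation log-likelihoods, numerical scores, and a BHHH update in which the outer product of gradients replaces the Hessian via the information identity. The returns, starting values and backtracking rule are illustrative.

```python
import numpy as np

def garch11_loglik_t(params, r):
    """Per-observation Gaussian log-likelihoods of a GARCH(1,1) model:
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    omega, alpha, beta = params
    h = np.empty_like(r)
    h[0] = r.var()                         # common choice of starting variance
    for t in range(1, r.size):
        h[t] = omega + alpha * r[t-1]**2 + beta * h[t-1]
    return -0.5 * (np.log(2.0 * np.pi) + np.log(h) + r**2 / h)

def bhhh_step(params, r, eps=1e-6):
    """One BHHH update: the Hessian is replaced by the outer product of
    per-observation scores (the information identity), so no second or
    third derivatives are needed."""
    base = garch11_loglik_t(params, r)
    g = np.empty((r.size, params.size))
    for j in range(params.size):           # numerical per-observation scores
        p = params.copy()
        p[j] += eps
        g[:, j] = (garch11_loglik_t(p, r) - base) / eps
    direction = np.linalg.solve(g.T @ g, g.sum(axis=0))
    step, ll0 = 1.0, base.sum()
    while step > 1e-4:                     # backtracking keeps the step uphill
        cand = params + step * direction
        if cand[0] > 0 and cand[1] >= 0 and cand[2] >= 0 \
                and garch11_loglik_t(cand, r).sum() > ll0:
            return cand
        step /= 2.0
    return params

rng = np.random.default_rng(1)
r = 0.01 * rng.standard_normal(2000)       # placeholder for daily returns
theta = np.array([1e-5, 0.05, 0.90])       # starting values near a typical optimum
for _ in range(25):
    theta = bhhh_step(theta, r)
print(theta)
```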

Estimating Reaction Rate Constants with Neural Networks

Solutions are proposed for the central problem of estimating the reaction rate coefficients in homogeneous kinetics. The first is based upon the fact that the right-hand side of a kinetic differential equation is linear in the rate constants, whereas the second uses the technique of neural networks. The second solution is discussed in depth, and its advantages, disadvantages and conditions of applicability are analyzed against the first. Numerical analyses were carried out on practical models using simulated data and our programs written in Mathematica.
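
A sketch of the first method under simple assumptions (a two-step reaction A → B → C with synthetic, noise-free data): because the right-hand side is linear in (k1, k2), the constants follow from a linear least-squares fit to finite-difference derivative estimates. Python stands in here for the Mathematica programs mentioned above.

```python
import numpy as np

# For A -> B -> C:  d[A]/dt = -k1*A,  d[B]/dt = k1*A - k2*B.
# Stacking the equations gives dc/dt = Theta(c) @ k, linear in k.
t = np.linspace(0.0, 5.0, 51)
k1, k2 = 1.2, 0.4                            # "true" constants to recover
A = np.exp(-k1 * t)                          # closed-form solution for [A]
B = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

dA = np.gradient(A, t)                       # finite-difference derivatives
dB = np.gradient(B, t)

# Design matrix: each row maps (k1, k2) linearly to (dA, dB) at one time.
Theta = np.vstack([np.column_stack([-A, np.zeros_like(A)]),
                   np.column_stack([A, -B])])
y = np.concatenate([dA, dB])
k_hat, *_ = np.linalg.lstsq(Theta, y, rcond=None)
print(k_hat)                                 # close to (1.2, 0.4)
```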

Tropical Cyclogenesis Response to Solar Activity in the Eastern Pacific Region

The relationship between tropical cyclogenesis and solar activity is addressed in this paper by analyzing the relationship between important parameters in the evolution of tropical cyclones, such as CAPE, wind shear and relative vorticity, and the Dst geomagnetic index as a parameter of solar activity. The apparent relationship between these phenomena differs depending on the phase of the solar cycle.
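
A sketch of the kind of stratified correlation analysis described, with placeholder arrays instead of the study's cyclone and Dst data:

```python
import numpy as np

# Pearson correlation of a cyclone parameter (e.g. CAPE) against the Dst
# index, split by solar-cycle phase; all values below are synthetic.
rng = np.random.default_rng(7)
dst = rng.normal(-20.0, 15.0, 300)                # mock Dst index (nT)
cape = 2000 + 5 * dst + rng.normal(0, 120, 300)   # mock CAPE (J/kg)
phase = rng.integers(0, 2, 300)                   # 0 = rising, 1 = declining

for ph in (0, 1):
    m = phase == ph
    r = np.corrcoef(dst[m], cape[m])[0, 1]
    print(f"solar-cycle phase {ph}: r = {r:.2f}")
```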

Energy Loss at Drops using Neuro Solutions

Energy dissipation in drops has been investigated by means of physical models. After determining the parameters affecting the phenomenon, three drops with different heights were constructed from Plexiglas and installed in two existing flumes in the hydraulic laboratory. Several runs of the physical models were undertaken to measure the parameters required to determine the energy dissipation. The results showed that the energy dissipation in drops depends on the drop height and discharge. Predicted relative energy dissipations varied from 10.0% to 94.3%. This work also indicated that the energy loss at a drop is mainly due to the mixing of the jet with the pool behind the jet, which causes air bubble entrainment in the flow. A statistical model developed to predict the energy dissipation in vertical drops indicates a nonlinear correlation between the effective parameters. Furthermore, an artificial neural network (ANN) approach was used in this paper to develop an explicit procedure for calculating the energy loss at drops using NeuroSolutions. The trained network was able to predict the response with an R² of 0.977 and an RMSE of 0.0085. The performance of the ANN was found to be effective when compared with regression equations in predicting the energy loss.
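
A hedged sketch of the ANN step, with scikit-learn standing in for NeuroSolutions and synthetic data standing in for the flume measurements; the feature choice (drop height and discharge) follows the dependence reported above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

# Mock training data: relative energy dissipation vs. drop height h and
# discharge q (the functional form and values are illustrative only).
rng = np.random.default_rng(0)
h = rng.uniform(0.2, 0.8, 200)               # drop height (m)
q = rng.uniform(0.005, 0.05, 200)            # discharge (m^3/s)
X = np.column_stack([h, q])
y = 0.9 - 6.0 * q / h + rng.normal(0, 0.01, 200)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X[:150], y[:150])                    # train on the first 150 runs
pred = net.predict(X[150:])                  # test on the held-out runs
print(r2_score(y[150:], pred), mean_squared_error(y[150:], pred) ** 0.5)
```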

Design a Three-dimensional Pursuit Guidance Law with Feedback Linearization Method

In this paper, we implement a three-dimensional pursuit guidance law with the feedback linearization control method and study the effects of its parameters. First, we introduce guidance laws and the equations of motion of a missile; the pursuit guidance law is our focus. We apply the feedback linearization control method to obtain the accelerations that implement the pursuit guidance law. The solution makes the warhead direction follow the line-of-sight. Finally, the simulation results show that the exact solution derived in this paper is correct and that some factors, e.g. control gain and time delay, are important for implementing the pursuit guidance law.
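
As a rough sketch of a pursuit-type command (not the paper's feedback-linearization derivation), the snippet below rotates the missile velocity toward the line-of-sight with a lateral acceleration proportional to the misalignment; the gain and states are illustrative.

```python
import numpy as np

def pursuit_accel(pos_m, vel_m, pos_t, gain):
    """Simple 3-D pure-pursuit command: steer the missile velocity toward
    the line-of-sight (LOS) with an acceleration normal to the velocity."""
    los = pos_t - pos_m
    los_hat = los / np.linalg.norm(los)      # unit LOS direction
    v = np.linalg.norm(vel_m)
    v_hat = vel_m / v                        # unit velocity direction
    # Component of the LOS direction normal to the current velocity:
    err = los_hat - np.dot(los_hat, v_hat) * v_hat
    return gain * v * err                    # lateral acceleration command

# One illustrative step:
a_cmd = pursuit_accel(np.zeros(3), np.array([300.0, 0.0, 0.0]),
                      np.array([5000.0, 2000.0, 1000.0]), gain=3.0)
print(a_cmd)
```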

Unsteady Laminar Boundary Layer Forced Flow in the Region of the Stagnation Point on a Stretching Flat Sheet

This paper analyses the unsteady, two-dimensional stagnation point flow of an incompressible viscous fluid over a flat sheet when the flow is started impulsively from rest and, at the same time, the sheet is suddenly stretched in its own plane with a velocity proportional to the distance from the stagnation point. The partial differential equations governing the laminar boundary layer forced convection flow are non-dimensionalised using semi-similar transformations and then solved numerically using an implicit finite-difference scheme known as the Keller-box method. Results pertaining to the flow and heat transfer characteristics are computed for all dimensionless times, uniformly valid in the whole spatial region, without any numerical difficulties. Analytical solutions are also obtained for both small and large times, representing, respectively, the initial unsteady and the final steady state flow and heat transfer. Numerical results indicate that the velocity ratio parameter has a significant effect on the skin friction and the heat transfer rate at the surface. Furthermore, it is shown that there is a smooth transition from the initial unsteady state flow (small-time solution) to the final steady state (large-time solution).
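
For the final steady state (large-time solution), one common similarity form is f''' + f f'' − f'² + ε² = 0 with f(0) = 0, f'(0) = 1, f'(∞) = ε, where ε is the velocity ratio parameter; the sketch below solves this assumed form with a generic boundary-value solver rather than the Keller-box scheme used in the paper.

```python
import numpy as np
from scipy.integrate import solve_bvp

eps = 0.5                                    # velocity ratio parameter (assumed)

def rhs(eta, y):                             # y = [f, f', f'']
    return np.vstack([y[1], y[2], y[1]**2 - y[0] * y[2] - eps**2])

def bc(ya, yb):
    return np.array([ya[0], ya[1] - 1.0, yb[1] - eps])

eta = np.linspace(0.0, 10.0, 101)            # truncate "infinity" at eta = 10
y0 = np.zeros((3, eta.size))
y0[1] = eps                                  # crude initial guess for f'
sol = solve_bvp(rhs, bc, eta, y0)
print("f''(0) =", sol.sol(0.0)[2])           # proportional to the skin friction
```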

Enhanced-Delivery Overlay Multicasting Scheme by Optimizing Bandwidth and Latency Discrepancy Ratios

With optimized bandwidth and latency discrepancy ratios, Node Gain Scores (NGSs) are determined and used as the basis for shaping a max-heap overlay. The NGSs, determined as the respective bandwidth-latency products, govern the construction of max-heap-form overlays. Each NGS is earned as a synergy of the discrepancy ratio of the bandwidth requested with respect to the estimated available bandwidth, and the latency discrepancy ratio between the node and the source node. The tree leads to enhanced-delivery overlay multicasting, increasing packet delivery that could otherwise be hindered by the induced packet loss occurring in other schemes that do not consider the synergy of these parameters when placing nodes on the overlays. The NGS is a function of four main parameters: the estimated available bandwidth, Ba; the individual node's requested bandwidth, Br; the proposed node latency to its prospective parent, Lp; and the suggested best latency as advised by the source node, Lb. The bandwidth discrepancy ratio (BDR) and latency discrepancy ratio (LDR) carry weights of α and (1,000 − α), respectively, with α arbitrarily chosen between 0 and 1,000 to ensure that the NGS values, used as node IDs, maintain a good possibility of uniqueness and a balance between the BDR and the LDR, whichever is the more critical factor. A max-heap-form tree is constructed under the assumption that all nodes possess NGSs less than that of the source node. To maintain load balance, the children of each level's siblings are evenly distributed, such that a node cannot accept a second child until all of its siblings able to do so have acquired the same number of children; this is done logically from left to right in the conceptual overlay tree. Records of the pairwise approximate available bandwidths, as measured by the pathChirp scheme at individual nodes, are maintained. Evaluations have been conducted against other schemes: Bandwidth Aware multicaSt architecturE (BASE), Tree Building Control Protocol (TBCP), and Host Multicast Tree Protocol (HMTP). The new scheme generally performs better in terms of the trade-off between packet delivery ratio, link stress, control overhead, and end-to-end delays.
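
One plausible reading of the score and placement rules, with the exact ratio directions and the value of α taken as assumptions:

```python
import heapq

def node_gain_score(Ba, Br, Lp, Lb, alpha=600):
    """Assumed NGS: BDR compares estimated available bandwidth Ba with the
    requested bandwidth Br, LDR compares the suggested best latency Lb with
    the proposed latency Lp, weighted by alpha and (1000 - alpha)."""
    bdr = Ba / Br                 # > 1 when more bandwidth is available than asked
    ldr = Lb / Lp                 # > 1 when the node beats the suggested latency
    return alpha * bdr + (1000 - alpha) * ldr

# Max-heap ordering on NGS (heapq is a min-heap, hence the negation);
# all joining nodes are assumed to have NGSs below the source node's.
heap = []
for node_id, (Ba, Br, Lp, Lb) in {
        "n1": (12.0, 4.0, 80.0, 60.0),
        "n2": (6.0, 4.0, 40.0, 60.0),
        "n3": (9.0, 3.0, 120.0, 60.0)}.items():
    heapq.heappush(heap, (-node_gain_score(Ba, Br, Lp, Lb), node_id))
while heap:
    neg_ngs, nid = heapq.heappop(heap)
    print(nid, -neg_ngs)          # nodes in descending NGS order
```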

Using the Monte Carlo Simulation to Predict the Assembly Yield

Electronic products that achieve high levels of integrated communications, computing, entertainment and multimedia features in small, stylish and robust new form factors are winning in the marketplace. Because of the high costs an industry may incur, and because a high yield is directly proportional to high profits, IC (Integrated Circuit) manufacturers struggle to maximize yield; yet today's customers demand miniaturization, low costs, high performance and excellent reliability, making yield maximization a never-ending search for an enhanced assembly process. With factors such as minimum tolerances and tighter parameter variations, a systematic approach is needed to predict the assembly process. To evaluate the quality of upcoming circuits, yield models are used, which not only predict manufacturing costs but also provide vital information that eases the process of correction when yields fall below expectations. For an IC manufacturer to obtain higher assembly yields, all factors, such as boards, placement, components, the material from which the components are made, and processes, must be taken into consideration. Effective placement yield depends heavily on machine accuracy and on the vision system, which needs the ability to recognize the features on the board and component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package, and using solder ball locations, also called footprints. The only assumption that a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, which comprises a class of computational algorithms that depend on repeated random sampling to compute their results. This method is utilized to simulate the placement and assembly processes within a production line.
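
A minimal Monte Carlo placement-yield sketch in the spirit described: random placement offsets are drawn repeatedly and compared against a pad tolerance, with the accuracy and tolerance values assumed rather than taken from a real line.

```python
import numpy as np

# A placement succeeds when the machine's random x/y offset stays within
# the pad tolerance; offsets are zero-mean Gaussians whose sigma is the
# machine accuracy. All numeric values below are assumptions.
rng = np.random.default_rng(42)
n_trials = 1_000_000
sigma_mm = 0.02                       # placement accuracy (std dev, mm)
tol_mm = 0.05                         # allowable pad misalignment (mm)

dx = rng.normal(0.0, sigma_mm, n_trials)
dy = rng.normal(0.0, sigma_mm, n_trials)
ok = (np.abs(dx) < tol_mm) & (np.abs(dy) < tol_mm)
print("estimated placement yield:", ok.mean())
```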

Statistical Optimization of Process Variables for Direct Fermentation of 226 White Rose Tapioca Stem to Ethanol by Fusarium oxysporum

Direct fermentation of 226 white rose tapioca stem to ethanol by Fusarium oxysporum was studied in a batch reactor. Fermentation to ethanol can be achieved by sequential pretreatment with dilute acid and dilute alkali solutions using 100 mesh tapioca stem particles. The quantitative effects of substrate concentration, pH and temperature on the ethanol concentration were optimized using a full factorial central composite design of experiments, and the optimum process conditions were then obtained using response surface methodology. The quadratic model indicated that a substrate concentration of 33 g/l, a pH of 5.52 and a temperature of 30.13°C were optimal for a maximum ethanol concentration of 8.64 g/l. The predicted optimum process conditions obtained using response surface methodology were verified through confirmatory experiments. The Luedeking-Piret model was used to study the product formation kinetics of ethanol production, and the model parameters were evaluated using the experimental data.
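
The Luedeking-Piret form couples product formation to growth, dP/dt = a·dX/dt + b·X; a sketch with an assumed logistic growth term and illustrative parameter values (not the fitted ones from the study) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Luedeking-Piret product formation coupled to logistic biomass growth:
#   dX/dt = mu_max * X * (1 - X/X_max),   dP/dt = a*dX/dt + b*X.
mu_max, X_max, a, b = 0.25, 5.0, 1.5, 0.05   # assumed kinetic parameters

def rhs(t, y):
    X, P = y
    dX = mu_max * X * (1.0 - X / X_max)
    return [dX, a * dX + b * X]

sol = solve_ivp(rhs, (0.0, 48.0), [0.1, 0.0], dense_output=True)
print("final ethanol (a.u.):", sol.y[1, -1])
```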

Generalized Mean-field Theory of Phase Unwrapping via Multiple Interferograms

On the basis of Bayesian inference using the maximizer of the posterior marginal estimate, we carry out phase unwrapping using multiple interferograms via generalized mean-field theory. For numerical calculations on a typical wave-front in remote sensing using synthetic aperture radar interferometry, the phase diagram in hyper-parameter space clarifies that the present method succeeds in phase unwrapping perfectly under the constraint of the surface-consistency condition, if the interferograms are not corrupted by any noise. We also find that the prior is useful for extending the range of phases for which phase unwrapping succeeds under the constraint of the surface-consistency condition. These results are quantitatively confirmed by Monte Carlo simulation.
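
For contrast with the Bayesian/mean-field formulation, a one-dimensional baseline (Itoh's method, as implemented by numpy.unwrap) is sketched below; it integrates wrapped phase differences and works only in the noise-free, small-gradient regime that the method above is designed to go beyond.

```python
import numpy as np

x = np.linspace(0.0, 4.0, 200)
true_phase = 2.0 * np.pi * x                  # smooth wave-front spanning > 2*pi
wrapped = np.angle(np.exp(1j * true_phase))   # principal values in (-pi, pi]
recovered = np.unwrap(wrapped)                # integrate wrapped differences
print(np.allclose(recovered, true_phase))     # True: gradients stay below pi
```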