Optimization of the Process of Osmo-Convective Drying of Edible Button Mushrooms using Response Surface Methodology (RSM)

Simultaneous effects of temperature, immersion time, salt concentration, sucrose concentration, pressure and convective dryer temperature on the combined osmotic dehydration-convective drying of edible button mushrooms were investigated. Experiments were designed according to a Central Composite Design with six factors, each at five different levels. Response Surface Methodology (RSM) was used to determine the optimum processing conditions that yield maximum water loss and rehydration ratio and minimum solid gain and shrinkage in osmotic-convective drying of edible button mushrooms. Using response surface and contour plots, the optimum operating conditions were found to be a temperature of 39 °C, an immersion time of 164 min, a salt concentration of 14%, a sucrose concentration of 53%, a pressure of 600 mbar and a drying temperature of 40 °C. At these optimum conditions, water loss, solid gain, rehydration ratio and shrinkage were found to be 63.38 g/100 g initial sample, 3.17 g/100 g initial sample, 2.26 and 7.15%, respectively.
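
As a minimal illustration of the response-surface step (not the paper's six-factor design or measured data), the sketch below fits a second-order model to a small central composite design in two coded factors and searches the fitted surface for a maximum; the factor names, data values and bounds are placeholders.

```python
# A minimal sketch of the RSM step: fit a second-order polynomial to
# designed-experiment data and search it for an optimum.  The data below are
# synthetic placeholders (two coded factors only, not the six factors or the
# measured responses of the study).
import numpy as np
from scipy.optimize import minimize

# coded factor settings (x1 = temperature, x2 = immersion time) and a response
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41]])
y = np.array([40.0, 48.0, 45.0, 50.0, 60.0, 42.0, 47.0, 44.0, 46.0])  # e.g. water loss

def design_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

def predicted_response(x):
    return design_matrix(np.atleast_2d(x)) @ beta

# maximize the response (minimize its negative) within the coded region
res = minimize(lambda x: -predicted_response(x)[0], x0=[0, 0],
               bounds=[(-1.41, 1.41), (-1.41, 1.41)])
print("optimum (coded factors):", res.x, "predicted response:", -res.fun)
```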

Statistical Optimization of Process Variables for Direct Fermentation of 226 White Rose Tapioca Stem to Ethanol by Fusarium oxysporum

Direct fermentation of 226 white rose tapioca stem to ethanol by Fusarium oxysporum was studied in a batch reactor. Ethanol fermentation was achieved after sequential pretreatment of 100-mesh tapioca stem particles with dilute acid and dilute alkali solutions. The quantitative effects of substrate concentration, pH and temperature on ethanol concentration were optimized using a full factorial central composite design experiment, and the optimum process conditions were then obtained using response surface methodology. The quadratic model indicated that a substrate concentration of 33 g/l, pH 5.52 and a temperature of 30.13 °C were optimum for a maximum ethanol concentration of 8.64 g/l. The predicted optimum process conditions obtained using response surface methodology were verified through confirmatory experiments. The Luedeking-Piret model was used to study the product formation kinetics of ethanol production, and the model parameters were evaluated using experimental data.
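
The Luedeking-Piret expression referenced above relates product formation to growth, dP/dt = α·dX/dt + β·X. The sketch below couples it to a logistic biomass model and integrates it numerically; all rate constants and initial values are placeholders, not the parameters fitted in the study.

```python
# A hedged sketch of Luedeking-Piret product-formation kinetics coupled to
# logistic biomass growth, dP/dt = alpha*dX/dt + beta*X.  Parameter values
# below are placeholders, not the constants fitted in the paper.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, X_max = 0.25, 6.0     # 1/h, g/l (assumed)
alpha, beta = 1.2, 0.01       # growth- and non-growth-associated terms (assumed)

def kinetics(t, y):
    X, P = y
    dX = mu_max * X * (1.0 - X / X_max)      # logistic growth
    dP = alpha * dX + beta * X               # Luedeking-Piret
    return [dX, dP]

sol = solve_ivp(kinetics, (0.0, 48.0), [0.2, 0.0], dense_output=True)
t = np.linspace(0, 48, 7)
X, P = sol.sol(t)
print("biomass (g/l):", np.round(X, 2))
print("ethanol (g/l):", np.round(P, 2))
```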

A Calibration Approach towards Reducing ASM2d Parameter Subsets in Phosphorus Removal Processes

A novel calibration approach that aims to reduce ASM2d parameter subsets and decrease model complexity is presented. This approach does not require high computational demand and reduces the number of modeling parameters required to achieve ASM calibration by employing a sensitivity and iteration methodology. Parameter sensitivity is a crucial factor, and the iteration methodology enables refinement of the simulation parameter values. During the iteration process, parameter values are determined in descending order of their sensitivities, and the number of iterations required equals the number of model parameters in the parameter significance ranking. This approach was applied successfully to the ASM2d model to evaluate enhanced biological phosphorus removal (EBPR). The simulation results provide the calibration parameters, which included YPAO, YPO4, YPHA, qPHA, qPP, μPAO, bPAO, bPP, bPHA, KPS, YA, μAUT, bAUT, KO2 AUT, and KNH4 AUT. These parameters corresponded to the available experimental data.
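
The sketch below illustrates the sensitivity-and-iteration idea on a toy model: parameters are ranked by a one-at-a-time sensitivity measure and then calibrated one per iteration in descending order of sensitivity. The model, objective and parameter names are hypothetical stand-ins for the ASM2d simulation and the listed ASM2d parameters.

```python
# A hedged sketch of the sensitivity-ranked, one-parameter-at-a-time
# calibration loop.  The "model" and measured data are toy placeholders; in
# the paper the model is an ASM2d simulation and the parameters are YPAO,
# qPP, muPAO, etc.
import numpy as np
from scipy.optimize import minimize_scalar

measured = np.array([4.0, 2.5, 1.2])                 # placeholder observations

def model(params):
    a, b, c = params["a"], params["b"], params["c"]  # hypothetical parameters
    return np.array([a + b, a * c, b * c])

def error(params):
    return float(np.sum((model(params) - measured) ** 2))

params = {"a": 1.0, "b": 1.0, "c": 1.0}              # defaults before calibration

def sensitivity(name, delta=0.01):
    """Output change per unit relative perturbation of one parameter."""
    perturbed = dict(params, **{name: params[name] * (1 + delta)})
    return float(np.linalg.norm(model(perturbed) - model(params)) / delta)

ranking = sorted(params, key=sensitivity, reverse=True)   # most sensitive first
for name in ranking:                                      # one iteration per parameter
    res = minimize_scalar(lambda v: error(dict(params, **{name: v})),
                          bounds=(0.1, 5.0), method="bounded")
    params[name] = res.x

print("calibrated parameters:", {k: round(v, 3) for k, v in params.items()})
```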

Learning Monte Carlo Data for Circuit Path Length

This paper analyzes the patterns of Monte Carlo data for a large number of variables and minterms in order to characterize circuit path length behavior. We propose models that are trained on shortest path lengths derived from a wide range of binary decision diagram (BDD) simulations. The model was created using a feed-forward neural network (NN) modeling methodology. Experimental results for ISCAS benchmark circuits show an RMS error of 0.102 for the shortest path length complexity estimation predicted by the NN model (NNM). Such a model can help reduce the time complexity of very large scale integrated (VLSI) circuits and related computer-aided design (CAD) tools that use BDDs.
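
A minimal sketch of the modeling step follows, assuming a scikit-learn MLPRegressor as the feed-forward NN and synthetic (variables, minterms) data in place of the BDD-derived training set; the reported 0.102 RMS error is not reproduced by this toy example.

```python
# A hedged sketch of the modeling step: a feed-forward neural network
# regressor trained on (number of variables, number of minterms) features to
# predict a shortest-path-length statistic.  The training data are synthetic;
# the paper derives them from BDD simulations of ISCAS circuits.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_vars = rng.integers(5, 40, size=500)
n_minterms = rng.integers(10, 2000, size=500)
X = np.column_stack([n_vars, np.log2(n_minterms)])
y = 0.6 * n_vars + 0.8 * np.log2(n_minterms) + rng.normal(0, 1.0, size=500)  # toy target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
nn = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
nn.fit(X_train, y_train)

rms = np.sqrt(np.mean((nn.predict(X_test) - y_test) ** 2))
print("RMS error on held-out data:", round(float(rms), 3))
```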

The Spiral_OWL Model – Towards Spiral Knowledge Engineering

The Spiral development model has been used successfully in many commercial systems and in a good number of defense systems. This is due to its cost-effective incremental commitment of funds (by analogy of the spiral model to stud poker) and to the fact that it can be used to develop hardware or to integrate software, hardware, and systems. To support adaptive, semantic collaboration between domain experts and knowledge engineers, a new knowledge engineering process, called Spiral_OWL, is proposed. This model is based on the idea of iterative refinement, annotation and structuring of the knowledge base. The Spiral_OWL model is generated based on the spiral model and knowledge engineering methodology. A central paradigm of the Spiral_OWL model is its concentration on risk-driven determination of the knowledge engineering process. The collaboration aspect comes into play during the knowledge acquisition and knowledge validation phases. The design rationale for the Spiral_OWL model is an easy-to-implement, well-organized, iterative development cycle that expands as a spiral.

Data Mining Usage in Production System Management

The paper gives the pilot results of a project oriented toward the use of data mining techniques for knowledge discovery from production systems and the application of that knowledge to the management of these systems. Simulation models of manufacturing systems were developed to obtain the necessary data about production. The authors developed a way of storing the data obtained from the simulation models in a data warehouse. A data mining model was created using specific methods and selected techniques for defined problems of production system management. The new knowledge was applied to the production management system and tested on simulation models of the production system. An important benefit of the project is the proposal of a new methodology focused on data mining from the databases that store operational data about the production process.

Investigations Into the Turning Parameters Effect on the Surface Roughness of Flame Hardened Medium Carbon Steel with TiN-Al2O3-TiCN Coated Inserts based on Taguchi Techniques

The aim of this research is to evaluate surface roughness and develop a multiple regression model for surface roughness as a function of cutting parameters during the turning of flame hardened medium carbon steel with TiN-Al2O3-TiCN coated inserts. An experimental plan and the signal-to-noise (S/N) ratio were used to relate the influence of turning parameters to the workpiece surface finish, following the Taguchi methodology. The effects of the turning parameters were studied using the analysis of variance (ANOVA) method. The evaluated parameters were feed, cutting speed, and depth of cut. It was found that the most significant interaction among the considered turning parameters was between depth of cut and feed. The average surface roughness (Ra) obtained with the TiN-Al2O3-TiCN coated inserts was about 2.44 μm, and the minimum value was 0.74 μm. In addition, the regression model was able to predict surface roughness values that agreed with the experimental values within reasonable limits.
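
The Taguchi step described above can be summarized by the smaller-the-better signal-to-noise ratio, S/N = -10·log10(mean(y²)). The sketch below computes it for a hypothetical L9 array with placeholder Ra replicates and ranks the factor effects by the range of mean S/N across levels.

```python
# A hedged sketch of the Taguchi analysis step: compute the smaller-the-better
# signal-to-noise ratio for each run of an L9 array and average it per factor
# level.  The Ra values and the L9 assignment below are placeholders, not the
# measured data of the paper.
import numpy as np

# L9 orthogonal array: columns = cutting speed, feed, depth of cut (levels 0..2)
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])
Ra = np.array([[1.1, 1.2], [1.8, 1.7], [2.4, 2.5],
               [0.9, 1.0], [1.5, 1.6], [2.1, 2.0],
               [0.8, 0.7], [1.3, 1.4], [1.9, 2.0]])   # two replicates per run (um)

sn = -10.0 * np.log10(np.mean(Ra ** 2, axis=1))        # smaller-the-better S/N

for j, name in enumerate(["cutting speed", "feed", "depth of cut"]):
    level_means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    print(name, "mean S/N per level:", np.round(level_means, 2),
          "delta:", round(max(level_means) - min(level_means), 2))
```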

Optimization of Acid Treatments by Assessing Diversion Strategies in Carbonate and Sandstone Formations

When acid is pumped into damaged reservoirs for damage removal/stimulation, uneven inflow of acid into the formation occurs because acid preferentially travels into highly permeable regions rather than low-permeability regions, or (in general) into the path of least resistance. This can lead to poor zonal coverage and hence warrants diversion to achieve effective placement of acid. Diversion is, desirably, a reversible technique of temporarily reducing the permeability of high-permeability zones, thereby forcing the acid into lower-permeability zones. The uniqueness of each reservoir can pose several challenges to engineers attempting to devise optimum and effective diversion strategies. Diversion techniques include mechanical placement and/or chemical diversion of treatment fluids, further sub-classified into ball sealers, bridge plugs, packers, particulate diverters, viscous gels, crosslinked gels, relative permeability modifiers (RPMs), foams, and/or the use of placement techniques such as coiled tubing (CT) and the maximum pressure difference and injection rate (MAPDIR) methodology. It is not always realized that the effectiveness of diverters depends greatly on reservoir properties, such as formation type, temperature, reservoir permeability, heterogeneity, and physical well characteristics (e.g., completion type, well deviation, length of treatment interval, multiple intervals, etc.). This paper reviews the mechanisms by which each variety of diverter functions and discusses the effect of various reservoir properties on the efficiency of diversion techniques. Guidelines are recommended to help enhance productivity from zones of interest by choosing the best methods of diversion while pumping an optimized amount of treatment fluid. The success of an overall acid treatment often depends on the effectiveness of the diverting agents.

Interactive Chinese Character Learning System through Pictograph Evolution

This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictorial evolution as an edutainment concept in computer-based language learning. The origin of the language itself is taken as the learning platform, given the complexity of the Chinese language compared to other languages. Users, especially children, enjoy this learning system more because they are able to memorize Chinese characters easily and better understand their origins in a pleasurable learning environment, compared to the traditional approach in which children must learn Chinese characters by rote in an unenjoyable environment. Skeletonization is used as the representation of Chinese characters and objects, with an animated pictograph evolution to facilitate learning of the language. A shortest skeleton path matching technique is employed for fast and accurate matching in our implementation. The user is required to either write a word or draw a simple 2D object in the input panel; the matched word and object are then displayed, together with the pictograph evolution, to instill learning. The target of the computer-based learning system is pre-school children between 4 and 6 years old, to learn Chinese characters in a flexible and entertaining manner while utilizing visual and mind-mapping strategies as the learning methodology.
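
A minimal sketch of the skeletonization step is given below, using scikit-image's skeletonize on a synthetic binary glyph; the shortest skeleton path matching of the actual system is not reproduced here.

```python
# A hedged sketch of the skeletonization step: reduce a binary glyph image to
# a one-pixel-wide skeleton.  The glyph is a synthetic cross shape standing in
# for a written character or drawn object.
import numpy as np
from skimage.morphology import skeletonize

glyph = np.zeros((40, 40), dtype=bool)
glyph[18:22, 5:35] = True        # horizontal stroke
glyph[5:35, 18:22] = True        # vertical stroke (a simple cross shape)

skeleton = skeletonize(glyph)
print("foreground pixels:", int(glyph.sum()),
      "-> skeleton pixels:", int(skeleton.sum()))
```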

Design of a Neural Networks Classifier for Face Detection

Face detection and recognition have many applications in a variety of fields such as security systems, videoconferencing and identification. Face classification is currently implemented in software; a hardware implementation allows real-time processing but has higher cost and time-to-market. The objective of this work is to implement a classifier based on a neural network MLP (multi-layer perceptron) for face detection. The MLP is used to classify face and non-face patterns. The system is described in the C language on a P4 (2.4 GHz) to extract the weight values. A hardware implementation is then achieved using a VHDL-based methodology, targeting a Xilinx FPGA as the implementation support.
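
The software stage can be sketched as follows: an MLP is trained on labelled face/non-face feature vectors and its weight matrices are then available for export to the hardware design. The feature vectors below are random placeholders, and scikit-learn stands in for the C implementation used in the paper.

```python
# A hedged sketch of the software stage: train an MLP to separate face from
# non-face patterns before exporting the weights for a hardware design.  The
# feature vectors are random placeholders standing in for normalized image
# windows.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
faces = rng.normal(loc=0.6, scale=0.15, size=(200, 100))      # 10x10 windows, flattened
non_faces = rng.normal(loc=0.4, scale=0.25, size=(200, 100))
X = np.vstack([faces, non_faces])
y = np.array([1] * 200 + [0] * 200)

mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
mlp.fit(X, y)
print("training accuracy:", round(mlp.score(X, y), 3))

# the weight matrices that would be exported to the VHDL design
for i, W in enumerate(mlp.coefs_):
    print(f"layer {i} weight matrix shape:", W.shape)
```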

A New Kind of Methodology for Controlling Complex Systems

Control of complex systems is one of the important fields in complex systems research; it not only relies on the essence of complex systems, denoted by the core concept of emergence, but also embodies the elementary concepts of control theory. Aiming at a clear and self-contained description of emergence, the paper introduces a formal way to completely describe the formation and dynamics of emergence in complex systems. Consequently, this paper presents the Emergence-Oriented Control methodology, which contains three basic control schemes: direct control, system re-structuring and system calibration. As a universal ontology, Emergence-Oriented Control provides a powerful tool for identifying and resolving control problems in specific systems.

Determination of Volatile Organic Compounds in Human Breath by Optical Fiber Sensing

This work proposes an optical fiber (OF) system for sensing various volatile organic compounds (VOCs) in human breath as a non-invasive methodology for the diagnosis of some metabolic disorders. The analyzed VOCs are alkanes (i.e., ethane, pentane, heptane, octane, and decane) and aromatic compounds (i.e., benzene, toluene, and styrene). The OF system displays high analytical performance: it provides near real-time responses, rapid analysis, and low instrumentation costs, and it exhibits a useful linear range and detection limits. For the eight tested VOCs, the developed OF sensor is also comparable to a reference methodology (gas chromatography-mass spectrometry).

Participatory Patterns of Community in Water and Waste Management: A Case Study of Municipality in Amphawa District, Samut Songkram Province

This is survey research using quantitative and qualitative methodologies. There were three objectives: 1) to study the participatory level of the community in water and waste environment management; 2) to study the factors affecting community participation in water and waste environment management in Amphawa District, Samut Songkram Province; and 3) to search for participatory patterns in water and waste management. The population sample for the quantitative research was 1,364 people living in Amphawa District, selected by simple random sampling, and the research instrument was a questionnaire. The qualitative research used purposive sampling in 6 Sub-Districts, namely Ta Ka, Suanluang, Bangkae, Muangmai, Kwae-om, and Bangnanglee Sub District Administration Organization, with a total population of 63. For data analysis, the study used content analysis of the quantitative results to synthesize and build a question frame for interviews and focus group interviews. The study found that the community participatory levels in planning, operation, and evaluation of water and waste management are moderate, while participation in receiving benefits is at a low level; therefore, the overall participatory level of the community in water and waste environment management is at a medium level. The factors affecting community participation in water and waste management are age, the period of dwelling in the community and membership, for which the mean differences are statistically significant at the 0.05 level in the areas of operation, receiving benefits, and evaluation. For patterns of community participation, there is a correlation with water and waste management in 4 areas: 1) participation in planning; 2) participation in operation; 3) participation in receiving benefits, both directly and indirectly; and 4) participation in evaluation and monitoring. The recommendation from this study is the need to create conscious awareness in order to increase the participation level of people by organizing activities that promote participation with a volunteer spirit. Government should open opportunities for people to participate in sharing ideas and create a culture of living together with equality, which would build more concrete participation.

Probabilistic Method of Wind Generation Placement for Congestion Management

Wind farms (WFs) with high levels of penetration are being established in power systems worldwide more rapidly than other renewable resources. The Independent System Operator (ISO), as a policy maker, should propose appropriate places for WF installation in order to maximize the benefits for the investors. There is also a possibility of congestion relief through the new installation of WFs, which should be taken into account by the ISO when proposing locations for WF installation. In this context, an efficient wind farm (WF) placement method is proposed in order to reduce the burden on congested lines. Since the wind speed is a random variable and load forecasts also contain uncertainties, probabilistic approaches are used for this type of study. An AC probabilistic optimal power flow (P-OPF) is formulated and solved using Monte Carlo Simulation (MCS). In order to reduce computation time, point estimate methods (PEM) are introduced as an efficient alternative to time-demanding MCS. Subsequently, the WF optimal placement is determined using generation shift distribution factors (GSDF) together with a new parameter entitled the wind availability factor (WAF). In order to obtain more realistic results, an N-1 contingency analysis is employed to find the optimal size of the WF by means of line outage distribution factors (LODF). The IEEE 30-bus test system is used to show and compare the accuracy of the proposed methodology.
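
As an illustration of why PEM is cheaper than MCS, the sketch below applies Hong's 2m point-estimate scheme under the simplifying assumption of Gaussian (zero-skewness) inputs, so each variable is evaluated at μ ± √m·σ with weight 1/(2m); the power-flow function is a toy stand-in for the AC P-OPF.

```python
# A hedged sketch of the 2m point-estimate method used as a cheap alternative
# to Monte Carlo.  Every input is assumed Gaussian (zero skewness), so each
# variable is evaluated at mu +/- sqrt(m)*sigma with weight 1/(2m).  The
# "power_flow" function is a toy stand-in for the AC P-OPF of the paper.
import numpy as np

mu = np.array([8.0, 1.2, 0.9])        # e.g. mean wind speed and two load levels
sigma = np.array([2.0, 0.1, 0.08])
m = len(mu)

def power_flow(x):
    """Toy response standing in for a congested-line flow from the P-OPF."""
    wind, load1, load2 = x
    return 50.0 + 4.0 * load1 + 3.0 * load2 - 0.8 * wind

# point estimate: evaluate at the mean vector with one variable shifted at a time
mean_est, second_moment = 0.0, 0.0
for k in range(m):
    for sign in (+1.0, -1.0):
        x = mu.copy()
        x[k] += sign * np.sqrt(m) * sigma[k]
        val = power_flow(x)
        mean_est += val / (2 * m)
        second_moment += val ** 2 / (2 * m)

std_est = np.sqrt(second_moment - mean_est ** 2)
print("PEM mean flow:", round(mean_est, 2), "std:", round(std_est, 3))

# cross-check with brute-force Monte Carlo
rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=(20000, m))
mc = np.array([power_flow(s) for s in samples])
print("Monte Carlo mean:", round(mc.mean(), 2), "std:", round(mc.std(), 3))
```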

Design of EDFA Gain Controller based on Disturbance Observer Technique

Based on a theoretical erbium-doped fiber amplifier (EDFA) model, we propose an application of a disturbance observer (DOB) with a proportional/integral/derivative (PID) controller to the EDFA for minimizing the gain-transient time of wavelength-division-multiplexing (WDM) multi-channel optical amplifiers in channel add/drop networks. By applying the DOB with the PID controller to the control of the amplifier gain, we have dramatically reduced the gain-transient time to less than 30 μs. The proposed DOB-based gain control algorithm for the EDFA was implemented as a digital control system using TI's DSP (TMS320C28346) chip, and experimental results verify the excellent performance of the proposed gain control methodology.
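
A hedged discrete-time sketch of the control idea follows: a PID loop on the gain, augmented with a disturbance observer that inverts an assumed first-order nominal model and low-pass filters the result; the plant, gains and the step disturbance representing a channel add are placeholders, not the EDFA dynamics or the tuned controller of the paper.

```python
# A hedged discrete-time sketch of PID control augmented with a disturbance
# observer (DOB).  The first-order "plant" and all gains below are assumed
# values, not the EDFA model or tuning of the paper.
import numpy as np

a, b = 0.95, 0.05              # nominal first-order plant: y[k+1] = a*y[k] + b*(u+d)
Kp, Ki, Kd = 2.0, 0.5, 0.05    # PID gains (assumed)
alpha = 0.3                    # Q-filter coefficient of the DOB

setpoint, y, integ, prev_err = 1.0, 0.0, 0.0, 0.0
prev_y, prev_u, d_hat = 0.0, 0.0, 0.0
trace = []

for k in range(400):
    d = 0.5 if k >= 200 else 0.0          # step disturbance: channel add at k = 200
    err = setpoint - y
    integ += err
    pid = Kp * err + Ki * integ + Kd * (err - prev_err)
    prev_err = err

    # DOB: invert the nominal model on past samples, then low-pass filter
    d_raw = (y - a * prev_y) / b - prev_u
    d_hat = (1 - alpha) * d_hat + alpha * d_raw
    u = pid - d_hat                        # cancel the estimated disturbance

    prev_y, prev_u = y, u
    y = a * y + b * (u + d)                # plant update
    trace.append(y)

print("gain just after the add event:", round(trace[205], 3),
      "steady value:", round(trace[-1], 3))
```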

mCRM's New Opportunities for Customer Satisfaction

This paper addresses a new challenge of customer satisfaction in mobile customer relationship management (mCRM). It presents a conceptualization of mCRM in terms of its unique characteristics of customer satisfaction and develops an empirical framework for the conception of customer satisfaction in mCRM. A single-case study is applied as the methodology. In order to gain an overall view of the empirical case, this paper accesses otherwise invisible and important information about the company under investigation. Interviews with the main informants of the company are the key data source, through which the issues are identified and the proposed framework is built. The study supports the development of customer satisfaction in mCRM, links the theoretical framework to practice, and provides directions for future research. The paper is therefore useful for industry, as it helps companies understand how customer satisfaction changes the mCRM structure and increases their competitive advantage. Finally, the paper contributes to practice by linking a theoretical framework for the conception of customer satisfaction in mCRM to a practical real case.

Investigations of Water-Ethanol Mixtures by the Monte Carlo Method

Energetic and structural results for ethanol-water mixtures as a function of the mole fraction were calculated using the Monte Carlo methodology. Energy partitioning results obtained for the equimolar water-ethanol mixture and other organic liquids are compared. It is shown that at xet = 0.22 the RDFs for water-ethanol and ethanol-ethanol interactions indicate strong hydrophobic interactions between ethanol molecules, and the local structure of the solution is less structured at this concentration than at other ones. The results obtained for the ethanol-water mixture as a function of concentration are in good agreement with the experimental data.
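
The sampling scheme itself can be sketched with a minimal Metropolis Monte Carlo loop for a generic Lennard-Jones liquid in reduced units; the water and ethanol force fields, mole-fraction handling and RDF analysis of the study are not reproduced here.

```python
# A minimal Metropolis Monte Carlo sketch for a model liquid (Lennard-Jones,
# reduced units).  It only illustrates the sampling scheme: random
# single-particle moves accepted with the Boltzmann criterion.
import numpy as np

rng = np.random.default_rng(0)
N, L, T = 64, 5.0, 1.2           # particles, box length, temperature (reduced units)
beta, step = 1.0 / T, 0.15       # inverse temperature, maximum displacement
pos = rng.uniform(0.0, L, size=(N, 3))

def pair_energy(i, coords):
    """Lennard-Jones energy of particle i with all others (minimum image)."""
    d = coords - coords[i]
    d -= L * np.round(d / L)                 # periodic boundary conditions
    r2 = np.sum(d * d, axis=1)
    r2[i] = np.inf                           # exclude self-interaction
    inv6 = 1.0 / r2 ** 3
    return np.sum(4.0 * (inv6 ** 2 - inv6))

accepted = 0
for sweep in range(200):
    for i in range(N):
        old = pair_energy(i, pos)
        trial = pos.copy()
        trial[i] = (trial[i] + rng.uniform(-step, step, 3)) % L
        new = pair_energy(i, trial)
        if new <= old or rng.random() < np.exp(-beta * (new - old)):
            pos = trial
            accepted += 1

print("acceptance ratio:", accepted / (200 * N))
```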

Comparative Study of Evolutionary Model and Clustering Methods in Circuit Partitioning Pertaining to VLSI Design

Partitioning is a critical area of VLSI CAD. In order to build complex digital logic circuits, it is often essential to sub-divide a multi-million transistor design into manageable pieces. This paper looks at various partitioning techniques in VLSI CAD, targeted at various applications. We propose an evolutionary time-series model and a statistical glitch prediction system for partitioning a circuit, using a neural network with global feature selection by means of clustering methods. For the evolutionary time-series model, we made use of genetic, memetic and neuro-memetic techniques. Our work focused on the use of the K-means and EM clustering methods. A comparative study is provided for all techniques to solve the problem of circuit partitioning pertaining to VLSI design. The performance of all approaches is compared using benchmark data provided by the MCNC standard cell placement benchmark netlists. Analysis of the experimental results showed that the neuro-memetic model achieves better performance than the other models in recognizing sub-circuits with a minimum amount of interconnections between them.
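
As a small illustration of the evolutionary side of the comparison (not the memetic or neuro-memetic variants, nor the MCNC netlists), the sketch below applies a textbook genetic algorithm to min-cut bipartitioning of a toy graph.

```python
# A hedged sketch of genetic-algorithm-based circuit bipartitioning.  The
# graph is a toy stand-in for a netlist, partition balance is not enforced,
# and the GA operators are textbook ones (elitist selection, uniform
# crossover, bit-flip mutation) applied to a cut-size objective.
import random

random.seed(1)
NODES = 12
edges = [(i, (i + 1) % NODES) for i in range(NODES)] + [(0, 6), (3, 9)]

def cut_size(assign):
    """Number of edges crossing the two partitions."""
    return sum(assign[a] != assign[b] for a, b in edges)

def random_individual():
    return [random.randint(0, 1) for _ in range(NODES)]

def crossover(p1, p2):
    return [random.choice(pair) for pair in zip(p1, p2)]

def mutate(ind, rate=0.05):
    return [1 - g if random.random() < rate else g for g in ind]

pop = [random_individual() for _ in range(40)]
for gen in range(100):
    pop.sort(key=cut_size)              # smaller cut is better
    survivors = pop[:10]                # elitist truncation selection
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(30)]
    pop = survivors + children

best = min(pop, key=cut_size)
print("best cut size:", cut_size(best), "partition:", best)
```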

Determination of Surface Roughness by Ball Burnishing Process Using Factorial Techniques

Burnishing is a method of finishing and hardening machined parts by plastic deformation of the surface. Experimental work based on a central composite second-order rotatable design has been carried out on a lathe to establish the effects of ball burnishing parameters on the surface roughness of brass material. Analysis of the results by the analysis of variance technique and the F-test shows that the parameters considered have significant effects on the surface roughness.
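
The significance test implied above can be sketched as a regression F-test on a fitted second-order model, F = (SSR/p)/(SSE/(n-p-1)); the burnishing data below are synthetic placeholders with two coded factors.

```python
# A hedged sketch of an F-test for a second-order model fitted to
# designed-experiment data.  The design points and Ra responses are synthetic
# placeholders (two coded factors, e.g. burnishing force and speed).
import numpy as np
from scipy.stats import f as f_dist

X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [0, 0], [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41]])
Ra = np.array([0.92, 0.71, 0.85, 0.66, 0.60, 0.62, 0.95, 0.68, 0.88, 0.74])

x1, x2 = X[:, 0], X[:, 1]
D = np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(D, Ra, rcond=None)
pred = D @ beta

n, p = len(Ra), D.shape[1] - 1          # observations, model terms (excluding intercept)
SSR = np.sum((pred - Ra.mean()) ** 2)   # regression sum of squares
SSE = np.sum((Ra - pred) ** 2)          # residual sum of squares
F = (SSR / p) / (SSE / (n - p - 1))
p_value = f_dist.sf(F, p, n - p - 1)
print("F =", round(F, 2), "p-value =", round(float(p_value), 4))
```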

Effect of the Step Size in the Response Surface Methodology using Nonlinear Test Functions

The response surface methodology (RSM) is a collection of mathematical and statistical techniques useful in the modeling and analysis of problems in which the dependent variable is influenced by several independent variables, with the aim of determining the conditions under which these variables should operate to optimize a production process. The RSM estimates a first-order regression model and sets the search direction using the method of maximum/minimum slope up/down (MMS U/D). However, this method selects the step size intuitively, which can affect the efficiency of the RSM. This paper assesses how the step size affects the efficiency of the methodology. The numerical examples are carried out through Monte Carlo experiments, evaluating three response variables: the efficiency of the gain function, the distance to the optimum and the number of iterations. The simulation experiments showed that the efficiency of the gain function and the distance to the optimum were not affected by the step size, while the number of iterations was affected by both the step size and the type of test function used.
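
A minimal sketch of the step-size effect follows: a first-order model is fitted on a 2² design around the current point, the search moves a fixed step along the direction of maximum slope, and the number of iterations to reach the neighbourhood of the optimum is counted for several step sizes; the test function and step values are placeholders.

```python
# A hedged sketch of the maximum-slope (steepest-ascent) search at the core of
# this comparison: fit a first-order model on a small 2^2 design around the
# current point, move along the fitted gradient with a chosen step size, and
# count iterations until no further improvement.  The test function and step
# sizes are placeholders, not those of the paper.
import numpy as np

def test_function(x):
    return -((x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2)   # maximum at (3, 2)

def local_gradient(center, half_width=0.25):
    """First-order model fitted on a 2^2 factorial around 'center'."""
    pts = center + half_width * np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])
    y = np.array([test_function(p) for p in pts])
    D = np.column_stack([np.ones(4), pts[:, 0] - center[0], pts[:, 1] - center[1]])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    return beta[1:]                                    # slope coefficients b1, b2

for step in (0.1, 0.5, 1.5):
    x = np.array([0.0, 0.0])
    iterations = 0
    while iterations < 200:
        g = local_gradient(x)
        candidate = x + step * g / np.linalg.norm(g)   # unit step along max slope
        if test_function(candidate) <= test_function(x):
            break                                      # no further improvement
        x, iterations = candidate, iterations + 1
    print(f"step {step}: {iterations} iterations, final point {np.round(x, 2)}")
```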