Ensembling Adaptively Constructed Polynomial Regression Models

The approach of subset selection in polynomial regression model building assumes that the chosen fixed full set of predefined basis functions contains a subset sufficient to describe the target relation adequately. However, in most cases the necessary set of basis functions is not known and has to be guessed – a potentially non-trivial (and long) trial-and-error process. In our research we consider a potentially more efficient approach – Adaptive Basis Function Construction (ABFC). It lets the model building method itself construct the basis functions necessary for creating a model of arbitrary complexity with adequate predictive performance. However, two issues to some extent plague both subset selection and ABFC methods, especially when working with relatively small data samples: selection bias and selection instability. We try to correct these issues by model post-evaluation using cross-validation and by model ensembling. To evaluate the proposed method, we empirically compare it to ABFC methods without ensembling, to a widely used subset selection method, as well as to some other well-known regression modeling methods, using publicly available data sets.
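
As a rough illustration of the post-evaluation and ensembling idea described above (not the authors' ABFC implementation), the following minimal Python sketch scores several candidate polynomial models of increasing complexity by cross-validation and averages the predictions of the best ones; the data, candidate degrees and ensemble size are hypothetical.

```python
# Minimal sketch of CV-based post-evaluation and ensembling of polynomial
# regression models; this is NOT the authors' ABFC method, only an illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))                    # hypothetical inputs
y = X[:, 0] ** 2 - X[:, 1] + rng.normal(0, 0.1, 200)     # hypothetical target

# Candidate models of increasing complexity (here simply polynomial degree).
candidates = [make_pipeline(PolynomialFeatures(d), LinearRegression())
              for d in (1, 2, 3, 4)]

# Post-evaluate each candidate with cross-validation to reduce selection bias.
scores = [cross_val_score(m, X, y, cv=5,
                          scoring="neg_mean_squared_error").mean()
          for m in candidates]

# Keep the two best candidates and average their predictions (a simple ensemble).
best = np.argsort(scores)[-2:]
ensemble = [candidates[i].fit(X, y) for i in best]
y_pred = np.mean([m.predict(X) for m in ensemble], axis=0)
print("CV scores:", np.round(scores, 3))
```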

Equilibrium, Kinetic and Thermodynamic Studies on Biosorption of Cd (II) and Pb (II) from Aqueous Solution Using a Spore Forming Bacillus Isolated from Wastewater of a Leather Factory

The equilibrium, thermodynamics and kinetics of the biosorption of Cd(II) and Pb(II) by a spore-forming Bacillus (MGL 75) were investigated under different experimental conditions. The Langmuir, Freundlich, and Dubinin-Radushkevich (D-R) equilibrium adsorption models were applied to describe the biosorption of the metal ions by the MGL 75 biomass. The Langmuir model fitted the equilibrium data better than the other models. The maximum adsorption capacities qmax for lead(II) and cadmium(II) were found to be 158.73 mg/g and 91.74 mg/g, respectively, by the Langmuir model. The values of the mean free energy determined with the D-R equation showed that the adsorption is a physisorption process. The thermodynamic parameters, the Gibbs free energy (ΔG°), enthalpy (ΔH°), and entropy (ΔS°) changes, were also calculated, and their values indicated that the biosorption process was exothermic and spontaneous. The experimental data were also used to study the biosorption kinetics using pseudo-first-order and pseudo-second-order kinetic models. The kinetic parameters, rate constants, equilibrium sorption capacities and related correlation coefficients were calculated and discussed. The results showed that the biosorption of both metal ions was well described by pseudo-second-order kinetics.
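
For readers who wish to reproduce this kind of analysis, the sketch below fits the Langmuir isotherm and the pseudo-second-order kinetic model by nonlinear least squares in Python; the concentration and uptake values are hypothetical placeholders, not the data reported in this study.

```python
# Illustrative fit of the Langmuir isotherm and the pseudo-second-order
# kinetic model; all data points below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def pseudo_second_order(t, qe, k2):
    """Pseudo-second-order kinetics: qt = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g).
Ce = np.array([5, 10, 20, 40, 80, 160.0])
qe = np.array([30, 52, 78, 105, 130, 148.0])
(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[150, 0.05])

# Hypothetical kinetic data: t (min) and qt (mg/g).
t = np.array([5, 10, 20, 40, 80, 160.0])
qt = np.array([40, 65, 90, 110, 122, 128.0])
(qe_k, k2), _ = curve_fit(pseudo_second_order, t, qt, p0=[130, 0.001])

print(f"Langmuir: qmax={qmax:.1f} mg/g, KL={KL:.3f} L/mg")
print(f"Pseudo-second-order: qe={qe_k:.1f} mg/g, k2={k2:.5f} g/(mg*min)")
```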

Design and Economical Performance of Gray Water Treatment Plant in Rural Region

In India, the conflict between the growing human population and the planet's fixed supply of freshwater, together with falling water tables, has drawn attention to the reuse of gray water as an alternative water resource in rural development. This paper presents an optimized design of a laboratory-scale gray water treatment plant, which combines natural and physical operations such as primary settling with cascaded water flow, aeration, agitation and filtration, and is hence called a hybrid treatment process. The economical performance of the plant in treating gray water from bathrooms, basins and laundries is shown in terms of the removal efficiency of water pollutants such as COD (83%), TDS (70%), TSS (83%), total hardness (50%), oil and grease (97%), anions (46%) and cations (49%). Hence, this technology could be a good alternative for treating gray water in residential rural areas.

Use of Bayesian Network in Information Extraction from Unstructured Data Sources

This paper applies Bayesian networks to support information extraction from unstructured, ungrammatical, and incoherent data sources for semantic annotation. A tool has been developed that combines ontologies, machine learning, information extraction and probabilistic reasoning techniques to support the extraction process. Data acquisition is performed with the aid of knowledge specified in the form of an ontology. Due to the variable amount of information available on different data sources, it is often the case that the extracted data contain missing values for certain variables of interest. It is desirable in such situations to predict the missing values. The methodology presented in this paper first learns a Bayesian network from the training data and then uses it to predict missing data and to resolve conflicts. Experiments have been conducted to analyze the performance of the presented methodology. The results look promising, as the methodology achieves a high degree of precision and recall for information extraction and reasonably good accuracy for predicting missing values.
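
The "learn a Bayesian network, then predict missing values" step might look roughly like the sketch below. The paper does not name a specific toolkit; pgmpy is used here only as one possible choice, and the variables and records are purely hypothetical.

```python
# Sketch of learning a Bayesian network from training data and using it to
# predict a missing attribute. pgmpy is an assumed (not the paper's) toolkit,
# and the variables Make/Model/Year are hypothetical.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Hypothetical training records extracted from unstructured sources.
train = pd.DataFrame({
    "Make":  ["Honda", "Honda", "Ford", "Ford", "Honda", "Ford"],
    "Model": ["Civic", "Civic", "Focus", "Focus", "Accord", "Focus"],
    "Year":  ["2003",  "2004",  "2003",  "2005",  "2004",  "2005"],
})

# Structure assumed known here; in practice it could also be learned from data.
bn = BayesianNetwork([("Make", "Model"), ("Make", "Year")])
bn.fit(train, estimator=MaximumLikelihoodEstimator)

# Predict the most probable value of a missing attribute given what was extracted.
infer = VariableElimination(bn)
print(infer.map_query(variables=["Model"], evidence={"Make": "Honda"}))
```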

Modeling and Simulation of Robotic Arm Movement using Soft Computing

In this research paper we present a control architecture for robotic arm movement and trajectory planning using Fuzzy Logic (FL) and Genetic Algorithms (GAs). This architecture is used to compensate for uncertainties such as movement, friction and settling time in robotic arm movement. Genetic algorithms and fuzzy logic are used to meet the objective of optimal control of robotic arm movement. The proposed technique represents a general model for redundant structures and may be extended to other structures. Results show optimal angular movement of the joints as a result of the evolutionary process. This technique has an edge over other techniques, as it requires minimal mathematical complexity.
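
The evolutionary part of such an architecture can be illustrated with the minimal genetic algorithm below, which searches the joint angles of a two-link planar arm so that the end effector reaches a target point; the link lengths, target, GA settings and the omission of the fuzzy-logic controller are simplifying assumptions, not the authors' model.

```python
# Minimal GA searching joint angles of a hypothetical two-link planar arm;
# only an illustration of the evolutionary search, not the paper's controller.
import numpy as np

L1, L2 = 1.0, 0.8                       # hypothetical link lengths
target = np.array([1.2, 0.9])           # hypothetical target position

def end_effector(angles):
    t1, t2 = angles
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.array([x, y])

def fitness(angles):
    # Higher is better: negative distance of the end effector to the target.
    return -np.linalg.norm(end_effector(angles) - target)

rng = np.random.default_rng(1)
pop = rng.uniform(-np.pi, np.pi, size=(50, 2))       # initial population
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-25:]]           # keep the best half
    children = parents[rng.integers(0, 25, 25)] + rng.normal(0, 0.1, (25, 2))
    pop = np.vstack([parents, children])              # elitism + mutation

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best joint angles (rad):", np.round(best, 3),
      "reach error:", round(-fitness(best), 4))
```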

The Upconversion of Co-doped Nd3+/Er3+ Tellurite Glass

A series of tellurite glasses of the system 78TeO2-10PbO-10Li2O-(2-x)Nd2O3-xEr2O3, where x = 0.5, 1.0, 1.5 and 2.0, was successfully prepared. A study of the upconversion luminescence of the Nd3+/Er3+ co-doped tellurite glass has been carried out. From Judd-Ofelt analysis, the experimental lifetimes, τexp, of the glass series are found to be higher in the visible region, varying from 65.17 ms to 114.63 ms, whereas in the near-infrared (NIR) region the lifetimes vary from 2.133 ms to 2.270 ms. Meanwhile, the emission cross sections, σ, are found to vary from 0.004 × 10⁻²⁰ cm² to 1.007 × 10⁻²⁰ cm² with composition. The emission spectra of the glass are found to have contributions from both Nd3+ and Er3+ ions, with nine significant transition peaks observed. The upconversion mechanism of the co-doped tellurite glass is shown in schematic energy diagrams. In this work, it is found that excited-state absorption (ESA) is still dominant in the upconversion excitation process, as the excitation of the Nd3+ excited-state levels is accomplished through a stepwise multiphonon process. An efficient excitation energy transfer (ET) has been observed between Nd3+ as the donor and Er3+ as the acceptor. As a result, the respective emission spectra have been observed.

Vermicomposting of Waste Corn Pulp Blended with Cow Dung Manure using Eisenia Fetida

Waste corn pulp was investigated as a potential feedstock for vermicomposting using Eisenia fetida. Corn pulp is the major staple food in Southern Africa and constitutes about 25% of the total organic waste. Waste cooked corn pulp was blended with cow dung in the ratio 6:1 to optimize the vermicomposting process. The feedstock was allowed to vermicompost for 30 days. The vermicomposting took place in a 3-tray plastic worm bin. Moisture content, temperature, pH, and electrical conductivity were monitored daily. The NPK content was determined at day 30. During vermicomposting, the moisture content increased from 27.68% to 52.41%, the temperature ranged between 19 and 25 °C, the pH increased from 5.5 to 7.7, and the electrical conductivity decreased from 80,000 μS/cm to 60,000 μS/cm. The ash content increased from 11.40% to 28.15%; additionally, the volatile matter increased from 1.45% to 10.02%. An odorless, dark brown vermicompost was obtained. The N, P and K contents of the vermicompost were 4.19%, 1.15%, and 6.18%, respectively.

Treatment of Oily Wastewater by Fibrous Coalescer Process: Stage Coalescer and Model Prediction

The coalescer process is one of the methods for oily wastewater treatment: it increases the oil droplet size in order to enhance the separation velocity and thus achieve effective separation. However, the presence of surfactants in an oily emulsion can limit these mechanisms due to the small oil droplet size associated with a stabilized emulsion. In this regard, the purpose of this research is to improve the efficiency of the coalescer process for treating stabilized emulsions. The effects of bed type, bed height, liquid flow rate and stage coalescer (step bed) on the treatment efficiencies in terms of COD values were studied. Note that the treatment efficiency obtained experimentally was estimated by using the COD values and the oil droplet size distribution. The study has shown that the plastic media is more effective at attaching oil droplets than the stainless steel one due to its hydrophobic properties. Furthermore, a suitable bed height (3.5 cm) and step bed (3.5 cm with 2 steps) were necessary in order to obtain good coalescer performance. The application of the step-bed coalescer process in the reactor provided higher treatment efficiencies in terms of COD removal than those obtained with the classical process. The proposed model for predicting the area under the curve, and thus the treatment efficiency, based on the single collector efficiency (ηT) and the attachment efficiency (α), provides relatively good agreement between the experimental and predicted values of treatment efficiency in this study.

A Growing Neural Gas Approach for Evaluating Quality of Software Modules

The prediction of software quality during the development life cycle of a software project helps the development organization make efficient use of available resources and produce a product of the highest quality. A "whether a module is faulty or not" approach can be used to predict the quality of a software module. A number of software quality prediction models based upon genetic algorithms, artificial neural networks and other data mining algorithms are described in the literature. One promising approach to quality prediction is based on clustering techniques. Most quality prediction models that are based on clustering techniques make use of the K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas or fuzzy K-means algorithm for prediction. All these techniques require a predefined structure, that is, the number of neurons or clusters must be known before the clustering process starts. In the case of Growing Neural Gas, however, there is no need to predetermine the number of neurons or the topology of the structure; it starts with a minimal neuron structure that is incremented during training until it reaches a user-defined maximum number of clusters. Hence, in this work we have used Growing Neural Gas as the underlying clustering algorithm, which produces an initial set of labeled clusters from the training data set; this set of clusters is then used to predict the quality of the software modules in the test data set. The best testing results show 80% accuracy in evaluating the quality of software modules. Hence, the proposed technique can be used by programmers to evaluate the quality of modules during software development.
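
The prediction step can be illustrated as follows: each test module is assigned the fault label of its nearest labeled cluster. The Growing Neural Gas training that produces the clusters is omitted here, and the centroids, metrics and labels are hypothetical placeholders.

```python
# Sketch of the prediction step: a test module gets the fault label of the
# nearest labeled cluster. The GNG training that would produce these clusters
# is omitted; centroids, labels and metric values are hypothetical.
import numpy as np

# Hypothetical cluster centroids in a 3-metric space (e.g. LOC, cyclomatic
# complexity, fan-out) with a fault label assigned to each cluster.
centroids = np.array([[120, 4, 2],
                      [900, 25, 14],
                      [300, 10, 5.0]])
labels = np.array(["not faulty", "faulty", "not faulty"])

def predict(module_metrics):
    """Label a test module with the label of its nearest cluster centroid."""
    d = np.linalg.norm(centroids - module_metrics, axis=1)
    return labels[np.argmin(d)]

print(predict(np.array([850, 22, 11])))   # -> "faulty"
print(predict(np.array([150, 5, 3])))     # -> "not faulty"
```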

Gated Community: The Past and Present in China

The gated community has gained such dominance in residential development that it has become the standard pattern for newly built residential areas in contemporary China. The gated community form has its own advantages and a rationality that meets the needs of a large number of residents, but researchers also believe that the form greatly damages urban morphology and development and has a negative impact on residents' ways of living. However, there is still considerable controversy over its origins and outcomes. Though recognized as a global phenomenon, the gated community as developed in China has a great deal to do with specific local forces, with respect to the country's unique historical, political and socio-cultural momentums. A historical review of traditional settlements in China and of how the gated community has gained its contemporary form is indispensable for comprehending these local forces, and it provides a new perspective for resolving the controversy.

A Dynamic Programming Model for Maintenance of Electric Distribution System

The paper presents a dynamic programming based model as a planning tool for the maintenance of electric power systems. Every distribution component has an exponentially age-dependent reliability function to model the fault risk. At the moment when the fault costs exceed the investment cost of a new component, reinvestment in the component should be made. However, in some cases overhauling the old component may be more economical than reinvestment. The comparison between overhauling and reinvestment is made by an optimisation process. The goal of the optimisation process is to find the cost-minimising maintenance program for the electric power distribution system.
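
A minimal dynamic-programming sketch of the overhaul-versus-reinvestment decision is given below; the planning horizon, cost figures, the exact exponential fault-risk function and the assumed effect of an overhaul are hypothetical placeholders rather than the paper's actual model.

```python
# Minimal DP sketch of the overhaul-vs-reinvestment decision for one component.
# All parameters and the fault-risk function are hypothetical placeholders.
from functools import lru_cache
import math

HORIZON    = 20          # planning years
C_REPLACE  = 100.0       # investment cost of a new component
C_OVERHAUL = 30.0        # cost of overhauling the old component
C_FAULT    = 40.0        # cost of one fault
MAX_AGE    = 60

def fault_prob(age):
    """Exponentially age-dependent fault risk (hypothetical parameters)."""
    return 1.0 - math.exp(-0.02 * math.exp(0.08 * age))

@lru_cache(maxsize=None)
def min_cost(year, age):
    """Minimum expected maintenance cost from `year` to the end of the horizon."""
    if year == HORIZON:
        return 0.0
    keep     = C_FAULT * fault_prob(age) + min_cost(year + 1, min(age + 1, MAX_AGE))
    # Overhaul is assumed (hypothetically) to rejuvenate the component by 10 years.
    overhaul = C_OVERHAUL + C_FAULT * fault_prob(max(age - 10, 0)) \
               + min_cost(year + 1, max(age - 10, 0) + 1)
    replace  = C_REPLACE + C_FAULT * fault_prob(0) + min_cost(year + 1, 1)
    return min(keep, overhaul, replace)

print("minimum expected cost for a 15-year-old component:",
      round(min_cost(0, 15), 1))
```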

Numerical Simulation of Investment Casting of Gold Jewelry: Experiments and Validations

This paper presents a numerical simulation of the investment casting of gold jewelry. It aims to study the behavior of the fluid flow during mould filling and solidification and to optimize the process parameters, in order to predict and control casting defects such as gas porosity and shrinkage porosity. FLOW-3D, a finite-difference-based computer simulation software, was used to simulate the jewelry casting process. A simplified model was designed for both the numerical simulation and real casting production. A set of sensors was placed at different positions on the wax tree of the model to detect filling times, while a set of thermocouples was installed to record the temperature during casting and cooling. The recorded data were used to validate the results of the numerical simulation against those of the real casting. The resulting comparisons indicate that numerical simulation can be used as an effective tool in investment-casting-process optimization and casting-defect prediction.

Implementing Knowledge Transfer Solution through Web-based Help Desk System

Knowledge management (KM) is the process of taking whatever steps are needed to get the most out of the available knowledge resources. KM involves several steps: capturing knowledge, discovering new knowledge, sharing knowledge, and applying knowledge in the decision-making process. In applying knowledge, it is not necessary for the individual who uses it to comprehend it, as long as the available knowledge is used to guide decision making and actions. When an expert is called and provides a step-by-step procedure for solving the problem, the expert is transferring knowledge, or giving direction, to the caller. The caller is 'applying' the knowledge by following the instructions given by the expert. An appropriate mechanism is needed to ensure effective knowledge transfer, which in this case takes place by telephone or email. The problem with email and telephone is that the knowledge is not fully circulated and disseminated to all users. In this paper, drawing on the related experience of a local university Help Desk, we propose the use of Information Technology (IT) to effectively support knowledge transfer in the organization. The issues covered include the existing knowledge, the related works, the methodology used in defining the knowledge management requirements, as well as an overview of the prototype.

The Impact of e-Learning and e-Teaching

With the exponential progress of technological development comes a strong sense that events are moving too quickly for our schools and that teachers may be losing control of them in the process. This paper examines the impact of e-learning and e-teaching in universities, from both the student and the teacher perspective. In particular, it is shown that e-teachers should focus not only on the technical capacities and functions of IT materials and activities, but must also attempt to more fully understand how their e-learners perceive the learning environment. From the e-learner perspective, this paper indicates that simply having IT tools available does not automatically translate into all students becoming effective learners. More evidence-based evaluative research is needed to allow e-learning and e-teaching to reach their full potential.

Performance Evaluation of a Diesel Engine Fueled with Methyl Ester of Shea Butter

Biodiesel as an alternative fuel for diesel engines has been under development for some three decades now. While it is gaining wide acceptance in Europe, the USA and some parts of Asia, the same cannot be said of Africa. With more than 35 countries on the continent depending on imported crude oil, it is necessary to look for alternative fuels that can be produced from resources available locally within any country. Hence this study presents the performance of a single-cylinder diesel engine using blends of shea butter biodiesel. Shea butter was transformed into biodiesel by a transesterification process. Tests were conducted to compare the biodiesel with a baseline diesel fuel in terms of engine performance and exhaust emission characteristics. The results obtained showed that the addition of biodiesel to diesel fuel decreases the brake thermal efficiency (BTE) and increases the brake specific fuel consumption (BSFC). These results are expected due to the lower energy content of biodiesel fuel. On the other hand, while the NOx emissions increased with increasing biodiesel content in the fuel blends, the emissions of carbon monoxide (CO) and unburnt hydrocarbons (UHC) and the smoke opacity decreased. The engine performance, which indicates that the biodiesel has properties and characteristics similar to diesel fuel, and the reductions in exhaust emissions make shea butter biodiesel a viable additive or substitute for diesel fuel.

A Novel Convergence Accelerator for the LMS Adaptive Algorithm

The least mean square (LMS) algorithm is one of the most well-known algorithms for mobile communication systems due to its implementation simplicity. However, its main limitation is a relatively slow convergence rate. In this paper, a booster using the concept of Markov chains is proposed to speed up the convergence rate of LMS algorithms. The nature of Markov chains makes it possible to exploit past information in the updating process. Moreover, since the transition matrix has a smaller variance than that of the weight itself by the central limit theorem, the weight transition matrix converges faster than the weight itself. Accordingly, the proposed Markov-chain based booster has the ability to track variations in signal characteristics and, at the same time, can accelerate the convergence rate of LMS algorithms. Simulation results show that, when the Markov-chain based booster is applied, the LMS algorithm converges faster and approaches the Wiener solution more closely. The mean square error is also remarkably reduced, while the convergence rate is improved.
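
For reference, the baseline LMS recursion that the booster accelerates is sketched below (this is the standard LMS algorithm, not the proposed Markov-chain booster); the unknown channel taps, step size and noise level are hypothetical.

```python
# Baseline LMS adaptive filter identifying a hypothetical unknown FIR channel;
# this illustrates the standard LMS update, not the proposed booster.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.4, 0.2])           # hypothetical unknown system
N, M, mu = 2000, len(true_w), 0.01            # samples, filter length, step size

x = rng.normal(size=N)                                        # input signal
d = np.convolve(x, true_w)[:N] + 0.01 * rng.normal(size=N)    # desired signal

w = np.zeros(M)
for n in range(M, N):
    u = x[n - M + 1:n + 1][::-1]              # most recent M input samples
    e = d[n] - w @ u                          # a-priori estimation error
    w = w + mu * e * u                        # LMS weight update

print("estimated weights:", np.round(w, 3))   # should approach true_w
```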

Prioritizing Service Quality Dimensions: A Neural Network Approach

One of the determinants of a firm's prosperity is the customers' perceived service quality and satisfaction. While service quality is wide in scope and consists of various dimensions, there may be differences in the relative importance of these dimensions in affecting customers' overall satisfaction with service quality. Identifying the relative rank of the different dimensions of service quality is very important in that it can help managers find out which service dimensions have a greater effect on customers' overall satisfaction. Such an insight will consequently lead to more effective resource allocation, which will finally result in higher levels of customer satisfaction. This issue, despite its criticality, has not received enough attention so far. Therefore, using a sample of 240 bank customers in Iran, an artificial neural network is developed to address this gap in the literature. As customers' evaluation of service quality is a subjective process, artificial neural networks, as a brain metaphor, appear to have the potential to model such a complicated process. Proposing a neural network that is able to predict customers' overall satisfaction with service quality with a promising level of accuracy is the first contribution of this study. In addition, prioritizing the service quality dimensions affecting customers' overall satisfaction, by means of a sensitivity analysis of the neural network, is the second important finding of this paper.
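
A sketch of this kind of analysis is shown below: a small neural network is trained to predict overall satisfaction from service quality dimension ratings, and the dimensions are then ranked by a sensitivity analysis. The data are randomly generated placeholders rather than the 240-customer sample, and permutation importance is used here as one possible sensitivity measure.

```python
# Sketch of predicting overall satisfaction from service quality dimensions
# with a neural network and ranking the dimensions by sensitivity analysis.
# Data are random placeholders; permutation importance is an assumed measure.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
dimensions = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]
X = rng.uniform(1, 5, size=(240, len(dimensions)))            # hypothetical ratings
y = 0.1*X[:, 0] + 0.4*X[:, 1] + 0.2*X[:, 2] + 0.2*X[:, 3] + 0.1*X[:, 4] \
    + rng.normal(0, 0.1, 240)                                 # hypothetical satisfaction

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

# Rank dimensions by how much shuffling each one degrades the prediction.
imp = permutation_importance(net, X, y, n_repeats=20, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"{dimensions[i]:15s} importance = {imp.importances_mean[i]:.3f}")
```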

Rheological Modeling for Production of High Quality Polymeric

The fundamental defect inherent to thermoforming technology is wall-thickness variation of the products due to inadequate thermal processing of the polymer during production. A nonlinear viscoelastic rheological model is implemented for developing the process model. This model describes the deformation of a sheet during the thermoforming process. Owing to a relaxation pause after the plug-assist stage and the implementation of a two-stage thermoforming process, the polymeric articles have smaller wall-thickness variation and consequently better mechanical properties. For model validation, a comparative analysis of the theoretical and experimental data is presented.

On the Dynamic Model of Service Innovation in Manufacturing Industry

As manufacturing becomes increasingly dominated by services, products and processes are more and more tied to sophisticated services. Thus, this research starts with a discussion of the integration of product, process, and service in the innovation process. In particular, this paper sets out some foundations for a theory of service innovation in the field of manufacturing, and proposes a dynamic model of service innovation related to product and process. Two dynamic models of service innovation are suggested to investigate major tendencies and dynamic variations during the innovation cycle: co-innovation and sequential innovation. To structure the dynamic models of product, process, and service innovation, the innovation stages in which the two models are mainly realized are identified. The research should encourage manufacturers to formulate strategies and plans for service development together with product and process development.

Analysis of Delay and Throughput in MANET for DSR Protocol

A wireless ad hoc network consists of wireless nodes communicating without the need for a centralized administration, in which all nodes potentially contribute to the routing process. In this paper, we report the simulation results of four different scenarios for wireless ad hoc networks having thirty nodes. The performance of the proposed networks is evaluated in terms of the number of hops per route, delay and throughput with the help of the OPNET simulator. A channel speed of 1 Mbps and a simulation time of 600 simulated seconds were used for all scenarios. For the above analysis the DSR routing protocol has been used. The throughput obtained from the above analysis (four scenarios) is compared as shown in Figure 3. The average media access delay at node_20 for two routes and at node_20 for the four different scenarios is compared as shown in Figures 4 and 5. It is observed that the throughput degrades when routes with different numbers of hops are followed for the same source-destination pair (i.e. it dropped from 1.55 Mbps to 1.43 Mbps, which is around 9.7%, and then dropped to 0.48 Mbps, which is around 35%).