On Formalizing Predefined OCL Properties

The ability of UML to handle the modeling of complex industrial software applications has increased its popularity to the extent of becoming the de facto design language. Although its rich graphical notation, naturally oriented towards object-oriented concepts, facilitates understandability, it hardly succeeds in capturing all domain-specific aspects in a satisfactory way. OCL, as the standard language for expressing additional constraints on UML models, has great potential to improve expressiveness. Unfortunately, it suffers from a weak formalism due to its poor semantics, which creates many obstacles to building tool support and thus to its adoption in industry. For this reason, much research has been devoted to formalizing OCL expressions with more rigorous approaches. Our contribution joins this work in a complementary way, since it focuses specifically on OCL predefined properties, which constitute an important part of the construction of OCL expressions. Using formal methods, we succeed in expressing OCL predefined functions rigorously.
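
As a purely illustrative sketch (our assumption, not the paper's actual formalism), the following Python fragment gives set-theoretic definitions for a few OCL predefined collection operations such as forAll, exists, and select:

```python
# Minimal sketch: set-theoretic semantics for a few OCL predefined
# collection operations. Hypothetical illustration, not the paper's formalism.
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")

def ocl_for_all(coll: Iterable[T], p: Callable[[T], bool]) -> bool:
    """coll->forAll(x | p(x)) holds iff p holds for every element."""
    return all(p(x) for x in coll)

def ocl_exists(coll: Iterable[T], p: Callable[[T], bool]) -> bool:
    """coll->exists(x | p(x)) holds iff p holds for at least one element."""
    return any(p(x) for x in coll)

def ocl_select(coll: Iterable[T], p: Callable[[T], bool]) -> list:
    """coll->select(x | p(x)) keeps exactly the elements satisfying p."""
    return [x for x in coll if p(x)]

# Example: context Company inv: self.employees->forAll(e | e.age >= 18)
ages = [21, 34, 19]
assert ocl_for_all(ages, lambda a: a >= 18)
```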

Enhancing the Error-Correcting Performance of LDPC Codes through an Efficient Use of Decoding Iterations

The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small blocklengths of up to 800 bits and up to 2000 iterations, the results demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving as the number of iterations increases.
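
For orientation, the sketch below shows the iteration loop of a simple hard-decision bit-flipping LDPC decoder with a configurable iteration cap; this is a generic textbook decoder, not the authors' recovery algorithm:

```python
import numpy as np

def bit_flip_decode(H: np.ndarray, y: np.ndarray, max_iter: int = 2000):
    """Hard-decision bit-flipping LDPC decoding sketch.

    H: (m, n) parity-check matrix over GF(2); y: received hard bits (ints).
    Iterates up to max_iter times, flipping the bit involved in the
    most unsatisfied checks each round.
    """
    x = y.copy()
    for it in range(max_iter):
        syndrome = (H @ x) % 2           # unsatisfied checks
        if not syndrome.any():
            return x, it                 # converged
        # count, per bit, how many failing checks it participates in
        fail_counts = H.T @ syndrome
        x[np.argmax(fail_counts)] ^= 1   # flip the worst bit
    return x, max_iter
```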

Bayesian Network Model for Students' Laboratory Work Performance Assessment: An Empirical Investigation of the Optimal Construction Approach

There are three approaches to complete Bayesian Network (BN) model construction: total expert-centred, total data-centred, and semi data-centred. These three approaches constitute the basis of the empirical investigation undertaken and reported in this paper. The objective is to determine which of these three approaches is optimal for constructing a BN-based model for assessing students' laboratory work performance in a virtual electronic laboratory environment. BN models were constructed using all three approaches, with respect to the focus domain, and compared using a set of optimality criteria. In addition, the impact of the size and source of the training data on the performance of the total data-centred and semi data-centred models was investigated. The results provide additional insight for BN model constructors and contribute supportive evidence to the literature on the conceptual feasibility and efficiency of learning structure and parameters from data. The results also highlight other interesting themes.
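
As a minimal sketch of the semi data-centred idea (expert-fixed structure, parameters learned from data), the fragment below estimates a conditional probability table by relative frequency; the variables and records are hypothetical:

```python
from collections import Counter
from itertools import product

# Semi data-centred sketch: the expert fixes the structure, and the
# parameters (CPT entries) are estimated from data by relative frequency.
# Variables and records below are hypothetical.
records = [
    {"Skill": "high", "Result": "pass"},
    {"Skill": "high", "Result": "pass"},
    {"Skill": "low",  "Result": "fail"},
    {"Skill": "low",  "Result": "pass"},
]

def estimate_cpt(records, child, parent):
    """P(child | parent) by maximum likelihood (relative frequencies)."""
    pair_counts = Counter((r[parent], r[child]) for r in records)
    parent_counts = Counter(r[parent] for r in records)
    return {
        (pv, cv): pair_counts[(pv, cv)] / parent_counts[pv]
        for pv, cv in product(parent_counts, {r[child] for r in records})
    }

print(estimate_cpt(records, child="Result", parent="Skill"))
```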

Analytical Modeling of Channel Noise for Gate Material Engineered Surrounded/Cylindrical Gate (SGT/CGT) MOSFET

In this paper, an analytical model is presented to describe the channel noise in gate material engineered (GME) surrounded/cylindrical gate (SGT/CGT) MOSFETs, based on explicit functions of MOSFET geometry and biasing conditions for all channel lengths down to the deep-submicron regime, and is verified against experimental data. The results show the impact of parameters such as gate bias, drain bias, channel length, device diameter, and gate material work function difference on the drain current noise spectral density of the device, reflecting its applicability to circuit design.
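
For context only, the fragment below evaluates the classical long-channel channel thermal noise expression (van der Ziel form); it is a generic approximation, not the paper's GME SGT/CGT model:

```python
# Classical channel thermal-noise sketch (van der Ziel form), shown only
# to indicate the kind of quantity modeled; NOT the paper's GME SGT/CGT model.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def drain_noise_psd(gd0: float, gamma: float = 2.0 / 3.0, T: float = 300.0) -> float:
    """S_id = 4 k T gamma g_d0  [A^2/Hz], long-channel approximation."""
    return 4.0 * K_B * T * gamma * gd0

print(drain_noise_psd(gd0=1e-3))  # ~1.1e-23 A^2/Hz for g_d0 = 1 mS
```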

Design and Analysis of a Novel 8-DOF Hybrid Manipulator

This paper presents the kinematic and dynamic analysis of a novel 8-DOF hybrid robot manipulator. The hybrid manipulator under consideration consists of a parallel robot followed by a serial mechanism. The parallel mechanism has three translational DOF and the serial mechanism has five DOF, so the overall number of degrees of freedom is eight. The introduced manipulator has a wide workspace and a high capability to reduce actuating energy. The inverse and forward kinematic solutions are described in closed form, and the theoretical results are verified by a numerical example. The inverse dynamic analysis of the robot is presented using the iterative Newton-Euler and Lagrange dynamic formulations. Finally, results for a multi-step arc welding process indicate that the introduced manipulator is highly capable of reducing actuating energy.
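
A minimal sketch of hybrid forward kinematics, under our own assumptions about the structure (a translational parallel stage composed with five serial Denavit-Hartenberg joints), could look like this:

```python
import numpy as np

def translation(p):
    """Homogeneous transform for the 3-DOF translational parallel stage."""
    T = np.eye(4)
    T[:3, 3] = p
    return T

def dh(theta, d, a, alpha):
    """Standard Denavit-Hartenberg transform for one serial joint."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.,       sa,       ca,      d],
                     [0.,       0.,       0.,     1.]])

# Hypothetical parameters: parallel-stage position + 5 serial DH rows.
p = np.array([0.1, 0.2, 0.5])
dh_rows = [(0.3, 0.1, 0.05, np.pi / 2)] * 5   # (theta, d, a, alpha)

T = translation(p)
for row in dh_rows:
    T = T @ dh(*row)
print(T[:3, 3])   # end-effector position of the 8-DOF hybrid chain
```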

Hub Port Positioning and Route Planning of Feeder Lines for Regional Transportation Network

In this paper, we seek to determine a reasonable local hub port and optimal routes for a containership fleet performing pick-ups and deliveries between the hub and spoke ports in the same region. The relationship between a hub port and traffic on feeder lines is analyzed. A new network planning method is proposed: hub port location and route design are integrated, and a capacitated vehicle routing problem with pick-ups, deliveries, and time deadlines is formulated and solved using an improved genetic algorithm to position the hub port and establish routes for the containership fleet. Results on the performance of the algorithm and the feasibility of the approach show that a relatively small fleet of containerships could provide efficient services within the deadlines.
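
A minimal sketch of the route-planning part, with a plain (not the paper's improved) genetic algorithm over hypothetical distances and without the capacity and deadline constraints, could look like this:

```python
import random

# Minimal GA sketch for hub-and-spoke route planning; distances, port
# names, and GA settings below are hypothetical, not the paper's data.
dist = {("H", "A"): 4, ("A", "B"): 2, ("B", "C"): 5, ("C", "H"): 3,
        ("A", "C"): 6, ("B", "H"): 7, ("A", "H"): 4, ("B", "A"): 2,
        ("C", "B"): 5, ("H", "C"): 3, ("C", "A"): 6, ("H", "B"): 7}
spokes = ["A", "B", "C"]

def route_length(order):
    """Hub -> spokes in 'order' -> hub."""
    legs = zip(["H"] + order, order + ["H"])
    return sum(dist[leg] for leg in legs)

def evolve(pop_size=30, generations=100, mut_rate=0.3):
    pop = [random.sample(spokes, len(spokes)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=route_length)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            child = random.choice(survivors)[:]
            if random.random() < mut_rate:        # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=route_length)

best = evolve()
print(best, route_length(best))
```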

Multidimensional and Data Mining Analysis for Property Investment Risk Analysis

Property investment in the real estate industry carries high risk, due to the uncertainty factors that affect decisions and the high costs involved. The analytic hierarchy process (AHP) has long been used for risk analysis, relying on experts' opinions to measure the uncertainty of the risk factors. However, different levels of expert experience yield different opinions and lead to conflict among the experts in the field. The objective of this paper is to propose a new technique that measures the uncertainty of the risk factors using a multidimensional data model and data mining techniques as a deterministic approach. The proposed technique consists of a basic framework with four modules: user, technology, end-user access tools, and applications. The property investment risk analysis in this paper is defined as a micro-level analysis, since the features of the property itself are considered.
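
For reference, the AHP baseline the paper argues against derives risk-factor weights from an expert's pairwise comparison matrix; the sketch below uses the geometric-mean method on a hypothetical matrix:

```python
import math

# AHP weight derivation by the geometric-mean (row) method; the pairwise
# comparison matrix below is a hypothetical expert judgment.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]

geo_means = [math.prod(row) ** (1 / len(row)) for row in A]
total = sum(geo_means)
weights = [g / total for g in geo_means]
print([round(w, 3) for w in weights])  # risk-factor priority weights
```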

Distributed e-Learning System with Client-Server and P2P Hybrid Architecture

We have developed a distributed asynchronous Web-based training system. In order to improve the scalability and robustness of this system, all contents and functions are realized as mobile agents. These agents are distributed across computers and communicate over a peer-to-peer network based on a modified Content-Addressable Network (CAN). In this system, every computer offers the functions and exercises by itself. However, a system in which all computers behave identically is not realistic. In this paper, as a solution to this issue, we present an e-Learning system composed of computers with different participation types. Supporting different participation types improves the convenience of the system.
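
As a rough sketch of how a Content-Addressable Network places content (our generic illustration, not the system's modified CAN), a key can be hashed to a point in a coordinate space and assigned to the node owning that zone:

```python
import hashlib

# CAN-style placement sketch: hash a content key to a point in the unit
# square, then find the owning zone. Zones below are hypothetical.
def key_to_point(key: str):
    h = hashlib.sha1(key.encode()).digest()
    x = int.from_bytes(h[:4], "big") / 2**32
    y = int.from_bytes(h[4:8], "big") / 2**32
    return x, y

# Each zone: (x_min, x_max, y_min, y_max) -> owning node.
zones = {(0.0, 0.5, 0.0, 1.0): "node-1",
         (0.5, 1.0, 0.0, 0.5): "node-2",
         (0.5, 1.0, 0.5, 1.0): "node-3"}

def owner(key: str) -> str:
    x, y = key_to_point(key)
    for (x0, x1, y0, y1), node in zones.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return node
    raise KeyError("point not covered by any zone")

print(owner("exercise-42"))  # the node that should store this content
```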

Structure and Magnetic Properties of Nanocomposite Fe2O3/TiO2 Catalysts Fabricated by Heterogeneous Precipitation

The aim of our work is to study the phase composition, particle size, and magnetic response of Fe2O3/TiO2 nanocomposites with respect to the final annealing temperature. These nanomaterials are considered smart catalysts, separable from a liquid or gaseous phase by an applied magnetic field. The starting product was obtained by an ecologically acceptable route, based on heterogeneous precipitation of TiO2 on modified γ-Fe2O3 nanocrystals dispersed in water. The precursor was subsequently annealed in air at temperatures ranging from 200 °C to 900 °C. The samples were investigated by synchrotron X-ray powder diffraction (S-PXRD), magnetic measurements, and Mössbauer spectroscopy. As evidenced by S-PXRD and Mössbauer spectroscopy, increasing the annealing temperature causes an evolution of the phase composition from anatase/maghemite to rutile/hematite; above 700 °C, pseudobrookite (Fe2TiO5) also forms. The apparent particle size of the various Fe2O3/TiO2 phases has been determined from the high-quality S-PXRD data using two different approaches: Rietveld refinement and the Debye method. The magnetic response of the samples is discussed in relation to the phase composition and the particle size.

An Energy Efficient Protocol for Target Localization in Wireless Sensor Networks

Target tracking and localization are important applications of wireless sensor networks, in which sensor nodes collectively monitor and track the movement of a target. Since sensor nodes have limited energy supplied by batteries, energy efficiency is essential. Most existing target tracking protocols wake up sensors periodically to perform tracking, which introduces unnecessary energy waste. In this paper, an energy-efficient protocol for target localization is proposed. To preserve energy, the protocol fixes the number of sensors used for target tracking while keeping the quality of target localization at an acceptable level. By selecting a set of sensors for target localization, the other sensors can sleep rather than periodically waking up to track the target. Simulation results show that the proposed protocol saves a significant amount of energy and prolongs the network lifetime.
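
As one plausible illustration of localization from a fixed subset of awake sensors (not the paper's protocol), the sketch below performs linearized least-squares trilateration on hypothetical positions and ranges:

```python
import numpy as np

# Localization sketch: linearized least-squares trilateration from a
# fixed set of awake sensors; positions and ranges below are hypothetical.
def trilaterate(sensors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Estimate target (x, y) from sensor positions and range readings.

    Subtracting the first sensor's circle equation from the others gives
    a linear system A p = b solvable by least squares.
    """
    x0, y0 = sensors[0]
    r0 = ranges[0]
    A = 2 * (sensors[1:] - sensors[0])
    b = (r0**2 - ranges[1:]**2
         + np.sum(sensors[1:]**2, axis=1) - (x0**2 + y0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
target = np.array([3.0, 4.0])
ranges = np.linalg.norm(sensors - target, axis=1)
print(trilaterate(sensors, ranges))  # ~[3. 4.]
```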

Partial 3D Reconstruction using Evolutionary Algorithms

When reconstructing a scenario, it is necessary to know the structure of the elements present in the scene in order to interpret it. In this work we link 3D scene reconstruction to evolutionary algorithms through stereo vision theory. Stereo vision provides the reconstruction of a scene using only a pair of images of the scene and some computation. From several images of a scene, captured from different positions, stereo vision can give us an idea of the three-dimensional characteristics of the world. Stereo vision usually requires two cameras, in analogy to the mammalian vision system. In this work we employ only one camera, which is translated along a path, capturing images at fixed intervals. Since we cannot perform all the computations required for an exhaustive reconstruction, we employ an evolutionary algorithm to partially reconstruct the scene in real time. The algorithm employed is the fly algorithm, which uses "flies" to reconstruct the principal characteristics of the world following certain evolutionary rules.
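
A minimal sketch of the fly algorithm's evolutionary loop, with a stubbed-out photo-consistency fitness standing in for the real image comparison, might look as follows:

```python
import random

# Fly-algorithm sketch: a population of 3-D points ("flies") evolves so
# that their projections into two images look alike (photo-consistency).
# The fitness stub and bounds below are hypothetical placeholders.
def photo_consistency(fly):
    """Stub: should compare pixel neighborhoods around the fly's
    projections in both images; a dummy score keeps the sketch runnable."""
    x, y, z = fly
    return -abs(z - 5.0)   # pretend scene surfaces lie near z = 5

def evolve_flies(n=200, generations=50, sigma=0.2):
    flies = [[random.uniform(-10, 10) for _ in range(3)] for _ in range(n)]
    for _ in range(generations):
        flies.sort(key=photo_consistency, reverse=True)
        elite = flies[: n // 2]
        # refill by mutating elite flies (Gaussian perturbation)
        flies = elite + [
            [c + random.gauss(0, sigma) for c in random.choice(elite)]
            for _ in range(n - len(elite))
        ]
    return flies

surface = evolve_flies()
print(surface[0])   # best fly approximates a point on the scene surface
```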

Contributory Factors to Diabetes Dietary Regimen Non-Adherence in Adults with Diabetes

A cross-sectional survey design was used to collect data from 370 diabetic patients. Two instruments were used to obtain data: an in-depth interview guide and a researcher-developed questionnaire. Fisher's exact test was used to investigate the association between the identified factors and non-adherence. The factors identified were: socio-demographic factors such as gender, age, marital status, educational level, and occupation; psychosocial obstacles such as non-affordability of the prescribed diet, frustration due to the restrictions, limited spousal support, feelings of deprivation, the feeling that temptation is inevitable, difficulty adhering at social gatherings, and difficulty revealing to a host that one is diabetic; and health care provider obstacles, namely the poor attitude of health workers, irregular diabetes education in clinics, a limited number of nutrition education sessions and patients' resulting inability to estimate the desired quantity of food, the absence of reminder postcards or phone calls about upcoming appointments, and delayed starts of appointments and time wasted in clinics.
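
The kind of 2x2 association test described here can be reproduced with SciPy; the contingency table below is purely hypothetical:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table: rows = factor present/absent,
# columns = non-adherent / adherent (counts are illustrative only).
table = [[90, 40],
         [120, 120]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```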

On the Analysis and a Few Optimization Issues of a New iCIM 3000 System at an Academic-Research Oriented Institution

In past years, the world has witnessed significant work in the field of manufacturing. Special efforts have been made in the implementation of new technologies and of management and control systems, among many other advances that have evolved the field. Closely following all this, and given the scope of new projects and the need to turn existing flexible ideas into more autonomous and intelligent ones, i.e., to move toward more intelligent manufacturing, the present paper contributes to the analysis, and a few customization issues, of a new iCIM 3000 system at the IPSAM. Special emphasis is placed on the material flow problem. To this end, besides a description and analysis of the system and its main parts, some tips on how to define other possible alternative material flow scenarios and a partial analysis of the combinatorial nature of the problem are offered as well. All this is done with the intention of relating it to the use of simulation tools, which are briefly addressed with a special focus on the Witness simulation package. For better comprehension, the previous elements are supported by a few figures and expressions that help in obtaining the necessary data. These data and others will be used in the future, when simulating the scenarios in the search for the best material flow configurations.

A Few Descriptive and Optimization Issues on the Material Flow at a Research-Academic Institution: The Role of Simulation

Lately, significant work in the area of Intelligent Manufacturing has been published and applied, mainly for industrial purposes. Special efforts have been made in the implementation of new technologies and of management and control systems, among many other advances that have evolved the field. Aware of all this, and given the scope of new projects and the need to turn existing flexible ideas into more autonomous and intelligent ones, i.e., Intelligent Manufacturing, the present paper contributes to the design and analysis of the material flow in systems, cells, or workstations under this new "intelligent" denomination. To this end, besides a conceptual basis covering some of the key points to take into account and some general principles to consider in the design and analysis of the material flow, some tips on how to define other possible alternative material flow scenarios and a classification of the states of a system, cell, or workstation are offered as well. All this is done with the intention of relating it to the use of simulation tools, which are briefly addressed with a special focus on the Witness simulation package. For better comprehension, the previous elements are supported by a detailed layout, other figures, and a few expressions that help in obtaining the necessary data. These data and others will be used in the future, when simulating the scenarios in the search for the best material flow configurations.

Improved Back Propagation Algorithm to Avoid Local Minima in Multiplicative Neuron Model

The back propagation algorithm calculates the weight changes of an artificial neural network; a common approach is to use a training rule consisting of a learning rate and a momentum factor. The major drawbacks of this learning algorithm are the problems of local minima and slow convergence. The addition of an extra term, called a proportional factor, reduces the convergence time of the back propagation algorithm. We have applied this three-term back propagation to multiplicative neural network learning. The algorithm is tested on the XOR and parity problems and compared with the standard back propagation training algorithm.
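
A minimal sketch of the three-term update rule, with illustrative coefficient values and a toy one-weight problem (sign conventions vary across formulations), is shown below:

```python
# Three-term back propagation sketch: the weight update adds a
# proportional term gamma * e to the usual gradient and momentum terms.
# Values of eta, alpha, gamma and the toy problem below are illustrative.
def three_term_update(w, grad, prev_dw, error, eta=0.1, alpha=0.8, gamma=0.05):
    """delta_w = -eta * dE/dw + alpha * delta_w_prev + gamma * e

    eta:   learning rate        (first term)
    alpha: momentum factor      (second term)
    gamma: proportional factor  (third term, scales the output error e)
    """
    dw = -eta * grad + alpha * prev_dw + gamma * error
    return w + dw, dw

# Toy problem: minimize E = (w - 0.5)^2, so dE/dw = 2(w - 0.5)
# and the output error is e = target - output = 0.5 - w.
w, prev_dw = 0.9, 0.0
for step in range(3):
    grad, error = 2 * (w - 0.5), 0.5 - w
    w, prev_dw = three_term_update(w, grad, prev_dw, error)
    print(step, round(w, 4))   # w moves toward 0.5
```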

Correlation of Viscosity in Nanofluids using Genetic Algorithm-Neural Network (GA-NN)

An accurate artificial neural network (ANN) combined with a genetic algorithm (GA) is developed for predicting the viscosity of nanofluids. The GA is used to optimize the neural network parameters by minimizing the error between the predicted viscosity and the experimental values. Experimental viscosity data for two nanofluids, Al2O3-H2O and CuO-H2O, covering temperatures from 278.15 to 343.15 K and volume fractions up to 15%, were taken from the literature. The results of this study reveal that the GA-NN model outperforms conventional neural networks in predicting the viscosity of nanofluids, with mean absolute relative errors of 1.22% and 1.77% for Al2O3-H2O and CuO-H2O, respectively. The results have also been compared with other models; the findings demonstrate that the GA-NN model is an effective method for predicting the viscosity of nanofluids, with better accuracy and simplicity than the other models.
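
A minimal sketch of the GA-NN idea, with a genetic algorithm searching the weights of a tiny network against hypothetical viscosity data, might look like this:

```python
import math
import random

# GA-NN sketch: a genetic algorithm searches the weights of a tiny
# one-hidden-neuron network mapping (temperature, volume fraction) to
# viscosity. Data points and GA settings are hypothetical placeholders.
data = [((278.15, 0.01), 1.90), ((300.0, 0.05), 1.35), ((343.15, 0.10), 0.80)]

def predict(w, T, phi):
    h = math.tanh(w[0] * (T / 343.15) + w[1] * phi + w[2])
    return w[3] * h + w[4]

def mare(w):
    """Mean absolute relative error, the fitness to minimize."""
    return sum(abs(predict(w, T, phi) - mu) / mu for (T, phi), mu in data) / len(data)

pop = [[random.uniform(-2, 2) for _ in range(5)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=mare)
    elite = pop[:10]                       # keep the best weight vectors
    pop = elite + [
        [g + random.gauss(0, 0.1) for g in random.choice(elite)]
        for _ in range(30)                 # refill by Gaussian mutation
    ]
print(f"best MARE: {mare(pop[0]):.4f}")
```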

Ensembling Adaptively Constructed Polynomial Regression Models

The subset selection approach to polynomial regression model building assumes that a chosen fixed full set of predefined basis functions contains a subset sufficient to describe the target relation well. However, in most cases the necessary set of basis functions is not known and needs to be guessed, a potentially non-trivial (and long) trial-and-error process. In our research we consider a potentially more efficient approach, Adaptive Basis Function Construction (ABFC). It lets the model-building method itself construct the basis functions necessary for creating a model of arbitrary complexity with adequate predictive performance. However, two issues to some extent plague both subset selection and ABFC, especially when working with relatively small data samples: selection bias and selection instability. We try to correct these issues by model post-evaluation using cross-validation and by model ensembling. To evaluate the proposed method, we empirically compare it to ABFC methods without ensembling, to a widely used subset selection method, and to other well-known regression modeling methods, using publicly available data sets.
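
A minimal sketch of cross-validation-based ensembling (shown here with plain polynomial fitting rather than ABFC) is given below; one model is fitted per fold and the ensemble averages their predictions:

```python
import numpy as np

# Cross-validation ensembling sketch: one polynomial model is fitted per
# CV fold and the ensemble averages their predictions, damping the
# selection instability of any single fitted model. Degree/data are toy.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.1, 60)

k = 5
folds = np.array_split(rng.permutation(60), k)
models = []
for i in range(k):
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    models.append(np.polyfit(x[train], y[train], deg=2))  # fit on k-1 folds

def ensemble_predict(x_new):
    """Average the k fold-models' predictions."""
    return np.mean([np.polyval(m, x_new) for m in models], axis=0)

print(ensemble_predict(np.array([0.0, 0.5])))  # ~[1.0, 1.625]
```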

Three Dimensional Modeling of Mixture Formation and Combustion in a Direct Injection Heavy-Duty Diesel Engine

Due to stringent emission legislation for diesel engines and increasing demands on fuel consumption, the importance of detailed 3D simulation of fuel injection, mixing, and combustion has grown in recent years. In the present work, the FIRE code has been used for detailed modeling of spray and mixture formation in a Caterpillar heavy-duty diesel engine. The paper provides an overview of the submodels implemented, which account for liquid spray atomization, droplet secondary break-up, droplet collision, impingement, turbulent dispersion, and evaporation. The simulation was performed from intake valve closing (IVC) to exhaust valve opening (EVO). The predicted in-cylinder pressure is validated against existing experimental data; the good agreement between predicted and experimental values supports the accuracy of the numerical predictions obtained in the present work. Engine emissions were also predicted, and good quantitative agreement between measured and predicted NOx and soot emission data was obtained using the Zeldovich mechanism and the Hiroyasu model. The results reported in this paper illustrate that numerical simulation can be one of the most powerful and beneficial tools for internal combustion engine design, optimization, and performance analysis.

A Practical Approach for Electricity Load Forecasting

This paper continues our daily peak energy load forecasting approach using our modified network, which belongs to the recurrent network family and is called the feed-forward and feed-back multi-context artificial neural network (FFFB-MCANN). The inputs to the network were exogenous variables, such as the previous and current changes in the weather components and the previous and current status of the day, together with endogenous variables such as past changes in the load. The endogenous variable, the current change in the load, was used as the network output. Experiments show that using both endogenous and exogenous variables as inputs to the FFFB-MCANN, rather than either type alone, produces better results. Experiments also show that using the changes in variables, such as changes in the weather components and in the past load, as inputs to the FFFB-MCANN, rather than their absolute values, has a dramatic impact and produces better accuracy.
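
A minimal sketch of the "change" (delta) input encoding, on hypothetical hourly series, is shown below:

```python
# Sketch of the "change" (delta) input encoding: the network receives
# differences of weather components and past load rather than absolute
# values. The series below are hypothetical hourly records.
temperature = [18.0, 19.5, 22.0, 24.5]   # exogenous component
load =        [610., 640., 700., 760.]   # endogenous component (MW)

def deltas(series):
    """First differences: value[t] - value[t-1]."""
    return [b - a for a, b in zip(series, series[1:])]

d_temp, d_load = deltas(temperature), deltas(load)
# Input at time t: (current/previous weather change, past load change);
# target: the current change in load.
samples = [((d_temp[t], d_temp[t - 1], d_load[t - 1]), d_load[t])
           for t in range(1, len(d_load))]
print(samples)
```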

An Analytical Framework for Multi-Site Supply Chain Planning Problems

With the gradual increase in enterprise scale, firms may operate many manufacturing plants in geographically different locations. This change gives rise to multi-site production planning problems in environments with multiple plants or production resources. Our research proposes a structural framework for analyzing multi-site planning problems. The analytical framework is composed of six elements: a multi-site conceptual model, product structure (bill of manufacturing), production strategy, manufacturing capability and characteristics, production planning constraints, and key performance indicators. In addition to discussing these six elements, we review the related literature and map it onto the analytical framework. Finally, we use a real-world example of a TFT-LCD manufacturer in Taiwan to illustrate the proposed framework for multi-site production planning problems.