An Agent-based Model for Analyzing the Interaction of Two Stable Social Networks

In this research, the authors analyze network stability using agent-based simulation. First, the authors analyze large networks (eight agents) formed by connecting two different stable small social networks (a small stable network consists of four agents). Second, the authors analyze the shape of an eight-agent network formed by adding one agent to a stable seven-agent network. Third, the authors analyze interpersonal comparison of utility. The "star network" was not found among the results of interaction between the two stable small networks. On the other hand, the "decentralized network" was formed from several combinations. In the case of adding one agent to a stable seven-agent network, the larger the value of "c" (the maintenance cost per link), the larger the number of patterns of stable networks. In this case, the authors identified the characteristics of a large stable network. The authors also discovered cases in which personal utility decreases while total utility increases.
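
As an illustration of how such per-agent and total utilities can be computed, the sketch below uses a Jackson-Wolinsky-style connections-model utility (distance-decayed benefits minus a per-link maintenance cost c); the utility form, the decay parameter, the value of c, and the network shapes are illustrative assumptions, not the authors' specification.

```python
# A sketch of per-agent utility in the spirit of the Jackson-Wolinsky
# connections model: benefits decay with network distance and each link
# costs "c" to maintain. All parameter values here are assumptions.
import networkx as nx

def agent_utility(G, i, delta=0.5, c=0.4):
    dist = nx.single_source_shortest_path_length(G, i)
    benefit = sum(delta ** d for j, d in dist.items() if j != i)
    return benefit - c * G.degree(i)

# Two stable four-agent rings joined by a single bridge link (3-4).
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0),
              (4, 5), (5, 6), (6, 7), (7, 4),
              (3, 4)])
total = sum(agent_utility(G, i) for i in G.nodes)
print({i: round(agent_utility(G, i), 3) for i in G.nodes})
print("total utility:", round(total, 3))
```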

Robust UKF Insensitive to Measurement Faults for Pico Satellite Attitude Estimation

Under normal operating conditions of a pico satellite, the conventional Unscented Kalman Filter (UKF) gives sufficiently good estimation results. However, if the measurements are not reliable because of any kind of malfunction in the estimation system, the UKF gives inaccurate results and diverges over time. This study introduces Robust Unscented Kalman Filter (RUKF) algorithms with filter gain correction for the case of measurement malfunctions. By the use of defined variables called measurement noise scale factors, the faulty measurements are taken into consideration with a small weight, and the estimates are corrected without affecting the characteristics of the accurate ones. Two different RUKF algorithms, one with a single scale factor and one with multiple scale factors, are proposed and applied to the attitude estimation process of a pico satellite. The results of these algorithms are compared for different types of measurement faults in different estimation scenarios, and recommendations about their applications are given.
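
A simplified sketch of the single-scale-factor idea is given below: when the innovation fails a chi-square consistency check, the measurement noise covariance is inflated so that the faulty measurement receives a small weight in the gain. For brevity this uses a generic linear measurement update rather than the full UKF, and the detection threshold and scaling rule are illustrative assumptions.

```python
import numpy as np

def robust_update(x, P, z, H, R, chi2_threshold=3.84):
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    stat = float(y.T @ np.linalg.inv(S) @ y)
    if stat > chi2_threshold:            # measurement fault suspected
        s = stat / chi2_threshold        # single scale factor (assumed rule)
        S = H @ P @ H.T + s * R          # inflated R -> smaller Kalman gain
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

x, P = np.zeros((2, 1)), 0.1 * np.eye(2)
H, R = np.array([[1.0, 0.0]]), np.array([[1.0]])
z = np.array([[5.0]])                    # outlier-like measurement
x, P = robust_update(x, P, z, H, R)
print(x.ravel())
```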

Customer Loyalty and the Impacts of Service Quality: The Case of Five Star Hotels in Jordan

In the present Jordanian hotel scenario, service quality is a vital competitive policy for retaining customer support and building a strong customer base. Hotels are trying to win customer loyalty by providing enhanced-quality services. This paper attempts to examine the impact of the tourism service quality dimensions in Jordanian five star hotels. A total of 322 surveys were administered to tourists staying at three branches of the Marriott hotel in Jordan. The results show that dimensions of service quality such as empathy, reliability, responsiveness, and tangibility significantly predict customer loyalty. Specifically, among the dimensions of tourism service quality, the most significant predictor of customer loyalty is tangibility. This paper implies that five star hotels in Jordan should also come forward and try their best to provide better tourism service quality to win back their customers' loyalty.

CFD Simulation of Dense Gas Extraction through Polymeric Membranes

This study presents a general methodology to predict the performance of a continuous near-critical fluid extraction process for removing compounds from aqueous solutions using hollow fiber membrane contactors. A comprehensive 2D mathematical model was developed to study the Porocritical extraction process. The system studied in this work is a membrane-based extractor of ethanol and acetone from aqueous solutions using near-critical CO2. Predictions of extraction percentages obtained by simulation have been compared to the experimental values reported by Bothun et al. [5]. Simulations of the extraction percentages of ethanol and acetone show average differences of 9.3% and 6.5% from the experimental data, respectively. The more accurate prediction of the extraction of acetone can be explained by a better estimation of the transport properties in the aqueous phase, which controls the extraction of this solute.

Grid Computing for the Bi-CGSTAB Applied to the Solution of the Modified Helmholtz Equation

The problem addressed herein is the efficient management of the intense Grid/Cluster computation involved when the preconditioned Bi-CGSTAB Krylov method is employed for the iterative solution of the large, sparse linear system arising from the discretization of the Modified Helmholtz-Dirichlet problem by the Hermite Collocation method. Taking advantage of the Collocation matrix's red-black ordered structure, we organize the whole computation efficiently and map it onto a pipeline architecture with master-slave communication. The implementation, through MPI programming tools, is realized on a SUN V240 cluster interconnected through 100Mbps and 1Gbps Ethernet networks, and its performance is demonstrated by the speedup measurements included.
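
For orientation, the sketch below solves a modified Helmholtz problem (-Δu + k²u = f with homogeneous Dirichlet conditions) using preconditioned Bi-CGSTAB; a standard 5-point finite-difference discretization and an ILU preconditioner stand in for the authors' Hermite Collocation scheme and red-black pipelined organization, and both substitutions are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

n, k2 = 50, 10.0                       # interior grid points per side, k^2
h = 1.0 / (n + 1)
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)) / h**2 + k2 * sp.identity(n * n)
b = np.ones(n * n)                     # arbitrary right-hand side f

ilu = spilu(A.tocsc())                 # ILU preconditioner (assumed choice)
M = LinearOperator(A.shape, ilu.solve)
u, info = bicgstab(A.tocsr(), b, M=M)
print("converged" if info == 0 else f"info={info}")
```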

Spectral Analysis of Radiation-Induced Natural Convection in Littoral Waters

The mixing of pollutants and sediments in the near-shore regions of natural water bodies depends heavily on characteristics of flow instability such as its strength and frequency. In the present paper, the instability of natural convection induced by absorption of solar radiation in littoral regions is considered. Spectral analysis is conducted on the quasi-steady-state flow to reveal the power and frequency modes of the instability at various positions. Results indicate that the power of the instability, the number of frequency modes, the prominence of higher frequency modes, and the highest frequency mode increase with the offshore distance and/or the Rayleigh number. Harmonic modes are present at relatively low Rayleigh numbers. For a given offshore distance, the position with the strongest power of instability is located adjacent to the sloping bottom, while the frequency modes are the same over the local depth. As the Rayleigh number increases, the unstable region extends toward the shore.
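
The spectral-analysis step itself is standard: the power spectrum of a quasi-steady time series at one position reveals the frequency modes of the instability. A minimal sketch follows; the synthetic signal (a fundamental plus a harmonic and noise) is purely illustrative.

```python
import numpy as np

dt = 0.01                                   # sampling interval
t = np.arange(0, 100, dt)
signal = np.sin(2*np.pi*0.5*t) + 0.3*np.sin(2*np.pi*1.0*t) \
         + 0.1*np.random.randn(t.size)

fluct = signal - signal.mean()              # remove the mean component
freq = np.fft.rfftfreq(t.size, dt)
power = np.abs(np.fft.rfft(fluct))**2 / t.size

peaks = freq[np.argsort(power)[-2:]]        # two strongest modes
print("dominant frequency modes:", np.sort(peaks))
```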

Optimization of Energy Consumption in Sequential Distillation Column

Distillation is one of the most common operations in the process industries and, at the same time, one of the most expensive units in terms of energy consumption. Many ideas have been presented in the related literature for optimizing energy consumption in distillation columns. This paper studies different heat integration methods in a distillation sequence which separates benzene, toluene, xylene, and C9+. Three heat integration schemes, namely the indirect sequence (IQ), the indirect sequence with forward energy integration (IQF), and the indirect sequence with backward energy integration (IQB), have been studied in this paper. Using the shortcut method, these heat integration schemes were simulated with Aspen HYSYS software and compared with each other with regard to economic considerations. The results show that energy consumption is reduced by 33% in IQF and by 28% in IQB in comparison with the IQ scheme. The economic results also show that the total annual cost is reduced by 12% in IQF and by 8% in IQB relative to the IQ scheme. Therefore, the IQF scheme is more economical than the IQB and IQ schemes.

Influence of Taguchi Selected Parameters on Properties of CuO-ZrO2 Nanoparticles Produced via Sol-gel Method

The present paper discusses the selection of process parameters for obtaining the optimal nanocrystallite size in a CuO-ZrO2 catalyst. Several parameters that change the inorganic structure have an influence on the hydrolysis and condensation reactions. A statistical design-of-experiments method is implemented in order to optimize the experimental conditions of CuO-ZrO2 nanoparticle preparation. The experiments are arranged according to the standard L16 orthogonal array, and the crystallite size is taken as the response index for the analysis as the parameters vary. The effects of pH, H2O/precursor molar ratio (R), time and temperature of calcination, chelating agent, and alcohol volume are particularly investigated among all the parameters. According to the Taguchi results, it is found that temperature has the greatest impact on the particle size. The pH and H2O/precursor molar ratio have low influence compared with temperature. The alcohol volume, as well as the time, has almost no effect compared with all the other parameters. Temperature also has an influence on the morphology and amorphous structure of the zirconia. The optimal conditions are determined using the Taguchi method. The nanocatalyst is characterized by DTA-TG, XRD, EDS, SEM, and TEM. The results of this research indicate that it is possible to vary the structure, morphology, and properties of the sol-gel product by controlling the above-mentioned parameters.
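
The core of the Taguchi analysis is the smaller-the-better signal-to-noise ratio, S/N = -10·log10(mean(y²)), averaged per factor level to rank factor influence. The sketch below shows the mechanics on a two-level, two-factor fragment; the fragment and the crystallite-size values are illustrative stand-ins for the authors' L16 array and data.

```python
import numpy as np

# columns: pH level, calcination-temperature level; y: crystallite size (nm)
runs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([14.0, 22.0, 15.0, 24.0])      # assumed single-replicate sizes

sn = -10 * np.log10(y**2)                   # smaller-is-better S/N per run
for f, name in enumerate(["pH", "temperature"]):
    effect = abs(sn[runs[:, f] == 0].mean() - sn[runs[:, f] == 1].mean())
    print(f"{name}: level-to-level S/N change = {effect:.2f} dB")
```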

Performance Comparison of Particle Swarm Optimization with Traditional Clustering Algorithms used in Self-Organizing Map

Self-organizing map (SOM) is a well-known data reduction technique used in data mining. It can reveal structure in data sets through data visualization that is otherwise hard to detect from the raw data alone. However, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters among the code vectors found by SOM, but they generally do not take into account the distribution of the code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of an adaptive heuristic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOM. The application of our method to several standard data sets demonstrates its feasibility. The PSO algorithm utilizes the so-called U-matrix of the SOM to determine cluster boundaries; the results of this novel automatic method compare very favorably with boundary detection by the traditional algorithms, namely k-means and hierarchical approaches, which are normally used to interpret the output of SOM.
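
A minimal global-best PSO sketch is shown below. In the paper the particles search over the SOM U-matrix for cluster boundaries; here a toy 2D objective stands in for that boundary-quality fitness, and all PSO constants are illustrative assumptions.

```python
import numpy as np

def fitness(x):                 # stand-in for a U-matrix boundary score
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(0)
n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), fitness(pos)

for _ in range(100):
    gbest = pbest[pbest_val.argmin()]
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
    pos += vel
    val = fitness(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]

print("best position:", pbest[pbest_val.argmin()])
```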

Effect of Laser Power and Powder Flow Rate on Properties of Laser Metal Deposited Ti6Al4V

Laser Metal Deposition (LMD) is an additive manufacturing process whose capabilities include producing a new part directly from a 3 Dimensional Computer Aided Design (3D CAD) model, building a new part on an existing old component, and repairing existing high-value component parts that would have been discarded in the past. Despite all these capabilities and its advantages over other additive manufacturing techniques, the underlying physics of the LMD process is yet to be fully understood, probably because of the high interaction between the processing parameters; studying many parameters at the same time makes the process even more complex to understand. In this study, the effects of laser power and powder flow rate on the physical properties (deposition height and deposition width), the metallurgical property (microstructure), and the mechanical property (microhardness) of laser-deposited Ti6Al4V, the most widely used aerospace alloy, are studied. Also, because Ti6Al4V is very expensive and LMD is capable of reducing the buy-to-fly ratio of aerospace parts, the material utilization efficiency is also studied. Four sets of experiments were performed and repeated to establish repeatability, using laser powers of 1.8 kW and 3.0 kW and powder flow rates of 2.88 g/min and 5.67 g/min, while keeping the gas flow rate and scanning speed constant at 2 l/min and 0.005 m/s, respectively. The deposition height and width are found to increase with increases in laser power and powder flow rate. Material utilization is favoured by higher power, while higher powder flow rate reduces material utilization. The results are presented and fully discussed.
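
One common way to define material utilization efficiency in LMD is the deposited mass per unit time divided by the powder mass fed per unit time, with the track cross-section approximated as a half-ellipse. The sketch below uses that definition; the track dimensions and density value are illustrative assumptions, not the paper's measurements.

```python
import math

width_mm, height_mm = 2.0, 0.5          # assumed single-track geometry
scan_speed_mm_s = 5.0                   # 0.005 m/s, as in the experiments
density_g_mm3 = 0.00443                 # Ti6Al4V density (4.43 g/cm^3)
powder_rate_g_min = 2.88                # one of the tested flow rates

area_mm2 = math.pi * width_mm * height_mm / 4          # half-ellipse area
deposited_g_min = area_mm2 * scan_speed_mm_s * density_g_mm3 * 60
print(f"utilization ~ {100 * deposited_g_min / powder_rate_g_min:.1f}%")
```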

ROI Based Embedded Watermarking of Medical Images for Secured Communication in Telemedicine

Medical images require special security and confidentiality because critical judgments are made on the information provided by medical images. Transmission of medical images via the internet or mobile phones demands strong security and copyright protection in telemedicine applications. Here, a highly secure and robust watermarking technique is proposed for the transmission of image data via the internet and mobile phones. The Region of Interest (ROI) and Non-Region of Interest (RONI) of the medical image are separated, and only the RONI is used for watermark embedding. This technique results in exact recovery of the watermark for standard 512x512 medical database images, giving a correlation factor equal to 1. The correlation factor for different attacks, such as noise addition, filtering, rotation, and compression, ranges from 0.90 to 0.95. The PSNR with a weighting factor of 0.02 is up to 48.53 dB. The presented scheme is non-blind and embeds a hospital logo of size 64x64.
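
The two reported quality metrics are standard, and a minimal sketch of each is shown below: the normalized correlation factor between the original and extracted watermark (1 for exact recovery) and the PSNR between the original and watermarked image. The random test arrays are illustrative only.

```python
import numpy as np

def correlation_factor(w, w_ext):
    """Normalized correlation; equals 1 for exact watermark recovery."""
    return float(np.sum(w * w_ext) /
                 np.sqrt(np.sum(w**2) * np.sum(w_ext**2)))

def psnr(original, marked, peak=255.0):
    mse = np.mean((original.astype(float) - marked.astype(float))**2)
    return 10 * np.log10(peak**2 / mse)

img = np.random.randint(0, 256, (512, 512)).astype(np.uint8)
marked = np.clip(img + np.random.randint(-2, 3, img.shape), 0, 255)
wm = np.random.randint(0, 2, (64, 64)).astype(float)
print(correlation_factor(wm, wm), round(psnr(img, marked), 2), "dB")
```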

A Fuzzy Logic Based Model to Predict Surface Roughness of a Machined Surface in Glass Milling Operation Using CBN Grinding Tool

Nowadays, the demand for high product quality focuses extensive attention on the quality of the machined surface. Computer numerical control (CNC) milling machines provide a wide variety of parameter set-ups, making machining of glass well suited to manufacturing complicated special products compared to other machining processes. The application of a grinding process on the CNC milling machine could be an ideal solution to improve product quality, but adopting the right machining parameters is required. In the glass milling operation, several machining parameters are considered to be significant in affecting surface roughness. These parameters include the lubrication pressure, spindle speed, feed rate, and depth of cut. In this research work, a fuzzy logic model is proposed to predict the surface roughness of a machined surface in a glass milling operation using a CBN grinding tool. Four membership functions are allocated to each input of the model. The predicted results achieved via the fuzzy logic model are compared to the experimental results. The comparison demonstrated agreement between the fuzzy model and the experimental results, with 93.103% accuracy.
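
The sketch below shows the mechanics of fuzzy prediction with triangular membership functions. The paper uses four membership functions on each of four inputs; this one-input, zero-order Sugeno-style sketch with invented rule outputs only illustrates the inference step, not the authors' rule base.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership: rises on a->b, falls on b->c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def predict_roughness(feed):
    # Membership of the feed rate in four illustrative fuzzy sets.
    mu = [trimf(feed, 0, 10, 20), trimf(feed, 10, 20, 30),
          trimf(feed, 20, 30, 40), trimf(feed, 30, 40, 50)]
    ra_rule = [0.2, 0.4, 0.7, 1.1]        # assumed Ra (um) per rule
    return sum(m * r for m, r in zip(mu, ra_rule)) / (sum(mu) + 1e-12)

print(round(predict_roughness(25.0), 3), "um (illustrative)")
```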

Classifier Based Text Mining for Neural Network

Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. In neural networks that address classification problems, the training set, the testing set, and the learning rate are key elements: the collections of input/output patterns used to train the network and to assess its performance, and the rate at which weight adjustments are made. This paper describes a proposed back-propagation neural network classifier that performs cross-validation on the original neural network in order to optimize classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated by means of five data sets: contact-lenses, cpu, weather.symbolic, weather, and labor-neg-data. It is shown that, compared to the existing neural network, training is more than 10 times faster when the data set is larger than cpu or the network has many hidden units, while accuracy ('percent correct') was the same for all data sets except contact-lenses, which is the only one with missing attributes. For contact-lenses, the accuracy of the proposed neural network was on average around 0.3% lower than with the original neural network. This algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
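
A minimal sketch of the evaluation idea follows: a back-propagation classifier scored with k-fold cross-validation. scikit-learn's MLPClassifier and a built-in toy data set stand in for the authors' implementation and their five data sets.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)     # 5-fold cross-validation
print(f"percent correct: {100 * scores.mean():.1f}% +/- {100 * scores.std():.1f}%")
```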

Multimedia Games for Elementary/Primary School Education and Entertainment

Computers are increasingly being used as educational tools in elementary/primary schools worldwide. A specific application of such computer use is that of multimedia games, where the aim is to combine pedagogy and entertainment. This study reports on a case study in which an educational multimedia game was developed for use by elementary school children. The stages of the application's design, implementation, and evaluation are presented. Strengths of the game are identified and discussed, and its weaknesses are identified, allowing for suggestions for future redesigns. The results show that the use of games can engage children in the learning process for longer periods of time, with the added benefit of the entertainment factor.

Study on the Variation Effects of Diverging Angle on Characteristics of Flow in Converging and Diverging Ducts by Numerical Method

The present paper develops and validates a numerical procedure for the calculation of turbulent combusting flow in converging and diverging ducts; through simulation of the heat transfer processes, the amount of production and spread of the NOx pollutant is measured. A marching integration solution procedure employing the TDMA is used to solve the discretized equations. The turbulence model is the Prandtl mixing length method. The combustion process is modeled using the Arrhenius and eddy dissipation methods, and the thermal mechanism is utilized for modeling the formation of nitrogen oxides. The finite difference method and the Genmix numerical code are used for the numerical solution of the equations. Our results indicate the important influence of the limiting diverging angle of the diffuser on the pressure recovery coefficient. Moreover, because NOx formation depends strongly on the maximum temperature in the domain, the NOx level is also at its maximum there.
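
The TDMA (Thomas algorithm) used in the marching solution is standard: it solves a tridiagonal system a_i·x_{i-1} + b_i·x_i + c_i·x_{i+1} = d_i in O(n) operations. A minimal sketch follows; the sample system is illustrative.

```python
import numpy as np

def tdma(a, b, c, d):
    """a: sub-diagonal (a[0] unused), b: diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

a = np.array([0.0, -1.0, -1.0, -1.0])
b = np.array([2.0, 2.0, 2.0, 2.0])
c = np.array([-1.0, -1.0, -1.0, 0.0])
d = np.array([1.0, 0.0, 0.0, 1.0])
print(tdma(a, b, c, d))                   # small Poisson-like test system
```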

Energy Efficient Resource Allocation in Distributed Computing Systems

The problem of mapping tasks onto a computational grid with the aim of minimizing the power consumption and the makespan, subject to the constraints of deadlines and architectural requirements, is considered in this paper. To solve this problem, we propose a solution from cooperative game theory based on the concept of the Nash Bargaining Solution. The proposed game-theoretic technique is compared against several traditional techniques. The experimental results show that when the deadline constraints are tight, the proposed technique achieves superior performance and reports competitive performance relative to the optimal solution.
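
The Nash Bargaining Solution picks, over a feasible set, the point that maximizes the product of the players' gains above their disagreement utilities. The sketch below shows this on a toy problem of splitting one divisible resource between two players with assumed concave utilities; it stands in for the energy/makespan trade-off, not the paper's formulation.

```python
import numpy as np

d1, d2 = 0.0, 0.0                       # disagreement utilities
alpha = np.linspace(0, 1, 10001)        # share of the resource to player 1
u1 = np.sqrt(alpha)                     # assumed concave utility of player 1
u2 = 1.0 - alpha**2                     # assumed utility of player 2
product = (u1 - d1) * (u2 - d2)         # Nash product
best = alpha[product.argmax()]
print(f"NBS allocation: player 1 gets {best:.3f} of the resource")
```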

Comparative Emission Analysis of Gasoline/LPG Automotive Bifuel Engine

This paper presents a comparative emission study of a newly introduced gasoline/LPG bifuel automotive engine in the Indian market. Emissions were tested as per the LPG-Bharat Stage III driving cycle. Emission tests were carried out for the urban cycle and the extra-urban cycle; the total time for the urban and extra-urban cycles was 1180 sec. The engine was run in LPG mode using a conversion system. Emissions were tested as per the standard procedure and were compared. Corrected emissions were computed by deducting the ambient reading from the sample reading. The paper describes the detailed emission test procedure and the results obtained. CO emissions were in the range of 38.9 to 111.3 ppm, HC emissions were in the range of 18.2 to 62.6 ppm, NOx emissions were 0.8 to 3.9 ppm, and CO2 emissions were from 6719.2 to 8051 ppm. The paper throws light on the emission results of LPG vehicles recently introduced in the Indian automobile market. The objectives of this experimental study were to measure the emissions of the engine in gasoline and LPG modes and compare them.

Solving One-dimensional Hyperbolic Telegraph Equation Using Cubic B-spline Quasi-interpolation

In this paper, the telegraph equation is solved numerically by cubic B-spline quasi-interpolation. We obtain the numerical scheme by using the derivative of the quasi-interpolant to approximate the spatial derivative of the dependent variable and a low-order forward difference to approximate its temporal derivative. The advantage of the resulting scheme is that the algorithm is very simple, so it is very easy to implement. The results of numerical experiments are presented and compared with analytical solutions by calculating the errors in the L2 and L∞ norms to confirm the good accuracy of the presented scheme.
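
The reported accuracy check uses the standard discrete L2 and L∞ error norms between the numerical and analytical solutions on a uniform grid; a minimal sketch is shown below, with a perturbed sine standing in for the numerical result.

```python
import numpy as np

x = np.linspace(0, 1, 101)
h = x[1] - x[0]
u_exact = np.sin(np.pi * x)                        # analytical solution
u_num = u_exact + 1e-4 * np.random.randn(x.size)   # stand-in numerical result

err = u_num - u_exact
L2 = np.sqrt(h * np.sum(err**2))                   # discrete L2 norm
Linf = np.max(np.abs(err))                         # L-infinity norm
print(f"L2 = {L2:.3e}, L_inf = {Linf:.3e}")
```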

A New Framework and a Model for Product Development with an Application in the Telecommunications Services Sector

This paper argues that a product development exercise involves, in addition to the conventional stages, several decisions regarding other aspects. These aspects should be addressed simultaneously in order to develop a product that responds to the customer's needs and that helps realize the objectives of the stakeholders in terms of profitability, market share, and the like. We present a framework that encompasses these different development dimensions. The framework shows that a product development methodology such as Quality Function Deployment (QFD) is the basic tool that allows definition of the target specifications of a new product. Creativity is the first dimension that enables the development exercise to begin and end successfully, and a number of group processes need to be followed by the development team in order to ensure enough creativity and innovation. Secondly, packaging is considered to be an important extension of the product. Branding strategies, quality and standardization requirements, identification technologies, design technologies, production technologies, and costing and pricing are also integral parts of the development exercise. These dimensions constitute the proposed framework. The paper also presents a mathematical model used to calculate the design targets based on the target costing principle. The framework is used to study a case of new product development in the telecommunications services sector.
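
The target costing principle on which such design-target models rest is simple: the allowable (target) cost is derived from the competitive market price and the required profit margin, and is then allocated across product features. The sketch below shows that arithmetic only; the figures and feature weights are illustrative assumptions, not the paper's model.

```python
market_price = 40.0            # assumed competitive price of the offering
profit_margin = 0.25           # assumed required margin on price
target_cost = market_price * (1 - profit_margin)

feature_weights = {"coverage": 0.5, "billing": 0.3, "support": 0.2}
allocation = {f: round(w * target_cost, 2) for f, w in feature_weights.items()}
print(target_cost, allocation)
```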

Matrix-Based Synthesis of EXOR-Dominated Combinational Logic for Low Power

This paper discusses a new, systematic approach to the synthesis of an NP-hard class of non-regenerative Boolean networks, described by FON[FOFF]={mi}[{Mi}], where for every mj[Mj]∈{mi}[{Mi}], there exists another mk[Mk]∈{mi}[{Mi}], such that their Hamming distance HD(mj, mk)=HD(Mj, Mk)=O(n) (where 'n' represents the number of distinct primary inputs). The method automatically ensures exact minimization for certain important self-dual functions with 2^(n-1) points in their one-set. The elements meant for grouping are determined from a newly proposed weighted incidence matrix. Then the binary value corresponding to the candidate pair is correlated with the proposed binary value matrix to enable direct synthesis. We recommend algebraic factorization operations as a post-processing step to enable a reduction in literal count. The algorithm can be implemented in any high-level language and achieves the best cost optimization for the problem dealt with, irrespective of the number of inputs. For other cases, the method is iterated to reduce the problem successively to one of O(n-1), O(n-2), ..., which is then solved. In addition, it leads to optimal results for problems exhibiting a higher degree of adjacency, with a different interpretation of the heuristic, and the results are comparable with other methods. In terms of literal cost, at the technology-independent stage, the circuits synthesized using our algorithm enabled net savings over AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-Products or ESOP forms), and AND-OR-EXOR logic of 45.57%, 41.78%, and 41.78%, respectively, for the various problems. Circuit-level simulations were performed for a wide variety of case studies at 3.3V and 2.5V supplies to validate the performance of the proposed method and the quality of the resulting synthesized circuits at two different voltage corners. Power estimation was carried out for a 0.35-micron TSMC CMOS process technology. In comparison with AOI logic, the proposed method enabled mean power savings of 42.46%. With respect to AND-EXOR logic, the proposed method yielded power savings of 31.88%, while in comparison with AND-OR-EXOR networks, average power savings of 33.23% were obtained.
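
The pairing criterion HD(mj, mk) is plain Hamming distance between minterms; the sketch below shows only that computation on a small self-dual-style one-set. The weighted incidence matrix and binary value matrix of the synthesis method are not reproduced here, and the 3-input one-set is illustrative.

```python
def hamming(a, b):
    """Hamming distance between two minterms encoded as integers."""
    return bin(a ^ b).count("1")

n = 3
one_set = [0b001, 0b010, 0b100, 0b111]   # 2^(n-1) points, self-dual style
for m in one_set:
    partner = max(one_set, key=lambda k: hamming(m, k))
    print(f"m={m:0{n}b} pairs with {partner:0{n}b}, HD={hamming(m, partner)}")
```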