Clustering based Voltage Control Areas for Localized Reactive Power Management in Deregulated Power System

In this paper, a new K-means clustering based approach for the identification of voltage control areas is developed. Voltage control areas are important for efficient reactive power management in power systems operating in a deregulated environment. Although voltage control areas are conventionally formed using a hierarchical clustering based method, the present paper investigates the capability of K-means clustering for forming voltage control areas. The proposed method is tested and compared on the IEEE 14-bus and IEEE 30-bus systems. The results show that the K-means based method is competitive with the conventional hierarchical approach.
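As an illustration of the clustering step, the following is a minimal pure-Python K-means sketch applied to hypothetical two-dimensional "electrical distance" features for six buses. The feature values, the number of areas, and the feature choice itself are invented for illustration and are not taken from the paper.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means: returns a cluster label for each point."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Hypothetical "electrical distance" coordinates for six buses:
# buses 0-2 are electrically close to each other, as are buses 3-5.
bus_features = [[0.10, 0.20], [0.15, 0.25], [0.12, 0.18],
                [0.90, 0.80], [0.95, 0.85], [0.88, 0.82]]
areas = kmeans(bus_features, k=2)
```

Each resulting label group corresponds to one candidate voltage control area; the hierarchical baseline in the paper would instead merge buses bottom-up by the same distance measure.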

Implementation of Lower-Limb Rehabilitation System Using Traction Motors with a Treadmill

This paper proposes a prototype of a lower-limb rehabilitation system for recovering and strengthening patients' injured lower limbs. The system is composed of traction motors for each leg position, a treadmill as a walking base, tension sensors, microcontrollers controlling motor functions, and a main system with a graphical user interface. To derive reference (normal) velocity profiles of the body segment points, a kinematic method is applied based on a humanoid robot model using reference joint-angle data from normal walking.
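The kinematic derivation of a reference velocity profile can be sketched as follows, assuming a simple two-link (thigh-shank) planar leg model; the segment lengths and joint-angle samples below are hypothetical stand-ins, not the paper's humanoid robot model.

```python
import math

def foot_position(hip_deg, knee_deg, thigh=0.45, shank=0.43):
    """Planar forward kinematics of a 2-link leg (hip -> knee -> ankle).
    Angles are in degrees, measured from the downward vertical."""
    h = math.radians(hip_deg)
    k = math.radians(knee_deg)
    # knee joint position relative to the hip
    kx = thigh * math.sin(h)
    ky = -thigh * math.cos(h)
    # ankle position: the shank direction adds the knee flexion angle
    ax = kx + shank * math.sin(h + k)
    ay = ky - shank * math.cos(h + k)
    return ax, ay

def velocity_profile(hip_angles, knee_angles, dt):
    """Reference velocity via numerical differentiation of the ankle path."""
    pts = [foot_position(h, k) for h, k in zip(hip_angles, knee_angles)]
    return [((x2 - x1) / dt, (y2 - y1) / dt)
            for (x1, y1), (x2, y2) in zip(pts, pts[1:])]

# hypothetical joint-angle samples over part of a step, 50 Hz sampling
hips = [-20, -10, 0, 10, 20]
knees = [10, 20, 30, 20, 10]
vels = velocity_profile(hips, knees, dt=0.02)
```

The resulting velocity samples would serve as the reference profile that the traction motor controller tracks.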

Modeling and Performance Evaluation of LTE Networks with Different TCP Variants

Long Term Evolution (LTE) is a 4G wireless broadband technology developed by the Third Generation Partnership Project (3GPP) in Release 8, and it represents the evolution of the Universal Mobile Telecommunications System (UMTS) for the next 10 years and beyond. The concepts for LTE systems were introduced in 3GPP Release 8 with the objective of a high-data-rate, low-latency and packet-optimized radio access technology. In this paper, the performance of different TCP variants over an LTE network is investigated. The performance of TCP over LTE is affected mostly by the links of the wired network and the total bandwidth available at the serving base station. This paper describes an NS-2 based simulation analysis of TCP-Vegas, TCP-Tahoe, TCP-Reno, TCP-Newreno, TCP-SACK, and TCP-FACK, with full modeling of all traffic of the LTE system. The evaluation of network performance with all TCP variants is mainly based on throughput, average delay and packet loss. The analysis of TCP performance over LTE shows that all TCP variants achieve similar throughput, with TCP-Vegas performing better than the other variants.

Analysis of Relation between Unlabeled and Labeled Data to Self-Taught Learning Performance

Obtaining labeled data in supervised learning is often difficult and expensive, so a learning algorithm trained on a small amount of data tends to overfit. As a result, some researchers have focused on using unlabeled data, which need not follow the same generative distribution as the labeled data, to construct high-level features for improving performance on supervised learning tasks. In this paper, we investigate the impact of the relationship between unlabeled and labeled data on classification performance. Specifically, we apply different unlabeled datasets, with different degrees of relation to the labeled data, to a handwritten digit classification task based on the MNIST dataset. Our experimental results show that the higher the degree of relation between unlabeled and labeled data, the better the classification performance. Although unlabeled data drawn from a completely different generative distribution than the labeled data yields the lowest classification performance, we still achieve high classification performance. This expands the applicability of supervised learning algorithms through the use of unsupervised learning.

A Simple Affymetrix Ratio-transformation Method Yields Comparable Expression Level Quantifications with cDNA Data

Gene expression profiling is rapidly evolving into a powerful technique for investigating tumor malignancies. Researchers are overwhelmed with microarray-based platforms and methods that give them the freedom to conduct large-scale gene expression profiling measurements. Simultaneously, investigations into cross-platform integration methods have started gaining momentum due to their underlying potential to help address a myriad of broad biological issues in tumor diagnosis, prognosis, and therapy. However, comparing results from different platforms remains a challenging task, as various inherent technical differences exist between microarray platforms. In this paper, we explain a simple ratio-transformation method that can provide common ground between the cDNA and Affymetrix platforms for cross-platform integration. The method is based on the characteristic data attributes of the Affymetrix and cDNA platforms. In this work, we considered seven childhood leukemia patients and their gene expression levels on each platform. With a dataset of 822 differentially expressed genes from both platforms, we applied a specific ratio treatment to the Affymetrix data, which subsequently showed an improvement in the relationship with the cDNA data.
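A minimal sketch of one plausible ratio transformation follows, assuming the ratio is taken against the gene-wise mean intensity across samples so that absolute Affymetrix intensities become log-ratios comparable in form to cDNA two-channel data. The exact reference used by the paper is not specified here, and the intensity values are hypothetical.

```python
import math

def ratio_transform(intensities):
    """Convert absolute intensities (one row per gene, one column per
    sample) into log2 ratios against the gene-wise mean across samples,
    mimicking cDNA-style ratio data. The mean reference is an assumption."""
    out = []
    for gene in intensities:
        mean = sum(gene) / len(gene)
        out.append([math.log2(v / mean) for v in gene])
    return out

# hypothetical Affymetrix intensities for two genes across three patients
affy = [[200.0, 400.0, 800.0],   # gene varying across patients
        [50.0, 50.0, 50.0]]      # gene with constant expression
ratios = ratio_transform(affy)
```

After the transformation, a constant gene maps to zero log-ratios everywhere, and up/down regulation appears as positive/negative values, the same scale on which cDNA ratio data live.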

Marketing Strategy Analysis of Boon Rawd Brewery Company

Boon Rawd Brewery is a beer company based in Thailand that has an exemplary image, both as a good employer and as a well-managed company with a strong record of social responsibility. The most famous of the company’s products is Singha beer. To study the company’s marketing strategy, a case study analysis was conducted together with qualitative research methods. The study analyzed the marketing strategy of Boon Rawd Brewery before the liberalization of the liquor market in 2000. The company’s marketing strategies consisted of the following: product line strategy, product development strategy, block channel strategy, media strategy, trade strategy, and consumer incentive strategy. Additionally, the company employed a marketing mix strategy based on the 4Ps: product, price, promotion and place (of distribution).

Modified Fuzzy ARTMAP and Supervised Fuzzy ART: Comparative Study with Multispectral Classification

In this article, a modification of the fuzzy ART network algorithm is carried out, with the aim of making it supervised. It consists of searching for the comparison, training and vigilance parameters that give the minimum quadratic distance between the outputs of the training base and those obtained by the network. The same process is applied to determine the parameters of the fuzzy ARTMAP giving the most powerful network. The modification consists in making the fuzzy ARTMAP learn a base of examples not just once, as is customary, but as many times as its architecture keeps evolving or the objective error has not yet been reached. In this way, we do not need to worry about the values to impose on the eight parameters of the network. To evaluate each of these three modified networks, a comparison of their performances is carried out. As an application, we classified an image of the Bay of Algiers taken by the SPOT XS satellite. As evaluation criteria we use the training duration, the mean square error (MSE) on the control set, and the rate of correct classification per class. The results of this study, presented as curves, tables and images, show that the modified fuzzy ARTMAP offers the best quality/computing-time compromise.

Object Recognition on Horse Riding Simulator System

In recent years, IT convergence technology has been developed to obtain creative solutions by combining robotics and sports science. Object detection and recognition have mainly been applied in the sports science field by recognizing faces and tracking the human body. However, object detection and recognition using a vision sensor is a challenging task in the real world because of illumination. In this paper, object detection and recognition using a vision sensor applied to a sports simulator is introduced. Face recognition is performed to identify the user and to automatically update a person's athletic record. The human body is tracked to offer the most accurate way of riding the horse simulator. Combined image processing is performed to reduce the adverse effect of illumination, which causes low detection and recognition performance in real-world applications. Faces are recognized using a standard face graph, and the human body is tracked using a pose model composed of feature nodes generated from diverse face and pose images. Face recognition using Gabor wavelets and pose recognition using a pose graph are robust in real applications. We ran simulations using the ETRI database, which was constructed on the horse riding simulator.

Prediction of Natural Gas Viscosity using Artificial Neural Network Approach

Prediction of the viscosity of natural gas is important in energy industries such as natural gas storage and transportation. In this study, the viscosity of different natural gas compositions is modeled using an artificial neural network (ANN) based on the back-propagation method. A reliable database including more than 3841 experimental viscosity data points is used for training and testing the ANN. The designed neural network can predict natural gas viscosity from pseudo-reduced pressure and pseudo-reduced temperature with an AARD of 0.221%. The accuracy of the designed ANN has been compared with other published empirical models. The comparison indicates that the proposed method provides accurate results.
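The reported error metric can be stated precisely as follows. The AARD% formula below is the standard definition of average absolute relative deviation; the viscosity values are hypothetical and serve only to show the computation.

```python
def aard_percent(experimental, predicted):
    """Average absolute relative deviation, in percent:
    AARD% = (100 / N) * sum(|pred_i - exp_i| / exp_i)."""
    n = len(experimental)
    return 100.0 / n * sum(abs(p - e) / e
                           for e, p in zip(experimental, predicted))

# hypothetical gas viscosity values in cP (not the paper's dataset)
exp_visc = [0.0110, 0.0125, 0.0140]
ann_pred = [0.0111, 0.0124, 0.0141]
err = aard_percent(exp_visc, ann_pred)
```

The paper's figure of 0.221% is this quantity evaluated over its 3841-point experimental database.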

A Complexity Measure for Java Bean based Software Components

Traditional software product and process metrics are neither suitable nor sufficient for measuring the complexity of software components, which is ultimately necessary for quality and productivity improvement in organizations adopting CBSE. Researchers have proposed a wide range of complexity metrics for software systems. However, these metrics are not sufficient for components and component-based systems, being restricted to module-oriented and object-oriented systems. In this study, it is proposed to measure the complexity of JavaBean software components as a reflection of their quality, so that a component can be adapted accordingly to make it more reusable. The proposed metric involves only the design issues of the component and does not consider packaging and deployment complexity. In this way, the complexity of software components can be kept within certain limits, which in turn helps enhance quality and productivity.

Marketing Strategy Analysis of Thai Asia Pacific Brewery Company

The study was a case study analysis of the Thai Asia Pacific Brewery Company. The purpose was to analyze the company’s marketing objective, marketing strategy at the company level, and marketing mix before liquor liberalization in 2000. The methods used in this study were qualitative and descriptive research approaches. The results were as follows: (1) the marketing objective was to increase the market share of Heineken and Amstel; (2) the company’s marketing strategies were a brand building strategy and a distribution strategy. Additionally, the company also conducted a marketing mix strategy as follows. Product strategy: the company added more beer brands, namely Amstel and Tiger, to provide additional choice to consumers, along with product and marketing research and product development. Price strategy: the company took into consideration cost, competitors, the market, the economic situation and tax. Promotion strategy: the company conducted sales promotion and advertising. Distribution strategy: the company extended its channels of distribution into food shops, pubs and various entertainment places. This strategy benefited interested persons and people engaged in the beer business.

Three-player Domineering

Domineering is a classic two-player combinatorial game usually played on a rectangular board. Three-player Domineering is the three-player version of Domineering, played on a three-dimensional board. Experimental results are presented for x × y × z boards with x + y + z < 10 and x, y, z ≥ 2. Also, some theoretical results are shown for the 2 × 2 × n board with n even and n ≥ 4.

A CFD Study of Turbulent Convective Heat Transfer Enhancement in Circular Pipeflow

The addition of milli- or micro-sized particles to the heat transfer fluid is one of many techniques employed for improving the heat transfer rate. Though it looks simple, this method has practical problems such as high pressure loss, clogging and erosion of the material of construction. These problems can be overcome by using nanofluids, which are dispersions of nanosized particles in a base fluid. Nanoparticles increase the thermal conductivity of the base fluid manifold, which in turn increases the heat transfer rate. Nanoparticles also increase the viscosity of the base fluid, resulting in a higher pressure drop for the nanofluid compared to the base fluid. So it is imperative that the Reynolds number (Re) and the volume fraction be optimal for better thermal-hydraulic effectiveness. In this work, heat transfer enhancement using aluminium oxide nanofluids at low and high volume fractions in turbulent pipe flow with constant wall temperature has been studied by computational fluid dynamic modeling of the nanofluid flow, adopting the single-phase approach. The nanofluid, up to a volume fraction of 1%, is found to be an effective heat transfer enhancement technique. The Nusselt number (Nu) and friction factor predictions for the low volume fractions (i.e. 0.02%, 0.1% and 0.5%) agree very well with the experimental values of Sundar and Sharma (2010), while predictions for the high volume fraction nanofluids (i.e. 1%, 4% and 6%) are found to be in reasonable agreement with both experimental and numerical results available in the literature. So the computationally inexpensive single-phase approach can be used for heat transfer and pressure drop prediction of new nanofluids.
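Under the single-phase approach, the nanofluid is treated as a homogeneous fluid with effective properties. The sketch below uses classical closure models (Maxwell for conductivity, Brinkman for viscosity, mixture rules for density and heat capacity) together with the Dittus-Boelter correlation; these particular correlations and property values are assumptions for illustration, since the paper does not list the ones it used.

```python
# Base fluid (water near 300 K) properties -- assumed representative values
k_bf, mu_bf, rho_bf, cp_bf = 0.613, 0.000855, 997.0, 4179.0
k_p, rho_p, cp_p = 40.0, 3970.0, 765.0   # alumina (Al2O3), assumed values

def nanofluid_properties(phi):
    """Effective properties of a dilute Al2O3-water nanofluid at particle
    volume fraction phi, via classical single-phase closure models."""
    # Maxwell model for thermal conductivity
    k_nf = k_bf * ((k_p + 2 * k_bf + 2 * phi * (k_p - k_bf))
                   / (k_p + 2 * k_bf - phi * (k_p - k_bf)))
    # Brinkman model for viscosity
    mu_nf = mu_bf / (1 - phi) ** 2.5
    # mixture rules for density and heat capacity
    rho_nf = (1 - phi) * rho_bf + phi * rho_p
    cp_nf = ((1 - phi) * rho_bf * cp_bf + phi * rho_p * cp_p) / rho_nf
    return k_nf, mu_nf, rho_nf, cp_nf

def dittus_boelter_nu(re, pr):
    """Classical turbulent-pipe correlation: Nu = 0.023 Re^0.8 Pr^0.4."""
    return 0.023 * re ** 0.8 * pr ** 0.4
```

The competing effects discussed in the abstract appear directly here: `k_nf` rises with `phi` (better heat transfer), but so does `mu_nf` (higher pressure drop), which is why an optimal Re and volume fraction exist.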

Synchronization of Oestrus in Goats with Progestogen Sponges and Short Term Combined FGA, PGF2α Protocols

The study aimed to evaluate the reproductive performance response to short-term oestrus synchronization during the transition period. One hundred and sixty-five indigenous multiparous non-lactating goats were subdivided into the following six treatment groups for oestrus synchronization: NT control group (N = 30); Fe-21d, FGA vaginal sponge for 21 days + eCG on day 19; FPe-11d, FGA for 11 days + PGF2α and eCG on day 9; FPe-10d, FGA for 10 days + PGF2α and eCG on day 8; FPe-9d, FGA for 9 days + PGF2α and eCG on day 7; PFe-5d, PGF2α on day 0 + FGA for 5 days + eCG on day 5. The goats were naturally mated (1 male / 6 females). Fecundity rates (no. of births / no. of females treated × 100) were statistically higher (P < 0.05) in the short-term FPe-9d (157.9%), FPe-11d (115.4%), FPe-10d (111.1%) and PFe-5d (107.7%) groups compared to the NT control group (66.7%).

Information Transmission between Large and Small Stocks in the Korean Stock Market

Little attention has been paid to information transmission between portfolios of large stocks and small stocks in the Korean stock market. This study investigates the return and volatility transmission mechanisms between large and small stocks on the Korea Exchange (KRX). It also explores whether bad news in the large stock market leads to greater volatility in the small stock market than good news does. By employing the Granger causality test, we found unidirectional return transmission from large stocks to medium and small stocks. This evidence indicates that past information about large stocks has a better ability to predict the returns of medium and small stocks in the Korean stock market. Moreover, using the asymmetric GARCH-BEKK model, we observed a unidirectional relationship of asymmetric volatility transmission from large stocks to medium and small stocks. This finding suggests that volatility in the medium and small stocks following a negative shock in the large stocks is larger than that following a positive shock.
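A minimal sketch of a one-lag Granger causality F-test of the kind used to detect return transmission is shown below, run on synthetic "large" and "small" return series in which the large-stock return leads by one period. The data, lag order, and coupling strength are invented for illustration; the paper's actual data and specification are not reproduced.

```python
import numpy as np

def granger_f(y, x, lag=1):
    """F-statistic for 'x Granger-causes y' with one lag: compare
    y_t ~ 1 + y_{t-1}            (restricted) against
    y_t ~ 1 + y_{t-1} + x_{t-1}  (unrestricted)."""
    yt, ylag, xlag = y[lag:], y[:-lag], x[:-lag]
    ones = np.ones_like(yt)

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, yt, rcond=None)
        resid = yt - design @ beta
        return resid @ resid

    rss_r = rss(np.column_stack([ones, ylag]))
    rss_u = rss(np.column_stack([ones, ylag, xlag]))
    n, q, k = len(yt), 1, 3   # observations, restrictions, unrestricted params
    return (rss_r - rss_u) / q / (rss_u / (n - k))

rng = np.random.default_rng(0)
large = rng.normal(size=400)                           # "large stock" returns
small = 0.8 * np.roll(large, 1) + 0.3 * rng.normal(size=400)
small[0] = 0.0                                         # drop the wrapped value
f_large_to_small = granger_f(small, large)
f_small_to_large = granger_f(large, small)
```

Because `small` is built to lag `large`, the F-statistic in the large-to-small direction dominates, mirroring the unidirectional transmission the study reports.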

Real-Time Defects Detection Algorithm for High-Speed Steel Bar in Coil

This paper presents a real-time defect detection algorithm for high-speed steel bar in coil. Because the target speed is very high, the proposed algorithm must quickly process large volumes of image data for real-time operation. The defect detection algorithm therefore has to satisfy two conflicting requirements: reducing the processing time and improving the efficiency of defect detection. To enhance detection performance, an edge-preserving method is suggested for noise reduction in the target image. Finally, experimental results show that the proposed algorithm guarantees real-time processing and detection accuracy.
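As a stand-in for the edge-preserving noise-reduction step (the paper does not name its exact filter here), a simple median filter illustrates the idea: impulse noise is suppressed while step edges, which matter for defect detection, survive. The frame values are hypothetical.

```python
def median_filter(img, radius=1):
    """3x3 median filter (radius=1): a basic noise-reduction step that
    preserves step edges far better than mean filtering does."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            # gather the neighborhood, clipped at the image borders
            window = [img[j][i]
                      for j in range(max(0, y - radius), min(h, y + radius + 1))
                      for i in range(max(0, x - radius), min(w, x + radius + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# a dark/bright vertical edge with one salt-noise pixel at (2, 1)
frame = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 255, 200, 200],
         [10, 10, 200, 200]]
clean = median_filter(frame)
```

After filtering, the noise pixel is restored to the dark background while the vertical edge remains sharp; a real-time implementation would replace this O(w·h·r²) loop with an optimized sliding-window version.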

Connectivity Characteristic of Transcription Factor

Transcription factors are a group of proteins that help interpret the genetic information in DNA. Protein-protein interactions play a major role in the execution of key biological functions of a cell. These interactions are represented in the form of a graph with nodes and edges. Studies have shown that some nodes have a high degree of connectivity; such nodes, known as hub nodes, are inevitable parts of the network. In the present paper, a method is proposed to identify hub transcription factor proteins using sequence information. On a complete dataset of transcription factor proteins available from the APID database, the proposed method showed an accuracy of 77%, sensitivity of 79% and specificity of 76%.
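The hub notion used above can be made concrete as a degree count over the interaction graph; the edge list and the degree cutoff of 3 below are hypothetical examples, not the APID data or the paper's definition.

```python
from collections import defaultdict

def node_degrees(edges):
    """Degree of every node in an undirected interaction graph."""
    deg = defaultdict(int)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return dict(deg)

def hubs(edges, min_degree=3):
    """Nodes whose connectivity meets a (hypothetical) degree cutoff."""
    return {n for n, d in node_degrees(edges).items() if d >= min_degree}

# hypothetical protein-protein interactions: TF1 interacts with many partners
ppi = [("TF1", "P1"), ("TF1", "P2"), ("TF1", "P3"), ("TF1", "P4"), ("P2", "P3")]
hub_set = hubs(ppi)
```

The paper's contribution is to predict hub status from sequence features alone, i.e. without first observing the graph whose degrees this sketch counts.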

Improvement in Performance and Emission Characteristics of a Single Cylinder S.I. Engine Operated on Blends of CNG and Hydrogen

This paper presents the experimental results of a single cylinder Enfield engine using an electronically controlled fuel injection system, developed to carry out exhaustive tests using neat CNG and mixtures of hydrogen in compressed natural gas (HCNG) at 0, 5, 10, 15 and 20% by energy. Experiments were performed at 2000 and 2400 rpm with wide open throttle and varying equivalence ratio. Hydrogen, which has a fast burning rate, enhances the flame propagation rate of compressed natural gas when added to it. The emissions of HC and CO decreased with increasing percentage of hydrogen, but NOx was found to increase. The results indicated a marked improvement in brake thermal efficiency with increasing percentage of added hydrogen. The improvement in thermal efficiency was clearly more pronounced in the lean region than in the rich region. This study is expected to reduce vehicular emissions along with increasing thermal efficiency, and thus help in reducing further environmental degradation.

Identifying the Objectives of Outsourcing Logistics Services as a Basis for Measuring Its Financial and Operational Performance

Logistics outsourcing is a growing trend, and measuring its performance is a challenge. Performance measurement must be consistent with the objectives set for logistics outsourcing, but we have found no objective-based performance measurement system. We have conducted a comprehensive review of the specialist literature to cover this gap, which has led us to identify and define these objectives. The outcome is a list of the most relevant objectives and their descriptions. This will enable us to analyse in a future study whether the indicators used for measuring logistics outsourcing performance are consistent with the objectives pursued with the outsourcing. If this is not the case, a proposal will be made for a set of financial and operational indicators for measuring logistics outsourcing performance that takes the goals being pursued into account.

A Model for Estimation of Efforts in Development of Software Systems

Software effort estimation is the process of predicting the most realistic effort required to develop or maintain software, based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty and GA-based models, have already been used to estimate software effort for projects. In this study, statistical models, a Fuzzy-GA system and a Neuro-Fuzzy (NF) inference system are applied to estimate software effort. The performance of the developed models was tested on NASA software project datasets, and the results were compared with the Halstead, Walston-Felix, Bailey-Basili, Doty and genetic algorithm based models mentioned in the literature. The NF model shows the best results, with the lowest MMRE and RMSE values, compared with the Fuzzy-GA based hybrid inference system and the other existing models used for effort prediction.
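The two comparison metrics used above can be stated precisely as follows; the effort values in the example are hypothetical, not the NASA project data.

```python
import math

def mmre(actual, predicted):
    """Mean magnitude of relative error: mean(|act_i - pred_i| / act_i)."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error: sqrt(mean((act_i - pred_i)^2))."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

# hypothetical effort values (person-months) for three projects
actual_effort = [100.0, 50.0, 20.0]
nf_predicted = [95.0, 52.0, 21.0]
mmre_value = mmre(actual_effort, nf_predicted)
rmse_value = rmse(actual_effort, nf_predicted)
```

Lower values of both metrics indicate better estimation accuracy, which is the sense in which the NF model outperforms the other models in the study.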