A Hybrid Genetic Algorithm for the Sequence Dependent Flow-Shop Scheduling Problem

The flow-shop scheduling problem (FSP) deals with scheduling a set of jobs that visit a set of machines in the same order. The FSP is NP-hard, meaning that no efficient algorithm is known for solving the problem to optimality. To meet time requirements and to minimize the makespan of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a hybrid genetic algorithm (HGA). The proposed HGA applies a modified approach to generate the initial population of chromosomes and uses an improved heuristic, the iterated swap procedure, to improve the initial solutions. Three genetic operators are then used to produce good new offspring. The results are compared with several recently developed heuristics, and computational experiments show that the proposed HGA is highly competitive in both solution accuracy and efficiency.
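
As a point of reference for the objective being minimized, the sketch below (our illustration, not the authors' implementation; the data layout and the anticipatory-setup assumption are ours) computes the makespan of one permutation under sequence-dependent setup times:

    # Makespan of a permutation flow shop with sequence-dependent setup times.
    # proc[m][j]    : processing time of job j on machine m
    # setup[m][i][j]: setup time on machine m when job j follows job i
    # Setups are assumed anticipatory (they may start before the job arrives).
    def makespan(seq, proc, setup):
        n_machines = len(proc)
        done = [0.0] * n_machines   # completion time of the last job on each machine
        prev = None
        for j in seq:
            for m in range(n_machines):
                s = setup[m][prev][j] if prev is not None else 0.0
                ready = done[m - 1] if m > 0 else 0.0   # job must leave machine m-1
                done[m] = max(done[m] + s, ready) + proc[m][j]
            prev = j
        return done[-1]

    # two machines, three jobs; a GA searches over permutations of [0, 1, 2]
    proc = [[3, 2, 4], [2, 3, 1]]
    setup = [[[0, 1, 2], [1, 0, 1], [2, 1, 0]],
             [[0, 2, 1], [2, 0, 2], [1, 2, 0]]]
    print(makespan([0, 2, 1], proc, setup))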

Performance Comparison of Single and Multi-Path Routing Protocol in MANET with Selfish Behaviors

A Mobile Ad Hoc Network (MANET) is an infrastructure-less network that operates through the coordination of its nodes. Each node relies on other nodes to forward its data. Unlike in a wired network, nodes in an ad hoc network are resource-constrained (battery, bandwidth, computational capability, and so on). This dependence of one node on another, combined with the nodes' limited resources, can lead a node to withhold cooperation in order to conserve its resources; such non-cooperation is known as selfish behavior. This paper analyzes the performance of well-known MANET single-path (AODV) and multi-path (AOMDV) routing protocols in the presence of selfish behaviors. Along with existing selfish behaviors, a new variation is also studied. Extensive simulations were carried out using ns-2, and the study concludes that the multi-path protocol (AOMDV) with the link-disjoint configuration outperforms the other two configurations.

An Enhanced Tool for Implementing Dialogue Forms in Conversational Applications

Natural Language Understanding (NLU) systems will not be widely deployed unless they are technically mature and cost-effective to develop. Cost-effective development hinges on the availability of tools and techniques that enable the rapid production of NLU applications with minimal human resources. Further, these tools and techniques should allow quick, user-friendly development of applications and should be easy to upgrade in order to keep pace with evolving technologies and standards. This paper presents a visual tool for structuring and editing dialogue forms, the key element driving conversation in NLU applications based on IBM technology. The main focus is on the basic component used to describe human-machine interactions of this kind, the Dialogue Manager. In essence, the paper illustrates a tool that enables the visual representation of the Dialogue Manager, mainly during the implementation phase.

Optimal Image Compression Based on Sign and Magnitude Coding of Wavelet Coefficients

The wavelet transform is a very powerful tool for image compression. One of its advantages is that it provides both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient at coding the sign. It is generally assumed that no compression gain can be obtained from coding the sign, and only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign bit of a wavelet coefficient may be encoded with an estimated probability of 0.5; the same assumption is made for the refinement bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of the coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information on whether a coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are entropy encoded separately: the sign map and the magnitude map. The refinement information indicating whether a coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed, and simulations are performed on three standard grayscale images: Lena, Barbara, and Cameraman. Five decomposition scales are computed using the biorthogonal 9/7 wavelet filter bank. The results are compared with the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) and subjective (visual) quality for the three images, and the proposed method is shown to outperform JPEG2000. The proposed method is also compared with other codecs in the literature and demonstrates strong performance in terms of PSNR.
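
As a rough illustration of how such online probabilities can be measured (our sketch, not the paper's codec), the following estimates the probability of a positive sign among the coefficients that first become significant in each bit plane. On the synthetic symmetric source used here the estimate stays near 0.5; the gain reported in the paper comes from real wavelet subbands, whose early bit planes deviate from 0.5.

    import numpy as np

    # Synthetic Laplacian-like "coefficients"; in the paper these would come
    # from a five-scale biorthogonal 9/7 wavelet decomposition of the image.
    rng = np.random.default_rng(0)
    coeffs = rng.laplace(scale=20.0, size=100_000)

    mags = np.abs(coeffs)
    top = int(np.ceil(np.log2(mags.max() + 1)))
    for plane in range(top - 1, max(top - 8, -1), -1):
        t = 2.0 ** plane
        new = (mags >= t) & (mags < 2 * t)   # newly significant in this plane
        if new.any():
            p_pos = np.mean(coeffs[new] > 0)
            print(f"bit plane {plane}: P(sign = +) ~ {p_pos:.3f} over {new.sum()} coeffs")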

An Integrated Design Evaluation and Assembly Sequence Planning Model using a Particle Swarm Optimization Approach

In the traditional concept of product life cycle management, the activities of design, manufacturing, and assembly are performed sequentially. The drawback is that design considerations may contradict manufacturing and assembly considerations. Different component designs can lead to different assembly sequences; therefore, in some cases, a good design may incur high costs in the downstream assembly activities. In this research, an integrated design evaluation and assembly sequence planning model is presented. Given a product requirement, there may be several alternative design cases for the components of the same product, and selecting a different design case can change the assembly sequence for constructing the product. First, the designed components are represented using graph-based models, which are transformed into assembly precedence constraints and assembly costs. A particle swarm optimization (PSO) approach is then presented in which a particle is encoded as a position matrix defined by the design cases and the assembly sequences. The PSO algorithm performs design evaluation and assembly sequence planning simultaneously, with the objective of minimizing the total assembly cost. As a result, both the design cases and the assembly sequences can be optimized. The main contribution lies in the new concept of an integrated design evaluation and assembly sequence planning model and in the new PSO solution method. Test results on an example product show that the presented method is feasible and efficient for solving the integrated design evaluation and assembly planning problem.
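
To make the search space concrete, here is a minimal sketch (our own; the names, toy data, and pairwise cost model are illustrative, and the paper's position-matrix encoding is richer) of the objective a particle would be scored against: a candidate (design case, assembly sequence) pair is checked against the precedence constraints and its assembly cost accumulated.

    from itertools import permutations

    # precedence: set of (a, b) meaning part a must be assembled before part b
    # cost[a][b]: cost of assembling part b immediately after part a (illustrative)
    def assembly_cost(seq, precedence, cost, infeasible=float("inf")):
        pos = {part: i for i, part in enumerate(seq)}
        if any(pos[a] > pos[b] for (a, b) in precedence):
            return infeasible          # sequence violates a precedence constraint
        return sum(cost[seq[i]][seq[i + 1]] for i in range(len(seq) - 1))

    # design case -> (precedence constraints, pairwise cost table), toy data
    cases = {
        "A": ({(0, 1), (0, 2)}, [[0, 2, 4], [9, 0, 1], [3, 5, 0]]),
        "B": ({(1, 0)},         [[0, 3, 2], [1, 0, 6], [2, 2, 0]]),
    }
    best = min((assembly_cost(s, *cases[c]), c, s)
               for c in cases for s in permutations(range(3)))
    print(best)   # exhaustive here; the PSO searches both levels for real products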

Numerical Study on Parametrical Design of Long Shrouded Contra-Rotating Propulsion System in Hovering

A parametric study of a shrouded contra-rotating rotor was carried out based on 2D axisymmetric simulations, with the double rotor modeled as an actuator disk. The objective is to explore and quantify the effects of different shroud geometry parameters, mainly through the power loading (PL), which evaluates the capability of the whole propulsion system at the hover requirement of 5 N total thrust. The numerical results show the following. An increase in nozzle radius is desirable but is limited by flow separation; the optimal value is around 1.15 times the rotor radius. Viscous effects greatly constrain the influence of the nozzle shape, and a divergent angle of about 10.5° performs best for the chosen nozzle length. Inlet parameters such as the leading-edge curvature, radius, and internal shape do not affect thrust greatly, but they play an important role in the pressure distribution, which produces most of the shroud thrust; they should therefore be chosen to reduce adverse pressure gradients and thus the risk of boundary layer separation.

Structural Integrity Management for Fixed Offshore Platforms in Malaysia

Structural Integrity Management (SIM) is important for the protection of offshore crews, the environment, business assets, and company and industry reputation. API RP 2A contains guidelines for the assessment of existing platforms developed mostly for the Gulf of Mexico (GOM), and the ISO 19902 SIM framework likewise does not specifically cater for Malaysia. There are about 200 platforms in Malaysia, 90 of which have exceeded their design life. Petronas Carigali Sdn Bhd (PCSB) uses the Asset Integrity Management System and a highly subjective risk-based inspection program for these platforms, and Petronas currently does not have a standalone Petronas Technical Standard for SIM (PTS-SIM). This study proposes a recommended practice for the SIM process for offshore structures in Malaysia, incorporating studies by API and ISO as well as local elements such as the number of platforms, types of facilities, age, and risk ranking. A case study on the SMG-A platform in Sabah reveals missing or scattered platform data and a gap in the inspection history; the platform is to undergo a Level 3 underwater inspection in 2015.

Extraction of Data from Web Pages: A Vision Based Approach

With the explosive growth of information sources available on the World Wide Web, it has become increasingly difficult to identify relevant pieces of information, since web pages are often cluttered with irrelevant content such as advertisements, navigation panels, and copyright notices surrounding the main content. Hence, tools for mining data regions, data records, and data items need to be developed in order to provide value-added services. Currently available automatic techniques for mining data regions from web pages are still unsatisfactory because of their poor performance and tag dependence. In this paper, a novel method to automatically extract data items from web pages is proposed. It comprises two steps: (1) identification and extraction of data regions based on visual clues, and (2) identification of data records and extraction of data items from a data region. For step 1, a novel and more effective method based on visual clues is proposed, which finds the data regions formed by all types of tags. For step 2, a more effective method, Extraction of Data Items from web Pages (EDIP), is adopted to mine data items. EDIP is a list-based approach in which the list is a linear data structure. The proposed technique is able to mine non-contiguous data records and can correctly identify data regions irrespective of the type of tag in which they are bound. Our experimental results show that the proposed technique performs better than existing techniques.
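
As a flavor of step 1 (a toy sketch under our own assumptions; real systems obtain element bounding boxes from a browser rendering engine, and the grouping heuristics are far more elaborate), elements can be clustered into candidate data regions purely from their rendered positions, with no tag information:

    # Group rendered page elements into candidate data regions by visual
    # position alone. boxes: list of (top, bottom) y-coordinates of elements.
    def group_regions(boxes, gap=20):
        regions, current = [], []
        for top, bottom in sorted(boxes):
            if current and top - current[-1][1] > gap:  # big vertical gap => new region
                regions.append(current)
                current = []
            current.append((top, bottom))
        if current:
            regions.append(current)
        return regions

    # three tightly spaced records, then an advertisement far below them
    print(group_regions([(0, 30), (35, 65), (70, 100), (400, 460)]))
    # -> [[(0, 30), (35, 65), (70, 100)], [(400, 460)]]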

Homotopy Analysis Method for Hydromagnetic Plane and Axisymmetric Stagnation-point Flow with Velocity Slip

This work focuses on the steady boundary layer flow near the forward stagnation point of plane and axisymmetric bodies toward a stretching sheet. The no-slip condition on the solid boundary is replaced by a partial slip condition. Analytical solutions for the velocity distributions are obtained in series form with the help of the homotopy analysis method (HAM) for various values of the ratio of free-stream velocity to stretching velocity, the slip parameter, the suction and injection velocity parameter, the magnetic parameter, and the dimensionality index parameter. Convergence of the series is explicitly discussed. Results show that the flow and the skin friction coefficient depend heavily on the velocity slip factor. In addition, the effects of all the parameters mentioned above are more pronounced for plane flows than for axisymmetric flows.
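
For orientation, a commonly used similarity reduction for hydromagnetic stagnation-point flow toward a stretching sheet with partial slip takes the following form (a representative formulation from the general literature, written for the plane case; it is not necessarily the exact system solved in this paper, where the dimensionality index additionally scales the convective term):

    f''' + f f'' - (f')^2 + A^2 + M (A - f') = 0,
    f(0) = S, \quad f'(0) = 1 + \gamma f''(0), \quad f'(\infty) = A,

where A is the ratio of free-stream to stretching velocity, M the magnetic parameter, S the suction/injection parameter, and \gamma the slip parameter. HAM deforms an initial guess satisfying the boundary conditions toward the solution through an auxiliary convergence-control parameter, whose choice is the convergence issue discussed in the paper.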

Soil Resistivity Structure and Its Implication on the Pole Grid Resistance for Transmission Lines

High Voltage (HV) transmission lines are widely spread around residential areas, and their structures take many forms: concrete, steel, and timber poles. An earth grid always forms part of the HV transmission structure, and the soil resistivity value is one of the main inputs when determining the earth grid requirements. In this paper, the soil structure and its implication on the electrode resistance of HV transmission poles are explored. In addition, the paper presents simulations of various soil structures using IEEE and Australian standards, verifying the computations with the CDEGS software. Furthermore, the behavior of the split factor under different soil resistivity structures is presented using CDEGS simulations.
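
For context, two textbook relations that underpin such studies (standard formulas, not taken from this paper; the example numbers are arbitrary): the Wenner four-pin method converts a resistance measured at probe spacing a into an apparent soil resistivity, and Dwight's formula then estimates a driven rod's resistance in uniform soil.

    import math

    # Wenner four-pin method: apparent soil resistivity from measured
    # resistance R at probe spacing a (rho = 2*pi*a*R).
    def wenner_resistivity(spacing_m, measured_ohms):
        return 2 * math.pi * spacing_m * measured_ohms

    # Dwight's formula for a single driven rod of length L and diameter d
    # in uniform soil of resistivity rho: R = rho/(2*pi*L) * (ln(8L/d) - 1).
    def rod_resistance(rho, length_m, diam_m):
        return rho / (2 * math.pi * length_m) * (math.log(8 * length_m / diam_m) - 1)

    rho = wenner_resistivity(3.0, 5.3)        # e.g. 3 m spacing, 5.3 ohm reading
    print(f"apparent resistivity ~ {rho:.0f} ohm-m")
    print(f"2.4 m x 16 mm rod    ~ {rod_resistance(rho, 2.4, 0.016):.1f} ohm")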

Comparative Analysis of Various Multiuser Detection Techniques in SDMA-OFDM System Over the Correlated MIMO Channel Model for IEEE 802.16n

SDMA (Space-Division Multiple Access) is a MIMO (Multiple-Input, Multiple-Output) based wireless communication network architecture that has the potential to significantly increase spectral efficiency and system performance. Maximum likelihood (ML) detection provides the optimal performance, but its complexity increases exponentially with the modulation constellation size and the number of users. QR decomposition (QRD) MUD can be a substitute for ML detection due to its low complexity and near-optimal performance. The minimum mean-squared-error (MMSE) multiuser detector (MUD) minimizes the mean square error (MSE), which does not guarantee that the BER of the system is also minimized. The minimum bit error rate (MBER) MUD, however, performs better than the classic MMSE MUD in terms of error probability, since it directly minimizes the BER cost function. The MBER MUD is also able to support more users than the number of receive antennas, a scenario in which the other MUDs fail. In this paper, the performance of these MUD techniques is evaluated for correlated MIMO channel models based on the IEEE 802.16n standard.
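
For reference, the linear MMSE detector discussed above has a standard closed form; the sketch below applies it to a toy SDMA uplink (illustrative dimensions, BPSK, and an uncorrelated channel, unlike the correlated 802.16n models studied in the paper).

    import numpy as np

    rng = np.random.default_rng(1)
    n_rx, n_users, sigma2 = 4, 4, 0.1

    H = (rng.standard_normal((n_rx, n_users)) +
         1j * rng.standard_normal((n_rx, n_users))) / np.sqrt(2)
    x = rng.choice([-1, 1], size=n_users).astype(complex)      # BPSK symbols
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_rx) +
                                   1j * rng.standard_normal(n_rx))
    y = H @ x + noise

    # Linear MMSE multiuser detector: W = (H^H H + sigma^2 I)^(-1) H^H
    W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(n_users)) @ H.conj().T
    x_hat = np.sign((W @ y).real)
    print("transmitted:", x.real.astype(int), " detected:", x_hat.astype(int))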

Computational Study on Cardiac-Coronary Interaction in Terms of Coronary Flow-Pressure Waveforms in Presence of Drugs: Comparison Between Simulated and In Vivo Data

A cardiovascular human simulator can be a useful tool for understanding complex physiopathological processes in the cardiocirculatory system, and also for investigating the effects of different drugs on hemodynamic parameters. The aim of this work is to test the ability of our cardiovascular numerical simulator CARDIOSIM© to reproduce coronary flow/pressure waveforms in the presence of two different drugs: amlodipine (AMLO) and adenosine (ADO). In particular, a time-varying intramyocardial compression, assumed to be proportional to the left ventricular pressure, was applied to the venous coronary compliances in order to study its effects on the coronary blood flow and the flow/pressure loop. Since coronary circulation dynamics are strongly interrelated with the mechanics of left ventricular contraction, relaxation, and filling, the numerical model made it possible to analyze the effects induced by the left ventricular pressure on the coronary flow.

Voice in Pre-service Teacher Development

Recently, the Thai education system has been engaged in serious and promising reforms, and one of the crucial elements in most of these reforms is teacher professional development. Teachers today are under growing pressure to perform, yet most new teachers are not adequately prepared to meet these expectations. Consequently, this paper investigates the opinions of mentor teachers and university supervisors about the professional development of pre-service teachers in Rajabhat Universities with respect to learning management skills, and then compares the opinions of the two groups. The study involved a cohort of 40 university supervisors and 77 mentor teachers. The research concludes that the mentor teachers viewed the pre-service teachers as professional teachers with effective learning management skills, whereas, in the view of the university supervisors, the pre-service teachers still have inadequate learning management skills.

Seismic Response Reduction of Structures using Smart Base Isolation System

In this study, the control performance of a smart base isolation system consisting of a friction pendulum system (FPS) and a magnetorheological (MR) damper is investigated. A fuzzy logic controller (FLC) modulates the MR damper so as to minimize structural acceleration while maintaining acceptable base displacement levels. To this end, a multi-objective optimization scheme is used to optimize the parameters of the membership functions and to find appropriate fuzzy rules. To demonstrate the effectiveness of the proposed multi-objective genetic algorithm for the FLC, a numerical study of a smart base isolation system is conducted using several historical earthquakes. It is shown that the proposed method can find optimal fuzzy rules and that the optimized FLC outperforms not only a passive control strategy but also a human-designed FLC and a conventional semi-active control algorithm.
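
To fix ideas, a toy fuzzy controller of the kind the genetic algorithm would tune is sketched below (our own illustration: the membership shapes, rule table, and scaling are invented, not the paper's optimized FLC).

    # A toy two-rule FLC mapping base displacement and structural acceleration
    # to an MR damper command voltage (all shapes and gains are illustrative).
    def tri(x, a, b, c):
        """Triangular membership function with feet a, c and peak b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def mr_voltage(base_disp, accel):
        d_small, d_large = tri(base_disp, -0.1, 0.0, 0.1), tri(base_disp, 0.05, 0.2, 0.35)
        a_small, a_large = tri(accel, -1.0, 0.0, 1.0), tri(accel, 0.5, 2.0, 3.5)
        # IF disp large AND accel small THEN voltage high (clamp displacement)
        # IF accel large OR disp small  THEN voltage low  (limit transmitted force)
        w_high = min(d_large, a_small)
        w_low = max(a_large, d_small)
        if w_high + w_low == 0.0:
            return 0.0
        return 5.0 * (w_high * 1.0 + w_low * 0.1) / (w_high + w_low)  # defuzzify

    print(mr_voltage(0.2, 0.3))  # large displacement, modest acceleration -> high voltage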

Region Based Hidden Markov Random Field Model for Brain MR Image Segmentation

In this paper, we present the region-based hidden Markov random field model (RBHMRF), which encodes the characteristics of different brain regions into a probabilistic framework for brain MR image segmentation. The recently proposed TV+L1 model is used for region extraction. By utilizing different spatial characteristics in different brain regions, the RBHMRF model outperforms the current state-of-the-art method, the hidden Markov random field model (HMRF), which uses identical spatial information throughout the whole brain. Experiments on both real and synthetic 3D MR images show that the proposed method segments with higher accuracy than existing algorithms.
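
For orientation, MAP labeling in an HMRF-type segmentation minimizes an energy of roughly the following form (the generic HMRF objective, stated here for context; the paper's region-based extension is richer):

    U(x \mid y) = \sum_i \left[ \frac{(y_i - \mu_{x_i})^2}{2\sigma_{x_i}^2} + \ln \sigma_{x_i} \right] + \beta \sum_{(i,j)} \mathbf{1}(x_i \neq x_j),

where y_i are voxel intensities, x_i tissue labels with class parameters (\mu, \sigma), and \beta weights the Potts smoothness prior over neighboring voxel pairs (i, j). The HMRF baseline uses one spatial term for the whole brain; the region-based model, in effect, lets that term vary across the regions extracted by the TV+L1 step.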

Comparing Hilditch, Rosenfeld, Zhang-Suen, and Nagendraprasad-Wang-Gupta Thinning

This paper compares the Hilditch, Rosenfeld, Zhang-Suen, and Nagendraprasad-Wang-Gupta (NWG) thinning algorithms for Javanese character image recognition. Thinning is an effective process when the focus is not on the size of the pattern but rather on the relative positions of the strokes in the pattern. The research analyzes the thinning of 60 Javanese characters. Time-wise, the Zhang-Suen algorithm gives the best results, with an average processing time of 0.00455188 seconds. In terms of the percentage of pixels that meet one-pixel thickness, however, the Rosenfeld algorithm gives the best results, with a 99.98% success rate. In terms of the number of pixels erased, the NWG algorithm gives the best results, erasing an average of 84.12% of pixels. It can be concluded that the Hilditch algorithm performs least successfully of the four algorithms.
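
For reference, the Zhang-Suen algorithm that wins on speed above alternates two conditional sub-passes until no pixel changes; a compact, unoptimized sketch (our own, operating on a 0/1 numpy array):

    import numpy as np

    def zhang_suen(img):
        """Thin a binary image (numpy array of 0/1) to one-pixel-wide strokes."""
        img = img.copy().astype(np.uint8)
        changed = True
        while changed:
            changed = False
            for step in (0, 1):
                to_delete = []
                for r in range(1, img.shape[0] - 1):
                    for c in range(1, img.shape[1] - 1):
                        if img[r, c] == 0:
                            continue
                        # neighbours P2..P9, clockwise from north
                        p = [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                             img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]
                        b = sum(p)                                  # nonzero neighbours
                        a = sum((p[i] == 0) and (p[(i+1) % 8] == 1) # 0->1 transitions
                                for i in range(8))
                        if not (2 <= b <= 6 and a == 1):
                            continue
                        p2, p4, p6, p8 = p[0], p[2], p[4], p[6]
                        if step == 0 and p2*p4*p6 == 0 and p4*p6*p8 == 0:
                            to_delete.append((r, c))
                        elif step == 1 and p2*p4*p8 == 0 and p2*p6*p8 == 0:
                            to_delete.append((r, c))
                for r, c in to_delete:
                    img[r, c] = 0
                changed = changed or bool(to_delete)
        return img

    # a 5-pixel-thick horizontal bar thins to a single line
    bar = np.zeros((9, 20), dtype=np.uint8)
    bar[2:7, 2:18] = 1
    print(zhang_suen(bar))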

Local Curvelet Based Classification Using Linear Discriminant Analysis for Face Recognition

In this paper, an efficient local appearance feature extraction method based on the multi-resolution Curvelet transform is proposed in order to further enhance the performance of the well-known Linear Discriminant Analysis (LDA) method when applied to face recognition. Each face is described by a subset of band-filtered images containing block-based Curvelet coefficients. These coefficients characterize the face texture, and a set of simple statistical measures allows us to form compact and meaningful feature vectors. The proposed method is compared with related feature extraction methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Independent Component Analysis (ICA). Two other multi-resolution transforms, the wavelet (DWT) and the Contourlet, were also compared against the block-based Curvelet-LDA algorithm. Experimental results on the ORL, YALE, and FERET face databases show that the proposed method provides a better representation of the class information and achieves much higher recognition accuracy.
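
The classification back end is standard; here is a sketch of the LDA stage using scikit-learn as a stand-in (the random features below take the place of the block-based Curvelet statistics, so the numbers only illustrate shapes and API, not the paper's accuracies).

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(2)
    # stand-in features: 40 subjects x 10 images, 64-dim statistics per face
    X = rng.standard_normal((400, 64)) + np.repeat(rng.standard_normal((40, 64)), 10, axis=0)
    y = np.repeat(np.arange(40), 10)

    lda = LinearDiscriminantAnalysis(n_components=30)
    X_lda = lda.fit_transform(X, y)            # project onto discriminant axes
    print("reduced shape:", X_lda.shape)       # (400, 30)
    print("resubstitution accuracy:", lda.score(X, y))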

A PIM (Processor-In-Memory) for Computer Graphics: Data Partitioning and Placement Schemes

The demand for higher-performance graphics continues to grow because of the incessant desire for realism, and rapid advances in fabrication technology have enabled several processor cores to be built on a single die. Hence, it is important to develop single-chip parallel architectures for such data-intensive applications. In this paper, we propose an efficient PIM architecture tailored for computer graphics, which requires a large number of memory accesses. We then address the two tasks necessary for maximally exploiting the parallelism provided by the architecture, namely partitioning and placement of graphics data, which affect load balance and communication costs, respectively. Under the constraint of uniform partitioning, we develop approaches for optimal partitioning and placement that significantly reduce the search space. We also present heuristics for identifying near-optimal placements, since the search space for placement is impractically large despite our optimization. We then demonstrate the effectiveness of our partitioning and placement approaches through the analysis of example scenes; simulation results show considerable search space reductions, and our placement heuristics perform close to optimal: the average ratio of communication overheads between our heuristics and the optimal was 1.05. Our uniform partitioning showed reasonable average load-balance ratios of 1.47 for geometry processing and 1.44 for rasterization.

Seamless Flow of Voluminous Data in High Speed Network without Congestion Using Feedback Mechanism

The continuously growing needs of Internet applications that transmit massive amounts of data have led to the emergence of high-speed networks. Data transfer must take place without congestion, so feedback parameters must be sent from the receiver to the sender in order to restrict the sending rate. Although TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of the capacity of the data to be sent, and it halves the window size at the time of congestion, resulting in decreased throughput, low bandwidth utilization, and maximum delay. In this paper, the XCP protocol is used, and feedback parameters are calculated based on the arrival rate, service rate, traffic rate, and queue size. The receiver informs the sender about the throughput, the capacity of the data to be sent, and the window size adjustment, so there is no drastic decrease in window size and the sending rate increases more smoothly, yielding a continuous flow of data without congestion. As a result, there is a maximum increase in throughput, high utilization of the bandwidth, and minimum delay. The results of the proposed work are presented as graphs of throughput, delay, and window size. Thus, the XCP protocol is illustrated in detail and its parameters are thoroughly analyzed.
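
The heart of XCP's explicit feedback is the router's aggregate feedback computation; a sketch with the gains published in Katabi et al.'s original XCP design (our simplification: the per-flow fairness controller that divides this feedback among packets is omitted).

    # XCP aggregate feedback per control interval: phi = alpha*d*S - beta*Q,
    # with alpha = 0.4 and beta = 0.226 as in the original XCP design.
    ALPHA, BETA = 0.4, 0.226

    def aggregate_feedback(capacity_Bps, input_rate_Bps, mean_rtt_s, queue_bytes):
        spare = capacity_Bps - input_rate_Bps      # S: spare bandwidth (bytes/s)
        return ALPHA * mean_rtt_s * spare - BETA * queue_bytes   # bytes

    # link nearly saturated with a standing queue -> negative feedback,
    # so senders are told to slow down before any loss occurs
    print(aggregate_feedback(12.5e6, 12.25e6, 0.08, 150_000))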

A Novel Architecture for Wavelet based Image Fusion

In this paper, we focus on the fusion of images from different sources using multiresolution wavelet transforms. Based on a review of popular image fusion techniques used in data analysis, different pixel- and energy-based methods are tested. A novel architecture with a hybrid algorithm is proposed, which applies a pixel-based maximum selection rule to the low-frequency approximations and filter-mask-based fusion to the high-frequency details of the wavelet decomposition. The key feature of the hybrid architecture is that it combines the advantages of pixel- and region-based fusion in a single image, which can support the development of sophisticated algorithms that enhance edges and structural details. A graphical user interface (GUI) is developed for image fusion to make the research outcomes available to end users. To make the GUI usable for medical, industrial, and commercial activities without a MATLAB installation, a standalone executable application is also developed using the MATLAB Compiler Runtime.
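
A compact sketch of the hybrid rule follows (our reading of it, in Python with the PyWavelets and SciPy packages rather than the paper's MATLAB implementation; "filter mask based fusion" is interpreted here as a local-energy mask): maximum selection on the approximation band, and per-coefficient selection by neighborhood energy on the detail bands.

    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def fuse(img_a, img_b, wavelet="db2"):
        """Single-level wavelet fusion: max rule on approximations,
        local-energy mask on details."""
        cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a, wavelet)
        cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b, wavelet)

        cA = np.maximum(cA1, cA2)              # pixel-based maximum selection

        def pick(d1, d2):
            # keep the coefficient whose 3x3 neighbourhood carries more energy
            e1, e2 = uniform_filter(d1**2, 3), uniform_filter(d2**2, 3)
            return np.where(e1 >= e2, d1, d2)

        details = tuple(pick(d1, d2) for d1, d2 in
                        zip((cH1, cV1, cD1), (cH2, cV2, cD2)))
        return pywt.idwt2((cA, details), wavelet)

    a = np.zeros((64, 64)); a[:, :32] = 1.0    # toy inputs: complementary halves
    b = np.zeros((64, 64)); b[:, 32:] = 1.0
    print(fuse(a, b).shape)                    # (64, 64)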