Bounds on Reliability of Parallel Computer Interconnection Systems

Evaluating the residual reliability of large-sized parallel computer interconnection systems is not practicable with existing methods. Under such conditions, one must resort to approximation techniques that provide upper and lower bounds on this reliability. In this context, a new approximation method for providing bounds on residual reliability is proposed here. The proposed method is supported by two algorithms for simulation purposes. The bounds on the residual reliability of three different categories of interconnection topologies are efficiently found using the proposed method.
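The abstract does not specify the two simulation algorithms, so the following is only a minimal Monte Carlo sketch of how reliability bounds can be estimated by simulation: sample link failures, test two-terminal connectivity, and bracket the estimate with a normal-approximation confidence interval. The adjacency-list input, survival probability p, and all parameters are illustrative assumptions, not the authors' method.

```python
import random
from collections import deque

def connected(adj, alive, s, t):
    """BFS over surviving links to test s-t connectivity."""
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if (min(u, v), max(u, v)) in alive and v not in seen:
                seen.add(v)
                q.append(v)
    return False

def reliability_bounds(adj, p, s, t, trials=20000, z=1.96):
    """Monte Carlo estimate of s-t reliability with ~95% bounds."""
    edges = {(min(u, v), max(u, v)) for u in adj for v in adj[u]}
    hits = 0
    for _ in range(trials):
        alive = {e for e in edges if random.random() < p}
        hits += connected(adj, alive, s, t)
    r = hits / trials
    half = z * (r * (1 - r) / trials) ** 0.5
    return max(0.0, r - half), min(1.0, r + half)

# Toy 4-node mesh; each link survives with probability 0.9.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(reliability_bounds(adj, 0.9, 0, 3))
```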

Enhancing K-Means Algorithm with Initial Cluster Centers Derived from Data Partitioning along the Data Axis with the Highest Variance

In this paper, we propose an algorithm to compute initial cluster centers for K-means clustering. The data in a cell is partitioned using a cutting plane that divides the cell into two smaller cells. The plane is perpendicular to the data axis with the highest variance and is designed to reduce the sum of squared errors of the two cells as much as possible, while at the same time keeping the two cells as far apart as possible. Cells are partitioned one at a time until the number of cells equals the predefined number of clusters, K. The centers of the K cells become the initial cluster centers for K-means. The experimental results suggest that the proposed algorithm is effective, converging to better clustering results than those of the random initialization method. The research also indicates that the proposed algorithm greatly improves the likelihood of every cluster containing some data.
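A minimal sketch of this style of initialization, assuming the data is a NumPy array: each step splits the cell whose points have the largest single-axis variance, cutting at the median of that axis. The median cut is a simplification of the paper's SSE-minimizing cut; function names and the test data are illustrative.

```python
import numpy as np

def variance_axis_centers(X, k):
    """Initial K-means centers by recursive axis-aligned splits."""
    cells = [X]
    while len(cells) < k:
        # Pick the splittable cell and axis with the largest variance.
        i, axis = max(
            ((ci, ax) for ci, c in enumerate(cells) if len(c) > 1
             for ax in range(X.shape[1])),
            key=lambda t: cells[t[0]][:, t[1]].var(),
        )
        cell = cells.pop(i)
        cut = np.median(cell[:, axis])      # median cut; the paper optimizes SSE
        left = cell[cell[:, axis] <= cut]
        right = cell[cell[:, axis] > cut]
        if len(right) == 0:                 # degenerate cut: split by count
            order = np.argsort(cell[:, axis])
            half = len(cell) // 2
            left, right = cell[order[:half]], cell[order[half:]]
        cells += [left, right]
    return np.array([c.mean(axis=0) for c in cells])

X = np.random.rand(500, 2)
centers = variance_axis_centers(X, 4)       # feed these to K-means as init
```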

Using Visual Technologies to Promote Excellence in Computer Science Education

The purposes of this paper are to (1) promote excellence in computer science by suggesting a cohesive, innovative approach to fill well-documented deficiencies in current computer science education, (2) justify (using the authors' and others' anecdotal evidence from both the classroom and the real world) why this approach holds great potential to successfully eliminate the deficiencies, and (3) invite other professionals to join the authors in proof-of-concept research. The authors' experiences, though anecdotal, strongly suggest that a new approach involving visual modeling technologies should allow computer science programs to retain a greater percentage of prospective and declared majors as students become more engaged learners, more successful problem-solvers, and better prepared programmers. In addition, the graduates of such computer science programs will make greater contributions to the profession as skilled problem-solvers. Instead of wearily re-memorizing code as they move to the next course, students will have the problem-solving skills to think and work in more sophisticated and creative ways.

Performance Analysis of HSDPA Systems using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding

HSDPA is a new feature introduced in the Release-5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. Moreover, the HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay as compared to the Release-99 DSCH. Until now the HSDPA system has used turbo coding, one of the best coding techniques for approaching the Shannon limit. However, the main drawbacks of turbo coding are high decoding complexity and high latency, which make it unsuitable for some applications such as satellite communications, where the transmission distance itself introduces latency due to the limited speed of light. Hence, in this paper it is proposed to use LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity. LDPC coding does, however, increase the encoding complexity. Though the complexity of the transmitter increases at the NodeB, the end user benefits in terms of receiver complexity and bit-error rate. In this paper the LDPC encoder is implemented using a sparse parity-check matrix H to generate a codeword at the encoder, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that in LDPC coding the BER drops sharply as the number of iterations increases, with a small increase in Eb/No, which is not possible in turbo coding. Also, the same BER was achieved using fewer iterations, and hence the latency and receiver complexity are reduced for LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable and more robust data services. In other words, while realistic data rates are only a few Mbps, the actual quality and number of users achieved will improve significantly.
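The paper decodes with belief propagation; as a simpler stand-in that still shows the iterative, syndrome-driven character of LDPC decoding, here is a Gallager-style hard-decision bit-flipping decoder over a toy parity-check matrix. H, the code, and the error pattern are illustrative only.

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=20):
    """Hard-decision bit-flipping LDPC decoder.

    H: (m, n) parity-check matrix over GF(2); r: received hard bits.
    """
    x = r.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():            # all parity checks satisfied
            return x
        # Count, per bit, how many unsatisfied checks it appears in.
        votes = H.T @ syndrome
        x[votes == votes.max()] ^= 1      # flip the most-suspect bits
    return x

# Toy (n=6, m=3) parity-check matrix and a single-bit error.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
codeword = np.zeros(6, dtype=int)         # all-zero word is always a codeword
received = codeword.copy(); received[2] ^= 1
print(bit_flip_decode(H, received))       # recovers the all-zero codeword
```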

A Comparison of Wolf Pack Search and Four Other Optimization Algorithms

The main objective of this paper is to compare the Wolf Pack Search (WPS), a newly introduced intelligent algorithm, with several other known algorithms, including Particle Swarm Optimization (PSO), Shuffled Frog Leaping (SFL), and Binary and Continuous Genetic Algorithms. All algorithms are applied to two benchmark cost functions. The aim is to identify the best algorithm in terms of speed and accuracy in finding the solution, where speed is measured in terms of function evaluations. The simulation results show that the SFL algorithm, requiring fewer function evaluations, ranks first if simulation time is important, while if accuracy is the significant issue, WPS and PSO have better performance.
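For reference, a minimal global-best PSO, one of the compared algorithms, applied to the sphere function as a stand-in benchmark; the inertia and acceleration coefficients are common textbook values, not the paper's settings.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n, dim))         # positions
    v = np.zeros((n, dim))                    # velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

sphere = lambda z: float((z ** 2).sum())      # classic benchmark cost function
print(pso(sphere, dim=5))                     # converges toward the origin
```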

Wireless Sensor Networks: A Survey on Ultra-Low Power-Aware Design

Distributed wireless sensor networks consist of several nodes scattered over an area of interest. These sensors have as their only power supply a pair of batteries that must let them live up to five years without replacement. That is why it is necessary to develop power-aware algorithms that save battery lifetime as much as possible. In this document, a review of power-aware design for sensor nodes is presented. As examples of implementations, some resource and task management, communication, topology control and routing protocols are named.

An ACO-Based Algorithm for Distribution Networks Including Dispersed Generations

With the movement of power systems toward restructuring, along with factors such as environmental pollution, transmission expansion problems, and advances in the construction technology of small generation units, it is expected that small units such as wind turbines, fuel cells, photovoltaics, ... that most of the time connect to distribution networks will play a very essential role in the electric power industry. With the increasingly widespread use of small generation units, the management of distribution networks should be reviewed. The goal of this paper is to present a new method for the optimal management of active and reactive power in distribution networks with regard to the costs pertaining to various types of dispersed generations, capacitors and the cost of electric energy obtained from the network. In other words, this method endeavors to select optimal sources of active and reactive power generation and controlling equipment, such as dispersed generations, capacitors, under-load tap-changer transformers and substations, in such a way that, firstly, the costs related to them are minimized and, secondly, technical and physical constraints are respected. Because the optimal management of distribution networks is an optimization problem with continuous and discrete variables, a new evolutionary method based on the Ant Colony Algorithm has been applied. Simulation results of the method, tested on two cases containing 23 and 34 buses, are shown in later sections.
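The paper applies ACO to a mixed continuous/discrete network-management problem; the sketch below only illustrates the core ACO mechanics (probabilistic tour construction, evaporation, deposit) on a toy routing instance. The distance matrix and all parameters are invented for illustration.

```python
import numpy as np

def aco_tsp(dist, ants=20, iters=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """Minimal Ant Colony Optimization on a symmetric TSP instance."""
    n = len(dist)
    tau = np.ones((n, n))                       # pheromone trails
    eta = 1.0 / (dist + np.eye(n))              # heuristic visibility
    rng = np.random.default_rng(0)
    best_tour, best_len = None, np.inf
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            tour = [rng.integers(n)]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool); mask[tour] = False
                w = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(rng.choice(n, p=w / w.sum()))
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= 1 - rho                          # evaporation
        for tour, length in tours:              # pheromone deposit
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a, b] += q / length; tau[b, a] += q / length
    return best_tour, best_len

dist = np.array([[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]], float)
print(aco_tsp(dist))
```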

Prediction of the Limiting Drawing Ratio in Deep Drawing Process by Back-Propagation Artificial Neural Network

In this paper, a back-propagation artificial neural network (BPANN) with the Levenberg–Marquardt algorithm is employed to predict the limiting drawing ratio (LDR) of the deep drawing process. To prepare a training set for the BPANN, some finite element simulations were carried out. Die and punch radius, die arc radius, friction coefficient, sheet thickness, yield strength of the sheet and strain hardening exponent were used as the input data, and the LDR as the specified output used in the training of the neural network. Given these parameters, the trained network is able to estimate the LDR for any new condition. Comparing the FEM and BPANN results, an acceptable correlation was found.
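A sketch of the idea, assuming SciPy: a one-hidden-layer network fitted by Levenberg–Marquardt via scipy.optimize.least_squares(method="lm"). The random data stands in for the paper's FEM samples, and the layer size and target formula are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.random((80, 6))                  # stand-in for the 6 FEM input parameters
y = 1.8 + 0.4 * X[:, 0] - 0.3 * X[:, 3]  # synthetic "LDR" targets

H = 5                                    # hidden units (illustrative)
def unpack(p):
    W1 = p[:6 * H].reshape(H, 6); b1 = p[6 * H:7 * H]
    W2 = p[7 * H:8 * H];          b2 = p[-1]
    return W1, b1, W2, b2

def residuals(p):
    W1, b1, W2, b2 = unpack(p)
    hidden = np.tanh(X @ W1.T + b1)      # one tanh hidden layer
    return hidden @ W2 + b2 - y          # residuals for Levenberg-Marquardt

p0 = rng.normal(scale=0.1, size=8 * H + 1)
fit = least_squares(residuals, p0, method="lm")   # LM optimizer
print("RMSE:", np.sqrt(np.mean(fit.fun ** 2)))
```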

An Approach to Polynomial Curve Comparison in Geometric Object Database

In image processing and visualization, two bitmapped images are compared by matching them pixel by pixel. Consequently, this takes a lot of computational time, while the comparison of two vector-based images is significantly faster. Raster graphics images can sometimes be approximately converted into vector-based images by various techniques. After conversion, the problem of comparing two raster graphics images can be reduced to the problem of comparing vector graphics images. Hence, the problem of pixel-by-pixel comparison can be reduced to the problem of polynomial comparison. In computer aided geometric design (CAGD), vector graphics images are compositions of curves and surfaces. Curves are defined by a sequence of control points and their polynomials. In this paper, the control points are used to compare curves. Curves that have been translated or rotated are treated as equivalent, while two curves that differ only in scale are considered similar. This paper proposes an algorithm for comparing polynomial curves by using the control points, for both equivalence and similarity. In addition, the geometric object-oriented database used to keep the curve information has also been defined in XML format for further use in curve comparisons.
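One way to realize such a control-point test, not necessarily the paper's: align the two control polygons with the Kabsch/SVD method and inspect the residual, with an optional uniform scale for the similarity case. Thresholds and the test curve are illustrative.

```python
import numpy as np

def align_residual(P, Q, allow_scale=False):
    """Best-fit residual of control points Q onto P (Kabsch/SVD)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)        # remove translation
    U, S, Vt = np.linalg.svd(Qc.T @ Pc)
    R = U @ Vt                                    # optimal rotation
    if np.linalg.det(R) < 0:                      # avoid reflections
        U[:, -1] *= -1; R = U @ Vt
    s = S.sum() / (Qc ** 2).sum() if allow_scale else 1.0
    return np.linalg.norm(s * Qc @ R - Pc)

def classify(P, Q, tol=1e-8):
    if align_residual(P, Q) < tol:
        return "equivalent"                       # same up to rotation+translation
    if align_residual(P, Q, allow_scale=True) < tol:
        return "similar"                          # same up to uniform scale too
    return "different"

P = np.array([[0., 0.], [1., 0.], [2., 1.], [3., 3.]])
theta = 0.6
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
print(classify(P, 2.0 * P @ R.T + [5, -1]))      # -> "similar"
```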

Tool Path Generation and Manufacturing Process for Blades of a Compressor Rotor

This paper presents a complete procedure for tool path planning and blade machining in 5-axis manufacturing. The actual cutting contact and cutter locations can be determined from the lead and tilt angles. Tool path generation is implemented by piecewise curved approximation and chordal deviation detection. An application of the drive surface method promotes flexibility of tool control and stability of machine motion. A real manufacturing process is proposed that separates the operation into three regions with five stages and modifies the local tool orientation with an interactive algorithm.
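To make chordal deviation detection concrete, here is a hedged sketch: a parametric curve is recursively subdivided until the maximum deviation of the sampled curve from each chord falls below tolerance. The planar test curve and sampling density are assumptions, not the paper's machining data.

```python
import numpy as np

def chord_deviation(curve, t0, t1, samples=20):
    """Maximum distance between curve(t) and the chord curve(t0)-curve(t1)."""
    p0, p1 = curve(t0), curve(t1)
    d = p1 - p0
    d /= np.linalg.norm(d)
    ts = np.linspace(t0, t1, samples)
    pts = np.array([curve(t) for t in ts]) - p0
    # Perpendicular distance of each sampled point to the chord line (2D).
    return np.abs(pts[:, 0] * d[1] - pts[:, 1] * d[0]).max()

def subdivide(curve, t0, t1, tol):
    """Parameter breakpoints so every chord stays within tol of the curve."""
    if chord_deviation(curve, t0, t1) <= tol:
        return [t0, t1]
    tm = 0.5 * (t0 + t1)
    left = subdivide(curve, t0, tm, tol)
    return left[:-1] + subdivide(curve, tm, t1, tol)

curve = lambda t: np.array([t, np.sin(3 * t)])    # illustrative planar path
ts = subdivide(curve, 0.0, 2.0, tol=1e-3)
print(len(ts), "breakpoints")                     # linearized within tolerance
```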

Transformation of Vocal Characteristics: A Review of Literature

The transformation of vocal characteristics aims at modifying a voice such that the intelligibility of an aphonic voice is increased, or such that the voice of a speaker (the source speaker) is perceived as if another speaker (the target speaker) had uttered it. In this paper, the current state-of-the-art voice characteristics transformation methodology is reviewed. Special emphasis is placed on voice transformation methodology, and issues for improving the intelligibility and naturalness of the transformed speech are discussed. In particular, it is suggested to use the modulation theory of speech as a basis for research on high-quality voice transformation. This approach allows one to separate the linguistic, expressive, organic and perspectival information of speech, based on an analysis of how they are fused when speech is produced. Therefore, this theory provides the fundamentals not only for manipulating non-linguistic, extra-/paralinguistic and intra-linguistic variables for voice transformation, but also for paving the way to easily transposing existing voice transformation methods to emotion-related voice quality transformation and speaking style transformation. From the perspectives of human speech production and perception, the popular voice transformation techniques are described and classified based on their underlying principles, drawn from the speech production or perception mechanisms or from both. In addition, the advantages and limitations of voice transformation techniques and the experimental manipulation of vocal cues are discussed through examples from past and present research. Finally, conclusions and a road map toward more natural voice transformation algorithms are given.

Aerodynamics and Optimization of Airfoil Under Ground Effect

The prediction of aerodynamic characteristics and shape optimization of an airfoil under the ground effect have been carried out by integrating computational fluid dynamics and a multi-objective Pareto-based genetic algorithm. The main flow characteristics around an airfoil of a WIG craft are the lift force, lift-to-drag ratio and static height stability (HS). However, they show a strong trade-off phenomenon, so that it is not easy to satisfy the design requirements simultaneously. This difficulty can be resolved by optimal design. The above-mentioned three characteristics are chosen as the objective functions, and the NACA0015 airfoil is considered as the baseline model in the present study. The profile of the airfoil is constructed by Bezier curves with fourteen control points, and these control points are adopted as the design variables. For multi-objective optimization problems, the optimal solutions are not unique but form a set of non-dominated optima, called Pareto frontiers or Pareto sets. As a result of the optimization, forty non-dominated Pareto optima were obtained after thirty evolutions.
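A small sketch of how a Pareto set is extracted from candidate designs. It assumes all three objectives are to be maximized, which is an assumption, and uses random placeholder objective values rather than CFD results.

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F (all objectives maximized)."""
    n = len(F)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(F[j] >= F[i]) and np.any(F[j] > F[i]) for j in range(n)
        )
        if not dominated:
            keep.append(i)
    return keep

# Placeholder objective values: [lift, lift-to-drag ratio, height stability]
rng = np.random.default_rng(1)
F = rng.random((200, 3))
front = pareto_front(F)
print(len(front), "non-dominated designs out of", len(F))
```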

A "Greedy" Czech Manufacturing Case

The article describes a case study on one of the Czech Republic's medium-sized manufacturing enterprises (ME), where, due to the European financial crisis, production lines had to be redesigned and optimized in order to minimize the total cost of producing the goods. We consider an optimization problem of minimizing the total cost of the workload, according to the costs of the possible locations of the workplaces, with an application of the greedy algorithm and a partial analogy to the Set Packing Problem. The displacement of working tables in the company should follow a one-to-one monotone increasing function in order for the total production cost to be at its minimum. We use a heuristic approach with the greedy algorithm for solving this linear optimization problem, regardless of the possible greediness that may appear, and we apply it in a Czech ME.
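A hedged illustration of the greedy idea: each working table, taken in order, is placed at the cheapest still-feasible location while keeping the placement a one-to-one monotone increasing function, as the abstract describes. The cost matrix is hypothetical and the rule is a sketch, not the company's actual model.

```python
# Hypothetical cost[i][j]: cost of placing working table i at location j.
cost = [
    [4, 2, 7, 5, 9],
    [6, 3, 1, 8, 4],
    [5, 9, 2, 3, 6],
]

def greedy_monotone_assignment(cost):
    """Greedily place each table at the cheapest location that keeps
    the placement a one-to-one monotone increasing function."""
    n_tables, n_locs = len(cost), len(cost[0])
    placement, last, total = [], -1, 0
    for i in range(n_tables):
        # Only locations to the right of the previous table are feasible,
        # leaving room for the remaining tables.
        room = n_tables - i - 1
        feasible = range(last + 1, n_locs - room)
        j = min(feasible, key=lambda c: cost[i][c])
        placement.append(j); last = j; total += cost[i][j]
    return placement, total

print(greedy_monotone_assignment(cost))   # -> ([1, 2, 3], 6)
```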

A Post-Processing Method for a Quantum Prime Factorization Algorithm Based on a Randomized Approach

Prime factorization based on a quantum approach is performed in two phases. The first phase is carried out on a quantum computer and the second phase on a classical computer (post-processing). In the second phase, the goal is to estimate the period r of the equation x^r ≡ 1 (mod N) and to find the prime factors of the composite integer N on the classical computer. In this paper we present a method based on a randomized approach for estimating the period r with a satisfactory probability, so that the composite integer N can be factorized; with the randomized approach, even if the guess of the period is not exactly the real period, at least one of the prime factors of the composite N can be found. Finally, we present some important points for designing an emulator for quantum computer simulation.
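The standard classical post-processing step of Shor's algorithm, for reference: given a base x and an even period r, gcd(x^{r/2} ± 1, N) often yields a nontrivial factor. The randomized retry loop over several candidate periods reflects the abstract's point that an inexact period guess can still succeed; its exact form here is an assumption.

```python
import math
import random

def factor_from_period(N, x, r):
    """Classical Shor post-processing: try to split N given base x, period r."""
    if r % 2:                        # need an even period
        return None
    half = pow(x, r // 2, N)
    for c in (half - 1, half + 1):
        g = math.gcd(c, N)
        if 1 < g < N:
            return g
    return None

def randomized_postprocess(N, candidates, tries=50):
    """Retry random bases against (possibly inexact) period candidates."""
    for _ in range(tries):
        x = random.randrange(2, N - 1)
        g = math.gcd(x, N)
        if g > 1:                    # lucky: x already shares a factor with N
            return g
        for r in candidates:
            f = factor_from_period(N, x, r)
            if f:
                return f
    return None

N = 15
# Candidate periods, e.g. from a quantum measurement plus nearby guesses.
print(randomized_postprocess(N, candidates=[2, 3, 4, 5]))  # a factor of 15
```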

Object-Based Image Indexing and Retrieval in DCT Domain using Clustering Techniques

In this paper, we present a new and effective image indexing technique that extracts features directly in the DCT domain. Our proposed approach is an object-based image indexing. For each 8×8 block in the DCT domain, a feature vector is extracted. Then the feature vectors of all blocks of the image are clustered into groups using a k-means algorithm. Each cluster represents a distinct object of the image. We then select the clusters that have the largest membership after clustering. The centroids of the selected clusters are taken as the image feature vectors and indexed into the database. We also propose an approach for using the proposed image indexing method in automatic image classification. Experimental results on a database of 800 images from 8 semantic groups in automatic image classification are reported.
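A sketch of this pipeline, assuming SciPy and scikit-learn: a 2D DCT per 8×8 block, a few low-frequency coefficients as the block feature, k-means over the blocks, and the most populated clusters' centroids as the image index. Which coefficients to keep and the cluster counts are assumptions, not the paper's choices.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

def block_features(img, bs=8, n_coef=9):
    """One feature vector per 8x8 block: low-frequency 2D-DCT coefficients."""
    h, w = (d - d % bs for d in img.shape)
    feats = []
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            d = dctn(img[i:i + bs, j:j + bs], norm="ortho")
            feats.append(d[:3, :3].ravel()[:n_coef])  # 3x3 low-freq corner
    return np.array(feats)

def index_vectors(img, k=6, keep=3):
    """Centroids of the 'keep' most populated clusters as the image index."""
    feats = block_features(img)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
    sizes = np.bincount(km.labels_, minlength=k)
    top = np.argsort(sizes)[::-1][:keep]
    return km.cluster_centers_[top]

img = np.random.rand(64, 64)           # placeholder for a grayscale image
print(index_vectors(img).shape)        # -> (3, 9)
```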

New Wavelet-Based Superresolution Algorithm for Speckle Reduction in SAR Images

This paper describes a novel projection algorithm, the Projection Onto Span Algorithm (POSA), for wavelet-based super-resolution and for removing speckle of unknown variance (in the wavelet domain) from Synthetic Aperture Radar (SAR) images. Although POSA is good as a new super-resolution algorithm for image enhancement, image metrology and biometric identification, here it is used as a despeckling tool, this being the first time that a super-resolution algorithm is used for despeckling SAR images. Specifically, the speckled SAR image is decomposed into wavelet subbands, POSA is applied to the high subbands, and a SAR image is reconstructed from the modified detail coefficients. Experimental results demonstrate that the new method compares favorably to several other despeckling methods on test SAR images.
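The decompose/modify/reconstruct skeleton of such a method, sketched with PyWavelets. Soft thresholding of the detail subbands is only a placeholder for POSA, which the abstract does not specify; the wavelet, level and threshold are assumptions.

```python
import numpy as np
import pywt

def despeckle_skeleton(img, wavelet="db2", level=2, thr=0.1):
    """Wavelet decompose, modify detail subbands, reconstruct.

    Soft thresholding stands in for the paper's POSA step.
    """
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    new_details = [
        tuple(pywt.threshold(d, thr, mode="soft") for d in band)
        for band in details
    ]
    return pywt.waverec2([approx] + new_details, wavelet)

noisy = np.random.rand(128, 128)       # placeholder for a speckled SAR image
clean = despeckle_skeleton(noisy)
print(clean.shape)
```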

Instance-Based Ontology Matching Using Different Kinds of Formalism

Ontology matching is a task needed in various applications, for example for comparison or merging purposes. In the literature, many algorithms solving the matching problem can be found, but most of them do not consider instances at all. Mappings are determined by calculating the string similarity of labels, by recognizing linguistic word relations (synonyms, subsumptions, etc.) or by analyzing the (graph) structure. Given that instances are often modeled within the ontology, and that the set of instances describes the meaning of the concepts better than their meta information, instances should definitely be incorporated into the matching process. In this paper several novel instance-based matching algorithms are presented which enhance the quality of matching results obtained with common concept-based methods. Different kinds of formalisms are used to classify concepts on account of their instances and finally to compare the concepts directly.

Keywords: Instances, Ontology Matching, Semantic Web
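One simple instance-based formalism, as an illustration rather than one of the paper's algorithms: score concept pairs by the Jaccard overlap of their instance sets and propose pairs above a threshold. The toy ontologies and the threshold are invented.

```python
def jaccard(a, b):
    """Overlap of two instance sets, 0.0 .. 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def instance_based_matches(onto_a, onto_b, threshold=0.5):
    """Propose concept mappings whose instance sets overlap strongly."""
    return [
        (ca, cb, round(jaccard(ia, ib), 2))
        for ca, ia in onto_a.items()
        for cb, ib in onto_b.items()
        if jaccard(ia, ib) >= threshold
    ]

# Invented toy ontologies: concept -> set of instance identifiers.
onto_a = {"Car": {"golf", "civic", "corolla"}, "Plane": {"a320", "b747"}}
onto_b = {"Automobile": {"golf", "corolla", "tesla"}, "Aircraft": {"a320"}}
print(instance_based_matches(onto_a, onto_b))
# -> [('Car', 'Automobile', 0.5), ('Plane', 'Aircraft', 0.5)]
```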

Inferring Hierarchical Pronunciation Rules from a Phonetic Dictionary

This work presents a new phonetic transcription system based on a tree of hierarchical pronunciation rules expressed as context-specific grapheme-phoneme correspondences. The tree is automatically inferred from a phonetic dictionary by incrementally analyzing deeper context levels, eventually representing a minimum set of exhaustive rules that pronounce without errors all the words in the training dictionary and that can be applied to out-of-vocabulary words. The proposed approach improves upon existing rule-tree-based techniques in that it makes use of graphemes, rather than letters, as elementary orthographic units. A new linear algorithm for the segmentation of a word into graphemes is introduced to enable out-of-vocabulary grapheme-based phonetic transcription. Exhaustive rule trees provide a canonical representation of the pronunciation rules of a language that can be used not only to pronounce out-of-vocabulary words, but also to analyze and compare the pronunciation rules inferred from different dictionaries. The proposed approach has been implemented in C and tested on Oxford British English and Basic English. Experimental results show that grapheme-based rule trees represent phonetically sound rules and provide better performance than letter-based rule trees.
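The abstract does not detail its linear segmentation algorithm, so here is a hedged stand-in showing the task: greedy longest-match segmentation of a word into graphemes against a small, invented inventory of multi-letter English graphemes (the paper's implementation is in C; Python is used here for brevity).

```python
# Invented mini-inventory of English multi-letter graphemes.
GRAPHEMES = {"tch", "sch", "ch", "sh", "th", "ph", "gh", "ck", "qu", "ng", "oo", "ee"}
MAX_LEN = max(len(g) for g in GRAPHEMES)

def segment(word):
    """Greedy longest-match segmentation of a word into graphemes.

    Runs in O(len(word) * MAX_LEN), i.e. linear for a fixed inventory.
    """
    units, i = [], 0
    while i < len(word):
        for size in range(min(MAX_LEN, len(word) - i), 0, -1):
            chunk = word[i:i + size]
            if size == 1 or chunk in GRAPHEMES:
                units.append(chunk)
                i += size
                break
    return units

print(segment("catching"))   # -> ['c', 'a', 'tch', 'i', 'ng']
print(segment("school"))     # -> ['sch', 'oo', 'l']
```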

A Flexible Flowshop Scheduling Problem with Machine Eligibility Constraint and Two Criteria Objective Function

This research deals with a flexible flowshop scheduling problem in which jobs arrive and are delivered in groups but are processed individually. Due to the special characteristics of each job, only a subset of the machines in each stage is eligible to process that job. The objective function deals with minimizing the sum of the completion times of the groups on one hand, and minimizing the sum of the differences between the completion time of each job and the delivery time of the group containing that job (the waiting period) on the other. The problem can be stated as FFc / rj, Mj / irreg, which has many applications in the production and service industries. A mathematical model is proposed, the problem is proved to be NP-complete, and an effective heuristic method is presented to schedule the jobs efficiently. This algorithm can then be used within the body of any metaheuristic algorithm for solving the problem.
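The abstract does not describe its heuristic, so the following only illustrates the eligibility constraint (Mj) with release times (rj) in a single stage: each job is assigned, in release order, to the eligible machine that finishes it earliest. The job data is invented and the rule is a generic list-scheduling sketch, not the paper's method.

```python
# Invented jobs: (job id, release time, processing time, eligible machines).
jobs = [
    ("j1", 0, 4, {0, 1}),
    ("j2", 0, 3, {1}),
    ("j3", 2, 5, {0, 2}),
    ("j4", 3, 2, {2}),
]

def eligible_list_schedule(jobs, n_machines=3):
    """Assign each job (by release time) to the eligible machine that
    finishes it earliest; returns job -> (machine, start, completion)."""
    free_at = [0] * n_machines
    plan = {}
    for name, release, proc, eligible in sorted(jobs, key=lambda j: j[1]):
        m = min(eligible, key=lambda k: max(free_at[k], release))
        start = max(free_at[m], release)
        free_at[m] = start + proc
        plan[name] = (m, start, start + proc)
    return plan

for job, (m, s, c) in eligible_list_schedule(jobs).items():
    print(f"{job}: machine {m}, start {s}, done {c}")
```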

Hybridizing Genetic Algorithm with Biased Chance Local Search

This paper explores the university course timetabling problem. Several characteristics make scheduling and timetabling problems particularly difficult to solve: they have huge search spaces, they are often highly constrained, they require sophisticated solution representation schemes, and they usually require very time-consuming fitness evaluation routines. Thus standard evolutionary algorithms lack the efficiency to deal with them. In this paper we propose a memetic algorithm that incorporates problem-specific knowledge such that most of the chromosomes generated are decoded into feasible solutions. Generating a vast number of feasible chromosomes makes progress of the search process possible in a time-efficient manner. Experimental results exhibit the advantages of the developed hybrid genetic algorithm over the standard genetic algorithm.
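The generic shape of such a hybrid, sketched on a toy bit-string problem: a GA loop in which every offspring is improved by a local search step before joining the population. The plain improving-flip hill climb stands in for the paper's biased chance local search, and the one-max fitness stands in for timetable evaluation.

```python
import random

random.seed(0)
N = 40                                      # chromosome length (toy problem)
fitness = lambda c: sum(c)                  # stand-in for timetable evaluation

def local_search(c, tries=15):
    """Hill climb: keep a random bit flip only if fitness improves."""
    c = c[:]
    for _ in range(tries):
        i = random.randrange(N)
        before = fitness(c)
        c[i] ^= 1
        if fitness(c) < before:             # revert a worsening flip
            c[i] ^= 1
    return c

def memetic_ga(pop_size=30, generations=50, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]       # one-point crossover
            child = [g ^ (random.random() < p_mut) for g in child]
            children.append(local_search(child))   # the memetic step
        pop = parents + children
    return max(pop, key=fitness)

best = memetic_ga()
print(fitness(best), "of", N)
```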