H-ARQ Techniques for Wireless Systems with Punctured Non-Binary LDPC as FEC Code

This paper presents a comparison of H-ARQ techniques for OFDM systems using a new family of non-binary LDPC codes which has been developed within the EU FP7 DAVINCI project. The punctured NB-LDPC codes have been used in a simulated model of the transmission system. The link-level performance has been evaluated in terms of spectral efficiency, codeword error rate and average number of retransmissions. The NB-LDPC codes can be easily and effectively implemented with different retransmission methods, which are invoked when correct decoding of a codeword fails. Here the Optimal Symbol Selection method is proposed as a Chase Combining technique.
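
A minimal sketch of the Chase Combining idea behind such retransmission schemes, assuming BPSK over AWGN and plain LLR accumulation in place of the actual punctured NB-LDPC decoder (all parameter values are illustrative):

```python
import numpy as np

def bpsk_llr(received, noise_var):
    """LLR of a BPSK symbol (+1 -> bit 0, -1 -> bit 1) observed over AWGN."""
    return 2.0 * received / noise_var

def chase_combine(transmissions, noise_var):
    """Chase Combining: accumulate LLRs of identical (re)transmissions."""
    combined = np.zeros_like(transmissions[0])
    for rx in transmissions:
        combined += bpsk_llr(rx, noise_var)
    return combined

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 16)
symbols = 1.0 - 2.0 * bits                 # BPSK mapping: bit 0 -> +1, bit 1 -> -1
noise_var = 1.0

# Two transmissions of the same codeword observed in independent noise.
rx_list = [symbols + rng.normal(0.0, np.sqrt(noise_var), bits.size) for _ in range(2)]

llr = chase_combine(rx_list, noise_var)
decoded = (llr < 0).astype(int)            # hard decision; an NB-LDPC decoder would work on the LLRs
print("bit errors after combining:", np.count_nonzero(decoded != bits))
```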

Target Concept Selection by Property Overlap in Ontology Population

An ontology is widely used in many kinds of applications as a knowledge representation tool for domain knowledge. However, even though an ontology schema is well prepared by domain experts, it is tedious and cost-intensive to add instances into the ontology. The most confident and trustworthy way to add instances into the ontology is to gather them from tables in related Web pages. In automatic instance population, the primary task is to find the most proper concept among all possible concepts within the ontology for a given table. This paper proposes a novel method for this problem by defining the similarity between the table and the concept using the overlap of their properties. According to a series of experiments, the proposed method achieves an accuracy of 76.98%. This implies that the proposed method is a plausible way for automatic ontology population from Web tables.
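
A minimal sketch of the overlap idea, assuming table column headers and concept property names can be compared after simple normalization; the ontology fragment and the matching details are illustrative, not the paper's actual similarity measure:

```python
def normalize(name: str) -> str:
    """Crude normalization: lowercase and strip separators."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def property_overlap(table_headers, concept_properties):
    """Overlap-based similarity between a Web table and an ontology concept."""
    t = {normalize(h) for h in table_headers}
    c = {normalize(p) for p in concept_properties}
    if not t or not c:
        return 0.0
    return len(t & c) / len(t | c)      # Jaccard-style overlap of properties

def select_target_concept(table_headers, ontology):
    """Pick the concept whose properties overlap most with the table columns."""
    return max(ontology, key=lambda concept: property_overlap(table_headers, ontology[concept]))

# Hypothetical ontology fragment: concept name -> property names.
ontology = {
    "Person":  ["name", "birthDate", "nationality"],
    "Company": ["name", "foundedYear", "headquarters", "revenue"],
}
headers = ["Name", "Founded Year", "Revenue"]
print(select_target_concept(headers, ontology))   # -> Company
```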

A Hybrid Feature Selection by Resampling, Chi squared and Consistency Evaluation Techniques

In this paper a combined feature selection method is proposed which takes advantage of sample-domain filtering, resampling and feature subset evaluation methods to reduce the dimensionality of huge datasets and select reliable features. This method utilizes both the feature space and the sample domain to improve the process of feature selection, and uses a combination of Chi-squared and Consistency attribute evaluation methods to seek reliable features. The method consists of two phases. The first phase filters and resamples the sample domain, and the second phase adopts a hybrid procedure to find the optimal feature space by applying Chi-squared and Consistency subset evaluation methods together with genetic search. Experiments on various-sized datasets from the UCI Repository of Machine Learning databases show that the performance of five classifiers (Naïve Bayes, Logistic, Multilayer Perceptron, Best First Decision Tree and JRip) improves simultaneously and that the classification error for these classifiers decreases considerably. The experiments also show that this method outperforms other feature selection methods.
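
A minimal sketch of the two-phase idea using scikit-learn, with resampling of the sample domain followed by Chi-squared ranking; the Consistency evaluation and genetic search of the full method are not reproduced, and the dataset and parameter values are illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.utils import resample
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Phase 1: resample the sample domain (here a simple stratified bootstrap).
X_res, y_res = resample(X, y, n_samples=len(y), random_state=0, stratify=y)

# Phase 2: Chi-squared ranking of features (chi2 requires non-negative features).
selector = SelectKBest(chi2, k=2).fit(X_res, y_res)
X_sel = selector.transform(X_res)

# Compare one of the classifiers before and after selection.
before = cross_val_score(GaussianNB(), X_res, y_res, cv=5).mean()
after = cross_val_score(GaussianNB(), X_sel, y_res, cv=5).mean()
print(f"accuracy with all features: {before:.3f}, with selected features: {after:.3f}")
print("selected feature indices:", selector.get_support(indices=True))
```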

Application of Genetic Algorithms to Feature Subset Selection in a Farsi OCR

Dealing with hundreds of features in character recognition systems is not unusual. This large number of features increases the computational workload of the recognition process. There have been many methods that try to remove unnecessary or redundant features and reduce feature dimensionality. Moreover, because of the characteristics of Farsi script, it is not possible to apply algorithms developed for other languages directly to Farsi. In this paper some methods for feature subset selection using genetic algorithms are applied to a Farsi optical character recognition (OCR) system. Experimental results show that the application of genetic algorithms (GA) to feature subset selection in a Farsi OCR results in lower computational complexity and an enhanced recognition rate.
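
A minimal sketch of GA-based feature subset selection with a binary chromosome per feature, using a k-NN classifier as the fitness evaluator; the actual Farsi OCR features, classifier and GA settings are not reproduced here:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    """Cross-validated accuracy of a k-NN classifier on the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(3), X[:, mask], y, cv=3).mean()

# Each chromosome is a binary mask over the features (True = keep the feature).
pop = rng.random((12, n_features)) < 0.5
for _ in range(5):                                     # a few generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]             # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        cut = rng.integers(1, n_features)              # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_features) < 0.02           # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("kept", int(best.sum()), "of", n_features, "features,",
      f"accuracy = {fitness(best):.3f}")
```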

Towards a Systematic, Cost-Effective Approach for ERP Selection

Experience indicates that one of the most prominent reasons some ERP implementations fail is the selection of an improper ERP package. One important factor leading to inappropriate ERP selections is ignoring the preliminary activities that should be carried out before the evaluation of ERP packages. Another factor is that organizations usually employ prolonged and costly selection processes, to the extent that the process is sometimes never finalized, or the evaluation team performs many key final activities in an incomplete or inaccurate way due to exhaustion, lack of interest or out-of-date data. In this paper, a systematic approach that recommends activities to be done before and after the main selection phase is introduced for choosing an ERP package. The proposed approach also utilizes ideas that accelerate the selection process while reducing the probability of an erroneous final selection.

The Maximum Likelihood Method of Random Coefficient Dynamic Regression Model

The Random Coefficient Dynamic Regression (RCDR) model is developed from the Random Coefficient Autoregressive (RCA) model and the Autoregressive (AR) model. The RCDR model is obtained by adding exogenous variables to the RCA model. In this paper, the Maximum Likelihood (ML) method is used to estimate the parameters of the RCDR(1,1) model. Simulation results, using the AIC and BIC criteria, compare the performance of the RCDR(1,1) model. The estimates are good when the data are stationary or weakly stationary and the exogenous variables are weakly stationary. However, model selection indicates nonstationary data when it is based on stationary exogenous variables.
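
A sketch of the model and conditional likelihood, assuming the RCDR(1,1) takes the standard RCA(1) form extended by a single exogenous regressor; the exact specification used in the paper may differ:

```latex
% Assumed RCDR(1,1) specification: RCA(1) plus one exogenous variable X_t
Y_t = (\phi + B_t)\, Y_{t-1} + \beta X_t + \varepsilon_t,
\qquad B_t \sim N(0, \sigma_B^2), \quad \varepsilon_t \sim N(0, \sigma_\varepsilon^2)

% Conditional distribution given the past
Y_t \mid \mathcal{F}_{t-1} \sim
  N\!\left(\phi Y_{t-1} + \beta X_t,\; \sigma_B^2 Y_{t-1}^2 + \sigma_\varepsilon^2\right)

% Conditional log-likelihood maximized by the ML estimates
\ell(\phi, \beta, \sigma_B^2, \sigma_\varepsilon^2)
  = -\frac{1}{2} \sum_{t=2}^{T}
    \left[ \log\!\big( 2\pi (\sigma_B^2 Y_{t-1}^2 + \sigma_\varepsilon^2) \big)
         + \frac{\big(Y_t - \phi Y_{t-1} - \beta X_t\big)^2}
                {\sigma_B^2 Y_{t-1}^2 + \sigma_\varepsilon^2} \right]
```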

On the Move to Semantic Web Services

Semantic Web services will enable the semiautomatic and automatic annotation, advertisement, discovery, selection, composition, and execution of inter-organization business logic, turning the Internet into a common global platform where organizations and individuals communicate with each other to carry out various commercial activities and to provide value-added services. There is a growing consensus that Web services alone will not be sufficient to develop valuable solutions due to the degree of heterogeneity, autonomy, and distribution of the Web. This paper deals with two of the hottest R&D and technology areas currently associated with the Web – Web services and the Semantic Web. It presents the synergies that can be created between Web services and Semantic Web technologies to provide a new generation of e-services.

Selection of Best Band Combination for Soil Salinity Studies using ETM+ Satellite Images (A Case Study: Nyshaboor Region, Iran)

One of the main environmental problems affecting extensive areas of the world is soil salinity. Traditional data collection methods are neither sufficient for addressing this important environmental problem nor accurate enough for soil studies. Remote sensing data can overcome most of these problems. Although satellite images are commonly used for such studies, there is still a need to find the best calibration between the data and the real situation in each specific area. The Neyshaboor area, in the north-east of Iran, was selected as the field study for this research. Landsat satellite images of this area were used in order to prepare suitable learning samples for processing and classifying the images. 300 locations were selected randomly in the area to collect soil samples, and 273 locations were finally retained for further laboratory work and image processing analysis. The electrical conductivity of all samples was measured. Six reflective bands of ETM+ satellite images taken of the study area in 2002 were used for soil salinity classification. The classification was carried out using common algorithms based on the best band compositions. The results showed that the reflective bands 7, 3, 4 and 1 are the best band composition for preparing color composite images. We also found that hybrid classification is a suitable method for identifying and delineating different salinity classes in the area.

Optimization of GAMM Francis Turbine Runner

Nowadays, the challenge in hydraulic turbine design is the multi-objective design of the turbine runner to reach higher efficiency. The hydraulic performance of a turbine strictly depends on the shape of the runner blades. The present paper focuses on the application of a multi-objective optimization algorithm to the design of a small Francis turbine runner. The optimization exercise focuses on efficiency improvement at the best efficiency operating point (BEP) of the GAMM Francis turbine. A global optimization method based on artificial neural networks (ANN) and genetic algorithms (GA), coupled with a 3D Navier-Stokes flow solver, has been used to improve the performance of an initial Francis runner geometry. The results show the good capability of the optimization algorithm, and the final geometry has better efficiency than the initial geometry. The goal was to optimize the geometry of the blades of the GAMM turbine runner for maximum total efficiency by changing the design parameters of the camber line in at least 5 sections of a blade. The efficiency of the optimized geometry is improved from 90.7% to 92.5%. Finally, the design parameters and the way they were selected are considered and discussed.

Reliability-based Selection of Wind Turbines for Large-Scale Wind Farms

This paper presents a reliability-based approach to select appropriate wind turbine types for a wind farm considering site-specific wind speed patterns. An actual wind farm in the northern region of Iran, with one year of registered wind speed data, is studied in this paper. An analytic approach based on the total probability theorem is utilized to model the probabilistic behavior of both turbine availability and wind speed. Well-known probabilistic reliability indices such as loss of load expectation (LOLE), expected energy not supplied (EENS) and incremental peak load carrying capability (IPLCC) for wind power integration in the Roy Billinton Test System (RBTS) are examined. The turbine type achieving the highest reliability level is chosen as the most appropriate for the studied wind farm.
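
A minimal sketch of the analytic idea for LOLE and EENS, assuming identical turbines with independent forced outages, a discretized wind-speed distribution mapped through a power curve, and a simple hourly load series; the RBTS load model, the IPLCC index and the actual turbine data are not reproduced, and all numbers are illustrative:

```python
import numpy as np
from math import comb

# Illustrative inputs.
n_turbines, p_unavail, rated_kw = 20, 0.04, 2000.0        # forced outage rate per turbine
wind_speed_bins = np.array([3.0, 6.0, 9.0, 12.0, 15.0])   # m/s bin centres
wind_prob = np.array([0.30, 0.25, 0.20, 0.15, 0.10])      # site-specific probabilities

def power_curve(v, cut_in=3.5, rated=12.0, cut_out=25.0):
    """Per-turbine output for wind speed v (kW), simple piecewise model."""
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated:
        return rated_kw
    return rated_kw * (v - cut_in) / (rated - cut_in)

# Total-probability combination of turbine availability and wind speed:
# P(farm output = k * P(v)) = Binomial(k; n, 1 - q) * P(v).
farm_output, farm_prob = [], []
for v, pv in zip(wind_speed_bins, wind_prob):
    per_turbine = power_curve(v)
    for k in range(n_turbines + 1):
        pk = comb(n_turbines, k) * (1 - p_unavail) ** k * p_unavail ** (n_turbines - k)
        farm_output.append(k * per_turbine)
        farm_prob.append(pk * pv)
farm_output, farm_prob = np.array(farm_output), np.array(farm_prob)

# Reliability indices against an hourly load, with deterministic conventional capacity.
conventional_kw = 160000.0
load_kw = 150000.0 + 30000.0 * np.random.default_rng(1).random(8760)    # illustrative load
margin = conventional_kw + farm_output[:, None] - load_kw[None, :]      # states x hours
lole = np.sum(farm_prob[:, None] * (margin < 0))                        # hours/year
eens = np.sum(farm_prob[:, None] * np.where(margin < 0, -margin, 0.0))  # kWh/year
print(f"LOLE = {lole:.2f} h/yr, EENS = {eens:.0f} kWh/yr")
```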

Validation and Selection between Machine Learning Technique and Traditional Methods to Reduce Bullwhip Effects: a Data Mining Approach

The aim of this paper is to present a three-step methodology for forecasting supply chain demand. In the first step, various data mining techniques are applied in order to prepare the data for entering the forecasting models. In the second step, the modeling step, an artificial neural network and a support vector machine are presented after defining the Mean Absolute Percentage Error (MAPE) index for measuring error. The structure of the artificial neural network is selected based on previous researchers' results, and in this article the accuracy of the network is increased by using sensitivity analysis. The best forecast of the classical forecasting methods (Moving Average, Exponential Smoothing, and Exponential Smoothing with Trend) is obtained from the prepared data, and this forecast is compared with the results of the support vector machine and the proposed artificial neural network. The results show that the artificial neural network forecasts more precisely than the other methods. Finally, the stability of the forecasting methods is analyzed using raw data, and the effectiveness of the clustering analysis is measured.
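
A minimal sketch of the MAPE index and two of the classical benchmarks (moving average and simple exponential smoothing) on an illustrative demand series; the ANN and SVM models of the paper are not reproduced:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error (%)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def moving_average_forecast(series, window=3):
    """One-step-ahead forecast: mean of the last `window` observations."""
    return [np.mean(series[t - window:t]) for t in range(window, len(series))]

def exp_smoothing_forecast(series, alpha=0.3):
    """One-step-ahead simple exponential smoothing."""
    level, out = series[0], []
    for x in series[1:]:
        out.append(level)                    # forecast issued before seeing x
        level = alpha * x + (1 - alpha) * level
    return out

rng = np.random.default_rng(2)
demand = 100 + 10 * np.sin(np.arange(48) / 4) + rng.normal(0, 3, 48)    # illustrative demand

ma = moving_average_forecast(demand, window=3)
es = exp_smoothing_forecast(demand)
print(f"MAPE moving average: {mape(demand[3:], ma):.2f}%")
print(f"MAPE exponential smoothing: {mape(demand[1:], es):.2f}%")
```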

A Unity Gain Fully-Differential 10bit and 40MSps Sample-And-Hold Amplifier in 0.18um CMOS

A 10-bit, 40 MSps sample-and-hold circuit, implemented in 0.18-μm CMOS technology with a 3.3 V supply, is presented for application in the front-end stage of an analog-to-digital converter. Topology selection, biasing, compensation and common-mode feedback are discussed. The cascode technique has been used to increase the dc gain. The proposed opamp provides 149 MHz unity-gain bandwidth, an 80-degree phase margin and a differential peak-to-peak output swing of more than 2.5 V. The circuit achieves 55 dB Total Harmonic Distortion (THD), using an improved fully differential two-stage operational amplifier with 91.7 dB gain. The power dissipation of the designed sample-and-hold is 4.7 mW. The designed system demonstrates a suitable response in different process, temperature and supply corners (PVT corners).

Influence of Hydrocarbons on Plant Cell Ultrastructure and Main Metabolic Enzymes

The influence of octane and benzene on plant cell ultrastructure and on enzymes of basic metabolism, such as nitrogen assimilation and energy generation, has been studied. Different plants were exposed to hydrocarbons at different concentrations (1, 10 and 100 mM): perennial ryegrass (Lolium perenne) and alfalfa (Medicago sativa); crops – maize (Zea mays L.) and bean (Phaseolus vulgaris); shrubs – privet (Ligustrum sempervirens) and trifoliate orange (Poncirus trifoliata); trees – poplar (Populus deltoides) and white mulberry (Morus alba L.). Destructive changes in the ultrastructure of bean and maize leaf cells under the influence of benzene vapour were revealed at the level of the photosynthetic and energy-generating subcellular organelles. Different deviations in the structure and distribution of subcellular organelles were observed in alfalfa and ryegrass root cells under the influence of benzene and octane absorbed through the roots. The level of destructive changes is concentration dependent. Benzene at the low concentrations of 1 and 10 mM caused an increase in glutamate dehydrogenase (GDH) activity in maize roots and leaves and in poplar and mulberry shoots, though to a higher extent at the lower, 1 mM concentration. The induction was more intensive in plant roots. The highest tested concentration of benzene, 100 mM, was inhibitory to the enzyme in all plants. Octane caused induction of GDH in all grassy plants at all tested concentrations; however, the rate of induction decreased as the hydrocarbon concentration increased. Octane at a concentration of 1 mM caused induction of GDH in privet, trifoliate orange and white mulberry shoots. The highest octane concentration, 100 mM, had an inhibitory effect on GDH activity in all plants. Octane had an inductive effect on malate dehydrogenase in almost all plants and tested concentrations, indicating intensification of the tricarboxylic acid cycle. The data could be used to elaborate criteria for selecting plants for phytoremediation of soils contaminated with oil hydrocarbons.

The Defects Reduction in Injection Molding by Fuzzy Logic based Machine Selection System

The effective machine-job assignment of injection molding machines is very important for industry because it not only directly affects the quality of the product but also the performance and lifetime of the machine. Machine selection has mostly been done by professionals or experienced planners, so a job might be matched with an inappropriate machine when the selection is conducted by an inexperienced person. This could lead to an uneconomical plan and defects. This research aimed to develop a machine selection system for plastic injection machines as a tool to help in the decision making of the user. The proposed system can be used both in normal times and in times of emergency. The fuzzy logic principle is applied to deal with uncertainty and mechanical factors in the selection of both quantitative and qualitative criteria. Six criteria were obtained from a plastic manufacturer's case study to construct a system based on fuzzy logic theory using MATLAB. The results showed that the system was able to reduce Short Shot and Sink Mark defects to 24.0% and 8.0%, respectively, and that total defects were reduced by around 8.7% per month.
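
A minimal sketch of the fuzzy-inference idea in plain Python rather than MATLAB, scoring a machine against a job with two hypothetical criteria (clamping-force margin and shot-size margin); the six criteria and the rule base of the actual system are not reproduced:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def machine_suitability(clamp_margin, shot_margin):
    """Fuzzy rules with AND = min, OR = max, weighted-average (Sugeno-style) defuzzification."""
    clamp_ok = tri(clamp_margin, 0.0, 0.3, 0.8)      # membership of "adequate clamping margin"
    shot_ok = tri(shot_margin, 0.0, 0.3, 0.8)        # membership of "adequate shot-size margin"
    clamp_low = max(0.0, 1.0 - clamp_margin / 0.3)   # "too tight" clamping margin
    shot_low = max(0.0, 1.0 - shot_margin / 0.3)     # "too tight" shot-size margin

    # Rule 1: both margins adequate   -> high suitability (output level 1.0)
    # Rule 2: either margin too tight -> low suitability  (output level 0.2)
    r_high = min(clamp_ok, shot_ok)
    r_low = max(clamp_low, shot_low)
    if r_high + r_low == 0.0:
        return 0.0
    return (1.0 * r_high + 0.2 * r_low) / (r_high + r_low)

# Rank hypothetical machines for one job; a higher score means a better match.
machines = {"M1": (0.10, 0.50), "M2": (0.35, 0.30), "M3": (0.70, 0.05)}
ranking = sorted(machines, key=lambda m: machine_suitability(*machines[m]), reverse=True)
print(ranking)   # -> ['M2', 'M1', 'M3']
```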

Adaptive Path Planning for Mobile Robot Obstacle Avoidance

Generally speaking, a mobile robot is capable of sensing its surrounding environment, interpreting the sensed information to obtain knowledge of its location and the environment, and planning a real-time trajectory to reach its goal. In this process, obstacle avoidance is a fundamental challenge. Thus, in this study an adaptive path-planning control scheme that does not require detailed environmental information, a large memory size or a heavy computational burden is designed for the obstacle avoidance of a mobile robot. In this scheme, the robot can gradually approach its goal according to the motion tracking mode, obstacle avoidance mode, self-rotation mode, and robot state selection. The effectiveness of the proposed adaptive path-planning control scheme is verified by numerical simulations of a differential-driving mobile robot in the presence of obstacles of various shapes.

Kernel’s Parameter Selection for Support Vector Domain Description

Support Vector Domain Description (SVDD) is one of the best-known one-class support vector learning methods, in which one uses balls defined in the feature space to distinguish a set of normal data from all other possible abnormal objects. As with all kernel-based learning algorithms, its performance depends heavily on the proper choice of the kernel parameter. This paper proposes a new approach to select the kernel parameter based on maximizing the distance between the gravity centers of the normal and abnormal classes while minimizing the variance within each class. The performance of the proposed algorithm is evaluated on several benchmarks. The experimental results demonstrate the feasibility and effectiveness of the presented method.
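
A minimal sketch of the stated selection criterion for an RBF kernel: choose the candidate parameter maximizing the distance between the feature-space gravity centers of the two classes minus the within-class variances; the data and the grid of candidate values are illustrative:

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian RBF kernel matrix K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def criterion(X_norm, X_abn, sigma):
    """Distance between feature-space centers minus within-class variances."""
    Knn = rbf_kernel(X_norm, X_norm, sigma)
    Kaa = rbf_kernel(X_abn, X_abn, sigma)
    Kna = rbf_kernel(X_norm, X_abn, sigma)
    n, m = len(X_norm), len(X_abn)
    center_dist = Knn.mean() + Kaa.mean() - 2.0 * Kna.mean()   # ||m_normal - m_abnormal||^2
    var_n = np.trace(Knn) / n - Knn.mean()                     # within-class variance (normal)
    var_a = np.trace(Kaa) / m - Kaa.mean()                     # within-class variance (abnormal)
    return center_dist - (var_n + var_a)

rng = np.random.default_rng(3)
X_normal = rng.normal(0.0, 1.0, (60, 2))
X_abnormal = rng.normal(4.0, 1.5, (20, 2))

sigmas = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]
best = max(sigmas, key=lambda s: criterion(X_normal, X_abnormal, s))
print("selected RBF width sigma =", best)
```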

Linguistic, Pragmatic and Evolutionary Factors in Wason Selection Task

In two studies we tested the hypothesis that the appropriate linguistic formulation of a deontic rule – i.e. the formulation which clarifies the monadic nature of deontic operators – should produce more correct responses than the conditional formulation in the Wason selection task. We tested this assumption by presenting a prescription rule and a prohibition rule in conditional vs. proper deontic formulation. We contrasted this hypothesis with two other hypotheses derived from social contract theory and relevance theory. According to the first theory, a deontic rule expressed in terms of cost-benefit should elicit a cheater-detection module, sensitive to mental state attributions and thus able to discriminate intentional rule violations from accidental rule violations. We tested this prediction by distinguishing the two types of violations. According to relevance theory, performance in the selection task should improve by increasing cognitive effect and decreasing cognitive effort. We tested this prediction by focusing the experimental instructions on the rule vs. the action covered by the rule. In study 1, in which 480 undergraduates participated, we tested these predictions through a 2 x 2 x 2 x 2 (type of rule x rule formulation x type of violation x experimental instructions) between-subjects design. In study 2 – carried out by means of a 2 x 2 (rule formulation x type of violation) between-subjects design – we retested the rule-formulation hypothesis vs. the cheater-detection hypothesis through a new version of the selection task in which intentional vs. accidental rule violations were better discriminated. 240 undergraduates participated in this study. Results corroborate our hypothesis and challenge the contrasting assumptions. However, they show that the conditional formulation of deontic rules produces a lower performance than what is reported in the literature.

A Scenario Oriented Supplier Selection by Considering a Multi Tier Supplier Network

One of the main processes of supply chain management is the supplier selection process, whose accurate implementation can dramatically increase company competitiveness. In the presented article, a model is developed based on the features of second-tier suppliers, and four scenarios are predicted in order to help the decision maker (DM) make up his/her mind. In addition, two tiers of suppliers have been considered as a chain of suppliers. The proposed approach is then solved by a method combining concepts of fuzzy set theory (FST) and linear programming (LP), fed with real data extracted from an engineering design and parts supply company. In the end, the results reveal the high importance of considering the features of second-tier suppliers as criteria for selecting the best supplier.

Evaluation of Beauveria bassiana Spore Compatibility with Surfactants

Spores of the entomopathogenic fungus Beauveria bassiana were evaluated for their compatibility with four surfactants – SDS (sodium dodecyl sulphate), CABS-65 (calcium alkyl benzene sulphonate), Tween 20 (polyoxyethylene sorbitan monolaurate) and Tween 80 (polyoxyethylene sorbitan monooleate) – at six different concentrations (0.1%, 0.5%, 1%, 2.5%, 5% and 10%). Incubated spores showed a decrease in concentration due to conversion of spores to hyphae. The maximum germination recorded in spores incubated for 72 h varied with surfactant concentration at 49-68% (SDS), 39-53% (CABS), 78-92% (Tween 80) and 80-92% (Tween 20), while the optimal surfactant concentration for spore germination was found to be 2.5-5%. The surfactant effect on spores was more pronounced with SDS and CABS-65, where significant deterioration and loss in viability of the incubated spores were observed. The effects of Tween 20 and Tween 80 were comparatively less inhibitory. The results of the study would help in surfactant selection for B. bassiana emulsion preparation.

Application of Genetic Algorithms for Evolution of Quantum Equivalents of Boolean Circuits

Due to the non-intuitive nature of quantum algorithms, it is difficult for a classically trained person to construct new ones efficiently. So rather than designing new algorithms manually, genetic algorithms (GA) have lately been implemented for this purpose. GA is a technique to solve a problem automatically using principles of Darwinian evolution. It has been implemented here to explore the possibility of evolving an n-qubit circuit, when the circuit matrix has been provided, using a set of single-, two- and three-qubit gates. Using a variable-length population and a universal stochastic selection procedure, a number of possible solution circuits with different numbers of gates can be obtained for the same input matrix during different runs of the GA. The given algorithm has also been successfully used to obtain two- and three-qubit Boolean circuits using quantum gates. The results demonstrate the effectiveness of the GA procedure even when the search spaces are large.
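
A minimal sketch of the fitness evaluation at the core of such a GA: a candidate chromosome is an ordered gate list, its unitary is built from Kronecker-expanded gates, and fitness measures closeness to the target matrix; the gate set, fidelity measure and the evolutionary loop itself are simplified assumptions here:

```python
import numpy as np

# A small illustrative gate set on 2 qubits.
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def gate_unitary(gate, n_qubits=2):
    """Expand a (name, qubit) pair to a full 2^n x 2^n unitary."""
    name, q = gate
    if name == "CNOT":                       # fixed control=0, target=1 for this sketch
        return CNOT
    single = {"H": H, "X": X, "I": I}[name]
    ops = [single if i == q else I for i in range(n_qubits)]
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def circuit_unitary(chromosome, n_qubits=2):
    """Chromosome = ordered gate list; circuit unitary = product of gate unitaries."""
    U = np.eye(2 ** n_qubits, dtype=complex)
    for gate in chromosome:
        U = gate_unitary(gate, n_qubits) @ U
    return U

def fitness(chromosome, target):
    """Normalized |trace| fidelity between the candidate and target unitaries."""
    U = circuit_unitary(chromosome)
    return abs(np.trace(target.conj().T @ U)) / target.shape[0]

# Example: does [H on qubit 0, then CNOT] reproduce the Bell-state preparation unitary?
target = CNOT @ np.kron(H, I)
print(fitness([("H", 0), ("CNOT", 0)], target))   # -> 1.0 for a perfect match
```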