GEP Considering Purchase Prices, Profits of IPPs and Reliability Criteria Using Hybrid GA and PSO

In this paper, optimal generation expansion planning (GEP) is investigated considering purchase prices, profits of independent power producers (IPPs), and reliability criteria, using a new method based on a Hybrid Coded Genetic Algorithm (HCGA) and Particle Swarm Optimization (PSO). In this approach, the optimal purchase price of each IPP is obtained by the HCGA, and the reliability criteria are calculated by the PSO technique. It should be noted that the reliability criteria and the rate of carbon dioxide (CO2) emission are treated as constraints of the GEP problem. Finally, the proposed method is tested on a case study system. Evaluation of the results shows that the proposed method obtains the optimal purchase prices of the IPPs in a straightforward manner and calculates the reliability criteria quickly during expansion planning. Moreover, considering the optimal purchase prices and profits of the IPPs in generation expansion planning reduces the expansion costs and allows the problem to be solved more exactly.
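
As a structural illustration, the following is a minimal, runnable sketch of the hybrid scheme described above: an outer genetic algorithm searches over IPP purchase prices while an inner PSO routine supplies the reliability figure used to enforce the constraint. All objective functions, bounds, and limits here are hypothetical stand-ins; the abstract gives no formulas, so this shows only the nesting of the two algorithms.

```python
import random

N_IPPS, POP, GENS = 3, 20, 40
PRICE_LO, PRICE_HI = 20.0, 60.0   # hypothetical purchase price bounds ($/MWh)

def expansion_cost(prices):
    # Hypothetical surrogate: total cost rises with the price paid to each IPP.
    return sum(p * (1.0 + 0.1 * i) for i, p in enumerate(prices))

def reliability_index(prices, swarm=12, iters=25):
    """Stand-in for the PSO-based reliability calculation: PSO searches
    dispatch shares x in [0,1] minimizing a surrogate loss-of-load index."""
    dim = len(prices)
    xs = [[random.random() for _ in range(dim)] for _ in range(swarm)]
    vs = [[0.0] * dim for _ in range(swarm)]
    def lole(x):
        # Hypothetical: a higher purchase price procures more IPP capacity.
        supply = sum(xi * (pi - 10.0) for xi, pi in zip(x, prices))
        return max(0.0, 100.0 - supply)
    pb = [x[:] for x in xs]                      # personal bests
    gb = min(pb, key=lole)[:]                    # global best
    for _ in range(iters):
        for x, v, p in zip(xs, vs, pb):
            for d in range(dim):
                v[d] = (0.7 * v[d]
                        + 1.5 * random.random() * (p[d] - x[d])
                        + 1.5 * random.random() * (gb[d] - x[d]))
                x[d] = min(1.0, max(0.0, x[d] + v[d]))
            if lole(x) < lole(p):
                p[:] = x
        gb = min(pb, key=lole)[:]
    return lole(gb)

def fitness(prices):
    # Reliability (and, in the paper, CO2 emission) enter as constraints;
    # here a penalty enforces a hypothetical reliability limit of 5.
    penalty = 1e3 * max(0.0, reliability_index(prices) - 5.0)
    return expansion_cost(prices) + penalty

pop = [[random.uniform(PRICE_LO, PRICE_HI) for _ in range(N_IPPS)]
       for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    elite = pop[:POP // 2]                       # keep the better half
    children = []
    for _ in range(POP - len(elite)):
        a, b = random.sample(elite, 2)           # arithmetic crossover + noise
        children.append([min(PRICE_HI, max(PRICE_LO,
                         (x + y) / 2 + random.gauss(0, 1)))
                         for x, y in zip(a, b)])
    pop = elite + children
pop.sort(key=fitness)
print("best prices:", [round(p, 1) for p in pop[0]])
```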

Quantity and Quality Aware Artificial Bee Colony Algorithm for Clustering

The Artificial Bee Colony (ABC) algorithm is a relatively new swarm intelligence technique for clustering. It produces higher-quality clusters than other population-based algorithms, but it suffers from poor energy efficiency, inconsistent cluster quality, and typically slower convergence. Inspired by the energy-saving foraging behavior of natural honey bees, this paper presents a Quantity and Quality Aware Artificial Bee Colony (Q2ABC) algorithm that improves the cluster quality, energy efficiency, and convergence speed of the original ABC. To evaluate the performance of the Q2ABC algorithm, experiments were conducted on a suite of ten benchmark UCI datasets. The results demonstrate that Q2ABC outperformed the ABC and K-means algorithms in the quality of the clusters delivered.
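
For background, ABC-based clustering typically encodes the K cluster centroids in each food source and scores a source by the within-cluster distance sum, while employed bees perturb sources with the standard ABC neighborhood move shown below; the abstract does not detail the Q2ABC modifications, so only this baseline formulation is sketched:

\[
J(c_1,\dots,c_K) = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - c_k \rVert^2,
\qquad
v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj}), \quad \phi_{ij} \in [-1,1],
\]

where \(C_k\) is the set of points assigned to centroid \(c_k\) and \(x_k\) is a randomly chosen neighboring food source.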

Exploring the Combinatorics of Motif Alignments for Accurately Computing E-values from P-values

In biological and biomedical research, motif finding tools are important for locating regulatory elements in DNA sequences. Many such tools are available, and they often yield position weight matrices together with significance indicators. These indicators, p-values and E-values, describe the likelihood that a motif alignment is generated by the background process and the expected number of occurrences of the motif in the data set, respectively. The various tools often estimate these indicators differently, making them not directly comparable. One approach for comparing motifs from different tools is to compute the E-value as the product of the p-value and the number of possible alignments in the data set. In this paper, we explore the combinatorics of the motif alignment models OOPS, ZOOPS, and ANR, and propose a generic algorithm for computing the number of possible combinations accurately. We also show that using the wrong alignment model can give E-values that diverge significantly from their true values.
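
A brief sketch of the counting involved, for sequences of lengths L_i and motif width w: under OOPS each sequence contributes L_i - w + 1 placements, ZOOPS adds the zero-occurrence case, and for ANR one natural count (assuming occurrences may not overlap) follows the recurrence f(L) = f(L-1) + f(L-w). The paper's generic algorithm is not reproduced in the abstract, so the ANR treatment below is an illustrative assumption, as are the example lengths and p-value.

```python
def count_oops(lengths, w):
    """OOPS: exactly one occurrence per sequence."""
    n = 1
    for L in lengths:
        n *= L - w + 1
    return n

def count_zoops(lengths, w):
    """ZOOPS: zero or one occurrence per sequence."""
    n = 1
    for L in lengths:
        n *= L - w + 2            # the extra case is 'no occurrence'
    return n

def count_anr(lengths, w):
    """ANR: any number of non-overlapping occurrences per sequence,
    counted with the recurrence f(L) = f(L-1) + f(L-w)."""
    total = 1
    for L in lengths:
        f = [1] * max(w, L + 1)   # f(L) = 1 for L < w (only the empty placement)
        for i in range(w, L + 1):
            f[i] = f[i - 1] + f[i - w]
        total *= f[L]
    return total

lengths, w = [100, 120, 90], 8    # hypothetical data set and motif width
p_value = 1e-9                    # hypothetical motif p-value
print(p_value * count_zoops(lengths, w))   # E-value = p-value x #alignments
```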

Design Optimization of Cutting Parameters when Turning Inconel 718 with Cermet Inserts

Inconel 718, a nickel-based superalloy, is extensively used, accounting for about 50% by weight of the materials in an aerospace engine, mainly in the gas turbine compartment. This is owing to its outstanding strength and oxidation resistance at elevated temperatures in excess of 550 °C. Machining is a requisite operation in the aircraft industries for the manufacture of components, especially for gas turbines. This paper is concerned with optimization of the surface roughness when turning Inconel 718 with cermet inserts. Optimization of the turning operation is very useful to reduce the cost and time of machining. The approach is based on the Response Surface Method (RSM). In this work, second-order quadratic models are developed for surface roughness, considering the cutting speed, feed rate, and depth of cut as the cutting parameters, using a central composite design. The developed models are used to determine the optimum machining parameters. These optimized machining parameters are validated experimentally, and it is observed that the response values are in reasonable agreement with the predicted values.
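
The second-order response surface model referred to above has the standard form (with x1 = cutting speed, x2 = feed rate, x3 = depth of cut; the fitted coefficient values are not given in the abstract):

\[
R_a = \beta_0 + \sum_{i=1}^{3}\beta_i x_i + \sum_{i=1}^{3}\beta_{ii} x_i^2 + \sum_{i<j}\beta_{ij} x_i x_j + \varepsilon .
\]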

Inspection of Geometrical Integrity of Workpiece and Measurement of Tool Wear by the Use of Photo Digitizing Method

Considering the complexity of products, their new geometrical designs, and the tolerances that are necessary, measurement and dimensional control call for modern, more precise methods. The photo digitizing method, which uses two cameras to record images, combined with the conventional "point cloud" representation and data analysis in the ATOS software, is known as a modern and efficient approach in this context. In this paper, the benefits of the photo digitizing method for evaluating samples from machining processes are put forward; as examples, the assessment of the geometrical integrity of surfaces produced by 5-axis milling and the measurement of carbide tool wear in turning are presented. The advantages of this method compared to conventional methods are also discussed.

Convective Heat Transfer Enhancement in an Enclosure with Fin Utilizing Nanofluids

The objective of the present work is to conduct investigations leading to a more complete explanation of single-phase natural convective heat transfer in an enclosure with a fin utilizing nanofluids. The nanofluid used, composed of aluminum oxide nanoparticles suspended in ethylene glycol, is considered at various volume fractions. The study is carried out numerically for a range of Rayleigh numbers, fin heights, and aspect ratios. The flow and temperature distributions are taken to be two-dimensional. Regions with the same velocity and temperature distributions are identified as sections of symmetry. One half of such a rectangular region is chosen as the computational domain, taking into account the symmetry about the fin. The transport equations are modeled by a stream function-vorticity formulation and are solved numerically by finite-difference schemes. Comparisons with previously published work on special cases are made. Results are presented in the form of streamline, vector, and isotherm plots, as well as the variation of the local Nusselt number along the fin under different conditions.
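
For reference, one standard non-dimensional form of the steady stream function-vorticity formulation for natural convection reads as follows (scaled with the thermal diffusivity; the abstract does not give the authors' exact scaling):

\[
\nabla^2 \psi = -\omega, \qquad u = \frac{\partial \psi}{\partial y}, \quad v = -\frac{\partial \psi}{\partial x},
\]
\[
u \frac{\partial \omega}{\partial x} + v \frac{\partial \omega}{\partial y} = Pr\, \nabla^2 \omega + Ra\, Pr\, \frac{\partial \theta}{\partial x}, \qquad
u \frac{\partial \theta}{\partial x} + v \frac{\partial \theta}{\partial y} = \nabla^2 \theta ,
\]

where \(\psi\) is the stream function, \(\omega\) the vorticity, and \(\theta\) the dimensionless temperature.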

Removal of Ni(II), Zn(II) and Pb(II) ions from Single Metal Aqueous Solution using Activated Carbon Prepared from Rice Husk

The abundance and availability of rice husk, an agricultural waste, make it a good precursor for activated carbon. In this work, rice husk-based activated carbons were prepared via a base-treated chemical activation process prior to carbonization. The effect of the carbonization temperature (400, 600, and 800 °C) on the pore structure was evaluated through morphology analysis using a scanning electron microscope (SEM). The sample carbonized at 800 °C showed better evolution and development of pores compared to those carbonized at 400 and 600 °C. The potential of rice husk-based activated carbon as an alternative adsorbent was investigated for the removal of Ni(II), Zn(II), and Pb(II) from single-metal aqueous solutions. The adsorption studies using rice husk-based activated carbon as an adsorbent were carried out as a function of contact time at room temperature, and the metal ions were analyzed using an atomic absorption spectrophotometer (AAS). The ability to remove metal ions from single-metal aqueous solution was found to improve with increasing carbonization temperature. Among the three metal ions tested, Pb(II) gave the highest adsorption on the rice husk-based activated carbon. The results indicate the potential of rice husk as a promising precursor for the preparation of activated carbon for the removal of heavy metals.
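
The removal performance discussed above is conventionally quantified by the removal efficiency and the equilibrium adsorption capacity; the abstract does not state which metric was reported, so these standard definitions are given for reference:

\[
R(\%) = \frac{C_0 - C_e}{C_0} \times 100, \qquad q_e = \frac{(C_0 - C_e)\, V}{m},
\]

where \(C_0\) and \(C_e\) are the initial and equilibrium metal ion concentrations (mg/L), \(V\) is the solution volume (L), and \(m\) is the adsorbent mass (g).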

Adaptive RFID Positioning System Using Signal Level Matrix

In this paper, we present a method named Signal Level Matrix (SLM) that improves the accuracy and stability of active RFID indoor positioning systems. Considering accuracy and cost, we use a uniform distribution mode to set up and separate the overlapping signal coverage areas in order to obtain a preliminary location estimate. Then, based on the proposed SLM concept and the characteristic that signal strength attenuates as distance increases, the system cross-examines the distribution of adjacent signals to locate users more accurately. The experimental results indicate that the proposed adaptive positioning method effectively improves the accuracy and stability of the positioning system.
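
The attenuation characteristic exploited here is commonly described by the log-distance path loss model, given for background only (the abstract does not specify the propagation model used):

\[
RSSI(d) = RSSI(d_0) - 10\, n \log_{10}\!\left(\frac{d}{d_0}\right),
\]

where \(d_0\) is a reference distance and \(n\) the environment-dependent path-loss exponent.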

Utilizing Analytic Hierarchy Process to Analyze Consumers' Purchase Evaluation Factors of Smartphones

Due to the fast development of technology, competition among technological products is turbulent; it is therefore important to understand the market trend and consumers' demands and preferences. As smartphones have become prevalent, the main purpose of this paper is to utilize the Analytic Hierarchy Process (AHP) to analyze consumers' purchase evaluation factors for smartphones. Through an AHP expert questionnaire, the smartphones' main functions are classified into five aspects: "user interface", "mobile commerce functions", "hardware and software specifications", "entertainment functions", and "appearance and design", and the weights of these aspects are analyzed. Four evaluation criteria under each aspect are then weighted and ranked. The analysis of the data shows that the factors consumers consider when purchasing are, in order, "hardware and software specifications", "user interface", "appearance and design", "mobile commerce functions", and "entertainment functions". The "hardware and software specifications" aspect obtains a weight of 33.18%, making it the most important factor that consumers take into account. In addition, the most important evaluation criteria are, in order, central processing unit, operating system, touch screen, and battery function. The results of the study can serve as reference data for mobile phone manufacturers when designing and marketing future products to satisfy the voice of the customer.
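
As an illustration of the AHP weighting step, the following sketch derives priority weights from a pairwise comparison matrix using the geometric-mean method and checks Saaty's consistency ratio; the 3x3 judgments here are hypothetical, not the paper's questionnaire data.

```python
import numpy as np

# Hypothetical pairwise comparison matrix over three aspects
# (entry A[i][j] = how much more important aspect i is than aspect j).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority vector via the geometric-mean (row) method, a common AHP variant.
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()

# Consistency check with Saaty's consistency ratio (RI for n=3 is 0.58).
lam_max = (A @ weights / weights).mean()
ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)
cr = ci / 0.58
print(weights, cr)   # CR < 0.1 indicates acceptable consistency
```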

Mechanical Behavior of Deep-Drawn Cups with Aluminum/Duralumin Multi-Layered Clad Structures

This study investigates the mechanical behavior of deep-drawn cups consisting of aluminum (A1050)/duralumin (A2017) multi-layered clad structures with micro- and macro-scale functional gradients. Such multi-layered clad structures could be used in a new type of crash box for automobiles to effectively absorb the impact forces generated when automobiles collide. The effects of heat treatments on the microstructure, compositional gradient, and microhardness of 2- and 6-layered aluminum/duralumin clad structures, fabricated by hot rolling, have been investigated. The impact compressive behavior of deep-drawn cups consisting of such aluminum/duralumin clad structures has also been investigated in terms of energy absorption and maximum force. Deep-drawn cups consisting of 6-layered clad structures with micro- and macro-scale functional gradients exhibit superior properties in impact compressive tests.
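
The energy absorption referred to above is conventionally taken as the area under the force-displacement curve obtained in the compressive test; the abstract does not define its metric, so these standard definitions are given for reference:

\[
E_a = \int_0^{\delta} F(x)\, dx, \qquad \eta = \frac{E_a / \delta}{F_{\max}},
\]

where \(\delta\) is the crushing displacement and \(\eta\) the crush force efficiency, the ratio of mean to maximum force.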

Relationship between Transparency, Liquidity and Valuation

Recent evidence on the liquidity and valuation of securities in capital markets clearly shows the importance of stock market liquidity for the valuation of firms. In this paper, the relationship between transparency, liquidity, and valuation is studied using data obtained from 70 companies listed on the Tehran Stock Exchange during 2003-2012. In this study, discretionary earnings management is used as a sign of lack of transparency, and Tobin's Q is used as the criterion of valuation. The results indicate a significant inverse relationship between earnings management and liquidity; correspondingly, there is a relationship between liquidity and transparency. The results also indicate a significant relationship between transparency and valuation: transparency affects firm valuation either on its own or through the liquidity channel. Although the effect of transparency on the value of a firm is reduced when the liquidity variable is added, the cumulative effect of transparency and liquidity increases.
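
Tobin's Q, the valuation criterion mentioned above, is commonly proxied in empirical work as follows (the abstract does not state the exact specification used):

\[
Q = \frac{\text{market value of equity} + \text{book value of debt}}{\text{book value of total assets}} .
\]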

Geostatistical Analysis and Mapping of Ground-Level Ozone in a Medium-Sized Urban Area

Ground-level tropospheric ozone is one of the air pollutants of most concern. It is mainly produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower parts of the atmosphere. Ozone levels become particularly high in regions close to strong emissions of ozone precursors and during summer, when stagnant meteorological conditions with high insolation and high temperatures are common. In this work, some results of a study of urban ozone distribution patterns in the city of Badajoz, the largest and most industrialized city in the Extremadura region (southwest Spain), are shown. Fourteen sampling campaigns, at least one per month, were carried out to measure ambient air ozone concentrations with an automatic portable analyzer, during periods selected for conditions favourable to ozone production. The measured ozone data were then analyzed using geostatistical techniques to evaluate the ozone distribution in the city. First, the exploratory analysis revealed that the data were normally distributed, a desirable property for the subsequent stages of the geostatistical study. Secondly, during the structural analysis, theoretical spherical models provided the best fit for all monthly experimental variograms. The parameters of these variograms (sill, range, and nugget) revealed that the maximum distance of spatial dependence is between 302 and 790 m and that the air ozone concentration is not evenly distributed over short distances. Finally, predictive ozone maps were derived for all points of the experimental study area by means of geostatistical algorithms (kriging). High prediction accuracy was obtained in all cases, as cross-validation showed. Useful information for hazard assessment was also provided by probability maps based on the kriging interpolation and the kriging standard deviation.
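
The spherical variogram model referred to above has the standard form, with nugget \(c_0\), partial sill \(c\) (so that the sill is \(c_0 + c\)), and range \(a\):

\[
\gamma(h) =
\begin{cases}
0, & h = 0,\\[2pt]
c_0 + c\left(\dfrac{3h}{2a} - \dfrac{h^3}{2a^3}\right), & 0 < h \le a,\\[4pt]
c_0 + c, & h > a.
\end{cases}
\]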

Finite Element Analysis of Full Ceramic Crowns with and without Zirconia Framework

Simulation of occlusal function during laboratory materials testing is essential for predicting long-term performance before clinical usage. The aim of the study was to assess the influence of chamfer preparation depth on the failure risk of heat-pressed ceramic crowns with and without a zirconia framework by means of finite element analysis. 3D models of a maxillary central incisor, prepared for full ceramic crowns with different depths of the chamfer margin (between 0.8 and 1.2 mm) and 6-degree tapered walls, together with the overlying crowns, were generated using literature data (Fig. 1, 2). The crowns were designed with and without a zirconia framework with a thickness of 0.4 mm. For all preparations and crowns, the stresses in the pressed ceramic crown, zirconia framework, pressed ceramic veneer, and dentin were evaluated separately. The highest stresses were registered in the dentin. The depth of the preparations had no significant influence on the stress values in the teeth and pressed ceramics for the studied cases; it influenced only the zirconia framework. The zirconia framework decreases the stress values in the veneer.

Frequency-Domain Design of Fractional-Order FIR Differentiators

In this paper, a fractional-order FIR differentiator design method using the differential evolution (DE) algorithm is presented. In the proposed method, the FIR digital filter is designed to meet the frequency response of a desired fractional-order differentiator, and the design error is evaluated in the frequency domain. To verify the design performance, another design method, formulated in the time domain, is also provided. Simulation results reveal the efficiency of the proposed method.
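
For reference, the ideal fractional-order differentiator of order \(r\) has the frequency response below, and a typical frequency-domain design minimizes a (possibly weighted) squared error between it and the FIR response; the exact objective optimized by DE in the paper is not given in the abstract:

\[
D(\omega) = (j\omega)^r, \qquad
E = \int_{0}^{\pi} W(\omega) \left| \sum_{n=0}^{N} h(n)\, e^{-jn\omega} - D(\omega) \right|^2 d\omega ,
\]

where \(h(n)\) are the FIR filter coefficients and \(W(\omega)\) a weighting function.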

An Augmented Beam-Search-Based Algorithm for the Strip Packing Problem

In this paper, the use of beam search and look-ahead strategies for solving the strip packing problem (SPP) is investigated. Given a strip of fixed width W, unlimited length L, and a set of n circular pieces of known radii, the objective is to determine the minimum length of the initial strip that packs all the pieces. An augmented algorithm that combines beam search with a look-ahead strategy is proposed. The look-ahead is used to evaluate the nodes at each level of the search tree; the best nodes are then retained for branching. The computational investigation shows that the proposed augmented algorithm is able to improve the best-known solutions from the literature on most of the instances used.
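
The following is a runnable, simplified sketch of beam search with a greedy look-ahead for packing circles into a strip of width W. The placement rule (first fit on a coarse grid) and all numbers are illustrative assumptions; the paper's actual branching and look-ahead operators are not described in the abstract.

```python
def first_fit(placed, r, width, step=0.25):
    """Smallest-x grid position where a circle of radius r fits in the strip
    without overlapping any already-placed circle (px, py, pr)."""
    assert 2 * r <= width, "circle wider than the strip"
    x = r
    while True:
        y = r
        while y <= width - r + 1e-9:
            if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2 - 1e-9
                   for px, py, pr in placed):
                return x, y
            y += step
        x += step

def used_length(placed):
    return max((x + r for x, y, r in placed), default=0.0)

def lookahead(placed, remaining, width):
    """Estimate the final strip length by greedily completing the packing."""
    placed = list(placed)
    for r in remaining:
        placed.append((*first_fit(placed, r, width), r))
    return used_length(placed)

def beam_search(radii, width, beam_width=4):
    # A node is (placed circles, remaining radii sorted in decreasing order).
    beam = [((), tuple(sorted(radii, reverse=True)))]
    while beam[0][1]:                       # circles still unpacked
        children = []
        for placed, rem in beam:
            for i in range(len(rem)):
                if i and rem[i] == rem[i - 1]:
                    continue                # identical child, skip duplicates
                x, y = first_fit(placed, rem[i], width)
                children.append((placed + ((x, y, rem[i]),),
                                 rem[:i] + rem[i + 1:]))
        # Rank children by the look-ahead estimate; keep the best ones.
        children.sort(key=lambda c: lookahead(c[0], c[1], width))
        beam = children[:beam_width]
    return min(beam, key=lambda c: used_length(c[0]))

placed, _ = beam_search([1.0, 1.0, 0.75, 0.5], width=2.5)
print(used_length(placed), placed)
```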

Dynamic Admission Control for Quality of Service in IP Networks

The goal of admission control is to support the quality of service demands of real-time applications via resource reservation in IP networks. In this paper, we introduce a novel Dynamic Admission Control (DAC) mechanism for IP networks. The DAC dynamically allocates network resources using the previous network pattern for each path and uses a dynamic admission algorithm, together with bandwidth brokers, to improve bandwidth utilization. We evaluate the performance of the proposed mechanism through trace-driven simulation experiments in terms of blocking probability, throughput, and normalized utilization.
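
A minimal sketch of a dynamic admission decision, assuming the broker estimates per-path available bandwidth from recent utilization history (an exponentially weighted moving average here; the paper's exact estimator and admission rule are not given in the abstract):

```python
class BandwidthBroker:
    def __init__(self, capacity, alpha=0.3):
        self.capacity = capacity      # path capacity (e.g. Mb/s)
        self.reserved = 0.0           # currently admitted reservations
        self.est_usage = 0.0          # EWMA of measured usage
        self.alpha = alpha

    def observe(self, measured_usage):
        """Update the usage estimate from the latest measurement."""
        self.est_usage = (self.alpha * measured_usage
                          + (1 - self.alpha) * self.est_usage)

    def admit(self, request):
        """Admit a flow if it fits within the estimated headroom."""
        headroom = self.capacity - max(self.reserved, self.est_usage)
        if request <= headroom:
            self.reserved += request
            return True
        return False

bb = BandwidthBroker(capacity=100.0)
bb.observe(40.0)
print(bb.admit(30.0), bb.admit(80.0))   # True, False
```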

The Accuracy of the Flight Derivative Estimates Derived from Flight Data

The accuracy of the stability and control derivatives of a light aircraft estimated from flight test data was evaluated. The light aircraft, named ChangGong-91, is the first aircraft certified by the Korean government. The output error method, a maximum likelihood estimation technique that considers measurement noise only, was used to analyze the measured aircraft responses. Multi-step control inputs were applied in order to excite the short-period mode for the longitudinal motion and the Dutch-roll mode for the lateral-directional motion. The estimated stability and control derivatives of the ChangGong-91 were analyzed for the assessment of handling qualities by comparing them with those of similar aircraft. The accuracy of the flight derivative estimates derived from the flight test measurements was examined in terms of engineering judgment, scatter, and the Cramer-Rao bound, and turned out to be satisfactory with minor defects.
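
In its standard form for the output error method, the Cramer-Rao bound limits the achievable parameter accuracy by the inverse of the Fisher information matrix \(M\):

\[
\mathrm{Cov}(\hat{\Theta}) \ge M^{-1}, \qquad
M = \sum_{k=1}^{N} \left( \frac{\partial \mathbf{y}_k}{\partial \Theta} \right)^{\!T} R^{-1} \left( \frac{\partial \mathbf{y}_k}{\partial \Theta} \right),
\]

so that the standard deviation of the i-th estimate satisfies \(\sigma(\hat{\theta}_i) \ge \sqrt{[M^{-1}]_{ii}}\), where \(\mathbf{y}_k\) are the model outputs and \(R\) is the measurement noise covariance.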

Kosovo: A Unique Experiment in Europe in the International Context at the End of the Cold War?

The question of interethnic and interreligious conflicts in the former Yugoslavia receives much attention within the framework of the international context created after 1991 because of the impact of these conflicts on the security and stability of the Balkans and of Europe. This paper focuses on the rationales leading to the declaration of independence by Kosovo according to ethnic and religious criteria and analyzes why these same rationales were not applied in Bosnia and Herzegovina. The approach undertaken aims at comparatively examining the cases of Kosovo and of Bosnia and Herzegovina, and at understanding the political decision making of the international community in the case of Kosovo. Specifically, was this a good political decision for the security and stability of the Balkans, of Europe, or even for global security and stability? The research starts with an overview of the European security framework after 1991, paying particular attention to Kosovo and to Bosnia and Herzegovina. It then presents the theoretical and methodological framework and compares the representative cases. Using a constructivist approach and a comparative methodology, it arrives at the results of the study. An important thesis of the paper is that this event modifies the principles of international law and creates dangerous precedents for regional stability in the Balkans.

Manufacturing Dispersions Based Simulation and Synthesis of Design Tolerances

The objective of this work, which is based on the approach of simultaneous engineering, is to contribute to the development of a CIM tool for the synthesis of functional design dimensions expressed by average values and tolerance intervals. In this paper, the dispersions method known as the Δl method, which has proved reliable in the simulation of manufacturing dimensions, is used to develop a methodology for automating the simulation. This methodology is constructed around three procedures. The first procedure verifies the functional requirements by automatically extracting the functional dimension chains in the mechanical sub-assembly. A second procedure then optimizes the dispersions, which are treated as unknown variables. The third procedure uses the optimized values of the dispersions to compute the optimized average values and tolerances of the functional dimensions in the chains. A statistical and cost-based approach is integrated into the methodology in order to take account of the capabilities of the manufacturing processes and to distribute optimal values among the individual components of the chains.
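
The verification step rests on the usual worst-case dimension chain relation: the dispersions \(\Delta l_i\) of the links of a chain must together remain within the tolerance interval \(IT_{CF}\) of the functional condition they determine (the paper's exact notation is not reproduced in the abstract):

\[
\sum_{i \in \text{chain}} \Delta l_i \le IT_{CF},
\]

and the optimization distributes the \(\Delta l_i\) among the links, subject to lower bounds given by the process capabilities, so that this constraint is satisfied at minimum cost.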

An Agent-Based Simulation for Network Formation with Heterogeneous Agents

We investigate an asymmetric connections model with a dynamic network formation process, using an agent-based simulation. We permit heterogeneity in the agents' values: valuable persons seem to have many links in real social networks. We focus on this point of view and examine whether valuable agents change the structures of the terminal networks. The simulation reveals that valuable agents diversify the terminal networks. We cannot find evidence that valuable agents increase the possibility that star networks survive the dynamic process. We find that valuable agents disperse the degrees of agents in each terminal network on average.
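
For background, a common payoff specification in connections models with heterogeneous agent values is the following (the paper's exact asymmetric specification is not given in the abstract):

\[
u_i(g) = \sum_{j \ne i} \delta^{d_g(i,j)} w_j - c\, \eta_i(g), \qquad 0 < \delta < 1,
\]

where \(d_g(i,j)\) is the geodesic distance between agents i and j in network g, \(w_j\) is the value of agent j, \(\eta_i(g)\) is the number of links agent i maintains, and \(c\) is the per-link cost borne by the initiating agent.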