Estimation of the Production Function in Fishery on the Coasts of the Caspian Sea

This research was conducted for the first time on the southeastern coasts of the Caspian Sea in order to evaluate the performance of osteichthyes (bony fish) cooperatives through a production (catch) function. Using one of the indirect valuation methods, the factors contributing to the catch were identified and inserted into the function as independent variables. To carry out this research, the performance of 25 osteichthyes catching cooperatives involved in fishing in the Miankale wildlife refuge region was examined for the utilization year 2009. The factors contributing to the catch were divided into economic, ecological and biological groups. In the function, the catch rate of each cooperative was inserted as the dependent variable, and fourteen partial variables grouped into nine general variables served as independent variables. After estimating the function, seven variables were found significant at the 99 percent confidence level. The results of the function estimation indicated that human resources (fisherman quantity) had the greatest positive effect on the catch rate, with an influence coefficient of 1.7, while weather conditions had the greatest negative effect, with an influence coefficient of -2.07. Moreover, factors such as members' share, experience and fisherman training, and fishing effort played major roles in the catch rate of the cooperatives, with influence coefficients of 0.81, 0.5 and 0.21, respectively.
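As an illustration of how such influence coefficients can be estimated, the sketch below fits a log-linear (Cobb-Douglas style) catch function by OLS, in which the slope estimates are elasticities; the functional form, variable names and data are assumptions, not the paper's specification.

```python
# Minimal sketch of a log-linear (Cobb-Douglas style) catch function fit by OLS.
# Variable names and data are hypothetical, not the paper's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 25                                  # one observation per cooperative
fishermen = rng.uniform(20, 80, n)      # human resource (fisherman quantity)
effort = rng.uniform(100, 500, n)       # fishing effort
catch = fishermen**1.7 * effort**0.21 * rng.lognormal(0, 0.1, n)

X = sm.add_constant(np.column_stack([np.log(fishermen), np.log(effort)]))
model = sm.OLS(np.log(catch), X).fit()
print(model.params)    # slope estimates are the "influence coefficients" (elasticities)
print(model.pvalues)   # p < 0.01 corresponds to the 99 percent confidence level
```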

High Performance Computing Using Out-of-Core Sparse Direct Solvers

The in-core memory requirement is a bottleneck in solving large three-dimensional Navier-Stokes finite element formulations with sparse direct solvers. An out-of-core solution strategy is a viable alternative for reducing in-core memory requirements while solving such large-scale problems. This study evaluates the performance of several out-of-core sequential solvers based on multifrontal or supernodal techniques in the context of finite element formulations for three-dimensional problems on a Windows platform. Three solvers, HSL_MA78, MUMPS and PARDISO, are compared. Their performance is evaluated on a 64-bit machine with 16 GB RAM for a finite element formulation of flow through a rectangular channel. It is observed that relatively large problems can be solved with the out-of-core PARDISO solver. The implementation of Newton and modified Newton iterations is also discussed.
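The practical difference between the two iterations is whether the Jacobian is refactored at every step, and the factorization is exactly the expensive (and, out-of-core, I/O-heavy) part; a minimal sketch with a toy nonlinear system, not the paper's Navier-Stokes formulation, follows.

```python
# Newton's iteration refactors the Jacobian every step; modified Newton reuses
# one factorization. The residual here is a toy stand-in for a finite element system.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
b = np.ones(n)

def residual(u):
    return A @ u + 0.1 * u**3 - b              # toy nonlinear system F(u) = 0

def jacobian(u):
    return (A + sp.diags(0.3 * u**2)).tocsc()

u = np.zeros(n)
lu = splu(jacobian(u))                          # modified Newton: factor once
for it in range(50):
    r = residual(u)
    if np.linalg.norm(r) < 1e-10:
        break
    # Full Newton would refactor here: lu = splu(jacobian(u))
    u -= lu.solve(r)
print(it, np.linalg.norm(residual(u)))
```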

Study on the Effects of Ultrasonic Vibration on the Grinding Process of Alumina Ceramic (Al2O3)

Nowadays, engineering ceramics have significant applications in industries such as the automotive, aerospace, electrical, electronics and even military industries, owing to attractive physical and mechanical properties like very high hardness and strength at elevated temperatures, chemical stability, low friction and high wear resistance. However, these properties, together with low heat conductivity, make their machining difficult, costly and time consuming. Many attempts have been made to ease the grinding of engineering ceramics, and many scientists have tried to find techniques that make ceramic machining more economical. This paper proposes a new diamond plunge grinding technique using ultrasonic vibration for grinding alumina ceramic (Al2O3). For this purpose, a set of laboratory equipment was designed, simulated using the Finite Element Method (FEM) and constructed for use in various measurements. The results were compared with conventional plunge grinding without ultrasonic vibration and indicated that surface roughness and fracture strength improved while grinding forces decreased.

Data Traffic Dynamics and Saturation on a Single Link

The dynamics of User Datagram Protocol (UDP) traffic over Ethernet between two computers are analyzed using nonlinear dynamics, which shows two clear regimes in the data flow: free flow and saturated. The two most important variables affecting this transition are the packet size and the packet flow rate. The transition is due to a transcritical bifurcation rather than the phase transition found in models of vehicle traffic or theorized large-scale computer network congestion. It is hoped this model will help lay the groundwork for further research on the dynamics of networks, especially computer networks.
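The transcritical mechanism is easiest to see in its normal form; the sketch below (illustrative, not the paper's traffic model) checks how the two fixed points exchange stability at the critical parameter value.

```python
# Normal form of a transcritical bifurcation: dx/dt = r*x - x**2.
# Its fixed points x*=0 and x*=r exchange stability at r=0; mapping this onto
# the free-flow/saturated regimes above is illustrative only.
for r in (-0.5, 0.5):
    # linearization d/dx (r*x - x**2) = r - 2*x, evaluated at each fixed point
    print(f"r = {r:+.1f}: x*=0 is {'stable' if r < 0 else 'unstable'}, "
          f"x*=r is {'stable' if -r < 0 else 'unstable'}")
```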

Modeling and Analysis of Adaptive Buffer Sharing Scheme for Consecutive Packet Loss Reduction in Broadband Networks

High speed networks provide real-time variable bit rate service with diversified traffic flow characteristics and quality requirements. Variable bit rate traffic has stringent delay and packet loss requirements, and the burstiness of correlated traffic makes dynamic buffer management highly desirable for satisfying Quality of Service (QoS) requirements. This paper presents an algorithm for optimizing an adaptive buffer allocation scheme based on the loss of consecutive packets in the data stream and on the buffer occupancy level. The buffer is designed so that the input traffic is partitioned into different priority classes, and the threshold is controlled dynamically based on the input traffic behavior. The algorithm admits an input packet into the buffer if the occupancy level is below the threshold value for that packet's priority, and the threshold is varied at runtime based on the observed packet loss behavior; a sketch of this admission logic appears below. The simulation is run for two priority classes of input traffic, real-time and non-real-time. The simulation results show that Adaptive Partial Buffer Sharing (ADPBS) outperforms Static Partial Buffer Sharing (SPBS) and a First In First Out (FIFO) queue under the same traffic conditions.
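The following is a minimal sketch of threshold-based partial buffer sharing with two priority classes; the adaptation rule is a simplified stand-in for the paper's consecutive-loss-driven update, and all constants are hypothetical.

```python
# Threshold-based admission in a shared buffer with two priority classes.
# The threshold adaptation below is a simplified stand-in for ADPBS.
from collections import deque

BUFFER_SIZE = 100
threshold = {0: 100, 1: 60}    # class 0 = real-time (full access), class 1 = non-real-time
queue = deque()
consecutive_losses = 0

def arrive(priority_class):
    global consecutive_losses
    if len(queue) < threshold[priority_class]:
        queue.append(priority_class)
        consecutive_losses = 0
    else:
        consecutive_losses += 1
        # adapt: after a burst of consecutive drops, relax the low-priority threshold
        if consecutive_losses > 5:
            threshold[1] = min(BUFFER_SIZE, threshold[1] + 10)
```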

A Perceptually Optimized Foveation-Based Wavelet Embedded Zerotree Image Coder

In this paper, we propose a Perceptually Optimized Foveation based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to wavelet coefficients prior to SPIHT encoding in order to reach a targeted bit rate with improved perceptual quality around a given fixation point, which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates properties of the Human Visual System (HVS) and plays an important role in POEFIC quality assessment. Our coder is based on a vision model that incorporates various masking effects of HVS perception; it weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on (1) foveation masking, to remove or reduce high frequencies in peripheral regions, (2) luminance and contrast masking, and (3) the Contrast Sensitivity Function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. Experimental results show that our coder achieves very good performance in terms of quality measurement.
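To illustrate the weighting step, the sketch below scales each wavelet subband by a per-level weight before encoding; the weights are placeholders for the foveation, masking and CSF factors described above, and the pywt package is assumed to be available.

```python
# Per-subband perceptual weighting before zerotree/SPIHT coding.
# The weights are placeholders, not the paper's HVS-derived values.
import numpy as np
import pywt

image = np.random.rand(256, 256)                 # stand-in for an input image
coeffs = pywt.wavedec2(image, 'bior4.4', level=4)

weighted = [coeffs[0] * 1.0]                     # approximation band kept as-is
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    w = 1.0 / lvl                                # placeholder CSF-like weight per level
    weighted.append((cH * w, cV * w, cD * w))
# `weighted` would now be passed to the SPIHT/zerotree encoder.
```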

Performance and Emission Characteristics of a DI Diesel Engine Fuelled with Cashew Nut Shell Liquid (CNSL)-Diesel Blends

The increased number of automobiles in recent years has resulted in great demand for fossil fuel. This has led to the development of automobiles that use alternative fuels, including gaseous fuels, biofuels and vegetable oils. Energy from biomass, and more specifically biodiesel, is one of the opportunities that could cover future demand amid fossil fuel shortage. Biomass in the form of cashew nut shell represents a new and abundant energy source in India. The biofuel derived from cashew nut shell oil and its blends with diesel are promising alternative fuels for diesel engines. In this work, pyrolysis-derived Cashew Nut Shell Liquid (CNSL)-Diesel Blends (CDB) were used to run a Direct Injection (DI) diesel engine. Experiments were conducted with various blends of CNSL and diesel, namely B20, B40, B60, B80 and B100, and the results were compared with neat diesel operation. The brake thermal efficiency decreased for all blends except the lower blend B20, whose brake thermal efficiency is close to that of diesel fuel. The emission levels of all CNSL-diesel blends increased compared to neat diesel. The higher viscosity and lower volatility of CNSL lead to poor mixture formation and hence lower brake thermal efficiency and higher emission levels. The higher emission levels can be reduced by adding suitable additives and oxygenates to the CNSL-diesel blends.

Technique for Grounding System Design in Distribution Substation

This paper presents the significant factors and suggestions that a designer should know before designing a grounding system, with the main objective of guiding the first steps of someone who intends to design such a system before studying the details. A grounding system protects against damage from faults; it can save human lives and power system equipment. Unsafe conditions fall into three cases. Case 1: the maximum touch voltage exceeds the safety criteria. Here the conductor compression ratio of the ground grid should first be adjusted to obtain optimal spacing of the ground grid conductors; if the voltage is still over the limit, the earth resistivity should be considered next. Case 2: the maximum step voltage exceeds the safety criteria. Here, increasing the number of ground grid conductors around the boundary can solve the problem. Case 3: both the maximum touch and step voltages exceed the safety criteria. Here the solutions of cases 1 and 2 should be combined. As a further suggestion, the burial depth of the ground grid can be varied until the maximum step and touch voltages no longer exceed the safety criteria.
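For orientation, touch and step voltage safety criteria of this kind are commonly taken from IEEE Std 80; the sketch below computes the 50 kg body weight limits, with example values that are assumptions, not figures from the paper.

```python
# Tolerable touch and step voltage limits as commonly computed per IEEE Std 80
# (50 kg body weight); shown for orientation only, with made-up example values.
import math

def touch_limit_50kg(rho_s, Cs, t_s):
    """rho_s: surface layer resistivity (ohm-m); Cs: derating factor; t_s: fault duration (s)."""
    return (1000 + 1.5 * Cs * rho_s) * 0.116 / math.sqrt(t_s)

def step_limit_50kg(rho_s, Cs, t_s):
    return (1000 + 6.0 * Cs * rho_s) * 0.116 / math.sqrt(t_s)

print(touch_limit_50kg(rho_s=3000, Cs=0.7, t_s=0.5))   # about 681 V for these values
print(step_limit_50kg(rho_s=3000, Cs=0.7, t_s=0.5))
```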

Adaptive MPC Using a Recursive Learning Technique

A model predictive controller based on recursive learning is proposed. In this SISO adaptive controller, a model is automatically updated using simple recursive equations. The identified models are then stored in memory to be re-used in the future. The decision to update the model is taken based on a new control performance index. The new controller allows simple linear model predictive controllers to be used in the control of nonlinear time-varying processes.
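The abstract does not spell out the recursive equations; recursive least squares (RLS) with a forgetting factor is a typical choice for such an online update and is sketched below as an assumption, not the paper's method.

```python
# Recursive least squares (RLS) parameter update with a forgetting factor,
# a typical form of "simple recursive equations" for online model adaptation.
import numpy as np

class RLS:
    def __init__(self, n_params, forgetting=0.98):
        self.theta = np.zeros(n_params)          # model parameters
        self.P = np.eye(n_params) * 1e3          # parameter covariance
        self.lam = forgetting

    def update(self, phi, y):
        """phi: regressor vector; y: measured output."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)        # gain vector
        self.theta += k * (y - phi @ self.theta)  # prediction-error correction
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```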

Shape Optimization of Permanent Magnet Motors Using the Reduced Basis Technique

In this paper, a tooth shape optimization method for cogging torque reduction in Permanent Magnet (PM) motors is developed using the Reduced Basis Technique (RBT) coupled with Finite Element Analysis (FEA) and Design of Experiments (DOE) methods. The primary objective of the method is to reduce the enormous number of design variables required to define the tooth shape. In RBT, the tooth shape is a weighted combination of several basis shapes, and the aim is to find the best combination using the weight of each basis shape as a design variable. A multi-level design process is developed to find suitable basis (trial) shapes at each level for use in the reduced basis technique. Each level is treated as a separate optimization problem until the required objective, minimum cogging torque, is achieved. The process starts with geometrically simple basis shapes defined by their shape coordinates. The Taguchi experimental design method is used to build the approximation model and to perform the optimization. The method is demonstrated on the tooth shape optimization of an 8-pole/12-slot PM motor.
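The core of the technique is easy to state in code: a candidate shape is a weighted combination of a few basis shapes, so the optimizer searches only over the weights. The basis coordinates below are hypothetical placeholders, not the paper's geometry.

```python
# Reduced basis idea: only the weights are design variables.
import numpy as np

basis_shapes = np.array([                 # each row: y-coordinates of one basis tooth profile
    [0.0, 1.0, 1.2, 1.0, 0.0],
    [0.0, 0.8, 1.5, 0.8, 0.0],
    [0.0, 1.1, 1.0, 1.1, 0.0],
])

def candidate_shape(weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so the profile stays in range
    return w @ basis_shapes               # the DOE/Taguchi search varies `weights`

print(candidate_shape([0.5, 0.3, 0.2]))
```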

A Hybrid Ontology Based Approach for Ranking Documents

The growing volume of information on the Internet creates an increasing need for new (semi-)automatic methods to retrieve documents and rank them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach combines conceptual, statistical and linguistic methods, preserving ranking precision without sacrificing speed. It exploits natural language processing techniques to extract phrases from documents and the query and to stem words. An ontology based conceptual method is then used to annotate documents and expand the query. For query expansion, the spread activation algorithm is improved so that the expansion can be done flexibly and along various aspects. The annotated documents and the expanded query are processed with statistical methods to compute the relevance degree. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparison to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm so that expansion is based on a weighted combination of different conceptual relationships, and (5) allowing variable document vector dimensions. A ranking system called ORank was developed to implement and test the proposed model. Test results are included at the end of the paper.
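A minimal sketch of weighted spreading activation for query expansion follows; the decay, threshold, relation weights and example graph are illustrative, not ORank's actual parameters or ontology.

```python
# Weighted spreading activation over an ontology graph for query expansion.
def spread_activation(graph, seeds, decay=0.5, threshold=0.1):
    """graph: {concept: [(neighbor, relation_weight), ...]}; seeds: {concept: activation}."""
    activation = dict(seeds)
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbor, w in graph.get(node, []):
            a = activation[node] * w * decay
            if a > threshold and a > activation.get(neighbor, 0.0):
                activation[neighbor] = a       # propagate damped activation
                frontier.append(neighbor)
    return activation   # expanded query = concepts with non-zero activation

graph = {"jaguar": [("cat", 0.9), ("car", 0.6)], "cat": [("feline", 0.9)]}
print(spread_activation(graph, {"jaguar": 1.0}))
```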

PIELG: A Protein Interaction Extraction System Using a Link Grammar Parser from Biomedical Abstracts

Due to the ever-growing number of publications about protein-protein interactions, information extraction from text is increasingly recognized as one of the crucial technologies in bioinformatics. This paper presents a Protein Interaction Extraction System using a Link Grammar parser from biomedical abstracts (PIELG). PIELG uses the linkages produced by the Link Grammar Parser to start a case-based analysis of the contents of various syntactic roles, as well as of their linguistically significant and meaningful combinations. The system uses phrasal-prepositional verb patterns to overcome problems with preposition combinations. The recall and precision are 74.4% and 62.65%, respectively. Experimental comparisons with two other state-of-the-art extraction systems indicate that PIELG achieves better performance. For further evaluation, the system is augmented with a graphical package (Cytoscape) for extracting protein interaction information from sequence databases. The results show that the performance is remarkably promising.

A Laboratory Assistance Module

We propose that Virtual Learning Environments (VLEs) should be designed by taking into account the characteristics, special needs and specific operating rules of the academic institutions in which they are employed. In this context, we describe a VLE module that extends support for the organization and delivery of course material by including administration activities related to the various stages of teaching. These include the coordination, collaboration and monitoring of the course material development process, as well as institution-specific course material delivery modes. Our specialized module, which enhances VLE capabilities by Helping Educators and Learners through a Laboratory Assistance System, is intended to assist the Greek tertiary technological sector, which includes the Technological Educational Institutes (T.E.I.).

A New Approach to Face Recognition Using Dual Dimension Reduction

In this paper, a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient, improving recognition results and outperforming the common DCT technique of face recognition. In pattern recognition, the discriminative information in an image increases with resolution only up to a point; consequently, face recognition results change with face image resolution and are optimal at a certain resolution level. In the proposed model, an image decimation algorithm is first applied to the face image to reduce its dimension to the resolution level that provides the best recognition results. The Discrete Cosine Transform (DCT), chosen for its computational speed and feature extraction potential, is then applied to the decimated image, and a subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A tradeoff among the decimation factor, the number of retained DCT coefficients and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. The new model has been tested on different databases, including the ORL, Yale and EME color databases.
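A minimal sketch of the dual reduction follows, assuming simple subsampling for the decimation step; the decimation factor and coefficient count are illustrative, not the paper's tuned values.

```python
# Dual dimension reduction: decimate the face image, then keep a low-to-mid
# frequency block of its 2-D DCT coefficients as the feature vector.
import numpy as np
from scipy.fft import dctn

def face_features(image, decimation=2, keep=16):
    small = image[::decimation, ::decimation]    # simple decimation (no anti-alias filter)
    coeffs = dctn(small, norm='ortho')
    return coeffs[:keep, :keep].ravel()          # low-to-mid frequency block

features = face_features(np.random.rand(112, 92))   # ORL images are 112x92 pixels
print(features.shape)                               # (256,)
```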

Neuro-Fuzzy Model and Regression Model: A Comparative Study of MRR in Electrical Discharge Machining of D2 Tool Steel

In the current research, a neuro-fuzzy model and a regression model were developed to predict the Material Removal Rate (MRR) in the Electrical Discharge Machining of AISI D2 tool steel with a copper electrode. Extensive experiments were conducted with various levels of discharge current, pulse duration and duty cycle. The experimental data were split into two sets, one for training and the other for validation. The training data were used to develop the models, and the test data, which had not been used to develop the models, were used to validate them. The models were then compared. It was found that the predicted and experimental results were in good agreement, with coefficients of correlation of 0.999 and 0.974 for the neuro-fuzzy and regression models, respectively.

Some Solid Transportation Models with Crisp and Rough Costs

In this paper, some practical solid transportation models are formulated that consider the per-trip capacity of each type of conveyance, with crisp and rough unit transportation costs. This is applicable to systems in which full vehicles, e.g. trucks or rail coaches, are booked for the transportation of products, so that the transportation cost is determined per full conveyance. The models with rough unit transportation costs are transformed into deterministic forms using rough chance constrained programming with the help of the trust measure. Numerical examples illustrate the proposed models both in the crisp environment and with rough unit transportation costs.
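For orientation, a crisp solid transportation model with full-vehicle (per-trip) costs might be written as follows, where x_ijk is the amount shipped from source i to destination j by conveyance type k, a_i are supplies, b_j demands, q_k the per-trip capacity and c_ijk the per-trip cost of conveyance k, and e_k its total capacity; the notation is ours, not the paper's.

```latex
% Crisp solid transportation model with per-trip (full vehicle) costs; notation illustrative.
\min \sum_{i}\sum_{j}\sum_{k} c_{ijk}\,\Big\lceil \frac{x_{ijk}}{q_k} \Big\rceil
\quad \text{s.t.} \quad
\sum_{j}\sum_{k} x_{ijk} \le a_i \;\; \forall i, \qquad
\sum_{i}\sum_{k} x_{ijk} \ge b_j \;\; \forall j, \qquad
\sum_{i}\sum_{j} x_{ijk} \le e_k \;\; \forall k, \qquad
x_{ijk} \ge 0.
```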

Thermal Modeling of Dry-Type Transformers and Estimation of Temperature Rise

Temperature rise in a transformer depends on a variety of parameters, such as the ambient temperature, output current and type of core. Even considering these parameters, temperature rise estimation remains a complicated procedure. In this paper, we present a new model based on a simple electrical equivalent circuit. The method avoids the complications associated with accurate estimation and is in very good agreement with practice.
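As a minimal sketch of such an equivalent circuit, a first-order thermal RC network is integrated below; the parameter values are illustrative assumptions, not the paper's model.

```python
# First-order thermal equivalent circuit: losses P drive a thermal resistance
# R_th and capacitance C_th, analogous to a current source feeding an RC circuit.
P, R_th, C_th, T_amb = 2000.0, 0.025, 8.0e4, 25.0   # W, K/W, J/K, deg C (illustrative)
dt, t_end = 10.0, 8 * 3600.0                         # time step and horizon in seconds

T = T_amb
for _ in range(int(t_end / dt)):
    # C_th * dT/dt = P - (T - T_amb) / R_th   (explicit Euler step)
    T += dt * (P - (T - T_amb) / R_th) / C_th

print(f"temperature rise after 8 h: {T - T_amb:.1f} K (steady state: {P * R_th:.1f} K)")
```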

Size Control of Nanoparticles Using a Microfluidic Device

We have developed a microfluidic device system for the continuous production of nanoparticles and clarified the relationship between the mixing performance of the reactors and the particle size. First, we evaluated the mixing performance of the reactors by carrying out the Villermaux–Dushman reaction and determined the experimental conditions for producing AgCl nanoparticles. Next, we produced AgCl nanoparticles and evaluated the mixing performance and the particle size. We found that as the mixing performance improves, the size of the produced particles decreases and the particle size distribution becomes sharper. We produced AgCl nanoparticles with a size of 86 nm using the microfluidic device with the best mixing performance among the three reactors tested in this study; the coefficient of variation (Cv) of the size distribution of the produced nanoparticles was 26.1%.
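For reference, the reported coefficient of variation is simply the standard deviation of the measured particle sizes divided by their mean; the sizes below are made up for illustration.

```python
# Coefficient of variation (Cv) of a particle size distribution.
import numpy as np

sizes_nm = np.array([80, 92, 61, 110, 85, 74, 98])   # hypothetical measured diameters
cv = sizes_nm.std(ddof=1) / sizes_nm.mean() * 100
print(f"Cv = {cv:.1f}%")
```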

A New Measure of Herding Behavior: Derivation and Implications

If price and quantity are the fundamental building blocks of any theory of market interactions, the importance of trading volume in understanding the behavior of financial markets is clear. However, while many economic models of financial markets have been developed to explain the behavior of prices (predictability, variability and information content), far less attention has been devoted to explaining the behavior of trading volume. In this article, we hope to expand the understanding of trading volume by developing a new measure of herding behavior based on the cross-sectional dispersion of volume betas. We apply our measure to the Toronto Stock Exchange using monthly data from January 2000 to December 2002. Our findings show that the herd phenomenon consists of three essential components: stationary herding, intentional herding and feedback herding.
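A sketch of the measure's ingredients follows: estimate a volume beta per stock by regressing its volume on market volume, then take the cross-sectional dispersion of those betas. The data are simulated and the construction is a plausible reading, not the paper's exact derivation.

```python
# Cross-sectional dispersion of volume betas, with simulated monthly data.
import numpy as np

rng = np.random.default_rng(1)
T, N = 36, 50                            # 36 monthly observations, 50 stocks
mkt_vol = rng.normal(0, 1, T)            # market trading volume (normalized)
stock_vol = 0.8 * mkt_vol[:, None] + rng.normal(0, 0.5, (T, N))

# OLS slope of each stock's volume on market volume = that stock's volume beta
betas = np.array([np.polyfit(mkt_vol, stock_vol[:, i], 1)[0] for i in range(N)])
herding = betas.std(ddof=1)              # low dispersion suggests herding toward the market
print(f"cross-sectional dispersion of volume betas: {herding:.3f}")
```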

Enhancing K-Means Algorithm with Initial Cluster Centers Derived from Data Partitioning along the Data Axis with the Highest Variance

In this paper, we propose an algorithm to compute initial cluster centers for K-means clustering. The data in a cell are partitioned using a cutting plane that divides the cell into two smaller cells. The plane is perpendicular to the data axis with the highest variance and is designed to reduce the sum of squared errors of the two cells as much as possible while keeping the two cells as far apart as possible. Cells are partitioned one at a time until the number of cells equals the predefined number of clusters, K; the centers of the K cells then become the initial cluster centers for K-means. The experimental results suggest that the proposed algorithm is effective and converges to better clustering results than the random initialization method. The research also indicates that the proposed algorithm greatly improves the likelihood of every cluster containing some data.
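A simplified sketch of the initialization follows; it splits the largest cell at the mean along its highest-variance axis, a stand-in for the paper's SSE-optimal cutting plane.

```python
# Variance-guided cell splitting to seed K-means (simplified: cut at the mean,
# not at the SSE-minimizing plane used in the paper).
import numpy as np

def initial_centers(data, k):
    cells = [data]
    while len(cells) < k:
        cells.sort(key=len)
        cell = cells.pop()                       # split the largest cell next
        axis = np.argmax(cell.var(axis=0))       # data axis with highest variance
        cut = cell[:, axis].mean()               # stand-in for the optimal cutting plane
        cells += [cell[cell[:, axis] <= cut], cell[cell[:, axis] > cut]]
    return np.array([c.mean(axis=0) for c in cells])

data = np.random.rand(500, 2)
print(initial_centers(data, k=4))                # feed these to K-means as initial centers
```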