Abstract: There is a great deal of interest in constructing Double Skin Facade (DSF) structures, which are regarded as a modern movement in the fields of energy conservation, renewable energy, and architectural design. This trend offers many promising alternatives frequently associated with sustainable building. In this paper, a building with a Double Skin Facade in the semiarid climate of Tehran, Iran, is studied in order to assess the DSF's performance during hot seasons. Mathematical formulations are used to calculate the solar heat gain of the external skin. Moreover, Computational Fluid Dynamics (CFD) simulations were performed on the case study building to enhance the effectiveness of the facade. The results reveal the difference in energy gained by the cavity and the room with and without blinds and louvers. Solutions are introduced to improve the performance of natural ventilation and thereby reduce the cooling loads in summer.
Abstract: In most popular implementations of parallel GAs,
the whole population is divided into a set of subpopulations; each
subpopulation executes a GA independently, and some individuals are
migrated at fixed intervals over a ring topology. In these studies,
the migrations usually occur synchronously among subpopulations.
As a result, CPUs are not used efficiently, and the communication
is not efficient either. A few studies have tried asynchronous
migration, but it is hard to implement and setting proper parameter
values is difficult.
The aim of our research is to develop a migration method that is
easy to implement, whose parameter values are easy to set, and which
reduces communication traffic. In this paper, we propose a traffic
reduction method for the asynchronous parallel distributed GA based on
migration of elites only. The method follows a server-client model. Every client
executes a GA on a subpopulation and sends its elite's information to the
server. The server manages the elite information of each client, and
the migrations occur according to the evolution of the subpopulation in
a client. This facilitates the reduction in communication traffic.
To evaluate the proposed model, we apply it to several function optimization
problems. We confirm that the proposed method performs
as well as current methods, generates less communication traffic, and
makes parameter setting much easier.
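A minimal, sequential sketch of the server-client elite-migration scheme described above may make it concrete. The names (`EliteServer`, the sphere fitness, population sizes) are illustrative assumptions, and the clients here run one after another rather than truly asynchronously; the key point shown is that only improved elites travel, which keeps traffic low.

```python
import random

def fitness(x):
    """Sphere function: minimise the sum of squares."""
    return sum(v * v for v in x)

class EliteServer:
    """Server side: stores only the best-known elite of every client."""
    def __init__(self):
        self.elites = {}      # client id -> (fitness, individual)
        self.messages = 0     # migration-traffic counter

    def report(self, cid, fit, ind):
        # a client sends its elite only when it has actually improved
        if cid not in self.elites or fit < self.elites[cid][0]:
            self.elites[cid] = (fit, ind)
            self.messages += 1

    def fetch(self, cid):
        # a client pulls a random foreign elite, not whole subpopulations
        others = [e for c, e in self.elites.items() if c != cid]
        return random.choice(others)[1] if others else None

def run_client(cid, server, pop, gens=30):
    for _ in range(gens):
        # simple elitist GA step: mutate everyone, keep the best half
        children = [[v + random.gauss(0, 0.1) for v in p] for p in pop]
        pop = sorted(pop + children, key=fitness)[:len(pop)]
        server.report(cid, fitness(pop[0]), pop[0])
        migrant = server.fetch(cid)
        if migrant is not None:
            pop[-1] = migrant     # migrate the foreign elite in
    return pop[0]

random.seed(0)
server = EliteServer()
pops = {c: [[random.uniform(-1, 1) for _ in range(3)] for _ in range(10)]
        for c in range(4)}
best = min((run_client(c, server, p) for c, p in pops.items()), key=fitness)
```

In a real deployment each `run_client` would live in its own process and `report`/`fetch` would be network calls; the message counter then directly measures the traffic the method is designed to reduce.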
Abstract: In recent decades, a number of robust fuzzy clustering algorithms have been proposed to partition data sets affected by noise and outliers. Robust fuzzy C-means (robust-FCM) is certainly one of the best known among these algorithms. In robust-FCM, noise is modeled as a separate cluster characterized by a prototype that has a constant distance δ from all data points. The distance δ determines the boundary of the noise cluster and is therefore a critical parameter of the algorithm. Although some approaches have been proposed to automatically determine the most suitable δ for a specific application, to date no efficient and fully satisfactory solution exists. The aim of this paper is to propose a novel method to compute the optimal δ based on the analysis of the distribution of the percentage of objects assigned to the noise cluster in repeated executions of robust-FCM with decreasing values of δ. The extremely encouraging results obtained on some data sets from the literature are shown and discussed.
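The core quantity the abstract relies on, the percentage of objects assigned to the noise cluster as δ decreases, can be sketched in a simplified way. This is not the paper's procedure: here the prototypes are fixed and a point counts as noise simply when its distance to the nearest prototype exceeds δ, whereas robust-FCM uses fuzzy memberships. The data and δ grid are illustrative assumptions.

```python
import math
import random

def noise_percentage(points, prototypes, delta):
    """Percentage of points farther than delta from every prototype."""
    noisy = sum(1 for p in points
                if min(math.dist(p, c) for c in prototypes) > delta)
    return 100.0 * noisy / len(points)

random.seed(1)
# two tight clusters plus a few scattered outliers
def cluster(cx, cy):
    return [(cx + random.gauss(0, 0.2), cy + random.gauss(0, 0.2))
            for _ in range(50)]

points = cluster(0, 0) + cluster(5, 5) + [
    (random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(5)]
prototypes = [(0.0, 0.0), (5.0, 5.0)]

# sweep decreasing delta values and record the noise percentage for each
curve = [(d, noise_percentage(points, prototypes, d))
         for d in [8, 4, 2, 1, 0.5, 0.25]]
```

As δ shrinks, the noise percentage can only grow; the idea in the abstract is to study the shape of this curve (over repeated executions) to locate the δ at which genuine outliers, but not cluster members, fall into the noise cluster.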
Abstract: The primary objective of this paper is to propose a new method for solving the assignment problem under uncertainty. In the classical assignment problem (AP), z_pq denotes the cost of assigning the qth job to the pth person, which is deterministic in nature. Under uncertainty, we instead assign a cost in the form of a composite relative degree F_pq in place of z_pq, and this replacement cost is in maximization form. A new mathematical formulation of the interval-valued intuitionistic fuzzy (IVIF) assignment problem is presented, in which the cost is an IVIF number (IVIFN) and the membership of elements in the set is explained by positive and negative evidence; the problem is then solved and validated by two proposed algorithms. To determine the composite relative degree of similarity of IVIF sets, the concepts of similarity measure and score function are used to validate the solution obtained by the composite relative similarity degree method. Further, a hypothetical numerical illustration is conducted to clarify the effectiveness and feasibility of the method developed in this study. Finally, conclusions and suggestions for future work are given.
Abstract: In large Internet backbones, service providers
typically have to explicitly manage the traffic flows in order to
optimize the use of network resources. This process is often referred
to as Traffic Engineering (TE). Common objectives of traffic
engineering include balancing the traffic distribution across the network
and avoiding congestion hot spots. Raj P H and SVK Raja designed
a Bayesian network approach to identify congestion hot spots in
MPLS. In this approach, a Conditional Probability Distribution (CPD)
is specified for every node in the network, and the congestion
hot spots are identified based on the CPD. The traffic can then
be distributed so that no link in the network is either over-utilized or
under-utilized. Although the Bayesian network approach has been
implemented in operational networks, it has a number of well-known
scaling issues.
This paper proposes a new approach, which we call the Pragati
(meaning Progress) Node Popularity (PNP) approach, to identify
congestion hot spots from the network topology alone. In the new
Pragati Node Popularity approach, IP routing runs natively over the
physical topology rather than depending on the CPD of each node as
in the Bayesian network. We first illustrate our approach with a simple
network and then present a formal analysis of the Pragati Node
Popularity approach. For any network treated by the Bayesian
approach, our PNP approach identifies exactly the same result
with minimal effort. We further extend the result to a more
general setting: any network topology, even one containing loops.
A theoretical insight of our result is that the optimal routing
is always shortest-path routing with respect to some consideration of
the hot spots in the network.
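The abstract does not spell out how node popularity is computed, so the following is only one plausible, illustrative reading: rank nodes by how many shortest paths transit through them (a betweenness-style count derived from the topology alone), and flag the most popular nodes as likely congestion hot spots. The toy topology and all names are our own assumptions, not the authors' algorithm.

```python
from collections import defaultdict, deque
from itertools import permutations

def shortest_path(graph, src, dst):
    """BFS shortest path in an unweighted graph given as an adjacency dict."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in graph[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None

def popularity(graph):
    """Count, for every node, the shortest paths that transit through it."""
    score = defaultdict(int)
    for s, t in permutations(graph, 2):
        path = shortest_path(graph, s, t) or []
        for node in path[1:-1]:      # interior nodes carry the transit load
            score[node] += 1
    return dict(score)

# toy topology: node "c" bridges the two sides, so it should emerge as the
# hot spot without any per-node probability distribution being specified
g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
     "d": ["c", "e"], "e": ["d"]}
scores = popularity(g)
hot = max(scores, key=scores.get)
```

Whatever PNP's exact rule is, the point the sketch captures is the one the abstract makes: hot-spot candidates can be ranked from shortest-path structure over the physical topology, with no CPD per node.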
Abstract: The effect of thermally induced stress on the modal
properties of highly elliptical core optical fibers is studied in this
work using a finite element method. The stress analysis is carried out
and anisotropic refractive index change is calculated using both the
conventional plane strain approximation and the generalized plane
strain approach. After considering the stress optical effect, the modal
analysis of the fiber is performed to obtain the solutions of
fundamental and higher order modes. The modal effective index,
modal birefringence, group effective index, group birefringence, and
dispersion of different modes of the fiber are presented. The
propagation properties are found to depend strongly on the chosen
approach to stress analysis.
Abstract: Gesture recognition is being studied intensively
for human-computer communication in interactive computing
environments, and many studies have proposed efficient recognition
algorithms using images captured by 2D cameras. However, these
methods have a limitation: the extracted features cannot fully
represent the object in the real world. Although many studies have
used 3D features instead of 2D features for more accurate gesture
recognition, problems such as the processing time needed to generate
3D objects remain unsolved in related research. We therefore propose
a method to extract 3D features combined with 3D object
reconstruction. This method uses a modified GPU-based visual hull
generation algorithm that disables unnecessary processes, such as
texture calculation, to generate three kinds of 3D projection maps as
the 3D feature: the nearest boundary, the farthest boundary, and the
thickness of the object projected onto the base plane. In the
experimental results, we present the results of the proposed method
on eight human postures (T shape, both hands up, right hand up, left
hand up, hands front, stand, sit, and bend) and compare the
computational time of the proposed method with that of previous
methods.
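The three projection maps named above can be sketched from a plain boolean voxel grid rather than a GPU visual hull: for each (x, y) column over the base plane we record the nearest occupied depth, the farthest occupied depth, and the thickness between them. The grid layout and sizes are illustrative assumptions.

```python
def projection_maps(voxels):
    """voxels[z][y][x] is True when the voxel lies inside the object.
    Returns three base-plane maps: nearest z, farthest z, and thickness."""
    depth, height, width = len(voxels), len(voxels[0]), len(voxels[0][0])
    near = [[None] * width for _ in range(height)]
    far = [[None] * width for _ in range(height)]
    thick = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # occupied depths along this column of the reconstruction
            zs = [z for z in range(depth) if voxels[z][y][x]]
            if zs:
                near[y][x], far[y][x] = zs[0], zs[-1]
                thick[y][x] = zs[-1] - zs[0] + 1
    return near, far, thick

# tiny 4x3x3 grid with a two-voxel-thick column at (x, y) = (1, 1)
grid = [[[False] * 3 for _ in range(3)] for _ in range(4)]
grid[1][1][1] = grid[2][1][1] = True
near, far, thick = projection_maps(grid)
```

On the GPU the same column scan is done in parallel per pixel; the sketch only shows what each of the three feature maps contains.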
Abstract: The present work concerns the synthesis and
characterization of composites with Al/Al alloy A 384.1 as the matrix,
whose main ingredient is an Al/Al-5% MgO alloy based metal
matrix composite. As a practical implication, it addresses a low-cost
processing route for the fabrication of Al alloy A 384.1, given the
operational difficulties of presently available manufacturing processes
based on liquid manipulation methods. As with all new developments,
a complete understanding of the influence of the processing variables
on the final quality of the product is required. The composite is
characterized comprehensively, in particular through specific heat
measurements of the material with the aid of thermographs. The
products are evaluated with respect to relative particle size and
mechanical behavior under tensile loading. Furthermore, the Taguchi
technique was employed in the experiments, and optimum results are
achieved owing to the effectiveness of this approach.
Abstract: This paper proposes an efficient and fast sequential
optimization approach, based on investment cost recovery, to the
optimal allocation of a thyristor controlled series compensator (TCSC)
in a competitive power market. The optimization technique has been
used with the objective of maximizing the social welfare and minimizing
the device installation cost through suitable location and rating of the TCSC in
the system. The effectiveness of proposed approach for location of
TCSC has been compared with some existing methods of TCSC
placement, in terms of its impact on social welfare, TCSC investment
recovery and optimal generation as well as load patterns. The results
have been obtained on modified IEEE 14-bus system.
Abstract: This paper presents the development of a MODAPTS-based cost estimating system to help designers estimate the manufacturing cost of assembled products using information drawn from workers in the field. Competition on manufacturing cost is intensifying because of developments in information and telecommunication technology as well as globalization; therefore, the accuracy of assembly cost estimation is becoming more important. DFA and MODAPTS are useful methods for measuring working hours, but these two methods are used merely as timetables. In this paper, we therefore suggest a process for measuring working hours with MODAPTS that incorporates accurate information from the working field. In addition, we present a method for estimating the assembly cost accurately from this real information. This research can be useful for designers, who can estimate the assembly cost more accurately, and also effective for companies that are concerned with reducing product cost.
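The basic MODAPTS roll-up behind such a system can be sketched as follows. MODAPTS assigns each elemental motion a MOD count, with 1 MOD = 0.129 s; summing the MODs of an operation sequence gives its time, and a labour rate converts time to cost. The small code table and the hourly rate below are illustrative assumptions, not the paper's data; a real MODAPTS table is far larger.

```python
MOD_SECONDS = 0.129          # duration of one MOD in MODAPTS

# illustrative subset of MODAPTS element codes and their MOD counts
MOD_VALUES = {"M1": 1, "M2": 2, "M3": 3, "M4": 4,   # move classes
              "G0": 0, "G1": 1, "G3": 3,            # get/grasp classes
              "P0": 0, "P2": 2, "P5": 5}            # put/place classes

def assembly_seconds(codes):
    """Time for a sequence of MODAPTS element codes, in seconds."""
    return sum(MOD_VALUES[c] for c in codes) * MOD_SECONDS

def assembly_cost(codes, hourly_rate):
    """Labour cost of the sequence at the given hourly rate."""
    return assembly_seconds(codes) / 3600.0 * hourly_rate

# hypothetical pick-and-place: move, grasp, move back, place
ops = ["M3", "G1", "M3", "P2"]
t = assembly_seconds(ops)    # 9 MOD
```

A cost-estimating system in the spirit of the paper would attach such code sequences to each assembly step in the design, so that the cost estimate tracks what workers actually do in the field rather than a generic timetable.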
Abstract: The design of a steam turbine is a very complex
engineering operation that can be simplified and improved thanks to
computer-aided multi-objective optimization. This process makes use
of existing optimization algorithms and losses correlations to identify
those geometries that deliver the best balance of performance (i.e.
Pareto-optimal points).
This paper deals with a one-dimensional multi-objective and
multi-point optimization of a single-stage steam turbine. Using a
genetic optimization algorithm and an algebraic one-dimensional
ideal gas-path model based on loss and deviation correlations, a code
capable of performing the optimization of a predefined steam turbine
stage was developed. More specifically, during this study the
parameters modified (i.e. decision variables) to identify the best
performing geometries were solidity and angles both for stator and
rotor cascades, while the objective functions to maximize were
total-to-static efficiency and specific work done.
Finally, an accurate analysis of the obtained results was carried
out.
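The Pareto-optimal selection step mentioned above can be sketched in isolation from the turbine model: among candidate geometries, each scored on the two objectives named in the abstract (total-to-static efficiency and specific work, both to be maximized), keep only the non-dominated points. The objective values below are random placeholders standing in for the loss-correlation model.

```python
import random

def dominates(a, b):
    """True when point a is at least as good as b in both objectives and
    differs from b (maximisation in both coordinates)."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

def pareto_front(points):
    """Keep the points not dominated by any other candidate."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

random.seed(3)
# placeholder (efficiency, specific-work) scores for 50 candidate geometries
candidates = [(random.random(), random.random()) for _ in range(50)]
front = pareto_front(candidates)
```

In the actual code of the paper, a genetic algorithm evolves the decision variables (solidities and angles) and this kind of dominance test decides which geometries survive as the Pareto set.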
Abstract: Overhead conveyor systems are attractive owing to their
simple construction, wide application range, and full compatibility
with other manufacturing systems designed according to international
standards. Ultra-light overhead conveyor systems are rope-based
conveying systems with individually driven vehicles. The vehicles can
move automatically along the rope, supplied with power and control
signals, and crossings are realized by switches. Overhead conveyor
systems are used particularly in the automotive industry but also at
post offices. Overhead conveyor systems must always be integrated
into a logistical process by finding the best route for economical
material flow and in order to guarantee precise and fast workflows.
With their help, transport can take place without wasting floor space,
without excess plant capacity, without lost or damaged products,
erroneous deliveries, or endless travel, and without wasting time.
Ultra-light overhead conveyor systems provide an optimal material
flow, which produces profit and saves time. This article illustrates the
advantages of the structure of ultra-light overhead conveyor systems
in logistics applications and explains the steps of their system design.
After an illustration of these steps, systems currently available on the
market are presented by means of their technical characteristics.
Finally, the demands placed on an ultra-light overhead conveyor
system, arising from its simple construction, are illustrated.
Abstract: In this paper, the computation of the electric field distribution around AC high-voltage lines is demonstrated. The advantages and disadvantages of two different methods for evaluating the electric field are described. The first is a semi-numerical method that uses the laws of electrostatics to simulate the two-dimensional electric field under a high-voltage overhead line. The second is the finite element method (FEM), which uses specific boundary conditions to compute the two-dimensional electric field distribution in an efficient way.
Abstract: The Beshar River is an aquatic ecosystem affected by
pollutants. This study was conducted to evaluate the effects of
human activities on the water quality of the Beshar River. The river
is approximately 190 km in length and is situated between the
geographical positions 51° 20' to 51° 48' E and 30° 18' to 30° 52'
N. It is one of the most important aquatic ecosystems of Kohkiloye
and Boyerahmad province, next to the city of Yasuj in southern Iran.
The Beshar river has been contaminated by industrial, agricultural
and other activities in this region such as factories, hospitals,
agricultural farms, urban surface runoff and effluent of wastewater
treatment plants. In order to evaluate the effects of these pollutants
on the quality of the Beshar river, five monitoring stations were
selected along its course. The first station is located upstream of
Yasuj near the Dehnow village; stations 2 to 4 are located east, south
and west of the city; and the 5th station is located downstream of Yasuj.
Several water quality parameters were sampled. These include pH,
dissolved oxygen, biological oxygen demand (BOD), temperature,
conductivity, turbidity, total dissolved solids and discharge or flow
measurements. Water samples from the five stations were collected
and analysed to determine the following physicochemical
parameters: EC, pH, TDS, TH, NO2, DO, BOD5, and COD during 2008
to 2009. The study shows that the BOD5 value of station 1 is at a
minimum (1.5 ppm) and increases downstream from stations 2 to 4 to
a maximum (7.2 ppm), and then decreases at station 5. The DO
value of station 1 is at a maximum (9.55 ppm) and decreases
downstream to stations 2-4, which are at a minimum (3.4 ppm),
before increasing at station 5. The amounts of BOD and TDS are highest at the 4th
station and the amount of DO is lowest at this station, marking the
4th station as more highly polluted than the other stations. The
physicochemical parameters improve at the 5th station due to
pollutant degradation and dilution. Finally, the point and nonpoint
pollutant sources of Beshar river were determined and compared to
the monitoring results.
Abstract: This paper presents a novel approach to real-time speed estimation of multiple traffic vehicles using fuzzy logic and image processing techniques with a proper arrangement of camera parameters. The described algorithm consists of several important steps. First, the background is estimated by computing the median over a time window of specific frames. Second, the foreground is extracted using a fuzzy similarity approach (FSA) between the estimated background pixels and the pixels of the current frame, which contains both foreground and background. Third, the traffic lanes are divided into two parts, one per direction of travel, for parallel processing. Finally, the speeds of the vehicles are estimated with a Maximum a Posteriori (MAP) estimator. The true ground speed is determined using infrared sensors for three different vehicles, and the results are compared with those of the proposed algorithm, which achieves an accuracy of ±0.74 km/h.
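The first step of the pipeline, background estimation as a per-pixel median over a time window, can be sketched directly. The tiny grayscale "frames" below are illustrative stand-ins for camera images; the later FSA and MAP stages are not reproduced here.

```python
import statistics

def median_background(frames):
    """Per-pixel median over a list of equally sized grayscale frames."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[statistics.median(f[y][x] for f in frames) for x in range(w)]
            for y in range(h)]

# a static background of value 10, with a transient "vehicle" pixel (200)
# appearing in only one of the five frames
frames = [[[10, 10], [10, 10]] for _ in range(5)]
frames[2][0][1] = 200
bg = median_background(frames)
```

Because a moving vehicle occupies any given pixel in only a minority of frames, the median suppresses it and recovers the static road surface, which is why the median (rather than the mean) is the usual choice for this step.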
Abstract: Homogeneous Charge Compression Ignition (HCCI) technology has been around for a long time but has recently received renewed attention and enthusiasm. This paper deals with experimental investigations of an HCCI engine using hydrous methanol as the primary fuel and dimethyl ether (DME) as an ignition improver. A regular diesel engine was modified to work as an HCCI engine for this investigation. The hydrous methanol is inducted and the DME is injected into a single-cylinder engine. Hydrous methanol with 15% water content is used in the HCCI engine, and its performance and emission behavior are documented. The auto-ignition of methanol is enabled by the DME, whose quantity varies with the load. In this study, the experiments are conducted independently, and the effects of hydrous methanol on the engine operating limit, heat release rate, and exhaust emissions at different load conditions are investigated. The investigation also shows that hydrous methanol-DME operation reduces the oxides of nitrogen and smoke to extremely low levels, which is not possible with a direct-injection CI engine. It is therefore beneficial to use the hydrous methanol-DME HCCI mode when using hydrous methanol in internal combustion engines.
Abstract: This paper considers the problem of scheduling maintenance actions for identical aircraft gas turbine engines. Each turbine consists of parts which frequently require replacement. A finite inventory of spare parts is available, and all parts are ready for replacement at any time. The inventory consists of both new and refurbished parts; hence, these parts have different field lives. The goal is to find a replacement-part sequence that maximizes the time that the aircraft will keep functioning before the inventory is replenished. The problem is formulated as an identical parallel machine scheduling problem in which the minimum completion time has to be maximized. Two models have been developed. The first is an optimization model based on a 0-1 linear programming formulation, while the second is an approximate procedure which consists of decomposing the problem into several two-machine subproblems, each of which is optimally solved using the first model. Both models have been implemented using Lingo and tested on two sets of randomly generated data with up to 150 parts and 10 turbines. Experimental results show that the optimization model is able to solve only instances with no more than 4 turbines, while the decomposition procedure often provides near-optimal solutions within a maximum CPU time of 3 seconds.
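The max-min objective described above can be illustrated with a quick greedy baseline, which is not the paper's 0-1 LP or its decomposition: assign each spare part, in decreasing order of field life, to the turbine with the smallest accumulated running time, so the minimum completion time is pushed up. The part lives below are illustrative numbers.

```python
import heapq

def greedy_maxmin(lives, n_turbines):
    """LPT-style greedy for max-min completion time: give each part (longest
    field life first) to the currently least-loaded turbine."""
    heap = [(0.0, t) for t in range(n_turbines)]   # (load, turbine id)
    heapq.heapify(heap)
    assignment = {t: [] for t in range(n_turbines)}
    for life in sorted(lives, reverse=True):
        load, t = heapq.heappop(heap)
        assignment[t].append(life)
        heapq.heappush(heap, (load + life, t))
    # objective: the earliest time any turbine runs out of assigned parts
    return assignment, min(sum(v) for v in assignment.values())

lives = [9, 7, 6, 5, 4, 3, 2]      # hypothetical field lives of spare parts
assignment, worst = greedy_maxmin(lives, 3)
```

Such a greedy pass gives a feasible lower bound in milliseconds; the paper's exact 0-1 LP and two-machine decomposition then close the remaining gap to optimality.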
Abstract: This paper concerns a formal model to help the
simulation of agent societies where institutional roles and
institutional links can be specified operationally. That is, this paper
concerns institutional roles that can be specified in terms of a minimal behavioral capability that an agent should have in order to
enact that role and, thus, to perform the set of institutional functions that role is responsible for. Correspondingly, the paper concerns
institutional links that can be specified in terms of a minimal
interactional capability that two agents should have in order to, while
enacting the two institutional roles that are linked by that institutional
link, perform for each other the institutional functions supported by
that institutional link. The paper proposes a cognitive architecture
approach to institutional roles and institutional links, that is, an approach in which an institutional role is seen as an abstract cognitive
architecture that should be implemented by any concrete agent (or set of concrete agents) that enacts the institutional role, and in which
institutional links are seen as interactions between the two abstract
cognitive agents that model the two linked institutional roles. We
introduce a cognitive architecture for such purpose, called the
Institutional BCC (IBCC) model, which lifts Yoav Shoham's BCC
(Beliefs-Capabilities-Commitments) agent architecture to social
contexts. We show how the resulting model can be taken as a means
for a cognitive architecture account of institutional roles and
institutional links of agent societies. Finally, we present an example
of a generic scheme for certain fragments of the social organization
of agent societies, where institutional roles and institutional links are
given in terms of the model.
Abstract: In conventional reliability assessment, the reliability data of system components are treated as crisp values. The collected data carry uncertainties due to errors introduced by humans, machines, or other sources. These uncertainty factors limit our understanding of system component failure because the data are incomplete. In such situations, we need to generalize classical methods to a fuzzy environment for studying and analyzing the systems of interest. Fuzzy set theory has been proposed to handle such vagueness by generalizing the notion of membership in a set. Essentially, in a Fuzzy Set (FS) each element is associated with a point value selected from the unit interval [0, 1], which is termed the grade of membership in the set. A Vague Set (VS), like an Intuitionistic Fuzzy Set (IFS), is a further generalization of an FS: instead of the point-based membership used in an FS, interval-based membership is used in a VS. The interval-based membership in a VS is more expressive in capturing the vagueness of data. In the present paper, vague set theory coupled with the conventional Lambda-Tau method is presented for the reliability analysis of repairable systems. The methodology uses Petri nets (PN) to model the system instead of a fault tree because they allow efficient simultaneous generation of minimal cut and path sets. The presented method is illustrated with the press unit of a paper mill.
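The interval-based membership idea can be made concrete in a tiny sketch: a vague set stores, for each element, a truth grade t and a falsity grade f with t + f ≤ 1, and its membership interval is [t, 1 - f]. The class name and example values below are illustrative, not the paper's notation.

```python
class VagueSet:
    """Minimal vague-set container: element -> (t, f) with t + f <= 1."""
    def __init__(self):
        self.grades = {}

    def add(self, x, t, f):
        # t is the evidence for membership, f the evidence against it
        assert 0.0 <= t and 0.0 <= f and t + f <= 1.0
        self.grades[x] = (t, f)

    def interval(self, x):
        # the vague membership interval [t, 1 - f]
        t, f = self.grades[x]
        return (t, 1.0 - f)

vs = VagueSet()
vs.add("component_A", 0.6, 0.2)      # 0.6 for, 0.2 against membership
lo, hi = vs.interval("component_A")
```

The width of the interval, 1 - t - f, quantifies the hesitation left in the data; the Lambda-Tau analysis in the paper propagates such intervals through the system model instead of crisp failure rates.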
Abstract: Estimating the lifetime distribution of computer networks whose nodes and links exist in time and are bound to fail is very useful in various applications. This problem is known to be NP-hard. In this paper, we present efficient combinatorial approaches to the Monte Carlo estimation of the network lifetime distribution. We also present some simulation results.
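A crude Monte Carlo estimator for an s-t network-lifetime distribution may help fix ideas, though it is only a baseline and not the paper's combinatorial approach: every link gets a random (here exponential) failure time, and the lifetime of one sample is the bottleneck failure time over s-t paths, found by adding links in decreasing lifetime order until s and t connect. The topology and failure rate are illustrative assumptions.

```python
import random

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def sample_lifetime(edges, s, t, rate):
    """One Monte Carlo sample of the s-t connectivity lifetime."""
    lives = [(random.expovariate(rate), u, v) for u, v in edges]
    nodes = {n for u, v in edges for n in (u, v)}
    parent = {n: n for n in nodes}
    # add links from longest- to shortest-lived; the link whose addition
    # first connects s and t is the one whose failure disconnects them
    for life, u, v in sorted(lives, reverse=True):
        parent[find(parent, u)] = find(parent, v)
        if find(parent, s) == find(parent, t):
            return life
    return 0.0

random.seed(7)
edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t"), ("a", "b")]
samples = sorted(sample_lifetime(edges, "s", "t", 1.0) for _ in range(2000))
median_life = samples[len(samples) // 2]
```

The sorted samples form an empirical lifetime distribution (any quantile can be read off); the efficiency gains in the paper come from combinatorial structure that this naive per-sample recomputation ignores.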