Abstract: The choice of finite element used to predict the
nonlinear static or dynamic response of complex structures is an
important factor. The main goal of this research is therefore to
study the effect of the in-plane rotational degrees of freedom in
linear and geometrically nonlinear static and dynamic analysis of
thin shell structures using flat shell finite elements. To this end,
first, simple triangular and quadrilateral flat shell finite elements
are implemented in an incremental formulation based on the updated
Lagrangian corotational description for geometrically nonlinear
analysis. The triangular element is a combination of the DKT and CST
elements, while the quadrilateral combines the DKQ and the bilinear
quadrilateral membrane element. In both elements, the sixth degree of
freedom is handled by introducing a fictitious stiffness. Second, in
the same code, the sixth degree of freedom in these elements is
handled differently: the in-plane rotational d.o.f. is treated as an
effective d.o.f. in the interpolation of the in-plane field. Our goal
is to compare the resulting shell elements. Third, the analysis is
extended to linear dynamic analysis by direct integration using
Newmark's implicit method. Finally, the linear dynamic analysis is
extended to geometrically nonlinear dynamic analysis, where Newmark's
method is used to integrate the equations of motion and the
Newton-Raphson method is employed to iterate within each time step
until equilibrium is achieved. The results obtained demonstrate the
effectiveness and robustness of the interpolation of the in-plane
rotational d.o.f. and reveal the deficiencies of using a fictitious
stiffness in linear and nonlinear dynamic analysis.
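As a minimal sketch of the time-stepping scheme just described, the following Python fragment combines Newmark's implicit method (average-acceleration parameters beta = 1/4, gamma = 1/2) with Newton-Raphson equilibrium iterations in each step. The interface (f_int, K_tan, f_ext) is illustrative, not the authors' code, and damping is omitted:

```python
import numpy as np

def newmark_nr(M, f_int, K_tan, f_ext, u0, v0, dt, n_steps,
               beta=0.25, gamma=0.5, tol=1e-8, max_iter=20):
    """Newmark implicit time integration with Newton-Raphson
    equilibrium iterations within each time step."""
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, f_ext(0.0) - f_int(u))   # initial acceleration
    for n in range(n_steps):
        t = (n + 1) * dt
        u_new = u.copy()                             # displacement predictor
        for _ in range(max_iter):
            # Newmark updates of acceleration and velocity from displacement
            a_new = (u_new - u - dt * v - dt**2 * (0.5 - beta) * a) / (beta * dt**2)
            v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
            r = f_ext(t) - f_int(u_new) - M @ a_new  # dynamic residual
            if np.linalg.norm(r) < tol:
                break
            K_eff = K_tan(u_new) + M / (beta * dt**2)  # effective tangent
            u_new = u_new + np.linalg.solve(K_eff, r)
        u, v, a = u_new, v_new, a_new
    return u, v, a

# Example: single d.o.f. with a hardening spring, f_int(u) = k u + k3 u^3.
M = np.array([[1.0]])
k, k3 = 100.0, 50.0
u, v, a = newmark_nr(M, lambda u: k * u + k3 * u**3,
                     lambda u: np.array([[k + 3 * k3 * u[0]**2]]),
                     lambda t: np.array([10.0 * np.sin(5 * t)]),
                     np.zeros(1), np.zeros(1), dt=0.01, n_steps=100)
```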
Abstract: In recent years, adaptive pushover methods have been
developed for the seismic analysis of structures. Herein, the
accuracy of the displacement-based adaptive pushover (DAP) method,
introduced by Antoniou and Pinho [2004], is evaluated for irregular
buildings, and the results are compared to the force-based procedure.
Both concrete and steel frame structures, asymmetric in plan and
elevation, are analyzed, and torsional effects are also taken into
account. These analyses are performed using both near-fault and
far-fault records. To verify the results, Incremental Dynamic
Analysis (IDA) is performed.
Abstract: Transmission network expansion planning (TNEP) is an important component of power system planning whose task is to minimize network construction and operational cost while satisfying demand growth and the imposed technical and economic conditions. To date, various methods have been presented to solve the static transmission network expansion planning (STNEP) problem. In all of these methods, however, the adequacy rate of the lines after the planning horizon has not been studied, i.e., the point at which the expanded network loses its adequacy and needs to be expanded again. In this paper, in order to take the condition of the transmission lines after expansion into account from the line-loading viewpoint, the adequacy of the transmission network is considered in the solution of the STNEP problem. To obtain the optimal network arrangement, a decimal codification genetic algorithm (DCGA) is used to minimize the network construction and operational cost. The effectiveness of the proposed idea is tested on Garver's six-bus network. Evaluation of the results reveals that the annual worth of network adequacy has a considerable effect on the network arrangement. In addition, the network obtained with the DCGA has lower investment cost and a higher adequacy rate. Thus, the network satisfies the requirements of delivering electric power more safely and reliably to load centers.
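A decimal codification GA of the kind used for STNEP can be sketched as follows, with each gene holding the integer number of new circuits built in a candidate corridor. The cost function, population settings, and operators here are illustrative placeholders, not the paper's actual configuration or Garver's case data:

```python
import random

def dcga(n_corridors, max_lines, cost_fn, pop_size=50, gens=200,
         cx_rate=0.8, mut_rate=0.05):
    """Decimal-codification GA: each chromosome is a vector of integers,
    gene i = number of new lines built in corridor i."""
    pop = [[random.randint(0, max_lines) for _ in range(n_corridors)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost_fn)                      # lower cost is fitter
        elite = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, n_corridors)  # one-point crossover
            child = p1[:cut] + p2[cut:] if random.random() < cx_rate else p1[:]
            for i in range(n_corridors):            # decimal mutation
                if random.random() < mut_rate:
                    child[i] = random.randint(0, max_lines)
            children.append(child)
        pop = elite + children
    return min(pop, key=cost_fn)

# Toy cost: line construction cost plus a penalty on unserved demand.
best = dcga(6, 3, lambda x: sum(x) * 10 + max(0, 8 - sum(x)) * 100)
print(best)
```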
Abstract: The field of biomedical materials plays a critical role
in manufacturing a variety of artificial biological replacements in
the modern world. Recently, titanium (Ti) materials have been used as
biomaterials because of their superior corrosion resistance,
tremendous specific strength, freedom from allergic problems, and
excellent biocompatibility compared to competing biomaterials such as
stainless steel, Co-Cr alloys, ceramics, polymers, and composite
materials. However, despite these excellent properties, implantable
Ti materials have poor shear strength and wear resistance, which
limit their applications as biomaterials. Even though the wear
properties of Ti alloys have shown some improvement, the effective
use of biomedical Ti alloys as wear components requires a deep,
comprehensive understanding of the causes and mechanisms of wear and
of the techniques that can be used to improve wear behavior. This
review examines current information on the effect of thermal and
thermomechanical processing of implantable Ti materials on the
long-term prosthetic requirements related to wear behavior. The paper
focuses mainly on the evolution, evaluation, and development of
effective microstructural features that can improve the wear
properties of bio-grade Ti materials using thermal and
thermomechanical treatments.
Abstract: This study systematizes the processes and methods of
designing wooden furniture that is unique in function and aesthetics.
The study was carried out by researching and analyzing the factors
designers must consider that affect function and production. The
results indicate that these factors are the design process (planning
for design, product specifications, concept design, product
architecture, industrial design, production), design evaluation, and
factors specific to wooden furniture design, i.e., art (art style;
furniture history, form), functionality (strength and durability,
placement area, usage), material (appropriateness to function,
mechanical properties of wood), joints, cost, safety, and social
responsibility. All of these factors contribute to good design.
Drawing on direct experience gained through users' usage, the
designer must design wooden furniture systematically and effectively.
Accordingly, this study selected a dining armchair as a case study,
applying all of the factors and design processes stated above.
Abstract: Cognitive science appeared about 40 years ago, in the
wake of the challenge of artificial intelligence, as common territory
for several scientific disciplines: computer science, mathematics,
psychology, neurology, philosophy, sociology, and linguistics. The
newborn science was justified by the complexity of the problems
related to human knowledge on the one hand, and on the other by the
fact that none of the above-mentioned sciences could explain mental
phenomena alone. Based on data supplied by the experimental sciences,
such as psychology and neurology, cognitive science builds models of
the operation of the human mind. These models are implemented in
computer programs and/or electronic circuits (specific to artificial
intelligence), called cognitive systems, whose competences and
performances are compared to human ones, leading to the
reinterpretation of psychological and neurological data and, in turn,
to the construction of new models. In these processes, psychology
provides the experimental basis, while philosophy and mathematics
provide the level of abstraction that is utterly necessary for
mediating between the sciences mentioned.

The general problematic of the cognitive approach yields two
important types of approach: the computational one, starting from the
idea that mental phenomena can be reduced to calculus operations on
1s and 0s, and the connectionist one, which considers the products of
thinking to be the result of the interaction among all the component
(included) systems. In psychology, measurement in the computational
register uses classical inquiries and psychometric tests, generally
based on calculus methods. Considering both sides of cognitive
science, we can notice a gap in the possibilities of measuring
psychological products from the connectionist perspective, which
requires a unitary understanding of the quality-quantity whole. In
such an approach, measurement by calculus proves inefficient. Our
research, carried out for more than 20 years, leads to the conclusion
that measurement by forms properly fits the laws and principles of
connectionism.
Abstract: Robots' visual perception is a field that is gaining
increasing attention from researchers. This is partly due to emerging
trends in the commercial availability of 3D scanning systems and
devices that produce highly accurate information for a variety of
applications. Throughout the history of mining, the mortality rate of
mine workers has been alarming, and robots exhibit a great deal of
potential for tackling safety issues in mines. However, an effective
vision system is crucial to safe autonomous navigation in underground
terrains. This work investigates robots' perception in underground
terrains (mines and tunnels) using the statistical region merging
(SRM) model. SRM reconstructs the main structural components of an
image by a simple but effective statistical analysis. An
investigation is conducted on different regions of the mine, such as
the shaft, stope, and gallery, using publicly available mine frames
together with a stream of locally captured mine images. An
investigation is also conducted on a stream of underground tunnel
image frames using the XBOX Kinect 3D sensor. The Kinect sensor
produces streams of red, green, and blue (RGB) and depth images at
640 × 480 resolution and 30 frames per second. Integrating the depth
information into the drivability analysis gives a strong cue,
detecting in 3D what augments the drivable and non-drivable regions
found in 2D. The results of the 2D and 3D experiments on different
terrains, mines and tunnels, together with the qualitative and
quantitative evaluation, reveal that a good drivable region can be
detected in dynamic underground terrains.
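For illustration, a simplified grayscale version of SRM can be written as a union-find pass over gradient-sorted pixel pairs. The merging bound b(R) below is a simplification of the published predicate, and the parameter names (Q, g) follow the usual SRM notation rather than this paper's implementation:

```python
import numpy as np

def srm_segment(img, Q=32):
    """Simplified statistical region merging on a grayscale image:
    merge adjacent regions when their mean difference is within the
    statistical bound b(R)."""
    h, w = img.shape
    n = h * w
    parent = np.arange(n)                  # union-find forest
    size = np.ones(n, dtype=int)
    mean = img.reshape(-1).astype(float)   # per-region mean intensity
    g = 256.0                              # gray-level range
    delta = 1.0 / (6.0 * n * n)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def b(sz):                             # statistical merging bound
        return g * np.sqrt(np.log(1.0 / delta) / (2.0 * Q * sz))

    edges = []                             # 4-connected pixel pairs
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                edges.append((abs(int(img[y, x]) - int(img[y, x + 1])), i, i + 1))
            if y + 1 < h:
                edges.append((abs(int(img[y, x]) - int(img[y + 1, x])), i, i + w))
    edges.sort()                           # process smallest gradients first

    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj and abs(mean[ri] - mean[rj]) <= np.hypot(b(size[ri]), b(size[rj])):
            parent[rj] = ri                # merge: pool means, then sizes
            mean[ri] = (mean[ri] * size[ri] + mean[rj] * size[rj]) / (size[ri] + size[rj])
            size[ri] += size[rj]
    return np.array([find(i) for i in range(n)]).reshape(h, w)

# Tiny two-region test image: the two halves end up with distinct labels.
demo = np.repeat(np.repeat(np.array([[40, 200]], dtype=np.uint8), 4, axis=0), 4, axis=1)
print(srm_segment(demo))
```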
Abstract: A recent neurospiking coding scheme for feature extraction from biosonar echoes of various plants is examined with a variety of stochastic classifiers. The derived feature vectors are employed in well-known stochastic classifiers, including nearest-neighbor, single Gaussian, and a Gaussian mixture with EM optimization. Classifier performance is evaluated using cross-validation and bootstrapping techniques. It is shown that the various classifiers perform equivalently and that the modified preprocessing configuration yields considerably improved results.
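A sketch of this evaluation pipeline, assuming scikit-learn and synthetic placeholder features (the actual neurospiking feature vectors and plant classes are not reproduced here), might look like:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import cross_val_score

# Placeholder data: rows are neurospiking feature vectors, y plant labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = rng.integers(0, 4, size=200)

# Nearest-neighbor baseline, scored with 5-fold cross-validation.
knn = KNeighborsClassifier(n_neighbors=1)
print("1-NN accuracy:", cross_val_score(knn, X, y, cv=5).mean())

def gmm_classify(X_tr, y_tr, X_te, n_components=3):
    """One Gaussian mixture per class, fitted with EM; classify by
    maximum likelihood. n_components=1 is the single-Gaussian case."""
    models = {c: GaussianMixture(n_components, covariance_type="diag",
                                 random_state=0).fit(X_tr[y_tr == c])
              for c in np.unique(y_tr)}
    scores = np.column_stack([models[c].score_samples(X_te)
                              for c in sorted(models)])
    return np.array(sorted(models))[np.argmax(scores, axis=1)]
```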
Abstract: This paper uses finite element analysis to study the
bearing capacity of ring footings on two-layered soil. The upper
layer, on which the footing rests, is soft clay, and the layer
beneath is cohesionless sand. The soils are modeled with the
Mohr–Coulomb plastic yield criterion. The effects of two factors, the
thickness of the clay layer and the ratio of the internal radius of
the ring footing to its external radius (ri / ro), are analyzed. It
is found that the bearing capacity decreases as ri / ro increases,
and that it also decreases gradually as the clay layer thickness
increases.
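For reference, the Mohr–Coulomb criterion used for both layers relates shear strength to normal stress through cohesion and friction angle; the two limiting cases below correspond to the paper's soft clay and cohesionless sand:

```latex
% Mohr-Coulomb failure criterion.
\tau_f = c + \sigma_n \tan\varphi
% Undrained soft clay: \varphi \approx 0 \Rightarrow \tau_f = c_u
% Cohesionless sand:   c = 0 \Rightarrow \tau_f = \sigma_n \tan\varphi
```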
Abstract: This paper describes the optimization of a complex
dairy farm simulation model using two quite different methods of
optimization, the genetic algorithm (GA) and the Lipschitz
Branch-and-Bound (LBB) algorithm. These techniques have been
used to improve an agricultural system model developed by Dexcel
Limited, New Zealand, which describes a detailed representation of
pastoral dairying scenarios and contains an 8-dimensional parameter
space. The model incorporates the sub-models of pasture growth and
animal metabolism, which are themselves complex in many cases.
Each evaluation of the objective function, a composite 'Farm
Performance Index (FPI)', requires simulation of at least a one-year
period of farm operation with a daily time-step, and is therefore
computationally expensive. The problem of visualization of the
objective function (response surface) in high-dimensional spaces is
also considered in the context of the farm optimization problem.
Adaptations of the Sammon mapping and parallel coordinates
visualization are described which help visualize some important
properties of the model's output topography. This study finds that
the GA requires fewer function evaluations in optimization than the
LBB algorithm.
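The Sammon mapping referred to above projects the 8-dimensional parameter space into two dimensions by minimizing the Sammon stress between the original distances and the projected distances; the standard definition (not the authors' specific adaptation) is:

```latex
% Sammon stress: d_{ij}^{*} are pairwise distances in the original
% 8-D parameter space, d_{ij} the distances in the 2-D projection.
E = \frac{1}{\sum_{i<j} d_{ij}^{*}}
    \sum_{i<j} \frac{\bigl(d_{ij}^{*} - d_{ij}\bigr)^{2}}{d_{ij}^{*}}
```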
Abstract: This paper establishes that multi-state network
classification is essential for enhancing the performance of
transport protocols over satellite-based networks. A model that
classifies multi-state network conditions, taking into consideration
both congestion and channel error, is developed. To arrive at such a
model, an analysis of the impact of congestion and channel error on
RTT values was carried out using ns2, and the results of this
analysis are also reported in the paper. The inference drawn from the
analysis is used to develop a novel RTT-based statistical model for
multi-state network classification.

An Adaptive Multi State Proactive Transport Protocol, consisting of
Proactive Slow Start, State-based Error Recovery, Timeout Action, and
Proactive Reduction, is proposed that uses the multi-state network
classification model. The paper also confirms, through detailed
simulation and analysis, that prior knowledge of the overall
characteristics of the network helps enhance the performance of the
protocol over a satellite channel, which is significantly affected by
channel noise and congestion.

The necessary augmentation of the ns2 simulator was carried out to
simulate the multi-state network classification logic. This
simulation was used in a detailed evaluation of the protocol under
varied levels of congestion and channel noise. The performance of the
protocol is compared with the established protocols TCP SACK and
Vegas. The results clearly reveal that the proposed protocol
consistently outperforms its peers and shows a significant
improvement under very high error conditions, as envisaged in its
design.
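The core idea of RTT-based state classification can be illustrated with a toy rule of the kind described: a loss event accompanied by inflated RTTs suggests congestion, while a loss event with RTTs near the base path delay suggests channel error. The threshold and state names below are placeholders, not the paper's fitted statistical model:

```python
from statistics import mean

def classify_state(rtt_window, base_rtt, loss_seen, congestion_factor=1.5):
    """Toy multi-state classifier: loss with inflated RTTs is attributed
    to congestion; loss with RTTs near the base path delay is attributed
    to channel (corruption) error."""
    if not loss_seen:
        return "GOOD"
    if mean(rtt_window) > congestion_factor * base_rtt:
        return "CONGESTED"      # queuing delay has inflated the RTT
    return "CHANNEL_ERROR"      # loss without RTT inflation

# Loss with RTTs close to the base delay is classified as channel error.
print(classify_state([512, 509, 511, 510], base_rtt=510, loss_seen=True))
```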
Abstract: A new algorithm called Character-Comparison to Character-Access (CCCA) is developed to test the effect of two factors on the performance of the checking operation in string searching: 1) converting character comparison and number comparison into character access, and 2) the starting point of checking. An experiment is performed using both English text and DNA text of different sizes. The results are compared with five algorithms: Naive, BM, Inf_Suf_Pref, Raita, and Cycle. With the CCCA algorithm, the results suggest that the average number of total comparisons is improved by up to 35%. Furthermore, the results suggest that the new CCCA algorithm improves on the clock time required by the other algorithms by 22.13% to 42.33%.
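The CCCA algorithm itself is not reproduced here, but the evaluation criterion, the total number of character comparisons during checking, can be illustrated with an instrumented naive search used as the baseline (the counting harness is for illustration):

```python
def naive_search(text, pattern):
    """Naive string search returning match positions and the total number
    of character comparisons, the criterion used in the evaluation."""
    comparisons, matches = 0, []
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        j = 0
        while j < m:
            comparisons += 1            # count every character comparison
            if text[i + j] != pattern[j]:
                break
            j += 1
        if j == m:
            matches.append(i)
    return matches, comparisons

matches, cmps = naive_search("ACGTACGTGACG" * 100, "ACGTG")
print(matches[:3], cmps)                # match positions and total count
```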
Abstract: Steel corrosion in concrete is considered a major
engineering problem in many countries, and substantial sums are spent
annually on repair and maintenance. The problem can occur in all
engineering structures, whether in coastal and offshore areas or
elsewhere. Hence, concrete structures should be able to withstand the
corrosion factors present in water or soil. Susceptibility to
reinforcing steel corrosion can be assessed by means of the
electrical resistance of the concrete, and maintaining high
electrical resistivity in concrete is necessary to prevent steel
corrosion. Many studies worldwide have been devoted to different
aspects of the subject. In this paper, the effects of W/C ratio,
cementitious material content, and silica fume percentage on the
electrical resistivity of high-strength concrete were investigated.
To this end, sixteen mix designs with a single aggregate grading were
planned. Five of them had varying W/C ratios, and the other eleven
mixes were prepared with a constant W/C ratio but different amounts
of cementitious materials. Silica fume and superplasticizer were used
in different proportions in all specimens. Specimens were tested
after 28 days of moist curing. A total of 80 cube specimens (50 mm)
were tested for concrete electrical resistance. The results show that
the electrical resistivity of concrete can be increased by increasing
the amounts of cementitious materials and silica fume.
Abstract: The design requirements for successful human
accommodation in urban spaces are well known; and the range of
facilities available for meeting urban water quality and quantity
requirements is also well established. Their competing requirements
must be reconciled in order for urban spaces to be successful for
both. This paper outlines the separate human and water imperatives
and their interactions in urban spaces. Stormwater management
facilities' relative potential contributions to urban spaces are
contrasted, and design choices for achieving those potentials are
described. This study uses human success of urban space as the
evaluative criterion of stormwater amenity: human values call on
stormwater facilities to contribute to successful human spaces.
Placing water's contribution under the overall idea of successful
urban space is an evolution from previous subjective evaluations.
The information is based on photographs and notes from
approximately 1,000 stormwater facilities and urban sites collected
during the last 35 years in North America and overseas, and the
author's experience on multi-disciplinary design teams. This
conceptual study combines the disciplinary roles of engineering,
landscape architecture, and sociology in effecting successful urban
design.
Abstract: Determination of nanoparticle size is important, since
particle size exerts a significant effect on various properties of
nanomaterials. Accordingly, non-destructive, accurate, and rapid
techniques for this purpose are of high interest. Some conventional
techniques for investigating the morphology and grain size of
nanoparticles are scanning electron microscopy (SEM), atomic force
microscopy (AFM), and X-ray diffractometry (XRD). Vibrational
spectroscopy is utilized to characterize different compounds and has
been applied to estimate average particle size based on the
relationship between particle size and near-infrared spectra [1,4],
but it has never been applied in quantitative morphological analysis
of nanomaterials. So far, the potential of near-infrared (NIR)
spectroscopy, with its ability to rapidly analyze powdered materials
with minimal sample preparation, has been suggested for particle size
determination of powdered pharmaceuticals. The relationship between
particle size and diffuse reflectance (DR) spectra in the
near-infrared region has been applied to introduce a method for
estimating particle size. A back-propagation artificial neural
network (BP-ANN), as a nonlinear model, was applied to estimate
average particle size from near-infrared diffuse reflectance spectra.
Thirty-five nano-TiO2 samples with different particle sizes were
analyzed by DR-FTNIR spectrometry, and the resulting data were
processed by the BP-ANN.
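A sketch of this modeling step, using scikit-learn's MLPRegressor as a stand-in for the back-propagation network and synthetic spectra in place of the 35 DR-FTNIR measurements, might look like:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Placeholder: rows are diffuse-reflectance spectra, y the average
# particle size (nm) of each nano-TiO2 sample.
rng = np.random.default_rng(1)
X = rng.normal(size=(35, 200))            # 35 samples x 200 wavelengths
y = rng.uniform(20, 100, size=35)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)

# One hidden layer trained by back-propagation (gradient descent on MSE).
bp_ann = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                      max_iter=5000, random_state=0)
bp_ann.fit(X_tr, y_tr)
print("Predicted sizes (nm):", bp_ann.predict(X_te).round(1))
```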
Abstract: A DEA model can generally evaluate performance using
multiple inputs and outputs from the same period. However, the
production lead-time phenomenon is sometimes hard to avoid, as in
long-term projects or marketing activities. A few models have been
suggested to capture this time-lag issue in the context of DEA. This
paper develops a dual-MPO model to deal with the time-lag effect in
evaluating efficiency. A numerical example is also given to show that
the proposed model can be used to obtain the efficiency and reference
sets of inefficient DMUs, and to obtain projected target values of
the input attributes that inefficient DMUs need in order to become
efficient.
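The dual-MPO formulation is not reproduced here, but as background, the standard input-oriented CCR envelopment LP underlying such DEA models can be solved with scipy as follows (the data are a toy example):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR envelopment LP for DMU k.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns (theta, lambdas)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    # inputs:  sum_j lam_j * x_ji <= theta * x_ki
    A_in = np.c_[-X[k].reshape(m, 1), X.T]
    # outputs: sum_j lam_j * y_jr >= y_kr
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0], res.x[1:]

# Toy data: 4 DMUs, 2 inputs, 1 output; theta = 1 marks an efficient DMU.
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0], [5.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
for k in range(4):
    print(f"DMU {k}: theta = {ccr_efficiency(X, Y, k)[0]:.3f}")
```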
Abstract: This study adopted previous fault patterns, results of
detection analysis, historical records and data, and experts'
experiences to establish fuzzy principles and estimate the failure
probability index of components of a power transformer. Considering
that actual parameters and limiting conditions of parameters may
differ, this study used the standard data of IEC, IEEE, and CIGRE as
condition parameters. According to the characteristics of each
condition parameter, relative degradation was introduced to reflect the
degree of influence of the factors on the transformer condition. The
method of fuzzy mathematics was adopted to determine the
subordinate function of the transformer condition. The calculation
used the MATLAB Fuzzy Logic Toolbox to select the condition parameters of
coil winding, iron core, bushing, OLTC, insulating oil and other
auxiliary components and factors (e.g., load records, performance
history, and maintenance records) of the transformer to establish the
fuzzy principles. Examples were presented to support the rationality
and effectiveness of the evaluation method of power transformer
performance conditions, as based on fuzzy comprehensive evaluation.
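The relative-degradation idea can be illustrated with a small sketch: each condition parameter is mapped to [0, 1] between its standard "good" and "limit" values, and the results are aggregated by weighted fuzzy evaluation. The parameter values, limits, and weights below are placeholders, not IEC/IEEE/CIGRE figures:

```python
def relative_degradation(value, good, limit):
    """Map a condition parameter to [0, 1]: 0 = as-new, 1 = at limit."""
    x = (value - good) / (limit - good)
    return min(max(x, 0.0), 1.0)

# Placeholder condition parameters (not actual standard limits).
components = {
    "winding_insulation": relative_degradation(62.0, good=80.0, limit=40.0),
    "oil_dielectric":     relative_degradation(45.0, good=60.0, limit=30.0),
    "bushing_tan_delta":  relative_degradation(0.7, good=0.3, limit=1.0),
}
weights = {"winding_insulation": 0.5, "oil_dielectric": 0.3,
           "bushing_tan_delta": 0.2}

# Weighted fuzzy comprehensive evaluation: aggregate the degradations.
index = sum(weights[k] * components[k] for k in components)
print(f"failure probability index = {index:.2f}")
```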
Abstract: In Supply Chain Management (SCM), strengthening partnerships with suppliers is a significant factor in enhancing competitiveness; hence, firms increasingly emphasize supplier evaluation processes. Supplier evaluation systems are typically built around criteria such as quality, cost, delivery, and flexibility. Because many variables must be analyzed, the process becomes hard to execute and requires expertise. On this account, this study aims to develop an expert system for the supplier evaluation process by designing an Artificial Neural Network (ANN) supported by Data Envelopment Analysis (DEA). The methods are applied to data on 24 suppliers that have long-term relationships with a medium-sized company in the German iron and steel industry. The supplier data consist of variables such as material quality (MQ), discount of amount (DOA), discount of cash (DOC), payment term (PT), delivery time (DT), and annual revenue (AR). The efficiency scores generated by DEA are added to the supplier evaluation system for use as system outputs.
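A sketch of the overall pipeline, in which DEA efficiency scores serve as the training target of an ANN over the listed supplier variables, might look like the following (the data, the stand-in efficiency formula, and the network settings are illustrative, not the study's):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder supplier table: columns MQ, DOA, DOC, PT, DT, AR.
rng = np.random.default_rng(2)
suppliers = rng.uniform(0, 1, size=(24, 6))

# Stand-in for the DEA stage: efficiency scores in (0, 1] per supplier
# (in the study these come from a DEA model, not a fixed formula).
efficiency = suppliers @ np.array([0.3, 0.1, 0.1, 0.1, 0.2, 0.2])
efficiency /= efficiency.max()

# ANN expert system: learn to reproduce the DEA efficiency as output.
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(suppliers, efficiency)
new_supplier = rng.uniform(0, 1, size=(1, 6))
print("predicted efficiency:", float(ann.predict(new_supplier)[0]))
```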
Abstract: Irradiation is considered one of the most efficient technological processes for reducing microorganisms in food. It can be used to improve the safety of food products and to extend their shelf lives. The aim of this study was to evaluate the effects of gamma irradiation on extending the shelf life of saffron. Samples were treated with 0 (non-irradiated), 1.0, 2.0, 3.0, and 4.0 kGy of gamma irradiation and held for 2 months. The control and irradiated samples underwent microbial analysis, chemical characterization, and sensory evaluation at 30-day intervals. Microbial analysis indicated that irradiation had a significant effect (P < 0.05) on the reduction of microbial loads. There was no significant difference in the sensory quality or chemical characteristics of the saffron during storage.
Abstract: A wireless sensor network with a large number of tiny sensor nodes can be used as an effective tool for gathering data in various situations. One of the major issues in wireless sensor networks is developing an energy-efficient routing protocol, which has a significant impact on the overall lifetime of the sensor network. In this paper, we propose a novel hierarchical routing protocol with static clustering, called Energy-Efficient Protocol with Static Clustering (EEPSC). EEPSC partitions the network into static clusters, eliminates the overhead of dynamic clustering, and utilizes temporary cluster-heads to distribute the energy load among high-power sensor nodes, thus extending network lifetime. We have conducted simulation-based evaluations to compare the performance of EEPSC against Low-Energy Adaptive Clustering Hierarchy (LEACH). Our experimental results show that EEPSC outperforms LEACH in terms of network lifetime and power consumption minimization.
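The static-clustering idea can be sketched as follows: clusters are fixed once at setup, and each round the highest-energy node in a cluster serves as temporary cluster-head. The geometry, energy model, and all names are illustrative, as EEPSC's packet-level details are not reproduced here:

```python
import random

class Node:
    def __init__(self, node_id, x, y, energy=2.0):
        self.id, self.x, self.y, self.energy = node_id, x, y, energy

def static_clusters(nodes, k):
    """One-time partition of the unit square into k vertical strips;
    unlike dynamic clustering, this assignment never changes."""
    clusters = [[] for _ in range(k)]
    for n in nodes:
        clusters[min(int(n.x * k), k - 1)].append(n)
    return clusters

def elect_temporary_heads(clusters):
    """Each round, the node with the most residual energy in a cluster
    serves as temporary cluster-head, spreading the relaying load."""
    return [max(c, key=lambda n: n.energy) for c in clusters if c]

random.seed(0)
nodes = [Node(i, random.random(), random.random()) for i in range(100)]
clusters = static_clusters(nodes, k=5)
for rnd in range(3):
    heads = elect_temporary_heads(clusters)
    for h in heads:
        h.energy -= 0.1          # head duty costs extra energy
    print("round", rnd, "heads:", [h.id for h in heads])
```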