Abstract: Rangelands, among the largest dynamic biomes in the
world, provide many ecosystem services. One of these is the regulation
of greenhouse gases in the Earth's atmosphere, particularly carbon
dioxide, the principal greenhouse gas. Attention to rangelands as cheap
and accessible resources for sequestering carbon dioxide has grown
since the Industrial Revolution. Rangelands cover large parts of Iran
as steppe. Rudshur (Saveh) was selected as a representative steppe
area, with three sites, a long-term exclosure, a medium-term exclosure,
and a grazed area, in order to assess the carbon sequestration capacity
of the dominant species. The canopy cover percentage of the two
dominant species (Artemisia sieberi Besser and Stipa barbata Desf.) was
determined using randomly established 1 m² plots. Above- and
belowground biomass was sampled by complete randomization. After the
ash percentage was determined in the laboratory, the conversion ratio
of plant biomass to organic carbon was calculated by the ignition
method. Paired t-tests showed that carbon sequestration differs between
the aboveground and belowground biomass of Artemisia sieberi Besser and
Stipa barbata Desf. at the three sites, except for Artemisia sieberi
Besser in the long-term exclosure, where belowground and aboveground
biomass did not differ. Independent t-tests indicated differences
between the corresponding belowground biomasses of the studied sites.
Carbon sequestration in Stipa barbata Desf. was consistently higher
than in Artemisia sieberi Besser. Overall, average sequestration was
5.842 g/m² in the long-term exclosure, 4.115 g/m² in the medium-term
exclosure, and 5.975 g/m² in the grazed area, so there is no
statistically significant difference in total carbon sequestration
among the three sites.
Abstract: The present study focuses on the parameters of the
Artificial Neural Network (ANN). Sensitivity analysis is
applied to assess the effect of the parameters of ANN on the prediction
of turbidity of raw water in the water treatment plant. The result shows
that transfer function of hidden layer is a critical parameter of ANN.
When the transfer function changes, the reliability of the water
turbidity prediction differs greatly. Moreover, the estimated water
turbidity is less sensitive to the number of training iterations and the learning rate than
the number of neurons in the hidden layer. Therefore, it is important to
select an appropriate transfer function and suitable number of neurons
in the hidden layer in the process of parameter training and validation.
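A minimal sketch of the kind of sensitivity the abstract describes: the same one-hidden-layer forward pass evaluated with two different hidden-layer transfer functions yields noticeably different predictions. The weights and inputs are arbitrary illustrative values, not parameters from the study:

```python
import math

def logistic(z):
    """Logistic (sigmoid) transfer function."""
    return 1.0 / (1.0 + math.exp(-z))

def mlp_forward(x, w_h, b_h, w_o, b_o, act):
    """One-hidden-layer forward pass with a configurable transfer function."""
    hidden = [act(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_h, b_h)]
    return sum(wo * h for wo, h in zip(w_o, hidden)) + b_o

# Arbitrary illustrative weights and input (two features, two hidden neurons)
x = [0.4, -0.2]
w_h, b_h = [[0.5, -1.0], [1.2, 0.3]], [0.1, -0.4]
w_o, b_o = [0.8, -0.6], 0.05

y_tanh = mlp_forward(x, w_h, b_h, w_o, b_o, math.tanh)
y_logistic = mlp_forward(x, w_h, b_h, w_o, b_o, logistic)
```

With identical weights, swapping tanh for the logistic function already shifts the output noticeably, which is why the transfer function must be chosen and validated alongside the other hyperparameters.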
Abstract: This research proposes an algorithm for the simulation
of time-periodic unsteady problems via the solution of the unsteady
Euler and Navier-Stokes equations. This algorithm, called the Time
Spectral method, uses a Fourier representation in time and hence
solves for the periodic state directly, without resolving the
transients (which consume most of the resources in a time-accurate
scheme). The mathematical tool used here is the discrete Fourier
transform. By enforcing periodicity and using a Fourier representation
in time, the method attains spectral accuracy and has shown tremendous
potential for reducing the computational cost compared to conventional
time-accurate methods. The accuracy and efficiency of the technique
are verified by Euler and Navier-Stokes calculations for pitching
airfoils. Because of the turbulent nature of the flow, the
Baldwin-Lomax turbulence model was used in the viscous flow analysis.
Results obtained with the Time Spectral method are compared with
experimental data and confirm that only a small number of time
intervals per pitching cycle is required to capture the flow physics.
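The core of the Time Spectral idea, differentiating a time-periodic signal through its discrete Fourier transform, can be sketched as follows (a stdlib-only illustration; the actual solver couples this operator to the Euler/Navier-Stokes residuals):

```python
import cmath
import math

def spectral_time_derivative(u):
    """Differentiate a periodic signal sampled at n equispaced points over
    one period [0, 2*pi): the k-th Fourier coefficient of du/dt is
    (i * k) * u_hat_k."""
    n = len(u)
    assert n % 2 == 0  # even sample counts only, for simplicity
    # Forward DFT
    u_hat = [sum(u[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
             for k in range(n)]
    # Signed wavenumbers in DFT index order; the Nyquist mode is zeroed
    ks = list(range(n // 2)) + [0] + list(range(-(n // 2) + 1, 0))
    du_hat = [1j * k * c for k, c in zip(ks, u_hat)]
    # Inverse DFT (take the real part; the imaginary part is round-off)
    return [sum(du_hat[k] * cmath.exp(2j * math.pi * k * j / n)
                for k in range(n)).real / n for j in range(n)]

n = 16
t = [2 * math.pi * j / n for j in range(n)]
du = spectral_time_derivative([math.sin(tj) for tj in t])
# Spectral accuracy: for a smooth periodic signal such as sin(t), even 16
# samples per period recover the derivative cos(t) to machine precision.
```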
Abstract: In this article we explore the application of a formal
proof system to verification problems in cryptography. Cryptographic
properties concerning correctness or security of some cryptographic
algorithms are of great interest. Besides some basic lemmata, we
explore an implementation of a complex function that is used in
cryptography. More precisely, we describe formal properties of this
implementation that we prove by computer. We describe formalized
probability distributions (σ-algebras, probability spaces and conditional
probabilities). These are given in the formal language of the
formal proof system Isabelle/HOL. Moreover, we give a computer
proof of Bayes' formula. Besides, we describe an application of the presented
formalized probability distributions to cryptography. Furthermore,
this article shows that computer proofs of complex cryptographic
functions are possible by presenting an implementation of the Miller-
Rabin primality test that admits formal verification. Our achievements
are a step towards computer verification of cryptographic primitives.
They describe a basis for computer verification in cryptography.
Computer verification can be applied to further problems in cryptographic
research, if the corresponding basic mathematical knowledge
is available in a database.
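Since the abstract mentions a formally verified Miller-Rabin primality test, here is a plain (unverified) Python rendering of the standard algorithm for reference; it is a sketch of the textbook method, not the Isabelle/HOL implementation described above:

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test; a composite n passes all rounds with
    probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```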
Abstract: In this paper we address the problem of musical style
classification, which has a number of applications like indexing in
musical databases or automatic composition systems. Starting from
MIDI files of real-world improvisations, we extract the melody track
and cut it into overlapping segments of equal length. From these
fragments, some numerical features are extracted as descriptors of
style samples. We show that a standard Bayesian classifier can be
conveniently employed to build an effective musical style classifier,
once this set of features has been extracted from musical data.
Preliminary experimental results show the effectiveness of the
developed classifier, which represents the first component of a musical
audio retrieval system.
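A standard Gaussian naive Bayes classifier of the kind mentioned above can be sketched as follows; the feature values and style labels are hypothetical placeholders for the numerical descriptors extracted from the MIDI fragments:

```python
import math
from collections import defaultdict

def train_gaussian_nb(samples):
    """samples: list of (feature_vector, style_label).
    Returns per-class prior and per-feature (mean, variance)."""
    by_class = defaultdict(list)
    for x, y in samples:
        by_class[y].append(x)
    model, total = {}, len(samples)
    for y, xs in by_class.items():
        n, dim = len(xs), len(xs[0])
        means = [sum(x[i] for x in xs) / n for i in range(dim)]
        varis = [sum((x[i] - means[i]) ** 2 for x in xs) / n + 1e-6
                 for i in range(dim)]
        model[y] = (n / total, means, varis)
    return model

def classify(model, x):
    """Pick the class with the highest Gaussian log-posterior."""
    def log_post(prior, means, varis):
        lp = math.log(prior)
        for xi, m, v in zip(x, means, varis):
            lp += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        return lp
    return max(model, key=lambda y: log_post(*model[y]))

# Hypothetical melodic descriptors (e.g. mean pitch, mean interval size)
train = [([60.0, 2.1], "jazz"), ([62.0, 2.4], "jazz"),
         ([71.0, 4.8], "classical"), ([69.5, 5.1], "classical")]
model = train_gaussian_nb(train)
```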
Abstract: Star graphs are Cayley graphs of symmetric groups of permutations, with transpositions as the generating sets. A star graph is preferred to a hypercube as an interconnection network topology for its ability to connect a greater number of nodes with lower degree. However, an attractive property of the hypercube is that it has a Hamiltonian decomposition, i.e. its edges can be partitioned into disjoint Hamiltonian cycles, so a simple routing can be found in the case of an edge failure. The existence of Hamiltonian cycles in Cayley graphs has been known for some time. So far, there are no published results on the much stronger condition of the existence of Hamiltonian decompositions. In this paper, we give a construction of a Hamiltonian decomposition of the 5-star, the star graph of degree 4, by defining an automorphism of the 5-star and a Hamiltonian cycle that is edge-disjoint with its image under the automorphism.
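The star graph construction described above (permutations as vertices, transpositions involving the first position as generators) can be sketched directly; for the 5-star this gives 5! = 120 vertices, each of degree 4:

```python
from itertools import permutations

def star_graph(n):
    """Star graph S_n: vertices are the permutations of 0..n-1; vertex u
    is adjacent to each permutation obtained by swapping u[0] with u[i]."""
    vertices = list(permutations(range(n)))
    edges = set()
    for u in vertices:
        for i in range(1, n):
            v = list(u)
            v[0], v[i] = v[i], v[0]
            edges.add(frozenset((u, tuple(v))))
    return vertices, edges

vertices, edges = star_graph(5)
# 5-star: 120 vertices of degree 4, hence 120 * 4 / 2 = 240 edges; a
# Hamiltonian decomposition would split them into two 120-edge cycles.
```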
Abstract: This paper presents the implementation of a new ordering strategy for the Successive Overrelaxation (SOR) scheme on two-dimensional boundary value problems. The strategy involves two alternating sweep directions, from the top and from the bottom of the solution domain. The method is shown to significantly reduce the number of iterations required to converge. Four numerical experiments were carried out to examine the performance of the new strategy.
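A minimal sketch of SOR with the alternating sweep ordering described above, applied to the 2D Laplace equation (the grid size, relaxation factor, and boundary values are illustrative assumptions, not the paper's test problems):

```python
def sor_alternating(grid, omega=1.5, tol=1e-8, max_iter=10000):
    """SOR for the 2D Laplace equation with fixed boundary values,
    alternating the row sweep direction each iteration: top-to-bottom on
    even iterations, bottom-to-top on odd ones."""
    rows, cols = len(grid), len(grid[0])
    for it in range(max_iter):
        order = range(1, rows - 1) if it % 2 == 0 else range(rows - 2, 0, -1)
        max_change = 0.0
        for i in order:
            for j in range(1, cols - 1):
                # Gauss-Seidel value, then over-relax toward it
                gs = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                             + grid[i][j - 1] + grid[i][j + 1])
                new = grid[i][j] + omega * (gs - grid[i][j])
                max_change = max(max_change, abs(new - grid[i][j]))
                grid[i][j] = new
        if max_change < tol:
            return it + 1  # iterations used
    return max_iter

# Unit square: top boundary held at 1, the other boundaries at 0
n = 10
grid = [[1.0] * n] + [[0.0] * n for _ in range(n - 1)]
iterations = sor_alternating(grid)
```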
Abstract: Recently, information security has become a key issue
in information technology, as computers are exposed to an increasing
number of security threats and breaches. A variety of intrusion
detection systems (IDS) have been employed in recent decades for
protecting computers and networks from malicious network-based or
host-based attacks, using approaches ranging from traditional
statistical methods to new data mining techniques. However, today's
commercially available intrusion detection systems are signature-based
and thus not capable of detecting unknown attacks. In this paper, we present a
new learning algorithm for an anomaly-based network intrusion
detection system, using a decision tree algorithm that distinguishes
attacks from normal behaviors and identifies different types of
intrusions. Experimental results on the KDD99 benchmark network
intrusion detection dataset demonstrate that the proposed learning
algorithm achieved a 98% detection rate (DR) in comparison with
other existing methods.
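The heart of a decision tree learner such as the one described above is choosing splits by information gain; a minimal sketch, with hypothetical connection-record features rather than the actual KDD99 attributes:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, labels, feature, threshold):
    """Gain of splitting the records on rows[i][feature] <= threshold."""
    left = [y for x, y in zip(rows, labels) if x[feature] <= threshold]
    right = [y for x, y in zip(rows, labels) if x[feature] > threshold]
    if not left or not right:
        return 0.0  # degenerate split carries no information
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) \
        + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

# Hypothetical connection records: (duration_s, bytes_sent) -> class
rows = [(0.1, 200), (0.2, 180), (5.0, 90000), (4.2, 85000)]
labels = ["normal", "normal", "attack", "attack"]
gain = information_gain(rows, labels, feature=1, threshold=1000)
```

A tree is grown by repeatedly picking the feature/threshold pair with the highest gain; here the bytes-sent split separates the two classes perfectly, so the gain equals the full one bit of label entropy.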
Abstract: LES with a mixed subgrid-scale model has been used to
simulate the aerodynamic performance of a hypersonic configuration. The
simulation was conducted to replicate the conditions and geometry of a
model that had been previously tested. The LES model successfully
predicted the pressure coefficient with a maximum error of 1.5%, except
on the afterbody; under the high Mach number condition, however, its
predictive ability is poor and it produces a 12.5% error. The
calculation error is mainly caused by the swirling-flow distribution.
The poor performance at high Mach number and in the afterbody region
indicates that the mixed subgrid-scale model should be improved for
large eddies, especially in hypersonic separated regions. Under
angle-of-attack and sideslip flight conditions, the calculated results
exhibit waves. LES successfully predicts the pressure waves in
hypersonic flow.
Abstract: In this paper we propose a multistage adaptive
ARQ/HARQ/HARQ scheme. This method combines a pure ARQ
(Automatic Repeat reQuest) mode at low channel bit error rates with a
hybrid ARQ method using two different Reed-Solomon codes under
middle and high error rate conditions. Our scheme therefore has
three stages. The main goal is to increase the number of states in
adaptive HARQ methods and thereby achieve maximum throughput at
every channel bit error rate. We verify the proposal by
calculation and then by simulations in a land mobile satellite channel
environment. Optimization of the scheme's system parameters is described
in order to maximize the throughput over the whole defined Signal-to-
Noise Ratio (SNR) range in the selected channel environment.
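A rough sketch of the throughput comparison motivating the adaptive scheme, under simplifying assumptions (selective-repeat ARQ, independent bit errors, 8-bit Reed-Solomon symbols; the frame size and RS(255, 223) code are illustrative choices, not the paper's parameters):

```python
from math import comb

def frame_error_rate(ber, frame_bits):
    """Probability of at least one bit error in a frame
    (independent bit errors assumed)."""
    return 1.0 - (1.0 - ber) ** frame_bits

def arq_throughput(ber, frame_bits):
    """Pure selective-repeat ARQ: only erroneous frames are resent, so
    throughput equals the frame success probability."""
    return 1.0 - frame_error_rate(ber, frame_bits)

def harq_rs_throughput(ber, n, k, t):
    """Type-I HARQ with an RS(n, k) code over 8-bit symbols correcting up
    to t symbol errors: rate k/n times the decoding success probability."""
    ser = 1.0 - (1.0 - ber) ** 8  # symbol error rate
    p_ok = sum(comb(n, e) * ser ** e * (1.0 - ser) ** (n - e)
               for e in range(t + 1))
    return (k / n) * p_ok

# Pure ARQ wins at low BER (no code-rate overhead); HARQ wins at high
# BER -- which is what motivates switching between modes adaptively.
t_arq_low = arq_throughput(1e-6, 2048)
t_harq_low = harq_rs_throughput(1e-6, 255, 223, 16)
t_arq_high = arq_throughput(1e-3, 2048)
t_harq_high = harq_rs_throughput(1e-3, 255, 223, 16)
```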
Abstract: Heat pipes are used to manage thermal problems in
electronic cooling. Dissipating heat to a heat sink is especially
difficult in a space environment compared to Earth. To address this
problem, in this study the Poiseuille (Po) number, which is the main
measure of the performance of a heat pipe, is studied by CFD; the
heat pipe performance is then verified against experimental results. A heat
pipe is then fabricated for a spatial environment, and an in-house code
is developed. Further, a heat pipe subsystem, which consists of a heat
pipe, MLI (Multi Layer Insulator), SSM (Second Surface Mirror), and
radiator, is tested and correlated with the TMM (Thermal
Mathematical Model) through a commercial code. The correlation
results satisfy the 3K requirement, and the generated thermal model is
verified for application to a spatial environment.
Abstract: This paper presents a novel iris recognition system
using a 1D log-polar Gabor wavelet and Euler numbers. The 1D log-polar
Gabor wavelet is used to extract the textural features, and Euler
numbers are used to extract topological features of the iris. The
proposed decision strategy uses these features to authenticate an
individual's identity while maintaining a low false rejection rate. The
algorithm was tested on the CASIA iris image database and found to
perform better than existing approaches, with an overall accuracy of
99.93%.
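The Euler number used above as a topological feature (number of connected components minus number of holes) can be computed for a binary image by counting components, as in this stdlib-only sketch:

```python
def euler_number(img):
    """Euler number = (# 8-connected foreground objects)
                    - (# 4-connected holes in the background)."""
    rows, cols = len(img), len(img[0])

    def components(cells, neighbours):
        """Count connected components of a cell set by flood fill."""
        seen, count = set(), 0
        for start in cells:
            if start in seen:
                continue
            count += 1
            stack = [start]
            seen.add(start)
            while stack:
                i, j = stack.pop()
                for di, dj in neighbours:
                    nb = (i + di, j + dj)
                    if nb in cells and nb not in seen:
                        seen.add(nb)
                        stack.append(nb)
        return count

    fg = {(i, j) for i in range(rows) for j in range(cols) if img[i][j]}
    # Pad the background so all outside regions merge into one component
    bg = {(i, j) for i in range(-1, rows + 1) for j in range(-1, cols + 1)
          if (i, j) not in fg}
    eight = [(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1) if (a, b) != (0, 0)]
    four = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    objects = components(fg, eight)
    holes = components(bg, four) - 1  # subtract the outer background
    return objects - holes

ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]  # one object enclosing one hole: Euler number 0
dot = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]  # one object, no holes: Euler number 1
e_ring, e_dot = euler_number(ring), euler_number(dot)
```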
Abstract: Many agricultural and especially greenhouse tasks,
such as plant inspection, data gathering, spraying, and
selective harvesting, could be performed by robots. In this paper,
multiple nonholonomic robots are used to create a desired
formation scheme for screening solar energy in a greenhouse through
data gathering. The formation consists of a leader and a team
member equipped with appropriate sensors. Each robot is dedicated
to its mission in the greenhouse, which is predefined by the
requirements of the application. The feasibility of the proposed
application is demonstrated by experimental results with three
unmanned ground vehicles (UGVs).
Abstract: Advances in clinical medical imaging have brought about the routine production of vast numbers of medical images that need to be analyzed. As a result, an enormous amount of computer vision research effort has been targeted at achieving automated medical image analysis. Computed Tomography (CT) is highly accurate for diagnosing liver tumors. This study aimed to evaluate the potential role of the wavelet transform and the neural network in the differential diagnosis of liver tumors in CT images. The tumors considered in this study are hepatocellular carcinoma, cholangiocarcinoma, hemangioma and hepatoadenoma. Each suspicious tumor region was automatically extracted from the CT abdominal images, and the textural information obtained was used to train the Probabilistic Neural Network (PNN) to classify the tumors. The results obtained were evaluated with the help of radiologists. The system differentiates the tumors with relatively high accuracy and is therefore clinically useful.
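A Probabilistic Neural Network reduces to a Parzen-window classifier: each class score averages Gaussian kernels centered on that class's training patterns. A minimal sketch with hypothetical two-dimensional texture descriptors (not the study's features):

```python
import math
from collections import defaultdict

def pnn_classify(train, x, sigma=1.0):
    """PNN decision: for each class, average the Gaussian kernel between x
    and that class's training patterns; the highest-scoring class wins."""
    kernels = defaultdict(list)
    for features, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(features, x))
        kernels[label].append(math.exp(-d2 / (2 * sigma ** 2)))
    return max(kernels, key=lambda y: sum(kernels[y]) / len(kernels[y]))

# Hypothetical texture descriptors per extracted tumor region
train = [([0.2, 0.9], "hemangioma"), ([0.3, 0.8], "hemangioma"),
         ([0.9, 0.1], "hepatocellular carcinoma"),
         ([0.8, 0.2], "hepatocellular carcinoma")]
```

The smoothing width sigma is the PNN's single tunable parameter; it controls how sharply each training pattern votes for its class.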
Abstract: A Cs-type nanocomposite zeolite membrane was successfully synthesized on an alumina ceramic hollow fibre with a mean outer diameter of 1.7 mm, a mean wall thickness of 230 μm, and an average crossing pore size smaller than 0.2 μm; a cesium cation exchange test was carried out inside the test module. The n-butane/H2 separation factor obtained, close to 20, indicates a relatively high membrane quality. Maxwell-Stefan modeling yields an equivalent thickness lower than 1 µm. For comparison, an application to CO2/N2 separation was carried out, reaching separation factors close to 4 and 18 before and after cation exchange, respectively, on the H-zeolite membrane formed within the pores of the ceramic alumina substrate.
Abstract: It is important to predict yield in the semiconductor test process in order to increase yield. In this study, yield prediction means identifying defective dies, wafers, or lots effectively. The semiconductor test process consists of several test steps, and each test includes various test items. In other words, the test data are large and complicated. They are also disproportionately distributed, as the number of data points belonging to the FAIL class is extremely low. For yield prediction, general data mining techniques are limited without data preprocessing, due to these inherent properties of the test data. Therefore, this study proposes an under-sampling method using a support vector machine (SVM) to eliminate the imbalanced characteristic. To evaluate performance, random under-sampling is compared with the proposed method on actual semiconductor test data. As a result, the sampling method using SVM proves effective in generating a robust model for yield prediction.
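For contrast with the SVM-guided selection the abstract proposes, the random under-sampling baseline it is compared against can be sketched in a few lines (class names and sizes are illustrative):

```python
import random

def undersample(samples, labels, minority="FAIL", seed=0):
    """Randomly shrink the majority class down to the minority class size,
    producing a balanced training set."""
    rng = random.Random(seed)
    minority_set = [(x, y) for x, y in zip(samples, labels) if y == minority]
    majority_set = [(x, y) for x, y in zip(samples, labels) if y != minority]
    kept = rng.sample(majority_set, len(minority_set))
    balanced = minority_set + kept
    rng.shuffle(balanced)
    return balanced

# Illustrative imbalanced data: 5 FAIL samples against 95 PASS samples
samples = [[i] for i in range(100)]
labels = ["FAIL" if i < 5 else "PASS" for i in range(100)]
balanced = undersample(samples, labels)
```

The paper's contribution is replacing the `rng.sample` step with an SVM-based choice of which majority samples to keep.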
Abstract: This paper presents the optimum design for a double-stator,
cup-rotor machine, a novel type of BLDC PM machine. The optimization approach is divided into two stages: the first stage is
calculating the machine configuration using Matlab, and the second stage is the optimization of the machine using Finite Element
Modeling (FEM). Under the design specifications, the machine
model is selected from three pole numbers, namely 8, 10, and 12, with an appropriate slot number. A double-stator brushless DC
permanent magnet machine is designed to achieve low cogging torque, high electromagnetic torque, and low ripple torque.
Abstract: The main objective of this study is to test the
relationship between a number of variables representing firm
characteristics (market-related variables) and the extent of voluntary
disclosure levels (forward-looking disclosure) in the annual reports of
Egyptian firms listed on the Egyptian Stock Exchange. The results
show that audit firm size is significantly positively correlated (in
all three years) with the level of forward-looking disclosure.
However, the industry type variable (which comprises industrial,
cement, construction, petrochemical, and service sectors) is found to
be insignificantly associated with the level of forward-looking
information disclosed in the annual reports in all three years.
Abstract: Eukaryotic protein-coding genes are interrupted by spliceosomal introns, which are removed from the RNA transcripts before translation into a protein. The exon-intron structures of different eukaryotic species are quite different from each other, and the evolution of such structures raises many questions. We try to address some of these questions using statistical analysis of whole genomes. We go through all the protein-coding genes in a genome and study correlations between the net length of all the exons in a gene, the number of exons, and the average length of an exon. We also take average values of these features for each chromosome and study correlations between those averages at the chromosomal level. Our data show universal features of exon-intron structures common to animals, plants, and protists (specifically, Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Cryptococcus neoformans, Homo sapiens, Mus musculus, Oryza sativa, and Plasmodium falciparum). We have verified a linear correlation between the number of exons in a gene and the length of the protein encoded by the gene, with protein length increasing in proportion to the number of exons. On the other hand, the average length of an exon always decreases with the number of exons. Finally, chromosome clustering based on average chromosome properties and on the parameters of a linear regression between the number of exons in a gene and the net length of those exons demonstrates that these average chromosome properties are genome-specific features.
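The gene-level correlation described above (number of exons vs. protein length) is an ordinary Pearson correlation; a minimal stdlib sketch with hypothetical per-gene counts, not genome data:

```python
import math

def pearson(xs, ys):
    """Pearson linear correlation coefficient between two equal-length
    numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-gene values: exon count vs protein length (residues)
exon_counts = [2, 3, 5, 8, 12, 20]
protein_lengths = [180, 260, 450, 700, 1100, 1900]
r = pearson(exon_counts, protein_lengths)
# r near +1 corresponds to the near-proportional relationship reported
```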
Abstract: Coronary artery bypass grafts (CABG) are widely
studied with respect to the hemodynamic conditions, which play an
important role in the development of restenosis. However, papers
concerned with the constitutive modeling of CABG are lacking in the
literature. The purpose of this study is to find a constitutive model
for CABG tissue. A sample of CABG obtained during an autopsy
underwent an inflation-extension test. Displacements were
recorded by CCD cameras and subsequently evaluated by digital
image correlation. Pressure-radius and axial force-elongation
data were used to fit the material model. The tissue was modeled as a
one-layered composite reinforced by two families of helical fibers. The
material is assumed to be locally orthotropic, nonlinear,
incompressible, and hyperelastic. Material parameters are estimated
for two strain energy functions (SEF). The first is the classical
exponential SEF; the second is a logarithmic SEF, which allows
interpretation in terms of limiting (finite) strain extensibility.
The presented material parameters are estimated by optimization based
on the radial and axial equilibrium equations of a thick-walled tube.
Both material models fit the experimental data successfully. The
exponential model fits the relationship between axial force and axial
strain significantly better than the logarithmic one.