Abstract: To study the effect of seed inoculation
with Pseudomonas putida+Bacillus lentus on the yield and yield
components of wheat (Triticum aestivum L.) cultivars, an experiment
was carried out as a factorial based on a Randomized Complete Block
Design (RCBD) at the Agricultural Research Station of Shahrood
University of Technology. The results showed that inoculation with
Pseudomonas putida+Bacillus lentus promoted seed germination.
Inoculation with Pseudomonas putida+Bacillus lentus also
significantly affected grain yield, the number of spikes per m²,
the number of grains per spike, and the 1000-seed weight, while there
was no statistically significant difference between the Chamran and
Pishtaz cultivars. Finally, the dosages of chemical fertilizers currently
applied in commercial wheat fields in Iran (Shahrood region) could be
reduced through a proper combination of Pseudomonas
putida+Bacillus lentus inoculation and fertilization.
Abstract: Selective oxidation of H2S to elemental sulfur in a
fixed bed reactor over newly synthesized alumina nanocatalysts was
physicochemically investigated, and the results were compared with a
commercial Claus catalyst. Among these new materials, Al2O3-
supported sodium oxide prepared by a wet chemical technique and
an Al2O3 nanocatalyst prepared by the spray pyrolysis method were the
most active catalysts for the selective oxidation of H2S to elemental
sulfur. The other prepared nanocatalysts were quickly deactivated,
mainly due to the interaction with H2S and conversion into sulfides.
Abstract: Conventional concentrically-braced frame (CBF)
systems have limited drift capacity before brace buckling and related
damage leads to deterioration in strength and stiffness. Self-centering
concentrically-braced frame (SC-CBF) systems have been developed
to increase drift capacity prior to initiation of damage and minimize
residual drift. SC-CBFs differ from conventional CBFs in that the
SC-CBF columns are designed to uplift from the foundation at a
specified level of lateral loading, initiating a rigid-body rotation
(rocking) of the frame. Vertically-aligned post-tensioning bars resist
uplift and provide a restoring force to return the SC-CBF columns to
the foundation (self-centering the system). This paper presents a
parametric study of different prototype buildings using SC-CBFs.
The bay widths of the SC-CBFs have been varied in these buildings
to study different geometries. Nonlinear numerical analyses of the
different SC-CBFs are presented to illustrate the effect of frame
geometry on the behavior and dynamic response of the SC-CBF
system.
Abstract: Segmentation, filtering out of measurement errors and
identification of breakpoints are integral parts of any analysis of
microarray data for the detection of copy number variation (CNV).
Existing algorithms designed for these tasks have had some successes
in the past, but they tend to be O(N²) in either computation time or
memory requirement, or both, and the rapid advance of microarray
resolution has practically rendered such algorithms useless. Here we
propose an algorithm, SAD, that is much faster and far less memory-hungry,
being O(N) in both computation time and memory requirement,
and offers higher accuracy. The two key ingredients of SAD are the
fundamental assumption in statistics that measurement errors are
normally distributed and the mathematical relation that the product of
two Gaussians is another Gaussian (function). We have produced a
computer program for analyzing CNV based on SAD. In addition to
being fast and small it offers two important features: quantitative
statistics for predictions and, with only two user-decided parameters,
ease of use. Its speed shows little dependence on genomic profile.
Running on an average modern computer, it completes CNV analyses
for a 262 thousand-probe array in ~1 second and a 1.8 million-probe
array in 9 seconds.
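The Gaussian-product identity named above as SAD's second ingredient can be illustrated in a few lines (a generic sketch of the identity itself, not the authors' program):

```python
# The product of two Gaussian densities N(m1, v1) and N(m2, v2) is
# proportional to another Gaussian N(m, v). Folding each new probe
# measurement into a running estimate this way costs O(1) per probe,
# consistent with the O(N) behavior claimed for SAD.
def gaussian_product(m1, v1, m2, v2):
    """Mean and variance of the Gaussian proportional to the product."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)   # combined variance (always smaller)
    m = v * (m1 / v1 + m2 / v2)       # precision-weighted mean
    return m, v

m, v = gaussian_product(0.0, 1.0, 2.0, 1.0)
# with equal variances, the mean is the midpoint and the variance halves
```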
Abstract: Graph transformation has recently become more and
more popular as a general visual modeling language to formally state
the dynamic semantics of designed models. In particular, it is a
very natural formalism for languages whose models are essentially
graphs (e.g., UML). Using this technique, we present a highly understandable yet
precise approach to formally model and analyze the behavioral
semantics of UML 2.0 Activity diagrams. In our proposal, AGG is
used to design Activities, then using our previous approach to model
checking graph transformation systems, designers can verify and
analyze the designed Activity diagrams by checking properties of
interest, expressed as combinations of graph rules and LTL (Linear
Temporal Logic) formulas, on the Activities.
Abstract: A generalized Dirichlet to Neumann map is
one of the main aspects characterizing a recently introduced
method for analyzing linear elliptic PDEs, through which it
became possible to couple known and unknown components
of the solution on the boundary of the domain without
solving on its interior. For its numerical solution, a well-conditioned,
quadratically convergent sine-Collocation method
was developed, which yielded a linear system of equations
with the diagonal blocks of its associated coefficient matrix
being point diagonal. This structural property, among others,
initiated interest for the employment of iterative methods for
its solution. In this work we present a conclusive numerical
study for the behavior of classical (Jacobi and Gauss-Seidel)
and Krylov subspace (GMRES and Bi-CGSTAB) iterative
methods when they are applied for the solution of the Dirichlet
to Neumann map associated with Laplace's equation
on regular polygons with the same boundary conditions on
all edges.
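The classical splittings referred to above can be sketched on a toy system; the matrix below is a small diagonally dominant stand-in, not the actual coefficient matrix of the Dirichlet-to-Neumann map:

```python
import numpy as np

# Jacobi iteration: split A = D + R (diagonal plus remainder) and
# repeat x <- D^{-1}(b - R x). Gauss-Seidel differs only in using
# already-updated entries within each sweep; Krylov methods such as
# GMRES and Bi-CGSTAB are available in scipy.sparse.linalg.
def jacobi(A, b, iters=100):
    D = np.diag(A)                    # diagonal of A
    R = A - np.diagflat(D)            # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # diagonally dominant toy matrix
b = np.array([1.0, 2.0])
x = jacobi(A, b)   # converges because rho(D^{-1} R) < 1 here
```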
Abstract: This article presents the results of a quality assessment of
flour and bread made from the grain of spring spelt, also called an
ancient wheat. The spelt was cultivated on heavy and medium soils
following the principles of organic farming. Based on laboratory
studies of the flour and bread, as well as laboratory baking, the
technological usefulness of the studied flour has been determined.
The results were compared against a standard derived from common
wheat cultivated under the same conditions. The grain of spring spelt
is a good raw material for manufacturing bread flour, from which
high-quality bakery products can be obtained, but this depends
strictly on the variety of the ancient wheat.
Abstract: A new approach to promote the generalization ability
of neural networks is presented. It is based on the point of view of
fuzzy theory. This approach is implemented through shrinking or
magnifying the input vector, thereby reducing the difference between
the training set and the testing set. It is called the "shrinking-magnifying
approach" (SMA). In addition, a new algorithm, the α-algorithm, is
presented to find the appropriate shrinking-magnifying factor
(SMF) α and thereby obtain better generalization ability of neural networks.
Quite a few simulation experiments serve to study the effect of SMA
and the α-algorithm. The experimental results are discussed in detail, and
the working principle of SMA is analyzed theoretically. The results of
the experiments and analyses show that the new approach is not only
simpler and easier to apply, but also very effective for many neural
networks and many classification problems. In our experiments, the
improvement in the generalization ability of the neural networks has
reached up to 90%.
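The shrinking/magnifying step itself is simple to state; the following sketch (with an illustrative α, not one produced by the α-algorithm) shows the transformation applied to input vectors:

```python
import numpy as np

# Each input vector is multiplied by a single shrinking-magnifying
# factor alpha: alpha < 1 shrinks the inputs toward the origin,
# alpha > 1 magnifies them, reducing the mismatch between training
# and testing distributions. The value used here is a placeholder;
# in the paper, alpha is chosen by the proposed alpha-algorithm.
def apply_smf(X, alpha):
    return alpha * np.asarray(X, dtype=float)

X_test = np.array([[2.0, -4.0], [1.0, 0.5]])
X_shrunk = apply_smf(X_test, alpha=0.5)   # shrink inputs by half
```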
Abstract: The usual correctness condition for a schedule of
concurrent database transactions is some form of serializability of
the transactions. For general forms, the problem of deciding whether
a schedule is serializable is NP-complete. In those cases other approaches
to proving correctness, using proof rules that allow the steps
of the proof of serializability to be guided manually, are desirable.
Such an approach is possible in the case of conflict serializability
which is proved algebraically by deriving serial schedules using
commutativity of non-conflicting operations. However, conflict serializability
can be an unnecessarily strong form of serializability restricting
concurrency and thereby reducing performance. In practice,
weaker, more general, forms of serializability for extended models of
transactions are used. Currently, there are no known methods using
proof rules for proving those general forms of serializability. In this
paper, we define serializability for an extended model of partitioned
transactions, which we show to be as expressive as serializability
for general partitioned transactions. An algebraic method for proving
general serializability is obtained by giving an initial-algebra specification
of serializable schedules of concurrent transactions in the
model. This demonstrates that it is possible to conduct algebraic
proofs of correctness of concurrent transactions in general cases.
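For contrast with the algebraic approach, the standard precedence-graph test for conflict serializability (the textbook construction mentioned above, not the paper's initial-algebra method) can be sketched as:

```python
# Conflict-serializability test via the classical precedence graph:
# add an edge Ti -> Tj when an operation of Ti conflicts with a later
# operation of Tj (same item, at least one is a write); the schedule
# is conflict-serializable iff the graph is acyclic.
def conflict_serializable(schedule):
    """schedule: list of (txn, op, item) tuples with op in {'r', 'w'}."""
    edges = set()
    for i, (ti, oi, xi) in enumerate(schedule):
        for tj, oj, xj in schedule[i + 1:]:
            if ti != tj and xi == xj and 'w' in (oi, oj):
                edges.add((ti, tj))
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    visited, stack = set(), set()      # black / gray nodes for DFS

    def has_cycle(n):
        if n in stack:
            return True
        if n in visited:
            return False
        visited.add(n)
        stack.add(n)
        if any(has_cycle(m) for m in graph.get(n, ())):
            return True
        stack.discard(n)
        return False

    return not any(has_cycle(n) for n in list(graph))

s1 = [('T1', 'r', 'x'), ('T2', 'w', 'x'), ('T1', 'w', 'y')]   # acyclic
s2 = [('T1', 'r', 'x'), ('T2', 'w', 'x'),
      ('T2', 'r', 'y'), ('T1', 'w', 'y')]                     # T1<->T2 cycle
```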
Abstract: The frontal area in the brain is known to be involved in
behavioral judgement. Because a Kanji character can be discriminated
from other characters both visually and linguistically, we hypothesized
that, in Kanji character discrimination, frontal event-related potential
(ERP) waveforms reflect two discrimination processes in separate
time periods: one based on visual analysis and the other based
on lexical access. To examine this hypothesis, we recorded ERPs
while subjects performed a Kanji lexical decision task. In this task, either a
known Kanji character, an unknown Kanji character or a symbol was
presented and the subject had to report if the presented character was
a known Kanji character for the subject or not. The same response
was required for unknown Kanji trials and symbol trials. As a
preprocessing step, we examined the performance of a method using
independent component analysis (ICA) for artifact rejection, found it
effective, and therefore adopted it. In the ERP results, there
were two time periods in which the frontal ERP waveforms were
significantly different between the unknown Kanji trials and the
symbol trials: around 170 ms and around 300 ms after stimulus onset.
This result supported our hypothesis. In addition, the result suggests
that Kanji character lexical access may be fully completed by around
260 ms after stimulus onset.
Abstract: This paper proposes a bi-objective model for the
facility location problem under a congestion system. The idea of the
model is motivated by applications of locating servers in bank
automated teller machines (ATMs), communication networks, and so
on. This model can be specifically considered for situations in which
fixed service facilities are congested by stochastic demand within a
queueing framework. We formulate this model from two perspectives
simultaneously: (i) customers and (ii) service provider. The
objectives of the model are to minimize (i) the total expected
travelling and waiting time and (ii) the average facility idle-time.
This model represents a mixed-integer nonlinear programming
problem which belongs to the class of NP-hard problems. In addition,
to solve the model, two metaheuristic algorithms, the non-dominated
sorting genetic algorithm (NSGA-II) and the non-dominated ranking
genetic algorithm (NRGA), are proposed. To evaluate the performance
of the two algorithms, several numerical examples are generated and
analyzed with a set of metrics to determine which algorithm performs
better.
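The core ranking step shared by NSGA-II and NRGA, extraction of the non-dominated (Pareto) front, can be sketched as follows; the objective values are illustrative, not taken from the paper's examples:

```python
# Pareto non-dominated sorting for minimization problems: a point
# dominates another if it is no worse in every objective and strictly
# better in at least one; the first front is every non-dominated point.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# illustrative objective pairs:
# (expected travel + waiting time, average facility idle time)
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = nondominated_front(pts)   # (3.0, 4.0) is dominated by (2.0, 3.0)
```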
Abstract: To provide a unified measure of varieties of resources and unified processing of their disposal, this paper puts forth three closely related new basic models: the resources-assembled node, the disposition-integrated node, and the intelligent-organizing node. Three closely related quantities of integrative analytical mechanics are defined: the disposal intensity, the disposal-weighted intensity, and the charge of resources. On this basis, the resources-assembled space, the disposition-integrated space, and the intelligent-organizing space are put forth. The system of fundamental equations and the model of complete-factor synergetics are preliminarily developed for the general situation, forming the analytical basis of complete-factor synergetics. For the essential variables constituting this system of equations, twenty variables are set, relating respectively to the essential dynamical effect, the external synergetic action, and the internal synergetic action of the system.
Abstract: In industry, one of the most important subjects is the die
and its characteristics; for cutting and forming different
mechanical pieces, various punch-and-matrix metal dies are used.
Since the common parts that form the main frame of a die are often
not proportioned to the pieces and dies, using so-called
common parts for frames within specified dimension ranges can
decrease design time, occupied warehouse space, and
manufacturing costs. By making their shapes and dimensions uniform,
die parts become common parts of dies. The common parts of punch-and-matrix
metal dies are the bolster, guide bush, guide pillar, and
shank. In this paper, the common parts and the effective parameters in
selecting each of them are studied as primary information;
then, for the selection and design of the mechanical parts, an
approach based on the Mech. Desk software is introduced and
investigated. By developing this software, the common metal
parts of punch and matrix dies can be standardized. These studies will
be very useful for designers in their work, and their application
offers considerable advantages to manufacturers by decreasing the
space occupied by dies.
Abstract: The conventional GA combined with a local search
algorithm, such as the 2-OPT, forms a hybrid genetic algorithm (HGA)
for the traveling salesman problem (TSP). However, geometric
properties, which are problem-specific knowledge, can be used to
improve the search process of the HGA. Some tour segments (edges)
of a TSP are good, while others may be too long to appear in a short tour.
This knowledge can constrain the GA to work with good tour
segments while considering long tour segments less often.
Consequently, a new algorithm is proposed, called intelligent-OPT
hybrid genetic algorithm (IOHGA), to improve the GA and the 2-OPT
algorithm in order to reduce the search time for the optimal solution.
Based on the geometric properties, all tour segments are assigned
two-level priorities to distinguish between good and bad genes. A
simulation study was conducted to evaluate the performance of the
IOHGA. The experimental results indicate that, in general, the IOHGA
can obtain near-optimal solutions in less time and with better accuracy
than the hybrid genetic algorithm with a simulated annealing algorithm
(HGA(SA)).
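A plain 2-OPT local search, the baseline component the IOHGA refines, can be sketched as follows (the two-level edge priorities themselves are not reproduced here):

```python
import math

# 2-OPT repeatedly reverses a tour segment whenever the reversal
# shortens the tour, until no improving move remains.
def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]   # corners of the unit square
best = two_opt([0, 1, 2, 3], pts)        # initial tour crosses itself
```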
Abstract: Applying knowledge discovery techniques to unstructured
text is termed knowledge discovery in text (KDT), text data mining,
or text mining. The decision tree approach is most useful for
classification problems. With this technique, a tree is constructed
to model the classification process. There are two basic steps in the
technique: building the tree and applying the tree to the database.
This paper describes a proposed C5.0 classifier that applies rulesets,
cross-validation, and boosting to the original C5.0 in order to reduce
the error ratio. The feasibility and the benefits of the proposed
approach are demonstrated on a medical data set, hypothyroid. It is
shown that the performance of a classifier on the training cases from
which it was constructed gives a poor estimate; by sampling or by
using a separate test file, the classifier is instead evaluated on cases
that were not used to build it. If the cases in hypothyroid.data and
hypothyroid.test were shuffled and divided into a new 2772-case
training set and a 1000-case test set, C5.0 might construct a different
classifier with a lower or higher error rate on the test cases. An
important feature of See5 is its ability to generate classifiers called
rulesets; the ruleset has an error rate of 0.5% on the test cases. The
standard errors of the means provide an estimate of the variability of
the results. One way to get a more reliable estimate of predictive
accuracy is f-fold cross-validation: the error rate of a classifier
produced from all the cases is estimated as the ratio of the total
number of errors on the hold-out cases to the total number of cases.
The Boost option with x trials instructs See5 to construct up to
x classifiers in this manner. Trials over numerous datasets, large and
small, show that on average 10-classifier boosting reduces the error
rate for test cases by about 25%.
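The f-fold cross-validation error estimate described above can be sketched generically; a trivial majority-class predictor stands in for C5.0, whose internals are not reproduced here:

```python
# f-fold cross-validation: each case is held out exactly once, and the
# error rate is the total number of errors on the hold-out cases
# divided by the total number of cases.
def cv_error(cases, labels, folds):
    n, errors = len(cases), 0
    for f in range(folds):
        hold = set(range(f, n, folds))                   # hold-out indices
        train = [i for i in range(n) if i not in hold]
        train_labels = [labels[i] for i in train]
        majority = max(set(train_labels), key=train_labels.count)
        errors += sum(labels[i] != majority for i in hold)
    return errors / n

labels = ['neg'] * 9 + ['pos']         # toy imbalanced label set
err = cv_error(list(range(10)), labels, folds=5)
# the single 'pos' case is always mispredicted: 1 error out of 10 cases
```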
Abstract: The paper presents the results of theoretical and
numerical modeling of propagation of shock waves in bubbly liquids
related to nonlinear effects (realistic equation of state, chemical
reactions, two-dimensional effects). On the basis of the Rankine-
Hugoniot equations, the problem of determining the parameters of
passing and reflected shock waves in a gas-liquid medium for
isothermal, adiabatic, and shock compression of the gas component is
solved using the wide-range equation of state of water in
analytic form. The phenomenon of shock wave intensification in a
channel of variable cross-section is investigated for the
propagation of a shock wave in a liquid filled with bubbles
containing chemically active gases. The results of modeling the
impact of a wave impulse on a solid wall covered with a bubble layer
are presented.
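For reference, the Rankine-Hugoniot jump conditions across a shock, written here in generic single-phase form (the paper applies them with a wide-range equation of state for water, not reproduced here), are:

```latex
\rho_1 u_1 = \rho_2 u_2, \qquad
p_1 + \rho_1 u_1^2 = p_2 + \rho_2 u_2^2, \qquad
h_1 + \tfrac{1}{2}u_1^2 = h_2 + \tfrac{1}{2}u_2^2 ,
```

where ρ is density, u the velocity normal to the shock in the shock frame, p pressure, and h specific enthalpy; subscripts 1 and 2 denote the states ahead of and behind the shock.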
Abstract: The argument that self-disclosure will change the
psychoanalytic process into a socio-cultural niche, distorting the
therapeutic alliance and compromising therapeutic effectiveness, is still
a widely held belief amongst many psychotherapists. This paper
considers the issues surrounding culture, disclosure and concealment
since they remain largely untheorized and clinically problematic. The
first part of the paper will critically examine the theory and practice
of psychoanalysis across cultures, and explore the reasons for
culturally diverse patients to conceal rather than disclose their
feelings and thoughts in the transference. This is followed by a
discussion of how immigrant analysts' anonymity is difficult to
maintain, since diverse nationalities, languages, and accents provide
clues to the therapist's and patient's origins. Through personal
clinical examples of one of the authors (who is an immigrant), the paper
analyses the transference-countertransference paradigm and how it
is reflected in the analyst's self-revelation.
Abstract: The objective of this study is to investigate the
combustion in a pilot-ignited supercharged dual-fuel engine, fueled
with different types of gaseous fuels under various equivalence ratios.
It is found that, if certain operating conditions are maintained, the
conventional dual-fuel engine combustion mode can be transformed into
a combustion mode with two-stage heat release. This mode of
combustion was called the PREMIER (PREmixed Mixture Ignition in
the End-gas Region) combustion. During PREMIER combustion,
initially, the combustion progresses as the premixed flame
propagation and then, due to the mixture autoignition in the end-gas
region, ahead of the propagating flame front, the transition occurs with
a rapid increase in the heat release rate.
Abstract: In this paper, we investigate a blind channel estimation method for multi-carrier CDMA systems that uses a subspace decomposition technique. This technique exploits the orthogonality between the noise subspace and the received user codes to obtain the channel of each user. Previously, we used the singular value decomposition (SVD), but the SVD has high computational complexity; in this paper we therefore use an algorithm called the URV decomposition, which serves as an intermediary between the QR decomposition and the SVD, in place of the SVD to track the noise subspace of the received data. The URV decomposition has almost the same estimation performance as the SVD, but less computational complexity.
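The noise-subspace idea can be illustrated with the SVD on synthetic data (a generic sketch; the matrix sizes, the signals, and the URV tracking step are not those of the paper):

```python
import numpy as np

# The SVD of a data matrix splits its column space into a signal
# subspace (large singular values) and a noise subspace (small ones);
# subspace methods then look for the channel vector that is (nearly)
# orthogonal to the noise subspace. Toy dimensions only.
rng = np.random.default_rng(1)
d, n_sig, n_obs = 8, 3, 100
S = rng.standard_normal((d, n_sig)) @ rng.standard_normal((n_sig, n_obs))
X = S + 0.01 * rng.standard_normal((d, n_obs))   # low-rank signal + noise

U, s, _ = np.linalg.svd(X, full_matrices=False)
U_noise = U[:, n_sig:]            # basis of the estimated noise subspace

# any vector in the signal column space is almost orthogonal to U_noise
v = S[:, 0] / np.linalg.norm(S[:, 0])
leakage = np.linalg.norm(U_noise.T @ v)   # small when the split is good
```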
Abstract: This paper discusses a new model of Islamic code of
ethics for directors. Several corporate scandals and collapses, both
local (e.g., Transmile and Megan Media) and overseas (e.g.,
Parmalat and Enron), show that current corporate
governance and regulatory reform are unable to prevent such events
from recurring. Arguably, the code of ethics for directors is under-researched,
and the current code of ethics only concentrates on binding
the work of the employees of the organization as a whole, without
specifically putting direct attention to the directors, the group of
people responsible for the performance of the company. This study
used a semi-structured interview survey of well-known Islamic
scholars such as the Mufti to develop the model. It is expected that
the outcome of the research is a comprehensive model of code of
ethics based on the Islamic principles that can be applied and used by
the company to construct a code of ethics for their directors.