Abstract: A process flowsheet was developed in ChemCad 6.4
to study the effect of feed moisture contents on the pre-esterification
of waste oils. Waste oils were modelled as a mixture of triolein
(90%), oleic acid (5%) and water (5%). The process mainly consisted
of feed drying, pre-esterification reaction and methanol recovery. The
results showed that the process energy requirements would be
minimized when higher degrees of feed drying and higher pre-esterification reaction temperatures are used.
Abstract: To provide a unified measure of diverse resources and a unified treatment of their disposal, this paper puts forward three closely related basic models: the resource-assembled node, the disposition-integrated node, and the intelligent-organizing node. Three closely related quantities of integrative analytical mechanics are defined, namely the disposal intensity, the disposal-weighted intensity, and the resource charge; on this basis the resource-assembled space, the disposition-integrated space, and the intelligent-organizing space are introduced. A system of fundamental equations and a model of complete-factor synergetics are preliminarily developed for the general situation, forming the analytical basis of complete-factor synergetics. As the essential variables constituting this system of equations, twenty variables are set, relating respectively to the essential dynamical effects, the external synergetic actions, and the internal synergetic actions of the system.
Abstract: MiRNAs participate in the regulation of gene translation.
Some studies have investigated the interactions between genes and
intragenic miRNAs. It is important to study the miRNA binding sites
of genes involved in carcinogenesis. RNAHybrid 2.1 and ERNAhybrid
programmes were used to compute the hybridization free
energy of miRNA binding sites. Across the 54 mRNAs analyzed, 22.6%, 37.7%, and 39.7% of miRNA binding sites were located in the 5'UTRs, CDSs, and 3'UTRs, respectively. The density of miRNA binding sites in the 5'UTR was 1.6 to 43.2 times greater than in the CDS and 1.8 to 8.0 times greater than in the 3'UTR. Three types of miRNA interaction with mRNAs were identified: 5'-dominant canonical, 3'-compensatory, and complementary binding sites. MiRNAs regulate gene expression, and information on the
interactions between miRNAs and mRNAs could be useful in
molecular medicine. We recommend that newly described sites
undergo validation by experimental investigation.
Abstract: The aim of this paper is to introduce a parametric
distribution model in fatigue life reliability analysis dealing with
variation in material properties. Service loads, in the form of a response-time history signal of Belgian pavé, were replicated on a multi-axial spindle-coupled road simulator, and the stress-life method was used to
estimate the fatigue life of automotive stub axle. A PSN curve was
obtained by monotonic tension test and two-parameter Weibull
distribution function was used to acquire the mean life of the
component. A Pearson system was developed to evaluate the fatigue
life reliability by considering stress range intercept and slope of the
PSN curve as random variables. Assuming a normal distribution of fatigue strength, the fatigue life of the stub axle is found to have the highest reliability between 10,000 and 15,000 cycles. Taking
into account the variation of material properties associated with the
size effect, machining and manufacturing conditions, the method
described in this study can be effectively applied in determination of
probability of failure of mass-produced parts.
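To make the role of the two-parameter Weibull distribution concrete, the sketch below computes a mean life and a reliability value in Python; the scale and shape parameter values, and the helper names, are hypothetical illustrations, not values from the study.

```python
import math

def weibull_mean_life(eta, beta):
    """Mean life of a two-parameter Weibull distribution:
    E[N] = eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

def weibull_reliability(n, eta, beta):
    """Probability that fatigue life exceeds n cycles: R(n) = exp(-(n/eta)^beta)."""
    return math.exp(-((n / eta) ** beta))

# Hypothetical characteristic life (scale) and shape parameters.
eta, beta = 14000.0, 2.0
print(round(weibull_mean_life(eta, beta)))            # mean life in cycles
print(weibull_reliability(12000.0, eta, beta))        # survival probability at 12,000 cycles
```

In a reliability analysis such as the one described, eta and beta would themselves be fitted from the PSN-curve data rather than assumed.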
Abstract: In industry, one of the most important subjects is the die and its characteristics: various punch-and-matrix metal dies are used for cutting and forming different mechanical pieces. However, the common parts that form the main frame of a die are often not proportioned to the pieces and dies, so using so-called common parts for frames within specified dimension ranges can reduce design time, occupied warehouse space, and manufacturing costs. By making their shapes and dimensions uniform, die parts become common parts of dies. The common parts of a punch-and-matrix metal die are the bolster, guide bush, guide pillar and shank. In this paper, the common parts, and the parameters that govern the selection of each of them, are first studied as background information; then an approach to the selection and design of these mechanical parts based on the Mech. Desk. software is introduced and investigated. Developing this software makes it possible to standardize the common metal parts of punch and matrix dies. These studies will be useful to designers in their design work, and their application offers manufacturers considerable advantages in reducing the space occupied by dies.
Abstract: A study of knowledge management systems in businesses showed that most of these businesses provide internet access for their employees so that they can acquire new knowledge via the internet, corporate websites, electronic mail, and electronic learning systems. These organizations apply information technology to knowledge management because of its convenience, time saving, ease of use, accuracy of information, and usefulness of knowledge. The results indicated the following prominent requirements for improving corporate knowledge management systems: 1) management must support the corporate knowledge management system; 2) the goal of corporate knowledge management must be clear; 3) the corporate culture should facilitate the exchange and sharing of knowledge within the organization; 4) the cooperation of personnel at all levels must be obtained; 5) an information technology infrastructure must be provided; and 6) the system must be developed regularly and constantly.
Abstract: High-level and high-velocity flood flows are potentially harmful to bridge piers, as evidenced by many toppled piers, among which single-column piers are considered the most vulnerable. The flood flow characteristic parameters
including drag coefficient, scouring and vortex shedding are built into
a pier-flood interaction model to investigate structural safety against
flood hazards considering the effects of local scouring, hydrodynamic
forces, and vortex-induced resonance vibrations. By extracting the pier-flood simulation results embedded in a neural-network code, two cases of pier toppling that occurred during typhoons were re-examined:
(1) a bridge overcome by flash flood near a mountain side;
(2) a bridge washed off in flood across a wide channel near the
estuary. The modeling procedures and simulations are capable of identifying the probable causes of the toppled bridge piers during heavy floods, which include excessive pier bending moments and resonance in structural vibrations.
Abstract: A conventional GA combined with a local search algorithm, such as 2-OPT, forms a hybrid genetic algorithm (HGA) for the traveling salesman problem (TSP). However, geometric properties, which are problem-specific knowledge, can be used to improve the search process of the HGA. Some tour segments (edges) of a TSP are fine, while some may be too long to appear in a short tour. This knowledge can constrain GAs to work with fine tour segments while considering long tour segments less often.
Consequently, a new algorithm is proposed, called intelligent-OPT
hybrid genetic algorithm (IOHGA), to improve the GA and the 2-OPT
algorithm in order to reduce the search time for the optimal solution.
Based on the geometric properties, all the tour segments are assigned
2-level priorities to distinguish between good and bad genes. A
simulation study was conducted to evaluate the performance of the
IOHGA. The experimental results indicate that in general the IOHGA
could obtain near-optimal solutions with less time and better accuracy
than the hybrid genetic algorithm with simulated annealing algorithm
(HGA(SA)).
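As background, a minimal sketch of the 2-OPT local search underlying the HGA is shown below; the distance matrix and starting tour are toy examples, and an IOHGA-style search would additionally restrict which segments the moves consider according to the 2-level priorities described above.

```python
import itertools

def tour_length(tour, dist):
    """Total length of a closed tour under a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse the segment between two edges while it shortens the tour."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(best)), 2):
            candidate = best[:i] + best[i:j][::-1] + best[j:]
            if tour_length(candidate, dist) < tour_length(best, dist):
                best, improved = candidate, True
    return best

# Four cities on a unit square: the optimal tour visits them in perimeter order.
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
print(tour_length(two_opt([0, 2, 1, 3], dist), dist))  # 4
```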
Abstract: Fossil fuels are the major source for meeting world energy requirements, but their rapidly diminishing reserves and adverse effects on our ecological system are of major concern. Renewable energy utilization is the need of the time to meet future challenges. Ocean energy is one of these promising energy resources. Three-fourths of the earth's surface is covered by the oceans. This enormous energy resource is contained in the oceans' waters, the air above the oceans, and the land beneath them. The renewable energy of the ocean is mainly contained in waves, ocean currents and offshore solar energy. Very few efforts have been made to harness this reliable and predictable resource. Harnessing ocean energy requires detailed knowledge of the underlying governing mathematical equations and their analysis. With the advent of extraordinary computational resources, it is now possible to predict wave climatology in lab simulation. Several techniques have been developed, mostly stemming from numerical analysis of the Navier-Stokes equations. This paper presents a brief overview of such mathematical models and tools for understanding and analyzing wave climatology. Models of the 1st, 2nd and 3rd generations have been developed to estimate wave characteristics and assess power potential. A brief overview of available wave energy technologies is also given. A novel concept for an on-shore wave energy extraction method is presented at the end. The concept is based upon total energy conservation, where the energy of the wave is transferred to a flexible converter to increase its kinetic energy. Squeezing action by the external pressure on the converter body results in increased velocities at the discharge section. The high velocity head can then be used for energy storage or for direct power generation. This converter utilizes both the potential and kinetic energy of the waves and is designed for on-shore or near-shore application. Increased wave height at the shore due to shoaling increases the potential energy of the waves, which is converted to renewable energy. This approach results in an economical wave energy converter, owing to near-shore installation and denser waves due to shoaling, and the method is more efficient because it taps both the potential and kinetic energy of the waves.
Abstract: Text mining is the application of knowledge discovery techniques to unstructured text, also termed knowledge discovery in text (KDT) or text data mining. The decision tree approach is most useful for classification problems: with this technique, a tree is constructed to model the classification process. There are two basic steps in the technique: building the tree and applying the tree to the database. This paper describes a proposed C5.0 classifier that applies rulesets, cross-validation and boosting to the original C5.0 in order to reduce the error rate. The feasibility and benefits of the proposed approach are demonstrated on a medical data set, the hypothyroid data. It is shown that the performance of a classifier on the training cases from which it was constructed gives a poor estimate; by sampling or by using a separate test file, the classifier is instead evaluated on cases that were not used to build it. If the cases in hypothyroid.data and hypothyroid.test were shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to generate classifiers called rulesets; the ruleset has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of results. One way to get a more reliable estimate of predictive accuracy is f-fold cross-validation: the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner; trials over numerous datasets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
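The f-fold cross-validation estimate described above (total errors on the hold-out cases divided by the total number of cases) can be sketched as follows; the majority-class learner is only a stand-in for illustration, since C5.0 itself is not reimplemented here.

```python
def cross_val_error(cases, labels, train_fn, k=10):
    """f-fold cross-validation: train on k-1 folds, count errors on the
    hold-out fold, and return total hold-out errors / total cases."""
    n = len(cases)
    errors = 0
    for fold in range(k):
        hold = [i for i in range(n) if i % k == fold]
        train = [i for i in range(n) if i % k != fold]
        clf = train_fn([cases[i] for i in train], [labels[i] for i in train])
        errors += sum(clf(cases[i]) != labels[i] for i in hold)
    return errors / n

def majority_classifier(X, y):
    """Trivial stand-in learner: always predicts the most common training label."""
    common = max(set(y), key=y.count)
    return lambda x: common

# Toy data: 8 negative cases and 2 positive cases.
X = list(range(10))
y = [0] * 8 + [1] * 2
print(cross_val_error(X, y, majority_classifier, k=5))  # 0.2
```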
Abstract: The paper presents the results of theoretical and
numerical modeling of propagation of shock waves in bubbly liquids
related to nonlinear effects (realistic equation of state, chemical
reactions, two-dimensional effects). On the basis of the Rankine-Hugoniot equations, the problem of determining the parameters of transmitted and reflected shock waves in a gas-liquid medium for isothermal, adiabatic and shock compression of the gas component is solved using the wide-range equation of state of water in analytic form. The phenomenon of shock wave intensification is
investigated in the channel of variable cross section for the
propagation of a shock wave in the liquid filled with bubbles
containing chemically active gases. The results of modeling of the
wave impulse impact on the solid wall covered with bubble layer are
presented.
Abstract: In this paper, the potential use of an exponential
hidden Markov model to model a hidden pavement deterioration
process, i.e. one that is not directly measurable, is investigated. It is
assumed that the evolution of the physical condition, which is the
hidden process, and the evolution of the values of pavement distress
indicators, can be adequately described using discrete condition states
and modeled as Markov processes. It is also assumed that condition
data can be collected by visual inspections over time and represented
continuously using an exponential distribution. The advantage of using such a model in the decision-making process is illustrated through an empirical study using real-world data.
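A minimal sketch of the likelihood computation for such a model, the forward algorithm with exponential emission densities, is given below; the two-state transition matrix, initial distribution, and rates are hypothetical and only illustrate the idea of discrete hidden condition states emitting continuous distress readings.

```python
import math

def forward_loglik(obs, pi, A, rates):
    """Forward algorithm for a hidden Markov model whose hidden states are
    discrete condition states and whose observations (distress readings)
    follow state-specific exponential distributions."""
    def emit(state, x):
        lam = rates[state]
        return lam * math.exp(-lam * x)   # exponential density

    alpha = [pi[s] * emit(s, obs[0]) for s in range(len(pi))]
    for x in obs[1:]:
        alpha = [emit(s, x) * sum(alpha[r] * A[r][s] for r in range(len(pi)))
                 for s in range(len(pi))]
    return math.log(sum(alpha))

# Hypothetical two-state deterioration model: 'good' and 'poor' condition.
pi = [0.9, 0.1]
A = [[0.8, 0.2],    # good -> good/poor (deterioration only progresses)
     [0.0, 1.0]]    # poor is absorbing
rates = [2.0, 0.5]  # 'good' yields small distress readings, 'poor' larger ones
print(forward_loglik([0.1, 0.4, 2.5], pi, A, rates))
```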
Abstract: The objective of this study is to investigate the
combustion in a pilot-ignited supercharged dual-fuel engine, fueled
with different types of gaseous fuels under various equivalence ratios.
It is found that, if certain operating conditions are maintained, the conventional dual-fuel engine combustion mode can be transformed into a combustion mode with two-stage heat release. This mode of combustion is called PREMIER (PREmixed Mixture Ignition in the End-gas Region) combustion. During PREMIER combustion,
initially, the combustion progresses as the premixed flame
propagation and then, due to the mixture autoignition in the end-gas
region, ahead of the propagating flame front, the transition occurs with
the rapid increase in the heat release rate.
Abstract: Iran is one of the greatest producers of dates in the world. However, due to a lack of information about their viscoelastic properties, much of the production is downgraded during harvesting and postharvest processes. In this study, the effects of temperature and moisture content of the product on stress relaxation characteristics were investigated. Freshly harvested dates (Kabkab) at the tamar stage were put in a controlled environment chamber to obtain different temperature levels (25, 35, 45, and 55 °C) and moisture contents (8.5, 8.7, 9.2, 15.3, 20, 32.2 % d.b.). A texture
analyzer TAXT2 (Stable Microsystems, UK) was used to apply
uniaxial compression tests. A chamber capable to control temperature
was designed and fabricated around the plunger of texture analyzer to
control the temperature during the experiment. As a new approach a
CCD camera (A4tech, 30 fps) was mounted on a cylindrical glass
probe to scan and record contact area between date and disk.
Afterwards, pictures were analyzed using image processing toolbox
of Matlab software. Individual date fruit was uniaxially compressed
at speed of 1 mm/s. The constant strain of 30% of thickness of date
was applied to the horizontally oriented fruit. To select a suitable model for describing the stress relaxation of dates, experimental data were fitted with three well-known stress relaxation models: the generalized Maxwell, Nussinovitch, and Peleg models. The constants in the mentioned models were determined and correlated with the temperature and moisture content of the product using non-linear regression analysis. It was found that the generalized Maxwell and Nussinovitch models describe the viscoelastic characteristics of date fruits better than the Peleg model.
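For reference, the generalized Maxwell stress relaxation model mentioned above has the form sigma(t) = sigma_e + sum_i sigma_i * exp(-t / tau_i); the sketch below evaluates a two-element version with hypothetical parameters (the study fits such constants to measured data by non-linear regression).

```python
import math

def generalized_maxwell(t, sigma_e, terms):
    """Generalized Maxwell stress relaxation:
    sigma(t) = sigma_e + sum_i sigma_i * exp(-t / tau_i),
    where terms is a list of (sigma_i, tau_i) pairs."""
    return sigma_e + sum(s * math.exp(-t / tau) for s, tau in terms)

# Hypothetical parameters: equilibrium stress plus two Maxwell elements.
params = (0.5, [(1.2, 2.0), (0.8, 15.0)])

# Instantaneous stress is the sum of all element stresses...
print(generalized_maxwell(0.0, *params))
# ...and the stress decays toward the equilibrium value at long times.
print(generalized_maxwell(200.0, *params))
```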
Abstract: The ideal sinc filter, ignoring the noise statistics, is often
applied for generating an arbitrary sample of a bandlimited signal by
using the uniformly sampled data. In this article, an optimal interpolator is proposed; it reaches a minimum mean square error (MMSE)
at its output in the presence of noise. The resulting interpolator is
thus a Wiener filter, and both the optimal infinite impulse response
(IIR) and finite impulse response (FIR) filters are presented. The mean square errors (MSEs) for interpolators with impulse responses of different lengths are obtained by computer simulations; the results show that the MSEs of the proposed interpolators of reasonable length are improved by about 0.4 dB under flat power spectra in a noisy environment with a signal-to-noise power ratio (SNR) of 10 dB. As expected, the results also demonstrate improved MSEs for various fractional delays of the optimal interpolator compared with the ideal sinc filter under a fixed-length impulse response.
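For context, the ideal sinc interpolator that serves as the baseline can be sketched as a truncated filter evaluating a fractional delay; the Wiener (MMSE) design additionally uses the noise statistics and is not reproduced here. The test signal, delay, and tap count below are illustrative.

```python
import math

def sinc_fractional_delay(samples, n, d, taps=32):
    """Estimate x(n + d), 0 < d < 1, from uniformly sampled data by
    truncating the ideal sinc interpolation filter to `taps` coefficients."""
    half = taps // 2
    acc = 0.0
    for k in range(n - half + 1, n + half + 1):
        if 0 <= k < len(samples):
            arg = math.pi * ((n + d) - k)   # never zero for fractional d
            acc += samples[k] * math.sin(arg) / arg
    return acc

# Bandlimited test signal sampled at unit rate (frequency well below Nyquist).
x = [math.sin(0.3 * t) for t in range(64)]
est = sinc_fractional_delay(x, 32, 0.5)
print(abs(est - math.sin(0.3 * 32.5)))  # small residual truncation error
```

The residual error of the truncated sinc, which grows in the presence of noise, is exactly what the proposed Wiener interpolator is designed to minimize.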
Abstract: Lean production (or lean management respectively)
gained popularity in several waves. The last three decades have been
filled with numerous attempts to apply these concepts in companies.
However, this has only been partially successful. The roots of lean production can be traced back to Toyota's just-in-time production. This concept, which according to the research of Womack, Jones and Roos at MIT was employed by Japanese car manufacturers, became popular under its international names "lean production" and "lean manufacturing", and was termed "Schlanke Produktion" in Germany. This contribution presents a review of lean production in Germany over the last thirty years: development, trial & error, and implementation as well.
Abstract: In the recent past, there has been an increasing interest
in applying evolutionary methods to Knowledge Discovery in
Databases (KDD) and a number of successful applications of Genetic
Algorithms (GA) and Genetic Programming (GP) to KDD have been
demonstrated. The most predominant representation of the
discovered knowledge is the standard Production Rules (PRs) in the
form If P Then D. The PRs, however, are unable to handle
exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski & Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an
augmented production rule of the form:
If P Then D Unless C, where C (Censor) is an exception to the rule.
Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type, we are free to ignore the exception conditions when the resources needed to establish their presence are tight or when there is simply no information available as to whether they hold or not. Thus, the 'If P Then D' part of the CPR expresses
important information, while the Unless C part acts only as a switch
and changes the polarity of D to ~D.
This paper presents a classification algorithm based on an evolutionary approach that discovers comprehensible rules with exceptions in the form of CPRs.
The proposed approach has flexible chromosome encoding, where
each chromosome corresponds to a CPR. Appropriate genetic
operators are suggested and a fitness function is proposed that
incorporates the basic constraints on CPRs. Experimental results are
presented to demonstrate the performance of the proposed algorithm.
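A CPR and a toy fitness measure can be sketched as follows; the record format, the bird/penguin example, and the fitness definition are hypothetical illustrations, since the paper's actual fitness function incorporates further constraints on CPRs (such as rewarding generality and rarely-firing censors).

```python
def cpr_predict(record, P, d, C):
    """Censored Production Rule 'If P Then d Unless C':
    fires only when P holds; the censor C flips the decision d to its negation."""
    if not P(record):
        return None              # rule does not fire on this record
    return (not d) if C(record) else d

def cpr_fitness(data, P, d, C):
    """Toy fitness: fraction of fired records classified correctly."""
    fired = [(r, y) for r, y in data if P(r)]
    if not fired:
        return 0.0
    correct = sum(cpr_predict(r, P, d, C) == y for r, y in fired)
    return correct / len(fired)

# Hypothetical labelled records for 'If bird Then flies Unless penguin'.
data = [({"bird": True, "penguin": False}, True),
        ({"bird": True, "penguin": False}, True),
        ({"bird": True, "penguin": True}, False),
        ({"bird": False, "penguin": False}, False)]

P = lambda r: r["bird"]
C = lambda r: r["penguin"]
print(cpr_fitness(data, P, True, C))  # 1.0
```

In the proposed algorithm, each chromosome encodes such a (P, D, C) triple, and the genetic operators search over candidate premises and censors.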
Abstract: In this paper, we investigate a blind channel estimation method for multi-carrier CDMA systems that uses a subspace decomposition technique. This technique exploits the orthogonality between the noise subspace and the received user codes to obtain the channel of each user. In the past, the Singular Value Decomposition (SVD) technique was used, but the SVD has high computational complexity; in this paper we therefore use a newer algorithm, the URV decomposition, which serves as an intermediary between the QR decomposition and the SVD, in place of the SVD to track the noise subspace of the received data. The URV decomposition has almost the same estimation performance as the SVD, but lower computational complexity.
Abstract: The proper operation of common active filters depends on the accuracy of identifying system distortion. Moreover, using a suitable method for current injection and reactive power compensation increases filter performance. Accordingly, this paper presents a method based on predictive current control theory for shunt active filter applications. The harmonics of the load current are identified by applying the o–d–q reference frame to the load current and eliminating the DC part of the d–q components. The remainder of these components is then delivered to the predictive current controller as a three-phase reference current via the inverse Park transformation. The system is modeled in the discrete time domain. The proposed method has been tested using a MATLAB model for a nonlinear load (with Total Harmonic Distortion = 20%). The simulation results indicate that the proposed filter leads to a sinusoidal current (THD = 0.15%) flowing through the source. In addition, the results show that the filter tracks the reference current accurately.
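As background, the abc-to-d-q transformation underlying the harmonic identification step can be sketched as follows; the amplitude-invariant form of the Park transformation is assumed here, and the current amplitude and angle are illustrative. The key property used by the filter is that the fundamental component of a balanced three-phase current maps to DC in the d-q frame, so removing the DC part isolates the harmonics.

```python
import math

def park_transform(ia, ib, ic, theta):
    """abc -> d-q transformation (amplitude-invariant form)."""
    d = (2.0 / 3.0) * (ia * math.cos(theta)
                       + ib * math.cos(theta - 2.0 * math.pi / 3.0)
                       + ic * math.cos(theta + 2.0 * math.pi / 3.0))
    q = -(2.0 / 3.0) * (ia * math.sin(theta)
                        + ib * math.sin(theta - 2.0 * math.pi / 3.0)
                        + ic * math.sin(theta + 2.0 * math.pi / 3.0))
    return d, q

# A balanced fundamental current of amplitude 10 A at angle theta...
theta = 0.7
ia = 10.0 * math.cos(theta)
ib = 10.0 * math.cos(theta - 2.0 * math.pi / 3.0)
ic = 10.0 * math.cos(theta + 2.0 * math.pi / 3.0)

d, q = park_transform(ia, ib, ic, theta)
print(d, q)  # d is (numerically) the 10 A amplitude, q is (numerically) zero
```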
Abstract: In this paper, based on the past project cost and time
performance, a model for forecasting project cost performance is
developed. This study presents a probabilistic project control concept
to assure an acceptable forecast of project cost performance. In this
concept project activities are classified into sub-groups entitled
control accounts. The Stochastic S-Curve (SS-Curve) is then obtained for each sub-group, and the project SS-Curve is obtained by summing the sub-groups' SS-Curves. In this model, project cost uncertainties are
considered through Beta distribution functions of the project
activities costs required to complete the project at every selected time
sections through project accomplishment, which are extracted from a
variety of sources. Based on this model, after a given percentage of project progress, project performance is measured via Earned Value Management to adjust the primary cost probability distribution functions. The future project cost performance is then predicted using the Monte-Carlo simulation method.
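The concluding Monte-Carlo step can be sketched as follows; the control accounts, their Beta parameters, and the cost bounds are hypothetical, and a real application would calibrate them from project data and Earned Value measurements as the abstract describes.

```python
import random

def simulate_project_cost(accounts, n_trials=20000, seed=1):
    """Monte-Carlo sketch: each control account's cost is a Beta(a, b)
    variable scaled to [low, high]; total project cost is their sum."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for a, b, low, high in accounts:
            total += low + (high - low) * rng.betavariate(a, b)
        totals.append(total)
    return totals

# Hypothetical control accounts: (alpha, beta, min cost, max cost).
accounts = [(2.0, 5.0, 100.0, 200.0),
            (4.0, 4.0, 50.0, 150.0)]
totals = simulate_project_cost(accounts)

mean = sum(totals) / len(totals)
p85 = sorted(totals)[int(0.85 * len(totals))]   # an 85th-percentile cost estimate
print(round(mean, 1), round(p85, 1))
```

Percentiles of the simulated totals give the probabilistic S-curve values from which the cost forecast is read off.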