Abstract: The basic ingredients of concrete are cement, fine aggregate, coarse aggregate and water. To produce concrete with specific properties, these ingredients are mixed in optimum proportions. The important factors governing mix design are the grade of concrete, the type of cement, and the size, shape and grading of the aggregates. Conventional concrete mix design methods are based on experimentally evolved empirical relationships between these factors. Their basic drawbacks are that they do not always produce the desired strength, the calculations are cumbersome, and a number of tables must be consulted to arrive at a trial mix proportion; moreover, attainment of the desired strength is uncertain and may even fall below the target strength. To address this problem, a large number of cubes of standard grades were prepared and their 28-day strength determined for different combinations of cement, fine aggregate, coarse aggregate and water. An artificial neural network (ANN) was built using these data. The inputs of the ANN were the grade of concrete, the type of cement, and the size, shape and grading of the aggregates, and the outputs were the proportions of the various ingredients. With these inputs and outputs, the ANN was trained using a feed-forward back-propagation model. The trained ANN was then validated, and it was seen to give results with a maximum error of 4 to 5%. Hence, a specific type of concrete can be prepared from given material properties, and the proportions of the materials can be quickly evaluated using the proposed ANN.
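The feed-forward back-propagation training described above can be sketched in a few lines. Everything below is a synthetic placeholder (random "mix factors", a linear stand-in target, an arbitrary network size and learning rate), not the study's cube tests:

```python
import numpy as np

# Minimal one-hidden-layer feed-forward network trained by plain
# back-propagation on synthetic data.
rng = np.random.default_rng(0)
X = rng.random((200, 5))        # encoded inputs: grade, cement type, aggregate size/shape/grading
true_W = rng.random((5, 4))
Y = X @ true_W                  # stand-in "ingredient proportions" target

n_in, n_hid, n_out = 5, 8, 4
W1 = rng.normal(0.0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)
lr = 0.1

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                  # forward pass
    err = H @ W2 + b2 - Y                     # output error
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)          # back-propagate through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

H = np.tanh(X @ W1 + b1)
mse = float(((H @ W2 + b2 - Y) ** 2).mean())  # training error after 2000 epochs
```

In the study the trained network plays this role in reverse of a strength test: given the desired concrete properties, it emits trial proportions directly.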
Abstract: The sand production problem has led researchers to make various attempts to understand the phenomenon. The generally accepted concept is that sanding occurs because the in-situ stress conditions, and the induced changes in stress, result in the failure of the reservoir sandstone during hydrocarbon production from wellbores. Using a hypothetical cased (perforated) well, an approach to the problem is presented here using Finite Element numerical modelling techniques. In addition to the examination of the erosion problem, the influence of certain key parameters is studied in order to ascertain their effect on the failure and subsequent erosion process. The major variables investigated include drawdown, perforation depth, and the erosion criterion. Also included is the determination of the optimal mud pressure for given operational and reservoir conditions. The improved understanding of the interaction between these parameters enables the choice of optimal values to minimize sanding during oil production.
Abstract: This paper develops an unscented grid-based filter
and a smoother for accurate nonlinear modeling and analysis
of time series. The filter uses unscented deterministic sampling
during both the time and measurement updating phases, to approximate
directly the distributions of the latent state variable. A
complementary grid smoother is also derived to enable computation
of the likelihood. This allows us to formulate an expectation
maximisation algorithm for maximum likelihood estimation of
the state noise and the observation noise. Empirical investigations
show that the proposed unscented grid filter/smoother compares
favourably to other similar filters on nonlinear estimation tasks.
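The unscented deterministic sampling at the heart of the filter can be illustrated with the standard unscented transform: sigma points of a Gaussian are propagated through a nonlinearity and re-weighted to approximate the transformed mean and covariance. The scaling parameters below are common defaults, not necessarily the paper's choices:

```python
import numpy as np

# Standard unscented transform: 2n+1 deterministic sigma points.
def unscented_transform(mean, cov, f, alpha=1e-1, beta=2.0, kappa=0.0):
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))      # mean weights
    wc = wm.copy()                                      # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha ** 2 + beta)
    Y = np.array([f(s) for s in sigma])                 # propagate through f
    m = wm @ Y
    d = Y - m
    P = (wc[:, None] * d).T @ d
    return m, P

# Example: a mildly nonlinear map of a standard 2-D Gaussian.
m, P = unscented_transform(np.zeros(2), np.eye(2), lambda x: x + 0.1 * x ** 2)
```

In the paper this sampling is applied in both the time and measurement updates to place the grid points at which the latent-state distribution is approximated.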
Abstract: Throughout this paper, a relatively new technique, the Tabu search variable selection model, is elaborated, showing how it can be efficiently applied within the financial world whenever researchers come across the selection of a subset of variables from a whole set of descriptive variables under analysis. In the field of financial prediction, researchers often have to select a subset of variables from a larger set to solve different types of problems such as corporate bankruptcy prediction, personal bankruptcy prediction, mortgage and credit scoring, and the Arbitrage Pricing Model (APM). Consequently, to demonstrate how the method operates and to illustrate its usefulness as well as its superiority compared to other commonly used methods, the Tabu search algorithm for variable selection is compared to two main alternative search procedures, namely stepwise regression and the maximum R² improvement method. The Tabu search is then implemented in finance, where it attempts to predict corporate bankruptcy by selecting the most appropriate financial ratios and thus creating its own prediction score equation. In comparison to other methods, most notably the Altman Z-Score model, the Tabu search model produces a higher success rate in correctly predicting the failure of firms or the continued operation of existing entities.
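A minimal sketch of Tabu search for variable selection on synthetic regression data: each neighbour flips one variable in or out of the subset, recently flipped variables are tabu for a few iterations (unless the move would beat the best solution found, the usual aspiration rule), and the objective is R² with a small penalty per selected variable. The tenure, penalty and iteration budget are illustrative choices, not the paper's:

```python
import numpy as np

# Synthetic data: only variables 0, 3 and 7 actually drive the response.
rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[[0, 3, 7]] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(scale=0.5, size=n)

def r2(subset):
    if not subset:
        return 0.0
    Xs = X[:, sorted(subset)]
    resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def score(subset, penalty=0.01):
    return r2(subset) - penalty * len(subset)   # penalise large subsets

current, best, tabu = set(), set(), {}
for it in range(100):
    moves = [(score(current ^ {j}), j) for j in range(p)
             if tabu.get(j, -1) < it or score(current ^ {j}) > score(best)]
    s, j = max(moves)              # best admissible single-flip neighbour
    current ^= {j}                 # toggle variable j in or out
    tabu[j] = it + 5               # tabu tenure: 5 iterations
    if score(current) > score(best):
        best = set(current)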
Abstract: Proof-of-concept experiments were conducted to
determine the feasibility of using small amounts of Dissolved
Sulphur (DS) from the gaseous phase to precipitate platinum ions in
chloride media. Two sets of precipitation experiments were
performed in which the source of sulphur atoms was either a
thiosulphate solution (Na2S2O3) or a sulphur dioxide gas (SO2). In
the liquid-liquid (L-L) system, complete precipitation of Pt was achieved
at small dosages of Na2S2O3 (0.01 – 1.0 M) in a time interval of 3-5
minutes. On the basis of this result, gas absorption tests were carried
out mainly to achieve sulphur solubility equivalent to 0.018 M. The
idea that huge amounts of precious metals could be recovered
selectively from their dilute solutions by utilizing the waste SO2
streams at low pressure seemed attractive from the economic and
environmental points of view. Therefore, the mass transfer characteristics
of SO2 gas associated with reactive absorption across the gas-liquid
(G-L) interface were evaluated under different conditions of pressure
(0.5 – 2 bar), solution temperature (20 – 50 °C) and acid
strength (1 – 4 M HCl). This paper concludes with information about
selective precipitation of Pt in the presence of cations (Fe2+, Co2+,
and Cr3+) in a CSTR and recommendation to scale up laboratory data
to industrial pilot scale operations.
Abstract: In recent years, most regions of the world have been
exposed to degradation and erosion caused by increasing
population and overuse of land resources. Understanding
the most important factors in soil erosion and sediment yield is
the main key for decision making and planning. In this study,
sediment yield and soil erosion were estimated, and the priority of the
different soil erosion factors used in the MPSIAC method of soil
erosion estimation was evaluated in the AliAbad watershed in the southwest
of Isfahan Province, Iran. Different information layers of the
parameters were created using a GIS technique. Then, a
multivariate procedure was applied to estimate sediment yield and
to find the most important factors of soil erosion in the model. The
results showed that land use, geology, land and soil cover are the
most important factors describing the soil erosion estimated by
the MPSIAC model.
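One common multivariate procedure for ranking factors is to regress the response on standardized predictors and order the factors by the magnitude of their standardized coefficients. The sketch below uses that approach on illustrative stand-in data; the factor list, sub-basin values and weights are not the AliAbad watershed data:

```python
import numpy as np

# Rank factors by standardized regression coefficients (synthetic data).
rng = np.random.default_rng(2)
factors = ["land use", "geology", "land cover", "slope", "climate"]
X = rng.normal(size=(50, len(factors)))      # factor scores per sub-basin
w = np.array([1.5, 1.2, 1.0, 0.3, 0.2])      # synthetic "true" importances
y = X @ w + rng.normal(scale=0.5, size=50)   # synthetic sediment yield

Xs = (X - X.mean(0)) / X.std(0)              # standardize predictors
ys = (y - y.mean()) / y.std()
coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
ranking = [factors[i] for i in np.argsort(-np.abs(coef))]
```

Standardizing first makes the coefficients comparable across factors measured in different units, which is what allows the ranking.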
Abstract: The aeration process via injectors is used to combat
the lack of oxygen in lakes due to eutrophication. A 3D numerical
simulation of the resulting flow using a simplified model is presented.
In order to generate the best flow dynamics with respect to
the aeration purpose, the optimization of the injector locations is
considered. We propose to adapt to this problem the topological
sensitivity analysis method which gives the variation of a criterion
with respect to the creation of a small hole in the domain. The main
idea is to derive the topological sensitivity analysis of the physical
model with respect to the insertion of an injector in the fluid flow
domain. We propose in this work a topological optimization algorithm
based on the studied asymptotic expansion. Finally we present some
numerical results showing the efficiency of our approach.
Abstract: In this paper, a pipelined version of the genetic algorithm,
called PLGA, and a corresponding hardware platform are described.
The basic operations of conventional GA (CGA) are made pipelined
using an appropriate selection scheme. The selection operator, used
here, is stochastic in nature and is called SA-selection. This helps
maintain the basic generational nature of the proposed pipelined
GA (PLGA). A number of benchmark problems are used to compare
the performances of conventional roulette-wheel selection and the
SA-selection. These include unimodal and multimodal functions with
dimensionality varying from very small to very large. It is seen that
the SA-selection scheme gives performance comparable to
the classical roulette-wheel selection scheme for all the
instances when quality of solutions and rate of convergence are considered.
The speedups obtained by PLGA for different benchmarks
are found to be significant. It is shown that a complete hardware
pipeline can be developed using the proposed scheme, if parallel
evaluation of the fitness expression is possible. In this connection
a low-cost but very fast hardware evaluation unit is described.
Results of simulation experiments show that in a pipelined hardware
environment, PLGA will be much faster than CGA. In terms of
efficiency, PLGA is also found to outperform parallel GA (PGA).
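The abstract does not spell out the SA-selection rule, so the sketch below uses a Boltzmann (simulated-annealing style) acceptance test as a stand-in for it inside an otherwise conventional generational GA, on the classical sphere benchmark. Population size, mutation scale and cooling rate are illustrative:

```python
import numpy as np

# Generational GA with a stochastic, SA-style acceptance rule in place
# of roulette-wheel selection. The benchmark is the unimodal sphere.
rng = np.random.default_rng(3)

def sphere(x):                       # f(x) = sum x_i^2, minimum 0 at origin
    return float(np.sum(x * x))

pop = rng.uniform(-5, 5, size=(40, 10))
T = 1.0                              # "temperature" of the acceptance rule
for gen in range(300):
    for i in range(len(pop)):
        j = rng.integers(len(pop))                    # random mate
        mask = rng.random(10) < 0.5                   # uniform crossover
        child = np.where(mask, pop[i], pop[j])
        child += rng.normal(scale=0.1, size=10)       # Gaussian mutation
        d = sphere(child) - sphere(pop[i])
        if d < 0 or rng.random() < np.exp(-d / T):    # SA-style acceptance
            pop[i] = child                            # child replaces parent
    T *= 0.99                                         # cooling schedule

best = min(sphere(x) for x in pop)
```

Because each slot decides acceptance locally, no global ranking of the population is needed, which is what makes the stages of such a GA easy to pipeline.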
Abstract: The paper deals with calculation of the parameters of
ceramic material from a set of destruction tests of ceramic heads of
total hip joint endoprosthesis. The standard way of calculation of the
material parameters consists in carrying out a set of 3 or 4 point
bending tests of specimens cut out from parts of the ceramic material
to be analysed. In the case of ceramic heads, it is not possible to cut out
specimens of required dimensions because the heads are too small (if
the cut out specimens were smaller than the normalised ones, the
material parameters derived from them would exhibit higher strength
values than those which the given ceramic material really has). On
that score, a special testing jig was made, in which 40 heads were
destructed. From the measured values of circumferential strains of the
head's external spherical surface at destruction, the state of stress
in the head at destruction was established using the finite element
method (FEM). From the values obtained, the sought-for parameters
of the ceramic material were calculated using Weibull's weakest-link
theory.
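Once a failure stress has been obtained for each destructed head (via FEM in the paper), the Weibull parameters can be estimated by the standard median-rank linear regression on the linearised Weibull CDF. The stress sample below is synthetic, not the 40 measured heads:

```python
import numpy as np

# Estimate the Weibull modulus m and scale s0 from failure stresses.
rng = np.random.default_rng(4)
m_true, s0_true = 10.0, 500.0                     # illustrative values (MPa)
stresses = np.sort(s0_true * rng.weibull(m_true, size=40))

n = len(stresses)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)       # median-rank estimate of P_f
x = np.log(stresses)
y = np.log(-np.log(1 - F))                        # linearised Weibull CDF
m_hat, c = np.polyfit(x, y, 1)                    # slope = m, intercept = -m ln s0
s0_hat = np.exp(-c / m_hat)
```

The linearisation follows from P_f = 1 − exp(−(σ/s0)^m), so ln(−ln(1−P_f)) = m ln σ − m ln s0.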
Abstract: The characterization of κ-carrageenan could provide a
better understanding of its functions in biological, medical and
industrial applications. Chemical and physical analyses of
carrageenan from seaweeds, Euchema cottonii L., were done to offer
information on its properties and the effects of Co-60 γ-irradiation on
its thermochemical characteristics. The structural and morphological
characteristics of κ-carrageenan were determined using scanning
electron microscopy (SEM) while the composition, molecular weight
and thermal properties were determined using attenuated total
reflectance Fourier transform infrared spectroscopy (ATR-FTIR), gel
permeation chromatography (GPC), thermal gravimetric analysis
(TGA) and differential scanning calorimetry (DSC). Further chemical
analysis was done using hydrogen-1 nuclear magnetic resonance (1H
NMR), and functional characteristics in terms of biocompatibility
were evaluated using a cytotoxicity test.
Abstract: As Computed Tomography (CT) normally requires
hundreds of projections to reconstruct the image, patients are exposed
to more X-ray energy, which may cause side effects such as cancer.
Even when the variability of the particles in the object is very low,
Computed Tomography requires many projections for good quality
reconstruction. In this paper, the low variability of the particles in an
object has been exploited to obtain good quality reconstruction.
Though the reconstructed image and the original image have same
projections, in general, they need not be the same. In addition
to projections, if a priori information about the image is known,
it is possible to obtain good quality reconstructed image. In this
paper, it has been shown by experimental results why conventional
algorithms fail to reconstruct from a few projections, and an efficient
polynomial-time algorithm has been given to reconstruct a bi-level
image from its projections along rows and columns, and a known
subimage of the unknown image, with smoothness constraints, by reducing the
reconstruction problem to an integral max-flow problem. This paper also
discusses the necessary and sufficient conditions for uniqueness and
extension of 2D-bi-level image reconstruction to 3D-bi-level image
reconstruction.
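The core feasibility problem — finding a binary image with prescribed row and column sums — can be illustrated without the full max-flow machinery: the classical Gale-Ryser greedy construction yields one feasible binary matrix whenever the projections are consistent. This is a stand-in for the paper's max-flow formulation (which additionally handles the known subimage and smoothness constraints):

```python
import numpy as np

# Reconstruct some binary matrix matching given row and column sums.
def reconstruct(row_sums, col_sums):
    n_rows = len(row_sums)
    remaining = np.array(col_sums, dtype=int)
    img = np.zeros((n_rows, len(col_sums)), dtype=int)
    # Fill rows with the largest sums first, placing each row's ones in
    # the columns that still need the most ones.
    for i in sorted(range(n_rows), key=lambda i: -row_sums[i]):
        cols = np.argsort(-remaining)[: row_sums[i]]
        img[i, cols] = 1
        remaining[cols] -= 1
    if remaining.any():
        raise ValueError("projections are inconsistent")
    return img

orig = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 0]])
rec = reconstruct(orig.sum(1), orig.sum(0))
```

Note that `rec` matches both projections of `orig` yet need not equal `orig` — exactly the non-uniqueness the abstract's conditions address.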
Abstract: Nowadays there are more than thirty maturity models
in different knowledge areas. A maturity model is a tool
that helps organizations find out where they are in a specific
knowledge area and how to improve it. As Information Resource
Management (IRM) is the concept that information is a major
corporate resource and must be managed using the same basic
principles used to manage other assets, assessing the current
IRM status and revealing points for improvement can play a critical role
in developing an appropriate information structure in organizations.
In this paper we propose a framework for an information resource
management maturity model (IRM3) that includes ten best practices
for the maturity assessment of organizations' IRM.
Abstract: This is a study on numerical simulation of the convection-diffusion transport of a chemical species in steady flow through a small-diameter tube, which is lined with a very thin layer made up of retentive and absorptive materials. The species may be subject to a first-order kinetic reversible phase exchange with the wall material and irreversible absorption into the tube wall. Owing to the velocity shear across the tube section, the chemical species may spread out axially along the tube at a rate much larger than that given by molecular diffusion; this process is known as dispersion. While the long-time dispersion behavior, well described by the Taylor model, has been extensively studied in the literature, the early development of the dispersion process is by contrast much less investigated. By early development, we mean a span of time, after the release of the chemical into the flow, that is shorter than or comparable to the diffusion time scale across the tube section. To understand the early development of the dispersion, the governing equations along with the reactive boundary conditions are solved numerically using the Flux Corrected Transport Algorithm (FCTA). The computation has enabled us to investigate the combined effects of the reversible and irreversible wall reactions on the early development of the dispersion coefficient. One result shows that the dispersion coefficient may approach its steady-state limit in a short time under the following conditions: (i) a high value of the Damkohler number (say Da ≥ 10); (ii) a small but non-zero value of the absorption rate (say Γ* ≤ 0.5).
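The early growth of the dispersion coefficient can be illustrated by a crude Monte Carlo stand-in for the FCT solution: particles advect with the local Poiseuille velocity and diffuse, and the apparent dispersion coefficient is half the growth rate of the axial variance. The wall reactions are omitted here, and radial diffusion is simplified to a 1-D reflected walk that ignores the cylindrical metric, so only the qualitative behaviour (growth of the coefficient toward its long-time limit) is meaningful:

```python
import numpy as np

# Monte Carlo sketch of early-time Taylor dispersion (no wall reaction).
rng = np.random.default_rng(5)
a, U, D = 1.0, 1.0, 0.01          # tube radius, velocity scale, molecular diffusivity
N, dt, steps = 5000, 0.01, 2000   # run length 20 << a^2/D = 100: early phase only
r = a * np.sqrt(rng.random(N))    # start uniform over the cross-section
x = np.zeros(N)
var = []
for _ in range(steps):
    x += 2 * U * (1 - (r / a) ** 2) * dt                # axial advection
    x += rng.normal(scale=np.sqrt(2 * D * dt), size=N)  # axial diffusion
    r += rng.normal(scale=np.sqrt(2 * D * dt), size=N)  # radial diffusion
    r = np.abs(r)                                       # reflect at the axis
    r = np.where(r > a, 2 * a - r, r)                   # reflect at the wall
    var.append(x.var())

# apparent dispersion coefficient over an early and a late time window
D_eff_early = (var[199] - var[99]) / (2 * 100 * dt)
D_eff_late = (var[-1] - var[-101]) / (2 * 100 * dt)
```

Within this early span the apparent coefficient is still rising well above the molecular value, consistent with the development stage the paper studies.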
Abstract: This paper describes the design and results of FROID,
an outbound intrusion detection system built with agent technology
and supported by an attacker-centric ontology. The prototype
features a misuse-based detection mechanism that identifies remote
attack tools in execution. Misuse signatures composed of attributes
selected through entropy analysis of outgoing traffic streams and
process runtime data are derived from execution variants of attack
programs. The core of the architecture is a mesh of self-contained
detection cells organized non-hierarchically that group agents in a
functional fashion. The experiments show performance gains when
the ontology is enabled as well as an increase in accuracy achieved
when correlation cells combine detection evidence received from
independent detection cells.
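The entropy analysis used to select signature attributes can be sketched simply: an attribute whose values vary little across execution variants of the same attack tool (low entropy) is a stable signature candidate. The attribute names and observations below are illustrative, not FROID's data:

```python
import math
from collections import Counter

# Shannon entropy of an attribute's observed values, in bits.
def entropy(values):
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

observations = {                     # values of each attribute across runs
    "dst_port": [31337, 31337, 31337, 31337],
    "payload_len": [412, 408, 415, 402],
    "src_port": [40211, 51873, 45902, 60031],
}
scores = {attr: entropy(vals) for attr, vals in observations.items()}
stable = min(scores, key=scores.get)   # lowest-entropy attribute
```

Here the constant destination port scores zero entropy and is retained for the signature, while the ephemeral source port scores high and is discarded.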
Abstract: Among all the engine control variables, automotive engine
air-ratio plays an important role in reducing emissions and fuel
consumption while maintaining satisfactory engine power. In
order to effectively control the air-ratio, this paper presents a model
predictive fuzzy control algorithm based on online least-squares
support vector machines prediction model and fuzzy logic optimizer.
The proposed control algorithm was also implemented on a real car for
testing and the results are highly satisfactory. Experimental results
show that the proposed control algorithm can regulate the engine
air-ratio to the stoichiometric value, 1.0, under external disturbance
with less than 5% tolerance.
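The least-squares SVM used as the prediction model reduces training to a single linear system in the dual variables. The sketch below shows that core computation on a toy regression task; the data, kernel width and regularisation constant are illustrative, not the engine data set:

```python
import numpy as np

# LS-SVM regression: solve [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y].
rng = np.random.default_rng(6)

def rbf(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

X = rng.uniform(-2, 2, size=(60, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.05, size=60)   # toy target

gamma = 100.0                        # regularisation constant
K = rbf(X, X)
n = len(X)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0; A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate(([0.0], y))
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

def predict(Xq):                     # f(x) = sum_i alpha_i k(x, x_i) + b
    return rbf(Xq, X) @ alpha + b
```

Because training is one linear solve rather than a quadratic program, the model can be refit online as new air-ratio samples arrive, which is what makes it attractive inside a model predictive loop.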
Abstract: How to efficiently assign system resources to route
client demand through Gateway servers is a tricky predicament. In this
paper, we present an enhanced proposal for the autonomous operation of
Gateway servers under highly dynamic traffic loads. We devise a
methodology to calculate Queue Length and Waiting Time utilizing
Gateway Server information to reduce response time variance in the
presence of bursty traffic.
The most widespread consideration is performance: because
Gateway Servers must offer cost-effective and high-availability
services over the long term, they have to be scaled to meet
the expected load. Performance measurements can be the basis for
performance modeling and prediction. With the help of performance
models, performance metrics (like buffer estimation and waiting
time) can be determined during the development process.
This paper describes the possible queue models that can be
applied in the estimation of queue length to estimate the final value
of the memory size. Both simulation and experimental studies using
synthesized workloads and analysis of real-world Gateway Servers
demonstrate the effectiveness of the proposed system.
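The abstract does not name its exact queue model, but the classical M/M/1 formulas illustrate how queue length and waiting time follow from measured arrival and service rates, which is the kind of estimate used for sizing buffers:

```python
# M/M/1 steady-state queue length and waiting time.
def mm1(arrival_rate, service_rate):
    rho = arrival_rate / service_rate          # server utilisation
    assert rho < 1, "queue is unstable"
    Lq = rho ** 2 / (1 - rho)                  # mean number waiting in queue
    Wq = Lq / arrival_rate                     # mean waiting time (Little's law)
    return Lq, Wq

Lq, Wq = mm1(arrival_rate=8.0, service_rate=10.0)   # e.g. requests/second
```

At 80% utilisation this gives a mean queue length of 3.2 requests and a mean wait of 0.4 s; the sharp blow-up of Lq as rho approaches 1 is why bursty traffic dominates buffer sizing.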
Abstract: The objective of this research is to optimize the parameters
for cutting a stair-shaped workpiece by CNC wire-cut EDM
(WEDM). The experimental material is SKD-11 steel in a stair shape
with variable workpiece heights of 10, 20, 30 and 40 mm and a constant
thickness of 10 mm, cut on Sodick's CNC wire-cut EDM model
AD325L.
The experiments follow a 3^k full factorial experimental
design with 2 factors at 3 levels, giving 9 experiments with 2 replicates. The
two selected factors are servo voltage (SV) and servo feed rate (SF),
and the response is the cutting thickness error. The work is divided
into two experiments. The first experiment determines the significant
factor at a 95% confidence interval; SV is found to be the
significant factor. The smallest cutting thickness error of the
workpieces is 17 microns, at an SV value of 46 volts, and the
results also show that the lower the SV value, the smaller the
thickness error of the workpiece. The second experiment is then done to
reduce the cutting thickness error of the workpiece as far as
possible by lowering SV. The second experiment again shows that the
significant factor at a 95% confidence interval is SV,
and the smallest cutting thickness error of the workpieces is reduced
to 11 microns, at an SV value of 36 volts.
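The main-effect computation behind such a two-factor, three-level factorial with replicates can be sketched as follows. The levels and the error model below are synthetic placeholders (the synthetic error simply grows with SV), not the measured SKD-11 data:

```python
import numpy as np

# Main-effect means for a 3x3 full factorial with 2 replicates.
rng = np.random.default_rng(7)
sv_levels = [36, 41, 46]          # servo voltage (V), illustrative levels
sf_levels = [50, 100, 150]        # servo feed rate, illustrative levels
errors = np.array([[[0.5 * sv - 7 + rng.normal(scale=0.5)
                     for _ in range(2)]          # 2 replicates
                    for sf in sf_levels]
                   for sv in sv_levels])         # shape (SV, SF, replicate)

sv_effect = errors.mean(axis=(1, 2))  # mean thickness error per SV level
sf_effect = errors.mean(axis=(0, 2))  # mean thickness error per SF level
```

Comparing the spread of the level means against the replicate scatter is the informal version of the ANOVA significance test the experiments use.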
Abstract: In this paper, we propose a morphing method by which face color images can be freely transformed. The main focus of this work is the transformation of one face image into another. The method is fully automatic in that it can morph two face images by automatically detecting all the control points necessary to perform the morph. A face detection neural network, edge detection and median filters are employed to detect the face position and features. Five control points, for both the source and target images, are then extracted based on the facial features. A triangulation method is then used to match and warp the source image to the target image using the control points. Finally, color interpolation is done using a color Gaussian model that calculates the color for each particular frame depending on the number of frames used. A real-coded genetic algorithm is used in both the image warping and color blending steps to assist in step-size decisions and to speed up the morphing. This method results in very smooth morphs and is fast to process.
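The color-blending step of a morph can be illustrated with a plain linear cross-dissolve between already-warped images; this is a stand-in for the paper's Gaussian color model, whose exact form the abstract omits, and the "images" below are toy arrays:

```python
import numpy as np

# Blend a sequence of intermediate frames between two aligned images.
def blend_frames(src, dst, n_frames):
    frames = []
    for k in range(n_frames):
        t = k / (n_frames - 1)                 # 0 -> pure source, 1 -> pure target
        frames.append(((1 - t) * src + t * dst).astype(src.dtype))
    return frames

src = np.zeros((4, 4, 3), dtype=float)         # toy "source image"
dst = np.full((4, 4, 3), 255.0)                # toy "target image"
frames = blend_frames(src, dst, 5)
```

In the full pipeline each frame's blend weight would come from the Gaussian color model, and the blend is applied after both images have been warped toward the intermediate control-point positions.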
Abstract: The estimation of overall on-site and off-site greenhouse gas (GHG) emissions by wastewater treatment plants revealed that in anaerobic and hybrid treatment systems greater emissions result from off-site processes compared to on-site processes. However, in aerobic treatment systems, on-site processes make a higher contribution to the overall GHG emissions. The total GHG emissions were estimated to be 1.6, 3.3 and 3.8 kg CO2-e/kg BOD in the aerobic, anaerobic and hybrid treatment systems, respectively. In the aerobic treatment system without the recovery and use of the generated biogas, the off-site GHG emissions were 0.65 kg CO2-e/kg BOD, accounting for 40.2% of the overall GHG emissions. This value changed to 2.3 and 2.6 kg CO2-e/kg BOD, and accounted for 69.9% and 68.1% of the overall GHG emissions in the anaerobic and hybrid treatment systems, respectively. The increased off-site GHG emissions in the anaerobic and hybrid treatment systems are mainly due to material usage and energy demand in these systems. The anaerobic digester can contribute up to 100%, 55% and 60% of the overall energy needs of plants in the aerobic, anaerobic and hybrid treatment systems, respectively.
Abstract: Raisin concentrate (RC) is one of the most important
products obtained in the raisin processing industries. RC
products are now used to make syrups, drinks and confectionery
products and are introduced as a natural substitute for sugar in food
applications. Iran is one of the biggest raisin exporters in the world,
but unfortunately, despite good raw material, no serious effort to
extract RC has been made in Iran. Therefore, in this paper, we
determined and analyzed the parameters affecting the RC extraction
process and then optimized these parameters for the design of the
RC extraction process for two types of raisin (round and long)
produced in the Khorasan region. Two levels of solvent (1:1 and 2:1),
three levels of extraction temperature (60°C, 70°C and 80°C), and
three levels of concentration temperature (50°C, 60°C and 70°C)
were the treatments. Finally, physicochemical characteristics of the
obtained concentrate, such as color, viscosity, percentage of reducing
sugar and acidity, were measured, and microbial tests (mould and yeast)
were performed. The analysis was performed on the basis of a factorial design in the
form of a completely randomized design (CRD), and Duncan's multiple
range test (DMRT) was used for the comparison of the means.
Statistical analysis of the results showed that the optimal conditions for
production of concentrate from round raisins are a solvent ratio of
2:1 with an extraction temperature of 60°C and a
concentration temperature of 50°C. Round raisin is cheaper than the long one, and
it is more economical for concentrate production. Furthermore, round
raisin has more aroma and a lower color degree with increasing
concentration and extraction temperature. Finally, according to the
mentioned factors, the concentrate of round raisin is recommended.