Abstract: Traditional wind tunnel models are meticulously machined from metal in a process that can take several months. While very precise, this manufacturing process is too slow for quickly assessing a new design's feasibility. Rapid prototyping technology makes concurrent study of air vehicle concepts via computer simulation and in the wind tunnel possible. This paper describes the effect of layer thickness on the aerodynamic coefficients of wind tunnel models built by rapid prototyping. Three models were evaluated: the first with a 0.05 mm layer thickness and a horizontal-plane surface roughness of 0.1 μm (Ra), the second with a 0.125 mm layer thickness and 0.22 μm (Ra), and the third with a 0.15 mm layer thickness and 4.6 μm (Ra). The models were fabricated from Somos 18420 by stereolithography (SLA). A wing-body-tail configuration was chosen for the study. Testing covered the range of Mach 0.3 to Mach 0.9 at an angle-of-attack range of -2° to +12° at zero sideslip. Coefficients of normal force, axial force, pitching moment, and lift over drag are shown at each of these Mach numbers. The results show that layer thickness does have an effect on the aerodynamic characteristics, although in general the data differ between the three models by less than 5%. Layer thickness has a greater effect on the aerodynamic characteristics as Mach number decreases, and has the largest effect on axial force and its derived coefficients.
Abstract: α-Pinene is the main component of most turpentine oils. The hydration of α-pinene with acid catalysts leads to a complex mixture of monoterpenes. In order to obtain more valuable products, the α-pinene in turpentine can be hydrated in dilute mineral acid solutions to produce α-terpineol. The design of separation processes requires information on phase equilibrium and related thermodynamic properties. This paper reports the results of a study on the liquid-liquid equilibrium (LLE) of the systems α-pinene + water and α-terpineol + water.
Binary LLE data for the α-pinene + water and α-terpineol + water systems were determined experimentally at 301 K and atmospheric pressure. The two-component mixture was stirred for about 30 min, then left for about 2 h for complete phase separation. The composition of both phases was analyzed by gas chromatography. The experimental data were correlated using both the NRTL and UNIQUAC activity coefficient models. The LLE data for the α-pinene + water and α-terpineol + water systems were correlated successfully by the NRTL model, whereas the UNIQUAC model did not fit the experimental data satisfactorily. For the α-pinene + water system at 301 K, the NRTL model gives an RMSD of 0.0404% with α = 0.3 and 0.0058% with α = 0.61. For the α-terpineol + water system at 301 K, it gives an RMSD of 0.1487% with α = 0.3 and 0.0032% with α = 0.6, between the experimental and calculated mole fractions.
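For reference, the binary NRTL activity-coefficient model used in such correlations has the standard textbook form (symbols follow common usage, not quoted from this paper):

```latex
\ln\gamma_1 = x_2^2\left[\tau_{21}\left(\frac{G_{21}}{x_1 + x_2 G_{21}}\right)^{2}
            + \frac{\tau_{12}\,G_{12}}{(x_2 + x_1 G_{12})^2}\right],
\qquad G_{ij} = \exp(-\alpha\,\tau_{ij}),
```

with ln γ2 obtained by interchanging the indices. Here α is the non-randomness parameter that is tuned in the correlation above (0.3 to 0.61), and τ12, τ21 are the fitted binary interaction parameters.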
Abstract: In this work, a characterization and modeling of
packet loss of a Voice over Internet Protocol (VoIP) communication
is developed. The distributions of the numbers of consecutive received and lost packets (namely gap and burst) are modeled from the transition probabilities of two-state and four-state models. Measurements show that both models describe the burst distribution adequately, but the decay of the gap distribution for non-homogeneous losses is better fitted by the four-state model. The respective
probabilities of transition between states for each model were
estimated with a proposed algorithm from a set of monitored VoIP
calls in order to obtain representative minimum, maximum and
average values for both models.
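A two-state loss model of the kind described (often called a Gilbert model) can be sketched as follows; the transition probabilities `p` and `q` are illustrative values, not the measured ones from the monitored calls.

```python
import random

def simulate_two_state(p, q, n, seed=0):
    """Two-state Markov packet-loss model.
    p: P(Good -> Bad), q: P(Bad -> Good).
    Returns a list with 0 (received) / 1 (lost) per packet."""
    rng = random.Random(seed)
    state = 0  # 0 = Good (packet received), 1 = Bad (packet lost)
    trace = []
    for _ in range(n):
        trace.append(state)
        if state == 0 and rng.random() < p:
            state = 1
        elif state == 1 and rng.random() < q:
            state = 0
    return trace

def run_lengths(trace, value):
    """Lengths of consecutive runs of `value`: gaps if value=0, bursts if value=1."""
    runs, count = [], 0
    for s in trace:
        if s == value:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

trace = simulate_two_state(p=0.05, q=0.5, n=100000)
loss_rate = sum(trace) / len(trace)  # steady-state loss rate ~ p/(p+q)
bursts = run_lengths(trace, 1)       # burst lengths are geometric, mean ~ 1/q
```

In this model burst lengths are geometrically distributed with mean 1/q, which is exactly the single-exponential decay that, per the abstract, fits bursts well but fails for gaps under non-homogeneous losses.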
Abstract: In this paper, we propose a new modular approach, called neuroglial, consisting of two neural networks, slow and fast, which emulates a recently discovered biological reality. The implementation is based on complex multi-time-scale systems; validation is performed on the model of the asynchronous machine. We applied the geometric approach based on Gerschgorin circles for the decoupling of fast and slow variables, and the method of singular perturbations for the development of reduced-order models.
This new architecture allows for smaller networks with less complexity and better performance in terms of mean square error and convergence than the single network model.
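The Gerschgorin-circle criterion used here for separating fast and slow variables rests on the fact that every eigenvalue of a matrix lies in a disc centred at a diagonal entry with radius equal to the off-diagonal row sum. A minimal numpy sketch (the system matrix is illustrative, not the asynchronous machine model):

```python
import numpy as np

def gerschgorin_discs(A):
    """Return (center, radius) of each Gerschgorin disc of a square matrix."""
    A = np.asarray(A, dtype=float)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

# Illustrative matrix with well-separated diagonal entries:
# slow modes near -1 and a fast mode near -100.
A = np.array([[ -1.0,  0.2,   0.1],
              [  0.3, -1.5,   0.2],
              [  0.1,  0.2, -100.0]])
discs = gerschgorin_discs(A)
eigs = np.linalg.eigvals(A)
```

Because the disc around -100 is disjoint from the two discs near -1, the fast eigenvalue is isolated without computing the spectrum, which is what makes the criterion useful for fast/slow decoupling.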
Abstract: One of the most important issues in multi-criteria decision analysis (MCDA) is determining the weights of criteria so that all alternatives can be compared based on the collective performance of the criteria. In this paper, one of the popular methods in data envelopment analysis (DEA), known as common weights (CWs), is used to determine the weights in MCDA. Two frontiers, named the ideal and anti-ideal frontiers, are defined based on two newly proposed CWs models, in place of ideal and anti-ideal alternatives. Ideal and anti-ideal frontiers are more flexible than ideal and anti-ideal alternatives. From the optimal solutions of these two models, the distances of an alternative from the ideal and anti-ideal frontiers are derived. Then, a relative distance is introduced to measure the value of each alternative. The suggested models are linear and remain feasible despite weight restrictions. An example is presented to explain the method and to compare it with the existing literature.
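A relative-distance value of the kind mentioned is commonly defined (in the style of TOPSIS-type closeness measures; the paper's exact definition may differ) as

```latex
V_k = \frac{d_k^{-}}{d_k^{+} + d_k^{-}},
```

where $d_k^{+}$ and $d_k^{-}$ are the distances of alternative $k$ from the ideal and anti-ideal frontiers; values of $V_k$ closer to 1 indicate a better alternative.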
Abstract: A vast array of biological materials, especially algae, has received increasing attention for heavy metal removal. Algae have been proven to be cheaper and more effective for the removal of metallic elements from aqueous solutions. A freshwater algal strain was isolated from Zoo Lake, Johannesburg, South Africa, and identified as Desmodesmus sp. This paper investigates the efficacy of Desmodesmus sp. in removing heavy metals contaminating the Wonderfonteinspruit Catchment Area (WCA) water bodies. The biosorption data fitted the pseudo-second-order kinetic and Langmuir isotherm models. The Langmuir maximum uptakes gave the sequence Mn2+ > Ni2+ > Fe2+. The best kinetic results were obtained at a concentration of 120 ppm for Fe3+ and Mn2+, and at 20 ppm for Ni2+, which is about the same as the concentrations found in contaminated water in the WCA (Fe3+ 115 ppm, Mn2+ 121 ppm and Ni2+ 26.5 ppm).
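A Langmuir isotherm fit of the kind reported can be reproduced in form via the standard linearization Ce/qe = Ce/qmax + 1/(qmax·b); the equilibrium data below are synthetic, generated from assumed parameters, not the paper's measurements.

```python
import numpy as np

def langmuir(C, qmax, b):
    """Langmuir isotherm: equilibrium uptake q_e (mg/g) vs. concentration C_e."""
    return qmax * b * C / (1.0 + b * C)

# Synthetic equilibrium data from known (illustrative) parameters.
qmax_true, b_true = 25.0, 0.08                    # mg/g and L/mg, assumed
Ce = np.array([5.0, 20.0, 40.0, 80.0, 120.0])     # mg/L
qe = langmuir(Ce, qmax_true, b_true)

# Linearized Langmuir: Ce/qe = Ce/qmax + 1/(qmax*b)  ->  fit a straight line.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax_fit = 1.0 / slope          # maximum uptake recovered from the slope
b_fit = slope / intercept       # affinity constant recovered from the intercept
```

Ranking the fitted qmax values per metal is what produces an uptake sequence such as the Mn2+ > Ni2+ > Fe2+ ordering reported above.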
Abstract: Prediction of the highly nonlinear behavior of suspended sediment flow in rivers is of prime importance in the field of water resources engineering. In this study, the predictive performance of two Artificial Neural Networks (ANNs), namely the Radial Basis Function (RBF) Network and the Multi Layer Feed Forward (MLFF) Network, has been compared. Time series data of daily suspended sediment discharge and water discharge at the Pari River were used for training and testing the networks. A number of statistical parameters,
i.e. root mean square error (RMSE), mean absolute error (MAE),
coefficient of efficiency (CE) and coefficient of determination (R2)
were used for performance evaluation of the models. Both the models
produced satisfactory results and showed a good agreement between
the predicted and observed data. The RBF network model provided
slightly better results than the MLFF network model in predicting
suspended sediment discharge.
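The four goodness-of-fit statistics named above have standard definitions; a minimal sketch on illustrative arrays (not the Pari River data):

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def mae(obs, pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(obs - pred)))

def ce(obs, pred):
    """Coefficient of efficiency (Nash-Sutcliffe): 1.0 is a perfect fit."""
    return float(1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - np.mean(obs)) ** 2))

def r_squared(obs, pred):
    """Coefficient of determination as the squared Pearson correlation."""
    return float(np.corrcoef(obs, pred)[0, 1] ** 2)

obs = np.array([120.0, 95.0, 310.0, 48.0, 210.0])    # observed discharge (illustrative)
pred = np.array([110.0, 100.0, 290.0, 55.0, 220.0])  # model predictions (illustrative)
scores = {"RMSE": rmse(obs, pred), "MAE": mae(obs, pred),
          "CE": ce(obs, pred), "R2": r_squared(obs, pred)}
```

RMSE and MAE are error magnitudes (lower is better), while CE and R² approach 1 for a good model, which is how the RBF and MLFF networks are ranked in the study.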
Abstract: Suppression of interference in time domain equalizers is studied for a high data rate impulse radio (IR) ultra wideband communication system. Narrowband systems may interfere with UWB devices, since UWB transmissions have very low power and large bandwidth. An SRake receiver improves system performance by combining signals from different paths, which motivates the use of SRake receiver techniques in IR-UWB systems. A Rake receiver alone, however, fails to suppress narrowband interference (NBI). A hybrid SRake-MMSE time domain equalizer is proposed to overcome this, taking into account the effects of both the number of Rake fingers and the number of equalizer taps; it also combats intersymbol interference. A semi-analytical approach and Monte Carlo simulation are used to investigate the BER performance of the SRake-MMSE receiver on IEEE 802.15.3a UWB channel models. A study on non-line-of-sight indoor channel models (both CM3 and CM4) illustrates that the bit error rate performance of the SRake-MMSE receiver with NBI is better than that of the Rake receiver without NBI. We show that for an MMSE equalizer operating at high SNRs, the number of equalizer taps plays a more significant role in suppressing interference.
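The linear MMSE tap design at the heart of such an equalizer can be sketched with a toy two-tap channel and BPSK symbols (the channel, noise level, and tap count are illustrative assumptions, not the IEEE 802.15.3a models used in the paper); the taps are obtained by least squares on windows of received samples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-tap multipath channel and BPSK symbol stream.
h = np.array([1.0, 0.6])
s = rng.choice([-1.0, 1.0], size=5000)
r = np.convolve(s, h)[: len(s)] + 0.05 * rng.standard_normal(len(s))

# One row of the most recent `taps` received samples per symbol;
# least squares on these windows yields the (sample) MMSE equalizer taps.
taps = 6
R = np.array([r[k - taps + 1 : k + 1] for k in range(taps - 1, len(s))])
d = s[taps - 1 :]
w, *_ = np.linalg.lstsq(R, d, rcond=None)

y = R @ w                                            # equalized decision statistics
mse_eq = float(np.mean((y - d) ** 2))                # after equalization
mse_raw = float(np.mean((r[taps - 1 :] - d) ** 2))   # without any equalizer
```

Increasing `taps` lets the filter approximate the channel inverse more closely, mirroring the abstract's point that at high SNR the number of equalizer taps dominates interference suppression.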
Abstract: The Fuzzy C-means clustering algorithm (FCM) is a method frequently used in pattern recognition. It has the advantage of giving good modeling results in many cases, although it cannot determine the number of clusters by itself. In the FCM algorithm, most researchers fix the weighting exponent (m) at the conventional value of 2, which may not be appropriate for all applications. Consequently, the main objective of this paper is to use the subtractive clustering algorithm to provide the optimal number of clusters needed by the FCM algorithm, by optimizing the parameters of the subtractive clustering algorithm with an iterative search approach, and then to find an optimal weighting exponent (m) for the FCM algorithm. To obtain the optimal number of clusters, the iterative search approach is used to find the optimal single-output Sugeno-type Fuzzy Inference System (FIS) model by optimizing the parameters of the subtractive clustering algorithm so as to minimize the least-squares error between the actual data and the Sugeno fuzzy model. Once the number of clusters is optimized, two approaches are proposed to optimize the weighting exponent (m) in the FCM algorithm, namely the iterative search approach and genetic algorithms. The proposed approach is tested on data generated from an original function, and optimal fuzzy models are obtained with minimum error between the real data and the obtained fuzzy models.
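The role of the weighting exponent m being optimized here is visible in the standard FCM updates, sketched below on illustrative 1-D data (two well-separated clusters); this is the textbook algorithm, not the paper's optimized variant.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy C-means on points X of shape (n, d).
    m > 1 is the weighting exponent; returns centers and memberships U (c, n)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                              # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m                                 # fuzzified memberships
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))          # standard membership update
        U /= U.sum(axis=0)
    return centers, U

# Two well-separated 1-D clusters (illustrative data).
X = np.concatenate([np.full(20, 0.0), np.full(20, 10.0)]).reshape(-1, 1)
centers, U = fcm(X, c=2, m=2.0)
```

As m → 1 the memberships approach hard (crisp) assignments, and larger m makes them fuzzier, which is why fixing m = 2 blindly may be inappropriate for some data sets.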
Abstract: Modeling and simulation of biochemical reactions is of great interest in the context of systems biology. The central dogma of this re-emerging area states that it is the system dynamics and organizing principles of complex biological phenomena that give rise to the functioning and function of cells. Cell functions, such as growth, division, differentiation and apoptosis, are temporal processes that can be understood if they are treated as dynamic systems. Systems biology focuses on an understanding of functional activity from a system-wide perspective and, consequently, it is defined by two key questions: (i) how do the components within a cell interact, so as to bring about its structure and functioning? (ii) how do cells interact, so as to develop and maintain higher levels of organization and function? In recent years, wet-lab biologists have embraced mathematical modeling and simulation as two essential means toward answering these questions. The credo of dynamical systems theory is that the behavior of a biological system is given by the temporal evolution of its state. Our understanding of the time behavior of a biological system can be measured by the extent to which a simulation mimics the real behavior of that system. Deviations of a simulation indicate either limitations or errors in our knowledge. The aim of this paper is to summarize and review the main conceptual frameworks in which models of biochemical networks can be developed. In particular, we review the stochastic molecular modelling approaches, by reporting the principal conceptualizations suggested by A. A. Markov, P. Langevin, A. Fokker, M. Planck, D. T. Gillespie, N. G. van Kampen, and recently by D. Wilkinson, O. Wolkenhauer, P. Sjöberg and by the author.
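Among the approaches reviewed, Gillespie's direct method is the simplest to sketch. For the decay reaction A → ∅ with rate constant k (population and rate are illustrative), each step draws an exponential waiting time from the total propensity and fires the reaction:

```python
import random

def gillespie_decay(n0, k, t_end, seed=0):
    """Gillespie direct method for the decay reaction A -> 0 with rate k.
    Returns the lists of jump times and molecule counts."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while n > 0 and t < t_end:
        a = k * n                    # total propensity of the single reaction
        t += rng.expovariate(a)      # waiting time to the next event ~ Exp(a)
        n -= 1                       # the only possible event: one A decays
        times.append(t)
        counts.append(n)
    return times, counts

times, counts = gillespie_decay(n0=1000, k=1.0, t_end=10.0)
```

Averaged over many such trajectories, the mean count follows the deterministic solution n0·exp(-k·t), while individual runs exhibit the molecular noise that the master-equation and Langevin/Fokker-Planck frameworks describe analytically.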
Abstract: Nowadays companies strive to survive in a
competitive global environment. To speed up product
development/modifications, it is suggested to adopt a collaborative
product development approach. However, despite the advantages of
new IT improvements still many CAx systems work separately and
locally. Collaborative design and manufacture requires a product
information model that supports related CAx product data models. To
solve this problem many solutions are proposed, which the most
successful one is adopting the STEP standard as a product data model
to develop a collaborative CAx platform. However, the improvement
of the STEP-s Application Protocols (APs) over the time, huge
number of STEP AP-s and cc-s, the high costs of implementation,
costly process for conversion of older CAx software files to the STEP
neutral file format; and lack of STEP knowledge, that usually slows
down the implementation of the STEP standard in collaborative data
exchange, management and integration should be considered. In this
paper the requirements for a successful collaborative CAx system is
discussed. The STEP standard capability for product data integration
and its shortcomings as well as the dominant platforms for supporting
CAx collaboration management and product data integration are
reviewed. Finally, a platform named LAYMOD is proposed to fulfil the requirements of a collaborative CAx environment and to integrate the product data. It is a layered platform that enables global collaboration among different CAx software packages/developers. It also adopts the STEP modular architecture and XML data structures to enable collaboration between CAx software packages and to overcome the STEP standard's limitations. The architecture and procedures of the LAYMOD platform for managing collaboration and avoiding conflicts in product data integration are introduced.
Abstract: In this paper we apply an Adaptive Network-Based Fuzzy Inference System (ANFIS) with one input, the dependent variable with one lag, to the forecasting of four macroeconomic variables of the US economy: the Gross Domestic Product, the inflation rate, the six-month Treasury bill interest rate, and the unemployment rate. We compare the forecasting performance of ANFIS with those of the widely used linear autoregressive and nonlinear smooth transition autoregressive (STAR) models. The results are strongly in favour of ANFIS, indicating that it is an effective tool for macroeconomic forecasting, suitable for use in academic research and in research and application by governmental and other institutions.
Abstract: Nowadays, many organizations use systems that support business processes wholly or partially. However, in some application domains, such as software development and health care processes, a normative Process Aware System (PAS) is not suitable, because flexible support is needed to respond rapidly to new process models. On the other hand, a flexible Process Aware System may be vulnerable to undesirable and fraudulent executions, which imposes a tradeoff between flexibility and security. To make this tradeoff manageable, a genetic-based anomaly detection model for logs of Process Aware Systems is presented in this paper. The detection of an anomalous trace is based on discovering an appropriate process model by genetic process mining and flagging traces that do not fit that model as anomalous; therefore, when used in a PAS, this model is an automated solution that can support the coexistence of flexibility and security.
Abstract: Sudoku is a kind of logic puzzle. Each puzzle consists of a board, which is a 9×9 grid of cells divided into nine 3×3 subblocks, and a set of numbers from 1 to 9. The aim of the puzzle is to fill in every cell of the board with a number from 1 to 9 such that every row, every column, and every subblock contains each number exactly once. Sudoku puzzles belong to the class of combinatorial problems (NP-complete). Sudoku puzzles can be solved by a variety of techniques/algorithms, such as genetic algorithms, heuristics, integer programming, and so on. In this paper, we propose a new approach to solving Sudoku by modelling the puzzles as block-world problems. In block-world problems, there are a number of boxes on a table in a particular order or arrangement, and the objective is to change this arrangement into a target arrangement with the help of two types of robots. In this paper, we present three models for Sudoku. We modelled Sudoku as parameterized multi-agent systems. A parameterized multi-agent system is a multi-agent system consisting of several uniform/similar agents, where the number of agents is stated as a parameter of the system. We use the Temporal Logic of Actions (TLA) for formalizing our models.
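The Sudoku constraints stated above (each of 1-9 exactly once per row, column, and 3×3 subblock) can be checked directly in a few lines; the solved board below is a hypothetical example built with the classic shift pattern, not one of the paper's instances.

```python
def generate_board():
    """A hypothetical solved Sudoku board via the classic shift pattern."""
    return [[(i * 3 + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]

def is_valid_sudoku(board):
    """Check that every row, column, and 3x3 subblock contains 1..9 exactly once."""
    full = set(range(1, 10))
    rows = all(set(row) == full for row in board)
    cols = all({board[i][j] for i in range(9)} == full for j in range(9))
    blocks = all(
        {board[r + i][c + j] for i in range(3) for j in range(3)} == full
        for r in range(0, 9, 3) for c in range(0, 9, 3)
    )
    return rows and cols and blocks

board = generate_board()
```

Any solving approach, whether integer programming, genetic algorithms, or the block-world/TLA models proposed here, must produce boards for which this predicate holds.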
Abstract: In the oil and gas industry, energy prediction can help
the distributor and customer to forecast the outgoing and incoming
gas through the pipeline. It will also help to eliminate any
uncertainties in gas metering for billing purposes. The objective of
this paper is to develop a Neural Network model for energy consumption and to analyze the model's performance. The paper provides a comprehensive review of published research on energy consumption prediction, focusing on the structures and parameters used in developing Neural Network models. It then focuses on the parameter selection for the neural network prediction model for energy consumption, and on the analysis of the results. The most reliable model, giving the most accurate results, is proposed for the prediction. The results show that the proposed neural network energy prediction model demonstrates adequate performance with the least Root Mean Square Error.
Abstract: Owing to the high feed rates and ultra-high spindle speeds used in modern machine tools, tool-path generation plays a key role in the successful application of a High-Speed Machining (HSM) system. Because of its importance in both high-speed machining and tool-path generation, approximating a contour in NURBS format is a valuable capability in CAD/CAM/CNC systems. It is much more convenient to represent an ellipse in parametric form than to connect points laboriously determined in a CNC system. A new approximation method for ellipses, based on optimization processes and NURBS curves of any degree, is presented in this study. Such operations can form the foundation of a tool-radius compensation interpolator for NURBS curves in a CNC system. All operating processes for a CAD tool are presented and demonstrated with practical models.
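As background to why NURBS suit ellipses: a quarter ellipse is represented exactly by a single rational quadratic (NURBS-compatible) Bézier segment with weights (1, √2/2, 1). A minimal evaluation sketch, with illustrative semi-axes:

```python
import math

def quarter_ellipse_point(t, a, b):
    """Rational quadratic Bezier for the first-quadrant quarter ellipse.
    Control points (a,0), (a,b), (0,b) with weights 1, sqrt(2)/2, 1;
    t in [0, 1] sweeps from (a,0) to (0,b)."""
    w1 = math.sqrt(2.0) / 2.0
    b0 = (1 - t) ** 2           # weighted Bernstein basis terms
    b1 = 2 * t * (1 - t) * w1
    b2 = t ** 2
    denom = b0 + b1 + b2
    x = (b0 * a + b1 * a) / denom
    y = (b1 * b + b2 * b) / denom
    return x, y

a_axis, b_axis = 3.0, 2.0      # illustrative semi-axes
pts = [quarter_ellipse_point(t / 20.0, a_axis, b_axis) for t in range(21)]
```

Every evaluated point satisfies (x/a)² + (y/b)² = 1 to machine precision, which is the exact-conic property that makes rational (NURBS) forms preferable to piecewise-linear point lists in a CNC interpolator.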
Abstract: Three-dimensional finite element models of spot-welded and weld-bonded joints in austenitic stainless steel AISI 304 (annealed condition) sheets of 1.0 mm thickness are developed using ABAQUS® software. Both models undergo the thermal loading caused by the spot welding process and are then subjected to axial load up to the failure point. The properties of the elastic and plastic regions, the modulus of elasticity, the fracture limit, and the nugget and heat-affected zones are determined. A complete load-displacement curve for each joining model is obtained and compared with the experimental data and with finite element models that do not include the effect of the thermal process. In general, the results obtained for both spot-welded and weld-bonded joints affected by the thermal process show excellent agreement with the experimental data.
Abstract: Extensive rainfall disaggregation approaches have been developed and applied in climate change impact studies such as flood risk assessment and urban storm water management. In this study, five rainfall models capable of disaggregating daily rainfall data into hourly data were investigated for the rainfall record at Changi Airport, Singapore. The objectives of this study were (i) to study the temporal characteristics of hourly rainfall in Singapore, and (ii) to evaluate the performance of various disaggregation models. The models investigated were: (i) the Rectangular Pulse Poisson Model (RPPM), (ii) the Bartlett-Lewis Rectangular Pulse Model (BLRPM), (iii) the Bartlett-Lewis model with 2 cell types (BL2C), (iv) the Bartlett-Lewis Rectangular model with cell depth distribution dependent on duration (BLRD), and (v) the Neyman-Scott Rectangular Pulse Model (NSRPM). All of these models were fitted using hourly rainfall data from 1980 to 2005 obtained from the Changi meteorological station. The study results indicated that a weighting scheme of inversely proportional variance delivers more accurate outputs for fitting rainfall patterns in tropical areas, and that BLRPM performed relatively better than the other disaggregation models.
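The simplest of the listed models, the rectangular-pulse Poisson model, can be sketched as follows: storms arrive as a Poisson process and each deposits a rectangular pulse of exponential duration and intensity. The parameters are illustrative assumptions, not the fitted Changi values.

```python
import numpy as np

def simulate_rppm(rate, mean_dur, mean_int, hours, seed=0):
    """Rectangular-pulse Poisson rainfall model.
    rate: storm arrivals per hour; mean_dur: mean pulse duration (h);
    mean_int: mean pulse intensity (mm/h). Returns hourly rainfall (mm)."""
    rng = np.random.default_rng(seed)
    rain = np.zeros(hours)
    t = rng.exponential(1.0 / rate)              # first storm arrival time
    while t < hours:
        dur = rng.exponential(mean_dur)          # pulse duration
        inten = rng.exponential(mean_int)        # pulse intensity
        for h in range(int(t), min(hours, int(np.ceil(t + dur)))):
            # rainfall in hour h is intensity times the pulse/hour overlap
            overlap = min(t + dur, h + 1) - max(t, h)
            rain[h] += inten * overlap
        t += rng.exponential(1.0 / rate)         # next storm arrival
    return rain

hourly = simulate_rppm(rate=0.02, mean_dur=3.0, mean_int=2.0, hours=24 * 365)
```

The Bartlett-Lewis and Neyman-Scott variants enrich this scheme by clustering rain cells within storms, which is what gives them the better fit to observed hourly patterns reported above.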
Abstract: Cloud Computing has recently emerged as a
compelling paradigm for managing and delivering services over the
internet. The rise of Cloud Computing is rapidly changing the
landscape of information technology, and ultimately turning the long-held promise of utility computing into a reality. As the development
of the Cloud Computing paradigm progresses rapidly, concepts and terminologies are becoming imprecise and ambiguous, and different technologies are overlapping. Thus, it becomes crucial to
clarify the key concepts and definitions. In this paper, we present the
anatomy of Cloud Computing, covering its essential concepts,
prominent characteristics, its effects, architectural design and key technologies. We differentiate the various service and deployment models. Also, significant challenges and risks need to be tackled in order to guarantee the long-term success of Cloud Computing. The
aim of this paper is to provide a better understanding of the anatomy
of Cloud Computing and pave the way for further research in this
area.
Abstract: Computer-based geostatistical methods can offer effective data analysis possibilities for agricultural areas by using vectorial data and its attribute information. These methods help to detect spatial changes at different locations in large agricultural lands, which leads to effective fertilization for optimal yield with reduced environmental pollution. In this study, topsoil (0-20 cm) and subsoil (20-40 cm) samples were taken from a sugar beet field on a 20 x 20 m grid. Plant samples were also collected from the same plots. Some physical and chemical analyses of these samples were made by routine methods. According to the derived coefficients of variation, the topsoil organic matter (OM) distribution varied more than the subsoil OM distribution; the highest C.V. value, 17.79%, was found for topsoil OM. The data were analyzed comparatively using kriging methods, which are widely used in geostatistics. Several interpolation methods (Ordinary, Simple and Universal) and semivariogram models (Spherical, Exponential and Gaussian) were tested in order to choose the most suitable ones. The average standard deviations of values estimated by the simple kriging interpolation method were less than the average standard deviations of the measured values (topsoil OM ± 0.48, N ± 0.37, subsoil OM ± 0.18). The most suitable combination was the simple kriging method with an exponential semivariogram model for topsoil, whereas the best was the simple kriging method with a spherical semivariogram model for subsoil. The results also showed that these computer-based geostatistical methods should be tested and calibrated for different experimental conditions and semivariogram models.
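The three semivariogram models compared (spherical, exponential, Gaussian) have standard forms; a minimal sketch with illustrative sill, range, and nugget values (not the fitted field parameters):

```python
import numpy as np

def spherical(h, sill, a, nugget=0.0):
    """Spherical semivariogram: reaches the sill exactly at range a."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h >= a, sill, np.where(h == 0, 0.0, g))

def exponential(h, sill, a, nugget=0.0):
    """Exponential semivariogram: approaches the sill asymptotically."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / a))
    return np.where(h == 0, 0.0, g)

def gaussian(h, sill, a, nugget=0.0):
    """Gaussian semivariogram: parabolic (very smooth) near the origin."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * (h / a) ** 2))
    return np.where(h == 0, 0.0, g)

h = np.linspace(0.0, 100.0, 11)            # lag distances in metres, illustrative
gam_sph = spherical(h, sill=1.0, a=60.0)
gam_exp = exponential(h, sill=1.0, a=60.0)
gam_gau = gaussian(h, sill=1.0, a=60.0)
```

Fitting each of these curves to the empirical semivariogram of the grid samples, and keeping the one with the best fit per soil layer, is what selects the exponential model for topsoil and the spherical model for subsoil in this study.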