Abstract: Modelling techniques for a fluid coupling taken from
published literature have been extended to include the effects of the
filling and emptying of the coupling with oil and the variation in
losses when the coupling is partially full. In the model, the fluid flow
inside the coupling is considered to have two principal velocity
components: one circumferential, about the coupling axis
(centrifugal head), and the other representing the secondary vortex
within the coupling itself (vortex head). The calculation of liquid
mass flow rate circulating between the two halves of the coupling is
based on: the assumption of a linear velocity variation in the
circulating vortex flow; the head differential in the fluid due to the
speed difference between the two shafts; and the losses in the
circulating vortex flow as a result of the impingement of the flow
with the blades in the coupling and friction within the passages
between the blades.
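The head-balance calculation described above can be sketched numerically. The formulas below are textbook-style illustrations (a centrifugal head of ω²(r_o² − r_i²)/2g per half, and a single lumped loss coefficient covering blade impingement and passage friction), not the paper's exact model; all parameter values in the usage are hypothetical.

```python
import math

def circulating_flow(omega_p, omega_t, r_outer, r_inner, area, k_loss, rho=860.0):
    """Head-balance estimate of the circulating mass flow (illustrative).

    Each half generates a centrifugal head w^2 (r_o^2 - r_i^2) / (2g); the
    pump/turbine head differential drives the vortex flow against lumped
    shock-and-friction losses modelled as k_loss * v^2 / (2g)."""
    g = 9.81
    dh = (omega_p**2 - omega_t**2) * (r_outer**2 - r_inner**2) / (2 * g)
    if dh <= 0:
        return 0.0                          # no slip, no circulation
    v = math.sqrt(2 * g * dh / k_loss)      # vortex velocity from head balance
    return rho * area * v                   # mass flow rate, kg/s
```

For example, `circulating_flow(150.0, 100.0, 0.15, 0.08, 0.01, 3.0)` gives the circulating flow at 50 rad/s of slip; at zero slip the function returns zero, consistent with no head differential.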
Abstract: In this paper, fluid flow patterns of steady incompressible flow inside a shear-driven cavity are studied. The numerical simulations are conducted using the lattice Boltzmann method (LBM) for different Reynolds numbers. In order to simulate the flow, a derivation of the macroscopic hydrodynamic equations from the continuous Boltzmann equation needs to be performed. Then, the numerical results of shear-driven flow inside square and triangular cavities are compared with results found in the literature. The present study found that the flow patterns are affected by the geometry of the cavity and the Reynolds numbers used.
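A minimal D2Q9 lattice Boltzmann sketch of the shear-driven (lid-driven) square cavity, using a BGK collision operator, bounce-back on the three stationary walls and a moving-lid bounce-back on the top row. Grid size, relaxation time and lid speed are illustrative choices, not the paper's settings.

```python
import numpy as np

def lid_driven_cavity(n=32, tau=0.6, u_lid=0.1, steps=400):
    """Minimal D2Q9 BGK solver for a square cavity whose top wall slides
    with velocity u_lid in +x (illustrative parameters)."""
    e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])     # lattice velocities
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)               # lattice weights
    f = np.ones((9, n, n)) * w[:, None, None]              # fluid at rest, rho = 1

    for _ in range(steps):
        rho = f.sum(axis=0)
        ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
        usq = ux**2 + uy**2
        for i in range(9):                                 # BGK collision
            eu = e[i, 0]*ux + e[i, 1]*uy
            feq = w[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
            f[i] -= (f[i] - feq) / tau
        for i in range(9):                                 # streaming
            f[i] = np.roll(f[i], (e[i, 1], e[i, 0]), axis=(0, 1))
        # bounce-back of the unknown populations on the stationary walls
        f[2, 0, :], f[5, 0, :], f[6, 0, :] = f[4, 0, :], f[7, 0, :], f[8, 0, :]
        f[1, :, 0], f[5, :, 0], f[8, :, 0] = f[3, :, 0], f[7, :, 0], f[6, :, 0]
        f[3, :, -1], f[6, :, -1], f[7, :, -1] = f[1, :, -1], f[8, :, -1], f[5, :, -1]
        # moving-lid bounce-back on the top row (momentum correction)
        rho_top = f[:, -1, :].sum(axis=0)
        f[4, -1, :] = f[2, -1, :]
        f[7, -1, :] = f[5, -1, :] - rho_top * u_lid / 6
        f[8, -1, :] = f[6, -1, :] + rho_top * u_lid / 6

    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    return ux, uy, rho
```

After a few hundred steps the fluid near the lid is dragged in the lid direction while total density stays close to one, which is the qualitative behaviour the abstract's flow-pattern study builds on.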
Abstract: Appropriate description of business processes through
standard notations has become one of the most important assets for
organizations. Organizations must therefore deal with quality faults
in business process models such as the lack of understandability and
modifiability. These quality faults may be exacerbated if business
process models are mined by reverse engineering, e.g., from existing
information systems that support those business processes. Hence,
business process refactoring is often used, which changes the internal
structure of business processes whilst their external behavior is
preserved. This paper aims to choose the most appropriate set of
refactoring operators through the quality assessment concerning
understandability and modifiability. These quality features are
assessed through well-proven measures proposed in the literature.
Additionally, a set of measure thresholds is heuristically established
for applying the most promising refactoring operators, i.e., those that
achieve the highest quality improvement according to the selected
measures in each case.
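The threshold-driven operator selection can be sketched as a simple rule table. The measures, threshold values and operator names below are hypothetical stand-ins; the paper derives its thresholds empirically from the selected understandability and modifiability measures.

```python
# Hypothetical measures, thresholds and operators for illustration only.
THRESHOLDS = {
    "number_of_nodes": 50,     # large models hurt understandability
    "gateway_mismatch": 2,     # unmatched split/join gateways
    "nesting_depth": 4,        # deep nesting hurts modifiability
}

OPERATORS = {
    "number_of_nodes": "extract_subprocess",
    "gateway_mismatch": "balance_gateways",
    "nesting_depth": "flatten_nesting",
}

def select_refactorings(measures):
    """Return the operators whose triggering measure exceeds its threshold,
    ordered by how strongly the threshold is exceeded (most promising first)."""
    hits = [(m, measures[m] / THRESHOLDS[m])
            for m in THRESHOLDS if measures.get(m, 0) > THRESHOLDS[m]]
    hits.sort(key=lambda t: t[1], reverse=True)
    return [OPERATORS[m] for m, _ in hits]
```

A model with 120 nodes and nesting depth 5 would trigger subprocess extraction first (2.4× over threshold) and nesting flattening second (1.25×).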
Abstract: An accurate and efficient artificial neural network
(ANN) combined with a genetic algorithm (GA) is developed for
predicting nanofluid viscosity. The genetic algorithm is used to optimize
the neural network parameters for minimizing the error between the
predicted viscosity and the experimental values. Experimental
viscosity data for two nanofluids, Al2O3-H2O and CuO-H2O, covering 278.15
to 343.15 K and volume fractions up to 15%, were taken from the
literature. The results of this study reveal that the GA-NN model
outperforms conventional neural networks in predicting the viscosity
of nanofluids, with mean absolute relative errors of 1.22% and 1.77%
for Al2O3-H2O and CuO-H2O, respectively. Furthermore, the results
of this work have also been compared with other models. The
findings demonstrate that the GA-NN model is an effective method
for predicting the viscosity of nanofluids, with better accuracy and
simplicity than the other models.
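A compact sketch of the GA-NN idea: a genetic algorithm evolves the weights of a small feedforward network to minimize the mean absolute relative error (MARE), the accuracy metric quoted above. The training data here are synthetic, and the network size, GA operators and settings are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mare(y_true, y_pred):
    # mean absolute relative error, the accuracy metric quoted in the abstract
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

def net(params, X, hidden=4):
    # tiny one-hidden-layer network; all weights live in the flat vector `params`
    n_in = X.shape[1]
    W1 = params[:n_in * hidden].reshape(n_in, hidden)
    b1 = params[n_in * hidden:(n_in + 1) * hidden]
    W2 = params[(n_in + 1) * hidden:(n_in + 2) * hidden]
    b2 = params[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def ga_train(X, y, pop=60, gens=150, hidden=4):
    # GA over the weight vector: keep the better half, breed children by
    # averaging random parent pairs and adding Gaussian mutation
    dim = (X.shape[1] + 2) * hidden + 1
    P = rng.normal(0.0, 1.0, (pop, dim))
    for _ in range(gens):
        fit = np.array([mare(y, net(p, X, hidden)) for p in P])
        elite = P[np.argsort(fit)[:pop // 2]]
        pa = elite[rng.integers(0, len(elite), pop // 2)]
        pb = elite[rng.integers(0, len(elite), pop // 2)]
        P = np.vstack([elite, (pa + pb) / 2 + rng.normal(0.0, 0.1, (pop // 2, dim))])
    fit = np.array([mare(y, net(p, X, hidden)) for p in P])
    return P[np.argmin(fit)], float(fit.min())

def demo():
    # synthetic stand-in for the experimental viscosity data (made-up trend)
    T = np.linspace(278.15, 343.15, 40)
    phi = rng.uniform(0.0, 0.15, 40)
    mu = np.exp(-0.02 * (T - 278.15)) * (1 + 10 * phi)
    X = np.column_stack([(T - 310.0) / 35.0, phi / 0.075 - 1.0])  # normalised inputs
    return ga_train(X, mu)[1]
```

Running `demo()` returns the best MARE found by the GA on the synthetic data; it falls well below the error of a constant predictor, illustrating why GA weight optimization can beat unguided training.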
Abstract: With the gradual increase in enterprise scale, firms
may come to possess many manufacturing plants located in
geographically different places. This change results in multi-site
production planning problems in an environment of multiple
plants or production resources. Our research proposes a structural
framework to analyze multi-site planning problems. The analytical
framework is composed of six elements: multi-site conceptual model,
product structure (bill of manufacturing), production strategy,
manufacturing capability and characteristics, production planning
constraints, and key performance indicators. In addition to discussing
these six elements, we also review the related literature in this paper
to match our analytical framework. Finally, we take a real-world
example of a TFT-LCD manufacturer in Taiwan to explain
our proposed analytical framework for multi-site production
planning problems.
Abstract: The purpose of this study is to identify ideal urban
design elements of waterfronts and to analyze the differences in users'
cognition of these elements. This study follows three steps: first,
identifying the urban design elements of waterfronts from a literature
review; second, evaluating intended users' cognition of urban design
elements in urban waterfronts; and third, analyzing the differences in
users' cognition. The results show that users' evaluations of waterfront
areas share a similar feature: non-waterfront urban design elements
carry the highest degree of importance. This indicates that the
difference in users' cognition has dimensions of frequency and
distance, and demonstrates greater differences in importance than in
satisfaction. The Multi-Dimensional Scaling method verifies the
differences in their cognition. This study provides elements to increase
users' satisfaction based on the differences in their cognition of design
elements for waterfronts. It also suggests implications for these
elements when waterfronts are built.
Abstract: Many recent high energy physics calculations
involving charm and beauty invoke wave function at the origin
(WFO) for the meson bound state. Uncertainties of charm and beauty
quark masses and different models for potentials governing these
bound states require a simple numerical algorithm for evaluating the
WFOs of these bound states. We present a simple algorithm for this
purpose which provides WFOs with high precision compared with
similar values already obtained in the literature.
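One simple algorithm of the kind the abstract refers to is an RK4 shooting method with energy bisection for the l = 0 radial Schrödinger equation; once the eigenfunction u(r) is normalized, |Ψ(0)|² = |u'(0)|²/4π gives the WFO. The harmonic oscillator in the usage below is only a correctness check (known ground-state energy 3/2 in units ħ = m = ω = 1); the paper's quarkonium potentials and its exact algorithm may differ.

```python
def integrate_radial(E, V, r_max=8.0, h=2e-3):
    """RK4 integration of u'' = 2 (V(r) - E) u with u(0) = 0, u'(0) = 1
    (l = 0, hbar = m = 1); returns u(r_max).  For potentials singular at
    the origin (e.g. Coulomb-plus-linear), start from a small r offset."""
    u, du, r = 0.0, 1.0, 0.0
    def acc(r_, u_):
        return 2.0 * (V(r_) - E) * u_
    for _ in range(int(r_max / h)):
        k1u, k1d = du, acc(r, u)
        k2u, k2d = du + 0.5*h*k1d, acc(r + 0.5*h, u + 0.5*h*k1u)
        k3u, k3d = du + 0.5*h*k2d, acc(r + 0.5*h, u + 0.5*h*k2u)
        k4u, k4d = du + h*k3d, acc(r + h, u + h*k3u)
        u += h * (k1u + 2*k2u + 2*k3u + k4u) / 6
        du += h * (k1d + 2*k2d + 2*k3d + k4d) / 6
        r += h
    return u

def ground_state_energy(V, lo, hi, tol=1e-7):
    """Bisect on the sign of u(r_max): below the ground state the tail stays
    positive, above it a node appears and the tail turns negative.  With the
    normalised u, |Psi(0)|^2 = |u'(0)|^2 / (4 pi) then gives the WFO."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if integrate_radial(mid, V) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For V(r) = r²/2 the routine recovers the known eigenvalue E₀ = 1.5 to high precision, which is the kind of benchmark the abstract's precision claim rests on.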
Abstract: In this paper, a particle swarm optimization (PSO)
algorithm is proposed to solve the machine loading problem in a flexible
manufacturing system (FMS), with the bicriterion objectives of
minimizing system unbalance and maximizing system throughput in
the presence of technological constraints such as available
machining time and tool slots. A mathematical model is used to
select machines, assign operations and the required tools. The
performance of the PSO is tested using 10 sample datasets, and the
results are compared with heuristics reported in the literature. The
results show that the proposed PSO is comparable with the
algorithms reported in the literature.
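A generic PSO of the kind used above, shown here minimizing a simple continuous test function rather than the machine loading model itself (whose solution encoding and constraint handling the abstract does not detail); all parameter values are common textbook defaults.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Standard particle swarm minimization of f over a box [lo, hi]^dim."""
    rnd = random.Random(seed)
    lo, hi = bounds
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                       # personal best positions
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                # inertia + cognitive + social velocity update
                V[i][d] = (w * V[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```

On the 5-dimensional sphere function the swarm converges to near the origin within 200 iterations, illustrating the exploration/exploitation balance the loading problem version would rely on.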
Abstract: The purpose of this study was to evaluate and
compare new indices based on the discrete wavelet transform
with other spectral parameters proposed in the literature, such
as mean average voltage, median frequency and ratios between
spectral moments, applied to estimate acute exercise-induced
changes in power output, i.e., to assess peripheral muscle
fatigue during a dynamic fatiguing protocol. Fifteen trained
subjects performed 5 sets consisting of 10 leg presses, with 2
minutes rest between sets. Surface electromyography was
recorded from vastus medialis (VM) muscle. Several surface
electromyographic parameters were compared to detect
peripheral muscle fatigue. These were: mean average voltage
(MAV), median spectral frequency (Fmed), Dimitrov spectral
index of muscle fatigue (FInsm5), as well as other five
parameters obtained from the discrete wavelet transform
(DWT), computed as ratios between different scales. The new wavelet
indices achieved the best results in Pearson correlation
coefficients with power output changes during acute dynamic
contractions. Their regressions were significantly different
from MAV and Fmed. On the other hand, they showed the
highest robustness in the presence of additive white Gaussian
noise at different signal-to-noise ratios (SNRs). Therefore,
peripheral impairments assessed by sEMG wavelet indices
may be a relevant factor involved in the loss of power output
after a dynamic high-loading fatiguing task.
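The classical parameters compared above can be computed directly from the sEMG power spectrum. The sketch below uses the common definitions of MAV, median frequency and Dimitrov's FInsm5 (the ratio of the spectral moments of order −1 and 5); the paper's exact preprocessing (windowing, band-pass filtering) is not reproduced.

```python
import numpy as np

def mav(emg):
    # mean average voltage: mean absolute value of the signal
    return float(np.mean(np.abs(emg)))

def median_frequency(emg, fs):
    # frequency that splits the power spectrum into two equal-energy halves
    spec = np.abs(np.fft.rfft(emg)) ** 2
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    cum = np.cumsum(spec)
    return float(freqs[np.searchsorted(cum, cum[-1] / 2)])

def dimitrov_finsm5(emg, fs):
    # Dimitrov's FInsm5: ratio of spectral moments of order -1 and 5
    spec = np.abs(np.fft.rfft(emg)) ** 2
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    spec, freqs = spec[1:], freqs[1:]          # drop the DC (f = 0) bin
    return float((spec / freqs).sum() / (spec * freqs**5).sum())
```

A pure 80 Hz sine sampled at 1 kHz gives a median frequency of 80 Hz and an MAV of 2/π, a quick sanity check before applying the estimators to real vastus medialis recordings.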
Abstract: Arc welding creates a weld pool to realize continuity between the pieces of an assembly. The thermal history of the weld depends on heat transfer and fluid flow in the weld pool. The metallurgical transformations during welding and cooling are modeled in the literature only in the solid state, neglecting the fluid flow. In the present paper we couple a heat transfer–fluid flow model with a metallurgical model for 16MnD5 steel. The metallurgical transformation model is based on the Leblond model for the diffusion-controlled kinetics and on the Koistinen-Marburger equation for the martensite transformation. The predicted thermal history and metallurgical transformations are compared to a simulation without the fluid phase. This comparison shows the great importance of modeling the fluid flow.
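The Koistinen-Marburger equation mentioned above gives the martensite fraction directly as a function of undercooling below the martensite start temperature Ms. The rate constant used here is a typical literature value for steels, not necessarily the one fitted for 16MnD5.

```python
import math

def martensite_fraction(T, Ms, alpha=0.011):
    """Koistinen-Marburger: fraction of austenite transformed to martensite
    after cooling to temperature T below the martensite start temperature Ms.
    alpha ~ 0.011 1/K is a typical value for steels (assumed here)."""
    if T >= Ms:
        return 0.0
    return 1.0 - math.exp(-alpha * (Ms - T))
```

With Ms = 400 °C, cooling to 300 °C transforms about two thirds of the austenite, and the fraction grows monotonically with further undercooling, which is the behaviour the coupled thermal-metallurgical model tracks along each predicted cooling curve.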
Abstract: Energy-efficient protocol design is the aim of current
research in the area of sensor networks, where limited power
resources impose energy conservation considerations. In this paper
we focus on Medium Access Control (MAC) protocols and, after an
extensive literature review, discuss two adaptive schemes. Of
these, adaptive-rate MACs, which were introduced for throughput
enhancement, show the potential to save energy, even more than
adaptive-power schemes. We then propose an allocation algorithm
for obtaining accurate and reliable results. Through a simulation study
we validate our claim and show the power savings of adaptive-rate
protocols.
Abstract: The prediction of software quality during the development life cycle of a software project helps the development organization make efficient use of available resources to produce a product of the highest quality. A "whether a module is faulty or not" approach can be used to predict the quality of a software module. A number of software quality prediction models described in the literature are based on genetic algorithms, artificial neural networks and other data mining algorithms. One of the promising approaches for quality prediction is based on clustering techniques. Most quality prediction models based on clustering techniques make use of K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas or fuzzy K-means algorithms for prediction. All of these techniques require a predefined structure; that is, the number of neurons or clusters must be known before the clustering process starts. In the case of Growing Neural Gas, however, there is no need to predetermine the number of neurons or the topology of the structure: it starts with a minimal neuron structure that is incremented during training until it reaches a user-defined limit on the number of clusters. Hence, in this work we have used Growing Neural Gas as the underlying clustering algorithm, which produces an initial set of labeled clusters from the training data set; this set of clusters is then used to predict the quality of a test set of software modules. The best testing results show 80% accuracy in evaluating the quality of software modules. Hence, the proposed technique can be used by programmers to evaluate the quality of modules during software development.
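The growth mechanism described above can be sketched as a minimal Growing Neural Gas in the style of Fritzke's original algorithm: start from two units, age and prune edges, and insert a new unit near the highest-error unit every λ inputs until the user-defined node limit is reached. All parameter values are illustrative, and the labeling step for fault prediction is omitted.

```python
import random

def gng(data, max_nodes=20, lam=50, eps_b=0.2, eps_n=0.006,
        age_max=40, alpha=0.5, d=0.995, iters=3000, seed=0):
    """Minimal Growing Neural Gas; parameters are illustrative defaults."""
    rnd = random.Random(seed)
    dim = len(data[0])
    nodes = {0: list(data[0]), 1: list(data[1])}   # start with two units
    error = {0: 0.0, 1: 0.0}
    edges = {}                                     # frozenset({a, b}) -> age
    next_id = 2
    for step in range(1, iters + 1):
        x = rnd.choice(data)
        near = sorted(nodes, key=lambda n: sum((nodes[n][k] - x[k]) ** 2
                                               for k in range(dim)))
        s1, s2 = near[0], near[1]                  # winner and runner-up
        error[s1] += sum((nodes[s1][k] - x[k]) ** 2 for k in range(dim))
        for k in range(dim):                       # move winner toward input
            nodes[s1][k] += eps_b * (x[k] - nodes[s1][k])
        for e in list(edges):                      # age edges, move neighbours
            if s1 in e:
                n = next(i for i in e if i != s1)
                for k in range(dim):
                    nodes[n][k] += eps_n * (x[k] - nodes[n][k])
                edges[e] += 1
        edges[frozenset((s1, s2))] = 0             # (re)connect the two winners
        for e in [e for e in edges if edges[e] > age_max]:
            del edges[e]                           # drop stale edges ...
        connected = {i for e in edges for i in e}
        for n in [n for n in nodes if n not in connected]:
            del nodes[n]; del error[n]             # ... and isolated units
        if step % lam == 0 and len(nodes) < max_nodes:
            q = max(error, key=error.get)          # grow near the worst unit
            nbrs = [next(i for i in e if i != q) for e in edges if q in e]
            f = max(nbrs, key=lambda n: error[n])
            nodes[next_id] = [(nodes[q][k] + nodes[f][k]) / 2 for k in range(dim)]
            edges.pop(frozenset((q, f)), None)
            edges[frozenset((q, next_id))] = 0
            edges[frozenset((f, next_id))] = 0
            error[q] *= alpha; error[f] *= alpha
            error[next_id] = error[q]
            next_id += 1
        for n in error:
            error[n] *= d                          # global error decay
    return nodes, edges
```

Fed two well-separated 2-D blobs, the network grows from two units up to the node cap, with unit positions staying inside the data region; in the paper's setting, the resulting clusters would then be labeled faulty/non-faulty from the training modules.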
Abstract: Environment-assisted cracking (EAC) is one of the most serious causes of structural failure over a broad range of industrial applications, including offshore structures. Under EAC conditions there is no definite relation such as the Paris equation of Linear Elastic Fracture Mechanics (LEFM). According to the literature, when a material is in contact with hydrogen or any other corrosive environment, electrochemical reactions between the material and its environment take place. Many different works in the literature consider fatigue crack growth and attempt to solve it, but they are experimental. Thus, in this paper, the authors aim to evaluate mathematically the previous works in LEFM. Obviously, the more sour and corrosive an environment is, the more the stress intensity factor changes and the more difficult its calculation becomes. A mathematical relation for the stress intensity factor during the diffusion of a sour environment, especially hydrogen, into a marine pipeline is presented. Using this relation together with some experimental relations, an analytical formulation is presented which enables the fatigue crack growth and the critical crack length under cyclic loading to be predicted. In addition, we can calculate KSCC and the stress intensity factor in the pipeline caused by EAC.
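The LEFM side of the argument can be made concrete: Paris' law da/dN = C(ΔK)^m with ΔK = YΔσ√(πa) integrates to the number of cycles for a crack to grow from a₀ to the length at which K reaches an (environment-reduced) limit such as K_ISCC. This is the standard textbook relation, not the paper's extended EAC formulation, and all numbers in the usage are illustrative.

```python
import math

def cycles_to_failure(a0, ac, C, m, dsigma, Y=1.0, steps=5000):
    """Numerically integrate Paris' law da/dN = C (dK)^m with
    dK = Y * dsigma * sqrt(pi * a), from crack length a0 to ac
    (units must be consistent, e.g. MPa and m)."""
    da = (ac - a0) / steps
    N, a = 0.0, a0
    for _ in range(steps):
        dK = Y * dsigma * math.sqrt(math.pi * a)
        N += da / (C * dK ** m)       # cycles spent growing by da
        a += da
    return N

def critical_length(K_limit, dsigma, Y=1.0):
    # crack length at which K reaches the environment-reduced limit (e.g. K_ISCC)
    return (K_limit / (Y * dsigma)) ** 2 / math.pi
```

For m ≠ 2 the integral also has a closed form, which the numerical result matches closely; the sourer the environment, the lower K_limit and hence the shorter the critical crack length the function returns.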
Abstract: Interest in human consciousness has been revived in the late 20th century from different scientific disciplines. Consciousness studies involve both its understanding and its application. In this paper, a computational model of the minimum consciousness functions necessary, in the author's view, for Artificial Intelligence applications is presented, with the aim of improving the way computations will be made in the future. In section I, human consciousness is briefly described according to the scope of this paper. In section II, a minimum set of consciousness functions is defined, based on the literature reviewed, to be modelled, and a computational model of these functions is then presented in section III. In section IV, an analysis of the model is carried out to describe its functioning in detail.
Abstract: Continuation of an active call is one of the most important quality measurements in cellular systems. The handoff process enables a cellular system to provide such a facility by transferring an active call from one cell to another. Different approaches have been proposed and applied in order to achieve better handoff service. The principal parameters used to evaluate handoff techniques are the forced termination probability and the call blocking probability. Mechanisms such as guard channels and the queuing of handoff calls decrease the forced termination probability while increasing the call blocking probability. In this paper we present an overview of the issues related to handoff initiation and decision and discuss the different types of handoff techniques available in the literature.
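The guard-channel trade-off described above can be quantified with a one-dimensional birth-death model: with C channels and g reserved for handoffs, new calls are admitted only while fewer than C − g channels are busy. A sketch, assuming Poisson arrivals and exponential call holding times:

```python
def guard_channel_probs(C, g, lam_new, lam_handoff, mu):
    """Steady-state birth-death model of one cell: C channels, the last g
    reserved for handoffs.  Returns (new-call blocking probability,
    handoff dropping probability)."""
    p = [1.0]                                  # unnormalised state probabilities
    for k in range(C):
        rate = lam_new + lam_handoff if k < C - g else lam_handoff
        p.append(p[-1] * rate / ((k + 1) * mu))
    total = sum(p)
    p = [x / total for x in p]
    blocking = sum(p[C - g:])                  # new call finds >= C-g channels busy
    dropping = p[C]                            # handoff finds all C channels busy
    return blocking, dropping
```

With g = 0 the two probabilities coincide (plain Erlang-B); reserving channels lowers the forced-termination (dropping) probability at the cost of a higher new-call blocking probability, exactly the trade-off the abstract describes.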
Abstract: The objective of the present communication is to
develop new genuine exponentiated mean codeword lengths and to
study in depth the problem of correspondence between well-known
measures of entropy and mean codeword lengths. With the help of
some standard measures of entropy, we have illustrated such a
correspondence. In the literature, we often come across many
inequalities which are frequently used in information theory.
Keeping this idea in mind, we have developed such inequalities via
coding theory approach.
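A typical inequality of the kind referred to above is the noiseless coding bound H(P) ≤ L < H(P) + 1 for the ordinary (non-exponentiated) Shannon mean codeword length; it can be checked numerically:

```python
import math

def shannon_entropy(p):
    # H(P) = -sum p_i log2 p_i
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def shannon_code_lengths(p):
    # Shannon code: l_i = ceil(-log2 p_i); these lengths satisfy Kraft's inequality
    return [math.ceil(-math.log2(pi)) for pi in p]

def mean_codeword_length(p, lengths):
    # L = sum p_i l_i; the noiseless coding theorem gives H <= L < H + 1
    return sum(pi * li for pi, li in zip(p, lengths))
```

For a dyadic distribution such as (1/2, 1/4, 1/8, 1/8) the bound is tight (L = H = 1.75 bits); for non-dyadic distributions L sits strictly inside the unit-width interval. The exponentiated lengths the paper develops generalize L while keeping correspondences of this form.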
Abstract: The counting and analysis of blood cells allows the
evaluation and diagnosis of a vast number of diseases. In particular,
the analysis of white blood cells (WBCs) is a topic of great interest to
hematologists. Nowadays the morphological analysis of blood cells is
performed manually by skilled operators. This involves numerous
drawbacks, such as slowness of the analysis and a nonstandard
accuracy, dependent on the operator's skills. In the literature there are
only a few examples of automated systems for analyzing white blood
cells, most of which are only partial. This paper presents a
complete and fully automatic method for white blood cell
identification from microscopic images. The proposed method first
identifies white blood cells, from which the nucleus and cytoplasm
are subsequently extracted. The whole work has been developed in the
MATLAB environment, in particular with the Image Processing Toolbox.
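The identification step, separating cell-like regions from the background before extracting nucleus and cytoplasm, can be illustrated with a plain threshold-and-label sketch. The paper itself works on real microscopic images with the MATLAB Image Processing Toolbox; the tiny synthetic image, fixed threshold and 4-connectivity below are illustrative only.

```python
def label_components(img, threshold):
    """Binarize `img` (a 2-D list of gray levels) at `threshold` and label
    4-connected foreground components via flood fill.
    Returns the label grid and the component count."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] >= threshold and labels[y][x] == 0:
                count += 1                      # new component found
                stack = [(y, x)]
                while stack:                    # iterative flood fill
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and labels[cy][cx] == 0 and img[cy][cx] >= threshold):
                        labels[cy][cx] = count
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, count
```

On a real stained smear the thresholding criterion would be derived from the image histogram and each labeled region would then be passed to the nucleus/cytoplasm extraction stage.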
Abstract: The purpose of this study was to develop and examine a
Teaching Commitment Scale of Health and Physical Education
(TCS-HPE) for Taiwanese elementary school teachers. First, an
original 40-item scale was developed based on teaching-commitment
theory and the related literature; then both stratified random
sampling and cluster sampling were used to select participants.
During the first stage, 300 teachers were sampled and 251 valid scales
(83.7%) were returned. The data were then analyzed by exploratory
factor analysis, which accounted for 74.30% of the total variance,
supporting the construct validity. The Cronbach's alpha coefficient of
the full-scale reliability was 0.94, and subscale coefficients were
between 0.80 and 0.96. In the second stage, 400 teachers were sampled
and 318 valid scales (79.5%) were returned. Finally, this study used
confirmatory factor analysis to test the validity and reliability of the
TCS-HPE. The results showed that the fit indexes reached acceptable
criteria (χ²(246) = 557.64, p
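The reliability coefficient reported above is Cronbach's alpha, α = k/(k−1) · (1 − Σσᵢ²/σ_T²), where the σᵢ² are the item variances and σ_T² is the variance of the total scores. A direct implementation (the item scores in the usage are made up for illustration, not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.
    `items` is a list of per-item score lists, all over the same respondents
    in the same order."""
    k = len(items)
    n = len(items[0])

    def var(xs):                      # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(it) for it in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))
```

Perfectly parallel items yield α = 1, and strongly but imperfectly correlated items yield values just below 1, the range in which the study's full-scale coefficient of 0.94 sits.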
Abstract: One of the determinants of a firm's prosperity is the
customers' perceived service quality and satisfaction. While service
quality is wide in scope and consists of various dimensions, there
may be differences in the relative importance of these dimensions in
affecting customers' overall satisfaction with service quality.
Identifying the relative rank of the different dimensions of service
quality is very important in that it can help managers find out which
service dimensions have a greater effect on customers' overall
satisfaction. Such an insight will consequently lead to more effective
resource allocation, which will finally end in higher levels of
customer satisfaction. This issue, despite its criticality, has not
received enough attention so far. Therefore, using a sample of 240
bank customers in Iran, an artificial neural network is developed to
address this gap in the literature. As customers' evaluation of service
quality is a subjective process, artificial neural networks, as a brain
metaphor, may have the potential to model such a
complicated process. Proposing a neural network which is able to
predict customers' overall satisfaction with service quality with a
promising level of accuracy is the first contribution of this study. In
addition, prioritizing the service quality dimensions affecting
customers' overall satisfaction, by using sensitivity analysis of the
neural network, is the second important finding of this paper.
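The sensitivity analysis used as the second contribution can be sketched as a one-at-a-time input perturbation of a trained network: perturb each service-quality dimension and rank dimensions by the resulting change in predicted satisfaction. The tiny fixed network below stands in for the paper's trained model; its weights are purely illustrative and constructed so that input 0 matters most.

```python
import numpy as np

def sensitivity(predict, X, delta=0.05):
    """One-at-a-time sensitivity: perturb each input dimension by +delta and
    measure the mean absolute change in the model output over the sample X."""
    base = predict(X)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += delta
        scores.append(float(np.mean(np.abs(predict(Xp) - base))))
    ranking = sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)
    return ranking, scores

# stand-in for a trained network: weights are illustrative, not fitted to data
w1 = np.array([[2.0, -1.5], [0.2, 0.1], [0.05, -0.05]])
w2 = np.array([1.0, -1.0])
predict = lambda X: np.tanh(X @ w1) @ w2
```

Applied to the bank-customer model, the ranking would directly answer the managerial question of which service dimensions deserve the most resources.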
Abstract: Intravitreal injection (IVI) is the most common treatment for eye posterior segment diseases such as endophthalmitis, retinitis, age-related macular degeneration, diabetic retinopathy, uveitis, and retinal detachment. Most of the drugs used to treat vitreoretinal diseases have a narrow concentration range in which they are effective, and may be toxic at higher concentrations. Therefore, it is critical to know the drug distribution within the eye following intravitreal injection. Having knowledge of drug distribution, ophthalmologists can decide on drug injection frequency while minimizing damage to tissues. The goal of this study was to develop a computer model to predict intraocular concentrations and pharmacokinetics of intravitreally injected drugs. A finite volume model was created to predict the distribution of two drugs with different physicochemical properties in the rabbit eye. The model parameters were obtained from a literature review. To validate this numeric model, the in vivo data of the spatial concentration profile from the lens to the retina were compared with the numeric data. The difference was less than 5% between the numerical and experimental data. This validation provides strong support for the numerical methodology and associated assumptions of the current study.
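The finite volume approach can be illustrated with a one-dimensional explicit sketch of drug diffusion between the lens (treated here as a zero-flux boundary) and the retina (treated as a perfect sink). The geometry, diffusion coefficient and boundary treatments are simplified stand-ins for the paper's full rabbit-eye model.

```python
import numpy as np

def diffuse(c0, D, dx, dt, steps):
    """Explicit 1-D finite-volume diffusion of a concentration profile:
    zero-flux face at the left boundary (lens side), perfect-sink face at
    the right boundary (retina side)."""
    c = np.asarray(c0, dtype=float).copy()
    assert D * dt / dx**2 <= 0.5, "explicit scheme stability limit"
    for _ in range(steps):
        flux = np.zeros(len(c) + 1)               # fluxes on cell faces
        flux[1:-1] = -D * (c[1:] - c[:-1]) / dx   # interior faces: -D dc/dx
        flux[0] = 0.0                             # lens: impermeable
        flux[-1] = D * c[-1] / dx                 # retina: ghost value c = 0
        c -= dt / dx * (flux[1:] - flux[:-1])     # conservative update
    return c
```

Starting from a localized injected bolus, the profile spreads, its peak decays, and total mass decreases only through the retinal sink, the conservative bookkeeping that makes finite volume schemes attractive for pharmacokinetic predictions of this kind.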