Abstract: To achieve reliable solutions, today's numerical and
experimental studies of microchannel flows require, respectively, the
development of more accurate methods and the use of expensive
facilities. Analytical study can be considered an alternative approach
that alleviates these difficulties. Among analytical solutions, those
with high robustness and low complexity are certainly more attractive.
The perturbation theory has been used by many researchers to analyze
microflows. In the present work, a compressible microflow with a constant
heat flux boundary condition is analyzed. The flow is assumed to be
fully developed and steady. The Mach and Reynolds numbers are also
assumed to be very small. For this case, the creeping phenomenon
may have some effect on the velocity profile. To achieve a robust
solution, the flow is assumed to be quasi-isothermal. In this study,
the creeping term which appears in the slip boundary condition
is expressed through several mathematical formulations. The difference
between this work and the previous ones is that the creeping term
is taken into account and presented in non-dimensional form.
The results obtained from perturbation theory are presented in terms
of four non-dimensional parameters: the Reynolds,
Mach, Prandtl and Brinkman numbers. The axial velocity, normal
velocity and pressure profiles are obtained. Solutions for velocities
and pressure for two cases with different Br numbers are compared,
and the results show that the effect of the creeping phenomenon
on the velocity profile becomes more important when the Br number
is less than O(ε).
Abstract: Group contribution methods such as UNIFAC are
of major interest to researchers and engineers involved in the synthesis,
feasibility studies, design and optimization of separation processes, as
well as other applications of industrial use. Reliable knowledge of
the phase equilibrium behavior is crucial for predicting the fate
of a chemical in the environment, among other applications. The
objective of this study was to predict the solubility of selected
volatile organic compounds (VOCs) in glycol polymers and
biodiesel. Measurements can be expensive and time consuming,
hence the need for thermodynamic models. The results obtained in
this study for the infinite dilution activity coefficients compare very
well with those published in the literature from measurements. It
is suggested that in preliminary design or feasibility studies of
absorption systems for the abatement of volatile organic compounds,
prediction procedures should be implemented while accurate fluid
phase equilibrium data should be obtained from experiment.
Abstract: In the textile industry, besides conventional textile
products, technical textile goods endowed with additional functional
properties are being developed for the technical textile industry.
In particular, such products produced with weaving
technology are widely preferred in areas such as sports, geology,
medical, automotive, construction and marine sectors. These textile
products are exposed to various stresses and large deformations under
typical conditions of use. Sufficient and reliable data cannot be
obtained from uniaxial tensile tests to determine the mechanical
properties of such products, owing to the predominantly biaxial
stress state. The most widely preferred approach is therefore biaxial
tensile testing and analysis. These tests and analyses are applied to fabrics
with different functional features in order to characterize the
mechanical properties of the textile material and the final product.
Planar biaxial tensile, cylindrical inflation and bulge
tests are generally required for textile products used in the
automotive, sailing, sports and construction sectors, to minimize
accidents throughout their service life. Airbags, seat belts
and car tires in the automotive sector are also subject to the same
biaxial stress states, and can be characterized by the same types of
experiments. In this study, various biaxial test methods reported in
the research literature are compared. Results and
discussions are elaborated mainly focusing on the design of a biaxial
test apparatus to obtain applicable experimental data for developing a
finite element model. Sample experimental results on a prototype
system are expressed.
Abstract: The paper presents the results of a series of
experiments conducted on physical models of Quarter-circle
breakwater (QBW) in a two-dimensional monochromatic wave
flume. The purpose of the experiments was to evaluate the reflection
coefficient Kr of QBW models of different radii (R) for different
submergence ratios (d/hc), where d is the depth of water and hc is the
height of the breakwater crest from the sea bed. The radii of the
breakwater models studied were 20 cm, 22.5 cm, 25 cm and 27.5 cm,
and the submergence ratios used varied from 1.067 to 1.667. The wave
climate off the Mangalore coast was used for arriving at the various
model wave parameters. The incident wave heights (Hi) used in the
flume varied from 3 to 18 cm, and wave periods (T) ranged from 1.2 s
to 2.2 s. Water depths (d) of 40 cm, 45 cm and 50 cm were used in
the experiments. The data collected was analyzed to compute
variation of reflection coefficient Kr=Hr/Hi (where Hr=reflected wave
height) with the wave steepness Hi/gT² for various R/Hi
(R=breakwater radius) values. It was found that the reflection
coefficient increased as the incident wave steepness increased; it
decreased as the wave height decreased, and decreased slightly as
the structure radius R increased.
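As a minimal illustration of the quantities analyzed above, the reflection coefficient and incident wave steepness can be computed directly from their definitions; the sample values below are made-up, not the flume measurements:

```python
def reflection_stats(Hr, Hi, T, g=9.81):
    """Reflection coefficient Kr = Hr/Hi and incident wave
    steepness Hi/(g*T^2); heights in metres, period in seconds."""
    Kr = Hr / Hi
    steepness = Hi / (g * T ** 2)
    return Kr, steepness

# Hypothetical reflected/incident heights and wave period
Kr, s = reflection_stats(Hr=0.05, Hi=0.10, T=1.5)
```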
Abstract: We deal with the numerical solution of time-dependent convection-diffusion-reaction equations. We combine the local projection stabilization method for the space discretization with two different time discretization schemes: the continuous Galerkin-Petrov (cGP) method and the discontinuous Galerkin (dG) method of polynomial degree k. We establish optimal error estimates and present numerical results which show that both the cGP(k) and dG(k) methods are accurate of order k+1 in the whole time interval. Moreover, the cGP(k)-method is superconvergent of order 2k and the dG(k)-method of order 2k+1 at the discrete time points. Furthermore, the dependence of the results on the choice of the stabilization parameter is discussed and compared.
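The convergence orders stated above can be summarized compactly; the notation below (time step τ, discrete time points t_n, exact solution u, discrete solution u_τ) is ours, not necessarily the paper's:

```latex
\|u - u_\tau\|_{L^\infty(0,T)} \le C\,\tau^{k+1}
\quad\text{(both cGP($k$) and dG($k$))},
\qquad
\max_n \|u(t_n) - u_\tau(t_n)\| \le
\begin{cases}
C\,\tau^{2k}, & \text{cGP}(k),\\
C\,\tau^{2k+1}, & \text{dG}(k).
\end{cases}
```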
Abstract: Providing Services at Home has become, over the last
few years, a very dynamic and promising technological domain. It is
likely to enable wide dissemination of secure and automated living
environments. We propose a methodology for identifying threats to
Services at Home Delivery systems, as well as a threat analysis
of a multi-provider Home Gateway architecture. This methodology
is based on a dichotomous positive/preventive study of the target
system: it aims at identifying both what the system must do, and
what it must not do. This approach completes existing methods with
a synthetic view of potential security flaws, thus enabling suitable
countermeasures to be taken. The security implications of the
evolution of a given system become easier to deal with. A prototype
is built based on the conclusions of this analysis.
Abstract: In this study, we present an advanced detection
technique for mass-type breast cancer based on the texture information
of organs. The proposed method detects cancer areas in three
stages. In the first stage, the midpoints of the mass areas are determined
based on AHE (Adaptive Histogram Equalization). In the second
stage, we set the threshold coefficient of homogeneity by using
MLE (Maximum Likelihood Estimation) to compute the uniformity
of the texture. Finally, mass-type cancer tissues are extracted from the
original image. As a result, it was observed that the proposed
method shows an improved detection performance on dense breast
tissues of Korean women compared with the existing methods. It is
expected that the proposed method may provide additional
diagnostic information for detection of mass-type breast cancer.
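The first stage relies on adaptive histogram equalization (AHE); as a simplified, hypothetical stand-in, the sketch below implements plain global histogram equalization with NumPy (the adaptive variant applies the same idea per local region):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of a 2-D uint8 image:
    map each grey level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf.astype(np.uint8)[img]

# Low-contrast synthetic image: values squeezed into [100, 150]
img = np.linspace(100, 150, 64 * 64).reshape(64, 64).astype(np.uint8)
out = hist_equalize(img)
```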
Abstract: In this paper we develop the Improved Runge-Kutta Nyström (IRKN) method for solving second-order ordinary differential equations. The methods are two-step in nature and require a lower number of function evaluations per step than existing Runge-Kutta Nyström (RKN) methods. Therefore, the methods are computationally more efficient while achieving a higher order of local accuracy. Algebraic order conditions of the method are obtained, and third- and fourth-order methods are derived with two and three stages, respectively. Numerical results are given to illustrate the efficiency of the proposed method compared to existing RKN methods.
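The paper's own IRKN coefficients are not reproduced here. As a hedged baseline sketch of the problem class y'' = f(t, y) that such schemes solve, the classical fourth-order Runge-Kutta method can be applied to the equivalent first-order system:

```python
import math

def rk4_second_order(f, t0, y0, v0, h, n):
    """Integrate y'' = f(t, y) by rewriting it as the first-order
    system (y, v)' = (v, f(t, y)) and applying classical RK4."""
    t, y, v = t0, y0, v0
    for _ in range(n):
        k1y, k1v = v, f(t, y)
        k2y, k2v = v + h / 2 * k1v, f(t + h / 2, y + h / 2 * k1y)
        k3y, k3v = v + h / 2 * k2v, f(t + h / 2, y + h / 2 * k2y)
        k4y, k4v = v + h * k3v, f(t + h, y + h * k3y)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return y, v

# Harmonic oscillator y'' = -y with y(0)=1, y'(0)=0; exact y = cos(t)
y1, v1 = rk4_second_order(lambda t, y: -y, 0.0, 1.0, 0.0, 0.001, 1000)
```

A Nyström-type method would instead exploit the second-order structure directly, saving function evaluations; this baseline only fixes the test problem.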
Abstract: Simulation and modeling computer programs are
concerned with the construction of models for analyzing different
perspectives and possibilities in a changing environment.
The paper presents a theoretical justification and evaluation of a
qualitative e-learning development model from the perspective of
advancing modern technologies. The principles of qualitative
e-learning in higher education, the productivity of the study process
using modern technologies, different kinds of methods, and the future
perspectives of e-learning in formal education are analyzed. A
theoretically grounded and practically tested model for developing
e-learning methods, using different technologies for different types
of classroom, has been worked out; it can support a professor's
decision-making process when choosing the most effective e-learning
methods.
Abstract: This paper presents a comparative study of statistical methods for the multi-response surface optimization of a cryogenic freezing process. Taguchi design and analysis and steepest-ascent methods based on the desirability function were applied to ascertain the influential factors of the cryogenic freezing process and their optimal levels. The preferable levels of the set point, exhaust fan speed, retention time and flow direction are -90 °C, 20 Hz, 18 minutes and counter-current, respectively. The overall desirability level is 0.7044.
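The overall desirability reported above is conventionally the geometric mean of the individual response desirabilities (the Derringer-Suich approach); a minimal sketch, with made-up desirability values:

```python
import math

def overall_desirability(d):
    """Geometric mean of individual desirabilities d_i in [0, 1];
    any d_i = 0 forces the overall desirability to 0."""
    if any(di < 0.0 or di > 1.0 for di in d):
        raise ValueError("each desirability must lie in [0, 1]")
    return math.prod(d) ** (1.0 / len(d))

# Hypothetical desirabilities for two responses
D = overall_desirability([0.64, 0.81])
```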
Abstract: This paper considers the robust recovery of sparse
frequencies from partial phase-only measurements. The proposed
method reconstructs sparse frequencies by making full use of the
sparse distribution in the Fourier representation of the
complex-valued time signal. Simulation experiments illustrate the
proposed method's advantages over conventional methods in both
noiseless and additive white Gaussian noise cases.
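This is not the paper's recovery algorithm, but a toy illustration of why phase-only measurements retain frequency information: for an amplitude-modulated complex tone, discarding the magnitudes still leaves a DFT peak at the true frequency (the signal parameters below are made up):

```python
import cmath, math

N, f_true = 64, 7  # hypothetical signal length and frequency bin
# Amplitude-modulated complex tone (positive, time-varying magnitude)
x = [(2 + math.sin(2 * math.pi * n / N))
     * cmath.exp(2j * math.pi * f_true * n / N) for n in range(N)]
phase_only = [z / abs(z) for z in x]   # unit-modulus measurements

def dft_mag(sig):
    """Magnitude of the naive discrete Fourier transform."""
    n_len = len(sig)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * n / n_len)
                    for n, s in enumerate(sig))) for k in range(n_len)]

mags = dft_mag(phase_only)
peak = max(range(N), key=lambda k: mags[k])
```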
Abstract: The aim of this article is to extend and develop
econometric and network-structure-based methods that are able to
detect price manipulation on the Tehran Stock Exchange. The
principal goal of the present study is to offer a model for
detecting price manipulation on the Tehran Stock Exchange. To this
end, using a separation method, a sample of 397 companies listed
on the Tehran Stock Exchange was selected, and information on
their prices and trading volumes during the years 2001 to 2009 was
collected. Runs tests, skewness tests and duration correlation tests
were then performed to divide the selected companies into two sets:
manipulated and non-manipulated. In the next stage, by investigating
the cumulative return process and trading volume of the manipulated
companies, the starting date of the price manipulation was identified.
Then, using information on company size, information transparency,
P/E ratio and stock liquidity one year prior to the manipulation,
a logit model, an artificial neural network and a multiple
discriminant analysis model were designed to forecast price
manipulation of the stocks of companies listed on the Tehran Stock
Exchange. Finally, the forecasting power of the models was evaluated
on a test set: 92.1% for the logit model, 94.1% for the artificial
neural network and 90.2% for the multiple discriminant analysis
model. All three models therefore have high power to forecast
price manipulation, and there is no considerable difference among
their forecasting power.
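Of the three classifiers compared, the logit model is the simplest to sketch. Below is a minimal, hypothetical logistic-regression fit by stochastic gradient descent on the log-loss; the single toy feature stands in for inputs such as company size or P/E ratio (it is not the paper's data or specification):

```python
import math

def train_logit(X, y, lr=0.1, epochs=2000):
    """Minimal logistic-regression fit by stochastic gradient
    descent on the log-loss. X: feature rows, y: 0/1 labels."""
    w = [0.0] * (len(X[0]) + 1)               # intercept + weights
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))    # predicted probability
            g = p - yi                        # gradient of loss w.r.t. z
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if z > 0 else 0

# Toy, made-up feature (e.g. a liquidity score); label 1 = manipulated
X, y = [[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1]
w = train_logit(X, y)
```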
Abstract: Clustering is one of the most interesting data mining
topics and can be applied in many fields. Recently, the problem of cluster
analysis is formulated as a problem of nonsmooth, nonconvex optimization,
and an algorithm for solving the cluster analysis problem
based on nonsmooth optimization techniques is developed. This
optimization problem has a number of characteristics that make it
challenging: it has many local minima, the optimization variables
can be either continuous or categorical, and there are no exact
analytical derivatives. In this study we show how to apply a particular
class of optimization methods known as pattern search methods
to address these challenges. These methods do not explicitly use
derivatives, an important feature that has not been addressed in
previous studies. Results of numerical experiments are presented
which demonstrate the effectiveness of the proposed method.
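A minimal sketch of the derivative-free idea: compass (pattern) search polls the objective along coordinate directions and shrinks the step when no poll point improves. The quadratic objective below is a stand-in, not the clustering objective:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, shrink=0.5):
    """Derivative-free compass search: poll f at x +/- step*e_i;
    move to the first improving point, otherwise shrink the step."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= shrink
    return x, fx

# Minimize a smooth convex bowl (not the paper's clustering objective)
xmin, fmin = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                            [0.0, 0.0])
```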
Abstract: In this paper, we focus on the use of knowledge bases
in two different application areas – control of systems with unknown
or strongly nonlinear models (i.e. hardly controllable by the classical
methods), and robot motion planning in eight directions. The first
one deals with fuzzy logic and the paper presents approaches for
setting and aggregating the rules of a knowledge base. The second one
is concentrated on a case-based reasoning strategy for finding the
path in a planar scene with obstacles.
Abstract: Current GIS-based production of indigenous maps is mostly carried out by professional GIS personnel. Since such persons maintain control over data collection and authoring, errors due to misrepresentation or cognitive misunderstanding can arise, causing inconsistencies in map production. In order to avoid such issues, this research into tribal GIS interfaces focuses not on customizing interfaces for individual tribes, but rather on generalizing the interface and features based on the needs of indigenous tribal users. The methods employed differ from the traditional expert-driven top-down approach, instead gaining a deeper understanding of indigenous mapping and user needs before applying mapping techniques and developing features.
Abstract: This study suggests how an order-receiving company
can avoid disclosing schedule information on unit tasks to the
order-placing company when carrying out a collaborative project on
the value chain in an order-oriented industry. Specifically, it suggests
methods for keeping schedule information confidential, and
categorizes potential situations by inter-task dependency. Lastly, an
approach to selecting the most suitable non-disclosure method is
discussed. With the methods for not disclosing work-related
information suggested in the study, order-receiving companies can
logically deal with political issues relating to the question of whether
or not to disclose information upon the execution of a collaborative
project in cooperation with an order-placing firm. Moreover,
order-placing companies can monitor undistorted information, while
respecting the legitimate rights of an order-receiving company.
Therefore, it is fair to say that the suggestions made in this study will
contribute to the smooth operation of collaborative intercompany
projects.
Abstract: Due to their high power-to-weight ratio and low cost, pneumatic actuators are attractive for robotics and automation applications; however, achieving fast and accurate control of their position has long been known to be a complex control problem. The paper presents a methodology for obtaining controllers that achieve high position accuracy and preserve the closed-loop characteristics over a broad operating range. Experimentation with a number of conventional (or "classical") three-term controllers shows that, as repeated operations accumulate, the characteristics of the pneumatic actuator change, requiring frequent re-tuning of the controller parameters (PID gains). Furthermore, three-term controllers are found to perform poorly in recovering the closed-loop system after the application of load or other external disturbances. The key reason for these problems lies in the non-linear exchange of energy inside the cylinder, relating in particular to the complex friction forces that develop at the piston-wall interface. To overcome this problem while remaining within the boundaries of classical control methods, we designed an auto-selective classical controller so that the system performance would benefit from all three control gains (Kp, Kd, Ki) according to the system requirements and the characteristics of each type of controller. The experimentation targeted consistent performance in the face of modelling imprecision and disturbances. In the work presented, a selective PID controller is presented for an experimental rig comprising an air cylinder driven by a variable-opening pneumatic valve and equipped with position and pressure sensors. The paper reports on tests carried out to investigate the capability of this specific controller to achieve consistent control performance under repeated operations and other changes in operating conditions.
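For reference, a textbook discrete three-term controller is sketched below driving a toy first-order plant; this is the conventional PID baseline discussed above, not the paper's auto-selective scheme, and all gains and plant dynamics are made-up assumptions:

```python
class PID:
    """Textbook discrete PID controller (illustrative only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a toy first-order plant x' = u toward setpoint 1.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):               # simulate 20 s
    u = pid.update(1.0, x)
    x += u * 0.01                   # Euler step of the plant
```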
Abstract: In an electric power system, spinning reserve
requirements can be determined by using deterministic and/or
probabilistic measures. Although deterministic methods are common in
many systems, the application of probabilistic methods becomes
increasingly important in the new environment of the electric power
utility industry. This is because of the increased uncertainty
associated with competition. In this paper 1) a new probabilistic
method is presented which considers the reliability of transmission
system in a simplified manner and 2) deterministic and probabilistic
methods are compared. The studied methods are applied to the Roy
Billinton Test System (RBTS).
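One common probabilistic measure is the loss-of-load probability obtained from a capacity-outage table. The toy enumeration below (unit data made up, and not the paper's simplified transmission-aware method) illustrates the idea:

```python
from itertools import product

def loss_of_load_prob(units, load):
    """Probability that in-service capacity falls below `load`.
    units: list of (capacity_MW, forced_outage_rate); exhaustive
    enumeration of up/down states (a toy capacity-outage table)."""
    p_short = 0.0
    for states in product([0, 1], repeat=len(units)):  # 1 = in service
        p, cap = 1.0, 0.0
        for (c, f), up in zip(units, states):
            p *= (1.0 - f) if up else f
            cap += c if up else 0.0
        if cap < load:
            p_short += p
    return p_short

# Hypothetical three-unit system serving a 150 MW load
lolp = loss_of_load_prob([(100, 0.02), (100, 0.02), (50, 0.05)], 150)
```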
Abstract: Larval rearing and seed production of most tetra fishes (Family: Characidae) are critical due to their small larvae and the limited number of spawning attempts. In the present study, the effect of different live foods on the growth and survival of neon tetra, Paracheirodon innesi, larvae (length 3.1 ± 0.012 mm, weight 0.048 ± 0.00015 mg) and early fry (length 6.44 ± 0.025 mm, weight 0.64 ± 0.003 mg, 13 days old) was determined in two experiments. Experiment I was conducted for rearing the larvae using mixed green water and Infusoria, whereas in Experiment II early fry were fed mixed zooplankton, decapsulated Artemia cysts and Artemia nauplii. The larvae fed on mixed green water showed significant (p
Abstract: The handful of propagation textbooks that discuss radio frequency (RF) propagation models tend merely to list the models and perhaps discuss them rather briefly; this may well be frustrating for the potential first-time modeller who has no idea how these models could have been derived. This paper fundamentally provides an introduction to modelling the radio channel. For the modelling practice discussed here, signal strength field measurements had to be conducted beforehand (this was done at 469 MHz); to be precise, this paper primarily concerns empirically/statistically modelling the radio channel, and thus provides results obtained from empirically modelling the environments in question. The paper, on the whole, proposes three propagation models, corresponding to three experimented environments. The models have been derived by making the most of statistical measures. The first two models were derived via simple linear regression analysis, whereas the third was derived using multiple regression analysis (with five different predictors). Additionally, as implied by the title, both indoor and outdoor environments have been experimented with; two of the environments, however, are neither entirely indoor nor entirely outdoor, while the other is completely indoor.
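As a sketch of the simple-linear-regression step, a log-distance path-loss model PL(d) = PL0 + 10·n·log10(d/d0) can be fitted by ordinary least squares; the distance/path-loss pairs below are synthetic placeholders, not the 469 MHz measurements:

```python
import math

def fit_path_loss(dists_m, pl_db, d0=1.0):
    """Least-squares fit of PL = PL0 + n * 10*log10(d/d0).
    Returns (PL0 in dB, path-loss exponent n)."""
    x = [10.0 * math.log10(d / d0) for d in dists_m]
    n = len(x)
    mx, my = sum(x) / n, sum(pl_db) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, pl_db))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

# Synthetic data generated with PL0 = 40 dB and exponent n = 3
d = [1.0, 2.0, 5.0, 10.0, 20.0]
pl = [40.0 + 30.0 * math.log10(di) for di in d]
pl0, nexp = fit_path_loss(d, pl)
```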