Abstract: Ratio and regression type estimators have been used by previous authors to estimate the population mean of a principal variable y from samples in which data on both an auxiliary variable x and the principal variable y are available. However, missing data are a common problem in statistical analyses with real data, and ratio and regression type estimators have also been used to impute missing y values. In this paper, six new ratio and regression type estimators are proposed for imputing missing y values and estimating the population mean of y from samples with missing x and/or y data. A simulation study has been conducted to compare the six new estimators with a previous estimator of Rueda. Two population sizes, N = 1,000 and 5,000, were considered with sampling fractions of 10% and 30% and with correlation coefficients of 0.5 and 0.8 between the population variables X and Y. In the simulations, 10 and 40 percent of the sample y values and 10 and 40 percent of the sample x values were randomly designated as missing. The new ratio and regression type estimators give similar mean absolute percentage errors, which are smaller than those of the Rueda estimator in all cases. The new estimators give a large reduction in errors for the case of 40% missing y values and a sampling fraction of 30%.
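The paper's six estimators are not reproduced here, but the classical ratio-type imputation they build on can be sketched as follows (a generic illustration with made-up data; the function name and the sample values are assumptions, not the paper's):

```python
def ratio_impute(x, y):
    """Impute missing y values (marked None) with the classical ratio estimator.

    From the responding pairs (x_i, y_i), the ratio R-hat = sum(y_r)/sum(x_r)
    is computed, and each missing y_i is replaced by R-hat * x_i.
    This is generic ratio-type imputation, not one of the paper's six
    specific estimators.
    """
    pairs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    r_hat = sum(yi for _, yi in pairs) / sum(xi for xi, _ in pairs)
    return [yi if yi is not None else r_hat * xi for xi, yi in zip(x, y)]

# toy sample: one y value missing; R-hat = 21/14 = 1.5, so the gap becomes 9.0
x = [2.0, 4.0, 6.0, 8.0]
y = [3.0, 6.0, None, 12.0]
y_imp = ratio_impute(x, y)
```

The imputed sample can then feed any of the usual mean estimators for y.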
Abstract: In this paper a numerical algorithm is described for solving the boundary value problem associated with axisymmetric, inviscid, incompressible, rotational (and irrotational) flow, in order to obtain duct wall shapes from prescribed wall velocity distributions. The governing equations are formulated in terms of the stream function ψ(x, y) and the function φ(x, y) as independent variables, where for irrotational flow φ(x, y) can be recognized as the velocity potential function, while for rotational flow φ(x, y) ceases to be the velocity potential function but remains orthogonal to the streamlines. A numerical method based on a finite difference scheme on a uniform mesh is employed. The technique described is capable of tackling the so-called inverse problem, in which the wall velocity distributions are prescribed and the duct wall shape is calculated, as well as the direct problem, in which the velocity distribution on the duct walls is calculated from a prescribed duct geometry. The two cases outlined in this paper are in fact boundary value problems with Neumann and Dirichlet boundary conditions, respectively. Even though both approaches are discussed, only numerical results for the case of Dirichlet boundary conditions are given. A downstream condition is prescribed such that cylindrical flow, that is, flow which is independent of the axial coordinate, exists.
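The basic building block of such a scheme, a finite-difference solution of a Laplace-type equation on a uniform mesh with Dirichlet boundary values, can be sketched as follows (a single-equation Jacobi illustration only; the paper's coupled ψ-φ formulation is not reproduced, and the grid values are made up):

```python
def solve_laplace_dirichlet(grid, iters=2000):
    """Jacobi finite-difference solution of Laplace's equation on a uniform
    mesh. Boundary rows/columns of `grid` hold the Dirichlet data and are
    kept fixed; interior entries are relaxed with the 5-point stencil.
    """
    ny, nx = len(grid), len(grid[0])
    for _ in range(iters):
        new = [row[:] for row in grid]
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                new[j][i] = 0.25 * (grid[j][i - 1] + grid[j][i + 1]
                                    + grid[j - 1][i] + grid[j + 1][i])
        grid = new
    return grid

# demo: 5x5 mesh, top boundary held at 1, other boundaries at 0;
# by the superposition symmetry of the problem the converged centre
# value is exactly 0.25
grid = [[0.0] * 5 for _ in range(5)]
grid[0] = [1.0] * 5
solution = solve_laplace_dirichlet(grid)
```

In the inverse problem the same stencil would be applied in the (φ, ψ) plane, with the wall shape recovered afterwards from the computed field.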
Abstract: The electrical interaction between two axisymmetric spheroidal particles in an electrolyte solution is examined numerically. A Galerkin finite element method combined with a Newton-Raphson iteration scheme is proposed to evaluate the spatial variation in the electrical potential, and the result obtained is used to estimate the interaction energy between the two particles. We show that if the surface charge density is fixed, the potential gradient is larger at points of larger curvature, and if the surface potential is fixed, the surface charge density is proportional to the curvature. Also, if the curve of total interaction energy against closest surface-to-surface distance exhibits a primary maximum, the maximum follows the order (oblate-oblate) > (sphere-sphere) > (oblate-prolate) > (prolate-prolate), and if the curve has a secondary minimum, the absolute value of the minimum follows the same order.
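The Newton-Raphson treatment of the nonlinear potential equation can be illustrated on the much simpler one-dimensional flat-plate analogue, the dimensionless Poisson-Boltzmann equation ψ'' = sinh(ψ) (a finite-difference sketch, not the paper's axisymmetric Galerkin formulation; the surface potential and domain length are assumed values):

```python
import math

def solve_pb_1d(psi0=1.0, length=10.0, n=200, tol=1e-10, max_iter=50):
    """Newton-Raphson solution of psi'' = sinh(psi), psi(0) = psi0,
    psi(L) = 0, discretized by central differences on a uniform mesh.
    Each Newton step solves the tridiagonal linearized system by the
    Thomas algorithm.
    """
    h = length / n
    psi = [psi0 * (1 - i / n) for i in range(n + 1)]   # linear initial guess
    for _ in range(max_iter):
        # residual at interior nodes
        F = [(psi[i - 1] - 2 * psi[i] + psi[i + 1]) / h**2 - math.sinh(psi[i])
             for i in range(1, n)]
        # tridiagonal Jacobian: off-diagonals 1/h^2, diagonal -2/h^2 - cosh(psi)
        a = [1 / h**2] * (n - 1)
        b = [-2 / h**2 - math.cosh(psi[i]) for i in range(1, n)]
        c = [1 / h**2] * (n - 1)
        d = [-f for f in F]                             # solve J * delta = -F
        for i in range(1, n - 1):                       # forward elimination
            m = a[i] / b[i - 1]
            b[i] -= m * c[i - 1]
            d[i] -= m * d[i - 1]
        delta = [0.0] * (n - 1)
        delta[-1] = d[-1] / b[-1]
        for i in range(n - 3, -1, -1):                  # back substitution
            delta[i] = (d[i] - c[i] * delta[i + 1]) / b[i]
        for i in range(1, n):
            psi[i] += delta[i - 1]
        if max(abs(dv) for dv in delta) < tol:
            break
    return psi
```

The resulting profile decays monotonically from the surface, the qualitative behaviour that, in two dimensions, makes the gradient largest where the surface curvature is largest.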
Abstract: The COM-Poisson distribution is capable of modeling count responses irrespective of their mean-variance relation, and when it is fitted to simple cross-sectional data its parameters can be efficiently estimated by the maximum likelihood (ML) method. In the regression setup, however, ML estimation of the parameters of a COM-Poisson based generalized linear model is computationally intensive. In this paper, we propose a quasi-likelihood (QL) approach to estimate the effect of covariates on COM-Poisson counts and investigate the performance of this method relative to the ML method. QL estimates are consistent and almost as efficient as ML estimates. The simulation studies show that the efficiency loss in the estimation of all the parameters using the QL approach compared to the ML approach is quite negligible, whereas the QL approach is computationally much less involved than the ML approach.
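For reference, the COM-Poisson probability mass function P(y) = λ^y / (y!)^ν / Z(λ, ν), with Z(λ, ν) = Σ_j λ^j/(j!)^ν, can be computed as below (a minimal sketch; the truncation point of Z is an assumption suitable for small λ):

```python
def com_poisson_pmf(y, lam, nu, jmax=100):
    """P(Y = y) for the COM-Poisson distribution with rate lam and
    dispersion nu. nu = 1 recovers the ordinary Poisson; nu > 1 gives
    under-dispersion, nu < 1 over-dispersion. The normalizing constant
    Z is truncated at jmax terms, accumulated iteratively to avoid
    forming huge factorials.
    """
    z, term = 0.0, 1.0                 # term_j = lam^j / (j!)^nu, term_0 = 1
    for j in range(jmax):
        z += term
        term *= lam / (j + 1) ** nu
    py = 1.0                           # unnormalized mass at y, built the same way
    for j in range(y):
        py *= lam / (j + 1) ** nu
    return py / z
```

The QL approach of the abstract needs only the mean and variance implied by this family, not the full likelihood, which is where its computational saving comes from.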
Abstract: The IFS is a scheme for describing and manipulating complex fractal attractors using simple mathematical models. More precisely, the most popular "fractal-based" algorithms for both representation and compression of computer images have involved some implementation of the method of Iterated Function Systems (IFS) on complete metric spaces. In this paper a new generalized space, called the Multi-Fuzzy Fractal Space, is constructed. On this space a distance function is defined, and its completeness is proved. The completeness property of this space ensures the existence of a fixed-point theorem for the family of continuous mappings. This theorem is the fundamental result on which the IFS methods are based and on which the fractals are built. The defined mappings are proved to satisfy some generalizations of the contraction condition.
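The fixed-point principle behind IFS methods can be seen in the classical Sierpinski system, sketched below by random iteration (the chaos game). Each map is a contraction with ratio 1/2, so the Hutchinson operator has a unique attractor; this is the ordinary metric-space case, not the paper's fuzzy generalization:

```python
import random

def chaos_game(n=5000, seed=1):
    """Approximate the attractor of the Sierpinski IFS
    w_i(p) = (p + v_i)/2, i = 1..3, by random iteration. By the Banach
    fixed-point theorem the Hutchinson operator W(A) = w_1(A) u w_2(A)
    u w_3(A) has a unique fixed 'attractor' set, which the iterated
    points approach geometrically.
    """
    rng = random.Random(seed)
    verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = rng.random(), rng.random()
    pts = []
    for k in range(n + 100):          # discard the first 100 transient points
        vx, vy = rng.choice(verts)
        x, y = (x + vx) / 2, (y + vy) / 2
        if k >= 100:
            pts.append((x, y))
    return pts
```

The fuzzy fractal space of the abstract plays the same role as the plane here: completeness is what guarantees the iteration has somewhere to converge.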
Abstract: Various methods have been created based on regression ideas to resolve the problem of data sets containing censored observations, e.g. the Buckley-James method, Miller's method, the Cox method, and the Koul-Susarla-Van Ryzin estimators. Even though comparison studies show that the Buckley-James method performs better than some other methods, it is still rarely used by researchers, mainly because of the limited diagnostic analysis developed for the Buckley-James method thus far. Therefore, a diagnostic tool for the Buckley-James method is proposed in this paper. It is called the renovated Cook's Distance, RD*i, and has been developed based on Cook's idea. The renovated Cook's Distance RD*i has advantages (depending on the analyst's demands) over (i) the change in the fitted value for a single case, DFIT*i, as it measures the influence of case i on all n fitted values Ŷ* (not just the fitted value for case i, as DFIT*i does), and (ii) the change in the estimates of the coefficients when the ith case is deleted, DBETA*i, since DBETA*i corresponds to the number of variables p, so it is usually easier to look at a single diagnostic measure such as RD*i, in which information from the p variables is considered simultaneously. Finally, an example using the Stanford Heart Transplant data is provided to illustrate the proposed diagnostic tool.
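Cook's original idea, on which RD*i is based, can be sketched for uncensored simple linear regression (a generic illustration with made-up data; the censored, Buckley-James version in the paper is not reproduced):

```python
def cooks_distance(x, y):
    """Classical Cook's distance for simple linear regression:
    D_i = r_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2), with p = 2 fitted
    parameters, residuals r_i, leverages h_ii, and residual variance s^2.
    """
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / sxx
    b0 = sum(y) / n - b1 * xbar
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    p = 2
    s2 = sum(r * r for r in resid) / (n - p)
    lev = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]
    return [r * r * h / (p * s2 * (1 - h) ** 2) for r, h in zip(resid, lev)]

# toy data: the last point is a high-leverage outlier, so its distance dominates
x = [1.0, 2.0, 3.0, 4.0, 10.0]
y = [1.0, 2.0, 3.0, 4.0, 20.0]
d = cooks_distance(x, y)
```

As in the abstract, the appeal is that a single number per case summarizes influence on all n fitted values at once.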
Abstract: Truss spars are used for oil exploitation in deep and ultra-deep water when crude oil storage is not needed. Linear hydrodynamic analysis of a truss spar under random sea wave loads is necessary for determining its behaviour; this understanding is important not only for the design of the mooring lines, but also for optimising the truss spar design. In this paper, linear hydrodynamic analysis of a truss spar is carried out in the frequency domain. The hydrodynamic forces are calculated using the modified Morison equation and diffraction theory. The added mass and drag coefficients of the truss section, together with the normal acceleration and velocity components acting on each element, are computed by the transmission matrix method, while those of the hull section are computed by strip theory. The stiffness of the truss spar can be separated into two components: hydrostatic stiffness and mooring line stiffness. The platform response amplitudes are then obtained by solving the equation of motion. This equation is non-linear due to the viscous damping term and is therefore linearised by an iteration method [1]. Finally, the RAOs and significant response amplitudes are computed, and the results are compared with experimental data.
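The Morison equation underlying the force calculation combines a drag term and an inertia term per unit length of a slender member, as in the sketch below (the coefficient values and seawater density are illustrative assumptions, not the paper's calibrated values):

```python
import math

def morison_force_per_length(u, dudt, diameter, cd=1.0, cm=2.0, rho=1025.0):
    """Inline wave force per unit length on a slender cylinder from the
    Morison equation:
        f = 0.5 * rho * Cd * D * u * |u|  +  rho * Cm * (pi * D^2 / 4) * du/dt
    where u is the normal water-particle velocity and du/dt its acceleration.
    """
    drag = 0.5 * rho * cd * diameter * u * abs(u)
    inertia = rho * cm * (math.pi * diameter**2 / 4) * dudt
    return drag + inertia

# demo with rho = 1000 for easy numbers: pure inertia and pure drag loads
f_inertia = morison_force_per_length(0.0, 1.0, 1.0, rho=1000.0)
f_drag = morison_force_per_length(1.0, 0.0, 1.0, rho=1000.0)
```

In a frequency-domain analysis the u|u| drag term is what makes the equation of motion non-linear, hence the iterative linearisation mentioned above.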
Abstract: In the present work, we introduce particle swarm optimization (PSO) to avoid the Runge phenomenon occurring in many numerical problems. This new approach is tested on several numerical examples, including the generalized integral quadrature method applied to the solution of Volterra integral equations.
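The Runge phenomenon itself is easy to reproduce: interpolating Runge's function 1/(1 + 25x²) on equispaced nodes blows up near the interval ends, while Chebyshev nodes do not. The sketch below is a standard illustration of the problem PSO is used to avoid, not the paper's PSO method:

```python
import math

def lagrange_eval(nodes, vals, x):
    """Evaluate the Lagrange interpolating polynomial through (nodes, vals) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(nodes, vals)):
        li = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

def max_interp_error(nodes, f, samples=400):
    """Maximum interpolation error over a fine grid on [-1, 1]."""
    vals = [f(xn) for xn in nodes]
    grid = [-1 + 2 * k / samples for k in range(samples + 1)]
    return max(abs(lagrange_eval(nodes, vals, x) - f(x)) for x in grid)

def runge(x):
    return 1.0 / (1.0 + 25.0 * x * x)

n = 14
equi = [-1 + 2 * i / n for i in range(n + 1)]                           # equispaced
cheb = [math.cos((2 * i + 1) * math.pi / (2 * (n + 1))) for i in range(n + 1)]
e_equi = max_interp_error(equi, runge)    # large: Runge's phenomenon
e_cheb = max_interp_error(cheb, runge)    # small on Chebyshev nodes
```

Node-placement schemes (Chebyshev here, PSO-optimized points in the paper) suppress the oscillation by controlling the node distribution.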
Abstract: The purpose of this study is to introduce a new interface program for calculating dose distributions with the Monte Carlo method in complex heterogeneous systems, such as organs or tissues, in proton therapy. This interface program was developed under MATLAB software and includes a friendly graphical user interface with several tools, such as image property adjustment and results display. The quadtree decomposition technique was used as an image segmentation algorithm to create optimum geometries from Computed Tomography (CT) images for proton beam dose calculations. The result of this technique is a set of non-overlapping squares of different sizes in every image. In this way the resolution of the image segmentation is high enough in and near heterogeneous areas to preserve the precision of the dose calculations, and low enough in homogeneous areas to directly reduce the number of cells. Furthermore, a cell reduction algorithm can be used to combine neighbouring cells of the same material. This method has been validated in two ways: first, against experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) at Tohoku University, and second, against data based on the polybinary tissue calibration method, also performed at CYRIC. These results are presented in this paper. The program can read the output file of a Monte Carlo code while a region of interest is selected manually, and it gives a plot of the proton beam dose distribution superimposed onto the CT images.
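Quadtree decomposition itself can be sketched in a few lines: a square block is split into four quadrants whenever its intensity range exceeds a homogeneity threshold (a generic sketch on a toy integer image, not the MATLAB interface program; image sides are assumed to be powers of two):

```python
def quadtree(img, x0, y0, size, threshold=0, leaves=None):
    """Recursively split the size-by-size block of img at (x0, y0) into
    homogeneous squares. A block is split while its max-min intensity
    range exceeds `threshold`; returns a list of (x, y, size) leaf squares.
    """
    if leaves is None:
        leaves = []
    block = [img[y][x]
             for y in range(y0, y0 + size)
             for x in range(x0, x0 + size)]
    if size == 1 or max(block) - min(block) <= threshold:
        leaves.append((x0, y0, size))          # homogeneous: keep as one cell
    else:
        h = size // 2
        for dy in (0, h):                       # split into four quadrants
            for dx in (0, h):
                quadtree(img, x0 + dx, y0 + dy, h, threshold, leaves)
    return leaves

# demo: a 4x4 image, homogeneous except one pixel, yields
# 3 quadrant leaves plus 4 unit cells around the heterogeneity
img = [[1] * 4 for _ in range(4)]
img[0][0] = 5
leaves = quadtree(img, 0, 0, 4)
```

This is exactly the behaviour described above: fine cells near heterogeneous regions, coarse cells where the material is uniform.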
Abstract: A multicriteria linear programming problem with integer variables and a parameterized optimality principle "from lexicographic to Slater" is considered. A situation in which the initial coefficients of the penalty cost functions are not fixed but may potentially be subject to variation is studied. For any efficient solution, appropriate measures of quality are introduced which incorporate information about variations of the penalty cost function coefficients. These measures correspond to the so-called stability and accuracy functions defined earlier for efficient solutions of a generic multicriteria combinatorial optimization problem with Pareto and lexicographic optimality principles. Various properties of such functions are studied, and maximum norms of perturbations for which an efficient solution preserves the property of being efficient are calculated.
Abstract: Recent years have seen a growing trend towards the
integration of multiple information sources to support large-scale
prediction of protein-protein interaction (PPI) networks in model
organisms. Despite advances in computational approaches, the
combination of multiple “omic" datasets representing the same type
of data, e.g. different gene expression datasets, has not been
rigorously studied. Furthermore, there is a need to further investigate
the inference capability of powerful approaches, such as fully-connected
Bayesian networks, in the context of the prediction of PPI
networks. This paper addresses these limitations by proposing a
Bayesian approach to integrate multiple datasets, some of which
encode the same type of “omic" data to support the identification of
PPI networks. The case study reported involved the combination of
three gene expression datasets relevant to human heart failure (HF).
In comparison with two traditional methods, Naive Bayesian and
maximum likelihood ratio approaches, the proposed technique can
accurately identify known PPI and can be applied to infer potentially
novel interactions.
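The naive Bayesian baseline mentioned above combines independent evidence sources multiplicatively on the odds scale, as in this sketch (the prior and likelihood-ratio values are illustrative assumptions, not values from the paper):

```python
import math

def combine_log_lr(log_lrs, prior=0.001):
    """Naive-Bayes combination of evidence for a candidate protein pair:
    posterior odds = prior odds * product of per-dataset likelihood
    ratios, assumed conditionally independent given the interaction
    status. Returns the posterior probability of interaction.
    """
    log_odds = math.log(prior / (1 - prior)) + sum(log_lrs)
    return 1 / (1 + math.exp(-log_odds))
```

The Bayesian approach of the paper relaxes exactly this conditional-independence assumption when several datasets encode the same type of "omic" data.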
Abstract: An original Direct Numerical Simulation (DNS) method to tackle the problem of particulate flows at moderate to high concentration and finite Reynolds number is presented. Our method is built on the framework established by Glowinski and his coworkers [1], in the sense that we use their Distributed Lagrange Multiplier/Fictitious Domain (DLM/FD) formulation and their operator-splitting idea, but differs in the treatment of particle collisions. The novelty of our contribution lies in replacing the simple artificial-repulsive-force collision model usually employed in the literature by an efficient Discrete Element Method (DEM) granular solver. The use of our DEM solver enables us to consider particles of arbitrary (at least convex) shape and to account for actual contacts, in the sense that particles actually touch each other, in contrast with the simple repulsive-force collision model. We recently upgraded our serial code, GRIFF [2], to full MPI capabilities. Our new code, PeliGRIFF, is developed within the framework of the fully MPI open source platform PELICANS [3]. The new MPI capabilities of PeliGRIFF open new perspectives in the study of particulate flows and significantly increase the number of particles that can be considered in a full DNS approach: O(100,000) in 2D and O(10,000) in 3D. Results on the 2D/3D sedimentation/fluidization of isometric polygonal/polyhedral particles with collisions are presented.
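A common DEM contact model of the kind such granular solvers use is the linear spring-dashpot force, sketched below for a head-on collision of two equal particles (a generic illustration with assumed stiffness and damping values, standing in for, not reproducing, the PeliGRIFF solver):

```python
def dem_head_on(v0=1.0, radius=0.5, mass=1.0, kn=1.0e4, cn=5.0,
                dt=1.0e-4, steps=20000):
    """Two equal particles approach head-on; explicit time stepping with
    a linear spring-dashpot normal contact force F = kn*delta + cn*delta',
    active only while the particles actually overlap (delta > 0).
    Returns the relative approach velocity v1 - v2 after the collision
    (negative means the particles rebound and separate).
    """
    x1, x2 = -1.0, 1.0
    v1, v2 = v0, -v0
    for _ in range(steps):
        overlap = 2 * radius - (x2 - x1)            # delta > 0 only in contact
        f = kn * overlap + cn * (v1 - v2) if overlap > 0 else 0.0
        v1 -= f / mass * dt                          # spring pushes particles apart
        v2 += f / mass * dt
        x1 += v1 * dt                                # semi-implicit Euler update
        x2 += v2 * dt
    return v1 - v2
```

Because the force acts only during actual overlap, contacts are resolved as real touching events, which is the point of contrast with artificial repulsive-force models drawn in the abstract.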
Abstract: A steady two-phase flow model has been developed to simulate the drying process of porous particles in a pneumatic conveying dryer. The model takes into account the momentum, heat and mass transfer between the continuous phase and the dispersed phase. A single-particle model was employed to calculate the evaporation rate. In this model the pore structure is simplified to allow the dominant evaporation mechanism to be readily identified at all points within the duct. The predominant mechanism at any time depends upon the pressure, the temperature and the diameter of the pore from which evaporation is occurring. The model was validated against experimental studies of pneumatic transport at low and high speeds, as well as of pneumatic drying. The effects of the operating conditions on the dryer parameters are studied numerically. The present results show that the drying rate is enhanced as the inlet gas temperature and the gas flow rate increase and as the solid mass flow rate decreases. The present results also demonstrate the necessity of measuring the inlet gas velocity or the solid concentration in any experimental analysis.
Abstract: The problem of the integrated use of water resources in Central Asia, taking into consideration the sovereignty of the states and the increasing demand for water in the economy, is considered. A complex program with appropriate mathematical software, intended for calculating possible variants of using the upstream Amudarya water resources so as to satisfy the incompatible requirements of the national economies for irrigation and energy generation, is proposed.
Abstract: The mixing of pollutants and sediments in the near-shore regions of natural water bodies depends heavily on flow characteristics such as the strength and frequency of flow instability. In the present paper, the instability of natural convection induced by absorption of solar radiation in littoral regions is considered. Spectral analysis is conducted on the quasi-steady-state flow to reveal the power and frequency modes of the instability at various positions. Results indicate that the power of the instability, the number of frequency modes, the prominence of higher frequency modes, and the highest frequency mode increase with the offshore distance and/or the Rayleigh number. Harmonic modes are present at relatively low Rayleigh numbers. For a given offshore distance, the position with the strongest power of instability is located adjacent to the sloping bottom, while the frequency modes are the same over the local depth. As the Rayleigh number increases, the unstable region extends toward the shore.
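The kind of spectral analysis used above amounts to computing a discrete Fourier power spectrum of a time signal, sketched here on a synthetic record with a fundamental and one harmonic mode (a generic illustration; the signal, sampling interval and frequencies are made up, not the paper's data):

```python
import cmath
import math

def power_spectrum(signal, dt):
    """Discrete Fourier power spectrum of a real time series sampled at
    interval dt. Returns (frequency, power) pairs for the positive
    frequencies, using a naive DFT (fine for short records).
    """
    n = len(signal)
    out = []
    for k in range(1, n // 2):
        coeff = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(signal))
        out.append((k / (n * dt), abs(coeff) ** 2 / n))
    return out

# synthetic quasi-steady signal: a 5 Hz fundamental plus a weaker 10 Hz harmonic
dt = 0.01
sig = [math.sin(2 * math.pi * 5 * i * dt)
       + 0.3 * math.sin(2 * math.pi * 10 * i * dt)
       for i in range(200)]
spec = power_spectrum(sig, dt)
```

Peaks in the resulting spectrum identify the frequency modes, and their heights the power, exactly the quantities tracked across positions and Rayleigh numbers in the abstract.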
Abstract: Performance appraisal of employees is important in managing the human resources of an organization. With the change towards knowledge-based capitalism, retaining talented knowledge workers is critical. However, classifying performance as "outstanding", "poor" or "average" may not be an easy management decision. Besides that, superiors may also tend to judge the work performance of their subordinates informally and arbitrarily, especially in the absence of an appraisal system. In this paper, we propose a performance appraisal system using a multifactorial evaluation model to deal with appraisal grades, which are often expressed vaguely in linguistic terms. The proposed model evaluates staff performance based on specific performance appraisal criteria. The project was a collaboration with an Information and Communication Technology company in Malaysia, with reference to its performance appraisal process.
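The multifactorial evaluation step can be sketched as composing a criterion weight vector with a fuzzy rating matrix to obtain a membership degree for each linguistic grade (a minimal weighted-sum sketch; the criteria, weights and ratings below are illustrative assumptions, not the company's actual appraisal data):

```python
def multifactorial_eval(weights, ratings):
    """Multifactorial evaluation: `ratings` has one row per criterion and
    one column per linguistic grade; the weight vector is composed with
    it (weighted-sum composition) to give a membership degree per grade,
    normalized to sum to 1. The grade with the largest degree is the
    appraisal result.
    """
    grades = len(ratings[0])
    b = [sum(w * row[g] for w, row in zip(weights, ratings))
         for g in range(grades)]
    total = sum(b)
    return [v / total for v in b]

# three criteria weighted 0.5 / 0.3 / 0.2;
# grade columns: outstanding, average, poor
weights = [0.5, 0.3, 0.2]
ratings = [[0.7, 0.2, 0.1],
           [0.5, 0.4, 0.1],
           [0.2, 0.3, 0.5]]
result = multifactorial_eval(weights, ratings)   # "outstanding" dominates
```

Vague linguistic judgements thus enter as fuzzy memberships per grade rather than as a single hard label.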
Abstract: This paper discusses landscape design that can increase energy efficiency in a house. By planting trees in a house compound, the tree shade prevents direct sunlight from heating up the building and cools the surrounding air. The requirement for air-conditioning can thus be minimized and the air quality improved. Over the lifetime of a tree, the cost savings from these benefits can be up to US$200 per tree. The project intends to visually describe landscape design in a house compound that enhances energy efficiency and consequently leads to energy savings. The house compound model was developed in three dimensions using AutoCAD 2005; the animation was programmed using the LightWave 3D packages Modeler and Layout to display the tree shading on the wall. The visualization was executed on a VRML Pad platform and implemented in a web environment.