Estimating Regression Effects in COM-Poisson Generalized Linear Models

The COM-Poisson distribution is capable of modeling count responses irrespective of their mean-variance relation, and when it is fitted to simple cross-sectional data its parameters can be efficiently estimated by the maximum likelihood (ML) method. In the regression setup, however, ML estimation of the parameters of the COM-Poisson-based generalized linear model is computationally intensive. In this paper, we propose a quasi-likelihood (QL) approach to estimate the effect of the covariates on the COM-Poisson counts and investigate the performance of this method with respect to the ML method. QL estimates are consistent and almost as efficient as ML estimates. The simulation studies show that the efficiency loss in estimating all the parameters with the QL approach relative to the ML approach is negligible, whereas the QL approach is computationally much less involved than the ML approach.
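
A minimal sketch of the QL idea, assuming a log link and a simplified working variance V(mu) = phi * mu (the COM-Poisson mean-variance relation is more involved; the data and variance function below are purely illustrative, not the paper's):

    import numpy as np
    from scipy.optimize import root

    # Illustrative data: design matrix X (with intercept) and count responses y.
    rng = np.random.default_rng(0)
    n = 200
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = rng.poisson(np.exp(X @ np.array([0.5, 0.3])))  # stand-in counts

    def ql_score(beta, X, y, phi=1.0):
        # Quasi-score for a log-link count model with working variance phi * mu:
        # sum_i x_i (y_i - mu_i) / phi, where mu_i = exp(x_i' beta).
        mu = np.exp(X @ beta)
        return X.T @ (y - mu) / phi

    beta_ql = root(ql_score, x0=np.zeros(X.shape[1]), args=(X, y)).x
    print("QL estimate of the regression effects:", beta_ql)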

Developing a Forecasting Tool for Humanitarian Relief Organizations in Emergency Logistics Planning

Despite the availability of natural-disaster-related time series data for the last 110 years, no forecasting tool is available to humanitarian relief organizations for determining forecasts for emergency logistics planning. This study develops a forecasting tool based on identifying suitable probability distributions; the parameter estimates are then used to calculate natural disaster forecasts. Further, the determination of aggregate forecasts leads to efficient pre-disaster planning. Based on the research findings, relief agencies can optimize the allocation of various resources in emergency logistics planning.
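
As a rough illustration of the distribution-fitting step (the counts and the gamma family below are assumptions for the sketch, not the paper's data or chosen distribution):

    import numpy as np
    from scipy import stats

    # Hypothetical annual disaster counts; the paper fits candidate distributions
    # to roughly 110 years of real data and selects the best-fitting one.
    annual_counts = np.array([12, 15, 9, 18, 21, 14, 17, 20, 23, 19], dtype=float)

    shape, loc, scale = stats.gamma.fit(annual_counts, floc=0)   # ML parameter estimates
    point_forecast = shape * scale                               # mean of the fitted distribution
    lo, hi = stats.gamma.interval(0.90, shape, loc=loc, scale=scale)
    print(f"forecast ~ {point_forecast:.1f} events/year, 90% interval ({lo:.1f}, {hi:.1f})")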

An Investigation on the Accuracy of Nonlinear Static Procedures for Seismic Evaluation of Buckling-restrained Braced Frames

Presented herein is an assessment of current nonlinear static procedures (NSPs) for the seismic evaluation of buckling-restrained braced frames (BRBFs), which have become a favorable lateral-force-resisting system for earthquake-resistant buildings. The bias and accuracy of the modal pushover analysis (MPA), improved modal pushover analysis (IMPA) and mass-proportional pushover (MPP) procedures are comparatively investigated when they are applied to BRBF buildings subjected to two sets of strong ground motions. The assessment is based on a comparison of seismic displacement demands such as target roof displacements, peak floor/roof displacements and inter-story drifts, with the NSP estimates compared to 'exact' results from nonlinear response history analysis (NLRHA). The response statistics presented show that the MPP procedure tends to significantly overestimate the seismic demands of the lower stories of the tall buildings considered in this study, while the MPA and IMPA procedures provide reasonably accurate estimates of the maximum inter-story drift over all stories of the studied BRBF systems.

Development of Software for Calculating Production Parameters in Knitted Garment Plants

Apparel product development is an important stage in the life cycle of a product, and shortening this stage helps to reduce the cost of a garment. The aim of this study is to examine the production parameters in knitwear apparel companies by defining unit costs, and to develop software that calculates the unit costs of garments and produces cost estimates. With the help of a questionnaire, the unit-cost estimating and cost-calculating systems of different companies were analyzed. Within the scope of the questionnaire, the importance of the cost-estimating process for apparel companies and the expectations from a new cost-estimating program were investigated. According to the results, the majority of the participating companies use manual cost-calculating methods or simple Microsoft Excel spreadsheets to make cost estimates. Furthermore, many companies have difficulty archiving cost data for future use; as a solution, prior to making a cost estimate, the sub-units of garment cost, namely fabric, accessory and labor costs, should be analyzed and added to the database of the program beforehand. Another feature of the cost-estimating tool prepared in this study is that the program consists of two main units, one producing the product specification and the other performing the cost calculation. The program is prepared as a web-based application so that the supplier, the manufacturer and the customer can communicate through the same platform.
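
The costing logic described above (fabric, accessory and labor sub-costs stored beforehand and summed per garment) might be sketched as follows; the field names and the overhead markup are assumptions made only for illustration:

    from dataclasses import dataclass

    @dataclass
    class GarmentSpec:
        # Hypothetical product-specification record, mirroring the split between
        # the specification unit and the costing unit described above.
        fabric_consumption_kg: float
        fabric_price_per_kg: float
        accessory_cost: float
        labor_minutes: float
        labor_rate_per_minute: float
        overhead_ratio: float = 0.15  # assumed overhead markup

    def unit_cost(spec: GarmentSpec) -> float:
        # Unit cost = fabric + accessories + labor, plus an overhead allowance.
        fabric = spec.fabric_consumption_kg * spec.fabric_price_per_kg
        labor = spec.labor_minutes * spec.labor_rate_per_minute
        direct = fabric + spec.accessory_cost + labor
        return direct * (1.0 + spec.overhead_ratio)

    print(unit_cost(GarmentSpec(0.35, 12.0, 1.2, 18.0, 0.25)))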

A Nonconforming Mixed Finite Element Method for Semilinear Pseudo-Hyperbolic Partial Integro-Differential Equations

In this paper, a nonconforming mixed finite element method is studied for semilinear pseudo-hyperbolic partial integro-differential equations. By using an interpolation technique instead of the generalized elliptic projection, optimal error estimates for the corresponding unknown function are given.
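
For orientation, a representative semilinear pseudo-hyperbolic partial integro-differential model problem (the exact coefficients and nonlinearity treated in the paper may differ) can be written as

    u_{tt} - \nabla\cdot\big(a\,\nabla u_t + b\,\nabla u\big)
    - \int_0^t \nabla\cdot\big(c(t,s)\,\nabla u(s)\big)\,ds = f(u),
    \qquad (x,t)\in\Omega\times(0,T],

with suitable initial and boundary conditions; the mixed formulation approximates the unknown u and its flux simultaneously.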

Mathematical Programming on Multivariate Calibration Estimation in Stratified Sampling

Calibration estimation is a method of adjusting the original design weights to improve the survey estimates by using auxiliary information such as the known population total (or mean) of the auxiliary variables. A calibration estimator uses calibrated weights that are determined to minimize a given distance measure to the original design weights while satisfying a set of constraints related to the auxiliary information. In this paper, we propose a new multivariate calibration estimator for the population mean in the stratified sampling design, which incorporates information available for more than one auxiliary variable. The problem of determining the optimum calibrated weights is formulated as a Mathematical Programming Problem (MPP) that is solved using the Lagrange multiplier technique.
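
In the standard chi-square-distance form of calibration (notation assumed here; the paper's distance measure and constraint set may differ), the stratified calibration weights w_h solve

    \min_{w_h}\ \sum_{h=1}^{L}\frac{(w_h-d_h)^2}{d_h q_h}
    \quad\text{subject to}\quad \sum_{h=1}^{L} w_h\,\bar{\mathbf{x}}_h=\bar{\mathbf{X}},

and the Lagrange multiplier technique gives the closed form

    w_h = d_h + d_h q_h\,\bar{\mathbf{x}}_h^{\top}
          \Big(\sum_{h} d_h q_h\,\bar{\mathbf{x}}_h\bar{\mathbf{x}}_h^{\top}\Big)^{-1}
          \Big(\bar{\mathbf{X}}-\sum_{h} d_h\,\bar{\mathbf{x}}_h\Big),

where d_h are the design weights, q_h tuning constants, x̄_h the stratum means of the auxiliary variables, and X̄ their known population means.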

Stabilization of Nonnecessarily Inversely Stable First-Order Adaptive Systems under Saturated Input

This paper presents an indirect adaptive stabilization scheme for first-order continuous-time systems under a saturated input described by a sigmoidal function. Singularities are avoided through a modification scheme for the estimated plant parameter vector, so that its associated Sylvester matrix is guaranteed to be non-singular and the estimated plant model is therefore controllable. The modification mechanism involves the use of a hysteresis switching function. An alternative hybrid scheme, whose estimated parameters are updated only at sampling instants, is also given to solve a similar adaptive stabilization problem. That scheme also uses hysteresis switching to modify the parameter estimates so as to ensure the controllability of the estimated plant model.

A New Splitting H1-Galerkin Mixed Method for Pseudo-hyperbolic Equations

A new numerical scheme based on the H1-Galerkin mixed finite element method is constructed for a class of second-order pseudo-hyperbolic equations. The proposed procedure can be split into three independent differential sub-schemes and does not require solving a coupled system of equations. Optimal error estimates are derived for both semidiscrete and fully discrete schemes for problems in one space dimension. Moreover, the proposed method does not require the LBB consistency condition. Finally, some numerical results are provided to illustrate the efficacy of our method.

Trimmed Mean as an Adaptive Robust Estimator of a Location Parameter for Weibull Distribution

One purpose of robust estimation is to reduce the influence of outliers on the estimates; such outliers arise from gross errors or from contamination by long-tailed distributions. The trimmed mean is a robust estimate, meaning that it is not sensitive to violations of the distributional assumptions of the data, and it is called adaptive when the trimming proportion is determined from the data rather than being fixed a priori. The main objective of this study is to establish the robustness properties of adaptive trimmed means in terms of efficiency, high breakdown point and influence function. Specifically, it seeks the magnitude of the trimming proportion of the adaptive trimmed mean that yields efficient and robust estimates of the location parameter for data following a modified Weibull distribution with parameter λ = 1/2, where the trimming proportion is determined by a ratio of two trimmed means defined as the tail length. Secondly, the asymptotic properties of the tail length and the trimmed means are investigated. Finally, the efficiency of the adaptive trimmed means, measured by the standard deviation, is compared across trimming proportions chosen adaptively and fixed a priori. The asymptotic tail lengths, defined as the ratio of two trimmed means, and the asymptotic variances were computed using the derived formulas, while the standard deviations of the derived tail lengths for samples of size 40 simulated from a Weibull distribution were computed over 100 iterations using a computer program written in Pascal. The findings reveal that the tail lengths of the Weibull distribution increase in magnitude as the trimming proportions increase; that the tail-length measure and the adaptive trimmed mean are asymptotically independent as the number of observations n approaches infinity; that the tail length is asymptotically distributed as the ratio of two independent normal random variables; and that the asymptotic variances decrease as the trimming proportions increase. The simulation study shows empirically that the standard error of the adaptive trimmed mean based on the ratio of tail lengths is smaller, across different trimming proportions, than that of its counterpart with the trimming proportions fixed a priori.
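
A small sketch of the adaptive idea, assuming a Hogg-type tail-length statistic and illustrative cut-offs for choosing the trimming proportion (the paper's exact definitions and cut-offs may differ):

    import numpy as np
    from scipy import stats

    def tail_length(x, a=0.05, b=0.5):
        # Ratio-of-extreme-means tail-length statistic: spread of the means of the
        # outer a-fraction over the spread of the means of the outer b-fraction.
        x = np.sort(np.asarray(x))
        n = len(x)
        ka, kb = max(1, int(np.ceil(a * n))), max(1, int(np.ceil(b * n)))
        return (x[-ka:].mean() - x[:ka].mean()) / (x[-kb:].mean() - x[:kb].mean())

    def adaptive_trimmed_mean(x):
        # Pick the trimming proportion from the observed tail length, then return
        # the corresponding trimmed mean (the cut-off values are illustrative).
        q = tail_length(x)
        alpha = 0.05 if q < 2.0 else (0.10 if q < 2.6 else 0.25)
        return stats.trim_mean(x, alpha), alpha

    rng = np.random.default_rng(1)
    sample = rng.weibull(0.5, size=40)   # Weibull sample with shape parameter 1/2
    print(adaptive_trimmed_mean(sample))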

Spatial Query Localization Method in Limited Reference Point Environment

Object localization is one of the major challenges in creating intelligent transportation systems. Unfortunately, in densely built-up urban areas, localization based on GPS alone produces large errors or simply becomes impossible. New opportunities for localization arise from the rapidly emerging concept of wireless ad-hoc networks. Such a network allows the distances between objects to be estimated from the received signal level, yielding a distance graph in which the nodes are the objects to be localized and the edges are estimates of the distances between pairs of nodes. Given the known coordinates of individual nodes (anchors), it is possible to determine the location of all (or part) of the remaining nodes of the graph. Moreover, a road map available in digital format can provide localization routines with valuable additional information to narrow the node location search. However, despite the abundance of well-known algorithms for the localization problem and significant research efforts, many issues are currently addressed only partially. In this paper, we propose a localization approach that maps the distance graph onto digital road map data; in effect, the problem is reduced to embedding the distance graph into the graph representing the geolocation data of the area. This makes it possible to localize objects, in some cases even when only one reference point is available. We propose a simple embedding algorithm and a sample implementation as spatial queries over sensor network data stored in a spatial database, which allows effective use of spatial indexing, optimized spatial search routines and geometry functions.
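
For context, a classical least-squares multilateration baseline over the RSSI-derived distance graph looks like the sketch below (anchor coordinates and distances are made up); the approach proposed in the paper goes further by embedding the distance graph into the road-map graph, so that far fewer reference points are needed:

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical anchors (known coordinates) and RSSI-derived distance estimates
    # to one unknown node; the road-network constraint is omitted in this sketch.
    anchors = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])
    measured_dist = np.array([62.0, 71.0, 35.0])

    def residuals(p):
        # Difference between geometric distances to the anchors and the measurements.
        return np.linalg.norm(anchors - p, axis=1) - measured_dist

    sol = least_squares(residuals, x0=anchors.mean(axis=0))
    print("estimated position:", sol.x)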

Performance Boundaries for Interactive Finite Element Applications

This paper presents work characterizing finite element performance boundaries within which live, interactive finite element modeling is feasible on current and emerging systems. These results are based on wide-ranging tests performed using a prototype finite element program implemented specifically for this study, thereby enabling the unified investigation of numerous direct and iterative solver strategies and implementations in a variety of modeling contexts. The results are intended to be useful for researchers interested in interactive analysis by providing baseline performance estimates, to give guidance in matching solution strategies to problem domains, and to spur further work addressing the challenge of extending the present boundaries.

A New Time Discontinuous Expanded Mixed Element Method for Convection-dominated Diffusion Equation

In this paper, a new time discontinuous expanded mixed finite element method is proposed and analyzed for second-order convection-dominated diffusion problems. Proofs of the stability of the proposed scheme and of the uniqueness of the discrete solution are given. Moreover, error estimates for the scalar unknown, its gradient and its flux in the L1(J̄, L2(Ω))-norm are obtained.

Eukaryotic Gene Prediction by an Investigation of Nonlinear Dynamical Modeling Techniques on EIIP Coded Sequences

Many digital signal processing techniques have been used to automatically distinguish protein-coding regions (exons) from non-coding regions (introns) in DNA sequences. In this work, we characterize these sequences according to their nonlinear dynamical features, such as moment invariants, correlation dimension, and largest Lyapunov exponent estimates. We apply our model to a number of real sequences encoded into time series using EIIP sequence indicators. In order to discriminate between coding and non-coding DNA regions, the phase-space trajectory is first reconstructed for coding and non-coding regions; nonlinear dynamical features are then extracted from those regions and used to investigate the differences between them. Our results indicate that the nonlinear dynamical characteristics yield significant differences between coding regions (CR) and non-coding regions (NCR) in DNA sequences. Finally, the classifier is tested on real genes in which the coding and non-coding regions are well known.
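
The encoding and phase-space reconstruction steps can be sketched as follows (the EIIP indicator values are the commonly quoted ones; the embedding dimension and lag are illustrative choices, not necessarily the paper's settings):

    import numpy as np

    # EIIP (electron-ion interaction potential) values commonly quoted for the
    # four nucleotides, used to map a DNA string to a numerical signal.
    EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}

    def eiip_series(seq):
        # Numerically encode a DNA string as an EIIP time series.
        return np.array([EIIP[b] for b in seq.upper() if b in EIIP])

    def delay_embed(x, dim=3, tau=1):
        # Reconstruct a phase-space trajectory by time-delay embedding.
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    seq = "ATGGCGTACGTTAGCCGTAGGCTAAGTCCGATCGATCGGATCC"   # toy sequence
    traj = delay_embed(eiip_series(seq))
    print(traj.shape)   # points of the reconstructed trajectory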

Certain Estimates of Oscillatory Integrals and Extrapolation

In this paper we study the boundedness properties of certain oscillatory integrals with polynomial phase. We obtain sharp estimates for these oscillatory integrals. By virtue of these estimates and extrapolation, we obtain Lp boundedness for these oscillatory integrals under rather weak size conditions on the kernel function.
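
A typical object of this kind (given here only for orientation; the paper's exact operator and size conditions may differ) is the oscillatory singular integral with polynomial phase

    T_P f(x) = \mathrm{p.v.}\int_{\mathbb{R}^n} e^{iP(y)}\,
               \frac{\Omega(y/|y|)}{|y|^{n}}\, f(x-y)\,dy,

where P is a real polynomial and Ω is the kernel function on the unit sphere; extrapolation is then used to pass from the sharp estimates to Lp bounds under weak size conditions on Ω.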

Effect of the Rise/Span Ratio of a Spherical Cap Shell on the Buckling Load

The rise/span ratio has been mentioned as one of the reasons why observed buckling loads are lower than the classical-theory buckling load, but this ratio has not been quantified in the buckling-load equation. The purpose of this study was to determine a more realistic buckling load by quantifying the effect of the rise/span ratio, because experiments have shown that the classical theory overestimates the load. The buckling-load equation was derived from work and strain-energy principles. Thereafter, finite element modeling and simulation in ABAQUS were carried out to determine the variables that govern the constant in the derived equation. The rise/span ratio was found to be the determining factor of this constant. The derived buckling load correlates closely with the loads obtained from experiments.
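
For reference, the classical (Zoelly-type) buckling pressure that experiments are compared against is

    p_{cl} = \frac{2E}{\sqrt{3(1-\nu^{2})}}\left(\frac{t}{R}\right)^{2},

where E is Young's modulus, ν Poisson's ratio, t the shell thickness and R the radius of curvature; the derived load can then be viewed as p_cr = C·p_cl with the constant C governed, per the study, by the rise/span ratio (the precise form of C obtained in the paper is not reproduced here).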

Enhanced Parallel-Connected Comb Filter Method for Multiple Pitch Estimation

This paper presents an improved method for multiple pitch estimation using comb filters. Conventionally, pitch has been estimated with the parallel-connected comb filter (PCF) method. However, PCF often fails when the fundamental frequency of a higher tone lies near the harmonics of a lower tone; the estimate is then assigned to the wrong note when such shared frequencies occur, which happens frequently from octave 3 upward. To solve this problem, the proposed method estimates the pitch from every harmonic instead of every octave. As a result, our method reaches an accuracy of more than 80%.
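
A toy frequency-domain comb (an idealized stand-in for the PCF filter bank; the sampling rate, candidate grid and harmonic count are arbitrary choices) illustrates the shared-harmonic ambiguity described above:

    import numpy as np

    def comb_scores(x, fs, f0_grid, n_harm=8):
        # For each candidate F0, sum the spectral magnitude at its first
        # n_harm harmonics (a simplified comb-filter response).
        spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        scores = []
        for f0 in f0_grid:
            idx = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harm + 1)]
            scores.append(spec[idx].sum())
        return np.array(scores)

    fs = 16000
    t = np.arange(0, 0.064, 1.0 / fs)
    x = np.sin(2 * np.pi * 220 * t) + 0.8 * np.sin(2 * np.pi * 330 * t)  # two notes
    f0_grid = np.arange(80.0, 500.0, 1.0)
    print("strongest candidate:", f0_grid[np.argmax(comb_scores(x, fs, f0_grid))])

In this toy mixture of 220 Hz and 330 Hz, the naive comb tends to credit the common sub-harmonic near 110 Hz, which is exactly the kind of misassignment that per-harmonic estimation is intended to avoid.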

A Formulation of the Latent Class Vector Model for Pairwise Data

In this research, a latent class vector model for pairwise data is formulated. Compared with the basic vector model, this model yields consistent estimates of the parameters, since the number of parameters to be estimated does not increase with the number of subjects. The results of the analysis reveal that the model is stable and can classify each subject into the latent classes representing the typical scales used by these subjects.

The Accuracy of the Flight Derivative Estimates Derived from Flight Data

The accuracy of the stability and control derivatives of a light aircraft estimated from flight test data was evaluated. The light aircraft, named ChangGong-91, is the first aircraft certified by the Korean government. The output error method, a maximum likelihood estimation technique that considers measurement noise only, was used to analyze the measured aircraft responses. Multi-step control inputs were applied in order to excite the short-period mode for the longitudinal motion and the Dutch-roll mode for the lateral-directional motion. The estimated stability/control derivatives of ChangGong-91 were analyzed for the assessment of handling qualities by comparing them with those of similar aircraft. The accuracy of the flight derivative estimates derived from the flight test measurements was examined in terms of engineering judgment, scatter and the Cramér-Rao bound, and turned out to be satisfactory with minor defects.
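
For the accuracy assessment, the Cramér-Rao bound used with output-error estimation is the standard one (stated here in its generic form, not taken from the paper):

    \sigma(\hat{\theta}_i) \ \ge\ \sqrt{\big[\mathbf{M}^{-1}\big]_{ii}},
    \qquad
    \mathbf{M} \;=\; \sum_{k}
    \left(\frac{\partial \mathbf{y}_k}{\partial \boldsymbol{\theta}}\right)^{\!\top}
    \mathbf{R}^{-1}
    \left(\frac{\partial \mathbf{y}_k}{\partial \boldsymbol{\theta}}\right),

where M is the Fisher information matrix built from the output sensitivities and R is the measurement noise covariance; derivatives whose estimates show large Cramér-Rao bounds or large scatter across maneuvers are regarded as less reliable.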

Helicopter Adaptive Control with Parameter Estimation Based on Feedback Linearization

This paper presents an adaptive feedback linearization approach to helicopter control. Ideal feedback linearization is defined for the case in which the system model is known; adaptive feedback linearization is employed to obtain asymptotically exact cancellation of the inherent uncertainty in the knowledge of the system parameters. The control algorithm is implemented using the feedback linearization technique together with an adaptive method. The controller parameters are unknown, and an adaptive control law drives them towards the ideal values that provide perfect model matching between the reference model and the closed-loop plant. The converged controller parameters then provide good estimates of the unknown plant parameters.
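
In the standard single-input form (shown only as a generic sketch, not the helicopter model used in the paper), feedback linearization and a gradient-type adaptive update read

    \dot{x}=f(x)+g(x)\,u,\qquad
    u=\frac{v-\hat{f}(x)}{\hat{g}(x)},\qquad
    \dot{\hat{\theta}} = -\Gamma\,\varphi(x)\,e,

where v is the new linear input, \hat{f} and \hat{g} are parameterized estimates with parameter vector \hat{\theta} and regressor \varphi(x), e is the tracking error with respect to the reference model, and \Gamma>0 is the adaptation gain.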

Speaker-Independent Quranic Recognizer Based on Maximum Likelihood Linear Regression

An automatic speech recognition system for formal Arabic is needed. The Quran is the most formal spoken book in Arabic, and it is recited all over the world. In this research, a speaker-independent automatic speech recognizer for Quranic Arabic was developed and tested. The system was developed based on tri-phone Hidden Markov Models and Maximum Likelihood Linear Regression (MLLR). MLLR computes a set of transformations that reduce the mismatch between an initial model set and the adaptation data; it uses a regression class tree and estimates a set of linear transformations for the mean and variance parameters of a Gaussian-mixture HMM system. The 30th chapter of the Quran, recited by five of the most famous readers of the Quran, was used for training and testing; the chapter includes about 2000 distinct words. The advantages of using Quranic verses as the database for this recognizer are the uniqueness of the words and the high level of orderliness between verses. The accuracy on the test data ranged from 68% to 85%.
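
The MLLR mean adaptation referred to above applies, for each regression class, an affine transform to every Gaussian mean (this is the standard formulation of MLLR rather than anything specific to this system):

    \hat{\boldsymbol{\mu}} = \mathbf{A}\boldsymbol{\mu}+\mathbf{b}
                           = \mathbf{W}\boldsymbol{\xi},
    \qquad \boldsymbol{\xi}=[1,\ \boldsymbol{\mu}^{\top}]^{\top},

where W = [b A] is estimated from the adaptation data by maximum likelihood and the regression class tree determines how many such transforms can be robustly estimated; a related linear transform is applied to the variances.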