Employee Motivation Factors That Affect Job Performance of Suan Sunandha Rajabhat University Employees

The purpose of this research was threefold: to study motivation factors and their relation to job performance; to compare motivation factors across personal characteristics such as gender, age, income, educational level, marital status, and working duration; and to study the relationship between motivation factors, job performance, and job satisfaction. The sample comprised 400 Suan Sunandha Rajabhat University employees. This quantitative research used a questionnaire as its instrument. The statistics applied for data analysis included percentage, mean, and standard deviation; differences were analyzed using the t-test, one-way analysis of variance, and Pearson's correlation coefficient. The findings showed that the aspects of job promotion and salary were at moderate levels. Additionally, the motivation factors that affected the revenue branch chiefs' job performance were job security, job accomplishment, policy and management, job promotion, and interpersonal relations.
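As a sketch of the final analysis step, Pearson's correlation coefficient between two sets of questionnaire scores can be computed as below; the motivation and performance values are hypothetical illustrations, not data from the study:

```python
import math

def pearson_r(x, y):
    """Pearson's product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical motivation and performance scores for five respondents.
motivation = [3.2, 4.1, 2.8, 4.5, 3.9]
performance = [3.0, 4.3, 2.9, 4.4, 3.8]
r = pearson_r(motivation, performance)
```

A value of r close to 1 would indicate the kind of positive association between motivation and job performance the abstract reports.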

A Novel Deinterlacing Algorithm Based on Adaptive Polynomial Interpolation

In this paper, a novel deinterlacing algorithm is proposed. The proposed algorithm approximates the distribution of the luminance into a polynomial function. Instead of using one polynomial function for all pixels, different polynomial functions are used for the uniform, texture, and directional edge regions. The function coefficients for each region are computed by matrix multiplications. Experimental results demonstrate that the proposed method performs better than the conventional algorithms.
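The abstract does not give the polynomial orders or the region classifier, but the core idea, fitting a polynomial through the available field lines and evaluating it at the missing line, can be sketched as follows (a per-column linear fit over two lines above and two below, as an assumed simplest case):

```python
import numpy as np

def interpolate_missing_line(above, below, order=1):
    """Estimate a missing scan line of an interlaced field by fitting a
    polynomial (per column) through surrounding available lines.
    `above`/`below` are lists of lines above/below the missing line,
    in top-to-bottom order."""
    lines = above + below
    # Vertical positions of the known lines; the missing line sits at y = 0.
    ys = list(range(-len(above), 0)) + list(range(1, len(below) + 1))
    out = np.empty_like(lines[0], dtype=float)
    for col in range(len(lines[0])):
        samples = [line[col] for line in lines]
        coeffs = np.polyfit(ys, samples, order)   # least-squares fit
        out[col] = np.polyval(coeffs, 0.0)        # evaluate at the missing row
    return out

above = [np.array([10., 10., 10.]), np.array([20., 20., 20.])]
below = [np.array([40., 40., 40.]), np.array([50., 50., 50.])]
est = interpolate_missing_line(above, below, order=1)   # -> [30., 30., 30.]
```

In the paper's adaptive scheme, the polynomial order (and the set of samples) would change per region (uniform, texture, directional edge) rather than being fixed as here.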

Staling and Quality of Iranian Flat Bread Stored under Modified Atmosphere in Different Packaging

This study investigated the use of modified atmosphere packaging (MAP) and different packaging materials to extend the shelf life of Barbari flat bread. Three atmospheres were used: 70% CO2 with 30% N2, 50% CO2 with 50% N2, and normal air as a control. The bread samples were packaged in three types of pouches. Shelf life was determined by the appearance of mold and yeast (M+Y) in Barbari bread samples stored at 25 ± 1°C and 38 ± 2% relative humidity. The results showed that it is possible to prolong the shelf life of Barbari bread from four days to about 21 days by using modified atmosphere packaging with a high carbon dioxide concentration together with high-barrier laminated and vacuum bag packages. However, the hardness of samples kept in MAP increased significantly with increasing carbon dioxide concentration. The correlation coefficient (r) between headspace CO2 concentration and hardness was 0.997, 0.997, and 0.599 for packaging A, B, and C, respectively. High negative correlation coefficients were found between crumb moisture and hardness in the various packagings, and there were significant negative correlations between the sensory parameters and texture hardness.

Cross-Cultural Socio-Economic Status Attainment between Muslim and Santal Couples in Rural Bangladesh

This study compared socio-economic status attainment between Muslim and Santal couples in rural Bangladesh. We hypothesized that the socio-economic status attainment (occupation, education, and income) of the Muslim couples was higher than that of the Santal couples. To examine the hypothesis, 288 couples (145 Muslim and 143 Santal) selected by cluster random sampling from Kalna village, Bangladesh were individually interviewed using a semi-structured questionnaire. The results of Pearson's chi-square test suggest that there were significant differences in socio-economic status attainment between the two communities' couples. In addition, Pearson correlation coefficients suggest that there were significant associations between the socio-economic statuses attained by the two communities' couples. Further cross-cultural studies should be conducted on how inter-community relations in the rural social structure of Bangladesh influence the differences in the couples' socio-economic status attainment.
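The chi-square test the abstract relies on can be sketched as follows; the contingency table below is hypothetical (only the row totals of 145 and 143 couples come from the abstract), and the statistic is compared to the standard critical value for two degrees of freedom:

```python
import numpy as np

def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table."""
    obs = np.asarray(table, dtype=float)
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    expected = row @ col / obs.sum()          # independence model
    return float(((obs - expected) ** 2 / expected).sum())

# Hypothetical 2x3 table: community x occupational-status category,
# with row totals matching the 145 Muslim and 143 Santal couples.
table = [[60, 55, 30],
         [25, 48, 70]]
chi2 = chi_square_stat(table)
dof = (2 - 1) * (3 - 1)
CRITICAL_5PCT = 5.991   # chi-square critical value, dof = 2, alpha = 0.05
significant = chi2 > CRITICAL_5PCT
```

A statistic above the critical value corresponds to the kind of significant between-community difference the study reports.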

Generalized Maximal Ratio Combining as a Supra-optimal Receiver Diversity Scheme

Maximal Ratio Combining (MRC) is considered the most complex combining technique, as it requires estimation of the channel coefficients, but it yields the lowest bit error rate (BER) of all combining techniques. However, the BER deteriorates as errors are introduced into the channel coefficient estimates. A novel combining technique, termed Generalized Maximal Ratio Combining (GMRC) with a polynomial kernel, yields a BER identical to MRC under perfect channel estimation and a lower BER in the presence of channel estimation errors. We show that GMRC outperforms the optimal MRC scheme in general, and we introduce it to the scientific community as a new "supra-optimal" algorithm. Since diversity combining is especially effective in small femto- and pico-cells, internet-connected wireless peripheral systems stand to benefit most from GMRC. As a result, many spin-off applications can be expected in IP-based fourth-generation (4G) networks.
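The abstract does not specify GMRC's polynomial kernel, but the baseline MRC scheme it generalizes can be sketched as below; the Rayleigh channel coefficients and noise level are illustrative, not taken from the paper:

```python
import numpy as np

def mrc_combine(h, y):
    """Maximal Ratio Combining: weight each diversity branch by the
    conjugate of its (estimated) channel coefficient, then normalize."""
    return np.sum(np.conj(h) * y) / np.sum(np.abs(h) ** 2)

rng = np.random.default_rng(0)
L = 4                                   # number of diversity branches
# Rayleigh-fading channel coefficients (unit average power).
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)
s = -1.0 + 0j                           # transmitted BPSK symbol
noise = 0.05 * (rng.normal(size=L) + 1j * rng.normal(size=L))
y = h * s + noise                       # received branch signals
z = mrc_combine(h, y)                   # combined decision statistic
bit = 1 if z.real >= 0 else -1
```

GMRC's claimed advantage appears when `h` in `mrc_combine` is replaced by a noisy estimate; plain MRC degrades while the polynomial-kernel combiner reportedly does not.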

Identifying and Prioritizing Factors Affecting Consumer Behavior Based on Product Value

Without awareness and a correct understanding of consumer behavior, organizations cannot take appropriate measures to meet consumer needs and demands. The aim of this paper is to identify and prioritize the factors affecting consumer behavior based on product value. The population of the study includes all consumers of furniture-producing firms in East Azarbaijan province, Iran. The research sample comprises 93 people selected by the sampling formula for an unlimited population. The data collection instrument was a questionnaire, whose validity was confirmed through face validity and whose reliability was established using Cronbach's alpha coefficient. The Kolmogorov-Smirnov test was used to test data normality, the t-test to identify factors affecting product value, and the Friedman test to prioritize the factors. The results show that quality, satisfaction, styling, price, finishing operation, performance, safety, worth, shape, use, and excellence are ranked first through eleventh, respectively.

Bitrate Reduction Using FMO for Video Streaming over Packet Networks

Flexible macroblock ordering (FMO), adopted in the H.264 standard, allows all macroblocks (MBs) in a frame to be partitioned into separate groups of MBs called slice groups (SGs). FMO can not only support error resilience but also control the size of video packets for different network types. However, it is well known that adopting FMO increases the number of bits required to encode a frame. In this paper, we propose a novel algorithm that reduces the bitrate overhead caused by utilizing FMO. In the proposed algorithm, all MBs are grouped into SGs based on the similarity of their transform coefficients. Experimental results show that our algorithm reduces the bitrate compared with conventional FMO.
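The paper's similarity measure is not given in the abstract; as a stand-in, grouping MBs by a scalar transform-coefficient feature (here, total coefficient energy, split into quantiles) can be sketched like this:

```python
import numpy as np

def group_macroblocks(features, num_groups):
    """Assign each macroblock to a slice group so that MBs with similar
    transform-coefficient energy land in the same group: a simple
    quantile-based stand-in for the similarity grouping in the paper."""
    order = np.argsort(features)               # rank MBs by feature value
    groups = np.empty(len(features), dtype=int)
    for g, chunk in enumerate(np.array_split(order, num_groups)):
        groups[chunk] = g
    return groups

# Hypothetical per-MB feature: sum of absolute DCT coefficients.
energy = np.array([5.0, 52.0, 7.0, 49.0, 6.0, 50.0])
sg = group_macroblocks(energy, num_groups=2)   # -> [0, 1, 0, 1, 0, 1]
```

MBs with similar coefficients in one SG tend to compress more efficiently, which is the mechanism behind the reported bitrate reduction.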

Objective Performance of Compressed Image Quality Assessments

Measuring the quality of image compression is important for image processing applications. In this paper, we propose an objective image quality assessment for gray-scale compressed images that correlates well with subjective quality measurement (mean opinion score, MOS) while taking the least time. The new objective measurement is developed from several fundamental objective measurements to evaluate compressed image quality for JPEG and JPEG2000. The reliability between each fundamental objective measurement and the subjective measurement (MOS) is determined. From the experimental results, we found that the Maximum Difference measurement (MD) and a newly proposed measurement, Structural Content Laplacian Mean Square Error (SCLMSE), are suitable for evaluating the quality of JPEG2000 and JPEG compressed images, respectively. In addition, the MD and SCLMSE measurements are scaled to make them equivalent to MOS, giving a compressed-image quality rating from 1 to 5 (unacceptable to excellent quality).
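The exact definition of SCLMSE is not given in the abstract, but the Maximum Difference measurement is a standard full-reference metric and can be sketched directly:

```python
import numpy as np

def maximum_difference(ref, test):
    """Maximum Difference (MD): the largest absolute pixel error between
    a reference image and its compressed version."""
    return float(np.max(np.abs(ref.astype(float) - test.astype(float))))

# Tiny illustrative 2x2 images.
ref = np.array([[100, 120], [130, 140]], dtype=np.uint8)
comp = np.array([[98, 125], [130, 131]], dtype=np.uint8)
md = maximum_difference(ref, comp)   # -> 9.0
```

Casting to float before subtracting avoids the uint8 wraparound that would otherwise corrupt negative differences.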

Image Compression Using Hybrid Vector Quantization

In this paper, image compression using a hybrid vector quantization scheme combining Multistage Vector Quantization (MSVQ) and Pyramid Vector Quantization (PVQ) is introduced. MSVQ and PVQ are combined to exploit the advantages of both. In the wavelet decomposition of an image, most of the information often resides in the lowest-frequency subband; MSVQ is therefore applied to the significant low-frequency coefficients, while PVQ is used to quantize the coefficients of the other, high-frequency subbands. The wavelet coefficients are derived using the lifting scheme. The main aim of the proposed scheme is to achieve a high compression ratio without much compromise in image quality. The results are compared with an existing image compression scheme using MSVQ.
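The lifting scheme mentioned above computes wavelet coefficients in place with predict and update steps. A minimal one-dimensional Haar lifting step (the paper's actual wavelet is not specified in the abstract) looks like this:

```python
import numpy as np

def haar_lift(x):
    """One level of the Haar wavelet via lifting: predict, then update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even              # predict step
    approx = even + detail / 2       # update step (preserves the mean)
    return approx, detail

def haar_unlift(approx, detail):
    """Inverse lifting: undo the update and predict steps."""
    even = approx - detail / 2
    odd = detail + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([2.0, 4.0, 6.0, 8.0])
a, d = haar_lift(x)                  # approx [3., 7.], detail [2., 2.]
```

In the hybrid scheme, coefficients like `a` (low frequency) would go to MSVQ and `d` (high frequency) to PVQ.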

Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging

In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N=50) is delineated by a semi-automatic image processing procedure. An auto-threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images. Qualitative parameters such as apparent mineralization and the total area associated with the PC region are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is utilized to obtain features that quantify texture changes in the medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely mean, median, mode, standard deviation, mean absolute deviation, and median absolute deviation, are derived at level-4 decomposition for both the approximation and horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with the normal and abnormal groups are estimated for both the approximation and horizontal coefficients. In almost all cases, the abnormal samples show a higher degree of correlation than the normal ones. Furthermore, the parameters derived from the approximation coefficients show more correlation than those derived from the horizontal coefficients. The mean and median computed at the output of the level-4 Haar wavelet channel were found to be useful predictors for delineating the normal and abnormal groups.
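The six descriptive statistics named above can be computed from a coefficient array as follows; the input values are a hypothetical stand-in for a subband of wavelet coefficients:

```python
import numpy as np

def wavelet_stats(coeffs):
    """The six descriptive statistics used in the study, computed on an
    array of wavelet coefficients."""
    c = np.asarray(coeffs, dtype=float).ravel()
    mean, median = c.mean(), np.median(c)
    vals, counts = np.unique(c, return_counts=True)
    mode = vals[np.argmax(counts)]           # most frequent value
    return {
        "mean": mean,
        "median": median,
        "mode": mode,
        "std": c.std(),
        "mean_abs_dev": np.mean(np.abs(c - mean)),
        "median_abs_dev": np.median(np.abs(c - median)),
    }

stats = wavelet_stats([1.0, 2.0, 2.0, 3.0, 7.0])
```

In the study, these statistics would be computed on the level-4 approximation and horizontal subbands of each radiograph and then correlated with the normal/abnormal grouping.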

Detection and Correction of Ectopic Beats for HRV Analysis Applying Discrete Wavelet Transforms

The clinical usefulness of heart rate variability is limited by the range of Holter monitoring software available. These software algorithms require a normal sinus rhythm to accurately acquire heart rate variability (HRV) measures in the frequency domain. Premature ventricular contractions (PVCs), more commonly referred to as ectopic beats, are frequent in heart failure; they hinder this analysis and introduce ambiguity. This investigation demonstrates an algorithm that automatically detects ectopic beats by analyzing discrete wavelet transform coefficients. Two techniques for filtering and replacing the ectopic beats in the RR signal are compared: one applies wavelet hard thresholding, and the other applies linear interpolation to replace the ectopic cycles. The results demonstrate, through simulation and signals acquired from a 24-hour ambulatory recorder, that these techniques can accurately detect PVCs and remove the noise and leakage effects produced by ectopic cycles, retaining smooth spectra with minimal error.
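The second replacement technique, linear interpolation over the ectopic cycles, can be sketched as below; the RR series and ectopic indices are hypothetical, with the characteristic short-long PVC signature at beats 3 and 4:

```python
import numpy as np

def replace_ectopic(rr, ectopic_idx):
    """Replace ectopic RR intervals by linear interpolation between the
    surrounding normal beats (one of the two techniques compared)."""
    rr = np.asarray(rr, dtype=float)
    normal = np.setdiff1d(np.arange(rr.size), ectopic_idx)
    cleaned = rr.copy()
    # Interpolate each ectopic position from the neighboring normal beats.
    cleaned[ectopic_idx] = np.interp(ectopic_idx, normal, rr[normal])
    return cleaned

# Hypothetical RR series (ms): a premature beat then a compensatory pause.
rr = [800, 810, 805, 400, 1200, 815, 820]
cleaned = replace_ectopic(rr, [3, 4])
```

Detection in the paper comes from thresholding discrete wavelet transform coefficients of the RR signal; here the ectopic indices are simply given.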

Parameter Estimation for Viewing Rank Distribution of Video-on-Demand

Video-on-demand (VOD) systems are designed using content delivery networks (CDNs) to minimize the overall operational cost and maximize scalability. Estimating the viewing pattern (i.e., the relationship between the number of viewings and the ranking of VOD contents) plays an important role in minimizing the total operational cost and maximizing the performance of VOD systems. In this paper, we analyze a large body of commercial VOD viewing data and find that the viewing rank distribution fits well with the parabolic fractal distribution. A weighted linear model fitting function is used to estimate the parameters (coefficients) of the parabolic fractal distribution. This paper presents an analytical basis for designing an optimal hierarchical VOD content distribution system in terms of cost and performance.
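The parabolic fractal law is quadratic in log-rank, log N(r) = a + b log r + c (log r)^2, so its coefficients can be recovered by a (weighted) polynomial fit in log space. The sketch below generates synthetic data from assumed coefficients and recovers them; the values are illustrative, not the paper's estimates:

```python
import numpy as np

def fit_parabolic_fractal(ranks, views, weights=None):
    """Fit log N(r) = a + b*log r + c*(log r)^2 by (weighted) least squares."""
    x = np.log(ranks)
    y = np.log(views)
    # polyfit returns [c, b, a] for a degree-2 polynomial in log-rank.
    c, b, a = np.polyfit(x, y, 2, w=weights)
    return a, b, c

# Synthetic viewing counts generated from known coefficients.
true_a, true_b, true_c = 10.0, -0.8, -0.05
ranks = np.arange(1, 101)
views = np.exp(true_a + true_b * np.log(ranks) + true_c * np.log(ranks) ** 2)
a, b, c = fit_parabolic_fractal(ranks, views)
```

With real data, the `weights` argument would implement the paper's weighted linear model fitting, e.g. down-weighting the noisy tail ranks.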

A Renovated Cook's Distance Based On The Buckley-James Estimate In Censored Regression

Various methods based on regression ideas have been created to handle data sets containing censored observations, e.g., the Buckley-James method, Miller's method, the Cox method, and the Koul-Susarla-Van Ryzin estimators. Even though comparison studies show the Buckley-James method performs better than some other methods, it is still rarely used by researchers, mainly because of the limited diagnostic analysis developed for it thus far. Therefore, a diagnostic tool for the Buckley-James method is proposed in this paper. It is called the renovated Cook's distance, RD*_i, and has been developed based on Cook's idea. The renovated Cook's distance RD*_i has advantages (depending on the analyst's demand) over (i) the change in the fitted value for a single case, DFIT*_i, since it measures the influence of case i on all n fitted values Ŷ* (not just the fitted value for case i, as DFIT*_i does), and (ii) the change in the coefficient estimate when the ith case is deleted, DBETA*_i, since DBETA*_i corresponds to the number of variables p, so it is usually easier to look at a single diagnostic measure such as RD*_i in which information from all p variables is considered simultaneously. Finally, an example using the Stanford Heart Transplant data is provided to illustrate the proposed diagnostic tool.
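The classical Cook's distance that RD*_i renovates can be sketched for an ordinary (uncensored) least-squares fit as below; the data are synthetic, with one planted influential point, and this is not the Buckley-James version from the paper:

```python
import numpy as np

def cooks_distance(X, y):
    """Classical Cook's distance for OLS: influence of each case on all
    fitted values. The paper's RD*_i adapts this idea to Buckley-James
    fits on censored data."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)                  # residual variance
    h = np.diag(H)                                # leverages
    return resid ** 2 * h / (s2 * p * (1 - h) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = 2 + 3 * x + 0.1 * rng.normal(size=20)
y[10] += 2.0                                      # plant one influential outlier
X = np.column_stack([np.ones_like(x), x])
D = cooks_distance(X, y)
```

A case whose distance stands far above the rest, like index 10 here, is flagged as influential; the paper's example plays the same role on the Stanford Heart Transplant data.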

All Proteins Have a Basic Molecular Formula

This study proposes a basic molecular formula for all proteins. A total of 10,739 proteins belonging to 9 different protein groups classified on the basis of their functions were selected randomly. They included enzymes, storage proteins, hormones, signalling proteins, structural proteins, transport proteins, immunoglobulins or antibodies, motor proteins, and receptor proteins. After obtaining each protein's molecular formula using the ProtParam tool, the H/C, N/C, O/C, and S/C ratios were determined for each randomly selected sample; that is, the H, N, O, and S coefficients were specified per carbon atom. Surprisingly, the results demonstrated that the H, N, O, and S coefficients for all 10,739 proteins are similar and highly correlated. This study demonstrates that, despite differences in structure and function, all known proteins have a similar basic molecular formula C_n H_(1.58±0.015)n N_(0.28±0.005)n O_(0.30±0.007)n S_(0.01±0.002)n. The total correlation between all coefficients was found to be 0.9999.
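The per-carbon coefficients can be extracted from a molecular formula string as follows; the formula below is a hypothetical example chosen to fall near the reported coefficients, not one of the study's proteins:

```python
import re

def element_ratios(formula):
    """Per-carbon coefficients of H, N, O, and S parsed from a molecular
    formula string such as 'C2023H3208N566O619S15'."""
    counts = {el: int(n or 1)
              for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula)}
    c = counts["C"]
    return {el: counts.get(el, 0) / c for el in ("H", "N", "O", "S")}

ratios = element_ratios("C2023H3208N566O619S15")
```

For this illustrative formula, the ratios land near the study's H ≈ 1.58, N ≈ 0.28, O ≈ 0.30, and S ≈ 0.01 per carbon atom.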

Thermo-Sensitive Hydrogel: Control of Hydrophilic-Hydrophobic Transition

The study investigated the hydrophilic-to-hydrophobic transition of polyacrylamide hydrogel modified with N-isopropylacrylamide (NIPAM). The modification was done by mimicking micellar polymerization, which resulted in a better arrangement of the NIPAM chains in the polyacrylamide network; the degree of NIPAM arrangement is described by the NH number. The hydrophilic-to-hydrophobic transition was measured through the partition coefficient, K, of Orange II and Methylene Blue between hydrogel and water. These dyes were chosen as models for solutes with different degrees of hydrophobicity. The study showed that hydrogels with higher NH values gave better solubility of both dyes. Moreover, temperatures above the lower critical solution temperature (LCST) of poly(N-isopropylacrylamide) (PNIPAM) caused the collapse of the NIPAM chains, producing a more hydrophobic environment that increases the solubility of Methylene Blue and decreases the solubility of Orange II in the hydrogels containing NIPAM.

Puff Noise Detection and Cancellation for Robust Speech Recognition

In this paper, an algorithm for detecting and attenuating puff noises, frequently generated in mobile environments, is proposed. As a baseline, a puff detection system is designed based on a Gaussian Mixture Model (GMM), with 39-dimensional Mel-frequency cepstral coefficients (MFCCs) extracted as feature parameters. To improve detection performance, effective acoustic features for puff detection are proposed. In addition, detected puff intervals are attenuated by high-pass filtering. The speech recognition rate was measured for evaluation, and a confusion matrix and ROC curves were used to confirm the validity of the proposed system.
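The attenuation step, high-pass filtering only the detected puff intervals, can be sketched as below. The filter order, cutoff, and the synthetic signal are assumptions for illustration (puff noise is dominated by low frequencies, so a simple one-pole high-pass suffices to show the idea):

```python
import numpy as np

def highpass(x, fs, cutoff):
    """First-order (one-pole RC) high-pass filter, applied sample by sample."""
    rc = 1.0 / (2 * np.pi * cutoff)
    alpha = rc / (rc + 1.0 / fs)
    y = np.zeros_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

def attenuate_puff(signal, intervals, fs=16000, cutoff=500.0):
    """High-pass filter only the detected puff intervals (sample index
    ranges), leaving the rest of the signal untouched."""
    out = np.asarray(signal, dtype=float).copy()
    for start, end in intervals:
        out[start:end] = highpass(out[start:end], fs, cutoff)
    return out

fs = 16000
t = np.arange(fs) / fs
speech = 0.01 * np.sin(2 * np.pi * 1000 * t)      # faint speech-band tone
# Hypothetical puff: a strong 50 Hz burst over samples 4000-8000.
speech[4000:8000] += np.sin(2 * np.pi * 50 * t[4000:8000])
clean = attenuate_puff(speech, [(4000, 8000)], fs=fs)
```

In the paper, the intervals come from the GMM detector rather than being given, and the filtering is applied only there so normal speech is left intact.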

Physico-chemical State of the Air at the Stagnation Point during the Atmospheric Reentry of a Spacecraft

Hypersonic flows around space vehicles during their reentry phase into planetary atmospheres are characterized by intense aerothermal phenomena. The aim of this work is to analyze high-temperature flows around an axisymmetric blunt body, taking into account chemical and vibrational non-equilibrium of the air mixture species. For this purpose, a finite volume methodology is employed to determine the supersonic flow parameters around the axisymmetric blunt body, especially at the stagnation point and along the spacecraft wall, for several altitudes. This allows capturing the shock wave that forms ahead of a blunt body placed in a supersonic free stream. The numerical technique uses the flux vector splitting method of Van Leer. Here, an adequate time-stepping parameter, CFL coefficient, and mesh size are selected to ensure numerical convergence, sought to a residual on the order of 10^-8.
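The role of the CFL coefficient mentioned above is to bound the explicit time step by the fastest signal speed on the mesh. A minimal sketch, with illustrative velocities and sound speeds rather than values from the paper:

```python
import numpy as np

def cfl_timestep(u, c, dx, cfl=0.8):
    """Largest stable explicit time step from the CFL condition for a 1-D
    inviscid flow: dt = CFL * dx / max(|u| + c), where c is the local
    speed of sound. The CFL value here is illustrative."""
    return cfl * dx / np.max(np.abs(u) + c)

# Hypothetical flow field: velocities and sound speeds on a coarse mesh.
u = np.array([1800.0, 1500.0, 900.0, 300.0])   # m/s
c = np.array([340.0, 400.0, 550.0, 700.0])     # m/s
dt = cfl_timestep(u, c, dx=0.01)
```

In the paper's solver, this constraint is combined with the Van Leer split fluxes and iterated until the residual drops to the order of 10^-8.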

Efficient DTW-Based Speech Recognition System for Isolated Words of Arabic Language

Although Arabic is currently one of the most common languages worldwide, there has been relatively little research on Arabic speech recognition compared with other languages such as English and Japanese. Digital speech processing and voice recognition algorithms are of special importance for designing efficient, accurate, and fast automatic speech recognition systems. The speech recognition process carried out in this paper is divided into three stages. First, the signal is preprocessed to reduce noise effects; after that, the signal is digitized and hearingized, and the voice activity regions are segmented using a voice activity detection (VAD) algorithm. Second, features are extracted from the speech signal using the Mel-frequency cepstral coefficients (MFCC) algorithm, with delta and acceleration (delta-delta) coefficients added to improve recognition accuracy. Finally, each test word's features are compared against the training database using the dynamic time warping (DTW) algorithm. With the best setup of all the parameters affecting the aforementioned techniques, the proposed system achieved a recognition rate of about 98.5%, outperforming other HMM- and ANN-based approaches in the literature.
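The DTW matching in the final stage can be sketched as below; the toy sequences stand in for MFCC frame sequences, and the second "utterance" is a time-stretched repetition of the template:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two frame sequences
    (each row is one feature vector, e.g. an MFCC frame)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = [[1.0], [2.0], [3.0]]
same_word = [[1.0], [1.0], [2.0], [3.0]]   # time-stretched repetition
other_word = [[9.0], [9.0], [9.0]]
```

An isolated-word recognizer picks the training template with the smallest DTW distance to the test utterance; the warping path absorbs differences in speaking rate.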

Design of Multiplier-free State-Space Digital Filters

In this paper, a novel approach is presented for designing multiplier-free state-space digital filters. The multiplier-free design is obtained by finding power-of-2 coefficients and also quantizing the state variables to power-of-2 numbers. Expressions for the noise variance are derived for the quantized state vector and the output of the filter, and a "structure-transformation matrix" is incorporated in these expressions. It is shown that quantization effects can be minimized by properly designing the structure-transformation matrix. Simulation results are very promising and illustrate the design algorithm.
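The basic power-of-2 quantization that makes the filter multiplier-free can be sketched as follows (the paper's design additionally optimizes the structure-transformation matrix, which is not modeled here):

```python
import numpy as np

def to_power_of_two(x):
    """Quantize each nonzero coefficient to a signed power of two by
    rounding its base-2 logarithm, so every multiply reduces to an
    arithmetic shift in hardware."""
    x = np.asarray(x, dtype=float)
    q = np.zeros_like(x)
    nz = x != 0
    q[nz] = np.sign(x[nz]) * 2.0 ** np.round(np.log2(np.abs(x[nz])))
    return q

coeffs = np.array([0.3, -0.7, 1.9, 0.12])
q = to_power_of_two(coeffs)   # -> [0.25, -0.5, 2.0, 0.125]
```

For example, multiplying a state variable by 0.25 is a right shift by two bits, which is why the quantized filter needs no multipliers at all.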

Design of a Stable GPC for Nonminimum Phase LTI Systems

Current predictive controllers are utilized for processes in which the rate of output variation is not high; for such processes, stability can be achieved by implementing a constrained predictive controller or applying an infinite prediction horizon. When the rate of output growth is high (e.g., for an unstable nonminimum-phase process), stabilization becomes problematic. To avoid this, we suggest modifying the method in two ways: first, the growth of the prediction error should be decreased at the early stage of the prediction horizon, and second, the rate of error variation should be penalized. The growth of the error is decreased by adjusting its weighting coefficients in the cost function, and the error variation is reduced by adding the first-order derivative of the error to the cost function. Through different examples, it is shown that using these two remedies together achieves closed-loop stability for an unstable nonminimum-phase process.
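The two remedies can be expressed as a modified cost over the prediction horizon; the weighting profile and penalty values below are illustrative assumptions, not the paper's tuning:

```python
import numpy as np

def gpc_cost(errors, du, w=None, rho=1.0, lam=0.1):
    """Modified GPC cost: weighted squared prediction error over the
    horizon, a penalty on the error's first difference (its rate of
    change), and the usual control-increment penalty."""
    e = np.asarray(errors, dtype=float)
    if w is None:
        # Heavier weights early in the horizon to suppress early error growth.
        w = np.linspace(2.0, 1.0, e.size)
    de = np.diff(e)                   # first difference ~ error derivative
    u = np.asarray(du, dtype=float)
    return float(w @ e**2 + rho * de @ de + lam * u @ u)

# A decaying error trajectory is cheaper than a growing one.
slow = gpc_cost([1.0, 0.9, 0.8, 0.7], [0.1, 0.1])
fast = gpc_cost([1.0, 1.5, 2.2, 3.0], [0.1, 0.1])
```

Minimizing this cost drives the optimizer away from the rapidly growing error trajectories that destabilize a plain GPC on nonminimum-phase plants.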