An Empirical Study about RFID Acceptance: Focus on the Employees in Korea

The number of companies adopting RFID in Korea has increased continuously due to the domestic development of information technology. The acceptance of RFID by companies in Korea has enabled them to do business with many global enterprises in a much more efficient and effective way. According to a survey [33, p. 76], many companies in Korea have used RFID for inventory or distribution management. However, the use of RFID by companies in Korea is still in its early stages and its potential value has not yet been fully realized. At this point, it is very important to investigate the factors that affect RFID acceptance. For this study, many previous studies were referenced and several RFID experts were interviewed. Through a pilot test, four factors affecting RFID acceptance were selected (security trust, employee knowledge, partner influence, and service provider trust) and an extended technology acceptance model (e-TAM) incorporating these factors was presented. The proposed model was empirically tested using data collected from employees in companies and public enterprises. In order to analyze the relationships between the exogenous variables and the four TAM variables, a structural equation model (SEM) was developed, and SPSS 12.0 and AMOS 7.0 were used for the analyses. The results are summarized as follows: 1) security trust perceived by employees positively influences perceived usefulness and perceived ease of use; 2) employees' knowledge of RFID positively influences only perceived ease of use; 3) a partner's influence on RFID acceptance positively influences only perceived usefulness; 4) service provider trust strongly and positively influences perceived usefulness and perceived ease of use; and 5) the relationships between the TAM variables are the same as in previous studies.
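A minimal sketch of the SEM step described above, written in Python with the open-source semopy package as a stand-in for AMOS; the survey file, the item names and the reduced path structure are hypothetical and only illustrate how such a model is specified and fitted.

```python
# Illustrative only: hypothetical item names (pu1..pu3, peou1..peou3, st1, st2)
# and a hypothetical CSV of Likert-scale responses.
import pandas as pd
import semopy

model_desc = """
# measurement model: latent constructs measured by observed survey items
PU   =~ pu1 + pu2 + pu3
PEOU =~ peou1 + peou2 + peou3
ST   =~ st1 + st2
# structural model: exogenous factor (security trust) -> TAM variables
PU   ~ ST + PEOU
PEOU ~ ST
"""

df = pd.read_csv("rfid_survey.csv")   # hypothetical survey data file
model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())                # estimated path coefficients and p-values
```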

Citizen Participation in a Program for Preventing Illegal Drugs in Bangkok, Thailand

The purposes of this research were to study citizen participation in preventing illegal drugs in a poor, small community of Bangkok, Thailand, and to compare the level of participation and of concern about the illegal drugs problem across demographic variables. The paper draws upon data collected from a local citizen survey conducted in Bangkok, Thailand during the summer of 2012. A total of 200 responses were collected and analyzed with descriptive statistics and one-way ANOVA tests. The findings revealed that overall citizen participation was at a medium level. The mean scores showed that benefit from the program was ranked highest and the decision to participate was ranked second, while follow-up of the program was ranked lowest. For demographic differences such as gender, age, level of education, income, and years of residency, the hypothesis tests disclosed no difference in the level of participation. However, a difference in occupation was associated with a difference in the level of participation and concern, significant at the 0.05 level.
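As the comparison across demographic groups relies on one-way ANOVA, a minimal sketch of that test in Python is shown below; the participation scores and the three occupation groups are hypothetical.

```python
# One-way ANOVA across hypothetical occupation groups (Likert participation scores).
import numpy as np
from scipy import stats

office_workers = np.array([3, 4, 2, 3, 4, 3], dtype=float)
vendors        = np.array([2, 2, 3, 1, 2, 2], dtype=float)
drivers        = np.array([4, 5, 3, 4, 4, 5], dtype=float)

f_stat, p_value = stats.f_oneway(office_workers, vendors, drivers)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would indicate a significant difference in participation by occupation,
# mirroring the finding reported in the abstract.
```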

Certain Data Dimension Reduction Techniques for Application with an ANN-Based MCS for the Study of High Energy Showers

Cosmic rays arriving from their places of origin in space generate, upon entering the Earth's atmosphere, cascades of secondary particles called Extensive Air Showers (EAS). Detection and analysis of EAS and similar high energy particle showers involve a plethora of experimental setups with certain constraints, for which soft-computational tools like Artificial Neural Networks (ANNs) can be adopted. The optimality of ANN classifiers can be enhanced further by the use of a Multiple Classifier System (MCS) and certain data dimension reduction techniques. This work describes the performance of certain data dimension reduction techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Self Organizing Map (SOM) approximators, for application with an MCS formed using a Multi Layer Perceptron (MLP), a Recurrent Neural Network (RNN) and a Probabilistic Neural Network (PNN). The data inputs are obtained from an array of detectors placed in a circular arrangement resembling a practical detector grid; these inputs are high-dimensional and strongly correlated. The PCA, ICA and SOM blocks reduce the correlation and generate a form suitable for real-time practical applications for predicting the primary energy and location of an EAS from density values captured using detectors in a circular grid.
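A brief illustrative sketch of the dimension-reduction step (not the authors' exact pipeline): correlated detector densities are compressed with PCA (or ICA) before one member of the classifier ensemble, here an MLP regressor, predicts the primary energy. Array sizes and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((1000, 64))                       # density values from 64 detectors (hypothetical)
y = X.sum(axis=1) + rng.normal(0, 0.1, 1000)     # stand-in for the primary energy target

X_red = PCA(n_components=8).fit_transform(X)     # FastICA(n_components=8) is analogous
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_red, y)
print("training R^2:", round(mlp.score(X_red, y), 3))
```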

Using an HMM-Based Classifier Adapted to Background Noises with Improved Sound Features for Audio Surveillance Applications

Discrimination between different classes of environmental sounds is the goal of our work. The use of a sound recognition system can offer concrete potential for surveillance and security applications. The paper's first contribution to this research field is a thorough investigation of the applicability of state-of-the-art audio features in the domain of environmental sound recognition. Additionally, a set of novel features obtained by combining the basic parameters is introduced. The quality of the investigated features is evaluated by an HMM-based classifier, to which particular attention is given. In fact, we propose to use a multi-style training system based on HMMs: one recognizer is trained on a database including different levels of background noise and is used as a universal recognizer for every environment. In order to enhance the system's robustness by reducing the environmental variability, we explore different adaptation algorithms, including Maximum Likelihood Linear Regression (MLLR), Maximum A Posteriori (MAP) and the MAP/MLLR algorithm that combines MAP and MLLR. Experimental evaluation shows that a rather good recognition rate can be reached, even under severe noise degradation, when the system is fed with the appropriate set of features.
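A minimal sketch of the multi-style HMM training and classification loop, assuming the hmmlearn package and pre-computed frame-level features (the real system would use the improved feature set and the MLLR/MAP adaptation described above):

```python
import numpy as np
from hmmlearn import hmm

def train_class_model(feature_sequences, n_states=4):
    # Pool training sequences recorded at several background-noise levels.
    X = np.vstack(feature_sequences)
    lengths = [len(seq) for seq in feature_sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def classify(models, features):
    # Pick the sound class whose HMM gives the highest log-likelihood.
    return max(models, key=lambda name: models[name].score(features))

rng = np.random.default_rng(0)                   # synthetic 13-dim features for two classes
models = {
    "scream": train_class_model([rng.normal(0, 1, (80, 13)) for _ in range(5)]),
    "glass":  train_class_model([rng.normal(1, 1, (80, 13)) for _ in range(5)]),
}
print(classify(models, rng.normal(1, 1, (60, 13))))
```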

A Fast Neural Algorithm for Serial Code Detection in a Stream of Sequential Data

In recent years, fast neural networks for object/face detection have been introduced based on cross correlation in the frequency domain between the input matrix and the hidden weights of the neural networks. In our previous papers [3,4], fast neural networks for certain code detection were introduced. It was proved in [10] that, for fast neural networks to give the same correct results as conventional neural networks, both the weights of the neural networks and the input matrix must be symmetric. This condition made those fast neural networks slower than conventional neural networks. Another symmetric form for the input matrix was introduced in [1-9] to speed up the operation of these fast neural networks. Here, corrections to the cross correlation equations (given in [13,15,16]) that compensate for the symmetry condition are presented. After these corrections, it is proved mathematically that the number of computation steps required by fast neural networks is less than that needed by classical neural networks. Furthermore, there is no need to convert the input data into symmetric form. Moreover, this new idea is applied to increase the speed of neural networks in the case of processing complex values. Simulation results after these corrections using MATLAB confirm the theoretical computations.
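The speed-up rests on computing, for every shift of the input stream, the dot product with the neuron weights via a single frequency-domain cross correlation. A small numpy sketch of that core operation (sizes are illustrative):

```python
import numpy as np

def xcorr_fft(signal, kernel):
    n = len(signal) + len(kernel) - 1
    S = np.fft.fft(signal, n)
    K = np.fft.fft(kernel, n)
    # Conjugating the kernel spectrum gives cross correlation rather than convolution.
    return np.real(np.fft.ifft(S * np.conj(K)))

rng = np.random.default_rng(0)
stream = rng.random(4096)            # sequential input data
weights = rng.random(64)             # hidden-neuron weights
scores = xcorr_fft(stream, weights)  # neuron activation at every shift, computed at once
print(scores.shape)
```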

Multidimensional Performance Management

In order to maximize the efficiency of an information management platform and to assist in decision making, the collection, storage and analysis of performance-relevant data have become of fundamental importance. This paper addresses the merits and drawbacks of the OLAP paradigm for efficiently navigating large volumes of performance measurement data hierarchically. System managers or database administrators navigate through adequately (re)structured measurement data aiming to detect performance bottlenecks, identify causes of performance problems, or assess the impact of configuration changes on the system and its representative metrics. Of particular importance is finding the root cause of an imminent problem threatening the availability and performance of an information system. By leveraging OLAP techniques, in contrast to traditional static reporting, this can be accomplished within a moderate amount of time and with little processing complexity. It is shown how OLAP techniques can help improve the understandability and manageability of measurement data and, hence, improve the whole performance analysis process.
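An illustrative sketch of the kind of roll-up and drill-down navigation meant here, using a pandas pivot table as a stand-in cube; the dimensions (host, metric, hour) and the measure values are hypothetical.

```python
import pandas as pd

measurements = pd.DataFrame({
    "host":   ["db1", "db1", "db2", "db2", "db1", "db2"],
    "metric": ["cpu", "io",  "cpu", "io",  "cpu", "io"],
    "hour":   [10,    10,    10,    10,    11,    11],
    "value":  [0.71,  0.20,  0.55,  0.80,  0.95,  0.35],
})

# Roll-up: aggregate the measure over the time dimension per host and metric.
print(measurements.pivot_table(index="host", columns="metric",
                               values="value", aggfunc="mean"))

# Drill-down / slice: one host at hourly granularity to chase a bottleneck.
print(measurements[measurements.host == "db1"]
      .pivot_table(index="hour", columns="metric", values="value"))
```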

Doping Profile Measurement and Characterization by Scanning Capacitance Microscope for Pocket-Implanted Nano Scale n-MOSFET

This paper presents a doping profile measurement and characterization technique for the pocket-implanted nano scale n-MOSFET. Scanning capacitance microscopy and atomic force microscopy have been used to image the extent of lateral dopant diffusion in MOS structures. The data are capacitance vs. voltage measurements made on a nano scale device. The technique is nondestructive when imaging uncleaved samples. Experimental data from the published literature are presented here on actual, cleaved device structures which clearly indicate the two-dimensional dopant profile in terms of a spatially varying modulated capacitance signal. First-order deconvolution indicates the technique has much promise for the quantitative characterization of lateral dopant profiles. The pocket profile is modeled assuming linear pocket profiles at the source and drain edges. From the model, the effective doping concentration is extracted for use in modeling and simulating the various parameters of the pocket-implanted nano scale n-MOSFET. The potential of the technique to characterize important device-related phenomena on a local scale is also discussed.
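A small numpy sketch of the linear pocket-profile assumption mentioned above: linearly decaying pile-ups at the source and drain edges on top of a uniform substrate doping, from which an effective (average) concentration can be taken. All parameter values are hypothetical.

```python
import numpy as np

L, Lp = 50e-9, 12e-9        # channel length and pocket extent (m), hypothetical
Nsub, Npeak = 1e23, 1e24    # substrate and peak pocket doping (m^-3), hypothetical

x = np.linspace(0.0, L, 501)
source_pocket = np.where(x <= Lp, Npeak - (Npeak - Nsub) * x / Lp, Nsub)
drain_pocket  = np.where(x >= L - Lp, Npeak - (Npeak - Nsub) * (L - x) / Lp, Nsub)
profile = np.maximum(source_pocket, drain_pocket)   # piecewise-linear lateral profile

print(f"effective doping ~ {profile.mean():.2e} m^-3")
```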

A Heuristics Approach for Fast Detection of Suspicious Money Laundering Cases in an Investment Bank

Today, money laundering (ML) poses a serious threat not only to financial institutions but also to nations. This criminal activity is becoming more and more sophisticated and seems to have moved beyond the cliché of drug trafficking to the financing of terrorism and, of course, personal gain. Most international financial institutions have been implementing anti-money laundering (AML) solutions to fight investment fraud. However, traditional investigative techniques consume numerous man-hours. Recently, data mining approaches have been developed and are considered well-suited techniques for detecting ML activities. Within the scope of a collaborative project to develop a new solution for the AML units of an international investment bank, we proposed a data mining-based solution for AML. In this paper, we present a heuristics approach to improve the performance of this solution. We also show some preliminary results obtained with this method on the analysis of transaction datasets.
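As an example of the flavour of heuristic meant here, the sketch below pre-filters customers whose latest transaction volume deviates sharply from their own history before the heavier data-mining stage is run; the field names, amounts and the factor-of-ten threshold are hypothetical.

```python
import pandas as pd

tx = pd.DataFrame({
    "customer": ["A", "A", "A", "B", "B", "B"],
    "week":     [1, 2, 3, 1, 2, 3],
    "amount":   [1.0e4, 1.1e4, 9.5e5, 2.0e4, 2.2e4, 2.1e4],
})

weekly = tx.groupby(["customer", "week"])["amount"].sum().unstack()
baseline = weekly.iloc[:, :-1].mean(axis=1)   # historical weekly average
latest = weekly.iloc[:, -1]                   # most recent week
suspicious = latest > 10 * baseline           # heuristic threshold
print(weekly.index[suspicious].tolist())      # candidates for closer investigation
```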

A New Face Detection Technique using 2D DCT and Self Organizing Feature Map

This paper presents a new technique for detection of human faces within color images. The approach relies on image segmentation based on skin color, features extracted from the two-dimensional discrete cosine transform (DCT), and self-organizing maps (SOM). After candidate skin regions are extracted, feature vectors are constructed using DCT coefficients computed from those regions. A supervised SOM training session is used to cluster feature vectors into groups, and to assign "face" or "non-face" labels to those clusters. Evaluation was performed using a new image database of 286 images, containing 1027 faces. After training, our detection technique achieved a detection rate of 77.94% during subsequent tests, with a false positive rate of 5.14%. To our knowledge, the proposed technique is the first to combine DCT-based feature extraction with a SOM for detecting human faces within color images. It is also one of a few attempts to combine a feature-invariant approach, such as color-based skin segmentation, together with appearance-based face detection. The main advantage of the new technique is its low computational requirements, in terms of both processing speed and memory utilization.
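A minimal sketch of the DCT feature-extraction step (patch size and the number of retained coefficients are assumptions): the 2D DCT of a candidate skin region is computed and its low-frequency block is flattened into the feature vector handed to the SOM.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(region, keep=8):
    # region: 2D grayscale array of a candidate skin patch
    coeffs = dctn(region, norm="ortho")
    return coeffs[:keep, :keep].ravel()   # keep x keep low-frequency coefficients

rng = np.random.default_rng(0)
patch = rng.random((32, 32))              # hypothetical candidate region
print(dct_features(patch).shape)          # (64,) feature vector for SOM clustering
```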

A Rule-based Approach for Anomaly Detection in Subscriber Usage Pattern

In this report we present a rule-based approach to detect anomalous telephone calls. The method described here uses subscriber usage CDR (call detail record) data sampled over two observation periods: a study period and a test period. The study period contains call records of customers' non-anomalous behaviour. Customers are first grouped according to their similar usage behaviour (e.g., average number of local calls per week). For customers in each group, we develop a probabilistic model to describe their usage. Next, we use maximum likelihood estimation (MLE) to estimate the parameters of the calling behaviour. Then we determine thresholds by calculating the acceptable change within a group. MLE is applied to the data in the test period to estimate the parameters of the calling behaviour, and these parameters are compared against the thresholds. Any deviation beyond the threshold is used to raise an alarm. This method has the advantage of identifying local anomalies, as compared to techniques that identify global anomalies. The method is tested on 90 days of study data and 10 days of test data for telecom customers. For medium to large deviations in the data in the test window, the method is able to identify 90% of anomalous usage with less than a 1% false alarm rate.
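A small sketch of the estimate-and-threshold idea, assuming (purely for illustration, since the abstract does not name the distribution) a Poisson model for weekly call counts: the group rate is estimated by MLE on the study period, a threshold is derived from the group's spread, and the test-period estimate is compared against it.

```python
import numpy as np

study_counts = np.array([12, 15, 11, 14, 13, 16, 12])   # calls/week, study period
lam_hat = study_counts.mean()                            # Poisson MLE of the group rate
threshold = lam_hat + 3 * np.sqrt(lam_hat)               # acceptable upper deviation

test_counts = np.array([27, 31, 29])                     # calls/week, test period
lam_test = test_counts.mean()
if lam_test > threshold:
    print(f"alarm: test rate {lam_test:.1f} exceeds threshold {threshold:.1f}")
```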

A Discrete Filtering Algorithm for Impulse Wave Parameter Estimation

This paper presents a new method for estimating the mean curve of impulse voltage waveforms recorded during impulse tests. In practice, these waveforms are distorted by noise, oscillations and overshoot. The problem is formulated as an estimation problem, and estimation of the impulse waveform parameters is achieved using a fast and accurate technique based on a discrete dynamic filtering (DDF) algorithm. The main advantage of the proposed technique is its ability to produce the estimates in a very short time and with a very high degree of accuracy. The algorithm uses sets of digital samples of the recorded impulse waveform. The proposed technique has been tested using simulated data of practical waveforms. The effects of the number of samples and of the data window size are studied. Results are reported and discussed.
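For orientation, the sketch below fits the standard double-exponential impulse model to noisy samples by ordinary least squares; it is a simpler stand-in for, not an implementation of, the DDF algorithm, and all waveform parameters are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def impulse(t, A, tau1, tau2):
    # v(t) = A * (exp(-t/tau1) - exp(-t/tau2)), the standard impulse mean curve
    return A * (np.exp(-t / tau1) - np.exp(-t / tau2))

t = np.linspace(0.0, 100e-6, 2000)                 # 100 us record
clean = impulse(t, 1.0, 68e-6, 0.4e-6)             # roughly a 1.2/50 us waveform
rng = np.random.default_rng(0)
noisy = clean + rng.normal(0, 0.02, t.size)        # noise stand-in for distortion

(A, tau1, tau2), _ = curve_fit(impulse, t, noisy, p0=(1.0, 50e-6, 1e-6))
print(A, tau1, tau2)
```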

Investigation of Genetic Epidemiology of Metabolic Compromises in β-Thalassemia Minor Mutation: Phenotypic Pleiotropy

The human genome is not only the evolutionary summation of all advantageous events but also houses lesions of deleterious footprints. A single gene mutation may sometimes express multiple consequences in numerous tissues, and a linear relationship between genotype and phenotype may often be obscure. β-Thalassemia minor, a transfusion-independent mild anaemia, coupled with environment among other factors, may manifest as phenotypic pleiotropy with hypocholesterolemia, vitamin D deficiency, tissue hypoxia, hyperparathyroidism and psychological alterations. The occurrence of pancreatic insufficiency with resultant steatorrhoea, vitamin D (25-OH) deficiency (13.86 ng/ml) and hypocholesterolemia (85 mg/dl) in a 30-year-old male β-thalassemia minor patient (hemoglobin 11 g/dl, with fetal hemoglobin 2.10%, Hb A2 4.60%, adult Hb 84.80% and an altered hemogram), together with increased parathyroid hormone (62 pg/ml) and moderate serum Ca2+ (9.5 mg/dl), indicates a cascade of phenotypic pleiotropy in which the β-thalassemia mutation (be it in the 5' cap site of the mRNA, differential splicing, etc.) in the heterozygous state affects several metabolic pathways. Compensatory extramedullary hematopoiesis may not have coped well with the stressful lifestyle of this young individual, and increased erythropoietic stress with a high demand for cholesterol for RBC membrane synthesis may have resulted in hypocholesterolemia. Oxidative stress and tissue hypoxia may have caused the pancreatic insufficiency, leading to vitamin D deficiency, which may in turn have caused secondary hyperparathyroidism to sustain the serum calcium level. The irritability and stress intolerance of the patient were a cumulative effect of this vicious cycle of metabolic compromises. From these findings we propose that the metabolic deficiencies in β-thalassemia mutations may be considered the phenotypic display of pleiotropy, explaining the genetic epidemiology. According to the recommendations from the NIH Workshop on Gene-Environment Interplay in Common Complex Diseases: Forging an Integrative Model, the design of observational studies should be informed by gene-environment hypotheses, and the results of studies of genetic diseases should be published to inform future hypotheses. A variety of approaches is needed to capture data on all possible aspects, each of which is likely to contribute to the etiology of disease. The workshop speakers also agreed that new statistical methods and measurement tools are needed to appraise information that may be missed by conventional methods, in which a large sample size is needed to detect a considerable effect. A meta-analytic cohort study in the future may bring significant insight into the question raised in the title.

On the Fast Convergence of DD-LMS DFE Using a Good Initialization Strategy

In wireless communication systems, a Decision Feedback Equalizer (DFE) is required to cancel intersymbol interference (ISI). In this paper, an exact convergence analysis of the DFE adapted by the Least Mean Square (LMS) algorithm during the training phase is derived, taking into account the finite-alphabet context of data transmission. This allows us to determine the shortest training sequence that reaches a given Mean Square Error (MSE). With the intention of avoiding the problem of ill-convergence, the paper proposes an initialization strategy for the blind decision-directed (DD) algorithm. This yields a semi-blind DFE with high speed and good convergence.
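A minimal numpy sketch of the LMS-adapted DFE during the training phase (BPSK symbols, a hypothetical three-tap ISI channel); the decision-directed phase would simply replace the known training symbol with the slicer output.

```python
import numpy as np

rng = np.random.default_rng(0)
N, Nf, Nb, mu = 2000, 7, 3, 0.01             # symbols, FF taps, FB taps, step size
s = rng.choice([-1.0, 1.0], N)               # finite-alphabet training symbols
h = np.array([1.0, 0.5, 0.2])                # hypothetical ISI channel
x = np.convolve(s, h)[:N] + 0.05 * rng.normal(size=N)

ff, fb, past = np.zeros(Nf), np.zeros(Nb), np.zeros(Nb)
errs = []
for n in range(Nf, N):
    u = x[n - Nf + 1:n + 1][::-1]            # received samples, newest first
    y = ff @ u - fb @ past                   # equalizer output
    e = s[n] - y                             # training error (known symbol)
    ff += mu * e * u                         # LMS update, feed-forward filter
    fb -= mu * e * past                      # LMS update, feedback filter
    past = np.roll(past, 1); past[0] = s[n]  # store the decided/known symbol
    errs.append(e ** 2)
print("MSE over last 200 symbols:", np.mean(errs[-200:]))
```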

Comparison of Different Neural Network Approaches for the Prediction of Kidney Dysfunction

This paper presents the prediction of kidney dysfunction using different neural network (NN) approaches. Self-Organizing Maps (SOM), a Probabilistic Neural Network (PNN) and a Multi Layer Perceptron Neural Network (MLPNN) trained with the Back Propagation Algorithm (BPA) are used in this study. Six hundred and sixty-three sets of analytical laboratory tests were collected from one of the private clinical laboratories in Baghdad. For each subject, serum urea and serum creatinine levels were analyzed and tested using clinical laboratory measurements. The collected urea and creatinine levels are then used as inputs to the three NN models, each trained by a different neural approach. SOM is a class of unsupervised network, whereas PNN and the BP-trained MLPNN are supervised networks. These networks are used as classifiers to predict whether a kidney is normal or dysfunctional. The prediction accuracy, sensitivity and specificity were found for each of the proposed networks. We conclude that the PNN gives the fastest and most accurate prediction of kidney dysfunction and works as a promising tool for predicting routine kidney dysfunction from clinical laboratory data.
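A minimal sketch with synthetic data (the clinical dataset is not public): a supervised network classifying normal versus dysfunctional kidneys from serum urea and creatinine, in the spirit of the supervised classifiers above; the value ranges and class sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(30, 8, 300), rng.normal(0.9, 0.2, 300)])
dysfct = np.column_stack([rng.normal(90, 25, 300), rng.normal(3.0, 1.0, 300)])
X = np.vstack([normal, dysfct])              # columns: urea (mg/dl), creatinine (mg/dl)
y = np.array([0] * 300 + [1] * 300)          # 0 = normal, 1 = dysfunction

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(Xtr, ytr)
print("test accuracy:", round(clf.score(Xte, yte), 3))
```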

A New Precautionary Method for Measurement and Improvement of Data Quality

Data quality is a complex and unstructured concept that concerns information systems managers. The reason for this attention is the high cost of maintaining and cleaning poor-quality data. Beyond these expenses, such data cause wrong statistics, analyses and decisions in organizations. Therefore, managers intend to improve the quality of their information systems' data. One of the basic prerequisites of quality improvement is evaluating its extent. In this paper, we present a precautionary method whose application gives the data of information systems a better quality. Our method covers different dimensions of data quality and therefore has the necessary integrity. The presented method has been tested on three dimensions (accuracy, value-added and believability), and the results confirm the improvement achieved and the integrity of the method.

Color Image Segmentation Using Competitive and Cooperative Learning Approach

Color image segmentation can be considered a clustering procedure in feature space. The k-means algorithm and its adaptive version, i.e. the competitive learning approach, are powerful tools for data clustering. However, k-means and competitive learning suffer from several drawbacks, such as the dead-unit problem and the need to pre-specify the number of clusters. In this paper, we explore the use of a competitive and cooperative learning (CCL) approach to perform color image segmentation. In this approach, seed points not only compete with each other; the winner also dynamically selects several of its nearest competitors to form a cooperative team that adapts to the input together. As a result, the method can automatically select the correct number of clusters and avoid the dead-unit problem. Experimental results show that CCL obtains better segmentation results.
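For context, the sketch below shows plain competitive learning (winner-take-all seed updates) on pixel colors; the cooperative team selection that CCL adds, and with it the automatic choice of the cluster number, is deliberately omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.random((5000, 3))                   # RGB feature vectors in [0, 1]
seeds = pixels[rng.choice(len(pixels), 6, replace=False)].copy()
lr = 0.05

for x in pixels:
    winner = np.argmin(np.linalg.norm(seeds - x, axis=1))
    seeds[winner] += lr * (x - seeds[winner])    # move only the winning seed

labels = np.argmin(np.linalg.norm(pixels[:, None] - seeds[None], axis=2), axis=1)
print(np.bincount(labels))                       # pixels assigned to each segment
```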

Entrepreneurship, Innovation, Incubator and Economic Development: A Case Study

The objective of this paper is twofold: (1) to discuss and analyze successful incubator case studies worldwide, and (2) to identify the similarities and differences among these case studies. Design/methodology/approach: The nature of this research is mainly qualitative (multi-case studies, literature review). The investigation uses ten case studies, and the data were mainly collected from organizational documents from various countries. Findings: The findings of this research can help incubator managers, policy makers and government parties achieve successful implementation. Originality/value: This paper contributes to the current literature on best practices worldwide. Additionally, it presents future perspectives for academics and practitioners.

Single-Camera EKF-vSLAM

This paper presents an Extended Kalman Filter implementation of a single-camera Visual Simultaneous Localization and Mapping (vSLAM) algorithm, a novel approach to the simultaneous localization and mapping problem widely studied in the mobile robotics field. The algorithm is vision- and odometry-based. The odometry data are incremental and therefore accumulate error over time, since the robot may slip or be lifted; consequently, if odometry is used alone, the robot position cannot be estimated accurately. In this paper we show that a combination of odometry and visual landmarks via the Extended Kalman Filter can improve the robot position estimate. We use a Pioneer II robot and motorized pan-tilt camera models to implement the algorithm.
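A minimal sketch of one EKF prediction/correction cycle fusing an odometry step with a single range-bearing landmark observation; the state here is only the 2D robot pose, not the full vSLAM state with landmarks, and all noise values are hypothetical.

```python
import numpy as np

x = np.array([0.0, 0.0, 0.0])            # robot pose [x, y, theta]
P = np.eye(3) * 0.01                     # pose covariance
Q = np.diag([0.02, 0.02, 0.01])          # odometry (process) noise
R = np.diag([0.10, 0.05])                # landmark measurement noise

def predict(x, P, d, dtheta):
    # Odometry step: drive distance d, then rotate by dtheta.
    x_new = x + np.array([d * np.cos(x[2]), d * np.sin(x[2]), dtheta])
    F = np.array([[1, 0, -d * np.sin(x[2])],
                  [0, 1,  d * np.cos(x[2])],
                  [0, 0,  1.0]])
    return x_new, F @ P @ F.T + Q

def update(x, P, z, landmark):
    # z = [range, bearing] to a landmark of known position.
    dx, dy = landmark - x[:2]
    q = dx ** 2 + dy ** 2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                  [ dy / q,          -dx / q,          -1.0]])
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ (z - z_hat), (np.eye(3) - K @ H) @ P

x, P = predict(x, P, d=1.0, dtheta=0.1)
x, P = update(x, P, z=np.array([2.2, 0.5]), landmark=np.array([2.0, 1.0]))
print(x)
```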

Application of Exact String Matching Algorithms to the SMILES Representation of Chemical Structures

Bioinformatics and cheminformatics are computer-based disciplines providing tools for the acquisition, storage, processing, analysis and integration of biological and chemical data, and for the development of potential applications of such data. A chemical database is a database designed exclusively to store chemical information. NMRShiftDB is one of the main databases used to represent chemical structures in 2D or 3D form. The SMILES format is one of many ways to write a chemical structure in a linear format. In this study we extracted antimicrobial structures in SMILES format from NMRShiftDB and stored them, with their corresponding information, in our Local Data Warehouse. Additionally, we developed a search tool that responds to a user's query using the JME Editor, a tool that allows the user to draw or edit molecules and converts the drawn structure into SMILES format. We applied the Quick Search algorithm to search for antimicrobial structures in our Local Data Warehouse.
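A short sketch of the Quick Search (Sunday) exact string-matching algorithm applied to a SMILES string; the fragment and molecule below are only examples.

```python
def quick_search(pattern, text):
    """Return all starting positions of pattern in text (Quick Search algorithm)."""
    m, n = len(pattern), len(text)
    # Shift table based on the character just past the current window.
    shift = {c: m - i for i, c in enumerate(pattern)}
    matches, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            matches.append(i)
        if i + m >= n:
            break
        i += shift.get(text[i + m], m + 1)
    return matches

# Locate a carboxylic-acid fragment inside the SMILES string of aspirin.
print(quick_search("C(=O)O", "CC(=O)OC1=CC=CC=C1C(=O)O"))   # -> [1, 18]
```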

Impregnation of Copper into Kanuma Volcanic Ash Soil to Improve Mercury Sorption Capacity

The present study attempted to improve the mercury (Hg) sorption capacity of Kanuma volcanic ash soil (KVAS) by impregnating it with copper (Cu). Impregnation was carried out with 1% and 5% Cu powder, and the sorption characteristics of the best-performing Cu-impregnated KVAS were determined under different operational conditions (contact time, solution pH, sorbent dosage and Hg concentration) using batch studies. The 1% Cu-impregnated KVAS showed the optimum improvement (79%) in removing Hg from water compared to the control. The investigation found that equilibrium, i.e. maximum Hg adsorption, was reached at a contact period of 6 h. Adsorption showed a pH-dependent response, with maximum Hg sorption capacity at pH 3.5. The Freundlich isotherm model fits the experimental data better than the Langmuir isotherm. It can be concluded that Cu impregnation improves the Hg sorption capacity of KVAS and that 1% Cu-impregnated KVAS could be employed as a cost-effective adsorbent for treating Hg-contaminated water.
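Since the Freundlich isotherm was found to describe the data best, a small curve-fitting sketch is included below; the equilibrium concentrations and sorbed amounts are made up for illustration, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(Ce, Kf, n):
    # q_e = Kf * Ce^(1/n)
    return Kf * Ce ** (1.0 / n)

Ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # equilibrium Hg concentration (mg/L), hypothetical
qe = np.array([1.8, 2.5, 3.4, 5.1, 6.9])    # sorbed amount (mg/g), hypothetical

(Kf, n), _ = curve_fit(freundlich, Ce, qe, p0=(1.0, 2.0))
print(f"Kf = {Kf:.2f}, 1/n = {1 / n:.2f}")
```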