Abstract: The use of new technologies such as the internet (e-mail, chat
rooms) and cell phones has increased steeply in recent years.
Especially among children and young people, the use of technological
tools and equipment is widespread. Although many teachers and
administrators now recognize the problem of school bullying, few are
aware that students are being harassed through electronic
communication. Referred to as electronic bullying, cyber bullying, or
online social cruelty, this phenomenon includes bullying through email,
instant messaging, in a chat room, on a website, or through
digital messages or images sent to a cell phone. Cyber bullying is
defined as deliberately causing harm to others using the internet
or other digital technologies. This study has a quantitative research design and
uses a relational survey as its method. The participants consisted of
300 secondary school students in the city of Konya, Turkey. 195
(64.8%) of the participants were female and 105 (35.2%) were male; 39
(13%) students were in grade 1, 187 (62.1%) were in grade 2 and 74
(24.6%) were in grade 3. The "Cyber Bullying Question List"
developed by Arıcak (2009) was given to the students. Following
questions about demographics, a functional definition of cyber
bullying was provided. In order to assess the students' human values, the
"Human Values Scale (HVS)" developed by Dilmaç (2007) for
secondary school students was administered. The scale consists of 42
items in six dimensions. Data analysis was conducted by the primary
investigator of the study using SPSS 14.00 statistical analysis
software. Descriptive statistics were calculated for the analysis of
the students' cyber bullying behaviour, and simple regression analysis was
conducted in order to test whether each value in the scale could
explain cyber bullying behaviour.
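The simple-regression step can be sketched in Python; this is a hedged illustration with invented variable names and toy data (the study itself used SPSS), not the study's analysis.

```python
# Minimal sketch of a simple (one-predictor) regression of the kind used to
# test whether a single human-values score explains cyber bullying behaviour.
# The variable names and data below are illustrative, not the study's.

def simple_regression(x, y):
    """Ordinary least squares for one predictor: (slope, intercept, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - intercept - slope * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Toy data: higher human-values scores paired with lower bullying scores.
value_score = [10, 12, 15, 18, 20, 25]
bullying_score = [8, 7, 6, 5, 4, 2]
slope, intercept, r2 = simple_regression(value_score, bullying_score)
```

A negative slope with a large R² would correspond to a value dimension that explains (reduced) cyber bullying behaviour.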
Abstract: Various regression-based methods have been created to handle data sets containing censored observations, e.g. the Buckley-James method, Miller's method, the Cox method, and the Koul-Susarla-Van Ryzin estimators. Even though comparison studies show that the Buckley-James method performs better than some other methods, it is still rarely used by researchers, mainly because of the limited diagnostic analysis developed for the Buckley-James method thus far. Therefore, a diagnostic tool for the Buckley-James method is proposed in this paper. It is called the renovated Cook's distance (RD*_i) and has been developed based on Cook's idea. The renovated Cook's distance (RD*_i) has advantages (depending on the analyst's demands) over (i) the change in the fitted value for a single case, DFIT*_i, as it measures the influence of case i on all n fitted values Ŷ* (not just the fitted value for case i, as DFIT*_i does), and (ii) the change in the coefficient estimate when the ith case is deleted, DBETA*_i, since DBETA*_i corresponds to the number of variables p, so it is usually easier to look at a single diagnostic measure such as RD*_i, in which information from the p variables is considered simultaneously. Finally, an example using the Stanford Heart Transplant data is provided to illustrate the proposed diagnostic tool.
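For orientation, the classical Cook's distance on which RD*_i builds can be sketched for an ordinary (uncensored) simple linear regression; the Buckley-James censoring machinery is deliberately omitted here, so this is not the renovated measure itself.

```python
# Sketch of the classical Cook's distance that RD*_i renovates, shown for an
# ordinary (uncensored) simple linear regression on toy data.

def cooks_distances(x, y):
    """Cook's distance D_i for every case of a simple linear regression."""
    n, p = len(x), 2  # p = number of estimated coefficients
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    resid = [yi - intercept - slope * xi for xi, yi in zip(x, y)]
    mse = sum(r * r for r in resid) / (n - p)
    dists = []
    for xi, ri in zip(x, resid):
        h = 1 / n + (xi - mx) ** 2 / sxx            # leverage of case i
        dists.append(ri * ri / (p * mse) * h / (1 - h) ** 2)
    return dists

# A single influential case (the last point) dominates the distances.
d = cooks_distances([1, 2, 3, 4, 5, 10], [1, 2, 3, 4, 5, 30])
```

Like RD*_i, each D_i summarizes the influence of case i on all n fitted values in a single number, rather than one number per coefficient as with DBETA-type measures.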
Abstract: The objective of this paper is to characterize the spontaneous electroencephalogram (EEG) signals of four different motor imagery tasks and thereby to show a possible solution for the present binary communication between the brain and a machine, or a Brain-Computer Interface (BCI). The processing technique used in this paper was fractal analysis evaluated by the Critical Exponent Method (CEM). The EEG signal was registered in 5 healthy subjects, sampling 15 measuring channels at 1024 Hz. Each channel was preprocessed by Laplacian spatial filtering so as to reduce the spatial blur and therefore increase the spatial resolution. The EEG of each channel was segmented and its fractal dimension (FD) calculated. The FD was evaluated in the time interval corresponding to the motor imagery and averaged over all the subjects (for each channel). In order to characterize the FD distribution, linear regression curves of FD over the electrode positions were applied. The FD differences between the proposed mental tasks were quantified and evaluated for each experimental subject. The results show a substantial fractal dimension in the EEG signal of motor imagery tasks, which can be utilized for multiple-state BCI applications.
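A hedged sketch of fractal-dimension estimation: the paper's Critical Exponent Method is specific, so this illustration uses Higuchi's FD estimator, a common alternative, on toy signals rather than real EEG.

```python
# Higuchi's fractal dimension estimator, a common stand-in for the paper's
# CEM; a smooth ramp should give FD near 1 and white noise FD near 2.
import math
import random

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal."""
    n = len(x)
    pts = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            num = (n - 1 - m) // k        # number of increments at offset m
            if num < 1:
                continue
            curve = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                        for i in range(1, num + 1))
            lengths.append(curve * (n - 1) / (num * k * k))
        pts.append((math.log(1.0 / k), math.log(sum(lengths) / len(lengths))))
    # FD is the slope of log L(k) against log(1/k)
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    return (sum((a - mx) * (b - my) for a, b in pts)
            / sum((a - mx) ** 2 for a, b in pts))

rng = random.Random(0)
fd_line = higuchi_fd([float(i) for i in range(512)])          # smooth: FD ~ 1
fd_noise = higuchi_fd([rng.gauss(0, 1) for _ in range(512)])  # noise: FD ~ 2
```

Per-channel FD values of this kind, averaged over the imagery interval, are what the paper regresses over the electrode positions.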
Abstract: In this paper, the linear regression model is
estimated by the ordinary least squares method, and the
partially linear regression model is estimated by the penalized
least squares method using a smoothing spline. The
differences and similarities between the sums of squares of
the linear regression and partially linear regression models
(semi-parametric regression models) are then investigated. It
is shown that the sums of squares in linear regression reduce
to the sums of squares in partially linear regression models.
Furthermore, we indicate that various sums of squares in
linear regression correspond to different deviance statements
in partially linear regression. In addition, the coefficient of
determination derived in the linear regression model is easily
generalized to the coefficient of determination of the partially
linear regression model. To this end, two different
applications are presented: a simulated and a real data set are
considered to support the claims made here. In this way, the
study is supported with both a simulation and a real data example.
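As a sketch of the quantities the abstract relates, the following snippet computes the ordinary least-squares sums of squares and the coefficient of determination on toy data; the smoothing-spline (penalized) counterpart of the partially linear model is omitted here.

```python
# The sum-of-squares identity SST = SSR + SSE and R^2 = SSR/SST for an
# ordinary least-squares fit; the abstract's point is that these carry
# over to the penalized (smoothing spline) partially linear estimate.

def ols_sums_of_squares(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    fitted = [b0 + b1 * xi for xi in x]
    sst = sum((yi - my) ** 2 for yi in y)                    # total
    ssr = sum((fi - my) ** 2 for fi in fitted)               # regression
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))   # error
    return sst, ssr, sse, ssr / sst                          # last entry: R^2

sst, ssr, sse, r2 = ols_sums_of_squares(
    [1.0, 2.0, 3.0, 4.0, 5.0], [2.1, 3.9, 6.2, 8.0, 9.8])
```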
Abstract: Throughout this paper, a relatively new technique, the Tabu search variable selection model, is elaborated, showing how it can be efficiently applied within the financial world whenever researchers come across the selection of a subset of variables from a whole set of descriptive variables under analysis. In the field of financial prediction, researchers often have to select a subset of variables from a larger set to solve different types of problems such as corporate bankruptcy prediction, personal bankruptcy prediction, mortgage and credit scoring, and the Arbitrage Pricing Model (APM). Consequently, to demonstrate how the method operates and to illustrate its usefulness as well as its superiority compared to other commonly used methods, the Tabu search algorithm for variable selection is compared to two main alternative search procedures, namely stepwise regression and the maximum R² improvement method. The Tabu search is then implemented in finance, where it attempts to predict corporate bankruptcy by selecting the most appropriate financial ratios, thus creating its own prediction score equation. In comparison to other methods, most notably the Altman Z-Score model, the Tabu search model produces a higher success rate in correctly predicting the failure of firms or the continued running of existing entities.
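The mechanics of Tabu search for subset selection can be sketched as follows; the scoring function here is synthetic (in the paper it would be a regression criterion, e.g. R², over candidate financial ratios), and the tenure and iteration counts are arbitrary choices, not the paper's.

```python
# Tabu search over binary inclusion masks: flip one variable per move,
# forbid recently flipped variables for `tenure` iterations, and allow a
# tabu move only when it beats the best solution found (aspiration).
import random

def tabu_search(n_vars, score, iters=100, tenure=5, seed=0):
    """Steepest-ascent Tabu search; returns (best mask, best score)."""
    rng = random.Random(seed)
    current = [rng.random() < 0.5 for _ in range(n_vars)]
    best, best_score = current[:], score(current)
    tabu = {}  # variable index -> iteration until which flipping it is tabu
    for it in range(iters):
        candidates = []
        for j in range(n_vars):
            neighbour = current[:]
            neighbour[j] = not neighbour[j]
            s = score(neighbour)
            if tabu.get(j, -1) < it or s > best_score:  # aspiration criterion
                candidates.append((s, j, neighbour))
        if not candidates:
            continue
        s, j, neighbour = max(candidates)
        current = neighbour
        tabu[j] = it + tenure
        if s > best_score:
            best, best_score = neighbour[:], s
    return best, best_score

# Synthetic criterion: variables 0, 2 and 5 are "useful"; every extra
# variable costs a small penalty, mimicking an adjusted-R^2 criterion.
USEFUL = {0, 2, 5}
def criterion(mask):
    chosen = {i for i, on in enumerate(mask) if on}
    return len(chosen & USEFUL) - 0.1 * len(chosen - USEFUL)

best_mask, best_value = tabu_search(8, criterion)
```

Unlike stepwise regression, the tabu list lets the search accept temporarily worsening moves and so escape local optima in the subset space.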
Abstract: The study investigates the relationship between
education level, workplace learning behaviors, psychological
empowerment and burnout in a sample of 191 teachers. We
hypothesized that education level would positively affect teachers'
psychological state, increasing empowerment and decreasing burnout, and
we proposed that these effects would be mediated by workplace learning
behaviors. We used multiple regression analyses to test a model that
also included the following six control variables: the teachers'
age, gender, and teaching tenure; the school's religious level; the
pupils' needs (regular/special needs); and the class level (elementary/
high school). The results support the proposed mediating model.
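A minimal sketch of the mediation logic being tested (the direct effect of X on Y shrinking once the mediator M is controlled for), using synthetic data and plain OLS via normal equations; the study's actual variables, controls, and test statistics are not reproduced here.

```python
# Mediation sketch: education level (X) -> workplace learning (M) ->
# empowerment (Y), on synthetic data built so X acts only through M.
import random

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n + 1):
                m[r][c] -= f * m[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][c] * x[c] for c in range(i + 1, n))) / m[i][i]
    return x

def ols(xcols, y):
    """OLS with intercept via normal equations; returns [b0, b1, ...]."""
    cols = [[1.0] * len(y)] + xcols
    k = len(cols)
    xtx = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(k)]
           for i in range(k)]
    xty = [sum(u * yi for u, yi in zip(cols[i], y)) for i in range(k)]
    return solve(xtx, xty)

rng = random.Random(1)
X = [rng.random() for _ in range(200)]
M = [2 * x + 0.1 * rng.gauss(0, 1) for x in X]
Y = [3 * m + 0.1 * rng.gauss(0, 1) for m in M]

c_total = ols([X], Y)[1]       # total effect of X on Y
c_direct = ols([X, M], Y)[1]   # direct effect of X with M controlled
```

Full mediation corresponds to `c_direct` collapsing toward zero relative to `c_total`.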
Abstract: Response surface methodology was used for
quantitative investigation of water and solids transfer during osmotic
dehydration of beetroot in aqueous solution of salt. Effects of
temperature (25–45 °C), processing time (30–150 min), salt
concentration (5–25%, w/w) and solution to sample ratio (5:1–25:1)
on osmotic dehydration of beetroot were estimated. Quadratic
regression equations describing the effects of these factors on the
water loss and solids gain were developed. It was found that the effects
of temperature and salt concentration on water loss were more
significant than the effects of processing time and solution to sample
ratio. As for solids gain, processing time and salt concentration were
the most significant factors. The osmotic dehydration process was
optimized for water loss, solids gain, and weight reduction. The
optimum conditions were found to be: temperature 35 °C,
processing time 90 min, salt concentration 14.31% and solution
to sample ratio 8.5:1. At these optimum values, water loss, solids gain
and weight reduction were found to be 30.86 (g/100 g initial sample),
9.43 (g/100 g initial sample) and 21.43 (g/100 g initial sample)
respectively.
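A one-factor slice of the quadratic regression used in response surface methodology can be sketched as follows; the data are synthetic and only temperature is varied, whereas the paper fits a full four-factor quadratic model.

```python
# Fit y = b0 + b1*x + b2*x^2 by least squares (3x3 normal equations),
# illustrating the kind of quadratic response curve combined in RSM.
# The "water loss vs temperature" numbers below are invented.

def quadratic_fit(x, y):
    """Least-squares coefficients [b0, b1, b2] of y = b0 + b1*x + b2*x^2."""
    cols = [[1.0] * len(x), x, [xi * xi for xi in x]]
    a = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    b = [sum(u * yi for u, yi in zip(cols[i], y)) for i in range(3)]
    # solve the 3x3 system a*coef = b by Gaussian elimination
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            for c in range(i, 4):
                m[r][c] -= f * m[i][c]
    coef = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        coef[i] = (m[i][3] - sum(m[i][c] * coef[c]
                                 for c in range(i + 1, 3))) / m[i][i]
    return coef

# Synthetic curve with a known optimum at t = 37.5 (vertex of the parabola).
temps = [25.0, 30.0, 35.0, 40.0, 45.0]
water_loss = [20.0 + 0.9 * t - 0.012 * t * t for t in temps]
b0, b1, b2 = quadratic_fit(temps, water_loss)
optimum_temp = -b1 / (2 * b2)
```

Optimizing the fitted quadratic surface (here, the vertex of the parabola) is how RSM locates conditions such as the 35 °C / 90 min / 14.31% optimum reported in the abstract.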
Abstract: The objectives of this research were to compare the success of SMEs registered in Nakorn Pathom Province across personal characteristics, to study the relations between innovation knowledge and capability and the success of those SMEs, and to study the relations between work efficiency and their success. A questionnaire was utilized as a tool to collect data. Statistics utilized in this research included frequency, percentage, mean, standard deviation, and multiple regression analysis. Data were analyzed using the Statistical Package for the Social Sciences. The findings revealed that the majority of respondents were male, between 25-34 years old, holding an undergraduate degree, and married and living together. The average income of respondents was between 10,001-20,000 baht. It was also found that, in terms of innovation knowledge and capability, two variables, physical characteristics and the innovation process, had an influence on innovation knowledge and capability and on innovation evaluation.
Abstract: The main purpose of this research is to address the role of the psychological harassment behaviors (mobbing) to which employees are exposed, and of personality characteristics, in work alienation. The research population was composed of the employees of a Provincial Special Administration. A survey with four sections was created to measure the variables and achieve the basic goals of the research. Correlation and stepwise regression analyses were performed to investigate the separate and overall effects of the sub-dimensions of psychological harassment behaviors and personality characteristics on the work alienation of employees. Correlation analysis revealed significant but weak relationships between work alienation and psychological harassment and personality characteristics. Stepwise regression analysis also revealed significant relationships between the work alienation variable and assault on personality and direct negative behaviors (sub-dimensions of mobbing), and openness (a sub-dimension of personality characteristics). Each variable was introduced into the model step by step to investigate the effects of the significant variables in explaining the variations in work alienation. While the explanation ratio of the first model was 13%, the last model, including three variables, had an explanation ratio of 24%.
Abstract: The authors have been developing several models
based on artificial neural networks, linear regression models, Box-
Jenkins methodology and ARIMA models to predict the time series
of tourism. The time series consists of the "Monthly Number of Guest
Nights in the Hotels" of one region. Several comparisons between the
different types of models have been carried out, as well as of the
features used as inputs to the models. The Artificial Neural Network
(ANN) models have consistently performed among the best. Usually
the feed-forward architecture is used, due to its wide
application and good results. In this paper the authors make a
comparison between different architectures of ANNs using
the same inputs. The traditional feed-forward architecture,
a cascade-forward architecture, a recurrent Elman architecture and
a radial basis function architecture are discussed and compared on the
task of predicting the mentioned time series.
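A minimal feed-forward network of the kind compared in the paper can be sketched in pure Python; the architecture (12 inputs, one tanh hidden layer), the toy sinusoidal "monthly" series, and the training settings are all illustrative assumptions, not the paper's setup.

```python
# One-hidden-layer feed-forward net trained by plain SGD to predict the
# next point of a seasonal series from a 12-month window. Toy example.
import math
import random

def make_windows(series, w):
    """Sliding windows of length w and their one-step-ahead targets."""
    X = [series[i:i + w] for i in range(len(series) - w)]
    Y = [series[i + w] for i in range(len(series) - w)]
    return X, Y

class FeedForward:
    """Tanh hidden layer, linear output unit."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sum(w * h for w, h in zip(self.w2, self.h)) + self.b2

    def train(self, X, Y, lr=0.05, epochs=200):
        for _ in range(epochs):
            for x, y in zip(X, Y):
                err = self.forward(x) - y
                for j, h in enumerate(self.h):
                    grad_h = err * self.w2[j] * (1 - h * h)  # through tanh
                    self.w2[j] -= lr * err * h
                    self.b1[j] -= lr * grad_h
                    for i, xi in enumerate(x):
                        self.w1[j][i] -= lr * grad_h * xi
                self.b2 -= lr * err

# Toy "monthly" series with a 12-step seasonality.
series = [math.sin(2 * math.pi * t / 12) for t in range(120)]
X, Y = make_windows(series, 12)
net = FeedForward(n_in=12, n_hidden=6)
net.train(X, Y)
mse = sum((net.forward(x) - y) ** 2 for x, y in zip(X, Y)) / len(X)
```

The cascade-forward, Elman, and radial basis function variants compared in the paper differ only in how the hidden layer is wired, not in this overall train-and-predict loop.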
Abstract: ANN-ARIMA, which combines the autoregressive integrated moving average (ARIMA) model and the artificial neural network (ANN) model, is a valuable tool for modeling and forecasting nonlinear time series, yet the over-fitting problem is more likely to occur in neural network models. This paper provides a hybrid methodology, called BS-RBFAR, that combines a radial basis function (RBF) neural network and an autoregressive (AR) model based on the binomial smoothing (BS) technique, which is efficient in data processing. This method is examined using the Canadian lynx data. Empirical results indicate that the over-fitting problem can be eased using an RBF neural network based on binomial smoothing (called BS-RBF), and that the hybrid BS-RBFAR model can be an effective way to improve the forecasting accuracy achieved by BS-RBF used separately.
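The binomial smoothing (BS) preprocessing step can be sketched as a convolution with normalized binomial coefficients; the kernel order and the clamped edge handling below are assumptions, since the abstract does not fix them.

```python
# Binomial smoothing: convolve the series with a normalized row of
# Pascal's triangle (order 2 gives the familiar [1, 2, 1]/4 kernel),
# damping high-frequency noise before the RBF/AR models are fit.

def binomial_kernel(order):
    row = [1]
    for _ in range(order):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    s = sum(row)
    return [c / s for c in row]

def binomial_smooth(x, order=2):
    k = binomial_kernel(order)
    half = order // 2
    n = len(x)
    # edges are handled by clamping indices to the series bounds
    return [sum(k[j] * x[min(max(i + j - half, 0), n - 1)]
                for j in range(len(k)))
            for i in range(n)]

const = binomial_smooth([5.0] * 10)          # a constant series is unchanged
noisy = [(-1.0) ** i for i in range(50)]     # alternating "noise"
smooth = binomial_smooth(noisy)              # interior values collapse to 0
```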
Abstract: The detection of outliers is essential because
outliers can produce serious interpretative problems in
linear as well as in nonlinear regression analysis. Much work has
been accomplished on the identification of outliers in linear
regression, but not in nonlinear regression. In this article we propose
several outlier detection techniques for nonlinear regression. The
main idea is to use the linear approximation of a nonlinear model and
consider the gradient as the design matrix. Subsequently, the
detection techniques are formulated. Six detection measures are
developed and combined with three estimation techniques: the
least-squares, M, and MM estimators. The study shows that among
the six measures, only the studentized residual and Cook's distance,
combined with the MM estimator, are consistently capable of
identifying the correct outliers.
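The linear-approximation idea can be sketched as follows: for an assumed model y = a·exp(b·x), the gradient with respect to the parameters serves as the design matrix, from which leverages and internally studentized residuals follow as in linear regression. The model, the toy data, and the use of known parameter values are all illustrative simplifications (the paper combines such measures with LS, M, and MM estimation).

```python
# Linear approximation of a nonlinear model: the Jacobian of the mean
# function y = a*exp(b*x) w.r.t. (a, b) plays the role of the design
# matrix, giving leverages h_ii and studentized residuals per case.
import math

def nonlinear_diagnostics(x, y, a, b):
    """Return (leverage, internally studentized residual) for each case."""
    # Jacobian columns of a*exp(b*x): d/da = exp(b*x), d/db = a*x*exp(b*x)
    J = [(math.exp(b * xi), a * xi * math.exp(b * xi)) for xi in x]
    s11 = sum(j1 * j1 for j1, _ in J)
    s12 = sum(j1 * j2 for j1, j2 in J)
    s22 = sum(j2 * j2 for _, j2 in J)
    det = s11 * s22 - s12 * s12            # to invert the 2x2 matrix J'J
    resid = [yi - a * math.exp(b * xi) for xi, yi in zip(x, y)]
    n, p = len(x), 2
    sigma2 = sum(r * r for r in resid) / (n - p)
    out = []
    for (j1, j2), r in zip(J, resid):
        # leverage h_ii = g_i (J'J)^{-1} g_i' with g_i the gradient row
        h = (j1 * (s22 * j1 - s12 * j2) + j2 * (s11 * j2 - s12 * j1)) / det
        out.append((h, r / math.sqrt(sigma2 * (1 - h))))
    return out

# Toy data generated from y = 2*exp(0.3*x), with one planted outlier.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0 * math.exp(0.3 * xi) for xi in xs]
ys[2] += 3.0                               # the outlier
diag = nonlinear_diagnostics(xs, ys, 2.0, 0.3)
```

The planted outlier should carry the largest studentized residual, mirroring the finding that this measure (with a robust estimator) reliably flags the correct cases.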
Abstract: The main aim of this study is to identify the most
influential variables that cause defects on the items produced by a
casting company located in Turkey. To this end, one of the items
produced by the company with high defective percentage rates is
selected. Two approaches, regression analysis and decision trees, are
used to model the relationship between process parameters and
defect types. Although the logistic regression models failed, the
decision tree model gives meaningful results. Based on these results, it can be
claimed that the decision tree approach is a promising technique for
determining the most important process variables.
Abstract: Sleep stage scoring is the process of classifying the
stage of sleep a subject is in. Sleep is classified into
two states based on the constellation of physiological parameters.
The two states are the non-rapid eye movement (NREM) and the
rapid eye movement (REM). The NREM sleep is also classified into
four stages (1-4). These states and the wakeful state are
distinguished from each other based on brain activity. In this
work, a classification method for automated sleep stage scoring
based on a single EEG recording using wavelet packet decomposition
was implemented. Thirty two ploysomnographic recording from the
MIT-BIH database were used for training and validation of the
proposed method. A single EEG recording was extracted and
smoothed using Savitzky-Golay filter. Wavelet packets
decomposition up to the fourth level based on 20th order Daubechies
filter was used to extract features from the EEG signal. A features
vector of 54 features was formed. It was reduced to a size of 25 using
the gain ratio method and fed into a classifier of regression trees. The
regression trees were trained using 67% of the records available. The
records for training were selected based on cross validation of the
records. The remaining records were used for testing the
classifier. The overall correct rate of the proposed method was found
to be around 75%, which is acceptable compared to the techniques in
the literature.
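The feature-extraction step can be sketched with a Haar wavelet packet decomposition; the paper uses a 20th-order Daubechies filter, so the Haar filter here (chosen for brevity) and the sub-band energy features are illustrative assumptions.

```python
# Level-4 wavelet packet decomposition with the orthonormal Haar filter:
# each step splits every node into approximation and detail halves, and
# the energy of each of the 2**4 = 16 sub-bands is taken as a feature.
import math

def haar_step(x):
    """One orthonormal Haar analysis step: (approximation, detail)."""
    a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return a, d

def wavelet_packet_energies(signal, level=4):
    """Energies of the 2**level wavelet packet sub-bands of the signal."""
    nodes = [signal]
    for _ in range(level):
        nodes = [half for node in nodes for half in haar_step(node)]
    return [sum(v * v for v in node) for node in nodes]

# Toy stand-in for an EEG epoch: a pure 5-cycle sinusoid of 256 samples.
sig = [math.sin(2 * math.pi * 5 * t / 256) for t in range(256)]
feats = wavelet_packet_energies(sig, level=4)
```

Because the Haar transform is orthonormal, the sub-band energies sum to the signal energy, which makes them well-behaved inputs for the subsequent gain-ratio selection and tree classifier.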
Abstract: Missing data is a persistent problem in almost all
areas of empirical research. The missing data must be treated very
carefully, as data plays a fundamental role in every analysis.
Improper treatment can distort the analysis or generate biased results.
In this paper, we compare and contrast various imputation techniques
on missing data sets and make an empirical evaluation of these
methods so as to construct quality software models. Our empirical
study is based on two of NASA's public datasets, KC4 and KC1. The
actual data sets, of 125 cases and 2107 cases respectively, contain
no missing values. These data sets were used to create
Missing at Random (MAR) data. Listwise Deletion (LD), Mean
Substitution (MS), interpolation, regression with an error term, and
Expectation-Maximization (EM) approaches were used to compare
the effects of the various techniques.
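Two of the compared treatments, mean substitution and linear interpolation, can be sketched on a toy column with missing entries (None); the interpolation sketch assumes interior gaps only, and listwise deletion, regression imputation, and EM operate analogously at the dataset level.

```python
# Two simple imputation treatments on a single column with missing values.

def mean_substitute(col):
    """Replace every missing entry (None) with the observed column mean."""
    observed = [v for v in col if v is not None]
    m = sum(observed) / len(observed)
    return [m if v is None else v for v in col]

def interpolate(col):
    """Linear interpolation; assumes gaps are interior (observed endpoints)."""
    out = col[:]
    for i, v in enumerate(col):
        if v is not None:
            continue
        lo = max(j for j in range(i) if col[j] is not None)
        hi = min(j for j in range(i + 1, len(col)) if col[j] is not None)
        out[i] = col[lo] + (i - lo) / (hi - lo) * (col[hi] - col[lo])
    return out

column = [1.0, None, 3.0, None, None, 6.0]
ms = mean_substitute(column)   # every gap becomes the mean, 10/3
li = interpolate(column)       # gaps filled along the local trend
```

The contrast is already visible on this toy column: mean substitution ignores the trend that interpolation preserves, which is one axis along which such empirical comparisons differ.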
Abstract: Avoiding learning failures in mathematics e-learning environments caused by emotional problems in students with autism has become an important topic for combining special education with information and communications technology. This study presents an adaptive emotional adjustment model in mathematics e-learning for students with autism, addressing the lack of emotional perception in mathematics e-learning systems. In addition, an emotion classification for students with autism was developed by inducing emotions in mathematical learning environments and recording the changes in the students' physiological signals and facial expressions. Using these methods, 58 emotional features were obtained. These features were then processed using one-way ANOVA and information gain (IG). After reducing the feature dimension, support vector machines (SVM), k-nearest neighbors (KNN), and classification and regression trees (CART) were used to classify four emotional categories: baseline, happy, angry, and anxious. After testing and comparison, in a situation without feature selection, the accuracy rate of the SVM classification reaches as high as 79.3%. After using IG to reduce the feature dimension, with only 28 features remaining, SVM still has a classification accuracy of 78.2%. The results of this research could enhance the effectiveness of e-learning in special education.
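The information gain (IG) filter used for feature reduction can be sketched as follows, for discrete features on toy data; the study's physiological features are continuous and would first need discretization.

```python
# Information gain of a discrete feature with respect to class labels:
# IG(feature) = H(labels) - H(labels | feature). Features with near-zero
# IG are the ones a filter like the paper's would discard.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """Reduction in label entropy from conditioning on the feature."""
    n = len(labels)
    conditional = 0.0
    for value in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == value]
        conditional += len(subset) / n * entropy(subset)
    return entropy(labels) - conditional

# Toy labels and two candidate features.
labels  = ["happy", "happy", "angry", "angry"]
useful  = ["hi", "hi", "lo", "lo"]  # perfectly separates the classes
useless = ["a", "b", "a", "b"]      # carries no class information
```

Ranking the 58 features by such a score and keeping the top 28 is the shape of the reduction step before the SVM/KNN/CART comparison.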
Abstract: In response to global warming, city planners aim at actions for reducing carbon emissions. One approach is to promote the usage of public transportation toward transit-oriented development. For example, rapid transit systems have opened in Taipei City and Kaohsiung City. However, as of November 2008, the average daily patronage of the Kaohsiung MRT system counted only 113,774 passengers, much less than expected. This raises the crucial questions: how does public transport compete with private transport, and, more importantly, what factors would enhance the use of public transport? To answer these questions, our study first applied regression to analyze the factors attracting people to use public transport in cities around the world. Our study shows that the number of MRT stations, city population, cost of living, transit fare, density, gasoline price, and the scooter being a major mode of transport are the major factors. Subsequently, our study identified successful and unsuccessful cities with regard to public transport usage based on a diagnosis of the regression residuals. Finally, by comparing the transportation strategies adopted by the successful cities, we conclude that Kaohsiung City could apply strategies such as increasing parking fees, reducing parking spaces in the downtown area, and reducing transfer time by providing more bus services and public bikes to promote the usage of public transport.
Abstract: Throughout the 1980s, management accounting researchers
described the increasing irrelevance of traditional control and
performance measurement systems. The Balanced Scorecard (BSC)
is a critical business tool for many organizations. It is a
performance measurement system which translates mission and
strategy into objectives. The strategy map approach is a development
of the BSC in which certain necessary causal relations must be
established. To recognize these relations, experts usually rely on
experience. It is also possible to utilize regression for the same
purpose. Structural Equation Modeling (SEM), which is one of the
most powerful methods of multivariate data analysis, obtains more
appropriate results than traditional methods such as regression. In the
present paper, we propose SEM for the first time to identify the
relations between objectives in the strategy map, and a test to
measure the importance of these relations. In SEM, factor analysis and
hypothesis testing are done in the same analysis. SEM is known to be
better than other techniques at supporting analysis and reporting. Our
approach provides a framework which permits the experts to design
the strategy map by applying a comprehensive and scientific method
together with their experience. Therefore this scheme is a more
reliable method in comparison with the previously established
methods.
Abstract: The present study investigated the relationship
between the personality characteristics of drivers and the number and
amount of fines they received in a year. This study was carried out on
120 male taxi drivers who worked at least seven hours a day in
Lamerd, a city in the south of Iran. Subjects were chosen
voluntarily among those available. The predictive variables were the
NEO five-factor personality traits (1. Conscientiousness, 2. Openness
to Experience, 3. Neuroticism, 4. Extraversion, 5. Agreeableness);
the criterion variables were the number and amount of fines the
drivers had received in the last three years. The results of the
regression analysis showed that the Conscientiousness factor
negatively predicted the number and amount of financial fines the
drivers received during the last three years. The Openness factor
positively predicted the number of fines received in the last three
years and the amount of financial fines during the last year. The
Extraversion factor significantly and positively predicted only the
amount of financial fines received during the last year. Increasing
age was associated with decreasing driving offenses as well as
financial loss. The findings can be useful in recognizing high-risk
drivers and referring them to counseling centers. They can also be
used to inform drivers about their personality and its relation to
their accident rate. Such criteria would be of great importance when
employing drivers in settings such as companies and offices.
Abstract: Cutting tools are widely used in manufacturing processes, and drilling is the most commonly used machining process. Although the drill-bits used in drilling may not be expensive, their breakage can cause damage to the expensive workpiece being drilled and, at the same time, has a major impact on productivity. Predicting drill-bit breakage, therefore, is important in reducing cost and improving productivity. This study uses twenty features extracted from two degradation signals, viz. thrust force and torque. The methodology involves developing and comparing decision tree, random forest, and multinomial logistic regression models for classifying and predicting drill-bit breakage using the degradation signals.