Abstract: Problem Statement: Rapid technological developments of the 21st century have advanced our daily lives in various ways. Particularly in education, students frequently utilize technological resources to aid their homework, to access information, to listen to the radio or watch television (26.9 %) and to use e-mail (34.2 %) [26]. Not surprisingly, the increase in the use of technologies has also resulted in an increase in the use of e-mail, instant messaging, chat rooms, mobile phones, mobile phone cameras and web sites by adolescents to bully peers. As cyber bullying occurs in cyberspace, less access to technologies would mean less cyber-harm. Therefore, the frequency of technology use is a significant predictor of cyber bullying and cyber victimization. Cyber bullies try to harm their victims using various media. These tools include sending derogatory texts via mobile phones, sending threatening e-mails and forwarding confidential e-mails to everyone on the contacts list. Another form of cyber bullying is to set up a humiliating website and invite others to post comments. In other words, cyber bullies use e-mail, chat rooms, instant messaging, pagers, mobile texts and online voting tools to humiliate and frighten others and to create a sense of helplessness. No matter what type of bullying it is, it negatively affects its victims. Children who bully exhibit more emotional inhibition and attribute more negative self-statements to themselves compared to non-bullies. Students whose families are not sympathetic and who receive less emotional support are more prone to bully their peers. Bullies tend to have authoritarian families and do not get along well with them. The family is the place where children's physical, social and psychological needs are satisfied and where their personalities develop. As the use of the internet became prevalent, so did parents' restrictions on their children's internet use. However, parents are often unaware of the real harm. 
Studies that explain the relationship between parental attitudes and cyber bullying are scarce in the literature. Thus, this study aims to investigate the relationship between cyber bullying and parental attitudes in primary school. Purpose of Study: This study aimed to investigate the relationship between cyber bullying and parental attitudes. A second aim was to determine whether parental attitudes could predict cyber bullying and, if so, which variables could predict it significantly. Methods: The study had a cross-sectional and relational survey model. A demographic information form, questions about cyber bullying and a Parental Attitudes Inventory were administered to a total of 346 students (189 females and 157 males) registered at various primary schools. Data was analysed by multiple regression analysis using the software package SPSS 16.
Abstract: This paper discusses two observers used for the
estimation of the parameters of a PMSM. The first, a reduced-order
observer, is used to estimate the inaccessible parameters of the
PMSM. The second, a full-order observer, estimates all the
parameters of the PMSM, even though some of them are directly
available for measurement, so as to achieve insensitivity to
parameter variation. However, the state-space model contains some
nonlinear terms, i.e. products of different state variables. The
asymptotic state observer, which approximately reconstructs the
state vector for linear systems without uncertainties, was presented
by Luenberger. In this work, a modified form of such an observer is
used by including a nonlinear term involving the speed. Both
observers are designed in the framework of nonlinear control, and
their stability and rates of convergence are discussed.
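To make the idea concrete, here is a minimal sketch of a Luenberger-style observer augmented with a nonlinear product term involving the speed. The two-state toy model, coefficients and gains below are hypothetical illustrations, not the paper's actual PMSM model: only the "current" is measured, and the output-error correction drives the estimate of the unmeasured "speed" to its true value.

```python
# Sketch of a Luenberger-style observer with a nonlinear speed-current
# product term. The plant model and all coefficients are hypothetical,
# not the paper's PMSM parameters.

def simulate(steps=5000, dt=0.01):
    a, c, d, k, u = 2.0, 1.0, 1.0, 0.5, 1.0   # toy plant coefficients
    l1, l2 = 5.0, 2.0                          # observer gains
    i, w = 0.0, 0.0                            # true states (current, speed)
    ih, wh = 1.0, -1.0                         # observer starts with wrong guess
    for _ in range(steps):
        y = i                                  # only the current is measured
        # plant dynamics (note the nonlinear product term k*w*i)
        di = -a * i + u - k * w * i
        dw = c * i - d * w
        # the observer copies the dynamics, using its own estimates in
        # the nonlinear term, plus output-error correction l*(y - ih)
        dih = -a * ih + u - k * wh * ih + l1 * (y - ih)
        dwh = c * ih - d * wh + l2 * (y - ih)
        i, w = i + dt * di, w + dt * dw
        ih, wh = ih + dt * dih, wh + dt * dwh
    return w, wh

w, wh = simulate()   # the speed estimate wh converges to the true w
```

The correction gains l1, l2 set the rate of convergence of the estimation error, which is the trade-off the abstract refers to when discussing stability and convergence.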
Abstract: Equilibrium and stability equations of a thin rectangular plate with length a, width b, and thickness h(x)=C1x+C2, made of functionally graded materials under thermal loads, are derived based on the first-order shear deformation theory. It is assumed that the material properties vary as a power form of the thickness coordinate variable z. The derived equilibrium and buckling equations are then solved analytically for a plate with simply supported boundary conditions. Two types of thermal loading, uniform temperature rise and a temperature gradient through the thickness, are considered, and the buckling temperatures are derived. The influences of the plate aspect ratio, the relative thickness, the gradient index and the transverse shear on the buckling temperature difference are all discussed.
Abstract: An experiment was conducted with 80 unsexed
broilers of the Arbor Acres strain to determine the capability of a
carrot and fruit juice wastes mixture (carrot, apple, mango, avocado,
orange, melon and Dutch eggplant) in the same proportion for
replacing corn in broiler diet. This study involved a completely
randomized design (CRD) with 5 treatments (0, 5, 10, 15, and 20% of
juice wastes mixture in diets) and 4 replicates per treatment. Diets
were isonitrogenous (22% crude protein) and isocaloric (3000 kcal/kg
diet). Measured variables were feed consumption, average daily
gain, feed conversion, as well as percentages of abdominal fat pad,
carcass, digestive organs (liver, pancreas and gizzard), and heart.
Data were analyzed by analysis of variance for CRD. Increasing
juice wastes mixture levels in diets increased feed consumption
(P
Abstract: Multi-loop (de-centralized) Proportional-Integral-
Derivative (PID) controllers have been used extensively in the
process industries due to their simple structure for the control of
multivariable processes. The objective of this work is to design a
multiple-model adaptive multi-loop PID strategy (Multiple Model
Adaptive-PID) and a neural-network-based multi-loop PID strategy
(Neural Net Adaptive-PID) for the control of multivariable systems.
The first method combines the outputs of multiple linear PID
controllers, each describing the process dynamics at a specific level
of operation. The global output is an interpolation of the individual
multi-loop PID controller outputs, weighted based on the current
value of the measured process variable. In the second method, a
neural network is used to calculate the PID controller parameters
based on the scheduling variable that corresponds to a major shift
in the process dynamics. The proposed control schemes are simple in
structure with low computational complexity. The effectiveness of the
proposed control schemes has been demonstrated on the CSTR process,
which exhibits dynamic non-linearity.
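As a minimal illustration of the first method, the sketch below blends the outputs of two local PID controllers using weights that depend on the current measured process variable. The gains, operating levels and triangular weighting are hypothetical example choices, not the paper's tuning.

```python
# Sketch of the multiple-model idea: two PID controllers tuned at
# different operating levels, blended by weights that depend on the
# measured process variable. All gains and levels are hypothetical.

class PID:
    def __init__(self, kp, ki, kd, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def output(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# local controllers tuned at operating levels pv = 1.0 and pv = 3.0
levels = [1.0, 3.0]
ctrls = [PID(2.0, 0.5, 0.1), PID(4.0, 1.0, 0.2)]

def weights(pv):
    # triangular interpolation between the two operating levels
    lo, hi = levels
    w = min(max((pv - lo) / (hi - lo), 0.0), 1.0)
    return [1.0 - w, w]

def global_output(setpoint, pv):
    err = setpoint - pv
    w = weights(pv)
    outs = [c.output(err) for c in ctrls]
    # global output: interpolation of the local controller outputs
    return sum(wi * oi for wi, oi in zip(w, outs))

u = global_output(setpoint=2.5, pv=2.0)   # halfway between the two models
```

Midway between the two operating levels each local controller contributes half of the global output; near either level, the controller tuned there dominates.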
Abstract: Carbon fibers are fabricated from different materials,
such as special polyacrylonitrile (PAN) fibers, rayon fibers and pitch.
Among these three groups of materials, PAN fibers are the most
widely used precursor for the manufacture of carbon fibers. The
process of fabricating carbon fibers from special PAN fibers includes
two steps: oxidative stabilization at low temperature and
carbonization at high temperatures in an inert atmosphere. Due to the
high price of raw materials (special PAN fibers), carbon fibers are
still expensive.
In the present work the main goal is to make carbon fibers from
low-price commercial PAN fibers with modified chemical
compositions. The results show that, provided the stabilization
process is carried out to completion, it is possible to produce
carbon fibers with the desired tensile strength from this type of
PAN fiber. To this end, the thermal characteristics of commercial
PAN fibers were investigated and, based upon the obtained results,
the conditions for complete stabilization were achieved with some
changes to the conventional stabilization procedure in terms of
temperature and time.
Abstract: The tree-structured approach to non-uniform filterbanks
(NUFB) is normally used for perfect reconstruction (PR). PR is not
always feasible due to certain limitations, i.e., constraints in
selecting design parameters and design complexity, and the output
is sometimes severely affected by aliasing error if the necessary
and sufficient conditions of PR are not satisfied perfectly.
Therefore, there has been general interest among researchers in near
perfect reconstruction (NPR). In this proposed work, an optimized
tree structure technique is used for the design of an NPR
non-uniform filterbank. Window functions of the Blackman family are
used to design the prototype FIR filter. A single-variable linear
optimization is used to minimize the amplitude distortion. The main
feature of the proposed design is its simplicity together with the
linear phase property.
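A windowed-sinc prototype of this kind can be sketched as follows; the filter length and cutoff are hypothetical example values, not the paper's design. The linear phase property follows from the symmetry of the coefficients, which the symmetric Blackman window preserves.

```python
import math

# Sketch: prototype linear-phase FIR lowpass designed with a Blackman
# window. Filter length and cutoff are hypothetical example values.

def blackman(N):
    # Blackman window: w[n] = 0.42 - 0.5 cos(2*pi*n/(N-1)) + 0.08 cos(4*pi*n/(N-1))
    return [0.42 - 0.5 * math.cos(2 * math.pi * n / (N - 1))
            + 0.08 * math.cos(4 * math.pi * n / (N - 1)) for n in range(N)]

def fir_lowpass(N, wc):
    # windowed ideal-lowpass impulse response, cutoff wc in rad/sample
    M = (N - 1) / 2.0
    h = []
    for n, w in enumerate(blackman(N)):
        if n == M:
            h.append(w * wc / math.pi)          # sinc limit at the centre tap
        else:
            h.append(w * math.sin(wc * (n - M)) / (math.pi * (n - M)))
    s = sum(h)
    return [v / s for v in h]                   # normalise DC gain to 1

h = fir_lowpass(31, math.pi / 4)
```

Because h[n] = h[N-1-n], the filter has exactly linear phase; sweeping a single design variable (for example the cutoff wc) is then enough to minimize amplitude distortion of the overall bank.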
Abstract: Designing modern machine tools is a complex task. A
simulation tool to aid the design work, a virtual machine, has
therefore been developed in earlier work. The virtual machine
considers the interaction between the mechanics of the machine
(including structural flexibility) and the control system. This paper
exemplifies the usefulness of the virtual machine as a tool for product
development. An optimisation study is conducted aiming at
improving the existing design of a machine tool regarding weight and
manufacturing accuracy at maintained manufacturing speed. The
problem can be categorised as constrained multidisciplinary multiobjective
multivariable optimisation. Parameters of the control and
geometric quantities of the machine are used as design variables. This
results in a mix of continuous and discrete variables and an
optimisation approach using a genetic algorithm is therefore
deployed. The accuracy objective is evaluated according to
international standards. The complete systems model shows nondeterministic
behaviour. A strategy to handle this based on statistical
analysis is suggested. The weight of the main moving parts is reduced
by more than 30 per cent and the manufacturing accuracy is
improved by more than 60 per cent compared to the original
design, with no reduction in manufacturing speed. It is also shown
that interaction effects exist between the mechanics and the control,
i.e. this improvement would most likely not have been possible with a
conventional sequential design approach within the same time, cost
and general resource frame. This indicates the potential of the
virtual machine concept for contributing to improved efficiency of
both complex products and the development process for such products.
Companies incorporating such advanced simulation tools in their
product development could thus improve their own competitiveness as
well as contribute to improved resource efficiency of society at large.
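The mixed continuous/discrete search the abstract describes can be sketched with a small genetic algorithm. The objective function below is a hypothetical stand-in for the virtual-machine evaluation (which in the paper couples mechanics and control simulation); the genes, population size and operators are likewise illustrative assumptions.

```python
import random

# Sketch of a genetic algorithm over a mixed design vector: one
# continuous gene x and one discrete gene d. The quadratic objective
# is a hypothetical stand-in for the virtual-machine evaluation.

random.seed(0)

def fitness(ind):
    x, d = ind
    return (x - 2.0) ** 2 + (d - 1) ** 2      # minimise; optimum x=2, d=1

def random_ind():
    return [random.uniform(0.0, 5.0), random.choice([0, 1, 2])]

def mutate(ind):
    x, d = ind
    x = min(max(x + random.gauss(0.0, 0.3), 0.0), 5.0)  # continuous gene
    if random.random() < 0.2:                 # occasional discrete jump
        d = random.choice([0, 1, 2])
    return [x, d]

pop = [random_ind() for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness)
    survivors = pop[:10]                      # elitist truncation selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(20)]
best = min(pop, key=fitness)
```

Because mutation treats the continuous and discrete genes differently, no relaxation of the discrete variables is needed, which is the usual argument for a GA on this class of problem.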
Abstract: Calcium is very important for communication among
the neurons. It is vital in a number of cell processes such as secretion,
cell movement and cell differentiation. To reduce the system of
reaction-diffusion equations of [Ca2+] into a single equation, two
theories have been proposed: the excess buffer approximation (EBA)
and the rapid buffer approximation (RBA). The RBA is more realistic
than the EBA as it considers both the mobile and stationary
endogenous buffers, and it is valid near the mouth of the channel. In
this work we have studied the effects of different types of buffers
on calcium diffusion under the RBA. The novel aspect studied is the
effect of sodium ions on calcium diffusion. The model has been made
realistic by considering factors such as variable [Ca2+] and [Na+]
sources, the sodium-calcium exchange protein (NCX) and the
sarcolemmal calcium ATPase pump. The proposed mathematical model
leads to a system of partial differential equations which has been
solved numerically to study the relationships between different
parameters such as buffer concentration, buffer dissociation rate
and calcium permeability. We have used the Forward Time Centred
Space (FTCS) approach to solve the system of partial differential
equations.
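The FTCS update for a single 1-D diffusion equation, the building block of such a system, can be sketched as below. The grid sizes, diffusion coefficient and initial point release are hypothetical; the scheme is only stable for r = D*dt/dx^2 <= 1/2.

```python
# Sketch of the Forward Time Centred Space (FTCS) scheme for the 1-D
# diffusion equation u_t = D u_xx. Grid and D are hypothetical values;
# the explicit scheme requires r = D*dt/dx^2 <= 0.5 for stability.

def ftcs(u, D, dx, dt, steps):
    r = D * dt / dx ** 2
    assert r <= 0.5, "FTCS stability condition violated"
    u = list(u)
    for _ in range(steps):
        nxt = u[:]                       # Dirichlet ends: keep u[0], u[-1]
        for i in range(1, len(u) - 1):
            # centred second difference in space, forward step in time
            nxt[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        u = nxt
    return u

u0 = [0.0] * 21
u0[10] = 1.0                             # initial point release of Ca2+
u = ftcs(u0, D=1.0, dx=0.1, dt=0.004, steps=100)
```

With r <= 1/2 every update coefficient is non-negative, so concentrations stay non-negative and the initial spike spreads and decays as expected.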
Abstract: The impact of fixed speed squirrel cage type as well as
variable speed doubly fed induction generators (DFIG) on dynamic
performance of a multimachine power system has been investigated.
Detailed models of the various components have been presented and
the integration of asynchronous and synchronous generators has been
carried out through a rotor angle based transform. Simulation studies
carried out considering the conventional dynamic model of squirrel
cage asynchronous generators show that such integration could
transiently degrade the AC system performance. This article
proposes a frequency or power controller which can effectively
control the transients and quickly restore normal operation of the
fixed speed induction generator. Comparison of simulation results
between classical cage and doubly-fed induction generators indicates
that the doubly fed induction machine is more adaptable to a
multimachine AC system. A frequency controller installed in the DFIG
system can also improve its transient profile.
Abstract: The purpose of this study was to explore the
relationship between knowledge sharing and innovation capability,
by examining the influence of individual, organizational and
technological factors on knowledge sharing. The research is based
on a survey of 103 employees from different organizations in the
United Arab Emirates. The study is based on a model and a
questionnaire that were previously tested by Lin [1]. Thus, the study
aims at examining the validity of that model in the UAE context. The
results of the research show varying degrees of correlation between
the different variables, with ICT use having the strongest relationship
with the innovation capabilities of organizations. The study also
revealed little evidence of knowledge collecting and knowledge
sharing among UAE employees.
Abstract: Identifying and classifying intersections according to
severity is very important for implementation of safety related
counter measures and effective models are needed to compare and
assess the severity. Highway safety organizations have considered
intersection safety among their priorities. In spite of significant
advances in highway safety, large numbers of severe crashes still
occur on highways. Investigation of influential
factors on crashes enables engineers to carry out calculations in order
to reduce crash severity. Previous studies lacked a model capable of
simultaneous illustration of the influence of human factors, road,
vehicle, weather conditions and traffic features including traffic
volume and flow speed on the crash severity. Thus, this paper is
aimed at developing the models to illustrate the simultaneous
influence of these variables on the crash severity in urban highways.
The models represented in this study have been developed using
binary Logit Models. SPSS software has been used to calibrate the
models. It must be mentioned that backward regression method in
SPSS was used to identify the significant variables in the model.
From the obtained results it can be concluded that the main
factors increasing crash severity in urban highways are driver
age, movement in reverse gear, technical defects of the vehicle,
collisions with motorcycles, bicycles or bridges, frontal impact
collisions, frontal-lateral collisions and multi-vehicle crashes.
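A binary logit model of the kind calibrated here can be sketched with gradient descent on the negative log-likelihood. The single predictor and labels below are hypothetical illustrative data, not the paper's crash records, and the fit is a plain stand-in for SPSS's backward-regression calibration.

```python
import math

# Minimal sketch of fitting a binary logit model by gradient descent.
# The predictor values and labels are hypothetical, not crash data.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

xs = [0.0, 1.0, 2.0, 3.0]      # a hypothetical severity-related predictor
ys = [0, 0, 1, 1]              # 1 = severe crash, 0 = not severe

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, y in zip(xs, ys):
        p = sigmoid(w * x + b)
        gw += (p - y) * x      # gradient of the negative log-likelihood
        gb += (p - y)
    w -= lr * gw / len(xs)
    b -= lr * gb / len(xs)

prob_severe = sigmoid(w * 3.0 + b)   # predicted severity probability
```

The fitted coefficient w plays the role of the significant variables retained by backward elimination: its sign and magnitude determine how strongly the predictor shifts the odds of a severe crash.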
Abstract: New Growth Theory helps us make sense of the
ongoing shift from a resource-based economy to a knowledge-based
economy. It underscores the point that the economic processes which
create and diffuse new knowledge are critical to shaping the growth
of nations, communities and individual firms. In all too many
contributions to New (Endogenous) Growth Theory – though not in
all – central reference is made to 'a stock of knowledge', a 'stock of
ideas', etc., this variable featuring centre-stage in the analysis. Yet it
is immediately apparent that this is far from being a crystal clear
concept. The difficulty and uncertainty of being able to capture the
value associated with knowledge is a real problem. The intent of this
paper is to introduce new thinking and theorizing about knowledge
and its measurability in New Growth Theory. Moreover, the study
aims to synthesize various strands of the literature with a
practical bearing on the knowledge concept. Through the
institutional framework found within NGT, we can indirectly
measure the knowledge concept. Institutions matter because they
shape the environment for the production and employment of new
knowledge.
Abstract: The third generation (3G) of cellular systems adopted
spread spectrum as the solution for data transmission in the
physical layer. Contrary to IS-95 or CDMAOne (spread-spectrum
systems of the preceding generation), the new standard, called the
Universal Mobile Telecommunications System (UMTS), uses long codes
in the downlink. The system is designed for voice communication and
data transmission. The downlink is particularly important because
of the asymmetry of data traffic, i.e., more downloading towards
the mobiles than uploading towards the base station. Moreover, for
the downlink the UMTS uses orthogonal spreading with a variable
spreading factor (OVSF, for Orthogonal Variable Spreading Factor).
This characteristic makes it possible to increase the data rate of
one or more users by reducing their spreading factor without
changing the spreading factors of the other users. In the current
UMTS standard, two techniques have been proposed to increase
downlink performance: transmit antenna diversity and space-time
codes. These two techniques combat only fading. The receiver
proposed for the mobile station is the RAKE, but one can imagine a
more sophisticated receiver, able to reduce the interference
between users and the impact of coloured noise and narrowband
interference. In this context, where the users have synchronized
long codes with variable spreading factors and the mobile is
unaware of the other active codes/users, the use of pseudo-noise
code sequences of different lengths is presented as one of the most
appropriate solutions.
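The OVSF code tree mentioned above is easy to generate: each code c at spreading factor SF spawns two children, [c, c] and [c, -c], at factor 2*SF. The sketch below builds one layer of the tree; the chosen spreading factor of 8 is just an example.

```python
# Sketch of OVSF code generation: each code c at spreading factor SF
# spawns the children [c, c] and [c, -c] at 2*SF, so all codes at the
# same SF are mutually orthogonal.

def ovsf_codes(sf):
    codes = [[1]]
    while len(codes[0]) < sf:
        nxt = []
        for c in codes:
            nxt.append(c + c)                    # child [c, c]
            nxt.append(c + [-v for v in c])      # child [c, -c]
        codes = nxt
    return codes

codes = ovsf_codes(8)   # 8 orthogonal codes of spreading factor 8
```

A user granted a higher data rate simply moves up the tree to a shorter code (smaller SF); orthogonality with the remaining users is preserved as long as none of them uses a descendant of that code.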
Abstract: Quality control charts are very effective in detecting
out of control signals but when a control chart signals an out of
control condition of the process mean, searching for a special cause
in the vicinity of the signal time would not always lead to prompt
identification of the source(s) of the out of control condition as the
change point in the process parameter(s) is usually different from the
signal time. It is very important for the manufacturer to determine
at what point in the past the change occurred and which parameters
caused the signal. Early warning of process change would expedite
the search for the special causes and enhance quality at lower cost.
In this paper the quality variables under investigation are assumed
to follow a multivariate normal distribution with known means and
variance-covariance matrix; the process means after a one-step
change remain at the new level until the special cause is identified
and removed, and it is also assumed that only one variable can
change at a time. This research applies an artificial neural network
(ANN) to identify the time the change occurred and the parameter
which caused the change or shift. The performance of the approach
was assessed through a computer simulation experiment. The results
show that the neural network performs effectively and equally well
across the whole range of shift magnitudes considered.
Abstract: Overcurrent (OC) relays are the major protection
devices in a distribution system. The operating times of the OC relays
are to be coordinated properly to avoid the mal-operation of the
backup relays. The OC relay time coordination in ring fed
distribution networks is a highly constrained optimization problem
which can be stated as a linear programming problem (LPP). The
purpose is to find an optimum relay setting to minimize the time of
operation of relays and at the same time, to keep the relays properly
coordinated to avoid the mal-operation of relays.
This paper presents a two-phase simplex method for optimum time
coordination of OC relays. The method is based on the simplex
algorithm which is used to find optimum solution of LPP. The
method introduces artificial variables to get an initial basic feasible
solution (IBFS). Artificial variables are removed using iterative
process of first phase which minimizes the auxiliary objective
function. The second phase minimizes the original objective function
and gives the optimum time coordination of OC relays.
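The two-phase mechanism can be sketched on a toy coordination LP. The implementation below is a generic minimal two-phase simplex (dense tableau, minimum-ratio pivoting), not the paper's code, and the relay numbers (0.2 s pickup times, 0.3 s grading margin) are hypothetical illustrative values.

```python
# Minimal two-phase simplex for min c.x subject to Ax = b, x >= 0
# (with b >= 0), illustrating the artificial-variable mechanism.
# The relay-coordination numbers below are hypothetical.

def _pivot(T, basis, r, s):
    p = T[r][s]
    T[r] = [v / p for v in T[r]]
    for i in range(len(T)):
        if i != r and abs(T[i][s]) > 1e-12:
            f = T[i][s]
            T[i] = [a - f * b for a, b in zip(T[i], T[r])]
    basis[r] = s

def _iterate(T, basis, ncols):
    m = len(T) - 1                       # last row holds reduced costs
    while True:
        s = next((j for j in range(ncols) if T[m][j] < -1e-9), None)
        if s is None:
            return                       # no improving column: optimal
        ratios = [(T[i][-1] / T[i][s], i) for i in range(m) if T[i][s] > 1e-9]
        if not ratios:
            raise ValueError("unbounded")
        _, r = min(ratios)               # minimum-ratio test
        _pivot(T, basis, r, s)

def two_phase_simplex(c, A, b):
    m, n = len(A), len(c)
    # Phase 1: one artificial variable per row; minimise their sum to
    # obtain an initial basic feasible solution (IBFS).
    T = [list(A[i]) + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-sum(A[i][j] for i in range(m)) for j in range(n)]
             + [0.0] * m + [-sum(b)])
    basis = list(range(n, n + m))        # artificials start in the basis
    _iterate(T, basis, n + m)
    if abs(T[m][-1]) > 1e-7:
        raise ValueError("problem is infeasible")
    # Phase 2: reduced costs of the true objective (this sketch assumes
    # no artificial variable is left in the basis).
    T[m] = ([c[j] - sum(c[basis[i]] * T[i][j] for i in range(m))
             for j in range(n)] + [0.0] * m
            + [-sum(c[basis[i]] * T[i][-1] for i in range(m))])
    _iterate(T, basis, n)                # artificials can no longer enter
    x = [0.0] * n
    for i in range(m):
        if basis[i] < n:
            x[basis[i]] = T[i][-1]
    return x

# Toy coordination problem: minimise t1 + t2 with t1 >= 0.2 s,
# t2 >= 0.2 s and grading margin t2 - t1 >= 0.3 s (surplus vars s1..s3).
c = [1.0, 1.0, 0.0, 0.0, 0.0]
A = [[1, 0, -1, 0, 0], [0, 1, 0, -1, 0], [-1, 1, 0, 0, -1]]
b = [0.2, 0.2, 0.3]
x = two_phase_simplex(c, A, b)           # t1 = 0.2, t2 = 0.5
```

Phase 1 drives the auxiliary objective (the sum of the artificials) to zero, yielding a feasible relay setting; phase 2 then minimises the total operating time while the grading margin keeps the backup relay coordinated.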
Abstract: The importance of software quality is increasing, leading to the development of new sophisticated techniques which can be used in constructing models for predicting quality attributes. One such technique is the Artificial Neural Network (ANN). This paper examined the application of ANN for software quality prediction using Object-Oriented (OO) metrics. Quality estimation includes estimating the maintainability of software. The dependent variable in our study was maintenance effort. The independent variables were principal components of eight OO metrics. The results showed that the Mean Absolute Relative Error (MARE) of the ANN model was 0.265. Thus we found that the ANN method was useful in constructing software quality models.
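For reference, MARE is computed as the mean of the absolute prediction errors relative to the actual values. The effort figures below are hypothetical illustrative numbers, not the study's data.

```python
# MARE (mean absolute relative error), the accuracy measure reported
# for the ANN model. Values below are hypothetical illustrative data.

def mare(actual, predicted):
    return sum(abs(p - a) / a for a, p in zip(actual, predicted)) / len(actual)

effort_actual = [10.0, 20.0, 40.0]     # hypothetical maintenance effort
effort_pred = [12.0, 18.0, 30.0]
m = mare(effort_actual, effort_pred)   # (0.2 + 0.1 + 0.25) / 3
```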
Abstract: The purpose of this study was to explore the
relationship between Burnout, Negative Affectivity, and
Organizational Citizenship Behavior (OCB) for social service
workers at two agencies serving homeless populations. Thirty two
subjects completed surveys. Significant correlations between major
variables and subscales were found.
Abstract: Random Forests are a powerful classification technique, consisting of a collection of decision trees. One useful feature of Random Forests is the ability to determine the importance of each variable in predicting the outcome. This is done by permuting each variable and computing the change in prediction accuracy before and after the permutation. This variable importance calculation is similar to a one-factor-at-a-time experiment and is therefore inefficient. In this paper, we use a regular fractional factorial design to determine which variables to permute. Based on the results of the trials in the experiment, we calculate the individual importance of the variables, with improved precision over the standard method. The method is illustrated with a study of student attrition at Monash University.
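The standard one-factor-at-a-time permutation importance the paper improves on can be sketched as below. The data and the toy "model" are hypothetical, and for reproducibility the sketch uses a deterministic cyclic shift in place of a random shuffle.

```python
# Sketch of standard one-factor-at-a-time permutation importance.
# Data and model are hypothetical; a cyclic shift stands in for a
# random permutation so the result is deterministic.

def model(row):
    return row[0]                 # toy classifier: predict y from x0 alone

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, col):
    base = accuracy(X, y)
    shifted = [row[:] for row in X]
    vals = [row[col] for row in X]
    vals = vals[-1:] + vals[:-1]  # deterministic stand-in for a shuffle
    for row, v in zip(shifted, vals):
        row[col] = v
    return base - accuracy(shifted, y)   # accuracy drop = importance

y = [0] * 10 + [1] * 10
X = [[t, i % 2] for i, t in zip(range(20), y)]
imp0 = permutation_importance(X, y, 0)   # informative feature
imp1 = permutation_importance(X, y, 1)   # irrelevant feature
```

Each call probes one variable in isolation, which is exactly the one-factor-at-a-time structure the fractional factorial design replaces by permuting chosen subsets of variables per trial.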
Abstract: Web usage mining algorithms have been widely
utilized for modeling user web navigation behavior. In this study we
advance a model for mining users' navigation patterns. The model
builds a user model based on the expectation-maximization (EM)
algorithm. An EM algorithm is used in statistics for finding maximum
likelihood estimates of parameters in probabilistic models, where the
model depends on unobserved latent variables. The experimental
results show that as the number of clusters decreases, the log
likelihood converges toward lower values, and the probability of the
largest cluster decreases as the number of clusters increases in
each treatment.
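The E- and M-steps of such a mixture model can be sketched on a 1-D two-component Gaussian mixture with fixed unit variance, a simplification of the clustering used for navigation patterns. The data points and initial means are hypothetical.

```python
import math

# Sketch of EM for a two-component 1-D Gaussian mixture with fixed
# unit variance. Data and initial means are hypothetical.

def em(data, mu, iters=25):
    w = [0.5, 0.5]                               # mixing proportions
    for _ in range(iters):
        # E-step: responsibility of each cluster for each point
        resp = []
        for x in data:
            p = [w[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate means and mixing proportions
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            w[k] = nk / len(data)
    return mu, w

data = [0.0, 0.2, -0.1, 0.1, 4.9, 5.1, 5.0, 5.2]
mu, w = em(data, mu=[1.0, 4.0])   # means move to the two cluster centres
```

Each EM iteration is guaranteed not to decrease the log likelihood, which is the quantity whose convergence behaviour the experiments above track as the number of clusters is varied.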