Abstract: Well-being has been given special emphasis in quality of life
research. It involves living a meaningful life, life satisfaction,
stability, and happiness. Well-being also concerns the satisfaction of
the physical, psychological, and social needs and demands of an
individual. The purpose of this study was to validate a three-factor
measurement model of well-being using structural equation modeling
(SEM). The conception of well-being comprised the dimensions of
physical, psychological, and social well-being. The study was based on
a total sample of 650 adolescents from the east coast of peninsular
Malaysia. The Well-Being Scales adapted from [1] were used. The items
were hypothesized a priori to have non-zero loadings on all dimensions
in the model. The SEM findings demonstrated a good-fitting model,
indicating that the proposed model fits the underlying theory
(χ²/df = 1.268; GFI = .994; CFI = .998; TLI = .996; p = .255;
RMSEA = .021). Composite reliability (CR) was .93 and average variance
extracted (AVE) was 58%. The model fits the sample data, and
well-being is important for bringing sustainable development into the
mainstream.
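The CR and AVE statistics reported above are simple functions of the standardized factor loadings. The sketch below shows the usual formulas on a set of hypothetical loadings (not the study's actual estimates):

```python
# Composite reliability (CR) and average variance extracted (AVE)
# computed from standardized factor loadings -- a minimal sketch with
# hypothetical loadings, not the study's actual estimates.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    sum_l = sum(loadings)
    sum_err = sum(1 - l ** 2 for l in loadings)  # error variance of a standardized item
    return sum_l ** 2 / (sum_l ** 2 + sum_err)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.78, 0.81, 0.74, 0.76, 0.72]  # hypothetical standardized loadings
print(round(composite_reliability(loadings), 2))      # 0.87
print(round(average_variance_extracted(loadings), 2))  # 0.58
```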
Abstract: The vertex connectivity of a graph is the smallest number of vertices whose deletion disconnects the graph or makes it trivial. This work is devoted to testing the vertex connectivity of graphs in a distributed environment, based on a general and constructive approach. The contribution of this paper is threefold. First, using a pre-constructed spanning tree of the considered graph, we present a protocol to test whether a given graph is 2-connected using only local knowledge. Second, we present an encoding of this protocol using graph relabeling systems. The last contribution is the implementation of this protocol in the message-passing model. For a given graph G, where M is the number of its edges, N the number of its nodes, and Δ its degree, our algorithms have the following requirements: the first uses O(Δ×N²) steps and O(Δ×logΔ) bits per node; the second uses O(Δ×N²) messages, O(N²) time, and O(Δ×logΔ) bits per node. Furthermore, the studied network is semi-anonymous: only the root of the pre-constructed spanning tree needs to be identified.
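The property the protocol tests can be stated centrally: a graph is 2-(vertex-)connected iff it has at least three vertices, is connected, and has no articulation point. The distributed, spanning-tree-based protocol itself is not reproduced here; the following is only a centralized DFS sketch of that underlying test:

```python
# A graph is 2-(vertex-)connected iff it has at least 3 vertices, is
# connected, and has no articulation point. Centralized DFS sketch of
# the property the distributed protocol tests (the protocol itself runs
# on a spanning tree with local knowledge only).

def is_biconnected(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    disc = [0] * n   # discovery times (0 = unvisited)
    low = [0] * n    # low-link values
    timer = [1]
    art = [False]    # articulation point found?

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if disc[v]:                      # back edge
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent != -1 and low[v] >= disc[u]:
                    art[0] = True            # u separates v's subtree
        if parent == -1 and children > 1:
            art[0] = True                    # root with several subtrees

    dfs(0, -1)
    return n >= 3 and all(disc) and not art[0]

print(is_biconnected(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # cycle: True
print(is_biconnected(3, [(0, 1), (1, 2)]))                  # path: False
```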
Abstract: The objective of this project was to produce computer-assisted
instruction (CAI) for welding and brazing, and to determine the
efficiency of the instruction package and the learning achievement of
students who studied through it. The target group was 30 second-year
students in the 5-year academic program of the Department of Production
Technology Education, Faculty of Industrial Education and Technology,
King Mongkut's University of Technology Thonburi. The results indicated
that, as evaluated by media and subject-matter experts, the quality of
the computer-assisted instruction for welding and brazing met the good
criterion. The mean scores before, during, and after the study were
34.58, 83.33, and 83.43, respectively. The efficiency of the lesson was
83.33/83.43, higher than the expected value of 80/80. The learning
achievement of students who used the computer-assisted instruction for
welding and brazing as a medium was significantly higher at the 95%
confidence level (t = 35.36, exceeding the critical value of 1.669). It
can be concluded that the computer-assisted instruction for welding and
brazing is an efficient medium for learning and teaching.
Abstract: This paper presents a Simulated Annealing based approach to
estimating solar cell model parameters. A single-diode solar cell
model is used to validate the outcomes of the proposed approach. The
developed technique estimates the model parameters, such as the
generated photocurrent, saturation current, series resistance, shunt
resistance, and ideality factor, that govern the current-voltage
relationship of a solar cell. A practical case study is used to test
and verify that the various parameters of the single-diode solar cell
model are estimated accurately and consistently. A comparative study
among different parameter estimation techniques is presented to show
the effectiveness of the developed approach.
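The idea can be sketched on a deliberately simplified single-diode model, I = Iph − Is·(exp(V/(n·Vt)) − 1), neglecting the series and shunt resistances for brevity. All numbers below are synthetic and illustrative, not the paper's practical case study:

```python
import math
import random

# Simulated Annealing fit of a simplified single-diode model
# I = Iph - Is*(exp(V/(n*Vt)) - 1), neglecting Rs and Rsh for brevity.
# Synthetic data with known parameters; all values are illustrative.

random.seed(42)
Vt, Iph = 0.0259, 5.0
true_n, true_Is = 1.3, 1e-9
volts = [0.1 * k for k in range(7)]  # 0 .. 0.6 V
meas = [Iph - true_Is * (math.exp(v / (true_n * Vt)) - 1) for v in volts]

def sse(params):
    """Sum of squared errors between model and measured currents."""
    n, log_Is = params
    return sum((Iph - math.exp(log_Is) * (math.exp(v / (n * Vt)) - 1) - i) ** 2
               for v, i in zip(volts, meas))

state = [1.0, math.log(1e-8)]  # initial guess: n, log(Is)
best, best_cost = state[:], sse(state)
temp = 1.0
for step in range(20000):
    cand = [state[0] + random.gauss(0, 0.02), state[1] + random.gauss(0, 0.1)]
    if not 0.5 < cand[0] < 2.5:   # keep the ideality factor physical
        continue
    d = sse(cand) - sse(state)
    if d < 0 or random.random() < math.exp(-d / temp):  # Metropolis acceptance
        state = cand
        if sse(state) < best_cost:
            best, best_cost = state[:], sse(state)
    temp *= 0.9995  # geometric cooling schedule

print(round(best[0], 2))  # recovered ideality factor
```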
Abstract: This evaluation of land supply system performance in China
examines the combination of government functions and national goals in
order to perform a cost-benefit analysis of system results. From the
author's point of view, it is most productive to evaluate land supply
system performance at moments of system transformation, for the
following reasons: when the system or policy changes, the behavioral
and input-output changes in beneficial results at different times can
be observed, and system performance can be evaluated through a
cost-benefit analysis during the process of system transformation.
Moreover, this evaluation method can avoid the influence of land
resource endowment, since different land resource endowments and
different economic development periods result in different systems.
This essay studies the contents, principles, and methods of land
supply system performance evaluation. Taking Beijing as an example, it
optimizes and classifies the land supply index, makes a quantitative
evaluation of land supply system performance through principal
component analysis (PCA), and finally analyzes the factors that
influence land supply system performance at times of system
transformation.
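PCA reduces correlated indicators to a few uncorrelated components ranked by explained variance. The sketch below works the two-indicator case by hand, where the covariance eigenvalues follow from the quadratic formula; the data are hypothetical, not the Beijing land-supply index:

```python
import math

# Principal component analysis on two hypothetical indicators, reduced
# to the 2x2 case so the eigen-decomposition can be done by hand.
# The data are illustrative, not the Beijing land-supply index itself.

def pca_2d(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # sample covariance matrix entries
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # eigenvalues of [[sxx, sxy], [sxy, syy]] via the quadratic formula
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    root = math.sqrt(tr ** 2 / 4 - det)
    l1, l2 = tr / 2 + root, tr / 2 - root
    explained = l1 / (l1 + l2)  # variance share of the first component
    return l1, l2, explained

l1, l2, share = pca_2d([(1, 1), (2, 2.1), (3, 2.9), (4, 4.0)])
print(round(share, 3))  # first component captures almost all variance
```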
Abstract: A number of automated shot-change detection methods for
indexing a video sequence to facilitate browsing and retrieval have
been proposed in recent years. This paper focuses on simulating video
shot boundary detection using the color histogram method, with scaling
of the histogram metrics as an added feature. The difference between
the histograms of two consecutive frames is evaluated, resulting in
the metrics. The metrics are then scaled to avoid ambiguity and to
enable the choice of an apt threshold for any type of video, which
reduces minor errors due to flashlights, camera motion, etc. Two
sample videos with a resolution of 352 × 240 pixels are used here with
the color histogram approach on uncompressed media. An attempt is also
made at the retrieval of color video. The simulation is performed for
abrupt changes in video and yields 90% recall and precision.
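The core of the method above can be sketched as an L1 distance between consecutive frame histograms, scaled by frame size so one threshold works across resolutions. The frames here are tiny synthetic grayscale arrays standing in for real video frames:

```python
# Shot-boundary detection by color-histogram difference: the L1
# distance between consecutive frame histograms, scaled by frame size
# so a single threshold works across resolutions. Frames are tiny
# synthetic grayscale arrays standing in for real video frames.

def histogram(frame, bins=4, levels=256):
    h = [0] * bins
    for px in frame:
        h[px * bins // levels] += 1
    return h

def scaled_difference(f1, f2, bins=4):
    h1, h2 = histogram(f1, bins), histogram(f2, bins)
    raw = sum(abs(a - b) for a, b in zip(h1, h2))
    return raw / (2 * len(f1))  # scale to [0, 1]: 0 = identical, 1 = disjoint

dark   = [10] * 64   # one "shot": uniformly dark frame
dark2  = [12] * 64   # same shot, slight change
bright = [200] * 64  # abrupt cut to a bright frame

print(scaled_difference(dark, dark2))   # 0.0 (same histogram bin)
print(scaled_difference(dark, bright))  # 1.0 (complete change -> boundary)
```

A boundary is declared when the scaled difference exceeds a chosen threshold; the [0, 1] scaling is what makes one threshold usable for any frame size.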
Abstract: The detection of outliers is essential because of their
responsibility for producing serious interpretative problems in linear
as well as nonlinear regression analysis. Much work has been done on
the identification of outliers in linear regression, but not in
nonlinear regression. In this article we propose several outlier
detection techniques for nonlinear regression. The main idea is to use
the linear approximation of a nonlinear model and to take the gradient
as the design matrix. The detection techniques are then formulated.
Six detection measures are developed and combined with three
estimation techniques: the Least-Squares, M-, and MM-estimators. The
study shows that among the six measures, only the studentized residual
and Cook's Distance, when combined with the MM-estimator, are
consistently capable of identifying the correct outliers.
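The two measures singled out above can be illustrated in the simplest setting. In the paper's nonlinear case the design matrix is the model's gradient; a single-predictor linear fit keeps the sketch short, and the data, with one planted outlier, are purely illustrative:

```python
import math

# Studentized residuals and Cook's distance for a simple linear fit.
# In the nonlinear setting the design matrix is the model gradient;
# a single-predictor linear fit keeps this sketch short. The data,
# with one planted outlier, are illustrative.

def cooks_distances(x, y):
    n, p = len(x), 2  # p = number of fitted parameters (intercept, slope)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    b0 = my - b1 * mx
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    s2 = sum(e ** 2 for e in resid) / (n - p)         # residual variance
    lev = [1 / n + (xi - mx) ** 2 / sxx for xi in x]  # leverages (hat values)
    out = []
    for e, h in zip(resid, lev):
        r = e / math.sqrt(s2 * (1 - h))               # internally studentized residual
        out.append(r ** 2 * h / (p * (1 - h)))        # Cook's distance
    return out

x = [1, 2, 3, 4, 5, 6]
y = [1.1, 2.0, 2.9, 4.2, 5.0, 12.0]  # last point is the planted outlier
d = cooks_distances(x, y)
print(d.index(max(d)))  # 5 -> the planted outlier is the most influential
```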
Abstract: Huge losses in apple production are caused by pathogens that cannot be seen shortly after harvest. Post-harvest thermotherapy treatments can considerably improve the control of storage diseases on apples and become an alternative to chemical pesticides. Research in this area was carried out in the years 2010-2012. Apples of the 'Topaz' cultivar were harvested at the optimal maturity time for long storage and subjected to a water bath treatment at 45, 50, 52, and 55°C for 60, 120, 180, and 240 seconds. Untreated fruits served as the control. After 12 and 24 weeks, and during a so-called simulated trade turnover, the condition of the fruits was checked and the causal agents of diseases were determined using standard phytopathological methods. The most common cause of 'Topaz' apple infection during storage was fungi of the genus Gloeosporium. The results show that, for effective protection of 'Topaz' apples against diseases, thermotherapy using water treatments in the temperature range of 50-52°C is quite sufficient.
Abstract: The two significant overvoltages in power systems, switching
overvoltage and lightning overvoltage, are investigated in this paper.
First, the effect of various power system parameters on line
energization overvoltages is evaluated by simulation in ATP. The
dominant parameters include the line parameters, short-circuit
impedance, and circuit breaker parameters. Solutions to reduce
switching overvoltages are reviewed, and controlled closing using
switchsync controllers is proposed as a proper method.
This paper also investigates lightning overvoltages at the
overhead-cable transition. Simulations are performed in PSCAD/EMTDC.
Surge arresters are applied at both ends of the cable to fulfill the
insulation coordination. The maximum amplitude of overvoltages inside
the cable is surveyed, which should be of great concern in insulation
coordination studies.
Abstract: Web applications have become very complex and crucial, especially when combined with areas such as CRM (Customer Relationship Management) and BPR (Business Process Reengineering). The scientific community has therefore focused its attention on Web application design, development, analysis, and testing, studying and proposing methodologies and tools. This paper proposes an approach to automatic multi-dimensional concern mining for Web applications, based on concept analysis, impact analysis, and token-based concern identification. This approach lets the user analyse and traverse Web software relevant to a particular concern (concept, goal, purpose, etc.) via multi-dimensional separation of concerns, in order to document, understand, and test Web applications. The technique was developed in the context of the WAAT (Web Applications Analysis and Testing) project. A semi-automatic tool to support it is currently under development.
Abstract: The purpose of this study is to identify and evaluate the
scale of implementation of Just-In-Time (JIT) in different industrial
sectors in the Middle East. The study analyzes empirical data
collected by a questionnaire survey distributed to companies in three
main industrial sectors in the Middle East: food, chemicals, and
fabrics. The following main hypothesis is formulated and tested: the
requirements of JIT application differ according to the type of
industrial sector. Descriptive statistics and box plot analysis were
used to examine the hypothesis. The study indicates reasonable
evidence for accepting it, and reveals that there is no standard way
to adopt JIT as a production system; rather, each industrial sector
should concentrate its investment on the critical requirements that
differ according to the nature and strategy of production followed in
that sector.
Abstract: We propose a control design scheme that aims to prevent
undesirable liquid outpouring and suppress sloshing during the forward
and backward tilting phases of the pouring process, for the case of
liquid containers carried by manipulators. The proposed scheme
combines a partial inverse dynamics controller with a PID controller,
tuned with the use of a "metaheuristic" search algorithm. The
"metaheuristic" search algorithm tunes the PID controller based on
simulation results for the plant's linearization around the operating
point corresponding to the critical tilting angle, where outpouring
initiates. Liquid motion is modeled using the well-known pendulum-type
model. However, the proposed controller does not require measurements
of the liquid's motion within the tank.
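The PID half of such a scheme can be sketched as a discrete loop on a first-order plant. The partial inverse-dynamics term and the metaheuristic gain search are not reproduced; the plant and the gains below are illustrative stand-ins:

```python
# A discrete PID loop on a first-order plant -- a sketch of the PID
# part of such a scheme only; the partial inverse-dynamics term and
# the metaheuristic gain search are not reproduced. Gains and plant
# are illustrative.

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=1000):
    y, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt                 # integral of the error
        deriv = (err - prev_err) / dt     # finite-difference derivative
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt * (-y + u)                # first-order plant: dy/dt = -y + u
    return y

final = simulate_pid(kp=2.0, ki=1.0, kd=0.1)
print(round(final, 3))  # settles near the setpoint 1.0
```

A metaheuristic tuner would wrap `simulate_pid` in a cost function (e.g. integrated absolute error) and search over (kp, ki, kd).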
Abstract: The frequency contents of non-stationary signals vary with
time. For proper characterization of such signals, a smart
time-frequency representation is necessary. Classically, the STFT
(short-time Fourier transform) is employed for this purpose. Its
limitation is its fixed time-frequency resolution. To overcome this
drawback, an enhanced STFT version is devised. It is based on a
signal-driven sampling scheme named cross-level sampling. It can adapt
the sampling frequency and the window function (length plus shape) by
following the input signal's local variations. This adaptation yields
the proposed technique's appealing features: adaptive time-frequency
resolution and computational efficiency.
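The classical fixed-resolution STFT that serves as the baseline here can be sketched as a windowed DFT; the signal-driven cross-level sampling variant is not reproduced. The signal below is synthetic:

```python
import cmath
import math

# A fixed-resolution STFT frame via a naive windowed DFT -- the
# classical baseline; the signal-driven cross-level sampling variant
# is not reproduced here. The test signal is synthetic.

def stft_frame(signal, start, win_len):
    frame = [signal[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / (win_len - 1)))
             for n in range(win_len)]  # Hann window
    return [abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / win_len)
                    for n in range(win_len)))
            for k in range(win_len // 2)]  # magnitude spectrum, positive bins

fs, win = 64, 64
tone = [math.sin(2 * math.pi * 8 * t / fs) for t in range(2 * fs)]  # 8 Hz tone
spectrum = stft_frame(tone, 0, win)
print(spectrum.index(max(spectrum)))  # 8 -> peak bin matches the 8 Hz tone
```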
Abstract: An adaptive software reliability prediction model using an
evolutionary connectionist approach based on a Recurrent Radial Basis
Function architecture is proposed. Based on the currently available
software failure time data, a Fuzzy Min-Max algorithm is used to
globally optimize the number of k Gaussian nodes. The corresponding
optimized neural network architecture is iteratively and dynamically
reconfigured in real time as new actual failure time data arrive. The
performance of the proposed approach has been tested on sixteen
real-time software failure datasets. Numerical results show that the
proposed approach is robust across different software projects and has
better next-step predictability compared to existing neural network
models for failure time prediction.
Abstract: This paper maps the structure of the social network of the
2011 class of sixty graduate students of the Master of Science
(Knowledge Management) programme at Nanyang Technological University,
based on their friending relationships on Facebook. To ensure
anonymity, actual names were not used; instead, they were replaced
with codes constructed from gender, nationality, mode of study, year
of enrollment, and a unique number. The relationships between friends
within the class, and among the seniors and alumni of the programme,
were plotted. UCINet and Pajek were used to plot the sociogram; to
compute the density, inclusivity, and the degree, global, betweenness,
and Bonacich centralities; to partition the students into two groups,
namely active and peripheral; and to identify the cut-points.
Homophily was investigated and was observed for nationality and study
mode. The groups the students formed on Facebook were also studied: of
fifteen groups, eight were classified as dead, which we defined as
those that had been inactive for over two months.
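Two of the simplest measures reported by tools like UCINet and Pajek, density and normalized degree centrality, can be computed directly. The five-node friendship network below is hypothetical, not the study's anonymized data:

```python
# Network density and normalized degree centrality, two of the measures
# UCINet/Pajek report, computed on a hypothetical five-person
# friendship network (not the study's anonymized data).

def density(n, edges):
    return 2 * len(edges) / (n * (n - 1))  # fraction of possible ties present

def degree_centrality(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return [d / (n - 1) for d in deg]  # normalized by maximum possible degree

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]
print(density(5, edges))            # 0.5
print(degree_centrality(5, edges))  # [0.75, 0.5, 0.5, 0.5, 0.25]
```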
Abstract: Recently, the usefulness of Concept Abduction, a novel non-monotonic inference service for Description Logics (DLs), has been argued in the context of ontology-based applications such as semantic matchmaking and resource retrieval. Based on tableau calculus, a method has been proposed to realize this reasoning task in ALN, a description logic that supports simple cardinality restrictions as well as other basic constructors. However, in many ontology-based systems the representation of an ontology requires expressive formalisms to capture domain-specific constraints, and this language is not sufficient. In order to increase the applicability of the abductive reasoning method in such contexts, we present in the scope of this paper an extension of the tableaux-based algorithm for dealing with concepts represented in ALCQ, the description logic that extends ALN with full concept negation and qualified number restrictions.
Abstract: Fuzzy logic can be used when knowledge is
incomplete or when ambiguity exists in the data. The purpose of
this paper is to propose a proactive fuzzy set-based model for
reacting to the risk inherent in investment activities relative to
a complete view of portfolio management. Fuzzy rules are
given where, depending on the antecedents, the portfolio size
may be slightly or significantly decreased or increased. The
decision maker considers acceptable bounds on the proportion
of acceptable risk and return. The Fuzzy Controller model
allows learning to be achieved as 1) the firing strength of each
rule is measured, 2) fuzzy output allows rules to be updated,
and 3) new actions are recommended as the system continues
to loop. An extension is given to the fuzzy controller that
evaluates potential financial loss before adjusting the
portfolio. An application is presented that illustrates the
algorithm and extension developed in the paper.
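Step 1 of the learning loop above, measuring a rule's firing strength, can be sketched with triangular membership functions and min-composition. The rule and all membership parameters below are hypothetical, not the paper's rule base:

```python
# Firing strength of one hypothetical fuzzy rule -- "IF risk is high
# AND return is low THEN decrease the portfolio significantly" --
# using triangular membership functions and min-composition. All
# membership parameters are illustrative, not the paper's rule base.

def triangular(x, a, b, c):
    """Membership rising from a to a peak at b, falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def firing_strength(risk, ret):
    mu_high_risk = triangular(risk, 0.4, 0.8, 1.2)
    mu_low_return = triangular(ret, -0.2, 0.0, 0.4)
    return min(mu_high_risk, mu_low_return)  # fuzzy AND as minimum

print(round(firing_strength(0.6, 0.1), 3))  # 0.5 = min(0.5, 0.75)
```

The measured strength would then weight the rule's consequent (how much to shrink the portfolio) before the outputs of all rules are aggregated and defuzzified.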
Abstract: We consider a typical problem in the assembly of printed
circuit boards (PCBs) in a two-machine flow shop system to minimize
the weighted sum of tardiness and flow time. The investigated problem
is a group scheduling problem in which PCBs are assembled in groups,
and the interest is in finding the best sequence of groups, as well as
of the boards within each group, to minimize the objective function
value. The setup operation between any two board groups is
characterized as a carryover sequence-dependent setup time, which
exactly matches the real application of this problem. As a technical
constraint, all of the boards must be kitted (the kitting operation)
by kitting staff before the assembly operation starts. The main idea
developed in this paper is to completely eliminate the role of the
kitting staff by assigning the task of kitting to the machine operator
during idle time, which is referred to as the integration of internal
(machine) and external (kitting) setup times. Performing the kitting
operation, which is the preparation of the next set of boards while
other boards are being assembled, results in boards continuously
entering the system, i.e., having dynamic arrival times. Consequently,
a dynamic PCB assembly system is introduced for the first time in the
assembly of PCBs, which also has characteristics similar to those of
just-in-time manufacturing. The problem investigated is computationally
very complex, meaning that finding optimal solutions becomes
impractical as the problem size grows. Thus, a heuristic based on a
Genetic Algorithm (GA) is employed. An example problem demonstrating
the application of the developed GA is given, and numerical results
from applying the GA to several instances are provided.
Abstract: Software maintenance, and especially software
comprehension, accounts for the largest costs in the software
lifecycle. In order to assess the cost of software comprehension,
various complexity measures have been proposed in the literature. This
paper proposes new cognitive-spatial complexity measures, which
combine the spatial as well as the architectural aspects of the
software to compute its complexity. The spatial aspect of the software
complexity is taken into account using the lexical distances (in
number of lines of code) between different program elements and the
architectural aspect of the software complexity is taken into
consideration using the cognitive weights of control structures
present in the control flow of the program. The proposed measures are
evaluated using standard axiomatic frameworks and then compared with
the corresponding existing cognitive complexity measures, as well as
with the spatial complexity measures for object-oriented software.
This study establishes that the proposed measures are better
indicators of the cognitive effort required for software comprehension
than the other existing complexity measures for object-oriented
software.
Abstract: Hearing impairment is the number one chronic disability
affecting many people in the world. Background noise is particularly
damaging to speech intelligibility for people with hearing loss,
especially for patients with sensorineural loss. Several
investigations of speech intelligibility have demonstrated that
sensorineural loss patients need a 5-15 dB higher SNR than
normal-hearing subjects. This paper describes a Discrete Hartley
Transform Power Normalized Least Mean Square algorithm (DHT-LMS) to
improve the SNR and the convergence rate of the Least Mean Square
(LMS) algorithm for sensorineural loss patients. The DHT transforms n
real numbers to n real numbers and has the convenient property of
being its own inverse. It can be effectively used for noise
cancellation with less convergence time. The simulated results show
superior characteristics, improving the SNR by at least 9 dB for an
input SNR of 0 dB, with a faster convergence rate (eigenvalue ratio
12) compared to the time-domain method and DFT-LMS.
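The self-inverse property of the DHT mentioned above (up to a 1/n factor) can be checked directly with a naive O(n²) transform on a synthetic signal:

```python
import math

# The Discrete Hartley Transform maps n real numbers to n real numbers
# and is its own inverse up to a 1/n factor -- the property the
# adaptive filter relies on. A naive O(n^2) sketch on synthetic data.

def dht(x):
    n = len(x)
    return [sum(x[t] * (math.cos(2 * math.pi * k * t / n)
                        + math.sin(2 * math.pi * k * t / n))
                for t in range(n))
            for k in range(n)]

signal = [1.0, 2.0, -0.5, 3.0]
back = [v / len(signal) for v in dht(dht(signal))]  # DHT twice, scale by 1/n
print([round(v, 6) for v in back])  # recovers the original signal
```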