Abstract: Elasticity is an essential property of cloud computing. As the name suggests, it is the ability of a cloud system to adjust resource provisioning to fluctuating workloads. There are two types of elasticity operations, vertical and horizontal. In this work, we are interested in horizontal scaling, which is provided by two mechanisms: scaling in and scaling out. Depending on the sizing of the system, scaling in is adopted in the event of over-provisioning and scaling out in the event of under-provisioning. In this paper, we propose a formal model, based on timed and colored Petri nets (TdCPNs), for modeling the duplication and removal of a virtual machine on a server. The model builds on the formal Petri net (PN) modeling language. The proposed models are edited, verified, and simulated with two examples implemented in CPN Tools, a modeling tool for colored and timed PNs.
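The scaling-in/scaling-out decision that the TdCPN model captures can be sketched in a few lines; this is an illustrative Python sketch under assumed utilisation thresholds, not part of the paper's formal model (the function name and threshold values are hypothetical):

```python
def scale(vms, vm_capacity, current_load, low=0.3, high=0.8):
    """Return the new VM count for the current load.

    `low`/`high` are illustrative utilisation thresholds, not values
    from the paper.
    """
    utilisation = current_load / (vms * vm_capacity)
    if utilisation > high:             # under-provisioned: scale out (duplicate a VM)
        return vms + 1
    if utilisation < low and vms > 1:  # over-provisioned: scale in (remove a VM)
        return vms - 1
    return vms
```

For example, with two VMs of capacity 100 requests/s each and a load of 190 requests/s, utilisation is 0.95 and the policy scales out to three VMs.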
Abstract: The purpose of the present research is to equate two
test forms as part of a study to evaluate the educational effectiveness
of the ARTé: Mecenas art history learning game. The researcher
applied Item Response Theory (IRT) procedures to calculate item,
test, and mean-sigma equating parameters. With the sample size
n=134, test parameters indicated “good” model fit but low Test
Information Functions and more acute than expected equating
parameters. Therefore, the researcher applied equipercentile equating
and linear equating to raw scores and compared the equated form
parameters and effect sizes from each method. Item scaling in IRT
enables the researcher to select a subset of well-discriminating items.
The mean-sigma step produces a mean-slope adjustment from the
anchor items, which was used to scale the score on the new form
(Form R) to the reference form (Form Q) scale. In equipercentile
equating, scores are adjusted to align the proportion of scores in each
quintile segment. Linear equating produces a mean-slope adjustment,
which was applied to all core items on the new form. The study
followed a quasi-experimental design with purposeful sampling of
students enrolled in a college-level art history course (n=134) and a
counterbalancing design to distribute both forms on the pre- and post-tests.
The Experimental Group (n=82) was asked to play ARTé:
Mecenas online and complete Level 4 of the game within a two-week
period; 37 participants completed Level 4. Over the same period, the
Control Group (n=52) did not play the game. The researcher
examined between-group differences in post-test scores on test
Form Q and Form R with a full-factorial two-way ANOVA. The raw
score analysis indicated a 1.29% direct effect of form, which was
statistically non-significant but may be practically significant. The
researcher repeated the between-group differences analysis with all
three equating methods. For the IRT mean-sigma adjusted scores,
form had a direct effect of 8.39%. Mean-sigma equating with a small
sample may have resulted in inaccurate equating parameters.
Equipercentile equating aligned test means and standard deviations,
but resultant skewness and kurtosis worsened compared to raw score
parameters. Form had a 3.18% direct effect. Linear equating
produced the lowest Form effect, approaching 0%. Using linearly
equated scores, the researcher conducted an ANCOVA to examine
the effect size in terms of prior knowledge. The between-group effect
size for the Control Group versus Experimental Group participants
who completed the game was 14.39% with a 4.77% effect size
attributed to pre-test score. Playing and completing the game
increased art history knowledge, and individuals with low prior
knowledge tended to gain more from pre- to post-test. Ultimately,
researchers should approach test equating based on their theoretical
stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating
requires a representative sample of sufficient size. With small sample
sizes, the application of a range of equating approaches can expose
item and test features for review, inform interpretation, and identify
paths for improving instruments for future study.
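The mean-sigma and linear equating adjustments described above follow standard formulas; a minimal sketch, assuming anchor-item difficulty estimates for mean-sigma and form means and standard deviations for linear equating (all variable names are illustrative, not the study's data):

```python
import statistics as st

def mean_sigma(anchor_b_new, anchor_b_ref):
    """Mean-sigma scaling constants from anchor-item difficulties.
    Returns (A, B) such that b_scaled = A * b_new + B."""
    A = st.pstdev(anchor_b_ref) / st.pstdev(anchor_b_new)
    B = st.mean(anchor_b_ref) - A * st.mean(anchor_b_new)
    return A, B

def linear_equate(score_new, mu_new, sd_new, mu_ref, sd_ref):
    """Linear equating of a raw score on the new form (Form R)
    to the reference form (Form Q) scale: a mean-slope adjustment."""
    return mu_ref + (sd_ref / sd_new) * (score_new - mu_new)
```

For instance, a score of 30 on a new form with mean 25 and SD 5 maps to 60 on a reference scale with mean 50 and SD 10.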
Abstract: Power management techniques are necessary to save power in microprocessors. By changing the frequency and/or operating voltage of the processor, dynamic voltage and frequency scaling (DVFS) can control power consumption. In this paper, we perform a case study to find the optimal power state transition for DVFS. We propose an equation for the optimal ratio between the execution times of the states, taking into account the processing deadline and the power state transition delay overhead. The experiment is performed on a Cortex-M4 processor, and an average power saving of 6.5% is observed when DVFS is applied under the deadline condition.
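The idea of splitting execution between two power states under a deadline with a transition overhead can be sketched as follows; this is an illustrative two-state model, not the paper's exact equation (all parameter names and values are assumptions):

```python
def split_times(cycles, deadline, f_hi, f_lo, t_switch):
    """Time to spend in each DVFS state so the given cycle count finishes
    exactly at the deadline, accounting for one state-transition delay.

    Solves f_hi*t_hi + f_lo*t_lo = cycles with t_hi + t_lo = deadline - t_switch.
    """
    budget = deadline - t_switch
    t_hi = (cycles - f_lo * budget) / (f_hi - f_lo)
    t_hi = min(max(t_hi, 0.0), budget)  # clamp to the feasible range
    return t_hi, budget - t_hi
```

For example, 1.5 M cycles under a 20 ms deadline with a 1 ms switch delay, at 100 MHz and 50 MHz states, yields 11 ms in the fast state and 8 ms in the slow state.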
Abstract: Frequency-domain independent component analysis has
a scaling indeterminacy and a permutation problem. The scaling
indeterminacy can be solved by using a decomposed spectrum. For
the permutation problem, we have proposed rules in terms of the gain
ratio and the phase difference derived from the decomposed spectra and
the sources' coarse directions.
The present paper experimentally clarifies that the gain ratio and
the phase difference work effectively in a real environment, but that
their performance depends on the frequency band, the microphone
spacing, and the source-microphone distance. From these facts, it is
difficult to attain a perfect solution of the permutation problem in a
real environment by either the gain ratio or the phase difference alone.
This paper therefore gives a combined solution to the problem in a real
environment. The proposed method is simple and computationally cheap,
yet it achieves high correction performance regardless of the frequency
band or the distances from the source signals to the microphones.
Its effectiveness is verified by several experiments in a real room.
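The phase-difference rule rests on a standard far-field relation between the inter-microphone phase and the source direction; a minimal sketch for one frequency bin of a two-microphone setup (illustrative only; the paper combines this with a gain-ratio rule, and the function name is hypothetical):

```python
import cmath
import math

C = 343.0  # speed of sound in air (m/s)

def doa_from_phase(a1, a2, freq, mic_spacing):
    """Coarse direction of arrival (degrees) from the phase difference
    between the two elements of one column of the decomposed mixing
    matrix at one frequency bin, assuming a far-field source."""
    phi = cmath.phase(a2 / a1)  # inter-microphone phase difference (rad)
    s = phi * C / (2 * math.pi * freq * mic_spacing)
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))
```

Bins whose direction estimates disagree with the sources' coarse directions would then have their output channels swapped; above the spatial-aliasing frequency (spacing > half a wavelength) the phase wraps, which is one reason performance depends on the frequency band and microphone spacing.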
Abstract: Scalability poses a severe threat to the existing
DRAM technology. The capacitors that are used for storing and
sensing charge in DRAM are generally not scaled beyond 42 nm,
because the capacitors must be sufficiently large for reliable
sensing and charge storage. This leaves DRAM memory
scaling in jeopardy, as charge sensing and storage mechanisms
become extremely difficult. In this paper we provide an overview of
the potential and the possibilities of using Phase Change Memory
(PCM) as an alternative for the existing DRAM technology. The
main challenges that we encounter in using PCM are its limited
endurance, high access latencies, and higher dynamic energy
consumption than that of conventional DRAM. We then provide
an overview of various methods, which can be employed to
overcome these drawbacks. Hybrid memories involving both PCM
and DRAM can be used to achieve good tradeoffs in access latency
and storage density. We conclude by presenting the results of these
methods, which make PCM a potential replacement for the current
DRAM technology.
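The hybrid DRAM+PCM tradeoff mentioned above is commonly realised by keeping hot pages in a small DRAM tier and demoting cold pages to PCM; a minimal sketch of that idea, assuming an LRU policy and counting PCM write-backs as endurance wear (the class and policy are illustrative, not a method from the paper):

```python
from collections import OrderedDict

class HybridMemory:
    """Toy DRAM-as-cache-for-PCM model: recently written pages stay in
    DRAM, so PCM absorbs fewer writes, sparing its limited endurance."""

    def __init__(self, dram_pages):
        self.dram = OrderedDict()  # page -> None, kept in LRU order
        self.capacity = dram_pages
        self.pcm_writes = 0        # writes reaching PCM (endurance wear)

    def write(self, page):
        if page in self.dram:
            self.dram.move_to_end(page)  # hot page stays in DRAM
            return
        if len(self.dram) >= self.capacity:
            self.dram.popitem(last=False)  # evict the LRU page...
            self.pcm_writes += 1           # ...whose write-back wears PCM
        self.dram[page] = None
```

The design choice here mirrors the stated tradeoff: DRAM provides latency and write filtering, while PCM provides density.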
Abstract: Modern manufacturing facilities are large scale,
highly complex, and operate with a large number of variables under
closed loop control. Early and accurate fault detection and diagnosis
for these plants can minimise down time, increase the safety of plant
operations, and reduce manufacturing costs. Fault detection and
isolation is more complex particularly in the case of the faulty analog
control systems. Analog control systems are not equipped with
monitoring function where the process parameters are continually
visualised. In this situation, It is very difficult to find the relationship
between the fault importance and its consequences on the product
failure. We consider in this paper an approach to fault detection and
analysis of its effect on the production quality using an adaptive
centring and scaling in the pickling process in cold rolling. The fault
appeared on one of the power units driving a rotary machine; this
machine cannot track the reference speed given by another machine.
The length of the metal loop therefore oscillates continuously, which
degrades the product quality. Using a computerised data acquisition system,
the main machine parameters have been monitored. The fault has
been detected and isolated on the basis of an analysis of the monitored data.
Normal and faulty situations have been obtained with an artificial
neural network (ANN) model implemented to simulate the normal
and faulty status of the rotary machine. The correlation between the
product quality, defined by an index, and the residual is used for
quality classification.
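Adaptive centring and scaling, as used above for residual generation, is often implemented with exponentially weighted estimates of the signal's mean and variance; a minimal sketch of that general technique (the forgetting factor and class name are illustrative assumptions, not the paper's values):

```python
class AdaptiveScaler:
    """Adaptively centre and scale a monitored signal with exponentially
    weighted mean/variance estimates; the standardised value serves as a
    residual that grows when the signal departs from normal behaviour."""

    def __init__(self, lam=0.99, eps=1e-9):
        self.lam = lam      # forgetting factor (illustrative value)
        self.eps = eps      # guard against division by zero
        self.mean = 0.0
        self.var = 1.0

    def residual(self, x):
        # Update the running mean and variance, then standardise.
        self.mean = self.lam * self.mean + (1 - self.lam) * x
        d = x - self.mean
        self.var = self.lam * self.var + (1 - self.lam) * d * d
        return d / (self.var + self.eps) ** 0.5
```

After the estimates settle on a steady signal, a sudden deviation (such as the oscillating loop length) produces a large standardised residual that can be thresholded or correlated with the quality index.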