Abstract: In this paper, we employ the approach of linear
programming to propose a new interactive broadcast method. In our
method, a film S is divided into n equal parts and broadcast via k
channels. The user simultaneously downloads these segments from k
channels into the user's set-top box (STB) and plays them in order.
Our method assumes that the initial p segments will not have
fast-forwarding capabilities. Every time the user wants to initiate d
times fast-forwarding, according to our broadcasting strategy, the
necessary segments already saved in the user-s STB or are just
download on time for playing. The proposed broadcasting strategy not
only allows the user to pause and rewind, but also to fast-forward.
Abstract: It is well known that Logistic Regression is the gold
standard method for predicting clinical outcome, especially
predicting risk of mortality. In this paper, the Decision Tree method
has been proposed to solve specific problems that commonly use
Logistic Regression as a solution. The Biochemistry and
Haematology Outcome Model (BHOM) dataset obtained from
Portsmouth NHS Hospital from 1 January to 31 December 2001 was
divided into four subsets. One subset of training data was used to
generate a model, and the model obtained was then applied to three
testing datasets. The performance of each model from both methods
was then compared using calibration (the χ² test) and
discrimination (area under the ROC curve, or c-index). The experiments
showed that both methods give reasonable results in terms of the
c-index. However, in some cases the calibration value (χ²) was
quite high. After conducting experiments and investigating
the advantages and disadvantages of each method, we can conclude
that Decision Trees can be seen as a worthy alternative to Logistic
Regression in the area of Data Mining.
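The discrimination measure used above, the c-index, equals the area under the ROC curve and can be computed directly from predicted risks and observed outcomes. A minimal illustrative sketch (not the authors' code):

```python
from itertools import product

def c_index(risks, outcomes):
    """Concordance index: fraction of (event, non-event) pairs in which
    the event case received the higher predicted risk (ties count 0.5).
    Equivalent to the area under the ROC curve."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevents = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = len(events) * len(nonevents)
    concordant = sum(1.0 if e > n else 0.5 if e == n else 0.0
                     for e, n in product(events, nonevents))
    return concordant / pairs

# Perfect separation gives a c-index of 1.0; chance level is 0.5.
print(c_index([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # -> 1.0
```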
Abstract: Software projects are very dynamic and require
recurring adjustments of their project plans. These adjustments can be
understood as reconfigurations of the schedule, of the resource
allocation, and of other plan elements. Moreover, during the planning and
execution of a software project, the integration of project-specific
activities with the activities that take part in the organization's
common activity flow should be considered. This article presents the
results from a systematic review of aspects related to software
projects' dynamic reconfiguration, emphasizing the integration of
project management with the organizational flows. A series of studies
was analyzed from the year 2000 to the present. The results of this
work show that there is a diversity of techniques and strategies for
dynamic reconfiguration of software projects. However, few
approaches consider the integration of software project activities with
the activities that take part in the organization's common workflow.
Abstract: The article deals with the technical support of intracranial single-unit activity measurement. The parameters of the whole measuring set were tested in order to ensure optimal conditions for extracellular single-unit recording. Metal microelectrodes for single-unit measurement were tested during animal experiments. From the signals recorded during these experiments, requirements for the measuring-set parameters were defined. The impedance parameters of the metal microelectrodes were measured. The frequency-gain and inherent-noise properties of the preamplifier and amplifier were verified. The measurement and description of extracellular single-unit activity could help in the prognosis of brain tissue damage recovery.
Abstract: Knowledge Discovery in Databases (KDD) has
evolved into an important and active area of research because of
theoretical challenges and practical applications associated with the
problem of discovering (or extracting) interesting and previously
unknown knowledge from very large real-world databases. Rough
Set Theory (RST) is a mathematical formalism for representing
uncertainty that can be considered an extension of the classical set
theory. It has been used in many different research areas, including
those related to inductive machine learning and reduction of
knowledge in knowledge-based systems. One important concept
related to RST is that of a rough relation. In this paper we present
the current status of research on applying rough set theory to KDD,
which is helpful for handling the characteristics of real-world
databases. The main aim is to show how rough sets and rough set
analysis can be effectively used to extract knowledge from large
databases.
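As background on the rough set formalism referred to above: given the partition induced by an equivalence relation, a target set is approximated from below by the classes fully contained in it and from above by the classes intersecting it. A minimal illustrative sketch (not tied to any specific system in the survey):

```python
def rough_approximations(universe, equiv_of, target):
    """Lower/upper approximation of `target` under the partition induced
    by `equiv_of` (maps each object to its equivalence-class label)."""
    classes = {}
    for obj in universe:
        classes.setdefault(equiv_of(obj), set()).add(obj)
    # Lower: classes certainly inside the target; upper: classes possibly inside.
    lower = set().union(*([c for c in classes.values() if c <= target] or [set()]))
    upper = set().union(*([c for c in classes.values() if c & target] or [set()]))
    return lower, upper  # boundary region = upper - lower

# Objects grouped by parity; the target {1, 2, 3} is "rough":
lo, up = rough_approximations({1, 2, 3, 4}, lambda x: x % 2, {1, 2, 3})
print(lo, up)  # {1, 3} is fully contained; {2, 4} only intersects
```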
Abstract: The availability of high-dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate predefined classes such as patients with a particular disease versus healthy controls. However, most of the existing research focuses only on a specific dataset. There is a lack of generic comparison between classifiers, which might provide a guideline for biologists or bioinformaticians to select the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, on mock datasets. We mimic common biological scenarios by simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by SVM's ability to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, depend strongly on the ratio of discriminators and perform better with a higher number of discriminators.
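A mock-data setup of the kind described, in which only a chosen fraction of features carries a class-dependent mean shift (the "effect size"), can be sketched as follows; the function name and defaults are illustrative assumptions, not the authors' protocol:

```python
import random

def make_mock_dataset(n_samples, n_features, frac_discriminating, effect_size, seed=0):
    """Two balanced classes; the first `frac_discriminating` share of the
    features has its mean shifted by `effect_size` in class 1, while the
    remaining features are pure Gaussian noise."""
    rng = random.Random(seed)
    n_disc = int(n_features * frac_discriminating)
    X, y = [], []
    for i in range(n_samples):
        label = i % 2  # alternate labels -> balanced classes
        row = [rng.gauss(effect_size * label if j < n_disc else 0.0, 1.0)
               for j in range(n_features)]
        X.append(row)
        y.append(label)
    return X, y

# 100 samples, 50 features, of which 10% discriminate with effect size 1.0:
X, y = make_mock_dataset(100, 50, frac_discriminating=0.1, effect_size=1.0)
```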
Abstract: A simple impedance matching technique for inset feed
grooved microstrip patch antenna based on the concept of coplanar
waveguide feed line has been developed and investigated for a
printed antenna at the X-band frequency of 10 GHz. The proposed
technique has been used in the design of Linear Grooved Microstrip
patch antenna array. The characteristics of the antenna are
determined in terms of return loss, VSWR, gain, radiation pattern,
etc. The measured and simulated results presented are found to be in
good agreement.
Abstract: The study explored varied types of human smiles and
extracted most of the key factors affecting the smiles. These key
factors were then converted into a set of control points that could
serve the needs of 3D animators in creating facial expressions and be
further applied to face simulation for robots in the
future. First, hundreds of human smile pictures were collected and
analyzed to identify the key factors for face expression. Then, the
factors were converted into a set of control points and sizing
parameters calculated proportionally. Finally, two different faces
were constructed for validating the parameters via the process of
simulating smiles of the same type as the original one.
Abstract: In this work, the natural convection in a concentric
annulus between a cold outer inclined square enclosure and heated
inner circular cylinder is simulated for two-dimensional steady
state. The Boussinesq approximation was applied to model the
buoyancy-driven effect and the governing equations were solved
using the time marching approach staggered by body fitted
coordinates. The coordinate transformation from the physical
domain to the computational domain is set up by an analytical
expression. Numerical results are presented for Rayleigh numbers of
10³, 10⁴, 10⁵, and 10⁶, aspect ratios of 1.5, 3.0, and 4.5, and seven
inclination angles of the outer square enclosure: 0°, -30°, -45°,
-60°, -90°, -135°, and -180°. The computed flow
and temperature fields were demonstrated in the form of
streamlines, isotherms and Nusselt numbers variation. It is found
that both the aspect ratio and the Rayleigh number are critical to the
patterns of flow and thermal fields. At all Rayleigh numbers, the angle
of inclination has only a nominal effect on heat transfer.
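For reference, the dimensionless steady governing equations under the Boussinesq approximation typically take the following form (the exact non-dimensionalization used in the paper may differ):

```latex
\nabla\cdot\mathbf{u} = 0, \qquad
(\mathbf{u}\cdot\nabla)\mathbf{u} = -\nabla p + Pr\,\nabla^{2}\mathbf{u}
  + Ra\,Pr\,\theta\,\hat{\mathbf{g}}, \qquad
(\mathbf{u}\cdot\nabla)\theta = \nabla^{2}\theta,
```

where θ is the dimensionless temperature, Ra = gβΔT L³/(να) is the Rayleigh number, and Pr = ν/α is the Prandtl number.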
Abstract: A high-frequency low-power sinusoidal quadrature
oscillator is presented through the use of two 2nd-order low-pass
current-mirror (CM)-based filters, a 1st-order CM low-pass filter and
a CM bilinear transfer function. The technique is relatively simple
based on (i) inherent time constants of current mirrors, i.e. the
internal capacitances and the transconductance of a diode-connected
NMOS, (ii) a simple negative resistance RN formed by a resistor load
RL of a current mirror. Neither external capacitances nor inductances
are required. As a particular example, a 1.9-GHz, 0.45-mW, 2-V
CMOS low-pass-filter-based all-current-mirror sinusoidal quadrature
oscillator is demonstrated. The oscillation frequency (f0) is 1.9 GHz
and is current-tunable over a range of 370 MHz or 21.6 %. The
power consumption is approximately 0.45 mW. The amplitude
matching and the quadrature phase matching are better than 0.05 dB
and 0.15°, respectively. Total harmonic distortions (THD) are less
than 0.3 %. At 2 MHz offset from the 1.9 GHz, the carrier to noise
ratio (CNR) is 90.01 dBc/Hz whilst the figure of merit called a
normalized carrier-to-noise ratio (CNRnorm) is 153.03 dBc/Hz. The
ratio of the oscillation frequency (f0) to the unity-gain frequency (fT)
of a transistor is 0.25. Comparisons to other approaches are also
included.
Abstract: The Muslim faith requires individuals to fast between
the hours of sunrise and sunset during the month of Ramadan. Our
recent work has concentrated on some of the changes that take place
during the daytime when fasting. A questionnaire was developed to
assess subjective estimates of physical, mental and social activities,
and fatigue. Four days were studied: in the weeks before and after
Ramadan (control days) and during the first and last weeks of
Ramadan (experimental days). On each of these four days, this
questionnaire was given several times during the daytime and once
after the fast had been broken and just before individuals retired at
night.
During Ramadan, daytime mental, physical and social activities
all decreased below control values but then increased to above-control
values in the evening. The desires to perform physical and
mental activities showed very similar patterns. That is, individuals
tried to conserve energy during the daytime in preparation for the
evenings when they ate and drank, often with friends. During
Ramadan also, individuals were more fatigued in the daytime and
napped more often than on control days. This extra fatigue probably
reflected decreased sleep, individuals often having risen earlier
(before sunrise, to prepare for fasting) and retired later (to enable
recovery from the fast).
Some physiological measures and objective measures of
performance (including the response to a bout of exercise) have also
been investigated. Urine osmolality fell during the daytime on
control days as subjects drank, but rose in Ramadan to reach values
at sunset indicative of dehydration. Exercise performance was also
compromised, particularly late in the afternoon when the fast had
lasted several hours. Self-chosen exercise work-rates fell and a set
amount of exercise felt more arduous. There were also changes in
heart rate and lactate accumulation in the blood, indicative of greater
cardiovascular and metabolic stress caused by the exercise in
subjects who had been fasting. Daytime fasting in Ramadan produces
widespread effects which probably reflect combined effects of sleep
loss and restrictions to intakes of water and food.
Abstract: Some meta-schedulers query the information system of individual supercomputers in order to submit jobs to the least busy supercomputer on a computational Grid. However, this information can become outdated by the time a job starts due to changes in scheduling priorities. The MSR scheme is based on Multiple Simultaneous Requests and can take advantage of opportunities resulting from these priority changes. This paper presents the SWARM meta-scheduler, which can speed up the execution of large sets of tasks by minimizing the job queuing time through the submission of multiple requests. Performance tests have shown that this new meta-scheduler is faster than an implementation of the MSR scheme and the gLite meta-scheduler. SWARM has been used through the GridQTL project beta-testing portal during the past year. Statistics are provided for this usage and demonstrate its capacity to reliably achieve a substantial reduction of the execution time under production conditions.
Abstract: This study aims to propose three evaluation methods to
evaluate the Tokyo Cap and Trade Program when emissions trading is
performed virtually among enterprises, focusing on carbon dioxide
(CO2), which is the only emitted greenhouse gas that tends to increase.
The first method clarifies the optimum reduction rate for the highest
cost benefit, the second discusses emissions trading among enterprises
through market trading, and the third verifies long-term emissions
trading during the term of the plan (2010-2019), checking the validity
of emissions trading partly using Geographic Information Systems
(GIS). The findings of this study can be summarized in the following
three points.
1. Since the total cost benefit is greatest at a 44% reduction rate, the
   rate can be set higher than that of the Tokyo Cap and Trade
   Program to obtain a greater total cost benefit.
2. At a 44% reduction rate, among 320 enterprises, 8 purchasing
enterprises and 245 sales enterprises gain profits from emissions
trading, and 67 enterprises perform voluntary reduction without
   conducting emissions trading. Therefore, to further promote
   emissions trading, it is necessary to increase the number of
   purchasing enterprises in addition to sales enterprises, thereby
   increasing the sales volumes of emissions trading.
3. Compared to short-term emissions trading, there are few enterprises
which benefit in each year through the long-term emissions trading
   of the Tokyo Cap and Trade Program. Only 81 enterprises at most
   can gain profits from emissions trading in FY 2019. Therefore, by
   setting the reduction rate higher, it is necessary to increase
the number of enterprises that participate in emissions trading and
benefit from the restraint of CO2 emissions.
Abstract: In this article, by using fuzzy AHP and TOPSIS
technique we propose a new method for project selection problem.
After reviewing four common methods of comparing investment
alternatives (net present value, rate of return, benefit-cost analysis,
and payback period), we use them as criteria in the AHP tree. In this
methodology, by utilizing the Analytical Hierarchy Process improved
with fuzzy set theory, we first calculate the weight of each
criterion. Then, by implementing the TOPSIS algorithm, the projects
are assessed. The results obtained are tested on a
numerical example.
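The TOPSIS step can be sketched generically as follows; the alternative scores and criterion weights in the example are hypothetical, not the paper's fuzzy-AHP-derived values:

```python
def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if criterion j is to be maximized."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal points per criterion.
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = sum((x - i) ** 2 for x, i in zip(row, ideal)) ** 0.5
        d_neg = sum((x - a) ** 2 for x, a in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores

# Three projects scored on NPV (benefit) and payback period (cost):
print(topsis([[120, 4], [100, 2], [80, 5]], [0.6, 0.4], [True, False]))
```

Here project 2, with a solid NPV and the shortest payback period, obtains the highest closeness coefficient.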
Abstract: The paper deals with an application of quantitative analysis – the Data Envelopment Analysis (DEA) method – to the performance evaluation of the European Union Member States in the reference years 2000 and 2011. The main aim of the paper is to measure efficiency changes over the reference years, to analyze the level of productivity in individual countries based on the DEA method, and to classify the EU Member States into homogeneous units (clusters) according to the efficiency results. The theoretical part is devoted to the fundamental basis of performance theory and the methodology of DEA. The empirical part is aimed at measuring the degree of productivity and the level of efficiency changes of the evaluated countries by the basic DEA model – the CCR CRS model – and a specialized DEA approach – the Malmquist Index, which measures the change of technical efficiency and the movement of the production possibility frontier. Here, the DEA method becomes a suitable tool for establishing the competitive/uncompetitive position of each country, because not only one factor is evaluated, but a set of different factors that determine the degree of economic development.
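For reference, the input-oriented CCR envelopment model solved for each evaluated unit (DMU) o, with inputs x and outputs y over DMUs j, is the linear program

```latex
\min_{\theta,\,\lambda}\ \theta
\quad \text{s.t.}\quad
\sum_{j} \lambda_j x_{ij} \le \theta\, x_{io}\ \ (i=1,\dots,m), \qquad
\sum_{j} \lambda_j y_{rj} \ge y_{ro}\ \ (r=1,\dots,s), \qquad
\lambda_j \ge 0,
```

and the Malmquist index between periods t and t+1 combines the distance functions of both periods:

```latex
M_o = \left[
\frac{D^{t}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}\cdot
\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t},y^{t})}
\right]^{1/2}.
```

A value of θ = 1 (resp. M_o > 1) indicates an efficient DMU (resp. productivity growth).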
Abstract: In this paper, we propose a single sample path based
algorithm with state aggregation to optimize the average rewards of
singularly perturbed Markov reward processes (SPMRPs) with a
large-scale state spaces. It is assumed that such a reward process
depends on a set of parameters. Differing from other kinds of
Markov chains, SPMRPs have their own hierarchical structure. Based
on this special structure, our algorithm can alleviate the computational
load of performance optimization. Moreover, our method can be applied
online because it evolves along with the simulated sample path.
Compared with the original algorithm applied to these problems in
general MRPs, a new gradient formula for the average-reward
performance metric in SPMRPs is introduced, which is proved in the
Appendix. Based on these gradients, the schedule of the iteration
algorithm, built on a single sample path, is then presented. Finally, a
special case in which the parameters only dominate the disturbance
matrices is analyzed, and a precise comparison is made between our
algorithm and earlier ones aimed at solving these problems in general
Markov reward processes; when applied to SPMRPs, our method
converges faster in these cases. Furthermore, to illustrate the
practical value of SPMRPs, a simple example of multiprogramming in
computer systems is presented and simulated. Corresponding to this
practical model, the physical meaning of SPMRPs in networks of
queues is clarified.
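As background (the SPMRP-specific formula is proved in the paper's Appendix), the standard potential-based gradient of the average reward η = πf in an ergodic Markov reward process with parameterized transition matrix P(θ), as used in single-sample-path algorithms from perturbation analysis, reads

```latex
\nabla_{\theta}\,\eta \;=\; \pi(\theta)\,\bigl(\nabla_{\theta} P(\theta)\bigr)\,g,
\qquad (I - P + e\,\pi)\,g = f,
```

where π is the stationary distribution, f the reward vector, g the performance-potential vector, and e the all-ones column vector.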
Abstract: We propose a fast and robust hierarchical face detection system which finds and localizes face images with a cascade of classifiers. Three modules contribute to the efficiency of our detector. First, heterogeneous feature descriptors are exploited to enrich the feature types and feature numbers for face representation. Second, a PSO-Adaboost algorithm is proposed to efficiently select discriminative features from a large pool of available features and reinforce them into the final ensemble classifier. Compared with the standard exhaustive Adaboost for feature selection, the new PSO-Adaboost algorithm reduces the training time by up to 20 times. Finally, a three-stage hierarchical classifier framework is developed for rapid background removal. In particular, candidate face regions are detected more quickly by using a large-size window in the first stage. Nonlinear SVM classifiers are used instead of decision-stump functions in the last stage to remove those remaining complex non-face patterns that cannot be rejected in the previous two stages. Experimental results show our detector achieves superior performance on the CMU+MIT frontal face dataset.
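The PSO component presumably builds on the standard particle swarm update, in which each particle i moves according to its own best position p_i and the swarm's best g (w is the inertia weight, c₁ and c₂ the acceleration coefficients, r₁ and r₂ uniform random numbers in [0, 1]):

```latex
v_i \leftarrow w\,v_i + c_1 r_1\,(p_i - x_i) + c_2 r_2\,(g - x_i),
\qquad
x_i \leftarrow x_i + v_i.
```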
Abstract: Public health surveillance systems focus on outbreak detection and the data sources used. Variation or aberration in the frequency distribution of health data compared to historical data is often used to detect outbreaks. It is important that new techniques be developed to improve the detection rate, thereby reducing the wastage of resources in public health. Thus, the objective is to develop a technique that applies frequent mining and outlier mining to outbreak detection. 14 datasets from the UCI repository were tested with the proposed technique. The effectiveness of each technique was measured with a t-test. The overall performance shows that DTK can be used to detect outliers within a frequent dataset. In conclusion, the outbreak detection technique using the anomaly-based frequent-outlier approach can be used to identify outliers within a frequent dataset.
Abstract: Color constancy algorithms are generally based on
simplified assumptions about the spectral distribution or the reflection
attributes of the scene surface. However, in reality, these assumptions
are too restrictive. A methodology is proposed to extend existing
algorithms by applying color constancy locally to image patches rather
than globally to the entire image.
In this paper, a method based on low-level image features using
superpixels is proposed. Superpixel segmentation partitions an image
into regions that are approximately uniform in size and shape. Instead
of using the entire pixel set for estimating the illuminant, only the
superpixels with the most valuable information are used. Based on
large-scale experiments on real-world scenes, it can be concluded that
the estimation is more accurate using superpixels than the entire image.
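As an illustration of local illuminant estimation, the classical gray-world assumption can be applied per region and the per-region estimates then pooled. This sketch uses plain pixel lists and a hypothetical region labeling, not the paper's superpixel algorithm or selection criterion:

```python
def grayworld_estimate(pixels):
    """Gray-world illuminant estimate: the mean of each RGB channel."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def local_illuminant(pixels, labels):
    """Estimate the illuminant per region (e.g. per superpixel), then
    average the per-region estimates instead of pooling all pixels."""
    regions = {}
    for p, lab in zip(pixels, labels):
        regions.setdefault(lab, []).append(p)
    estimates = [grayworld_estimate(ps) for ps in regions.values()]
    k = len(estimates)
    return tuple(sum(e[c] for e in estimates) / k for c in range(3))

# Two regions of a toy "image": a reddish patch and a bluish patch.
pix = [(0.8, 0.2, 0.2), (0.6, 0.2, 0.2), (0.2, 0.2, 0.8), (0.2, 0.2, 0.6)]
print(local_illuminant(pix, [0, 0, 1, 1]))
```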
Abstract: As the trend in manufacturing is increasingly dominated by services, products and processes are more and more intertwined with sophisticated services. Thus, this research starts with a discussion of the integration of product, process, and service in the innovation process. In particular, this paper sets out some foundations for a theory of service innovation in the field of manufacturing, and proposes a dynamic model of service innovation related to product and process. Two dynamic models of service innovation are suggested to investigate major tendencies and dynamic variations during the innovation cycle: co-innovation and sequential innovation. To structure the dynamic models of product, process, and service innovation, the innovation stages in which the two models are mainly achieved are identified. This research would encourage manufacturers to formulate strategy and planning for service development together with product and process.