Abstract: Digital broadcasting has been an area of active research, development, innovation, and new business models in recent years. This paper presents a survey of the characteristics of the digital terrestrial television broadcasting (DTTB) standards and of the worldwide implementation status of DTTB, showing the standards adopted. It is clear that only the developed countries, and some of the developing ones, will be able to meet the ITU-set deadline for analogue-to-digital broadcasting migration, because of the challenges these countries face in digitizing their terrestrial broadcasting. The challenges to keeping the DTTB migration plan on track are also discussed in this paper; they include financing, the technology gap, the alignment of policies with DTTB technology, and others. Reported performance comparisons for the different standards are also presented. The interesting part is that the results of many comparative studies depend to a large extent on the objective behind such studies, hence counter-claims are common.
Abstract: This paper presents a novel method for remaining
useful life prediction using the Elliptical Basis Function (EBF)
network and a Markov chain. The EBF structure is trained by a
modified Expectation-Maximization (EM) algorithm in order to take
into account the missing covariate set. No explicit extrapolation is
needed for internal covariates while a Markov chain is constructed to
represent the evolution of external covariates in the study. The
estimated external and the unknown internal covariates constitute an incomplete covariate set, which is then analyzed by the EBF network to provide survival information about the asset. It is shown in the case study that the method slightly underestimates the remaining useful life of an asset, which is a desirable result for early maintenance decisions and resource planning.
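As a minimal illustration of the Markov-chain modelling of external covariates mentioned above (the states and transition matrix below are hypothetical placeholders, not the case-study values), one might simulate covariate trajectories as follows:

import numpy as np

# Hypothetical discretized states of an external covariate (e.g. operating load)
states = ["low", "medium", "high"]

# Assumed transition matrix: P[i, j] = P(next state = j | current state = i)
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

def simulate_external_covariate(start_idx, horizon, rng=np.random.default_rng(0)):
    # Simulate one trajectory of the external covariate over `horizon` steps.
    path = [start_idx]
    for _ in range(horizon):
        path.append(rng.choice(len(states), p=P[path[-1]]))
    return [states[i] for i in path]

print(simulate_external_covariate(start_idx=0, horizon=10))

Trajectories generated this way could feed the external part of the incomplete covariate set that the EBF network analyzes.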
Abstract: As air traffic increases at a hub airport, some flights cannot land or depart at their preferred target times. This happens because the airport runways become occupied to near their capacity. It results in extra costs for both passengers and airlines because of missed connecting flights, longer waits, higher fuel consumption, crew rescheduling, and so on. Hence, devising an
appropriate scheduling method that determines a suitable runway and
time for each flight in order to efficiently use the hub capacity and
minimize the related costs is of great importance. In this paper, we
present a mixed-integer zero-one model for scheduling a set of mixed
landing and departing flights (whereas most previous studies considered only landings). Since the flight cost is strongly affected by the airline's category, we consider different airline categories in our model. The model has a single objective
minimizing the total sum of three terms, namely 1) the weighted
deviation from targets, 2) the scheduled time of the last flight (i.e.,
makespan), and 3) the imbalance of the workload across the runways. We
solve 10 simulated instances of different sizes up to 30 flights and 4
runways. Optimal solutions are obtained in reasonable time and are satisfactory in comparison with the traditional First-Come-First-Served (FCFS) rule, which is far from optimal in most cases.
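A hedged sketch of such a single weighted objective, using illustrative notation that is not necessarily the paper's own (t_f is the scheduled time, T_f the target time, c_f an airline-category weight, C_max the makespan, and x_{fr} a zero-one runway-assignment variable), could be written in LaTeX as:

\min \; \alpha \sum_{f \in F} c_f \, \lvert t_f - T_f \rvert
      \; + \; \beta \, C_{\max}
      \; + \; \gamma \sum_{r \in R} \Bigl\lvert \sum_{f \in F} x_{fr} - \tfrac{\lvert F \rvert}{\lvert R \rvert} \Bigr\rvert

subject to C_max >= t_f for every flight f and to each flight being assigned to exactly one runway, with the absolute values linearized in the usual way for a mixed-integer zero-one formulation.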
Abstract: We consider methods for constructing simple polygons on a set S of n points and apply them to search for the minimal-area polygon. In this paper we propose an approximate algorithm that generates simple polygonalizations of a fixed set of points and finds the minimal-area polygon in O(n^3) time using O(n^2) memory.
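Comparing candidate polygonalizations by area typically relies on the shoelace formula; a minimal sketch of that helper (not the proposed algorithm itself) is:

def polygon_area(vertices):
    # Area of a simple polygon given as an ordered list of (x, y) vertices (shoelace formula).
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Example: a unit square has area 1
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))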
Abstract: This paper presents a methodology based on machine learning approaches for a short-term rain forecasting system. Decision Tree, Artificial Neural Network (ANN), and Support Vector Machine (SVM) methods were applied to develop classification and prediction models for rainfall forecasts. The goals of this presentation are to
demonstrate (1) how feature selection can be used to identify the
relationships between rainfall occurrences and other weather
conditions and (2) what models can be developed and deployed for
predicting accurate rainfall estimates to support decisions to
launch the cloud seeding operations in the northeastern part of
Thailand. Datasets were collected during 2004-2006 from the Chalermprakiat Royal Rain Making Research Center at Hua Hin, Prachuap Khiri Khan, the Chalermprakiat Royal Rain Making Research Center at Pimai, Nakhon Ratchasima, and the Thai Meteorological Department (TMD). A total of 179 records with 57 features were merged and matched by unique date. There are three
main parts in this work. Firstly, a decision tree induction algorithm
(C4.5) was used to classify the rain status into either rain or no-rain.
The overall accuracy of the classification tree reaches 94.41% under five-fold cross-validation. The C4.5 algorithm was also used to
classify the rain amount into three classes: no-rain (0-0.1 mm), few-rain (0.1-10 mm), and moderate-rain (>10 mm); the overall accuracy of the classification tree reaches 62.57%. Secondly, an ANN
was applied to predict the rainfall amount, and the root mean square error (RMSE) was used to measure the training and testing errors of the ANN. It is found that the ANN yields a lower RMSE of 0.171 for daily rainfall estimates when compared to next-day and next-2-day
estimation. Thirdly, the ANN and SVM techniques were also used to
classify the rain amount into three classes as no-rain, few-rain, and
moderate-rain, as above. The results achieved 68.15% and 69.10% overall accuracy for same-day prediction with the ANN and SVM models, respectively. The obtained results illustrate a comparison of the predictive power of different methods for rainfall estimation.
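A minimal sketch of the tree-based rain/no-rain classification with five-fold cross-validation described above, using scikit-learn's CART implementation as a stand-in for C4.5 and synthetic data in place of the actual 57-feature weather records, might look like:

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: 179 records, 57 features, binary rain/no-rain labels
rng = np.random.default_rng(0)
X = rng.normal(size=(179, 57))
y = rng.integers(0, 2, size=179)

# Entropy-based splits approximate C4.5's information-gain criterion
clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # five-fold cross-validation
print("mean accuracy: %.4f" % scores.mean())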
Abstract: The paper presents a reflection on how to select proper indicators to assess the progress of regional contexts towards a knowledge-based society. Taking the first research methodologies elaborated at an international level (World Bank, OECD, etc.) as a reference point, this work intends to identify a set of knowledge economy indicators suitable for adequately understanding in which manner and to what extent territorial development dynamics are correlated with the knowledge base of the local society considered. After a critical survey of the variables used in other approaches adopted by international or national organizations, this paper seeks to elaborate a framework of variables, named Regional Knowledge Economy Indicators (ReKEI), necessary to describe the knowledge-based relations of subnational socio-economic contexts. The realization of this framework has a double purpose: an analytical one, highlighting the regional differences in the governance of knowledge-based processes, and an operative one, providing reference parameters that contribute to increasing the effectiveness of those economic policies aiming at enlarging the knowledge bases of local societies.
Abstract: The main objective of this study was to remove and recover Ni, Cu and Fe from a mixed metal system using sodium hypophosphite as a reducing agent and nickel powder as seeding material. The metal systems studied consisted of Ni-Cu, Ni-Fe and Ni-Cu-Fe solutions. A 5 L batch reactor was used to conduct experiments where 100 mg/l of each respective metal was used. It was found that the metals were reduced to their elemental form with removal efficiencies of over 80%. The removal efficiency decreased in the order Fe>Ni>Cu. The metal powder obtained contained between 97-99% Ni and was almost spherical and porous. Size enlargement by aggregation was the dominant particulate process.
Abstract: This paper investigates the effect of product substitution in the single-period 'newsboy-type' problem in a fuzzy environment. It is supposed that the single-period problem operates under uncertainty in customer demand, which is described by imprecise terms and modelled by fuzzy sets. To perform this analysis, we consider a fuzzy model for two items with upward substitution. This upward substitutability is reasonable when the products can be stored according to certain attribute levels such as quality, brand, or package size. We show that the explicit consideration of this substitution opportunity increases the average expected profit. A computational study is performed to observe the benefits of product substitution.
Abstract: In this paper, a neural network tuned fuzzy controller
is proposed for controlling Multi-Input Multi-Output (MIMO)
systems. For convenience of analysis, the MIMO fuzzy controller is decomposed into single-input single-output (SISO) controllers, one for each degree of freedom. Secondly, according to the characteristics of the system's dynamic coupling, an appropriate coupling fuzzy controller is incorporated to improve the performance. A simulation analysis of a two-level mass–spring MIMO vibration system is carried out, and the results show the effectiveness of the proposed fuzzy controller. Although the performance is improved, the computational time and memory usage are comparatively high, because the controller contains four fuzzy reasoning blocks, and this number may increase for other MIMO systems. Then a fuzzy
neural network is designed from a set of input-output training data to
reduce the computing burden during implementation. This control
strategy not only simplifies the implementation of fuzzy control, but also reduces the computational time and memory consumption.
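A minimal sketch of that last step, approximating the designed fuzzy controller from logged input-output data with a small neural network so that online control needs only one cheap forward pass (the data, control surface, and network size here are illustrative assumptions, not the paper's setup):

import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder for logged fuzzy-controller data: inputs (error, error rate) -> control output
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 2))
u = np.tanh(2.0 * X[:, 0] + 0.5 * X[:, 1])  # stand-in for the recorded fuzzy control surface

# Small network that replaces the fuzzy reasoning blocks at run time
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X, u)

print(net.predict([[0.3, -0.1]]))  # online use: a single inexpensive prediction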
Abstract: This paper describes the implementation and testing
of a multichannel active noise control system (ANCS) based on the
filtered-inverse LMS (FILMS) algorithm. The FILMS algorithm is
derived from the well-known filtered-x LMS (FXLMS) algorithm
with the aim to improve the rate of convergence of the multichannel
FXLMS algorithm and to reduce its computational load. Laboratory
setup and techniques used to implement this system efficiently are
described in this paper. Experiments performed in order to test the
performance of the FILMS algorithm are discussed and the obtained
results are presented.
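For reference, an idealized single-channel sketch of the baseline filtered-x LMS update from which FILMS is derived (the secondary path is assumed equal to its estimate s_hat, and the FILMS-specific modifications are not reproduced here):

import numpy as np

def fxlms(x, d, s_hat, L=32, mu=0.01):
    # Adapt control filter w so that the anti-noise, after the secondary path,
    # cancels the disturbance d observed at the error microphone.
    M = len(s_hat)
    w = np.zeros(L)
    x_buf = np.zeros(L)                    # reference samples driving the control filter
    xf_buf = np.zeros(L)                   # filtered-reference samples for the update
    y_buf = np.zeros(M)                    # anti-noise samples entering the secondary path
    xf = np.convolve(x, s_hat)[:len(x)]    # reference filtered by the secondary-path model
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1);  x_buf[0] = x[n]
        xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf[n]
        y_buf = np.roll(y_buf, 1);  y_buf[0] = w @ x_buf   # anti-noise sample
        e[n] = d[n] - s_hat @ y_buf                        # residual at the error microphone
        w += mu * e[n] * xf_buf                            # filtered-x LMS weight update
    return w, e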
Abstract: This paper presents a novel template-based method to
detect objects of interest from real images by shape matching. To
locate a target object that has a similar shape to a given template
boundary, the proposed method integrates three components: contour
grouping, partial shape matching, and boundary verification. In the
first component, low-level image features, including edges and
corners, are grouped into a set of perceptually salient closed contours
using an extended ratio-contour algorithm. In the second component,
we develop a partial shape matching algorithm to identify the
fractions of detected contours that partly match given template
boundaries. Specifically, we represent template boundaries and
detected contours using landmarks, and apply a greedy algorithm to
search for matched landmark subsequences. For each matched
fraction between a template and a detected contour, we estimate an
affine transform that maps the whole template to a hypothesized
boundary. In the third component, we provide an efficient algorithm
based on oriented edge lists to determine the target boundary from
the hypothesized boundaries by checking each of them against image
edges. We evaluate the proposed method on recognizing and
localizing 12 template leaves in a data set of real images with cluttered backgrounds, illumination variations, occlusions, and image noise. The experiments demonstrate the high performance of the proposed method.
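As a small illustration of the affine-estimation step above, the transform mapping matched template landmarks to contour landmarks can be obtained by linear least squares; the landmark coordinates below are hypothetical:

import numpy as np

def estimate_affine(src, dst):
    # Least-squares 2x3 affine matrix A such that dst ~= A @ [x, y, 1] for each src point.
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])   # homogeneous source coordinates
    A_t, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return A_t.T

# Hypothetical matched landmarks (template -> detected contour fraction)
template_pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
contour_pts = np.array([[2.0, 1.0], [4.0, 1.5], [3.5, 3.5], [1.5, 3.0]])
A = estimate_affine(template_pts, contour_pts)
print(A)  # apply to any template point p via A @ [p[0], p[1], 1.0]

The estimated transform is then applied to the whole template boundary to produce the hypothesized boundary that the verification step checks against image edges.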
Abstract: This research paper presents a framework for building up a malware dataset. Many researchers spend a long time cleaning a dataset of noise or transforming it into a format that can be used straight away for testing. Therefore, this research proposes a framework to help researchers speed up the malware dataset cleaning processes, so that the resulting dataset can later be used for testing. It is believed that an efficient malware dataset cleaning process can improve the quality of the data and thus help to improve the accuracy and efficiency of the subsequent analysis. Apart from that, an in-depth understanding of the malware taxonomy is also important prior to and during the dataset cleaning processes. A new Trojan classification has been proposed to complement this framework. The experiment was conducted in a controlled lab environment using the VxHeavens dataset. The framework is built on the integration of static and dynamic analyses, the incident response method, and knowledge discovery in databases (KDD) processes. It can be used as a baseline guideline for malware researchers in building malware datasets.
Abstract: Many scientific and engineering problems require the solution of large systems of linear equations of the form Ax = b in an effective manner. LU-decomposition offers good choices for solving this problem. Our approach is to find the lower bound on the number of processing elements needed for this purpose. The so-called Omega calculus is used as a computational method for solving problems via their corresponding Diophantine relation. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside the polyhedron. Then the Mathematica program DiophantineGF.m is run. This program calculates the generating function from which it is possible to find the number of solutions to the system of Diophantine equalities, which in fact gives the lower bound on the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is given as well. Keywords: generating function, lattice points in polyhedron, lower bound of processing elements, system of Diophantine equations, Omega calculus.
Abstract: This study investigates the capacity of granular
activated carbon (GAC) for the storage of methane through the
equilibrium adsorption. An experimental apparatus consisting of a dual adsorption vessel was set up for the measurement of the equilibrium adsorption of methane on GAC using a volumetric (pressure decay) technique. Experimental isotherms of methane adsorption were determined by measuring the equilibrium uptake of methane at different pressures (0-50 bar) and temperatures (285.15-328.15 K). The experimental data were fitted to the Freundlich and Langmuir equations to determine the model isotherm. The results show that the experimental data are equally well fitted by both model isotherms. Using the experimental data obtained at different temperatures, the isosteric heat of methane adsorption was also calculated by the Clausius-Clapeyron equation from the Sips isotherm model. The results for the isosteric heat of adsorption show that decreasing temperature or increasing methane uptake by GAC decreases the isosteric heat of methane adsorption.
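A minimal sketch of fitting a Langmuir isotherm to such equilibrium uptake data by nonlinear least squares (the pressure and uptake values below are placeholders, not the measured data):

import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_max, b):
    # Langmuir isotherm: q = q_max * b * p / (1 + b * p)
    return q_max * b * p / (1.0 + b * p)

# Placeholder equilibrium data: pressure in bar, uptake in mmol/g
p = np.array([1, 5, 10, 20, 30, 40, 50], dtype=float)
q = np.array([0.8, 2.9, 4.4, 5.9, 6.6, 7.0, 7.2])

params, _ = curve_fit(langmuir, p, q, p0=[8.0, 0.1])
q_max, b = params
print(f"q_max = {q_max:.2f} mmol/g, b = {b:.3f} 1/bar")

The same fitting call with a Freundlich function, q = k * p**(1/n), allows the two model isotherms to be compared on the measured data.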
Abstract: Owing to the stringent environmental legislations,
CO2 capture and sequestration is one of the viable solutions to reduce
the CO2 emissions from various sources. In this context, Ionic liquids
(ILs) are being investigated as suitable absorption media for CO2
capture. Due to their non-evaporative, non-toxic, and non-corrosive
nature, these ILs have the potential to replace the existing solvents
like aqueous amine solutions for CO2 separation technologies. Thus,
the present work aims at studying the important aspects such as the
interactions of the CO2 molecule with different anions (F-, Br-, Cl-, NO3-, BF4-, PF6-, Tf2N-, and CF3SO3-) that are commonly used in ILs
through molecular modeling. In this, the minimum energy structures
have been obtained using ab initio calculations at the MP2 (second-order Møller-Plesset perturbation theory) level. Results revealed various degrees of distortion of the CO2 molecule (from its linearity) with the anions
studied, most likely due to the Lewis acid-base interactions between
CO2 and anion. Furthermore, binding energies for the anion-CO2
complexes were also calculated. The implication of anion-CO2
interactions to the solubility of CO2 in ionic liquids is also discussed.
Abstract: Many supervised induction algorithms require discrete data, even though real data often come in both discrete and continuous formats. Quality discretization of continuous attributes is an important problem that affects the speed, accuracy, and understandability of the induction models. Usually, discretization and other types of statistical processes are applied to subsets of the population, as the entire population is practically inaccessible. For this reason we argue that the discretization performed on a sample of the population is only an estimate of the discretization for the entire population. Most of the existing discretization methods partition the attribute range into two or several intervals using a single cut point or a set of cut points. In this paper, we introduce a technique that uses resampling (such as the bootstrap) to generate a set of candidate discretization points, thus improving the discretization quality by providing a better estimate with respect to the entire population. The goal of this paper is to observe whether the resampling technique can lead to better discretization points, which opens up a new paradigm for the construction of soft decision trees.
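A minimal sketch of the resampling idea, collecting one candidate cut point per bootstrap resample via a depth-1 decision tree (the data and the aggregation by median are illustrative assumptions, not the paper's exact procedure):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bootstrap_cut_points(x, y, n_boot=200, rng=np.random.default_rng(0)):
    # Collect one candidate cut point per bootstrap resample of a single
    # continuous attribute x with class labels y.
    cuts = []
    n = len(x)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)              # bootstrap sample with replacement
        stump = DecisionTreeClassifier(max_depth=1)   # best single split on this resample
        stump.fit(x[idx].reshape(-1, 1), y[idx])
        if stump.tree_.node_count > 1:                # the stump actually split
            cuts.append(stump.tree_.threshold[0])
    return np.array(cuts)

# Placeholder data: one continuous attribute, two classes
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
y = np.array([0] * 100 + [1] * 100)
cuts = bootstrap_cut_points(x, y)
print("candidate cut point (median over resamples):", np.median(cuts))

The spread of the collected cut points also indicates how uncertain the discretization boundary is, which is the kind of information a soft decision tree can exploit.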
Abstract: The availability and mobilization of revenue are essential to how an economy is managed and run. While planning and preparing their budgets, nations set revenue targets to be achieved. Later, when the accounts are closed, the actual collections of tax and even non-tax revenue invariably differ from the initial estimates and targets. This revenue gap distorts the whole system and the economy, disturbing all the major macroeconomic indicators. This study aims to find the short- and long-run impact of the revenue gap on the budget deficit, debt burden, and economic growth in the economy of Pakistan. For this purpose the study uses the autoregressive distributed lag (ARDL) approach to cointegration and an error correction mechanism on three different models for the period 1980 to 2009. The empirical results show that the revenue gap has a short- and long-run relationship with economic growth and the budget deficit. However, the revenue gap has no impact on the debt burden.
Abstract: At any point in time, a power system operating condition should be stable, meeting various operational criteria, and it
should also be secure in the event of any credible contingency. Present-day power systems are being operated closer to their stability limits
due to economic and environmental constraints. Maintaining a stable
and secure operation of a power system is therefore a very important
and challenging issue. Voltage instability has been given much
attention by power system researchers and planners in recent years,
and is being regarded as one of the major sources of power system
insecurity. Voltage instability phenomena are those in which the receiving-end voltage decreases well below its normal value and does not recover even after the action of restoring mechanisms such as VAR compensators, or continues to oscillate for lack of damping against the disturbances. The reactive power limit of a power system is one of the major
causes of voltage instability. This paper investigates the effects of
coordinated series capacitors (SC) with static VAR compensators
(SVC) on the steady-state voltage stability of a power system. Also, the influence of the presence of the series capacitor on the static VAR compensator controller parameters and on the ratings required to stabilize load voltages at certain values is highlighted.
Abstract: This paper presents parametric probability density models for call holding times (CHTs) in an emergency call center
based on the actual data collected for over a week in the public
Emergency Information Network (EIN) in Mongolia. When the set of
chosen candidates of Gamma distribution family is fitted to the call
holding time data, it is observed that the whole area in the CHT
empirical histogram is underestimated due to spikes of higher
probability and long tails of lower probability in the histogram.
Therefore, we provide a parametric model based on a mixture of lognormal distributions, with explicit analytical expressions, for modeling the CHTs of PSNs. Finally, we show that the CHTs for PSNs are fitted reasonably well by a mixture of lognormal distributions via simulation with the expectation-maximization (EM) algorithm. This result is significant as it provides, in explicit form, a useful mathematical tool based on a mixture of lognormal distributions.
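A minimal sketch of fitting a two-component lognormal mixture to holding-time data by running EM on the log-durations (synthetic durations stand in for the EIN measurements, and the component count is an assumption):

import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic call holding times in seconds (stand-in for the measured CHT data)
rng = np.random.default_rng(0)
t = np.concatenate([rng.lognormal(mean=3.0, sigma=0.4, size=500),
                    rng.lognormal(mean=4.5, sigma=0.6, size=500)])

# A mixture of lognormals in t is a Gaussian mixture in log(t), so EM fits it directly
gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(t).reshape(-1, 1))
print("weights:", gm.weights_)
print("log-scale means:", gm.means_.ravel())
print("log-scale std devs:", np.sqrt(gm.covariances_).ravel())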
Abstract: In this paper a combined feature selection method is proposed which takes advantage of sample-domain filtering, resampling, and feature subset evaluation methods to reduce the dimensionality of large datasets and select reliable features. This method
utilizes both feature space and sample domain to improve the process
of feature selection and uses a combination of Chi squared with
Consistency attribute evaluation methods to seek reliable features.
This method consists of two phases. The first phase filters and
resamples the sample domain and the second phase adopts a hybrid
procedure to find the optimal feature space by applying Chi squared,
Consistency subset evaluation methods and genetic search.
Experiments on datasets of various sizes from the UCI Machine Learning Repository show that the performance of five
classifiers (Naïve Bayes, Logistic, Multilayer Perceptron, Best First
Decision Tree and JRIP) improves simultaneously and the
classification error for these classifiers decreases considerably. The
experiments also show that this method outperforms other feature
selection methods.
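A minimal sketch of the chi-squared step on a UCI-style dataset, using scikit-learn's chi2 scorer (the dataset, the number of retained features, and the Naive Bayes classifier are illustrative assumptions rather than the paper's exact pipeline):

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler

# Example UCI-style data; chi2 requires non-negative features, hence the scaling
X, y = load_breast_cancer(return_X_y=True)
X = MinMaxScaler().fit_transform(X)

# Keep the 10 features with the highest chi-squared statistic w.r.t. the class
X_sel = SelectKBest(chi2, k=10).fit_transform(X, y)

# Compare a simple classifier before and after selection
print("all features :", cross_val_score(GaussianNB(), X, y, cv=5).mean())
print("chi2 selected:", cross_val_score(GaussianNB(), X_sel, y, cv=5).mean())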