Abstract: This study considers the linkage between management and computer science in order to develop software named "IntelSymb," a demo application demonstrating that data analysis of non-energy fields' diversification can positively influence the mitigation of countries' energy dependency. We analyzed 18 years of economic development data (5 sectors) for 13 countries, identifying which patterns have prevailed and which may become dominant in the near future. To make the analysis more solid and plausible, as future work we suggest developing a gateway or interface connected to all available online databases (WB, UN, OECD, U.S. EIA) for country-by-field analysis. The sample data consist of energy statistics (TPES and energy-import indicators) and non-energy industry statistics (Main Science and Technology Indicators, Internet user index, and sales and production indicators) from 13 OECD countries over 18 years (1995-2012). Our results show that the diversification of non-energy industries can help decelerate energy-sector dependency (energy consumption and import dependence on crude oil). These results provide empirical and practical support for diversification policies in energy and non-energy industries, such as promoting Information and Communication Technologies (ICTs), services, and the efficiency and management of innovative technologies, in other OECD and non-OECD member states with similar energy utilization patterns and policies. Industries, including the ICT sector, generate around 4 percent of total GHG emissions, but this figure rises to around 14 percent when indirect energy use is included. The ICT sector itself (excluding the broadcasting sector) contributes approximately 2 percent of global GHG emissions, just under 1 gigatonne of carbon dioxide equivalent (GtCO2eq).
Hence, this can serve as an example and lesson for both energy-dependent and energy-independent countries, and mainly for emerging oil-based economies, motivating the diversification of non-energy industries so that they are prepared for an energy crisis and able to withstand any economic crisis as well.
Abstract: In this paper we present the first Arabic sentence dataset for online handwriting recognition written on a tablet PC. The dataset is natural, simple, and clear: texts are sampled from daily newspapers, and to collect naturally written handwriting, the forms were dictated to the writers. The current version of our dataset includes 154 paragraphs written by 48 writers, containing more than 3,800 words and more than 19,400 characters. The handwritten texts were produced mainly by researchers from different research centers. To use this dataset in a recognition system, word extraction is needed, so this paper also presents a new word extraction technique based on the cursive nature of Arabic handwriting. The technique is applied to the dataset and good results are obtained; these results can serve as a benchmark for future research to compare against.
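The abstract does not detail the word extraction technique; the sketch below is only an illustrative assumption of how gap-based segmentation of cursive, right-to-left strokes might look. The stroke representation, ordering, and gap threshold are all hypothetical, not taken from the paper.

```python
# Hypothetical sketch: group pen strokes into words by the horizontal gap
# between consecutive stroke bounding boxes. Arabic is cursive and written
# right to left, so strokes of one word lie close together; a gap wider
# than a threshold is taken as a word boundary. Data layout and threshold
# are illustrative assumptions, not the paper's actual algorithm.

def stroke_bounds(stroke):
    """Return (min_x, max_x) of a stroke given as [(x, y), ...] points."""
    xs = [x for x, _ in stroke]
    return min(xs), max(xs)

def extract_words(strokes, gap_threshold=15.0):
    """Group strokes (ordered right to left) into words.

    A new word starts when the horizontal gap between the current
    word's leftmost point and the next stroke exceeds the threshold.
    """
    words = []
    current = []
    for stroke in strokes:
        left, right = stroke_bounds(stroke)
        if current:
            cur_left = min(stroke_bounds(s)[0] for s in current)
            if cur_left - right > gap_threshold:  # wide gap: word boundary
                words.append(current)
                current = []
        current.append(stroke)
    if current:
        words.append(current)
    return words
```

In practice, a fixed threshold would be replaced by one estimated per writer or per line, since intra-word sub-word gaps vary with handwriting style.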
Abstract: Having computers process data and perform reasoning tasks is an important aim in computer science, and the Semantic Web is one step toward it. Using ontologies to enrich information semantically is the current trend. Huge amounts of domain-specific, unstructured online data need to be expressed in a machine-understandable and semantically searchable format. Currently, users are often forced to search manually through the results returned by keyword-based search services; they also want to express their searches in their native languages. In this paper, an ontology-based automated question answering system for the software test document domain is presented. The system allows users to enter a question about the domain in natural language and returns the exact answer to the question. Converting the natural language question into an ontology-based query is the challenging part of the system. To achieve this, a new algorithm for converting free text into ontology-based search engine queries is proposed. The algorithm is based on identifying the appropriate question type and parsing the words of the question sentence.
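The conversion algorithm itself is not given in the abstract; the following is a minimal sketch of the two steps it names (question-type identification and word parsing), assuming a lookup table of interrogative words, a stop-word list, and a SPARQL-like query template, all of which are illustrative inventions rather than the paper's method.

```python
# Hypothetical sketch of free-text-to-ontology-query conversion:
# 1) detect the question type from the interrogative word,
# 2) strip stop words to keep domain terms,
# 3) emit a SPARQL-like triple-pattern query over those terms.
# The tables and template below are illustrative assumptions.

QUESTION_TYPES = {
    "what": "definition",
    "which": "selection",
    "how": "procedure",
    "who": "agent",
}
STOP_WORDS = {"is", "are", "the", "a", "an", "of", "in", "for", "do", "does"}

def classify(question):
    """Map the leading interrogative word to a question type."""
    first = question.lower().split()[0]
    return QUESTION_TYPES.get(first, "unknown")

def keywords(question):
    """Keep the content words of the question sentence."""
    tokens = question.lower().rstrip("?").split()
    return [t for t in tokens[1:] if t not in STOP_WORDS]

def to_query(question):
    """Build one triple pattern per extracted term."""
    qtype = classify(question)
    patterns = " . ".join(f'?x ?p "{t}"' for t in keywords(question))
    return f"SELECT ?x WHERE {{ {patterns} }}  # type: {qtype}"
```

A real system would map the extracted terms to ontology classes and properties (e.g. by label matching) instead of quoting them as literals.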
Abstract: It is estimated that abnormal conditions cost US process industries around $20 billion in annual losses. The hydrotreatment (HDT) of diesel fuel in petroleum refineries is a conversion process that yields highly profitable economic returns. However, it is a difficult process to control because it operates continuously at high hydrogen pressures and is subject to disturbances in feed properties and catalyst performance, so automatic fault detection and diagnosis play an important role in this context. In this work, a hybrid approach based on neural networks together with a post-processing classification algorithm is used to detect faults in a simulated HDT unit. Nine classes (8 faults and normal operation) were correctly classified with the proposed approach within at most 5 minutes, based on online process measurements.
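The abstract does not specify the post-processing step; one common choice for online fault classification, sketched here purely as an assumption, is to smooth the neural network's per-sample class labels with a sliding-window majority vote so that one-off misclassifications do not trigger a fault alarm.

```python
from collections import Counter

def smooth_predictions(raw_classes, window=5):
    """Post-process a stream of per-sample class labels (e.g. from a
    neural network) by majority vote over a trailing window, suppressing
    isolated misclassifications before a fault alarm is raised.

    This is an illustrative stand-in for the paper's unspecified
    post-processing classification algorithm, not its actual method.
    """
    smoothed = []
    for i in range(len(raw_classes)):
        lo = max(0, i - window + 1)                 # trailing window
        votes = Counter(raw_classes[lo:i + 1])
        smoothed.append(votes.most_common(1)[0][0])  # majority label
    return smoothed
```

The window length trades detection delay against robustness: a longer window rejects more noise but delays the alarm for a genuine fault.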
Abstract: Detecting incipient abnormal events is important for improving the safety and reliability of machine operations and for reducing the losses caused by failures. Improper set-up or alignment of parts often leads to severe problems in many machines, so building models that predict faulty conditions is essential for deciding when to perform machine maintenance. This paper presents a multivariate calibration monitoring approach based on statistical analysis of machine measurement data. The calibration model is used to predict two faulty conditions from historical reference data. The approach employs genetic algorithm (GA)-based variable selection, and we evaluate the predictive performance of several prediction methods on real data. The results show that the calibration model based on supervised probabilistic principal component analysis (SPPCA) yielded the best performance in this work. By adopting a proper variable selection scheme in calibration models, prediction performance can be improved by excluding non-informative variables from the model-building steps.
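The GA details are not given in the abstract; the sketch below shows the generic shape of GA-based variable selection, assuming binary masks over the variables, truncation selection, one-point crossover, and bit-flip mutation. All operator choices and parameters are illustrative, and the fitness function would in practice be the (cross-validated) prediction error of the calibration model on the selected subset.

```python
import random

def ga_select(n_vars, fitness, pop_size=20, generations=30, p_mut=0.1, seed=0):
    """GA variable selection sketch: each individual is a binary mask
    over the variables; higher fitness means a better subset.

    Illustrative operators (truncation selection, one-point crossover,
    bit-flip mutation), not the paper's actual configuration.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]          # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_vars)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutate
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

In the monitoring setting, `fitness` would score a mask by fitting the calibration model (e.g. SPPCA) on the masked variables and measuring its predictive performance, so that non-informative variables are driven out of the mask.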