Abstract: This paper presents a novel statistical description of
the counterpoise effective length due to lightning surges, where the
(impulse) effective length is obtained by means of regression
formulas applied to the transient simulation results. The effective
length is described in terms of a statistical distribution function, from
which the median, mean, variance, and other parameters of interest can
be readily obtained. The influence of lightning current amplitude,
lightning front duration, and soil resistivity on the effective length has
been accounted for, assuming the statistical nature of these parameters. A
method for determining the optimal counterpoise length, in terms of
the statistical impulse effective length, is also presented. It is based on
estimating the number of dangerous events associated with lightning
strikes. The proposed statistical description and the associated method
provide valuable information which could aid the design engineer in
optimising physical lengths of counterpoises in different grounding
arrangements and soil resistivity situations.
Abstract: Prior to quantifying the variables of the information
model for using school terminology in Croatia's region of Dalmatia
from 1884 to 2014, the most relevant model variables had to be
determined: historical circumstances, standard of living, education
system, linguistic situation, and media. The research findings show
that there was no significant transfer of the 1884 school terms into
1949 usage; likewise, the 1949 school terms were not widely used in
2014. On the other hand, the research revealed that the meaning of
school terms changed over the decades. The quantification of the
variables will serve as the groundwork for creating an information
model for using school terminology in Dalmatia from 1884 to 2014
and for defining direct growth rates in further research.
Abstract: Motion Tracking and Stereo Vision are complicated,
albeit well-understood problems in computer vision. Existing
software packages that combine the two approaches to perform stereo
motion tracking typically employ complicated and computationally expensive
procedures. The purpose of this study is to create a simple and
effective solution capable of combining the two approaches. The
study aims to explore a strategy to combine the two techniques
of two-dimensional motion tracking using a Kalman Filter, and depth
detection of objects using Stereo Vision. In conventional approaches,
objects in the scene of interest are observed using a single camera.
However, for Stereo Motion Tracking, the scene of interest is
observed using video feeds from two calibrated cameras. Using two
simultaneous measurements from the two cameras, the depth of the object
from the plane containing the cameras is calculated.
The approach attempts to capture the entire three-dimensional spatial
information of each object in the scene and represent it through a
software estimator object. In discrete intervals, the estimator tracks
object motion in the plane parallel to the plane containing the cameras and
updates the perpendicular distance value of the object from the plane
containing the cameras as depth. The ability to efficiently track
the motion of objects in three-dimensional space using a simplified
approach could prove to be an indispensable tool in a variety of
surveillance scenarios. The approach may find application from high
security surveillance scenes such as premises of bank vaults, prisons
or other detention facilities; to low cost applications in supermarkets
and car parking lots.
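The depth step described in this abstract rests on the standard triangulation relation for a rectified stereo pair. The sketch below is a minimal illustration of that relation only, not the paper's estimator; the focal length, baseline, and pixel coordinates are hypothetical example values.

```python
# Minimal sketch of stereo depth recovery from two calibrated cameras.
# Assumes a standard rectified pinhole stereo rig; all numeric values
# below are illustrative, not taken from the paper.

def depth_from_disparity(x_left: float, x_right: float,
                         focal_px: float, baseline_m: float) -> float:
    """Perpendicular distance of a point from the camera plane.

    For a rectified stereo pair, depth Z = f * b / d, where d is the
    horizontal disparity between the two image projections.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity

# Example: 700 px focal length, 10 cm baseline, 35 px disparity -> 2 m.
z = depth_from_disparity(x_left=420.0, x_right=385.0,
                         focal_px=700.0, baseline_m=0.10)
print(round(z, 3))  # 2.0
```

In a tracking loop, this depth value is what the estimator would update at each discrete interval, alongside the in-plane position from the Kalman filter.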
Abstract: A Reverse Logistics (RL) network is a complex and dynamic
network that involves many stakeholders, such as suppliers,
manufacturers, warehouses, retailers, and customers; this complexity
is inherent in the process owing to the lack of perfect knowledge and
to conflicting information. Ontologies, on the other hand, can be
considered an approach to overcoming the problem of sharing knowledge
and communicating among the various reverse logistics partners. In
this paper we propose a semantic representation based on a hybrid
architecture for building the ontologies in a bottom-up way; this
method facilitates the semantic reconciliation between the
heterogeneous information systems that support reverse logistics
processes and product data.
Abstract: To date, one of the few comprehensive indicators for
the measurement of food security is the Global Food Security Index
(GFSI). This index is a dynamic quantitative and qualitative
benchmarking model, constructed from 28 unique indicators, that
measures drivers of food security across both developing and
developed countries. Whereas the GFSI has been calculated across a
set of 109 countries, in this paper we aim to present and compare, for
the Middle East and North Africa (MENA), 1) the Food Security
Index scores achieved and 2) the data available on affordability,
availability, and quality of food. The data for this work was taken
from the latest available report published by the creators of the GFSI,
which in turn used information from national and international
statistical sources. MENA countries rank from place 17/109 (Israel,
although with recent political turmoil this is likely to have changed)
to place 91/109 (Yemen), with household expenditure on food
ranging from 15.5% (Israel) to 60% (Egypt). Lower spending on food
as a share of household consumption in most countries and better
food safety net programs in the MENA have contributed to a notable
increase in food affordability. The region has also, however,
experienced a decline in food availability, owing to more limited
food supplies and higher volatility of agricultural production. In
terms of food quality and safety the MENA has the top ranking
country (Israel). The most frequent challenges faced by the countries
of the MENA include public expenditure on agricultural research and
development as well as volatility of agricultural production. Food
security is a complex phenomenon that interacts with many other
indicators of a country’s wellbeing; in the MENA it is slowly but
markedly improving.
Abstract: This paper focuses on the assessment of the air
pollution and morbidity relationship in Tunisia. Air pollution is
measured by ozone air concentration and the morbidity is measured
by the number of respiratory-related restricted activity days during
the 2-week period prior to the interview. Socioeconomic data are also
collected in order to adjust for any confounding covariates. Our
sample is composed of 407 Tunisian respondents; 44.7% are women,
the average age is 35.2, nearly 69% live in a house built after
1980, and 27.8% have reported at least one day of respiratory-related
restricted activity. The model consists of regressing the
number of respiratory-related restricted activity days on the air
quality measure and the socioeconomic covariates. In order to correct
for zero-inflation and heterogeneity, we estimate several models
(Poisson, negative binomial, zero inflated Poisson, Poisson hurdle,
negative binomial hurdle and finite mixture Poisson models).
Bootstrapping and post-stratification techniques are used in order to
correct for any sample bias. According to the Akaike information
criterion, the hurdle negative binomial model has the best goodness
of fit. The main result indicates that, after adjusting for
socioeconomic data, the ozone concentration increases the probability
of a positive number of restricted activity days.
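The model-selection step in this abstract compares count models by AIC. The toy sketch below illustrates that step on synthetic zero-inflated counts: a plain Poisson fit versus a hurdle Poisson (a Bernoulli gate plus a zero-truncated Poisson), compared by AIC = 2k − 2·logL. The data and the Poisson hurdle (rather than the negative binomial hurdle preferred by the study) are simplifying assumptions for illustration.

```python
# Toy sketch of AIC-based selection between a plain Poisson model and
# a hurdle Poisson model on data with excess zeros. Pure stdlib; the
# data are illustrative, not the Tunisian survey sample.
import math

data = [0]*30 + [1]*4 + [2]*3 + [3]*2 + [5]*1  # many excess zeros

def poisson_loglik(lam, ys):
    return sum(-lam + y*math.log(lam) - math.lgamma(y+1) for y in ys)

# Plain Poisson: MLE lambda is the sample mean (k = 1 parameter).
lam_hat = sum(data) / len(data)
aic_pois = 2*1 - 2*poisson_loglik(lam_hat, data)

# Hurdle Poisson: Bernoulli gate for zero vs. positive, plus a
# zero-truncated Poisson for the positives (k = 2 parameters).
zeros = sum(1 for y in data if y == 0)
pos = [y for y in data if y > 0]
pi0 = zeros / len(data)

# Solve mean(pos) = lam / (1 - exp(-lam)) for lam by bisection.
target = sum(pos) / len(pos)
lo, hi = 1e-9, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mid / (1 - math.exp(-mid)) < target:
        lo = mid
    else:
        hi = mid
lam_t = (lo + hi) / 2

ll_hurdle = zeros*math.log(pi0) + sum(
    math.log(1 - pi0)
    + (-lam_t + y*math.log(lam_t) - math.lgamma(y+1))
    - math.log(1 - math.exp(-lam_t))
    for y in pos)
aic_hurdle = 2*2 - 2*ll_hurdle

print(aic_hurdle < aic_pois)  # hurdle model handles the excess zeros better
```

With zero-inflated data like this, the hurdle model's extra parameter is more than paid for by the gain in log-likelihood, which is the same reasoning the AIC comparison in the abstract encodes.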
Abstract: Improving the quality of experience (QoE) has recently
become an important issue. Since performance degradation at the cell
edge severely reduces the QoE, several techniques are defined in the
LTE/LTE-A standards to remove inter-cell interference (ICI). However,
the conventional techniques have a disadvantage: there is a
trade-off between resource allocation and reliable communication.
The proposed scheme reduces the ICI more efficiently by using
channel state information (CSI) smartly. It is shown that the proposed
scheme can reduce the ICI with fewer resources.
Abstract: Wireless mesh networking is rapidly gaining in
popularity with a variety of users: from municipalities to enterprises,
from telecom service providers to public safety and military
organizations. This increasing popularity rests on two basic facts:
WMNs are easy to deploy and increase network capacity, expressed in
bandwidth per footage, without relying on any fixed infrastructure.
Many efforts have been made to maximize the throughput of
multi-channel multi-radio wireless mesh networks. Current approaches
are based purely on either static or dynamic channel allocation. In
this paper, we use a hybrid multi-channel multi-radio wireless mesh
networking architecture in which both static and dynamic interfaces
are built into the nodes. The Dynamic Adaptive Channel Allocation
(DACA) protocol considers optimization of both throughput and delay
in the channel allocation. The channel assignment is treated as
co-dependent with the routing problem in the wireless mesh network
and is based on the traffic flow on every link. Because of temporal
and spatial variations, the channel assignment must be recomputed
every time the traffic pattern in the mesh network changes. In this
paper we also propose a path-computation method that captures the
available path bandwidth, together with an efficient routing protocol
based on the new path metric that exploits both static and dynamic
links. The consistency property guarantees that each node makes an
appropriate packet-forwarding decision and balances the control
overhead of the network, so that a data packet traverses the right
path.
Abstract: Web-based Cognitive Writing Instruction (WeCWI)’s
contribution towards language development can be divided into
linguistic and non-linguistic perspectives. From the linguistic
perspective, WeCWI focuses on literacy and language discoveries,
while the cognitive and psychological discoveries are the hubs of the
non-linguistic perspective. Within the linguistic perspective, WeCWI
draws attention to free reading and enterprises, which are supported
by language acquisition theories. In addition, the adoption of the
process genre approach
as a hybrid guided writing approach fosters literacy development.
Literacy and language developments are interconnected in the
communication process; hence, WeCWI encourages meaningful
discussion based on the interactionist theory that involves input,
negotiation, output, and interactional feedback. Rooted in the
e-learning interaction-based model, WeCWI promotes online
discussion via synchronous and asynchronous communications,
which allows interactions to take place among the learners, the instructor,
and digital content. From the non-linguistic perspective, WeCWI
highlights the contribution of reading, discussion, and writing towards
cognitive development. Based on the inquiry models, learners’
critical thinking is fostered during the information exploration process
through interaction and questioning. Lastly, to lower writing anxiety,
WeCWI develops the instructional tool with supportive features to
facilitate the writing process. To bring a positive user experience to
the learner, WeCWI aims to create the instructional tool with
different interface designs based on two different types of perceptual
learning style.
Abstract: In this study, an attempt has been made to investigate the
relationship, specifically the causal relationship, between the fund
unit prices of Islamic equity unit trust funds, as measured by fund
NAV, and selected macro-economic variables of the Malaysian economy,
using the VECM causality test and the Granger causality test. Monthly
data from January 2006 to December 2012 were used for all the variables. The
findings of the study showed that industrial production index,
political elections, and the financial crisis are the only variables
having a unidirectional causal relationship with the fund unit price,
whereas the global oil price has a bidirectional causal relationship
with fund NAV.
Thus, it is concluded that the equity unit trust fund industry in
Malaysia is an inefficient market with respect to the industrial
production index, global oil prices, political elections, and the
financial crisis. However, the market is approaching informational
efficiency at least with respect to four macroeconomic variables,
treasury bill rate, money supply, foreign exchange rate, and
corruption index.
Abstract: Nowadays, social media information, such as news,
links, images, or videos, is shared extensively. However, information
disseminated through social media often lacks quality: there is
little fact checking, more bias, and many rumors. Many researchers
have investigated credibility on Twitter, but there are no research
reports about information credibility on Facebook. This paper
proposes features for measuring the credibility of Facebook
information, and we developed a corresponding system. First, we
developed an FB credibility evaluator for measuring the credibility
of each post via manual human labelling. We
then collected the training data for creating a model using Support
Vector Machine (SVM). Secondly, we developed a chrome extension
of FB credibility for Facebook users to evaluate the credibility of
each post. Based on the usage analysis of our FB credibility chrome
extension, about 81% of users’ responses agree with the suggested
credibility automatically computed by the proposed system.
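The pipeline described in this abstract extracts features from posts and scores them with a trained SVM. The sketch below only illustrates the general shape of such a pipeline: a hypothetical feature extractor and a linear decision function of the form a trained linear SVM produces (w · x + b). The features and weights are invented placeholders, not the paper's feature set or model.

```python
# Illustrative sketch: simple post-level features scored by a linear
# decision function. The features and weights are hypothetical; a real
# system would learn the weights with an SVM from labelled posts.
import re

def extract_features(post: str) -> list:
    words = post.split()
    return [
        len(words),                              # post length in words
        len(re.findall(r"https?://\S+", post)),  # number of links
        post.count("!"),                         # exclamation marks
        sum(w.isupper() for w in words),         # all-caps "shouting" words
    ]

def svm_score(x, w, b):
    """Linear SVM decision value: positive -> classified 'credible'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Hypothetical weights a trained model might produce.
w, b = [0.01, 0.5, -0.3, -0.4], 0.2
post = "Breaking!!! SHOCKING cure doctors HATE"
label = "credible" if svm_score(extract_features(post), w, b) > 0 else "not credible"
print(label)  # "not credible" for this spammy example
```

In the study itself the weights come from training an SVM on the manually labelled posts; the deployed Chrome extension then applies the learned decision function to each post in the feed.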
Abstract: Science and technology have a major impact on many
societal domains such as communication, medicine, food,
transportation, etc. However, this dominance of modern technology
can have a negative unintended impact on indigenous systems, and in
particular on indigenous foods. This problem serves as a motivation
to this study whose aim is to examine the perceptions of learners on
the usefulness of Information and Communication Technologies
(ICTs) for learning about indigenous foods. This aim will be
subdivided into two types of research objectives. The design and
identification of theories and models will be achieved using literature
content analysis. The objective of empirically testing such
theories and models will be achieved through the survey of
Hospitality studies learners from different schools in the iLembe and
Umgungundlovu Districts of the South African Kwazulu-Natal
province. SPSS is used to quantitatively analyze the data collected by
the questionnaire of this survey using descriptive statistics and
Pearson correlations after the assessment of the validity and the
reliability of the data. The main hypothesis behind this study is that
there is a connection between the demographics of learners, their
perceptions on the usefulness of ICTs for learning about indigenous
foods, and the following personality and eLearning related theories
constructs: Computer self-efficacy, Trust in ICT systems, and
Conscientiousness; as suggested by existing studies on learning
theories. This hypothesis was confirmed by the survey conducted
in this study, except for the demographic factors, where
gender and age were not found to be determinant factors of learners’
perceptions on the usefulness of ICTs for learning about indigenous
foods.
Abstract: Ad hoc networks are the future of wireless technology, as
everyone wants fast, accurate, error-free information. With this in
mind, the Bit Error Rate (BER) and power are optimized in this
research paper by using a Genetic Algorithm (GA). The
digital modulation techniques used for this paper are Binary Phase
Shift Keying (BPSK), M-ary Phase Shift Keying (M-ary PSK), and
Quadrature Amplitude Modulation (QAM). This work is
implemented on wireless ad hoc networks (WLAN). It is then analyzed
which modulation technique performs best in optimizing the BER and
power of the WLAN.
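The quantity being optimized here has standard closed forms over an AWGN channel. The sketch below shows the textbook BER expressions for BPSK and (approximately) M-ary PSK that a GA fitness function could evaluate while trading BER against transmit power (via Eb/N0); it is an assumed illustration of the fitness evaluation, not the paper's exact setup.

```python
# Textbook AWGN bit error rates for two of the modulations named in
# the abstract. A GA would call functions like these as part of its
# fitness evaluation when searching the power/BER trade-off.
import math

def Q(x: float) -> float:
    """Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_bpsk(ebn0_db: float) -> float:
    """Exact BER for BPSK over AWGN: Q(sqrt(2 * Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return Q(math.sqrt(2 * ebn0))

def ber_mpsk(ebn0_db: float, m: int) -> float:
    """Approximate BER for Gray-coded M-ary PSK, M >= 4."""
    ebn0 = 10 ** (ebn0_db / 10)
    k = math.log2(m)
    return (2 / k) * Q(math.sqrt(2 * k * ebn0) * math.sin(math.pi / m))

# Raising Eb/N0 (more power) lowers BER -- the trade-off a GA explores.
print(ber_bpsk(4) > ber_bpsk(8))  # True
```

The monotone power/BER trade-off these curves express is exactly why the optimization is non-trivial: minimizing BER alone pushes power up, so the GA must balance the two objectives.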
Abstract: The use of eXtensible Markup Language (XML) in
web, business, and scientific databases has led to the development of
methods, techniques and systems to manage and analyze XML data.
Semi-structured documents suffer from heterogeneity and high
dimensionality. XML structure and content mining represent a point of
convergence for research in semi-structured data and text mining. As
the information available on the internet grows drastically, extracting
knowledge from XML documents becomes a harder task. Certainly,
documents are often so large that the data set returned as the answer
to a query may be too large to convey the required information. To
improve the query answering, a Semantic Tree Based Association
Rule (STAR) mining method is proposed. This method provides
intentional information by considering the structure, content and the
semantics of the content. The method is applied to the Reuters
dataset, and the results show that the proposed method performs well.
Abstract: The paper describes a Chinese shadow play animation
system based on Kinect. Users without any professional training can
manipulate the shadow characters through their body movements to give
a shadow play performance, and can obtain a video of it by issuing
the record command to our system. In our
system, Kinect is responsible for capturing human movement and
voice command data. A gesture recognition module is used to control
the change of the shadow play scenes. After packaging the data from
Kinect and the recognition result from the gesture recognition module,
VRPN transmits them to the server side. Finally, the server side uses
the information to control the motion of shadow characters and video
recording. This system not only achieves human-computer interaction
but also realizes interaction between people. It brings an
entertaining experience to users and is easy to operate for all ages.
More importantly, this application of Chinese shadow play contributes
to the preservation of the art of shadow play animation.
Abstract: The final step to complete the “Analytical Systems
Engineering Process” is the “Allocated Architecture” in which all
Functional Requirements (FRs) of an engineering system must be
allocated into their corresponding Physical Components (PCs). At
this step, any design for developing the system’s allocated
architecture in which no clear pattern assigns the exclusive
“responsibility” of each PC for fulfilling its allocated FR(s) is
considered a poor design, as it may cause difficulties in determining
which specific PC(s) failed to satisfy a
given FR successfully. The present study utilizes the Axiomatic
Design method principles to mathematically address this problem and
establishes an “Axiomatic Model” as a solution for reaching good
alternatives for developing the allocated architecture. This study
proposes a “Loss Function” as a quantitative criterion to monetarily
compare non-ideal designs for developing the allocated architecture
and choose the one which imposes relatively lower cost to the
system’s stakeholders. For the case-study, we use the existing design
of the U.S. electricity marketing subsystem, based on data provided by
the U.S. Energy Information Administration (EIA). The result for
2012 shows the symptoms of a poor design and ineffectiveness due to
coupling among the FRs of this subsystem.
Abstract: The classroom of the 21st century is an ever-changing
forum for new and innovative thoughts and ideas. With increasing
technology and opportunity, students have rapid access to
information that only decades ago would have taken weeks to obtain.
Unfortunately, new techniques and technology are not the cure for
the fundamental problems that have plagued the classroom ever since
education was established. Class size has been an issue long debated
in academia. While it is difficult to pinpoint an exact number, it is
clear that in this case more does not mean better. By looking into the
successes and pitfalls of class size, the true advantages of smaller
classes will become clear. Previously, one class comprised 50
students. As seventeen- and eighteen-year-old students, they
sometimes found it quite difficult to stay focused. To help
them understand and gain much knowledge, a researcher introduced
“The Theory of Multiple Intelligence” and this, in fact, enabled
students to learn according to their own learning preferences no
matter how they were being taught. In this lesson, the researcher
designed a cycle of learning activities involving all intelligences so
that everyone had equal opportunities to learn.
Abstract: Steganography is the art and science of hiding information in an appropriate cover carrier such as image, text, audio, or video media. In this work the authors propose a new image-based steganographic method for hiding information within the complex bit planes of an image. After slicing the cover image into bit planes, it is analyzed to extract the most complex planes in decreasing order of bit-plane complexity. A complexity function then determines the complex, noisy blocks of the chosen bit plane, and finally the pixel mapping method (PMM) is used to embed secret bits into those regions of the bit plane. This novel approach of using the pixel mapping method (PMM) in the bit-plane domain adaptively embeds data in the most complex regions of the image, providing high embedding capacity, better imperceptibility, and resistance to steganalysis attacks.
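The first two steps this abstract describes, bit-plane slicing and selecting complex blocks, can be sketched concretely. The snippet below uses the common border-complexity measure (fraction of adjacent bit transitions, as in BPCS steganography) as an assumed stand-in for the paper's complexity function; the PMM embedding itself is not reproduced, and the 4×4 block size and toy data are illustrative.

```python
# Sketch of bit-plane slicing and block-complexity ranking for
# steganography. Complexity here is the BPCS-style border measure:
# transitions between adjacent bits, normalized by the maximum.

def bit_plane(img, k):
    """Extract bit plane k (0 = LSB) from a 2-D list of pixel values."""
    return [[(p >> k) & 1 for p in row] for row in img]

def complexity(block):
    """Fraction of horizontally/vertically adjacent bit pairs that differ."""
    h, w = len(block), len(block[0])
    changes = sum(block[i][j] != block[i][j + 1]
                  for i in range(h) for j in range(w - 1))
    changes += sum(block[i][j] != block[i + 1][j]
                   for i in range(h - 1) for j in range(w))
    return changes / (2 * h * w - h - w)   # max possible transitions

checker = [[(i + j) % 2 for j in range(4)] for i in range(4)]  # noisiest block
flat = [[0] * 4 for _ in range(4)]                              # smoothest block
print(complexity(checker), complexity(flat))  # 1.0 0.0
```

Ranking a plane's blocks by this measure and embedding only into high-complexity (noisy) regions is what gives methods of this family their imperceptibility: changes to noise-like regions are hard to detect visually and statistically.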
Abstract: The growth in the volume of text data, such as the books
and articles accumulated in libraries over centuries, has made it
necessary to establish effective mechanisms to locate them. Early
techniques such as
abstraction, indexing and the use of classification categories have
marked the birth of a new field of research called "Information
Retrieval". Information Retrieval (IR) can be defined as the task of
defining models and systems whose purpose is to facilitate access to
a set of documents in electronic form (corpus) to allow a user to find
the documents relevant to them, that is to say, the content which
matches their information needs.
Most of the models of information retrieval use a specific data
structure to index a corpus which is called "inverted file" or "reverse
index".
This inverted file collects information on all terms over the corpus
documents specifying the identifiers of documents that contain the
term in question, the frequency of each term in the documents of the
corpus, the positions of the occurrences of each word, and so on.
In this paper we use an object-oriented database (db4o) instead of
the inverted file; that is to say, instead of searching for a term in
the inverted file, we search for it in the db4o database.
The purpose of this work is to make a comparative study to see
whether object-oriented databases can compete with the inverted index
in terms of access speed and resource consumption on a large volume
of data.
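The inverted-file structure this abstract describes, per-term document identifiers, frequencies, and positions, can be sketched in a few lines. This is a minimal generic illustration of the data structure being compared against db4o, with a toy two-document corpus; it is not the paper's implementation.

```python
# Minimal sketch of an inverted file: for each term, the documents
# containing it and the positions of its occurrences (term frequency
# falls out as the length of the position list).
from collections import defaultdict

def build_inverted_index(corpus):
    """corpus: {doc_id: text}. Returns {term: {doc_id: [positions]}}."""
    index = defaultdict(dict)
    for doc_id, text in corpus.items():
        for pos, term in enumerate(text.lower().split()):
            index[term].setdefault(doc_id, []).append(pos)
    return index

corpus = {1: "information retrieval systems",
          2: "object databases store information"}
idx = build_inverted_index(corpus)
print(sorted(idx["information"]))       # documents containing the term
print(len(idx["information"][2]))       # its frequency in document 2
```

The comparison in the paper amounts to replacing the term lookup in this structure with a query against objects persisted in db4o, and measuring access speed and resource consumption at scale.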
Abstract: This paper reports a structured literature review of the
application of Health Information Technology in developing
countries, defined as the World Bank categories Low-income
countries, Lower-middle-income, and Upper-middle-income
countries. The aim was to identify and classify the various
applications of health information technology to assess its current
state in developing countries and explore potential areas of research.
We offer specific analysis and application of HIT in Libya as one of
the developing countries. A structured literature review was
conducted using the following online databases: IEEE, Science
Direct, PubMed, and Google Scholar. Publication dates were set for
2000-2013. For the PubMed search, publications in English, French,
and Arabic were specified. Of the 2681 retrieved articles, 159 met
the inclusion criteria and were carefully analyzed and classified.
Using a content analysis approach, a total of 26 factors affecting
the adoption of health information technology were
identified. The implementation of health information
technology across developing countries is varied. Whilst it was
initially expected that financial constraints would severely limit
health information technology implementation, some developing
countries like India have nevertheless dominated the literature and
taken the lead in conducting scientific research. Comparing the
number of studies to the number of countries in each category, we
found that Low-income and Lower-middle-income countries had
more studies carried out than Upper-middle-income countries.
However, whilst IT has been used in various sectors of the economy,
the healthcare sector in developing countries is still failing to benefit
fully from the potential advantages that IT can offer.