Abstract: The excessive use of agricultural pesticides and the resulting contamination of food and river beds have become a recurring problem. Some of these substances can disturb the endocrine balance and impair the reproductive function of human and animal populations. In the present study, we evaluated the possible effects of the fungicide cuprous oxide (Copper Sandoz®) on pregnant Wistar rats. The animals received a daily oral dose of 10³ or 3×10³ mg/kg of the fungicide from the 6th to the 15th day of gestation. On day 21 of gestation, maternal and fetal toxicity parameters and indices were determined. The administration of cuprous oxide (Copper Sandoz) to Wistar rats during the period of organogenesis revealed no evidence of maternal or embryo toxicity at the studied doses.
Abstract: The purpose of this paper is to propose a text mining approach to evaluate companies' practices in affective management. Affective management argues that it is critical to take stakeholders' affects into consideration during the decision-making process, along with traditional numerical and rational indices. CSR reports published by companies were collected as source information. Indices were proposed based on the frequency and collocation of words relevant to the affective management concept, using a text mining approach to analyze the text of the CSR reports. In addition, the relationships between the results obtained using the proposed indices and traditional indicators of business performance were investigated using correlation analysis. These correlations were also compared between manufacturing and non-manufacturing companies. The results of this study revealed the possibility of evaluating the affective management practices of companies based on publicly available text documents.
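The frequency and collocation indices described above can be sketched as follows. This is a minimal illustration only: the keyword list and the co-occurrence window are hypothetical stand-ins for the paper's actual affective-management lexicon and index definitions.

```python
AFFECT_TERMS = {"trust", "empathy", "satisfaction", "pride"}  # hypothetical lexicon

def affect_indices(report_text, window=5):
    """Frequency index: share of affect terms among all tokens.
    Collocation index: number of affect-term pairs co-occurring within
    `window` tokens of each other (each ordered pair counted once)."""
    tokens = report_text.lower().split()
    frequency = sum(t in AFFECT_TERMS for t in tokens) / max(len(tokens), 1)
    collocation = 0
    for i, tok in enumerate(tokens):
        if tok in AFFECT_TERMS:
            collocation += sum(nb in AFFECT_TERMS
                               for nb in tokens[i + 1:i + 1 + window])
    return frequency, collocation
```

A correlation of such indices against business-performance figures would then be a routine second step once the per-report values are computed.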
Abstract: The African Great Lakes Region refers to the zone around lakes Victoria, Tanganyika, Albert, Edward, Kivu, and Malawi. The main source of electricity in this region is hydropower, whose systems are generally characterized by relatively weak, isolated power schemes, poor maintenance and technical deficiencies, with limited electricity infrastructure. Most of the hydro sources are rain-fed, so there is normally a shortage of water during the dry seasons and extended droughts. In such calamities fossil fuel sources, in particular petroleum products and natural gas, are normally used to rescue the situation, but apart from being non-renewable, they also release huge amounts of greenhouse gases into the environment, which in turn accelerate the global warming that has at present reached an alarming level. Wind power is an ample, renewable, widely distributed, clean and free energy source that does not consume or pollute water. Wind-generated electricity is one of the most practical and commercially viable options for grid-quality, utility-scale electricity production. However, the main shortcoming associated with wind power generation is the fluctuation of its output in both space and time. Before deciding to establish a wind park at a site, the wind speed characteristics there should therefore be known thoroughly, as well as the local demand or transmission capacity. The main objective of this paper is to use monthly average wind speed data collected from one prospective site within the African Great Lakes Region to demonstrate that the available wind power there is high enough to generate electricity. The mean monthly values were calculated from records gathered on an hourly basis over a period of 5 years (2001 to 2005) from a site in Tanzania. The records, which were collected at a height of 2 m, were projected to a height of 50 m, the standard hub height of wind turbines. The overall monthly average wind speed was found to be 12.11 m/s, and June to November was established to be the windy season, as the wind speed during this season is above the overall monthly average. The available wind power density corresponding to the overall mean monthly wind speed was evaluated to be 1072 W/m², a potential that is worthwhile harvesting for the purpose of electricity generation.
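The two calculations behind this abstract, height extrapolation and power density, follow standard wind-engineering formulas, sketched below. The one-seventh power-law exponent and the air density are my assumptions (the abstract does not state which values were used), though the density chosen here does reproduce the reported 1072 W/m² from the 12.11 m/s mean speed.

```python
# Standard wind-power relations (a sketch under assumed parameter values).
RHO = 1.207  # assumed air density, kg/m^3

def extrapolate_speed(v_ref, h_ref=2.0, h_hub=50.0, alpha=1 / 7):
    """Power-law height extrapolation: v(h) = v_ref * (h / h_ref) ** alpha."""
    return v_ref * (h_hub / h_ref) ** alpha

def power_density(v, rho=RHO):
    """Available wind power per unit swept area: P/A = 0.5 * rho * v**3, in W/m^2."""
    return 0.5 * rho * v ** 3
```

With the 50 m mean speed from the abstract, `power_density(12.11)` evaluates to roughly 1072 W/m², matching the reported figure under the assumed density.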
Abstract: The novelty proposed in this study is twofold and consists in the development of a new color similarity metric based on the human visual system and a new color indexing scheme based on a textual approach. The new color similarity metric is based on the color perception of the human visual system, so the results returned by the indexing system can fulfill user expectations as much as possible. We developed a web application to collect users' judgments about the similarities between colors, and these results were used to estimate the metric proposed in this study. In order to index an image's colors, we used a text indexing engine, which facilitates the integration of visual features into a database of text documents. The textual signature is built by weighting the image's colors according to their occurrence in the image. The use of a textual indexing engine provides a simple, fast and robust solution for indexing images. A typical usage of the proposed system is the development of applications whose data are both visual and textual. In order to evaluate the proposed method we chose a price comparison engine as a case study, collecting a series of commercial offers containing the textual description and the image representing a specific commercial offer.
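The textual-signature idea, weighting an image's quantized colors by their occurrence and emitting them as repeated tokens for an off-the-shelf text engine, can be sketched as below. The three-color palette and the repetition scheme are illustrative assumptions on my part; the paper's actual quantization is driven by its human-visual-system similarity metric.

```python
from collections import Counter

# Tiny illustrative palette; a stand-in for the paper's HVS-based quantization.
PALETTE = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

def nearest_color(pixel):
    """Name of the palette color closest to the pixel (squared RGB distance)."""
    return min(PALETTE, key=lambda name: sum((p - c) ** 2
                                             for p, c in zip(pixel, PALETTE[name])))

def textual_signature(pixels, max_repeats=10):
    """Weight each quantized color by its share of the image and repeat its
    token proportionally, so a plain-text engine can index the image."""
    counts = Counter(nearest_color(p) for p in pixels)
    total = sum(counts.values())
    terms = []
    for name, count in counts.most_common():
        terms += [name] * max(1, round(max_repeats * count / total))
    return " ".join(terms)
```

The resulting string can be stored in the same field structure as the offer's textual description, which is what makes a single text index serve both modalities.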
Abstract: Severe acute respiratory syndrome (SARS) is a respiratory disease in humans caused by the SARS coronavirus. The treatment of coronavirus-associated SARS has been evolving and so far there is no consensus on an optimal regimen. The mainstream therapeutic interventions for SARS involve broad-spectrum antibiotics and supportive care, as well as antiviral agents and immunomodulatory therapy. Protein-ligand interaction plays a significant role in structure-based drug design. In the present work we have taken the receptor angiotensin-converting enzyme 2 (ACE-2) and identified the drugs that are commonly used against SARS: Lopinavir, Ritonavir, Ribavirin, and Oseltamivir. The receptor ACE-2 was docked with the above drugs, and the energy values obtained were as follows: Lopinavir (-292.3), Ritonavir (-325.6), Oseltamivir (-229.1), Ribavirin (-208.8). Based on the lowest energy values we chose the best two of the four conventional drugs. We tried to improve the binding efficiency and steric compatibility of these two drugs, namely Ritonavir and Lopinavir. Several modifications were made to the probable functional groups (phenylic and ketonic groups in the case of Ritonavir, and carboxylic groups in the case of Lopinavir) which were interacting with the receptor molecule. Analogs were prepared with the Marvin Sketch software and were docked using the HEX docking software. Lopinavir analog 8 and Ritonavir analog 11 showed significant energy values and are probable lead molecules. This implies that some of the modified drugs are better than the original drugs. Further work can be carried out to improve the steric compatibility of the drugs, based on the work done above, for more energy-efficient binding of the drugs to the receptor.
Abstract: The Internet and its ever-growing applications enable communities to share and collaborate through common platforms. However, this growing pattern is not yet witnessed for e-learning. This paper is based on doctoral research which aimed at investigating the ways students interact in an online campus and the support they look for and require. Content analysis, based on the Panchoo/Jaillet methodology, was carried out on four synchronous meetings between a tutor and his ten students. The UNIV-Rct e-campus, analogous to a physical campus, was found to be user-friendly, and the students enrolled in a master's course faced no difficulties in using it. In addition to the environmental aspects, the pedagogical implementation of the course drove the students to interact and collaborate significantly, and this contributed to overcoming the problems faced by the distance learners. This completely online model was found to be fruitful in helping distant learners fight their loneliness and brave their difficulties in a socio-constructivist approach.
Abstract: The work presented here deals with a new scope of application of information and communication technologies for the improvement of the election process in a biased environment. We introduce a new concept for the construction of an information-communication system for an election participant. It consists of four main components: software, physical infrastructure, structured information and trained staff. The structured information is the basis of the whole system and is the collection of all possible events (irregularities among them) at the polling stations, which are structured in special templates and forms and integrated into mobile devices. The software represents a package of analytic modules which operate on a dynamic database. The application of modern communication technologies facilitates the immediate exchange of information and relevant documents between the polling stations and the participant's server. No less important is the training of the staff for the proper functioning of the system; an e-training system with various modules should be applied in this respect. The presented methodology is primarily focused on election processes in countries of emerging democracies. It can be regarded as a tool for the monitoring of the election process by political organizations and as one of the instruments to foster the spread of democracy in these countries.
Abstract: Most Question Answering (QA) systems are composed of three main modules: question processing, document processing and answer processing. The question processing module plays an important role in QA systems: if this module does not work properly, it causes problems for the other sections. Answer processing is, moreover, an emerging topic in Question Answering, where systems are often required to rank and validate candidate answers. These techniques, aimed at finding short and precise answers, are often based on semantic classification. This paper discusses a new model for question answering which improves two main modules: question processing and answer processing. Two important components form the basis of question processing. The first is question classification, which specifies the types of the question and the answer. The second is reformulation, which converts the user's question into a form understandable by the QA system in a specific domain. The answer processing module consists of candidate answer filtering and candidate answer ordering components, and it also has a validation section for interacting with the user. This module makes it easier to find the exact answer. In this paper we describe the question and answer processing modules, along with the modeling, implementation and evaluation of the system. The system was implemented in two versions. Results show that Version No. 1 gave correct answers to 70% of the questions (30 correct answers to 50 asked questions) and Version No. 2 gave correct answers to 94% of the questions (47 correct answers to 50 asked questions).
Abstract: Due to the tremendous amount of information provided by the World Wide Web (WWW), developing methods for mining the structure of web-based documents is of considerable interest. In this paper we present a similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as linear integer strings, whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. In this paper we apply the well-known technique of sequence alignment to solve a novel and challenging problem: measuring the structural similarity of generalized trees. In other words, we first transform our graphs, considered as high-dimensional objects, into linear structures. Then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we transform a graph similarity problem into a string similarity problem to develop an efficient graph similarity measure. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based document structures.
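The pipeline, serializing each graph into a property string and then aligning the strings, can be sketched as follows. The out-degrees-in-BFS-order string is an illustrative stand-in for the paper's own structural property strings, and `difflib.SequenceMatcher` stands in for a full sequence-alignment algorithm.

```python
from collections import deque
from difflib import SequenceMatcher

def property_string(adjacency):
    """Illustrative property string: node out-degrees in breadth-first order
    from node 0. (The paper derives richer structural strings.)"""
    seen, order, queue = {0}, [], deque([0])
    while queue:
        node = queue.popleft()
        order.append(len(adjacency[node]))
        for nxt in adjacency[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

def graph_similarity(adj_a, adj_b):
    """Alignment-based similarity of the two property strings, in [0, 1]."""
    return SequenceMatcher(None, property_string(adj_a),
                           property_string(adj_b)).ratio()
```

The key design point carries over from the paper: once the graph is linearized, any string-alignment machinery applies, reducing a hard graph comparison to a tractable sequence comparison.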
Abstract: At the end of the 17th century the Greek Orthodox archbishop in Venice, Meletios Typaldos, decided to convert the Orthodox Greeks to Catholicism. More than 5,000 Greeks were living in Venice at the time. Their leadership, the Greek confraternity, fought against Meletios. Participants in this conflict included the Pope, the ecumenical Patriarch in Constantinople and Peter the Great of Russia. In my view, which is supported by evidence and theory, the whole affair is a strong conflict between two actors, the archbishop and the confraternity, and the object of the conflict was the conversion of the Greek Orthodox faithful to Catholicism. Ethnicity, especially for the Greeks of that era, was identified with Orthodoxy, so this was a conflict of identity. The results of that conflict were of tremendous importance to the Greeks in Venice and affected them for a long time.
Abstract: Adenylate kinase (AK) catalyses a phosphotransferase reaction and plays an important role in cellular energy homeostasis. Inhibitors of bacterial AK are useful in the treatment of several bacterial infections. To identify novel inhibitors of AK, docking studies were performed using the 3D structure of Bacillus stearothermophilus adenylate kinase from the Protein Data Bank (1ZIP). 46 quinoxaline analogues were docked into 1ZIP, and the most strongly interacting compounds were selected for further studies based on their binding energies.
Abstract: The last two decades witnessed some advances in the development of Arabic character recognition (CR) systems. Arabic CR faces technical problems not encountered in any other language, which make Arabic CR systems achieve relatively low accuracy and retard their establishment as market products. We propose the basic stages of a system that attacks the problem of recognizing online Arabic cursive handwriting. Rule-based methods are used to perform simultaneous segmentation and recognition of word portions in an unconstrained cursively handwritten document using dynamic programming. The output of these stages is a ranked list of the possible decisions. A new technique for text line separation is also used.
Abstract: An artificial neural network (ANN) approach was used to model the energy consumption of wheat production. This study was conducted over 35,300 hectares of irrigated and dry-land wheat fields in Canterbury in the 2007-2008 harvest year. Several direct and indirect factors were used to create an artificial neural network model to predict energy use in wheat production. The final model predicts energy consumption from farm conditions (size of the wheat area and number of paddocks), farmers' social properties (education), and energy inputs (N and P use, fungicide consumption, seed consumption, and irrigation frequency). It can predict energy use in Canterbury wheat farms with an error margin of ±7% (±1600 MJ/ha).
Abstract: In this study, a fuzzy similarity approach for Arabic
web pages classification is presented. The approach uses a fuzzy
term-category relation by manipulating membership degree for the
training data and the degree value for a test web page. Six measures
are used and compared in this study. These measures include:
Einstein, Algebraic, Hamacher, MinMax, Special case fuzzy and
Bounded Difference approaches. These measures are applied and
compared using 50 different Arabic web pages. The Einstein measure gave the best performance among the measures considered. An analysis of these measures and concluding remarks are presented in this study.
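Most of the measures named above correspond to standard fuzzy conjunction operators on two membership degrees in [0, 1]. A sketch of five of them follows (the "special case fuzzy" measure is not specified in the abstract and is omitted); how the paper aggregates these over term-category relations is not shown here.

```python
def einstein(a, b):
    """Einstein product: ab / (2 - (a + b - ab))."""
    return (a * b) / (2 - (a + b - a * b))

def algebraic(a, b):
    """Algebraic (probabilistic) product."""
    return a * b

def hamacher(a, b):
    """Hamacher product: ab / (a + b - ab), with T(0, 0) defined as 0."""
    return 0.0 if a == b == 0 else (a * b) / (a + b - a * b)

def min_max(a, b):
    """Minimum t-norm (the 'MinMax' conjunction)."""
    return min(a, b)

def bounded_difference(a, b):
    """Lukasiewicz / bounded difference: max(0, a + b - 1)."""
    return max(0.0, a + b - 1)
```

Note how the operators order themselves for equal inputs: bounded difference is the most pessimistic, then Einstein, algebraic, Hamacher, and minimum, which is one reason their classification performance can differ.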
Abstract: Text document categorization involves a large amount of data and features. The high dimensionality of the feature space is troublesome and can affect the performance of classification. Therefore, feature selection is strongly considered one of the crucial parts of text document categorization. Selecting the best features to represent documents can reduce the dimensionality of the feature space and hence increase performance. Many approaches have been implemented by various researchers to overcome this problem. This paper proposes a novel hybrid approach to feature selection in text document categorization based on Ant Colony Optimization (ACO) and Information Gain (IG). We also present state-of-the-art algorithms by several other researchers.
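The Information Gain half of such a hybrid can be sketched as below; the ACO search over feature subsets is omitted, and documents are modelled as sets of terms, an assumption made here for brevity.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(docs, labels, term):
    """IG(term) = H(labels) - H(labels | term present / absent).
    `docs` is a list of term sets, aligned with `labels`."""
    n = len(labels)
    present = [lab for doc, lab in zip(docs, labels) if term in doc]
    absent = [lab for doc, lab in zip(docs, labels) if term not in doc]
    conditional = sum(len(part) / n * entropy(part)
                      for part in (present, absent) if part)
    return entropy(labels) - conditional
```

A term whose presence perfectly splits the classes reaches the maximum gain (the full label entropy), while a term distributed independently of the classes scores near zero; ranking terms by this score is the usual IG filter step before any metaheuristic refinement.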
Abstract: In this paper, we present symbolic recognition models to extract knowledge characterized by document structures. Focussing on the extraction and the meticulous exploitation of the semantic structure of documents, we obtain a meaningful contextual tagging corresponding to different unit types (title, chapter, section, enumeration, etc.).
Abstract: Script identification is one of the challenging steps in the development of optical character recognition systems for bilingual or multilingual documents. In this paper an attempt is made to identify English numerals at word level in Punjabi documents using Gabor features. A support vector machine (SVM) classifier with five-fold cross-validation is used to classify the word images. The results obtained are quite encouraging: average accuracy with the RBF, polynomial and linear kernel functions comes out to be greater than 99%.
Abstract: We propose an information filtering system that uses index word selection from a document set based on the topics included in that set. This method narrows the vocabulary down to the particularly characteristic words in a document set, and the topics are obtained by Sparse Non-negative Matrix Factorization (SNMF). In information filtering, a document is often represented by a vector whose elements correspond to the weights of the index words, and the dimension of the vector grows as the number of documents increases. It is therefore possible that words that are useless as index words for information filtering are included. To address this problem, the dimension needs to be reduced. Our proposal reduces the dimension by selecting index words based on the topics included in a document set; we apply Sparse Non-negative Matrix Factorization to the document set to obtain these topics. Filtering is carried out based on a centroid of the learning document set, which is regarded as the user's interest. The centroid is represented by a document vector whose elements consist of the weights of the selected index words. Using the English test collection MEDLINE, we confirm the effectiveness of our proposal: the proposed selection improves recommendation accuracy over previous methods when an appropriate number of index words is selected. In addition, we examined the selected index words and found that our proposal was able to select index words covering some minor topics included in the document set.
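The centroid-based filtering step can be sketched as follows, using cosine similarity as the matching function (an assumption, since the abstract does not name its similarity function) and taking the SNMF-selected index words as the already-fixed vector dimensions.

```python
import math

def centroid(vectors):
    """Component-wise mean of the learning documents' vectors;
    treated as the user's interest profile."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def cosine(u, v):
    """Cosine similarity; 0.0 when either vector is all zeros."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def filter_documents(learning_vectors, candidate_vectors, threshold=0.5):
    """Return indices of candidates similar enough to the learning centroid."""
    c = centroid(learning_vectors)
    return [i for i, v in enumerate(candidate_vectors) if cosine(c, v) > threshold]
```

Reducing the dimensions to topic-characteristic words shrinks both the centroid and every candidate vector, which is exactly where the recommendation-accuracy gain described above comes from.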
Abstract: Text similarity measurement is a fundamental issue in
many textual applications such as document clustering, classification,
summarization and question answering. However, prevailing approaches based on the Vector Space Model (VSM) suffer, to varying degrees, from the limitations of the Bag of Words (BOW) representation, which ignores the semantic relationships among words. Enriching document representation
with background knowledge from Wikipedia is proven to be an effective
way to solve this problem, but most existing methods still
cannot avoid similar flaws of BOW in a new vector space. In this
paper, we propose a novel text similarity measurement which goes
beyond VSM and can find semantic affinity between documents.
Specifically, it is a unified graph model that exploits Wikipedia as
background knowledge and synthesizes both document representation
and similarity computation. The experimental results on two different
datasets show that our approach significantly improves VSM-based
methods in both text clustering and classification.
Abstract: A variety of new technology-based services have
emerged with the development of Information and Communication
Technologies (ICTs). Since technology-based services have technology-driven characteristics, identifying the relationships between technology-based services and ICTs would yield meaningful implications. Thus, this paper proposes an approach for identifying the
relationships between technology-based services and ICTs by
analyzing patent documents. First, business model (BM) patents are
classified into relevant service categories. Second, patent citation
analysis is conducted to investigate the technological linkage and impacts between technology-based services and ICTs at macro level.
Third, as a micro level analysis, patent co-classification analysis is
employed to identify the technological linkage and coverage. The
proposed approach could guide and help managers and designers of
technology-based services to discover the opportunity of the development of new technology-based services in emerging service sectors.