Abstract: I/O workload is a critical factor in analyzing I/O patterns and file system performance. However, tracing I/O operations on the fly in a distributed parallel file system is non-trivial due to collection overhead and the large volume of data. In this paper, we design and implement a parallel file system logging method for high performance computing using a shared memory-based multi-layer scheme. It minimizes overhead by reducing logging-operation response time and provides an efficient post-processing scheme through shared memory. A separate logging server can collect sequential logs from multiple clients in a cluster through packet communication. Implementation and evaluation results show the low overhead and high scalability of this architecture for high performance parallel log analysis.
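The shared-memory logging layer described above can be sketched as follows. This is a minimal illustration under our own assumptions (record layout, sizes and function names are invented), not the paper's implementation:

```python
# Minimal sketch of a shared-memory log buffer: clients append
# fixed-size records with low latency, and a collector drains them
# for post-processing. Illustrative only; the record layout, sizes
# and names are assumptions, not the paper's design.
from multiprocessing import shared_memory
import struct

RECORD_SIZE = 64            # bytes per fixed-size log record
HEADER_SIZE = 8             # one 64-bit write index at the front
NUM_SLOTS = 1024

def create_buffer():
    size = HEADER_SIZE + RECORD_SIZE * NUM_SLOTS
    shm = shared_memory.SharedMemory(create=True, size=size)
    shm.buf[:HEADER_SIZE] = struct.pack("<Q", 0)   # write index = 0
    return shm

def append(shm, record: bytes):
    """Client side: write one record, then bump the write index."""
    idx = struct.unpack_from("<Q", shm.buf, 0)[0]
    off = HEADER_SIZE + (idx % NUM_SLOTS) * RECORD_SIZE
    shm.buf[off:off + RECORD_SIZE] = record.ljust(RECORD_SIZE, b"\0")
    struct.pack_into("<Q", shm.buf, 0, idx + 1)

def drain(shm):
    """Collector side: read back every record written so far."""
    idx = struct.unpack_from("<Q", shm.buf, 0)[0]
    return [bytes(shm.buf[HEADER_SIZE + i * RECORD_SIZE:
                          HEADER_SIZE + (i + 1) * RECORD_SIZE]).rstrip(b"\0")
            for i in range(min(idx, NUM_SLOTS))]

shm = create_buffer()
append(shm, b"open /data/f1")
append(shm, b"read 4096")
records = drain(shm)
print(records)
shm.close()
shm.unlink()
```

In a real system the client and collector would be separate processes attached to the same segment; the key point is that the client's logging path is a short in-memory write, while parsing and aggregation happen off the critical path.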
Abstract: This study proposes a new recommender system based on collaborative folksonomy. The purpose of the proposed system is to recommend Internet resources (such as books, articles, documents, pictures, audio and video) to users. The proposed method includes four steps: creating the user profile based on tags, grouping similar users into clusters using agglomerative hierarchical clustering, finding similar resources based on the user's past collections by using content-based filtering, and recommending similar items to the target user. This study examines the system's performance on a dataset collected from "del.icio.us," a well-known social bookmarking website. Experimental results show that the proposed hybrid recommender system, combining tag-based collaborative and content-based filtering, is promising and effective for folksonomy-based bookmarking websites.
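The four steps above can be sketched roughly as follows. This is a toy illustration with invented tags and resources; the similarity measure and single-link clustering threshold are simplifying assumptions, not the authors' exact method:

```python
# Toy sketch of the pipeline: tag-based user profiles ->
# user similarity -> simple agglomerative grouping ->
# content-based scoring of resources for a target user.
from math import sqrt

# Step 1: user profiles as tag-frequency vectors (hypothetical data).
profiles = {
    "alice": {"python": 3, "ml": 2},
    "bob":   {"python": 2, "ml": 1, "web": 1},
    "carol": {"art": 4, "photo": 2},
}

def cosine(a, b):
    dot = sum(a.get(t, 0) * b.get(t, 0) for t in set(a) | set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Step 2: single-link agglomerative grouping with a similarity threshold.
def cluster(users, threshold=0.5):
    clusters = [[u] for u in users]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(cosine(profiles[u], profiles[v]) >= threshold
                       for u in clusters[i] for v in clusters[j]):
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

# Steps 3-4: score tagged resources against the target user's profile
# and recommend the best content-based match.
resources = {"numpy-guide": {"python": 1, "ml": 1}, "gallery": {"art": 1}}

def recommend(user):
    return max(resources, key=lambda r: cosine(profiles[user], resources[r]))

groups = cluster(profiles)
print(groups)
print(recommend("alice"))
```

A production system would of course operate on thousands of users and use the cluster structure to restrict the candidate set, but the flow of profile building, clustering, and content-based scoring is the same.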
Abstract: Human amniotic membrane (HAM) is a useful biological material for the reconstruction of a damaged ocular surface. The processing and preservation of HAM are critical to protect patients undergoing amniotic membrane transplant (AMT) from cross infections. For HAM preparation, a human placenta is obtained after an elective cesarean delivery. Before collection, the donor is screened for seronegativity for HCV, HBsAg, HIV and syphilis. After collection, the placenta is washed in balanced salt solution (BSS) in a sterile environment. The amniotic membrane is then separated from the placenta and chorion while keeping the preparation in BSS. The HAM is then scraped manually until all debris is removed and a clear, transparent membrane is acquired. Nitrocellulose membrane filters are then placed on the stromal side of the HAM and cut around the edges, with a little membrane folded towards the other side to make it easy to separate during surgery. The HAM is finally stored in a 1:1 solution of glycerine and Dulbecco's Modified Eagle Medium (DMEM) containing antibiotics. The capped Borosil vials containing HAM are kept at -80°C until use. At the time of surgery, a vial is thawed to room temperature and opened under sterile operating-theatre conditions.
Abstract: A mobile ad hoc network (MANET) is a collection of mobile devices that form a communication network with no pre-existing wiring or infrastructure. Multiple routing protocols have been developed for MANETs. As MANETs gain popularity, the need to support real-time applications grows as well. Such applications have stringent quality of service (QoS) requirements on metrics such as throughput, end-to-end delay, and energy. Due to dynamic topology and bandwidth constraints, supporting QoS is a challenging task. QoS-aware routing is an important building block for QoS support. The primary goal of a QoS-aware protocol is to determine a path from source to destination that satisfies the QoS requirements. This paper proposes a new energy- and delay-aware protocol called energy and delay aware TORA (EDTORA), based on an extension of the Temporally Ordered Routing Algorithm (TORA). Energy and delay verification of the query packet is performed at each node. Simulation results show that the proposed protocol outperforms TORA in terms of network lifetime, packet delivery ratio and end-to-end delay.
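The per-hop energy and delay verification of the query packet could look roughly like this. The field names, thresholds and accounting below are our assumptions for illustration, not the EDTORA specification:

```python
# Sketch of per-hop QoS admission for a route query packet:
# a node forwards the query only if its residual energy is
# adequate and the accumulated path delay stays within budget.
from dataclasses import dataclass

@dataclass
class QueryPacket:
    dest: str
    delay_budget_ms: float   # remaining end-to-end delay budget
    min_energy_j: float      # minimum residual energy required per node

@dataclass
class Node:
    name: str
    residual_energy_j: float
    link_delay_ms: float     # delay to reach this node from the sender

def admit(node: Node, pkt: QueryPacket) -> bool:
    """Return True if this node may forward the query."""
    if node.residual_energy_j < pkt.min_energy_j:
        return False                           # node too depleted
    if node.link_delay_ms > pkt.delay_budget_ms:
        return False                           # delay budget exceeded
    pkt.delay_budget_ms -= node.link_delay_ms  # charge this hop
    return True

pkt = QueryPacket(dest="D", delay_budget_ms=50.0, min_energy_j=2.0)
path = [Node("A", 5.0, 10.0), Node("B", 3.0, 15.0), Node("C", 1.0, 5.0)]
accepted = [n.name for n in path if admit(n, pkt)]
print(accepted)   # C is rejected for low residual energy
```

Pruning the query at depleted or slow nodes is what lets a QoS-aware protocol return only paths that satisfy the requirements end to end.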
Abstract: Mobile ad hoc network is a collection of mobile
nodes communicating through wireless channels without any
existing network infrastructure or centralized administration.
Because of the limited transmission range of wireless network
interfaces, multiple "hops" may be needed to exchange data
across the network. Consequently, many routing algorithms
have come into existence to satisfy the needs of
communications in such networks. Researchers have
conducted many simulations comparing the performance of
these routing protocols under various conditions and
constraints. One question that arises is whether the speed of
the nodes affects the relative performance of the routing
protocols being studied. This paper addresses this question by
simulating two routing protocols, AODV and DSDV. The
protocols were simulated using ns-2 and compared in terms of
packet delivery fraction, normalized routing load and average
delay, while varying the number of nodes and their speed.
Abstract: This paper addresses computer simulation of ground movement above an underground mine. The simulation was performed with the ADINA software package, using nonlinear elastic-plastic analysis and the finite element method. One representative profile from the 'Stara Jama' mine in Zenica was investigated. A collection and selection of both geo-mechanical data and geometric parameters of the mine were necessary for performing these simulations. The estimated results were compared with measured values (vertical displacement of the surface), and the simulation was then performed with assumed excavation dynamics and dimensions over a period of time. The results are presented as bitmaps and charts.
Abstract: The amount and heterogeneity of data in biomedical research, notably in interdisciplinary research, require new methods for the collection, presentation and analysis of information. Important data from laboratory experiments as well as patient trials are available but reside in distributed resources. The Charité Medical School in Berlin, together with the German Research Foundation (DFG), has established a new information service center for kidney diseases and transplantation (Open European Nephrology Science Centre - OpEN.SC). The system is based on a service-oriented architecture (SOA) with main and auxiliary modules arranged in four layers. To improve the reuse and efficient arrangement of the services, the functionalities are described as business processes using the standardised Business Process Execution Language (BPEL).
Abstract: A local municipality has decided to build a sewage pit to receive residential sewage waste arriving by tank trucks. The daily accumulated waste is to be pumped to a nearby wastewater treatment facility to be reused for agricultural and construction projects. A discrete-event simulation model using Arena software was constructed to assist in defining the capacity of the system in cubic meters, the number of tank trucks using the system, the number of unload docks required, the number of standby areas needed, and the manpower required for data collection at the entrance checkpoint and for truck load toxicity testing. The results of the model are statistically validated. Simulation turned out to be an excellent tool in the facility planning effort for the pit project, as it ensured a smooth flow of tank-truck load discharge and the best utilization of facilities on site.
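The mechanics of the kind of discrete-event model used here can be sketched in a few lines. This toy example (deterministic times, a single unload dock, invented parameters) only illustrates how waiting and utilization fall out of arrival and service times; the actual study used Arena with stochastic inputs:

```python
# Toy discrete-event view of tank trucks at a single unload dock:
# compute each truck's wait and when the dock becomes free.
# Deterministic invented data for illustration only.
arrivals = [0, 4, 5, 14]      # truck arrival times (minutes)
service = 6                   # unload time per truck (minutes)

dock_free_at = 0
waits = []
for t in arrivals:
    start = max(t, dock_free_at)   # wait if the dock is busy
    waits.append(start - t)
    dock_free_at = start + service

print(waits)                  # per-truck waiting times
print(dock_free_at)           # time the last truck finishes
```

Replacing the fixed numbers with sampled interarrival and service distributions, and adding more docks, yields exactly the capacity questions (docks, standby areas, staffing) the Arena model was built to answer.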
Abstract: Question answering (QA) aims at retrieving precise information from a large collection of documents. Most question answering systems are composed of three main modules: question processing, document processing and answer processing. The question processing module plays an important role in QA systems by reformulating questions. The answer processing module is an emerging topic in QA systems, where systems are often required to rank and validate candidate answers. These techniques, which aim at finding short and precise answers, are often based on semantic relations and co-occurring keywords. This paper discusses a new model for question answering that improves the two main modules, question processing and answer processing, both of which affect the evaluation of the system's operation. Two important components form the basis of question processing. The first component is question classification, which specifies the types of the question and the expected answer. The second is reformulation, which converts the user's question into a question understandable by the QA system in a specific domain. The objective of an answer validation task is to judge the correctness of an answer returned by a QA system according to the text snippet given to support it. For validating answers, we apply candidate answer filtering and candidate answer ranking, followed by a final validation step based on user voting. This paper also describes a new architecture for the question and answer processing modules, together with the modeling, implementation and evaluation of the system. The system differs from most question answering systems in its answer validation model, which makes it better suited to finding exact answers. Evaluation of the model on a total of 50 questions shows a 92% improvement in the system's decisions.
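The question-classification component described above can be illustrated with a minimal rule-based sketch. Mapping the leading wh-word to an expected answer type is a common textbook simplification, not the paper's actual classifier:

```python
# Minimal rule-based question classifier: map the leading
# wh-word to an expected answer type. Illustrative only.
ANSWER_TYPES = {
    "who": "PERSON", "where": "LOCATION", "when": "DATE",
    "how many": "NUMBER", "what": "DEFINITION", "why": "REASON",
}

def classify(question: str) -> str:
    q = question.lower().strip()
    # check longer (multi-word) cues before shorter ones
    for cue in sorted(ANSWER_TYPES, key=len, reverse=True):
        if q.startswith(cue):
            return ANSWER_TYPES[cue]
    return "OTHER"

print(classify("Who wrote Hamlet?"))          # PERSON
print(classify("How many moons has Mars?"))   # NUMBER
```

Knowing the expected answer type lets the answer processing module filter candidates early, before ranking and validation.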
Abstract: Software project effort estimation is frequently seen as complex and expensive by individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure; you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection has a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural network based studies. In this study we explore the reasons for those disappointing results and implement different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies that used traditional metrics. To enable comparison, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to the new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on the Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is that data collection requires time and care. To make more thorough use of the samples collected, the k-fold cross-validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied successfully in software cost estimation studies.
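The k-fold cross-validation step mentioned above works by rotating a held-out fold through the data, which is valuable precisely when samples are scarce and costly to collect. A generic pure-Python sketch of the index splitting (not the authors' MLP experiment):

```python
# Generic k-fold split: partition sample indices into k folds;
# each fold serves once as the validation set while the rest train.
def k_fold_indices(n_samples, k):
    folds = []
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        val_set = set(val)
        train = [i for i in range(n_samples) if i not in val_set]
        folds.append((train, val))
        start += size
    return folds

# Example: 10 project samples, 5 folds -> each sample validates once.
splits = k_fold_indices(10, 5)
for train, val in splits:
    print("train:", train, "validate:", val)
```

Averaging the model's error over the k validation folds gives a far more stable accuracy estimate than a single train/test split when only a few dozen project samples are available.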
Abstract: Advances in location-based data collection technologies such as GPS and RFID, together with the rapid reduction of their costs, provide us with a huge and continuously increasing amount of data about the movement of vehicles, people and goods in urban areas. This explosive growth of geospatially-referenced data has far outpaced planners' ability to utilize and transform the data into insightful information, creating an adverse impact on the return on the investment made to collect and manage the data. Addressing this pressing need, we designed and developed DIVAD, a dynamic and interactive visual analytics dashboard that allows city planners to explore and analyze a city's transportation data to gain valuable insights about the city's traffic flow and transportation requirements. We demonstrate the potential of DIVAD through the use of interactive choropleth and hexagon-binning maps to explore and analyze large taxi-transportation data of Singapore for different geographic areas and time periods.
Abstract: Classifying biomedical literature is a difficult and challenging task, especially when a large number of biomedical articles must be organized into a hierarchical structure. In this paper, we present an approach for classifying a collection of biomedical text abstracts downloaded from the Medline database with the help of ontology alignment. To accomplish our goal, we construct two types of hierarchies, the OHSUMED disease hierarchy and the Medline abstract disease hierarchies, from the OHSUMED dataset and the Medline abstracts, respectively. Then, we enrich the OHSUMED disease hierarchy before adapting it to the ontology alignment process for finding probable concepts or categories. Subsequently, we compute the cosine similarity between the vectors of probable concepts (in the "enriched" OHSUMED disease hierarchy) and the vectors in the Medline abstract disease hierarchies. Finally, we assign a category to each new Medline abstract based on the similarity score. The experimental results show that the performance of our proposed approach for hierarchical classification is slightly better than that of multi-class flat classification.
Abstract: Scolothrips longicornis Priesner is one of the important predators of tetranychid mites, with a wide distribution throughout Iran. Life table and population growth parameters of S. longicornis feeding on the two-spotted spider mite Tetranychus turkestani Ugarov & Nikolski were investigated under laboratory conditions (26±1ºC, 65±5% R.H. and 16L:8D). To carry out these experiments, colonies of S. longicornis reared on cowpea infested with T. turkestani were prepared. Eggs less than 24 hours old were selected and reared, and the emerged larvae fed directly on cowpea leaf discs infested with T. turkestani. Thirty 24-hour-old S. longicornis females were selected and released on infested leaf discs. They were transferred daily to a new leaf disc, and the eggs laid were counted. The experiment continued until the last thrips had died. The results showed that the mean age at mortality of the adult female thrips was between 21 and 25 days, which is nearly equal to the life expectancy (ex) at the time of adult eclosion. Parameters of the reproductive table, including the gross reproductive rate, net reproductive rate, intrinsic rate of natural increase and finite rate of increase, were 48.91, 37.63, 0.26 and 2.3, respectively. Mean eggs per female per day, mean fertile eggs per female per day, gross hatch rate, mean net age fertility, mean net age fecundity, net fertility rate and net fecundity rate were 2.23, 1.76, 0.87, 13.87, 14.26, 69.1 and 78.5, respectively. The sex ratio of the offspring was also recorded daily. The highest proportion of females, 0.88, occurred on the first day of oviposition. The sex ratio decreased gradually, falling below 0.46 after day 26, when the oviposition rate also declined. It therefore seems that maintaining a rearing culture of this predatory thrips for mass rearing beyond 26 days after oviposition begins is not profitable.
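The reproductive-table parameters reported above follow from standard life-table formulas: the net reproductive rate is R0 = Σ lx·mx, and the intrinsic rate of increase r solves the Euler-Lotka equation Σ e^(−r·x)·lx·mx = 1. The sketch below computes both from an invented survival/fecundity schedule, not the paper's data:

```python
# Life-table parameters from an age schedule of survival (lx)
# and female offspring per female (mx). Toy data for illustration.
from math import exp

ages = [0, 1, 2, 3, 4]            # age class x (days)
lx = [1.0, 0.9, 0.8, 0.6, 0.3]    # proportion surviving to age x
mx = [0.0, 0.5, 1.2, 1.0, 0.4]    # female eggs per female at age x

# Net reproductive rate: expected lifetime female offspring per female.
R0 = sum(l * m for l, m in zip(lx, mx))

# Intrinsic rate of increase r: solve sum(exp(-r*x)*lx*mx) == 1
# (the Euler-Lotka equation) by bisection; the sum is decreasing in r.
def euler_lotka(r):
    return sum(exp(-r * x) * l * m for x, l, m in zip(ages, lx, mx)) - 1.0

lo, hi = 0.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if euler_lotka(mid) > 0:
        lo = mid          # root lies above mid
    else:
        hi = mid
r = (lo + hi) / 2
finite_rate = exp(r)      # lambda, the finite rate of increase

print(round(R0, 2), round(r, 3), round(finite_rate, 3))
```

With the paper's observed schedule this procedure yields the reported values (net reproductive rate 37.63, intrinsic rate 0.26, finite rate 2.3 per the relationship λ = e^r, within rounding).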
Abstract: A perfect secret-sharing scheme is a method to distribute a secret among a set of participants in such a way that only qualified subsets of participants can recover the secret, while the joint shares of the participants in any unqualified subset are statistically independent of the secret. The collection of all qualified subsets is called the access structure of the perfect secret-sharing scheme. In a graph-based access structure, each vertex of a graph G represents a participant and each edge of G represents a minimal qualified subset. The average information ratio of a perfect secret-sharing scheme realizing the access structure based on G is defined as AR = (Σ_{v∈V(G)} H(s_v)) / (|V(G)| · H(s)), where s is the secret and s_v is the share of vertex v, both random variables, and H is the Shannon entropy. The infimum of the average information ratio over all perfect secret-sharing schemes realizing a given access structure is called the optimal average information ratio of that access structure. Most known results about the optimal average information ratio give upper or lower bounds on it. In this paper, we present structures based on bipartite graphs and determine the exact values of the optimal average information ratio for some infinite classes of them.
Abstract: Since the IEC 61850 substation communication standard represents the trend in developing new generations of Substation Automation Systems (SAS), many IED manufacturers pursue this technique and apply to KEMA for certification. In order to bring products to market as fast as possible to meet customer demand, manufacturers often have their products certified only against basic environmental standards while claiming conformance to IEC 61850. Since verification institutes generally perform verification tests only on specific IEDs of each manufacturer, interoperability between all certified IEDs cannot be guaranteed; therefore, the interoperability of IEDs from different manufacturers needs to be tested. For these reasons, this study applies the definitions of the information models, communication services, GOOSE functionality and Substation Configuration Language (SCL) of IEC 61850 to establish the communication protocols and build the test environment. The procedures for testing data collection and exchange in the peer-to-peer (P2P) and client/server communication modes of IEC 61850 are as follows. First, test the GOOSE message communication capability of IEDs from different manufacturers. Second, collect data from each IED with a SCADA system and use an HMI to display it on the SCADA platform. Finally, problems commonly encountered in the test procedure are summarized.
Abstract: Music segmentation is a key issue in music information
retrieval (MIR) as it provides an insight into the
internal structure of a composition. Structural information about
a composition can improve several tasks related to MIR such
as searching and browsing large music collections, visualizing
musical structure, lyric alignment, and music summarization.
The authors of this paper present the MTSSM framework, a
two-layer framework for the multi-track segmentation of symbolic
music. The strength of this framework lies in the combination of
existing methods for local track segmentation with the application
of global structure information spanning multiple tracks.
The first layer of the MTSSM uses various string matching
techniques to detect the best candidate segmentations for each
track of a multi-track composition independently. The second
layer combines all single-track results and determines the best
segmentation for each track with respect to the global structure of
the composition.
Abstract: Mining frequent tree patterns has many useful applications in XML mining, bioinformatics, network routing, etc. Most frequent subtree mining algorithms (e.g. FREQT, TreeMiner and CMTreeMiner) use the anti-monotone property in the candidate subtree generation phase. However, none of these algorithms has verified the correctness of this property on tree-structured data. In this research it is shown that anti-monotonicity does not generally hold when using weighted support in tree pattern discovery. As a result, tree mining algorithms based on this property would probably miss some of the valid frequent subtree patterns in a collection of trees. In this paper, we investigate the correctness of the anti-monotone property for the problem of weighted frequent subtree mining. In addition, we propose W3-Miner, a new algorithm for the full extraction of frequent subtrees. The experimental results confirm that W3-Miner finds frequent subtrees that the previously proposed algorithms are not able to discover.
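The failure of anti-monotonicity under occurrence-weighted support can be seen in a tiny example: in the data tree a(b, b), the one-node pattern "a" has a single occurrence, while its supertree a→b has two, so extending a pattern can increase its weighted support. The sketch below demonstrates this with a deliberately simplified occurrence count; the construction is our illustration, not taken from the paper:

```python
# Counting occurrences of tiny patterns in one data tree a(b, b):
# the one-node pattern "a" occurs once, but its supertree "a -> b"
# occurs twice, so occurrence-weighted support is not anti-monotone.
data_tree = ("a", [("b", []), ("b", [])])   # (label, children)

def count_label(tree, label):
    """Occurrences of a single-node pattern."""
    lab, children = tree
    return (lab == label) + sum(count_label(c, label) for c in children)

def count_edge(tree, parent, child):
    """Occurrences of a parent->child two-node pattern."""
    lab, children = tree
    here = sum(c[0] == child for c in children) if lab == parent else 0
    return here + sum(count_edge(c, parent, child) for c in children)

w_a = count_label(data_tree, "a")        # weighted support of pattern a
w_ab = count_edge(data_tree, "a", "b")   # weighted support of a -> b
print(w_a, w_ab)                         # 1 2: support grew on extension
```

Under plain (per-tree) support both patterns would score 1 and anti-monotonicity would hold; counting occurrences is what breaks the pruning assumption.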
Abstract: The demand for autonomous resource
management for distributed systems has increased in recent
years. Distributed systems require an efficient and powerful
communication mechanism between applications running on
different hosts and networks. The use of mobile agent
technology to distribute and delegate management tasks
promises to overcome the scalability and flexibility limitations
of the currently used centralized management approach. This
work proposes a multi-agent system that adopts mobile agents
as a technology for task distribution, result collection, and
resource management in large-scale distributed systems. A
new mobile agent-based approach for collecting results from
distributed system elements is presented. Artificial
intelligence techniques based on intelligent agents give the
system proactive behavior. The presented results are based
on a design example of an application operating in a mobile
environment.
Abstract: The aim of this paper is to explain what a multi-enterprise tie is, what evidence its analysis provides, and how the cooperation mechanism influences the establishment of a multi-enterprise tie. The study focuses on businesses of smaller dimension, geographically dispersed, whose businessmen are learning to cooperate in an international environment. The empirical evidence obtained so far permits the following conclusions: the tie is not long-lasting, it has an end; opportunism is an opportunity to learn; the multi-enterprise tie is a space in which to learn about the cooperation mechanism; the local tie permits a businessman to alternate between competition and cooperation strategies; the disappearance of a tie is a learning experience for a businessman, diminishing the possibility of failure in the next tie; the cooperation mechanism tends to eliminate hierarchical relations; the multi-enterprise tie diminishes asymmetries and permits SMEs to negotiate from a better position with large companies; and the multi-enterprise tie has a positive impact on the local system. The empirical evidence was collected through the following instruments: direct observation at a business encounter attended by the businesses in 2003 (202 Mexican agro-industry SMEs), a survey applied in 2004 (129 businesses), a questionnaire applied in 2005 (86 businesses), field visits to the businesses during the period 2006-2008, and a survey applied by telephone in 2008 (55 Mexican agro-industry SMEs).
Abstract: XML is becoming a de facto standard for online data exchange. Existing XML filtering techniques based on a publish/subscribe model are focused on highly structured data marked up with XML tags. These techniques are efficient for filtering documents of data-centric XML but are not effective for filtering the element contents of document-centric XML. In this paper, we propose an extended XPath specification that includes the special matching character '%' used in the LIKE operation of SQL, in order to overcome the difficulty of writing queries that adequately filter element contents under the previous XPath specification. We also present a novel technique for filtering a collection of document-centric XML documents, called Pfilter, which is able to exploit the extended XPath specification. We present several performance studies of efficiency and scalability using the multi-query processing time (MQPT).
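The proposed '%' wildcard for element-content matching behaves like SQL's LIKE. A rough sketch of such a content predicate, translating the pattern to an anchored regular expression; this is our illustration of the idea, not the Pfilter implementation:

```python
# Translate a LIKE-style pattern with '%' (matches any substring)
# into an anchored regex and use it to filter element content.
import re

def like_to_regex(pattern: str) -> re.Pattern:
    parts = pattern.split("%")
    # escape literal parts; each '%' becomes '.*'
    return re.compile("^" + ".*".join(re.escape(p) for p in parts) + "$",
                      re.DOTALL)

def content_matches(pattern: str, text: str) -> bool:
    return like_to_regex(pattern).match(text) is not None

# e.g. an extended-XPath predicate like //article[title = "%XML%filtering%"]
print(content_matches("%XML%filtering%", "Scalable XML stream filtering"))
print(content_matches("%XML%filtering%", "Relational query processing"))
```

In a streaming filter, such compiled predicates would be evaluated against element text events as the document is parsed, alongside the structural XPath matching.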