Abstract: Detection and recognition of the human body and extraction of its measures (width and length of the human body) from images are a major issue in object detection and have been an important field in image, signal and vision computing in recent years. Finding people in images and extracting their features is a particularly important problem in object recognition, because people can exhibit high variability in appearance. This variability may be due to the configuration of a person (e.g., standing vs. sitting vs. jogging), the pose (e.g., frontal vs. lateral view), clothing, and variations in illumination. In this study, the human body is first recognized in the image, and then its measures are extracted from the image.
Abstract: In the last few years, the Semantic Web has gained scientific acceptance as a means of identifying relationships in knowledge bases, widely known as semantic associations. Querying complex relationships between entities is a strong requirement for many applications in analytical domains. In bioinformatics, for example, it is critical to extract exchanges between proteins. Currently, the widely known result of such queries is a set of paths between connected entities in the data graph. However, these do not always meet the user's need for the best association, or a limited set of best associations, because they consider all existing paths but ignore path evaluation. In this paper, we present an approach for supporting association discovery queries. Our proposal includes (i) a query language, PmSPRQL, which provides multiparadigm query expressions for association extraction, and (ii) quantification measures that ease the process of association ranking. The originality of our proposal is demonstrated by a performance evaluation of our approach on real-world datasets.
Abstract: This research presents a system for post-processing of
data that takes mined flat rules as input and discovers crisp as well as
fuzzy hierarchical structures using Learning Classifier System
approach. Learning Classifier System (LCS) is basically a machine
learning technique that combines evolutionary computing,
reinforcement learning, supervised or unsupervised learning and
heuristics to produce adaptive systems. A LCS learns by interacting
with an environment from which it receives feedback in the form of
numerical reward. Learning is achieved by trying to maximize the
amount of reward received. A crisp description of a concept usually
cannot represent human knowledge completely and practically. In the
proposed Learning Classifier System, the initial population is constructed
as a random collection of HPR–trees (related production rules) and
crisp / fuzzy hierarchies are evolved. A fuzzy subsumption relation is
suggested for the proposed system and based on Subsumption Matrix
(SM), a suitable fitness function is proposed. Suitable genetic
operators are proposed for the chosen chromosome representation
method. For implementing reinforcement a suitable reward and
punishment scheme is also proposed. Experimental results are
presented to demonstrate the performance of the proposed system.
Abstract: In this paper, the authors present the architecture of a multi-agent consultation system for obesity-related problems, which hybridizes the technologies of an expert system (ES) and an intelligent agent (IA). The strength of the ES, which is capable of pulling expert knowledge, is consulted and presented to the end user via the autonomous and friendly pushing environment of the intelligent agent.
Abstract: In this paper, DJess is presented: a novel distributed production system that provides an infrastructure for factual and procedural knowledge sharing. DJess is a Java package that provides programmers with a lightweight middleware by which inference systems implemented in Jess and running on different nodes of a network can communicate. Communication and coordination among inference systems (agents) are achieved through the ability of each agent to reason, transparently and asynchronously, on inferred knowledge (facts) that may be collected and asserted by other agents, on the basis of inference code (rules) that may be either local or transmitted from any node to any other node.
Abstract: A spatial analysis of a large 20th-century urban settlement (town/city) readily reveals the celebrated Central Business District (CBD). Theories of urban land economics have justified and attempted to explain the existence of such a district activity area within the cityscape. This work examines the gradual emergence and development of the CBD in Lafia Town, Nigeria over 20 years and the attendant urban problems caused by its emergence. Personal knowledge and observation of land use change are the main sources of data for the work, supplemented by unstructured interviews with residents. The results show that the absence of a coordinated land use plan for the town, its multi-nuclei nature, and the regional location of surrounding towns have affected the growth pattern, and hence the CBD. Traffic congestion and dispersed CBD land uses are some of the resulting urban planning problems. The work concludes by advocating for integrating CBD uses.
Abstract: The main goal of data mining is to extract accurate, comprehensible and interesting knowledge from databases that may be considered large search spaces. In this paper, a new, efficient type of Genetic Algorithm (GA), called a uniform two-level GA, is proposed as a search strategy to discover truly interesting, high-level prediction rules, a difficult and relatively little-researched problem, rather than discovering classification knowledge as is usual in the literature. The proposed method takes advantage of the uniform population method and addresses the task of generalized rule induction, which can be regarded as a generalization of the task of classification. Although the task of generalized rule induction requires a great deal of computation, which normal algorithms usually cannot handle, it was demonstrated that this method increased the performance of GAs and rapidly found interesting rules.
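The abstract does not spell out the uniform two-level GA itself; as a generic, hedged illustration of the kind of GA search it relies on, the following sketch evolves bitstring rule masks with uniform crossover against a toy interestingness score (the fitness function, population sizes, and selection scheme are illustrative assumptions, not the paper's method):

```python
# Generic GA sketch with uniform crossover, illustrating GA-based rule search.
# The fitness is a toy stand-in for a rule-interestingness measure.
import random

random.seed(0)
GENES, POP, GENS = 16, 30, 40

def fitness(ind):
    # toy "interestingness": balance between generality and specificity
    return sum(ind) * (GENES - sum(ind))

def uniform_crossover(a, b):
    # each gene is taken from either parent with equal probability
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.05):
    # flip each bit independently with a small probability
    return [1 - g if random.random() < rate else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                      # truncation selection (elitist)
    children = [mutate(uniform_crossover(random.choice(parents),
                                         random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))
```

In a real rule-induction setting the bitstring would encode rule conditions and the fitness would be an interestingness measure evaluated against the database.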
Abstract: The area of Project Risk Management (PRM) has
been extensively researched, and the utilization of various tools and
techniques for managing risk in several industries has been
sufficiently reported. Formal and systematic PRM practices have
been made available for the construction industry. Based on such
body of knowledge, this paper seeks to draw a global picture of
PRM practices and approaches with the help of a survey that looks into
the usage of PRM techniques, the diffusion of software tools, their
level of maturity, and their usefulness in the construction sector.
Results show that, despite existing techniques and tools, their usage is
limited: software tools are used only by a minority of respondents
and their cost is one of the largest hurdles in adoption. Finally, the
paper provides some important guidelines for future research
regarding quantitative risk analysis techniques and suggestions for
PRM software tools development and improvement.
Abstract: Verification of real-time software systems can be
expensive in terms of time and resources. Testing is the main method
of proving correctness but has been shown to be a long, time-consuming
process. Everyday engineers are usually unwilling to
adopt formal approaches to correctness because of the overhead
associated with developing their knowledge of such techniques.
Performance modelling techniques allow systems to be evaluated
with respect to timing constraints. This paper describes PARTES, a
framework which guides the extraction of performance models from
programs written in an annotated subset of C.
Abstract: The present study focuses on methods allowing a convenient and quick calculation of the SIFs in order to predict the static adhesive strength of bonded joints. A new SIF calculation method is proposed, based on the stresses obtained from a FE model at a reference point located in the adhesive layer at equal distance from the free edge and from the two interfaces. It is shown that, even limiting ourselves to the two main modes, i.e. the opening and shearing modes, and using the stress values resulting from a coarse FE model, an efficient calculation of the peeling stress at adhesive-substrate corners can be obtained in this way. The proposed method is interesting in that it can form the basis of a prediction tool allowing the designer to quickly evaluate the SIFs characterizing a particular application without developing a detailed analysis.
Abstract: Delivering streaming video over wireless is an
important component of many interactive multimedia applications
running on personal wireless handset devices. Such personal devices
have to be inexpensive, compact, and lightweight. But wireless
channels have a high channel bit error rate and limited bandwidth.
Delay variation of packets due to network congestion and the high bit
error rate greatly degrade the quality of video at the handheld
device. Therefore, mobile access to multimedia contents requires
video transcoding functionality at the edge of the mobile network for
interworking with heterogeneous networks and services. To guarantee
the quality of service (QoS) delivered to the mobile user, a robust and
efficient transcoding scheme should be deployed in the mobile
multimedia transport network. This paper therefore examines the
challenges and limitations that video transcoding schemes face in
mobile multimedia transport networks. A mobile and wireless video
transcoding scheme based on handheld resources, network conditions
and content is then proposed to provide high-QoS applications.
Extensive experiments, designed to verify the robustness of the
proposed approach, have been conducted, and results for various
video clips with different bit rates and frame rates are provided;
exceptional performance is demonstrated in the experimental results.
Abstract: Assume that we have m identical graphs, where the
graphs consist of paths with k vertices, k a positive integer.
In this paper, we discuss certain labellings of the m graphs, called
c-Erdösian for some positive integers c. We regard labellings of the
vertices of the graphs by positive integers, which induce the edge
labels for the paths as the sum of the two incident vertex labels.
They have the property that each vertex label and edge label appears
only once in the set of positive integers {c, ..., c+6m-1}. Here,
we show how to construct certain c-Erdösian labellings of m paths
with 2 and 3 vertices by using Skolem sequences.
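The labelling property described above is straightforward to verify mechanically. The following is a minimal sketch of such a checker (the function name is an assumption, and the consecutive-range length is taken from the labelling itself rather than fixed to 6m):

```python
# Check the c-Erdösian-style property: all vertex labels together with the
# induced edge labels (sums of adjacent vertex labels) must each appear
# exactly once in a run of consecutive integers starting at c.
def is_c_erdosian(paths, c):
    labels = []
    for path in paths:                      # each path: list of vertex labels
        labels.extend(path)
        labels.extend(u + v for u, v in zip(path, path[1:]))  # edge labels
    return sorted(labels) == list(range(c, c + len(labels)))

print(is_c_erdosian([[1, 2]], 1))   # single 2-vertex path: labels {1, 2, 3} -> True
```

Constructing labellings that pass this check for m paths, e.g. via Skolem sequences, is the substance of the paper; the checker only verifies a candidate.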
Abstract: Lately there has been a significant surge of interest in
music digital libraries, which constitute an attractive area of research
and development due to their inherently interesting issues and
challenging technical problems, solutions to which will be highly
appreciated by enthusiastic end-users. We present here a DL that we
have developed to support users in their quest for classical music
pieces within a particular collection of 18,000+ audio recordings.
To cope with the limitations of the early DL model, we have used a
refined socio-semantic and contextual model that allows rich bibliographic
content description, along with semantic annotations, reviewing,
rating, knowledge sharing etc. The multi-layered service model
allows incorporation of local and distributed information,
construction of rich hypermedia documents, expressing the complex
relationships between various objects and multi-dimensional spaces,
agents, actors, services, communities, scenarios etc., and facilitates
collaborative activities to offer individual users the needed
collections and services.
Abstract: It is widely acknowledged that there is a shortage of software developers, not only in South Africa, but also worldwide. Despite reports on a gap between industry needs and software education, the gap has mostly been explored in quantitative studies. This paper reports on the qualitative data of a mixed method study of the perceptions of professional software developers regarding what topics they learned from their formal education and the importance of these topics to their actual work. The analysis suggests that there is a gap between industry’s needs and software development education and the following recommendations are made: 1) Real-life projects must be included in students’ education; 2) Soft skills and business skills must be included in curricula; 3) Universities must keep the curriculum up to date; 4) Software development education must be made accessible to a diverse range of students.
Abstract: The purpose of this research was to develop a biological
nutrient removal (BNR) system with low energy consumption, sludge production, and land usage. These features indicate that the BNR system could be an alternative for future wastewater treatment in the ubiquitous
city (U-city). Organics and nitrogen compounds can be removed by this system so that the secondary or tertiary stages of wastewater treatment satisfy their standards. The system was composed of oxic and anoxic
filters filled with PVDC and POM media. The anoxic/oxic filter system was operated at an empty bed contact time of 4 hours while the
recirculation ratio was increased from 0 to 100%. The system removal efficiencies for total nitrogen and COD were 76.3% and 93%, respectively. To observe the
internal behavior of the system, SCOD, NH3-N, and NO3-N measurements were
conducted; the removals were in the ranges of 25~100%, 59~99%, and
70~100%, respectively.
Abstract: The new programming technologies allow for the
creation of components which can be automatically or manually
assembled to provide a new experience in understanding and mastering
knowledge, or in acquiring skills in a specific knowledge area. The
project proposes an interactive framework that permits the creation,
combination and utilization of components that are specific to
mathematical training in high schools.
The main framework's objectives are:
• authoring lessons by the teacher or the students; all they need
are simple operating skills for Equation Editor (or something
similar, or LaTeX); the rest are just drag & drop operations,
inserting data into a grid, or navigating through menus
• allowing audio (spoken) presentations of mathematical texts and
solving hints (more easily understood by the students)
• offering graphical representations of a mathematical function
edited in Equation Editor
• storing of learning objects in a database
• storing of predefined lessons (efficient for expressions and
commands, the rest being calculations; allows a high
compression)
• viewing and/or modifying predefined lessons, according to the
curricula
The whole thing is focused on a mathematical expression mini-compiler,
which stores code that will later be used for different
purposes (tables, graphics, and optimisations).
Programming technologies used: a Visual C# .NET
implementation is proposed. New and innovative digital learning
objects for mathematics will be developed; they are capable of
interpreting, contextualizing and reacting depending on the
architecture where they are assembled.
Abstract: Induction machine models used for steady-state and
transient analysis require machine parameters that are usually
considered design parameters or data. The knowledge of induction
machine parameters is very important for Indirect Field Oriented
Control (IFOC). A mismatched set of parameters will degrade the
response of speed and torque control. This paper presents an
improved approach to rotor time constant adaptation in IFOC for
Induction Machines (IM). Our approach aims to improve the
estimation accuracy of the fundamental model for flux estimation.
Based on a reduced-order IM model, the rotor fluxes and the rotor
time constant are estimated using only the stator currents and
voltages. This reduced-order model offers many advantages for
real-time parameter identification of the IM.
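For context, the standard current-model relations that rotor-time-constant adaptation schemes commonly build on can be sketched as follows (in the rotor-flux reference frame, with $T_r = L_r/R_r$; these textbook relations are given only for illustration and may differ from the paper's reduced-order model):

```latex
T_r \frac{d\psi_r}{dt} + \psi_r = L_m\, i_{sd},
\qquad
\omega_{sl} = \frac{L_m\, i_{sq}}{T_r\, \psi_r}
```

Here $\psi_r$ is the rotor flux magnitude, $i_{sd}$ and $i_{sq}$ are the flux- and torque-producing stator current components, and $\omega_{sl}$ is the slip frequency. A mismatched $T_r$ propagates directly into both relations, which is why its adaptation is critical for IFOC performance.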
Abstract: Optical Burst Switching (OBS) is a relatively new
optical switching paradigm. Contention and burst loss in OBS
networks are major concerns. To resolve contentions, an interesting
alternative to discarding the entire data burst is to partially drop the
burst. Partial burst dropping is based on the burst segmentation
concept, whose implementation is constrained by some technical
challenges, besides the complexity added to the algorithms and
protocols at both edge and core nodes. In this paper, the burst segmentation concept is
investigated, and an implementation scheme is proposed and
evaluated. An appropriate dropping policy that effectively manages
the size of the segmented data bursts is presented. The dropping
policy is further supported by a new control packet format that
provides constant transmission overhead.
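The abstract does not detail its dropping policy; as a hedged sketch of tail-segmentation, a common burst-segmentation variant (not necessarily the paper's policy), a contending burst keeps only the segments that finish transmitting before the contention interval begins:

```python
# Tail-segmentation sketch: on contention, the original burst keeps the
# segments already transmitted before the contention starts and drops the
# overlapping tail. Segment boundaries here are purely illustrative.
def tail_drop(segments, contention_start):
    """segments: list of (start, end) times; returns (kept, dropped)."""
    kept = [s for s in segments if s[1] <= contention_start]
    dropped = [s for s in segments if s[1] > contention_start]
    return kept, dropped

burst = [(0, 2), (2, 4), (4, 6), (6, 8)]
kept, dropped = tail_drop(burst, contention_start=5)
print(kept)     # [(0, 2), (2, 4)]  segments fully sent before t=5
print(dropped)  # [(4, 6), (6, 8)]  tail segments lost to contention
```

A dropping policy like the one the paper proposes would additionally manage the size of the surviving segmented burst and signal the new burst length in the control packet.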
Abstract: Reverse engineering of full-genomic interaction networks based on compendia of expression data has been successfully applied for a number of model organisms. This study adapts these approaches for an important non-model organism: the major human fungal pathogen Candida albicans. During the infection process, the pathogen can adapt to a wide range of environmental niches and reversibly change its growth form. Given the importance of these processes, it is important to know how they are regulated. This study presents a reverse engineering strategy able to infer full-genomic interaction networks for C. albicans based on linear regression, utilizing the sparseness criterion (LASSO). To overcome the limited amount of expression data and the small number of known interactions, we utilize different prior-knowledge sources to guide the network inference towards a knowledge-driven solution. Since no database of known interactions for C. albicans exists, we use a text-mining system which utilizes full-text research papers to identify known regulatory interactions. By comparing with these known regulatory interactions, we find an optimal value for the global modelling parameters weighting the influence of the sparseness criterion and the prior knowledge. Furthermore, we show that soft integration of prior knowledge additionally improves the performance. Finally, we compare the performance of our approach to state-of-the-art network inference approaches.
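The core inference step, regressing each gene on all others with a LASSO penalty and reading nonzero coefficients as candidate regulatory edges, can be sketched as follows (synthetic data stands in for a real expression compendium, and the prior-knowledge weighting discussed above is omitted for brevity):

```python
# Minimal sketch of LASSO-based network inference: each gene's expression is
# regressed on all other genes; nonzero coefficients are candidate edges.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_genes = 50, 10
X = rng.normal(size=(n_samples, n_genes))  # rows: conditions, cols: genes

alpha = 0.1                         # sparseness weight (the LASSO penalty)
W = np.zeros((n_genes, n_genes))    # W[i, j]: influence of gene j on gene i
for i in range(n_genes):
    mask = np.arange(n_genes) != i  # exclude self-regulation
    model = Lasso(alpha=alpha).fit(X[:, mask], X[:, i])
    W[i, mask] = model.coef_

edges = np.argwhere(np.abs(W) > 1e-6)  # predicted regulator -> target pairs
print(W.shape)
```

In the study's setting, the penalty weight (and the influence of the prior knowledge) would be tuned against the text-mined known interactions rather than fixed as here.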
Abstract: The multi-agent system for processing bio-signals
will help medical practitioners to have a standard examination
procedure stored on a web server. Web servers supporting any standard
search engine follow all possible combinations of the search
keywords given by the user as input to the search engine; as a result,
a huge number of web pages are shown in the web browser. The system
also helps the medical practitioner to interact with an expert in the
relevant field in order to make a proper judgment in the diagnosis
phase [3]. A web server uses a web server plug-in to establish and
maintain connections, enabling the medical practitioner to make a fast
analysis. A client using the web server can get data related to the
requested search. A DB agent and EEG/ECG/EMG agents are deployed to
handle the difficult aspects of updating medical information on the
web server.