Abstract: In this paper we propose a new knowledge model using
Dempster-Shafer evidence theory for image segmentation and
fusion. The proposed method is composed essentially of two steps.
First, mass distributions in Dempster-Shafer theory are obtained from
the membership degrees of each pixel covering the three image
components (R, G, and B). Each membership degree is determined by
applying Fuzzy C-Means (FCM) clustering to the gray levels of the
three images. Second, the fusion process consists in defining three
discernment frames which are associated with the three images to be
fused, and then combining them to form a new frame of discernment.
The strategy used to define mass distributions in the combined
framework is discussed in detail. The proposed fusion method is
illustrated in the context of image segmentation. Experimental
investigations and comparative studies with previous methods
are carried out, demonstrating the robustness and superiority of the
proposed method in terms of image segmentation.
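As a concrete illustration of the combination step, the following minimal sketch applies Dempster's rule of combination to two hypothetical mass functions. The two-class frame {c1, c2} and the mass values (imagined as derived from FCM membership degrees of one pixel in two color components) are made up for illustration; the paper's own frame construction over the three RGB components is more elaborate.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.
    Focal elements are frozensets; the mass assigned to the empty
    intersection (conflict) is removed and the rest is renormalized.
    Assumes the two sources are not in total conflict (norm > 0)."""
    raw = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        raw[inter] = raw.get(inter, 0.0) + ma * mb
    conflict = raw.pop(frozenset(), 0.0)
    norm = 1.0 - conflict
    return {s: v / norm for s, v in raw.items()}

# Hypothetical masses for one pixel in two components (R and G),
# over the frame {c1, c2} plus total ignorance TH.
TH = frozenset({"c1", "c2"})
m_R = {frozenset({"c1"}): 0.6, frozenset({"c2"}): 0.2, TH: 0.2}
m_G = {frozenset({"c1"}): 0.5, frozenset({"c2"}): 0.3, TH: 0.2}
m_RG = combine(m_R, m_G)
```

Combining the two sources reinforces the class both components agree on (c1) and shrinks the mass left on ignorance.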
Abstract: With optimized bandwidth and latency discrepancy ratios, Node Gain Scores (NGSs) are determined and used as the basis for shaping the max-heap overlay. Each NGS, determined as the respective bandwidth-latency product, governs the construction of the max-heap-form overlay. An NGS combines the discrepancy ratio of the bandwidth requested with respect to the estimated available bandwidth and the latency discrepancy ratio between the node and the source node. The resulting tree leads to enhanced-delivery overlay multicasting, increasing packet delivery that could otherwise be hindered by the packet loss induced in schemes that do not consider the synergy of these parameters when placing nodes on the overlay. The NGS is a function of four main parameters: the estimated available bandwidth, Ba; the individual node's requested bandwidth, Br; the proposed node latency to its prospective parent, Lp; and the suggested best latency as advised by the source node, Lb. The bandwidth discrepancy ratio (BDR) and latency discrepancy ratio (LDR) carry weights of α and (1,000 − α), respectively, with α chosen arbitrarily between 0 and 1,000 to ensure that the NGS values, used as node IDs, maintain a good likelihood of uniqueness and a balance between the BDR and the LDR as the more critical factor. A max-heap-form tree is constructed under the assumption that all nodes possess an NGS less than that of the source node. To maintain load balance, children of each level's siblings are evenly distributed: a node cannot accept a second child until all of its siblings able to do so have acquired the same number of children, proceeding logically from left to right in the conceptual overlay tree. Records of the pair-wise approximate available bandwidths, as measured by a pathChirp scheme at individual nodes, are maintained.
Evaluations comparing the scheme with other schemes - Bandwidth Aware multicaSt architecturE (BASE), Tree Building Control Protocol (TBCP), and Host Multicast Tree Protocol (HMTP) - have been conducted. The new scheme generally performs better in terms of the trade-off among packet delivery ratio, link stress, control overhead, and end-to-end delay.
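One possible reading of the NGS computation described above can be sketched as follows. The abstract does not spell out the exact ratio definitions, so the BDR and LDR formulas below (and the default α) are assumptions for illustration only.

```python
def node_gain_score(Ba, Br, Lp, Lb, alpha=500):
    """Hypothetical NGS: a weighted sum of the bandwidth discrepancy
    ratio (BDR) and latency discrepancy ratio (LDR), with weights
    alpha and (1000 - alpha), alpha in [0, 1000].

    Assumed ratio definitions (not given in the abstract):
      BDR = Ba / Br  -- more available bandwidth than requested is better
      LDR = Lb / Lp  -- lower proposed latency vs. the best latency is better
    """
    bdr = Ba / Br
    ldr = Lb / Lp
    return alpha * bdr + (1000 - alpha) * ldr

# A node with Ba=10, Br=5, Lp=20, Lb=10 under equal weighting:
score = node_gain_score(10.0, 5.0, 20.0, 10.0)
```

Under this reading, a node offering more spare bandwidth or lower latency earns a higher score and therefore sits higher in the max-heap overlay.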
Abstract: This work presents an approach for the construction of a hybrid color-texture space by using mutual information. Feature extraction is performed with Laws filters, with a Support Vector Machine (SVM) as the classifier. The classification is applied to the VisTex database and to a SPOT HRV (XS) image representing two forest areas in the region of Rabat, Morocco. The classification result obtained in the hybrid space is compared with the one obtained in the RGB color space.
Abstract: In this paper, a new system for recognizing Persian
printed numeral characters, with emphasis on the
representation and recognition stages, is introduced. For the first time
in Persian optical character recognition, geometric central moments
as the character image descriptor and a fuzzy min-max neural network
as the recognizer have been used. A set of
experiments on binary images of regular, translated, rotated, and
scaled Persian numeral characters was carried out, and a variety of
results is presented. The best result was 99.16% correct
recognition, demonstrating that geometric central moments and the fuzzy
min-max neural network are adequate for Persian printed numeral
character recognition.
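The descriptor named above has a standard definition: the geometric central moment mu_pq is the (p, q) moment taken about the image centroid, which makes it invariant to translation. A minimal sketch in plain Python on a binary image (a real implementation would use NumPy):

```python
def central_moments(img, p, q):
    """Geometric central moment mu_pq of a binary image given as a
    2-D list of 0/1 values (rows indexed by y, columns by x)."""
    # Raw moments m00, m10, m01 give the centroid.
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    xc, yc = m10 / m00, m01 / m00  # centroid
    # Central moment: moments taken about the centroid.
    mu = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            mu += ((x - xc) ** p) * ((y - yc) ** q) * v
    return mu

# Translation invariance: a shifted copy of the same shape yields
# the same central moments.
block = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
shifted = [[0, 0, 0], [0, 1, 1], [0, 1, 1]]
```

Translation invariance is exactly what makes these moments suitable for recognizing translated numerals, as in the experiments above.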
Abstract: In queueing theory, it is assumed that customer
arrivals correspond to a Poisson process and service time has the
exponential distribution. Using these assumptions, the behaviour of
the queueing system can be described by means of Markov chains
and it is possible to derive the characteristics of the system. In the
paper, these theoretical approaches are presented on several types of
systems and it is also shown how to compute the characteristics in a
situation when these assumptions are not satisfied.
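For the simplest such system, the M/M/1 queue (Poisson arrivals at rate λ, exponential service at rate μ), the characteristics follow from well-known closed-form formulas; a minimal sketch:

```python
def mm1_characteristics(lam, mu):
    """Standard M/M/1 characteristics: utilization rho, mean number
    of customers in the system L, and mean time in the system W.
    Little's law ties them together: L = lam * W."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("queue is unstable unless lam < mu")
    L = rho / (1 - rho)   # mean number in system
    W = 1 / (mu - lam)    # mean time in system
    return rho, L, W

# Example: arrivals at rate 2 per unit time, service at rate 4.
rho, L, W = mm1_characteristics(2.0, 4.0)
```

When the Poisson/exponential assumptions fail, these formulas no longer apply directly, which is the situation the paper addresses.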
Abstract: This paper proposes a new form of cloud computing for individual computer users to share applications in distributed communities, called community-based personal cloud computing (CPCC). The paper also presents a prototype design and implementation of CPCC. Users of CPCC are able to share their computing applications with other users of the community, and any member of the community can execute remote applications shared by other members. The remote applications behave in the same way as their local counterparts, allowing the user to enter input and receive output, as well as providing access to the user's local data. CPCC provides a peer-to-peer (P2P) environment in which each peer provides applications that can be used by the other peers connected to CPCC.
Abstract: Network warfare is an emerging concept that focuses on the network and computer based forms through which information is attacked and defended. Various computer and network security concepts thus play a role in network warfare. Due to the intricacy of the various interacting components, a model to better understand the complexity in a network warfare environment would be beneficial. Non-quantitative modeling is a useful method to better characterize the field due to the rich ideas that can be generated based on the use of secular associations, chronological origins, linked concepts, categorizations and context specifications. This paper proposes the use of non-quantitative methods through a morphological analysis to better explore and define the influential conditions in a network warfare environment.
Abstract: Risk management is an essential part of project management and plays a significant role in project success. Many failures associated with Web projects are the consequence of poor awareness of the risks involved and of the lack of process models that can serve as guidelines for the development of Web-based applications; existing process models have been devised mainly for the development of conventional software. This paper introduces WPRiMA (Web Project Risk Management Assessment), a tool used to implement RIAP, the risk identification architecture pattern model, which focuses on data from the proprietor's and vendor's perspectives. The paper also illustrates how the WPRiMA tool works and how it can be used to calculate the risk level for a given Web project, to generate recommendations that facilitate risk avoidance in a project, and to improve the prospects of early risk management.
Abstract: This paper proposes a method that discovers time series event patterns from textual data with time information. The patterns are composed of sequences of events and each event is extracted from the textual data, where an event is characteristic content included in the textual data such as a company name, an action, and an impression of a customer. The method introduces 7 types of time constraints based on the analysis of the textual data. The method also evaluates these constraints when the frequency of a time series event pattern is calculated. We can flexibly define the time constraints for interesting combinations of events and can discover valid time series event patterns which satisfy these conditions. The paper applies the method to daily business reports collected by a sales force automation system and verifies its effectiveness through numerical experiments.
Abstract: In this paper, an improvement of PDLZW implementation
with a new dictionary updating technique is proposed. A
unique dictionary is partitioned into hierarchical variable word-width
dictionaries. This allows us to search through dictionaries in parallel.
Moreover, a barrel shifter is adopted for loading a new input string
into the shift register in order to achieve higher speed. However,
the original PDLZW uses a simple FIFO update strategy, which is
not efficient. Therefore, a new window-based updating technique
is implemented to better distinguish how often each
particular address in the window is referenced. A freezing policy
is applied to the most often referenced address, which is not
updated until all the other addresses in the window have the same
priority. This guarantees that the more often referenced addresses are
not updated until their time comes. This updating policy improves
the compression efficiency of the proposed
algorithm while keeping the architecture low in complexity and easy
to implement.
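The abstract does not give the exact replacement rule, but the contrast with FIFO can be illustrated. The sketch below is one hypothetical reading of the freezing policy: the most-referenced address in the window is frozen, and the least-referenced of the remaining addresses is chosen as the update victim. The reference-count bookkeeping and tie-breaking are assumptions for illustration, not the paper's hardware design.

```python
def choose_victim(ref_counts):
    """Pick the dictionary address to overwrite in the current window.

    Hypothetical reading of the window-based policy: the address with
    the highest reference count is frozen (never replaced); among the
    rest, the least-referenced address becomes the victim. Ties are
    broken by insertion order of the dict."""
    frozen = max(ref_counts, key=ref_counts.get)
    candidates = {a: c for a, c in ref_counts.items() if a != frozen}
    return min(candidates, key=candidates.get)
```

Under FIFO the victim would simply be the oldest address, regardless of how often it is referenced; the window-based policy instead protects frequently used entries.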
Abstract: In this paper, we present a novel objective non-reference performance assessment algorithm for image fusion. It takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is based on the Universal Image Quality Index and uses the similarity between blocks of pixels in the input images and the fused image as weighting factors. Experimental results confirm that the values of the proposed metric correlate well with the subjective quality of the fused images, giving a significant improvement over standard measures based on mean squared error and mutual information.
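The block-level similarity at the heart of the metric is the Universal Image Quality Index of Wang and Bovik, which combines correlation, luminance, and contrast distortion into a single value in [-1, 1]. A minimal per-block sketch on flattened pixel lists (assumes non-constant blocks so the denominator is nonzero):

```python
def uiqi(x, y):
    """Universal Image Quality Index between two equal-length,
    non-constant pixel blocks given as flat lists of numbers."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)   # variance of x
    vy = sum((b - my) ** 2 for b in y) / (n - 1)   # variance of y
    cxy = sum((a - mx) * (b - my)
              for a, b in zip(x, y)) / (n - 1)     # covariance
    return 4 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

ref = [1.0, 2.0, 3.0, 4.0]
q_identical = uiqi(ref, ref)              # identical blocks score 1
q_scaled = uiqi(ref, [2.0, 4.0, 6.0, 8.0])  # distortion lowers the score
```

The fusion metric aggregates such block scores over the image, weighting each block by its similarity to the corresponding source-image block.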
Abstract: We study how the outcome of evolutionary dynamics on
graphs depends on randomness in the graph structure. We gradually
change the underlying graph from completely regular (e.g. a square lattice) to completely random. We find that the fixation probability increases as the randomness increases; nevertheless, the increase is
not significant, and thus the fixation probability can be estimated by the known formulas for the underlying regular graphs.
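For reference, the known formula for regular graphs follows from the isothermal theorem: the fixation probability of a single mutant with relative fitness r on any regular graph equals the well-mixed Moran-process value. A minimal sketch:

```python
def moran_fixation_probability(r, N):
    """Fixation probability of a single mutant with relative fitness r
    in a population of size N under the Moran process. By the
    isothermal theorem, this also holds for any regular graph."""
    if r == 1:
        return 1.0 / N  # neutral mutant: limit of the formula below
    return (1 - 1 / r) / (1 - 1 / r ** N)
```

The finding above means this closed form remains a good estimate even as randomness is introduced into the graph structure.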
Abstract: This paper presents the result of three senior capstone
projects at the Department of Computer Engineering, Prince of
Songkla University, Thailand. These projects focus on developing an
examination management system for the Faculty of Engineering in
order to manage both the examination room
assignments and the examination proctor assignments in each room.
The current version of the software is a web-based application. The
developed software allows the examination proctors to select their
scheduled time online while each subject is assigned to each available
examination room according to its type and the room capacity. The
developed system is evaluated using real data by prospective users of
the system. Several suggestions for further improvements are given
by the testers. Even though the features of the developed software are
not superior, the development process can serve as a case study for a
project-based teaching style. Furthermore, the process of developing this
software can show several issues in developing an educational
support application.
Abstract: In this paper we introduce a novel kernel classifier
based on an iterative shrinkage algorithm developed for compressive
sensing. We have adopted Bregman iteration with soft and hard
shrinkage functions and a generalized hinge loss for solving the l1-norm
minimization problem for classification. Our experimental results
on face recognition and digit classification, with SVM as the
benchmark, show that our method achieves an error rate close to
that of SVM but does not outperform it. We have
found that the soft shrinkage method gives higher accuracy, and in some
situations more sparseness, than the hard shrinkage method.
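The two shrinkage operators compared above have standard scalar forms, sketched here: soft thresholding both zeroes small values and shrinks large ones toward zero (which is what drives l1-style sparsity), while hard thresholding only zeroes small values.

```python
def soft_shrink(x, t):
    """Soft-thresholding operator with threshold t >= 0:
    zeroes values with |x| <= t and shrinks the rest by t."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def hard_shrink(x, t):
    """Hard-thresholding operator: keeps x unchanged when |x| > t,
    otherwise returns zero."""
    return x if abs(x) > t else 0.0
```

Applied elementwise inside a Bregman iteration, soft shrinkage biases all surviving coefficients toward zero, whereas hard shrinkage leaves them untouched, which is consistent with the accuracy/sparseness trade-off reported above.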
Abstract: The policies governing the business of any
organization are well reflected in its business rules. The business
rules are implemented by data validation techniques coded during
the software development process. Any change in business
policies results in a change in the code written for the data validation
used to enforce them. Implementing a change in
business rules without changing the code is the objective of this
paper. The proposed approach enables users to create rule sets at
run time, once the software has been developed. Rule sets newly
defined by end users are associated with the data variables for
which validation is required. The proposed approach allows
users to define business rules using all the comparison
operators and Boolean operators. Multithreading is used to
validate the data entered by the end user against the business rules
applied. The evaluation of the data is performed by a newly
created thread using an enhanced form of the RPN (Reverse Polish
Notation) algorithm.
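The evaluation step can be sketched with a plain stack-based RPN evaluator. The token set, the variable-lookup environment, and the example rule below are illustrative assumptions; they are not the paper's enhanced algorithm.

```python
def eval_rpn(tokens, env):
    """Evaluate a rule given in Reverse Polish Notation.
    Tokens are variable names (looked up in env), numeric literals,
    comparison operators, or the Boolean operators 'and'/'or'."""
    ops = {
        ">":  lambda a, b: a > b,
        "<":  lambda a, b: a < b,
        ">=": lambda a, b: a >= b,
        "<=": lambda a, b: a <= b,
        "==": lambda a, b: a == b,
        "and": lambda a, b: a and b,
        "or":  lambda a, b: a or b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # operands come off in reverse order
            a = stack.pop()
            stack.append(ops[tok](a, b))
        elif tok in env:
            stack.append(env[tok])
        else:
            stack.append(float(tok))
    return stack.pop()

# Hypothetical rule: age >= 18 and salary > 1000, in RPN.
rule = ["age", "18", ">=", "salary", "1000", ">", "and"]
```

Because the rule is just a token list bound to named data variables, end users can redefine it at run time without touching the validation code, which is the point of the approach above.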
Abstract: This paper presents a watermarking protocol able to
solve the well-known “customer's right problem” and “unbinding
problem”. In particular, the protocol has been purposely designed
to be adopted in a web context, where users wanting to buy digital
contents are usually neither provided with digital certificates issued
by certification authorities (CAs) nor able to autonomously perform
specific security actions. Furthermore, the protocol enables users to
keep their identities unexposed during web transactions as well as
allows guilty buyers, i.e. those responsible for distributing illegal
replicas, to be unambiguously identified. Finally, the protocol has
been designed so that web content providers (CPs) can exploit
copyright protection services supplied by web service providers (SPs)
in a security context. Thus, CPs can take advantage of complex
services without having to directly implement them.
Abstract: Online news websites are one of the main and widest-reaching areas of mass media. Since the nineties, several Jordanian newspapers have been introduced to the World Wide Web to reach large and varied audiences. Examples of these newspapers that have an online version are Al-Rai, Ad-Dustor, and AlGhad; other purely online news websites include Ammon and Rum. The main aim of this study is to evaluate online newspaper websites using two assessment measures: usability and web content. This aim is achieved through a questionnaire-based evaluation grounded in the definitions of usability and web content in the ISO 9241-11 standard. The results are based on 204 audience responses. They show that the usability factor is relatively good for all Jordanian online newspapers, whereas the web content factor is moderate.
Abstract: Within the collaborative research center 666 a new
product development approach and the innovative manufacturing
method of linear flow splitting are being developed. So far the design process is supported by 3D-CAD models utilizing User Defined
Features in standard CAD-Systems. This paper now presents new
functions for generating 3D-models of integral sheet metal products with bifurcations using Siemens PLM NX 6. The emphasis is placed
on design and semi-automated insertion of User Defined Features.
Therefore, User Defined Features for both linear flow splitting
and its derivative, linear bend splitting, were developed. In order to facilitate the modeling process, an application was developed
that guides the user through the insertion process. Its usability and dialog layout follow known standard features. The work presented here has
significant implications for the quality, accuracy, and efficiency of the product generation process for sheet metal products with higher-order
bifurcations.
Abstract: Although the World Wide Web is considered the
largest source of information in existence today, due to its
inherently dynamic characteristics the task of finding useful and
qualified information can become a very frustrating experience. This
study presents a research on the information mining systems in the
Web; and proposes an implementation of these systems by means of
components that can be built using the technology of Web services.
This implies that they can encompass features offered by a
service-oriented architecture (SOA), and specific components may be used by
other tools, independent of platform or programming language.
Hence, the main objective of this work is to provide an architecture
for Web mining systems, divided into stages, where each step is a
component that will incorporate the characteristics of SOA. The
separation of these steps was designed based upon the existing
literature. Interesting results were obtained and are shown here.
Abstract: As reported in the literature, about 70% of improvement initiatives fail, and a significant number do not even get started. This paper analyses the problem of failing Software Process Improvement (SPI) initiatives and proposes good practices, supported by motivational tools, that can help minimize failures. It elaborates on the hypothesis that human factors are poorly addressed by deployers, especially because implementation guides usually emphasize only technical factors. This research was conducted with SPI deployers and analyses 32 SPI initiatives. The results indicate that although human factors are not commonly highlighted in guidelines, successful initiatives usually address them implicitly. This research shows that practices based on human factors indeed play a crucial role in successful implementations of SPI, proposes change management as a theoretical framework for introducing those practices in the SPI context, and suggests some motivational tools, based on SPI deployers' experience, to support it.