Abstract: Hand vein recognition has recently attracted increasing attention in biometric identification systems. Hand vein images are generally acquired with low contrast and irregular illumination. Accordingly, with good preprocessing of the hand vein image, features can be easily extracted even with simple binarization. In this paper, an approach is proposed to improve the quality of hand vein images. First, a brief survey of existing enhancement methods is presented. Then, a Radon-Like Features method is applied to preprocess the hand vein image. Finally, experimental results show that the proposed method is more effective and reliable in improving hand vein images.
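For orientation, Radon-Like Features build on the classical Radon transform, which integrates an image f(x, y) along straight lines parameterized by offset ρ and angle θ (the feature extraction itself augments this with image-dependent segments; the formula below is only the underlying transform):

\[
R(\rho, \theta) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\, \delta(\rho - x\cos\theta - y\sin\theta)\, dx\, dy.
\]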
Abstract: The policies governing the business of any organization are reflected in its business rules. These business rules are implemented by data validation techniques coded during the software development process. Any change in business policies results in a change to the code written for the data validation used to enforce them. The objective of this paper is to implement changes in business rules without changing the code. The proposed approach enables users to create rule sets at run time, once the software has been developed. Rule sets newly defined by end users are associated with the data variables that require validation. The approach allows users to define business rules using all the comparison and Boolean operators. Multithreading is used to validate the data entered by the end user against the applied business rules. The evaluation of the data is performed by a newly created thread using an enhanced form of the RPN (Reverse Polish Notation) algorithm.
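As a minimal sketch of the underlying idea (not the paper's enhanced algorithm, whose details are not given in the abstract), a rule held in postfix order can be evaluated against current variable bindings on a separate thread; all names below are illustrative:

import operator
import threading

# Comparison and Boolean operators available to rule authors.
OPS = {
    "<": operator.lt, "<=": operator.le, ">": operator.gt,
    ">=": operator.ge, "==": operator.eq, "!=": operator.ne,
    "AND": lambda a, b: bool(a) and bool(b),
    "OR": lambda a, b: bool(a) or bool(b),
}

def evaluate_rule(postfix_tokens, bindings):
    # Classic stack-based RPN evaluation of a user-defined rule.
    stack = []
    for tok in postfix_tokens:
        if tok in OPS:
            right, left = stack.pop(), stack.pop()
            stack.append(OPS[tok](left, right))
        elif tok in bindings:
            stack.append(bindings[tok])   # a data variable
        else:
            stack.append(float(tok))      # a numeric literal
    return stack.pop()

# Rule "age >= 18 AND salary > 1000" in postfix form, checked on its own thread.
rule = ["age", "18", ">=", "salary", "1000", ">", "AND"]
t = threading.Thread(
    target=lambda: print(evaluate_rule(rule, {"age": 25, "salary": 1500})))
t.start(); t.join()   # prints: True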
Abstract: In this paper we introduce a novel kernel classifier based on an iterative shrinkage algorithm developed for compressive sensing. We adopt Bregman iteration with soft and hard shrinkage functions and a generalized hinge loss for solving the l1-norm minimization problem for classification. Our experimental results on face recognition and digit classification, using SVM as the benchmark, show that our method achieves an error rate close to that of SVM but does not outperform it. We find that the soft shrinkage method gives higher accuracy and, in some situations, more sparseness than the hard shrinkage method.
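For reference, the standard soft and hard shrinkage (thresholding) operators with threshold λ, as used in such iterative l1 schemes, are:

\[
S_{\lambda}^{\mathrm{soft}}(x) = \operatorname{sign}(x)\max(|x|-\lambda,\,0),
\qquad
S_{\lambda}^{\mathrm{hard}}(x) = \begin{cases} x, & |x| > \lambda,\\ 0, & \text{otherwise.} \end{cases}
\]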
Abstract: This paper presents the results of three senior capstone projects at the Department of Computer Engineering, Prince of Songkla University, Thailand. These projects focus on developing an examination management system for the Faculty of Engineering in order to manage both the examination room assignments and the examination proctor assignments for each room. The current version of the software is a web-based application. The developed software allows the examination proctors to select their scheduled time online, while each subject is assigned to an available examination room according to its type and the room capacity. The developed system is evaluated using real data by prospective users of the system, who give several suggestions for further improvement. Even though the features of the developed software are not superior, the development process can serve as a case study for a project-based teaching style. Furthermore, the process of developing this software reveals several issues in building an educational support application.
Abstract: We study how the outcome of evolutionary dynamics on
graphs depends on the randomness of the graph structure. We gradually
change the underlying graph from completely regular (e.g. a square lattice) to completely random. We find that the fixation probability increases as the randomness increases; nevertheless, the increase is
not significant and thus the fixation probability could be estimated by the known formulas for underlying regular graphs.
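The known formula alluded to is presumably the Moran-process fixation probability of a single mutant of relative fitness r in a population of size N, which by the isothermal theorem also holds on regular graphs:

\[
\rho = \frac{1 - 1/r}{1 - 1/r^{N}}.
\]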
Abstract: In this paper, we present a novel objective non-reference performance assessment algorithm for image fusion. It takes into account local measurements to estimate how well the important information in the source images is represented by the fused image. The metric is based on the Universal Image Quality Index and uses the similarity between blocks of pixels in the input images and the fused image as weighting factors for the metric. Experimental results confirm that the values of the proposed metric correlate well with the subjective quality of the fused images, giving a significant improvement over standard measures based on mean squared error and mutual information.
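For reference, the Universal Image Quality Index of Wang and Bovik, computed between an image block x and a block y with means x̄, ȳ, variances σx², σy², and covariance σxy, is:

\[
Q = \frac{4\,\sigma_{xy}\,\bar{x}\,\bar{y}}{\left(\sigma_x^2 + \sigma_y^2\right)\left(\bar{x}^2 + \bar{y}^2\right)}.
\]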
Abstract: In this paper, an improvement of the PDLZW implementation with a new dictionary updating technique is proposed. A unique dictionary is partitioned into hierarchical variable word-width dictionaries, which allows us to search the dictionaries in parallel. Moreover, a barrel shifter is adopted for loading a new input string into the shift register in order to achieve a faster speed. However, the original PDLZW uses a simple FIFO update strategy, which is not efficient. Therefore, a new window-based updating technique is implemented to better distinguish how often each particular address in the window is referenced. A freezing policy is applied to the most often referenced address, which is not updated until all the other addresses in the window have reached the same priority. This guarantees that more frequently referenced addresses are not updated before their time comes. This updating policy improves the compression efficiency of the proposed algorithm while keeping the architecture low in complexity and easy to implement.
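As one plausible reading of this policy (the abstract does not fix the details, so the structure below is an assumption), victim selection within a replacement window might look as follows:

def pick_victim(window_refs):
    # window_refs: dictionary address -> reference count within the window.
    most = max(window_refs, key=window_refs.get)
    others = {a: c for a, c in window_refs.items() if a != most}
    # Freezing policy: the most-referenced address is exempt from replacement
    # until every other address in the window has reached the same priority.
    if others and any(c != window_refs[most] for c in others.values()):
        return min(others, key=others.get)   # evict the least-referenced
    return min(window_refs, key=window_refs.get)  # all equal: plain choice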
Abstract: This paper proposes a method that discovers time series event patterns from textual data with time information. The patterns are composed of sequences of events, where each event is extracted from the textual data; an event is characteristic content included in the textual data, such as a company name, an action, or an impression of a customer. The method introduces seven types of time constraints based on an analysis of the textual data, and evaluates these constraints when the frequency of a time series event pattern is calculated. We can flexibly define the time constraints for interesting combinations of events and can discover valid time series event patterns which satisfy these conditions. The paper applies the method to daily business reports collected by a sales force automation system and verifies its effectiveness through numerical experiments.
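The seven constraint types are not enumerated in the abstract; as an illustration, one common form of time constraint (event B must follow event A within a maximum gap) could be checked as follows when counting pattern occurrences (names are illustrative):

from datetime import timedelta

def within_gap(t_a, t_b, max_gap=timedelta(days=7)):
    # One illustrative time constraint: B occurs strictly after A,
    # and no later than max_gap after it.
    return timedelta(0) < (t_b - t_a) <= max_gap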
Abstract: Risk management is an essential part of project management and plays a significant role in project success. Many failures associated with Web projects are the consequences of poor awareness of the risks involved and the lack of process models that can serve as a guideline for the development of Web-based applications, since contemporary process models have been devised for the development of conventional software. This paper introduces WPRiMA (Web Project Risk Management Assessment), a tool used to implement RIAP, the risk identification architecture pattern model, which focuses upon data from the proprietor's and vendor's perspectives. The paper also illustrates how the WPRiMA tool works and how it can be used to calculate the risk level for a given Web project, to generate recommendations that facilitate risk avoidance in a project, and to improve the prospects of early risk management.
Abstract: Network warfare is an emerging concept that focuses on the network- and computer-based forms through which information is attacked and defended. Various computer and network security concepts thus play a role in network warfare. Due to the intricacy of the various interacting components, a model to better understand the complexity of a network warfare environment would be beneficial. Non-quantitative modeling is a useful method for characterizing the field, owing to the rich ideas that can be generated from secular associations, chronological origins, linked concepts, categorizations, and context specifications. This paper proposes the use of non-quantitative methods, through a morphological analysis, to better explore and define the influential conditions in a network warfare environment.
Abstract: This paper proposes a new form of cloud computing for individual computer users to share applications in distributed communities, called community-based personal cloud computing (CPCC). The paper also presents a prototype design and implementation of CPCC. Users of CPCC are able to share their computing applications with other users of the community, and any member of the community can execute remote applications shared by other members. The remote applications behave in the same way as their local counterparts, allowing the user to enter input and receive output as well as providing access to the user's local data. CPCC provides a peer-to-peer (P2P) environment in which each peer provides applications that can be used by the other peers connected to CPCC.
Abstract: In queueing theory, it is assumed that customer arrivals correspond to a Poisson process and that service times follow an exponential distribution. Under these assumptions, the behaviour of the queueing system can be described by means of Markov chains, and it is possible to derive the characteristics of the system. In this paper, these theoretical approaches are presented for several types of systems, and it is also shown how to compute the characteristics in situations where these assumptions are not satisfied.
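For example, for the basic M/M/1 system with arrival rate λ and service rate μ, the standard characteristics are:

\[
\rho = \frac{\lambda}{\mu} < 1, \qquad
L = \frac{\rho}{1-\rho}, \qquad
W = \frac{1}{\mu - \lambda}, \qquad
L = \lambda W \ \text{(Little's law)},
\]

where ρ is the utilization, L is the mean number of customers in the system, and W is the mean time a customer spends in the system.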
Abstract: A sequential decision problem, based on the task of identifying the species of trees given acoustic echo data collected from them, is considered with well-known stochastic classifiers, including single and mixture Gaussian models. Echoes are processed with a preprocessing stage based on a model of mammalian cochlear filtering, using a new discrete low-pass filter characteristic. Stopping time performance of the sequential decision process is evaluated and compared. It is observed that the new low-pass filter processing results in faster sequential decisions.
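The abstract does not specify the new filter characteristic; for orientation only, a generic first-order discrete low-pass filter with smoothing coefficient 0 < β ≤ 1 takes the form:

\[
y[n] = (1-\beta)\,y[n-1] + \beta\, x[n].
\]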
Abstract: In this paper, a new system for the recognition of Persian printed numeral characters, with emphasis on the representation and recognition stages, is introduced. For the first time in Persian optical character recognition, geometrical central moments are used as the character image descriptor and a fuzzy min-max neural network is used for Persian numeral character recognition. A set of experiments on binary images of regular, translated, rotated, and scaled Persian numeral characters has been conducted and a variety of results is presented. The best result was 99.16% correct recognition, demonstrating that geometrical central moments and the fuzzy min-max neural network are adequate for Persian printed numeral character recognition.
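For reference, the geometrical central moments of order (p + q) of a binary character image f(x, y) are:

\[
\mu_{pq} = \sum_{x}\sum_{y} (x-\bar{x})^{p}\,(y-\bar{y})^{q}\, f(x,y),
\qquad
\bar{x} = \frac{m_{10}}{m_{00}}, \quad \bar{y} = \frac{m_{01}}{m_{00}},
\]

where \(m_{pq} = \sum_{x}\sum_{y} x^{p} y^{q} f(x,y)\) are the raw moments.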
Abstract: This work presents an approach for the construction of a hybrid color-texture space using mutual information. Feature extraction is done with Laws filters, with an SVM (Support Vector Machine) as the classifier. The classification is applied to the VisTex database and to a SPOT HRV (XS) image representing two forest areas in the region of Rabat, Morocco. The classification result obtained in the hybrid space is compared with the one obtained in the RGB color space.
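For reference, the mutual information used to rank candidate color-texture features X against a variable Y is:

\[
I(X;Y) = \sum_{x}\sum_{y} p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)}.
\]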
Abstract: With optimized bandwidth and latency discrepancy ratios, Node Gain Scores (NGSs) are determined and used as a basis for shaping a max-heap overlay. The NGSs, determined as the respective bandwidth-latency products, govern the construction of max-heap-form overlays. Each NGS is earned as a synergy of the discrepancy ratio of the bandwidth requested with respect to the estimated available bandwidth, and the latency discrepancy ratio between the node and the source node. The tree leads to enhanced-delivery overlay multicasting, increasing packet delivery which could otherwise be hindered by the induced packet loss occurring in schemes that do not consider the synergy of these parameters when placing nodes on the overlays. The NGS is a function of four main parameters: the estimated available bandwidth, Ba; the individual node's requested bandwidth, Br; the proposed node latency to its prospective parent, Lp; and the suggested best latency as advised by the source node, Lb. The bandwidth discrepancy ratio (BDR) and latency discrepancy ratio (LDR) carry weights of α and (1,000 - α), respectively, with α arbitrarily chosen between 0 and 1,000 to ensure that the NGS values, used as node IDs, maintain a good likelihood of uniqueness and a balance between the BDR and the LDR. A max-heap-form tree is constructed under the assumption that all nodes possess NGSs less than that of the source node. To maintain load balance, the children of each level's siblings are evenly distributed, such that a node cannot accept a second child until all of its siblings able to do so have acquired the same number of children; this is done logically from left to right in a conceptual overlay tree. Records of the pairwise approximate available bandwidths, as measured by a pathChirp scheme at individual nodes, are maintained. Evaluations comparing the scheme to other schemes, namely the Bandwidth Aware multicaSt architecturE (BASE), the Tree Building Control Protocol (TBCP), and the Host Multicast Tree Protocol (HMTP), have been conducted. The new scheme generally performs better in terms of the trade-off between packet delivery ratio, link stress, control overhead, and end-to-end delays.
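One plausible form of the score consistent with this description (the exact combination is not given in the abstract, so this is an assumption) is a product of the two weighted discrepancy ratios:

\[
\mathrm{BDR} = \frac{B_r}{B_a}, \qquad
\mathrm{LDR} = \frac{L_p}{L_b}, \qquad
\mathrm{NGS} = \bigl(\alpha \cdot \mathrm{BDR}\bigr)\,\bigl((1000-\alpha)\cdot \mathrm{LDR}\bigr).
\]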
Abstract: The unsatisfactory effectiveness of software systems development and enhancement projects is one of the main reasons why software engineering attempts to draw on experiences from other engineering disciplines. In spite of the specificity of software products and processes, a belief has emerged that software development could be more effective if these objects were subject to measurement, as is true in other engineering disciplines for which measurement is an inherent feature. Thus, objective and reliable approaches to the measurement of software processes and products have been sought in software engineering for several decades. This is evidenced, among other things, by the current version of the CMMI for Development model. This paper analyzes the approach to software process and product measurement proposed in the latest version of this model, indicating the growing acceptance of this issue in software engineering.
Abstract: In this paper we propose a new knowledge model using Dempster-Shafer evidence theory for image segmentation and fusion. The proposed method is composed essentially of two steps. First, mass distributions in Dempster-Shafer theory are obtained from the membership degrees of each pixel over the three image components (R, G and B). Each membership degree is determined by applying Fuzzy C-Means (FCM) clustering to the gray levels of the three images. Second, the fusion process consists of defining three frames of discernment, associated with the three images to be fused, and then combining them to form a new frame of discernment. The strategy used to define mass distributions in the combined framework is discussed in detail. The proposed fusion method is illustrated in the context of image segmentation. Experimental investigations and comparative studies with previous methods are carried out, showing the robustness and superiority of the proposed method in terms of image segmentation.
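For reference, two mass functions m1 and m2 defined over a common frame of discernment are combined with Dempster's rule:

\[
(m_1 \oplus m_2)(A) = \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C),
\qquad
K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C),
\]

where K measures the conflict between the two sources.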
Abstract: In this work, I present a review of Sparse Distributed Memory for Small Cues (SDMSCue), a variant of Sparse Distributed Memory (SDM) that is capable of handling small cues. I then conduct and report cognitive experiments on SDMSCue to test its cognitive soundness compared to SDM. Small cues refer to input cues that are presented to memory for reading associations but have many missing parts or fields. The original SDM fails to handle such cues; SDMSCue overcomes this pitfall. The main idea in SDMSCue is the repeated projection of the semantic space onto smaller subspaces that are selected based on the input cue length and pattern. This process allows Read/Write operations using an input cue that is missing a large portion. SDMSCue is augmented with the use of genetic algorithms for memory allocation and initialization. I claim that SDM functionality is a subset of SDMSCue functionality.
Abstract: In today's fast-paced world, where everyone is short of time and works haphazardly, a similar scene is common on the roads. To mitigate the fatal consequences of such high-speed traffic on busy lanes, software to analyse and keep account of traffic and the resulting congestion is used in developed countries. This software has been implemented and used with the help of a support tool called the Critical Analysis Reporting Environment (CARE), of which two versions exist. This paper examines the issues and problems encountered while using these two versions in practice. Furthermore, a hybrid architecture is proposed that retains the quality and performance of both and is better in terms of component coupling, maintainability, and many other features.