Abstract: This paper shows the possibility of extracting Social,
Group and Individual Mind from the rule bases of multiple agents.
Two types of rule base are considered, namely the Mamdani and
Takagi-Sugeno fuzzy systems, whose rule bases describe (model)
agent behavior. Adaptation of agent behavior in a time-varying
environment is provided by learning fuzzy-neural networks and by
optimization of their parameters with genetic algorithms in the
FUZNET development system. Finally, the extraction of Social,
Group and Individual Mind from the multiple agents' rule bases is
performed by cognitive analysis and a matching criterion.
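As a point of reference for the rule-base types named above, a zero-order Takagi-Sugeno inference step can be sketched as follows. The membership functions and rule consequents are hypothetical toy values, not the agent rule bases used by the paper or by FUZNET.

```python
# Minimal zero-order Takagi-Sugeno fuzzy inference sketch.
# The rule base below is illustrative only.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Each rule: (membership function over the input, crisp consequent value)
RULES = [
    (lambda x: tri(x, 0.0, 0.0, 0.5), 1.0),   # "x is LOW  -> y = 1.0"
    (lambda x: tri(x, 0.0, 0.5, 1.0), 0.5),   # "x is MED  -> y = 0.5"
    (lambda x: tri(x, 0.5, 1.0, 1.0), 0.0),   # "x is HIGH -> y = 0.0"
]

def ts_infer(x, rules=RULES):
    """Weighted average of rule consequents (zero-order Takagi-Sugeno)."""
    weights = [mf(x) for mf, _ in rules]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * y for w, (_, y) in zip(weights, rules)) / total
```

A Mamdani system would instead aggregate fuzzy output sets and defuzzify; the weighted-average form above is what makes Takagi-Sugeno rule bases convenient for neural-network learning.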
Abstract: Retaining customer satisfaction is important in
information technology services. When a service failure occurs,
companies need to take service recovery action to restore customer
satisfaction. Although companies cannot avoid all problems and
complaints, they should try to make up for them. Service failure
and service recovery have therefore become an important and
challenging issue for companies. In this paper, the literature and the
problems in information technology services are reviewed. A
profit-driven integrated model of service failure and service recovery is
established in view of the benefit to both customer and enterprise.
Moreover, the interaction between service failure and the service
recovery strategy is studied, and the result verifies the matching
principles between the service recovery strategy and the type of
service failure. In addition, the relationship between the cost of
service recovery and the customer's cumulative value of service
after recovery is analyzed with the model. The result helps managers
decide on appropriate resource allocations for recovery strategies.
Abstract: Speeded-Up Robust Features (SURF) are commonly used for feature matching in stereo vision because of their robustness to scale and rotation changes. However, SURF features cannot cope with large viewpoint changes or skew distortion. This paper introduces a method which improves the accuracy of wide-baseline matching by rectifying the image using two vanishing points. A simplified orientation correction is then used to remove false matches.
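One standard way two vanishing points can drive a rectification is affine rectification: the line through the two vanishing points is the image of the line at infinity, and a homography whose last row is that line maps it back to infinity. This is a textbook construction, not necessarily the paper's exact procedure, and the vanishing-point coordinates below are made-up examples.

```python
import numpy as np

# Affine rectification from two vanishing points (a standard
# projective-geometry construction; the paper's full method may differ).

def affine_rectify_homography(vp1, vp2):
    """Homography mapping the image line through two vanishing points
    back to the line at infinity, removing projective distortion."""
    v1 = np.array([vp1[0], vp1[1], 1.0])
    v2 = np.array([vp2[0], vp2[1], 1.0])
    l = np.cross(v1, v2)          # image of the line at infinity
    l = l / l[2]                  # normalize so the last entry is 1
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [l[0], l[1], l[2]]])

# Hypothetical vanishing points for illustration.
H = affine_rectify_homography((400.0, 50.0), (-300.0, 60.0))
# After rectification, both vanishing points are sent to infinity:
# their third homogeneous coordinate becomes zero.
w1 = (H @ np.array([400.0, 50.0, 1.0]))[2]
w2 = (H @ np.array([-300.0, 60.0, 1.0]))[2]
```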
Abstract: In this paper, we consider the problem of Popular Matching with strictly ordered preference lists. A Popular Matching is not guaranteed to exist in every network. We propose an ID-based, constant-space, self-stabilizing algorithm that converges to a Maximum Popular Matching, an optimal solution, if one exists. We show that the algorithm stabilizes in O(n^5) moves under any scheduler (daemon).
Abstract: Image registration plays an important role in the
diagnosis of dental pathologies such as dental caries, alveolar bone
loss and periapical lesions. This paper presents a new wavelet-based
algorithm for registering noisy, poor-contrast dental x-rays. The
proposed algorithm has two stages. The first stage is a preprocessing
stage that removes noise from the x-ray images using a Gaussian
filter. The second stage is a geometric transformation stage; the
proposed work uses two levels of affine transformation, correlating
wavelet coefficients instead of gray values. The algorithm has been
applied to a number of pre- and post-RCT (root canal treatment)
periapical radiographs. Root Mean Square Error (RMSE) and the
correlation coefficient (CC) are used for quantitative evaluation.
The proposed technique outperforms both a conventional
multiresolution-strategy-based image registration technique and
manual registration.
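The two evaluation metrics named in the abstract have standard definitions, sketched below. The paper may compute them over wavelet coefficients rather than raw pixels, and the arrays used here are illustrative.

```python
import numpy as np

def rmse(a, b):
    """Root Mean Square Error between two equally sized images."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def correlation_coefficient(a, b):
    """Pearson correlation coefficient between two images."""
    a, b = np.asarray(a, float).ravel(), np.asarray(b, float).ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

A good registration drives RMSE toward 0 and CC toward 1 between the reference and the transformed floating image.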
Abstract: Many matching algorithms with different characteristics have been introduced in recent years. For real-time systems these algorithms are usually based on minutiae features. In this paper we introduce a novel approach to feature extraction in which the extracted features are invariant to shift and rotation of the fingerprint, while at the same time the matching operation is performed more easily and with higher speed and accuracy. In this approach, a reference point and a reference orientation are first determined for each fingerprint, and the features are then converted into polar coordinates based on this information. Owing to the high speed and accuracy of this approach, the small volume of extracted features and the simplicity of the matching operation, it is well suited to real-time applications.
Abstract: This paper proposes a new CAD tool for microwave amplifier design. The proposed tool is based on a survey of broadband amplifier design methods, such as feedback amplifiers, balanced amplifiers and compensated matching networks. The tool is developed for broadband amplifiers using a compensated matching network, yielding an unconditionally stable amplifier. The developed program is based on analytical procedures with the ability to display a Smith chart, and is implemented in C#. The program is applied to a broadband amplifier covering the range 300-700 MHz as a test example. The results agree closely with the expected results. Finally, these methods can be extended to wideband amplifier design.
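Two of the basic quantities any such matching-network CAD tool must compute are the reflection coefficient and the VSWR of a load against the system impedance; these standard formulas are sketched below (the paper's analytical procedures and Smith-chart rendering go well beyond this).

```python
def reflection_coefficient(z_load, z0=50.0):
    """Complex reflection coefficient of a load against reference Z0."""
    return (z_load - z0) / (z_load + z0)

def vswr(z_load, z0=50.0):
    """Voltage standing wave ratio derived from |Gamma|."""
    g = abs(reflection_coefficient(z_load, z0))
    return (1 + g) / (1 - g)
```

A perfectly matched load (Gamma = 0, VSWR = 1) is the target the compensated matching network approximates across the amplifier's band.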
Abstract: XML is becoming a de facto standard for online data exchange. Existing XML filtering techniques based on a publish/subscribe model focus on highly structured data marked up with XML tags. These techniques filter data-centric XML documents efficiently but are not effective at filtering the element contents of document-centric XML. In this paper, we propose an extended XPath specification which includes the special matching character '%' used in the LIKE operation of SQL, to overcome the difficulty of writing queries that adequately filter element contents under the previous XPath specification. We also present a novel technique, called Pfilter, for filtering a collection of document-centric XML documents, which is able to exploit the extended XPath specification. We present several performance studies of efficiency and scalability using the multi-query processing time (MQPT).
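The '%' wildcard semantics borrowed from SQL's LIKE operator can be sketched by translation to a regular expression: '%' matches any run of characters inside element content. How Pfilter actually indexes and evaluates such predicates is not described in the abstract; this shows only the matching semantics.

```python
import re

def like_to_regex(pattern):
    """Translate a LIKE-style pattern with '%' wildcards into a regex."""
    parts = pattern.split('%')
    return '^' + '.*'.join(re.escape(p) for p in parts) + '$'

def like_match(pattern, text):
    """True when the element content matches the '%' pattern."""
    return re.match(like_to_regex(pattern), text, re.DOTALL) is not None
```

A query such as //title[. LIKE '%matching%'] would then select elements whose text content contains the word anywhere, which plain XPath string equality cannot express concisely.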
Abstract: Human pose estimation can be performed using Active Shape Models. Existing techniques applying Active Shape Models to human-body research, such as human detection, primarily use the silhouette of the human body. Such techniques cannot accurately estimate poses involving the two arms and legs, since the silhouette represents the body only as a rough outline. To solve this problem, we model the human body as a stick figure, a "skeleton". The skeleton model of the human body can accommodate various human poses. To obtain an effective estimation result, we applied background subtraction and a modified matching algorithm of the original Active Shape Models in the fitting process. The model was built from images of 600 human bodies and has 17 landmark points indicating body junctions and key features of the pose. The maximum number of iterations for the fitting process was 30, and the execution time was less than 0.03 s.
Abstract: An effective visual error concealment method is presented,
employing a robust rotation, scale, and translation (RST) invariant
partial patch matching model (RSTI-PPMM) and exemplar-based
inpainting. While the proposed robust, inherently feature-enhanced
texture synthesis approach ensures the generation of perceptually
plausible error concealment results, its outlier pruning property
guarantees significant quality improvements, both quantitatively and
qualitatively. No intermediate user interaction is required for
pre-segmented media, and the method follows a bootstrapping
approach for automatic visual loss recovery and image and video
error concealment.
Abstract: Recently, Denial of Service (DoS) attacks, and Distributed DoS (DDoS) attacks, which are a stronger form of DoS mounted from multiple hosts, have become security threats on the Internet. Identifying the attack source and blocking the attack traffic are important countermeasures against these attacks. In general, identification is difficult because information about the attack source is falsified, so a method of identifying the source by tracing the route of the attack traffic is necessary. A traceback method that uses traffic patterns, taking changes in the number of packets over time as the traceback criterion, has been proposed. It traces an attack by matching the shapes of the input traffic patterns against the shape of the output traffic pattern observed at a network branch point such as a router; a traffic pattern is the shape of the traffic and is unfalsifiable information. However, the traceback methods proposed to date cannot achieve sufficient tracing accuracy, because they use traffic patterns directly, and these are influenced by non-attack traffic. In this paper, a new traffic pattern matching method using Independent Component Analysis (ICA) is proposed.
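The baseline matching step described above (comparing the shape of an output traffic pattern against candidate input patterns) can be sketched as normalized correlation over packets-per-interval series. The ICA separation step that the paper adds to remove the influence of non-attack traffic is not reproduced here, and the packet counts below are illustrative.

```python
import math

def normalized_correlation(a, b):
    """Shape similarity of two equal-length packet-count series."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_matching_input(output_pattern, input_patterns):
    """Index of the input link whose pattern best matches the output,
    i.e. the branch the attack traffic most likely arrived on."""
    scores = [normalized_correlation(output_pattern, p)
              for p in input_patterns]
    return scores.index(max(scores))
```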
Abstract: An Advanced Driver Assistance System (ADAS) is a computer system on board a vehicle which is used to reduce the risk of vehicular accidents by monitoring factors relating to the driver, vehicle and environment and taking some action when a risk is identified. Much work has been done on assessing vehicle and environmental state, but there is still comparatively little published work that tackles the problem of driver state. Visual attention is one such driver state. In fact, some researchers claim that lack of attention is the main cause of accidents, as factors such as fatigue, alcohol or drug use, distraction and speeding all impair the driver's capacity to pay attention to the vehicle and road conditions [1]. This seems to imply that the main cause of accidents is inappropriate driver behaviour in cases where the driver is not giving full attention while driving. The work presented in this paper proposes an ADAS which uses an image-based template matching algorithm to detect whether a driver is failing to observe particular windscreen cells. This is achieved by dividing the windscreen into 24 uniform cells (4 rows of 6 columns) and matching video images of the driver's left eye against eye-gesture templates drawn from images of the driver looking at the centre of each windscreen cell. The main contribution of this paper is to assess the accuracy of this approach using Receiver Operating Characteristic analysis. Our evaluation gives a sensitivity of 84.3% and a specificity of 85.0% for the eye-gesture template approach, indicating that it may be useful for determining the driver's point of regard.
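The two ROC quantities reported above are computed from confusion-matrix counts in the standard way, sketched below. The counts in the test are made up for illustration and are not the paper's raw data.

```python
def sensitivity(tp, fn):
    """True positive rate: correctly detected 'looking at cell' events."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: correctly rejected non-matching eye gestures."""
    return tn / (tn + fp)
```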
Abstract: We demonstrate the synthesis of intermediary views
within a sequence of color-encoded, materials-discriminating X-ray
images that exhibit animated depth in a visual display. During the
image acquisition process, the requirement for a linear X-ray detector
array is replaced by synthetic imagery. The Scale Invariant Feature
Transform (SIFT), in combination with material-segmented morphing,
is employed to produce the synthetic imagery. A quantitative analysis
of the feature matching performance of SIFT is presented along with
a comparative study of the synthetic imagery. We show that the total
number of matches produced by SIFT decreases as the angular
separation between the generating views increases; this effect is
accompanied by an increase in the total number of synthetic pixel
errors. The trends observed are obtained from 15 different luggage
items. This programme of research is in collaboration with the UK
Home Office and the US Dept. of Homeland Security.
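SIFT matching is typically done by nearest-neighbour search over descriptors with Lowe's ratio test; a brute-force sketch of that step is given below. The paper's SIFT implementation and thresholds may differ, and the short toy descriptors in the test stand in for real 128-dimensional SIFT descriptors.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Pairs (i, j) matching descriptor i in A to its nearest neighbour
    j in B, kept only when the nearest distance is below `ratio` times
    the second-nearest distance (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

Counting len(ratio_test_matches(...)) across view pairs of increasing angular separation is one way to obtain the declining match-count trend the abstract reports.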
Abstract: Bioinformatics and cheminformatics are disciplines that use computers to provide tools for the acquisition, storage, processing, analysis and integration of biological and chemical data, and for the development of potential applications of such data. A chemical database is a database designed exclusively to store chemical information. NMRShiftDB is one of the main databases used to represent chemical structures in 2D or 3D. The SMILES format is one of many ways to write a chemical structure in a linear form. In this study we extracted antimicrobial structures in SMILES format from NMRShiftDB and stored them, with their corresponding information, in our local data warehouse. Additionally, we developed a search tool that responds to a user's query using the JME Editor, which allows the user to draw or edit molecules and converts the drawn structure into SMILES format. We applied the Quick Search algorithm to search for antimicrobial structures in our local data warehouse.
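The Quick Search algorithm named above is Sunday's exact string-matching method: on a mismatch, the window shifts according to the character just past the window. A sketch over plain strings follows; the SMILES strings in the test are examples (aspirin and its benzene-ring fragment), not structures from the study's warehouse.

```python
def quick_search(text, pattern):
    """Return the first index of pattern in text, or -1 (Sunday's
    Quick Search)."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # Shift table: distance from the last occurrence of each pattern
    # character to one past the end of the pattern.
    shift = {c: m - i for i, c in enumerate(pattern)}
    pos = 0
    while pos + m <= n:
        if text[pos:pos + m] == pattern:
            return pos
        if pos + m >= n:
            return -1
        # Shift by the rule for the character just past the window;
        # characters absent from the pattern allow a full m+1 jump.
        pos += shift.get(text[pos + m], m + 1)
    return -1
```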
Abstract: Small-scale RC models of both piles and tunnel ducts
were produced as mockups of reality and loaded under soil
confinement conditions to investigate the damage evolution of
structural RC interacting with soil. Experimental verifications using a
3D nonlinear FE analysis program called COM3D, which was
developed at the University of Tokyo, are introduced. This analysis
has been used in practice for seismic performance assessment of
underground ducts and in-ground LNG storage tanks in consideration
of soil-structure interaction under static and dynamic loading. Varying
modes of failure of RC piles subjected to different magnitudes of soil
confinement were successfully reproduced in the proposed small-scale
experiments and numerically simulated as well. Analytical simulation
was applied to RC tunnel mockups under a wide variety of depth and
soil confinement conditions, and reasonable matching was confirmed.
Abstract: Numerical design optimization is a powerful tool that
can be used by engineers during any stage of the design process.
There are many different applications for structural optimization;
the specific application discussed in this paper is experimental data
matching. Data obtained through tests on a physical structure are
matched with data from a numerical model of the same structure.
The data of interest are the dynamic characteristics of an antenna
structure, focusing on the mode shapes and modal frequencies. The
structure used was a scaled and simplified model of the Karoo Array
Telescope-7 (KAT-7) antenna structure. This kind of data matching
is a complex and difficult task. This paper discusses how optimization
can assist an engineer in correlating a finite element model with
vibration test data.
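A minimal form of the data-matching objective described above is a least-squares error between measured and model-predicted modal frequencies, which the optimizer minimizes by adjusting model parameters. The frequency values in the test are illustrative, not KAT-7 data, and a full correlation would also compare mode shapes (e.g. via the modal assurance criterion).

```python
def frequency_match_error(measured_hz, predicted_hz):
    """Sum of squared relative errors over paired modal frequencies;
    an optimizer drives this toward zero."""
    return sum(((p - m) / m) ** 2
               for m, p in zip(measured_hz, predicted_hz))
```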
Abstract: The problem of frequent pattern discovery is defined
as the process of searching for patterns, such as sets of features or
items, that appear frequently in data. Finding such frequent patterns
has become an important data mining task because it reveals
associations, correlations, and many other interesting relationships
hidden in a database. Most of the proposed frequent pattern mining
algorithms have been implemented in imperative programming
languages. This paradigm is inefficient when the set of patterns is
large and the frequent patterns are long. We suggest applying a
high-level declarative style of programming to the problem of
frequent pattern discovery. We consider two languages: Haskell and
Prolog. Our intuition is that the problem of finding frequent patterns
should be efficiently and concisely implementable in a declarative
paradigm, since pattern matching is a fundamental feature supported
by most functional languages and by Prolog. Our frequent pattern
mining implementations in Haskell and Prolog confirm our
hypothesis about the conciseness of the program. Comparative
studies of declarative versus imperative programming on lines of
code, speed and memory usage are reported in the paper.
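For concreteness, the underlying task can be sketched as follows: enumerate itemsets whose support (number of containing transactions) meets a minimum threshold. The paper's implementations are in Haskell and Prolog; this brute-force Python sketch over toy transactions only fixes the definitions.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """All itemsets (as frozensets) appearing in at least min_support
    transactions, found by level-wise brute-force enumeration."""
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    for size in range(1, len(items) + 1):
        found_any = False
        for candidate in combinations(items, size):
            cset = frozenset(candidate)
            support = sum(1 for t in transactions if cset <= set(t))
            if support >= min_support:
                frequent[cset] = support
                found_any = True
        if not found_any:   # no frequent set of this size: stop early
            break
    return frequent
```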
Abstract: Nowadays computer worms, viruses and Trojan horses,
collectively called malware, have become widespread. A decade ago
such malware merely damaged computers by deleting or rewriting
important files, but recent malware seems to be designed to earn
money. Some malware collects personal information so that
malicious people can obtain secrets such as passwords for online
banking, evidence for a scandal, or contact addresses related to a
target. Moreover, the relation between money and malware has
become more complex: many kinds of malware spawn bots to obtain
springboards. Meanwhile, for ordinary Internet users,
countermeasures against malware have come up against a blank wall.
Pattern matching has become too wasteful of computer resources,
since matching tools have to deal with the large number of patterns
derived from subspecies, which virus-making tools can generate
automatically; moreover, metamorphic and polymorphic malware are
no longer unusual. Recently, malware-checking sites have appeared
that check content in place of the user's PC, but a new type of
malicious site has appeared that evades such checks. In this paper,
existing protocols and methods related to the web are reconsidered
in terms of protection from current attacks, and a new protocol and
method are proposed for securing the web.
Abstract: In this paper we describe the design and implementation of a parallel algorithm for data assimilation with the ensemble Kalman filter (EnKF) for the oil reservoir history matching problem. The use of a large number of observations from time-lapse seismic data leads to a long turnaround time for the analysis step, in addition to the time-consuming simulations of the realizations, so for efficient parallelization it is important to parallelize the analysis step as well. Our experiments show that parallelizing the analysis step in addition to the forecast step scales well, exploiting the same set of resources with some additional effort.
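The analysis step in question is the standard stochastic EnKF update, sketched serially below; it is the matrix algebra here (covariances, gain, per-member updates) that the paper distributes. Dimensions, the observation operator H and the noise level are toy values, nowhere near reservoir scale.

```python
import numpy as np

def enkf_analysis(X, obs, H, obs_err_std, rng):
    """Update ensemble X (state_dim x n_ens) with perturbed observations
    (stochastic EnKF analysis step)."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                       # sample covariance
    R = (obs_err_std ** 2) * np.eye(len(obs))       # obs error covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # Perturb the observation once per ensemble member.
    D = obs[:, None] + obs_err_std * rng.standard_normal((len(obs), n_ens))
    return X + K @ (D - H @ X)

# Toy usage: a scalar state observed directly with small error; the
# accurate observation pulls the ensemble mean from ~5 toward 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((1, 500)) + 5.0
Xa = enkf_analysis(X, np.array([0.0]), np.eye(1), 0.1, rng)
```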
Abstract: Cryptography provides a secure means of
information transmission over an insecure channel. It authenticates
messages based on a key, not on the user, and it requires a lengthy
key to encrypt and decrypt the messages sent and received. But such
keys can be guessed or cracked, and maintaining and sharing lengthy
random keys for enciphering and deciphering is a critical problem in
cryptographic systems. A new approach is described for generating a
crypto key from a person's iris pattern. In biometrics, the template
created by a biometric algorithm can only be authenticated by the
same person. Among biometric templates, iris features distinguish
individuals efficiently and produce fewer false positives in a large
population. The distribution of iris codes exhibits little intra-class
variability, which aids the cryptosystem in confidently decrypting
messages given an exact match of the iris pattern. In the proposed
approach, the iris features are extracted using multiresolution
wavelets, producing a 135-bit iris code for each subject which is
used for encrypting and decrypting messages. Autocorrelators are
used to recall the original messages from the partially corrupted data
produced by the decryption process. The approach intends to resolve
the repudiation and key management problems. Results were
analyzed for both a conventional iris cryptography system (CIC) and
a non-repudiation iris cryptography system (NRIC), showing that the
new approach provides considerably high authentication accuracy in
the enciphering and deciphering processes.
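The matching criterion the approach relies on can be sketched as Hamming distance over bit strings. The 135-bit length comes from the abstract; the short codes and the zero-tolerance threshold below are illustrative, and the wavelet extraction and autocorrelator stages are not reproduced.

```python
def hamming_distance(code_a, code_b):
    """Number of differing bits between two equal-length iris codes."""
    return sum(a != b for a, b in zip(code_a, code_b))

def iris_match(code_a, code_b, max_differing_bits=0):
    """Accept when the codes differ in at most the allowed number of
    bits (0 reproduces the exact matching the abstract describes)."""
    return hamming_distance(code_a, code_b) <= max_differing_bits
```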