Abstract: The main problems of data-centric and open-source
projects are the large number of developers and frequent changes
to the core framework. The Model-View-Controller (MVC) design pattern has
significantly improved the development and adjustment of complex projects.
Entity Framework, as the Model layer in an MVC architecture, has
simplified communication with the database. How often are these new
technologies used, and do they have the potential for designing
a more efficient Enterprise Resource Planning (ERP) system, one
better suited to accountants?
Abstract: The goal of data mining algorithms is to discover
useful information embedded in large databases. One of the most
important data mining problems is discovery of frequently occurring
patterns in sequential data. In a multidimensional sequence each
event depends on more than one dimension. The search space is quite
large and the serial algorithms are not scalable for very large
datasets. To address this, it is necessary to study scalable parallel
implementations of sequence mining algorithms.
In this paper, we present a model for multidimensional sequences
and describe a parallel algorithm based on data parallelism.
Simulation experiments show good load balancing and acceptable,
scalable speedup across different processor counts and problem sizes,
and demonstrate that our approach can work efficiently in a real parallel
computing environment.
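The data-parallel scheme described above can be illustrated with a toy support-counting sketch; this is not the authors' algorithm, and `is_subsequence`, the chunking scheme, and the use of threads in place of the processes of a real parallel environment are all illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def is_subsequence(pattern, sequence):
    """True if `pattern` occurs in `sequence` in order (not necessarily contiguously)."""
    events = iter(sequence)
    return all(e in events for e in pattern)

def local_support(pattern, partition):
    # Each worker counts the pattern's support in its own data partition.
    return sum(is_subsequence(pattern, seq) for seq in partition)

def parallel_support(pattern, database, workers=4):
    # Data parallelism: split the sequence database among workers,
    # count support locally, then sum the partial counts globally.
    chunks = [database[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda c: local_support(pattern, c), chunks))
```

A real implementation would distribute the partitions across processes or cluster nodes; only the partition-count-reduce structure carries over.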
Abstract: Economic factors are leading to the rise of
infrastructures that provide software and computing facilities as a
service, known as cloud services or cloud computing. Cloud services
can provide efficiencies for application providers, both by limiting
up-front capital expenses, and by reducing the cost of ownership over
time. Such services are made available in a data center, using shared
commodity hardware for computation and storage. There is a varied
set of cloud services available today, including application services
(salesforce.com), storage services (Amazon S3), compute services
(Google App Engine, Amazon EC2) and data services (Amazon
SimpleDB, Microsoft SQL Server Data Services, Google's Datastore).
These services represent a variety of reformations of data
management architectures, and more are on the horizon.
Abstract: Different methods based on biometric algorithms are
presented for eigenface representation and detection, including
face recognition, identification and verification. The theme of this
research is to manage the critical processing stages (accuracy, speed,
security and monitoring) of face activities with the flexibility of
searching and editing the secure authorized database. In this paper we
implement different techniques, such as eigenface vector reduction
using a texture- and shape-vector phenomenon for complexity
removal, while density matching scores with Face Boundary Fixation
(FBF) extract the most likely characteristics in this media-processing
content. We examine the development and performance
efficiency of the database by applying our algorithms in both the
recognition and detection phases. Our results show encouraging gains
in accuracy and security, with better achievement than
a number of previous approaches in all of the above processes.
Abstract: The aim of this study is to emphasize the opportunities in space design under the aspect of HCI as a performance area. HCI is a multidisciplinary approach that can be identified in many different areas. The aesthetic reflections of HCI through virtual reality in space design are high-tech solutions in which new innovations serve as computational facilities with artistic features. The method of this paper is to treat the subject in three main parts: the first part gives a general approach to and definition of interactivity on the basis of space design; the second part covers the concept of multimedia interactive theatre through selected samples from around the world and interactive design aspects; and the third part identifies samples from Turkey in terms of stage-design principles. In conclusion, it can be stated that the multimedia database is the virtual approach to theatre stage design with regard to interactive means and computational facilities, viewed from aesthetic aspects. HCI is mostly identified in theatre stages as computational intelligence under the effect of interactivity.
Abstract: Design is the primordial part of the realization of
a computer system. Several tools have been used to help designers
describe their software. These tools have enjoyed great success in the
relational-database domain, since they can generate the SQL script
modeling a database from an Entity/Association model. However,
with the evolution of computing, relational databases have
shown their limits, and the object-relational model has come into
increasing use. Present design tools support neither all of the new concepts
introduced by this model nor the syntax of the SQL3 language. We
propose in this paper a tool, called "NAVIGTOOLS", that assists in the
design and implementation of object-relational databases
by allowing the user to generate the script modeling its database
in the SQL3 language. This tool is based on the Entity/Association
and navigational models for modeling object-relational databases.
Abstract: In this paper we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we computer-prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer-prove Bayes' formula. Besides this, we describe an application of the presented formalized probability distributions to cryptography. Furthermore, this paper shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research, if the corresponding basic mathematical knowledge is available in a database.
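While the abstract concerns a formally verified implementation in Isabelle/HOL, the Miller-Rabin test itself can be sketched informally; this probabilistic version with random bases is a conventional textbook rendering, not the verified implementation described above:

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test; composites are rejected with high probability."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):          # trial division by small primes
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a witnesses that n is composite
    return True
```

A formal verification would instead prove the error bound and correctness of exactly this witness loop as theorems.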
Abstract: Object-Relational Databases (ORDB) are more complex in
nature than traditional relational databases because they combine the
characteristics of object-oriented concepts with the relational
features of conventional databases. Design of an ORDB demands
efficient and quality schema considering the structural, functional
and componential traits. This internal quality of the schema is
assured by metrics that measure the relevant attributes. This is
extended to substantiate the understandability, usability and
reliability of the schema, thus assuring external quality of the
schema. This work institutes a formalization of ORDB metrics;
metric definition, evaluation methodology and the calibration of the
metric. Three ORDB schemas were used to conduct the evaluation
and the formalization of the metrics. The metrics are calibrated using
content and criteria related validity based on the measurability,
consistency and reliability of the metrics. Nominal and summative
scales are derived based on the evaluated metric values and are
standardized. Future work pertaining to ORDB metrics forms the
concluding note.
Abstract: In this paper, a two-factor scheme is proposed to
generate cryptographic keys directly from biometric data, which,
unlike passwords, are strongly bound to the user. The hash value of the
reference iris code is used as a cryptographic key and its length
depends only on the hash function, being independent of any other
parameter. The entropy of such keys is 94 bits, which is much higher
than any other comparable system. The most important and distinct
feature of this scheme is that it regenerates the reference iris code by
providing a genuine iris sample and the correct user password. Since
iris codes obtained from two images of the same eye are not exactly
the same, error correcting codes (Hadamard code and Reed-Solomon
code) are used to deal with the variability. The scheme proposed here
can be used to provide keys for a cryptographic system and/or for
user authentication. The performance of this system is evaluated on
two publicly available databases for iris biometrics, namely the CBS and
ICE databases. The operating point of the system (values of False
Acceptance Rate (FAR) and False Rejection Rate (FRR)) can be set
by properly selecting the error correction capacity (ts) of the Reed-
Solomon codes, e.g., on the ICE database, at ts = 15, FAR is 0.096%
and FRR is 0.76%.
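The regeneration idea, locking key material with a reference iris code and unlocking it with a noisy sample of the same eye, can be sketched as a toy fuzzy commitment. A simple repetition code stands in for the paper's Hadamard/Reed-Solomon concatenation, and all names and sizes here are illustrative:

```python
import hashlib

REP = 5  # repetition factor (stand-in for the Hadamard/Reed-Solomon coding)

def encode(key_bits):
    # Repeat every key bit REP times.
    return [b for b in key_bits for _ in range(REP)]

def decode(code_bits):
    # Majority vote over each block of REP bits corrects up to REP//2 errors per block.
    return [int(sum(code_bits[i:i + REP]) > REP // 2)
            for i in range(0, len(code_bits), REP)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def enroll(key_bits, iris_bits):
    """Lock the key with the reference iris code; only this helper data is stored."""
    return xor(encode(key_bits), iris_bits)

def regenerate(helper, fresh_iris_bits):
    """Recover the key from a noisy iris sample of the same eye."""
    return decode(xor(helper, fresh_iris_bits))

def derive_key(bits):
    # The cryptographic key is a hash of the recovered bits, so its
    # length depends only on the hash function, as in the abstract.
    return hashlib.sha256(bytes(bits)).hexdigest()
```

The stored helper data reveals neither the key nor the iris code on its own; the error-correcting layer absorbs the bit-level variability between two captures of the same eye.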
Abstract: Computer languages are usually lumped together
into broad "paradigms", leaving us in want of a finer classification
of kinds of language. Theories distinguishing between "genuine
differences" in languages have been called for, and we propose that
such differences can be observed through a notion of expressive mode.
We outline this concept, propose how it could be operationalized and
indicate a possible context for the development of a corresponding
theory. Finally we consider a possible application in connection
with evaluation of language revision. We illustrate this with a case,
investigating possible revisions of the relational algebra in order to
overcome weaknesses of the division operator in connection with
universal queries.
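For context, the relational division operator mentioned above answers universal queries such as "which students took all required courses?". A minimal sketch over sets of tuples, with hypothetical example data:

```python
def divide(r, s):
    """Relational division r(A, B) ÷ s(B): the A values paired with every B in s."""
    a_values = {a for a, _ in r}
    return {a for a in a_values
            if all((a, b) in r for b in s)}

# Hypothetical example: which students took every required course?
took = {("ann", "db"), ("ann", "os"), ("bob", "db")}
required = {"db", "os"}
```

Here `divide(took, required)` yields only the students enrolled in all required courses; the well-known weaknesses the abstract alludes to concern how awkwardly this operator composes in practice.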
Abstract: To further advance research on immune-related genes
from T. molitor, we constructed a cDNA library and analyzed
expressed sequence tag (EST) sequences from 1,056 clones. After
removing vector sequences and quality checking through the Phred
program (trim_alt 0.05, P-score > 20), 1,039 sequences were generated.
The average insert length was 792 bp. In addition, we identified 162
clusters, 167 contigs and 391 singletons after the clustering and assembly
process using the TGICL package. EST sequences were searched against
the NCBI nr database by local BLAST (blastx, E
Abstract: The recognition of handwritten numerals is an
important area of research owing to its applications in post offices, banks
and other organizations. This paper presents automatic recognition of
handwritten Kannada numerals based on structural features. Five
different types of features, namely, a profile-based 10-segment string,
water reservoir, vertical and horizontal strokes, end points and
average boundary length from the minimal bounding box, are used in
the recognition of numerals. The effect of each feature and their
combination in the numeral classification is analyzed using nearest
neighbor classifiers. It is common to combine multiple categories of
features into a single feature vector for the classification. Instead,
separate classifiers can be used to classify based on each visual
feature individually and the final classification can be obtained based
on the combination of the separate base classification results. One
popular approach is to combine the classifier results into a feature
vector, leaving the decision to a next-level classifier. This method
is extended to extract better information, a possibility distribution,
from the base classifiers for resolving conflicts among the
classification results. Here, we use fuzzy k-Nearest Neighbor (fuzzy
k-NN) as the base classifier for the individual feature sets, whose
results together form the feature vector for the final k-Nearest
Neighbor (k-NN) classifier. Testing is done, using different features,
individually and in combination, on a database containing 1600
samples of different numerals and the results are compared with the
results of different existing methods.
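The two-stage combination can be sketched with a toy fuzzy k-NN base classifier whose per-class membership scores are stacked into a feature vector for a final k-NN; the data and feature views below are illustrative, not the structural features of the paper:

```python
from collections import Counter
import math

def distances(x, train):
    # Sorted (distance, label) pairs from sample x to every training point.
    return sorted((math.dist(x, xi), yi) for xi, yi in train)

def fuzzy_knn_scores(x, train, classes, k=3):
    """Base classifier: per-class membership scores from the k nearest
    neighbours, weighted by inverse distance (a common fuzzy k-NN variant)."""
    weights = Counter()
    for d, y in distances(x, train)[:k]:
        weights[y] += 1.0 / (d + 1e-9)
    total = sum(weights.values()) or 1.0
    return [weights[c] / total for c in classes]

def stacked_vector(feature_views, trains, classes, k=3):
    # Concatenate each view's membership scores into one feature vector
    # that feeds the final-stage classifier.
    vec = []
    for x, train in zip(feature_views, trains):
        vec.extend(fuzzy_knn_scores(x, train, classes, k))
    return vec

def knn_predict(x, train, k=3):
    """Final classifier: plain majority-vote k-NN."""
    return Counter(y for _, y in distances(x, train)[:k]).most_common(1)[0][0]
```

In the paper's setting, each feature view (profile strings, strokes, etc.) would get its own base classifier, and the final k-NN would be trained on the stacked membership vectors.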
Abstract: The vast amount of information hidden in huge
databases has created tremendous interest in the field of data
mining. This paper examines the possibility of using data clustering
techniques in oral medicine to identify functional relationships
between different attributes and classification of similar patient
examinations. Commonly used data clustering algorithms have been
reviewed and, as a result, several interesting findings have been
gathered.
Abstract: An automatic method for the extraction of feature points for face-based applications is proposed. The system is based upon volumetric feature descriptors, which in this paper have been extended to incorporate scale space. The method is robust to noise and has the ability to extract local and holistic features simultaneously from faces stored in a database. Extracted features are stable over a range of faces, with results indicating that, in terms of intra-ID variability, the technique has the ability to outperform manual landmarking.
Abstract: Warehousing is commonly used in factories for the
storage of products until the delivery of orders. As the number of
stored products increases, manual handling becomes tedious.
In recent years, manual storage has been converted into fully
or partially computer-controlled systems, also known as Automated
Storage and Retrieval Systems (AS/RS). This paper discusses an
AS/RS system designed so that the best storage location
for the products is determined using a fuzzy control system.
The design maintains the records of the products to be stored or already in
store and the storage/retrieval times, along with the availability status
of the storage locations. This paper discusses the maintenance of
the above-mentioned records and the use of fuzzy logic to
determine the optimum storage location for the products. The paper
further discusses the dynamic splitting and merging of storage
locations depending on product sizes.
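A fuzzy scoring of candidate storage locations might look like the following sketch; the triangular memberships, the single rule, and the slot fields are assumptions for illustration, not the paper's actual control system:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suitability(distance, fit):
    """Score a slot from travel distance (0..6) and size fit (0..1, 1 = snug)."""
    near = tri(distance, -0.1, 0.0, 6.0)   # "near" peaks at zero distance
    snug = tri(fit, 0.3, 1.0, 1.7)         # "snug" peaks at a perfect fit
    # Single Mamdani-style rule: IF near AND snug THEN suitable.
    return min(near, snug)

def best_slot(slots):
    """Pick the free slot with the highest fuzzy suitability."""
    return max((s for s in slots if s["free"]),
               key=lambda s: suitability(s["distance"], s["fit"]))
```

A full controller would add more rules (retrieval frequency, splitting/merging of slots) and a defuzzification stage; the sketch shows only the membership-and-rule core.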
Abstract: This study applies Geo-Informatic technology to land
tenure and land use in an economic-crop area, in order to create sustainable
land, provide access to the area, and produce sustainable food for the
people of the community. The research objectives are to 1)
apply Geo-Informatic Technology on land ownership and agricultural
land use (cash crops) in the research area, 2) create a GIS database on
land ownership and land use, and 3) create a database of an online
Geo-information system on land tenure and land use. The results of this
study reveal that, first, the study area lies on steep slopes, mountains and
valleys. The land is mainly in the forest zone, which was included in
the Forest Act 1941 and National Conserved Forest 1964. Residents
gained the rights to exploit the land passed down from their
ancestors. The practice was recognized by communities. The land
was suitable for cultivating a wide variety of economic crops that was
the main income of the family. At present the local residents keep
expanding the land to grow cash crops. Second, the geographic-information-system
database that was created consisted of the area range,
announcements from the Interior Ministry, interpretations of satellite
images, transportation routes, waterways, plots of land with a title
deed available at the provincial land office. Most pieces of land
without a title deed are located in the forest and national reserve
areas. Data were created from a field study and a land zone
determined by GPS. Last, an online Geo-Informatic System can
show the information of land tenure and land use of each economic
crop. High-resolution satellite data could be updated and
checked on the online Geo-Informatic System simultaneously.
Abstract: Approximate tandem repeats in a genomic sequence are
two or more contiguous, similar copies of a pattern of nucleotides.
They are used in DNA mapping, studying molecular evolution
mechanisms, forensic analysis and research in diagnosis of inherited
diseases. Their functions are still under investigation and not well
defined, but growing biological databases, together with tools for
identification of these repeats may lead to discovery of their specific
role or correlation with particular features. This paper presents a new
approach for finding approximate tandem repeats in a given sequence,
where the similarity between consecutive repeats is measured using
the Hamming distance. It is an enhancement of a method for finding
exact tandem repeats in DNA sequences based on the Burrows-
Wheeler transform.
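Independently of the Burrows-Wheeler-based method, the Hamming-distance criterion itself is easy to state: adjacent copies of length p may differ in at most k positions. A naive scan illustrating the definition (not the paper's algorithm):

```python
def hamming(a, b):
    # Number of mismatching positions between two equal-length strings.
    return sum(x != y for x, y in zip(a, b))

def approximate_tandem_repeats(seq, period, max_mismatch):
    """Yield (start, copies) for runs of >= 2 adjacent copies of length
    `period` whose consecutive copies differ by at most `max_mismatch`."""
    found, i = [], 0
    while i + 2 * period <= len(seq):
        copies, j = 1, i
        while (j + 2 * period <= len(seq) and
               hamming(seq[j:j + period],
                       seq[j + period:j + 2 * period]) <= max_mismatch):
            copies += 1
            j += period
        if copies >= 2:
            found.append((i, copies))
            i = j + period           # skip past the reported run
        else:
            i += 1
    return found
```

This brute-force scan is quadratic in the worst case; the point of the Burrows-Wheeler-based enhancement described above is to find the same repeats far more efficiently.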
Abstract: The goal of this work is to describe a new algorithm for finding the optimal variable order, the number of nodes for any order and other ROBDD parameters, based on a tabular method. The tabular method makes use of a pre-built backend database table that stores the ROBDD size for selected combinations of minterms. The user applies the backend table and the proposed algorithm to find the necessary ROBDD parameters, such as the best variable order, the number of nodes, etc. Experimental results on benchmarks are given for this technique.
Abstract: This paper presents the findings of two experiments that were performed on the Redundancy in Wireless Connection Model (RiWC) using the 802.11b standard. The experiments were simulated using OPNET 11.5 Modeler software. The first was aimed at finding the maximum number of simultaneous Voice over Internet Protocol (VoIP) users the model would support under the G.711 and G.729 codec standards when the packetization interval was 10 milliseconds (ms). The second experiment examined the model's VoIP user capacity using the G.729 codec standard along with background traffic, using the same packetization interval as in the first experiment. To determine the capacity of the model under the various experiments, we checked three metrics: jitter, delay and data loss. When background traffic was added, we checked the response time in addition to the previous three metrics. The findings of the first experiment indicated that the maximum number of simultaneous VoIP users the model was able to support was 5, which is consistent with recent research findings. When using the G.729 codec, the model was able to support up to 16 VoIP users; similar experiments in the current literature have indicated a maximum of 7 users. The findings of the second experiment demonstrated that the maximum number of VoIP users the model was able to support was 12, with the existence of background traffic.
Abstract: Many studies have been conducted worldwide to derive
attenuation relationships; however, few relationships have
been developed for the seismic region of the Iranian plateau, and
only a few of these studies have derived
attenuation relationships for parameters such as uniform duration.
Uniform duration is the total time during which the acceleration is
larger than a given threshold value (default is 5% of PGA). In this
study, the database was the same as that used previously by Ghodrati
Amiri et al. (2007), with the same correction methods for earthquake
records in Iran. However, in this study, records from earthquakes with
MS < 4.0 were excluded from the database, each record was then
individually filtered, and the dataset was thereby
expanded. This new set of attenuation relationships for Iran is
derived based on tectonic conditions with soil classification into rock
and soil. Earthquake parameters were chosen to be
hypocentral distance and magnitude in order to make it easier to use
the relationships for seismic hazard analysis. Tehran is the capital
city of Iran with a large number of important structures. In this study,
a probabilistic approach has been utilized for seismic hazard
assessment of this city. The resulting uniform-duration versus
return-period diagrams are suggested for use in any project in the area.
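Uniform duration as defined above is straightforward to compute from a digitized accelerogram; in this minimal sketch the sampling step and the 5% default threshold follow the definition in the abstract, while the record itself is illustrative:

```python
def uniform_duration(accel, dt, threshold_ratio=0.05):
    """Total time during which |a(t)| exceeds threshold_ratio * PGA.

    accel: acceleration samples; dt: sampling interval in seconds.
    """
    pga = max(abs(a) for a in accel)        # peak ground acceleration
    threshold = threshold_ratio * pga
    # Sum dt over every sample above the threshold (need not be contiguous).
    return dt * sum(abs(a) > threshold for a in accel)
```

Unlike bracketed duration, the intervals counted here need not be contiguous, which is what makes the measure "uniform".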