Abstract: This paper proposes an algorithm for the evaluation and selection of suppliers. First, all materials and services required by the organization were identified and categorized by nature using the ABC method. Next, in order to reduce risk factors and maximize the organization's profit, purchasing strategies were determined. Appropriate criteria were then identified for the preliminary evaluation of suppliers applying to the organization; the output of this stage was a list of suppliers qualified by the organization to participate in its tenders. Subsequently, for a particular material, appropriate ordering criteria were determined, taking into account the material's specifications as well as the organization's needs. Finally, to validate and verify the proposed model, it was applied to Mobarakeh Steel Company (MSC), whose qualified suppliers were ranked by means of a hierarchical fuzzy TOPSIS method. The results show that the proposed algorithm is effective, efficient, and easy to apply.
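As a concrete illustration of the final ranking step, here is a minimal crisp TOPSIS sketch in Python. The paper uses a hierarchical *fuzzy* variant, so this simplified crisp version only conveys the closeness-coefficient ranking idea; the decision matrix and criterion weights below are invented for illustration.

```python
# Minimal crisp TOPSIS sketch (the paper's method is a hierarchical fuzzy
# variant; this only illustrates the ranking idea). All criteria are
# treated as benefit criteria; scores and weights are made up.
import numpy as np

def topsis(scores, weights):
    """Rank alternatives (rows) against benefit criteria (columns)."""
    norm = scores / np.linalg.norm(scores, axis=0)      # vector-normalize columns
    v = norm * weights                                  # apply criterion weights
    ideal, anti = v.max(axis=0), v.min(axis=0)          # best/worst virtual suppliers
    d_pos = np.linalg.norm(v - ideal, axis=1)           # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)            # distance to anti-ideal
    closeness = d_neg / (d_pos + d_neg)                 # higher = better supplier
    return np.argsort(-closeness), closeness

# Four hypothetical suppliers scored on quality, delivery, price (benefit-scaled).
scores = np.array([[7, 9, 6], [8, 7, 8], [6, 8, 9], [9, 6, 7]], float)
ranking, cc = topsis(scores, weights=np.array([0.5, 0.3, 0.2]))
print(ranking, cc.round(3))
```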
Abstract: This article presents the development of efficient algorithms for comparing tablet copies. Image recognition has specialized uses in digital systems such as medical imaging, computer vision, defense, and communications. Comparing two images that look indistinguishable is a formidable task: two images taken from different sources might look identical, but because of different digitizing properties they are not. Moreover, small variations in image content, such as cropping, rotation, and slight photometric alteration, make direct matching techniques unsuitable. In this paper, we introduce matching algorithms designed to help art centers distinguish genuine painting images from fakes. Several vision algorithms for local image features are implemented using MATLAB. Within this framework, a Table Comparison Computer Tool (TCCT) was designed to support our research. The TCCT is a Graphical User Interface (GUI) tool used to identify images by their shapes and objects; the parameters of the vision system are fully accessible to the user through this interface. For matching, it applies different description techniques that can identify the exact figures of objects.
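The abstract's local-feature matching is implemented in MATLAB; as a rough Python/OpenCV analogue, the sketch below matches ORB keypoints between two images with a ratio test. ORB is an illustrative stand-in (the authors' actual descriptors are not specified), and the file names are hypothetical.

```python
# Hedged Python/OpenCV analogue of local-feature matching between an
# original and a suspect copy. ORB is a stand-in descriptor choice.
import cv2

def match_images(path_a, path_b, ratio=0.75):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)                # local feature detector
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)           # Hamming distance for ORB
    # Lowe's ratio test filters ambiguous matches; local features tolerate
    # the cropping/rotation/photometric changes mentioned in the abstract.
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]
    return len(good) / max(len(kp_a), 1)                # crude similarity score

print(match_images("original.png", "suspect.png"))      # hypothetical file names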
Abstract: With the advance of information technology, the use of the Internet to access data resources has steadily increased, and huge amounts of data have become accessible in various forms. Network providers and agencies naturally seek to prevent electronic attacks that may be harmful or related to terrorist activities, and authorities have therefore undertaken a variety of methods to protect sensitive regions of the network from harmful data. One of the most important approaches is to deploy firewalls in the network facilities. The main objective of a firewall is to stop the transfer of suspicious packets, which it can do in several ways. However, because of blind packet blocking, high processing power requirements, and high cost, some providers are reluctant to use firewalls. In this paper, we propose a method for finding a discriminant function that distinguishes between normal and harmful packets through statistical processing of network router logs. By discriminating these data, an administrator may take appropriate action against the offending user. The method is very fast and can be deployed simply alongside Internet routers.
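To make the discriminant-function idea concrete, here is a small sketch that fits a linear discriminant to synthetic log-derived features. The specific features (mean packet size, packet rate, port entropy) and the synthetic data are assumptions for illustration, not the paper's statistics.

```python
# Illustrative sketch: learn a linear discriminant over router-log features
# to separate normal from harmful traffic. Features and data are invented.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Synthetic log-derived features: [mean packet size, packets/sec, port entropy]
normal  = rng.normal([500, 50, 3.0], [100, 10, 0.5], size=(500, 3))
harmful = rng.normal([200, 400, 1.0], [80, 60, 0.4], size=(500, 3))
X = np.vstack([normal, harmful])
y = np.array([0] * 500 + [1] * 500)                     # 1 = harmful

lda = LinearDiscriminantAnalysis().fit(X, y)            # the discriminant function
print("weights:", lda.coef_, "bias:", lda.intercept_)
print("flagged:", lda.predict([[220, 380, 1.1]]))       # score a new log record
```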
Abstract: The Stream Control Transmission Protocol (SCTP) has been proposed to provide reliable transport of real-time communications. Owing to its attractive features, such as multi-streaming and multi-homing, SCTP is often expected to be an alternative to TCP and UDP. In the original SCTP standard, the secondary path is mainly regarded as a redundant backup. Recently, much research has focused on extending SCTP to enable a host to send its packets to a destination over multiple paths simultaneously. To transfer packets concurrently over multiple paths, SCTP must be carefully designed to avoid unnecessary fast retransmission and mis-estimation of the congestion window size on each path. We therefore propose an Enhanced Cooperative ACK SCTP (ECASCTP) to improve the path recovery efficiency of a multi-homed host operating in concurrent multipath transfer mode. We evaluated the performance of the proposed scheme using ns-2 simulation in terms of cwnd variation, path recovery time, and goodput. Our scheme provides better performance in lossy and path-asymmetric networks.
Abstract: This paper addresses the development of an intelligent vision system for human-robot interaction. The two novel contributions of this paper are 1) detection of human faces and 2) localization of the eyes. The method is based on the visual attributes of human skin color and geometrical analysis of the face skeleton. This paper introduces a spatial-domain filtering method, named the 'Fuzzily Skewed Filter', which incorporates fuzzy rules for deciding the gray level of each pixel from its neighborhood and takes advantage of both the median and averaging filters. The effectiveness of the method has been demonstrated by implementing eye-tracking commands on an entertainment robot named "AIBO".
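The paper's actual fuzzy rule base is not reproduced in the abstract, so the sketch below only captures the stated spirit: blend the median and averaging filters with a fuzzy weight. The membership function used here is an assumption.

```python
# Sketch of a fuzzy median/mean blend in the spirit of the 'Fuzzily Skewed
# Filter'. The weight function is an assumed stand-in for the paper's rules.
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def fuzzy_skewed_filter(img, size=3, k=20.0):
    img = img.astype(float)
    med = median_filter(img, size=size)                 # robust to impulse noise
    avg = uniform_filter(img, size=size)                # smooths Gaussian noise
    # Fuzzy membership: the more a pixel deviates from its local median,
    # the more we trust the median over the mean (impulse-like evidence).
    w = np.abs(img - med) / (np.abs(img - med) + k)
    return w * med + (1.0 - w) * avg

noisy = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float)
print(fuzzy_skewed_filter(noisy).shape)
```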
Abstract: Blind signatures enable users to obtain valid signatures for a message without revealing its content to the signer. This paper presents a new blind signature scheme, i.e. identity-based blind signature scheme with message recovery. Due to the message recovery property, the new scheme requires less bandwidth than the identitybased blind signatures with similar constructions. The scheme is based on modified Weil/Tate pairings over elliptic curves, and thus requires smaller key sizes for the same level of security compared to previous approaches not utilizing bilinear pairings. Security and efficiency analysis for the scheme is provided in this paper.
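Pairing arithmetic is out of scope for a short sketch, so the blinding idea itself is shown below with textbook RSA instead; this is a deliberately swapped-in, toy-sized illustration of how the signer signs without seeing the message, not the paper's pairing-based scheme.

```python
# Blind-signature *concept* via RSA blinding (toy key, NOT the paper's
# identity-based pairing scheme; shown only to illustrate blinding).
p, q = 61, 53
n, e, d = p * q, 17, 2753                               # toy RSA key (n = 3233)

m = 42                                                  # message content stays hidden
r = 7                                                   # blinding factor, gcd(r, n) = 1
blinded = (m * pow(r, e, n)) % n                        # user blinds the message
s_blind = pow(blinded, d, n)                            # signer signs without seeing m
s = (s_blind * pow(r, -1, n)) % n                       # user unblinds the signature
assert pow(s, e, n) == m                                # valid signature on m
print("signature:", s)
```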
Abstract: The purpose of this paper is to study database models in order to use them efficiently in e-commerce websites. We look for a method that can save and retrieve information in e-commerce websites in a form that semantic web applications can work with, and we also survey the different database technologies used in e-commerce. One of the most important deficits of the semantic web is the shortage of semantic data, since most information is still stored in relational databases. We therefore present an approach for mapping legacy data stored in relational databases into the Semantic Web using virtually any modern RDF query language, as long as it is closed within RDF. To achieve this goal, we study XML structures for the relational databases of legacy websites, and eventually move one level above XML to look for a mapping from the relational data model (RDM) to RDF. Given that a large number of semantic web applications take advantage of the relational model, opening up ways in which relational data can be converted to XML and RDF in modern (semantic web) systems is important.
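A minimal sketch of the relational-to-RDF direction follows: each table row becomes a subject URI and each column a predicate. The base URI, table layout, and predicate naming are hypothetical; the paper's actual mapping rules are not given in the abstract.

```python
# Minimal sketch: map a relational row to RDF triples (N-Triples-style
# strings). Namespace and table layout are invented for illustration.
BASE = "http://example.org/shop/"                       # hypothetical namespace

def row_to_triples(table, pk, row):
    """Table -> a type triple; primary key -> subject URI; columns -> predicates."""
    subject = f"<{BASE}{table}/{row[pk]}>"
    triples = [f'{subject} <{BASE}type> "{table}" .']
    for column, value in row.items():
        if column != pk:
            triples.append(f'{subject} <{BASE}{table}#{column}> "{value}" .')
    return triples

row = {"id": 7, "name": "USB cable", "price": "3.99"}   # a legacy 'product' row
print("\n".join(row_to_triples("product", "id", row)))
```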
Abstract: A large amount of valuable information is available in plain-text clinical reports, and new techniques and technologies are being applied to extract information from these reports. In this study, we developed a domain-based software system to transform 600 otorhinolaryngology discharge notes into a structured form for extracting clinical data. To decrease system processing time, the discharge notes were transformed into a data table after preprocessing. Several word lists were constructed to identify common sections in the discharge notes, including patient history, age, problems, and diagnosis. The N-gram method was used to discover term co-occurrences within each section. Using this method, a dataset of candidate concepts was generated for the validation step, and the Predictive Apriori algorithm for Association Rule Mining (ARM) was then applied to validate the candidate concepts.
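The N-gram co-occurrence step can be illustrated in a few lines: count adjacent word pairs within one section of a note. The section text below is invented, and the paper's tokenization rules are not reproduced.

```python
# Sketch of the N-gram co-occurrence step: count bigrams inside one
# section of a discharge note. Frequent bigrams become concept candidates
# for the later Predictive Apriori validation step.
from collections import Counter

def ngrams(tokens, n=2):
    return zip(*(tokens[i:] for i in range(n)))         # sliding n-token window

section = ("patient reports hearing loss and chronic otitis media "
           "chronic otitis media confirmed on examination")
tokens = section.lower().split()
counts = Counter(ngrams(tokens, n=2))
print(counts.most_common(3))
```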
Abstract: Microarray data profile gene expression on a whole-genome scale and therefore provide a good way to study associations between gene expression and the occurrence or progression of cancer. More and more researchers have realized that microarray data are helpful for predicting cancer samples. However, the dimensionality of the gene expression data is much larger than the sample size, which makes this task very difficult; identifying the significant genes that cause cancer has thus become an urgent, popular, and hard research topic. Many feature selection algorithms proposed in the past focus on improving cancer predictive accuracy at the expense of ignoring the correlations between features. In this work, a novel framework (named SGS) is presented for stable gene selection and efficient cancer prediction. The proposed framework first performs a clustering algorithm to find gene groups in which the genes within each group are highly correlated, then selects the significant genes in each group with the Bayesian Lasso and the important gene groups with the group Lasso, and finally builds a prediction model on the shrunken gene space with an efficient classification algorithm (such as SVM, 1NN, or regression). Experimental results on real-world data show that the proposed framework often outperforms existing feature selection and prediction methods such as SAM, IG, and Lasso-type prediction models.
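The cluster-select-classify flow can be sketched with off-the-shelf stand-ins: a standard Lasso replaces the Bayesian/group Lasso of the paper, and the data are synthetic, so this mirrors only the pipeline shape, not the SGS method itself.

```python
# Rough sketch of the SGS pipeline shape with stand-in components.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.linear_model import Lasso
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                          # 60 samples, 200 "genes"
y = (X[:, :5].sum(axis=1) > 0).astype(int)              # labels driven by 5 genes

# 1) Group genes by similarity (cluster on the transposed matrix).
groups = AgglomerativeClustering(n_clusters=20).fit_predict(X.T)
# 2) Select significant genes via an L1 penalty (stand-in for Bayesian Lasso).
lasso = Lasso(alpha=0.01).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print("selected genes:", selected, "from groups:", np.unique(groups[selected]))
# 3) Classify on the shrunken gene space.
clf = SVC().fit(X[:, selected], y)
print("train accuracy:", clf.score(X[:, selected], y))
```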
Abstract: Traditionally, VLSI implementations of spiking neural nets have featured either large neuron counts for fixed computations or small, exploratory, configurable nets. This paper presents the system architecture of a large configurable neural net system employing a dedicated mapping algorithm for projecting the targeted biology-analog nets and dynamics onto the hardware with its attendant constraints.
Abstract: Names are important in many societies, even in technologically oriented ones that use, e.g., ID systems to identify individual people. Names such as surnames are the most important, as they are used in many processes, such as the identification of people and genealogical research. On the other hand, variation in names can be a major problem for identifying and searching for people, e.g., in web search or for security reasons. Name matching presumes a priori that a recorded name written in one alphabet reflects either the phonetic identity of two samples or some transcription error in copying a previously recorded name; we add to this the requirement that the two names refer to the same person. This paper describes name variations and gives a basic description of various name matching algorithms developed to overcome name variation and to find reasonable variants of names, which can be used to further reduce mismatches in record linkage and name search. The implementation contains algorithms for computing a range of fuzzy matches based on different types of algorithms, e.g., composite and hybrid methods, allowing us to test and measure the algorithms for accuracy. NYSIIS, LIG2, and Phonex have been shown to perform well and provide sufficient flexibility to be included in the linkage/matching process for optimising name searching.
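NYSIIS and Phonex (used in the paper) have longer rule sets, so the sketch below implements classic Soundex, a simpler relative, just to convey how phonetic keying collapses spelling variants onto one code.

```python
# Phonetic-key illustration: classic Soundex (a simpler relative of the
# NYSIIS/Phonex algorithms evaluated in the paper).
CODES = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
         **dict.fromkeys("dt", "3"), "l": "4",
         **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(name):
    name = name.lower()
    key, prev = name[0].upper(), CODES.get(name[0], "")
    for ch in name[1:]:
        code = CODES.get(ch, "")
        if code and code != prev:                       # skip repeats of same code
            key += code
        if ch not in "hw":                              # h/w do not break a run
            prev = code
    return (key + "000")[:4]                            # pad/truncate to 4 chars

print(soundex("Smith"), soundex("Smyth"))               # both -> S530
```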
Abstract: Recently, grid computing has attracted wide attention in science, industry, and business, fields that require a vast amount of computing. Grid computing provides an environment in which many nodes (i.e., many computers) are connected with each other through a local/global network and made available to many users. In this environment, to carry out data processing among nodes for any application, each node performs mutual authentication using certificates issued by the Certificate Authority (CA for short). However, if a failure or fault occurs in the CA, no new certificates can be issued, and as a result a new node cannot participate in the grid environment. In this paper, an off-the-shelf scheme for dependable grid systems using virtualization techniques is proposed and its implementation is verified. The proposed approach uses virtualization techniques to restart an application, e.g., the CA, when it has failed, so the system can tolerate a failure or fault occurring in the CA. Since the proposed scheme is easily implemented at the application level, its implementation cost to the system builder is small compared with other methods. Simulation results show that the CA in the system can recover from its failure or fault.
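The application-level restart idea reduces to a watchdog loop: poll the CA process and relaunch it on failure. The command line below is hypothetical, and the paper additionally places the CA inside a virtual machine, which this sketch omits.

```python
# Sketch of an application-level watchdog that restarts the CA on failure.
import subprocess, time

CA_CMD = ["python", "ca_server.py"]                     # hypothetical CA service

def supervise(cmd, poll_interval=1.0):
    proc = subprocess.Popen(cmd)
    while True:
        time.sleep(poll_interval)
        if proc.poll() is not None:                     # CA exited -> failure/fault
            print(f"CA died (exit {proc.returncode}); restarting")
            proc = subprocess.Popen(cmd)                # recover by restart

supervise(CA_CMD)
```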
Abstract: Developing an accurate classifier for high-dimensional microarray datasets is a challenging task due to the small sample sizes available. It is therefore important to determine a set of relevant genes that classify the data well. Traditional gene selection methods often select the top-ranked genes according to their discriminatory power, but these genes are often correlated with each other, resulting in redundancy. In this paper, we propose a hybrid method combining feature ranking with a wrapper method (a genetic algorithm with a multiclass SVM) to identify a set of relevant genes that classify the data more accurately. A new fitness function for the genetic algorithm is defined that focuses on selecting the smallest set of genes that provides maximum accuracy. Experiments have been carried out on four well-known datasets. The proposed method provides better results, in terms of both classification accuracy and the number of genes selected, than results found in the literature.
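A fitness function of the kind described, rewarding cross-validated SVM accuracy while penalizing the number of genes kept, might look like the sketch below. The 0.01 penalty weight and the toy data are assumptions, not the paper's constants.

```python
# Sketch of an accuracy-minus-size fitness for a GA gene-selection wrapper.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(mask, X, y, penalty=0.01):
    """mask: boolean chromosome choosing a gene subset."""
    if not mask.any():
        return 0.0                                      # empty subsets are worthless
    acc = cross_val_score(SVC(), X[:, mask], y, cv=3).mean()
    return acc - penalty * mask.sum()                   # fewer genes, same accuracy -> fitter

rng = np.random.default_rng(0)
X, y = rng.normal(size=(30, 50)), rng.integers(0, 3, 30)  # toy multiclass data
print(fitness(rng.random(50) < 0.2, X, y))
```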
Abstract: Financial forecasting is an example of a signal processing problem. A number of ways to train the network are available; we have used the Levenberg-Marquardt algorithm with error back-propagation for weight adjustment. Pre-processing of the data reduced much of the large-scale variation to a smaller scale, reducing the variation in the training data.
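Levenberg-Marquardt weight fitting can be sketched with SciPy's LM solver on a tiny network. The two-hidden-unit model and the synthetic scaled series below are illustrative assumptions; the paper trains a larger network on financial data not shown here.

```python
# Sketch: Levenberg-Marquardt fitting of a tiny MLP's weights.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)                             # pre-scaled inputs
y = np.tanh(2 * x) + 0.05 * rng.normal(size=x.size)     # noisy target series

def residuals(w, x, y):
    # 2-hidden-unit MLP: y_hat = v1*tanh(a1*x+b1) + v2*tanh(a2*x+b2) + c
    a1, b1, v1, a2, b2, v2, c = w
    y_hat = v1 * np.tanh(a1 * x + b1) + v2 * np.tanh(a2 * x + b2) + c
    return y_hat - y                                    # LM minimizes these residuals

fit = least_squares(residuals, x0=rng.normal(size=7), args=(x, y), method="lm")
print("fitted weights:", fit.x.round(2), "cost:", fit.cost.round(4))
```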
Abstract: The ARMrayan Multimedia Mobile CMS (Content Management System) is the first mobile CMS that gives users the opportunity to create multimedia J2ME mobile applications with their desired content, design, and logo, simply and without writing even a line of code. The low-level programming and compatibility problems of J2ME, along with UI design difficulties, make it hard for most people, even programmers, to broadcast their content to the mobile phones used by nearly everyone. This system provides user-friendly, PC-based tools for creating a tree index of pages and inserting multiple multimedia contents (e.g., sound, video, and pictures) in each page to create a J2ME mobile application. The output is a stand-alone Java mobile application with a user interface that shows text and pictures and plays music and videos regardless of the device used, as long as the device supports the J2ME platform. Bitmap fonts have also been used, so Middle Eastern languages can easily be supported on all mobile phone devices. We hid programming concepts from users in order to simplify the design of multimedia, content-oriented mobile applications for use in educational, cultural, or marketing centers. Ordinary operators can now create a variety of multimedia mobile applications, such as tutorials, catalogues, books, and guides, in minutes rather than months. Simplicity and power have been the goals of this CMS. In this paper, we present the software engineering design concepts of the ARMrayan MCMS, along with the implementation challenges faced and the solutions adopted.
Abstract: Wireless channels are characterized by serious bursty and location-dependent errors. Many packet scheduling algorithms have been proposed for wireless networks to guarantee fairness and delay bounds. However, most existing schemes do not consider the differing traffic natures of packet flows, which causes the delay-weight coupling problem; in particular, serious queuing delays may be incurred for real-time flows. In this paper, we propose a scheduling algorithm that takes the traffic types of flows into consideration when scheduling packets and provides scheduling flexibility by trading off video quality to meet the playback deadline.
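One way to picture the deadline/traffic-type trade-off is an earliest-deadline-first queue in which a late video packet is demoted to a lower quality layer rather than stalling the queue. The field names and the quality-drop rule below are illustrative assumptions, not the paper's algorithm.

```python
# Sketch: earliest-deadline-first service with a quality-for-timeliness
# trade for late video packets. Rules and fields are invented.
import heapq

queue = []  # (deadline, flow_type, packet_id, quality_layer)
for pkt in [(0.12, "video", 1, "high"), (0.05, "voice", 2, "-"),
            (0.02, "video", 3, "high"), (0.50, "data", 4, "-")]:
    heapq.heappush(queue, pkt)

now = 0.03
while queue:
    deadline, ftype, pid, layer = heapq.heappop(queue)  # earliest deadline first
    if ftype == "video" and deadline < now:
        layer = "base"                                  # trade quality for timeliness
    print(f"send pkt {pid} ({ftype}, layer={layer}, deadline={deadline})")
```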
Abstract: The protection of the contents of digital products is referred to as content authentication. In some applications, the ability to authenticate a digital product can be extremely important; for example, if a digital product is used as a piece of evidence in court, its integrity could mean life or death for the accused. Generally, the problem of content authentication can be solved using semi-fragile digital watermarking techniques. Recently, many authors have proposed Computer Generated Hologram Watermarking (CGH-Watermarking) techniques. Building on these studies, this paper proposes a semi-fragile Computer Generated Hologram coding technique that is able to detect malicious tampering while tolerating some incidental distortions. The proposed technique uses an encrypted image as the watermark and is well suited to digital image authentication.
Abstract: The features of the HIV genome span a wide range because the genome is highly heterogeneous. Hence, the infection ability of the virus varies with the chemokine receptors it targets: R5 and X4 HIV viruses use the CCR5 and CXCR4 coreceptors respectively, while R5X4 viruses can utilize both. Recently, bioinformatics studies have sought to classify R5X4 viruses using the coreceptor usage of the HIV genome. The aim of this study is to develop an optimal Multilayer Perceptron (MLP) for high classification accuracy on HIV sub-type viruses. To accomplish this, the number of units in the hidden layer was incremented one by one, from one up to a chosen maximum. The statistical data of the R5X4, R5, and X4 viruses were preprocessed using signal processing methods: accessible residues of these virus sequences were extracted and modeled with an Auto-Regressive (AR) model, because the number of residues is large and differs from sequence to sequence. Finally, the preprocessed dataset was used to train MLPs with various numbers of hidden units to identify R5X4 viruses, and ROC analysis was used to determine the optimal MLP structure.
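The two steps described can be sketched on synthetic stand-in data: (1) compress variable-length signals into fixed-length AR coefficient vectors, and (2) sweep the MLP's hidden-unit count, scoring each by ROC AUC. The signals and labels below are random placeholders, not HIV sequence data.

```python
# Sketch: AR features from variable-length signals + hidden-unit sweep.
import numpy as np
from scipy.linalg import solve_toeplitz
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def ar_coeffs(signal, order=4):
    """Yule-Walker AR coefficients: equal-length features per sequence."""
    s = signal - signal.mean()
    r = np.correlate(s, s, "full")[s.size - 1:][: order + 1] / s.size
    return solve_toeplitz(r[:-1], r[1:])

rng = np.random.default_rng(0)
signals = [rng.normal(size=rng.integers(80, 200)) for _ in range(40)]
X = np.array([ar_coeffs(s) for s in signals])           # same dimension for all
y = rng.integers(0, 2, len(signals))                    # toy R5X4 / non-R5X4 labels

for hidden in (1, 2, 4, 8):                             # increment hidden units
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000, random_state=0)
    auc = cross_val_score(clf, X, y, cv=4, scoring="roc_auc").mean()
    print(f"{hidden} hidden units: ROC AUC = {auc:.2f}")
```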
Abstract: The main goal of this article is to present a new model of the application architecture of a banking IT solution that provides Internet banking services and is partially outsourced. First, we present the business rationale and a SWOT analysis to explain the reasons for the model. The most important factor for our model is today's boom in smartphones and tablet devices. Next, we focus on the IT architecture viewpoint, where we design the application, integration, and security models. Finally, we propose a generic governance model that serves as a basis for a specialized governance model, whose instances are designed to ensure that the development and maintenance of the different parts of the IT solution are well governed over time.
Abstract: In this article, we introduce a mechanism by which the concept of differentiated services used in network transmission can be applied to provide quality-of-service levels to pervasive systems applications. The classical DiffServ model, including marking and classification, assured forwarding, and expedited forwarding, is utilized to create quality-of-service guarantees for pervasive applications requiring different levels of quality of service. Through a collection of sensors, personal devices, and data sources, the transmission of context-sensitive data can automatically occur within a pervasive system at a given quality-of-service level. Triggers, initiators, sources, and receivers are the four entities labeled in our mechanism; an explanation of the role of each is provided, along with how quality of service is guaranteed.
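Reusing DiffServ markings for pervasive traffic can be sketched as a class-to-DSCP table applied when a socket is opened. The DSCP values mirror standard code points (EF = 46, AF41 = 34, best effort = 0), but which pervasive class gets which marking is our assumption, and setting IP_TOS requires platform support.

```python
# Sketch: mark pervasive-application traffic with DiffServ code points.
import socket

DSCP = {"alarm": 46,        # expedited forwarding for urgent sensor triggers
        "health": 34,       # assured forwarding for medical context data
        "ambient": 0}       # best effort for background context updates

def marked_socket(app_class):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tos = DSCP[app_class] << 2                          # DSCP sits in top 6 TOS bits
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return sock

sock = marked_socket("alarm")
sock.sendto(b"fall detected", ("192.0.2.10", 9999))     # example receiver address
```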