Abstract: This paper proposes rough set models with three different levels of knowledge granules in an incomplete information system, under a tolerance relation defined by the similarity between objects according to their attribute values. By introducing a dominance relation on the universe of discourse, similarity classes are decomposed into three subclasses: a little-better subclass, a little-worse subclass, and a vague subclass, which in turn decompose the lower and upper approximations into three components. Using these components, information retrieval can effectively find naturally hierarchical expansions to queries and construct answers to elaborative queries. The approach is illustrated by applying the rough set models in the design of an information retrieval system that accesses documents expanded at different granularities. The proposed method enhances the application of rough set models by adding flexibility to expansions and elaborative queries in information retrieval.
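As a minimal sketch of the idea, assuming a numeric incomplete information system where a missing value (None) is tolerant with anything, values within a small threshold count as similar, and the dominance comparison uses only the mutually known attribute values (the paper's exact definitions may differ):

```python
# Sketch: similarity classes under a tolerance relation in an incomplete
# information system, decomposed by a dominance relation into little-better,
# little-worse and vague subclasses. All definitions here are illustrative
# assumptions, not the paper's exact formulation.

def tolerant(x, y, eps=1.0):
    """Objects x, y (tuples of attribute values) are tolerant if every pair
    of known values is within eps; None (a missing value) matches anything."""
    return all(a is None or b is None or abs(a - b) <= eps
               for a, b in zip(x, y))

def similarity_class(obj, universe):
    return [y for y in universe if tolerant(obj, y)]

def decompose(obj, universe):
    """Split the similarity class of obj into little-better, little-worse
    and vague subclasses via dominance on the mutually known values."""
    better, worse, vague = [], [], []
    for y in similarity_class(obj, universe):
        known = [(a, b) for a, b in zip(obj, y)
                 if a is not None and b is not None]
        if known and all(b >= a for a, b in known) and any(b > a for a, b in known):
            better.append(y)    # y dominates obj on every known attribute
        elif known and all(b <= a for a, b in known) and any(b < a for a, b in known):
            worse.append(y)     # obj dominates y on every known attribute
        else:
            vague.append(y)     # equal, incomparable, or too little known
    return better, worse, vague

U = [(1, 2), (1, 3), (2, 2), (1, 1), (0, 1), (4, 4), (1, None)]
b, w, v = decompose((1, 2), U)
```

Here (4, 4) falls outside the similarity class of (1, 2), while the remaining objects split into the three subclasses that drive the hierarchical query expansion.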
Abstract: Shirvan is located on a plain in North Khorasan province, in the northeast of Iran, and has a semiarid to temperate climate. To investigate the annual changes in some qualitative parameters, such as electrical conductivity, total dissolved solids and chloride concentration, over ten continuous years, fourteen groundwater sources, including deep as well as semi-deep wells, were sampled and analyzed using standard methods. The trends in the obtained data over these years were analyzed, and the effects of different factors on the changes in electrical conductivity, chloride concentration and total dissolved solids were clarified. The results showed that the values of some qualitative parameters increased over the ten-year period, which has led to a decrease in water quality. The results also showed that growth of the urban population as well as extensive industrialization in the studied area are the most important factors influencing the groundwater quality. Furthermore, a decrease in water quantity is also evident, due to greater water utilization and the occurrence of droughts in the region in recent years.
Abstract: In this paper, we propose an approach for the classification of fingerprint databases. It is based on the fact that a fingerprint image is composed of regular texture regions that can be successfully represented by co-occurrence matrices. We therefore first extract features based on certain characteristics of the co-occurrence matrix and then use these features to train a neural network for classifying fingerprints into four common classes. The obtained results, compared with existing approaches, demonstrate the superior performance of our proposed approach.
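The co-occurrence features such an approach relies on can be sketched as follows; the specific features, gray-level quantization and offsets used in the paper are not stated, so the standard contrast, energy and homogeneity measures over a horizontal offset are assumed here:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy).
    img holds integer gray levels in [0, levels)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def features(p):
    """A few standard Haralick-style texture features of a normalized GLCM."""
    idx = np.arange(p.shape[0])
    i, j = np.meshgrid(idx, idx, indexing="ij")
    contrast = ((i - j) ** 2 * p).sum()        # local intensity variation
    energy = (p ** 2).sum()                    # textural uniformity
    homogeneity = (p / (1.0 + abs(i - j))).sum()
    return contrast, energy, homogeneity

# Toy 4-level "texture" patch standing in for a fingerprint region
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
c, e, h = features(glcm(img))
```

A feature vector built from several offsets and directions would then be fed to the neural network classifier.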
Abstract: Hazard rate estimation is one of the important topics in forecasting earthquake occurrence. Forecasting earthquake occurrence is a part of statistical seismology, whose main subject is the point process. Generally, the earthquake hazard rate is estimated based on the point process likelihood equation, called the Hazard Rate Likelihood of Point Process (HRLPP). In this research, we develop an estimation method called hazard rate single decrement (HRSD). This method is adapted from estimation methods in actuarial studies. Here, each individual is associated with an earthquake whose inter-event time is exponentially distributed. Information on the epicenter and the time of earthquake occurrence is used to estimate the hazard rate. Finally, a case study of earthquake hazard rate is given, and we compare the hazard rates obtained with the HRLPP and HRSD methods.
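Under the exponential inter-event-time assumption mentioned above, the hazard rate is constant and has a simple closed-form maximum-likelihood estimate; the sketch below shows only this basic case, not the HRSD estimator itself, and the data are hypothetical:

```python
# For exponentially distributed inter-event times, the hazard rate is a
# constant lambda, and its MLE is the number of events divided by the
# total observed exposure time.

def hazard_rate(inter_event_times):
    """MLE of the constant hazard rate for exponential inter-event times."""
    return len(inter_event_times) / sum(inter_event_times)

# Hypothetical inter-event times (in years) between qualifying earthquakes
times = [2.0, 0.5, 1.5, 1.0]
lam = hazard_rate(times)   # 4 events / 5.0 years = 0.8 events per year
```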
Abstract: This paper deals with the project selection problem, one of the problems that arose first in the field of operations research, following production concepts from the basic product-mix problem. Later, the introduction of managerial considerations into the project selection problem brought qualitative factors and criteria to be considered alongside quantitative ones. To handle both kinds of criteria, an analytic network process enhanced with fuzzy set theory is developed in this paper to tackle the vagueness of experts' comments when evaluating the alternatives. Additionally, a modified version of the least-squares method, formulated as a non-linear programming model, is added to the developed group decision-making structure in order to elicit the final weights from the comparison matrices. Finally, a case study is considered by which the structure developed in this paper is validated. Moreover, a sensitivity analysis is performed to examine the response of the model to changing conditions.
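As an illustration of eliciting weights from a pairwise comparison matrix, the sketch below uses the closed-form logarithmic least-squares (row geometric mean) solution; the paper's modified least-squares method solves a non-linear program instead, so this is only a stand-in for the general idea:

```python
import numpy as np

def lls_weights(A):
    """Logarithmic least-squares weights from a pairwise comparison
    matrix A: the geometric mean of each row, normalized to sum to one.
    (Illustrative closed form, not the paper's non-linear program.)"""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return g / g.sum()

# A consistent 3x3 comparison matrix: alternative 1 is judged twice as
# good as alternative 2 and four times as good as alternative 3.
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
w = lls_weights(A)   # recovers the 4 : 2 : 1 priority ratio
```

For a consistent matrix like this one, any reasonable prioritization method recovers the same ratio; the least-squares machinery matters when experts' judgments are inconsistent.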
Abstract: This contribution aims to outline some topics around the process of introducing the compulsory electronic exchange of documents (so-called e-Boxes) in public administration. The research was conducted in order to gauge the difference between the expectations of those using internal email and their experience in reality. Both qualitative and quantitative research is employed, leading also to an estimation of the willingness and readiness of government bodies, business units and citizens to adopt new technologies. At the same time, the most potent barriers to successful e-communication through the e-Boxes are identified.
Abstract: The design of networks for delivering commodities is an important problem in our daily lives and in many transportation applications. Delivery performance is evaluated based on the system reliability of delivering commodities from a source node to a sink node in the network, and the system reliability is maximized to find the optimal routing. However, the design problem is not simple, because (1) each path segment has randomly distributed attributes; (2) there are multiple commodities that consume various path capacities; and (3) the optimal routing must successfully complete the delivery process within the allowable time constraints. In this paper, we focus on the design optimization of the Multi-State Flow Network (MSFN) for multiple commodities. We propose an efficient approach to evaluate the system reliability in the MSFN with respect to randomly distributed path attributes and to find the optimal routing subject to the allowable time constraints. The delivery rates, also known as delivery currents, of the path segments are evaluated, and the minimal-current arcs are eliminated to reduce the complexity of the MSFN. Accordingly, the correct optimal routing is found and its worst-case reliability is evaluated. It is shown that the reliability of the optimal routing is at least as high as the worst-case measure. Two benchmark examples are used to demonstrate the proposed method. The comparisons between the original and the reduced networks show that the proposed method is very efficient.
Abstract: Locality Sensitive Hashing (LSH) is one of the most promising techniques for solving the nearest neighbour search problem in high-dimensional spaces. Euclidean LSH is the most popular variant of LSH and has been successfully applied in many multimedia applications. However, Euclidean LSH has limitations that affect structure and query performance; its main limitation is its large memory consumption, since achieving good accuracy requires a large number of hash tables. In this paper, we propose a new hashing algorithm that overcomes the storage-space problem and improves query time, while keeping an accuracy similar to that achieved by the original Euclidean LSH. Experimental results on a real large-scale dataset show that the proposed approach achieves good performance and consumes less memory than Euclidean LSH.
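The original Euclidean (p-stable) LSH scheme that such work builds on can be sketched as follows; the parameter choices (number of projections k and bucket width w) are illustrative, and a real index would keep many such tables, which is exactly the memory cost discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

class EuclideanLSH:
    """One Euclidean (p-stable) LSH table: each of k hash components is
    h(v) = floor((a . v + b) / w) with Gaussian a and uniform offset b."""
    def __init__(self, dim, k=4, w=4.0):
        self.a = rng.normal(size=(k, dim))    # Gaussian projection vectors
        self.b = rng.uniform(0, w, size=k)    # random offsets in [0, w)
        self.w = w
        self.buckets = {}

    def key(self, v):
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

    def insert(self, v):
        self.buckets.setdefault(self.key(v), []).append(v)

    def query(self, v):
        """Candidate near neighbours: the points sharing v's bucket."""
        return self.buckets.get(self.key(v), [])

table = EuclideanLSH(dim=2)
p = np.array([1.0, 2.0])
table.insert(p)
candidates = table.query(p)   # a point always lands in its own bucket
```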
Abstract: Combining classifiers is a useful method for solving complex problems in machine learning. The ECOC (Error Correcting Output Codes) method has been widely used for designing combinations of classifiers with an emphasis on their diversity. In this paper, in contrast to the standard ECOC approach in which individual classifiers are chosen homogeneously, classifiers are selected according to the complexity of the corresponding binary problem. We use the SATIMAGE database (containing 6 classes) for our experiments. The recognition error rate of our proposed method is 10.37%, which indicates a considerable improvement over the conventional ECOC and stacked generalization methods.
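The ECOC decoding step common to both the standard and the complexity-aware variant can be sketched as follows; the 6-class code matrix below is illustrative, not the one used in the experiments:

```python
import numpy as np

# ECOC: each class gets a binary codeword (a row); each column defines one
# binary problem solved by one classifier. Given the vector of binary
# predictions, the class with the Hamming-nearest codeword wins, so a few
# individual classifier errors can be corrected.
code = np.array([[0, 0, 1, 1, 0],
                 [0, 1, 0, 1, 1],
                 [1, 0, 0, 0, 1],
                 [1, 1, 1, 0, 0],
                 [0, 1, 1, 0, 1],
                 [1, 0, 1, 1, 0]])   # rows: 6 classes, cols: 5 binary problems

def ecoc_decode(bits):
    """Return the class whose codeword is Hamming-closest to bits."""
    return int(np.argmin((code != np.asarray(bits)).sum(axis=1)))

# Class 5's codeword with the last classifier's output flipped: the
# nearest-codeword rule still recovers class 5.
pred = ecoc_decode([1, 0, 1, 1, 1])
```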
Abstract: Naive Bayes Nearest Neighbor (NBNN) and its variants, i.e., local NBNN and the NBNN kernels, are local feature-based classifiers that have achieved impressive performance in image classification. By exploiting instance-to-class (I2C) distances (an instance being an image or video in image or video classification), they avoid the quantization errors that local image descriptors suffer in the bag-of-words (BoW) model. However, the performance of NBNN, local NBNN and the NBNN kernels has not been validated on video analysis. In this paper, we introduce these three classifiers into human action recognition and conduct comprehensive experiments on the benchmark KTH dataset and the realistic HMDB dataset. The results show that these I2C-based classifiers consistently outperform the SVM classifier with the BoW model.
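The core I2C computation of NBNN can be sketched as follows, with toy two-dimensional descriptors standing in for real local video features; the class labels and data are purely illustrative:

```python
import numpy as np

def nbnn_classify(query_desc, class_descs):
    """Naive Bayes Nearest Neighbor: the I2C distance of a query to a
    class is the sum, over the query's local descriptors, of the squared
    distance to the nearest descriptor pooled from that class. The class
    with the smallest I2C distance is predicted -- no descriptor
    quantization is involved."""
    best, best_cost = None, float("inf")
    for label, descs in class_descs.items():
        # squared distances from every query descriptor to every class descriptor
        d2 = ((query_desc[:, None, :] - descs[None, :, :]) ** 2).sum(-1)
        cost = d2.min(axis=1).sum()      # nearest-neighbour sum = I2C distance
        if cost < best_cost:
            best, best_cost = label, cost
    return best

# Toy descriptor pools for two hypothetical action classes
classes = {"walk": np.array([[0.0, 0.0], [1.0, 0.0]]),
           "run":  np.array([[5.0, 5.0], [6.0, 5.0]])}
q = np.array([[0.1, 0.0], [0.9, 0.1]])   # local descriptors of one query video
label = nbnn_classify(q, classes)
```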
Abstract: This paper proposes an efficient finite-precision block floating point (BFP) treatment of the fixed-coefficient finite impulse response (FIR) digital filter. The treatment includes an effective implementation of all three forms of the conventional FIR filter, namely direct form, cascaded and parallel, and a roundoff error analysis of them in the BFP format. An effective block formatting algorithm together with an adaptive scaling factor is proposed to make the realizations simpler from a hardware point of view. To this end, a generic relation between the tap weight vector length and the input block length is deduced. The implementation scheme also emphasises a simple block exponent update technique to prevent overflow even during the block-to-block transition phase. The roundoff noise is investigated along analogous lines, taking these implementation issues into consideration. The simulation results show that the BFP roundoff errors depend on the signal level in almost the same way as floating point roundoff noise, resulting in an approximately constant signal-to-noise ratio over a relatively large dynamic range.
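The basic block formatting step of a BFP representation (one shared exponent per block, fixed-point mantissas) can be sketched as follows; the paper's adaptive scaling factor and block exponent update technique are not reproduced here, and the mantissa width is an illustrative choice:

```python
import numpy as np

def bfp_format(block, mant_bits=8):
    """Block floating point formatting: the whole block shares one
    exponent, chosen from the largest magnitude in the block, and each
    sample is stored as a rounded fixed-point mantissa of mant_bits bits.
    Returns (integer mantissas, shared block exponent)."""
    exp = int(np.ceil(np.log2(np.max(np.abs(block)) + 1e-300)))
    scale = 2.0 ** (mant_bits - 1 - exp)
    mant = np.round(block * scale).astype(int)   # quantized mantissas
    return mant, exp

def bfp_to_float(mant, exp, mant_bits=8):
    """Reconstruct the block values from mantissas and shared exponent."""
    return mant / 2.0 ** (mant_bits - 1 - exp)

x = np.array([0.9, -0.45, 0.225, 0.1125])   # one input block
m, e = bfp_format(x)
err = np.max(np.abs(bfp_to_float(m, e) - x))
```

Because the quantization step is tied to the block maximum, the roundoff error scales with the signal level, which is the mechanism behind the roughly constant signal-to-noise ratio reported in the abstract.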
Abstract: Cellular automata have been used for the design of cryptosystems. Recently, some secret sharing schemes based on linear memory cellular automata have been introduced, which are used for both text and images. In this paper, we show that these secret sharing schemes are vulnerable to collusion among dishonest participants. We propose a cheating model for secret sharing schemes based on linear memory cellular automata. For this purpose, we present a novel uniform model for representing all secret sharing schemes based on cellular automata. Participants can cheat by sending bogus shares or bogus transition rules. Cheaters can cooperate to corrupt a shared secret by adding a cheating value to it. Honest participants are not aware of the cheating and accept the incorrect secret as the valid one. We prove that the cheaters can recover the valid secret by removing the cheating value from the corrupted secret, and we provide methods for calculating the cheating value.
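The linearity that such a cheating model can exploit is easy to demonstrate: for a linear (XOR-based) cellular automaton, evolving the XOR of two configurations equals the XOR of their evolutions, so a cheating value injected into a share propagates predictably and can later be removed. The sketch below uses rule 90 with periodic boundaries as an illustrative linear rule; the memory CA in the actual schemes behave analogously:

```python
def rule90(cells):
    """One step of the linear rule-90 CA: each cell becomes the XOR of
    its two neighbours (periodic boundary)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def xor(a, b):
    """Cell-wise XOR of two configurations."""
    return [x ^ y for x, y in zip(a, b)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]   # toy secret configuration
cheat  = [0, 1, 0, 0, 0, 1, 0, 0]   # cheating value XORed into a share

# Linearity: evolve(secret XOR cheat) == evolve(secret) XOR evolve(cheat),
# so the corrupted result differs from the honest one by a value the
# cheaters can compute and strip off.
lhs = rule90(xor(secret, cheat))
rhs = xor(rule90(secret), rule90(cheat))
recovered = xor(lhs, rule90(cheat))   # equals rule90(secret)
```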
Abstract: Human pose estimation can be performed using Active Shape Models. Existing techniques that apply Active Shape Models to human-body research, such as human detection, primarily use the silhouette of the human body. This approach cannot accurately estimate human poses involving the two arms and legs, as the silhouette represents the body shape only as a rough outline. To solve this problem, we applied a stick-figure ("skeleton") human body model. The skeleton model of the human body can take into account various shapes of human pose. To obtain effective estimation results, we applied background subtraction and a deformed matching algorithm of the original Active Shape Models in the fitting process. The model was built from images of 600 human bodies and has 17 landmark points, which indicate body junctions and key features of the human pose. The maximum number of iterations in the fitting process was 30, and the execution time was less than 0.03 s.
Abstract: After the terrorist attack of September 11, 2001 in the U.S., the container security issue received great attention, especially from the U.S. government, which deployed many measures to promote or improve security systems. The U.S. government not only enhances its national security system but also allies with other countries against potential future terrorist attacks. For example, the CSI (Container Security Initiative) encourages foreign ports outside the U.S. to become CSI ports as part of the U.S. anti-terrorism network. Although promoting security may partly achieve the goal of anti-terrorism, it influences the efficiency of the container supply chain, which is the main concern when implementing inspection measures. This paper proposes a quick estimation methodology for an inspection service rate, based on a berth allocation heuristic, such that the inspection activities do not affect the original container supply chain. Theoretical and simulation results show this approach is effective.
Abstract: This paper presents an online method that learns the corresponding points of an object from un-annotated grayscale images containing instances of the object. In the first image processed, an ensemble of node points is automatically selected and then matched in the subsequent images. A Bayesian posterior distribution for the locations of the nodes in the images is formed. The likelihood is formed from Gabor responses, and the prior assumes the mean shape of the node ensemble to be similar in a translation- and scale-free space. An association model is applied to separate object nodes from background nodes. The posterior distribution is sampled with a Sequential Monte Carlo method, and the matched object nodes are inferred to be the corresponding points of the object instances. The results show that our system matches the object nodes as accurately as other methods that train the model with annotated training images.
Abstract: A multimedia presentation system refers to the integration of a multimedia database with a presentation manager that handles content selection, organization and playout of multimedia presentations. Such a system requires high performance from the components involved: from multimedia information capture until presentation delivery, high-performance tools are required for accessing, manipulating, storing and retrieving the media segments, and for transferring and delivering them to a presentation terminal according to a playout order. Organizing presentations is a complex task, in that the display order of the presentation contents (in time and space) must be specified. A multimedia presentation contains audio, video, image and text media types. The critical decisions in presentation construction are what the contents are and how they are organized; once the organization of the contents is decided, it must be conveyed to the end user in the correct organizational order and in a timely fashion. This paper introduces a framework for the specification of multimedia presentations and describes the design of sample presentations from a multimedia database using this framework.
Abstract: A numerical analysis of the flow characteristics and separation efficiency in a high-efficiency cyclone has been performed. Several models based on experimental observations have been proposed for design purposes. However, each such model only estimates the cyclone's performance under limited conditions, and it is difficult to obtain a general model for all types of cyclones. The purpose of this study is to determine the flow characteristics and separation efficiency numerically. The Reynolds stress model (RSM) was employed instead of a standard k-ε or k-ω model, which are suited only to isotropic turbulence, and it predicted the pressure drop and the Rankine vortex very well. For small particles, three components (the entrance of the vortex finder, the cone, and the dust collector) were significant for particle separation. In the present work, particle re-entrainment from the dust collector into the cyclone body was observed after considerable time. This re-entrainment degraded the separation efficiency and was one of the significant factors in the separation efficiency of the cyclone.
Abstract: A dehydration process was carried out for tomato slices of var. Avinash after different pre-treatments: calcium chloride (CaCl2), potassium metabisulphite (KMS), calcium chloride plus potassium metabisulphite (CaCl2 + KMS), and sodium chloride (NaCl). Untreated samples served as the control. A solar drier and a continuous conveyor (tunnel) drier were used for dehydration. The quality characteristics of the tomato slices, viz. moisture content, sugar, titratable acidity, lycopene content, dehydration ratio, rehydration ratio and non-enzymatic browning, as affected by the dehydration process, were studied. A storage study was also carried out for a period of six months for tomato powder packed in different types of packaging materials, viz. metalized polyester (MP) film and low-density polyethylene (LDPE). Changes in lycopene content and non-enzymatic browning (NEB) were estimated during storage at room temperature. Pre-treatment of 5 mm-thick tomato slices with calcium chloride in combination with potassium metabisulphite, drying in a tunnel drier, and subsequent storage of the product in metalized polyester bags was selected as the best process.
Abstract: This work describes a CACSD tool for the automatic design of robust controllers for hydraulic turbines. The tool calculates the optimal controller using the MATLAB hinfopt function and serves as a practical and effective solution to the laborious task of designing a different controller for each type of turbine and generator, and for different parameters and conditions of the plant. Results of the simulation of a generating unit subject to parameter variations show the accuracy and efficiency of the obtained robust controllers.