Abstract: A large organization may have multiple branches spread across different locations. Processing the data from these branches becomes a huge task when innumerable transactions take place. Moreover, branches may be reluctant to forward their raw data for centralized processing but are willing to share their association rules. Local mining may also generate a large number of rules, and in practice the local data sources cannot all be of the same size. A model is proposed for discovering valid rules from data sources of different sizes, where the valid rules are the high-weight rules. These rules can be obtained from the high-frequency rules generated by each of the data sources. A data source selection procedure is used to synthesize rules efficiently. Support equalization is another proposed method, which focuses on eliminating low-frequency rules at the local sites themselves, thus reducing the number of rules by a significant amount.
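The size-weighted synthesis step described above can be sketched as follows; the rule notation, source sizes, and the minimum-support threshold are illustrative assumptions, not the paper's actual procedure.

```python
# Hypothetical sketch: synthesize high-weight rules from different-sized
# data sources by weighting each source's local support by its share of
# the total transactions. Rule names and thresholds are invented.

def synthesize_rules(sources, min_support=0.3):
    """sources: list of (num_transactions, {rule: local_support}) pairs.
    Returns rules whose size-weighted global support meets min_support."""
    total = sum(n for n, _ in sources)
    global_support = {}
    for n, rules in sources:
        weight = n / total                      # larger sources weigh more
        for rule, supp in rules.items():
            global_support[rule] = global_support.get(rule, 0.0) + weight * supp
    return {r: s for r, s in global_support.items() if s >= min_support}

sources = [
    (1000, {"A->B": 0.6, "C->D": 0.2}),
    (4000, {"A->B": 0.5, "E->F": 0.4}),
]
print(synthesize_rules(sources))
```

A low global threshold applied locally (support equalization) would discard rules like "C->D" at its source before any rules are transmitted.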
Abstract: This study analyzes the characteristics determining members' willingness to invest in cooperatives using an ordered logit model. The data were collected in a field survey among 122 cooperative members in north-central China. The descriptive analysis of the survey evidence suggests that cooperatives in China generally have a poor ability to deliver processing services related to product packaging, grading, and storage; perform poorly in terms of profitability; and are unable to provide returns on capital or to obtain agricultural loans. The regression results demonstrate that members' farm size, their satisfaction with the cooperative's preferential pricing services, and their attitudes toward the cooperative's operational scale and development potential have a statistically significant impact on their willingness to invest.
Abstract: A prototype of an anomaly detection system was developed to automate the process of recognizing anomalies in roentgen images by using fuzzy histogram hyperbolization image enhancement and a backpropagation artificial neural network. The system consists of image acquisition, a pre-processor, a feature extractor, a response selector, and an output stage. Fuzzy histogram hyperbolization is chosen to improve the quality of the roentgen image; its steps consist of fuzzification, modification of the membership-function values, and defuzzification. Image features are extracted after the quality of the image has been improved, and the extracted features are input to the artificial neural network for anomaly detection. The number of nodes in the proposed ANN layers was kept small. Experimental results indicate that the fuzzy histogram hyperbolization method can be used to improve the quality of the image, and that the system is capable of detecting anomalies in roentgen images.
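The three hyperbolization steps can be sketched as follows, assuming the common hyperbolic mapping from the fuzzy image-enhancement literature; the intensification exponent beta and the 8-bit grey range are illustrative choices, not the paper's settings.

```python
import numpy as np

# Sketch of fuzzy histogram hyperbolization: fuzzify grey levels into
# memberships, modify them with an exponent, then defuzzify through a
# hyperbolic (exponential) mapping back to grey levels.

def fuzzy_histogram_hyperbolization(image, beta=0.8, levels=256):
    img = image.astype(float)
    g_min, g_max = img.min(), img.max()
    # fuzzification: map grey levels to membership values in [0, 1]
    mu = (img - g_min) / (g_max - g_min)
    # modification: intensify the membership values with exponent beta
    mu_mod = mu ** beta
    # defuzzification: hyperbolic mapping back to the grey-level range
    out = (levels - 1) / (np.exp(-1.0) - 1.0) * (np.exp(-mu_mod) - 1.0)
    return np.clip(np.rint(out), 0, levels - 1).astype(np.uint8)

x = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(fuzzy_histogram_hyperbolization(x))
```

By construction the darkest input maps to 0 and the brightest to 255, with beta < 1 brightening the mid-range.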
Abstract: The implementation of new software and hardware technologies in tritium processing nuclear plants, and especially in those of an experimental character or based on new technological developments, is complicated by the difficulty of integrating high-performance instrumentation and equipment into a unitary monitoring system for the nuclear technological process of tritium removal. Preserving the system's flexibility is a requirement of experimental nuclear plants, for which changes of configuration, process, and parameters are routine. The large amount of data that must be processed, stored, and accessed for real-time simulation and optimization demands a virtual technological platform in which the data acquisition, control, and analysis systems of the technological process can be integrated with a dedicated monitoring system. Thus, the integrated computing and monitoring systems needed to supervise the technological process will be built first, followed by an optimization system that applies new, high-performing methods suited to the technological processes within tritium removal plants. The software applications are developed with program packages dedicated to industrial processes, and they include acquisition and monitoring sub-modules (termed "virtual"), as well as a storage sub-module for the process data later required by the optimization and simulation software for the tritium removal process. The system plays an important role in environmental protection and sustainable development through new technologies, namely the reduction of, and the fight against, industrial accidents at tritium processing nuclear plants. Research into the monitoring and optimization of nuclear processes is also a major driving force for economic and social development.
Abstract: In this paper, we propose a new architecture for the implementation of the N-point Fast Fourier Transform (FFT), based on the radix-2 decimation-in-frequency algorithm. The architecture is built around a pipeline circuit that can process a stream of samples and produce two FFT output samples every clock cycle. Compared to existing implementations, the proposed architecture achieves double the processing speed with the same circuit complexity.
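The decimation-in-frequency butterfly that such a pipeline evaluates can be sketched in software; this recursive version only illustrates the arithmetic, not the paper's pipelined circuit.

```python
import cmath

# Radix-2 decimation-in-frequency FFT sketch. Each recursion level
# performs the DIF butterfly: sums go to the even-indexed bins, twiddled
# differences go to the odd-indexed bins. A hardware pipeline computes
# one such butterfly stage per clock, yielding two outputs at a time.

def fft_dif(x):
    n = len(x)                    # n must be a power of two
    if n == 1:
        return list(x)
    half = n // 2
    a = [x[i] + x[i + half] for i in range(half)]          # top butterfly output
    b = [(x[i] - x[i + half]) * cmath.exp(-2j * cmath.pi * i / n)
         for i in range(half)]                             # bottom output, twiddled
    out = [0] * n
    out[0::2] = fft_dif(a)        # even-indexed frequency bins
    out[1::2] = fft_dif(b)        # odd-indexed frequency bins
    return out

print(fft_dif([1, 2, 3, 4]))
```

For the input [1, 2, 3, 4] this matches the direct DFT: bins 10, -2+2j, -2, -2-2j.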
Abstract: Today, the incorrect use of land and land use changes, excessive grazing, unsuitable use of agricultural farms, plowing on steep slopes, road and building construction, mine excavation, etc. have increased soil erosion and sediment yield. Statistical and empirical methods can be used for erosion and sediment estimation; these require a land unit map and maps of the effective factors. However, such empirical methods are usually time consuming and do not give accurate estimates of erosion. In this study, we applied GIS techniques to estimate the erosion and sediment of the Menderjan watershed, upstream of the Zayandehrud river in central Iran. Erosion facies in each land unit were defined on the basis of land use, geology, and the land unit map using GIS. The UTM coordinates of each erosion type showing larger erosion amounts, such as rills and gullies, were entered into the GIS using GPS data. The frequency of erosion indicators in each land unit and land use, and the sediment yield of these indices, were calculated. In addition, using trend analysis of sediment yield changes at the watershed outlet (the Menderjan hydrometric gauge station), the related parameters and estimation errors were calculated. The results of this study, combined with implemented watershed management projects, can be used for more rapid and more accurate estimation of erosion than traditional methods. These results can also be used for regional erosion assessment and for remote sensing image processing.
Abstract: Iris recognition technology is the most accurate, fastest, and least invasive compared with other biometric techniques based on, for example, fingerprints, the face, the retina, hand geometry, voice, or signature patterns. The system developed in this study has the potential to play a key role in high-risk security areas, providing organizations with a fast and secure way of granting access to such areas only to authorized personnel. The aim of the paper is to perform iris region detection and localization of the inner and outer iris boundaries. The system was implemented on the Windows platform using the Visual C# programming language, an easy and efficient tool for image processing that delivers good performance and accuracy. In particular, the system includes two main parts. The first preprocesses the iris images using Canny edge detection, segments the iris region from the rest of the image, and determines the location of the iris boundaries by applying the Hough transform. The proposed system was tested on 756 iris images from 60 eyes in the CASIA iris database.
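The boundary-localization step can be illustrated with a toy circular Hough transform of the kind applied after Canny edge detection; the synthetic edge map and parameter ranges are assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

# Circular Hough voting sketch: every edge pixel votes for all circle
# centres at each candidate radius; the accumulator peak gives the
# best-fitting circle (here standing in for an iris boundary).

def hough_circle(edges, radii):
    """edges: binary edge map; radii: candidate radii.
    Returns (row, col, r) of the accumulator peak."""
    h, w = edges.shape
    acc = np.zeros((h, w, len(radii)), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for ri, r in enumerate(radii):
        # each edge pixel votes for centres at distance r from it
        cy = np.rint(ys[:, None] - r * np.sin(thetas)).astype(int)
        cx = np.rint(xs[:, None] - r * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc[:, :, ri], (cy[ok], cx[ok]), 1)
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return peak[0], peak[1], radii[peak[2]]

# synthetic edge map: a circle of radius 10 centred at (20, 20)
edges = np.zeros((40, 40), dtype=bool)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.rint(20 + 10 * np.sin(t)).astype(int),
      np.rint(20 + 10 * np.cos(t)).astype(int)] = True
print(hough_circle(edges, [8, 10, 12]))
```

The votes concentrate at the true centre only for the correct radius, which is why the method tolerates the partial occlusion common in iris images.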
Abstract: KSLV-I (Korea Space Launch Vehicle-I) is designed as a launch vehicle to place a 100 kg-class satellite into LEO (Low Earth Orbit). The attitude angles of the upper stage, including roll, pitch, and yaw, are controlled by a cold gas thruster system using nitrogen gas. The cold gas thruster is an actuator in the RCS (Reaction Control System). To design an attitude controller for the upper stage, thrust measurements under vacuum conditions are required. In this paper, a new thrust measurement system and calibration mechanism are developed, and the measurement errors and signal processing method are presented.
Abstract: Image fusion aims to enhance the perception of a scene by combining important information captured by different sensors. The Dual-Tree Complex Wavelet Transform (DT-CWT) has been thoroughly investigated for image fusion, since it takes advantage of approximate shift invariance and direction selectivity, but it can handle only limited directional information. To allow a more flexible directional expansion for images, we propose a novel fusion scheme, referred to as the complex contourlet transform (CCT), which successfully incorporates directional filter banks (DFB) into the DT-CWT. As a result, it efficiently deals with images containing contours and textures while retaining the property of shift invariance. Experimental results demonstrate that the method delivers high-quality fusion performance and can facilitate many image processing applications.
Abstract: Increasing growth of information volume in the
internet causes an increasing need to develop new (semi)automatic
methods for retrieval of documents and ranking them according to
their relevance to the user query. In this paper, after a brief review
on ranking models, a new ontology based approach for ranking
HTML documents is proposed and evaluated in various
circumstances. Our approach is a combination of conceptual, statistical, and linguistic methods; this combination preserves the precision of ranking without losing speed. The approach exploits natural language processing techniques for extracting phrases and stemming words. An ontology-based conceptual method is then used to annotate documents and expand the query. For query expansion, the spreading activation algorithm is improved so that the expansion can be performed along various aspects. The annotated documents and the expanded query are then processed to compute the relevance degree using statistical methods. The outstanding
features of our approach are (1) combining conceptual, statistical
and linguistic features of documents, (2) expanding the query with
its related concepts before comparing to documents, (3) extracting
and using both words and phrases to compute relevance degree, (4)
improving the spread activation algorithm to do the expansion based
on weighted combination of different conceptual relationships and
(5) allowing variable document vector dimensions. A ranking
system called ORank is developed to implement and test the
proposed model; the test results are reported at the end of the paper.
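The weighted spreading-activation idea can be sketched as follows; the concept graph, relation weights, decay factor, and threshold are invented for illustration (the paper's improved algorithm combines several conceptual relationships with different weights).

```python
# Spreading activation sketch for query expansion: activation starts at
# the query concepts and propagates along weighted relations, damped by
# a decay factor; concepts that stay above a threshold join the query.

GRAPH = {  # concept -> [(neighbour, relation_weight)]  (toy ontology)
    "car":     [("vehicle", 0.9), ("engine", 0.7)],
    "vehicle": [("transport", 0.8)],
    "engine":  [("fuel", 0.6)],
}

def spread_activation(seeds, decay=0.5, threshold=0.2):
    activation = {c: 1.0 for c in seeds}
    frontier = list(seeds)
    while frontier:
        concept = frontier.pop()
        for neighbour, w in GRAPH.get(concept, []):
            a = activation[concept] * w * decay    # damped propagation
            if a > activation.get(neighbour, 0.0):
                activation[neighbour] = a
                if a >= threshold:                 # keep spreading while strong
                    frontier.append(neighbour)
    return {c: a for c, a in activation.items() if a >= threshold}

print(spread_activation(["car"]))
```

Here a query on "car" is expanded with "vehicle" and "engine", while weakly related concepts such as "fuel" fall below the threshold and are dropped.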
Abstract: The process of wafer fabrication is arguably the most
technologically complex and capital intensive stage in semiconductor
manufacturing. This large-scale discrete-event process is highly reentrant,
and involves hundreds of machines, restrictions, and
processing steps. Therefore, production control of wafer fabrication
facilities (fab), specifically scheduling, is one of the most challenging
problems that this industry faces. Dispatching rules have been
extensively applied to the scheduling problems in semiconductor
manufacturing. Moreover, lot release policies are commonly used in
this manufacturing setting to further improve the performance of such systems and reduce their inherent variability. In this work, simulation is used in the scheduling of re-entrant flow shop manufacturing systems, with an application to semiconductor wafer fabrication: a simulation model has been developed for the Intel Five-Machine Six-Step Mini-Fab using the Extend™ simulation environment. The
Mini-Fab has been selected as it captures the challenges involved in
scheduling the highly re-entrant semiconductor manufacturing lines.
A number of scenarios have been developed and have been used to
evaluate the effect of different dispatching rules and lot release
policies on the selected performance measures. Results of simulation
showed that the performance of the Mini-Fab can be drastically
improved using a combination of dispatching rules and lot release
policy.
Abstract: This paper presents an automatic feature recognition method, based on center-surround difference detection and fuzzy logic, that can be applied in ground-penetrating radar (GPR) image processing. Using the center-surround difference method, salient local image regions are extracted from the GPR images as features of the detected objects, and a fuzzy logic strategy is used to match the detected features against features in a template database. In this way the problem of object detection, which is the key problem in GPR image processing, can be split into two steps: feature extraction and matching. Together, these techniques make the system able to cope with changes in scale, antennas, and noise. The experimental results also show that the system achieves a higher feature-sensing ratio when using GPR to image subsurface structures.
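The center-surround difference operation can be illustrated as follows; the window sizes and the synthetic test image are assumptions, not the paper's configuration.

```python
import numpy as np

# Center-surround difference sketch: a region is salient where a small
# (center) local average differs strongly from a large (surround) local
# average -- roughly the operation used to pull candidate object
# features out of an image.

def box_mean(img, k):
    """Mean filter with a (2k+1)-square window, via zero padding."""
    p = np.pad(img.astype(float), k)
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += p[k + dy:k + dy + h, k + dx:k + dx + w]
    return out / (2 * k + 1) ** 2

def center_surround(img, c=1, s=4):
    return np.abs(box_mean(img, c) - box_mean(img, s))

img = np.zeros((21, 21))
img[9:12, 9:12] = 1.0           # a small bright blob
sal = center_surround(img)
peak = np.unravel_index(sal.argmax(), sal.shape)
print(peak)
```

The saliency map peaks at the blob centre, where the center window is fully inside the blob but the surround window is mostly background.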
Abstract: We address the problem of joint beamforming and multipath channel parameter estimation in Wideband Code Division Multiple Access (WCDMA) communication systems that employ Multiple-Access Interference (MAI) suppression techniques in the uplink (from mobile to base station). Most existing schemes rely on time-multiplexing a training sequence with the user data. In WCDMA, the channel parameters can also be estimated from a code-multiplexed common pilot channel (CPICH), but this channel can be corrupted by strong interference, resulting in a poor estimate. In this paper, we present new methods that combine interference suppression with channel estimation when using multiple receive antennas, based on adaptive signal processing techniques. Computer simulations are used to compare the proposed methods with existing conventional estimation techniques.
Abstract: This paper deals with dynamic load balancing using PVM. In a distributed environment, load balancing and heterogeneity are critical issues that must be examined in depth in order to achieve optimal results and efficiency. Various techniques are used to distribute the load dynamically among different nodes and to deal with heterogeneity. These techniques take different approaches, with process migration as the basic concept in various optimized flavors. But process migration is not an easy job; it imposes a significant burden and processing effort in order to track each process on the nodes. We propose a dynamic load balancing technique in which the application intelligently balances the load among different nodes, resulting in efficient use of the system without the overhead of process migration. It also provides a simple solution to the problem of load balancing in a heterogeneous environment.
Abstract: Wavelet transforms are multiresolution decompositions that can be used to analyze signals and images. Image compression is one of the major applications of wavelet transforms in image processing, and is considered one of the most powerful methods for achieving a high compression ratio. However, its implementation is very time consuming. On the other hand, parallel computing technologies offer an efficient route to wavelet-based image compression. In this paper, we propose a parallel wavelet compression algorithm based on quadtrees. We implement the algorithm using MatlabMPI (a parallel, message-passing version of Matlab), compute its isoefficiency function, and show that it is scalable. Our experimental results also confirm the efficiency of the algorithm.
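The serial building block being parallelized, one level of a 2-D wavelet decomposition with coefficient thresholding, can be sketched as follows; the Haar filter choice and the threshold are illustrative assumptions (the paper distributes such transforms over quadtree-partitioned blocks with MatlabMPI).

```python
import numpy as np

# One level of the 2-D Haar wavelet transform plus hard thresholding of
# the detail subbands -- the lossy step that makes wavelet compression
# effective. Averages are the low-pass output, differences the high-pass.

def haar2d(block):
    """One decomposition level: returns (LL, LH, HL, HH) subbands."""
    a = block.astype(float)
    # transform rows: pairwise averages (low) and differences (high)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # then transform the columns of each result
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def compress(block, threshold=1.0):
    """Zero out small detail coefficients (the lossy step)."""
    ll, lh, hl, hh = haar2d(block)
    details = [np.where(np.abs(d) < threshold, 0.0, d) for d in (lh, hl, hh)]
    return (ll, *details)

x = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = compress(x)
print(ll)
```

In a quadtree scheme, each worker would apply this transform to its own image quadrant, which is what makes the algorithm easy to parallelize.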
Abstract: This paper presents the theoretical background and
the real implementation of an automated computer system to
introduce machine vision in flower, fruit and vegetable processing
for recollection, cutting, packaging, classification, or fumigation
tasks. The considerations and implementation issues presented in this
work can be applied to a wide range of varieties of flowers, fruits and
vegetables, although some of them are especially relevant due to the
great amount of units that are manipulated and processed each year
over the world. The computer vision algorithms developed in this
work are shown in detail, and can be easily extended to other
applications. Special attention is given to electromagnetic compatibility in order to avoid noisy images. Furthermore, real experiments have been carried out to validate the developed application. In particular, the tests show that the method is robust and achieves a high success rate in object characterization.
Abstract: This paper proposes an auto-classification algorithm
of Web pages using Data mining techniques. We consider the
problem of discovering association rules between terms in a set of
Web pages belonging to a category in a search engine database, and
present an auto-classification algorithm for solving this problem that is fundamentally based on the Apriori algorithm. The proposed technique has two phases. The first is a training phase in which human experts determine the categories of different Web pages, and the supervised data mining algorithm combines these categories with appropriately weighted index terms according to the highest-support rules among the most frequent words. The second phase is
the categorization phase where a web crawler will crawl through the
World Wide Web to build a database categorized according to the
result of the data mining approach. This database contains URLs and
their categories.
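The Apriori-style support counting at the heart of the training phase can be sketched as follows; the example pages and the minimum-support value are invented for illustration.

```python
# Apriori sketch over page term sets: frequent k-itemsets are grown only
# from unions of frequent (k-1)-itemsets, pruning the search space.

def apriori(transactions, min_support=0.5):
    n = len(transactions)

    def support(itemset):
        # fraction of pages containing every term in the itemset
        return sum(1 for tx in transactions if itemset <= tx) / n

    # frequent 1-itemsets seed the search
    items = {frozenset([t]) for tx in transactions for t in tx}
    level = {s for s in items if support(s) >= min_support}
    frequent = {s: support(s) for s in level}
    while level:
        # candidate k-itemsets: unions of frequent (k-1)-itemsets
        candidates = {a | b for a in level for b in level
                      if len(a | b) == len(a) + 1}
        level = {c for c in candidates if support(c) >= min_support}
        frequent.update({c: support(c) for c in level})
    return frequent

pages = [
    {"python", "code", "tutorial"},
    {"python", "code"},
    {"python", "news"},
    {"sports", "news"},
]
print({tuple(sorted(s)): v for s, v in apriori(pages).items()})
```

The frequent itemsets (and the rules derived from them) then supply the weighted index terms associated with each category.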
Abstract: Over the past few years, a number of efforts have been made to build parallel processing systems that utilize the idle power of the LANs and PCs available in many homes and corporations. The main advantage of these approaches is that they provide cheap parallel processing environments for those who cannot afford the expense of supercomputers and parallel processing hardware. However, most of the available solutions are not very flexible in their use of resources and are very difficult to install and set up.
In this paper, a multi-level web-based parallel processing system (MWPS) is designed (see appendix). MWPS is based on the idea of volunteer computing; it is very flexible, easy to set up, and easy to use.
MWPS allows three types of subscribers: simple volunteers (single
computers), super volunteers (full networks) and end users. All of
these entities are coordinated transparently through a secure web site.
Volunteer nodes provide the required processing power needed by
the system end users. There is no limit on the number of volunteer
nodes, and accordingly the system can grow indefinitely. Both
volunteer and system users must register and subscribe. Once they subscribe, each entity is provided with the appropriate MWPS components, which are very easy to install.
Super volunteer nodes are provided with special components that make it possible to delegate some of the load to their inner nodes. These inner nodes may in turn delegate some of the load to lower-level inner nodes, and so on. It is the responsibility of the
parent super nodes to coordinate the delegation process and deliver
the results back to the user.
MWPS uses a simple behavior-based scheduler that takes into
consideration the current load and previous behavior of processing
nodes. Nodes that fulfill their contracts within the expected time get a
high degree of trust. Nodes that fail to satisfy their contract get a
lower degree of trust.
MWPS is based on the .NET framework and provides the minimal
level of security expected in distributed processing environments.
Users and processing nodes are fully authenticated. Communications
and messages between nodes are very secure. The system has been
implemented using C#.
MWPS may be used by any group of people or companies to
establish a parallel processing or grid environment.
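The behavior-based trust scheduling idea described above can be sketched as follows; the node model, the trust-update rule, and the scoring formula are all assumptions for illustration, not the actual C# MWPS implementation.

```python
# Sketch of behavior-based scheduling: each node carries a trust score
# that rises when it fulfills its contracts on time and falls when it
# fails; the scheduler prefers trusted, lightly loaded nodes.

class Node:
    def __init__(self, name):
        self.name = name
        self.trust = 0.5        # new nodes start neutral
        self.load = 0           # currently assigned tasks

    def report(self, fulfilled):
        # exponential moving average: meeting a contract raises trust,
        # missing it lowers trust
        self.trust = 0.8 * self.trust + 0.2 * (1.0 if fulfilled else 0.0)

def pick_node(nodes):
    # score balances past behavior against current load
    return max(nodes, key=lambda n: n.trust / (1 + n.load))

a, b = Node("a"), Node("b")
for _ in range(5):
    a.report(True)     # a keeps its contracts
    b.report(False)    # b keeps failing
print(pick_node([a, b]).name)
```

Over repeated rounds the reliable node accumulates trust and attracts most of the work, which is the self-correcting behavior the scheduler relies on.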
Abstract: Skyline extraction from mountainous images can be used for the navigation of vehicles or UAVs (unmanned aerial vehicles), but it is very hard to extract the skyline shape because of clutter such as clouds, sea lines, and field borders in the images. We developed an edge-based skyline extraction algorithm using a proposed multistage edge filtering (MEF) technique. In this method, the characteristics of clutter in the image are first defined, and the lines classified as clutter are then eliminated in stages using the MEF technique. After this processing, the final skyline is selected from the remaining lines using skyline measures. The proposed algorithm is robust in severely cluttered environments and performs well even on low-resolution infrared sensor images. We tested the algorithm on images obtained in the field with an infrared camera and confirmed that it delivers better performance and faster processing times than conventional algorithms.
Abstract: Multiparty voice over IP (MVoIP) systems allow a group of people to communicate freely with each other via the internet, and have many applications such as online gaming, teleconferencing, and online stock trading. Peertalk is a peer-to-peer MVoIP system that is more feasible than existing approaches such as p2p overlay multicast and coupled distributed processing. Since stream mixing and distribution are done by the peers, it is vulnerable to major security threats such as node misbehavior, eavesdropping, Sybil attacks, Denial of Service (DoS), call tampering, and man-in-the-middle attacks. To thwart these security threats, a security framework called PEERTS (PEEred Reputed Trustworthy System for peertalk) is implemented so that efficient and secure communication can be carried out between peers.