Abstract: Elastic boundary eigensolution problems are converted
into boundary integral equations by potential theory. The kernels of
the boundary integral equations contain both logarithmic and Hilbert singularities simultaneously. We present mechanical quadrature
methods for solving eigensolutions of the boundary integral equations
by dealing with two kinds of singularities at the same time. The methods
possess high accuracy O(h³) and low computing complexity. The convergence and stability are proved based on Anselone's collectively compact operator theory. Based on the asymptotic error expansion with odd powers, we can greatly improve the accuracy of the approximation,
and also derive an a posteriori error estimate which can be used for constructing self-adaptive algorithms. The efficiency of the algorithms is illustrated by numerical examples.
Abstract: A sequential decision problem, based on the task of identifying the species of trees given acoustic echo data collected from them, is considered with well-known stochastic classifiers, including single and mixture Gaussian models. Echoes are processed with a preprocessing stage based on a model of mammalian cochlear filtering, using a new discrete low-pass filter characteristic. Stopping-time performance of the sequential decision process is evaluated and compared. It is observed that the new low-pass filter processing results in faster sequential decisions.
Abstract: In this work, I present a review on Sparse Distributed
Memory for Small Cues (SDMSCue), a variant of Sparse Distributed
Memory (SDM) that is capable of handling small cues. I then conduct
and show some cognitive experiments on SDMSCue to test its
cognitive soundness compared to SDM. Small cues refer to input cues that are presented to memory for reading associations but that have many missing parts or fields. The original SDM fails to handle such cues; SDMSCue overcomes this pitfall. The main idea in SDMSCue is the repeated projection of the semantic space onto smaller subspaces that are selected based on the input cue's length and pattern. This process allows Read/Write operations using an input cue that is missing a large portion.
SDMSCue is augmented with the use of genetic algorithms for
memory allocation and initialization. I claim that SDM functionality
is a subset of SDMSCue functionality.
Abstract: Electronics products that pack integrated communications, computing, entertainment and multimedia features into small, stylish and robust new form factors are winning in the marketplace. Because yield is directly proportional to profit, IC (Integrated Circuit) manufacturers struggle to maximize it, yet today's customers demand miniaturization, low cost, high performance and excellent reliability, making yield maximization a never-ending search for an enhanced assembly process. With factors such as minimum tolerances and tighter parameter variations, a systematic approach is needed to predict the assembly process. To evaluate the quality of upcoming circuits, yield models are used that not only predict manufacturing costs but also provide vital information to ease correction when yields fall below expectations. To obtain higher assembly yields, an IC manufacturer must take all factors into consideration: boards, placement, components, the materials the components are made of, and processes. Effective placement yield depends heavily on machine accuracy and on the vision system, which must be able to recognize the features on the board and component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package, and using solder-ball locations, also called footprints. The only assumption a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, a class of computational algorithms that depends on repeated random sampling to compute results; it is used here to simulate the placement and assembly processes of a production line.
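The Monte Carlo idea described above, estimating placement yield by repeated random sampling, can be sketched as follows; the Gaussian error model and the tolerance and spread values are hypothetical illustrations, not the paper's actual process parameters:

```python
import random

def placement_yield(n_trials, tol, sigma, seed=0):
    """Monte Carlo estimate of placement yield: a placement passes when
    the random x/y offset of the machine stays within the pad tolerance."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(n_trials):
        dx = rng.gauss(0.0, sigma)  # hypothetical placement error in x (mm)
        dy = rng.gauss(0.0, sigma)  # hypothetical placement error in y (mm)
        if abs(dx) <= tol and abs(dy) <= tol:
            passed += 1
    return passed / n_trials
```

Tightening the tolerance relative to the machine's error spread lowers the estimated yield, which is precisely the effect such yield models are meant to predict.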
Abstract: High-frequency (HF) communications have been used by military organizations for more than 90 years. The possibility of very-long-range communications without the need for advanced equipment makes HF a convenient and inexpensive alternative to satellite communications. Despite these advantages, voice and data transmission over HF is a challenging task, because the HF channel generally suffers from Doppler shift and spread, multipath, co-channel interference, and many other sources of noise. In constructing an HF data modem, all these effects must be taken into account. STANAG 4539 is a NATO standard for high-speed data transmission over HF. It allows data rates up to 12800 bps over an HF channel of 3 kHz. In this work, an efficient implementation of STANAG 4539 on a single Texas Instruments TMS320C6747 DSP chip is described. The state-of-the-art algorithms used in the receiver and the efficiency of the implementation enable real-time high-speed data / digitized voice transmission over poor HF channels.
Abstract: The objective of this research is to develop an advanced driver-assistance system with the functions of lane departure warning (LDW), forward collision warning (FCW) and an adaptive front-lighting system (AFS). The system is mainly configured with a CCD/CMOS camera that acquires images of the roadway ahead, together with an image-processing unit that analyzes the lane ahead and the preceding vehicles. The input image captured by the camera is used to recognize the lane and the positions of preceding vehicles by image detection and DROI (Dynamic Range of Interesting) algorithms. The system can therefore issue real-time auditory and visual warnings when the driver unwittingly departs the lane or approaches the preceding vehicle too closely, so that danger can be prevented. During nighttime, in addition to the foregoing warning functions, the system controls the bending light of the headlamp to provide immediate illumination when turning on a curved lane, and automatically adjusts the beam level, based on lane-curvature and vanishing-point estimations, to reduce lighting interference for oncoming vehicles in the opposite direction. Experimental results show that the integrated vehicle image system is robust in most environments: the average accuracies of lane detection and preceding-vehicle detection are both above 90%.
Abstract: This paper presents a new color face image database
for benchmarking of automatic face detection algorithms and human
skin segmentation techniques. It is named the VT-AAST image
database, and is divided into four parts. Part one is a set of 286 color
photographs that include a total of 1027 faces in the original format
given by our digital cameras, offering a wide range of difference in
orientation, pose, environment, illumination, facial expression and
race. Part two contains the same set in a different file format. The
third part is a set of corresponding image files that contain human
colored skin regions resulting from a manual segmentation
procedure. The fourth part of the database has the same regions
converted into grayscale. The database is available on-line for
noncommercial use. In this paper, the database's development, organization and format, as well as the information needed for benchmarking of algorithms, are described in detail.
Abstract: The region covariance (RC) descriptor is an effective and efficient feature for visual tracking. Current RC-based tracking algorithms use the whole RC matrix to track the target in video directly. However, these whole-RC-based algorithms have some issues. If some features are contaminated, the whole RC matrix becomes unreliable, which can result in losing the tracked object. In addition, if some features are already highly discriminative against the background, the remaining features are still processed, which reduces efficiency. In this paper a new robust tracking method is proposed,
in which the whole RC matrix is decomposed into several low rank
matrices. Those matrices are dynamically chosen and processed so
as to achieve a good tradeoff between discriminability and
complexity. Experimental results show that our method is more robust to complex environment changes than other RC-based methods, especially when occlusion occurs or when the background is similar to the target.
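A region covariance descriptor of the kind this abstract builds on can be computed in a few lines; the five-feature set used here (pixel coordinates, intensity, and gradient magnitudes) is one common choice for RC tracking, not necessarily the paper's exact feature vector:

```python
import numpy as np

def region_covariance(patch):
    """Compute a region covariance (RC) descriptor for a grayscale patch.
    Per-pixel features: x, y, intensity, |Ix|, |Iy| (a common choice)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]            # pixel coordinate grids
    iy, ix = np.gradient(patch.astype(float))  # image derivatives
    feats = np.stack([xs.ravel(), ys.ravel(),
                      patch.ravel().astype(float),
                      np.abs(ix).ravel(), np.abs(iy).ravel()], axis=0)
    return np.cov(feats)  # 5x5 symmetric positive semidefinite matrix
```

Decomposing or selecting sub-blocks of this 5x5 matrix, as the paper proposes, then amounts to working with covariances of feature subsets rather than the whole matrix.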
Abstract: This paper presents a modified version of the
maximum urgency first scheduling algorithm. The maximum
urgency algorithm combines the advantages of fixed and dynamic
scheduling to provide the dynamically changing systems with
flexible scheduling. This algorithm, however, has a major
shortcoming due to its scheduling mechanism which may cause a
critical task to fail. The modified maximum urgency first scheduling
algorithm resolves the mentioned problem. In this paper, we propose
two possible implementations for this algorithm by using either
earliest deadline first or modified least laxity first algorithms for
calculating the dynamic priorities. The two approaches are compared by simulation, and the earliest deadline first algorithm is then recommended as the preferred implementation. Afterwards, we compare our proposed algorithm with the maximum urgency first algorithm using simulation and present the results. It is shown that modified maximum urgency first is superior to maximum urgency first, since it usually causes fewer task preemptions and hence less related overhead. It also leads to fewer failed non-critical tasks in overloaded situations.
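The core selection rule, fixed criticality first with earliest deadline first breaking ties among ready tasks, can be sketched as follows; the task field names are hypothetical, and this sketch omits preemption handling and the laxity-based variant:

```python
def mmuf_pick(ready):
    """Pick the next task to run: critical tasks always take precedence
    over non-critical ones; within a class, the earliest absolute
    deadline wins (the EDF choice the paper recommends)."""
    critical = [t for t in ready if t["critical"]]
    pool = critical if critical else ready
    return min(pool, key=lambda t: t["deadline"])
```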
Abstract: Infrared focal plane array (IRFPA) sensors, due to their high sensitivity, high frame frequency and simple structure, have become the most prominently used detectors in military applications.
However, they suffer from a common problem called the fixed pattern
noise (FPN), which severely degrades image quality and limits the
infrared imaging applications. It is therefore necessary to perform non-uniformity correction (NUC) on IR images. Non-uniformity correction algorithms fall into two main categories, calibration-based and scene-based. Both categories have shortcomings, hence a novel non-uniformity correction algorithm based on non-linear fitting is proposed, which combines the advantages of the two. Experimental results show that the proposed algorithm achieves good NUC with a lower non-uniformity ratio.
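A per-pixel fitting scheme in the spirit of the proposed non-linear-fit NUC can be sketched as below; the function names, and the use of a polynomial model fitted over uniform blackbody calibration levels, are illustrative assumptions rather than the paper's exact algorithm:

```python
import numpy as np

def fit_nuc(frames, targets, deg=2):
    """Per-pixel polynomial (non-linear) fit for non-uniformity correction.
    frames[k] is the raw detector response to a uniform source at level
    targets[k]; returns per-pixel coefficient maps (highest power first)."""
    k, h, w = frames.shape
    raw = frames.reshape(k, -1)                 # (k, h*w) raw responses
    coeffs = np.empty((deg + 1, h * w))
    for p in range(h * w):                      # independent fit per pixel
        coeffs[:, p] = np.polyfit(raw[:, p], targets, deg)
    return coeffs.reshape(deg + 1, h, w)

def apply_nuc(frame, coeffs):
    """Map raw counts to corrected values using the fitted polynomial."""
    deg = coeffs.shape[0] - 1
    out = np.zeros_like(frame, dtype=float)
    for i, c in enumerate(coeffs):
        out += c * frame ** (deg - i)
    return out
```

After correction, every pixel viewing the same uniform scene maps to the same value, which is what a low non-uniformity ratio measures.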
Abstract: The main goal of data mining is to extract accurate, comprehensible and interesting knowledge from databases that may be considered large search spaces. In this paper, a new, efficient type of Genetic Algorithm (GA) called the uniform two-level GA is proposed as a search strategy to discover truly interesting, high-level prediction rules, a difficult and relatively little-researched problem, rather than discovering classification knowledge as is usual in the literature. The proposed method takes advantage of the uniform population method and addresses the task of generalized rule induction, which can be regarded as a generalization of the task of classification. Although generalized rule induction is computationally demanding and is usually not handled well by standard algorithms, it is demonstrated that this method increases the performance of GAs and rapidly finds interesting rules.
Abstract: Autofluorescence (AF) bronchoscopy is an
established method to detect dysplasia and carcinoma in situ (CIS).
For this reason the "Sotiria" Hospital uses the Karl Storz D-light system. However, in early tumor stages the visualization is not that obvious. With the help of a PC, we analyzed the captured color images by developing certain tools in Matlab®. We used statistical methods based on texture analysis, signal-processing methods based on Gabor models, and conversion algorithms between device-dependent color spaces. We believe that these tools reduce the error made by the naked eye and improve patients' quality of life.
Abstract: In this paper, gradient-based iterative algorithms are presented to solve the following four types of linear matrix equations: (a) AXB = F; (b) AXB = F, CXD = G; (c) AXB = F s.t. X = X^T; (d) AXB + CYD = F, where X and Y are unknown matrices and A, B, C, D, F, G are given constant matrices. It is proved that if the equation considered has a solution, then the unique minimum-norm solution can be obtained by choosing a special kind of initial matrices. The numerical results show that the proposed method is reliable and attractive.
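For type (a), AXB = F, a gradient iteration of the kind analyzed here can be sketched with NumPy; this is a sketch of the standard recursion for this single equation, not the paper's full treatment of types (b) through (d), and the step-size bound is the usual sufficient condition:

```python
import numpy as np

def solve_axb_f(A, B, F, mu=None, iters=5000):
    """Gradient-based iteration for AXB = F:
    X_{k+1} = X_k + mu * A^T (F - A X_k B) B^T.
    Starting from X_0 = 0 targets the minimum-norm solution when one exists."""
    if mu is None:
        # a sufficient step size: mu < 2 / (sigma_max(A)^2 * sigma_max(B)^2)
        mu = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(iters):
        X = X + mu * A.T @ (F - A @ X @ B) @ B.T
    return X
```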
Abstract: This paper presents a heuristic approach to solve the Generalized Assignment Problem (GAP), which is NP-hard. Many researchers have developed algorithms for identifying redundant constraints and variables in linear programming models; some of these algorithms use the intercept matrix of the constraints to identify redundant constraints and variables before the solution process starts. Here a new heuristic approach based on the dominance property of the intercept matrix is proposed to find optimal or near-optimal solutions of the GAP. In this heuristic, redundant variables of the GAP are identified by applying the dominance property of the intercept matrix repeatedly. The approach is tested on 90 benchmark problems of sizes up to 4000, taken from the OR-Library, and the results are compared with optimum solutions. The computational complexity of solving the GAP with this approach is proved to be O(mn^2). The performance of our heuristic is compared with the best state-of-the-art heuristic algorithms with respect to the quality of the solutions. The encouraging results, especially for relatively large test problems, indicate that this heuristic approach can successfully be used to find good solutions for highly constrained NP-hard problems.
Abstract: The wide use of concurrent programming practices in developing software applications leads to various concurrency errors, among which data races are the most important. Java provides strong support for concurrent programming through its concurrency packages. Aspect-oriented programming (AOP) is a modern programming paradigm facilitating the runtime interception of events of interest, and it can be effectively used to handle concurrency problems. AspectJ, an aspect-oriented extension to Java, facilitates the application of AOP concepts to data race detection. Volatile variables are usually considered thread-safe, but they can become candidates for data races if non-atomic operations are performed on them concurrently. Various data race detection algorithms have been proposed in the past, but this issue of volatility and atomicity is still unaddressed. The aim of this research is to propose conditions for data race detection at volatile fields in Java programs, taking into account the atomicity support in the Java concurrency packages and making use of pointcuts. Two simple test programs demonstrate the results of the research. The results are verified on two different Java Development Kits (JDKs) for comparison.
Abstract: Our objective in this paper is to propose an approach
capable of clustering web messages. The clustering is carried out by
assigning, with a certain probability, texts written by the same web
user to the same cluster based on Stylometric features and using
fuzzy clustering algorithms. Focus in the present work is on
comparing the most popular algorithms in fuzzy clustering theory
namely, Fuzzy C-means, Possibilistic C-means and Fuzzy
Possibilistic C-Means.
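A plain Fuzzy C-means routine, the first of the three algorithms compared above, can be sketched as follows; the alternating membership/centroid updates are the standard formulation, and the fuzzifier m = 2 and random initialization are conventional choices rather than the paper's settings:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy C-means: alternate centroid and membership updates.
    Returns (centroids of shape (c, d), membership matrix U of shape (n, c))."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # rows are fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))         # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

Possibilistic C-means differs mainly in dropping the row-normalization constraint on U, which is what makes it less sensitive to outlier texts.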
Abstract: Eigenvector methods are gaining increasing acceptance in the area of spectrum estimation. This paper presents a successful attempt at testing and evaluating the performance of two of the most popular types of subspace techniques in determining the parameters of multiexponential signals with real decay constants buried in noise. In particular, MUSIC (Multiple Signal Classification) and minimum-norm techniques are examined. It is shown that these methods perform almost equally well on multiexponential signals with MUSIC displaying better defined peaks.
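A MUSIC-style pseudospectrum for real decay constants can be sketched as below; forming a Hankel data matrix, taking its noise subspace via SVD, and scanning decaying-exponential steering vectors is one standard way to adapt MUSIC to this signal class, not necessarily the authors' exact formulation:

```python
import numpy as np

def music_decay_spectrum(y, p, lambdas):
    """MUSIC-style pseudospectrum over candidate real decay constants.
    y: samples (unit spacing) of a sum of p decaying exponentials.
    Peaks of the returned array indicate likely decay constants."""
    n = len(y)
    L = n // 2
    H = np.array([y[i:i + L] for i in range(n - L + 1)])  # Hankel matrix
    _, _, Vt = np.linalg.svd(H)
    En = Vt[p:].T                      # noise-subspace basis (columns)
    out = []
    for lam in lambdas:
        a = np.exp(-lam * np.arange(L))   # steering vector for decay lam
        a /= np.linalg.norm(a)
        out.append(1.0 / (np.linalg.norm(En.T @ a) ** 2 + 1e-12))
    return np.array(out)
```

The minimum-norm variant replaces the full noise-subspace projection with a single vector from that subspace; both peak where the steering vector falls into the signal subspace.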
Abstract: Medical Decision Support Systems (MDSSs) are sophisticated, intelligent systems that can provide inference in spite of incomplete information and uncertainty. In such systems, the uncertainty is modeled using various soft computing methods such as Bayesian networks, rough sets, artificial neural networks, fuzzy logic, inductive logic programming and genetic algorithms, as well as hybrid methods formed from combinations of these. In this study, symptom-disease relationships are represented in a framework modeled with formal concept analysis, with diseases as objects and symptoms as attributes. After a concept lattice is formed, Bayes' theorem can be used to determine the relationships between attributes and objects. A discernibility relation, which forms the basis of rough sets, can be applied to the attribute data sets in order to reduce attributes and decrease the computational complexity.
Abstract: In this paper we consider the problem of distributed adaptive estimation in wireless sensor networks under two different observation-noise conditions. In the first case, we assume that some sensors in the network have high observation-noise variance (noisy sensors). In the second case, the observation-noise variance is assumed to differ among the sensors, which is closer to a real scenario. In both cases, an initial estimate of each sensor's observation noise is obtained. For the first case, we show that when such sensors are present, the performance of conventional distributed adaptive estimation algorithms such as the incremental distributed least mean square (IDLMS) algorithm decreases drastically, and that detecting and ignoring these sensors leads to better estimation performance. We then propose a simple algorithm to detect these noisy sensors and modify the IDLMS algorithm to deal with them. For the second case, we propose a new algorithm in which the step-size parameter is adjusted for each sensor according to its observation-noise variance. As the simulation results show, the proposed methods outperform the IDLMS algorithm under the same conditions.
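The incremental update at the heart of IDLMS, with a hook for the per-sensor step size that the second proposed algorithm adjusts by noise variance, can be sketched as follows; the ring traversal and the mu_of parameter are illustrative simplifications, not the paper's exact algorithm:

```python
import numpy as np

def idlms_step(w, data, mu_of):
    """One incremental cycle of distributed LMS over a ring of sensors.
    data: list of (u, d) regressor/measurement pairs, one per sensor;
    mu_of(k) returns the step size for sensor k (shrinking it for
    sensors with high observation-noise variance is the paper's idea)."""
    for k, (u, d) in enumerate(data):
        e = d - u @ w                 # local estimation error at sensor k
        w = w + mu_of(k) * e * u      # local LMS update, passed on
    return w
```

Setting mu_of(k) to zero for a detected noisy sensor recovers the "detect and ignore" strategy of the first case.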
Abstract: In this paper multi-objective genetic algorithms are
employed for Pareto approach optimization of ideal Turboshaft
engines. In the multi-objective optimization a number of conflicting
objective functions are to be optimized simultaneously. The
important objective functions that have been considered for optimization are specific thrust (F/ṁ0), specific fuel consumption (S_P), specific output shaft power (Ẇ_shaft/ṁ0) and overall efficiency (η_O). These objectives usually conflict with each other. The design variables consist of thermodynamic parameters (compressor pressure ratio, turbine temperature ratio and Mach number).
At the first stage, single-objective optimization is investigated; the NSGA-II method is then used for multi-objective optimization. Optimization procedures are performed for two and four objective functions, and the results are compared for the ideal Turboshaft engine. In order to investigate the optimal thermodynamic behavior for two objectives, different sets, each including two of the output parameters as objectives, are considered individually. For each set the Pareto front is depicted. The decision variables selected from this Pareto front yield the best possible combinations of the corresponding objective functions. No point on the Pareto front is superior to another, but all of them are superior to any other point. For the four-objective optimization, the results are given in tables.
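Extracting a Pareto front from a set of candidate objective vectors, as is done here for each pair of engine objectives, reduces to a non-dominance filter; a minimal sketch, with all objectives treated as minimized:

```python
def dominates(q, p):
    """q dominates p when q is no worse in every objective (minimization)
    and strictly better in at least one."""
    return (all(a <= b for a, b in zip(q, p))
            and any(a < b for a, b in zip(q, p)))

def pareto_front(points):
    """Keep only the non-dominated points: no other point dominates them.
    These are the points NSGA-II's non-dominated sorting ranks first."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Objectives to be maximized (e.g. overall efficiency) are handled by negating them before applying the filter.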