Abstract: Hypernetworks are generalized graph structures
representing higher-order interactions among variables. We present a
method for self-organizing hypernetworks to learn an associative
memory of sentences and to recall the sentences from this memory.
This learning method is inspired by the "mental chemistry" model of
cognition and the "molecular self-assembly" technology in
biochemistry. Simulation experiments are performed on a corpus of
natural-language dialogues of approximately 300K sentences
collected from TV drama captions. We report on the sentence
completion performance as a function of the order of word-interaction
and the size of the learning corpus, and discuss the plausibility of this
architecture as a cognitive model of language learning and memory.
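As a rough illustration of storing sentences as higher-order word combinations, the sketch below builds a toy hyperedge memory and completes a blanked word by voting. The sampling scheme, function names, and corpus are illustrative assumptions, not the paper's actual algorithm.

```python
import random
from collections import defaultdict
from itertools import combinations

def learn(sentences, order=3, samples=20, seed=0):
    """Store order-k hyperedges: (position, word) contexts -> left-out word counts."""
    rng = random.Random(seed)
    memory = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        words = sent.split()
        if len(words) < order:
            continue
        for _ in range(samples):
            idx = sorted(rng.sample(range(len(words)), order))
            # leave each sampled position out in turn: context -> predicted word
            for leave in range(order):
                context = tuple((i, words[i]) for j, i in enumerate(idx) if j != leave)
                memory[context][(idx[leave], words[idx[leave]])] += 1
    return memory

def complete(memory, sent_with_blank, order=3):
    """Recall the blanked word by letting all matching hyperedges vote."""
    words = sent_with_blank.split()
    blank = words.index("_")
    known = [i for i in range(len(words)) if i != blank]
    votes = defaultdict(int)
    for ctx_idx in combinations(known, order - 1):
        context = tuple((i, words[i]) for i in ctx_idx)
        for (pos, word), count in memory.get(context, {}).items():
            if pos == blank:
                votes[word] += count
    return max(votes, key=votes.get) if votes else None

corpus = ["i like green tea", "i like black tea", "we drink green tea"]
mem = learn(corpus)
print(complete(mem, "i like green _"))  # "tea"
```

Higher-order contexts (larger `order`) disambiguate better but need more training sentences, which is the trade-off the completion experiments probe.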
Abstract: Among all geo-hydrological relationships, the rainfall-runoff
relationship is of utmost importance in any hydrological
investigation and in water resource planning. Spatial variation and the
lag time involved in obtaining areal estimates for the basin as a whole can
affect the parameterization in design stage as well as in planning
stage. In conventional hydrological data processing, the spatial aspect
is either ignored or interpolated at the sub-basin level. Temporal
variation, when analysed at different stages, can provide clues to its
spatial effectiveness. The interplay of space-time variation at the pixel
level can provide better understanding of basin parameters.
Sustenance of design structures for different return periods and their
spatial auto-correlations should be studied at different geographical
scales for better management and planning of water resources.
In order to understand the relative effect of spatio-temporal
variation in hydrological data network, a detailed geo-hydrological
analysis of Betwa river catchment falling in Lower Yamuna Basin is
presented in this paper. Moreover, exact estimates of the
availability of water in the Betwa river catchment, especially in the
wake of the recent Betwa-Ken linkage project, need thorough scientific
investigation for better planning. Therefore, an attempt in this
direction is made here to analyse the existing hydrological and
meteorological data with the help of SPSS, GIS and MS-EXCEL
software. A comparison of spatial and temporal correlations at
sub-catchment level in the case of the upper Betwa reaches has been made to
demonstrate the representativeness of rain gauges. First, flows at
different locations are used to derive correlation and regression
coefficients. Then, long-term normal water yield estimates based on
pixel-wise regression coefficients of rainfall-runoff relationship have
been mapped. The areal values obtained from these maps can
definitely improve upon estimates based on point-based
extrapolations or areal interpolations.
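As a toy illustration of the pixel-wise regression step described above (all grids, time series, and coefficients below are synthetic assumptions, not the Betwa data):

```python
import numpy as np

# Fit a separate linear rainfall-runoff regression at each pixel of a small
# synthetic grid; the resulting slope map is the kind of coefficient layer
# that would be mapped in GIS.
rng = np.random.default_rng(42)
ny, nx, nt = 4, 5, 30                                 # grid of pixels, 30 time steps
rain = rng.uniform(0, 100, size=(nt, ny, nx))
true_coef = rng.uniform(0.2, 0.8, size=(ny, nx))      # hypothetical runoff coefficients
runoff = true_coef * rain + rng.normal(0, 1.0, size=(nt, ny, nx))

slope = np.empty((ny, nx))
for i in range(ny):
    for j in range(nx):
        # least-squares fit: runoff = a*rain + b at each pixel
        a, b = np.polyfit(rain[:, i, j], runoff[:, i, j], 1)
        slope[i, j] = a
print(np.max(np.abs(slope - true_coef)))              # small: fits recover coefficients
```

Areal yield estimates then follow by aggregating the per-pixel regressions rather than extrapolating from a few gauge points.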
Abstract: In aerospace applications, interactions of airflow with
aircraft structures can result in undesirable structural deformations.
This structural deformation, in turn, can be predicted if the natural
modes of the structure are known. This can be achieved through
conventional modal testing that requires a known excitation force in
order to extract these dynamic properties. This technique can be
experimentally complex because of the need for artificial excitation,
and it also does not represent actual operational conditions. The
current work presents part of a research effort that addresses the practical
implementation of operational modal analysis (OMA) applied to a
cantilevered hybrid composite plate, employing a single contactless
sensing system via a laser vibrometer. The OMA technique extracts the
modal parameters based only on the measurements of the dynamic
response. The OMA results were verified with impact hammer modal
testing and good agreement was obtained.
Abstract: Equations with differentials relating to the inverse of an unknown function, rather than to the unknown function itself, are solved exactly for some special cases and numerically for the general case. Invertibility combined with differentiability over connected domains forces solutions always to be monotone. Numerical function inversion is key to all solution algorithms, which are either of a forward type or of a fixed-point type that considers whole approximate solution functions in each iteration. The considerations given here are restricted to ordinary differential equations with inverted functions (ODEIs) of first order. Forward-type computations, if applicable, admit consistency of order one and, under an additional accuracy condition, convergence of order one.
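A hedged sketch of a forward-type scheme, assuming an ODEI of the form f'(t) = g(f⁻¹(t)) (the exact equation class in the paper may differ). The example f'(t) = 2·(f⁻¹(t))² with f(1) = 1 has exact solution f(t) = t² on [1, 2], since f⁻¹(t) = √t and 2·(√t)² = 2t = f'(t). Because f(t) ≥ t here, the inversion target always lies inside the already-computed range, which is what makes the forward march applicable.

```python
import numpy as np

# Forward (Euler-type) march: at each step, numerically invert the monotone
# approximate solution built so far, then advance with the ODEI right-hand side.
n = 1000
h = (2.0 - 1.0) / n
ts = [1.0]
fs = [1.0]                                  # f(1) = 1
for k in range(n):
    t = 1.0 + k * h
    finv = np.interp(t, fs, ts)             # numerical inversion of the solution so far
    fs.append(fs[-1] + h * 2.0 * finv**2)   # f' = 2*(f^-1)^2
    ts.append(1.0 + (k + 1) * h)
print(abs(fs[-1] - 4.0))                    # O(h) error vs exact f(2) = 4
```

Halving h roughly halves the end-point error, consistent with the order-one convergence stated in the abstract.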
Abstract: In this paper we present modeling and simulation of
physical vapor deposition for metallic bipolar plates. We discuss the
application of different models to simulate the
transport and chemical reactions of the gas species in the gas chamber.
The so-called sputter process is an extremely sensitive process for
depositing thin layers on metallic plates. We have taken into account
lower-order models to obtain first results with respect to the gas
fluxes and the kinetics in the chamber.
The model equations can be treated analytically in some
circumstances and complicated multi-dimensional models are solved
numerically with a software package (UG, unstructured grids; see [1]).
Because of the multi-scale and multi-physics behavior of the models,
we discuss adapted schemes to solve more accurately in the different
domains and scales. The results are compared with physical
experiments to validate the model for the assumed growth of thin
layers.
Abstract: In this paper we analyze the application of a formal proof system to the discrete logarithm problem used in public-key cryptography. That is, we explore a computer verification of the ElGamal encryption scheme with the formal proof system Isabelle/HOL. More precisely, the functional correctness of this algorithm is formally verified with computer support. In addition, we present a formalization of the DSA signature scheme in the Isabelle/HOL system. We show that this scheme is correct, which is a necessary condition for the usefulness of any cryptographic signature scheme.
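The functional-correctness property verified in Isabelle/HOL, Dec(Enc(m)) = m, can be illustrated concretely for textbook ElGamal (this is an executable sketch with toy, insecure parameters, not the formal proof):

```python
import random

# Textbook ElGamal over a small prime field; real deployments use >= 2048-bit groups.
p = 467                              # toy prime modulus
g = 2                                # group generator
x = random.randrange(2, p - 1)       # private key
y = pow(g, x, p)                     # public key y = g^x mod p

def encrypt(m):
    k = random.randrange(2, p - 1)   # fresh ephemeral randomness per message
    return pow(g, k, p), (m * pow(y, k, p)) % p

def decrypt(a, b):
    # a^(p-1-x) = a^(-x) mod p by Fermat's little theorem,
    # so b * a^(-x) = m * g^(xk) * g^(-xk) = m mod p.
    return (b * pow(a, p - 1 - x, p)) % p

m = 123
assert decrypt(*encrypt(m)) == m     # the correctness property, checked concretely
```

The Isabelle/HOL development proves this identity for all keys and messages in the group, rather than testing it on samples as above.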
Abstract: The problem addressed herein is the efficient management of the intense Grid/Cluster computation involved when the preconditioned Bi-CGSTAB Krylov method is employed for the iterative solution of the large and sparse linear system arising from the discretization of the Modified Helmholtz-Dirichlet problem by the Hermite Collocation method. Taking advantage of the Collocation matrix's red-black ordered structure, we organize the whole computation efficiently and map it onto a pipeline architecture with master-slave communication. The implementation, through MPI programming tools, is realized on a SUN V240 cluster interconnected through 100 Mbps and 1 Gbps Ethernet networks, and its performance is presented through the included speedup measurements.
Abstract: The present paper discusses the selection of process
parameters for obtaining optimal nanocrystallite size in the CuO-ZrO2
catalyst. There are some parameters changing the inorganic
structure which influence the hydrolysis and
condensation reactions. A statistical design test method is
implemented in order to optimize the experimental conditions of
CuO-ZrO2 nanoparticle preparation. The method is applied to the
experiments using the standard L16 orthogonal array. The crystallite
size is considered as an index and is used for the analysis as the
parameters vary. The effects of pH, H2O/precursor molar
ratio (R), time and temperature of calcination,
chelating agent and alcohol volume are particularly investigated
among all the parameters. According to the Taguchi
results, temperature has the greatest impact on the
particle size. The pH and H2O/precursor molar ratio have low
influence compared with temperature. The alcohol volume as
well as the time has almost no effect as compared with all other
parameters. Temperature also has an influence on the morphology
and amorphous structure of zirconia. The optimal conditions are
determined by using Taguchi method. The nanocatalyst is studied by
DTA-TG, XRD, EDS, SEM and TEM. The results of this research
indicate that it is possible to vary the structure, morphology and
properties of the sol-gel by controlling the above-mentioned
parameters.
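The Taguchi-style main-effects analysis behind such conclusions can be sketched with a small orthogonal array (the array below is an L4(2³) rather than the paper's L16, and the factor names and responses are made up for illustration):

```python
import numpy as np

# L4 orthogonal array: each column is one two-level factor; each row one run.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])                       # columns: temperature, pH, time
response = np.array([12.0, 11.0, 20.0, 21.0])    # hypothetical crystallite sizes, nm

effects = {}
for col, name in enumerate(["temperature", "pH", "time"]):
    m0 = response[L4[:, col] == 0].mean()        # mean response at level 0
    m1 = response[L4[:, col] == 1].mean()        # mean response at level 1
    effects[name] = abs(m1 - m0)                 # main-effect range of the factor
print(max(effects, key=effects.get))             # the dominant factor
```

The orthogonality of the array lets each factor's level means be compared independently, which is how the dominant parameter (temperature, in the abstract's findings) is identified.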
Abstract: The kernel function, which allows the formulation of nonlinear variants of any algorithm that can be cast in terms of dot products, has enabled Support Vector Machines (SVM) to be applied successfully in many fields, e.g. classification and regression. The importance of the kernel has motivated many studies on its composition. It is well known that the reproducing kernel (R.K) is a useful kernel function which possesses many properties, e.g. positive definiteness, the reproducing property, and the ability to compose complex R.Ks by simple operations. There are two popular ways to compute an R.K in explicit form. One is to construct and solve a specific differential equation with boundary values, whose handicap is the inability to obtain a unified form of the R.K. The other uses a piecewise integral of the Green function associated with a differential operator L. The latter benefits the computation of an R.K with a unified explicit form and theoretical analysis, although studies of it are relatively recent and practical computations fewer. In this paper, a new algorithm for computing an R.K is presented. It obtains the unified explicit form of the R.K in a general reproducing kernel Hilbert space (RKHS), avoids constructing and solving complex differential equations manually, and enables automatic, flexible and rigorous computation for more general RKHSs. To validate that the R.K computed by the algorithm can be used well in SVM, some illustrative examples and a comparison between the R.K and the Gaussian kernel (RBF) in support vector regression are presented. The results show that the performance of the R.K is close or slightly superior to that of the RBF.
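The flavor of the R.K-vs-RBF comparison can be sketched with a known closed-form reproducing kernel. K(x, y) = 1 + min(x, y) is the classical R.K of the Sobolev space W¹[0,1] with inner product f(0)g(0) + ∫f'g'; it is not the kernel produced by the paper's algorithm, and kernel ridge regression stands in for SVR here to stay dependency-free:

```python
import numpy as np

rng = np.random.default_rng(0)
xtr = np.sort(rng.uniform(0, 1, 40))
ytr = np.sin(2 * np.pi * xtr) + rng.normal(0, 0.1, 40)   # noisy training data
xte = np.linspace(0, 1, 200)
yte = np.sin(2 * np.pi * xte)                            # clean test target

def fit_predict(kern, lam=1e-3):
    # kernel ridge regression: alpha = (K + lam*I)^-1 y, f(x) = K(x, X) alpha
    K = kern(xtr[:, None], xtr[None, :])
    alpha = np.linalg.solve(K + lam * np.eye(len(xtr)), ytr)
    return kern(xte[:, None], xtr[None, :]) @ alpha

rk  = fit_predict(lambda a, b: 1.0 + np.minimum(a, b))            # Sobolev R.K
rbf = fit_predict(lambda a, b: np.exp(-((a - b) ** 2) / 0.02))    # Gaussian kernel
print(np.sqrt(np.mean((rk - yte) ** 2)), np.sqrt(np.mean((rbf - yte) ** 2)))
```

Both kernels recover the sine curve from noisy samples; which one wins on a given problem depends on how well the RKHS norm matches the target's smoothness, which is the point of the paper's comparison.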
Abstract: Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. In neural networks that address classification problems, the training set, testing set, and learning rate are key elements: the collection of input/output patterns used to train the network, the patterns used to assess network performance, and the rate at which adjustments are made. This paper describes a proposed back-propagation neural-net classifier that performs cross-validation for the original neural network, in order to optimize classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated by means of five data sets: contact-lenses, cpu, weather symbolic, weather, and labor-nega-data. It is shown that, compared to the existing neural network, training is more than 10 times faster when the dataset is larger than cpu or the network has many hidden units, while accuracy ('percent correct') was the same for all datasets except contact-lenses, which is the only one with missing attributes. For contact-lenses, the accuracy of the proposed neural network was on average around 0.3% lower than that of the original neural network. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
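A minimal sketch of the underlying idea (not the paper's classifier): a one-hidden-layer back-propagation network evaluated with k-fold cross-validation on synthetic two-class data. The architecture, data, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0, 1, (200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)        # easy, linearly separable labels

def train(Xt, yt, hidden=8, lr=0.5, epochs=500):
    rng2 = np.random.default_rng(2)
    W1 = rng2.normal(0, 0.5, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng2.normal(0, 0.5, hidden);      b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(Xt @ W1 + b1)                # hidden layer
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output
        d2 = (p - yt) / len(yt)                  # cross-entropy output gradient
        d1 = np.outer(d2, W2) * (1 - h**2)       # back-propagated hidden gradient
        W2 -= lr * h.T @ d2;  b2 -= lr * d2.sum()
        W1 -= lr * Xt.T @ d1; b1 -= lr * d1.sum(axis=0)
    return W1, b1, W2, b2

def predict(params, Xt):
    W1, b1, W2, b2 = params
    return (1 / (1 + np.exp(-(np.tanh(Xt @ W1 + b1) @ W2 + b2))) > 0.5).astype(float)

k = 5
folds = np.array_split(rng.permutation(len(X)), k)
accs = []
for i in range(k):
    test = folds[i]
    trainidx = np.concatenate(folds[:i] + folds[i + 1:])
    params = train(X[trainidx], y[trainidx])
    accs.append(np.mean(predict(params, X[test]) == y[test]))
print(np.mean(accs))                             # mean cross-validated accuracy
```

Cross-validation gives a less optimistic accuracy estimate than a single train/test split, at the cost of k training runs, which is why reducing per-run training time matters in the paper's setting.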
Abstract: In this paper, we study a numerical method for solving second-order fuzzy differential equations using the Adomian method under strongly generalized differentiability. We present an example with an initial condition having four different solutions to illustrate the efficiency of the proposed method under strongly generalized differentiability.
Abstract: Recently, it has been found that the telegraph equation is more suitable than the ordinary diffusion equation for modelling reaction-diffusion processes in several branches of science. In this paper, a numerical solution of the one-dimensional hyperbolic telegraph equation by the collocation method using septic splines is proposed. The scheme works in a similar fashion to finite difference methods. Test problems are used to validate the scheme by calculating the L2- and L∞-norms. The accuracy of the presented method is demonstrated on two test problems. The numerical results are found to be in good agreement with the exact solutions.
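To make the test-problem setup concrete, here is a plain finite-difference analogue of such a scheme (central differences, not the septic-spline collocation itself) for the 1D telegraph equation u_tt + 2a·u_t + b²·u = u_xx. With a = 2 and b² = 2, the exact solution u = e^(−t)·sin(x) on [0, π] lets the L∞ error be measured directly:

```python
import numpy as np

a, b2 = 2.0, 2.0
nx = 51
x = np.linspace(0, np.pi, nx)
dx = x[1] - x[0]
dt = 0.5 * dx                                   # satisfies the CFL condition dt <= dx
u0 = np.sin(x)                                  # u(x, 0)
# second-order start-up from u_t(x,0) = -sin x and u_tt(x,0) = sin x
u1 = u0 * (1 - dt + dt**2 / 2)
t = dt
while t < 1.0 - 1e-12:
    lap = np.zeros(nx)
    lap[1:-1] = (u1[2:] - 2 * u1[1:-1] + u1[:-2]) / dx**2
    # central differences in time: solve for the new level u^(n+1)
    unew = (lap + (2 / dt**2 - b2) * u1 + (a / dt - 1 / dt**2) * u0) / (1 / dt**2 + a / dt)
    unew[0] = unew[-1] = 0.0                    # homogeneous Dirichlet boundaries
    u0, u1 = u1, unew
    t += dt
err = np.max(np.abs(u1 - np.exp(-t) * np.sin(x)))
print(err)                                      # small L-infinity error at t ~ 1
```

Spline-collocation schemes follow the same validation pattern: march the discrete equation in time, then report L2- and L∞-norms of the error against the known exact solution.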
Abstract: In this paper, the telegraph equation is solved numerically by cubic B-spline quasi-interpolation. We obtain the numerical scheme by using the derivative of the quasi-interpolation to approximate the spatial derivative of the dependent variable and a low-order forward difference to approximate its temporal derivative. The advantage of the resulting scheme is that the algorithm is very simple, so it is very easy to implement. The results of numerical experiments are presented and compared with analytical solutions by calculating the L2 and L∞ error norms, confirming the good accuracy of the presented scheme.
Abstract: This paper argues that a product development exercise
involves, in addition to the conventional stages, several decisions
regarding other aspects. These aspects should be addressed
simultaneously in order to develop a product that responds to the
customer needs and that helps realize objectives of the stakeholders
in terms of profitability, market share and the like. We present a
framework that encompasses these different development
dimensions. The framework shows that a product development
methodology such as the Quality Function Deployment (QFD) is the
basic tool which allows definition of the target specifications of a
new product. Creativity is the first dimension that enables the
development exercise to proceed and conclude successfully. A number of
group processes need to be followed by the development team in
order to ensure enough creativity and innovation. Secondly,
packaging is considered to be an important extension of the product.
Branding strategies, quality and standardization requirements,
identification technologies, design technologies, production
technologies and costing and pricing are also integral parts to the
development exercise. These dimensions constitute the proposed
framework. The paper also presents a mathematical model used to
calculate the design targets based on the target costing principle. The
framework is used to study a case of a new product development in
the telecommunications services sector.
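The target-costing principle underlying the paper's mathematical model can be illustrated with a small worked example (all figures and the importance weights are hypothetical, not from the case study):

```python
# Target cost = competitive market price minus the required margin, then
# allocated to subsystems in proportion to their customer importance
# (e.g. weights derived from a QFD analysis).
target_price = 120.0        # what the market will bear (hypothetical)
required_margin = 0.25      # stakeholders' profitability objective
target_cost = target_price * (1 - required_margin)

importance = {"housing": 0.2, "electronics": 0.5, "packaging": 0.3}
cost_targets = {part: w * target_cost for part, w in importance.items()}
print(target_cost, cost_targets)   # 90.0 total, split 18 / 45 / 27
```

Design targets then flow from the cost targets: each subsystem must be engineered within its allocated share, rather than costs being tallied after the design is fixed.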
Abstract: In the present research, a finite element model is
presented to study the geometrical and material nonlinear behavior of
reinforced concrete plane frames considering soil-structure
interaction. The nonlinear behaviors of concrete and reinforcing steel
are considered both in compression and tension up to failure. The
model also accounts for the number, diameter, and distribution
of rebars along every cross section. Soil behavior is taken into
consideration using four different models, namely the linear and nonlinear
Winkler models and the linear and nonlinear continuum models. A
computer program (NARC) is specially developed in order to
perform the analysis. The results achieved by the present model show
good agreement with both theoretical and experimental published
literature. The nonlinear behavior of a rectangular frame resting on
soft soil up to failure using the proposed model is introduced for
demonstration.
Abstract: Median filters with larger windows offer greater smoothing and are more robust than the median filters of smaller windows. However, the larger median smoothers (the median filters with the larger windows) fail to track low order polynomial trends in the signals. Due to this, constant regions are produced at the signal corners, leading to the loss of fine details. In this paper, an algorithm, which combines the ability of the 3-point median smoother in preserving the low order polynomial trends and the superior noise filtering characteristics of the larger median smoother, is introduced. The proposed algorithm (called the combiner algorithm in this paper) is evaluated for its performance on a test image corrupted with different types of noise and the results obtained are included.
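A sketch in the spirit of the abstract (the actual combination rule is the paper's contribution; the rule below is an illustrative stand-in): run a 3-point and a 7-point median smoother on an impulse-corrupted ramp, then keep the 7-point output except where it disagrees strongly with the trend-preserving 3-point output.

```python
import numpy as np

def medfilt(x, w):
    """Running median of window w with edge padding."""
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + w]) for i in range(len(x))])

rng = np.random.default_rng(3)
clean = np.concatenate([np.linspace(0, 1, 40), np.linspace(1, 0, 40)])  # ramp, one corner
noisy = clean.copy()
spikes = rng.choice(len(noisy), 10, replace=False)
noisy[spikes] += rng.choice([-1.0, 1.0], 10)       # impulsive noise

m3 = medfilt(noisy, 3)                             # preserves low-order trends
m7 = medfilt(noisy, 7)                             # stronger smoothing, flattens corners
combined = np.where(np.abs(m7 - m3) > 0.05, m3, m7)  # trust m3 where m7 distorts
print(np.mean((combined - clean) ** 2), np.mean((noisy - clean) ** 2))
```

The combined output inherits the large window's noise suppression on flat runs while avoiding the constant regions the large smoother produces at signal corners.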
Abstract: Automatic Vehicle Identification (AVI) has many
applications in traffic systems (highway electronic toll collection, red
light violation enforcement, border and customs checkpoints, etc.).
License Plate Recognition is an effective form of AVI system. In
this study, a smart and simple algorithm is presented for a vehicle's
license plate recognition system. The proposed algorithm consists of
three major parts: Extraction of plate region, segmentation of
characters and recognition of plate characters. For extracting the
plate region, edge detection algorithms and smearing algorithms are
used. In the segmentation part, smearing algorithms, filtering and some
morphological algorithms are used. Finally, statistics-based
template matching is used for recognition of the plate characters. The
performance of the proposed algorithm has been tested on real
images. Based on the experimental results, we noted that our
algorithm shows superior performance in car license plate
recognition.
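The final stage (statistics-based template matching) can be sketched with normalized cross-correlation; the "plate" and "character" below are tiny synthetic binary arrays, not real images:

```python
import numpy as np

def match(image, template):
    """Slide the template over the image; return the best-scoring position
    and its normalized cross-correlation score in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    best, pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            w = w - w.mean()
            denom = np.sqrt((w**2).sum() * (t**2).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, pos = score, (i, j)
    return pos, best

img = np.zeros((10, 20))
glyph = np.array([[1, 0, 1], [1, 1, 1], [1, 0, 1]], dtype=float)  # a crude "H"
img[4:7, 11:14] = glyph
pos, score = match(img, glyph)
print(pos, score)   # glyph located at (4, 11)
```

In a full system, one such template exists per character class, and the class with the highest correlation score at each segmented character position is emitted.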
Abstract: We propose a reduced-order model for the instantaneous
hydrodynamic force on a cylinder. The model consists of a system of
two ordinary differential equations (ODEs), which can be integrated
in time to yield very accurate histories of the resultant force and
its direction. In contrast to several existing models, the proposed
model considers the actual (total) hydrodynamic force rather than its
perpendicular or parallel projection (the lift and drag), and captures
the complete force rather than the oscillatory part only. We study
and provide descriptions of the relationship between the model
parameters, evaluated utilizing results from numerical simulations,
and the Reynolds number, so that the model can be used at any
value within the considered range of 100 to 500 to provide an
accurate representation of the force without the need to perform
time-consuming simulations and solve the partial differential equations
(PDEs) governing the flow field.
Abstract: Nanotechnology is the science of creating, using and
manipulating objects which have at least one dimension in the range of
0.1 to 100 nanometers. In other words, nanotechnology
reconstructs a substance using its individual atoms, arranging
them in a way that is desirable for our purpose.
The main reason that nanotechnology has been attracting
attention is the unique properties that objects show when they are
formed at the nano-scale. The differing characteristics that nano-scale
materials show compared to their naturally occurring forms are both useful
for creating high-quality products and dangerous when in
contact with the body or spread in the environment.
In order to control and lower the risk of such nano-scale particles,
the main following three topics should be considered:
1) First of all, these materials can cause long-term diseases that
may show their effects on the body years after penetrating human
organs, and since this science has only recently been developed on an
industrial scale, not enough information is available about its
hazards to the body.
2) The second is that these particles can easily spread out in the
environment and remain in air, soil or water for a very long time,
in addition to their high ability to penetrate the skin and cause new
kinds of diseases.
3) The third is that, to protect the body and the environment against
the danger of these particles, the protective barriers must be finer than
these small objects, and such defenses are hard to accomplish.
This paper reviews, discusses and assesses the risks that humans and
the environment face as this new science develops at a high rate.
Abstract: In this paper, a novel method for estimating the frequencies of multiple one-dimensional real-valued sinusoidal signals in the presence of additive Gaussian noise is postulated. A computationally simple frequency estimation method with efficient statistical performance is attractive in many array signal processing applications. The prime focus of this paper is to combine a subspace-based technique with a simple peak-search approach. The paper presents a variant of the Propagator Method (PM), in which a collaborative approach of SUMWE and the Propagator Method is applied in order to estimate the multiple real-valued sine-wave frequencies. A new data model is proposed in which the dimension of the signal subspace equals the number of frequencies present in the observation, whereas in the conventional MUSIC method the signal subspace dimension is twice the number of frequencies for real-valued sinusoidal signals. The statistical analysis of the proposed method is studied, and an explicit expression for the asymptotic (large-sample) mean-squared error (MSE), or variance, of the estimation error is derived. The performance of the method is demonstrated, and the theoretical analysis is substantiated, through numerical examples. The proposed method achieves consistently high estimation accuracy and frequency resolution at lower SNR, which is verified by simulations comparing it with the conventional MUSIC, ESPRIT and Propagator Methods.
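A plain MUSIC baseline (one of the methods the paper compares against, not the proposed PM/SUMWE hybrid) illustrates the subspace-plus-peak-search idea for a single real sinusoid in white Gaussian noise; all parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
f_true, N, m = 0.12, 400, 20            # frequency (cycles/sample), samples, window
n = np.arange(N)
x = np.cos(2 * np.pi * f_true * n) + 0.1 * rng.normal(size=N)

# Sample covariance from overlapping length-m windows of the observation.
X = np.lib.stride_tricks.sliding_window_view(x, m)   # shape (N - m + 1, m)
R = X.T @ X / X.shape[0]
w, V = np.linalg.eigh(R)                # eigenvalues in ascending order
# One real sinusoid spans a rank-2 signal subspace (the point the abstract
# contrasts with its proposed rank-1-per-frequency data model).
En = V[:, :-2]                          # noise-subspace eigenvectors

freqs = np.linspace(0.01, 0.49, 2000)
k = np.arange(m)
A = np.exp(2j * np.pi * np.outer(k, freqs))          # steering vectors
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
f_est = freqs[np.argmax(pseudo)]        # peak search over the pseudospectrum
print(f_est)                            # close to 0.12
```

Halving the signal-subspace dimension, as the proposed data model does for real-valued sinusoids, shrinks the eigendecomposition and peak-search cost, which is the computational advantage the abstract claims.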