Abstract: We demonstrate that it is possible to compute wave function normalization constants for a class of Schrödinger-type equations by an algorithm that scales linearly (in the number of eigenfunction evaluations) with the desired precision P in decimal digits.
Abstract: Industrial robots become useless without end-effectors,
which in many instances take the form of friction grippers.
Commonly, friction grippers apply frictional forces to different
objects on the basis of the programmer's experience. This limits
the effectiveness of the gripping force and may result in damage
to the object. This paper describes the stages of design and
development of a low-cost sensor-based robotic gripper that
facilitates the task of applying the right gripping force to
different objects. The gripper is also equipped with range
sensors in order to avoid collisions between the gripper and
objects. It is a fully functional automated pick-and-place
gripper that can be used in many industrial applications, and it
can also be altered or further developed to suit a larger number
of industrial activities. The current design could lead to
completely automated robot grippers able to improve the
efficiency and productivity of industrial robots.
Abstract: The amount and heterogeneity of data in biomedical research, notably in interdisciplinary research, require new methods for the collection, presentation and analysis of information. Important data from laboratory experiments as well as patient trials are available but originate from distributed resources. The Charité Medical School in Berlin has established, together with the German Research Foundation (DFG), a new information service center for kidney diseases and transplantation (Open European Nephrology Science Centre - OpEN.SC). The system is based on a service-oriented architecture (SOA) with main and auxiliary modules arranged in four layers. To improve the reuse and efficient arrangement of the services, the functionalities are described as business processes using the standardised Business Process Execution Language (BPEL).
Abstract: Nowadays, precipitation prediction is required for proper planning and management of water resources. Prediction with neural network models has received increasing interest in various research and application domains. However, it is difficult to determine the best neural network architecture for prediction, since it is not immediately obvious how many input or hidden nodes should be used in the model. In this paper, a neural network model is used as a forecasting tool. The major aim is to evaluate a suitable neural network model for monthly precipitation mapping of Myanmar. Using 3-layered neural network models, 100 cases are tested by varying the number of input and hidden nodes from 1 to 10 each, with a single output node. The optimum model with the suitable number of nodes is selected according to the minimum forecast error. Measuring network performance with the Root Mean Square Error (RMSE), experimental results show that a 3-input, 10-hidden, 1-output architecture gives the best prediction of monthly precipitation in Myanmar.
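The architecture search described here can be sketched as follows: train a 3-layer network for each candidate hidden-layer size and keep the one with the lowest training RMSE. This is a minimal pure-Python illustration with a toy sine series standing in for the rainfall data and assumed training settings, not the authors' implementation:

```python
import math, random

def train_net(series, n_in, n_hidden, epochs=300, lr=0.1, seed=0):
    """Train a 3-layer (input-hidden-output) net on lagged values
    of `series` and return the RMSE on the training set."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b2 = 0.0
    samples = [(series[t - n_in:t], series[t]) for t in range(n_in, len(series))]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, t in samples:
            h = [sig(sum(wi * xi for wi, xi in zip(row, x)) + b)
                 for row, b in zip(w1, b1)]
            y = sum(wj * hj for wj, hj in zip(w2, h)) + b2
            dy = y - t                          # gradient of squared error
            for j in range(n_hidden):
                dh = dy * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * dy * h[j]
                for i in range(n_in):
                    w1[j][i] -= lr * dh * x[i]
                b1[j] -= lr * dh
            b2 -= lr * dy
    err = 0.0
    for x, t in samples:
        h = [sig(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(w1, b1)]
        y = sum(wj * hj for wj, hj in zip(w2, h)) + b2
        err += (y - t) ** 2
    return math.sqrt(err / len(samples))

# Toy "monthly" series scaled to [0, 1] (a stand-in for real rainfall data).
series = [(math.sin(0.5 * n) + 1) / 2 for n in range(120)]
scores = {h: train_net(series, n_in=3, n_hidden=h) for h in (2, 6, 10)}
best = min(scores, key=scores.get)
```

The real study sweeps all 100 (input, hidden) combinations; the selection rule (minimum RMSE) is the same.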
Abstract: Analysis and visualization of microarray data are very helpful for biologists and clinicians in the diagnosis and treatment of patients. They allow clinicians to better understand the structure of microarray data and facilitate understanding of gene expression in cells. However, a microarray dataset is complex, with thousands of features and a very small number of observations. Such very high dimensional data often contain noise, non-useful information and only a small number of features relevant to disease or genotype. This paper proposes a non-linear dimensionality reduction algorithm, Local Principal Component (LPC), which aims to map high dimensional data to a lower dimensional space. The reduced data represent the most important variables underlying the original data. Experimental results and comparisons are presented to show the quality of the proposed algorithm. Moreover, experiments also show how the algorithm reduces high dimensional data whilst preserving the neighbourhoods of points in the low dimensional space as in the high dimensional space.
Abstract: This paper proposes a novel approach to the question of lithofacies classification based on an assessment of the uncertainty in the classification results. The proposed approach combines multiple neural networks (NN) with interval neutrosophic sets (INS) to classify the input well log data into multiple classes of lithofacies. A pair of n-class neural networks is used to predict n degrees of truth membership and n degrees of false membership. Indeterminacy memberships, or uncertainties in the predictions, are estimated using a multidimensional interpolation method. These three memberships form the INS used to support the confidence in the results of multiclass classification. Based on the experimental data, our approach improves the classification performance compared to an existing technique applied only to the truth membership. In addition, our approach has the capability to provide a measure of uncertainty in the problem of multiclass classification.
Abstract: We develop a three-step fuzzy logic-based algorithm for clustering categorical attributes, and we apply it to analyze cultural data. In the first step the algorithm employs an entropy-based clustering scheme, which initializes the cluster centers. In the second step we apply the fuzzy c-modes algorithm to obtain a fuzzy partition of the data set, and the third step introduces a novel cluster validity index, which decides the final number of clusters.
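The second step (the fuzzy c-modes partition) can be sketched as follows. The initial modes are hard-coded here in place of the entropy-based initialization of step one, and the validity index of step three is omitted; the dissimilarity is simple matching, as is usual for categorical attributes:

```python
from collections import defaultdict

def matching_dist(a, b):
    # Simple matching dissimilarity: number of attributes that differ.
    return sum(ai != bi for ai, bi in zip(a, b))

def fuzzy_c_modes(data, modes, m=2.0, iters=10):
    """One possible fuzzy c-modes loop: alternate membership and mode
    updates until the modes stop changing (or `iters` runs out)."""
    for _ in range(iters):
        # Membership update: inverse-distance weighting, exponent 1/(m-1).
        u = []
        for x in data:
            d = [matching_dist(x, z) for z in modes]
            if 0 in d:                      # object coincides with a mode
                u.append([1.0 if di == 0 else 0.0 for di in d])
            else:
                inv = [(1.0 / di) ** (1.0 / (m - 1)) for di in d]
                s = sum(inv)
                u.append([v / s for v in inv])
        # Mode update: per attribute, the category with the largest
        # sum of u^m over the cluster's members.
        new_modes = []
        for j in range(len(modes)):
            mode = []
            for a in range(len(data[0])):
                weight = defaultdict(float)
                for i, x in enumerate(data):
                    weight[x[a]] += u[i][j] ** m
                mode.append(max(weight, key=weight.get))
            new_modes.append(mode)
        if new_modes == modes:
            break
        modes = new_modes
    return modes, u

data = [['a', 'a', 'a'], ['a', 'a', 'b'], ['c', 'c', 'c'], ['c', 'c', 'd']]
modes, u = fuzzy_c_modes(data, modes=[['a', 'a', 'a'], ['c', 'c', 'c']])
labels = [max(range(2), key=lambda j: ui[j]) for ui in u]
```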
Abstract: A local municipality has decided to build a sewage pit
to receive residential sewage waste arriving by tank trucks. Daily
accumulated waste is to be pumped to a nearby wastewater
treatment facility to be reused in agricultural and construction
projects. A discrete-event simulation model using Arena Software
was constructed to assist in defining the capacity of the system in
cubic meters, number of tank trucks to use the system, number of
unload docks required, number of standby areas needed and
manpower required for data collection at entrance checkpoint and
truck tank load toxicity testing. The results of the model are
statistically validated. Simulation turned out to be an excellent
tool in the facility planning effort for the pit project, as it
ensured smooth flow of tank-truck load discharge and the best
utilization of facilities on site.
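The queueing logic such a model captures can be sketched outside Arena with a stdlib event loop; the arrival and unloading rates below are illustrative assumptions, not the study's data:

```python
import heapq, random

def simulate(n_trucks=500, n_docks=2, mean_interarrival=10.0,
             mean_unload=15.0, seed=1):
    """Event-driven sketch of the unloading station: trucks arrive,
    wait for one of `n_docks` docks, unload, and leave."""
    rng = random.Random(seed)
    t = 0.0
    events = []                          # (time, kind, truck_id)
    for i in range(n_trucks):
        t += rng.expovariate(1.0 / mean_interarrival)
        heapq.heappush(events, (t, 0, i))      # kind 0 = arrival
    free_docks = n_docks
    queue = []                           # arrival times of waiting trucks
    waits, busy_time, last_t = [], 0.0, 0.0
    now = 0.0
    while events:
        now, kind, i = heapq.heappop(events)
        busy_time += (n_docks - free_docks) * (now - last_t)
        last_t = now
        if kind == 0:                    # arrival joins the queue
            queue.append(now)
        else:                            # departure frees a dock
            free_docks += 1
        while queue and free_docks > 0:  # start next unloading
            arrived = queue.pop(0)
            waits.append(now - arrived)
            free_docks -= 1
            heapq.heappush(events,
                           (now + rng.expovariate(1.0 / mean_unload), 1, i))
    avg_wait = sum(waits) / len(waits)
    utilization = busy_time / (n_docks * now)
    return avg_wait, utilization

avg_wait, utilization = simulate()
```

Varying `n_docks` and the rates in such a loop mirrors the capacity questions (number of unload docks, standby areas) the Arena model was built to answer.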
Abstract: Laminar natural-convective heat transfer from a
horizontal cylinder is studied by solving the Navier-Stokes and
energy equations using higher order compact scheme in cylindrical
polar coordinates. Results are obtained for Rayleigh numbers of 1,
10, 100 and 1000 for a Prandtl number of 0.7. The local Nusselt
number and mean Nusselt number are calculated and compared with
available experimental and theoretical results. Streamlines,
vorticity contours and isotherms are plotted.
Abstract: In this paper a deterministic polynomial-time
algorithm is presented for the Clique problem. The problem is
recast as that of omitting the minimum number of vertices from
the input graph so that none of the zeroes of the graph's
adjacency matrix (except the main diagonal entries) remains in
the adjacency matrix of the resulting subgraph. The existence of
a deterministic polynomial-time algorithm for the Clique
problem, an NP-complete problem, would prove the equality of the
P and NP complexity classes.
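The reformulation the abstract uses (omit the fewest vertices so that no off-diagonal zero survives, i.e. the remaining subgraph is complete) can be checked on a small graph by brute force. This exponential sketch only verifies the equivalence with maximum clique; it is not the claimed polynomial-time algorithm:

```python
from itertools import combinations

def is_complete(vertices, edges):
    # Every pair of surviving vertices must be adjacent.
    return all((u, v) in edges or (v, u) in edges
               for u, v in combinations(vertices, 2))

def max_clique_size(n, edges):
    # Brute force: largest vertex subset inducing a complete subgraph.
    for k in range(n, 0, -1):
        if any(is_complete(s, edges) for s in combinations(range(n), k)):
            return k
    return 0

def min_deletions_to_complete(n, edges):
    # Fewest vertices to omit so that no non-adjacent pair survives
    # (no off-diagonal zero of the adjacency matrix remains).
    for k in range(0, n + 1):
        for removed in combinations(range(n), k):
            rest = [v for v in range(n) if v not in removed]
            if is_complete(rest, edges):
                return k
    return n

edges = {(0, 1), (0, 2), (1, 2), (0, 3), (1, 4)}  # triangle + two pendants
n = 5
```

On this graph the maximum clique has 3 vertices and exactly n - 3 = 2 deletions suffice, illustrating the equivalence.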
Abstract: The paper is concerned with the existence of solution
of nonlinear second order neutral stochastic differential inclusions
with infinite delay in a Hilbert space. Sufficient conditions for the
existence are obtained by using a fixed point theorem for condensing
maps.
Abstract: In this work, a characterization and modeling of
packet loss of a Voice over Internet Protocol (VoIP) communication
is developed. The distributions of the number of consecutive received
and lost packets (namely gap and burst) are modeled from the
transition probabilities of two-state and four-state models.
Measurements show that both models describe the burst
distribution adequately, but the decay of the gap distribution
for non-homogeneous losses is better fitted by the four-state
model. The respective
probabilities of transition between states for each model were
estimated with a proposed algorithm from a set of monitored VoIP
calls in order to obtain representative minimum, maximum and
average values for both models.
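A minimal simulation of the two-state (Gilbert) model illustrates how the gap and burst distributions arise from the transition probabilities; the values of p and q below are illustrative, not the measured ones:

```python
import random
from collections import Counter

def simulate_gilbert(p, q, n=200_000, seed=7):
    """Two-state (Gilbert) packet-loss model: p = P(good -> bad),
    q = P(bad -> good). Returns the loss rate and the histograms of
    gap (consecutive received) and burst (consecutive lost) lengths."""
    rng = random.Random(seed)
    state = 0                    # 0 = good (received), 1 = bad (lost)
    gaps, bursts = Counter(), Counter()
    run, losses = 0, 0
    for _ in range(n):
        losses += state
        run += 1
        flip = rng.random() < (p if state == 0 else q)
        if flip:                 # run ends: record its length
            (gaps if state == 0 else bursts)[run] += 1
            state, run = 1 - state, 0
    return losses / n, gaps, bursts

loss_rate, gaps, bursts = simulate_gilbert(p=0.05, q=0.4)
mean_burst = sum(k * c for k, c in bursts.items()) / sum(bursts.values())
```

In the stationary regime the loss rate is p/(p+q) and burst lengths are geometric with mean 1/q; the four-state model adds a second pair of states to capture non-homogeneous loss periods.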
Abstract: In this paper an alternative analysis in the time
domain is described and the results of the interpolation process are
presented by means of functions that are based on the rule of
conditional mathematical expectation and the covariance function. A
comparison between the interpolation error caused by low order
filters and the classic sinc(t) truncated function is also presented.
When fewer samples are used, low-order filters have less error. If the
number of samples increases, the sinc(t) type functions are a better
alternative. Generally speaking, there is an optimal filter for each
input signal which depends on the filter length and covariance
function of the signal. A novel scheme of work for adaptive
interpolation filters is also presented.
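The truncated sinc(t) reconstruction that serves as the baseline here can be sketched as follows (illustrative test tone and tap count, not the paper's signals):

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def sinc_interp(samples, t, taps=64):
    """Reconstruct x(t) (t in sample units) from uniformly spaced
    samples using a sinc kernel truncated to `taps` terms centred
    on t. Truncation is what causes the interpolation error."""
    n0 = int(math.floor(t)) - taps // 2 + 1
    total = 0.0
    for n in range(n0, n0 + taps):
        if 0 <= n < len(samples):
            total += samples[n] * sinc(t - n)
    return total

# Band-limited test signal, well below the Nyquist rate.
f = 0.05                                   # cycles per sample
x = [math.sin(2 * math.pi * f * n) for n in range(200)]
est = sinc_interp(x, 100.5)                # worst case: halfway point
true = math.sin(2 * math.pi * f * 100.5)
```

Shortening `taps` increases the truncation error, which is the regime where the low-order filters discussed above become the better alternative.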
Abstract: Viscous heating becomes significant in the high speed
resin coating process of glass fibers for optical fiber manufacturing.
This study focuses on the coating resin flows inside the capillary
coating die of an optical fiber coating applicator, which are
numerically simulated to examine the effects of viscous heating
and the subsequent temperature increase in the coating resin.
Resin flows are driven by the fast
moving glass fiber and the pressurization at the coating die inlet, while
the temperature dependent viscosity of liquid coating resin plays an
important role in the resin flow. It is found that the severe viscous
heating near the coating die wall profoundly alters the radial velocity
profiles and that the increase of final coating thickness by die
pressurization is amplified if viscous heating is present.
Abstract: In this study, the noise characteristics of a
structure were analyzed in an effort to reduce the noise passing
through an opening of an enclosure surrounding the structure
that generates the noise. Enclosures are an essential measure to
prevent noise propagation from operating machinery, and access
openings of enclosures are an important path of noise leakage.
First, the noise characteristics of the structure were analyzed
and feed-forward noise control was performed in simulation in
order to reduce the noise passing through the opening of the
enclosure. We then implemented a feed-forward controller to
actively control the acoustic power through the opening.
Finally, we optimized the placement of the reference sensors for
several numbers of sensors. Good control performance was
achieved using the minimum number of microphones arranged in an
optimal placement.
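The feed-forward control idea can be illustrated with a plain LMS sketch: a reference sensor picks up the noise, an unknown primary path carries it to the opening, and an adaptive filter is tuned to cancel it. The path coefficients below are assumptions, and a real acoustic controller would use FxLMS with a secondary-path model, as noted in the comments:

```python
import random

def lms_cancel(n=6000, taps=8, mu=0.05, seed=3):
    """Feed-forward cancellation sketch: LMS adapts a control filter
    w so that its output cancels the noise arriving at the opening."""
    rng = random.Random(seed)
    path = [0.6, -0.3, 0.2, 0.1]           # unknown primary path (assumed)
    x = [rng.gauss(0, 1) for _ in range(n)]  # reference-sensor signal
    w = [0.0] * taps
    buf = [0.0] * taps
    errors = []
    for k in range(n):
        buf = [x[k]] + buf[:-1]            # most recent reference samples
        d = sum(p * x[k - i] for i, p in enumerate(path) if k - i >= 0)
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = d - y                          # residual at the opening
        errors.append(e)
        for i in range(taps):              # LMS weight update
            w[i] += mu * e * buf[i]
        # (a real ANC system would filter x through a secondary-path
        #  model before the update, i.e. FxLMS)
    first = sum(e * e for e in errors[:500]) / 500
    last = sum(e * e for e in errors[-500:]) / 500
    return first, last

first, last = lms_cancel()
```

The residual power in the final segment drops far below the initial segment, which is the behaviour the simulated feed-forward controller above relies on.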
Abstract: This paper presents an economic game for sybil
detection in a distributed computing environment. Cost parameters
reflecting impacts of different sybil attacks are introduced in the sybil
detection game. The optimal strategies for this game in which both
sybil and non-sybil identities are expected to participate are devised.
A cost sharing economic mechanism called Discriminatory
Rewarding Mechanism for Sybil Detection is proposed based on this
game. A detective accepts a security deposit from each active agent,
negotiates with the agents and offers rewards to the sybils if the latter
disclose their identity. The basic objective of the detective is to
determine the optimum reward amount for each sybil which will
encourage the maximum possible number of sybils to reveal
themselves. Maintaining privacy is an important issue for the
mechanism since the participants involved in the negotiation are
generally reluctant to share their private information. The mechanism
has been applied to Tor by introducing a reputation scoring function.
Abstract: Stable bacterial polymorphism on a single limiting resource may appear if metabolic interactions take place between the evolved strains that allow the exchange of essential nutrients [8]. In an attempt to predict the possible outcome of long-running evolution experiments, a network based on the metabolic capabilities of homogeneous populations of every single-gene knockout strain (nodes) of the bacterium E. coli is reconstructed. Potential metabolic interactions (edges) are allowed only between strains of different metabolic capabilities. Bacterial communities are determined by finding cliques in this network. Growth of the emerging hypothetical bacterial communities is simulated by extending the metabolic flux balance analysis model of Varma et al. [2] to embody heterogeneous cell population growth in a mutual environment. Results from aerobic growth on 10 different carbon sources are presented. The upper bounds of the diversity that can emerge from single-cloned populations of E. coli, such as the number of strains that appear to differ metabolically from most other strains (highly connected nodes), the maximum clique size, as well as the number of all possible communities, are determined. Certain single-gene deletions are identified that consistently participate in our hypothetical bacterial communities under most environmental conditions, implying a pattern of growth-condition-invariant strains with similar metabolic effects. Moreover, evaluation of all the hypothetical bacterial communities under growth on pyruvate reveals heterogeneous populations that can exhibit superior growth performance compared to the homogeneous wild-type population.
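Finding the communities as cliques of the interaction network can be sketched with the classic Bron-Kerbosch enumeration on a toy graph; the edge list below is hypothetical, not the reconstructed E. coli network:

```python
def bron_kerbosch(R, P, X, adj, cliques):
    """Enumerate all maximal cliques (candidate bacterial communities).
    R = current clique, P = candidates, X = already-processed vertices."""
    if not P and not X:
        cliques.append(R)
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P = P - {v}
        X = X | {v}

# Hypothetical strain-interaction graph: an edge means the two knockout
# strains have complementary metabolic capabilities.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (2, 4)]
nodes = range(5)
adj = {v: set() for v in nodes}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

cliques = []
bron_kerbosch(set(), set(nodes), set(), adj, cliques)
max_community = max(cliques, key=len)
```

Each maximal clique is then handed to the extended flux balance model to simulate growth of the corresponding heterogeneous population.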
Abstract: We discuss signal detection through nonlinear
threshold systems. The detection performance is assessed by the
probability of error Per. We establish that: (1) when the signal
is completely suprathreshold, noise always degrades signal
detection, both in the single threshold system and in the
parallel array of threshold devices; (2) when the signal is
slightly subthreshold, noise degrades signal detection in the
single threshold system, but in the parallel array noise can
improve signal detection, i.e., stochastic resonance (SR) exists
in the array; (3) when the signal is predominantly subthreshold,
noise can always improve signal detection, and SR exists not
only in the single threshold system but also in the parallel
array; (4) the array can improve signal detection by raising the
number of threshold devices. These results further extend the
applicability of SR in signal detection.
Abstract: This paper presents a novel scheme which is capable of reducing the error rate and improving the transmission performance of asynchronous cooperative MIMO systems. A case study of image transmission is used to demonstrate the efficiency of the scheme. The linear dispersion structure is employed to accommodate the cooperative wireless communication network in a dynamic topology, as well as to achieve higher throughput than conventional space-time codes based on orthogonal designs. An LDPC encoder free of girth-4 cycles and an STBC encoder with guard intervals are introduced. The experimental results show that the combined LDPC-STBC coder with guard intervals provides good error correction and BER performance in asynchronous cooperative communication. In the image transmission case study, the results show that the image quality obtained with the combined scheme is much better than without it in asynchronous cooperative MIMO systems.
Abstract: As a vital activity for companies, new product
development (NPD) is also a very risky process due to the high
uncertainty degree encountered at every development stage and the
inevitable dependence on how previous steps are successfully
accomplished. Hence, there is an apparent need to evaluate new
product initiatives systematically and make accurate decisions under
uncertainty. Another major concern is the time pressure to launch a
significant number of new products to preserve and increase the
competitive power of the company. In this work, we propose an
integrated decision-making framework based on neural networks and
fuzzy logic to make appropriate decisions and accelerate the
evaluation process. We are especially interested in the two initial
stages where new product ideas are selected (go/no go decision) and
the implementation order of the corresponding projects is
determined. We show that this two-stage intelligent approach allows
practitioners to roughly and quickly separate good and bad product
ideas by making use of previous experiences, and then, analyze a
more shortened list rigorously.
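The go/no-go screening stage might be sketched with a toy two-input fuzzy rule base; the memberships, rules and weights below are illustrative assumptions, not the proposed framework:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def screen_idea(market, feasibility):
    """Toy two-input fuzzy screen (illustrative rule base):
    scores in [0, 10] -> 'go' confidence in [0, 1]."""
    low = lambda x: tri(x, -0.1, 0.0, 5.0)
    high = lambda x: tri(x, 5.0, 10.0, 10.1)
    # Mamdani-style rules: min for AND, singleton rule outputs.
    rules = [
        (min(high(market), high(feasibility)), 1.0),  # both high -> go
        (min(high(market), low(feasibility)), 0.5),   # mixed -> maybe
        (min(low(market), high(feasibility)), 0.5),
        (min(low(market), low(feasibility)), 0.0),    # both low -> no go
    ]
    # Weighted-average defuzzification over the fired rules.
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

strong = screen_idea(9.0, 8.0)
weak = screen_idea(2.0, 3.0)
```

In the proposed framework the first stage additionally draws on neural networks trained on previous projects; the fuzzy aggregation above only illustrates how imprecise scores can be turned into a ranked, shortened list.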