Abstract: Within the collaborative research center 666 a new
product development approach and the innovative manufacturing
method of linear flow splitting are being developed. So far, the design process has been supported by 3D-CAD models utilizing User Defined Features in standard CAD systems. This paper presents new
functions for generating 3D-models of integral sheet metal products with bifurcations using Siemens PLM NX 6. The emphasis is placed
on design and semi-automated insertion of User Defined Features.
To this end, User Defined Features were developed for both linear flow splitting and its derivative, linear bend splitting. To facilitate the modeling process, an application was developed that guides the user through the insertion process; its usability and dialog layout follow those of familiar standard features. The work presented here has significant implications for the quality, accuracy and efficiency of the product generation process for sheet metal products with higher-order bifurcations.
Abstract: The energy consumption and delay in the read/write operations of conventional SRAM are investigated analytically as well
as by simulation. Explicit analytical expressions for the energy
consumption and delay in read and write operation as a function of
device parameters and supply voltage are derived. The expressions are
useful in predicting the effect of parameter changes on the energy
consumption and speed as well as in optimizing the design of
conventional SRAM. HSPICE simulations in a standard 0.25 μm CMOS technology confirm the accuracy of the analytical expressions derived in this paper.
Abstract: Detection of broken bars in squirrel cage induction motors (SCIMs) has long been an important but difficult task in the field of motor fault detection. Early detection of this abnormality would help to avoid costly breakdowns. A new detection method based on particle swarm optimization (PSO) is presented in this paper. The stator current of the induction motor is measured, and the characteristic frequency components of the faulted rotor are detected by minimizing a fitness function using PSO. The supply frequency, the sideband frequencies and their amplitudes can be estimated by the proposed method. The proposed method is applied to a faulty motor with one and two broken bars under different loading conditions. Experimental results prove that the proposed method is effective and applicable.
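The fitness-minimization step can be sketched roughly as follows. This is a hedged illustration, not the authors' implementation: the current model (a supply component plus two sidebands), the parameter bounds, and the PSO hyperparameters are all assumptions.

```python
import numpy as np

def pso_minimize(fitness, bounds, n_particles=40, n_iter=300,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer; returns the best parameters found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Hypothetical stator-current model: supply component at fs plus two
# sidebands at fs +/- f_sb (the broken-bar signature); all values illustrative.
t = np.linspace(0.0, 0.5, 2000)
def model(p):
    fs, a0, f_sb, a_sb = p
    return (a0 * np.cos(2 * np.pi * fs * t)
            + a_sb * np.cos(2 * np.pi * (fs - f_sb) * t)
            + a_sb * np.cos(2 * np.pi * (fs + f_sb) * t))

measured = model([50.0, 1.0, 2.0, 0.05])   # synthetic "measurement"
fitness = lambda p: np.mean((model(p) - measured) ** 2)
best = pso_minimize(fitness, [(45, 55), (0.5, 1.5), (0.5, 5), (0.0, 0.2)])
```

The swarm searches the supply frequency, sideband offset and amplitudes jointly; with real data the fitness would compare against the measured current samples instead of a synthetic signal.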
Abstract: We study the typical domain size and configuration
character of a randomly perturbed system exhibiting continuous
symmetry breaking. As a model system we use rod-like objects
within a cubic lattice interacting via a Lebwohl–Lasher-type
interaction. We describe their local direction with a headless unit
director field. Examples of such systems are nematic liquid crystals (LCs) and nanotubes. We further introduce impurities of concentration p, which impose random-anisotropy-field-type disorder on the directors. We study the domain-type pattern of molecules as a function of p, the anchoring strength w between a neighboring director and an impurity, temperature, and sample history. In simulations we quenched the
directors either from the random or homogeneous initial
configuration. Our results show that the history of the system strongly influences: i) the average domain coherence length; and ii) the range of ordering in the system. In the random case the resulting order is always short-ranged (SR). By contrast, in the homogeneous case,
SR is obtained only for strong enough anchoring and large enough
concentration p. In other cases, the ordering is either of quasi long
range (QLR) or of long range (LR). We further studied memory
effects for the random initial configuration. With increasing external
ordering field B either QLR or LR is realized.
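The Lebwohl–Lasher-type interaction on a cubic lattice can be sketched as follows. This is a minimal illustration assuming periodic boundary conditions and unit coupling, not the simulation code used in the study.

```python
import numpy as np

def lebwohl_lasher_energy(directors, eps=1.0):
    """Nearest-neighbour Lebwohl-Lasher energy on a cubic lattice with
    periodic boundaries: E = -eps * sum_<ij> P2(n_i . n_j), where the
    directors n are headless unit vectors stored as an (L, L, L, 3) array."""
    E = 0.0
    for axis in range(3):
        # Dot product of each director with its neighbour along this axis.
        dot = np.sum(directors * np.roll(directors, -1, axis=axis), axis=-1)
        E -= eps * np.sum(0.5 * (3.0 * dot ** 2 - 1.0))  # second Legendre polynomial
    return E

# Fully aligned configuration: every bond contributes P2(1) = 1,
# so the total energy is -3 * L**3 (three bonds per site).
L = 4
aligned = np.zeros((L, L, L, 3))
aligned[..., 2] = 1.0
E_aligned = lebwohl_lasher_energy(aligned)
```

Because the energy depends on the squared dot product, flipping any director's sign leaves it unchanged, which is what makes the directors headless.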
Abstract: In this paper we introduce a novel kernel classifier based on an iterative shrinkage algorithm developed for compressive sensing. We have adopted Bregman iteration with soft and hard shrinkage functions and a generalized hinge loss for solving the l1-norm minimization problem for classification. Our experimental results on face recognition and digit classification, using SVM as the benchmark, show that our method achieves an error rate close to that of SVM but does not outperform it. We have found that the soft shrinkage method gives higher accuracy and, in some situations, more sparsity than the hard shrinkage method.
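The soft and hard shrinkage operators at the heart of such iterative schemes can be sketched as below. The loop is a generic l1-regularized least-squares iteration (ISTA-style) on assumed synthetic data, shown as a stand-in for the paper's Bregman/hinge-loss classifier, which is not reproduced here.

```python
import numpy as np

def soft_shrink(x, lam):
    # Soft-thresholding: proximal operator of lam * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_shrink(x, lam):
    # Hard-thresholding: keep entries whose magnitude exceeds lam.
    return np.where(np.abs(x) > lam, x, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative shrinkage for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_shrink(x - A.T @ (A @ x - b) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -2.0]               # sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
```

Soft shrinkage moves every surviving coefficient toward zero by lam (more sparsity-inducing), while hard shrinkage keeps survivors untouched; this difference underlies the accuracy/sparsity trade-off the abstract reports.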
Abstract: In this paper, we analyze the effect of noise in a single-ended input differential amplifier working at high frequencies. Both extrinsic and intrinsic noise are analyzed using a time-domain method employing techniques from stochastic calculus. Stochastic differential equations are used to obtain autocorrelation functions of the output noise voltage and other solution statistics such as the mean and variance. The analysis leads to important design implications and suggests changes in the device parameters for improved noise characteristics of the differential amplifier.
Abstract: A special case of floating point data representation is the block floating point format, in which a block of operands shares a joint exponent term. This paper deals with the finite wordlength properties of this data format. The theoretical errors associated with the error model for the block floating point quantization process are investigated with the help of error distribution functions. A fast and easy approximation formula for calculating the signal-to-noise ratio of quantization to block floating point format is derived.
This representation is found to be a useful compromise between fixed point
and floating point format due to its acceptable numerical error properties over
a wide dynamic range.
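The quantization process being modeled can be sketched as follows. This is a minimal illustration (one shared exponent per block chosen to cover the block peak, rounded fixed-point mantissas), not the paper's exact error model or SNR formula.

```python
import numpy as np

def quantize_bfp(block, mantissa_bits):
    """Quantize a block to block floating point: one shared exponent
    covering the block peak, fixed-point mantissas with rounding."""
    peak = np.max(np.abs(block))
    if peak == 0.0:
        return np.zeros_like(block)
    shared_exp = np.ceil(np.log2(peak))                # joint exponent term
    scale = 2.0 ** (mantissa_bits - 1) / 2.0 ** shared_exp
    m_max = 2 ** (mantissa_bits - 1)
    mant = np.clip(np.round(block * scale), -m_max, m_max - 1)
    return mant / scale

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
xq = quantize_bfp(x, mantissa_bits=12)
# Empirical quantization SNR in dB for this block.
snr_db = 10 * np.log10(np.sum(x ** 2) / np.sum((x - xq) ** 2))
```

Because the exponent is shared, small samples in a block with a large peak lose precision relative to true floating point, while the shared exponent still tracks the block's dynamic range better than fixed point; this is the compromise the abstract describes.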
Abstract: Stochastic modeling of network traffic is an area of
significant research activity for current and future broadband
communication networks. Multimedia traffic is statistically
characterized by a bursty variable bit rate (VBR) profile. In this
paper, we develop an improved model for uniform activity level
video sources in ATM using a doubly stochastic autoregressive
model driven by an underlying spatial point process. We then
examine a number of burstiness metrics such as the peak-to-average
ratio (PAR), the temporal autocovariance function (ACF) and the
traffic measurements histogram. We found that the first of these measures is most suitable for capturing the burstiness of single-scene video traffic. In the last phase of this work, we analyse statistical multiplexing of several constant-scene video sources. As expected, this proved advantageous in reducing the burstiness of the traffic, as long as the sources are statistically independent. We observed that the burstiness diminished rapidly, with the largest gain occurring when only around 5 sources are multiplexed. The novel model used in this paper for characterizing uniform activity video was thus found to be accurate.
Abstract: In this work, an economic criterion is taken as the objective function in developing a Matlab program for designing mast-based lightning protection systems for substations. Masts are placed at desired locations; the program then determines the mast heights whose sum is the smallest, i.e. that satisfies the economic criterion. The program helps engineers to quickly design a lightning protection system for a substation. The methodology and limiting conditions of the program, as well as an example of its results, are described in this paper.
Abstract: This paper provides a framework that simultaneously incorporates reliability, as a sign of disruption in distribution systems, and partial covering theory, as a response to limited coverage radii and economic preferences, into the traditional literature on capacitated facility location problems. As a result, we develop a bi-objective model, based on discrete scenarios, for expected cost minimization and demand coverage maximization over a three-echelon supply chain network, allowing multiple capacity levels for the provider-side layers and imposing a gradual coverage function for the distribution centers (DCs). In addition to aggregating the objectives to solve the model with the LINGO software, a variant of the LP-metric method called the min-max approach is proposed, and different aspects of the corresponding model are explored.
Abstract: Grey mold on grape is caused by the fungus Botrytis
cinerea Pers. Trichodex WP, a new biofungicide containing fungal spores of Trichoderma harzianum Rifai, was used for biological control of grey mold on grape. The efficacy of Trichodex WP has been reported from many experiments. Experiments were carried out at the locality of Banatski Karlovac on the grapevine variety talijanski rizling (Italian Riesling). The trials were set up according to the instructions of methods PP1/152(2) and PP1/17(3), following a fully randomized block design. Phytotoxicity was estimated by PP method 1/135(2), the intensity of infection according to Townsend–Heuberger, the efficiency by Abbott, and the analysis of variance with Duncan's test and PP/181(2). Application of Trichodex WP is limited to the first two
treatments. The other treatments are performed with fungicides based on the active ingredients procymidone, vinclozolin and iprodione.
Abstract: In this paper, the usefulness of a quasi-Newton iteration procedure for parameter estimation of the conditional variance equation within the BHHH algorithm is presented. Analytical maximization of the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm in comparison to other optimization algorithms is that it requires no third derivatives while still assuring convergence. To simplify the optimization procedure, the BHHH algorithm approximates the matrix of second derivatives according to the information identity. However, parameter estimation in symmetric and asymmetric GARCH(1,1) models assuming normally distributed returns is not that simple, i.e. it is difficult to solve analytically. The maximum of the likelihood function can be found by an iteration procedure, continued until no further increase can be found. Because the
solutions of the numerical optimization are very sensitive to the
initial values, GARCH(1,1) model starting parameters are defined.
The number of iterations can be reduced using starting values close
to the global maximum. The optimization procedure is illustrated in the framework of modeling the daily volatility of the most liquid stocks on the Croatian capital market: Podravka (food industry), Petrokemija (fertilizer industry) and Ericsson Nikola Tesla (information and communications industry).
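The likelihood being maximized can be sketched as follows. This is a minimal Gaussian GARCH(1,1) negative log-likelihood evaluated on simulated data; the starting variance is set to the sample variance as one common choice, and the simulation parameters are assumptions, not the paper's estimates.

```python
import numpy as np

def garch11_neg_loglik(params, returns):
    """Negative Gaussian log-likelihood of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    omega, alpha, beta = params
    r2 = returns ** 2
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()            # common choice of starting variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * r2[t - 1] + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r2 / sigma2)

# Simulated returns from a GARCH(1,1) process with assumed parameters.
rng = np.random.default_rng(7)
n, omega, alpha, beta = 2000, 0.1, 0.1, 0.8
r = np.empty(n)
s2 = omega / (1 - alpha - beta)          # unconditional variance as the seed
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

nll_true = garch11_neg_loglik(np.array([0.1, 0.1, 0.8]), r)
nll_flat = garch11_neg_loglik(np.array([5.0, 0.0, 0.0]), r)  # misspecified constant variance
```

An optimizer such as BHHH would iterate on this function's parameters, approximating the Hessian by the outer product of the per-observation score vectors; good starting values near the maximum reduce the number of iterations, as the abstract notes.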
Abstract: In this paper, a predator-prey model with a Holling type III functional response is studied. It is interesting that the system is always uniformly persistent, which yields the existence of at least one positive periodic solution for the corresponding periodic system. The result improves the corresponding ones in [11]. Moreover, an example is given to verify the results by simulation.
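For reference, a commonly used form of the Holling type III functional response, and a predator-prey system built on it, are shown below. The paper's specific system is not reproduced here, so the symbols (r, b, m, a, c, d) are illustrative rather than the authors' notation.

```latex
% Holling type III functional response (illustrative standard form)
p(x) = \frac{m x^2}{a^2 + x^2}

% A typical predator-prey system using it, with prey density x and predator density y:
\dot{x} = x\,(r - b x) - \frac{m x^2}{a^2 + x^2}\, y, \qquad
\dot{y} = y\left(\frac{c\, m x^2}{a^2 + x^2} - d\right)
```

The sigmoidal shape of p(x) (predation is weak at low prey density) is what distinguishes type III from the Holling type II response.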
Abstract: We develop new nonlinear methods of immunofluorescence analysis for a sensitive assay of the respiratory burst reaction of DNA fluorescence due to oxidative activity in peripheral blood neutrophils. Histograms in flow cytometry experiments represent fluorescence flash frequencies as
functions of fluorescence intensity. We used the Shannon–Weaver index for quantifying neutrophil biodiversity and the Hurst index for quantifying fractal correlations in immunofluorescence for different donors, as the basic quantitative criteria for medical diagnostics of health status. We analyze frequencies of flashes, information, Shannon entropies and their fractals in immunofluorescence networks under reduction of the histogram range. We found a number of simple universal correlations for biodiversity, information and the Hurst index in the diagnostics and classification of pathologies for wide spectra of diseases. In addition, a clear criterion of common immunity and human health status is determined in the form of yes/no answers. These answers are based on peculiarities of information in immunofluorescence networks and the biodiversity of neutrophils. Experimental data analysis has shown the
existence of homeostasis for information entropy in oxidative activity
of DNA in neutrophil nuclei for all donors.
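The two indices can be sketched as follows. These are minimal textbook implementations (a histogram-based Shannon–Weaver index and a rescaled-range Hurst estimate), not the authors' exact procedures; the block sizes in the R/S estimate are an assumption.

```python
import numpy as np

def shannon_index(counts):
    """Shannon-Weaver diversity index H = -sum p_i ln p_i over histogram bins."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def hurst_rs(x):
    """Simple rescaled-range (R/S) estimate of the Hurst exponent."""
    x = np.asarray(x, dtype=float)
    rs, ns = [], []
    for k in (1, 2, 4, 8, 16):
        n = len(x) // k
        if n < 8:
            continue
        chunks = x[: (len(x) // n) * n].reshape(-1, n)
        # Range of the cumulative mean-adjusted series, per chunk.
        y = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = y.max(axis=1) - y.min(axis=1)
        s = chunks.std(axis=1)
        good = s > 0
        rs.append(np.mean(r[good] / s[good]))
        ns.append(n)
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)  # log(R/S) ~ H * log(n)
    return slope

rng = np.random.default_rng(3)
H_white = hurst_rs(rng.standard_normal(4096))  # white noise: H near 0.5
```

For a histogram of flash frequencies, a uniform spread over bins maximizes the Shannon index, while H deviating from 0.5 signals persistent or anti-persistent (fractal) correlations.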
Abstract: The inherent complexity of today's business environments is forcing organizations to be attentive to dynamics on several fronts. Therefore, the management of technological
innovation is continually faced with uncertainty about the future.
These issues lead to a need for a systemic perspective, able to analyze
the consequences of interactions between different factors. The field
of technology foresight has proposed methods and tools to deal with
this broader perspective. In an attempt to provide a method to analyze
the complex interactions between events in several areas, departing
from the identification of the most strategic competencies, this paper
presents a methodology based on the Delphi method and Quality
Function Deployment. This methodology is applied in a sheet metal
processing equipment manufacturer, as a case study.
Abstract: This paper presents a numerical analysis of the
performance of a three-bladed Darrieus vertical-axis wind turbine
based on the DU91-W2-250 airfoil. A complete campaign of 2-D
simulations, performed for several values of tip speed ratio and based
on RANS unsteady calculations, has been performed to obtain the
rotor torque and power curves. Rotor performance has been compared with the results of a previous work based on the use of the
NACA 0021 airfoil. Both the power coefficient and the torque
coefficient have been determined as a function of the tip speed ratio.
The flow field around rotor blades has also been analyzed. As a final
result, the performance of the DU airfoil based rotor appears to be
lower than that of the rotor based on the NACA 0021 blade section. This behavior could be due to the better stall characteristics of the NACA profile, the separation zone at the trailing edge being more extended for the DU airfoil.
Abstract: In this paper we propose a Lagrangian method to solve the unsteady gas equation, a nonlinear ordinary differential equation on a semi-infinite interval. This approach is based on modified generalized Laguerre functions. The method reduces the solution of this problem to the solution of a system of algebraic equations. We also compare this work with some other numerical results. The findings show that the present solution is highly accurate.
Abstract: In this paper, a fiber based Fabry-Perot interferometer
is proposed and demonstrated for a non-contact displacement
measurement. A micro-prism attached to the mechanical vibrator serves as the target reflector. The interference signal is generated from the superposition of the sensing beam
and the reference beam within the sensing arm of the fiber sensor.
This signal is then converted to a displacement value, with a resolution of λ/8, by a program developed in visual Cµ. A classical function generator is used to drive the vibrator. Fixing the excitation frequency at 100 Hz and varying the excitation amplitude over the range 0.1–3 V, the displacements measured by the fiber sensor range from 1.55 μm to 30.225 μm. A reference displacement sensor with a sensitivity of ~0.4 μm is also employed to compare the displacement errors between the two sensors. We found that over the entire displacement range, maximum and average measurement errors of 0.977% and 0.44%, respectively, are obtained.
Abstract: With optimized bandwidth and latency discrepancy ratios, Node Gain Scores (NGSs) are determined and used as a basis for shaping the max-heap overlay. The NGSs, determined as the respective bandwidth-latency products, govern the construction of max-heap-form overlays. Each NGS is earned as a synergy of the discrepancy ratio of the requested bandwidth with respect to the estimated available bandwidth, and the latency discrepancy ratio between the node and the source node. The tree leads to enhanced-delivery overlay multicasting, increasing packet delivery which could otherwise be hindered by the induced packet loss occurring in schemes that do not consider the synergy of these parameters when placing nodes on the overlays. The NGS is a function of four main parameters: the estimated available bandwidth, Ba; the individual node's requested bandwidth, Br; the proposed node latency to its prospective parent, Lp; and the suggested best latency as advised by the source node, Lb. The bandwidth discrepancy ratio (BDR) and latency discrepancy ratio (LDR) carry weights of α and (1,000 − α), respectively, with α arbitrarily chosen between 0 and 1,000 to ensure that the NGS values, used as node IDs, maintain a good possibility of uniqueness and a balance between the BDR and the LDR. A max-heap-form tree is constructed under the assumption that all nodes possess an NGS less than that of the source node. To maintain load balance, the children of each level's siblings are evenly distributed, such that a node cannot accept a second child until all of its siblings able to do so have already acquired the same number of children; this is done logically from left to right in a conceptual overlay tree. Records of the pair-wise approximate available bandwidths, as measured by a pathChirp scheme at individual nodes, are maintained.
Evaluations comparing the scheme to other schemes, namely Bandwidth Aware multicaSt architecturE (BASE), Tree Building Control Protocol (TBCP), and Host Multicast Tree Protocol (HMTP), have been conducted. The new scheme generally performs better in terms of the trade-off between packet delivery ratio, link stress, control overhead, and end-to-end delays.
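One plausible reading of the NGS computation can be sketched as follows. The abstract names the four inputs and the α/(1,000 − α) weighting but not an exact formula, so the ratio definitions and their combination below are assumptions, labeled as such.

```python
def node_gain_score(Ba, Br, Lp, Lb, alpha=500):
    """Hypothetical NGS: weighted synergy of the bandwidth discrepancy
    ratio (BDR = Ba / Br) and the latency discrepancy ratio (LDR = Lb / Lp),
    with weights alpha and (1000 - alpha). The exact formula in the paper
    may differ; this is an illustrative reading of the abstract."""
    bdr = Ba / Br   # > 1 when estimated available bandwidth exceeds the request
    ldr = Lb / Lp   # > 1 when the proposed latency beats the suggested best
    return alpha * bdr + (1000 - alpha) * ldr

# A node with more spare bandwidth and lower latency earns a higher score,
# so it is placed nearer the root of the max-heap overlay.
high = node_gain_score(Ba=10.0, Br=5.0, Lp=10.0, Lb=20.0)
low = node_gain_score(Ba=5.0, Br=5.0, Lp=40.0, Lb=20.0)
```

Nodes would then be inserted into the max-heap in descending NGS order, with children filled level by level, left to right, to preserve the even distribution described above.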
Abstract: In this work, I present a review on Sparse Distributed
Memory for Small Cues (SDMSCue), a variant of Sparse Distributed
Memory (SDM) that is capable of handling small cues. I then conduct
and show some cognitive experiments on SDMSCue to test its
cognitive soundness compared to SDM. Small cues refer to input cues that are presented to memory for reading associations but have many missing parts or fields. The original SDM failed to handle such cues; SDMSCue overcomes this pitfall. The main idea in SDMSCue is the repeated projection of the semantic space onto smaller subspaces that are selected based on the input cue length and pattern. This process allows Read/Write operations using an input cue that is missing a large portion.
SDMSCue is augmented with the use of genetic algorithms for
memory allocation and initialization. I claim that SDM functionality
is a subset of SDMSCue functionality.