Abstract: The recent global financial crisis urges governments to play
a role in stimulating the economy, since the private sector has little
ability to spend during a recession. A key question is whether
increased government spending crowds out private consumption and
whether it helps stimulate the economy. If the government spending
policy is effective, private consumption is expected to increase and
can compensate for the recent extra government expense. In this study,
government spending is categorized into government consumption
spending and government capital spending. The study first examines
consumer consumption in line with the demand function in
microeconomic theory. Three categories of private consumption are
used in the study: food consumption, non-food consumption, and
services consumption. A dynamic Almost Ideal Demand System of the
three categories of private consumption is estimated using the Vector
Error Correction Mechanism model. The estimated model indicates
substitution effects (negative impacts) of government consumption
spending on the budget share of private non-food consumption and of
government capital spending on the budget share of private food
consumption, respectively. Nevertheless, the result does not
necessarily indicate that the negative effects on the budget shares of
non-food and food consumption mean a fall in total private
consumption. Microeconomic consumer demand analysis clearly
indicates changes in the component structure of aggregate expenditure
in the economy as a result of the government spending policy. The
macroeconomic concept of aggregate demand, comprising
consumption, investment, government spending (government
consumption spending and government capital spending), exports,
and imports, is used to estimate their relationship using the Vector
Error Correction Mechanism model. The macroeconomic study found
no effect of government capital spending on either private
consumption or the growth of GDP, while government consumption
spending has a negative effect on the growth of GDP. Therefore, no
crowding-out effect of government spending on private consumption
is found, but the spending is ineffective, and even inefficient, as it is
found to reduce the growth of GDP in the context of Thailand.
Abstract: NFκB is a transcription factor regulating many
functions of the vessel wall. Under normal conditions, NFκB shows
diffuse cytoplasmic expression, suggesting that the system is
inactive. Activated NFκB provides a potential pathway for the rapid
transcription of a variety of genes encoding cytokines, growth
factors, adhesion molecules and procoagulatory factors. It is likely to
play an important role in chronic inflammatory diseases such as
atherosclerosis. There are many stimuli with the potential to activate
NFκB, including hyperlipidemia. We used 24 mice, which were
divided into 6 groups. The high-fat diet (HFD) was given ad libitum
for 2, 4, and 6 months. The parameters in this study were the amount
of NFκB activation, H2O2 as ROS, and VCAM-1 as a product of
NFκB activation. A colorimetric H2O2 assay was performed directly
using an Anti Rat H2O2 ELISA Kit. NFκB and VCAM-1 were
detected in mouse aorta, measured by ELISA kit and
immunohistochemistry. There were significant differences in H2O2,
NFκB and VCAM-1 levels after HFD induction for 2, 4 and 6
months. This suggests that HFD induces ROS formation and
increases the activation of NFκB, a marker of atherosclerosis caused
by hyperlipidemia, a classical atherosclerosis risk factor.
Abstract: Road signs are the elements of roads with a lot of
influence on drivers' behavior. For signs to fulfill their function, they
must meet visibility and durability requirements, which are
particularly important at night, when the coefficient of retroreflection
becomes a decisive factor in ensuring road safety. Accepting that the
visibility of signage has implications for people's safety, we
understand the importance of its function: to foster the highest
standards of service and safety for drivers. The usual conditions of
perception of any sign are determined by the age of the driver, the
reflective material, luminosity, vehicle speed and emplacement.
Accordingly, this paper evaluates different signs with a view to
increasing road safety.
Abstract: The purpose of this work is to measure the
system presampling MTF of a variable resolution x-ray (VRX) CT
scanner. In this paper, we used the parameters of an actual VRX CT
scanner to simulate and study the effect of different focal spot sizes
on the system presampling MTF by the Monte Carlo method (GATE
simulation software). A focal spot size of 0.6 mm limited the spatial
resolution of the system to 5.5 cy/mm at incident angles below 17°
for cell#1. With a focal spot size of 0.3 mm, the spatial resolution
increased up to 11 cy/mm and the limiting effect of the focal spot size
appeared at incident angles below 9°. The focal spot size of 0.3
mm could improve the spatial resolution to some extent, but because
of magnification non-uniformity there is a 10 cy/mm difference
between the spatial resolution of cell#1 and cell#256. The focal spot
size of 0.1 mm acted as an ideal point source for this system: the
spatial resolution increased to more than 35 cy/mm and was a
function of the incident angle at all incident angles. Moreover, the
focal spot size of 0.1 mm minimized the effect of magnification
non-uniformity.
Abstract: This paper presents the applicability of artificial
neural networks to 24-hour-ahead solar power generation forecasting
for a 20 kW photovoltaic system; the developed forecasting is suitable
for reliable microgrid energy management. In total, four neural
networks were proposed, namely: a multi-layer perceptron, a radial
basis function network, a recurrent network, and a neural network
ensemble consisting of bagged networks. The forecasting reliability of
the proposed neural networks was evaluated in terms of forecasting
error performance based on statistical and graphical methods. The
experimental results showed that all the proposed networks achieved
an acceptable forecasting accuracy. In comparison, the neural
network ensemble gives the highest forecasting precision among the
networks considered. In fact, each network of the ensemble
over-fits to some extent, which leads to a diversity that enhances the
noise tolerance and the forecasting generalization performance
compared to the conventional networks.
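As an illustration of the bagging idea behind the ensemble described above, the following is a minimal sketch, not the authors' implementation: the base learner is a simple least-squares line rather than a neural network, and the data are synthetic. Each base model is trained on a bootstrap resample of the training set, and predictions are averaged.

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept a, slope b

def bagged_ensemble(xs, ys, n_models=25, seed=0):
    """Train base models on bootstrap resamples; predict by averaging."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in xs]  # bootstrap sample
        models.append(fit_line([xs[i] for i in idx], [ys[i] for i in idx]))
    def predict(x):
        return sum(a + b * x for a, b in models) / len(models)
    return predict

# Noisy synthetic data from the line y = 2 + 3x
rng = random.Random(1)
xs = [i / 10 for i in range(50)]
ys = [2 + 3 * x + rng.gauss(0, 0.2) for x in xs]
predict = bagged_ensemble(xs, ys)
print(predict(2.0))  # close to 2 + 3*2 = 8
```

The averaging step is what trades the individual over-fitting of the base models for the improved generalization mentioned in the abstract.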
Abstract: Finding the interpolation function of a given set of nodes is an important problem in scientific computing. In this work, a kind of localization is introduced using radial basis functions, which finds a sufficiently smooth solution without consuming a large amount of time and computer memory. Some examples are presented to show the efficiency of the new method.
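As background for the method above, here is a minimal sketch of plain (global) RBF interpolation with a Gaussian kernel; the node set, shape parameter, and test function are hypothetical, and the localization introduced in the paper is not reproduced. The weights w solve the linear system with entries φ(|x_i − x_j|):

```python
import math

def rbf(r, eps=1.0):
    """Gaussian radial basis function exp(-(eps*r)^2)."""
    return math.exp(-(eps * r) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_interpolant(nodes, values, eps=1.0):
    """Find weights w so that sum_j w_j * rbf(|x - x_j|) matches the values."""
    A = [[rbf(abs(xi - xj), eps) for xj in nodes] for xi in nodes]
    w = solve(A, values)
    return lambda x: sum(wj * rbf(abs(x - xj), eps) for wj, xj in zip(w, nodes))

nodes = [0.0, 0.5, 1.0, 1.5, 2.0]
values = [math.sin(x) for x in nodes]
f = rbf_interpolant(nodes, values)
print(f(0.5))  # reproduces the data at a node: sin(0.5)
```

The cost of solving the dense n-by-n system is exactly what localization techniques, such as the one in the paper, aim to avoid for large node sets.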
Abstract: Within the collaborative research center 666, a new
product development approach and the innovative manufacturing
method of linear flow splitting are being developed. So far, the design
process is supported by 3D-CAD models utilizing User Defined
Features in standard CAD systems. This paper presents new
functions for generating 3D models of integral sheet metal products
with bifurcations using Siemens PLM NX 6. The emphasis is placed
on the design and semi-automated insertion of User Defined Features.
To this end, User Defined Features for both linear flow splitting
and its derivative, linear bend splitting, were developed. In order to
facilitate the modeling process, an application was developed that
guides the user through the insertion process. Its usability and dialog
layout follow known standard features. The work presented here has
significant implications for the quality, accuracy and efficiency of the
product generation process for sheet metal products with higher-order
bifurcations.
Abstract: The energy consumption and delay in the read/write
operations of conventional SRAM are investigated analytically as well
as by simulation. Explicit analytical expressions for the energy
consumption and delay in read and write operations as functions of
device parameters and supply voltage are derived. The expressions are
useful in predicting the effect of parameter changes on the energy
consumption and speed, as well as in optimizing the design of
conventional SRAM. HSPICE simulation in a standard 0.25μm CMOS
technology confirms the precision of the analytical expressions
derived in this paper.
Abstract: Detection of broken bars in squirrel cage induction motors (SCIM) has long been an important but difficult task in the area of motor fault detection. Early detection of this abnormality would help to avoid costly breakdowns. A new detection method based on particle swarm optimization (PSO) is presented in this paper. The stator current of an induction motor is measured, and the characteristic frequency components of a faulted rotor are detected by minimizing a fitness function using PSO. The supply frequency, the sideband frequencies and their amplitudes can be estimated by the proposed method. The proposed method is applied to a faulty motor with one and two broken bars under different loading conditions. Experimental results prove that the proposed method is effective and applicable.
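To illustrate the idea of estimating a frequency component by minimizing a fitness function with PSO, here is a generic sketch, not the authors' implementation: the sampled "stator current" is a clean synthetic sinusoid, and the search bounds and swarm parameters are hypothetical. The fitness is the squared error between the samples and a candidate sinusoid with unknown amplitude and frequency.

```python
import math, random

def pso(fitness, bounds, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization minimizing `fitness` over a box."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Synthetic "stator current": a 50 Hz component of amplitude 1.0
ts = [n / 1000 for n in range(200)]  # 200 samples at 1 kHz
signal = [1.0 * math.sin(2 * math.pi * 50 * t) for t in ts]

def fitness(p):
    amp, freq = p
    return sum((s - amp * math.sin(2 * math.pi * freq * t)) ** 2
               for s, t in zip(signal, ts))

(best_amp, best_freq), err = pso(fitness, [(0.1, 2.0), (40.0, 60.0)])
print(best_amp, best_freq)  # near the true values 1.0 and 50 Hz
```

In the fault-detection setting the candidate model would include the sideband components around the supply frequency as additional parameters.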
Abstract: We study the typical domain size and configuration
character of a randomly perturbed system exhibiting continuous
symmetry breaking. As a model system we use rod-like objects
within a cubic lattice interacting via a Lebwohl–Lasher-type
interaction. We describe their local direction with a headless unit
director field. Examples of such systems are nematic liquid crystals
and nanotubes. We further introduce impurities of concentration p,
which impose random-anisotropy-field-type disorder on the directors.
We study the domain-type pattern of the molecules as a function of p,
the anchoring strength w between a neighboring director and an
impurity, the temperature, and the history of the samples. In the
simulations we quenched the directors from either a random or a
homogeneous initial configuration. Our results show that the history
of the system strongly influences: i) the average domain coherence
length; and ii) the range of ordering in the system. In the random
case the obtained order is always short-range (SR). On the contrary,
in the homogeneous case SR order is obtained only for strong enough
anchoring and large enough concentration p. In the other cases, the
ordering is either quasi-long-range (QLR) or long-range (LR). We
further studied memory effects for the random initial configuration.
With an increasing external ordering field B, either QLR or LR is
realized.
Abstract: In this paper we introduce a novel kernel classifier
based on an iterative shrinkage algorithm developed for compressive
sensing. We have adopted Bregman iteration with soft and hard
shrinkage functions and a generalized hinge loss for solving the
l1-norm minimization problem for classification. Our experimental
results with face recognition and digit classification, using SVM as
the benchmark, have shown that our method has an error rate close to
that of SVM but does not perform better than SVM. We have found
that the soft shrinkage method gives higher accuracy, and in some
situations more sparseness, than hard shrinkage methods.
Abstract: In this paper, we analyze the effect of noise in a single-ended input differential amplifier working at high frequencies. Both extrinsic and intrinsic noise are analyzed using a time-domain method employing techniques from stochastic calculus. Stochastic differential equations are used to obtain autocorrelation functions of the output noise voltage and other solution statistics such as the mean and variance. The analysis leads to important design implications and suggests changes in the device parameters for improved noise characteristics of the differential amplifier.
Abstract: A special case of floating point data representation is the
block floating point format, where a block of operands is forced to have
a joint exponent term. This paper deals with the finite wordlength
properties of this data format. The theoretical errors associated with the
error model for the block floating point quantization process are
investigated with the help of error distribution functions. A fast and easy
approximation formula for calculating the signal-to-noise ratio in
quantization to block floating point format is derived. This representation
is found to be a useful compromise between fixed point and floating
point formats due to its acceptable numerical error properties over a
wide dynamic range.
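As a minimal sketch of the quantization process the error model above refers to (the block values and mantissa width are hypothetical, and rounding to nearest is one simple choice of scheme), a block is scaled by a single joint exponent and each element is rounded to a fixed number of mantissa bits:

```python
import math

def bfp_quantize(block, mantissa_bits=8):
    """Block floating point quantization: one shared exponent for the
    block, signed fixed-point mantissas of `mantissa_bits` bits."""
    peak = max(abs(x) for x in block)
    if peak == 0.0:
        return list(block), 0
    exp = math.ceil(math.log2(peak))      # joint exponent for the block
    scale = 2.0 ** (mantissa_bits - exp)  # LSB weight = 2**(exp - mantissa_bits)
    quantized = [round(x * scale) / scale for x in block]
    return quantized, exp

def snr_db(block, quantized):
    """Signal-to-quantization-noise ratio in dB."""
    sig = sum(x * x for x in block)
    noise = sum((x - q) ** 2 for x, q in zip(block, quantized))
    return 10 * math.log10(sig / noise) if noise else float("inf")

block = [0.9, -0.3, 0.05, 0.42, -0.77]
q, exp = bfp_quantize(block, mantissa_bits=8)
print(exp, snr_db(block, q))
```

Because the exponent is set by the block's peak value, small elements in the same block see a coarser relative quantization step, which is the source of the errors the paper's distribution functions characterize.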
Abstract: Stochastic modeling of network traffic is an area of
significant research activity for current and future broadband
communication networks. Multimedia traffic is statistically
characterized by a bursty variable bit rate (VBR) profile. In this
paper, we develop an improved model for uniform activity level
video sources in ATM using a doubly stochastic autoregressive
model driven by an underlying spatial point process. We then
examine a number of burstiness metrics, such as the peak-to-average
ratio (PAR), the temporal autocovariance function (ACF) and the
traffic measurements histogram. We found that the first of these
measures is the most suitable for capturing the burstiness of
single-scene video traffic. In the last phase of this work, we analyse
the statistical multiplexing of several constant-scene video sources.
As expected, this proved advantageous with respect to reducing the
burstiness of the traffic, as long as the sources are statistically
independent. We observed that the burstiness diminished rapidly,
with the largest gain occurring when only around 5 sources are
multiplexed. The novel model used in this paper for characterizing
uniform activity video was thus found to be an accurate model.
Abstract: An economic criterion is taken as the objective
function in developing a computer program for designing lightning
protection systems for substations using masts and MATLAB in this
work. Masts need to be placed at desired locations; the program then
finds the mast heights whose sum is the smallest, i.e., the
configuration that satisfies the economic criterion. The program helps
engineers quickly design a lightning protection system for a
substation. The methodology and limiting conditions of the program,
as well as an example of the program's results, are described in this
paper.
Abstract: This paper provides a framework to incorporate
reliability issues, as a sign of disruption in distribution systems, and
partial covering theory, as a response to limitations in coverage radii
and economic preferences, simultaneously into the traditional
literature on capacitated facility location problems. As a result, we
develop a bi-objective model based on discrete scenarios for expected
cost minimization and demand coverage maximization through a
three-echelon supply chain network, by facilitating multiple capacity
levels for the provider-side layers and imposing a gradual coverage
function for the distribution centers (DCs). Additionally, besides
aggregating the objectives for solving the model with LINGO
software, a branch of the LP-metric method called the Min-Max
approach is proposed, and different aspects of the corresponding
model are explored.
Abstract: In this paper, the usefulness of a quasi-Newton iteration
procedure for estimating the parameters of the conditional variance
equation within the BHHH algorithm is presented. Analytical
maximization of the likelihood function using first and second
derivatives is too complex when the variance is time-varying. The
advantage of the BHHH algorithm in comparison to other
optimization algorithms is that it requires no third derivatives while
assuring convergence. To simplify the optimization procedure, the
BHHH algorithm uses an approximation of the matrix of second
derivatives according to the information identity. However, parameter
estimation in symmetric and asymmetric GARCH(1,1) models
assuming a normal distribution of returns is not that simple, i.e., it is
difficult to solve analytically. The maximum of the likelihood
function can be found by an iteration procedure that runs until no
further increase can be achieved. Because the solutions of the
numerical optimization are very sensitive to the initial values,
starting parameters for the GARCH(1,1) model are defined. The
number of iterations can be reduced by using starting values close
to the global maximum. The optimization procedure is illustrated in
the framework of modeling the daily volatility of the most liquid
stocks on the Croatian capital market: Podravka stocks (food
industry), Petrokemija stocks (fertilizer industry) and Ericsson Nikola
Tesla stocks (information and communications industry).
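For reference, a minimal sketch of the GARCH(1,1) conditional variance recursion and the negative log-likelihood under normal returns, which any such iteration procedure maximizes, is given below. The simulated return series and parameter values are hypothetical, and the BHHH update itself is not implemented here.

```python
import math, random

def garch11_nll(params, returns):
    """Negative log-likelihood of a GARCH(1,1) model with normal errors:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    omega, alpha, beta = params
    sigma2 = sum(r * r for r in returns) / len(returns)  # start at sample variance
    nll = 0.0
    for r in returns:
        nll += 0.5 * (math.log(2 * math.pi) + math.log(sigma2) + r * r / sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2  # variance recursion
    return nll

# Hypothetical i.i.d. return series (sd 1%) just to exercise the function
rng = random.Random(0)
rets = [rng.gauss(0, 0.01) for _ in range(500)]
print(garch11_nll((1e-5, 0.05, 0.90), rets))
```

An optimizer (quasi-Newton, BHHH, or otherwise) would repeatedly evaluate this function, moving the parameters until no further decrease of the negative log-likelihood is achieved.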
Abstract: In this paper, a predator-prey model with a Holling type III functional response is studied. It is interesting that the system is always uniformly persistent, which yields the existence of at least one positive periodic solution for the corresponding periodic system. The result improves the corresponding ones in [11]. Moreover, an example is given to verify the results by simulation.
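For reference, a Holling type III functional response is commonly written in the standard textbook form below; the symbols m and a are generic and not taken from the paper:

```latex
\varphi(x) = \frac{m x^{2}}{a^{2} + x^{2}}
```

where x is the prey density, m the maximal predation rate, and a the half-saturation constant.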
Abstract: We develop new nonlinear methods of
immunofluorescence analysis for a sensitive technology of the
respiratory burst reaction of DNA fluorescence due to oxidative
activity in peripheral blood neutrophils. Histograms in flow
cytometry experiments represent the fluorescence flash frequency as
a function of fluorescence intensity. We used the Shannon-Weaver
index for the definition of neutrophil biodiversity and the Hurst index
for the definition of fractal correlations in immunofluorescence for
different donors, as the basic quantitative criteria for medical
diagnostics of health status. We analyze the frequencies of flashes,
information, Shannon entropies and their fractals in
immunofluorescence networks under reduction of the histogram
range. We found a number of simple universal correlations for
biodiversity, information and the Hurst index in diagnostics and
classification of pathologies for a wide spectrum of diseases. In
addition, a clear criterion of common immunity and human health
status is determined, in the form of yes/no answers. These answers
are based on peculiarities of the information in immunofluorescence
networks and the biodiversity of neutrophils. Experimental data
analysis has shown the existence of homeostasis for the information
entropy in the oxidative activity of DNA in neutrophil nuclei for all
donors.
Abstract: The inherent complexity in nowadays- business
environments is forcing organizations to be attentive to the dynamics
in several fronts. Therefore, the management of technological
innovation is continually faced with uncertainty about the future.
These issues lead to a need for a systemic perspective, able to analyze
the consequences of interactions between different factors. The field
of technology foresight has proposed methods and tools to deal with
this broader perspective. In an attempt to provide a method to analyze
the complex interactions between events in several areas, departing
from the identification of the most strategic competencies, this paper
presents a methodology based on the Delphi method and Quality
Function Deployment. This methodology is applied in a sheet metal
processing equipment manufacturer, as a case study.