Abstract: Key performance indicators (KPIs) are used for post-result
evaluation in the construction industry, and they normally do
not have provisions for changes. This paper proposes a set of
dynamic key performance indicators (d-KPIs) which predict the
future performance of the activity being measured and present the
opportunity to change practice accordingly. Critical to the
predictability of a construction project is the ability to achieve
automated data collection. This paper proposes an effective way to
collect the process and engineering management data from an
integrated construction management system. The d-KPI matrix,
consisting of various indicators under seven categories, developed
from this study can be applied to close monitoring of the
development projects of aged-care facilities. The d-KPI matrix also
enables performance measurement and comparison at both project
and organization levels.
Abstract: In this paper, an improvement of the PDLZW implementation
with a new dictionary-updating technique is proposed. A
single dictionary is partitioned into hierarchical variable word-width
dictionaries, which allows the dictionaries to be searched in parallel.
Moreover, a barrel shifter is adopted for loading a new input string
into the shift register in order to achieve higher speed. However,
the original PDLZW uses a simple FIFO update strategy, which is
not efficient. Therefore, a new window-based updating technique
is implemented to better distinguish how often each particular
address in the window is referenced. A freezing policy is applied
to the most frequently referenced address, which is not updated
until all the other addresses in the window reach the same
priority. This guarantees that the more frequently referenced
addresses are not overwritten before their turn. This updating
policy improves the compression efficiency of the proposed
algorithm while keeping the architecture low in complexity and
easy to implement.
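As a rough illustration of the window-based policy with freezing, the sketch below is one interpretation (the abstract does not give the exact mechanics): reference counts are tracked per dictionary address in the current window, and the most-referenced address is never chosen as a replacement victim until all counts in the window are equal.

```python
# Illustrative sketch (not the authors' exact design) of a window-based
# dictionary-update policy with a "freezing" rule: within the current
# window, the most-referenced address is frozen and is not chosen as a
# replacement victim until every other address in the window has
# accumulated the same reference count (priority).

class WindowUpdater:
    def __init__(self, window_addresses):
        # reference counts for each dictionary address in the window
        self.counts = {addr: 0 for addr in window_addresses}

    def reference(self, addr):
        # called whenever a dictionary address produces a match
        self.counts[addr] += 1

    def victim(self):
        # address to overwrite with the next new string
        most = max(self.counts.values())
        least = min(self.counts.values())
        if most == least:
            # all addresses have equal priority: the freeze is lifted
            # and any address (here, the first) may be replaced
            return next(iter(self.counts))
        # otherwise replace the least-referenced address, never the
        # frozen (most-referenced) one
        return min(self.counts, key=self.counts.get)

u = WindowUpdater([0, 1, 2, 3])
for _ in range(5):
    u.reference(2)   # address 2 becomes the frozen, most-referenced one
u.reference(1)
# the victim is drawn from the least-referenced addresses, never 2
```

The rule naturally generalizes to deeper priority classes, but this two-level form already captures the guarantee stated above: frequently referenced addresses survive until the rest of the window catches up.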
Abstract: In this paper, we analyze the effect of noise in a single-ended input differential amplifier working at high frequencies. Both extrinsic and intrinsic noise are analyzed in the time domain using techniques from stochastic calculus. Stochastic differential equations are used to obtain autocorrelation functions of the output noise voltage and other solution statistics such as the mean and variance. The analysis leads to important design implications and suggests changes in the device parameters for improved noise characteristics of the differential amplifier.
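As a minimal illustration of this time-domain approach, the sketch below simulates an Ornstein-Uhlenbeck process, a standard stochastic-calculus model for white noise filtered by a single pole, and checks its sample variance against the closed-form stationary statistic. This is a generic SDE example, not the paper's amplifier equations, and all parameter values are illustrative.

```python
import math
import random

# Ornstein-Uhlenbeck process dX = -(X/tau) dt + sigma dW: a standard model
# for white noise shaped by a single-pole filter, as arises in elementary
# noise analysis of an amplifier node. Its stationary statistics are known
# in closed form: mean 0, variance sigma^2 * tau / 2, and autocorrelation
# R(s) = (sigma^2 * tau / 2) * exp(-|s| / tau).
# Parameter values below are illustrative, not taken from the paper.

def simulate_ou(tau=1.0, sigma=1.0, dt=0.1, n=200_000, seed=1):
    rng = random.Random(seed)
    a = math.exp(-dt / tau)                  # exact one-step decay factor
    var_stat = sigma * sigma * tau / 2.0     # stationary variance
    b = math.sqrt(var_stat * (1.0 - a * a))  # exact one-step noise gain
    x, xs = 0.0, []
    for _ in range(n):
        x = a * x + b * rng.gauss(0.0, 1.0)  # exact discretization of OU
        xs.append(x)
    return xs, var_stat

xs, var_stat = simulate_ou()
sample_var = sum(v * v for v in xs) / len(xs)
# sample_var should approach the closed-form value sigma^2 * tau / 2
```

The exact one-step discretization avoids the bias an Euler-Maruyama scheme would introduce, so the simulated variance can be compared directly with the analytical result.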
Abstract: Stochastic modeling of network traffic is an area of
significant research activity for current and future broadband
communication networks. Multimedia traffic is statistically
characterized by a bursty variable bit rate (VBR) profile. In this
paper, we develop an improved model for uniform activity level
video sources in ATM using a doubly stochastic autoregressive
model driven by an underlying spatial point process. We then
examine a number of burstiness metrics such as the peak-to-average
ratio (PAR), the temporal autocovariance function (ACF) and the
traffic measurements histogram. We found that the first of these
measures is the most suitable for capturing the burstiness of
single-scene video traffic. In the last phase of this work, we
analyze the statistical multiplexing of several constant-scene
video sources. This proved, as expected, to be advantageous in
reducing the burstiness of the traffic, as long as the sources are
statistically independent. We observed that the burstiness
diminished rapidly, with the largest gain occurring when only
around 5 sources are multiplexed. The novel model used in this
paper for characterizing uniform-activity video was thus found to
be accurate.
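The burstiness metrics examined above can be sketched on a synthetic trace; the AR(1) source below is a simplified stand-in for the doubly stochastic autoregressive model, with illustrative parameters.

```python
import random

# Sketch of two burstiness metrics named in the abstract, the
# peak-to-average ratio (PAR) and the autocovariance function (ACF),
# computed on a synthetic first-order autoregressive bit-rate trace.
# The AR(1) coefficient, mean rate and noise level are placeholders.

def ar1_trace(n=10_000, phi=0.9, mean=10.0, seed=7):
    rng = random.Random(seed)
    x, out = mean, []
    for _ in range(n):
        x = mean + phi * (x - mean) + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def par(trace):
    # peak-to-average ratio of the bit rate
    return max(trace) / (sum(trace) / len(trace))

def autocovariance(trace, lag):
    m = sum(trace) / len(trace)
    n = len(trace) - lag
    return sum((trace[i] - m) * (trace[i + lag] - m) for i in range(n)) / n

trace = ar1_trace()
# PAR >= 1 by construction; the ACF at lag 0 is the sample variance and
# decays roughly as phi**lag for an AR(1) source
```

For a positive-rate source the PAR is at least 1, and the rate at which the ACF decays with lag is what distinguishes a bursty source from a smooth one.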
Abstract: Grey mold on grape is caused by the fungus Botrytis
cinerea Pers. Trichodex WP, a new biofungicide containing fungal
spores of Trichoderma harzianum Rifai, was used for the biological
control of grey mold on grape. The efficacy of Trichodex WP has
been reported from many experiments. Experiments were carried out
in the locality of Banatski Karlovac, on the grapevine variety
Italian Riesling (talijanski rizling). The trials were set up
according to the instructions of methods PP1/152(2) and PP1/17(3),
following a fully randomized block design. Phytotoxicity was
estimated by PP method 1/135(2), the intensity of infection
according to Townsend-Heuberger, the efficacy according to Abbott,
and the analysis of variance with Duncan's test and PP/181(2).
Application of Trichodex WP is limited to the first two
treatments. Other treatments are performed with the fungicides based
on a.i. procymidone, vinclozoline and iprodione.
Abstract: In this paper, the usefulness of a quasi-Newton
iteration procedure for parameter estimation of the conditional
variance equation within the BHHH algorithm is presented. The
analytical maximization of the likelihood function using first and
second derivatives is too complex when the variance is
time-varying. The advantage of the BHHH algorithm over other
optimization algorithms is that it requires no third derivatives
while still assuring convergence. To simplify the optimization
procedure, the BHHH algorithm approximates the matrix of second
derivatives using the information identity. However, parameter
estimation in symmetric and asymmetric GARCH(1,1) models assuming
normally distributed returns is not straightforward, i.e. it is
difficult to solve analytically. The maximum of the likelihood
function can be found by iterating until no further increase is
obtained. Because the
solutions of the numerical optimization are very sensitive to the
initial values, GARCH(1,1) model starting parameters are defined.
The number of iterations can be reduced using starting values close
to the global maximum. The optimization procedure is illustrated
in the framework of modeling the daily volatility of the most
liquid stocks on the Croatian capital market: Podravka (food
industry), Petrokemija (fertilizer industry) and Ericsson Nikola
Tesla (information and communications industry).
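The objective that the BHHH iteration climbs can be sketched as the Gaussian GARCH(1,1) log-likelihood built from the conditional-variance recursion h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}. The toy return series and starting parameters below are illustrative, and the BHHH step itself (which uses outer products of per-observation scores) is omitted.

```python
import math

# Gaussian log-likelihood of a GARCH(1,1) model, the function the BHHH
# iteration maximizes. The recursion starts from the unconditional
# variance omega / (1 - alpha - beta). The returns below are a toy
# series; omega, alpha, beta are illustrative starting values of the
# kind the abstract recommends choosing near the maximum to reduce the
# number of iterations.

def garch11_loglik(returns, omega, alpha, beta):
    h = omega / (1.0 - alpha - beta)   # unconditional variance start
    ll = 0.0
    for r in returns:
        ll += -0.5 * (math.log(2.0 * math.pi) + math.log(h) + r * r / h)
        h = omega + alpha * r * r + beta * h   # variance recursion
    return ll

toy_returns = [0.01, -0.02, 0.015, -0.005, 0.03, -0.01]
ll = garch11_loglik(toy_returns, omega=1e-5, alpha=0.05, beta=0.90)
```

Parameters whose implied unconditional variance matches the scale of the returns give a visibly higher log-likelihood than badly scaled ones, which is why starting values close to the maximum shorten the iteration.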
Abstract: Customer satisfaction carries great importance for the
textile sector, as it does for other sectors. Especially
considering that gaining a new customer costs four times as much
as keeping an existing one from leaving, customer satisfaction
clearly plays a major role for firms. In this study, the
independent variables affecting customer satisfaction are chosen
as brand image, perceived service quality and perceived product
quality. Using these independent variables, it is investigated
whether Turkish textile consumers' perceptions of customer
satisfaction differ by gender. The SPSS program is used for the
data analysis of this research.
Abstract: In this study, some physical and mechanical properties
of jujube fruits were measured and compared at a constant moisture
content of 15.5% w.b. The results showed that the mean length, width
and thickness of jujube fruits were 18.88, 16.79 and 15.9 mm,
respectively. The mean projected areas of jujube perpendicular to
length, width, and thickness were 147.01, 224.08 and 274.60 mm2,
respectively. The mean mass and volume were 1.51 g and 2672.80
mm3, respectively. The arithmetic mean diameter, geometric mean
diameter and equivalent diameter varied from 14.53 to 20 mm, 14.5
to 19.94 mm, and 14.52 to 19.97 mm, respectively. The sphericity,
aspect ratio and surface area of jujube fruits were 0.91, 0.89 and
926.28 mm2, respectively. Whole fruit density, bulk density and
porosity of jujube fruits were measured and found to be 1.52 g/cm3,
0.3 g/cm3 and 79.3%, respectively. The angle of repose of jujube fruit
was 14.66° (±0.58°). The static coefficient of friction on galvanized
iron steel was higher than that on plywood and lower than that on
glass surface. The values of rupture force, deformation, hardness and
energy absorbed were found to be between 11.13-19.91 N, 2.53-4.82 mm,
3.06-5.81 N/mm and 20.13-39.08 N mm, respectively.
Abstract: In this paper a class of numerical methods to solve linear and nonlinear PDEs and also systems of PDEs is developed. The Differential Transform method associated with the Method of Lines (MoL) is used. The theory for linear problems is extended to the nonlinear case, and a recurrence relation is established. This method can achieve an arbitrary high-order accuracy in time. A variable stepsize algorithm and some numerical results are also presented.
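A minimal sketch of the combination for a linear example: the heat equation u_t = u_xx is semi-discretized in space (Method of Lines), and the Differential Transform recurrence U[k+1] = F(U[k]) / (k+1) then generates the temporal Taylor coefficients, giving arbitrarily high order in time by taking more terms. The grid size, number of series terms and test time below are illustrative choices, not taken from the paper.

```python
import math

# MoL + Differential Transform sketch for u_t = u_xx on [0, 1] with
# u = 0 at both ends and u(x, 0) = sin(pi * x). The spatial Laplacian is
# the standard second difference; the DTM recurrence divides it by (k+1)
# to produce the k-th Taylor coefficient in time at every grid point.

n, K, t = 11, 30, 0.05                       # grid points, terms, time
h = 1.0 / (n - 1)
x = [i * h for i in range(n)]
U = [[math.sin(math.pi * xi) for xi in x]]   # U[0] = initial condition

for k in range(K):                           # DTM recurrence in time
    prev = U[k]
    nxt = [0.0] * n                          # boundaries stay zero
    for i in range(1, n - 1):
        lap = (prev[i - 1] - 2.0 * prev[i] + prev[i + 1]) / (h * h)
        nxt[i] = lap / (k + 1)
    U.append(nxt)

# evaluate the truncated Taylor series at time t
u = [sum(U[k][i] * t ** k for k in range(K + 1)) for i in range(n)]

# for this initial condition the semi-discrete solution is exactly
# exp(lam * t) * sin(pi * x) with lam = -(2 - 2*cos(pi*h)) / h**2,
# since sin(pi * x_i) is an eigenvector of the discrete Laplacian
lam = -(2.0 - 2.0 * math.cos(math.pi * h)) / (h * h)
exact = [math.exp(lam * t) * math.sin(math.pi * xi) for xi in x]
```

Because the initial profile is a single eigenmode of the discrete Laplacian, the truncated series can be compared against the exact semi-discrete solution, and with 30 terms the two agree to near machine precision.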
Abstract: Network warfare is an emerging concept that focuses on the network and computer based forms through which information is attacked and defended. Various computer and network security concepts thus play a role in network warfare. Due to the intricacy of the various interacting components, a model to better understand the complexity of a network warfare environment would be beneficial. Non-quantitative modeling is a useful method to better characterize the field due to the rich ideas that can be generated based on the use of secular associations, chronological origins, linked concepts, categorizations and context specifications. This paper proposes the use of non-quantitative methods, through a morphological analysis, to better explore and define the influential conditions in a network warfare environment.
Abstract: This paper proposes a genetic algorithm based on a
new replacement strategy to solve the quadratic assignment problems,
which are NP-hard. The new replacement strategy aims to improve the
performance of the genetic algorithm by balancing the convergence
of the search process against the diversity of the population. In
order to test the performance of the algorithm, the
instances in QAPLIB, a quadratic assignment problem library, are
tried and the results are compared with those reported in the literature.
The performance of the genetic algorithm is promising. The
significance is that this genetic algorithm is generic. It does not rely on
problem-specific genetic operators, and may be easily applied to
various types of combinatorial problems.
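Since the abstract does not spell out the replacement strategy, the sketch below uses a plausible stand-in rule rather than the authors' own: an offspring replaces the worst member of the population only if it improves on that member and is not a duplicate of an existing solution, which preserves diversity while still allowing convergence. The 4x4 flow/distance instance is a toy example, not a QAPLIB instance.

```python
import random

# Toy QAP instance: F[i][j] is the flow between facilities i and j,
# D[a][b] the distance between locations a and b. A solution p assigns
# facility i to location p[i].
F = [[0, 3, 1, 2], [3, 0, 4, 1], [1, 4, 0, 2], [2, 1, 2, 0]]
D = [[0, 2, 3, 1], [2, 0, 1, 4], [3, 1, 0, 2], [1, 4, 2, 0]]

def cost(p):
    # QAP objective: sum of flow(i, j) * distance(p[i], p[j])
    n = len(p)
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

def mutate(p, rng):
    # swap two facility assignments, keeping p a valid permutation
    q = p[:]
    i, j = rng.sample(range(len(q)), 2)
    q[i], q[j] = q[j], q[i]
    return q

rng = random.Random(42)
pop = [rng.sample(range(4), 4) for _ in range(8)]   # random permutations
init_best_cost = min(cost(p) for p in pop)

for _ in range(300):
    parent = min(rng.sample(pop, 3), key=cost)      # tournament selection
    child = mutate(parent, rng)
    worst = max(pop, key=cost)
    # replacement strategy: accept only improving, non-duplicate offspring
    if cost(child) < cost(worst) and child not in pop:
        pop[pop.index(worst)] = child

best = min(pop, key=cost)
```

Because only the worst member is ever replaced, and only by a strictly better child, the best cost in the population is monotone non-increasing, while the duplicate check keeps the population from collapsing onto a single solution.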
Abstract: In this paper we introduce a novel method for
the characterization of synchronization and coupling effects
in multivariate time series that can be used for the analysis
of EEG or ECoG signals recorded during epileptic seizures.
The method makes it possible to visualize the spatio-temporal
evolution of the synchronization and coupling effects that are
characteristic of epileptic seizures. Like other methods proposed
for this purpose, our method is based on a regression analysis.
However, a more general definition of the regression, together
with an effective channel selection procedure, allows the method
to be used even for time series that are highly correlated, which
is commonly the case in EEG/ECoG recordings with large
numbers of electrodes. The method was experimentally tested
on ECoG recordings of epileptic seizures from patients with
temporal lobe epilepsy. A comparison with the results of an
independent visual inspection by clinical experts showed
excellent agreement with the patterns obtained with the
proposed method.
Abstract: In this paper, a new system for the recognition of
Persian printed numeral characters, with emphasis on the
representation and recognition stages, is introduced. For the
first time in Persian optical character recognition, geometrical
central moments are used as the character image descriptor and a
fuzzy min-max neural network is used for Persian numeral character
recognition. A set of experiments on binary images of regular,
translated, rotated and scaled Persian numeral characters was
carried out and a variety of results is presented. The best result
was 99.16% correct recognition, demonstrating that geometrical
central moments and the fuzzy min-max neural network are adequate
for Persian printed numeral character recognition.
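The descriptor stage can be sketched as follows: geometrical central moments mu_pq are computed about the centroid of the binary character image, which makes them invariant to translation. The 3x3 test image is a toy stand-in for a binarized Persian numeral.

```python
# Central moment mu_pq of a binary image: the sum over foreground pixels
# of (x - xc)^p * (y - yc)^q, where (xc, yc) is the image centroid.

def central_moment(img, p, q):
    # img is a 2D list of 0/1 pixels
    m00 = m10 = m01 = 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    xc, yc = m10 / m00, m01 / m00   # centroid of the foreground
    return sum((x - xc) ** p * (y - yc) ** q * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

square = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
# for a centered symmetric blob, mu_11 vanishes and mu_20 equals mu_02
```

Normalizing mu_pq by m00 raised to (p + q)/2 + 1 would additionally give scale invariance, which matters for the scaled-character experiments reported above.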
Abstract: In this paper, a fiber-based Fabry-Perot interferometer
is proposed and demonstrated for non-contact displacement
measurement. A micro-prism attached to a mechanical vibrator
serves as the target reflector. The interference signal is
generated by the superposition of the sensing beam and the
reference beam within the sensing arm of the fiber sensor. This
signal is then converted to a displacement value by a program
developed in visual Cµ, with a resolution of λ/8. A function
generator is used to control the vibrator. By fixing an excitation
frequency of 100 Hz and varying the excitation amplitude over the
range 0.1 – 3 V, the output displacements measured by the fiber
sensor range from 1.55 μm to 30.225 μm. A reference displacement
sensor with a sensitivity of ~0.4 μm is also employed to compare
the displacement errors between the two sensors. We found that
over the entire displacement range, the maximum and average
measurement errors are 0.977% and 0.44%, respectively.
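The displacement read-out reduces to fringe counting: in a reflective interferometer the beam travels to the target and back, so one full fringe typically corresponds to λ/2 of target travel, and subdividing each fringe into four states yields steps of λ/8. The 1.55 µm wavelength below is an assumption (the abstract does not state it), chosen because both reported endpoints, 1.55 µm and 30.225 µm, happen to be integer multiples of 1.55/8 µm.

```python
# Fringe-count to displacement conversion for a lambda/8-resolution
# interferometric read-out. The wavelength is an assumed value, not a
# parameter stated in the abstract.

WAVELENGTH_UM = 1.55   # assumed source wavelength, micrometres

def displacement_um(counts):
    # number of lambda/8 resolution steps -> displacement in micrometres
    return counts * WAVELENGTH_UM / 8.0

d_min = displacement_um(8)     # 8 steps = one wavelength of travel
d_max = displacement_um(156)   # 156 steps
```

Under this assumption, 8 counts give 1.55 µm and 156 counts give 30.225 µm, matching the smallest and largest displacements reported above.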
Abstract: Energy dissipation in drops has been investigated by
physical models. After determination of effective parameters on the
phenomenon, three drops with different heights have been
constructed from Plexiglas. They have been installed in two existing
flumes in the hydraulic laboratory. Several runs of the physical
models were undertaken to measure the parameters required to
determine the energy dissipation. Results showed that the energy
dissipation in drops depends on the drop height and discharge.
Predicted relative energy dissipations varied from 10.0% to 94.3%.
This work has also indicated that the energy loss at a drop is
mainly due to the mixing of the jet with the pool behind the jet,
which causes air bubble entrainment in the flow. A statistical
model developed to predict the energy dissipation in vertical
drops indicates a nonlinear correlation between the effective
parameters. Further, an artificial neural network (ANN) approach
was used in this paper to develop an explicit procedure for
calculating the energy loss at drops using NeuroSolutions. The
trained network was able to predict the response with an R2 of
0.977 and an RMSE of 0.0085. The performance of the ANN was found
to be effective when compared to the regression equations in
predicting the energy loss.
Abstract: In this work, I present a review on Sparse Distributed
Memory for Small Cues (SDMSCue), a variant of Sparse Distributed
Memory (SDM) that is capable of handling small cues. I then conduct
and show some cognitive experiments on SDMSCue to test its
cognitive soundness compared to SDM. Small cues refer to input
cues that are presented to memory for reading associations but
have many missing parts or fields. The original SDM failed to
handle such cues; SDMSCue overcomes this pitfall. The main idea in
SDMSCue is the repeated projection of the semantic space onto
smaller subspaces that are selected based on the input cue length
and pattern. This process allows for Read/Write
operations using an input cue that is missing a large portion.
SDMSCue is augmented with the use of genetic algorithms for
memory allocation and initialization. I claim that SDM functionality
is a subset of SDMSCue functionality.
Abstract: In a non-super-competitive environment, the concepts of
the closed system and management control remain the dominant
guiding concepts of management. The merits of the closed loop have
been the source of much of the management literature and culture
for many decades. It is a useful exercise to investigate the
dynamics of the control-loop phenomenon and draw some lessons for
refining the practice of management. This paper examines the
multitude of lessons abstracted from the behavior of the
input/output/feedback control loop model, which is at the core of
control theory.
There are numerous lessons that can be learned from the insights this
model would provide and how it parallels the management dynamics
of the organization. It is assumed that an organization is basically a
living system that interacts with internal and external variables.
A viable control loop is one that reacts to variation in the
environment and provides or exerts a corrective action. In managing
organizations this is reflected in organizational structure and
management control practices. This paper will report findings that
were a result of examining several abstract scenarios that are
exhibited in the design, operation, and dynamics of the control loop
and how they are projected on the functioning of the organization.
Valuable lessons are drawn in trying to find parallels and new
paradigms, and how the control theory science is reflected in the
design of the organizational structure and management practices. The
paper is structured in a logical and perceptive format. Further
research is needed to extend these findings.
Abstract: Electronics products that achieve high levels of integrated communications, computing and entertainment multimedia features in small, stylish and robust new form factors are winning in the marketplace. Because of the high costs an industry may incur, and because high yield is directly proportional to high profits, IC (Integrated Circuit) manufacturers struggle to maximize yield; but today's customers demand miniaturization, low costs, high performance and excellent reliability, making yield maximization a never-ending search for an enhanced assembly process. With factors such as minimum tolerances and tighter parameter variations, a systematic approach is needed to predict the assembly process. To evaluate the quality of upcoming circuits, yield models are used which not only predict manufacturing costs but also provide vital information to ease the process of correction when yields fall below expectations. For an IC manufacturer to obtain higher assembly yields, all factors, such as the boards, the placement, the components, the material from which the components are made, and the processes, must be taken into consideration. Effective placement yield depends heavily on machine accuracy and on the vision system, which needs the ability to recognize the features on the board and component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package, and using solder ball locations, also called footprints. The only assumption a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, a class of computational algorithms that depends on repeated random sampling to compute results. This method is utilized to simulate the placement and assembly processes within a production line.
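The Monte Carlo idea described above can be sketched as a simple placement-yield estimator: placement offsets are sampled repeatedly from an error distribution, and the yield is the fraction of placements landing within the pad tolerance. The 25 µm error spread and 50 µm tolerance are illustrative assumptions, not data from a real production line.

```python
import random

# Monte Carlo placement-yield estimate: draw x/y placement offsets from
# a Gaussian machine-error model and count the fraction that land within
# the pad tolerance in both axes. All numbers are illustrative.

def placement_yield(trials=10_000, sigma_um=25.0, tol_um=50.0, seed=0):
    rng = random.Random(seed)
    good = 0
    for _ in range(trials):
        dx = rng.gauss(0.0, sigma_um)   # x placement offset, um
        dy = rng.gauss(0.0, sigma_um)   # y placement offset, um
        if abs(dx) <= tol_um and abs(dy) <= tol_um:
            good += 1
    return good / trials

y = placement_yield()
# with a 2-sigma tolerance in each axis the expected yield is about
# 0.9545**2, i.e. roughly 91%
```

The same loop extends naturally to correlated offsets, rotation errors, or per-component tolerances, which is what makes the Monte Carlo approach attractive for modeling a whole assembly line.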
Abstract: A number of important developments have led to an
increasing attractiveness of very high-speed electrical machines
(either motor or generator). Specifically the increasing switching
speed of power electronics, high energy magnets, high strength
retaining materials, better high speed bearings and improvements in
design analysis are the primary drivers in a move to higher speed. The
design challenges come in the mechanical design both in terms of
strength and resonant modes and in the electromagnetic design
particularly in respect of iron losses and ac losses in the various
conducting parts including the rotor. This paper describes detailed
design work which has been done on a 50,000 rpm, 50 kW permanent
magnet (PM) synchronous machine. It describes work on
electromagnetic and rotor eddy current losses using a variety of
methods including both 2D finite element analysis
Abstract: Direct fermentation of 226 white rose tapioca stem to
ethanol by Fusarium oxysporum was studied in a batch reactor.
Fermentation of ethanol can be achieved by sequential pretreatment
using dilute acid and dilute alkali solutions using 100 mesh tapioca
stem particles. The quantitative effects of substrate concentration, pH
and temperature on ethanol concentration were optimized using a full
factorial central composite design experiment. The optimum process
conditions were then obtained using response surface methodology.
The quadratic model indicated that a substrate concentration of 33 g/l,
pH 5.52 and a temperature of 30.13 °C were optimum for a
maximum ethanol concentration of 8.64 g/l. The predicted optimum
process conditions obtained using response surface methodology were
verified through confirmatory experiments. The Luedeking-Piret model
was used to study the product formation kinetics for the production
of ethanol and the model parameters were evaluated using
experimental data.