Abstract: Airport capacity has traditionally been understood as the number of aircraft operations that can be handled in a specified time at a tolerable level of average delay; it depends mostly on airside characteristics, fleet mix variability and air traffic management (ATM). The adoption of Directive 2002/30/EC in the EU countries, however, is driving stakeholders to conceive of airport capacity in a different way. Capacity in this sense is fundamentally driven by environmental criteria and, since acoustical externalities are the most important of these, they could pose a serious threat to the growth of airports, and to the aviation market itself, in the short to medium term. The importance of regional airports in the deregulated market has grown fast during the last decade, since they serve as spokes for network carriers and as preferred destinations for low-fare carriers. Regional airports have witnessed not only fast and unexpected traffic growth but also fast growth in noise complaints from people living nearby. This paper presents the results of a study conducted in cooperation with Bologna G. Marconi airport to investigate airport acoustical capacity as a de facto constraint on airport growth.
Abstract: A complex-valued neural network is a neural network with complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening their scope of applications not only in electronics and informatics but also in social systems, and one of their most important applications is signal processing. The generalized mean neuron (GMN) model is often discussed and studied in the neural network literature; it includes an aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper presents exhaustive results of using the generalized mean neuron model in a complex-valued neural network trained with the back-propagation algorithm ("Complex-BP"). Our experimental results demonstrate the effectiveness of the generalized mean neuron model in the complex plane for signal processing, compared with a real-valued neural network. We report observations on the effect of learning rates, the ranges from which the initial weights are randomly selected, the error functions used, and the number of iterations required for convergence of the error on a generalized mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
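As a rough illustration of the aggregation described above (our own sketch, not code from the paper), a generalized-mean neuron over complex inputs and weights might look as follows; the exponent p = 2 and the split sigmoid activation are assumptions:

```python
import math

def split_sigmoid(z):
    # one common Complex-BP choice: sigmoid applied separately to Re and Im
    s = lambda t: 1.0 / (1.0 + math.exp(-t))
    return complex(s(z.real), s(z.imag))

def gmn_forward(inputs, weights, p=2.0):
    # generalized-mean aggregation: ((1/n) * sum((w_i * x_i)**p))**(1/p)
    n = len(inputs)
    agg = (sum((w * x) ** p for w, x in zip(weights, inputs)) / n) ** (1.0 / p)
    return split_sigmoid(agg)
```

Because both real and imaginary parts pass through a sigmoid, the output always lies in the open unit square of the complex plane.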
Abstract: The Support Vector Machine (SVM) is a statistical learning tool initially developed by Vapnik in 1979 and later extended into the broader framework of structural risk minimization (SRM). SVMs play an increasing role in detection problems across engineering, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, the SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. Simulations were carried out for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance was calculated in terms of the mean square error (MSE) metric. We show that the SVM-detected ultrasound images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original non-noisy images, indicating that the SVM approach increases the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
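For illustration only (not the paper's simulator), single-look and multi-look multiplicative speckle can be sketched with unit-mean exponential looks, together with the MSE metric mentioned above; the image format and seed are assumptions:

```python
import random

def speckle(image, looks=1, seed=0):
    # multiplicative speckle: each pixel is scaled by the average of
    # `looks` unit-mean exponential draws; more looks -> lower noise variance
    rng = random.Random(seed)
    return [[px * sum(rng.expovariate(1.0) for _ in range(looks)) / looks
             for px in row] for row in image]

def mse(a, b):
    # mean square error between two equally sized images
    n = sum(len(row) for row in a)
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n
```

Averaging looks is what makes the multi-look model less noisy, which is consistent with the improved quality reported for it.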
Abstract: The purpose of this paper is to study the unavailability of the two main types of converters in the Swedish traction power supply (TPS) system, i.e. the rotary converter and the static converter. The number of outages and the outage durations are used to analyze and compare the unavailability of the converters. The mean cumulative function (MCF) has been used to analyze the number of outages and the unavailability, and the forced outage rate (FOR) concept has been used to calculate the outage rates. The study shows that outages due to failure occur at a constant rate by calendar time in most converter stations, while very few stations have a decreasing rate. It has also been found that the static converter stations have a higher number of outages and a higher outage rate compared to the rotary converter types. The results show that combining the number of outages and the forced outage rate gives a better view of the converters' performance and better support for maintenance decisions; using either index alone does not reflect reality. Comparing these two indexes helps in identifying the areas where extra resources are needed for maintenance planning and where improvements can reduce outages in the TPS system. Keywords: Frequency Converter, Forced Outage Rate, Mean Cumulative Function, Traction Power Supply, Electrical Systems.
Abstract: In this study, the ablation, mechanical, and thermal properties of rocket motor insulation made from phenolic/fiber matrix composites were investigated. Laminates were formed with different fibers, comparing fiberglass with locally available synthetic fibers. The mechanical and thermal properties of the phenolic/fiber matrix composites were characterized by means of tensile strength, ablation, TGA, and DSC measurements. The design of thermal insulation involves several factors. The mechanical properties were determined according to MIL-I-24768: density > 1.3 g/cm3, tensile strength > 103 MPa, and ablation.
Abstract: In this paper, to optimize the "Characteristic Straight Line Method" used in soil displacement analysis, a "best estimate" of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of "height", is discussed in detail. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed using the "Characteristic Straight Line Method", whose characteristic components have been defined and constructed from a "best estimate" of the topometric observations. In the measurement of elevation differences, we used the most modern leveling equipment available, and observational procedures were designed to provide the most effective method of data acquisition. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale conforming to the international length standard; and the concept of height systems is introduced, in which all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) are investigated. The "Characteristic Straight Line Method" is slightly more convenient than the "Characteristic Circle Method": it permits the evaluation of a displacement of very small magnitude, even when the displacement is an infinitesimal quantity.
The inclination of the landslide is given by the inverse of the distance from the reference point O to the "Characteristic Straight Line", and its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A "best estimate" of the topometric observations was used to measure the elevation of carefully selected points before and after the deformation. Gross errors were eliminated by statistical analyses and by comparing heights within local neighborhoods. The results of a test in an area where very interesting land surface deformation occurs are reported, together with monitoring under different options and a qualitative comparison of results based on a sufficient number of check points.
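The inclination/direction rule quoted above can be sketched numerically; the line representation a·x + b·y + c = 0, the reference point O at the origin, and a north-referenced bearing are our illustrative assumptions, not the paper's conventions:

```python
import math

def inclination_and_bearing(a, b, c):
    # characteristic straight line given as a*x + b*y + c = 0; O at the origin
    dist = abs(c) / math.hypot(a, b)       # distance from O to the line
    inclination = 1.0 / dist               # inclination = inverse of that distance
    # foot of the normal dropped from O onto the line
    fx = -a * c / (a * a + b * b)
    fy = -b * c / (a * a + b * b)
    bearing = math.atan2(fx, fy)           # bearing of the normal, from north (y-axis)
    return inclination, bearing
```

For example, the vertical line x = 2 lies at distance 2 from the origin, giving inclination 0.5 with the normal pointing due east.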
Abstract: The problem of estimating time-varying regression is
inevitably concerned with the necessity to choose the appropriate
level of model volatility - ranging from the full stationarity of instant
regression models to their absolute independence of each other. In the
stationary case the number of regression coefficients to be estimated
equals that of regressors, whereas the absence of any smoothness
assumptions augments the dimension of the unknown vector by the
factor of the time-series length. The Akaike Information Criterion
is a commonly adopted means of adjusting a model to the given
data set within a succession of nested parametric model classes,
but its crucial restriction is that the classes are rigidly defined by
the growing integer-valued dimension of the unknown vector. To
make the Kullback information maximization principle underlying the
classical AIC applicable to the problem of time-varying regression
estimation, we extend it to a wider class of data models in which
the dimension of the parameter is fixed, but the freedom of its values
is softly constrained by a family of continuously nested a priori
probability distributions.
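For reference, the classical criterion discussed above takes its standard form: for a model class with $k$ free parameters and maximized likelihood $\hat{L}$,

```latex
\mathrm{AIC} = 2k - 2\ln\hat{L},
```

and model selection picks the class minimizing it; the extension described in the abstract replaces the rigid integer-valued dimension $k$ by a fixed-dimension parameter whose values are softly constrained by nested priors.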
Abstract: To evaluate the genetic variation of wheat (Triticum aestivum) affected by heat and drought stress, eight Australian wheat genotypes that are parents of Doubled Haploid (DH) mapping populations were studied at the vegetative stage. The water stress experiment was conducted at 65% field capacity in a growth room, and the heat stress experiment was conducted in the research field under irrigation over summer. Results show that water stress decreased dry shoot weight and RWC but increased osmolarity and mean Fv/Fm values in all varieties except Krichauff. Krichauff and Kukri had the maximum RWC under drought stress. The Trident variety showed the maximum WUE, osmolarity (610 mM/kg), dry matter, quantum yield, and Fv/Fm (0.815) under water stress. Recovery of quantum yield was apparent between 4 and 7 days after stress in all varieties; nevertheless, a further increase in water stress after that led to a strong decrease in quantum yield. There was genetic variation in leaf pigment content among varieties under heat stress. Heat stress significantly decreased the total chlorophyll content measured by SPAD. Krichauff had the maximum anthocyanin content (2.978 A/g FW), chlorophyll a+b (2.001 mg/g FW), and chlorophyll a (1.502 mg/g FW), while the maximum chlorophyll b (0.515 mg/g FW) and carotenoid (0.234 mg/g FW) contents belonged to Kukri. The quantum yield of all varieties decreased significantly when the temperature increased from 28 °C to 36 °C over 6 days; however, recovery of quantum yield was apparent after the 8th day in all varieties, with the maximum decrease and recovery observed in Krichauff. The drought- and heat-tolerant and moderately tolerant wheat genotypes included Trident, Krichauff, Kukri, and RAC875, while Molineux, Berkut, and Excalibur clustered into the most sensitive and moderately sensitive genotypes. Finally, the results show significant genetic variation among the eight varieties studied under heat and water stress.
Abstract: A human verification system is presented in this paper. The system consists of several steps: background subtraction, thresholding, line connection, region growing, morphology, star skeletonization, feature extraction, feature matching, and decision making. The proposed system combines the advantages of star skeletonization with simple statistical features. Correlation matching and probability voting are used for verification, followed by a logical operation in the decision-making stage. The proposed system uses a small number of features, and its reliability is convincing.
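The first two steps of the pipeline (background subtraction and thresholding) can be sketched as follows; the grayscale image format and the threshold value are assumptions for illustration, not the paper's settings:

```python
def subtract_and_threshold(frame, background, thresh=30):
    # binary foreground mask: 1 where the frame differs from the
    # background by more than `thresh` gray levels, else 0
    return [[1 if abs(f - b) > thresh else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

The resulting mask is what the later steps (line connection, region growing, morphology) would refine.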
Abstract: The paper discusses a computationally efficient method for the design of the prototype filters required for the implementation of an M-band cosine modulated filter bank. The prototype filter is formulated as an optimum interpolated FIR filter, using the optimum interpolation factor that requires the minimum number of multipliers. The model filter as well as the image suppressor are designed using the Kaiser window. The method seeks to optimize a single parameter, namely the cutoff frequency, to minimize the distortion in the overlapping passband.
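A minimal sketch of Kaiser-window FIR lowpass design of the kind used for the model filter (not the paper's full optimization; the tap count, normalized cutoff, and beta below are arbitrary examples):

```python
import math

def i0(x, terms=25):
    # modified Bessel function of the first kind, order 0 (power series)
    s, t = 1.0, 1.0
    for k in range(1, terms):
        t *= (x / (2.0 * k)) ** 2
        s += t
    return s

def kaiser_lowpass(num_taps, cutoff, beta):
    # windowed-sinc FIR lowpass; cutoff is a fraction of the sample rate (0..0.5)
    m = num_taps - 1
    h = []
    for n in range(num_taps):
        k = n - m / 2.0
        sinc = 2 * cutoff if k == 0 else math.sin(2 * math.pi * cutoff * k) / (math.pi * k)
        w = i0(beta * math.sqrt(1 - (2.0 * n / m - 1) ** 2)) / i0(beta)
        h.append(sinc * w)
    return h
```

In a design loop of the kind the abstract describes, only the cutoff argument would be varied to minimize the passband-overlap distortion.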
Abstract: This paper presents a useful sub-pixel image
registration method using line segments and a sub-pixel edge detector.
In this approach, straight line segments are first extracted from gray
images at the pixel level before applying the sub-pixel edge detector.
Next, all sub-pixel line edges are mapped onto the orientation-distance
parameter space to solve for line correspondence between images.
Finally, the registration parameters with sub-pixel accuracy are solved analytically via two linear least-squares problems. The present approach can be applied to various fields where fast registration with sub-pixel accuracy is required; to illustrate, it is applied here to the inspection of printed circuits on a flat panel. A numerical example shows that the present approach is effective and accurate when the target images contain a sufficient number of line segments, which is true in many industrial problems.
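A minimal sketch of the closing least-squares step under simplifying assumptions (rotation about the origin plus translation; each line parameterized by its normal orientation θ and signed distance d from the origin). This is illustrative, not the paper's exact formulation:

```python
import math

def register_from_lines(lines_a, lines_b):
    # each line is (theta, d): unit normal (cos t, sin t) at signed distance d
    # model: B is A rotated by phi about the origin, then translated by t
    # orientations give phi directly; distances give d_b = d_a + n_b . t
    phi = sum(math.atan2(math.sin(tb - ta), math.cos(tb - ta))
              for (ta, _), (tb, _) in zip(lines_a, lines_b)) / len(lines_a)
    # 2x2 normal equations of the linear least-squares problem for t = (tx, ty)
    sxx = sxy = syy = bx = by = 0.0
    for (ta, da), (tb, db) in zip(lines_a, lines_b):
        nx, ny = math.cos(tb), math.sin(tb)
        r = db - da
        sxx += nx * nx; sxy += nx * ny; syy += ny * ny
        bx += nx * r;   by += ny * r
    det = sxx * syy - sxy * sxy
    tx = (syy * bx - sxy * by) / det
    ty = (sxx * by - sxy * bx) / det
    return phi, (tx, ty)
```

At least two non-parallel line correspondences are needed for the translation to be determined.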
Abstract: The paper is concerned with the relationships between SSME and ICTs and focuses on the role of Web 2.0 tools in the service development process. The research presented aims at exploring how collaborative technologies can support and improve service processes, highlighting customer centrality and value co-production. The core idea of the paper is the centrality of user participation, with collaborative technologies as enabling factors; Wikipedia is analyzed as an example. The result of this analysis is the identification and description of a pattern characterising specific services in which users, by means of web tools, collaborate as value co-producers during the service process. The pattern of collaborative co-production is then discussed for several categories of services, including knowledge-based services.
Abstract: A decomposition of a graph G is a collection ψ of subgraphs H1,H2, . . . , Hr of G such that every edge of G belongs to exactly one Hi. If each Hi is either an induced path or an induced cycle in G, then ψ is called an induced path decomposition of G. The minimum cardinality of an induced path decomposition of G is called the induced path decomposition number of G and is denoted by πi(G). In this paper we initiate a study of this parameter.
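As a small worked example of the parameter (ours, not from the paper): in the star $K_{1,3}$ with centre $c$ and leaves $a, b, d$, the leaves $a$ and $b$ are non-adjacent, so $a\text{-}c\text{-}b$ is an induced path and $\psi = \{a\text{-}c\text{-}b,\ c\text{-}d\}$ is an induced path decomposition; since no single induced path or cycle of $K_{1,3}$ contains all three edges,

```latex
\pi_i(K_{1,3}) = 2 .
```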
Abstract: Shear-layer instabilities of a pulsed stack-issued
transverse jet were studied experimentally in a wind tunnel. Jet
pulsations were induced by means of acoustic excitation. Streak
pictures of the smoke-flow patterns illuminated by the laser-light sheet
in the median plane were recorded with a high-speed digital camera.
Instantaneous velocities of the shear-layer instabilities in the flow were
digitized by a hot-wire anemometer. By analyzing the streak pictures
of the smoke-flow visualization, three characteristic flow modes,
synchronized flapping jet, transition, and synchronized shear-layer
vortices, are identified in the shear layer of the pulsed stack-issued
transverse jet at various excitation Strouhal numbers. The shear-layer
instabilities of the pulsed stack-issued transverse jet are synchronized
by the acoustic excitation except in the transition mode, in which the
shear-layer vortices exhibit a frequency twice the acoustic excitation
frequency.
Abstract: In this study, the numerical solution of unsteady flow
between two concentric rotating spheres with suction and blowing at
their boundaries is presented. The spheres are rotating about a
common axis of rotation while their angular velocities are constant.
The Navier-Stokes equations are solved by employing the finite
difference method and implicit scheme. The resulting flow patterns
are presented for various values of the flow parameters, including the
rotational Reynolds number Re and a blowing/suction Reynolds number
Rew. Viscous torques at the inner and outer spheres are also
calculated. It is seen that increasing the amount of suction and
blowing decreases the size of the eddies generated in the annulus.
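As a generic, hedged illustration of the implicit finite-difference machinery mentioned above (a 1-D backward-Euler diffusion step with a tridiagonal solve, not the paper's axisymmetric Navier-Stokes solver):

```python
def thomas(a, b, c, d):
    # Thomas algorithm for a tridiagonal system
    # a: sub-diagonal, b: diagonal, c: super-diagonal, d: right-hand side
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_diffusion_step(u, r):
    # one backward-Euler step of u_t = nu * u_xx, with r = nu*dt/dx^2
    n = len(u)
    a = [-r] * n; b = [1 + 2 * r] * n; c = [-r] * n
    # Dirichlet boundaries: keep the end values fixed
    b[0] = b[-1] = 1.0
    a[0] = c[0] = a[-1] = c[-1] = 0.0
    return thomas(a, b, c, u)
```

The implicit treatment is what allows stable steps at large r, at the cost of one tridiagonal solve per step.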
Abstract: The development of signal compression algorithms is making impressive progress. These algorithms are continuously improved by new tools and aim to reduce, on average, the number of bits necessary for the signal representation while minimizing the reconstruction error. This article proposes the compression of Arabic speech signals by a hybrid method combining the wavelet transform and linear prediction. The adopted approach rests, on the one hand, on the decomposition of the original signal by analysis filters, followed by the compression stage, and, on the other hand, on the application of linear prediction of order 5 to the compressed signal coefficients. The aim of this approach is the estimation of the prediction error, which is coded and transmitted; the decoding operation is then used to reconstruct the original signal. Thus, an adequate choice of the filter bank used in the transform is necessary to increase the compression rate while keeping the distortion imperceptible from an auditory point of view.
Abstract: In the policy discourse of the 1990s, more inclusive spaces were constructed for realizing full and meaningful participation of common people in education. These participatory spaces provide an alternative possibility for universalizing elementary education against the backdrop of a history of entrenched forms of social and economic exclusion, inequitable education provisions, and the shrinking role of the state in today's neo-liberal times. Drawing on case studies of bottom-up approaches to school governance, the study examines an array of innovative ways through which poor people gained a sense of identity and agency by evolving indigenous solutions to issues regarding the schooling of their children. In the process, the state's institutions and practices became more accountable and responsive to the educational concerns of marginalized people. Deliberative participation emerged as an active way of experiencing deeper forms of empowerment and democracy than their passive realization as mere bearers of citizen rights.
Abstract: In this research, we propose to use the discrete cosine transform to approximate the cumulative distributions of data cube cells' values. The cosine transform is known to have a good energy compaction property and can thus approximate data distribution functions with a small number of coefficients. The derived estimator is accurate and easy to update. We performed experiments to compare its performance with a well-known technique, the (Haar) wavelet. The experimental results show that the cosine transform performs much better than the wavelet in estimation accuracy, speed, space efficiency, and ease of update.
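The energy-compaction idea can be sketched as follows; this toy example (our own, with a synthetic CDF, and a deliberately naive O(n^2) transform) keeps only the first few DCT-II coefficients and reconstructs an approximation:

```python
import math

def dct2(values):
    # DCT-II coefficients of a sampled function (naive O(n^2) form)
    n = len(values)
    return [sum(v * math.cos(math.pi * (i + 0.5) * k / n)
                for i, v in enumerate(values))
            for k in range(n)]

def idct_truncated(coeffs, n, keep):
    # reconstruct using only the first `keep` coefficients
    return [(coeffs[0] / n) + (2.0 / n) * sum(
                coeffs[k] * math.cos(math.pi * (i + 0.5) * k / n)
                for k in range(1, keep))
            for i in range(n)]
```

Because a smooth CDF concentrates its energy in the low-order coefficients, a short prefix of the transform already reconstructs it closely, which is the property the estimator above exploits.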
Abstract: The Internet today has a huge impact on all aspects of life, including the broader context of democracy, politics, and politicians. If democracy is freedom of choice, there are a number of preconditions that ensure this freedom can be achieved and realized in practice, and these preconditions must be met regardless of the manner of voting. The key contribution of ICT to freedom of choice is that technology enables a better connection between citizens and their elected representatives than was possible without the Internet. In this sense, the Internet and ICT are changing significantly, and potentially improving, the environment in which democratic processes take place. This paper describes trends in the use of ICT in democratic processes and analyzes the challenges for the implementation of e-Democracy in Montenegro.