Abstract: This work presents a novel means of extracting fixed-length parameters from voice signals so that words can be recognized in linear time. The power and the zero-crossing rate are first calculated segment by segment from a voice signal, generating two feature sequences. We then construct an FIR system across these two sequences. The parameters of this FIR system, used as the input of a multilayer perceptron recognizer, can be derived by recursive LSE (least-squares estimation), implying that the complexity of the overall process is linear in the signal size. In the second part of this work, we introduce a weighting factor λ to emphasize recent input, which allows us to further recognize continuous speech signals. Experiments employ voice signals of the numbers zero to nine spoken in Mandarin Chinese. The proposed method is verified to recognize voice signals efficiently and accurately.
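The two feature sequences described above can be sketched as follows; the frame length is an illustrative choice, not a value from the paper.

```python
# Segment-wise power and zero-crossing rate, a minimal sketch of the
# two feature sequences named in the abstract. FRAME is a hypothetical
# segment length in samples, not taken from the paper.
FRAME = 160

def features(signal, frame=FRAME):
    """Return (power, zcr) sequences computed segment by segment."""
    powers, zcrs = [], []
    for start in range(0, len(signal) - frame + 1, frame):
        seg = signal[start:start + frame]
        # Short-time power: mean of squared samples.
        powers.append(sum(x * x for x in seg) / frame)
        # Zero-crossing rate: fraction of adjacent sample pairs
        # whose signs differ.
        crossings = sum(
            1 for a, b in zip(seg, seg[1:]) if (a >= 0) != (b >= 0)
        )
        zcrs.append(crossings / (frame - 1))
    return powers, zcrs
```

Each frame thus contributes one value to each of the two sequences over which the FIR system is constructed.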
Abstract: The present study is concerned with the effect of exciting the boundary layer on the cooling process in gas-turbine blades. The cooling process is numerically investigated. Observations show that cooling the first row of moving or stationary blades increases their lifetime. Results show that the minimum temperature in the cooling line with an excited boundary layer is lower than without excitation. Using a block in the cooling line of the turbine blade changes the flow pattern and the stability of the boundary layer, which increases the heat-transfer coefficient. Results show that at the location of the block, the temperature of the turbine blade is significantly decreased. The k-ε turbulence model is used.
Abstract: The purpose of this research is to determine the
knowledge and skills possessed by instructional design (ID)
practitioners in Malaysia. As ID is a relatively new field in the
country and there seems to be an absence of any studies on its
community of practice, the main objective of this research is to
discover the tasks and activities performed by ID practitioners in
educational and corporate organizations as suggested by the
International Board of Standards for Training, Performance and
Instruction. This includes finding out the ID models applied in the
course of their work. This research also attempts to identify the
barriers and issues as to why some ID tasks and activities are rarely
or never conducted. The methodology employed in this descriptive
study was a survey questionnaire sent to 30 instructional designers
nationwide. The results showed that majority of the tasks and
activities are carried out frequently enough but omissions do occur
due to reasons such as it being out of job scope, the decision was
already made at a higher level, and the lack of knowledge and skills.
Further investigations of a qualitative manner should be conducted
to achieve a more in-depth understanding of ID practices in
Malaysia
Abstract: Stainless steel has been employed in many engineering applications, ranging from pharmaceutical equipment to piping in nuclear reactors and the storage of chemical products. In this work, a simulation of fatigue crack growth based on experimental results for austenitic stainless steel 304L is presented using the AFGROW code with the NASGRO model laws adopted. A double through-crack-at-hole specimen is used in this investigation under constant-amplitude loading. The effect of mean stress is highlighted. Results show that the fatigue crack growth rate (FCGR) and fatigue life are affected by the maximum applied load and the dimension of the hole. An equivalent of the Paris law for this material was estimated.
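A Paris-type law, da/dN = C (ΔK)^m, can be integrated numerically to estimate fatigue life; the sketch below uses illustrative constants and a simplified stress-intensity expression, not the values identified in the study.

```python
# Minimal sketch of integrating a Paris-type law da/dN = C * (dK)^m
# between an initial and a final crack length. C, m and the geometry
# factor are hypothetical illustrative values.
import math

C = 1e-11   # hypothetical Paris coefficient
m = 3.0     # hypothetical Paris exponent

def delta_k(stress_range, a, geometry=1.0):
    """Simplified stress-intensity range for a crack of length a."""
    return geometry * stress_range * math.sqrt(math.pi * a)

def cycles_to_grow(a0, af, stress_range, steps=10000):
    """Numerically integrate dN = da / (C * dK^m) from a0 to af."""
    da = (af - a0) / steps
    n, a = 0.0, a0
    for _ in range(steps):
        dk = delta_k(stress_range, a + da / 2)  # midpoint rule
        n += da / (C * dk ** m)
        a += da
    return n
```

As expected of a Paris-type law, a larger stress range or a longer initial crack shortens the predicted life.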
Abstract: This paper describes a new supervised fusion (hybrid) electrocardiogram (ECG) classification solution consisting of a new QRS complex geometrical feature extraction method as well as a new version of the learning vector quantization (LVQ) classification algorithm aimed at overcoming the stability-plasticity dilemma. Toward this objective, after detection and delineation of the major events of the ECG signal via an appropriate algorithm, each QRS region and its corresponding discrete wavelet transform (DWT) are treated as virtual images, and each of them is divided into eight polar sectors. Then, the curve length of each extracted segment is calculated and used as an element of the feature space. To increase the robustness of the proposed classification algorithm against noise, artifacts and arrhythmic outliers, a fusion structure consisting of five different classifiers, namely a Support Vector Machine (SVM), a Modified Learning Vector Quantization (MLVQ) network and three Multi-Layer Perceptron-Back Propagation (MLP–BP) neural networks with different topologies, was designed and implemented. The new algorithm was applied to all 48 MIT–BIH Arrhythmia Database records (within-record analysis), and the discrimination power of the classifier in separating the different beat types of each record was assessed; an average accuracy of Acc = 98.51% was obtained. The proposed method was also applied to six arrhythmia types (Normal, LBBB, RBBB, PVC, APB, PB) belonging to 20 different records of the aforementioned database (between-record analysis), and an average value of Acc = 95.6% was achieved. To evaluate the performance of the new hybrid learning machine, the obtained results were compared with similar peer-reviewed studies in this area.
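The polar-sector curve-length feature can be sketched as follows; treating a sample segment as a 2-D curve, binning it around its centroid, and the choice of centroid and sector boundaries are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of a polar-sector curve-length feature: a segment of
# samples, viewed as a 2-D curve of (index, amplitude) points, is
# binned into eight angular sectors around its centroid, and the curve
# length inside each sector is accumulated.
import math

def polar_curve_lengths(samples, sectors=8):
    n = len(samples)
    cx = (n - 1) / 2.0                 # centroid x: middle index
    cy = sum(samples) / n              # centroid y: mean amplitude
    lengths = [0.0] * sectors
    for i in range(n - 1):
        x0, y0 = i, samples[i]
        x1, y1 = i + 1, samples[i + 1]
        seg_len = math.hypot(x1 - x0, y1 - y0)
        # Assign the piece to the sector containing its midpoint.
        mx, my = (x0 + x1) / 2 - cx, (y0 + y1) / 2 - cy
        angle = math.atan2(my, mx) % (2 * math.pi)
        lengths[int(angle / (2 * math.pi / sectors))] += seg_len
    return lengths
```

The eight accumulated lengths form one block of the feature vector; applying the same computation to the DWT of the QRS region yields a second block.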
Abstract: Industrial radiography is a well-known technique for the identification and evaluation of discontinuities, or defects, such as cracks, porosity and foreign inclusions found in welded joints. Although this technique is well developed, improving both the inspection process and operating time, it suffers from several drawbacks. The poor quality of radiographic images is due to the physical nature of radiography as well as the small size of the defects and their poor orientation relative to the size and thickness of the evaluated parts. Digital image processing techniques allow the interpretation of the image to be automated, avoiding the need for human operators and making the inspection system more reliable, reproducible and faster. This paper describes our attempt to develop and implement digital image processing algorithms for the purpose of automatic defect detection in radiographic images. Because of the complex nature of the considered images, and so that the detected defect region represents the real defect as accurately as possible, the choice of global and local preprocessing and segmentation methods must be appropriate.
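One common local segmentation strategy can be sketched as follows; the window size and deviation threshold are illustrative parameters, not the methods developed in the paper.

```python
# Minimal sketch of local-threshold segmentation for defect detection,
# assuming the radiograph is a 2-D list of grayscale values. A defect
# typically deviates from its neighborhood, so each pixel is compared
# against its local mean; window and offset are hypothetical choices.
def segment_defects(image, window=3, offset=10):
    h, w = len(image), len(image[0])
    r = window // 2
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Local mean over a window clipped at the image border.
            vals = [
                image[j][i]
                for j in range(max(0, y - r), min(h, y + r + 1))
                for i in range(max(0, x - r), min(w, x + r + 1))
            ]
            local_mean = sum(vals) / len(vals)
            # Mark pixels that deviate strongly from their neighborhood.
            if abs(image[y][x] - local_mean) > offset:
                mask[y][x] = 1
    return mask
```

A global method would instead compare every pixel against one image-wide threshold; the abstract's point is that both kinds must be chosen to suit the image.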
Abstract: Information is power. Geographical information science is an emerging field that advances the development of knowledge to help in understanding the relationship of "place" with other disciplines such as crime. The researchers used crime data for
the years 2004 to 2007 from the Baguio City Police Office to
determine the incidence and actual locations of crime hotspots.
Combined qualitative and quantitative research methodology was
employed through extensive fieldwork and observation, geographic
visualization with Geographic Information Systems (GIS) and Global
Positioning Systems (GPS), and data mining. The paper discusses
emerging geographic visualization and data mining tools and
methodologies that can be used to generate baseline data for
environmental initiatives such as urban renewal and rejuvenation.
The study demonstrated that crime hotspots can be computed and were seen to occur in select places in the Central Business District (CBD) of Baguio City. It was observed that some characteristics of the hotspot places, such as physical design and milieu, may play an important role in creating opportunities for crime. A list of these environmental attributes was generated. This derived information may be used to guide the design or redesign of the urban environment of the City so as to reduce crime and at the same time improve it physically.
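A simple way to compute hotspots from point data is grid-cell counting; the sketch below uses illustrative cell sizes and thresholds, not the GIS workflow of the study.

```python
# Minimal sketch of grid-based hotspot detection: incident coordinates
# are binned into grid cells, and cells whose counts reach a threshold
# are reported as hotspots. Cell size and threshold are hypothetical.
from collections import Counter

def hotspots(points, cell_size=0.01, threshold=3):
    """points: iterable of (lon, lat); returns cells with >= threshold hits."""
    counts = Counter(
        (int(lon / cell_size), int(lat / cell_size)) for lon, lat in points
    )
    return {cell: n for cell, n in counts.items() if n >= threshold}
```

GIS packages refine this idea with kernel density estimation, but the binning principle is the same.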
Abstract: Although services play a crucial role in the economy, service productivity has not gained as much importance as productivity management in manufacturing. This paper presents key findings from the literature and from practice. Based on an initial definition of complex services, seven productivity concepts are briefly presented and assessed against relevant criteria specific to complex services. Following the findings, a complex service productivity model is proposed. The novel model comprises all specific dimensions of service provision from both the provider's and the customer's perspectives. A clear assignment of the identified value drivers and the relationships between them is presented. In order to verify the conceptual service productivity model, a case study from the project engineering department of a chemical plant development and construction company is presented.
Abstract: In this paper, biannual time series data on unemployment rates (from the Labour Force Survey) are expanded to quarterly rates and linked to quarterly unemployment rates (from the Quarterly Labour Force Survey). The resultant linked series and the consumer price index (CPI) series are examined using Johansen's cointegration approach and vector error correction modeling. The study finds that both series are integrated of order one and are cointegrated. A statistically significant cointegrating relationship is found to exist between the time series of unemployment rates and the CPI. Given this significant relationship, the study models it using vector error correction models (VECM), one with a restriction on the deterministic term and the other with no restriction.
A formal statistical confirmation of the existence of a unique linear and lagged relationship between inflation and unemployment for the period between September 2000 and June 2011 is presented. For the given period, the CPI was found to be an unbiased predictor of the unemployment rate. This relationship can be explored further for the development of appropriate forecasting models incorporating other study variables.
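The error-correction idea behind a VECM can be illustrated with a small simulation, a minimal sketch not tied to the study's data: two series share a common stochastic trend, the long-run relation is estimated by OLS, and the coefficient on the lagged disequilibrium comes out negative, meaning deviations from the long run are corrected over time.

```python
# Minimal sketch of a single-equation error-correction model with
# simulated cointegrated data. All numbers are illustrative, not
# estimates from the paper.
import random

random.seed(0)
n = 2000

# Common random-walk trend plus idiosyncratic noise.
trend = [0.0]
for _ in range(n - 1):
    trend.append(trend[-1] + random.gauss(0, 1))
x = [t + random.gauss(0, 0.5) for t in trend]
y = [2.0 * t + random.gauss(0, 0.5) for t in trend]  # long run: y ~ 2x

# Step 1: long-run relation y = b*x (OLS through the origin).
b = sum(yi * xi for yi, xi in zip(y, x)) / sum(xi * xi for xi in x)
resid = [yi - b * xi for yi, xi in zip(y, x)]  # disequilibrium term

# Step 2: regress dy_t on resid_{t-1}; a negative coefficient is the
# error-correction effect.
dy = [y[t] - y[t - 1] for t in range(1, n)]
z = resid[:-1]
alpha = sum(d * e for d, e in zip(dy, z)) / sum(e * e for e in z)
```

In practice the full Johansen procedure estimates the cointegrating rank and the system jointly, but this two-step sketch shows the mechanics.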
Abstract: In this study, we present a formative assessment tool we developed for students' assignments. The tool enables lecturers
to define assignments for the course and assign each problem in each
assignment a list of criteria and weights by which the students' work
is evaluated. During assessment, the lecturers enter a score and a justification for each criterion. Once the scores for the current assignment are fully entered, the tool automatically generates
reports for both students and lecturers. The students receive a report
by email including detailed description of their assessed work, their
relative score and their progress across the criteria along the course
timeline. This information is presented via charts generated
automatically by the tool based on the scores fed in. The lecturers
receive a report that includes summative (e.g., averages, standard
deviations) and detailed (e.g., histogram) data of the current
assignment. This information enables the lecturers to follow the class
achievements and adjust the learning process accordingly. The tool was examined on two pilot groups of college students studying courses in (1) Object-Oriented Programming and (2) Plane Geometry.
Results reveal that most of the students were satisfied with the
assessment process and the reports produced by the tool. The
lecturers who used the tool were also satisfied with the reports and
their contribution to the learning process.
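The per-problem scoring scheme described above can be sketched as a weight-normalized sum; the criterion names and weights below are illustrative examples, not taken from the tool.

```python
# Minimal sketch of weighted-criteria scoring: each problem carries a
# set of criteria with weights, and a student's score is the
# weight-normalized sum of the criterion scores. All names and numbers
# here are hypothetical.
def weighted_score(scores, weights):
    """scores/weights: dicts keyed by criterion name; scores in 0..100."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

criteria_weights = {"correctness": 0.5, "design": 0.3, "style": 0.2}
student = {"correctness": 90, "design": 80, "style": 70}
grade = weighted_score(student, criteria_weights)
```

Aggregating such scores per criterion across assignments yields the progress charts and class statistics the reports contain.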
Abstract: A procedure for the preparation of clarified pawpaw juice was developed. About 750 ml of pawpaw pulp was measured into two 1-litre measuring cylinders, A and B, heated to 40 °C and cooled to 20 °C. 30 ml of pectinase was added to cylinder A, while 30 ml of distilled water was added to cylinder B. The enzyme-treated sample (A) was allowed to digest for 5 hours, after which it was heated to 90 °C for 15 minutes to inactivate the enzyme. The heated sample was cooled, and with the aid of a muslin cloth the pulp was filtered to obtain the clarified pawpaw juice. The juice was filled into 100 ml plastic bottles, pasteurized at 95 °C for 45 minutes, cooled and stored at room temperature. The sample treated with 30 ml of distilled water underwent the same process. The freshly pasteurized sample was analyzed for specific gravity, titratable acidity, pH, sugars and ascorbic acid. The remaining sample was then stored for 2 weeks and the above analyses repeated. There were differences between the freshly pasteurized and stored samples in pH and ascorbic acid levels; also, the sample treated with pectinase yielded higher volumes of juice than that treated with distilled water.
Abstract: The latest Assessment Report of the Intergovernmental Panel on Climate Change, stating that climate change poses the greatest risk to sustainability, is now widely known and accepted. However, it has not provoked substantial reaction and attention in Hungary, and international and national efforts have also not achieved the expected results so far. Still, there are numerous examples at different levels (national, regional, local, household) of considerable progress in limiting emissions and taking steps toward mitigation of and adaptation to climate change. The local level is exceptionally important in sustainability adaptation, as local communities are often able to adapt more flexibly to changes in the natural environment. The aim of this paper is to review the national climate policy and the local climate change strategies in Hungary with respect to sustainable development.
Abstract: In this paper, we propose a face recognition algorithm using AAM and Gabor features. Gabor feature vectors, which are well known to be robust to small variations of shape, scaling, rotation, distortion, illumination and pose in images, are popularly employed as feature vectors in many object detection and recognition algorithms. EBGM, which is prominent among face recognition algorithms employing Gabor feature vectors, requires localization of the facial feature points at which Gabor feature vectors are extracted. However, the localization method employed in EBGM is based on Gabor jet similarity and is sensitive to initial values; wrong localization of facial feature points degrades the face recognition rate. AAM is known to be successfully applicable to the localization of facial feature points. In this paper, we devise a facial feature point localization method that first roughly estimates the facial feature points using AAM and then refines them using Gabor jet similarity-based localization initialized with the rough estimates from AAM, and we propose a face recognition algorithm that uses this localization method together with Gabor feature vectors. Experiments show that such a cascaded localization method based on both AAM and Gabor jet similarity is more robust than localization based on Gabor jet similarity alone. It is also shown that the proposed face recognition algorithm performs better than conventional algorithms, such as EBGM, that use Gabor jet similarity-based localization and Gabor feature vectors.
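The Gabor features underlying both EBGM and the proposed method are responses to 2-D Gabor kernels, sketched minimally below; the wavelength, orientation and envelope parameters are illustrative defaults, not the filter bank used in the paper.

```python
# Minimal sketch of a 2-D Gabor kernel: a cosine carrier modulated by
# a Gaussian envelope. Parameter values are hypothetical defaults.
import math

def gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0, gamma=0.5):
    """Return a size x size real Gabor kernel as a list of lists."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates by the filter orientation theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel
```

A Gabor jet at a facial feature point is then formed from the responses of a bank of such kernels at several orientations and scales.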
Abstract: The output beam quality of multi-transverse-mode laser operation is relatively poor. In order to obtain better beam quality, one may use an aperture inside the laser resonator; in this case, various transverse modes can be selected. We have selected various transverse modes both by simulation and by experiment. By inserting a circular aperture inside the diode end-pumped Nd:YAG pulsed laser resonator, we have obtained the TEM00, TEM01 and TEM20 modes and have studied which parameters can change the mode shape. We have then determined the beam quality factor of the TEM00 Gaussian mode.
Abstract: Speedups from mapping four real-life DSP applications onto an embedded system-on-chip that couples coarse-grained reconfigurable logic with an instruction-set processor are presented. The reconfigurable logic is realized by a 2-dimensional array of processing elements. A design flow for improving application performance is proposed. Critical software parts, called kernels, are accelerated on the coarse-grained reconfigurable array. The kernels are detected by profiling the source code. For mapping the detected kernels onto the reconfigurable logic, a priority-based mapping algorithm has been developed. Two 4x4 array architectures, which differ in their interconnection structure among the processing elements, are considered. Experiments for eight different instances of a generic system show that significant overall application speedups are achieved for the four applications. The performance improvements range from 1.86 to 3.67, with an average value of 2.53, compared with an all-software execution. These speedups are quite close to the maximum theoretical speedups imposed by Amdahl's law.
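The Amdahl's-law bound referenced above is simple to compute: if a fraction f of the runtime is accelerated by a factor s, the overall speedup is 1 / ((1 - f) + f / s). The fractions below are illustrative, not profiling numbers from the paper.

```python
# Minimal sketch of the Amdahl's-law speedup bound.
def amdahl_speedup(f, s):
    """Overall speedup when fraction f of the runtime is sped up s times."""
    return 1.0 / ((1.0 - f) + f / s)

# Even with an infinitely fast accelerator, speedup is capped at 1/(1-f):
limit = amdahl_speedup(0.75, float("inf"))
```

This is why reported speedups close to the bound indicate that the kernels identified by profiling capture nearly all the accelerable runtime.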
Abstract: We introduce the notion of strongly ω-Gorenstein modules, where ω is a faithfully balanced self-orthogonal module. This gives a common generalization of both Gorenstein projective (injective) modules and ω-Gorenstein modules. We investigate some characterizations of strongly ω-Gorenstein modules. Consequently, some properties under change of rings are obtained.
Abstract: Time-varying network-induced delays in networked control systems (NCS) are known for degrading a control system's quality of performance (QoP) and causing stability problems. In the literature, a control method that models communication delays as a probability distribution proves to be a better approach. This paper focuses on modeling network-induced delays as probability distributions. CAN and MIL-STD-1553B are extensively used to carry periodic control and monitoring data in networked control systems. In the literature, methods to estimate only the worst-case delays for these networks are available. In this paper, probabilistic network delay models for CAN and MIL-STD-1553B networks are given. A systematic method to estimate the model parameters from network parameters is given. A method to predict the network delay in the next cycle based on the present network delay is presented. The effect of active network redundancy and redundancy at the node level on network delay and system response time is also analyzed.
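Treating delay as a distribution rather than a worst case can be sketched as follows; the empirical histogram and the exponential-smoothing predictor (with its coefficient) are illustrative assumptions, not the models derived in the paper.

```python
# Minimal sketch: an empirical delay distribution from measured
# per-cycle delays, plus a one-step next-cycle delay predictor.
# Bin width and smoothing coefficient are hypothetical.
from collections import Counter

def delay_histogram(delays, bin_width=0.1):
    """Empirical delay distribution: bin lower edge -> probability."""
    counts = Counter(int(d / bin_width) for d in delays)
    n = len(delays)
    return {b * bin_width: c / n for b, c in counts.items()}

def predict_next(current_delay, previous_prediction, alpha=0.5):
    """One-step delay prediction by exponential smoothing."""
    return alpha * current_delay + (1 - alpha) * previous_prediction
```

A worst-case analysis keeps only the maximum of the histogram's support; the distribution itself carries the information the probabilistic control method exploits.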
Abstract: Estimation of voltage stability based on an optimal filtering method is presented. The PV curve is used as a tool for voltage stability analysis. Dynamic voltage stability estimation is performed using the particle filter method. The optimum value (nose point) of the PV curve can be estimated by estimating the parameters of the PV curve equation; this optimal value represents the critical voltage and the maximum loading condition at the specified point of measurement. Voltage stability is then estimated dynamically by analyzing the loading margin.
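Once the PV curve parameters are estimated, the nose point follows directly; the sketch below assumes a local quadratic model P(V) = aV² + bV + c, an illustrative form rather than the equation used in the paper.

```python
# Minimal sketch of nose-point and loading-margin computation for a
# quadratic PV-curve model P(V) = a*V^2 + b*V + c (a < 0). The model
# form is a hypothetical illustration.
def nose_point(a, b, c):
    """Return (V_critical, P_max): the stationary point of the parabola."""
    v_crit = -b / (2 * a)              # where dP/dV = 0
    p_max = a * v_crit**2 + b * v_crit + c
    return v_crit, p_max

def loading_margin(p_now, a, b, c):
    """Distance from present loading to the maximum loading."""
    return nose_point(a, b, c)[1] - p_now
```

In the dynamic setting, the particle filter re-estimates (a, b, c) as new measurements arrive, so the margin can be tracked online.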
Abstract: In this study, the effects of premixed ratio and equivalence ratio on the CO and HC emissions of a dual-fuel HCCI engine are investigated. Tests were conducted on a single-cylinder engine with a compression ratio of 17.5. Premixed gasoline is provided by a carburetor connected to the intake manifold and equipped with a screw to adjust the premixed air-fuel ratio, and diesel fuel is injected directly into the cylinder through an injector at a pressure of 250 bar. A heater placed at the inlet manifold is used to control the intake charge temperature. An optimal intake charge temperature results in better HCCI combustion due to the formation of a homogeneous mixture; therefore, all tests were carried out at the optimum intake temperature of 110-115 °C. The timing of diesel fuel injection has a great effect on the stratification of the in-cylinder charge and plays an important role in HCCI combustion phasing. Experiments indicated 35° BTDC as the optimum injection timing. Varying the coolant temperature in a range of 40 to 70 °C, better HCCI combustion was achieved at 50 °C; therefore, the coolant temperature was maintained at 50 °C during all tests. A simultaneous investigation of the parameters affecting HCCI combustion was conducted to determine the optimum parameters for a fast transition to HCCI combustion. One of the advantages of the studied method is the feasibility of an easy and fast conversion of a typical diesel engine to a dual-fuel HCCI engine. Results show that increasing the premixed ratio, while keeping the EGR rate constant, increases unburned hydrocarbon (UHC) emissions due to quenching phenomena and trapping of premixed fuel in crevices, but CO emissions decrease due to an increase in CO-to-CO2 reactions.
Abstract: Optimization of filter banks based on knowledge of the input statistics has been of interest for a long time. Finite impulse response (FIR) compaction filters are used in the design of optimal signal-adapted orthonormal FIR filter banks. In this paper, we discuss three different approaches for the design of interpolated finite impulse response (IFIR) compaction filters. In the first method, the magnitude-squared response satisfies the Nyquist constraint approximately. In the second and third methods, the Nyquist constraint is exactly satisfied. These methods yield FIR compaction filters whose response is comparable with that of existing methods. At the same time, IFIR filters enjoy a significant saving in the number of multipliers and can be implemented efficiently. Since the eigenfilter approach is used here, the method is less complex. The design of IFIR filters in the least-squares sense is presented.
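The IFIR structure behind the multiplier saving can be sketched as follows; the tap values below are illustrative, not a designed compaction filter.

```python
# Minimal sketch of the IFIR structure: a prototype (shaping) filter is
# upsampled by a factor L (zeros inserted between taps, G(z) -> G(z^L))
# and cascaded with an image-suppressing interpolator. All tap values
# here are hypothetical.
def upsample(taps, factor):
    """Insert factor-1 zeros between filter taps."""
    out = []
    for t in taps[:-1]:
        out.append(t)
        out.extend([0.0] * (factor - 1))
    out.append(taps[-1])
    return out

def convolve(a, b):
    """Cascade of two FIR filters = convolution of their tap vectors."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

shaping = [0.25, 0.5, 0.25]        # hypothetical prototype filter
interpolator = [0.5, 1.0, 0.5]     # hypothetical image suppressor
ifir = convolve(upsample(shaping, 2), interpolator)
```

The cascade realizes a longer effective filter while only the nonzero taps of the two short filters need multipliers, which is the efficiency the abstract refers to.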