Abstract: In this paper we present a modification to an existing threshold model for shot cut detection that adapts itself to the sequence statistics and operates in real time, since it uses only previously evaluated frames in its calculation. The efficiency of the proposed modified adaptive threshold scheme was verified through extensive experiments with several similarity metrics, and the results were compared with those of the original model. According to the results, the proposed threshold scheme achieves higher accuracy than the original model.
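The adaptive rule described above can be sketched as a sliding-window statistic over previously evaluated frames. This is a minimal illustration, not the paper's exact model: the window length and the mean-plus-scaled-deviation decision rule below are assumptions.

```python
import numpy as np

# A minimal sketch of an adaptive threshold for shot cut detection:
# the threshold for frame t is computed only from the similarity
# values of previously evaluated frames (a sliding window), so it
# can adapt to the sequence statistics and run in real time.
# Window length and the mean + a*std rule are illustrative assumptions.
def detect_cuts(dissim, window=20, a=3.0):
    """Flag frame t as a shot cut when its dissimilarity to the
    previous frame exceeds a threshold derived from the last
    `window` dissimilarity values."""
    cuts = []
    for t in range(window, len(dissim)):
        past = dissim[t - window:t]
        threshold = past.mean() + a * past.std()
        if dissim[t] > threshold:
            cuts.append(t)
    return cuts

rng = np.random.default_rng(0)
d = rng.uniform(0.05, 0.15, 200)   # ordinary inter-frame dissimilarity
d[60] = d[140] = 0.9               # two simulated shot cuts
print("detected cuts at frames:", detect_cuts(d))
```

Because the threshold depends only on past frames, each new frame costs O(window) work, which is what makes real-time operation possible.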
Abstract: Solid waste can be considered as an urban burden or
as a valuable resource depending on how it is managed. To meet the
rising demand for energy and to address environmental concerns, a
conversion from conventional energy systems to renewable resources
is essential. For the sustainability of human civilization, an
environmentally sound and techno-economically feasible waste
treatment method is very important to treat recyclable waste. Several
technologies are available for realizing the potential of solid waste as
an energy source, ranging from very simple systems for disposing of
dry waste to more complex technologies capable of dealing with
large amounts of industrial waste. There are three main pathways for
conversion of waste material to energy: thermochemical,
biochemical and physicochemical. This paper investigates the
thermochemical conversion of solid waste for energy recovery. The
processes, advantages and disadvantages of the various thermochemical
conversion processes are discussed and compared. Special attention
is given to the gasification process, as it provides better solutions
regarding public acceptance, feedstock flexibility, near-zero
emissions, efficiency and security. Finally, this paper presents
comparative statements of the thermochemical processes and introduces
an integrated waste management system.
Abstract: Built environments have a large impact on environmental sustainability, and if they are not designed properly they can negatively affect our planet. The application of transformable intelligent building systems that automatically respond to environmental conditions is one of the best ways to help us create a sustainable environment. The significance of this issue is evident, as the energy crisis and environmental change have made sustainability a main concern in many societies. The aim of this research is to review and evaluate the importance and influence of transformable intelligent structures on the creation of sustainable architecture. Intelligent systems in current buildings provide convenience by automatically responding to changes in environmental conditions, reducing energy dissipation and increasing the lifecycle of buildings. By analyzing significant intelligent building systems, this paper evaluates the potential of transformable intelligent systems in the creation of sustainable architecture and environments.
Abstract: Many applications of speech communication and speaker
identification suffer from the problem of co-channel speech. This
paper deals with a multi-resolution dyadic wavelet transform method
for detecting usable segments of co-channel speech that could be
processed by a speaker identification system. Evaluation of this
method is performed on the TIMIT database using the
Target-to-Interferer Ratio (TIR) measure. Co-channel speech is constructed by
mixing all possible gender speakers. Results do not show much
difference for different mixtures. For the overall mixtures 95.76% of
usable speech is correctly detected with false alarms of 29.65%.
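The TIR measure referenced above can be illustrated with a small frame-labelling sketch. Note the hedge: the paper detects usable segments from the mixture itself using a dyadic wavelet transform, whereas this ground-truth labelling requires the separate target and interferer signals; the 20 dB usability threshold and the synthetic signals are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of labelling co-channel speech frames as "usable"
# by the Target-to-Interferer Ratio (TIR). The 20 dB threshold, the
# sampling rate and the synthetic alternating talkers are assumptions,
# not the paper's setup.
fs, frame = 8000, 256
t = np.arange(4 * fs) / fs
target = np.sin(2 * np.pi * 200 * t) * (t % 1.0 < 0.5)             # talker A
interferer = 0.2 * np.sin(2 * np.pi * 310 * t) * (t % 1.0 >= 0.5)  # talker B

def tir_db(x, y, eps=1e-12):
    """Frame TIR in dB: target energy over interferer energy."""
    return 10 * np.log10((np.sum(x ** 2) + eps) / (np.sum(y ** 2) + eps))

usable = [
    tir_db(target[i:i + frame], interferer[i:i + frame]) > 20.0
    for i in range(0, len(t) - frame, frame)
]
print(f"usable frames: {sum(usable)} / {len(usable)}")
```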
Abstract: In this paper, two centrifuge model tests (case 1: raft
foundation; case 2: 2x2 piled raft foundation) were conducted in
order to evaluate the effect of ground subsidence on load sharing
between the piles and the raft and on the settlement of raft and piled raft
foundations. For each case, two conditions consisting of undrained
(without groundwater pumping) and drained (with groundwater
pumping) conditions were considered. Vertical loads were applied
to the models after the foundations were completely consolidated by
self-weight at 50g. The results show that load sharing by the piles in
the piled raft foundation (pile load share) under the drained condition
decreases faster than under the undrained condition. Settlement of
both raft and piled raft foundations under the drained condition
increases more quickly than under the undrained condition. In addition,
the settlement of the raft foundation increases more than that of the
piled raft foundation under the drained condition.
Abstract: The System Identification problem looks for a
suitably parameterized model representing a given process. The
parameters of the model are adjusted to optimize a performance
function based on error between the given process output and
identified process output. The linear system identification field is
well established with many classical approaches whereas most of
those methods cannot be applied to nonlinear systems. The problem
becomes tougher if the system is completely unknown, with only the
output time series available. It has been reported that the
capability of Artificial Neural Networks to approximate arbitrary linear
and nonlinear input-output maps makes them particularly suitable for the
identification of nonlinear systems where only the output time series
is available [1][2][4][5]. The work reported here is an attempt to
implement a few of the well-known algorithms in the context of
modeling nonlinear systems, and to make a performance
comparison establishing their relative merits and demerits.
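The output-only identification idea above can be sketched with a small network that learns to predict the next output from lagged outputs. This is a minimal illustration, not one of the paper's compared algorithms: the example system, network size and learning rate are assumptions.

```python
import numpy as np

# A minimal sketch of neural-network identification from an output
# time series alone: a one-hidden-layer network learns to predict
# y[t] from the lagged outputs y[t-1], y[t-2]. The example system,
# network size and learning rate are illustrative assumptions.
rng = np.random.default_rng(0)

# Output series of a hypothetical nonlinear system with process noise.
y = np.zeros(400)
y[0], y[1] = 0.1, 0.2
for t in range(2, 400):
    y[t] = 0.5 * np.sin(y[t - 1]) + 0.3 * y[t - 2] + 0.1 * rng.standard_normal()

X = np.column_stack([y[1:-1], y[:-2]])   # lagged-output regressors
d = y[2:]                                # one-step-ahead targets

# One hidden tanh layer trained by batch gradient descent on MSE.
W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8); b2 = 0.0
lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    e = h @ W2 + b2 - d                   # prediction error
    gh = np.outer(e, W2) * (1 - h ** 2)   # backprop through tanh
    W2 -= lr * h.T @ e / len(d); b2 -= lr * e.mean()
    W1 -= lr * X.T @ gh / len(d); b1 -= lr * gh.mean(axis=0)

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - d) ** 2)
print(f"one-step-ahead MSE after training: {mse:.5f}")
```

After training, the one-step-ahead MSE should fall below the variance of the targets, i.e. the network outperforms a constant-mean predictor.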
Abstract: Six Sigma is a well-known discipline that reduces
variation using complex statistical tools and the DMAIC model. By
integrating Goldratt's Theory of Constraints, the Five Focusing
Steps and Systems Thinking tools, Six Sigma projects can be selected
where they will have the greatest impact on the company. This research
defines an integrated model of Six Sigma and constraint management
that provides a step-by-step guide using the original methodologies
from each discipline and is evaluated in a case study from the
production line of an automobile V8 engine monoblock, resulting in
an increase in the line capacity from 18.7 pieces per hour to 22.4
pieces per hour, a reduction of 60% of Work-In-Process and a
variation decrease of 0.73%.
Abstract: One of the purposes of the robust method of
estimation is to reduce the influence of outliers in the data on the
estimates. The outliers arise from gross errors or contamination from
distributions with long tails. The trimmed mean is a robust estimate.
This means that it is not sensitive to violation of distributional
assumptions of the data. It is called an adaptive estimate when the
trimming proportion is determined from the data rather than being
fixed a priori.
The main objective of this study is to find out the robustness
properties of the adaptive trimmed means in terms of efficiency, high
breakdown point and influence function. Specifically, it seeks to find
out the magnitude of the trimming proportion of the adaptive
trimmed mean which will yield efficient and robust estimates of the
parameter for data which follow a modified Weibull distribution with
parameter λ = 1/2 , where the trimming proportion is determined by a
ratio of two trimmed means defined as the tail length. Secondly, the
asymptotic properties of the tail length and the trimmed means are
also investigated. Finally, a comparison is made on the efficiency of
the adaptive trimmed means in terms of the standard deviation for the
trimming proportions and when these were fixed a priori.
The asymptotic tail lengths defined as the ratio of two trimmed
means and the asymptotic variances were computed by using the
formulas derived. The values of the standard deviations for the
derived tail lengths, for data of size 40 simulated from a Weibull
distribution, were computed over 100 iterations using a computer
program written in Pascal.
The findings of the study revealed that the tail lengths of the
Weibull distribution increase in magnitude as the trimming
proportions increase; the measure of the tail length and the adaptive
trimmed mean are asymptotically independent as the number of
observations n approaches infinity; the tail
length is asymptotically distributed as the ratio of two independent
normal random variables; and the asymptotic variances decrease as
the trimming proportions increase. The simulation study revealed
empirically that the standard error of the adaptive trimmed mean
using the ratio of tail lengths is relatively smaller for different values
of trimming proportions than its counterpart when the trimming
proportions were fixed a priori.
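The adaptive trimmed mean described above can be sketched in a few lines. Hedge: the tail-length measure below (a ratio of two trimmed means of absolute deviations) and the rule mapping it to a trimming proportion are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def trimmed_mean(x, alpha):
    """Mean of x after discarding a fraction alpha from each tail."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(alpha * len(x))
    return x[k:len(x) - k].mean()

# A minimal sketch of an adaptive trimmed mean. The tail-length
# measure (ratio of two trimmed means of absolute deviations) and
# the selection rule below are illustrative assumptions.
def adaptive_trimmed_mean(x):
    dev = np.abs(x - np.median(x))
    # Heavier tails inflate the lightly-trimmed mean deviation
    # relative to the heavily-trimmed one.
    tail_length = trimmed_mean(dev, 0.05) / trimmed_mean(dev, 0.25)
    alpha = 0.05 if tail_length < 1.5 else (0.10 if tail_length < 2.0 else 0.20)
    return trimmed_mean(x, alpha), alpha

rng = np.random.default_rng(1)
sample = rng.weibull(0.5, size=40)   # heavy-tailed Weibull, shape 1/2
est, alpha = adaptive_trimmed_mean(sample)
print(f"adaptive trimmed mean = {est:.3f} (trimming proportion {alpha})")
```

The point of adaptivity is visible here: a heavy-tailed sample yields a larger tail-length ratio and hence a larger trimming proportion than a light-tailed one would.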
Abstract: Reliability assessment and risk analysis of rotating
machine rotors in various overload and malfunction situations
present a challenge to engineers and operators. In this paper a new
analytical method for the evaluation of rotors under large
deformation is addressed. The model is presented in general form so
as to also include composite rotors. The presented simulation
procedure is based on the variational work method and can account
for geometric nonlinearity, large displacement, nonlinear support
effects and rotor contact with other machine components. New shape
functions are presented which are capable of predicting the accurate
nonlinear profile of the rotor. Closed-form solutions for various
operating and malfunction situations are expressed, and analytical
simulation results are discussed.
Abstract: Radiofrequency (RF) lesioning of nerves has been commonly used to alleviate chronic pain: the RF current prevents transmission of pain signals through the nerve by heating the nerve that causes the pain. Several factors affect the temperature distribution and the nerve lesion size; one of these is inhomogeneity in the tissue medium. Our objective is to calculate the temperature distribution and the nerve lesion size in an inhomogeneous medium surrounding the RF electrode. Two 3-D finite element models are used to compare the temperature distribution in homogeneous and inhomogeneous media. The effect of temperature-dependent electric conductivity on the maximum temperature and lesion size is also examined. Results show that the presence of an inhomogeneous medium around the RF electrode has a considerable effect on the temperature distribution and lesion size, and that the dependence of electric conductivity on tissue temperature increases the lesion size.
Abstract: A novel file splitting technique for the reduction of the nth-order entropy of text files is proposed. The technique is based on mapping the original text file into a non-ASCII binary file using a new codeword assignment method; the resulting binary file is then split into several subfiles, each containing one or more bits from each codeword of the mapped binary file. The statistical properties of the subfiles are studied, and it is found that they reflect the statistical properties of the original text file, which is not the case when the ASCII code is used as the mapper. The nth-order entropy of these subfiles is determined, and it is found that the sum of their entropies is less than that of the original text file for the same values of extensions. These interesting statistical properties of the resulting subfiles can be used to achieve better compression ratios when conventional compression techniques are applied to the subfiles individually, on a bit-wise rather than a character-wise basis.
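The bit-wise splitting idea can be sketched as follows. Hedge: the 5-bit fixed-length codeword assignment below is an illustrative stand-in for the paper's codeword assignment method, not its actual mapping, and only first-order entropies are computed here.

```python
import math
from collections import Counter

def entropy(symbols):
    """First-order (per-symbol) Shannon entropy in bits."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A minimal sketch of the bit-wise file-splitting idea. The 5-bit
# codeword assignment is an illustrative stand-in for the paper's
# codeword assignment method.
text = "this is a simple example of a text file to be split"
alphabet = sorted(set(text))
code = {ch: format(i, "05b") for i, ch in enumerate(alphabet)}

# Map the text to binary codewords, then split bit-wise: subfile j
# collects the j-th bit of every codeword.
codewords = [code[ch] for ch in text]
subfiles = ["".join(cw[j] for cw in codewords) for j in range(5)]

h_original = entropy(text)
h_subfiles = [entropy(s) for s in subfiles]
print(f"H(original) = {h_original:.3f} bits/char")
print("per-subfile entropies:", [round(h, 3) for h in h_subfiles])
```

Each subfile is a binary sequence, so its first-order entropy is at most 1 bit per symbol; the paper's claim concerns how the sum of the subfile entropies compares with the original file's entropy at higher orders.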
Abstract: In this paper, the process of obtaining the Q and R
matrices for an optimal aircraft pitch control system is described.
Since the introduction of the optimal control method, the determination
of the Q and R matrices for such a system has not been fully specified.
The values of Q and R for the optimal pitch control application have
been simulated and calculated. Suitable results for Q and R have
been identified through the performance index (PI): if the PI is small
enough, the Q and R values are considered suitable for that
certain type of optimal control system. Moreover, for the same value
of PI, we could have different Q and R sets. Given the rule-free
determination of the Q and R matrices, a specific method is introduced
to find rough values of Q and R corresponding to a rather small value
of PI.
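The Q/R search described above can be sketched with a standard LQR computation. Hedge: the 2-state model below is an illustrative stand-in for the longitudinal pitch dynamics, not the paper's aircraft model, and the performance index used is the standard optimal cost from a fixed initial state.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# A minimal LQR sketch of trying candidate Q and R matrices against
# a performance index. The 2-state model is an illustrative
# assumption, not the paper's aircraft pitch model.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

def lqr(Q, R):
    """Solve the continuous algebraic Riccati equation; return the
    optimal gain K = R^-1 B^T P and the Riccati solution P."""
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    return K, P

# Compare candidate Q/R sets through the performance index
# J = x0^T P x0 (the optimal cost from the initial state x0).
x0 = np.array([1.0, 0.0])
for q_scale in (1.0, 10.0, 100.0):
    K, P = lqr(q_scale * np.eye(2), np.array([[1.0]]))
    print(f"q_scale={q_scale:6.1f}  K={np.round(K.ravel(), 3)}  PI={x0 @ P @ x0:.3f}")
```

As the abstract notes, different Q/R pairs can yield comparable PI values; sweeping a scale factor as above is one simple way to map out that trade-off.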
Abstract: The machining performance is determined by the
frequency characteristics of the machine-tool structure and the
dynamics of the cutting process. Therefore, the prediction of dynamic
vibration behavior of spindle tool system is of great importance for the
design of a machine tool capable of high-precision and high-speed
machining. The aim of this study is to develop a finite element model
to predict the dynamic characteristics of milling machine tool and
hence evaluate the influence of the preload of the spindle bearings.
For this purpose, a three-dimensional spindle-bearing model of a
high-speed engraving spindle tool was created. In this model, the rolling
interfaces with contact stiffness defined by Harris model were used to
simulate the spindle bearing components. Then a full finite element
model of a vertical milling machine was established by coupling the
spindle tool unit with the machine frame structure. Using this model,
the vibration mode that had a dominant influence on the dynamic
stiffness was determined. The results of the finite element simulations
reveal that spindle bearings with different preloads greatly affect the
dynamic behavior of the spindle tool unit and hence the dynamic
responses of the vertical column milling system. These results were
validated by performing vibration tests on the individual spindle tool unit
and the milling machine prototype, respectively. We conclude that
preload of the spindle bearings is an important component affecting
the dynamic characteristics and machining performance of the entire
vertical column structure of the milling machine.
Abstract: This article is devoted to the numerical solution of
large-scale quadratic eigenvalue problems. Such problems arise in
a wide variety of applications, such as the dynamic analysis of
structural mechanical systems, acoustic systems, fluid mechanics,
and signal processing. We first introduce a generalized second-order
Krylov subspace based on a pair of square matrices and two initial
vectors and present a generalized second-order Arnoldi process for
constructing an orthonormal basis of the generalized second-order
Krylov subspace. Then, by using the projection technique and the
refined projection technique, we propose a restarted generalized
second-order Arnoldi method and a restarted refined generalized
second-order Arnoldi method for computing some eigenpairs of
large-scale quadratic eigenvalue problems. Some theoretical results
are also presented, and numerical examples illustrate the
effectiveness of the proposed methods.
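The projection idea can be sketched with a basic second-order Krylov process. Hedge: this is a plain SOAR-style construction with QR orthonormalization, not the restarted or refined variants the paper proposes, and the test matrices M, C, K are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of second-order Krylov projection for the
# quadratic eigenvalue problem (lam^2 M + lam C + K) x = 0.
# Illustrative test matrices; not the paper's restarted/refined method.
rng = np.random.default_rng(0)
n, k = 50, 10
M = np.eye(n)
K = np.diag(2.0 + np.arange(n))   # illustrative stiffness matrix
C = 0.1 * K                       # proportional damping

A = -np.linalg.solve(M, C)
B = -np.linalg.solve(M, K)

# Build a basis of the second-order Krylov subspace spanned by
# r_0, r_1 = A r_0, r_j = A r_{j-1} + B r_{j-2}, then orthonormalize.
r_prev2 = rng.standard_normal(n)
r_prev1 = A @ r_prev2
vecs = [r_prev2, r_prev1]
for _ in range(k - 2):
    r = A @ r_prev1 + B @ r_prev2
    r_prev2, r_prev1 = r_prev1, r
    vecs.append(r)
Q, _ = np.linalg.qr(np.column_stack(vecs))

# Project the QEP onto the subspace and solve the small projected
# problem by the standard companion linearization.
Mk, Ck, Kk = Q.T @ M @ Q, Q.T @ C @ Q, Q.T @ K @ Q
Z = np.zeros((k, k)); I = np.eye(k)
L0 = np.block([[-Ck, -Kk], [I, Z]])
L1 = np.block([[Mk, Z], [Z, I]])
ritz = np.linalg.eigvals(np.linalg.solve(L1, L0))
print("a few Ritz values:", np.round(sorted(ritz, key=abs)[:4], 3))
```

The Ritz values of the small 2k x 2k linearized problem approximate eigenvalues of the original n-dimensional quadratic problem; restarting and refinement, as in the paper, improve the accuracy of selected eigenpairs.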
Abstract: This paper proposes a new performance characterization for the test strategy for second-order filters denominated the Transient Analysis Method (TRAM). We evaluate the ability of the addressed test strategy to detect deviation faults under simultaneous statistical fluctuation of the non-faulty parameters. For this purpose, we use Monte Carlo simulations and a fault model that considers as faulty only one component of the filter under test, while the other components adopt random values (within their tolerance bands) obtained from their statistical distributions. The new data reported here show (for the filters under study) the presence of hard-to-test components and relatively low fault coverage values for small deviation faults. These results suggest that the fault coverage value obtained using only nominal values for the non-faulty components (the traditional evaluation of TRAM) is a poor predictor of the test performance.
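The Monte Carlo fault-coverage evaluation described above can be sketched generically. Hedge: the observed parameter (natural frequency f0 of a second-order filter), the 5% tolerance band, the 10% test limits and the deviation values are illustrative assumptions; this is not TRAM itself, only the coverage-estimation idea.

```python
import numpy as np

# A minimal sketch of Monte Carlo fault-coverage estimation for a
# generic second-order filter. Component values, tolerance band and
# test limits are illustrative assumptions.
rng = np.random.default_rng(2)

NOM = {"R1": 10e3, "R2": 10e3, "C1": 10e-9, "C2": 10e-9}
TOL = 0.05   # relative tolerance of the non-faulty components

def f0(p):
    """Natural frequency of a second-order RC filter stage."""
    return 1.0 / (2 * np.pi * np.sqrt(p["R1"] * p["R2"] * p["C1"] * p["C2"]))

f_nom = f0(NOM)
lo, hi = 0.9 * f_nom, 1.1 * f_nom   # pass/fail limits on f0

def coverage(faulty, deviation, runs=2000):
    """Fraction of runs in which the fault is detected (f0 outside
    the limits) while non-faulty components fluctuate uniformly
    within their tolerance band."""
    detected = 0
    for _ in range(runs):
        p = {key: v * rng.uniform(1 - TOL, 1 + TOL) for key, v in NOM.items()}
        p[faulty] = NOM[faulty] * (1 + deviation)
        if not lo <= f0(p) <= hi:
            detected += 1
    return detected / runs

for dev in (0.1, 0.3, 0.5):
    print(f"R1 deviation {dev:+.0%}: coverage = {coverage('R1', dev):.2f}")
```

This reproduces the qualitative effect the abstract reports: small deviation faults can hide inside the statistical fluctuation of the non-faulty components, so their coverage is well below the nominal-value estimate.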
Abstract: The task of object localization is one of the major
challenges in creating intelligent transportation. Unfortunately, in
densely built-up urban areas, localization based on GPS alone
produces large errors, or simply becomes impossible. New
opportunities for localization arise from the rapidly emerging
concept of the wireless ad-hoc network. Such a network allows
estimating the distance between objects by measuring the received
signal level, and constructing a graph of distances in which the
nodes are the objects to be localized and the edges are estimates of
the distances between pairs of nodes. Given the known coordinates of
individual nodes (anchors), it is possible to determine the location of
all (or part) of the remaining nodes of the graph. Moreover, a road
map available in digital format can provide localization routines
with valuable additional information to narrow the node location
search. However, despite an abundance of well-known algorithms for
solving the localization problem and significant research efforts,
there are still many issues that are currently addressed only
partially. In this paper, we propose a localization approach based on
mapping the distance graph onto digital road map data. In effect, the
problem is reduced to embedding the distance graph into a graph
representing the area's geolocation data. This makes it possible to
localize objects, in some cases even if only one reference point is
available. We propose a simple embedding algorithm and a sample
implementation as spatial queries over sensor network data stored in
a spatial database, making effective use of spatial indexing,
optimized spatial search routines and geometry functions.
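The anchor-based localization step can be illustrated with classical least-squares multilateration. Hedge: this is a simpler stand-in for the paper's graph-embedding approach; the anchor positions and noise level are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of distance-based node localization via classical
# least-squares multilateration (a simpler stand-in for the paper's
# graph-embedding approach). Anchors and noise are assumptions.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
true_pos = np.array([40.0, 60.0])

rng = np.random.default_rng(3)
# Distances estimated from received signal level, with measurement noise.
d = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 1.0, 3)

# Linearize: subtracting the first range equation from the others
# gives a linear system A p = b in the unknown position p.
A = 2 * (anchors[1:] - anchors[0])
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"estimated position: {np.round(p, 1)} (true: {true_pos})")
```

With fewer anchors than this, the problem becomes underdetermined, which is where the road-map constraint described in the abstract becomes valuable: candidate positions can be restricted to the road network.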
Abstract: The building sector is a major electricity consumer, and
electricity is costly to building owners. The application of thermal
energy storage (TES) has therefore become attractive as a way to
reduce energy cost. Many attractive tariff packages are offered by
electricity providers to promote TES; these packages charge a higher
electricity cost during the peak period and a lower cost during the
off-peak period. This paper presents the return on the initial
investment of a centralized air-conditioning plant integrated with
thermal energy storage under a partial operation strategy. The
building load profile is calculated hourly according to the building
specification and building usage trend. The TES operating conditions
are designed according to the building load demand profile, storage
capacity, tariff packages and peak/off-peak periods. The payback
period method was used for the economic analysis. The investment is
considered a good one when the initial cost is recovered in less than
ten years.
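The payback-period evaluation above can be sketched in a few lines. Hedge: all figures (capital cost, tariff rates, shifted load) are illustrative assumptions, not the paper's case-study data.

```python
# A minimal sketch of a simple payback-period evaluation for a TES
# retrofit. All figures (capital cost, tariffs, shifted load) are
# illustrative assumptions, not the paper's case-study data.
capital_cost = 500_000.0                # TES plant initial investment ($)
peak_rate, offpeak_rate = 0.35, 0.14    # $/kWh tariff assumptions
shifted_kwh_per_day = 4_000.0           # cooling load moved to off-peak

# Annual saving: energy bought off-peak instead of on-peak.
daily_saving = shifted_kwh_per_day * (peak_rate - offpeak_rate)
annual_saving = daily_saving * 365

payback_years = capital_cost / annual_saving
print(f"annual saving = ${annual_saving:,.0f}")
print(f"simple payback period = {payback_years:.1f} years")
```

The larger the peak/off-peak tariff spread and the more load the storage can shift, the shorter the payback period, which is exactly the lever the tariff packages described above are designed to pull.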
Abstract: The Sphere Method is a flexible interior point algorithm for linear programming problems, developed mainly by Professor Katta G. Murty. It consists of two steps, the centering step and the descent step; the centering step is the most expensive part of the algorithm. For this centering step we propose some improvements, such as introducing two or more initial feasible solutions and selecting the more favorable new solution by objective value, while working with rigorous updates of the feasible region, along with some ideas integrated into the descent step. An illustration is given confirming the advantage of using the proposed procedure.
Abstract: In this study, arsenate [As(V)] removal from drinking water by a coagulation process was investigated. Ferric chloride (FeCl3.6H2O) and ferrous sulfate (FeSO4.7H2O) were used as coagulants. The effects of the major operating variables, coagulant dose (1–30 mg/L) and pH (5.5–9.5), were investigated. Ferric chloride and ferrous sulfate were found to be effective and reliable coagulants in terms of required dose, residual arsenate and coagulant concentration. The optimum pH values for maximum arsenate removal were found to be 8 for ferrous sulfate and 7.5 for ferric chloride. The arsenate removal efficiency decreased at neutral and acidic pH values for Fe(II), and at highly acidic and highly alkaline pH for Fe(III). It was found that increasing the coagulant dose caused a substantial increase in arsenate removal, but above a certain ferric chloride or ferrous sulfate dosage the increase was not significant: doses above 8 mg/L increased arsenate removal only slightly.
Abstract: The objective of this paper is to support the application of Open Innovation practices in firms and organizations through the assessment and management of Intellectual Capital. The constituents of Intellectual Capital are analyzed in order to verify their capability of acting as key drivers of Open Innovation processes and, therefore, of creating value. A methodology is defined to establish a procedure that helps select the most relevant Intellectual Capital value drivers and to provide Communities of Innovation with strategic and managerial guidelines for sustaining the Open Innovation paradigm. An application of the methodology is developed within a specifically addressed project, and its results are examined.