Abstract: Implicit equations play a crucial role in Engineering.
Given this importance, several techniques have been developed to
solve this particular class of equations; in practical
applications, iterative procedures are generally employed.
On the other hand, with the improvement of computers, other
numerical methods have been developed to provide a more
straightforward methodology of solution. Analytical exact approaches
seem to have been continuously neglected due to the difficulty
inherent in their application; notwithstanding, they are indispensable
to validate numerical routines. Lagrange's Inversion Theorem is a
simple mathematical tool which has proved to be widely applicable to
engineering problems. In short, it provides the solution to implicit
equations by means of an infinite series. To show the validity of this
method, the three-parameter infiltration equation is, for the first time,
analytically and exactly solved. After manipulating the series,
closed-form solutions are presented as H-functions.
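For reference, the classical statement of the theorem (the standard Lagrange–Bürmann form, not a result unique to this paper): if $w$ is defined implicitly by $w = a + x\,\varphi(w)$ with $\varphi$ analytic and $\varphi(a) \neq 0$, then for any analytic $f$,

```latex
f(w) \;=\; f(a) \;+\; \sum_{k=1}^{\infty} \frac{x^{k}}{k!}
\left[ \frac{\mathrm{d}^{\,k-1}}{\mathrm{d}w^{\,k-1}}
\Bigl( f'(w)\,\varphi(w)^{k} \Bigr) \right]_{w=a}
```

Applying this expansion to the implicit infiltration equation produces the infinite series referred to above.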
Abstract: Numerical analysis naturally finds applications in all
fields of engineering and the physical sciences, but in the
21st century, the life sciences and even the arts have adopted
elements of scientific computation. Numerical data analysis has
become a key process in research and development across all of these fields [6].
In this paper we attempt to analyze the specified numerical
patterns using association rule mining techniques, with minimum
confidence and minimum support as the mining criteria. The
extracted rules and the analyzed results are graphically
demonstrated. Association rules are a simple but very useful form of
data mining that describe the probabilistic co-occurrence of certain
events within a database [7]. They were originally designed to
analyze market-basket data, estimating the likelihood of items being
purchased together within the same transaction.
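As a minimal illustration of the two mining criteria mentioned above (toy transactions, not the paper's dataset), support and confidence for a candidate rule A → B can be computed as:

```python
# Toy market-basket transactions (illustrative, not the paper's data).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter", "bread"},
    {"milk"},
]

def support(itemset, transactions):
    # Fraction of transactions containing every item in the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    # Estimated P(consequent | antecedent).
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

print(support({"bread", "milk"}, transactions))       # 0.5
print(confidence({"bread"}, {"milk"}, transactions))  # ~0.667
```

A rule is kept only when both values exceed the chosen minimum-support and minimum-confidence thresholds.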
Abstract: A direct search approach to determine optimal reservoir operation is proposed with ant colony optimization for continuous domains (ACOR). The model is applied to a single-reservoir system to determine the optimum releases over 42 years of monthly steps. A disadvantage of ant colony based methods, and of ACOR in particular, is their large computational run time. In this study a highly effective procedure for decreasing run time is developed. The results are compared with those of a GA-based model.
Abstract: A number of automated shot-change detection
methods for indexing a video sequence to facilitate browsing and
retrieval have been proposed in recent years. This paper focuses
on the simulation of video shot-boundary detection using a
color-histogram method in which scaling of the histogram
metrics is an added feature. The difference between the histograms of
two consecutive frames is evaluated, yielding the metric. The metric
is then scaled to avoid ambiguity and to enable the choice of an apt
threshold for any type of video, reducing minor errors due to
flashlight, camera motion, etc. Two sample videos with a resolution
of 352 x 240 pixels are used here with the color-histogram approach
on uncompressed media. An attempt is made at the retrieval of color
video. The simulation is performed for abrupt changes in video and
yields 90% recall and precision.
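A minimal sketch of the histogram-difference metric with scaling (synthetic single-channel frames, not the paper's exact implementation):

```python
# Illustrative sketch: bin-wise absolute difference between intensity
# histograms of consecutive frames, scaled to [0, 1] so one threshold
# can work across different videos.
def histogram(frame, bins=8, levels=256):
    # frame: flat list of pixel intensities for one colour channel.
    h = [0] * bins
    for p in frame:
        h[p * bins // levels] += 1
    return h

def scaled_difference(f1, f2, bins=8):
    h1, h2 = histogram(f1, bins), histogram(f2, bins)
    diff = sum(abs(a - b) for a, b in zip(h1, h2))
    return diff / (2 * len(f1))  # 0 = identical histograms, 1 = disjoint

frame_a = [10] * 100   # hypothetical dark frame
frame_b = [200] * 100  # hypothetical bright frame (abrupt cut)
print(scaled_difference(frame_a, frame_b))  # 1.0 -> shot change if above threshold
```

A shot boundary is declared whenever the scaled difference exceeds the chosen threshold.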
Abstract: This paper introduces a measure of similarity between
two clusterings of the same dataset produced by two different
algorithms, or even the same algorithm (K-means, for instance, with
different initializations usually produces different clusterings of
the same dataset). We then apply the measure to calculate the
similarity between pairs of clusterings, with special interest directed
at comparing the similarity between various machine clusterings and
human clustering of datasets. The similarity measure thus can be used
to identify the best (in terms of most similar to human) clustering
algorithm for a specific problem at hand. Experimental results
pertaining to the text categorization problem of a Portuguese corpus
(wherein a translation-into-English approach is used) are presented, as well as results on the well-known benchmark IRIS dataset. The
significance and other potential applications of the proposed measure
are discussed.
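The paper's similarity measure is its own; as a stand-in, the classical Rand index sketches the general idea of scoring agreement between two clusterings of the same items:

```python
from itertools import combinations

# Rand index: fraction of item pairs on which two clusterings agree
# (both put the pair in one cluster, or both separate it).
def rand_index(labels_a, labels_b):
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += same_a == same_b
        total += 1
    return agree / total

machine = [0, 0, 1, 1, 2]  # hypothetical K-means labels
human   = [0, 0, 1, 1, 1]  # hypothetical human labels
print(rand_index(machine, human))  # 0.8
```

Comparing each machine clustering against the human clustering with such a score identifies the most human-like algorithm.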
Abstract: In this paper, we present a novel, principled approach to resolving the remaining problems of the substitution technique of audio steganography. Using the proposed genetic algorithm, message bits are embedded into multiple, vague, and higher LSB layers, resulting in increased robustness. Robustness is increased in particular against intentional attacks that try to reveal the hidden message, as well as against unintentional attacks such as noise addition.
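A minimal sketch of plain LSB-layer substitution (the genetic search over embedding layouts that the paper proposes is not reproduced here; sample values are hypothetical):

```python
# Illustrative LSB substitution into a chosen bit layer of audio samples.
def embed_bits(samples, bits, layer=0):
    # layer 0 is the least significant bit; higher layers are more
    # robust to noise but distort the cover signal more.
    out = list(samples)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~(1 << layer)) | (b << layer)
    return out

def extract_bits(samples, count, layer=0):
    return [(s >> layer) & 1 for s in samples[:count]]

cover = [1000, 1001, 1002, 1003]        # hypothetical audio samples
stego = embed_bits(cover, [1, 0, 1, 1], layer=3)
print(extract_bits(stego, 4, layer=3))  # [1, 0, 1, 1]
```

The genetic algorithm's role in the paper is to choose which samples and layers carry the bits so that the embedding is hard to detect.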
Abstract: Currently, one of the main directions of development is
the clustering of the economic operations of Kazakhstan, providing
for the organization and concentration of production capacity in one
region or the most optimal system. In the modern economic literature,
clustering is regarded as one of the most effective tools to ensure
the competitiveness of businesses and to improve their performance.
Abstract: Quality costs are the costs associated with preventing,
finding, and correcting defective work. Since the main language of
corporate management is money, quality-related costs act as means of
communication between the staff of quality engineering departments
and the company managers. The objective of quality engineering is to
minimize the total quality cost across the life of the product. Quality
costs provide a benchmark against which improvement can be
measured over time, as well as a rupee-based report on quality
improvement efforts. They are an effective tool to identify, prioritize,
and select quality improvement projects. A review of the literature
showed that a simplified methodology for collecting quality cost data
in a manufacturing industry was required. A quantified standard
methodology is proposed for collecting data on the various elements
of the quality cost categories for the manufacturing industry. In the
light of the research carried out so far, it is also felt necessary to
standardise the cost elements in each of the prevention, appraisal,
internal failure, and external failure categories. Here an attempt is
made to standardise the various cost elements applicable to the
manufacturing industry, and data are collected using the proposed
quantified methodology. This paper discusses a case study carried out
in the luggage manufacturing industry.
Abstract: Electronic systems are at the core of everyday life.
They form an integral part of financial networks, mass transit,
telephone systems, power plants, and personal computers. Electronic
systems are increasingly based on complex VLSI (Very Large Scale
Integration) circuits. Initial electronic design automation is
concerned with the design and production of VLSI systems. The next
important step in creating a VLSI circuit is Physical Design. The
input to the physical design is a logical representation of the system
under design. The output of this step is the layout of a physical
package that optimally or near optimally realizes the logical
representation. Physical design problems are combinatorial in nature
and of large problem sizes. Darwin observed that, as variations are
introduced into a population with each new generation, the less-fit
individuals tend to die out in the competition for basic necessities.
This survival-of-the-fittest principle leads to the evolution of species.
The objective of Genetic Algorithms (GAs) is to find an optimal
solution to a problem. Since GAs are heuristic procedures that can
function as optimizers, they are not guaranteed to find the optimum,
but they are able to find acceptable solutions for a wide range of
problems. This survey paper presents a study of efficient algorithms
for VLSI physical design and observes the common traits of the
superior contributions.
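A generic GA skeleton illustrating the selection/crossover/mutation cycle described above (one-max stands in for a real placement objective such as wirelength or area; real VLSI work uses problem-specific encodings):

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # survival of the fittest
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)         # point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# One-max (count of 1-bits) stands in for a placement objective.
best = genetic_algorithm(sum)
print(sum(best))  # approaches 20 for this toy objective
```

Heuristic as described above: the loop is not guaranteed to reach the optimum, only to converge toward acceptable solutions.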
Abstract: Security has been an important concern in
smart home systems. Because smart home networks consist of a wide
range of wired or wireless devices, there is a possibility of illegal
access to restricted data or devices. Password-based
authentication is widely used to identify authorized users, because this
method is cheap, easy, and quite accurate. In this paper, a neural
network is trained to store the passwords instead of using a verification
table. This method is useful in solving security problems that
occur in some authentication systems. The conventional way to
train the network, Backpropagation (BPN), requires a long
training time. Hence, a faster training algorithm, Resilient
Backpropagation (RPROP), is applied to the MLP neural
network to accelerate the training process. For the data, 200
sets of UserIDs and passwords were created and encoded into binary
as the input. Simulations were carried out to evaluate the
performance for different numbers of hidden neurons and combinations
of transfer functions. Mean Square Error (MSE), training time, and
number of epochs are used to determine the network performance.
From the results obtained, using Tansig and Purelin in the hidden and
output layers with 250 hidden neurons gave the best performance. As
a result, a password-based user authentication system for smart homes
using a neural network has been developed successfully.
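A sketch of the input-encoding step, under the assumption (not stated in the abstract) that credentials are packed character-by-character into fixed-length bit vectors suitable as MLP inputs:

```python
# Hypothetical encoding scheme: pad/trim the credential string to a
# fixed length, then expand each character to its 8-bit ASCII code.
def to_binary_vector(text, width=8, length=16):
    text = text.ljust(length)[:length]  # pad or trim to fixed length
    bits = []
    for ch in text:
        bits.extend(int(b) for b in format(ord(ch), f"0{width}b"))
    return bits

vec = to_binary_vector("user01:secret")  # hypothetical UserID:password pair
print(len(vec))  # 128 bits = 16 chars x 8 bits
```

Each such vector would form one training input to the RPROP-trained network.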
Abstract: The main aim of this study is to identify the most
influential variables that cause defects on the items produced by a
casting company located in Turkey. To this end, one of the items
produced by the company with high defective percentage rates is
selected. Two approaches, regression analysis and decision trees, are
used to model the relationship between process parameters and
defect types. Although the logistic regression models failed, the
decision tree model gives meaningful results. Based on these results,
it can be claimed that the decision tree approach is a promising
technique for determining the most important process variables.
Abstract: Integrated fiber-wireless (FiWi) access networks are a viable solution for delivering high-profile quadruple-play services. Passive optical networks (PONs) integrated with wireless access networks provide ubiquitous characteristics for high-bandwidth applications. PON operation is improved by employing a variety of multiplexing techniques, one of which is the time division/wavelength division multiplexed (TDM/WDM) architecture that improves the performance of optical-wireless access networks. This paper proposes a novel feedback-based TDM/WDM-PON architecture and introduces a model of integrated PON-FiWi networks. The feedback-based link architecture is an efficient solution to improve the performance of optical-line-terminal (OLT) and inter-link optical-network-unit (ONU) communication. Furthermore, the feedback-based WDM/TDM-PON architecture is compared with existing architectures in terms of network throughput capacity.
Abstract: Automated operations based on voice commands will become more and more important in many applications, including robotics, maintenance operations, etc. However, voice command recognition rates drop considerably in non-stationary and chaotic noise environments. In this paper, we aim to significantly improve speech recognition rates in non-stationary noise environments. First, 298 Navy acronyms were selected for automatic speech recognition. Data sets were collected under 4 types of noisy environments: factory, buccaneer jet, babble noise in a canteen, and destroyer. Within each noisy environment, 4 levels (5 dB, 15 dB, 25 dB, and clean) of Signal-to-Noise Ratio (SNR) were introduced to corrupt the speech. Second, a new algorithm to estimate speech and no-speech regions was developed, implemented, and evaluated. Third, extensive simulations were carried out. It was found that the combination of the new algorithm, the proper selection of the language model, and a customized training of the speech recognizer based on clean speech yielded very high recognition rates, between 80% and 90% for the four different noisy conditions. Fourth, extensive comparative studies were also carried out.
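A sketch of the SNR-controlled corruption step used to prepare such data sets (generic formulation with synthetic signals; the paper's actual noise recordings are not reproduced):

```python
import math, random

# Scale a noise signal so that the corrupted speech sits at a target SNR:
# SNR_dB = 10 * log10(P_signal / P_scaled_noise).
def add_noise_at_snr(clean, noise, snr_db):
    p_signal = sum(s * s for s in clean) / len(clean)
    p_noise = sum(n * n for n in noise) / len(noise)
    scale = math.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return [s + scale * n for s, n in zip(clean, noise)]

rng = random.Random(0)
clean = [math.sin(0.1 * t) for t in range(1000)]   # synthetic "speech"
noise = [rng.gauss(0, 1) for _ in range(1000)]     # synthetic noise
noisy = add_noise_at_snr(clean, noise, 15)         # 15 dB condition
```

Repeating this at 5, 15, and 25 dB against each noise type yields the corrupted test conditions described above.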
Abstract: Knowledge of an organization does not merely reside
in structured form of information and data; it is also embedded in
unstructured form. The discovery of such knowledge is particularly
difficult as it is dynamic, scattered, massive, and
multiplying at high speed. Conventional methods of managing
unstructured information are considered too resource-demanding and
time-consuming to cope with the rapid information growth.
In this paper, a Multi-faceted and Automatic Knowledge
Elicitation System (MAKES) is introduced for the purpose of
discovery and capture of organizational knowledge. A trial
implementation has been conducted in a public organization to
achieve the objective of decision capture and navigation from a
number of meeting minutes which are autonomously organized,
classified and presented in a multi-faceted taxonomy map at both the
document and content levels. Key concepts such as the critical
decisions made, key knowledge workers, knowledge flow, and the
relationships among them are elicited and displayed in predefined
knowledge models and maps. Hence, the structured knowledge can be
retained, shared, and reused.
Conducting Knowledge Management with MAKES reduces work
in searching and retrieving the target decision, saves a great deal of
time and manpower, and also enables an organization to keep pace
with the knowledge life cycle. This is particularly important when
the amount of unstructured information and data grows extremely
quickly. This system approach of knowledge management can
accelerate value extraction and creation cycles of organizations.
Abstract: Because support interference corrections are not properly
understood, engineers mostly rely on expensive dummy measurements
or CFD calculations. This paper presents a method based on uncorrected wind tunnel measurements and fast calculation techniques
(it is a hybrid method) to calculate wall interference, support interference and residual interference (when e.g. a support member
closely approaches the wind tunnel walls) for any type of wind tunnel and support configuration. The method provides a simple formula
for the calculation of the interference gradient. This gradient is
based on the uncorrected measurements and a successive calculation
of the slopes of the interference-free aerodynamic coefficients. For the latter purpose a new vortex-lattice routine is developed that corrects
the slopes for viscous effects. A test case of a measurement on a wing proves the value of this hybrid method as trends and orders of
magnitudes of the interference are correctly determined.
Abstract: Nonlinear propagation of ion-acoustic waves in a
self-gravitating dusty plasma consisting of warm positive ions,
isothermal two-temperature electrons and negatively charged dust
particles having charge fluctuations is studied using the reductive
perturbation method. It is shown that the nonlinear propagation of
ion-acoustic waves in such a plasma can be described by an uncoupled
third order partial differential equation which is a modified form of
the usual Korteweg-deVries (KdV) equation. From this nonlinear
equation, a new type of solution for the ion-acoustic wave is
obtained. The effects of two-temperature electrons, gravity and dust
charge fluctuations on the ion-acoustic solitary waves are discussed
with possible applications.
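For orientation, the unmodified KdV equation from which the modified form departs, together with its classical solitary-wave solution (standard textbook notation with generic coefficients $A$ and $B$, not the paper's specific form):

```latex
\frac{\partial \phi}{\partial \tau}
 + A\,\phi\,\frac{\partial \phi}{\partial \xi}
 + B\,\frac{\partial^{3} \phi}{\partial \xi^{3}} = 0,
\qquad
\phi(\xi,\tau) = \frac{3u_{0}}{A}\,
\operatorname{sech}^{2}\!\left(\sqrt{\frac{u_{0}}{4B}}\,(\xi - u_{0}\tau)\right)
```

Here $u_{0}$ is the solitary-wave speed; the gravity and dust-charge-fluctuation effects studied above modify this structure and hence the soliton profile.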
Abstract: This paper describes a feasibility study that
encompasses the research, development, and testing of a micro
communications sonobuoy deployable by maritime fixed-wing
Unmanned Aerial Vehicles (M-UAVs) and rotor-wing quadcopters,
both currently being developed by the University of
Adelaide. The micro communications sonobuoy is developed to act
as a seamless communication relay between an Autonomous
Underwater Vehicle (AUV) and an above water human operator
some distance away. Development of such a device would eliminate
the requirement of physical communication tethers attached to
submersible vehicles for control and data retrieval.
Abstract: Misalignment and unbalance are major concerns
in rotating machinery. When the power supply to any rotating system
is cut off, the system begins to lose the momentum gained during
sustained operation and finally comes to rest. The exact time period
from when the power is cut off until the rotor comes to rest is called
the Coast Down Time (CDT). The CDTs for different shaft cutoff
speeds were recorded at various misalignment and unbalance
conditions. The CDT reduction percentages were calculated for each
fault, and there is a specific correlation between the CDT reduction
percentage and the severity of the fault. In this paper, a radial basis
network, a new
generation of artificial neural networks, has been successfully
incorporated for the prediction of CDT for misalignment and
unbalance conditions. Radial basis network has been found to be
successful in the prediction of CDT for mechanical faults in rotating
machinery.
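A minimal Gaussian radial-basis-network prediction sketch (centers and weights below are hypothetical; the paper fits such a network to measured CDT data, which is not available here):

```python
import math

# Output is a weighted sum of Gaussian bumps placed at the centers.
def rbf_predict(x, centers, weights, sigma=1.0):
    return sum(w * math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
               for c, w in zip(centers, weights))

# Hypothetical: centers are shaft cutoff speeds (rpm), weights are
# values fitted so the network reproduces the recorded CDTs (s).
centers = [600.0, 900.0, 1200.0]
weights = [8.5, 14.2, 21.0]
print(round(rbf_predict(900.0, centers, weights, sigma=150.0), 2))
```

In practice the centers and weights are learned from the recorded CDT measurements, and the trained network then predicts CDT for unseen fault conditions.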
Abstract: The vibrations produced by a single point defect on
various parts of the bearing under constant radial load are predicted
by using a theoretical model. The model includes variation in the
response due to the effects of bearing dimensions, rotating frequency,
and distribution of load. The excitation forces are generated when the
defects on the races strike the rolling elements. In the case of an outer
ring defect, the generated pulses have the periodicity of the outer ring
defect frequency, whereas for an inner ring defect, the pulses have the
periodicity of the inner ring defect frequency. The effort has been carried
out in preparing the physical model of the system. Different defect
frequencies are obtained and are used to find out the amplitudes of
the vibration due to excitation of the bearing parts. Increase in the
radial load or severity of the defect produces a significant change in
bearing signature characteristics.
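The outer- and inner-race defect frequencies mentioned above follow from standard rolling-element bearing kinematics; a small sketch with hypothetical bearing dimensions:

```python
import math

# Standard kinematic defect frequencies: n = number of rolling elements,
# fr = shaft rotation frequency (Hz), d = ball diameter, D = pitch
# diameter (same units as d), phi = contact angle (rad).
def defect_frequencies(n, fr, d, D, phi=0.0):
    ratio = (d / D) * math.cos(phi)
    bpfo = (n / 2) * fr * (1 - ratio)  # outer-race defect frequency
    bpfi = (n / 2) * fr * (1 + ratio)  # inner-race defect frequency
    return bpfo, bpfi

# Hypothetical bearing: 9 balls, 30 Hz shaft speed, d = 7.9, D = 38.5.
bpfo, bpfi = defect_frequencies(n=9, fr=30.0, d=7.9, D=38.5)
print(round(bpfo, 1), round(bpfi, 1))
```

These frequencies locate the expected pulse trains in the vibration signature; their amplitudes then depend on load and defect severity as discussed above.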
Abstract: This paper describes a probabilistic method for
three-dimensional object recognition using a shared pool of surface
signatures. This technique uses flatness, orientation, and convexity
signatures that encode the surface of a free-form object into three
discriminative vectors, and then creates a shared pool of data by
clustering the signatures using a distance function. The method
applies Bayes' rule in the recognition process, and it is extensible
to a large collection of three-dimensional objects.
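A minimal sketch of the Bayes-rule decision step (hypothetical likelihoods and priors; in the paper the likelihoods would come from the clustered signature pool):

```python
# Pick the object class with the highest posterior; the evidence term
# cancels in the argmax, so posterior(c) ∝ P(signature | c) * P(c).
def classify(likelihoods, priors):
    return max(priors, key=lambda c: likelihoods[c] * priors[c])

likelihoods = {"mug": 0.40, "phone": 0.10, "toy": 0.05}  # hypothetical
priors      = {"mug": 0.30, "phone": 0.50, "toy": 0.20}  # hypothetical
print(classify(likelihoods, priors))  # "mug" (0.12 > 0.05 > 0.01)
```

Extending to more objects only grows the dictionaries, which is why the approach scales to large collections.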