Abstract: This paper deals with the analysis and development of a noise-reduction transformer that acts as a filter against conducted noise. Two prototype noise-reduction transformers with different output voltages are proposed. To determine an optimum design for the noise-reduction transformer, noise attenuation characteristics are discussed based on experiments and equivalent-circuit analysis. The analysis gives a relation between the circuit parameters and the noise attenuation. A high-performance step-down noise-reduction transformer for direct power supply to electronic equipment is developed. The input voltage of the transformer is 100 V and the output voltage is 5 V. Frequency characteristics of the noise attenuation are discussed, and prevention of pulse-noise transmission is demonstrated. The normal-mode noise attenuation of this transformer is −80 dB, and the common-mode attenuation exceeds −90 dB. The step-down noise-reduction transformer eliminates pulse noise efficiently.
Abstract: Data mining uses a variety of techniques, each of which is useful for some particular task. It is important to have a deep understanding of each technique and to be able to perform sophisticated analysis. In this article we describe a tool built to simulate a variation of the Kohonen network that performs unsupervised clustering and supports the entire data mining process up to results visualization. A graphical representation helps the user work out a strategy for optimizing the classification by adding, moving, or deleting a neuron in order to change the number of classes. The tool can automatically suggest a strategy for optimizing the number of classes, and it also supports both tree classifications and semi-lattice organizations of the classes, giving users the possibility of passing from one class to the ones with which it shares some aspects. Examples of using tree and semi-lattice classifications are given to illustrate advantages and problems. The tool is applied to classify macroeconomic data reporting the imports and exports of the most developed countries. It is possible to classify the countries based on their economic behaviour and to use the tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation. Possible interrelationships between the classes and their meaning are also discussed.
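The abstract does not give the network's equations; as a rough illustration of the kind of model such a tool simulates, a minimal one-dimensional Kohonen (self-organizing map) trainer can be sketched as follows, where the number of units plays the role of the number of classes. All names, learning-rate and neighbourhood schedules here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def train_som(data, n_units, n_iter=200, lr0=0.5, seed=0):
    """Minimal 1-D Kohonen network: each unit is a class prototype."""
    rng = np.random.default_rng(seed)
    # initialize prototypes from randomly chosen samples
    weights = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for t in range(n_iter):
        lr = lr0 * (1 - t / n_iter)                          # decaying learning rate
        sigma = max(n_units / 2 * (1 - t / n_iter), 0.5)     # shrinking neighbourhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # neighbourhood function pulls units near the winner toward the sample
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def classify(data, weights):
    """Assign each sample to its nearest prototype (its class)."""
    return np.array([np.argmin(np.linalg.norm(weights - x, axis=1)) for x in data])
```

Adding, moving, or deleting a row of `weights` corresponds to adding, moving, or deleting a neuron, i.e. changing the number of classes as described above.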
Abstract: In this paper, damage in clamped-free, clamped-clamped, and free-free beams is analyzed considering samples with and without structural modifications. The damage location is investigated by the use of bispectrum and wavelet analysis. The mathematical models are obtained using 2D elasticity theory and the Finite Element Method (FEM). The numerical and experimental data are matched using the Particle Swarm Optimizer (PSO) method, and in this way it is possible to adjust the localization and the severity of the damage. The experimental data are obtained from accelerometers placed along the sample. The system is excited using an impact hammer.
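The abstract names the Particle Swarm Optimizer but gives no details of its configuration; as a hedged, generic sketch (standard global-best PSO, not necessarily the authors' variant), the update rule used to fit a model to data looks like this:

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best Particle Swarm Optimizer: each velocity blends
    inertia, a pull toward the particle's own best, and a pull toward the
    swarm's best. f is the objective to minimize; bounds is [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles inside bounds
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

In the damage-identification setting described above, `f` would be the misfit between numerical and experimental responses, and the searched parameters would encode damage location and severity.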
Abstract: The antiseismic properties of telecommunication equipment are very important for assessing damage and carrying out restoration after an earthquake. Telecommunication business operators set seismic standards for their equipment. These standards are designed to simulate real seismic situations and usually define the minimum value of the first natural frequency of the equipment, or the maximum allowable displacement of the top of the equipment relative to its bottom. Using finite element analysis, the natural frequency can be obtained with high accuracy, but the relative displacement of the top of the equipment is difficult to predict accurately. Furthermore, when equipment mounted on an access floor is simulated, predicting the relative displacement of its top becomes even more difficult.
In this study, using a large body of experimental data, an empirical formula is proposed to forecast the relative displacement of the top of the equipment. The formula also reveals which physical quantities are related to the relative displacement.
Abstract: The paper proposes a methodology to process the signals coming from Transcranial Magnetic Stimulation (TMS) in order to identify the pathology and evaluate the therapy for patients affected by dementia. In particular, a fuzzy model is developed to identify the dementia of patients affected by Subcortical Ischemic Vascular Dementia (SIVD) and to measure the effect of repetitive TMS on their motor performances. A tool is also presented to support this analysis.
Abstract: Frequent pattern discovery over a data stream is a hard problem because the continuous nature of a stream does not allow revisiting each data element. Furthermore, the pattern discovery process must be fast in order to produce timely results. Based on these requirements, we propose an approximate approach to tackle the problem of discovering frequent patterns over a continuous stream. Our approximation algorithm is intended to be applied to process a stream prior to the pattern discovery process. The results of approximate frequent pattern discovery are reported in the paper.
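The abstract does not specify its approximation algorithm; the classic Lossy Counting scheme of Manku and Motwani is one well-known way to realize this kind of one-pass, bounded-error stream preprocessing, shown here for single items rather than general patterns, purely as an illustration:

```python
from math import ceil

def lossy_count(stream, epsilon):
    """Lossy Counting: approximate item frequencies in one pass.
    Counts undercount true frequencies by at most epsilon * N."""
    width = ceil(1 / epsilon)              # bucket width
    counts, deltas = {}, {}
    bucket = 1
    for n, item in enumerate(stream, 1):
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1      # maximum possible undercount so far
        if n % width == 0:                 # end of bucket: prune rare items
            for it in [i for i in counts if counts[i] + deltas[i] <= bucket]:
                del counts[it], deltas[it]
            bucket += 1
    return counts, deltas

def frequent(stream_len, counts, support, epsilon):
    """Items guaranteed to include all with true frequency >= support * N
    (no false negatives; false positives have frequency > (support-epsilon)*N)."""
    return {i for i, c in counts.items() if c >= (support - epsilon) * stream_len}
```

The memory bound is O((1/epsilon) log(epsilon * N)) entries, which is what makes a single pass over an unbounded stream feasible.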
Abstract: Many systems in the natural world exhibit chaos, or non-linear behavior, the complexity of which is so great that they appear to be random. Identification of chaos in experimental data is essential for characterizing the system and for analyzing the predictability of the data under analysis. The Lyapunov exponents provide a quantitative measure of the sensitivity to initial conditions and are the most useful dynamical diagnostic for chaotic systems. However, it is difficult to accurately estimate the Lyapunov exponents of chaotic signals that are corrupted by random noise. In this work, a method for estimating Lyapunov exponents from noisy time series using the unscented transformation is proposed. The proposed methodology was validated using time series obtained from known chaotic maps. The objective of the work, the proposed methodology, and the validation results are discussed in detail.
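The unscented-transformation method itself is not detailed in the abstract. As background, for a noise-free known map the largest Lyapunov exponent can be computed directly as the orbit average of log|f'(x)|, which is the kind of ground truth a validation against known chaotic maps would compare with. For the logistic map at r = 4 the exact value is ln 2:

```python
import math

def lyapunov_logistic(r, x0=0.1, n_transient=1000, n=100_000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| with f'(x) = r*(1-2x)."""
    x = x0
    for _ in range(n_transient):           # discard transient behaviour
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return s / n
```

A positive exponent confirms sensitivity to initial conditions; the difficulty the abstract addresses is that this simple estimator breaks down once the observed series is contaminated by measurement noise.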
Abstract: The detection of outliers is essential because of the serious interpretative problems they produce in both linear and nonlinear regression analysis. Much work has been done on the identification of outliers in linear regression, but not in nonlinear regression. In this article we propose several outlier detection techniques for nonlinear regression. The main idea is to use the linear approximation of a nonlinear model and consider the gradient as the design matrix. Subsequently, the detection techniques are formulated. Six detection measures are developed and combined with three estimation techniques: Least-Squares, M-, and MM-estimators. The study shows that among the six measures, only the studentized residual and Cook's Distance, combined with the MM-estimator, are consistently capable of identifying the correct outliers.
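The exact measures are not reproduced in the abstract. A minimal sketch of the stated idea (linearize the model, treat the gradient/Jacobian as the design matrix, then form studentized residuals and Cook's distance) might look like this, using an illustrative model y = a·exp(bx) fitted by ordinary Gauss-Newton rather than the paper's robust M/MM-estimators:

```python
import numpy as np

def gauss_newton(x, y, theta, n_iter=50):
    """Fit y ~ a*exp(b*x) by Gauss-Newton; returns estimate, fit, and Jacobian."""
    for _ in range(n_iter):
        a, b = theta
        f = a * np.exp(b * x)
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])  # gradient
        theta = theta + np.linalg.lstsq(J, y - f, rcond=None)[0]
    a, b = theta
    f = a * np.exp(b * x)
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
    return theta, f, J

def diagnostics(y, f, J):
    """Studentized residuals and Cook's distance with J as the design matrix."""
    n, p = J.shape
    r = y - f
    H = J @ np.linalg.inv(J.T @ J) @ J.T          # hat matrix from the gradient
    h = np.diag(H)
    s2 = r @ r / (n - p)
    t = r / np.sqrt(s2 * (1 - h))                 # internally studentized residual
    cook = t ** 2 * h / (p * (1 - h))             # Cook's distance
    return t, cook
```

A planted outlier should then dominate both measures, which is the behaviour the study evaluates across estimators.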
Abstract: Web applications have become very complex and crucial, especially when combined with areas such as CRM (Customer Relationship Management) and BPR (Business Process Reengineering). Consequently, the scientific community has focused its attention on Web application design, development, analysis, and testing, studying and proposing methodologies and tools. This paper proposes an approach to automatic multi-dimensional concern mining for Web applications, based on concept analysis, impact analysis, and token-based concern identification. This approach lets the user analyse and traverse the Web software relevant to a particular concern (concept, goal, purpose, etc.) via multi-dimensional separation of concerns, in order to document, understand, and test Web applications. The technique was developed in the context of the WAAT (Web Applications Analysis and Testing) project. A semi-automatic tool to support it is currently under development.
Abstract: This paper focuses on the cost and profit analysis of a single-server Markovian queuing system with two priority classes. Functions for the total expected cost, revenue, and profit of the system are constructed and optimized with respect to the service rates of the lower and higher priority classes. A computing algorithm based on a fast-converging numerical method has been developed to solve the system of nonlinear equations arising from the mathematical analysis. A novel performance measure of cost and profit analysis, together with its economic interpretation for a system with priority classes, is discussed. On the basis of the computed tables, observations are drawn to illustrate the effect of varying the model parameters.
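The paper's cost functions are not given in the abstract. As a simplified single-class illustration of the same kind of computation, consider the classical M/M/1 cost C(μ) = c_s·μ + c_w·L with mean number in system L = λ/(μ − λ); its minimizer has the known closed form μ* = λ + √(c_w·λ/c_s), and a Newton iteration (a fast-converging numerical method of the sort the abstract mentions) recovers it:

```python
def optimal_service_rate(lam, c_service, c_wait, mu0=None, tol=1e-10, max_iter=100):
    """Newton's method for the service rate minimizing the classical M/M/1
    cost C(mu) = c_service*mu + c_wait*L, with L = lam/(mu - lam), mu > lam."""
    mu = 2.0 * lam if mu0 is None else mu0
    for _ in range(max_iter):
        gap = mu - lam
        d1 = c_service - c_wait * lam / gap ** 2   # C'(mu)
        d2 = 2.0 * c_wait * lam / gap ** 3         # C''(mu) > 0 for mu > lam
        step = d1 / d2
        mu -= step
        if abs(step) < tol:
            break
    return mu
```

For λ = 2, c_s = 1, c_w = 8 the closed form gives μ* = 2 + √16 = 6, and the iteration converges to it quadratically. The two-class problem in the paper replaces C with coupled cost equations in both service rates, solved by the same kind of iteration.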
Abstract: We describe a novel method for removing noise of unknown variance from microarrays, operating in the wavelet domain. The method is based on smoothing the coefficients of the highest subbands. Specifically, we decompose the noisy microarray into wavelet subbands, apply smoothing within each highest subband, and reconstruct the microarray from the modified wavelet coefficients. This process is applied a single time, and exclusively to the first level of decomposition; i.e., in most cases a multiresolution analysis is not necessary. Denoising results compare favorably with most methods currently in use.
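The abstract's smoothing operator and 2-D wavelet are not specified. A one-dimensional sketch of the idea (a single decomposition level, with smoothing rather than thresholding applied in the detail subband) using a Haar transform and a small averaging kernel, both illustrative assumptions, is:

```python
import numpy as np

def haar_denoise(signal, kernel=(0.25, 0.5, 0.25)):
    """One-level Haar DWT; smooth the detail (highest) subband instead of
    thresholding it, then reconstruct. Requires an even-length signal."""
    x = np.asarray(signal, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)       # approximation subband
    d = (x[0::2] - x[1::2]) / np.sqrt(2)       # detail subband (noise lives here)
    d = np.convolve(d, kernel, mode='same')    # smoothing within the subband
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)           # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

Because roughly half of the white-noise energy lands in the detail subband while a smooth signal contributes little there, attenuating that subband reduces noise with small distortion, and no deeper multiresolution levels are needed.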
Abstract: A generalization of the concept of Feistel Networks (FN), known as the Extended Feistel Network (EFN), is examined. An EFN splits the input blocks into n > 2 sub-blocks. Like a conventional FN, an EFN consists of a series of rounds in which at least one sub-block is subjected to an F-function. The function plays a key role in the diffusion process due to its completeness property. It is also important to note that in an EFN the F-function is the most computationally expensive operation in a round. The aim of this paper is to determine a suitable type of EFN for a scalable cipher. This is done by analyzing the threshold number of rounds needed for different types of EFN to achieve the completeness property, as well as the number of F-functions required in the network. The work focuses on EFN-Type I, Type II, and Type III only. The analysis finds that EFN-Type II and Type III diffuse at the same rate, and both diffuse faster than EFN-Type I. Since EFN-Type II uses fewer F-functions than EFN-Type III, Type II is the most suitable EFN for use in a scalable cipher.
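The completeness analysis can be reproduced symbolically without any concrete cipher: track, for each sub-block, the set of input sub-blocks it depends on, and count rounds until every set is full. The sketch below encodes one common definition of a Type-II network (an F-function from each even-indexed sub-block into its right neighbour, followed by a rotation); the indexing convention is an assumption, not necessarily the paper's exact definition:

```python
def efn2_round(deps):
    """One EFN Type-II round at the level of dependency sets: deps[i] is the
    set of input sub-blocks that sub-block i currently depends on.
    Assumes an even number of sub-blocks."""
    n = len(deps)
    new = [set(s) for s in deps]
    for i in range(0, n, 2):
        new[i + 1] |= deps[i]          # b[i+1] ^= F(b[i]): inherits b[i]'s deps
    return new[1:] + new[:1]           # rotate sub-blocks left by one

def rounds_to_completeness(n):
    """Threshold number of rounds until every output sub-block depends on
    every input sub-block (the completeness property)."""
    deps = [{i} for i in range(n)]
    r = 0
    while not all(len(s) == n for s in deps):
        deps = efn2_round(deps)
        r += 1
    return r
```

Running the same bookkeeping with Type-I and Type-III round functions is how the diffusion rates of the three types can be compared, alongside the per-round F-function counts (n/2 for Type II versus n − 1 for Type III).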
Abstract: The use of mechanical simulation (in particular, finite element analysis) requires the management of assumptions in order to analyse a real, complex system. In finite element analysis (FEA), two modeling steps require assumptions before the computations can be carried out and results obtained: the building of the physical model and the building of the simulation model. The simplification assumptions made on the analysed system in these two steps can generate two kinds of errors: physical modeling errors (mathematical model, domain simplifications, material properties, boundary conditions, and loads) and mesh discretization errors. This paper proposes a mesh-adaptive method based on an h-adaptive scheme combined with an error estimator in order to choose the mesh of the simulation model. The method allows the mesh to be chosen so as to control both the cost and the quality of the finite element analysis.
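The error estimator and refinement scheme are not specified in the abstract, but the control loop they drive can be shown in one dimension: an element is split whenever a simple error estimate exceeds a tolerance, and the loop stops once every element passes. The estimator below (midpoint value versus linear interpolant) is an illustrative stand-in, not the paper's:

```python
def adapt_mesh(f, a, b, tol, max_passes=30):
    """1-D h-adaptive sketch: repeatedly split every element whose
    interpolation-error estimate exceeds tol."""
    nodes = [a, b]
    for _ in range(max_passes):
        new, refined = [nodes[0]], False
        for left, right in zip(nodes, nodes[1:]):
            mid = 0.5 * (left + right)
            # error estimate: how badly a linear element represents f here
            if abs(f(mid) - 0.5 * (f(left) + f(right))) > tol:
                new.append(mid)            # h-refinement: split the element
                refined = True
            new.append(right)
        nodes = new
        if not refined:                    # every element satisfies the estimator
            break
    return nodes
```

The resulting mesh is dense only where the estimator demands it, which is exactly the cost/quality trade-off the abstract describes: `tol` controls quality, and the non-uniform refinement controls cost.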
Abstract: Missing data is a persistent problem in almost all areas of empirical research. Missing data must be treated very carefully, as data plays a fundamental role in every analysis, and improper treatment can distort the analysis or generate biased results. In this paper, we compare and contrast various imputation techniques on missing data sets and make an empirical evaluation of these methods so as to construct quality software models. Our empirical study is based on NASA's two public datasets, KC4 and KC1. The original data sets, of 125 and 2107 cases respectively, contain no missing values. They were used to create Missing at Random (MAR) data, and Listwise Deletion (LD), Mean Substitution (MS), Interpolation, Regression with an error term, and Expectation-Maximization (EM) approaches were applied to compare the effects of the various techniques.
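The NASA data sets are not reproduced here, but the contrast between the simpler techniques can be sketched on synthetic data. The helper names and the simple random mask standing in for a MAR mechanism are illustrative assumptions:

```python
import numpy as np

def make_mar(X, frac, seed=0):
    """Blank out a random fraction of column 1 (a simple MAR-style mask)."""
    rng = np.random.default_rng(seed)
    Xm = X.copy()
    idx = rng.choice(len(X), int(frac * len(X)), replace=False)
    Xm[idx, 1] = np.nan
    return Xm

def listwise_deletion(Xm):
    """LD: drop every case containing any missing value."""
    return Xm[~np.isnan(Xm).any(axis=1)]

def mean_substitution(Xm):
    """MS: replace missing values with the observed column mean."""
    X = Xm.copy()
    col = X[:, 1]
    col[np.isnan(col)] = np.nanmean(col)
    return X

def regression_imputation(Xm):
    """Impute column 1 from column 0 by least squares on complete cases."""
    X = Xm.copy()
    miss = np.isnan(X[:, 1])
    A = np.column_stack([np.ones((~miss).sum()), X[~miss, 0]])
    beta, *_ = np.linalg.lstsq(A, X[~miss, 1], rcond=None)
    X[miss, 1] = beta[0] + beta[1] * X[miss, 0]
    return X
```

When the variables are strongly correlated, regression imputation reconstructs the missing entries far more accurately than mean substitution, which collapses them all to one value; this is the kind of difference the empirical comparison in the paper quantifies on real software data.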
Abstract: Using finite element analyses, this paper discusses the effects of temperature-dependent material properties on the stress and temperature fields in a cracked metal plate under an electric current load. Realistic but more complicated results are obtained when the temperature-dependent material properties are adopted in the analysis; if simplified (temperature-independent) material properties are used, incorrect results are obtained.
Abstract: Text data mining is a process of exploratory data analysis. Classification maps data into predefined groups or classes; it is often referred to as supervised learning because the classes are determined before the data are examined. This paper describes a proposed radial basis function classifier that performs comparative cross-validation against an existing radial basis function classifier. The feasibility and benefits of the proposed approach are demonstrated by means of a data mining problem: direct marketing, which has become an important application field of data mining. Comparative cross-validation involves estimating accuracy by either stratified k-fold cross-validation or equivalent repeated random subsampling. While the proposed method may have high bias, its performance (accuracy estimation, in our case) may be poor due to high variance; thus the accuracy of the proposed radial basis function classifier was lower than that of the existing one. However, there is a smaller improvement in runtime and a larger improvement in precision and recall. In the proposed method, classification accuracy and prediction accuracy are determined, and the prediction accuracy is comparatively high.
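The classifier and validation scheme can be illustrated generically: a radial basis function classifier with Gaussian features and least-squares output weights, evaluated by plain k-fold cross-validation. The paper's "comparative" variant and its stratification are not reproduced; the choice of centers, gamma, and fold count below are assumptions:

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Gaussian basis activations of each sample around each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_rbf(X, y, centers, gamma=1.0):
    """Least-squares output weights from RBF activations to one-hot targets."""
    Phi = rbf_features(X, centers, gamma)
    T = np.eye(int(y.max()) + 1)[y]               # one-hot class targets
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return W

def predict_rbf(X, centers, W, gamma=1.0):
    return (rbf_features(X, centers, gamma) @ W).argmax(axis=1)

def kfold_accuracy(X, y, k=5, gamma=1.0, seed=0):
    """Plain k-fold cross-validation estimate of classification accuracy."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        centers = X[train][::5]                   # every 5th training point as a center
        W = train_rbf(X[train], y[train], centers, gamma)
        accs.append((predict_rbf(X[test], centers, W, gamma) == y[test]).mean())
    return float(np.mean(accs))
```

Swapping the fold loop for repeated random subsampling gives the "equivalent" estimator mentioned above; the bias/variance trade-off between the two is what the comparative evaluation measures.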
Abstract: The demand for new telecommunication services requiring higher capacities, data rates, and different operating modes has motivated the development of a new generation of multi-standard wireless transceivers. A multi-standard design often involves extensive system-level analysis and architectural partitioning, typically requiring extensive calculations. In this research, a decimation filter design tool covering the wireless communication standards GSM, WCDMA, WLANa, WLANb, WLANg, and WiMAX is developed in MATLAB® using the GUIDE environment for visual analysis. The user can select a wireless communication standard and obtain the corresponding multistage decimation filter implementation using this toolbox. The toolbox helps the user or design engineer perform a quick design and analysis of decimation filters for multiple standards without carrying out extensive calculations of the underlying methods.
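The MATLAB toolbox itself is not reproduced here, but the core computation it automates (cascading decimation stages, each an anti-aliasing low-pass followed by downsampling) can be shown in a few lines. The moving-average filter is a deliberately crude stand-in for the properly designed filters such a tool would generate for each standard:

```python
import numpy as np

def decimate_stage(x, m, taps=None):
    """One decimation stage: low-pass filter, then keep every m-th sample.
    The default moving average is a placeholder for a designed FIR filter."""
    taps = np.ones(4 * m) / (4 * m) if taps is None else taps
    return np.convolve(x, taps, mode='same')[::m]

def multistage_decimate(x, factors):
    """Cascade of stages; the overall decimation is the product of factors.
    Splitting a large factor into stages keeps each filter short."""
    for m in factors:
        x = decimate_stage(x, m)
    return x
```

Splitting an overall factor such as 24 into stages of 4, 3, and 2 is exactly the architectural partitioning the abstract refers to: each stage's filter runs at a lower rate and needs far fewer taps than a single-stage design.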
Abstract: Simulation of occlusal function during laboratory materials testing is essential for predicting long-term performance before clinical usage. The aim of the study was to assess the influence of chamfer preparation depth on the failure risk of heat-pressed ceramic crowns with and without a zirconia framework by means of finite element analysis. 3D models of a maxillary central incisor, prepared for full ceramic crowns with different depths of the chamfer margin (between 0.8 and 1.2 mm) and 6-degree tapered walls, together with the overlying crowns, were generated using literature data (Fig. 1, 2). The crowns were designed with and without a zirconia framework with a thickness of 0.4 mm. For all preparations and crowns, stresses in the pressed ceramic crown, zirconia framework, pressed ceramic veneer, and dentin were evaluated separately. The highest stresses were registered in the dentin. For the studied cases, the depth of the preparations had no significant influence on the stress values of the teeth and pressed ceramics, only on those of the zirconia framework. The zirconia framework decreases the stress values in the veneer.
Abstract: The design of high-rise buildings is more often dictated by serviceability than by strength. Structural engineers are always striving to overcome the challenges of controlling lateral deflection and storey drifts, as well as the self-weight of the structure imposed on the foundation. One of the most effective techniques is the use of an outrigger and belt truss system in composite structures, which can astutely address both issues in high-rise construction. This paper investigates deflection control through effective utilisation of a belt truss and outrigger system on a 60-storey composite building subjected to wind loads. A three-dimensional finite element analysis is performed with one, two, and three outrigger levels. The reductions in lateral deflection are 34%, 42%, and 51% respectively, compared with a model without any outrigger system. There is also an appreciable decline in the storey drifts with the introduction of these stiffer arrangements.
Abstract: Through the 1980s, management accounting researchers described the increasing irrelevance of traditional control and performance measurement systems. The Balanced Scorecard (BSC) is a critical business tool for many organizations; it is a performance measurement system that translates mission and strategy into objectives. The strategy map approach is a development of the BSC in which certain necessary causal relations must be established. To identify these relations, experts usually rely on experience; it is also possible to use regression for the same purpose. Structural Equation Modeling (SEM), one of the most powerful methods of multivariate data analysis, obtains more appropriate results than traditional methods such as regression. In the present paper, we propose SEM, for the first time, to identify the relations between objectives in the strategy map, together with a test to measure the importance of the relations. In SEM, factor analysis and tests of hypotheses are performed within the same analysis. SEM is known to be better than other techniques at supporting analysis and reporting. Our approach provides a framework that allows experts to design the strategy map by applying a comprehensive and scientific method together with their experience, and it is therefore a more reliable method than those previously established.