Abstract: Cogeneration may be defined as a system that
simultaneously produces electricity and recovers the thermal value
of exhaust gases. The examination is based on data from an
operating cogeneration plant. This study aims to determine
which component of the system should be revised first to raise the
efficiency and decrease the loss of exergy. For this purpose, a second-
law thermodynamic analysis is applied to each component in order to
account for the effects of environmental conditions and to take the
quality of energy into consideration as well as its quantity. The exergy
balance equations are derived and the exergy loss is calculated for each
component. The calculated exergy losses are 44.44% in the heat
exchanger, 29.59% in the combustion chamber, 18.68% in the steam
boiler, 5.25% in the gas turbine, and 2.03% in the compressor.
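The component ranking above follows directly from each component's share of the total exergy destruction. A minimal sketch, where the absolute destruction rates (kW) are hypothetical values chosen only so that the resulting percentage shares mirror those reported in the abstract:

```python
# Sketch: ranking components by exergy destruction share.
# The absolute destruction rates below are hypothetical.

def exergy_loss_shares(destruction_kw):
    """Return each component's share of total exergy destruction, in %."""
    total = sum(destruction_kw.values())
    return {name: 100.0 * value / total for name, value in destruction_kw.items()}

# Hypothetical destruction rates (kW) reproducing the reported shares.
plant = {
    "heat exchanger": 4444.0,
    "combustion chamber": 2959.0,
    "steam boiler": 1868.0,
    "gas turbine": 525.0,
    "compressor": 203.0,
}

shares = exergy_loss_shares(plant)
# Components sorted from largest to smallest exergy loss share.
ranking = sorted(shares, key=shares.get, reverse=True)
```

The component at the head of the ranking (here the heat exchanger) is the one the analysis suggests revising first.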
Abstract: Quantitative investigation of how individual factors contribute to measuring the reusability of software components can be helpful in evaluating the quality of developed or in-development reusable software components and in identifying reusable components in existing legacy systems, which can save the cost of developing the software from scratch. But the issue of the relative significance of the contributing factors has remained relatively unexplored. In this paper, we use Taguchi's approach to analyze the significance of different structural attributes, or factors, in deciding the reusability level of a particular component. The results obtained show that complexity is the most important factor in deciding the reusability of function-oriented software. In the case of object-oriented software, coupling and complexity collectively play a significant role in achieving high reusability.
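Taguchi's approach ranks factors by comparing signal-to-noise (S/N) ratios across factor levels. A minimal sketch of the larger-the-better S/N statistic, with hypothetical reusability scores (the paper's actual observations and factor levels are not reproduced here):

```python
import math

# Sketch of Taguchi's larger-the-better signal-to-noise ratio,
# used to rank the influence of factors such as coupling and
# complexity. The observation values below are hypothetical.

def sn_larger_the_better(values):
    """S/N = -10 * log10(mean(1 / y^2)); higher is better."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / n)

# Hypothetical reusability scores observed at two levels of a factor.
low_level = [0.42, 0.45, 0.40]
high_level = [0.78, 0.81, 0.75]

# A large positive delta marks the factor level as significant.
delta = sn_larger_the_better(high_level) - sn_larger_the_better(low_level)
```

Factors are then ranked by the magnitude of their level-to-level S/N delta.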
Abstract: The explosion of interest in online gaming and
virtual worlds is leading many universities to investigate
possible educational applications of the new environments.
In this paper we explore the possibilities of 3D online worlds
for teacher education, particularly the field experience
component. Drawing upon two pedagogical examples, we
suggest that virtual simulations may, with certain limitations,
create safe spaces that allow preservice teachers to adopt
alternate identities and interact safely with the “other.” In so
doing they may become aware of the constructed nature of
social categories and gain the essential pedagogical skill of
perspective-taking. We suggest that, ultimately, preservice
teachers' ability to be the principal creators of themselves in
virtual environments can increase their ability to do the same in the
real world.
Abstract: Automatic reusability appraisal could be helpful in
evaluating the quality of developed or developing reusable software
components and in identification of reusable components from
existing legacy systems, which can save the cost of developing the
software from scratch. But the issue of how to identify reusable
components from existing systems has remained relatively unexplored.
In this paper, we present a two-tier approach that studies the
structural attributes as well as the usability, or relevancy, of the
component to a particular domain. Latent semantic analysis is used
for the feature vector representation of various software domains. It
exploits the fact that FeatureVector codes can be seen as documents
containing terms (the identifiers present in the components), and so
text modeling methods that capture co-occurrence information in
low-dimensional spaces can be used. Further, we devised Neuro-
Fuzzy hybrid Inference System, which takes structural metric values
as input and calculates the reusability of the software component.
Decision tree algorithm is used to decide initial set of fuzzy rules for
the Neuro-fuzzy system. The results obtained are convincing enough
to propose the system for economical identification and retrieval of
reusable software components.
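The domain-relevancy tier can be illustrated with a simplified stand-in: components and domains represented as term vectors over identifiers, scored by cosine similarity. (The paper uses latent semantic analysis, which additionally projects such vectors into a low-dimensional space via SVD; that projection is omitted here for brevity.) All identifier lists below are hypothetical.

```python
import math

# Simplified sketch of domain relevancy via identifier-term vectors.
# LSA would add an SVD projection step, not shown here.

def term_vector(identifiers, vocabulary):
    """Count how often each vocabulary term appears among the identifiers."""
    return [identifiers.count(term) for term in vocabulary]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical vocabulary and identifier lists.
vocabulary = ["matrix", "invert", "parse", "token", "solve"]
math_domain = term_vector(["matrix", "invert", "solve", "matrix"], vocabulary)
parser_domain = term_vector(["parse", "token", "token"], vocabulary)
component = term_vector(["matrix", "solve"], vocabulary)
```

A component is assigned to the domain with the highest similarity; here the component's identifiers make it far more relevant to the math domain than to the parser domain.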
Abstract: In the literature, there are metrics for identifying the
quality of reusable components, but a framework that makes use of
these metrics to precisely predict the reusability of software
components still needs to be worked out. If these reusability metrics
are identified in the design phase, or even in the coding phase, they
can help reduce rework by improving the quality of reuse of the
software component, and hence improve productivity through a
probable increase in the reuse level. Since the CK metric suite is the
most widely used set of metrics for extracting the structural features
of object-oriented (OO) software, this study uses a tuned CK metric
suite, i.e. WMC, DIT, NOC, CBO, and LCOM, for the structural
analysis of OO-based software components. An algorithm has been
proposed in which the tuned metric values of the OO software
component are given as input to a K-Means clustering system, and a
decision tree is formed, with 10-fold cross-validation of the data, to
evaluate the component in terms of a linguistic reusability value. The
developed reusability model has produced the desired high-precision
results.
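The clustering step can be sketched with a plain Lloyd's-algorithm K-Means over two of the tuned CK metrics. The (WMC, CBO) values and k = 2 are hypothetical, and the subsequent decision-tree stage of the paper's pipeline is not reproduced:

```python
# Minimal K-Means sketch over two CK metrics (WMC, CBO).
# Metric values and k = 2 are hypothetical.

def kmeans(points, k, iterations=20):
    """Lloyd's algorithm with the first k points as initial centroids."""
    centroids = [list(p) for p in points[:k]]
    assignment = [0] * len(points)
    for _ in range(iterations):
        # Assign each point to its nearest centroid (squared Euclidean).
        for i, p in enumerate(points):
            assignment[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Move each centroid to the mean of its assigned points.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assignment[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assignment, centroids

# Hypothetical (WMC, CBO) pairs: three low-complexity/low-coupling
# components followed by three high-complexity/high-coupling ones.
data = [(3, 2), (4, 3), (2, 2), (15, 11), (14, 12), (16, 10)]
labels, centers = kmeans(data, k=2)
```

The resulting cluster labels separate the low-metric components from the high-metric ones, which is the grouping the decision tree would then map to linguistic reusability values.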
Abstract: A minimal complexity version of component mode
synthesis is presented that requires simplified computer
programming, but still provides adequate accuracy for modeling
lower eigenproperties of large structures and their transient
responses. The novelty is that a structural separation into components
is done along a plane/surface that exhibits rigid-like behavior, so
that the normal modes of each component alone are sufficient, without
computing any constraint, attachment, or residual-attachment modes.
The approach requires only such input information as a few (lower)
natural frequencies and corresponding undamped normal modes of
each component. A novel technique is shown for formulation of
equations of motion, where a double transformation to generalized
coordinates is employed and formulation of nonproportional damping
matrix in generalized coordinates is shown.
Abstract: Turning operations can leave a thin layer with high
tensile residual stresses on the component surface, which can
severely degrade the fatigue performance of the component. In
this paper an analytical approach is presented to reconstruct the
residual stress field from a limited, incomplete set of measurements.
The Airy stress function is used as the primary unknown to directly
solve the equilibrium equations and to satisfy the boundary conditions. In
this new method there exists the flexibility to impose the physical
conditions that govern the behavior of residual stress to achieve a
meaningful complete stress field. The analysis is also coupled to a
least squares approximation and a regularization method to provide
stability of the inverse problem. The power of this new method is
then demonstrated by analyzing some experimental measurements
and achieving a good agreement between the model prediction and
the results obtained from residual stress measurement.
Abstract: Liver segmentation is the first significant step in
liver diagnosis from computed tomography (CT). It separates the liver
structure from the other abdominal organs. Sophisticated filtering techniques
are indispensable for a proper segmentation. In this paper, we
employ a 3D anisotropic diffusion as a preprocessing step. While
removing image noise, this technique preserves the significant parts
of the image, typically edges, lines or other details that are important
for the interpretation of the image. The segmentation task is done
by using thresholding with automatic threshold values selection and
finally the false liver region is eliminated using 3D connected component.
The results show that, by employing 3D anisotropic filtering,
better liver segmentation can be achieved even though a simple
segmentation technique is used.
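The abstract does not name its automatic threshold-selection rule; Otsu's method is sketched below as a representative choice, on hypothetical 8-bit intensities forming a bimodal (background vs. liver) histogram:

```python
# Sketch of automatic threshold selection via Otsu's method
# (used here as a representative choice; the paper does not
# name its selection rule). Intensities are hypothetical.

def otsu_threshold(pixels, levels=256):
    """Return the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    weight_b = sum_b = 0
    for t in range(levels):
        weight_b += hist[t]          # background: values <= t
        if weight_b == 0:
            continue
        weight_f = total - weight_b  # foreground: values > t
        if weight_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / weight_b
        mean_f = (total_sum - sum_b) / weight_f
        between = weight_b * weight_f * (mean_b - mean_f) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Two hypothetical intensity clusters: dark background, bright region.
image = [20, 25, 22, 30, 28] * 20 + [200, 210, 205, 190, 215] * 20
t = otsu_threshold(image)
```

The selected threshold falls between the two clusters, so a simple `pixel > t` test already separates them; the 3D connected-component step would then discard spurious bright regions.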
Abstract: In hydrocyclones, the particle separation efficiency is
limited by the suspended fine particles, which are discharged with the
coarse product in the underflow. It is well known that injecting water
in the conical part of the cyclone reduces the fine particle fraction in
the underflow. This paper presents a mathematical model that
simulates the water injection in the conical component. The model
accounts for the fluid flow and the particle motion. Particle
interaction, due to hindered settling caused by increased density and
viscosity of the suspension, and fine particle entrainment by settling
coarse particles are included in the model. Water injection in the
conical part of the hydrocyclone is performed to reduce fine particle
discharge in the underflow. The model demonstrates the impact of
the injection rate, injection velocity, and injection location on the
shape of the partition curve. The simulations are compared with
experimental data of a 50-mm cyclone.
Abstract: The ElectroEncephaloGram (EEG) is useful for
clinical diagnosis and biomedical research. EEG signals often
contain strong ElectroOculoGram (EOG) artifacts produced
by eye movements and eye blinks especially in EEG recorded
from frontal channels. These artifacts obscure the underlying
brain activity, making its visual or automated inspection
difficult. The goal of ocular artifact removal is to remove
ocular artifacts from the recorded EEG, leaving the underlying
background signals due to brain activity. In recent times,
Independent Component Analysis (ICA) algorithms have
demonstrated superior potential in obtaining the least
dependent source components. In this paper, the independent
components are obtained by using the JADE algorithm (best
separating algorithm) and are classified into either artifact
component or neural component. Neural Network is used for
the classification of the obtained independent components.
Neural Network requires input features that exactly represent
the true character of the input signals so that the neural
network could classify the signals based on those key
characters that differentiate between various signals. In this
work, Auto Regressive (AR) coefficients are used as the input
features for classification. Two neural network approaches
are used to learn classification rules from EEG data. First, a
Polynomial Neural Network (PNN) trained by GMDH (Group
Method of Data Handling) algorithm is used, and secondly, a
feed-forward neural network (FNN) classifier trained by a standard
back-propagation algorithm is used for classification. The
results show that JADE-FNN performs better than JADE-PNN.
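The AR-coefficient features fed to the classifiers can be sketched with a simple least-squares fit of an AR(1) model. The estimation method and the synthetic signal below are illustrative assumptions; the paper does not specify how its AR coefficients are computed:

```python
# Sketch of extracting an autoregressive (AR) coefficient as a
# classifier input feature. Least-squares AR(1) estimation and the
# synthetic signal are illustrative assumptions.

def ar1_coefficient(signal):
    """Least-squares estimate of a in x[n] = a * x[n-1] + e[n]."""
    num = sum(signal[n] * signal[n - 1] for n in range(1, len(signal)))
    den = sum(x * x for x in signal[:-1])
    return num / den

# Synthetic AR(1) process with a = 0.8 and a small deterministic
# alternating "innovation" term standing in for noise.
a_true = 0.8
x = [1.0]
for n in range(1, 500):
    x.append(a_true * x[-1] + 0.01 * ((-1) ** n))

a_hat = ar1_coefficient(x)
```

Higher-order AR models would yield a coefficient vector per independent component, which is what the neural networks then classify as artifact or neural.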
Abstract: The RR interval series is non-stationary and unevenly
spaced in time. Estimating its power spectral density (PSD) using
traditional techniques such as the FFT requires resampling at uniform
intervals. Researchers have used different interpolation
techniques as resampling methods. All of these resampling methods
introduce a low-pass filtering effect into the power spectrum. The
Lomb transform is a means of obtaining PSD estimates directly from
an irregularly sampled RR interval series, thus avoiding resampling. In
this work, the superiority of the Lomb transform method has been
established over the FFT-based approach, after applying linear and
cubic-spline interpolation as resampling methods, in terms of
reproduction of the exact frequency locations as well as the relative
magnitudes of each spectral component.
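A minimal sketch of the classical Lomb periodogram shows why no resampling is needed: the spectrum is evaluated directly at the irregular sample times. The sample times, test signal, and frequency grid below are hypothetical; an RR interval series would be handled the same way:

```python
import math

# Minimal (unnormalized) Lomb periodogram for unevenly sampled data.
# Times, signal, and frequency grid below are hypothetical.

def lomb(t, y, freqs):
    mean_y = sum(y) / len(y)
    yc = [v - mean_y for v in y]
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # Time offset tau that decouples the sine and cosine terms.
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        pc = sum(v * ci for v, ci in zip(yc, c)) ** 2 / sum(ci * ci for ci in c)
        ps = sum(v * si for v, si in zip(yc, s)) ** 2 / sum(si * si for si in s)
        power.append(0.5 * (pc + ps))
    return power

# Unevenly spaced samples of a 0.25 Hz sinusoid.
t = [0.0, 0.3, 0.9, 1.4, 2.2, 2.9, 3.3, 4.1,
     4.8, 5.6, 6.1, 6.9, 7.4, 8.2, 8.8, 9.5]
y = [math.sin(2 * math.pi * 0.25 * ti) for ti in t]
freqs = [0.05 * k for k in range(1, 11)]   # 0.05 ... 0.50 Hz
p = lomb(t, y, freqs)
peak_freq = freqs[p.index(max(p))]
```

The periodogram peaks at the true frequency despite the uneven sampling, with none of the low-pass distortion an interpolate-then-FFT pipeline would introduce.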
Abstract: This paper studies the dependability of component-based
applications, especially embedded ones, from the diagnosis
point of view. The principle of the diagnosis technique is to
implement inter-component tests in order to detect and locate the
faulty components without redundancy. The proposed approach for
diagnosing faulty components consists of two main aspects. The first
one concerns the execution of the inter-component tests which
requires integrating test functionality within a component. This is the
subject of this paper. The second one is the diagnosis process itself
which consists of the analysis of inter-component test results to
determine the fault-state of the whole system. Advantages of this
diagnosis method, compared to classical redundancy-based fault-tolerant
techniques, are application autonomy, cost-effectiveness, and
better usage of system resources. These advantages are very important
for many systems, especially embedded ones.
Abstract: Safety instrumented systems (SISs) are becoming
increasingly complex and the proportion of programmable electronic
parts is growing. The IEC 61508 global standard was established to
ensure the functional safety of SISs, but it was expressed in highly
macroscopic terms. This study introduces an evaluation process for
hardware safety integrity levels through failure modes, effects, and
diagnostic analysis (FMEDA). FMEDA is widely used to evaluate
safety levels, and it provides the information on failure rates and
failure mode distributions necessary to calculate a diagnostic coverage
factor for a given component. In our evaluation process, the
components of the SIS subsystem are first defined in terms of failure
modes and effects. Then, the failure rate and failure mechanism
distribution are assigned to each component. The safety mode and
detectability of each failure mode are determined for each component.
Finally, the hardware safety integrity level is evaluated based on the
calculated results.
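Two of the quantities central to the hardware safety integrity evaluation can be sketched directly: the diagnostic coverage factor mentioned in the abstract, and the safe failure fraction used alongside it in IEC 61508. The per-component failure rates below are hypothetical:

```python
# Illustrative calculation of diagnostic coverage (DC) and safe
# failure fraction (SFF) for one SIS component. Failure rates
# (failures/hour) are hypothetical.

def diagnostic_coverage(lambda_dd, lambda_du):
    """DC = dangerous-detected rate / total dangerous rate."""
    return lambda_dd / (lambda_dd + lambda_du)

def safe_failure_fraction(lambda_s, lambda_dd, lambda_du):
    """SFF = (safe + dangerous-detected) / total failure rate."""
    return (lambda_s + lambda_dd) / (lambda_s + lambda_dd + lambda_du)

# Hypothetical rates: safe, dangerous-detected, dangerous-undetected.
lam_s, lam_dd, lam_du = 4.0e-7, 5.0e-7, 1.0e-7

dc = diagnostic_coverage(lam_dd, lam_du)
sff = safe_failure_fraction(lam_s, lam_dd, lam_du)
```

In the evaluation process described above, such per-component figures are aggregated over the SIS subsystem before the hardware safety integrity level is read off.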
Abstract: Process-oriented software development is a new
software development paradigm in which software design is modeled
by a business process which is in turn translated into a process
execution language for execution. The building blocks of this
paradigm are software units that are composed together to work
according to the flow of the business process. This new paradigm
still exhibits the characteristics of applications built with
traditional software component technology. This paper discusses an
approach to apply a traditional technique for software component
fabrication to the design of process-oriented software units, called
process components. These process components result from
decomposing a business process of a particular application domain
into subprocesses, and these process components can be reused to
design the business processes of other application domains. The
decomposition considers five managerial goals, namely cost
effectiveness, ease of assembly, customization, reusability, and
maintainability. The paper presents how to design or decompose
process components from a business process model and measure
some technical features of the design that would affect the
managerial goals. A comparison between the measurement values
from different designs can tell which process component design is
more appropriate for the managerial goals that have been set. The
proposed approach can be applied in Web Services environment
which accommodates process-oriented software development.
Abstract: In this paper we further develop the sequential life test approach presented in a previous article [1], using an underlying two-parameter Inverse Weibull sampling distribution. The location parameter, or minimum life, will be considered equal to zero. Once again we provide rules for making one of the three possible decisions as each observation becomes available: accept the null hypothesis H0; reject the null hypothesis H0; or obtain additional information by making another observation. The product being analyzed is a new electronic component, and there is little information available about the possible values of the parameters of the corresponding Inverse Weibull underlying sampling distribution. To estimate the shape and scale parameters of the underlying Inverse Weibull model we use a maximum likelihood approach for censored failure data. A new example further develops the proposed sequential life testing approach.
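The three-way decision rule is the Wald-style sequential scheme: after each observation the accumulated log-likelihood ratio is compared with two bounds derived from the error probabilities alpha and beta. In the sketch below the per-observation likelihoods come from two hypothetical exponential lifetime models, not from the paper's Inverse Weibull fit, and the lifetimes are invented:

```python
import math

# Sketch of the sequential three-way decision rule. Exponential
# likelihoods and the lifetime data are illustrative assumptions;
# the paper uses an Inverse Weibull model.

def sprt_decision(log_lr, alpha, beta):
    upper = math.log((1 - beta) / alpha)   # reject H0 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    if log_lr >= upper:
        return "reject H0"
    if log_lr <= lower:
        return "accept H0"
    return "continue testing"

def exp_log_lr(x, rate0, rate1):
    """log f1(x) - log f0(x) for exponential lifetime densities."""
    return (math.log(rate1) - rate1 * x) - (math.log(rate0) - rate0 * x)

alpha, beta = 0.05, 0.10
lifetimes = [0.2, 0.1, 0.3, 0.15, 0.1, 0.25, 0.1, 0.2]  # hypothetical

log_lr = 0.0
decision = "continue testing"
for x in lifetimes:
    log_lr += exp_log_lr(x, rate0=1.0, rate1=5.0)
    decision = sprt_decision(log_lr, alpha, beta)
    if decision != "continue testing":
        break
```

With these short hypothetical lifetimes the evidence quickly favors the high-failure-rate hypothesis, so testing stops early with a rejection of H0.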
Abstract: In this paper, the strength of a stabilizer is determined under static and multiaxial fatigue loading. The stabilizer is part of the suspension system of a heavy truck that stabilizes the cabin against road vibration; it consists of a thin-walled tube joined to a forged component by a fillet weld. The component is loaded by a nonproportional random sequence of torsion and bending. The residual stress of the welding process is considered here for the static loading. This static loading, together with road irregularities, is applied in this study as the fatigue case affecting the fillet-welded area of the part. The stresses in the welded structure are calculated using FEA. In addition, fatigue under multiaxial loading in the fillet weld is investigated, and the critical zone of the stabilizer is identified and presented in graphs. Residual stresses resulting from thermal forces are included in the FEA. Incremental force loading is used to find the critical point of the component.
Abstract: In this paper, a new approach for target recognition based on the Empirical Mode Decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14] is introduced. The conjunction of these two methods is called Teager-Huang analysis. This approach is well suited for nonstationary signal analysis. The impulse response (IR) of the target is first band-pass filtered into subsignals (components) called Intrinsic Mode Functions (IMFs), each with a well-defined Instantaneous Frequency (IF) and Instantaneous Amplitude (IA). Each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager Energy Operator (TEO). The IF and IA, useful for describing the time-varying characteristics of the signal, are estimated using the Energy Separation Algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis is extracted from the IF, IA, and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of sonar targets with different physical characteristics (density, velocity, shape, ...). PCA is first applied to the features to discriminate between manufactured and natural targets. The manufactured patterns are then classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where sixteen IRs, used for training, are noise-free and seven IRs, used for the testing phase, are corrupted with white Gaussian noise.
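The energy-tracking step uses the discrete Teager Energy Operator, Psi[x](n) = x(n)^2 - x(n-1) * x(n+1). For a pure sinusoid A*cos(w*n + phi) this operator returns exactly A^2 * sin^2(w), which is the identity the sketch below verifies on a hypothetical signal:

```python
import math

# Sketch of the discrete Teager Energy Operator (TEO) used to
# track the energy of each IMF. The test sinusoid is hypothetical.

def teager(x):
    """Psi[x](n) = x(n)^2 - x(n-1) * x(n+1), for interior samples."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

A, w = 2.0, 0.3                     # amplitude and digital frequency (rad)
signal = [A * math.cos(w * n) for n in range(200)]
energy = teager(signal)

# For a pure sinusoid, every TEO sample equals A^2 * sin^2(w).
expected = (A * math.sin(w)) ** 2
```

Because the operator is only a three-sample window, it tracks amplitude and frequency changes of an AM-FM component almost instantaneously, which is what makes it useful for the IF/IA estimation step.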
Abstract: Phorbol-12-myristate-13-acetate (TPA) is a synthetic analogue of phorbol ester (PE), a natural toxic compound of plants of the family Euphorbiaceae. The oil extracted from plants of this family is primarily a useful source of biofuel. However, this oil might also be used as a foodstuff due to its significant nutritional content. The limitation on utilizing the oil as a foodstuff is mainly the toxicity of PE. Currently, the majority of PE detoxification processes are expensive, as they include a multi-step alcohol extraction sequence.
Ozone is a strong oxidizing agent. It reacts with PE by attacking the carbon-carbon double bond of PE. This modification of the PE molecular structure yields a nontoxic ester with high lipid content.
This report presents data on the development of a simple and inexpensive PE detoxification process that uses water as a buffer and ozone as the reactive component. The core of this new technique is the application of a new microscale plasma unit for ozone production; the technology permits ozone injection into the water-TPA mixture in the form of microbubbles.
The efficacy of a heterogeneous process depends on the diffusion coefficient, which can be controlled through the contact time and the interfacial area. The low rise velocity of the microbubbles and their high surface-to-volume ratio allow efficient mass transfer to be achieved during the process. Direct injection of ozone is the most efficient way to work with such a highly reactive and short-lived chemical.
Data on the behavior of the plasma unit are presented, and the influence of gas oscillation technology on the microbubble production mechanism is discussed. Data on the overall efficacy of the TPA degradation process are also shown.
Abstract: This paper reviews the application of neural
networks in the shunt active power filter (SAPF).
The review shows that three of the four components of the SAPF
structure, namely the harmonic detection component, the compensating
current control, and the DC bus voltage control, have adopted some
form of neural network architecture, either as part of the component
or as a full substitution. The objective of most papers using neural
networks in SAPFs is to increase the efficiency, stability, accuracy,
robustness, and tracking ability of each component. Moreover,
minimizing unwanted signals due to distortion is the ultimate goal
in applying neural networks to the SAPF. The most popular
architectures of neural network in SAPF applications are ADALINE
and backpropagation (BP).
Abstract: This article describes the aspects of the formation of
the national idea and national identity through the prism of gender
control and its contradistinction to the obsolete, Soviet component.
The role of females in ethnic and national projects is considered from
the point of view of Dr. Nira Yuval-Davis: as biological reproducers
of the members of ethnic communities; as reproducers of the borders
of ethnic/national groups; as central participants in the ideological
reproduction of the community and transmitters of its culture; as
symbols in the ideology, reproduction, and transformation of
ethnic/national categories; and as participants in national, economic,
political, and military combats. The society of the transitional type
uses the symbolic resources of the formation of the gender component
in the national project. The gender patterns act like cultural codes,
executing an important ideological function in the formation of the
national female image; the discussion on the hijab, for instance, is
not just a discussion of control over the female body, it is a
discussion of the metaphor of social order.