Abstract: This paper investigates a technique for predicting the frequency-dependent performance of a bar-type unimorph piezoelectric vibration actuator. An equivalent circuit that can be easily analyzed is proposed for the bar-type unimorph piezoelectric vibration actuator. In the dynamic analysis, rigidity and resonance frequency, the key mechanical parameters, were derived using basic beam theory. In the equivalent circuit analysis, the frequency-dependent displacement and bandwidth of the piezoelectric vibration actuator were predicted. To verify the reliability of the derived equations, the performance predicted for varying shapes was compared with the results of a finite element analysis program.
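As a sketch of the beam-theory step described above, the fundamental resonance frequency of a clamped-free (cantilever) bar can be computed from the Euler-Bernoulli formula; the material constants and dimensions below are hypothetical placeholders, not the actuator of the paper:

```python
import math

def cantilever_fundamental_freq(E, I, rho, A, L):
    """First-mode natural frequency (Hz) of a uniform cantilever from
    Euler-Bernoulli beam theory:
        f1 = (lambda1^2 / (2*pi)) * sqrt(E*I / (rho*A)) / L^2,
    with lambda1^2 = 3.516 for the clamped-free boundary condition."""
    return (3.516 / (2 * math.pi)) * math.sqrt(E * I / (rho * A)) / L**2

# Hypothetical steel substrate bar: 30 mm x 5 mm x 0.3 mm (illustrative only)
E = 200e9            # Young's modulus, Pa
rho = 7850.0         # density, kg/m^3
b, h, L = 5e-3, 0.3e-3, 30e-3
A = b * h            # cross-sectional area, m^2
I = b * h**3 / 12    # second moment of area, m^4
f1 = cantilever_fundamental_freq(E, I, rho, A, L)
```

A real unimorph would need the composite (piezo layer plus substrate) flexural rigidity in place of the single-material E*I used here.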
Abstract: Fourier transform infrared (FT-IR) spectroscopic imaging
is an emerging technique that provides both chemically and
spatially resolved information. The rich chemical content of data
may be utilized for computer-aided determinations of structure and
pathologic state (cancer diagnosis) in histological tissue sections for
prostate cancer. FT-IR spectroscopic imaging of prostate tissue has
shown that tissue type (histological) classification can be performed to
a high degree of accuracy [1] and cancer diagnosis can be performed
with an accuracy of about 80% [2] on a microscopic (≈ 6μm)
length scale. In performing these analyses, it has been observed
that there is large variability (more than 60%) between spectra from
different points on tissue that is expected to consist of the same
essential chemical constituents. Spectra at the edges of tissues are
characteristically and consistently different from chemically similar
tissue in the middle of the same sample. Here, we explain these
differences using a rigorous electromagnetic model for light-sample
interaction. Spectra from FT-IR spectroscopic imaging of chemically
heterogeneous samples are different from bulk spectra of individual
chemical constituents of the sample. This is because spectra depend
not only on chemistry, but also on the shape of the sample.
Using coupled wave analysis, we characterize and quantify the nature
of spectral distortions at the edges of tissues. Furthermore, we
present a method of performing histological classification of tissue
samples. Since the mid-infrared spectrum is typically assumed to
be a quantitative measure of chemical composition, classification
results can vary widely due to spectral distortions. However, we
demonstrate that the selection of localized metrics based on chemical
information can make our data robust to the spectral distortions
caused by scattering at the tissue boundary.
Abstract: Numerical analysis of the aerodynamic characteristics
of a WIG (wing-in-ground effect) craft with a highly cambered wing of
aspect ratio one is performed to predict the ground effect with and
without a lower-extension endplate. The analysis covers angles of
attack from 0 to 10 deg and ground clearances from 5% to 50% of chord.
Due to the ground effect, the lift is increased by the rise in pressure
on the lower surface, and the influence of wing-tip vortices is
decreased. These two significant effects improve the lift-drag ratio.
In addition, the endplate prevents the high-pressure air from escaping
from the air cushion at the wing tip and further increases the lift and
the lift-drag ratio. Visualization of the computational results shows
that two wing-tip vortices are generated, one from each surface of the
wing tip, and that they are weak and diminish rapidly. Irodov's
criteria are also evaluated to investigate static height stability. The
comparison of Irodov's criteria shows that the endplate reduces the
variation of static height stability with respect to pitch angle and
height. As a result, the endplate can improve the aerodynamic
characteristics and the static height stability of wings in ground
effect simultaneously.
Abstract: In this paper, we present a comparative study of two computer vision systems for object recognition and tracking. The algorithms describe two different approaches based on regions, i.e., sets of pixels that parameterize objects in shot sequences. For image segmentation and object detection, the FCM technique is used; overlap between cluster distributions is minimized by choosing a suitable color space (other than RGB). The first technique takes into account the a priori probabilities governing the computation of the various clusters in order to track objects. A Parzen kernel method is described that identifies the players in each frame; we also show the importance of the choice of the standard deviation of the Gaussian probability density function. Region matching is carried out by an algorithm that operates on the Mahalanobis distance between region descriptors in two subsequent frames and uses singular value decomposition to compute a set of correspondences satisfying both the principle of proximity and the principle of exclusion.
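The region-matching step can be sketched as follows: a minimal illustration of SVD-based correspondence on Mahalanobis affinities (in the spirit of the Scott and Longuet-Higgins pairing), with the descriptor dimension and covariance as hypothetical inputs, not the authors' implementation:

```python
import numpy as np

def match_regions(desc_a, desc_b, cov):
    """Match region descriptors between two frames.
    Proximity: Gaussian-weighted Mahalanobis distance between descriptors.
    Exclusion: SVD of the affinity matrix; replacing the singular values
    by 1 yields an orthonormal pairing matrix whose mutually dominant
    entries give one-to-one correspondences."""
    inv_cov = np.linalg.inv(cov)
    n, m = len(desc_a), len(desc_b)
    G = np.empty((n, m))
    for i, a in enumerate(desc_a):
        for j, b in enumerate(desc_b):
            d = a - b
            G[i, j] = np.exp(-0.5 * d @ inv_cov @ d)  # affinity from Mahalanobis distance
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    P = U @ Vt                                        # singular values forced to 1
    matches = []
    for i in range(n):
        j = int(np.argmax(P[i]))
        if int(np.argmax(P[:, j])) == i:              # mutual maximum -> exclusion
            matches.append((i, j))
    return matches
```

In practice the descriptors would combine position, color and shape statistics; here any fixed-length vectors with a shared covariance estimate will do.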
Abstract: A cross-sectional survey design was used to collect
data from 370 diabetic patients. Two instruments were used to obtain
the data: an in-depth interview guide and a researcher-developed
questionnaire. Fisher's exact test was used to investigate the
association between the identified factors and non-adherence. The
factors identified were: socio-demographic factors such as gender,
age, marital status, educational level and occupation; psychosocial
obstacles such as non-affordability of the prescribed diet, frustration
due to the restriction, limited spousal support, feelings of
deprivation, the feeling that temptation is inevitable, difficulty in
adhering at social gatherings and difficulty in revealing to a host
that one is diabetic; and health-care-provider obstacles: poor attitude
of health workers, irregular diabetes education in clinics, a limited
number of nutrition education sessions, inability of the patients to
estimate the desired quantity of food, no reminder postcards or phone
calls about upcoming patient appointments, and delayed start of
appointments / time wasting in clinics.
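The association test named above can be reproduced on a toy contingency table with SciPy; the counts below are purely illustrative and are not the study's data:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (invented counts, not from the survey):
# rows    = limited spousal support: yes / no
# columns = non-adherent / adherent
table = [[30, 90],
         [110, 140]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
# A p_value below the chosen significance level (e.g. 0.05) would
# indicate an association between the factor and non-adherence.
```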
Abstract: Economically, transformers constitute one of the largest investments in a power system. For this reason, transformer condition assessment and management is a high-priority task. If a transformer fails, it has a significant negative impact on revenue and service reliability. The state of health of power transformers has traditionally been monitored using laboratory Dissolved Gas Analysis (DGA) tests performed at periodic intervals on oil samples collected from the transformers. DGA of transformer oil is the single best indicator of a transformer's overall condition and has been universal practice since about the 1960s. A transformer can fail for different reasons, and some failures can be limited or prevented by maintenance. Oil filtration is one method of removing the dissolved gases and preventing deterioration of the oil. In this paper we analyse the DGA data by regression and predict the future gas concentration in the oil. We present a comparative study of different traditional regression methods and the errors in their predictions. From these data we can deduce the health of the transformer by identifying the type of fault that has occurred or will occur. Additionally, the effect of filtration on transformer health is highlighted by calculating the probability of failure of a transformer with and without oil filtration.
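As a minimal sketch of regression-based gas-concentration forecasting (one simple variant, not the specific traditional methods compared in the paper), a polynomial trend can be fitted to a dissolved-gas time series and extrapolated; the sampling days and ppm values below are invented for illustration:

```python
import numpy as np

def predict_gas_concentration(days, ppm, horizon_days, degree=1):
    """Fit a polynomial trend to a dissolved-gas time series and
    extrapolate the concentration horizon_days beyond the last sample."""
    coeffs = np.polyfit(days, ppm, degree)
    return float(np.polyval(coeffs, days[-1] + horizon_days))

# Illustrative acetylene (C2H2) readings in ppm at successive sampling days
days = np.array([0.0, 90.0, 180.0, 270.0, 360.0])
ppm = np.array([2.0, 3.1, 4.3, 5.2, 6.4])
forecast = predict_gas_concentration(days, ppm, horizon_days=180)
```

Comparing the residuals of fits of different degree (or of other regressors) on held-out samples is one way to rank the methods by prediction error.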
Abstract: A catastrophic earthquake measuring 6.3 on the
Richter scale struck the Christchurch, New Zealand Central Business
District on February 22, 2011, abruptly disrupting the business of
teaching and learning at Christchurch Polytechnic Institute of
Technology. This paper presents the findings from a study
undertaken about the complexity of delivering an educational
programme in the face of this traumatic natural event. Nine
interconnected themes emerged from this multiple method study:
communication, decision making, leader- and follower-ship,
balancing personal and professional responsibilities, taking action,
preparedness and thinking ahead, all within a disruptive and uncertain
context. Sustainable responses that maximise business continuity, and
provide solutions to practical challenges, are among the study's
recommendations.
Abstract: Appropriate description of business processes through
standard notations has become one of the most important assets for
organizations. Organizations must therefore deal with quality faults
in business process models such as the lack of understandability and
modifiability. These quality faults may be exacerbated if business
process models are mined by reverse engineering, e.g., from existing
information systems that support those business processes. Hence,
business process refactoring is often used, which changes the internal
structure of business processes while preserving their external
behavior. This paper aims to choose the most appropriate set of
refactoring operators through the quality assessment concerning
understandability and modifiability. These quality features are
assessed through well-proven measures proposed in the literature.
Additionally, a set of measure thresholds is heuristically established
for applying the most promising refactoring operators, i.e., those that
achieve the highest quality improvement according to the selected
measures in each case.
Abstract: The approach of subset selection in polynomial
regression model building assumes that the chosen fixed full set of
predefined basis functions contains a subset that can describe the
target relation sufficiently well. However, in most cases
the necessary set of basis functions is not known and needs to be
guessed – a potentially non-trivial (and long) trial and error process.
In our research we consider a potentially more efficient approach –
Adaptive Basis Function Construction (ABFC). It lets the model
building method itself construct the basis functions necessary for
creating a model of arbitrary complexity with adequate predictive
performance. However, there are two issues that to some extent
plague the methods of both the subset selection and the ABFC,
especially when working with relatively small data samples: the
selection bias and the selection instability. We try to correct these
issues by model post-evaluation using Cross-Validation and model
ensembling. To evaluate the proposed method, we empirically
compare it to ABFC methods without ensembling, to a widely used
method of subset selection, as well as to some other well-known
regression modeling methods, using publicly available data sets.
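A minimal sketch of combining subset selection, cross-validation and ensembling (a simplified stand-in for ABFC, using a fixed set of monomial basis functions rather than adaptively constructed ones, with all names and the ensemble size chosen for illustration):

```python
import numpy as np
from itertools import combinations

def cv_mse(X, y, k=5):
    """k-fold cross-validated MSE of a least-squares fit on basis matrix X."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return float(np.mean(errs))

def select_and_ensemble(x, y, max_degree=5, k=5):
    """Score every subset of monomials {1, x, ..., x^d} by CV, then average
    the predictions of the best few subsets to damp selection instability."""
    basis = [x ** d for d in range(max_degree + 1)]
    scored = []
    for r in range(1, len(basis) + 1):
        for subset in combinations(range(len(basis)), r):
            X = np.column_stack([basis[d] for d in subset])
            scored.append((cv_mse(X, y, k), subset))
    scored.sort(key=lambda t: t[0])
    top = scored[:3]                     # ensemble the 3 best-scoring subsets

    def predict(x_new):
        preds = []
        for _, subset in top:
            X = np.column_stack([basis[d] for d in subset])
            w, *_ = np.linalg.lstsq(X, y, rcond=None)  # refit on all data
            Xn = np.column_stack([x_new ** d for d in subset])
            preds.append(Xn @ w)
        return np.mean(preds, axis=0)

    return predict
```

Exhaustive subset scoring is only feasible for small basis sets as here; ABFC instead grows the basis functions incrementally, but the CV-plus-ensembling correction applies in the same way.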
Abstract: For more than 120 years, gold mining formed the
backbone of South Africa's economy. The consequence of mine
closure has been large-scale land degradation and widespread
pollution of surface water and groundwater. This paper investigates
the feasibility of using natural zeolite in removing heavy metals
contaminating the Wonderfonteinspruit Catchment Area (WCA), a
water stream with high levels of heavy metals and radionuclide
pollution. Batch experiments were conducted to study the adsorption
behavior of natural zeolite with respect to Fe2+, Mn2+, Ni2+, and Zn2+.
The data were analysed using the Langmuir and Freundlich isotherms.
The Langmuir isotherm was found to correlate the adsorption of Fe2+,
Mn2+, Ni2+, and Zn2+ better, with adsorption capacities of 11.9 mg/g,
1.2 mg/g, 1.3 mg/g, and 14.7 mg/g, respectively. Two kinetic models,
pseudo-first-order and pseudo-second-order, were also tested to fit
the data. The pseudo-second-order equation was found to give the best
fit for the adsorption of heavy metals by natural zeolite.
Functionalization of the zeolite with humic acid increased its uptake
ability.
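The Langmuir fitting step can be sketched via the standard linearized form of the isotherm; the equilibrium data below are synthetic, generated to resemble the Fe2+ capacity reported above, and are not the paper's measurements:

```python
import numpy as np

def fit_langmuir(Ce, qe):
    """Fit the linearized Langmuir isotherm  Ce/qe = Ce/qmax + 1/(KL*qmax).
    A straight-line fit of Ce/qe against Ce gives slope = 1/qmax and
    intercept = 1/(KL*qmax)."""
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    qmax = 1.0 / slope        # maximum adsorption capacity, mg/g
    KL = slope / intercept    # Langmuir constant, L/mg
    return qmax, KL

# Synthetic equilibrium data for a sorbent with qmax = 12 mg/g, KL = 0.1 L/mg
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # equilibrium conc., mg/L
qe = 12.0 * 0.1 * Ce / (1 + 0.1 * Ce)               # uptake, mg/g
qmax, KL = fit_langmuir(Ce, qe)
```

The Freundlich comparison proceeds the same way with a log-log fit of qe against Ce; the isotherm with the higher correlation coefficient is the better descriptor.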
Abstract: With the gradual increase in enterprise scale, firms
may possess many manufacturing plants in geographically different
locations. This change gives rise to multi-site production planning
problems in an environment of multiple plants or production
resources. Our research proposes a structural framework to analyze
multi-site planning problems. The analytical framework is composed of
six elements: multi-site conceptual model, product structure (bill of
manufacturing), production strategy, manufacturing capability and
characteristics, production planning constraints, and key performance
indicators. In addition to discussing these six ingredients, we review
the related literature and match it to our analytical framework.
Finally, we take a real-world example of a TFT-LCD manufacturer in
Taiwan to illustrate the proposed analytical framework for multi-site
production planning problems.
Abstract: The Short Message Service (SMS) has grown in
popularity over the years and has become a common means of
communication. It is a service provided through the Global System
for Mobile Communications (GSM) that allows users to send text
messages to others.
SMS is usually used to transport unclassified information, but
with the rise of mobile commerce it has become a popular tool for
transmitting sensitive information between a business and its
clients. By default, SMS guarantees neither confidentiality nor
integrity of the message content.
In mobile communication systems, the security (encryption)
offered by the network operator applies only to the wireless link;
data delivered through the mobile core network may not be
protected. Existing end-to-end security mechanisms are provided
at the application level and are typically based on public-key
cryptosystems.
The main concern in a public-key setting is the authenticity of
the public key; this issue can be resolved by identity-based
(ID-based) cryptography, where the public key of a user can be
derived from public information that uniquely identifies the user.
This paper presents an encryption mechanism based on the
ID-based scheme using elliptic curves to provide end-to-end security
for SMS. This mechanism has been implemented over the standard
SMS network architecture, and the encryption overhead has been
estimated and compared with that of the RSA scheme. This study
indicates that the ID-based mechanism has advantages over the RSA
mechanism in key distribution and in scaling to higher security
levels for mobile services.
Abstract: This paper presents the potential of the smart phone
to support mapping of indoor assets. The advantage of using a smart
phone to generate the indoor map is its ability to capture, store and
reproduce still or video images; indeed, most of us own this powerful
gadget. The captured images are usually used by the maintenance team
to keep a record for future reference. Here, these images are used to
generate 3D models of an object precisely and accurately, offering an
efficient and effective solution for data gathering. Thus, they could
be a resource for an informative database in asset management.
Abstract: The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One limitation of such systems lies in the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least-squares penalty method for automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that, for large poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.
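As a minimal illustration of the artifact being addressed (not the paper's penalty method), the sketch below masks grid vertices adjacent to large jumps so that a surface plotter skips them instead of drawing a spurious "wall" across the discontinuity; the function, grid and tolerance are hypothetical:

```python
import numpy as np

def masked_surface(f, xs, ys, jump_tol):
    """Sample f on a rectangular grid and replace with NaN every vertex
    adjacent to a jump larger than jump_tol, so standard mesh plotters
    omit cells that would bridge a discontinuity."""
    X, Y = np.meshgrid(xs, ys)
    Z = f(X, Y)
    mask = np.zeros_like(Z, dtype=bool)
    dzx = np.abs(np.diff(Z, axis=1))   # jumps between horizontal neighbors
    dzy = np.abs(np.diff(Z, axis=0))   # jumps between vertical neighbors
    mask[:, :-1] |= dzx > jump_tol
    mask[:, 1:] |= dzx > jump_tol
    mask[:-1, :] |= dzy > jump_tol
    mask[1:, :] |= dzy > jump_tol
    return X, Y, np.where(mask, np.nan, Z)
```

A uniform grid with jump masking is the crude fixed-mesh baseline; the paper's contribution is instead a non-uniform mesh adapted near the discontinuity set.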
Abstract: This paper addresses the problem of partial state
feedback stabilization of a class of nonlinear systems. To stabilize
this class of systems, the particular contribution of this paper is
to reverse-design the state feedback control law from the method of
judging system stability via center manifold theory. First, center
manifold theory is applied to discuss sufficient conditions for
stabilization and to design stabilizing state control laws for a
class of nonlinear systems. Second, the problem of partial
stabilization for a class of planar nonlinear systems is discussed
using Lyapunov's second method and center manifold theory. Third, we
specially investigate the stabilization problem for a class of
homogeneous planar nonlinear systems, a class of nonlinear systems
with dual-zero eigenvalues, and a class of nonlinear systems with a
zero-center, using the method of Lyapunov functions with homogeneous
derivatives. Finally, examples and simulation results are given to
show that the approach of this paper to this class of nonlinear
systems is effective and convenient.
Abstract: The damage-tolerance behavior of integrally and
conventionally stiffened panels is investigated based on fracture
mechanics and finite element analysis. The load-bearing capability
and crack growth characteristics of both types of stiffened panel,
having the same configuration and subjected to a distributed tensile
load, are examined in this paper. A fourteen-stringer stiffened panel
is analyzed for a central skin crack propagating towards the adjacent
stringers. Stress intensity factors and fatigue crack propagation
rates of the two panel types are then compared. The results show that
integral stiffening causes a higher stress intensity factor than
conventional stiffening as the crack tip passes the stringer, and
that the integrally stiffened panel has less load-bearing capability
than the riveted stiffened panel.
Abstract: In this paper, a nonlinear delay population model is investigated. Choosing the delay as a bifurcation parameter, we demonstrate that Hopf bifurcation will occur when the delay exceeds a critical value. Global existence of bifurcating periodic solutions is established. Numerical simulations supporting the theoretical findings are included.
Abstract: In this paper, we consider the problem of logic simplification for a special class of logic functions, namely complementary Boolean functions (CBF), targeting low-power implementation in a static CMOS logic style. The functions are uniquely characterized by the presence of terms where, for a canonical binary 2-tuple, D(mj) ∪ D(mk) = { } and therefore |D(mj) ∪ D(mk)| = 0 [19]. Similarly, D(Mj) ∪ D(Mk) = { } and hence |D(Mj) ∪ D(Mk)| = 0. Here, 'mk' and 'Mk' represent a minterm and a maxterm, respectively. We compare the circuits minimized with our proposed method with those corresponding to the factored Reed-Muller (f-RM) form, the factored Pseudo Kronecker Reed-Muller (f-PKRM) form, and the factored Generalized Reed-Muller (f-GRM) form. We have opted for algebraic factorization of the Reed-Muller (RM) form and its different variants, using the factorization rules of [1], as it is simple and requires much less CPU execution time than Boolean factorization. This technique has enabled us to greatly reduce the literal count as well as the gate count needed for such RM realizations, which are generally prone to consuming more cells and hence more power. However, this comes at a cost in the design-for-test attributes associated with the various RM forms. Though we still preserve the definition of those forms, viz. realizing such functionality with only select types of logic gates (AND and XOR gates), the structural integrity of the logic levels is not preserved. This consequently alters the testability properties of such circuits, i.e., it may increase, decrease, or leave unchanged the number of test input vectors needed for their exhaustive testing, and so affects their generalized test vector computation.
We do not consider design-for-testability here but instead focus on the power consumption of the final logic implementation after realization in a conventional CMOS process technology (0.35-micron TSMC process). The quality of the resulting circuits, evaluated on the basis of an established cost metric, viz. power consumption, demonstrates average savings of 26.79% for the samples considered in this work, besides reductions in gate count and input literal count of 39.66% and 12.98%, respectively, in comparison with the other factored RM forms.
Abstract: Selection of the best possible set of suppliers has a
significant impact on the overall profitability and success of any
business. For this reason, it is usually necessary to optimize all
business processes and to make use of cost-effective alternatives for
additional savings. This paper proposes a new efficient context-aware
supplier selection model that takes into account possible changes of
the environment while significantly reducing selection costs. The
proposed model is based on data clustering techniques and draws on
certain principles of online algorithms for an optimal selection of
suppliers. Unlike common selection models, which re-run the selection
algorithm from scratch over the whole environment for every
decision-making sub-period, our model considers only the changes and
superimposes them on the previously determined best set of suppliers
to obtain a new best set. Therefore, any re-computation of unchanged
elements of the environment is avoided, and selection costs are
consequently reduced significantly. A numerical evaluation confirms
the applicability of this model and shows that it outperforms the
common static selection models in this field.
Abstract: For numerical prediction of NOx in the exhaust of a
compression-ignition engine, a model was developed that takes the
equivalence ratio as a parameter. The model was validated by
comparing the predicted NOx with experimental results. The ultimate
aim of the work was to assess the applicability, robustness and
performance of the improved NOx model against other NOx models.