Abstract: Background: The widespread use of chemotherapeutic drugs in the treatment of cancer has led to higher health hazards among employees who handle and administer such drugs, so nurses should know how to protect themselves, their patients, and their work environment against the toxic effects of chemotherapy. Aim: This study was carried out to examine the effect of a chemotherapy safety protocol for oncology nurses on their protective measure practices. Design: A quasi-experimental research design was utilized. Setting: The study was carried out in the oncology department of Menoufia University Hospital and the Tanta oncology treatment center. Sample: A convenience sample of forty-five nurses in the Tanta oncology treatment center and eighteen nurses in the Menoufia oncology department. Tools: I: An interviewing questionnaire covering sociodemographic data, assessment of the unit, and nurses' knowledge about chemotherapy. II: An observational checklist to assess nurses' actual practices of handling and administration of chemotherapy. Baseline data were assessed before implementing the chemotherapy safety protocol; the protocol was then implemented, and after 2 months the nurses were assessed again. Results: revealed that 88.9% of study group I and 55.6% of study group II improved to good total knowledge scores after education on the safety protocol; also, 95.6% of study group I and 88.9% of study group II had good total practice scores after education on the safety protocol. Moreover, less than half of group I (44.4%) reported that heavy workload was the main barrier for them, while the majority of group II (94.4%) had many barriers to adhering to the safety protocol, such as not knowing the protocol, heavy workload, and inadequate equipment. Conclusions: The safety protocol for oncology nurses seemed to have a positive effect on improving nurses' knowledge and practice. Recommendation: A chemotherapy safety protocol should be instituted for all oncology nurses working in any oncology unit and/or center to enhance compliance, and the protocol should be repeated at frequent intervals.
Abstract: This paper is concerned with the delay-distribution-dependent
stability criteria for bidirectional associative memory
(BAM) neural networks with time-varying delays. Based on the
Lyapunov-Krasovskii functional and stochastic analysis approach,
a delay-probability-distribution-dependent sufficient condition is derived to guarantee that the considered BAM neural networks are globally asymptotically stable in the mean square. The criteria are formulated in
terms of a set of linear matrix inequalities (LMIs), which can be
checked efficiently by use of some standard numerical packages. Finally,
a numerical example and its simulation are given to demonstrate
the usefulness and effectiveness of the proposed results.
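As a simplified, hypothetical illustration of the underlying idea (not the paper's delay-distribution-dependent LMI), the basic Lyapunov matrix condition A^T P + P A ≺ 0 behind such criteria can be checked numerically: for a candidate system matrix A, solve the Lyapunov equation A^T P + P A = −Q with Q ≻ 0 and verify that P is positive definite. The matrix A below is invented for the example.

```python
import numpy as np

def lyapunov_certificate(A, Q):
    """Solve A^T P + P A = -Q via the vectorization identity
    (I kron A^T + A^T kron I) vec(P) = -vec(Q), then report whether P > 0,
    which certifies asymptotic stability of x' = A x."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(M, -Q.flatten(order="F")).reshape(n, n, order="F")
    P = 0.5 * (P + P.T)  # symmetrize against round-off
    return P, bool(np.all(np.linalg.eigvalsh(P) > 0))

# Invented stable test matrix (eigenvalues -1 and -2).
A = np.array([[-1.0, 1.0],
              [ 0.0, -2.0]])
P, stable = lyapunov_certificate(A, np.eye(2))
```

Dedicated LMI solvers do essentially this feasibility test at scale, with the delay and probability-distribution terms entering as extra blocks of the matrix inequality.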
Abstract: The role of entrepreneurs in growing the economy is very important. Thus, nurturing entrepreneurship skills in society is crucial and should start from an early age. One of the methods is to teach through games, such as board games. Games provide a fun and interactive platform for players to learn and play. Besides that, although today's world is moving towards Islamic approaches in terms of finance, banking, and entertainment, Islamic-based games are still hard to find in the market, especially games on entrepreneurship. Therefore, there is a gap in this segment that can be filled by learning entrepreneurship through games. The objective of this paper is to develop an entrepreneurship digital-based game entitled "Catur Bistari" that is based on an Islamic business approach. Knowledge and skills of entrepreneurship and the Islamic business approach will be learned through the tasks that are incorporated inside the game.
Abstract: In this paper, parallelism in the solution of Ordinary
Differential Equations (ODEs) to increase the computational speed is
studied. The focus is the development of a parallel algorithm for the two-point Block Backward Differentiation Formulas (PBBDF) that can
take advantage of the parallel architecture in computer technology.
Parallelism is obtained by using the Message Passing Interface (MPI).
Numerical results are given to validate the efficiency of the PBBDF
implementation as compared to the sequential implementation.
Abstract: Airport capacity has always been perceived in the
traditional sense as the number of aircraft operations during a
specified time corresponding to a tolerable level of average delay and
it mostly depends on the airside characteristics, the fleet-mix variability, and air traffic management (ATM). However, the adoption of Directive 2002/30/EC in the EU countries drives stakeholders to conceive of airport capacity in a different way. Airport capacity in this
sense is fundamentally driven by environmental criteria, and since
acoustical externalities represent the most important factors, those are
the ones that could pose a serious threat to the growth of airports and
to aviation market itself in the short-medium term. The importance of
the regional airports in the deregulated market grew fast during the
last decade since they represent spokes for network carriers and a
preferential destination for low-fare carriers. Regional airports have witnessed not only a fast and unexpected growth in traffic but also a fast growth in complaints about noise nuisance from people living near those airports. In this paper the results of a study
conducted in cooperation with the airport of Bologna G. Marconi are
presented in order to investigate airport acoustical capacity as a de facto constraint on airport growth.
Abstract: Overcurrent (OC) relays are the major protection
devices in a distribution system. The operating times of the OC relays must be coordinated properly to avoid mal-operation of the
backup relays. The OC relay time coordination in ring fed
distribution networks is a highly constrained optimization problem
which can be stated as a linear programming problem (LPP). The
purpose is to find an optimum relay setting to minimize the time of
operation of relays and at the same time, to keep the relays properly
coordinated to avoid the mal-operation of relays.
This paper presents a two-phase simplex method for optimum time
coordination of OC relays. The method is based on the simplex
algorithm, which is used to find the optimum solution of the LPP. The
method introduces artificial variables to get an initial basic feasible
solution (IBFS). Artificial variables are removed using iterative
process of first phase which minimizes the auxiliary objective
function. The second phase minimizes the original objective function
and gives the optimum time coordination of OC relays.
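As a hypothetical toy instance of such a relay-coordination LP (two relays, one coordination constraint with a 0.3 s coordination time interval, and a 0.1 s minimum operating time; the numbers are illustrative, not from the paper), the problem can be handed to any LP routine. Here scipy's `linprog` stands in for the paper's two-phase simplex implementation:

```python
from scipy.optimize import linprog

# Minimize total operating time t1 + t2 of a primary/backup relay pair,
# subject to the coordination constraint t2 >= t1 + 0.3 (the backup must
# wait for the primary) and minimum operating times t1, t2 >= 0.1 s.
c = [1.0, 1.0]          # objective: t1 + t2
A_ub = [[1.0, -1.0]]    # t1 - t2 <= -0.3  <=>  t2 >= t1 + 0.3
b_ub = [-0.3]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.1, None), (0.1, None)])
t1, t2 = res.x          # optimum: t1 = 0.1, t2 = 0.4
```

A two-phase simplex solver would first introduce artificial variables to build an initial basic feasible solution for these constraints, then minimize the true objective in phase two, arriving at the same vertex.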
Abstract: Recently, global concerns about energy security have been steadily increasing and are expected to become a major issue over the next few decades. Energy security refers to a resilient
energy system. This resilient system would be capable of
withstanding threats through a combination of active, direct security
measures and passive or more indirect measures such as redundancy,
duplication of critical equipment, diversity in fuel, other sources of
energy, and reliance on less vulnerable infrastructure. Threats and
disruptions (disturbances) to one part of the energy system affect
another. The paper presents a methodology, with theoretical background, that treats the energy system as an interconnected network and assesses the impact of energy supply disturbances on the network. The proposed methodology uses a network flow approach to develop a mathematical model of the energy system network as a system of nodes and arcs, with energy flowing from node to node along paths in the network.
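A minimal sketch of the node-and-arc flow idea (on a hypothetical 4-node supply network, not the paper's model): the Edmonds-Karp max-flow algorithm computes the maximum deliverable energy from a supply node to a demand node, and re-running it after shrinking or removing an arc quantifies the impact of that disturbance.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a capacity dict cap[u][v];
    returns the value of the maximum s-t flow."""
    # Residual capacities, with reverse arcs initialized to 0.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u, nbrs in cap.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual network.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Recover the path, find its bottleneck, and augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Hypothetical network: source 's' (production), intermediate nodes
# 'a' and 'b' (transmission), sink 't' (demand); numbers are arc capacities.
caps = {"s": {"a": 10, "b": 5}, "a": {"b": 15, "t": 5}, "b": {"t": 10}}
```

For this toy network the maximum flow is 15, matching the minimum cut {s,a,b}/{t} of capacity 5 + 10.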
Abstract: Support Vector Machine (SVM) is a statistical learning tool that was initially developed by Vapnik in 1979 and later extended to the more complex concept of structural risk minimization (SRM). SVM is playing an increasing role in detection applications in various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was done for single-look and multi-look speckle models to give a complete overview of, and insight into, the newly proposed model of the SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance in terms of the mean square error (MSE) metric was calculated. We showed that the SVM-detected ultrasound images have a very low MSE and are of good quality. The quality of the processed speckled images improved for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original non-noisy images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (detection hypotheses) in the original images.
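To make the SVM decision rule sign(w·x + b) concrete, here is a deliberately tiny linear soft-margin SVM trained by hinge-loss subgradient descent on invented 2D data (two clusters standing in for the two detection hypotheses); it is an illustrative sketch, not the paper's ultrasound detector:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Toy soft-margin linear SVM: minimize lam*||w||^2 + mean hinge loss
    by full-batch subgradient descent. Labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1  # points inside or violating the margin
        if active.any():
            grad_w = 2 * lam * w - (y[active][:, None] * X[active]).mean(axis=0)
            grad_b = -y[active].mean()
        else:
            grad_w = 2 * lam * w
            grad_b = 0.0
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Two linearly separable clusters (hypothetical reflectivity features).
X = np.array([[2.0, 2.0], [3.0, 2.5], [2.5, 3.0],
              [-2.0, -2.0], [-3.0, -2.5], [-2.5, -3.0]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

A detector built this way classifies each feature vector by which side of the learned hyperplane it falls on; the SRM view motivates the lam*||w||^2 term, which maximizes the margin.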
Abstract: The major building block of most elliptic curve cryptosystems is the computation of multi-scalar multiplication. This paper proposes a novel algorithm for simultaneous multi-scalar multiplication by employing addition chains. The previously known methods utilize the double-and-add algorithm with binary representations. In order to accomplish our purpose, an efficient empirical method for finding addition chains for multi-exponents has been proposed.
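For reference, the baseline the paper improves upon is the standard simultaneous double-and-add method (Shamir's trick), which merges the two binary expansions into one shared pass. The sketch below abstracts the group operations so plain integers can stand in for elliptic-curve points:

```python
def simultaneous_double_and_add(a, b, P, Q, add, double, zero):
    """Compute a*P + b*Q in one shared double-and-add pass (Shamir's trick).
    `add`, `double`, `zero` abstract the group; here integers stand in
    for elliptic-curve points."""
    # Precompute the four joint-digit values: 0, P, Q, P+Q.
    table = {(0, 0): zero, (1, 0): P, (0, 1): Q, (1, 1): add(P, Q)}
    R = zero
    for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
        R = double(R)
        digit = ((a >> i) & 1, (b >> i) & 1)
        if digit != (0, 0):
            R = add(R, table[digit])
    return R

# Integer group as a stand-in: point addition -> +, doubling -> 2*.
r = simultaneous_double_and_add(23, 41, 7, 5,
                                add=lambda x, y: x + y,
                                double=lambda x: 2 * x, zero=0)
# r == 23*7 + 41*5 == 366
```

Per bit this costs one doubling plus at most one addition; an addition-chain approach instead searches for a shorter sequence of group operations tailored to the specific pair of scalars.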
Abstract: Localized surface plasmon resonance (LSPR) is the
coherent oscillation of conductive electrons confined in noble
metallic nanoparticles excited by electromagnetic radiation, and
nanosphere lithography (NSL) is one of the cost-effective methods to
fabricate metal nanostructures for LSPR. NSL can be categorized
into two major groups: dispersed NSL and closely packed NSL. In
recent years, gold nanocrescents and gold nanoholes with vertical
sidewalls fabricated by dispersed NSL, and silver nanotriangles and
gold nanocaps on silica nanospheres fabricated by closely packed NSL,
have been reported for LSPR biosensing. This paper introduces
several novel gold nanostructures fabricated by NSL in LSPR
applications, including 3D nanostructures obtained by evaporating
gold obliquely on dispersed nanospheres, nanoholes with slant
sidewalls, and patchy nanoparticles on closely packed nanospheres,
all of which render satisfactory sensitivity for LSPR sensing. Since
the LSPR spectrum is very sensitive to the shape of the metal
nanostructures, formulas are derived and software is developed for
calculating the profiles of the obtainable metal nanostructures by
NSL, for different nanosphere masks with different fabrication
conditions. The simulated profiles coincide well with the profiles of
the fabricated gold nanostructures observed under scanning electron
microscope (SEM) and atomic force microscope (AFM), which
proves that the software is a useful tool for the process design of
different LSPR nanostructures.
Abstract: Palladium-catalyzed hydrodechlorination is a
promising alternative for the treatment of environmentally relevant
water bodies, such as groundwater, contaminated with chlorinated
organic compounds (COCs). In the aqueous phase
hydrodechlorination of COCs, Pd-based catalysts were found to have
a very high catalytic activity. However, the full utilization of the
catalyst's potential is impeded by the sensitivity of the catalyst to poisoning and deactivation induced by reduced sulfur compounds (e.g. sulfides). Several regenerants have been tested before to recover the performance of sulfide-fouled Pd catalysts, but these delivered only partial success with respect to re-establishing the catalyst activity. In this study, the deactivation behaviour of
Pd/Al2O3 in the presence of sulfide was investigated. Subsequent to
total deactivation the catalyst was regenerated in the aqueous phase
using potassium permanganate. Under neutral pH conditions,
oxidative regeneration with permanganate delivered a slow recovery
of catalyst activity. However, changing the pH of the bulk solution to
acidic resulted in the complete recovery of catalyst activity within a
regeneration time of about half an hour. These findings suggest the
superiority of permanganate as regenerant in re-activating Pd/Al2O3
by oxidizing Pd-bound sulfide.
Abstract: The importance of software quality is increasing, leading to the development of new sophisticated techniques which can be used in constructing models for predicting quality attributes. One such technique is the Artificial Neural Network (ANN). This paper examined the application of ANN for software quality prediction using Object-Oriented (OO) metrics. Quality estimation includes estimating the maintainability of software. The dependent variable in our study was maintenance effort. The independent variables were principal components of eight OO metrics. The results showed that the Mean Absolute Relative Error (MARE) of the ANN model was 0.265. Thus we found that the ANN method was useful in constructing software quality models.
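The reported error measure, the Mean Absolute Relative Error, has the standard form MARE = (1/n) Σ |actual − predicted| / |actual|. A minimal sketch (the effort values below are invented for illustration, not the paper's data):

```python
def mare(actual, predicted):
    """Mean Absolute Relative Error: average of |a - p| / |a| over all
    samples. Assumes every actual value is non-zero."""
    return sum(abs(a - p) / abs(a)
               for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical maintenance-effort values (actual vs. ANN-predicted).
effort_actual = [10.0, 20.0, 40.0]
effort_pred = [12.0, 18.0, 44.0]
err = mare(effort_actual, effort_pred)  # (0.2 + 0.1 + 0.1) / 3 = 0.1333...
```

Because each term is scaled by the actual value, MARE weights errors on small-effort modules as heavily as errors on large ones, which suits effort data spanning several orders of magnitude.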
Abstract: We present a new method for the fully automatic 3D
reconstruction of the coronary artery centerlines, using two X-ray
angiogram projection images from a single rotating monoplane
acquisition system. During the first stage, the input images are
smoothed using curve evolution techniques. Next, a simple yet
efficient multiscale method, based on the information of the Hessian
matrix, for the enhancement of the vascular structure is introduced.
Hysteresis thresholding, using different image quantiles, is used to segment the arteries. This stage is followed by a thinning procedure
to extract the centerlines. The resulting skeleton image is then pruned
using morphological and pattern recognition techniques to remove
non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines, combined with disparity map information, allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.
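The hysteresis-thresholding stage of such a pipeline can be sketched generically (a toy vesselness map is invented below; this is not the paper's code): pixels above the high threshold seed the segmentation, and the mask then grows through 4-connected neighbours that stay above the low threshold, so weak vessel segments survive only when attached to strong ones.

```python
from collections import deque

def hysteresis_threshold(img, low, high):
    """Keep pixels >= high, plus any pixel >= low that is 4-connected
    to a kept pixel. `img` is a 2-D list of vesselness responses."""
    rows, cols = len(img), len(img[0])
    mask = [[False] * cols for _ in range(rows)]
    q = deque((r, c) for r in range(rows) for c in range(cols)
              if img[r][c] >= high)
    for r, c in q:                      # seed with the strong responses
        mask[r][c] = True
    while q:                            # BFS growth through weak responses
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not mask[nr][nc] and img[nr][nc] >= low):
                mask[nr][nc] = True
                q.append((nr, nc))
    return mask

# Toy vesselness map: two strong responses joined by a weak ridge.
resp = [[0.90, 0.40, 0.10],
        [0.10, 0.50, 0.10],
        [0.10, 0.60, 0.95]]
kept = hysteresis_threshold(resp, low=0.30, high=0.80)
```

In the abstract's setting the two thresholds would be taken from image quantiles of the multiscale Hessian-based vesselness response rather than fixed constants.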
Abstract: In this paper we introduce an approach via optimization methods to find approximate solutions for nonlinear Fredholm integral equations of the first kind. For this purpose, we consider two stages of approximation. First we convert the integral equation to a moment problem, and then we recast the new problem as two classes of optimization problems: unconstrained optimization problems and optimal control problems. Finally, numerical examples are presented.
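As a hedged illustration of the general discretize-and-optimize idea (using Tikhonov-regularized least squares rather than the paper's moment-problem or optimal-control reformulations; the kernel and test solution are invented for the example), a first-kind equation ∫₀¹ K(s,t) f(t) dt = g(s) can be reduced to an unconstrained quadratic minimization:

```python
import numpy as np

# Discretize \int_0^1 K(s,t) f(t) dt = g(s) on an n-point grid with the
# trapezoidal rule, then minimize ||A f - g||^2 + lam * ||f||^2, whose
# normal equations are (A^T A + lam I) f = A^T g.
n = 40
t = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))          # trapezoidal quadrature weights
w[0] = w[-1] = 0.5 / (n - 1)
K = np.exp(-(t[:, None] - t[None, :]) ** 2)  # invented smooth kernel K(s,t)
A = K * w                               # quadrature matrix: A[i,j] = K(s_i,t_j) w_j
f_true = t                              # invented test solution f(t) = t
g = A @ f_true                          # consistent right-hand side
lam = 1e-8                              # small Tikhonov regularization
f = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)
residual = np.linalg.norm(A @ f - g)
```

The regularization term is essential here: first-kind equations with smooth kernels are severely ill-posed, so the unregularized normal equations would amplify discretization and round-off noise.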
Abstract: In this study, the ablation, mechanical, and thermal properties of a rocket motor insulation made from phenolic/fiber matrix composites forming a laminate with different fibers, fiberglass and locally available synthetic fibers, were investigated. The mechanical and thermal properties of the phenolic/fiber matrix composites were characterized by means of tensile strength, ablation, TGA, and DSC. The design of thermal insulation involves several factors. The mechanical properties were determined according to MIL-I-24768: density >1.3 g/cm3, tensile strength >103 MPa and ablation
Abstract: It is necessary to evaluate the condition of bridges and to strengthen bridges or parts of them. The reasons why reinforcement may be necessary can be summarized as follows. First, a change in the use of a bridge could produce internal forces in a structural part which exceed the existing cross-sectional capacity. Second, bridges may also need reinforcement because of damage due to external factors which has reduced the cross-sectional resistance to external loads. Another factor that could be listed here is misdesign of some details, affecting the safety of the bridge or part of it. This article identifies the design demands of the Qing Shan bridge, located on the He gang - Nen Jiang Road 303 provincial highway, Wudalianchi area, Heilongjiang Province, China, an important bridge in the urban area. The investigation program included observing and evaluating the damage in the T-section concrete beams and the prestressed concrete box girder bridge sections, in addition to evaluating the whole state of the bridge, including the piers, abutments, bridge decks, wings, bearings, capping beams, joints, etc. The test results show that the general structural condition of the bridge is good. In T beam span No. 10, a crack was observed extending upward along the ribbed T beam and continuing to the T beam flange, with a width varying between 0.1 mm and 0.4 mm, the maximum being about 0.4 mm. The flexural bending strength of the bridge needs to be improved, especially at the T beam section.
Abstract: In this paper, to optimize the "Characteristic Straight Line Method", which is used in soil displacement analysis, a "best estimate" of the geodetic leveling observations has been achieved by taking into account the concept of 'height systems'. This concept, and consequently the concept of "height", has been discussed in detail. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed by using the "Characteristic Straight Line Method". Its characteristic components have been defined and constructed from a "best estimate" of the topometric observations. In the measurement of elevation differences, we have used the most modern leveling equipment available. Observational procedures have also been designed to provide the most effective method of acquiring data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip which results from temperature changes; the rod scale correction ensures a uniform scale which conforms to the international length standard; and the concept of 'height systems' is introduced, where all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) have been investigated. The "Characteristic Straight Line Method" is slightly more convenient than the "Characteristic Circle Method". It permits the evaluation of a displacement of very small magnitude, even when the displacement is of an infinitesimal quantity.
The inclination of the landslide is given by the inverse of the distance from reference point O to the "Characteristic Straight Line". Its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A "best estimate" of the topometric observations was used to measure the elevation of carefully selected points, before and after the deformation. Gross errors have been eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test using an area where very interesting land surface deformation occurs are reported. Monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.
Abstract: The problem of estimating time-varying regression is
inevitably concerned with the necessity to choose the appropriate
level of model volatility - ranging from the full stationarity of instant
regression models to their absolute independence of each other. In the
stationary case the number of regression coefficients to be estimated
equals that of regressors, whereas the absence of any smoothness
assumptions augments the dimension of the unknown vector by the
factor of the time-series length. The Akaike Information Criterion
is a commonly adopted means of adjusting a model to the given
data set within a succession of nested parametric model classes,
but its crucial restriction is that the classes are rigidly defined by
the growing integer-valued dimension of the unknown vector. To
make the Kullback information maximization principle underlying the
classical AIC applicable to the problem of time-varying regression
estimation, we extend it to a wider class of data models in which
the dimension of the parameter is fixed, but the freedom of its values
is softly constrained by a family of continuously nested a priori
probability distributions.
Abstract: To evaluate the genetic variation under heat and drought stress of eight Australian wheat (Triticum aestivum) genotypes that are parents of Doubled Haploid (DH) mapping populations at the vegetative stage, a water stress experiment was conducted at 65% field capacity in a growth room. A heat stress experiment was conducted in the research field under irrigation over summer. Results show that water stress decreased dry shoot weight
and RWC but increased osmolarity and means of Fv/Fm values in all
varieties except for Krichauff. Krichauff and Kukri had the
maximum RWC under drought stress. The Trident variety showed the maximum WUE, osmolarity (610 mM/kg), dry matter, quantum yield, and Fv/Fm (0.815) under water stress conditions. However, the
recovery of quantum yield was apparent between 4 to 7 days after
stress in all varieties. Nevertheless, a further increase in water stress led to a strong decrease in quantum yield. There was genetic
variation for leaf pigments content among varieties under heat stress.
Heat stress significantly decreased the total chlorophyll content as measured by SPAD. Krichauff had the maximum values of anthocyanin
content (2.978 A/g FW), chlorophyll a+b (2.001 mg/g FW) and
chlorophyll a (1.502 mg/g FW). The maximum values of chlorophyll b (0.515 mg/g FW) and carotenoid (0.234 mg/g FW) content
belonged to Kukri. The quantum yield of all varieties decreased significantly when the air temperature increased from 28 °C to 36 °C during the 6 days. However, the recovery of quantum yield was apparent after the 8th day in all varieties. The maximum decrease
and recovery in quantum yield was observed in Krichauff. The drought- and heat-tolerant and moderately tolerant wheat genotypes included Trident, Krichauff, Kukri, and RAC875. Molineux, Berkut, and Excalibur were clustered as the most sensitive and moderately sensitive genotypes. Finally, the results show that there was significant genetic variation among the eight varieties studied under heat and water stress.