Abstract: Advanced treatments such as forward osmosis (FO)
can be used to separate or reject nutrients from secondary treated
effluents. Forward osmosis uses the chemical potential across the
membrane, which is the osmotic pressure gradient, to induce water to
flow through the membrane from a feed solution (FS) into a draw
solution (DS). The performance of FO is affected by the membrane
characteristics, composition of the FS and DS, and operating
conditions. The aim of this study was to investigate the optimum
velocity and temperature for nutrient rejection and water flux
performance in FO treatments. MgCl2 was used as the DS in the FO
process. The results showed that higher cross flow velocities yielded
higher water fluxes. High nutrient rejection was achieved at a
moderate cross-flow velocity of 0.25 m/s. Nutrient rejection was
insensitive to temperature variation, whereas water flux was
significantly affected by it. A temperature of 25°C was found to be
favourable for nutrient rejection.
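The osmotic driving force described above can be illustrated numerically. The abstract gives no draw-solution concentration, so the sketch below estimates osmotic pressure with the ideal van 't Hoff relation π = iMRT for a hypothetical 1 M MgCl2 draw solution (van 't Hoff factor i ≈ 3, assuming full dissociation) against an assumed dilute feed; all concentrations are illustrative, not from the study.

```python
# Hedged sketch: osmotic pressure via the van 't Hoff relation pi = i*M*R*T.
# The 1 M MgCl2 draw and dilute feed concentrations are assumed for illustration.

R_BAR = 0.083145  # gas constant, L*bar/(mol*K)

def vant_hoff_pressure(molarity, vant_hoff_factor, temp_k):
    """Ideal osmotic pressure in bar for a fully dissociated solute."""
    return vant_hoff_factor * molarity * R_BAR * temp_k

pi_draw = vant_hoff_pressure(1.0, 3, 298.15)   # ~74 bar for 1 M MgCl2 at 25 C
pi_feed = vant_hoff_pressure(0.01, 2, 298.15)  # assumed dilute NaCl-like feed
driving_force = pi_draw - pi_feed              # net osmotic driving force, bar
```

The large pressure difference at 25°C is what draws water across the membrane without applied hydraulic pressure.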
Abstract: Background - The TrendCare Patient Dependency
System is currently used by a large number of maternity services
across Australia, New Zealand and Singapore. In 2012, 2013 and
2014 validation studies were initiated in all three countries to validate
the acuity tools used for women in labour, and postnatal mothers and
babies. This paper will present the findings of the validation study.
Aim - The aims of this study were to: identify whether the care hours
provided by the TrendCare acuity system were an accurate reflection
of the care required by women and babies; and obtain evidence of
changes required to acuity indicators and/or category timings to
ensure the TrendCare acuity system remains reliable and valid across
a range of maternity care models in three countries.
Method - A non-experimental action research methodology was
used across maternity services in four District Health Boards in New
Zealand, a large tertiary and a large secondary maternity service in
Singapore and a large public maternity service in Australia.
Standardised data collection forms and timing devices were used to
collect midwife contact times with the women and babies included in the
study. Rejection processes excluded samples in which care was not
completed or was rationed, or in which contact timing forms were incomplete.
The variances between actual timed midwife/mother/baby contact and the
TrendCare acuity category times were identified and investigated.
Results - Thirty-two (88.9%) of the 36 TrendCare acuity category
timings fell within the variance tolerance levels when compared with
the actual timings recorded for midwifery care. Four (11.1%)
TrendCare categories provided fewer minutes of care than the actual
timings and exceeded the variance tolerance level; these were all
night shift category timings. Nine postnatal categories could not be
compared because their sample sizes were too small for statistical
analysis. All labour ward TrendCare categories matched the actual
timings for midwifery care, falling within the variance tolerance
levels.
The actual time provided by core midwifery staff to assist lead
maternity carer (LMC) midwives in New Zealand labour wards
deviated significantly from previous studies. The findings
demonstrated the need for additional time allocations in
TrendCare to accommodate the increased level of assistance given to
LMC midwives.
Conclusion - The results demonstrated the importance of regularly
validating the TrendCare category timings with actual timings of the
care hours provided. It was evident from the findings that changes
in models of care and length of stay in maternity units have increased
midwifery workloads on the night shift. The level of assistance
provided by the core labour ward staff to the LMC midwife has
increased substantially.
Outcomes - As a consequence of this study, changes were made to
the night duty TrendCare maternity categories, additional acuity
indicators were developed, and the times for assisting LMC midwives in
the labour ward were increased. The updated TrendCare version was delivered
to maternity services in 2014.
Abstract: In routine clinical practice, cardiac function is quantified
by calculating blood volume and ejection fraction. However, this is
typically done by manual contouring, which is time-consuming and
observer-dependent. In this paper, an automatic left ventricle segmentation
algorithm for cardiac magnetic resonance images (MRI) is presented.
Using prior knowledge of cardiac MRI, a K-means clustering technique is
applied to segment the blood region on a coil-sensitivity-corrected image.
A graph searching technique is then used to correct segmentation
errors caused by coil distortion and noise. Finally, blood volume and
ejection fraction are calculated. The presented algorithm is tested on
cardiac MRI from 15 subjects and compared with manual contouring by
experts, demonstrating outstanding performance.
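The clustering step can be sketched in a few lines. The snippet below is a minimal 1-D K-means on pixel intensities, standing in for the paper's blood-region segmentation; the synthetic "dark myocardium vs. bright blood pool" intensities and the absence of coil-sensitivity correction and graph searching are simplifying assumptions.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Minimal K-means on scalar pixel intensities; returns sorted centers, labels."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    centers = np.sort(centers)  # relabel so cluster 1 is always the brighter one
    labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
    return centers, labels

# Synthetic stand-in for a short-axis slice: dark myocardium vs. bright blood pool
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(50, 10, 500), rng.normal(200, 15, 300)])
centers, labels = kmeans_1d(pixels, k=2)
blood_mask = labels == 1  # brighter cluster approximates the blood region
```

In the paper's pipeline this mask would then be refined by graph searching before computing volumes.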
Abstract: We have developed a new computer program in
Fortran 90 to obtain numerical solutions of a system of relativistic
magnetohydrodynamics partial differential equations with a prescribed
gravitational background (GRMHD), capable of simulating the formation
of relativistic jets from the accretion disk of matter up to their
ejection. We first carried out a study of one-dimensional finite volume
methods, namely the Lax-Friedrichs, Lax-Wendroff and Nessyahu-Tadmor
methods and Godunov-type methods based on Riemann solvers, applied to
the Euler equations, in order to verify their main features and
compare them. We then implemented the central finite volume method
of Nessyahu-Tadmor, a numerical scheme whose formulation is free of
Riemann problem solvers and requires no dimensional splitting even in
two or more spatial dimensions, and applied it to the GRMHD equations.
With the Nessyahu-Tadmor method it was possible to obtain stable
numerical solutions - without spurious oscillations or excessive
dissipation - of a magnetized accretion disk rotating around a central
Schwarzschild black hole (BH) immersed in a magnetosphere, with the
ejection of matter in the form of a jet over a distance of fourteen
times the radius of the BH, a record for astrophysical simulations of
this kind. Our simulations also captured jet substructures. A great
advantage of our code is that it can simulate the GRMHD equations on
a simple personal computer.
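The finite volume schemes compared above share a common update structure. As a minimal illustration (not the authors' Fortran code), the sketch below applies the Lax-Friedrichs scheme, the simplest of the methods named, to linear advection with periodic boundaries; the grid size, CFL number, and Gaussian initial pulse are arbitrary choices.

```python
import numpy as np

def lax_friedrichs_step(u, flux, dt, dx):
    """One Lax-Friedrichs finite-volume step with periodic boundaries."""
    up = np.roll(u, -1)   # u_{i+1}
    um = np.roll(u, 1)    # u_{i-1}
    return 0.5 * (up + um) - 0.5 * (dt / dx) * (flux(up) - flux(um))

# Model problem: linear advection u_t + a*u_x = 0 with a Gaussian pulse
a = 1.0
n = 200
dx = 1.0 / n
dt = 0.4 * dx / a                      # CFL number 0.4 < 1 for stability
x = (np.arange(n) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.5) ** 2)
mass0 = u.sum() * dx                   # total "mass", conserved by the scheme
for _ in range(100):
    u = lax_friedrichs_step(u, lambda v: a * v, dt, dx)
```

The scheme is conservative and oscillation-free but very diffusive, which is exactly the trade-off that motivates higher-order central schemes such as Nessyahu-Tadmor.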
Abstract: In this paper a comprehensive review of various
factory layouts is carried out with the aim of designing a lucrative
process layout for medium-scale industries. Industry databases reveal
that the end-product rejection rate is on the order of 10%, amounting
to a large profit loss. To avoid these rejection rates and to increase
quality product output, an intermediate non-destructive testing
facility (INDTF) is recommended for increasing overall profit. We
observed through detailed case studies that introducing an INDTF in
medium-scale industries allows defective products to be identified
well before their final shape, avoiding expensive downstream
production processes. Additionally, the defective products identified
at the intermediate stage can be effectively utilized for other
applications or recycling, thereby reducing the overall wastage of raw
materials and increasing profit. We conclude that a prudent factory
layout designed through the critical path method and facilitated with
an INDTF will warrant a profitable outcome.
Abstract: As an emerging business model, cloud computing has been initiated to satisfy the needs of organizations and to push Information Technology as a utility. The shift to the cloud has changed the way Information Technology departments are traditionally managed and has raised many concerns for both public and private sectors.
The purpose of this study is to investigate the possibility of cloud computing services replacing services provided traditionally by IT departments. Therefore, it aims to 1) explore whether organizations in Oman are ready to move to the cloud; 2) identify the deciding factors leading to the adoption or rejection of cloud computing services in Oman; and 3) provide two case studies, one for a successful Cloud provider and another for a successful adopter.
This paper is based on multiple research methods including conducting a set of interviews with cloud service providers and current cloud users in Oman; and collecting data using questionnaires from experts in the field and potential users of cloud services.
Despite the limited bandwidth capacity and Internet coverage in Oman, which create a challenge in adopting the cloud, it was found that many information technology professionals are encouraged to move to the cloud while a few remain resistant to change.
The recent launch of a new Omani cloud service provider and the entrance of other international cloud service providers in the Omani market make this research extremely valuable as it aims to provide real-life experience as well as two case studies on the successful provision of cloud services and the successful adoption of these services.
Abstract: The coefficient diagram method is primarily an algebraic control design method whose objective is to obtain
a good controller easily and with minimum user effort. In fact, if a
system model in the form of linear differential equations is known,
the user only needs to define a time constant and the controller order.
The latter can be established for the expected disturbance type
via a lookup table first published by Koksal and Hamamci in 2004.
However, an inaccuracy in this table was detected and is pointed out in
the present work. Moreover, the table is expanded to cover
disturbances of any order k.
Abstract: Qatar’s primary source of fresh water is through
seawater desalination. Amongst the major processes that are
commercially available on the market, the most common large scale
techniques are Multi-Stage Flash distillation (MSF), Multi Effect
distillation (MED), and Reverse Osmosis (RO). Although commonly
used, these three processes are highly expensive owing to high energy
input requirements and high operating costs, together with the
maintenance and stress induced on the systems in harsh alkaline media.
Beyond cost, the environmental footprint of these desalination
techniques is significant: damage to the marine ecosystem, large land
use, and the discharge of tons of GHG with a huge carbon footprint.
One less energy-intensive technique based on membrane separation,
being pursued to reduce both the carbon footprint and operating costs,
is membrane distillation (MD).
Having emerged in the 1960s, MD is an alternative technology for water
desalination that has attracted increasing attention since the 1980s. The MD process
involves the evaporation of a hot feed, typically below boiling point
of brine at standard conditions, by creating a water vapor pressure
difference across the porous, hydrophobic membrane. The main
advantages of MD compared with other commercially available
technologies (MSF and MED), and especially RO, are the reduction of
membrane and module stress due to absence of trans-membrane
pressure, less impact of contaminant fouling on distillate due to
transfer of only water vapor, utilization of low grade or waste heat
from oil and gas industries to raise the feed to the required
temperature difference across the membrane, superior water quality,
and relatively lower capital and operating cost.
The objective of this study is to analyze the characteristics and
morphology of membranes suitable for DCMD through SEM imaging and
contact angle measurement, and to study the water quality of the
distillate produced by a DCMD bench scale unit. To this end, a
state-of-the-art flat-sheet cross-flow DCMD bench scale unit was
designed, commissioned, and tested.
Comparison with available literature data is undertaken where
appropriate and laboratory data is used to compare a DCMD distillate
quality with that of other desalination techniques and standards.
SEM analysis showed that the PTFE membrane used for the study has
a contact angle of 127° and a highly porous surface, supported by a
less porous PP membrane with a bigger pore size. A study of the effect
of feed salinity and temperature on distillate water quality, based on
ICP and IC analyses, showed that for any salinity and different feed
temperatures (up to 70°C) the electrical conductivity of the distillate
is less than 5 μS/cm with 99.99% salt rejection. DCMD thus proved to be
a feasible and effective process capable of consistently producing
high-quality distillate from very high salinity feed solutions (i.e.
100,000 mg/L TDS), with a substantial quality advantage over other
desalination methods such as RO and MSF.
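The rejection figure quoted above follows directly from the feed and distillate concentrations. The sketch below shows the calculation; the conductivity-to-TDS conversion factor of ~0.64 mg/L per μS/cm is a common rule of thumb assumed here, not a value from the paper.

```python
def salt_rejection(feed_tds_mg_l, distillate_tds_mg_l):
    """Percentage salt rejection from feed and distillate TDS concentrations."""
    return (1.0 - distillate_tds_mg_l / feed_tds_mg_l) * 100.0

# Assumed conversion: ~0.64 mg/L TDS per uS/cm (rule of thumb, not from the paper)
distillate_tds = 5.0 * 0.64            # 5 uS/cm distillate -> ~3.2 mg/L TDS
rejection = salt_rejection(100000.0, distillate_tds)  # feed: 100,000 mg/L TDS
```

Even at the reported worst-case conductivity of 5 μS/cm, the implied rejection for a 100,000 mg/L feed exceeds 99.99%.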
Abstract: The primary objective of this paper is to study the thermal effects of the electric arc on circuit breaker contacts in order to forecast and improve contact durability. We propose a model which takes into account the main factors influencing contact erosion. The phenomenon is complicated because the amount of ejected metal is not necessarily the whole melted metal bath; it depends on the balance of forces at the contact surface. Consequently, to calculate the metal ejection coefficient, we propose a method which consists in comparing experimental results with calculated ones. The proposed model estimates the mass lost by vaporization, by droplet ejection and by the extraction of liquid or solid metal. For the one-dimensional geometry, the contact heating is calculated using a Green's function which expresses the point source and allows the transition to a surface source. For the two-dimensional model, explicit and implicit numerical methods are used. The results are similar to those found in Wilson's experiments.
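The explicit numerical approach mentioned for the heating calculation can be sketched with a standard FTCS (forward-time, centered-space) scheme for 1-D heat conduction. The slab size, diffusivity, and the 2000 K arc-facing boundary below are assumed for illustration only; the paper's actual arc-root heat source model is more elaborate.

```python
import numpy as np

def heat_explicit(T, alpha, dt, dx, steps):
    """FTCS explicit scheme for 1D heat conduction with fixed-temperature ends."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme requires r <= 0.5 for stability"
    T = T.copy()
    for _ in range(steps):
        T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

# Assumed values: a 5 mm contact slab at 300 K whose arc-facing surface is held
# at 2000 K (crude stand-in for the arc-root heat source)
n, dx = 51, 1e-4          # 50 cells of 0.1 mm
alpha = 1e-5              # thermal diffusivity, m^2/s (assumed metal value)
dt = 4e-4                 # gives r = 0.4, inside the stability bound
T0 = np.full(n, 300.0)
T0[0] = 2000.0            # arc-heated surface (Dirichlet boundary)
T = heat_explicit(T0, alpha, dt, dx, steps=500)
```

An implicit scheme would remove the r ≤ 0.5 step-size restriction at the cost of solving a tridiagonal system each step, which is the trade-off behind the paper's explicit/implicit comparison.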
Abstract: The primary objective of this paper is to eliminate the problem of sensitivity to parameter variation in induction motor drives. The proposed sensorless strategy is based on an algorithm permitting a better simultaneous estimation of the rotor speed and the stator resistance, including an adaptive mechanism based on Lyapunov theory. To study the reliability and robustness of the sensorless technique under abnormal operation, simulation tests have been performed for several cases.
The proposed sensorless vector control scheme showed good performance in the transient and steady states, with excellent rejection of load torque disturbances.
Abstract: This paper realizes a 2-DOF controller structure for first-order systems with time delay. Co-prime factorization is used to design an observer-based controller K(s), representing one degree of freedom. The problem is based on the H∞ norm of the mixed sensitivity and aims to achieve stability, robustness and disturbance rejection. The other degree of freedom, the prefilter F(s), is then formulated as a fixed-structure polynomial controller to match the open-loop behaviour of a reference model. This model matching problem is solved by minimizing the integral square error between the reference model and the proposed model. The feedback controller and prefilter designs are posed as optimization problems and solved using Particle Swarm Optimization (PSO). To show the efficiency of the designed approach, a variety of processes are considered and compared.
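The PSO step can be illustrated with a minimal swarm optimizer. The sketch below uses textbook inertia/cognitive/social weights and a simple quadratic stand-in for the integral-square-error objective; the paper's actual cost involves simulating the closed loop against the reference model, which is omitted here.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer; returns the best position and cost."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()              # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved] = x[improved]
        pcost[improved] = c[improved]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

# Quadratic stand-in for the ISE between reference and proposed model responses;
# the optimum (1.2, -0.5) is an arbitrary illustrative target.
ise_surrogate = lambda p: (p[0] - 1.2) ** 2 + (p[1] + 0.5) ** 2
best, best_cost = pso(ise_surrogate, bounds=[(-5, 5), (-5, 5)])
```

In the paper's setting, the two decision variables would instead be the prefilter polynomial coefficients, evaluated by simulating the step response.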
Abstract: Reconfigurable antennas represent a recent innovation in antenna design that changes from classical fixed-form, fixed function antennas to modifiable structures that can be adapted to fit the requirements of a time varying system.
The ability to control the operating band of an antenna system can have many useful applications. Systems that operate in an acquire-and-track configuration would benefit from active bandwidth control. In such systems, a wide band search mode is first employed to find a desired signal, then a narrow band track mode is used to follow only that signal. Utilizing active antenna bandwidth control, a single antenna would function for both the wide band and narrow band configurations, providing rejection of unwanted signals within the antenna hardware. This ability to move a portion of the RF filtering out of the receiver and onto the antenna itself will also help reduce the complexity of the often expensive RF processing subsystems.
Abstract: In all his novels, the American writer Hawthorne created settings in which his moral concerns could be presented through the actions of his characters. He illustrated his concern over the moral fall of man in the nineteenth-century obsession with technological advancement. In "The Blithedale Romance" and "The House of the Seven Gables" he quite vividly pictured individualistic moral vices as the result of outside forces which caused social immorality. "The Marble Faun", in its turn, presents the same type of social moral concern: the story of nineteenth-century modern man and the individualistic moral issues which lead to his social moral fall. He depicted the dominant themes of individualistic moral vices, which all lead to social alienation and rejection, and showed hypocrisy and evil intentions as producing an immoral social atmosphere.
Abstract: This paper proposes a study of the input impedance of two types of CMOS active inductors, deriving two input impedance formulas: the first for the grounded active inductor and the second for the floating active inductor. These formulas are then used to simulate the magnitude and phase response of the input impedance as a function of current consumption in MATLAB. The common mode rejection ratio (CMRR) of the fully differential bandpass amplifier is derived based on the superposition principle, and CMRR as a function of input frequency is plotted for different current consumptions.
Abstract: Most empirical studies have analyzed how liquidity risks faced by individual institutions turn into systemic risk. The recent banking crisis highlighted the importance of grasping and controlling systemic risk, and the willingness of central banks to ease their monetary policies to save defaulting or illiquid banks. This last point suggests that banks may pay less attention to liquidity risk, which, in turn, can become an important new channel of loss. Financial regulation focuses on the most important and "systemic" banks in the global network. However, to quantify the expected loss associated with liquidity risk, it is worth analyzing the sensitivity of the various elements of the global bank network to this channel. A small bank is not considered potentially systemic; however, the interaction of many small banks together can become a systemic element. This paper analyzes the impact of the interaction of medium and small banks on a set of banks considered the core of the network. The proposed method uses an agent-based model in a two-class environment. In the first class, data from the actual balance sheets of 22 large and systemic banks (such as BNP Paribas or Barclays) are collected. In the second, to model a network as close as possible to the actual interbank market, 578 fictitious banks, smaller than those of the first class, are split into two groups of small and medium banks. All banks are active on the European interbank network and have deposit and market activity. A simulation of twelve three-month periods, representing a medium-term interval of three years, is run. In each period there is a set of behavioral descriptions: repayment of matured loans, liquidation of deposits, income from securities, collection of new deposits, new demands for credit, and securities sales. The last two actions are part of the refunding process developed in this paper.
To strengthen the reliability of the proposed model, the dynamics of the random parameters are governed by stochastic equations, with rate variations generated by the Vasicek model. The Central Bank is considered the lender of last resort, allowing banks to borrow at the REPO rate, and conditions for the ejection of banks from the system are introduced.
A liquidity crunch due to an exogenous crisis is simulated in the first class, and the loss impact on the other bank classes is analyzed through aggregate values representing the aggregate of loans and/or borrowing between classes. It is shown that the three groups of the European interbank network do not respond in the same way, and that intermediate banks are the most sensitive to liquidity risk.
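The Vasicek rate dynamics driving the simulation, dr = a(b − r)dt + σ dW, can be sketched with a simple Euler-Maruyama discretization. The parameter values below (mean-reversion speed, long-run rate, volatility) are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def vasicek_path(r0, a, b, sigma, dt, steps, seed=0):
    """Euler-Maruyama simulation of the Vasicek model dr = a*(b - r)*dt + sigma*dW."""
    rng = np.random.default_rng(seed)
    r = np.empty(steps + 1)
    r[0] = r0
    for t in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        r[t + 1] = r[t] + a * (b - r[t]) * dt + sigma * dw
    return r

# Twelve quarterly steps mirror the paper's three-year, 12-period horizon
rates = vasicek_path(r0=0.02, a=0.5, b=0.03, sigma=0.01, dt=0.25, steps=12)
```

The mean-reverting drift pulls simulated rates back toward the long-run level b, which keeps the interbank rate environment plausible over the three-year horizon.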
Abstract: This study presents a new method for detecting the
cutting tool wear based on the measured cutting force signals using
the regression model and I-kaz method. The detection of tool wear
was done automatically using the in-house developed regression
model and 3D graphic presentation of I-kaz 3D coefficient during
machining process. The machining tests were carried out on a CNC
turning machine Colchester Master Tornado T4 in dry cutting
condition, and a Kistler 9255B dynamometer was used to measure the
cutting force signals, which were then stored and displayed in the DasyLab
software. The progression of the cutting tool flank wear land (VB)
was indicated by the magnitude of the cutting force generated. The
I-kaz method was then used to analyze all the cutting force signals
from the beginning of the cut until the rejection stage of the cutting
tool. The results of the I-kaz analysis were represented by various
characteristics of the I-kaz 3D coefficient and its 3D graphic
presentation. The I-kaz 3D coefficient decreases as the tool wear
increases. This method can be used for real-time tool wear monitoring.
Abstract: Provision of optical devices without proper instruction
and training may cause frustration resulting in rejection or incorrect
use of the magnifiers. However training in the use of magnifiers
increases the cost of providing these devices. This study compared
the efficacy of providing instruction alone and instruction plus
training in the use of magnifiers. Twenty-four participants were
randomly assigned to two groups: 15 received instruction and training
and 9 received instruction only. Repeated measures of print size and
reading speed were taken at pre-training, post-training and follow-up.
Print size decreased in both groups between pre- and post-training,
and this was maintained at follow-up. Reading speed increased in both
groups over time, with the training group demonstrating more rapid
improvement. Whilst overall outcomes were similar, training decreased
the time required to increase reading speed, supporting the use of
training for increased efficiency. A cost-effective form of training
is suggested.
Abstract: An effective approach for extracting document images from a noisy background is introduced. The scheme is divided into three sub-techniques: initial preprocessing operations for noise cluster tightening; a new thresholding method that maximizes the ratio of the standard deviation of the combined effect on the image to the sum of the weighted class standard deviations; and finally an image restoration phase in which the image is binarized using the proposed optimum threshold level. The proposed method is found to be more efficient than existing schemes in terms of computational complexity and speed, with better noise rejection.
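The paper's standard-deviation-ratio criterion is its own contribution, but the overall histogram-based threshold search it performs is in the same family as Otsu's classical method, sketched below as a baseline for comparison; the synthetic "text vs. background" image is an assumption for illustration.

```python
import numpy as np

def otsu_threshold(gray):
    """Classical Otsu threshold on an 8-bit image (a related histogram-based
    baseline, not the paper's standard-deviation-ratio criterion)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))    # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2[~np.isfinite(sigma_b2)] = 0.0
    return int(np.argmax(sigma_b2))       # level maximizing between-class variance

# Synthetic noisy document: dark ink pixels vs. bright background pixels
rng = np.random.default_rng(0)
text = rng.normal(60, 10, 2000)
background = rng.normal(200, 10, 8000)
img = np.clip(np.concatenate([text, background]), 0, 255).astype(np.uint8)
t = otsu_threshold(img)   # should land between the two intensity modes
binary = img > t          # True = background, False = ink
```

Both criteria scan all 256 candidate levels once over the histogram, which is why such methods are cheap compared with iterative clustering.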
Abstract: Research on two-wheels balancing robots has
gained momentum due to their functionality and reliability when
completing certain tasks. This paper presents investigations into the
performance comparison of Linear Quadratic Regulator (LQR) and
PID-PID controllers for a highly nonlinear 2–wheels balancing robot.
The mathematical model of 2-wheels balancing robot that is highly
nonlinear is derived. The final model is then represented in
state-space form, and the system suffers from a mismatched condition. Two
system responses namely the robot position and robot angular
position are obtained. The performances of the LQR and PID-PID
controllers are examined in terms of input tracking and disturbance
rejection capability. Simulation results of the responses of the
nonlinear 2–wheels balancing robot are presented in time domain. A
comparative assessment of both control schemes to the system
performance is presented and discussed.
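The LQR design step can be sketched compactly. The paper's full nonlinear two-wheel robot model is not given in the abstract, so the snippet below applies LQR to a generic linearized inverted-pendulum-like state-space model (states: position, velocity, tilt angle, tilt rate); all matrix entries and weights are assumed values for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed linearized model of an inverted-pendulum-like balancing platform
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -2.0, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 20.0, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [-2.0]])
Q = np.diag([10.0, 1.0, 100.0, 1.0])   # penalize position and tilt most
R = np.array([[1.0]])                   # control effort weight

P = solve_continuous_are(A, B, Q, R)    # solve the Riccati equation
K = np.linalg.inv(R) @ B.T @ P          # optimal state feedback u = -K x
cl_eigs = np.linalg.eigvals(A - B @ K)  # closed-loop poles
```

With the PID-PID alternative, two nested loops would be tuned separately for the position and tilt responses instead of computing a single optimal gain matrix.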
Abstract: A new approach is adopted in this paper based
on Turk and Pentland's eigenface method. It was found that the
probability density function of the distance between the projection
vector of the input face image and the average projection vector of
the subject in the face database, follows Rayleigh distribution. In
order to decrease the false acceptance rate and increase the
recognition rate, the input face image has been recognized using two
thresholds including the acceptance threshold and the rejection
threshold. We also found that the values of the two thresholds
approach each other as the number of trials increases. During
training, in order to reduce the number of trials, the projection
vectors for each subject were averaged. Recognition experiments using
the proposed algorithm show that the recognition rate reaches
92.875%, while the average number of judgments is only 2.56.
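The dual-threshold decision rule described above can be sketched as follows. The threshold values, the projection dimensionality, and the retry label for the ambiguous band are assumptions for illustration; the paper derives its thresholds from the fitted Rayleigh distribution, which is not reproduced here.

```python
import numpy as np

def classify(distance, t_accept, t_reject):
    """Dual-threshold decision on the distance between a probe's projection
    and the subject's averaged projection (threshold values assumed)."""
    if distance <= t_accept:
        return "accept"          # confidently the claimed subject
    if distance >= t_reject:
        return "reject"          # confidently an impostor
    return "retry"               # ambiguous band: request another trial

# Hypothetical averaged training projection and a same-subject probe
rng = np.random.default_rng(0)
avg_proj = rng.normal(size=8)                       # averaged projection vector
probe = avg_proj + rng.normal(scale=0.1, size=8)    # nearby probe projection
d = float(np.linalg.norm(probe - avg_proj))
decision = classify(d, t_accept=1.0, t_reject=2.0)
```

As the paper observes, repeated trials let the two thresholds be tightened toward each other, shrinking the retry band and the average number of judgments.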