Abstract: Buildings are valuable assets that provide people with shelter for work, leisure and rest. After years of weathering, buildings deteriorate and need proper maintenance in order to fulfill the requirements and satisfaction of their users. Poorly managed buildings not only give a negative image of the city itself, but also pose potential hazards to the health and safety of the general public. As a result, the management of maintenance projects has come to play an important role in cities like Hong Kong, where the problem of urban decay has drawn much attention. However, most research has focused on managing new construction, and little research effort has been devoted to maintenance projects. Given their short duration and the more diversified nature of the work, repair and maintenance works are more difficult to monitor and regulate than new works. Project participants may face problems in running maintenance projects, and these should be investigated so that proper strategies can be established. This paper aims to provide a thorough analysis of the problems of running maintenance projects. A review of the literature on the characteristics of building maintenance projects was first conducted, forming a solid basis for the empirical study. Results on the problems and difficulties of running maintenance projects from the viewpoints of industry practitioners are also presented, with a view to formulating effective strategies for managing maintenance projects successfully.
Abstract: A polymerase chain reaction (PCR) assay and conventional microbiological methods were used to detect bacterial contamination of egg shells and egg content in two different commercial housing systems: the open house system and the evaporative cooling system. The PCR assay was developed for direct detection using a set of primers specific for the invasion A gene (invA) of Salmonella spp. PCR detected the presence of Salmonella in 2 samples of shell egg from the evaporative cooling system, while conventional culture methods detected no Salmonella in the same samples.
Abstract: Many of the drugs used for pharmaceutical purposes are poorly water-soluble. About 40% of all newly discovered drugs are lipophilic, and the number of lipophilic drugs continues to increase. Drug delivery systems such as nanoparticles, micelles or liposomes are applied to improve their solubility and thus their bioavailability. Besides various solubilization techniques, oil-in-water emulsions are often used to incorporate lipophilic drugs into the oil phase. To stabilize emulsions, surface-active substances (surfactants) are generally used. An alternative method that avoids the application of surfactants is therefore of great interest. One possibility is to develop O/W emulsions without any addition of surface-active agents, the so-called "surfactant-free emulsion" (SFE). The aim of this study was to develop and characterize SFE as a drug carrier by varying the production conditions. Lidocaine base was used as a model drug. An injection method was developed, and the effects of ultrasound as well as of temperature on the properties of the emulsion were studied. Particle sizes and drug release were determined, and long-term stability was assessed over 30 days. The results showed that surfactant-free O/W emulsions with pharmaceutical oil as the drug carrier can be produced.
Abstract: EPC Class-1 Generation-2 UHF tags, one type of radio frequency identification (RFID) tag, are inexpensive, and most companies are expected to use them in the supply chain in the short term and in consumer packaging in the long term. Because of this very low cost, however, their resources are extremely scarce, and it is hard to fit any substantial security algorithm into them. This causes security vulnerabilities, in particular cloning of the tags for counterfeits. In this paper, we propose a product authentication solution for anti-counterfeiting at the application level in the supply chain and mobile RFID environment. It aims to detect the distribution of spurious products carrying fake RFID tags and to provide a product authentication service to general consumers through mobile RFID devices, such as a mobile phone or PDA equipped with a mobile RFID reader. We discuss the anti-counterfeiting mechanisms required by our proposed solution and address the requirements that these mechanisms should satisfy.
Abstract: Dynamic bandwidth allocation in EPONs can generally be separated into inter-ONU scheduling and intra-ONU scheduling. In our previous work, active intra-ONU scheduling (AS) utilizes multiple queue reports (QRs) in each report message to cooperate with the inter-ONU scheduling, so that the granted bandwidth is fully utilized without leaving an unused slot remainder (USR). This scheme successfully solves the USR problem originating from the inseparability of Ethernet frames. However, without a proper threshold setting in AS, the number of QRs allowed by the IEEE 802.3ah standard is not enough, especially in unbalanced traffic environments. This limitation may be overcome by enlarging the threshold value, but a large threshold implies a large gap between adjacent QRs, and thus a large difference between the ideal granted bandwidth and the actual granted bandwidth. In this paper, we integrate AS with a cooperative prediction mechanism and distribute multiple QRs to reduce the penalty introduced by prediction error. Furthermore, to improve QoS and economize on queue reports, the highest-priority (EF) traffic arriving during the waiting time is granted automatically by the OLT and is not counted in the requested bandwidth of the ONU. Simulation results show that the proposed scheme achieves better bandwidth utilization and average delay for the different classes of packets.
Abstract: Polyphenolics and sugars are components of many fruit juices. In this work, the performance of ultrafiltration (UF) for separating phenolic compounds from apple juice was studied in batch experiments using a membrane module with an area of 0.1 m2 fitted with a regenerated cellulose membrane of 1 kDa MWCO. The effects of various operating conditions on performance were determined: transmembrane pressure (3, 4, 5 bar), temperature (30, 35, 40 °C), pH (2, 3, 4, 5), feed concentration (3, 5, 7, 10, 15 °Brix for apple juice) and feed flow rate (1, 1.5, 1.8 L/min). The optimum operating conditions were: transmembrane pressure 4 bar, temperature 30 °C, feed flow rate 1 – 1.8 L/min, pH 3 and 10 °Brix (apple juice). After performing ultrafiltration under these conditions, the concentration of polyphenolics in the retentate was increased by a factor of up to 2.7, with up to 70% recovered in the permeate along with approximately 20% of the sugar. Diafiltration (addition of water to the concentrate) restored the flux, which had decreased due to fouling, by a factor of 1.5. A material balance performed on the process showed the amount of deposits on the membrane and the extent of fouling in the system. In conclusion, ultrafiltration has been demonstrated as a potential technology for separating polyphenolics and sugars from their mixtures and can be applied to remove sugars from fruit juice.
Abstract: Contractor selection in Saudi Arabia is very important due to the large construction boom and the contractor's role in overcoming construction risks. The need for investigating contractor selection stems from the following: the large number of defaulted or failed projects (18%), the large number of disputes attributed to the contractor during the project execution stage (almost twofold), the extension of the General Agreement on Tariffs and Trade (GATT) into the construction industry, and, finally, the small number of existing studies. The current selection strategy is imperfect and is considered the reason behind the selection of irresponsible contractors. In response, this research reviews contractor selection strategies as an integral part of a longer-term research effort to develop a sound selection model. Many techniques can be used to form a selection strategy: multi-criteria decision optimization, prequalification to assess the contractor's responsibility, a bidding process for competition, third-party guarantees to strengthen the selection, and fuzzy techniques for handling ambiguous and incomplete information.
Abstract: When a high DC voltage is applied to a capacitor with
strongly asymmetrical electrodes, it generates a mechanical force that
affects the whole capacitor. This phenomenon is most likely to be
caused by the motion of ions generated around the smaller of the two
electrodes and their subsequent interaction with the surrounding
medium. A method to measure this force has been devised and used.
A formula describing the force has also been derived. After
comparing the data gained through experiments with those acquired
using the theoretical formula, a difference was found above a certain
value of current. This paper also gives reasons for this difference.
Abstract: This paper describes an efficient and practical method for the economic dispatch problem in one- and two-area electrical power systems, taking into account the capacity constraint of the tie transmission line. The direct search method (DSM) is used with equality and inequality constraints on the production units and with any kind of fuel cost function. This method makes it possible to handle several inequality constraints without difficulty, even for complex cost functions or when the derivative of the cost function is unavailable. To minimize the total number of iterations in the search process, multi-level convergence is incorporated into the DSM. An enhanced direct search method (EDSM) for the two-area power system is investigated, and an initial calculation step size that leads to fewer iterations, and hence less calculation time, is presented. The effect of the tie-line capacity between areas on the economic dispatch problem and on the total generation cost is studied; line compensation and combined active and reactive power dispatch are proposed to overcome the high generation costs in this multi-area system.
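The single-area core of such a scheme can be sketched as follows. This is a minimal illustration, not the paper's EDSM: the quadratic fuel-cost coefficients, unit limits, demand, and the pairwise power-transfer moves are all invented for the example, and the step-halving loop stands in for the multi-level convergence the abstract describes.

```python
# Hypothetical direct-search economic dispatch: quadratic costs
# C_i(P) = a + b*P + c*P^2, unit limits, and a fixed total demand.

def cost(P, coeffs):
    """Total fuel cost for a dispatch P under quadratic cost coefficients."""
    return sum(a + b * p + c * p * p for p, (a, b, c) in zip(P, coeffs))

def direct_search_dispatch(coeffs, limits, demand, step=10.0, tol=1e-3):
    n = len(coeffs)
    # start from an equal split (assumed feasible for this toy data)
    P = [demand / n] * n
    while step > tol:                     # "multi-level convergence": halve the step
        improved = True
        while improved:
            improved = False
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    # shift 'step' MW from unit j to unit i; the power
                    # balance (sum P = demand) is preserved by construction
                    Pi, Pj = P[i] + step, P[j] - step
                    if (limits[i][0] <= Pi <= limits[i][1]
                            and limits[j][0] <= Pj <= limits[j][1]):
                        trial = list(P)
                        trial[i], trial[j] = Pi, Pj
                        if cost(trial, coeffs) < cost(P, coeffs):
                            P = trial
                            improved = True
        step /= 2.0
    return P

coeffs = [(100, 2.0, 0.010), (120, 1.8, 0.015), (80, 2.2, 0.008)]  # (a, b, c)
limits = [(50, 300), (50, 250), (50, 300)]                          # MW bounds
P = direct_search_dispatch(coeffs, limits, demand=500.0)
```

No cost-function derivative is used anywhere, which is the point the abstract makes: only cost evaluations and feasibility checks drive the search.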
Abstract: In recent years in Kazakhstan, as in other countries, attention has turned not only to professional stress but also to professional burnout syndrome among employees. Burnout is essentially a response to chronic emotional stress, manifesting itself as chronic fatigue, despondency, unmotivated aggression, anger, and other symptoms. Among teachers this condition arises from mental fatigue, as a kind of payment for overstrain, since their professional commitments demand emotional investment, a "warmth of the soul". The emergence of professional burnout among teachers is driven by a system of interrelated and mutually reinforcing factors relating to different levels of the personality: the individual-psychological level comprises the psychodynamic characteristics of the subject, the value-motivational sphere, and the formation of skills and habits of self-regulation; the socio-psychological level includes the organization of, and interpersonal interaction in, the teacher's work. Signs of burnout were observed in 15 participants, and at least one symptom could be observed in every teacher. The diagnosis showed that 48% of teachers had signs of stress (phase syndrome), resulting in anxiety, low mood, and heightened emotional susceptibility. The following results were also obtained: a fall in general energy potential, 14 persons; psychosomatic and psycho-vegetative syndrome, 26 persons; emotional deficit, 34 persons; emotional burnout syndrome, 6 persons. The problem of professional burnout of teachers under current conditions should become not only meaningful but particularly relevant, since the quality of education of the younger generation depends on teachers' professional development, their training level, and how "healthy" they are. That is why systematic support of teachers' pedagogical-professional development, including identification of the factors behind professional burnout syndrome, takes on special meaning.
Abstract: Metal matrix composites (MMCs) are generating extensive interest in diverse fields such as the defense, aerospace, electronics and automotive industries. In the present investigation, material removal rate (MRR) modeling has been carried out using an axisymmetric model of an Al-SiC composite during electrical discharge machining (EDM). An FEA model of a single-spark EDM process was developed to calculate the temperature distribution. This single-spark model was then extended to simulate the second discharge, and for multi-discharge machining the material removal was computed from the number of pulses. The model was validated by comparing the experimental results obtained under the same process parameters with the analytical results; good agreement was found between the experimental results and the theoretical values.
Abstract: Segmentation, filtering out of measurement errors and identification of breakpoints are integral parts of any analysis of microarray data for the detection of copy number variation (CNV). Existing algorithms designed for these tasks have had some success in the past, but they tend to be O(N2) in either computation time or memory requirement, or both, and the rapid advance of microarray resolution has practically rendered such algorithms useless. Here we propose an algorithm, SAD, that is much faster and far less memory-hungry – O(N) in both computation time and memory requirement – and offers higher accuracy. The two key ingredients of SAD are the fundamental assumption in statistics that measurement errors are normally distributed and the mathematical fact that the product of two Gaussian functions is another Gaussian. We have produced a computer program for analyzing CNV based on SAD. In addition to being fast and small, it offers two important features: quantitative statistics for its predictions and, with only two user-decided parameters, ease of use. Its speed shows little dependence on the genomic profile. Running on an average modern computer, it completes CNV analyses for a 262 thousand-probe array in ~1 second and for a 1.8 million-probe array in 9 seconds.
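The Gaussian-product identity the abstract invokes can be checked numerically. The closed form is N(x; μ1, v1) · N(x; μ2, v2) = k · N(x; μ, v) with v = v1·v2/(v1+v2), μ = (μ1·v2 + μ2·v1)/(v1+v2), and constant k = N(μ1 − μ2; 0, v1+v2). The means and variances below are invented for illustration; they are not CNV data.

```python
# Numerical check that the product of two Gaussian densities is,
# up to a constant, another Gaussian density.
import math

def gauss(x, mu, var):
    """Gaussian probability density with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu1, v1 = 0.3, 0.04   # first measurement: mean, variance
mu2, v2 = 0.5, 0.01   # second measurement: mean, variance

# closed-form parameters of the product Gaussian
v = v1 * v2 / (v1 + v2)
mu = (mu1 * v2 + mu2 * v1) / (v1 + v2)
# the constant factor is itself a Gaussian evaluated at (mu1 - mu2)
k = gauss(mu1 - mu2, 0.0, v1 + v2)

# largest deviation between the direct product and k * N(mu, v)
err = max(abs(gauss(x, mu1, v1) * gauss(x, mu2, v2) - k * gauss(x, mu, v))
          for x in (-0.2, 0.1, 0.45, 0.9))
```

Note that v is always smaller than both v1 and v2: combining two noisy measurements in this way sharpens the estimate, which is what makes the identity useful for filtering normally distributed measurement errors.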
Abstract: The aluminum salt that is generally present as a solid phase in water purification sludge (WPS) can be dissolved, recovering a liquid phase, by adding strong acid to the sludge solution. According to reaction kinetics, when the reactant is in the form of small particles with a large specific surface area, or when the reaction temperature is high, the quantity of dissolved aluminum salt and the reaction rate, respectively, are high. Therefore, in this investigation, the WPS solution was treated with ultrasonic waves to break down the sludge, and different acids (1 N HCl and 1 N H2SO4) were used to acidify it. Acid dosages that yielded a solution pH of less than two were used. The results indicate that the quantity of dissolved aluminum in the H2SO4-acidified solution exceeded that in the HCl-acidified solution. Additionally, ultrasonic treatment increased both the rate of dissolution of aluminum and the amount dissolved. The quantity of aluminum dissolved at 60 °C was 1.5 to 2.0 times that at 25 °C.
Abstract: The visualization of geographic information on mobile devices has become popular with the widespread use of the mobile Internet. The mobility of these devices brings much convenience to people's lives: through the devices' add-on location-based services, people can access timely information relevant to their tasks. However, visual analysis of geographic data on mobile devices presents several challenges due to the small display and restricted computing resources; these limitations on screen size and resources may impair the usability of visualization applications. In this paper, a variable-scale visualization method is proposed to handle the challenge of the small mobile display. By merging multiple scales of information into a single image, the viewer is able to focus on the region of interest while retaining a good grasp of the surrounding context; this is essentially visualizing the map through a fisheye lens. However, the fisheye lens induces undesirable geometric distortion in the periphery, which renders the information there meaningless. The proposed solution is to apply map generalization, which removes excessive information around the periphery, together with an automatic smoothing process that corrects the distortion while keeping the local topology consistent. The proposed method is evaluated on both artificial and real geographical data.
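The fisheye magnification the abstract refers to can be sketched with a standard radial transform (in the style of Sarkar and Brown's graphical fisheye; the abstract does not name a specific formula, so this is an illustrative assumption, as are the focus point, the distortion factor d, and the sample coordinates).

```python
# Minimal radial fisheye transform: points near the focus are magnified,
# points toward the lens boundary are compressed, points outside pass through.
import math

def fisheye(p, focus, radius, d=3.0):
    """Map point p using a radial fisheye lens centered at 'focus'."""
    dx, dy = p[0] - focus[0], p[1] - focus[1]
    r = math.hypot(dx, dy)
    if r == 0 or r >= radius:
        return p                        # outside the lens: unchanged
    rn = r / radius                     # normalized distance in [0, 1)
    rn2 = (d + 1) * rn / (d * rn + 1)   # monotone in rn; steep near 0
    scale = rn2 * radius / r
    return (focus[0] + dx * scale, focus[1] + dy * scale)

near = fisheye((0.1, 0.0), (0.0, 0.0), radius=1.0)  # magnified outward
far = fisheye((0.9, 0.0), (0.0, 0.0), radius=1.0)   # pushed near the rim
```

The transform is monotone in the radius, so the local topology of points along a ray is preserved; the distortion the paper must correct shows up as the shrinking local scale near the rim, where rn2 flattens out.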
Abstract: Graph transformation has recently become more and more popular as a general visual modeling language for formally stating the dynamic semantics of designed models. In particular, it is a very natural formalism for languages whose models are basically graphs (e.g., UML). Using this technique, we present a highly understandable yet precise approach to formally model and analyze the behavioral semantics of UML 2.0 Activity diagrams. In our proposal, AGG is used to design Activities; then, using our previous approach to model checking graph transformation systems, designers can verify and analyze the designed Activity diagrams by checking the properties of interest, expressed as combinations of graph rules and LTL (Linear Temporal Logic) formulas, on the Activities.
Abstract: A generalized Dirichlet to Neumann map is
one of the main aspects characterizing a recently introduced
method for analyzing linear elliptic PDEs, through which it
became possible to couple known and unknown components
of the solution on the boundary of the domain without
solving on its interior. For its numerical solution, a well-conditioned,
quadratically convergent sine-Collocation method
was developed, which yielded a linear system of equations
with the diagonal blocks of its associated coefficient matrix
being point diagonal. This structural property, among others,
initiated interest for the employment of iterative methods for
its solution. In this work we present a conclusive numerical
study for the behavior of classical (Jacobi and Gauss-Seidel)
and Krylov subspace (GMRES and Bi-CGSTAB) iterative
methods when they are applied for the solution of the Dirichlet
to Neumann map associated with Laplace's equation
on regular polygons with the same boundary conditions on
all edges.
Abstract: The objective of this work is to make explicit the knowledge of the interactions between chlorophyll-a and nine meroplankton larvae of the epibenthic fauna. The case studied is the Arraial do Cabo upwelling system in Southeastern Brazil, which provides a range of environmental conditions. To assess this information, a network approach based on probability estimation was used. Comparisons among the generated graphs are made in the light of the different water masses, the Shannon biodiversity index, and the closeness and betweenness centrality measures. Our results show the main pattern among the different water masses and how the core organisms belonging to the network skeleton are correlated with the main environmental variable. We conclude that the complex-network approach is a promising tool for environmental diagnostics.
Abstract: A new approach to improving the generalization ability of neural networks is presented, based on the point of view of fuzzy theory. The approach is implemented by shrinking or magnifying the input vector, thereby reducing the difference between the training set and the testing set; it is called the "shrinking-magnifying approach" (SMA). At the same time, a new algorithm, the α-algorithm, is presented to find an appropriate shrinking-magnifying factor (SMF) α and thus obtain better generalization ability. A number of simulation experiments study the effect of SMA and the α-algorithm. The experimental results are discussed in detail, and the working principle of SMA is analyzed theoretically. The experiments and analyses show that the new approach is not only simpler and easier to apply, but also very effective for many neural networks and many classification problems; in our experiments, the improvement in the generalization ability of neural networks reached as much as 90%.
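The basic mechanism is simple enough to sketch: every input vector is multiplied by a factor α before being fed to the trained network. The grid search and scoring function below stand in for the paper's α-algorithm, whose exact form the abstract does not give; the sample vector, candidate factors, and reference norm are all illustrative assumptions.

```python
# Sketch of the shrinking-magnifying idea: scale inputs by a factor alpha
# (alpha < 1 shrinks, alpha > 1 magnifies) and pick alpha by validation.

def apply_smf(x, alpha):
    """Shrink or magnify an input vector by the SMF alpha."""
    return [alpha * xi for xi in x]

def pick_alpha(validate, candidates):
    """Choose the candidate SMF with the best validation score."""
    return max(candidates, key=validate)

# toy stand-in for a validation criterion: prefer the alpha that brings
# the scaled input's norm closest to a reference norm seen in training
target_norm = 1.0
sample = [0.6, 0.8]          # Euclidean norm is already 1.0

def score(alpha):
    scaled = apply_smf(sample, alpha)
    norm = sum(v * v for v in scaled) ** 0.5
    return -abs(norm - target_norm)   # higher is better

alpha = pick_alpha(score, [0.5, 0.75, 1.0, 1.25, 1.5])
```

In a real setting the validation score would be the network's accuracy on a held-out set rather than a norm comparison; the point of the sketch is only that the SMF is a single scalar chosen to align the test-time input distribution with the training one.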
Abstract: The usual correctness condition for a schedule of
concurrent database transactions is some form of serializability of
the transactions. For general forms, the problem of deciding whether
a schedule is serializable is NP-complete. In those cases other approaches
to proving correctness, using proof rules that allow the steps
of the proof of serializability to be guided manually, are desirable.
Such an approach is possible in the case of conflict serializability
which is proved algebraically by deriving serial schedules using
commutativity of non-conflicting operations. However, conflict serializability
can be an unnecessarily strong form of serializability restricting
concurrency and thereby reducing performance. In practice,
weaker, more general, forms of serializability for extended models of
transactions are used. Currently, there are no known methods using
proof rules for proving those general forms of serializability. In this
paper, we define serializability for an extended model of partitioned
transactions, which we show to be as expressive as serializability
for general partitioned transactions. An algebraic method for proving
general serializability is obtained by giving an initial-algebra specification
of serializable schedules of concurrent transactions in the
model. This demonstrates that it is possible to conduct algebraic
proofs of correctness of concurrent transactions in general cases.
Abstract: In general dynamic analyses, the lower-mode response is of interest; the higher modes of the spatially discretized equations generally do not represent the real behavior and do not affect the global response much. Some implicit algorithms are therefore introduced to filter out the high-frequency modes using deliberate numerical error. The objective of this study is to introduce the P-method and the PC α-method and to compare them with the dissipation method and the Newmark method through stability analysis and a numerical example. The PC α-method is more accurate than the other methods because, being based on the α-method, it inherits the superior properties of the implicit α-method. In finite element analysis, the PC α-method is more useful than the other methods because it is an explicit scheme yet achieves second-order accuracy and numerical damping simultaneously.