Abstract: The Muslim faith requires individuals to fast between
the hours of sunrise and sunset during the month of Ramadan. Our
recent work has concentrated on some of the changes that take place
during the daytime when fasting. A questionnaire was developed to
assess subjective estimates of physical, mental and social activities,
and fatigue. Four days were studied: in the weeks before and after
Ramadan (control days) and during the first and last weeks of
Ramadan (experimental days). On each of these four days, this
questionnaire was given several times during the daytime and once
after the fast had been broken and just before individuals retired at
night.
During Ramadan, daytime mental, physical and social activities
all decreased below control values but then increased to above-control
values in the evening. The desires to perform physical and
mental activities showed very similar patterns. That is, individuals
tried to conserve energy during the daytime in preparation for the
evenings when they ate and drank, often with friends. During
Ramadan also, individuals were more fatigued in the daytime and
napped more often than on control days. This extra fatigue probably
reflected decreased sleep, individuals often having risen earlier
(before sunrise, to prepare for fasting) and retired later (to enable
recovery from the fast).
Some physiological measures and objective measures of
performance (including the response to a bout of exercise) have also
been investigated. Urine osmolality fell during the daytime on
control days as subjects drank, but rose in Ramadan to reach values
at sunset indicative of dehydration. Exercise performance was also
compromised, particularly late in the afternoon when the fast had
lasted several hours. Self-chosen exercise work-rates fell and a set
amount of exercise felt more arduous. There were also changes in
heart rate and lactate accumulation in the blood, indicative of greater
cardiovascular and metabolic stress caused by the exercise in
subjects who had been fasting. Daytime fasting in Ramadan produces
widespread effects which probably reflect the combined effects of
sleep loss and restricted intake of water and food.
Abstract: In this paper, we study a class of serially concatenated block codes (SCBC) based on matrix interleavers, to be employed in fixed wireless communication systems. The performance of SCBC-coded systems is investigated under various interleaver dimensions. Numerical results reveal that the matrix interleaver is a competitive alternative to the conventional block interleaver for frame lengths of 200 bits; hence, SCBC coding based on the matrix interleaver is a promising technique for speech transmission applications in many international standards, such as the pan-European Global System for Mobile communications (GSM), Digital Cellular Systems (DCS) 1800, and Joint Detection Code Division Multiple Access (JD-CDMA) mobile radio systems, where the speech frame contains around 200 bits.
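As an illustration of the interleaving scheme discussed above (a generic sketch, not the authors' exact construction), a matrix interleaver writes the frame row-wise into a rows x cols array and reads it out column-wise; the deinterleaver inverts the mapping. The 10 x 20 dimensions below are hypothetical, chosen to match a 200-bit speech frame:

```python
import numpy as np

def matrix_interleave(bits, rows, cols):
    # Write the frame row-wise into a (rows x cols) matrix,
    # then read it out column-wise.
    assert len(bits) == rows * cols
    return np.asarray(bits).reshape(rows, cols).T.ravel()

def matrix_deinterleave(bits, rows, cols):
    # Inverse mapping: read column-wise, restore row-wise order.
    return np.asarray(bits).reshape(cols, rows).T.ravel()

frame = np.arange(200)                 # stand-in for a 200-bit frame
inter = matrix_interleave(frame, 10, 20)
restored = matrix_deinterleave(inter, 10, 20)
```

Adjacent input bits end up `cols` positions apart in the interleaved stream, which is what breaks up burst errors between the concatenated encoders.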
Abstract: This paper proposes a novel model for short-term load
forecasting (STLF) in the electricity market. The prior electricity
demand data are treated as time series. The model is composed of
several neural networks whose input data are processed using a
wavelet technique, and it is implemented as a simulation program
written in MATLAB. The load data are decomposed into several
wavelet coefficient series using the wavelet transform technique
known as the Non-decimated Wavelet Transform (NWT), chosen in the
belief that it can extract hidden patterns from the time series
data. The wavelet coefficient series are used to train the neural
networks (NNs) and serve as the NN inputs for electricity load
prediction. The Scaled Conjugate Gradient (SCG) algorithm is used
as the learning algorithm for the NNs. To obtain the final
forecast, the outputs from the NNs are recombined using the same
wavelet technique. The model was evaluated with the electricity
load data of the Electronic Engineering Department of Mandalay
Technological University in Myanmar. The simulation results showed
that the model was capable of producing reasonable forecasting
accuracy in STLF.
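The decompose/recombine pipeline described above can be illustrated with a minimal two-level undecimated (a trous) Haar transform in NumPy. This is a sketch of the NWT idea, not the paper's MATLAB implementation; the circular boundary handling and the synthetic load series are assumptions:

```python
import numpy as np

def nwt_haar(x, levels=2):
    """Undecimated (a trous) Haar transform: returns one detail
    series per level plus the final approximation; every series
    has the same length as x (no decimation)."""
    details, approx = [], np.asarray(x, dtype=float)
    for j in range(levels):
        shift = 2 ** j                    # filter dilated per level
        shifted = np.roll(approx, shift)  # circular boundary (assumed)
        details.append((approx - shifted) / 2)
        approx = (approx + shifted) / 2
    return details, approx

def inwt_haar(details, approx):
    # Recombination: with this Haar pair the reconstruction is
    # simply the sum of the approximation and all detail series.
    return approx + sum(details)

# Hypothetical load curve: periodic component plus a slow trend.
load = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.1 * np.arange(64)
details, approx = nwt_haar(load)
recombined = inwt_haar(details, approx)
```

In the paper's scheme, each coefficient series would be forecast by its own NN before this recombination step.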
Abstract: Subsurface erosion in river banks, despite its
occurrence in various parts of the world, has rarely received
attention from researchers. In this paper, a quantitative concept
of subsurface bank erosion is investigated for vertical banks.
Vertical banks were simulated experimentally as a sandy erodible
layer overlaid by a clayey one under a uniformly distributed
constant overhead pressure. The experimental results indicate that
the rate of sandy-layer erosion decreases as the overburden
increases; likewise, substituting 20% of the coarse (3.5 mm)
sand-layer bed material with fine material (1.4 mm) may reduce the
erosion rate by one-third. This signifies the importance of bed
material composition for the erosion of sandy layers caused by
subsurface erosion in river banks.
Abstract: Some meta-schedulers query the information system of individual supercomputers in order to submit jobs to the least busy supercomputer on a computational Grid. However, this information can become outdated by the time a job starts due to changes in scheduling priorities. The MSR scheme, based on Multiple Simultaneous Requests, can take advantage of opportunities resulting from these priority changes. This paper presents the SWARM meta-scheduler, which can speed up the execution of large sets of tasks by minimizing job queuing time through the submission of multiple requests. Performance tests have shown that this new meta-scheduler is faster than an implementation of the MSR scheme and the gLite meta-scheduler. SWARM has been used through the GridQTL project beta-testing portal during the past year. Usage statistics are provided and demonstrate its capacity to reliably achieve a substantial reduction of execution time under production conditions.
Abstract: Reciprocating compressors are flexible enough to handle wide capacity and condition swings, offer a very efficient method of compressing almost any gas mixture over a wide range of pressures, can generate high head independent of density, and have numerous applications and wide power ratings. These qualities make them vital components in various units of industrial plants. In this paper, guidelines for an optimum reciprocating compressor configuration are presented, covering interstage pressures, low suction pressure, non-lubricated cylinders, machine speed, the capacity control system, compressor valves, the lubrication system, piston rod coating, cylinder liner material, the barring device, pressure drops, rod load, pin reversal, discharge temperature, the cylinder coolant system, performance, flow, coupling, special tools, condition monitoring (including vibration, thermal and rod drop monitoring), commercial points, delivery and acoustic conditions.
Abstract: This study proposes three methods to evaluate the
Tokyo Cap and Trade Program when emissions trading is
performed virtually among enterprises, focusing on carbon dioxide
(CO2), which is the only emitted greenhouse gas that tends to increase.
The first method clarifies the optimum reduction rate for the highest
cost benefit, the second discusses emissions trading among enterprises
through market trading, and the third verifies long-term emissions
trading during the term of the plan (2010-2019), checking the validity
of emissions trading partly using Geographic Information Systems
(GIS). The findings of this study can be summarized in the following
three points.
1. Since the total cost benefit is greatest at a 44% reduction rate,
the rate can be set higher than that of the Tokyo Cap and Trade
Program to obtain a greater total cost benefit.
2. At a 44% reduction rate, among 320 enterprises, 8 purchasing
enterprises and 245 sales enterprises gain profits from emissions
trading, and 67 enterprises perform voluntary reduction without
conducting emissions trading. Therefore, to further promote
emissions trading, it is necessary to increase the sales volume of
emissions trading by increasing the number of purchasing
enterprises in addition to the number of sales enterprises.
3. Compared to short-term emissions trading, few enterprises benefit
in each year under the long-term emissions trading of the Tokyo
Cap and Trade Program: at most 81 enterprises can gain profits
from emissions trading in FY 2019. Therefore, by setting a higher
reduction rate, it is necessary to increase the number of
enterprises that participate in emissions trading and benefit
from the restraint of CO2 emissions.
Abstract: This paper describes a system-level SoC energy
consumption estimation method based on the dynamic behavior of
embedded software in the early stages of SoC development. A major
problem in SoC development is rework caused by unreliable energy
consumption estimates at these early stages. The energy
consumption of an SoC used in embedded systems is strongly
affected by the dynamic behavior of the software. In the early
stages of SoC development, modeling with a high level of
abstraction is required for both the dynamic behavior of the
software and the behavior of the SoC. We estimate the energy
consumption by a UML model-based simulation. The proposed method
was applied to an actual embedded system in an MFP. The resulting
energy consumption estimates for the SoC are more accurate than
those of conventional methods, and the proposed method is
promising for reducing the chance of rework in SoC development.
Abstract: Information on weed distribution within a field is necessary to implement spatially variable herbicide application. Since hand labor is costly, an automated weed control system could be feasible. This paper deals with the development of an algorithm for a real-time weed recognition system based on histogram maxima with thresholding, used for weed classification. The algorithm is specifically developed to classify images into broad and narrow classes for real-time selective herbicide application. The developed system was tested on weeds in the laboratory, and the tests showed the system to be very effective in weed identification. Furthermore, the results show very reliable performance on images of weeds taken under varying field conditions. The analysis of the results shows over 95 percent classification accuracy on 140 sample images (broad and narrow), with 70 samples from each category of weeds.
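The broad/narrow decision can be illustrated with a toy NumPy sketch. This is not the paper's histogram-maxima algorithm: here a binary vegetation mask is assumed to be already available, and the class is decided by the maximum horizontal run of vegetation pixels, a stand-in leaf-width measure; the 10-pixel threshold and the synthetic masks are hypothetical:

```python
import numpy as np

def max_horizontal_run(mask):
    # Longest run of consecutive vegetation pixels in any row,
    # i.e. a crude measure of leaf width.
    best = 0
    for row in mask:
        run = 0
        for v in row:
            run = run + 1 if v else 0
            best = max(best, run)
    return best

def classify(mask, width_thresh=10):
    # Wide contiguous regions -> broad leaf; thin blades -> narrow.
    return "broad" if max_horizontal_run(mask) >= width_thresh else "narrow"

broad = np.zeros((40, 40), dtype=bool)
broad[10:30, 5:25] = True        # a 20-pixel-wide blob (broad leaf)
narrow = np.zeros((40, 40), dtype=bool)
narrow[5:35, ::6] = True         # 1-pixel-wide vertical blades (grass)
```

A field system would first segment vegetation from soil (e.g. by an excess-green index) before applying any such shape test.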
Abstract: In recent years, everything is trending toward
digitalization, and with the rapid development of Internet
technologies, digital media need to be transmitted conveniently
over the network. Attacks, misuse and unauthorized access to
information are of great concern today, which makes the protection
of documents transmitted through digital media a priority. This
urges us to devise new data hiding techniques to protect and
secure data of vital significance. In this respect, steganography
often comes to the fore as a tool for hiding information.
Steganography is a process that involves hiding a message in an
appropriate carrier, such as an image or audio file. The word is
of Greek origin and means "covered or hidden writing". The goal of
steganography is covert communication: the carrier can be sent to
a receiver without anyone except the intended receiver knowing
that the hidden information exists. A considerable amount of work
has been carried out by different researchers on steganography. In
this work, the authors propose a novel steganographic method for
hiding information within the spatial domain of a grayscale image.
The proposed approach selects the embedding pixels using a
mathematical function, finds the 8-neighborhood of each selected
pixel, and maps each bit of the secret message to a neighboring
pixel coordinate position in a specified manner. Before embedding,
a check is performed to determine whether the selected pixel or
any of its neighbors lies on the boundary of the image. This
solution is independent of the nature of the data to be hidden and
produces a stego image with minimum degradation.
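A minimal sketch of this style of embedding (not the authors' exact method, whose selection function is unspecified here): centers are chosen on a fixed grid away from the image boundary as a stand-in for the mathematical selection function, and each message byte is spread LSB-first over the 8 neighbors of one center:

```python
import numpy as np

# Fixed neighbor order, shared by embedding and extraction.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def centers(shape, step=3):
    # Stand-in for the paper's pixel-selection function: a grid
    # whose step keeps every 8-neighborhood inside the image and
    # non-overlapping (the boundary check of the paper).
    h, w = shape
    for r in range(1, h - 1, step):
        for c in range(1, w - 1, step):
            yield r, c

def embed(img, message):
    out = img.copy()
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    it = centers(out.shape)
    for k in range(0, len(bits), 8):
        r, c = next(it)
        for (dr, dc), bit in zip(OFFSETS, bits[k:k + 8]):
            # Write one bit into the LSB of each neighbor.
            out[r + dr, c + dc] = (out[r + dr, c + dc] & 0xFE) | bit
    return out

def extract(img, n_bytes):
    it = centers(img.shape)
    msg = bytearray()
    for _ in range(n_bytes):
        r, c = next(it)
        byte = 0
        for i, (dr, dc) in enumerate(OFFSETS):
            byte |= int(img[r + dr, c + dc] & 1) << i
        msg.append(byte)
    return bytes(msg)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
stego = embed(cover, b"Hi")
```

Since only LSBs change, no pixel moves by more than one gray level, which is the sense in which degradation is minimal.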
Abstract: The performance of millimeter-wave (mm-wave) multiband
orthogonal frequency division multiplexing (MB-OFDM)
ultra-wideband (UWB) signal generation using a frequency
quadrupling technique, and its transmission over fiber, is
experimentally investigated. The frequency quadrupling is achieved
by using only one Mach-Zehnder modulator (MZM) biased at the
maximum transmission (MATB) point. The frequency-quadrupled signal
obtained at the output is then sent to a second MZM, which is used
for MB-OFDM UWB signal modulation. In this work, we demonstrate
30-GHz mm-wave wireless transmission carrying three-band OFDM UWB
signals, and the error vector magnitude (EVM) is used to analyze
the transmission quality. It is found that our proposed technique
leads to an improvement of 3.5 dB in EVM at 40% of local
oscillator (LO) modulation compared with the technique using two
cascaded MZMs biased at the minimum transmission (MITB) point.
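The quadrupling principle can be illustrated with an idealized baseband model (a sketch, not the experimental setup): an MZM at the MATB point produces only even-order optical sidebands, and at modulation index m = 2.405 (the first zero of the Bessel function J0) the optical carrier is suppressed, so the dominant spectral lines sit at +/-2 f_LO and their beat at the square-law photodetector falls at 4 f_LO. All parameter values below are illustrative assumptions:

```python
import numpy as np

f_lo = 4.0                 # LO drive frequency (arbitrary units)
fs, T = 256, 1.0           # sampling rate and duration
t = np.arange(int(fs * T)) / fs
m = 2.405                  # first zero of J0: carrier suppression

# Field envelope of an MZM biased at maximum transmission (MATB):
# cos(m cos wt) contains only even harmonics of f_lo.
field = np.cos(m * np.cos(2 * np.pi * f_lo * t))

# Square-law photodetection: the +/-2 f_lo sidebands beat to give
# a dominant RF tone at 4 * f_lo.
photocurrent = field ** 2
spectrum = np.abs(np.fft.rfft(photocurrent))
peak_hz = np.argmax(spectrum[1:]) + 1    # skip the DC bin
```

With a 1 s record the FFT bins are 1 Hz apart, so the dominant non-DC line lands in bin 4 * f_lo = 16.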
Abstract: The least mean square (LMS) algorithm is one of the
best-known algorithms for mobile communication systems due to its
implementation simplicity. However, its main limitation is a
relatively slow convergence rate. In this paper, a booster based
on the concept of Markov chains is proposed to speed up the
convergence rate of LMS algorithms. The nature of Markov chains
makes it possible to exploit past information in the updating
process. Moreover, since by the central limit theorem the
transition matrix has a smaller variance than the weight itself,
the weight transition matrix converges faster than the weight
itself. Accordingly, the proposed Markov-chain based booster can
track variations in signal characteristics while accelerating the
convergence rate of LMS algorithms. Simulation results show that,
when the Markov-chain based booster is applied, the LMS algorithm
converges faster, approaches the Wiener solution more closely, and
achieves a markedly reduced mean square error.
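For reference, the plain LMS update that the proposed booster accelerates can be sketched as follows (a baseline sketch only, not the booster itself; the channel taps and step size are illustrative assumptions):

```python
import numpy as np

def lms(x, d, n_taps, mu):
    """Plain LMS: adapt w so that the filtered input tracks d."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - w @ xn                    # instantaneous error
        w += mu * e * xn                     # stochastic-gradient step
    return w

rng = np.random.default_rng(1)
h = np.array([0.8, -0.4, 0.2])       # unknown channel (assumed)
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]       # noiseless desired signal
w = lms(x, d, n_taps=3, mu=0.01)     # w converges toward h
```

The convergence speed is governed by mu and the input correlation; the paper's contribution is to accelerate exactly this phase via the Markov-chain transition statistics.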
Abstract: In this paper, the exact solution of an infinite boundary integral equation (IBIE) of the second kind with degenerate kernel is presented. Moreover, the Galerkin method with Laguerre polynomials is applied to obtain the approximate solution of the IBIE. Numerical examples are given to show the validity of the presented method.
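Laguerre-Galerkin projection can be illustrated on a hypothetical example (a sketch, not the paper's equation): for the degenerate-kernel second-kind equation u(x) = e^(-x) + integral over [0, inf) of e^(-x) e^(-t) u(t) dt, the exact solution is u(x) = 2 e^(-x), and expanding u in Laguerre polynomials with Gauss-Laguerre quadrature reproduces it:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss, lagval

N = 12                          # number of Laguerre basis functions
xq, wq = laggauss(40)           # nodes/weights for weight e^{-x}

def L(i, x):
    # i-th Laguerre polynomial L_i(x) via a unit coefficient vector.
    return lagval(x, np.eye(N)[i])

B = np.array([L(i, xq) for i in range(N)])   # basis at the nodes

f = np.exp(-xq)                 # right-hand side f(x) = e^{-x}
G = (B * wq) @ B.T              # Gram matrix <L_i, L_j>
c = B @ (wq * f)                # <L_i, e^{-x}> (x-part of the kernel)
d = B @ wq                      # integral of e^{-t} L_j(t) over [0,inf)
b = c.copy()                    # <L_i, f>, identical to c here

# Galerkin system for u = sum_j a_j L_j:  (G - c d^T) a = b.
a = np.linalg.solve(G - np.outer(c, d), b)

xs = np.array([0.5, 1.0, 2.0])
u = lagval(xs, a)               # approximate solution values
```

Because the kernel is degenerate (a product of a function of x and a function of t), the integral operator contributes only the rank-one term outer(c, d), which is what makes the exact solution recoverable.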
Abstract: In the classical buckling analysis of rectangular plates
subjected to the concurrent action of shear and uniaxial forces,
the Euler shear buckling stress is generally evaluated separately,
so that no influence of the in-plane tensile or compressive forces
on the shear buckling coefficient is taken into account.
In this paper, the buckling problem of simply supported rectangular
plates under the combined action of shear and uniaxial forces is
discussed from the beginning, in order to obtain new design
formulas for the shear buckling coefficient that take into account
the presence of uniaxial forces.
Furthermore, as the classical expression of the shear buckling
coefficient for simply supported rectangular plates is only a
"rough" approximation, the exact one being defined by a system of
intersecting curves, the convergence and accuracy of the classical
solution are also analyzed.
Finally, as the evaluation of the Euler shear buckling stress is a
very important topic for a variety of structures (e.g. ship
structures), two numerical applications are carried out in order
to highlight the role of the uniaxial stresses in plating
scantling procedures and the accuracy of the proposed formulas.
Abstract: Many agent-oriented software engineering methodologies
have been proposed for software development; however, their
application is still limited due to their lack of maturity.
Evaluating the strengths and weaknesses of these methodologies
plays an important role in improving them and in developing new,
stronger methodologies. This paper presents an evaluation
framework for agent-oriented methodologies that addresses six
major areas: concepts, notation, process, pragmatics, support for
software engineering, and marketability. The framework is then
used to evaluate the Gaia methodology, to identify its strengths
and weaknesses, and to demonstrate the framework's ability to
improve agent-oriented methodologies by detecting their weaknesses
in detail.
Abstract: This research is a comparative study of complexity, as a multidimensional concept, in the context of streetscape composition in Algeria and Japan. Eighty streetscape visual arrays were collected and then presented to 20 participants with different cultural backgrounds, to be categorized and classified according to their degree of complexity. Three analysis methods were used in this research: cluster analysis, a ranking method, and the Hayashi Quantification method (Method III). The results showed that complexity, disorder, irregularity and disorganization are often conflicting concepts in the urban context. Algerian daytime streetscapes appear balanced, ordered and regular, while Japanese daytime streetscapes appear unbalanced, regular and vivid. Variety, richness and irregularity, with some aspects of order and organization, seem to characterize Algerian night streetscapes. Japanese night streetscapes seem more related to balance, regularity, order and organization, with some aspects of confusion and ambiguity. Complexity mainly characterized Algerian avenues with green infrastructure. Accordingly, for Japanese participants, Japanese traditional night streetscapes were complex, while for foreigners, Algerian and Japanese avenue nightscapes were the most complex visual arrays.
Abstract: Extensive use of the Internet, coupled with the
marvelous growth in e-commerce and m-commerce, has created a huge
demand for information security. The Secure Socket Layer (SSL)
protocol is the most widely used security protocol on the Internet
that meets this demand. It provides protection against
eavesdropping, tampering and forgery. The cryptographic algorithms
RC4 and HMAC have been in use for achieving security services such
as confidentiality and authentication in SSL. But recent attacks
against RC4 and HMAC have raised questions about the confidence
placed in these algorithms. Hence, two novel cryptographic
algorithms, MAJE4 and MACJER-320, have been proposed as
substitutes for them. The focus of this work is to evaluate the
performance of these new algorithms and to suggest them as
dependable alternatives that satisfy the need for security
services in SSL. The performance evaluation has been carried out
by practical implementation.
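For context, the message-authentication role that MACJER-320 is proposed to fill is the one standard HMAC plays today. A minimal sketch using Python's standard library (the key and message are of course hypothetical):

```python
import hmac
import hashlib

key = b"shared-secret"              # hypothetical shared key
msg = b"GET /account HTTP/1.1"      # hypothetical record payload

# Sender computes the tag and transmits it alongside the message.
tag = hmac.new(key, msg, hashlib.sha256).digest()

def verify(key, msg, tag):
    # Receiver recomputes the tag; the constant-time comparison
    # resists timing attacks.
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Any modification of the message (or use of the wrong key) changes the recomputed tag, so verification fails.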
Abstract: In this paper, we propose a single-sample-path
algorithm with state aggregation to optimize the average reward of
singularly perturbed Markov reward processes (SPMRPs) with large
state spaces. It is assumed that such a reward process depends on
a set of parameters. Unlike general Markov chains, SPMRPs have
their own hierarchical structure, and our algorithm exploits this
special structure to reduce the computational load of performance
optimization. Moreover, our method can be applied online because
it evolves along the simulated sample path. Compared with the
original algorithms for general Markov reward processes, a new
gradient formula for the average reward performance metric of
SPMRPs is introduced and proved in the Appendix. Based on these
gradients, the schedule of the iteration algorithm, built on a
single sample path, is presented. A special case in which the
parameters only affect the perturbation matrices is then analyzed,
and a precise comparison is made between our algorithm and
previous ones aimed at general Markov reward processes; when
applied to SPMRPs, our method converges faster in these cases.
Furthermore, to illustrate the practical value of SPMRPs, a simple
multiprogramming example in computer systems is simulated, and the
physical meaning of SPMRPs in networks of queues is clarified with
respect to this practical model.
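The quantity being optimized above, the average reward of a Markov reward process, can be sketched for a small chain (a generic illustration, not the SPMRP algorithm; the two-state chain and rewards below are hypothetical):

```python
import numpy as np

def average_reward(P, r):
    """eta = pi . r, where pi is the stationary distribution of P."""
    n = P.shape[0]
    # Solve pi P = pi together with sum(pi) = 1 (least squares on
    # the overdetermined but consistent system).
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi @ r

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # hypothetical transition matrix
r = np.array([1.0, 0.0])            # reward earned in each state
eta = average_reward(P, r)          # long-run average reward
```

For this chain the stationary distribution is (2/3, 1/3), so the average reward is 2/3; a gradient method such as the paper's would adjust parameters of P to increase this quantity.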
Abstract: The importance of ensuring safe meat handling and
processing practices has been demonstrated in global reports on food
safety scares and related illness and deaths. This necessitated stricter
meat safety control strategies. Today, many countries have regulated
towards preventative and systematic control over safe meat
processing at abattoirs utilizing the Hazard Analysis Critical Control
Point (HACCP) principles. HACCP systems have been reported as
effective in managing food safety risks, if correctly implemented.
South Africa has regulated the Hygiene Management System (HMS)
based on HACCP principles applicable to abattoirs. Regulators utilise
the Hygiene Assessment System (HAS) to audit compliance at
abattoirs. These systems were benchmarked from those of the United
Kingdom (UK). Little research has been done on them since their
inception in 2004. This paper presents a review of the two
systems, their implementation, and a comparison with HACCP.
Recommendations are made for future research to demonstrate the
utility of the HMS and HAS in assuring safe meat for consumers.
Abstract: This paper presents the biotechnology used to obtain
collagen-based gels from the skins of shark (Squalus acanthias)
and brill, marine fish living in the Black Sea. Due to the
structure of its micro-fibres, collagen can be considered a
nanomaterial; in order to use collagen-based matrices as
biomaterials, rheological studies must first be performed to
determine whether they are stable. For the triple-helix structure
to remain stable within these gels at room or human body
temperature, the gels must be stabilized by reticulation
(cross-linking).