Abstract: Considering payload, reliability, security, and operational lifetime as the major constraints in image transmission, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest transmitting halftoned images (to meet the payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, Turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks, and the Turbo code structure, apart from providing forward error correction, can also be utilized for encryption. We first consider the halftoned image, and then present the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process. The small modifications required at the turbo decoder to extract the embedded data are presented next. The implementation complexity and the degradation of the bit error rate (BER) in the Turbo-based stego system are analyzed. Using entropy-based cryptanalytic techniques, we show that the strength of our Turbo-based stego system approaches that of the one-time pad (OTP).
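As background for the payload argument: halftoning reduces each pixel from 8 bits to 1 bit before transmission. A minimal sketch of one standard halftoning method, Floyd–Steinberg error diffusion, is given below; this is illustrative only, since the abstract does not specify which halftoning algorithm the authors use.

```python
import numpy as np

def floyd_steinberg_halftone(gray):
    """Halftone a grayscale image (values in [0, 255]) to a binary image
    by Floyd-Steinberg error diffusion. Illustrative sketch only."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = 1 if new else 0
            err = old - new
            # diffuse the quantization error to unprocessed neighbors
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# the halftoned result needs 1 bit per pixel instead of 8
demo = floyd_steinberg_halftone(np.full((4, 4), 128))
```

The error diffusion preserves local average intensity, which is why a 1-bit halftone remains visually recognizable despite the eightfold payload reduction.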
Abstract: Breastfeeding has been receiving much attention of late. Prolonged sitting while breastfeeding often results in back pain in mothers. This paper reports the findings of a study on the effect of several factors, especially lumbar support, on the back pain of breastfeeding mothers. The results showed that the use of lumbar support can significantly reduce the back pain of breastfeeding mothers. Back pain was found to increase with breastfeeding time, and the rate of increase was lower when lumbar supports were used. When the lumbar support thickness was increased gradually from zero (no support) to 11 cm, the degree of low back pain decreased: rapidly at first, then slowly, leveling off when the thickness reached 9 cm. Younger mothers were less prone to back pain than older mothers. The implications of the findings are discussed.
Abstract: This paper presents an integer frequency offset (IFO) estimation scheme for the 3GPP Long Term Evolution (LTE) downlink system. First, the conventional joint detection method for IFO and sector cell identity (CID) information is introduced. Second, an IFO estimation scheme that does not require explicit sector CID information is proposed, which reduces the time delay in comparison with the conventional joint method. The proposed method is also computationally efficient and achieves nearly the same performance as the conventional method over the Pedestrian and Vehicular channel models.
Abstract: In this paper, a new learning approach for network intrusion detection using a naïve Bayesian classifier and the ID3 algorithm is presented, which identifies effective attributes from the training dataset, calculates the conditional probabilities for the best attribute values, and then correctly classifies the examples of the training and testing datasets. Most current intrusion detection datasets are dynamic, complex, and contain a large number of attributes. Some of the attributes may be redundant or contribute little to detection. It has been shown that careful attribute selection is important in designing a real-world intrusion detection system (IDS). The purpose of this study is to identify effective attributes from the training dataset to build a classifier for network intrusion detection using data mining algorithms. The experimental results on the KDD99 benchmark intrusion detection dataset demonstrate that this new approach achieves high classification rates and reduces false positives using limited computational resources.
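The conditional-probability step described above can be sketched on toy data. This is a generic naïve Bayes sketch, not the paper's NB-plus-ID3 hybrid; the connection attributes and the Laplace-style smoothing denominator are invented for illustration.

```python
from collections import Counter, defaultdict

def train_nb(records, labels):
    """Estimate class priors and per-attribute value counts per class."""
    priors = Counter(labels)
    cond = defaultdict(Counter)  # (attr_index, class) -> value counts
    for rec, lab in zip(records, labels):
        for i, v in enumerate(rec):
            cond[(i, lab)][v] += 1
    return priors, cond, len(labels)

def classify_nb(rec, priors, cond, n):
    """Pick the class maximizing prior * product of smoothed conditionals."""
    best, best_p = None, -1.0
    for lab, cnt in priors.items():
        p = cnt / n
        for i, v in enumerate(rec):
            counts = cond[(i, lab)]
            # add-one smoothing; "+1" extra slot for unseen values (a choice,
            # not the paper's exact estimator)
            p *= (counts[v] + 1) / (sum(counts.values()) + len(counts) + 1)
        if p > best_p:
            best, best_p = lab, p
    return best

# toy "connection records": (protocol, flag) -- invented attributes
X = [("tcp", "SYN"), ("tcp", "SYN"), ("udp", "OK"), ("tcp", "OK")]
y = ["attack", "attack", "normal", "normal"]
priors, cond, n = train_nb(X, y)
pred = classify_nb(("tcp", "SYN"), priors, cond, n)  # -> "attack"
```

In the paper's setting, ID3-style attribute selection would first prune the attribute set before these conditional probabilities are estimated.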
Abstract: In a previous work, we presented the numerical solution of the two-dimensional second-order telegraph partial differential equation discretized by the centred and rotated five-point finite difference discretizations, namely the explicit group (EG) and explicit decoupled group (EDG) iterative methods, respectively. In this paper, we utilize a domain decomposition algorithm on these group schemes to divide the tasks involved in solving the same equation. The objective of this study is to describe the development of the parallel group iterative schemes under the OpenMP programming environment as a way to reduce the computational cost of the solution process using multicore technologies. A detailed performance analysis of the parallel implementations of the point and group iterative schemes is reported and discussed.
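For concreteness, a point-wise explicit five-point scheme for a 2D telegraph equation of the form u_tt + 2αu_t = c²(u_xx + u_yy) can be sketched as below. This is the standard point update that the EG/EDG group methods reorganize into 2x2 groups, not the group schemes or their OpenMP parallelization, and the damped-wave form of the equation is an assumption.

```python
import numpy as np

def telegraph_step(u, u_prev, alpha, c, dt, dx):
    """One explicit time step for u_tt + 2*alpha*u_t = c^2*(u_xx + u_yy)
    on a square grid with homogeneous Dirichlet boundaries. Standard
    centred point-wise scheme, for illustration only."""
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:]
                       + u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1]) / dx**2
    # (u+ - 2u + u-)/dt^2 + alpha*(u+ - u-)/dt = c^2*lap, solved for u+
    a, b = 1.0 / dt**2, alpha / dt
    u_next = (c**2 * lap + (2.0 * u - u_prev) * a + b * u_prev) / (a + b)
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0
    return u_next

# damp an initial bump for a few steps (c*dt/dx = 0.2 satisfies the CFL bound)
n, dx, dt = 21, 0.05, 0.01
u = np.zeros((n, n)); u[n // 2, n // 2] = 1.0
u_prev = u.copy()                      # zero initial velocity
for _ in range(10):
    u, u_prev = telegraph_step(u, u_prev, alpha=1.0, c=1.0, dt=dt, dx=dx), u
```

The group methods gain their advantage by evaluating such updates for small groups of points together, which is also what makes the domain decomposition across OpenMP threads natural.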
Abstract: The identification and elimination of bad measurements is one of the basic functions of a robust state estimator, as bad data corrupt the results of state estimation based on the popular weighted least squares method. However, this is a difficult problem to handle, especially when dealing with multiple errors of the interacting conforming type. In this paper, a self-adaptive genetic-based algorithm is proposed. The algorithm utilizes the results of the classical linearized normal residuals approach to tune the genetic operators; thus, instead of performing a randomized search throughout the whole search space, the search is directed, and the optimum solution is obtained at very early stages (within a maximum of 5 generations). The algorithm also utilizes accumulating databases of already computed cases to reduce the computational burden to a minimum. Tests are conducted with reference to the standard IEEE test systems, and the results are very promising.
Abstract: This study investigated the antidiabetic and antioxidant potential of Pseuduvaria macrophylla bark extract in streptozotocin–nicotinamide-induced type 2 diabetic rats. LC-MS-QTOF and NMR experiments were performed to determine the chemical composition of the methanolic bark extract. For the in vivo experiments, the STZ-induced diabetic rats (60 mg/kg b.w. STZ, 15 min after 120 mg/kg nicotinamide, i.p.) were treated with the methanolic extract of Pseuduvaria macrophylla (200 and 400 mg/kg b.w.) or with glibenclamide (2.5 mg/kg) as the positive control. Biochemical parameters were assayed in the blood samples of all groups of rats. The pro-inflammatory cytokines, antioxidant status, and plasma transforming growth factor beta-1 (TGF-β1) were evaluated. The pancreas was examined histologically, and its insulin expression level was observed by immunohistochemistry. In addition, the expression of the glucose transporters GLUT-1, GLUT-2, and GLUT-4 was assessed in pancreatic tissue by western blot analysis. The outcomes of the study showed that the methanolic bark extract of Pseuduvaria macrophylla normalized the elevated blood glucose levels and improved serum insulin and C-peptide levels, with a significant increase in antioxidant enzymes and reduced glutathione (GSH) and a decrease in the level of lipid peroxidation (LPO). Additionally, the extract markedly decreased the levels of serum pro-inflammatory cytokines and TGF-β1. Histopathological analysis demonstrated that Pseuduvaria macrophylla has the potential to protect the pancreas of diabetic rats against peroxidative damage by attenuating oxidative stress and hyperglycaemia. Furthermore, the expression of insulin, GLUT-1, GLUT-2, and GLUT-4 in pancreatic cells was enhanced. The findings of this study support the antidiabetic claims for Pseuduvaria macrophylla bark.
Abstract: In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes a low-complexity hardware architecture of the Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is used for the implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional DCT blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is realized in MATLAB, and the VLSI design of the architecture is implemented in Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180 nm standard cells. Simulation is done using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. A detailed analysis of power and area was done using RTL Compiler from CADENCE. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
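The row–column decomposition mentioned above (a 2D DCT built from two 1D DCT passes with a transposition in between) can be sketched in software as follows. This mirrors the dataflow of the hardware, not its Verilog implementation, and uses the orthonormal DCT-II convention.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix C, so that y = C @ x is the 1D DCT of x."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] *= np.sqrt(1.0 / n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C

def dct2_rowcol(block):
    """2D DCT of an 8x8 block via the row-column method: a 1D DCT over the
    rows, then (conceptually through the transposition memory) a 1D DCT
    over the columns."""
    C = dct_matrix(8)
    pass1 = block @ C.T        # 1D DCT applied to each row
    pass2 = C @ pass1          # 1D DCT applied to each column
    return pass2

def idct2_rowcol(coeffs):
    """IDCT: the basis is orthonormal, so the inverse uses C transposed."""
    C = dct_matrix(8)
    return C.T @ coeffs @ C
```

Because the same 1D DCT block is reused for both passes, only one set of multipliers is needed in hardware, which is the source of the area and power savings the abstract describes.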
Abstract: The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and for increasing classification efficiency, it does not necessarily reduce the number of features that must be measured, since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper, a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach, each feature value is first normalized by a linear equation and then scaled by the associated weight prior to training, testing, and classification. A k-nearest-neighbor (knn) classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. With this approach, the number of features used in classification can be significantly reduced.
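The fitness evaluation described above (scale normalized features by a weight vector, then score the weights with a knn classifier) can be sketched as follows. A random search stands in for the genetic algorithm's optimization loop, and the two-feature toy data are invented for illustration.

```python
import numpy as np

def knn_accuracy(weights, X, y, k=3):
    """Fitness of a weight vector: leave-one-out accuracy of a knn
    classifier on features scaled by the weights."""
    Xw = X * weights              # scale each normalized feature by its weight
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(Xw - Xw[i], axis=1)
        d[i] = np.inf             # leave the query point out
        nn = np.argsort(d)[:k]
        votes = np.bincount(y[nn], minlength=y.max() + 1)
        correct += votes.argmax() == y[i]
    return correct / len(X)

rng = np.random.default_rng(0)
# two classes separated only in feature 0; feature 1 is pure noise
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal([3, 0], 0.3, (20, 2))])
X[:, 1] = rng.normal(0, 5, 40)
X = (X - X.mean(0)) / X.std(0)    # the normalization step from the abstract
y = np.array([0] * 20 + [1] * 20)

# random search standing in for the genetic algorithm's optimization loop
best_w, best_fit = None, -1.0
for _ in range(50):
    w = rng.random(2)
    f = knn_accuracy(w, X, y)
    if f > best_fit:
        best_w, best_fit = w, f
```

A weight driven toward zero effectively deselects its feature, which is how this formulation unifies feature selection and extraction in a single optimization.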
Abstract: A new approach for the automated diagnosis of electroencephalographic changes, based on the consideration that electroencephalogram (EEG) signals are chaotic, is presented. This consideration was tested successfully using nonlinear dynamics tools, such as the computation of Lyapunov exponents. This paper presents the use of statistics over the set of Lyapunov exponents in order to reduce the dimensionality of the extracted feature vectors. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as the basis for detecting electroencephalographic changes. Three types of EEG signals (signals recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures) were classified. The selected Lyapunov exponents of the EEG signals were used as inputs to the MLPNN trained with the Levenberg–Marquardt algorithm. The classification results confirmed that the proposed MLPNN has potential for detecting electroencephalographic changes.
Abstract: Productivity has been one of the major concerns given the increasingly high cost of software development. Choosing a development language with high productivity is one approach to reducing development costs. Working on a large database of 4106 projects, we identified the factors significant to productivity. After removing the effects of the other factors on productivity, we compared the productivity differences of ten general-purpose programming languages. The study supports the view that fourth-generation languages are more productive than third-generation languages.
Abstract: The demand for higher-performance graphics continues to grow because of the incessant desire for realism. At the same time, rapid advances in fabrication technology have enabled us to build several processor cores on a single die. Hence, it is important to develop single-chip parallel architectures for such data-intensive applications. In this paper, we propose an efficient processing-in-memory (PIM) architecture tailored for computer graphics, which requires a large number of memory accesses. We then address the two tasks necessary for maximally exploiting the parallelism provided by the architecture, namely the partitioning and placement of graphics data, which affect load balance and communication costs, respectively. Under the constraint of uniform partitioning, we develop approaches for optimal partitioning and placement which significantly reduce the search space. We also present heuristics for identifying near-optimal placements, since the search space for placement is impractically large despite our optimization. We then demonstrate the effectiveness of our partitioning and placement approaches via analysis of example scenes; simulation results show considerable search space reductions, and our placement heuristics perform close to optimal: the average ratio of communication overheads between our heuristics and the optimum was 1.05. Our uniform partitioning showed average load-balance ratios of 1.47 for geometry processing and 1.44 for rasterization, which is reasonable.
Abstract: Fire is a major threat to public and environmental safety. Loss of life during fires is mainly attributable to the dense smoke and toxic gases produced during combustion, which hinder the escape of occupants and the rescue efforts of firefighters. The smoke suppression effect of several transition metal oxides on an epoxy resin treated with an intumescent flame retardant and a titanate coupling agent (the EP/IFR/Titanate system) has been investigated. The results showed that manganese dioxide strongly reduces the smoke density rate (SDR) of the EP/IFR/Titanate system but has little effect on its maximum smoke density (MSD). Copper oxide can decrease both the MSD and the SDR of the EP/IFR/Titanate system substantially: they are reduced by 20.3% and 39.1%, respectively, when 2% copper oxide is introduced.
Abstract: Natural ventilation is an important means of improving indoor thermal comfort and reducing energy consumption. A solar chimney is a device that enhances natural draft by using solar radiation to heat the air inside the chimney, thereby converting thermal energy into kinetic energy. The present study considered parameters such as chimney width and solar intensity, which were believed to have a significant effect on space ventilation. Fluent CFD software was used to predict buoyant air flow and flow rates in the cavities. The results were compared with published experimental and theoretical data from the literature, and there was an acceptable match in the trend of room air changes per hour (ACH). Further, it was noticed that the solar intensity has the more significant effect on ACH.
Abstract: The rapidly increasing costs of power line extensions and fossil fuel, combined with the desire to reduce carbon dioxide emissions, have pushed the development of hybrid power systems suited for remote locations, the goal being autonomous local power systems. The paper presents a suggested solution for a "high penetration" hybrid power system, determined by the location of the settlement and its "zero policy" on carbon dioxide emissions. The paper focuses on the technical solution and the power flow management algorithm of the system, taking into consideration the local conditions of development.
Abstract: Experiments have been performed to investigate the effects of radiation on mixed convection heat transfer for thermally developing airflow in vertical ducts with two differentially heated isothermal walls and two adiabatic walls. The investigation covers Reynolds numbers from Re = 800 to Re = 2900, heat fluxes from 256 W/m² to 863 W/m², hot wall temperatures from 27 °C to 100 °C, aspect ratios of 1 and 0.5, and internal wall emissivities of 0.05 and 0.85. In the present study, flow visualization was conducted to observe the flow patterns. The surface temperature along the walls was studied to investigate the local Nusselt number variation within the duct. The results show that the flow condition and radiation significantly affect the total Nusselt number and tend to reduce the buoyancy effect.
Abstract: We analyze the effectiveness of different pseudo-noise (PN) and orthogonal sequences for encrypting speech signals in terms of perceptual intelligibility. A speech signal can be viewed as a sequence of correlated samples, and each sample as a sequence of bits. The residual intelligibility of the speech signal can be reduced by removing the correlation among the speech samples. PN sequences have random-like properties that help in reducing this correlation. The mean square aperiodic auto-correlation (MSAAC) and mean square aperiodic cross-correlation (MSACC) measures are used to test the randomness of the PN sequences. The results of the investigation show that, among the many PN sequences considered, large Kasami sequences are the most effective for this purpose.
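The MSAAC measure can be made concrete: average the squared aperiodic autocorrelations over all nonzero shifts and over the sequence set. The normalization below (dividing each correlation by the sequence length) is one common choice and an assumption here, as is the small m-sequence used in place of the paper's large Kasami set.

```python
import numpy as np

def msaac(seqs):
    """Mean square aperiodic auto-correlation (MSAAC) of a set of +/-1
    sequences. Normalization assumption: each aperiodic correlation is
    divided by the sequence length before squaring."""
    seqs = np.asarray(seqs, dtype=float)
    m, n = seqs.shape
    total = 0.0
    for a in seqs:
        for shift in range(1, n):              # all nonzero shifts
            c = np.dot(a[:n - shift], a[shift:]) / n
            total += 2.0 * c * c               # +shift and -shift contribute equally
    return total / m

# one period of a length-7 m-sequence, mapped 0/1 -> +1/-1
mseq = 1 - 2 * np.array([1, 1, 1, 0, 0, 1, 0])
value = msaac([mseq])
```

A sequence with low MSAAC behaves more like white noise, which is exactly the property that suppresses residual intelligibility after sample scrambling; a constant sequence, by contrast, scores very poorly on this measure.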
Abstract: The continuously growing needs of Internet applications that transmit massive amounts of data have led to the emergence of high-speed networks. Data transfer must take place without congestion, and hence feedback parameters must be sent from the receiver to the sender so as to restrict the sending rate. Although TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of how much data can be sent, and it halves the window size at the time of congestion, resulting in decreased throughput, low utilization of the bandwidth, and maximum delay. In this paper, the XCP protocol is used: feedback parameters are calculated based on the arrival rate, service rate, traffic rate, and queue size, and the receiver informs the sender of the achievable throughput, the amount of data that can be sent, and the window size adjustment. This avoids drastic decreases in the window size and allows a better increase in the sending rate, so that data flow continuously without congestion. As a result, there is a maximum increase in throughput, high utilization of the bandwidth, and minimum delay. The results of the proposed work are presented as graphs of throughput, delay, and window size. Thus, in this paper, the XCP protocol is illustrated and its various parameters are thoroughly analyzed and presented.
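For reference, the router-side control law from the original XCP proposal computes an aggregate feedback from the spare bandwidth and the persistent queue. The sketch below uses that published law with its standard stability constants; the exact way the feedback parameters named in this paper (arrival rate, service rate, traffic rate, queue size) enter the computation may differ.

```python
def xcp_aggregate_feedback(capacity_bps, input_rate_bps, queue_bytes, avg_rtt_s,
                           alpha=0.4, beta=0.226):
    """Per-control-interval aggregate feedback of an XCP router, following
    the original XCP control law phi = alpha*d*S - beta*Q, where S is the
    spare bandwidth, d the average RTT, and Q the persistent queue.
    Simplified sketch; alpha and beta are the stability constants from the
    original proposal."""
    spare_bytes_per_s = (capacity_bps - input_rate_bps) / 8.0
    return alpha * avg_rtt_s * spare_bytes_per_s - beta * queue_bytes

# under-utilized link, empty queue -> positive feedback (senders speed up)
up = xcp_aggregate_feedback(10e6, 5e6, 0, 0.1)
# fully utilized link with a standing queue -> negative feedback (senders slow down)
down = xcp_aggregate_feedback(10e6, 10e6, 50_000, 0.1)
```

Because the feedback is computed explicitly rather than inferred from loss, the sender's window can grow or shrink gradually instead of being halved on congestion, which is the behavior the abstract contrasts with TCP.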
Abstract: A linear feedback shift register (LFSR) is proposed that aims to reduce power consumption from within. It reduces the power consumption during the testing of a Circuit Under Test (CUT) in two stages. In the first stage, Control Logic (CL) deactivates the clocks of the switching units of the register for the period in which their output would be the same as the previous one, thus avoiding unnecessary switching of the flip-flops. In the second stage, the LFSR reorders the test vectors by interchanging each bit with its nearest neighbor bit. This keeps the fault coverage of the vectors unchanged but reduces the Total Hamming Distance (THD), thereby reducing the power consumed during the shifting operation.
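The second stage can be illustrated numerically: swapping a bit with its adjacent neighbor inside a vector can lower the Hamming distance to the preceding vector, and hence the THD of the whole sequence. The greedy software sketch below only demonstrates the THD effect; the paper's claim that fault coverage is preserved under such swaps is taken as given, and the toy vectors are invented.

```python
def hamming(a, b):
    """Number of bit positions in which two vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def total_hamming_distance(vectors):
    """THD: sum of Hamming distances between consecutive test vectors,
    a proxy for switching activity while shifting them into the CUT."""
    return sum(hamming(vectors[i], vectors[i + 1])
               for i in range(len(vectors) - 1))

def neighbor_swap_reorder(vectors):
    """Greedy sketch of the second stage: swap a bit with its adjacent
    neighbor whenever that lowers the Hamming distance to the previous
    vector. Illustrative; not the paper's exact hardware."""
    out = [list(vectors[0])]
    for vec in vectors[1:]:
        best = list(vec)
        for i in range(len(vec) - 1):
            cand = list(vec)
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if hamming(cand, out[-1]) < hamming(best, out[-1]):
                best = cand
        out.append(best)
    return out

tests = [[0, 0, 0, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
before = total_hamming_distance(tests)                       # 6
after = total_hamming_distance(neighbor_swap_reorder(tests)) # 4
```

Fewer bit transitions between consecutive vectors means fewer flip-flop toggles per scan shift, which is where the power saving comes from.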
Abstract: Steam cracking reactions are always accompanied by the formation of coke, which deposits on the walls of the tubular reactors. This investigation attempted to control catalytic coking by applying aluminum, zinc, and ceramic-like aluminum–magnesium coatings using thermal spray and pack cementation methods. The rate of coke formation during the steam cracking of naphtha has been investigated both for uncoated stainless steels (of different alloys) and for metal coatings produced by thermal spray and pack cementation with powders of aluminum, aluminum–magnesium, zinc, silicon, nickel, and chromium. The results of the study show that passivating the surface of SS321 with a coating of aluminum or aluminum–magnesium can significantly reduce the rate of coke deposition during naphtha pyrolysis. SEM and EDAX techniques (Philips XL series) were used to examine the coke deposits formed by the metal–hydrocarbon reactions. Our objective was to separate the different stages of coke formation by identifying their characteristic morphologies.