Abstract: Rough set theory is a very effective tool for dealing with granularity and vagueness in information systems. Covering-based rough set theory is an extension of classical rough set theory. In this paper, we first characterize the reducible element and the minimal description in covering-based rough sets through down-sets. We then establish lattices and topological spaces in covering-based rough sets through down-sets and up-sets. In this way, covering-based rough sets can be investigated from both algebraic and topological points of view.
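A minimal sketch of two of the covering-based notions the abstract characterizes, the reducible element and the minimal description, on a hypothetical four-element universe (the example cover is invented for illustration; the lattice and topology constructions via down-sets are not reproduced here):

```python
from itertools import chain

U = {1, 2, 3, 4}
# A covering of U: every element lies in some block, but blocks may
# overlap, unlike a partition (hypothetical example).
C = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4}), frozenset({1, 2, 3})]

def minimal_description(x, cover):
    """Blocks containing x with no proper sub-block in the cover also containing x."""
    containing = [K for K in cover if x in K]
    return [K for K in containing if not any(K2 < K for K2 in containing)]

def reducible(K, cover):
    """K is reducible if it is a union of other blocks of the cover."""
    rest = [K2 for K2 in cover if K2 != K and K2 <= K]
    return set(chain.from_iterable(rest)) == set(K)

def lower_approx(X, cover):
    """Covering lower approximation: union of blocks entirely inside X."""
    return set(chain.from_iterable(K for K in cover if K <= X))
```

Here {1, 2, 3} is reducible (it is the union of {1, 2} and {2, 3}), and the minimal description of 2 consists of exactly those two blocks.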
Abstract: For about two decades scientists have been developing techniques for enhancing the quality of medical images using the Fourier transform, the Discrete Wavelet Transform (DWT), PDE models, etc. In this work, a Gabor wavelet on a hexagonally sampled grid of the image is proposed. The method has optimal approximation-theoretic performance, yielding good image quality, and its computational cost is considerably lower than that of similar processing in the rectangular domain. As X-ray images contain light-scattered pixels, instead of a unique sigma, a sigma parameter in the range 0.5 to 3 is found to satisfy most image interpolation requirements in terms of higher Peak Signal-to-Noise Ratio (PSNR), lower Mean Squared Error (MSE) and better image quality when a windowing technique is adopted.
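The two quality metrics the abstract reports, MSE and PSNR, can be sketched as follows (images are treated as flat pixel lists for brevity; a real pipeline would use arrays):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-size images (flat pixel lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the reference."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * math.log10(peak ** 2 / m)
```

A higher PSNR and lower MSE against a reference image is the sense in which the sigma range 0.5 to 3 is judged in the abstract.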
Abstract: A fast adaptive Tomlinson-Harashima (T-H) precoder structure is presented for indoor wireless communications, where the channel may vary due to rotation and small movements of the mobile terminal. A frequency-selective slow-fading channel that is time-invariant over a frame is assumed. In this adaptive T-H precoder, the feedback coefficients are updated at the end of every uplink frame using a system identification technique for channel estimation, in contrast with the conventional T-H precoding concept, where the channel is estimated at the start of the uplink frame via the Wiener solution. The conventional T-H precoder assumes the channel is time-invariant over both the uplink and downlink frames. By assuming the channel is time-invariant over only one frame instead of two, the proposed adaptive T-H precoder yields better performance than the conventional one when the channel varies in the uplink after the training sequence is received.
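The core T-H precoding operation the abstract builds on can be sketched as follows: the transmitter subtracts the post-cursor intersymbol interference predicted by the feedback taps and folds the result back into a bounded interval with a modulo operation (the tap values and modulus below are illustrative, not from the paper):

```python
def th_precode(data, taps, modulus=8.0):
    """Tomlinson-Harashima precoding sketch: subtract the ISI predicted by the
    feedback taps, then fold the result into [-modulus/2, modulus/2) so the
    transmit power stays bounded."""
    out = []
    for n, d in enumerate(data):
        isi = sum(b * out[n - 1 - k] for k, b in enumerate(taps) if n - 1 - k >= 0)
        v = d - isi
        # symmetric modulo reduction
        v = (v + modulus / 2) % modulus - modulus / 2
        out.append(v)
    return out
```

After the channel reapplies the ISI, the same modulo operation at the receiver recovers the original symbols, which is what makes updating the taps (the paper's per-uplink-frame adaptation) sufficient to track the channel.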
Abstract: A multilayer self-organizing neural network (MLSONN) architecture for binary object extraction, guided by a beta activation function and characterized by backpropagation of errors estimated from the linear indices of fuzziness of the network output states, is discussed. Since the MLSONN architecture is designed to operate in a single-point fixed/uniform thresholding scenario, it does not take into account the heterogeneity of image information in the extraction process. The performance of the MLSONN architecture with representative values of the threshold parameters of the beta activation function is also studied. A three-layer bidirectional self-organizing neural network (BDSONN) architecture comprising fully connected neurons, intended for the extraction of objects from a noisy background and capable of incorporating the underlying image context heterogeneity through variable and adaptive thresholding, is proposed in this article. The input layer of the network architecture represents the fuzzy membership information of the image scene to be extracted. The second (intermediate) layer and the final (output) layer of the network architecture carry out the self-supervised object extraction task by bidirectional propagation of the network states. Each layer except the output layer is connected to the next layer following a neighborhood-based topology. The output layer neurons are, in turn, connected to the intermediate layer following a similar topology, thus forming a counter-propagating architecture with the intermediate layer. The novelty of the proposed architecture is that the assignment and updating of the inter-layer connection weights are done using the relative fuzzy membership values at the constituent neurons in the different network layers. Another interesting feature of the network is that the processing capabilities of the intermediate- and output-layer neurons are guided by a beta activation function that uses image-context-sensitive adaptive thresholding derived from the fuzzy cardinality estimates of the different network neighborhood fuzzy subsets, rather than fixed, single-point thresholding. An application of the proposed architecture to object extraction is demonstrated using a synthetic and a real-life image. The extraction efficiency of the proposed network architecture is evaluated by a proposed system transfer index characteristic of the network.
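The error measure the MLSONN backpropagates, the linear index of fuzziness, has a standard closed form that can be sketched directly (this is the textbook definition; the network's beta activation and transfer index are not reproduced here):

```python
def linear_index_of_fuzziness(memberships):
    """Linear index of fuzziness of a fuzzy set: (2/n) * sum(min(mu, 1 - mu)).
    It is 0 for a crisp set (all memberships 0 or 1) and maximal (1) when
    every membership equals 0.5, i.e. when the output states are most ambiguous."""
    n = len(memberships)
    return 2.0 / n * sum(min(m, 1.0 - m) for m in memberships)
```

Driving this index toward zero is what pushes the network output states toward a crisp object/background decision.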
Abstract: Multimedia security is an incredibly significant area of concern. The paper discusses a robust image watermarking scheme that can withstand geometric attacks. The source image is initially moment normalized in order to make it withstand geometric attacks. The moment normalized image is wavelet transformed. The first-level wavelet transformed image is segmented into blocks of size 8x8, and the product of the mean and standard deviation of each block is computed. The second-level wavelet transformed image is likewise divided into 8x8 blocks and the product of block mean and standard deviation is computed. The difference between the products at the two levels forms the watermark. The watermark is inserted by modulating the coefficients of the mid frequencies. The modulated image is inverse wavelet transformed and inverse moment normalized to generate the watermarked image, which is then ready for transmission. The proposed scheme can be used to validate identification cards and financial instruments. The performance of the scheme has been evaluated using a set of parameters, and experimental results show its effectiveness.
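The block statistic at the heart of the scheme, the per-block product of mean and standard deviation and the cross-level difference, can be sketched as follows (square images as lists of rows with side divisible by the block size; the wavelet transform, moment normalization and coefficient modulation steps are omitted):

```python
import math

def block_stats(image, block=8):
    """Product of mean and (population) standard deviation for each
    block x block tile of a square image given as a list of rows."""
    n = len(image)
    products = []
    for r in range(0, n, block):
        for c in range(0, n, block):
            vals = [image[i][j] for i in range(r, r + block)
                                 for j in range(c, c + block)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            products.append(mean * math.sqrt(var))
    return products

def watermark_values(level1, level2, block=8):
    """Difference of the per-block products between the two wavelet levels;
    these differences form the watermark sequence described in the abstract."""
    return [a - b for a, b in zip(block_stats(level1, block),
                                  block_stats(level2, block))]
```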
Abstract: The goal of steganography is to avoid drawing suspicion to the transmission of a hidden message. If suspicion is raised, steganography may fail. The success of steganography depends on the secrecy of the action. If steganography is detected, the system will fail, but data security then depends on the robustness of the applied algorithm. In this paper, we propose a novel plausible deniability scheme in steganography that uses a diversionary message encrypted with a DES-based algorithm. The secret message is then compressed and encrypted with the receiver's public key along with the stego key, and both messages are embedded in a carrier using an embedding algorithm. It is demonstrated how this method supports plausible deniability and is robust against steganalysis.
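The embedding step can be illustrated with the simplest carrier technique, least-significant-bit substitution (a generic illustration; the paper's actual embedding algorithm and the DES/public-key encryption of the two messages are not reproduced):

```python
def embed_lsb(pixels, bits):
    """Hide a bit string in the least-significant bits of the carrier pixels.
    In the paper's scheme both an encrypted diversionary message and the
    encrypted secret message would be embedded this way."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)
    return out

def extract_lsb(pixels, n):
    """Read back the first n hidden bits."""
    return ''.join(str(p & 1) for p in pixels[:n])
```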
Abstract: An accurate optimal design of laminated composite structures may present considerable difficulties due to the complexity and multi-modality of the functional design space. The Big Bang – Big Crunch (BB-BC) optimization method is a relatively new technique that has already proved to be a valuable tool for structural optimization. In the present study, the exceptional efficiency of the method is demonstrated by an example of the lay-up optimization of multilayered anisotropic cylinders based on a three-dimensional elasticity solution. It is shown that, due to its simplicity and speed, the BB-BC method is much more efficient for this class of problems than genetic algorithms.
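A minimal one-dimensional sketch of the BB-BC idea, assuming a nonnegative objective to be minimized (population size, shrink schedule and weighting are illustrative choices, not the paper's settings for the lay-up problem):

```python
import random

def bbbc_minimize(f, lo, hi, pop=30, iters=60, seed=1):
    """Big Bang - Big Crunch sketch: scatter candidates around the current
    centre (Big Bang), then contract to a fitness-weighted centre of mass
    (Big Crunch), shrinking the spread each iteration."""
    rng = random.Random(seed)
    centre = (lo + hi) / 2
    best_x, best_f = centre, f(centre)
    for k in range(1, iters + 1):
        # Big Bang: Gaussian scatter with spread decreasing as 1/k
        xs = [min(hi, max(lo, centre + rng.gauss(0, (hi - lo) / (2 * k))))
              for _ in range(pop)]
        fs = [f(x) for x in xs]
        # Big Crunch: centre of mass weighted by inverse fitness
        w = [1.0 / (1e-9 + fv) for fv in fs]
        centre = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        i = min(range(pop), key=lambda j: fs[j])
        if fs[i] < best_f:
            best_x, best_f = xs[i], fs[i]
    return best_x, best_f
```

The contraction toward the weighted centre is what gives the method the simplicity and speed the abstract contrasts with genetic algorithms.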
Abstract: The aim of our research was to evaluate the effects of physical exercise on the lipid profile and anthropometric characteristics of young subjects diagnosed with metabolic syndrome (MS). The study was conducted over 28 weeks on 20 young obese patients who undertook an intermittent submaximal exercise program. After 28 weeks of physical activity, the results show significant effects on the anthropometric characteristics and serum lipid profile of the research subjects. Additionally, the results of this study confirm the correlation between variations of intra-abdominal adiposity, determined ultrasonographically, and changes in serum lipid concentrations, a stronger correlation than that obtained using abdominal circumference or body mass index.
Abstract: It has been proven that early establishment of the microbial flora in the digestive tract of ruminants has a beneficial effect on their health condition and productivity. A probiotic compound, made from five bacteria isolated from adult bovine cattle, was dosed to 15 Holstein newborn calves in order to measure its capacity to improve body weight gain and reduce diarrhea incidence. The test was performed in the municipality of Cajicá (Colombia), at 2580 m.a.s.l., during the rainy season, with environmental temperatures oscillating between 4 and 25 °C. Five calves were allotted to the control group (no probiotic added). Treatments 1 and 2 (5 calves per group) received 10 ml of probiotic mixes 1 and 2, respectively. Probiotic mixes 1 and 2 were similar in microbial composition but differed in production process. Probiotics were added to the morning milk and dosed daily for a month, then weekly for three additional months. Diarrhea incidence was measured by observing the number of animals affected in each group; each animal was weighed daily to obtain weight gain, and rumen fluid samples were extracted with an oro-esophageal catheter to determine the level of fiber and grain consumption.
Abstract: Multirate multimedia delivery applications in multihop Wireless Mesh Networks (WMN) are data-redundant and delay-sensitive, which poses many challenges for designing efficient transmission systems. In this paper, we propose a new cross-layer resource allocation scheme to minimize the receiver-side distortion within the delay bound requirements, by exploiting application-layer Position and Value (P-V) diversity as well as the multihop Effective Capacity (EC). We specifically consider image transmission optimization here. First, the maximum supportable source traffic rate is identified by applying the multihop Effective Capacity (EC) model. Then, the optimal source coding rate is selected according to the P-V diversity of multirate media streaming, which significantly increases the decoded media quality. Simulation results show that the proposed approach improves media quality significantly compared with traditional approaches under the same QoS requirements.
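The effective capacity concept used to bound the supportable source rate has a standard empirical form, sketched here for samples of per-frame service (this is the textbook estimator; the paper's multihop extension and P-V rate selection are not reproduced):

```python
import math

def effective_capacity(service_samples, theta):
    """Effective capacity of a service process for QoS exponent theta:
    EC = -(1/theta) * log(mean(exp(-theta * s))). A larger theta (stricter
    delay requirement) yields a smaller supportable source rate."""
    m = sum(math.exp(-theta * s) for s in service_samples) / len(service_samples)
    return -math.log(m) / theta
```

For a constant service process the effective capacity equals the service rate, while variability in service pushes it below the mean, which is exactly why delay-sensitive traffic must be throttled below raw capacity.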
Abstract: This paper explores the implementation of adaptive coding and modulation schemes for Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) feedback systems. Adaptive coding and modulation enables robust and spectrally efficient transmission over time-varying channels. The basic premise is to estimate the channel at the receiver and feed this estimate back to the transmitter, so that the transmission scheme can be adapted to the channel characteristics. Two types of codebook-based channel feedback techniques are used in this work. Both long-term and short-term CSI at the transmitter are used for efficient channel utilization. OFDM is a powerful technique for communication systems suffering from frequency selectivity. Combined with multiple antennas at the transmitter and receiver, OFDM proves to be robust against delay spread. Moreover, it leads to significant data rates with improved bit error performance over links having only a single antenna at both the transmitter and receiver. The coded modulation increases the effective transmit power relative to uncoded variable-rate, variable-power MQAM performance for the MIMO-OFDM feedback system. Hence, the proposed arrangement becomes an attractive approach for achieving enhanced spectral efficiency and improved error rate performance in next-generation high-speed wireless communication systems.
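The adaptation step itself reduces to picking the largest constellation whose SNR threshold the fed-back channel estimate satisfies; a sketch with illustrative thresholds (the dB values below are assumptions, not the paper's link-adaptation table):

```python
def select_mqam(snr_db, modes=((6.0, 4), (12.0, 16), (18.0, 64))):
    """Pick the largest M-QAM constellation whose SNR threshold is met;
    below the lowest threshold, return 0 (no transmission). The
    (threshold_dB, M) pairs are illustrative, not from the paper."""
    m = 0
    for thr, mod in modes:
        if snr_db >= thr:
            m = mod
    return m
```

Each OFDM subcarrier (or subband) would run this selection independently using the short-term CSI, while long-term CSI shapes the codebook.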
Abstract: Mobile Learning (M-Learning) is a new technology intended to enhance current learning practices and activities for all people, especially students and academic practitioners. UTP currently implements two types of learning styles, conventional and electronic learning. In order to improve current learning approaches, it is necessary for UTP to implement m-learning. This paper presents a study of students' perceptions of mobile utilization in learning practices at UTP, based on a survey conducted among 82 students of the System Analysis and Design (SAD) course. The survey covers basic information on the mobile devices used by the students, opinions on current learning practices, and opinions regarding m-learning implementation in current learning practices, especially in the SAD course. Based on the results of the survey, the majority of students use mobile devices that can support an m-learning environment. Students also agreed that current learning practices are ineffective, and they believe that m-learning utilization can improve their effectiveness.
Abstract: Nowadays, multimedia data is transmitted and processed in compressed formats. Due to the decoding procedure and the filtering required for edge detection, the feature extraction process of the MPEG-7 Edge Histogram Descriptor is time-consuming and computationally expensive. To improve the efficiency of compressed image retrieval, we propose a new edge histogram generation algorithm in the DCT domain. Using the edge information provided by only two of the AC coefficients of the DCT, we can obtain edge directions and strengths directly in the DCT domain. Experimental results demonstrate that our system performs well in terms of retrieval efficiency and effectiveness.
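The underlying observation is that the two lowest-order AC coefficients of a DCT block already indicate edge orientation and strength; a sketch of that estimate (the paper's exact direction quantization into the five MPEG-7 edge categories is not reproduced):

```python
import math

def dct_edge(ac01, ac10):
    """Estimate block edge strength and orientation from two DCT AC
    coefficients: ac01 (first horizontal-frequency term, responds to
    vertical edges) and ac10 (first vertical-frequency term, responds to
    horizontal edges). Returns (strength, angle_degrees)."""
    strength = math.hypot(ac01, ac10)
    angle = math.degrees(math.atan2(ac10, ac01))  # 0 deg: purely vertical-edge energy
    return strength, angle
```

Because both coefficients are available without inverse-transforming the block, the histogram can be built directly on the compressed stream, which is the source of the claimed speedup.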
Abstract: Fossil-fuel-fired power plants dominate electric power generation in Taiwan and are also the major contributors of greenhouse gases (GHG). CO2 is the most important greenhouse gas causing global warming. This paper examines the relationship between carbon trading for GHG reduction and the power generation expansion planning (GEP) problem of an electrical utility. A Particle Swarm Optimization (PSO) algorithm is presented to deal with the generation expansion planning strategy of a utility with independent power providers (IPPs). The utility has to take both the IPPs' participation and the environmental impact into account when a new generation unit is considered for expansion on the supply side.
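A minimal one-dimensional PSO sketch of the optimizer the paper applies (inertia and acceleration constants are common textbook values, and the quadratic test objective stands in for the GEP cost model with carbon-trading constraints, which is not reproduced):

```python
import random

def pso_minimize(f, lo, hi, n=20, iters=80, seed=3):
    """Minimal Particle Swarm Optimization on a bound-constrained 1-D
    objective: each particle is pulled toward its personal best and the
    swarm's global best position."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pbest, pval = x[:], [f(xi) for xi in x]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    for _ in range(iters):
        for i in range(n):
            v[i] = (0.7 * v[i]
                    + 1.5 * rng.random() * (pbest[i] - x[i])
                    + 1.5 * rng.random() * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i], fx
                if fx < gval:
                    gbest, gval = x[i], fx
    return gbest, gval
```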
Abstract: The pressure wave velocity in a hydraulic system was determined using piezo pressure sensors without removing fluid from the system. The measurements were carried out in a low pressure range (0.2 – 6 bar) and the results were compared with those of other studies. This method is not as accurate as measurement with separate measurement equipment, but the fluid stays in the actual machine the whole time, and the effect of air is taken into consideration if air is present in the system. The amount of air is estimated through calculations and comparisons with other studies. This measurement equipment can also be installed in an existing machine and programmed to measure in real time. Thus, it could be used, e.g., to control dampers.
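The basic computation behind a two-sensor velocity measurement, estimating the propagation delay by cross-correlating the two pressure traces and dividing sensor spacing by that delay, can be sketched as follows (the paper's exact signal processing is not described, so this is a generic illustration):

```python
def wave_speed(sensor_a, sensor_b, distance_m, dt_s):
    """Estimate pressure wave speed from two sensor traces a known distance
    apart: find the lag (in samples) that best aligns them by
    cross-correlation, then c = distance / (lag * dt)."""
    n = len(sensor_a)
    best_lag, best_score = 1, float('-inf')
    for lag in range(1, n // 2):
        score = sum(sensor_a[i] * sensor_b[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return distance_m / (best_lag * dt_s)
```

Since entrained air lowers the effective bulk modulus of the fluid, a measured speed below the pure-fluid value is what lets the amount of air be estimated by comparison.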
Abstract: The tubes in an ammonia primary reformer furnace operate close to the limits of materials technology in terms of the stress induced by very high temperatures combined with large differential pressures across the tube wall. Operation at tube wall temperatures significantly above design can result in a rapid increase in the number of tube failures, since tube life is very sensitive to the absolute operating temperature of the tube. Clearly, it is important to measure tube wall temperatures accurately in order to prevent premature tube failure by overheating. In the present study, the catalyst tubes in an ammonia primary reformer have been modeled taking into consideration heat, mass and momentum transfer as well as reformer characteristics. The investigation concerns the effects of tube characteristics and superficial tube wall temperatures on the percentage of heat flux, unconverted methane and hydrogen production for various values of the steam-to-carbon ratio. The results show the impact of catalyst tube length and diameter on the operating parameters of ammonia primary reformers.
Abstract: In this paper a back-propagation artificial neural network (BPANN) is employed to predict the limiting drawing ratio (LDR) of the deep drawing process. To prepare a training set for the BPANN, a number of finite element simulations were carried out. Die and punch radius, die arc radius, friction coefficient, sheet thickness, yield strength of the sheet and strain hardening exponent were used as the input data, and the LDR as the specified output, in the training of the neural network. Given these parameters, the trained network is able to estimate the LDR for any new condition. Comparing the FEM and BPANN results, an acceptable correlation was found.
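A minimal one-hidden-layer back-propagation regressor of the kind the paper trains on (inputs, LDR) pairs can be sketched as follows; the toy samples, layer size and learning rate below are assumptions for illustration, not the paper's FEM data or network configuration:

```python
import random, math

def train_mlp(samples, hidden=4, epochs=1000, lr=0.1, seed=0):
    """Minimal one-hidden-layer backpropagation network for scalar
    regression: tanh hidden units, linear output, stochastic gradient
    descent on squared error. Returns a predict(x) closure."""
    rng = random.Random(seed)
    d = len(samples[0][0])
    w1 = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, t in samples:
            h = [math.tanh(sum(wij * xj for wij, xj in zip(wi, x)) + bi)
                 for wi, bi in zip(w1, b1)]
            y = sum(wj * hj for wj, hj in zip(w2, h)) + b2
            e = y - t
            # backpropagate the squared-error gradient
            for j in range(hidden):
                g = e * w2[j] * (1 - h[j] ** 2)   # uses pre-update w2[j]
                w2[j] -= lr * e * h[j]
                b1[j] -= lr * g
                for k in range(d):
                    w1[j][k] -= lr * g * x[k]
            b2 -= lr * e
    def predict(x):
        h = [math.tanh(sum(wij * xj for wij, xj in zip(wi, x)) + bi)
             for wi, bi in zip(w1, b1)]
        return sum(wj * hj for wj, hj in zip(w2, h)) + b2
    return predict
```

In the paper's setting, each sample's input vector would hold the six process parameters and the target would be the FEM-computed LDR.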
Abstract: Cross-layer optimization based on utility functions has recently been studied extensively, and numerous types of utility functions have been examined in the corresponding literature. However, a major drawback is that most utility functions take a fixed mathematical form or are based on simple combining, which cannot fully exploit the available information. In this paper, we formulate a framework for cross-layer optimization based on Adaptively Weighted Utility Functions (AWUF) for fairness balancing in OFDMA networks. Under this framework, a two-step allocation algorithm is provided as a sub-optimal solution, whose control parameters can be updated in real time to accommodate instantaneous QoS constraints. The simulation results show that the proposed algorithm achieves high throughput while balancing fairness among multiple users.
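A greedy sketch of the weighted-utility idea: each subcarrier goes to the user maximizing a weight times a log-rate utility, where the weights play the role of the adaptively updated fairness parameters (the paper's actual utility form and two-step update rule are not reproduced):

```python
import math

def allocate_subcarriers(gains, weights):
    """Assign each subcarrier to the user with the largest
    weight * log(1 + gain). `gains[s][u]` is the channel gain of user u on
    subcarrier s; `weights[u]` is that user's fairness weight."""
    alloc = {}
    for s, per_user_gain in enumerate(gains):
        alloc[s] = max(range(len(weights)),
                       key=lambda u: weights[u] * math.log1p(per_user_gain[u]))
    return alloc
```

Raising a lagging user's weight biases future subcarrier assignments toward that user, which is the mechanism by which adaptive weights trade throughput for fairness.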
Abstract: In this manuscript, we discuss the problem of determining the optimum stratification of a study (or main) variable based on an auxiliary variable that follows a uniform distribution. If the stratification of the survey variable is made using the auxiliary variable, it may lead to substantial gains in the precision of the estimates. This problem is formulated as a Nonlinear Programming Problem (NLPP), which turns out to be a multistage decision problem and is solved using a dynamic programming technique.
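A dynamic-programming sketch of the multistage structure for a U(0, 1) auxiliary variable: choose stratum boundaries on a grid to minimize the sum of W_h * S_h, where for a uniform stratum of width d the weight is W_h = d and the standard deviation is S_h = d / sqrt(12), so each stratum costs (b - a)^2 / sqrt(12) (the grid discretization and this simple cost stand in for the paper's NLPP, which is not reproduced):

```python
import math

def optimal_strata(L, grid=100):
    """Choose L - 1 boundaries on [0, 1] minimizing sum of stratum costs
    (b - a)^2 / sqrt(12) by dynamic programming over grid points."""
    pts = [i / grid for i in range(grid + 1)]
    cost = lambda a, b: (b - a) ** 2 / math.sqrt(12)
    # best[h][i] = minimum cost of covering [0, pts[i]] with h strata
    best = [[math.inf] * (grid + 1) for _ in range(L + 1)]
    best[0][0] = 0.0
    for h in range(1, L + 1):
        for i in range(1, grid + 1):
            best[h][i] = min(best[h - 1][j] + cost(pts[j], pts[i])
                             for j in range(i))
    # recover the boundaries by walking back through the stages
    bounds, i = [1.0], grid
    for h in range(L, 0, -1):
        j = min(range(i), key=lambda j: best[h - 1][j] + cost(pts[j], pts[i]))
        bounds.append(pts[j])
        i = j
    return bounds[::-1]
```

For the uniform case the optimum is equal-width strata, which the DP recovers exactly on the grid; the same staged recursion applies to other auxiliary distributions once the stratum cost is replaced accordingly.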
Abstract: Gas flaring is one of the largest GHG-emitting sources in the oil and gas industries. It is also a major waste of energy that could be better utilized and even generate revenue. Minimizing flaring is an effective approach to reducing GHG emissions and conserving energy in flaring systems. Integrating waste and flared gases into the fuel gas networks (FGN) of refineries is an efficient tool. A fuel gas network collects fuel gases from various source streams, mixes them in an optimal manner, and supplies them to different fuel sinks such as furnaces, boilers and turbines. In this article we use the fuel gas network model proposed by Hasan et al. as a base model, modify some of its features, and add constraints on emission pollution by gas flaring to reduce GHG emissions as far as possible. Results for a refinery case study showed that integrating the flare gas stream with the waste and natural gas streams to construct an optimal FGN can significantly reduce the total annualized cost and flaring emissions.
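A toy version of the source-to-sink mixing decision at the heart of an FGN: serve one sink's heat demand by blending recovered flare gas (limited availability) with purchased natural gas at minimum fuel cost. The linear cost model, the single sink, and all numbers are illustrative; Hasan et al.'s model is a full MINLP with gas-quality and network constraints that this sketch omits:

```python
def cheapest_blend(demand_mw, flare_avail_mw, flare_cost, ng_cost, step=0.01):
    """Brute-force search over the flare-gas fraction serving a single sink.
    Returns (total_cost, flare_mw_used, natural_gas_mw_used)."""
    best = None
    f = 0.0
    upper = min(demand_mw, flare_avail_mw)
    while f <= upper + 1e-9:
        ng = demand_mw - f
        cost = f * flare_cost + ng * ng_cost
        if best is None or cost < best[0]:
            best = (cost, f, ng)
        f += step * demand_mw
    return best
```

When recovered flare gas is cheaper than natural gas, the optimum uses all available flare gas, which is the economic intuition behind the case study's cost and emission reductions.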