Abstract: Aldehydes, as secondary lipid oxidation products, are highly specific to the oxidative degradation of particular polyunsaturated fatty acids present in foods. Gas chromatographic analysis of these volatile compounds has been widely used for monitoring the deterioration of food products. A static headspace gas chromatography method with flame ionization detection (SHS-GC-FID) was developed and applied to monitor the aldehydes present in processed foods such as bakery, meat, and confectionery products.
Five selected aldehydes were determined in samples without any sample preparation, except grinding for bakery and meat products. SHS-GC analysis allows the separation of propanal, pentanal, hexanal, heptanal, and octanal within 15 min. Aldehydes were quantified in fresh and stored samples; the obtained ranges were 1.62±0.05–9.95±0.05 mg/kg in crackers, 6.62±0.46–39.16±0.39 mg/kg in sausages, and 0.48±0.01–1.13±0.02 mg/kg in cocoa spread cream. Based on these results, it can be concluded that the proposed method is suitable for different types of samples, that the aldehyde content varies with the type of sample, and that it differs between fresh and stored samples of the same type.
Abstract: In Algeria, oil pumping plants are currently fed with electric power by independent local sources. This type of feeding has many advantages (little climatic influence, independent operation). However, it requires a qualified maintenance staff, a rather high frequency of maintenance and repair, and additional fuel costs. Taking into account the increasing development of the national electric supply network (Sonelgaz), a real possibility appears of transferring from the local sources towards centralized sources. The latter can be not only more economical but also more reliable than the independent local sources. In order to carry out this transfer, it is necessary to work out an optimal strategy for rebuilding these networks, taking into account the economic parameters and the reliability indices.
Abstract: The Einstein vacuum equations, a system of nonlinear partial differential equations (PDEs), are derived from the Weyl metric by using the relation between the Einstein tensor and the metric tensor. The symmetries of the Einstein vacuum equations for static axisymmetric gravitational fields are obtained using the classical Lie method. We have examined the optimal system of vector fields, which is further used to reduce the nonlinear PDE to a nonlinear ordinary differential equation (ODE). Some exact solutions of the Einstein vacuum equations in general relativity are also obtained.
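For reference, the static axisymmetric Weyl line element and the vacuum field equations it yields take the following standard form (a common convention; the paper's own notation may differ):

```latex
ds^2 = e^{2\psi}\,dt^2 - e^{2(\gamma-\psi)}\left(d\rho^2 + dz^2\right) - \rho^2 e^{-2\psi}\,d\phi^2 ,
\qquad \psi = \psi(\rho,z),\ \gamma = \gamma(\rho,z),
```

for which the vacuum equations reduce to

```latex
\psi_{\rho\rho} + \frac{1}{\rho}\,\psi_\rho + \psi_{zz} = 0, \qquad
\gamma_\rho = \rho\left(\psi_\rho^2 - \psi_z^2\right), \qquad
\gamma_z = 2\rho\,\psi_\rho\,\psi_z .
```

The first equation is the axisymmetric Laplace equation for ψ; γ then follows by quadrature, which is what makes the Lie symmetry reduction to an ODE tractable.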
Abstract: The Lattice Boltzmann Method (LBM) with double populations is applied to solve steady-state laminar natural convective heat transfer in a triangular cavity filled with water. The bottom wall is heated, the vertical wall is cooled, and the inclined wall is kept adiabatic. The buoyancy effect is modeled by applying the Boussinesq approximation to the momentum equation. The fluid velocity is determined by a D2Q9 LBM, and the energy equation is discretized by a D2Q4 LBM to compute the temperature field. Comparisons with previously published work are performed and found to be in excellent agreement. Numerical results are obtained for a wide range of Rayleigh numbers and for inclination angles from 0° to 360°. The flow and thermal fields are exhibited by means of streamlines and isotherms. It is observed that the inclination angle can be used as a relevant parameter to control heat transfer in right-angled triangular enclosures.
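To illustrate the D2Q9 discretization mentioned above, the sketch below computes the standard D2Q9 equilibrium populations for a single lattice node. It is a generic textbook fragment under standard lattice weights, not the authors' solver, and the names are illustrative.

```python
import numpy as np

# Standard D2Q9 lattice weights and discrete velocity set
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def feq_d2q9(rho, u):
    """f_i^eq = w_i * rho * (1 + 3 e_i.u + 4.5 (e_i.u)^2 - 1.5 u.u)."""
    eu = E @ u                    # e_i . u for each of the 9 directions
    usq = u @ u
    return W * rho * (1.0 + 3.0 * eu + 4.5 * eu**2 - 1.5 * usq)

f = feq_d2q9(1.0, np.array([0.05, 0.02]))
# The moments recover density and momentum: f.sum() = rho, E.T @ f = rho * u
```

The zeroth and first moments of these populations reproduce the macroscopic density and momentum exactly, which is the property the collision-streaming cycle relies on.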
Abstract: Nowadays, we are facing network threats that cause enormous damage to the Internet community day by day. In this situation, more and more people try to protect their networks using traditional mechanisms including firewalls, Intrusion Detection Systems, etc. Among them, the honeypot is a versatile tool for a security practitioner; honeypots are tools that are meant to be attacked or interacted with in order to gather more information about attackers, their motives, and their tools. In this paper, we describe the usefulness of low-interaction and high-interaction honeypots and compare them. We then propose a hybrid honeypot architecture that combines low- and high-interaction honeypots to mitigate their respective drawbacks. In this architecture, the low-interaction honeypot is used as a traffic filter. Activities like port scanning can be effectively detected by the low-interaction honeypot and stopped there. Traffic that cannot be handled by the low-interaction honeypot is handed over to the high-interaction honeypot. In this case, the low-interaction honeypot acts as a proxy, whereas the high-interaction honeypot offers an optimal level of realism. To protect the high-interaction honeypot from infection, a containment environment (VMware) is used.
Abstract: The aim of this study is to analyze the influence of different heat insulation methods on the indoor thermal environment and comfort of apartment buildings. The study analyzes the indoor thermal environment and comfort of apartment building units using the calculation software "THERB" and compares three different heat insulation methods: outside insulation on outside walls, inside insulation on outside walls, and interior insulation. In terms of the indoor thermal environment, outside insulation is the best at stabilizing room temperature. In winter, the room temperature with outside insulation after heating is higher than with the other methods, and it is kept 3-5°C higher throughout the night. However, the surface temperature with outside insulation did not increase dramatically when heating was used, remaining 3 to 5°C lower than with the other insulation methods. The PMV with interior insulation falls nearly within the comfort range when heating and cooling are used.
Abstract: A 1.2 V, 0.61 mA bias current, low noise amplifier (LNA) suitable for low-power applications in the 2.4 GHz band is presented. The circuit has been implemented, laid out, and simulated using a UMC 130 nm RF-CMOS process. The amplifier provides a 13.3 dB power gain, a noise figure NF < 2.28 dB, and a 1-dB compression point of -15.69 dBm, while dissipating 0.74 mW. Such performance makes this design suitable for wireless sensor network applications such as ZigBee.
Abstract: This paper presents a comparative analysis of a new unsupervised PCA-based technique for steel plate texture segmentation towards defect detection. The proposed scheme, called Variance-Based Component Analysis (VBCA), employs PCA for feature extraction, applies a feature reduction algorithm based on the variance of eigenpictures, and classifies the pixels as defective or normal. While classic PCA uses a clusterer such as K-means for pixel clustering, VBCA employs thresholding and some post-processing operations to label pixels as defective or normal. The experimental results show that the proposed VBCA algorithm is 12.46% more accurate and 78.85% faster than classic PCA.
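As a rough illustration of thresholding on PCA reconstruction error (as opposed to K-means clustering), the sketch below fits an "eigenpicture" basis on defect-free patches and flags test patches whose reconstruction error exceeds a variance-based threshold. The function names, feature layout, and the mean + k·std threshold rule are assumptions for illustration, not the exact VBCA algorithm.

```python
import numpy as np

def pca_defect_labels(train, test, n_components=3, k=3.0):
    """Label test patches as defective (True) via PCA reconstruction error.

    train: (n, d) defect-free feature patches used to fit the basis.
    test:  (m, d) patches to classify.
    """
    mean = train.mean(axis=0)
    # Principal directions ("eigenpictures") via SVD of centered training data
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    basis = vt[:n_components]

    def recon_err(X):
        Xc = X - mean
        return np.linalg.norm(Xc - Xc @ basis.T @ basis, axis=1)

    # Simple global threshold instead of a clusterer such as K-means
    thresh = recon_err(train).mean() + k * recon_err(train).std()
    return recon_err(test) > thresh

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(300, 16))          # "normal" texture
test = np.vstack([rng.normal(0.0, 1.0, size=(100, 16)),
                  rng.normal(5.0, 1.0, size=(5, 16))])  # 5 injected outliers
labels = pca_defect_labels(train, test)
```

Replacing the clusterer with a fixed threshold is what removes the iterative clustering cost, which is consistent with the speedup the abstract reports.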
Abstract: In this paper a novel method is presented for evaluating fabric pills using digital image processing techniques. This work provides a novel technique for detecting pills and for measuring their heights, surfaces, and volumes. Measuring the intensity of defects by human vision is an inaccurate method for quality control; this problem motivated the use of digital image processing techniques for the detection of fabric surface defects. In former works, systems were limited to measuring the surface of defects, but in the presented method the height and the volume of defects are also measured, which leads to more accurate quality control. An algorithm was developed to first find pills and then measure their average intensity using three criteria: height, surface, and volume. The results showed a meaningful relation between the number of rotations and the quality of pilled fabrics.
Abstract: The main objectives of this paper are to measure pollutant concentrations in the oil refinery area in Kuwait over three periods during one year, to obtain a recent emission inventory for the three refineries of Kuwait, to use AERMOD and the emission inventory to predict pollutant concentrations and distribution, to compare model predictions against measured data, and to perform numerical experiments to determine the conditions under which the emission rates and the resulting pollutant dispersion are below the maximum allowable limits.
Abstract: The purpose of the present study is the calculation of the Gutenberg-Richter parameters (a, b) and the analysis of the mean annual rate of exceedance of earthquake magnitude (λm) for the southern segment of the Sagaing fault and its associated components. The study area lies within a 200 km radius centered on Yangon. The earthquake data file covers the period from 1975 to 31 August 2006. The bounded Gutenberg-Richter recurrence law is applied with a minimum magnitude M0 of 4.0 and a maximum magnitude Mmax of 7.5.
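The bounded Gutenberg-Richter recurrence law used above can be evaluated as in the sketch below, following the standard engineering-seismology formulation. The a and b values in the example are placeholders, since the abstract does not state the fitted values.

```python
import math

def bounded_gr_rate(m, a, b, m_min=4.0, m_max=7.5):
    """Mean annual rate of exceedance of magnitude m (m_min <= m <= m_max)
    under the bounded Gutenberg-Richter recurrence law."""
    beta = b * math.log(10.0)
    nu = 10.0 ** (a - b * m_min)   # annual rate of events with M >= m_min
    num = math.exp(-beta * (m - m_min)) - math.exp(-beta * (m_max - m_min))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return nu * num / den

# Hypothetical a, b values for illustration only (not from the study)
rate_m5 = bounded_gr_rate(5.0, a=4.5, b=0.9)
```

At m = m_min the rate reduces to the unbounded value 10^(a - b·m_min), and at m = m_max it falls to zero, which is the effect of truncating the magnitude distribution at 7.5.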
Abstract: A large number of semantic web service composition approaches have been developed by the research community, and one is more efficient than another depending on the particular situation of use. So a close look at the requirements of one's particular situation is necessary to find a suitable approach to use. In this paper, we present a Technique Recommendation System (TRS) which, using a classification of state-of-the-art semantic web service composition approaches, can provide the user of the system with recommendations regarding which service composition approach to use, based on parameters describing the situation of use. TRS has a modular architecture and uses production rules for knowledge representation.
Abstract: Knowledge about the magnetic quantities in a magnetic circuit is always of great interest. On the one hand, this information is needed for the simulation of a transformer. On the other hand, parameter studies are more reliable if the magnetic quantities are derived from a well-established model. One possibility for modeling the 3-phase transformer is a magnetic equivalent circuit (MEC). Though this is a well-known approach, it is often not an easy task to set up such a model for a large number of lumped elements, particularly when it must include the nonlinear characteristic of the magnetic material. Here we show the setup of a solver for a MEC and compare the results of the calculation with measurements. The equations of the MEC are based on a rearranged system of nodal analysis. Thus it is possible to achieve a minimum number of equations and a clear and simple structure; hence the solver is uncomplicated to handle and supports the iteration process. Additional helpful tasks are implemented within the solver to enhance its performance. The electric circuit is described by an electric equivalent circuit (EEC). Our results for the 3-phase transformer demonstrate the computational efficiency of the solver and show the benefit of applying a MEC.
Abstract: A theoretical study is conducted to design and explore the effect of different parameters, such as heat loads, the tube size of the piping system, wick thickness, porosity, and hole size, on the performance and capability of a Loop Heat Pipe (LHP). This paper presents a steady-state model that describes the different phenomena inside an LHP. Loop Heat Pipes (LHPs) are two-phase heat transfer devices with capillary pumping of a working fluid. Owing to their design, which differs from that of conventional heat pipes, and to the special properties of the capillary structure, they are capable of transferring heat efficiently over distances of up to several meters at any orientation in the gravity field, or over several meters in a horizontal position. The theoretical model is described by relations that satisfy important limits such as the capillary and nucleate boiling limits. An algorithm is developed to predict the size of an LHP satisfying the limitations mentioned above for a wide range of applied loads. Finally, to assess and evaluate the algorithm and all the relations considered, we used it to design a new kind of LHP to recover heat from the exhaust of an actual gas turbine. The results showed that the LHP can be used as a very efficient device to recover heat even at high loads (the exhaust of a gas turbine). The sizes of all parts of the LHP were obtained using the developed algorithm.
Abstract: Most systems deal with time-varying signals. Power efficiency can be achieved by adapting the system activity to the input signal variations. In this context, an adaptive rate filtering technique based on level-crossing sampling is devised. It adapts the sampling frequency and the filter order by following the local variations of the input signal, thus correlating the processing activity with the signal variations. Interpolation is required in the proposed technique; a drastic reduction in the interpolation error is achieved by exploiting symmetry during the interpolation process. The processing error of the proposed technique is calculated. The computational complexity of the proposed filtering technique is deduced and compared to that of the classical approach. The results promise a significant gain in computational efficiency and hence in power consumption.
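The level-crossing idea can be sketched as follows: a sample is retained only when the signal has moved by at least one level spacing since the last retained sample, so fast-varying segments produce more samples than slow ones. This is a minimal illustration with an assumed level spacing delta, not the authors' complete adaptive rate filter.

```python
import numpy as np

def level_crossing_sample(t, x, delta=0.25):
    """Keep (t, x) pairs only where the signal has moved by at least
    delta since the last kept sample (illustrative level-crossing scheme)."""
    kept_t, kept_x = [t[0]], [x[0]]
    last = x[0]
    for ti, xi in zip(t[1:], x[1:]):
        if abs(xi - last) >= delta:
            kept_t.append(ti)
            kept_x.append(xi)
            last = xi
    return np.array(kept_t), np.array(kept_x)

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2.0 * np.pi * 3.0 * t)       # 3 Hz test tone
ts, xs = level_crossing_sample(t, x)    # far fewer than 1000 samples
```

Because the retained instants are non-uniform, downstream uniform-rate processing needs interpolation, which is exactly why the interpolation-error reduction discussed above matters.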
Abstract: Producing companies aspire to high delivery availability despite disruptions. To ensure high delivery availability, safety stocks are required. However, safety stock leads to additional capital commitment and merely compensates for disruptions instead of resolving their causes. The intention is to increase the stability of production by configuring the production planning and control systematically; thus the safety stock can be reduced. The largest proportion of inventory in producing companies is caused by batch inventory, schedule deviations, and variability of demand rates. These causes of high inventory levels can be reduced by configuring the production planning and control specifically; hence the inventory level can be reduced. This is enabled by synchronizing the lot sizes, straightening the demand, and optimizing order release, sequencing, and capacity control.
Abstract: Granular computing deals with the representation of information in the form of aggregates and with related methods for their transformation and analysis in problem solving. A granulation scheme based on clustering and Rough Set Theory is presented in this paper, with a focus on the structured conceptualization of information. Experiments with the proposed method on four labeled data sets exhibit good results with reference to the classification problem. The proposed granulation technique is semi-supervised, incorporating both global and local information in the granulation. To represent the results of the attribute-oriented granulation, a tree structure is proposed.
Abstract: The growth of open networks has created the interest to commercialize them. The establishment of an electronic business mechanism must be accompanied by a digital electronic payment system to transfer the value of transactions. Financial organizations are requested to offer a secure e-payment synthesis with levels of trust and security equivalent to those served in conventional paper-based payment transactions. The paper addresses the challenge of the first-trade problem in e-commerce, provides a brief literature review on electronic payment, and attempts to explain the underlying concept and method of trust in relation to electronic payment.
Abstract: Masonry cavity walls are loaded by wind pressure and by vertical load from upper floors. These loads result in bending moments and compression forces in the ties connecting the outer and the inner wall of a cavity wall. Large cavity walls are furthermore loaded by differential movements arising from the temperature gradient between the outer and the inner wall, which results in a critical increase of the bending moments in the ties. Since the ties are loaded by combined compression and moment forces, the load-bearing capacity is derived from instability equilibrium equations. Most of these are iterative, since exact instability solutions are complex to derive, not to mention the extra complexity introduced by the dimensional instability from the temperature gradients. Using an inverse variable substitution and comparing an exact theory with an analytical instability solution, a method to design tie connectors in cavity walls was developed. The method takes into account the constraint conditions limiting the free length of the wall tie, and the instability in the case of pure compression, which gives an optimal load-bearing capacity. The model is illustrated with examples from practice.
Abstract: The main focus of this paper is on human-induced forces. Almost all existing force models for this type of load (defined either in the time or the frequency domain) are developed from the assumption of perfect periodicity of the force and are based on force measurements conducted on rigid (i.e. high-frequency) surfaces. To verify the conclusions of different authors, vertical pressure measurements during walking were performed using pressure gauges in various configurations. The obtained forces are analyzed using the Fourier transform. This load is often decisive in the design of footbridges. Design criteria and load models proposed by widely used standards and by other researchers are introduced and compared.