Abstract: This paper presents experimental investigations of
in-cylinder tumble flows in an unfired internal combustion engine,
using particle image velocimetry, with a flat piston at engine speeds
ranging from 400 to 1000 rev/min and with dome and dome-cavity
pistons at an engine speed of 1000 rev/min.
From the two-dimensional in-cylinder flow measurements, tumble
flow analysis is carried out in the combustion space on a vertical
plane passing through the cylinder axis. Ensemble-averaged velocity
vectors are used to analyze the tumble flows, and tumble ratio is
estimated to characterize them. The results show that tumble ratio
varies mainly with crank angle position, and that the average
turbulent kinetic energy at the end of the compression stroke is
higher at higher engine speeds. At the 330° crank angle position, the
flat piston shows improvements of about 85% and 23% in tumble
ratio, and about 24% and 2.5% in average turbulent kinetic energy,
over the dome and dome-cavity pistons respectively.
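The abstract characterizes the flow by a tumble ratio without stating the definition used; a commonly used form, computed from the ensemble-averaged PIV velocity field on the measurement plane, relates the effective angular speed of the tumble vortex to the crankshaft speed (the symbols below are an assumption following the usual convention, not necessarily the paper's notation):

```latex
\mathrm{TR}(\theta) \;=\; \frac{\omega_t(\theta)}{\omega_e},
\qquad
\omega_t(\theta) \;=\;
\frac{\sum_i \big[(x_i - x_c)\,v_i - (y_i - y_c)\,u_i\big]}
     {\sum_i \big[(x_i - x_c)^2 + (y_i - y_c)^2\big]}
```

where $(u_i, v_i)$ is the measured velocity at grid point $(x_i, y_i)$, $(x_c, y_c)$ is the vortex center on the measurement plane, and $\omega_e$ is the engine angular speed.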
Abstract: The challenge in image authentication is that images often need to be subjected to non-malicious operations such as compression, so authentication techniques need to be compression tolerant. In this paper we propose an image authentication system that is tolerant to JPEG lossy compression. A scheme for JPEG grey-scale images is proposed, based on a data embedding method that uses a secret key and a secret mapping vector in the frequency domain. An encrypted feature vector, extracted from the image DCT coefficients, is embedded redundantly and invisibly in the marked image. On the receiver side, the feature vector is derived again from the received image and compared against the extracted watermark to verify the image's authenticity. The proposed scheme is robust against JPEG compression up to a maximum compression of approximately 80%, but sensitive to malicious attacks such as cutting and pasting.
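As an illustration of the kind of frequency-domain embedding this abstract describes, the sketch below forces the parity of quantized DCT coefficients to carry feature bits (quantization index modulation). The coefficient values, the step size `q`, and the function names are assumptions for illustration, not the paper's actual key- and mapping-vector-driven scheme.

```python
# Illustrative sketch: parity-based (QIM) embedding of authentication bits
# into DCT coefficients. All values and names are invented for illustration.

def embed_bit(coeff, bit, q=8.0):
    """Force the quantized index of a DCT coefficient to the parity of `bit`."""
    idx = round(coeff / q)
    if idx % 2 != bit:
        idx += 1  # move to an adjacent quantization index with the right parity
    return idx * q

def extract_bit(coeff, q=8.0):
    """Recover the embedded bit from the parity of the quantized index."""
    return round(coeff / q) % 2

# Embed a small feature vector redundantly into mid-frequency coefficients
coeffs = [13.2, -41.7, 22.9, 5.1, -17.4, 30.0]
feature_bits = [1, 0, 1]
marked = [embed_bit(c, feature_bits[i % len(feature_bits)])
          for i, c in enumerate(coeffs)]
recovered = [extract_bit(c) for c in marked]
```

Because extraction only looks at the rounded index, the embedded bits survive coefficient perturbations smaller than q/2, which is what makes this style of embedding tolerant to mild requantization.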
Abstract: This paper describes the results of an extensive study
and comparison of popular hash functions SHA-1, SHA-256,
RIPEMD-160 and RIPEMD-320 with JERIM-320, a 320-bit hash
function. The compression functions of hash functions like SHA-1
and SHA-256 are designed using serial successive iteration whereas
those like RIPEMD-160 and RIPEMD-320 are designed using two
parallel lines of message processing. JERIM-320 uses four parallel
lines of message processing, resulting in a higher level of security
than the other hash functions at comparable speed and memory
requirements. The performance of these methods has been evaluated
both through practical implementation and through step-computation
analysis. JERIM-320 proves to be secure and ensures message
integrity to a higher degree. The focus of this work is to establish
JERIM-320 as an alternative to present-day hash functions for
fast-growing Internet applications.
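To make the structural contrast concrete, the toy sketch below runs the same message through four independent "lines" and combines them at the end, mimicking the dataflow (not the cryptographic strength) of four-parallel-line designs. The constants, word size, and mixing function are invented purely for illustration and are not JERIM-320's.

```python
# Toy illustration of the parallel-line compression structure.
# NOT a real hash function -- for dataflow illustration only.

MASK = 0xFFFFFFFF  # work with 32-bit words

def line(data, mult, init):
    """One 'line' of message processing: a simple multiply-xor fold."""
    h = init
    for b in data:
        h = ((h * mult) ^ b) & MASK
    return h

def toy_parallel_digest(data):
    """Four independent lines over the same message, combined at the end."""
    lines = [line(data, m, i)
             for m, i in [(31, 0x67452301), (37, 0xEFCDAB89),
                          (41, 0x98BADCFE), (43, 0x10325476)]]
    out = 0
    for h in lines:
        out = (out << 32) | h  # concatenate the four 32-bit line outputs
    return out  # 128-bit toy digest
```

Because the four lines are independent, they can be computed concurrently over the same message block, which is the throughput argument made for multi-line designs.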
Abstract: New programming technologies allow the creation of
components that can be automatically or manually assembled to
reach a new experience in understanding and mastering knowledge,
or in acquiring skills in a specific knowledge area. The project
proposes an interactive framework that permits the creation,
combination and use of components specific to mathematical
training in high schools.
The main objectives of the framework are:
• authoring of lessons by the teacher or the students; all they need
are basic operating skills for Equation Editor (or something
similar, or LaTeX); the rest are just drag & drop operations,
inserting data into a grid, or navigating through menus
• allowing audio (spoken) presentations of mathematical texts and
solving hints (more easily understood by the students)
• offering graphical representations of a mathematical function
edited in Equation Editor
• storing of learning objects in a database
• storing of predefined lessons (efficient for expressions and
commands, the rest being calculations; allows a high
compression)
• viewing and/or modifying predefined lessons, according to the
curricula
The whole framework is centered on a mini-compiler for
mathematical expressions, which stores code that will later be used
for different purposes (tables, graphics, and optimisations).
Regarding the programming technologies used, a Visual C# .NET
implementation is proposed. New and innovative digital learning
objects for mathematics will be developed; they are capable of
interpreting, contextualizing and reacting depending on the
architecture in which they are assembled.
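As a minimal illustration of what such an expression mini-compiler does (parse an expression once, then reuse the result for tables or graphs), here is a Python sketch; the grammar, the single variable `x`, and the function names are assumptions for illustration, not the project's actual Visual C# implementation.

```python
# Minimal "compile once, evaluate many times" expression sketch,
# using Python's ast module as the parser for safety.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def compile_expr(text):
    """'Compile' an arithmetic expression in x; return a reusable function."""
    tree = ast.parse(text, mode="eval").body

    def ev(node, x):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left, x), ev(node.right, x))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](ev(node.operand, x))
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name) and node.id == "x":
            return x
        raise ValueError("unsupported expression")

    return lambda x: ev(tree, x)

f = compile_expr("x**2 - 3*x + 2")
table = [(x, f(x)) for x in range(4)]  # reusable for tables or plotting
```

The point of the design is that the expression is parsed only once; tables, plots, and later optimisations all reuse the compiled form.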
Abstract: This paper presents a new fingerprint coding technique
based on contourlet transform and multistage vector quantization.
Wavelets have shown their ability to represent natural images that
contain smooth areas separated by edges. However, wavelets
cannot efficiently exploit the fact that the edges usually
found in fingerprints are smooth curves. This issue is addressed by
directional transforms, known as contourlets, which have the
property of preserving edges. The contourlet transform is a new
extension to the wavelet transform in two dimensions using
nonseparable and directional filter banks. The computation and
storage requirements are the major difficulty in implementing a
vector quantizer. In the full-search algorithm, the computation and
storage complexity is an exponential function of the number of bits
used in quantizing each frame of spectral information. The storage
requirement in multistage vector quantization is less when compared
to full search vector quantization. The coefficients of contourlet
transform are quantized by multistage vector quantization. The
quantized coefficients are encoded by Huffman coding. The results
obtained are tabulated and compared with existing wavelet-based
techniques.
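The multistage (here two-stage) vector quantization step can be sketched minimally: the first stage quantizes the vector, the second quantizes the residual, so two small codebooks stand in for one large one. The tiny hand-made codebooks below are assumptions for illustration; real coders train them (e.g. with the LBG algorithm) on the contourlet coefficients.

```python
# Two-stage vector quantization sketch with hand-made codebooks.

def nearest(vec, codebook):
    """Index of the codeword closest to `vec` in squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(vec, codebook[i])))

def msvq_encode(vec, stage1, stage2):
    """Stage 1 quantizes the vector; stage 2 quantizes the residual."""
    i1 = nearest(vec, stage1)
    residual = [a - b for a, b in zip(vec, stage1[i1])]
    i2 = nearest(residual, stage2)
    return i1, i2

def msvq_decode(i1, i2, stage1, stage2):
    """Reconstruction is the sum of the two stage codewords."""
    return [a + b for a, b in zip(stage1[i1], stage2[i2])]

stage1 = [[0.0, 0.0], [4.0, 4.0], [8.0, 0.0]]               # coarse codebook
stage2 = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]  # residual codebook
i1, i2 = msvq_encode([5.1, 3.9], stage1, stage2)
rec = msvq_decode(i1, i2, stage1, stage2)
```

This is also where the storage saving comes from: codebooks of sizes 3 and 4 address 12 reconstruction points while storing only 7 codewords, and the search cost is the sum, not the product, of the stage sizes.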
Abstract: In this work we study, analytically and numerically, the
performance of the mean heave motion of an oscillating water
column (OWC) coupled with the governing equation of the ocean
waves spreading due to the width variation of an open parabolic
channel with constant depth. This paper assumes that the ocean
wave propagation satisfies a shallow-flow condition. In order to
verify the effect of the waves on the OWC, we first establish the
analytical model in non-dimensional form based on the energy
equation. The proposed wave-power system has two aims: one is to
perturb the ocean waves, as a consequence of the channel shape, so
as to concentrate the maximum ocean wave amplitude in the
neighborhood of the OWC; the second is to determine the pressure
and volume oscillation of the air inside the compression chamber.
Abstract: This paper presents two new efficient algorithms for
contour approximation. The proposed algorithms are compared with
the Ramer (good quality), Triangle (faster) and Trapezoid (fastest)
methods, which are briefly described. The Cartesian coordinates of
an input contour are processed so that, finally, the contour is
represented by a set of selected vertices of its edge. The paper
presents the main idea of the analyzed procedures for contour
compression. For comparison, the mean square error and
signal-to-noise ratio criteria are used. The computational time of
the analyzed methods is estimated as a function of the number of
numerical operations. Experimental results are obtained in terms of
image quality, compression ratio, and speed. The main advantage of
the proposed algorithms is the small number of arithmetic
operations compared to existing algorithms.
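For concreteness, here is a pure-Python sketch of the Ramer (Ramer-Douglas-Peucker) baseline the paper compares against: recursively keep the vertex farthest from the current chord while its deviation exceeds a tolerance. The sample contour and tolerance are invented for illustration; the Triangle and Trapezoid variants differ in how deviation is measured.

```python
# Ramer-style recursive contour simplification sketch.
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    den = math.hypot(bx - ax, by - ay)
    return num / den if den else math.hypot(px - ax, py - ay)

def ramer(points, eps):
    """Recursively keep the farthest vertex while it deviates more than eps."""
    if len(points) < 3:
        return list(points)
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [points[0], points[-1]]        # chord is a good enough fit
    left = ramer(points[:idx + 1], eps)       # recurse on both halves
    return left[:-1] + ramer(points[idx:], eps)

contour = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]  # an L-shaped edge
approx = ramer(contour, 0.1)                        # keeps only the corner
```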
Abstract: We study the performance of a compressed
beamforming-weights feedback technique in a generalized triangular
decomposition (GTD) based MIMO system. GTD is a beamforming
technique that enjoys QoS flexibility. The technique, however,
performs at its optimum only when full knowledge of the channel
state information (CSI) is available at the transmitter, which is
impossible in a real system with channel estimation errors and
limited feedback. We suggest a way to implement quantized
beamforming-weights feedback, which can significantly reduce the
feedback data, in a GTD-based MIMO system, and investigate the
system's performance. Interestingly, we find that compressed
beamforming-weights feedback does not degrade the BER
performance of the system at low input power, while channel
estimation error and quantization do. In comparison, GTD is more
sensitive to compression and quantization, while SVD is more
sensitive to channel estimation error. We also explore the
performance of a GTD-based MU-MIMO system, and find that the
BER performance starts to degrade significantly at around -20 dB
channel estimation error.
Abstract: In this paper a novel scheme for watermarking digital
audio during its compression to MPEG-1 Layer III format is
proposed. For this purpose we slightly modify some selected
MDCT coefficients that are used during the MPEG audio
compression procedure. Due to the possibility of modifying different
MDCT coefficients, there will be different choices for embedding the
watermark into audio data, considering robustness and transparency
factors. Our proposed method uses a genetic algorithm to select the
best coefficients to embed the watermark. This genetic selection is
done according to the parameters that are extracted from the
perceptual content of the audio to optimize the robustness and
transparency of the watermark. On the other hand, watermark
security is increased by the random nature of the genetic selection.
The information about the selected MDCT coefficients that carry
the watermark bits is saved in a database for future extraction of the
watermark. The proposed method is suitable for online MP3 stores
to trace illegal copies of musical artworks. Experimental results
show that the detection ratio of the watermarks at a bitrate of
128 kbps remains above 90% while the inaudibility of the
watermark is preserved.
Abstract: Recently, data mining has been applied to scientific bibliographic databases to analyze the pathways of knowledge or the core scientific relevance of a Nobel laureate or a country. This specific case of data mining has been named citation mining; it is the integration of citation bibliometrics and text mining. In this paper we present an improved web implementation of statistical physics algorithms to perform the text mining component of citation mining. In particular, we use an entropy-like distance between compressed texts as an indicator of the similarity between them. Finally, we have included the recently proposed h index to characterize scientific production. We have used this web implementation to identify users, applications and the impact of the Mexican scientific institutions located in the State of Morelos.
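A compression-based, entropy-like text distance of this kind can be sketched with the normalized compression distance (NCD) over a stock compressor; the use of zlib and the sample strings below are assumptions for illustration, not the paper's exact measure.

```python
# Normalized compression distance (NCD) sketch using zlib.
import zlib

def clen(s):
    """Compressed length of a string, in bytes."""
    return len(zlib.compress(s.encode("utf-8"), 9))

def ncd(x, y):
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
    Values near 0 mean similar texts; values near 1 mean unrelated texts."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Invented sample "abstracts": a and b are near-duplicates, c is unrelated.
a = "statistical physics of complex networks and citation analysis " * 4
b = "statistical physics of complex networks and citation studies " * 4
c = "experimental enzymology of plant cell membranes in vitro assays " * 4
```

The intuition: if y shares most of its structure with x, compressing the concatenation xy costs little more than compressing x alone, so the distance is small.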
Abstract: In this work we develop the concept of supercompression,
i.e., compression on top of the compression standard in use; in this
setting the two compression ratios multiply. Supercompression is
based on super-resolution: it is a data compression technique that
superposes spatial image compression on top of bit-per-pixel
compression to achieve very high compression ratios. If the
compression ratio is very high, a convolutive mask inside the
decoder restores the edges, eliminating the blur. Finally, both the
encoder and the complete decoder are implemented on
General-Purpose computation on Graphics Processing Units
(GPGPU) cards. Specifically, the mentioned mask is coded inside
the texture memory of a GPGPU.
Abstract: In this work a new method for low-complexity image
coding is presented that permits different settings and great
scalability in the generation of the final bit stream. The method is a
continuous-tone still-image compression system that combines
lossy and lossless compression using finite-arithmetic reversible
transforms. Both the color-space transformation and the wavelet
transformation are reversible. The transformed coefficients are
coded by a coding system based on a subdivision into smaller
components (CFDS), similar to bit-importance codification. The
subcomponents so obtained are reordered by a highly configurable
alignment system that, depending on the application, makes it
possible to reconfigure the elements of the image and obtain
different importance levels from which the bit stream will be
generated. The subcomponents of each importance level are coded
using a variable-length entropy coding system (VBLm) that permits
the generation of an embedded bit stream. This bit stream by itself
codes a compressed still image. Moreover, applying a packing
system to the bit stream after the VBLm yields a final, highly
scalable bit stream consisting of a basic image level and one or
several improvement levels.
Abstract: In this paper, the effects of the thermodynamic,
hydrodynamic and geometric characteristics of an air-cooled
condenser on the COP of a vapor compression cycle are investigated
for a fixed condenser facing surface area. The system uses a scroll
compressor and is modeled with thermodynamic and heat transfer
equations in Matlab. The working refrigerant is R134a, whose
thermodynamic properties are obtained from the Engineering
Equation Solver software. The simulation shows that the vapor
compression cycle can be designed in different configurations with
different COPs, and economical, optimal working conditions can be
obtained by considering these parameters.
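For reference, with the standard ideal vapor-compression state points (1: compressor inlet, 2: compressor outlet, 3: condenser outlet, 4: evaporator inlet) and specific enthalpies h, the COP being traded off against condenser design is (the state numbering is an assumption following the usual convention, not necessarily the paper's notation):

```latex
\mathrm{COP} \;=\; \frac{\dot Q_{\mathrm{evap}}}{\dot W_{\mathrm{comp}}}
             \;=\; \frac{h_1 - h_4}{h_2 - h_1}
```

Condenser geometry enters through state 3: a condenser that rejects heat more effectively lowers the condensing pressure and $h_3 = h_4$, raising the COP.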
Abstract: The Pulsed Compression Reactor promises to be a
compact, economical and energy efficient alternative to conventional
chemical reactors.
In this article, the production of synthesis gas using the Pulsed
Compression Reactor is investigated, both experimentally and with
simulations. The experiments use a single-shot reactor, which
replicates a representative single reciprocation of the Pulsed
Compression Reactor with great control over the reactant
composition, reactor temperature and pressure, and temperature
history. Simulations use a relatively simple method that combines
different models for the chemistry and thermodynamic properties of
the species in the reactor. Simulation results show very good
agreement with the experimental data, and give great insight into
the reaction processes that occur within the cycle.
Abstract: The greenhouse effect and limitations on carbon dioxide
emissions concern engine makers, and future internal combustion
engines must move toward substantially improved thermal
efficiency. Homogeneous charge compression ignition (HCCI) is an
alternative high-efficiency combustion technology that reduces
exhaust emissions and fuel consumption. However, there are still
tough challenges in the successful operation of HCCI engines, such
as controlling the combustion phasing, extending the operating
range, and high unburned hydrocarbon and CO emissions. HCCI
combined with ethanol as an alternative fuel is one way to explore
new frontiers of internal combustion engines with an eye towards
maintaining their sustainability. This study was carried out to extend
the database of knowledge about HCCI with ethanol as a fuel.
Abstract: This paper concerns the experimental and numerical
investigation of the energy absorption and axial tearing behaviour
of aluminium 6060 circular thin-walled tubes under static axial
compression. The tubes were received in the T66 heat-treatment
condition with a fixed outer diameter of 42 mm, thickness of
1.5 mm and length of 120 mm. The primary variables are the
conical die angles (15°, 20° and 25°). Numerical simulations are
carried out with the ANSYS/LS-DYNA software tool to investigate
the effect of friction between the tube and the die.
Abstract: While compressing text files is useful, compressing
still image files is almost a necessity. A typical image takes up much
more storage than a typical text message and without compression
images would be extremely clumsy to store and distribute. The
amount of information required to store pictures on modern
computers is quite large in relation to the amount of bandwidth
commonly available to transmit them over the Internet and in
applications. Image compression addresses the problem of reducing
the amount of data required to represent a digital image. The
performance of any image compression method can be evaluated
by measuring the root-mean-square error (RMSE) and the peak
signal-to-noise ratio (PSNR). The method of image compression
analyzed in this paper is based on the lossy JPEG image
compression technique, the most popular compression technique
for color images. JPEG compression is able to greatly reduce file
size with minimal image degradation by throwing away the least
"important" information. In standard JPEG, both chroma
components are downsampled simultaneously, but in this paper we
compare the results when the compression is done by
downsampling only a single chroma component. We demonstrate
that a higher compression ratio is achieved when the
chrominance-blue component is downsampled than when the
chrominance-red component is downsampled, whereas the peak
signal-to-noise ratio is higher when the chrominance-red component
is downsampled. In particular we use hats.jpg as a demonstration of
JPEG compression with a low-pass filter and show that the image is
compressed with barely any visual differences under both methods.
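The chroma-downsampling step and its PSNR scoring can be sketched in pure Python: 2x2 block averaging for the downsample, replication for the upsample, and RMSE/PSNR for evaluation. The 4x4 sample channel below is invented for illustration; the paper's experiments use the hats.jpg image.

```python
# Chroma downsample/upsample round-trip scored with PSNR.
import math

def down_up(ch):
    """2:1 downsample by 2x2 block averaging, then upsample by replication."""
    h, w = len(ch), len(ch[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            avg = (ch[y][x] + ch[y][x+1] + ch[y+1][x] + ch[y+1][x+1]) / 4.0
            for dy in (0, 1):
                for dx in (0, 1):
                    out[y+dy][x+dx] = avg
    return out

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size channels."""
    mse = sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    mse /= len(a) * len(a[0])
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

# An invented, smoothly varying chroma channel (chroma is usually smooth,
# which is why downsampling it costs little visual quality).
cb = [[100, 102, 110, 112],
      [101, 103, 111, 113],
      [140, 142, 150, 152],
      [141, 143, 151, 153]]
quality = psnr(cb, down_up(cb))
```

Running this comparison separately on the Cb and Cr channels of a real image is the experiment the abstract describes: the channel that is downsampled bears the reconstruction error, while the untouched channel keeps full fidelity.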
Abstract: The algorithm represents the DCT coefficients so as to concentrate signal energy, and proposes combination and dictator steps to eliminate the correlation within the same-level subband when encoding DCT-based images. This work adopts the DCT and modifies the SPIHT algorithm to encode the DCT coefficients. The proposed algorithm also provides an enhancement function at low bit rates to improve perceptual quality. Experimental results indicate that the proposed technique improves the quality of the reconstructed image in terms of PSNR, with perceptual results close to JPEG2000 at the same bit rate.
Abstract: Multimedia information availability has increased
dramatically with the advent of video broadcasting on handheld
devices. But with this availability comes the problem of maintaining
the security of information that is displayed in public. ISMA Encryption
and Authentication (ISMACryp) is one of the chosen technologies for
service protection in DVB-H (Digital Video Broadcasting-
Handheld), the TV system for portable handheld devices. The
ISMACryp content is encoded with H.264/AVC (advanced video
coding), while all structural data is left as it is. Two modes of
ISMACryp are available: CTR (counter) mode and CBC (cipher
block chaining) mode. Both modes are based on the 128-bit AES
algorithm. The AES algorithm is complex and requires a long
execution time, which is not suitable for real-time applications such
as live TV. The proposed system aims to gain a deep understanding
of video data security in multimedia technologies and to provide
security for real-time video applications using selective encryption
of H.264/AVC. Five levels of security are proposed in this paper,
based on the content of the NAL unit in the Constrained Baseline
profile of H.264/AVC. The selective encryption at the different
levels covers the intra-prediction mode, residue data, inter-prediction
mode, or motion vectors only. Experimental results show that the
fifth level, which is ISMACryp, provides a higher level of security
at the cost of more encryption time, while the first level provides a
lower level of security, encrypting only motion vectors, with a
lower execution time and without compromising the compression
or the quality of the visual content. The encryption scheme adds
little cost to the compression process and keeps the file format
unchanged, with some direct operations still supported. Simulations
were carried out in Matlab.
Abstract: PCCI engines can reduce NOx and PM emissions
simultaneously without sacrificing thermal efficiency, but a low
combustion temperature resulting from early fuel injection, and
ignition occurring prior to TDC, can cause higher THC and CO
emissions and fuel consumption. In conclusion, it was found that
PCCI combustion achieved by the 2-stage injection strategy with
optimized calibration factors (e.g. EGR rate, injection pressure,
swirl ratio, intake pressure, injection timing) can reduce NOx and
PM emissions simultaneously. This research is expected to provide
valuable information conducive to the development of an innovative
combustion engine that can meet upcoming stringent emission
standards.