Abstract: In this paper, a semi-fragile watermarking scheme is proposed for color image authentication. In this scheme, the color image is first transformed from the RGB to the YST color space, which is well suited to watermarking color media. Each channel is divided into 4×4 non-overlapping blocks, and each of its 2×2 sub-blocks is selected. The embedding space, which will hold the authentication and recovery information, is created by setting the two LSBs of each selected sub-block to zero. For verification, authentication and parity bits, denoted 'a' and 'p', are computed for each 2×2 sub-block. For recovery, the intensity mean of each 2×2 sub-block is computed and encoded into six to eight bits, depending on the channel. The sub-block size is important for correct localization and fast computation. For watermark distribution, a 2D torus automorphism is applied with a private key to obtain a secure mapping of blocks. The perceptibility of the watermarked image is quite reasonable, both subjectively and objectively. Our scheme is oblivious, localizes tampering correctly, and is able to recover the original work with probability near one.
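As an illustration of the embedding-space and recovery steps described above, a minimal sketch in Python; the helper names, the choice of which mean bits to keep, and the bit layout are assumptions, not the paper's exact scheme:

```python
# Sketch: zero the two LSBs of a 2x2 sub-block to create embedding
# space, then encode the sub-block's intensity mean as recovery bits.

def clear_two_lsbs(subblock):
    """Create embedding space by zeroing the two LSBs of each pixel."""
    return [p & 0b11111100 for p in subblock]

def subblock_mean(subblock):
    """Integer intensity mean of the 2x2 sub-block."""
    return sum(subblock) // len(subblock)

def mean_to_bits(mean, n_bits=6):
    """Encode the 8-bit mean into its n_bits most significant bits
    (six to eight bits depending on the channel)."""
    return [(mean >> (7 - i)) & 1 for i in range(n_bits)]

subblock = [200, 201, 198, 203]
space = clear_two_lsbs(subblock)
recovery_bits = mean_to_bits(subblock_mean(subblock), n_bits=6)
```

The recovery bits would then be written into the cleared LSBs of a mapped partner block.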
Abstract: In this paper, we consider a new particle filter inspired
by biological evolution. In the standard particle filter, a resampling
scheme is used to reduce the degeneracy phenomenon and improve
estimation performance. Unfortunately, it can also cause the
undesired particle deprivation problem. In order to
overcome this problem of the particle filter, we propose a novel
filtering method called the genetic filter. In the proposed filter, we
embed the genetic algorithm into the particle filter and overcome the
problems of the standard particle filter. The validity of the proposed
method is demonstrated by computer simulation.
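A minimal sketch of the idea of embedding genetic operators into a particle filter's resampling stage; the one-dimensional state, operator choices, and probabilities are illustrative assumptions, not the paper's exact genetic filter:

```python
import random

# Sketch: standard multinomial resampling followed by crossover and
# mutation, which restore the diversity lost in resampling and so
# counter particle deprivation.

def resample(particles, weights):
    """Standard multinomial resampling."""
    return random.choices(particles, weights=weights, k=len(particles))

def genetic_step(particles, crossover_p=0.5, mutation_p=0.1, sigma=0.05):
    """Arithmetic crossover blends pairs; Gaussian mutation jitters states."""
    out = []
    for x in particles:
        if random.random() < crossover_p:
            mate = random.choice(particles)
            x = 0.5 * (x + mate)            # arithmetic crossover
        if random.random() < mutation_p:
            x += random.gauss(0.0, sigma)   # mutation
        out.append(x)
    return out

random.seed(0)
particles = [0.0, 1.0, 2.0, 3.0]
weights = [0.1, 0.2, 0.3, 0.4]
new_particles = genetic_step(resample(particles, weights))
```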
Abstract: Digital watermarking has become an important technique for copyright protection, but its robustness against attacks remains a major problem. In this paper, we propose a normalization-based robust image watermarking scheme. In the proposed scheme, the original host image is first normalized to a standard form. The Zernike transform is then applied to the normalized image to calculate Zernike moments. Dither modulation is adopted to quantize the magnitudes of the Zernike moments according to the watermark bit stream. The watermark extraction method is blind. Security analysis and false-alarm analysis are then performed. The quality degradation of the watermarked image caused by the embedded watermark is visually transparent. Experimental results show that the proposed scheme is highly robust against various image processing operations and geometric attacks.
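The dither-modulation step can be sketched as follows; the quantization step size is an illustrative assumption, not the paper's parameter:

```python
# Sketch of dither modulation on a moment magnitude: the magnitude is
# quantized to one of two interleaved lattices selected by the
# watermark bit; extraction is blind nearest-lattice detection.

def dm_embed(magnitude, bit, step=2.0):
    """Quantize the magnitude to the lattice associated with the bit."""
    dither = 0.0 if bit == 0 else step / 2.0
    return round((magnitude - dither) / step) * step + dither

def dm_extract(magnitude, step=2.0):
    """Blind extraction: pick the lattice closest to the magnitude."""
    d0 = abs(magnitude - dm_embed(magnitude, 0, step))
    d1 = abs(magnitude - dm_embed(magnitude, 1, step))
    return 0 if d0 <= d1 else 1

m1 = dm_embed(7.3, 1)     # lands on the bit-1 lattice
bit = dm_extract(m1)      # recovered without the original image
```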
Abstract: In this work, we present, to the best of our knowledge for the first time, an efficient digital watermarking scheme for MPEG Audio Layer 3 files that operates directly in the compressed data domain, while manipulating the time and subband/channel domain. In addition, it does not need the original signal to detect the watermark. Our scheme was implemented with special care for the efficient use of two limited computer-system resources: time and space. It offers the industrial user watermark embedding and detection in time comparable to the real playing time of the original audio file, which depends on the MPEG compression, while the end user/audience faces no artifacts or delays when hearing the watermarked audio file. Furthermore, it overcomes the vulnerability of algorithms operating in the PCM data domain to compression/recompression attacks, as it places the watermark in the scale-factor domain and not in the digitized sound data. The strength of our scheme, which allows it to be used successfully for both authentication and copyright protection, relies on the fact that users establish ownership of the audio file not simply by detecting the bit pattern that comprises the watermark itself, but by showing that the legal owner knows a hard-to-compute property of the watermark.
Abstract: Faults in a network may take various forms, such as hardware/software errors and vertex/edge faults. The folded hypercube is a well-known variation of the hypercube structure and can be constructed from a hypercube by adding a link between every pair of nodes with complementary addresses. Let FFv (respectively, FFe) be the set of faulty nodes (respectively, faulty links) in an n-dimensional folded hypercube FQn. Hsieh et al. have shown that FQn - FFv - FFe for n ≥ 3 contains a fault-free cycle of length at least 2^n - 2|FFv|, under the constraints that (1) |FFv| + |FFe| ≤ 2n - 4 and (2) every node in FQn is incident to at least two fault-free links. In this paper, we further consider the constraint |FFv| + |FFe| ≤ 2n - 3. We prove that FQn - FFv - FFe for n ≥ 5 still has a fault-free cycle of length at least 2^n - 2|FFv|, under the constraints (1) |FFv| + |FFe| ≤ 2n - 3, (2) |FFe| ≥ n + 2, and (3) every vertex is still incident to at least two links.
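A small sketch of the folded hypercube construction described above (the helper name is illustrative):

```python
# Sketch of FQ_n: the n-dimensional hypercube plus a link between every
# pair of nodes with complementary addresses.

def folded_hypercube_edges(n):
    """Edge set of FQ_n over nodes 0 .. 2^n - 1."""
    edges = set()
    mask = (1 << n) - 1
    for v in range(1 << n):
        for i in range(n):
            u = v ^ (1 << i)                  # ordinary cube link
            edges.add((min(u, v), max(u, v)))
        u = v ^ mask                          # complement link
        edges.add((min(u, v), max(u, v)))
    return edges

# FQ_n is (n + 1)-regular, so it has 2^n * (n + 1) / 2 edges.
E = folded_hypercube_edges(3)
```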
Abstract: More recent satellite projects and programs make
extensive use of real-time embedded systems. 16-bit processors
which meet the Mil-Std-1750 standard architecture have been used in
on-board systems. Most of the Space Applications have been written
in ADA. Looking ahead, 32-bit/64-bit processors are
needed in the area of spacecraft computing, and an effort is
therefore desirable to study and survey 64-bit architectures for space
applications. This will also result in significant technology
development in terms of VLSI and software tools for ADA (as the
legacy code is in ADA).
There are several basic requirements for a special processor for
this purpose. They include Radiation Hardened (RadHard) devices,
very low power dissipation, compatibility with existing operational
systems, scalable architectures for higher computational needs,
reliability, higher memory and I/O bandwidth, predictability, a real-time
operating system, and manufacturability of such processors.
Further on, these may include selection of FPGA devices, selection
of EDA tool chains, design flow, partitioning of the design, pin
count, performance evaluation, timing analysis etc.
This project presents a brief survey of 32- and 64-bit processors
readily available in the market and the design/fabrication of a 64-bit
RISC processor, named RISC MicroProcessor, with the added
functionality of an extended double-precision floating-point unit
and a 32-bit signal processing unit acting as co-processors. In this
paper, we emphasize the ease and importance of using an open core
(the OpenSparc T1 Verilog RTL) and open-source EDA tools such as
Icarus to develop FPGA-based prototypes quickly. Commercial tools
such as Xilinx ISE for synthesis are also used when appropriate.
Abstract: Clustering algorithms help to understand the hidden
information present in datasets. A dataset may contain intrinsic and
nested clusters, the detection of which is of utmost importance. This
paper presents a Distributed Grid-based Density Clustering algorithm
capable of identifying arbitrarily shaped embedded clusters as well as
multi-density clusters over large spatial datasets. For handling
massive datasets, we implemented our method using a 'shared-nothing'
architecture in which multiple computers are interconnected
over a network. Experimental results are reported to establish the
superiority of the technique in terms of scale-up, speedup as well as
cluster quality.
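A single-machine sketch of grid-based density clustering of the kind described; the cell size, density threshold, and 8-connected merging rule are illustrative assumptions, and the distributed version would partition the grid across the shared-nothing nodes:

```python
from collections import defaultdict

# Sketch: hash points into grid cells, keep dense cells, and flood-fill
# over adjacent dense cells to form clusters of arbitrary shape.

def grid_density_clusters(points, cell=1.0, min_pts=3):
    grid = defaultdict(list)
    for x, y in points:
        grid[(int(x // cell), int(y // cell))].append((x, y))
    dense = {c for c, pts in grid.items() if len(pts) >= min_pts}
    clusters, seen = [], set()
    for start in dense:
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:                      # flood-fill over dense cells
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            comp.append(cur)
            cx, cy = cur
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in dense and nb not in seen:
                        stack.append(nb)
        clusters.append(comp)
    return clusters

points = [(0.1, 0.1), (0.2, 0.3), (0.4, 0.2),   # dense cell (0, 0)
          (1.1, 0.2), (1.3, 0.4), (1.2, 0.1),   # adjacent dense cell (1, 0)
          (5.0, 5.0)]                            # isolated noise point
clusters = grid_density_clusters(points)
```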
Abstract: In this paper we present a novel technique for data
hiding in binary document images. We use the concept of entropy in
order to identify document-specific, least distortive areas throughout
the binary document image. The document image is treated as any
other image and the proposed method utilizes the standard document
characteristics for the embedding process. The proposed method
minimizes perceptual distortion due to embedding and allows
watermark extraction without requiring any side information
at the decoder end.
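A minimal sketch of the entropy computation used to rank candidate embedding areas; the block representation and the ranking order are assumptions, not the paper's exact selection criterion:

```python
import math
from collections import Counter

# Sketch: compute the local Shannon entropy of each candidate block of
# a binary image and order blocks by it for site selection.

def block_entropy(block):
    """Shannon entropy (bits) of a flat list of binary pixels."""
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in Counter(block).values())

def rank_blocks(blocks):
    """Indices of blocks ordered from lowest to highest entropy."""
    return sorted(range(len(blocks)), key=lambda i: block_entropy(blocks[i]))

uniform = [0] * 16        # all-white block: entropy 0
mixed = [0, 1] * 8        # balanced block: entropy 1 bit
order = rank_blocks([mixed, uniform])
```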
Abstract: A novel low-cost impedance control structure is
proposed for monitoring the contact force between end-effector and
environment without installing an expensive force/torque sensor.
Theoretically, the end-effector contact force can be estimated from the
superposition of the joint control torques. There is a nonlinear
matrix mapping between the joint motor control inputs and the
end-effector actuating force/torque vector. This new force control
structure can be implemented based on this estimated mapping matrix.
First, the robot end-effector is manipulated to specified positions, then
the force controller is actuated based on the Hall-sensor current
feedback of each joint motor. The model-free fuzzy sliding-mode
control (FSMC) strategy is employed to design the position and force
controllers, respectively. All the hardware circuits and software
control programs are designed on an Altera Nios II embedded
development kit to constitute an embedded system structure for a
retrofitted Mitsubishi 5-DOF robot. Experimental results show that the PI
and FSMC force control algorithms achieve the contact-force
monitoring objective reasonably well with this hardware control structure.
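The torque-to-force estimation rests on the static relation τ = Jᵀ F, so the contact force can be recovered from joint torques read off the motor currents. A minimal two-link planar sketch; the link lengths and the closed-form 2×2 solve are illustrative assumptions:

```python
import math

# Sketch: build the Jacobian of a planar 2-link arm and solve
# J^T F = tau for the end-effector contact force.

def jacobian_2link(q1, q2, l1=0.3, l2=0.25):
    """Geometric Jacobian of a planar 2-link arm."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [l1 * c1 + l2 * c12, l2 * c12]]

def estimate_force(tau, J):
    """Solve J^T F = tau for the 2x2 case by Cramer's rule."""
    a, b = J[0][0], J[1][0]   # first row of J^T
    c, d = J[0][1], J[1][1]   # second row of J^T
    det = a * d - b * c
    fx = (tau[0] * d - tau[1] * b) / det
    fy = (a * tau[1] - c * tau[0]) / det
    return fx, fy

# Forward check: torques produced by a known contact force.
J = jacobian_2link(0.5, 0.7)
force = (1.0, 2.0)
tau = (J[0][0] * force[0] + J[1][0] * force[1],
       J[0][1] * force[0] + J[1][1] * force[1])
fx, fy = estimate_force(tau, J)
```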
Abstract: Promoting safer driver behavior is the main goal of this paper. It is a fact that drivers behave relatively more safely when they are being monitored. Thus, in this paper, we propose a monitoring system that reports specific driving events, as well as potentially aggressive events, for estimation of driving performance. Our driving monitoring system is composed of two parts. The first part is the in-vehicle embedded system, which comprises a GPS receiver, a two-axis accelerometer, a radar sensor, an OBD interface, and a GPRS modem. The design considerations that led to this architecture are described in this paper. The second part is a web server, where an adaptive hierarchical fuzzy system is proposed to classify driving performance based on the data sent by the in-vehicle embedded system and the data provided by a geographical information system (GIS). Our system is robust, inexpensive, and small enough to fit inside a vehicle without distracting the driver.
Abstract: To increase the reliability of a face recognition system, the
system must be able to distinguish a real face from a copy of a face, such
as a photograph. In this paper, we propose a fast and memory-efficient
method of live face detection for embedded face recognition system,
based on the analysis of the movement of the eyes. We detect eyes in
sequential input images and calculate the variation of each eye region to
determine whether the input face is a real face or not. Experimental
results show that the proposed approach is competitive and promising
for live face detection.
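A minimal sketch of the liveness cue described above; the frame representation, variation measure, and threshold are illustrative assumptions:

```python
# Sketch: a printed photograph yields near-zero frame-to-frame
# variation in the eye region, while blinking produces a measurable
# change. Frames are flat grayscale pixel lists of the eye region.

def region_variation(frames):
    """Mean absolute pixel change of the eye region between frames."""
    total, count = 0, 0
    for prev, cur in zip(frames, frames[1:]):
        total += sum(abs(a - b) for a, b in zip(prev, cur))
        count += len(cur)
    return total / count

def is_live(frames, threshold=2.0):
    """Declare a real face when the eye region actually varies."""
    return region_variation(frames) > threshold

photo = [[100, 100, 100, 100]] * 3          # static photo copy
blink = [[100, 100, 100, 100],
         [30, 25, 35, 30],                  # eye closed
         [100, 100, 100, 100]]
```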
Abstract: A new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. This coding presents a continuous-tone still-image compression system that combines lossy and lossless compression by making use of finite-arithmetic reversible transforms. Both the color-space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by means of a coding system based on a subdivision into smaller components (CFDS), similar to bit-significance codification. The sub-components so obtained are reordered by means of a highly configurable alignment system that, depending on the application, makes it possible to reconfigure the elements of the image and obtain different levels of importance from which the bit stream will be generated. The sub-components of each level of importance are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream by itself already codes a compressed still image. Moreover, applying a packing system to the bit stream after the VBLm yields a final, highly scalable bit stream comprising a basic image level and one or several enhancement levels.
Abstract: The major objective of this study is to understand the
potential of newly fabricated equipment for studying the thermal
properties of nonwoven textile fabrics treated with aerogel at subzero
temperatures. Thermal conductivity was calculated using the
empirical relation of Fourier's law. The relationship between the
thermal conductivity and the thermal resistance of the samples was
studied at various environmental temperatures (set in the
climate temperature system between +25°C and -25°C). The newly
fabricated equipment was found to be suitable for measuring at
subzero temperatures. This field of measurement is being developed
and will be the subject of further research which will be more suitable
for measurement of the various thermal characteristics.
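The Fourier's-law calculation can be sketched as follows; the sample values are illustrative, not measurements from the study:

```python
# Sketch: thermal conductivity from Fourier's law, k = Q * L / (A * dT),
# and the corresponding thermal resistance R = L / k.

def thermal_conductivity(q_watts, thickness_m, area_m2, delta_t_k):
    """Thermal conductivity k in W/(m*K) from steady-state heat flow."""
    return q_watts * thickness_m / (area_m2 * delta_t_k)

def thermal_resistance(thickness_m, k):
    """Thermal resistance R = L / k in m^2*K/W."""
    return thickness_m / k

k = thermal_conductivity(q_watts=0.5, thickness_m=0.004,
                         area_m2=0.01, delta_t_k=5.0)
r = thermal_resistance(0.004, k)
```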
Abstract: The huge development of new technologies and the
emergence of ever more sophisticated open communication systems
create a new challenge: protecting digital content from
piracy. Digital watermarking is a recent research axis and a new
technique suggested as a solution to these problems. This technique
consists in inserting identification information (a watermark) into
digital data (audio, video, images, databases...) in an invisible and
indelible manner, in such a way as not to degrade the original medium's
quality. Moreover, we must be able to correctly extract the
watermark despite deterioration of the watermarked medium (i.e.,
attacks). In this paper, we propose a system for watermarking satellite
images. We chose to embed the watermark in the frequency domain,
specifically the discrete wavelet transform (DWT) domain. We applied our
algorithm to satellite images of central Tunisia. The experiments
show satisfying results. In addition, our algorithm showed an
important resistance to different attacks, notably compression
(JPEG, JPEG2000), filtering, histogram manipulation, and
geometric distortions such as rotation, cropping, and scaling.
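A minimal sketch of additive watermark embedding in a wavelet detail subband, using a one-level 1-D Haar transform for brevity; the paper operates on 2-D satellite images, and the gain and ±1 bit mapping are assumptions:

```python
# Sketch: transform, add a signed perturbation per watermark bit to the
# detail coefficients, and invert the transform.

def haar_fwd(x):
    """One-level Haar transform: pairwise averages and differences."""
    approx = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def haar_inv(approx, detail):
    """Exact inverse of haar_fwd."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def embed(signal, bits, alpha=1.0):
    """Add +alpha for bit 1, -alpha for bit 0, per detail coefficient."""
    approx, detail = haar_fwd(signal)
    detail = [d + alpha * (1 if b else -1) for d, b in zip(detail, bits)]
    return haar_inv(approx, detail)

x = [10.0, 12.0, 9.0, 9.0, 14.0, 10.0, 8.0, 12.0]
watermarked = embed(x, bits=[1, 0, 1, 0])
```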
Abstract: With the drastic growth in optical communication
technology, a lossless, low-crosstalk, multifunction optical switch
is most desirable for large-scale photonic networks. To realize such a
switch, we have introduced a new optical switch architecture
that embeds many functions in a single device. The asymmetrical
architecture of the OXADM consists of three parts: selective ports,
add/drop operation, and path routing. A selective port permits only the
wavelength of interest to pass through and acts as a filter, while the
add and drop functions are implemented in the second part of the OXADM
architecture. The signals can then be re-routed to any output port
and/or undergo an accumulation function, which multiplexes all signals
onto a single path that then exits at any output port of interest. This
is done by the path routing operation. The unique features offered by
the OXADM have extended its application to Fiber-to-the-Home (FTTH)
technology, where the OXADM is used as a wavelength management element
in the Optical Line Terminal (OLT). Each port is assigned specific
operating wavelengths, with dynamic routing management
to ensure that no traffic congestion occurs in the OLT.
Abstract: In this paper, we propose a novel dynamic least-cost multicast routing protocol using a hybrid genetic algorithm for IP networks. Our protocol finds the multicast tree with minimum cost subject to delay, degree, and bandwidth constraints. The proposed protocol has the following features: (i) a heuristic local search function has been devised and embedded in the normal genetic operations to increase speed and obtain the optimized tree; (ii) it efficiently handles the dynamic situations that arise from either changes in multicast group membership or node/link failures; (iii) two different crossover and mutation probabilities are used to maintain the diversity of solutions and achieve quick convergence. The simulation results show that our proposed protocol generates dynamic multicast trees with lower cost. The results also show that the proposed algorithm has a better convergence rate, a better dynamic request success rate, and less execution time than other existing algorithms. The effects of the degree and delay constraints on the multicast tree have also been analyzed in terms of search success rate.
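Two of the listed features can be sketched briefly: a fitness that penalizes (rather than discards) trees violating the delay bound, and the dual mutation probabilities. All names and values are illustrative assumptions, not the paper's operators:

```python
import random

# Sketch: constraint-penalized fitness plus two mutation rates, one
# high for diversity and one low for quick convergence.

def fitness(tree_cost, tree_delay, delay_bound, penalty=1000.0):
    """Lower is better; infeasible trees are penalized, not discarded."""
    return tree_cost + (penalty if tree_delay > delay_bound else 0.0)

def mutate(genome, p_high=0.2, p_low=0.02, diversify=True):
    """Flip link-selection bits with a high rate early (diversity) and
    a low rate later (convergence)."""
    p = p_high if diversify else p_low
    return [(g ^ 1) if random.random() < p else g for g in genome]

random.seed(1)
feasible = fitness(40.0, 12.0, delay_bound=20.0)     # 40.0
infeasible = fitness(30.0, 25.0, delay_bound=20.0)   # 1030.0
child = mutate([0, 1, 1, 0, 1], diversify=True)
```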
Abstract: A series of Ti based shape memory alloys with
composition of Ti50Ni49Cr1, Ti50Ni47Cr3 and Ti50Ni45Cr5 were
developed by vacuum arc-melting under a purified argon atmosphere.
The histometric and corrosion evaluation of Ti-Ni-Cr shape memory
alloys has been considered in this research work. The alloys were
developed by vacuum arc melting and implanted subcutaneously in
rabbits for 4, 8 and 12 weeks. Metallic implants were embedded in
order to determine the outcome of implantation on histometric and
corrosion evaluation of Ti-Ni-Cr metallic strips. Encapsulating
membrane formation around the alloys was minimal in the case of all
materials. After histomorphometric analyses it was possible to
demonstrate that there were no statistically significant differences
between the materials. The corrosion rate was also determined in this
study and found to be within the acceptable range. The results showed that the Ti-
Ni-Cr alloy was neither cytotoxic nor caused any systemic reaction in the
living system in any of the tests performed. Implantation shows good
compatibility and the potential for direct use in an in vivo system.
Abstract: Introducing survivability into an embedded real-time system (ERTS) can improve the system's ability to survive. This paper mainly discusses the survivability of ERTSs. The first topic is the origin of survivability for ERTSs. The second is survivability analysis: based on a definition of survivability grounded in a survivability specification, and a division of the entire survivability analysis process for ERTSs, a survivability analysis profile is presented. The quantitative analysis model of this profile is emphasized and illustrated in detail; the quantitative analysis of the system is shown to help evaluate system survivability more accurately. The third is the platform design for survivability analysis: in terms of the profile, the analysis process is encapsulated and assembled into one platform, on which quantification, standardization, and simplification of survivability analysis are all achieved. The fourth is survivability design: according to the characteristics of ERTSs, a strengthened design method is selected to realize system survivability design. Through the analysis of an embedded mobile video-on-demand system, intrusion-tolerant technology is introduced into the whole survivability design.
Abstract: Choosing the right metadata is critical, as good
information (metadata) attached to an image facilitates its
visibility among a pile of other images. The image's value is enhanced
not only by the quality of attached metadata but also by the technique
of the search. This study proposes a technique that is simple but
efficient for predicting a single human image on a website using the
basic image data and the embedded metadata of the image's content
appearing on web pages. The result is very encouraging, with a
prediction accuracy of 95%. This technique may become a great
assist to librarians, researchers and many others for automatically and
efficiently identifying a set of human images out of a greater set of
images.
Abstract: In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to wavelet coefficients prior to the SPIHT encoding algorithm, in order to reach a targeted bit rate with improved perceptual quality, given a fixation point that determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates properties of the human visual system (HVS), which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on (1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions; (2) luminance and contrast masking; and (3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. The experimental results show that our coder demonstrates very good performance in terms of quality measurement.
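A minimal sketch of a foveation weight that decays with eccentricity from the fixation point; the CSF-style constants follow common foveation models and are assumptions, not the paper's fitted parameters:

```python
import math

# Sketch: relative contrast sensitivity is 1 at the fovea and decays
# exponentially with eccentricity, so peripheral wavelet coefficients
# are attenuated before SPIHT-style coding.

def eccentricity_deg(pixel, fixation, viewing_dist_px):
    """Visual eccentricity in degrees of a pixel from the fixation point."""
    d = math.hypot(pixel[0] - fixation[0], pixel[1] - fixation[1])
    return math.degrees(math.atan2(d, viewing_dist_px))

def foveation_weight(ecc_deg, alpha=0.106, e2=2.3, freq_cpd=8.0):
    """Relative sensitivity at eccentricity ecc_deg (1 at the fovea)."""
    return math.exp(-alpha * freq_cpd * ecc_deg / e2)

# A coefficient at the fixation point keeps full weight; a corner
# coefficient is strongly attenuated.
w_center = foveation_weight(eccentricity_deg((256, 256), (256, 256), 1024))
w_edge = foveation_weight(eccentricity_deg((0, 0), (256, 256), 1024))
```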