Abstract: A simulation-based VLSI (Very Large Scale Integration) implementation of the FELICS (Fast Efficient Lossless Image Compression System) algorithm is proposed to provide lossless image compression. The performance of lossless compression is analyzed, and images are compressed without loss of quality before the algorithm is realized in the VLSI domain. FELICS uses a simplified adjusted binary code for image compression; the compressed image is converted to pixels and then implemented in VLSI. These design choices are used to achieve high processing speed while minimizing area and power. The simplified adjusted binary code reduces the number of arithmetic operations and achieves high processing speed. Color-difference preprocessing is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI-based FELICS algorithm provides an effective hardware architecture with a regular, pipelined data flow exploiting four-stage parallelism. With two-level parallelism, consecutive pixels can be classified into even and odd samples, with a dedicated hardware engine for each. The method can be further enhanced by multilevel parallelism.
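The simplified adjusted binary code can be illustrated in software. The sketch below (a minimal Python illustration, not the paper's VLSI design) encodes a value x drawn from a range of n symbols with a truncated ("adjusted") binary code using k or k+1 bits; FELICS additionally rotates the codewords so the shorter ones fall on the more probable mid-range values, which is omitted here:

```python
def adjusted_binary(x, n):
    """Encode x in [0, n-1] with an adjusted (truncated) binary code.

    Codewords use k or k+1 bits, where k = floor(log2(n)); the
    2**(k+1) - n shortest codewords use only k bits.
    """
    k = n.bit_length() - 1              # floor(log2(n)) for n >= 1
    if n == 1 << k:                     # n is a power of two: plain binary
        return format(x, "0{}b".format(k)) if k else ""
    short = (1 << (k + 1)) - n          # number of k-bit codewords
    if x < short:
        return format(x, "0{}b".format(k))
    return format(x + short, "0{}b".format(k + 1))

# For a range of 5 symbols, three codewords need 2 bits and two need 3:
print([adjusted_binary(x, 5) for x in range(5)])
# -> ['00', '01', '10', '110', '111']
```

Because most codewords are shorter than plain fixed-width binary, fewer bits and fewer arithmetic operations are needed per symbol, which is what makes the scheme attractive for a pipelined hardware implementation.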
Abstract: In this paper, we present a robust algorithm to recognize text extracted from grocery product images captured by mobile phone cameras. Recognition of such text is challenging since text in grocery product images varies in size, orientation, style, and illumination, and can suffer from perspective distortion. Pre-processing is performed to make the characters scale- and rotation-invariant. Since text degradations cannot be appropriately described by well-known geometric transformations such as translation, rotation, affine transformation, and shearing, we use all of the character's black pixels as our feature vector. Classification is performed with a minimum distance classifier using the maximum likelihood criterion, which delivers a very promising Character Recognition Rate (CRR) of 89%. We achieve a considerably higher Word Recognition Rate (WRR) of 99% when using lower-level linguistic knowledge about product words during the recognition process.
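Under the common assumption of Gaussian classes with equal, isotropic covariance, the maximum-likelihood decision reduces to choosing the class whose mean feature vector is nearest to the sample, which may be what "minimum distance classifier using the maximum likelihood criterion" refers to. A minimal sketch (with hypothetical toy feature vectors, not the paper's pixel features):

```python
import math

def min_distance_classify(sample, class_means):
    """Assign `sample` to the class whose mean vector is nearest
    in Euclidean distance (a minimum distance classifier)."""
    def dist(mean):
        return math.sqrt(sum((s - v) ** 2 for s, v in zip(sample, mean)))
    return min(class_means, key=lambda label: dist(class_means[label]))

# Hypothetical 3-component class means:
means = {"A": [1.0, 0.0, 1.0], "B": [0.0, 1.0, 0.0]}
print(min_distance_classify([0.9, 0.1, 0.8], means))  # -> A
```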
Abstract: Real-time image and video processing is in demand in many computer vision applications, e.g., video surveillance, traffic management, and medical imaging. These video applications require high computational power, so the optimal solution is the collaboration of a CPU with hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Edge detection is one of the basic building blocks of video and image processing applications and a common block in the pre-processing phase of an image and video processing pipeline. Our approach offloads the Canny edge detection algorithm from the processing system (PS) to the programmable logic (PL), taking advantage of a High-Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration: CPU utilization drops, and the frame rate reaches 60 fps for a 1080p full-HD input video stream.
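The paper's contribution is the hardware offload itself, but the first stage of the Canny pipeline, gradient computation, can be sketched in plain software. The following is a software illustration only (Sobel kernels with the |gx|+|gy| magnitude approximation; the real Canny pipeline continues with non-maximum suppression and hysteresis thresholding), showing the kind of per-pixel stencil computation that maps well onto programmable logic:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude with 3x3 Sobel kernels,
    the first stage of the Canny edge-detection pipeline."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)   # |gx|+|gy| avoids a sqrt
    return out

# A vertical step edge: the response is strong around the edge columns.
img = [[0, 0, 9, 9]] * 4
print(sobel_magnitude(img)[1])  # -> [0, 36, 36, 0]
```

The fixed 3x3 stencil, integer arithmetic, and absence of data-dependent branches are exactly the properties an HLS tool exploits to pipeline one output pixel per clock.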
Abstract: The tomato is a very important crop whose cultivation in the Mediterranean basin is severely affected by the phytoparasitic weed Phelipanche ramosa. The semiarid regions of the world are considered the main areas where this parasitic weed is established, causing heavy infestations, as it is able to produce high numbers of seeds (up to 500,000 per plant) which remain viable for extended periods (more than 20 years). In this paper, we report the results obtained from eleven treatments to control this parasitic weed, including chemical, agronomic, biological, and biotechnological methods, compared with an untreated control under two plowing depths (30 and 50 cm). A split-plot design with three replicates was adopted. In 2014, a trial was performed in Foggia province (southern Italy) on processing tomato (cv Docet) grown in a field infested by Phelipanche ramosa. Tomato seedlings were transplanted on May 5 on a clay-loam soil. During the growing cycle of the tomato crop, at 56-78 and 92 days after transplantation, the number of parasitic shoots that emerged in each plot was recorded. At tomato harvesting, on August 18, the major quantity and quality yield parameters were determined (marketable yield, mean fruit weight, dry matter, pH, soluble solids, and fruit color). All data were subjected to analysis of variance (ANOVA), and the means were compared by Tukey's test. None of the treatments provided complete control of Phelipanche ramosa. However, among the different methods tested, Fusarium, glyphosate, the Radicon biostimulant, and the Red Setter tomato cv (an improved genotype obtained by TILLING technology) under deeper plowing (50 cm depth) proved to mitigate the virulence of the Phelipanche ramosa attacks. It is assumed that these effects can be improved by combining some of these treatments, especially for a gradual and continuing reduction of the parasite's “seed bank” in the soil.
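The trial used a split-plot ANOVA followed by Tukey's test. As a much simpler illustration of the underlying statistic (the one-way case only, not the split-plot design actually used in the study, and with made-up numbers), the ANOVA F ratio compares between-group to within-group variance:

```python
def one_way_anova_F(groups):
    """F statistic for one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two hypothetical treatment groups of yield measurements:
print(one_way_anova_F([[1, 2, 3], [2, 3, 4]]))  # -> 1.5
```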
Abstract: Logistics distributors face the issue of having to provide increasing service levels while being forced to reduce costs at the same time. Same-day delivery, quick order processing, and rapidly growing ranges of articles are only some of the prevailing challenges. One key aspect of the performance of an intra-logistics system is how often, and with what amplitude, congestion and dysfunctions affect the processing operations. By gaining knowledge of the so-called ‘performance availability’ of such a system during the planning stage, over-sizing and waste can be reduced while planning transparency is increased. The state of the art for determining this KPI is simulation studies. However, their structure, and therefore their results, may vary unforeseeably. This article proposes a concept for the establishment of ‘certified’, and hence reliable and comparable, simulation models.
Abstract: Smart Help for persons with disability (PWD) is part of the SMARTDISABLE project, which aims to develop solutions that provide an adequate workplace environment for PWD. It supports the needs of PWD through a smart help facility that gives them access to relevant information and lets them communicate with others effectively and flexibly, and through a smart editor that assists them in their daily work. It assists PWD in knowledge processing and creation and enables them to be productive at the workplace. The technical work of the project involves the design of a technological scenario for Ambient Intelligence (AmI)-based assistive technologies at the workplace: an integrated universal smart solution that suits many different impairment conditions and is designed to empower physically disabled persons (PDP) with the capability to access and effectively utilize ICTs in order to execute knowledge-rich working tasks with minimum effort and a sufficient comfort level. The proposed solution supports voice recognition alongside a normal keyboard and mouse to control the smart help and smart editor, with a dynamic auto-display interface that satisfies the requirements of different PWD groups. In addition, the smart help provides intelligent intervention based on the behavior of PWD, guiding them and warning them about possible mistakes. PWD can communicate with others using Voice over IP controlled by voice recognition. Moreover, an Auto Emergency Help Response assists PWD in case of emergency. The proposed solution is intended to make PWD effective and flexible in the work environment, using voice to conduct their tasks. It aims to provide favorable outcomes that assist PWD at the workplace, along with the opportunity to participate in the PWD assistive technology innovation market, which is still small but rapidly growing, and to upgrade their quality of life at the workplace to a level similar to that of people without disabilities. Finally, the proposed smart help solution is applicable in all workplace settings, including offices, manufacturing, hospitals, etc.
Abstract: Structured Query Language (SQL) is the de facto standard language for accessing and manipulating data in a relational database. Although SQL is simple and powerful, most novice users have trouble with its syntax. Thus, we present an SQL generator tool capable of translating user actions and displaying SQL commands and result sets simultaneously. The tool was developed based on the Model-View-Controller (MVC) pattern, a widely used software design pattern that enforces the separation between the input, processing, and output of an application. Developers take full advantage of it to reduce complexity in architectural design and to increase flexibility and code reuse. In addition, we used white-box testing for code verification in the Model module.
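The MVC separation described above can be sketched minimally. The classes, method names, and the sample action below are hypothetical illustrations, not the tool's actual code:

```python
class Model:
    """Builds SQL from structured actions (hypothetical minimal model)."""
    def select(self, table, columns, where=None):
        sql = "SELECT {} FROM {}".format(", ".join(columns), table)
        if where:
            sql += " WHERE {}".format(where)
        return sql

class View:
    """Displays the generated command (stands in for the GUI)."""
    def show(self, sql):
        return "SQL> " + sql

class Controller:
    """Turns a user action into a model call and a view update."""
    def __init__(self):
        self.model, self.view = Model(), View()
    def on_select(self, table, columns, where=None):
        return self.view.show(self.model.select(table, columns, where))

print(Controller().on_select("students", ["name", "gpa"], "gpa > 3.5"))
# -> SQL> SELECT name, gpa FROM students WHERE gpa > 3.5
```

Keeping SQL construction inside the Model is also what makes white-box testing of that module straightforward: each generation path can be exercised without the GUI.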
Abstract: Creating a database schema is essentially a manual process. From a requirement specification, the information contained within has to be analyzed and reduced into a set of tables, attributes, and relationships. This is a time-consuming process that has to go through several stages before an acceptable database schema is achieved. The purpose of this paper is to implement a Natural Language Processing (NLP) based tool that produces a relational database from a requirement specification. Stanford CoreNLP version 3.3.1 and the Java programming language were used to implement the proposed model. The outcome of this study indicates that a first draft of a relational database schema can be extracted from a requirement specification using NLP tools and techniques with minimum user intervention. Therefore, this method is a step toward a solution that requires little or no user intervention.
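As a toy illustration of the idea (the paper itself relies on the Stanford CoreNLP pipeline rather than hand-written patterns), a single requirement sentence of a fixed shape can be mapped to a draft table with a crude heuristic:

```python
import re

def draft_table(sentence):
    """Toy heuristic: turn 'Each <entity> has <x>, <y> and <z>.' into a
    (table, attributes) pair. A real tool would use a full NLP pipeline
    such as Stanford CoreNLP, as the paper does."""
    m = re.match(r"Each (\w+) has (.+)\.$", sentence)
    if not m:
        return None
    attrs = [re.sub(r"^(a|an)\s+", "", part.strip())
             for part in re.split(r",\s*|\s+and\s+", m.group(2))]
    return m.group(1).lower(), [a for a in attrs if a]

print(draft_table("Each student has a name, an address and a grade."))
# -> ('student', ['name', 'address', 'grade'])
```

A real pipeline would instead use part-of-speech tags and dependency parses, which is what lets it generalize beyond one sentence template.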
Abstract: Floods play a key role in the landform evolution of an area and are likely to alter the topography of the earth’s surface. The present study area, Kota Bharu, which is very prone to floods, extends from upstream of the Kelantan River near Kemubu to the downstream area near Kuala Besar. These flood events, which occur every year in the study area, have a strong bearing on the river's morphological set-up. In the present study, three satellite images from different time periods have been used to document post-flood landform changes. Pre-processing of the images, such as subsetting, geometric correction, and atmospheric correction, was carried out using ENVI 4.5, followed by the analysis. Twenty sets of cross sections were plotted using ERDAS 9.2 and ArcGIS 10 for all three images. The results show a significant change in the length of the cross sections, which suggests that geomorphological processes play a key role in carving and shaping the river banks during floods.
Abstract: An algorithm is a well-defined procedure that takes some input in the form of values, processes them, and produces the desired output. Sorting forms the basis of many other algorithms, such as searching, pattern matching, and digital filters, and finds further applications in database systems, data statistics and processing, and data communications. This paper introduces the “Enhanced Bidirectional Selection” sort, which is bidirectional and stable. It is said to be bidirectional because in each pass it selects two values, the smallest from the front and the largest from the rear, and assigns them to their appropriate locations, thus reducing the number of passes to half the total number of elements compared to selection sort.
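The pass structure described above can be sketched as follows. This is a minimal illustration of the bidirectional selection idea, not necessarily the authors' exact algorithm:

```python
def bidirectional_selection_sort(a):
    """Each pass finds both the smallest and the largest element of the
    unsorted middle section and moves them to the front and the rear,
    so roughly n/2 passes suffice instead of n."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        imin, imax = lo, lo
        for i in range(lo, hi + 1):
            if a[i] < a[imin]:
                imin = i
            if a[i] > a[imax]:
                imax = i
        a[lo], a[imin] = a[imin], a[lo]
        # If the maximum was sitting at position lo, it just moved to imin.
        if imax == lo:
            imax = imin
        a[hi], a[imax] = a[imax], a[hi]
        lo, hi = lo + 1, hi - 1
    return a

print(bidirectional_selection_sort([5, 1, 4, 2, 8, 0, 3]))
# -> [0, 1, 2, 3, 4, 5, 8]
```

The `imax == lo` check is the classic pitfall of double-ended selection: without it, the maximum can be displaced by the first swap and lost. Note also that swap-based selection, as sketched here, does not by itself preserve the relative order of equal elements.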
Abstract: This paper analyzes the role of natural language processing (NLP) in the context of automated data retrieval, automated question answering, and text structuring. NLP techniques are gaining wider acceptance in real-life applications and industrial settings. There are various complexities involved in processing natural-language text in ways that satisfy the needs of decision makers. The paper begins with a description of the qualities of NLP practices, then focuses on the challenges in natural language processing and discusses the major techniques of NLP. The last section describes opportunities and challenges for future research.
Abstract: A cleaner production project was implemented in a bakery. The project is based on substituting the best available technique for an obsolete leaven production technology. The new technology enables the production of durable, high-quality leavens. Moreover, 25% of the flour used as the original raw material can be replaced by unsold pastry from the previous day's production, which was previously disposed of in a waste incineration plant. Besides the environmental benefits resulting from less waste, lower energy consumption, and reductions in sewage water quantity and flour dustiness, there are also significant economic benefits. The payback period of the investment was calculated as about 2.6 years using the static method of financial analysis and 3.5 years using the dynamic method, with an internal rate of return of more than 29%. The projected annual average profit after taxation in the second year of operation was in compliance with the real profit.
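The static payback period and the internal rate of return mentioned above can be computed as follows; the cash-flow figures in the usage example are hypothetical, chosen only so the static payback lands near the reported 2.6 years:

```python
def payback_period(investment, annual_cash_flow):
    """Static payback: years until cumulative cash flow covers the investment
    (no discounting)."""
    return investment / annual_cash_flow

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection on NPV
    (assumes a single sign change in the cash flows)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical: invest 100, earn 38.5 per year -> static payback ~2.6 years.
print(round(payback_period(100.0, 38.5), 1))  # -> 2.6
```

The dynamic (discounted) payback in the abstract is longer than the static one precisely because `npv` shrinks later cash flows, which this sketch makes easy to verify.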
Abstract: Images are an important source of information used as evidence during any investigation process. Their clarity and accuracy are essential and of the utmost importance for any investigation. Images are vulnerable to losing blocks and having noise added to them, either through alteration or when the image was taken initially; therefore, a high-performance image processing system and its implementation are very important from a forensic point of view. This paper focuses on improving the quality of forensic images.
For various reasons, packets that store image data can be corrupted, damaged, or even lost because of noise. For example, sending an image through a wireless channel can cause loss of bits. These types of errors generally degrade the visual display quality of forensic images.
Two image problems, noise and block loss, are covered. Information transmitted through any communication channel may be altered from its original state or lose important data due to channel noise. Therefore, a system is introduced to improve the quality and clarity of forensic images.
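One standard technique for the noise part of the problem is median filtering, which removes impulse ("salt-and-pepper") noise while preserving edges. The sketch below is a generic illustration of that technique, not the developed system described in the abstract:

```python
def median_filter3(img):
    """Apply a 3x3 median filter to a 2-D list of pixel values;
    border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + j][x + i]
                            for j in (-1, 0, 1) for i in (-1, 0, 1))
            out[y][x] = window[4]            # median of the 9 values
    return out

# A single noisy spike in a flat region is removed:
img = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
print(median_filter3(img)[1][1])  # -> 10
```

Unlike a mean filter, the median discards the outlier entirely instead of smearing it into neighboring pixels, which matters when the image is evidence.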
Abstract: Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper, we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by dominant blocks. These dominant blocks are located on the contours and the nearby textures. When the video frames change noticeably, their dominant blocks change, and a key frame can be extracted. The dominant blocks of every frame are computed; feature vectors are then extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate the ranks of sliding windows over these matrices. Finally, the computed ranks are traced to extract the key frames of a video. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
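The approach computes the rank of sliding-window feature matrices via Singular Value Decomposition (counting singular values above a threshold). As a dependency-free stand-in for illustration, the rank of a small matrix can also be obtained by Gaussian elimination; SVD is numerically more robust for noisy feature data, which is presumably why the authors prefer it:

```python
def matrix_rank(rows, eps=1e-9):
    """Rank of a matrix (list of row lists) by Gaussian elimination
    with partial pivoting."""
    m = [row[:] for row in rows]
    rank, col, nrows = 0, 0, len(m)
    ncols = len(m[0]) if m else 0
    while rank < nrows and col < ncols:
        pivot = max(range(rank, nrows), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < eps:
            col += 1                      # no pivot in this column
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rank + 1, nrows):
            f = m[r][col] / m[rank][col]
            for c in range(col, ncols):
                m[r][c] -= f * m[rank][c]
        rank, col = rank + 1, col + 1
    return rank

# Rows of similar frames are nearly dependent, so the rank stays low;
# a new key frame adds an independent row and the rank jumps.
print(matrix_rank([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # -> 2
```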
Abstract: The purpose of this work is to examine a multi-product, multi-stage battery production line and to improve the performance of the assembly line by determining the efficiency of each workstation. Data were collected from every workstation: the throughput rate, the number of operators, and the number of parts that arrive and leave during processing. At least ten samples of arriving and leaving parts were collected so that the data could be analyzed with the chi-squared goodness-of-fit test and queuing theory. The measures from this model served as a comparison with the standard data available in the company. The task time values were validated by comparing them with the task time values in the company database. Performance factors for the multi-product, multi-stage battery production line are reported, along with the efficiency of each workstation.
The total production time for each part can be determined by adding the total task times across workstations. Based on the analysis, improvements should be made to reduce queuing time and increase efficiency; one possible action is to increase the number of operators who manually operate a workstation.
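Queuing theory is used above to analyze each workstation. Assuming a single-server M/M/1 station (an assumption for illustration; the abstract does not state which queue model was fitted), the basic steady-state measures follow directly from the utilization:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics: utilization rho, mean queue length Lq,
    and mean waiting time in queue Wq. Requires arrival < service rate."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("unstable: arrival rate must be below service rate")
    lq = rho ** 2 / (1 - rho)       # mean number of parts waiting
    wq = lq / arrival_rate          # Little's law: Wq = Lq / lambda
    return rho, lq, wq

# Hypothetical workstation: 8 parts/hour arriving, 10 parts/hour served.
rho, lq, wq = mm1_metrics(8.0, 10.0)
print(round(rho, 2), round(lq, 2), round(wq, 3))  # -> 0.8 3.2 0.4
```

These formulas also show why adding an operator helps: raising the effective service rate lowers rho, and Lq falls faster than linearly as rho moves away from 1.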
Abstract: Existing data mining methods cannot be applied directly to spatial data, because spatial data require the consideration of spatial specificities, such as spatial relationships.
This paper focuses on classification with decision trees, one of the data mining techniques. We propose an extension of the C4.5 algorithm for spatial data based on two different approaches: join materialization and querying the different tables on the fly. Similar work has been done on these two main approaches; the first, join materialization, favors processing time at the expense of memory space, whereas the second, querying the different tables on the fly, saves memory space at the expense of processing time.
The modified C4.5 algorithm requires three input tables: a target table, a neighbor table, and a spatial join index that contains the possible spatial relationships between the objects in the target table and those in the neighbor table. The proposed algorithms are applied to a spatial data set in the accidentology domain.
A comparative study of our approach with other work on classification by spatial decision trees is detailed.
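The base C4.5 algorithm, which the proposed spatial extension builds on, chooses splits by gain ratio: information gain normalized by split information. A minimal sketch of that base criterion (without the spatial join handling the paper adds):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels (C4.5's impurity measure)."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gain_ratio(labels, partition):
    """C4.5's split criterion: information gain divided by split information.
    `partition` is the list of label sublists induced by a candidate split."""
    n = len(labels)
    remainder = sum(len(p) / n * entropy(p) for p in partition)
    split_info = entropy([i for i, p in enumerate(partition) for _ in p])
    return (entropy(labels) - remainder) / split_info

# A perfect binary split of a balanced two-class set has gain ratio 1:
print(gain_ratio(["+", "+", "-", "-"], [["+", "+"], ["-", "-"]]))  # -> 1.0
```

In the spatial extension, the candidate splits would additionally be induced by spatial predicates looked up through the join index, but the criterion evaluated at each node stays the same.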
Abstract: This review summarizes the potential of starch agroindustrial residues as substrates for biohydrogen production. Types of potential starch agroindustrial residues, recent developments, and bio-processing conditions for biohydrogen production are discussed. Biohydrogen is a clean energy source with great potential to be an alternative fuel, because it releases energy explosively in heat engines or generates electricity in fuel cells, with water as the only by-product. Anaerobic hydrogen fermentation, or dark fermentation, seems to be the more favorable route, since hydrogen is yielded at high rates and various carbohydrate-rich organic wastes used as substrates keep hydrogen production costs low. Abundant biomass from various industries could serve as a source for biohydrogen production, where the combination of waste treatment and energy production would be an advantage. Carbohydrate-rich, nitrogen-deficient solid wastes such as starch residues can be used for hydrogen production with suitable bioprocess technologies. Converting biomass into gaseous fuels such as biohydrogen is possibly the most efficient way to use these agroindustrial residues.
Abstract: The operation of nuclear power plants involves continuous monitoring of the environment in their area. This monitoring is performed using a complex data acquisition system, which collects status information about the system itself and the values of many important physical variables, e.g., temperature, humidity, and dose rate. This paper describes a proposal for, and optimization of, the communication that takes place in a teledosimetric system between the central control server, responsible for data processing and storage, and the decentralized measuring stations, which measure the physical variables. The ongoing communication was analyzed, and consequently the system architecture and communication were optimized.
Abstract: Assertion-based software testing has been shown to be a promising tool for generating test cases that reveal program faults. Because the number of assertions may be very large for industry-size programs, one of the main concerns about the applicability of assertion-based testing is the amount of search time required to explore a large number of assertions. This paper presents a new approach for exploring assertions during the process of assertion-based software testing. Our initial experiments with the proposed approach show that the performance of assertion-based testing can be improved, making this approach more efficient when applied to programs with a large number of assertions.
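One simple way to picture assertion exploration (a hypothetical illustration only, not the paper's approach) is random testing with assertions as oracles, counting how often each assertion is violated so that the more fault-revealing assertions can be prioritized:

```python
import random

def buggy_abs(x):
    # Seeded fault: wrong (negative) result for x in {-3, -2, -1}.
    return -x if x < -3 else x

def explore_assertions(trials=1000, seed=0):
    """Random testing with assertions as oracles: count violations of
    each assertion over many random inputs."""
    rng = random.Random(seed)
    violations = {"non_negative": 0, "idempotent": 0}
    for _ in range(trials):
        x = rng.randint(-10, 10)
        y = buggy_abs(x)
        if not y >= 0:                      # assertion 1: result >= 0
            violations["non_negative"] += 1
        if not buggy_abs(y) == y:           # assertion 2: abs is idempotent
            violations["idempotent"] += 1
    return violations

print(explore_assertions())
```

For this seeded fault, only the non-negativity assertion ever fires, which is the kind of signal a search strategy could use to decide which assertions are worth exploring further.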
Abstract: In today’s world, LED displays are used to present visual information under various circumstances, and such information is an important intermediary in human information processing. Researchers have investigated diverse factors that influence the effectiveness of this process. Letter size is undoubtedly one major factor that has been tested and recommended by many standards and guidelines. However, viewing information on a display from a directly perpendicular position is a typical assumption, whereas many actual situations require viewing from an angle. This research studies the effect of oblique viewing angle and viewing distance on the ability to recognize alphabetic characters, numbers, and English words. A total of ten participants volunteered for our 3 x 4 x 4 within-subject study. The independent variables were three distance levels (2, 6, and 12 m), four oblique angles (0, 45, 60, and 75 degrees), and four target types (alphabetic character, number, short word, and long word). Following the method of constant stimuli, our study suggests that a larger oblique angle, ranging from 0 to 75 degrees from the line of sight, results in a significantly higher legibility threshold, i.e., a larger required font size (p-value < 0.05). The viewing distance factor also shows a significant effect on the threshold (p-value