Abstract: In fashion design, a 3D mannequin is an assisting tool that can rapidly realize design concepts. When the concept of the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. It is therefore critical to develop a 3D-mannequin module that corresponds to the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D-mannequin system with the Kinect. Ergonomic measurements of the target person's body features are obtained in real time with the Kinect depth camera, and mesh morphing is then implemented by transforming the locations of control points on the model according to these ergonomic data, yielding an individualized 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are corrected for accuracy and smoothed, a complete human figure is reconstructed by the ICP algorithm together with image-processing methods. The target body features are then recognized and analyzed to obtain real measurements. Furthermore, the ergonomic measurements are applied to shape morphing of the 3D mannequin's regions, which are reconstructed from feature curves. Because a standardized, customer-oriented 3D mannequin is generated through subdivision, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. To examine the practicality of the research framework, a 3D-mannequin system is constructed in Java in this study. Through experimental refinement, a practicable research result is obtained.
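The ICP reconstruction step mentioned above repeatedly aligns the scanned points to the model. A minimal sketch of the SVD-based rigid-alignment step at the core of each ICP iteration, assuming 2D points and known correspondences for brevity (the function name is illustrative, not the paper's):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD step used inside each ICP iteration: find rotation R
    and translation t minimizing ||R @ src_i + t - dst_i|| over all i."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known rotation + translation of a point cloud.
rng = np.random.default_rng(0)
src = rng.random((30, 2))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([1.0, -2.0])
R, t = best_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0]))  # True True
```

In full ICP the correspondences are unknown, so this step alternates with a nearest-neighbour matching step until the alignment converges.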
Abstract: The system for analyzing and eliciting public grievances serves its main purpose of receiving and processing all kinds of complaints from the public and responding to users. As the number of complaints grows, the data become big data that are difficult to store and process. The proposed system uses HDFS to store the big data and MapReduce to process it. The concept of a cache is applied in the system to provide immediate response and timely action using big-data analytics; a cache-enabled big-data store improves the response time of the system. The unstructured data provided by users are handled efficiently through the MapReduce algorithm. Complaints are processed in order of the hierarchy of authority. The drawbacks of the traditional database used in the existing system are addressed by our system through a cache-enabled Hadoop Distributed File System. MapReduce code can leak sensitive data through the computation process; we propose a system that adds noise to the output of the reduce phase to avoid signaling the presence of sensitive data. If a complaint is not processed in ample time, it is automatically forwarded to a higher authority, which ensures that processing is assured. A copy of the filed complaint is sent as a digitally signed PDF document to the user's e-mail address, which serves as proof. The system's reports serve as essential data when making important decisions based on legislation.
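The noisy reduce phase can be sketched as follows. The abstract does not specify the noise mechanism, so Laplace noise (as used in differential privacy) is assumed here, with a plain in-memory word count standing in for the Hadoop job; the field names are illustrative:

```python
import numpy as np
from collections import Counter

def map_phase(complaints):
    # Map: emit a (department, 1) pair for each complaint record.
    return [(c["dept"], 1) for c in complaints]

def reduce_phase_with_noise(pairs, scale=1.0, rng=None):
    # Reduce: sum counts per key, then perturb each total with Laplace
    # noise so the exact count (which could reveal the presence of
    # sensitive records) is never emitted.
    rng = rng or np.random.default_rng()
    counts = Counter()
    for key, v in pairs:
        counts[key] += v
    return {k: c + rng.laplace(0.0, scale) for k, c in counts.items()}

complaints = [{"dept": "water"}] * 40 + [{"dept": "roads"}] * 25
noisy = reduce_phase_with_noise(map_phase(complaints),
                                rng=np.random.default_rng(1))
print({k: round(v, 1) for k, v in noisy.items()})
```

The perturbed totals remain close to the true counts (40 and 25 here) while masking individual records.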
Abstract: Image segmentation and edge detection are fundamental tasks in image processing. For noisy images, edge detection is far less effective when conventional spatial filters such as Sobel, Prewitt, LoG, and Laplacian are used. To overcome this problem, we propose using a stochastic gradient mask instead of spatial filters to generate gradient images. The present study shows that the resultant images obtained by applying stochastic gradient masks appear much clearer and sharper as far as edge detection is concerned.
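For reference, the conventional spatial-filter baseline that the abstract compares against can be sketched as below. The stochastic gradient mask itself is not specified in the abstract, so only the standard Sobel gradient is shown:

```python
import numpy as np
from scipy.ndimage import convolve

# Conventional Sobel spatial filters (the baseline the proposed
# stochastic gradient mask is compared against).
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
KY = KX.T

def sobel_gradient(img):
    gx = convolve(img, KX, mode="nearest")
    gy = convolve(img, KY, mode="nearest")
    return np.hypot(gx, gy)          # gradient magnitude image

# A vertical step edge: the magnitude peaks along the edge columns.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_gradient(img)
print(mag[4].argmax())               # strongest response near columns 3-4
```

On a noise-free step edge this works well; on noisy images the fixed 3x3 masks amplify noise, which is the motivation for the stochastic masks proposed in the paper.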
Abstract: Organizational tendencies toward computer-based information processing have been observed noticeably in third-world countries. Many enterprises are taking major initiatives toward computerized working environments because of the massive benefits of computer-based information processing. However, designing and developing information-resource-management software for small and mid-size enterprises under tight budgets and strict deadlines is always challenging for software engineers. Therefore, we introduce an approach to designing mid-size enterprise software in a cost-effective way using the Waterfall model, one of the SDLC (Software Development Life Cycle) models. To fulfill the research objectives, in this study we developed mid-size enterprise software named “BSK Management System” that assists enterprise software clients with information resource management and with performing complex organizational tasks. Waterfall-model phases were applied to ensure that all functions, user requirements, strategic goals, and objectives are met. In addition, a Rich Picture, Structured English, and a Data Dictionary were implemented and investigated properly in an engineering manner. Furthermore, an assessment survey with 20 participants was conducted to investigate the usability and performance of the proposed software. The survey results indicated that our system features simple interfaces, easy operation and maintenance, quick processing, and reliable and accurate transactions.
Abstract: In this paper, we present a robust algorithm to recognize extracted text from grocery product images captured by mobile phone cameras. Recognition of such text is challenging since text in grocery product images varies in its size, orientation,
style, illumination, and can suffer from perspective distortion.
Pre-processing is performed to make the characters scale- and rotation-invariant. Since text degradations cannot be appropriately defined using well-known geometric transformations such as translation, rotation, affine transformation, and shearing, we use the character's entire set of black pixels as our feature vector. Classification is performed with a minimum distance classifier using the maximum likelihood criterion, which delivers a very promising Character Recognition Rate (CRR) of 89%. We achieve a considerably higher Word Recognition Rate (WRR) of 99% when using lower-level linguistic knowledge about product words during the recognition process.
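The minimum-distance classification over raw black-pixel feature vectors can be sketched as follows. The paper's exact likelihood formulation is not given in the abstract, so nearest-class-mean Euclidean distance is assumed, and the toy glyphs are illustrative:

```python
import numpy as np

def train_means(samples, labels):
    """Class template = mean of the flattened binary-pixel vectors."""
    X = np.array([s.ravel() for s in samples], float)
    y = np.array(labels)
    return {c: X[y == c].mean(axis=0) for c in sorted(set(labels))}

def classify(glyph, means):
    """Minimum-distance classifier: pick the class whose mean template
    is nearest (Euclidean distance) to the character's pixel vector."""
    v = glyph.ravel().astype(float)
    return min(means, key=lambda c: np.linalg.norm(v - means[c]))

# Toy 3x3 glyphs: a vertical bar ("I") vs a full block ("B").
I = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])
B = np.ones((3, 3), int)
means = train_means([I, I, B, B], ["I", "I", "B", "B"])
noisy_I = I.copy()
noisy_I[0, 0] = 1                     # one flipped pixel
print(classify(noisy_I, means))       # → I
```

A character degraded by one flipped pixel is still closer to its own class mean than to any other, which is what makes the raw-pixel feature vector workable for this task.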
Abstract: This paper aims to analyze the role of natural language processing (NLP) in the context of automated data retrieval, automated question answering, and text structuring. NLP techniques are gaining wider acceptance in real-life applications and industrial settings. There are various complexities involved in processing natural-language text so that it can satisfy the needs of decision makers. This paper begins with a description of the qualities of NLP practices, then focuses on the challenges in natural language processing and discusses its major techniques. The last section describes opportunities and challenges for future research.
Abstract: The purpose of this work is to examine a multi-product, multi-stage battery production line and to improve the performance of the assembly line by determining the efficiency of each workstation. Data were collected from every workstation: the throughput rate, the number of operators, and the number of parts that arrive and leave during part processing. At least ten samples of the arrival and departure counts were collected so that the data could be analyzed with a chi-squared goodness-of-fit test and queuing theory. The measures of this model served as a comparison against the standard data available in the company, and the task-time values were validated by comparing them with the task times in the company database. Several performance factors for the multi-product, multi-stage battery production line are presented in this work, along with the efficiency of each workstation. The total production time for each part can be determined by summing the task times across workstations. To reduce queuing time and increase efficiency, the improvements suggested by the analysis should be carried out; one possible action is increasing the number of operators who manually operate a workstation.
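The queuing analysis per workstation can be sketched with the standard M/M/c formulas (Erlang C); the arrival and service rates below are made-up examples, not the company's data:

```python
import math

def mmc_metrics(lam, mu, c):
    """Steady-state M/M/c queue: per-server utilization, Erlang-C
    waiting probability, mean queue length Lq, and mean waiting time Wq."""
    a = lam / mu                  # offered load
    rho = a / c                   # per-server utilization (must be < 1)
    if rho >= 1:
        raise ValueError("unstable workstation: rho >= 1")
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / math.factorial(c)
    p_wait = top / (top + (1 - rho) * s)   # Erlang C formula
    lq = p_wait * rho / (1 - rho)          # mean number of parts queued
    wq = lq / lam                          # mean wait (Little's law)
    return rho, p_wait, lq, wq

# Example workstation: 2 parts/min arrive; one operator processes
# 3 parts/min.
rho, p_wait, lq, wq = mmc_metrics(lam=2.0, mu=3.0, c=1)
print(round(rho, 3), round(lq, 3))   # 0.667 1.333
```

Rerunning `mmc_metrics` with `c=2` shows how adding an operator cuts the queue length, which is the kind of improvement the analysis suggests.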
Abstract: In today’s world, LED displays are used to present visual information under various circumstances. Such information is an important intermediary in human information processing, and researchers have investigated diverse factors that influence the effectiveness of this process. Letter size is undoubtedly one major factor that has been tested and recommended by many standards and guidelines. However, these typically assume that the display is viewed from a directly perpendicular position, whereas many actual situations require viewing from an angle. The current research aims to study the effect of oblique viewing angle and viewing distance on the ability to recognize alphabetic characters, numbers, and English words. A total of ten participants volunteered for our 3 x 4 x 4 within-subject study. The independent variables were three distance levels (2, 6, and 12 m), four oblique angles (0, 45, 60, and 75 degrees), and four target types (alphabet, number, short word, and long word). Following the method of constant stimuli, our study suggests that a larger oblique angle, ranging from 0 to 75 degrees from the line of sight, results in a significantly higher legibility threshold, i.e., a larger required font size (p-value < 0.05). The viewing distance factor also shows a significant effect on the threshold (p-value
Abstract: Segmentation is one of the essential tasks in image
processing. Thresholding is one of the simplest techniques for performing image segmentation, and multilevel thresholding is a simple yet effective extension of it. The primary objective of bi-level or multilevel thresholding for image segmentation is to determine the best threshold values. Various techniques have been proposed to achieve multilevel thresholding. This paper presents a study of several nature-inspired metaheuristic algorithms for multilevel image-segmentation thresholding: the Particle Swarm Optimization (PSO) algorithm, Artificial Bee Colony (ABC) optimization, the Ant Colony Optimization (ACO) algorithm, and the Cuckoo Search (CS) algorithm.
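All of the metaheuristics surveyed optimize a thresholding objective; a common choice is Otsu's between-class variance, sketched here for the bi-level case with an exhaustive search that PSO/ABC/ACO/CS replace when the number of thresholds grows (the synthetic histogram is illustrative):

```python
import numpy as np

def between_class_variance(hist, t):
    """Otsu objective for a single threshold t on an intensity histogram."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    bins = np.arange(len(p))
    mu0 = (bins[:t] * p[:t]).sum() / w0
    mu1 = (bins[t:] * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def otsu_threshold(hist):
    # Exhaustive search over one threshold; metaheuristics search this
    # same objective when multiple thresholds make brute force too slow.
    return max(range(1, len(hist)),
               key=lambda t: between_class_variance(hist, t))

# Bimodal histogram: dark mode near bin 50, bright mode near bin 200.
hist = np.zeros(256)
hist[45:56] = 100
hist[195:206] = 100
t = otsu_threshold(hist)
print(t)  # a value between the two modes
```

For k thresholds the search space is combinatorial, which is exactly why swarm and colony methods are applied to this objective.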
Abstract: In this paper, we propose a smart music player that combines musical genre classification with spatial audio processing. The musical genre is classified by content analysis of the musical segment detected in the audio stream. In parallel with the classification, spatial audio quality is achieved by adding artificial reverberation in a virtual acoustic space to the input mono sound. Thereafter, the spatial sound is boosted with genre-specific frequency gains when played back. Experiments measured the accuracy of detecting the musical segment in the audio stream and of its genre classification, and a listening test evaluated the spatial audio processing based on the virtual acoustic space.
Abstract: Localization is an important task in developing autonomous mobile robots. This paper proposes a method to estimate the position of a mobile robot using an omnidirectional camera mounted on the robot. Landmarks serving as reference points are set up on the field where the robot works. The omnidirectional camera, which captures 360-degree surround images, photographs these landmarks. The robot's position is estimated from the directions of the landmarks, which are extracted from the images by image processing. This method obtains the robot's position without accumulating position errors. The accuracy of the positions estimated by the proposed method is evaluated through experiments, whose results show that the positions are obtained with small standard deviations. The method therefore has the potential for even more accurate localization through tuning of appropriate offset parameters.
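Estimating position from landmark directions can be sketched as a least-squares intersection of sight lines. This assumes world-frame bearings (robot heading already compensated) and made-up landmark coordinates; it is a sketch of the geometric idea, not the paper's exact estimator:

```python
import numpy as np

def localize(landmarks, bearings):
    """Estimate robot position from bearings (world-frame angles from
    the robot to landmarks at known positions). Each bearing theta
    constrains the robot to the line through landmark (Lx, Ly):
        sin(theta)*px - cos(theta)*py = sin(theta)*Lx - cos(theta)*Ly
    which is solved for p = (px, py) by linear least squares."""
    A, b = [], []
    for (lx, ly), theta in zip(landmarks, bearings):
        s, c = np.sin(theta), np.cos(theta)
        A.append([s, -c])
        b.append(s * lx - c * ly)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

# Ground truth at (1, 2); bearings computed from three landmarks.
landmarks = [(5.0, 2.0), (1.0, 6.0), (4.0, 5.0)]
truth = np.array([1.0, 2.0])
bearings = [np.arctan2(ly - truth[1], lx - truth[0]) for lx, ly in landmarks]
print(np.round(localize(landmarks, bearings), 3))   # [1. 2.]
```

With noisy bearings the overdetermined system (three or more landmarks) averages out the measurement error, which matches the small standard deviations reported in the experiments.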
Abstract: Object detection using a Wavelet Neural Network (WNN) makes a major contribution to image-processing analysis. An existing cluster-based algorithm for co-saliency object detection operates on multiple images, but its co-saliency results cannot adequately handle multi-scale image objects in a WNN. An existing Super Resolution (SR) scheme for landmark images identifies corresponding regions in the images and reduces the mismatching rate, but its structure-aware matching criterion does not attend to detecting multiple regions in SR images and fails to improve the object-detection rate. To detect objects in high-resolution remote-sensing images, a Tagged Grid Matching (TGM) technique is proposed in this paper. The TGM technique consists of three main components in the WNN: object determination, object searching, and object verification. First, object determination specifies the position and size of objects in the current image; specifying position and size on a hierarchical grid makes it easy to determine multiple objects. Second, object searching is carried out using cross-point searching: the cross-point of each object is selected to speed up the search process and reduce detection time. The final component performs object verification, identifying (i.e., detecting) the dissimilarity of objects in the current frame; the verification process matches the searched grid points against the stored grid points to detect objects easily using the Gabor wavelet transform. The implementation of the TGM technique offers a significant improvement in multi-object detection rate, processing time, precision, and detection accuracy.
Abstract: This paper describes the identification of specific shapes within binary images using the morphological Hit-or-Miss Transform (HMT). The Hit-or-Miss transform is a general binary morphological operation that can be used to search for particular patterns of foreground and background pixels in an image. It is in fact a basic operation of binary morphology, since almost all other binary morphological operators are derived from it. The input of this method is a binary image and a structuring element (a template to be searched for in the binary image), while the output is another binary image. In this paper a modification of the Hit-or-Miss transform is proposed, in which the accuracy of the algorithm is adjusted according to the similarity between the template and the sought shape. The method has been implemented in the C language. The algorithm has been tested on several images, and the results show that this new method can be used for detecting similar shapes.
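The classical (unmodified) HMT can be sketched with SciPy rather than the paper's C implementation; the similarity-tolerance modification proposed in the paper is not reproduced here. The foreground/background templates below detect isolated pixels:

```python
import numpy as np
from scipy.ndimage import binary_hit_or_miss

# Foreground template: a single set pixel; background template: all
# eight neighbours empty -> the HMT fires only on isolated pixels.
hit = np.array([[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]])
miss = np.array([[1, 1, 1],
                 [1, 0, 1],
                 [1, 1, 1]])

img = np.zeros((7, 7), int)
img[1, 1] = 1                 # isolated pixel -> should be detected
img[4, 4] = img[4, 5] = 1     # two touching pixels -> should not

found = binary_hit_or_miss(img, structure1=hit, structure2=miss)
print(np.argwhere(found))     # [[1 1]]
```

The output is another binary image, true exactly where the foreground pattern fits the image and the background pattern fits its complement, matching the input/output description in the abstract.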
Abstract: Antioxidants have become among the most analyzed substances of recent decades. Antioxidants act as inactivators of free radicals, and spices and vegetables are among the major antioxidant sources. The most common antioxidants in vegetables and spices are vitamins C and E, phenolic compounds, and carotenoids. It is therefore important to gain some view of antioxidant changes in spices and vegetables during processing. In this article, nine fresh and dried spices and vegetables grown in Latvia in 2013 were analyzed: celery (Apium graveolens), parsley (Petroselinum crispum), dill (Anethum graveolens), leek (Allium ampeloprasum L.), garlic (Allium sativum L.), onion (Allium cepa), celery root (Apium graveolens var. rapaceum), pumpkin (Cucurbita maxima), and carrot (Daucus carota). Total carotenoids and phenolic compounds and their antiradical scavenging activity were determined for all samples; dry matter content was calculated from moisture content. After the drying process, carotenoid content decreases significantly in all analyzed samples except parsley, in which it increases. The phenolic composition differs between fresh and dried samples: total phenolic, flavonoid, and phenolic acid content increases in dried spices, and flavan-3-ol content is not detected in fresh spice samples. In dried vegetables, phenolic acid content decreases significantly while flavan-3-ol content increases. The highest antiradical scavenging activity was observed in samples with higher flavonoid and phenolic acid content.
Abstract: Even when there is no theoretical weakness in a cryptographic algorithm, Side Channel Analysis can extract secret data from the physical implementation of a cryptosystem. The analysis is based on extra information such as timing, power consumption, electromagnetic leakage, or even sound, which can be exploited to break the system. Differential Power Analysis is one of the most popular such analyses, computing statistical correlations between secret-key hypotheses and power consumption. It usually requires processing huge amounts of data and takes a long time; for some devices with countermeasures it may take several weeks. We suggest and evaluate methods to shorten the time needed to analyze cryptosystems, including distributed computing and parallelized processing.
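The key-guess correlation at the heart of DPA can be sketched with a deliberately simplified leakage model: the power sample is assumed to be the Hamming weight of (plaintext XOR key) plus Gaussian noise, whereas real attacks typically target e.g. an S-box output. The loop over key guesses is what the paper's distributed/parallel methods would split across workers:

```python
import numpy as np

def hamming_weight(x):
    return np.unpackbits(x[:, None].astype(np.uint8), axis=1).sum(axis=1)

rng = np.random.default_rng(0)
TRUE_KEY = 0x5A

# Simulated measurement: one power sample per encryption, modelled as
# the Hamming weight of (plaintext XOR key) plus Gaussian noise.
plaintexts = rng.integers(0, 256, size=2000)
traces = hamming_weight(plaintexts ^ TRUE_KEY) + rng.normal(0, 1.0, 2000)

# Correlate each key guess's predicted leakage with the traces; the
# correct key yields the strongest statistical correlation. This loop
# is embarrassingly parallel, so it distributes cleanly across workers.
def correlation_for_guess(guess):
    model = hamming_weight(plaintexts ^ guess)
    return abs(np.corrcoef(model, traces)[0, 1])

scores = [correlation_for_guess(g) for g in range(256)]
recovered = int(np.argmax(scores))
print(hex(recovered))   # 0x5a
```

Even with only 2000 noisy traces, the correlation peak at the true key stands well above the wrong guesses, which illustrates why countermeasures force attackers toward the much larger trace counts the abstract mentions.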
Abstract: The proliferation of multimedia technology and services in today’s world provides ample research scope at the frontiers of visual signal processing. The widespread use of video-based applications in heterogeneous environments calls for viable methods of Video Quality Assessment (VQA). The evaluation of video quality not only depends on high QoS requirements but also emphasizes the need for the novel term ‘QoE’ (Quality of Experience), which treats video quality as user-centric. This paper discusses the two vital classes of video quality assessment methods, namely subjective and objective assessment. The evolution of various video quality metrics, their classification models, and their applications are reviewed in this work. Mean Opinion Score (MOS)-based subjective measurements and algorithm-based objective metrics are discussed, and their challenges are outlined. Further, this paper explores the recent progress of VQA in emerging technologies such as mobile video and 3D video.
Abstract: In this paper, five ontologies that include event concepts are described. The paper provides an overview and comparison of existing event models. The main criteria for comparison are that it should be possible to model events with extent in time and location and with participating objects; other factors are taken into account as well. The paper also shows an example of using the ontologies in complex event processing.
Abstract: Medical image analysis is one of the great achievements of computer image processing. It involves several processing steps, of which segmentation is one of the most challenging and important. In this paper a segmentation method is proposed for dental radiograph images. Thresholding is applied to simplify the images, and morphological binary opening is performed to eliminate unnecessary regions. Furthermore, horizontal and vertical integral-projection techniques are used to extract each individual tooth from the radiographs. Segmentation is then completed by applying the level-set method to each extracted image. The experimental results, with 90% accuracy, demonstrate that the proposed method achieves high accuracy and promising results.
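The integral-projection step used to isolate individual teeth can be sketched as below: sum intensities along columns and split at the valley of the projection between two bright regions. The synthetic strip is illustrative, not a real radiograph:

```python
import numpy as np

def vertical_projection(img):
    """Integral projection: sum of intensities down each column."""
    return img.sum(axis=0)

def split_column(img):
    """Split between two teeth at the deepest valley of the projection."""
    proj = vertical_projection(img)
    return int(np.argmin(proj[1:-1])) + 1   # ignore the image borders

# Synthetic radiograph strip: two bright "teeth" with a dark gap.
img = np.zeros((10, 12))
img[:, 1:5] = 1.0       # tooth 1
img[:, 8:11] = 1.0      # tooth 2
cut = split_column(img)
left, right = img[:, :cut], img[:, cut:]
print(cut, left.sum() > 0, right.sum() > 0)
```

The same idea applied with the horizontal projection separates the upper and lower jaws; each extracted sub-image is then passed to the level-set stage.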
Abstract: In this paper, a low-cost duty-cycle modulation scheme is studied in depth and compared to the standard pulse-width modulation technique. Using a mix of analytical reasoning and electronics simulation tools, it is shown that under the same operating conditions, most characteristics of the proposed duty-cycle modulation scheme are better than those provided by standard pulse-width modulation. The simulation results obtained when testing both modulation control policies on prototyping systems indicate that the proposed duty-cycle modulation approach appears to be a high-quality control policy in a wide variety of application areas, including A/D and D/A conversion, signal transmission, and switching control in power electronics.
Abstract: The proposed method is to study and analyze electrocardiograph (ECG) waveforms to detect abnormalities with reference to the P, Q, R, and S peaks. The first phase is the acquisition of real-time ECG data; the next is signal generation followed by pre-processing. Thirdly, the procured ECG signal is subjected to feature extraction. The extracted features detect abnormal peaks present in the waveform, so normal and abnormal ECG signals can be differentiated based on the extracted features. The work is implemented in the familiar multipurpose tool MATLAB, which efficiently applies algorithms and techniques for detecting any abnormalities present in the ECG signal. Proper use of MATLAB functions (both built-in and user-defined) allows ECG signals to be processed and analyzed in real-time applications. The simulation helps improve accuracy, and the hardware could be built conveniently.
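The peak-extraction stage can be sketched in Python as a stand-in for the MATLAB implementation: a simple threshold-plus-local-maximum R-peak detector with a refractory period, run on a synthetic spike train (the detector parameters and test signal are illustrative assumptions):

```python
import numpy as np

def detect_r_peaks(sig, fs, thresh_ratio=0.6, refractory=0.2):
    """Mark R peaks: samples above a fraction of the global maximum that
    are local maxima and at least `refractory` seconds apart."""
    thresh = thresh_ratio * sig.max()
    min_gap = int(refractory * fs)
    peaks, last = [], -min_gap
    for i in range(1, len(sig) - 1):
        if sig[i] >= thresh and sig[i] >= sig[i-1] and sig[i] > sig[i+1]:
            if i - last >= min_gap:
                peaks.append(i)
                last = i
    return peaks

# Synthetic "ECG": flat baseline with a sharp spike every second.
fs = 250
sig = np.zeros(5 * fs)
sig[::fs] = 1.0                       # R spikes at samples 0, 250, 500, ...
peaks = detect_r_peaks(sig, fs)
rate_bpm = 60 * fs / np.mean(np.diff(peaks))
print(peaks, round(rate_bpm))         # spikes found, ~60 bpm
```

From the detected R-R intervals, the heart rate and rhythm irregularities follow directly, which is the basis for separating normal from abnormal signals.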