Abstract: Laban Movement Analysis (LMA), developed in the
dance community over the past seventy years, is an effective method
for observing, describing, notating, and interpreting human
movement to enhance communication and expression in everyday
and professional life. Many applications that use motion capture data
could benefit significantly if Laban qualities were recognized
automatically. This paper presents a method for the automated
recognition of Laban qualities from skeletal motion capture recordings,
demonstrated on the output of Microsoft's Kinect V2 sensor.
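As a purely illustrative sketch (the paper's actual features are not specified here), one low-level quantity often computed from skeletal recordings for movement-quality recognition is joint acceleration, obtained by finite differences over the 3D position stream; the frame rate and joint choice below are assumptions:

```python
# Hypothetical sketch: acceleration magnitude of one skeleton joint from
# per-frame 3D positions, a typical low-level feature for movement-quality
# recognition. The 30 fps rate and the sample trajectory are assumptions,
# not taken from the paper.

def acceleration_magnitudes(positions, fps=30.0):
    """positions: list of (x, y, z) joint positions, one per frame."""
    dt = 1.0 / fps
    # Velocity via finite differences between consecutive frames.
    vel = [tuple((b - a) / dt for a, b in zip(p0, p1))
           for p0, p1 in zip(positions, positions[1:])]
    # Acceleration via finite differences of the velocities.
    acc = [tuple((b - a) / dt for a, b in zip(v0, v1))
           for v0, v1 in zip(vel, vel[1:])]
    return [sum(c * c for c in a) ** 0.5 for a in acc]

# A joint moving at constant velocity has (numerically near-)zero acceleration:
steady = [(0.1 * t, 0.0, 0.0) for t in range(5)]
print(acceleration_magnitudes(steady))
```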
Abstract: Communicating users' needs, goals and problems helps
designers and developers overcome challenges faced by end users.
Personas are used to represent end users' needs. In our research,
creating personas allowed the following questions to be answered:
Who are the potential user groups? What do they want to achieve by
using the service? What are the problems that users face? What
should the service provide to them? To develop realistic personas, we
conducted a focus group discussion with undergraduate and graduate
students and also interviewed a university librarian; four personas
resulted from these sessions. The personas were created to help
evaluate the Institutional Repository, which is based on the DSpace
system. The profiles helped to communicate users' needs, abilities,
tasks, and problems, and we used them to create focused task
scenarios for a heuristic evaluation of the system interface, to ensure
that it met users' needs, goals, and desires. In this paper, we present
the process we used to create the personas and to devise the task
scenarios used in the heuristic evaluation, as a follow-up study of the
DSpace university repository.
Abstract: Web mining aims to discover and extract useful
information. Different users may have different search goals when
they submit the same query to a search engine. Inferring and
analyzing user search goals can be very useful for improving the
results returned for a user's search query. In this project, we propose
a novel approach to infer user search goals by analyzing search
engine query logs. First, feedback sessions are constructed from user
click-through logs; these efficiently reflect users' information needs.
Second, we propose a preprocessing technique to clean unnecessary
data from the web log file (feedback sessions). Third, we propose a
technique to generate pseudo-documents that represent feedback
sessions for clustering. Finally, we implement the k-medoids
clustering algorithm to discover different user search goals and to
provide better results for a search query based on feedback sessions.
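The clustering step can be illustrated with a minimal k-medoids (PAM-style) sketch; the toy 2-D feature vectors standing in for pseudo-documents, the naive initialization, and the Euclidean distance are all assumptions, not the authors' implementation:

```python
# Minimal k-medoids sketch: repeatedly assign points to the nearest medoid,
# then move each medoid to the cluster member with the smallest total
# distance to its cluster. Toy data stands in for pseudo-documents.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def k_medoids(points, k, iters=20):
    medoids = list(range(k))  # naive init: first k points as medoids
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            nearest = min(medoids, key=lambda m: dist(p, points[m]))
            clusters[nearest].append(i)
        new = []
        for m, members in clusters.items():
            if not members:
                new.append(m)
                continue
            best = min(members,
                       key=lambda c: sum(dist(points[c], points[i])
                                         for i in members))
            new.append(best)
        if sorted(new) == sorted(medoids):
            break
        medoids = new
    labels = [min(range(k), key=lambda j: dist(p, points[medoids[j]]))
              for p in points]
    return medoids, labels

# Two obvious groups of "pseudo-documents" (toy 2-D feature vectors):
pts = [(0.0, 0.0), (5.0, 5.0), (0.1, 0.2), (0.2, 0.1), (5.1, 4.9)]
medoids, labels = k_medoids(pts, 2)
print(labels)  # points 0, 2, 3 share one label; points 1 and 4 the other
```

Real implementations use better seeding and a text-appropriate distance (e.g. cosine over term vectors) rather than first-k initialization and Euclidean distance.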
Abstract: The web services application for digital reference
services (WSDRS) model is an informal LIS model that claims to
reduce the problems of digital reference services in libraries. It uses
web services technology to provide an efficient way of satisfying
users' needs in the reference section of libraries. The formal WSDRS
model consists of Z specifications of all the informal specifications of
the model. This paper discusses the formal validation of the Z
specifications of the WSDRS model. The authors formally verify, and
thus validate, the properties of the model using the Z/EVES theorem
prover.
Abstract: A Tamil handwritten document is taken as the key source of data to identify its writer. Tamil is a classical language with 247 characters, including compound characters, consonants, vowels and special characters. Most Tamil characters are multifaceted in nature. Handwriting is a unique feature of an individual. Writers may change their handwriting according to their frame of mind, and this poses a challenging problem in identifying the writer. A new discriminative model with pooled handwriting features is proposed and implemented using a support vector machine. A prediction accuracy of 100% has been reported for the RBF- and polynomial-kernel-based classification models.
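The two kernels mentioned can be sketched directly; the gamma, degree, and toy "pooled feature" vectors below are illustrative assumptions, not values from the paper:

```python
import math

# Illustrative RBF and polynomial kernel functions of the kind an SVM
# writer-identification model would use; gamma, degree, coef0 and the toy
# feature vectors are assumptions.

def rbf_kernel(x, y, gamma=0.5):
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)  # equals 1.0 when x == y

def poly_kernel(x, y, degree=3, coef0=1.0):
    return (sum(a * b for a, b in zip(x, y)) + coef0) ** degree

# Hypothetical pooled handwriting features of two samples:
writer_a = [0.8, 0.1, 0.3]
writer_b = [0.7, 0.2, 0.4]
print(rbf_kernel(writer_a, writer_a))  # identical samples -> 1.0
print(rbf_kernel(writer_a, writer_b))
```

In an SVM these kernels replace explicit dot products between feature vectors, letting the classifier learn non-linear decision boundaries over the pooled features.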
Abstract: In some applications, such as image recognition or
compression, segmentation refers to the process of partitioning a
digital image into multiple segments. Image segmentation is typically
used to locate objects and boundaries (lines, curves, etc.) in images.
Image segmentation classifies or clusters an image into several
parts (regions) according to features of the image, for example the
pixel values or the frequency response. More precisely, image
segmentation is the process of assigning a label to every pixel in an
image such that pixels with the same label share certain visual
characteristics. The result of image segmentation is a set of segments
that collectively cover the entire image, or a set of contours extracted
from the image. Several image segmentation algorithms have been
proposed to segment an image before recognition or compression. To
date, many image segmentation algorithms exist and are extensively
applied in science and daily life. According to their segmentation
approach, they can be roughly categorized into region-based
segmentation, data clustering, and edge-based segmentation. In this
paper, we give a study of several popular image segmentation
algorithms.
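The clustering family can be illustrated with a minimal sketch: 1-D k-means on pixel intensities, labelling each pixel with its cluster. Real segmentation algorithms also exploit spatial information; the tiny image row and k=2 are assumptions for illustration only:

```python
# Minimal clustering-based segmentation sketch: k-means on grayscale
# intensities. Each pixel is labelled with the index of its nearest
# cluster centre.

def kmeans_intensity(pixels, k=2, iters=50):
    lo, hi = min(pixels), max(pixels)
    # Spread the initial centres evenly over the intensity range.
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pixels:
            j = min(range(k), key=lambda j: abs(p - centers[j]))
            groups[j].append(p)
        new = [sum(g) / len(g) if g else centers[j]
               for j, g in enumerate(groups)]
        if new == centers:
            break
        centers = new
    return [min(range(k), key=lambda j: abs(p - centers[j])) for p in pixels]

# Dark background (~10) with a bright object (~200):
image_row = [10, 12, 11, 200, 205, 198, 9]
print(kmeans_intensity(image_row, k=2))  # -> [0, 0, 0, 1, 1, 1, 0]
```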
Abstract: The elliptic curve discrete logarithm problem (ECDLP) is
one of the problems on which the security of pairing-based
cryptography is based. This paper considers Pollard's rho method for
evaluating the security of the ECDLP on the Barreto-Naehrig (BN)
curve, an efficient pairing-friendly curve. Several techniques are
proposed to make the rho method efficient; in particular, the group
structure on the BN curve, the distinguished point method, and the
Montgomery trick are well-known techniques. This paper applies
these techniques and shows how to optimize them. According to
experimental results on a large-scale parallel system using MySQL, a
94-bit ECDLP was solved in about 28 hours using 71 computers in
parallel.
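Pollard's rho method can be sketched on a toy discrete logarithm in (Z/pZ)*; the paper's actual setting (ECDLP on a BN curve with distinguished points and the Montgomery trick) is far larger and parallelized, and the small parameters below are a classic textbook example, not the paper's:

```python
from math import gcd

# Toy Pollard's rho for discrete logarithms: a pseudo-random walk
# x = g^a * h^b with Floyd cycle-finding; a collision gives a linear
# congruence for the logarithm. Exhaustive search is kept as a fallback
# for degenerate collisions (fine at this toy size).

def pollard_rho_dlog(g, h, p, n):
    """Return x with g^x = h (mod p), where n is the order of g."""
    def step(x, a, b):
        if x % 3 == 0:
            return x * x % p, 2 * a % n, 2 * b % n   # squaring branch
        if x % 3 == 1:
            return x * g % p, (a + 1) % n, b          # multiply by g
        return x * h % p, a, (b + 1) % n              # multiply by h

    x, a, b = 1, 0, 0
    X, A, B = x, a, b
    for _ in range(n):                 # hare moves twice per tortoise step
        x, a, b = step(x, a, b)
        X, A, B = step(*step(X, A, B))
        if x == X:
            break
    r, s = (B - b) % n, (a - A) % n    # solve r * x = s (mod n)
    if r != 0:
        d = gcd(r, n)
        base = (s // d) * pow(r // d, -1, n // d) % (n // d)
        for k in range(d):             # check every solution of the congruence
            cand = (base + k * (n // d)) % n
            if pow(g, cand, p) == h:
                return cand
    return next(k for k in range(n) if pow(g, k, p) == h)  # fallback

x = pollard_rho_dlog(2, 5, 1019, 1018)
print(x, pow(2, x, 1019))  # the second value verifies g^x = h (mod p)
```

The distinguished point method and the Montgomery trick mentioned in the paper accelerate exactly this collision search when many walks run in parallel.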
Abstract: One of the most important challenges in medical
imaging is noise. Image denoising refers to the improvement of a
digital medical image that has been corrupted by additive white
Gaussian noise (AWGN). A digital medical image or video can be
affected by different types of noise, including impulse noise, Poisson
noise and AWGN. Computed tomography (CT) images are subject to
low quality due to noise. The quality of CT images depends directly
on the dose absorbed by the patient: increasing the absorbed
radiation, and consequently the absorbed dose to the patient (ADP),
enhances CT image quality. Noise reduction techniques that improve
image quality without exposing patients to excess radiation are
therefore a challenging problem in CT image processing. In this
work, noise reduction in CT images was performed using two
directional two-dimensional (2D) transformations, the curvelet and
contourlet transforms, and the discrete wavelet transform (DWT)
thresholding methods BayesShrink and AdaptShrink, which were
compared with each other. We also propose a new threshold in the
wavelet domain that not only reduces noise but also retains edges; the
proposed method retains the significant modified coefficients, which
results in good visual quality. Evaluations were carried out using two
criteria: peak signal-to-noise ratio (PSNR) and structural similarity
(SSIM).
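Wavelet-domain thresholding can be sketched in one dimension with a single-level Haar transform and soft thresholding; BayesShrink and AdaptShrink derive the threshold from coefficient statistics, whereas the fixed threshold and toy signal here are assumptions for illustration:

```python
import math

# Sketch of wavelet-domain denoising: one-level Haar DWT, soft-threshold
# the detail coefficients, then invert. With threshold 0 the transform
# reconstructs the signal perfectly.

def haar_dwt(signal):
    s = 1 / math.sqrt(2)
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) * s for a, b in pairs]
    detail = [(a - b) * s for a, b in pairs]
    return approx, detail

def haar_idwt(approx, detail):
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out

def soft_threshold(coeffs, t):
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, t=0.5):
    approx, detail = haar_dwt(signal)
    return haar_idwt(approx, soft_threshold(detail, t))

noisy = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
print(denoise(noisy))  # small pairwise fluctuations are smoothed away
```

2D methods such as the curvelet and contourlet transforms play the same role but with directional bases that represent edges far more sparsely than a separable wavelet.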
Abstract: Testing plays an important role in VLSI. The major
problems in testing are test data volume and test power. An important
way to reduce test data volume and test time is test data compression.
The proposed technique combines the bit-mask dictionary and
2^n-pattern run-length coding methods and provides a substantial
improvement in compression efficiency without introducing any
additional decompression penalty. The method has been implemented
in MATLAB and an HDL to reduce test data volume and memory
requirements, applied to various benchmark test sets, and compared
with other existing methods. The proposed technique can achieve a
compression ratio of up to 86%.
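The run-length component can be sketched with a plain run-length codec over a binary test vector; the paper's scheme additionally combines a bit-mask dictionary with 2^n-pattern run-length coding, which this illustration does not reproduce:

```python
# Plain run-length codec over a binary test vector: encode consecutive
# equal bits as (bit, run_length) pairs, and decode by expansion.

def rle_encode(bits):
    runs, current, count = [], bits[0], 1
    for b in bits[1:]:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    return ''.join(b * n for b, n in runs)

vector = '0000001111000011'
runs = rle_encode(vector)
print(runs)  # -> [('0', 6), ('1', 4), ('0', 4), ('1', 2)]
assert rle_decode(runs) == vector  # lossless round trip
```

Test vectors contain long runs of identical (or don't-care) bits, which is why run-length-based schemes compress them well.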
Abstract: The apportionment method is used by many countries to calculate the distribution of seats in political bodies. For example, it is used in the United States (U.S.) to distribute House seats proportionally based on the population of each electoral district. Famous apportionment methods include the divisor methods known as the Adams, Dean, Hill, Jefferson and Webster methods. Sometimes the results of these divisor methods are unfair or contain errors, so it is important to optimize the method by using a bias measurement to obtain precise and fair results. In this research we investigate the bias of divisor methods in the U.S. House of Representatives toward large and small states by applying the Stolarsky mean method. We compare the bias of the apportionment methods using two well-known bias measurements: the Balinski and Young measurement and the Ernst measurement, both of which have formulas for large and small states. A third measurement, created by the researchers, does not factor the distinction between large and small states into its formula. All three measurements are compared, and the results show that our measurement produces results similar to those of the other two well-known measurements.
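A divisor method can be sketched concretely with the Webster (Sainte-Laguë) rule, which awards each seat to the state with the largest quotient population / (2·seats + 1); the populations and house size below are made-up illustration values:

```python
# Webster / Sainte-Laguë divisor method: award seats one at a time to the
# state with the largest current quotient. Other divisor methods differ
# only in the divisor sequence (Jefferson: s+1, Adams: s,
# Hill: sqrt(s*(s+1))).

def webster_apportion(populations, house_size):
    seats = [0] * len(populations)
    for _ in range(house_size):
        quotients = [p / (2 * s + 1) for p, s in zip(populations, seats)]
        seats[quotients.index(max(quotients))] += 1
    return seats

print(webster_apportion([53, 24, 23], 8))  # -> [4, 2, 2]
```

Bias measurements such as those compared in the paper quantify how consistently a divisor sequence favours large or small populations over many such allocations.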
Abstract: A practical and efficient approach is suggested for estimating the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. For super-dynamic objects (trains, cars) it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag: reliable estimation of the coordinates of a monitored object requires time for the C-OTDR system to collect observation data, and only once the required sample volume has been collected can the final decision be issued. This conflicts with the requirements of many real applications; for example, rail traffic management systems need localization data for dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of C-OTDR signals to obtain the most reliable solution in real time. We call parameters of this type "signaling parameters" (SPs). Several SPs carry information about the instantaneous localization of dynamic objects in each C-OTDR channel. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are unstable, while, as a rule, a very stable SP is insensitive. This report describes a method for co-processing SPs that is designed to obtain the most effective estimates of dynamic object localization within the C-OTDR monitoring system framework.
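One simple way to co-process several independent estimates, offered here only as an illustrative assumption (the report's actual co-processing method is not reproduced), is inverse-variance weighting, which lets stable signaling parameters dominate the fused localization:

```python
# Illustrative fusion of per-SP localization estimates by inverse-variance
# weighting: estimates from stable (low-variance) SPs receive more weight.

def fuse_estimates(estimates, variances):
    weights = [1.0 / v for v in variances]
    return sum(e * w for e, w in zip(estimates, weights)) / sum(weights)

# Two SP-based position estimates (metres along the cable), the second
# four times noisier than the first (made-up values):
print(fuse_estimates([10.0, 12.0], [1.0, 4.0]))  # -> 10.4
```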
Abstract: Evolution strategy (ES) is a well-known class of evolutionary algorithms, and there have been many studies on ES. In this paper, the author proposes an extended ES for solving fuzzy-valued optimization problems. In the proposed ES, genotype values are not real numbers but fuzzy numbers, and the evolutionary processes are extended so that they can handle genotype instances with fuzzy numbers. In this study, the proposed method is experimentally applied to the evolution of neural networks with fuzzy weights and biases. Results reveal that fuzzy neural networks evolved using the proposed ES with fuzzy genotype values can model hidden target fuzzy functions even though no training data are explicitly provided. Next, the proposed method is evaluated with respect to variations in how fuzzy numbers are specified as genotype values. One of the most widely adopted fuzzy numbers is the symmetric triangular number, which can be specified by its lower and upper bounds (LU) or by its center and width (CW). Experimental results revealed that the LU model contributed more to the fuzzy ES than the CW model, which indicates that the LU model should be adopted in future applications of the proposed method.
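The LU representation can be sketched as a genotype holding the lower and upper bounds of a symmetric triangular fuzzy number, with a Gaussian mutation that preserves lower ≤ upper; the class, step size, and repair rule are illustrative assumptions, not the paper's operators:

```python
import random

# Sketch of a fuzzy genotype value in the LU (lower/upper bound)
# representation of a symmetric triangular fuzzy number, with a Gaussian
# mutation that keeps the bounds ordered.

class FuzzyGene:
    def __init__(self, lower, upper):
        assert lower <= upper
        self.lower, self.upper = lower, upper

    def center(self):
        return (self.lower + self.upper) / 2  # peak of the triangle

    def width(self):
        return self.upper - self.lower        # support width

    def mutate(self, sigma=0.1, rng=random):
        lo = self.lower + rng.gauss(0, sigma)
        up = self.upper + rng.gauss(0, sigma)
        # Repair: re-order the bounds if the mutation crossed them.
        return FuzzyGene(min(lo, up), max(lo, up))

g = FuzzyGene(0.2, 0.8)
child = g.mutate()
print(child.lower <= child.upper)  # invariant preserved
```

In the CW model the same triangle would be mutated via its center and width instead, which changes how the mutation noise spreads over the two degrees of freedom.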
Abstract: This paper introduces the concept and principles of data
cleaning, analyzes the types and causes of dirty data, and proposes
the key steps of a typical cleaning process. It puts forward a scalable
and versatile data cleaning framework and, for data with attribute
dependency relations, designs several violation-data discovery
algorithms expressed as formal formulas. These algorithms can find
inconsistent data across all target columns with conditional attribute
dependencies, whether the data is structured (SQL) or unstructured
(NoSQL). Based on these algorithms, six data cleaning methods are
given.
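The core idea of violation discovery for an attribute dependency can be sketched for a functional dependency X → Y: report the rows where the same X value maps to more than one Y value. The column names and rows below are made-up examples, not from the paper:

```python
# Sketch of violation discovery for a functional dependency x_col -> y_col:
# group rows by the x value and flag groups that map to multiple y values.

def fd_violations(rows, x_col, y_col):
    seen = {}
    for row in rows:
        seen.setdefault(row[x_col], set()).add(row[y_col])
    bad_keys = {k for k, ys in seen.items() if len(ys) > 1}
    return [row for row in rows if row[x_col] in bad_keys]

rows = [
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "NYC"},      # inconsistent with the row above
    {"zip": "60601", "city": "Chicago"},
]
print(fd_violations(rows, "zip", "city"))
```

The same grouping idea applies whether the rows come from a SQL table or from documents in a NoSQL store, once they are projected onto the dependent attributes.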
Abstract: The McEliece cryptosystem is an asymmetric
cryptosystem based on error-correcting codes. The classical McEliece
scheme uses an irreducible binary Goppa code, which is considered
unbreakable to date, especially with the parameters [1024, 524, 101],
but it suffers from a large public key matrix, which makes it difficult
to use in practice. In this work, irreducible and separable Goppa
codes are introduced, with flexible parameters and dynamic error
vectors, and a comparison between separable and irreducible Goppa
codes in the McEliece cryptosystem is carried out. For the encryption
stage, to obtain more meaningful comparisons, two types of tests
were chosen: in the first, the random message is held constant while
the parameters of the Goppa code are changed; in the second, the
parameters of the Goppa code are held constant (m=8 and t=10) while
the random message is changed. The results show that the time
needed to calculate the parity check matrix is higher for the separable
than for the irreducible McEliece cryptosystem, which is expected
because an extra parity check matrix for g^2(z) must be calculated in
the decryption process for the separable type, while the time needed
to execute the error locator in the decryption stage is better for the
separable type than for the irreducible type. The implementation was
done in Visual Studio using C#.
Abstract: Digital reference service is the provision of a traditional
library reference service by electronic means. In most cases users do
not get full satisfaction from digital reference services, for a variety
of reasons. This paper discusses the formal specification of the web
services application for digital reference services (WSDRS).
WSDRS is an informal model that claims to reduce the problems of
digital reference services in libraries. It uses web services technology
to provide an efficient digital way of satisfying users' needs in the
reference section of libraries. The informal model is written in
natural language, which can be inconsistent and ambiguous and may
cause difficulties for the developers of the system. To solve this
problem we decided to convert the informal specifications into
formal specifications, which should reduce the overall development
time and cost. We use the Z language to develop the formal model
and verify it with the Z/EVES theorem prover.
Abstract: In this paper, we describe the use of formal methods
to model malware behaviour. The modelling of harmful behaviour
rests upon syntactic structures that represent malicious procedures
inside malware. The malicious activities are modelled by a formal
grammar in which the components of API calls are the terminals, and
sets of API calls used in combination to achieve a goal are designated
as non-terminals. Combining different non-terminals in various ways
and tiers makes up the attack vectors used by harmful software. From
these syntactic structures a parser can be generated that takes
execution traces as input for pattern recognition.
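The terminal/non-terminal idea can be sketched over an execution trace; here a regular production stands in for the paper's grammar, and the API names, the "registry persistence" pattern, and the toy traces are illustrative assumptions only:

```python
import re

# Sketch of grammar-based detection over an execution trace: terminals are
# API-call names; a non-terminal ("registry persistence") is a production
# over them, expressed here as a regular pattern for simplicity.

# Non-terminal: RegOpenKey, eventually followed by RegSetValue, eventually
# followed by RegCloseKey (other calls may occur in between).
REGISTRY_PERSISTENCE = re.compile(
    r"RegOpenKey (?:\w+ )*?RegSetValue (?:\w+ )*?RegCloseKey"
)

def matches_attack_vector(trace):
    return REGISTRY_PERSISTENCE.search(" ".join(trace)) is not None

malicious = ["CreateFile", "RegOpenKey", "ReadFile",
             "RegSetValue", "RegCloseKey"]
benign = ["CreateFile", "ReadFile", "CloseHandle"]
print(matches_attack_vector(malicious), matches_attack_vector(benign))
```

A full realization would use a context-free grammar and a generated parser, allowing non-terminals to nest into the multi-tier attack vectors the paper describes.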
Abstract: Health of a person plays a vital role in the collective
health of his community and hence the well-being of the society as a
whole. But, in today’s fast paced technology driven world, health
issues are increasingly being associated with human behaviors – their
lifestyle. Social networks have tremendous impact on the health
behavior of individuals. Many researchers have used social network
analysis to understand human behavior that implicates their social
and economic environments. It would be interesting to use a similar
analysis to understand human behaviors that have health
implications. This paper focuses on such behavioural analyses with
health implications using social network analysis, and provides
possible algorithmic approaches. The results of these approaches can
be used by governing authorities to roll out health plans and benefits
and to take preventive measures, while pharmaceutical companies
can target specific markets and health insurance companies can better
model their insurance plans.
Abstract: Due to rapid technological innovation, tremendous
amounts of data are being accumulated all over the world in every
domain, such as pattern recognition, machine learning, spatial data
mining, image analysis, fraud analysis, and the World Wide Web.
This makes it essential to develop tools for data mining
functionalities. The major aim of this paper is to analyze various
tools that are used to build resourceful analytical or descriptive
models for handling large amounts of information efficiently and in a
user-friendly way. In this survey, the diverse tools are described
along with their technical paradigms, graphical interfaces and built-in
algorithms, which are very useful for handling significant amounts of
data.
Abstract: In this paper, we present a comparative study of three
methods for 2D face recognition: Iso-Geodesic Curves (IGC),
Geodesic Distance (GD) and Geodesic-Intensity Histogram (GIH).
These approaches are based on computing geodesic distances
between points of the facial surface and between facial curves. In
this study we represent the gray-level image as a 2D surface in 3D
space, with the third coordinate proportional to the intensity values
of the pixels. In the classification step, we use Neural Networks
(NN), K-Nearest Neighbors (KNN) and Support Vector Machines
(SVM). The images used in our experiments are from two
well-known face image databases, ORL and YaleB. The ORL
database was used to evaluate the performance of the methods under
varying pose and sample size, and the YaleB database was used to
examine the performance of the systems under varying facial
expressions and lighting.
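The image-as-surface idea can be sketched with a geodesic distance computed by Dijkstra's algorithm over the pixel grid, where each step's length grows with the intensity change (the third coordinate); the edge metric, the scaling factor alpha, and the tiny images are illustrative assumptions:

```python
import heapq

# Sketch of geodesic distance on an image treated as a surface in 3-D with
# the third coordinate proportional to intensity: Dijkstra over the
# 4-connected pixel grid with edge length sqrt(1 + (alpha * dI)^2).

def geodesic_distance(img, start, goal, alpha=1.0):
    rows, cols = len(img), len(img[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                di = alpha * (img[nr][nc] - img[r][c])
                nd = d + (1.0 + di * di) ** 0.5
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

flat = [[0.0] * 3 for _ in range(3)]
print(geodesic_distance(flat, (0, 0), (0, 2)))  # flat surface -> 2.0
```

On a flat image the geodesic reduces to the grid path length, while an intensity ridge between two pixels lengthens the geodesic, which is what makes such distances informative for comparing facial surfaces.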