Abstract: Article 5(3) of the Brussels I Regulation provides that a person domiciled in a Member State may be sued in another Member State, in matters relating to tort, delict or quasi-delict, in the courts for the place where the harmful event occurred or may occur. For a number of years Article 5(3) has been at the centre of the debate on intellectual property rights infringement over the Internet, yet nothing has been done to adapt the provisions designed for non-Internet infringement of intellectual property rights to the Internet context. The author's findings indicate that, in the case of intellectual property rights infringement on the Internet, the plaintiff may sue either in the courts of the Member State of the event giving rise to the damage (in the newspaper analogy, where the publisher is established) or in the courts of the Member State where the damage occurred (where the defamatory article was distributed). It must be admitted, however, that whilst infringement over the Internet bears some similarity to multi-State defamation by means of newspapers, the position is not entirely analogous, owing to the cross-border nature of the Internet. A simple example that illustrates this contentious nature is a defamatory statement published on a website accessible in different Member States and available in different languages. Two questions therefore need answering: how do these traditional jurisdictional rules apply to intellectual property rights infringement over the Internet, and should they be modified?
Abstract: This paper proposes an algorithm which automatically aligns and stitches component medical (fluoroscopic) images with varying degrees of overlap into a single composite image. The alignment method is based on a similarity measure between the component images; as applied here, the technique is intensity-based rather than feature-based. It works well in domains where feature-based methods have difficulty, yet it is more robust than traditional correlation. Component images are stitched together using a new triangular-averaging-based blending algorithm. The quality of the resulting image is tested for photometric inconsistencies and geometric misalignments. The method cannot correct rotational, scale, or perspective artifacts.
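The triangular-averaging idea can be illustrated with a one-dimensional sketch: pixel weights ramp linearly across the overlap so each image fades into the other. This is only an illustration of a linear ("triangular") cross-fade, under our own assumptions, not the authors' exact 2-D algorithm.

```python
def blend_overlap(left, right, overlap):
    """Blend two 1-D intensity rows whose last/first `overlap` pixels coincide.

    Weights ramp linearly (a 'triangular' profile) across the overlap:
    the left image's weight falls towards 0 while the right's rises
    towards 1, which hides photometric seams at the join.
    """
    n = overlap
    blended = []
    for i in range(n):
        w = (i + 1) / (n + 1)           # right-image weight grows across the overlap
        a = left[len(left) - n + i]      # pixel from the left image's overlap region
        b = right[i]                     # corresponding pixel from the right image
        blended.append((1 - w) * a + w * b)
    return left[:len(left) - n] + blended + right[n:]
```

The same ramp applied along both axes gives the usual bilinear feathering for 2-D overlaps.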
Abstract: This research work proposes a study of fruit bruise detection by means of a biospeckle method, with papaya (Carica papaya) as the test fruit. Papaya is recognized for outstanding nutritional qualities, with high contents of vitamin A, calcium, and carbohydrates, and enjoys high popularity worldwide in terms of consumption and acceptability. Its commercialization faces particular problems associated with bruise generation during harvesting, packing, and transportation. Papaya is a climacteric fruit, so it can be harvested before maturation is complete. On the one hand, bruise generation at that stage is partially controlled because the fruit flesh exhibits high mechanical firmness; on the other hand, mechanical loads at that maturation stage can set up a future bruise that cannot yet be detected by conventional methods. Mechanical damage to the fruit skin opens an entry point for microorganisms and pathogens, which cause severe losses of quality attributes. Traditional techniques of fruit quality inspection include total-soluble-solids determination, mechanical firmness tests, and visual inspection, which would hardly meet the requirements of a fully automated process. The pertinent literature, however, reveals a method named biospeckle which is based on laser reflectance and interference phenomena. The laser biospeckle, or dynamic speckle, is quantified by means of the Moment of Inertia, named after its mechanical counterpart because of the similarity between the defining formulae. Biospeckle techniques can quantify the biological activity of living tissues and have been applied to seed viability analysis, vegetable senescence, and similar topics. Since biospeckle techniques can monitor tissue physiology, they should also detect changes in the fruit caused by mechanical damage.
The proposed technique is non-invasive and generates numerical results suitable for automation. The experimental tests associated with this research included selecting papaya fruit at different maturation stages and submitting them to artificial mechanical bruising. The damage was compared visually with the frequency maps yielded by the biospeckle technique, and the results were in close agreement.
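The Moment of Inertia mentioned above is commonly computed, in the biospeckle literature, from the co-occurrence matrix of a pixel's intensity time history: entries far from the diagonal mean large intensity jumps, i.e. high biological activity. The sketch below follows that common formulation; it is not necessarily the exact estimator used in this paper.

```python
from collections import defaultdict

def inertia_moment(series):
    """Inertia Moment of an intensity time series (one THSP row).

    Builds the co-occurrence matrix of successive intensity values,
    normalizes each row, and weights each entry by its squared distance
    from the diagonal: active tissue produces frequent large jumps and
    hence a large IM, while still tissue stays near the diagonal (IM ~ 0).
    """
    counts = defaultdict(int)       # (i, j) -> number of i->j transitions
    row_totals = defaultdict(int)   # i -> total transitions starting at i
    for a, b in zip(series, series[1:]):
        counts[(a, b)] += 1
        row_totals[a] += 1
    im = 0.0
    for (a, b), c in counts.items():
        im += (c / row_totals[a]) * (a - b) ** 2
    return im
```

A constant series gives IM = 0, while a series that keeps jumping between distant grey levels gives a large IM, which is what makes the quantity usable as an activity (and hence bruise) indicator.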
Abstract: This paper is concerned with the production of an Arabic word semantic similarity benchmark dataset, the first of its kind for Arabic, developed specifically to assess the accuracy of word semantic similarity measures. Semantic similarity is an essential component of numerous applications in fields such as natural language processing, artificial intelligence, linguistics, and psychology. Most of the reported work has been done for English; to the best of our knowledge, there is no word similarity measure developed specifically for Arabic. In this paper, an Arabic benchmark dataset of 70 word pairs is presented. The best available techniques were used to produce the dataset, including selecting and creating the materials, collecting human ratings from a representative sample of participants, and calculating the overall ratings. This dataset will make a substantial contribution to future work on Arabic word semantic similarity and will hopefully serve as a reference basis from which to evaluate and compare different methodologies in the field.
Abstract: There have been numerous implementations of security systems using biometrics, especially for identification and verification. An example of a biometric pattern is the iris pattern of the human eye, which is considered unique to each person; the difficulty lies in encoding it. In this research, an efficient iris recognition method is proposed. In the proposed method, iris segmentation is based on the observation that the pupil has lower intensity than the iris, and the iris has lower intensity than the sclera. By detecting the boundary between the pupil and the iris and the boundary between the iris and the sclera, the iris area can be separated from the pupil and the sclera. A step is taken to reduce the effect of eyelashes and of specular reflection on the pupil. A four-level Coiflet wavelet transform is then applied to the extracted iris image, and a modified Hamming distance is employed to measure the similarity between two irises. The method yields an identification success rate of 84.25% on the CASIA version 1.0 database, and an accuracy of 77.78% for the left eyes and 86.67% for the right eyes of the MMU 1 database. The time required for the encoding process, from segmentation until the iris code is generated, is 0.7096 seconds. These results show that the accuracy and speed of the method are better than those of many other methods.
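A mask-aware Hamming distance of the kind referred to here can be sketched as follows. This is the generic formulation common in iris recognition (disagreement rate over bits both masks mark as valid), not necessarily the authors' exact modification.

```python
def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Modified Hamming distance between two binary iris codes.

    Bits flagged as noise (eyelashes, specular reflections) in either
    mask are excluded; the distance is the disagreement rate over the
    remaining valid bits, so 0.0 means identical codes and ~0.5 means
    statistically independent irises.
    """
    disagreements = 0
    valid = 0
    for a, b, ma, mb in zip(code_a, code_b, mask_a, mask_b):
        if ma and mb:                # both bits are usable
            valid += 1
            if a != b:
                disagreements += 1
    if valid == 0:
        return 1.0                   # no usable bits: treat as maximal distance
    return disagreements / valid
```

In practice the comparison is repeated over several circular shifts of one code to compensate for eye rotation, and the minimum distance is kept.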
Abstract: This paper presents an analysis of the similarity between local decisions in the classification of alphanumeric hand-prints. From the local characteristics of hand-printed numerals and characters, extracted by a zoning method, the set of classification decisions is obtained and the similarity among them is investigated. For this purpose the Similarity Index is used, an estimator of similarity between classifiers based on the analysis of agreement between their decisions. Experimental tests, carried out using numerals and characters from the CEDAR and ETL databases respectively, show to what extent different parts of the patterns provide similar classification decisions.
Abstract: This lecture presents significant advances in understanding the mechanism of transfer processes in turbulent separated flows. Experimental data suggest the governing role of the local pressure gradient generated in the immediate vicinity of the wall in separated flow, as a result of the intense instantaneous accelerations induced by large-scale vortex structures. On this basis, similarity laws for mean velocity and temperature, for spectral characteristics, and a heat and mass transfer law for turbulent separated flows have been developed; these laws are confirmed by the available experimental data. The results were employed to analyse heat and mass transfer in some very complex processes occurring in technological applications, such as impinging jets, heat transfer from cylinders in cross flow and in tube banks, and packed beds, where the processes manifest distinct properties that allow them to be classified as turbulent separated flows. Many experimental facts receive an explanation for the first time.
Abstract: As a result of the daily workflow in the design development departments of companies, databases containing huge numbers of 3D geometric models are generated. For a given problem, engineers create CAD drawings based on their design ideas and evaluate the performance of the resulting design, e.g. by computational simulation. Usually, new geometries are built either by utilizing and modifying sets of existing components or by adding single newly designed parts to a more complex design. The present paper addresses the two facets of acquiring components from large design databases automatically and providing the engineer with a reasonable overview of the parts. A unified framework based on topographic non-negative matrix factorization (TNMF) is proposed which solves both aspects simultaneously. First, meaningful components are extracted from a given database into a parts-based representation in an unsupervised manner. Second, the extracted components are organized and visualized on square-lattice 2D maps. It is shown, on the example of turbine-like geometries, that these maps efficiently provide a well-structured overview of the database content and, at the same time, define a measure of spatial similarity that allows easy access to and reuse of components during design development.
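The parts-based extraction underlying this framework rests on non-negative matrix factorization. The sketch below implements plain NMF with the standard Lee-Seung multiplicative updates; the topographic variant (TNMF), which additionally arranges the parts on a 2D map, is not reproduced here.

```python
import random

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(x * y for x, y in zip(row, col)) for col in Bt] for row in A]

def nmf(V, k, iters=300, seed=0):
    """Factorize a non-negative matrix V (list of lists) as V ~ W @ H.

    Lee-Seung multiplicative updates keep W and H non-negative and never
    increase the Frobenius reconstruction error; the columns of W play
    the role of the 'parts' the abstract refers to.
    """
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    eps = 1e-9
    for _ in range(iters):
        WtV = matmul(transpose(W), V)
        WtWH = matmul(matmul(transpose(W), W), H)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(n)]
             for i in range(k)]
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H
```

Because all factors stay non-negative, each input geometry is explained as an additive combination of parts, which is what makes the representation interpretable for component reuse.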
Abstract: Searching for similar documents and managing documents are important subjects in text mining, and one of the most important parts of similar-document research is classifying or clustering the documents. This study presents a similar-document search approach that addresses the case of documents belonging to multiple categories (the multiple-categories problem). The proposed method, based on Fuzzy Similarity Classification (FSC), is compared with the Rocchio algorithm and the naive Bayes method, both widely used in text mining. Empirical results show that the proposed method is quite successful and can be applied effectively. In a second stage, a multiple-categories vector method is used, based on how frequently categories are observed together. Empirical results show that, compared with the classical approach, the proposed method nearly doubles the achieved accuracy.
Abstract: This paper proposes fractal patterns for power quality (PQ) disturbance detection using a classifier based on color relational analysis (CRA). An iterated function system (IFS) uses non-linear interpolation in the map, together with similarity maps, to construct fractal patterns of power quality disturbances, including harmonics, voltage sag, voltage swell, voltage sag involving harmonics, voltage swell involving harmonics, and voltage interruption. The non-linear interpolation functions (NIFs) combined with the fractal dimension (FD) make the fractal patterns of normal and abnormal voltage signals more distinguishable. The CRA-based classifier then discriminates the disturbance events in a power system. Compared with wavelet neural networks, the test results show accurate discrimination, good robustness, and faster processing in detecting disturbance events.
Abstract: In this article, an analytical technique called the Homotopy Perturbation Method (HPM) is employed for the steady flow of a Walter's B' fluid in a vertical channel with a porous wall. The HPM is used to solve the nonlinear equation obtained by applying a similarity transformation to the ordinary differential equation derived from the continuity and momentum equations of this flow. The results obtained from the HPM are compared with those from the Runge-Kutta method in order to verify the accuracy of the proposed method; they show that the HPM achieves good results in predicting the solution of such problems. Finally, the solution is used to obtain the velocity components and to discuss the physics of the flow.
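For readers unfamiliar with the method, the generic HPM construction (standard in the literature, not specific to this paper's equations) embeds a nonlinear problem $A(u) = f$ in a homotopy with embedding parameter $p$:

```latex
% Split A(u) - f = 0 into a linear part L and a nonlinear part N, A = L + N.
% Construct the homotopy H(v, p) with embedding parameter p in [0, 1]:
H(v, p) = (1 - p)\,\bigl[L(v) - L(u_0)\bigr] + p\,\bigl[A(v) - f\bigr] = 0 .
% Expand v in powers of p and let p -> 1 to recover the solution:
v = v_0 + p\,v_1 + p^2 v_2 + \cdots, \qquad
u = \lim_{p \to 1} v = \sum_{i \ge 0} v_i .
```

At $p = 0$ the homotopy reduces to the easily solvable linear problem with initial guess $u_0$; collecting powers of $p$ yields a cascade of linear problems for the terms $v_i$.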
Abstract: This paper proposes rough set models with three different levels of knowledge granules in incomplete information systems, under a tolerance relation defined by the similarity between objects according to their attribute values. By introducing a dominance relation on the universe of discourse, similarity classes are decomposed into three subclasses, a slightly-better subclass, a slightly-worse subclass, and a vague subclass, which in turn decomposes the lower and upper approximations into three components. Using these components, information retrieval can effectively find naturally hierarchical expansions of queries and construct answers to elaborative queries. The approach is illustrated by applying the rough set models in the design of an information retrieval system that accesses documents expanded at different granularities. The proposed method enhances the application of rough set models to the flexibility of query expansion and elaborative queries in information retrieval.
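The tolerance-based approximations the abstract builds on can be sketched directly. This is the generic rough set lower/upper approximation under a tolerance relation, without the paper's three-way dominance decomposition:

```python
def tolerance_class(universe, x, similar):
    """Objects indistinguishable from x under the tolerance relation."""
    return {y for y in universe if similar(x, y)}

def approximations(universe, target, similar):
    """Lower and upper approximations of `target` under a tolerance relation.

    Lower: objects whose whole tolerance class lies inside the target set
    (certain members). Upper: objects whose tolerance class meets the
    target (possible members). The boundary upper - lower is where the
    paper's finer three-way decomposition operates.
    """
    lower, upper = set(), set()
    for x in universe:
        tc = tolerance_class(universe, x, similar)
        if tc <= target:
            lower.add(x)
        if tc & target:
            upper.add(x)
    return lower, upper
```

In the retrieval setting, the target set is the documents matching a query, and the upper approximation yields the naturally expanded answer.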
Abstract: Locality Sensitive Hashing (LSH) is one of the most promising techniques for solving the nearest-neighbour search problem in high-dimensional spaces. Euclidean LSH is the most popular variant of LSH and has been applied successfully in many multimedia applications. However, Euclidean LSH has limitations that affect its structure and query performance; the main one is large memory consumption, since a large number of hash tables is required to achieve good accuracy. In this paper, we propose a new hashing algorithm that overcomes the storage problem and improves query time while keeping accuracy similar to that of the original Euclidean LSH. Experimental results on a real large-scale dataset show that the proposed approach achieves good performance and consumes less memory than Euclidean LSH.
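For context, the classical Euclidean (p-stable) LSH hash function that the paper takes as its baseline can be sketched as follows; the authors' own algorithm differs, and this shows only the baseline construction.

```python
import math, random

def make_e2lsh_hash(dim, w=4.0, seed=0):
    """One p-stable (Euclidean) LSH function h(v) = floor((a . v + b) / w).

    `a` has i.i.d. standard-normal entries and b is uniform on [0, w),
    so points close in L2 distance collide with higher probability than
    distant ones. Real indexes concatenate several such h into one bucket
    key and maintain many tables, which is the memory cost at issue.
    """
    rng = random.Random(seed)
    a = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    b = rng.uniform(0.0, w)
    def h(v):
        return math.floor((sum(x * y for x, y in zip(a, v)) + b) / w)
    return h
```

The bucket width `w` trades collision probability for selectivity: larger `w` merges more points into the same bucket.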
Abstract: This paper examines long-range dependence, or long memory, of financial time series on exchange-rate data using fractional Brownian motion (fBm). The principle of the spectral density function, presented in Section 2, is used to find the range of the Hurst parameter (H) of the fBm: if 0 < H < 1/2 the increments are negatively correlated (anti-persistent), if H = 1/2 the process reduces to ordinary Brownian motion, and if 1/2 < H < 1 the increments are positively correlated and the series exhibits long memory.
Abstract: The aim of this study is to assess the potential of optical coherence tomography (OCT) for biometric recognition. OCT is based on optical signal acquisition and processing and offers micrometre resolution. In this study, porcine skin was used to verify the proposed approach: porcine tissue is well established as structurally and immunohistochemically similar to human skin, making it a suitable investigational specimen for pre-clinical trials. The skin was tattooed with a tattoo machine and tattoo pigment, and the pattern of the tattooed skin was detected by OCT at different needle speeds. The results were consistent with the histology images, showing that OCT can examine tattooed skin sections noninvasively and may make it possible to identify morphological changes inside the skin.
Abstract: Source code retrieval is of immense importance in the software engineering field. The complex tasks of retrieving and extracting information from source code documents are vital in the development cycle of large software systems, and two main subtasks result from these activities: code duplication prevention and plagiarism detection. In this paper, we propose a source code retrieval system based on a two-level fingerprint representation capturing, respectively, the structural and the semantic information within a source code. A sequence alignment technique is applied to these fingerprints in order to quantify the similarity between source code portions. The specific purpose of the system is to detect plagiarism and duplicated code between programs written in different programming languages belonging to the same family, such as C, C++, Java, and C#. These four languages are supported by the current version of the system, which is designed so that it can easily be adapted to any programming language.
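Alignment-based similarity over token fingerprints can be sketched with Python's standard library. This uses difflib's Ratcliff/Obershelp matcher as a stand-in for the paper's own alignment technique, which is not reproduced here:

```python
from difflib import SequenceMatcher

def fingerprint_similarity(tokens_a, tokens_b):
    """Similarity between two token-level fingerprints via alignment.

    SequenceMatcher.ratio() is 2*M/T, where M is the number of aligned
    matching tokens and T the total number of tokens, so 1.0 means
    identical token streams and 0.0 means no common subsequence.
    """
    return SequenceMatcher(None, tokens_a, tokens_b).ratio()
```

Because the comparison runs on language-neutral fingerprints rather than raw text, renamed identifiers or translated syntax need not destroy the alignment, which is the point of the two-level representation.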
Abstract: In this study, a fuzzy similarity approach for Arabic web page classification is presented. The approach uses a fuzzy term-category relation, manipulating membership degrees for the training data and degree values for a test web page. Six measures are applied and compared in this study, the Einstein, Algebraic, Hamacher, MinMax, Special-case fuzzy, and Bounded Difference approaches, evaluated on 50 different Arabic web pages. The Einstein measure gave the best performance among the measures. An analysis of these measures and concluding remarks are also presented.
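Most of the measure names above correspond to standard fuzzy t-norms for combining two membership degrees a, b in [0, 1]; how the paper aggregates them over terms is not specified here, so the mapping below is our assumption:

```python
def algebraic(a, b):            # algebraic product t-norm
    return a * b

def einstein(a, b):             # Einstein product t-norm
    return (a * b) / (2 - (a + b - a * b))

def hamacher(a, b):             # Hamacher product t-norm (gamma = 0)
    if a == b == 0:
        return 0.0
    return (a * b) / (a + b - a * b)

def min_max(a, b):              # minimum t-norm (core of the MinMax measure)
    return min(a, b)

def bounded_difference(a, b):   # Lukasiewicz / bounded-difference t-norm
    return max(0.0, a + b - 1)
```

For any inputs these satisfy bounded_difference ≤ einstein ≤ algebraic ≤ hamacher ≤ min_max, so the choice of t-norm controls how strictly partial memberships are discounted.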
Abstract: In this paper a one-dimensional Self-Organizing Map (SOM) algorithm for feature selection is presented. The algorithm is based on a first classification of the input dataset in a similarity space; from this classification, a set of positive and negative features is computed for each class and selected as the result of the procedure. The procedure is evaluated on an in-house dataset from a Knowledge Discovery from Text (KDT) application and on a set of publicly available datasets used in international feature selection competitions, which come from KDT applications, drug discovery, and other domains. The correct classifications available for the training and validation datasets are used to optimize the parameters for positive and negative feature extraction. The process becomes feasible for large and sparse datasets, such as those obtained in KDT applications, by using compression techniques to store the similarity matrix together with speed-up techniques for the Kohonen algorithm that take advantage of the sparsity of the input matrix. These improvements, combined with grid computing, make it feasible to apply the methodology to massive datasets.
Abstract: Most biclustering/projected clustering algorithms are based either on the Euclidean distance or on the correlation coefficient, which capture only linear relationships. However, in many applications, such as gene expression data and word-document data, non-linear relationships may exist between the objects. Mutual information between two variables provides a more general criterion for investigating dependencies among variables. In this paper, we improve upon our previous mutual-information-based biclustering algorithm in terms of computation time and the types of clusters identified: the new algorithm can find biclusters with mixed relationships and is faster than the previous one. To the best of our knowledge, no other existing biclustering algorithm uses mutual information as a similarity measure. We present experimental results on synthetic data as well as on yeast expression data; the biclusters found on the yeast data were biologically and statistically significant according to GO Tool Box and FuncAssociate.
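The similarity criterion itself is standard and easy to state in code. The sketch below computes mutual information for discrete (e.g. pre-binned expression) sequences; the paper's discretization and biclustering search are not reproduced.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two discrete sequences.

    I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x) p(y)) ).
    Unlike Pearson correlation it detects arbitrary, including
    non-linear, dependencies, which is the property the biclustering
    algorithm exploits.
    """
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts of X
    py = Counter(ys)             # marginal counts of Y
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p(x) * p(y)) written with counts to avoid extra divisions
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi
```

Continuous expression values must be discretized first (for example into equal-frequency bins), and the bin count trades bias against variance in the estimate.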
Abstract: Fractal-shaped orifices are assumed to have a significant effect on the pressure drop downstream in pipe flow, because their self-similar edges enhance the mixing properties. Here, we investigate the pressure drop after such fractal orifices using a digital micro-manometer at different stations downstream of a turbulent pipe flow, and make a direct comparison with the pressure drop measured after regular orifices of the same flow area. Our results show that fractal-shaped orifices have a significant effect on the downstream pressure drop, and that the pressure drop measured across them is lower than that across ordinary orifices of the same flow area. This result could be important when designing piping systems with losses in mind for a given flow-control area, and it makes fractal-shaped orifices promising as flowmeters, since they can sense the pressure drop across them accurately with smaller losses than regular orifices.