Abstract: Privacy of RFID systems is receiving increasing attention in the RFID community. RFID privacy is important because RFID tags will be attached to all kinds of products and physical objects, including people. The possible abuse or excessive use of RFID tracking capability by malicious users can lead to privacy violations. In this paper, we discuss how different industries use RFID and the privacy and security issues that arise when RFID is deployed in these industries. Although RFID technology offers interesting services to customers and retailers, it can also endanger the privacy of end-users. Personal data can be leaked if a protection mechanism is not deployed in the RFID system. The paper summarizes many different solutions for implementing privacy and security when deploying RFID systems.
Abstract: Emotion classification of text documents is applied to determine whether a document expresses a particular emotion of its writer. While various supervised methods have previously been used for emotion classification of documents, in this research we present a novel model that supports the classification algorithms, producing more accurate results with the support of the TF-IDF measure. Different experiments have been applied to demonstrate the applicability of the proposed model; the model succeeds in raising the accuracy according to the chosen metrics (precision, recall, and F-measure) by refining the lexicon, integrating lexicons from different perspectives, and applying the TF-IDF weighting measure to the classifying features. The proposed model has also been compared with other research to demonstrate its competence in raising result accuracy.
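Since the abstract does not give the exact weighting formula, the following is a minimal sketch of the standard TF-IDF measure it refers to; the toy three-document corpus and the raw-count term frequency are illustrative assumptions, not the paper's data.

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weight of every term in each tokenized document."""
    n = len(docs)
    # document frequency: number of documents containing each term
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return weights

# toy corpus: a term concentrated in one document gets a high weight
docs = [["happy", "joy", "happy"], ["sad", "tears"], ["joy", "tears"]]
weights = tf_idf(docs)
```

Terms that occur in every document receive weight zero, so only discriminative features contribute to the classifier.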
Abstract: Document analysis is an important research field that aims to extract information by analyzing the data in documents. Since one of the important targets of many fields is to understand what people actually want, sentiment analysis has become one of the vital fields tightly related to document analysis. This research focuses on analyzing text documents to classify each document according to its opinion. The aim of this research is to detect emotions in text documents by enriching lexicons and adapting their content through semantic pattern extraction. The proposed approach is presented, and different experiments are applied from different perspectives to reveal its positive impact on the classification results.
Abstract: The Unified Modeling Language (UML) is one of the most widespread modeling languages, standardized by the Object Management Group (OMG). Therefore, the model-driven engineering (MDE) community attempts to reuse UML diagrams rather than construct them from scratch. A UML model is produced according to a specific software development process. Existing model generation methods focus on different transformation techniques without considering the development process. Our work aims to construct a UML component from fragments of UML diagrams based on an agile method. We define a UML fragment as a portion of a UML diagram that expresses a business target. To guide the generation of UML model fragments using an agile process, we need a flexible approach that adapts to agile changes and covers all of its activities. We use a software product line (SPL) to derive a fragment of an agile process method. This paper explains our approach, named RECUP, for generating UML fragments following an agile process, and defines its different phases and artifacts.
Abstract: About 1% of the world's population suffers from the hidden disability known as epilepsy, and many developing countries are not fully equipped to counter this problem. In order to reduce the inconvenience and danger of epilepsy, different methods have been researched using artificial neural network (ANN) classification to distinguish epileptic waveforms from normal brain waveforms. This paper outlines the aim of achieving massive ANN parallelization through dedicated hardware using bit-serial processing. The design of this bit-serial Neural Processing Element (NPE) is presented, which implements the functionality of a complete neuron with variable accuracy. The proposed design has been tested taking into consideration the non-idealities of a hardware ANN. The NPE consists of a bit-serial multiplier, which uses only 16 logic elements on an Altera Cyclone IV FPGA, a bit-serial ALU, and a look-up table. Arrays of NPEs can be driven by a single controller that executes the neural processing algorithm. In conclusion, the proposed compact NPE design allows the construction of complex hardware ANNs that can be implemented in portable equipment suited to the needs of a single epileptic patient in his or her daily activities, to predict the occurrence of impending tonic-clonic seizures.
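The hardware details are beyond the scope of an abstract, but the bit-serial principle the NPE relies on can be illustrated behaviourally: a shift-and-add multiplier consumes one bit of an operand per clock cycle, trading cycles for very low logic usage. This Python sketch is illustrative only and does not model the actual FPGA design.

```python
def bit_serial_multiply(a, b, width=8):
    """Shift-and-add multiplication of two unsigned integers,
    consuming one bit of b per simulated clock cycle (LSB first),
    as a bit-serial hardware multiplier would."""
    acc = 0
    for cycle in range(width):
        bit = (b >> cycle) & 1
        if bit:
            acc += a << cycle   # accumulate this cycle's partial product
    return acc

assert bit_serial_multiply(13, 11) == 143
```

Reducing `width` trades accuracy for fewer cycles, mirroring the variable-accuracy operation mentioned above.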
Abstract: Edge detection is one of the most important tasks in image processing. Medical image edge detection plays an important role in the segmentation and object recognition of human organs. It refers to the process of identifying and locating sharp discontinuities in medical images. In this paper, a neuro-fuzzy based approach is introduced to detect edges in noisy medical images. This approach uses a desired number of neuro-fuzzy subdetectors with a postprocessor to detect the edges of medical images. The internal parameters of the approach are optimized by training with patterns derived from artificial images. The performance of the approach is evaluated on different medical images and compared with popular edge detection algorithms. The experimental results show that this approach performs better than competing edge detection algorithms on noisy medical images.
Abstract: Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography that produces a reconstructed image of human soft tissue using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes the light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue, such as the absorption coefficient, scattering coefficient, and optical flux, are processed by the standard regularization technique called Levenberg-Marquardt regularization. Reconstruction algorithms such as the Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) methods are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE), and CPU time for reconstructing images are analyzed to assess performance.
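The Levenberg-Marquardt regularization named above centres on a damped normal-equations update; a minimal sketch of that step follows. The Jacobian `J` of the forward model and the residual vector are generic placeholders here, not the paper's actual operators.

```python
import numpy as np

def lm_step(J, residual, lam):
    """One Levenberg-Marquardt update: solve (J^T J + lam * I) d = J^T r.
    The damping lam blends Gauss-Newton speed with gradient-descent
    stability, regularizing the ill-posed reconstruction."""
    JtJ = J.T @ J
    rhs = J.T @ residual
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), rhs)
```

Larger `lam` shrinks the step toward steepest descent, which is what stabilizes the inversion when the measurement data are incomplete.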
Abstract: A peer-to-peer storage system faces challenges such as peer availability, data protection, and churn rate. To address these challenges, different redundancy, replacement, and repair schemes are used. This paper presents a hybrid redundancy scheme combining replication and erasure coding. We calculate and compare the storage, access, and maintenance costs of our proposed scheme with existing redundancy schemes. For realistic peer behaviour, a trace of a live peer-to-peer system is used. The effects of different replication and repair schemes are also shown. The proposed hybrid scheme performs better than the existing double-coding hybrid scheme in all metrics and has an improved maintenance cost compared with hierarchical codes.
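The cost comparison described above rests on standard redundancy arithmetic; a sketch under simplifying assumptions (independent peer failures, illustrative parameters) is:

```python
from math import comb

def replica_availability(p, r):
    """P(object retrievable) with r full replicas, each peer up with prob p."""
    return 1 - (1 - p) ** r

def erasure_availability(p, k, n):
    """P(object retrievable) with an (n, k) erasure code: the object is
    split into k fragments, coded into n, and any k of the n suffice."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# storage overhead: r for replication, n/k for erasure coding;
# e.g. a (16, 10) code stores 1.6x the data yet tolerates 6 lost fragments
```

At equal storage overhead, erasure coding generally yields higher availability, which is why hybrid schemes typically keep a full replica for cheap access and coded fragments for durability.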
Abstract: This paper presents an unsupervised color image segmentation method. It is based on a hierarchical analysis of a 2-D histogram in the RGB color space. This histogram minimizes the storage space of images and thus facilitates operations between them. The improved segmentation approach shows better identification of objects in a color image and, at the same time, the system is fast.
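The abstract does not specify which two axes its 2-D histogram uses; assuming, purely for illustration, a histogram over two of the three color channels, such a structure can be built as follows.

```python
import numpy as np

def histogram_2d(image, bins=16):
    """2-D histogram over the first two channels of an RGB image.
    The channel choice and bin count are illustrative assumptions;
    a bins x bins array is far smaller than a full 3-D RGB histogram,
    which is the storage saving the method exploits."""
    a = image[..., 0].ravel()
    b = image[..., 1].ravel()
    hist, _, _ = np.histogram2d(a, b, bins=bins, range=[[0, 256], [0, 256]])
    return hist
```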
Abstract: In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of a signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of the phase congruency and the robustness of the CSLBP operator on flat images, as well as under blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The proposed descriptor is formed by extracting the phase congruency and the CSLBP value of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low-resolution DaimlerChrysler datasets to evaluate the detection performance of the pedestrian detection system based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor achieves better detection performance than a set of state-of-the-art feature extraction methodologies.
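As an illustration of the texture component, the CS-LBP code of a pixel compares the four center-symmetric neighbour pairs in its 3x3 neighbourhood; the threshold value and neighbour ordering here are illustrative assumptions, not the paper's exact configuration.

```python
def cs_lbp(patch, T=0.01):
    """Center-Symmetric LBP code of a 3x3 patch: one bit per
    center-symmetric neighbour pair, set when the pair's
    difference exceeds the threshold T."""
    # the 8 neighbours of the centre pixel, in circular order
    n = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
         patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > T:
            code |= 1 << i
    return code
```

The code takes only 2^4 = 16 values (versus 256 for plain LBP), which keeps the concatenated region histograms compact.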
Abstract: Access to advanced medical services is one of the medical challenges faced by our present society, especially in distant geographical locations that may be inaccessible. Hence the need for telemedicine, through which live video of a doctor can be streamed to a patient located anywhere in the world at any time. Patients' medical records contain very sensitive information that should not be made accessible to unauthorized people, in order to protect privacy, integrity, and confidentiality. This research work focuses on a more robust security measure, a biometric (fingerprint) form of access control to patient data by the medical specialist/practitioner.
Abstract: Due to its high computational cost, mutation testing has been neglected by researchers. Recently, many cost and mutant reduction techniques have been developed, improved, and experimented with, but few of them have related the possibility of reducing the cost of mutation testing to the program code type of the application under test. This paper is a comparative study of four operator selection techniques (mutant sampling, class-level operators, method-level operators, and all-operators selection) based on the program code type of each application under test. It aims at finding an alternative approach to reveal the effect of code type on the mutation testing score. The results of our experiment show that the program code type can affect the mutation score and that programs using polymorphism are best suited to be tested with mutation testing.
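For reference, the mutation score the comparison above is based on is the standard ratio of killed mutants; the handling of equivalent mutants below is the usual convention, not a detail taken from this paper.

```python
def mutation_score(killed, total_mutants, equivalent=0):
    """Mutation score: fraction of non-equivalent mutants
    killed by the test suite."""
    return killed / (total_mutants - equivalent)

# e.g. 8 killed out of 12 generated mutants, 2 shown to be equivalent
assert mutation_score(8, 12, equivalent=2) == 0.8
```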
Abstract: Ambient Intelligence (AmI) proposes a new way of thinking about computers that follows the Ubiquitous Computing vision of Mark Weiser. Central to this vision is the Disappearing Computer initiative, with users immersed in intelligent environments. Hence, technologies need to be adapted so that they are capable of replacing the traditional inputs to the system by embedding them in everyday artifacts. In this work, we present an approach that uses Radio Frequency Identification (RFID) and Near Field Communication (NFC) technologies. With the latter, a new form of interaction by contact appears. We compare both technologies by analyzing their requirements and advantages. In addition, we propose using a combination of RFID and NFC.
Abstract: Mining big data represents a big challenge nowadays. Much research is concerned with mining massive amounts of data and big data streams. Mining big data faces many challenges, including scalability, speed, heterogeneity, accuracy, provenance, and privacy. In the telecommunication industry, mining big data is like mining for gold: it represents a big opportunity for maximizing revenue streams. This paper discusses the characteristics of big data (volume, variety, velocity, and veracity), data mining techniques and tools for handling very large data sets, mining big data in telecommunication, and the benefits and opportunities to be gained.
Abstract: Social networks have recently attracted growing interest on the web. Traditional formalisms for representing social networks are static and suffer from a lack of semantics. In this
paper, we will show how semantic web technologies can be used to
model social data. The SemTemp ontology aligns and extends
existing ontologies such as FOAF, SIOC, SKOS and OWL-Time to
provide a temporal and semantically rich description of social data.
We also present a modeling scenario to illustrate how our ontology
can be used to model social networks.
Abstract: In recent years, a wide variety of applications have been developed with Support Vector Machine (SVM) methods and Artificial Neural Networks (ANN). In general, these methods depend on intrusion knowledge databases such as KDD99, ISCX, and CAIDA, among others. New classes of detectors are generated by machine learning techniques, trained and tested over network databases. Thereafter, the detectors are employed to detect anomalies in network communication scenarios according to the behavior of users' connections. A first detector, based on the training dataset, is deployed in different real-world networks with mobile and non-mobile devices to analyze the performance and accuracy of static detection. The vulnerabilities are based on previous work on telemedicine apps developed in the research group. This paper presents the differences in detection results between several network scenarios, obtained by applying traditional detectors deployed with artificial neural networks and support vector machines.
Abstract: Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for detecting the safety situation of goods. Hyperspectral Imaging (HSI) is an attractive tool for researchers inspecting food quality and safety, with applications such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection of apples, quality analysis and grading of citrus fruits, bruise detection of strawberries, visualization of the sugar distribution of melons, measuring the ripening of tomatoes, defect detection of pickling cucumbers, and classification of wheat kernels. HSI can be used to concurrently collect large amounts of spatial and spectral data on the objects being observed. This technique yields exceptional detection capabilities, which otherwise cannot be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on the kernel Fukunaga-Koontz transform (KFKT) for detection of fat content in ground meat using HSI. The KFKT, the nonlinear version of the FKT, is one of the most effective techniques for solving two-pattern classification problems. The conventional FKT method has been improved with kernel machines to increase its nonlinear discrimination ability and capture higher-order statistics of the data. The approach proposed in this paper aims to segment the fat content of ground meat by regarding fat as the target class, to be separated from the remaining classes (treated as clutter). We have applied the KFKT to visible and near-infrared (VNIR) hyperspectral images of ground meat to determine the fat percentage. The experimental studies indicate that the proposed technique achieves high detection performance for the fat ratio in ground meat.
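The kernel FKT itself is beyond the scope of an abstract, but the linear FKT it builds on can be sketched in a few lines; in the kernel version the correlation matrices are, roughly, replaced by kernel (Gram) matrices. This sketch assumes zero-mean sample matrices and adds a tiny ridge for numerical safety.

```python
import numpy as np

def fkt(X1, X2, eps=1e-12):
    """Linear Fukunaga-Koontz transform for a two-class (target/clutter)
    problem. X1, X2: (samples x features) matrices of the two classes.
    Returns the shared basis A and class-1 eigenvalues w; in this basis
    the class-2 eigenvalues are exactly 1 - w, so directions with w near
    1 are dominated by the target class and those near 0 by clutter."""
    R1 = X1.T @ X1 / len(X1)
    R2 = X2.T @ X2 / len(X2)
    d, Phi = np.linalg.eigh(R1 + R2)
    T = Phi / np.sqrt(np.maximum(d, eps))   # whitens R1 + R2
    w, V = np.linalg.eigh(T.T @ R1 @ T)
    return T @ V, w
```

The complementary eigenvalue structure is what makes the transform suited to separating a target class from clutter.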
Abstract: We present an approach to triangle mesh simplification
designed to be executed on the GPU. We use a quadric error metric
to calculate an error value for each vertex of the mesh and order all
vertices based on this value. This step is followed by the parallel
removal of a number of vertices with the lowest calculated error
values. To allow for the parallel removal of multiple vertices we use
a set of per-vertex boundaries that prevent mesh foldovers even when
simplification operations are performed on neighbouring vertices. We
execute multiple iterations of the calculation of the vertex errors,
ordering of the error values and removal of vertices until either a
desired number of vertices remains in the mesh or a minimum error
value is reached. This parallel approach is used to speed up the
simplification process while maintaining mesh topology and avoiding
foldovers at every step of the simplification.
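The quadric error metric used above assigns each vertex the sum of squared distances to the planes of its incident triangles; a minimal sketch (plane normals assumed unit length) is:

```python
import numpy as np

def quadric(plane):
    """Fundamental quadric K = p p^T for a plane p = (a, b, c, d)
    with unit normal (a, b, c)."""
    p = np.asarray(plane, dtype=float).reshape(4, 1)
    return p @ p.T

def vertex_error(v, planes):
    """Quadric error of vertex v: sum of squared distances from v
    to each incident triangle's plane."""
    Q = sum(quadric(p) for p in planes)
    h = np.append(v, 1.0)            # homogeneous coordinates
    return float(h @ Q @ h)
```

Because the per-plane quadrics are summed into a single 4x4 matrix per vertex, each error evaluation is one small matrix product, which keeps the per-iteration ordering step cheap on the GPU.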
Abstract: Cloud computing can reduce the start-up expenses of implementing Electronic Health Records (EHR). However, many healthcare institutions have yet to implement cloud computing due to the associated privacy and security issues. In this paper, we analyze the challenges and opportunities of implementing cloud computing in healthcare. We also analyze data from over 5000 US hospitals that use telemedicine applications. This analysis helps in understanding the importance of smartphones over desktop systems in different departments of healthcare institutions. The wide usage of smartphones and cloud computing allows ubiquitous and affordable access to health data by authorized persons, including patients and doctors. Cloud computing will prove to be beneficial to a majority of the departments in healthcare. Through this analysis, we attempt to identify the healthcare departments that may benefit most significantly from the implementation of cloud computing.
Abstract: Wireless sensors, also known as wireless sensor nodes, have been making a significant impact on human daily life. Radio Frequency Identification (RFID) and Wireless Sensor Networks (WSN) are two complementary technologies; hence, an integrated implementation of these technologies expands the overall functionality in obtaining long-range, real-time information on the location and properties of objects and people. An approach for integrating ZigBee and RFID networks is proposed in this paper to create an energy-efficient network enhanced by the benefits of combining the ZigBee and RFID architectures. Furthermore, the compatibility and requirements of the ZigBee devices and communication links in a typical RFID system are presented, together with a real-world experiment on the capabilities of the proposed RFID system.