Abstract: The large pose discrepancy is one of the critical
challenges in face recognition during video surveillance. Due to
the entanglement of pose attributes with identity information, the
conventional approaches to pose-independent representation perform poorly
when recognizing faces with large pose variations. In
this paper, we propose a practical approach to disentangle the pose
attribute from the identity information followed by synthesis of a face
using a classifier network in latent space. The proposed approach
employs a modified generative adversarial network framework
consisting of an encoder-decoder structure embedded with a classifier
in manifold space for carrying out factorization on the latent
encoding. It can be further generalized to other face and non-face
attributes for real-life video frames containing faces with significant
attribute variations. Experimental results and comparisons with the state
of the art show that the learned representation of the proposed approach
synthesizes more perceptually compelling images through a combination of
adversarial and classification losses.
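As a rough illustration of the architecture described above, the sketch below (PyTorch) splits the latent encoding into an identity part and a pose part, applies a classifier to the pose code, and combines adversarial and classification losses. The layer sizes, latent split, pose bins, and loss weights are illustrative assumptions, not the authors' exact design.

```python
# Illustrative sketch only: assumes 64x64 RGB inputs and 9 discrete pose bins.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim_id=256, dim_pose=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, dim_id + dim_pose),
        )
        self.dim_id = dim_id

    def forward(self, x):
        z = self.backbone(x)
        return z[:, :self.dim_id], z[:, self.dim_id:]   # identity code, pose code

encoder = Encoder()
pose_classifier = nn.Linear(16, 9)                      # classifier in latent space
decoder = nn.Sequential(nn.Linear(256 + 16, 64 * 64 * 3), nn.Tanh())
discriminator = nn.Linear(64 * 64 * 3, 1)

def generator_loss(x, pose_label):
    z_id, z_pose = encoder(x)
    synthesized = decoder(torch.cat([z_id, z_pose], dim=1))
    cls_loss = nn.functional.cross_entropy(pose_classifier(z_pose), pose_label)
    adv_loss = nn.functional.binary_cross_entropy_with_logits(
        discriminator(synthesized), torch.ones(x.size(0), 1))  # fool the discriminator
    return adv_loss + cls_loss                                  # combined objective
```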
Abstract: In this study, a cross-layer design which combines
adaptive modulation and coding (AMC) and hybrid automatic repeat
request (HARQ) techniques for a cooperative wireless network is
investigated analytically. Previous analyses of such systems in the
literature are confined to the case where the fading channel is
independent at each retransmission, which can be unrealistic unless
the channel is varying very fast. On the other hand, temporal channel
correlation can have a significant impact on the performance of
HARQ systems. In this study, utilizing a Markov channel model
which accounts for the temporal correlation, the performance of
non-cooperative and cooperative networks is investigated in terms of
packet loss rate and throughput metrics for the Chase combining HARQ
strategy.
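For intuition, a minimal Monte Carlo sketch of Chase combining HARQ over a temporally correlated two-state Markov channel is given below; the SNR values, transition probability, decoding threshold, and retransmission limit are illustrative assumptions, not the parameters analyzed in the paper.

```python
# Chase-combining HARQ over a correlated two-state Markov channel (illustrative).
import random

P_STAY = 0.9                      # probability of remaining in the current state
SNR = {"good": 8.0, "bad": 1.0}   # per-transmission SNR in each state (linear scale)
SNR_THRESHOLD = 10.0              # decoding succeeds once combined SNR exceeds this
MAX_TX = 4                        # 1 transmission + up to 3 retransmissions

def next_state(state):
    return state if random.random() < P_STAY else ("bad" if state == "good" else "good")

def simulate(num_packets=100_000):
    state, lost, tx_used = "good", 0, 0
    for _ in range(num_packets):
        combined_snr, success = 0.0, False
        for _ in range(MAX_TX):
            state = next_state(state)
            combined_snr += SNR[state]          # Chase combining accumulates SNR
            tx_used += 1
            if combined_snr >= SNR_THRESHOLD:
                success = True
                break
        lost += not success
    plr = lost / num_packets
    throughput = (num_packets - lost) / tx_used  # packets per transmission slot
    return plr, throughput

print(simulate())
```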
Abstract: Data assets protection is a crucial issue in the
cybersecurity field. Companies use logical access control tools to
safeguard their information assets and protect them against external
threats, but they lack solutions to counter insider threats. Nowadays,
insider threats are the most significant concern of security analysts.
They are mainly individuals with legitimate access to company
information systems who use their rights with malicious intent.
In several fields, behavior anomaly detection is the method used by
cyber specialists to counter the threats of user malicious activities
effectively. In this paper, we present a step toward the construction
of a user and entity behavior analysis framework by proposing a
behavior anomaly detection model. This model combines machine
learning classification techniques and graph-based methods, relying
on linear algebra and parallel computing techniques. We show the
utility of an ensemble learning approach in this context. We present
test results of several detection methods on a representative access
control dataset. Some of the explored classifiers achieve up to 99%
accuracy.
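As a minimal illustration of the ensemble learning idea on tabular behaviour features, the sketch below combines three scikit-learn classifiers with soft voting; the synthetic features and labels are placeholders and do not reproduce the paper's access control dataset or its graph-based methods.

```python
# Soft-voting ensemble over synthetic per-user behaviour features (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                 # e.g. per-user access-behaviour features
y = (X[:, 0] + X[:, 3] > 1.5).astype(int)      # synthetic "anomalous" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")
ensemble.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```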
Abstract: The security aspect of the IoT occupies a place of great
importance, especially after the recent evolution of this field, since it
must take into account new transformations and applications. Blockchain is
a new technology dedicated to data sharing; however, it does not work the
same way in different systems with different operating principles. This
article discusses network security using the Blockchain to facilitate the
sending of messages and information, enabling the use of new processes and
enabling autonomous coordination of devices. To do this, we review the
solutions proposed by other researchers to ensure a high level of security
in these networks. Finally, our article proposes a security method better
adapted to our needs as a team working on ad hoc networks; this method is
based on the principle of the Blockchain and is named "MPR Blockchain".
Abstract: In this paper, a method is provided for content-based image retrieval. A content-based image retrieval system searches an image database based on the visual content of a query image in order to retrieve similar images. In this paper, with the aim of simulating the sensitivity of the human visual system to image edges and color features, the concept of the color difference histogram (CDH) is used. The CDH encodes the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features capture the content of images most efficiently. The proposed method has been evaluated on three standard databases: Corel-5k, Corel-10k, and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods.
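A simplified sketch of a CDH-style feature in HSV is given below: the Euclidean colour difference between neighbouring pixels is accumulated into bins indexed by the quantized HSV colour. The quantization levels are assumptions, and the edge-orientation component of the full CDH is omitted.

```python
# Colour-difference-histogram style descriptor in HSV (simplified, illustrative).
import cv2
import numpy as np

def cdh_like_feature(bgr_image, h_bins=8, s_bins=3, v_bins=3):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
    # quantized HSV colour index per pixel (OpenCV hue range is 0..179)
    h_idx = np.clip((hsv[..., 0] / 180.0 * h_bins).astype(int), 0, h_bins - 1)
    s_idx = np.clip((hsv[..., 1] / 256.0 * s_bins).astype(int), 0, s_bins - 1)
    v_idx = np.clip((hsv[..., 2] / 256.0 * v_bins).astype(int), 0, v_bins - 1)
    color_idx = (h_idx * s_bins + s_idx) * v_bins + v_idx

    hist = np.zeros(h_bins * s_bins * v_bins)
    # accumulate perceptual (Euclidean) colour differences to right/bottom neighbours
    d_right = np.linalg.norm(hsv[:, 1:] - hsv[:, :-1], axis=2)
    np.add.at(hist, color_idx[:, :-1].ravel(), d_right.ravel())
    d_down = np.linalg.norm(hsv[1:, :] - hsv[:-1, :], axis=2)
    np.add.at(hist, color_idx[:-1, :].ravel(), d_down.ravel())
    return hist / (hist.sum() + 1e-9)               # normalized descriptor

feature = cdh_like_feature(cv2.imread("query.jpg"))
```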
Abstract: During their activity, all systems must be operational without failures, and in this context the dependability concept is essential to avoid disruption of their function. As computer networks are systems with the same dependability requirements, this article deals with an analysis of failures for a computer network. The proposed approach integrates specific tools of the KB3 platform, usually applied in dependability studies of industrial systems. The methodology is supported by a multi-agent system formed by six agents grouped into three meta-agents and dealing with two levels. The first level concerns a modeling step through a conceptual agent and a generating agent. The conceptual agent is dedicated to building the knowledge base from the system specifications written in the FIGARO language. The generating agent automatically produces both the structural model and a dependability model of the system. The second level, the simulation, shows the effects of the failures of the system through a simulation agent. The approach is validated by applying it to a specific computer network, giving an analysis of failures through their effects on the considered network.
Abstract: Wireless networks are increasingly used in every new technology
or feature, especially those without infrastructure (ad hoc mode), which
provide a low-cost alternative to infrastructure-mode wireless networks and
great flexibility for application domains such as environmental monitoring,
smart cities, precision agriculture, and so on. These application domains
share a common characteristic: the need for coexistence and
intercommunication between modules belonging to different types of ad hoc
networks, such as wireless sensor networks, mesh networks, mobile ad hoc
networks, vehicular ad hoc networks, etc. Bringing such heterogeneous
networks to life would make many everyday tasks easier, but the path to
their development is full of challenges. One of these challenges is the
communication complexity between their components due to the lack of a
common or compatible protocol standard. This article proposes a new
patented routing protocol based on the OLSR standard in order to resolve
the communication issue in heterogeneous ad hoc networks. This new protocol
is applied to a specific network architecture composed of MANET, VANET, and
FANET.
Abstract: The Bacterial Foraging Optimization (BFO) algorithm is inspired by the behavior of bacteria such as Escherichia coli or Myxococcus xanthus when searching for food, more precisely their chemotaxis behavior. Bacteria perceive chemical gradients in the environment, such as nutrients, and also other individual bacteria, and move toward or away from those signals. The application example considered as a case study consists of establishing the dependency between the reaction yield of hydrogels based on polyacrylamide and the working conditions, such as time, temperature, monomer, initiator, crosslinking agent and inclusion polymer concentrations, as well as the type of polymer added. This process is modeled with a neural network, which is included in an optimization procedure based on BFO. An experimental study of the BFO parameters is performed. The results show that the algorithm is quite robust and can obtain good results for diverse combinations of parameter values.
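For concreteness, a minimal sketch of the chemotaxis step (tumble, then swim while the objective improves) is given below; the population size, step size, and swim length are illustrative, and the neural network process model used in the paper is replaced here by a simple quadratic objective.

```python
# One chemotaxis pass of Bacterial Foraging Optimization (illustrative sketch).
import numpy as np

def chemotaxis(objective, positions, step_size=0.1, max_swim=4):
    new_positions = positions.copy()
    for i in range(len(positions)):
        direction = np.random.uniform(-1, 1, size=positions.shape[1])
        direction /= np.linalg.norm(direction)            # tumble: random unit vector
        best_cost = objective(new_positions[i])
        for _ in range(max_swim):                         # swim while cost improves
            candidate = new_positions[i] + step_size * direction
            cost = objective(candidate)
            if cost < best_cost:
                new_positions[i], best_cost = candidate, cost
            else:
                break
    return new_positions

# usage: minimize a simple quadratic with 20 bacteria in 2-D
population = np.random.uniform(-5, 5, size=(20, 2))
for _ in range(100):
    population = chemotaxis(lambda x: float(np.sum(x ** 2)), population)
print(population.mean(axis=0))   # should drift toward the optimum at the origin
```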
Abstract: The Yi are an ethnic group living mainly in mainland China, with their own spoken and written language systems developed over thousands of years. Ancient Yi is one of the six ancient languages in the world; it keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of ancient Yi characters helps to transform the documents into an electronic form, making their storage and dissemination convenient. Due to historical and regional limitations, research on the recognition of ancient characters is still inadequate. Thus, deep learning technology was applied to the recognition of such characters. Five models were developed on the basis of a four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode the output neurons of the five models. Two fully connected layers performed the compression of the features. Finally, at the softmax layer, the orthographic features of ancient Yi characters were re-evaluated, their probability distributions were obtained, and the characters with the highest-probability features were recognized. Tests show that the method achieves higher precision than the traditional CNN model for handwritten ancient Yi character recognition.
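A minimal PyTorch sketch of a four-convolutional-layer CNN with two fully connected layers and a softmax output, in the spirit of the backbone described above, is given below; the input size, channel counts, and number of classes are assumptions, and the Alpha-Beta divergence penalty is not reproduced.

```python
# Four-conv-layer CNN with two fully connected layers and softmax (illustrative).
import torch
import torch.nn as nn

class YiCNN(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(           # two fully connected layers
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):                           # x: (batch, 1, 64, 64) grayscale
        return torch.softmax(self.classifier(self.features(x)), dim=1)

probs = YiCNN()(torch.randn(2, 1, 64, 64))
print(probs.shape)   # (2, 1000) class probabilities
```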
Abstract: Arabic offline handwriting recognition is considered one of the most challenging topics. Arabic handwritten numeral strings are used to automate systems that deal with numbers, such as postal codes, bank account numbers, and numbers on car plates. Segmentation of connected numerals is the main bottleneck in a handwritten numeral recognition system; solving it can, in turn, increase the speed and efficiency of the recognition system. In this paper, we propose algorithms for automatic segmentation and feature extraction of Arabic handwritten numeral strings based on the watershed approach. The algorithms have been designed and implemented to achieve the main goal of segmenting and extracting a string of numeral digits written by hand, especially in the courtesy amount of bank checks. The segmentation algorithm partitions the string into multiple regions that can be associated with the properties of one or more criteria. The numeral extraction algorithm extracts the digits of the numeral string as separate individual digits. Both the segmentation and feature extraction algorithms have been tested successfully and efficiently for all types of numerals.
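As a rough illustration of watershed-based splitting of touching digits, the OpenCV sketch below uses the standard distance-transform-plus-watershed recipe; the thresholds and marker construction are assumptions and do not reproduce the paper's own segmentation and feature extraction algorithms.

```python
# Standard distance-transform + watershed splitting of touching digits (illustrative).
import cv2
import numpy as np

def split_touching_digits(gray):
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)   # digit cores
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(binary, sure_fg)

    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1              # reserve label 0 for the "unknown" region
    markers[unknown == 255] = 0
    color = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    markers = cv2.watershed(color, markers)   # boundary pixels are labelled -1
    return markers                            # one label per separated digit

markers = split_touching_digits(cv2.imread("numeral_string.png", cv2.IMREAD_GRAYSCALE))
```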
Abstract: Quantum cryptography is described as a point-to-point secure key generation technology that has emerged in recent times to provide absolute security. Researchers have started studying new innovative approaches to exploit the security of Quantum Key Distribution (QKD) for large-scale communication systems. A number of approaches and models for the utilization of QKD for secure communication have been developed. The uncertainty principle in quantum mechanics created a new paradigm for QKD. One of the approaches to the use of QKD involved network-fashioned security. The main goal was a point-to-point quantum network that exploited QKD technology for end-to-end network security via high-speed QKD. Other approaches and models equipped with QKD in a network fashion have also been introduced in the literature. A different approach, which this paper deals with, is using QKD in existing protocols that are widely used on the Internet to enhance security, with the main objective of unconditional security. Our work is directed towards the analysis of QKD in mobile ad hoc networks (MANETs).
Abstract: Neural network quantization is a highly desirable procedure to
perform before running neural networks on mobile devices. Quantization
without fine-tuning leads to an accuracy drop of the model, whereas
commonly used training with quantization is done on the full set of
labeled data and is therefore both time- and resource-consuming. Real-life
applications require a simplified and accelerated quantization procedure
that maintains the accuracy of the full-precision neural network,
especially for modern mobile neural network architectures such as
MobileNet-v1, MobileNet-v2, and MNAS. Here we present a method that
significantly optimizes the training-with-quantization procedure by
introducing trained scale factors for the discretization thresholds that
are separate for each filter. Using the proposed technique, we quantize
modern mobile neural network architectures with a training set of only
∼10% of the total ImageNet 2012 sample. Such a reduction of the training
dataset size and the small number of trainable parameters allow the
network to be fine-tuned within several hours while maintaining the high
accuracy of the quantized model (the accuracy drop was less than 0.5%).
Ready-for-use models and code are available in the GitHub repository.
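A minimal PyTorch sketch of the core idea, fake quantization with a trainable per-filter scale and a straight-through estimator, is given below; the bit width, initialization, and wiring are assumptions, and this is not the authors' released code.

```python
# Fake quantization with a trained scale per output filter (illustrative sketch).
import torch
import torch.nn as nn

class PerFilterFakeQuant(nn.Module):
    """Fake-quantize conv weights with a trainable scale per output filter."""
    def __init__(self, num_filters, bits=8):
        super().__init__()
        self.log_scale = nn.Parameter(torch.full((num_filters,), -4.0))  # trained scales
        self.qmax = 2 ** (bits - 1) - 1

    def forward(self, weight):                    # weight: (out_ch, in_ch, kh, kw)
        scale = self.log_scale.exp().view(-1, 1, 1, 1)
        w_div = weight / scale
        q = w_div + (torch.round(w_div) - w_div).detach()   # straight-through rounding
        q = torch.clamp(q, -self.qmax - 1, self.qmax)
        return q * scale                          # dequantized weights used in training

conv = nn.Conv2d(16, 32, 3, padding=1)
fake_quant = PerFilterFakeQuant(num_filters=32)
x = torch.randn(1, 16, 28, 28)
out = nn.functional.conv2d(x, fake_quant(conv.weight), conv.bias, padding=1)
out.sum().backward()          # gradients reach both conv.weight and the scales
```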
Abstract: Digital Do It Yourself (DIY) is a contemporary socio-technological phenomenon enabled by technological tools. The nature and potential long-term effects of this phenomenon have been widely studied within the framework of the EU-funded project ‘Digital Do It Yourself’, in which the authors have created and experimented with a specific Digital Do It Yourself (DiDIY) co-design process. The phenomenon was first studied through literature research to understand its multiple dimensions and complexity. Then, co-design workshops were used to investigate the phenomenon by involving people, in order to achieve a complete understanding of DiDIY practices and their enabling factors. These analyses allowed the definition of the fundamental DiDIY factors, which were then translated into a design tool. The objective of the tool is to shape design concepts by transferring these factors into different environments to achieve innovation. The aim of this paper is to present the ‘DiDIY Factor Stimuli’ tool, describing the research path and the findings behind it.
Abstract: The swirl gripper is an electrically activated noncontact handling device that uses swirling airflow to generate a lifting force. This force can be used to pick up a workpiece placed underneath the swirl gripper without any contact. It is applicable, for example, in semiconductor wafer production lines, where contact must be avoided during the handling and moving of a workpiece to minimize damage. When a workpiece levitates underneath a swirl gripper, the gap height between them is crucial for safe handling. Therefore, in this paper, we propose a method to estimate the levitation gap height by detecting pressure at two points. The method is based on a theoretical model of the swirl gripper and has been experimentally verified. Furthermore, the force between the gripper and the workpiece can also be estimated using the detected pressure. As a result, the nonlinear relationship between the force and the gap height can be linearized by adjusting the rotating speed of the fan in the swirl gripper according to the estimated force and gap height. The linearized relationship is expected to enhance the handling stability of the workpiece.
Abstract: In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for making a profit by speculation in financial markets. A significant number of traders, private or institutional investors, participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed for building a reliable trend line, which is the basis for limit conditions and automated investment signals and the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals and limit conditions to build a mathematical filter for investment opportunities, and presents the methodology to integrate all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a risk-to-reward ratio of 1:6.12 was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
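As a generic illustration only, the sketch below fits a least-squares trend line to recent closes and emits entry/exit signals when price crosses it; the window, band, and signal rules are assumptions and do not reproduce the Price Prediction Line or its trigonometric method.

```python
# Generic trend-line signal generator (illustrative; not the paper's model).
import numpy as np

def trend_line_signal(closes, window=50, band=0.01):
    y = np.asarray(closes[-window:], dtype=float)
    x = np.arange(window)
    slope, intercept = np.polyfit(x, y, 1)        # least-squares trend line
    predicted = slope * (window - 1) + intercept  # trend value at the latest bar
    last = y[-1]
    if last > predicted * (1 + band) and slope > 0:
        return "enter_long"
    if last < predicted * (1 - band) and slope < 0:
        return "exit_or_short"
    return "hold"

prices = 100 + np.cumsum(np.random.normal(0, 1, 300))   # synthetic price series
print(trend_line_signal(prices))
```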
Abstract: Various flows in a network are required to go through different
types of middleboxes. Improper placement of network middleboxes and path
assignment for flows can greatly increase the network latency and decrease
the performance of the network. Minimizing the total end-to-end latency of
all the flows requires assigning paths to the incoming flows. In this
paper, the flow path assignment problem with regard to the placement of
various kinds of middleboxes is studied. The flow path assignment problem
is first formulated as a linear programming problem, which is very
time-consuming to solve. A naive greedy algorithm is also studied, which is
very fast but incurs much more latency than the linear programming
algorithm. Finally, the paper presents a heuristic algorithm named FPA,
which takes bottleneck link information and estimated bandwidth occupancy
into consideration and achieves near-optimal latency in much less time.
Evaluation results validate the effectiveness of the proposed algorithm.
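For intuition about the greedy baseline mentioned above, the sketch below routes each flow source, middlebox, destination along the currently cheapest path, with edge cost growing with the load already assigned; the topology and cost model are assumptions, and the FPA heuristic's bottleneck and bandwidth estimates are not reproduced.

```python
# Greedy flow-path assignment through a required middlebox node (illustrative).
import networkx as nx

def greedy_assign(graph, flows):
    """flows: list of (src, dst, middlebox_node, demand)."""
    load = {e: 0.0 for e in graph.edges}

    def cost(u, v, _data):
        e = (u, v) if (u, v) in load else (v, u)
        return 1.0 + load[e]                      # latency proxy grows with load

    paths = []
    for src, dst, mbox, demand in flows:
        p1 = nx.shortest_path(graph, src, mbox, weight=cost)
        p2 = nx.shortest_path(graph, mbox, dst, weight=cost)
        path = p1 + p2[1:]
        for u, v in zip(path, path[1:]):          # commit the flow's demand
            e = (u, v) if (u, v) in load else (v, u)
            load[e] += demand
        paths.append(path)
    return paths

G = nx.cycle_graph(6)
print(greedy_assign(G, [(0, 3, 2, 1.0), (1, 4, 2, 1.0)]))
```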
Abstract: The model-based development approach is gaining more support and acceptance. Its higher abstraction level simplifies system description, allowing domain experts to do their best without particular programming knowledge. The different levels of simulation support rapid prototyping, verifying and validating the product even before it exists physically. Nowadays the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, which brings extra automation to the expensive device certification process, especially in software qualification. Using it, some companies report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication demonstrates the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using tools from The MathWorks, Inc. The model, created with Simulink, Stateflow and MATLAB, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the generated embedded code with that of manually developed code. The measurements show that, in general, the code generated by the automatic approach is not worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, part of which are identified as topics for our future work.
Abstract: Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and predicted variables. Past occurrences are exploited to predict and derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics in order to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis in the case of large amounts of data. In fact, because of its volume, its nature (semi-structured or unstructured) and its variety, it is impossible to analyze big data efficiently with classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of computation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and then a version of the extended algorithm is defined in order to make it applicable to huge quantities of data.
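As one illustration of the kind of parallelization discussed above, the sketch below distributes the CART best-split search across features with a process pool; the Gini scoring is standard CART, while the synthetic data and the full extended algorithm of the paper are not reproduced.

```python
# Parallel search for the best CART split across features (illustrative sketch).
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split_for_feature(args):
    X, y, feature = args
    best = (np.inf, None)
    for threshold in np.unique(X[:, feature]):
        left, right = y[X[:, feature] <= threshold], y[X[:, feature] > threshold]
        if len(left) == 0 or len(right) == 0:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        best = min(best, (score, threshold))
    return feature, best

def parallel_best_split(X, y, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(best_split_for_feature,
                           [(X, y, f) for f in range(X.shape[1])])
    return min(results, key=lambda r: r[1][0])     # (feature, (weighted gini, threshold))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = (X[:, 2] > 0).astype(int)
    print(parallel_best_split(X, y))
```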
Abstract: Geometric modeling plays an important role in the construction
and manufacturing of curve, surface and solid models. Its algorithms are
critically important not only in the automobile, ship and aircraft
manufacturing business, but are also absolutely necessary in a wide variety
of modern applications, e.g., robotics, optimization, computer vision, data
analytics and visualization. The calculation and display of geometric
objects can be accomplished by six techniques: polynomial basis, recursive,
iterative, coefficient matrix, polar form approach and pyramidal
algorithms. In this research, the coefficient matrix (simply called the
monomial form approach) is used to model polynomial rectangular patches,
i.e., Said-Ball, Wang-Ball, DP, Dejdumrong and NB1 surfaces. Some examples
of the monomial forms for these surface models are illustrated in many
aspects, e.g., construction, derivatives, model transformation, degree
elevation and degree reduction.
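A minimal sketch of evaluating a rectangular patch from its monomial (coefficient matrix) form is given below; the bilinear example coefficients are illustrative, and the conversion matrices from Said-Ball, Wang-Ball, DP, Dejdumrong or NB1 control nets into this form are not reproduced.

```python
# Evaluating S(u, v) = sum_i sum_j C[i, j] * u^i * v^j from a coefficient matrix.
import numpy as np

def eval_monomial_patch(coeffs, u, v):
    """coeffs: array of shape (m+1, n+1, 3), one monomial coefficient matrix
    per x/y/z coordinate of the rectangular patch."""
    m, n = coeffs.shape[0] - 1, coeffs.shape[1] - 1
    U = np.array([u ** i for i in range(m + 1)])       # power basis in u
    V = np.array([v ** j for j in range(n + 1)])       # power basis in v
    return np.einsum("i,ijk,j->k", U, coeffs, V)       # 3-D surface point

# usage: a bilinear patch with corners (0,0,0), (1,0,0), (0,1,0), (1,1,1)
C = np.zeros((2, 2, 3))
C[1, 0, 0] = 1.0      # x = u
C[0, 1, 1] = 1.0      # y = v
C[1, 1, 2] = 1.0      # z = u*v
print(eval_monomial_patch(C, 0.5, 0.5))   # -> [0.5, 0.5, 0.25]
```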
Abstract: Newton-Lagrange interpolations are widely used in numerical
analysis. However, they require quadratic computational time for their
construction. In computer-aided geometric design (CAGD), there are
polynomial curves, namely the Wang-Ball, DP and Dejdumrong curves, which
have linear time complexity algorithms. Thus, the computational time for
Newton-Lagrange interpolations can be reduced by applying the algorithms of
the Wang-Ball, DP and Dejdumrong curves. In order to use the Wang-Ball, DP
and Dejdumrong algorithms, it is first necessary to convert Newton-Lagrange
polynomials into Wang-Ball, DP or Dejdumrong polynomials. In this work, the
algorithms for converting both uniform and non-uniform Newton-Lagrange
polynomials into Wang-Ball, DP and Dejdumrong polynomials are investigated.
Thus, the computational time for representing Newton-Lagrange polynomials
can be reduced to linear complexity. In addition, other applications of
CAGD curves, such as modifying the Newton-Lagrange curves, become
available.
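For reference, a minimal sketch of the quadratic-time Newton divided-difference construction mentioned above is given below; the conversion into Wang-Ball, DP or Dejdumrong form is not reproduced here.

```python
# O(n^2) Newton divided-difference construction and nested-form evaluation.
import numpy as np

def newton_coefficients(x, y):
    """Divided-difference coefficients of the Newton interpolating polynomial."""
    x = np.asarray(x, dtype=float)
    coef = np.array(y, dtype=float)
    for j in range(1, len(x)):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:-j])
    return coef

def newton_eval(coef, x_nodes, t):
    """Horner-like (nested) evaluation of the Newton form at t."""
    result = coef[-1]
    for c, xn in zip(coef[-2::-1], x_nodes[-2::-1]):
        result = result * (t - xn) + c
    return result

x_nodes = [0.0, 1.0, 2.0, 3.0]
coef = newton_coefficients(x_nodes, [1.0, 2.0, 5.0, 10.0])   # samples of 1 + t^2
print(newton_eval(coef, x_nodes, 1.5))                       # -> 3.25
```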