Software Evolution Based Sequence Diagrams Merging

The need to merge software artifacts is inherent to modern software development. Distributing development over several teams and breaking tasks into smaller, more manageable pieces are effective means of dealing with this kind of complexity. In each case, the separately developed artifacts need to be assembled as efficiently as possible into a consistent whole in which the parts still function as described. Moreover, the earlier changes are introduced into the life cycle, the easier they are for designers to manage. Interaction-based specifications such as UML sequence diagrams have been found effective in this regard. As a result, sequence diagrams can be used not only for capturing system behaviors but also for merging changes in order to create a new version. The objective of this paper is to suggest a new approach to the problem of software merging at the level of sequence diagrams, using the concept of dependence analysis, which formally captures all mappings and differences between the elements of sequence diagrams and serves as the key concept for creating a new version of a sequence diagram.
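
As a rough illustration of the idea (not the authors' algorithm), the sketch below reduces a sequence diagram to an ordered list of messages, computes the mapping and differences between a base version and two revised versions, and assembles a merged version; the message representation and helper names are assumptions made for this example.

```python
# Illustrative sketch only: a sequence diagram is reduced to an ordered list of
# messages; mappings and differences between versions drive a simple three-way merge.
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    sender: str
    receiver: str
    label: str

def diff(base, revised):
    """Return (common, added, deleted) messages between two diagram versions."""
    base_set, revised_set = set(base), set(revised)
    common = base_set & revised_set
    added = [m for m in revised if m not in base_set]      # introduced by the change
    deleted = [m for m in base if m not in revised_set]    # removed by the change
    return common, added, deleted

def merge(base, left, right):
    """Keep base messages not deleted in either branch, then append additions in order."""
    _, added_l, deleted_l = diff(base, left)
    _, added_r, deleted_r = diff(base, right)
    removed = set(deleted_l) | set(deleted_r)
    merged = [m for m in base if m not in removed]
    merged += added_l + [m for m in added_r if m not in added_l]
    return merged

base = [Message("User", "ATM", "insertCard"), Message("ATM", "Bank", "verifyPIN")]
left = base + [Message("Bank", "ATM", "pinOK")]
right = [base[0], base[1], Message("ATM", "User", "ejectCard")]
print(merge(base, left, right))
```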

Methods of Geodesic Distance in Two-Dimensional Face Recognition

In this paper, we present a comparative study of three methods for 2D face recognition: Iso-Geodesic Curves (IGC), Geodesic Distance (GD) and Geodesic-Intensity Histogram (GIH). These approaches are based on computing the geodesic distance between points of the facial surface and between facial curves. In this study we represent the gray-level image as a 2D surface in a 3D space, with the third coordinate proportional to the pixel intensity values. In the classification step, we use Neural Networks (NN), K-Nearest Neighbors (KNN) and Support Vector Machines (SVM). The images used in our experiments come from two well-known face image databases, ORL and YaleB. The ORL database was used to evaluate the performance of the methods under conditions where the pose and sample size are varied, and the YaleB database was used to examine the performance of the systems when the facial expressions and lighting are varied.
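
A minimal sketch of the core computation under the stated surface model (not the exact pipeline of the paper): the gray-level image is treated as a surface whose third coordinate is proportional to intensity, and the geodesic distance between two pixels is approximated with Dijkstra's algorithm on the 4-connected pixel grid; the scaling factor alpha is an assumed parameter.

```python
# Minimal sketch: geodesic distance on the intensity surface z = alpha * I(x, y),
# approximated by Dijkstra's algorithm over the 4-connected pixel grid.
import heapq
import numpy as np

def geodesic_distance(image, start, goal, alpha=1.0):
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == goal:
            return d
        if d > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # Edge length on the surface: unit spatial step plus scaled intensity change.
                step = np.sqrt(1.0 + (alpha * (float(image[ny, nx]) - float(image[y, x]))) ** 2)
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, (ny, nx)))
    return dist[goal]

face = np.random.rand(64, 64)              # stand-in for a normalized face image
print(geodesic_distance(face, (10, 10), (50, 40)))
```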

Comprehensive Analysis of Data Mining Tools

Due to fast technological innovation, a tremendous amount of data is being accumulated all over the world in every domain, such as pattern recognition, machine learning, spatial data mining, image analysis, fraud analysis, and the World Wide Web. This makes it essential to develop tools for the various data mining functionalities. The major aim of this paper is to analyze the various tools that are used to build resourceful analytical or descriptive models for handling large amounts of information more efficiently and in a user-friendly way. In this survey, the diverse tools are illustrated with their extensive technical paradigms, graphical interfaces and built-in multipath algorithms, which make them very useful for handling significant amounts of data.

Understanding Health Behavior Using Social Network Analysis

The health of a person plays a vital role in the collective health of their community and hence the well-being of society as a whole. But in today's fast-paced, technology-driven world, health issues are increasingly being associated with human behaviors, that is, with lifestyle. Social networks have a tremendous impact on the health behavior of individuals. Many researchers have used social network analysis to understand human behaviors that implicate people's social and economic environments. It would be interesting to use a similar analysis to understand human behaviors that have health implications. This paper focuses on concepts for such behavioral analyses using social network analysis and provides possible algorithmic approaches. The results of these approaches can be used by governing authorities to roll out health plans and benefits and take preventive measures, by pharmaceutical companies to target specific markets, and by health insurance companies to better model their insurance plans.
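
One possible algorithmic building block, shown only as an illustration and not as the paper's specific method, is to rank individuals by network centrality when choosing targets for a health intervention; the contact network and the networkx measures below are assumptions made for the example.

```python
# Illustrative only: identify influential individuals in a social network whose
# behavior change could propagate health effects, using centrality measures.
import networkx as nx

# Hypothetical friendship/contact network.
edges = [("Ana", "Ben"), ("Ben", "Cara"), ("Ben", "Dev"),
         ("Cara", "Dev"), ("Dev", "Eli"), ("Eli", "Fay")]
G = nx.Graph(edges)

degree = nx.degree_centrality(G)          # how connected each person is
between = nx.betweenness_centrality(G)    # who bridges otherwise separate groups

# Rank candidates for a targeted intervention (e.g., a smoking-cessation campaign).
ranked = sorted(G.nodes, key=lambda n: degree[n] + between[n], reverse=True)
print(ranked[:3])
```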

Automatic Intelligent Analysis of Malware Behaviour

In this paper, we describe the use of formal methods to model malware behaviour. The modelling of harmful behaviour rests upon syntactic structures that represent malicious procedures inside malware. The malicious activities are modelled by a formal grammar in which the components of API calls are the terminals and the sets of API calls used in combination to achieve a goal are designated as non-terminals. The combination of different non-terminals in various ways and tiers makes up the attack vectors used by harmful software. Based on these syntactic structures, a parser can be generated that takes execution traces as input for pattern recognition.
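
The toy sketch below illustrates the idea rather than the paper's actual grammar: API calls act as terminals, each named rule plays the role of a non-terminal describing a malicious behaviour, and a simple scanner checks whether an execution trace contains a rule's calls in order; the rule names and API sequences are invented for illustration.

```python
# Toy illustration: API calls are terminals, each rule name is a non-terminal
# for a malicious behaviour, and a scanner recognizes rules in execution traces.
RULES = {
    "SELF_COPY":              ["GetModuleFileName", "CopyFile"],
    "PERSISTENCE_VIA_RUNKEY": ["RegOpenKeyEx", "RegSetValueEx"],
    "REMOTE_PAYLOAD":         ["InternetOpenUrl", "InternetReadFile", "WriteFile", "CreateProcess"],
}

def matches(trace, pattern):
    """True if the calls in `pattern` occur in `trace` in that order (gaps allowed)."""
    it = iter(trace)
    return all(call in it for call in pattern)

def analyse(trace):
    return [name for name, pattern in RULES.items() if matches(trace, pattern)]

trace = ["GetModuleFileName", "CreateFile", "CopyFile",
         "RegOpenKeyEx", "RegSetValueEx", "ExitProcess"]
print(analyse(trace))   # ['SELF_COPY', 'PERSISTENCE_VIA_RUNKEY']
```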

Sliding Mode Control of Autonomous Underwater Vehicles

This paper describes a sliding mode controller for autonomous underwater vehicles (AUVs). The dynamics of the AUV model are highly nonlinear because of many factors, such as hydrodynamic drag, damping and lift forces, Coriolis and centripetal forces, gravity and buoyancy forces, as well as thruster forces. To address these difficulties, a nonlinear sliding mode controller is designed to approximate the nonlinear dynamics of the AUV and improve trajectory tracking. Moreover, the proposed controller can strongly attenuate the effects of uncertainties and external disturbances in the closed-loop system. Using Lyapunov theory, the boundedness of the AUV tracking errors and the stability of the proposed control system are also guaranteed. Numerical simulation studies of an AUV are included to illustrate the effectiveness of the presented approach.
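
A minimal sketch of the sliding-mode principle on a one-degree-of-freedom surge model (an illustrative simplification, not the paper's full AUV dynamics and controller): the sliding variable is the tracking error, the control cancels the known dynamics, and a smoothed switching term rejects a bounded disturbance; all gains and model constants are assumed values.

```python
# Minimal sliding-mode sketch on an assumed 1-DOF surge model:
# m * v_dot = -d * v * |v| + u + w, with bounded disturbance w.
import numpy as np

m, d = 50.0, 10.0                  # assumed mass and quadratic drag coefficient
K, phi = 30.0, 0.05                # switching gain and boundary-layer width
dt, T = 0.01, 10.0

def v_ref(t):                      # desired surge speed and its derivative
    return 1.0 - np.exp(-0.5 * t), 0.5 * np.exp(-0.5 * t)

v = 0.0
for k in range(int(T / dt)):
    t = k * dt
    vr, vr_dot = v_ref(t)
    s = v - vr                                    # sliding variable (tracking error)
    # Equivalent control cancels the known dynamics; tanh softens sign() to reduce chattering.
    u = d * v * abs(v) + m * vr_dot - K * np.tanh(s / phi)
    w = 5.0 * np.sin(2.0 * t)                     # bounded external disturbance
    v += dt * (-d * v * abs(v) + u + w) / m       # Euler integration of the plant

print(f"final tracking error: {v - v_ref(T)[0]:+.4f}")
```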

Formal Specification of Web Services Applications for Digital Reference Services of Library Information System

A digital reference service is a traditional library reference service provided electronically. In most cases users do not get full satisfaction from using a digital reference service, for a variety of reasons. This paper discusses the formal specification of web services applications for digital reference services (WSDRS). WSDRS is an informal model that aims to reduce the problems of digital reference services in libraries. It uses web services technology to provide an efficient digital way of satisfying users' needs in the reference section of libraries. The informal model is written in natural language, which can be inconsistent and ambiguous and may cause difficulties for the developers of the system. To solve this problem, we convert the informal specifications into formal specifications, which is expected to reduce the overall development time and cost. We use the Z language to develop the formal model and verify it with the Z/EVES theorem prover.

Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem

The McEliece cryptosystem is an asymmetric cryptosystem based on error-correcting codes. The classical McEliece scheme uses an irreducible binary Goppa code, which is considered unbroken to date, especially with the parameters [1024, 524, 101], but it suffers from a large public key matrix, which makes it difficult to use in practice. In this work, irreducible and separable Goppa codes are introduced; the codes used have flexible parameters and dynamic error vectors. A comparison between separable and irreducible Goppa codes in the McEliece cryptosystem has been carried out. For the encryption stage, two types of tests were chosen to obtain a better comparison: in the first, the random message is kept constant while the parameters of the Goppa code are changed; in the second, the parameters of the Goppa code are kept constant (m = 8 and t = 10) while the random message is changed. The results show that the time needed to calculate the parity check matrix is higher for the separable code than for the irreducible McEliece cryptosystem, which is expected because an extra parity check matrix for g^2(z) must be calculated in the decryption process for the separable type, while the time needed to execute the error locator in the decryption stage is lower for the separable type than for the irreducible type. The implementation was done in C# with Visual Studio.
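
The sketch below shows only the structural form of McEliece encryption over GF(2), with a tiny [7, 4] generator matrix standing in for the scrambled Goppa-code public key; it is an illustration of c = mG' + e with a weight-t error vector, not the paper's implementation.

```python
# Structural sketch of McEliece encryption over GF(2) (illustrative only; the real
# scheme publishes a scrambled Goppa-code generator G' = S*G*P as the public key).
import numpy as np

G_pub = np.array([[1, 0, 0, 0, 1, 1, 0],   # stand-in public generator matrix (k=4, n=7)
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
k, n, t = 4, 7, 1
rng = np.random.default_rng(42)

def encrypt(message_bits):
    codeword = message_bits @ G_pub % 2                 # m * G'
    error = np.zeros(n, dtype=np.uint8)
    error[rng.choice(n, size=t, replace=False)] = 1     # weight-t error vector e
    return (codeword + error) % 2                       # c = m*G' + e

m = rng.integers(0, 2, size=k, dtype=np.uint8)
print("message   :", m)
print("ciphertext:", encrypt(m))
```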

Research of Data Cleaning Methods Based on Dependency Rules

This paper introduces the concept and principle of data cleaning, analyzes the types and causes of dirty data, and proposes the key steps of a typical cleaning process. It puts forward a scalable and versatile data cleaning framework and, for data with attribute dependency relations, designs several violation-data discovery algorithms expressed as formal formulas, which can find data that is inconsistent with the target columns under a conditional attribute dependency, whether the data is structured (SQL) or unstructured (NoSQL). Based on these algorithms, six data cleaning methods are given.
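
As an illustration of one kind of violation-discovery rule (not the paper's formal algorithms), the sketch below flags rows that break an assumed functional dependency zip_code -> city; the sample table and column names are invented for the example.

```python
# Minimal sketch: rows that violate an attribute dependency zip_code -> city
# are flagged as inconsistent and become candidates for cleaning.
import pandas as pd

df = pd.DataFrame({
    "zip_code": ["10001", "10001", "10002", "10002"],
    "city":     ["New York", "New York", "Boston", "Chicago"],   # last pair conflicts
})

def fd_violations(frame, lhs, rhs):
    """Return rows whose lhs-group maps to more than one rhs value."""
    counts = frame.groupby(lhs)[rhs].nunique()
    bad_keys = counts[counts > 1].index
    return frame[frame[lhs].isin(bad_keys)]

print(fd_violations(df, "zip_code", "city"))
```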

Evolution of Fuzzy Neural Networks Using an Evolution Strategy with Fuzzy Genotype Values

Evolution strategy (ES) is a well-known class of evolutionary algorithms, and there have been many studies on ES. In this paper, the author proposes an extended ES for solving fuzzy-valued optimization problems. In the proposed ES, genotype values are not real numbers but fuzzy numbers, and the evolutionary processes are extended so that they can handle genotype instances with fuzzy numbers. In this study, the proposed method is experimentally applied to the evolution of neural networks with fuzzy weights and biases. Results reveal that fuzzy neural networks evolved using the proposed ES with fuzzy genotype values can model hidden target fuzzy functions even though no training data are explicitly provided. Next, the proposed method is evaluated in terms of variations in specifying fuzzy numbers as genotype values. One of the most widely adopted fuzzy numbers is the symmetric triangular fuzzy number, which can be specified either by its lower and upper bounds (LU) or by its center and width (CW). Experimental results revealed that the LU model contributed to the fuzzy ES better than the CW model, which indicates that the LU model should be adopted in future applications of the proposed method.
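
A small sketch of how the LU representation can be handled in an ES, under illustrative assumptions (this is not the author's implementation): each gene is a symmetric triangular fuzzy number stored as a (lower, upper) pair, mutation perturbs both bounds with Gaussian noise and re-sorts them, and defuzzification takes the midpoint.

```python
# Illustrative sketch: ES mutation of symmetric triangular fuzzy genotypes in LU form.
import numpy as np

rng = np.random.default_rng(1)

def mutate_lu(genotype, sigma=0.1):
    """genotype: array of shape (n_genes, 2) holding (lower, upper) per gene."""
    mutated = genotype + rng.normal(0.0, sigma, size=genotype.shape)
    return np.sort(mutated, axis=1)          # keep lower <= upper after mutation

def defuzzify(genotype):
    """Center of a symmetric triangular fuzzy number is the midpoint of (lower, upper)."""
    return genotype.mean(axis=1)

parent = np.array([[0.2, 0.6], [-1.0, -0.4]])   # two fuzzy weights of a network
child = mutate_lu(parent)
print(child, defuzzify(child))
```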

Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems

A practical and efficient approach is suggested for estimating the instantaneous location of high-speed objects in C-OTDR monitoring systems. For highly dynamic objects (trains, cars), it is difficult to obtain an adequate estimate of the instantaneous object location because of estimation lag. In other words, reliable estimation of the coordinates of a monitored object requires some time for the C-OTDR system to collect observations, and only when the required sample volume has been collected can the final decision be issued. But this is contrary to the requirements of many real applications. For example, in rail traffic management systems we need to obtain the localization of dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of the C-OTDR signals to obtain the most reliable solution in real time. We call parameters of this type "signaling parameters" (SP). There are several SPs that carry information about the instantaneous localization of dynamic objects in each of the C-OTDR channels. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are not stable, while an SP that is very stable tends, as a rule, to be insensitive. This report describes a method for co-processing the SPs that is designed to obtain the most effective localization estimates of dynamic objects within the C-OTDR monitoring system framework.
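
Purely as an illustration of combining several signaling parameters (the paper's co-processing method is not reproduced here), the sketch below fuses independent, differently reliable position estimates with inverse-variance weighting; the numbers are invented.

```python
# Illustrative sketch: fuse independent SP-based position estimates, each with its
# own reliability, into one instantaneous location estimate via inverse-variance weighting.
import numpy as np

# (estimate in metres along the fibre, estimated variance) per signaling parameter
sp_estimates = [(1203.0, 25.0),    # sensitive SP, larger spread
                (1198.0, 100.0),   # stable but coarse SP
                (1200.5, 16.0)]

values = np.array([v for v, _ in sp_estimates])
weights = 1.0 / np.array([s for _, s in sp_estimates])
fused = np.sum(weights * values) / np.sum(weights)
fused_var = 1.0 / np.sum(weights)
print(f"fused position ~ {fused:.1f} m (variance {fused_var:.1f})")
```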

A Comparison of Bias Among Relaxed Divisor Methods Using 3 Bias Measurements

The apportionment method is used by many countries to calculate the distribution of seats in political bodies. For example, it is used in the United States (U.S.) to distribute House seats proportionally based on the population of each electoral district. Famous apportionment methods include the divisor methods known as the Adams, Dean, Hill, Jefferson and Webster methods. Sometimes the results of these divisor methods are unfair or contain errors, so it is important to examine the optimization of the method by using a bias measurement to obtain precise and fair results. In this research we investigate the bias of divisor methods toward large and small states in the U.S. House of Representatives by applying the Stolarsky Mean method. We compare the bias of the apportionment methods using two well-known bias measurements, the Balinski and Young measurement and the Ernst measurement, both of which have separate formulas for large and small states. The third measurement, which was created by the researchers, does not factor the distinction between large and small states into its formula. All three measurements are compared, and the results show that our measurement produces results similar to the other two well-known measurements.
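
The sketch below shows a generic divisor (highest-averages) apportionment with the standard signpost functions d(a) of the Adams, Webster, Hill and Jefferson methods; a Stolarsky-mean variant would simply supply a different d(a). The populations are invented, and this is an illustration of the mechanism rather than the paper's computation.

```python
# Generic divisor (highest-averages) apportionment: seats are awarded one at a
# time to the state with the largest priority population / d(current seats).
import heapq
import math

SIGNPOSTS = {
    "Adams":     lambda a: a,                        # d(a) = a
    "Webster":   lambda a: a + 0.5,                  # arithmetic mean of a and a+1
    "Hill":      lambda a: math.sqrt(a * (a + 1)),   # geometric mean of a and a+1
    "Jefferson": lambda a: a + 1,                    # d(a) = a + 1
}

def apportion(populations, house_size, method="Webster"):
    d = SIGNPOSTS[method]
    seats = {s: 0 for s in populations}
    # Max-heap of priorities; d(0) = 0 (Adams, Hill) is replaced by a tiny value,
    # which effectively gives every state its first seat before any second seat.
    heap = [(-p / (d(0) or 1e-12), s) for s, p in populations.items()]
    heapq.heapify(heap)
    for _ in range(house_size):
        _, state = heapq.heappop(heap)
        seats[state] += 1
        heapq.heappush(heap, (-populations[state] / d(seats[state]), state))
    return seats

pops = {"A": 5_300_000, "B": 2_100_000, "C": 900_000, "D": 350_000}
print(apportion(pops, 20, "Webster"))
```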

Management Decision System for the Documentary Archives in the Library of a Public Moroccan Institution: Case of Sultan Moulay Slimane University, Beni Mellal

This paper deals with the problem of managing information resources in the libraries of the public institution Sultan Moulay Slimane University (SMSU), in order to analyze reader satisfaction and allow university leaders to make better strategic and timely decisions. To this end, the integration of an integrated library management decision system is a priority program of higher education, as part of Digital Morocco, which pursues a proactive policy of developing the use of new information and communication technologies in higher education institutions. This operational information system can provide better services to both students and leaders. Our approach is to integrate business intelligence (BI) tools into library management by using Power BI.

Test Data Compression Using a Hybrid of Bitmask Dictionary and 2^n Pattern Run-Length Coding Methods

Testing plays an important role in VLSI. The major problems in testing are test data volume and test power, and an important solution for reducing test data volume and test time is test data compression. The proposed technique combines the bitmask dictionary and 2^n pattern run-length coding methods and provides a substantial improvement in compression efficiency without introducing any additional decompression penalty. The method has been implemented using MATLAB and an HDL to reduce test data volume and memory requirements. It is applied to various benchmark test sets, and the results are compared with other existing methods. The proposed technique achieves a compression ratio of up to 86%.
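
As a simplified illustration of the run-length half of the hybrid (not the paper's exact encoder), the sketch below splits a test vector into fixed-width patterns and replaces consecutive repetitions with (pattern, run length) pairs; the pattern width and the 3-bit run-length field are assumptions.

```python
# Simplified pattern run-length encoding of a test vector (illustrative only).
from itertools import groupby

def pattern_run_length_encode(bits, width=4):
    patterns = [bits[i:i + width] for i in range(0, len(bits), width)]
    return [(p, sum(1 for _ in run)) for p, run in groupby(patterns)]

test_vector = "0000" * 6 + "1010" + "1111" * 3
encoded = pattern_run_length_encode(test_vector)
original_bits = len(test_vector)
encoded_bits = sum(len(p) + 3 for p, _ in encoded)   # assumed 3-bit run-length field
print(encoded)
print(f"compression ratio ~ {100 * (1 - encoded_bits / original_bits):.0f}%")
```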

Prediction of Writer Using Tamil Handwritten Document Image Based on Pooled Features

A Tamil handwritten document is taken as the key source of data for identifying the writer. Tamil is a classical language with 247 characters, including compound characters, consonants, vowels and a special character. Most Tamil characters are multifaceted in nature. Handwriting is a unique feature of an individual, yet writers may change their handwriting according to their frame of mind, and this poses a challenging problem in identifying the writer. A new discriminative model based on pooled handwriting features is proposed and implemented using a support vector machine. A prediction accuracy of 100% is reported for the RBF and polynomial kernel based classification models.
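
A sketch of the classification stage only, under clearly labelled assumptions: random vectors stand in for the pooled handwriting features extracted from the document images, and scikit-learn SVMs with RBF and polynomial kernels play the role of the classifiers.

```python
# Illustrative classification stage: synthetic vectors stand in for pooled features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_writers, samples_per_writer, n_features = 10, 20, 64
X = rng.normal(size=(n_writers * samples_per_writer, n_features))
# Add a per-writer offset so classes are separable, mimicking writer-specific style.
X += np.repeat(rng.normal(scale=2.0, size=(n_writers, n_features)), samples_per_writer, axis=0)
y = np.repeat(np.arange(n_writers), samples_per_writer)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
for kernel in ("rbf", "poly"):
    clf = SVC(kernel=kernel, gamma="scale").fit(X_tr, y_tr)
    print(kernel, "accuracy:", round(clf.score(X_te, y_te), 3))
```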

CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet

One of the most important challenges in medical imaging is noise. Image denoising refers to the improvement of a digital medical image that has been corrupted by additive white Gaussian noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise and AWGN. Computed tomography (CT) images are subject to low quality due to noise. The quality of CT images depends directly on the dose absorbed by the patient: increasing the absorbed radiation, and consequently the absorbed dose to the patient (ADP), enhances CT image quality. Thus, noise reduction techniques that enhance image quality without exposing patients to excess radiation are a challenging problem in CT image processing. In this work, noise reduction in CT images was performed using two directional two-dimensional (2D) transforms, Curvelet and Contourlet, and the Discrete Wavelet Transform (DWT) thresholding methods BayesShrink and AdaptShrink, which were compared with each other. We also propose a new threshold in the wavelet domain that achieves not only noise reduction but also edge retention; the proposed method retains the significant modified coefficients, resulting in good visual quality. Evaluations were performed using two criteria, namely peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
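
The sketch below applies standard BayesShrink soft-thresholding with PyWavelets as a point of reference; the paper's proposed edge-retaining threshold is not reproduced, and the synthetic image and noise level are assumptions.

```python
# Standard BayesShrink soft-thresholding with PyWavelets (reference sketch only).
import numpy as np
import pywt

def bayes_shrink(image, wavelet="db4", levels=3):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # Noise estimate from the finest diagonal subband (robust median estimator).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    new_coeffs = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        shrunk = []
        for band in (cH, cV, cD):
            sigma_x = np.sqrt(max(band.var() - sigma**2, 0.0))
            thr = sigma**2 / sigma_x if sigma_x > 0 else np.abs(band).max()  # BayesShrink threshold
            shrunk.append(pywt.threshold(band, thr, mode="soft"))
        new_coeffs.append(tuple(shrunk))
    return pywt.waverec2(new_coeffs, wavelet)

ct = np.random.rand(128, 128)                       # stand-in for a CT slice
noisy = ct + 0.05 * np.random.randn(128, 128)       # additive white Gaussian noise
denoised = bayes_shrink(noisy)
```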

Solving 94-bit ECDLP with 70 Computers in Parallel

The elliptic curve discrete logarithm problem (ECDLP) is one of the problems on which the security of pairing-based cryptography is based. This paper considers Pollard's rho method for evaluating the security of the ECDLP on a Barreto-Naehrig (BN) curve, which is an efficient pairing-friendly curve. Several techniques are proposed to make the rho method efficient; in particular, the group structure on the BN curve, the distinguished point method, and the Montgomery trick are well-known techniques. This paper applies these techniques and shows how to optimize them. According to the experimental results, in which a large-scale parallel system with MySQL was used, a 94-bit ECDLP was solved in about 28 hours by parallelizing 71 computers.
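
A toy version of Pollard's rho with a three-way walk and distinguished points is sketched below; a small prime-order subgroup of Z_p* stands in for the BN elliptic-curve group, and in the parallel setting each walk would report its distinguished points to a central database, as the paper does with MySQL.

```python
# Toy Pollard's rho for a discrete logarithm with a 3-way walk and distinguished points.
import random

p, q, g = 1019, 509, 4                     # q = (p-1)/2 prime; g generates the order-q subgroup
x_secret = random.randrange(1, q)
h = pow(g, x_secret, p)                    # solve h = g^x for x

def step(y, a, b):
    """One deterministic walk step; exponents track y = g^a * h^b."""
    if y % 3 == 0:
        return (y * h) % p, a, (b + 1) % q
    if y % 3 == 1:
        return (y * y) % p, (2 * a) % q, (2 * b) % q
    return (y * g) % p, (a + 1) % q, b

def rho():
    while True:
        a, b = random.randrange(q), random.randrange(q)
        y = (pow(g, a, p) * pow(h, b, p)) % p
        seen = {}                          # distinguished point -> (a, b)
        for _ in range(20 * q):
            y, a, b = step(y, a, b)
            if y % 16 == 0:                # "distinguished" points: low bits equal to zero
                if y in seen and seen[y] != (a, b):
                    a0, b0 = seen[y]
                    if (b0 - b) % q:       # g^(a-a0) = h^(b0-b)  =>  x = (a-a0)/(b0-b) mod q
                        return (a - a0) * pow(b0 - b, -1, q) % q
                seen[y] = (a, b)

print(x_secret, rho())                     # the two values should match
```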

A Comparative Study of Image Segmentation Algorithms

In some applications, such as image recognition or compression, segmentation refers to the process of partitioning a digital image into multiple segments. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. It classifies or clusters an image into several parts (regions) according to image features, for example, pixel values or the frequency response. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Several image segmentation algorithms have been proposed to segment an image before recognition or compression. Many image segmentation algorithms now exist and are extensively applied in science and daily life. According to their segmentation approach, they can be roughly categorized into region-based segmentation, data clustering, and edge-based segmentation. In this paper, we give a study of several popular image segmentation algorithms that are available.
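
As a minimal example of the data-clustering family described in the survey, the sketch below segments a synthetic two-region image with k-means on pixel colour values (scikit-learn is assumed to be available).

```python
# Minimal clustering-based segmentation: k-means on pixel colour values.
import numpy as np
from sklearn.cluster import KMeans

# Stand-in image: 64x64 RGB with two synthetic regions plus noise.
image = np.zeros((64, 64, 3))
image[:, :32] = [0.9, 0.2, 0.2]                   # reddish left half
image[:, 32:] = [0.2, 0.3, 0.9]                   # bluish right half
image += 0.05 * np.random.randn(*image.shape)

pixels = image.reshape(-1, 3)                     # one sample per pixel
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
segments = labels.reshape(64, 64)                 # label map: each pixel gets a region id
print(np.unique(segments, return_counts=True))
```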

Validation of the Formal Model of Web Services Applications for Digital Reference Service of Library Information System

The web services applications for digital reference service (WSDRS) model of a library information system (LIS) is an informal model that aims to reduce the problems of digital reference services in libraries. It uses web services technology to provide an efficient way of satisfying users' needs in the reference section of libraries. The formal WSDRS model consists of the Z specifications of all the informal specifications of the model. This paper discusses the formal validation of the Z specifications of the WSDRS model. The authors formally verify, and thus validate, the properties of the model using the Z/EVES theorem prover.