An Intelligent System for Phish Detection using Dynamic Analysis and Template Matching

Phishing, the stealing of sensitive information on the web, has dealt a major blow to Internet security in recent times. Most existing anti-phishing solutions fail to handle the fuzziness involved in phish detection, leading to a large number of false positives. This fuzziness is attributed to the use of the highly flexible and, at the same time, highly ambiguous HTML language. We introduce a new perspective on phishing detection that tries to systematically prove whether a given page is a phish, using the corresponding original page as the basis of comparison. The system analyzes the layout of the pages under consideration to determine the percentage distortion between them, which is indicative of any form of malicious alteration. The design represents an intelligent system employing dynamic assessment; it accurately identifies brand-new phishing attacks and should prove effective in reducing the number of false positives. The framework could also serve as a knowledge base for educating Internet users about phishing.
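
As a hedged illustration of the layout-comparison idea, the sketch below measures percentage distortion between two pages as the dissimilarity of their HTML tag sequences; the metric and all names are assumptions for illustration, not the paper's exact algorithm.

```python
# Illustrative sketch only: compares two pages' layouts via their tag
# sequences; the paper's actual distortion measure may differ.
from difflib import SequenceMatcher
from html.parser import HTMLParser

class TagSequence(HTMLParser):
    """Collects the sequence of opening tags as a crude layout signature."""
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def layout_distortion(original_html, suspect_html):
    """Percentage distortion between two pages' tag layouts (assumed metric)."""
    signatures = []
    for html in (original_html, suspect_html):
        parser = TagSequence()
        parser.feed(html)
        signatures.append(parser.tags)
    similarity = SequenceMatcher(None, signatures[0], signatures[1]).ratio()
    return (1.0 - similarity) * 100.0  # 0% means identical layout

# A page whose distortion against the genuine brand page falls below some
# tuned threshold would be flagged as a likely phish.
```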

Environmental and Technical Modeling of Industrial Solid Waste Management Using the Analytical Network Process: A Case Study of Gilan, Iran

Proper management of residues originating from industrial activities is considered one of the serious challenges faced by industrial societies, due to their potential hazards to the environment. Common disposal methods for industrial solid wastes (ISWs) encompass various combinations of individual management options, i.e. recycling, incineration, composting, and sanitary landfilling. The procedure used to evaluate and nominate the best practical methods should be based on environmental, technical, economic, and social assessments. In this paper an environmental-technical assessment model is developed using the analytical network process (ANP) to facilitate decision making for ISWs generated in Gilan province, Iran. Using the results of surveys performed on industrial units located in Gilan, the various groups of solid wastes in the research area were characterized, and four different ISW management scenarios were studied. The evaluation was conducted with the above-mentioned model in the Super Decisions software (version 2.0.8) environment. The results indicate that the best ISW management scenario for Gilan province consists of recycling the metal industry residues, composting the putrescible portion of the ISWs, combusting paper, wood, fabric and polymeric wastes with energy recovery in the incineration plant, and finally landfilling the rest of the waste stream together with the rejects from the recycling and compost production plants and the ashes from the incineration unit.
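
For readers unfamiliar with ANP mechanics, the sketch below shows how alternative priorities emerge from the limit supermatrix; the 3x3 matrix and its weights are invented for illustration and are not the paper's model.

```python
# Minimal ANP sketch: priorities are obtained by raising the weighted,
# column-stochastic supermatrix to successive powers until its columns
# converge (the limit supermatrix). Weights below are made up.
import numpy as np

def limit_supermatrix(W, tol=1e-9, max_iter=10_000):
    """W: column-stochastic weighted supermatrix of network influences."""
    assert np.allclose(W.sum(axis=0), 1.0), "columns must sum to 1"
    M = W.copy()
    for _ in range(max_iter):
        M_next = M @ W
        if np.abs(M_next - M).max() < tol:
            return M_next
        M = M_next
    return M

# Hypothetical 3-option network (e.g. recycling, composting, landfilling):
W = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.2, 0.4],
              [0.3, 0.3, 0.3]])
print(limit_supermatrix(W)[:, 0])  # converged priority vector
```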

Averaging Mechanisms for Handover Decision Making in GSM

In cellular networks, the limited available resources have to be tapped to their fullest potential. In view of this, a sophisticated averaging and voting technique is discussed in this paper, wherein the available radio resources are utilized to the fullest by taking into consideration several network and radio parameters that decide when a handover has to be made, thereby reducing the load on the base station. The increase in load on the base station may be due to the many unnecessary handovers taking place, which can be eliminated by making judicious use of the radio and network parameters.
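
A minimal sketch of such an averaging-and-voting rule is given below; the window length, vote count and hysteresis margin are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of an averaging-and-voting handover check (parameter
# names and thresholds are illustrative, not taken from the paper).
from collections import deque

class HandoverDecider:
    def __init__(self, window=8, votes_needed=6, hysteresis_db=3.0):
        self.serving = deque(maxlen=window)   # serving-cell RXLEV samples
        self.neighbor = deque(maxlen=window)  # neighbor-cell RXLEV samples
        self.votes_needed = votes_needed
        self.hysteresis_db = hysteresis_db

    def update(self, serving_rxlev, neighbor_rxlev):
        self.serving.append(serving_rxlev)
        self.neighbor.append(neighbor_rxlev)

    def should_handover(self):
        if len(self.serving) < self.serving.maxlen:
            return False  # wait until the averaging window is full
        # Vote: count samples where the neighbor beats the serving cell
        # by at least the hysteresis margin, to suppress ping-pong handovers.
        votes = sum(n > s + self.hysteresis_db
                    for s, n in zip(self.serving, self.neighbor))
        return votes >= self.votes_needed
```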

Performance Evaluation of Para-virtualization on a Modern Mobile Phone Platform

The emergence of smartphones brings to life the concept of converged devices with the availability of web amenities. This trend also challenges mobile device manufacturers and service providers in many aspects, such as security on mobile phones, a complex and lengthy design flow, and higher development cost. Among these aspects, security on mobile phones is receiving more and more attention. Microkernel-based virtualization technology will play a critical role in addressing these challenges and meeting mobile market needs and preferences, since virtualization provides the isolation essential for security, and it allows multiple operating systems to run on one processor, accelerating development and cutting development cost. However, the benefits of virtualization do not come for free. As an additional software layer, it adds some inevitable virtualization overhead to the system, which may decrease system performance. In this paper we evaluate and analyze the virtualization performance cost of L4 microkernel-based virtualization on a competitive mobile phone by comparing L4Linux, a para-virtualized Linux running on top of the L4 microkernel, with native Linux performance, using lmbench and a set of typical mobile phone applications.

Robust Face Recognition using AAM and Gabor Features

In this paper, we propose a face recognition algorithm using AAM and Gabor features. Gabor feature vectors, which are well known to be robust with respect to small variations of shape, scale, rotation, distortion, illumination and pose in images, are popularly employed as feature vectors in many object detection and recognition algorithms. EBGM, which is prominent among face recognition algorithms employing Gabor feature vectors, requires localization of the facial feature points where the Gabor feature vectors are extracted. However, the localization method employed in EBGM is based on Gabor jet similarity and is sensitive to initial values, and incorrect localization of facial feature points degrades the face recognition rate. AAM is known to be successfully applicable to the localization of facial feature points. In this paper, we devise a facial feature point localization method that first roughly estimates the facial feature points using AAM and then refines them using Gabor jet similarity-based localization initialized with the rough points obtained from AAM, and we propose a face recognition algorithm that uses the devised localization method together with Gabor feature vectors. Experiments show that this cascaded localization method based on both AAM and Gabor jet similarity is more robust than localization based on Gabor jet similarity alone. It is also shown that the proposed face recognition algorithm performs better than conventional algorithms such as EBGM that use Gabor jet similarity-based localization and Gabor feature vectors.
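
The sketch below illustrates the cascaded refinement idea: a local search around each AAM estimate for the position whose Gabor jet best matches a model jet. AAM fitting and jet extraction are assumed to exist elsewhere; all names here are hypothetical.

```python
# Sketch of the cascaded AAM -> Gabor-jet refinement (illustrative only;
# aam_fit and extract_jet are assumed external components).
import numpy as np

def jet_similarity(j1, j2):
    """Normalized dot product of Gabor jet magnitudes."""
    return float(np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2)))

def refine_point(image, rough_pt, model_jet, extract_jet, radius=4):
    """Search a small window around the AAM estimate for the position
    whose Gabor jet best matches the model (bunch-graph) jet."""
    best_pt, best_sim = rough_pt, -1.0
    x0, y0 = rough_pt
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            jet = extract_jet(image, (x0 + dx, y0 + dy))
            sim = jet_similarity(jet, model_jet)
            if sim > best_sim:
                best_pt, best_sim = (x0 + dx, y0 + dy), sim
    return best_pt

# Cascade (hypothetical usage):
#   points = [refine_point(img, p, model_jets[i], extract_jet)
#             for i, p in enumerate(aam_fit(img))]
```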

Stability Issues of an Implemented All-Pass Filter Circuit

So-called all-pass filter circuits are commonly used in the fields of signal processing, control and measurement. When connected to capacitive loads, these circuits tend to lose their stability; therefore an elaborate analysis of their dynamic behavior is necessary. Compensation methods intended to increase the stability of such circuits are discussed in this paper, with the so-called lead-lag compensation technique treated in detail. For the dynamic modeling, a two-port network model of the all-pass filter is derived. The results of the model analysis show that effective lead-lag compensation can be achieved solely by optimizing the circuit parameters; therefore no additional electrical components are needed to fulfill the stability requirement.
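
For reference, a first-order all-pass stage and a generic lead-lag compensator take the textbook forms below; these are standard expressions, not the specific transfer functions derived in the paper.

```latex
H_{\mathrm{AP}}(s) = \frac{1 - sT}{1 + sT}, \qquad
H_{\mathrm{LL}}(s) = \frac{1 + s\tau_1}{1 + s\tau_2}, \quad \tau_1 \neq \tau_2
% H_AP: unity-magnitude, phase-only all-pass stage;
% H_LL: lead-lag term shaping the phase margin near the crossover frequency.
```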

Performance Analysis of Software Reliability Models using Matrix Method

This paper presents a computational methodology based on matrix operations for a computer-based solution to the problem of performance analysis of software reliability models (SRMs). A set of seven comparison criteria has been formulated to rank the various non-homogeneous Poisson process software reliability models proposed during the past 30 years for estimating software reliability measures such as the number of remaining faults, the software failure rate, and software reliability. Selection of the optimal SRM for a particular case has long been an area of interest for researchers in the field of software reliability. Tools and techniques for software reliability model selection found in the literature cannot be used with a high level of confidence, as they rely on a limited number of model selection criteria. A real data set from a medium-sized software project, taken from published papers, is used to demonstrate the matrix method. The result of this study is a ranking of SRMs based on the permanent value of the criteria matrix formed for each model from the comparison criteria. The software reliability model with the highest value of the permanent is ranked number 1, and so on.
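
To make the ranking step concrete, the sketch below computes matrix permanents via Ryser's formula and sorts models by them; the two 3x3 criteria matrices are invented placeholders, not the paper's seven-criteria data.

```python
# Ranking SRMs by the permanent of their criteria matrices (hedged sketch).
from itertools import combinations
import numpy as np

def permanent(A):
    """Matrix permanent via Ryser's inclusion-exclusion formula."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

# Invented criteria matrices for two hypothetical models:
criteria = {"SRM-A": np.array([[0.9, 0.4, 0.7],
                               [0.3, 0.8, 0.5],
                               [0.6, 0.2, 0.9]]),
            "SRM-B": np.array([[0.5, 0.5, 0.6],
                               [0.4, 0.6, 0.3],
                               [0.7, 0.1, 0.8]])}
ranking = sorted(criteria, key=lambda m: permanent(criteria[m]), reverse=True)
print(ranking)  # rank 1 = highest permanent
```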

Experimental Studies on Multiphase Flow in Porous Media and Pore Wettability

Multiphase flow and transport in porous media are very common and significant in science and engineering applications. For example, in CO2 storage and enhanced oil recovery (EOR) processes, CO2 has to be delivered to the pore spaces in reservoirs and aquifers. CO2 storage and EOR are essentially displacement processes, in which oil or water is displaced by CO2. This displacement is controlled by pore size, the chemical and physical properties of the pore surfaces and fluids, and pore wettability. In this study, a technique was developed to measure the pressure profile required for a gas or liquid to displace water in pores. From this pressure profile, the impact of pore size on multiphase flow transport and displacement can be analyzed. A second rig was developed to measure static and dynamic pore wettability and to investigate the effects of pore size, surface tension, viscosity and the chemical structure of liquids on pore wettability.
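
The standard Young-Laplace relation below links the measured displacement pressure to pore size and wettability; it is quoted here for context and is not a result of the paper.

```latex
P_c = \frac{2\,\gamma \cos\theta}{r}
% P_c: capillary entry pressure opposing the displacement,
% \gamma: interfacial tension, \theta: contact angle (wettability),
% r: pore radius
```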

Estimation of Broadcast Probability in Wireless Ad Hoc Networks

Most routing protocols designed for wireless ad hoc networks (DSR, AODV, etc.) incorporate a broadcasting operation in their route discovery scheme. Probabilistic broadcasting techniques have been developed to optimize the broadcast operation, which is otherwise very expensive in terms of the redundancy and traffic it generates. In this paper we explore percolation theory to gain a different perspective on probabilistic broadcasting schemes, which have been actively researched in recent years. This theory helps us estimate the broadcast probability in a wireless ad hoc network as a function of the size of the network. We also show that, operating at these optimal values of broadcast probability, packet regeneration during successful broadcasting is reduced by at least 25-30%.
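
The toy simulation below illustrates the percolation-like threshold in probabilistic (gossip) broadcasting; the random geometric graph, its parameters and the forwarding rule are assumptions for illustration, not the paper's model.

```python
# Toy gossip-broadcast simulation: each node that receives a packet
# rebroadcasts it with probability p. Reachability jumps sharply near a
# critical p, the percolation-style threshold discussed above.
import random
import networkx as nx

def gossip_reach(G, p, source=0):
    """Fraction of nodes reached, and number of rebroadcasts performed."""
    received, frontier = {source}, [source]
    rebroadcasts = 0
    while frontier:
        node = frontier.pop()
        rebroadcasts += 1
        for nbr in G.neighbors(node):
            if nbr not in received:
                received.add(nbr)
                if random.random() < p:  # probabilistic forwarding decision
                    frontier.append(nbr)
    return len(received) / G.number_of_nodes(), rebroadcasts

G = nx.random_geometric_graph(500, 0.08)  # stand-in for an ad hoc network
for p in (0.4, 0.6, 0.8, 1.0):
    reach, tx = gossip_reach(G, p)
    print(f"p={p:.1f}  reach={reach:.2f}  rebroadcasts={tx}")
```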

Optical Coherence Tomography Combined with Confocal Microscopy and Fluorescence for the Investigation of Class V Cavities

The purpose of this study is to present a non-invasive method for evaluating marginal adaptation in class V composite restorations. Standardized class V cavities, prepared in extracted human teeth, were filled with Premise (Kerr) composite. The specimens were thermocycled. The interfaces were examined by optical coherence tomography (OCT) combined with confocal microscopy and fluorescence. The optical configuration uses two single-mode directional couplers with a superluminescent diode at 1300 nm as the source. The scanning procedure is similar to that used in any confocal microscope, where the fast scanning is en-face (at the line rate) and the depth scanning is much slower (at the frame rate). Gaps at the interfaces as well as inside the composite resin material were identified. OCT has numerous advantages that justify its use in vivo as well as in vitro in comparison with conventional techniques.

Corporate Credit Rating using Multiclass Classification Models with Order Information

Corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has been one of the most attractive research topics in the literature. In recent years, multiclass classification models such as the artificial neural network (ANN) and the multiclass support vector machine (MSVM) have become very appealing machine learning approaches due to their good performance. However, most of them focus on classifying samples into nominal categories, so ordinality, the unique characteristic of credit ratings, has seldom been considered in these approaches. This study proposes new types of ANN and MSVM classifiers, named OMANN and OMSVM respectively. OMANN and OMSVM are designed to extend binary ANN and SVM classifiers by applying the ordinal pairwise partitioning (OPP) strategy, allowing them to handle ordinal multiple classes efficiently and effectively. To validate the usefulness of the two models, we applied them to a real-world bond rating case and compared the results to those of conventional approaches. The experimental results show that our proposed models improve classification accuracy over typical multiclass classification techniques while requiring fewer computational resources.
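
One common realization of ordinal partitioning, sketched below, decomposes a K-grade rating problem into K-1 binary "above grade k?" classifiers; this is a simplified stand-in, and the paper's OMANN/OMSVM designs may partition differently.

```python
# Hedged sketch of ordinal decomposition with binary SVMs
# (a simplification of OPP, not the paper's exact scheme).
import numpy as np
from sklearn.svm import SVC

class OrdinalSVM:
    """Trains K-1 binary SVMs answering 'is the rating above grade k?'."""
    def fit(self, X, y):
        self.grades = np.sort(np.unique(y))
        self.models = [SVC(probability=True).fit(X, (y > k).astype(int))
                       for k in self.grades[:-1]]
        return self

    def predict(self, X):
        # Predicted grade index = number of thresholds the sample is
        # estimated to exceed, which respects the class ordering.
        above = np.stack([m.predict_proba(X)[:, 1] > 0.5
                          for m in self.models])
        return self.grades[above.sum(axis=0)]
```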

Effect of Step Size in the Response Surface Methodology using Nonlinear Test Functions

The response surface methodology (RSM) is a collection of mathematical and statistical techniques useful for modeling and analyzing problems in which the dependent variable is influenced by several independent variables, in order to determine the conditions under which these variables should operate to optimize a production process. RSM estimates a first-order regression model and sets the search direction using the method of maximum/minimum slope up/down (MMS U/D), i.e. steepest ascent/descent. However, this method selects the step size intuitively, which can affect the efficiency of the RSM. This paper assesses how the step size affects the efficiency of the methodology. The numerical examples are carried out through Monte Carlo experiments, evaluating three response variables: the efficiency of the gain function, the distance to the optimum, and the number of iterations. The simulation results showed that the gain-function efficiency and the distance to the optimum were not affected by the step size, while the number of iterations was affected by both the step size and the type of test function used.
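
The sketch below shows the two RSM ingredients under study: fitting a first-order model and walking along the steepest-ascent direction with a chosen step size; the helper names and defaults are illustrative, not the paper's Monte Carlo setup.

```python
# Minimal steepest-ascent step for RSM (illustrative sketch; the paper's
# Monte Carlo experiments and nonlinear test functions are not reproduced).
import numpy as np

def first_order_fit(X, y):
    """Least-squares fit of y = b0 + b.x over the design region;
    returns the slope vector b, which defines the search direction."""
    Xd = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return coef[1:]

def steepest_ascent_path(center, b, step, n_steps):
    """Points along the fitted gradient; 'step' is the size under study."""
    direction = b / np.linalg.norm(b)
    return [center + step * k * direction for k in range(1, n_steps + 1)]
```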

Performance Improvements of DSP Applications on a Generic Reconfigurable Platform

Speedups from mapping four real-life DSP applications onto an embedded system-on-chip that couples coarse-grained reconfigurable logic with an instruction-set processor are presented. The reconfigurable logic is realized by a 2-dimensional array of processing elements. A design flow for improving application performance is proposed. Critical software parts, called kernels, are accelerated on the coarse-grained reconfigurable array. The kernels are detected by profiling the source code. For mapping the detected kernels onto the reconfigurable logic, a priority-based mapping algorithm has been developed. Two 4x4 array architectures, which differ in their interconnection structure among the processing elements, are considered. The experiments on eight different instances of a generic system show important overall speedups for the four applications. The performance improvements range from 1.86 to 3.67, with an average value of 2.53, compared with an all-software execution. These speedups are quite close to the maximum theoretical speedups imposed by Amdahl's law.
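
The Amdahl's-law bound referred to above is the standard expression below, where f is the profiled fraction of execution time spent in the accelerated kernels and s is the kernels' speedup on the array.

```latex
S_{\mathrm{overall}} = \frac{1}{(1 - f) + f/s}, \qquad
S_{\mathrm{overall}} \xrightarrow{\;s \to \infty\;} \frac{1}{1 - f}
% Even with unbounded kernel acceleration, the unaccelerated fraction
% (1 - f) caps the overall application speedup.
```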

Competitiveness of the Baltic States in International Ratings

The competitiveness of the Baltic states is quite controversial. Under rapid structural change, an economy rarely develops in a balanced way: some fields always change more rapidly while others stagnate. By analyzing economic indices developed by international organizations, the situation in the three Baltic countries is described from different competitiveness positions, highlighting the strengths and weaknesses of each country. Exploring the openness of these economies, it is possible to observe certain risks noted in the competitiveness reports, such as government policies competing on the tax system, labour market policies, the investment environment, etc. These are very important factors in building competitive advantage. The Baltic countries are still in a weak position from a technological perspective and need to borrow knowledge and technology from more developed countries.

Property Aggregation and Uncertainty with Links to the Management and Determination of Critical Design Features

Within the domain of systems engineering, the need to perform property aggregation to understand, analyze and manage complex systems is unequivocal. This can be seen in numerous areas such as capability analysis, Mission Essential Competencies (MEC) and Critical Design Features (CDF). Furthermore, considering uncertainty propagation, as well as the sensitivity of related properties, is equally important when determining the set of critical properties within such a system. This paper describes this property breakdown in a number of systems engineering domains and, within the area of CDFs, emphasizes the importance of uncertainty analysis. As part of this, the paper describes possible techniques for uncertainty propagation and concludes with an example that applies one of these techniques to property and uncertainty aggregation within an aircraft system, to aid the determination of Critical Design Features.
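
As a hedged sketch of one such propagation technique, the Monte Carlo example below pushes input uncertainty through an invented aggregation function and uses input-output correlation as a crude sensitivity indicator; none of the quantities come from the paper.

```python
# Monte Carlo uncertainty propagation through a property aggregation
# (aggregation function, distributions and units are all made up).
import numpy as np

rng = np.random.default_rng(42)

def aggregate(mass, drag, thrust):
    """Hypothetical aggregated property, e.g. a range-like figure of merit."""
    return thrust / (mass * drag)

# Sample uncertain inputs and propagate them through the aggregation:
mass   = rng.normal(12_000, 300, 10_000)    # kg
drag   = rng.normal(0.024, 0.002, 10_000)   # coefficient
thrust = rng.normal(75_000, 1_500, 10_000)  # N
samples = aggregate(mass, drag, thrust)
print(f"mean={samples.mean():.1f}  std={samples.std():.1f}")

# Crude sensitivity: the input most correlated with the output is a
# candidate driver of a Critical Design Feature.
for name, x in [("mass", mass), ("drag", drag), ("thrust", thrust)]:
    print(name, round(np.corrcoef(x, samples)[0, 1], 2))
```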

Ensuring Data Security and Consistency in FTIMA - A Fault Tolerant Infrastructure for Mobile Agents

Transaction management is one of the most crucial requirements for enterprise application development, which often requires concurrent access to distributed data shared among multiple applications and nodes. Transactions guarantee the consistency of data records when multiple users or processes perform concurrent operations. The existing Fault Tolerant Infrastructure for Mobile Agents (FTIMA) provides fault-tolerant behavior for distributed transactions and uses a multi-agent system for distributed transaction processing. In the existing FTIMA architecture, data flows through the network and may contain personal, private or confidential information. In banking transactions, a minor change to the transaction can cause a great loss to the user. In this paper we modify the FTIMA architecture to ensure that the user request reaches the destination server securely and without any alteration. We use Triple DES for encryption/decryption and the MD5 algorithm to check the integrity of the message.
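
A minimal sketch of this protect/verify scheme follows, using Triple DES in CBC mode plus an MD5 digest; the key handling and message framing are invented details, and both primitives are considered weak by today's standards.

```python
# Hedged sketch of the Triple DES + MD5 scheme (illustrative framing only;
# in newer 'cryptography' releases TripleDES lives in the decrepit module).
import hashlib
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def protect(message: bytes, key: bytes) -> bytes:
    """key: 16 or 24 bytes. Appends an MD5 tag, pads, then encrypts."""
    digest = hashlib.md5(message).digest()        # 16-byte integrity tag
    padder = padding.PKCS7(64).padder()           # 3DES block size = 64 bits
    padded = padder.update(message + digest) + padder.finalize()
    iv = os.urandom(8)
    enc = Cipher(algorithms.TripleDES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()

def unprotect(blob: bytes, key: bytes) -> bytes:
    """Decrypts, unpads, and rejects the message if the MD5 tag mismatches."""
    iv, ct = blob[:8], blob[8:]
    dec = Cipher(algorithms.TripleDES(key), modes.CBC(iv)).decryptor()
    unpadder = padding.PKCS7(64).unpadder()
    plain = unpadder.update(dec.update(ct) + dec.finalize()) + unpadder.finalize()
    message, digest = plain[:-16], plain[-16:]
    if hashlib.md5(message).digest() != digest:
        raise ValueError("message was altered in transit")
    return message
```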

3D Oil Reservoir Visualisation Using Octree Compression Techniques Utilising Logical Grid Co-Ordinates

Octree compression techniques have been used for several years to compress large three-dimensional data sets into homogeneous regions. This compression technique is ideally suited to data sets that have similar values in clusters. Oil engineers represent reservoirs as three-dimensional grids, in which hydrocarbons occur naturally in clusters. This research looks at the efficiency of storing these grids using octree compression, with grid cells broken into active and inactive regions. Initial experiments yielded high compression ratios, as only the active leaf nodes and their ancestor (header) nodes are stored as a bitstream in a file on disk. Savings in computational time and memory were also possible at decompression, as only the active leaf nodes are sent to the graphics card, eliminating the need to reconstruct the original matrix. This results in a more compact vertex table, which can be loaded into the graphics card more quickly, giving shorter refresh delay times.
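
The toy recursion below shows the core collapsing step on a small 0/1 activity grid; the power-of-two cubic grid and the flat child list are simplifying assumptions, not the paper's bitstream format.

```python
# Toy octree compression over a 3-D 0/1 (inactive/active) grid:
# homogeneous octants collapse into a single leaf node.
import numpy as np

def build_octree(grid):
    """Returns a leaf value if the block is homogeneous, else 8 children.
    Assumes a cubic grid with power-of-two side length."""
    if grid.min() == grid.max():            # homogeneous region -> one node
        return int(grid.flat[0])
    h = grid.shape[0] // 2
    return [build_octree(grid[x:x+h, y:y+h, z:z+h])
            for x in (0, h) for y in (0, h) for z in (0, h)]

grid = np.zeros((8, 8, 8), dtype=int)
grid[0:4, 0:4, 0:4] = 1                     # one active (hydrocarbon) cluster
tree = build_octree(grid)                   # one collapsed active octant
print(tree)                                 # [1, 0, 0, 0, 0, 0, 0, 0]
```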

Prediction of Compressive Strength of Self-Compacting Concrete with Fuzzy Logic

The paper presents the potential of fuzzy logic (FL-I) and neural network (ANN-I) techniques for predicting the compressive strength of self-compacting concrete (SCC) mixtures. Six input parameters, namely the contents of cement, sand, coarse aggregate and fly ash, the superplasticizer percentage, and the water-to-binder ratio, together with one output parameter, the 28-day compressive strength, are used for modeling both ANN-I and FL-I. The fuzzy logic model showed better performance than the neural network model.
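
A minimal stand-in for the ANN-I side is sketched below with the six stated inputs; the network size and the data rows are invented, and the fuzzy-logic counterpart is not reproduced.

```python
# Hedged stand-in for the ANN-I regressor (architecture and the three
# mix-design rows are assumptions, not the paper's data set).
import numpy as np
from sklearn.neural_network import MLPRegressor

# Columns: cement, sand, coarse agg., fly ash, SP %, w/b ratio (made up)
X = np.array([[450, 780, 820, 100, 1.8, 0.38],
              [400, 800, 840, 150, 2.0, 0.42],
              [480, 760, 800,  80, 1.5, 0.35]])
y = np.array([62.0, 55.5, 68.3])            # 28-day strength, MPa

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=0).fit(X, y)
print(model.predict(X[:1]))                 # sanity check on training data
```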

Online Partial Discharge Source Localization and Characterization Using Non-Conventional Method

Power cables are vulnerable to failure due to aging or to defects that develop with the passage of time under continuous operation and loading stresses. Partial discharge (PD) detection and characterization provide information on the location, nature, form and extent of the degradation. As a result, PD monitoring has become an important part of condition-based maintenance (CBM) programs among power utilities. Online PD localization of defect sources in a power cable system is possible using the time-of-flight method. The time difference between the main and reflected pulses, together with the cable length, can be used to locate the PD source along the cable. However, if the length of the cable is not known, or the defect source is located at the extreme ends or in the middle of the cable, then a double-ended measurement is required to indicate the location of the PD source. The use of multiple sensors can also help in discriminating cable PD from local or external PD. This paper presents the experience and results from online partial discharge measurements conducted in the laboratory, and the challenges in PD source localization.
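
The single-ended time-of-flight estimate mentioned above follows the standard relation below; it is quoted for context rather than taken from the paper.

```latex
x = L - \frac{v\,\Delta t}{2}
% x: distance from the measuring end to the PD site, L: cable length,
% v: pulse propagation velocity in the cable,
% \Delta t: delay between the direct pulse and its far-end reflection
```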

The Influence of Substrate Bias on the Mechanical Properties of a W- and S-containing DLC-based Solid-lubricant Film

A diamond-like carbon (DLC) based solid-lubricant film was designed, and the DLC films were successfully prepared using a microwave plasma enhanced magnetron sputtering deposition technique. Post-deposition characterization, including Raman spectrometry, X-ray diffraction, nano-indentation tests, adhesion tests and friction coefficient tests, was performed to study the influence of the substrate bias voltage on the mechanical properties of the W- and S-doped DLC films. The results indicate that the W- and S-doped DLC films retain the typical structure of DLC films, and that the best mechanical performance was achieved at a substrate bias of -200 V.