Video Quality Assessment Methods: A Bird’s-Eye View

The proliferation of multimedia technology and services in today’s world provides ample research scope at the frontiers of visual signal processing. Widespread use of video-based applications in heterogeneous environments calls for viable methods of Video Quality Assessment (VQA). The evaluation of video quality not only depends on high QoS requirements but also emphasizes the need for the notion of ‘QoE’ (Quality of Experience), which treats video quality as user-centric. This paper discusses two vital video quality assessment methods, namely subjective and objective assessment. The evolution of various video quality metrics, their classification models and applications are reviewed in this work. The Mean Opinion Score (MOS) based subjective measurements and algorithm-based objective metrics are discussed and their challenges are outlined. Further, the paper explores the recent progress of VQA in emerging technologies such as mobile video and 3D video.

An Algorithm for the Map Labeling Problem with Two Kinds of Priorities

We consider the problem of placing labels for points on a plane. For each point, its position, the size of its label, and a priority are given. Moreover, several candidate label positions are prespecified, and each of these positions is also assigned a priority. The objective is to maximize the total sum of the priorities of the placed labels and of their points. By refining a labeling algorithm so that it can use these priorities, we propose a new heuristic algorithm that is better suited to handling the assigned priorities.
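
As a rough illustration of the objective only (not the authors' refined heuristic), the following sketch greedily places non-overlapping axis-aligned rectangular labels in decreasing order of combined point-plus-position priority; the data layout and the tie-breaking rule are assumptions.

    # Hypothetical greedy sketch of priority-aware label placement.

    def overlaps(a, b):
        """Axis-aligned rectangles as (x, y, w, h); True if they intersect."""
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def greedy_labeling(points):
        """points: list of dicts with 'priority' and 'candidates', where each
        candidate is (rectangle, position_priority). Returns the placed
        rectangles and the total priority gained."""
        # Consider the most valuable (point priority + position priority) first.
        candidates = [
            (pt["priority"] + pos_prio, rect, idx)
            for idx, pt in enumerate(points)
            for rect, pos_prio in pt["candidates"]
        ]
        candidates.sort(key=lambda c: -c[0])

        placed, labeled, total = [], set(), 0
        for gain, rect, idx in candidates:
            if idx in labeled:
                continue                      # this point already has a label
            if any(overlaps(rect, r) for r in placed):
                continue                      # would overlap an existing label
            placed.append(rect)
            labeled.add(idx)
            total += gain
        return placed, total

    points = [
        {"priority": 5, "candidates": [((0, 0, 2, 1), 3), ((0, -1, 2, 1), 1)]},
        {"priority": 2, "candidates": [((1, 0, 2, 1), 2)]},
    ]
    print(greedy_labeling(points))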

Frequency- and Content-Based Tag Cloud Font Distribution Algorithm

The spread of Web 2.0 has caused an explosion of user-generated content. Users can tag resources to describe and organize them. Tag clouds provide a rough impression of the relative importance of each tag within the overall cloud, in order to facilitate browsing among numerous tags and resources. The goal of our paper is to enrich the visualization of tag clouds. We propose a font distribution algorithm that calculates a novel metric based on frequency and content and assigns tags to classes from this metric using a power-law distribution and percentages. The suggested algorithm has been validated and verified on the tag cloud of a real-world thesis portal.
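
A minimal sketch of the general idea, assuming a combined frequency/content score and a simple bucketing into a fixed set of font sizes; the weights, the logarithmic scaling and the class boundaries below are assumptions, not the paper's exact metric or its power-law split.

    import math

    FONT_CLASSES = [10, 13, 16, 20, 26, 34]   # font sizes in points, smallest first

    def tag_scores(tags):
        """tags: {tag: {'freq': int, 'content': float in [0, 1]}}"""
        max_log = max(math.log1p(t["freq"]) for t in tags.values())
        return {
            name: 0.7 * math.log1p(t["freq"]) / max_log + 0.3 * t["content"]
            for name, t in tags.items()
        }

    def assign_font_sizes(tags):
        scores = tag_scores(tags)
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        sizes = {}
        for name, s in scores.items():
            # Linearly bucket the normalized score into one of the classes.
            k = min(int((s - lo) / span * len(FONT_CLASSES)), len(FONT_CLASSES) - 1)
            sizes[name] = FONT_CLASSES[k]
        return sizes

    tags = {
        "thesis":   {"freq": 120, "content": 0.9},
        "database": {"freq": 40,  "content": 0.5},
        "misc":     {"freq": 3,   "content": 0.1},
    }
    print(assign_font_sizes(tags))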

On Phase Based Stereo Matching and Its Related Issues

The paper focuses on the problem of point correspondence matching in stereo images. The proposed matching algorithm combines a simpler method, the normalized sum of squared differences (NSSD), with a more complex phase-correlation-based approach, while also taking noise and other factors into account. The speed of NSSD and the precision of phase correlation together yield an efficient approach for finding the best candidate point with sub-pixel accuracy in stereo image pairs. The task of NSSD in this scheme is to locate the candidate pixel roughly; afterwards, the location of the candidate is refined by an enhanced phase-correlation-based method which, in contrast to NSSD, has to run only once for each selected pixel.
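
The coarse stage can be sketched as follows: an NSSD scan along the scanline picks an integer disparity, around which the sub-pixel refinement (phase correlation in the paper, omitted here) would then operate. The window size, search range and synthetic image pair are assumptions.

    import numpy as np

    def nssd(patch_a, patch_b):
        """Normalized sum of squared differences between two equal-size patches."""
        a = (patch_a - patch_a.mean()) / (patch_a.std() + 1e-8)
        b = (patch_b - patch_b.mean()) / (patch_b.std() + 1e-8)
        return float(np.sum((a - b) ** 2))

    def coarse_match(left, right, y, x, half=4, max_disp=32):
        """Find the integer disparity that minimizes NSSD along the scanline."""
        ref = left[y - half:y + half + 1, x - half:x + half + 1]
        best_d, best_cost = 0, np.inf
        for d in range(0, max_disp + 1):
            xr = x - d
            if xr - half < 0:
                break
            cand = right[y - half:y + half + 1, xr - half:xr + half + 1]
            cost = nssd(ref, cand)
            if cost < best_cost:
                best_d, best_cost = d, cost
        return best_d        # refine around best_d with phase correlation next

    rng = np.random.default_rng(0)
    right = rng.random((64, 64))
    left = np.roll(right, 5, axis=1)          # synthetic pair: disparity of 5
    print(coarse_match(left, right, y=32, x=40))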

Touch Interaction through Tagging Context

Ambient Intelligence promotes a shift in computing that involves fitting out environments with devices to support context-aware applications. One of its main objectives is to reduce the user’s interactive effort to a minimum; the diversity and quantity of devices that surround people in existing environments increase the difficulty of achieving this goal. The mobile phone, with its remarkable global penetration, is an excellent device for delivering new services to users without requiring a learning effort, but the environment must be able to perceive the corresponding interaction techniques. In this paper, we present the PICTAC model (Perceiving touch Interaction through TAgging Context) and illustrate how it delivers services to the members of a research group.

Dark and Bright Envelopes for Dehazing Images

We present a method for dehazing images. A dark envelope image is derived with the bilateral minimum filter and a bright envelope is derived with the bilateral maximum filter. The ambient light and transmission of the scene are estimated from these two envelope images. An image without haze is reconstructed from the estimated ambient light and transmission.
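
A simplified sketch of the reconstruction step, assuming the usual haze model I = J·t + A·(1 − t): plain local minimum/maximum filters stand in for the bilateral minimum/maximum filters of the paper, and the estimators of the ambient light A and transmission t below are simplified assumptions.

    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def dehaze(img, size=15, t_min=0.1):
        """img: float RGB image in [0, 1], shape (H, W, 3)."""
        dark = minimum_filter(img.min(axis=2), size=size)     # dark envelope (approx.)
        bright = maximum_filter(img.max(axis=2), size=size)   # bright envelope (approx.)

        # Ambient light: mean color over the brightest pixels of the bright envelope.
        idx = bright >= np.quantile(bright, 0.999)
        A = img[idx].mean(axis=0)

        # Transmission estimated from the dark envelope (simplified).
        t = np.clip(1.0 - dark / max(float(A.max()), 1e-6), t_min, 1.0)

        # Invert the haze model I = J * t + A * (1 - t) per channel.
        J = (img - A) / t[..., None] + A
        return np.clip(J, 0.0, 1.0)

    hazy = np.random.default_rng(0).random((64, 64, 3)) * 0.5 + 0.4
    print(dehaze(hazy).shape)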

Accelerating Side Channel Analysis with Distributed and Parallelized Processing

Even when a cryptographic algorithm has no theoretical weakness, Side Channel Analysis can recover secret data from the physical implementation of a cryptosystem. The analysis is based on extra information such as timing, power consumption, electromagnetic leaks or even sound, which can be exploited to break the system. Differential Power Analysis is one of the most popular such analyses; it computes statistical correlations between hypothesized secret keys and measured power consumption. It usually requires processing huge amounts of data and takes a long time; for some devices with countermeasures, it may take several weeks. We suggest and evaluate methods to shorten the time needed to analyze cryptosystems. Our methods include distributed computing and parallelized processing.
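
A toy correlation-based sketch of the per-key-guess work and of how naturally it parallelizes: every guess is scored independently, so the 256 guesses can be farmed out to worker processes. The Hamming-weight leakage model without an S-box, the synthetic traces and the process pool are illustrative assumptions, not the distributed framework evaluated in the paper.

    import numpy as np
    from multiprocessing import Pool

    HW = np.array([bin(v).count("1") for v in range(256)])

    def correlation_for_guess(args):
        guess, plaintexts, traces = args
        model = HW[plaintexts ^ guess].astype(float)          # predicted leakage
        # Pearson correlation between the model and every trace sample.
        m = model - model.mean()
        t = traces - traces.mean(axis=0)
        corr = (m @ t) / (np.linalg.norm(m) * np.linalg.norm(t, axis=0) + 1e-12)
        return guess, float(np.max(np.abs(corr)))

    def recover_key_byte(plaintexts, traces, processes=4):
        jobs = [(g, plaintexts, traces) for g in range(256)]
        with Pool(processes) as pool:
            results = pool.map(correlation_for_guess, jobs)   # parallel scoring
        return max(results, key=lambda r: r[1])[0]

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        key_byte = 0x3C
        plaintexts = rng.integers(0, 256, size=2000)
        leakage = HW[plaintexts ^ key_byte].astype(float)
        traces = leakage[:, None] + rng.normal(0, 1.0, size=(2000, 50))
        print(hex(recover_key_byte(plaintexts, traces)))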

Applying Sequential Pattern Mining to Generate Blocks for Scheduling Problems

The main idea of this paper is to use sequential pattern mining to find information that helps locate high-performance solutions. This mined information is combined and defined as blocks. Using the blocks to generate artificial chromosomes (ACs) can improve the structure of solutions. Estimation of Distribution Algorithms (EDAs) are adapted to solve combinatorial problems; although many of these approaches are advantageous for this application, only some of them are used to enhance its efficiency. Generating ACs from the mined patterns and EDAs can increase diversity. According to the experimental results, the proposed algorithm performs better on permutation flow-shop problems.
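
To illustrate the block idea only (the EDA machinery and the flow-shop evaluation are not shown), the sketch below mines frequent adjacent job pairs from a set of elite permutations and uses them to bias the construction of an artificial chromosome; the support threshold and the sampling rule are assumptions.

    import random
    from collections import Counter

    def mine_blocks(elite, min_support=0.5):
        """Return adjacent job pairs that appear in at least min_support of elites."""
        counts = Counter()
        for perm in elite:
            counts.update(zip(perm, perm[1:]))
        threshold = min_support * len(elite)
        return {pair for pair, c in counts.items() if c >= threshold}

    def artificial_chromosome(jobs, blocks, rng):
        """Build a permutation, preferring successors that form a mined block."""
        remaining = set(jobs)
        current = rng.choice(list(remaining))
        remaining.remove(current)
        chromosome = [current]
        while remaining:
            preferred = [j for j in remaining if (current, j) in blocks]
            current = rng.choice(preferred or list(remaining))
            remaining.remove(current)
            chromosome.append(current)
        return chromosome

    rng = random.Random(0)
    elite = [[0, 1, 2, 3, 4], [0, 1, 3, 2, 4], [0, 1, 2, 4, 3]]
    blocks = mine_blocks(elite)
    print(sorted(blocks))
    print(artificial_chromosome(range(5), blocks, rng))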

Comparing SVM and Naïve Bayes Classifier for Automatic Microaneurysm Detections

Diabetic retinopathy is characterized by the development of retinal microaneurysms. The damage can be prevented if the disease is treated in its early stages. In this paper, we compare Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers for automatic microaneurysm detection in images acquired through non-dilated pupils. The Nearest Neighbor classifier is used as a baseline for comparison. Detected microaneurysms are validated against expert ophthalmologists’ hand-drawn ground truths. The sensitivity, specificity, precision and accuracy of each method are also compared.
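
A generic sketch of such a comparison using scikit-learn classifiers on synthetic data; the retinal image features, class balance and parameter settings of the study are not reproduced here, only the evaluation metrics named in the abstract.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import confusion_matrix

    # Imbalanced two-class problem standing in for microaneurysm vs. background.
    X, y = make_classification(n_samples=1000, n_features=10, weights=[0.9, 0.1],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                              stratify=y)

    models = {
        "SVM": SVC(kernel="rbf", class_weight="balanced"),
        "Naive Bayes": GaussianNB(),
        "Nearest Neighbor": KNeighborsClassifier(n_neighbors=1),
    }

    for name, model in models.items():
        y_pred = model.fit(X_tr, y_tr).predict(X_te)
        tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
        sensitivity = tp / (tp + fn)           # true positive rate
        specificity = tn / (tn + fp)           # true negative rate
        precision = tp / (tp + fp) if tp + fp else 0.0
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        print(f"{name}: sens={sensitivity:.2f} spec={specificity:.2f} "
              f"prec={precision:.2f} acc={accuracy:.2f}")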

Experience of the Formation of Professional Competence of Students of IT Specialties

The article describes an approach to building research competence in Bachelor’s and Master’s students, which is now an important attribute of a modern specialist in the field of engineering. We provide an example of teaching methods with a research aspect, including the formulation of the problem, the method of conducting experiments, and the analysis of the results. Implementing these methods allows students to better consolidate their knowledge and skills while gaining research experience. Delivering this knowledge requires some preparation in both the subject area and the teaching methods.

Application of Discrete-Event Simulation When Optimizing Business Processes in Trading Companies

The paper reviews the optimization of business processes in trading companies. It presents the “Wholesale Customer Order Handling Process” business process model, applicable to small and medium businesses. An algorithm is proposed to automate customer order processing, which significantly reduces labor costs and time expenditures and increases company profitability. The optimized business process is an element of an information system for accounting the activity of a spare-parts trading network. The considered algorithm may find applications elsewhere in the trading industry as well.

Neural Network in Fixed Time for Collision Detection between Two Convex Polyhedra

In this paper, a different collision detection neural network (DCNN) architecture is developed. This network enables a new approach to the problem of collision detection between two convex polyhedra in fixed time (O(1) time). We use two types of neurons, linear and threshold logic, which simplifies the actual implementation of all the proposed networks. The study of collision detection is divided into two parts: the collision between a point and a polyhedron, and then the collision between two convex polyhedra. The aim of this research is to use the AMAXNET network to determine a mini-maximum point in fixed time, which allows us to detect the presence of a potential collision.

Maximum Induced Subgraph of an Augmented Cube

Let maxζG(m) denote the maximum number of edges in a subgraph of graph G induced by m nodes. The n-dimensional augmented cube, denoted as AQn, a variation of the hypercube, possesses some properties superior to those of the hypercube. We study the cases when G is the augmented cube AQn.
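
For small n, maxζAQn(m) can be checked directly by brute force. The sketch below assumes the standard construction of AQn (hypercube edges plus complement edges) and enumerates all m-node subsets; it is only feasible for small dimensions and is meant as an illustration, not as the paper's method.

    from itertools import combinations

    def augmented_cube_edges(n):
        edges = set()
        for u in range(1 << n):
            for i in range(1, n + 1):
                edges.add(frozenset((u, u ^ (1 << (i - 1)))))   # hypercube edge
            for i in range(2, n + 1):
                edges.add(frozenset((u, u ^ ((1 << i) - 1))))   # complement edge
        return {tuple(sorted(e)) for e in edges}

    def max_induced_edges(n, m):
        """Maximum number of edges over all m-node induced subgraphs of AQ_n."""
        edges = augmented_cube_edges(n)
        best = 0
        for subset in combinations(range(1 << n), m):
            s = set(subset)
            best = max(best, sum(1 for a, b in edges if a in s and b in s))
        return best

    # Values for AQ_3 (8 nodes, 5-regular) with m = 1..8.
    print([max_induced_edges(3, m) for m in range(1, 9)])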

Scalable Systolic Multiplier over Binary Extension Fields Based on Two-Level Karatsuba Decomposition

Shifted polynomial basis (SPB) is a variation of the polynomial basis representation. SPB has potential for efficient bit-level and digit-level implementations of multiplication over binary extension fields with subquadratic space complexity. For efficient implementation of pairing computation with large finite fields, this paper presents a new SPB multiplication algorithm based on Karatsuba schemes and uses it to derive a novel scalable multiplier architecture. Analytical results show that the proposed multiplier provides a trade-off between space and time complexities. Our proposed multiplier is modular, regular, and suitable for very large scale integration (VLSI) implementation. It involves less area complexity compared with multipliers based on traditional decomposition methods. It is therefore more suitable for efficient hardware implementation of pairing-based cryptography and elliptic curve cryptography (ECC) in constraint-driven applications.
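
To show only the splitting idea behind Karatsuba decomposition over GF(2)[x] (not the SPB representation, the two-level scheme, or the systolic architecture), here is a sketch in which a polynomial is stored as an integer whose bit i is the coefficient of x^i.

    def gf2_mul_schoolbook(a, b):
        """Carry-less (XOR) multiplication of two GF(2) polynomials."""
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            b >>= 1
        return result

    def gf2_mul_karatsuba(a, b, bits):
        """Split a and b into halves of size bits//2 and recombine with 3 products."""
        if bits <= 8:
            return gf2_mul_schoolbook(a, b)
        half = bits // 2
        mask = (1 << half) - 1
        a_lo, a_hi = a & mask, a >> half
        b_lo, b_hi = b & mask, b >> half
        lo = gf2_mul_karatsuba(a_lo, b_lo, half)
        hi = gf2_mul_karatsuba(a_hi, b_hi, half)
        mid = gf2_mul_karatsuba(a_lo ^ a_hi, b_lo ^ b_hi, half)
        # Over GF(2), addition and subtraction are both XOR.
        return (hi << (2 * half)) ^ ((mid ^ lo ^ hi) << half) ^ lo

    a, b = 0b1011011101011011, 0b1100101000110101      # two degree-15 polynomials
    assert gf2_mul_karatsuba(a, b, 16) == gf2_mul_schoolbook(a, b)
    print(bin(gf2_mul_karatsuba(a, b, 16)))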

Integral Image-Based Differential Filters

We describe a relationship between integral images and differential images. First, we derive a simple difference filter from the conventional integral image. In the derivation, we show that an integral image and the corresponding differential image are related to each other by simultaneous linear equations in which the numbers of unknowns and equations are the same; therefore, we can perform the integration and differentiation by solving these simultaneous equations. We apply this relationship to an image fusion problem and experimentally verify the effectiveness of the proposed method.
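
As background on why such filters stay cheap, the sketch below builds an integral image and evaluates a box sum, and hence a simple difference-of-boxes filter, from a constant number of lookups; the paper's simultaneous-equation derivation and fusion experiments are not reproduced here.

    import numpy as np

    def integral_image(img):
        """Integral image with a zero top row and left column for easy indexing."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
        ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
        return ii

    def box_sum(ii, y0, x0, y1, x1):
        """Sum of img[y0:y1, x0:x1] using four integral-image lookups."""
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

    def horizontal_difference(ii, y, x, h=3, w=2):
        """Difference of the right and left boxes around (y, x): a crude x-derivative."""
        right = box_sum(ii, y, x + 1, y + h, x + 1 + w)
        left = box_sum(ii, y, x - w, y + h, x)
        return right - left

    img = np.arange(36, dtype=np.float64).reshape(6, 6)
    ii = integral_image(img)
    assert box_sum(ii, 1, 1, 4, 4) == img[1:4, 1:4].sum()
    print(horizontal_difference(ii, 1, 3))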

Application of Customer Relationship Management Systems in Business: Challenges and Opportunities

Customer relationship management (CRM) systems have been a reality of the contemporary business world for the last decade or so. Still, there are grey areas regarding the successful implementation and operation of CRM systems in business. Through a systematic study of the CRM implementation paradigm, this paper attempts to identify the most important challenges and opportunities that CRM systems face in a rapidly changing business world.

Development of a Recommendation Library System Based on an Android Application

In this paper, we present a library recommendation application for the Android system. The objective of the system is to support and advise users in using library resources through a mobile application. We describe the design approach and the functional components of the system. The system was developed using association rules mined with the Apriori algorithm. The project was divided, according to its research purposes, into two parts: developing the mobile application for the online library service, and testing and evaluating the system. Questionnaires were used to measure satisfaction with system usability among specialists and users. The results were satisfactory for both specialists and users.
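
For reference, a minimal Apriori sketch for mining frequent itemsets and one-consequent association rules from borrowing histories; the book topics, support and confidence thresholds are made-up examples, not data or parameters from the described system.

    from itertools import combinations

    def apriori(transactions, min_support=0.4):
        n = len(transactions)
        items = {frozenset([i]) for t in transactions for i in t}
        frequent, k_sets = {}, items
        while k_sets:
            counts = {s: sum(1 for t in transactions if s <= t) for s in k_sets}
            current = {s: c / n for s, c in counts.items() if c / n >= min_support}
            frequent.update(current)
            # Candidate (k+1)-itemsets from unions of surviving k-itemsets.
            keys = list(current)
            k_sets = {a | b for a, b in combinations(keys, 2)
                      if len(a | b) == len(a) + 1}
        return frequent

    def rules(frequent, min_confidence=0.7):
        out = []
        for itemset, support in frequent.items():
            if len(itemset) < 2:
                continue
            for consequent in itemset:
                antecedent = itemset - {consequent}
                confidence = support / frequent[antecedent]
                if confidence >= min_confidence:
                    out.append((set(antecedent), consequent, round(confidence, 2)))
        return out

    borrowings = [
        {"databases", "data mining", "statistics"},
        {"databases", "data mining"},
        {"databases", "networks"},
        {"data mining", "statistics"},
    ]
    print(rules(apriori(borrowings)))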

Resident-Aware Green Home

The amount of energy the world uses doubles every 20 years. Green homes play an important role in reducing residential energy demand. This paper presents a platform that is intended to learn the behavior of home residents and build a profile of their habits and actions. The proposed resident-aware home controller intervenes in the operation of home appliances in order to save energy without compromising the convenience of the residents. The presented platform can be used to simulate the actions and movements happening inside a home. The paper includes several optimization techniques that are meant to save energy in the home. In addition, several test scenarios are presented that show how the controller works. Moreover, the paper reports the computed actual savings when each of the presented techniques is implemented in a typical home. The test scenarios validate that the developed techniques are capable of effectively saving energy in homes.

Financial Ethics: A Review of 2010 Flash Crash

Modern-day stock markets have almost entirely become automated. Although this means increased profits for investors, with algorithms acting on the slightest price change within microseconds, it has also given birth to many ethical dilemmas, in the sense that the slightest mistake can cause people to lose their livelihoods. This paper reviews one such event, which happened on May 06, 2010, in which $1 trillion disappeared from the Dow Jones Industrial Average. We discuss its various aspects and the ethical dilemmas that have arisen from it.

Some Issues with Extension of an HPC Cluster

Homemade HPC clusters are widely used in many small labs because they are easy to build and cost-effective. Even though incremental growth is an advantage of clusters, it nonetheless results in heterogeneous systems. Instead of adding new nodes to the cluster, we can extend clusters to include other Internet servers working independently on the same LAN, so that we can make use of their idle time, especially during the night. However, extension across a firewall raises some security problems with NFS. In this paper, we propose a method to solve this problem using SSH tunneling, and suggest a modified cluster structure that implements it.