Composite Relevance Feedback for Image Retrieval

This paper presents a content-based image retrieval (CBIR) framework with relevance feedback (RF) based on the combined learning of support vector machines (SVMs) and AdaBoost. The framework incorporates only the most relevant images obtained from both learning algorithms. To speed up the system, it removes from the database the irrelevant images returned by the SVM learner; this is key to achieving effective retrieval performance in terms of both time and accuracy. The experimental results show that the framework yields a significant improvement in retrieval effectiveness.
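
As a rough sketch of the combined-learner idea (this is not the authors' implementation; the fusion rule, thresholds, and scikit-learn usage are assumptions), one relevance-feedback round might train both learners on the user-labeled images, rank by their agreement, and prune what the SVM rejects:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier

def feedback_round(features, fb_idx, fb_labels, top_k=20):
    """One relevance-feedback round: train both learners on the
    user-labeled examples, keep images both rank as relevant, and
    drop images the SVM rejects from further consideration."""
    svm = SVC(probability=True).fit(features[fb_idx], fb_labels)
    ada = AdaBoostClassifier(n_estimators=50).fit(features[fb_idx], fb_labels)

    p_svm = svm.predict_proba(features)[:, 1]
    p_ada = ada.predict_proba(features)[:, 1]

    keep = p_svm >= 0.5                  # prune SVM-rejected images
    combined = (p_svm + p_ada) / 2.0     # agreement of both learners
    ranked = [i for i in np.argsort(-combined) if keep[i]]
    return ranked[:top_k], keep
```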

A Decision Matrix for the Evaluation of Triplestores for Use in a Virtual Research Environment

The Tropical Data Hub (TDH) is a virtual research environment that provides researchers with an e-research infrastructure to congregate significant tropical data sets for data reuse, integration, searching, and correlation. However, researchers often require data and metadata synthesis across disciplines for cross-domain analyses and knowledge discovery. A triplestore offers a semantic layer that supports these synthesis requirements through a more intelligent method of search that automates latent linkages in the data and metadata. Presently, the benchmarks available to aid the decision of which triplestore is best suited for an application environment like the TDH are limited to performance. This paper describes a new evaluation tool developed to analyze both features and performance. The tool comprises a weighted decision matrix that evaluates the interoperability, functionality, performance, and support availability of a range of integrated and native triplestores and ranks them according to the requirements of the TDH.
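
A weighted decision matrix of this kind reduces to a simple weighted-sum ranking. The sketch below illustrates the mechanism with invented weights and scores; the paper's actual criterion weights and scales are not given in the abstract:

```python
# Hypothetical weights over the paper's four criteria.
WEIGHTS = {"interoperability": 0.30, "functionality": 0.30,
           "performance": 0.25, "support": 0.15}

def rank_triplestores(scores):
    """scores: {store_name: {criterion: value in [0, 10]}}.
    Returns stores sorted by weighted total, best first."""
    totals = {
        store: sum(WEIGHTS[c] * vals[c] for c in WEIGHTS)
        for store, vals in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: -kv[1])

# Example with made-up scores for two candidate stores:
print(rank_triplestores({
    "StoreA": {"interoperability": 8, "functionality": 7,
               "performance": 9, "support": 6},
    "StoreB": {"interoperability": 6, "functionality": 9,
               "performance": 7, "support": 8},
}))
```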

Effects of Mobile Design Quality and Innovation Characteristics on Intention to Use Mobile Tourism Guide

This study investigates a theoretical model of tourist intention in the context of mobile tourism guides. The research model consists of three constructs: mobile design quality, innovation characteristics, and intention to use a mobile tourism guide. To investigate the effects of the determinants and examine the relationships among them, partial least squares (PLS) is employed for data analysis and research model development. The results show that mobile design quality and innovation characteristics significantly affect tourists' intention to use a mobile tourism guide. Furthermore, mobile design quality has a strong influence on innovation characteristics but does not moderate the relationship between innovation characteristics and tourists' intention to use a mobile tourism guide. Our findings propose a theoretical model for mobile research and provide an important guideline for developing mobile applications.
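
PLS path modeling of this kind is usually done with dedicated SEM tools; as a loose stand-in for the analysis step, scikit-learn's PLS regression can relate survey indicators to intention scores (all data and variable names below are invented):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 200
# Hypothetical Likert-style survey indicators.
design_quality = rng.normal(5, 1, (n, 3))   # 3 design-quality items
innovation = rng.normal(4, 1, (n, 4))       # 4 innovation items
X = np.hstack([design_quality, innovation])
intention = X @ rng.normal(0.2, 0.05, X.shape[1]) + rng.normal(0, 0.5, n)

# Two latent components linking the indicator block to intention.
pls = PLSRegression(n_components=2).fit(X, intention)
print("R^2 on training data:", pls.score(X, intention))
```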

Increasing Replica Consistency Performance with a Load Balancing Strategy in Data Grid Systems

Data replication in data grid systems is one of the important techniques for improving availability, scalability, and fault tolerance. However, it also raises issues such as maintaining replica consistency. Moreover, as grid environments are highly dynamic, some nodes can become more loaded than others and eventually turn into bottlenecks. The main idea of our work is to propose a solution that combines replica consistency maintenance with a dynamic load balancing strategy to improve access performance in a simulated grid environment.
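
The abstract does not detail the strategy, but the general idea of combining consistency with load balancing can be sketched as routing each read to the least-loaded replica that holds the latest version (all structures and fields below are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Replica:
    node: str
    load: float     # e.g., pending requests or CPU utilization
    version: int    # data version held by this replica

def pick_replica(replicas, latest_version):
    """Route to the least-loaded replica holding the latest version;
    stale replicas are skipped until consistency maintenance updates them."""
    fresh = [r for r in replicas if r.version == latest_version]
    if not fresh:
        raise RuntimeError("no consistent replica available")
    return min(fresh, key=lambda r: r.load)

replicas = [Replica("n1", 0.9, 7), Replica("n2", 0.3, 7), Replica("n3", 0.1, 6)]
print(pick_replica(replicas, latest_version=7).node)  # -> n2
```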

Trust and Reliability for Public Sector Data

The public sector holds large amounts of data in various areas such as social affairs, the economy, and tourism. Various initiatives such as Open Government Data and the EU Directive on public sector information aim to make these data available to public and private service providers. Requirements for the provision of public sector data are defined by legal and organizational frameworks. Surprisingly, these requirements hardly cover security aspects such as integrity or authenticity. In this paper we discuss the importance of these missing requirements and present a concept, based on electronic signatures, to assure the integrity and authenticity of provided data. We show that our concept is well suited to the provisioning of unaltered data and that, by incorporating redactable signatures, it can also be extended to data that needs to be anonymized before provisioning. Our proposed concept enhances the trust and reliability of provided public sector data.
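
A minimal sketch of the non-redactable case, using the Python cryptography package with Ed25519 (the paper does not prescribe a particular signature scheme, and redactable signatures require specialized constructions not shown here):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# The publishing agency signs the dataset once, at release time.
signing_key = Ed25519PrivateKey.generate()
dataset = b"district,population\nA,10432\nB,8821\n"
signature = signing_key.sign(dataset)

# Any consumer can later verify integrity and authenticity
# against the agency's published public key.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, dataset)
    print("dataset is authentic and unaltered")
except InvalidSignature:
    print("dataset was modified or is not from the agency")
```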

The Haar Wavelet Transform of the DNA Signal Representation

Deoxyribonucleic acid (DNA), a double-stranded helix of nucleotides, consists of four bases: Adenine (A), Cytosine (C), Guanine (G), and Thymine (T). In this work, we convert this genetic code into an equivalent digital signal representation. By applying a wavelet transform, such as the Haar wavelet, we are able to extract details that are not apparent in the original genetic code. We compare different organisms using the results of the Haar wavelet transform. This is achieved by using the trend part of the signal, since the trend carries most of the energy of the digital signal representation. Consequently, we are able to quantitatively reconstruct different biological families.
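
A minimal sketch of the pipeline using PyWavelets; note that the numeric base encoding below is one common choice, not necessarily the one used in the paper:

```python
import numpy as np
import pywt

# A common numerical mapping for the four bases; other encodings
# (EIIP, complex-valued, etc.) are also used in the literature.
BASE_VALUES = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}

def dna_trend(sequence, level=3):
    """Map a DNA string to a digital signal and return the Haar
    approximation ('trend') coefficients at the given level."""
    signal = np.array([BASE_VALUES[b] for b in sequence.upper()])
    coeffs = pywt.wavedec(signal, "haar", level=level)
    return coeffs[0]   # the trend part carries most of the energy

print(dna_trend("ACGTACGTGGCCAATT"))
```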

Qmulus – A Cloud-Driven GPS-Based Tracking System for Real-Time Traffic Routing

This paper presents Qmulus, a cloud-based GPS model. Qmulus is designed to compute the best possible route that leads the driver to the specified destination in the shortest time while taking real-time constraints into account. The intelligence incorporated into Qmulus's design makes it capable of generating a list of optimal routes and assigning priorities to them through customizable dynamic updates. The goal of this design is to minimize travel and cost overheads, maintain reliability and consistency, and provide scalability and flexibility. The proposed model focuses on narrowing the gap between a client application and a cloud service so as to render operation seamless. Qmulus's system model is closely integrated, and its concept has the potential to be extended into several other integrated applications, making it capable of adapting to different media and resources.
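
The abstract gives no algorithmic details; a common baseline for shortest-time routing under live updates (purely an assumption here) is a shortest-path search over a road graph whose edge weights are current travel times, as sketched below with networkx:

```python
import networkx as nx

# Road graph; edge weight = current travel time in minutes.
roads = nx.DiGraph()
roads.add_weighted_edges_from([
    ("A", "B", 4), ("B", "D", 6), ("A", "C", 2),
    ("C", "D", 9), ("C", "B", 1),
])

def best_route(graph, src, dst):
    return nx.shortest_path(graph, src, dst, weight="weight")

print(best_route(roads, "A", "D"))   # -> ['A', 'C', 'B', 'D'] (9 min)

# A real-time update (congestion on C->B) changes the answer.
roads["C"]["B"]["weight"] = 10
print(best_route(roads, "A", "D"))   # -> ['A', 'B', 'D'] (10 min)
```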

An Evaluation of Fixed-Wing and Multi-Rotor UAV Images Using Photogrammetric Image Processing

This paper introduces slope mapping by photogrammetry using unmanned aerial vehicles (UAVs). Two UAVs were used in this study, namely a fixed-wing and a multi-rotor platform. Both UAVs were used to capture images of the study area. A consumer digital camera was mounted vertically at the bottom of each UAV and captured images at a fixed altitude. The objectives of this study are to obtain three-dimensional coordinates of the slope area and to determine the accuracy of the photogrammetric products produced from both UAVs. Several control points and checkpoints were established in the study area using Real-Time Kinematic Global Positioning System (RTK-GPS) surveys. All acquired images from both UAVs went through the full photogrammetric workflow, including interior orientation, exterior orientation, aerial triangulation, and bundle adjustment, using photogrammetric software. Two primary products were generated in this study, namely a digital elevation model and a digital orthophoto. Based on the results, a UAV system can be used to map slope areas, especially for projects with limited budgets and time constraints.

An Immersive Motion Capture Environment

Motion capture technology has been in use for quite a while, and considerable research has been done in this area. Nevertheless, we discovered open issues in current motion capture environments. In this paper we provide a state-of-the-art overview of the relevant research areas and identify issues with current motion capture environments. Observations, interviews, and questionnaires were used to reveal the challenges actors currently face in a motion capture environment. Furthermore, we introduce the idea of creating a more immersive motion capture environment as a potential solution to improve acting performances and motion capture outcomes. The goal is to explain the identified open issues and the developed ideas, which shall serve as a basis for further research. Moreover, a methodology to address the interaction and systems design issues is proposed. A future outcome could be that motion capture actors are able to perform more naturally, especially when using a non-body-worn solution.

A Hybrid Scheme for On-Line Diagnostic Decision Making Using Optimal Data Representation and Filtering Techniques

Early diagnostic decision making in industrial processes is absolutely necessary to produce high-quality final products. It helps provide early warning of special events in a process so that their assignable causes can be found. This work presents a hybrid diagnostic scheme for batch processes in which nonlinear representation of raw process data is combined with classification tree techniques. Nonlinear kernel-based dimension reduction is executed to obtain nonlinear classification decision boundaries between fault classes. To enhance diagnosis performance for batch processes, the data are filtered to remove irrelevant information. Four diagnostic schemes, combining different representation, filtering, and future-observation estimation methods, are evaluated. The performance of the presented diagnosis schemes is demonstrated using batch process data.
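
The combination of kernel-based dimension reduction and a classification tree can be sketched with scikit-learn as below; the data, kernel, and hyperparameters are invented placeholders, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Hypothetical batch-process data: 300 batches x 20 sensors, 3 fault classes.
X = rng.normal(size=(300, 20))
y = rng.integers(0, 3, size=300)

# Kernel-based dimension reduction feeding a classification tree,
# mirroring the general structure of the hybrid scheme.
model = make_pipeline(
    KernelPCA(n_components=5, kernel="rbf", gamma=0.1),
    DecisionTreeClassifier(max_depth=4),
).fit(X, y)
print(model.predict(X[:5]))
```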

Characterizations of Star-Shaped, L-Convex, and Convex Polygons

A chord of a simple polygon P is a line segment [xy] that intersects the boundary of P only at its endpoints x and y. A chord of P is called an interior chord provided the interior of [xy] lies in the interior of P. P is weakly visible from [xy] if for every point v in P there exists a point w in [xy] such that [vw] lies in P. In this paper, star-shaped, L-convex, and convex polygons are characterized in terms of weak visibility properties from internal chords and star-shaped subsets of P. A new Krasnoselskii-type characterization of isothetic star-shaped polygons is also presented.
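
For reference, the two visibility notions defined above can be restated in symbols (this is only a transcription of the abstract's definitions):

```latex
% [xy] is an interior chord of P:
\operatorname{int}[xy] \subseteq \operatorname{int} P
% P is weakly visible from the chord [xy]:
\forall v \in P \;\; \exists w \in [xy] \;:\; [vw] \subseteq P
```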

A Visual Educational Modeling Language to Help Teachers in Learning Scenario Design

The success of an e-learning system depends highly on the quality of its educational content and on how effective, complete, and simple the design tool is for teachers. Educational modeling languages (EMLs) have been proposed as design languages intended for teachers to model diverse teaching-learning experiences, independently of the pedagogical approach and in different contexts. However, most existing EMLs are criticized for being too abstract and too complex to be understood and manipulated by teachers. In this paper, we present a visual EML that simplifies the process of designing learning scenarios for teachers with no programming background. Based on the conceptual framework of activity theory, our visual EML uses domain-specific modeling techniques to provide a pedagogical level of abstraction in the design process.

A Case Study of 3D Stereoscopic Conversion in the Visual Effects Industry

This paper covers a series of key points in 2D-to-3D stereoscopic conversion and presents a stereoscopic conversion approach successfully applied in the current visual effects industry. The purpose of this paper is to document a detailed workflow and the concepts that have been successfully used in 3D stereoscopic conversion for feature films, and thereby to clarify the stereoscopic conversion production process. It gives entry-level artists a clear idea of the process and an overall understanding of 3D stereoscopy in the digital compositing field, addresses the higher-education sector of visual effects, and hopefully inspires further collaboration and participation, particularly between academia and industry.

A GPU Based Texture Mapping Technique for 3D Models Using Multi-View Images

Previous algorithms for 3D model texture generation and mapping from multi-view images have issues in texture chart generation, namely self-intersection and concentration of the texture in texture space. They can also suffer from problems due to occluded areas, such as the inner parts of thighs. In this paper we propose a texture mapping technique for 3D models using multi-view images on the GPU. We perform texture mapping directly in the GPU fragment shader, per pixel, without generating a texture map, and we resolve occluded areas using the 3D model's depth information. Our method requires more GPU computation than previous works, but it achieves real-time performance, and the previously mentioned problems do not occur.
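
In the proposed method this logic runs per pixel in the fragment shader; the CPU-side sketch below shows only the underlying depth-test idea for resolving occlusion (projection matrices, depth maps, and the tolerance are assumptions):

```python
import numpy as np

def sample_color(point, cameras, images, depth_maps, eps=1e-2):
    """For a 3D surface point, pick a view in which the point is not
    occluded (its depth matches that view's depth map) and sample the
    corresponding image. `cameras` are 3x4 projection matrices."""
    p_h = np.append(point, 1.0)
    for P, img, depth in zip(cameras, images, depth_maps):
        uvw = P @ p_h
        z = uvw[2]
        if z <= 0:
            continue                      # behind the camera
        u, v = int(uvw[0] / z), int(uvw[1] / z)
        if not (0 <= v < depth.shape[0] and 0 <= u < depth.shape[1]):
            continue                      # projects outside the image
        if abs(depth[v, u] - z) < eps:    # depth test: visible here
            return img[v, u]
    return None                           # occluded in all views
```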

Estimation of Relative Self-Localization Based on Natural Landmarks and an Improved SURF

It is important for an autonomous mobile robot to know where it is at any time in an indoor environment. In this paper, we design a relative self-localization algorithm. The algorithm compares the interest points in two images and computes the relative displacement and orientation to determine the robot's pose. First, we use the SURF algorithm to extract interest points from images of the ceiling. Second, to reduce the amount of calculation, a modified SURF is used to extract the orientation and description of the interest points. Finally, from the transformation of the interest points between the two images, the relative self-localization of the mobile robot is estimated.
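
A minimal sketch of the matching-and-transform step with OpenCV; ORB stands in for SURF here because SURF requires the non-free contrib build, and the paper's improved SURF descriptor is not reproduced:

```python
import cv2
import numpy as np

def relative_pose(img1, img2):
    """Estimate in-plane rotation and translation between two ceiling
    images from matched interest points."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:100]

    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])

    # Similarity transform (rotation + translation + scale) mapping
    # image 1 onto image 2, estimated robustly from the matches.
    M, _ = cv2.estimateAffinePartial2D(src, dst)
    theta = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    return theta, M[:, 2]   # heading change (deg), pixel displacement
```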

On Speeding Up Support Vector Machines: Proximity Graphs Versus Random Sampling for Pre-Selection Condensation

Support vector machines (SVMs) are considered to be the best machine learning algorithms for minimizing the predictive probability of misclassification. However, their drawback is that, for large data sets, computing the optimal decision boundary is a time-consuming function of the training set size. Hence several methods have been proposed to speed up the SVM algorithm. Here, three methods for speeding up the computation of SVM classifiers are compared experimentally on a musical genre classification problem. The simplest method pre-selects a random sample of the data before the SVM algorithm is applied. The two other methods use proximity graphs to pre-select data near the decision boundary: one uses k-nearest-neighbor graphs and the other relative neighborhood graphs.
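
The two pre-selection ideas can be sketched as follows; the kNN criterion below (keep points whose neighborhoods contain the opposite class) is a simplified proxy for the graph-based condensation the paper evaluates:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def random_condense(X, y, frac=0.2, seed=0):
    """Baseline: keep a random fraction of the training set."""
    idx = np.random.default_rng(seed).choice(len(X), int(frac * len(X)),
                                             replace=False)
    return X[idx], y[idx]

def knn_boundary_condense(X, y, k=5):
    """Keep points with at least one opposite-class point among their
    k nearest neighbors -- a kNN-graph proxy for 'near the boundary'."""
    _, nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    keep = np.array([np.any(y[nbrs[i, 1:]] != y[i]) for i in range(len(X))])
    return X[keep], y[keep]

# Either condensed set then trains a standard SVM, e.g.:
# Xc, yc = knn_boundary_condense(X_train, y_train)
# clf = SVC(kernel="rbf").fit(Xc, yc)
```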

A Proposed Technique for Software Development Risks Identification Using the FTA Model

Software Development Risks Identification (SDRI), using Fault Tree Analysis (FTA), is a proposed technique to identify not only the risk factors in the software development life cycle but also the causes of their appearance. The method is based on analyzing the probable causes of software development failures before they become problems and adversely affect a project. It uses FTA to determine the probability of particular system-level failures, as defined by the Taxonomy for Sources of Software Development Risk, by analyzing how an undesired system state arises from a series of lower-level events combined with Boolean logic. The major purpose of this paper is to use the probabilistic calculations of the FTA approach to determine all possible causes that lead to the occurrence of software development risk.
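
Under the usual independence assumption for basic events, AND and OR gates compose probabilities multiplicatively, as in the sketch below; the example tree and numbers are invented for illustration:

```python
def or_gate(*probs):
    """P(at least one event occurs) = 1 - prod(1 - p_i)."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(*probs):
    """P(all events occur) = prod(p_i)."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical fault tree: schedule slip occurs if requirements churn
# AND (understaffing OR tooling failure).
p_requirements_churn = 0.30
p_understaffing = 0.20
p_tooling_failure = 0.05

p_schedule_slip = and_gate(p_requirements_churn,
                           or_gate(p_understaffing, p_tooling_failure))
print(f"P(schedule slip) = {p_schedule_slip:.3f}")   # 0.30 * 0.24 = 0.072
```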

Acute Coronary Syndrome Prediction Using Data Mining Techniques: An Application

In this paper we use data mining techniques to investigate factors that contribute significantly to the risk of acute coronary syndrome. The dependent variable is the diagnosis, a dichotomous variable indicating the presence or absence of the disease. We have applied binary regression to the factors affecting the dependent variable. The data set has been taken from two different cardiac hospitals in Karachi, Pakistan. There are sixteen variables in total, of which one is the dependent variable and the other fifteen are independent variables. For better performance of the regression model in predicting acute coronary syndrome, data reduction techniques such as principal component analysis are applied. Based on the results of the data reduction, we retained only fourteen of the sixteen factors.
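
The reported pipeline, principal component analysis for data reduction followed by binary (logistic) regression, can be sketched with scikit-learn; the data below are synthetic stand-ins, not the hospital data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
# Hypothetical stand-in: 500 patients, 15 risk factors,
# binary diagnosis (1 = disease present).
X = rng.normal(size=(500, 15))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 0).astype(int)

# PCA-based data reduction feeding a binary regression model.
model = make_pipeline(PCA(n_components=10), LogisticRegression()).fit(X, y)
print("training accuracy:", model.score(X, y))
```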

The Design and Development of Driving Game as an Evaluation Instrument for Driving License Test

The focus of this paper is the design and development of an educational game prototype as an evaluation instrument for the Malaysian driving license static test. This educational game brings gaming technology into the conventional objective static test to make it more effective, realistic, and interesting. Through this sense of realism, future drivers can learn, memorize, and apply the material in real life. The current online objective static test only makes users memorize the answers without knowing or understanding the true purpose of the questions; therefore, in real life, they may not behave as expected because of behavioral and moral shortcomings. The prototype has been developed in the form of multiple-choice questions integrated with a 3D gaming environment that simulates real environments and scenarios. Based on the testing conducted, the respondents agree that this game prototype can increase understanding and promote a sense of obligation towards traffic rules.

Parametric Modeling Approach for Call Holding Times in IP-Based Public Safety Networks via the EM Algorithm

This paper presents parametric probability density models for call holding times (CHTs) in an emergency call center, based on actual data collected for over a week in the public Emergency Information Network (EIN) in Mongolia. When the chosen set of candidates from the Gamma distribution family is fitted to the call holding time data, the whole area of the empirical CHT histogram is underestimated, owing to spikes of higher probability and long tails of lower probability in the histogram. Therefore, we propose a parametric model for the CHTs of public safety networks (PSNs) based on a mixture of lognormal distributions (Gaussian in the log domain), with explicit analytical expressions. Finally, we show that the CHTs for PSNs are fitted reasonably well by a mixture of lognormal distributions estimated via the expectation-maximization (EM) algorithm. This result is significant, as it provides a useful mathematical tool in the explicit form of a mixture of lognormal distributions.
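
Because a lognormal mixture in t is a Gaussian mixture in log t, the EM fit can be sketched by running a standard Gaussian mixture on log-transformed durations (the data below are synthetic, not the EIN measurements):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical holding times (seconds): two lognormal call populations.
t = np.concatenate([rng.lognormal(3.0, 0.4, 700),
                    rng.lognormal(4.5, 0.6, 300)])

# EM on log(t): a lognormal mixture in t is a Gaussian mixture in log(t).
gm = GaussianMixture(n_components=2).fit(np.log(t).reshape(-1, 1))
print("weights:", gm.weights_)
print("log-scale means:", gm.means_.ravel())
print("log-scale stds:", np.sqrt(gm.covariances_).ravel())
```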