Block-Based 2D to 3D Image Conversion Method

With the advent of three-dimensional (3D) technology, there has been a great deal of research on converting 2D images to 3D. The main difference between 2D and 3D is the visual illusion of depth in 3D images, and many depth estimation techniques have been proposed in recent years. The objective of this paper is to convert 2D images to 3D images with low computation time. To this end, the input image is divided into blocks from which depth information is obtained, and a depth map is generated from that information. The 3D image is then warped using the original image and the depth map. The proposed method is tested on the Make3D and NYU-V2 datasets, and the experimental results are compared with other recent methods. The proposed method achieves good accuracy with less computation time.
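The abstract does not specify the block-level depth cue or the warping details, so the following is only a minimal sketch of the general pipeline: a per-block depth value is assigned here from the block's vertical position (a common placeholder cue), the block depths are assembled into a depth map, and a simple horizontal pixel shift stands in for depth-image-based warping. All parameter values are illustrative.

```python
import numpy as np

def block_depth_map(gray, block=16):
    """Assign one depth value per block; vertical position is used as a
    stand-in for the paper's block-level depth cue (lower blocks = nearer)."""
    h, w = gray.shape
    depth = np.zeros_like(gray, dtype=np.float32)
    for y in range(0, h, block):
        for x in range(0, w, block):
            depth[y:y + block, x:x + block] = y / max(h - 1, 1)
    return depth  # values in [0, 1]

def warp_view(image, depth, max_disparity=8):
    """Very simple depth-image-based rendering: shift each pixel
    horizontally in proportion to its depth to synthesize a new view."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    shift = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + shift[y, x]
            if nx < w:
                out[y, nx] = image[y, x]
    return out

# Illustrative usage with a synthetic grayscale image
img = (np.random.rand(128, 128) * 255).astype(np.uint8)
dmap = block_depth_map(img, block=16)
right_view = warp_view(img, dmap)
```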

An Alternative Approach for Assessing the Impact of Cutting Conditions on Surface Roughness Using Single Decision Tree

In this study, an approach for identifying the factors affecting surface roughness in a machining process is presented. The study is based on 81 surface roughness measurements covering a wide range of cutting tools (conventional, cutting tool with holes, cutting tool with composite material), workpiece materials (AISI 1045 steel, AA2024 aluminum alloy, A48-class 30 gray cast iron), spindle speeds (630-1000 rpm), feed rates (0.05-0.075 mm/rev), depths of cut (0.05-0.15 mm) and tool overhangs (41-65 mm). A single decision tree (SDT) analysis was performed to identify the factors for a predictive model of surface roughness, and the CART algorithm was employed to build and evaluate the regression tree. The results show that the single decision tree outperforms traditional regression models, with higher forecast accuracy.
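As an illustration of the kind of analysis described (not the authors' data or settings), the sketch below fits a CART-style regression tree to synthetic machining records with the same predictors; scikit-learn's DecisionTreeRegressor implements CART-type trees.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 81  # same sample size as the study; the data itself is synthetic
df = pd.DataFrame({
    "spindle_speed": rng.uniform(630, 1000, n),    # rpm
    "feed_rate":     rng.uniform(0.05, 0.075, n),  # mm/rev
    "depth_of_cut":  rng.uniform(0.05, 0.15, n),   # mm
    "tool_overhang": rng.uniform(41, 65, n),       # mm
    "tool_type":     rng.integers(0, 3, n),        # encoded tool category
})
# Synthetic response standing in for measured surface roughness (Ra)
df["Ra"] = (0.8 * df["feed_rate"] * 40 + 0.002 * df["tool_overhang"]
            + 0.3 * df["depth_of_cut"] + rng.normal(0, 0.02, n))

tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=5, random_state=0)
tree.fit(df.drop(columns="Ra"), df["Ra"])
print(export_text(tree, feature_names=list(df.columns[:-1])))
```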

A Minimum Spanning Tree-Based Method for Initializing the K-Means Clustering Algorithm

The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the algorithm often converges to local minima because it is sensitive to the initial cluster centers. In this paper, an algorithm for selecting initial cluster centers on the basis of a minimum spanning tree (MST) is presented. Vertices of the MST with the same degree are treated as a group and used to find the skeleton data points. Furthermore, a distance measure between skeleton data points that takes both degree and Euclidean distance into account is presented. Finally, an MST-based initialization method for the k-means algorithm is presented, and its time complexity is analyzed. The presented algorithm is tested on five data sets from the UCI Machine Learning Repository, and the experimental results illustrate its effectiveness compared to three existing initialization methods.
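The paper's exact skeleton-point and degree-weighted distance definitions are not given in the abstract, so the sketch below only illustrates the general idea: build an MST over the data, rank points by MST degree, and pick mutually distant high-degree points as initial centers for k-means.

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.cluster import KMeans

def mst_init_centers(X, k):
    """Pick k initial centers: prefer points with high MST degree,
    but keep the chosen centers far apart from each other."""
    D = distance_matrix(X, X)
    mst = minimum_spanning_tree(D).toarray()
    adj = (mst + mst.T) > 0
    degree = adj.sum(axis=1)
    order = np.argsort(-degree)            # high-degree points first
    centers = [order[0]]
    for idx in order[1:]:
        if len(centers) == k:
            break
        # accept a candidate only if it is not too close to chosen centers
        if min(D[idx, c] for c in centers) > np.median(D):
            centers.append(idx)
    for idx in order:                       # fall back if too few accepted
        if len(centers) == k:
            break
        if idx not in centers:
            centers.append(idx)
    return X[centers]

X = np.random.default_rng(1).normal(size=(200, 2))
init = mst_init_centers(X, k=3)
labels = KMeans(n_clusters=3, init=init, n_init=1).fit_predict(X)
```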

Determination of the Bank's Customer Risk Profile: Data Mining Applications

In this study, the clients who applied to a bank branch for loans were analyzed through data mining. The study used information such as the amounts of loans received by personal and SME clients of the bank branch, the number of installments, the number of delayed loan installments, payments at other banks, and the number of banks to which the clients were in debt between 2010 and 2013. The client risk profile was examined through Classification and Regression Tree (CART) analysis, one of the decision tree classification methods. At the end of the study, five different customer types were identified on the decision tree. These customer types were ranked by the risk they pose to the bank branch, and the customers were classified according to the resulting risk ratings.
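As an illustrative companion (the field names, labels and data below are invented, not the bank's records), a CART classifier can be fitted to loan records and its printed rules play the role of the customer-type segments described in the study.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
n = 500  # synthetic loan records; field names are illustrative only
data = pd.DataFrame({
    "loan_amount":      rng.uniform(1e3, 1e5, n),
    "installments":     rng.integers(6, 60, n),
    "delayed_payments": rng.integers(0, 12, n),
    "other_bank_debt":  rng.integers(0, 5, n),
})
# Synthetic risk label: more delays and more creditors -> riskier
risk = (data["delayed_payments"] * 2 + data["other_bank_debt"] > 10).astype(int)

cart = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
cart.fit(data, risk)
# The printed tree stands in for the paper's customer-type rules
print(export_text(cart, feature_names=list(data.columns)))
```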

Approximate Similarity Measurement of Web Sites Using Genetic Algorithms and Binary Trees

In this paper, we determine the similarity of two HTML web applications. A genetic algorithm is used to select the most significant web pages of each application, rather than using every page of a site. Using these significant pages, we compute a similarity value between the two applications. The algorithm is efficient because it compares only a reduced number of pages, at the cost of returning an approximate value of the similarity. Binary trees are used to store the tags of the significant pages. The algorithm was implemented in the Java language.
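The genetic-algorithm page selection and the binary-tree tag storage are specific to the paper and are not reproduced here; the sketch below only illustrates the final comparison step under a simplifying assumption, namely that each significant page is reduced to its HTML tag sequence and page pairs are compared with a standard sequence-similarity ratio.

```python
from difflib import SequenceMatcher
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collect the opening-tag sequence of an HTML document."""
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def tag_sequence(html):
    p = TagCollector()
    p.feed(html)
    return p.tags

def site_similarity(pages_a, pages_b):
    """Average best-match similarity between two sets of significant pages."""
    seqs_a = [tag_sequence(p) for p in pages_a]
    seqs_b = [tag_sequence(p) for p in pages_b]
    scores = []
    for sa in seqs_a:
        best = max(SequenceMatcher(None, sa, sb).ratio() for sb in seqs_b)
        scores.append(best)
    return sum(scores) / len(scores)

a = ["<html><body><div><p>x</p></div></body></html>"]
b = ["<html><body><div><span>y</span></div></body></html>"]
print(site_similarity(a, b))
```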

Performance Comparison of ADTree and Naive Bayes Algorithms for Spam Filtering

Classification is an important data mining technique that can be used for data filtering in artificial intelligence. Its broad applicability to all kinds of data means it is used in nearly every field of modern life. Classification groups items according to the features judged interesting and useful. In this paper, we compare two classification methods, Naïve Bayes and ADTree, for detecting spam e-mail. This choice is motivated by the fact that the Naïve Bayes algorithm is based on probability calculus, while the ADTree algorithm is based on a decision tree. The parameters of the two classifiers are set to maximize the true positive rate and minimize the false positive rate. The experimental results present classification accuracy and a cost analysis to guide the choice of an optimal classifier for spam detection, and identify the number of attributes that gives a good trade-off between the number of attributes and the classification accuracy.
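ADTree is typically available in Weka rather than in Python libraries, so the sketch below covers only the Naïve Bayes side of the comparison on a tiny made-up corpus; the feature extraction and data are purely illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; a real evaluation would use a spam benchmark
emails = [
    "win a free prize now", "cheap meds limited offer",
    "meeting agenda for monday", "please review the attached report",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)
print(model.predict(["free offer, claim your prize"]))    # expected: spam
print(model.predict(["agenda for the project meeting"]))  # expected: ham
```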

Use of Carica papaya as a Bio-Sorbent for Removal of Heavy Metals in Wastewater

The study assessed the effectiveness of pawpaw (Carica papaya) wood, acting as a bio-sorbent, in reducing the concentrations of heavy metals in wastewater. The following heavy metals were considered: zinc, cadmium, lead, copper, iron, selenium, nickel and manganese. The physicochemical properties of the Carica papaya stem were studied. The experimental sample was sourced from the trunk of a felled mature pawpaw tree. Wastewater for experimental use was prepared by dissolving soil samples collected from a dump site at Owerri, Imo State, Nigeria in water. The concentration of each metal remaining in solution after bio-sorption was determined using an atomic absorption spectrometer. The effects of pH and initial heavy metal concentration were studied in a batch reactor. The spectrometer results showed that different functional groups were detected in the Carica papaya stem biomass. Metal removal increased with pH for all the metals considered except nickel and manganese. Optimum bio-sorption occurred at pH 5.9 with a 5 g/100 ml dose of bio-sorbent. The results of the study showed that the treated wastewater is fit for irrigation purposes according to the Canadian wastewater quality guidelines for the protection of agriculture. This approach thus provides a cost-effective and environmentally friendly option for treating wastewater.

Tree Sign Patterns of Small Order that Allow an Eventually Positive Matrix

A sign pattern is a matrix whose entries belong to the set {+, −, 0}. An n-by-n sign pattern A is said to allow an eventually positive matrix if there exist a real matrix B with the same sign pattern as A and a positive integer k0 such that B^k > 0 for all k ≥ k0. Identifying and classifying the n-by-n sign patterns that allow an eventually positive matrix are two well-known open problems. In this article, the tree sign patterns of small order that allow an eventually positive matrix are classified completely.
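As a numerical illustration of the definition (not the paper's classification technique), the sketch below checks whether the powers of a given real matrix become and stay entrywise positive over a finite range of exponents; such a check is only heuristic, since eventual positivity concerns all sufficiently large k.

```python
import numpy as np

def looks_eventually_positive(A, k_max=50):
    """Heuristic check: find the smallest k0 <= k_max such that A^k > 0
    for every k from k0 to k_max. Returns k0 or None."""
    A = np.asarray(A, dtype=float)
    powers = [np.linalg.matrix_power(A, k) for k in range(1, k_max + 1)]
    for k0 in range(1, k_max + 1):
        if all((P > 0).all() for P in powers[k0 - 1:]):
            return k0
    return None

# A matrix with a negative entry whose powers nevertheless become positive
A = np.array([[1.0, 1.0],
              [2.0, -0.1]])
print(looks_eventually_positive(A))  # prints 2
```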

A Look at the Gezi Park Protests through the Lens of Media

The Gezi Park protests of 2013 significantly changed the Turkish agenda, and their effects have been felt ever since. The protests, which rapidly spread throughout the country, were triggered by the proposal to rebuild the Ottoman Army Barracks as a shopping mall on Gezi Park in Istanbul's Taksim neighbourhood despite the opposition of several NGOs, and by the cutting of trees in the park for this purpose. Once the news that construction vehicles had entered the park on May 27 spread on social media, activists moved into the park to stop the demolition, and the police used disproportionate force against them. With this police intervention and the then prime minister Tayyip Erdoğan's insistent statements about the construction plans, the protests turned into anti-government demonstrations, which then spread to the rest of the country, mainly to big cities such as Ankara and Izmir. According to the Ministry of Internal Affairs' reports of June 23rd, 2.5 million people joined the demonstrations in 79 provinces, all except Bayburt and Bingöl, while even more people shared their opinions via social networks. As a result of these events, 8 civilians and 2 security personnel lost their lives, namely police chief Mustafa Sarı, police officer Ahmet Küçükdağ, and citizens Mehmet Ayvalıtaş, Abdullah Cömert, Ethem Sarısülük, Ali İsmail Korkmaz, Ahmet Atakan, Berkin Elvan, Burak Can Karamanoğlu, Mehmet İstif and Elif Çermik, and 8163 more people were injured. Besides being a turning point in Turkish history, the Gezi Park protests also had broad repercussions in both Turkish and global media, which focused on Turkey throughout the events. Our study conducts a content analysis of three Turkish newspapers with varying ideological standpoints, Hürriyet, Cumhuriyet and Yeni Şafak, in order to reveal their basic approach to news coverage in the context of the Gezi Park protests. Headlines, news segments and news content relating to the Gezi protests were examined and analysed for this purpose. The aim of this study is to understand the social effects of the Gezi Park protests through media samples with differing political attitudes towards news coverage.

A New DIDS Design Based on a Combination Feature Selection Approach

Feature selection has been used in many fields such as classification, data mining and object recognition, and has proven effective for removing irrelevant and redundant features from the original dataset. In this paper, a new design of a distributed intrusion detection system (DIDS) is presented, using a combined feature selection model based on the Bees Algorithm and a decision tree. The Bees Algorithm is used as the search strategy to find the optimal subset of features, whereas the decision tree is used to judge the selected features. Both the produced features and the generated rules are used by a Decision Making Mobile Agent to decide whether or not there is an attack in the network. The Decision Making Mobile Agent migrates through the network, moving from node to node; if it finds an attack on one of the nodes, it alerts the user through the User Interface Agent or takes action through the Action Mobile Agent. The KDD Cup 99 dataset is used to test the effectiveness of the proposed system. The results show that even when only four features are used, the proposed system performs better than the results obtained using all 41 features.
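The sketch below is a heavily simplified, generic bees-style wrapper search with a decision tree as the subset evaluator, run on synthetic data; the agents, the KDD Cup 99 preprocessing and the paper's exact algorithm parameters are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, n_informative=4,
                           random_state=0)

def fitness(mask):
    """Decision tree accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(max_depth=5, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def neighbour(mask):
    """Local search step: flip one randomly chosen feature."""
    m = mask.copy()
    m[rng.integers(len(m))] ^= True
    return m

# Simplified bees-style search: scouts explore, the best sites are refined
n_scouts, n_best, n_iter = 10, 3, 15
sites = [rng.random(20) < 0.3 for _ in range(n_scouts)]
for _ in range(n_iter):
    sites.sort(key=fitness, reverse=True)
    new_sites = []
    for s in sites[:n_best]:                # local search around best sites
        cands = [s] + [neighbour(s) for _ in range(5)]
        new_sites.append(max(cands, key=fitness))
    while len(new_sites) < n_scouts:        # remaining scouts search randomly
        new_sites.append(rng.random(20) < 0.3)
    sites = new_sites

best = max(sites, key=fitness)
print("selected features:", np.flatnonzero(best), "score:", fitness(best))
```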

Online Optic Disk Segmentation Using Fractals

Optic disk segmentation plays a key role in the mass screening of individuals for diabetic retinopathy and glaucoma. An efficient hardware-based algorithm for optic disk localization and segmentation would aid the development of an automated retinal image analysis system for real-time applications. Herein, a pixel-intensity-based fractal analysis algorithm for automatic localization and segmentation of the optic disk, implemented on a TMS320C6416 DSK DSP board, is reported. The experiments were performed on colour and fluorescein angiography retinal fundus images. Initially, the images were pre-processed to reduce noise and enhance quality. The retinal vascular tree was then extracted using the Canny edge detection technique. Finally, pixel-intensity-based fractal analysis was performed to segment the optic disk by tracing the origin of the vascular tree. The proposed method was evaluated on three publicly available retinal image data sets and on a data set obtained from an eye clinic. The average accuracy achieved is 96.2%. To the best of our knowledge, this is the first work reporting the use of a TMS320C6416 DSK DSP board and a pixel-intensity-based fractal analysis algorithm for automatic localization and segmentation of the optic disk. This paves the way for developing devices for the detection of retinal diseases in the future.
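The fractal analysis and the DSP implementation are beyond a short example, so the sketch below only covers the first two stages as they might look on a PC with OpenCV: noise reduction, contrast enhancement and Canny edge extraction of the vascular tree. The file name and thresholds are illustrative.

```python
import cv2

# Illustrative file name; any retinal fundus image would do
img = cv2.imread("fundus.png")
green = img[:, :, 1]  # the green channel usually shows vessels best

# Pre-processing: denoise and enhance local contrast
blurred = cv2.GaussianBlur(green, (5, 5), 0)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(blurred)

# Edge map of the retinal vascular tree (thresholds chosen by hand)
edges = cv2.Canny(enhanced, 40, 120)
cv2.imwrite("vessel_edges.png", edges)
```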

Identification of Healthy and BSR-Infected Oil Palm Trees Using Color Indices

Most oil palm plantations have been threatened by Basal Stem Rot (BSR) disease, which causes serious economic losses. This study was conducted to identify healthy and BSR-infected oil palm trees using thirteen colour indices. Multispectral and thermal cameras were used to capture 216 images of leaves taken from fronds 1, 9 and 17. The indices used were the normalized difference vegetation index (NDVI), red (R), green (G), blue (B), near infrared (NIR), green − blue (G−B), green/blue (G/B), green − red (G−R), green/red (G/R), hue (H), saturation (S), intensity (I) and a thermal index (T). The study concludes that the G index taken from frond 9 is the best index for differentiating between healthy and BSR-infected oil palm trees. It not only gave a high correlation coefficient (R = −0.962), but also a high degree of separation between healthy and BSR-infected trees. Furthermore, the power and S models developed using the G index gave the highest R² value, 0.985.
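As an illustration of how such indices are computed from the camera bands (the band arrays here are random placeholders, not the study's images), a short numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)
# Placeholder reflectance bands standing in for the multispectral image
R, G, B, NIR = (rng.uniform(0.05, 0.6, shape) for _ in range(4))

eps = 1e-9                       # avoid division by zero
ndvi = (NIR - R) / (NIR + R + eps)
g_minus_b = G - B
g_over_b  = G / (B + eps)
g_minus_r = G - R
g_over_r  = G / (R + eps)

# A per-image value such as the mean G index can then be correlated
# with disease status across the image set
print("mean G index:", G.mean(), "mean NDVI:", ndvi.mean())
```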

A Comprehensive Method of Fault Detection and Isolation Based On Testability Modeling Data

Testability modeling is a commonly used method in the testability design and analysis of systems. A dependency matrix is obtained from testability modeling, from which a quantitative evaluation of fault detection and isolation can be given. Based on the dependency matrix, a diagnosis tree can be obtained; the tree provides the procedure for fault detection and isolation. In practice, however, the dependency matrix usually includes both built-in tests (BIT) and manual tests. BIT runs automatically and is not constrained by the procedure, so the method above cannot provide the most efficient diagnosis or exploit the advantages of BIT. A comprehensive method of fault detection and isolation is therefore proposed. The method combines the advantages of BIT and manual tests by splitting the matrix. The result of the case study shows that the method is effective.
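The splitting rule used by the authors is not described in the abstract; the sketch below only shows the underlying data structure, a Boolean fault-by-test dependency matrix split into BIT and manual columns, and a simple greedy choice of the next manual test that best halves the remaining fault candidates.

```python
import numpy as np

# Rows = faults, columns = tests; D[i, j] = True if test j detects fault i.
# The matrix and the BIT/manual split are illustrative only.
D = np.array([[1, 0, 1, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=bool)
is_bit = np.array([True, True, False, False, False])  # first two are BIT

def best_manual_test(candidates):
    """Pick the manual test whose pass/fail outcome splits the current
    candidate fault set most evenly (closest to half detect it)."""
    manual = np.flatnonzero(~is_bit)
    counts = D[np.ix_(candidates, manual)].sum(axis=0)
    target = len(candidates) / 2
    return manual[np.argmin(np.abs(counts - target))]

# BIT results arrive automatically; suppose BIT 0 failed and BIT 1 passed,
# so the candidate faults are those detected by test 0 but not by test 1.
candidates = np.flatnonzero(D[:, 0] & ~D[:, 1])
print("candidate faults after BIT:", candidates)
print("next manual test to run:", best_manual_test(candidates))
```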

Pressure Losses on Realistic Geometry of Tracheobronchial Tree

The real bronchial tree is a very complicated piping system, and the analysis of flow and pressure losses in this system is very difficult. Due to the complex geometry and the very small size of the lower generations, examination by CFD is possible only in the central part of the bronchial tree. To specify the pressure losses of the lower generations, a mathematical equation is therefore needed. Deriving mathematical formulas for the calculation of pressure losses in real lungs is a time-consuming and inefficient process because of their complexity and diversity. For these calculations it is necessary either to slightly simplify the lung geometry (a constant cross-section over the length of each generation) or to use one of the idealized lung models (Horsfield, Weibel). The article compares the pressure losses obtained from a CFD simulation of air flow in the central part of the real bronchial tree with the values calculated for slightly simplified real lungs using a mathematical relationship derived from the Bernoulli and continuity equations. The aim of the article is to analyse the accuracy of the analytical method and its applicability to the calculation of pressure losses in the lower generations, which are difficult to solve numerically because of their small geometry.
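For orientation, combining the continuity and Bernoulli equations for an incompressible, loss-free flow between two cross-sections of a single airway gives the kind of relationship referred to here (the paper's actual formula, which accounts for branching and losses, is not reproduced in the abstract):

```latex
% Continuity between cross-sections 1 and 2 of one airway segment:
%   A_1 v_1 = A_2 v_2
% Bernoulli along the same streamline (incompressible, ideal):
%   p_1 + \tfrac{1}{2}\rho v_1^2 = p_2 + \tfrac{1}{2}\rho v_2^2
% Eliminating v_2 gives the pressure change due to the area change:
\Delta p \;=\; p_1 - p_2
        \;=\; \frac{1}{2}\,\rho\, v_1^2
              \left[\left(\frac{A_1}{A_2}\right)^{2} - 1\right]
```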

The Pressure Losses in the Model of Human Lungs

For the treatment of acute and chronic lung diseases, it is preferable to deliver medicaments by inhalation, so that the drug is delivered directly to the tracheobronchial tree. This allows the medicament to reach the site of action directly, giving a rapid onset of action and maximum efficiency. The transport of aerosol particles to a particular part of the lung is influenced by the particle size, the anatomy of the lungs, the breathing pattern and the airway resistance. This article deals with the calculation of airway resistance in the Horsfield lung model. It addresses the determination of pressure losses at bifurcations and thus defines the pressure drop at a given location in the bronchial tree. The obtained data will be used as boundary conditions for the transport of aerosol particles in the central part of the bronchial tree computed with a Computational Fluid Dynamics (CFD) approach. The results of the CFD simulation will provide information on the required particle size and the optimal inhalation technique for particle transport into a particular part of the lung.
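The abstract does not state which resistance relation is used; for fully developed laminar flow in a cylindrical airway, a commonly used starting point is the Hagen–Poiseuille resistance, from which the pressure drop of a segment follows (bifurcation losses, as treated in the paper, add further terms):

```latex
% Hagen--Poiseuille resistance of a cylindrical airway of length L,
% radius r, carrying air of dynamic viscosity \mu at flow rate Q:
R \;=\; \frac{8\,\mu\,L}{\pi\,r^{4}},
\qquad
\Delta p \;=\; R\,Q
% Resistances of segments in series add; for parallel daughter branches
% the reciprocals add: 1/R_{\mathrm{par}} = \sum_i 1/R_i .
```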

Some New Bounds for a Real Power of the Normalized Laplacian Eigenvalues

For a given simple connected graph, we present, via a new approach, some new bounds for a special topological index given by the sum of a real power of the non-zero normalized Laplacian eigenvalues. This approach not only allows old and new bounds on this topic to be derived, but also gives an idea of how some previous results in related areas can be developed.
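For reference, with notation that is assumed here rather than taken from the paper, the index in question can be written as follows: for a simple connected graph G on n vertices with adjacency matrix A and diagonal degree matrix D, the normalized Laplacian and the index are

```latex
\mathcal{L} \;=\; D^{-1/2}\,(D - A)\,D^{-1/2},
\qquad
0 = \mu_n < \mu_{n-1} \le \cdots \le \mu_1 \le 2,
\qquad
s_{\alpha}(G) \;=\; \sum_{i=1}^{n-1} \mu_i^{\,\alpha},
\quad \alpha \in \mathbb{R}\setminus\{0\}.
```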

Applying Spanning Tree Graph Theory for Automatic Database Normalization

In the knowledge and data engineering field, the relational database is the standard repository for storing real-world data and has been used around the world for decades. Normalization is the most important process in the analysis and design of relational databases. It aims at creating a set of relational tables with minimum data redundancy that preserve consistency and facilitate correct insertion, deletion and modification. Despite its importance, very few algorithms have been developed for use in commercial automatic normalization tools, and normalization is still rarely done automatically rather than manually. Moreover, for today's large and complex databases, doing it manually is even harder. This paper presents a new, fully automated relational database normalization method. It first produces a directed graph and its spanning tree, and then generates the 2NF, 3NF and BCNF normal forms. The benefit of the new algorithm is that it can cope with a large set of complex functional dependencies.
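The paper's graph and spanning-tree construction is not detailed in the abstract; the sketch below shows only the standard building block any such normalizer needs, computing attribute closures from a set of functional dependencies and using them to test whether an attribute set is a key.

```python
def closure(attrs, fds):
    """Attribute closure under a set of functional dependencies.
    fds is a list of (lhs, rhs) pairs of attribute sets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Illustrative schema R(A, B, C, D) with FDs A -> B and B -> CD
fds = [({"A"}, {"B"}), ({"B"}, {"C", "D"})]
all_attrs = {"A", "B", "C", "D"}

print(closure({"A"}, fds))               # {'A', 'B', 'C', 'D'}
print(closure({"A"}, fds) == all_attrs)  # True: A is a key
print(closure({"B"}, fds) == all_attrs)  # False: B is not a key
```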

Spatial Analysis of Tree Composition, Diversity and Richness in the Built-up Areas of the University of Port Harcourt, Nigeria

The study investigated the spatial composition, diversity and richness of trees in the built-up area of the University of Port Harcourt, Nigeria. Four quadrats of 25 m x 25 m were laid randomly in each of the three parks, and inventories of trees ≥ 10 cm girth at breast height were taken and used to calculate species composition, diversity and richness. Results showed that species composition and diversity were highest in Abuja Park, with 134 species and 0.866 respectively, while species richness was highest in Choba Park with a value of 2.496. The correlation between park size (spatial coverage) and species composition was 0.99, while the correlation between park size and species diversity was 0.78. There was a direct relationship between species composition and diversity, while species composition and species richness were inversely related. Rational use of these resources is encouraged.
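The abstract does not state which diversity and richness indices were used; the sketch below computes two common choices (Simpson's diversity and Margalef richness) from an invented quadrat inventory, purely to illustrate the kind of calculation involved.

```python
import math
from collections import Counter

def simpson_diversity(counts):
    """Gini-Simpson diversity index 1 - sum(p_i^2); one common choice,
    not necessarily the one used by the authors."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def margalef_richness(counts):
    """Margalef richness (S - 1) / ln(N), another common field index."""
    s, n = len(counts), sum(counts)
    return (s - 1) / math.log(n)

# Illustrative quadrat inventory: species name -> number of stems
inventory = Counter({"Mangifera indica": 12, "Terminalia catappa": 7,
                     "Elaeis guineensis": 20, "Azadirachta indica": 5})
counts = list(inventory.values())
print("diversity:", round(simpson_diversity(counts), 3))
print("richness:", round(margalef_richness(counts), 3))
```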

Decision Tree Based Scheduling for Flexible Job Shops with Multiple Process Plans

This paper suggests a decision tree based approach for flexible job shop scheduling with multiple process plans, i.e. each job can be processed through alternative operations, each of which can be processed on alternative machines. The main decision variables are: (a) selecting the operation/machine pair; and (b) sequencing the jobs assigned to each machine. As an extension of the priority scheduling approach, which selects the best priority rule combination after many simulation runs, this study suggests an approach in which a decision tree is used to select a priority rule combination suited to a specific system state, so that the burden of developing simulation models and carrying out simulation runs can be eliminated. The decision tree based scheduling approach consists of a construction module and a scheduling module. In the construction module, a decision tree is built using a four-stage algorithm; in the scheduling module, a priority rule combination is selected using the decision tree. To demonstrate the performance of the suggested approach, a case study was carried out on a flexible job shop with reconfigurable manufacturing cells and on a conventional job shop, and the results are reported in comparison with individual priority rule combinations for the objectives of minimizing total flow time and total tardiness.
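The core idea of mapping a system state to a priority rule can be illustrated with a small sketch; the state features, the candidate rules (SPT, EDD, FIFO) and the labels below are invented stand-ins for the paper's four-stage construction procedure and simulation data.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 300  # synthetic "system states"; features and labels are illustrative
states = pd.DataFrame({
    "mean_queue_length": rng.uniform(0, 20, n),
    "machine_utilization": rng.uniform(0.3, 1.0, n),
    "due_date_tightness": rng.uniform(0.5, 2.0, n),
})
# Pretend label: the priority rule that performed best for each state
# in offline runs (candidate rules: SPT, EDD, FIFO)
best_rule = np.where(states["due_date_tightness"] > 1.3, "EDD",
             np.where(states["mean_queue_length"] > 10, "SPT", "FIFO"))

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(states, best_rule)

# At run time, the current shop state is fed to the tree to pick a rule
current = pd.DataFrame([{"mean_queue_length": 14.0,
                         "machine_utilization": 0.9,
                         "due_date_tightness": 0.8}])
print("selected priority rule:", tree.predict(current)[0])
```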