IMDC: An Image-Mapped Data Clustering Technique for Large Datasets

In this paper, we present a new algorithm for clustering data in large datasets using image processing approaches. First, the dataset is mapped onto a binary image plane. The synthesized image is then processed with efficient image processing techniques to cluster the data in the dataset. Hence, the algorithm avoids an exhaustive search to identify clusters: it considers only a small subset of the data that contains the critical boundary information sufficient to identify the contained clusters. Compared to available data clustering techniques, the proposed algorithm produces results of similar quality while outperforming them in execution time and storage requirements.
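
As an illustration of the general idea only (not the authors' IMDC implementation), the sketch below maps 2D points onto a binary occupancy image and labels connected pixel regions as clusters; the grid resolution (bins=64) and the example data are assumptions.

```python
# Minimal sketch of image-mapped clustering, assuming 2D data and a
# user-chosen grid resolution; this is not the authors' IMDC algorithm.
import numpy as np
from scipy import ndimage

def image_mapped_clusters(points, bins=64):
    """Map points onto a binary image and label connected regions."""
    # Rasterize the dataset into a 2D occupancy (binary) image.
    hist, xedges, yedges = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
    binary = hist > 0

    # Connected-component labelling groups adjacent occupied pixels.
    labels, n_clusters = ndimage.label(binary)

    # Assign each point the label of the pixel it falls into.
    xi = np.clip(np.digitize(points[:, 0], xedges) - 1, 0, bins - 1)
    yi = np.clip(np.digitize(points[:, 1], yedges) - 1, 0, bins - 1)
    return labels[xi, yi], n_clusters

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(0, 0.3, (200, 2)),
                      rng.normal(3, 0.3, (200, 2))])
    point_labels, k = image_mapped_clusters(data)
    print("clusters found:", k)
```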

Cross-Industry Innovations – Systematic Identification and Adaptation

Due to today's fierce competition, companies have to be proactive creators of the future by effectively developing innovations. Radical innovations in particular allow high profit margins – but they also entail high risks. One way to realize radical innovations while reducing the risk of failure is cross-industry innovation (CII). CII brings together problems and solution ideas from different industries. However, there is a lack of systematic approaches to CII. Bridging this gap, the present paper provides a systematic approach to planned CII. Starting with the analysis of potentials, the definition of promising search strategies is crucial. Subsequently, identified solution ideas need to be assessed. For the most promising ones, the adaptation process has to be planned systematically, taking the company's risk affinity into account. The introduced method is illustrated with a project from the furniture industry.

Knowledge Management in Cross-Organizational Networks as Illustrated by One of the Largest European ICT Associations: A Case Study of “METORA”

In networks, it is mainly small and medium-sized businesses that benefit from the knowledge, experience and solutions offered by experts from industry and science, or from the exchange with practitioners. Associations which focus, among other things, on networking, information and knowledge transfer, and which are interested in supporting such cooperation, are especially well suited to provide such networks and the appropriate web platforms. Using METORA as an example – a project developed and run by the Federal Association for Information Economy, Telecommunications and New Media e.V. (BITKOM) for the Federal Ministry of Economics and Technology (BMWi) – this paper discusses how associations and other network organizations can achieve this task and what conditions they have to consider.

Modeling and Visualizing Seismic Wave Propagation in Elastic Medium Using Multi-Dimension Wave Digital Filtering Approach

A novel PDE solver using the multidimensional wave digital filtering (MDWDF) technique to obtain the solution of a 2D seismic wave system is presented. In essence, the continuous physical system, represented by a linear Kirchhoff circuit, is transformed into an equivalent discrete dynamic system implemented as an MD wave digital filtering (MDWDF) circuit. This amounts to numerically approximating the differential equations that describe the elements of an MD passive electronic circuit by grid-based difference equations, implemented through the so-called state quantities within the passive MDWDF circuit. The digital model can therefore track the wave field on a dense 3D grid of points. Details of how to transform the continuous system into the desired discrete passive system are addressed. In addition, initial and boundary conditions are properly embedded into the MDWDF circuit in terms of state quantities. Graphic results clearly demonstrate physical effects of seismic wave (P-wave and S-wave) propagation, including radiation, reflection, and refraction from and across hard boundaries. A comparison between the MDWDF technique and the finite difference time domain (FDTD) approach is also made in terms of computational efficiency.
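
The abstract compares MDWDF with FDTD; for orientation, a minimal FDTD-style update for the 2D scalar wave equation is sketched below with illustrative grid, wave-speed and time-step values. It shows only the reference scheme, not the MDWDF circuit itself.

```python
# Minimal 2D scalar-wave FDTD reference sketch (leapfrog in time,
# centred differences in space); illustrative parameters, not the
# MDWDF circuit described in the paper.
import numpy as np

nx = ny = 200          # grid points
dx = 10.0              # spatial step [m]
c = 2000.0             # wave speed [m/s]
dt = 0.4 * dx / c      # time step satisfying the CFL condition
nt = 300

u_prev = np.zeros((nx, ny))
u = np.zeros((nx, ny))
u[nx // 2, ny // 2] = 1.0   # initial point disturbance (source)

r2 = (c * dt / dx) ** 2
for _ in range(nt):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    u_next = 2.0 * u - u_prev + r2 * lap
    u_next[0, :] = u_next[-1, :] = 0.0   # rigid (hard) boundaries
    u_next[:, 0] = u_next[:, -1] = 0.0
    u_prev, u = u, u_next

print("peak amplitude after propagation:", float(np.abs(u).max()))
```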

Near-Field Robust Adaptive Beamforming Based on Worst-Case Performance Optimization

The performance of adaptive beamforming degrades substantially in the presence of steering vector mismatches. This degradation is especially severe in the near field, because the three-dimensional source location is more difficult to estimate than the two-dimensional direction of arrival of the far-field case. As a solution, a novel near-field robust adaptive beamforming (RABF) approach is proposed in this paper. It is a natural extension of traditional far-field RABF and belongs to the class of diagonal loading approaches, with the loading level determined by worst-case performance optimization. However, unlike methods that solve for the optimal loading by iteration, a simple closed-form solution is suggested here after some approximations, and consequently the optimal weight vector can be expressed in closed form. Besides simplicity and low computational cost, the proposed approach reveals how different factors affect the optimal loading as well as the weight vector. Its excellent performance in the near field is confirmed via a number of numerical examples.
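
The diagonal-loading structure referred to above can be sketched as follows: the loaded MVDR weight vector is w = (R + eps*I)^(-1) a / (a^H (R + eps*I)^(-1) a). In this sketch the loading level eps is a fixed assumed value; the paper's closed-form worst-case loading is not reproduced.

```python
# Minimal diagonal-loading (loaded MVDR) beamformer sketch; the loading
# level `eps` is a placeholder, not the paper's closed-form worst-case value.
import numpy as np

def loaded_mvdr_weights(R, a, eps):
    """w = (R + eps*I)^{-1} a / (a^H (R + eps*I)^{-1} a)."""
    Rl = R + eps * np.eye(R.shape[0])
    Ra = np.linalg.solve(Rl, a)
    return Ra / (a.conj() @ Ra)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = 8                                                  # number of sensors
    a = np.exp(1j * rng.uniform(0, 2 * np.pi, M))          # presumed (mismatched) steering vector
    snapshots = (rng.standard_normal((M, 200)) +
                 1j * rng.standard_normal((M, 200)))
    R = snapshots @ snapshots.conj().T / 200               # sample covariance
    w = loaded_mvdr_weights(R, a, eps=10.0)
    print("array output power:", float(np.real(w.conj() @ R @ w)))
```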

An Innovational Intermittent Algorithm in Networks-On-Chip (NOC)

Every day, human life involves new equipment that is more automatic and more capable, so the need for faster processors shows no sign of ending. Despite new architectures and higher frequencies, a single processor is not adequate for many applications. Parallel processing and networks are earlier solutions to this problem. The newer solution of putting a network of resources on a chip is called a network-on-chip (NoC). The most common topology for NoC is the mesh. There are several routing algorithms suitable for this topology, such as XY, fully adaptive, etc. In this paper we suggest a new algorithm named Intermittent X,Y (IX/Y). We have implemented the new algorithm in a simulation environment to compare its delay and power consumption with those of earlier algorithms.
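
For context, the baseline deterministic XY routing mentioned above can be sketched as follows; the proposed IX/Y algorithm itself is not reproduced here.

```python
# Minimal sketch of deterministic XY routing on a 2D mesh NoC:
# route fully along X first, then along Y. The paper's IX/Y variant
# is not reproduced here.
def xy_route(src, dst):
    """Return the list of (x, y) hops from src to dst under XY routing."""
    x, y = src
    dx_step = 1 if dst[0] > x else -1
    dy_step = 1 if dst[1] > y else -1
    path = [(x, y)]
    while x != dst[0]:          # X-dimension first
        x += dx_step
        path.append((x, y))
    while y != dst[1]:          # then Y-dimension
        y += dy_step
        path.append((x, y))
    return path

if __name__ == "__main__":
    print(xy_route((0, 0), (3, 2)))
    # [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
```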

Performance Assessment of Computational Grid on Weather Indices from HOAPS Data

Long-term rainfall analysis and prediction is a challenging task, especially in the modern world where the impact of global warming is creating complications in environmental issues. These data-intensive factors require high-performance computational modeling for accurate prediction. This research paper describes a prototype which is designed and developed in a grid environment using a number of coupled software infrastructural building blocks. This grid-enabled system provides the demanded computational power, efficiency, resources, a user-friendly interface, secure job submission and high throughput. The results obtained using sequential execution and grid-enabled execution show that computational performance improved by 36% to 75% for a decade of climate parameters. The large variation in performance can be attributed to the varying degree of computational resources available for job execution. Grid computing enables the dynamic runtime selection, sharing and aggregation of distributed and autonomous resources, which plays an important role not only in business but also in scientific and social applications. This research paper explores grid-enabled computing capabilities on weather indices from HOAPS data for climate impact modeling and change detection.

Routing Load Analysis over 802.11 DCF of Reactive Routing Protocols DSR and DYMO

The Mobile Ad-hoc Network (MANET) is a collection of self-configuring and rapidly deployed mobile nodes (routers) without any central infrastructure, and routing is one of its key issues. Many routing protocols have been reported, but it is difficult to decide which one is best across all scenarios. In this paper, the on-demand routing protocols DSR and DYMO, running over the IEEE 802.11 DCF MAC protocol, are examined and a summary of their characteristics is presented. Their performance is analyzed and compared using the metrics throughput, packets dropped due to unavailability of routes, duplicate RREQs generated during route discovery, and normalized routing load, while varying the CBR data traffic load in the QualNet 5.0.2 network simulator.
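
The normalized routing load and the other metrics named above can be computed as in the following sketch, where the counters are hypothetical stand-ins for the statistics a simulator such as QualNet would report.

```python
# Hedged sketch of the performance metrics named above, computed from
# hypothetical counters standing in for simulator output.
def manet_metrics(data_bits_delivered, sim_time_s,
                  routing_packets_sent, data_packets_delivered,
                  packets_dropped_no_route, duplicate_rreqs):
    return {
        "throughput_bps": data_bits_delivered / sim_time_s,
        # Normalized routing load: control packets transmitted per
        # data packet successfully delivered.
        "normalized_routing_load": routing_packets_sent / data_packets_delivered,
        "drops_no_route": packets_dropped_no_route,
        "duplicate_rreqs": duplicate_rreqs,
    }

if __name__ == "__main__":
    print(manet_metrics(data_bits_delivered=4.2e6, sim_time_s=300,
                        routing_packets_sent=1800, data_packets_delivered=950,
                        packets_dropped_no_route=50, duplicate_rreqs=120))
```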

n-Butanol as an Extractant for Lactic Acid Recovery

Extraction of lactic acid from aqueous solution using n-butanol as an extractant was studied. The effects of mixing time, pH of the aqueous solution, initial lactic acid concentration, and the volume ratio between the organic and the aqueous phase were investigated. The distribution coefficient and the degree of lactic acid extraction were found to increase when the pH of the aqueous solution was decreased; this pH effect was especially pronounced at pH values below 1. The initial lactic acid concentration and the organic-to-aqueous volume ratio appeared to have a positive effect on the distribution coefficient and the degree of extraction. Because n-butanol is partially miscible with water, incorporation of the aqueous solution into the organic phase was observed in extractions with a large organic-to-aqueous volume ratio.
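
The two reported quantities follow the standard single-stage liquid-liquid extraction definitions; the sketch below computes them for hypothetical concentrations and volumes and does not reproduce the equilibrium data of this specific system.

```python
# Standard definitions of distribution coefficient and degree of
# extraction for a single-stage liquid-liquid extraction; the numbers
# are hypothetical, not measured values from the paper.
def distribution_coefficient(c_org, c_aq):
    """D = equilibrium acid concentration in organic phase / aqueous phase."""
    return c_org / c_aq

def degree_of_extraction(D, v_org, v_aq):
    """Fraction of acid transferred to the organic phase: E = D*r/(1+D*r), r = V_org/V_aq."""
    r = v_org / v_aq
    return D * r / (1.0 + D * r)

if __name__ == "__main__":
    D = distribution_coefficient(c_org=0.12, c_aq=0.30)   # mol/L, hypothetical
    E = degree_of_extraction(D, v_org=2.0, v_aq=1.0)
    print(f"D = {D:.2f}, extraction = {100*E:.1f} %")
```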

An Anomaly Detection Approach to Detect Unexpected Faults in Recordings from Test Drives

In the automotive industry, test drives are conducted during the development of new vehicle models or as part of the quality assurance of series-production vehicles. The communication on the in-vehicle network, data from external sensors, and internal data from the electronic control units are recorded by automotive data loggers during the test drives. The recordings are used for fault analysis. Since the resulting data volume is tremendous, manually analysing each recording in great detail is not feasible. This paper proposes using machine learning to support domain experts by sparing them from sifting through irrelevant data and instead pointing them to the relevant parts of the recordings. The underlying idea is to learn the normal behaviour from available recordings, i.e. a training set, and then to autonomously detect unexpected deviations and report them as anomalies. The one-class support vector machine “support vector data description” (SVDD) is utilised to calculate distances of feature vectors. SVDDSUBSEQ is proposed as a novel approach that allows subsequences in multivariate time series data to be classified. The approach detects unexpected faults without modelling effort, as shown by experimental results on recordings from test drives.
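
A hedged sketch of the underlying idea is given below, using scikit-learn's OneClassSVM as a stand-in for the SVDD formulation and a simple sliding window over the multivariate signals; window length, step and nu are assumptions, and this is not the authors' SVDDSUBSEQ implementation.

```python
# Sketch: learn "normal" behaviour from training recordings and score
# sliding-window subsequences of a new recording. OneClassSVM is used
# as a stand-in for SVDD; window length and nu are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

def windows(ts, length=20, step=5):
    """Flatten overlapping subsequences of a multivariate time series."""
    return np.array([ts[i:i + length].ravel()
                     for i in range(0, len(ts) - length + 1, step)])

rng = np.random.default_rng(0)
normal_recording = rng.normal(0, 1, (2000, 3))      # training set (normal behaviour)
test_recording = rng.normal(0, 1, (500, 3))
test_recording[200:220] += 6.0                      # injected anomaly

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(windows(normal_recording))

scores = model.decision_function(windows(test_recording))
print("most anomalous window index:", int(np.argmin(scores)))
```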

SOA Embedded in BPM: A High Level View of Object Oriented Paradigm

The design and development of information systems have undergone a variety of phases and stages. These variations have evolved due to rapid changes in user requirements and business needs. To meet these requirements and needs, a flexible and agile business solution was required to keep up with the latest business trends and styles. Another obstacle to the agility of information systems has been the different treatment given to what are essentially the same concerns: business processes and information services. Since the emergence of information technology, business processes and information systems have become counterparts, yet these two halves of the business have been treated under entirely different standards. There is a need to streamline the boundaries of these two pillars, which equally share an information system's burdens and liabilities. In the last decade, object orientation has evolved into one of the major solutions for modern business needs, and SOA is now the solution for moving business onto an electronic platform. BPM is another modern business solution that helps to systematize the optimization of business processes. This paper discusses how object orientation can be applied to incorporate or embed SOA in BPM for improved information systems.

A Tutorial on Dynamic Simulation of DC Motor and Implementation of Kalman Filter on a Floating Point DSP

With inexpensive 32-bit floating-point digital signal processors now available on the market, many computationally intensive algorithms such as the Kalman filter have become feasible to implement in real time. Dynamic simulation of a self-excited DC motor using a second-order state-variable model and implementation of a Kalman filter on the floating-point DSP TMS320C6713 are presented in this paper, with the objective of introducing and implementing such an algorithm for beginners. A fractional-hp DC motor is simulated both in Matlab® and on the DSP, and the results are included. A step-by-step approach to simulating the DC motor in Matlab® and to the C routines in CC Studio® is also given. CC Studio® project file details and environment setup requirements are addressed. This tutorial can be used with the 6713 DSK, which is based on a floating-point DSP, and CC Studio in either hardware or simulation mode.
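
For readers without the DSP toolchain at hand, the same idea can be sketched in a few lines: a second-order state-variable DC motor model (states: armature current and speed) discretized with an Euler step, plus a textbook Kalman filter. All motor parameters and noise covariances below are illustrative assumptions, not the values used in the paper or on the TMS320C6713.

```python
# Hedged sketch: second-order DC motor state model [current, speed]
# with a discrete Kalman filter estimating the states from a noisy
# speed measurement. Parameters are illustrative, not the paper's.
import numpy as np

# Assumed motor parameters (fractional-hp scale, hypothetical)
R, L = 2.0, 0.05          # armature resistance [ohm], inductance [H]
K = 0.1                   # torque / back-EMF constant
J, b = 0.02, 0.002        # inertia [kg m^2], viscous friction
dt, V = 1e-3, 24.0        # time step [s], applied voltage [V]

# Continuous model x' = A x + B u, x = [i, w], discretized by Euler
A = np.array([[-R / L, -K / L], [K / J, -b / J]])
B = np.array([[1.0 / L], [0.0]])
F = np.eye(2) + dt * A                 # discrete state transition
G = dt * B
H = np.array([[0.0, 1.0]])             # only speed is measured

Q = np.diag([1e-4, 1e-4])              # process noise covariance (assumed)
Rm = np.array([[0.5]])                 # measurement noise covariance (assumed)

x_true = np.zeros((2, 1))
x_hat = np.zeros((2, 1))
P = np.eye(2)
rng = np.random.default_rng(0)

for _ in range(2000):
    # Simulate the "true" motor and a noisy speed measurement
    x_true = F @ x_true + G * V
    z = H @ x_true + rng.normal(0, np.sqrt(Rm[0, 0]), (1, 1))

    # Kalman filter: predict, then update
    x_hat = F @ x_hat + G * V
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + Rm
    Kk = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + Kk @ (z - H @ x_hat)
    P = (np.eye(2) - Kk @ H) @ P

print("estimated [current, speed]:", x_hat.ravel())
print("true      [current, speed]:", x_true.ravel())
```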

The Pell Equation x^2 − (k^2 − k)y^2 = 2^t

Let k, t, d be arbitrary integers with k ≥ 2, t ≥ 0 and d = k^2 - k. In the first section we give some preliminaries on the Pell equations x^2 - dy^2 = 1 and x^2 - dy^2 = N, where N is any fixed positive integer. In the second section, we consider the integer solutions of the Pell equations x^2 - dy^2 = 1 and x^2 - dy^2 = 2^t. We give a method for finding the solutions of these equations, and we further derive recurrence relations on the solutions of these equations.
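
For d = k^2 - k the basic facts behind such a method are standard and can be written out explicitly; the following is a brief reminder with the usual verification, not a reproduction of the paper's own derivation.

```latex
% Standard facts for d = k^2 - k, k >= 2 (not the paper's full method).
% The fundamental solution of x^2 - d y^2 = 1 is (x_1, y_1) = (2k-1, 2):
\[
(2k-1)^2 - (k^2-k)\cdot 2^2 = 4k^2 - 4k + 1 - 4k^2 + 4k = 1 .
\]
% Further solutions follow from the usual Pell recurrences
\[
x_{n+1} = (2k-1)\,x_n + 2(k^2-k)\,y_n , \qquad
y_{n+1} = 2\,x_n + (2k-1)\,y_n ,
\]
% and, by the Brahmagupta identity, any solution (x, y) of
% x^2 - d y^2 = 2^t combined with a solution (u, v) of u^2 - d v^2 = 1
% gives a new solution (xu + d y v, \; x v + y u) of x^2 - d y^2 = 2^t.
```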

Zero-Dissipative Explicit Runge-Kutta Method for Periodic Initial Value Problems

In this paper, a zero-dissipative explicit Runge-Kutta method is derived for solving second-order ordinary differential equations with periodic solutions. The phase-lag and dissipation properties of Runge-Kutta (RK) methods are also discussed. The new method has algebraic order three with dissipation of order infinity. The numerical results for the new method are compared with those of an existing method when solving second-order differential equations with periodic solutions using a constant step size.
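
The phase-lag and dissipation properties mentioned above are conventionally defined through the linear test equation; the standard definitions (not the paper's specific derivation) read:

```latex
% Standard definitions used in the phase-lag/dissipation analysis of an
% explicit RK method applied to the test equation y' = i\nu y, with
% stability function R and H = \nu h:
\[
y_{n+1} = R(iH)\, y_n , \qquad
\phi(H) = H - \arg R(iH) , \qquad
d(H)    = 1 - \lvert R(iH) \rvert .
\]
% The method is zero-dissipative (dissipation of order infinity) when
% d(H) \equiv 0, i.e. |R(iH)| = 1 on the interval of periodicity.
```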

Knowledge Representation and Retrieval in Design Project Memory

Knowledge sharing in general, and contextual access to knowledge in particular, still represent a key challenge in the knowledge management framework. Researchers in the semantic web and human-machine interface fields study techniques to enhance this access. For instance, in the semantic web, information retrieval is based on domain ontologies. In human-machine interfaces, keeping track of the user's activity provides elements of the context that can guide access to information. We suggest an approach based on these two key guidelines, whilst avoiding some of their weaknesses. The approach permits a representation of both the context and the design rationale of a project for efficient access to knowledge. In fact, the method consists of an information retrieval environment that, on the one hand, can infer knowledge modeled as a semantic network and, on the other hand, is based on the context and the objectives of a specific activity (the design). The environment we defined can also be used to gather similar project elements in order to build classifications of tasks, problems, arguments, etc. produced in a company. These classifications can show the evolution of design strategies in the company.

Compromise Ratio Method for Decision Making under Fuzzy Environment using Fuzzy Distance Measure

The aim of this paper is to adopt a compromise ratio (CR) methodology for fuzzy multi-attribute single-expert decision-making problems. In this paper, the rating of each alternative is described by linguistic terms, which can be expressed as triangular fuzzy numbers. The compromise ratio method for fuzzy multi-attribute single-expert decision making is considered here by taking a ranking index based on the concept that the chosen alternative should simultaneously be as close as possible to the ideal solution and as far away as possible from the negative-ideal solution. From a logical point of view, the distance between two triangular fuzzy numbers is itself a fuzzy number, not a crisp value. Therefore a fuzzy distance measure, which is itself a fuzzy number, is used here to calculate the difference between two triangular fuzzy numbers. With the help of this fuzzy distance measure, it is shown that the compromise ratio is a fuzzy number, which makes it easier for the decision maker to reach a decision. The computation principle and the procedure of the compromise ratio method are described in detail in this paper. A comparative analysis of the previously proposed compromise ratio method [1] and the newly adopted method is illustrated with two numerical examples.
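
To illustrate the point that the difference of two triangular fuzzy numbers is again a fuzzy number, a generic fuzzy-arithmetic sketch is given below; the specific fuzzy distance measure adopted in the paper is not reproduced.

```python
# Generic triangular-fuzzy-number (TFN) arithmetic sketch illustrating
# that a difference of TFNs is itself a TFN; this is standard fuzzy
# arithmetic, not the paper's specific fuzzy distance measure.
from dataclasses import dataclass

@dataclass
class TFN:
    a: float  # left support
    b: float  # peak (membership 1)
    c: float  # right support

    def __sub__(self, other):
        # (a1, b1, c1) - (a2, b2, c2) = (a1 - c2, b1 - b2, c1 - a2)
        return TFN(self.a - other.c, self.b - other.b, self.c - other.a)

if __name__ == "__main__":
    rating = TFN(5, 7, 9)       # e.g. linguistic term "good"
    ideal = TFN(7, 9, 10)       # e.g. ideal rating
    print(rating - ideal)       # TFN(a=-5, b=-2, c=2): a fuzzy, not crisp, gap
```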

Speech Recognition Using Scaly Neural Networks

This research work is aimed at speech recognition using scaly neural networks. A small vocabulary of 11 words was established first; these words are “word, file, open, print, exit, edit, cut, copy, paste, doc1, doc2”. The chosen words are associated with executing computer functions such as opening a file, printing a text document, cutting, copying, pasting, editing and exiting. The speech is introduced to the computer and then subjected to a feature extraction process using linear prediction coefficients (LPC). These features are used as input to an artificial neural network in speaker-dependent mode. Half of the words are used for training the artificial neural network and the other half for testing the system and for information retrieval. The system consists of three parts: speech processing and feature extraction, training and testing using neural networks, and information retrieval. The retrieval process proved to be 79.5–88% successful, which is quite acceptable considering variations in the surroundings, the state of the speaker, and the microphone type.
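
A minimal sketch of the LPC feature-extraction step (autocorrelation method with Levinson-Durbin recursion) is given below; the frame length and LPC order are assumptions, and the scaly neural network itself is not shown.

```python
# Minimal LPC feature extraction via the autocorrelation method and
# Levinson-Durbin recursion; order and frame length are illustrative,
# and the scaly neural network itself is not shown.
import numpy as np

def lpc(frame, order=10):
    """Return `order` linear prediction coefficients for one speech frame."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):              # Levinson-Durbin recursion
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] += k * a[i - 1::-1][:i]
        err *= (1.0 - k * k)
    return a[1:]                                # prediction coefficients

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_frame = rng.standard_normal(240)       # stand-in for a 30 ms frame at 8 kHz
    print(lpc(fake_frame, order=10))
```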

Evaluation of Evolution Strategy, Genetic Algorithm and their Hybrid on Evolving Simulated Car Racing Controllers

Researchers have been applying artificial and computational intelligence (AI/CI) methods to computer games. In this research field, further studies are required to compare AI/CI methods with respect to each game application. In this paper, we present our experimental results on the comparison of three evolutionary algorithms – evolution strategy (ES), genetic algorithm (GA), and their hybrid – applied to evolving controller agents for the CIG 2007 Simulated Car Racing competition. Our experimental results show that premature convergence of solutions was observed in the case of ES, and that GA outperformed ES in the second half of the generations. Moreover, a hybrid that uses GA first and ES afterwards evolved the best solution among all solutions generated. This result shows the ability of GA to globally search promising areas in the early stage and the ability of ES to locally search the focused area (fine-tuning solutions).
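
The GA-first, ES-next hybrid can be sketched generically as follows on a stand-in fitness function; the car-racing controller encoding and the competition interface are not reproduced, and all population and mutation settings are assumptions.

```python
# Generic sketch of the GA-first / ES-next hybrid on a stand-in fitness
# function (sphere); the actual car-racing controller evaluation is not shown.
import numpy as np

rng = np.random.default_rng(0)
DIM, POP = 10, 20
fitness = lambda x: -np.sum(x * x)          # maximise (stand-in for lap performance)

def ga_step(pop):
    """Tournament selection, uniform crossover and small mutation."""
    new = []
    for _ in range(POP):
        a, b = pop[rng.integers(POP)], pop[rng.integers(POP)]
        p1 = a if fitness(a) > fitness(b) else b
        a, b = pop[rng.integers(POP)], pop[rng.integers(POP)]
        p2 = a if fitness(a) > fitness(b) else b
        mask = rng.random(DIM) < 0.5
        new.append(np.where(mask, p1, p2) + rng.normal(0, 0.1, DIM))
    return new

def es_step(pop, sigma=0.05):
    """(mu+lambda)-style step: mutate everyone, keep the best POP."""
    offspring = [p + rng.normal(0, sigma, DIM) for p in pop]
    return sorted(pop + offspring, key=fitness, reverse=True)[:POP]

pop = [rng.uniform(-5, 5, DIM) for _ in range(POP)]
for _ in range(50):                          # GA phase: global exploration
    pop = ga_step(pop)
for _ in range(50):                          # ES phase: local fine-tuning
    pop = es_step(pop)

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```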

Image Compression with Back-Propagation Neural Network using Cumulative Distribution Function

Image compression using artificial neural networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network. Feedforward networks trained with the back-propagation algorithm, which adopts the method of steepest descent for error minimization, are popular, widely adopted and directly applied to image compression. Various research works are directed towards achieving quick convergence of the network without loss of quality in the restored image. In general, the images used for compression are of different types, such as dark images, high-intensity images, etc. When these images are compressed using a back-propagation network, the network takes a long time to converge. The reason is that the given image may contain a number of distinct gray levels that differ only slightly from those of their neighbouring pixels. If the gray levels of the pixels in an image and their neighbours are mapped in such a way that the difference between a pixel's gray level and those of its neighbours is minimized, then both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function (CDF) is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly.
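
The CDF-based pixel mapping described above is essentially a histogram-equalization-style remapping; a minimal sketch for an 8-bit grayscale image is given below, with the back-propagation compression network itself omitted.

```python
# Minimal sketch of the CDF-based pixel remapping step for an 8-bit
# grayscale image; the back-propagation compression network is omitted.
import numpy as np

def cdf_map(image):
    """Remap gray levels through the image's cumulative distribution function."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                          # normalise to [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[image]                       # apply the lookup table pixel-wise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dark = rng.integers(0, 60, (64, 64), dtype=np.uint8)   # synthetic "dark" image
    mapped = cdf_map(dark)
    print("gray-level range before:", dark.min(), dark.max())
    print("gray-level range after: ", mapped.min(), mapped.max())
```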

Burning Rate Response of Solid Fuels in Laminar Boundary Layer

The transient burning behaviour of solid fuels under an oxidizer gas flow is numerically investigated by analysing the regression-rate responses to sudden and oscillatory variations imposed on the inflow properties. The conjugate problem is treated by simultaneously solving the flow and solid-phase governing equations to compute the fuel regression rate. The advection upstream splitting method is used as the flow computational scheme within a finite-volume method. The ignition phase is fully simulated to obtain the exact initial condition for the response analysis. The results show that the transient burning effects which lead to combustion instabilities and intermittent extinctions can be observed in solid fuels as well as in solid propellants.