Wireless Neural Stimulator with Adjustable Electrical Quantity

Neural stimulation has been gaining much interest in neuromodulation research and clinical trials. For efficient neuromodulation, there is a need for variable electrical stimulation, encompassing both current and voltage stimuli, as well as for a wireless framework. In this regard, we developed a wireless neural stimulator capable of both voltage and current stimulation. The system consists of a ZigBee wireless communication module and a stimulus generator. The stimulus generator, with 8-bit resolution, produces controllable mono-polar and bi-polar waveforms in both voltage (-3.3 to 3.3 V) and current (-330 to 330 µA) stimulation modes. The experimental results suggest that the proposed neural stimulator can serve as an effective approach for neuromodulation.

A New Automatic System of Cell Colony Counting

Counting cell colonies is a long and laborious process that depends on the judgment and ability of the operator, and the operator's judgment can vary with fatigue. Moreover, since this activity is time consuming, it limits the number of dishes that can be used in each experiment. For these reasons, an automatic cell colony counting system is needed. This article introduces a new automatic counting system based on the processing of digital images of cell colonies grown on Petri dishes. The system is mainly based on a region-growing algorithm for recognizing the regions of interest (ROI) in the image and a Sanger neural network for characterizing those regions. The best final classification is obtained with a Feed-Forward Neural Network (FF-NN) and compared against K-Nearest Neighbour (K-NN) and Linear Discriminant Function (LDF) classifiers. Preliminary results are presented.

Behavior of Cu-WC-Ti Metal Composite After Planetary Ball Milling

Copper-based composites reinforced with WC and Ti particles were prepared using a planetary ball mill. The experiment was designed using the Taguchi technique, and milling was carried out in air for several hours. The powders were characterized before and after milling using SEM, TEM and X-ray diffraction to examine the microstructure and to detect possible new phases. The microstructures show that the milled particle size and the degree of particle size reduction depend on many parameters. The interplanar spacing d between atomic planes was estimated from X-ray powder diffraction data and TEM images. X-ray diffraction patterns of the milled powder did not clearly show any new peak or energy shift, but the TEM images show a significant change in the crystalline structure caused by the incorporation of titanium in the composites.

Underwater Interaction of 1064 nm Laser Radiation with Metal Target

The dynamics of the interaction between 1064 nm laser radiation and a metal target in water was studied using a Mach-Zehnder interference technique. A mechanism for generating the well-developed evaporation regime of a metal surface together with a spherical shock wave in water is proposed. Critical NIR intensities for well-developed evaporation of silver and gold targets were determined. The dynamics of the shock waves was investigated at early (tens of nanoseconds) and late (hundreds of nanoseconds) time delays. The transparent expanding plasma-vapor-compressed-water object was visualized and measured. The thickness of the compressed water layer and the pressures behind the shock wave front at later time delays were obtained from optical processing of the interferograms.

Mixed Convection Boundary Layer Flow from a Vertical Cone in a Porous Medium Filled with a Nanofluid

The steady mixed convection boundary layer flow from a vertical cone in a porous medium filled with a nanofluid is numerically investigated using different types of nanoparticles, namely Cu (copper), Al2O3 (alumina) and TiO2 (titania). The governing equations are reduced to ordinary differential equations, and the resulting boundary value problem is solved using a shooting technique. Results of interest for the local Nusselt number are evaluated for various values of the constant mixed convection parameter and the nanoparticle volume fraction parameter. It is found that dual solutions exist for a certain range of the mixed convection parameter.
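To illustrate the shooting technique mentioned above (a generic sketch, not the authors' code; the simplified ODE and boundary conditions are placeholders rather than the paper's similarity equations), the missing initial slope of a two-point boundary value problem can be found by root-finding on the far-boundary mismatch:

```python
# Minimal sketch of the shooting technique for a two-point boundary value problem.
# Hypothetical simplified ODE: f'' = -f * f'  with  f(0) = 0, f(eta_max) = 1
# (placeholder equations; the actual similarity equations are more involved).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ETA_MAX = 10.0  # stand-in for "infinity"

def rhs(eta, y):
    f, fp = y
    return [fp, -f * fp]          # y = [f, f'];  f'' = -f * f'

def boundary_residual(slope_guess):
    # Integrate from eta = 0 with a guessed initial slope and
    # return the mismatch at the far boundary.
    sol = solve_ivp(rhs, [0.0, ETA_MAX], [0.0, slope_guess])
    return sol.y[0, -1] - 1.0     # want f(ETA_MAX) = 1

# Find the slope that satisfies the far-field condition (the shooting step).
slope = brentq(boundary_residual, 0.0, 5.0)
print("converged initial slope f'(0) =", slope)
```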

Tuning of Thermal FEA Using Krylov Parametric MOR for Subsea Application

A dead leg is a typical subsea production system component, and CFD is required to model heat transfer within it. Unfortunately, the CFD solution is time-consuming and thus not suitable for fast prediction or repeated simulations. Therefore, there is a need to create a thermal FEA model that mimics the heat flows and temperatures seen in CFD cool-down simulations. This paper describes the conventional way of tuning such a model and a new automated way using parametric model order reduction (PMOR) together with an optimization algorithm. The tuned FE analyses replicate the steady-state CFD parameters within a maximum heat flow error of 6% and 3% using the manual and PMOR methods, respectively. During cool-down, the relative temperature error of the tuned FEA models with respect to the CFD is below 5%. In addition, the PMOR method obtained the correct FEA setup five times faster than manual tuning.

Secondary Effects on Water Vapor Transport Properties Measured by Cup Method

The cup method is applied worldwide for measuring the water vapor transport properties of porous materials. However, in practical applications the experimental results are often used without taking into account secondary effects that can play an important role under specific conditions. In this paper, the effect of temperature on the water vapor transport properties of cellular concrete is studied, together with the influence of sample thickness. First, the bulk density, matrix density, total open porosity, and sorption and desorption isotherms are measured for material characterization purposes. Then, the steady-state cup method is used to determine the water vapor transport properties, with the measurements performed at several temperatures and for three different sample thicknesses.

Modeling Peer-to-Peer Networks with Interest-Based Clusters

In the world of Peer-to-Peer (P2P) networking, different protocols have been developed to make resource sharing and information retrieval more efficient. The SemPeer protocol is a new layer on top of Gnutella that transforms the connections of the nodes based on semantic information in order to make information retrieval more efficient. However, this transformation causes high clustering in the network, which decreases the number of nodes reached and therefore also the probability of finding a document. In this paper we describe a mathematical model for the Gnutella and SemPeer protocols that captures these clustering-related issues, followed by a proposal to modify the SemPeer protocol to achieve moderate clustering. This modification is a form of link management for individual nodes that makes the SemPeer protocol more efficient, since the probability of a successful query in the P2P network is noticeably increased. To validate the models, we carried out a series of simulations that support our results.

A Metric-Set and Model Suggestion for Better Software Project Cost Estimation

Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure, and you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection plays a vital role in software cost estimation studies, but its importance has been ignored, especially in neural network based studies. In this study we explore the reasons for the disappointing results of such studies and implement different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies that use traditional metrics. To enable comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81) dataset, which is commonly used in previous studies, and the second part was collected according to the new metrics at a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on a Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is that data collection requires time and care. To make more thorough use of the collected samples, k-fold cross-validation is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied successfully in software cost estimation studies.
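As a hedged illustration of how an MLP estimator can be combined with k-fold cross-validation (the data, network size, and value of k below are arbitrary assumptions, not the paper's configuration):

```python
# Illustrative sketch: MLP-based effort estimation with k-fold cross-validation.
# The feature matrix X (project metrics) and target y (effort) are placeholders;
# the network size and k are arbitrary choices for the example.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((63, 5))          # placeholder metrics (size, complexity, ...)
y = rng.random(63) * 100.0       # placeholder effort values (person-months)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
errors = []
for train_idx, test_idx in kf.split(X):
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    errors.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))

print("mean absolute error across folds:", np.mean(errors))
```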

Real-Time Testing of Steel Strip Welds based on Bayesian Decision Theory

One of the main problems in a steel strip manufacturing line is the breakage of any of the welds made between steel coils, which are used to produce the continuous strip to be processed. A weld breakage results in a stop of the manufacturing line lasting several hours: the damage caused by the breakage must be repaired, and after the repair the line must be restarted before production can continue. To minimize this problem, a human operator must inspect each weld visually and manually in order to avoid its breakage during the manufacturing process. The work presented in this paper is based on Bayesian decision theory and presents an approach to detect defective steel strip welds in real time. This approach is based on quantifying the tradeoffs between various classification decisions using probability and the costs that accompany such decisions.
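A minimal sketch of the minimum-risk Bayes decision rule underlying this kind of classifier is shown below; the priors, likelihoods, and cost values are invented for illustration and are not the paper's data:

```python
# Minimal sketch of a minimum-risk Bayes decision between two weld classes.
# Priors, likelihoods, and costs are illustrative placeholders.
import numpy as np

classes = ["good_weld", "defective_weld"]
priors = np.array([0.95, 0.05])                 # P(class)

# Cost matrix: cost[decision, true_class]; missing a defective weld is expensive.
cost = np.array([[0.0, 100.0],                  # decide "good"
                 [1.0,   0.0]])                 # decide "defective"

def decide(likelihoods):
    """likelihoods[i] = p(observation | class i); pick the decision with least expected cost."""
    posteriors = likelihoods * priors
    posteriors /= posteriors.sum()
    expected_cost = cost @ posteriors           # risk of each possible decision
    return classes[int(np.argmin(expected_cost))]

# Example: an observation only slightly more likely under the "good" model
print(decide(np.array([0.6, 0.4])))             # high miss cost pushes toward "defective_weld"
```

Because the cost of missing a defective weld dominates, the rule can flag a weld as defective even when the observation is somewhat more likely under the good-weld model.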

Attacks and Counter Measures in BST Overlay Structure of Peer-To-Peer System

There are various overlay structures that provide efficient and scalable solutions for point and range queries in a peer-to-peer network. An overlay structure based on an m-Binary Search Tree (BST) is one such popular technique. It divides the key space into different key intervals and then assigns the key intervals to the BST. The popularity of the BST makes this overlay structure vulnerable to different kinds of attacks. Here we present four such possible attacks, namely the index poisoning attack, eclipse attack, pollution attack, and SYN flooding attack, and describe how the functionality of the BST is affected by each of them. We also provide different security techniques that can be applied against these attacks.

An Approach for Blind Source Separation using the Sliding DFT and Time Domain Independent Component Analysis

The "cocktail party problem" refers to a well-known human auditory ability: we can recognize a specific sound we want to listen to even when many undesirable sounds or noises are mixed with it. Blind source separation (BSS) based on independent component analysis (ICA) is one of the methods by which a particular signal can be separated from mixed signals under simple assumptions. In this paper, we propose an online approach for blind source separation using the sliding DFT and time-domain independent component analysis. The proposed method reduces computational complexity compared with conventional methods and lends itself to parallel processing, for example on digital signal processors (DSPs). We evaluate this method and demonstrate its effectiveness.
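For reference, a minimal sketch of the sliding DFT recurrence for a single frequency bin (independent of the authors' implementation): each incoming sample updates bin k of an N-point DFT in O(1) instead of recomputing the full transform:

```python
# Minimal sliding DFT sketch: update one N-point DFT bin per incoming sample in O(1).
# Recurrence: X_k(n) = (X_k(n-1) - x(n-N) + x(n)) * exp(j*2*pi*k/N)
import numpy as np

def sliding_dft_bin(signal, N, k):
    """Yield the k-th N-point DFT bin of the most recent window after each sample."""
    w = np.exp(1j * 2 * np.pi * k / N)
    window = np.zeros(N, dtype=complex)   # circular buffer of the last N samples
    X_k = 0.0 + 0.0j
    for n, x in enumerate(signal):
        oldest = window[n % N]            # sample leaving the window (zero at start-up)
        X_k = (X_k - oldest + x) * w
        window[n % N] = x
        if n >= N - 1:                    # window is full, value is valid
            yield X_k

# Cross-check the last sliding value against a direct FFT of the final window.
x = np.random.randn(64)
N, k = 16, 3
last = list(sliding_dft_bin(x, N, k))[-1]
print(np.allclose(last, np.fft.fft(x[-N:])[k]))   # expected: True
```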

A Genetic Algorithm for Clustering on Image Data

Clustering is the process of subdividing an input data set into a desired number of subgroups so that members of the same subgroup are similar and members of different subgroups have diverse properties. Many heuristic algorithms have been applied to the clustering problem, which is known to be NP-hard. Genetic algorithms have been used in a wide variety of fields to perform clustering; however, the technique normally has a long running time relative to the size of the input set. This paper proposes an efficient genetic algorithm for clustering very large data sets, especially image data sets. The genetic algorithm uses time-efficient techniques along with preprocessing of the input data set. We test our algorithm on both artificial and real image data sets, both of large size. The experimental results show that our algorithm outperforms the k-means algorithm in terms of running time as well as the quality of the clustering.
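A compact, hypothetical sketch of a genetic algorithm for clustering is given below; the encoding (a chromosome is a set of k cluster centers), the fitness (negative within-cluster squared error), and all parameter values are illustrative assumptions rather than the algorithm proposed in the paper:

```python
# Illustrative sketch of a genetic algorithm for clustering (not the paper's algorithm).
# A chromosome encodes K cluster centers; fitness is the negative within-cluster
# sum of squared errors. Population size, rates, and data are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((500, 2))               # placeholder for image feature vectors
K, POP, GENS = 3, 20, 50

def fitness(centers):
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return -np.sum(d.min(axis=1) ** 2)    # negative SSE: larger is better

def crossover(a, b):
    mask = rng.random(K) < 0.5            # swap whole centers between parents
    child = a.copy()
    child[mask] = b[mask]
    return child

def mutate(c, rate=0.1, scale=0.05):
    noise = rng.normal(0.0, scale, c.shape)
    return np.where(rng.random(c.shape) < rate, c + noise, c)

population = [data[rng.choice(len(data), K, replace=False)] for _ in range(POP)]
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in population])
    order = np.argsort(scores)[::-1]
    parents = [population[i] for i in order[:POP // 2]]   # truncation selection
    children = [mutate(crossover(parents[rng.integers(len(parents))],
                                 parents[rng.integers(len(parents))]))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best cluster centers:\n", best)
```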

Online Brands: A Comparative Study of World Top Ranked Universities with Science and Technology Programs

University websites are considered one of the primary brand touch points for multiple stakeholders, yet most do not have designs that create favorable impressions. Some of the elements that web designers should carefully consider are appearance, content, functionality, usability, and search engine optimization; priority, however, should be placed on website simplicity and negative space. In terms of content, previous research suggests that universities should include reputation, learning environment, graduate career prospects, destination image, cultural integration, and a virtual tour on their websites. This study examines how the top 200 world-ranked science and technology-based universities present their brands online and whether their websites capture these content dimensions. Content analysis of the websites revealed that the top-ranking universities captured these dimensions to varying degrees. In addition, the UK-based university placed greater priority on website simplicity and negative space than the Malaysian-based university.

Hardware Prototyping of an Efficient Encryption Engine

An approach to developing an FPGA implementation of a flexible-key RSA encryption engine that can be used as a standard device in secure communication systems is presented. The VHDL model of this RSA encryption engine has the unique characteristic of supporting multiple key sizes, so it can easily be fitted into systems that require different levels of security. Simple nested-loop addition and subtraction are used to implement the RSA operation, which makes processing faster and uses a comparatively small amount of space in the FPGA. The hardware design targets the Altera Stratix II family, and it was determined that the flexible-key RSA encryption engine is best suited to the EP2S30F484C3 device. The RSA encryption implementation uses 13,779 logic elements and achieves a clock frequency of 17.77 MHz. It has been verified that this RSA encryption engine can perform 32-bit, 256-bit and 1024-bit encryption operations in less than 41.585 µs, 531.515 µs and 790.61 µs, respectively.
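As a hedged software analogue of the nested-loop addition and subtraction idea (not the paper's VHDL design, and using a toy key far smaller than 1024 bits), RSA can be expressed with multiplication as repeated addition, modular reduction as repeated subtraction, and exponentiation as repeated multiplication:

```python
# Software analogue (not the paper's VHDL) of RSA built from nested-loop
# addition and subtraction: multiply by repeated addition, reduce by repeated
# subtraction, exponentiate by repeated multiplication. Toy key for clarity.
def mod_add(a, b, n):
    s = a + b
    while s >= n:          # modular reduction by subtraction
        s -= n
    return s

def mod_mul(a, b, n):
    result = 0
    for _ in range(b):     # multiplication as repeated addition
        result = mod_add(result, a, n)
    return result

def mod_exp(base, exp, n):
    result = 1 % n
    for _ in range(exp):   # exponentiation as repeated multiplication
        result = mod_mul(result, base, n)
    return result

# Toy RSA parameters (p = 5, q = 11): n = 55, e = 3, d = 27
n, e, d = 55, 3, 27
message = 42
cipher = mod_exp(message, e, n)
print(cipher, mod_exp(cipher, d, n))   # second value recovers the original 42
```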

A Review on Soft Computing Technique in Intrusion Detection System

Intrusion detection systems play a significant role in network security. An intrusion detection system (IDS) detects and identifies intrusive behavior or intrusion attempts in a computer system by monitoring and analyzing network packets in real time. In recent years, with the rapid growth of interest in network security, intelligent algorithms applied in IDSs have attracted increasing attention. An IDS deals with a huge amount of data containing irrelevant and redundant features, which causes slow training and testing, higher resource consumption, and a poor detection rate. Since the amount of audit data that an IDS needs to examine is very large even for a small network, classification by hand is impossible. Hence, the primary objective of this review is to survey the techniques applied prior to the classification process that are suited to IDS data.

Face Image Coding Using Face Prototyping

In this paper we present a novel approach to face image coding. The proposed method makes use of features of video encoders, such as motion prediction. First, the encoder selects an appropriate prototype from the database and warps it according to the features of the face being encoded. The warped prototype is placed as the first frame, an I frame, and the face being encoded is placed as the second frame, a P frame. Information about the feature positions, color change, selected prototype, and the data stream of the P frame is sent to the decoder. The only requirement is that both the encoder and the decoder hold the same database of prototypes. We ran experiments with the H.264 video encoder, and the results were compared with those achieved by JPEG and JPEG2000. The results show that our approach achieves three times lower bitrate and two times higher PSNR in comparison with JPEG. Compared with JPEG2000, the bitrate was very similar, but the subjective quality achieved by the proposed method is better.

A Study of Touching Characters in Degraded Gurmukhi Text

Character segmentation is an important preprocessing step for text recognition. In degraded documents, the presence of touching characters drastically decreases the recognition rate of any optical character recognition (OCR) system. In this paper, a study of touching Gurmukhi characters is carried out, and after a careful analysis these characters are divided into various categories based on the structural properties of the Gurmukhi characters. New algorithms are proposed to segment the touching characters in the middle zone. These algorithms show a reasonable improvement in segmenting touching characters in degraded Gurmukhi script. The algorithms proposed in this paper are applicable only to machine-printed text.

Architecting a Knowledge Theatre

This paper describes the architectural design considerations for building a new class of application, a Personal Knowledge Integrator, and a particular example of it, a Knowledge Theatre. It then supports this description with a scenario of a child acquiring knowledge and of how this process could be augmented by the proposed architecture and design of a Knowledge Theatre. David Merrill's "first principles of instruction" are kept in focus to provide a background against which to view the learning potential.

A Modification on Newton's Method for Solving Systems of Nonlinear Equations

In this paper, we are concerned with the further study of systems of nonlinear equations. Since systems with inaccurate function values or with high computational cost arise frequently in science and engineering, such systems have recently attracted researchers' interest. In this work we present a new method that is independent of function evaluations and has quadratic convergence. This method can be viewed as an extension of some recent methods for solving such systems of nonlinear equations. Numerical results from applying this method to some test problems show the efficiency and reliability of the method.
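For context, a short sketch of the classical Newton iteration for a nonlinear system F(x) = 0, i.e. x_{k+1} = x_k - J(x_k)^{-1} F(x_k); this is the baseline that the proposed method modifies, not the new method itself, and the toy system is invented for illustration:

```python
# Reference sketch of the classical Newton iteration for a nonlinear system
# F(x) = 0 (the baseline being modified, not the paper's proposed method).
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.solve(J(x), fx)   # solve J(x) dx = F(x), then step
    return x

# Toy system: x^2 + y^2 = 4,  x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
print(newton_system(F, J, [2.0, 0.5]))
```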