Classifier Based Text Mining for Neural Network

Text mining, also termed knowledge discovery in text (KDT) or text data mining, applies knowledge discovery techniques to unstructured text. In neural networks that address classification problems, the training set, the testing set and the learning rate are the key elements: the collections of input/output patterns used to train the network and to assess its performance, and the rate at which weight adjustments are made. This paper describes a proposed back-propagation neural network classifier that performs cross-validation on the original neural network in order to optimize classification accuracy and training time. The feasibility and benefits of the proposed approach are demonstrated on five data sets: contact-lenses, cpu, weather symbolic, weather and labor-neg-data. It is shown that, compared to the existing neural network, training is more than 10 times faster when the data set is larger than cpu or the network has many hidden units, while accuracy (percent correct) was the same for all data sets except contact-lenses, the only one with missing attribute values. For contact-lenses, the accuracy of the proposed neural network was on average about 0.3% lower than that of the original neural network. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
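As a rough illustration of this evaluation setup, the following sketch (not the authors' implementation; the data set, network size and learning rate are placeholder assumptions) runs a back-propagation classifier through 10-fold cross-validation with scikit-learn:

```python
# A minimal sketch of evaluating a back-propagation classifier with k-fold
# cross-validation.  The feature matrix X and label vector y are placeholders
# standing in for one of the data sets mentioned above, assumed to be
# numerically encoded (e.g. one-hot encoding of nominal attributes).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))          # placeholder features
y = rng.integers(0, 3, size=150)       # placeholder class labels

clf = MLPClassifier(hidden_layer_sizes=(10,),
                    learning_rate_init=0.3,   # the "learning rate" of the abstract
                    max_iter=500,
                    random_state=0)

scores = cross_val_score(clf, X, y, cv=10)    # 10-fold cross-validation
print("percent correct: %.1f%% (+/- %.1f%%)"
      % (100 * scores.mean(), 100 * scores.std()))
```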

Energy Efficient Resource Allocation in Distributed Computing Systems

The problem of mapping tasks onto a computational grid with the aim of minimizing the power consumption and the makespan, subject to deadline and architectural constraints, is considered in this paper. To solve this problem, we propose a solution from cooperative game theory based on the concept of the Nash Bargaining Solution. The proposed game-theoretic technique is compared against several traditional techniques. The experimental results show that when the deadline constraints are tight, the proposed technique achieves superior performance and reports competitive performance relative to the optimal solution.
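To make the bargaining idea concrete, the following sketch (purely illustrative; it is not the paper's algorithm, and the task workloads, machine speeds, power figures and disagreement point are assumptions) picks the task-to-machine mapping that maximizes the Nash product of the makespan and energy gains:

```python
# Illustrative only: choose a task-to-machine mapping by maximizing the Nash
# product of the two objectives' gains over a disagreement point.
from itertools import product

tasks = [4.0, 6.0, 3.0]            # workload of each task (assumed)
speed = [1.0, 2.0]                 # relative speed of each machine (assumed)
power = [1.0, 3.0]                 # power drawn by each machine while busy (assumed)

def evaluate(mapping):
    """Return (makespan, energy) for a tuple mapping[i] = machine of task i."""
    busy = [0.0] * len(speed)
    energy = 0.0
    for t, m in zip(tasks, mapping):
        run = t / speed[m]
        busy[m] += run
        energy += run * power[m]
    return max(busy), energy

# Disagreement point: the worst makespan and energy over all mappings.
all_maps = list(product(range(len(speed)), repeat=len(tasks)))
worst_mk = max(evaluate(m)[0] for m in all_maps)
worst_en = max(evaluate(m)[1] for m in all_maps)

def nash_product(mapping):
    mk, en = evaluate(mapping)
    return (worst_mk - mk) * (worst_en - en)   # product of utility gains

best = max(all_maps, key=nash_product)
print("best mapping:", best, "->", evaluate(best))
```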

A Relationship between Two Stabilizing Controllers and Its Application to Two-Stage Compensator Design without Coprime Factorizability – Single-Input Single-Output Case –

In this paper, we first show a relationship between two stabilizing controllers, which yields an extended feedback system using two stabilizing controllers. Then, we apply this relationship to two-stage compensator design. We consider single-input single-output plants; on the other hand, we do not assume coprime factorizability of the model. Thus, the results of this paper are based on the factorization approach only, so that they can be applied to numerous linear systems.
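For background, the following is a standard statement (textbook material, not the paper's extended configuration) of what "stabilizing" means for a unity-feedback pair in the factorization setting:

```latex
% Standard unity-feedback loop with plant P and controller C.  C stabilizes P
% when all four closed-loop transfer functions from the external inputs to
% the loop signals are stable:
\[
H(P,C) \;=\;
\begin{bmatrix}
\dfrac{PC}{1+PC} & \dfrac{P}{1+PC}\\[1.5ex]
\dfrac{C}{1+PC}  & \dfrac{1}{1+PC}
\end{bmatrix},
\qquad
C \text{ stabilizes } P \iff \text{every entry of } H(P,C) \text{ is stable.}
\]
```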

Septic B-spline Collocation Method for Solving One-dimensional Hyperbolic Telegraph Equation

Recently, the telegraph equation has been found to be more suitable than the ordinary diffusion equation for modelling reaction-diffusion processes in several branches of science. In this paper, a numerical solution of the one-dimensional hyperbolic telegraph equation based on the collocation method with septic splines is proposed. The scheme works in a fashion similar to finite difference methods. The accuracy of the presented method is demonstrated on two test problems by computing the L2-norm and L∞-norm errors. The numerical results are found to be in good agreement with the exact solutions.
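The abstract does not display the governing equation; a commonly used form of the one-dimensional hyperbolic telegraph equation (stated here for orientation, with assumed coefficient names) is:

```latex
% One-dimensional hyperbolic telegraph equation in a commonly used form,
% with damping coefficient \alpha > 0 and reaction coefficient \beta \ge 0:
\[
\frac{\partial^{2} u}{\partial t^{2}}
  + 2\alpha \frac{\partial u}{\partial t}
  + \beta^{2} u
  = \frac{\partial^{2} u}{\partial x^{2}} + f(x,t),
\qquad a \le x \le b,\; t > 0,
\]
% supplemented with initial conditions u(x,0), u_t(x,0) and boundary
% conditions at x = a and x = b.
```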

Matrix Based Synthesis of EXOR dominated Combinational Logic for Low Power

This paper discusses a new, systematic approach to the synthesis of an NP-hard class of non-regenerative Boolean networks, described by FON[FOFF]={mi}[{Mi}], where for every mj[Mj]∈{mi}[{Mi}] there exists another mk[Mk]∈{mi}[{Mi}] such that their Hamming distance HD(mj, mk)=HD(Mj, Mk)=O(n), where 'n' represents the number of distinct primary inputs. The method automatically ensures exact minimization for certain important self-dual functions with 2^(n-1) points in their one-sets. The elements meant for grouping are determined from a newly proposed weighted incidence matrix. Then the binary value corresponding to the candidate pair is correlated with the proposed binary value matrix to enable direct synthesis. We recommend algebraic factorization operations as a post-processing step to reduce the literal count. The algorithm can be implemented in any high-level language and achieves the best cost optimization for the problem dealt with, irrespective of the number of inputs. For other cases, the method is iterated to subsequently reduce the problem to one of O(n-1), O(n-2), ... and then solved. In addition, it leads to optimal results for problems exhibiting a higher degree of adjacency, with a different interpretation of the heuristic, and the results are comparable with other methods. In terms of literal cost at the technology-independent stage, the circuits synthesized using our algorithm enabled net savings over AOI (AND-OR-Invert) logic, AND-EXOR logic (EXOR Sum-of-Products or ESOP forms) and AND-OR-EXOR logic of 45.57%, 41.78% and 41.78% respectively for the various problems. Circuit-level simulations were performed for a wide variety of case studies at 3.3 V and 2.5 V supply to validate the performance of the proposed method and the quality of the resulting synthesized circuits at two different voltage corners. Power estimation was carried out for a 0.35-micron TSMC CMOS process technology. In comparison with AOI logic, the proposed method enabled mean power savings of 42.46%. With respect to AND-EXOR logic, the proposed method yielded power savings of 31.88%, while in comparison with AND-OR-EXOR networks, average power savings of 33.23% were obtained.
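As a small, purely illustrative sketch of the candidate-pair idea (it does not implement the paper's weighted incidence matrix or binary value matrix; the input count and minterm list are assumptions), the following code pairs minterms of the one-set whose Hamming distance equals n:

```python
# Illustrative only: given the one-set of a Boolean function as n-bit
# minterms, find candidate pairs whose Hamming distance equals n, i.e.
# complementary minterms that an EXOR-oriented grouping would target.
from itertools import combinations

n = 4                                                # number of primary inputs (assumed)
one_set = [0b0000, 0b1111, 0b0011, 0b1100, 0b0101]   # example minterms

def hamming(a, b):
    return bin(a ^ b).count("1")

pairs = [(a, b) for a, b in combinations(one_set, 2) if hamming(a, b) == n]
for a, b in pairs:
    print(f"group m{a:d} ({a:0{n}b}) with m{b:d} ({b:0{n}b})")
```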

Automatic Vehicle Identification by Plate Recognition

Automatic Vehicle Identification (AVI) has many applications in traffic systems (highway electronic toll collection, red light violation enforcement, border and customs checkpoints, etc.). License plate recognition is an effective form of AVI system. In this study, a smart and simple algorithm is presented for a vehicle's license plate recognition system. The proposed algorithm consists of three major parts: extraction of the plate region, segmentation of characters, and recognition of plate characters. For extracting the plate region, edge detection and smearing algorithms are used. In the segmentation part, smearing algorithms, filtering and some morphological operations are used. Finally, statistics-based template matching is used for recognition of the plate characters. The performance of the proposed algorithm has been tested on real images. Based on the experimental results, we note that our algorithm shows superior performance in car license plate recognition.
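A minimal OpenCV sketch of such a three-stage pipeline is given below. It is not the authors' implementation: the input file, thresholds, kernel sizes and the character-template set are placeholder assumptions.

```python
# Minimal sketch of the three stages: plate-region extraction, character
# segmentation, and template-matching recognition (OpenCV 4.x API).
import cv2
import numpy as np

img = cv2.imread("car.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image

# 1) Plate-region extraction: edge detection followed by a "smearing"
#    (morphological closing) step that merges dense vertical edges.
edges = cv2.Canny(img, 100, 200)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 3))
smeared = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(smeared, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
plate = img[y:y + h, x:x + w]

# 2) Character segmentation: binarize and take connected components.
_, binary = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)

# 3) Recognition: correlate each character box with stored templates.
templates = {}  # e.g. {"A": np.ndarray, ...}; assumed to be loaded elsewhere
for i in range(1, num):
    cx, cy, cw, ch, _ = stats[i]
    char_img = binary[cy:cy + ch, cx:cx + cw]
    best = None
    for label, tmpl in templates.items():
        resized = cv2.resize(char_img, (tmpl.shape[1], tmpl.shape[0]))
        score = cv2.matchTemplate(resized, tmpl, cv2.TM_CCOEFF_NORMED)[0, 0]
        if best is None or score > best[0]:
            best = (score, label)
    if best:
        print("recognized:", best[1])
```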

A New Maximum Power Point Tracking for Photovoltaic Systems

In this paper a new maximum power point tracking (MPPT) algorithm for photovoltaic arrays is proposed. The algorithm detects the maximum power point of the PV array, and the computed maximum power is used as a reference value (set point) for the control system. An ON/OFF power controller with a hysteresis band is used to control the operation of a buck chopper such that the PV module always operates at the maximum power computed by the MPPT algorithm. The major difference between the proposed algorithm and other techniques is that it directly controls the power drawn from the PV array. The proposed MPPT has several advantages: simplicity, high convergence speed, and independence of the PV array characteristics. The algorithm is tested under various operating conditions, and the obtained results show that the MPP is tracked even under sudden changes in irradiation level.
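A minimal sketch of an ON/OFF hysteresis power controller of this kind is shown below; it is not the paper's controller, and read_pv_power(), set_switch() and the band width are hypothetical names and values.

```python
# Illustrative ON/OFF power controller with a hysteresis band: the measured
# PV power is driven toward the reference (maximum) power from the MPPT stage
# by switching a buck chopper on and off.

def hysteresis_controller(p_ref, p_meas, switch_on, band=2.0):
    """Return the new switch state given reference and measured power."""
    if p_meas < p_ref - band / 2:        # below the band: turn the chopper ON
        return True
    if p_meas > p_ref + band / 2:        # above the band: turn it OFF
        return False
    return switch_on                     # inside the band: keep previous state

# Illustrative closed-loop step (read_pv_power/set_switch are hypothetical):
# switch_on = hysteresis_controller(p_ref=120.0, p_meas=read_pv_power(),
#                                   switch_on=switch_on)
# set_switch(switch_on)
```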

Modified Fuzzy PID Control for Networked Control Systems with Random Delays

To deal with random delays in networked control systems (NCS), a modified fuzzy PID controller is introduced in this paper to implement real-time control adaptively. By adjusting the control signal dynamically, the system performance is improved. The paper presents the design process and the resulting simulation results. Finally, examples and corresponding comparisons demonstrate the effectiveness of this method.
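Purely as an illustration of delay-adaptive control-signal adjustment (the paper uses a fuzzy rule base; the linear scaling, gains and delay bound below are assumptions), a discrete PID controller whose output is attenuated as the measured network delay grows might look like this:

```python
# Illustrative only: a discrete PID controller whose output is scaled down
# as the measured network-induced delay increases.  The paper's modified
# fuzzy PID uses a fuzzy rule base instead of this linear scaling.

class DelayAwarePID:
    def __init__(self, kp, ki, kd, dt, max_delay):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt, self.max_delay = dt, max_delay
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, delay):
        # Scale factor in (0, 1]; larger delay -> gentler control action.
        scale = max(0.1, 1.0 - delay / self.max_delay)
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return scale * (self.kp * error
                        + self.ki * self.integral
                        + self.kd * derivative)

# pid = DelayAwarePID(kp=2.0, ki=0.5, kd=0.1, dt=0.01, max_delay=0.2)
# u = pid.update(setpoint=1.0, measurement=y, delay=measured_delay)
```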

Trust Management for Pervasive Computing Environments

Trust is essential for the further and wider acceptance of contemporary e-services. It was first addressed almost thirty years ago in the Trusted Computer System Evaluation Criteria standard by the US DoD, but this and other approaches proposed in that period were actually addressing security. Roughly ten years ago, methodologies followed that addressed the trust phenomenon at its core; they were based on Bayesian statistics and its derivatives, while some approaches were based on game theory. However, trust is a manifestation of judgment and reasoning processes. It has to be dealt with in accordance with this fact and adequately supported in cyber environments. On the basis of results in the field of psychology and our own findings, a methodology called qualitative algebra has been developed, which deals with so far overlooked elements of the trust phenomenon. It complements existing methodologies and provides a basis for a practical technical solution that supports the management of trust in contemporary computing environments. Such a solution is presented at the end of this paper.

Using Serious Games to Improve the Preparation of Pre-Service Teachers in Bulgaria

This paper presents the outcomes of a qualitative study which aims to investigate the pedagogical potential of serious games in the preparation of future teachers. The authors discuss the existing problems and barriers associated with the organization of teaching practices in Bulgaria as part of pre-service teacher training, as well as the attitudes and perceptions of the interviewed academics, teachers and trainees concerning the integration of serious games into the teaching practicum. The study outcomes strongly confirm the positive attitudes of the respondents towards the introduction of virtual learning environments for the development of professional skills of future teachers as a supplement to the traditional forms of education. The inclusion of serious games is expected to improve the quality of practical training of pre-service teachers by overcoming many of the problems identified in the existing teaching practices. The outcomes of the study will inform the design of the educational simulation software which is part of the project SimAula Tomorrow's Teachers Training.

Application of Reliability Prediction Model Adapted for the Analysis of the ERP System

This paper presents the possibilities of using the Weibull statistical distribution to model the distribution of defects in ERP systems. A case study follows which examines helpdesk records of defects reported as the result of one ERP subsystem upgrade. The applied modeling describes the reliability of the ERP system from the user perspective, with estimated parameters such as the expected maximum number of defects in one day or the predicted minimum number of defects between two upgrades. The applied measurement-based analysis framework proves suitable for predicting future states of the reliability of the observed ERP subsystems.
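As a rough sketch of this kind of reliability fitting (not the paper's analysis; the defect timestamps below are illustrative placeholders), a Weibull distribution can be fitted to the times between reported defects with SciPy:

```python
# Illustrative Weibull fit to the times between helpdesk defect reports
# following an upgrade.
import numpy as np
from scipy.stats import weibull_min

# Days (since the upgrade) on which helpdesk defects were reported (assumed).
report_days = np.array([0.5, 0.8, 1.2, 2.0, 3.1, 4.7, 6.5, 9.0, 12.4, 16.0])
inter_arrival = np.diff(report_days)

# Fit a two-parameter Weibull (location fixed at zero).
shape, loc, scale = weibull_min.fit(inter_arrival, floc=0)
print(f"shape k = {shape:.2f}, scale lambda = {scale:.2f}")

# A shape parameter below 1 suggests a decreasing defect rate (reliability
# growth) after the upgrade; a value above 1 would suggest the opposite.
```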

Challenges of Irrigation Water Supply in Croplands of Arid Regions and their Environmental Consequences – A Case Study in the Dez and Moghan Command Areas of Iran

Renewable water resources are crucial production variables in arid and semi-arid regions where intensive agriculture is practiced to meet the ever-increasing demand for food and fiber. This is especially true for the Dez and Moghan command areas, where water delivery problems and adverse environmental issues are widespread. This paper aims to identify the major problem areas using on-farm surveys of 200 farmers, agricultural extensionists and water suppliers, complemented by secondary data and field observations during the 2010-2011 cultivation season. The SPSS package was used to analyze and synthesize the data. Results indicated inappropriate canal operations in both schemes, though there was no unanimity about the underlying causes. Inequitable and inflexible distribution was found to be rooted in deficient hydraulic structures, particularly in the main and secondary canals. The inadequacy and inflexibility of the water scheduling regime were the underlying causes of recurring pest and disease spread, which often led to declines in crop yield and quality. Although these findings were not disputed, the water suppliers were not prepared to link them to deficiencies in the operation of the main and secondary canals; they rather attributed them to the prevailing salinity, alkalinity, water table fluctuations and leaching of valuable agro-chemical inputs from the plants' root zone, with far-reaching consequences. Examples include the pollution of ground and surface water resources due to over-irrigation at the farm level, which falls under the growers' own responsibility. Poor irrigation efficiency and adverse environmental problems were attributed to deficient and outdated farming practices that were in turn rooted in poor extension programs and irrational water charges.

Problems of Innovative Economy: Forming of «Innovative Society» and Innovative Receptivity

Today many countries have ambitious goals of long-term and continuous development: constant growth of competitiveness, maintenance of a high standard of living for the population, and leadership in the world market. One of the best possible ways of achieving these goals is the transition to an innovative economy. The paper presents an analysis of the problems of forming receptivity to innovations and creating an «innovative society». Creating an innovative culture in society and raising the prestige of innovative activity are the best ways of developing innovative processes. The analysis is based on a comparison of Russia with several developed countries according to the level of selected indicators of innovative activity.

Data Gathering Protocols for Wireless Sensor Networks

Sensor network applications are often data-centric and involve collecting data from a set of sensor nodes to be delivered to various consumers. Typically, nodes in a sensor network are resource-constrained, and hence the algorithms operating in these networks must be efficient. There may be several algorithms available implementing the same service, and efficiency considerations may require a sensor application to choose the best-suited algorithm. In this paper, we present a systematic evaluation of a set of algorithms implementing the data gathering service. We propose a modular infrastructure for implementing such algorithms in TOSSIM with separate configurable modules for various tasks such as interest propagation, data propagation, aggregation, and path maintenance. By appropriately configuring these modules, we propose a number of data gathering algorithms, each of which incorporates a different set of heuristics for optimizing performance. We have performed comprehensive experiments to evaluate the effectiveness of these heuristics, and we present results from our experimentation efforts.
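As a toy illustration of the aggregation task mentioned above (it is not one of the paper's TOSSIM modules; the tree topology, readings and the SUM aggregate are assumptions), in-network aggregation along a gathering tree can be sketched as:

```python
# Illustrative in-network aggregation of sensor readings along a
# data-gathering tree rooted at the sink (node 0).
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}   # node -> parent (assumed tree)
reading = {1: 7.0, 2: 3.5, 3: 1.0, 4: 4.5, 5: 2.0}

def aggregate(node, children):
    """Recursively combine a node's reading with its children's partial sums."""
    total = reading.get(node, 0.0)
    for child in children.get(node, []):
        total += aggregate(child, children)
    return total

children = {}
for node, p in parent.items():
    children.setdefault(p, []).append(node)

print("value delivered to the sink:", aggregate(0, children))
```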

Grouping and Indexing Color Features for Efficient Image Retrieval

Content-based image retrieval (CBIR) aims at searching image databases for specific images that are similar to a given query image, based on matching of features derived from the image content. This paper focuses on a low-dimensional color-based indexing technique for achieving efficient and effective retrieval performance. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique. The cluster (region) mode is then used as the representative of the image in 3-D color space. The feature descriptor consists of the representative color of a region and is indexed using a spatial indexing method based on the R*-tree, thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. Alternatively, the images in the database are clustered based on region feature similarity using Euclidean distance, and only the representative (centroid) features of these clusters are indexed using the R*-tree, thus improving efficiency. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color. The results of these methods are compared. A Java-based query engine supporting query-by-example is built to retrieve images by color.
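A minimal sketch of the color-feature extraction step (not the paper's system; the image path, pixel sampling and bandwidth are placeholder assumptions) using scikit-learn's mean shift implementation:

```python
# Illustrative extraction of representative colors with mean shift: each
# cluster mode serves as a region's 3-D color descriptor, which would then
# be inserted into the spatial index (R*-tree) for retrieval.
import numpy as np
from PIL import Image
from sklearn.cluster import MeanShift

pixels = np.asarray(Image.open("query.jpg").convert("RGB"), dtype=float)
pixels = pixels.reshape(-1, 3)
sample = pixels[np.random.default_rng(0).choice(len(pixels), 2000, replace=False)]

ms = MeanShift(bandwidth=25.0)   # bandwidth in RGB units (assumed)
ms.fit(sample)

print("representative colors:\n", ms.cluster_centers_.astype(int))
```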

Design of Composite Risers for Minimum Weight

The use of composite materials in offshore engineering for deep-sea oil production riser systems has drawn considerable interest due to the potential weight savings and improvement in durability. The design of composite risers consists of two stages: (1) local design based on critical local load cases, and (2) global analysis of the full-length composite riser under global loads and assessment of critical locations. In the first stage, eight different material combinations were selected and their laminate configurations optimised under local load considerations. Stage two includes a final local stress analysis of the critical sections of the riser under the combined loads determined in the global analysis. This paper describes two design methodologies for composite risers of minimum structural weight and shows that the use of off-angle fibre orientations, in addition to axial and hoop reinforcements, offers substantial weight savings while ensuring the structural capacity.

Work Structuring and the Feasibility of Application to Construction Projects in Vietnam

Design should be viewed concurrently in three ways: as transformation, flow and value generation. An innovative approach to solving design-related problems is integrated product-process design. As a foundation for a formal framework consisting of organizing principles and techniques, Work Structuring has been developed to guide integration efforts that enhance the development of operation and process design in alignment with product design. Construction projects in Vietnam face many delays and cost overruns caused mostly by design-related problems. Better design management that integrates product and process design could resolve these problems. A questionnaire survey and in-depth interviews were used to investigate the feasibility of applying Work Structuring to construction projects in Vietnam. The purpose of this paper is to present the research results and to illustrate the possible problems and potential solutions when Work Structuring is implemented in construction projects in Vietnam.

A Fast Cyclic Reduction Algorithm for A Quadratic Matrix Equation Arising from Overdamped Systems

We are concerned with a class of quadratic matrix equations arising from the overdamped mass-spring system. By exploring the structure of coefficient matrices, we propose a fast cyclic reduction algorithm to calculate the extreme solutions of the equation. Numerical experiments show that the proposed algorithm outperforms the original cyclic reduction and the structure-preserving doubling algorithm.
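The abstract does not display the equation; in the overdamped mass-spring setting the quadratic matrix equation and its overdamping condition are commonly written as follows (standard background, stated here for orientation):

```latex
% Quadratic matrix equation associated with an overdamped mass-spring system:
\[
A X^{2} + B X + C = 0 ,
\]
% where A, B and C are the symmetric mass, damping and stiffness matrices,
% and the associated quadratic eigenvalue problem
\[
Q(\lambda)\,x = \bigl(\lambda^{2} A + \lambda B + C\bigr)x = 0
\]
% is overdamped, i.e. the condition
% $(x^{\mathsf T} B x)^{2} > 4\,(x^{\mathsf T} A x)(x^{\mathsf T} C x)$
% holds for every nonzero vector x.
```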

Prestressed Concrete Girder Bridges Using Large 0.7 Inch Strands

The National Bridge Inventory (NBI) includes more than 600,000 bridges within the United States of America. Prestressed concrete girder bridges represent one of the most widely used bridge systems. The majority of these girder bridges were constructed using 0.5 and 0.6 inch diameter strands. The main impediments to using larger strand diameters are: 1) lack of prestress bed capacities, 2) lack of structural knowledge regarding the transfer and development length of larger strands, and 3) the possibility of developing wider end zone cracks upon strand release. This paper presents a study about using 0.7 inch strands in girder fabrication. Transfer and development length were evaluated, and girders were fabricated using 0.7 inch strands at different spacings. Results showed that 0.7 inch strands can be used at 2.0 inch spacing without violating the AASHTO LRFD Specifications, while attaining superior performance in shear and flexure.

Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise

The Support Vector Machine (SVM) is a statistical learning tool developed from the concept of structural risk minimization (SRM). In this paper, SVM is applied to signal detection in communication systems in the presence of channel noise in various environments, in the form of Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive colored Gaussian noise (ACGN). The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these advanced stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to a conventional optimal model-based detector for binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter-, and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially in the low SNR (signal-to-noise ratio) range. For large SNR, the performance of the SVM was similar to that of the classical detectors; however, the convergence between SVM and maximum likelihood detection occurred at a higher SNR as the noise environment became more hostile.
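As a toy illustration of SVM-based symbol detection (a simplified AWGN-only setting, not the paper's multi-environment detector; the SNR, sample counts and single-sample feature are assumptions), the sketch below trains an SVM on noisy BPSK samples and compares its BER with a simple threshold (matched-filter) decision:

```python
# Illustrative SVM detection of BPSK symbols in additive white Gaussian noise,
# with the bit error rate compared against the optimal threshold decision.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
snr_db = 4.0
noise_std = 10 ** (-snr_db / 20)       # unit symbol energy assumed

def make_data(n):
    bits = rng.integers(0, 2, size=n)
    symbols = 2.0 * bits - 1.0         # BPSK mapping {0,1} -> {-1,+1}
    received = symbols + noise_std * rng.normal(size=n)
    return received.reshape(-1, 1), bits

X_train, y_train = make_data(2000)
X_test, y_test = make_data(20000)

svm = SVC(kernel="rbf", C=1.0)
svm.fit(X_train, y_train)
ber_svm = np.mean(svm.predict(X_test) != y_test)

# Reference: the optimal threshold (matched-filter) decision for this channel.
ber_mf = np.mean((X_test.ravel() > 0).astype(int) != y_test)
print(f"SVM BER = {ber_svm:.4f}, matched-filter BER = {ber_mf:.4f}")
```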