A Framework for Product Development Process including HW and SW Components

This paper proposes a framework for the development of products that include both hardware and software components. It provides a separation of hardware-dependent software, modifications of the current product development process, and the integration of software modules with existing product configuration models and assembly product structures. To identify the hardware-dependent software, the framework considers product configuration modules and engineering changes of the associated software and hardware components. To support efficient integration of the two different development streams, hardware and software, a modified product development process is proposed. The process integrates the dependent software development into product development through the interchange of specific product information. By using existing product data models in Product Data Management (PDM), the framework represents software as modules for product configurations and as software parts for the product structure. The framework is applied to the development of a robot system in order to show its effectiveness.

Maximum Norm Analysis of a Nonmatching Grids Method for Nonlinear Elliptic Boundary Value Problem −Δu = f(u)

We provide a maximum norm analysis of a finite element Schwarz alternating method for a nonlinear elliptic boundary value problem of the form −Δu = f(u) on two overlapping subdomains with nonmatching grids. We consider a domain which is the union of two overlapping subdomains, each of which has its own independently generated grid. Since the two meshes are mutually independent on the overlap region, a triangle belonging to one triangulation does not necessarily belong to the other. Under a Lipschitz assumption on the nonlinearity, we establish, on each subdomain, an optimal L∞ error estimate between the discrete Schwarz sequence and the exact solution of the boundary value problem.
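For orientation only (a standard textbook formulation, not reproduced from the paper, and assuming a homogeneous Dirichlet condition on the outer boundary ∂Ω), the continuous alternating Schwarz iteration that the discrete Schwarz sequence mimics on the two overlapping subdomains Ω₁ and Ω₂ can be written as

```latex
\begin{aligned}
-\Delta u_1^{\,n+1} &= f\!\left(u_1^{\,n+1}\right) \text{ in } \Omega_1,
 & u_1^{\,n+1} &= u_2^{\,n} \text{ on } \partial\Omega_1 \cap \Omega_2,
 & u_1^{\,n+1} &= 0 \text{ on } \partial\Omega_1 \cap \partial\Omega,\\
-\Delta u_2^{\,n+1} &= f\!\left(u_2^{\,n+1}\right) \text{ in } \Omega_2,
 & u_2^{\,n+1} &= u_1^{\,n+1} \text{ on } \partial\Omega_2 \cap \Omega_1,
 & u_2^{\,n+1} &= 0 \text{ on } \partial\Omega_2 \cap \partial\Omega,
\end{aligned}
```

starting from an initial guess u₂⁰; in the discrete version each subproblem is solved on the subdomain's own triangulation, with the boundary data on the overlap interpolated from the other mesh.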

Heterogeneous Attribute Reduction in Noisy Systems Based on a Generalized Neighborhood Rough Sets Model

Neighborhood Rough Sets (NRS) have been proven to be an efficient tool for heterogeneous attribute reduction. However, most research has focused on dealing with complete and noiseless data. In practice, most information systems are noisy, i.e., filled with incomplete and inconsistent data. In this paper, we introduce a generalized neighborhood rough sets model, called VPTNRS, to deal with the problem of heterogeneous attribute reduction in noisy systems. We generalize the classical NRS model with a tolerance neighborhood relation and probabilistic theory. Furthermore, we use the neighborhood dependency to evaluate the significance of a subset of heterogeneous attributes and construct a forward greedy algorithm for attribute reduction based on it. Experimental results show that the model deals with noisy data efficiently.
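As a rough illustration (not the authors' code), a forward greedy reduction driven by a dependency measure typically has the following shape; `dependency` is an assumed placeholder standing in for the VPTNRS neighborhood dependency function:

```python
def greedy_reduct(attributes, dependency, eps=1e-6):
    """Forward greedy attribute reduction.

    attributes : list of candidate attribute ids
    dependency : callable mapping a set of attributes to a score in [0, 1]
                 (placeholder for the VPTNRS neighborhood dependency)
    eps        : stop when the best attribute adds less than eps dependency
    """
    reduct = set()
    current = dependency(reduct)
    while True:
        # pick the attribute whose addition increases dependency the most
        best_attr, best_gain = None, 0.0
        for a in set(attributes) - reduct:
            gain = dependency(reduct | {a}) - current
            if gain > best_gain:
                best_attr, best_gain = a, gain
        if best_attr is None or best_gain < eps:
            break  # no remaining attribute improves dependency enough
        reduct.add(best_attr)
        current += best_gain
    return reduct
```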

Determination of Moisture Content and Liquid Limit of Foundation Soils Using Microwave Radiation at Different Locations of Sulaimani Governorate, Kurdistan Region, Iraq

Soils are normally dried in either a convection oven or on a stove. Laboratory moisture content testing indicated that the typical drying duration for a convection oven is 24 hours. The purpose of this study was to determine the accuracy and soil drying duration for both moisture content and liquid limit determination using microwave radiation. The soils were tested with both convection and microwave ovens. The convection oven was considered to produce the true values for both the natural moisture content and the liquid limit of the soils; it was therefore used as the basis of comparison for the results of the microwave ovens. The samples used in this study were obtained from different projects of the Consulting Engineering Bureau of the College of Engineering of Sulaimani University. These samples were collected from different locations and at different depths and consist mostly of brown and light brown clay and silty clay. A total of 102 samples were prepared; 26 of them were tested for natural moisture content determination, while the other 76 were used for liquid limit determination.
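For reference, the quantity compared between the two drying methods is the standard gravimetric moisture content,

```latex
w \;=\; \frac{M_{\text{wet}} - M_{\text{dry}}}{M_{\text{dry}}} \times 100\,\%,
```

where M_wet is the mass of the sample before drying and M_dry its mass after drying; the microwave and convection ovens differ only in how (and how quickly) M_dry is reached.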

Electrical Performance of a Solid Oxide Fuel Cell Unit with Non-Uniform Inlet Flow and High Fuel Utilization

This study investigates the electrical performance of a planar solid oxide fuel cell unit with a cross-flow configuration when the fuel utilization becomes high and the fuel inlet flow is non-uniform. A software package developed in this study solves the two-dimensional, simultaneous partial differential equations of mass, energy, and electro-chemistry, without considering variation along the stack direction. The results show that the fuel utilization increases as the molar flow rate decreases, while the average current density decreases when the molar flow rate drops. In addition, the non-uniform inlet Pattern A induces a more pronounced non-reaction area at the corner between the fuel exit and the air inlet. This non-reaction area reduces the average current density and thereby degrades the electrical performance by about 7%.

Coping with the Rapidity of Information Technology Changes – A Comparison Review on Current Practices

Information technology managers nowadays face tremendous pressure to plan, implement, and adopt new technology solutions owing to the rapidity of technology changes. Given the lack of studies on this topic, the aim of this paper is to provide a comparison review of the tools currently being used to respond to technological changes. The study is based on an extensive literature review of published works, the majority of which range from 2000 to the first part of 2011. The works were gathered from journals, books, and other information sources available on the Web. The findings show that each tool has a different focus and that none of the tools provides a holistic framework covering the technical, people, process, and business environment aspects. Hence, this result provides useful information about the currently available tools that IT managers could use to manage changes in technology. Further, the result reveals a research gap in the area, as industry is short of such a framework.

Development of NOx Emission Model for a Tangentially Fired Acid Incinerator

This paper aims to develop a NOx emission model of an acid gas incinerator using Nelder-Mead least squares support vector regression (LS-SVR). The Malaysian DOE is actively imposing the Clean Air Regulation to mandate the installation of analytical instrumentation, known as a Continuous Emission Monitoring System (CEMS), to report emission levels online to the DOE. As a hardware-based analyzer, a CEMS is expensive, maintenance intensive, and often unreliable. Therefore, a software-based predictive technique is often preferred and considered a feasible alternative to the CEMS for regulatory compliance. The LS-SVR model is built from the emissions of an acid gas incinerator operating in an LNG complex. Simulated Annealing (SA) is first used to determine the initial hyperparameters, which are then further optimized, based on the performance of the model, using the Nelder-Mead simplex algorithm. The LS-SVR model is shown to outperform a benchmark model based on backpropagation neural networks (BPNN) on both training and testing data.
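The SA-plus-simplex tuning step can be pictured with the following sketch (our own illustration, not the authors' code); `cv_error` stands for an assumed LS-SVR cross-validation routine, and the hyperparameter layout is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def refine_hyperparameters(cv_error, x0):
    """Refine LS-SVR hyperparameters with the Nelder-Mead simplex.

    cv_error : callable returning the validation error for log-scaled
               hyperparameters [log_gamma, log_sigma] (placeholder for
               an LS-SVR cross-validation routine)
    x0       : starting point, e.g. the best point found by simulated
               annealing in a preceding coarse search
    """
    result = minimize(cv_error, x0, method="Nelder-Mead",
                      options={"xatol": 1e-3, "fatol": 1e-3})
    log_gamma, log_sigma = result.x
    return np.exp(log_gamma), np.exp(log_sigma)
```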

A New Algorithm for Cluster Initialization

Clustering is a very well-known technique in data mining. One of the most widely used clustering techniques is the k-means algorithm. Solutions obtained from this technique depend on the initialization of the cluster centers. In this article we propose a new algorithm to initialize the clusters. The proposed algorithm is based on finding a set of medians extracted from the dimension with maximum variance. The algorithm has been applied to different data sets and good results were obtained.
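One way to read this procedure (a sketch under our own interpretation, not the authors' code) is to pick the dimension with maximum variance, sort the data along it, split the sorted samples into k groups, and take the median point of each group as an initial center:

```python
import numpy as np

def init_centers(X, k):
    """Pick k initial cluster centers from the maximum-variance dimension.

    X : (n_samples, n_features) data matrix
    k : number of clusters
    """
    d = np.argmax(X.var(axis=0))          # dimension with maximum variance
    order = np.argsort(X[:, d])           # sort samples along that dimension
    groups = np.array_split(order, k)     # k roughly equal-sized groups
    # the point at the median position of each group becomes an initial center
    return np.array([X[g[len(g) // 2]] for g in groups])
```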

A Forward Automatic Censored Cell-Averaging Detector for Multiple Target Situations in Log-Normal Clutter

A challenging problem in radar signal processing is to achieve reliable target detection in the presence of interference. In this paper, we propose a novel algorithm for the automatic censoring of radar interfering targets in log-normal clutter. The proposed algorithm, termed the forward automatic censored cell averaging detector (F-ACCAD), consists of two steps: removing the corrupted reference cells (censoring) and the actual detection. Both steps are performed dynamically by using a suitable set of ranked cells to estimate the unknown background level and set the adaptive thresholds accordingly. The F-ACCAD algorithm requires neither prior information about the clutter parameters nor the number of interfering targets. The effectiveness of the F-ACCAD algorithm is assessed by computing, using Monte Carlo simulations, the probability of censoring and the probability of detection in different background environments.
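A heavily simplified skeleton of how such rank-based censoring followed by detection might be organized is sketched below; this is our own illustration, the threshold factors are assumed inputs (in the paper they are derived analytically from the log-normal statistics and the design probabilities), and the details of F-ACCAD are not reproduced:

```python
import numpy as np

def accad_sketch(reference_cells, test_cell, t_censor, t_detect, p):
    """Simplified forward censoring + detection sketch (log domain).

    reference_cells : 1-D array of reference cell powers
    test_cell       : power of the cell under test
    t_censor, t_detect : threshold scale factors (assumed given)
    p : size of the initial, presumably clean, ranked subset
    """
    z = np.sort(np.log(reference_cells))        # ranked cells, log domain
    clean = list(z[:p])                         # start from the p smallest
    for v in z[p:]:                             # forward pass over the rest
        mu, sigma = np.mean(clean), np.std(clean)
        if v <= mu + t_censor * sigma:          # consistent with clutter level
            clean.append(v)
        else:
            break                               # censor this and all larger cells
    mu, sigma = np.mean(clean), np.std(clean)
    return np.log(test_cell) > mu + t_detect * sigma   # detection decision
```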

Experimental Analysis of the Electrical and Photometric Performance of Commercially Available Integrated Compact Fluorescent Lamps

Lighting upgrades involve relatively low costs, which allows the benefits to be spread more widely than is possible with any other energy efficiency measure. In order to popularize the adoption of CFLs in Taiwan, the authority proposes to implement a new comparative labelling system for energy efficient lamps. The current study was accordingly undertaken to investigate the factors affecting the performance, and the deviation between actual and labeled performance, of commercially available integrated CFLs. In this paper, standard test methods to determine the electrical and photometric performance of CFLs were developed based on CIE 84-1989 and IEC 60901-1987, and 55 CFLs selected from the market were tested. The results show that CFLs with higher color temperature achieve lower efficacy. It was also noticed that most CFL packaging lacks Color Rendering Index information, and no correlation between price and performance of the CFLs was found in this work. The results of this paper may help consumers make more informed CFL-purchasing decisions.

Analysis of Island Tourists' Destination Information Sources and Service Satisfaction

The purpose of this study is to analyze island tourists' travel information sources and their satisfaction with destination services. Questionnaires were administered, using convenience sampling, to tourists from Taiwan engaging in tourism activities in the Penghu Islands; a total of 889 valid questionnaires were collected. After statistical analysis, this study found that: 1. the main travel information source for tourists to the Penghu Islands is "friends and family who have visited Penghu"; 2. among the services of the outlying Penghu islands, tourists rate "friendly local residents" the highest; 3. demographic variables affect tourists' travel information sources and service satisfaction. Based on these findings, the study offers operational suggestions for Penghu's tourism industry and its governing authority, as well as suggestions for future research by other researchers.

Low Complexity Multi Mode Interleaver Core for WiMAX with Support for Convolutional Interleaving

A hardware-efficient, multi-mode, re-configurable architecture of an interleaver/de-interleaver for multiple standards, such as DVB, WiMAX and WLAN, is presented. Interleavers consume a large part of the silicon area when implemented with conventional methods, since they use memories to store permutation patterns. In addition, different types of interleavers in different standards cannot share hardware due to their different construction methodologies. The novelty of the work presented in this paper is threefold: 1) mapping of the vital types of interleavers, including the convolutional interleaver, onto a single architecture with the flexibility to change the interleaver size; 2) reduction of the hardware complexity for channel interleaving in WiMAX by using a 2-D realization of the interleaver functions; and 3) reduction of silicon cost overheads by avoiding the use of small memories. The proposed architecture consumes 0.18 mm² of silicon area in a 0.12 µm process and can operate at a frequency of 140 MHz. The reduced complexity helps minimize memory utilization while providing strong support for on-the-fly computation of permutation patterns.
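The 2-D idea can be pictured with a plain block interleaver, where data is written row-wise into an R × C array and read out column-wise; this is a generic illustration of the principle, not the exact WiMAX permutation:

```python
def block_interleave(data, rows, cols):
    """Write `data` row-wise into a rows x cols array, read it out column-wise."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(data, rows, cols):
    """Inverse operation: write column-wise, read row-wise."""
    assert len(data) == rows * cols
    return [data[c * rows + r] for r in range(rows) for c in range(cols)]

# round-trip check
bits = list(range(12))
assert block_deinterleave(block_interleave(bits, 3, 4), 3, 4) == bits
```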

Systems and Software Safety and Security

Security issues, and the importance of the police in providing practical and psychological security in the community, have been major topics among researchers, the police, and security circles. This subject requires a review and analysis of mechanisms within the police and of their interaction with other parts of the system for providing community safety. This paper examines national and social security on the Internet.

Handling Mobility using Virtual Grid in Static Wireless Sensor Networks

Querying a data source and routing data towards the sink become a serious challenge in static wireless sensor networks if the sink and/or the data source are mobile. Often, the event to be observed either moves or spreads across a wide area, making the maintenance of a continuous path between source and sink a challenge. Also, the sink can move while a query is being issued or data is on its way towards it. In this paper, we extend our previously proposed Grid Based Data Dissemination (GBDD) scheme, a virtual-grid-based topology management scheme that restricts the impact of the movement of sink(s) and event(s) to some specific cells of the grid. This obviates the need for frequent path modifications and hence maintains a continuous flow of data while minimizing the network energy consumption. Simulation experiments show significant improvements in network energy savings and in the average packet delay for a packet to reach the sink.
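As a trivial illustration of the virtual-grid idea (not GBDD itself), nodes can map their coordinates to a cell identifier so that movement of a sink or event only triggers updates when a cell boundary is crossed:

```python
def cell_id(x, y, cell_size):
    """Map a node position to its virtual grid cell (column, row)."""
    return (int(x // cell_size), int(y // cell_size))

def crossed_cell_boundary(old_pos, new_pos, cell_size):
    """True if a moving sink/event has left its previous cell, i.e. the
    only case in which dissemination paths need to be updated."""
    return cell_id(*old_pos, cell_size) != cell_id(*new_pos, cell_size)
```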

Implementation of Watch Dog Timer for Fault Tolerant Computing on Cluster Server

In today's new technology era, clusters have become a necessity for modern computing and data applications, since many applications take a long time (even days or months) to compute. Although parallelization speeds up computation, the time required for many applications can still be substantial. Thus, the reliability of the cluster becomes a very important issue, and the implementation of a fault tolerance mechanism becomes essential. The difficulty of designing a fault tolerant cluster system increases with the variety of possible failures. The key requirement is that an algorithm which handles a simple failure in the system must also tolerate more severe failures. In this paper, we implement a watchdog timer in a parallel environment to take care of failures. The implementation of this simple algorithm helps handle different types of failures; consequently, we find that the reliability of the cluster improves.
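A minimal sketch of the watchdog idea in a cluster setting (illustrative only; the paper's implementation targets a specific parallel environment and its details are not reproduced here):

```python
import threading
import time

class Watchdog:
    """Minimal watchdog timer: if no heartbeat arrives within `timeout`
    seconds, the failure handler is invoked (e.g. to restart a task on
    another cluster node)."""

    def __init__(self, timeout, on_failure):
        self.timeout, self.on_failure = timeout, on_failure
        self.last_beat = time.monotonic()
        threading.Thread(target=self._watch, daemon=True).start()

    def heartbeat(self):
        """Called periodically by the monitored process."""
        self.last_beat = time.monotonic()

    def _watch(self):
        while True:
            time.sleep(self.timeout / 4)
            if time.monotonic() - self.last_beat > self.timeout:
                self.on_failure()        # node/process assumed to have failed
                self.last_beat = time.monotonic()
```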

A Modified AES Based Algorithm for Image Encryption

With the fast evolution of digital data exchange, the security of information becomes increasingly important in data storage and transmission. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. In this paper, we analyze the Advanced Encryption Standard (AES), and we add a key stream generator (A5/1, W7) to AES to improve the encryption performance, mainly for images characterized by reduced entropy. The implementation of both techniques has been realized for experimental purposes. Detailed results in terms of security analysis and implementation are given. A comparative study with traditional encryption algorithms shows the superiority of the modified algorithm.
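A minimal sketch of the idea, assuming the keystream is simply XOR-ed with the pixel data before the AES pass (the exact combination used in the paper is not reproduced here); a toy LFSR stands in for A5/1 or W7, and the example uses the pycryptodome package:

```python
from Crypto.Cipher import AES   # pycryptodome

def lfsr_keystream(seed, n_bytes, taps=(0, 1, 3, 5)):
    """Toy 16-bit LFSR keystream standing in for A5/1 or W7 (illustration
    only; seed must be non-zero)."""
    state, out = seed & 0xFFFF, bytearray()
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):
            bit = 0
            for t in taps:
                bit ^= (state >> t) & 1
            state = ((state >> 1) | (bit << 15)) & 0xFFFF
            byte = (byte << 1) | (state & 1)
        out.append(byte)
    return bytes(out)

def encrypt_image(pixels, aes_key, seed):
    """XOR the pixel bytes with a keystream, then apply block-wise AES.

    pixels  : raw image bytes
    aes_key : 16-, 24-, or 32-byte AES key
    """
    ks = lfsr_keystream(seed, len(pixels))
    mixed = bytes(p ^ k for p, k in zip(pixels, ks))
    mixed += b"\x00" * (-len(mixed) % 16)          # pad to the AES block size
    return AES.new(aes_key, AES.MODE_ECB).encrypt(mixed)
```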

Seismic Alert System based on Artificial Neural Networks

We address the problem of creating a seismic alert system, based on artificial neural networks trained with the well-known back-propagation and genetic algorithms, in order to issue an alarm to the population of a specific city about an imminent earthquake of magnitude greater than 4.5 on the Richter scale, thereby helping to avoid disasters and human losses. Instead of using the propagating wave, we employ the magnitude of the earthquake to establish a correlation between the magnitudes recorded in a monitored area and those observed in the city where we want to issue the alarm. To measure the accuracy of the proposed method, we use a database provided by CIRES, which contains the records of 2500 earthquakes from the State of Guerrero and Mexico City. In particular, we applied the proposed method to issue warnings in Mexico City using the magnitudes recorded in the State of Guerrero.
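As a rough sketch of the regression setup (placeholder data and an assumed feature layout; the genetic-algorithm training used in the paper is not shown), a gradient-trained feed-forward network mapping magnitudes recorded in the monitored area to the expected magnitude at the target city could look like:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: magnitudes recorded at stations in the monitored area (assumed layout),
# y: corresponding magnitude observed at the target city (placeholder data)
X = np.random.uniform(3.0, 7.5, size=(2500, 4))
y = X.mean(axis=1) - np.random.uniform(0.0, 0.8, 2500)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000)
model.fit(X, y)                        # gradient-based (back-propagation) training

ALERT_THRESHOLD = 4.5                  # alarm level from the abstract
predicted = model.predict(X[:1])[0]
if predicted > ALERT_THRESHOLD:
    print(f"ALERT: expected magnitude {predicted:.1f}")
```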

Fast Depth Estimation with Filters

Fast depth estimation from binocular vision is often desired for autonomous vehicles, but most algorithms cannot easily be put into practice because of their high computational cost. We present an image-processing technique that can quickly estimate a depth image from a pair of binocular images. By finding the lines that represent the best-matched areas in the disparity space image, the depth can be estimated. An edge-emphasizing filter is used when detecting these lines, and the final depth estimate is produced after a smoothing filter. Our method is a compromise between local methods and global optimization.
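As a rough stand-in for this pipeline (OpenCV's standard block matcher replaces the paper's specific line search in the disparity space image), the sequence of edge-emphasizing pre-filtering, matching, and smoothing might look like:

```python
import cv2

def fast_depth(left_gray, right_gray):
    """Disparity from a rectified grayscale stereo pair, then smoothing.

    StereoBM applies an x-Sobel edge-emphasizing pre-filter by default,
    and a median filter smooths the resulting disparity map.
    """
    bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = bm.compute(left_gray, right_gray).astype('float32') / 16.0
    return cv2.medianBlur(disparity, 5)   # depth is proportional to 1/disparity
```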

The Heat and Mass Transfer Phenomena in Vacuum Membrane Distillation for Desalination

The vacuum membrane distillation (VMD) process can be used for water purification or the desalination of salt water. The process simply consists of a flat-sheet hydrophobic microporous PTFE membrane and a diaphragm vacuum pump, without a condenser or trap for water recovery. The feed was an aqueous NaCl solution. The VMD experiments were performed to evaluate the heat and mass transfer coefficients of the boundary layer in a membrane module. Only two operating parameters, the feed inlet temperature and the feed flow rate, were investigated. The permeate flux was strongly affected by the feed inlet temperature, the feed flow rate, and the boundary layer heat transfer coefficient. Lowering the temperature polarization coefficient is essential to enhance the process performance considerably, and maximizing the heat transfer coefficient maximizes the mass flux of distillate water. In this paper, the results of the VMD experiments are used to measure the boundary layer heat transfer coefficient, and the experimental results are used to re-evaluate the empirical constants in the Dittus-Boelter equation.
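For reference, the Dittus-Boelter correlation in its usual textbook form relates the Nusselt, Reynolds, and Prandtl numbers as

```latex
\mathrm{Nu} \;=\; \frac{h\, d_h}{k} \;=\; C\,\mathrm{Re}^{0.8}\,\mathrm{Pr}^{\,n},
\qquad C = 0.023,\quad n = 0.4\ (\text{heating}),\ n = 0.3\ (\text{cooling}),
```

where h is the convective heat transfer coefficient, d_h the hydraulic diameter, and k the fluid thermal conductivity; the paper re-estimates the empirical constants from the VMD boundary layer measurements.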

Parallel Double Splicing on Iso-Arrays

Image synthesis is an important area in image processing, and various systems have been proposed in the literature to synthesize images. In this paper, we propose a bio-inspired system to synthesize images and, to study the generating power of the system, we define the class of languages it generates. In this paper we refer to images as arrays, and we use a primitive called an iso-array to synthesize an image/array. The operation is double splicing on iso-arrays. The double splicing operation is used in DNA computing, and we use it here to synthesize images. A comparison of the family of languages generated by the proposed self-restricted double splicing systems on iso-arrays with the existing family of local iso-picture languages is made. Certain closure properties, such as union, concatenation and rotation, are studied for the family of languages generated by the proposed model.