Information System Life Cycle: Applications in Construction and Manufacturing

In this paper, we present the information system life cycle and analyze the importance of managing the corporate application portfolio across this life cycle. The approach presented here is not just an extension of the traditional information system development life cycle; it is based on the generic life cycle employed in other contexts, such as manufacturing or marketing. The paper proposes a model of the information system life cycle, built on the assumption that a system has a limited, though extendable, life. The model is also applied in several cases; two examples of the framework's application, in a construction enterprise and in a manufacturing enterprise, are reported here.

The Hardware Implementation of a Novel Genetic Algorithm

This paper presents a novel genetic algorithm, termed the Optimum Individual Monogenetic Algorithm (OIMGA) and describes its hardware implementation. As the monogenetic strategy retains only the optimum individual, the memory requirement is dramatically reduced and no crossover circuitry is needed, thereby ensuring the requisite silicon area is kept to a minimum. Consequently, depending on application requirements, OIMGA allows the investigation of solutions that warrant either larger GA populations or individuals of greater length. The results given in this paper demonstrate that both the performance of OIMGA and its convergence time are superior to those of existing hardware GA implementations. Local convergence is achieved in OIMGA by retaining elite individuals, while population diversity is ensured by continually searching for the best individuals in fresh regions of the search space.
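
For illustration, here is a minimal software sketch in the spirit of the monogenetic strategy described above: only the elite individual is kept, mutation replaces crossover, and periodic restarts search fresh regions of the search space. All parameters and the toy fitness function are hypothetical; the paper's hardware design differs.

```python
import random

# Hypothetical software sketch of a monogenetic search (not the paper's
# hardware OIMGA): keep only the best individual, mutate, restart.
def oimga_sketch(fitness, n_bits=16, inner_iters=200, restarts=20):
    def random_individual():
        return [random.randint(0, 1) for _ in range(n_bits)]

    def mutate(ind):
        child = ind[:]
        i = random.randrange(n_bits)
        child[i] ^= 1            # flip one bit; no crossover is needed
        return child

    best = random_individual()
    for _ in range(restarts):            # fresh region of the search space
        elite = random_individual()
        for _ in range(inner_iters):     # local convergence around the elite
            cand = mutate(elite)
            if fitness(cand) > fitness(elite):
                elite = cand
        if fitness(elite) > fitness(best):
            best = elite                 # only the optimum individual is kept
    return best

# Example: maximize the number of ones in the bit string.
print(oimga_sketch(sum))
```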

Authentication and Data Hiding Using a Reversible ROI-based Watermarking Scheme for DICOM Images

In recent years, image watermarking has become an important research area in data security, confidentiality and image integrity. Many watermarking techniques have been proposed for medical images. However, medical images, unlike most images, require extreme care when embedding additional data within them, because the additional information must not affect image quality or readability. Medical records, electronic or not, are also bound by medical secrecy and must therefore remain confidential. To fulfill those requirements, this paper presents a lossless watermarking scheme for DICOM images. The proposed fragile scheme combines two reversible techniques based on difference expansion for hiding patient data and for protecting the region of interest (ROI), with tamper detection and recovery capability. Patient data are embedded into the ROI, while recovery data are embedded into the region of non-interest (RONI). The experimental results show that the original image can be exactly extracted from the watermarked one when no tampering has occurred. If the ROI is tampered with, the tampered area can be localized and recovered with a high-quality version of the original area.
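
As background, the toy sketch below shows difference expansion (one of the reversible techniques such schemes build on, in Tian's classic form) embedding and then losslessly extracting a single bit from a pixel pair; the pixel values are illustrative, and the paper's ROI/RONI scheme adds tamper detection on top.

```python
# Illustrative sketch of difference expansion: embed one bit into a pixel
# pair reversibly (simplified; overflow handling is omitted here).
def de_embed(x, y, bit):
    l = (x + y) // 2          # integer average
    h = x - y                 # difference
    h2 = 2 * h + bit          # expand the difference and append the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 // 2
    return l + (h + 1) // 2, l - h // 2, bit   # original pair restored

x2, y2 = de_embed(206, 201, 1)
print(de_extract(x2, y2))     # -> (206, 201, 1): lossless recovery
```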

Case Based Reasoning Technology for Medical Diagnosis

Case-based reasoning (CBR) methodology provides a foundation for a new technology for building intelligent computer-aided diagnosis systems. This technology directly addresses problems found in traditional Artificial Intelligence (AI) techniques, e.g. the problems of knowledge acquisition, remembering, robustness and maintenance. This paper discusses the CBR methodology, the research issues and the technical aspects of implementing intelligent medical diagnosis systems. Successful applications in cancer and heart diseases developed by the Medical Informatics Research Group at Ain Shams University are also discussed.

RB-Matcher: String Matching Technique

Text processing systems allow their users to search for a string pattern in a given text. String matching is fundamental to database and text processing applications: every text editor must contain a mechanism to search the current document for arbitrary strings, and spelling checkers scan an input text for words in the dictionary and reject any strings that do not match. Information stored in databases is later retrieved, and this retrieval can be performed using various string matching algorithms. This paper describes a new string matching algorithm for various applications. The new algorithm has been designed with the help of the Rabin-Karp matcher to improve the string matching process.
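
Since the design builds on the Rabin-Karp matcher, a minimal sketch of that classic algorithm may help; the base and modulus below are illustrative choices, and RB-Matcher's own modifications are described in the paper.

```python
# Classic Rabin-Karp matching with a rolling hash (illustrative parameters).
def rabin_karp(text, pattern, base=256, mod=101):
    n, m = len(text), len(pattern)
    if m > n:
        return []
    high = pow(base, m - 1, mod)          # weight of the leading character
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    hits = []
    for s in range(n - m + 1):
        # verify on hash match to rule out spurious collisions
        if p_hash == t_hash and text[s:s + m] == pattern:
            hits.append(s)
        if s < n - m:                      # roll the window by one character
            t_hash = ((t_hash - ord(text[s]) * high) * base
                      + ord(text[s + m])) % mod
    return hits

print(rabin_karp("abracadabra", "abra"))  # -> [0, 7]
```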

Factoring a Polynomial with Multiple Roots

A given polynomial, possibly with multiple roots, is factored into several lower-degree distinct-root polynomials with natural-order integer powers. All the roots of the original polynomial, including multiplicities, may then be obtained by solving these lower-degree distinct-root polynomials instead of the original high-degree multiple-root polynomial directly. The approach requires polynomial Greatest Common Divisor (GCD) computation. A very simple and effective process, “monic polynomial subtraction”, cleverly converted from the “longhand polynomial division” of the Euclidean algorithm, is employed. It requires only simple elementary arithmetic operations without any advanced mathematics. Remarkably, the derived routine gives the expected results for test polynomials of very high degree, such as p(x) = (x+1)^1000.
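
A hedged sketch of the underlying GCD-based square-free factorization (in Musser's classic form) is shown below; sympy's gcd stands in here for the paper's "monic polynomial subtraction" step.

```python
# Square-free factorization via repeated polynomial GCDs: the input is
# split into distinct-root factors q_k, each raised to its multiplicity k.
from sympy import Poly, gcd, symbols

x = symbols('x')

def squarefree_factors(p):
    """Return [(q1, 1), (q2, 2), ...] with p = q1^1 * q2^2 * ..."""
    factors, k = [], 1
    g = gcd(p, p.diff(x))          # carries every root with multiplicity - 1
    w = p.quo(g)                   # product of the distinct roots
    while w.degree() > 0:
        y = gcd(w, g)
        q = w.quo(y)               # roots of exact multiplicity k
        if q.degree() > 0:
            factors.append((q, k))
        w, g, k = y, g.quo(y), k + 1
    return factors

p = Poly((x + 1)**4 * (x - 2)**2 * (x - 5), x)
for q, k in squarefree_factors(p):
    print(q.as_expr(), '^', k)     # (x - 5)^1, (x - 2)^2, (x + 1)^4
```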

Mapping a Knowledge Model onto Java Code

This paper gives an overview of the mapping mechanism of SEAM, a methodology for the automatic generation of knowledge models, and of its mapping onto Java code. It discusses the rules used to automatically map the different components of the knowledge model onto Java classes, properties and methods. The aim of developing this mechanism is to help create a prototype for validating the automatically generated knowledge model. It will also help link the modeling phase with the implementation phase, since existing knowledge engineering methodologies do not provide proper guidelines for the transition from the knowledge modeling phase to the development phase. This will decrease the overheads associated with the development of Knowledge-Based Systems.

Action Recognition in Video Sequences using a Mealy Machine

This paper proposes the use of sequential machines for recognizing actions performed by the objects detected by a general tracking algorithm. The system can deal with the uncertainty inherent in medium-level vision data; for this purpose, the input data are fuzzified. Moreover, this transformation allows the data to be managed independently of the chosen tracking application and enables characteristics of the analyzed scenario to be added. The paper describes the representation of actions by means of an automaton and the generation of the input symbols for the finite automaton, which depend on the object and action being compared. The output of the comparison process between an object and an action is a numerical value representing the degree of membership of the object in the action, computed according to how similar the object and the action are. The work concludes with the application of the proposed technique to identify the behavior of vehicles in road traffic scenes.
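
The following toy sketch shows the overall flow: measurements are fuzzified into input symbols, a Mealy machine consumes them, and its outputs yield a membership degree of the object to an action. The states, thresholds and membership functions here are invented for illustration and are not taken from the paper.

```python
# Toy action "stopping" recognized by a Mealy machine over fuzzified speeds.
def fuzzy_slow(speed):
    # piecewise-linear membership of "slow" over speed in m/s (illustrative)
    if speed <= 1.0:
        return 1.0
    if speed >= 4.0:
        return 0.0
    return (4.0 - speed) / 3.0

# Mealy machine: (state, symbol) -> (next_state, output)
TRANSITIONS = {
    ('moving', 'slow'): ('slowing', 0.5),
    ('moving', 'fast'): ('moving', 0.0),
    ('slowing', 'slow'): ('stopped', 1.0),
    ('slowing', 'fast'): ('moving', 0.0),
    ('stopped', 'slow'): ('stopped', 1.0),
    ('stopped', 'fast'): ('moving', 0.0),
}

def membership_to_stopping(speeds):
    state, memberships = 'moving', []
    for v in speeds:
        mu = fuzzy_slow(v)
        symbol = 'slow' if mu >= 0.5 else 'fast'
        state, out = TRANSITIONS[(state, symbol)]
        memberships.append(min(out, mu))   # combine rule output and symbol degree
    return memberships[-1]                 # final membership of object to action

print(membership_to_stopping([8.0, 5.0, 2.0, 1.2, 0.3]))  # -> 1.0
```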

Using Reuse Water for Irrigation of the Green Space of Naein City

Since the water resources of the desert city of Naein are very limited, an approach that saves water while meeting the green space's water needs is to use the city's sewage wastewater. Proper treatment of Naein's sewage up to the standards required for green space use may solve some of the city's green space development problems. The present paper closely examines the available statistics and information on the city's sewage system and determines the complementary stages of the city's sewage treatment facilities. The population, per capita water use, and required discharge for various green space areas, including different plants, are calculated. Moreover, in order to facilitate the application of water resources, a raw water distribution network separate from the drinking water distribution network is designed, and a plan for mixing water from municipal wells with sewage wastewater in proposed mixing tanks is suggested. Following the green space irrigation reform and complementary plan, the city's per capita green space will increase from the current 13.2 square meters to 32 square meters.

Analytical Model for Brine Discharges from a Sea Outfall with Multiport Diffusers

Multiport diffusers are effective engineering devices installed at modern marine outfalls for the steady discharge of effluent streams from coastal plants, such as municipal sewage treatment, thermal power generation and seawater desalination plants. A mathematical model using a two-dimensional advection-diffusion equation, based on a flat seabed and incorporating the effect of a coastal tidal current, is developed to calculate the compounded concentration following discharges of desalination brine from a sea outfall with multiport diffusers. The analytical solutions are computed graphically to illustrate the merging of multiple brine plumes in shallow coastal waters, and a further approximation of the maximum shoreline concentration is made to formulate the dilution of a multiport diffuser discharge.
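
For orientation, a plausible form of such a model (the paper's exact formulation, notation and boundary conditions may differ) is a steady advection-diffusion balance whose point-source solutions are superposed over the diffuser ports:

```latex
% Illustrative reading of a steady 2-D advection-diffusion brine model;
% the paper's exact formulation may differ.
\[
  U\,\frac{\partial c}{\partial x} = D\,\frac{\partial^{2} c}{\partial y^{2}},
  \qquad
  c(x,y) \;=\; \sum_{i=1}^{N}
    \frac{q_i}{H\,U\sqrt{4\pi D x/U}}\,
    \exp\!\left(-\frac{U\,(y-y_i)^{2}}{4 D x}\right),
\]
```

where c is the concentration, U the tidal current speed, D the lateral diffusion coefficient, H the water depth, and q_i, y_i the mass flux and position of the i-th port.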

Post Elevated Temperature Effect on the Strength and Microstructure of Thin High Performance Cementitious Composites (THPCC)

Reinforced concrete (RC) structures strengthened with fiber-reinforced polymer (FRP) lack thermal resistance at the elevated temperatures reached in the event of fire. This has led to the lining of strengthened concrete with thin high performance cementitious composites (THPCC) to protect the substrate against elevated temperature. The effects of elevated temperature on THPCC based on different cementitious materials have been studied in the past, but high-alumina cement (HAC)-based THPCC have not been well characterized. This study focuses on THPCC based on HAC with 60%, 70%, 80% and 85% of the cement replaced by ground granulated blast furnace slag (GGBS). Samples were evaluated by measuring their mechanical strength (after 28 and 56 days of curing) following exposure to 400°C and 600°C, with room temperature (28°C) as a reference, and the results were corroborated by a microstructure study. Among all mixtures, the mix containing only HAC showed the highest compressive strength after exposure to 600°C. However, the tensile strength of the THPCC made of HAC with 60% GGBS content was comparable to that of the HAC-only THPCC after exposure to 600°C. Field emission scanning electron microscopy (FESEM) images of THPCC, together with Energy Dispersive X-ray (EDX) microanalysis, revealed that the microstructure deteriorated considerably after exposure to elevated temperatures, which led to the decrease in mechanical strength.

Noninvasive, Wireless Textronic System for Breathing Frequency Measurement

This paper presents research on textile electroconductive materials that can be used to construct a sensory textronic shirt for breathing frequency measurement. The full paper will also present the results of measurements carried out on unique measurement stands.

Forward Simulation of a Parallel Hybrid Vehicle and Fuzzy Controller Design for Driving/Regenerative Purposes

One of the best ways to convert a conventional vehicle into a hybrid is through trustworthy simulation results and the use of realistic driving data. To this end, this paper first presents a seven-degree-of-freedom dynamical model of the vehicle. Then, using static models of the engine, gearbox, clutch, differential, electrical machine and battery, the hybrid automobile is modeled, and a forward simulation of the vehicle's pedals-to-wheels power transmission is obtained. Next, a fuzzy controller with a proper rule base is designed to address fuel economy and regenerative braking. Finally, a series of MATLAB/Simulink simulation results demonstrates the effectiveness of the proposed structure.
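
As a flavor of the control part, here is a hedged, zero-order Sugeno-style sketch that splits braking demand between the electric machine and the friction brakes; all membership functions, rules and signals are illustrative, not the paper's.

```python
# Toy fuzzy split between regenerative and friction braking (illustrative).
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def regen_fraction(brake_demand, soc):
    """brake_demand, soc in [0, 1]; returns share sent to the e-machine."""
    # fuzzify inputs
    soft = tri(brake_demand, -0.5, 0.0, 0.7)
    hard = tri(brake_demand, 0.3, 1.0, 1.5)
    low_soc = tri(soc, -0.5, 0.0, 0.9)
    high_soc = tri(soc, 0.6, 1.0, 1.5)
    # rule base: soft braking with a battery that is not full -> mostly regen
    rules = [
        (min(soft, low_soc), 0.9),   # singleton regen-share consequents
        (min(hard, low_soc), 0.4),   # hard stops need friction brakes too
        (min(soft, high_soc), 0.2),
        (min(hard, high_soc), 0.0),  # full battery: friction only
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0   # weighted-average defuzzification

print(regen_fraction(brake_demand=0.3, soc=0.4))
```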

Information Retrieval in Domain Specific Search Engine with Machine Learning Approaches

As the web continues to grow exponentially, the idea of crawling the entire web on a regular basis becomes less and less feasible, so domain-specific search engines, which focus on information within a specific domain, have been proposed. As more information becomes available on the World Wide Web, it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interface: browsers (clicking and following hyperlinks) and query engines (queries in the form of a set of keywords indicating the topic of interest) [2]. Web search tools need better support for expressing one's information need and for returning high-quality search results. There appears to be a need for systems that reason under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves. In a multi-view problem, the features of the domain can be partitioned into disjoint subsets (views), each sufficient to learn the target concept. Semi-supervised, multi-view algorithms, which reduce the amount of labeled data required for learning, rely on the assumptions that the views are compatible and uncorrelated. This paper describes the use of a semi-supervised machine learning approach with active learning for domain-specific search engines. A domain-specific search engine is an information access system that allows access to all the information on the web that is relevant to a particular domain. The proposed work shows that, with the help of this approach, relevant data can be extracted with a minimum of queries fired by the user. It requires a small amount of labeled data and a pool of unlabeled data to which the learning algorithm is applied to extract the required data.
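
A minimal sketch of pool-based active learning with uncertainty sampling, one common way to reduce the labeled data a learner needs, is given below; the paper's semi-supervised, multi-view setup is more elaborate, and the dataset here is synthetic.

```python
# Pool-based active learning: repeatedly query the most uncertain document.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = list(range(10))                 # a few initially labeled examples
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                       # each round = one oracle query
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    margin = np.abs(proba[:, 1] - proba[:, 0])
    query = pool.pop(int(np.argmin(margin)))  # smallest margin = most uncertain
    labeled.append(query)                 # ask the user/oracle for its label

print("accuracy:", model.score(X, y))
```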

DCBOR: A Density Clustering Based on Outlier Removal

Data clustering is an important data exploration technique with many applications in data mining. We present an enhanced version of the well-known single-link clustering algorithm, which we refer to as DCBOR. The proposed algorithm alleviates the chain effect by removing the outliers from the given dataset, so it provides outlier detection and data clustering simultaneously. The algorithm does not need to update the distance matrix, since it merges the k nearest objects in a single step and a cluster continues to grow as long as possible under a specified condition. The algorithm consists of two phases: in the first phase, it removes the outliers from the input dataset; in the second phase, it performs the clustering process. It discovers clusters of different shapes, sizes and densities, and requires only one input parameter, a threshold for outlier points whose value ranges from 0 to 1; the algorithm supports the user in determining an appropriate value for it. We have tested the algorithm on different datasets containing outliers and clusters connected by chains of dense points, and it discovers the correct clusters. The results of our experiments demonstrate the effectiveness and efficiency of DCBOR.
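
A rough sketch of the two-phase idea follows; DCBOR's exact density and merging definitions are given in the paper, so the density estimate and threshold handling below are illustrative only.

```python
# Phase 1: drop points whose k-NN density falls below a threshold t in [0, 1].
# Phase 2: single-link the survivors through their k nearest kept neighbours.
import numpy as np

def dcbor_sketch(X, k=4, t=0.2):
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # distance matrix
    kth = np.sort(d, axis=1)[:, k]                        # k-th NN distance
    density = 1.0 / (1.0 + kth)
    keep = np.where(density / density.max() >= t)[0]      # phase 1: outliers out
    parent = {i: i for i in keep}                         # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in keep:                                        # phase 2: merging
        order = np.argsort(d[i])
        neighbours = [j for j in order if j in parent and j != i][:k]
        for j in neighbours:
            parent[find(j)] = find(i)
    return {i: find(i) for i in keep}   # point index -> cluster representative

np.random.seed(0)
X = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 8, [[30, 30]]])
print(set(dcbor_sketch(X).values()))    # two clusters; the outlier is removed
```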

Integrating Fast Karnaugh Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

In this paper, a new fast simplification method is presented. The method realizes Karnaugh maps with a large number of variables. In order to accelerate its operation, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross-correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross-correlation, and simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented using a new design of neural networks. Neural networks are used because they are fault tolerant and can therefore recognize signals even with noise or distortion, which is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum number of components. This is done by using modular neural networks (MNNs) that divide the input space into several homogeneous regions. The approach is applied to implement the XOR function, 16 logic functions at the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and in hardware requirements is achieved.
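
The core trick can be illustrated in one dimension: cross-correlation computed via the FFT locates a group of ones inside a truth-table row. The paper's setting is two-dimensional and hardware-oriented, so the sizes below are illustrative.

```python
# Locate runs of ones by circular cross-correlation in the frequency domain.
import numpy as np

table = np.array([0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0], float)
group = np.ones(4)                      # looking for a run of four ones

n = len(table)
F = np.fft.rfft(table, n)
G = np.fft.rfft(group, n)
corr = np.fft.irfft(F * np.conj(G), n)  # circular cross-correlation

hits = np.where(np.isclose(corr, group.sum()))[0]
print(hits)                             # -> [1 8]: start positions of 1111
```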

Surface Roughness Optimization in End Milling Operation with Damper Inserted End Milling Cutters

This paper presents a study applying the Taguchi design to optimize surface quality in end milling with damper-inserted end milling cutters. Maintaining good surface quality usually involves additional manufacturing cost or loss of productivity. The Taguchi design is an efficient and effective experimental method in which a response variable can be optimized over various factors using fewer resources than a factorial design. This study included spindle speed, feed rate, and depth of cut as control factors, with different tools of the same specification used to introduce tool-condition and dimensional variability. An L9(3^4) orthogonal array was used; ANOVA analyses were carried out to identify the significant factors affecting surface roughness, and the optimal cutting combination was determined by seeking the best surface roughness (response) and signal-to-noise ratio. Finally, confirmation tests verified that the Taguchi design was successful in optimizing the milling parameters for surface roughness.
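
For reference, the smaller-the-better signal-to-noise ratio commonly used for surface roughness in Taguchi analyses is sketched below; the readings are hypothetical and the paper's exact criterion may differ.

```python
# Smaller-the-better S/N ratio: S/N = -10 * log10(mean(y^2)) over the
# replicated roughness readings y of one experimental run.
import math

def sn_smaller_is_better(readings):
    return -10.0 * math.log10(sum(y * y for y in readings) / len(readings))

# hypothetical Ra readings (micrometres) for one run of the L9 array
print(round(sn_smaller_is_better([0.82, 0.79, 0.85]), 2))
```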

Providing Medical Information in Braille: Research and Development of Automatic Braille Translation Program for Japanese, “eBraille”

Along with the advances in medicine, providing medical information to individual patients is becoming more important. In Japan, however, such information is rarely provided in Braille to blind and partially sighted people. We are therefore researching and developing a Web-based automatic translation program, “eBraille”, to translate Japanese text into Japanese Braille. First, we analyzed the Japanese transcription rules in order to implement them in our program. We then added medical words to the program's dictionary to improve its translation accuracy for medical text. Finally, we examined the efficacy of statistical learning models (SLMs) for further increasing word segmentation accuracy in Braille translation. As a result, eBraille achieved the highest translation accuracy in a comparison with other translation programs, improved its accuracy for medical text, and is being used to produce hospital brochures in Braille for outpatients and inpatients.

Complexity Analysis of Some Known Graph Coloring Instances

Graph coloring is an important problem in computer science, and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs for which the optimal solution is already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 vertices to a high of 864 vertices. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size, and number of block diagonal zeros are calculated for each graph. The most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.
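
The matrix step mentioned above can be illustrated briefly: the square of the adjacency matrix counts common neighbours between vertex pairs, a correlation-like quantity; the study's full procedure is richer than this sketch.

```python
# A2[i, j] = number of length-2 walks from i to j, i.e. shared neighbours.
import numpy as np

# adjacency matrix of a 4-cycle: 0-1-2-3-0
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

A2 = A @ A
print(A2)
```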

Finite Element Analysis of Thin Steel Plate Shear Walls

Steel plate shear walls (SPSWs) in buildings are known to be an effective means of resisting lateral forces. By using unstiffened walls and allowing them to buckle, their energy absorption capacity increases significantly owing to the post-buckling capacity: the post-buckling tension field action of SPSWs can provide substantial strength, stiffness and ductility. This paper presents a finite element analysis of low yield point (LYP) steel shear walls. In this shear wall system, LYP steel plate is used for the steel panel and conventional structural steel is used for the boundary frames. A series of nonlinear cyclic analyses was carried out to obtain the stiffness, strength, deformation capacity, and energy dissipation capacity of the LYP steel shear wall. The effect of the width-to-thickness ratio of the steel plate on buckling behavior and energy dissipation capacity was studied. Good energy dissipation and deformation capacities were obtained for all models.