Photogrammetry and GIS Integration for Archaeological Documentation of Ahl-Alkahf, Jordan

Protection and proper management of archaeological heritage are essential to studying and interpreting it for present and future generations. Protecting the archaeological heritage depends on multidisciplinary professional collaboration. This study gathers data from different sources, Photogrammetry and Geographic Information Systems (GIS), and integrates them for the purpose of documenting one of the significant archaeological sites of Jordan, Ahl-Alkahf. 3D modeling deals with the actual image of the features, shapes, and texture to represent reality as realistically as possible. The 3D coordinates that result from the photogrammetric adjustment procedures are used to create 3D models of the study area. Adding textures to the 3D model surfaces gives a 'real world' appearance to the displayed models. The GIS combines all the data, including boundary maps indicating the locations of archaeological sites, a transportation layer, a digital elevation model, and orthoimages. For a realistic representation of the study area, a 3D GIS model was prepared, in which efficient generation, management, and visualization of such spatial data can be achieved.

Formulation, Analysis and Validation of Takagi-Sugeno Fuzzy Modeling for Robotic Manipulators

This paper proposes a methodology for analyzing the dynamic behavior of a robotic manipulator in continuous time. Initially, the nonlinear system is decomposed into linear submodels and analyzed in the context of Linear Parameter-Varying (LPV) systems. The obtained linear submodels, which represent the local dynamic behavior of the robotic manipulator at selected operating points, are grouped in a Takagi-Sugeno fuzzy structure. The resulting fuzzy model is analyzed and validated through analog simulation as a universal approximator of the robotic manipulator.
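
A minimal sketch of the Takagi-Sugeno blending idea may help fix notation. Everything numeric below is invented for illustration: two local linear submodels x' = A_i x + B_i u, obtained at two hypothetical operating points, are combined through normalized membership degrees.

```python
import numpy as np

# Hypothetical local linear submodels x_dot = A_i x + B_i u, obtained by
# linearizing the manipulator dynamics at two operating points (values invented).
A = [np.array([[0.0, 1.0], [-2.0, -0.5]]),
     np.array([[0.0, 1.0], [-5.0, -0.8]])]
B = [np.array([[0.0], [1.0]]),
     np.array([[0.0], [1.2]])]

def memberships(q):
    """Triangular membership degrees over the joint-angle range [0, pi] (assumed)."""
    w1 = max(0.0, 1.0 - abs(q) / np.pi)      # active near q = 0
    w2 = max(0.0, abs(q) / np.pi)            # active near |q| = pi
    s = w1 + w2
    return [w1 / s, w2 / s]

def ts_dynamics(x, u):
    """Takagi-Sugeno model: membership-weighted sum of the local linear dynamics."""
    w = memberships(x[0])
    return sum(wi * (Ai @ x + Bi @ u) for wi, Ai, Bi in zip(w, A, B))

x = np.array([0.5, 0.0])      # [joint angle, joint velocity]
u = np.array([0.1])           # torque input
print(ts_dynamics(x, u))
```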

Local Curvelet-Based Classification Using Linear Discriminant Analysis for Face Recognition

In this paper, an efficient local appearance feature extraction method based on the multi-resolution Curvelet transform is proposed in order to further enhance the performance of the well-known Linear Discriminant Analysis (LDA) method when applied to face recognition. Each face is described by a subset of band-filtered images containing block-based Curvelet coefficients. These coefficients characterize the face texture, and a set of simple statistical measures allows us to form compact and meaningful feature vectors. The proposed method is compared with related feature extraction methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Independent Component Analysis (ICA). Two other multi-resolution transforms, the Wavelet (DWT) and the Contourlet, were also compared against the block-based Curvelet-LDA algorithm. Experimental results on the ORL, YALE, and FERET face databases show that the proposed method provides a better representation of the class information and obtains much higher recognition accuracies.
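
The feature pipeline can be sketched as follows. No curvelet implementation is assumed to be available here, so PyWavelets' 2D DWT stands in for the multi-resolution step; the block size and wavelet choice are illustrative.

```python
# Sketch of "block statistics of subband coefficients + LDA"; the DWT is a
# stand-in for the curvelet transform, which is not reproduced here.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def block_stats(coeffs, block=8):
    """Mean and std of non-overlapping blocks of a 2D coefficient array."""
    h, w = coeffs.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = coeffs[i:i + block, j:j + block]
            feats += [blk.mean(), blk.std()]
    return feats

def face_features(img):
    """Feature vector from the subbands of a single-level 2D DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(img, 'db2')
    feats = []
    for band in (cA, cH, cV, cD):
        feats += block_stats(band)
    return np.array(feats)

img = np.random.rand(64, 64)          # stand-in for a grayscale face image
print(face_features(img).shape)

# with a face set and identity labels (both assumed), classification would be:
# X = np.array([face_features(f) for f in faces])
# lda = LinearDiscriminantAnalysis().fit(X, labels)
```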

Soccer Video Editing Using a Multimodal Annotation

In this paper, we present an approach to soccer video editing using a multimodal annotation. We propose to associate with each video sequence of a soccer match a textual document to be used for further exploitation such as searching, browsing, and summary editing. The textual document contains video metadata, match metadata, and match data. This document, generated automatically while the video is analyzed, segmented, and classified, can be enriched semi-automatically according to the user type and/or a specialized recommendation system.
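
As a rough illustration of such a companion document, the sketch below shows one possible layout of its three parts; all field names and values are invented and do not reproduce the authors' schema.

```python
# Illustrative structure only: video metadata, match metadata, match data.
import json

annotation = {
    "video_metadata": {"codec": "mpeg2", "duration_s": 5400, "fps": 25},
    "match_metadata": {"teams": ["A", "B"], "date": "2010-05-01",
                       "competition": "league"},
    "match_data": [
        {"start_s": 612, "end_s": 640, "class": "goal", "team": "A"},
        {"start_s": 1834, "end_s": 1850, "class": "corner", "team": "B"},
    ],
}
print(json.dumps(annotation, indent=2))
```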

Health Post: A Sustainable Prototype for the Third World

This paper concerns the study of sustainable construction materials applied to the "Health Post", a prototype for primary health care situated in marginalized areas of the world. It is suited to the social and climatic context of Sub-Saharan Africa; however, it could be transferred to other countries of the world with similar urgent needs. The idea is to create a Health Post from local construction materials that have a low environmental impact and promote the local workforce, allowing the reuse of traditional building techniques while lowering production and transport costs. The aim of the Primary Health Care Centre is to be a flexible and expandable structure, identifying a modular form that can be repeated several times to expand its existing functions. In this way it could be not only a health care centre but also a socio-cultural facility.

Labeling Method in Steganography

In this paper, a way of hiding a text message in a grayscale image (steganography) is presented. The method first finds the binary value of each character of the text message. Next, it finds the dark (black) areas of the grayscale image by converting the original image to a binary image and labeling each object of the image using 8-connectivity. These images are then converted to RGB in order to locate the dark areas, because each gray level maps to an RGB color; if the grayscale image is very light, its histogram must be adjusted manually so that only dark areas are selected. In the final stage, each group of 8 pixels in the dark areas is treated as a byte, and the binary value of each character is placed in the low bit of each such byte. Restricting embedding to dark-area pixels increases the security of the basic least-significant-bit (LSB) approach to steganography.
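
A simplified sketch of this embedding procedure, assuming a numpy grayscale image and an invented darkness threshold (the manual histogram adjustment for light images is omitted):

```python
# Label dark connected components with 8-connectivity, then embed message
# bits into the least significant bit of the pixels inside those regions.
import numpy as np
from scipy import ndimage

def embed(gray, message, dark_thresh=40):
    img = gray.copy()
    dark = img < dark_thresh                       # binary mask of dark pixels
    eight = np.ones((3, 3), dtype=int)             # 8-connectivity structure
    labels, n = ndimage.label(dark, structure=eight)
    ys, xs = np.nonzero(labels > 0)                # pixels inside dark objects
    bits = [int(b) for ch in message for b in format(ord(ch), '08b')]
    if len(bits) > len(ys):
        raise ValueError("message too long for the dark regions")
    for bit, y, x in zip(bits, ys, xs):
        img[y, x] = (img[y, x] & 0xFE) | bit       # overwrite the LSB
    return img

toy = np.zeros((8, 8), dtype=np.uint8)             # an all-dark toy image
stego = embed(toy, "Hi")
```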

A Temporal Synchronization Model for Heterogeneous Data in Distributed Systems

Multimedia distributed systems deal with heterogeneous data, such as text, images, graphics, video, and audio. The specification of temporal relations among different data types and distributed sources is an open research area. This paper proposes a fully distributed synchronization model to be used in multimedia systems. One original aspect of the model is that it avoids the use of a common reference (e.g., a wall clock or shared memory). To achieve this, all possible multimedia temporal relations are specified according to their causal dependencies.
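
The abstract does not name the mechanism, but a standard way to order events by causal dependency without a wall clock or shared memory is the vector clock, sketched here purely for orientation (not the paper's exact model).

```python
# Vector clocks: each process keeps a counter per process; message timestamps
# propagate causal knowledge without any shared reference.
def new_clock(n):
    return [0] * n

def local_event(clock, i):
    clock[i] += 1

def send(clock, i):
    local_event(clock, i)
    return list(clock)                    # timestamp piggybacked on the message

def receive(clock, i, msg_clock):
    for k in range(len(clock)):
        clock[k] = max(clock[k], msg_clock[k])
    local_event(clock, i)

def happened_before(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

a, b = new_clock(2), new_clock(2)
msg = send(a, 0)
receive(b, 1, msg)
print(happened_before(msg, b))            # True: the send precedes the receive
```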

Bottom-Up Text Mining through Hierarchical Document Representation

Most existing text mining approaches are designed with the transaction database model in mind. Thus, the mined dataset is structured using just one concept, the "transaction", while the whole dataset is modeled using the "set" abstract type. In such cases, the structure of the whole dataset and the relationships among the transactions themselves are not modeled and, consequently, not considered in the mining process. We believe that taking into account the structural properties of hierarchically structured information (e.g., textual documents) in the mining process can lead to better results. For this purpose, a hierarchical association rule mining approach for textual documents is proposed in this paper, and the classical set-oriented mining approach is reconsidered in favor of a Directed Acyclic Graph (DAG) oriented approach. Natural language processing techniques are used to obtain the DAG structure. Based on this graph model, a hierarchical bottom-up algorithm is proposed. The main idea is that each node is mined together with its parent node.
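
The bottom-up idea, "each node is mined with its parent node", can be sketched as a traversal from the leaves toward the root. The mining step is reduced here to counting child-parent term co-occurrences; the node structure and names are illustrative assumptions, not the paper's algorithm.

```python
from collections import Counter
from itertools import product

class Node:
    def __init__(self, terms, parent=None):
        self.terms, self.parent, self.children = set(terms), parent, []
        if parent:
            parent.children.append(self)

def mine_bottom_up(leaves):
    counts = Counter()
    frontier = set(leaves)
    while frontier:
        parents = set()
        for node in frontier:
            if node.parent:
                for pair in product(node.terms, node.parent.terms):
                    counts[pair] += 1          # child-parent co-occurrence
                parents.add(node.parent)
        frontier = parents                     # move one level up the DAG
    return counts

root = Node({"mining"})
child = Node({"text", "dag"}, parent=root)
print(mine_bottom_up([child]))
```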

An Analysis of the Themes of Alcoholic Beverage Advertisements in Thailand after the Enactment of the Alcoholic Beverage Control Act of 2008

The objective of this research was to study the themes of alcoholic beverage advertisements in Thailand after the enactment of the 2008 Alcoholic Beverage Control Act. Data was collected through textual analysis of 35 television and cinema advertisements for alcoholic beverage products broadcast in Thailand. Nine themes were identified, seven of which had been used before the new law (i.e. power, competition, friendship, Thainess, success, romance, and safety) and two of which were new themes (volunteerism and conservation) introduced as a form of adaptation and negotiation in response to the new law.

Visual Hull with Imprecise Input

Imprecision is a long-standing problem in CAD design and high-accuracy image-based reconstruction applications. The visual hull, which is the closed silhouette-equivalent shape of the objects of interest, is an important concept in image-based reconstruction. We extend the domain-theoretic framework, a robust geometric model that captures imprecision, to analyze the imprecision in the output shape when the input vertices are given imprecisely. Under this framework, we present an efficient algorithm to generate the 2D partial visual hull, which represents the exact information of the visual hull under only basic imprecision assumptions. We also show how the visual-hull-from-polyhedra problem can be solved efficiently in the context of imprecise input.
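
For orientation only, a 2D visual hull can be approximated as the intersection of the silhouette cones (wedges) seen from several viewpoints; the sketch below does exactly that with shapely and invented camera data, and does not reproduce the paper's domain-theoretic treatment of imprecision.

```python
# Toy 2D visual hull: intersect the silhouette "wedges" of each viewpoint.
from shapely.geometry import Polygon

def wedge(camera, left_ray, right_ray, reach=100.0):
    """Triangle approximating the viewing cone between two silhouette rays."""
    cx, cy = camera
    return Polygon([(cx, cy),
                    (cx + reach * left_ray[0], cy + reach * left_ray[1]),
                    (cx + reach * right_ray[0], cy + reach * right_ray[1])])

cones = [wedge((0.0, 0.0), (0.8, 0.6), (0.6, 0.8)),
         wedge((10.0, 0.0), (-0.6, 0.8), (-0.8, 0.6))]
hull = cones[0]
for cone in cones[1:]:
    hull = hull.intersection(cone)
print(hull.wkt)               # the region consistent with all silhouettes
```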

Universal Kinetic Modeling of RAFT Polymerization using Moment Equations

In the following text, we show that by introducing a universal kinetic scheme, the origin of the rate retardation and inhibition period observed in dithiobenzoate-mediated RAFT polymerization can be described properly. We develop our model using the method of moments and then apply it to different monomer/RAFT agent systems, covering both homo- and copolymerization. The modeling results are in excellent agreement with experiments and support the validity of the universal kinetic scheme, not only for dithiobenzoate-mediated systems but also for other types of monomer/RAFT agent systems.
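
As a much-reduced illustration of the method of moments (not the paper's universal RAFT scheme), the sketch below integrates the moment equations of ideal living chain growth; kp, the radical concentration, and the time span are invented numbers.

```python
# Moments of the chain-length distribution: lam_k = sum_n n^k [P_n].
import numpy as np
from scipy.integrate import solve_ivp

kp, R = 1.0e3, 1.0e-7        # propagation constant (L/mol/s), chains (mol/L)

def moments(t, y):
    M, lam0, lam1, lam2 = y
    dM = -kp * M * lam0
    dlam0 = 0.0                                  # constant number of chains
    dlam1 = kp * M * lam0
    dlam2 = kp * M * (lam0 + 2.0 * lam1)
    return [dM, dlam0, dlam1, dlam2]

y0 = [1.0, R, 0.0, 0.0]                          # [M]0 = 1 mol/L
sol = solve_ivp(moments, (0.0, 2.0e4), y0, rtol=1e-8)
M, lam0, lam1, lam2 = sol.y[:, -1]
print("conversion:", 1.0 - M / y0[0])
print("DPn:", lam1 / lam0, "PDI:", lam2 * lam0 / lam1**2)
```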

Key-Based Text Watermarking of E-Text Documents in an Object-Based Environment Using Z-Axis for Watermark Embedding

Hiding data in text documents involves considerable complexity due to the nature of text. A robust text watermarking scheme targeting an object-based environment is presented in this research. At the heart of the proposed solution is the concept of watermarking an object-based text document in which each text string is treated as a separate object having its own set of properties. Taking advantage of the z-ordering of objects, the watermark is applied along the z-axis, causing zero fidelity disturbance to the text. The watermark bit sequence generated from the user key is hashed with selected properties of the given document to determine the bit sequence to embed. Bits are embedded along the z-axis, and the document shows no fidelity issues when printed, scanned, or photocopied.
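
The key-derivation step can be sketched as follows; the hash choice, property names, and the z-offset rule in the final comment are illustrative assumptions, not the paper's exact construction.

```python
# Derive the embedded bit sequence from the user key plus document properties.
import hashlib

def watermark_bits(user_key, doc_props, n_bits=64):
    material = user_key.encode() + repr(sorted(doc_props.items())).encode()
    digest = hashlib.sha256(material).digest()
    bits = ''.join(format(byte, '08b') for byte in digest)
    return [int(b) for b in bits[:n_bits]]

props = {"font": "Times", "page_count": 3, "author": "unknown"}
bits = watermark_bits("secret-key", props)
# each text-string object could then carry a z-axis offset encoding one bit,
# e.g. z = base_z + bit, invisible when printed, scanned, or photocopied
```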

Target Detection with Improved Image Texture Feature Coding Method and Support Vector Machine

An approach to image texture analysis and target recognition using an improved image texture feature coding method (TFCM) and a Support Vector Machine (SVM) for target detection is presented. With the proposed target detection framework, targets of interest can be detected accurately. A cascade-sliding-window technique was also developed for automated target localization. Application to mammograms showed that over 88% of normal mammograms and 80% of abnormal mammograms can be correctly identified. The approach was also successfully applied to Synthetic Aperture Radar (SAR) and Ground Penetrating Radar (GPR) images for target detection.
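
A sketch of the classification stage with scikit-learn; random placeholder vectors stand in for the TFCM texture features, which are not reproduced here.

```python
# SVM on texture feature vectors (features and labels are synthetic stand-ins).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))            # placeholder texture feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder two-class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel='rbf', C=1.0, gamma='scale').fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```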

Speech Encryption and Decryption Using Linear Feedback Shift Register (LFSR)

This paper considers the problem of cryptanalysis of stream ciphers. It attempts to improve the existing attacks on stream ciphers and to distinguish the portions of ciphertext, obtained by the encryption of plaintext, in which some parts of the text are random and the rest are non-random. The paper presents a tutorial introduction to symmetric cryptography. The basic information-theoretic and computational properties of classic and modern cryptographic systems are presented, followed by an examination of the application of cryptography to the security of VoIP systems in computer networks using the LFSR algorithm. The implementation is developed in Java 2. The LFSR algorithm is appropriate for the encryption and decryption of online streaming data, e.g. VoIP (voice chatting over IP). This paper implements the encryption module from speech signals to ciphertext and the decryption module from ciphertext back to speech signals.
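
A minimal LFSR keystream cipher in the spirit described (the register width and tap positions are illustrative, not a cryptographically vetted configuration); decryption is the same XOR with the same seed.

```python
# XOR each data byte with 8 keystream bits produced by a 16-bit LFSR.
def lfsr_stream(seed, taps, nbits):
    state = seed
    for _ in range(nbits):
        yield state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << 15)        # 16-bit register (assumed)

def crypt(data, seed, taps=(0, 2, 3, 5)):
    bits = lfsr_stream(seed, taps, 8 * len(data))
    out = bytearray()
    for byte in data:
        ks = 0
        for _ in range(8):
            ks = (ks << 1) | next(bits)
        out.append(byte ^ ks)                    # XOR: encrypts and decrypts
    return bytes(out)

cipher = crypt(b"speech samples here", seed=0xACE1)
assert crypt(cipher, seed=0xACE1) == b"speech samples here"
```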

A Flexible Web Text Mining Architecture

Text mining is an important step of the knowledge discovery process. It is used to extract hidden information from unstructured or semi-structured data. This aspect is fundamental because much of the Web's information is semi-structured due to the nested structure of HTML code, much of it is linked, and much of it is redundant. Web text mining supports the whole knowledge mining process through the mining, extraction, and integration of useful data, information, and knowledge from Web page contents. In this paper, we present a Web text mining process able to discover knowledge in a distributed and heterogeneous multi-organization environment. The process is based on a flexible architecture and is implemented in four steps that examine web content and extract useful hidden information through mining techniques. Our Web text mining prototype starts from the retrieval of Web job offers, from which, through a text mining process, useful information for their fast classification is drawn out; this information is, essentially, the job offer location and the required skills.
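
As a toy illustration of the final extraction step, the sketch below pulls a location and a skill list out of a job offer; the regular expression and the skill vocabulary are invented, and the paper's four-step architecture is of course richer.

```python
# Extract "place" and "skills" from a job-offer text by pattern matching.
import re

SKILLS = {"java", "sql", "html", "python"}

def classify_job_offer(text):
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    skills = sorted(tokens & SKILLS)
    place = re.search(r"location[:\s]+([A-Z][a-zA-Z ]+)", text, flags=re.I)
    return {"place": place.group(1).strip() if place else None,
            "skills": skills}

offer = "Developer wanted. Location: Milan. Required: Java, SQL and HTML."
print(classify_job_offer(offer))
```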

DRE - A Quality Metric for Component-Based Software Products

The overriding goal of software engineering is to provide a high-quality system, application, or product. To achieve this goal, software engineers must apply effective methods coupled with modern tools within the context of a mature software process [2]. In addition, it is also necessary to ensure that high quality is realized. Although many quality measures can be collected at the project level, the important measures are errors and defects. Deriving a quality measure for reusable components has proven to be a challenging task nowadays. The results obtained from the study are based on empirical evidence of reuse practices, as emerged from the analysis of industrial projects. Both large and small companies, working in a variety of business domains and using object-oriented and procedural development approaches, contributed to this study. This paper proposes a quality metric that provides benefits at both the project and process levels, namely defect removal efficiency (DRE).
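
DRE has the standard definition DRE = E / (E + D), where E is the number of errors found before release and D the number of defects found after release; a one-line worked example:

```python
# Defect removal efficiency: fraction of all problems caught before delivery.
def dre(errors_before, defects_after):
    return errors_before / (errors_before + defects_after)

print(dre(90, 10))   # 0.9: 90% of the problems were removed before release
```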

Industrial Waste Monitoring

Conventional industrial monitoring systems are tedious and inefficient, and at times the integrity of the data is unreliable. The objective of this system is to monitor industrial processes, specifically the fluid level: it measures the instantaneous fluid level and responds by text messaging the exact value of the parameter to the user when queried by a privileged-access user. The development of the embedded program code and the circuit for fluid level measurement is discussed as well. Suggestions for future implementations and efficient remote monitoring work are included.
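
The query-response logic can be sketched as below, with the sensor reading and the SMS transport stubbed out; all names, the number format, and the privilege check are illustrative.

```python
# SMS query handler: privileged senders may request the current fluid level.
AUTHORIZED = {"+15550100"}

def read_fluid_level_cm():
    return 42.7                      # stub for the ADC/sensor reading

def on_sms(sender, text):
    if sender not in AUTHORIZED:     # privileged-access check
        return None
    if "LEVEL" in text.upper():
        return f"Fluid level: {read_fluid_level_cm():.1f} cm"
    return "Unknown command"

print(on_sms("+15550100", "level?"))
```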

Modeling the Uncertainty of the Remanufacturing Process for Consideration of Extended Producer Responsibility (EPR)

There is a growing body of evidence supporting the proposition of product take-back for remanufacturing, particularly within the context of Extended Producer Responsibility (EPR). Remanufacturing, however, presents challenges unlike those of traditional manufacturing environments due to its high levels of uncertainty, which may further deter organizations from considering its potential benefits. This paper presents a novel modeling approach for evaluating the uncertainty of part failures within the remanufacturing process and its impact on economic and environmental performance measures. Both the theoretical modeling approach and an example of its use in application are presented.
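
One simple way to make such uncertainty concrete is a Monte Carlo sketch like the following; the per-part failure probabilities and costs are invented, and this is an illustration rather than the paper's model.

```python
# Each returned core has parts that fail inspection at random, driving the
# cost of remanufacturing one unit.
import numpy as np

rng = np.random.default_rng(1)
p_fail = np.array([0.10, 0.25, 0.05])          # per-part failure probability
replace_cost = np.array([12.0, 30.0, 8.0])     # cost of replacing each part
base_cost = 20.0                               # disassembly/reassembly cost

n = 10_000
failures = rng.random((n, 3)) < p_fail         # which parts fail, per unit
unit_cost = base_cost + failures.astype(float) @ replace_cost
print("mean cost:", unit_cost.mean(),
      "95th percentile:", np.percentile(unit_cost, 95))
```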

Standard Deviation of Mean and Variance of Rows and Columns of Images for CBIR

This paper describes a novel and effective approach to content-based image retrieval (CBIR) that represents each image in the database by a vector of feature values called the "standard deviation of mean vectors of color distribution of rows and columns of images". In many areas of commerce, government, academia, and hospitals, large collections of digital images are being created. This paper describes an approach that uses image contents as the feature vector for retrieval of similar images. Several classes of features are used to specify queries: color, texture, shape, and spatial layout. Color features are often easily obtained directly from the pixel intensities. In this paper, feature extraction is done for the texture descriptors 'variance' and 'variance of variances'. First, the standard deviation of the row and column means is calculated for the R, G, and B planes; these six values form one feature vector per image. Second, we calculate the variance of each row and column of the R, G, and B planes of an image; the six standard deviations of these variance sequences then form a feature vector of dimension six. We applied our approach to a database of 300 BMP images and evaluated the capability of automatic indexing by analyzing image content, using color and texture as features and the Euclidean distance as the similarity measure.
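
The first six-value feature vector described above is straightforward to compute; a sketch with numpy following the abstract's recipe (the retrieval helper is an obvious companion, not taken from the paper):

```python
# Std. dev. of row means and of column means, per R/G/B plane: 6 values.
import numpy as np

def row_col_mean_features(img):
    feats = []
    for c in range(3):                           # R, G, B planes
        plane = img[:, :, c].astype(float)
        feats.append(plane.mean(axis=1).std())   # std of row means
        feats.append(plane.mean(axis=0).std())   # std of column means
    return np.array(feats)                       # 6-D feature vector

def retrieve(query_feat, db_feats):
    d = np.linalg.norm(db_feats - query_feat, axis=1)   # Euclidean distance
    return np.argsort(d)                                # best matches first

img = np.random.randint(0, 256, (100, 120, 3))
print(row_col_mean_features(img))
```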

2D Gabor Functions and FCMI Algorithm for Flaws Detection in Ultrasonic Images

In this paper we present a new approach to detecting flaws in T.O.F.D. (Time Of Flight Diffraction) ultrasonic images based on texture features. Texture is one of the most important features used in recognizing patterns in an image. The paper describes texture features based on 2D Gabor functions, i.e., Gaussian-shaped band-pass filters with dyadic treatment of the radial spatial frequency range and multiple orientations, which represent an appropriate choice for tasks requiring simultaneous measurement in both the space and frequency domains. The most relevant features are used as input data to a fuzzy c-means clustering classifier. Only two classes exist: 'defect' or 'no defect'. The proposed approach is tested on T.O.F.D. images acquired in the laboratory and in the industrial field.
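
A sketch of this pipeline using scikit-image's Gabor filters and a compact fuzzy c-means update loop; the frequencies, orientations, and FCM parameters are illustrative choices.

```python
# Gabor filter-bank responses become per-pixel features, clustered into two
# fuzzy classes ('defect' / 'no defect').
import numpy as np
from skimage.filters import gabor

def gabor_features(image, freqs=(0.1, 0.2, 0.4),
                   thetas=(0, np.pi / 4, np.pi / 2)):
    feats = [np.abs(gabor(image, frequency=f, theta=t)[0]).ravel()
             for f in freqs for t in thetas]
    return np.stack(feats, axis=1)               # (n_pixels, n_filters)

def fuzzy_cmeans(X, c=2, m=2.0, iters=50):
    rng = np.random.default_rng(0)
    U = rng.dirichlet(np.ones(c), size=len(X))   # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))            # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

img = np.random.rand(32, 32)                     # stand-in for a T.O.F.D. image
U, centers = fuzzy_cmeans(gabor_features(img))
labels = U.argmax(axis=1)                        # hard 'defect' / 'no defect' map
```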