Error-Robust Nature of Genome Profiling Applied for Clustering of Species Demonstrated by Computer Simulation

Genome profiling (GP), a genotype-based technology that exploits random PCR and temperature gradient gel electrophoresis, has been successful in the identification/classification of organisms. In this technology, spiddos (species identification dots) and PaSS (pattern similarity score) are employed for measuring the closeness (or distance) between genomes. Based on this closeness (PaSS), we can build up phylogenetic trees of organisms. We noticed that the topology of such a tree is rather robust against the experimental fluctuation conveyed by spiddos. In this study, this observation was confirmed quantitatively by computer simulation, establishing the limits of reliability of this highly powerful methodology. As a result, we could demonstrate the effectiveness of the GP approach for the identification/classification of organisms.
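To illustrate the kind of robustness simulation described above, the sketch below perturbs idealized spiddo coordinates with random noise and checks how often the similarity ordering between genomes is preserved; the pass_score definition, the synthetic genomes and the noise levels are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pass_score(spiddos_a, spiddos_b):
    """Illustrative similarity score: 1 - mean distance between matched
    spiddo coordinates (an assumed definition, not the paper's PaSS formula)."""
    return 1.0 - np.mean(np.linalg.norm(spiddos_a - spiddos_b, axis=1))

# Three hypothetical genomes, each represented by 10 spiddos in normalized
# (mobility, temperature) coordinates within the unit square.
genomes = {name: rng.random((10, 2)) for name in ("A", "B", "C")}

def closest_to_A(noise_sd):
    """Return which genome looks most similar to A after adding measurement noise."""
    noisy = {k: v + rng.normal(0.0, noise_sd, v.shape) for k, v in genomes.items()}
    scores = {k: pass_score(noisy["A"], noisy[k]) for k in ("B", "C")}
    return max(scores, key=scores.get)

baseline = closest_to_A(0.0)
for sd in (0.01, 0.05, 0.10):
    agree = sum(closest_to_A(sd) == baseline for _ in range(1000)) / 1000
    print(f"noise sd={sd}: similarity ordering preserved in {agree:.1%} of trials")
```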

A Weighted Profiling Using an Ontology Base for Semantic-Based Search

The amount of information on the Web is increasing tremendously. A number of search engines have been developed for searching Web information and retrieving documents that satisfy inquirers' needs. Search engines nevertheless return irrelevant documents among their results, since the search is text-based rather than semantic-based. The information retrieval research area has produced a number of approaches and methodologies, such as profiling, feedback, query modification and human-computer interaction, for improving search results. Moreover, information retrieval has employed artificial intelligence techniques and strategies, such as machine learning heuristics, tuning mechanisms, user and system vocabularies and logical theory, for capturing users' preferences and using them to guide the search based on semantic rather than syntactic analysis. Although valuable improvements in search results have been recorded, surveys show that search engine users are still not fully satisfied with their results. Using ontologies for semantic-based searching is likely the key solution. Adopting a profiling approach and using ontology base characteristics, this work proposes a strategy for finding the exact meaning of query terms in order to retrieve information relevant to user needs. The evaluation of the conducted experiments shows the effectiveness of the suggested methodology, and conclusions are presented.
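As an illustration of the weighted-profiling idea, the following sketch expands an ambiguous query term using a toy ontology and a user's weighted interest profile, then ranks documents with the expanded weights; the tiny ontology, the profile weights and the scoring rule are assumptions made for the example, not the ontology base or strategy of the paper.

```python
# Toy ontology: one ambiguous term with two senses and related words per sense.
toy_ontology = {
    "jaguar": {"animal": ["feline", "panthera"], "vehicle": ["car", "automobile"]},
}
user_profile = {"animal": 0.9, "vehicle": 0.1}   # learned interest weights

def expand_query(term):
    """Weight each sense's related terms by the user's interest in that sense."""
    expanded = {term: 1.0}
    for sense, related in toy_ontology.get(term, {}).items():
        for word in related:
            expanded[word] = user_profile.get(sense, 0.0)
    return expanded

def score(document, weighted_terms):
    """Sum the weights of expanded terms that occur in the document."""
    words = set(document.lower().split())
    return sum(w for t, w in weighted_terms.items() if t in words)

docs = ["The jaguar is a large feline of the Americas",
        "The new jaguar car model was unveiled today"]
weights = expand_query("jaguar")
for doc in sorted(docs, key=lambda d: score(d, weights), reverse=True):
    print(f"{score(doc, weights):.1f}  {doc}")
```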

A Norm-based Approach for Profiling Business Knowledge

Knowledge is a key asset for any organisation to sustain competitive advantages, but it is difficult to identify and represent the knowledge needed to perform activities in business processes. Effective knowledge management and support for relevant business activities have a significant impact on the performance of the organisation as a whole, because knowledge has the function of directing, coordinating and controlling actions within business processes. The study introduces organisational morphology, a norm-based approach that applies semiotic theories emphasising the representation of knowledge in norms. This approach is concerned with the classification of activities into three categories: substantive, communication and control activities. All activities are directed by norms; hence three types of norms exist, each associated with a category of activities. The paper briefly describes the approach and illustrates its application through a case study of academic activities in higher education institutions. The results of the study show that the approach provides an effective way to profile business knowledge and that the resulting profile enables the understanding and specification of an organisation's business requirements.

Searching for Similar Informational Articles in the Internet Channel

In terms of total online audience, newspapers are the most successful form of online content to date. The online audience for newspapers continues to demand higher-quality services, including personalized news services, and news providers should be able to offer appropriate content to suitable users. In this paper, a news article recommender system is suggested based on a user's preferences as he or she visits an Internet news site and reads the published articles. This system helps raise user satisfaction and increase customer loyalty toward the content provider.
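A minimal content-based sketch of the general idea follows: build a reading profile from the articles a visitor has already opened and rank unread articles by similarity to that profile. The example articles, the TF-IDF representation and the profile-averaging rule are illustrative assumptions, not the recommender described in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Central bank raises interest rates amid inflation concerns",
    "Local football club wins championship after dramatic final",
    "Stock markets rally as inflation data cools",
    "Star striker transfers to rival club for record fee",
]
read_by_user = [0, 2]          # indices of articles the visitor has already read

vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform(articles)

profile = np.asarray(vectors[read_by_user].mean(axis=0))     # mean TF-IDF vector
unread = [i for i in range(len(articles)) if i not in read_by_user]
scores = cosine_similarity(profile, vectors[unread]).ravel()

# Recommend unread articles in decreasing order of similarity to the profile.
for idx, s in sorted(zip(unread, scores), key=lambda p: -p[1]):
    print(f"{s:.2f}  {articles[idx]}")
```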

An Evaluation of Pesticide Stress Induced Proteins in three Cyanobacterial Species - Anabaena fertilissima, Aulosira fertilissima and Westiellopsis prolifica - using SDS-PAGE

The whole-cell protein profiling technique was evaluated for studying differences in the banding patterns of three different species of cyanobacteria, i.e. Anabaena fertilissima, Aulosira fertilissima and Westiellopsis prolifica, under the influence of four different pesticides: 2,4-D (ethyl ester of 2,4-dichlorophenoxyacetic acid), Pencycuron (N-[(4-chlorophenyl)methyl]-N-cyclopentyl-N'-phenylurea), Endosulfan (6,7,8,9,10,10-hexachloro-1,5,5a,6,9,9a-hexahydro-6,9-methano-2,4,3-benzodioxathiepine-3-oxide) and Tebuconazole (1-(4-chlorophenyl)-4,4-dimethyl-3-(1,2,4-triazol-1-ylmethyl)pentan-3-ol). Whole-cell extracts were obtained by sonication (Branson Digital Sonifier S-450D cell disruptor, USA) and were analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). SDS-PAGE analysis of the total protein profiles of Anabaena fertilissima, Aulosira fertilissima and Westiellopsis prolifica showed a linear decrease in protein content with increasing pesticide stress when the cells were exposed to different concentrations of 2,4-D, Pencycuron, Endosulfan and Tebuconazole. The results indicate that different stressors exert specific effects on cyanobacterial protein synthesis.

Heuristics Analysis for Distributed Scheduling using MONARC Simulation Tool

Simulation is a very powerful method for high-performance and high-quality design in distributed systems, and at present it is perhaps the only feasible one, considering the heterogeneity, complexity and cost of distributed systems. In Grid environments, for example, it is hard and even impossible to perform scheduler performance evaluation in a repeatable and controllable manner, as resources and users are distributed across multiple organizations with their own policies. In addition, Grid test-beds are limited, and creating an adequately sized test-bed is expensive and time consuming. Scalability, reliability and fault tolerance become important requirements for distributed systems in order to support distributed computation; a distributed system with such characteristics is called dependable. Large environments, like the Cloud, offer unique advantages, such as low cost, dependability and QoS satisfaction for all users. Resource management in large environments requires performant scheduling algorithms guided by QoS constraints. This paper presents the performance evaluation of scheduling heuristics guided by different optimization criteria. The algorithms for distributed scheduling are analyzed in order to satisfy user constraints while at the same time considering the independent capabilities of resources. This analysis acts as a profiling step for algorithm calibration. The performance evaluation is based on simulation; the simulator is MONARC, a powerful tool for large-scale distributed systems simulation. The novelty of this paper consists in synthetic analysis results that offer guidelines for scheduler service configuration and support empirically based decisions. The results could be used in decisions regarding optimizations to existing Grid DAG scheduling and for selecting the proper algorithm for DAG scheduling in various practical situations.
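For context, the sketch below implements one classic list-scheduling heuristic (min-min) of the kind typically compared in such evaluations; the task and resource figures are made up, and the code is independent of the MONARC simulator itself.

```python
def min_min(task_lengths, resource_speeds):
    """Assign each task to the resource giving the earliest completion time,
    always scheduling next the task with the smallest such minimum."""
    ready = {r: 0.0 for r in resource_speeds}            # resource ready times
    schedule = []
    remaining = dict(task_lengths)
    while remaining:
        task, res, finish = min(
            ((t, r, ready[r] + length / resource_speeds[r])
             for t, length in remaining.items() for r in resource_speeds),
            key=lambda x: x[2],
        )
        ready[res] = finish
        schedule.append((task, res, finish))
        del remaining[task]
    return schedule, max(ready.values())                  # schedule and makespan

tasks = {"t1": 40, "t2": 10, "t3": 25, "t4": 60}          # abstract work units
resources = {"r1": 1.0, "r2": 2.0}                        # relative speeds
plan, makespan = min_min(tasks, resources)
print(plan)
print("makespan:", makespan)
```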

Protein Profiling in an Alanine Aminotransferase-Induced Patient Cohort Using Acetaminophen

Sensitive and predictive DILI (drug-induced liver injury) biomarkers are needed in drug R&D to improve early detection of hepatotoxicity. The discovery of DILI biomarkers with the predictive power to identify individuals at risk of DILI would represent a major advance in the development of personalized healthcare approaches. In this healthy-volunteer acetaminophen study (4 g/day for 7 days, with 3 monitored non-treatment days before and 4 after), 450 serum samples from 32 subjects were analyzed using protein profiling by antibody suspension bead arrays. Multiparallel protein profiles were generated using a DILI target protein array with 300 antibodies, where the antibodies were selected based on previous literature findings of putative DILI biomarkers and a screening process using pre-dose samples from the same cohort. Of the 32 subjects, 16 were found to develop an elevated ALT value (2x baseline; responders). Using the plasma profiling approach together with multivariate statistical analysis, some novel findings linked to lipid metabolism were made and, more importantly, endogenous protein profiles in baseline samples (prior to treatment) with predictive power for ALT elevations were identified.
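The following sketch only illustrates the general shape of such a multivariate analysis: fitting a classifier on baseline (pre-dose) protein intensities to predict later ALT response. The data are synthetic and the model choice (regularized logistic regression with cross-validation) is an assumption, not the statistical method used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_subjects, n_antibodies = 32, 300
baseline = rng.normal(size=(n_subjects, n_antibodies))     # bead-array intensities
responder = np.zeros(n_subjects, dtype=int)
responder[:16] = 1                                         # 16 of 32 responders
# Inject a weak signal into a few proteins for the responders (synthetic only).
baseline[responder == 1, :5] += 0.8

model = LogisticRegression(penalty="l2", C=0.5, max_iter=1000)
auc = cross_val_score(model, baseline, responder, cv=4, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```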

A Comparison of Fuzzy Clustering Algorithms to Cluster Web Messages

Our objective in this paper is to propose an approach capable of clustering web messages. The clustering is carried out by assigning, with a certain probability, texts written by the same web user to the same cluster, based on stylometric features and using fuzzy clustering algorithms. The focus of the present work is on comparing the most popular algorithms in fuzzy clustering theory, namely Fuzzy C-means, Possibilistic C-means and Fuzzy-Possibilistic C-means.
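A minimal Fuzzy C-means sketch, the first of the three compared algorithms, is shown below; the toy "stylometric" feature vectors are placeholders, and the Possibilistic and Fuzzy-Possibilistic variants differ mainly in how the membership (typicality) matrix is updated.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Standard Fuzzy C-means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(X)))
    u /= u.sum(axis=0)                           # memberships sum to 1 per point
    for _ in range(n_iter):
        um = u ** m
        centers = um @ X / um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=0)                       # renormalize over clusters
    return centers, u

# Toy "stylometric" vectors (e.g. normalized word-length / punctuation counts).
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (20, 3)),
               np.random.default_rng(2).normal(2, 0.3, (20, 3))])
centers, u = fuzzy_c_means(X, c=2)
print("cluster centers:\n", centers)
print("hard labels:", u.argmax(axis=0))
```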

Web Application for Profiling Scientific Institutions through Citation Mining

Recently, data mining has been applied to scientific bibliographic databases to analyze the pathways of knowledge or the core scientific relevance of a laureate or a country. This specific case of data mining has been named citation mining, and it is the integration of citation bibliometrics and text mining. In this paper we present an improved Web implementation of statistical physics algorithms to perform the text mining component of citation mining. In particular, we use an entropic-like distance between the compressions of texts as an indicator of the similarity between them. Finally, we have included the recently proposed h-index to characterize scientific production. We have used this Web implementation to identify users, applications and impact of the Mexican scientific institutions located in the State of Morelos.
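As a sketch of a compression-based text distance of the kind described, the example below computes the normalized compression distance with zlib; this illustrates the idea of an entropic-like distance between compressed texts and is not necessarily the exact measure used by the authors.

```python
import zlib

def ncd(a: str, b: str) -> float:
    """Normalized compression distance between two texts."""
    ca = len(zlib.compress(a.encode()))
    cb = len(zlib.compress(b.encode()))
    cab = len(zlib.compress((a + " " + b).encode()))
    return (cab - min(ca, cb)) / max(ca, cb)

text_quartz = "Laser profiling of quartz substrates with femtosecond pulses"
text_diamond = "Femtosecond laser treatment of diamond and quartz substrates"
text_fuzzy = "Fuzzy clustering of web messages using stylometric features"
print("related topics:  ", round(ncd(text_quartz, text_diamond), 3))
print("unrelated topics:", round(ncd(text_quartz, text_fuzzy), 3))
```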

A Simple Affymetrix Ratio-transformation Method Yields Comparable Expression Level Quantifications with cDNA Data

Gene expression profiling is rapidly evolving into a powerful technique for investigating tumor malignancies. Researchers now have a wealth of microarray-based platforms and methods that give them the freedom to conduct large-scale gene expression profiling measurements. Simultaneously, investigations into cross-platform integration methods have started gaining momentum due to their underlying potential to help address a myriad of broad biological issues in tumor diagnosis, prognosis, and therapy. However, comparing results from different platforms remains a challenging task, as various inherent technical differences exist between the microarray platforms. In this paper, we explain a simple ratio-transformation method, which can provide common ground for the cDNA and Affymetrix platforms towards cross-platform integration. The method is based on the characteristic data attributes of the Affymetrix and cDNA platforms. In this work, we considered seven childhood leukemia patients and their gene expression levels on each platform. With a dataset of 822 differentially expressed genes from both platforms, we applied a specific ratio treatment to the Affymetrix data, which subsequently improved its relationship with the cDNA data.
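The exact transformation used in the paper is not reproduced here; the sketch below only illustrates the general idea of converting Affymetrix absolute intensities into within-gene ratios (relative to a per-gene reference) so that they sit on the same log-ratio scale as two-channel cDNA measurements. The synthetic data and the choice of the per-gene mean as the reference are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_genes, n_patients = 822, 7
affy = rng.lognormal(mean=6, sigma=1, size=(n_genes, n_patients))   # intensities

reference = affy.mean(axis=1, keepdims=True)        # per-gene reference level
affy_log_ratio = np.log2(affy / reference)          # ratio-transformed values

# Simulated cDNA log-ratios correlated with the Affymetrix signal.
cdna_log_ratio = affy_log_ratio + rng.normal(0, 0.5, size=affy.shape)

r = np.corrcoef(affy_log_ratio.ravel(), cdna_log_ratio.ravel())[0, 1]
print(f"correlation between platforms after ratio transformation: {r:.2f}")
```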

The Effects of Processing and Preservation on the Sensory Qualities of Prickly Pear Juice

Prickly pear juice has received renewed attention with regard to the effects of processing and preservation on its sensory qualities (colour, taste, flavour, aroma, astringency, visual browning and overall acceptability). Juice was prepared by homogenizing fruit and treating the pulp with pectinase (from Aspergillus niger). The juice treatments applied were sugar addition, acidification, heat treatment, refrigeration, and freezing and thawing. Prickly pear pulp and juice had unique properties (low pH of 3.88, soluble solids of 3.68 °Brix and high titratable acidity of 0.47). Sensory profiling and descriptive analyses revealed that non-treated juice had a bitter taste with high astringency, whereas treated prickly pear juice was significantly sweeter. All treated juices had good sensory acceptance, with values approximating or exceeding 7. Regression analysis of the consumer sensory attributes for non-treated prickly pear juice indicated an overwhelming rejection, while treated prickly pear juice received overall acceptability. Thus, the treatments elicited favourable sensory responses and may have positive implications for consumer acceptability.

Laser Beam Forming of 3 mm Steel Plate and the Evolving Properties

This paper reports the evolving properties of a 3 mm low carbon steel plate after the Laser Beam Forming process (LBF). To achieve this objective, the chemical analyses of the as-received material and the formed components were carried out and compared; thereafter both were characterized through microhardness profiling, microstructural evaluation and tensile testing. The chemical analyses showed an increase in the elemental concentration of the formed component when compared to the as-received material; this can be attributed to the enhancement property of the LBF process. The Ultimate Tensile Strength (UTS) and the Vickers microhardness of the formed component show an increase when compared to the as-received material; this was attributed to strain hardening and grain refinement brought about by the LBF process. The microstructure of the as-received steel consists of equiaxed ferrite and pearlite, while that of the formed component exhibits elongated grains.

Application of Femtosecond Laser Pulses for Nanometer Accuracy Profiling of Quartz and Diamond Substrates and for Multi-Layered Targets and Thin-Film Conductors Processing

Research results and an investigation of optimal parameters for laser cutting and profiling of diamond and quartz substrates with femtosecond laser pulses are presented. Profiles 10 μm in width, ~25 μm in depth and several millimeters long were made. The quality of the profile boundaries was investigated using an AFM (Vecco). The possibility of technological formation of profiles and micro-holes in diamond and quartz substrates with nanometer-scale boundaries is shown. Experimental results of multilayer dielectric coating treatment are also presented, and the possibility of precise removal of the upper layer (70–140 nm thick) is demonstrated. The treatment of thin metal films (60 nm and 350 nm thick) is also considered: isolation tracks (conductance ~10⁻¹¹ S), 1.6–2.5 μm in width, are formed in the conductive metal layers.

Hybrid Intelligent Intrusion Detection System

Intrusion Detection Systems are increasingly a key part of system defense. Various approaches to intrusion detection are currently being used, but they are relatively ineffective. Artificial Intelligence plays a driving role in security services. This paper proposes a dynamic model of an Intelligent Intrusion Detection System, based on a specific AI approach to intrusion detection. The techniques investigated include neural networks and fuzzy logic with network profiling, which uses simple data mining techniques to process the network data. The proposed system is a hybrid system that combines anomaly, misuse and host-based detection. Simple fuzzy rules allow us to construct if-then rules that reflect common ways of describing security attacks. For host-based intrusion detection we use neural networks along with self-organizing maps. Suspicious intrusions can be traced back to their original source path, and any traffic from that particular source will be redirected back to it in the future. Both network traffic and system audit data are used as inputs to both detection components.
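To illustrate the fuzzy if-then style mentioned for the network component, the sketch below combines two piecewise-linear membership functions with a min-based fuzzy AND; the features, membership breakpoints and the single rule are illustrative assumptions, not the rule base of the proposed IDS.

```python
def high(x, low, hi):
    """Piecewise-linear membership in the fuzzy set 'high'."""
    return min(1.0, max(0.0, (x - low) / (hi - low)))

def suspicion(conn_rate, failed_login_ratio):
    """Rule: IF connection rate is high AND failed-login ratio is high
    THEN suspicion is high (min used as the fuzzy AND)."""
    return min(high(conn_rate, 50, 200), high(failed_login_ratio, 0.2, 0.8))

for rate, fail in [(30, 0.05), (120, 0.5), (250, 0.9)]:
    print(f"rate={rate:3d}/s  fail_ratio={fail:.2f}  suspicion={suspicion(rate, fail):.2f}")
```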

A Critical Study of Media Profiling on Society's Social Problems from a British Perspective

This article explores sociological perspectives on social problems and the role of the media, which has a delicate path to tread in balancing its duty to the public and to the victim. Whilst social problems have objective conditions, it is the subjective definition of such problems that determines which social problem comes to the fore and which does not. Further, it explores the roles and functions of policymakers when addressing social problems, the impact of the inception of media profiling, and the advantages and disadvantages of media profiling of social problems. Due to its length, the article focuses on the inception of media profiling; a follow-up article will explore how current media profiling of social problems has evolved since its inception.

A Framework for Personalized Multi-Device Information Communicating System

Due to the mobility of users, many information systems are now developed with the capability of supporting information retrieval by both static and mobile users. Hence, the amount, content and format of the information retrieved need to be tailored according to the device and the user who requested it. This paper therefore presents a framework for the design and implementation of such a system, to be developed for communicating final-examination-related information to the academic community at one university in Malaysia. The concept of personalization will be implemented in the system so that only highly relevant information is delivered to the users. The personalization used will be based on user profiling as well as context. The system in its final state will be accessible through cell phones as well as intranet-connected personal computers.

Performance Improvements of DSP Applications on a Generic Reconfigurable Platform

Speedups from mapping four real-life DSP applications onto an embedded system-on-chip that couples coarse-grained reconfigurable logic with an instruction-set processor are presented. The reconfigurable logic is realized by a two-dimensional array of Processing Elements. A design flow for improving application performance is proposed. Critical software parts, called kernels, are accelerated on the Coarse-Grained Reconfigurable Array. The kernels are detected by profiling the source code. For mapping the detected kernels onto the reconfigurable logic, a priority-based mapping algorithm has been developed. Two 4x4 array architectures, which differ in their interconnection structure among the Processing Elements, are considered. The experiments on eight different instances of a generic system show that significant overall application speedups are achieved for the four applications. The performance improvements range from 1.86 to 3.67, with an average value of 2.53, compared with an all-software execution. These speedups are quite close to the maximum theoretical speedups imposed by Amdahl's law.
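Since the abstract relates the measured speedups to the bound imposed by Amdahl's law, the short calculation below shows how that bound follows from the fraction of execution time spent in accelerated kernels; the kernel fractions and accelerations used are hypothetical values for illustration only.

```python
def amdahl_speedup(kernel_fraction, kernel_speedup):
    """Overall speedup when a fraction of execution time is accelerated."""
    return 1.0 / ((1.0 - kernel_fraction) + kernel_fraction / kernel_speedup)

def amdahl_limit(kernel_fraction):
    """Upper bound as the kernel acceleration tends to infinity."""
    return 1.0 / (1.0 - kernel_fraction)

for frac in (0.5, 0.7, 0.75):
    print(f"kernel fraction {frac:.2f}: "
          f"speedup {amdahl_speedup(frac, 20):.2f} (limit {amdahl_limit(frac):.2f})")
```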

Geographic Profiling Based on Multi-point Centrography with K-means Clustering

Geographic profiling has successfully assisted investigations of serial crimes. Considering the multi-cluster nature of serial criminal spots, we propose a Multi-point Centrography model as a natural extension of Single-point Centrography for geographic profiling. K-means clustering is first performed on the data samples, and Single-point Centrography is then adopted to derive a probability distribution for each cluster. Finally, a weighted combination of these distributions is formed to make the next-crime spot prediction. An experimental study on real cases demonstrates the effectiveness of our proposed model.
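A minimal sketch of the Multi-point Centrography idea follows: cluster the known crime locations with k-means, fit a simple distribution (here a Gaussian per cluster, an assumed choice) around each centre, and combine them with weights proportional to cluster size. The coordinates are synthetic and the details differ from the authors' model.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
spots = np.vstack([rng.normal([2, 2], 0.4, (12, 2)),    # two synthetic clusters
                   rng.normal([7, 5], 0.6, (8, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(spots)

def next_spot_density(xy):
    """Weighted mixture of per-cluster Gaussians evaluated at xy."""
    total = 0.0
    for k in range(km.n_clusters):
        members = spots[km.labels_ == k]
        weight = len(members) / len(spots)
        cov = np.cov(members.T) + 1e-6 * np.eye(2)
        total += weight * multivariate_normal(km.cluster_centers_[k], cov).pdf(xy)
    return total

# Evaluate on a coarse grid and report the most likely next-crime location.
xs, ys = np.meshgrid(np.linspace(0, 9, 90), np.linspace(0, 8, 80))
grid = np.dstack([xs, ys])
density = next_spot_density(grid)
best = np.unravel_index(density.argmax(), density.shape)
print("predicted next-crime spot:", xs[best], ys[best])
```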

Bioinformatics Profiling of Missense Mutations

The ability to distinguish missense nucleotide substitutions that contribute to harmful effects from those that do not is a difficult problem usually addressed through functional in vivo analyses. In this study, instead of current biochemical methods, the effects of missense mutations on protein structure and function were assessed by means of computational methods and information from databases. To this end, the effects of new missense mutations in exon 5 of the PTEN gene on protein structure and function were examined. The gene coding for PTEN has been identified and localized to chromosome region 10q23.3 as a tumor suppressor gene. The use of these methods showed that the c.319G>A and c.341T>G missense mutations, which were identified in patients with breast cancer and Cowden disease, could be pathogenic. This method could be used for the analysis of missense mutations in other genes.