Statistical Study of Drink Markets: Case Study

An important piece of official knowledge in any country is a comprehensive understanding of the markets for each group of products. Drink markets, as a sub-group of the food markets, are among the most important markets of any country. This paper studies these markets in Iran. To do so, two drink products, milk and concentrate, are first selected as pilots. Then, for each product, two groups of information are estimated for the last five years: 1) total consumption (demand) and 2) total production. Finally, the two groups of figures are compared statistically by means of two statistical tests, the t-test and the Mann-Whitney test. Related tables and figures are also provided to illustrate the method more explicitly.
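
To make the final comparison step concrete, the following is a minimal sketch in Python, with made-up placeholder figures rather than the paper's data, of how two five-year series could be compared with both tests using SciPy.

```python
# Minimal sketch: comparing two series with a t-test and a Mann-Whitney test.
# The numbers below are made-up placeholders, not the paper's estimates.
from scipy import stats

consumption = [101.0, 104.5, 108.2, 111.0, 115.3]   # hypothetical annual demand
production  = [ 98.0, 103.1, 109.5, 112.4, 118.0]   # hypothetical annual output

t_stat, t_p = stats.ttest_ind(consumption, production)            # parametric
u_stat, u_p = stats.mannwhitneyu(consumption, production,
                                 alternative="two-sided")          # non-parametric

print(f"t-test:       t = {t_stat:.3f}, p = {t_p:.3f}")
print(f"Mann-Whitney: U = {u_stat:.3f}, p = {u_p:.3f}")
```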

Multivalued Knowledge-Base based on Multivalued Datalog

The basic aim of our study is to give a possible model for handling uncertain information. This model is worked out in the framework of DATALOG. The concept of a multivalued knowledge-base will be defined as a quadruple consisting of background knowledge, a deduction mechanism, a connecting algorithm, and a function set of the program, which helps us determine the uncertainty levels of the results. First the concept of fuzzy Datalog is summarized, then its extensions to intuitionistic and interval-valued fuzzy logic are given and the concept of bipolar fuzzy Datalog is introduced. Based on these extensions, the concept of a multivalued knowledge-base will be defined. This knowledge-base can serve as a possible background for a future agent model.

Adding Edges between One Node and Every Other Node with the Same Depth in a Complete K-ary Tree

This paper proposes a model of adding relations between members of the same level in a pyramid organization structure, which is a complete K-ary tree, such that the communication of information between every member of the organization becomes most efficient. When edges between one node and every other node with the same depth N in a complete K-ary tree of height H are added, an optimal depth N* = H is obtained by minimizing the total path length, which is the sum of the lengths of the shortest paths between every pair of nodes.
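
The optimization described above can also be checked numerically. The sketch below, which assumes networkx and small illustrative values of K and H rather than anything taken from the paper, adds a star of edges from one node to every other node at depth N and evaluates the total path length for each candidate N.

```python
# Minimal sketch: brute-force search for the depth N whose added edges
# minimize the total path length of a complete K-ary tree (illustrative only).
import itertools
import networkx as nx

def total_path_length(K, H, N):
    """Sum of shortest-path lengths over all unordered node pairs after
    connecting one depth-N node to every other node at depth N."""
    G = nx.balanced_tree(K, H)                  # complete K-ary tree, root = 0
    depth = nx.shortest_path_length(G, 0)       # distance of each node from the root
    level = [v for v, d in depth.items() if d == N]
    hub = level[0]                              # "one node" at depth N
    G.add_edges_from((hub, v) for v in level[1:])
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    return sum(lengths[u][v] for u, v in itertools.combinations(G.nodes, 2))

K, H = 3, 4                                     # assumed small example
best = min(range(1, H + 1), key=lambda N: total_path_length(K, H, N))
print("depth minimizing total path length:", best)   # the abstract reports N* = H
```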

ECG Analysis Using a Nature-Inspired Algorithm

This paper presents an algorithm based on wavelet decomposition for feature extraction from the ECG signal and recognition of three types of ventricular arrhythmias using neural networks. A set of Discrete Wavelet Transform (DWT) coefficients, which contains the maximum information about the arrhythmias, is selected from the wavelet decomposition. After that, a novel clustering algorithm based on a nature-inspired method, Ant Colony Optimization, is developed for classifying arrhythmia types. The algorithm is applied to ECG recordings from the MIT-BIH arrhythmia and malignant ventricular arrhythmia databases. We used the Daubechies 4 wavelet in our algorithm. The wavelet decomposition enabled us to perform the task efficiently and produced reliable results.
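
As an illustration of the decomposition step, the sketch below assumes PyWavelets and a placeholder signal in place of a real MIT-BIH record; the per-band energies are only one possible way to summarize the coefficients, not necessarily the paper's selection scheme.

```python
# Minimal sketch: Daubechies-4 wavelet decomposition of an ECG segment.
import numpy as np
import pywt

ecg = np.random.randn(1024)                    # placeholder for a real ECG segment

coeffs = pywt.wavedec(ecg, 'db4', level=4)     # [cA4, cD4, cD3, cD2, cD1]

# Simple per-band descriptors that could feed a classifier or clustering step
energies = [float(np.sum(c ** 2)) for c in coeffs]
print(energies)
```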

Enhancing the Performance of Secure Images Using Wavelet Compression

The increasing popularity of multimedia applications, especially in image processing, places great demand on efficient data storage and transmission techniques. Network communications, such as wireless networks, can easily be intercepted, causing confidential information to be leaked. Unfortunately, conventional compression and encryption methods are too slow for real-time secure image processing. In this research, the Embedded Zerotree Wavelet (EZW) encoder, which is specially designed for wavelet compression, is examined. Based on this algorithm, three methods are proposed to reduce processing time and space while providing security protection strong enough to protect the data.

A Decision Support Tool for Evaluating Mobility Projects

Success is a European project that will implement several clean transport offers in three European cities and evaluate their environmental impacts. The goal of these measures is to improve urban mobility, i.e. the movement of residents within cities, through offers such as park and ride, electric vehicles, hybrid buses, and bike sharing. A list of 28 criteria and 60 measures has been established for the evaluation of these transport projects. The evaluation criteria can be grouped into transport, environment, social, economic, and fuel consumption. This article proposes a decision support system that encapsulates a hybrid approach based on fuzzy logic, multicriteria analysis, and belief theory for evaluating the impacts of urban mobility solutions. A web-based tool called DeSSIA (Decision Support System for Impacts Assessment) has been developed to handle complex data. The tool's functionality ranges from data integration (import of data) and project evaluation to the graphical display of results. The tool's development is based on the MVC (Model, View, Controller) concept, a design pattern for software development that imposes a separation between data, their processing, and their presentation. Particular effort has been put into the ergonomic aspects of the application. Its code complies with current standards (XHTML, CSS) and has been validated by the W3C (World Wide Web Consortium). The main ergonomic focus is on the usability of the application and its ease of learning and adoption. Through technologies such as AJAX (Asynchronous JavaScript and XML), the application is faster and more user-friendly. The strength of our approach is that it handles heterogeneous data (qualitative and quantitative) from various information sources (human experts, surveys, sensors, models, etc.).

Observations about the Principal Components Analysis and Data Clustering Techniques in the Study of Medical Data

The statistical analysis of medical data often requires special techniques because of the particularities of these data. Principal components analysis and data clustering are two statistical data mining methods that are very useful in the medical field, the first as a method to decrease the number of studied parameters, and the second as a method to analyze the connections between the diagnosis and the data about the patient's condition. In this paper we investigate the implications of a specific data analysis technique: data clustering preceded by a selection of the most relevant parameters, made using principal components analysis. Our assumption was that applying principal components analysis before data clustering, in order to select and retain only the most relevant parameters, would improve the accuracy of clustering; the practical results showed the opposite: the clustering accuracy decreases by a percentage approximately equal to the percentage of information loss reported by the principal components analysis.
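
The comparison described above can be reproduced in outline as follows. This sketch uses a public scikit-learn dataset rather than the paper's medical data, so its numbers only illustrate the procedure, not the reported finding.

```python
# Minimal sketch: clustering accuracy with and without a preliminary PCA step.
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Clustering on all parameters
labels_full = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Clustering on principal components retaining ~90% of the variance
pca = PCA(n_components=0.90)
X_red = pca.fit_transform(X)
labels_pca = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_red)

print("information loss:", 1 - pca.explained_variance_ratio_.sum())
print("ARI, all parameters:", adjusted_rand_score(y, labels_full))
print("ARI, after PCA:     ", adjusted_rand_score(y, labels_pca))
```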

Energy Efficient Clustering Algorithm with Global and Local Re-clustering for Wireless Sensor Networks

Wireless sensor networks consist of inexpensive, low-power sensor nodes deployed to monitor the environment and collect data. Gathering information in an energy-efficient manner is critical to prolonging the network lifetime. Clustering algorithms have the advantage of enhancing the network lifetime. Current clustering algorithms usually treat global re-clustering and local re-clustering separately. This paper proposes a combination of these two re-clustering methods to reduce the energy consumption of the network. Furthermore, the proposed algorithm can be applied to homogeneous as well as heterogeneous wireless sensor networks. In addition, cluster head rotation happens only when the cluster head's energy drops below a dynamic threshold value computed by the algorithm. The simulation results show that the proposed algorithm prolongs the network lifetime compared to existing algorithms.
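
The rotation rule can be illustrated with a small sketch. The threshold formula below (a fraction of the cluster's average residual energy, with a tuning factor alpha) is an assumption made for illustration only; the paper computes its dynamic threshold differently.

```python
# Minimal sketch of on-demand cluster head rotation (assumed threshold rule).
def needs_rotation(head_energy, member_energies, alpha=0.8):
    """Return True if the current cluster head should hand over its role.

    alpha is a hypothetical tuning factor; the threshold tracks the cluster's
    average residual energy, so rotation happens only when really needed.
    """
    avg_energy = sum(member_energies) / len(member_energies)
    threshold = alpha * avg_energy
    return head_energy < threshold

# Example: head at 0.35 J, members averaging about 0.5 J -> rotation triggered
print(needs_rotation(0.35, [0.50, 0.55, 0.45, 0.50]))
```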

Design of a Multi-Agent Rescue Robot: Development and Basic Experiments of Master-Slave Type Rescue Robots

A multi-agent robot system for disaster response at calamity scenes is proposed in this paper. The proposed group of rescue robots can perform cooperative reconnaissance and surveillance to achieve a given rescue mission. The multi-agent rescue robot system consists of one master unit and three slave units. The system is intended for detection in harmful environments that humans cannot reach, such as a building contaminated by a virus or a factory with hazardous liquid in its effluent. Using Bluetooth and a communication network, the master unit can connect with the slave units and send information back wirelessly to a computer and monitor. Therefore, rescuers can be informed of real-time information from the calamity area. Furthermore, each slave robot is able to avoid obstacles using ultrasonic sensors and tracks distance and location using encoders and a compass. The master robot can integrate the information from every device to increase the efficiency of exploring and surveying unknown areas.

Layout Based Spam Filtering

Due to the constant increase in the volume of information available to applications in fields varying from medical diagnosis to web search engines, accurate support for similarity becomes an important task. This is also the case for spam filtering techniques, where the similarity between known and incoming messages is the foundation of the spam/not-spam decision. We present a novel approach to filtering based solely on layout, whose goal is not only to correctly identify spam, but also to warn about major emerging threats. We propose a mathematical formulation of the email message layout and, based on it, we elaborate an algorithm to separate different types of emails and to find new, numerically relevant spam types.
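
One very reduced way to picture a layout-based comparison is sketched below; the coarse block counts and the cosine measure are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch: a message layout as a vector of coarse block counts,
# compared with cosine similarity (illustrative only).
import math

def layout_vector(text_blocks, image_blocks, link_blocks, table_blocks):
    return [text_blocks, image_blocks, link_blocks, table_blocks]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

known_spam = layout_vector(1, 3, 5, 0)    # layout of a known spam message
incoming   = layout_vector(1, 2, 6, 0)    # layout of an incoming message
print(cosine(known_spam, incoming))       # high value -> likely the same type
```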

Text-Mining Approach for Evaluation of Affective Management Practices

The purpose of this paper is to propose a text mining approach to evaluate companies' practices in affective management. Affective management argues that it is critical to take stakeholders' affects into consideration during the decision-making process, along with the traditional numerical and rational indices. CSR reports published by companies were collected as source information. Indices were proposed based on the frequency and collocation of words relevant to the affective management concept, using a text mining approach to analyze the text of the CSR reports. In addition, the relationships between the results obtained using the proposed indices and traditional indicators of business performance were investigated using correlation analysis. These correlations were also compared between manufacturing and non-manufacturing companies. The results of this study revealed the possibility of evaluating companies' affective management practices based on publicly available text documents.
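
A bare-bones version of such an index and its correlation with a performance indicator might look like the sketch below; the keyword list and all numbers are hypothetical placeholders, not the paper's indices or data.

```python
# Minimal sketch: a frequency-based affect index over report text, correlated
# with a business performance figure (all values are made up).
from collections import Counter
from scipy.stats import pearsonr

AFFECT_TERMS = {"empathy", "trust", "pride", "gratitude", "passion"}  # assumed list

def affect_index(report_text):
    words = [w.strip(".,;:").lower() for w in report_text.split()]
    counts = Counter(words)
    hits = sum(counts[t] for t in AFFECT_TERMS)
    return hits / max(len(words), 1)          # frequency normalised by report length

print(affect_index("We take pride in the trust of our stakeholders."))

indices     = [0.012, 0.004, 0.009, 0.015, 0.006]   # index per company (placeholder)
performance = [5.1, 2.3, 4.0, 6.2, 2.9]             # e.g. ROE in %, placeholder
r, p = pearsonr(indices, performance)
print(f"r = {r:.2f}, p = {p:.3f}")
```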

Using Copulas to Measure Association between Air Pollution and Respiratory Diseases

Air pollution is still considered one of the major environmental and health issues. There is enough research evidence to show a strong relationship between exposure to air contaminants and respiratory illnesses among children and adults. In this paper we used the copula approach to study a potential relationship between selected air pollutants (PM10 and NO2) and hospital admissions for respiratory diseases. Kendall's tau and Spearman's rho rank correlation coefficients are calculated and used in the copula method. This paper demonstrates that copulas can provide additional information as a measure of association when compared to the standard correlation coefficients. The results show a significant correlation between the selected air pollutants and hospital admissions for most of the selected respiratory illnesses.
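
The rank correlations and their link to copula parameters can be sketched as follows; the data are synthetic placeholders, and the final three lines are the standard textbook relations between Kendall's tau and common copula families, not results from this study.

```python
# Minimal sketch: rank correlations and the copula parameters they imply.
import math
import numpy as np
from scipy.stats import kendalltau, spearmanr

rng = np.random.default_rng(0)
pm10       = rng.gamma(shape=4.0, scale=10.0, size=200)       # placeholder series
admissions = 5 + 0.1 * pm10 + rng.normal(0, 2, size=200)      # placeholder series

tau, _ = kendalltau(pm10, admissions)
rho_s, _ = spearmanr(pm10, admissions)
print(f"Kendall's tau = {tau:.3f}, Spearman's rho = {rho_s:.3f}")

theta_clayton = 2 * tau / (1 - tau)          # Clayton:  tau = theta / (theta + 2)
theta_gumbel  = 1 / (1 - tau)                # Gumbel:   tau = 1 - 1 / theta
rho_gaussian  = math.sin(math.pi * tau / 2)  # Gaussian: tau = (2/pi) * arcsin(rho)
print(theta_clayton, theta_gumbel, rho_gaussian)
```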

Translation of Phraseological Units in Abai Kunanbayev's Poems

Abai Kunanbayev (1845-1904) was a great Kazakh poet, composer and philosopher. Abai's main contribution to Kazakh culture and folklore lies in his poetry, which expresses great nationalism and grew out of Kazakh folk culture. Before him, most Kazakh poetry was oral, echoing the nomadic habits of the people of the Kazakh steppes. We want to introduce our country, its history, traditions and culture abroad, and we can do so only through translations. Only by reading Kazakh works can foreign readers learn who the Kazakhs are, their way of life, their thoughts, and so on. All of this information comes only through translation. The main requirement of a good translation is that it should be natural, that is, it should read as smoothly as the original. A literary translation should be adequate and should follow the original as fully as possible. Translators have to be loyal to the original text; they should not take liberties with it.

A Fuzzy MCDM Approach for Health-Care Waste Management

The management of health-care waste is one of the most important problems in Istanbul, a city with more than 12 million inhabitants, as it is in most developing countries. Negligence in the appropriate treatment and final disposal of health-care waste can lead to adverse impacts on public health and the environment. This paper employs a fuzzy multi-criteria group decision making approach, based on the principles of fusion of fuzzy information, the 2-tuple linguistic representation model, and the technique for order preference by similarity to ideal solution (TOPSIS), to evaluate health-care waste (HCW) treatment alternatives for Istanbul. The evaluation criteria are determined using the nominal group technique (NGT), a method of systematically developing a consensus of group opinion. The employed method can manage information assessed with multi-granularity linguistic term sets in a decision making problem with multiple information sources. The decision making framework employs the ordered weighted averaging (OWA) operator, which encompasses several operators, as the aggregation operator, since it can implement different aggregation rules by changing the order weights. The aggregation process is based on the unification of information by means of fuzzy sets on a basic linguistic term set (BLTS). Then, the unified information is transformed into linguistic 2-tuples in a way that rectifies the information-loss problem of other fuzzy linguistic approaches.
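
For readers unfamiliar with the ranking backbone, the sketch below reduces the approach to a crisp TOPSIS on made-up scores; the fuzzy fusion, 2-tuple representation, and OWA aggregation used in the paper are not reproduced here.

```python
# Minimal sketch: crisp TOPSIS ranking of treatment alternatives (made-up data).
import numpy as np

# Rows: HCW treatment alternatives; columns: criteria (hypothetical scores)
X = np.array([[7.0, 6.0, 8.0],
              [5.0, 8.0, 6.0],
              [9.0, 5.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])              # assumed criterion weights
benefit = np.array([True, True, False])    # last criterion treated as a cost

R = X / np.linalg.norm(X, axis=0)          # vector normalisation
V = R * w                                  # weighted normalised matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus  = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - nadir, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("ranking (best first):", np.argsort(-closeness))
```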

A Survey on Performance Tools for OpenMP

Advances in processor architecture, such as multicore designs, increase the size and complexity of parallel computer systems. With multi-core architectures, different parallel languages can be used to write parallel programs. One of these languages is OpenMP, which is embedded in C/C++ or FORTRAN. Because of this new architecture and its complexity, it is very important to evaluate the performance of OpenMP constructs, kernels, and application programs on multi-core systems. Performance analysis is the activity of collecting information about the execution characteristics of a program. Performance tools consist of at least three interfacing software layers: instrumentation, measurement, and analysis. The instrumentation layer defines the measured performance events. The measurement layer determines which performance events are actually captured and how they are measured by the tool. The analysis layer processes the performance data and summarizes it into a form that can be displayed by the performance tool. In this paper, a number of OpenMP performance tools are surveyed, explaining how each is used to collect, analyse, and display performance data.

Atoms in Molecules: Another Method for Analyzing Dibenzoylmethane

Proton transfer and hydrogen bonding are two aspects of the chemistry of hydrogen that respectively govern the behaviour and structure of many molecules, both simple and complex. All the theoretical enol and keto conformations of 1,3-diphenyl-1,3-propanedione, known as dibenzoylmethane (DBM), have been investigated by means of the atoms in molecules (AIM) theory. It was found that the most stable conformers are those stabilized by hydrogen bridges. The aim of the present paper is a thorough conformational analysis of DBM (with special attention to the chelated cis-enol conformers) in order to obtain detailed information on the geometrical parameters, relative stabilities and rotational motion of the phenyl groups. It is also important to estimate the barrier height for proton transfer and the hydrogen bond strength, which are the main factors governing conformational stability.

Investigations on Some Operations of Soft Sets

Soft set theory was initiated by Molodtsov in 1999. In the past years, this theory has been applied to many branches of mathematics, information science and computer science. In 2003, Maji et al. introduced some operations of soft sets and gave some operational rules. Recently, some of these operational rules have been pointed out to be incorrect. Furthermore, Ali et al., in their paper, introduced and discussed some new operations of soft sets. In this paper, we further investigate the operational rules given by Maji et al. and Ali et al. We obtain necessary and sufficient conditions under which the corresponding operational rules hold and give correct forms for some of them. These results will help us to apply the operational rules of soft sets correctly in the research and application of soft set theory.
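
For context, the basic notions referred to above can be recalled in their standard formulations (Molodtsov's definition of a soft set and the union of Maji et al.), which are quoted from the general literature rather than from this abstract:

```latex
% A pair $(F,A)$ with $A \subseteq E$ is a soft set over the universe $U$ if
\[
  F : A \to \mathcal{P}(U).
\]
% Union in the sense of Maji et al.: $(F,A)\,\tilde{\cup}\,(G,B) = (H, A \cup B)$, where
\[
  H(e) =
  \begin{cases}
    F(e)            & e \in A \setminus B,\\
    G(e)            & e \in B \setminus A,\\
    F(e) \cup G(e)  & e \in A \cap B.
  \end{cases}
\]
```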

Incorporating a Semantic Similarity Measure in a Genetic Algorithm: An Approach for Searching Gene Ontology Terms

The most important property of the Gene Ontology is its terms. This controlled vocabulary is defined to provide consistent descriptions of gene products that are shareable and computationally accessible by humans, software agents, or other machine-readable metadata. Each term is associated with information such as a definition, synonyms, database references, amino acid sequences, and relationships to other terms. This information has made the Gene Ontology broadly applied in microarray and proteomic analysis. However, the process of searching the terms is still carried out using a traditional approach based on keyword matching. The weaknesses of this approach are that it ignores semantic relationships between terms and depends heavily on a specialist to find similar terms. Therefore, this study combines a semantic similarity measure and a genetic algorithm to perform a better retrieval process for searching semantically similar terms. The semantic similarity measure is used to compute the strength of similarity between two terms. Then, the genetic algorithm is employed to perform batch retrievals and to handle the large search space of the Gene Ontology graph. Computational results are presented to show the effectiveness of the proposed algorithm.
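
The combination can be pictured with a toy sketch: a random similarity matrix stands in for the semantic similarity measure, and a simple truncation-selection genetic algorithm evolves a batch of term indices whose average similarity to a query term is maximised. The matrix, parameters, and operators are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: a GA maximising average semantic similarity to a query term.
import random

random.seed(0)
N_TERMS = 50
SIM = [[random.random() for _ in range(N_TERMS)] for _ in range(N_TERMS)]  # toy matrix
QUERY, BATCH = 0, 5                      # query term index, retrieval batch size

def fitness(individual):
    return sum(SIM[QUERY][t] for t in individual) / len(individual)

def mutate(individual):
    child = list(individual)
    child[random.randrange(BATCH)] = random.randrange(1, N_TERMS)
    return child

population = [[random.randrange(1, N_TERMS) for _ in range(BATCH)] for _ in range(20)]
for _ in range(100):                     # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]            # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

best = max(population, key=fitness)
print("best batch:", best, "fitness:", round(fitness(best), 3))
```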

A Study on an Evaluation System and Method for Legacy Systems

In the process of upgrading enterprise information systems, how legacy systems are handled and utilized affects the efficiency of constructing and developing the new system. We propose an evaluation system that comprehensively describes the capacity of legacy information systems in five aspects. We then propose a practical legacy system evaluation method. Based on the evaluation result, the current state of the evaluated legacy system can be determined.

Analysis and Design of a Business Directory for Micro, Small and Medium Enterprises Using the Google Maps API and Multimedia

This paper describes the analysis and design of a business directory for micro, small and medium enterprises (SMEs). The business directory, if implemented, will facilitate and optimize access to SMEs and ease suppliers' access to marketing. The business directory will be equipped with geocoding, so the location of each SME can be easily viewed on a map. The map will be constructed using the functionality of the web-based Google Maps API. The information is presented in multimedia form, which is more interesting and interactive. The methods used to achieve this goal are observation, interviews, and the modeling and classification of the business directory for SMEs.
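
The map described above is built with the web-based Google Maps JavaScript API; as a rough illustration of the geocoding step only, the sketch below uses Google's official Python client and a hypothetical address and API key.

```python
# Minimal sketch: geocoding an SME address (requires a valid Google API key).
import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")      # placeholder key

def locate_sme(address):
    """Return (lat, lng) for an SME's address, or None if it cannot be geocoded."""
    results = gmaps.geocode(address)
    if not results:
        return None
    loc = results[0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

print(locate_sme("Jalan Jenderal Sudirman 1, Jakarta"))   # hypothetical address
```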