Brain Image Segmentation Using Conditional Random Field Based On Modified Artificial Bee Colony Optimization Algorithm

A tumor is an uncontrolled growth of tissue in any part of the body. Tumors are of different types and have different characteristics and treatments. A brain tumor is inherently serious and life-threatening because it grows in the limited space of the intracranial cavity (the space formed inside the skull). Locating the tumor within an MR (magnetic resonance) image of the brain is an integral part of brain tumor treatment. This segmentation task requires classifying each voxel as either tumor or non-tumor, based on the description of the voxel under consideration. Many studies in the medical field apply Markov Random Fields (MRF) to the segmentation of MR images. Although the resulting segmentation is good, computing the probabilities and estimating the parameters is difficult. To overcome these issues, this paper uses a Conditional Random Field (CRF) for segmentation, together with a modified artificial bee colony optimization algorithm and a modified fuzzy possibilistic c-means (MFPCM) algorithm. This work focuses mainly on reducing the computational complexity found in existing methods while achieving higher accuracy. The efficiency of this work is evaluated using parameters such as region non-uniformity, correlation and computation time. The experimental results are compared with existing methods such as MRF with an improved Genetic Algorithm (GA) and the MRF-Artificial Bee Colony (MRF-ABC) algorithm.
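
To make the optimization component concrete, the following is a minimal, generic sketch of the artificial bee colony loop that the paper modifies; the objective function is a toy stand-in for the CRF/MFPCM segmentation energy, and all parameter names and values are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # Toy objective standing in for the segmentation energy to be minimized.
    return np.sum((x - 0.5) ** 2)

def abc_minimize(dim=4, n_food=10, limit=20, max_iter=100):
    foods = rng.random((n_food, dim))            # candidate solutions
    fits = np.array([energy(f) for f in foods])
    trials = np.zeros(n_food)                    # stagnation counters

    def try_neighbour(i):
        # Standard ABC move: perturb one dimension towards a random partner.
        k, j = rng.integers(n_food), rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        f = energy(cand)
        if f < fits[i]:                          # greedy selection
            foods[i], fits[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_food):                  # employed bee phase
            try_neighbour(i)
        probs = 1.0 / (1.0 + fits)
        probs /= probs.sum()
        for i in rng.choice(n_food, n_food, p=probs):  # onlooker phase
            try_neighbour(i)
        worst = np.argmax(trials)                # scout phase: reset a stagnant source
        if trials[worst] > limit:
            foods[worst] = rng.random(dim)
            fits[worst] = energy(foods[worst])
            trials[worst] = 0
    best = np.argmin(fits)
    return foods[best], fits[best]

print(abc_minimize())
```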

Improved Skin Detection Using Colour Space and Texture

Skin detection is an important task for computer vision systems; a good skin detection method is key to a successful overall system. Colour is a good descriptor for image segmentation and classification, and it allows skin to be detected in images. Lighting changes and objects whose colour is similar to skin colour make skin detection difficult. In this paper, we propose a method that uses the YCbCr colour space for skin detection and for eliminating lighting effects, and then uses texture information to eliminate the false regions detected by the YCbCr skin model.
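
As an illustration of the approach, here is a minimal sketch of YCbCr skin thresholding followed by a texture check; the Cb/Cr ranges are the commonly cited ones rather than necessarily the authors' exact bounds, and the texture step is approximated by a simple local-variance filter, which is an assumption of this sketch.

```python
import cv2
import numpy as np

def skin_mask(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    # Threshold chrominance only, so luminance (lighting) changes are ignored.
    mask = ((cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)).astype(np.uint8)

    # Illustrative texture check: skin regions tend to be smooth, so discard
    # pixels whose local gray-level variance is high.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    mean = cv2.blur(gray, (7, 7))
    var = cv2.blur(gray * gray, (7, 7)) - mean * mean
    mask[var > 200.0] = 0
    return mask * 255

demo = np.full((32, 32, 3), (120, 140, 190), np.uint8)  # uniform skin-tone BGR patch
print(skin_mask(demo).mean())                           # 255.0: fully detected as skin
```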

The Application of Queuing Theory in Multi-Stage Production Lines

The purpose of this work is to examine a multi-product, multi-stage battery production line and to improve the performance of the assembly line by determining the efficiency of each workstation. Data were collected from every workstation: the throughput rate, the number of operators, and the number of parts that arrive and leave during processing. At least ten samples of the arrival and departure counts were collected so that the data could be analyzed with the Chi-Squared Goodness-of-Fit Test and queuing theory. The measures from this model served as a comparison with the standard data available in the company. The task time values were validated by comparing them with the task time values in the company database. Several performance factors for the multi-product, multi-stage battery production line are presented, along with the efficiency of each workstation. The total production time for each part can be determined by adding up the total task time at each workstation. Based on the analysis, improvements should be made to reduce queuing time and increase efficiency; one possible action is to increase the number of operators who manually operate a workstation.
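
As a worked illustration of the queuing measures involved, the sketch below computes textbook M/M/1 workstation metrics (utilization, queue length, waiting time); the arrival and service rates are made-up example values, not the company's data.

```python
def mm1_measures(lam, mu):
    """lam: arrival rate (parts/hour); mu: service rate (parts/hour)."""
    rho = lam / mu                  # utilization (must be < 1 for stability)
    if rho >= 1:
        raise ValueError("unstable station: arrival rate exceeds capacity")
    lq = rho ** 2 / (1 - rho)       # mean number of parts waiting in queue
    wq = lq / lam                   # mean waiting time in queue (hours)
    w = wq + 1 / mu                 # mean total time at the station
    return {"utilization": rho, "queue_length": lq, "wait_h": wq, "total_h": w}

# Adding an operator can be modeled, roughly, as raising the service rate mu.
print(mm1_measures(lam=18, mu=20))   # one operator: heavily loaded station
print(mm1_measures(lam=18, mu=40))   # doubled capacity: queue nearly vanishes
```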

Heuristic for Accelerating Run-Time Task Mapping in NoC-Based Heterogeneous MPSoCs

In this paper, we propose a new packing strategy for finding a free resource for run-time mapping of application tasks onto a NoC-based heterogeneous MPSoC. The proposed strategy minimizes the task mapping time in addition to placing communicating tasks close to each other. To evaluate our approach, a comparative study is carried out on a platform containing PEs that each support a single task. Experiments show that our strategy provides better results than the latest dynamic mapping strategies reported in the literature.
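
The core idea of placing communicating tasks close together can be sketched as picking the free PE nearest (in Manhattan hops) to an already-mapped partner; the mesh model and names below are illustrative assumptions, not the paper's exact packing strategy.

```python
def nearest_free_pe(free, anchor):
    """free: set of (x, y) free PEs; anchor: (x, y) of the partner task."""
    ax, ay = anchor
    # Manhattan distance approximates hop count on a 2D NoC mesh.
    return min(free, key=lambda p: abs(p[0] - ax) + abs(p[1] - ay))

mesh = {(x, y) for x in range(4) for y in range(4)}   # 4x4 NoC
used = {(1, 1)}                                       # partner task mapped here
pe = nearest_free_pe(mesh - used, anchor=(1, 1))
print("map next task to PE", pe)   # one of the hop-1 neighbours of (1, 1)
```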

Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis

One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. This paper discusses the possibility of determining a financial institution's PD with credit-scoring models. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks. These models are then compared and verified on a control sample in order to choose the best one. The second part of the paper applies the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. It is no less important, however, to be able to estimate the evolution of PD in the future. The second task of this paper is therefore to estimate the probability distribution of the future PD of the Czech banks: the values of particular indicators are sampled randomly and the distribution of PDs is estimated, under the assumption that the indicators follow a multidimensional subordinated Lévy model (specifically, the Variance Gamma model and the Normal Inverse Gaussian model). Although the obtained results show that all the banks are relatively healthy, there is still a non-negligible probability that a financial crisis will occur, as indicated by various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
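
A hedged sketch of the two steps described above follows: scoring a bank's PD with a logit model and then simulating the distribution of future PDs by sampling the indicators. The coefficients are invented, and a plain multivariate normal sampler stands in for the paper's Variance Gamma / NIG subordinated Lévy models.

```python
import numpy as np

rng = np.random.default_rng(1)

beta0, beta = -4.0, np.array([-3.0, 2.5])   # assumed fitted logit coefficients

def pd_logit(x):
    """x: indicator vector(s), e.g. [capital ratio, NPL ratio]."""
    z = beta0 + x @ beta
    return 1.0 / (1.0 + np.exp(-z))

# Monte Carlo distribution of future PD: sample indicator vectors and read off
# upper quantiles, which is how "crisis" scenarios are identified.
mean = np.array([0.15, 0.05])                   # example indicator means
cov = np.array([[0.001, 0.0], [0.0, 0.0004]])   # example covariance
sims = rng.multivariate_normal(mean, cov, size=100_000)
pds = pd_logit(sims)
print("median PD:      ", np.quantile(pds, 0.5))
print("99% quantile PD:", np.quantile(pds, 0.99))
```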

On the Interactive Search with Web Documents

Due to the large amount of information on the World Wide Web (WWW, web) and the lengthy, usually linearly ordered result lists of web search engines, which do not indicate semantic relationships between their entries, searching for topically similar and related documents can become a tedious task. In particular, the process of formulating queries with proper terms that represent a specific information need requires much effort from the user. This problem gets even bigger when the user's knowledge of a subject and its technical terms is not sufficient to do so. This article presents DocAnalyser, a new interactive search application that addresses this problem by enabling users to find similar and related web documents based on automatic query formulation and state-of-the-art search word extraction. Additionally, this tool can be used to track topics across semantically connected web documents.
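
As a rough illustration of automatic query formulation, the sketch below extracts the highest-scoring TF-IDF terms from a source document and joins them into a query; DocAnalyser's actual search word extraction is more sophisticated, so this is only an assumed stand-in, with a toy corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "web search engines return linearly ordered result lists",
    "semantic relationships between documents help interactive search",
    "query formulation with proper terms requires effort from the user",
]
source = "interactive search for related web documents via query formulation"

vec = TfidfVectorizer(stop_words="english").fit(corpus + [source])
scores = vec.transform([source]).toarray()[0]
terms = vec.get_feature_names_out()
top = sorted(zip(scores, terms), reverse=True)[:4]   # four best search words
query = " ".join(t for _, t in top)
print("auto-formulated query:", query)
```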

Deviations and Defects of the Sub-Task’s Requirements in Construction Projects

The pattern of sub-tasks in terms of deviations and defects should be identified and understood in order to improve the quality of practices in construction projects. Therefore, the sub-task's susceptibility to deviations and defects has been evaluated and classified via six classifications proposed in this study. Thirty-four case studies of a specific sub-task (compression members in constructed concrete structures) were collected from seven construction projects in order to examine the study's proposed classifications. The study revealed that the sub-task has a high sensitivity to deviation, with 91% of the cases recorded as deviations; however, only 19% of the cases were recorded as defects. Other findings were that the actual work during the execution process is a major source of deviation for this sub-task (74%), while only 26% of deviations stemmed from both the design documentation and the actual work. These findings imply that the study's proposed classifications can be used to determine the pattern of each sub-task and to develop proactive actions that overcome sub-task deviations and defects.

Research and Development of Intelligent Cooling Channels Design System

The cooling channels of an injection mould play a crucial role in determining the productivity of the moulding process and the quality of the product. Designing high-quality cooling channels is not a simple task. In this paper, an intelligent cooling channel design system is studied, covering automatic layout of cooling channels, interference checking and assembly of accessories. Automatic layout of cooling channels using a genetic algorithm is analyzed. By integrating experience-based criteria for designing cooling channels and considering factors such as mould temperature and interference checking, the automatic layout of cooling channels is implemented. A method for checking interference based on a distance-constraint algorithm and a function for automatic, continuous assembly of accessories are developed and integrated into the system. Case studies demonstrate the feasibility and practicality of the intelligent design system.
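
A minimal sketch of a distance-constraint interference check is shown below: a cooling channel is modeled as a cylinder around a line segment and a mould component as a sphere, and they interfere when the segment-to-point distance falls below the sum of the radii plus a minimum wall thickness. The geometry and names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Shortest distance from point p to segment ab (all 3D arrays)."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def interferes(channel_a, channel_b, channel_r, part_center, part_r, gap=2.0):
    """gap: minimum wall thickness (mm) required between the two surfaces."""
    d = point_segment_distance(part_center, channel_a, channel_b)
    return d < channel_r + part_r + gap

a, b = np.array([0.0, 0, 0]), np.array([100.0, 0, 0])   # channel axis (mm)
pin = np.array([50.0, 6.0, 0])                          # ejector pin center
print(interferes(a, b, channel_r=4.0, part_center=pin, part_r=3.0))  # True
```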

Forecasting US Dollar/Euro Exchange Rate with Genetic Fuzzy Predictor

Fuzzy systems have been successfully used for exchange rate forecasting. However, a fuzzy system is confusing and complex for an expert to design, since a large set of parameters (the fuzzy knowledge base) must be selected; choosing an appropriate fuzzy knowledge base for exchange rate forecasting is not a simple task. Researchers often overlook the effect of the fuzzy knowledge base on the forecasting performance of the fuzzy system. This paper proposes a genetic fuzzy predictor to forecast the future value of the daily US Dollar/Euro exchange rate time series. A set of fuzzy predictors forecasts the same time series, each with a different fuzzy partition. Each fuzzy predictor is built in two stages, where each stage is performed by a real-coded genetic algorithm.
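
The sketch below illustrates the real-coded GA ingredient under stated assumptions: a chromosome encodes the centers of a triangular fuzzy partition, and fitness is the one-step forecast error of a simple zero-order fuzzy predictor on a toy series. The rule form, fitness and genetic operators are all assumptions of this sketch, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(2)
series = np.sin(np.linspace(0, 20, 200)) * 0.1 + 1.1   # toy "exchange rate"

def tri(x, a, b, c):
    # Triangular membership function with peak at b, support [a, c].
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def forecast(chrom, x):
    # Zero-order rules: prediction is the membership-weighted mean of centers.
    centers = np.sort(chrom)
    a = np.concatenate(([centers[0] - 0.1], centers[:-1]))
    c = np.concatenate((centers[1:], [centers[-1] + 0.1]))
    w = tri(x, a, centers, c)
    return np.sum(w * centers) / (np.sum(w) + 1e-9)

def fitness(chrom):
    preds = np.array([forecast(chrom, x) for x in series[:-1]])
    return -np.mean((preds - series[1:]) ** 2)          # negative MSE

pop = rng.uniform(1.0, 1.2, size=(20, 5))               # real-coded population
for _ in range(50):
    fit = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(fit)[-10:]]                # truncation selection
    kids = (parents[rng.integers(10, size=10)] +
            parents[rng.integers(10, size=10)]) / 2     # arithmetic crossover
    kids += rng.normal(0, 0.01, kids.shape)             # Gaussian mutation
    pop = np.vstack([parents, kids])

fit = np.array([fitness(c) for c in pop])
print("best fitness (negative MSE):", fit.max())
```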

Concept for Determining the Focus of Technology Monitoring Activities

Identification and selection of appropriate product and manufacturing technologies are key factors for the competitiveness and market success of technology-based companies. Therefore, many companies perform technology intelligence (TI) activities to ensure that evolving technologies are identified at the right time. Technology monitoring is one of the three base activities of TI, besides scanning and scouting. As technological progress accelerates, more and more technologies are being developed, and against the background of limited resources it is therefore necessary to focus TI activities. In this paper we propose a concept for defining appropriate search fields for technology monitoring. This limitation of the search space leads to more concentrated monitoring activities. The concept is introduced and demonstrated through an anonymized case study conducted within an industry project at the Fraunhofer Institute for Production Technology IPT. The described concept provides a customized monitoring approach suitable for use in technology-oriented companies. It is shown that the definition of search fields and search tasks is a suitable method for defining topics of interest and thus aligning monitoring activities. Current as well as planned product, production and material technologies, together with existing skills, capabilities and resources, form the basis for deriving relevant search areas. To further improve technology monitoring, the proposed concept should be extended in future research, e.g. by defining relevant monitoring parameters.

Organization of the Purchasing Function for Innovation

Innovations not only contribute to the competitiveness of a company but also have positive effects on revenues: on average, product innovations account for 14 percent of companies' sales. Innovation management has changed substantially during the last decade because of the growing reliance on external partners. As a consequence, a new task for purchasing arises, since firms need to understand which suppliers actually have high potential to contribute to the innovativeness of the firm and which do not. Proper organization of the purchasing function is important, since the majority of manufacturing companies deal with substantial material costs that pass through the purchasing function. In the past, the purchasing function was largely seen as a transaction-oriented, clerical function, but today purchasing is the intermediary with supply chain partners that contributes to innovations, be they product or process innovations. Therefore, the purchasing function has to be organized differently to enable the firm's innovation potential. However, innovations are inherently risky. There is behavioral risk (that one partner will take advantage of the other party), technological risk stemming from the complexity of products, manufacturing processes and incoming materials, and finally market risk, which in effect judges the value of the innovation. These risks are investigated in this work; specifically, technological risks, which concern the complexity of products and processes, are investigated more thoroughly. Buying components embodying such leading-edge technologies necessitates careful investigation of technical features and is therefore usually conducted by a team of experts. It is therefore hypothesized that the higher the technological risk, the higher the centralization of the purchasing function as an interface with other supply chain members. The main contribution of this research lies in the fact that the analysis was performed on a large data set of 1,493 companies from 25 countries, collected in the GMRG 4 survey. Most analyses of the purchasing function are case studies of innovative firms; this study therefore contributes empirical evaluations that can be generalized.

General Awareness of Teenagers in Information Security

The use of IT equipment has become a part of everyday life. However, each device that is part of cyberspace should be secured against unauthorized use. It is very important to know not only the basics of securing these devices but also the basics of safe conduct for their owners. This information should be part of every computer science curriculum in primary and secondary schools. The work therefore focuses on educating pupils in primary and secondary schools about the Internet. An analysis of the current state describes approaches to educating pupils in security issues on the Internet. The paper presents a questionnaire-based survey, carried out in the Czech Republic, whose task was to ascertain the opinions of pupils in primary and secondary schools on communication in social networks. The research showed that awareness of socio-pathological phenomena in the Internet environment is very low. Based on the results, appropriate ways of teaching this issue were proposed, together with a proposal for its inclusion in the curriculum of primary and secondary schools.

Civil Protection in Mass Methanol Poisoning in the Czech Republic

The paper focuses on methods for handling the crisis situation in the Czech Republic associated with the mass methanol poisoning. The emphasis is put on the tasks of individual state bodies and of the Integrated Rescue System during the handling of the crisis. The theoretical part describes poisonings, ways of intoxication, types of intoxicants and cases of mass poisoning by dangerous substances around the world. The practical part describes the development, causes and resolution of the extraordinary event, the mass methanol poisoning in the Czech Republic. The main emphasis is put on the crisis management of the Czech Republic in dealing with this situation.

Nature Inspired Metaheuristic Algorithms for Multilevel Thresholding Image Segmentation - A Survey

Segmentation is one of the essential tasks in image processing, and thresholding is one of the simplest and most effective techniques for performing it. The primary objective of bi-level or multilevel thresholding for image segmentation is to determine the best threshold values. Various techniques have been proposed to achieve multilevel thresholding. This paper surveys some nature-inspired metaheuristic algorithms for multilevel thresholding in image segmentation: the Particle Swarm Optimization (PSO) algorithm, Artificial Bee Colony optimization (ABC), the Ant Colony Optimization (ACO) algorithm and the Cuckoo Search (CS) algorithm.
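
A brief sketch of the objective these metaheuristics typically optimize is given below: Otsu's between-class variance for a vector of thresholds. A metaheuristic such as PSO, ABC, ACO or CS would search for the threshold vector maximizing this function; here only the function and an exhaustive bi-level baseline are shown, on a toy histogram.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """hist: 256-bin normalized histogram; thresholds: sorted gray levels."""
    edges = [0] + list(thresholds) + [256]
    levels = np.arange(256)
    mu_total = np.sum(levels * hist)
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum()                   # class probability
        if w > 0:
            mu = np.sum(levels[lo:hi] * hist[lo:hi]) / w
            var += w * (mu - mu_total) ** 2     # weighted class separation
    return var

hist = np.zeros(256)
hist[40:60] = 1
hist[180:220] = 1                               # toy bimodal histogram
hist /= hist.sum()
best = max(range(1, 256), key=lambda t: between_class_variance(hist, [t]))
print("best bi-level threshold:", best)         # falls between the two modes
```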

Tool for Fast Detection of Java Code Snippets

This paper presents general results on the Java source code snippet detection problem. We propose a tool that uses graph and subgraph isomorphism detection. A number of solutions to these tasks have been proposed in the literature; however, although these solutions are very fast, they compare only constant static trees. Our solution allows an input sample to be entered dynamically with the Scripthon language while preserving an acceptable speed. We used several optimizations to achieve a very low number of comparisons during the matching algorithm.
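
The underlying operation can be illustrated with a toy example: checking whether a small pattern graph (as would be derived from a Scripthon sample) occurs as a subgraph of a larger code graph. networkx's VF2 matcher is used here as an assumed stand-in for the paper's optimized matcher, with node labels mimicking AST node kinds.

```python
import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher

code = nx.DiGraph()      # graph built from a parsed Java method (toy version)
code.add_edges_from([("method", "if"), ("if", "call"), ("method", "return")])
nx.set_node_attributes(code, {n: n for n in code}, "kind")

pattern = nx.DiGraph()   # pattern: an if-statement guarding a call
pattern.add_edge("if", "call")
nx.set_node_attributes(pattern, {n: n for n in pattern}, "kind")

# Nodes match when their AST kinds agree; VF2 then searches for an embedding.
match = DiGraphMatcher(code, pattern,
                       node_match=lambda a, b: a["kind"] == b["kind"])
print("snippet found:", match.subgraph_is_isomorphic())   # True
```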

Simulation Programs for the Education of Crisis Management Members

This paper deals with simulation programs and technologies used in the educational process for members of crisis management. Risk analysis, simulation, preparation and planning are among the main activities of crisis management personnel. A correctly performed simulation of an emergency defines the extent of the danger; on this basis, it is possible to effectively prepare and plan measures to minimize damage. The paper focuses on the simulation programs used for training at the University of Defence. Implementing the outputs of simulation programs in the decision-making processes of crisis staffs is one of the main tasks of the research project.

Comparative Analysis of Diverse Collection of Big Data Analytics Tools

Over the past decade, a lot of effort and many studies have gone into developing proficient tools for performing various tasks on big data. Recently, big data has received a lot of publicity, for good reason: large and complex collections of datasets are difficult to process with traditional data processing applications, which makes it all the more necessary to produce dedicated big data tools. The main aim of big data analytics is to apply advanced analytic techniques to very large, diverse datasets, ranging in size from terabytes to zettabytes and of diverse types, structured or unstructured, batch or streaming. Big data techniques are useful for data sets whose size or type is beyond the capability of traditional relational databases to capture, manage and process with low latency. The resulting challenges have led to the emergence of powerful big data tools. In this survey, a varied collection of big data tools is presented and compared with respect to their salient features.

Electroencephalography Based Brain-Computer Interface for Cerebellum Impaired Patients

In healthy humans, the cortical brain rhythm shows specific mu (~6-14 Hz) and beta (~18-24 Hz) band patterns during both real and imaginary motor movements. As cerebellar ataxia is associated with impairment of precise motor movement control as well as motor imagery, ataxia is an ideal model system in which to study the role of the cerebello-cortical circuit in rhythm control. We hypothesize that the EEG characteristics of ataxic patients differ from those of controls during the performance of a Brain-Computer Interface (BCI) task. Ataxia and control subjects showed a similar distribution of mu power during cued relaxation. During cued motor imagery, however, the ataxia group showed a significantly different spatial distribution of the response, while the control group showed the expected decrease in mu-band power (localized to the motor cortex).
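
The core measurement can be sketched as follows: the mu-band power (taken here as 8-13 Hz, within the paper's ~6-14 Hz range) of an EEG channel, estimated with Welch's method; the synthetic signal and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
# Synthetic "EEG": a 10 Hz mu rhythm buried in noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(3).normal(size=t.size)

f, psd = welch(eeg, fs=fs, nperseg=fs * 2)
band = (f >= 8) & (f <= 13)
mu_power = psd[band].sum() * (f[1] - f[0])  # integrate PSD over the mu band
print(f"mu-band power: {mu_power:.3f}")
# Event-related desynchronization would appear as a drop in this quantity
# during motor imagery relative to rest.
```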

Labor Productivity in the Construction Industry: Factors Influencing Spanish Construction Labor Productivity

This research paper aims to identify, analyze and rank factors affecting labor productivity in Spain with respect to their relative importance. Using a selected set of 35 factors, a structured questionnaire survey was used to collect data from companies. The target population comprises a random, representative sample of practitioners in the Spanish construction industry. Findings reveal that the top five ranked factors are: (1) shortage or late supply of materials; (2) clarity of the drawings and project documents; (3) clear and daily task assignment; (4) tool or equipment shortages; and (5) level of skill and experience of laborers. Additionally, this research also aims to provide simple and comprehensive recommendations that construction managers can implement for effective management of construction labor forces.
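
The paper does not state its ranking formula, but a commonly used choice for such surveys is the Relative Importance Index, RII = sum(w) / (A * N), where w are the Likert ratings, A the top of the scale and N the number of respondents; the sketch below assumes that formula and invented ratings purely for illustration.

```python
def rii(responses, max_scale=5):
    """responses: list of Likert ratings (1..max_scale) for one factor."""
    return sum(responses) / (max_scale * len(responses))

factors = {
    "late supply of materials": [5, 5, 4, 5, 4],
    "clarity of drawings":      [4, 5, 4, 4, 4],
    "tool shortages":           [3, 4, 4, 3, 4],
}
# Rank factors by descending RII, as such surveys typically report them.
for name, r in sorted(factors.items(), key=lambda kv: -rii(kv[1])):
    print(f"{rii(r):.2f}  {name}")
```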

Selecting the Best Sub-Region for Indexing Images in the Case of Weak Segmentation Based on Local Color Histograms

The color histogram is considered the oldest method used by CBIR systems for indexing images. However, global histograms do not capture spatial information, which is why later techniques have attempted to overcome this limitation by introducing a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods, such as the CCV (Color Coherence Vector), are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. The dissimilarity between two images is thereby reduced to computing the distances between the N local histograms of the two images, resulting in N*N values; generally, the lowest value is used to rank the images, which means the lowest value designates the sub-region used to index the images of the queried collection. In this paper, we examine the local histogram indexing method and compare its results against those given by the global histogram. We also address another noteworthy issue that arises when relying on local histograms, namely which of the N*N values to trust when comparing images, in other words, which of the sub-regions to use to index the images. Based on the results achieved here, relying on local histograms, which places extra overhead on the system by requiring an additional preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than relying on the local histogram with the lowest distance to the query histograms.
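
A compact sketch of the local-histogram scheme described above: split each image into a grid of N blocks, compute a color histogram per block, and score two images by the minimum of the N*N pairwise block-histogram distances. The block count, bin count and L1 metric are illustrative choices (and the blocks here are non-overlapping for simplicity, whereas the paper allows overlap).

```python
import numpy as np

def block_histograms(img, n=3, bins=8):
    """img: HxWx3 uint8 array; returns n*n normalized color histograms."""
    h, w = img.shape[:2]
    hists = []
    for i in range(n):
        for j in range(n):
            blk = img[i * h // n:(i + 1) * h // n, j * w // n:(j + 1) * w // n]
            hist, _ = np.histogramdd(blk.reshape(-1, 3),
                                     bins=(bins,) * 3, range=[(0, 256)] * 3)
            hists.append(hist.ravel() / hist.sum())
    return np.array(hists)

def min_block_distance(img_a, img_b):
    ha, hb = block_histograms(img_a), block_histograms(img_b)
    # All pairwise L1 distances between block histograms; keep the smallest,
    # which is the value conventionally used to rank the images.
    d = np.abs(ha[:, None, :] - hb[None, :, :]).sum(axis=2)
    return d.min()

rng = np.random.default_rng(4)
a = rng.integers(0, 256, (60, 60, 3), dtype=np.uint8)
print(min_block_distance(a, a))   # 0.0 for identical images
```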