A Multi-Layer Consistency Protocol for Replica Management in Large-Scale Systems

Large-scale systems such as computational Grids are distributed computing infrastructures that provide globally available network resources. The evolution of information processing systems in Data Grids is characterized by a strong decentralization of data across several domains, with the objective of ensuring data availability and reliability in order to provide fault tolerance and scalability, which is only possible through the use of replication techniques. Unfortunately, these techniques come at a high cost, because consistency must be maintained between the distributed data. Nevertheless, agreeing to live with certain imperfections can improve system performance by improving concurrency. In this paper, we propose a multi-layer protocol combining pessimistic and optimistic approaches, designed for data consistency maintenance in large-scale systems. Our approach is based on a hierarchical representation model with three layers and serves a dual purpose: first, it makes it possible to reduce response times compared to a fully pessimistic approach, and second, it improves the quality of service compared to an optimistic approach.

Queueing Systems and Their Simulation

In queueing theory, it is usually assumed that customer arrivals follow a Poisson process and that service times are exponentially distributed. Under these assumptions, the behaviour of the queueing system can be described by means of Markov chains and it is possible to derive the characteristics of the system. In the paper, these theoretical approaches are presented for several types of systems, and it is also shown how to compute the characteristics in situations where these assumptions are not satisfied.
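
As an illustration of the simulation route mentioned above for cases where the Markov assumptions do not hold, the following minimal sketch estimates the mean waiting time of a single-server FIFO queue via Lindley's recursion and, for exponential interarrival and service times, compares it with the analytic M/M/1 value. The function names and parameter values are illustrative, not taken from the paper; replacing the two expovariate calls with any other distribution covers the non-Markovian case.

import random

def simulate_queue_wait(arrival_rate, service_rate, n_customers=200_000, seed=1):
    # Mean waiting time in queue of a single-server FIFO system, estimated with
    # Lindley's recursion: W[n+1] = max(0, W[n] + S[n] - A[n+1]).
    rng = random.Random(seed)
    wait, total, prev_service = 0.0, 0.0, 0.0
    for _ in range(n_customers):
        interarrival = rng.expovariate(arrival_rate)   # swap for any other law
        wait = max(0.0, wait + prev_service - interarrival)
        total += wait
        prev_service = rng.expovariate(service_rate)   # swap for any other law
    return total / n_customers

if __name__ == "__main__":
    lam, mu = 0.8, 1.0                              # illustrative rates, rho = 0.8
    print("simulated mean wait:", simulate_queue_wait(lam, mu))
    print("analytic M/M/1 wait:", lam / (mu * (mu - lam)))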

Software Development Processes Maturity versus Software Processes and Products Measurement

The unsatisfactory effectiveness of software systems development and enhancement projects is one of the main reasons why attempts are being made in software engineering to draw on experience from other engineering disciplines. In spite of the specificity of software products and processes, a belief has emerged that the execution of software projects could be more effective if these objects were subject to measurement, as is the case in other engineering disciplines for which measurement is an inherent feature. Thus, objective and reliable approaches to the measurement of software processes and products have been sought in software engineering for several decades. This is evidenced, among other things, by the current version of the CMMI for Development model. This paper analyzes the approach to software process and product measurement proposed in the latest version of this model, indicating the growing acceptance of this issue in software engineering.

The Main Principles of Text-to-Speech Synthesis System

In this paper, the main principles of a text-to-speech synthesis system are presented. The problems that arise when developing a speech synthesis system are described. The approaches used and their application in speech synthesis systems for the Azerbaijani language are shown.

A Formative Assessment Model within the Competency-Based-Approach for an Individualized E-learning Path

E-learning is not restricted to the use of new technologies for online content; it also induces the adoption of new approaches to improve the quality of education. This quality depends on the ability of these approaches (technical and pedagogical) to provide an adaptive learning environment. Thus, the environment should include features that convey intentions and meet the educational needs of learners by providing a customized learning path towards acquiring the targeted competency. In our proposal, we believe that an individualized learning path requires knowledge of the learner. Therefore, it must pass through a personalized diagnosis to identify precisely the competency gaps to fill and to reduce the cognitive load. To personalize the diagnosis and measure the competency gap pertinently, we suggest implementing formative assessment in the e-learning environment, and we propose the introduction of a pre-regulation process in formative assessment, involving its individualization and implementation in e-learning.

Estimating Correlation Dimension on Japanese Candlesticks: Application to FOREX Time Series

Recognizing behavioral patterns of financial markets is essential for traders. The Japanese candlestick chart is a common tool for visualizing and analyzing such patterns in an economic time series. Since the world was introduced to Japanese candlestick charting, traders have seen how combining this tool with intelligent technical approaches creates a powerful formula for savvy investors. This paper proposes a generalization of the Grassberger-Procaccia box-counting method, based on computing the correlation dimension of Japanese candlesticks instead of the commonly used 'close' points. The results of this method, applied to several foreign exchange rates against the IRR (Iranian Rial), show a satisfactorily lower chaotic dimension for the Japanese candlestick series than the regular Grassberger-Procaccia method applied merely to the close points of the same candles. This suggests that there is valuable information inside the candlesticks.
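
The candlestick generalization itself is not reproduced here, but the following sketch shows the standard Grassberger-Procaccia estimate it builds on: the correlation sum C(r) is computed on a delay embedding of a scalar series (e.g., close prices) and the correlation dimension is read off as the slope of log C(r) versus log r. The embedding dimension, delay, radius range and synthetic test series are all illustrative assumptions.

import numpy as np
from scipy.spatial.distance import pdist

def delay_embed(series, dim, tau=1):
    # Time-delay embedding of a scalar series into R^dim.
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

def correlation_dimension(series, dim=4, tau=1, n_radii=15):
    # Grassberger-Procaccia: correlation dimension = slope of log C(r) vs log r.
    x = delay_embed(np.asarray(series, dtype=float), dim, tau)
    d = pdist(x, metric="chebyshev")                 # pairwise distances
    radii = np.logspace(np.log10(np.percentile(d, 2)),
                        np.log10(np.percentile(d, 50)), n_radii)
    c = np.array([(d < r).mean() for r in radii])    # correlation sum C(r)
    mask = c > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(1500)
    series = np.sin(0.07 * t) + 0.05 * rng.standard_normal(t.size)  # toy 'close' series
    print("estimated correlation dimension:", correlation_dimension(series))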

An Information Theoretic Approach to Rescoring Peptides Produced by De Novo Peptide Sequencing

Tandem mass spectrometry (MS/MS) is the engine driving high-throughput protein identification. Protein mixtures, possibly representing thousands of proteins from multiple species, are treated with proteolytic enzymes that cut the proteins into smaller peptides, which are then analyzed to generate MS/MS spectra. The task of determining the identity of a peptide from its spectrum is currently the weak point in the process. Current approaches to de novo sequencing are able to compute candidate peptides efficiently; the problem lies in the limitations of current scoring functions. In this paper we introduce the concept of a proteome signature. By examining proteins and compiling proteome signatures (amino acid usage), it is possible to characterize likely combinations of amino acids and better distinguish between candidate peptides. Our results strongly support the hypothesis that a scoring function that considers amino acid usage patterns is better able to distinguish between candidate peptides, which in turn leads to higher accuracy in peptide prediction.
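
The paper's exact scoring function is not given in the abstract, so the following is only a minimal sketch of the underlying idea: compile a proteome signature as amino-acid usage frequencies and rescore candidate peptides by how well their composition matches that signature, here via a simple log-likelihood term mixed with an existing de novo score. The mixing weight, the toy sequences and all function names are hypothetical.

from collections import Counter
from math import log

def proteome_signature(protein_sequences):
    # Amino-acid usage frequencies compiled from a set of protein sequences.
    counts = Counter(aa for seq in protein_sequences for aa in seq)
    total = sum(counts.values())
    return {aa: c / total for aa, c in counts.items()}

def usage_score(peptide, signature, floor=1e-6):
    # Log-likelihood of the peptide's composition under the proteome signature;
    # higher means the amino-acid usage looks more like the target proteome.
    return sum(log(signature.get(aa, floor)) for aa in peptide)

def rescore(candidates, signature, weight=0.5):
    # Combine an existing de novo score with the usage score
    # (weight is a free mixing parameter, purely illustrative).
    return sorted(
        ((base + weight * usage_score(pep, signature), pep)
         for pep, base in candidates),
        reverse=True,
    )

if __name__ == "__main__":
    sig = proteome_signature(["MKTAYIAKQR", "GLSDGEWQLVLNVWGK"])
    cands = [("KGAYIAK", 12.0), ("WWWCCHH", 12.3)]   # (peptide, de novo score)
    for s, p in rescore(cands, sig):
        print(f"{p}: {s:.2f}")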

A Bootstrap Reliability Measure on Tests of Hypotheses

Bootstrapping has gained popularity in different tests of hypotheses as an alternative to using the asymptotic distribution when one is not sure of the distribution of the test statistic under the null hypothesis. This method, in general, has two variants: the parametric and the nonparametric approaches. However, issues about the reliability of this method arise in many applications. This paper addresses the reliability issue by establishing a reliability measure in terms of quantiles with respect to the asymptotic distribution, when this is approximately correct. The test of hypotheses used is the F-test. The simulation results show that using nonparametric bootstrapping in the F-test gives better reliability than parametric bootstrapping with relatively higher degrees of freedom.
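
As a hedged illustration of the two bootstrap variants, the sketch below computes a bootstrap p-value for a one-way ANOVA F-test under the null hypothesis of equal group means: the nonparametric variant resamples the mean-centred observations, the parametric variant draws from fitted normal distributions. The specific F-test, data and settings of the paper are not reproduced; comparing the bootstrap quantiles of the F statistic with the asymptotic F quantiles would give a reliability measure in the spirit described above.

import numpy as np
from scipy import stats

def bootstrap_f_pvalue(groups, n_boot=2000, parametric=False, seed=0):
    # Bootstrap p-value for the one-way ANOVA F-test under H0 (equal means).
    # Nonparametric: resample the mean-centred observations within each group;
    # parametric: draw normal samples with each group's own standard deviation.
    rng = np.random.default_rng(seed)
    groups = [np.asarray(g, float) for g in groups]
    f_obs = stats.f_oneway(*groups).statistic
    centred = [g - g.mean() for g in groups]          # impose the null hypothesis
    count = 0
    for _ in range(n_boot):
        if parametric:
            boot = [rng.normal(0.0, g.std(ddof=1), g.size) for g in groups]
        else:
            boot = [rng.choice(c, size=c.size, replace=True) for c in centred]
        if stats.f_oneway(*boot).statistic >= f_obs:
            count += 1
    return (count + 1) / (n_boot + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    a, b, c = rng.normal(0, 1, 25), rng.normal(0.4, 1, 25), rng.normal(0.8, 1, 25)
    print("nonparametric p:", bootstrap_f_pvalue([a, b, c]))
    print("parametric    p:", bootstrap_f_pvalue([a, b, c], parametric=True))
    print("asymptotic    p:", stats.f_oneway(a, b, c).pvalue)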

The Research Approaches on Crisis and its Management

The paper structures research approaches to crisis and its management. It focuses on psychological, sociological, economic, ethical and technological approaches. Furthermore, it describes the basic features of models chosen according to those approaches. By comparing them, it shows how a crisis influences organizations and individuals, and their mutual interaction.

A Modified Maximum Urgency First Scheduling Algorithm for Real-Time Tasks

This paper presents a modified version of the maximum urgency first scheduling algorithm. The maximum urgency first algorithm combines the advantages of fixed and dynamic scheduling to provide dynamically changing systems with flexible scheduling. This algorithm, however, has a major shortcoming: its scheduling mechanism may cause a critical task to fail. The modified maximum urgency first scheduling algorithm resolves this problem. In this paper, we propose two possible implementations of this algorithm, using either the earliest deadline first or the modified least laxity first algorithm to calculate the dynamic priorities. These two approaches are compared by simulating the two algorithms, and the earliest deadline first implementation is recommended as the preferred one. Afterwards, we compare our proposed algorithm with the maximum urgency first algorithm through simulation and present the results. It is shown that modified maximum urgency first is superior to maximum urgency first, since it usually causes fewer task preemptions and hence less related overhead. It also leads to fewer failed non-critical tasks in overloaded situations.
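
As an illustrative reading, not the paper's exact algorithm, the sketch below selects the next task under a maximum-urgency-first style rule: critical tasks dominate, the dynamic priority is either the earliest deadline (EDF) or the least laxity, and a user priority breaks ties. All task fields, names and values are assumptions made for the example.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    critical: bool           # membership in the critical set (static part)
    deadline: float          # absolute deadline
    remaining: float         # remaining execution time
    user_priority: int = 0   # tie-breaker

def pick_next(ready, now, dynamic="edf"):
    # Critical tasks first, then a dynamic key (earliest deadline or least
    # laxity), then the user priority as a tie-breaker.
    def key(t):
        dyn = t.deadline if dynamic == "edf" else (t.deadline - now - t.remaining)
        return (not t.critical, dyn, -t.user_priority)
    return min(ready, key=key)

if __name__ == "__main__":
    ready = [
        Task("sensor",  critical=True,  deadline=12.0, remaining=3.0),
        Task("control", critical=True,  deadline=9.0,  remaining=2.0),
        Task("logger",  critical=False, deadline=5.0,  remaining=1.0),
    ]
    print(pick_next(ready, now=0.0).name)                  # critical task, earliest deadline
    print(pick_next(ready, now=0.0, dynamic="llf").name)   # critical task, least laxity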

A Refined Application of QFD in SCM: A New Approach

Because customers in the new century tend to express globally increasing demands, networks of interconnected businesses have been established, and the management of such networks appears to be a major key to gaining competitive advantage. Supply chain management encompasses such managerial activities. Within a supply chain, a critical role is played by quality. QFD is a widely utilized tool that serves not only to bring quality to the ultimate provision of products or service packages required by the end customer or the retailer, but also to initiate a satisfactory relationship with our initial customer, that is, the wholesaler. However, the wholesalers' cooperation is largely based on capabilities that depend heavily on their locations and existing circumstances. Therefore, it is undeniable that, for any company, each wholesaler possesses a specific importance ratio which can heavily influence the figures calculated in the House of Quality (HOQ) in QFD. Moreover, given the competitiveness of today's marketplace, it has been widely recognized that consumers' expression of demands is highly volatile over production periods. Such instability and proneness to change is tangibly noticeable, and taking it into account during the analysis of the HOQ is widely influential and doubtlessly required. For a more reliable outcome in such matters, this article demonstrates the viability of applying the Analytic Network Process to consider the wholesalers' reputation and simultaneously introduces a mortality coefficient for the reliability and stability of the consumers' expressed demands over time. Following this, the paper elaborates further on the relevant contributory factors and approaches to the calculation of such coefficients. Finally, the article concludes that an empirical application is needed to achieve broader validity.
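
As a purely hypothetical illustration of how such coefficients might enter the House of Quality, the sketch below decays each customer demand's importance by a 'mortality' coefficient per elapsed period and scales the resulting technical-characteristic weights by an ANP-derived wholesaler weight. All numbers, names and the exponential decay form are assumptions, not the article's actual formulation.

import numpy as np

def adjusted_hoq_weights(importance, rel_matrix, wholesaler_weight,
                         mortality, periods_elapsed):
    # Hypothetical adjustment of HOQ technical weights: each demand's importance
    # decays by the mortality coefficient per elapsed period, and the result is
    # scaled by the wholesaler's ANP-derived weight.
    importance = np.asarray(importance, float)
    decayed = importance * (1.0 - mortality) ** np.asarray(periods_elapsed)
    return wholesaler_weight * decayed @ np.asarray(rel_matrix, float)

if __name__ == "__main__":
    importance = [5, 3, 4]            # customer demand ratings (illustrative)
    periods = [2, 0, 1]               # periods since each demand was voiced
    rel = [[9, 3, 0],                 # demand x technical-characteristic scores
           [1, 9, 3],
           [0, 3, 9]]
    print(adjusted_hoq_weights(importance, rel, wholesaler_weight=0.6,
                               mortality=0.2, periods_elapsed=periods))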

A Survey on Usage and Diffusion of Project Risk Management Techniques and Software Tools in the Construction Industry

The area of Project Risk Management (PRM) has been extensively researched, and the utilization of various tools and techniques for managing risk in several industries has been sufficiently reported. Formal and systematic PRM practices have been made available for the construction industry. Based on this body of knowledge, this paper seeks to establish a global picture of PRM practices and approaches through a survey investigating the usage of PRM techniques and the diffusion of software tools, their level of maturity, and their usefulness in the construction sector. Results show that, despite the existing techniques and tools, their usage is limited: software tools are used only by a minority of respondents, and their cost is one of the largest hurdles to adoption. Finally, the paper provides some important guidelines for future research on quantitative risk analysis techniques and suggestions for PRM software tool development and improvement.

Sensitivity Analysis of Real-Time Systems

Verification of real-time software systems can be expensive in terms of time and resources. Testing is the main method of proving correctness but has been shown to be a long and time-consuming process. In everyday practice, engineers are usually unwilling to adopt formal approaches to correctness because of the overhead associated with developing their knowledge of such techniques. Performance modelling techniques allow systems to be evaluated with respect to timing constraints. This paper describes PARTES, a framework which guides the extraction of performance models from programs written in an annotated subset of C.

Does the Polysemic Nature of Energy Security Make it a 'Wicked' Problem?

Governments around the world are expending considerable time and resources framing strategies and policies to deliver energy security. The term 'energy security' has quietly slipped into the energy lexicon without any meaningful discourse about its meaning or assumptions. An examination of explicit and inferred definitions finds that the concept is inherently slippery because it is polysemic in nature, having multiple dimensions and taking on different specificities depending on the country (or continent), timeframe or energy source to which it is applied. But what does this mean for policymakers? Can traditional policy approaches be used to address the problem of energy security, or do its polysemic qualities mean that it should be treated as a 'wicked' problem? To answer this question, the paper assesses energy security against nine commonly cited characteristics of wicked policy problems and finds strong evidence of 'wickedness'.

Shape Error Concealment for Shape Independent Transform Coding

Arbitrarily shaped video objects are an important concept in modern video coding methods. The techniques presently used are not based on image elements but rather on video objects having an arbitrary shape. In this paper, spatial shape error concealment techniques for object-based image coding in error-prone environments are proposed. We consider a geometric shape representation consisting of the object boundary, which can be extracted from the α-plane. Three different approaches are used to replace a missing boundary segment: Bézier interpolation, Bézier approximation and NURBS approximation. Experimental results on object shapes with different concealment difficulty demonstrate the performance of the proposed methods. Comparisons among the proposed methods are also presented.
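
The following sketch illustrates only the Bézier interpolation idea for concealing a missing boundary segment: the last and first correctly received boundary points become the curve's end points, and the inner control points are extrapolated along the local boundary direction. This is one simple control-point choice among many; the paper's exact construction, and its Bézier/NURBS approximation variants, are not reproduced.

import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    # Sample a cubic Bezier curve defined by four control points.
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def conceal_segment(before, after, n=50):
    # Replace a missing boundary segment by a cubic Bezier whose end points are
    # the last/first correctly received boundary points and whose inner control
    # points continue the local boundary direction.
    b1, b0 = np.asarray(before[-2], float), np.asarray(before[-1], float)
    a0, a1 = np.asarray(after[0], float), np.asarray(after[1], float)
    c1 = b0 + (b0 - b1)          # continue the incoming tangent
    c2 = a0 + (a0 - a1)          # continue the outgoing tangent (backwards)
    return cubic_bezier(b0, c1, c2, a0, n)

if __name__ == "__main__":
    before = [(10, 10), (12, 11), (14, 13)]   # received boundary points before the loss
    after = [(24, 20), (26, 20), (28, 19)]    # received boundary points after the loss
    print(conceal_segment(before, after, n=5))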

Full-Genomic Network Inference for Non-Model Organisms: A Case Study for the Fungal Pathogen Candida albicans

Reverse engineering of full-genomic interaction networks based on compendia of expression data has been successfully applied for a number of model organisms. This study adapts these approaches for an important non-model organism: the major human fungal pathogen Candida albicans. During the infection process, the pathogen can adapt to a wide range of environmental niches and reversibly change its growth form. Given the importance of these processes, it is important to know how they are regulated. This study presents a reverse engineering strategy able to infer full-genomic interaction networks for C. albicans based on linear regression, utilizing the sparseness criterion (LASSO). To overcome the limited amount of expression data and the small number of known interactions, we utilize different prior-knowledge sources guiding the network inference towards a knowledge-driven solution. Since no database of known interactions for C. albicans exists, we use a text-mining system which utilizes full-text research papers to identify known regulatory interactions. By comparing with these known regulatory interactions, we find an optimal value for the global modelling parameters weighting the influence of the sparseness criterion and the prior knowledge. Furthermore, we show that soft integration of prior knowledge additionally improves the performance. Finally, we compare the performance of our approach to state-of-the-art network inference approaches.
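
A minimal sketch of the knowledge-guided LASSO step, under the assumption that 'soft integration' is realised as a reduced L1 penalty on regulator-target pairs supported by prior knowledge: the usual column-rescaling trick turns a per-coefficient penalty into a standard LASSO fit. scikit-learn is used here only for convenience; the paper's own solver, penalty weighting and data are not reproduced, and all names and values are illustrative.

import numpy as np
from sklearn.linear_model import Lasso

def infer_regulators(expr, target_idx, prior, alpha=0.05, prior_discount=0.3):
    # LASSO regression of one target gene on all other genes; candidate
    # regulators flagged in 'prior' receive a smaller L1 penalty, implemented by
    # dividing their columns by the weight and undoing the rescaling afterwards.
    X = np.delete(expr, target_idx, axis=1)          # candidate regulators
    y = expr[:, target_idx]
    weights = np.where(np.delete(prior, target_idx), prior_discount, 1.0)
    model = Lasso(alpha=alpha, max_iter=10_000).fit(X / weights, y)
    coef = model.coef_ / weights                     # undo the rescaling
    regulators = np.delete(np.arange(expr.shape[1]), target_idx)
    return {g: c for g, c in zip(regulators, coef) if abs(c) > 1e-8}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    expr = rng.standard_normal((60, 6))              # 60 conditions x 6 genes (toy data)
    expr[:, 0] = 0.8 * expr[:, 2] - 0.5 * expr[:, 4] + 0.1 * rng.standard_normal(60)
    prior = np.array([0, 0, 1, 0, 0, 0])             # prior evidence: gene 2 regulates gene 0
    print(infer_regulators(expr, target_idx=0, prior=prior))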

Organizational Dimensions as Determinant Factors of KM Approaches in SMEs

In the current economy of increasing global competition, many organizations are attempting to use knowledge as one of the means to gain sustainable competitive advantage. Besides large organizations, the success of SMEs can be linked to how well they manage their knowledge. Despite the profusion of research on knowledge management within large organizations, few studies have tried to analyze KM in SMEs. This research proposes a new framework showing the determinant role of organizational dimensions in KM approaches. The paper and its propositions are based on a literature review and analysis. In this research, personalization versus codification, individualization versus institutionalization and IT-based versus non-IT-based are highlighted as three distinct dimensions of knowledge management approaches. The study contributes to research by providing a more nuanced classification of KM approaches and offers guidance to managers about the types of KM approaches that should be adopted based on the size, geographical dispersion and task nature of SMEs. To the author's knowledge, the paper is the first of its kind to examine whether there are suitable configurations of KM approaches for SMEs with different dimensions. It gives valuable information, which will hopefully help the SME sector to accomplish KM.

Peer-to-Peer Epidemic Algorithms for Reliable Multicasting in Ad Hoc Networks

The characteristics of ad hoc networks, and even their existence, depend on the nodes forming them. Thus, services and applications designed for ad hoc networks should adapt to this dynamic and distributed environment. In particular, multicast algorithms with reliability and scalability requirements should abstain from centralized approaches. We aim to define a reliable and scalable multicast protocol for ad hoc networks and target the use of epidemic techniques for this purpose. In this paper, we present a brief survey of epidemic algorithms for reliable multicasting in ad hoc networks and describe formulations and analytical results for simple epidemics. Then, the P2P anti-entropy algorithm for content distribution and our prototype simulation model are described, together with our initial results demonstrating the behavior of the algorithm.
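
For concreteness, the following sketch simulates the basic push-pull anti-entropy mechanism that such epidemic protocols rely on: in each round, every node exchanges its message set with one randomly chosen peer, and the number of rounds until a single multicast message reaches all nodes is counted. It is a simplified, fully connected model, not the P2P protocol or the simulation environment of the paper; node count and names are illustrative.

import random

def anti_entropy_round(nodes, rng):
    # One round of push-pull anti-entropy: every node contacts one random peer
    # and both end up with the union of their message sets.
    for i in range(len(nodes)):
        j = rng.randrange(len(nodes))
        if j != i:
            merged = nodes[i] | nodes[j]
            nodes[i], nodes[j] = set(merged), set(merged)

def rounds_until_delivery(n_nodes=100, seed=3):
    # Count anti-entropy rounds until a single multicast message reaches all nodes.
    rng = random.Random(seed)
    nodes = [set() for _ in range(n_nodes)]
    nodes[0].add("msg-1")                    # the multicast originates at node 0
    rounds = 0
    while any("msg-1" not in s for s in nodes):
        anti_entropy_round(nodes, rng)
        rounds += 1
    return rounds

if __name__ == "__main__":
    print("rounds needed:", rounds_until_delivery())   # grows roughly logarithmically in n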

Increase of Error Detection Effectiveness in the Data Transmission Channels with Pulse-Amplitude Modulation

In this paper, approaches for increasing the effectiveness of error detection in computer network channels with Pulse-Amplitude Modulation (PAM) are proposed. The proposed approaches are based on consideration of the special features of the errors that appear on lines with PAM. The first approach consists of a CRC modification designed specifically for lines with PAM. The second approach is based on the use of weighted checksums; a way of coding the checksum components has been developed. It is shown that the proposed checksum modification ensures superior error-detection reliability for channels with PAM compared to CRC.
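
The paper's own coding of the checksum components is not reproduced here; the sketch below only illustrates the general idea of a position-weighted checksum, in which each data word contributes with an index-dependent weight so that reorderings and position-dependent error patterns change the check value, unlike a plain sum. The modulus and frame values are illustrative.

def weighted_checksum(words, modulus=2**16 - 1):
    # Position-weighted checksum: s1 is a plain running sum, s2 weights each
    # word by its index, so swapped or shifted words change the check value.
    s1 = s2 = 0
    for i, w in enumerate(words, start=1):
        s1 = (s1 + w) % modulus
        s2 = (s2 + i * w) % modulus
    return (s2 << 16) | s1

if __name__ == "__main__":
    frame = [0x12, 0x34, 0x56, 0x78]
    corrupted = [0x12, 0x56, 0x34, 0x78]       # two words transposed
    print(hex(weighted_checksum(frame)))
    print(hex(weighted_checksum(corrupted)))   # differs, while a plain sum would not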

Exploiting Machine Learning Techniques for the Enhancement of Acceptance Sampling

This paper proposes an innovative methodology for Acceptance Sampling by Variables, a particular category of Statistical Quality Control dealing with the assurance of product quality. Our contribution lies in the exploitation of machine learning techniques to address the complexity and remedy the drawbacks of existing approaches. More specifically, the proposed methodology exploits Artificial Neural Networks (ANNs) to aid decision making about the acceptance or rejection of an inspected sample. For any type of inspection, ANNs are trained on data from the corresponding tables of a standard's sampling plan schemes. Once trained, ANNs can give closed-form solutions for any acceptance quality level and sample size, thus leading to an automation of the reading of the sampling plan tables, without any need to compromise on the values of the specific standard chosen each time. The proposed methodology provides enough flexibility to quality control engineers during the inspection of their samples, allowing the consideration of specific needs, while it also reduces the time and the cost required for these inspections. Its applicability and advantages are demonstrated through two numerical examples.
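
A minimal sketch of the idea, with made-up table values: a small neural network is fitted to a few (AQL, sample size) → acceptance-number rows as they would be read from a standard's tables, after which it also returns values for intermediate inspection settings. The rows, the network size and the library choice (scikit-learn's MLPRegressor) are illustrative assumptions, not the standard's or the paper's actual data.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical excerpt of a sampling-plan table: (AQL %, sample size) -> acceptance number.
# The numbers are placeholders for illustration, not values from an actual standard.
table = np.array([
    # AQL,   n, acceptance number
    [0.65,  50,  1],
    [1.00,  50,  2],
    [1.00,  80,  3],
    [1.50,  80,  5],
    [2.50, 125, 10],
])

X, y = table[:, :2], table[:, 2]
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=50_000, random_state=0)
net.fit(X, y)

# Once trained, the network answers for AQL/sample-size combinations that fall
# between the tabulated rows, automating the reading of the tables.
print(net.predict([[1.2, 80]]))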