Nanoscale Structure-Phase States in Titanium Surface Layers after Electroexplosive Carburizing and Subsequent Electron-Beam Treatment

The peculiarities of the nanoscale structure-phase states formed in the surface of technically pure titanium after electroexplosive carburizing and subsequent electron-beam treatment in different regimes are established by transmission electron diffraction microscopy, and the underlying physical mechanisms are discussed. Electroexplosive carburizing leads to the formation of a surface layer (40 μm thick) with microhardness increased by a factor of 3.5. It consists of β-titanium, graphite (monocrystals 100-150 nm, polycrystals 5-10 nm, amorphous particles 3-5 nm), TiC (5-10 nm), and β-TiO2 (2-20 nm). After electron-beam treatment, which further increases the microhardness, the surface layer consists of TiC.

Finite-Horizon Tracking Control for Repetitive Systems with Uncertain Initial Conditions

Repetitive systems are systems that repeatedly perform a simple task in a fixed pattern; they are widespread in industrial fields. Hence, many researchers have been interested in such systems, especially in the field of iterative learning control (ILC). In this paper, we propose a finite-horizon tracking control scheme for linear time-varying repetitive systems with uncertain initial conditions. The scheme is derived both analytically and numerically for state-feedback systems and only numerically for output-feedback systems. It is then extended to stable systems with input constraints. All numerical schemes are developed in the form of linear matrix inequalities (LMIs). A distinguishing feature of the proposed scheme compared with existing iterative learning control is that it guarantees the tracking performance exactly even under uncertain initial conditions. Simulation results demonstrate the good performance of the proposed scheme.
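
To make the LMI machinery concrete, the sketch below (a minimal illustration, not the paper's finite-horizon tracking conditions) solves a generic stabilizing state-feedback LMI with CVXPY: find P > 0 and Y such that A P + P A' + B Y + Y' B' < 0, then recover the gain K = Y P^{-1}. The system matrices A, B and the tolerance eps are invented for demonstration.

    # Generic stabilizing state-feedback LMI solved with CVXPY (illustrative only;
    # not the paper's finite-horizon tracking LMI). A, B and eps are made up.
    import numpy as np
    import cvxpy as cp

    A = np.array([[0.0, 1.0], [2.0, -1.0]])
    B = np.array([[0.0], [1.0]])
    n, m = B.shape

    P = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((m, n))
    eps = 1e-3
    constraints = [
        P >> eps * np.eye(n),                                      # P positive definite
        A @ P + P @ A.T + B @ Y + Y.T @ B.T << -eps * np.eye(n),   # Lyapunov decrease
    ]
    cp.Problem(cp.Minimize(0), constraints).solve()

    K = Y.value @ np.linalg.inv(P.value)   # stabilizing feedback u = K x
    print("feedback gain K =", K)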

Optimal Path Planning under A Priori Information in Stochastic, Time-Varying Networks

A novel path planning approach is presented to find optimal paths in stochastic, time-varying networks given a priori traffic information. Most existing studies use dynamic programming to find the optimal path; however, those methods have been shown to be unable to obtain the global optimum, and designing efficient algorithms remains a further challenge. This paper employs a decision-theoretic framework for defining the optimal path: for a given source S and destination D in an urban transit network, we seek an S-D path of lowest expected travel time, where the link travel times are discrete random variables. To overcome the deficiencies of dynamic programming methods, such as the curse of dimensionality and violation of the principle of optimality, an integer programming model is built to assign the discrete travel time variables to arcs. Pruning techniques are also applied to reduce the computational complexity of the algorithm. The final experiments show the feasibility of the novel approach.
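
As a simple baseline for the expected-travel-time criterion, the sketch below runs Dijkstra on the expected value of each arc's discrete travel-time distribution. The small network and its distributions are invented, and the paper's integer programming model and pruning rules are not reproduced here.

    # Lowest expected travel time via Dijkstra on per-arc expected times
    # (illustrative baseline only; network and distributions are made up).
    import heapq

    # arc -> list of (travel_time, probability) pairs
    times = {
        ("S", "A"): [(4, 0.5), (8, 0.5)],
        ("S", "B"): [(5, 1.0)],
        ("A", "D"): [(3, 0.7), (9, 0.3)],
        ("B", "D"): [(6, 0.9), (20, 0.1)],
    }

    def expected(dist):
        return sum(t * p for t, p in dist)

    def dijkstra_expected(source, target):
        graph = {}
        for (u, v), dist in times.items():
            graph.setdefault(u, []).append((v, expected(dist)))
        best = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == target:
                return d
            for v, w in graph.get(u, []):
                if d + w < best.get(v, float("inf")):
                    best[v] = d + w
                    heapq.heappush(heap, (best[v], v))
        return float("inf")

    print(dijkstra_expected("S", "D"))   # lowest expected S-D travel time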

Towards Model-Driven Communications

In modern distributed software systems, communication among the composing parts is a critical issue, but the idea of extending conventional programming languages with general-purpose communication constructs seems difficult to realize. As a consequence, there is a growing gap between the abstraction level required by distributed applications and the concepts provided by the platforms that enable communication. This work discusses how the Model-Driven Software Development approach can be considered a mature technology for automatically generating the schematic parts of applications related to communication, while providing high-level specialized languages useful in all phases of software production. To achieve this goal, a stack of languages (meta-metamodels) is introduced to describe, at different levels of abstraction, the collaborative behavior of generic entities in terms of communication actions related to a taxonomy of messages. Finally, the generation of communication platforms is viewed as a form of specification of language semantics, which provides executable models of applications together with model-checking support and effective runtime environments.
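
As a purely illustrative example of the kind of schematic communication code such a language stack could generate, the sketch below defines a tiny message taxonomy and a dispatching "send" action in plain Python; all class and function names are invented and do not come from the paper's meta-metamodels.

    # Tiny message taxonomy and a schematic communication action
    # (illustrative only; names are invented, not the paper's models).
    from dataclasses import dataclass

    @dataclass
    class Message:                 # root of the taxonomy
        sender: str
        receiver: str

    @dataclass
    class Request(Message):        # a specialised message kind
        query: str

    @dataclass
    class Inform(Message):         # another specialised kind
        content: str

    def communicate(msg: Message):
        """A generated, schematic 'send' action: dispatch on the message kind."""
        kind = type(msg).__name__
        print(f"[{kind}] {msg.sender} -> {msg.receiver}")

    communicate(Request("client", "service", query="status"))
    communicate(Inform("service", "client", content="ready"))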

Some (v + 1, b + r + λ + 1, r + λ + 1, k, λ + 1) Balanced Incomplete Block Designs (BIBDs) from Lotto Designs (LDs)

The paper considered the construction of BIBDs using potential Lotto Designs (LDs) earlier derived from qualifying parent BIBDs. The study utilized Li's condition ⌊pr/(t−1)⌋·C(t−1, 2) + C(pr − ⌊pr/(t−1)⌋(t−1), 2) < λ·C(p, 2), where C(·, 2) denotes the binomial coefficient, to determine whether a parent BIBD (v, b, r, k, λ) qualifies as an LD (n, k, p, t), subject to v ≥ k, v ≥ p and t ≤ min{k, p}, and then considered the case k = t, since t is the smallest number of tickets that can guarantee a win in a lottery. The (15, 140, 28, 3, 4) and (7, 7, 3, 3, 1) BIBDs were selected as parent BIBDs to illustrate the procedure. These BIBDs yielded three potential LDs each. Each of the LDs was completely generated and its properties studied. The three LDs from the (15, 140, 28, 3, 4) produced the (9, 84, 28, 3, 7), (10, 120, 36, 3, 8) and (11, 165, 45, 3, 9) BIBDs, while those from the (7, 7, 3, 3, 1) produced the (5, 10, 6, 3, 3), (6, 20, 10, 3, 4) and (7, 35, 15, 3, 5) BIBDs. The produced BIBDs follow the generalization (v + 1, b + r + λ + 1, r + λ + 1, k, λ + 1), where (v, b, r, k, λ) are the parameters of the (9, 84, 28, 3, 7) and (5, 10, 6, 3, 3) BIBDs. All the BIBDs produced are unreduced designs.
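
The stated generalization and the standard BIBD identities b·k = v·r and λ·(v − 1) = r·(k − 1) can be checked mechanically; the short script below (not from the paper) verifies them for the chain starting at the (5, 10, 6, 3, 3) design.

    # Check the generalization (v+1, b+r+lam+1, r+lam+1, k, lam+1) and the
    # BIBD identities b*k = v*r and lam*(v-1) = r*(k-1) (not from the paper).
    def next_design(v, b, r, k, lam):
        return (v + 1, b + r + lam + 1, r + lam + 1, k, lam + 1)

    def is_bibd(v, b, r, k, lam):
        return b * k == v * r and lam * (v - 1) == r * (k - 1)

    d = (5, 10, 6, 3, 3)
    for _ in range(3):
        print(d, "satisfies the BIBD identities:", is_bibd(*d))
        d = next_design(*d)
    # prints the chain (5,10,6,3,3) -> (6,20,10,3,4) -> (7,35,15,3,5)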

A Study on Algorithm Fusion for the Recognition and Tracking of a Moving Robot

This paper presents an algorithm for the recognition and tracking of moving objects; a 1/10-scale model car is used to verify its performance. The proposed algorithm merges the SURF algorithm with the Lucas-Kanade algorithm. SURF is robust to changes in contrast, size and rotation and can recognize objects, but it is slow because of its computational complexity. The Lucas-Kanade algorithm is fast but cannot recognize objects; its optical flow compares the previous and current frames so that the movement of a pixel can be tracked. A Kalman filter is used to address the problems that arise when the two algorithms are fused: it estimates the next position and compensates for the accumulated error. The resolution of the camera (vision sensor) is fixed at 640x480. To verify the performance of the fusion algorithm, it is compared with the SURF algorithm in three situations: driving straight, driving on a curve, and recognizing cars behind obstacles. Using a model vehicle makes it possible to reproduce situations similar to actual driving. The proposed fusion algorithm showed better performance and accuracy than existing object recognition and tracking algorithms. Future work will improve the algorithm so that it can be tested on images of real road environments.
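
A minimal sketch of this kind of fusion pipeline with OpenCV is given below; it is not the authors' implementation, and ORB is used as a freely available stand-in for SURF (which requires the opencv-contrib build and cv2.xfeatures2d.SURF_create). A valid video source is assumed.

    # Feature detection + Lucas-Kanade tracking + Kalman smoothing (sketch only).
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)            # any video source; 640x480 assumed
    detector = cv2.ORB_create(500)       # stand-in for SURF

    kalman = cv2.KalmanFilter(4, 2)      # state: x, y, vx, vy; measurement: x, y
    kalman.transitionMatrix = np.array([[1, 0, 1, 0],
                                        [0, 1, 0, 1],
                                        [0, 0, 1, 0],
                                        [0, 0, 0, 1]], np.float32)
    kalman.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kalman.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3

    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    kps = detector.detect(prev_gray, None)
    pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)

    while True:
        ok, frame = cap.read()
        if not ok or len(pts) == 0:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Lucas-Kanade: track the detected keypoints from the previous frame
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = nxt[status.flatten() == 1]
        if len(good) > 0:
            cx, cy = good.reshape(-1, 2).mean(axis=0)    # object centre estimate
            kalman.correct(np.array([[cx], [cy]], np.float32))
        predicted = kalman.predict()                     # smoothed next position
        prev_gray, pts = gray, good.reshape(-1, 1, 2)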

Intellectual Capital Report for Universities

Intellectual capital reporting is becoming critical at universities, mainly because knowledge is both the main output and the main input of these institutions. In addition, universities face continuous external demands for greater information and transparency about the use of public funds, and are increasingly given greater autonomy regarding their organization, management, and budget allocation. This situation requires new management and reporting systems. The purpose of the present study is to provide a model for intellectual capital reporting in Spanish universities. To this end, a questionnaire was sent to every member of the Social Councils of Spanish public universities in order to identify which intangible elements university stakeholders demand most. Our proposal for an intellectual capital report aims to act as a guide to help Spanish universities on the road to presenting information on intellectual capital that can assist stakeholders in making the right decisions.

Understanding Grip Choice and Comfort Whilst Hoovering

The hand is one of the essential parts of the body for carrying out Activities of Daily Living (ADLs). Individuals use their hands and fingers in everyday activities in both the workplace and the home. Hand-intensive tasks require diverse and sometimes extreme levels of exertion, depending on the action, movement or manipulation involved. The authors have undertaken several studies looking at grip choice and comfort, in the hope that an improved understanding of discomfort during ADLs will aid the design of consumer products. Previous work by the authors outlined a methodology for calculating pain frequency and pain level for a range of tasks. In an online survey undertaken by the authors on manipulating objects during everyday tasks, tasks involving gripping were seen to produce the highest levels of pain and discomfort. Questioning of the participants showed that cleaning tasks were the ADLs that produced the highest levels of discomfort, with women reporting higher levels of discomfort than men. This paper looks at the methodology for calculating pain frequency and pain level with particular regard to gripping activities. The methodology shows that activities such as mopping, sweeping and hoovering have the highest pain frequency and pain level, at 3112.5 occurrences per month, while the pain level per person performing this action was 0.78. The study then uses thin-film force sensors to analyze the force distribution in the hand whilst hoovering and compares this for differing grip styles and genders. Women were seen to have more of their hand under a higher pressure than men when hoovering. This suggests that women may feel greater discomfort than men, since their hand is at a higher pressure more of the time.

Changes in Subjective and Objective Measures of Performance in Ramadan

The Muslim faith requires individuals to fast between the hours of sunrise and sunset during the month of Ramadan. Our recent work has concentrated on some of the changes that take place during the daytime when fasting. A questionnaire was developed to assess subjective estimates of physical, mental and social activities, and fatigue. Four days were studied: in the weeks before and after Ramadan (control days) and during the first and last weeks of Ramadan (experimental days). On each of these four days, this questionnaire was given several times during the daytime and once after the fast had been broken and just before individuals retired at night. During Ramadan, daytime mental, physical and social activities all decreased below control values but then increased to above-control values in the evening. The desires to perform physical and mental activities showed very similar patterns. That is, individuals tried to conserve energy during the daytime in preparation for the evenings, when they ate and drank, often with friends. During Ramadan also, individuals were more fatigued in the daytime and napped more often than on control days. This extra fatigue probably reflected decreased sleep, individuals often having risen earlier (before sunrise, to prepare for fasting) and retired later (to enable recovery from the fast). Some physiological measures and objective measures of performance (including the response to a bout of exercise) have also been investigated. Urine osmolality fell during the daytime on control days as subjects drank, but rose in Ramadan to reach values at sunset indicative of dehydration. Exercise performance was also compromised, particularly late in the afternoon when the fast had lasted several hours. Self-chosen exercise work-rates fell and a set amount of exercise felt more arduous. There were also changes in heart rate and lactate accumulation in the blood, indicative of greater cardiovascular and metabolic stress caused by the exercise in subjects who had been fasting. Daytime fasting in Ramadan produces widespread effects which probably reflect the combined effects of sleep loss and restricted intakes of water and food.

SWARM: A Meta-Scheduler to Minimize Job Queuing Times on Computational Grids

Some meta-schedulers query the information systems of individual supercomputers in order to submit jobs to the least busy supercomputer on a computational Grid. However, this information can become outdated by the time a job starts, owing to changes in scheduling priorities. The MSR scheme is based on Multiple Simultaneous Requests and can take advantage of opportunities resulting from these priority changes. This paper presents the SWARM meta-scheduler, which can speed up the execution of large sets of tasks by minimizing job queuing time through the submission of multiple requests. Performance tests have shown that this new meta-scheduler is faster than an implementation of the MSR scheme and the gLite meta-scheduler. SWARM has been used through the GridQTL project beta-testing portal during the past year. Usage statistics are provided and demonstrate its capacity to reliably achieve a substantial reduction in execution time under production conditions.
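
The Multiple Simultaneous Requests idea that SWARM builds on can be sketched as follows; the submit_job, job_started and cancel_job functions are hypothetical stubs standing in for real Grid middleware calls, and the timing model is invented for illustration (this is not the SWARM or gLite code).

    # Sketch of the MSR idea: replicate a request across sites, keep the first
    # copy that leaves the queue, cancel the rest. All middleware calls are stubs.
    import random, time

    def submit_job(site, task):          # stub: returns a fake job handle
        return {"site": site, "task": task, "starts_at": time.time() + random.uniform(1, 5)}

    def job_started(handle):             # stub: pretend the queue released the job
        return time.time() >= handle["starts_at"]

    def cancel_job(handle):              # stub: withdraw a redundant replica
        pass

    def run_with_msr(task, sites, poll_interval=0.5):
        """Submit the same task to every site and keep the first copy that starts."""
        handles = {site: submit_job(site, task) for site in sites}
        while True:
            for site, handle in handles.items():
                if job_started(handle):
                    for other, h in handles.items():   # withdraw the other replicas
                        if other != site:
                            cancel_job(h)
                    return site
            time.sleep(poll_interval)

    print(run_with_msr("qtl-analysis", ["site-a", "site-b", "site-c"]))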

Compiler-Based Architecture for Context Aware Frameworks

Computers are being integrated into various aspects of everyday human life in different shapes and with different abilities. This fact has intensified the requirement that software development technologies be: 1) portable, 2) adaptable, and 3) simple to develop with. This is also known as the Pervasive Computing Problem (PCP), which can be addressed in different ways, each with its own pros and cons; Context-Oriented Programming (COP) is one of the methods for addressing the PCP. In this paper, a design for a COP framework (a context-aware framework) is presented that eliminates the weak points of a previous design based on interpreted languages while bringing the power of compiled languages to the implementation of such frameworks. The key point of this improvement is combining COP with Dependency Injection (DI) techniques. Both the old and the new frameworks are analyzed to show their advantages and disadvantages. Finally, a simulation of both designs indicates that the practical results agree with the theoretical analysis, with the new design running almost 8 times faster.
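
A minimal sketch of how COP and DI can be combined is shown below: the active context decides which behaviour variant an injector resolves. The class names are invented for illustration and are not taken from the frameworks analysed in the paper.

    # Context-Oriented Programming via Dependency Injection (illustrative sketch).
    from dataclasses import dataclass

    class Renderer:                       # base behaviour
        def show(self, text): print(text)

    class LowBatteryRenderer(Renderer):   # context-specific variant
        def show(self, text): print(text[:20])   # truncate output to save power

    @dataclass
    class Context:
        battery_low: bool = False

    class Injector:
        """Resolve a behaviour variant from the current context."""
        def __init__(self, context): self.context = context
        def renderer(self):
            return LowBatteryRenderer() if self.context.battery_low else Renderer()

    ctx = Context(battery_low=True)
    Injector(ctx).renderer().show("A long status message for the user interface")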

An Evaluation of Carbon Dioxide Emissions Trading among Enterprises: The Tokyo Cap and Trade Program

This study proposes three methods to evaluate the Tokyo Cap and Trade Program when emissions trading is performed virtually among enterprises, focusing on carbon dioxide (CO2), the only emitted greenhouse gas that tends to increase. The first method identifies the optimum reduction rate for the highest cost benefit, the second examines emissions trading among enterprises through market trading, and the third verifies long-term emissions trading during the term of the plan (2010-2019), checking the validity of emissions trading partly using Geographic Information Systems (GIS). The findings of this study can be summarized in the following three points. 1. Since the total cost benefit is greatest at a 44% reduction rate, the rate can be set higher than that of the Tokyo Cap and Trade Program to obtain a greater total cost benefit. 2. At a 44% reduction rate, among 320 enterprises, 8 purchasing enterprises and 245 selling enterprises gain profits from emissions trading, and 67 enterprises carry out voluntary reductions without trading. Therefore, to further promote emissions trading, it is necessary to increase the trading volume of the selling enterprises by increasing the number of purchasing enterprises. 3. Compared with short-term emissions trading, few enterprises benefit in each year under the long-term emissions trading of the Tokyo Cap and Trade Program; at most 81 enterprises can gain profits from emissions trading in FY 2019. Therefore, by setting the reduction rate higher, it is necessary to increase the number of enterprises that participate in emissions trading and benefit from the restraint of CO2 emissions.

The Shanghai Cooperation Organization: China's Grand Strategy in Central Asia

The Shanghai Cooperation Organization (SCO) is one of the successful outcomes of China's foreign policy since the end of the Cold War. The expansion of multilateral ties all over the world through institutional strategies such as the SCO identifies China as a more constructive power. The SCO became a new model of cooperation, formed on the remains of the collapsed Soviet system, and predetermined China's geopolitical role in the region. As a fast-developing and effective regional mechanism, the SCO today has a greater external impact on the international system and forms a new type of interaction for promoting China's grand strategy of 'peaceful rise'.

Trends in Competitiveness of the Thai Printing Industry

Since the world printing industry has to confront globalization and constant change, the Thai printing industry, as a small but increasingly significant part of the world printing industry, cannot escape this change and must revamp its production processes, designs and technology to make them more appealing to both international and domestic markets. The essential question is: what is the Thai printing industry's competitive edge in this changing environment? This research studies the Thai level of competitiveness in terms of marketing, technology and environmental friendliness, as well as the level of satisfaction with the process of using printing machines. To assess trends in the competitiveness of the Thai printing industry, both quantitative and qualitative studies were conducted. The quantitative analysis was restricted to 100 respondents; the qualitative analysis was restricted to a focus group of 10 individuals from various backgrounds in the Thai printing industry. The findings from the quantitative analysis revealed overall mean scores of 4.53, 4.10 and 3.50 for the competitiveness of marketing, the competitiveness of technology and the competitiveness of being environmentally friendly, respectively. However, the level of satisfaction with the process of using machines had a mean score of only 3.20. The findings from the qualitative analysis revealed that target customers have increasingly reordered owing to their satisfaction with both the low prices and the acceptable quality of the products. Moreover, the Thai printing industry tends to be converting to green technology that is friendly to the environment, choosing to produce or substitute products that are less damaging to the environment. It was also found that the Thai printing industry has been transformed into a very competitive industry in which bargaining power rests with consumers, who have a variety of choices.

Cryptography over an Elliptic Curve of the Ring Fq[ε], ε^4 = 0

Groups in which the discrete logarithm problem (DLP) is believed to be intractable have proved to be invaluable building blocks for cryptographic applications. They are at the heart of numerous protocols such as key agreement, public-key cryptosystems, digital signatures, identification schemes, publicly verifiable secret sharing, hash functions and bit commitments. The search for new groups with an intractable DLP is therefore of great importance. The goal of this article is to study elliptic curves over the ring Fq[ε], where Fq is a finite field of order q and ε satisfies ε^n = 0, n ≥ 3. The motivation for this work came from practical discrete logarithm-based cryptosystems such as ElGamal and elliptic curve cryptosystems. First, we describe these curves defined over a ring. Then, we study their algorithmic properties by proposing effective implementations for representing the elements and the group law. In another article we study their cryptographic properties, an attack on the elliptic discrete logarithm problem, and a new cryptosystem over these curves.
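
As an illustration of the element representation, the sketch below implements addition and multiplication in Fq[ε] with ε^n = 0, assuming q is prime so that Fq can be modelled as the integers modulo q; it shows only the ring arithmetic, not the curve group law studied in the article.

    # Arithmetic in Fq[eps] with eps^n = 0 (sketch; assumes q prime).
    # An element a0 + a1*eps + ... + a_{n-1}*eps^{n-1} is a coefficient tuple.
    Q, N = 7, 4   # example parameters: F7[eps], eps^4 = 0

    def add(a, b):
        return tuple((x + y) % Q for x, y in zip(a, b))

    def mul(a, b):
        c = [0] * N
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                if i + j < N:                 # terms in eps^(i+j) with i+j >= N vanish
                    c[i + j] = (c[i + j] + ai * bj) % Q
        return tuple(c)

    x = (1, 2, 0, 3)      # 1 + 2*eps + 3*eps^3
    y = (0, 1, 0, 0)      # eps
    print(add(x, y))      # (1, 3, 0, 3)
    print(mul(x, y))      # (0, 1, 2, 0): eps + 2*eps^2, the eps^4 term vanishes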

Hiding Data in Images Using PCP

In recent years everything is trending toward digitization and, with the rapid development of Internet technologies, digital media need to be transmitted conveniently over the network. Attacks on, misuse of, or unauthorized access to information are of great concern today, which makes the protection of documents transmitted through digital media a priority. This urges us to devise new data-hiding techniques to protect and secure data of vital significance. In this respect, steganography often comes to the fore as a tool for hiding information. Steganography is a process that involves hiding a message in an appropriate carrier such as an image or audio file. The term is of Greek origin and means "covered or hidden writing". The goal of steganography is covert communication: the carrier can be sent to a receiver such that no one except the authenticated receiver knows of the existence of the information. A considerable amount of work has been carried out by different researchers on steganography. In this work the authors propose a novel steganographic method for hiding information within the spatial domain of a grayscale image. The proposed approach selects the embedding pixels using a mathematical function, finds the 8-neighborhood of each selected pixel, and maps each bit of the secret message to a neighboring pixel coordinate position in a specified manner. Before embedding, a check is performed to determine whether the selected pixel or its neighbors lie on the boundary of the image. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
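
A simplified illustration of embedding message bits into the 8-neighborhood of a selected pixel is sketched below using least-significant-bit replacement; the pixel selection, bit ordering and boundary check are invented here and do not reproduce the paper's actual PCP mapping.

    # Sketch: hide one byte in the 8-neighbourhood of a chosen pixel via LSBs.
    import numpy as np

    NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

    def embed_byte(img, row, col, byte):
        """Hide the 8 bits of `byte` in the 8-neighbourhood of pixel (row, col)."""
        h, w = img.shape
        if not (0 < row < h - 1 and 0 < col < w - 1):
            raise ValueError("selected pixel must not lie on the image boundary")
        for bit_index, (dr, dc) in enumerate(NEIGHBOURS):
            bit = (byte >> (7 - bit_index)) & 1
            r, c = row + dr, col + dc
            img[r, c] = (img[r, c] & 0xFE) | bit     # overwrite the LSB only
        return img

    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # grayscale cover
    stego = embed_byte(cover.copy(), 10, 20, ord('A'))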

Use of Novel Algorithms MAJE4 and MACJER-320 for Achieving Confidentiality and Message Authentication in SSL and TLS

Extensive use of the Internet, coupled with the tremendous growth in e-commerce and m-commerce, has created a huge demand for information security. The Secure Socket Layer (SSL) protocol is the most widely used security protocol on the Internet that meets this demand. It provides protection against eavesdropping, tampering and forgery. The cryptographic algorithms RC4 and HMAC have been used to provide security services such as confidentiality and authentication in SSL, but recent attacks against RC4 and HMAC have raised questions about the confidence placed in these algorithms. Hence, two novel cryptographic algorithms, MAJE4 and MACJER-320, have been proposed as substitutes for them. The focus of this work is to demonstrate the performance of these new algorithms and to suggest them as dependable alternatives for the security services needed in SSL. The performance evaluation has been carried out using a practical implementation.

DEA Method for Evaluation of EU Performance

The paper deals with an application of quantitative analysis, the Data Envelopment Analysis (DEA) method, to performance evaluation of the European Union Member States in the reference years 2000 and 2011. The main aim of the paper is to measure efficiency changes over the reference years, to analyze the level of productivity in individual countries based on the DEA method, and to classify the EU Member States into homogeneous units (clusters) according to the efficiency results. The theoretical part is devoted to the fundamentals of performance theory and the methodology of DEA. The empirical part measures the degree of productivity and the level of efficiency change of the evaluated countries using a basic DEA model, the CCR CRS model, and a specialized DEA approach, the Malmquist index, which measures the change in technical efficiency and the movement of the production possibility frontier. Here, the DEA method becomes a suitable tool for determining the competitive or uncompetitive position of each country, because not just one factor is evaluated but a set of different factors that determine the degree of economic development.
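
For reference, the sketch below solves the input-oriented CCR (constant-returns-to-scale) model in multiplier form as a linear programme with SciPy; the two-input, one-output data are made up for illustration and are not the EU indicators used in the paper.

    # Input-oriented CCR DEA model in multiplier form (illustrative data).
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0]])   # inputs, one row per country (DMU)
    Y = np.array([[5.0], [4.0], [6.0]])                  # outputs, one row per DMU

    def ccr_efficiency(o):
        """Efficiency of DMU o: max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0."""
        n, m_in = X.shape
        m_out = Y.shape[1]
        c = np.concatenate([-Y[o], np.zeros(m_in)])          # maximise u.y_o
        A_ub = np.hstack([Y, -X])                            # u.y_j - v.x_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(m_out), X[o]]).reshape(1, -1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
        return -res.fun

    for o in range(len(X)):
        print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")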

Transmission Model for Plasmodium vivax Malaria: Conditions for Bifurcation

Plasmodium vivax malaria differs from P. falciparum malaria in that a person suffering from a P. vivax infection can suffer relapses of the disease. This is due to the parasite being able to remain dormant in the liver of the patient, from where it can re-infect the patient after a passage of time; during this stage the patient is classified as being in the dormant class. The model describing the transmission of P. vivax malaria consists of a human population divided into four classes: the susceptible, the infected, the dormant and the recovered. The effect of a time delay on the transmission of this disease is studied; the delay is the period in which the P. vivax parasite develops inside the mosquito (vector) before the vector becomes infectious (i.e., able to pass on the infection). We analyze our model using standard dynamic modeling methods. Two stable equilibrium states, a disease-free state E0 and an endemic state E1, are found to be possible. The E0 state is stable when a newly defined basic reproduction number G is less than one; if G is greater than one, the endemic state E1 is stable. The conditions for the endemic equilibrium state E1 to be a stable spiral node are established. For realistic values of the parameters in the model, solutions in phase space are trajectories spiraling into the endemic state. It is shown that limit cycle and chaotic behaviors can only be achieved with unrealistic parameter values.
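
A simplified, delay-free version of a four-class (susceptible, infected, dormant, recovered) model can be integrated numerically as sketched below; the equations and parameter values are assumptions for demonstration and are not the exact system analysed in the paper.

    # Illustrative S-I-D-R model with relapse from the dormant class (no delay).
    import numpy as np
    from scipy.integrate import odeint

    beta, gamma = 0.3, 0.1      # transmission and recovery rates (assumed)
    delta, rho = 0.05, 0.02     # entry into dormancy and relapse rates (assumed)

    def model(y, t):
        S, I, D, R = y
        new_infections = beta * S * I
        relapses = rho * D
        dS = -new_infections
        dI = new_infections + relapses - gamma * I - delta * I
        dD = delta * I - relapses
        dR = gamma * I
        return [dS, dI, dD, dR]

    t = np.linspace(0, 365, 1000)
    sol = odeint(model, [0.99, 0.01, 0.0, 0.0], t)   # fractions of the population
    print("final state (S, I, D, R):", np.round(sol[-1], 3))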

Hierarchical PSO-Adaboost Based Classifiers for Fast and Robust Face Detection

We propose a fast and robust hierarchical face detection system that finds and localizes faces with a cascade of classifiers. Three modules contribute to the efficiency of our detector. First, heterogeneous feature descriptors are exploited to enrich the feature types and feature numbers used for face representation. Second, a PSO-Adaboost algorithm is proposed to efficiently select discriminative features from a large pool of available features and combine them into the final ensemble classifier. Compared with standard exhaustive Adaboost feature selection, the new PSO-Adaboost algorithm reduces the training time by up to a factor of 20. Finally, a three-stage hierarchical classifier framework is developed for rapid background removal. In particular, candidate face regions are detected more quickly by using a large window size in the first stage. Nonlinear SVM classifiers are used instead of decision-stump functions in the last stage to remove the remaining complex non-face patterns that cannot be rejected in the previous two stages. Experimental results show that our detector achieves superior performance on the CMU+MIT frontal face dataset.
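
The early-rejection logic of such a cascade can be sketched as follows; the three stage classifiers are trivial stand-ins for the PSO-Adaboost ensembles and the final SVM stage, and the candidate window is just a list of pixel intensities invented for illustration.

    # Cascade with early rejection: cheap stages discard most background windows
    # before the expensive final stage runs (stage classifiers are stand-ins).
    def cascade_detect(window, stages):
        """Return True only if every stage accepts the candidate window."""
        for stage in stages:
            if not stage(window):
                return False            # early rejection
        return True

    # Hypothetical stages, ordered from cheap/coarse to expensive/precise
    stages = [
        lambda w: sum(w) / len(w) > 0.2,        # coarse brightness test
        lambda w: max(w) - min(w) > 0.3,        # simple contrast test
        lambda w: sorted(w)[len(w) // 2] > 0.4, # stand-in for the final SVM stage
    ]

    print(cascade_detect([0.5, 0.6, 0.1, 0.9], stages))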