Gas Detonation Forming by a Mixture of H2+O2 Detonation

Explosive forming is an unconventional technique in which, most commonly, water is used as the pressure transmission medium. One of the newest methods in explosive forming is gas detonation forming, which uses the normal shock wave produced by a gas detonation to form sheet metals. For this purpose, a detonation is developed from the reaction of an H2+O2 mixture in a long cylindrical detonation tube. The detonation wave travels through the tube, acts as a blast load on the steel blank and forms it. Experimental results are compared with a finite element model in terms of strain, thickness variation and deformed geometry. The numerical and experimental results agreed to within approximately 75-90% on the formability of the desired shape. The optimum gas mixture was obtained at 68% H2 and 32% O2.
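As a point of reference not stated in the abstract, the reported optimum of 68% H2 lies close to the stoichiometric composition of the H2+O2 reaction, which by mole (volume) fraction is

\[
2\,\mathrm{H}_2 + \mathrm{O}_2 \rightarrow 2\,\mathrm{H}_2\mathrm{O}
\qquad\Rightarrow\qquad
x_{\mathrm{H}_2} = \tfrac{2}{3} \approx 66.7\,\%, \qquad x_{\mathrm{O}_2} \approx 33.3\,\%.
\]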

Combining Minimum Energy and Minimum Direct Jerk of Linear Dynamic Systems

Minimum energy consumption and smoothness, the latter quantified as a function of jerk, are both desirable in many dynamic systems, such as automobiles and pick-and-place robot manipulators that handle fragile equipment. Nevertheless, most studies address either the minimum-energy or the minimum-jerk trajectory alone. This paper proposes a simple yet useful approach that combines the minimum-energy and minimum direct-jerk criteria in the design of time-dependent systems, yielding an alternative optimal solution. Extremal solutions for the cost functions of minimum energy, minimum jerk, and their combination are found using dynamic optimization methods together with numerical approximation. This allows the time histories of the states and inputs obtained from the combined minimum energy and jerk design to be compared visually and statistically. The numerical solution of the combined minimum direct-jerk and energy problem coincides exactly with that of the minimum direct-jerk problem, while the minimum-energy problem yields a similar solution, particularly in terms of its overall tendency.
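As an illustration of how the two criteria can be combined (the weighting actually used in the paper is not given in the abstract), a typical combined cost functional for a trajectory x(t) with input u(t) over a horizon [0, T] can be written as

\[
J = \int_0^T \Bigl[ w_e\, u^2(t) + w_j\, \dddot{x}^{\,2}(t) \Bigr]\,dt ,
\]

where w_e and w_j are assumed weighting factors trading off energy consumption against jerk, and the extremal trajectory follows from the Euler-Lagrange or Pontryagin conditions of dynamic optimization.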

Characteristics Analysis of Voltage Sag and Voltage Swell in Multi-Grounded Four-Wire Power Distribution Systems

In North America, most power distribution systems employ a four-wire multi-grounded neutral (MGN) design. Under unbalanced conditions such systems exhibit particular inherent characteristics, which make the mechanism of voltage swell and voltage sag in MGN feeders difficult to understand. This paper explains these characteristics and introduces an equivalent model of a full-scale multi-grounded distribution system implemented in MATLAB under Windows, which is used as the simulation tool. The results are expected to help utility engineers understand the impact of the MGN design on distribution system operation.

A Serializability Condition for Multi-step Transactions Accessing Ordered Data

In mobile environments, unspecified numbers of transactions arrive in continuous streams. To prove correctness of their concurrent execution a method of modelling an infinite number of transactions is needed. Standard database techniques model fixed finite schedules of transactions. Lately, techniques based on temporal logic have been proposed as suitable for modelling infinite schedules. The drawback of these techniques is that proving the basic serializability correctness condition is impractical, as encoding (the absence of) conflict cyclicity within large sets of transactions results in prohibitively large temporal logic formulae. In this paper, we show that, under certain common assumptions on the graph structure of data items accessed by the transactions, conflict cyclicity need only be checked within all possible pairs of transactions. This results in formulae of considerably reduced size in any temporal-logic-based approach to proving serializability, and scales to arbitrary numbers of transactions.
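As a rough, hypothetical sketch of the kind of pairwise check this result licenses (standard read/write conflict semantics are assumed; the paper's temporal-logic encoding is not reproduced here), the following Python fragment tests a finite schedule for conflict cycles restricted to pairs of transactions:

from itertools import combinations

def conflicting(op1, op2):
    # Two operations on the same item conflict unless both are reads.
    return not (op1 == "r" and op2 == "r")

def pair_conflict_cycle(schedule, a, b):
    """Given a schedule as a list of (txn, op, item), return True if
    transactions a and b have conflict edges in both directions."""
    edge_ab = edge_ba = False
    for i, (t1, o1, x1) in enumerate(schedule):
        for t2, o2, x2 in schedule[i + 1:]:
            if x1 == x2 and conflicting(o1, o2):
                if (t1, t2) == (a, b):
                    edge_ab = True
                elif (t1, t2) == (b, a):
                    edge_ba = True
    return edge_ab and edge_ba

def pairwise_conflict_serializable(schedule):
    # Accept iff no pair of transactions exhibits a two-cycle of conflicts.
    txns = {t for t, _, _ in schedule}
    return not any(pair_conflict_cycle(schedule, a, b)
                   for a, b in combinations(txns, 2))

# Example schedule: T2's write on x is interleaved between T1's read and write.
s = [("T1", "r", "x"), ("T2", "w", "x"), ("T1", "w", "x")]
print(pairwise_conflict_serializable(s))   # False: conflicts in both directions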

Shape Restoration of the Left Ventricle

This paper describes an automatic algorithm to restore the shape of three-dimensional (3D) left ventricle (LV) models created from magnetic resonance imaging (MRI) data using a geometry-driven optimization approach. Our basic premise is to restore the LV shape such that the LV epicardial surface is smooth after the restoration. A geometrical measure known as the minimum principal curvature (κ2) is used to assess the smoothness of the LV. This measure is used to construct the objective function of a two-step optimization process. The objective of the optimization is to achieve a smooth epicardial shape by iterative in-plane translation of the MRI slices. Quantitatively, this corresponds to minimizing the sum of the magnitudes of κ2 over the regions where κ2 is negative. A limited-memory quasi-Newton algorithm, L-BFGS-B, is used to solve the optimization problem. We tested our algorithm on an in vitro theoretical LV model and 10 in vivo patient-specific models which contain significant motion artifacts. The results show that our method is able to automatically restore the LV models to a smooth shape without altering the general shape of the model. The magnitudes of the in-plane translations are also consistent with existing registration techniques and experimental findings.
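A minimal Python sketch of the kind of optimization described, assuming SciPy's L-BFGS-B implementation; the curvature computation below is a crude stand-in for the paper's surface-based κ2, and the slice model is purely illustrative:

import numpy as np
from scipy.optimize import minimize

def kappa2_proxy(centroids):
    # Crude stand-in for the minimum principal curvature of the epicardial
    # surface: signed second differences of the slice-centroid path.  The
    # paper computes kappa2 on the reconstructed 3D surface instead.
    return np.diff(centroids, n=2, axis=0)

def cost(translations, centroids):
    # Objective: sum of |kappa2| over locations where kappa2 is negative,
    # after applying per-slice in-plane (dx, dy) translations.
    shifted = centroids + translations.reshape(-1, 2)
    k2 = kappa2_proxy(shifted)
    return np.sum(np.abs(k2[k2 < 0.0]))

def restore(centroids, max_shift=10.0):
    x0 = np.zeros(centroids.size)                 # start from zero translation
    bounds = [(-max_shift, max_shift)] * x0.size
    res = minimize(cost, x0, args=(centroids,), method="L-BFGS-B", bounds=bounds)
    return res.x.reshape(-1, 2)                   # per-slice (dx, dy) shifts

# Toy stack of 10 slice centroids with artificial in-plane motion artifacts.
rng = np.random.default_rng(0)
centroids = np.column_stack([np.linspace(0, 9, 10)] * 2) + rng.normal(0, 1.5, (10, 2))
print(restore(centroids))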

Revisiting the Concept of Risk Analysis within the Context of Geospatial Database Design: A Collaborative Framework

The aim of this research is to design a collaborative framework that integrates risk analysis activities into the geospatial database design (GDD) process. Risk analysis is rarely undertaken iteratively as part of present GDD methods, in conformance with requirements engineering (RE) guidelines and risk standards. Accordingly, when risk analysis is performed during GDD, some foreseeable risks may be overlooked and fail to reach the output specifications, especially when user intentions are not systematically collected. This may lead to ill-defined requirements and ultimately to higher risks of geospatial data misuse. The adopted approach consists of 1) reviewing the risk analysis process within the scope of RE and GDD, 2) analyzing the challenges of risk analysis within the context of GDD, and 3) presenting the components of a risk-based collaborative framework that improves the collection of the intended/forbidden usages of the data and helps geo-IT experts discover implicit requirements and risks.

Satellite Data Classification Accuracy Assessment Based on Reference Dataset

In order to develop forest management strategies for tropical forests in Malaysia, surveying forest resources and monitoring the forest area affected by logging activities are essential. Tremendous effort has been devoted to land cover classification related to forest resource management in this country, as it is a priority in all aspects of forest mapping using remote sensing and related technologies such as GIS. Indeed, classification is a compulsory step in any remote sensing study. Therefore, the main objective of this paper is to assess the classification accuracy of a classified forest map derived from Landsat TM data using different numbers of reference data (200 and 388 reference points). The comparison was made using an observation approach (200 reference points) and a combined interpretation-and-observation approach (388 reference points). Five land cover classes, namely primary forest, logged-over forest, water bodies, bare land and agricultural crop/mixed horticulture, can be identified by differences in their spectral response. Results showed that the overall accuracy obtained with 200 reference points was 83.5% (kappa value 0.7502459; kappa variance 0.002871), which is considered acceptable or good for optical data. When the number of reference points in the confusion matrix was increased from 200 to 388, the overall accuracy improved slightly from 83.5% to 89.17%, and the kappa statistic increased from 0.7502459 to 0.8026135. These accuracies suggest that the strategy for selecting training areas, the interpretation approach and the number of reference data used are important for producing better classification results.
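A minimal sketch of how the overall accuracy and kappa statistic quoted above are computed from a confusion matrix (the matrix shown is illustrative, not the study's data):

import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: mapped classes)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    observed = np.trace(confusion) / n                      # overall accuracy
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Hypothetical 3-class example, not the study's data.
cm = [[50, 3, 2],
      [4, 45, 1],
      [2, 3, 40]]
acc, kappa = accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.3f}, kappa = {kappa:.3f}")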

Utilization of Advanced Data Storage Technology to Conduct Construction Industry on Clear Environment

Construction projects generally take place in uncontrolled and dynamic environments, and construction waste is a serious environmental problem in many large cities. The total amount of waste and the carbon dioxide emissions from transportation vehicles remain out of control owing to the increasing number of construction projects, massive urban development projects and the lack of effective tools for minimizing adverse environmental impacts in construction. This research concerns the use of integrated automated tracking and data storage technologies in the area of environmental management to monitor and control adverse environmental impacts such as construction waste and carbon dioxide emissions. Radio Frequency Identification (RFID) integrated with the Global Positioning System (GPS) provides an opportunity to uniquely identify materials, components and equipment and to locate and track them with minimal or no worker input. The transmission of data to the central database is carried out using the Global System for Mobile Communications (GSM).
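As a purely illustrative sketch of the kind of record such an integrated RFID/GPS/GSM system might transmit to the central database (the field names are hypothetical and not taken from the paper):

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class TrackingRecord:
    """One observation of a tagged item, as it might be sent over GSM."""
    tag_id: str            # RFID tag identifier of the material or equipment
    latitude: float        # GPS position at the time of the read
    longitude: float
    timestamp: str         # UTC time of the read, ISO 8601
    event: str             # e.g. "delivered", "installed", "disposed"

record = TrackingRecord(
    tag_id="E200-1234-5678",
    latitude=3.1390,
    longitude=101.6869,
    timestamp=datetime.now(timezone.utc).isoformat(),
    event="delivered",
)
payload = json.dumps(asdict(record))   # serialized payload for transmission
print(payload)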

A Thought on Exotic Statistical Distributions

Statistical distributions are used to model the nature of various types of data sets. Although these distributions are mostly unimodal, it is quite common to observe multiple modes in the empirical distribution of the underlying variables, which makes precise modeling unrealistic. A lack of smoothness in observed data is not necessarily due to randomness; it can also be due to non-randomness, manifesting as zigzag curves, oscillations, humps, etc. The present paper argues that trigonometric functions, which have so far seen little use in the probability functions of distributions, have the potential to capture such behaviour if incorporated into the distribution appropriately. A simple distribution involving trigonometric functions, named the Sinoform Distribution, is illustrated in the paper with a data set. The paper demonstrates how trigonometric functions can make statistical distributions exotic: densities with multiple modes, oscillations and zigzag curves become possible, which may be suitable for explaining the underlying nature of selected data sets.
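As a hypothetical illustration of the general idea, and not the paper's Sinoform Distribution itself, a standard normal density modulated by a sine term remains a valid density yet becomes oscillatory and multimodal:

\[
f(x) = \bigl(1 + a\sin(\omega x)\bigr)\,\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2},
\qquad |a| \le 1 ,
\]

which integrates to one because \(\mathbb{E}[\sin(\omega X)] = 0\) for a symmetric zero-mean X, yet exhibits oscillations and multiple modes for suitable choices of a and ω.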

A Microcontroller Implementation of Model Predictive Control

Model Predictive Control (MPC) is increasingly being proposed for real-time applications and embedded systems. However, compared with the PID controller, implementation of MPC in miniaturized devices such as Field Programmable Gate Arrays (FPGAs) and microcontrollers has historically been limited because of its implementation complexity and computation time requirements. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. Recent advances in microelectronics and software now allow such techniques to be implemented in embedded systems. In this work, we take advantage of these advances to deploy one of the most studied and applied control techniques in industrial engineering. Specifically, this paper proposes an efficient framework for the implementation of Generalized Predictive Control (GPC) on the STM32 microcontroller. The STM32 Keil starter kit, based on a JTAG interface and the STM32 board, was used to implement the proposed GPC firmware. Besides the GPC, a PID anti-windup algorithm was also implemented using the Keil development tools designed for ARM processor-based microcontroller devices, working with the C language. A performance comparison study was carried out between the two firmwares and shows good execution speed and low computational burden. These results encourage the development of simple predictive algorithms to be programmed on standard industrial hardware. The main features of the proposed framework are illustrated through two examples and compared with the anti-windup PID controller.
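For reference, a minimal sketch of the unconstrained GPC control-law computation in its standard textbook form (this is not the firmware described in the paper; the step-response matrix G, free response f and reference trajectory w are assumed to be available from the plant model):

import numpy as np

def gpc_control_increment(G, f, w, lam):
    """Unconstrained GPC: minimize ||w - (G @ du + f)||^2 + lam * ||du||^2
    over the vector of future control increments du, and apply only the
    first increment (receding horizon).

    G   : (N x Nu) step-response (dynamic) matrix
    f   : (N,)  free response of the plant over the prediction horizon
    w   : (N,)  reference trajectory
    lam : control-weighting factor
    """
    Nu = G.shape[1]
    K = np.linalg.solve(G.T @ G + lam * np.eye(Nu), G.T)   # (G'G + lam I)^-1 G'
    du = K @ (w - f)
    return du[0]                                            # first move only

# Tiny usage example with an arbitrary 3-step horizon (illustrative numbers only).
G = np.array([[0.2, 0.0], [0.5, 0.2], [0.8, 0.5]])
print(gpc_control_increment(G, f=np.zeros(3), w=np.ones(3), lam=0.1))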

Supercompression for Full-HD and 4k-3D (8k) Digital TV Systems

In this work we develop the concept of supercompression, i.e., compression applied on top of the compression standard in use, so that the two compression ratios multiply. Supercompression is based on super-resolution: it is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. When the compression ratio is very high, a convolutive mask is applied inside the decoder to restore the edges and eliminate the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. Specifically, the mentioned mask is coded inside the texture memory of a GPGPU.
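As a rough sketch of the decoder-side idea (the actual convolutive mask and GPGPU texture-memory implementation are not reproduced here; the kernel below is a generic, hypothetical edge-restoring filter applied after upscaling):

import numpy as np
from scipy.ndimage import convolve, zoom

# Hypothetical 3x3 sharpening kernel standing in for the paper's convolutive mask.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

def super_resolve(decoded_lowres, scale=2):
    """Upscale a decoded low-resolution frame and restore edges with a mask."""
    upscaled = zoom(decoded_lowres, scale, order=1)   # bilinear spatial upscaling
    return convolve(upscaled, SHARPEN, mode="nearest")

frame = np.random.rand(270, 480)                      # stand-in decoded frame
restored = super_resolve(frame, scale=4)              # e.g. back to 1080 x 1920
print(restored.shape)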

Heat Exchanger Design

This paper is intended to assist anyone with some general technical experience but perhaps limited specific knowledge of heat transfer equipment. A characteristic of heat exchanger design is the procedure of specifying a design, its heat transfer area and pressure drops, and checking whether the assumed design satisfies all requirements. The purpose of this paper is to show how to design an oil cooler (heat exchanger), particularly of the shell-and-tube type, which is the most common type of liquid-to-liquid heat exchanger. General design considerations and the design procedure are also illustrated, and a flow diagram is provided as an aid to the design procedure. MATLAB and AutoCAD are used in the design calculations. Fundamental heat transfer concepts and the complex relationships involved in such exchangers are also presented. The primary aim of the design is to obtain a high heat transfer rate without exceeding the allowable pressure drop. The resulting computer program is highly useful for designing shell-and-tube heat exchangers and for modifying existing designs.
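A minimal sketch of the central sizing step such a procedure relies on, the log-mean temperature difference (LMTD) method with Q = U A F ΔT_lm; the numbers are illustrative and the paper's actual correlations and geometry are not reproduced here:

import math

def lmtd(dt1, dt2):
    """Log-mean temperature difference for terminal differences dt1, dt2 (K)."""
    if abs(dt1 - dt2) < 1e-9:
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def required_area(duty_w, u_w_m2k, dt1, dt2, f_correction=1.0):
    """Heat transfer area A (m^2) from Q = U * A * F * LMTD."""
    return duty_w / (u_w_m2k * f_correction * lmtd(dt1, dt2))

# Hypothetical oil cooler: 250 kW duty, assumed U = 350 W/m^2K,
# hot oil 90 -> 60 C cooled by water 30 -> 40 C (counter-current).
dt_hot_end = 90 - 40     # 50 K
dt_cold_end = 60 - 30    # 30 K
A = required_area(250e3, 350.0, dt_hot_end, dt_cold_end, f_correction=0.9)
print(f"required area ~ {A:.1f} m^2")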

A Proposal for Federation Technology for Authenticated Information between Terminals

Recently, various services such as television and the Internet have come to be received through a variety of terminals. Greater convenience could be gained by receiving these services on a cellular phone while out and then continuing to receive the same services on a large-screen digital television after coming home. However, it is currently necessary to go through the same authentication processing again when switching to the TV at home. In this study, we have developed an authentication method that enables users to switch terminals in environments in which the user receives a service from a server through a terminal. Specifically, the method simplifies server-side authentication when switching from one terminal to another by using previously authenticated information.
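As a generic, hypothetical sketch of the general idea (not the federation protocol proposed in the paper), a server can let a second terminal reuse previously authenticated information through a token handed over from the first terminal:

import secrets

class Server:
    """Toy service keeping track of sessions that were already authenticated once."""
    def __init__(self):
        self.sessions = {}                       # token -> user id

    def login(self, user, password):
        # Full authentication happens only once, on the first terminal.
        token = secrets.token_hex(16)
        self.sessions[token] = user
        return token

    def resume(self, token):
        # A second terminal presents the previously issued token instead of
        # repeating the full authentication exchange.
        return self.sessions.get(token)

server = Server()
token = server.login("alice", "secret")          # authenticated on the phone
print(server.resume(token))                      # resumed on the TV -> "alice"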

High Performance VLSI Architecture of 2D Discrete Wavelet Transform with Scalable Lattice Structure

In this paper, we propose a fully-utilized, block-based 2D DWT (discrete wavelet transform) architecture, which consists of four 1D DWT filters with a two-channel QMF lattice structure. The proposed architecture requires about 2MN-3N registers to save the intermediate results for higher-level decomposition, where M and N stand for the filter length and the row width of the image, respectively. Furthermore, the proposed 2D DWT processes the horizontal and vertical directions simultaneously, without an idle period, so that it computes the DWT of an N×N image in a period of N^2(1-2^(-2J))/3, where J is the number of decomposition levels. Compared to existing approaches, the proposed architecture achieves 100% hardware utilization and high throughput rates. To mitigate the long critical-path delay due to the cascaded lattices, a four-stage pipeline can be applied while retaining 100% hardware utilization. The proposed architecture can be applied to real-time video signal processing.
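For reference, a minimal software sketch of the separable row-column 2D DWT that such an architecture computes in hardware (one Haar level only; this is not the QMF lattice implementation itself):

import numpy as np

def haar_1d(x):
    """One level of a 1D Haar DWT: low-pass (average) and high-pass (detail)."""
    even, odd = x[..., 0::2], x[..., 1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def dwt2_level(image):
    """One 2D DWT level by filtering rows, then columns (separable transform)."""
    lo_rows, hi_rows = haar_1d(image)                  # along rows (axis 1)
    ll, lh = haar_1d(lo_rows.T)                        # along columns
    hl, hh = haar_1d(hi_rows.T)
    return ll.T, lh.T, hl.T, hh.T                      # approximation + details

img = np.random.rand(8, 8)
ll, lh, hl, hh = dwt2_level(img)
print(ll.shape)                                        # (4, 4) subbands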

Clamped-Clamped Boundary Conditions for Analysis of Free Vibration of a Functionally Graded Cylindrical Shell with a Ring Support Based on Third Order Shear Deformation Theory

In this paper a study of the vibration of thin cylindrical shells with ring supports, made of functionally graded materials (FGMs) composed of stainless steel and nickel, is presented. Material properties vary along the thickness direction of the shell according to a volume fraction power law. The cylindrical shells have ring supports that are arbitrarily placed along the shell and impose zero lateral deflection. The study is carried out based on third-order shear deformation shell theory (TSDT). The analysis uses Hamilton's principle. The governing equations of motion of the FGM cylindrical shells are derived based on the shear deformation theory. Results are presented on the frequency characteristics, the influence of the ring support position and the influence of boundary conditions. The present analysis is validated by comparing results with those available in the literature.
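For context, the volume fraction power law commonly used for such FGM shells (the exact form and exponent used in the paper are not given in the abstract) expresses a through-thickness material property P as

\[
V_f(z) = \left(\frac{z}{h} + \frac{1}{2}\right)^{N}, \qquad
P(z) = \bigl(P_o - P_i\bigr)\,V_f(z) + P_i ,
\]

where z is the thickness coordinate measured from the mid-surface, h the shell thickness, N the power-law exponent, and P_o, P_i the properties of the outer and inner constituents (here stainless steel and nickel, in whichever arrangement the paper adopts).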

Comparison of Finite Difference Schemes for Water Flow in Unsaturated Soils

Water movement in unsaturated soil can be expressed by a partial differential equation known as the Richards equation. The objective of this study is to find an appropriate implicit numerical solution for the head-based form of the Richards equation. Several well-known finite difference schemes (fully implicit, Crank-Nicolson and Runge-Kutta) are examined in this study. In addition, the effects of different approximations of the moisture capacity function, convergence criteria and time-stepping methods are evaluated. Two different infiltration problems are solved to investigate the performance of the schemes; these problems involve vertical water flow in a wet and in a very dry soil. The numerical solutions of the two problems are compared using four evaluation criteria, and the comparison shows that the fully implicit scheme performs better than the other schemes. In addition, using the standard chord-slope method to approximate the moisture capacity function, an automatic time-stepping method, and the difference between two successive iterations as the convergence criterion in the fully implicit scheme leads to better and more reliable results for simulating water movement in different unsaturated soils.
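For reference, the head-based Richards equation for vertical flow and the chord-slope approximation of the moisture capacity referred to above can be written (a standard formulation; the paper's exact discretization is not reproduced here) as

\[
C(h)\,\frac{\partial h}{\partial t}
= \frac{\partial}{\partial z}\!\left[K(h)\left(\frac{\partial h}{\partial z} + 1\right)\right],
\qquad
C(h) = \frac{d\theta}{dh} \approx \frac{\theta^{\,m} - \theta^{\,m-1}}{h^{\,m} - h^{\,m-1}},
\]

where h is the pressure head, θ the volumetric water content, K(h) the unsaturated hydraulic conductivity, z the vertical coordinate (positive upward), and m the iteration or time level at which the chord slope is evaluated.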

Experimental Study of the Metal Foam Flow Conditioner for Orifice Plate Flowmeters

The sensitivity of orifice plate metering to disturbed flow (either asymmetric or swirling) is a subject of great concern to flow meter users and manufacturers. The distortions caused by pipe fittings and pipe installations upstream of the orifice plate are major sources of this type of non-standard flow. These distortions can alter the accuracy of metering to an unacceptable degree. In this work, a multi-scale object known as metal foam has been used to generate a predetermined turbulent flow upstream of the orifice plate. The experimental results showed that the combination of an orifice plate and a metal foam flow conditioner is broadly insensitive to upstream disturbances. The metal foam demonstrated good performance in removing swirl and producing a repeatable flow profile within a short distance downstream of the device. The results of using a combination of a metal foam flow conditioner and an orifice plate under non-standard flow conditions, including swirling and asymmetric flow, show that this package can preserve the accuracy of metering up to the level required by the standards.

Entrepreneurial Promotion among Farmers: the Early Impacts

The development of entrepreneurial competences among farmers has been pointed out as a necessary condition for the modernization of the land in the face of globalization. However, the educational processes involved in such development have been little studied, especially in emerging economies. This research aims to shed light on some of the critical issues behind the early stages of the transformation of farmers into entrepreneurs, through in-depth interviews with farmers, entrepreneurial promoters and public officials participating in a public pilot project in Mexico. Although major impacts were expected only in the long run, important positive changes in the mindset of farmers and other participants were found in the early stages of the intervention. The farmers apparently started a process of becoming more conscious of the importance of preserving aquifer resources, as well as more market- and entrepreneurially oriented.

Analysis of Lightning Surge Condition Effect on Surge Arrester in Electrical Power System by using ATP/EMTP Program

A lightning surge produces traveling waves and a temporary voltage rise in the transmission line system. Lightning is highly damaging to transmission lines and the equipment installed on them, so it is necessary to study and analyze the temporary voltage rise in order to design and locate surge arresters. This analysis describes the shape of the lightning wave on a 115 kV transmission line in Thailand, using the ATP/EMTP program to model the transmission line and the lightning surge. Because of the program's limitations, the transmission line geometry and the surge parameters had to be calculated by hand, using the values in the manual closest to the actual parameters. In addition, to evaluate the effect on the surge protector when lightning strikes, the surge arrester model must be correct and comply with the Metropolitan Electricity Authority's standard. The calculated results were also compared with field data. The results of the analysis show that the temporary voltage rise on the struck line reaches 326.59 kV when no surge arrester is installed in the system, whereas it is limited to 182.83 kV when a surge arrester is installed, and the duration of the traveling wave is also reduced. The surge arrester should be installed as close to the transformer as possible. It is therefore necessary to know the correct installation distance and the correct surge arrester rating in order to prevent the temporary voltage rise effectively.
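A simplified, purely illustrative calculation (not the ATP/EMTP model used in the paper) of why the voltage at a line termination rises sharply and how an arrester clamps it: at a junction between surge impedances Z1 and Z2, an incoming traveling wave v_i is transmitted as v_t = 2*Z2/(Z1+Z2)*v_i, and an ideal arrester simply limits the terminal voltage to its protective level.

def transmitted_voltage(v_incident_kv, z_line_ohm, z_terminal_ohm):
    """Refraction of a traveling wave at a surge-impedance discontinuity."""
    return 2.0 * z_terminal_ohm / (z_line_ohm + z_terminal_ohm) * v_incident_kv

def with_arrester(v_terminal_kv, protective_level_kv):
    """Ideal arrester model: clamp the terminal voltage at its protective level."""
    return min(v_terminal_kv, protective_level_kv)

# Hypothetical numbers: 400-ohm overhead line, high-impedance transformer
# terminal (wave nearly doubles), 200 kV incoming surge, 180 kV arrester level.
v_t = transmitted_voltage(200.0, 400.0, 5000.0)
print(round(v_t, 1), round(with_arrester(v_t, 180.0), 1))   # ~370.4 kV -> 180.0 kV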

Maximizer of the Posterior Marginal Estimate of Phase Unwrapping Based On Statistical Mechanics of the Q-Ising Model

We constructed a method of phase unwrapping for a typical wave-front by utilizing the maximizer of the posterior marginal (MPM) estimate corresponding to equilibrium statistical mechanics of the three-state Ising model on a square lattice on the basis of an analogy between statistical mechanics and Bayesian inference. We investigated the static properties of an MPM estimate from a phase diagram using Monte Carlo simulation for a typical wave-front with synthetic aperture radar (SAR) interferometry. The simulations clarified that the surface-consistency conditions were useful for extending the phase where the MPM estimate was successful in phase unwrapping with a high degree of accuracy and that introducing prior information into the MPM estimate also made it possible to extend the phase under the constraint of the surface-consistency conditions with a high degree of accuracy. We also found that the MPM estimate could be used to reconstruct the original wave-fronts more smoothly, if we appropriately tuned hyper-parameters corresponding to temperature to utilize fluctuations around the MAP solution. Also, from the viewpoint of statistical mechanics of the Q-Ising model, we found that the MPM estimate was regarded as a method for searching the ground state by utilizing thermal fluctuations under the constraint of the surface-consistency condition.
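For reference, the MPM estimate referred to above assigns to each lattice site i the value that maximizes the posterior marginal (the Q-Ising prior and likelihood used in the paper are not reproduced here):

\[
\hat{x}_i = \arg\max_{x_i \in \{0,\dots,Q-1\}} \; \sum_{\{x_j : j \neq i\}} P(\mathbf{x}\mid \mathbf{y}),
\]

where y is the observed (wrapped) phase data. In practice the marginals are estimated from Monte Carlo samples of the posterior at the finite temperature set by the hyperparameters, which is what distinguishes the MPM estimate from the zero-temperature MAP solution.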