Conflicts and Compromise in the Management of Transboundary Water Resources (The Case of Central Asia)

The problem of the integrated use of water resources in Central Asia is considered, taking into account the sovereignty of the states and the increasing demand for water in their economies. A software package with appropriate mathematical models is proposed for calculating possible variants of using the Amudarya upstream water resources so as to satisfy the conflicting requirements of the national economies for irrigation and energy generation.

Convective Heat Transfer of Internal Electronic Components in a Headlight Geometry

A numerical study is presented on convective heat transfer in enclosures. The results are addressed to automotive headlights containing new-age light sources like Light Emitting Diodes (LED). The heat transfer from the heat source (LED) to the enclosure walls is investigated for mixed convection as the interaction of the forced convection flow from an inlet and an outlet port and the natural convection at the heat source. Unlike existing studies, the inlet and outlet port are thermally coupled and do not serve to remove hot fluid. The input power of the heat source is expressed by the Rayleigh number. The internal position of the heat source, the aspect ratio of the enclosure, and the inclination angle of one wall are varied. The results are given in terms of the global Nusselt number and the enclosure Nusselt number, which characterize the heat transfer from the source and from the interior fluid to the enclosure walls, respectively. It is found that the heat transfer from the source to the fluid can be maximized if the source is placed in the main stream from the inlet to the outlet port. In this case, the Reynolds number and the heat source position have the major impact on the heat transfer. A disadvantageous position has been found where natural and forced convection compete with each other. The overall heat transfer from the source to the wall increases with increasing Reynolds number as well as with increasing aspect ratio and decreasing inclination angle. The heat transfer from the interior fluid to the enclosure wall increases upon decreasing the aspect ratio and increasing the inclination angle. This counteracting behaviour is caused by the variation of the area of the enclosure wall. All mixed convection results are compared to the natural convection limit.

Hydrodynamic Characteristics of Dry Beneficiation of Iron Ore and Coal in a Fast Fluidized Bed

Iron ore and coal are two major raw materials used in the iron-making industry. Ore fines containing around 5% alumina are usually rejected due to the high proportion of alumina. Therefore, a technology or process that can reduce the alumina content by 2% through beneficiation would be highly attractive. In addition, fine coal with an ash content of nearly 12% is directly injected into the blast furnace. Fast fluidization is a technology by which dry beneficiation of coal and iron ore can be carried out. During the fluidization process, the iron ore and coal are fluidized at high velocity in the riser of a fast fluidized bed; the heavier and coarser particles generally settle at the bottom in the dense zone of the riser, while the finer and lighter particles are entrained to the top dilute zone and then fed back to the bottom of the riser column via a cyclone. Most of the alumina and the low-ash fine coal, being lighter, are expected to move up the riser, so a natural beneficiation of the ores is expected to take place in the riser. Therefore, in this study an attempt has been made at the dry beneficiation of iron ore and coal in a fast fluidized bed and its hydrodynamic characterization.

Elliptical Feature Extraction Using Eigenvalues of Covariance Matrices, Hough Transform and Raster Scan Algorithms

In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme which consists of eigenvalues of covariance matrices, the circular Hough transform and Bresenham's raster scan algorithms. In this approach we use the fact that the large and small eigenvalues of covariance matrices are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be identified using the circular Hough transform (CHT). A sparse matrix technique is used to perform the CHT. Since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they provide an advantage in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using a raster scan algorithm which uses the geometrical symmetry property. This method does not require the evaluation of tangents or curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed and accuracy in identifying the feature. The new method has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, and comparisons with the Hough transform and its variants and other tangent-based methods, are reported.
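
As a hedged illustration of the eigenvalue step only (not the authors' full pipeline, which also includes the CHT and the raster scan stage), the Python sketch below estimates the axial lengths of an ellipse from the eigenvalues of the covariance matrix of its boundary points. The sample points and the uniform-parameter sampling are hypothetical assumptions.

```python
import numpy as np

# Hypothetical edge points sampled from an ellipse with semi-axes a=5, b=2,
# rotated and shifted; in practice these would come from an edge detector.
t = np.linspace(0.0, 2.0 * np.pi, 400)
a_true, b_true, phi = 5.0, 2.0, 0.3
x = a_true * np.cos(t)
y = b_true * np.sin(t)
pts = np.column_stack([x * np.cos(phi) - y * np.sin(phi) + 10.0,
                       x * np.sin(phi) + y * np.cos(phi) - 4.0])

# Covariance matrix of the boundary points (mean is removed internally).
cov = np.cov(pts, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]

# For points sampled uniformly in the parametric angle, E[x^2] = a^2 / 2 along
# the major axis, so the semi-axes follow from the eigenvalues as sqrt(2*lambda).
a_est, b_est = np.sqrt(2.0 * eigvals)
print(f"estimated semi-axes: a = {a_est:.2f}, b = {b_est:.2f}")
```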

Conjugate Heat Transfer over an Unsteady Stretching Sheet: Mixed Convection with Magnetic Effect

Conjugate heat transfer for steady two-dimensional mixed convection with magnetohydrodynamic (MHD) flow of an incompressible quiescent fluid over an unsteady thermal forming stretching sheet has been studied. A parameter, M, representing the dominance of the magnetic effect, is introduced into the governing equations. A similarity transformation and an implicit finite-difference method have been used to analyze the present problem. The numerical solutions for the flow velocity distributions, the temperature profiles, and the unknown wall values f''(0) and θ'(0) needed to calculate the heat transfer of the similar boundary-layer flow are obtained as functions of the unsteadiness parameter (S), the Prandtl number (Pr), the space-dependent parameter (A) and the temperature-dependent parameter (B) for the heat source/sink, and the magnetic parameter (M). The effects of these parameters are also discussed. The results show that a larger Pr produces a greater heat transfer effect, while larger M, S, A and B reduce the heat transfer effect. Finally, conjugate heat transfer for free convection with a larger G gives a better heat transfer effect than the smaller value G = 0.

Investigating Sustainable Neighborhood Development in Jahanshahr

Nowadays, achieving sustainable development in cities is regarded as one of the most important goals of urban managers. Meanwhile, the neighborhood, as the smallest unit of urban spatial organization, has a substantial effect on urban sustainability. Hence, attention to this subject is highly important in urban development plans. The objective of this study is to evaluate the status of the Jahanshahr Neighborhood in Karaj city based on sustainable neighborhood development indicators. The research is based on the documentary method and field surveys. The evaluation of the Jahanshahr Neighborhood of Karaj shows that it has a high level of sustainability in the physical and economic dimensions and a low level in the cultural and social dimensions. Therefore, as a semi-sustainable neighborhood, it must take measures for the development of collective spaces and more efficient use of public neighborhood spaces through the collaboration of citizens and officials.

Evaluation of Beauveria bassiana Spore Compatibility with Surfactants

Spores of the entomopathogenic fungus Beauveria bassiana were evaluated for their compatibility with four surfactants, SDS (sodium dodecyl sulphate), CABS-65 (calcium alkyl benzene sulphonate), Tween 20 (polyethylene sorbitan monolaurate) and Tween 80 (polyoxyethylene sorbitan monooleate), at six different concentrations (0.1%, 0.5%, 1%, 2.5%, 5% and 10%). Incubated spores showed a decrease in concentration due to the conversion of spores to hyphae. The maximum germination recorded in spores incubated for 72 h varied with surfactant concentration at 49-68% (SDS), 39-53% (CABS), 78-92% (Tween 80) and 80-92% (Tween 20), while the optimal surfactant concentration for spore germination was found to be 2.5-5%. The surfactant effect on spores was more pronounced with SDS and CABS-65, where significant deterioration and loss in viability of the incubated spores were observed. The effects of Tween 20 and Tween 80 were comparatively less inhibitory. The results of the study should help in surfactant selection for B. bassiana emulsion preparation.

A Monte Carlo Method for Data Stream Analysis

Data stream analysis is the process of computing various summaries and derived values from large amounts of data which are continuously generated at a rapid rate. The nature of a stream does not allow a revisit of each data element. Furthermore, data processing must be fast to produce timely analysis results. These requirements impose constraints on the design of the algorithms to balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. These techniques can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem. The data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed by a series of experiments. We apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
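
The EMR sampling method itself is not reproduced here; the sketch below only illustrates the generic Monte Carlo idea of keeping a fixed-size uniform random sample of a stream (reservoir sampling) in a single pass and approximating a summary statistic from it. All names and values are illustrative.

```python
import random

def stream_mean_estimate(stream, sample_size=1000, seed=0):
    """Approximate the mean of a data stream from a fixed-size uniform sample.

    Reservoir sampling keeps each element seen so far with equal probability,
    so the retained sample can stand in for the full stream in later analysis
    (clustering, classification, ...) without revisiting past elements.
    """
    rng = random.Random(seed)
    reservoir = []
    for i, x in enumerate(stream):
        if i < sample_size:
            reservoir.append(x)
        else:
            j = rng.randint(0, i)          # uniform index over all items seen so far
            if j < sample_size:
                reservoir[j] = x           # replace with decreasing probability
    return sum(reservoir) / len(reservoir)

# Hypothetical stream: one million values processed in a single pass.
print(stream_mean_estimate((i % 97 for i in range(1_000_000))))
```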

Performance Comparison of Parallel Sorting Algorithms on the Cluster of Workstations

Sorting has received the most attention among all computational tasks over the past years because sorted data are at the heart of many computations. Sorting is of additional importance to parallel computing because of its close relation to the task of routing data among processes, which is an essential part of many parallel algorithms. Many parallel sorting algorithms have been investigated for a variety of parallel computer architectures. In this paper, three parallel sorting algorithms have been implemented and compared in terms of their overall execution time. The algorithms implemented are the odd-even transposition sort, parallel merge sort and parallel rank sort. A cluster of workstations (Windows Compute Cluster) has been used to compare the implemented algorithms. The C# programming language is used to develop the sorting algorithms. The MPI (Message Passing Interface) library has been selected to establish the communication and synchronization between processors. The time complexity of each parallel sorting algorithm is also mentioned and analyzed.
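
As a sketch of one of the compared algorithms (the paper's implementations are in C# with MPI; this is only a sequential Python illustration), the code below shows the odd-even transposition idea; in the parallel version each phase becomes a compare-exchange between neighbouring processes.

```python
def odd_even_transposition_sort(a):
    """Sequential odd-even transposition sort (O(n^2) comparisons).

    In an MPI version, each process holds a block of the array and each phase
    performs a compare-split with its left or right neighbour instead of the
    local swaps shown here.
    """
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2              # even phases compare (0,1),(2,3),...; odd phases (1,2),(3,4),...
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 3]))
```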

XPM Response of Multiple Quantum Well Chirped DFB-SOA All-Optical Flip-Flop Switching

In this paper, based on the coupled-mode and carrier rate equations, a dynamic model of an MQW chirped DFB-SOA all-optical flip-flop is derived and analyzed numerically. We have analyzed the effects of QW and MQW strain and of cross-phase modulation (XPM) on the dynamic response and on the rise and fall times of the DFB-SOA all-optical flip-flop. We have shown that a strained MQW active region, under an optimized condition, in a DFB-SOA with a chirped grating can significantly improve the switch-ON speed limitation of the device, while the fall time is increased. The rise time obtained for such an all-optical flip-flop under the optimized condition is tr = 255 ps.

Ezilla Cloud Service with Cassandra Database for Sensor Observation System

The main mission of Ezilla is to provide a friendly interface for accessing virtual machines and quickly deploying a high-performance computing environment. Ezilla has been developed by the Pervasive Computing Team at the National Center for High-performance Computing (NCHC). Ezilla integrates Cloud middleware, virtualization technology, and a Web-based Operating System (WebOS) to form a virtual computer in a distributed computing environment. In order to scale the dataset and speed up access, we propose a sensor observation system that handles a huge amount of data in the Cassandra database. The sensor observation system is based on Ezilla and stores raw sensor data in a distributed database. We adopt the Ezilla Cloud service to create virtual machines and log into them to deploy the sensor observation system. Integrating the sensor observation system with Ezilla makes it possible to deploy the experimental environment quickly and to access a huge amount of data in a distributed database that supports a replication mechanism to protect data security.
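
A minimal sketch of how raw sensor readings might be written to and read back from Cassandra with the DataStax Python driver; the keyspace, table, column names and replication factor are hypothetical and not taken from the paper.

```python
from cassandra.cluster import Cluster  # DataStax Python driver for Cassandra

# Connect to a (hypothetical) Cassandra node running inside an Ezilla VM.
cluster = Cluster(['127.0.0.1'])
session = cluster.connect()

# Hypothetical schema: one row per (sensor, timestamp) in a replicated keyspace.
session.execute(
    "CREATE KEYSPACE IF NOT EXISTS sensor_obs "
    "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}")
session.execute(
    "CREATE TABLE IF NOT EXISTS sensor_obs.readings ("
    "sensor_id text, ts timestamp, value double, "
    "PRIMARY KEY (sensor_id, ts))")

# Store one raw reading and read recent readings back.
session.execute(
    "INSERT INTO sensor_obs.readings (sensor_id, ts, value) "
    "VALUES (%s, toTimestamp(now()), %s)", ("station-01", 23.7))
for row in session.execute(
        "SELECT ts, value FROM sensor_obs.readings WHERE sensor_id = %s LIMIT 5",
        ("station-01",)):
    print(row.ts, row.value)
```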

Signal-to-Noise Ratio Improvement of EMCCD Cameras

Over the past years, the EMCCD has had a profound influence on photon-starved imaging applications, relying on its unique multiplication register based on the impact ionization effect in silicon. A high signal-to-noise ratio (SNR) means high image quality. Thus, SNR improvement is important for the EMCCD. This work analyzes the SNR performance of an EMCCD with the gain off and on. In each mode, simplified SNR models are established for different integration times. The SNR curves are divided by integration time into a readout noise (or CIC) region and a shot noise region. Theoretical SNR values comparing long frame integration and frame adding in each region are presented and discussed to determine which method is more effective. In order to further improve the SNR performance, pixel binning is introduced into the EMCCD. The results show that pixel binning does markedly improve the SNR performance, but at the expense of spatial resolution.
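
The simplified SNR models themselves are given in the paper; the sketch below only evaluates a common textbook EMCCD SNR expression for the gain-off and gain-on cases as a function of integration time, with all detector parameters chosen as illustrative assumptions.

```python
import numpy as np

def emccd_snr(t, photon_rate=5.0, dark_rate=0.01, cic=0.005,
              read_noise=10.0, em_gain=300.0, gain_on=True):
    """Approximate EMCCD SNR for integration time t (seconds).

    Gain off : SNR = S / sqrt(S + D + sigma_r^2)
    Gain on  : SNR = S / sqrt(F^2 (S + D + CIC) + (sigma_r / G)^2),
    where F ~ sqrt(2) is the excess noise factor of the multiplication register.
    Rates are in electrons per pixel (per second for the photon and dark rates).
    """
    s = photon_rate * t
    d = dark_rate * t
    if not gain_on:
        return s / np.sqrt(s + d + read_noise**2)
    excess = np.sqrt(2.0)
    return s / np.sqrt(excess**2 * (s + d + cic) + (read_noise / em_gain)**2)

for t in (0.01, 0.1, 1.0, 10.0):
    print(f"t={t:5.2f}s  gain off: {emccd_snr(t, gain_on=False):6.2f}"
          f"   gain on: {emccd_snr(t, gain_on=True):6.2f}")
```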

Random Oracle Model of Information Hiding System

The Random Oracle Model (ROM) is an effective method for assessing the practical security of cryptographic schemes. In this paper, we apply it to information hiding systems (IHS). Because an IHS has its own properties, the ROM must be modified if it is to be used for IHS. Firstly, we discuss in detail why and how to modify each part of the ROM. The main changes include: 1) dividing the attacks that an IHS may suffer into two phases and dividing the attacks of each phase into several kinds; 2) distinguishing oracles and black-boxes clearly; 3) defining the oracle and the four black-boxes used by the IHS; 4) proposing the formalized adversary model; and 5) giving the definition of the judge. Secondly, based on the ROM of IHS, security against the known original cover attack (KOCA-security) is defined. Then, we give an actual information hiding scheme and prove that it is KOCA-secure. Finally, we conclude the paper and propose open problems for further research.

Evaluation of Protein Digestibility in Canola Meals between Caecectomised and Intact Adult Cockerels

The experiment was conducted to evaluate the protein digestibility of canola meals (CMs) in caecectomised and intact adult Rhode Island Red (RIR) cockerels using the conventional addition method (CAM) for 7 d (a 4-d adaptation and a 3-d experiment period), on the basis of a completely randomized design with 4 replicates. Results indicated that caecectomy decreased (P

Scatter Analysis of Fatigue Life and Pore Size Data of Die-Cast AM60B Magnesium Alloy

The scatter behavior of fatigue life in die-cast AM60B alloy was investigated. For comparison, rolled AM60B alloy and die-cast A365-T5 aluminum alloy were also studied. The scatter behavior of pore size was also investigated to discuss the dominant factors for fatigue life scatter in die-cast materials. A three-parameter Weibull function was suitable for describing the scatter behavior of both fatigue life and pore size. The scatter of fatigue life in the die-cast AM60B alloy was almost comparable to that in the die-cast A365-T5 alloy, while it was significantly larger than that in the rolled AM60B alloy. The scatter behavior of the pore size observed at the fracture nucleation site on the fracture surface was comparable to that observed on the specimen cross-section and also to that of the fatigue life. Therefore, the dominant factor for the large scatter of fatigue life in die-cast alloys would be the large scatter of pore size. This speculation was confirmed by a fracture mechanics fatigue life prediction, in which the pore observed at the fatigue crack nucleation site was assumed to be the pre-existing crack.
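
A hedged sketch of fitting a three-parameter Weibull distribution to fatigue-life (or pore-size) data with SciPy; the sample data are synthetic, not the measured values from the paper.

```python
from scipy import stats

# Synthetic "fatigue life" sample (cycles); the real data come from the tests.
lives = stats.weibull_min.rvs(c=2.0, loc=5.0e4, scale=3.0e5, size=40,
                              random_state=1)

# weibull_min has three parameters: shape c, location (threshold) loc, scale.
shape, loc, scale = stats.weibull_min.fit(lives)
print(f"shape = {shape:.2f}, threshold = {loc:.3g}, scale = {scale:.3g}")

# Probability of failure below 2e5 cycles under the fitted distribution.
print("P(N < 2e5) =", stats.weibull_min.cdf(2.0e5, shape, loc=loc, scale=scale))
```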

Closed Form Optimal Solution of a Tuned Liquid Column Damper Responding to Earthquake

In this paper, the vibration behavior of a structure equipped with a tuned liquid column damper (TLCD) under harmonic earthquake-type loading is studied. Due to the inherent nonlinear liquid damping, a great deal of computational effort would be required to search for the optimum parameters of the TLCD numerically. Therefore, by linearizing the equation of motion of the single-degree-of-freedom structure equipped with the TLCD, closed-form solutions of the TLCD-structure system are derived. To verify the reliability of the analytical method, the results have been compared with those of other researchers and show good agreement. Further, the effects of optimal design parameters such as the length ratio and the mass ratio on the performance of the TLCD in controlling the responses of a structure are investigated using the harmonic earthquake-type excitation. Finally, the Citicorp Center, which has a very flexible structure, is used as an example to illustrate the design procedure for the TLCD under earthquake excitation.
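
To illustrate the linearized analysis (the paper's closed-form expressions are not reproduced here), the sketch below computes the steady-state harmonic response of a commonly used linearized two-degree-of-freedom structure-TLCD model with an equivalent linear liquid damping coefficient; all matrices and numerical values are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (not the paper's): SDOF structure plus TLCD, linearized damping.
ms, ks, cs = 1.0e5, 4.0e6, 8.0e3         # structure mass [kg], stiffness [N/m], damping [N s/m]
rho, A, L, B = 1000.0, 0.05, 4.0, 3.2    # liquid density, tube area, total / horizontal liquid length
ml = rho * A * L                          # liquid mass
alpha = B / L                             # length ratio
kl = 2.0 * rho * A * 9.81                 # liquid "stiffness" from the gravity head
cl = 0.05 * 2.0 * np.sqrt(kl * ml)        # assumed equivalent linear liquid damping

# Linearized structure-TLCD matrices under harmonic base acceleration a_g * e^(i w t).
M = np.array([[ms + ml, alpha * ml], [alpha * ml, ml]])
C = np.diag([cs, cl])
K = np.diag([ks, kl])
F = -np.array([ms + ml, alpha * ml])      # load vector per unit base acceleration

for w in np.linspace(2.0, 10.0, 5):       # excitation frequencies [rad/s]
    X = np.linalg.solve(K - w**2 * M + 1j * w * C, F)
    print(f"w = {w:4.1f} rad/s   |structure displacement| per unit a_g = {abs(X[0]):.4e} m")
```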

Design, Development and Implementation of a Temperature Sensor Using ZigBee Concepts

This paper deals with the design, development and implementation of a temperature sensor using ZigBee. The main aim of the work undertaken in this paper is to sense the temperature and to display the result on an LCD using ZigBee technology. ZigBee operates in the industrial, scientific and medical (ISM) radio bands: 868 MHz in Europe, 915 MHz in the USA and 2.4 GHz in most jurisdictions worldwide. The technology is intended to be simpler and cheaper than other WPANs such as Bluetooth. The most capable ZigBee node type is said to require only about 10% of the software of a typical Bluetooth or wireless Internet node, while the simplest nodes are about 2%. However, actual code sizes are much higher, more like 50% of the Bluetooth code size. ZigBee chip vendors have announced 128-kilobyte devices. In the design and development of the temperature sensor, the sensed temperature signal is amplified and then fed to the microcontroller; this is connected to the ZigBee module, which transmits the data, and at the other end a ZigBee module receives the data and displays it on the LCD. The software developed is highly accurate and works at a very high speed. The method developed shows the effectiveness of the scheme employed.
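
A minimal receiver-side sketch, assuming the receiving ZigBee module is attached to a PC serial port and forwards the temperature as plain text lines; the port name, baud rate and message format are assumptions, and in the actual system the receiving microcontroller drives the LCD directly.

```python
import serial  # pySerial

# Hypothetical serial port to which the receiving ZigBee module is attached.
PORT, BAUD = "/dev/ttyUSB0", 9600

with serial.Serial(PORT, BAUD, timeout=2.0) as link:
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                      # timeout with no data
        try:
            temperature = float(line)     # assumed message format: "27.5\n"
        except ValueError:
            continue                      # skip malformed frames
        print(f"Temperature: {temperature:.1f} C")  # LCD write in the real system
```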

CAD/CAM Algorithms for 3D Woven Multilayer Textile Structures

This paper proposes new algorithms for the computer-aided design and manufacture (CAD/CAM) of 3D woven multi-layer textile structures. Existing commercial CAD/CAM systems are often restricted to the design and manufacture of 2D weaves. Those CAD/CAM systems that do support the design and manufacture of 3D multi-layer weaves are often limited to manual editing of design paper grids on the computer display and weave retrieval from stored archives. This complex design activity is time-consuming, tedious and error-prone and requires considerable experience and skill from a technical weaver. Recent research reported in the literature has addressed some of the shortcomings of commercial 3D multi-layer weave CAD/CAM systems. However, earlier research results have shown the need for further work on weave specification, weave generation, yarn path editing and layer binding. Analysis of 3D multi-layer weaves in this research has led to the design and development of efficient and robust algorithms for the CAD/CAM of 3D woven multi-layer textile structures. The resulting algorithmically generated weave designs can be used as a basis for lifting plans that can be loaded onto looms equipped with electronic shedding mechanisms for the CAM of 3D woven multi-layer textile structures.
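
As a hedged illustration of algorithmic weave generation (not the paper's algorithms, which also cover yarn path editing and layer binding), the sketch below builds a lifting plan for a stack of unbound plain-weave layers: when a weft of layer j is inserted, warps of the layers above j are lifted, warps of layer j follow the plain-weave pattern, and warps of the layers below stay down.

```python
import numpy as np

def multilayer_plain_weave(layers, ends_per_layer=4, picks_per_layer=4):
    """Lifting plan (picks x ends) for `layers` stacked, unbound plain-weave layers.

    Ends and picks are interleaved layer by layer: end e belongs to layer e % layers,
    pick p to layer p % layers. Entry 1 means the warp end is lifted on that pick.
    """
    ends = layers * ends_per_layer
    picks = layers * picks_per_layer
    plan = np.zeros((picks, ends), dtype=int)
    for p in range(picks):
        weft_layer = p % layers
        for e in range(ends):
            warp_layer = e % layers
            if warp_layer < weft_layer:          # warp of an upper layer: always lifted
                plan[p, e] = 1
            elif warp_layer == weft_layer:       # same layer: plain-weave interlacing
                plan[p, e] = (e // layers + p // layers) % 2
            # warp of a lower layer stays down (0)
    return plan

print(multilayer_plain_weave(2))
```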

High Capacity Spread-Spectrum Watermarking for Telemedicine Applications

This paper presents a new spread-spectrum watermarking algorithm for digital images in the discrete wavelet transform (DWT) domain. The algorithm is applied to embedding watermarks such as patient identification/source identification or a doctor's signature, in binary image format, into a host digital radiological image for potential telemedicine applications. The performance of the algorithm is analysed by varying the gain factor, the subband decomposition levels, and the size of the watermark. Simulation results show that the proposed method achieves higher watermarking capacity.
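
A hedged sketch of spread-spectrum embedding in the DWT domain with PyWavelets; the wavelet, decomposition level, subband choice and gain factor are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
import pywt

def embed_watermark(host, bits, gain=2.0, wavelet="haar", level=2, seed=0):
    """Additive spread-spectrum embedding of binary `bits` into a DWT subband.

    Each bit modulates its own pseudo-random (+1/-1) sequence, which is added,
    scaled by `gain`, to the horizontal detail coefficients at the coarsest level.
    """
    coeffs = pywt.wavedec2(host.astype(float), wavelet, level=level)
    cH = coeffs[1][0]                       # horizontal detail subband, coarsest level
    rng = np.random.default_rng(seed)
    flat = cH.ravel()
    chip_len = flat.size // len(bits)       # chips per watermark bit
    for i, b in enumerate(bits):
        pn = rng.choice([-1.0, 1.0], size=chip_len)   # PN sequence for bit i
        sign = 1.0 if b else -1.0
        flat[i * chip_len:(i + 1) * chip_len] += gain * sign * pn
    coeffs[1] = (flat.reshape(cH.shape),) + tuple(coeffs[1][1:])
    return pywt.waverec2(coeffs, wavelet)

# Hypothetical 256x256 host image and a 32-bit binary watermark.
host = np.random.default_rng(1).integers(0, 256, (256, 256))
marked = embed_watermark(host, bits=[1, 0, 1, 1] * 8)
print(marked.shape)
```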

Real-Time Rendering Based on Efficient Updating of a Static Objects Buffer

Real-time 3D applications have to guarantee interactive rendering speed. There is a restriction on the number of polygons that can be rendered due to the performance of the graphics hardware or the graphics algorithms. Generally, rendering performance increases drastically when only the dynamic 3D models, which are far fewer than the static ones, are handled. Since the shapes and colors of the static objects do not change when the viewing direction is fixed, this information can be reused. We render huge numbers of polygons that cannot be handled by conventional rendering techniques in real time by using a static-object image and merging it with the rendering result of the dynamic objects. Performance would normally drop as a consequence of updating the static-object image, which includes removing a static object that starts to move and re-rendering the other static objects overlapped by the moving one. Based on the visibility of the object beginning to move, we can skip this updating process. As a result, we enhance the rendering performance and reduce the differences in rendering speed between frames. The proposed method renders a total of 200,000,000 polygons, consisting of 500,000 dynamic polygons and the rest static polygons, at about 100 frames per second.
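
A minimal sketch of the compositing idea, assuming the static scene has been rendered once into a cached colour/depth buffer: each frame only the dynamic objects are rendered, and the two buffers are merged per pixel by depth. The buffers here are plain NumPy arrays, not an actual GPU implementation, and the buffer-update/visibility logic of the paper is not reproduced.

```python
import numpy as np

H, W = 4, 6   # tiny buffers for illustration

# Cached result of rendering all static objects once (colour + depth).
static_color = np.full((H, W, 3), 0.2)
static_depth = np.full((H, W), 0.8)

# Per-frame result of rendering only the dynamic objects.
dynamic_color = np.zeros((H, W, 3))
dynamic_depth = np.full((H, W), np.inf)      # inf = no dynamic fragment at this pixel
dynamic_color[1:3, 2:4] = (1.0, 0.0, 0.0)    # a red dynamic object
dynamic_depth[1:3, 2:4] = 0.5                # closer than the static background

# Merge: keep whichever fragment is closer to the camera.
closer = dynamic_depth < static_depth
frame = np.where(closer[..., None], dynamic_color, static_color)
depth = np.where(closer, dynamic_depth, static_depth)
print(frame[..., 0])   # red channel shows the dynamic object over the cached static image
```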