Continuity Planning in Supply Chain Networks: Degrees of Freedom and Application in the Risk Management Process

Supply chain networks are frequently hit by unplanned events which lead to disruptions and cause operational and financial consequences. It is neither possible to avoid disruption risk entirely, nor are network members able to prepare for every possible disruptive event. Therefore, continuity planning should be established to support effective operational responses in supply chain networks in times of emergency. In this research, network-related degrees of freedom, which determine the options for responsive action, are derived from interview data. The findings are further embedded into a common risk management process. The paper provides support for researchers and practitioners in identifying the network-related options for responsive action and in determining the need for improving reaction capabilities.

An Algorithm Proposed for FIR Filter Coefficients Representation

Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite precision errors, and efficient implementation. However, they have the major disadvantage of requiring a higher order (more coefficients) than an IIR counterpart with comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Therefore, minimizing these parameters is a major goal in the digital filter design task. This paper presents an algorithm for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse-shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, which reduces the arithmetic complexity needed to compute the filter output. Consequently, system characteristics, i.e. power consumption, area usage, and processing time, are also reduced. The proposed algorithm is more powerful when integrated with multiplierless algorithms such as distributed arithmetic (DA) in designing high order digital FIR filters. Here, DA eliminates the need for multipliers when implementing the multiply-and-accumulate (MAC) unit, and the proposed algorithm reduces the number of adders and addition operations needed to compute the filter output by minimizing the number of non-zero coefficients.
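
To make the arithmetic-saving idea concrete, the following minimal Python sketch prunes near-zero taps of a prototype FIR low-pass filter and evaluates the output using only the retained coefficients. The prototype filter and the pruning threshold are illustrative assumptions, not the coefficient-modification algorithm proposed in the paper.

```python
# Illustrative sketch only: pruning small FIR coefficients to cut MAC operations.
# The prototype filter (firwin low-pass) and the pruning threshold are assumptions,
# not the algorithm proposed in the paper.
import numpy as np
from scipy.signal import firwin

num_taps = 101
h = firwin(num_taps, cutoff=0.2)           # prototype FIR low-pass filter

threshold = 1e-3                            # assumed pruning threshold
h_pruned = np.where(np.abs(h) > threshold, h, 0.0)
nonzero_idx = np.flatnonzero(h_pruned)      # indices of retained coefficients

def fir_sparse(x, taps, idx):
    """Direct-form FIR using only the non-zero taps (fewer adds/multiplies)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = 0.0
        for k in idx:                       # skip zero-valued coefficients
            if n - k >= 0:
                acc += taps[k] * x[n - k]
        y[n] = acc
    return y

x = np.random.randn(512)                    # test signal
y = fir_sparse(x, h_pruned, nonzero_idx)

print(f"taps kept: {len(nonzero_idx)} / {num_taps}")
print(f"max deviation from full filter: {np.max(np.abs(y - np.convolve(x, h)[:len(x)])):.2e}")
```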

Some Solid Transportation Models with Crisp and Rough Costs

In this paper, some practical solid transportation models are formulated considering the per-trip capacity of each type of conveyance, with crisp and rough unit transportation costs. This is applicable to systems in which full vehicles, e.g. trucks or rail coaches, are booked for the transportation of products, so that the transportation cost is determined on the basis of full conveyances. The models with unit transportation costs as rough variables are transformed into deterministic forms using rough chance-constrained programming with the help of the trust measure. Numerical examples are provided to illustrate the proposed models in the crisp environment as well as with unit transportation costs as rough variables.
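
As a reference point (a generic textbook formulation, not necessarily the exact models of this paper), a crisp solid transportation problem with per-trip conveyance capacities can be sketched as:

```latex
\min \; Z = \sum_{i=1}^{M}\sum_{j=1}^{N}\sum_{k=1}^{K} c_{ijk}\,x_{ijk}
\quad\text{subject to}\quad
\sum_{j,k} x_{ijk} \le a_i,\qquad
\sum_{i,k} x_{ijk} \ge b_j,\qquad
\sum_{i,j} x_{ijk} \le q_k v_k,\qquad
x_{ijk}\ge 0,\; v_k \in \mathbb{Z}_{\ge 0},
```

where x_{ijk} is the amount shipped from source i to destination j by conveyance type k, a_i and b_j are supplies and demands, q_k is the per-trip capacity of conveyance k, and v_k is the number of booked trips, so the cost may equally be charged per trip booked rather than per unit shipped. When the unit costs are rough variables, a chance constraint of the form Tr{Σ c̃_{ijk} x_{ijk} ≤ Z̄} ≥ η is converted to a deterministic equivalent via the trust measure.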

Uniformity of Dose Distribution in Radiation Fields Surrounding the Spine using Film Dosimetry and Comparison with 3D Treatment Planning Software

The overall penumbra is usually defined as the distance, p20–80, separating the points that receive 20% and 80% of the dose on the beam axis at the depth of interest. This overall penumbra also accounts for the fact that some photons emitted by the distal parts of the source are only partially attenuated by the collimator. Medulloblastoma is the most common type of childhood brain tumor and often spreads to the spine. Current guidelines call for surgery to remove as much of the tumor as possible, followed by radiation of the brain and spinal cord, and finally treatment with chemotherapy. The purpose of this paper is to present results on the uniformity of dose distribution in radiation fields surrounding the spine using film dosimetry, and to compare them with a 3D treatment planning software.
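
As an illustrative aside (the profile data and interpolation details are assumptions, not part of the measurement protocol of this paper), p20–80 can be extracted from a scanned film profile roughly as follows:

```python
# Illustrative sketch: extracting the p20-80 penumbra from a lateral dose profile.
# The synthetic profile and the interpolation details are assumptions for demonstration only.
import numpy as np

def penumbra_20_80(positions_mm, dose, side="left"):
    """Return the 20%-80% penumbra width (mm) on one field edge."""
    d = np.asarray(dose, dtype=float) / np.max(dose)     # normalize to the central-axis dose
    x = np.asarray(positions_mm, dtype=float)
    centre = np.argmax(d)
    if side == "left":
        x_edge, d_edge = x[:centre + 1], d[:centre + 1]          # dose rises from edge to centre
    else:
        x_edge, d_edge = x[centre:][::-1], d[centre:][::-1]      # reverse so dose rises
    x20 = np.interp(0.20, d_edge, x_edge)
    x80 = np.interp(0.80, d_edge, x_edge)
    return abs(x80 - x20)

# synthetic 8 cm profile, purely for demonstration
x = np.linspace(-60, 60, 241)
profile = 0.5 * (np.tanh((x + 40) / 3) - np.tanh((x - 40) / 3))
print(f"left-edge p20-80 = {penumbra_20_80(x, profile, 'left'):.1f} mm")
```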

Ranking Genes from DNA Microarray Data of Cervical Cancer by a Local Tree Comparison

The major objective of this paper is to introduce a new method to select genes from DNA microarray data. As a criterion for selecting genes, we suggest measuring the local changes in the correlation graph of each gene and selecting those genes whose local changes are largest. More precisely, we calculate correlation networks from DNA microarray data of cervical cancer, where each network represents a tissue of a certain tumor stage and each node in a network represents a gene. From these networks we extract one tree for each gene by a local decomposition of the correlation network. The interpretation of a tree is that its n-th level contains the n-nearest neighbor genes, measured by the Dijkstra distance, and hence it gives the local embedding of a gene within the correlation network. For the obtained trees we measure the pairwise similarity between trees rooted at the same gene from normal to cancerous tissues. This evaluates the modification of the tree topology due to tumor progression. Finally, we rank the obtained similarity values from all tissue comparisons and select the top-ranked genes. For these genes the local neighborhood in the correlation networks changes most between normal and cancerous tissues. As a result we find that the top-ranked genes are candidates suspected to be involved in tumor growth. This indicates that our method captures essential information from the underlying DNA microarray data of cervical cancer.
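
A minimal sketch of this pipeline, with assumed details (correlation threshold, tree depth, and a Jaccard-type similarity between the neighbor sets of two local trees; these choices are made here for demonstration, not necessarily those of the paper), might look as follows in Python:

```python
# Illustrative sketch of the local-tree comparison idea.
import numpy as np
import networkx as nx

def correlation_graph(expr, threshold=0.7):
    """Build a gene-gene graph; expr has shape (samples, genes)."""
    corr = np.corrcoef(expr, rowvar=False)
    g = nx.Graph()
    n = corr.shape[0]
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) >= threshold:
                g.add_edge(i, j, weight=1.0 - abs(corr[i, j]))
    return g

def local_tree_neighbors(g, gene, depth=3):
    """Neighbor set of the shortest-path (Dijkstra) tree rooted at `gene`."""
    dist = nx.single_source_dijkstra_path_length(g, gene, cutoff=depth)
    return {node for node in dist if node != gene}

def tree_similarity(neigh_a, neigh_b):
    """Jaccard similarity of the two local neighborhoods."""
    if not neigh_a and not neigh_b:
        return 1.0
    return len(neigh_a & neigh_b) / len(neigh_a | neigh_b)

def rank_genes(expr_normal, expr_tumor, threshold=0.7, depth=3):
    """Rank genes by how much their local neighborhood changes (most-changed first)."""
    g_n = correlation_graph(expr_normal, threshold)
    g_t = correlation_graph(expr_tumor, threshold)
    scores = {gene: tree_similarity(local_tree_neighbors(g_n, gene, depth),
                                    local_tree_neighbors(g_t, gene, depth))
              for gene in g_n.nodes}
    return sorted(scores, key=scores.get)   # lowest similarity = largest local change
```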

Numerical Simulation of the Liquid-Vapor Interface Evolution with Material Properties

A saturated liquid is heated to boiling in a parallelepipedic boiler. Heat is supplied to the liquid through the horizontal bottom of the boiler, the other walls being adiabatic. During the boiling process, the liquid evaporates through its free surface and deforms it. This surface subdivides the boiler into two regions: the lower region occupied by the boiling liquid (broth) and the upper region occupied by the vapor that surmounts it. A two-fluid model is used to describe the dynamics of the broth, its vapor and their interface. In this model, the broth is treated as a monophasic fluid (homogeneous model) and forms, together with its vapor, a diphasic pseudo-fluid (two-fluid model). Furthermore, the interface is treated as a mixture zone characterized by a superficial void fraction denoted α*. The aim of this article is to describe the dynamics of the interface between the boiled fluid and its vapor within a boiler. Solving the problem allowed us to show the evolution of the broth and of the liquid level.
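
For orientation only, the mass balances of a generic two-fluid (Eulerian-Eulerian) description of such a boiler read as follows; this is the standard form of the model class named above, not the authors' specific closure:

```latex
\frac{\partial(\alpha\rho_v)}{\partial t} + \nabla\cdot(\alpha\rho_v\mathbf{u}_v) = \Gamma,
\qquad
\frac{\partial\bigl((1-\alpha)\rho_\ell\bigr)}{\partial t} + \nabla\cdot\bigl((1-\alpha)\rho_\ell\mathbf{u}_\ell\bigr) = -\Gamma,
```

where α is the local void fraction (taking the superficial value α* in the interfacial mixture zone), ρ_v, u_v and ρ_ℓ, u_ℓ are the densities and velocities of the vapor and of the broth, and Γ is the interphase mass transfer rate due to evaporation.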

Ageing Deterioration of Silicone Rubber Polymer Insulator under Salt Water Dip Wheel Test

This paper presents the experimental results of silicone rubber polymer insulators for 22 kV systems under the salt water dip wheel test based on IEC 62217. Straight-shed silicone rubber polymer insulators with a leakage distance of 685 mm were tested continuously for 30,000 cycles. One test cycle comprises four positions: energized, de-energized, salt water dip, and de-energized. In each cycle, each test specimen remains stationary for about 40 seconds in each position and takes 8 seconds to rotate to the next position. By visual observation, severe surface erosion was observed on the trunk near the energized end of the tested specimens. A puncture was observed on the upper shed near the energized end. In addition, a decrease in hydrophobicity and an increase in hardness were measured on the tested specimens compared with a new specimen. Furthermore, chemical analysis by ATR-FTIR was conducted in order to elucidate the chemical changes of the tested specimens compared with a new specimen.

Bounds on Reliability of Parallel Computer Interconnection Systems

The evaluation of the residual reliability of large-sized parallel computer interconnection systems is not practicable with existing methods. Under such conditions, one must resort to approximation techniques that provide upper and lower bounds on this reliability. In this context, a new approximation method for providing bounds on residual reliability is proposed here. The proposed method is supported by two algorithms for simulation purposes. The bounds on the residual reliability of three different categories of interconnection topologies are efficiently found using the proposed method.
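
As a crude point of comparison only (this is a plain Monte Carlo baseline, not the bounding method proposed in the paper), the residual reliability of an interconnection topology can be bracketed by a sampling estimate and its confidence interval; the hypercube topology and failure probability below are assumptions for illustration:

```python
# Crude Monte Carlo baseline (NOT the paper's bounding method): estimate the
# residual reliability of an interconnection network, i.e. the probability that
# the surviving nodes stay connected when each node fails independently.
import random
import networkx as nx

def residual_reliability_mc(g, p_fail=0.05, trials=10_000, seed=1):
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        alive = [v for v in g.nodes if rng.random() > p_fail]
        if len(alive) >= 2 and nx.is_connected(g.subgraph(alive)):
            ok += 1
    r_hat = ok / trials
    half = 1.96 * (r_hat * (1 - r_hat) / trials) ** 0.5   # 95% CI half-width
    return r_hat - half, r_hat + half                      # (lower, upper) estimate

g = nx.hypercube_graph(6)                                   # 64-node hypercube, assumed topology
print("residual reliability bounds ≈", residual_reliability_mc(g))
```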

Extractability of Heavy Metals in Green Liquor Dregs using Artificial Sweat and Gastric Fluids

In an assessment of the extractability of metals in green liquor dregs from the chemical recovery circuit of a semichemical pulp mill, extractable concentrations of heavy metals in artificial gastric fluid were between 10 (Ni) and 717 (Zn) times higher than those in artificial sweat fluid. Only Al (6.7 mg/kg; d.w.), Ni (1.2 mg/kg; d.w.) and Zn (1.8 mg/kg; d.w.) showed extractability in the artificial sweat fluid, whereas Al (730 mg/kg; d.w.), Ba (770 mg/kg; d.w.) and Zn (1290 mg/kg; d.w.) showed clear extractability in the artificial gastric fluid. As certain heavy metals were clearly soluble in the artificial gastric fluid, careful handling of this residue is recommended in order to prevent the uptake of green liquor dregs via the human gastrointestinal tract.

Properties of Bricks Produced With Recycled Fine Aggregate

The main aim of this research is to study the possible use of recycled fine aggregate made from waste rubble wall to partially substitute for the natural sand used in the production of cement and sand bricks. Brick specimens were first prepared using 100% natural sand; the natural sand was then replaced by recycled fine aggregate at 25, 50, 75, and 100% by weight. A series of tests was carried out to study the effect of using recycled aggregate on the physical and mechanical properties of the bricks, such as density, drying shrinkage, water absorption, and compressive and flexural strength. Test results indicate that it is possible to manufacture bricks containing recycled fine aggregate with good characteristics, similar in physical and mechanical properties to those of bricks with natural aggregate, provided that the percentage of recycled fine aggregate is limited to 50-75%.

Size Control of Nanoparticles Using a Microfluidic Device

We have developed a microfluidic device system for the continuous production of nanoparticles, and we have clarified the relationship between the mixing performance of the reactors and the particle size. First, we evaluated the mixing performance of the reactors by carrying out the Villermaux–Dushman reaction and determined the experimental conditions for producing AgCl nanoparticles. Next, we produced AgCl nanoparticles and evaluated the mixing performance and the particle size. We found that as the mixing performance improves, the size of the produced particles decreases and the particle size distribution becomes sharper. We produced AgCl nanoparticles with a size of 86 nm using the microfluidic device that had the best mixing performance among the three reactors tested in this study; the coefficient of variation (Cv) of the size distribution of the produced nanoparticles was 26.1%.
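
For reference, the coefficient of variation quoted above is

```latex
C_v = \frac{\sigma_d}{\bar{d}} \times 100\%,
```

where d̄ is the mean particle diameter and σ_d its standard deviation; a Cv of 26.1% for 86 nm particles therefore corresponds to a standard deviation of roughly 22 nm.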

Spatial Variability in Human Development Patterns in Assiut, Egypt

Motivated by the impact of maps in enhancing the perception of quality of life in a region, this work examines the use of spatial analytical techniques to explore the role of space in shaping human development patterns in the Assiut governorate. Variations in the human development index (HDI) of the governorate's villages, districts and cities are mapped using geographic information systems (GIS). Global and local spatial autocorrelation measures are employed to assess the levels of spatial dependency in the data and to map clusters of human development. Results show prominent disparities in HDI between regions of Assiut. Strong patterns of spatial association were found, confirming the presence of clusters in the distribution of HDI. Finally, the study identifies several "hot spots" in the governorate as areas for further investigation into the attributes of such levels of human development. This is very important for accomplishing the development plan for the poorest regions currently adopted in Egypt.
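
For readers unfamiliar with the global measure involved, a minimal sketch of Moran's I, the usual global spatial autocorrelation statistic for this kind of analysis, is given below; the toy weights matrix and HDI values are placeholders, not the Assiut data:

```python
# Minimal sketch of global Moran's I (placeholder data, not the Assiut HDI values).
import numpy as np

def morans_i(values, weights):
    """values: (n,) attribute vector; weights: (n, n) spatial weights matrix."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = len(x)
    z = x - x.mean()
    num = (w * np.outer(z, z)).sum()     # sum_ij w_ij z_i z_j
    den = (z ** 2).sum()
    return (n / w.sum()) * (num / den)

# toy example: 4 regions on a line, rook contiguity weights
hdi = [0.62, 0.65, 0.58, 0.71]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(f"Moran's I = {morans_i(hdi, w):.3f}")
```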

A Comparison between Heterogeneous and Homogeneous Gas Flow Model in Slurry Bubble Column Reactor for Direct Synthesis of DME

In the present study, heterogeneous and homogeneous gas flow dispersion models were developed for the simulation and optimisation of a large-scale catalytic slurry reactor for the direct synthesis of dimethyl ether (DME) from syngas and CO2 operating in the churn-turbulent regime. In the heterogeneous gas flow model the gas phase was distributed into two bubble phases, small and large, whereas in the homogeneous model the gas phase was represented by a single large-bubble phase. The results indicated that the heterogeneous gas flow model was in better agreement with experimental pilot plant data than the homogeneous one.

Community Innovation in Sustainable Development: A Cross Case Study

Although innovative solutions in the field of sustainable development have been sought worldwide by environmental groups, academia, governments and companies for many years, citizens and communities have recently emerged as a new group and taken an increasingly active role in this field. Many scholars call for more research on the role of community and community innovation in sustainable development. This paper responds to these calls. We first summarize a comprehensive set of innovation principles. Then, we conduct a qualitative cross-case study, comparing three community innovation cases in three different areas of sustainable development against the innovation principles. Finally, we summarize the case comparison and discuss the implications for sustainable development. A unified role model and an innovation distribution map of community innovation are developed to better understand community innovation in sustainable development.

Approximate Bounded Knowledge Extraction Using Type-I Fuzzy Logic

Using a neural network, we model an unknown function f for given input-output data pairs. The connection strength of each neuron is updated through learning. Repeated simulations of the crisp neural network produce different values of the weight factors, which are directly affected by changes in different parameters. We propose that, for each neuron in the network, quasi-fuzzy weight sets (QFWS) can be obtained using repeated simulation of the crisp neural network. Such fuzzy weight functions may be applied where we have multivariate crisp input that needs to be adjusted after iterative learning, such as claim amount distribution analysis. As real data is subject to noise and uncertainty, QFWS may be helpful in simplifying such complex problems. Secondly, these QFWS provide a good initial solution for training fuzzy neural networks with reduced computational complexity.
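
A minimal sketch of the QFWS idea, under assumptions made here for illustration (a tiny two-layer network, toy data, and a triangular (min, median, max) summary of each weight across runs), is:

```python
# Sketch: repeated trainings of a crisp network yield a spread of values per weight,
# summarized here as a triangular fuzzy number (min, median, max). Network size,
# data, and the triangular shape are assumptions, not the paper's exact construction.
import numpy as np

def train_once(x, y, seed, epochs=500, lr=0.1):
    """Single hidden-layer network trained with plain gradient descent."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(scale=0.5, size=(x.shape[1], 4))
    w2 = rng.normal(scale=0.5, size=(4, 1))
    for _ in range(epochs):
        h = np.tanh(x @ w1)
        out = h @ w2
        err = out - y
        w2 -= lr * h.T @ err / len(x)
        w1 -= lr * x.T @ ((err @ w2.T) * (1 - h ** 2)) / len(x)
    return w1, w2

# toy regression data
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 2))
y = x[:, :1] ** 2 + 0.5 * x[:, 1:]

runs = [train_once(x, y, seed=s) for s in range(20)]        # repeated crisp trainings
w1_stack = np.stack([w1 for w1, _ in runs])                  # (runs, inputs, hidden)

# triangular QFWS (low, mode, high) for each first-layer weight
qfws_w1 = np.stack([w1_stack.min(0), np.median(w1_stack, 0), w1_stack.max(0)])
print("QFWS for weight w1[0,0]: low=%.3f mode=%.3f high=%.3f" % tuple(qfws_w1[:, 0, 0]))
```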

Soil Improvement using Cement Dust Mixture

As technology advances day by day, the problems associated with it also increase. Several studies have been carried out to investigate how such waste materials can be deployed safely in geotechnical engineering in particular and civil engineering in general. Different types of waste material, such as cement dust, fly ash and slag, have been proven suitable for several applications. In this research, cement dust mixed with different percentages of sand is used in some civil engineering applications, as explained later in this paper through field and laboratory tests. The mixture (waste material with sand) showed high performance, durability under environmental conditions, low cost and high benefits. At a higher cement dust ratio, a small cement ratio is valuable for compressive strength and permeability; at a small cement dust ratio, a higher cement ratio is valuable for compressive strength.

Semi-Automatic Artifact Rejection Procedure Based on Kurtosis, Renyi's Entropy and Independent Component Scalp Maps

Artifact rejection plays a key role in many signal processing applications. Artifacts are disturbances that can occur during signal acquisition and that can alter the analysis of the signals themselves. Our aim is to automatically remove the artifacts, in particular from electroencephalographic (EEG) recordings. A technique for automatic artifact rejection, based on Independent Component Analysis (ICA) for the artifact extraction and on some higher-order statistics such as kurtosis and Shannon's entropy, was proposed some years ago in the literature. In this paper we try to enhance this technique by proposing a new method based on Renyi's entropy. The performance of our method was tested and compared with that of the method in the literature, and the former proved to outperform the latter.
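
A minimal sketch of this kind of pipeline is given below; the rejection threshold and the histogram-based Renyi entropy estimator are assumptions, and the inspection of component scalp maps mentioned in the title is omitted:

```python
# Sketch of ICA-based artifact rejection using kurtosis and Renyi entropy as markers.
# Thresholds and the histogram Renyi estimator are assumptions for illustration.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def renyi_entropy(signal, alpha=2.0, bins=64):
    p, _ = np.histogram(signal, bins=bins)
    p = p[p > 0] / p.sum()
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def reject_artifacts(eeg, z_thresh=2.0):
    """eeg: (n_samples, n_channels). Returns cleaned data of the same shape."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)                     # (n_samples, n_components)
    k = kurtosis(sources, axis=0)
    h = np.array([renyi_entropy(s) for s in sources.T])
    zscore = lambda v: (v - v.mean()) / v.std()
    bad = (np.abs(zscore(k)) > z_thresh) | (np.abs(zscore(h)) > z_thresh)
    sources[:, bad] = 0.0                                # drop artifactual components
    return ica.inverse_transform(sources)

# synthetic 8-channel example with an injected high-kurtosis artifact
rng = np.random.default_rng(0)
raw = rng.standard_normal((2000, 8))
raw[:, 3] += (rng.random(2000) < 0.01) * 40.0            # spiky artifact on one channel
clean = reject_artifacts(raw)
```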

Adsorption of H2 and CO on Iron-based Catalysts for Fischer-Tropsch Synthesis

The adsorption properties of CO and H2 on iron-based catalysts with the addition of Zr and Ni were investigated using temperature programmed desorption. It was found that on the carburized iron-based catalysts, molecular-state and dissociative-state CO coexisted. The addition of Zr favored the molecular-state adsorption of CO on the iron-based catalyst, and the presence of Ni was beneficial to the dissociative adsorption of CO. On H2-reduced catalysts, hydrogen mainly adsorbs on the surface iron sites and surface oxide sites. On CO-reduced catalysts, hydrogen probably existed as the most stable CH and OH species. The addition of Zr did not benefit the dissociative adsorption of hydrogen on the iron-based catalyst, whereas the presence of Ni favored it.

Performance Analysis of HSDPA Systems Using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding

HSDPA is a new feature introduced in the Release 5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. Moreover, the HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay as compared to the Release 99 DSCH. To date, the HSDPA system uses turbo coding, one of the best coding techniques for approaching the Shannon limit. However, the main drawbacks of turbo coding are its high decoding complexity and high latency, which make it unsuitable for some applications such as satellite communications, since the transmission distance itself introduces latency due to the limited speed of light. Hence, in this paper it is proposed to use LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity, although LDPC coding increases the encoding complexity. Though the transmitter complexity increases at the NodeB, the end user benefits in terms of receiver complexity and bit error rate (BER). In this paper, the LDPC encoder is implemented using a sparse parity check matrix H to generate a codeword, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that with LDPC coding the BER drops sharply as the number of iterations increases, for a small increase in Eb/No, which is not possible with turbo coding. The same BER was also achieved using fewer iterations, and hence the latency and receiver complexity are reduced with LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better-quality, more reliable and more robust data services. In other words, while realistic data rates are only a few Mbps, the actual quality and number of users achieved will improve significantly.
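
To illustrate the parity-check-driven, iterative nature of LDPC decoding, the toy sketch below uses a small hypothetical sparse matrix H and a hard-decision bit-flipping decoder as a simpler stand-in for the belief propagation decoder named above:

```python
# Toy sketch of LDPC decoding with a small hypothetical sparse parity-check matrix H.
# A hard-decision bit-flipping decoder is used as a simpler stand-in for belief
# propagation; it only illustrates the iterative, parity-check-driven decoding idea.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],        # small illustrative parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(r, H, max_iter=20):
    """Iteratively flip the bit involved in the most unsatisfied checks."""
    c = r.copy()
    for _ in range(max_iter):
        syndrome = (H @ c) % 2
        if not syndrome.any():            # all parity checks satisfied
            return c, True
        fail_counts = H.T @ syndrome      # per bit: number of failing checks it touches
        c[np.argmax(fail_counts)] ^= 1    # flip the most suspicious bit
    return c, False

codeword = np.zeros(6, dtype=int)          # all-zero codeword is always valid
received = codeword.copy()
received[2] ^= 1                           # single bit error on the channel
decoded, ok = bit_flip_decode(received, H)
print("decoded:", decoded, "valid codeword:", ok)
```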

A Comparison among Wolf Pack Search and Four Other Optimization Algorithms

The main objective of this paper is to compare the Wolf Pack Search (WPS), a newly introduced intelligent algorithm, with several other known algorithms, including Particle Swarm Optimization (PSO), Shuffled Frog Leaping (SFL), and binary and continuous genetic algorithms. All algorithms are applied to two benchmark cost functions. The aim is to identify the best algorithm in terms of speed and accuracy in finding the solution, where speed is measured in terms of function evaluations. The simulation results show that the SFL algorithm, requiring fewer function evaluations, ranks first when simulation time is important, while WPS and PSO perform better when accuracy is the main concern.
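
As an illustration of how such comparisons are typically run (counting function evaluations as the speed metric), the sketch below applies a standard PSO to the sphere benchmark; the swarm parameters and the benchmark are assumptions, and WPS itself is not sketched because its update rules are not given in the abstract:

```python
# Minimal PSO on the sphere benchmark, counting function evaluations as the speed
# metric. Parameters and benchmark are assumptions for illustration only.
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def pso(f, dim=10, swarm=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (swarm, dim))
    vel = np.zeros((swarm, dim))
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    evals = swarm
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        evals += swarm
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min(), evals

best, best_val, n_evals = pso(sphere)
print(f"best value {best_val:.3e} after {n_evals} function evaluations")
```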