Synthesis of Peptide Amides using Sol-Gel Immobilized Alcalase in Batch and Continuous Reaction System

Two commercial proteases from Bacillus licheniformis (Alcalase 2.4 L FG and Alcalase 2.5 L, Type DX) were screened for the production of Z-Ala-Phe-NH2 in batch reactions. Alcalase 2.4 L FG was the more efficient enzyme for the C-terminal amidation of Z-Ala-Phe-OMe using ammonium carbamate as the ammonium source. Immobilization of the protease was achieved by the sol-gel method, using dimethyldimethoxysilane (DMDMOS) and tetramethoxysilane (TMOS) as precursors (unpublished results). In batch production, about 95% conversion to Z-Ala-Phe-NH2 was obtained at 30°C after 24 hours of incubation. The reproducibility of different batches of the commercial Alcalase 2.4 L FG preparation was also investigated by evaluating the amidation activity and, in the case of immobilization, the entrapment yields. A packed-bed reactor (0.68 cm ID, 15.0 cm long) was operated successfully for the continuous synthesis of peptide amides. The immobilized enzyme retained its initial activity over 10 cycles of repeated use in the continuous reactor at ambient temperature. At a substrate-mixture flow rate of 0.75 mL/min, total conversion of Z-Ala-Phe-OMe was achieved after 5 hours of substrate recycling. The product contained about 90% peptide amide and 10% hydrolysis byproduct.

Optimal Allocation of FACTS Devices for ATC Enhancement Using Bees Algorithm

In this paper, a novel method using the Bees Algorithm is proposed to determine the optimal allocation of FACTS devices for maximizing the Available Transfer Capability (ATC) of power transactions between source and sink areas in a deregulated power system. The algorithm simultaneously searches over FACTS locations, FACTS parameters and FACTS types. Two types of FACTS devices are simulated in this study, namely the Thyristor Controlled Series Compensator (TCSC) and the Static Var Compensator (SVC). A Repeated Power Flow incorporating the FACTS devices is used to evaluate the feasible ATC value within real and reactive power generation limits, line thermal limits, voltage limits and FACTS operating limits. The IEEE 30-bus system is used to demonstrate the effectiveness of the algorithm as an optimization tool for enhancing ATC. A Genetic Algorithm technique is used for validation purposes. The results clearly indicate that introducing FACTS devices with the right combination of location and parameters can enhance ATC, and that the Bees Algorithm can be used efficiently for this kind of nonlinear integer optimization.
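
A minimal sketch of the Bees Algorithm search loop is given below. The objective function is a stand-in for the repeated power flow that would score a candidate (FACTS type, location, parameter setting) by its feasible ATC; the colony sizes, neighbourhood width and [0,1] encoding are illustrative assumptions, not values from the paper.

```python
# Bees Algorithm sketch: elite/best sites get recruit bees searching
# their neighbourhoods; the rest of the colony scouts at random.
import numpy as np

rng = np.random.default_rng(0)
DIM, LO, HI = 3, 0.0, 1.0   # e.g. (type, location, setting), scaled to [0,1]

def objective(x):
    # Stand-in for "run repeated power flow, return feasible ATC".
    return -np.sum((x - 0.6) ** 2)

def bees_algorithm(n_scouts=30, n_best=5, n_elite=2,
                   bees_best=3, bees_elite=7, ngh=0.1, iters=100):
    scouts = rng.uniform(LO, HI, size=(n_scouts, DIM))
    for _ in range(iters):
        # Rank sites by fitness (descending).
        scouts = scouts[np.argsort([-objective(s) for s in scouts])]
        new_sites = []
        for i, site in enumerate(scouts[:n_best]):
            n_recruits = bees_elite if i < n_elite else bees_best
            # Recruit bees search the neighbourhood of each selected site.
            patch = site + rng.uniform(-ngh, ngh, size=(n_recruits, DIM))
            patch = np.clip(patch, LO, HI)
            candidates = np.vstack([site, patch])
            new_sites.append(max(candidates, key=objective))
        # Remaining bees scout the space at random (global search).
        randoms = rng.uniform(LO, HI, size=(n_scouts - n_best, DIM))
        scouts = np.vstack([new_sites, randoms])
    return max(scouts, key=objective)

best = bees_algorithm()
print(best, objective(best))
```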

Squaring Construction for Repeated-Root Cyclic Codes

We consider repeated-root cyclic codes, i.e., cyclic codes whose block length is divisible by the characteristic of the underlying field. Cyclic self-dual codes are necessarily repeated-root cyclic codes. A one-level squaring construction is known for binary repeated-root cyclic codes. In this correspondence, we introduce a two-level squaring construction for binary repeated-root cyclic codes of length 2^a·b, where a > 0 and b is odd.
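
For background, the one-level squaring construction referred to above is commonly stated as follows (this standard form, essentially the |u|u+v| construction, is not specific to this correspondence):

```latex
% One-level squaring construction: from binary codes C' \subseteq C of
% length n, with parameters (n, k, d) and (n, k', d'), build a code of
% length 2n:
|C/C'|^2 \;=\; \{\, (u,\; u + v) \;:\; u \in C,\; v \in C' \,\},
% which has dimension k + k' and minimum distance \min(2d,\, d').
```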

Relative Radiometric Correction of Cloudy Multitemporal Satellite Imagery

Repeated observation of a given area over time yields potential for many forms of change detection analysis. These repeated observations are confounded in terms of radiometric consistency by changes in sensor calibration over time, differences in illumination and observation angles, and variation in atmospheric effects. This paper demonstrates the applicability of an empirical relative radiometric normalization method to a set of multitemporal cloudy images acquired by the Resourcesat-1 LISS-III sensor. The objective of this study is to detect and remove cloud cover and to normalize the images radiometrically. Cloud detection is achieved using the Average Brightness Threshold (ABT) algorithm. The detected cloud is removed and replaced with data from another image of the same area. After cloud removal, the proposed normalization method is applied to reduce the radiometric influence of non-surface factors. This process identifies landscape elements whose reflectance values are nearly constant over time; that is, a subset of non-changing pixels is identified using a frequency-based correlation technique. The quality of the radiometric normalization is assessed statistically by the R2 value and the mean square error (MSE) between each pair of analogous bands.
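
The sketch below illustrates the overall normalization and assessment flow for a single band on synthetic data. The residual-based selection of invariant pixels is an assumed stand-in for the paper's frequency-based correlation technique.

```python
import numpy as np

rng = np.random.default_rng(1)
# Subject differs from the reference by a gain/offset (illumination,
# calibration) plus noise, with one patch of genuine land-cover change.
reference = rng.uniform(20, 200, size=(100, 100))
subject = 0.8 * reference + 15 + rng.normal(0, 2, size=reference.shape)
subject[:10, :10] += 60

ref, sub = reference.ravel(), subject.ravel()

# Keep pixels whose residual from a first global fit is small: these
# behave as if radiometrically invariant between the two dates.
g0, o0 = np.polyfit(ref, sub, 1)
resid = np.abs(sub - (g0 * ref + o0))
invariant = resid < np.percentile(resid, 80)

# Per-band linear normalization fitted on the invariant pixels only.
gain, offset = np.polyfit(sub[invariant], ref[invariant], 1)
normalized = gain * sub + offset

# Assess the normalization by R^2 and MSE against the reference.
err = ref[invariant] - normalized[invariant]
mse = np.mean(err ** 2)
r2 = 1.0 - np.sum(err ** 2) / np.sum(
    (ref[invariant] - ref[invariant].mean()) ** 2)
print(f"gain={gain:.3f}  offset={offset:.2f}  R^2={r2:.4f}  MSE={mse:.2f}")
```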

Carbon-Based Composites Enable Monitoring of Internal States in Concrete Structures

Previous research studies concluded that thin-walled fiber-cement composites are able to conduct electric current under specific conditions. This property is ensured by the use of various kinds of carbon materials. Although carbon fibers are less conductive than metal fibers, composites with carbon fibers were evaluated as better current conductors than composites with metal fibers. The level of electric conductivity is monitored by means of impedance measurements on the designed samples. These composites could be used for a range of applications, such as heating of trafficable surfaces or shielding of electromagnetic fields. The aim of the present research was to design an element with the ability to monitor internal processes in building structures and prevent them from collapsing. A concrete column was chosen as a typical element for laboratory testing; it was repeatedly loaded in simple compression while changes in its electrical properties were continuously monitored.

Vibration Reduction Module with Flexure Springs for Personal Tools

In various working fields, vibration may be injurious to the human body, especially vibration that is constantly and repeatedly transferred to the operator. Such exposure causes a serious physical problem, the so-called Raynaud phenomenon. In this paper, we propose a vibration-transmissibility reduction module with a flexure mechanism for personal tools. First, we select a target personal tool, a grass cutter, and measure the level of vibration transmissibility at the hand. We then develop a concept design of a module whose stiffness reduces the vibration transmissibility by more than 20%, where the transmissibility is measured with an accelerometer. In addition, the vibration reduction can be enhanced when the interior gap between the inner and outer body is filled with silicone gel; this will be verified by further experiments.
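
As background (standard single-degree-of-freedom isolator theory, not a result of this paper), the transmissibility the module is designed to reduce follows the classical relation below, which also explains why a compliant flexure spring helps:

```latex
% Transmissibility of a single-DOF isolator with damping ratio \zeta
% and frequency ratio r = \omega / \omega_n:
T(r) \;=\; \sqrt{\frac{1 + (2\zeta r)^2}{(1 - r^2)^2 + (2\zeta r)^2}}
% Isolation (T < 1) requires r > \sqrt{2}; a compliant flexure spring
% lowers \omega_n, raising r at the tool's excitation frequency.
```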

Principal Type of Water Responsible for Damage of Concrete under Repeated Freeze-Thaw Cycles

The first and basic cause of the failure of concrete is repeated freezing and thawing of the moisture contained in the pores, microcracks, and cavities of the concrete. On transition to ice, water existing in the free state in cracks increases in volume, expanding the recess in which freezing occurs. A reduction in strength below the initial value is to be expected, and further cycles of freezing and thawing have a further marked effect. Using experimental techniques such as nuclear magnetic resonance (NMR) and enthalpy-temperature (heat capacity) measurements, we can resolve the various water states and their effects on concrete properties during cooling through the freezing transition temperature range. The main objective of this paper is to describe the principal type of water responsible for the reduction in strength and the structural (frost) damage of concrete following repeated freeze-thaw cycles. Experimental work was carried out at the Institute of Cryogenics to determine what happens to water in concrete during the freezing transition.

Masonry CSEB Building Models under Shake Table Testing - An Experimental Study

In this experimental investigation, shake table tests were conducted on two reduced-scale models representing a typical single-room building constructed with Compressed Stabilized Earth Blocks (CSEB) made from locally available soil. One model was constructed with earthquake-resisting features (EQRF), namely a sill band, a lintel band and vertical bands to control the building vibration, and the other was without earthquake-resisting features. To examine the seismic capacity of the models, particularly when subjected to long-period, large-amplitude ground motion with many cycles of repeated loading, the test specimens were shaken repeatedly until failure. The test results from a high-end data acquisition system show that the model with EQRF behaves better than the one without. This modified masonry model, combining the new material with the new bands, improves the behavior of masonry buildings.

Packet Forwarding with Multiprotocol Label Switching

MultiProtocol Label Switching (MPLS) is an emerging technology that aims to address many of the existing issues associated with packet forwarding in today's internetworking environment. It provides a method of forwarding packets at a high rate of speed by combining the speed and performance of Layer 2 with the scalability and IP intelligence of Layer 3. In a traditional IP (Internet Protocol) routing network, a router analyzes the destination IP address contained in the packet header and independently determines the next hop for the packet using the destination IP address and the interior gateway protocol. This process is repeated at each hop to deliver the packet to its final destination. In contrast, in the MPLS forwarding paradigm, routers on the edge of the network (label edge routers) attach labels to packets based on the Forwarding Equivalence Class (FEC). Packets are then forwarded through the MPLS domain, based on their associated FECs, by label swapping at routers in the core of the network called label switch routers. Simply swapping the label, instead of referencing the IP header of the packet in the routing table at each hop, provides a more efficient manner of forwarding packets, which in turn allows traffic to be forwarded at tremendous speeds and gives granular control over the path taken by a packet. This paper deals with the MPLS forwarding mechanism, the implementation of the MPLS datapath, and test results showing a performance comparison of MPLS and IP routing. The discussion focuses primarily on MPLS IP packet networks, by far the most common application of MPLS today.
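
The contrast between the two lookups can be made concrete with a small sketch: the IP hop performs a longest-prefix match, while the MPLS core hop performs a single exact-match lookup and label swap. The addresses, labels and interface names are invented for illustration.

```python
import ipaddress

# Traditional IP hop: longest-prefix match against the routing table.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",   # more specific wins
}

def ip_next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    return routing_table[max(matches, key=lambda n: n.prefixlen)]

# MPLS core hop: exact-match lookup on the incoming label, then swap.
lfib = {17: (25, "eth1"), 18: (30, "eth0")}  # in-label -> (out-label, iface)

def mpls_forward(label):
    out_label, iface = lfib[label]
    return out_label, iface

print(ip_next_hop("10.1.2.3"))   # -> eth1  (longest prefix)
print(mpls_forward(17))          # -> (25, 'eth1')  label swap
```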

Problems of Measuring Effectiveness of Innovation Performance

The innovation performance of nations has been measured repeatedly in the literature. We argue that while the literature offers many suggestions, their theoretical foundation is often weak and the underlying assumptions are rarely discussed. In this paper, we systematize various mechanisms by which spatial units influence the innovation activities of nations and firms. On this basis, common innovation performance measures and analyses are discussed and evaluated. It is concluded that there is no generally best way of measuring the innovation performance of spatial units. In fact, the most interesting insights can be obtained by using a multitude of different approaches at the same time.

Performance Optimization of Data Mining Application Using Radial Basis Function Classifier

Text data mining is a process of exploratory data analysis. Classification maps data into predefined groups or classes; it is often referred to as supervised learning because the classes are determined before examining the data. This paper describes a proposed radial basis function (RBF) classifier that performs comparative cross-validation against an existing RBF classifier. The feasibility and benefits of the proposed approach are demonstrated on a data mining problem, direct marketing, which has become an important application field of data mining. Comparative cross-validation involves estimating accuracy by either stratified k-fold cross-validation or equivalent repeated random subsampling. While the proposed method may have high bias, its performance (accuracy estimation in our case) may be poor due to high variance; thus the accuracy with the proposed RBF classifier was lower than with the existing RBF classifier. However, there is a smaller improvement in runtime and a larger improvement in precision and recall. In the proposed method, classification accuracy and prediction accuracy are determined, and the prediction accuracy is comparatively high.
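
A minimal sketch of the two accuracy-estimation schemes named above is given below; an RBF-kernel SVM from scikit-learn stands in for the paper's radial basis function network, and the dataset is synthetic rather than the direct-marketing data used in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import (StratifiedKFold, StratifiedShuffleSplit,
                                     cross_val_score)
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
clf = SVC(kernel="rbf", gamma="scale")   # stand-in RBF classifier

# Stratified k-fold: each fold preserves the class proportions.
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc_kfold = cross_val_score(clf, X, y, cv=kfold)

# Repeated random subsampling: 10 independent stratified 90/10 splits.
subsample = StratifiedShuffleSplit(n_splits=10, test_size=0.1, random_state=0)
acc_subsample = cross_val_score(clf, X, y, cv=subsample)

print(f"stratified 10-fold:    {acc_kfold.mean():.3f} +/- {acc_kfold.std():.3f}")
print(f"repeated subsampling:  {acc_subsample.mean():.3f} +/- {acc_subsample.std():.3f}")
```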

Genetic-Fuzzy Inverse Controller for a Robot Arm Suitable for On-Line Applications

A robot is a repeated-task plant, and the control of such a plant under parameter variations and load disturbances is an important problem. The aim of this work is to design a genetic-fuzzy controller suitable for on-line applications to control a single-link rigid robot arm. The genetic-fuzzy on-line controller (an indirect controller) has two genetic-fuzzy blocks, the first acting as the controller and the second as an identifier. The identification method is based on an inverse identification technique. The proposed controller is tested under normal and load-disturbance conditions.

Biodiesel Fuel Production by Methanolysis of Fish Oil Derived from the Discarded Parts of Fish Catalyzed by Carica papaya Lipase

In this paper, biodiesel production from fish oil catalyzed by a naturally immobilized lipase, Carica papaya lipase, was studied. Refined fish oil, extracted from the discarded parts of fish, was used as the starting material. The effects of the oil:methanol molar ratio, lipase dosage, initial water activity of the lipase, temperature and solvent were investigated. Carica papaya lipase was found suitable for the methanolysis of fish oil to produce methyl ester. The maximum methyl ester yield reached 83% under the optimal reaction conditions: an oil:methanol molar ratio of 1:4, 20% (based on oil) of lipase, an initial lipase water activity of 0.23 and 20% (based on oil) of tert-butanol at 40°C after 18 h of reaction. There was negligible loss in lipase activity even after repeated use for 30 cycles.

Resource Leveling in Construction Projects Using the Re-Modified Minimum Moment Approach

This paper proposes a re-modification of the minimum moment approach to resource leveling; the modified minimum moment approach is itself a refinement of Harris's traditional method, and both are based on the critical path method. The approaches differ in the criterion used to select the activity to be shifted when leveling the resource histogram. In the traditional method, the improvement factor is found first for each possible day of shifting in order to select the activity. In the modified method, the maximum value of the product of resource rate and free float is found first, and the improvement factor is then calculated for the activity to be shifted. In the proposed method, the activity to be shifted is selected first based on the largest resource rate. The process is repeated for all remaining activities that can be shifted, to obtain the updated histogram. The proposed method significantly reduces the number of iterations and is easier for manual computation.
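
A minimal sketch of the proposed selection rule, under illustrative activity data: shiftable activities are taken in order of decreasing resource rate, and each is moved within its free float to the position that minimizes the histogram moment (Harris's sum of squared daily demands). A full implementation would also recompute floats from the CPM network after each shift.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    start: int       # scheduled start day
    duration: int
    rate: int        # resource units per day
    free_float: int  # days the activity may shift without delaying successors

def histogram(activities, horizon):
    h = [0] * horizon
    for a in activities:
        for d in range(a.start, a.start + a.duration):
            h[d] += a.rate
    return h

def moment(h):
    # Harris's minimum-moment measure: sum of squared daily demands.
    return sum(x * x for x in h)

def level(activities, horizon):
    # Proposed rule: consider shiftable activities by decreasing rate.
    for a in sorted(activities, key=lambda a: -a.rate):
        if a.free_float == 0:
            continue
        orig = a.start
        best = (moment(histogram(activities, horizon)), orig)
        for s in range(1, a.free_float + 1):
            a.start = orig + s
            best = min(best, (moment(histogram(activities, horizon)), a.start))
        a.start = best[1]   # keep the lowest-moment position

acts = [Activity("A", 0, 3, 6, 0),
        Activity("B", 0, 2, 4, 3),
        Activity("C", 2, 2, 3, 2)]
level(acts, horizon=8)
print([(a.name, a.start) for a in acts], histogram(acts, 8))
```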

The Effects of Neuromuscular Training on Limits of Stability in Female Individuals

This study examined the effects of neuromuscular training (NT) on the limits of stability (LOS) in females. Twenty female amateur basketball players were assigned to an NT experimental group or a control group on a voluntary basis. All players underwent regular basketball practice (90 minutes, 3 times per week for 6 weeks), but the NT experimental group additionally underwent NT with plyometric and core training (50 minutes, 3 times per week) during this period. Limits of stability were evaluated with the Biodex Balance System. A one-factor ANCOVA was used to examine the differences between groups after training. The significance level was set at p

Review and Experiments on SDMSCue

In this work, I present a review of Sparse Distributed Memory for Small Cues (SDMSCue), a variant of Sparse Distributed Memory (SDM) that is capable of handling small cues. I then conduct and report cognitive experiments on SDMSCue to test its cognitive soundness compared to SDM. Small cues are input cues, presented to the memory for reading associations, that have many missing parts or fields. The original SDM fails on such cues; SDMSCue handles and overcomes this pitfall. The main idea in SDMSCue is the repeated projection of the semantic space onto smaller subspaces that are selected based on the input cue length and pattern. This process allows read/write operations using an input cue that is missing a large portion. SDMSCue is augmented with genetic algorithms for memory allocation and initialization. I claim that SDM functionality is a subset of SDMSCue functionality.
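
A minimal sketch of the projection idea only (not the full SDMSCue read/write machinery, counters, or GA-based allocation): when a cue specifies only some fields, hard locations are compared with it only on the known dimensions, so the usual Hamming-radius activation still works on the projected subspace. The sizes, radius fraction and the use of -1 to mark missing fields are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 1000                       # address width, hard locations
addresses = rng.integers(0, 2, size=(M, N))

def activated(cue, radius_fraction=0.45):
    """Indices of hard locations within the Hamming radius, measured
    only on the coordinates the cue actually specifies (cue >= 0)."""
    known = cue >= 0                   # -1 marks a missing field
    radius = int(radius_fraction * known.sum())
    dist = (addresses[:, known] != cue[known]).sum(axis=1)
    return np.flatnonzero(dist <= radius)

full_cue = rng.integers(0, 2, size=N)
small_cue = full_cue.copy()
small_cue[64:] = -1                    # only the first 64 bits are given
print(len(activated(full_cue)), len(activated(small_cue)))
```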

Using the Monte Carlo Simulation to Predict the Assembly Yield

Electronics products that achieve high levels of integrated communications, computing and entertainment, with multimedia features in small, stylish and robust new form factors, are winning in the marketplace. Because of the high costs an industry may incur, and because high yield is directly proportional to high profits, IC (Integrated Circuit) manufacturers struggle to maximize yield; yet today's customers demand miniaturization, low costs, high performance and excellent reliability, making yield maximization a never-ending pursuit of an enhanced assembly process. With factors such as minimum tolerances and tighter parameter variations, a systematic approach is needed to predict the assembly process. To evaluate the quality of upcoming circuits, yield models are used, which not only predict manufacturing costs but also provide vital information that eases the process of correction when yields fall below expectations. For an IC manufacturer to obtain higher assembly yields, all factors, such as boards, placement, components, the materials from which the components are made, and processes, must be taken into consideration. Effective placement yield depends heavily on machine accuracy and on the vision system, which needs the ability to recognize the features on the board and component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package, and using solder ball locations, also called footprints. The only assumption a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, a class of computational algorithms that depend on repeated random sampling to compute their results. The method is used here to recreate, by simulation, the placement and assembly processes within a production line.
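
A minimal Monte Carlo sketch in this spirit: placement offsets are sampled from an assumed machine-accuracy distribution, a placement passes if it lands within an assumed pad tolerance, and the assembly yield follows from the per-placement pass rate. The sigma, tolerance and placement count are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 1_000_000
sigma_xy = 20.0   # placement repeatability in um (assumed)
tol = 50.0        # allowable offset before a defective joint (assumed)

# Sample random x/y placement offsets and count those within tolerance.
dx = rng.normal(0.0, sigma_xy, n_trials)
dy = rng.normal(0.0, sigma_xy, n_trials)
ok = (np.abs(dx) <= tol) & (np.abs(dy) <= tol)

per_placement = ok.mean()
print(f"per-placement yield: {per_placement:.4f}")

# A board with k independent placements passes only if all of them do.
k = 200
print(f"assembly yield for {k} placements: {per_placement ** k:.4f}")
```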

Threshold Stress of the Soil Subgrade Evaluation for Highway Formations

The objective of this study is to evaluate the threshold stress of a clay-with-sand subgrade soil. Threshold stress can be defined as the stress level above which cyclic loading leads to excessive deformation and eventual failure. Determining the thickness of highway formations using the threshold stress approach is a more realistic assessment of soil behaviour, because the soil is subjected to repeated loading from moving vehicles. Threshold stress can be evaluated by a plastic strain criterion, which is based on the accumulated plastic strain behaviour during cyclic loading [1]. Several all-round pressure conditions of the subgrade soil, namely zero confinement, low all-round pressure and high all-round pressure, are investigated, and the threshold stresses for these conditions are determined. The threshold stresses of the soil are 60%, 31% and 38.6% for the unconfined partially saturated sample, the low-effective-stress saturated sample and the high-effective-stress saturated sample, respectively.

Pseudo-Homogeneous Kinetics of Dilute-Acid Hydrolysis of Rice Husk for Ethanol Production: Effect of Sugar Degradation

Rice husk is a lignocellulosic source that can be converted to ethanol. Three hundred grams of rice husk was mixed with 1 L of 0.18 N sulfuric acid solution and heated in an autoclave. The reaction was intended to run at constant temperature (isothermally), but the reaction had already begun before that temperature was reached. The first liquid sample was taken at 140 °C and sampling was repeated at 5-minute intervals, so the data obtained cover both the non-isothermal and the isothermal regions. Sugar degradation was observed to have a significant effect on ethanol production. The kinetic constants can be expressed by the Arrhenius equation, with frequency factors for hydrolysis and sugar degradation of 1.58 × 10^5 min^-1 and 2.29 × 10^8 L/(mol·min), respectively, and activation energies of 64,350 J/mol and 76,571 J/mol. The highest ethanol concentration from fermentation was 1.13% v/v, attained at 220 °C.
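
As a short worked example, the reported Arrhenius constants can be evaluated directly; because sugar degradation has the higher activation energy, its rate constant grows faster with temperature than that of hydrolysis (note the two constants have different units, first order versus second order, so only their temperature trends are directly comparable):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(A, Ea, T_kelvin):
    # k = A * exp(-Ea / (R * T))
    return A * math.exp(-Ea / (R * T_kelvin))

# Values from the abstract: (frequency factor, activation energy).
hydrolysis = (1.58e5, 64_350)     # min^-1, J/mol
degradation = (2.29e8, 76_571)    # L/(mol*min), J/mol

for T_c in (140, 180, 220):
    T = T_c + 273.15
    kh = arrhenius(*hydrolysis, T)
    kd = arrhenius(*degradation, T)
    print(f"{T_c} °C:  k_hydrolysis = {kh:.3e} min^-1,  "
          f"k_degradation = {kd:.3e} L/(mol*min)")
```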

A New Heuristic Approach for the Large-Scale Generalized Assignment Problem

This paper presents a heuristic approach to the Generalized Assignment Problem (GAP), which is NP-hard. Many researchers have developed algorithms for identifying redundant constraints and variables in linear programming models, some of which use the intercept matrix of the constraints to identify redundancies prior to the start of the solution process. Here, a new heuristic based on the dominance property of the intercept matrix is proposed to find optimal or near-optimal solutions of the GAP. In this heuristic, redundant variables of the GAP are identified by applying the dominance property of the intercept matrix repeatedly. The heuristic is tested on 90 benchmark problems of sizes up to 4000, taken from the OR-Library, and the results are compared with the optimum solutions. The computational complexity of solving the GAP with this approach is shown to be O(mn^2). The performance of the heuristic is also compared with the best state-of-the-art heuristic algorithms with respect to solution quality. The encouraging results, especially for relatively large test problems, indicate that this heuristic approach can successfully be used to find good solutions for highly constrained NP-hard problems.
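
For reference, the standard 0-1 programming formulation of the GAP that this heuristic addresses is (background, not a contribution of the paper):

```latex
% Generalized Assignment Problem: assign each of n tasks to exactly
% one of m capacity-constrained agents at minimum cost.
\min \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{j=1}^{n} a_{ij}\, x_{ij} \le b_i \;\; (i = 1,\dots,m), \qquad
\sum_{i=1}^{m} x_{ij} = 1 \;\; (j = 1,\dots,n), \qquad
x_{ij} \in \{0,1\}
% c_{ij}: cost of agent i performing task j; a_{ij}: capacity consumed;
% b_i: capacity of agent i.
```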