Quantum Computation using Two Component Bose-Einstein Condensates

Quantum computation using qubits made of two-component Bose-Einstein condensates (BECs) is analyzed. We construct a general framework for quantum algorithms to be executed using the collective states of the BECs. The use of BECs allows for an increase of energy scales via bosonic enhancement, so that two-qubit gate operations can be performed in a time reduced by a factor of N, where N is the number of bosons per qubit. We illustrate the scheme by an application to Deutsch's and Grover's algorithms, and discuss possible experimental implementations. Decoherence effects are analyzed both under general conditions and for the proposed experimental implementation.

Accurate Visualization of Graphs of Functions of Two Real Variables

The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One class of limitations of such systems is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce misleading plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least-squares penalty method for automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that for large, poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.
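As a minimal illustration of the interpolation problem this abstract describes (not the authors' algorithm), the sketch below samples a discontinuous function on a grid and masks any grid point whose neighbour differs by more than a jump tolerance, so that a mesh plotter will break the surface instead of bridging the jump. The function name `sample_masked` and the `jump_tol` parameter are illustrative.

```python
def sample_masked(f, xs, ys, jump_tol=10.0):
    """Sample f on a grid, masking (setting to None) points adjacent to a
    suspected discontinuity so a mesh plotter will not bridge the jump."""
    grid = [[f(x, y) for x in xs] for y in ys]
    masked = [row[:] for row in grid]
    for j in range(len(ys)):
        for i in range(len(xs)):
            # compare each sample with its right and lower neighbours
            for dj, di in ((0, 1), (1, 0)):
                jj, ii = j + dj, i + di
                if jj < len(ys) and ii < len(xs):
                    if abs(grid[jj][ii] - grid[j][i]) > jump_tol:
                        masked[j][i] = None
                        masked[jj][ii] = None
    return masked

# Step function: discontinuous across the line x = 0
step = lambda x, y: 0.0 if x < 0 else 1.0
xs = [i / 10 - 0.5 for i in range(11)]
ys = xs
m = sample_masked(step, xs, ys, jump_tol=0.5)
```

Points straddling x = 0 come back as `None` (which e.g. matplotlib treats as a gap), while samples away from the discontinuity keep their values.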

Mathematical Modeling of Asphaltene Precipitation: A Review

In Enhanced Oil Recovery (EOR), carbon dioxide flooding, whereby CO2 is injected into an oil reservoir to increase output, has resulted in significant additional recovery worldwide. The carbon dioxide acts as a pressurizing agent and, when mixed into the underground crude oil, reduces its viscosity and enables a more rapid oil flow. Despite CO2's advantages for oil recovery, it may result in asphaltene precipitation, a problem that reduces the amount of oil produced from oil wells. In severe cases, asphaltene precipitation can cause costly blockages in oil pipes and machinery. This paper presents a review of several studies on mathematical modeling of asphaltene precipitation. The results synthesized from research on this topic can be used as a guide to better understand asphaltene precipitation, and as an initial reference for students and new researchers studying asphaltene precipitation.

Effect of Organic Matter and Biofertilizers on Chickpea Quality and Biological Nitrogen Fixation

In order to evaluate the effects of soil organic matter and biofertilizers on chickpea quality and biological nitrogen fixation, field experiments were carried out in the 2007 and 2008 growing seasons. In this research the effects of different soil fertilization strategies on grain yield and yield components, minerals, organic compounds and cooking time of chickpea were investigated. Experimental units were arranged as split-split plots based on randomized complete blocks with three replications. Main plots consisted of (G1): establishing a mixed vegetation of Vicia pannonica and Hordeum vulgare, and (G2): control, as green manure levels. Five strategies for supplying the base fertilizer requirement, namely (N1): 20 t.ha-1 farmyard manure; (N2): 10 t.ha-1 compost; (N3): 75 kg.ha-1 triple superphosphate; (N4): 10 t.ha-1 farmyard manure + 5 t.ha-1 compost; and (N5): 10 t.ha-1 farmyard manure + 5 t.ha-1 compost + 50 kg.ha-1 triple superphosphate, were assigned to sub-plots. Furthermore, four levels of biofertilizers, consisting of (B1): Bacillus lentus + Pseudomonas putida; (B2): Trichoderma harzianum; (B3): Bacillus lentus + Pseudomonas putida + Trichoderma harzianum; and (B4): control (without biofertilizers), were arranged in sub-sub plots. Results showed that integrating biofertilizers (B3) and green manure (G1) produced the highest grain yield, and the highest yields were obtained in the G1×N5 interaction. Comparison of all 2-way and 3-way interactions identified G1N5B3 as the superior treatment. The significant increases in N, P2O5, K2O, Fe and Mg content in leaves and grains underline the superiority of this treatment, because each of these nutrients has a proven role in chlorophyll synthesis and the photosynthetic capacity of the crop.
The combined application of compost, farmyard manure and chemical phosphorus (N5), in addition to giving the highest yield, produced the best grain quality due to high protein, starch and total sugar contents, low crude fiber and reduced cooking time.

Library Aware Power Conscious Realization of Complementary Boolean Functions

In this paper, we consider the problem of logic simplification for a special class of logic functions, namely complementary Boolean functions (CBF), targeting low-power implementation using the static CMOS logic style. The functions are uniquely characterized by the presence of terms where, for a canonical binary 2-tuple, D(mj) ∩ D(mk) = { } and therefore |D(mj) ∩ D(mk)| = 0 [19]. Similarly, D(Mj) ∩ D(Mk) = { } and hence |D(Mj) ∩ D(Mk)| = 0. Here, 'mk' and 'Mk' represent a minterm and a maxterm respectively. We compare the circuits minimized with our proposed method against those corresponding to the factored Reed-Muller (f-RM) form, the factored Pseudo Kronecker Reed-Muller (f-PKRM) form, and the factored Generalized Reed-Muller (f-GRM) form. We have opted for algebraic factorization of the Reed-Muller (RM) form and its variants, using the factorization rules of [1], as it is simple and requires much less CPU execution time than Boolean factorization. This technique has enabled us to greatly reduce the literal count as well as the gate count needed for such RM realizations, which are generally prone to using more cells and consequently consuming more power. However, this leads to a drawback in terms of the design-for-test attributes associated with the various RM forms. Though we still preserve the definition of those forms, viz. realizing such functionality with only select types of logic gates (AND and XOR gates), the structural integrity of the logic levels is not preserved. This consequently alters the testability properties of such circuits, i.e. it may increase, decrease, or maintain the number of test input vectors needed for their exhaustive testing, thereby affecting their generalized test vector computation.
We do not consider the issue of design-for-testability here; instead, we focus on the power consumption of the final logic implementation after realization with a conventional CMOS process technology (0.35 micron TSMC process). The quality of the resulting circuits, evaluated on the basis of an established cost metric, viz. power consumption, demonstrates average savings of 26.79% for the samples considered in this work, besides reductions in the number of gates and input literals of 39.66% and 12.98% respectively, in comparison with the other factored RM forms.

A Context-Aware Supplier Selection Model

Selection of the best possible set of suppliers has a significant impact on the overall profitability and success of any business. For this reason, it is usually necessary to optimize all business processes and to make use of cost-effective alternatives for additional savings. This paper proposes a new, efficient context-aware supplier selection model that takes into account possible changes of the environment while significantly reducing selection costs. The proposed model is based on data clustering techniques and draws on principles of online algorithms for optimal selection of suppliers. Unlike common selection models, which re-run the selection algorithm from scratch over the whole environment for every decision-making sub-period, our model considers only the changes and superimposes them on the previously determined best set of suppliers to obtain a new best set. Recomputation of unchanged elements of the environment is therefore avoided, and selection costs are consequently reduced significantly. A numerical evaluation confirms the applicability of this model and shows that it outperforms common static selection models in this field.
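The incremental idea can be caricatured in a few lines (a toy illustration, not the paper's actual model): keep the previous winners, and on each sub-period re-rank only the changed suppliers together with them, rather than the whole pool. The names `best_suppliers` and `update_selection` are hypothetical, and the sketch assumes changes are cost reductions or new entrants; a cost increase for a current winner would require reconsidering the full pool.

```python
def best_suppliers(costs, k):
    """From-scratch selection: the k cheapest suppliers."""
    return sorted(costs, key=costs.get)[:k]

def update_selection(costs, current_best, changes, k):
    """Incremental re-selection: merge only the changed entries with the
    previous best set instead of re-ranking the whole supplier pool."""
    costs.update(changes)
    candidates = set(current_best) | set(changes)
    return sorted(candidates, key=costs.get)[:k]

pool = {"A": 5.0, "B": 3.0, "C": 7.0, "D": 4.0}
best = best_suppliers(pool, 2)                      # cheapest two
best = update_selection(pool, best, {"C": 2.0}, 2)  # only C changed
```

The second call touches three candidates instead of the whole pool, which is where the claimed cost reduction comes from.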

Structural Analysis of Stiffened FGM Thick Walled Cylinders by Application of a New Cylindrical Super Element

Structural behavior of ring-stiffened thick-walled cylinders made of functionally graded materials (FGMs) is investigated in this paper. Functionally graded materials are inhomogeneous composites, usually made from a mixture of metal and ceramic. The gradual compositional variation of the constituents from one surface to the other provides an elegant solution to the problem of the high transverse shear stresses that are induced when two dissimilar materials with large differences in material properties are bonded. The FGM composition of the cylinder is modeled by a power-law exponent, and the variation of material characteristics is assumed to be in the radial direction. A finite element formulation is derived for the analysis. Because of the property variation of the constituent materials in the radial direction of the wall, it is not convenient to use conventional elements to model and analyze stiffened FGM cylinders. In this paper a new cylindrical super-element is used to derive the finite element formulation and to analyze the static and modal behavior of stiffened FGM thick-walled cylinders. By using this super-element, the number of elements needed for modeling is reduced significantly, and the processing time is lower than with conventional finite element formulations. Results for static and modal analysis are evaluated and verified by comparison with a finite element formulation using conventional elements. The comparison indicates good conformity between the results.
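The power-law radial grading mentioned above is commonly written as P(r) = P_i + (P_o − P_i)((r − r_i)/(r_o − r_i))^n. A minimal sketch, with illustrative property values not taken from the paper:

```python
def fgm_property(p_inner, p_outer, r, r_i, r_o, n):
    """Power-law radial grading of a material property between the
    inner (r_i) and outer (r_o) surfaces of the cylinder wall."""
    xi = (r - r_i) / (r_o - r_i)      # normalized radius in [0, 1]
    return p_inner + (p_outer - p_inner) * xi ** n

# Illustrative values: metal-like inner surface, ceramic-like outer surface
E_inner, E_outer = 70e9, 380e9        # Young's moduli in Pa (assumed)
E_mid = fgm_property(E_inner, E_outer, 1.5, 1.0, 2.0, 1.0)
```

The exponent n controls how quickly the mix shifts from metal to ceramic across the wall; n = 1 gives a linear blend.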

Improving Performance of World Wide Web by Adaptive Web Traffic Reduction

The ever-increasing use of the World Wide Web results in poor performance of the existing network. Several techniques have been developed for reducing web traffic: compressing files, caching web pages at the client side, smoothing the bursty nature of traffic into a constant rate, etc. No single method is adequate on its own to access documents instantly through the Internet. In this paper, adaptive hybrid algorithms are developed for reducing web traffic. Intelligent agents are used for monitoring the web traffic. Depending upon the bandwidth usage, the user's preferences, and server and browser capabilities, the intelligent agents select the best techniques to achieve maximum traffic reduction. Web caching, compression, filtering, optimization of HTML tags, and traffic dispersion are incorporated into this adaptive selection. Using this new hybrid technique, latency is reduced by 20-60% and the cache hit ratio is increased by 40-82%.
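Of the techniques listed, compression is the easiest to quantify. The sketch below uses the standard-library `zlib` (DEFLATE) on a repetitive HTML payload to measure the byte saving; it illustrates the general mechanism only, not the paper's agent framework.

```python
import zlib

def compression_saving(payload: bytes) -> float:
    """Fraction of bytes saved by DEFLATE-compressing an HTTP payload."""
    compressed = zlib.compress(payload, 6)   # level 6: zlib's default trade-off
    return 1.0 - len(compressed) / len(payload)

# Markup is highly repetitive, so DEFLATE removes most of the bytes
html = b"<html><body>" + b"<p>hello world</p>" * 200 + b"</body></html>"
saving = compression_saving(html)
```

An adaptive agent would weigh such a saving against the CPU cost of compressing and the client's ability to decompress before choosing this technique.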

Performance of Random Diagonal Codes for Spectral Amplitude Coding Optical CDMA Systems

In this paper we study the use of a new code, called the Random Diagonal (RD) code, for Spectral Amplitude Coding (SAC) optical Code Division Multiple Access (CDMA) networks using Fiber Bragg Gratings (FBGs). An FBG consists of a fiber segment whose index of refraction varies periodically along its length. The RD code is constructed using a code level and a data level; one of its important properties is that the cross-correlation at the data level is always zero, which means that phase-induced intensity noise (PIIN) is reduced. We find that the RD code performs better than the Modified Frequency Hopping (MFH) and Hadamard codes. It has been observed through experimental and theoretical simulation that the BER for the RD code is significantly better than for the other codes. Proof-of-principle simulations of encoding with 3 channels and 10 Gbps data transmission have been successfully demonstrated, together with an FBG decoding scheme for canceling the code level from the SAC signal.

Emission Assessment of Rice Husk Combustion for Power Production

Rice husk is one of the alternative fuels for Thailand because of its high potential and environmental benefits. Nonetheless, the environmental profile of electricity production from rice husk must be assessed to ensure reduced environmental damage. A 10 MW pilot plant using rice husk as feedstock is the study site. The environmental impacts of the rice husk power plant are evaluated using the Life Cycle Assessment (LCA) methodology. Energy, material and carbon balances have been determined for tracing the system flows. Carbon closure has been used to describe the net amount of CO2 released from the system in relation to the amount recycled between the power plant and the CO2 absorbed by rice husk. The transportation of rice husk to the power plant has a significant effect on global warming, but not on acidification or photo-oxidant formation. The results showed that the impact potentials of the rice husk power plant are lower than those of conventional plants for most of the categories considered, except for the photo-oxidant formation potential from CO. The high CO emission from the rice husk power plant may be due to low boiler efficiency and high moisture content in the rice husk. The performance of the study site can be enhanced by improving the combustion efficiency.

A Comparison Study of Electrical Characteristics in Conventional Multiple-gate Silicon Nanowire Transistors

In this paper the electrical characteristics of various kinds of multiple-gate silicon nanowire transistors (SNWTs) with a channel length of 7 nm are compared. A fully ballistic quantum mechanical transport approach based on the non-equilibrium Green's function (NEGF) formalism was employed to analyze the electrical characteristics of rectangular and cylindrical silicon nanowire transistors, as well as a double-gate MOSFET. Double-gate, triple-gate, and gate-all-around nanowires were studied to investigate the impact of increasing the number of gates on the control of short-channel effects, which is important in nanoscale devices. In the case of the triple-gate rectangular SNWT, inserting extra gates on the bottom of the device can improve its performance. The results indicate that with gate-all-around structures, short-channel effects such as DIBL, subthreshold swing, and delay are reduced.

Person Identification using Gait by Combined Features of Width and Shape of the Binary Silhouette

Current image-based human recognition methods, such as fingerprint, face, or iris biometric modalities, generally require a cooperative subject, views from certain aspects, and physical contact or close proximity. These methods cannot reliably recognize non-cooperating individuals at a distance in the real world under changing environmental conditions. Gait, which concerns recognizing individuals by the way they walk, is a relatively new biometric without these disadvantages. The inherent gait characteristics of an individual make it irreplaceable and useful in visual surveillance. In this paper, an efficient gait recognition system for human identification is proposed, based on two features: the width vector of the binary silhouette and MPEG-7 region-based shape descriptors. In the proposed method, foreground objects, i.e., humans and other moving objects, are extracted by estimating background information with a Gaussian Mixture Model (GMM); subsequently, a median filtering operation is performed to remove noise in the background-subtracted image. A moving target classification algorithm, using shape and boundary information, is used to separate human beings (i.e., pedestrians) from other foreground objects (e.g., vehicles). The width vector of the outer contour of the binary silhouette and the MPEG-7 Angular Radial Transform coefficients are then taken as the feature vector. Next, Principal Component Analysis (PCA) is applied to the feature vector to reduce its dimensionality. The extracted feature vectors are used to train a Hidden Markov Model (HMM) for identifying individuals. The proposed system is evaluated on gait sequences, and the experimental results show the efficacy of the proposed algorithm.
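The first of the two features, the silhouette width vector, can be sketched as follows. This is a simplified stand-in for the paper's pipeline; a real silhouette would come from the GMM background-subtraction and median-filtering stages described above.

```python
def width_vector(silhouette):
    """Per-row width of a binary silhouette: span between the leftmost
    and rightmost foreground (1) pixels in each row, 0 for empty rows."""
    widths = []
    for row in silhouette:
        cols = [i for i, v in enumerate(row) if v]
        widths.append(cols[-1] - cols[0] + 1 if cols else 0)
    return widths

# Tiny 3x4 binary silhouette (1 = foreground)
sil = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
]
```

`width_vector(sil)` yields one width per image row; a sequence of such vectors over a gait cycle is what feeds the PCA and HMM stages.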

Intelligent Video-Based Monitoring of Freeway Traffic

Freeways are originally designed to provide high mobility to road users. However, growth in population and vehicle numbers has led to increasing congestion around the world. Daily recurrent congestion substantially reduces freeway capacity when it is most needed. Building new highways and expanding existing ones is an expensive solution and impractical in many situations. Intelligent, vision-based techniques can, however, be efficient tools for monitoring highways and increasing the capacity of the existing infrastructure. The crucial step in highway monitoring is vehicle detection. In this paper, we propose one such technique, based on artificial neural networks (ANNs) for vehicle detection and counting. The detection process uses freeway video images and starts by automatically extracting the image background from successive video frames. Once the background is identified, subsequent frames are used to detect moving objects through image subtraction. The result is segmented using the Sobel operator for edge detection. The ANN is then used in the detection and counting phase. Applying this technique to the busiest freeway in Riyadh (King Fahd Road) achieved higher than 98% detection accuracy despite light intensity changes, occlusion situations, and shadows.
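The Sobel segmentation step can be sketched as below. This is a pure-Python illustration of the standard operator; a production system would use an optimized image library, and the input here is a toy frame rather than a real subtraction result.

```python
import math

def sobel_magnitude(img):
    """Gradient magnitude from 3x3 Sobel kernels; borders are left at 0.
    In the described pipeline this would run on the image-subtraction
    result, before the ANN detection/counting stage."""
    h, w = len(img), len(img[0])
    kx = ((-1, 0, 1), (-2, 0, 2), (-1, 0, 1))   # horizontal gradient
    ky = ((-1, -2, -1), (0, 0, 0), (1, 2, 1))   # vertical gradient
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical step edge between columns 1 and 2
frame = [[0, 0, 1, 1] for _ in range(4)]
edges = sobel_magnitude(frame)
```

The response is largest exactly along the intensity step, which is what lets the edge map outline moving vehicles.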

Reducing Cognitive Load in Learning Computer Programming

Many difficulties are faced in the process of learning computer programming. This paper proposes a system framework intended to reduce cognitive load in learning programming. The first section focuses on the process of learning and the shortcomings of current approaches to learning programming. The proposed prototype is then presented along with its justification. In the proposed prototype, the concept map is used as a visualization metaphor. Concept maps are similar to the mental schemas held in long-term memory and hence can reduce cognitive load well. In addition, other methods, such as the part-code method, are also proposed in this framework to reduce cognitive load.

The Effects of the Impact of Instructional Immediacy on Cognition and Learning in Online Classes

Current research has explored the impact of instructional immediacy, defined as those behaviors that help build close relationships or feelings of closeness, on both cognition and motivation in the traditional and online classroom; however, online courses continue to suffer from higher dropout rates. Based on Albert Bandura's Social Cognitive Theory, four primary relationships or interactions in an online course are explored in light of how they can provide immediacy, thereby reducing student attrition and improving cognitive learning. The four relationships are teacher-student, student-student, student-content, and student-computer. Results of a study conducted with in-service teachers completing a 14-week online professional development technology course are examined to identify immediacy strategies that improve cognitive learning and reduce student attrition. The results reveal that students can be motivated through various interactions and instructional immediacy behaviors, leading to higher completion rates, improved self-efficacy, and better cognitive learning.

Biplot Analysis for Evaluation of Tolerance in Some Bean (Phaseolus vulgaris L.) Genotypes to Bean Common Mosaic Virus (BCMV)

The common bean is the most important grain legume for direct human consumption in the world, and BCMV is one of the world's most serious bean diseases, able to reduce both the yield and the quality of the harvested product. To determine the best tolerance index to BCMV and to identify tolerant genotypes, two experiments were conducted under field conditions. Twenty-five common bean genotypes were sown in two separate randomized complete block designs with three replications, under contamination and non-contamination conditions. On the basis of the correlations among indices, GMP, MP and HARM were determined to be the most suitable tolerance indices. Principal component analysis indicated that the first two components together explained 98.52% of the variation in the data; the first and second components were named potential yield and stress susceptibility, respectively. Based on the assessment of the BCMV tolerance indices and the biplot analysis, WA8563-4, WA8563-2 and Cardinal were the genotypes that exhibited the highest potential seed yield under both contamination and non-contamination conditions.
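The three indices named above have standard definitions in the stress-tolerance literature, with Yp the yield under non-stress (non-contamination) and Ys the yield under stress (contamination); the paper presumably uses these forms, sketched here:

```python
import math

def mp(yp, ys):
    """Mean productivity: arithmetic mean of non-stress / stress yields."""
    return (yp + ys) / 2

def gmp(yp, ys):
    """Geometric mean productivity."""
    return math.sqrt(yp * ys)

def harm(yp, ys):
    """Harmonic mean of the two yields."""
    return 2 * yp * ys / (yp + ys)
```

Genotypes scoring high on all three combine good potential yield with small yield loss under infection, which is why these indices correlate with the biplot's "potential yield" axis.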

Using Data Mining Methodology to Build the Predictive Model of Gold Passbook Price

A gold passbook is an investment tool that is especially suitable for investors making small investments in physical gold. The gold passbook carries lower risk than other ways of investing in gold, but its price is still driven by the gold price, which is in turn influenced by many factors. Therefore, building a model to predict the price of the gold passbook can both reduce investment risk and increase returns. This study investigates the important factors that influence the gold passbook price and utilizes the Group Method of Data Handling (GMDH) to build the predictive model. This method can not only identify the significant variables but also performs well in prediction. The significant variables for the gold passbook price identified by GMDH are the US dollar exchange rate, international petroleum price, unemployment rate, wholesale price index, rediscount rate, foreign exchange reserves, misery index, prosperity coincident index and industrial index.
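The basic GMDH building block is the Ivakhnenko polynomial over a pair of inputs; each layer fits many such pairwise neurons and keeps the ones with the lowest validation error. A minimal sketch of the evaluation and selection step (coefficients are supplied rather than fitted here, and all names are illustrative):

```python
def ivakhnenko(coef, xi, xj):
    """Quadratic two-input GMDH neuron:
    a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2."""
    a0, a1, a2, a3, a4, a5 = coef
    return a0 + a1 * xi + a2 * xj + a3 * xi * xj + a4 * xi * xi + a5 * xj * xj

def select_best(candidates, xi, xj, y_true):
    """Keep the candidate neuron with the lowest validation MSE, as GMDH
    does when deciding which pairwise models survive into the next layer."""
    def mse(coef):
        errs = [(ivakhnenko(coef, a, b) - t) ** 2
                for a, b, t in zip(xi, xj, y_true)]
        return sum(errs) / len(errs)
    return min(candidates, key=mse)

# Toy validation data generated by y = xi + xj
xi = [0.0, 1.0, 2.0, 3.0]
xj = [1.0, 0.0, 2.0, 1.0]
y = [a + b for a, b in zip(xi, xj)]
linear = (0, 1, 1, 0, 0, 0)     # exactly matches y
cross = (0, 0, 0, 1, 0, 0)      # xi*xj term only
best = select_best([cross, linear], xi, xj, y)
```

Repeating this fit-and-prune step over pairs of surviving outputs is how GMDH simultaneously builds the model and discards insignificant input variables.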

Linear-Operator Formalism in the Analysis of Omega Planar Layered Waveguides

A complete spectral representation for the electromagnetic field of planar multilayered waveguides inhomogeneously filled with omega media is presented. The problem of guided electromagnetic propagation is reduced to an eigenvalue equation related to a 2 × 2 matrix differential operator. Using the concept of the adjoint waveguide, general bi-orthogonality relations for the hybrid modes (from either the discrete or the continuous spectrum) are derived. For the special case of homogeneous layers, the linear-operator formalism reduces to a simple 2 × 2 coupling-matrix eigenvalue problem. Finally, as an example of application, the surface and radiation modes of a grounded omega slab waveguide are analyzed.

Meta Model Based EA for Complex Optimization

Evolutionary algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, many real-life optimization problems require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations. The use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta-models, i.e. approximations of the actual fitness functions, which are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available to build such meta-models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context, and presents two evolutionary algorithm frameworks that use meta-models for fitness function evaluation. The first framework, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time through controlled use of meta-models (here, an approximate model generated by Support Vector Machine regression) to partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model, which does not account for uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks on several benchmark functions demonstrate their efficiency.
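The controlled-use idea behind such frameworks can be caricatured in a few lines: score most offspring with a cheap surrogate trained on an archive of true evaluations, and spend real fitness calls on only a fraction of each generation. The sketch below is not DAFHEA; it substitutes a 1-nearest-neighbour surrogate for SVM regression, and all parameters and names are illustrative.

```python
import random

def sphere(x):                        # stand-in for an "expensive" fitness
    return sum(v * v for v in x)

def surrogate_ea(f, dim=2, pop=20, gens=40, true_frac=0.25, seed=1):
    """Minimisation with a toy surrogate-assisted EA: only `true_frac`
    of each generation's offspring get a real fitness call; the rest
    are scored by a 1-NN surrogate built from the evaluation archive."""
    rng = random.Random(seed)
    archive = []                      # (point, true fitness) pairs

    def surrogate(x):
        # predict with the fitness of the nearest archived point
        return min(archive,
                   key=lambda p: sum((a - b) ** 2
                                     for a, b in zip(p[0], x)))[1]

    popu = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for x in popu:
        archive.append((x, f(x)))
    for _ in range(gens):
        kids = [[a + rng.gauss(0, 0.3) for a in x]
                for x in popu for _ in range(2)]        # 2 kids per parent
        n_true = max(1, int(true_frac * len(kids)))
        true_idx = set(rng.sample(range(len(kids)), n_true))
        scored = []
        for i, c in enumerate(kids):
            if i in true_idx:                           # expensive call
                fc = f(c)
                archive.append((c, fc))
            else:                                       # cheap surrogate
                fc = surrogate(c)
            scored.append((fc, c))
        scored.sort(key=lambda t: t[0])
        popu = [c for _, c in scored[:pop]]             # truncation selection
    return min(archive, key=lambda t: t[1])             # best true evaluation

best_x, best_f = surrogate_ea(sphere)
```

With `true_frac=0.25`, three quarters of the fitness calls per generation are replaced by archive lookups, which is the source of the claimed speedup; the trade-off is that surrogate errors can misrank offspring, the issue DAFHEA-II's multiple-model learning targets.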

FT-IR Study of Stabilized PAN Fibers for Fabrication of Carbon Fibers

In this investigation, commercial and special polyacrylonitrile (PAN) fibers containing sodium 2-methyl-2-acrylamidopropane sulfonate (SAMPS) and itaconic acid (IA) comonomers were studied by Fourier transform infrared (FT-IR) spectroscopy. The FT-IR spectra of PAN fiber samples with different comonomers show that during stabilization of PAN fibers, the peaks related to C≡N and CH2 bonds are reduced sharply. These reductions are related to the cyclization of nitrile groups during the stabilization procedure. The reduction is much more pronounced in PAN fibers containing the IA comonomer than in those containing the SAMPS comonomer, indicating that cyclization and stabilization proceed more completely in the sample containing IA. The carbon fibers produced from this material therefore have higher tensile strength, owing to more suitable stabilization.