Investigation of Water Deficit Stress on Agronomical Traits of Soybean Cultivars in Temperate Climate

In order to investigate the effect of water deficit stress on 24 soybean (Glycine max L.) cultivars and lines in a temperate climate, an experiment was conducted at the Seed and Plant Improvement Institute, Iran. The stress levels were irrigation after evaporation of 50, 100, and 150 mm of water from a Class A evaporation pan. A randomized complete block design was arranged for each stress level. Traits including node number, plant height, pod number per unit area, grain number per pod, grain number per unit area, 1000-grain weight, grain yield, and harvest index were measured. The results showed that water deficit stress had a significant effect on node number, plant height, pod number per unit area, grain number per pod, grain number per unit area, 1000-grain weight, and harvest index. All agronomic traits except harvest index were also significantly influenced by cultivar and line. The lowest and highest grain yields belonged to Ronak × Williams and M41 × Clark, respectively.
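
As an illustrative sketch only (not the authors' actual analysis), the significance tests described above could be run as a two-way ANOVA of grain yield against stress level and cultivar/line within blocks; the file name and column names below are hypothetical.

```python
# Illustrative sketch: ANOVA of grain yield against stress level and
# cultivar/line for a randomized complete block design. The data file and
# column names ('stress', 'block', 'cultivar', 'grain_yield') are assumed.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("soybean_trial.csv")  # hypothetical data file

# Block is included as an additional factor, as in an RCBD analysis.
model = ols("grain_yield ~ C(stress) + C(block) + C(cultivar)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)  # F-statistics and p-values for each factor
```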

Characterization of Atmospheric Particulate Matter using PIXE Technique

Coarse and fine particulate matter were collected at a residential area in Vashi, Navi Mumbai, and the filter samples were analysed for trace elements using the PIXE technique. Particulate matter concentrations were higher during winter than during the summer and monsoon periods. High concentrations of elements related to soil and sea salt were found in PM10 and PM2.5, and high levels of zinc and sulphur were also found in both size fractions. Enrichment factor (EF) analysis showed enrichment of Cu, Cr and Mn only in the fine fraction, suggesting their origin from anthropogenic sources. The EF values were highest for As, Pb and Zn in the fine particulates, whereas crustal-derived elements showed very low EF values, indicating their origin from soil. PCA-based multivariate analysis identified soil, sea salt, combustion and Se sources as common to the coarse and fine fractions, with an additional industrial source identified for the fine particles.
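
The enrichment factor referred to here is conventionally computed relative to a crustal reference element such as Fe or Al. The sketch below illustrates the standard calculation with approximate crustal abundances; it is not the authors' code, and the numbers are placeholders.

```python
# Minimal sketch of a crustal enrichment factor (EF) calculation,
#   EF = (X / Ref)_aerosol / (X / Ref)_crust,
# using Fe as the reference element. Crustal abundances are approximate
# placeholder values for illustration only.
CRUSTAL_ABUNDANCE = {"Fe": 56300.0, "Zn": 70.0, "Pb": 15.0, "As": 1.8}  # mg/kg, assumed

def enrichment_factor(elem_aerosol, fe_aerosol, element):
    """EF of `element` in an aerosol sample relative to Fe."""
    ratio_aerosol = elem_aerosol / fe_aerosol
    ratio_crust = CRUSTAL_ABUNDANCE[element] / CRUSTAL_ABUNDANCE["Fe"]
    return ratio_aerosol / ratio_crust

# Example with hypothetical PM2.5 concentrations (ng/m^3): EF values well
# above ~10 are usually read as evidence of a non-crustal source.
print(enrichment_factor(elem_aerosol=150.0, fe_aerosol=400.0, element="Zn"))
```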

Appraisal of Energy Efficiency of Urban Development Plans: The Fidelity Concept on Izmir-Balcova Case

Design and land use are closely linked to the energy efficiency of an urban area. Current city planning practice does not involve an effective land use-energy evaluation in its 'blueprint' urban plans. This study proposes an appraisal method, which can be embedded in GIS programs, that uses five planning criteria to express how far a planner may depart from the planning principles (criteria) in return for the greatest obtainable energy output. The case of Balcova, a district in the Izmir metropolitan area, is used to evaluate the proposed master plan and the use of geothermal energy (heating only) for the district. If the land use design were revised for maximum energy efficiency (a 30% gain obtained), mainly by increasing the density around the geothermal wells and proposing more mixed-use zones, the result would be a 17% distortion (infidelity to the main planning principles) relative to the original plan. The proposed method can serve planners as a simulation medium, whose calculations can be made with ready GIS tools, to evaluate efficiency levels for different plan proposals, showing how much energy saving causes how much deviation from the other planning ideals. Lower energy use may be achievable with different land use proposals under various policy trials.
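
The fidelity idea can be illustrated with a simple weighted-deviation score. The criteria names, weights, and values below are hypothetical and only show the kind of calculation such a GIS-embedded appraisal would perform; they are not the study's actual criteria.

```python
# Hypothetical sketch of a distortion (infidelity) score: a weighted average
# of the relative deviation of each planning criterion from its value in the
# original plan. Criteria, weights, and numbers are illustrative only.
def distortion(original, revised, weights):
    total_weight = sum(weights.values())
    dev = 0.0
    for criterion, w in weights.items():
        base = original[criterion]
        dev += w * abs(revised[criterion] - base) / abs(base)
    return dev / total_weight  # 0 means fully faithful to the original plan

original = {"density": 120.0, "mixed_use_share": 0.20, "green_space": 0.30}
revised  = {"density": 160.0, "mixed_use_share": 0.30, "green_space": 0.27}
weights  = {"density": 2.0, "mixed_use_share": 1.0, "green_space": 1.0}

print(f"distortion = {distortion(original, revised, weights):.0%}")
```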

Meteorological Data Study and Forecasting Using Particle Swarm Optimization Algorithm

Weather systems use enormously complex combinations of numerical tools for study and forecasting. Unfortunately, due to phenomena in the world climate, such as the greenhouse effect, classical models may become insufficient, mostly because they lack adaptation. The weather forecasting problem is therefore well suited to heuristic approaches, such as Evolutionary Algorithms. Experimentation with heuristic methods like the Particle Swarm Optimization (PSO) algorithm can lead to new insights or promising models that can be fine-tuned with more focused techniques. This paper describes a PSO approach for the analysis and prediction of data and provides experimental results of the method on real-world meteorological time series.
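
A minimal PSO sketch is shown below, fitting the coefficients of a simple autoregressive predictor to a synthetic time series by minimizing the mean squared forecast error. The model form, the synthetic data, and the PSO settings are illustrative assumptions, not the paper's configuration.

```python
# Minimal particle swarm optimization (PSO) sketch: fit x_t ≈ a*x_{t-1} +
# b*x_{t-2} + c to a (synthetic) meteorological-style series by minimizing
# the mean squared forecast error. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)

def mse(params):
    a, b, c = params
    pred = a * series[1:-1] + b * series[:-2] + c
    return np.mean((series[2:] - pred) ** 2)

n_particles, n_dims, iters = 30, 3, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
pos = rng.uniform(-1, 1, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("fitted (a, b, c):", gbest.round(3), "MSE:", round(mse(gbest), 5))
```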

A Comparative Investigation and Calculation of Thermo-Neutronic Parameters of Two Generation II and III Nuclear Reactors with the Same Power

In third-generation nuclear reactors, the core dimensions, the type of coolant, and the fuel enrichment differ significantly from those of the second generation. This article therefore presents a comparative investigation of two reactors of the same power, one of the second and one of the third generation, in which the neutronic parameters of both reactors, such as K∞ and Keff and their components, and the thermal-hydraulic parameters, such as power density, specific power, volumetric heat rate, power released per unit fuel volume, and the volume and mass of cladding and fuel (comprising fissile and fertile material), are calculated and compared. This comparison reveals the efficiency gains and design improvements of third-generation reactors relative to second-generation reactors of the same power. The calculations use data such as the core dimensions, lattice pitch, fuel material, enrichment, and coolant type. The neutronic parameters are computed with the SIXFAC code together with the relevant formulas, while the thermal-hydraulic and other parameters are obtained by analytical methods and related formulas.
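
Several of the cited thermal-hydraulic quantities follow directly from their definitions. The sketch below illustrates that arithmetic with placeholder core data; the values are not either reactor's actual data, and the SIXFAC neutronic calculation is not reproduced.

```python
# Illustrative calculation of basic thermal-hydraulic parameters from their
# definitions. All inputs are placeholders, not the reactors' actual data.
import math

thermal_power_mw = 3000.0    # assumed core thermal power [MWth]
core_height_m = 3.7          # assumed active core height [m]
core_diameter_m = 3.4        # assumed equivalent core diameter [m]
fuel_mass_kg = 90_000.0      # assumed total fuel mass [kg]
fuel_volume_m3 = 9.0         # assumed total fuel volume [m^3]

core_volume_m3 = math.pi * (core_diameter_m / 2) ** 2 * core_height_m

power_density = thermal_power_mw / core_volume_m3          # MW per m^3 of core
specific_power = thermal_power_mw * 1e3 / fuel_mass_kg     # kW per kg of fuel
power_per_fuel_volume = thermal_power_mw / fuel_volume_m3  # MW per m^3 of fuel

print(f"core volume           = {core_volume_m3:.1f} m^3")
print(f"power density         = {power_density:.1f} MW/m^3")
print(f"specific power        = {specific_power:.1f} kW/kg")
print(f"power per fuel volume = {power_per_fuel_volume:.1f} MW/m^3")
```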

Sustainable and Ecological Designs of the Built Environment

This paper reviews designs of the built environment from a sustainability perspective, emphasizing their importance in achieving ecological and sustainable economic objectives. The built environment has traditionally resulted in loss of biodiversity, extinction of some species, climate change, excessive water use, land degradation, space depletion, waste accumulation, energy consumption and environmental pollution. Materials such as plastics, metals, bricks, concrete, cement, natural aggregates, glass and plaster have wreaked havoc on the earth's resources, since they have high levels of embodied energy and are therefore not sustainable. Additional resources are consumed during the use and disposal phases. Proposed designs for sustainability solutions include ecological sanitation and eco-efficiency systems that ensure social, economic, environmental and technical sustainability. Renewable materials and energy systems, passive cooling and heating systems, and material and energy reduction, reuse and recycling can improve the sector. These ideas are intended to inform the field of ecological design of the built environment.

Packing Theory for Natural and Crushed Aggregate to Obtain the Best Mix of Aggregate: Research and Development

Concrete performance is strongly affected by the degree of particle packing, since it determines the distribution of the cementitious component and the interaction of mineral particles. By using packing theory, designers can select optimal aggregate materials for preparing concrete with a low cement content, which is beneficial from the point of view of cost. Optimum particle packing implies minimizing porosity and thereby reducing the amount of cement paste needed to fill the voids between the aggregate particles, while also taking the rheology of the concrete into consideration. Superplasticizers are required to reach good fluidity. The results from pilot tests at Luleå University of Technology (LTU) show various forms of the proposed theoretical models, and the empirical approach taken in the study seems to provide a safer basis for developing new, improved packing models.
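
One widely used particle-packing target is the modified Andreassen (Dinger-Funk) distribution. The sketch below shows how a candidate aggregate grading could be compared against that target curve; it is a generic illustration, not the LTU models, and the sieve sizes, measured values, and distribution modulus q are assumed.

```python
# Illustrative sketch: compare an aggregate grading with the modified
# Andreassen (Dinger-Funk) target packing curve
#   CPFT(d) = 100 * (d^q - d_min^q) / (d_max^q - d_min^q)
# Sieve sizes, measured passing values, and q are assumed for illustration.
import numpy as np

def modified_andreassen(d, d_min, d_max, q):
    """Target cumulative percent finer than (CPFT) for particle size d [mm]."""
    return 100.0 * (d**q - d_min**q) / (d_max**q - d_min**q)

sieves_mm = np.array([0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
measured_cpft = np.array([4.0, 9.0, 17.0, 28.0, 42.0, 58.0, 77.0, 100.0])  # assumed

target_cpft = modified_andreassen(sieves_mm, d_min=0.063, d_max=16.0, q=0.28)
rmse = np.sqrt(np.mean((measured_cpft - target_cpft) ** 2))
print("target CPFT:", np.round(target_cpft, 1))
print(f"deviation from target (RMSE): {rmse:.1f} percentage points")
```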

Digital Predistorter with Pipelined Architecture Using CORDIC Processors

In a wireless communication system, a predistorter (PD) is often employed to alleviate the nonlinear distortion caused by operating a power amplifier near saturation, thereby improving system performance and reducing interference to adjacent channels. This paper presents a new adaptive polynomial digital predistorter (DPD). The proposed DPD uses Coordinate Rotation Digital Computer (CORDIC) processors and performs the predistortion process with a pipelined architecture. It is simpler and faster than a conventional adaptive polynomial DPD. The performance of the proposed DPD is verified by MATLAB simulation.
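
CORDIC computes rotations (and hence magnitude and phase) using only shift-and-add style iterations. A minimal, generic rotation-mode sketch follows; it illustrates the CORDIC principle only and is unrelated to the paper's specific pipelined hardware design.

```python
# Minimal CORDIC sketch (rotation mode): rotate a vector by a target angle
# using iterations that, in hardware, reduce to shifts and adds.
import math

def cordic_rotate(x, y, angle, iterations=16):
    # Precomputed elementary angles atan(2^-i) and the overall CORDIC gain.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * k, y * k  # scale by the gain to recover the true rotation

# Example: rotate (1, 0) by 30 degrees; expect roughly (0.866, 0.5).
print(cordic_rotate(1.0, 0.0, math.radians(30.0)))
```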

Cost Optimization of Concentric Braced Steel Building Structures

Seismic design may require a non-conventional concept, because the stiffness and layout of the structure have a great effect on the overall structural behaviour, on the seismic load intensity, and on the internal force distribution. To find an economical and optimal structural configuration, the key issue is the optimal design of the lateral load resisting system. This paper focuses on the optimal design of regular, concentric braced frame (CBF) multi-storey steel building structures. The optimal configurations are determined by a numerical method using a genetic algorithm approach developed by the authors. The aim is to find structural configurations with minimum structural cost. The design constraints and the objective function are assigned in accordance with the Eurocode 3 and Eurocode 8 guidelines. Results are presented for various building geometries, different seismic intensities, and different levels of energy dissipation.
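
A genetic algorithm of this kind encodes candidate bracing configurations as chromosomes and evolves them toward minimum cost subject to code checks. The skeleton below is a generic illustration with a placeholder cost/penalty function; it is not the authors' implementation and does not perform Eurocode 3/8 verifications.

```python
# Generic genetic-algorithm skeleton for a discrete configuration search.
# Encoding (one brace-type index per storey) and the cost/penalty function
# are placeholders, not the authors' Eurocode-based checks.
import random

N_STOREYS, N_BRACE_OPTIONS = 8, 5
POP, GENERATIONS, P_MUT = 40, 100, 0.05
random.seed(1)

def cost(layout):
    # Placeholder: heavier sections cost more; a fake stiffness requirement
    # is penalized when the total stiffness proxy is too low.
    material = sum(1.0 + 0.8 * g for g in layout)
    stiffness = sum(g + 1 for g in layout)
    penalty = 50.0 if stiffness < N_STOREYS * 2 else 0.0
    return material + penalty

def tournament(pop):
    a, b = random.sample(pop, 2)
    return min(a, b, key=cost)

pop = [[random.randrange(N_BRACE_OPTIONS) for _ in range(N_STOREYS)]
       for _ in range(POP)]
for _ in range(GENERATIONS):
    new_pop = []
    while len(new_pop) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, N_STOREYS)           # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [random.randrange(N_BRACE_OPTIONS) if random.random() < P_MUT
                 else g for g in child]                # mutation
        new_pop.append(child)
    pop = new_pop

best = min(pop, key=cost)
print("best layout:", best, "cost:", round(cost(best), 2))
```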

Solution of Interval-valued Manufacturing Inventory Models With Shortages

A manufacturing inventory model with shortages is considered here, in which the carrying cost, shortage cost, setup cost and demand quantity are imprecise numbers, namely interval numbers, instead of real numbers. First, a brief survey of the existing work on comparing and ranking any two interval numbers on the real line is presented. A common algorithm for the optimum production quantity (economic lot size) per cycle of a single product, so as to minimize the total average cost, is then developed that works well for the interval-number optimization under consideration. Finally, the designed algorithm is illustrated with a numerical example.
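
The interval-number machinery behind such a model can be illustrated with a tiny sketch: interval arithmetic plus one common ranking rule (compare midpoints, break ties by width), applied to an interval-valued average-cost expression. The ranking rule, the cost expression, and all numbers are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of interval arithmetic with a simple ranking rule, applied
# to choosing a lot size under interval-valued costs. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    @property
    def mid(self): return (self.lo + self.hi) / 2
    @property
    def width(self): return self.hi - self.lo
    def __add__(self, other): return Interval(self.lo + other.lo, self.hi + other.hi)
    def scale(self, k):
        """Multiply by a non-negative scalar."""
        return Interval(self.lo * k, self.hi * k)

def ranks_below(a, b):
    """One common rule: smaller midpoint wins; smaller width breaks ties."""
    return (a.mid, a.width) < (b.mid, b.width)

# Interval-valued setup cost K, holding cost h, and demand D (values assumed).
K, h, D = Interval(90, 110), Interval(1.8, 2.2), Interval(950, 1050)

def avg_cost(Q):
    setup = Interval(K.lo * D.lo / Q, K.hi * D.hi / Q)  # K*D/Q as an interval
    holding = h.scale(Q / 2)                            # h*Q/2
    return setup + holding

candidates = [200, 250, 300, 350, 400]
best = min(candidates, key=lambda Q: (avg_cost(Q).mid, avg_cost(Q).width))
print("avg_cost(200) ranked below avg_cost(400)?", ranks_below(avg_cost(200), avg_cost(400)))
print("preferred lot size:", best, "cost interval:", avg_cost(best))
```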

Methane and Other Hydrocarbon Gas Emissions Resulting from Flaring in Kuwait Oilfields

Air pollution is a major environmental health problem, affecting developed and developing countries around the world. Increasing amounts of potentially harmful gases and particulate matter are being emitted into the atmosphere on a global scale, resulting in damage to human health and the environment. Petroleum-related air pollutants can have a wide variety of adverse environmental impacts. In the crude oil production sector, there is a strong need for thorough knowledge of the gaseous emissions resulting from the daily flaring (combustion) of associated gas of known composition under several operating conditions. This can help in controlling gaseous emissions from flares and thus in protecting their immediate and distant surroundings against environmental degradation. The impacts of methane and non-methane hydrocarbon emissions from flaring activities at oil production facilities in the Kuwait oilfields have been assessed through a screening study using records of flaring operations taken at the gas and oil production sites, and by analysing available meteorological and air quality data measured at stations located near anthropogenic sources. In the present study, the Industrial Source Complex Short Term (ISCST3) dispersion model is used to calculate the ground-level concentrations of methane and non-methane hydrocarbons emitted due to flaring across the Kuwait oilfields. The simulation of real hourly air quality in and around oil production facilities in the State of Kuwait for the year 2006, using the respective source emission data in the ISCST3 software, indicates that the levels of non-methane hydrocarbons from the flaring activities exceed the allowable ambient air standard set by the Kuwait EPA. There is therefore a strong need to address this acute problem and to minimize the impact of methane and non-methane hydrocarbons released from flaring activities on the urban areas of Kuwait.
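
ISCST3 belongs to the family of Gaussian-plume dispersion models. The sketch below implements a generic ground-level, plume-centreline Gaussian plume concentration with rough power-law dispersion coefficients; it only illustrates the type of calculation involved and does not reproduce ISCST3 or the study's source data.

```python
# Generic Gaussian-plume sketch of the ground-level, centreline concentration
# downwind of an elevated point source:
#   C(x) = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 * sigma_z^2))
# The dispersion-coefficient fits below are rough, assumed values.
import math

def ground_level_concentration(q_g_s, u_m_s, h_m, x_m):
    sigma_y = 0.08 * x_m * (1 + 0.0001 * x_m) ** -0.5  # assumed neutral-class fit
    sigma_z = 0.06 * x_m * (1 + 0.0015 * x_m) ** -0.5
    return (q_g_s / (math.pi * u_m_s * sigma_y * sigma_z)
            * math.exp(-h_m ** 2 / (2.0 * sigma_z ** 2)))

# Hypothetical flare: 50 g/s of non-methane hydrocarbons, 4 m/s wind,
# 40 m effective release height, receptors 0.5-5 km downwind.
for x in (500, 1000, 2000, 5000):
    c = ground_level_concentration(50.0, 4.0, 40.0, x) * 1e6  # g/m^3 -> ug/m^3
    print(f"x = {x:5d} m : C ~ {c:8.1f} ug/m^3")
```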

Assessing Semantic Consistency of Business Process Models

Business process modeling has become an accepted means for designing and describing business operations. Consistency of business process models, i.e., the absence of modeling faults, is therefore of utmost importance to organizations. This paper presents a concept, and a subsequent implementation, for detecting faults in business process models and for computing a measure of their consistency. It incorporates not only syntactic consistency but also semantic consistency, i.e., consistency regarding the meaning of model elements from a business perspective.
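
As a simple illustration of what such a measure could look like (it is not the paper's actual metric), a consistency degree can be expressed as the share of model elements not involved in any detected fault, as in the hypothetical sketch below.

```python
# Hypothetical sketch of a consistency measure: the fraction of model elements
# not involved in any detected (syntactic or semantic) fault.
def consistency_degree(model_elements, faults):
    """model_elements: set of element ids; faults: list of sets of element ids."""
    faulty = set().union(*faults) if faults else set()
    faulty &= model_elements
    return 1.0 - len(faulty) / len(model_elements)

elements = {"start", "check_order", "approve", "xor_split", "ship", "end"}
faults = [
    {"xor_split"},        # e.g. a syntactic fault: split without matching join
    {"approve", "ship"},  # e.g. a semantic fault: contradictory element labels
]
print(f"consistency degree = {consistency_degree(elements, faults):.2f}")
```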

Dynamic Models versus Frailty Models for Recurrent Event Data

Recurrent event data are a special type of multivariate survival data. Dynamic models and frailty models are two approaches for dealing with this kind of data. A comparison between these two models is studied using the empirical standard deviation of the standardized martingale residual processes as a way of assessing the fit of the two models, based on the Aalen additive regression model. We found that both approaches take heterogeneity into account and produce residual standard deviations close to each other, both in the simulation study and in the real data set.
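
The comparison statistic can be illustrated generically: given standardized martingale residual processes evaluated on a common time grid, their empirical standard deviation is computed pointwise across subjects. In the sketch below the residual matrix is simulated only to make the example runnable; in practice it would come from the fitted model (e.g. an Aalen additive regression).

```python
# Generic sketch: pointwise empirical standard deviation of standardized
# martingale residual processes across subjects. The residuals are simulated
# placeholders; a well-fitting model should give values near 1.
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_times = 200, 50
residuals = rng.standard_normal((n_subjects, n_times)).cumsum(axis=1) / np.sqrt(
    np.arange(1, n_times + 1))

empirical_sd = residuals.std(axis=0, ddof=1)  # one value per time point
print("empirical SD at the first 5 time points:", np.round(empirical_sd[:5], 3))
print("largest deviation from 1:", np.round(np.abs(empirical_sd - 1).max(), 3))
```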

Teaching Approach and Self-Confidence Effect Model Consistency between Taiwan and Singapore: A Multi-Group HLM Analysis

This study was conducted to compare models for two countries, Taiwan and Singapore, using the TIMSS database. We used multi-group hierarchical linear modeling (HLM) techniques to compare the effects in the two country models, and we tested our hypotheses on 4,046 Taiwanese students and 4,599 Singaporean students from the 2007 administration at two levels: the class level and the student (individual) level. Design quality is a class-level variable; the student-level variables are achievement and self-confidence. The results challenge the widely held view that retention has a positive impact on self-confidence. Suggestions for future research are discussed.

Brain MRI Segmentation and Lesion Detection by the EM Algorithm

In Multiple Sclerosis, pathological changes in the brain result in deviations in signal intensity on Magnetic Resonance Images (MRI). Quantitative analysis of these changes and their correlation with clinical findings provides important information for diagnosis; this constitutes the objective of our work, for which a new approach is developed. After enhancing the image contrast and extracting the brain with a mathematical morphology algorithm, we proceed to brain segmentation. Our approach is based on building a statistical model from the data itself for normal brain MRI, including clustering of tissue types. We then detect signal abnormalities (MS lesions) as a rejection class containing voxels that are not explained by the built model. We validate the method on MR images of Multiple Sclerosis patients by comparing its results with those of human expert segmentation.
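
The general scheme, fitting a mixture model over normal-tissue intensities with EM and flagging poorly explained voxels as a rejection class, can be sketched as follows. The number of tissue classes, the simulated intensities, and the rejection threshold are assumptions for illustration, not the paper's settings.

```python
# Sketch of the general scheme: fit a Gaussian mixture (via EM) to normal-
# tissue intensities, then flag voxels the model explains poorly as a
# rejection class (candidate lesions). Data and thresholds are placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder 1-D intensities: three "normal tissue" classes plus bright outliers.
normal = np.concatenate([rng.normal(m, 5.0, 5000) for m in (40.0, 90.0, 140.0)])
outliers = rng.normal(200.0, 5.0, 50)
all_voxels = np.concatenate([normal, outliers]).reshape(-1, 1)

# EM fit of the normal-tissue model (3 Gaussian classes assumed).
gmm = GaussianMixture(n_components=3, random_state=0).fit(normal.reshape(-1, 1))

# Rejection class: voxels whose log-likelihood under the model falls below a
# threshold taken from the normal data (1st percentile, an assumption).
threshold = np.percentile(gmm.score_samples(normal.reshape(-1, 1)), 1)
rejected = gmm.score_samples(all_voxels) < threshold

print("tissue means:", np.sort(gmm.means_.ravel()).round(1))
print("voxels flagged as candidate lesions:", int(rejected.sum()))
```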

Numerical Study of Cyclic Behavior of Shallow Foundations on Sand Reinforced with Geogrid and Grid-Anchor

When the foundations of structures are subjected to cyclic loading with amplitudes below their permissible load, there is often concern about the amount of uniform and non-uniform settlement of such structures. Storage tank foundations subjected to numerous filling and discharging cycles, and railway ballast courses under repeated traffic loads, are examples of such conditions. This paper deals with the effects of using a new generation of reinforcement, the Grid-Anchor, to reduce the permanent settlement of these foundations under different proportions of the ultimate load. Other factors, such as the type and number of reinforcements as well as the number of loading cycles, are studied numerically. Numerical models were built using the Plaxis3D Tunnel finite element code. The results show that using grid-anchors, and increasing the number of their layers in proportion to the applied cyclic load, reduces the permanent settlement by up to 42% relative to the unreinforced condition, depending on the number of reinforcement layers, the percentage of applied load, and the number of loading cycles; the number of cycles needed to reach a constant value of dimensionless settlement also decreases by up to 20% relative to the unreinforced condition.

Joint Microstatistic Multiuser Detection and Cancellation of Nonlinear Distortion Effects for the Uplink of MC-CDMA Systems Using Golay Codes

The study in this paper underlines the importance of the correct joint selection of the spreading codes for the uplink of multicarrier code division multiple access (MC-CDMA) at the transmitter side and of the detector at the receiver side in the presence of nonlinear distortion due to a high power amplifier (HPA). The bit error rate (BER) of the system is compared by means of simulations for different spreading sequences (Walsh codes, Gold codes, orthogonal Gold codes, Golay codes and Zadoff-Chu codes) and different kinds of receivers (the minimum mean-square error receiver, MMSE-MUD, and the microstatistic multi-user receiver, MSF-MUD). Finally, the results of the analysis show that MSF-MUD in combination with Golay codes significantly outperforms the other tested spreading codes and receivers for all commonly used HPA models.
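
The Golay codes mentioned above are complementary sequence pairs; the standard recursive concatenation construction and their defining autocorrelation property are sketched below as a generic illustration, not as the paper's MC-CDMA code assignment.

```python
# Sketch: recursive construction of a binary Golay complementary pair and a
# check of its defining property (the aperiodic autocorrelations of the two
# sequences sum to zero at every non-zero lag).
import numpy as np

def golay_pair(m):
    """Return a Golay complementary pair of length 2**m (entries +/-1)."""
    a, b = np.array([1]), np.array([1])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def aperiodic_autocorr(x):
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) for k in range(n)])

a, b = golay_pair(5)  # length-32 sequences
total = aperiodic_autocorr(a) + aperiodic_autocorr(b)
print("length:", len(a))
print("lag-0 sum:", total[0], "| max magnitude at non-zero lags:", np.abs(total[1:]).max())
```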

Gasoline and Diesel Production via Fischer-Tropsch Synthesis over Cobalt Based Catalyst

The performance of a cobalt-doped, sol-gel derived silica (Co/SiO2) catalyst for Fischer-Tropsch synthesis (FTS) in a slurry-phase reactor was studied using paraffin wax as the initial liquid medium. The reactive gas mixture, hydrogen (H2) and carbon monoxide (CO) in a molar ratio of 2:1, was fed at 50 ml/min. Brunauer-Emmett-Teller (BET) surface area measurements and X-ray diffraction (XRD) were employed to characterize the specific surface area and the crystallinity of the catalyst, respectively. The reduction behavior of the Co/SiO2 catalyst was investigated using the Temperature Programmed Reduction (TPR) method. Operating temperatures were varied from 493 to 533 K to find the optimum conditions for maximizing the production of liquid fuels, gasoline and diesel.

On Formalizing Predefined OCL Properties

The ability of UML to handle the modeling of complex industrial software applications has increased its popularity to the extent of becoming the de facto language for design. Although its rich graphical notation, naturally oriented towards the object-oriented concept, facilitates understandability, it hardly succeeds in capturing all domain-specific aspects in a satisfactory way. OCL, as the standard language for expressing additional constraints on UML models, has great potential to help improve expressiveness. Unfortunately, it suffers from a weak formalism due to its poor semantics, which creates many obstacles to building tool support and thus to its application in industry. For this reason, much research has been devoted to formalizing OCL expressions using more rigorous approaches. Our contribution joins this work in a complementary way, since it focuses specifically on the OCL predefined properties, which constitute an important part of the construction of OCL expressions. Using formal methods, we succeed in expressing the OCL predefined functions rigorously.
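
As an illustration of what such a formalization can look like, a schematic set-theoretic reading of a few predefined collection properties is given below. This is only a sketch (it ignores OCL's undefined/invalid values, which a full formalization must handle) and is not the paper's exact definitions.

```latex
% Schematic set-theoretic readings of some OCL predefined properties,
% with [[c]] the collection of elements denoted by c (undefined values ignored).
\begin{align*}
  c\texttt{->forAll}(e \mid P(e))  &\;\equiv\; \forall e \in [\![c]\!].\; P(e) \\
  c\texttt{->exists}(e \mid P(e))  &\;\equiv\; \exists e \in [\![c]\!].\; P(e) \\
  c\texttt{->includes}(x)          &\;\equiv\; x \in [\![c]\!] \\
  c\texttt{->size}()               &\;=\; \bigl|[\![c]\!]\bigr|
\end{align*}
```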

Bayesian Network Model for Students- Laboratory Work Performance Assessment: An Empirical Investigation of the Optimal Construction Approach

There are three approaches to Bayesian network (BN) model construction: total expert-centred, total data-centred, and semi data-centred. These three approaches form the basis of the empirical investigation undertaken and reported in this paper. The objective is to determine which of these three approaches is optimal for constructing a BN-based model for the performance assessment of students' laboratory work in a virtual electronic laboratory environment. BN models were constructed using all three approaches, with respect to the focus domain, and compared using a set of optimality criteria. In addition, the impact of the size and source of the training data on the performance of the total data-centred and semi data-centred models was investigated. The results of the investigation provide additional insight for BN model constructors and contribute to the literature by providing supportive evidence for the conceptual feasibility and efficiency of structure and parameter learning from data. In addition, the results highlight other interesting themes.
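
The data-centred end of this spectrum can be illustrated generically: with a structure fixed (or learned), the conditional probability tables are estimated from data by maximum likelihood, i.e. frequency counting. The tiny discrete example below uses hypothetical variables and records and is not the paper's assessment model.

```python
# Generic sketch of the data-centred step of BN construction: with a fixed
# structure Skill -> Performance <- Effort, estimate the conditional
# probability table P(Performance | Skill, Effort) by maximum likelihood
# (relative frequencies). Variable names and records are hypothetical.
import pandas as pd

data = pd.DataFrame({
    "Skill":       ["high", "high", "low", "low", "high", "low", "high", "low"],
    "Effort":      ["high", "low",  "high", "low", "high", "high", "low", "low"],
    "Performance": ["good", "good", "good", "poor", "good", "poor", "poor", "poor"],
})

# Maximum-likelihood CPT: relative frequency of Performance within each
# (Skill, Effort) parent configuration.
cpt = (data.groupby(["Skill", "Effort"])["Performance"]
           .value_counts(normalize=True)
           .unstack(fill_value=0.0))
print(cpt)
```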