Exploring the Potential of Phase Change Memories as an Alternative to DRAM Technology

Scalability poses a severe threat to the existing DRAM technology. The capacitors used for storing and sensing charge in DRAM generally cannot be scaled beyond 42nm, because they must remain sufficiently large for reliable sensing and charge storage. This leaves DRAM scaling in jeopardy, as charge sensing and storage become extremely difficult. In this paper we provide an overview of the potential and possibilities of using Phase Change Memory (PCM) as an alternative to the existing DRAM technology. The main challenges encountered in using PCM are its limited endurance, high access latencies, and higher dynamic energy consumption than that of conventional DRAM. We then provide an overview of various methods that can be employed to overcome these drawbacks. Hybrid memories involving both PCM and DRAM can be used to achieve good tradeoffs between access latency and storage density. We conclude by presenting the results of these methods, which make PCM a potential replacement for the current DRAM technology.

Electronic Government in the GCC Countries

The study investigated the practices of organisations in Gulf Cooperation Council (GCC) countries with regard to G2C e-government maturity. It reveals that e-government G2C initiatives in the surveyed countries in particular, and arguably around the world in general, are progressing slowly because of the lack of a trusted and secure medium to authenticate the identities of online users. The authors conclude that national ID schemes will play a major role in helping governments reap the benefits of e-government if the three advanced technologies of smart cards, biometrics and public key infrastructure (PKI) are utilised to provide a reliable and trusted authentication medium for e-government services.

A Fast Replica Placement Methodology for Large-scale Distributed Computing Systems

Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, to certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit since they are driven by different goals, such as minimizing their communication costs, latency, etc. In this paper, we use game theoretical techniques, and in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game theory based mechanism is the study of what happens when independent agents act selfishly and how to control them to maximize overall performance. A bidding mechanism asks how one can design systems so that agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution time. The comparisons are recorded against some well known techniques such as greedy, branch and bound, game theoretical auctions and genetic algorithms.
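
As a rough illustration of the kind of bidding round such a mechanism could run, the following minimal sketch uses a sealed second-price auction in which each site agent bids its private benefit from hosting a replica; the valuation and payment rule here are assumptions for illustration, not the paper's actual mechanism.

```python
# Illustrative sketch only: a single sealed-bid round for placing one replica.
# The agent valuation and the second-price payment rule are assumptions; the
# paper's actual game-theoretical mechanism may differ.

def agent_bid(access_freq, dist_to_nearest_replica, storage_cost):
    """A selfish agent bids its private benefit from hosting the replica."""
    return access_freq * dist_to_nearest_replica - storage_cost

def run_auction(agents):
    """Second-price sealed bid: highest bidder wins, pays the second-highest bid."""
    bids = {name: agent_bid(*params) for name, params in agents.items()}
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    payment = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, payment

# Hypothetical sites: (access frequency, distance saved, storage cost)
agents = {"siteA": (120, 4.0, 50), "siteB": (300, 1.5, 80), "siteC": (90, 6.0, 40)}
print(run_auction(agents))
```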

Low Air Velocity Measurement Characteristics: Variation Due to Flow Regime

The paper presents the relations between air velocity values reproduced by a laser Doppler anemometer (LDA) and an ultrasonic anemometer (UA) and the values calculated from flow rate measurements using a gas meter whose calibration uncertainty is ±(0.15 – 0.30)%. The investigation was performed in a channel installed in the aerodynamic facility used as part of the national standard of air velocity. The relations defined in the research confirm the LDA and UA to be the most advantageous means of air velocity reproduction. The results affirm the ultrasonic anemometer to be a reliable and favourable instrument for measuring mean velocity or monitoring velocity stability in the range of 0.05 m/s – 10 (15) m/s in which the LDA is used. The main aim of this research is to investigate the regularities of low velocities, starting from 0.05 m/s, covering the regions of turbulent, laminar and transitional air flow. Theoretical and experimental results and a brief analysis of them are given in the paper. Relations between maximum and mean velocity for transitional air flow, which has a unique distribution, are presented. Transitional flow, whose characteristics are distinctive and different from those of laminar and turbulent flow, has not yet been analysed experimentally.

A Method to Predict Hemorrhage Disease of Grass Carp Trends

Hemorrhage Disease of Grass Carp (HDGC) is a commonly occurring illness in summer, and its extremely high death rate results in colossal losses to aquaculture. Because of the complex connections among the factors that influence aquaculture diseases, no fully adequate mathematical model currently exists to solve the problem. A BP neural network, which has excellent nonlinear mapping capability, was adopted to establish a mathematical model; environmental factors that can be easily detected, such as breeding density, water temperature, pH and light intensity, were set as the main analysis objects. 25 groups of experimental data were used for training and testing, and the accuracy of using the model to predict the trend of HDGC was above 80%. It is demonstrated that a BP neural network for predicting HDGC is particularly objective and practical, and thus the approach can be extended to other aquaculture diseases.
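
A minimal sketch of the kind of BP network described above, built with scikit-learn; the environmental feature values and labels are synthetic placeholders, not the paper's 25 experimental groups.

```python
# Sketch of a backpropagation-trained MLP predicting an HDGC trend from
# environmental factors. The data below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Columns: breeding density, water temperature (C), pH, light intensity (lux)
X = np.array([[12, 28.5, 7.2, 800],
              [20, 31.0, 6.8, 1200],
              [8,  24.0, 7.6, 600],
              [18, 30.2, 6.9, 1100]])
y = np.array([0, 1, 0, 1])  # 1 = rising HDGC trend, 0 = stable or declining

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[15, 29.5, 7.0, 950]]))
```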

A Renovated Cook's Distance Based on the Buckley-James Estimate in Censored Regression

There have been various methods created based on regression ideas to resolve the problem of data sets containing censored observations, e.g. the Buckley-James method, Miller's method, the Cox method, and the Koul-Susarla-Van Ryzin estimators. Even though comparison studies show that the Buckley-James method performs better than some other methods, it is still rarely used by researchers, mainly because of the limited diagnostic analysis developed for the Buckley-James method thus far. Therefore, a diagnostic tool for the Buckley-James method is proposed in this paper. It is called the renovated Cook's Distance (RD*_i) and has been developed based on Cook's idea. The renovated Cook's Distance (RD*_i) has advantages (depending on the analyst's demand) over (i) the change in the fitted value for a single case, DFIT*_i, as it measures the influence of case i on all n fitted values Ŷ* (not just the fitted value for case i, as DFIT*_i does), and (ii) the change in the estimates of the coefficients when the ith case is deleted, DBETA*_i, since DBETA*_i corresponds to the number of variables p, so it is usually easier to look at a diagnostic measure such as RD*_i in which information from the p variables is considered simultaneously. Finally, an example using the Stanford Heart Transplant data is provided to illustrate the proposed diagnostic tool.
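
For orientation, the classical (uncensored) Cook's distance that RD*_i renovates can be written as below; the paper's censored-data version replaces the ordinary fitted values with Buckley-James fitted values.

```latex
% Classical Cook's distance for ordinary least-squares regression, the
% quantity that RD*_i adapts to the Buckley-James setting.
\[
D_i \;=\; \frac{\sum_{j=1}^{n}\bigl(\hat{y}_j - \hat{y}_{j(i)}\bigr)^2}{p\, s^2},
\qquad
\hat{y}_{j(i)} = \text{fitted value for case } j \text{ with case } i \text{ deleted.}
\]
```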

A Combined Conventional and Differential Evolution Method for Model Order Reduction

In this paper a mixed method, combining an evolutionary and a conventional technique, is proposed for the reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM). In the conventional technique, the advantages of the Mihailov stability criterion and the Continued Fraction Expansion (CFE) technique are combined: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of continued fraction expansion. Then, retaining the numerator polynomial, the denominator polynomial is recalculated by an evolutionary technique. In the evolutionary method, the recently proposed Differential Evolution (DE) optimization technique is employed. The DE method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. The proposed method is illustrated through a numerical example and compared with a ROM whose numerator and denominator polynomials are both obtained by the conventional method, to show its superiority.
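
The evolutionary step can be sketched as follows, under stated assumptions: the reduced numerator is held fixed and Differential Evolution searches the reduced denominator coefficients minimizing the ISE of the unit step responses. The example transfer function and numerator values are illustrative, not taken from the paper.

```python
# DE search of the reduced denominator under a fixed numerator, minimizing
# the ISE between step responses. Original system and numerator are made up.
import numpy as np
from scipy import signal
from scipy.optimize import differential_evolution

t = np.linspace(0, 10, 500)
_, y_orig = signal.step(signal.TransferFunction([8, 6, 2], [2, 10, 80, 36, 9]), T=t)

num_r = [16.0, 4.0]  # assumed fixed reduced numerator (illustrative values)

def ise(den_r):
    _, y_red = signal.step(signal.TransferFunction(num_r, den_r), T=t)
    return np.trapz((y_orig - y_red) ** 2, t)

result = differential_evolution(ise, bounds=[(0.1, 50.0)] * 3, seed=0)
print("reduced denominator:", result.x, "ISE:", result.fun)
```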

Remarks on Some Properties of Decision Rules

This paper shows, by presenting a counterexample, that some properties of decision rules in the literature do not hold. We give necessary and sufficient conditions under which these properties are valid. These results will be helpful when one tries to choose the right decision rules in research on rough set theory.

Geochemistry of Tektites from Maoming of Guangdong Province, China

We measured the major and trace element contents and Rb-Sr isotopic compositions of 12 tektites from the Maoming area, Guangdong province (south China). All the samples studied are splash-form tektites which show pitted or grooved surfaces with schlieren structures on some surfaces. The trace element ratios Ba/Rb (avg. 4.33), Th/Sm (avg. 2.31), Sm/Sc (avg. 0.44), Th/Sc (avg. 1.01), La/Sc (avg. 2.86), Th/U (avg. 7.47) and Zr/Hf (avg. 46.01), as well as the rare earth element (REE) contents of the tektites in this study, are similar to those of the average upper continental crust. From the chemical composition, it is suggested that the tektites in this study are derived from a similar parental terrestrial sedimentary deposit which may be related to post-Archean upper crustal rocks. The tektites from the Maoming area have high positive εSr(0) values ranging from 176.9 to 190.5, which indicate that the parental material for these tektites has Sr isotopic compositions similar to old terrestrial sedimentary rocks and was not dominantly derived from recent young sediments (such as soil or loess). The Sr isotopic data obtained in the present study support the conclusion proposed by Blum et al. (1992) [1] that the depositional age of the sedimentary target materials is close to 170 Ma (Jurassic). Mixing calculations based on the model proposed by Ho and Chen (1996) [2] for various amounts and combinations of target rocks indicate that the best fit for the tektites from the Maoming area is a mixture of 40% shale, 30% greywacke and 30% quartzite.

Processing Web-Cam Images by a Neuro-Fuzzy Approach for Vehicular Traffic Monitoring

Traffic management in an urban area is highly facilitated by knowledge of the traffic conditions in every street or highway involved in the vehicular mobility system. The aim of the paper is to propose a neuro-fuzzy approach able to compute the main parameters of a traffic system, i.e., car density, velocity and flow, by using the images collected by the web-cams located at the crossroads of the traffic network. The performance of this approach encourages its application when the traffic system is far from saturation. A fuzzy model is also outlined to evaluate when it is suitable to use more accurate, even if more time consuming, algorithms for measuring traffic conditions near saturation.

Stabilization of Nonnecessarily Inversely Stable First-Order Adaptive Systems under Saturated Input

This paper presents an indirect adaptive stabilization scheme for first-order continuous-time systems under saturated input, where the saturation is described by a sigmoidal function. Singularities are avoided through a modification scheme for the estimated plant parameter vector so that its associated Sylvester matrix is guaranteed to be non-singular and the estimated plant model is therefore controllable. The modification mechanism involves the use of a hysteresis switching function. An alternative hybrid scheme, whose estimated parameters are updated at sampling instants, is also given to solve a similar adaptive stabilization problem. That scheme also uses hysteresis switching for the modification of the parameter estimates so as to ensure the controllability of the estimated plant model.
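
One common sigmoidal description of input saturation, given here only as an illustrative choice (the paper's exact sigmoid may differ), is:

```latex
% An illustrative smooth (sigmoidal) saturation of the control signal v(t):
\[
u(t) \;=\; \operatorname{sat}\bigl(v(t)\bigr) \;=\; u_{\max}\,
\tanh\!\Bigl(\tfrac{v(t)}{u_{\max}}\Bigr),
\qquad |u(t)| < u_{\max}, \qquad
\operatorname{sat}(v)\approx v \ \text{ for small } |v|.
\]
```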

Research on Strategy for Automated Scaleless-Map Compilation

As a tool for human spatial cognition and thinking, the map has long played an important role. Maps are perhaps as fundamental to society as language and the written word. Economic and social development requires an extensive and in-depth understanding of the living environment, from the global scale down to urban housing. This has brought unprecedented opportunities and challenges for traditional cartography. This paper first proposes the concept of the scaleless map and its basic characteristics, based on an analysis of existing multi-scale representation techniques. Some strategies are then presented for automated map compilation. Taking into account the demands of automated map compilation, it is proposed in detail that the software, the WJ workstation, must have four technical features: generalization operators, symbol primitives, dynamic annotation and mapping process templates. This paper provides a more systematic new idea and solution to improve the intelligence and automation of scaleless cartography.

On Analysis of the Boundedness Property for ECATNets by Using Rewriting Logic

To analyze the behavior of Petri nets, the accessibility graph and Model Checking are widely used. However, if the analyzed Petri net is unbounded then the accessibility graph becomes infinite and Model Checking cannot be used even for small Petri nets. ECATNets [2] are a category of algebraic Petri nets. The main feature of ECATNets is their sound and complete semantics based on rewriting logic [8] and its language Maude [9]. ECATNets analysis may be done by using the techniques of accessibility analysis and Model Checking defined in Maude. But these two techniques supported by Maude do not work with infinite-state systems either. As a category of Petri nets, ECATNets can be unbounded and thus infinite-state systems. In order to know whether we can apply the accessibility analysis and Model Checking of Maude to an ECATNet, we propose in this paper an algorithm for detecting whether the ECATNet is bounded or not. Moreover, we propose a rewriting logic based tool implementing this algorithm. We show that the development of this tool using the Maude system is facilitated by the reflective nature of rewriting logic. Indeed, the self-interpretation of this logic allows us both to model an ECATNet and to act on it.
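
The proposed tool works on ECATNets in Maude; as a language-neutral illustration of the underlying idea, the following Python sketch applies a Karp-Miller style test to an ordinary place/transition net, declaring the net unbounded as soon as a reachable marking strictly covers one of its ancestors.

```python
# Not the paper's Maude/ECATNet tool: a plain sketch for an ordinary
# place/transition net. Unboundedness is reported when a reachable marking
# strictly covers an ancestor, since the difference can then be pumped forever.

def fire(marking, pre, post):
    """Return the marking reached by firing (pre, post), or None if not enabled."""
    if any(marking.get(p, 0) < n for p, n in pre.items()):
        return None
    m = dict(marking)
    for p, n in pre.items():
        m[p] = m.get(p, 0) - n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def is_bounded(m0, transitions):
    """transitions: list of (pre, post) dicts mapping place -> arc weight."""
    stack = [(m0, [])]                      # (marking, ancestors on this path)
    seen = set()
    while stack:
        m, ancestors = stack.pop()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        for pre, post in transitions:
            m2 = fire(m, pre, post)
            if m2 is None:
                continue
            for anc in ancestors + [m]:
                covers = all(m2[p] >= anc.get(p, 0) for p in m2)
                strictly = any(m2[p] > anc.get(p, 0) for p in m2)
                if covers and strictly:     # pumpable growth detected
                    return False
            stack.append((m2, ancestors + [m]))
    return True

# Tiny example: the transition keeps adding tokens to p2, so the net is unbounded.
print(is_bounded({"p1": 1, "p2": 0},
                 [({"p1": 1}, {"p1": 1, "p2": 1})]))
```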

Development of Gas Chromatography Model: Propylene Concentration Using Neural Network

Gas chromatography (GC) is the most widely used technique in analytical chemistry. However, GC has a high initial cost and requires frequent maintenance. This paper examines the feasibility and potential of using a neural network model as an alternative whenever GC is unavailable. It can also be part of system verification of the performance of GC for preventive maintenance activities. The paper shows the performance of a Multi-Layer Perceptron (MLP) with a backpropagation structure. Results demonstrate that the neural network model, when trained using this structure, provides an adequate result and is suitable for this purpose.
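
An illustrative soft-sensor sketch in the spirit of the abstract, using scikit-learn's MLP regressor trained with backpropagation; the process variables and data are placeholders, not the paper's GC dataset.

```python
# Placeholder soft-sensor: an MLP estimating propylene concentration from
# hypothetical process variables instead of a GC measurement.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform([40, 5, 1.0], [90, 25, 3.0], size=(200, 3))  # temp, pressure, feed ratio
y = 0.4 * X[:, 0] - 1.2 * X[:, 1] + 8.0 * X[:, 2] + rng.normal(0, 0.5, 200)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])
print("test R^2:", model.score(X[150:], y[150:]))
```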

An Enhanced Key Management Scheme Based on Key Infection in Wireless Sensor Networks

We propose an enhanced key management scheme based on Key Infection, which is a lightweight scheme for tiny sensors. The basic scheme, Key Infection, is perfectly secure against node capture and eavesdropping if the initial communications after node deployment are secure. If, however, an attacker can eavesdrop on the initial communications, they can obtain the session key. We use the common neighbors of each node to generate the session key. Each node has its own secret key and shares it with its neighbor nodes. Then each node can establish the session key using the common neighbors' secret keys and a random number. Our scheme needs only a few communications even though it uses neighbor nodes' information. Without losing the lightness of the basic scheme, it improves the resistance against eavesdropping on the initial communications by more than 30%.
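
A minimal sketch of the session-key idea described above (not the exact protocol): both nodes hash the secret keys obtained through their common neighbors together with a shared random number.

```python
# Illustrative key derivation: an eavesdropper on the initial exchange alone
# cannot recover the session key without the common neighbors' secret keys.
import hashlib, os

def derive_session_key(common_neighbor_keys, nonce):
    """common_neighbor_keys: secret keys shared via common neighbors."""
    h = hashlib.sha256()
    for k in sorted(common_neighbor_keys):   # sort so both nodes hash in the same order
        h.update(k)
    h.update(nonce)
    return h.digest()

# Hypothetical keys that nodes A and B both obtained through neighbors C and D
key_via_C, key_via_D = os.urandom(16), os.urandom(16)
nonce = os.urandom(16)                       # random number exchanged between A and B
k_A = derive_session_key([key_via_C, key_via_D], nonce)
k_B = derive_session_key([key_via_D, key_via_C], nonce)
assert k_A == k_B
print(k_A.hex())
```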

Modes of Collapse of Compress–Expand Member under Axial Loading

In this paper, a study on the modes of collapse of compress-expand members is presented. A compress-expand member is a compact assembly of multiple combined cylinders proposed as an energy absorber. Previous studies on the compress-expand member have clarified its energy absorption efficiency, proposed an approximate equation to describe its deformation characteristics and also highlighted the improvement that it brings. However, for the member to be practical, the actual range of geometrical dimensions over which it maintains its applicability must be investigated. In this study, using virtual materials that comply with the bilinear hardening law, Finite Element Method (FEM) analyses of the collapse modes of the compress-expand member have been conducted. Deformation maps that plot the member's collapse modes with respect to its geometric and material parameters are then presented in order to determine the dimensional range of each collapse mode.

Optimal Parameters of Double Moving Average Control Chart

The objective of this paper is to present explicit analytical formulas for evaluating important characteristics of the Double Moving Average (DMA) control chart for the Poisson distribution. The most popular characteristics of a control chart are the Average Run Length (ARL0), the mean number of observations taken before a system is signaled to be out-of-control when it is actually still in-control, and the Average Delay time (ARL1), the mean delay before a true alarm. An important property required of ARL0 is that it should be sufficiently large when the process is in-control, to reduce the number of false alarms. On the other hand, if the process is actually out-of-control, then ARL1 should be as small as possible. In particular, the explicit analytical formulas for evaluating ARL0 and ARL1 make it possible to obtain a set of optimal parameters, which depend on the width of the moving average (w) and the width of the control limit (H), for designing a DMA chart with minimum ARL1.
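
For reference, a common form of the DMA statistic and its steady-state control limits for Poisson counts with in-control mean λ (valid for t ≥ 2w - 1) is given below; the paper's exact parameterisation of the limit width H may differ.

```latex
% DMA statistic built from observations X_t, and its steady-state variance
% and control limits for Poisson counts with in-control mean \lambda.
\[
MA_t = \frac{1}{w}\sum_{j=0}^{w-1} X_{t-j}, \qquad
DMA_t = \frac{1}{w}\sum_{j=0}^{w-1} MA_{t-j}, \qquad
\mathrm{Var}(DMA_t) = \frac{\lambda\,(2w^2+1)}{3\,w^3},
\]
\[
UCL,\ LCL \;=\; \lambda \;\pm\; H\,\sqrt{\frac{\lambda\,(2w^2+1)}{3\,w^3}} .
\]
```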

Optimization for Subcritical Water Extraction of Phenolic Compounds from Rambutan Peels

Rambutan is a tropical fruit whose peel possesses antioxidant properties. This work was conducted to optimize the extraction conditions of phenolic compounds from rambutan peel. Response surface methodology (RSM) was adopted to optimize subcritical water extraction (SWE) with respect to temperature, extraction time and percentage of solvent mixture. The results demonstrated that the optimum conditions for SWE were as follows: temperature 160°C, extraction time 20 min and a concentration of 50% ethanol. A comparison of the phenolic compounds from rambutan peel obtained by maceration (6 h), Soxhlet extraction (4 h) and SWE (20 min) indicated that the total phenolic content (using the Folin-Ciocalteu phenol reagent) was 26.42, 70.29 and 172.47 mg of tannic acid equivalent (TAE) per g of dry rambutan peel, respectively. The comparative study concluded that SWE is a promising technique for the extraction of phenolic compounds from rambutan peel, since it recovered more than twice as much as the conventional techniques in a shorter extraction time.

A Critical Review of the Adequacy of EIA Reports: Evidence from Pakistan

The preparation of good-quality Environmental Impact Assessment (EIA) reports contributes to enhancing the overall effectiveness of EIA. This component of the EIA process becomes more important in situations where public participation is weak and there is a lack of expertise on the part of the competent authority. In Pakistan, EIA became mandatory in July 1994 for every project likely to cause adverse environmental impacts. The competent authority also formulated guidelines for the preparation and review of EIA reports in 1997. However, EIA has yet to prove itself a successful decision-support tool for environmental protection. One of the several reasons for this ineffectiveness is the generally poor quality of EIA reports. This paper critically reviews the EIA reports of some randomly selected projects. Interviews with EIA consultants, project proponents and concerned government officials have also been conducted to pinpoint the root causes of the poor quality of EIA reports. The analysis reveals several inadequacies, particularly in areas relating to the identification, evaluation and mitigation of key impacts and the consideration of alternatives. The paper identifies some opportunities and suggests measures for improving the quality of EIA reports and hence making EIA an effective tool for environmental protection.

Physico-chemical State of the Air at the Stagnation Point during the Atmospheric Reentry of a Spacecraft

Hypersonic flows around space vehicles during their reentry phase in planetary atmospheres are characterized by intense aerothermal phenomena. The aim of this work is to analyze high temperature flows around an axisymmetric blunt body taking into account chemical and vibrational non-equilibrium of the air mixture species. For this purpose, a finite volume methodology is employed to determine the supersonic flow parameters around the axisymmetric blunt body, especially at the stagnation point and along the wall of the spacecraft, for several altitudes. This allows capturing the shock wave that forms ahead of the blunt body placed in the supersonic free stream. The numerical technique uses the Flux Vector Splitting method of Van Leer. Here, an adequate time-stepping parameter, along with the CFL coefficient and the mesh size level, is selected to ensure numerical convergence, sought with residuals of the order of 10^-8.
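
As an illustration of the scheme named above, the one-dimensional split mass flux of Van Leer's Flux Vector Splitting is recalled below; the paper applies the full axisymmetric, multi-species form.

```latex
% Van Leer split mass flux in one dimension, with density \rho, speed of
% sound a and Mach number M = u/a, for subsonic flow |M| < 1:
\[
F^{\pm}_{\mathrm{mass}} \;=\; \pm\,\frac{\rho\,a}{4}\,(M \pm 1)^2 ,
\qquad
F_{\mathrm{mass}} = F^{+}_{\mathrm{mass}} + F^{-}_{\mathrm{mass}} = \rho u .
\]
% For |M| >= 1 the full flux is assigned to the upwind side:
% F^{+} = F, F^{-} = 0 when M >= 1, and F^{+} = 0, F^{-} = F when M <= -1.
```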