A Functional Framework for Large Scale Application Software Systems

From the perspective of systems of systems (SoS) and emergent behaviors, this paper describes large scale application software systems and proposes framework methods to further depict systems' functional and non-functional characteristics. The paper also specifically discusses several functional frameworks. Finally, the framework's applications to system disintegration, system architecture, and stable intermediate forms in the building, deployment, and maintenance of large scale software applications are addressed.

Selective Forwarding Attack and Its Detection Algorithms: A Review

Wireless mesh networks (WMNs) are an emerging technology in wireless networking, as they can provide large scale, high speed Internet access. Due to their wireless multi-hop nature, WMNs are prone to many attacks, such as denial of service (DoS) attacks. We consider a special case of DoS attack, the selective forwarding attack (a.k.a. gray hole attack), in which a misbehaving mesh router selectively drops the packets it receives from its predecessor mesh router. It is very hard to determine whether packet loss is due to medium access collisions, bad channel quality, or a selective forwarding attack. In this paper, we present a review of detection algorithms for the selective forwarding attack and discuss their advantages and disadvantages. Finally, we conclude the paper with open research issues and challenges.
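As a rough illustration of the detection idea common to several watchdog-style schemes, the sketch below counts packets handed to a neighbour and packets later overheard being forwarded by that neighbour, and flags the neighbour when the observed drop ratio exceeds a tolerance meant to allow for collisions and bad channel quality. The names, threshold, and sample count are illustrative assumptions, not taken from any particular algorithm in the review.

```python
# Minimal watchdog-style sketch for selective forwarding detection.
# All thresholds and names are illustrative assumptions.
from collections import defaultdict

THRESHOLD = 0.25     # assumed tolerable drop ratio (collisions, poor channel)
MIN_SAMPLES = 50     # avoid judging a neighbour on too few observations

sent_to = defaultdict(int)         # packets relayed through each neighbour
overheard_from = defaultdict(int)  # packets overheard being forwarded onward

def record_send(neighbour_id: str) -> None:
    sent_to[neighbour_id] += 1

def record_overhear(neighbour_id: str) -> None:
    overheard_from[neighbour_id] += 1

def is_suspected(neighbour_id: str) -> bool:
    sent = sent_to[neighbour_id]
    if sent < MIN_SAMPLES:
        return False
    drop_ratio = 1.0 - overheard_from[neighbour_id] / sent
    return drop_ratio > THRESHOLD
```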

A Survey of Various Algorithms for VLSI Physical Design

Electronic systems are at the core of everyday life. They form an integral part of financial networks, mass transit, telephone systems, power plants and personal computers. Electronic systems are increasingly based on complex VLSI (Very Large Scale Integration) integrated circuits. Electronic design automation is concerned with the design and production of VLSI systems. The next important step in creating a VLSI circuit is physical design. The input to physical design is a logical representation of the system under design; the output of this step is the layout of a physical package that optimally or near-optimally realizes the logical representation. Physical design problems are combinatorial in nature and of large problem size. Darwin observed that, as variations are introduced into a population with each new generation, the less fit individuals tend to become extinct in the competition for basic necessities. This survival-of-the-fittest principle leads to the evolution of species. The objective of Genetic Algorithms (GAs) is to find an optimal solution to a problem. Since GAs are heuristic procedures that can function as optimizers, they are not guaranteed to find the optimum, but are able to find acceptable solutions for a wide range of problems. This survey paper presents a study of efficient algorithms for VLSI physical design and observes the common traits of the superior contributions.
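As a hedged sketch of how a GA can be applied to a placement-style physical design problem, the fragment below evolves permutations of cell positions against a simple wire-length-like cost. The cost function, operators, and parameters are illustrative assumptions, not any specific algorithm from the surveyed work.

```python
# Sketch: a genetic algorithm over cell orderings with a wire-length-like cost.
import random

def cost(perm, nets):
    # nets: list of connected cell pairs; cost = total distance between them
    pos = {cell: i for i, cell in enumerate(perm)}
    return sum(abs(pos[a] - pos[b]) for a, b in nets)

def crossover(p1, p2):
    # Order crossover: keep a slice of p1, fill the rest in p2's order.
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    keep = set(p1[a:b])
    rest = [c for c in p2 if c not in keep]
    child[:a] = rest[:a]
    child[b:] = rest[a:]
    return child

def mutate(perm, rate=0.1):
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def ga(n_cells, nets, pop_size=30, generations=200):
    pop = [random.sample(range(n_cells), n_cells) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: cost(p, nets))      # fitter = lower cost
        survivors = pop[: pop_size // 2]           # survival of the fittest
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=lambda p: cost(p, nets))

# Example: 8 cells and a few two-pin nets
print(ga(8, nets=[(0, 3), (1, 5), (2, 7), (4, 6)]))
```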

Performance of QoS Parameters in MANET Application Traffics in Large Scale Scenarios

A mobile ad hoc network (MANET) consists of wireless nodes communicating without the need for centralized administration. A user can move at any time in an ad hoc scenario and, as a result, such a network needs routing protocols that can adapt to a dynamically changing topology. To accomplish this, a number of ad hoc routing protocols have been proposed and implemented, including DSR, OLSR and AODV. This paper presents a study of the QoS parameters for MANET application traffic in large-scale scenarios with 50 and 120 nodes. The application traffic analyzed in this study is File Transfer Protocol (FTP). In large-scale networks (120 nodes) OLSR shows better performance, while in smaller-scale networks (50 nodes) AODV shows a lower packet drop rate and OLSR shows better throughput.

Mapping Complex, Large-Scale Spiking Networks on Neural VLSI

Traditionally, VLSI implementations of spiking neural nets have featured large neuron counts for fixed computations or small exploratory, configurable nets. This paper presents the system architecture of a large configurable neural net system employing a dedicated mapping algorithm for projecting the targeted biology-analog nets and dynamics onto the hardware with its attendant constraints.

Levenberg-Marquardt Algorithm for Karachi Stock Exchange Share Rates Forecasting

Financial forecasting is an example of a signal processing problem. A number of ways to train the network are available. We have used the Levenberg-Marquardt algorithm for error back-propagation weight adjustment. Pre-processing of the data has reduced much of the large-scale variation to a smaller scale, reducing the variation in the training data.
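For reference, the standard Levenberg-Marquardt weight update used in this kind of training interpolates between Gauss-Newton and gradient descent through the damping factor mu, where J is the Jacobian of the network error vector e with respect to the weights w:

```latex
\Delta \mathbf{w} \;=\; -\left(\mathbf{J}^{\top}\mathbf{J} + \mu\,\mathbf{I}\right)^{-1}\mathbf{J}^{\top}\mathbf{e}
```

A small mu behaves like Gauss-Newton near a minimum, while a large mu reduces to a short gradient-descent step, which is what makes the method robust for network training.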

Condition Monitoring in the Management of Maintenance in a Large Scale Precision CNC Machining Manufacturing Facility

The manufacture of large-scale precision aerospace components using CNC requires a highly effective maintenance strategy to ensure that the required accuracy can be achieved over many hours of production. This paper reviews a strategy for a maintenance management system based on Failure Mode Avoidance, which uses advanced techniques and technologies to underpin a predictive maintenance strategy. It is shown how condition monitoring (CM) is important to predict potential failures in high precision machining facilities and to achieve intelligent and integrated maintenance management. There are two distinct ways in which CM can be applied. One is to monitor key process parameters and observe trends which may indicate a gradual deterioration of accuracy in the product. The other is to use CM techniques to monitor high-status machine parameters, enabling trends to be observed and corrected before machine failure and downtime occur. It is concluded that the key to developing a flexible and intelligent maintenance framework in any precision manufacturing operation is the ability to evaluate machine tool condition reliably and routinely using condition monitoring techniques within a framework of Failure Mode Avoidance.

MONARC: A Case Study on Simulation Analysis for LHC Activities

The scale, complexity and worldwide geographical spread of the LHC computing and data analysis problems are unprecedented in scientific research. The complexity of processing and accessing these data is increased substantially by the size and global span of the major experiments, combined with the limited wide area network bandwidth available. We present the latest generation of the MONARC (MOdels of Networked Analysis at Regional Centers) simulation framework as a design and modeling tool for large scale distributed systems applied to HEP experiments. We present simulation experiments designed to evaluate the capabilities of the current real-world distributed infrastructure to support existing physics analysis processes, and the means by which the experiments band together to meet the technical challenges posed by the storage, access and computing requirements of LHC data analysis within the CMS experiment.

A Multi-Layer Consistency Protocol for Replica Management in Large Scale Systems

Large scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in Data Grids is characterized by strong decentralization of data across several sites, with the objective of ensuring the availability and reliability of the data and providing fault tolerance and scalability, which is only possible through replication techniques. Unfortunately, the use of these techniques comes at a high cost, because consistency must be maintained between the distributed replicas. Nevertheless, accepting certain imperfections can improve the performance of the system by increasing concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large scale systems. Our approach is based on a hierarchical representation model with three layers and serves a dual purpose: it first makes it possible to reduce response times compared to a completely pessimistic approach, and second improves the quality of service compared to an optimistic approach.

A Parallel Algorithm for 2-D Cylindrical Geometry Transport Equation with Interface Corrections

In order to make the conventional implicit algorithm applicable on large scale parallel computers, an interface prediction and correction scheme for the discontinuous finite element method is presented to solve time-dependent neutron transport equations in 2-D cylindrical geometry. Domain decomposition is adopted in the computational domain. The numerical experiments show that our parallel algorithm with explicit prediction and implicit correction has good precision, parallelism and simplicity. In particular, it can reach perfect speedup even on hundreds of processors for large-scale problems.

Potential of Agro-Waste Extracts as Supplements for the Continuous Bioremediation of Free Cyanide Contaminated Wastewater

Different agricultural waste peels were assessed for their suitability as primary substrates for the bioremediation of free cyanide (CN-) by a cyanide-degrading fungus, Aspergillus awamori, isolated from cyanide-containing wastewater. The bioremediated CN- concentrations were in the range of 36 to 110 mg CN-/L, with orange (C. sinensis) > carrot (D. carota) > onion (A. cepa) > apple (M. pumila) being chosen as suitable substrates for large scale CN- degradation processes due to: 1) the high concentration of bioremediated CN-, 2) the total reduced sugars released into solution to sustain the biocatalyst, and 3) the minimal residual NH4-N concentration after fermentation. The bioremediation rate constants (k) were 0.017 h-1 (0 h < t < 24 h), with improved bioremediation rates (0.02189 h-1) observed after 24 h. The average nitrilase activity was ~10 U/L.
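Assuming the reported rate constants describe simple first-order decay of the CN- concentration (an assumption, since the kinetic model is not stated here), the corresponding relations are:

```latex
C(t) = C_0\,e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k} \approx \frac{0.693}{0.017\ \mathrm{h^{-1}}} \approx 41\ \mathrm{h}
```

Under that assumption, the initial-phase constant of 0.017 h-1 would correspond to a half-life of roughly 41 hours, shortening as the rate improves after 24 h.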

Face Recognition: A Literature Review

The task of face recognition has been actively researched in recent years. This paper provides an up-to-date review of major human face recognition research. We first present an overview of face recognition and its applications. Then, a literature review of the most recent face recognition techniques is presented. Descriptions and limitations of the face databases used to test the performance of these face recognition algorithms are given. A brief summary of the Face Recognition Vendor Test (FRVT) 2002, a large scale evaluation of automatic face recognition technology, and its conclusions are also given. Finally, we give a summary of the research results.

Learning Monte Carlo Data for Circuit Path Length

This paper analyzes the patterns of Monte Carlo data for a large number of variables and minterms in order to characterize circuit path length behavior. We propose models that are determined by a training process on shortest path lengths derived from a wide range of binary decision diagram (BDD) simulations. The model was created using a feed-forward neural network (NN) modeling methodology. Experimental results for ISCAS benchmark circuits show an RMS error of 0.102 for the shortest path length complexity estimation predicted by the NN model (NNM). Use of such a model can help reduce the time complexity of very large scale integrated (VLSI) circuits and related computer-aided design (CAD) tools that use BDDs.
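A minimal sketch of this kind of modeling is given below, assuming (hypothetically) that the features are the number of variables and minterms and the target is the measured shortest path length. In the paper these would come from BDD simulations of the ISCAS benchmarks; here synthetic placeholders stand in for them.

```python
# Sketch: feed-forward NN regression for shortest-path-length estimation.
# Features and targets are synthetic placeholders, not real BDD data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder features per "circuit": (number of variables, log number of minterms)
num_vars = rng.integers(5, 40, size=500)
num_minterms = rng.integers(10, 5000, size=500)
X = np.column_stack([num_vars, np.log(num_minterms)])

# Placeholder target standing in for the measured shortest path length
y = 0.8 * num_vars + 3.0 * np.log(num_minterms) + rng.normal(0, 0.5, size=500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
)
model.fit(X, y)

rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
print(f"training RMS error: {rmse:.3f}")
```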

Simulating Action Potential as a Linear Combination of Gating Dynamics

In this research we show that the dynamics of an action potential in a cell can be modeled as a linear combination of the dynamics of the gating state variables. It is shown that the modeling error is negligible. Our findings can be used for simplifying cell models and reducing computational burden, i.e., they are useful for simulating action potential propagation in large scale computations such as tissue modeling. We have verified our finding using several cell models.
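A minimal sketch of the fitting step is shown below, assuming Hodgkin-Huxley-style gating variables m, h, n. The traces are synthetic placeholders standing in for the output of an actual cell model.

```python
# Sketch: fit a membrane potential trace as a linear combination of gating
# state variables via least squares. All traces below are placeholders.
import numpy as np

t = np.linspace(0, 50, 1000)                 # ms, placeholder time axis
m = 1 / (1 + np.exp(-(t - 10)))              # placeholder gating traces
h = np.exp(-t / 20)
n = 1 / (1 + np.exp(-(t - 15) / 3))
V_true = -65 + 100 * m * h - 30 * n          # placeholder "recorded" potential

# Design matrix: constant term plus each gating variable
A = np.column_stack([np.ones_like(t), m, h, n])
coeffs, *_ = np.linalg.lstsq(A, V_true, rcond=None)
V_fit = A @ coeffs

rms_error = np.sqrt(np.mean((V_fit - V_true) ** 2))
print("coefficients:", coeffs, "RMS error:", rms_error)
```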

Evaluation of a 50MW Two-Axis Tracking Photovoltaic Power Plant for Al-Jagbob, Libya: Energetic, Economic, and Environmental Impact Analysis

This paper investigates the application of a large scale (LS-PV) two-axis tracking photovoltaic power plant in Al-Jagbob, Libya. The design of a 50MW grid-connected (two-axis tracking) PV power plant in Al-Jagbob, Libya has been carried out. A hetero-junction with intrinsic thin layer (HIT) type PV module has been selected and modeled. A Microsoft Excel-VBA program has been constructed to compute slope radiation, dew-point, sky temperature, and then cell temperature, maximum power output and module efficiency for this tracking system. The results for energy production show that the total energy output is 128.5 GWh/year. The average module efficiency is 16.6%. The electricity generation capacity factor (CF) and solar capacity factor (SCF) were found to be 29.3% and 70.4% respectively. A 50MW two-axis tracking power plant with a total energy output of 128.5 GWh/year would reduce CO2 pollution by 85,581 tonnes each year. The payback time for the proposed LS-PV photovoltaic power plant was found to be 4 years.
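The reported capacity factor is consistent with its conventional definition, the annual energy output divided by the energy a 50 MW plant would produce running continuously for a year:

```latex
\mathrm{CF} = \frac{E_{\mathrm{annual}}}{P_{\mathrm{rated}} \times 8760\ \mathrm{h}}
            = \frac{128.5\ \mathrm{GWh}}{50\ \mathrm{MW} \times 8760\ \mathrm{h}} \approx 0.293
```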

An Agent Based Dynamic Resource Scheduling Model with FCFS-Job Grouping Strategy in Grid Computing

Grid computing is a group of clusters connected over high-speed networks that involves coordinating and sharing computational power, data storage and network resources across dynamic and geographically dispersed locations. Resource management and job scheduling are critical tasks in grid computing. Resource selection becomes challenging due to the heterogeneity and dynamic availability of resources. Job scheduling is an NP-complete problem, and different heuristics may be used to reach an optimal or near-optimal solution. This paper proposes a model for resource and job scheduling in a dynamic grid environment. The main focus is to maximize resource utilization and minimize the processing time of jobs. The grid resource selection strategy is based on a Max Heap Tree (MHT), which is well suited to large scale applications; the root node of the MHT is selected for job submission. A job grouping concept is used to maximize resource utilization when scheduling jobs in grid computing. The proposed resource selection model and job grouping concept are used to enhance the scalability, robustness, efficiency and load balancing ability of the grid.
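The sketch below illustrates, under stated assumptions, the two mechanisms named above: max-heap-based resource selection (the root holds the most capable resource) and FCFS job grouping up to that resource's capacity. The Resource/Job fields and the MIPS-style units are hypothetical, not the paper's exact model.

```python
# Sketch: max-heap resource selection plus FCFS job grouping.
# Python's heapq is a min-heap, so capacities are negated to get a max-heap.
import heapq
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    capacity: float   # e.g. MIPS available in the scheduling interval (assumed unit)

@dataclass
class Job:
    name: str
    length: float     # e.g. million instructions (assumed unit)

def select_resource(resources):
    heap = [(-r.capacity, r.name, r) for r in resources]
    heapq.heapify(heap)          # root = resource with the largest capacity
    return heap[0][2]

def group_jobs_fcfs(jobs, capacity):
    group, used = [], 0.0
    for job in jobs:             # FCFS: take jobs in arrival order
        if used + job.length > capacity:
            break
        group.append(job)
        used += job.length
    return group

resources = [Resource("R1", 500), Resource("R2", 1200), Resource("R3", 800)]
queue = [Job("J1", 300), Job("J2", 450), Job("J3", 600), Job("J4", 200)]

best = select_resource(resources)
batch = group_jobs_fcfs(queue, best.capacity)
print(best.name, [j.name for j in batch])   # R2 gets the grouped jobs J1, J2
```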

Visualisation and Navigation in Large Scale P2P Service Networks

In Peer-to-Peer service networks, where peers offer any kind of publicly available services or applications, intuitive navigation through all services in the network becomes more difficult as the number of services increases. In this article, a concept is discussed that enables users to intuitively browse and use large scale P2P service networks. The concept extends the idea of creating virtual 3D-environments solely based on Peer-to-Peer technologies. Aside from browsing, users shall have the possibility to emphasize services of interest using their own semantic criteria. The appearance of the virtual world shall intuitively reflect network properties that may be of interest for the user. Additionally, the concept comprises options for load- and traffic-balancing. In this article, the requirements concerning the underlying infrastructure and the graphical user interface are defined. First impressions of the appearance of future systems are presented and the next steps towards a prototypical implementation are discussed.

Two New Low Power High Performance Full Adders with Minimum Gates

With increasing circuit complexity and the demand for portable devices, power consumption is one of the most important parameters these days. Full adders are the basic block of many circuits; therefore, reducing power consumption in full adders is very important for low power circuits. One of the most power-consuming modules in a full adder is the XOR/XNOR circuit. This paper presents two new full adders based on two new logic approaches. The proposed logic approaches use one XOR or XNOR gate to implement a full adder cell, so delay and power are decreased. Using the two new approaches and two XOR and XNOR gates, two new full adders have been implemented in this paper. Simulations are carried out with HSPICE in 0.18μm bulk technology with a 1.8V supply voltage. The results show that the proposed ten-transistor full adder has 12% less power consumption and is 5% faster in comparison to the MB12T full adder. The 9T adder is more efficient in area and is 24% better than the similar 10T full adder in terms of power consumption. The main drawback of the proposed circuits is an output threshold loss problem.
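For background, the standard full adder Boolean equations show why the XOR stage is central: both outputs can be expressed around the shared term A XOR B. These are the textbook equations, not necessarily the exact decomposition used in the proposed cells.

```latex
\mathrm{Sum} = A \oplus B \oplus C_{in}, \qquad C_{out} = A\,B + C_{in}\,(A \oplus B)
```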

Dynamic Window Secured Implicit Geographic Forwarding Routing for Wireless Sensor Network

Routing security is a major concern in Wireless Sensor Networks, since a large number of unattended nodes are deployed in an ad hoc fashion with no possibility of global addressing due to limited node memory, and the nodes have to be self-organizing when the system requires connections with other nodes. It becomes more challenging when the nodes have to act as routers while being tightly constrained in energy and computational capabilities, so that existing security mechanisms cannot be fitted directly. These factors increase the vulnerability of the network layer in particular, and of the whole network in general. In this paper, a Dynamic Window Secured Implicit Geographic Forwarding (DWSIGF) routing protocol is presented, in which a dynamic collection window is used to gather Clear to Send (CTS) control packets in order to find an appropriate next-hop node. DWSIGF is expected to minimize the chance of selecting an attacker as the next-hop node, which can result from a blackhole attack mounted through CTS rushing, and thus promises good network performance with high packet delivery ratios.
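The fragment below is a generic, hedged illustration of the dynamic-collection-window idea, not the exact DWSIGF design: CTS replies are gathered for a window whose length varies per request, and the next hop is then chosen from the whole collected set, so an attacker that merely rushes its CTS gains no automatic advantage. The window bounds, the receive_cts callback, and the uniform selection policy are assumptions for illustration only.

```python
# Generic sketch of a dynamic CTS collection window (illustrative assumptions).
import random
import time

def collect_cts(receive_cts, min_window=0.01, max_window=0.05):
    """receive_cts() is assumed to return a responder id or None (non-blocking)."""
    window = random.uniform(min_window, max_window)   # dynamic window per request
    deadline = time.monotonic() + window
    responders = []
    while time.monotonic() < deadline:
        cts = receive_cts()
        if cts is not None:
            responders.append(cts)
    return responders

def choose_next_hop(responders):
    # Placeholder policy: pick uniformly among all responders collected in the
    # window, so replying first gives no advantage to a rushing attacker.
    return random.choice(responders) if responders else None
```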

Detection of Linkages Between Extreme Flow Measures and Climate Indices

Large scale climate signals and their teleconnections can influence hydro-meteorological variables on a local scale. Several extreme flow and timing measures, including high flow and low flow measures, from 62 hydrometric stations in Canada are investigated to detect possible linkages with several large scale climate indices. The streamflow data used in this study are derived from the Canadian Reference Hydrometric Basin Network and are characterized by relatively pristine and stable land-use conditions with a minimum of 40 years of record. A composite analysis approach was used to identify linkages between the extreme flow and timing measures and the climate indices. The approach involves determining the 10 highest and 10 lowest values of the various climate indices from the data record. Extreme flow and timing measures for each station were then examined for the years associated with the 10 largest values and the years associated with the 10 smallest values. In each case, a resampling approach was applied to determine whether the 10 values of the extreme flow measures differed significantly from the series mean. Results indicate that several stations are impacted by the large scale climate indices considered in this study. The results allow the identification of relationships between stations that exhibit a statistically significant trend and stations for which the extreme measures exhibit a linkage with the climate indices.
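A hedged sketch of the composite analysis with resampling is shown below. The data arrays are placeholders, and the 5%/95% significance bounds are an assumption, since the exact test is not stated in the abstract.

```python
# Sketch of composite analysis: average an extreme-flow measure over the 10 years
# with the highest climate-index values, then compare against means of randomly
# resampled 10-year subsets. All data here are placeholders.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2010)
climate_index = rng.normal(size=years.size)           # placeholder index series
flow_measure = rng.gamma(2.0, 50.0, size=years.size)  # placeholder annual high-flow measure

def composite_significance(index, flow, n_select=10, n_resample=5000):
    top_years = np.argsort(index)[-n_select:]          # years with the 10 largest index values
    composite_mean = flow[top_years].mean()
    # Null distribution: means of random 10-year subsets of the flow series
    null_means = np.array([
        rng.choice(flow, size=n_select, replace=False).mean()
        for _ in range(n_resample)
    ])
    lo, hi = np.percentile(null_means, [5, 95])        # assumed significance bounds
    significant = composite_mean < lo or composite_mean > hi
    return composite_mean, (lo, hi), significant

print(composite_significance(climate_index, flow_measure))
```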