6DSpaces: Multisensory Interactive Installations

Interactive installations for public spaces are a particular kind of interactive system whose design has been the subject of several research studies. Sensor-based applications are becoming increasingly popular, but the human-computer interaction community is still far from achieving sound, effective large-scale interactive installations for public spaces. This paper describes the 6DSpaces project, a research approach based on studying the role of multisensory interactivity and how it can be used effectively to draw people towards digital scientific content. The design of an entire scientific exhibition is described, and the result was evaluated in the real-world context of a Science Centre. The conclusions offer insight into how the human-computer interaction should be designed in order to maximize the overall experience.

On Asymptotic Laws and Transfer Processes Enhancement in Complex Turbulent Flows

The lecture presents significant advances in understanding the mechanism of transfer processes in turbulent separated flows. Experimental data suggest the governing role of the local pressure gradient generated in the immediate vicinity of the wall in separated flow, a result of intense instantaneous accelerations induced by large-scale vortex structures. On this basis, similarity laws for the mean velocity, temperature, and spectral characteristics, together with a heat and mass transfer law for turbulent separated flows, have been developed. These laws are confirmed by the available experimental data. The results were employed to analyse heat and mass transfer in several very complex processes occurring in technological applications, such as impinging jets, heat transfer of cylinders in cross flow and in tube banks, and packed beds, where the processes exhibit distinct properties that allow them to be classified as turbulent separated flows. Many observations receive an explanation here for the first time.
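
For orientation, similarity laws of this kind generalize the classical logarithmic law for the mean velocity in attached wall turbulence, which in standard textbook notation (not the lecture's specific result) reads:

```latex
u^{+} = \frac{1}{\kappa}\,\ln y^{+} + B,
\qquad u^{+} = \frac{\bar{u}}{u_{\tau}}, \quad y^{+} = \frac{y\,u_{\tau}}{\nu},
```

where $u_{\tau}$ is the friction velocity, $\nu$ the kinematic viscosity, $\kappa \approx 0.41$ the von Kármán constant, and $B \approx 5.0$.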

A Multiagent System for Distributed Systems Management

The demand for autonomous resource management in distributed systems has increased in recent years. Distributed systems require an efficient and powerful communication mechanism between applications running on different hosts and networks. The use of mobile agent technology to distribute and delegate management tasks promises to overcome the scalability and flexibility limitations of the currently used centralized management approach. This work proposes a multiagent system that adopts mobile agents as a technology for task distribution, result collection, and management of resources in large-scale distributed systems. A new mobile agent-based approach for collecting results from distributed system elements is presented. Artificial-intelligence techniques based on intelligent agents give the system proactive behavior. The presented results are based on a design example of an application operating in a mobile environment.
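
As a rough illustration of the result-collection idea, the following minimal Python sketch simulates a mobile agent visiting hosts and carrying back their metrics; all class and attribute names here are hypothetical, not the paper's design:

```python
# Minimal sketch of mobile-agent-style result collection (illustrative only).
class Host:
    def __init__(self, name, metrics):
        self.name = name
        self.metrics = metrics              # e.g. {"cpu": 0.42, "mem": 0.77}

class MobileAgent:
    """Visits a list of hosts, gathering management data as it migrates."""
    def __init__(self, itinerary):
        self.itinerary = itinerary
        self.results = {}

    def migrate(self):
        for host in self.itinerary:         # migration simulated as iteration
            self.results[host.name] = dict(host.metrics)
        return self.results                 # agent returns with all results

hosts = [Host("node1", {"cpu": 0.42}), Host("node2", {"cpu": 0.91})]
print(MobileAgent(hosts).migrate())
```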

SC-LSH: An Efficient Indexing Method for Approximate Similarity Search in High Dimensional Space

Locality Sensitive Hashing (LSH) is one of the most promising techniques for solving the nearest-neighbour search problem in high-dimensional space. Euclidean LSH is the most popular variant of LSH and has been successfully applied in many multimedia applications. However, Euclidean LSH has limitations that affect both index structure and query performance, the main one being its large memory consumption: to achieve good accuracy, a large number of hash tables is required. In this paper, we propose a new hashing algorithm that overcomes the storage space problem and improves query time while keeping an accuracy similar to that of the original Euclidean LSH. Experimental results on a real large-scale dataset show that the proposed approach achieves good performance and consumes less memory than Euclidean LSH.
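
For context, the Euclidean (p-stable) LSH family that this work builds on hashes a vector v as h(v) = floor((a . v + b) / w); the sketch below shows that baseline scheme (parameter values are illustrative, and SC-LSH's actual memory-saving strategy is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hash(dim, w=4.0):
    """h(v) = floor((a . v + b) / w) with a ~ N(0, I), b ~ U[0, w)."""
    a = rng.normal(size=dim)
    b = rng.uniform(0, w)
    return lambda v: int(np.floor((a @ v + b) / w))

# A table key concatenates k such hashes; using many tables boosts recall,
# which is exactly the memory cost that SC-LSH aims to reduce.
def table_key(v, hashes):
    return tuple(h(v) for h in hashes)

hashes = [make_hash(dim=8) for _ in range(4)]
print(table_key(rng.normal(size=8), hashes))
```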

Developing Damage Assessment Model for Bridge Surroundings: A Study of Disaster by Typhoon Morakot in Taiwan

This paper presents an integrated model that automatically measures the change of rivers, the damaged area of bridge surroundings, and the change of vegetation. The proposed model is based on a neurofuzzy mechanism enhanced by a self-organizing map (SOM) optimization algorithm and includes three functions to deal with river imagery. High-resolution imagery from the FORMOSAT-2 satellite, taken before and after the typhoon's passage, is adopted. For a bridge randomly selected out of 129 destroyed bridges, the recognition results show that the average river width increased by 66%, that the ruined segment of the bridge is located exactly at the most scoured region, and that the vegetation coverage was reduced to nearly 90% of the original. The results yielded by the proposed model demonstrate an accuracy rate of 99.94%. This study provides a successful tool not only for large-scale damage assessment but also for precise measurement of disaster impacts.
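
As background on the SOM component, the sketch below shows a single self-organizing map update step in Python; the grid size, learning rate, and neighbourhood width are arbitrary stand-ins, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.random((10, 10, 3))      # 10x10 grid of 3-d codebook vectors

def som_step(x, weights, lr=0.5, sigma=2.0):
    # best-matching unit: grid cell whose codebook vector is closest to x
    d = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    ii, jj = np.indices(d.shape)
    # a Gaussian neighbourhood pulls nearby units toward the input
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)
    return weights

som_step(rng.random(3), weights)
```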

Estimating Shortest Circuit Path Length Complexity

When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of shortest path length for larger numbers of Monte Carlo data points. The neural model shows strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. The model can therefore be considered a method of predicting path length complexities; this is expected to help minimize the time complexity of very large-scale integrated circuits and of related computer-aided design tools that use binary decision diagrams.
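
To make the modelling setup concrete, here is a minimal numpy sketch of the kind of neural regression that maps (number of variables, number of minterms) to a path-length estimate; the synthetic data, network size, and training loop are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 2))                 # scaled (n_vars, n_minterms)
y = (0.7 * X[:, 0] + 0.3 * X[:, 1])[:, None]    # stand-in target complexity

W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

for _ in range(2000):                            # plain gradient descent
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    g = 2 * (pred - y) / len(X)                  # d(MSE)/d(pred)
    gh = (g @ W2.T) * (1 - h ** 2)               # backprop through tanh
    W2 -= 0.1 * h.T @ g;  b2 -= 0.1 * g.sum(0)
    W1 -= 0.1 * X.T @ gh; b1 -= 0.1 * gh.sum(0)

h = np.tanh(X @ W1 + b1)                         # final fit
print(f"train RMSE: {float(np.sqrt(np.mean((h @ W2 + b2 - y) ** 2))):.4f}")
```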

Analysis of Long-Term File System Activities on Cluster Systems

I/O workload is a critical factor in analyzing I/O patterns and maximizing file system performance. However, measuring the I/O workload of a running distributed parallel file system is non-trivial owing to collection overhead and the large volume of data. In this paper, we measured and analyzed file system activities on two large-scale cluster systems providing TFlops-level high-performance computing resources. By comparing the file system activities of 2009 with those of 2006, we analyzed how I/O workloads changed with the development of system performance and high-speed network technology.

On the Robust Stability of Homogeneous Perturbed Large-Scale Bilinear Systems with Time Delays and Constrained Inputs

The stability test problem for homogeneous large-scale perturbed bilinear time-delay systems subjected to constrained inputs is considered in this paper. Both nonlinear uncertainties and interval systems are discussed. By utilizing the Lyapunov equation approach together with linear algebraic techniques, several delay-independent criteria are presented to guarantee the robust stability of the overall systems. The main feature of the presented results is that, although the Lyapunov stability theorem is used, they do not involve any Lyapunov equation, which may be unsolvable. Furthermore, it is shown that the proposed schemes can be applied to solve the stability analysis problem of large-scale time-delay systems.
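
For reference, the Lyapunov equation that such approaches would otherwise need to solve has the standard form (the classical statement, not the paper's criteria):

```latex
A^{\mathsf{T}} P + P A = -Q,
```

where, for a Hurwitz-stable $A$ and any symmetric positive definite $Q$, a unique symmetric positive definite solution $P$ exists; the criteria above are constructed precisely to avoid having to solve this equation.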

Assessment of Reliability and Quality Measures in Power Systems

The paper presents new results of a recent industry-supported research and development study in which an efficient framework for evaluating practical and meaningful power system reliability and quality indices was applied. The system-wide integrated performance indices are capable of addressing and revealing areas of deficiency, bottlenecks, and redundancies in the composite generation-transmission-demand structure of large-scale power grids. The technique utilizes a linear programming formulation that simulates practical operating actions and offers a general and comprehensive framework to assess the harmony and compatibility of generation, transmission, and demand in a power system. Practical applications to a reduced system model as well as a portion of the Saudi power grid are also presented for demonstration purposes.
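
To give a flavour of such a linear programming formulation, the toy sketch below minimizes load curtailment subject to generator limits and a demand balance using scipy; the numbers and single-bus layout are illustrative assumptions, far simpler than the paper's composite model:

```python
from scipy.optimize import linprog

# Variables: [g1, g2, shed] -- two generators plus a load-shedding slack.
demand = 150.0
c = [1.0, 1.2, 100.0]             # cheap gen, pricier gen, heavy shed penalty
A_eq = [[1.0, 1.0, 1.0]]          # g1 + g2 + shed = demand
b_eq = [demand]
bounds = [(0, 100), (0, 80), (0, demand)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)                      # -> [100., 50., 0.], i.e. no shedding
```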

On the Efficient Implementation of a Serial and Parallel Decomposition Algorithm for Fast Support Vector Machine Training Including a Multi-Parameter Kernel

This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as shared-memory parallel mode, we introduce a transformation of the training data that allows the use of an expensive generalized kernel without additional cost. We present experiments for the Gaussian kernel, but other kernel functions can be used as well. In order to further speed up the decomposition algorithm, we analyze the critical problem of working set selection for large training data sets, and we analyze the influence of the working set size on the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm that improve overall support vector machine learning performance. Our method allows extensive parameter search to optimize classification accuracy.
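
One plausible reading of the data transformation is pre-scaling each feature by its kernel width, which reduces a multi-parameter (anisotropic) Gaussian kernel to the ordinary single-parameter one; the Python sketch below illustrates this idea under that assumption:

```python
import numpy as np

# k(x, z) = exp(-sum_d (x_d - z_d)^2 / s_d): one width s_d per feature.
# Dividing the data by sqrt(s) up front folds the widths into the data,
# so the plain Gaussian kernel can be used with no extra per-evaluation cost.
def rbf_aniso(X, Z, s):
    Xs, Zs = X / np.sqrt(s), Z / np.sqrt(s)
    sq = ((Xs[:, None, :] - Zs[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
K = rbf_aniso(X, X, s=np.array([1.0, 2.0, 0.5]))
print(K.shape, bool(np.allclose(K, K.T)))        # (5, 5) True
```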

Nonlinear Controller for Fuzzy Model of Double Inverted Pendulums

In this paper a method for designing a nonlinear controller for a fuzzy model of the double inverted pendulum is proposed. This system can be considered a fuzzy large-scale system that includes offset terms and a disturbance in each subsystem. The offset terms are deterministic, and the disturbances satisfy a matching condition stated in the paper. Based on the Lyapunov theorem, a nonlinear controller is designed for this fuzzy system (on a model-reference basis) that is simple to compute and guarantees stability. This idea can be applied to other fuzzy large-scale systems comprising more subsystems. Finally, the results are presented.
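
A matching condition of the kind referred to typically requires the disturbance to enter through the input channel; in standard textbook form (not necessarily the paper's exact condition):

```latex
d_i(t) = B_i \, \delta_i(t), \qquad \lVert \delta_i(t) \rVert \le \bar{\delta}_i,
```

where $B_i$ is the input matrix of the $i$-th subsystem and $\bar{\delta}_i$ a known bound, so the control input can cancel the disturbance directly.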

The Challenge of Large-Scale IT Projects

The trend in the world of Information Technology (IT) is towards increasingly large and difficult projects rather than smaller and easier ones. However, the data on large-scale IT project success rates give cause for concern. This paper seeks to answer why large-scale IT projects are different from, and more difficult than, other typical engineering projects. Drawing on industrial experience, a compilation of the conditions that influence failure is presented, and solutions are suggested with a view to improving success rates.

Multi-agent On-line Monitor for the Safety of Critical Systems

Operational safety of critical systems, such as nuclear power plants, industrial chemical processes, and means of transportation, is a major concern for system engineers and operators. One means of assuring it is on-line safety monitors that deliver three safety tasks: fault detection and diagnosis, alarm annunciation, and fault control. While current monitors deliver these tasks, both benefits and limitations of their approaches have been highlighted. Drawing on those benefits, this paper develops a distributed monitor based on semi-independent agents, i.e. a multiagent system, with monitoring knowledge derived from a safety assessment model of the monitored system. Agents are deployed hierarchically and provided with knowledge portions and collaboration protocols to reason over, and integrate, the operational conditions of the components of the monitored system. The monitor aims to address limitations arising from the large scale, complicated behaviour, and distributed nature of monitored systems and to deliver the three monitoring tasks effectively.

A Novel Low Power, High Speed 14 Transistor CMOS Full Adder Cell with 50% Improvement in Threshold Loss Problem

Full adders are important components in applications such as digital signal processor (DSP) architectures and microprocessors. In addition to its main task, adding two numbers, the full adder participates in many other useful operations such as subtraction, multiplication, division, and address calculation. In most of these systems the adder lies in the critical path that determines the overall speed of the system, so enhancing the performance of the 1-bit full adder cell (the building block of the adder) is a significant goal. Demands for low-power VLSI have been pushing the development of aggressive design methodologies to reduce power consumption drastically. To meet this growing demand, we propose a new low-power adder cell that, by sacrificing MOS transistor count, reduces the serious threshold loss problem, considerably increases speed, and decreases power when compared to the static energy recovery full (SERF) adder. A new improved 14T CMOS 1-bit full adder cell is thus presented in this paper. Results show a 50% improvement in the threshold loss problem, a 45% improvement in speed, and a considerable reduction in power consumption relative to the SERF adder and other adders of comparable performance.
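
At the logic level, the cell being optimized computes the standard full-adder function; the Python sketch below verifies its truth table (behavioural only, since the paper's contribution is the 14-transistor CMOS realization):

```python
def full_adder(a, b, cin):
    s = a ^ b ^ cin                      # sum bit
    cout = (a & b) | (cin & (a ^ b))     # carry out
    return s, cout

# Exhaustive check of the truth table against integer addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert a + b + cin == 2 * cout + s
print("full adder truth table verified")
```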

Agents Network on a Grid: An Approach with the Set of Circulant Operators

In this work we present some matrix operators, named circulant operators, and their action on square matrices. This study of square matrices provides new insights into the structure of the space of square matrices. Moreover, it can be useful in various fields, such as agent networking on a grid or large-scale distributed self-organizing grid systems.
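
As a concrete illustration, a circulant matrix is generated by cyclic shifts of its first row; the sketch below builds one and applies it to a square matrix (the paper's particular operator set is not reproduced here):

```python
import numpy as np

def circulant(first_row):
    """Row i is the first row cyclically shifted right by i positions."""
    return np.array([np.roll(first_row, i) for i in range(len(first_row))])

C = circulant([1, 0, 2, 0])       # a 4x4 circulant matrix
A = np.arange(16).reshape(4, 4)   # an arbitrary square matrix
print(C @ A)                      # the operator's action on A
```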

Effective Scheduling of Semiconductor Manufacturing using Simulation

The process of wafer fabrication is arguably the most technologically complex and capital-intensive stage in semiconductor manufacturing. This large-scale discrete-event process is highly re-entrant and involves hundreds of machines, restrictions, and processing steps. Therefore, production control of wafer fabrication facilities (fabs), specifically scheduling, is one of the most challenging problems this industry faces. Dispatching rules have been extensively applied to scheduling problems in semiconductor manufacturing. Moreover, lot release policies are commonly used in this manufacturing setting to further improve the performance of such systems and reduce their inherent variability. In this work, simulation is used to schedule re-entrant flow shop manufacturing systems, with an application in semiconductor wafer fabrication: a simulation model has been developed for the Intel Five-Machine Six-Step Mini-Fab using the Extend™ simulation environment. The Mini-Fab was selected because it captures the challenges involved in scheduling highly re-entrant semiconductor manufacturing lines. A number of scenarios have been developed and used to evaluate the effect of different dispatching rules and lot release policies on the selected performance measures. Simulation results showed that the performance of the Mini-Fab can be drastically improved using a combination of dispatching rules and a lot release policy.
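
To illustrate why dispatching rules matter, the toy sketch below compares mean flow time under first-in-first-out (FIFO) and shortest-processing-time (SPT) ordering on a single machine; this is a deliberately minimal stand-in for the re-entrant Mini-Fab model, not the paper's simulation:

```python
jobs = [5, 1, 4, 2, 8, 3]             # processing times, all available at t=0

def mean_flow_time(order):
    t, total = 0, 0
    for p in order:
        t += p                        # each job finishes at cumulative time t
        total += t
    return total / len(order)

print("FIFO:", mean_flow_time(jobs))            # arrival order
print("SPT :", mean_flow_time(sorted(jobs)))    # shortest job first
```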

A Software Tool Design for Cerebral Infarction of MR Images

A brain MR imaging-based clinical research and analysis system was built, targeting the development of a large-scale dataset, and the general clinical data available were used to build it. During registration, a region-growing algorithm was used to select the lesion ROI, and a mesh-warp algorithm was implemented for matching; matching errors were corrected individually for accuracy. In addition, large volumes of ROI research data can be accumulated with the compression method we developed, and decision criteria for judging the research results correctly were suggested. Experimental groups such as age, sex, MR type, patient ID, and smoking status can easily be queried. The resulting data were visualized as overlapped images using a color table, and statistics were calculated with a statistical package. The system was evaluated on chronic ischemic damage in patients with acute cerebral infarction; the region associated with the neurologic disability index was located in the center portion facing the lateral ventricle, where the corona radiata was found. Finally, system reliability was measured by both inter-user and intra-user registration correlation.
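
For the ROI selection step, region growing expands from a seed pixel to neighbouring pixels of similar intensity; the sketch below shows a minimal 2-D version (the array and tolerance are illustrative, not the system's parameters):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Collect 4-connected pixels whose intensity is within tol of the seed."""
    h, w = img.shape
    seen = np.zeros(img.shape, dtype=bool)
    ref = float(img[seed])
    q, region = deque([seed]), []
    while q:
        y, x = q.popleft()
        if not (0 <= y < h and 0 <= x < w) or seen[y, x]:
            continue
        seen[y, x] = True
        if abs(float(img[y, x]) - ref) <= tol:
            region.append((y, x))
            q.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return region

img = np.array([[10, 12, 50], [11, 13, 52], [60, 61, 62]])
print(region_grow(img, seed=(0, 0), tol=5))   # the low-intensity patch
```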

Concurrency in Web Access Patterns Mining

Web usage mining is an interesting application of data mining which provides insight into customer behaviour on the Internet. An important technique for discovering user access and navigation trails is based on sequential pattern mining. One of the key challenges in web access pattern mining is mining richly structured patterns. This paper proposes a novel model called the Web Access Patterns Graph (WAP-Graph) to represent all of the access patterns from web mining graphically. WAP-Graph also motivates the search for new structural relation patterns, i.e. Concurrent Access Patterns (CAP), to identify and predict more complex web page requests. Corresponding CAP mining and modelling methods are proposed and shown to be effective in the search for, and representation of, concurrency between access patterns on the web. Experiments conducted on large-scale synthetic sequence data as well as real web access data demonstrate that CAP mining provides a powerful method for structural knowledge discovery, which can be visualised through the CAP-Graph model.
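
As a simplified illustration of representing access patterns graphically, the sketch below turns session sequences into a weighted transition graph; it captures only pairwise edges, not the concurrency relations that CAP mining adds:

```python
from collections import Counter

sessions = [
    ["home", "search", "item", "cart"],
    ["home", "item", "cart"],
    ["home", "search", "item"],
]

edges = Counter()
for s in sessions:
    for a, b in zip(s, s[1:]):
        edges[(a, b)] += 1            # edge weight = transition frequency

for (a, b), w in sorted(edges.items()):
    print(f"{a} -> {b}: {w}")
```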

A Fast Replica Placement Methodology for Large-scale Distributed Computing Systems

Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, at certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretical techniques, in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game-theory-based mechanism studies what happens when independent agents act selfishly and how to control them to maximize overall performance. A bidding mechanism asks how one can design systems so that agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution time. Comparisons are recorded against some well-known techniques such as greedy, branch and bound, game-theoretical auctions, and genetic algorithms.
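
As a toy illustration of the auction idea, each site's agent bids the benefit it would gain from hosting a replica, and the object is placed at the highest bidder; the second-price payment shown is one standard way to temper selfish bidding, not necessarily the paper's mechanism:

```python
bids = {"siteA": 12.5, "siteB": 9.0, "siteC": 14.2}   # hypothetical benefits

winner = max(bids, key=bids.get)
# Vickrey (second-price) payment: the winner pays the runner-up's bid,
# which makes truthful bidding a dominant strategy for selfish agents.
second = max(v for k, v in bids.items() if k != winner)
print(f"replica placed at {winner}, payment {second}")
```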

Restarted Generalized Second-Order Krylov Subspace Methods for Solving Quadratic Eigenvalue Problems

This article is devoted to the numerical solution of large-scale quadratic eigenvalue problems. Such problems arise in a wide variety of applications, such as the dynamic analysis of structural mechanical systems, acoustic systems, fluid mechanics, and signal processing. We first introduce a generalized second-order Krylov subspace based on a pair of square matrices and two initial vectors, and present a generalized second-order Arnoldi process for constructing an orthonormal basis of this subspace. Then, using the projection technique and the refined projection technique, we propose a restarted generalized second-order Arnoldi method and a restarted refined generalized second-order Arnoldi method for computing some eigenpairs of large-scale quadratic eigenvalue problems. Theoretical results are established, and numerical examples are presented to illustrate the effectiveness of the proposed methods.
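
For reference, the quadratic eigenvalue problem has the standard form (with the conventional mass/damping/stiffness notation, assumed here rather than taken from the article):

```latex
\left(\lambda^{2} M + \lambda C + K\right) x = 0, \qquad x \neq 0,
\quad M, C, K \in \mathbb{C}^{n \times n},
```

and the methods above seek a few eigenpairs $(\lambda, x)$ when $n$ is large.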