HERMES System: A Virtual Reality Simulator for Angioplasty Intervention Training

One of the essential requirements for a realistic surgical simulator is real-time interaction by means of a haptic interface. In fact, reproducing haptic sensations increases the realism of the simulation. However, the interaction needs to be performed in real time, since a delay between the user's action and the system's reaction reduces the user's immersion. In this paper, we present a prototype of the coronary stent implant simulator developed in the HERMES Project. The system allows real-time interaction with an artery by means of a specific haptic device: the user can interactively navigate in a reconstructed artery, and force feedback is produced when contact occurs between the artery walls and the medical instruments.
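
The abstract does not detail the contact model; as a purely illustrative sketch, a common way to render such force feedback is a penalty-based spring force that pushes the instrument tip back in proportion to how far it penetrates the vessel wall. The function name, stiffness value and geometry below are hypothetical, not the HERMES implementation.

```python
import numpy as np

def contact_force(tip_pos, wall_point, outward_normal, stiffness=600.0):
    """Penalty-based haptic force: if the instrument tip has passed the artery
    wall, push it back along the inward direction in proportion to the
    penetration depth. Returns a zero vector when there is no contact."""
    depth = np.dot(tip_pos - wall_point, outward_normal)  # > 0 means penetration
    if depth <= 0.0:
        return np.zeros(3)
    return -stiffness * depth * outward_normal            # force in newtons

# Example: tip 2 mm beyond the wall along the outward normal (z axis).
f = contact_force(np.array([0.0, 0.0, 0.002]),
                  np.array([0.0, 0.0, 0.0]),
                  np.array([0.0, 0.0, 1.0]))
print(f)  # about [0, 0, -1.2] N, pushing the tip back into the lumen
```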

A Materialized View Approach to Support Aggregation Operations over Long Periods in Sensor Networks

The increasing interest in processing data created by sensor networks has led to approaches that implement sensor networks as databases. The aggregation operator, which computes a value such as an average or a sum from a large group of data, is an essential function that must be provided when implementing such sensor network databases. This work proposes adding a DURING clause to TinySQL to compute values over a specific long period and suggests a way to implement the aggregation service in sensor networks by applying the materialized view and incremental view maintenance techniques used in data warehouses. In sensor networks, data values are passed from child nodes to parent nodes, and an aggregate value is computed at the root node. Because such nodes must be memory efficient and consume little power, recomputing aggregate values from all past and current data is problematic. Applying incremental view maintenance techniques therefore reduces memory consumption and supports fast computation of aggregate values.
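
TinySQL syntax aside, the core idea can be illustrated with a minimal sketch (names hypothetical, not the paper's code): each node keeps a constant-size (sum, count) pair as a tiny materialized view, folds every new reading into it incrementally, and a parent merges the partial views of its children, so an average over a long DURING window never requires storing or rescanning past readings.

```python
class IncrementalAverage:
    """Tiny 'materialized view' for AVG: only (sum, count) is stored,
    so memory use stays constant no matter how long the window is."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def insert(self, value):
        # Incremental view maintenance: fold the new reading into the view.
        self.total += value
        self.count += 1

    def merge(self, other):
        # A parent node combines the partial views received from its children.
        self.total += other.total
        self.count += other.count

    def result(self):
        return self.total / self.count if self.count else None


# Child nodes aggregate locally; the root merges the partial aggregates.
child_a, child_b, root = IncrementalAverage(), IncrementalAverage(), IncrementalAverage()
for v in (21.5, 22.0, 22.4):
    child_a.insert(v)
for v in (19.8, 20.1):
    child_b.insert(v)
root.merge(child_a)
root.merge(child_b)
print(root.result())  # average of all five readings
```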

A Study on the Development Method of BIM (Building Information Modeling) Software Based on a Cloud Computing Environment

As Architecture, Engineering and Construction (AEC) industry projects have grown larger and more complex, the use of BIM for 3D design and simulation has increased significantly. Accordingly, typical applications of BIM such as clash detection and the evaluation of alternatives based on three-dimensional planning have expanded to process management, cost and quantity management, structural analysis, regulatory compliance checking, and various other domains of virtual design and construction. At present, commercial BIM software operates in a single-user environment, so the initial cost is high and the investment is frequently wasted. Cloud computing, a next-generation internet technology, enables simple internet devices (such as PCs, tablets, and smartphones) to use the services and resources of BIM software. In this paper, we suggest a development method for BIM software based on a cloud computing environment in order to expand the utilization of BIM and reduce the cost of BIM software. First, as a benchmark, we surveyed successful cases of BIM and cloud computing. We then analyzed the needs and opportunities for BIM and cloud computing in the AEC industry. Finally, we suggested the main functions of BIM software based on a cloud computing environment and developed a simple prototype of cloud-based BIM software for basic BIM model viewing.

Optimal Path Planning under A Priori Information in Stochastic, Time-Varying Networks

A novel path planning approach is presented to find optimal paths in stochastic, time-varying networks given a priori traffic information. Most existing studies use dynamic programming to find optimal paths. However, those methods have been shown to be unable to obtain the global optimum, and designing efficient algorithms remains another challenge. This paper employs a decision-theoretic framework for defining the optimal path: for a given source S and destination D in an urban transit network, we seek an S-D path of lowest expected travel time, where the link travel times are discrete random variables. To overcome the deficiencies of dynamic programming methods, such as the curse of dimensionality and violation of the optimality principle, an integer programming model is built to assign the discrete travel time variables to arcs. Pruning techniques are also applied to reduce the computational complexity of the algorithm. The final experiments show the feasibility of the novel approach.
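
The integer programming model and pruning techniques are specific to the paper and are not reproduced here; the sketch below merely illustrates the stated objective, the expected travel time of an S-D path whose link travel times are discrete random variables, on a tiny hypothetical network and assuming independent link times.

```python
# Hypothetical network: each link's travel time is a discrete random variable,
# given as a list of (time, probability) pairs.
links = {
    ("S", "A"): [(4, 0.7), (9, 0.3)],
    ("A", "D"): [(5, 0.5), (6, 0.5)],
    ("S", "B"): [(3, 0.9), (12, 0.1)],
    ("B", "D"): [(7, 1.0)],
}

def expected_travel_time(path):
    """Expected travel time of a path = sum of the links' expected times
    (independent discrete link travel times assumed)."""
    return sum(
        sum(t * p for t, p in links[(u, v)])
        for u, v in zip(path, path[1:])
    )

candidates = [("S", "A", "D"), ("S", "B", "D")]
best = min(candidates, key=expected_travel_time)
print(best, expected_travel_time(best))  # ('S', 'B', 'D') with 10.9
```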

Structure and Functions of Urban Surface Water System in Coastal Areas: The Case of Almere

In the context of global climate change, flooding and sea level rise increasingly threaten coastal urban areas, in which large populations are continuously concentrated. Dutch experience in urban water system management offers valuable lessons for sustainable coastal urban development projects. Preliminary studies show that the urban water system in Almere, a typical Dutch polder city, has three operational modes that achieve the following functions: (1) coastline control – a strong multiple-dam system protects against storm surges and maintains sufficient capacity against risks; (2) high flexibility – a large, widely scattered open water system greatly reduces local runoff and water level fluctuation; (3) internal water maintenance – a weir and sluice system maintains a relatively stable water level, providing excellent boating and landscape services, coupled with a water circulation model that maintains better water quality. Almere thus offers abundant hints and experience for the ongoing development of coastal cities in emerging economies.

Some (v + 1, b + r + λ + 1, r + λ + 1, k, λ + 1) Balanced Incomplete Block Designs (BIBDs) from Lotto Designs (LDs)

The paper considered the construction of BIBDs using potential Lotto Designs (LDs) earlier derived from qualifying parent BIBDs. The study utilized Li's condition $\left\lfloor \frac{pr}{t-1} \right\rfloor \binom{t-1}{2} + \binom{pr - \left\lfloor \frac{pr}{t-1} \right\rfloor (t-1)}{2} < \binom{p}{2}\lambda$ to determine the qualification of a parent BIBD (v, b, r, k, λ) as an LD (n, k, p, t), constrained by v ≥ k, v ≥ p and t ≤ min{k, p}, and then considered the case k = t, since t is the smallest number of tickets that can guarantee a win in a lottery. The (15, 140, 28, 3, 4) and (7, 7, 3, 3, 1) BIBDs were selected as parent BIBDs to illustrate the procedure. These BIBDs yielded three potential LDs each. Each of the LDs was completely generated and its properties studied. The three LDs from the (15, 140, 28, 3, 4) design produced the (9, 84, 28, 3, 7), (10, 120, 36, 3, 8) and (11, 165, 45, 3, 9) BIBDs, while those from the (7, 7, 3, 3, 1) design produced the (5, 10, 6, 3, 3), (6, 20, 10, 3, 4) and (7, 35, 15, 3, 5) BIBDs. The produced BIBDs follow the generalization (v + 1, b + r + λ + 1, r + λ + 1, k, λ + 1), where (v, b, r, k, λ) are the parameters of the (9, 84, 28, 3, 7) and (5, 10, 6, 3, 3) BIBDs. All the BIBDs produced are unreduced designs.
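
As a quick arithmetic check (not part of the paper), applying the map (v, b, r, k, λ) → (v + 1, b + r + λ + 1, r + λ + 1, k, λ + 1) to the (9, 84, 28, 3, 7) and (5, 10, 6, 3, 3) designs reproduces the other listed BIBDs, and the standard necessary conditions bk = vr and λ(v − 1) = r(k − 1) hold throughout:

```python
def next_design(v, b, r, k, lam):
    """Apply the generalization (v+1, b+r+lam+1, r+lam+1, k, lam+1)."""
    return v + 1, b + r + lam + 1, r + lam + 1, k, lam + 1

def admissible(v, b, r, k, lam):
    """Standard necessary conditions for BIBD parameters."""
    return b * k == v * r and lam * (v - 1) == r * (k - 1)

for params in [(9, 84, 28, 3, 7), (5, 10, 6, 3, 3)]:
    chain = [params]
    for _ in range(2):
        chain.append(next_design(*chain[-1]))
    print(chain, all(admissible(*p) for p in chain))
# (9, 84, 28, 3, 7) -> (10, 120, 36, 3, 8) -> (11, 165, 45, 3, 9)   True
# (5, 10, 6, 3, 3)  -> (6, 20, 10, 3, 4)   -> (7, 35, 15, 3, 5)     True
```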

On the Quantizer Design for Base Station Cooperation Systems with SC-FDE Techniques

By employing BS (Base Station) cooperation, we can substantially increase the spectral efficiency and capacity of cellular systems. The signals received at each BS are sent to a central unit that performs the separation of the different MTs (Mobile Terminals) using the same physical channel. However, those signals need to be sampled and quantized accurately while keeping the backhaul communication requirements low. In this paper we consider the optimization of the quantizers for BS cooperation systems. Four different quantizer types are analyzed and optimized to achieve better SQNR (Signal-to-Quantization Noise Ratio) and BER (Bit Error Rate) performance.
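
The four quantizer types studied in the paper are not reproduced here; as a baseline illustration of the SQNR metric, the sketch below applies a plain uniform mid-rise quantizer to a Gaussian stand-in for a received BS signal (all parameter values are hypothetical).

```python
import numpy as np

def uniform_quantize(x, n_bits, full_scale):
    """Uniform mid-rise quantizer with 2**n_bits levels, clipped to +/- full_scale."""
    step = 2.0 * full_scale / (2 ** n_bits)
    xq = step * (np.floor(x / step) + 0.5)
    return np.clip(xq, -full_scale + step / 2, full_scale - step / 2)

def sqnr_db(x, xq):
    """Signal-to-quantization-noise ratio in dB."""
    noise = x - xq
    return 10.0 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 100_000)  # Gaussian stand-in for a received BS signal
for bits in (4, 6, 8):
    q = uniform_quantize(signal, bits, full_scale=4.0)
    print(bits, "bits ->", round(sqnr_db(signal, q), 1), "dB")
```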

System-Level Energy Estimation for SoC based on the Dynamic Behavior of Embedded Software

This paper describes a system-level SoC energy consumption estimation method based on the dynamic behavior of embedded software in the early stages of SoC development. A major problem in SoC development is rework caused by unreliable energy consumption estimates at these early stages. The energy consumption of an SoC used in embedded systems is strongly affected by the dynamic behavior of the software. In the early stages of SoC development, modeling with a high level of abstraction is required both for the dynamic behavior of the software and for the behavior of the SoC. We estimate the energy consumption by a UML model-based simulation. The proposed method is applied to an actual embedded system in an MFP. The energy consumption estimate for the SoC is more accurate than with conventional methods, and the proposed method is promising for reducing the chance of development rework in SoC development.
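
The UML model-based simulation flow is specific to the paper; the sketch below only illustrates the underlying idea of weighting per-state power figures by how long the software keeps the SoC in each state, with entirely hypothetical state names, power values and trace.

```python
# Hypothetical power model: average SoC power (mW) in each operating state.
power_mw = {"idle": 12.0, "dsp_active": 180.0, "io_transfer": 95.0}

# Hypothetical behavior trace from a model-based simulation: (state, duration in ms).
trace = [("idle", 40.0), ("dsp_active", 15.0), ("io_transfer", 8.0), ("idle", 60.0)]

def estimate_energy_mj(power_model, behavior_trace):
    """Energy = sum over the trace of (state power) x (time spent in that state)."""
    return sum(power_model[state] * dur_ms for state, dur_ms in behavior_trace) / 1000.0

print(f"estimated energy: {estimate_energy_mj(power_mw, trace):.2f} mJ")  # 4.66 mJ
```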

A Novel Convergence Accelerator for the LMS Adaptive Algorithm

The least mean square (LMS) algorithm is one of the most well-known algorithms for mobile communication systems due to its implementation simplicity. However, its main limitation is its relatively slow convergence rate. In this paper, a booster based on the concept of Markov chains is proposed to speed up the convergence rate of LMS algorithms. The nature of Markov chains makes it possible to exploit past information in the updating process. Moreover, since the transition matrix has a smaller variance than the weight itself by the central limit theorem, the weight transition matrix converges faster than the weight itself. The proposed Markov-chain-based booster thus has the ability to track variations in signal characteristics and, at the same time, accelerate the rate of convergence of LMS algorithms. Simulation results show that, when the Markov-chain-based booster is applied, the LMS algorithm converges faster and approaches the Wiener solution more closely. The mean square error is also remarkably reduced while the convergence rate is improved.
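
For reference, the baseline recursion that the booster accelerates is the standard LMS weight update w ← w + μ e x, with e the instantaneous error; the Markov-chain booster itself is specific to the paper and is not sketched here. A minimal NumPy illustration on a toy system identification task:

```python
import numpy as np

def lms(x, d, num_taps=4, mu=0.01):
    """Standard LMS adaptive filter: w <- w + mu * e * x_vec,
    where e = d[n] - w . x_vec is the instantaneous error."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        x_vec = x[n - num_taps + 1:n + 1][::-1]  # most recent sample first
        e = d[n] - w @ x_vec
        w += mu * e * x_vec
    return w

# Toy system identification: recover an unknown FIR channel from its noisy output.
rng = np.random.default_rng(1)
h = np.array([0.8, -0.4, 0.2, 0.1])                    # unknown channel
x = rng.normal(size=5000)
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.normal(size=len(x))
print(np.round(lms(x, d), 2))                          # close to h after convergence
```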

Flow and Heat Transfer of a Nanofluid over a Shrinking Sheet

The problem of laminar fluid flow resulting from the shrinking of a permeable surface in a nanofluid has been investigated numerically. The model used for the nanofluid incorporates the effects of Brownian motion and thermophoresis. A similarity solution is presented which depends on the mass suction parameter S, Prandtl number Pr, Lewis number Le, Brownian motion number Nb and thermophoresis number Nt. It was found that the reduced Nusselt number is a decreasing function of each of these dimensionless numbers.

Hierarchical PSO-Adaboost Based Classifiers for Fast and Robust Face Detection

We propose a fast and robust hierarchical face detection system that finds and localizes face images with a cascade of classifiers. Three modules contribute to the efficiency of our detector. First, heterogeneous feature descriptors are exploited to enrich the feature types and feature numbers available for face representation. Second, a PSO-Adaboost algorithm is proposed to efficiently select discriminative features from a large pool of available features and combine them into the final ensemble classifier. Compared with standard exhaustive Adaboost feature selection, the new PSO-Adaboost algorithm reduces the training time by up to a factor of 20. Finally, a three-stage hierarchical classifier framework is developed for rapid background removal. In particular, candidate face regions are detected more quickly by using a large window size in the first stage. Nonlinear SVM classifiers are used instead of decision stump functions in the last stage to remove the remaining complex non-face patterns that cannot be rejected in the previous two stages. Experimental results show that our detector achieves superior performance on the CMU+MIT frontal face dataset.
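
The three concrete stages are specific to the paper; the sketch below only illustrates the generic attentional-cascade idea they build on: cheap early stages reject most background windows so that the expensive final classifier sees only a few survivors. The stage functions and thresholds are hypothetical stand-ins.

```python
def cascade_detect(window, stages):
    """Generic attentional cascade: each stage scores the window; reject it as
    background as soon as a score falls below that stage's threshold, so the
    expensive later stages only see windows that survived the cheap ones."""
    for stage_fn, threshold in stages:
        if stage_fn(window) < threshold:
            return False   # early rejection
    return True            # survived every stage: report as a face candidate

# Hypothetical stand-ins for three stages (coarse scan, boosted ensemble, SVM).
stages = [
    (lambda w: w["coarse_score"], 0.2),
    (lambda w: w["boost_score"], 0.5),
    (lambda w: w["svm_score"], 0.0),
]
print(cascade_detect({"coarse_score": 0.9, "boost_score": 0.7, "svm_score": 0.3}, stages))  # True
print(cascade_detect({"coarse_score": 0.1, "boost_score": 0.7, "svm_score": 0.3}, stages))  # False
```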

Accelerating GLA with an M-Tree

In this paper, we propose a novel improvement to the generalized Lloyd algorithm (GLA). Our algorithm makes use of an M-tree index built on the codebook, which makes it possible to reduce the number of distance computations when searching for the nearest code words. Our method does not impose the use of any specific distance function but works with any metric, making it more general than many other fast GLA variants. Finally, we present the positive results of our performance experiments.
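
As context, a plain GLA iteration alternates nearest-code-word assignment, which is exactly the step the M-tree accelerates by pruning distance computations, with centroid updates. The sketch below uses brute-force nearest-neighbour search in place of the paper's M-tree, on hypothetical 2-D data:

```python
import numpy as np

def gla(data, codebook_size, iterations=20, seed=0):
    """Generalized Lloyd algorithm (k-means style):
    1) assign each vector to its nearest code word  <- the step an M-tree speeds up
    2) move each code word to the centroid of its cell."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), codebook_size, replace=False)]
    for _ in range(iterations):
        # Brute-force nearest-neighbour search; the paper replaces this with M-tree queries.
        dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(codebook_size):
            members = data[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, labels

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.3, size=(200, 2)) for m in (0.0, 3.0, 6.0)])
codebook, labels = gla(data, codebook_size=3)
print(np.round(codebook, 1))  # roughly the three cluster centres
```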

Border Limited Adaptive Subdivision Based On Triangle Meshes

Subdivision is a method for creating a smooth surface from a coarse mesh by subdividing the entire mesh. Conventional ways of computing and rendering such surfaces are costly in both memory and computational time, as the number of mesh elements increases exponentially. Adaptive subdivision reduces the computational time and memory by subdividing only selected areas. In this paper, a new adaptive subdivision method for triangle meshes is introduced. The method defines new adaptive subdivision rules by considering the properties of each triangle's neighbors and is embedded in traditional Loop subdivision. It prevents some undesirable side effects that appear in conventional adaptive approaches. Models subdivided by our method are compared with those produced by other adaptive subdivision methods.

Fast Painting with Different Colors Using Cross Correlation in the Frequency Domain

In this paper, a new technique for fast painting with different colors is presented. The idea of painting relies on applying masks with different colors to the background. Fast painting is achieved by applying these masks in the frequency domain instead of the spatial (time) domain. New colors can be generated automatically as a result of the cross correlation operation. This idea has previously been applied successfully to faster detection of specific data (faces, objects, patterns, and codes) using neural algorithms. Here, instead of performing cross correlation between the input data (e.g., an image or a stream of sequential data) and the weights of neural networks, the cross correlation is performed between the colored masks and the background. Furthermore, this approach is developed to reduce the computation steps required by the painting operation. The principle of the divide-and-conquer strategy is applied through background decomposition: each background is divided into small sub-backgrounds, and each sub-background is then processed separately by a single faster painting algorithm. Moreover, the fastest painting is achieved by using parallel processing techniques to paint the resulting sub-backgrounds with the same number of faster painting algorithms. In contrast to using the faster painting algorithm alone, the speed-up ratio increases with the size of the background when the faster painting algorithm is combined with background decomposition. Simulation results show that painting in the frequency domain is faster than painting in the spatial domain.
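
As an illustration of the frequency-domain step (not the paper's full painting pipeline), the cross-correlation of a mask with a background can be computed by multiplying the FFT of the background with the conjugate FFT of the zero-padded mask; the mask and background below are hypothetical.

```python
import numpy as np

def cross_correlate_fft(background, mask):
    """Cross-correlation via the frequency domain:
    corr = IFFT( FFT(background) * conj(FFT(mask)) ),
    with the mask zero-padded to the background size (circular correlation)."""
    padded = np.zeros_like(background, dtype=float)
    padded[:mask.shape[0], :mask.shape[1]] = mask
    spectrum = np.fft.fft2(background) * np.conj(np.fft.fft2(padded))
    return np.real(np.fft.ifft2(spectrum))

background = np.random.default_rng(0).random((256, 256))
mask = np.ones((8, 8)) / 64.0          # simple averaging mask as a stand-in
corr = cross_correlate_fft(background, mask)
print(corr.shape, round(float(corr.max()), 3))
```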

Artificial Neural Networks for Classifying Magnetic Measurements in Tokamak Reactors

This paper is mainly concerned with the application of a novel data interpretation technique to the characterization and classification of measurements of plasma columns in Tokamak reactors for nuclear fusion applications. The proposed method exploits several concepts derived from soft computing theory. In particular, Artificial Neural Networks are exploited to classify magnetic variables useful for determining the shape and position of the plasma with reduced computational complexity. The proposed technique is used to analyze simulated databases of plasma equilibria based on the ITER geometry configuration. As well as demonstrating the successful recovery of scalar equilibrium parameters, we show that the technique can yield practical advantages compared with earlier methods.

Efficient Tools for Managing Uncertainties in Design and Operation of Engineering Structures

Actual loads, material characteristics and other quantities often differ from their design values. This can cause impaired function, shorter life or failure of a civil engineering structure, a machine, a vehicle or another appliance. The paper identifies the main causes of these uncertainties and deviations and presents a systematic approach and efficient tools for their elimination or for mitigating their consequences. Emphasis is put on the design stage, which is the most important for ensuring reliability. Principles of robust design and important tools are explained, including FMEA, sensitivity analysis and probabilistic simulation methods. The lifetime prediction of long-life objects can be improved by long-term monitoring of the load response and damage accumulation in operation. The condition evaluation of engineering structures, such as bridges, is often based on visual inspection and verbal description. Here, methods based on fuzzy logic can reduce the subjective influences.
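
As one example of the probabilistic simulation tools mentioned, a Monte Carlo sketch can estimate the probability that a random load effect exceeds a random resistance; the distributions and their parameters below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Hypothetical distributions: resistance and load effect in kN.
resistance = rng.normal(loc=300.0, scale=25.0, size=n)
load_effect = rng.normal(loc=210.0, scale=35.0, size=n)

# Limit state g = R - S; failure occurs when g < 0.
failures = np.count_nonzero(resistance - load_effect < 0.0)
print(f"estimated failure probability: {failures / n:.2e}")
```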

An Optimization Analysis on an Automotive Component with Fatigue Constraint Using HyperWorks Software for Environmental Sustainability

The finite element analysis (FEA) software HyperWorks is utilized in redesigning an automotive component to reduce its mass. Reducing component mass contributes to environmental sustainability by saving the world's valuable metal resources and by reducing carbon emissions through improved overall vehicle fuel efficiency. A shape optimization analysis was performed on a rear spindle component. Pre-processing and solving were performed using HyperMesh and RADIOSS, respectively. Shape variables were defined using HyperMorph. The optimization solver OptiStruct was then used with fatigue life set as a design constraint. Since Stress-Number of Cycles (S-N) theory deals with uniaxial stress, the signed von Mises stress on the component was used to look up damage on the S-N curve, with the Gerber criterion applied for mean stress correction. The optimization analysis resulted in a mass reduction of 24% relative to the original mass. The study showed that the adopted approach has high potential for supporting environmental sustainability.
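
The Gerber mean stress correction referred to in the abstract maps a cycle with stress amplitude σ_a and mean stress σ_m to an equivalent fully reversed amplitude σ_ar = σ_a / (1 − (σ_m/σ_u)²), which is then looked up on the S-N curve; a small sketch with hypothetical stress values:

```python
def gerber_equivalent_alternating(sigma_a, sigma_m, sigma_u):
    """Gerber mean stress correction: the fully reversed stress amplitude causing
    the same fatigue damage as the cycle (sigma_a, sigma_m), for an ultimate
    strength sigma_u:  sigma_ar = sigma_a / (1 - (sigma_m / sigma_u)**2)."""
    return sigma_a / (1.0 - (sigma_m / sigma_u) ** 2)

# Hypothetical cycle: 120 MPa amplitude with a 90 MPa mean, ultimate strength 500 MPa.
sigma_ar = gerber_equivalent_alternating(120.0, 90.0, 500.0)
print(round(sigma_ar, 1), "MPa")  # this value is then looked up on the S-N curve
```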

Efficient System for Speech Recognition using General Regression Neural Network

In this paper we present an efficient system for speaker-independent speech recognition based on a neural network approach. The proposed architecture comprises two phases: a preprocessing phase, which consists of segmental normalization and feature extraction, and a classification phase, which uses a neural network based on nonparametric density estimation, namely the general regression neural network (GRNN). The performance of the proposed model is compared with similar recognition systems based on the Multilayer Perceptron (MLP), the Recurrent Neural Network (RNN) and the well-known discrete Hidden Markov Model (HMM-VQ), which we also implemented. Experimental results obtained with Arabic digits show that the use of nonparametric density estimation with an appropriate smoothing factor (spread) improves the generalization power of the neural network. The word error rate (WER) is reduced significantly compared with the baseline HMM method. GRNN computation is a successful alternative to the other neural networks and the DHMM.
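
The GRNN output is the standard kernel-weighted average of the training targets, with the smoothing factor (spread) controlling the Gaussian kernel width; the sketch below shows that estimator on toy 2-D features and one-hot class targets, not the paper's actual Arabic-digit front end.

```python
import numpy as np

def grnn_predict(x, train_x, train_y, spread=0.5):
    """General regression neural network (GRNN):
    y(x) = sum_i y_i * exp(-d_i^2 / (2*spread^2)) / sum_i exp(-d_i^2 / (2*spread^2)),
    where d_i is the distance between x and the i-th training pattern."""
    d2 = np.sum((train_x - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * spread ** 2))
    return (w @ train_y) / np.sum(w)

# Toy example: 2-D feature vectors with one-hot class targets for three "digits".
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1], [2.0, 0.0]])
train_y = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
scores = grnn_predict(np.array([0.95, 1.0]), train_x, train_y, spread=0.3)
print(scores, "-> predicted class", int(np.argmax(scores)))
```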

Open Cloud Computing with Fault Tolerance

Cloud Computing (CC) has become one of the most talked-about emerging technologies, providing powerful computing and large storage environments through the use of the Internet. Cloud computing offers dynamically scalable computing resources as a service. It brings economic benefits to individuals and businesses that adopt the technology. In theory, the adoption of cloud computing reduces capital and operational expenditure on information technology. For this to become a reality, a number of challenges need to be solved while at the same time addressing the concerns that consumers have about cloud computing. This paper looks at cloud computing in general, then highlights its challenges, and finally suggests solutions to some of them.