The Effect of Mixture Velocity and Droplet Diameter on Oil-water Separator using Computational Fluid Dynamics (CFD)

The characteristics of fluid flow and phase separation in an oil-water separator were numerically analysed as part of the work presented herein. Simulations were performed for different mixture velocities and droplet diameters, and the influence of these parameters on the required separator geometry was studied. The simulations were carried out using the software package Fluent 6.2, which is designed for numerical simulation of fluid flow and mass transfer. The model consisted of a cylindrical horizontal separator. A tetrahedral mesh was employed in the computational domain. Two-phase flow was simulated with the two-fluid model, with turbulence effects accounted for by the k-ε model. The results showed a strong dependence of phase separation on mixture velocity and droplet diameter. An increase in mixture velocity slows phase separation and consequently requires a weir of greater height, whereas an increase in droplet diameter produces better phase separation. The simulations agree with results reported in the literature and show that CFD can be a useful tool for studying a horizontal oil-water separator.
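
As a rough illustration of why larger droplets separate more easily, the minimal sketch below computes the Stokes rise velocity of an oil droplet in water; the fluid properties and the 0.5 m rise distance are assumed illustrative values, not parameters from the study.

```python
# Stokes rise velocity of an oil droplet in water: separation speed grows with
# the square of the droplet diameter. Property values below are illustrative.

def stokes_rise_velocity(d, rho_water=998.0, rho_oil=850.0, mu_water=1.0e-3, g=9.81):
    """Terminal rise velocity (m/s) of an oil droplet of diameter d (m) in water."""
    return g * d**2 * (rho_water - rho_oil) / (18.0 * mu_water)

for d_um in (50, 100, 200, 500):
    d = d_um * 1e-6
    v = stokes_rise_velocity(d)
    # Residence time needed to rise 0.5 m (an assumed vertical distance)
    print(f"d = {d_um:3d} um -> v = {v*1000:.3f} mm/s, time to rise 0.5 m = {0.5/v:.0f} s")
```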

Prediction of Fatigue Crack Growth of Aeronautical Aluminum Alloy

In this paper, the fatigue crack growth behavior of the aeronautical aluminum alloy 2024 T351 is studied. The effects of various loading and geometrical parameters, such as stress ratio and loading amplitude, are examined. Constant-amplitude fatigue crack growth is studied using the AFGROW code with the NASGRO model. The effect of the stress ratio is highlighted, where one notices a shift of the crack growth curves. A comparative study of the L-T and T-L orientations is presented and shows their influence on fatigue life; the L-T orientation exhibits better fatigue crack growth resistance. Crack closure effects are evident in the Paris regime, whereas no crack closure phenomena are present at high stress intensity factors.
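
As background for constant-amplitude crack growth prediction, the sketch below integrates the simple Paris law; the constants, load range and geometry factor are illustrative assumptions and do not reproduce the 2024 T351 data or the NASGRO model (which additionally accounts for stress ratio and crack closure).

```python
import numpy as np

# Minimal constant-amplitude crack growth sketch using the Paris law
# da/dN = C * (dK)^m with dK = Y * dS * sqrt(pi * a). All values are illustrative.

C, m = 1e-11, 3.0        # Paris constants (SI units: m/cycle, MPa*sqrt(m))
dS = 100.0               # stress range, MPa
Y = 1.0                  # geometry factor (assumed)
a, a_final = 1e-3, 20e-3 # initial and final crack length, m

N = 0.0
da = 1e-5                # crack increment per integration step, m
while a < a_final:
    dK = Y * dS * np.sqrt(np.pi * a)
    N += da / (C * dK**m)   # cycles needed to grow the crack by da
    a += da

print(f"Estimated life: {N:.3e} cycles to grow from 1 mm to 20 mm")
```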

Image Adaptive Watermarking with Visual Model in Orthogonal Polynomials based Transformation Domain

In this paper, an image-adaptive, invisible digital watermarking algorithm based on the Orthogonal Polynomials based Transformation (OPT) is proposed for copyright protection of digital images. The proposed algorithm utilizes a visual model to determine the watermarking strength necessary to invisibly embed the watermark in the mid-frequency AC coefficients of the cover image, chosen with a secret key. The visual model is designed to generate a Just Noticeable Distortion (JND) mask by analyzing low-level image characteristics such as textures, edges and luminance of the cover image in the orthogonal polynomials based transformation domain. Since the secret key is required for both embedding and extraction of the watermark, an unauthorized user cannot extract the embedded watermark. The proposed scheme is robust to common image processing distortions such as filtering, JPEG compression and additive noise. Experimental results show that the quality of OPT-domain watermarked images is better than that of their DCT counterparts.
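
To make the embedding idea concrete, the sketch below adds a key-selected, JND-scaled perturbation to mid-frequency AC coefficients of an 8x8 block; the DCT is used only as a stand-in transform and the JND value is a placeholder, since the paper's OPT and its visual model are not reproduced here.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Key-based, JND-scaled additive embedding in a block transform domain (sketch).

def embed_block(block, bits, key, alpha=1.0):
    coeffs = dctn(block, norm='ortho')
    rng = np.random.default_rng(key)
    # Mid-frequency AC coefficient positions, selected with the secret key.
    mid = [(u, v) for u in range(8) for v in range(8) if 3 <= u + v <= 6]
    picks = rng.choice(len(mid), size=len(bits), replace=False)
    jnd = 2.0                                  # placeholder JND threshold
    for bit, p in zip(bits, picks):
        u, v = mid[p]
        coeffs[u, v] += alpha * jnd * (1 if bit else -1)
    return idctn(coeffs, norm='ortho')

block = np.random.default_rng(0).uniform(0, 255, (8, 8))
watermarked = embed_block(block, bits=[1, 0, 1], key=42)
print(np.abs(watermarked - block).max())       # small, key-dependent distortion
```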

Hybrid Genetic-Simulated Annealing Approach for Fractal Image Compression

In this paper, a hybrid technique combining a Genetic Algorithm and Simulated Annealing (HGASA) is applied to Fractal Image Compression (FIC). Fractal Image Compression is a spatial-domain compression technique whose main drawback is the long computational time caused by the global search for matches between range blocks and domain blocks. With the help of this hybrid evolutionary algorithm, an effort is made to reduce the search complexity of this matching. The concept of Simulated Annealing (SA) is incorporated into the Genetic Algorithm (GA) in order to avoid premature convergence of the strings. To improve the computational time while keeping acceptable quality of the decoded image, the HGASA technique is proposed. Experimental results show that the proposed HGASA outperforms GA in terms of PSNR for Fractal Image Compression.
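
The sketch below shows one way SA acceptance can be folded into a GA loop: offspring worse than their parent may still be kept with a temperature-dependent probability, which counters premature convergence. A toy bit-string objective stands in for the range/domain block matching error of FIC, and all parameter values are assumptions.

```python
import math, random

def fitness(x):                        # toy objective: number of 1-bits
    return sum(x)

def hgasa(n_bits=32, pop_size=20, generations=200, T0=2.0, cooling=0.98):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    T = T0
    for _ in range(generations):
        new_pop = []
        for parent in pop:
            mate = random.choice(pop)
            cut = random.randrange(1, n_bits)          # one-point crossover
            child = parent[:cut] + mate[cut:]
            i = random.randrange(n_bits)               # single-bit mutation
            child[i] ^= 1
            delta = fitness(child) - fitness(parent)
            # SA-style acceptance: always keep improvements, sometimes keep worse
            if delta >= 0 or random.random() < math.exp(delta / T):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop = new_pop
        T *= cooling                                   # cool the temperature
    return max(pop, key=fitness)

best = hgasa()
print(fitness(best), "of 32 bits set")
```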

Surface and Guided Waves in Composites with Nematic Coatings

The theoretical prediction of acoustical polarization effects in heterogeneous composites made of thick elastic solids with thin nematic films is presented. The numerical-analytical solution to the problem of the propagation of different waves exhibits some new physical effects in the low-frequency domain: the appearance of a critical frequency and the existence of a narrow transition zone where the wave rapidly changes its speed. The associated wave attenuation is highly perturbed in this zone. We also show the possible appearance of critical frequencies where the attenuation changes sign. Numerical results of a parametric analysis are presented and discussed.

Outlier Pulse Detection and Feature Extraction for Wrist Pulse Analysis

Wrist pulse analysis for the identification of health status is found in ancient Indian as well as Chinese literature. Preprocessing of the wrist pulse is necessary to remove outlier pulses and fluctuations prior to the analysis of the pulse pressure signal. This paper discusses the identification of irregular pulses present in the pulse series and the intricacies associated with the extraction of time-domain pulse features. A Dynamic Time Warping (DTW) approach is used for the identification of outlier pulses in the wrist pulse series. The ambiguity present in the identification of pulse features is resolved with the help of the first derivative of the ensemble average of the wrist pulse series. An algorithm for detecting the tidal and dicrotic notches in individual wrist pulse segments is proposed.
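
As an illustration of the DTW-based outlier test, the sketch below computes the classic DTW distance between a template pulse and two candidate pulses; pulses whose distance exceeds a threshold would be flagged as outliers. The template, the noise level and the implicit threshold are illustrative assumptions, not the paper's data.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic O(n*m) dynamic time warping distance between two 1-D sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 100)
template = np.sin(2 * np.pi * t) * np.exp(-2 * t)            # stand-in pulse shape
normal   = np.interp(np.linspace(0, 1, 90), t, template)     # same shape, resampled
outlier  = template + 0.8 * np.random.default_rng(1).normal(size=100)

print("normal :", dtw_distance(template, normal))            # small distance
print("outlier:", dtw_distance(template, outlier))           # much larger distance
```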

Motion Analysis for Duplicate Frame Removal in Wireless Capsule Endoscope Video

Wireless Capsule Endoscopy (WCE) has rapidly found wide application in the medical domain over the last ten years thanks to its noninvasiveness for patients and its support for thorough inspection of a patient's entire digestive system, including the small intestine. However, one of the main barriers to an efficient clinical inspection procedure is the large amount of effort required from clinicians to inspect the huge amount of data collected during an examination, i.e., over 55,000 video frames. In this paper, we propose a method to compute meaningful motion changes of the WCE by analyzing the obtained video frames based on regional optical flow estimations. The computed motion vectors are used to remove duplicate video frames caused by the WCE's imaging nature, such as repetitive forward-backward motions from peristaltic movements. The motion vectors are derived by calculating directional component vectors in four local regions. Our experiments are performed on the small intestine area, which is of main interest to clinical experts when using WCEs, and our experimental results show significant frame reductions compared with a simple frame-to-frame similarity-based image reduction method.
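
The sketch below illustrates the regional-motion idea: estimate dense optical flow between consecutive frames, average it in four quadrants, and treat frames whose regional motion magnitudes stay below a threshold as near-duplicates. The Farneback parameters and the threshold are assumed values, not the paper's settings.

```python
import numpy as np
import cv2

def regional_motion(prev_gray, curr_gray):
    """Mean displacement vector (dx, dy) in each of the four image quadrants."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    regions = [flow[:h//2, :w//2], flow[:h//2, w//2:],
               flow[h//2:, :w//2], flow[h//2:, w//2:]]
    return [r.reshape(-1, 2).mean(axis=0) for r in regions]

def is_duplicate(prev_gray, curr_gray, thresh=0.5):
    """Flag the frame pair as duplicate if all regional motions are tiny."""
    return all(np.linalg.norm(v) < thresh
               for v in regional_motion(prev_gray, curr_gray))

rng = np.random.default_rng(0)
frame_a = rng.integers(0, 256, (128, 128), dtype=np.uint8)
frame_b = np.roll(frame_a, 1, axis=1)        # simulated small lateral motion
print(is_duplicate(frame_a, frame_a))        # identical frames -> duplicate
print(is_duplicate(frame_a, frame_b))        # moving frames -> likely kept
```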

PeliGRIFF: A Parallel DEM-DLM/FD Method for DNS of Particulate Flows with Collisions

An original Direct Numerical Simulation (DNS) method to tackle the problem of particulate flows at moderate to high concentration and finite Reynolds number is presented. Our method is built on the framework established by Glowinski and his coworkers [1] in the sense that we use their Distributed Lagrange Multiplier/Fictitious Domain (DLM/FD) formulation and their operator-splitting idea, but differs in the treatment of particle collisions. The novelty of our contribution lies in replacing the simple artificial repulsive force based collision model usually employed in the literature by an efficient Discrete Element Method (DEM) granular solver. The use of our DEM solver enables us to consider particles of arbitrary shape (at least convex) and to account for actual contacts, in the sense that particles actually touch each other, in contrast with the simple repulsive force based collision model. We recently upgraded our serial code, GRIFF [2], to full MPI capabilities. Our new code, PeliGRIFF, is developed under the framework of the full MPI open source platform PELICANS [3]. The new MPI capabilities of PeliGRIFF open new perspectives in the study of particulate flows and significantly increase the number of particles that can be considered in a full DNS approach: O(100000) in 2D and O(10000) in 3D. Results on the 2D/3D sedimentation/fluidization of isometric polygonal/polyhedral particles with collisions are presented.
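
To show what a DEM-style contact looks like compared with a purely repulsive force barrier, the sketch below evaluates a linear spring-dashpot normal contact force between two overlapping spheres; the stiffness and damping values are illustrative, and the paper's solver additionally handles arbitrary convex shapes and tangential friction, which are not reproduced here.

```python
import numpy as np

def contact_force(x1, x2, v1, v2, r1, r2, k=1e4, c=5.0):
    """Normal contact force on particle 1 from a linear spring-dashpot model."""
    d = x2 - x1
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros(3)                    # particles not in contact
    n = d / dist                              # unit normal from particle 1 to 2
    rel_vn = np.dot(v2 - v1, n)               # normal relative velocity
    fn = k * overlap - c * rel_vn             # elastic spring + viscous dashpot
    return -fn * n                            # repulsive force on particle 1

f = contact_force(np.zeros(3), np.array([0.009, 0.0, 0.0]),
                  np.zeros(3), np.zeros(3), r1=0.005, r2=0.005)
print(f)   # force pushing particle 1 away from particle 2
```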

An Adaptive Mammographic Image Enhancement in Orthogonal Polynomials Domain

X-ray mammography is the most effective method for the early detection of breast diseases. However, typical diagnostic signs such as microcalcifications and masses are difficult to detect because mammograms are of low contrast and noisy. In this paper, a new algorithm for image denoising and enhancement in the Orthogonal Polynomials Transformation (OPT) domain is proposed to help radiologists screen mammograms. In this method, a set of OPT edge coefficients is scaled to a new set by a scale factor called the OPT scale factor. The new set of coefficients is then inverse transformed, resulting in a contrast-improved image. Applications of the proposed method to mammograms with subtle lesions are shown. To validate the effectiveness of the proposed method, we compare the results with those obtained by the Histogram Equalization (HE) and Unsharp Masking (UM) methods. Our preliminary results strongly suggest that the proposed method offers considerably improved enhancement capability over the HE and UM methods.
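
The sketch below illustrates the coefficient-scaling idea on a single block: transform, multiply the AC (edge/detail) coefficients by a scale factor while preserving the DC term, and inverse transform. The DCT stands in for the paper's orthogonal polynomials transform and the scale factor is an assumed value.

```python
import numpy as np
from scipy.fft import dctn, idctn

def enhance_block(block, scale=1.8):
    """Boost AC coefficients of an 8x8 block while preserving mean intensity."""
    coeffs = dctn(block.astype(float), norm='ortho')
    dc = coeffs[0, 0]
    coeffs *= scale              # scale edge/detail coefficients
    coeffs[0, 0] = dc            # keep the DC (mean) term unchanged
    return np.clip(idctn(coeffs, norm='ortho'), 0, 255)

block = np.outer(np.linspace(90, 110, 8), np.ones(8))    # low-contrast ramp
print(np.ptp(block), np.ptp(enhance_block(block)))       # contrast before/after
```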

Finding a Solution, all Solutions, or the Most Probable Solution to a Temporal Interval Algebra Network

Over the years, many implementations have been proposed for solving IA networks. These implementations are concerned with finding a solution efficiently. The primary goals of our implementation are simplicity and ease of use. We present an IA network implementation based on finite-domain non-binary CSPs and constraint logic programming. The implementation has a GUI which permits the drawing of arbitrary IA networks. We then show how the implementation can be extended to find all the solutions to an IA network. One application of finding all the solutions is solving probabilistic IA networks.
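
As a rough illustration of the "one solution versus all solutions" idea, the sketch below enumerates every solution of a small finite-domain CSP by backtracking; the variables, domains and ordering constraints are invented for illustration, and the paper's constraint logic programming machinery and Allen's interval relations are not reproduced here.

```python
def solve(variables, domains, constraints, assignment=None):
    """Yield every complete assignment that satisfies all binary constraints."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        yield dict(assignment)
        return
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        ok = all(check(assignment[a], assignment[b])
                 for (a, b), check in constraints.items()
                 if a in assignment and b in assignment)
        if ok:
            yield from solve(variables, domains, constraints, assignment)
        del assignment[var]

variables = ['X', 'Y', 'Z']
domains = {v: range(1, 5) for v in variables}
constraints = {('X', 'Y'): lambda x, y: x < y,     # X before Y
               ('Y', 'Z'): lambda y, z: y < z}     # Y before Z

solutions = list(solve(variables, domains, constraints))
print("first solution:", solutions[0])             # stop here for "a solution"
print("number of solutions:", len(solutions))      # keep going for "all solutions"
```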

Simulation of a Sustainable Cement Supply Chain: Proposal Model Review

In recent years, sustainable supply chain management (SSCM) has been widely researched in the academic domain. However, due to the traditional operational role and the complexity of supply chain management in the cement industry, relatively little research has been conducted on cement supply chain simulation integrated with sustainability criteria. This paper analyses cement supply chain operations using the push-pull supply chain framework, the Life Cycle Assessment (LCA) methodology and a proposed integration approach, and proposes three supply chain scenarios based on Make-To-Stock (MTS), Pack-To-Order (PTO) and Grind-To-Order (GTO) strategies. A Discrete-Event Simulation (DES) model of SSCM is constructed using the Arena software to implement the three target scenarios. We conclude from the simulation results that GTO is the optimal supply chain strategy, demonstrating the best economic, ecological and social performance in the cement industry.

A Hybrid Differential Transform Approach for Laser Heating of a Double-Layered Thin Film

This paper adopts the hybrid differential transform approach for studying heat transfer problems in a gold/chromium thin film with an ultra-short-pulsed laser beam projected on the gold side. The physical system, formulated based on the hyperbolic two-step heat transfer model, covers three characteristics: (i) coupling effects between the electron and lattice systems, (ii) thermal wave propagation in metals, and (iii) radiation effects along the interface. The differential transform method is used to transform the governing equations in the time domain into spectrum equations, which are further discretized in the space domain by the finite difference method. The results, obtained through a recursive process, show that the electron temperature in the gold film can rise to several thousand degrees before its electron and lattice systems reach equilibrium at only several hundred degrees. The electron and lattice temperatures in the chromium film are much lower than those in the gold film.
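
To illustrate the electron/lattice coupling qualitatively, the sketch below integrates a lumped (zero-dimensional) two-temperature model: the electron gas absorbs the pulse and then transfers its energy to the lattice through the coupling term G*(Te - Tl). The material constants are rough literature-order values for gold and the pulse parameters are assumed; the paper's hyperbolic two-step model, spatial diffusion, the chromium layer and interface radiation are not reproduced here.

```python
import numpy as np

gamma = 70.0        # electron heat capacity coefficient, J m^-3 K^-2 (Ce = gamma*Te)
C_l   = 2.5e6       # lattice heat capacity, J m^-3 K^-1
G     = 2.6e16      # electron-phonon coupling, W m^-3 K^-1
S0, t0, tp = 3e21, 200e-15, 100e-15   # peak source, pulse centre, pulse width (assumed)

dt, t_end = 1e-15, 20e-12
Te = Tl = 300.0
Te_max = 0.0
for step in range(int(t_end / dt)):
    t = step * dt
    S = S0 * np.exp(-((t - t0) / tp) ** 2)        # Gaussian volumetric heat source
    dTe = (-G * (Te - Tl) + S) / (gamma * Te) * dt
    dTl = ( G * (Te - Tl)) / C_l * dt
    Te, Tl = Te + dTe, Tl + dTl
    Te_max = max(Te_max, Te)

print(f"peak electron temperature ~ {Te_max:.0f} K, "
      f"final common temperature ~ {Tl:.0f} K")
```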

Parallel Explicit Group Domain Decomposition Methods for the Telegraph Equation

In a previous work, we presented the numerical solution of the two-dimensional second-order telegraph partial differential equation discretized by the centred and rotated five-point finite difference discretizations, namely the explicit group (EG) and explicit decoupled group (EDG) iterative methods, respectively. In this paper, we apply a domain decomposition algorithm to these group schemes to divide the tasks involved in solving the same equation. The objective of this study is to describe the development of the parallel group iterative schemes under the OpenMP programming environment as a way to reduce the computational costs of the solution processes using multicore technologies. A detailed performance analysis of the parallel implementations of the point and group iterative schemes is reported and discussed.

Chances and Challenges of Intelligent Technologies in the Production and Retail Sector

This paper provides an introduction to the evolution of information and communication technology and illustrates its usage in the work domain. The paper is divided into two parts. The first part gives an overview of the different phases of information processing in the work domain. It starts by charting the past and present usage of computers in work environments and shows current technological trends which are likely to influence future business applications. The second part starts by briefly describing how the usage of computers changed business processes in the past, and presents first Ambient Intelligence applications based on identification and localization information, which are already used in the production and retail sector. Based on current systems and prototype applications, the paper gives an outlook on how Ambient Intelligence technologies could change business processes in the future.

Review of Surface Electromyogram Signals: Its Analysis and Applications

Electromyography (EMG) is the study of muscle function through analysis of the electrical activity produced by muscles. This electrical activity, which is displayed in the form of a signal, is the result of neuromuscular activation associated with muscle contraction. The most common techniques of EMG signal recording use surface and needle/wire electrodes, where the latter is usually employed when deep muscles are of interest. This paper focuses on the surface electromyogram (SEMG) signal. During SEMG recording, several problems have to be countered, such as noise, motion artifact and signal instability. Thus, various signal processing techniques have been implemented to produce a reliable signal for analysis. The SEMG signal finds broad application, particularly in the biomedical field. It has been analyzed and studied for various interests such as neuromuscular disease, enhancement of muscular function and human-computer interfaces.
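
As an example of the kind of pre-processing mentioned above, the sketch below applies a 20-450 Hz band-pass filter to keep the usable EMG band and a 50 Hz notch against power-line interference; the cut-off frequencies, sampling rate and synthetic signal are typical values assumed for illustration.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

fs = 1000.0                                   # sampling rate, Hz (assumed)
b_bp, a_bp = butter(4, [20.0, 450.0], btype='bandpass', fs=fs)   # EMG band
b_n,  a_n  = iirnotch(50.0, Q=30.0, fs=fs)                       # power-line notch

t = np.arange(0, 1.0, 1.0 / fs)
raw = (np.random.default_rng(0).normal(size=t.size)     # stand-in EMG activity
       + 0.5 * np.sin(2 * np.pi * 50 * t)               # power-line interference
       + 2.0 * np.sin(2 * np.pi * 1 * t))               # low-frequency motion artifact

clean = filtfilt(b_n, a_n, filtfilt(b_bp, a_bp, raw))    # zero-phase filtering
print(raw.std(), clean.std())
```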

Dynamic Analysis of Porous Media Using Finite Element Method

The mechanical behavior of porous media is governed by the interaction between the solid skeleton and the fluid existing inside its pores. The interaction occurs through the interface of grains and fluid. The traditional analysis methods of porous media, based on the effective stress and Darcy's law, are unable to account for these interactions. For an accurate analysis, the porous medium is represented as a fluid-filled porous solid on the basis of the Biot theory of wave propagation in poroelastic media. In the Biot formulation, the equations of motion of the soil mixture are coupled with the global mass balance equations to describe the realistic behavior of porous media. Because of irregular geometry, the domain is generally treated as an assemblage of finite elements. In this investigation, the numerical formulation of the field equations governing the dynamic response of fluid-saturated porous media is analyzed and employed for the study of transient wave motion. A finite element model is developed and implemented in a computer code called DYNAPM for dynamic analysis of porous media. The weighted residual method with 8-node elements is used for developing the finite element model, and the analysis is carried out in the time domain considering dynamic excitation and gravity loading. The Newmark time integration scheme, an unconditionally stable implicit method, is used to solve the time-discretized equations. Finally, some numerical examples are presented to show the accuracy and capability of the developed model for a wide variety of behaviors of porous media.
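
For reference, the sketch below steps a small linear system M*a + C*v + K*u = f with the Newmark method (average acceleration, beta = 1/4, gamma = 1/2, unconditionally stable); the 2-DOF matrices and step load are illustrative, not the coupled solid-fluid matrices assembled by the DYNAPM code.

```python
import numpy as np

beta, gamma = 0.25, 0.5
dt, n_steps = 0.01, 500

M = np.diag([2.0, 1.0])
K = np.array([[400.0, -200.0], [-200.0, 200.0]])
C = 0.05 * M + 0.002 * K                      # Rayleigh damping (assumed)
f = np.array([0.0, 10.0])                     # constant step load

u = np.zeros(2); v = np.zeros(2)
a = np.linalg.solve(M, f - C @ v - K @ u)     # consistent initial acceleration
K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)

for _ in range(n_steps):
    rhs = (f
           + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
           + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                  + dt * (gamma / (2 * beta) - 1.0) * a))
    u_new = np.linalg.solve(K_eff, rhs)
    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    u, v, a = u_new, v_new, a_new

print("displacement:", u, "static solution:", np.linalg.solve(K, f))
```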

Feature Reduction of Nearest Neighbor Classifiers using Genetic Algorithm

The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and for increasing classification efficiency, it does not necessarily reduce the number of features that must be measured, since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper, a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach, each feature value is first normalized by a linear equation and then scaled by the associated weight prior to training, testing, and classification. A k-nearest neighbor (knn) classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. By this approach, the number of features used in classification can be substantially reduced.
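
The sketch below illustrates the overall loop: a genetic algorithm evolves one weight per feature, each candidate weight vector scales the normalized features, and a knn classifier's cross-validated accuracy serves as the fitness; weights driven towards zero mark features that can be dropped. The GA settings and the iris data set are illustrative stand-ins for the paper's setup.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))    # linear normalization

def fitness(w):
    """Cross-validated knn accuracy with features scaled by the weight vector."""
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X * w, y, cv=5).mean()

pop = rng.uniform(0, 1, size=(20, X.shape[1]))
for _ in range(30):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]                   # keep the best half
    idx_a = rng.integers(0, 10, 20)
    idx_b = rng.integers(0, 10, 20)
    mask = rng.random((20, X.shape[1])) < 0.5
    children = np.where(mask, parents[idx_a], parents[idx_b]) # uniform crossover
    children = children + rng.normal(0, 0.1, children.shape)  # Gaussian mutation
    pop = np.clip(children, 0, 1)

best = pop[np.argmax([fitness(w) for w in pop])]
print("best feature weights:", np.round(best, 2))             # ~0 means droppable
```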

The Weight of Corporate Social Responsibility Indicators in Measurement Procedure

Corporate Social Responsibility (CSR) performance has garnered significant interest during the last two decades, as numerous methodologies have been proposed by Socially Responsible Investment (SRI) indexes. The weight of each indicator is a crucial component of CSR measurement procedures. Based on a previous study, the appropriate weight of each proposed indicator for the Greek telecommunication sector is specified using rank reciprocal weighting. Kendall's Coefficient of Concordance and Spearman's Correlation Coefficient non-parametric tests are adopted to determine the level of consensus among the experts concerning the importance rank of the indicators. The results show that there is no consensus regarding the rank of indicators in most of the stakeholders' domains. An equal weight for all indicators could be proposed as a solution for the lack of consensus among the experts. The study recommends three different equations concerning the adopted weighting approach.
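
As a worked illustration of the two building blocks named above, the sketch below computes rank reciprocal weights (w_i proportional to 1/rank_i) and Kendall's coefficient of concordance W for a small expert-rank matrix; the matrix is invented for illustration and is not the Greek telecommunication-sector survey data.

```python
import numpy as np

def rank_reciprocal_weights(ranks):
    """ranks: 1 = most important. Returns weights that sum to 1."""
    inv = 1.0 / np.asarray(ranks, dtype=float)
    return inv / inv.sum()

def kendalls_w(rank_matrix):
    """Kendall's W for m experts (rows) ranking n indicators (columns), no ties."""
    m, n = rank_matrix.shape
    R = rank_matrix.sum(axis=0)                     # rank sums per indicator
    S = ((R - R.mean()) ** 2).sum()
    return 12.0 * S / (m ** 2 * (n ** 3 - n))

# Example: 4 experts rank 5 indicators (1 = most important)
ranks = np.array([[1, 2, 3, 4, 5],
                  [2, 1, 3, 5, 4],
                  [1, 3, 2, 4, 5],
                  [5, 4, 3, 2, 1]])                 # one dissenting expert

print("weights from mean ranks:",
      np.round(rank_reciprocal_weights(ranks.mean(axis=0)), 3))
print("Kendall's W:", round(kendalls_w(ranks), 3))  # low W -> weak consensus
```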

Modeling Corporate Memories using the ReCaRo Model, Some Experiments

This paper presents a model of case-based corporate memory named ReCaRo (REsource, CAse, ROle). The approach suggested in ReCaRo decomposes the domain to be modelled into a set of components. These components represent the objects developed by the company during its activity. They are reused, sometimes with adaptations, and are enriched with knowledge after each reuse. ReCaRo builds the corporate memory on the basis of these components. It models two types of knowledge: 1) business knowledge, which constitutes the main knowledge capital of the company and refers to its basic skills, and thus directly to the components, and 2) experience knowledge, which is specialised knowledge representing the experience gained during the handling of business knowledge. ReCaRo builds corporate memories which are made up of five communicating memories.

Enhancing Operational Effectiveness in the Norwegian Army through Simulation-Based Training

The Norwegian Military Academy (Army) has initiated a project whose main ambition is to explore possible avenues for enhancing operational effectiveness through an increased use of simulation-based training and exercises. Within a cost/benefit framework, we discuss opportunities and limitations of vertical and horizontal integration of the existing tactical training system. Vertical integration implies expanding the existing training system to span the full range of training from the tactical level (platoon, company) to the command and staff level (battalion, brigade). Horizontal integration means including domains other than army tactics and staff procedures in the training, such as military ethics, foreign languages, leadership and decision making. We discuss each of the integration options with respect to the purpose and content of training and "best practice" for organising and conducting simulation-based training, and suggest how to evaluate training procedures and measure learning outcomes. We conclude by giving guidelines for further exploratory work and possible implementation.