Ray Tracing Technique based 60 GHz Band Propagation Modelling and Influence of People Shadowing

The main objective of this paper is to present a tool that we have developed to characterize and model indoor radio channel propagation at millimeter-wave frequencies. The tool is based on the ray tracing technique (RTT). In a realistic environment, the significant impact of human body shadowing and of other objects in motion on the indoor 60 GHz propagation channel cannot be neglected. Hence, our proposed model allows the simulation of propagation in a dynamic indoor environment. First, we describe a model of the human body. Second, the RTT combined with this model is used to simulate the propagation of millimeter waves in the presence of people in motion. The simulation results are in agreement with those reported in the literature, in particular regarding the effects of people in motion on the temporal properties of the channel.
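
As a minimal illustration of the shadowing test inside such a ray tracer, the sketch below assumes the human body is approximated by a vertical cylinder (a common simplification; the paper's own body model is not detailed in this abstract) and flags a direct 60 GHz ray as blocked when it intersects that cylinder.

```python
import numpy as np

def ray_blocked_by_person(tx, rx, center, radius, height):
    """Return True if the straight ray from tx to rx intersects a vertical
    cylinder (simplified human body) of given radius/height whose axis passes
    through `center` and is aligned with z.  All points are 3-D numpy arrays."""
    d = rx - tx                        # ray direction (not normalised)
    d_xy = d[:2]                       # horizontal projection for the cylinder test
    o_xy = tx[:2] - center[:2]
    a = np.dot(d_xy, d_xy)
    b = 2.0 * np.dot(o_xy, d_xy)
    c = np.dot(o_xy, o_xy) - radius**2
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:         # ray parallel to the axis or no intersection
        return False
    sqrt_disc = np.sqrt(disc)
    for t in ((-b - sqrt_disc) / (2 * a), (-b + sqrt_disc) / (2 * a)):
        if 0.0 <= t <= 1.0:            # intersection lies between tx and rx
            z = tx[2] + t * d[2]
            if 0.0 <= z <= height:     # and within the body height
                return True
    return False

# Example: 60 GHz link at 1.5 m antenna height with a person standing in between.
tx = np.array([0.0, 0.0, 1.5])
rx = np.array([5.0, 0.0, 1.5])
print(ray_blocked_by_person(tx, rx, center=np.array([2.5, 0.1, 0.0]),
                            radius=0.25, height=1.8))   # True -> LOS ray shadowed
```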

PeliGRIFF: A Parallel DEM-DLM/FD Method for DNS of Particulate Flows with Collisions

An original Direct Numerical Simulation (DNS) method to tackle the problem of particulate flows at moderate to high concentration and finite Reynolds number is presented. Our method is built on the framework established by Glowinski and his coworkers [1], in the sense that we use their Distributed Lagrange Multiplier/Fictitious Domain (DLM/FD) formulation and their operator-splitting idea, but differs in the treatment of particle collisions. The novelty of our contribution lies in replacing the simple artificial repulsive-force collision model usually employed in the literature with an efficient Discrete Element Method (DEM) granular solver. The DEM solver enables us to consider particles of arbitrary (at least convex) shape and to account for actual contacts, in the sense that particles actually touch each other, in contrast with the simple repulsive-force collision model. We recently upgraded our serial code GRIFF [2] to full MPI capabilities. Our new code, PeliGRIFF, is developed within the framework of the fully MPI-parallel open source platform PELICANS [3]. The new MPI capabilities of PeliGRIFF open new perspectives in the study of particulate flows and significantly increase the number of particles that can be considered in a full DNS approach: O(100000) in 2D and O(10000) in 3D. Results on the 2D/3D sedimentation/fluidization of isometric polygonal/polyhedral particles with collisions are presented.
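
The abstract does not give the contact law used in the DEM solver; the sketch below only illustrates the general DEM idea with a standard linear spring-dashpot normal force between two spheres (PeliGRIFF itself handles arbitrary convex shapes and actual contacts).

```python
import numpy as np

def spring_dashpot_contact(x1, x2, v1, v2, r1, r2, kn=1.0e4, eta=5.0):
    """Generic linear spring-dashpot normal contact force between two spheres.
    Returns the force acting on particle 1 (zero if the spheres do not touch)."""
    d = x2 - x1
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist            # > 0 means the particles are in contact
    if overlap <= 0.0:
        return np.zeros(3)
    n = d / dist                          # unit normal from particle 1 towards 2
    vn = np.dot(v2 - v1, n)               # normal relative velocity
    fn = kn * overlap - eta * vn          # elastic repulsion + viscous damping
    return -fn * n                        # push particle 1 away from particle 2

# Two slightly overlapping unit spheres approaching each other along x.
f1 = spring_dashpot_contact(np.array([0.0, 0.0, 0.0]), np.array([1.9, 0.0, 0.0]),
                            np.array([0.1, 0.0, 0.0]), np.array([-0.1, 0.0, 0.0]),
                            r1=1.0, r2=1.0)
print(f1)   # repulsive force on particle 1, directed along -x
```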

CMOS-Compatible Silicon Nanoplasmonics for On-Chip Integration

Although silicon photonic devices provide a significantly larger bandwidth and dissipate substantially less power than electronic devices, they suffer from a large footprint due to the fundamental diffraction limit and the weak optical response of Si. A potential solution is to exploit Si plasmonics, which may not only miniaturize photonic devices far beyond the diffraction limit, but also enhance the optical response in Si through electromagnetic field confinement. In this paper, we discuss and summarize the recently developed metal-insulator-Si-insulator-metal nanoplasmonic waveguide as well as various passive and active plasmonic components based on this waveguide, including couplers, bends, power splitters, ring resonators, MZIs, modulators, detectors, etc. All of these plasmonic components are CMOS compatible and could be integrated with electronic and conventional dielectric photonic devices on the same SOI chip. Further potential plasmonic devices as well as plasmonic nanocircuits with complex functionalities are also addressed.

Enhanced Nutrient Removal in Conventional Anaerobic Digestion Processes

One of the main challenges for single-phase anaerobic digestion processes is the high concentration of NH4+ and PO43- ions in the digested sludge supernatant. This project focuses on enhancing the removal of nutrients during the anaerobic digestion process by fixing both NH4+ and PO43- ions in the form of struvite (magnesium ammonium phosphate, MAP, MgNH4PO4·6H2O) within the anaerobic sludge. Batch anaerobic digestion tests showed that Mg2+ concentrations in the range 279–812 mg/L had an insignificant effect on CGP but produced a slight increase in COD removal. The reactor with a soluble Mg2+:NH4+:PO43- molar ratio of 1.28:1.00:1.00 achieved the best performance enhancement: an 8% increase in COD removal and a 32% reduction in NH4+ in the reactor supernatant. Overall, the results show that there is potential to optimise conventional anaerobic digestion so that a supernatant lean in P and N, and a sludge rich in nutrients, are obtained.

Effect of High Injection Pressure on Mixture Formation, Burning Process and Combustion Characteristics in Diesel Combustion

The mixture formation prior to ignition plays a key role in diesel combustion. Parametric studies of mixture formation and the ignition process under various injection parameters have received considerable attention for their potential to reduce emissions. The purpose of this study is to clarify the effects of injection pressure on mixture formation and ignition, especially during the ignition delay period, which significantly influence the subsequent combustion process and exhaust emissions. This study investigated the fundamental effects of injection pressure on diesel combustion using a rapid compression machine. The detailed behavior of mixture formation during the ignition delay period was investigated using a schlieren photography system with a high-speed camera. This method can clearly capture spray evaporation, spray interference, mixture formation and flame development with real images. The ignition process and flame development were investigated by direct photography using a light-sensitive high-speed color digital video camera. The injection pressure and air motion are important variables that strongly affect fuel evaporation and the endothermic and pyrolysis processes during the ignition delay. An increased injection pressure lengthens spray tip penetration and promotes a greater amount of fuel-air mixing during the ignition delay. The greater quantity of fuel prepared during the ignition delay period thus promotes more rapid heat release.

Microalbuminuria in Human Immunodeficiency Virus Infection and Acquired Immunodeficiency Syndrome

Human immunodeficiency virus infection and acquired immunodeficiency syndrome is a global pandemic, with cases reported from virtually every country, and continues to be a common infection in developing countries such as India. Microalbuminuria is a manifestation of human immunodeficiency virus associated nephropathy. Therefore, microalbuminuria may be an early marker of human immunodeficiency virus associated nephropathy, and screening for its presence may be beneficial. A strikingly high prevalence of microalbuminuria among human immunodeficiency virus infected patients has been described in various studies. Risk factors for clinically significant proteinuria include African-American race, higher human immunodeficiency virus ribonucleic acid level and lower CD4 lymphocyte count. The cardiovascular risk factors of increased systolic blood pressure and increased fasting blood sugar level are strongly associated with microalbuminuria in human immunodeficiency virus patients. These results suggest that microalbuminuria may be a sign of current endothelial dysfunction and microvascular disease, and that there is a substantial risk of future cardiovascular disease events. Possible contributing factors include early kidney disease such as human immunodeficiency virus associated nephropathy, a marker of end-organ damage related to comorbidities of diabetes or hypertension, or more diffuse endothelial cell dysfunction. Nevertheless, after adjustment for non human immunodeficiency virus factors, human immunodeficiency virus itself remains a major risk factor: the presence of human immunodeficiency virus infection is an independent risk factor for developing microalbuminuria. Cardiovascular risk factors appeared to be stronger predictors of microalbuminuria than markers of human immunodeficiency virus severity. Persons with human immunodeficiency virus infection and microalbuminuria therefore appear to bear the burden of two separate processes: end-organ damage related to known vascular risk factors, and human immunodeficiency virus specific processes such as direct viral infection of kidney cells. The higher prevalence of microalbuminuria among the human immunodeficiency virus infected could be a harbinger of future increased risks of both kidney and cardiovascular disease. Further study defining the prognostic significance of microalbuminuria among human immunodeficiency virus infected persons will be essential. Microalbuminuria seems to be a predictor of cardiovascular disease in diabetic and non-diabetic subjects; hence it can also be used for early detection of microvascular disease in human immunodeficiency virus positive patients and can help to diagnose the disease at the earliest stage.

Turbo-Coded Mobile Terrestrial Communication Systems in Urban and Suburban Areas for Wireless Multimedia Applications

With the rapid popularization of internet services, it is apparent that next generation terrestrial communication systems must be capable of supporting various applications such as voice, video, and data. This paper presents the performance evaluation of turbo-coded mobile terrestrial communication systems, which are capable of providing high quality services for delay-sensitive (voice or video) and delay-tolerant (text transmission) multimedia applications in urban and suburban areas. Different types of multimedia information require different service qualities, which are generally expressed in terms of a maximum acceptable bit-error-rate (BER) and a maximum tolerable latency. The breakthrough discovery of turbo codes allows us to significantly reduce the probability of bit errors with feasible latency. In a turbo-coded system, a trade-off between latency and BER results from the choice of convolutional component codes, interleaver type and size, decoding algorithm, and the number of decoding iterations. This trade-off can be exploited for multimedia applications by using optimal and suboptimal combinations of performance parameters to achieve different service qualities. Based on these results, we propose an adaptive framework for turbo-coded wireless multimedia communications that incorporates a set of performance parameters achieving an appropriate set of service qualities, depending on the application's requirements.
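
The adaptive idea can be sketched as a simple QoS-driven parameter selection; the configuration table below uses hypothetical placeholder values for BER and latency, not results from the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TurboConfig:
    interleaver_size: int   # bits
    iterations: int         # decoding iterations
    approx_ber: float       # expected residual BER (hypothetical placeholder)
    latency_ms: float       # decoding latency (hypothetical placeholder)

# Placeholder parameter sets; real values would come from measured BER/latency curves.
CONFIGS = [
    TurboConfig(256,  4, 1e-3, 2.0),
    TurboConfig(1024, 6, 1e-5, 8.0),
    TurboConfig(4096, 8, 1e-7, 30.0),
]

def select_config(max_ber: float, max_latency_ms: float) -> Optional[TurboConfig]:
    """Pick the lowest-latency configuration meeting both QoS constraints."""
    feasible = [c for c in CONFIGS
                if c.approx_ber <= max_ber and c.latency_ms <= max_latency_ms]
    return min(feasible, key=lambda c: c.latency_ms) if feasible else None

print(select_config(max_ber=1e-4, max_latency_ms=10.0))    # delay-sensitive video
print(select_config(max_ber=1e-6, max_latency_ms=100.0))   # delay-tolerant text
```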

Dynamic-Stochastic Influence Diagrams: Integrating Time-Slices IDs and Discrete Event Systems Modeling

Influence Diagrams (IDs) are a kind of probabilistic belief network for graphical modeling. The use of IDs can improve communication among field experts, modelers, and decision makers by showing the frame of the issue under discussion from a high-level point of view. This paper enhances the Time-Sliced Influence Diagrams (TSIDs, also called Dynamic IDs) formalism from a Discrete Event Systems Modeling and Simulation (DES M&S) perspective, for Exploratory Analysis (EA) modeling. The enhancements enable a modeler to specify the occurrence times of endogenous events dynamically, through stochastic sampling as the model runs, and to describe the inter-influences among them with variable nodes in dynamic situations that the existing TSIDs fail to capture. The new class of model is named Dynamic-Stochastic Influence Diagrams (DSIDs). The paper includes a description of the modeling formalism and the hierarchical simulators implementing its simulation algorithm, and presents a case study to illustrate the enhancements.
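
A minimal sketch of the discrete event idea referred to above, assuming a generic event list; it only shows how the occurrence times of endogenous events can be sampled stochastically while the model runs, not the DSID simulator itself.

```python
import heapq
import random

def run(horizon=10.0, rate=1.0, seed=0):
    """Tiny discrete-event loop: event times are drawn from a probability
    distribution at run time rather than fixed to predefined time slices."""
    random.seed(seed)
    agenda = [(random.expovariate(rate), "endogenous_event")]   # (time, label)
    while agenda:
        clock, label = heapq.heappop(agenda)
        if clock > horizon:
            break
        print(f"t={clock:6.3f}  fire {label}")
        # Firing an event may schedule further endogenous events whose
        # occurrence times are sampled stochastically as the model runs.
        heapq.heappush(agenda, (clock + random.expovariate(rate), label))

run()
```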

Development of a Porous Silica Film by Sol-gel Process

In the present work, a homogeneous silica film on silicon was fabricated from a colloidal silica sol. The silica sol precursor, with uniformly granular particles, was derived by the alkaline hydrolysis of tetraethoxyorthosilicate (TEOS) in the presence of a glycerol template. The film was prepared by a dip coating process. The templated hetero-structured silica film was annealed at elevated temperatures to generate nano- and mesoporosity in the film. The film was subsequently annealed at different temperatures to make it defect free and abrasion resistant. The sol and the film were characterized by particle size distribution measurement, scanning electron microscopy, XRD, FTIR spectroscopy, transmission electron microscopy, atomic force microscopy, and measurements of the refractive index, thermal conductivity and abrasion resistance. The porosity of the films decreased, whereas their refractive index and dielectric constant increased, with increasing annealing temperature. The thermal conductivity of the films increased with increasing film thickness. The developed porous silica film holds strong potential for use in different areas.

A Soft Systems Methodology Perspective on Data Warehousing Education Improvement

This paper demonstrates how the soft systems methodology can be used to improve the delivery of a module in data warehousing for fourth-year information technology students. Graduates in information technology need not only academic skills but also good practical skills to meet the skills requirements of the information technology industry. In developing and improving current data warehousing education modules, one has to find a balance in meeting the expectations of various role players such as the students themselves, industry and academia. The soft systems methodology, developed by Peter Checkland, provides a methodology for facilitating problem understanding from different world views. In this paper it is demonstrated how the soft systems methodology can be used to plan the improvement of data warehousing education for fourth-year information technology students.

Feature Point Reduction for Video Stabilization

Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and must still run at a reasonable frame rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the relevant ones for future use so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for motion modeling. Corner detection is required only when the maintained feature points are insufficient for accurate modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, significantly lowering the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, which further reduces the computational cost. In addition, the feature points remaining after reduction are sufficient for tracking background objects, as demonstrated in the simple video stabilizer based on our proposed algorithm.
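
A condensed sketch of the maintained-set idea (not the authors' exact code), assuming OpenCV, a 4-parameter simplified affine (similarity) model x' = a*x - b*y + tx, y' = b*x + a*y + ty, and illustrative thresholds MIN_POINTS and OUTLIER_Z; simple standardized residuals stand in for the paper's studentized residuals.

```python
import cv2
import numpy as np

MIN_POINTS = 50      # assumed threshold for triggering corner re-detection
OUTLIER_Z = 2.0      # assumed cut-off on standardized residuals

def fit_similarity(src, dst):
    """Least-squares fit of p = [a, b, tx, ty] mapping src points onto dst points."""
    x, y = src[:, 0], src[:, 1]
    one, zero = np.ones_like(x), np.zeros_like(x)
    A = np.vstack([np.column_stack([x, -y, one, zero]),
                   np.column_stack([y,  x, zero, one])])
    b = np.concatenate([dst[:, 0], dst[:, 1]])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

def apply_model(p, pts):
    a, b, tx, ty = p
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([a * x - b * y + tx, b * x + a * y + ty])

def estimate_motion(prev_gray, gray, maintained=None):
    """One iteration of the scheme; returns the motion model and the surviving
    points, which become the maintained set for the next frame."""
    # Corner detection only when the maintained set is too small.
    if maintained is None or len(maintained) < MIN_POINTS:
        maintained = cv2.goodFeaturesToTrack(prev_gray, 400, 0.01, 8).reshape(-1, 2)
    # Optical flow is computed only for the maintained points.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, maintained.astype(np.float32).reshape(-1, 1, 2), None)
    ok = status.ravel() == 1
    src, dst = maintained[ok], nxt.reshape(-1, 2)[ok]
    # Fit the model, then repeatedly drop residual outliers (moving objects).
    p = fit_similarity(src, dst)
    while True:
        res = np.linalg.norm(apply_model(p, src) - dst, axis=1)
        z = (res - res.mean()) / (res.std() + 1e-9)
        keep = z < OUTLIER_Z
        if keep.all() or keep.sum() < 4:
            break
        src, dst = src[keep], dst[keep]
        p = fit_similarity(src, dst)
    return p, dst
```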

A Digitally Programmable Voltage-mode Multifunction Biquad Filter with Single-Output

This article proposes a voltage-mode multifunction filter using a differential voltage current controllable current conveyor transconductance amplifier (DV-CCCCTA). The features of the circuit are that the quality factor and pole frequency can be tuned independently via the capacitor values, and that the circuit description is very simple, consisting of merely one DV-CCCCTA and two capacitors. Without any component matching conditions, the proposed circuit is very appropriate for further development into an integrated circuit. Additionally, each filter response can be selected by suitably selecting the input signals by digital means. The PSpice simulation results are depicted; they agree well with the theoretical anticipation.
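
The abstract does not reproduce the derived transfer functions; for orientation, a voltage-mode multifunction biquad realizes the standard second-order responses, of which the generic low-pass and band-pass forms are shown below with pole frequency ω0 and quality factor Q as the independently tunable parameters (the mapping of ω0 and Q to the DV-CCCCTA elements is not given in this abstract).

```latex
H_{\mathrm{LP}}(s) = \frac{\omega_0^{2}}{s^{2} + \frac{\omega_0}{Q}\,s + \omega_0^{2}},
\qquad
H_{\mathrm{BP}}(s) = \frac{\frac{\omega_0}{Q}\,s}{s^{2} + \frac{\omega_0}{Q}\,s + \omega_0^{2}}.
```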

Fragile Watermarking for Color Images Using Thresholding Technique

In this paper, we propose a block-wise watermarking scheme for color image authentication to resist malicious tampering of digital media. A thresholding technique is incorporated into the scheme such that the tampered region of the color image can be recovered with high quality while the verification result is obtained. The watermark for each block consists of its dual authentication data and the corresponding feature information. The feature information for recovery is computed by the thresholding technique. In the verification process, we propose a dual-option parity check method to verify the validity of image blocks. In the recovery process, the feature information of each block embedded into the color image is rebuilt for high quality recovery. The simulation results show that the proposed watermarking scheme can effectively detect the tampered regions with a high detection rate and can recover them with high quality.
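
The abstract does not spell out the embedding details; the sketch below, under assumed parameters (8x8 blocks, a single parity bit stored in one LSB per block of a uint8 channel), illustrates only the generic block-wise authentication step. The recovery feature information and the paper's dual-option parity check are not reproduced.

```python
import numpy as np

BLOCK = 8

def embed(channel):
    """Embed a per-block parity bit (authentication data) into one LSB of each
    8x8 block of a uint8 color channel."""
    img = channel.copy()
    h, w = img.shape
    for by in range(0, h - BLOCK + 1, BLOCK):
        for bx in range(0, w - BLOCK + 1, BLOCK):
            block = img[by:by + BLOCK, bx:bx + BLOCK]
            block &= 0xFE                          # clear LSBs first
            feature = int(block.mean()) >> 1       # coarse block feature
            auth = bin(feature).count("1") & 1     # parity of the feature
            block[0, 0] |= auth                    # store the parity bit
    return img

def verify(channel):
    """Return a boolean map marking blocks whose stored parity no longer matches."""
    h, w = channel.shape
    tampered = np.zeros((h // BLOCK, w // BLOCK), dtype=bool)
    for by in range(0, h - BLOCK + 1, BLOCK):
        for bx in range(0, w - BLOCK + 1, BLOCK):
            block = channel[by:by + BLOCK, bx:bx + BLOCK]
            feature = int((block & 0xFE).mean()) >> 1
            expected = bin(feature).count("1") & 1
            tampered[by // BLOCK, bx // BLOCK] = int(block[0, 0] & 1) != expected
    return tampered
```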

SVM-based Multiview Face Recognition by Generalization of Discriminant Analysis

Identity verification of authentic persons from their multiview faces is a challenging real-world problem in machine vision. Multiview faces are difficult to handle because of their non-linear representation in the feature space. This paper illustrates the usability of the generalization of LDA, in the form of canonical covariates, for recognition of multiview faces. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality and orientation. The Gabor face representation captures a substantial amount of the variation among face instances that often occurs due to illumination, pose and facial expression changes. Convolution of the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. Canonical covariates are then applied to the Gabor faces to reduce the high-dimensional feature space to low-dimensional subspaces. Finally, support vector machines are trained on the canonical subspaces, which contain the reduced set of features, and perform the recognition task. The proposed system is evaluated on the UMIST face database. The experimental results demonstrate the efficiency and robustness of the proposed system, with high recognition rates.
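
A rough pipeline sketch under stated substitutions: scikit-image Gabor kernels for the filter bank, sklearn's LinearDiscriminantAnalysis standing in for the canonical covariates, and an RBF SVM for classification; loading of the UMIST images is not shown.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gabor_kernel
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

# Gabor filter bank over a few spatial frequencies and orientations.
KERNELS = [gabor_kernel(frequency=f, theta=t)
           for f in (0.1, 0.2, 0.3)
           for t in np.linspace(0, np.pi, 4, endpoint=False)]

def gabor_face(img):
    """Concatenate the (downsampled) magnitude responses of the filter bank."""
    img = np.asarray(img, dtype=float)
    feats = []
    for k in KERNELS:
        real = ndi.convolve(img, np.real(k), mode="wrap")
        imag = ndi.convolve(img, np.imag(k), mode="wrap")
        feats.append(np.hypot(real, imag)[::4, ::4].ravel())  # keep size modest
    return np.concatenate(feats)

def train(faces, labels):
    """faces: list of 2-D grayscale arrays; labels: integer identities."""
    X = np.stack([gabor_face(f) for f in faces])
    lda = LinearDiscriminantAnalysis()          # discriminant projection step
    Z = lda.fit_transform(X, labels)
    svm = SVC(kernel="rbf").fit(Z, labels)
    return lda, svm

def predict(lda, svm, face):
    return svm.predict(lda.transform(gabor_face(face)[None, :]))[0]
```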

Strategies of Entrepreneurs to Collaborate with Alliances for Commercializing Technology and New Product Innovation: A Practical Learning in Thailand

This paper provides a key-driver-based conceptual framework that can be used to improve a firm's success in commercializing technology and in new product innovation resulting from collaboration with other organizations through strategic alliances. Based on a qualitative study using an interview approach, strategic alliances of entrepreneurs in the food processing industry in Thailand are explored. This paper describes factors affecting decisions to collaborate through alliances. It identifies four issues: maintaining the efficiency of the value chain for production capability, adapting to present and future competition, careful assessment of the value of outcomes, and management of innovation. We consider five driving factors: resource orientation, assessment of risk, business opportunity, sharing of benefits and confidence in alliance partners. These factors will be of interest to entrepreneurs and policy makers with regard to further understanding of the direction of business strategies.

Optimization of Acid Treatments by Assessing Diversion Strategies in Carbonate and Sandstone Formations

When acid is pumped into damaged reservoirs for damage removal/stimulation, distorted inflow of acid into the formation occurs because the acid preferentially travels into highly permeable regions rather than low-permeability regions, or (in general) along the path of least resistance. This can lead to poor zonal coverage and hence warrants diversion to carry out an effective placement of acid. Diversion is, desirably, a reversible technique of temporarily reducing the permeability of high-permeability zones, thereby forcing the acid into lower-permeability zones. The uniqueness of each reservoir can pose several challenges to engineers attempting to devise optimum and effective diversion strategies. Diversion techniques include mechanical placement and/or chemical diversion of treatment fluids, further sub-classified into ball sealers, bridge plugs, packers, particulate diverters, viscous gels, crosslinked gels, relative permeability modifiers (RPMs), foams, and/or the use of placement techniques such as coiled tubing (CT) and the maximum pressure difference and injection rate (MAPDIR) methodology. It is not always realized that the effectiveness of diverters greatly depends on reservoir properties, such as formation type, temperature, reservoir permeability, heterogeneity, and physical well characteristics (e.g., completion type, well deviation, length of treatment interval, multiple intervals, etc.). This paper reviews the mechanisms by which each variety of diverter functions and discusses the effect of various reservoir properties on the efficiency of diversion techniques. Guidelines are recommended to help enhance productivity from zones of interest by choosing the best methods of diversion while pumping an optimized amount of treatment fluid. The success of an overall acid treatment often depends on the effectiveness of the diverting agents.

Extraction of Data from Web Pages: A Vision Based Approach

With the explosive growth of information sources available on the World Wide Web, it has become increasingly difficult to identify the relevant pieces of information, since web pages are often cluttered with irrelevant content such as advertisements, navigation panels and copyright notices surrounding the main content of the page. Hence, tools for mining data regions, data records and data items need to be developed in order to provide value-added services. Currently available automatic techniques to mine data regions from web pages are still unsatisfactory because of their poor performance and tag dependence. In this paper, a novel method to extract data items from web pages automatically is proposed. It comprises two steps: (1) identification and extraction of the data regions based on visual clues, and (2) identification of data records and extraction of data items from a data region. For step 1, a novel and more effective method is proposed that finds the data regions formed by all types of tags using visual clues. For step 2, a more effective method, namely Extraction of Data Items from web Pages (EDIP), is adopted to mine data items. The EDIP technique is a list-based approach in which the list is a linear data structure. The proposed technique is able to mine non-contiguous data records and can correctly identify data regions irrespective of the type of tag in which they are bound. Our experimental results show that the proposed technique performs better than the existing techniques.
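
The visual-clue step could be sketched, very loosely, as grouping rendered block elements that share the same left edge and similar heights into candidate data regions; the bounding boxes are assumed to be supplied by a browser/renderer, and the EDIP record-splitting step is omitted.

```python
from itertools import groupby

def candidate_data_regions(boxes, x_tol=5, h_tol=5):
    """boxes: list of dicts with keys x, y, w, h for rendered block elements.
    Returns groups of visually aligned, similarly sized elements (candidate
    data regions), ignoring isolated elements such as ads or sidebars."""
    key = lambda b: (round(b["x"] / x_tol), round(b["h"] / h_tol))
    regions = []
    for _, group in groupby(sorted(boxes, key=key), key=key):
        members = sorted(group, key=lambda b: b["y"])
        if len(members) >= 3:            # at least a few repeated records
            regions.append(members)
    return regions

# Five repeated listing rows plus one tall sidebar element.
boxes = [{"x": 100, "y": 40 * i, "w": 500, "h": 38} for i in range(5)]
boxes += [{"x": 700, "y": 10, "w": 160, "h": 600}]
print(len(candidate_data_regions(boxes)))   # -> 1 (the repeated listing rows)
```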

Finding a Solution, all Solutions, or the Most Probable Solution to a Temporal Interval Algebra Network

Over the years, many implementations have been proposed for solving IA networks. These implementations are concerned with finding a solution efficiently. The primary goals of our implementation are simplicity and ease of use. We present an IA network implementation based on finite-domain non-binary CSPs and constraint logic programming. The implementation has a GUI which permits the drawing of arbitrary IA networks. We then show how the implementation can be extended to find all the solutions to an IA network. One application of finding all the solutions is solving probabilistic IA networks.
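
As a rough illustration of the finite-domain view described above (not the authors' constraint-logic-programming implementation), the sketch below grounds each interval on a small integer time line and enumerates assignments that satisfy the allowed Allen relations on every edge; passing all_solutions=True returns every solution instead of the first one.

```python
from itertools import product

def allen(a, b):
    """Return the Allen relation between intervals a=(s1,e1) and b=(s2,e2)."""
    s1, e1 = a
    s2, e2 = b
    if e1 < s2:  return "before"
    if e2 < s1:  return "after"
    if e1 == s2: return "meets"
    if e2 == s1: return "met-by"
    if s1 == s2 and e1 == e2: return "equals"
    if s1 == s2: return "starts" if e1 < e2 else "started-by"
    if e1 == e2: return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2: return "during"
    if s1 < s2 and e2 < e1: return "contains"
    return "overlaps" if s1 < s2 else "overlapped-by"

def solve(variables, constraints, horizon=4, all_solutions=False):
    """Enumerate concrete interval assignments over a small integer time line.
    constraints: {(i, j): set of allowed Allen relations between vars i and j}."""
    intervals = [(s, e) for s in range(horizon) for e in range(s + 1, horizon + 1)]
    found = []
    for assignment in product(intervals, repeat=len(variables)):
        if all(allen(assignment[i], assignment[j]) in allowed
               for (i, j), allowed in constraints.items()):
            sol = dict(zip(variables, assignment))
            if not all_solutions:
                return sol
            found.append(sol)
    return found if all_solutions else None

# A meets B, and B is before or meets C.
print(solve(["A", "B", "C"], {(0, 1): {"meets"}, (1, 2): {"before", "meets"}}))
```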

Self-efficacy, Self-reliance, and Motivation in an Asynchronous Learning Environment

Self-efficacy, self-reliance, and motivation were examined in a quasi-experimental study with 178 sophomore university students. Participants used an interactive cardiovascular anatomy and physiology CD-ROM and completed a 15-item questionnaire. Reliability of the questionnaire was established using Cronbach's alpha. Post-tests and course grades were examined using a t-test, showing no significant differences. Results of an item-by-item analysis of the questionnaire showed overall satisfaction with the teaching methodology and varied results for self-efficacy, self-reliance, and motivation. Kendall's tau was calculated for all items in the questionnaire.

Bioengineering for Customized Orthodontic Applications: Implant, Bracket and Dental Vibrator

To understand the complex living system, mechanical engineers and dentists have made an effort to deliver prompt products and services to patients concerned about their aesthetic appearance. For the past two decades, various bracket systems have been designed using techniques such as milling and injection molding, which are not technically flexible enough for customized dental product development. The aim of this paper is to design and develop a customized system that is economical and emphasizes expert design and the integration of the engineering and dental fields. A custom-made self-adjustable lingual bracket and customized implants are designed and developed using computer aided design (CAD) and rapid prototyping technology (RPT) to improve smiles and to overcome the difficulties associated with conventional ones. Lengthy orthodontic treatment is usually not accepted by patients because patient compliance is lost. Patient compliance can be improved by facilitating faster tooth movement through a localized dental vibrator designed using advanced engineering principles.