Surface Flattening Assisted with 3D Mannequin Based On Minimum Energy

The topic of surface flattening plays a vital role in computer-aided design and manufacturing. Surface flattening enables the production of 2D patterns by developing a 3D surface onto a 2D plane, which is especially useful in fashion design. This study describes surface flattening based on minimum-energy methods that account for the properties of different fabrics. Firstly, using the geometric features of the 3D surface, the least-deformed area is flattened onto the 2D plane along geodesics. Then, the strain energy accumulated in the mesh is stably released by an approximate implicit method with a revised error function. In some cases, cutting the mesh to further release energy is a common way to enhance the accuracy of the flattening, which naturally introduces darts (cracks) into the resulting 2D pattern. When this methodology is applied to a 3D mannequin constructed with feature lines, it raises the level of computer-aided fashion design. Moreover, when different fabrics are used, the outline of the 2D pattern must be revised according to the fabric properties. With this model, the outline of the 2D pattern can be revised by redistributing the strain energy, with different results for different fabric properties. Finally, several common design cases are used to illustrate and verify the feasibility of this methodology.
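As a minimal illustration of the energy-release idea (a sketch, not the paper's implementation), the flattened mesh can be treated as a spring network whose edges try to recover their original 3D lengths; gradient descent on the spring strain energy then releases the accumulated strain. All function and parameter names here are hypothetical.

```python
import numpy as np

def strain_energy(pts, edges, rest_len, k=1.0):
    """Total spring strain energy of a 2D layout whose edges should
    reproduce the 3D edge lengths rest_len."""
    return sum(0.5 * k * (np.linalg.norm(pts[i] - pts[j]) - L) ** 2
               for (i, j), L in zip(edges, rest_len))

def release_strain_energy(pts2d, edges, rest_len, k=1.0, step=0.1, iters=200):
    """Relax a flattened mesh by gradient descent on the spring energy.

    pts2d    : (n, 2) initial 2D positions of the flattened vertices
    edges    : list of (i, j) vertex-index pairs
    rest_len : corresponding 3D edge lengths the 2D layout should match
    """
    pts = pts2d.astype(float).copy()
    for _ in range(iters):
        grad = np.zeros_like(pts)
        for (i, j), L in zip(edges, rest_len):
            d = pts[i] - pts[j]
            dist = np.linalg.norm(d) + 1e-12
            # dE/dp_i for E = 0.5 * k * (dist - L)^2
            g = k * (dist - L) * d / dist
            grad[i] += g
            grad[j] -= g
        pts -= step * grad
    return pts
```

After relaxation the residual energy indicates how much distortion remains; in the paper's terms, a persistently high residual is the situation where cutting the mesh would be used to release further energy.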

Motivated Support Vector Regression using Structural Prior Knowledge

It is known that incorporating prior knowledge into support vector regression (SVR) can improve approximation performance. Most research has been concerned with incorporating knowledge in the form of numerical relationships. Little work, however, has been done to incorporate prior knowledge about the structural relationships among the variables (referred to as Structural Prior Knowledge, SPK). This paper explores the incorporation of SPK into SVR by constructing appropriate admissible support vector kernels (SV kernels) based on the properties of reproducing kernels (R.K.). Three levels of SPK specification are studied, together with the corresponding sub-levels of prior knowledge the method can accommodate: Hierarchical SPK (HSPK); Interactional SPK (ISPK), consisting of independence, global interaction and local interaction; and Functional SPK (FSPK), composed of exterior-FSPK and interior-FSPK. A convenient tool for describing SPK, the Description Matrix of SPK, is introduced. Subsequently, a new SVR, namely Motivated Support Vector Regression (MSVR), whose structure is motivated in part by SPK, is proposed. Synthetic examples show that a wide variety of SPK can be incorporated and that doing so improves approximation performance in complex cases. The benefits of MSVR are finally demonstrated on a real-life military application, air-to-ground battle simulation, which shows the great potential of MSVR for complex military applications.
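A minimal sketch of the kernel-construction idea, under the assumption (mine, not necessarily the paper's encoding) that the structural knowledge specifies groups of interacting variables: since a sum of admissible SV kernels is again admissible, a kernel built as a sum of RBF sub-kernels over those groups is a valid SV kernel reflecting the assumed structure. Kernel ridge regression stands in here for full epsilon-SVR training.

```python
import numpy as np

def rbf(u, v, gamma=1.0):
    """Gaussian RBF kernel on a sub-vector of the inputs."""
    return np.exp(-gamma * np.sum((u - v) ** 2))

def structured_kernel(x, z, groups):
    """Sum of RBF sub-kernels, one per group of interacting variables.
    A sum of admissible SV kernels is admissible, so the result is a
    valid SV kernel encoding the assumed interaction structure."""
    return sum(rbf(x[list(g)], z[list(g)]) for g in groups)

def kernel_ridge_fit(X, y, groups, lam=1e-3):
    """Regularized least-squares fit with the structured kernel."""
    K = np.array([[structured_kernel(a, b, groups) for b in X] for a in X])
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return alpha, K
```

For an additive target such as f(x) = sin(3 x0) + x1, declaring the groups [[0], [1]] (independence, in ISPK terms) restricts the hypothesis space to additive functions, which is exactly the kind of structural restriction the paper exploits.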

The Effect of Soil Surface Slope on Splash Distribution under Water Drop Impact

The effect of downslope steepness on soil splash distribution under water drop impact was investigated in this study. The equipment used consists of a burette to simulate a water drop, a splash cup filled with sandy soil forming the source area, and a splash board to collect the ejected particles. The results show that the apparent mass increased with increasing downslope angle, following a linear regression equation with a high coefficient of determination. Likewise, the radial soil splash distribution over distance was analyzed statistically, and an exponential function gave the best fit of the relationship for the different slope angles. The curves and the regression equations validate the well-known FSDF and extend the theory of Van Dijk.
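The two reported fits can be reproduced with ordinary least squares; a generic sketch (not the authors' own processing code):

```python
import numpy as np

def fit_exponential(r, m):
    """Fit the radial distribution m = a * exp(-b * r) by linear
    regression on log(m); returns (a, b)."""
    slope, intercept = np.polyfit(r, np.log(m), 1)
    return np.exp(intercept), -slope

def fit_linear(slope_deg, mass):
    """Fit splashed mass against downslope angle, mass = k*angle + c."""
    k, c = np.polyfit(slope_deg, mass, 1)
    return k, c
```

The log-linear trick turns the exponential fit into a standard linear regression, so the same coefficient-of-determination machinery applies to both relationships.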

Solution of Two Dimensional Quasi-Harmonic Equations with CA Approach

Many computational techniques have been applied to the solution of the heat conduction problem, including the finite difference (FD), finite element (FE) and, more recently, meshless methods. FE is commonly used for the heat conduction problem and is based on assembling the element stiffness matrices and solving the final system of equations. Because of this assembly process, the convergence rate is reduced. Hence, in the present paper a Cellular Automata (CA) approach is presented for the solution of the heat conduction problem. Each cell is considered as a fixed point in a regular grid, so the solution of one large system of equations is replaced by discrete systems of equations of small dimension. Results show that CA can be used for the solution of the heat conduction problem.
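A minimal sketch of the cell-local idea (my reconstruction, not the authors' code): each interior cell repeatedly replaces its value with the average of its four neighbours, which is the discrete Laplace rule for steady-state conduction, and no global stiffness matrix is ever assembled.

```python
import numpy as np

def ca_heat_steady(grid, fixed_mask, tol=1e-6, max_iters=10000):
    """Cellular-automata style solution of the steady 2D heat equation.

    grid       : initial temperatures; values under fixed_mask are the
                 prescribed boundary temperatures
    fixed_mask : boolean array marking cells whose temperature is fixed
    """
    T = grid.astype(float).copy()
    for _ in range(max_iters):
        new = T.copy()
        # local CA rule: average of the four neighbours
        new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                  T[1:-1, :-2] + T[1:-1, 2:])
        new[fixed_mask] = grid[fixed_mask]   # re-impose fixed temperatures
        if np.max(np.abs(new - T)) < tol:
            return new
        T = new
    return T
```

Because every update touches only a cell and its neighbours, the rule parallelizes trivially, which is one of the usual arguments for the CA formulation.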

Preparation of Nanosized Iron Oxide and their Photocatalytic Properties for Congo Red

Nanostructured iron oxide with rod-like and granular morphologies has been successfully prepared via a solid-state reaction in the presence of NaCl, NaBr, NaI and NaN3, respectively. The added salts not only prevent a drastic increase in the size of the products but also provide suitable conditions for the oriented growth of primary nanoparticles. Formation mechanisms of these materials by solid-state reaction at ambient temperature are proposed. Photocatalytic experiments on congo red (CR) demonstrated that a mixture of α-Fe2O3 and Fe3O4 nanostructures was more efficient than α-Fe2O3 nanostructures alone.

Canonical PSO based Nanorobot Control for Blood Vessel Repair

As nanotechnology advances, its use for medical purposes in the field of nanomedicine seems more promising; the rise of nanorobots for medical diagnostics and treatment could arrive in the near future. This study proposes a swarm-intelligence-based control mechanism for swarm nanorobots that operate as artificial platelets to search for wounds. The canonical particle swarm optimization algorithm is employed. A simulation of the circulatory system is constructed to demonstrate the movement of nanorobots with their essential characteristics and to examine the performance of the proposed control mechanism. The effects of three nanorobot capabilities, namely perception range, maximum velocity and response time, are investigated. The results show that canonical particle swarm optimization can be used to control early-version nanorobots with simple behaviors and actions.
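The canonical (constriction-factor) PSO update can be sketched as follows. The fitness function is a generic stand-in for the wound-seeking objective, and the parameter values are Clerc's standard constriction settings (chi = 0.7298, c1 = c2 = 2.05) rather than values taken from the paper.

```python
import random

def canonical_pso(fitness, dim, n_particles=20, iters=100,
                  chi=0.7298, c1=2.05, c2=2.05, bounds=(-5.0, 5.0)):
    """Minimize `fitness` with canonical (constriction-factor) PSO."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # canonical update: constriction factor chi damps the velocity
                vel[i][d] = chi * (vel[i][d]
                                   + c1 * r1 * (pbest[i][d] - pos[i][d])
                                   + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In the nanorobot setting, each particle would correspond to one robot, the position update would be clipped to the robot's maximum velocity, and the fitness would reflect sensed proximity to the wound within the perception range.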

Performance Evaluation of an Online Text-Based Strategy Game

A text-based game is expected to be a low-resource-consumption application that delivers good performance compared to graphics-intensive games. Nowadays, however, some online text-based games do not offer performance acceptable to users. Therefore, an online text-based game called Star_Quest has been developed in order to analyze its behavior under different performance measurements. Performance metrics such as throughput, scalability, response time and page loading time are captured to characterize the performance of the game. The load-testing techniques used are also disclosed to exhibit the viability of our work. A comparative assessment between the results obtained and accepted performance levels is conducted to determine the performance level of the game. The study reveals that the developed game met all the performance objectives set forth.
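The kind of measurement described can be sketched generically (this is not the paper's harness; `request_fn` is a hypothetical stand-in for issuing one HTTP request to the game server):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(request_fn, n_users, requests_per_user):
    """Drive `request_fn` from n_users concurrent threads and report
    throughput plus average and worst-case response time."""
    latencies = []
    def user():
        for _ in range(requests_per_user):
            t0 = time.perf_counter()
            request_fn()
            latencies.append(time.perf_counter() - t0)  # list.append is thread-safe in CPython
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        for _ in range(n_users):
            pool.submit(user)
    elapsed = time.perf_counter() - start
    n = n_users * requests_per_user
    return {"throughput_rps": n / elapsed,
            "avg_response_s": sum(latencies) / n,
            "max_response_s": max(latencies)}
```

Scalability is then assessed by repeating the run with increasing `n_users` and observing how throughput and response time degrade.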

Using Data Mining Methodology to Build the Predictive Model of Gold Passbook Price

A gold passbook is an investment tool that is especially suitable for investors making small investments in physical gold. A gold passbook carries lower risk than other ways of investing in gold, but its price is still affected by the gold price, and many factors influence the gold price. Therefore, building a model to predict the gold passbook price can both reduce investment risk and increase returns. This study investigates the important factors that influence the gold passbook price and utilizes the Group Method of Data Handling (GMDH) to build the predictive model. This method not only identifies the significant variables but also performs well in prediction. The significant variables for the gold passbook price identified by GMDH are the US dollar exchange rate, international petroleum price, unemployment rate, wholesale price index, rediscount rate, foreign exchange reserves, misery index, prosperity coincident index and industrial index.

An Improved Integer Frequency Offset Estimator using the P1 Symbol for OFDM System

This paper suggests an improved integer frequency offset (IFO) estimation scheme using the P1 symbol for the orthogonal frequency division multiplexing (OFDM) based second-generation terrestrial digital video broadcasting (DVB-T2) system. The proposed IFO estimator is a low-complexity blind scheme implemented with complex additions only. We also propose an active carrier (AC) selection scheme to prevent performance degradation in blind IFO estimation. Simulation results show that, under the AWGN and TU6 channels, the proposed method has lower complexity than the conventional method while achieving almost the same performance.
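The general shape of blind IFO estimation can be sketched as follows: every candidate integer shift is tried, and the one under which the known active-carrier positions capture the most received energy is selected. This magnitude-sum metric is a generic stand-in for the paper's P1-based metric, not the proposed estimator itself.

```python
import numpy as np

def estimate_ifo(Y, active_carriers, search_range=10):
    """Blind integer-frequency-offset estimate from one received
    frequency-domain symbol Y, given the known AC positions."""
    N = len(Y)
    best_g, best_metric = 0, -1.0
    for g in range(-search_range, search_range + 1):
        # energy collected at the AC positions shifted by candidate g
        metric = sum(abs(Y[(k + g) % N]) for k in active_carriers)
        if metric > best_metric:
            best_g, best_metric = g, metric
    return best_g
```

An integer CFO of eps subcarriers cyclically shifts the active carriers by eps bins, so the metric peaks at g = eps provided the AC pattern has low shifted self-overlap, which is what the AC selection scheme in the paper is designed to ensure.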

Thermodynamic Performance of a Combined Power and Ejector Refrigeration Cycle

In this study, a thermodynamic performance analysis of a combined organic Rankine cycle and ejector refrigeration cycle is carried out for the use of a low-grade heat source in the form of sensible energy. Special attention is paid to the effects of system parameters, including the turbine inlet temperature and turbine inlet pressure, on characteristics of the system such as the mass flow rate ratio, net work production and refrigeration capacity, as well as the coefficient of performance and exergy efficiency of the system. Results show that, for a given source, the coefficient of performance increases with increasing turbine inlet pressure. The exergy efficiency, however, has an optimum with respect to the turbine inlet pressure.
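In standard notation, the two performance measures can be written as follows (these are common textbook definitions for combined power and refrigeration cycles, not necessarily the exact metrics of this paper):

```latex
\mathrm{COP} = \frac{\dot{Q}_e}{\dot{Q}_{in}}, \qquad
\eta_{ex} = \frac{\dot{W}_{net} + \dot{Q}_e\left(\dfrac{T_0}{T_e} - 1\right)}{\dot{E}_{in}},
```

where $\dot{Q}_e$ is the refrigeration capacity, $\dot{Q}_{in}$ the heat input from the source, $\dot{W}_{net}$ the net work production, $T_0$ the ambient temperature, $T_e$ the refrigeration temperature, and $\dot{E}_{in}$ the exergy supplied by the source. The factor $T_0/T_e - 1$ converts the cooling duty into its exergy equivalent, which is why the exergy efficiency can exhibit an optimum even while the COP grows monotonically with turbine inlet pressure.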

Modeling and Simulating Reaction-Diffusion Systems with State-Dependent Diffusion Coefficients

Present models and simulation algorithms of intracellular stochastic kinetics are usually based on the premise that diffusion is so fast that the concentrations of all the involved species are homogeneous in space. However, recent experimental measurements of intracellular diffusion constants indicate that the assumption of a homogeneous, well-stirred cytosol is not necessarily valid even for small prokaryotic cells. In this work, a mathematical treatment of diffusion that can be incorporated into a stochastic algorithm simulating the dynamics of a reaction-diffusion system is presented. The movement of a molecule A from a region i to a region j of space is represented as a first-order reaction Ai -> Aj with rate constant k, where k depends on the diffusion coefficient. The diffusion coefficients are modeled as functions of the local concentration of the solutes, their intrinsic viscosities, their frictional coefficients and the temperature of the system. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the intrinsic reaction kinetics and diffusion dynamics. To demonstrate the method, simulation results for the reaction-diffusion system of chaperone-assisted protein folding in the cytoplasm are shown.
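A minimal Gillespie-style sketch of the scheme on a 1D chain of compartments (my reconstruction, not the paper's code): the jump rate k = D/h^2 for compartment size h is the standard discretization, and a state-dependent D would simply be recomputed from the local counts before each propensity evaluation.

```python
import math, random

def ssa_reaction_diffusion(counts, D, h, k_react, steps, seed=0):
    """Stochastic simulation of molecules of species A diffusing on a
    1D chain of compartments while degrading (A -> 0).

    A jump between adjacent compartments i and j is treated as a
    first-order reaction Ai -> Aj with rate constant k_diff = D / h**2,
    exactly as in the abstract's formulation.
    """
    rng = random.Random(seed)
    n = len(counts)
    k_diff = D / h ** 2
    t = 0.0
    for _ in range(steps):
        # propensities: left/right jumps plus degradation, per compartment
        props = []
        for i in range(n):
            if i > 0:
                props.append(("jump", i, i - 1, k_diff * counts[i]))
            if i < n - 1:
                props.append(("jump", i, i + 1, k_diff * counts[i]))
            props.append(("react", i, i, k_react * counts[i]))
        a0 = sum(p[3] for p in props)
        if a0 == 0:
            break
        t += -math.log(1.0 - rng.random()) / a0      # exponential waiting time
        r, acc = rng.random() * a0, 0.0
        for kind, i, j, a in props:                  # pick event proportionally
            acc += a
            if acc >= r:
                counts[i] -= 1
                if kind == "jump":
                    counts[j] += 1
                break
    return counts, t
```

Both event types compete in one propensity list, so the next event and its waiting time come from a single distribution, which is the unified treatment of reaction and diffusion events described in the abstract.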

Footbridge Response on Single Pedestrian Induced Vibration Analysis

Many footbridges have natural frequencies that coincide with the dominant frequencies of pedestrian-induced loads, so they can potentially suffer excessive vibrations under dynamic loads induced by pedestrians. Some design standards introduce load models for pedestrian loads applicable to simple structures; load modeling for more complex structures, on the other hand, is most often left to the designer. The main focus of this paper is on the human-induced forces transmitted to a footbridge and on the ways these loads can be modeled for use in the dynamic design of footbridges. Design criteria and load models proposed by widely used standards are also introduced and compared. The dynamic analysis of the suspension bridge in Kolin in the Czech Republic was performed on a detailed FEM model using the ANSYS program system. Modeling the load imposed by a single person and by a crowd of pedestrians yielded displacements and accelerations that are compared with serviceability criteria.
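The single-pedestrian vertical load is commonly modeled as a truncated Fourier series; a sketch with typical textbook dynamic load factors (the alpha_i and phase values below are illustrative defaults, not the values used for the Kolin bridge analysis):

```python
import math

def pedestrian_force(t, G=700.0, f_s=2.0,
                     alphas=(0.4, 0.1, 0.1),
                     phases=(0.0, math.pi / 2, math.pi / 2)):
    """Vertical single-pedestrian load as a truncated Fourier series,

        F(t) = G * (1 + sum_i alpha_i * sin(2*pi*i*f_s*t - phi_i)),

    with G the static weight [N], f_s the pacing rate [Hz], alpha_i the
    dynamic load factors of the harmonics and phi_i their phase lags."""
    return G * (1.0 + sum(a * math.sin(2 * math.pi * (i + 1) * f_s * t - p)
                          for i, (a, p) in enumerate(zip(alphas, phases))))
```

Averaged over a full pacing period the harmonic terms vanish, leaving the static weight G; the danger arises when a harmonic i*f_s coincides with a bridge natural frequency, which is the resonance situation examined in the paper.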

Using Ontology Search in the Design of Class Diagram from Business Process Model

A business process model describes the process flow of a business and can be seen as a requirement for developing a software application. This paper discusses the BPM2CD guideline, which complements the Model Driven Architecture concept by suggesting how to create a platform-independent software model, in the form of a UML class diagram, from a business process model. An important step is the identification of UML classes from the business process model. A technique from object-oriented analysis called domain analysis is borrowed, and key concepts in the business process model are discovered and proposed as candidate classes for the class diagram. The paper enhances this step by using ontology search to help identify important classes for the business domain. Since an ontology is a source of knowledge for a particular domain that can itself link to ontologies of related domains, the search can give a refined set of candidate classes for the resulting class diagram.

Investigating the Relation between Correctness and the Number of Versions of a Fault-Tolerant Software System

In this paper, we generalize several techniques for developing fault-tolerant software. We introduce the property "correctness" for evaluating N-version systems and compare it to commonly used properties such as reliability and availability. We also derive the relation between this property and the number of versions of the system. Experiments to verify the correctness and applicability of this relation are also presented.
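For illustration, the standard formula for independent versions under majority voting is shown below (the paper's correctness property may be defined differently; this is the textbook binomial model):

```python
from math import comb

def majority_correct(n, p):
    """Probability that a majority of n independent versions, each
    correct with probability p, produces the correct output.
    For odd n there are no voting ties."""
    m = n // 2 + 1
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(m, n + 1))
```

The model captures the qualitative relation studied in the paper: adding versions helps only while each individual version is more likely right than wrong (p > 0.5); below that threshold, more versions make the system worse.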

A Fast Code Acquisition Scheme for O-CDMA Systems

This paper proposes a fast code acquisition scheme for optical code division multiple access (O-CDMA) systems. Unlike the conventional scheme, the proposed scheme employs multiple thresholds, providing shorter mean acquisition time (MAT). The simulation results show that the MAT of the proposed scheme is shorter than that of the conventional scheme.
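The single-threshold serial-search baseline that the multi-threshold scheme improves upon can be sketched as follows (an illustration of the conventional approach, not the proposed scheme; unipolar optical codes are assumed):

```python
def acquire(received, code, threshold):
    """Serial-search code acquisition: slide the local code over the
    received chip stream and declare acquisition at the first phase
    whose correlation reaches the threshold; return None on failure."""
    n = len(code)
    for phase in range(n):
        corr = sum(received[(phase + i) % n] * code[i] for i in range(n))
        if corr >= threshold:
            return phase
    return None
```

The MAT of such a search is dominated by the dwell time spent testing wrong phases; using multiple thresholds, as the paper proposes, lets unpromising phases be dismissed early and thereby shortens the mean acquisition time.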

An Evaluation of Sputum Smear Conversion and Haematological Parameter Alteration in Early Detection Period of New Pulmonary Tuberculosis (PTB) Patients

Sputum smear conversion after one month of antituberculosis therapy in new smear-positive pulmonary tuberculosis (PTB+) patients is a vital indicator of treatment success. The objective of this study is to determine the rate of sputum smear conversion in new PTB+ patients after one month under treatment at the National Institute of Diseases of the Chest and Hospital (NIDCH). Sputum smear conversion was analyzed by clinical re-examination with a sputum smear microscopic test after one month. Socio-demographic and haematological parameters were evaluated to assess their correlation with disease status. Among all enrolled patients, only 33.33% were available for follow-up diagnosis, and of these only 42.86% turned smear negative, probably owing to non-adherence to proper disease management. Low haemoglobin and packed cell volume levels were reported in 66.67% and 78.78% of patients respectively, whereas 80% and 93.33% of patients showed elevated platelet count and erythrocyte sedimentation rate, respectively.

Cell Growth and Metabolites Produced by Fluorescent Pseudomonad R62 in Modified Chemically Defined Medium

The chemically defined Schlegel's medium was modified to improve cell growth and the production of other metabolites by the fluorescent pseudomonad strain R62. The modified medium does not require pH control, as pH changes are kept within ±0.2 units of the initial pH 7.1 during fermentation. Siderophore production by the fluorescent pseudomonad strain was optimized in the modified medium containing 1% glycerol as the major carbon source, supplemented with 0.05% succinic acid and 0.5% L-tryptophan. Indole-3-acetic acid (IAA) production was higher when L-tryptophan was used at 0.5%. Production of 2,4-diacetylphloroglucinol (DAPG) was higher when the medium was amended with three trace elements. The optimized medium produced 2.28 g/l of dry cell mass and 900 mg/l of siderophore at the end of 36 h of cultivation, while the production levels of IAA and DAPG were 65 mg/l and 81 mg/l, respectively, at the end of 48 h of cultivation.

Automated Knowledge Engineering

This article outlines the conceptualization and implementation of an intelligent system capable of extracting knowledge from databases. The use of hybridized features of both Rough and Fuzzy Set theory gives the developed system flexibility in dealing with discrete as well as continuous datasets. A raw dataset provided to the system is initially transformed into a computer-legible format, followed by pruning of the dataset. The refined dataset is then processed through various Rough Set operators, which enable the discovery of parameter relationships and interdependencies. The discovered knowledge is automatically transformed into a rule base expressed in Fuzzy terms. Two exemplary cancer repository datasets (for breast and lung cancer) have been used to test the proposed framework.
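The core Rough Set operators used by such a system can be sketched as follows (a generic illustration of lower and upper approximation, not the article's implementation):

```python
def approximations(universe, target, data):
    """Rough-set lower and upper approximations of a target set.

    data[x] is the tuple of attribute values of object x; objects with
    identical tuples are indiscernible.  The lower approximation holds
    objects certainly in `target`, the upper those possibly in it, and
    their difference is the boundary region that gives rise to the
    'rough' (uncertain) rules."""
    # group objects into indiscernibility classes by attribute values
    classes = {}
    for x in universe:
        classes.setdefault(data[x], set()).add(x)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:        # entirely inside the target: certain
            lower |= cls
        if cls & target:         # overlaps the target: possible
            upper |= cls
    return lower, upper
```

Rules generated from the lower approximation are certain, while those from the boundary region (upper minus lower) carry partial membership, which is where the Fuzzy formulation of the rule base naturally takes over.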

Robust Camera Calibration using Discrete Optimization

Camera calibration is an indispensable step for augmented reality or image-guided applications where quantitative information must be derived from the images. Usually, a camera calibration is obtained by taking images of a special calibration object and extracting the image coordinates of the projected calibration marks, enabling the calculation of the projection from 3D world coordinates to 2D image coordinates. Such a procedure exhibits typical steps, including feature point localization in the acquired images, camera model fitting, correction of the distortion introduced by the optics, and finally an optimization of the model's parameters. In this paper we propose to extend this list by a further step: the identification of the optimal subset of images yielding the smallest overall calibration error. For this, we present a Monte Carlo based algorithm, along with a deterministic extension, that automatically determines the images yielding an optimal calibration. Finally, we present results showing that the calibration can be significantly improved by automated image selection.
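The Monte Carlo selection step can be sketched generically; here `calibrate` is a caller-supplied black box assumed to run the calibration on a subset and return its overall reprojection error (this is a sketch of the search idea, not the paper's algorithm):

```python
import random

def select_image_subset(images, calibrate, subset_size, trials=200, seed=0):
    """Monte Carlo search for the image subset of a given size with the
    smallest calibration error reported by `calibrate(subset)`."""
    rng = random.Random(seed)
    best_subset, best_err = None, float("inf")
    for _ in range(trials):
        subset = rng.sample(images, subset_size)  # random candidate subset
        err = calibrate(subset)
        if err < best_err:
            best_subset, best_err = subset, err
    return best_subset, best_err
```

A deterministic extension in the spirit of the paper would, for example, greedily drop the image whose removal most reduces the error, refining the best subset found by the random search.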

Meta Model Based EA for Complex Optimization

Evolutionary algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, many real-life optimization problems require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations. The use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta-models, i.e., approximations of the actual fitness functions, which are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available to build such meta-models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks which involve the use of meta-models for fitness function evaluation. The first framework, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by controlled use of meta-models (in this case an approximate model generated by support vector machine regression) to partially replace the actual function evaluation with approximate function evaluation. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model, which does not account for uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks on several benchmark functions demonstrate their efficiency.
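A toy surrogate-assisted evolutionary loop in the spirit of (but much simpler than) DAFHEA can be sketched as follows: most individuals are scored by a cheap meta-model, and the true fitness is evaluated only periodically to refresh the archive that trains the surrogate. A nearest-neighbour interpolator stands in here for SVM regression, and all parameters are illustrative.

```python
import random

def surrogate_ea(fitness, dim, pop_size=20, gens=30, retrain_every=5, seed=0):
    """Minimize `fitness` while calling it only on every
    `retrain_every`-th generation; other generations use a surrogate
    built from the archive of true evaluations."""
    rng = random.Random(seed)
    archive = []                                  # (point, true fitness) pairs

    def surrogate(x):
        # nearest-neighbour prediction from the archive of true evaluations
        _, pf = min(archive,
                    key=lambda a: sum((u - v) ** 2 for u, v in zip(a[0], x)))
        return pf

    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for p in pop:
        archive.append((p, fitness(p)))
    for gen in range(gens):
        exact = (gen % retrain_every == 0)        # true evaluation this gen?
        scored = []
        for p in pop:
            f = fitness(p) if exact else surrogate(p)
            if exact:
                archive.append((p, f))            # grow the training archive
            scored.append((f, p))
        scored.sort(key=lambda s: s[0])
        parents = [p for _, p in scored[:pop_size // 2]]
        # Gaussian-mutation offspring from the surviving half
        pop = parents + [[x + rng.gauss(0, 0.5) for x in rng.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return min(archive, key=lambda a: a[1])       # best truly evaluated point
```

The ratio of surrogate to true evaluations (here 4:1) is the lever DAFHEA controls adaptively; DAFHEA-II's multiple-model learning would additionally let the surrogate distinguish noise from genuine fitness structure.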