Condensation of Moist Air in Heat Exchanger Using CFD

This work presents results of moist air condensation in a heat exchanger. It describes the theoretical background and definition of moist air. A model with the geometry of a square channel was created for a better understanding and post-processing of the condensation phenomena. Different approaches were examined on this model to find suitable software and a suitable model. The knowledge obtained was applied to the geometry of a real heat exchanger, and experimental results were compared with the numerical results. One of the goals is to solve this problem without creating any user-defined function in the applied code. The paper also contains a summary of the findings and an outlook for future work.

New Approaches to Stability Analysis for Neural Networks with Time-Varying Delay

Utilizing the Lyapunov functional method and combining linear matrix inequality (LMI) techniques with the integral inequality approach (IIA) to analyze the global asymptotic stability of delayed neural networks (DNNs), a new sufficient criterion ensuring the global stability of DNNs is obtained. The criteria are formulated in terms of a set of linear matrix inequalities, which can be checked efficiently using standard numerical packages. Numerical examples are considered to show that the stability condition in this paper gives much less conservative results than those in the literature.
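
For orientation, a delayed neural network of the kind addressed here is commonly written in the generic form (this is a standard formulation, not necessarily the exact model or notation used in the paper)

\[
\dot{x}(t) = -C x(t) + A f(x(t)) + B f(x(t - \tau(t))) + u,
\]

where x(t) is the state vector, C is a positive diagonal matrix of self-feedback terms, A and B are the connection and delayed-connection weight matrices, f(.) is the activation function, and \tau(t) is the time-varying delay. A Lyapunov-Krasovskii functional V is constructed so that its derivative along trajectories is negative definite, and that condition is recast as an LMI feasibility problem that semidefinite programming solvers can check.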

Simulation of Inverter Fed Induction Motor Drive with LabVIEW

This paper describes a software approach for modeling an inverter-fed induction motor drive using Laboratory Virtual Instrument Engineering Workbench (LabVIEW). LabVIEW was selected because of its strong graphical interface and the flexibility of its programming language, combined with built-in tools designed specifically for test, measurement and control. LabVIEW is widely used for data acquisition, test and control applications. In this paper, the inverter and the induction motor are modeled using LabVIEW toolkits. Simulation results are presented and validated.

Artificial Intelligence Approach for Machining Titanium Alloy in a Nonconventional Process

Artificial neural networks (ANNs) are used in many research fields and professions, and are developed through the cooperation of scientists from different disciplines such as computer engineering, electronics, structural engineering, biology and many other branches of science. Many models have been built correlating the process parameters and the outputs of electrical discharge machining (EDM) for different types of materials. Until now, however, no model of the EDM performance characteristics of Ti-5Al-2.5Sn alloy has been developed. Therefore, the present work attempts to generate a model of material removal rate (MRR) for Ti-5Al-2.5Sn using an artificial neural network. The experimentation is performed according to a design of experiments (DOE) based on response surface methodology (RSM). Four parameters, namely peak current, pulse-on time, pulse-off time and servo voltage, are considered as inputs, with MRR as the output. The Ti-5Al-2.5Sn alloy is machined with a copper electrode of positive polarity. Finally, the developed model is tested with a confirmation test, which yields an error within the acceptable limit. A sensitivity analysis is also carried out to investigate the effect of the parameters on performance; it reveals that peak current has the greatest effect on EDM performance.
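
A minimal sketch of such an ANN regression model is given below, using scikit-learn. The input parameter names follow the abstract, but the training values, units and network size are placeholders for illustration only, not the paper's experimental data.

```python
# Sketch of an ANN mapping EDM parameters to material removal rate (MRR).
# The numbers below are placeholders; in the paper the inputs come from an
# RSM design of experiments on Ti-5Al-2.5Sn with a copper electrode.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns: peak current (A), pulse-on time (us), pulse-off time (us), servo voltage (V)
X = np.array([
    [10, 100, 50, 40],
    [15, 150, 60, 50],
    [20, 200, 70, 60],
    [25, 250, 80, 70],
])
y = np.array([2.1, 3.4, 4.8, 6.0])   # MRR in mm^3/min (illustrative values)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)

# Predict MRR for an unseen parameter combination (confirmation-test style check)
print(model.predict([[18, 180, 65, 55]]))
```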

Ultra High Speed Approach for Document Skew Detection and Correction Based On Centre of Gravity

Skew detection and correction (SDC) has a direct effect on the efficiency and accuracy of document segmentation and analysis, and is therefore considered a very important step in the document analysis field. Skew is a major problem in document analysis for every language. For Arabic/Persian document scripts the problem is more severe because of the special features of these languages. In this paper an efficient and fast algorithm for document skew detection (DSD) based on the concepts of segmentation and center of gravity (COG) is proposed. The algorithm is evaluated on 150 Arabic/Persian and English documents, and the SDC process succeeds for 93 percent of the documents with an error rate of less than 1°. The algorithm shows better results for English documents than for Arabic/Persian documents. The proposed method also yields favorable results for handwritten, printed and complicated documents such as newspapers and journals, even at very low quality and resolution.
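
The center-of-gravity idea can be illustrated with a simplified sketch that is not the paper's exact algorithm: split the binarized document into vertical strips, compute the foreground center of gravity of each strip, and estimate the skew angle from a line fitted through those centers.

```python
import numpy as np

def estimate_skew_angle(binary_img, n_strips=8):
    """Rough skew estimate from strip-wise centers of gravity.

    binary_img: 2-D numpy array, foreground (text) pixels = 1, background = 0.
    Returns the estimated skew angle in degrees.
    """
    h, w = binary_img.shape
    xs, ys = [], []
    for i in range(n_strips):
        left = i * w // n_strips
        strip = binary_img[:, left:(i + 1) * w // n_strips]
        rows, cols = np.nonzero(strip)
        if rows.size == 0:
            continue
        xs.append(left + cols.mean())   # COG x-coordinate (column)
        ys.append(rows.mean())          # COG y-coordinate (row)
    # Fit a straight line through the strip centers; its slope gives the skew.
    slope = np.polyfit(xs, ys, 1)[0]
    return np.degrees(np.arctan(slope))
```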

A New Approach to Design an Efficient CIC Decimator Using Signed Digit Arithmetic

Any digital processing performed on a signal with a larger Nyquist interval requires more computation than signal processing performed on a smaller Nyquist interval. Sampling rate alteration generates unwanted effects in the system, such as spectral aliasing and spectral imaging, during signal processing. A multirate, multistage implementation of a digital filter can yield significant computational savings compared with a single-rate filter designed for sample rate conversion. In this paper, we present an efficient cascaded integrator-comb (CIC) decimation filter that performs fast down-sampling using a signed digit adder algorithm, with compensation for the frequency droop that arises during the decimation process. The proposed compensated CIC decimation filter structure with a hybrid signed digit (HSD) fast adder improves down-sampling speed by 65.15% compared with a ripple carry adder (RCA), and reduces area and power by 57.5% and 0.01%, respectively, compared with signed digit (SD) adder algorithms.
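
For reference, the standard transfer function of an N-stage CIC decimator with decimation factor R and differential delay M is

\[
H(z) = \left( \frac{1 - z^{-RM}}{1 - z^{-1}} \right)^{N},
\]

whose magnitude response has a sinc-like shape; the passband droop of this response is what the compensation stage in the proposed structure is meant to correct.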

A Framework for Successful TQM Implementation and Its Effect on the Organizational Sustainability Development

The main purpose of this research is to construct a generic model for the successful implementation of Total Quality Management (TQM) in the oil sector, and to find out the effects of this model on the organizational sustainability development (OSD) performance of Libyan oil and gas companies using the structural equation modeling (SEM) approach. The research approach covers both quantitative and qualitative methods. A questionnaire was developed to identify the quality factors that Libyan oil and gas companies regard as critical to the success of TQM implementation. Hypotheses were developed to evaluate the impact of TQM implementation on OSD. Data analysis reveals that there is a significant positive effect of TQM implementation on OSD. Twenty-four quality factors are found to be critical and absolutely essential for successful TQM implementation. The results generated a structure of the TQMSD implementation framework based on four major road-map constructs: top management commitment, employee involvement and participation, customer-driven processes, and a continuous improvement culture.

A Review: Comparative Analysis of Different Categorical Data Clustering Ensemble Methods

In recent years a great deal of work has been done on data clustering, an unsupervised learning technique in data mining. Several algorithms and methods have been proposed, focusing on clustering different data types, representing cluster models, and improving the accuracy of the clusters. However, no single clustering algorithm proves to be the most efficient at providing the best results. To address this issue, a new technique called the cluster ensemble method emerged, and it is a good alternative approach to the cluster analysis problem. The main aim of a cluster ensemble is to merge different clustering solutions in such a way as to achieve accuracy and improve the quality of the individual data clusterings. The substantial and continual development of new methods in the field of data mining, together with the ongoing interest in inventing new algorithms, makes a critical analysis of the existing techniques and future developments necessary. This paper presents a comparative study of different cluster ensemble methods along with their features, systematic working processes, and the average accuracy and error rates of each ensemble method. This comprehensive analysis should be very useful for the community of clustering practitioners and should also help in deciding which method is most suitable for the problem at hand.
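
As a concrete example of one widely used ensemble scheme, the sketch below shows evidence accumulation via a co-association matrix. It is a generic illustration of the cluster ensemble idea, not any specific method from the survey; the example data are made up.

```python
# Evidence-accumulation cluster ensemble: merge several base clusterings by
# counting how often each pair of points is grouped together, then cluster
# the resulting co-association matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def ensemble_labels(base_partitions, n_clusters):
    """base_partitions: list of 1-D label arrays, all over the same n points."""
    partitions = [np.asarray(p) for p in base_partitions]
    n = partitions[0].size
    coassoc = np.zeros((n, n))
    for labels in partitions:
        coassoc += (labels[:, None] == labels[None, :]).astype(float)
    coassoc /= len(partitions)
    # Convert similarity to distance and apply average-linkage clustering.
    dist = squareform(1.0 - coassoc, checks=False)
    Z = linkage(dist, method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Example: three base clusterings of six points, combined into two clusters.
print(ensemble_labels([[0, 0, 0, 1, 1, 1],
                       [0, 0, 1, 1, 1, 1],
                       [1, 1, 1, 0, 0, 0]], n_clusters=2))
```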

Stability and Kinetic Analysis during Vermicomposting of Sewage Sludge

The present study aims at the conversion of sewage sludge into a stable compost product by vermicomposting sewage sludge mixed with cattle manure and sawdust in five different proportions based on C/N ratios (C/N 15 (R1), 20 (R2), 25 (R3) and 30 (R4), plus a control (R5)), employing the epigeic earthworm Eisenia fetida. Higher reductions in the C/N ratio, CO2 evolution and oxygen uptake rate (OUR) were observed in R4, demonstrating compost stability. In addition, R4 proved to be the best combination for the growth of the earthworms. In order to observe the optimal degradation, the kinetics of organic matter degradation during vermicomposting were quantitatively evaluated. A model was developed by assuming that the composting process is carried out in a homogeneous way and that the kinetics of the decomposition reaction are represented by a Monod-type equation. The results exhibit comparable variations in the kinetic constants Km and K3 under varying parameters during the vermicomposting process. The higher R2 value in R4 indicated better suitability of the Lineweaver-Burk plot. The higher degradability coefficient (K) obtained for R4 reveals an optimal nutrient balance, which not only enhanced the affinity of the enzymes towards the substrate but also improved the degradation process. Therefore, R4 proved to be the best feed combination for the vermicomposting process compared with the other reactors.
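
The Monod-type formulation referred to above can be written in a generic form (the paper's exact notation and constants may differ) as a specific degradation rate

\[
r = \frac{k_{max}\, S}{K_m + S},
\]

where S is the biodegradable substrate concentration, k_max the maximum specific degradation rate and K_m the half-saturation constant. Taking reciprocals gives the Lineweaver-Burk linearization

\[
\frac{1}{r} = \frac{K_m}{k_{max}} \cdot \frac{1}{S} + \frac{1}{k_{max}},
\]

so plotting 1/r against 1/S yields the kinetic constants from the slope and intercept; the R2 of this linear fit is the suitability measure reported for reactor R4.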

Enhanced Approaches to Rectify the Noise, Illumination and Shadow Artifacts

Enhancing the quality of two-dimensional signals is one of the most important factors in the fields of video surveillance and computer vision. In real-life video surveillance, false detections usually occur due to the presence of random noise, illumination changes and shadow artifacts. Detection methods based on background subtraction face several problems in accurately detecting objects in realistic environments. In this paper, we propose a noise removal algorithm using a neighborhood comparison method with thresholding. Illumination variations are corrected in the detected foreground objects using a combination of techniques, namely homomorphic decomposition, curvelet transformation and a gamma adjustment operator. Shadows are removed using a chromaticity estimator together with a local relation estimator. Results are compared with existing methods and demonstrate high robustness in video surveillance.
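
A minimal sketch of the neighborhood-comparison idea for noise removal follows; it is an illustrative simplification rather than the full algorithm. A foreground pixel in the detection mask is kept only if enough of its 8-neighbors are also foreground.

```python
import numpy as np
from scipy.ndimage import convolve

def remove_isolated_noise(mask, min_neighbors=3):
    """mask: 2-D binary foreground mask (1 = detected object pixel).
    Pixels with fewer than `min_neighbors` foreground neighbors are treated
    as random noise and suppressed."""
    mask = mask.astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbor_count = convolve(mask, kernel, mode="constant", cval=0)
    return ((mask == 1) & (neighbor_count >= min_neighbors)).astype(np.uint8)
```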

WEMax: Virtual Manned Assembly Line Generation

Presented in this paper is the framework of a software tool, 'WEMax'. WEMax was created for the analysis and simulation of manned assembly lines in order to sustain and improve the performance of manufacturing systems. In a manufacturing system, performance, such as productivity, is a key element of the competitiveness of the output products. However, the performance of manned assembly lines is difficult to forecast, because human labor cannot be captured reliably by computer simulation models or mathematical models. Existing approaches to performance forecasting for manned assembly lines are limited either to matters of the human operator itself, such as ergonomics and workload design, or to simulation that neglects human factors. Consequently, an approach for forecasting and improving manned assembly line performance needs to be researched. As a solution to this problem, this study proposes a framework for the generation and simulation of virtual manned assembly lines, and the framework has been implemented as software.

Process Parameters Optimization for Pulsed TIG Welding of 70/30 Cu-Ni Alloy Welds Using Taguchi Technique

The Taguchi approach was applied to determine the most influential control factors that yield better tensile strength in joints of pulsed-current TIG welded 70/30 Cu-Ni alloy. Taguchi parametric design and optimization was used to evaluate the effect of process parameters such as pulse frequency, peak current, base current and welding speed on the tensile strength of pulsed-current TIG welded 70/30 Cu-Ni alloy of 5 mm thickness. Through the Taguchi parametric design approach, the optimum levels of the process parameters were determined at the 95% confidence level. The results indicate that pulse frequency, peak current, welding speed and base current are significant parameters in deciding the tensile strength of the joint. The predicted optimal value of tensile strength for pulsed-current gas tungsten arc welded (PC GTAW) 70/30 Cu-Ni alloy welds is 368.8 MPa.
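
Since tensile strength is a larger-the-better response, the Taguchi signal-to-noise ratio typically used in such an analysis is

\[
S/N = -10 \log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^{2}}\right),
\]

where the y_i are the measured tensile strengths of the n replicates at a given parameter setting; the level of each factor that maximizes the mean S/N ratio is taken as optimal.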

The Thought of Islamic Literature in Modern Malaysian Literature

This study aims to investigate the emergence of the thought of Islamic literature in the development of modern Malay literature in Malaysia. It examines the views, approaches and theories discussed and argued by literary scholars. Further, this study investigates the influence of the thought of Islamic literature on the development of modern Malay literature in Malaysia by examining the emergence of prominent scholars and of bodies that organized competitions for writing Islamic literary works. The findings reveal that in the 1970s the movement began to be accepted by the literary society. Government bodies played an important role in creating and disseminating Islamic literary works.

Review and Comparison of Associative Classification Data Mining Approaches

Associative classification (AC) is a data mining approach that combines association rule mining and classification to build classification models (classifiers). AC has attracted significant attention from researchers mainly because it derives accurate classifiers that contain simple yet effective rules. In the last decade, a number of associative classification algorithms have been proposed, such as Classification Based on Associations (CBA), Classification based on Multiple Association Rules (CMAR), Class based Associative Classification (CACA), and Classification based on Predictive Association Rules (CPAR). This paper surveys the major AC algorithms and compares the steps and methods performed in each algorithm, including rule learning, rule sorting, rule pruning, classifier building, and class prediction.
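
To make one of the compared steps concrete, the sketch below shows a CBA-style rule sorting (higher confidence first, ties broken by support and then by earlier generation order), which several of the surveyed algorithms adopt in some variation. The rule representation and example rules are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    antecedent: frozenset   # item set, e.g. frozenset({"age<=30", "income=high"})
    label: str              # predicted class
    support: float
    confidence: float
    rule_id: int            # generation order

def cba_sort(rules):
    """CBA-style total order: confidence desc, then support desc,
    then earlier-generated rules first."""
    return sorted(rules, key=lambda r: (-r.confidence, -r.support, r.rule_id))

rules = [
    Rule(frozenset({"a"}), "yes", 0.30, 0.90, 2),
    Rule(frozenset({"b"}), "no",  0.40, 0.90, 1),
    Rule(frozenset({"a", "b"}), "yes", 0.20, 0.95, 3),
]
for r in cba_sort(rules):
    print(r.label, r.confidence, r.support)
```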

Can Exams Be Shortened? Using a New Empirical Approach to Test in Finance Courses

Marking exams is universally detested by lecturers. Final exams in many higher education courses often last 3.0 hrs. Do exams really need to be so long? Can we justifiably reduce the number of questions on them? Surprisingly few have researched these questions, arguably because of the complexity and difficulty of using traditional methods. To answer these questions empirically, we used a new approach based on three key elements: Use of an unusual variation of a true experimental design, equivalence hypothesis testing, and an expanded set of six psychometric criteria to be met by any shortened exam if it is to replace a current 3.0-hr exam (reliability, validity, justifiability, number of exam questions, correspondence, and equivalence). We compared student performance on each official 3.0-hr exam with that on five shortened exams having proportionately fewer questions (2.5, 2.0, 1.5, 1.0, and 0.5 hours) in a series of four experiments conducted in two classes in each of two finance courses (224 students in total). We found strong evidence that, in these courses, shortening of final exams to 2.0 hrs was warranted on all six psychometric criteria. Shortening these exams by one hour should result in a substantial one-third reduction in lecturer time and effort spent marking, lower student stress, and more time for students to prepare for other exams. Our approach provides a relatively simple, easy-to-use methodology that lecturers can use to examine the effect of shortening their own exams.
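
Equivalence hypothesis testing differs from the usual difference test in that the burden of proof is reversed: the shortened exam is accepted as equivalent only if the data show the difference lies within a pre-set margin. A common form is the two one-sided tests (TOST) procedure, sketched below for two independent groups of exam scores; the margin and the data are illustrative, not the paper's.

```python
# Two one-sided tests (TOST): conclude the shortened exam is "equivalent" to
# the full exam only if the mean difference is shown to lie within +/- margin.
import numpy as np
from scipy import stats

def tost_independent(a, b, margin):
    a, b = np.asarray(a, float), np.asarray(b, float)
    # Lower test: H0: mean(a) - mean(b) <= -margin  vs  H1: > -margin
    p_lower = stats.ttest_ind(a, b - margin, alternative="greater").pvalue
    # Upper test: H0: mean(a) - mean(b) >= +margin  vs  H1: < +margin
    p_upper = stats.ttest_ind(a, b + margin, alternative="less").pvalue
    return max(p_lower, p_upper)   # equivalence claimed if this is < alpha

full_exam  = [62, 70, 75, 68, 71, 66, 73, 69]
short_exam = [64, 69, 74, 70, 70, 67, 72, 68]
print(tost_independent(short_exam, full_exam, margin=5.0))
```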

Retrofitting of Bridge Piers against the Scour Damages: Case Study of the Marand-Soofian Route Bridge

Bridge piers constructed in the course of high-water rivers cause variations in the flow patterns, mostly as a result of changes in the river cross-section. By reducing the river cross-section, bridge piers significantly affect the flow patterns. Once the flow approaches the piers, the streamlines change their arrangement, causing different flow patterns to appear around the bridge piers. New flow patterns arise according to the geometry and other technical characteristics of the piers. One of the most significant consequences of this phenomenon is the scour generated around the bridge piers, which threatens the safety of the structure. In determining the properties of scour holes, finding the maximum scour depth is an important factor. In this manuscript a numerical simulation of the scour around the Marand-Soofian route bridge piers has been carried out using the SSIIM 2.0 software, and the maximum scour depth has been obtained. Finally, methods for retrofitting bridge piers against scour and for reducing the amount of scour are presented.

A Human Activity Recognition System Based On Sensory Data Related to Object Usage

Sensor-based activity recognition systems usually take into account which sensors have been activated when an activity is performed. The system then combines the conditional probabilities of those sensors to represent different activities and makes its decision on that basis. However, information about the sensors that are not activated may also be of great help in deciding which activity has been performed. This paper proposes an approach in which sensory data related to both the usage and the non-usage of objects are utilized to classify activities. Experimental results show the promising performance of the proposed method.
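
A small naive-Bayes-style sketch of the core idea is given below, where sensors that were not triggered also contribute evidence. The activities, object sensors and probabilities are illustrative placeholders, not values learned from the paper's data.

```python
import numpy as np

# P(sensor fires | activity): rows = activities, columns = object sensors.
activities = ["make_tea", "wash_dishes"]
sensors    = ["kettle", "cup", "tap", "sponge"]
p_fire = np.array([
    [0.95, 0.90, 0.30, 0.05],   # make_tea
    [0.05, 0.20, 0.95, 0.85],   # wash_dishes
])
prior = np.array([0.5, 0.5])

def classify(observed):
    """observed: dict sensor -> 1 (activated) or 0 (not activated).
    Non-activated sensors contribute the factor (1 - P(fire | activity))."""
    log_post = np.log(prior)
    for j, s in enumerate(sensors):
        p = p_fire[:, j] if observed[s] == 1 else 1.0 - p_fire[:, j]
        log_post += np.log(p)
    return activities[int(np.argmax(log_post))]

print(classify({"kettle": 1, "cup": 1, "tap": 0, "sponge": 0}))   # -> make_tea
```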

A Weighted Approach to Unconstrained Iris Recognition

This paper presents a weighted approach to unconstrained iris recognition. Nowadays, commercial systems are usually characterized by strong acquisition constraints that rely on the subject's cooperation; however, such cooperation is not always achievable in real daily-life scenarios. Researchers have therefore focused on reducing these constraints while maintaining system performance through new techniques. To cope with large environmental variation, the proposed iris recognition system introduces two main improvements. First, to handle extremely uneven lighting conditions, statistics-based illumination normalization is applied to the eye region to increase the accuracy of the iris features; detection of the iris image is based on the AdaBoost algorithm. Second, the weighting is designed using Gaussian functions of the distance to the center of the iris. A local binary pattern (LBP) histogram is then applied to texture classification using these weights. Experiments showed that the proposed system provides users with a more flexible and feasible way to interact with a verification system through iris recognition.
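
A minimal sketch of the Gaussian weighting idea applied to an LBP histogram over a segmented iris region follows; the parameter values, region layout and use of scikit-image's LBP routine are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def weighted_lbp_histogram(iris_patch, center, sigma=20.0, n_points=8, radius=1):
    """iris_patch: 2-D grayscale array of the segmented iris region.
    center: (row, col) of the iris center.  Each pixel's LBP code is added to
    the histogram with a Gaussian weight that decays with distance from the
    center, so codes near the center count more than peripheral ones."""
    lbp = local_binary_pattern(iris_patch, n_points, radius, method="uniform")
    rows, cols = np.indices(iris_patch.shape)
    dist2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    weights = np.exp(-dist2 / (2.0 * sigma ** 2))
    n_bins = n_points + 2                      # "uniform" LBP has P + 2 codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), weights=weights)
    return hist / hist.sum()
```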

A Development of OTOP Web Application: In Case of Samut Songkhram Province

This paper presents the development of a web-based system to serve the need of selling One Tambon One Product (OTOP) products in Samut Songkhram, Thailand. The system was designed to promote and sell OTOP products on a Web site. We describe the design approach and the functional components of the system, which was developed using PHP, JavaScript and a MySQL database. To evaluate the system's performance, questionnaires were used to measure the satisfaction of specialists and users with the system's usability. The results were satisfactory: the mean scores for specialists and users were 4.05 and 3.97, and the standard deviations were 0.563 and 0.644, respectively. Further analysis showed that the quality of the OTOP Web site was also at a good level.

Estimation of the Upper Tail Dependence Coefficient for Insurance Loss Data Using an Empirical Copula-Based Approach

Considerable focus in the world of insurance risk quantification is placed on modeling loss values from lines of business (LOBs) that possess upper tail dependence. Copulas such as the Joe, Gumbel and Student-t copula may be used for this purpose. The copula structure imparts a desired level of tail dependence on the joint distribution of claims from the different LOBs. Alternatively, practitioners may possess historical or simulated data that already exhibit upper tail dependence, through the impact of catastrophe events such as hurricanes or earthquakes. In these circumstances, it is not desirable to induce additional upper tail dependence when modeling the joint distribution of the loss values from the individual LOBs. Instead, it is of interest to accurately assess the degree of tail dependence already present in the data. The empirical copula and its associated upper tail dependence coefficient are presented in this paper as robust, efficient means of achieving this goal.
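
For context, the upper tail dependence coefficient of a copula C is defined as

\[
\lambda_U = \lim_{u \to 1^-} \frac{1 - 2u + C(u, u)}{1 - u},
\]

and, given n paired loss observations with an empirical copula \hat{C}_n built from their ranks, a natural plug-in estimator (one common nonparametric choice, not necessarily the exact estimator studied in the paper) evaluates this quotient at a threshold u = 1 - k/n close to one:

\[
\hat{\lambda}_U = \frac{1 - 2u + \hat{C}_n(u, u)}{1 - u}.
\]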