Measuring the Relative Efficiency of Korean Construction Companies Using DEA/Window

The sub-prime mortgage crisis that began in the US is regarded as the most severe economic crisis since the Great Depression of the early 20th century. Hidden problems in the efficient operation of businesses were exposed all at once, and many financial institutions went bankrupt or filed for court receivership. The collapse of the physical (real-economy) markets led, in turn, to bankruptcies of manufacturing and construction businesses. This study analyzes the dynamic efficiency of construction businesses over the five years surrounding the global financial crisis. By uncovering the trend and stability of the efficiency of construction businesses, the study's objective is to improve management efficiency in the ever-changing construction market. Variables were selected by analyzing corporate information on the top 20 construction businesses in Korea, which were then analyzed for static efficiency in 2008 and dynamic efficiency between 2006 and 2010. Unlike other studies, this study deduces the efficiency trend and stability of construction businesses over five years by using the DEA/Window model. From the analysis results, efficient and inefficient companies could be identified. In addition, relative efficiency among the DMUs was measured by comparing the relationship between the input and output variables of the construction businesses. This study can serve as a reference for improving the management efficiency of low-efficiency companies, based on the efficiency analysis of construction businesses.
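At its core, DEA solves one small linear program per decision-making unit (DMU). The sketch below, a minimal illustration using SciPy with hypothetical input/output data (the variable names and figures are illustrative assumptions, not the study's data), shows the input-oriented CCR model that a DEA/Window analysis would simply re-run for each DMU against the DMUs pooled in each moving window of years.

```python
# Minimal sketch of an input-oriented CCR DEA model, the building block of
# DEA/Window analysis; data below are hypothetical, not the study's figures.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Efficiency of DMU k. X: (m inputs x n DMUs), Y: (s outputs x n DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                           # sum_j lambda_j * x_ij <= theta * x_ik
        A_ub.append(np.concatenate(([-X[i, k]], X[i, :])))
        b_ub.append(0.0)
    for r in range(s):                           # sum_j lambda_j * y_rj >= y_rk
        A_ub.append(np.concatenate(([0.0], -Y[r, :])))
        b_ub.append(-Y[r, k])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun                               # theta = 1.0 means DEA-efficient

# Hypothetical data: 2 inputs (assets, employees) and 1 output (revenue) for 4 DMUs.
X = np.array([[20.0, 30.0, 40.0, 25.0],
              [150.0, 200.0, 300.0, 180.0]])
Y = np.array([[500.0, 700.0, 800.0, 650.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(4)])
```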

The Effect of Mixture Velocity and Droplet Diameter on an Oil-Water Separator Using Computational Fluid Dynamics (CFD)

The characteristics of fluid flow and phase separation in an oil-water separator were numerically analysed as part of the work presented herein. Simulations were performed for different velocities and droplet diameters, and the way these parameters influence the separator geometry was studied. The simulations were carried out using the software package Fluent 6.2, which is designed for the numerical simulation of fluid flow and mass transfer. The model consisted of a cylindrical horizontal separator. A tetrahedral mesh was employed in the computational domain. The two-phase flow condition was simulated with the two-fluid model, taking turbulence effects into consideration using the k-ε model. The results showed a strong dependency of phase separation on mixture velocity and droplet diameter. An increase in mixture velocity slows phase separation and consequently requires a weir of greater height, whereas an increase in droplet diameter produces better phase separation. The simulations are in agreement with results reported in the literature and show that CFD can be a useful tool for studying a horizontal oil-water separator.
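The reported trend with droplet diameter is consistent with Stokes' law for the terminal rise velocity of a dispersed oil droplet in the continuous water phase; the relation below is included only as an illustrative reasoning step, not as something stated in the abstract.

```latex
% Illustrative reasoning step (not from the abstract): Stokes' rise velocity of an
% oil droplet of diameter d in water explains why larger droplets separate faster,
% since v_t grows with d^2.
v_t = \frac{g\,d^{2}\,(\rho_w - \rho_o)}{18\,\mu_c}
```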

Product-Based Industrial Information Systems (Application to the Steel Industry)

This paper shows a simple and effective approach to the design and implementation of Industrial Information Systems (IIS) oriented to controlling the characteristics of each individual product manufactured in a production line, as well as its manufacturing conditions. The particular products considered in this work are large steel strips that are coiled just after their manufacture. However, the approach is directly applicable to coiled strips in other industries, such as paper, textile, aluminum, etc. These IIS provide very detailed information on each manufactured product, which complements the general information managed by the ERP system of the production line. In spite of the high importance of this type of IIS for guaranteeing and improving the quality of the products manufactured in many industries, there are very few works about them in the technical literature. For this reason, this paper represents an important contribution to the development of this type of IIS, providing guidelines for their design, implementation and exploitation.
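As an illustration of the product-indexed data such an IIS could hold alongside the ERP's order-level records, the sketch below defines a hypothetical per-coil record with length-resolved measurements; all field names are assumptions made for illustration, not the paper's actual data model.

```python
# Hypothetical sketch of a product-indexed record for a coiled steel strip; the ERP
# keeps order-level data, while the IIS keeps dense, length-resolved measurements.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LengthSample:
    position_m: float        # position along the strip length
    thickness_mm: float
    width_mm: float
    coiling_temp_c: float

@dataclass
class CoilRecord:
    coil_id: str             # key used to link back to the ERP production order
    erp_order_id: str
    steel_grade: str
    samples: List[LengthSample] = field(default_factory=list)

    def out_of_tolerance(self, nominal_mm: float, tol_mm: float) -> List[LengthSample]:
        """Return the length positions where thickness drifts outside tolerance."""
        return [s for s in self.samples if abs(s.thickness_mm - nominal_mm) > tol_mm]
```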

Rigorous Modeling of Fixed-Bed Reactors Containing Finite Hollow Cylindrical Catalyst with Michaelis-Menten Type of Kinetics

A large number of chemical, bio-chemical and pollution-control processes use heterogeneous fixed-bed reactors. The use of finite hollow cylindrical catalyst pellets can enhance conversion levels in such reactors. The absence of the pellet core can significantly lower the diffusional resistance associated with the solid phase. This leads to better utilization of the catalytic material, which is reflected in higher values of the effectiveness factor and ultimately in an enhanced conversion level in the reactor. It is therefore important to develop a rigorous heterogeneous model for the reactor that incorporates the two-dimensional nature of the solid phase arising from the finite hollow cylindrical catalyst pellet. At present, heterogeneous models reported in the literature invariably employ one-dimensional solid-phase models intended for spherical catalyst pellets. The objective of this paper is to present a rigorous model of fixed-bed reactors containing finite hollow cylindrical catalyst pellets. The reaction kinetics considered here is the widely used Michaelis–Menten kinetics for liquid-phase bio-chemical reactions, with reaction parameters taken for the enzymatic degradation of urea. Results indicate that increasing the height-to-diameter ratio helps to improve the conversion level, whereas decreasing the thickness is apparently not as effective. The latter can be explained by the higher void fraction of the bed, which allows a smaller amount of solid phase to be packed in the fixed-bed bio-chemical reactor.
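A minimal sketch of the pellet-scale balance implied by the abstract is given below, assuming isothermal conditions and an effective diffusivity; boundary conditions and symbols beyond the Michaelis–Menten form itself are illustrative assumptions, not the paper's exact model statement.

```latex
% Michaelis-Menten rate (e.g., enzymatic urea degradation):
-r_S = \frac{V_{\max}\, C_S}{K_m + C_S}
% Two-dimensional steady diffusion-reaction in a finite hollow cylindrical pellet
% (radial coordinate r between inner and outer radii, axial coordinate z):
D_e\left(\frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial C_S}{\partial r}\right)
 + \frac{\partial^{2} C_S}{\partial z^{2}}\right) = \frac{V_{\max}\, C_S}{K_m + C_S}
```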

A Multi-Objective Optimization Approach to Optimize Vehicle Ride and Handling Characteristics

Vehicle suspension design must fulfill several conflicting criteria. Among them is ride comfort, which is attained by minimizing the acceleration transmitted to the sprung mass via the suspension spring and damper. Good handling is also a desirable property, but it requires a stiff suspension and is therefore in conflict with good ride. Another desirable feature of a suspension is the minimization of its maximum travel. This travel, called suspension working space in the vehicle dynamics literature, is also a design constraint and favors good ride. In this research, a full-car model with eight degrees of freedom has been developed, and the three above-mentioned criteria, namely ride, handling and working space, have been adopted as objective functions. The Multi-Objective Programming (MOP) discipline has been used to find the Pareto front, and reasoning is then applied to choose a design point among these non-dominated points.
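As a minimal sketch of the Pareto-front step, the code below filters the non-dominated designs from a set of evaluated candidates, assuming all three objectives are to be minimised; the candidate values are hypothetical, not the paper's results.

```python
# Minimal sketch of extracting a Pareto front from evaluated suspension designs.
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a dominates b if it is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (ride RMS acceleration, handling index, working space) for hypothetical designs
candidates = [(1.2, 0.8, 0.09), (1.0, 1.1, 0.07), (1.5, 0.6, 0.11), (1.1, 0.9, 0.08)]
print(pareto_front(candidates))
```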

Synergy in Vertical Transformations of Expert Designers

Existing literature on design reasoning tends to give one-sided accounts of expert design behaviour based on internal processing, while ecological theories focus just as one-sidedly on external elements; the result is a lack of a unifying design cognition theory. Although current extended design cognition studies acknowledge the intellectual interaction between internal and external resources, there still seems to be insufficient understanding of the complexities involved in such interactive processes. This paper therefore proposes a novel multi-directional model for design researchers to map the complex and dynamic conduct-controlling behaviour, in which the computational and ecological perspectives are integrated in a vertical manner. A clear distinction between the identified intentional and emerging physical drivers, and the relationships between them during the early phases of experts' design processes, is demonstrated by presenting a case study in which the model was employed.

The Development of Positive Emotion Regulation Strategies Scale for Children and Adolescents

The study was designed to develop a measure, the Positive Emotion Regulation Questionnaire (PERQ), that assesses positive emotion regulation strategies through self-report. The 14 items developed for the survey instrument were based upon the literature on elements of positive emotion regulation strategies. 319 elementary students (ages ranging from 12 to 14) were recruited from three public elementary schools and surveyed on their use of positive emotion regulation strategies. Of the 319 subjects, 20 returned invalid questionnaires, yielding a response rate of 92%. The data collected were analyzed through item analysis, factor analysis, and structural equation modeling. Based on the results of the item analysis, the formal survey instrument was reduced to 11 items. A principal axis factor analysis with varimax rotation was performed on the responses, resulting in a two-factor solution (savoring strategy and neutralizing strategy) that accounted for 55.5% of the total variance. The two-factor structure of the scale was then confirmed by structural equation modeling. Finally, the reliability coefficients of the two factors were Cronbach's α = .92 and .74. A gender difference was found only in the savoring strategy. In conclusion, the positive emotion regulation strategies questionnaire offers a brief, internally consistent, and valid self-report measure for understanding the emotion regulation strategies of children, which may be useful to researchers and applied professionals.
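For readers unfamiliar with the reliability statistic reported, the sketch below computes Cronbach's alpha on a hypothetical item-response matrix (rows = respondents, columns = items); the data are illustrative, not the study's.

```python
# Minimal sketch of the Cronbach's alpha computation used to report scale reliability.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(100, 1))                  # shared trait component
items = np.clip(base + rng.integers(-1, 2, size=(100, 6)), 1, 5)
print(round(cronbach_alpha(items.astype(float)), 3))
```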

Improving the Effectiveness of Software Testing through Test Case Reduction

This paper proposes a new technique for improving the efficiency of software testing, based on the conventional goal of reducing the test cases that have to be executed for any given software. The approach exploits the advantage of regression testing, where fewer test cases lessen the time consumption of testing as a whole. The technique also offers a means to perform test case generation automatically. Compared to a technique in the literature in which the tester has no option but to perform test case generation manually, the proposed technique provides a better option. For test case reduction, the technique uses simple algebraic conditions to assign fixed values to variables (maximum, minimum and constant values). In this way, the variables' values are limited to a definite range, resulting in fewer possible test cases to process. The technique can also be applied to program loops and arrays.
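The sketch below illustrates the general idea of shrinking the test-case space by fixing each input variable to a small set of algebraically chosen values (minimum, maximum and a representative constant); the variable names, ranges and choice of constant are hypothetical, not the paper's specific conditions.

```python
# Minimal sketch of boundary/constant-value test case generation.
from itertools import product

def reduced_test_cases(var_ranges: dict) -> list:
    """var_ranges maps a variable name to (min, max); a mid-range constant is added."""
    candidate_values = {
        name: (lo, hi, (lo + hi) // 2)      # min, max, constant
        for name, (lo, hi) in var_ranges.items()
    }
    names = list(candidate_values)
    return [dict(zip(names, combo)) for combo in product(*candidate_values.values())]

cases = reduced_test_cases({"age": (0, 120), "items": (1, 99)})
print(len(cases), cases[:3])   # 3^2 = 9 cases instead of the full cross product
```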

Outlier Pulse Detection and Feature Extraction for Wrist Pulse Analysis

Wrist pulse analysis for the identification of health status is found in ancient Indian as well as Chinese literature. Preprocessing of the wrist pulse is necessary to remove outlier pulses and fluctuations prior to the analysis of the pulse pressure signal. This paper discusses the identification of irregular pulses present in the pulse series and the intricacies associated with the extraction of time-domain pulse features. A Dynamic Time Warping (DTW) approach is utilized for the identification of outlier pulses in the wrist pulse series. The ambiguity present in the identification of pulse features is resolved with the help of the first derivative of the ensemble average of the wrist pulse series. An algorithm for detecting the tidal and dicrotic notches in individual wrist pulse segments is proposed.
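A minimal sketch of using a DTW distance to flag outlier pulses is shown below: each pulse segment is compared against a template (e.g., the ensemble average), and segments whose distance is unusually large are treated as outliers. The threshold rule is an illustrative assumption, not the paper's exact procedure.

```python
# Minimal DTW distance and a simple outlier-flagging rule for pulse segments.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def flag_outliers(pulses, template, factor=2.0):
    d = np.array([dtw_distance(p, template) for p in pulses])
    return d > d.mean() + factor * d.std()     # illustrative threshold
```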

Achieving Business and IT Alignment from Organisational Learning Perspectives

Business and IT alignment has remained a top concern for business and IT executives for almost three decades. Many researchers have conducted empirical studies on the relationship between business-IT alignment and performance. Yet these approaches, lacking a social perspective, have had little impact on sustaining performance and competitive advantage. In addition, the alignment literature that explores organisational learning, as represented in shared understanding, communication, cognitive maps and experiences, remains limited. Hence, this paper proposes an integrated process that enables the social and intellectual dimensions through the concept of organisational learning, in particular the feedback and feedforward processes, which create value across dynamic, multilevel learning. This mechanism enables ongoing effectiveness through the development of individuals, groups and organisations, which improves the quality of business and IT strategies and drives performance.

Optimal Image Compression Based on Sign and Magnitude Coding of Wavelet Coefficients

The wavelet transform is a very powerful tool for image compression. One of its advantages is the provision of both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding the sign. It is generally assumed that there is no compression gain to be obtained from coding the sign. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5; the same assumption concerns the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information on whether a wavelet coefficient belongs to the lower or upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are separately entropy encoded: the sign map and the magnitude map. The refinement information indicating whether a wavelet coefficient belongs to the lower or upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed and simulations are performed on three standard grey-scale images: Lena, Barbara and Cameraman. Five decomposition scales are used with the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality, and it is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature and again shows good performance in terms of PSNR.
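The sketch below illustrates the kind of statistic separate sign coding relies on: the coefficients are split into a sign map and a magnitude map, and the online probability of a positive sign is estimated per bit plane. The coefficient array is random dummy data, not an actual wavelet subband, and the loop is only a simplified stand-in for a bit-plane coder.

```python
# Minimal sketch: split coefficients into sign/magnitude maps and estimate the
# per-bit-plane probability of a positive sign among newly significant coefficients.
import numpy as np

coeffs = np.random.default_rng(1).normal(scale=30.0, size=(64, 64))

sign_map = np.sign(coeffs).astype(np.int8)          # -1, 0, +1
mag_map = np.abs(coeffs).astype(np.int32)           # magnitudes, coded separately

num_planes = int(np.ceil(np.log2(mag_map.max() + 1)))
for plane in reversed(range(num_planes)):           # most significant plane first
    threshold = 1 << plane
    newly_significant = (mag_map >= threshold) & (mag_map < (threshold << 1))
    signs = sign_map[newly_significant]
    if signs.size:
        p_positive = np.mean(signs > 0)             # online estimate fed to the coder
        print(f"bit plane {plane}: {signs.size:5d} new coefficients, P(+) ~= {p_positive:.2f}")
```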

Ray Tracing Technique-Based 60 GHz Band Propagation Modelling and the Influence of People Shadowing

The main objective of this paper is to present a tool that we have developed to characterize and model indoor radio channel propagation at millimetre waves. The tool is based on the ray tracing technique (RTT). In a realistic environment, the significant impact of human body shadowing and of other objects in motion on the indoor 60 GHz propagation channel cannot be neglected; hence, our proposed model allows the simulation of propagation in a dynamic indoor environment. First, we describe a model of the human body. Second, the RTT combined with this model is used to simulate the propagation of millimetre waves in the presence of persons in motion. The simulation results are in agreement with those reported in the literature, in particular regarding the effects of people's motion on the temporal properties of the channel.
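As a minimal sketch of one shadowing test such a ray tracer must perform, the code below checks whether the direct ray between transmitter and receiver is blocked by a person, under the simplifying assumption (ours, not necessarily the paper's body model) that the person is a vertical cylinder and the test is done in the horizontal plane.

```python
# Minimal line-of-sight shadowing test against a cylindrical person model.
import numpy as np

def los_blocked(tx, rx, person_xy, radius=0.25):
    tx, rx, p = map(np.asarray, (tx, rx, person_xy))
    d = rx - tx
    t = np.clip(np.dot(p - tx, d) / np.dot(d, d), 0.0, 1.0)  # closest point on segment
    return np.linalg.norm(p - (tx + t * d)) <= radius

print(los_blocked(tx=(0.0, 0.0), rx=(5.0, 0.0), person_xy=(2.5, 0.1)))   # True
print(los_blocked(tx=(0.0, 0.0), rx=(5.0, 0.0), person_xy=(2.5, 1.0)))   # False
```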

PeliGRIFF: A Parallel DEM-DLM/FD Method for DNS of Particulate Flows with Collisions

An original Direct Numerical Simulation (DNS) method to tackle the problem of particulate flows at moderate to high concentration and finite Reynolds number is presented. Our method is built on the framework established by Glowinski and his coworkers [1], in the sense that we use their Distributed Lagrange Multiplier/Fictitious Domain (DLM/FD) formulation and their operator-splitting idea, but differs in the treatment of particle collisions. The novelty of our contribution lies in replacing the simple artificial repulsive-force collision model usually employed in the literature by an efficient Discrete Element Method (DEM) granular solver. The use of our DEM solver enables us to consider particles of arbitrary (at least convex) shape and to account for actual contacts, in the sense that particles actually touch each other, in contrast with the simple repulsive-force collision model. We recently upgraded our serial code, GRIFF [2], to full MPI capabilities. Our new code, PeliGRIFF, is developed within the framework of the fully MPI open source platform PELICANS [3]. The new MPI capabilities of PeliGRIFF open new perspectives in the study of particulate flows and significantly increase the number of particles that can be considered in a full DNS approach: O(100000) in 2D and O(10000) in 3D. Results on the 2D/3D sedimentation/fluidization of isometric polygonal/polyhedral particles with collisions are presented.
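To make the contrast with the artificial repulsive-force model concrete, the sketch below shows a linear spring-dashpot normal contact force, the kind of contact law a DEM granular solver resolves when two particles actually touch; the spherical shapes and the stiffness/damping values are illustrative assumptions, and PeliGRIFF's convex-shape contact handling is considerably more general.

```python
# Minimal sketch of a linear spring-dashpot normal contact force for two spheres.
import numpy as np

def normal_contact_force(x_i, x_j, r_i, r_j, v_i, v_j, k_n=1.0e4, eta_n=5.0):
    """Force on particle i from particle j; zero if the particles do not overlap."""
    x_i, x_j, v_i, v_j = map(np.asarray, (x_i, x_j, v_i, v_j))
    d = x_i - x_j
    dist = np.linalg.norm(d)
    overlap = (r_i + r_j) - dist
    if overlap <= 0.0:
        return np.zeros(3)
    n = d / dist                                   # unit normal from j to i
    v_rel_n = np.dot(v_i - v_j, n)                 # normal relative velocity
    return (k_n * overlap - eta_n * v_rel_n) * n   # elastic repulsion + damping

print(normal_contact_force([0, 0, 0], [0.9, 0, 0], 0.5, 0.5, [0, 0, 0], [0, 0, 0]))
```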

Secure Power Systems Against Malicious Cyber-Physical Data Attacks: Protection and Identification

The security of power systems against malicious cyber-physical data attacks has become an important issue. The adversary attempts to manipulate the information structure of the power system and inject malicious data that make the state variables deviate while evading existing detection techniques based on the residual test. The solutions proposed in the literature are capable of immunizing the power system against false data injection, but they might be too costly and physically impractical in an expansive distribution network. To this end, we define an algebraic condition under which a trustworthy power system evades malicious data injection. The proposed protection scheme secures the power system by deterministically reconfiguring the information structure and the corresponding residual test. More importantly, it does not require any physical effort at either the microgrid or the network level. An identification scheme for finding the meters under attack is proposed as well. Finally, the well-known IEEE 30-bus system is adopted to demonstrate the effectiveness of the proposed schemes.
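For context, the residual test referenced above can be sketched for a DC state estimation model z = Hx + e: the weighted least-squares estimate is formed and the weighted residual norm is compared to a threshold. The matrices, noise level and threshold below are illustrative; note that a stealthy attack of the form a = Hc leaves this residual unchanged, which is what motivates the protection scheme.

```python
# Minimal sketch of the bad-data (residual) test in DC state estimation.
import numpy as np

def residual_test(H, z, R, tau):
    W = np.linalg.inv(R)
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)   # WLS state estimate
    r = z - H @ x_hat                                   # measurement residual
    J = float(r @ W @ r)                                # chi-square-like statistic
    return J, J > tau                                   # True means bad data flagged

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [0.5, 0.5]])
R = 0.01 * np.eye(4)
x_true = np.array([1.0, 0.8])
z = H @ x_true + 0.01 * np.random.default_rng(2).standard_normal(4)
print(residual_test(H, z, R, tau=5.99))   # illustrative chi-square threshold, 95%, m-n = 2 dof
```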

Coordination in an Agrifood Supply Chain

A coordinated supply chain represents a major challenge for the different actors involved, because each agent responds to individual interests. The paper presents a framework based on the reviewed literature regarding the system's decision structure and the nature of demand. It then characterizes an agrifood supply chain in the Central Region of Colombia, which corresponds to a decentralized distribution system with stochastic demand. Finally, the paper recommends coordinating the chain on the basis of shared information, together with mechanisms for each agent, such as a VMI (vendor-managed inventory) strategy for the farmer-buyer relationship, an information system for farmers, and contracts for transportation service providers.

Adaptation of Iterative Methods to Solve Fuzzy Mathematical Programming Problems

Based on fuzzy set theory, this work develops two adaptations of iterative methods to solve mathematical programming problems with uncertainties in the objective function and in the set of constraints. The first uses as its basis the approach proposed by Zimmermann for fuzzy linear programming problems, and the second obtains cut levels and then maximizes the membership function of the fuzzy decision using the bound search method. We outline similarities between the two iterative methods studied. Selected examples from the literature are presented to validate the efficiency of the methods addressed.
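A minimal sketch of the Zimmermann-style formulation referred to above is given below, under the usual assumption of linear membership functions μ₀ for the fuzzy objective and μᵢ for the fuzzy constraints; the crisp equivalent maximizes the overall satisfaction level λ. This is the textbook form, not necessarily the exact adaptation developed in the paper.

```latex
\begin{aligned}
\max_{\lambda,\,x} \quad & \lambda \\
\text{s.t.} \quad & \lambda \le \mu_0(x), \qquad \lambda \le \mu_i(x), \quad i = 1,\dots,m, \\
& \lambda \in [0,1], \qquad x \ge 0 .
\end{aligned}
```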

Classification of Fuzzy Petri Nets and Their Applications

The Petri Net (PN) has proven to be an effective graphical, mathematical, simulation and control tool for Discrete Event Systems (DES). However, with the growth in the complexity of modern industrial and communication systems, PNs have been found inadequate to address the problems of uncertainty and imprecision in data. This gave rise to the amalgamation of fuzzy logic with Petri nets, and a new tool emerged under the name of Fuzzy Petri Nets (FPN). Although a great deal of research has been done on FPN and a number of applications have been proposed, their basic types and structure are still ambiguous. Therefore, in this research, an effort is made to categorize FPN according to their structure and algorithms. Further, a literature review of the applications of FPN in the light of this classification has been carried out.
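For readers new to FPN, the sketch below shows the reasoning step most formulations share: a transition with certainty factor μ fires when every input place carries a truth degree at or above its threshold, and the output places receive min(inputs) × μ. This is one common convention (e.g., Chen-style FPN), shown only as an illustration of the general mechanism the surveyed variants adapt.

```python
# Minimal sketch of one common fuzzy Petri net firing rule.
def fire(transition, marking):
    """marking maps place name -> truth degree in [0, 1]; returns updated marking."""
    degrees = [marking.get(p, 0.0) for p in transition["inputs"]]
    if all(d >= transition["threshold"] for d in degrees):
        value = min(degrees) * transition["mu"]
        for p in transition["outputs"]:
            marking[p] = max(marking.get(p, 0.0), value)   # keep the strongest support
    return marking

t1 = {"inputs": ["p1", "p2"], "outputs": ["p3"], "threshold": 0.3, "mu": 0.9}
print(fire(t1, {"p1": 0.8, "p2": 0.6}))    # {'p1': 0.8, 'p2': 0.6, 'p3': 0.54}
```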

Passive Cooling of Buildings by Using a Solar Chimney

Natural ventilation is an important means to improve indoor thermal comfort and reduce energy consumption. A solar chimney system is a device that enhances natural draft: it uses solar radiation to heat the air inside the chimney, thereby converting thermal energy into kinetic energy. The present study considered parameters such as chimney width and solar intensity, which were believed to have a significant effect on space ventilation. The Fluent CFD software was used to predict buoyant air flow and flow rates in the cavities. The results were compared with published experimental and theoretical data from the literature, and there was an acceptable match in trend between the present results and the published data for the room air changes per hour (ACH). Further, it was noticed that solar intensity has the more significant effect on ACH.
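For reference, the reported ventilation metric can be defined as below; this definition is a standard one added for clarity, with Q the chimney-induced volumetric flow rate and V the room volume, and is not stated explicitly in the abstract.

```latex
% Air changes per hour from flow rate Q (m^3/s) and room volume V (m^3):
\mathrm{ACH} = \frac{3600\, Q}{V}
```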

Genetic Algorithm for Solving Non-Convex Economic Dispatch Problem

Economic dispatch (ED) is considered one of the key functions in electric power system operation. This paper presents a new hybrid approach based on the genetic algorithm (GA) for economic dispatch problems. The GA is a commonly used optimization algorithm predicated on the principle of natural evolution. Using a chaotic queue with the GA generates several neighborhoods of near-optimal solutions to maintain solution variation, which keeps the search process from becoming premature. For the purpose of chaotic queue generation, using the tent equation instead of the logistic equation improves the iterative speed. The results of the proposed approach were compared, in terms of fuel cost, with existing differential evolution and other methods in the literature.
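The sketch below shows the two chaotic maps mentioned above, as they are commonly used to generate the chaotic sequence that perturbs GA candidates; the initial values and lengths are illustrative, and how the sequence feeds the GA neighborhoods is not shown here.

```python
# Minimal sketch of tent-map vs logistic-map chaotic sequence generation.
def tent_sequence(x0=0.37, n=10, mu=2.0):
    xs, x = [], x0
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        xs.append(x)
    return xs

def logistic_sequence(x0=0.37, n=10, r=4.0):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

print(tent_sequence())      # values stay in (0, 1) and can perturb dispatch candidates
print(logistic_sequence())
```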

“Blood Family” Activity with Respect to the Comprehensive Guidance School Program

Children and adolescents growing up in today's world face a growing array of new and old challenges. School counselling is developing rapidly in contemporary education systems around the world, and it can be said that the counselling system in Turkey is still relatively new. In this study, the “Family of the Blood” activity was developed with respect to the comprehensive guidance school program. The sample included 22 adolescents who were high school students. The activity was carried out in 4 sessions, each of which lasted 45 minutes. In the first session, the students' personal-social needs were determined. In the second session, in order to warm up, the students were asked three questions with a constructional focus. In the third session, the counselor and the teacher shared the results of the students' responses obtained in the previous session. In the fourth session, the tables produced by the students were presented in the classroom. In order to evaluate the activity, three questions were asked of the teacher and the counselor. According to the results, the lesson aims and the counselling aims of the curriculum were attained. The results were discussed in the light of the literature, and some suggestions were made. Considering that the activity was beneficial in many respects, similar studies should be carried out in the near future.