Comparison of Treatment Methods for Industrial Tempeh Waste by Constructed Wetland and Activated Sludge

Ever since the industrial revolution began, our ecosystem has changed, and the negatives have arguably outweighed the positives. Industrial waste is usually released into bodies of water such as rivers or the sea. Tempeh waste is one example of a waste stream that carries hazardous and unwanted substances affecting the surrounding environment. Tempeh is a popular fermented food in Asia that is rich in nutrients and active substances. Tempeh liquid waste in particular can cause air pollution, and if it penetrates the soil it contaminates groundwater, rendering the water unfit for consumption. Moreover, bacteria thrive in the polluted water and are often responsible for many kinds of diseases. The treatments applied to this waste are biological, namely constructed wetlands and activated sludge. Both are able to reduce physical and chemical parameters such as temperature, TSS, pH, BOD, COD, NH3-N, NO3-N, and PO4-P, and they are applied before the waste is released into receiving waters. The result is a comparison between the constructed wetland and activated sludge methods, determining which is better suited to reducing the physical and chemical constituents of the waste.

An Efficient Algorithm for Computing All Program Forward Static Slices

Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a backward slice; dually, the set of statements affected by the value of a variable at a point is the forward slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, slices are computed at many different program points. Existing algorithms for computing program slices are designed to compute a slice at a single program point: the program, or the model that represents it, is traversed completely or partially once, and computing more than one slice means applying the same algorithm at every point of interest, so the same program representation is traversed several times. In this paper, an algorithm is introduced that computes all forward static slices of a computer program by traversing the program representation graph once. The introduced algorithm is therefore useful for software engineering applications that require computing program slices at many points of a program. The program representation graph used in this paper is the Program Dependence Graph (PDG).
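
To make the single-traversal idea concrete, here is a minimal sketch (not the paper's algorithm): the forward slice of a node is the set of PDG nodes reachable from it, and memoizing a post-order traversal lets every node's slice be assembled from its successors' slices in one pass. The graph, node numbering, and acyclicity assumption are hypothetical; a real PDG contains cycles (loops) and would first be condensed into strongly connected components.

```python
# Minimal sketch, assuming an acyclic dependence graph; real PDGs have
# cycles and would first be condensed into SCCs.
# Hypothetical PDG: node -> list of dependence successors
# (control- and data-dependence edges merged for brevity).
pdg = {
    1: [2, 3],
    2: [4],
    3: [4],
    4: [],
}

def all_forward_slices(pdg):
    """Compute the forward slice of every node in a single memoized
    post-order traversal, sharing sub-results between slices."""
    slices = {}

    def visit(n):
        if n in slices:
            return slices[n]
        s = {n}
        for m in pdg[n]:
            s |= visit(m)
        slices[n] = s
        return s

    for n in pdg:
        visit(n)
    return slices

print(all_forward_slices(pdg))
# slices: 1 -> {1, 2, 3, 4}, 2 -> {2, 4}, 3 -> {3, 4}, 4 -> {4}
```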

Implementation of a Geo-Knowledge-Based Geographic Information System for Estimating Earthquake Hazard Potential in a Metropolitan Area, Gwangju, Korea

In this study, an inland metropolitan area, Gwangju, in Korea was selected to assess the amplification potential of earthquake motion and to provide information for regional seismic countermeasures. A geographic information system-based expert system was implemented for reliably predicting the spatial geotechnical layers in the entire region of interest by building a geo-knowledge database. In particular, the database consists of existing boring data gathered from prior geotechnical projects and surface geo-knowledge data acquired from site visits. For practical application of the geo-knowledge database to estimating the earthquake hazard potential related to site amplification effects in the study area, seismic zoning maps of geotechnical parameters, such as bedrock depth and site period, were created within the GIS framework. In addition, seismic zonation of site classification was also performed to determine the site amplification coefficients for seismic design at any site in the study area.

Keywords: Earthquake hazard, geo-knowledge, geographic information system, seismic zonation, site period.
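
As a small illustration of the kind of per-borehole computation behind such zoning maps, the sketch below (not from the paper) evaluates bedrock depth and site period for a layered profile using the common approximation T = Σ 4hᵢ/Vsᵢ; the layer data are hypothetical stand-ins for one boring record.

```python
# Minimal sketch of one per-site computation behind the zoning maps:
# the site period of a layered soil profile down to bedrock, using the
# common approximation T = sum(4 * h_i / Vs_i). Layer thicknesses h (m)
# and shear-wave velocities Vs (m/s) are hypothetical values.

def site_period(layers):
    """layers: list of (thickness_m, vs_m_per_s) down to bedrock."""
    return sum(4.0 * h / vs for h, vs in layers)

boring = [(3.0, 180.0), (7.0, 260.0), (5.0, 400.0)]  # hypothetical record
print(f"bedrock depth = {sum(h for h, _ in boring):.1f} m")
print(f"site period   = {site_period(boring):.2f} s")
```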

Creative Thinking Skill Approach Through Problem-Based Learning: Pedagogy and Practice in the Engineering Classroom

Problem-based learning (PBL) is a student-centered approach that has been adopted by a number of higher education institutions in many parts of the world as a method of delivery. This paper presents a creative thinking approach for implementing problem-based learning in Mechanics of Structures within a Malaysian polytechnic environment. In the learning process, students learn to analyze a given problem together and to put shared classroom knowledge into practice. Further, through the course's emphasis on problem-based learning, students acquire creative thinking and professional skills as they tackle complex, interdisciplinary, real-world problems. Once creative ideas are generated, additional techniques help nurture those ideas into a productive concept or solution. The combination of creative skills and technical abilities will enable students to "hit the ground running" and be productive in industry when they graduate.

Modeling and Identification of Hammerstein Systems Using Triangular Basis Functions

This paper deals with the modeling and parameter identification of nonlinear systems described by a Hammerstein model with piecewise nonlinear characteristics, such as a dead-zone nonlinearity. The simultaneous use of an easy decomposition technique and triangular basis functions leads to a particular form of the Hammerstein model. Approximating the static nonlinear block by triangular basis functions produces a linear regressor model, so that least squares techniques can be used for parameter estimation. A Singular Value Decomposition (SVD) technique is applied to separate the coupled parameters. The proposed approach has been tested efficiently on academic simulation examples.
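
The following sketch illustrates the general scheme under stated assumptions (it is not the paper's exact algorithm): a dead-zone nonlinearity is expanded on triangular (hat) basis functions, the resulting overparameterized model is estimated by least squares, and an SVD rank-one step separates the coupled linear/nonlinear parameters. The system orders, grid, and noise level are hypothetical.

```python
import numpy as np

def tri_basis(u, centers, width):
    """Triangular (hat) basis functions evaluated at samples u."""
    return np.maximum(0.0, 1.0 - np.abs(u[:, None] - centers[None, :]) / width)

rng = np.random.default_rng(0)
N = 2000
u = rng.uniform(-2, 2, N)

# Hypothetical true system: dead-zone static nonlinearity followed by
# a first-order linear block  y(t) = a*y(t-1) + b*x(t-1).
def dead_zone(u, d=0.5):
    return np.where(np.abs(u) > d, u - np.sign(u) * d, 0.0)

a_true, b_true = 0.7, 1.0
x = dead_zone(u)
y = np.zeros(N)
for t in range(1, N):
    y[t] = a_true * y[t-1] + b_true * x[t-1] + 0.01 * rng.standard_normal()

# Static block expanded on a grid of triangular basis functions.
centers = np.linspace(-2, 2, 9)
width = centers[1] - centers[0]
Phi = tri_basis(u, centers, width)

# Overparameterized linear regressor:
# y(t) ~ a*y(t-1) + sum_k theta_k * phi_k(u(t-1)),  theta_k = b*c_k coupled.
R = np.column_stack([y[:-1], Phi[:-1]])
theta, *_ = np.linalg.lstsq(R, y[1:], rcond=None)
a_hat, bc = theta[0], theta[1:]

# With a single b the 'SVD separation' is trivial (rank-1 row vector);
# for higher-order linear blocks bc forms a matrix and a rank-one SVD
# recovers b and c up to a scale factor.
U, s, Vt = np.linalg.svd(bc.reshape(1, -1), full_matrices=False)
c_hat = s[0] * Vt[0] * np.sign(U[0, 0])  # nonlinearity values at the grid

print("a:", a_true, "->", round(a_hat, 3))
print("f(centers) true:", np.round(dead_zone(centers), 2))
print("f(centers) est :", np.round(c_hat, 2))
```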

Understanding and Measuring Trust Evolution Effectiveness in Peer-to-Peer Computing Systems

In any trust model, the two information sources a peer relies on to predict the trustworthiness of another peer are direct experience and reputation. These two vital components evolve over time. Trust evolution is an important issue, where the objective is to observe a sequence of past values of a trust parameter and determine future estimates. Unfortunately, trust evolution algorithms have received little attention, and the algorithms proposed in the literature do not comply with the conditions and the nature of trust. This paper contributes to this important problem in the following ways: (a) it presents an algorithm that manages and models trust evolution in a P2P environment, (b) it devises new mechanisms for effectively maintaining trust values based on the conditions that influence trust evolution, and (c) it introduces a new methodology for incorporating trust-nurture incentives into the trust evolution algorithm. Simulation experiments are carried out to evaluate our trust evolution algorithm.
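
The abstract does not state the paper's update rule, so the following is only a baseline sketch under common assumptions: trust evolves as an exponentially weighted average of interaction outcomes, with asymmetric rates reflecting the usual condition that trust is built slowly but lost quickly. All parameter values are hypothetical.

```python
# Baseline sketch of trust evolution (not the paper's algorithm):
# exponentially weighted update with asymmetric build/loss rates.

def evolve_trust(trust, outcome, gain=0.05, loss=0.3):
    """trust and outcome lie in [0, 1]; outcome rates one interaction.
    Trust rises slowly (gain) but drops quickly (loss)."""
    rate = gain if outcome >= trust else loss
    return (1 - rate) * trust + rate * outcome

t = 0.5
for outcome in [1.0, 1.0, 1.0, 0.0, 1.0]:   # direct-experience ratings
    t = evolve_trust(t, outcome)
    print(round(t, 3))   # one bad interaction undoes several good ones
```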

Comparative Study of Transformed and Concealed Data in Experimental Designs and Analyses

This paper presents a comparative study of coded-data methods for assessing the benefit of concealing natural data that constitute a trade secret. The influence of the number of replicates (rep), treatment effects (τ), and standard deviation (σ) on the efficiency of each transformation method is investigated. The experimental data are generated via computer simulation under specified process conditions with a completely randomized design (CRD). Three data transformations are considered: the Box-Cox, arcsine, and logit methods. The differences in F statistics between coded and natural data (Fc − Fn) and the hypothesis testing results were determined. The experimental results indicate that the Box-Cox results differ significantly from the natural data for smaller numbers of replicates and seem improper when a negative lambda is assigned. On the other hand, the arcsine and logit transformations are more robust and clearly provide more precise numerical results. In addition, alternative ways to select the lambda in the power transformation are offered to achieve more appropriate outcomes.
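
As a minimal sketch of one simulation cell (assumptions: positive responses suited to Box-Cox, and SciPy's implementation rather than the authors' code), the snippet below generates CRD data, codes it with a single Box-Cox lambda, and reports the difference Fc − Fn between the coded-data and natural-data F statistics. The arcsine (arcsin √p) and logit transforms would apply analogously to proportion data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated CRD data: 3 treatments with 'rep' replicates each, built
# from hypothetical treatment effects tau and noise level sigma.
rep, tau, sigma = 5, [0.0, 0.4, 0.8], 0.5
groups = [np.exp(t + sigma * rng.standard_normal(rep)) for t in tau]

# Natural-data F statistic from one-way ANOVA.
F_n, _ = stats.f_oneway(*groups)

# Coded (concealed) data via Box-Cox; a single lambda, fitted on the
# pooled sample, must be used for every group so the coding is consistent.
pooled = np.concatenate(groups)
_, lam = stats.boxcox(pooled)
coded = [stats.boxcox(g, lmbda=lam) for g in groups]
F_c, _ = stats.f_oneway(*coded)

print(f"lambda = {lam:.3f},  Fc - Fn = {F_c - F_n:.3f}")
```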

Ontology of Collaborative Supply Chain for Quality Management

In the highly competitive and rapidly changing global marketplace, independent organizations and enterprises often come together and form a temporary alignment of virtual enterprises in a supply chain to better provide products or services. As firms adopt the systems approach implicit in supply chain management, they must manage quality through both internal process control and external control of supplier quality and customer requirements. How to incorporate the quality management of upstream and downstream supply chain partners into one's own quality management system has recently received a great deal of attention from both academia and practice. This paper investigates the collaborative features and the entities' relationships in a supply chain, and presents an ontology of the collaborative supply chain based on aligning a service-oriented framework with service-dominant logic. This perspective facilitates the segregation of material flow management from manufacturing capability management, which provides a foundation for coordinating and integrating the business processes needed to measure, analyze, and continually improve the quality of products, services, and processes. Further, this approach characterizes the different interests of supply chain partners, providing an innovative way to analyze the collaborative features of a supply chain. Furthermore, this ontology is the foundation for developing a quality management system that internalizes the quality management of upstream and downstream supply chain partners and manages quality in the supply chain systematically.

A New Brazilian Friction-Resistant Low-Alloy High-Strength Steel – A Life Testing Approach

In this paper we develop a sequential life test approach applied to a modified low-alloy high-strength steel part used in highway overpasses in Brazil. We consider two possible underlying sampling distributions: the Normal and the Inverse Weibull models. The minimum life is taken to be zero. We use the two underlying models to analyze a fatigue life test situation, comparing the results obtained from both. Since a major chemical component of this low-alloy high-strength steel part has been changed, little information is available about the possible values of the parameters of the corresponding Normal and Inverse Weibull underlying sampling distributions. To estimate the shape and scale parameters of these two sampling models we use a maximum likelihood approach for censored failure data. We also develop a truncation mechanism for the Inverse Weibull and Normal models, and provide rules for truncating a sequential life test while making one of the two possible decisions at the moment of truncation, that is, accepting or rejecting the null hypothesis H0. An example develops the proposed truncated sequential life testing approach for the Inverse Weibull and Normal models.
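
A minimal sketch of the censored maximum likelihood step for the Inverse Weibull model, using SciPy's invweibull (Fréchet) distribution with its location fixed at zero, consistent with a minimum life of zero; the failure and censoring times below are hypothetical. The Normal case is analogous with stats.norm.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical censored fatigue-life data: observed failure times and
# right-censored (still-running) times, in arbitrary cycle units.
failures = np.array([212., 285., 341., 407., 518.])
censored = np.array([450., 600., 600.])

def neg_log_lik(params):
    """Censored log-likelihood for the Inverse Weibull model (loc = 0):
    failures contribute log f(t), censored units contribute log S(t)."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    ll = stats.invweibull.logpdf(failures, shape, scale=scale).sum()
    ll += stats.invweibull.logsf(censored, shape, scale=scale).sum()
    return -ll

res = optimize.minimize(neg_log_lik, x0=[2.0, 300.0], method="Nelder-Mead")
shape_hat, scale_hat = res.x
print(f"shape = {shape_hat:.2f}, scale = {scale_hat:.1f}")
```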

Analysis of Sonogram Images of Thyroid Gland Based on Wavelet Transform

Sonogram images of normal and lymphocytic thyroid tissue overlap considerably, which makes them difficult to interpret and distinguish. Classification from sonogram images of the thyroid gland is tackled in a semiautomatic way. When making a manual diagnosis from images, some relevant information may not be recognized by the human visual system, so quantitative image analysis can assist the diagnostic process traditionally performed by the physician. Two classes are considered: normal tissue and chronic lymphocytic thyroiditis (Hashimoto's thyroiditis). The data structure is analyzed using K-nearest-neighbor classification. This paper shows that, unlike the wavelet sub-bands' energy, histograms and Haralick features are not appropriate for distinguishing between normal tissue and Hashimoto's thyroiditis.
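
A minimal sketch of the feature pipeline the abstract describes, under the assumption of a 2-D discrete wavelet transform with energy per sub-band as the features; the "images" below are synthetic stand-ins, not sonograms, and the wavelet, decomposition level, and k are hypothetical choices.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def subband_energies(image, wavelet="db4", level=2):
    """Energy of each wavelet sub-band, the feature family the paper
    found discriminative (unlike histograms and Haralick features)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = [np.sum(coeffs[0] ** 2)]              # approximation band
    for detail in coeffs[1:]:                     # (cH, cV, cD) per level
        feats.extend(np.sum(d ** 2) for d in detail)
    return np.log1p(np.array(feats))              # compress dynamic range

# Hypothetical stand-ins for sonogram ROI patches and their labels
# (0 = normal tissue, 1 = Hashimoto's thyroiditis).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 40)
X = np.array([subband_energies(rng.random((64, 64)) * (1 + 0.3 * y))
              for y in labels])

clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```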

Organizational Management Model Based on Knowledge Management, Talent Management and Technology Management Framework "GOMAK"

This paper presents a framework for organizational knowledge management that seeks to deploy a standardized structure for the integrated management of knowledge through a common language based on domains, processes, and global indicators, inspired by the COBIT 5 framework (ISACA, 2012). It supports the integration of three technologies: enterprise information architecture (EIA), business process modeling (BPM), and service-oriented architecture (SOA). The GOMAK framework is a management platform that seeks to integrate the information technology infrastructure, the application structure, the information infrastructure, the business logic and the business model to support a sound organizational knowledge management strategy, under a process-based approach and concurrent engineering. Concurrent engineering (CE) is a systematic approach to integrated product development that responds to customer expectations by involving all perspectives in parallel from the beginning of the product life cycle (European Space Agency, 2000).

A New Robust Approach to Direct Field-Oriented Control of an Induction Motor

This paper presents a new technique for compensating the effect of parameter variations in the direct field-oriented control of an induction motor. The proposed method uses adaptive tuning of the synchronous speed value to achieve robustness of the field-oriented control. We show that this adaptive tuning makes direct field-oriented control robust to changes in rotor resistance, load torque, and rotational speed. The effectiveness of the proposed control scheme is verified by numerical simulation. The numerical validation results of the proposed scheme show good performance compared with the usual direct field-oriented control.

A Hybrid Fuzzy AGC in a Competitive Electricity Environment

This paper presents a new hybrid fuzzy (HF) PID-type controller based on genetic algorithms (GAs) for solving the automatic generation control (AGC) problem in a deregulated electricity environment. For a fuzzy rule-based control system to perform well, the fuzzy sets must be carefully designed. A major problem plaguing the effective use of this method is the difficulty of accurately constructing the membership functions, because doing so is a computationally expensive combinatorial optimization problem. On the other hand, GAs are a technique that emulates biological evolutionary theory to solve complex optimization problems by using directed random searches to derive a set of optimal solutions. For this reason, the membership functions are tuned automatically using a modified GA based on the hill-climbing method. The motivation for using the modified GA is to reduce the fuzzy-system design effort and to take large parametric uncertainties into account. The proposed method is designed to reach the global optimum, and the speed of the algorithm's convergence is greatly improved as well. This newly developed control strategy combines the advantages of GAs and fuzzy control techniques and leads to a flexible controller with a simple structure that is easy to implement. The proposed GA-based HF (GAHF) controller is tested on a three-area deregulated power system under different operating conditions and contract variations. The results of the proposed GAHF controller are compared with those of a multi-stage fuzzy (MSF) controller, a robust mixed H2/H∞ controller, and classical PID controllers through several performance indices to illustrate its robust performance over a wide range of system parameters and load changes.
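
The sketch below illustrates only the structure of a GA modified with hill climbing (selection, arithmetic crossover, mutation, then local refinement); the fitness function is a placeholder, whereas in the paper it would be a performance index obtained by simulating the AGC loop with the candidate membership functions.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(breakpoints):
    """Placeholder fitness: in the paper this would be a performance
    index (e.g. integral error) from simulating the AGC loop with the
    fuzzy controller built from these membership-function breakpoints."""
    target = np.array([-0.6, -0.2, 0.2, 0.6])    # hypothetical optimum
    return float(np.sum((np.sort(breakpoints) - target) ** 2))

def hill_climb(x, step=0.05, iters=20):
    """Local refinement: the 'modification' of the plain GA."""
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.size)
        if cost(cand) < cost(x):
            x = cand
    return x

pop = rng.uniform(-1, 1, size=(20, 4))           # candidate MF breakpoints
for gen in range(30):
    pop = pop[np.argsort([cost(p) for p in pop])]
    elite = pop[:10]                             # selection
    pa = elite[rng.integers(0, 10, 10)]          # arithmetic crossover ...
    pb = elite[rng.integers(0, 10, 10)]
    children = (pa + pb) / 2 + 0.1 * rng.standard_normal((10, 4))  # ... plus mutation
    children = np.array([hill_climb(c) for c in children])         # GA + hill climbing
    pop = np.vstack([elite, children])

best = min(pop, key=cost)
print("best breakpoints:", np.round(np.sort(best), 3))
```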

Towards a Load Balancing Framework for an SMS-Based Service Invocation Environment

The drastic increase in the usage of SMS technology has led service providers to seek a solution that enables users of mobile devices to access services through SMS. This has resulted in proposed solutions for SMS-based service invocation in service-oriented environments. However, the dynamic nature of service-oriented environments, coupled with the sudden load peaks generated by service requests, poses performance challenges for infrastructures supporting SMS-based service invocation. To address this problem we adopt load balancing techniques. A load balancing model with adaptive load balancing and load monitoring mechanisms as its key constructs is proposed. The load balancing model led to the realization of the Least Loaded Load Balancing Framework (LLLBF). Evaluation of LLLBF benchmarked against a round-robin (RR) scheme in a queuing approach showed that LLLBF outperformed RR in terms of response time and throughput, although at the cost of higher processing power.
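
A minimal sketch of the least-loaded idea (the actual LLLBF design is richer, with adaptive balancing and explicit load monitoring): each node's outstanding load is tracked, and each incoming SMS service request goes to the node currently reporting the lowest load. Node names and request costs are hypothetical.

```python
servers = {"s1": 0, "s2": 0, "s3": 0}    # monitored load per node

def least_loaded_dispatch(servers, req_cost):
    """Send the request to the node reporting the lowest load; the
    load falls again when the node reports completion (omitted here)."""
    node = min(servers, key=servers.get)
    servers[node] += req_cost
    return node

for cost in [3, 1, 1, 2, 1]:             # uneven request costs
    print(least_loaded_dispatch(servers, cost), dict(servers))
# Round robin would have sent the 4th request to the busiest node;
# least-loaded dispatch routes around it.
```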

Digital Automatic Gain Control Integrated on a WLAN Platform

In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compatible with the IEEE 802.11a/g standards. Those standards define communication in the 5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. The WLAN transceiver we used enables gain control over a Low Noise Amplifier (LNA) and a Variable Gain Amplifier (VGA). Control of those signals is performed in our digital baseband processor using a dedicated hardware block, the DAGC. The DAGC automatically controls the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the FER (Frame Error Rate), and hold the average power of the baseband signal close to the desired set point. The DAGC function in the baseband processor proceeds in a few steps: measuring the power level of the baseband samples of the RF signal, accumulating the differences between the measured power level and the desired set point, adjusting a gain factor from the accumulation, and applying the adjusted gain factor to the baseband values. Based on measurements of the dependence of the RSSI signal on input power, we concluded that this digital AGC can be implemented with a simple linearization of the RSSI. This solution is simple yet effective and reduces the complexity and power consumption of the DAGC. The DAGC has been implemented and tested both in FPGA and in an ASIC as part of our WLAN baseband processor. Finally, we integrated this circuit into a compact WLAN PCMCIA board based on MAC and baseband ASIC chips of our own design.
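
A floating-point sketch of the four steps listed above (measure, accumulate, adjust, apply); the set point, loop gain, and block size are hypothetical, and the real block runs in fixed point with the gain word split between the LNA and VGA settings.

```python
import numpy as np

SET_POINT_DB = -12.0     # desired average baseband power (hypothetical)
LOOP_GAIN = 0.25         # accumulator scaling (hypothetical)

def dagc_step(samples, gain_db):
    """One iteration: measure block power, then accumulate the error
    between the measurement and the set point into the gain word."""
    power_db = 10 * np.log10(np.mean(np.abs(samples) ** 2) + 1e-12)
    return gain_db + LOOP_GAIN * (SET_POINT_DB - power_db)

rng = np.random.default_rng(0)
gain_db = 0.0
for _ in range(8):
    rx = 0.5 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
    baseband = rx * 10 ** (gain_db / 20)     # apply the adjusted gain
    gain_db = dagc_step(baseband, gain_db)
    print(f"gain = {gain_db:+.2f} dB")       # settles near -9 dB here
```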

Stability of an Electrical Motor Supplied by a Five-Level Inverter

The development of power electronics has increased the precision and reliability of electrical drives, thanks to adjustable inverters such as the Pulse Width Modulation (PWM) five-level inverter, which is the object of study in this article. The authors treat the relationship between the order of the control law adopted for a given system and the oscillations of the electrical and mechanical parameters, whose tolerance depends on the process in which they are integrated (paper factories, lifting of heavy loads, etc.). A good choice of the regulation indices thus allows stable and safe drive operation to be achieved without further investment (management of existing equipment).

Integrated Drunken Driving Prevention System

Needless to say, a majority of road accidents are due to drunk driving, and there is as yet no effective mechanism to prevent it. Here we have designed an integrated system for this purpose. The alcohol content in the driver's body is detected by means of an infrared breath analyzer placed at the steering wheel. An infrared cell directs infrared energy through the breath sample, and any unabsorbed energy on the other side is detected. The higher the concentration of ethanol, the more infrared absorption occurs (in much the same way that a sunglass lens absorbs visible light, alcohol absorbs infrared light). The alcohol level of the driver is thus continuously monitored and calibrated on a scale. When it exceeds a set limit, the fuel supply is cut off. If the device is removed, the fuel supply is likewise cut off automatically, or an alarm is sounded, depending on the requirement. The cutoff does not happen abruptly, and special indicators are fitted at the back of the vehicle to warn other drivers on the highway. A framework for the integration of the sensors and the control module in a scalable multi-agent system is provided. An SMS containing the current GPS location of the vehicle is sent via a GSM module to the police control room to alert the police. The system is foolproof in the sense that the driver cannot easily tamper with it. It thus provides an effective and economical solution to the problem of drunk driving.
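
A minimal control-loop sketch of the behavior described above, with hypothetical sensor values, threshold, and module interfaces; read_breath_analyzer, device_present, cut_fuel_gradually and send_alert_sms are placeholders for the hardware drivers, not a real API.

```python
import random
import time

LEGAL_LIMIT = 0.35   # hypothetical threshold on the calibrated scale

# Placeholder hardware interfaces -- stand-ins, not a real API.
def read_breath_analyzer():      # IR absorption mapped to alcohol level
    return random.uniform(0.0, 0.6)

def device_present():            # tamper / removal check
    return True

def cut_fuel_gradually():        # ramp the fuel supply down, not abruptly
    print("rear warning indicators on; ramping fuel supply down")

def send_alert_sms(gps_fix):     # sent via the GSM module
    print(f"SMS to police control room: vehicle at {gps_fix}")

def monitor_loop(cycles=5):
    for _ in range(cycles):
        level = read_breath_analyzer()
        if not device_present() or level > LEGAL_LIMIT:
            cut_fuel_gradually()
            send_alert_sms(gps_fix="(14.0860, 100.6089)")  # hypothetical fix
        time.sleep(0.1)          # continuous-monitoring interval

monitor_loop()
```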

Global Exponential Stability of Impulsive BAM Fuzzy Cellular Neural Networks with Time Delays in the Leakage Terms

In this paper, a class of impulsive BAM fuzzy cellular neural networks with time delays in the leakage terms is formulated and investigated. By establishing a delay differential inequality and using M-matrix theory, some sufficient conditions are obtained that ensure the existence, uniqueness, and global exponential stability of the equilibrium point for impulsive BAM fuzzy cellular neural networks with time delays in the leakage terms. In particular, a precise estimate of the exponential convergence rate is provided, which depends on the system parameters and the impulsive perturbation intensity. It is believed that these results are significant and useful for the design and application of BAM fuzzy cellular neural networks. An example is given to show the effectiveness of the results obtained here.
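
For readers unfamiliar with the model class, a representative (not the paper's exact) formulation of one layer of an impulsive fuzzy cellular neural network with a leakage delay σ > 0 is:

```latex
\[
\begin{aligned}
\dot{x}_i(t) &= -a_i\,x_i(t-\sigma)
 + \sum_{j=1}^{m} c_{ji}\,f_j\big(y_j(t)\big)
 + \bigwedge_{j=1}^{m} \alpha_{ji}\,f_j\big(y_j(t-\tau_{ji})\big)
 + \bigvee_{j=1}^{m} \beta_{ji}\,f_j\big(y_j(t-\tau_{ji})\big) + I_i,
 \qquad t \neq t_k,\\
\Delta x_i(t_k) &= x_i(t_k^{+}) - x_i(t_k^{-}) = J_{ik}\big(x_i(t_k^{-})\big),
\end{aligned}
\]
```

together with a symmetric pair of equations for the y-layer (the BAM structure). Here the ∧ and ∨ sums denote the fuzzy AND and fuzzy OR template operations, and the leakage delay is the delay σ appearing in the stabilizing decay term −aᵢxᵢ(t−σ).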

Effect of Non-Uniformity Factors and Assignment Factors on Errors in the Charge Simulation Method with a Point Charge Model

The Charge Simulation Method (CSM) is one of the most widely used numerical field computation techniques in High Voltage (HV) engineering. High-voltage fields of varying non-uniformity are encountered in practice. Because CSM programs are case specific, simulation accuracy depends heavily on the user's (programmer's) experience. This work is an effort to understand CSM errors and to evolve guidelines for setting up accurate CSM models by relating non-uniformities to assignment factors. The results are for the six-point-charge model of a sphere-plane gap geometry. Using a genetic algorithm (GA) as the tool, optimum assignment factors at different non-uniformity factors for this model have been evaluated and analyzed. It is shown that symmetrically placed six-point-charge models can be good enough to set up CSM programs with potential errors of less than 0.1% when the field non-uniformity factor is greater than 2.64 (field utilization factor less than 52.76%).
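
A minimal CSM sketch in the spirit of the model studied (not the paper's code): n point charges on the axis of a sphere above a grounded plane, with image charges enforcing zero potential on the plane; the charge magnitudes are solved from boundary conditions at contour points on the sphere, and the potential error is then checked between them. The dimensions and the assignment factor value are hypothetical.

```python
import numpy as np

eps = 8.854e-12
R, h = 0.1, 0.3                  # sphere radius, center height (m), hypothetical
n = 6                            # six point charges, as in the studied model
af = 0.5                         # assignment factor (charge spread / R), hypothetical

# Fictitious charges on the sphere axis, inside the sphere.
zq = h + np.linspace(-af * R, af * R, n)

def pot_coeff(r, z, zj):
    """Potential coefficient of a unit charge at (0, zj) and its image
    at (0, -zj); the image makes the plane z = 0 a zero-potential plane."""
    d1 = np.hypot(r, z - zj)
    d2 = np.hypot(r, z + zj)
    return (1 / d1 - 1 / d2) / (4 * np.pi * eps)

# Contour points on the sphere surface, where V must equal 1 (per unit).
th = np.linspace(0.1, np.pi - 0.1, n)
rc, zc = R * np.sin(th), h + R * np.cos(th)
P = np.array([[pot_coeff(rc[i], zc[i], zq[j]) for j in range(n)]
              for i in range(n)])
q = np.linalg.solve(P, np.ones(n))     # charge magnitudes

# Check the potential error at points between the contour points.
thc = 0.5 * (th[:-1] + th[1:])
rv, zv = R * np.sin(thc), h + R * np.cos(thc)
Vchk = np.array([sum(pot_coeff(rv[i], zv[i], zq[j]) * q[j] for j in range(n))
                 for i in range(n - 1)])
print("max potential error (%):", 100 * np.abs(Vchk - 1).max())
```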

Single Frame Supercompression of Still Images, Video, High Definition TV and Digital Cinema

Super-resolution is nowadays used to produce a high-resolution image from several low-resolution noisy frames. In this work, we consider the problem of high-quality interpolation of a single noise-free image. Such images may come from different sources; they may be frames of videos, individual pictures, and so on. In the encoder we apply downsampling via bidimensional interpolation of each frame, and in the decoder we apply upsampling to restore the original size of the image. If the compression ratio is very high, we use a convolutive mask that restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards; in fact, the mask mentioned above is coded inside the texture memory of a GPGPU.
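
A minimal sketch of the encoder/decoder pair described above, using SciPy on the CPU rather than a GPGPU; the spline order, compression ratio, and the 3×3 sharpening kernel standing in for the edge-restoring convolutive mask are all hypothetical choices, not the paper's.

```python
import numpy as np
from scipy import ndimage

def encode(frame, ratio=4):
    """Encoder: downsample via bidimensional (spline) interpolation."""
    return ndimage.zoom(frame, 1 / ratio, order=3)

def decode(small, ratio=4):
    """Decoder: upsample back to the original size, then apply a
    convolutive sharpening mask to restore edge contrast."""
    up = ndimage.zoom(small, ratio, order=3)
    mask = np.array([[ 0, -1,  0],
                     [-1,  5, -1],
                     [ 0, -1,  0]], dtype=float)   # generic edge-restoring mask
    return ndimage.convolve(up, mask, mode="nearest")

frame = np.random.default_rng(0).random((256, 256))  # stand-in image
restored = decode(encode(frame))
print(frame.shape, "->", encode(frame).shape, "->", restored.shape)
```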