Real Power Generation Scheduling to Improve Steady State Stability Limit in the Java-Bali 500 kV Interconnection Power System

This paper discusses an active power generation scheduling method for increasing the steady-state stability limit of a power system. Several generation scheduling optimization methods, namely the Lagrange method, the PLN (Indonesian electricity company) operating practice, and the proposed Z-Thevenin-based method, are studied and compared with respect to steady-state stability. The method proposed in this paper is built upon the Thevenin equivalent impedance between each load and each generator. The steady-state stability index is obtained with the REI-DIMO method. The study reviews the 500 kV Java-Bali interconnection system. The simulation results show that the proposed method yields the highest steady-state stability limit compared with the other optimization techniques, the Lagrange method and PLN operation. Thus, the proposed method can be used to improve the steady-state stability limit of the system, especially under peak load conditions.
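
Since the proposed scheduling is built on Thevenin equivalent impedances between load and generator buses, the following sketch shows how such impedances can be read off the bus impedance matrix Zbus = inv(Ybus). This is standard network theory, not the paper's method itself: the 3-bus Ybus below is hypothetical, and the scheduling rule that would consume these impedances is the paper's contribution and is not reproduced here.

```python
import numpy as np

# Hypothetical 3-bus admittance matrix (per unit), including shunt terms so
# that Ybus is nonsingular. A real study would build Ybus from the
# Java-Bali 500 kV network data.
Ybus = np.array([[10.5 - 31j, -5 + 15j, -5 + 15j],
                 [-5 + 15j,  9.5 - 28j, -4 + 12j],
                 [-5 + 15j, -4 + 12j,  9.5 - 28j]])

Zbus = np.linalg.inv(Ybus)   # bus impedance matrix

def thevenin_z(i, j):
    """Driving-point (Thevenin) impedance seen between buses i and j,
    from the standard Zbus identity Z_th = Z_ii + Z_jj - 2*Z_ij."""
    return Zbus[i, i] + Zbus[j, j] - 2 * Zbus[i, j]

# Thevenin impedance between a load bus (2) and a generator bus (0):
print("Z_th(load 2, gen 0) =", thevenin_z(2, 0))
```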

Software Maintenance Severity Prediction for Object-Oriented Systems

Since the majority of faults are found in a few modules, there is a need to identify the modules that are severely affected compared with others, so that proper maintenance can be carried out in time, especially for critical applications. Neural networks have already been applied in software engineering, for example to build reliability growth models and to predict gross change or reusability metrics. Neural networks are sophisticated non-linear modeling techniques that are able to model complex functions; they are used when the exact relationship between inputs and outputs is not known, and a key feature is that they learn this relationship through training. In the present work, various neural-network-based techniques are explored and a comparative analysis is performed for predicting the required level of maintenance by predicting the severity level of faults present in NASA's public-domain defect dataset. The algorithms are compared on the basis of Mean Absolute Error, Root Mean Square Error, and accuracy. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying software components into different levels of fault-impact severity. The algorithm can be used to develop a model for identifying modules that are heavily affected by faults.
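
For illustration, a minimal GRNN sketch: a GRNN is essentially Nadaraya-Watson kernel regression, predicting each test point as a Gaussian-weighted average of training targets, with predictions rounded to severity classes. The data below are random placeholders, not the NASA dataset; the metric definitions match those named in the abstract.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Generalized Regression Neural Network (Nadaraya-Watson kernel
    regression): each prediction is a Gaussian-weighted average of the
    training targets."""
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)    # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))       # pattern-layer weights
        preds.append(np.dot(w, y_train) / (w.sum() + 1e-12))
    return np.array(preds)

# Hypothetical data: rows = modules (static code metrics),
# targets = fault severity level (1-5).
rng = np.random.default_rng(0)
X = rng.random((100, 4))
y = rng.integers(1, 6, 100).astype(float)
X_tr, X_te, y_tr, y_te = X[:80], X[80:], y[:80], y[80:]

y_hat = grnn_predict(X_tr, y_tr, X_te)
mae = np.mean(np.abs(y_hat - y_te))                # Mean Absolute Error
rmse = np.sqrt(np.mean((y_hat - y_te) ** 2))       # Root Mean Square Error
acc = np.mean(np.round(y_hat) == y_te)             # accuracy after rounding
print(f"MAE={mae:.3f} RMSE={rmse:.3f} Acc={acc:.2%}")
```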

Comparison of Nuclear Factor Kappa Beta (NFκB) Activation in Rattus norvegicus Wistar Strain Induced by a High-Fat Diet (HFD) of Various Durations

NFκB is a transcription factor regulating many functions of the vessel wall. Under normal conditions, NFκB shows diffuse cytoplasmic expression, suggesting that the system is inactive. Activation of NFκB provides a potential pathway for the rapid transcription of a variety of genes encoding cytokines, growth factors, adhesion molecules, and procoagulatory factors, and it is likely to play an important role in chronic inflammatory diseases, including atherosclerosis. There are many stimuli with the potential to activate NFκB, including hyperlipidemia. We used 24 rats, divided into 6 groups. The HFD was given ad libitum for 2, 4, or 6 months. The parameters in this study were the amount of NFκB activation, H2O2 as a measure of ROS, and VCAM-1 as a product of NFκB activation. The H2O2 colorimetric assay was performed directly using an anti-rat H2O2 ELISA kit. NFκB and VCAM-1 were detected in the rat aorta and measured by ELISA kit and immunohistochemistry. There were significant differences in H2O2, NFκB, and VCAM-1 levels after 2, 4, and 6 months of HFD induction. This suggests that HFD induces ROS formation and increases the activation of NFκB as a marker of atherosclerosis caused by hyperlipidemia, a classical atherosclerosis risk factor.

Hot Workability of High Strength Low Alloy Steels

The hot deformation behavior of high-strength low-alloy (HSLA) steels with different chemical compositions has been studied under hot working conditions, in the temperature range 900 to 1100 °C and the strain rate range 0.1 to 10 s⁻¹, by performing a series of hot compression tests. The dynamic materials model has been employed to develop processing maps, which show the variation of the efficiency of power dissipation with temperature and strain rate. Kumar's model has also been used to develop the instability map, which shows the variation of plastic-deformation instability with temperature and strain rate. The efficiency of power dissipation increased with decreasing strain rate and increasing temperature in the steel with the higher Cr and Ti content. A high efficiency of power dissipation, over 20%, was obtained at a finite strain level of 0.1 under conditions of strain rate lower than 1 s⁻¹ and temperature higher than 1050 °C. Plastic instability was expected in the regime of temperatures lower than 1000 °C and strain rates lower than 0.3 s⁻¹. The steel with the lower Cr and Ti contents showed high efficiency of power dissipation under higher-strain-rate and lower-temperature conditions.
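
For context, the standard dynamic materials model quantities behind such processing maps, given here as the usual textbook definitions rather than this paper's exact formulation, are the strain rate sensitivity m and the efficiency of power dissipation eta:

```latex
m = \left.\frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}}\right|_{\varepsilon,\,T},
\qquad
\eta = \frac{2m}{m+1}
```

so the reported efficiency over 20% corresponds to m of roughly 0.11 or more. Instability criteria such as Kumar's flag regimes where stable flow cannot be maintained; a widely used continuum condition of this family (Prasad's, which may differ in detail from Kumar's model used here) declares flow instability where the parameter xi(strain rate) = d ln(m/(m+1)) / d ln(strain rate) + m becomes negative.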

Measuring Heterogeneous Traffic Density

Traffic density provides an indication of the level of service being provided to road users; hence, traffic flow characteristics, with specific reference to density, need to be studied in detail. When the lengths and speeds of the vehicles in a traffic stream vary significantly, the concept of occupancy, rather than density, is more appropriate for describing traffic concentration. When the concept of occupancy is applied to heterogeneous traffic conditions, it is necessary to take the area of the road space and the areas of the vehicles as the bases. Hence, a new concept named 'area-occupancy' is proposed here. It has been found that the estimated area-occupancy gives consistent values irrespective of changes in traffic composition.
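
For reference, one plausible formalization of area-occupancy, consistent with the abstract's description but stated here as an assumption rather than the paper's exact definition: for an observation period T over a road section of area A,

```latex
\text{area-occupancy} \;=\; \frac{\sum_{i} a_i \, t_i}{A \, T}
```

where a_i is the horizontally projected area of vehicle i and t_i is the time it spends in the section. Unlike point occupancy, both vehicle dimensions enter explicitly, which is what would make the measure robust to changes in traffic composition.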

Optimization of the Process of Osmo-Convective Drying of Edible Button Mushrooms Using Response Surface Methodology (RSM)

The simultaneous effects of temperature, immersion time, salt concentration, sucrose concentration, pressure, and convective-dryer temperature on the combined osmotic dehydration and convective drying of edible button mushrooms were investigated. Experiments were designed according to a central composite design with six factors, each at five levels. Response surface methodology (RSM) was used to determine the optimum processing conditions yielding maximum water loss and rehydration ratio and minimum solid gain and shrinkage in the osmo-convective drying of edible button mushrooms. Using response surface profiles and contour plots, the optimum operating conditions were found to be a temperature of 39 °C, an immersion time of 164 min, a salt concentration of 14%, a sucrose concentration of 53%, a pressure of 600 mbar, and a drying temperature of 40 °C. At these optimum conditions, water loss, solid gain, rehydration ratio, and shrinkage were found to be 63.38 g/100 g initial sample, 3.17 g/100 g initial sample, 2.26, and 7.15%, respectively.
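
To illustrate the RSM step, a sketch of fitting a second-order response surface to designed data and locating its optimum numerically. It is deliberately simplified: only two of the six factors appear, and all design points and responses are made-up placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-factor slice of the six-factor CCD (temperature, time);
# in the study all six factors would enter the quadratic model.
X = np.array([[30, 120], [30, 200], [50, 120], [50, 200], [40, 160],
              [26, 160], [54, 160], [40, 103], [40, 217], [40, 160]], float)
y = np.array([48.1, 55.2, 59.8, 60.5, 63.0,
              45.9, 57.4, 50.2, 58.8, 63.4])   # water loss, g/100 g (made up)

def quad_features(x):
    """Second-order model terms: 1, t, m, t*m, t^2, m^2."""
    t, m = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(t), t, m, t * m, t ** 2, m ** 2], axis=-1)

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)  # RSM fit

# Maximize predicted water loss inside the experimental region.
res = minimize(lambda x: -(quad_features(x) @ beta), x0=[40, 160],
               bounds=[(26, 54), (103, 217)])
print("optimum (temp, time):", res.x, "predicted water loss:", -res.fun)
```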

Power Optimization Techniques in FPGA Devices: A Combination of System- and Low-Levels

This paper presents preliminary results regarding system-level power awareness for FPGA implementations in wireless sensor networks. The reconfigurability of field-programmable gate arrays (FPGAs) allows significant flexibility in their application to embedded systems. However, the high power consumption of FPGAs becomes a significant factor in design considerations. We present several ideas, with experimental verification, on how to optimize power consumption at a high level of the design process while maintaining the same energy per operation (low-level methods can be applied in addition). This paper demonstrates that feasible power-consumption savings can be estimated even at a high level of the design process. It is envisaged that our results can also be applied to other embedded-systems applications, not only FPGA-based ones.

Simulation of Sloshing Behavior Using Moving Grid and Body Force Methods

The flow field and the motion of the free surface in an oscillating container are simulated numerically to assess numerical approaches for studying two-phase flows under oscillating conditions. Two numerical methods are compared: one models the oscillating container directly using the moving grid of the ALE method, while the other simulates the effect of the container motion through an oscillating body force acting on the fluid in a stationary container. In both cases the two-phase flow field in the container is simulated using the level set method. It is found that the results calculated by the body force method coincide with those of the moving grid method, and the sloshing behavior is predicted well by both methods. The theoretical background and the limitations of the body force method are discussed, and the effects of oscillation amplitude and frequency are shown.
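
The equivalence exploited by the body force method comes from a change of reference frame: solving the equations in the container-fixed (non-inertial) frame introduces the negative of the container acceleration as an extra body force. For horizontal harmonic excitation (the specific waveform here is an illustrative assumption),

```latex
X(t) = A \sin(\omega t)
\;\Rightarrow\;
\ddot{X}(t) = -A\omega^{2}\sin(\omega t),
\qquad
\mathbf{f}(t) = \mathbf{g} - \ddot{X}(t)\,\mathbf{e}_x
```

so the stationary-container simulation applies the time-dependent body force f(t) per unit mass in place of gravity alone. The equivalence holds only while the container motion is purely translational; rotational sloshing would add centrifugal and Coriolis terms, one source of the method's limitations.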

Comparative Study of View Point Types on Landscape Evaluation

The purpose of this study was to examine viewpoints in terms of changing distances and levels and thereby to comparatively analyze the visual sensitivity to the elements of natural views. A questionnaire survey was conducted separately for experts and non-experts. In summary, it was confirmed that the visual sensitivity to the elements of the same natural views differed significantly depending on the subjects' professionalism and on changes in viewpoint level and distance, whereas the visual sensitivity to 'openness of visual/view axes' did not differ significantly when only the viewpoint distances were varied. In addition, the visual sensitivity to visual/view axes differed between experts and non-experts when the viewpoint levels were varied, while the visual sensitivity to 'damaged natural view resources' differed between the two groups when the viewpoint distances were varied.

Dynamic Performance Indicators for Aged-Care Construction Projects

Key performance indicators (KPIs) are used for after-the-fact evaluation in the construction industry and normally make no provision for change. This paper proposes a set of dynamic key performance indicators (d-KPIs) that predict the future performance of the activity being measured and thus present the opportunity to change practice accordingly. Critical to the predictability of a construction project is the ability to achieve automated data collection. This paper proposes an effective way to collect process and engineering management data from an integrated construction management system. The d-KPI matrix developed in this study, consisting of various indicators under seven categories, can be applied to the close monitoring of aged-care facility development projects. The d-KPI matrix also enables performance measurement and comparison at both the project and organization levels.

The Effect of the Cooperative Teaching Method on Students' Learning in Primary Schools

This study compares the effect of the cooperative teaching method with that of traditional teaching methods on students' learning in mathematics, writing, and experimental science courses. The survey was performed on 100 first-grade primary school girl students in Tehran, selected using random cluster sampling. Given the topic, a semi-experimental research method was used; the necessary data were collected through a researcher-made questionnaire and examination questions, prepared in collaboration with teachers and with experts in the field and the related courses. The sample was divided into experimental and control groups: the experimental group was taught mathematics, writing, and experimental science using the cooperative method, and the control group using traditional methods. The results, obtained with the independent two-sample t-test, show that teaching through cooperation leads to positive outcomes and increases student learning in comparison with traditional methods. The cooperative method also increased the students' skill in solving mathematics exercises, their understanding, and their skill level in practical lessons such as science and writing.

Burstiness Reduction of a Doubly Stochastic AR-Modeled Uniform Activity VBR Video

Stochastic modeling of network traffic is an area of significant research activity for current and future broadband communication networks. Multimedia traffic is statistically characterized by a bursty variable bit rate (VBR) profile. In this paper, we develop an improved model for uniform-activity-level video sources in ATM, using a doubly stochastic autoregressive model driven by an underlying spatial point process. We then examine a number of burstiness metrics, namely the peak-to-average ratio (PAR), the temporal autocovariance function (ACF), and the traffic measurement histogram, and find that the PAR is the most suitable for capturing the burstiness of single-scene video traffic. In the last phase of this work, we analyse the statistical multiplexing of several constant-scene video sources. This proved, as expected, to be advantageous in reducing the burstiness of the traffic, as long as the sources are statistically independent. We observed that the burstiness diminished rapidly, with the largest gain occurring when only around 5 sources are multiplexed. The model used in this paper for characterizing uniform-activity video was thus found to be accurate.
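
A minimal sketch, not the paper's exact model: an AR(1) frame bit-rate process whose scene mean is itself random (redrawn per scene, standing in for the underlying spatial point process), together with the PAR metric and the burstiness reduction obtained by multiplexing independent sources. All rates and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_vbr(n, mean_rate, phi=0.9, noise_std=0.1):
    """AR(1) frame bit-rate sequence fluctuating around a scene mean."""
    x = np.empty(n)
    x[0] = mean_rate
    for k in range(1, n):
        x[k] = mean_rate + phi * (x[k - 1] - mean_rate) \
               + rng.normal(0, noise_std * mean_rate)
    return np.clip(x, 0, None)

# "Doubly stochastic": the AR mean is itself random, redrawn per scene.
scene_means = rng.uniform(0.5, 2.0, size=8)            # Mbit/s, hypothetical
trace = np.concatenate([ar1_vbr(250, m) for m in scene_means])

par = trace.max() / trace.mean()                       # peak-to-average ratio

# Multiplex 5 independent (time-shifted) sources and recompute the PAR.
multiplexed = sum(np.roll(trace, rng.integers(2000)) for _ in range(5))
print(f"single-source PAR={par:.2f}, "
      f"5-source PAR={multiplexed.max() / multiplexed.mean():.2f}")
```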

Reliable Capacitated Facility Location Problem Considering Maximal Covering

This paper provides a framework that simultaneously incorporates reliability, as a proxy for disruption in distribution systems, and partial covering theory, as a response to limited coverage radii and economic preferences, into the traditional literature on capacitated facility location problems. We develop a bi-objective model, based on discrete scenarios, for expected-cost minimization and demand-coverage maximization over a three-echelon supply chain network, allowing multiple capacity levels for the provider-side layers and imposing a gradual coverage function on the distribution centers (DCs). In addition to aggregating the objectives and solving the model with the LINGO software, a variant of the LP-metric method called the min-max approach is proposed, and different aspects of the corresponding model are explored.
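
For reference, the min-max member of the LP-metric family (the L-infinity, or weighted Tchebycheff, scalarization) can be written as follows; the normalization by ideal and nadir values is a common convention and an assumption here, not necessarily the paper's exact formulation:

```latex
\min_{x \in X} \; \max_{k \in \{1,2\}} \; w_k \,
\frac{f_k(x) - f_k^{*}}{f_k^{\mathrm{nadir}} - f_k^{*}}
```

where f1 is the expected cost, f2 the (negated) covered demand, fk* the ideal value of objective k, and wk decision-maker weights. Minimizing the worst normalized deviation from the ideal point yields a compromise solution on the Pareto front.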

In-flight Meals, Passengers' Level of Satisfaction and Re-flying Intention

Service quality has become a centerpiece for airline companies in vying with one another and keeping their image in the minds of passengers. Many airlines have pursued service quality through service personalization, both on the ground and on board, especially with a view to retaining satisfied passengers and attracting new ones. Beyond that, in-flight meal/food service is another important aspect of airline operations, and it is now seen as part of the marketing strategies for attracting business and leisure travelers. This study reports the outcomes of an investigation of in-flight meal/food attributes in relation to passengers' level of satisfaction and re-flying intention. The taste, freshness, and appearance of the in-flight meals/food served and the menu choices are important to airline passengers, especially on long-haul flights. Food not only contributes to the prediction of airline passengers' levels of satisfaction but, alongside other factors, also slightly influences passengers' re-flying intention. Airline companies therefore should not ignore this element but should take the opportunity to create more attractive and acceptable in-flight meals/food, along with other measures, as marketing tools for attracting passengers to re-fly with them.

Job Satisfaction, Organizational Commitment, and Turnover Intention: A Case Study on Employees of a Retail Company in Malaysia

The high employee turnover rate in Malaysia's retail industry has become a major issue that needs to be addressed. This study determines the levels of job satisfaction, organizational commitment, and turnover intention of employees in a retail company in Malaysia. The relationships of job satisfaction and organizational commitment with turnover intention are also investigated. A questionnaire was developed using the Job Descriptive Index, the Organizational Commitment Questionnaire, and Lee and Mowday's turnover intention items, and data were collected from 62 respondents. The findings suggest that the respondents were moderately satisfied with job satisfaction facets such as promotion, the work itself, co-workers, and supervisors but were dissatisfied with salary. They also had a moderate commitment level, with considerably high intention to leave the organization. All satisfaction facets (except co-workers) and organizational commitment were significantly and negatively related to turnover intention. Based on the findings, retention strategies for retail employees are proposed.

WPRiMA Tool: Managing Risks in Web Projects

Risk management is an essential part of project management and plays a significant role in project success. Many failures associated with Web projects are the consequence of poor awareness of the risks involved and of the lack of process models that can serve as guidelines for the development of Web-based applications, since contemporary process models have been devised for the development of conventional software. This paper introduces WPRiMA (Web Project Risk Management Assessment), a tool that implements RIAP, the risk identification architecture pattern model, which focuses on data from the proprietor's and vendor's perspectives. The paper also illustrates how the WPRiMA tool works and how it can be used to calculate the risk level of a given Web project, to generate recommendations that facilitate risk avoidance in a project, and to improve the prospects of early risk management.

Enhanced-Delivery Overlay Multicasting Scheme by Optimizing Bandwidth and Latency Discrepancy Ratios

With optimized bandwidth and latency discrepancy ratios, Node Gain Scores (NGSs) are determined and used as the basis for shaping a max-heap overlay. The NGSs, determined as the respective bandwidth-latency products, govern the construction of max-heap-form overlays. Each NGS is earned as a synergy of the discrepancy ratio of the bandwidth requested with respect to the estimated available bandwidth, and the latency discrepancy ratio between the node and the source node. The tree leads to enhanced-delivery overlay multicasting, increasing packet delivery that could otherwise be hindered by the packet loss induced in schemes that do not consider the synergy of these parameters when placing nodes on the overlays. The NGS is a function of four main parameters: the estimated available bandwidth, Ba; the individual node's requested bandwidth, Br; the proposed node latency to its prospective parent, Lp; and the suggested best latency as advised by the source node, Lb. The bandwidth discrepancy ratio (BDR) and latency discrepancy ratio (LDR) carry weights of α and (1000 − α), respectively, with α arbitrarily chosen between 0 and 1000 to ensure that the NGS values, used as node IDs, maintain a good possibility of uniqueness and a balance between the BDR and the LDR. A max-heap-form tree is constructed under the assumption that all nodes have NGSs smaller than that of the source node. To maintain load balance, the children of each level's siblings are evenly distributed, such that a node cannot accept a second child until all of its siblings able to do so have acquired the same number of children, and so on; this is done logically from left to right in the conceptual overlay tree (see the sketch below). Records of the pairwise approximate available bandwidths, as measured by a pathChirp scheme at the individual nodes, are maintained. Evaluations comparing the scheme with Bandwidth Aware multicaSt architecturE (BASE), Tree Building Control Protocol (TBCP), and Host Multicast Tree Protocol (HMTP) have been conducted. The new scheme generally performs better in terms of the trade-off among packet delivery ratio, link stress, control overhead, and end-to-end delay.
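
As a concrete illustration, a sketch of NGS computation and max-heap placement, under stated assumptions: the abstract fixes the four parameters and the α/(1000 − α) weighting but not the exact combination formula, so the weighted sum of Ba/Br and Lb/Lp below is an assumption (a weighted product would fit the "bandwidth-latency product" phrasing equally well).

```python
import heapq

ALPHA = 700  # arbitrary weight in [0, 1000], per the abstract

def ngs(Ba, Br, Lp, Lb, alpha=ALPHA):
    """Node Gain Score as a weighted synergy of the bandwidth discrepancy
    ratio (BDR) and latency discrepancy ratio (LDR). The exact combination
    (weighted sum, and the ratio orientations) is an assumption."""
    bdr = Ba / Br          # estimated available bw relative to requested bw
    ldr = Lb / Lp          # advised best latency relative to proposed latency
    return alpha * bdr + (1000 - alpha) * ldr

# Hypothetical joining nodes: name -> (Ba, Br, Lp, Lb)
nodes = {"n1": (8.0, 2.0, 40.0, 25.0),
         "n2": (3.0, 2.5, 15.0, 25.0),
         "n3": (6.0, 1.5, 60.0, 25.0)}

# Max-heap overlay: the highest-NGS nodes sit nearest the source.
heap = [(-ngs(*params), name) for name, params in nodes.items()]
heapq.heapify(heap)
while heap:
    neg_score, name = heapq.heappop(heap)   # pops in descending NGS order
    print(name, "NGS =", round(-neg_score, 1))
```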

Online Web Service-Based Solution for Urban Traffic Management

In this article, we present a web-service-based solution for implementing an intelligent navigation system. The solution uses traffic data collected in real time, together with traffic history, to establish the best route for navigation. It is a low-cost solution that is easy to implement and extend: no infrastructure is needed at the road-network level apart from devices that collect traffic data at key road crossings. The presented solution creates a strong base for traffic monitoring and offers an infrastructure for navigation applications.

Dempster-Shafer Evidence Theory for Image Segmentation: Application in Cells Images

In this paper we propose a new knowledge model using Dempster-Shafer evidence theory for image segmentation and fusion. The proposed method consists essentially of two steps. First, the mass distributions of Dempster-Shafer theory are obtained from the membership degrees of each pixel over the three image components (R, G, and B); each membership degree is determined by applying Fuzzy C-Means (FCM) clustering to the gray levels of the three images. Second, the fusion process consists of defining three frames of discernment associated with the three images to be fused, and then combining them to form a new frame of discernment. The strategy used to define the mass distributions in the combined framework is discussed in detail. The proposed fusion method is illustrated in the context of image segmentation. Experimental investigations and comparative studies with previous methods are carried out, showing the robustness and superiority of the proposed method in terms of image segmentation.
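
To make the combination step concrete, a small sketch under stated assumptions: FCM membership degrees of a pixel to k clusters in each channel are turned into mass functions by discounting a fraction to full ignorance (the 10% discount is an arbitrary choice, not the paper's strategy), and the three channels are fused with Dempster's rule over singleton hypotheses plus the "unknown" set.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for mass functions over the same singleton classes
    plus an 'unknown' (full ignorance) element stored last."""
    k = len(m1) - 1                       # number of singleton classes
    combined = np.zeros_like(m1)
    conflict = 0.0
    for i in range(k + 1):
        for j in range(k + 1):
            p = m1[i] * m2[j]
            if i == j:
                combined[i] += p          # same focal element
            elif i == k:
                combined[j] += p          # m1 ignorant -> intersection is j
            elif j == k:
                combined[i] += p          # m2 ignorant -> intersection is i
            else:
                conflict += p             # disjoint singletons: conflict
    return combined / (1.0 - conflict)    # Dempster normalization

def to_mass(u, discount=0.1):
    """FCM memberships -> mass function, shifting some belief to 'unknown'."""
    u = np.asarray(u, float)
    return np.append((1 - discount) * u, discount)

# Hypothetical memberships of one pixel to 3 clusters in the R, G, B channels.
mR, mG, mB = (to_mass(u) for u in ([0.7, 0.2, 0.1],
                                   [0.6, 0.3, 0.1],
                                   [0.5, 0.4, 0.1]))
m = dempster_combine(dempster_combine(mR, mG), mB)
print("combined masses:", np.round(m, 3), "-> class", int(np.argmax(m[:-1])))
```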

Using the Monte Carlo Simulation to Predict the Assembly Yield

Electronic products that achieve high levels of integrated communications, computing, entertainment, and multimedia features in small, stylish, and robust new form factors are winning in the marketplace. Because of the high costs an industry may incur, and because high yield translates directly into high profit, IC (integrated circuit) manufacturers struggle to maximize yield; at the same time, today's customers demand miniaturization, low cost, high performance, and excellent reliability, making yield maximization a never-ending search for an enhanced assembly process. With factors such as minimal tolerances and tight parameter variations, a systematic approach is needed to predict the assembly process. To evaluate the quality of upcoming circuits, yield models are used that not only predict manufacturing costs but also provide vital information that eases the process of correction when yields fall below expectations. For an IC manufacturer to obtain higher assembly yields, all factors must be taken into consideration: boards, placement, components, the materials the components are made of, and processes. Effective placement yield depends heavily on machine accuracy and on the vision system, which needs the ability to recognize the features of the board and the component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package, and using solder ball locations, also called footprints. The only assumption a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, a class of computational algorithms that rely on repeated random sampling to compute their results; the method is utilized here to recreate, in simulation, the placement and assembly processes within a production line.
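
A minimal sketch of a Monte Carlo placement-yield estimate, with entirely hypothetical numbers: placement offsets are drawn from normal distributions representing machine bias and repeatability, a component passes if its radial error is within the pad tolerance, and the first-pass yield is the fraction of boards on which every component passes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical placement model: x/y offsets of each component are normal
# with machine bias (MU) and repeatability (SIGMA), in micrometres. A board
# is good only if every one of its components lands within the pad tolerance.
N_BOARDS, N_COMPONENTS = 100_000, 50
MU, SIGMA, TOLERANCE = 5.0, 15.0, 50.0   # um; illustrative numbers only

dx = rng.normal(MU, SIGMA, (N_BOARDS, N_COMPONENTS))
dy = rng.normal(MU, SIGMA, (N_BOARDS, N_COMPONENTS))
radial_error = np.hypot(dx, dy)          # placement error per component

component_ok = radial_error <= TOLERANCE
first_pass_yield = component_ok.all(axis=1).mean()   # fraction of good boards
print(f"predicted assembly yield: {first_pass_yield:.2%}")
```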