3D Modeling of Temperature by Finite Element in Machining with Experimental Validation

In the present paper, the three-dimensional temperature field of the tool is determined during machining and compared with experimental work on a C45 workpiece using carbide cutting tool inserts. During metal cutting operations, high temperatures are generated at the tool cutting edge, which influence the rate of tool wear. Temperature is one of the most important characteristics of machining processes, since many parameters such as cutting speed, surface quality, and cutting forces depend on it, and high temperatures can cause high mechanical stresses that lead to early tool wear and reduced tool life. Considerable attention is therefore paid to determining tool temperatures. The experiments are carried out under dry, orthogonal machining conditions. The results show that the increase in tool temperature depends on the depth of cut and, especially in the higher range of cutting conditions, on the cutting speed.
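For reference, the field quantity such a finite element model solves for is governed by the transient heat conduction equation (a standard form, not taken from the paper), with the frictional and shearing heat at the cutting edge entering through the source term:

```latex
\rho\, c_p \frac{\partial T}{\partial t} = \nabla \cdot \left( k\, \nabla T \right) + \dot{q},
```

where ρ is the density, c_p the specific heat, k the thermal conductivity, and \dot{q} the volumetric heat generation rate.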

The Willingness of Business Students on Innovative Behavior within the Theory of Planned Behavior

Classes on creativity, innovation, and entrepreneurship are becoming quite popular at universities throughout the world. However, it is not easy for business students to get involved in innovative activities, especially patent applications. The present study investigated how to enhance business students' intention to participate in innovative activities and which incentives universities should consider. A 22-item research scale was used, and confirmatory factor analysis was conducted to verify its reliability and validity. Multiple regression and discriminant analyses were also conducted. The results demonstrate the effect of growth-need strength on innovative behavior and indicate that the theory of planned behavior can explain and predict business students' intention to participate in innovative activities. Additionally, the results suggest that applying our proposed model in practice would effectively strengthen business students' intentions to engage in innovative activities.

Integrating Big Island Layout with Pull System for Production Optimization

Lean manufacturing is a production philosophy made popular by Toyota Motor Corporation (TMC). It is globally known as the Toyota Production System (TPS) and has the ultimate aim of reducing cost by thoroughly eliminating waste, or muda. TPS embraces Just-in-Time (JIT) manufacturing, achieving cost reduction through lead-time reduction. JIT manufacturing can be achieved by implementing a pull system in production. Furthermore, TPS aims to improve productivity and create continuous flow in production by arranging the machines and processes in cellular configurations, known as Cellular Manufacturing Systems (CMS). This paper studies the integration of CMS with the pull system to establish Big Island-Pull system production for High Mix Low Volume (HMLV) products in the automotive component industry. The paper uses the built-in JIT system steps adapted from TMC to create the pull system production and also to create a shojinka line which, according to takt time, has the flexibility to adapt to demand changes simply by adding or removing manpower. This leads to optimization of production.
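As a side note on the sizing logic behind a shojinka line, takt time is simply the available production time divided by customer demand, and the manpower level follows from the manual work content; the minimal sketch below uses hypothetical figures that are not from the paper.

```python
import math

def takt_time(available_time_s: float, demand_units: int) -> float:
    """Takt time = available production time / customer demand."""
    return available_time_s / demand_units

def operators_needed(total_manual_work_s: float, takt_s: float) -> int:
    """Shojinka sizing: round the total manual work content up to whole operators."""
    return math.ceil(total_manual_work_s / takt_s)

# Hypothetical example: one 8-hour shift minus breaks, demand of 430 units per day.
available = 7.5 * 3600          # seconds of available production time per day
demand = 430                    # units required per day
takt = takt_time(available, demand)
print(f"takt time: {takt:.1f} s/unit")
print(f"operators needed: {operators_needed(620.0, takt)}")  # assuming 620 s of manual work per unit
```

With these hypothetical numbers, a demand change only changes the takt time, and the line is rebalanced by recomputing the operator count rather than redesigning the cell.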

A Codebook-based Redundancy Suppression Mechanism with Lifetime Prediction in Cluster-based WSN

A Wireless Sensor Network (WSN) comprises sensor nodes that are designed to sense the environment and transmit the sensed data back to the base station via multi-hop routing in order to reconstruct physical phenomena. Since the sensed data exhibit significant temporal and spatial redundancy, it is necessary for sensor nodes to use Redundancy Suppression Algorithms (RSAs) to lower energy consumption by reducing the transmission of redundant data. A conventional RSA is the threshold-based RSA, which sets a threshold to suppress redundant data. Although many temporal and spatial RSAs have been proposed, temporal-spatial RSAs are seldom proposed because it is difficult to determine when to apply temporal or spatial suppression. In this paper, we propose a novel temporal-spatial redundancy suppression algorithm, the Codebook-based Redundancy Suppression Mechanism (CRSM). CRSM adopts vector quantization to generate a codebook, which can easily be used to implement a temporal-spatial RSA. CRSM not only achieves power saving and reliability for the WSN, but also provides predictability of the network lifetime. Simulation results show that the network lifetime of CRSM exceeds that of other RSAs by at least 23%.
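A minimal sketch of the codebook idea is given below; it is our own illustration, assuming a k-means-style vector quantizer and a simple rule of transmitting only when the codeword index changes, and it does not reproduce the actual CRSM design or its lifetime prediction.

```python
import numpy as np

def build_codebook(training_vectors: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Plain k-means vector quantization: returns k codewords."""
    rng = np.random.default_rng(0)
    codebook = training_vectors[rng.choice(len(training_vectors), k, replace=False)]
    for _ in range(iters):
        # Assign each training vector to its nearest codeword.
        idx = np.argmin(((training_vectors[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = training_vectors[idx == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def quantize(vector: np.ndarray, codebook: np.ndarray) -> int:
    """Index of the nearest codeword."""
    return int(np.argmin(((codebook - vector) ** 2).sum(-1)))

# Hypothetical sensor readings: each row is one sampling window of a node.
readings = np.random.default_rng(1).normal(25.0, 0.5, size=(200, 4))
codebook = build_codebook(readings, k=8)

last_sent = None
transmissions = 0
for window in readings:
    code = quantize(window, codebook)
    if code != last_sent:          # temporal redundancy: suppress repeats of the same codeword
        transmissions += 1         # only the short codeword index would be radioed to the cluster head
        last_sent = code
print(f"{transmissions} transmissions instead of {len(readings)}")
```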

Design of Low Power and High Speed Digital IIR Filter in 45nm with Optimized CSA for Digital Signal Processing Applications

In this paper, a design methodology to implement a low-power and high-speed 2nd-order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters require a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. The proposed new 6T adder cell is used as the Carry-Save Adder (CSA) to implement the addition/subtraction operations in the recursive section of the IIR filter in order to reduce the propagation delay. Furthermore, high-level algorithms designed to optimize the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA based shift-adds architecture, and it is analyzed using the Microwind CAD tool to synthesize low-complexity and high-speed IIR filters. The proposed design outperforms MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area, and throughput. It is observed from the experimental results that the proposed 6T based design method can find better IIR filter designs in terms of power and delay than those obtained by using efficient general multipliers.
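To illustrate the general shift-add principle the method relies on (not the authors' specific CSA architecture), a multiplication by a fixed filter coefficient can be decomposed into shifted partial products that are summed, as in the small sketch below.

```python
def constant_multiply_shift_add(x: int, constant: int) -> int:
    """Multiply x by a fixed positive constant using only shifts and adds.

    Each set bit i of the constant contributes (x << i); a dedicated multiplier
    is therefore not needed, which is the essence of multiplierless (shift-add)
    filter realizations.
    """
    result = 0
    i = 0
    c = constant
    while c:
        if c & 1:
            result += x << i   # add the partial product for this bit position
        c >>= 1
        i += 1
    return result

# Sanity check against ordinary multiplication: 25 = 11001b -> (x<<4) + (x<<3) + x.
for x in (-7, 3, 123):
    assert constant_multiply_shift_add(x, 25) == 25 * x
print("shift-add decomposition matches direct multiplication")
```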

The Design of Axisymmetric Ducts for Incompressible Flow with a Parabolic Axial Velocity Inlet Profile

In this paper a numerical algorithm is described for solving the boundary value problem associated with axisymmetric, inviscid, incompressible, rotational (and irrotational) flow in order to obtain duct wall shapes from prescribed wall velocity distributions. The governing equations are formulated in terms of the stream function ψ(x,y) and the function φ(x,y) as independent variables, where for irrotational flow φ(x,y) can be recognized as the velocity potential; for rotational flow φ(x,y) ceases to be the velocity potential but remains orthogonal to the streamlines. A numerical method based on a finite difference scheme on a uniform mesh is employed. The technique described is capable of tackling the so-called inverse problem, where the wall velocity distributions are prescribed and the duct wall shape is calculated, as well as the direct problem, where the velocity distribution on the duct walls is calculated from a prescribed duct geometry. The two cases outlined in this paper are in fact boundary value problems with Neumann and Dirichlet boundary conditions, respectively. Even though both approaches are discussed, only numerical results for the case of Dirichlet boundary conditions are given. A downstream condition is prescribed such that cylindrical flow, that is, flow independent of the axial coordinate, exists.
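For context, the standard Stokes stream function definitions for axisymmetric incompressible flow, with x the axial and y the radial coordinate, are

```latex
u = \frac{1}{y}\,\frac{\partial \psi}{\partial y}, \qquad
v = -\frac{1}{y}\,\frac{\partial \psi}{\partial x},
```

so that continuity is satisfied identically and lines of constant ψ are streamlines; in the inverse problem the duct wall is recovered as the streamline that carries the prescribed velocity distribution. (These are textbook definitions, not the paper's full (φ, ψ) inverse formulation.)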

A New Approach to Solve Blasius Equation using Parameter Identification of Nonlinear Functions based on the Bees Algorithm (BA)

In this paper, a new approach is introduced to solve the Blasius equation using parameter identification of a nonlinear function that serves as the approximation function. The Bees Algorithm (BA) is applied to find the adjustable parameters of the approximation function by minimizing a fitness function that includes these adjustable parameters. The parameters are determined such that the approximation function satisfies the boundary conditions. To demonstrate the presented method, the obtained results are compared with another numerical method. The present method can easily be extended to solve a wide range of problems.
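For reference, the Blasius equation and its boundary conditions, in one common scaling, are

```latex
f'''(\eta) + \tfrac{1}{2}\, f(\eta)\, f''(\eta) = 0, \qquad
f(0) = 0, \quad f'(0) = 0, \quad f'(\eta) \to 1 \ \text{as} \ \eta \to \infty .
```

One natural way to build the fitness function described above (our reading, not necessarily the paper's exact formulation) is to sum the squared ODE residual of the parameterized approximation at collocation points together with penalty terms for violating these boundary conditions, and to let the Bees Algorithm minimize that sum over the adjustable parameters.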

Thermal Treatment Influence on the Quality of Rye Bread Packaged in Different Polymer Films

This study was carried out to investigate the changes in quality parameters of rye bread packaged in different polymer films during a convection air-flow thermal treatment process. Whole loaves of bread were placed in polymer pouches, which were sealed under reduced air pressure; the bread was then thermally treated at temperatures of (130, 140, and 150) ± 5 °C for up to 40 min, until the core temperature of the samples reached +80 ± 1 °C. Two pouch materials were used for bread packaging: anti-fog Mylar® OL12AF and a thermo-resistant combined polymer material. The main quality parameters were analysed using standard methods: bread core temperature, crumb and crust firmness, starch granule volume, and microflora. The current research showed that the polymer films significantly influence the changes in rye bread quality parameters during thermal treatment. The thermo-resistant combined polymer film can be recommended for the pasteurization of packaged rye bread in order to best preserve the bread quality parameters.

Novelty as a Measure of Interestingness in Knowledge Discovery

Rule discovery is an important technique for mining knowledge from large databases. The use of objective measures for discovering interesting rules leads to another data mining problem, although of reduced complexity. Data mining researchers have studied subjective measures of interestingness to reduce the volume of discovered rules and ultimately improve the overall efficiency of the KDD process. In this paper we study the novelty of discovered rules as a subjective measure of interestingness. We propose a hybrid approach based on both objective and subjective measures to quantify the novelty of discovered rules in terms of their deviation from known rules (knowledge). We analyze the types of deviation that can arise between two rules and categorize the discovered rules according to a user-specified threshold. We implement the proposed framework and experiment with several public datasets. The experimental results are promising.
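As a rough illustration of deviation-based novelty, the toy sketch below compares a discovered rule with each known rule and keeps the smallest deviation; the rule representation, the Jaccard-style deviation, and the threshold are all our own hypothetical choices, not the paper's taxonomy.

```python
def deviation(rule_a, rule_b):
    """Toy deviation between two rules of the form (antecedent_set, consequent_set).

    0.0 means identical, 1.0 means completely different; antecedent and
    consequent differences are weighted equally via Jaccard distances.
    """
    def jaccard_distance(s, t):
        return 1.0 - len(s & t) / len(s | t) if (s | t) else 0.0
    ant_a, con_a = rule_a
    ant_b, con_b = rule_b
    return 0.5 * jaccard_distance(ant_a, ant_b) + 0.5 * jaccard_distance(con_a, con_b)

def novelty(discovered_rule, known_rules):
    """Novelty = deviation from the closest known rule."""
    return min(deviation(discovered_rule, k) for k in known_rules)

# Hypothetical rules: antecedent items -> consequent items.
known = [({"bread", "butter"}, {"milk"}), ({"beer"}, {"chips"})]
discovered = ({"bread", "jam"}, {"milk"})

score = novelty(discovered, known)
threshold = 0.3   # user-specified threshold
print("novel" if score > threshold else "conforming", f"(score = {score:.2f})")
```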

Analysis of Aiming Performance for Games Using Mapping Method of Corneal Reflections Based on Two Different Light Sources

The fundamental motivation of this paper is how gaze estimation can be utilized effectively in game applications. In games, precise point-of-gaze estimation is not always required when aiming at targets; the ability to move a cursor accurately to an aimed target is also significant. Moreover, from a game production point of view, expressing head movement and gaze movement separately is sometimes advantageous for conveying a sense of presence; a representative example is panning a background image according to head movement while moving a cursor according to gaze movement. On the other hand, the widely used technique for point-of-gaze (POG) estimation is based on the relative position between the center of the corneal reflection of infrared light sources and the center of the pupil. However, calculating the pupil center requires relatively complicated image processing, so computational delay is a concern, since minimizing input delay is one of the most significant requirements in games. In this paper, a method is proposed to estimate head movement using only the corneal reflections of two infrared light sources placed at different locations. Furthermore, a method to control a cursor using gaze movement as well as head movement is proposed. The proposed methods are evaluated using game-like applications; as a result, performance similar to that of conventional methods is confirmed, and aiming control with lower computational cost and stress-free, intuitive operation is obtained.
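One plausible reading of the two-light-source idea, sketched with made-up pixel coordinates and gains (this is our illustration, not the authors' algorithm): the midpoint of the two corneal glints tracks head translation in the image plane and their separation tracks movement toward or away from the camera, so background panning and part of the cursor motion can be driven from these cheap features alone; the gaze term that complements this in the paper is omitted here.

```python
import numpy as np

def glint_features(glint_left, glint_right):
    """Midpoint and separation of the two corneal reflections (image coordinates)."""
    midpoint = (glint_left + glint_right) / 2.0
    separation = float(np.linalg.norm(glint_right - glint_left))
    return midpoint, separation

# Hypothetical glint positions (pixels) in a calibration frame and a later frame.
mid0, sep0 = glint_features(np.array([310.0, 240.0]), np.array([330.0, 241.0]))
mid1, sep1 = glint_features(np.array([318.0, 244.0]), np.array([338.0, 245.0]))

head_shift = mid1 - mid0      # proxy for head translation in the image plane
depth_scale = sep1 / sep0     # proxy for moving toward/away from the camera

# Hypothetical gains: pan the background against the head motion, nudge the cursor with it.
background_pan = -2.0 * head_shift
cursor_delta = 1.5 * head_shift
print(f"head shift: {head_shift}, pan: {background_pan}, cursor: {cursor_delta}, scale: {depth_scale:.3f}")
```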

Simulation and Analysis of the Shift Process for an Automatic Transmission

The automatic transmission (AT) is one of the most important components of many automobile transmission systems. The shift quality has a significant influence on the ride comfort of the vehicle. During the AT shift process, joint elements such as the clutches and bands engage or disengage, linking sets of gears to create a fixed gear ratio. Since these ratios differ between gears in a fixed-gear-ratio transmission, the motion of the vehicle can change suddenly during the shift process if the joint elements are engaged or disengaged inappropriately, additionally impacting the entire transmission system and increasing the temperature of the connecting elements. The objective was to establish a system model for an AT powertrain using Matlab/Simulink. This paper further analyses the effect of varying hydraulic pressure and the associated impact on shift quality during both engagement and disengagement of the joint elements, demonstrating that shift quality improvements can be achieved with appropriate hydraulic pressure control.
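To make the hydraulic pressure/shift quality trade-off concrete, the sketch below integrates a deliberately simplified two-inertia clutch engagement model with hypothetical parameters; it is not the paper's Matlab/Simulink powertrain model. Higher pressure shortens the slipping phase, but the larger clutch torque also means a harsher transient at lock-up, which is the penalty that pressure control seeks to balance.

```python
def simulate_engagement(pressure_bar, dt=1e-4, t_end=2.0):
    """Very simplified two-inertia model of a clutch engaging during a shift.

    All parameters are hypothetical; the clutch torque capacity is taken as
    proportional to the applied hydraulic pressure while the plates slip.
    """
    J_e, J_o = 0.25, 2.0        # input-side / output-side inertias [kg m^2]
    T_e, T_load = 120.0, 80.0   # input torque and road-load torque [N m]
    k_clutch = 60.0             # clutch torque per bar of pressure [N m / bar]
    w_e, w_o = 250.0, 150.0     # initial speeds [rad/s]; positive slip
    t = 0.0
    while t < t_end and w_e > w_o:          # integrate until the clutch locks up
        T_c = k_clutch * pressure_bar       # transmittable (slipping) clutch torque
        w_e += dt * (T_e - T_c) / J_e       # input side decelerates when T_c > T_e
        w_o += dt * (T_c - T_load) / J_o    # output side is accelerated by the clutch
        t += dt
    return t

for p in (3.0, 6.0, 12.0):      # hypothetical line pressures [bar]
    print(f"{p:>4.1f} bar -> lock-up in roughly {simulate_engagement(p):.3f} s")
```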

Simulation Study on the Indoor Thermal Comfort with Insulation on Interior Structural Components of Super High-Rise Residences

In this study, we discuss how high-thermal-capacity structural components affect the thermal comfort of super-high-rise residences, considering different building orientations, structures, and insulation methods. We used the dynamic simulation software THERB (simulation of the thermal environment of residential buildings), which can estimate the temperature, humidity, sensible temperature, and heating/cooling load for multiple buildings. In past studies, we examined the relationship between air-conditioning loads (hereinafter referred to as AC loads), the interior structural parts, and the AC-usage patterns of super-high-rise residences. Super-high-rise residences have more structural components, such as pillars and beams, than ordinary apartment buildings. The skeleton is generally made of concrete and steel, which have high thermal-storage capacities. The thermal-storage capacity of super-high-rise residences is therefore considered to have a larger impact on the AC load and thermal comfort than that of ordinary residences. We show that the AC load of super-high-rise units can be reduced by installing insulation on the surfaces of interior walls that are not usually insulated in Japan.

Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis

This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output layer unit. Compared with more recent neural networks such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure. When the learning objects are complicated, problems such as unsuccessful learning or the significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics to an output layer unit oscillate during the learning process for complicated problems. The multi-stage learning method proposed by the authors for function approximation problems classifies the learning data in a phased manner, focusing on their learnability prior to learning in the multilayer neural network, and this paper demonstrates the validity of the multi-stage learning method. Specifically, it is verified by computer experiments that both learning accuracy and learning time are improved when the BP method is used as the learning rule of the multi-stage learning method. In learning, oscillatory phenomena of the learning curve play an important role in learning performance. The authors also discuss the mechanisms by which oscillatory phenomena occur in learning. Furthermore, the authors discuss the reasons why the errors of some data remain large even after learning, by observing behaviors during learning.
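A minimal sketch of one possible staged scheme is shown below, using a generic scikit-learn MLP rather than the authors' BP implementation: samples are ranked by their current error after a short preliminary fit and are then fed to the network in phases from easy to hard. The task, network size, and stage count are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy function-approximation task (a hypothetical stand-in for the paper's problems).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * np.sign(X[:, 0])   # smooth part plus a hard discontinuity

net = MLPRegressor(hidden_layer_sizes=(32,), solver="adam",
                   learning_rate_init=0.01, max_iter=200, random_state=0)

# Stage 0: a short preliminary fit on all data, only to rank the samples by error.
net.fit(X, y)
errors = np.abs(net.predict(X) - y)

# Split the data into stages: easy (small-error) samples first, hard samples later.
order = np.argsort(errors)
stages = np.array_split(order, 3)

# Multi-stage learning: keep training, feeding in one additional stage at a time.
seen = np.array([], dtype=int)
for step, stage in enumerate(stages, start=1):
    seen = np.concatenate([seen, stage])
    for _ in range(200):                      # extra epochs on the data seen so far
        net.partial_fit(X[seen], y[seen])
    mse = float(np.mean((net.predict(X) - y) ** 2))
    print(f"after stage {step}: training MSE on all data = {mse:.4f}")
```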

How Valid Are Our Language Test Interpretations? A Demonstrative Example

Validity is an overriding consideration in language testing. If a test score is intended for a particular purpose, this use must be supported by empirical evidence. This article addresses the validity of a multiple-choice achievement test (MCT). The test is administered at the end of each semester to decide on students' mastery of a course in general English. To provide empirical evidence pertaining to the validity of this test, two criterion measures were used: a Cloze test and a C-test, both of which are reported to gauge general English proficiency. The results of the analyses show that there is a statistically significant correlation among participants' scores on the MCT, the Cloze test, and the C-test. Drawing on the findings of the study, it can be cautiously deduced that these tests measure the same underlying trait. However, allowing for the limitations of using criterion measures to validate tests, we cannot make any absolute claim as to the validity of this MCT.
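The criterion-related evidence rests on simple score correlations; with hypothetical score vectors (not the study's data), the computation looks like this:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores for 12 students on the three measures.
rng = np.random.default_rng(42)
ability = rng.normal(0, 1, 12)
mct   = 60 + 10 * ability + rng.normal(0, 3, 12)   # multiple-choice achievement test
cloze = 30 +  5 * ability + rng.normal(0, 2, 12)   # Cloze test
ctest = 40 +  8 * ability + rng.normal(0, 3, 12)   # C-test

for name, scores in [("Cloze", cloze), ("C-test", ctest)]:
    r, p = pearsonr(mct, scores)
    print(f"MCT vs {name}: r = {r:.2f}, p = {p:.4f}")
```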

Numerical Analysis of Flow through Abrasive Water Suspension Jet: The Effect of Garnet, Aluminum Oxide and Silicon Carbide Abrasive on Skin Friction Coefficient Due To Wall Shear and Jet Exit Kinetic Energy

It is well known that the abrasive particles in an abrasive water suspension have a significant effect on the erosion characteristics of the inside surface of the nozzle. Abrasive particles moving with the flow cause a severe skin friction effect, thereby altering the nozzle diameter through wear, which in turn affects the life of the nozzle for effective machining. Various commercial abrasives are available for abrasive water jet machining, and the erosion characteristics of each abrasive are different. In consideration of this aspect, the present work analyzes the effect of the abrasive materials garnet, aluminum oxide, and silicon carbide on the skin friction coefficient due to wall shear stress and on the jet exit kinetic energy. It is found that the abrasive material of lower density produces a relatively higher skin friction effect and a higher jet exit kinetic energy.
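For reference, the two quantities compared are the skin friction coefficient defined from the wall shear stress and the jet exit kinetic energy rate (standard definitions in our notation, not reproduced from the paper):

```latex
C_f = \frac{\tau_w}{\tfrac{1}{2}\,\rho\, V^2}, \qquad
\dot{E}_{\mathrm{jet}} = \tfrac{1}{2}\,\dot{m}\,V_e^{2} = \tfrac{1}{2}\,\rho\, A_e\, V_e^{3},
```

where τ_w is the wall shear stress, ρ the suspension density, V a reference velocity, and A_e and V_e the nozzle exit area and exit velocity.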

Static Headspace GC Method for Aldehydes Determination in Different Food Matrices

Aldehydes, as secondary lipid oxidation products, are highly specific to the oxidative degradation of the particular polyunsaturated fatty acids present in foods. Gas chromatographic analysis of these volatile compounds has been widely used for monitoring the deterioration of food products. The developed static headspace gas chromatography method with flame ionization detection (SHS-GC-FID) was applied to monitor the aldehydes present in processed foods such as bakery, meat, and confectionery products. Five selected aldehydes were determined in samples without any sample preparation, except grinding for the bakery and meat products. SHS-GC analysis allows the separation of propanal, pentanal, hexanal, heptanal, and octanal within 15 min. Aldehydes were quantified in fresh and stored samples; the obtained range of aldehydes was 1.62±0.05 to 9.95±0.05 mg/kg in crackers, 6.62±0.46 to 39.16±0.39 mg/kg in sausages, and 0.48±0.01 to 1.13±0.02 mg/kg in cocoa spread cream. From the obtained results it can be concluded that the proposed method is suitable for different types of samples, and that the aldehyde content varies depending on the type of sample and differs between fresh and stored samples of the same type.

Optimization of the Structures of the Electric Feeder Systems of the Oil Pumping Plants in Algeria

At present, the oil pumping plants in Algeria are fed with electric power by independent local sources. This type of feeding has many advantages (little climatic influence, independent operation). However, it requires qualified maintenance staff, a rather high frequency of maintenance and repair, and additional fuel costs. Taking into account the increasing development of the national electric supply network (Sonelgaz), a real possibility appears of transferring from the local sources to centralized sources. The latter can be not only more economical but also more reliable than the independent local sources. In order to carry out this transfer, it is necessary to work out an optimal strategy for rebuilding these networks, taking into account the economic parameters and the reliability indices.

Study of Natural Convection in a Triangular Cavity Filled with Water: Application of the Lattice Boltzmann Method

The Lattice Boltzmann Method (LBM) with double populations is applied to solve the steady-state laminar natural convective heat transfer in a triangular cavity filled with water. The bottom wall is heated, the vertical wall is cooled, and the inclined wall is kept adiabatic. The buoyancy effect is modeled by applying the Boussinesq approximation to the momentum equation. The fluid velocity is determined by a D2Q9 LBM, and the energy equation is discretized by a D2Q4 LBM to compute the temperature field. Comparisons with previously published work are performed and found to be in excellent agreement. Numerical results are obtained for a wide range of parameters: the Rayleigh number from … to … and the inclination angle from 0° to 360°. Flow and thermal fields are exhibited by means of streamlines and isotherms. It is observed that the inclination angle can be used as a relevant parameter to control heat transfer in right-angled triangular enclosures.
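For context, the single-relaxation-time (BGK) lattice Boltzmann update that such double-population schemes build on takes the standard form

```latex
f_i(\mathbf{x} + \mathbf{c}_i\,\Delta t,\; t + \Delta t) - f_i(\mathbf{x}, t)
  = -\frac{1}{\tau}\left[ f_i(\mathbf{x}, t) - f_i^{\,\mathrm{eq}}(\mathbf{x}, t) \right] + \Delta t\, F_i ,
\qquad
\mathbf{F} = \rho\, g\, \beta\,\bigl(T - T_{0}\bigr)\,\hat{\mathbf{j}},
```

with the D2Q9 distributions f_i carrying the flow, an analogous D2Q4 equation for the temperature distributions, τ the relaxation time, and the Boussinesq buoyancy force F added along the vertical direction.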

Hybrid Honeypot System for Network Security

Nowadays, we are facing network threats that cause enormous damage to the Internet community day by day. In this situation, more and more people try to protect their networks using traditional mechanisms such as firewalls, Intrusion Detection Systems, etc. Among them, the honeypot is a versatile tool for the security practitioner: a system that is meant to be attacked or interacted with in order to gather more information about attackers, their motives, and their tools. In this paper, we describe the usefulness of low-interaction and high-interaction honeypots and compare them. We then propose a hybrid honeypot architecture that combines low- and high-interaction honeypots to mitigate their respective drawbacks. In this architecture, the low-interaction honeypot is used as a traffic filter: activities like port scanning can be effectively detected by the low-interaction honeypot and stopped there, while traffic that cannot be handled by the low-interaction honeypot is handed over to the high-interaction honeypot. In this case, the low-interaction honeypot acts as a proxy whereas the high-interaction honeypot offers an optimal level of realism. To contain any infection of the high-interaction honeypot, a containment environment (VMware) is used.
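A minimal sketch of the filtering front end is given below: a low-interaction listener that logs connections, answers bare scans with a fake banner, and proxies anything more substantial to the high-interaction honeypot. The addresses, port, and banner are hypothetical placeholders, and this is our own illustration rather than the architecture's actual implementation.

```python
import socket
import threading

# Hypothetical addresses: the low-interaction front end listens here and the
# high-interaction honeypot (e.g. a VMware-contained host) sits behind it.
LISTEN_ADDR = ("0.0.0.0", 2222)
HIGH_INTERACTION_ADDR = ("192.0.2.10", 22)   # documentation-range IP, adjust as needed
FAKE_BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until either side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client: socket.socket, addr) -> None:
    client.settimeout(5.0)
    try:
        first = client.recv(1024)
    except socket.timeout:
        first = b""
    print(f"[low-interaction] {addr} sent {len(first)} bytes: {first[:60]!r}")
    if not first:                        # bare connect / port scan: answer cheaply and stop here
        client.sendall(FAKE_BANNER)
        client.close()
        return
    # Anything more substantial is handed over to the high-interaction honeypot.
    upstream = socket.create_connection(HIGH_INTERACTION_ADDR, timeout=5.0)
    upstream.sendall(first)
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

def main() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(LISTEN_ADDR)
        srv.listen()
        print(f"low-interaction filter listening on {LISTEN_ADDR}")
        while True:
            client, addr = srv.accept()
            threading.Thread(target=handle, args=(client, addr), daemon=True).start()

if __name__ == "__main__":
    main()
```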

The Vulnerability Analysis of Java Bytecode Based on Points-to Dataflow

Today many developers use Java components collected from the Internet as external libraries (LIBs) to design and develop their own software. However, unknown security bugs may exist in these components; for example, an SQL injection bug may come from a component that performs no specific check on user input strings. Checking for these bugs is very difficult without source code. A novel method is therefore needed to check for such bugs in Java bytecode based on points-to dataflow analysis, which differs from the common analysis techniques based on vulnerability pattern checking. It can be used as an assistant tool for the security analysis of Java bytecode from unknown software that will be used as external libraries.
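As a toy illustration of the underlying idea, the sketch below propagates taint facts (a simplification of points-to dataflow) over a made-up three-address program and flags the case where an unchecked user string reaches an SQL execution sink; the instruction format, sources, sanitizers, and sinks are all hypothetical.

```python
# Toy dataflow over a made-up three-address program: each instruction is
# (op, target, sources). Taint starts at user-input sources and is propagated
# through assignments/concatenations; reaching executeQuery unsanitized is flagged.
PROGRAM = [
    ("input",  "userName", []),                 # e.g. value read from an HTTP request
    ("const",  "prefix",   []),                 # "SELECT * FROM users WHERE name='"
    ("concat", "query",    ["prefix", "userName"]),
    ("call",   "executeQuery", ["query"]),      # sink: SQL execution inside some external LIB
]

SINKS = {"executeQuery"}
SANITIZERS = {"escapeSql"}

def analyze(program):
    tainted = set()
    findings = []
    for op, target, sources in program:
        if op == "input":
            tainted.add(target)                       # source of untrusted data
        elif op in ("assign", "concat"):
            if any(s in tainted for s in sources):    # taint flows through the assignment
                tainted.add(target)
        elif op == "call":
            if target in SANITIZERS:
                tainted.difference_update(sources)    # sanitizer clears its arguments
            elif target in SINKS and any(s in tainted for s in sources):
                findings.append(f"possible SQL injection: tainted value reaches {target}")
    return findings

for finding in analyze(PROGRAM):
    print(finding)
```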