Site Selection of Traffic Cameras Based on Dempster-Shafer and Bagging Theory

Traffic incidents affect all parts of society, and monitoring road networks with sufficient traffic-control devices can help reduce the number of accidents; choosing the best method for the optimal siting of these devices is therefore essential for implementing an effective monitoring system. This paper considers the criteria important for the optimal site selection of traffic cameras and applies aggregation methods based on Bagging and Dempster-Shafer concepts. In the first step, criteria such as annual traffic flow and distance from critical places, such as parks that need closer traffic monitoring, were identified for selecting the road links most important for camera installation. Classification methods, namely artificial neural networks and decision tree algorithms, were then employed to classify road links according to their importance for camera installation. Finally, to improve the classifiers' results, aggregation methods based on Bagging and Dempster-Shafer theory were applied.
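As a rough illustration of the aggregation step, the sketch below bags decision trees over two toy road-link features and fuses the tree ensemble's and a neural network's class probabilities with Dempster's rule over two singleton hypotheses; the features, data, and fusion details are illustrative assumptions, not the paper's experimental setup.

    # Hedged sketch: bagging + Dempster-Shafer fusion of two classifiers.
    import numpy as np
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # toy road-link features: [annual traffic flow, distance to critical place]
    X = rng.random((200, 2))
    y = (X[:, 0] - 0.5 * X[:, 1] > 0.3).astype(int)   # 1 = install camera

    bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25).fit(X, y)
    ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, y)

    def dempster_two_class(m1, m2):
        """Dempster's rule for masses on {skip, install} (singletons only)."""
        conflict = m1[0]*m2[1] + m1[1]*m2[0]
        return np.array([m1[0]*m2[0], m1[1]*m2[1]]) / (1.0 - conflict)

    link = np.array([[0.9, 0.2]])                     # one candidate road link
    fused = dempster_two_class(bag.predict_proba(link)[0],
                               ann.predict_proba(link)[0])
    print("fused belief [skip, install]:", np.round(fused, 3))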

Electronic Commerce: Consumer Protection in Electronic Payments

As a by-product of its "cyberspace" status, electronic commerce is global, encompassing a whole range of B2C relationships which need to be approached with solutions provided at a local level while remaining viable when applied to global issues. Today, the European Union seems to be endowed with a reliable legal framework for consumer protection. A question which remains, however, is the enforcement of this protection. This is probably a matter of time and awareness from both parties in the B2C relationship. Business should realize that enhancing trust in the minds of consumers is more than a question of technology; it is a question of best practice. Best practice starts with the online service of high street banks as well as with the existence of a secure, user-friendly and cost-effective payment system. It also includes respect for privacy and the use of smart cards, as well as privacy-enhancing technologies and fair information practices. In sum, only by offering this guarantee of privacy and security will the consumer be assured that, in cyberspace, his/her interests will be protected in the same manner as in a traditional commercial environment.

Why Traditional Technology Acceptance Models Won't Work for Future Information Technologies

This paper illustrates why existing technology acceptance models are only of limited use for predicting and explaining the adoption of future information and communication technologies. It starts with a general overview of technology adoption processes and presents several theories of the acceptance and adoption of traditional information technologies. This is followed by an overview of recent developments in the area of information and communication technologies. Based on the arguments elaborated in these sections, it is shown why the factors used to predict the adoption of existing systems will not be sufficient for explaining the adoption of future information and communication technologies.

MAS Simulations of Optical Antenna Structures

A semi-analytic boundary discretization method, the Method of Auxiliary Sources (MAS), is used to analyze optical antennas consisting of metallic parts. In addition to standard dipole-type antennas consisting of two pieces of metal, a new structure consisting of a single metal piece with a tiny groove in the center is analyzed. It is demonstrated that optical antennas pose difficult numerical problems because they exhibit strong material dispersion, loss, and plasmon-polariton effects that demand very accurate numerical simulation. The new structure takes advantage of the Channel Plasmon-Polariton (CPP) effect and exhibits a strong enhancement of the electric field in the groove. A primitive 3D antenna model with spherical nanoparticles is also analyzed.
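For readers unfamiliar with MAS, the following minimal sketch illustrates the core idea on a much simpler problem, a 2D perfectly conducting cylinder under TM plane-wave incidence: auxiliary line sources placed inside the body are weighted so that the total tangential field vanishes at boundary collocation points. The radii, wavelength, and source placement are illustrative assumptions; real optical antennas additionally require dispersive, lossy metal models, which this sketch does not include.

    # Hedged MAS sketch: 2D PEC cylinder, TM plane wave (not the paper's model).
    import numpy as np
    from scipy.special import hankel2

    k = 2 * np.pi / 500e-9                      # free-space wavenumber at 500 nm
    a = 100e-9                                  # cylinder radius
    N = 60                                      # auxiliary sources / test points
    phi = np.linspace(0.0, 2.0*np.pi, N, endpoint=False)
    rb = a * np.exp(1j*phi)                     # boundary points (complex plane)
    ra = 0.7 * a * np.exp(1j*phi)               # auxiliary sources shifted inside

    # Each auxiliary source radiates a cylindrical wave H0^(2)(k|r - r_n|).
    A = hankel2(0, k * np.abs(rb[:, None] - ra[None, :]))
    Einc = np.exp(-1j * k * rb.real)            # unit plane wave along +x

    # Enforce E_tan = 0 on the PEC boundary: A @ c = -Einc (least squares).
    c, *_ = np.linalg.lstsq(A, -Einc, rcond=None)

    r0 = 3*a + 0j                               # observation point outside
    Es = hankel2(0, k * np.abs(r0 - ra)) @ c
    print("scattered field at r0:", Es)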

Thermogravimetric Study on the Pyrolysis of Various Lignocellulosic Biomasses for Potential Hydrogen Production

This paper studies the decomposition behavior, in a pyrolytic environment, of four lignocellulosic biomasses (oil palm shell, oil palm frond, rice husk and paddy straw) and two commercial biomass components (pure cellulose and lignin) using a thermogravimetric analyzer (TGA). The unit consists of a microbalance and a furnace purged with 100 cc (STP) min-1 of nitrogen (N2) as the inert gas. The heating rate was set at 20°C min-1 and the temperature was ramped from 50 to 900°C. Hydrogen gas production during pyrolysis was monitored using an Agilent 7890A gas chromatograph. Oil palm shell, oil palm frond, paddy straw and rice husk were found to be sufficiently reactive in a pyrolytic environment of up to 900°C, since pyrolysis of these biomasses starts at temperatures as low as 200°C and the maximum weight loss is achieved at about 500°C. Since the cellulose, hemicellulose and lignin fractions of oil palm shell, oil palm frond, paddy straw and rice husk do not differ greatly, the T-50 and R-50 values obtained are very similar. H2 production also started rapidly at this temperature, owing to the decomposition of the biomass inside the TGA. Biomass with higher lignin content, such as oil palm shell, was found to sustain H2 production for a longer duration than materials with high cellulose and hemicellulose contents.
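As a small illustration of how a characteristic temperature such as T-50 is read off a TGA trace, the sketch below interpolates the temperature at which half of the total weight loss has occurred; this definition of T-50 and the synthetic sigmoidal curve are assumptions for illustration, not the paper's data.

    # Hedged sketch: estimating T-50 from a TGA weight-loss curve.
    import numpy as np

    def t50(temperature_C, weight_mg):
        w0, wf = weight_mg[0], weight_mg[-1]
        half = w0 - 0.5 * (w0 - wf)             # weight at 50% of total loss
        # weight decreases monotonically, so flip both arrays for np.interp
        return np.interp(half, weight_mg[::-1], temperature_C[::-1])

    # synthetic trace, 50-900 degC at 20 degC/min (illustrative only)
    T = np.linspace(50.0, 900.0, 200)
    w = 10.0 - 7.0 / (1.0 + np.exp(-(T - 350.0) / 40.0))   # sigmoidal loss
    print("T-50 ~ %.0f degC" % t50(T, w))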

Instability Problems of Turbo-Machines under Radial Distortion

A ring segment is placed upstream and rotated at 83 Hz, 166 Hz, 333 Hz and 666 Hz to study the effect of periodic distortion. This type of perturbation cannot be allowed in a physical experiment, since mechanical failure of any upstream part would destroy the blade system; such a study is therefore only possible by CFD. We use two pumps, the NS32 (ENSAM) and a three-blade pump (Tamagawa University). Benchmark computations were performed without the perturbation parts and confirmed good agreement between the computed and measured head-flow-rate characteristics. We obtained the growth rate of the pressure fluctuation, which represents the global instability of the turbo-system. The fluctuating torque components were 0.01 Nm (5000 rpm), 0.1 Nm (10000 rpm), 0.04 Nm (20000 rpm) and 0.15 Nm (40000 rpm), respectively. Only for 10000 rpm (166 Hz) was the output torque random, which implies that the perturbation creates unsteady flow through separations on the blades and will reduce the pressure loss significantly.

An Advanced Hybrid P2P Botnet 2.0

Recently, malware attacks over the Internet by e-mail, denial of service (DoS) or distributed denial of service (DDoS) have become more serious, and botnets have become a significant part of Internet malware attacks. A traditional botnet comprises three parts: the botmaster, command and control (C&C) servers and bots. The C&C servers receive commands from the botmaster and remotely control the distributed computers. Bots use DNS to find the positions of the C&C servers. In this paper, we propose an advanced hybrid peer-to-peer (P2P) botnet 2.0 (AHP2P botnet 2.0) that uses Web 2.0 technology to hide the botmaster's instructions in social sites, which are regarded as C&C servers. Servent bots act as sub-C&C servers that retrieve the instructions from the social sites. The AHP2P botnet 2.0 can evaluate the performance of servent bots, reduce DNS traffic from bots to C&C servers, and make bot actions harder to detect than those of IRC-based botnets over the Internet.

Fabrication of High Aluminum Content Mg Alloys Using a Horizontal Twin Roll Caster

This study investigates the manufacture of high-aluminum-content Mg alloys using a horizontal twin roll caster. Recently, weight saving has become a key issue for lighter transport equipment as well as electronic component parts. As an alternative to aluminum alloys, the development of magnesium alloys with higher strength is anticipated. Normally, a high-aluminum-content Mg alloy has poor ductility and is difficult to roll because of its high strength. However, the twin roll casting process is suitable for manufacturing wrought Mg alloys because the material can be cast directly from the molten metal. In this study, high-aluminum-content magnesium alloy sheet was manufactured by the roll casting process. The effects of manufacturing parameters, such as roll velocity, pouring temperature and roll gap, on casting were investigated. Microscopic observation of the crystal structure of cross-sections of the as-cast and rolled strips was also conducted.

Issues in Procurement of Castings

The aim of this paper is to present current and future procedures in castings procurement. Differences in procurement are highlighted. The supplier selection criteria used in practice are compared with literature findings. Different trends related to supply chains are presented, and how they are reflected in actual castings procurement is described. To fulfil this aim, interviews were conducted in nine companies that use castings. It was found that the largest casting users have the most subcontractor foundries, and it is more typical for them to have multiple suppliers for the same parts. Currently only two companies out of nine purchase castings outside Europe, but the others are also moving in the same direction. The main reason is the need to lower purchasing costs. Another trend is that all companies want to buy cast components or sub-assemblies from foundries instead of raw castings. It was found that price is the main supplier selection criterion. All companies use competitive bidding in supplier selection.

Regression Test Selection Technique for Multiple Programming Languages

Regression testing is a maintenance activity applied to modified software to provide confidence that the changed parts are correct and that the unchanged parts have not been adversely affected by the modifications. Regression test selection techniques reduce the cost of regression testing by selecting a subset of an existing test suite to use in retesting modified programs. This paper presents the first general regression test selection technique that is code-based yet can select test cases for programs written in any programming language; it also handles incomplete programs. We also describe RTSDiff, a regression test selection system that implements the proposed technique. The results of empirical studies performed in four programming languages (Java, C#, Cµ and Visual Basic) show that the technique is efficient and effective in reducing the size of the test suite.
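The paper's RTSDiff system is not reproduced here, but the sketch below shows the core selection idea in its simplest coverage-based form: keep exactly those tests whose covered code entities intersect the set of changed entities. All test and entity names are hypothetical.

    # Hedged sketch of coverage-based regression test selection.
    from typing import Dict, Set, List

    def select_tests(coverage: Dict[str, Set[str]],
                     changed: Set[str]) -> List[str]:
        """coverage maps a test name to the code entities it exercises."""
        return sorted(t for t, entities in coverage.items()
                      if entities & changed)

    coverage = {
        "test_login":    {"auth.verify", "db.fetch_user"},
        "test_checkout": {"cart.total", "pay.charge"},
        "test_profile":  {"db.fetch_user", "ui.render"},
    }
    print(select_tests(coverage, {"db.fetch_user"}))
    # -> ['test_login', 'test_profile']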

Design of an Innovative Accelerant Detector

Today, canines are still used effectively in accelerant detection situations. However, this method is becoming impractical in the modern age, and a new automated replacement for the canine is required. This paper reports the design of an innovative accelerant detector. Designing an accelerant detector is a long process, as is any design process; therefore, a solution to the need for a mobile, effective accelerant detector is hereby presented. The device is simple and efficient to ensure that any accelerant detection can be conducted quickly and easily. The design utilizes ultraviolet (UV) light to detect the accelerant: when UV light shines on an accelerant, the hydrocarbons in the accelerant fluoresce. The advantages of using UV light to detect accelerants are also outlined in this paper. The mobility of the device is achieved by using a direct current (DC) motor to run tank tracks, which were chosen to ensure that the device remains mobile in the rough terrain of a fire site. The materials selected for the various parts are also presented. A SolidWorks simulation of the stresses in the shafts was also conducted, and the results are presented. This design is an innovative solution that offers a user-friendly interface. The design is also environmentally friendly, ecologically sound and safe to use.

RANFIS: Rough Adaptive Neuro-Fuzzy Inference System

The paper presents a new hybridization methodology involving neural, fuzzy and rough computing. A rough-set-based approximation technique is proposed on top of a particular neuro-fuzzy architecture. A new rough neuron composition, consisting of a combination of a lower-bound neuron and a boundary neuron, is also described. The conventional error-convergence scheme of backpropagation is replaced by a new framework based on an 'output excitation factor' and an inverse input transfer function. The paper also presents a brief comparison of the performance of existing rough neural networks and the ANFIS architecture against the proposed methodology. It can be observed that the rough-approximation-based neuro-fuzzy architecture is superior to its counterparts.
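Since the abstract describes the rough neuron only qualitatively, the following sketch shows one plausible reading: a pair of sigmoid units over the same inputs whose ordered outputs serve as lower and upper (boundary) approximations. The weights and the min/max coupling are illustrative assumptions, not the paper's formulation.

    # Hedged sketch of a rough neuron as a lower-bound/boundary neuron pair.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def rough_neuron(x, w_lower, w_boundary, b_lower=0.0, b_boundary=0.0):
        lower = sigmoid(np.dot(w_lower, x) + b_lower)
        boundary = sigmoid(np.dot(w_boundary, x) + b_boundary)
        # the lower approximation can never exceed the boundary output
        return min(lower, boundary), max(lower, boundary)

    x = np.array([0.2, 0.7])
    lo, up = rough_neuron(x, np.array([1.0, -0.5]), np.array([0.8, 0.3]))
    print("lower = %.3f, upper = %.3f" % (lo, up))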

2D Human Motion Regeneration with Stick Figure Animation Using Accelerometers

This paper explores the opportunity of using tri-axial wireless accelerometers for supervised monitoring of sports movements. In particular, a motion analysis system for the upper extremities of lawn bowlers is developed. Accelerometers are placed on parts of the human body: on the chest to represent the shoulder movements, on the back to capture the trunk motion, and on the back of the hand, the wrist and above the elbow to capture arm movements. These sensor placements are carefully designed to avoid restricting the bowler's movements. Data is acquired from these sensors in soft real time using virtual instrumentation; the acquired data is then conditioned and converted into the parameters required for motion regeneration. A user interface was also created to facilitate the acquisition of data and the broadcasting of commands to the wireless accelerometers. All motion regeneration in this paper deals with the motion of the human body segments in the X and Y directions, i.e., the anterior/posterior and lateral directions respectively.
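As a schematic of the regeneration step, the sketch below double-integrates a synthetic two-axis acceleration trace into an X/Y displacement path for one body segment. The sampling rate and signals are invented, and a real pipeline must first remove gravity and sensor drift; the mean subtraction here is only a crude stand-in for proper filtering.

    # Hedged sketch: acceleration -> displacement by double integration.
    import numpy as np

    fs = 100.0                                  # assumed sampling rate, Hz
    dt = 1.0 / fs
    t = np.arange(0.0, 2.0, dt)
    # synthetic X/Y acceleration of one body segment (m/s^2)
    acc = np.stack([np.sin(2*np.pi*t), 0.5*np.cos(2*np.pi*t)], axis=1)

    vel = np.cumsum(acc, axis=0) * dt           # first integration
    vel -= vel.mean(axis=0)                     # crude drift removal
    pos = np.cumsum(vel, axis=0) * dt           # second integration

    print("final X/Y displacement (m):", pos[-1])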

Nonconforming Control Charts for Zero-Inflated Poisson Distribution

This paper develops c-charts based on a zero-inflated Poisson (ZIP) process approximated by a geometric distribution with parameter p. The estimate of p fitted to the ZIP distribution is used to calculate the mean, median and variance of the geometric distribution, from which c-charts are constructed by three different methods. The cg-chart constructs its control limits from the mean and variance of the geometric distribution. The cmg-chart uses the mean to construct the control limits. The cme-chart develops its control limits from the median and variance of the geometric distribution. The performance of the charts is evaluated by the average run length and the average coverage probability. We found that, for an in-control process, the cg-chart is superior for low levels of the mean at all levels of the proportion of zeros. For an out-of-control process, the cmg-chart and cme-chart are best for means of 2, 3 and 4 at all parameter levels.
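As a minimal illustration of how such limits are built, the sketch below computes 3-sigma control limits from the mean and variance of a geometric distribution, assuming the number-of-failures parameterization with support 0, 1, 2, ... (mean (1-p)/p, variance (1-p)/p^2); the paper's exact limit formulas for the cmg- and cme-charts may differ.

    # Hedged sketch: 3-sigma c-chart limits from a geometric distribution.
    import math

    def geometric_limits(p, k=3.0):
        mean = (1.0 - p) / p
        var = (1.0 - p) / p**2
        ucl = mean + k * math.sqrt(var)
        lcl = max(0.0, mean - k * math.sqrt(var))   # counts cannot be negative
        return lcl, mean, ucl

    lcl, cl, ucl = geometric_limits(p=0.4)
    print("LCL=%.2f  CL=%.2f  UCL=%.2f" % (lcl, cl, ucl))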

Dynamic Clustering using Particle Swarm Optimization with Application in Unsupervised Image Classification

A new dynamic clustering approach (DCPSO), based on particle swarm optimization, is proposed and applied to unsupervised image classification. The proposed approach automatically determines the "optimum" number of clusters and simultaneously clusters the data set with minimal user interference. The algorithm starts by partitioning the data set into a relatively large number of clusters to reduce the effects of initial conditions. Using binary particle swarm optimization, the "best" number of clusters is selected. The centers of the chosen clusters are then refined via the K-means clustering algorithm. The experiments conducted show that the proposed approach generally finds the "optimum" number of clusters on the tested images.
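A compact sketch of the selection stage follows: a binary PSO flips bits that keep or drop centers from an over-sized candidate pool, scored here by mean quantization error. All constants and data are illustrative assumptions, and a final K-means pass (not shown) would refine the surviving centers, as the abstract describes.

    # Hedged sketch of binary-PSO cluster selection (not the paper's exact DCPSO).
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0, 3, 6)])  # toy data
    pool = X[rng.choice(len(X), 12, replace=False)]    # candidate centers

    def fitness(mask):
        if mask.sum() == 0:
            return np.inf
        d = np.linalg.norm(X[:, None] - pool[mask][None], axis=2)
        return d.min(axis=1).mean()                    # quantization error

    n_particles, dims = 10, len(pool)
    w, c1, c2 = 0.7, 1.5, 1.5                          # assumed PSO constants
    pos = rng.random((n_particles, dims)) < 0.5        # bit = keep this center
    vel = rng.normal(0.0, 0.1, (n_particles, dims))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(40):
        r1, r2 = rng.random((2, n_particles, dims))
        vel = (w*vel + c1*r1*(pbest.astype(float) - pos.astype(float))
                     + c2*r2*(gbest.astype(float) - pos.astype(float)))
        pos = rng.random((n_particles, dims)) < 1.0/(1.0 + np.exp(-vel))
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()

    print("clusters kept by binary PSO:", int(gbest.sum()))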

Authentic Learning for Computer Network with Mobile Device-Based Hands-On Labware

Computer network courses are essential parts of the college computer science curriculum, and hands-on networking experience is well recognized as an effective approach to help students better understand network concepts, the layered architecture of network protocols, and the dynamics of networks. However, existing networking labs are usually server-based and relatively cumbersome, requiring a certain level of specialized skill and resources to set up and maintain the lab environment. Many universities and colleges lack the resources and infrastructure in this field and have difficulty providing students with hands-on practice labs. A new, affordable and easily adoptable approach to networking labs is desirable to enhance network teaching and learning. In addition, current network labs fall short in providing hands-on practice for modern wireless and mobile network learning. With the prevalence of smart mobile devices, wireless and mobile networks are permeating various aspects of our information society. Emerging mobile technology offers computer science students more authentic learning opportunities, especially in network learning. A mobile device-based hands-on labware can provide an excellent 'real world' authentic learning environment for computer networks, especially for wireless network study. In this paper, we present our mobile device-based hands-on labware (a series of lab modules) for computer network learning, guided by authentic learning principles to immerse students in a relevant, real-world learning environment. We have been using this labware in teaching computer network, mobile security, and wireless network classes. Student feedback shows that students learn more when they have a hands-on, authentic learning experience.

Stature Estimation Using Foot and Shoeprint Length in a Malaysian Population

Formulating a biological profile is one of the modern roles of the forensic anthropologist. The present study was conducted to estimate height using the foot and shoeprint lengths of a Malaysian population. This work can provide very useful information for the identification of individuals in forensic cases based on shoeprint evidence; it can help narrow down suspects and ease police investigations. Besides, stature is an important parameter in establishing the partial identity of unidentified and mutilated bodies. Thus, this study can help with the problems encountered in mass disasters, massacres, explosions and assault cases, where it is very hard to identify body parts because people are dismembered and become unrecognizable. Samples in this research were collected from 200 Malaysian adults (100 males and 100 females) aged from 20 to 45 years. Shoeprint lengths were measured from prints made with flat shoes. Other information, such as gender, foot length and height of the subject, was also recorded. The data was analyzed using IBM SPSS Statistics 19 software. Results indicated that foot length has a stronger correlation with stature than shoeprint length for both sides of the feet. However, the pooled sample, in which gender was treated as undetermined, showed better correlations for both the foot length and shoeprint length parameters than the separate male and female analyses. In addition, prediction equations for estimating stature were developed by linear regression on foot length and shoeprint length; foot length gives a better prediction than shoeprint length.
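A minimal sketch of how such a prediction equation is derived by simple linear regression (stature = a + b * foot length) follows; the sample values are synthetic, not the study's Malaysian data.

    # Hedged sketch: stature-prediction equation by simple linear regression.
    import numpy as np

    foot_cm = np.array([24.1, 25.3, 26.0, 23.5, 27.2, 24.8])
    stature_cm = np.array([160.2, 168.5, 172.1, 157.8, 179.0, 165.3])

    b, a = np.polyfit(foot_cm, stature_cm, 1)          # slope, intercept
    r = np.corrcoef(foot_cm, stature_cm)[0, 1]
    print("stature ~ %.1f + %.2f * foot length (r = %.3f)" % (a, b, r))
    print("predicted stature for 25.0 cm foot: %.1f cm" % (a + b * 25.0))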

A Data Hiding Model with High Security Features Combining Finite State Machines and the PMM Method

Recent years have witnessed the rapid development of the Internet and telecommunication techniques, and information security is becoming more and more important. Applications such as covert communication and copyright protection stimulate research on information hiding techniques. Traditionally, encryption is used to secure communication; however, the important information is no longer protected once it is decrypted. Steganography is the art and science of communicating in a way that hides the very existence of the communication: the important information is first hidden in host data, such as a digital image, video or audio, and then transmitted secretly to the receiver. In this paper, a data hiding model with high security features, combining cryptography based on a finite-state sequential machine with an image-based steganography technique, is proposed for communicating information more securely between two locations. The authors incorporate the idea of a secret key for authentication at both ends in order to achieve a high level of security. Before the embedding operation, the secret information is encrypted with the help of a finite-state sequential machine and segmented into parts. The cover image is also segmented into different objects through normalized cuts. Each part of the encoded secret information is embedded with the help of a novel image steganographic method (PMM) in a different cut of the cover image to form a stego object. Finally, the stego image is formed by combining the stego objects and is transmitted to the receiver. At the receiving end, the corresponding inverse processes are run to recover the original secret message.
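Only the encryption stage lends itself to a compact illustration. The sketch below uses a toy Mealy-style machine whose state transitions are driven by the key and whose output symbols form a keystream XORed with the message; the transition rule is an invented assumption, and the paper's actual machine, the normalized-cut segmentation, and the PMM embedding are not reproduced.

    # Hedged sketch: toy finite-state-machine keystream cipher (encryption stage only).
    def fsm_keystream(key: bytes, n: int):
        state = sum(key) % 251                          # toy initial state
        for i in range(n):
            state = (state * 33 + key[i % len(key)]) % 251   # key-driven transition
            yield state & 0xFF                          # output symbol

    def fsm_xor(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ k for b, k in zip(data, fsm_keystream(key, len(data))))

    secret = b"meet at dawn"
    enc = fsm_xor(secret, b"k3y")
    assert fsm_xor(enc, b"k3y") == secret               # same machine decrypts
    print(enc.hex())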

Technological Deep Assessment of Automotive Parts Manufacturers: The Case of Iranian Manufacturers

In order to develop any strategy, it is essential first to identify opportunities, threats, and weak and strong points. Assessment of the technology level makes it possible to concentrate on weak and strong points. The results of technology assessment have a direct effect on decision making in the field of technology transfer or the expansion of internal research capabilities, so it plays a critical role in technology management. This paper presents a conceptual model to analyze the technological capability of a company as a whole and in four main aspects of technology. The model was tested on 10 automotive parts manufacturers in Iran. Using this model, the capability level of the manufacturers was investigated in four fields: managerial aspects, hard aspects, human aspects, and information and knowledge aspects. Results show that these firms concentrate on the hard aspect of technology while the other aspects are poor and need more support. This industry should therefore develop the other aspects of technology as well as the hard aspect in order to make effective and efficient use of its technology. These findings are useful for technology planning and management in automotive parts manufacturers in Iran and in other industries that are technology followers and transfer the technologies they need.

Optimal Capacitor Placement in a Radial Distribution System using Plant Growth Simulation Algorithm

This paper presents a new and efficient approach for capacitor placement in radial distribution systems that determines the optimal locations and sizes of capacitors with the objective of improving the voltage profile and reducing power loss. The solution methodology has two parts: in part one, loss sensitivity factors are used to select candidate locations for capacitor placement, and in part two, a new algorithm employing the Plant Growth Simulation Algorithm (PGSA) is used to estimate the optimal sizes of the capacitors at the optimal buses determined in part one. The main advantage of the proposed method is that it does not require any external control parameters. Another advantage is that it handles the objective function and the constraints separately, avoiding the difficulty of determining barrier factors. The proposed method is applied to 9-, 34-, and 85-bus radial distribution systems, and the solutions obtained are compared with those of other methods. The proposed method outperforms the other methods in terms of solution quality.
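Part one of the methodology can be illustrated compactly. The sketch below ranks candidate buses with the loss sensitivity factor commonly used in this literature, dPloss/dQ = 2*Qeff*R/V^2, on invented branch data; the PGSA sizing step of part two is not reproduced.

    # Hedged sketch: ranking candidate buses by loss sensitivity factor.
    import numpy as np

    R = np.array([0.10, 0.15, 0.08, 0.12])       # branch resistance (p.u.)
    Q_eff = np.array([0.30, 0.45, 0.20, 0.50])   # reactive power through branch (p.u.)
    V = np.array([0.98, 0.95, 0.97, 0.93])       # receiving-end bus voltage (p.u.)

    lsf = 2.0 * Q_eff * R / V**2
    ranking = np.argsort(-lsf)                    # most sensitive buses first
    print("candidate bus order:", ranking, "LSF:", np.round(lsf[ranking], 4))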