E-Business Security: Methodological Considerations

A great deal of research in the field of information systems security has been based on a positivist paradigm. Applying the reductionist concept of the positivist paradigm to information security means missing the bigger picture; this lack of holism could be one of the reasons why security is still overlooked, comes as an afterthought, or is perceived from a purely technical dimension. We need to reshape our thinking and attitudes towards security, especially in a complex and dynamic environment such as e-Business, to develop a holistic understanding of e-Business security in relation to its context, as well as considering all the stakeholders in the problem area. In this paper we argue for the suitability of, and the need for, a more inductive, interpretive approach and qualitative research methods to investigate e-Business security. Our discussion is based on a holistic framework of enquiry, the nature of the research problem, the underlying theoretical lens and the complexity of the e-Business environment. At the end we present a research strategy for developing a holistic framework for the understanding of e-Business security problems in the context of developing countries, based on an interdisciplinary inquiry which considers their needs and requirements.

Micro Environmental Concrete

Reactive powder concretes (RPC) are characterized by particle diameters not exceeding 600 μm and by very high compressive and tensile strengths. This paper describes a new generation of micro concrete which has high initial as well as final physico-mechanical performance. To achieve this, we replaced 15% of the Portland cement by weight with materials rich in silica (slag and dune sand). The results obtained from tests carried out on the RPC show that compressive and tensile strengths increase when these additions are incorporated, improving the compactness of the mixtures through filler and pozzolanic effects. With a reduced aggregate phase in the RPC and the abundance of dune sand (southern Algeria) and slag (an industrial by-product of blast furnaces), the use of RPC will allow Algeria to fulfil economic as well as ecological requirements.

Meta-requirements that Model Change

One of the common problems encountered in software engineering is addressing and responding to the changing nature of requirements. Several approaches have been devised to address this issue, ranging from instilling resistance to changing requirements in order to mitigate the impact on project schedules, to developing an agile mindset towards requirements. The approach discussed in this paper is to conceptualize the delta in a requirement and model it, in order to plan a response to it. To provide some context, change is first formally identified and categorized as either formal or informal change. While agile methodology facilitates informal change, the approach discussed in this paper seeks to develop the idea of facilitating formal change. Collecting and documenting meta-requirements that represent the phenomenon of change would be a proactive measure towards building a realistic cognition of the requirements entity, which can further be harnessed in the software engineering process.
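
As a minimal illustration of this idea (the names and fields below are hypothetical, not taken from the paper), a change delta can itself be captured as a first-class record:

from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class ChangeKind(Enum):
    FORMAL = "formal"      # change arriving through a controlled channel
    INFORMAL = "informal"  # change surfacing through day-to-day collaboration

@dataclass
class RequirementDelta:
    req_id: str            # identifier of the affected requirement
    kind: ChangeKind       # formal vs. informal, as categorized above
    before: str            # requirement text prior to the change
    after: str             # requirement text after the change
    rationale: str         # why the change occurred
    logged_on: date = field(default_factory=date.today)

# Usage: record a formal change so the change phenomenon itself is documented.
delta = RequirementDelta(
    req_id="REQ-42",
    kind=ChangeKind.FORMAL,
    before="The report is generated nightly.",
    after="The report is generated hourly.",
    rationale="Customer needs fresher data for intraday decisions.",
)
print(delta.kind.value, delta.req_id)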

Knowledge Based Model for Power Transformer Life Cycle Management Using Knowledge Engineering

Under investment budget limitations, a utility company is required to maximize the utilization of its existing assets over their life cycle while satisfying both engineering and financial requirements. However, utilities often lack knowledge about the status of each asset in the portfolio, in terms of both technical and financial value. This paper presents a knowledge-based model that helps utility companies make optimal decisions on the utilization of power transformers. CommonKADS, a structured methodology for knowledge and expertise representation, is used to design and develop the knowledge-based model. A case study of a one-MVA power transformer of the Nepal Electricity Authority is presented. The results show that reusable knowledge can be categorized, modeled and utilized within the utility company using the proposed methodology. Moreover, the results indicate that the utility company can achieve both engineering and financial benefits from its utilization.
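
As a rough illustration of how such expert knowledge might be operationalized (a hedged sketch, not the paper's CommonKADS model; the criteria, weights and thresholds are assumptions), a transformer's condition can be aggregated into a health index that drives a life-cycle decision:

# Illustrative health-index rule; criteria, weights, thresholds are assumed.
WEIGHTS = {
    "dissolved_gas": 0.35,   # DGA result, scored 0 (bad) .. 1 (good)
    "oil_quality": 0.25,
    "load_history": 0.20,
    "age_factor": 0.20,
}

def health_index(scores: dict) -> float:
    """Weighted aggregate of per-criterion condition scores in [0, 1]."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def decision(hi: float) -> str:
    """Map the health index to a life-cycle action (thresholds are assumed)."""
    if hi >= 0.75:
        return "continue normal operation"
    if hi >= 0.50:
        return "increase monitoring / schedule maintenance"
    return "plan refurbishment or replacement"

hi = health_index({"dissolved_gas": 0.6, "oil_quality": 0.7,
                   "load_history": 0.8, "age_factor": 0.4})
print(round(hi, 2), "->", decision(hi))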

Mechanical Design and Theoretical Analysis of a Four Fingered Prosthetic Hand Incorporating Embedded SMA Bundle Actuators

The psychological and physical trauma associated with the loss of a human limb can severely impact the quality of life of an amputee, rendering even the most basic of tasks very difficult. A prosthetic device can be of great benefit to the amputee in the performance of everyday human tasks. This paper outlines a proposed mechanical design of a 12 degree-of-freedom SMA-actuated artificial hand. It is proposed that the SMA wires be embedded intrinsically within the hand structure, allowing significant flexibility for use either as a prosthetic hand solution or as part of a complete lower-arm prosthetic solution. A modular approach is taken in the design, facilitating ease of manufacture and assembly; more importantly, it also allows the end user to easily replace SMA wires in the event of failure. A biomimetic approach has been taken during the design process, meaning that the artificial hand should replicate the human hand as far as possible with due regard to functional requirements. The proposed design has been exposed to appropriate loading through the use of finite element analysis (FEA) to ensure that it is structurally sound. Theoretical analysis of the mechanical framework was also carried out to establish the limits of the angular displacement and velocity of the fingertip, as well as fingertip force generation. A combination of various polymers and titanium, which are suitably lightweight, is proposed for the manufacture of the design.
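
For context on the fingertip kinematics and force analysis, the standard robotic-finger relations (a generic formulation, not the paper's specific derivation) link joint rates and joint torques to fingertip velocity and force through the Jacobian:

\[
\dot{x} = J(\theta)\,\dot{\theta}, \qquad \tau = J^{\top}(\theta)\,F,
\]

where, for a planar two-link finger with link lengths $l_1$ and $l_2$,

\[
J(\theta) =
\begin{pmatrix}
-l_1 \sin\theta_1 - l_2 \sin(\theta_1+\theta_2) & -l_2 \sin(\theta_1+\theta_2)\\
l_1 \cos\theta_1 + l_2 \cos(\theta_1+\theta_2) & l_2 \cos(\theta_1+\theta_2)
\end{pmatrix},
\]

so the attainable fingertip velocity and force are bounded by the joint-rate and torque limits that the SMA bundle actuators can deliver.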

Wireless Sensor Network: Characteristics and Architectures

A wireless sensor network (WSN), an emerging technology for procuring and processing information, consists of autonomous nodes with versatile devices underpinned by applications. Nodes are equipped with different capabilities, such as sensing, computing, actuation and wireless communication, based on application requirements. WSN applications range from military deployment on the battlefield and environmental monitoring to the health sector and emergency response and surveillance. The nodes are deployed independently to cooperatively monitor physical and environmental conditions. The architecture of a WSN differs based on the application requirements, with a focus on low cost, flexibility, fault tolerance, ease of deployment and energy conservation. In this paper we present the characteristics, architectural design objectives and architecture of WSNs.
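
A minimal sketch of the sense-process-transmit duty cycle that a low-power node architecture typically implements (generic, not from the paper; the threshold and interval are assumed):

import random
import time

THRESHOLD = 30.0   # report only readings above this value (assumed policy)
SLEEP_S = 0.1      # duty-cycle sleep to conserve energy (assumed interval)

def sense() -> float:
    """Stand-in for an ADC read from a physical sensor."""
    return random.uniform(20.0, 40.0)

def transmit(reading: float) -> None:
    """Stand-in for a radio send to the sink / cluster head."""
    print(f"tx -> sink: {reading:.1f}")

for _ in range(5):            # a real node would loop indefinitely
    value = sense()           # sensing
    if value > THRESHOLD:     # local computation filters radio traffic
        transmit(value)       # wireless communication
    time.sleep(SLEEP_S)       # idling between cycles conserves energy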

Designing Virtual Laboratories Based on an Extended Event-Driven Simulation Method

There are many methods for the design and implementation of virtual laboratories, owing to their special features. The best-known architectural designs are event-based; this architectural model is efficient for virtual laboratories implemented on a local network. Later, service-oriented architecture (SOA) gave them remote-access capability, and peer-to-peer architecture was employed to exchange data with higher quality and speed. Other methods, such as agent-based architecture, attempt to solve the problems of distributed processing in a complicated laboratory system. This study first reviews the general principles of designing a virtual laboratory, and then compares the different methods based on EDA, SOA and agent-based architecture to present the weaknesses and strengths of each. At the end, we make the best design choice based on existing conditions and requirements.
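
As a minimal sketch of the event-driven (EDA) core that such designs build on, the following models a time-ordered event queue dispatched to handlers; the instrument name and handler are illustrative, not from the study:

import heapq

events = []  # priority queue of (time, event_name, payload)

def schedule(t, name, payload=None):
    heapq.heappush(events, (t, name, payload))

def run(handlers):
    while events:
        t, name, payload = heapq.heappop(events)  # earliest event first
        handlers[name](t, payload)                # dispatch to its handler

def on_measure(t, payload):
    print(f"[t={t:.1f}] measurement on {payload}")
    if t < 2.0:
        schedule(t + 1.0, "measure", payload)     # re-arm the instrument

schedule(0.0, "measure", "oscilloscope")
run({"measure": on_measure})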

A Fair Non-transfer Exchange Protocol

Exchange over networks is now widely used. However, it still cannot avoid the problems inherent in such exchange. For example, a buyer may not receive the ordered goods even after making the payment; conversely, the seller may receive nothing even after the merchandise is sent. Some studies on fair exchange have proposed protocols designed for efficiency and have exploited signature properties to certify that the two parties agree on the exchange; however, information about the purchased item and its price is disclosed in this way. This paper proposes a new fair network payment protocol with an off-line trusted third party. The proposed protocol protects the buyer's purchase message from being traced. In addition, the proposed protocol meets the stated requirements. Its most significant feature is the non-transfer property that we achieve.
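
To illustrate the general shape of such protocols (a generic optimistic fair-exchange skeleton with an off-line TTP, not the paper's actual protocol; all cryptography is mocked with placeholder strings):

def buyer_send_payment_commitment():
    # Buyer commits to payment without revealing item/price to outsiders.
    return {"commitment": "enc(payment_token)"}

def seller_send_goods(commitment):
    # Seller releases the goods only after receiving a valid commitment.
    return {"goods": "merchandise"} if commitment else None

def buyer_open_payment(goods):
    # Buyer opens the payment once the goods arrive.
    return {"payment": "payment_token"} if goods else None

def ttp_resolve(commitment, have_goods):
    # Off-line TTP is contacted only on dispute: it either completes the
    # exchange from the commitment or aborts it, so neither party gains.
    return "complete exchange" if (commitment and have_goods) else "abort"

c = buyer_send_payment_commitment()
g = seller_send_goods(c)
p = buyer_open_payment(g)
print("normal run completed:", bool(p))
print("dispute run:", ttp_resolve(c, have_goods=False))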

A Variable Structure MRAC for a Class of MIMO Systems

A variable structure model reference adaptive controller (VS-MRAC) using state variables is proposed for a class of multi-input multi-output systems. The adaptation law is of the variable structure type, and the switching functions are designed based on stability requirements. Global exponential stability is proved using the Lyapunov criterion. The transient behavior is analyzed using sliding mode control and shows perfect model following in finite time.
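
For orientation, a generic variable-structure MRAC of this type (a textbook-style sketch, not the paper's exact law) drives the state tracking error $e = x - x_m$ through a switching control:

\[
\dot{e} = A_m e + B\,(u - u^{*}), \qquad
u = -\,\bar{k}(x, r)\,\operatorname{sgn}\!\big(B^{\top} P e\big),
\]

where $x_m$ is the reference-model state, $u^{*}$ the ideal (matching) control, $P = P^{\top} > 0$ solves $A_m^{\top} P + P A_m = -Q$, and the gain $\bar{k}$ upper-bounds $u^{*}$ so that the Lyapunov function $V = e^{\top} P e$ decreases along trajectories.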

Environmental Efficiency of Electric Power Industry of the United States: A Data Envelopment Analysis Approach

The importance of the environmental efficiency of the electric power industry stems from the high demand for energy combined with global warming concerns. It is especially essential for the world's largest economies, such as that of the United States. This paper introduces a Data Envelopment Analysis (DEA) model of environmental efficiency using indicators of fossil fuel utilization, emission rates, and electric power losses. Using DEA is advantageous in this situation over other approaches due to its nonparametric nature. The paper analyzes data for the period 1990-2006 by comparing actual yearly levels in each dimension with the best values of the partial indicators for the period. Positive factors of efficiency include the declining trend in emission rates starting in 2000 and in electric power losses starting in 2004, together with the increasing trend in fuel utilization starting in 1999. As a result, the dynamics of environmental efficiency have been positive since 2002. The main concern is the decline in fossil fuel utilization in 2006; this negative change should be reversed to comply with ecological and economic requirements.
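
The abstract does not reproduce the model itself; for reference, the standard input-oriented CCR envelopment form on which such DEA studies are commonly built reads (with each year treated as a decision-making unit):

\[
\min_{\theta,\,\lambda}\ \theta
\quad\text{s.t.}\quad
\sum_{j} \lambda_j x_{ij} \le \theta\, x_{io}\ \ \forall i, \qquad
\sum_{j} \lambda_j y_{rj} \ge y_{ro}\ \ \forall r, \qquad
\lambda_j \ge 0,
\]

where one plausible assignment consistent with the indicators listed above (an assumption, not the paper's stated specification) takes emission rates and power losses as inputs $x$, fuel utilization as the output $y$, and $\theta \le 1$ as the efficiency score of the year under evaluation.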

Limitations of the Analytic Hierarchy Process Technique with Respect to Geographically Distributed Stakeholders

The selection of appropriate requirements for product releases can make a big difference in a product's success. Requirements are selected using different requirements prioritization techniques, which are based on pre-defined and systematic steps for calculating the requirements' relative weights. Prioritization is complicated by new development settings as development shifts from traditional co-located development to geographically distributed development, with the stakeholders connected to a project distributed all over the world. This geographical distribution of stakeholders makes it hard to prioritize requirements, as each stakeholder has their own perceptions and expectations of the requirements in a software project. This paper discusses the limitations of the Analytic Hierarchy Process (AHP) with respect to geographically distributed stakeholders' (GDS) prioritization of requirements. It also provides a solution, in the form of a modified AHP, for prioritizing requirements for GDS. We conduct two experiments and analyze the results in order to discuss the limitations of AHP with respect to GDS. The modified AHP variant is also validated in this paper.
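
As background on the technique under discussion, the following sketch shows the core AHP computation: priority weights from a pairwise comparison matrix plus a consistency check. The 3-requirement matrix is made up for illustration.

import numpy as np

A = np.array([            # A[i, j] = how much requirement i beats j (Saaty 1-9)
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Principal-eigenvector method for the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio CR = CI / RI, with Saaty's random index RI for n = 3.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58                      # RI(3) = 0.58
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.1 acceptable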

Requirements Management in a Distributed Agile Environment

The importance of good requirements engineering is well documented. Agile practices, promoting collaboration and communication, facilitate the elicitation and management of volatile requirements. However, current Agile practices assume a well-defined environment with a co-located customer, and with distributed development it is not always possible to realize this co-location. In such an environment a suitable process, possibly supported by tools, is required to support changing requirements. This paper introduces the issues of concern when managing requirements in a distributed environment and describes work done at the Software Technology Research Centre as part of the NOMAD project.

Neural Network Evaluation of FRP Strengthened RC Buildings Subjected to Near-Fault Ground Motions having Fling Step

Recordings from recent earthquakes have provided evidence that ground motions in the near field of a rupturing fault differ from ordinary ground motions, as they can contain a large energy, or "directivity", pulse. This pulse can cause considerable damage during an earthquake, especially to structures with natural periods close to those of the pulse. Failures of modern engineered structures observed within the near-fault region in recent earthquakes have revealed the vulnerability of existing RC buildings to pulse-type ground motions. This may be because these modern structures were designed primarily using the design spectra of available standards, which were developed using stochastic processes with the relatively long durations that characterize more distant ground motions. Many recently designed and constructed buildings may therefore require strengthening in order to perform well when subjected to near-fault ground motions. Fiber reinforced polymers (FRP) are considered a viable alternative, due to their relatively easy and quick installation, low life-cycle costs and zero maintenance requirements. The objective of this paper is to investigate the adequacy of Artificial Neural Networks (ANN) for determining the three-dimensional dynamic response of FRP-strengthened RC buildings under near-fault ground motions. For this purpose, one ANN model is proposed to estimate the base shear force, base bending moments and roof displacement of buildings in two directions. A training set of 168 buildings and a validation set of 21 buildings are produced from FEA results for the dynamic response of RC buildings under near-fault earthquakes. It is demonstrated that the neural network based approach is highly successful in determining the response.
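
A minimal sketch of this kind of surrogate model follows; the network architecture and feature names are assumptions, and random numbers stand in for the paper's FEA-derived 168 training and 21 validation buildings:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Inputs: e.g. building period, FRP ratio, pulse period, PGV (assumed features).
X_train, X_val = rng.random((168, 4)), rng.random((21, 4))
# Outputs: base shears (2 dirs), base moments (2 dirs), roof displacement.
y_train = rng.random((168, 5))

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
model.fit(X_train, y_train)               # multi-output regression
print(model.predict(X_val).shape)         # (21, 5): one response vector each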

Improvement of Overall Equipment Effectiveness through Total Productive Maintenance

Frequent machine breakdowns, low plant availability and increased overtime are a great threat to a manufacturing plant, as they increase the operating costs of an industry. The main aim of this study was to improve Overall Equipment Effectiveness (OEE) at a manufacturing company through the implementation of innovative maintenance strategies. A case study approach was used. The paper focuses on improving maintenance in a manufacturing set-up using an innovative maintenance regime mix to improve overall equipment effectiveness. Interviews, reviews of documentation and historical records, and direct and participatory observation were used as data collection methods during the research. Production is usually measured by the total kilowatts of motors produced per day; the target at 91% availability is 75 kW a day. Reduced demand and a lack of raw materials, particularly imported items, are adversely affecting manufacturing operations. The company had to reset its target from the usual figure of 250 kW per day to a mere 75 kW per day due to the lower availability of machines as a result of breakdowns, as well as the lack of raw materials. Price reductions and uncertainties, as well as general machine breakdowns, further lowered production. Several recommendations were given. For instance, employee empowerment in the company would enhance the responsibility and authority needed to improve on and ultimately eliminate the six big losses. If the maintenance department is to realise its proper function in a progressive, innovative industrial society, its personnel must be continuously trained to meet current needs as well as future requirements. To make the maintenance planning system effective, it is essential to keep track of all corrective maintenance jobs and preventive maintenance inspections; for large processing plants these cannot be handled manually. It was therefore recommended that the company implement a Computerised Maintenance Management System (CMMS).
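
For reference, OEE is the product of three rates:

\[
\text{OEE} = \text{Availability} \times \text{Performance} \times \text{Quality},
\]

so, purely as an illustration using the 91% availability quoted above together with assumed performance and quality rates of 85% and 98% (the abstract does not give these), $0.91 \times 0.85 \times 0.98 \approx 0.76$, against the commonly cited world-class benchmark of about 0.85.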

An Ontology Abstract Machine

As more people from non-technical backgrounds become directly involved in large-scale ontology development, the focal point of ontology research has shifted from more theoretical ontology issues to problems associated with the actual use of ontologies in real-world, large-scale collaborative applications. Recently the National Science Foundation funded a large collaborative ontology development project for which a new formal ontology model, the Ontology Abstract Machine (OAM), was developed to satisfy some unique functional and data representation requirements. This paper introduces the OAM model and the related algorithms that enable maintenance of an ontology supporting node-based user access. The successful software implementation of the OAM model and its subsequent acceptance by a large research community demonstrate its validity and its real-world application value.
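
As a loose illustration of node-based user access (the class and policy below are hypothetical and are not the OAM's actual structures):

class Node:
    """An ontology node carrying its own list of permitted editors."""
    def __init__(self, name, writers=()):
        self.name = name
        self.children = []
        self.writers = set(writers)   # users allowed to edit this node

    def can_edit(self, user):
        return user in self.writers

    def add_child(self, child, user):
        if not self.can_edit(user):
            raise PermissionError(f"{user} cannot edit {self.name}")
        self.children.append(child)

root = Node("Thing", writers={"alice"})
root.add_child(Node("Process", writers={"alice", "bob"}), user="alice")
print([c.name for c in root.children])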

Approaches to Determining Optimal Asset Structure for a Commercial Bank

Every commercial bank optimises its asset portfolio depending on the profitability of assets and on chosen or imposed constraints. This paper proposes and applies a stylized model for optimising a bank's asset and liability structure, reflecting the profitability of different asset categories and their risks, as well as the costs associated with different liability categories and reserve requirements. The level of detail for asset and liability categories is chosen to create a suitably parsimonious model that includes the most important categories. It is shown that the most appropriate optimisation criterion for the model is the maximisation of the ratio of net interest income to assets. The maximisation of this ratio is subject to several constraints: some are accounting identities or dictated by legislative requirements, while others vary depending on the market objectives of a particular bank. The model predicts a variable amount of assets allocated to loan provision.
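
A stylized version of such a formulation (an illustrative sketch consistent with the abstract, not the paper's exact model) is:

\[
\max_{a,\,l}\ \frac{\sum_i r_i a_i - \sum_j c_j l_j}{\sum_i a_i}
\quad\text{s.t.}\quad
\sum_i a_i = \sum_j l_j + E, \qquad
a_{\text{res}} \ge \rho\, l_{\text{dep}}, \qquad a_i,\, l_j \ge 0,
\]

where $a_i$ are asset categories earning rates $r_i$, $l_j$ are liability categories costing $c_j$, $E$ is equity, and $\rho$ is the reserve requirement on deposits $l_{\text{dep}}$; the linear-fractional objective can be handled with the Charnes-Cooper transformation.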

IMDC: An Image-Mapped Data Clustering Technique for Large Datasets

In this paper, we present a new algorithm for clustering data in large datasets using image processing approaches. First, the dataset is mapped onto a binary image plane. The synthesized image is then processed using efficient image processing techniques to cluster the data in the dataset. Hence, the algorithm avoids an exhaustive search to identify clusters: it considers only a small subset of the data that contains the critical boundary information sufficient to identify the contained clusters. Compared to available data clustering techniques, the proposed algorithm produces results of similar quality and outperforms them in execution time and storage requirements.
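
A minimal sketch of the image-mapped idea (the grid size and synthetic data are illustrative, not from the paper): quantize 2-D points onto a binary grid, then let a connected-component pass from image processing recover the clusters.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
points = np.vstack([rng.normal(loc, 0.3, (50, 2))
                    for loc in ((1, 1), (4, 4))])   # two synthetic clusters

GRID = 16
lo, hi = points.min(0), points.max(0)
ij = ((points - lo) / (hi - lo + 1e-9) * (GRID - 1)).astype(int)

image = np.zeros((GRID, GRID), dtype=bool)
image[ij[:, 0], ij[:, 1]] = True                    # map data -> binary image

labels, n = ndimage.label(image, structure=np.ones((3, 3)))  # image pass
cluster_of_point = labels[ij[:, 0], ij[:, 1]]       # map labels back to data
print("clusters found:", n)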

An Efficient Multi-Join Algorithm Utilizing a Lattice of Double Indices

In this paper, a novel multi-join algorithm for joining multiple relations is introduced. The algorithm is based on a hash-based join of two relations that produces a double index. This is done by scanning the two relations once; instead of moving the records into buckets, a double index is built, which eliminates the collisions that can occur in a pure hashing algorithm. The double index is divided into join buckets of matching categories from the two relations, and the algorithm then joins buckets with similar keys to produce joined buckets. This ultimately leads to a complete join index of the two relations without actually joining them. The time complexity required to build the join index of two categories is O(m log m), where m is the size of each category, giving a total time complexity of O(n log m) over all buckets. The join index is used to materialize the joined relation if required; otherwise, it is used along with the join indices of other relations to build a lattice for multi-join operations with minimal I/O requirements. The lattice of join indices can be fitted into main memory to reduce the time complexity of the multi-join algorithm.
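
A minimal sketch of the double-index idea (the tiny relations R and S are illustrative): scan each relation once, bucket row ids by join key, then pair matching buckets into a join index instead of materializing the joined rows.

from collections import defaultdict

R = [(1, "a"), (2, "b"), (1, "c")]        # (join_key, payload)
S = [(1, "x"), (3, "y"), (2, "z")]

def index_by_key(rel):
    """One scan: bucket row ids (not the records themselves) by join key."""
    idx = defaultdict(list)
    for row_id, (key, _) in enumerate(rel):
        idx[key].append(row_id)
    return idx

ri, si = index_by_key(R), index_by_key(S)

# Join index: row-id pairs whose keys match, built bucket by bucket.
join_index = [(r, s) for key in sorted(ri.keys() & si.keys())
              for r in ri[key] for s in si[key]]
print(join_index)                          # [(0, 0), (2, 0), (1, 2)]

# Materialize the joined relation only if required.
joined = [R[r] + S[s] for r, s in join_index]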

Rotor Bearing System Analysis Using the Transfer Matrix Method with Thickness Assumption of Disk and Bearing

There are many different ways to find the natural frequencies of a rotating system. One of the most effective methods, used because of its precision, is the transfer matrix method. With this method the entire continuous system is subdivided and the corresponding differential equations can be stated in matrix form. To analyze the shaft considered in this paper, the rotor is divided into several elements along the shaft, each with its own mass and moment of inertia, which makes it possible to define the transfer matrix. Choosing a larger number of elements enlarges the matrix but yields more accurate answers. In this paper the dynamics of a rotor-bearing system are analyzed, considering the gyroscopic effect. To increase the accuracy of the model, the thicknesses of the disk and bearings are also taken into account, which leads to a more complicated matrix to solve; introducing these parameters changes the results considerably, and these differences are shown in the results. As stated above, defining the transfer matrix in order to obtain the natural frequencies of the system under study requires introducing such elements. For the boundary conditions of these elements, the bearings at the ends of the shaft are modeled as equivalent springs and dampers for the discretized system, while a continuous model is used for the shaft. With these considerations and using the transfer matrix, exact results are obtained from the calculations. The results show that increasing the thickness of the bearing decreases the amplitude of vibration, while the stiffness of the shaft and the natural frequencies of the system grow. Consequently, it is clear that ignoring the influence of the bearing and disk thicknesses would yield unrealistic answers.
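
In the standard form of the method (shown here generically, not the paper's specific matrices), the state vector $\mathbf{z}_i = (y,\ \theta,\ M,\ V)^{\top}$ of deflection, slope, bending moment and shear at station $i$ propagates through each element's transfer matrix:

\[
\mathbf{z}_{i+1} = \mathbf{T}_i(\omega)\,\mathbf{z}_i, \qquad
\mathbf{z}_{n} = \mathbf{T}_{n-1}(\omega)\cdots\mathbf{T}_1(\omega)\,\mathbf{T}_0(\omega)\,\mathbf{z}_0,
\]

and the natural frequencies $\omega$ are the roots of the determinant obtained by imposing the boundary conditions (here the equivalent spring-damper models of the bearings) on the overall product matrix.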

Hierarchical Clustering Analysis with SOM Networks

This work presents a neural network model for the clustering analysis of data based on Self-Organizing Maps (SOM). The model evolves during the training stage towards a hierarchical structure according to the input requirements. The hierarchical structure acts as a specialization tool that provides refinements of the classification process: the structure behaves like a single map with different resolutions depending on the region to be analyzed. The benefits and performance of the algorithm are discussed in an application to the Iris dataset, a classical example for pattern recognition.
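
For readers unfamiliar with the underlying mechanism, the following is a minimal flat-SOM training sketch (not the hierarchical variant proposed here), applied to the same Iris data; the map size and schedules are illustrative:

import numpy as np
from sklearn.datasets import load_iris

X = load_iris().data
rng = np.random.default_rng(0)
GRID, DIM = 5, X.shape[1]
W = rng.random((GRID, GRID, DIM))                  # 5x5 map of weight vectors
rows, cols = np.indices((GRID, GRID))

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                    # decaying learning rate
    sigma = 2.0 * (1 - epoch / 20) + 0.5           # shrinking neighbourhood
    for x in rng.permutation(X):
        d = np.linalg.norm(W - x, axis=2)          # distance to every unit
        bi, bj = np.unravel_index(d.argmin(), d.shape)  # best-matching unit
        grid_d2 = (rows - bi) ** 2 + (cols - bj) ** 2
        h = np.exp(-grid_d2 / (2 * sigma ** 2))    # Gaussian neighbourhood
        W += lr * h[..., None] * (x - W)           # pull units toward x

print(W.shape)  # trained codebook: one prototype per map unit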