Considerations of Public Key Infrastructure (PKI), Functioning as a Chain of Trust in Electronic Payments Systems

The growth of open networks has created strong interest in their commercial use. The establishment of an electronic business mechanism must be accompanied by a digital electronic payment system to transfer the value of transactions. Financial organizations are asked to offer a secure e-payment scheme with a level of security equivalent to that of conventional paper-based payment transactions. PKI, which functions as a chain of trust in a security architecture, can bring the security services of cryptography to e-payments, allowing organizations to benefit from a wider base of customers and trading partners and from the reduction in transaction costs achieved by using Internet channels. The paper addresses the possibilities for, and offers implementation suggestions on, applying PKI to electronic payments by proposing a framework that should be followed.

An Address-Oriented Transmit Mechanism for GALS NoC

Since Network-on-Chip (NoC) designs use network interfaces (NIs) to improve design productivity, a few papers have by now addressed the design and implementation of an NI module. However, none of them considered the difference in address-encoding methods between a NoC and the traditional shared-bus architecture. Based on this difference, this paper introduces a transmit mechanism that solves the problem for globally asynchronous, locally synchronous (GALS) NoCs. Furthermore, we give a concrete implementation of the NI module for this transmit mechanism. Finally, we evaluate its performance and area overhead with a VHDL-based cycle-accurate RTL model, and the simulation results confirm the validity of the address-oriented transmit mechanism.
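
The gap the abstract points to can be illustrated with a small sketch of how a network interface might split a flat bus-style address into a destination node and a local offset before packetisation. The 4x4 mesh, field widths and flit format below are assumptions made for illustration, not the paper's parameters.

```python
# Illustrative only: translating a flat "bus" address into NoC routing fields.
MESH_X, MESH_Y = 4, 4
LOCAL_BITS = 16                 # assumed per-node local address space (64 kB)

def bus_to_noc(address: int):
    """Split a flat bus address into (x, y) mesh coordinates and a local offset."""
    local = address & ((1 << LOCAL_BITS) - 1)
    node = address >> LOCAL_BITS
    x, y = node % MESH_X, node // MESH_X
    assert y < MESH_Y, "address outside the mesh"
    return (x, y), local

def make_header_flit(address: int, payload: int):
    """Build a minimal header flit: destination coordinates plus local offset."""
    (x, y), local = bus_to_noc(address)
    return {"dest": (x, y), "local_addr": local, "payload": payload}

print(make_header_flit(0x0005_1A2B, 0xDEAD))   # node 5 -> (1, 1), offset 0x1A2B
```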

Neural Network Implementation Using FPGA: Issues and Application

Hardware realization of a Neural Network (NN) depends, to a large extent, on the efficient implementation of a single neuron. FPGA-based reconfigurable computing architectures are suitable for hardware implementation of neural networks, but FPGA realization of ANNs with a large number of neurons is still a challenging task. This paper discusses the issues involved in implementing a multi-input neuron with linear/nonlinear excitation functions on an FPGA. An implementation method with a resource/speed tradeoff is proposed to handle signed decimal numbers. The VHDL code developed is tested on a Xilinx XCV50hq240 chip. To improve the speed of operation, a lookup table (LUT) method is used, and the problems involved in using an LUT for a nonlinear function are discussed. The percentage saving in resources and the improvement in speed with an LUT for a neuron are reported. An attempt is also made to derive a generalized formula for a multi-input neuron that allows approximate estimation of the total resource requirement and the speed achievable for a given multilayer neural network, helping the designer choose the FPGA capacity for a given application. Using the proposed implementation method, a neural-network-based application, namely a space vector modulator for a vector-controlled drive, is presented.
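
As a software sketch of the idea (not the authors' VHDL), the snippet below models a multi-input neuron that accumulates in signed fixed-point arithmetic and replaces the nonlinear excitation function with a lookup table. The Q-format, LUT size and input range are assumed values.

```python
import math

FRAC_BITS = 8                      # assumed Q-format: 8 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

# Precompute a sigmoid LUT over an assumed input range [-8, 8).
LUT_SIZE = 256
LUT_MIN, LUT_MAX = -8.0, 8.0
SIGMOID_LUT = [to_fixed(1.0 / (1.0 + math.exp(-(LUT_MIN + i * (LUT_MAX - LUT_MIN) / LUT_SIZE))))
               for i in range(LUT_SIZE)]

def sigmoid_lut(acc_fixed: int) -> int:
    """Map a fixed-point accumulator value to the nearest LUT entry."""
    x = acc_fixed / SCALE
    idx = int((x - LUT_MIN) * LUT_SIZE / (LUT_MAX - LUT_MIN))
    idx = max(0, min(LUT_SIZE - 1, idx))          # saturate out-of-range inputs
    return SIGMOID_LUT[idx]

def neuron(inputs_fixed, weights_fixed, bias_fixed):
    """Multiply-accumulate in fixed point, then LUT-based activation."""
    acc = bias_fixed
    for x, w in zip(inputs_fixed, weights_fixed):
        acc += (x * w) >> FRAC_BITS               # rescale after each product
    return sigmoid_lut(acc)

# Example: a 3-input neuron.
y = neuron([to_fixed(v) for v in (0.5, -1.0, 0.25)],
           [to_fixed(v) for v in (0.8, 0.3, -0.5)],
           to_fixed(0.1))
print(y / SCALE)
```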

Enhancement Throughput of Unplanned Wireless Mesh Networks Deployment Using Partitioning Hierarchical Cluster (PHC)

Wireless mesh networks (WMNs) based on IEEE 802.11 technology are a scalable and efficient solution for next-generation wireless networking, providing wide-area broadband Internet access to a significant number of users. These networks may be deployed by different authorities and without any planning, so they may partially or completely overlap in the same service area, and such unplanned deployment degrades their performance. The aim of this paper is to design a model that enhances the throughput of unplanned WMN deployments using partitioning hierarchical clustering (PHC). We use a throughput-optimization approach to model the unplanned WMN deployment problem on a PHC-based architecture, and we introduce bridge nodes that allow interworking traffic between the overlapping WMNs as a remedy for the performance degradation.
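
A minimal sketch of the bridge-node idea: nodes of two independently deployed WMNs that overlap in coverage are scanned for pairs within radio range, and the closest pair is designated as the inter-WMN bridge. Coordinates and radio range are invented for illustration and have no connection to the paper's evaluation.

```python
import math

RANGE = 30.0   # assumed radio range in metres

def find_bridge(wmn_a, wmn_b):
    """Pick the closest in-range pair (one node per WMN) to act as the bridge."""
    best, best_d = None, float("inf")
    for name_a, (xa, ya) in wmn_a.items():
        for name_b, (xb, yb) in wmn_b.items():
            d = math.hypot(xa - xb, ya - yb)
            if d <= RANGE and d < best_d:
                best, best_d = (name_a, name_b), d
    return best, best_d

wmn1 = {"a1": (0, 0), "a2": (25, 10)}
wmn2 = {"b1": (40, 12), "b2": (80, 60)}
print(find_bridge(wmn1, wmn2))   # ('a2', 'b1') at ~15.1 m
```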

NGN and WiMAX: Putting the Pieces Together

With the exponential rise in the number of multimedia applications available, the best-effort service provided by the Internet today is insufficient. Researchers have been working on new architectures like the Next Generation Network (NGN) which, by definition, will ensure Quality of Service (QoS) in an all-IP network [1]. For this approach to become a reality, reservation of bandwidth is required per application per user. WiMAX (Worldwide Interoperability for Microwave Access) is a wireless communication technology with predefined levels of QoS that can be provided to the user [4]. IPv6 has been created as the successor of IPv4 and resolves issues like the availability of IP addresses and QoS. This paper provides a design to use the power of WiMAX as an NSP (Network Service Provider) for NGN using IPv6. The use of the Traffic Class (TC) and Flow Label (FL) fields of IPv6 for making QoS requests and grants is explained [6], [7]. Using these fields, the processing time is reduced and routing is simplified. We also define the functioning of the ASN gateway and the NGN gateway (NGNG), which are edge-node interfaces in the NGN-WiMAX design. These gateways ensure QoS management through built-in functions and through certain physical resources and networking capabilities.
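
To make the TC/FL mechanism concrete, the sketch below packs the first 32-bit word of an IPv6 header, in which the Version, Traffic Class and Flow Label fields live. The field layout follows RFC 2460; the particular TC and FL values are arbitrary examples, not the marking scheme defined in the paper.

```python
import struct

def ipv6_first_word(version: int, traffic_class: int, flow_label: int) -> bytes:
    """Pack Version (4 bits), Traffic Class (8 bits), Flow Label (20 bits)."""
    assert 0 <= traffic_class < 1 << 8 and 0 <= flow_label < 1 << 20
    word = (version << 28) | (traffic_class << 20) | flow_label
    return struct.pack("!I", word)

# Example: mark a flow with an assumed QoS class 0x2E and flow label 0x12345.
header_word = ipv6_first_word(6, 0x2E, 0x12345)
print(header_word.hex())   # 62e12345
```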

A Novel VLSI Architecture of Hybrid Image Compression Model based on Reversible Blockade Transform

Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction in image quality. The discrete cosine transform has emerged as the state-of-the-art standard for image compression. In this paper, a hybrid image compression technique based on reversible blockade transform coding is proposed. The technique, applied over regions of interest (ROIs), is based on the selection of coefficients belonging to different transforms. This method allows: (1) codification with multiple kernels at various degrees of interest, (2) arbitrarily shaped spectra, and (3) flexible adjustment of the compression quality of the image and the background. No modification of the standard JPEG2000 decoder is required. The method was applied to different types of images, and the results show better performance for the selected regions compared with applying the image coding methods to the whole set of images. We believe that this method is an excellent tool for future image compression research, mainly on images where region-based coding is of interest, such as medical imaging modalities and several multimedia applications. Finally, a VLSI implementation of the proposed method is shown, and it is also shown that the kernel combining the Hartley and cosine transforms gives better performance than the other models considered.
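
For reference, the transform step that such region-based coders build on can be written out directly. The sketch below is a naive 2-D DCT-II of a single 8x8 region of interest in plain Python; it says nothing about the paper's kernel-selection logic or the reversible blockade transform itself.

```python
import math

N = 8

def dct_2d(block):
    """Naive 2-D DCT-II of an NxN block (list of lists of pixel values)."""
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

# Example: a flat 8x8 block concentrates all energy in the DC coefficient.
coeffs = dct_2d([[128] * N for _ in range(N)])
print(round(coeffs[0][0], 2))   # ~1024.0; all other coefficients ~0
```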

Multi-Agent Model for Automation of Business Process Management System Based on Service Oriented Architecture

Business process automation is an important task in enterprise business software development. The requirements for processing acceleration and the level of automation differ inherently from one organization to another. We present a methodology and system for automating a business process management system (BPMS) architecture through multi-agent collaboration based on SOA. Design-layer processes are modeled in a semantic markup language for web services. At the core of our system is the identification of certain types of human tasks for further automation across multiple platform environments. An improved abnormality-processing model for the automation of a BPMS architecture by multi-agent collaboration based on SOA is introduced. To validate the efficiency of the process automation, an application for an educational knowledge base instance is also described.

A Mapping Approach of Code Generation for Arinc653-Based Avionics Software

Avionics software architecture has transitioned from a federated architecture to integrated modular avionics (IMA). ARINC 653 (Avionics Application Standard Software Interface) is a software specification for space and time partitioning in safety-critical avionics real-time operating systems. Methods to transform abstract avionics application logic into an executable model have been proposed, but with little consideration of the code-generation input and output models specific to the ARINC 653 platform or of the inner-task synchronous dynamic interaction order. In this paper, we propose an AADL-based model-driven design methodology for automatically generating a Cµ executable model on the ARINC 653 platform from an ARINC 653 architecture defined as AADL653, in order to facilitate the development of avionics software built on an ARINC 653 OS. The paper presents the mapping rules between AADL653 elements and Cµ language elements, defines the code-generation rules, and designs an automatic Cµ code generator. We then use a case study to illustrate the approach. Finally, we discuss related work and future research directions.
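
The shape of such a mapping step can be sketched as a table-driven generator: each architecture element is looked up in a mapping table and a code skeleton is emitted. The element categories, target constructs and emitted calls below are placeholders; the paper's actual AADL653-to-Cµ rules and the ARINC 653 APEX services are not reproduced here.

```python
AADL_TO_TARGET = {                     # assumed mapping table, for illustration only
    "process":  "partition",
    "thread":   "periodic_process",
    "data":     "shared_buffer",
}

def generate_skeleton(elements):
    """elements: list of (aadl_category, name, properties) tuples."""
    lines = []
    for category, name, props in elements:
        target = AADL_TO_TARGET.get(category, "unsupported")
        period = props.get("period_ms", "n/a")
        lines.append(f"/* {category} '{name}' -> {target}, period = {period} ms */")
        lines.append(f"create_{target}(\"{name}\");")
    return "\n".join(lines)

print(generate_skeleton([
    ("process", "FlightMgmt", {}),
    ("thread",  "NavUpdate", {"period_ms": 50}),
]))
```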

Choice of Efficient Information System with Service-Oriented Architecture using Multiple Criteria Threshold Algorithms (With Practical Example)

The author presents the results of a study conducted to identify criteria for an efficient information system (IS) realized with a service-oriented architecture (SOA), and proposes a ranking method to evaluate SOA information systems against a set of architecture quality criteria before the systems are implemented. The method is used to compare 7 SOA projects, and the resulting ranking of the projects' SOA efficiency is provided. The choice of an SOA realization project depends on the following criteria categories: IS internal work and organization; SOA policies, guidelines and change management; process and business service readiness; and risk management and mitigation. The last criteria category was analyzed on the basis of project statistics.
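
A hedged sketch of what a multiple-criteria threshold ranking step can look like: each project is scored against weighted criteria and projects below a threshold are filtered out before ranking. The criteria names, weights and threshold below are illustrative and are not taken from the study.

```python
CRITERIA_WEIGHTS = {
    "internal_organization": 0.3,
    "policies_and_change_management": 0.25,
    "business_service_readiness": 0.25,
    "risk_management": 0.2,
}
THRESHOLD = 0.6   # assumed minimal acceptable aggregate score

def rank_projects(projects):
    """projects: dict of name -> dict of criterion -> score in [0, 1]."""
    scored = {
        name: sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())
        for name, scores in projects.items()
    }
    accepted = {n: v for n, v in scored.items() if v >= THRESHOLD}
    return sorted(accepted.items(), key=lambda kv: kv[1], reverse=True)

example = {
    "Project A": {"internal_organization": 0.9, "policies_and_change_management": 0.7,
                  "business_service_readiness": 0.8, "risk_management": 0.6},
    "Project B": {"internal_organization": 0.5, "policies_and_change_management": 0.4,
                  "business_service_readiness": 0.6, "risk_management": 0.5},
}
print(rank_projects(example))   # Project B falls below the threshold
```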

Neural Networks: From Black Box towards Transparent Box Application to Evapotranspiration Modeling

Neural networks are well known for their ability to model nonlinear functions, but, as statistical methods usually do, they take a nonparametric approach; consequently, neither a priori nor a posteriori knowledge is easy to take into account. To deal with this problem, an original way to encode knowledge inside the network architecture is proposed. The method is applied to the problem of evapotranspiration in a karstic aquifer, a problem of great practical importance for water resource management.

Mimicking Morphogenesis for Robust Behaviour of Cellular Architectures

Morphogenesis is the process that underpins the self-organised development and regeneration of biological systems. The ability to mimic morphogenesis in artificial systems has great potential for many engineering applications, including the production of biological tissue, the design of robust electronic systems and the coordination of parallel computing. Previous attempts to mimic these complex dynamics within artificial systems have relied upon evolutionary algorithms, which has limited their size and complexity. This paper presents some insight into the underlying dynamics of morphogenesis and then shows how to design, without the assistance of evolutionary algorithms, cellular architectures that converge to complex patterns.
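
As a toy illustration of the kind of self-organised patterning involved, the sketch below implements a morphogen-gradient ("French flag") model: cells exchange a diffusible signal, the field converges to a steady gradient, and each cell picks a fate from concentration thresholds. It is purely illustrative and is not the cellular architecture designed in the paper; all parameters are assumptions.

```python
def develop(n_cells=30, steps=2000, diff=0.2, decay=0.01, source=1.0):
    m = [0.0] * n_cells
    for _ in range(steps):
        m[0] = source                               # fixed morphogen source at one end
        nxt = m[:]
        for i in range(1, n_cells - 1):
            lap = m[i - 1] - 2 * m[i] + m[i + 1]    # discrete diffusion
            nxt[i] = m[i] + diff * lap - decay * m[i]
        nxt[-1] = nxt[-2]                           # no-flux boundary
        m = nxt
    # Cell fate from assumed concentration thresholds.
    return "".join("B" if c > 0.6 else "W" if c > 0.3 else "R" for c in m)

print(develop())   # e.g. BBBWWW followed by R cells: a stable banded pattern
```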

Agent Decision using Granular Computing in Traffic System

In recent years, multi-agent systems have emerged as one of the most interesting architectures facilitating distributed collaboration and distributed problem solving. Each node (agent) of the network might pursue its own agenda, exploit its environment, develop its own problem-solving strategy and establish the required communication strategies. Within each node of the network, one could encounter a diversity of problem-solving approaches. Quite commonly, the agents realize their processing at the level of information granules that is most suitable from their local points of view. Information granules can come at various levels of granularity. Each agent could exploit a certain formalism of information granulation, engaging the machinery of fuzzy sets, interval analysis or rough sets, to name a few dominant technologies of granular computing. With this in mind, a fundamental issue arises of forming effective interaction linkages between the agents so that they fully broadcast their findings and benefit from interacting with others.

An Assessment of Technological Competencies on Professional Service Firms Business Performance

This study was initiated with a three-pronged objective: first, to identify the relationship between technological competency factors (technical capability, firm innovativeness and e-business practices) and professional service firms' business performance; second, to investigate the predictors of professional service firms' business performance; and finally, to evaluate the predictors of business performance according to the type of professional service firm. A survey questionnaire was deployed to collect empirical data; it was distributed to the owners of small and medium-sized professional service enterprises in the accounting, legal, engineering and architecture sectors. The analysis showed that all three technological competency factors have a moderate effect on business performance. In addition, the regression models indicate that technical capability is the most influential determinant of business performance, followed by e-business practices and firm innovativeness. Accordingly, the main predictor of business performance for all types of firms is technical capability.

QoS Expectations in IP Networks: A Practical View

Traditionally, the Internet has provided best-effort service to every user regardless of their requirements. However, as the Internet becomes universally available, users demand more bandwidth, applications require more and more resources, and interest has developed in having the Internet provide some degree of Quality of Service (QoS). Although QoS is an important issue, the question of how it will be brought into the Internet has not yet been solved. Owing to the rapid advances in technology, researchers are proposing new and more desirable capabilities for the next generation of IP infrastructures. But neither do all applications demand the same amount of resources, nor are all users service providers. In this light, this paper is the first of a series that presents an architecture as a first step towards the optimization of QoS in the Internet environment, as a solution for an SMSE whose objective is to provide public Internet access with certain QoS expectations. The service provides new business opportunities, but also presents new challenges. We have designed and implemented a scalable service framework that supports adaptive bandwidth based on user demands, and billing based on usage and on QoS. The developed application has been evaluated, and the results show that traffic limiting works optimally, as does the distribution of excess bandwidth. Research is currently under way in two further areas: (i) developing and testing new transfer protocols, and (ii) developing new strategies for traffic improvement based on service differentiation.
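
The per-user traffic limiting such a framework relies on is commonly built from a token bucket: packets are admitted only while the contracted rate and burst allowance are respected. The sketch below shows the mechanism in general terms; the rate, burst size and packet length are made-up values, not measurements from the paper.

```python
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0          # refill rate in bytes/second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False                         # exceeds contracted rate: drop or delay

# Example: a user contracted at 1 Mbit/s with a 10 kB burst allowance.
bucket = TokenBucket(rate_bps=1_000_000, burst_bytes=10_000)
print(bucket.allow(1500))   # True while the burst allowance lasts
```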

STLF Based on Optimized Neural Network Using PSO

The quality of short-term load forecasting (STLF) can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short-term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for the optimal design and training of an artificial neural network; one often has to resort to trial and error. This paper describes the process of developing three-layer feed-forward large neural networks for short-term load forecasting and then presents a heuristic search algorithm for an important task in this process, namely optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimal large neural network structure and connection weights for the one-day-ahead electric load forecasting problem. PSO is a random optimization method based on swarm intelligence with a powerful global optimization ability. Employing PSO algorithms in the design and training of ANNs allows the ANN architecture and parameters to be optimized easily. The proposed method is applied to STLF for a local utility. Data are clustered according to the differences in their characteristics, and special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends and special days. The experimental results show that the proposed PSO-optimized method can quicken the learning speed of the network and improve the forecasting precision compared with the conventional back-propagation (BP) method. Moreover, it is not only simple to compute but also practical and effective. It provides a greater degree of accuracy in many cases and consistently gives lower percentage errors for the STLF problem than the BP method. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
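
For readers unfamiliar with PSO, the compact optimizer below shows the update rule the paper builds on. Here it minimizes a toy sphere function standing in for the validation error of a candidate network; the swarm size, inertia and acceleration coefficients are conventional defaults, not the values used in the paper.

```python
import random

def pso(objective, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: in the paper this would be the forecasting error of the
# network encoded by the particle's structure and weights.
best, err = pso(lambda x: sum(v * v for v in x), dim=5)
print(round(err, 6))
```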

Design of Low-Area HEVC Core Transform Architecture

This paper proposes and implements a core transform architecture for one of the major processes in the HEVC video compression standard. The proposed core transform architecture is implemented with only adders and shifters instead of area-consuming multipliers. The shifters in the proposed architecture are implemented with wires and multiplexers, which significantly reduces chip area. The architecture can also process blocks from 4×4 to 16×16 with common hardware by reusing processing elements. Implemented in a 0.13 µm technology, the designed core transform architecture can process a 16×16 block with a 2-D transform in 130 cycles, and its gate count is 101,015 gates.
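
The multiplier-less idea can be illustrated on the 4-point HEVC core transform, whose constants (64, 83, 36) are realised as shift-and-add networks. The decompositions below are one possible choice used for illustration, not necessarily the ones chosen in the proposed architecture.

```python
def mul64(x):  return x << 6                               # 64x
def mul83(x):  return (x << 6) + (x << 4) + (x << 1) + x   # 64x + 16x + 2x + x
def mul36(x):  return (x << 5) + (x << 2)                  # 32x + 4x

def hevc_4pt_forward(s):
    """4-point HEVC core transform of samples s[0..3] using shift-add only."""
    e0, e1 = s[0] + s[3], s[1] + s[2]                      # even part
    o0, o1 = s[0] - s[3], s[1] - s[2]                      # odd part
    return [
        mul64(e0) + mul64(e1),
        mul83(o0) + mul36(o1),
        mul64(e0) - mul64(e1),
        mul36(o0) - mul83(o1),
    ]

print(hevc_4pt_forward([1, 2, 3, 4]))   # [640, -285, 0, -25]
```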

An Efficient Run Time Interface for Heterogeneous Architecture of Large Scale Supercomputing System

In this paper we propose a novel Run Time Interface (RTI) technique to provide an efficient environment for MPI jobs on the heterogeneous architecture of PARAM Padma. It offers an innovative, unified framework for the job management interface in parallel and distributed computing, employing a proxy scheme. The implementation shows that the proposed RTI is highly scalable and stable. Moreover, the RTI provides storage access for MPI jobs on various operating system platforms and improves data access performance through the high-performance C-DAC Parallel File System (C-PFS). The performance of the RTI is evaluated using standard HPC benchmark suites, and the results show that the proposed RTI performs well on a large-scale supercomputing system.

Improved Modulo 2^n + 1 Adder Design

Efficient modulo 2^n + 1 adders are important for several applications, including residue number systems, digital signal processors and cryptographic algorithms. In this paper we present a novel modulo 2^n + 1 addition algorithm for a recently presented number system. The proposed approach is introduced to reduce the power dissipated. In a conventional modulo 2^n + 1 adder, all operands are (n+1) bits long. To avoid using (n+1)-bit circuits, the diminished-1 and carry-save diminished-1 number systems can be used effectively in applications. In the paper, we also derive two new architectures for modulo 2^n + 1 adders based on an n-bit ripple-carry adder: the first architecture is faster, whereas the second uses less hardware. In the proposed method, the special treatment required for zero operands in the diminished-1 number system is removed. The fastest modulo 2^n + 1 adders in the normal binary system require 3-operand adders; this problem is also resolved in this paper. The proposed architectures are compared with some efficient adders based on ripple-carry and high-speed adders. It is shown that the hardware overhead and power consumption are reduced, and in some cases the power-delay product is also reduced.
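
As background, a reference model of diminished-1 modulo 2^n + 1 addition (one of the number systems the paper builds on) is sketched below: operand A in [1, 2^n] is stored as A - 1 on n bits, and an inverted end-around carry replaces a true (n+1)-bit adder. This is the textbook formulation, not the paper's improved circuit, and the zero case is left to the special handling the paper discusses.

```python
def dim1_add(a_star: int, b_star: int, n: int) -> int:
    """Add diminished-1 operands a_star = A-1 and b_star = B-1 modulo 2^n + 1."""
    mask = (1 << n) - 1
    s = a_star + b_star
    cout = s >> n                      # carry out of the n-bit addition
    # If no carry, add 1 (inverted end-around carry); if carry, add 0.
    return (s + (1 - cout)) & mask

# Check against ordinary arithmetic for n = 4 (modulo 17), nonzero results only.
n, mod = 4, 17
for A in range(1, mod):
    for B in range(1, mod):
        if (A + B) % mod == 0:
            continue                   # zero result needs the special treatment
        assert dim1_add(A - 1, B - 1, n) == (A + B) % mod - 1
print("diminished-1 addition matches modulo-17 arithmetic")
```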

Vector Space of the Extended Base-triplets over the Galois Field of five DNA Bases Alphabet

A plausible architecture of an ancient genetic code is derived from an extended base-triplet vector space over the Galois field of the extended base alphabet {D, G, A, U, C}, where the letter D represents one or more hypothetical bases with unspecific pairing. We hypothesize that the high degeneracy of a primeval genetic code with five bases and the gradual origin and improvement of a primitive DNA repair system could have made possible the transition from the ancient to the modern genetic code. Our results suggest that the Watson-Crick base pairing and the non-specific pairing of the hypothetical ancestral base D, used to define the sum and product operations, are sufficient to determine the coding constraints of the primeval and the modern genetic code, as well as the transition from the former to the latter. Geometrical and algebraic properties of this vector space reveal that the present codon assignment of the standard genetic code could be induced from a primeval codon assignment. Moreover, the Fourier spectrum of extended DNA genome sequences derived from multiple sequence alignment suggests that the so-called period-3 property of present coding DNA sequences could also have existed in ancient coding DNA sequences.
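
A small sketch of the algebraic setting: identifying the extended alphabet {D, G, A, U, C} with the five elements of GF(5) lets base triplets be treated as vectors that can be added and scaled componentwise. The particular letter-to-integer ordering below is an assumption for illustration, not necessarily the correspondence used in the paper.

```python
BASES = ["D", "G", "A", "U", "C"]          # assumed bijection with 0..4
IDX = {b: i for i, b in enumerate(BASES)}
P = 5                                      # field order

def add_triplet(t1: str, t2: str) -> str:
    """Componentwise sum of two base triplets in GF(5)^3."""
    return "".join(BASES[(IDX[a] + IDX[b]) % P] for a, b in zip(t1, t2))

def scale_triplet(k: int, t: str) -> str:
    """Scalar multiple of a triplet, k in GF(5)."""
    return "".join(BASES[(k * IDX[a]) % P] for a in t)

print(add_triplet("GAU", "ACD"))   # componentwise addition modulo 5 -> UGU
print(scale_triplet(3, "GAU"))     # -> UGC
```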

New VLSI Architecture for Motion Estimation Algorithm

This paper presents an efficient VLSI architecture design that achieves real-time video processing using the Full-Search Block Matching (FSBM) algorithm. The design employs a parallel bank architecture with minimum latency, maximum throughput and full hardware utilization. We use nine parallel processors in our architecture, each controlled by a state machine; the state-machine control makes the design very simple and cost-effective. The design is implemented in VHDL, and the programming techniques we incorporated make it completely programmable, in the sense that the search ranges and block sizes can be varied to suit any given requirements. The design can operate at frequencies up to 36 MHz, and it can process QCIF and CIF video resolutions at 1.46 MHz and 5.86 MHz, respectively.
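
A software reference of the algorithm being accelerated: for each candidate displacement in the search range, FSBM computes the sum of absolute differences (SAD) against the reference frame and keeps the minimum. The block size, search range and the tiny synthetic frames below are example parameters, not the configuration of the VLSI design.

```python
import random

def fsbm(cur, ref, bx, by, block=8, search=7):
    """Return the best (dx, dy) motion vector and SAD for the block at (bx, by)."""
    h, w = len(ref), len(ref[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= by + dy and by + dy + block <= h and
                    0 <= bx + dx and bx + dx + block <= w):
                continue                         # candidate falls outside the frame
            sad = sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
                      for y in range(block) for x in range(block))
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best, best_sad

# Tiny example: a 16x16 frame whose content shifted right by 2 pixels.
random.seed(0)
ref = [[random.randrange(256) for _ in range(16)] for _ in range(16)]
cur = [[ref[y][x - 2] if x >= 2 else ref[y][0] for x in range(16)] for y in range(16)]
print(fsbm(cur, ref, bx=4, by=4))    # expect ((-2, 0), 0)
```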