Demand and Price Evolution Forecasting as Tools for Facilitating the RoadMapping Process of the Photonic Component Industry

The photonic component industry is a highly innovative industry with a large value chain. To ensure the growth of the industry, much effort must be devoted to roadmapping activities, and demand and price evolution forecasting tools can prove quite useful in supporting the roadmap refinement and update process. This paper attempts to provide useful guidelines for roadmapping of optical components and considers two models, based on diffusion theory and the extended learning curve, for demand and price evolution forecasting.
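
A minimal numerical sketch of the two model families named above, assuming the classic Bass diffusion model for demand and a Wright-type learning curve for price; the function names, parameter values (m, p, q, c0, b) and ten-year horizon are illustrative assumptions, not the paper's calibrated models.

```python
import numpy as np

def bass_demand(m, p, q, periods):
    """Per-period adoptions for market potential m, innovation p, imitation q (Bass model)."""
    cumulative, sales = 0.0, []
    for _ in range(periods):
        s = (p + q * cumulative / m) * (m - cumulative)
        sales.append(s)
        cumulative += s
    return np.array(sales)

def learning_curve_price(c0, b, cumulative_volume):
    """Unit price c0 * V**(-b): price falls as cumulative produced volume V grows."""
    return c0 * np.power(cumulative_volume, -b)

demand = bass_demand(m=1e6, p=0.03, q=0.38, periods=10)   # hypothetical yearly component demand
volume = np.cumsum(demand)
price = learning_curve_price(c0=100.0, b=0.32, cumulative_volume=volume)
for year, (d, pr) in enumerate(zip(demand, price), start=1):
    print(f"year {year}: demand {d:,.0f} units, price {pr:,.2f} $/unit")
```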

Design of High Gain, High Bandwidth Op-Amp for Reduction of Mismatch Currents in Charge Pump PLL in 180 nm CMOS Technology

Designing a charge pump with a high-gain Op-Amp is a challenging task when a faithful response is required: the design of a high-performance phase-locked loop (PLL) requires a high-performance charge pump. We have designed an operational amplifier to reduce the errors caused by high-speed glitches in the transistors and by mismatch currents. A separate Op-Amp has been designed in 180 nm CMOS technology using the CADENCE VIRTUOSO tool. This paper describes the design of a high-performance charge pump for a GHz CMOS PLL targeting orthogonal frequency division multiplexing (OFDM) applications. A high-speed, low-power Op-Amp with more than 500 MHz bandwidth has been designed to increase the speed of the charge pump in the phase-locked loop.

Empirical Evidence on Equity Valuation of Thai Firms

This study aims at providing empirical evidence on a comparison of two equity valuation models, (1) the dividend discount model (DDM) and (2) the residual income model (RIM), in estimating the equity values of Thai firms during 1995-2004. Results suggest that both DDM and RIM underestimate the equity values of Thai firms and that RIM outperforms DDM in predicting cross-sectional stock prices. Results from regressing cross-sectional stock prices on the decomposed DDM and RIM equity values indicate that the book value of equity provides the greatest incremental explanatory power relative to the other components in the DDM and RIM terminal values, suggesting that book value distortions resulting from accounting procedures and choices are less severe than forecast errors and measurement errors in discount rates and growth rates. We also document that the incremental explanatory power of the book value of equity during 1998-2004, representing the information environment under the Thai Accounting Standards reformed after the 1997 economic crisis to conform to International Accounting Standards, is significantly greater than that during 1995-1996, representing the information environment under the pre-reformed Thai Accounting Standards. This implies that book value distortions are less severe under the 1997 reformed Thai Accounting Standards than under the pre-reformed Thai Accounting Standards.
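
A minimal sketch of the two valuation models being compared, assuming a constant discount rate, a short explicit forecast horizon and a Gordon-type terminal value; the function names, the clean-surplus simplification and all figures are hypothetical and are not taken from the study.

```python
def ddm_value(dividends, r, g):
    """Dividend discount model: PV of forecast dividends plus a Gordon terminal value."""
    pv = sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))
    terminal = dividends[-1] * (1 + g) / (r - g)
    return pv + terminal / (1 + r) ** len(dividends)

def rim_value(book_value, earnings, r, g):
    """Residual income model: book value plus PV of residual income RI_t = E_t - r * B_{t-1}."""
    pv, b, ri = 0.0, book_value, 0.0
    for t, e in enumerate(earnings, start=1):
        ri = e - r * b
        pv += ri / (1 + r) ** t
        b += e  # clean-surplus update, assuming all earnings are retained (no dividends)
    terminal = ri * (1 + g) / (r - g)
    return book_value + pv + terminal / (1 + r) ** len(earnings)

# Hypothetical firms, purely for illustration of the mechanics.
print(ddm_value([2.0, 2.2, 2.4], r=0.12, g=0.03))
print(rim_value(book_value=20.0, earnings=[3.0, 3.3, 3.6], r=0.12, g=0.03))
```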

Semi-Blind Two-Dimensional Code Acquisition in CDMA Communications

In this paper, we propose a new algorithm for joint time-delay and direction-of-arrival (DOA) estimation, here called two-dimensional code acquisition, in an asynchronous direct-sequence code-division multiple-access (DS-CDMA) array system. The algorithm relies on the eigenvector-eigenvalue decomposition of the sample correlation matrix and requires knowledge of the desired user's training sequence. The performance of the algorithm is analyzed both analytically and numerically in uncorrelated and coherent multipath environments. Numerical examples show that the algorithm is robust to an unknown number of coherent signals.
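
A generic subspace-search sketch of joint delay/DOA estimation, shown only to make the eigen-decomposition step concrete; the space-time signal model, uniform linear array geometry and the assumption of a single desired signal are illustrative and do not reproduce the paper's algorithm.

```python
import numpy as np

def two_dim_acquisition(Y, code, delays, angles, n_antennas, d=0.5):
    """Y: (n_antennas*len(code)) x N space-time snapshots; code: known training sequence."""
    R = Y @ Y.conj().T / Y.shape[1]              # sample correlation matrix
    _, V = np.linalg.eigh(R)                     # eigenvalue/eigenvector decomposition
    En = V[:, :-1]                               # noise subspace (one desired signal assumed)
    spectrum = np.zeros((len(delays), len(angles)))
    for i, tau in enumerate(delays):
        c = np.roll(code, tau)                   # delayed replica of the training sequence
        for j, theta in enumerate(angles):
            a = np.exp(-2j * np.pi * d * np.arange(n_antennas) * np.sin(theta))
            v = np.kron(a, c)                    # space-time signature of (theta, tau)
            v = v / np.linalg.norm(v)
            spectrum[i, j] = 1.0 / np.real(v.conj() @ En @ En.conj().T @ v)
    return spectrum                              # the peak indicates the acquired (delay, DOA)
```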

Autistic Children and Different Tense Forms

Autism spectrum disorder is characterized by abnormalities in social communication, language abilities and repetitive behaviors. The present study focused on grammatical deficits in autistic children. We evaluated the impairment of correct use of different Persian verb tenses in autistic children's speech. Two standardized language tests were administered and the gathered data were analyzed. The main result of this study was a significant difference between the mean scores of correct responses for the present tense and for the past tense in Persian. This study demonstrated that tense is severely impaired in autistic children's speech. Our findings indicated that autistic children's production of the simple present/past tense opposition was better than their production of future and periphrastic past forms (past perfect, present perfect, past progressive).

EAAC: Energy-Aware Admission Control Scheme for Ad Hoc Networks

The decisions made by admission control algorithms are based on the availability of network resources, viz. bandwidth, energy, memory buffers, etc., without degrading the Quality-of-Service (QoS) requirements of the applications already admitted. In this paper, we present an energy-aware admission control (EAAC) scheme which provides admission control for flows in an ad hoc network based on knowledge of the present and future residual energy of the intermediate nodes along the routing path. The aim of EAAC is to quantify the energy that a new flow will consume so that it can be decided whether the future residual energy of the nodes along the routing path can satisfy the energy requirement. In other words, this energy-aware routing admits a new flow only if no node in the routing path runs out of energy during the transmission of packets. The future residual energy of a node is predicted using a Multi-layer Neural Network (MNN) model. Simulation results show that the proposed scheme increases the network lifetime. The performance of the MNN model is also presented.
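
A minimal sketch of the admission rule described above: a flow is admitted only if every node on the path is predicted to retain enough energy to carry it. The energy model and the predictor below are placeholders (the paper uses an MNN for the prediction step).

```python
def predict_future_residual_energy(node, flow_duration):
    # Placeholder for the MNN prediction of a node's future residual energy (hypothetical model).
    return node["residual_energy"] - node["avg_drain_rate"] * flow_duration

def flow_energy_cost(node, flow):
    # Energy the node would spend forwarding the new flow's packets (illustrative model).
    return flow["packet_rate"] * flow["duration"] * node["energy_per_packet"]

def admit_flow(path, flow):
    for node in path:
        if predict_future_residual_energy(node, flow["duration"]) < flow_energy_cost(node, flow):
            return False   # some node would run out of energy: reject the flow
    return True            # all nodes can sustain the flow: admit it

path = [{"residual_energy": 50.0, "avg_drain_rate": 0.01, "energy_per_packet": 0.002},
        {"residual_energy": 12.0, "avg_drain_rate": 0.05, "energy_per_packet": 0.002}]
flow = {"packet_rate": 20.0, "duration": 300.0}
print(admit_flow(path, flow))   # False: the second node cannot sustain the flow
```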

A Novel Pilot Scheme for Frequency Offset and Channel Estimation in 2x2 MIMO-OFDM

The Carrier Frequency Offset (CFO) due to the time-varying fading channel is the main cause of the loss of orthogonality among OFDM subcarriers, which leads to inter-carrier interference (ICI). Hence, it is necessary to precisely estimate and compensate the CFO. Especially in mobile broadband communications, the CFO and the channel gain have to be estimated and tracked to maintain system performance. Thus, synchronization pilots are embedded in every OFDM symbol to track the variations. In this paper, we present a pilot scheme for both channel and CFO estimation in which the channel estimation process can be carried out with only one OFDM symbol. Additionally, the proposed pilot scheme provides better CFO estimation performance than the conventional orthogonal pilot scheme owing to the increased signal-to-interference ratio.
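
For context, a standard repeated-pilot (Moose-type) CFO estimator is sketched below; it illustrates how embedded pilots are used to track the offset but is not the pilot scheme proposed in this paper, and the sampling rate, pilot length and offset value are hypothetical.

```python
import numpy as np

def estimate_cfo(rx, pilot_len, spacing, fs):
    """rx contains two identical pilot blocks `spacing` samples apart; returns CFO in Hz."""
    p1 = rx[:pilot_len]
    p2 = rx[spacing:spacing + pilot_len]
    phase = np.angle(np.sum(p2 * np.conj(p1)))    # phase drift accumulated over `spacing` samples
    return phase * fs / (2 * np.pi * spacing)

fs, pilot_len, spacing = 20e6, 64, 64
cfo_true = 5e3
pilot = np.exp(1j * 2 * np.pi * np.random.rand(pilot_len))
tx = np.concatenate([pilot, pilot])
n = np.arange(2 * pilot_len)
rx = tx * np.exp(1j * 2 * np.pi * cfo_true * n / fs)  # apply the frequency offset
print(estimate_cfo(rx, pilot_len, spacing, fs))       # close to 5000 Hz
```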

An Anatomically-Based Model of the Nerves in the Human Foot

Sensory nerves in the foot play an important part in the diagnosis of various neuropathy disorders, especially in diabetes mellitus. However, a detailed description of the anatomical distribution of these nerves is currently lacking. A computational model of the afferent nerves in the foot may be a useful tool for the study of diabetic neuropathy. In this study, we present the development of an anatomically-based model of the major sensory nerves of the sole and dorsal sides of the foot. In addition, we present an algorithm for generating synthetic somatosensory nerve networks in the big-toe region of a right foot model. The algorithm is based on a modified Monte Carlo algorithm and can vary the intra-epidermal nerve fiber density in different regions of the foot model. Preliminary results from the combined model show the realistic anatomical structure of the major nerves as well as of the smaller somatosensory nerves of the foot. The model may now be developed to investigate the functional outcomes of structural neuropathy in diabetic patients.

Modeling and Simulating of Gas Turbine Cooled Blades

In contrast to existing methods, which do not take multiconnectivity in the broad sense of the term into account, we develop mathematical models and a highly effective combined (BIEM and FDM) numerical method for calculating the stationary and quasi-stationary temperature field of the profile part of a blade with convective cooling (from the point of view of implementation on a PC). The theoretical substantiation of these methods is provided by appropriate theorems. To this end, converging quadrature processes have been developed and error estimates in terms of the A. Zygmund continuity moduli have been obtained. For visualization of the profiles, the least-squares method with automatic conjecture, spline devices, smooth replenishment and neural nets are used. Boundary conditions of heat exchange are determined from the solution of the corresponding integral equations and from empirical relationships. The reliability of the designed methods is confirmed by computational and experimental investigations of the heat and hydraulic characteristics of the first-stage nozzle blade of a gas turbine.

CAD Model of Cole Cole Representation for Analyzing Performance of Microstrip Moisture Sensing Applications

In the past decade, the development of microstrip sensor applications has evolved tremendously. Although a cut-and-trial method was adopted to develop microstrip sensing applications in the past, Computer-Aided Design (CAD) is more effective, as it reduces both the time and the cost of developing such applications. Microstrip sensing applications have therefore gained popularity as an effective tool for continuous sensing of moisture content, particularly in products whose characteristics are governed mainly by their liquid content. In this research, the Cole-Cole representation of reactive relaxation is applied to assess the performance of microstrip sensor devices. The microstrip sensor is an effective tool for sensing the moisture content of dielectric materials. Analogous to the dielectric relaxation interpretation of Cole-Cole diagrams as applied to dielectric materials, a "reactive relaxation" concept is introduced to represent the frequency-dependent and moisture-content-dependent characteristics of microstrip sensor devices.
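
For reference, the standard Cole-Cole expression for dielectric relaxation, to which the "reactive relaxation" representation is drawn as an analogy, is

```latex
\varepsilon^{*}(\omega) \;=\; \varepsilon_{\infty}
  \;+\; \frac{\varepsilon_{s}-\varepsilon_{\infty}}{1+\left(j\omega\tau_{0}\right)^{1-\alpha}},
  \qquad 0 \le \alpha < 1,
```

where ε_s and ε_∞ are the static and high-frequency permittivities, τ0 is the relaxation time, and α controls the depression of the circular arc in the Cole-Cole diagram; the corresponding reactive quantities used for the microstrip sensor are defined in the paper itself.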

Computer Verification in Cryptography

In this paper we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we prove by computer. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the proof system Isabelle/HOL. Moreover, we computer-prove Bayes' formula. We also describe an application of the presented formalized probability distributions to cryptography. Furthermore, this paper shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research if the corresponding basic mathematical knowledge is available in a database.
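
For readers unfamiliar with the algorithm, a plain (unverified) Python rendering of the standard Miller-Rabin probabilistic primality test is sketched below; the formally verified implementation discussed in the paper is written in Isabelle/HOL, not Python.

```python
import random

def miller_rabin(n, rounds=20):
    """Return True if n is probably prime, False if n is certainly composite."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a is a witness: n is composite
    return True                 # no witness found: n is probably prime

print(miller_rabin(2**127 - 1))   # True (a Mersenne prime)
print(miller_rabin(3 * 5 * 17))   # False
```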

The Performance of Genetic Algorithm for Synchronized Chaotic Chen System in CDMA Satellite Channel

Synchronization is a difficult problem in CDMA satellite communications. Due to the influence of additive noise and fading in the mobile channel, it is not easy to keep up with the attenuation and offset. This paper considers a recently proposed approach to the problem of synchronizing the chaotic Chen system in CDMA satellite communication in the presence of constant attenuation and offset. An analytic algorithm that provides closed-form channel and carrier offset estimates is presented. The principle of this approach is to add a compensation block before the receiver that compensates the distortion of the imperfect channel by using a genetic algorithm. The results presented show that the receiver is able to rapidly recover synchronization with the transmitter.
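
For reference, the Chen system mentioned above is the three-dimensional chaotic system

```latex
\dot{x} = a\,(y - x), \qquad
\dot{y} = (c - a)\,x - x z + c\,y, \qquad
\dot{z} = x y - b\,z,
```

which exhibits chaotic behaviour for the commonly cited parameter set a = 35, b = 3, c = 28.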

Identifying Blind Spots in a Stereo View for Early Decisions in SI for Fusion based DMVC

In DMVC, more than one source is available for the construction of side information. Newer techniques make use of several sources simultaneously by constructing a bitmask that determines the source of every block or pixel of the side information, and a lot of computation is required to determine each bit in the bitmask. In this paper, we have tried to define areas that can only be well predicted by temporal interpolation and not by multiview interpolation or synthesis. We argue that areas that are not covered by two cameras cannot be appropriately predicted by multiview synthesis, and that if we can identify such areas in the first place, we do not need to go through the full set of computations for the pixels that lie in those areas. Moreover, this paper also defines a technique based on KLT to mark the above-mentioned areas before any other processing is done on the side view.

Analysis of Effect of Pre-Logic Factoring on Cell Based Combinatorial Logic Synthesis

In this paper, an analysis is presented which demonstrates the effect that pre-logic factoring can have on the automated combinational logic synthesis process succeeding it. The impact of pre-logic factoring on some arbitrary combinatorial circuits synthesized within an FPGA-based logic design environment has been analyzed previously. This paper explores a similar effect, but with non-regenerative logic synthesized using elements of a commercial standard cell library. Overall, the results obtained from the analysis of a variety of MCNC/IWLS combinational logic benchmark circuits indicate that pre-logic factoring has the potential to facilitate simultaneously power-, delay- and area-optimized synthesis solutions in many cases.

Topology Optimization of Cable Truss Web for Prestressed Suspension Bridge

A suspension bridge is the most suitable type of structure for a long-span bridge due to the rational use of structural materials. Increased deformability, caused by the appearance of elastic and kinematic displacements, is the major disadvantage of suspension bridges. The problem of increased kinematic displacements under non-symmetrical loads can be solved by prestressing. A prestressed suspension bridge with a span of 200 m was considered as the object of investigation. A cable truss with a cross web was considered as the main load-carrying structure of the prestressed suspension bridge. The cable truss was optimized over 47 variable factors using a genetic algorithm and the FEM program ANSYS. It was found that the maximum total displacements are reduced by up to 29.9% when a cable truss with rational characteristics is used instead of a single cable under the most unfavourably placed load.

Numerical Modeling of Gas Turbine Engines

In contrast to existing methods, which do not take multiconnectivity in the broad sense of the term into account, we develop mathematical models and a highly effective combined (BIEM and FDM) numerical method for calculating the stationary and quasi-stationary temperature field of the profile part of a blade with convective cooling (from the point of view of implementation on a PC). The theoretical substantiation of these methods is provided by appropriate theorems. To this end, converging quadrature processes have been developed and error estimates in terms of the A. Zygmund continuity moduli have been obtained. For visualization of the profiles, the least-squares method with automatic conjecture, spline devices, smooth replenishment and neural nets are used. Boundary conditions of heat exchange are determined from the solution of the corresponding integral equations and from empirical relationships. The reliability of the designed methods is confirmed by computational and experimental investigations of the heat and hydraulic characteristics of the first-stage nozzle blade of a gas turbine.

Minimizing Examinee Collusion with a Latin-Square Treatment Structure

Cheating on standardized tests has been a major concern, as it potentially reduces measurement precision. One major way to reduce cheating by collusion is to administer multiple forms of a test. Even with this approach, the potential for collusion remains quite large. A Latin-square treatment structure for distributing multiple forms is proposed to further reduce the colluding potential. An index to measure the extent of colluding potential is also proposed. Finally, with a simple algorithm, various Latin squares were explored to find the structure that keeps the colluding potential to a minimum.
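
A minimal sketch of the underlying idea, assuming a simple cyclic construction: in a Latin square of order k, no form is repeated within any row or column of the seating grid, so immediate neighbours never share a form. The colluding-potential index and the search over alternative squares are the paper's contribution and are not reproduced here.

```python
def cyclic_latin_square(k):
    """k x k Latin square built by cyclic shifts: entry (r, c) = (r + c) mod k."""
    return [[(r + c) % k for c in range(k)] for r in range(k)]

forms = "ABCD"                       # four hypothetical test forms
square = cyclic_latin_square(len(forms))
for row in square:
    print(" ".join(forms[i] for i in row))
# A B C D
# B C D A
# C D A B
# D A B C
```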

Localizing and Recognizing Integral Pitches of Cheque Document Images

Automatic reading of handwritten cheques is a computationally complex process and plays an important role in financial risk management. Machine vision and learning provide a viable solution to this problem. Research effort has mostly been focused on recognizing the diverse pitches of cheques and demand drafts that share an identical outline. However, most of these methods employ template matching to localize the pitches, and such schemes can fail when applied to the different types of outline maintained by each bank. In this paper, the so-called outline problem is resolved by a cheque information tree (CIT), which generalizes the localization method to extract active regions of entities. In addition, a weight-based density plot (WBDP) is used to isolate text entities and read complete pitches. Recognition is based on texture features using neural classifiers. The legal amount is subsequently recognized using both texture and perceptual features. A post-processing phase is invoked to detect incorrect readings using a Type-2 grammar and a Turing machine. The performance of the proposed system was evaluated using cheques and demand drafts of 22 different banks. The test data consist of a collection of 1540 leaves obtained from 10 different account holders from each bank. Results show that this approach can easily be deployed without significant design amendments.

Roadmapping as a Collaborative Strategic Decision-Making Process: Shaping Social Dialogue Options for the European Banking Sector

The new situation generated by technological advancements and changes in the global economy raises important issues on how communities and organisations need to innovate upon their traditional processes in order to adapt to the challenges of the Knowledge Society. The DialogoS+ European project aims to study the role of, and promote, social dialogue in the banking sector, strengthen the link between old and new members, and make social dialogue at the European level a force for innovation and change, also in the context of the international crisis emerging in 2008-2009. Within the scope of DialogoS+, this paper describes how the community of Europe's banking-sector trade unions attempted to adapt to the challenges of the Knowledge Society by exploiting the benefits of new channels of communication, learning, and knowledge generation and diffusion, focusing on the concept of roadmapping. Important dimensions of social dialogue, such as collective bargaining and working conditions, are addressed.

Impulse Response Shortening for Discrete Multitone Transceivers using Convex Optimization Approach

In this paper we propose a new criterion for solving the channel-shortening problem in multi-carrier systems. In a discrete multitone receiver, a time-domain equalizer (TEQ) reduces intersymbol interference (ISI) by shortening the effective duration of the channel impulse response. The minimum mean square error (MMSE) method for TEQ design does not give satisfactory results. In [1], a new criterion is introduced for partially equalizing severe-ISI channels to reduce the cyclic prefix overhead of the discrete multitone transceiver (DMT), assuming a fixed transmission bandwidth. Due to a specific constraint in that method (a unit-norm constraint on the target impulse response (TIR)), the freedom to choose the optimum TIR vector is reduced. Better results can be obtained by avoiding the unit-norm constraint on the TIR. In this paper we change the cost function proposed in [1] to one of maximizing a determinant subject to a linear matrix inequality (LMI) and a quadratic constraint, and we solve the resulting optimization problem. The usefulness of the proposed method is shown with the help of simulations.
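
A minimal sketch of the channel-shortening objective itself, expressed as the shortening SNR (energy of the effective response inside a window of cyclic-prefix length versus the energy outside it); the channel taps, TEQ taps and window length below are hypothetical, and the LMI-based design proposed in the paper is not reproduced.

```python
import numpy as np

def shortening_snr(h, w, nu):
    """Shortening SNR in dB of the effective channel conv(h, w) for a window of length nu."""
    c = np.convolve(h, w)                                  # effective impulse response
    energies = [np.sum(c[d:d + nu] ** 2) for d in range(len(c) - nu + 1)]
    inside = max(energies)                                 # best window placement
    outside = np.sum(c ** 2) - inside
    return 10 * np.log10(inside / outside)

h = np.array([0.1, 1.0, 0.7, 0.4, 0.2, 0.1])               # hypothetical channel impulse response
w = np.array([1.0, -0.6, 0.1])                              # hypothetical 3-tap TEQ
print(shortening_snr(h, w, nu=3))
```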