Abstract: The most widely used semiconductor memory types are Dynamic Random Access Memory (DRAM) and Static Random Access Memory (SRAM). Competition among memory manufacturers drives the need to decrease power consumption and reduce the probability of read failure. FinFET technology is relatively new and has not yet been fully explored. In this paper, a single-cell Schmitt-trigger-based static RAM using FinFET technology is proposed and analyzed. The accuracy of the result is validated by means of HSPICE simulations with 32 nm FinFET technology, and the results are then compared with a 6T SRAM using the same technology.
Abstract: As communications systems and technology become more advanced and complex, it will be increasingly important to focus on users' individual needs. Personalization and effective user profile management will be necessary to ensure the uptake and success of new services and devices, and it is therefore important to focus on users' requirements in this area and define solutions that meet them. The work on personalization and user profiles emerged from earlier ETSI work on the Universal Communications Identifier (UCI), which is a unique identifier of the user rather than a range of identifiers of the user's many communication devices or services (e.g. fixed phone numbers at home/work, mobile phones, fax and email addresses). This paper describes work on personalization, including standardized information and preferences and an architectural framework describing how personalization can be integrated into Next Generation Networks, together with the UCI concept.
Abstract: We investigated the effects of the modified preprogrammed training mode Chase Trainer from the Balance Trainer (BT3, HurLab, Tampere, Finland) on athletes who experienced unilateral Patellofemoral Pain Syndrome (PFPS). Twenty-seven athletes (mean age = 14.23 ± 1.31 years, height = 164.89 ± 7.85 cm, weight = 56.94 ± 9.28 kg) were randomly assigned to two groups: experiment (EG; n = 14) and injured (IG; n = 13). EG performed a series of Chase Trainer programs, which required them to shift their body weight in different directions, at different speeds and angles of leaning, twice a week for a duration of 8 weeks. Static postural control and perceived pain level measures were taken at baseline and after 6 and 8 weeks of training. There was no significant difference in any of the tested variables between EG and IG before and after the 6-week intervention period. However, after 8 weeks of training, the postural control (eyes open) and perceived pain level of EG improved compared to IG (p
Abstract: Let p be a prime number, Fp a finite field, and t ∈ Fp* = Fp − {0}. In this paper we obtain some properties of the elliptic curves Ep,t: y^2 = x^3 − t^2 x over Fp. In the first section we give some notation and preliminaries on elliptic curves. In the second section we consider the rational points (x, y) on Ep,t. We give a formula for the number of rational points on Ep,t over the extension field Fp^n for an integer n ≥ 1. We also give some formulas for the sums of the x- and y-coordinates of the points (x, y) on Ep,t. In the third section we consider the rank of Et: y^2 = x^3 − t^2 x and of its 2-isogenous curve over Q, and we prove that both ranks are 2 over Q. In the last section we obtain some formulas for the sums Σ_{t∈Fp*} a_{p,t}^n for an integer n ≥ 1, where a_{p,t} denotes the trace of Frobenius.
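As a concrete illustration of the point counts and traces of Frobenius discussed above, the following sketch brute-force counts the rational points on E_{p,t}: y^2 = x^3 − t^2 x over F_p. The function names are illustrative, and the paper's closed-form formulas are not reproduced here; enumeration is only practical for small p.

```python
def count_affine_points(p, t):
    # Brute-force count of affine rational points (x, y) on
    # E_{p,t}: y^2 = x^3 - t^2 x over F_p. Illustrative only; the
    # paper derives closed-form formulas instead of enumerating.
    square_roots = {}
    for y in range(p):
        square_roots.setdefault(y * y % p, []).append(y)
    count = 0
    for x in range(p):
        rhs = (x ** 3 - t ** 2 * x) % p
        count += len(square_roots.get(rhs, []))
    return count

def trace_of_frobenius(p, t):
    # a_{p,t} = p + 1 - #E_{p,t}(F_p), counting the point at infinity.
    return p + 1 - (count_affine_points(p, t) + 1)
```

For curves of this form one can check numerically, for instance, that the trace vanishes when p ≡ 3 (mod 4), since y^2 = x^3 + ax is supersingular over such F_p.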
Abstract: In this paper a nonlinear model is presented to demonstrate the relation between the production and marketing departments. By introducing functions such as a pricing cost function and a market share loss function, we try to show some aspects of market modelling that have not been considered before. The proposed model is a constrained signomial geometric programming model. To solve the model, after modifying the variables, an iterative technique based on the concept of the geometric mean is introduced for the resulting non-standard posynomial model; the technique can be applied to a wide variety of models in non-standard posynomial geometric programming form. Finally, a numerical analysis is presented to validate the proposed model.
Abstract: A new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. This coding presents a continuous-tone still image compression system that combines lossy and lossless compression making use of finite-arithmetic reversible transforms. Both the color space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by means of a coding system based on a subdivision into smaller components (CFDS), similar to bit-importance codification. The subcomponents so obtained are reordered by means of a highly configurable alignment system, depending on the application, that makes it possible to reconfigure the elements of the image and to obtain different levels of importance from which the bit stream is generated. The subcomponents of each level of importance are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream itself constitutes a coded compressed still image. However, applying a packing system to the bit stream after the VBLm allows the construction of a final, highly scalable bit stream from a basic image level and one or several enhancement levels.
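As one standard example of the kind of finite-arithmetic reversible color transform the abstract refers to, the following sketch shows the integer reversible color transform used in JPEG 2000; it is given for illustration only and is not necessarily the exact transform of the proposed codec.

```python
def rct_forward(r, g, b):
    # JPEG 2000-style reversible color transform on integer samples:
    # exactly invertible in integer arithmetic, so it supports both
    # lossy and lossless coding paths from the same pipeline.
    y = (r + 2 * g + b) >> 2   # luma approximation (floor division by 4)
    u = b - g                  # chroma difference
    v = r - g                  # chroma difference
    return y, u, v

def rct_inverse(y, u, v):
    # Exact integer inverse: since y = g + floor((u + v) / 4),
    # g is recovered without any rounding loss.
    g = y - ((u + v) >> 2)
    r = v + g
    b = u + g
    return r, g, b
```

The round trip rct_inverse(rct_forward(r, g, b)) reproduces the input exactly for any integer samples, which is the defining property of a reversible transform.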
Abstract: When designing information systems that deal with large amounts of domain knowledge, system designers need to consider the ambiguities of labeling terms in the domain vocabulary used for navigating users in the information space. The goal of this study is to develop a methodology for system designers to label navigation items, taking account of ambiguities that stem from synonyms or polysemes of labeling terms. In this paper, we propose a method for concept labeling based on mappings between a domain ontology and a thesaurus, and report the results of an empirical evaluation.
Abstract: We present a dextran-modified silicon microring resonator sensor for high-density antibody immobilization. An array of sensors consisting of three sensor rings and a reference ring was fabricated, and its surface sensitivity and limit of detection were obtained using polyelectrolyte multilayers. The mass sensitivity and the limit of detection of the fabricated sensor ring are 0.35 nm/ng mm−2 and 42.8 pg/mm2 in air, respectively. The dextran-modified sensor surface was successfully prepared by covalent grafting of oxidized dextran on a 3-aminopropyltriethoxysilane (APTES)-modified silicon sensor surface. Antibody immobilization on the hydrogel dextran matrix improves by 40% compared to the traditional antibody immobilization method via APTES and glutaraldehyde linkage.
Abstract: Clustering techniques have been used by many intelligent software agents to group similar access patterns of Web users into high-level themes which express users' intentions and interests. However, such techniques have mostly focused on one salient feature of the Web documents visited by the user, namely the extracted keywords. The major aim of these techniques is to come up with an optimal threshold for the number of keywords needed to produce more focused themes. In this paper we focus on both keyword and similarity thresholds to generate more concentrated themes, and hence build a sounder model of user behavior. The purpose of this paper is twofold: to use distance-based clustering methods to recognize overall themes from the proxy log file, and to suggest efficient cut-off levels for the keyword and similarity thresholds which tend to produce clusters with better focus and efficient size.
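A minimal sketch of the two thresholds at work, under assumed data structures (the paper's actual proxy-log processing and clustering method are not reproduced): each session keeps only its top-k keywords (the keyword threshold), and a session joins an existing theme only when its Jaccard similarity to that theme reaches the similarity threshold.

```python
def cluster_sessions(sessions, k, sim_threshold):
    # Hypothetical greedy single-pass clustering of user sessions.
    # `sessions` maps a session id to a list of (keyword, weight) pairs;
    # `k` is the keyword threshold (top-k keywords kept per session) and
    # `sim_threshold` the minimum Jaccard similarity to join a theme.
    profiles = {
        sid: frozenset(w for w, _ in sorted(kws, key=lambda p: -p[1])[:k])
        for sid, kws in sessions.items()
    }
    clusters = []  # each cluster: (union of keyword sets, member session ids)
    for sid, prof in profiles.items():
        for i, (keys, members) in enumerate(clusters):
            jaccard = len(prof & keys) / len(prof | keys)
            if jaccard >= sim_threshold:
                clusters[i] = (keys | prof, members + [sid])
                break
        else:
            clusters.append((prof, [sid]))
    return clusters
```

Raising `k` admits noisier keywords into each profile, while raising `sim_threshold` yields more, smaller, and more focused themes; the trade-off between the two is exactly the tuning question the paper studies.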
Abstract: This work proposes an accurate crosstalk noise estimation method in the presence of multiple RLC lines for use in design automation tools. The method correctly models the loading effects of non-switching aggressors and aggressor tree branches using the resistive shielding effect and realistic exponential input waveforms. Noise peak and width expressions have been derived. The results obtained are in good agreement with SPICE results: the average error for the noise peak is 4.7% and for the width 6.15%, while allowing a very fast analysis.
Abstract: In this paper, collocation methods based on the cubic B-spline and the extended cubic uniform B-spline are considered for solving the one-dimensional heat equation with a nonlocal initial condition. A finite difference scheme and a θ-weighted scheme are used for time and space discretization, respectively. The stability of the method is analyzed by the Von Neumann method. The accuracy of the methods is illustrated with an example, and the numerical results obtained are compared with the analytical solutions.
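To illustrate θ-weighted time stepping for the heat equation, here is a minimal sketch for u_t = u_xx with homogeneous Dirichlet boundaries. It uses plain central finite differences in space as a stand-in, not the cubic B-spline collocation of the paper; θ = 0.5 gives the Crank–Nicolson scheme.

```python
def heat_theta_step(u, dt, dx, theta):
    # One θ-weighted time step for u_t = u_xx on a uniform grid with
    # u = 0 at both boundaries. Solves
    #   (I - theta*r*D2) u_new = (I + (1-theta)*r*D2) u_old
    # for the interior nodes, where D2 is the central second difference.
    n = len(u)
    r = dt / dx ** 2
    a = [-theta * r] * n          # sub-diagonal
    b = [1 + 2 * theta * r] * n   # diagonal
    c = [-theta * r] * n          # super-diagonal
    d = [0.0] * n                 # right-hand side
    for i in range(1, n - 1):
        d[i] = u[i] + (1 - theta) * r * (u[i - 1] - 2 * u[i] + u[i + 1])
    # Dirichlet boundary rows: force u_new = 0 at both ends.
    b[0] = b[-1] = 1.0
    a[0] = c[0] = a[-1] = c[-1] = 0.0
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x
```

Against the exact solution u(x, t) = sin(πx) e^{−π²t}, the scheme with θ = 0.5 reproduces the analytic decay to within the O(dx²) spatial error, which mirrors the comparison with analytical solutions described in the abstract.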
Abstract: In this paper, the flow around two cam-shaped cylinders has been studied numerically. The equivalent diameter of the cylinders is 27.6 mm. The center-to-center spacing of the two cam-shaped cylinders is defined as the longitudinal pitch ratio, which varies in a range of 2 … varies in a range of 50
Abstract: Despite the extensive use of eLearning systems, there is no consensus on a standard framework for evaluating the quality of such systems. Hence, there is only a minimal set of tools that can support this judgment and give information about the value of the course content. This paper presents two kinds of quality evaluation indicators for eLearning courses based on the computation of three known metrics: the Euclidean, Hamming and Levenshtein distances. The “distance” calculus is applied to standard evaluation templates (i.e. the European Commission Programme procedures vs. the AFNOR Z 76-001 Standard), determining a reference point in the evaluation of e-learning course quality vs. the optimal concept(s). The case study, based on the results of projects developed in the framework of the European Programme “Leonardo da Vinci” with Romanian contractors, tries to demonstrate the benefits of such a method.
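The three metrics named above can be sketched as follows; the encoding of evaluation templates as numeric vectors or strings is an assumption for illustration, not the paper's actual template format.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two equal-length numeric score vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def hamming(a, b):
    # Number of positions at which two equal-length sequences differ.
    if len(a) != len(b):
        raise ValueError("Hamming distance needs equal-length sequences")
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a, b):
    # Classic dynamic-programming edit distance
    # (insertions, deletions, substitutions), row by row in O(len(b)) space.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,             # deletion
                           cur[j - 1] + 1,          # insertion
                           prev[j - 1] + (x != y))) # substitution
        prev = cur
    return prev[-1]
```

A smaller distance between a course's evaluation template and the reference template then serves as the quality indicator, in the spirit of the comparison described in the abstract.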
Abstract: A straightforward and intuitive combination of single simulations into an aggregated master simulation is not trivial. There are many problems that trigger specific difficulties during the modeling and execution of such a simulation. In this paper we identify these problems and aim to solve them by mapping the task to the field of multi-agent systems. The solution is a new meta-model named AGENTMAP, which is able to mitigate most of the problems and to support intuitive modeling at the same time. This meta-model is introduced and explained on the basis of an example from the e-commerce domain.
Abstract: Existing work in temporal logic on representing the execution of infinitely many transactions uses linear-time temporal logic (LTL) and only models two-step transactions. In this paper,
we use the comparatively efficient branching-time computational tree
logic CTL and extend the transaction model to a class of multistep
transactions, by introducing distinguished propositional variables
to represent the read and write steps of n multi-step transactions
accessing m data items infinitely many times. We prove that the
well known correspondence between acyclicity of conflict graphs
and serializability for finite schedules, extends to infinite schedules.
Furthermore, in the case of transactions accessing the same set of
data items in (possibly) different orders, serializability corresponds
to the absence of cycles of length two. This result is used to give an
efficient encoding of the serializability condition into CTL.
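The finite-schedule correspondence mentioned above (acyclicity of the conflict graph is equivalent to conflict serializability) can be sketched directly. This checks only the classical finite case, not the paper's CTL encoding of infinite schedules; the schedule representation is an assumption for illustration.

```python
def conflict_graph(schedule):
    # Conflict graph of a finite schedule given as a list of
    # (transaction, op, item) triples with op in {"r", "w"}.
    # Edge Ti -> Tj when an operation of Ti precedes a conflicting
    # operation of Tj: same item, different transactions, at least
    # one of the two operations a write.
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and "w" in (op1, op2):
                edges.add((t1, t2))
    return edges

def is_conflict_serializable(schedule):
    # Acyclicity test by Kahn's topological-sort algorithm: the
    # schedule is conflict-serializable iff every node can be removed.
    edges = conflict_graph(schedule)
    nodes = {t for t, _, _ in schedule}
    indeg = {t: 0 for t in nodes}
    for _, v in edges:
        indeg[v] += 1
    queue = [t for t in nodes if indeg[t] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(nodes)
```

For two transactions accessing the same items in opposite orders, the cycle detected here has length two, which is the special case the abstract exploits for its efficient CTL encoding.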
Abstract: In this paper, we consider the global exponential stability of the equilibrium point of Hopfield neural networks with delays and impulsive perturbations. Some new exponential stability criteria for the system are derived by using the Lyapunov functional method and the linear matrix inequality approach for estimating the upper bound of the derivative of the Lyapunov functional. Finally, we give two numerical examples showing the effectiveness of our theoretical results.
Abstract: The objective of this paper is to present a research study of the convectors that are used for heating or cooling of living rooms or industrial halls. The key points are experimental measurement and comprehensive numerical simulation of the flow through the parts of the convector, such as the heat exchanger, the inlet from the fan, etc. From the obtained results, the components of the convector are optimized so as to increase thermal power efficiency by improving heat convection or reducing air drag friction. Both optimized aspects lead to more effective service conditions and to energy savings. A significant part of the convector research is the design of a unique measurement laboratory and the adoption of measurement techniques. The new laboratory makes it possible to measure thermal power efficiency and other relevant parameters under the specific service conditions of the convectors.
Abstract: Their batch nature limits standard kernel principal component analysis (KPCA) methods in numerous applications, especially for dynamic or large-scale data. In this paper, an efficient adaptive approach is presented for online extraction of the kernel principal components (KPC). The contribution of this paper may be divided into two parts. First, the kernel covariance matrix is correctly updated to adapt to the changing characteristics of the data. Second, the KPC are recursively formulated to overcome the batch nature of standard KPCA. This formulation is derived from the recursive eigen-decomposition of the kernel covariance matrix and indicates the KPC variation caused by the new data. The proposed method not only alleviates the sub-optimality of the KPCA method for non-stationary data, but also maintains constant update speed and memory usage as the data size increases. Experiments on simulated data and real applications demonstrate that our approach yields improvements in terms of both computational speed and approximation accuracy.
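As a point of reference for the batch baseline the abstract improves on, the following sketch extracts the leading kernel principal component from the centered Gram matrix by power iteration. The RBF kernel, the parameters, and all names are illustrative assumptions; the paper's recursive update itself is not reproduced.

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel; the kernel choice is illustrative.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def centered_gram(X, gamma=1.0):
    # Gram matrix centered in feature space, the standard batch KPCA
    # centering: Kc[i][j] = K[i][j] - rowmean_i - rowmean_j + grandmean.
    n = len(X)
    K = [[rbf_kernel(X[i], X[j], gamma) for j in range(n)] for i in range(n)]
    row = [sum(K[i]) / n for i in range(n)]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]

def top_kpc(X, gamma=1.0, iters=200):
    # Leading eigenpair of the centered Gram matrix by power iteration.
    # This is the batch computation a recursive method avoids redoing
    # from scratch whenever new data arrive.
    Kc = centered_gram(X, gamma)
    n = len(Kc)
    # Non-uniform start: the all-ones vector lies in the null space
    # of the centered Gram matrix, so it must be avoided.
    v = [float(i + 1) for i in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]
    lam = 0.0
    for _ in range(iters):
        w = [sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam, v
```

Each full recomputation of the Gram matrix costs O(n²) kernel evaluations, which is the scaling bottleneck that motivates the recursive formulation proposed in the paper.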
Abstract: Collateralized Debt Obligations are not as widely used nowadays as they were before the 2007 subprime crisis. Nonetheless, there remains an enthralling challenge: optimizing the cash flows associated with synthetic CDOs. A Gaussian-based model is used here, in which the default correlation and the unconditional probabilities of default are highlighted. Numerous simulations are then performed based on this model for different scenarios, in order to evaluate the associated cash flows given a specific number of defaults at different periods of time. Cash flows are not calculated solely on a single bought or sold tranche, but rather on a combination of bought and sold tranches. Under some assumptions, the simplex algorithm gives a way to find the maximum cash flow according to the correlation of defaults and the maturities. The Gaussian model used is not realistic in crisis situations. Moreover, the present system does not handle buying or selling a portion of a tranche, but only the whole tranche. However, the work provides the investor with relevant elements on how to know what and when to buy and sell.
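A minimal sketch of the scenario-generation step under a one-factor Gaussian copula, a standard specialization of the Gaussian model family the abstract refers to. All parameter names are illustrative assumptions, and the cash flow and simplex optimization stages are not reproduced.

```python
import random
from statistics import NormalDist

def simulate_defaults(n_names, p_default, rho, n_sims, seed=0):
    # One-factor Gaussian copula default simulation. Latent variable
    # per name: X_i = sqrt(rho) * Z + sqrt(1 - rho) * eps_i, with a
    # common market factor Z; name i defaults in a scenario when
    # X_i < Phi^{-1}(p_default). Returns the default count per scenario.
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(p_default)
    counts = []
    for _ in range(n_sims):
        z = rng.gauss(0.0, 1.0)  # common market factor for this scenario
        defaults = 0
        for _ in range(n_names):
            x = rho ** 0.5 * z + (1 - rho) ** 0.5 * rng.gauss(0.0, 1.0)
            if x < threshold:
                defaults += 1
        counts.append(defaults)
    return counts
```

Raising `rho` fattens the tail of the default-count distribution (many scenarios with few defaults, a few with many), which is what drives the differences in tranche cash flows that the optimization stage exploits.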
Abstract: Semiconductor detector arrays are widely used in high-temperature plasma diagnostics. They have a fast response, which allows observation of many processes and instabilities in tokamaks. In this paper, several diagnostics based on semiconductor arrays are reviewed: cameras, AXUV photodiodes (often referred to as fast “bolometers”), and detectors of both soft X-rays and visible light recently installed on the COMPASS tokamak. Fresh results from both the spring and summer campaigns in 2012 are introduced. Examples of the utilization of the detectors are shown for plasma shape determination, fast calculation of the radiation center, two-dimensional plasma radiation tomography in different spectral ranges, observation of impurity inflow, and also for the investigation of MHD activity in COMPASS tokamak discharges.