Abstract: Eukaryotic protein-coding genes are interrupted by spliceosomal introns, which are removed from the RNA transcripts before translation into a protein. The exon-intron structures of different eukaryotic species differ considerably, and the evolution of such structures raises many questions. We try to address some of these questions using statistical analysis of whole genomes. We go through all the protein-coding genes in a genome and study correlations between the net length of all the exons in a gene, the number of exons, and the average length of an exon. We also take average values of these features for each chromosome and study correlations between those averages at the chromosomal level. Our data show universal features of exon-intron structures common to animals, plants, and protists (specifically, Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Cryptococcus neoformans, Homo sapiens, Mus musculus, Oryza sativa, and Plasmodium falciparum). We have verified a linear correlation between the number of exons in a gene and the length of the protein it encodes: protein length increases in proportion to the number of exons. On the other hand, the average length of an exon always decreases with the number of exons. Finally, chromosome clustering based on average chromosome properties and on the parameters of the linear regression between the number of exons in a gene and the net length of those exons demonstrates that these average chromosome properties are genome-specific features.
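The three per-gene features studied above (net exon length, exon count, average exon length) and the linear fit between exon count and net exon length can be sketched as follows; the gene data here are invented for illustration and are not from the genomes analyzed in the paper.

```python
# Toy sketch of the per-gene statistics described in the abstract.
# Gene names and exon lengths below are hypothetical.
import numpy as np

genes = {  # hypothetical gene -> list of exon lengths (bp)
    "gene_a": [120, 90, 300],
    "gene_b": [150, 60, 80, 200, 110],
    "gene_c": [500],
    "gene_d": [100, 100, 100, 100],
}

n_exons = np.array([len(v) for v in genes.values()], dtype=float)
net_len = np.array([sum(v) for v in genes.values()], dtype=float)
avg_len = net_len / n_exons  # average exon length per gene

# Least-squares linear regression of net exon length on exon count.
slope, intercept = np.polyfit(n_exons, net_len, 1)
r = np.corrcoef(n_exons, net_len)[0, 1]  # Pearson correlation
print(f"slope={slope:.1f} bp/exon, intercept={intercept:.1f} bp, r={r:.2f}")
```

Averaging `n_exons`, `net_len`, and `avg_len` over the genes of each chromosome would give the chromosome-level features used for the clustering step.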
Abstract: Webcam systems now function as the new privileged vantage points from which to view the city. This transformation of CCTV technology from surveillance to promotional tool is significant because its 'scopic regime' presents, back to the public, a new virtual 'site' that sits alongside its real-time counterpart. Significantly, this raw 'image' data can, in fact, be co-opted and processed so as to disrupt its original purpose. This paper will demonstrate this disruptive capacity through an architectural project. It will reveal how the adaptation of the webcam image offers a technical springboard by which to initiate alternative urban form-making decisions and subvert the disciplinary reliance on the 'flat' orthographic plan. In so doing, the paper will show how this 'digital material' exceeds the imagistic function of the image, shifting it from being a vehicle of signification to a site of affect.
Abstract: Coronary artery bypass grafts (CABG) are widely studied with respect to the hemodynamic conditions that play an important role in the development of restenosis. However, papers concerned with the constitutive modeling of CABG are lacking in the literature. The purpose of this study is to find a constitutive model for CABG tissue. A sample of CABG obtained during an autopsy underwent an inflation-extension test. Displacements were recorded by CCD cameras and subsequently evaluated by digital image correlation. Pressure-radius and axial force-elongation data were used to fit the material model. The tissue was modeled as a one-layered composite reinforced by two families of helical fibers. The material is assumed to be locally orthotropic, nonlinear, incompressible and hyperelastic. Material parameters are estimated for two strain energy functions (SEF). The first is a classical exponential. The second SEF is logarithmic, which allows interpretation by means of limiting (finite) strain extensibility. The presented material parameters are estimated by optimization based on the radial and axial equilibrium equations in a thick-walled tube. Both material models fit the experimental data successfully. The exponential model fits the relationship between axial force and axial strain significantly better than the logarithmic one.
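The abstract does not reproduce the SEFs themselves; for orientation, one widely used exponential SEF for an incompressible material reinforced by two fiber families is the Holzapfel-type form below. This is an illustrative example of the exponential family, not necessarily the exact function fitted in the paper.

```latex
\Psi = \frac{c}{2}\,(I_1 - 3)
     + \frac{k_1}{2k_2}\sum_{i=4,6}\left[\exp\!\left(k_2\,(I_i - 1)^2\right) - 1\right],
\qquad
I_1 = \operatorname{tr}\mathbf{C},\quad
I_{4,6} = \mathbf{a}_{0i}\cdot\mathbf{C}\,\mathbf{a}_{0i}
```

Here c and k1 are stress-like parameters, k2 is dimensionless, C is the right Cauchy-Green tensor, and a0i are the reference fiber directions; a logarithmic SEF replaces the exponential fiber term with one that diverges at a finite (limiting) fiber stretch.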
Abstract: The aim of this paper is to describe the notion of death for prisoners and the ways in which they deal with it. They express indifference, coldness, and an inability to accept blame; they have no shame and no empathy. For them it is enough to perform acts verging on death. In this paper we describe the mechanisms and regularities of self-destructive behaviour in view of the relevant literature. The explanation of the phenomenon is of a biological and socio-psychological nature. It must be clearly stated that all forms of self-destructive behaviour result from various impulses, conflicts and deficits. That is why they should be treated differently in terms of the motivation and functions which they perform in a given group of people. Behind self-destruction there seems to be a motivational mechanism which forces prisoners to rebel and fight against the hated law and penitentiary systems. The imprisoned believe that pain and suffering inflicted on them by themselves are better than passive acceptance of repression. The variety of self-destructive acts is wide, and some of them take strange forms. We assume that the life-death barrier is a kind of game for them. If they cannot change their degrading situation, their life loses its sense.
Abstract: In recent works related to mixture discriminant analysis (MDA), the expectation-maximization (EM) algorithm is used to estimate the parameters of Gaussian mixtures. However, the initial values supplied to the EM algorithm affect the final parameter estimates. Moreover, when the EM algorithm is applied twice to the same data set, it can give different parameter estimates, and this affects the classification accuracy of MDA. To overcome this problem, we use the Self-Organizing Mixture Network (SOMN) algorithm to estimate the parameters of the Gaussian mixtures in MDA, since SOMN is more robust when random initial values of the parameters are used [5]. We show the effectiveness of this method on the popular simulated waveform data set and the real glass data set.
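The initialization sensitivity described above can be seen in a minimal one-dimensional, two-component EM implementation; this toy sketch illustrates the problem the abstract raises, not the SOMN remedy itself.

```python
# Minimal 1-D two-component Gaussian-mixture EM, run from different
# initial means to illustrate initialization sensitivity. Data are
# synthetic (two well-separated Gaussians).
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

def em_fit(x, mu_init, n_iter=50):
    mu = np.array(mu_init, dtype=float)
    sigma = np.ones(2)
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return mu

print("init (-1, 1)  ->", em_fit(data, [-1.0, 1.0]))
print("init (0, 0.1) ->", em_fit(data, [0.0, 0.1]))
```

With well-chosen starting means the estimates land near the true component means; poorly chosen starts can converge to a different, inferior solution, which is exactly what degrades MDA accuracy.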
Abstract: More recent satellite projects/programs make extensive use of real-time embedded systems. 16-bit processors that implement the MIL-STD-1750 standard architecture have been used in on-board systems. Most space applications have been written in Ada. From a futuristic point of view, 32-bit/64-bit processors are needed in the area of spacecraft computing, and therefore an effort is desirable in the study and survey of 64-bit architectures for space applications. This will also result in significant technology development in terms of VLSI and software tools for Ada (as the legacy code is in Ada).
There are several basic requirements for a special processor for this purpose. They include Radiation-Hardened (RadHard) devices, very low power dissipation, compatibility with existing operational systems, scalable architectures for higher computational needs, reliability, higher memory and I/O bandwidth, predictability, a real-time operating system, and manufacturability of such processors. Further, these may include the selection of FPGA devices, selection of EDA tool chains, design flow, partitioning of the design, pin count, performance evaluation, timing analysis, etc.
This project deals with a brief study of the 32- and 64-bit processors readily available in the market and with designing/fabricating a 64-bit RISC processor named RISC MicroProcessor with the added functionalities of an extended double-precision floating-point unit and a 32-bit signal processing unit acting as co-processors. In this paper, we emphasize the ease and importance of using an open core (the OpenSparc T1 Verilog RTL) and open-source EDA tools such as Icarus to develop FPGA-based prototypes quickly. Commercial tools such as Xilinx ISE for synthesis are also used when appropriate.
Abstract: In the context of computer numerical control (CNC) and computer-aided manufacturing (CAM), the capabilities of programming languages such as symbolic and intuitive programming, program portability and geometrical portfolio have special importance. They make it possible to save time and avoid errors during part programming, and they permit code re-use. Our updated literature review indicates that the current state of the art presents voids in parametric programming, program portability and programming flexibility. In response to this situation, this article presents a compiler implementation for EGCL (Extended G-code Language), a new, enriched CNC programming language which allows the use of descriptive variable names, geometrical functions and flow-control statements (if-then-else, while). Our compiler produces low-level, generic, elementary ISO-compliant G-code, thus allowing for flexibility in the choice of the executing CNC machine and for portability. Our results show that readable variable names and flow-control statements allow simplified and intuitive part programming and permit re-use of the programs. Future work includes allowing the programmer to define their own functions in terms of EGCL, in contrast to the current status of having them as built-in library functions.
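The compile-to-elementary-G-code idea can be sketched as a toy translator. The mini-syntax below (`SET`/`LINE`) is invented for illustration and is not the actual EGCL grammar; only the general pattern (named variables resolved at compile time into plain ISO moves) reflects the abstract.

```python
# Toy high-level-to-G-code translator illustrating the EGCL idea:
# named variables are resolved and emitted as elementary G01 moves.
# The SET/LINE syntax here is hypothetical.
def compile_to_gcode(source):
    env, out = {}, []
    for line in source.strip().splitlines():
        tokens = line.split()
        if tokens[0] == "SET":            # SET <name> <value>
            env[tokens[1]] = float(tokens[2])
        elif tokens[0] == "LINE":         # LINE <x> <y> -> linear move G01
            x, y = (env.get(t, t) for t in tokens[1:3])
            out.append(f"G01 X{float(x):.3f} Y{float(y):.3f}")
    return out

program = """
SET width 40.5
SET height 12
LINE width height
LINE 0 0
"""
print("\n".join(compile_to_gcode(program)))
# -> G01 X40.500 Y12.000
#    G01 X0.000 Y0.000
```

Because the output contains only elementary ISO-style blocks, any conforming controller could execute it, which is the portability argument made above.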
Abstract: In this paper, by using the continuation theorem of coincidence degree theory and M-matrix theory, and by constructing suitable Lyapunov functions, some sufficient conditions are obtained for the existence and global exponential stability of periodic solutions of recurrent neural networks with distributed delays and impulses on time scales. Since the boundedness of the activation functions g_j and h_j is not assumed, these results are less restrictive than those given in earlier references.
Abstract: Effective evaluation of software development effort is an important aspect of successful project management. Based on a large database of 4,106 completed projects, this study statistically examines the factors that influence development effort. The factors found to be significant for effort are project size, the average number of developers that worked on the project, the type of development, the development language, the development platform, and the use of rapid application development. Among these factors, project size is the most critical cost driver. Unsurprisingly, this study found that the use of CASE tools does not necessarily reduce development effort, which adds support to the claim that the effect of tool use is subtle. As many of the current estimation models are rarely or unsuccessfully used, this study proposes a parsimonious parametric model for the prediction of effort which is both simple and more accurate than previous models.
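A parsimonious parametric effort model of the kind argued for above is often a power law, effort ≈ a·size^b, fitted as a log-log linear regression. The sketch below uses fabricated project data purely for illustration; it is not the study's model or its 4,106-project database.

```python
# Fit effort = a * size^b by ordinary least squares in log-log space.
# The size/effort pairs below are invented for illustration.
import numpy as np

size_fp = np.array([100, 250, 400, 800, 1500, 3000])         # size (function points)
effort_ph = np.array([500, 1400, 2600, 6000, 12500, 30000])  # effort (person-hours)

b, log_a = np.polyfit(np.log(size_fp), np.log(effort_ph), 1)
a = np.exp(log_a)
print(f"effort ≈ {a:.2f} * size^{b:.2f}")

predicted = a * 1000 ** b   # predicted effort for a hypothetical 1000-FP project
print(f"predicted effort for 1000 FP: {predicted:.0f} person-hours")
```

An exponent b > 1 would indicate diseconomies of scale (effort growing faster than size), which is the usual finding when size dominates the cost drivers.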
Abstract: Software reliability, defined as the probability of a software system or application functioning without failure or error over a defined period of time, has been an important area of research for over three decades. Several research efforts aimed at developing models to improve reliability are currently underway. One of the most popular approaches to software reliability adopted by some of these research efforts involves the use of operational profiles to predict how software applications will be used. Operational profiles are a quantification of the usage patterns of a software application. The research presented in this paper investigates an innovative multi-agent framework for the automatic creation and management of operational profiles for generic distributed systems after their release into the market. The architecture of the proposed Operational Profile MAS (Multi-Agent System) is presented, along with detailed descriptions of the various models arrived at following the analysis and design phases of the proposed system. The operational profile in this paper is extended to comprise seven different profiles. Further, the criticality of operations is defined using a new composite metric in order to organize the testing process as well as to decrease the time and cost involved in this process. A prototype implementation of the proposed MAS is included as a proof of concept, and the framework is considered a step towards making distributed systems intelligent and self-managing.
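The core quantification step behind an operational profile is simple to illustrate: observed operation invocations become occurrence probabilities, which can then be composed with other factors into a criticality score. The operations, counts, and weights below are hypothetical, and the product used here is just one plausible composition, not the paper's composite metric.

```python
# Toy operational-profile quantification: usage log -> occurrence
# probabilities -> a simple usage-x-severity criticality ranking.
# All names and numbers are hypothetical.
from collections import Counter

usage_log = ["login", "search", "search", "checkout", "login", "search",
             "browse", "search", "checkout", "browse"]

counts = Counter(usage_log)
total = sum(counts.values())
profile = {op: n / total for op, n in counts.items()}  # occurrence probabilities

severity = {"login": 0.9, "search": 0.3, "checkout": 1.0, "browse": 0.1}
# Criticality = usage probability x failure severity (illustrative only).
criticality = {op: profile[op] * severity[op] for op in profile}
ranked = sorted(criticality, key=criticality.get, reverse=True)
print(profile)
print("most critical operation:", ranked[0])
```

Ranking operations this way is what lets a tester concentrate effort where failures are both likely and costly, which is the time/cost argument the abstract makes.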
Abstract: Optimization is often a critical issue for most system design problems. Evolutionary Algorithms (EAs) are population-based, stochastic search techniques, widely used as efficient global optimizers. However, finding the optimal solution to complex, high-dimensional, multimodal problems often requires computationally expensive function evaluations and is hence practically prohibitive. The Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model presented in our earlier work [14] reduced computation time by the controlled use of meta-models that partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. Situations such as model formation involving variable input dimensions and noisy data certainly cannot be covered by this assumption. In this paper we present an enhanced version of DAFHEA that incorporates a multiple-model based learning approach for the SVM approximator. DAFHEA-II (the enhanced version of the DAFHEA framework) also avoids the high computational expense of the additional clustering required by the original DAFHEA framework. The proposed framework has been tested on several benchmark functions, and the empirical results illustrate the advantages of the proposed technique.
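The surrogate-assisted loop described above (meta-model scoring most candidates, exact evaluation used sparingly) can be sketched schematically. A 1-nearest-neighbour regressor stands in for the SVM meta-model here, and the sphere function stands in for the expensive objective; neither is the DAFHEA implementation itself.

```python
# Schematic surrogate-assisted evolutionary loop in the spirit of
# approximate-fitness EAs: cheap surrogate scoring most generations,
# periodic exact evaluation refreshing the training archive.
import numpy as np

rng = np.random.default_rng(1)

def expensive_f(x):                     # stand-in for a costly objective (sphere)
    return float(np.sum(x ** 2))

archive_x, archive_y = [], []           # exactly-evaluated samples

def surrogate(x):                       # predict via nearest archived sample
    d = [np.linalg.norm(x - a) for a in archive_x]
    return archive_y[int(np.argmin(d))]

pop = rng.uniform(-5, 5, size=(20, 3))
for gen in range(30):
    if gen % 5 == 0 or len(archive_x) < 5:      # controlled exact evaluation
        fitness = np.array([expensive_f(x) for x in pop])
        archive_x.extend(pop); archive_y.extend(fitness)
    else:                                       # cheap approximate evaluation
        fitness = np.array([surrogate(x) for x in pop])
    parents = pop[np.argsort(fitness)[:10]]     # truncation selection
    pop = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.3, (20, 3))

best = min(expensive_f(x) for x in pop)
print("best exact fitness:", best)
```

The design point is the ratio of surrogate calls to exact calls: here four generations in five cost almost nothing, which is where the claimed computation-time saving comes from.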
Abstract: The purposes of this study were to evaluate the economic value of Phu Kradueng National Park by the travel cost method (TCM) and the contingent valuation method (CVM), and to estimate the demand for traveling and the willingness to pay. The data for this study were collected by conducting two large-scale surveys of users and non-users. A total of 1,016 users and 1,034 non-users were interviewed. The data were analyzed using multiple linear regression analysis and a logistic regression model, and the consumer surplus (CS) was obtained as the integral of the demand function for trips. The findings were as follows:
1) Using the travel cost method, which provides an estimate of direct benefits to park users, we found that visitors' total willingness to pay per visit was 2,284.57 baht, of which 958.29 baht was travel cost, 1,129.82 baht was expenditure for accommodation, food, and services, and 166.66 baht was consumer surplus, i.e. the visitors' net gain or satisfaction from the visit (the integral of the demand function for trips).
2) Thai visitors to Phu Kradueng National Park were further willing to pay an average of 646.84 baht per head per year to ensure the continued existence of Phu Kradueng National Park and to preserve their option to use it in the future.
3) Thai non-visitors, on the other hand, are willing to pay an average of 212.61 baht per head per year for the option and existence value provided by the Park.
4) The total economic value of Phu Kradueng National Park to Thai visitors and non-visitors taken together stands today at 9,249.55 million baht per year.
5) The users' average willingness to pay for access to Phu Kradueng National Park rises from 40 baht to 84.66 baht per head per trip for improved services such as road improvement, increased cleanliness, and upgraded information.
Further study is needed to investigate the potential market demand for bioprospecting in Phu Kradueng National Park and to investigate how a larger share of the economic benefits of tourism could be distributed to local residents.
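The consumer-surplus calculation referred to above is the area under the trip demand curve above the current travel cost. The demand function below is hypothetical, chosen only to make the integral concrete; it is not the study's estimated model.

```python
# Consumer surplus as the integral of a (hypothetical) linear trip
# demand function above the current travel cost, via trapezoidal sums.
import numpy as np

def demand(price):                 # trips demanded at a given travel cost (baht)
    return np.maximum(0.0, 5.0 - 0.002 * price)

current_cost = 2000.0
choke_price = 2500.0               # cost at which demand falls to zero
prices = np.linspace(current_cost, choke_price, 10001)
d = demand(prices)
consumer_surplus = float(np.sum((d[1:] + d[:-1]) / 2 * np.diff(prices)))
print(f"consumer surplus ≈ {consumer_surplus:.2f} baht")
# analytic check: ∫(5 - 0.002p)dp over [2000, 2500] = 250 baht
```

In the study, the same integral is taken over the econometrically estimated demand function rather than an assumed linear one.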
Abstract: In order to realize long-lived electric propulsion systems, we have been investigating an electrodeless plasma thruster. In our concept, a helicon plasma is accelerated by a magnetic nozzle for thrust production. In addition, the electromagnetic thrust can be enhanced by additional radio-frequency rotating electric field (REF) power in the magnetic nozzle. In this study, a direct measurement of the electromagnetic thrust and a probe measurement have been conducted using a laboratory model of the thruster without the REF power input. From the thrust measurement, it is shown that the thruster produces a sub-millinewton electromagnetic thrust force without the additional REF power. A jump in the thrust force and the plasma density is observed, due to the discharge mode transition from inductively coupled plasma to helicon-wave-excited plasma. The thermal thrust is theoretically estimated, and the total thrust force (the sum of the electromagnetic and thermal thrust forces) and the specific impulse are calculated to be up to 650 μN (plasma production power of 400 W, Ar gas mass flow rate of 1.0 mg/s) and 210 s (plasma production power of 400 W, Ar gas mass flow rate of 0.2 mg/s), respectively.
Abstract: Functional Magnetic Resonance Imaging (fMRI) is a noninvasive imaging technique that measures the hemodynamic response related to neural activity in the human brain. Event-related fMRI (efMRI) is a form of fMRI in which a series of fMRI images is time-locked to a stimulus presentation and averaged together over many trials. An event-related potential (ERP), in turn, is a measured brain response that is directly the result of a thought or perception. Here the neuronal response of the human visual cortex in normal healthy subjects has been studied. The subjects were asked to perform a visual three-choice reaction task; from the relative response of each subject, the corresponding neuronal activity in the visual cortex was imaged. The average number of neurons in the adult human primary visual cortex, in each hemisphere, has been estimated at around 140 million. Statistical analysis of this experiment was done with the SPM5 (Statistical Parametric Mapping version 5) software. The results show a robust design for imaging the neuronal activity of the human visual cortex.
Abstract: This paper presents experimental results on the leakage current waveforms that appear on a porcelain insulator surface due to the presence of artificial pollutants. The tests were carried out using the chemical compounds NaCl, Na2SiO3, H2SO4, CaO, Na2SO4, KCl, Al2SO4, MgSO4, FeCl3, and TiO2. The insulator surface was coated with these compounds and dried. It was then tested in a chamber where high voltage was applied. Using correspondence analysis, the results indicated that the fundamental harmonic of the leakage current was closely related to the applied voltage, and the third harmonic of the leakage current was closely related to the leakage current amplitude. The first-harmonic power was correlated with the first-harmonic amplitude of the leakage current, and the third-harmonic power was closely related to the third-harmonic amplitude. The compounds H2SO4 and Na2SiO3 affected the power factor the most, at around 70%; both are the most conductive, as the power factor increases drastically for these compounds.
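The harmonic quantities analyzed above can be extracted from a sampled waveform with an FFT. The sketch below builds a synthetic 50 Hz leakage-current waveform with third-harmonic content and recovers both amplitudes; the amplitudes are illustrative, not measured values from the experiments.

```python
# Recover fundamental and third-harmonic amplitudes of a synthetic
# leakage-current waveform via FFT. Amplitudes here are illustrative.
import numpy as np

f0, fs, T = 50.0, 10000.0, 0.2          # 50 Hz fundamental, 10 kHz sampling, 10 cycles
t = np.arange(0, T, 1 / fs)
current = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.35 * np.sin(2 * np.pi * 3 * f0 * t)

spectrum = np.abs(np.fft.rfft(current)) * 2 / len(t)   # single-sided amplitudes
freqs = np.fft.rfftfreq(len(t), 1 / fs)

h1 = spectrum[np.argmin(np.abs(freqs - f0))]           # fundamental amplitude
h3 = spectrum[np.argmin(np.abs(freqs - 3 * f0))]       # third-harmonic amplitude
print(f"H1 ≈ {h1:.3f} A, H3 ≈ {h3:.3f} A")
```

Sampling an integer number of fundamental periods (here ten) places the harmonics exactly on FFT bins, so the amplitudes are recovered without spectral leakage.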
Abstract: Lake Nasser is one of the largest reservoirs in the
world. Over 120 million metric tons of sediments are deposited in its
dead storage zone every year. The main objective of the present work
was to determine the physical and chemical characteristics of Lake
Nasser sediments. The sample had a relatively low surface area of 2.9
m2/g which increased more than 3-fold upon chemical activation. The
main chemical elements of the raw sediments were C, O and Si with
some traces of Al, Fe and Ca. The organic functional groups for the
tested sample included O-H, C=C, C-H and C-O, with indications of
Si-O and other metal-C and/or metal-O bonds normally associated
with clayey materials. Potentiometric titration of the sample in different ionic strength backgrounds revealed an alkaline material with a very strong positive surface charge at pH values just below the pH of zero charge, which is ~9. Surface interactions of the
sediments with the background electrolyte were significant. An
advanced surface complexation model was able to capture these
effects, employing a single-site approach to represent protolysis
reactions in aqueous solution, and to determine the significant surface
species in the pH range of environmental interest.
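For reference, a single-site, two-pK description of the protolysis reactions mentioned above takes the generic form below (≡S denotes a surface site). The specific constants fitted for the Lake Nasser sediments are not reproduced here; this is the standard form such models take.

```latex
\equiv\mathrm{SOH}_2^{+} \;\rightleftharpoons\; \equiv\mathrm{SOH} + \mathrm{H}^{+} \quad (K_{a1}),
\qquad
\equiv\mathrm{SOH} \;\rightleftharpoons\; \equiv\mathrm{SO}^{-} + \mathrm{H}^{+} \quad (K_{a2})
```

In this description the point of zero charge satisfies pH_pzc = (pK_a1 + pK_a2)/2, consistent with the ~9 value reported for this material.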
Abstract: For relatively small particles of aluminum, a significant fraction (5%) is observed to corrode before passivation occurs at moderate temperatures (>50 °C) in de-ionized water within one hour. Physical contact with alumina
powder results in a significant increase in both the rate of corrosion
and the extent of corrosion before passivation. Whereas the resulting
release of hydrogen gas could be of commercial interest for portable
hydrogen supply systems, the fundamental aspects of Al corrosion acceleration in the presence of dispersed alumina particles are equally
important. This paper investigates the effects of various amounts of
alumina on the corrosion rate of aluminum powders in water and the
effect of multiple additions of aluminum into a single reactor.
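The hydrogen release mentioned above comes from the well-known overall aluminum-water reaction; one commonly cited stoichiometry (the exact hydroxide/oxyhydroxide product depends on temperature) is:

```latex
2\,\mathrm{Al} + 6\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{Al(OH)_3} + 3\,\mathrm{H_2}
```

i.e. 1.5 mol of H2 is released per mol of Al consumed, which is what makes the corrosion route attractive for portable hydrogen supply.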
Abstract: The economic dispatch problem is an optimization problem whose objective function is highly nonlinear, non-convex, non-differentiable and may have multiple local minima. Therefore, classical optimization methods may fail to converge or may get trapped in a local minimum. This paper presents a comparative study of four different evolutionary algorithms, namely the genetic algorithm, bacteria foraging optimization, ant colony optimization and particle swarm optimization, for solving the economic dispatch problem. All the methods are tested on the IEEE 30-bus test system. Simulation results are presented to show the comparative performance of these methods.
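One of the four algorithms compared, particle swarm optimization, can be sketched on a toy dispatch problem. The three-generator quadratic cost curves, limits, and 300 MW demand below are invented for illustration; the paper's experiments use the IEEE 30-bus system, not this toy case.

```python
# Toy PSO for a 3-generator economic dispatch: minimize total quadratic
# fuel cost subject to a power-balance penalty. All data are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
a = np.array([0.008, 0.009, 0.007])   # cost curve: a*P^2 + b*P + c ($/h)
b = np.array([7.0, 6.3, 6.8])
c = np.array([200.0, 180.0, 140.0])
demand, pmin, pmax = 300.0, 10.0, 150.0

def cost(P):                          # penalized dispatch cost for one particle
    fuel = np.sum(a * P**2 + b * P + c)
    return fuel + 1000.0 * abs(np.sum(P) - demand)   # power-balance penalty

n, dims = 30, 3
x = rng.uniform(pmin, pmax, (n, dims)); v = np.zeros((n, dims))
pbest = x.copy(); pbest_f = np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, pmin, pmax)                   # enforce generator limits
    f = np.array([cost(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("dispatch (MW):", gbest.round(2), " total cost:", round(cost(gbest), 2))
```

The non-differentiable penalty term is exactly the kind of objective feature that motivates population-based methods over gradient-based ones here.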
Abstract: In this paper, a new algorithm for codebook generation is proposed for vector quantization (VQ) in image coding. The significant features of the training image vectors are extracted using the proposed Orthogonal Polynomials based transformation. We propose to generate the codebook by partitioning these feature vectors into a binary tree. Each feature vector at a non-terminal node of the binary tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. The binary-tree codebook is used for encoding and decoding the feature vectors. In the decoding process, the feature vectors are subjected to the inverse transformation with the help of the basis functions of the proposed Orthogonal Polynomials based transformation to recover the approximated input image training vectors. The results of the proposed coding are compared with those of VQ using the Discrete Cosine Transform (DCT) and the Pairwise Nearest Neighbor (PNN) algorithm. The new algorithm results in a considerable reduction in computation time and provides better reconstructed picture quality.
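The binary-tree codebook idea (one feature compared against one threshold at each internal node, leaf centroids as codewords) can be sketched as follows. The split rule here (highest-variance feature, median threshold) and the random stand-in feature vectors are illustrative choices; the Orthogonal Polynomials transformation itself is not reproduced.

```python
# Sketch of a binary-tree VQ codebook: recursive single-feature
# threshold splits, leaf centroids as codewords, O(depth) encoding.
import numpy as np

rng = np.random.default_rng(3)
vectors = rng.normal(0, 1, (256, 4))        # stand-in feature vectors

def build_tree(vecs, depth, max_depth=3):
    if depth == max_depth or len(vecs) < 2:
        return {"leaf": True, "codeword": vecs.mean(axis=0)}
    feat = int(np.argmax(vecs.var(axis=0)))  # split on highest-variance feature
    thr = float(np.median(vecs[:, feat]))    # median threshold balances the split
    left, right = vecs[vecs[:, feat] <= thr], vecs[vecs[:, feat] > thr]
    if len(left) == 0 or len(right) == 0:
        return {"leaf": True, "codeword": vecs.mean(axis=0)}
    return {"leaf": False, "feat": feat, "thr": thr,
            "lo": build_tree(left, depth + 1, max_depth),
            "hi": build_tree(right, depth + 1, max_depth)}

def encode(tree, v):                         # walk the tree to a codeword
    while not tree["leaf"]:
        tree = tree["lo"] if v[tree["feat"]] <= tree["thr"] else tree["hi"]
    return tree["codeword"]

tree = build_tree(vectors, 0)                # depth-3 tree -> up to 8 codewords
err = np.mean([np.sum((v - encode(tree, v)) ** 2) for v in vectors])
print("mean quantization error:", round(float(err), 3))
```

Encoding costs one scalar comparison per tree level rather than a full nearest-neighbour search over the codebook, which is the source of the computation-time reduction claimed above.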
Abstract: We show that Chebyshev Polynomials are a practical representation of computable functions on the computable reals. The paper presents error estimates for common operations and demonstrates that Chebyshev Polynomial methods would be more efficient than Taylor Series methods for evaluation of transcendental functions.
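The premise above is easy to demonstrate numerically: a modest-degree Chebyshev expansion represents a smooth transcendental function on [-1, 1] to near machine precision. The sketch below uses NumPy's Chebyshev utilities with exp as the example function; it illustrates the representation, not the paper's error-estimate machinery.

```python
# Degree-16 Chebyshev interpolant of exp on [-1, 1], evaluated against
# the true function to show near-machine-precision accuracy.
import numpy as np
from numpy.polynomial import chebyshev as C

coeffs = C.chebinterpolate(np.exp, 16)       # interpolate at Chebyshev points
x = np.linspace(-1, 1, 1001)
max_err = np.max(np.abs(C.chebval(x, coeffs) - np.exp(x)))
print(f"degree 16, max error on [-1,1]: {max_err:.2e}")
```

The rapidly decaying coefficients also give a cheap a-posteriori error bound (the tail sum), which is the kind of estimate that makes Chebyshev representations attractive for computable-real arithmetic.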