Abstract: The Rey area is located in Tehran Province, and the numerous archaeological ruins there indicate that settlement began several thousand years ago. In this paper, the main investigation items are the analysis of oil components and of groundwater quality inside monitoring wells. By determining the oil content of a well, the pollution source can be identified by comparing the oil found in the well with the oil products used inside and outside the oil farm. The investigated items comprise analysis of BTEX (benzene, toluene, ethylbenzene, xylene), gas chromatographic distillation characteristics, water content, density, sulfur content, lead content, atmospheric distillation, and MTBE (methyl tertiary-butyl ether). Analysis of the polluting oil components showed that, except for MW (Monitoring Well) 10 and MW 15, in which oil with slightly heavier components was detected, the polluting oil is most likely light oil.
Abstract: In this paper, we provide complete end-to-end delay analyses, including the relay nodes, for instant messages. The Message Session Relay Protocol (MSRP) provides congestion control for large messages in the Instant Messaging (IM) service. Large messages are broken into several chunks. According to the IETF specification of the MSRP relay extensions, these chunks may traverse at most two relay nodes before reaching the destination. We discuss the current solutions for sending large instant messages and introduce a proposal to reduce message flows in the IM service. We consider a virtual traffic parameter, i.e., the relay nodes are stateless and non-blocking for scalability; each relay node is also assumed to receive input at a constant bit rate. We provide a new scheduling policy that schedules chunks according to the delivery time stamp tags of their previous node. Validation and analysis are shown for this scheduling policy. The performance analysis with the model introduced in this paper is simple and straightforward, and it leads to reduced message flows in the IM service.
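The timestamp-based chunk scheduling described above can be illustrated with a minimal Python sketch. The chunk representation and function name below are our own illustrative assumptions, not taken from the MSRP specification or the paper:

```python
import heapq

def schedule_chunks(chunks):
    """Order chunks for forwarding by the delivery time stamp
    tagged at the previous relay node (earliest tag first).

    `chunks` is a list of (prev_delivery_timestamp, chunk_id) pairs;
    both field names are illustrative, not from RFC 4976.
    """
    heap = list(chunks)
    heapq.heapify(heap)  # min-heap keyed on the timestamp tag
    order = []
    while heap:
        ts, chunk_id = heapq.heappop(heap)
        order.append(chunk_id)
    return order
```

For example, chunks tagged with timestamps 3, 1 and 2 would be forwarded in the order of their tags, regardless of arrival order.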
Abstract: Air conditioning is mainly used as a human comfort
cooling medium. It is used more in high-temperature countries such as
Malaysia. Proper estimation of the cooling load is needed to achieve the ideal
temperature; improper estimation leads to over- or under-sizing, while
the ideal temperature should be sufficiently comfortable.
This study develops a program to calculate the ideal cooling load
demand, matched with the heat gain, so that cooling load estimation
becomes straightforward. The objective of this study is to
develop a user-friendly and easily accessible cooling load program, so
that the cooling load can be estimated by any individual
rather than by rule-of-thumb. The software was developed
using the Matlab GUI. The development is valid only for
common buildings in Malaysia. An office building was selected as a
case study to verify the applicability and accuracy of the developed software.
In conclusion, the main objective was successfully achieved: the developed
software is user friendly and easily estimates the cooling load demand.
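The heat gain balance such a program evaluates can be sketched in a few lines. The sketch below covers only the conductive term Q = U·A·ΔT summed over the envelope; the U-values, areas and design temperatures are assumed values for illustration, not the paper's Matlab implementation:

```python
def sensible_heat_gain(surfaces, indoor_c, outdoor_c):
    """Sum conductive sensible heat gain Q = U * A * dT over the
    building envelope (U in W/m^2.K, A in m^2, temperatures in deg C).

    `surfaces` is a list of (U, A) pairs; all values are illustrative.
    """
    dt = outdoor_c - indoor_c
    return sum(u * a * dt for u, a in surfaces)

# Example: two walls and a roof, 24 C indoors, 33 C outdoors
# (assumed, roughly Malaysian, design conditions).
load_w = sensible_heat_gain([(2.5, 30.0), (2.5, 30.0), (0.9, 100.0)],
                            24.0, 33.0)
```

A full cooling load estimate would add solar, ventilation, occupant and equipment gains on top of this conductive term.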
Abstract: Market based models are frequently used in the resource
allocation on the computational grid. However, as the size of
the grid grows, it becomes difficult for the customer to negotiate
directly with all the providers. Middle agents are introduced to
mediate between the providers and customers and facilitate the
resource allocation process. The most frequently deployed middle
agents are the matchmakers and the brokers. The matchmaking agent
finds possible candidate providers who can satisfy the requirements
of the consumers, after which the customer directly negotiates with
the candidates. Broker agents, in contrast, mediate the negotiation with
the providers in real time.
In this paper we present a new type of middle agent, the marketmaker.
Its operation rests on two parallel processes: through
the investment process the marketmaker acquires resources and
resource reservations in large quantities, while through the resale process
it sells them to the customers. The marketmaker relies
on the fact that, through its global view of the grid, it can
perform a more efficient resource allocation than is possible in
one-to-one negotiations between customers and providers.
We present the operation and the governing algorithms
of the marketmaker agent, contrasting it with the matchmaker and
broker agents. Through a series of simulations in the task oriented
domain we compare the operation of the three agent types. We find
that the use of the marketmaker agent leads to better performance in the
allocation of large tasks and a significant reduction of the messaging
overhead.
Abstract: The presence of cold air together with the convergent
topography of the Lut valley over the valley's sloping terrain can
generate low-level jets (LLJ). Moreover, valley-parallel
pressure gradients and a northerly LLJ are produced as a result of
large-scale processes. In this numerical study the regional MM5
model was run to obtain an appropriate dynamical analysis
of the flows in the region for summer and winter. The results of this
study show that summer synoptic systems cause the
formation of north-south pressure gradients in the valley, which can
lead to winds with velocities above 14 m s-1
and to severe dust and wind storms lasting more than 120 days.
In winter, by contrast, the presence of cold air masses in the region
causes the average speed of the LLJs to decrease; at that time, downslope
flows are important in creating the nocturnal LLJs.
Abstract: Traffic density, an indicator of traffic
conditions, is one of the most critical characteristics for
Intelligent Transport Systems (ITS). This paper investigates
recursive traffic density estimation using the information
provided from inductive loop detectors. On the basis of the
phenomenological relationship between speed and density, the
existing studies incorporate a state space model and update the
density estimate using vehicular speed observations via the
extended Kalman filter, where an approximation is made
because of the linearization of the nonlinear observation
equation. In practice, this may lead to substantial estimation
errors. This paper incorporates a suitable transformation to
deal with the nonlinear observation equation so that the
approximation is avoided when using Kalman filter to
estimate the traffic density. A numerical study is conducted. It
is shown that the developed method outperforms the existing
methods for traffic density estimation.
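The idea of transforming the observation so that the standard linear Kalman filter applies can be sketched as follows. The Greenberg speed-density relation v = v_m ln(k_j / k) is assumed purely for illustration (the paper's exact speed-density model and parameter values are not reproduced here); under it, the transformed measurement z = k_j exp(-v / v_m) equals the density k, making the observation equation linear:

```python
import math

def kf_density(speeds, vm=30.0, kj=150.0, q=0.5, r=4.0, k0=50.0, p0=10.0):
    """Scalar Kalman filter for traffic density k (veh/km).

    Assumes the Greenberg relation v = vm * ln(kj / k) (illustrative).
    Each speed observation v is transformed to z = kj * exp(-v / vm),
    which equals k exactly, so the observation becomes linear
    (z = k + noise) and no EKF linearization is needed.
    All model parameters here are assumed values.
    """
    k_est, p = k0, p0
    estimates = []
    for v in speeds:
        # predict: random-walk density model, process noise q
        p += q
        # transform the nonlinear speed observation into a linear one
        z = kj * math.exp(-v / vm)
        # standard linear Kalman update
        gain = p / (p + r)
        k_est += gain * (z - k_est)
        p *= (1.0 - gain)
        estimates.append(k_est)
    return estimates

# noise-free speeds generated from a true density of 80 veh/km
true_k = 80.0
v_true = 30.0 * math.log(150.0 / true_k)
est = kf_density([v_true] * 50)
```

Starting from an initial guess of 50 veh/km, the estimate converges to the true density of 80 veh/km on this noise-free input.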
Abstract: In this paper we present a technique to speed up
ICA, based on the idea of reducing the dimensionality of the data
set while preserving the quality of the results. In particular we refer to
the FastICA algorithm, which uses the kurtosis as the statistical property
to be maximized. By performing a particular Johnson-Lindenstrauss-like
projection of the data set, we find the minimum dimensionality
reduction rate ρ, defined as the ratio between the size k of the reduced
space and the original size d, which guarantees a narrow confidence
interval for this estimator with a high confidence level. The derived
dimensionality reduction rate depends on a system control parameter
β that is easily computed a priori on the basis of the observations only.
Extensive simulations have been carried out on different sets of real-world
signals. They show that the dimensionality reduction is in fact very
high, that it preserves the quality of the decomposition, and that it
impressively speeds up FastICA. On the other hand, a set of signals for which the
estimated reduction rate is greater than 1 exhibits bad decomposition
results if reduced, thus validating the reliability of the parameter β.
We are confident that our method will lead to a better approach to
real-time applications.
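The two ingredients named above, a Johnson-Lindenstrauss-style random projection and the kurtosis statistic maximized by FastICA, can be sketched generically. The projection below is a standard Gaussian one and does not reproduce the paper's specific construction or the computation of β:

```python
import math
import random

def jl_project(samples, k, seed=0):
    """Johnson-Lindenstrauss-style random projection of d-dimensional
    samples down to k dimensions (a generic sketch; the paper's exact
    projection and control parameter beta are not reproduced here).
    """
    rng = random.Random(seed)
    d = len(samples[0])
    scale = 1.0 / math.sqrt(k)
    # k x d Gaussian projection matrix
    proj = [[rng.gauss(0.0, 1.0) * scale for _ in range(d)]
            for _ in range(k)]
    return [[sum(row[j] * x[j] for j in range(d)) for row in proj]
            for x in samples]

def kurtosis(xs):
    """Sample excess kurtosis, the statistic FastICA maximizes here."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (var ** 2) - 3.0

# reduction rate rho = k / d, e.g. 100 dimensions cut to 20
rho = 20 / 100
```

FastICA itself would then run on the k-dimensional projected data instead of the original d-dimensional data.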
Abstract: Approximate tandem repeats in a genomic sequence are
two or more contiguous, similar copies of a pattern of nucleotides.
They are used in DNA mapping, studying molecular evolution
mechanisms, forensic analysis and research in diagnosis of inherited
diseases. Their functions are still under investigation and not yet well
defined, but growing biological databases, together with tools for
identification of these repeats, may lead to the discovery of their specific
roles or correlations with particular features. This paper presents a new
approach for finding approximate tandem repeats in a given sequence,
where the similarity between consecutive repeats is measured using
the Hamming distance. It is an enhancement of a method for finding
exact tandem repeats in DNA sequences based on the Burrows-
Wheeler transform.
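The repeat definition used above can be illustrated with a naive scan. This brute-force sketch only demonstrates what counts as an approximate tandem repeat under the Hamming distance; the paper's Burrows-Wheeler-based method is far more efficient and is not reproduced here:

```python
def hamming(a, b):
    """Number of mismatching positions between equal-length strings."""
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def find_approx_tandem_repeats(seq, period, max_mismatch):
    """Scan for two contiguous copies of a length-`period` pattern
    whose Hamming distance is at most `max_mismatch`.

    Naive O(n * period) illustration of the definition only.
    """
    hits = []
    for i in range(len(seq) - 2 * period + 1):
        left = seq[i:i + period]
        right = seq[i + period:i + 2 * period]
        if hamming(left, right) <= max_mismatch:
            hits.append((i, left, right))
    return hits
```

For instance, in "ACGTACCT" the adjacent blocks "ACGT" and "ACCT" differ in one position, so they form an approximate tandem repeat when one mismatch is allowed.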
Abstract: Thermal water hammer is a special type of water
hammer which rarely occurs in heat exchangers. In biphasic fluids, if
steam bubbles are surrounded by condensate, they will suddenly
collapse because the condensate temperature is lower than the steam
temperature. As a result, the vacuum caused by the extreme change in
volume leads to movement of the condensate in all directions; the
collision of the condensate streams produces a force that leads to
severe stress in the pipe wall. This phenomenon is a special type of
water hammer. In terms of fluid mechanics, it is a particular type of
transient flow during which an abrupt change of the fluid leads to a
sudden pressure change inside the tube. In this paper, the mechanism
of the abrupt failure of 80 of the 481 tubes of a methanol heat
exchanger is discussed.
Initially, because of the excessive temperature difference between the
heat transfer fluids and the simultaneous failure of 80 tubes, thermal
shock was presupposed as the cause of failure. Deeper investigation of
the cross-sections of the failed tubes showed that the failure was of
the ductile type, so the first hypothesis was rejected. Further analysis
and more accurate experiments revealed that the failure of the tubes
was caused by thermal water hammer. Finally, the causes of thermal
water hammer and various solutions to avoid this mechanism are discussed.
Abstract: In this paper we propose a blind algorithm for peak-to-average power ratio (PAPR) reduction in OFDM systems, based on the selected mapping (SLM) algorithm as a distortionless method. The main drawback of the conventional SLM technique is the need to transmit several side information bits for each data block, which results in a loss in data rate. In the proposed method a specific number of carriers in the OFDM frame is reserved to be rotated with one of the possible phases, according to the number of phase sequence blocks in the SLM algorithm. Reserving a limited number of carriers does not affect the PAPR reduction of the OFDM signal. Simulation results show that using the ML criterion at the receiver leads to the same system performance as the conventional SLM algorithm, while there is no need to send any side information to the receiver.
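The conventional SLM baseline that the abstract builds on can be sketched in a few lines. This toy version uses a naive inverse DFT over four carriers; the paper's blind variant, which encodes the chosen phase sequence in reserved, rotated carriers so that no side information is transmitted, is not sketched here:

```python
import cmath

def idft(symbols):
    """Naive inverse DFT producing the time-domain OFDM signal."""
    n = len(symbols)
    return [sum(symbols[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def papr(signal):
    """Peak-to-average power ratio of a complex baseband signal."""
    powers = [abs(s) ** 2 for s in signal]
    return max(powers) / (sum(powers) / len(powers))

def slm_select(data, phase_seqs):
    """Conventional SLM: try every candidate phase sequence and keep
    the time-domain signal with minimum PAPR."""
    best = None
    for phases in phase_seqs:
        candidate = idft([d * p for d, p in zip(data, phases)])
        if best is None or papr(candidate) < papr(best):
            best = candidate
    return best
```

With four all-ones data symbols (PAPR of 4) and the two illustrative phase sequences [1, 1, 1, 1] and [1, 1, -1, 1], the second candidate is selected and the PAPR drops to 1.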
Abstract: Artificial Intelligence (AI) methods are increasingly being used for problem solving. This paper concerns using AI-type learning machines for the power quality problem, a problem of general interest to power systems, which must provide quality power to all appliances. Electrical power of good quality is essential for the proper operation of electronic equipment such as computers and PLCs. Malfunction of such equipment may lead to loss of production or disruption of critical services, resulting in huge financial and other losses. It is therefore necessary that critical loads be supplied with electricity of acceptable quality. Recognizing the presence of a disturbance and classifying it into a particular type is the first step in combating the problem. In this work two classes of AI methods for power quality data mining are studied: Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs). We show that SVMs are superior to ANNs in two critical respects: SVMs train and run an order of magnitude faster, and SVMs give higher classification accuracy.
Abstract: The basis of this paper is the assumption that the graviton
is a measurable entity of molecular gravitational acceleration, not a
hypothetical one. Adopting this assumption as an axiom is tantamount
to fully opening the previously locked door to the theory of the
boundary between laminar and turbulent flows. It leads to the theorem
that the division of flows of Newtonian (viscous) fluids into laminar
and turbulent is true only if the fluid is influenced by a powerful
external force field. The mathematical interpretation of this theorem,
presented in this paper, shows that the boundary between laminar and
turbulent flow can be determined theoretically. This is a novelty,
because thus far this boundary has been determined only empirically,
and the reasons for its existence were unknown.
Abstract: The flow field in a centrifugal fan is highly complex
with flow reversal taking place on the suction side of impeller and
diffuser vanes. Generally performance of the centrifugal fan could be
enhanced by judiciously introducing splitter vanes so as to improve
the diffusion process. An extensive numerical whole field analysis on
the effect of splitter vanes placed in discrete regions of suspected
separation points is possible using CFD. This paper examines the
effect of splitter vanes corresponding to various geometrical
locations on the impeller and diffuser. The analysis shows that
splitter vanes located near the diffuser exit improve the static
pressure recovery across the diffusing domain to a larger extent. Also
it is found that splitter vanes located at the impeller trailing edge and
diffuser leading edge at the mid-span of the circumferential distance
between the blades show a marginal improvement in the static
pressure recovery across the fan. However, splitters provided near to
the suction side of the impeller trailing edge (25% of the
circumferential gap between the impeller blades towards the suction
side), adversely affect the static pressure recovery of the fan.
Abstract: The sol-gel method has been used to fabricate
nanocomposite films, composed of halloysite clay mineral and
nanocrystalline TiO2, on glass substrates. The synthesis involves a
simple chemical method utilizing a nonionic surfactant molecule as a
pore-directing agent, along with an acetic acid-based sol-gel route in
the absence of water molecules. Thermal treatment of the composite
films at 450 °C ensures the elimination of the organic material and
leads to the formation of TiO2 nanoparticles on the surface of the
halloysite nanotubes. Microscopy techniques and porosimetry methods
were used to delineate the structural characteristics of the materials.
The nanocomposite films produced have no cracks, and an active
anatase crystal phase with small crystallite size was deposited on the
halloysite nanotubes. The photocatalytic properties of the new
materials were examined through the decomposition of the Basic Blue 41
azo dye in solution. These nanotechnology-based composite films
show high efficiency for the dye's discoloration in spite of the
different halloysite quantities and the small amount of halloysite/TiO2
catalyst immobilized onto the glass substrates. Moreover, we examined the
modification of the halloysite/TiO2 films with silver particles in order
to improve the photocatalytic properties of the films. Indeed, the
presence of silver nanoparticles enhances the discoloration rate of the
Basic Blue 41 compared to the efficiencies obtained for unmodified
films.
Abstract: The main goal of this work is to propose a way to use two
nontraditional algorithms in combination for solving topological
problems on telecommunication concentrator networks. The
algorithms suggested are the Simulated Annealing algorithm and the
Genetic Algorithm. The Simulated Annealing algorithm unifies
the well-known local search algorithms; in addition, Simulated
Annealing allows the acceptance of moves in the search space which lead
to solutions with higher cost, in order to attempt to overcome
local minima. The Genetic Algorithm is a heuristic approach
that is used in a wide range of optimization work. In recent
years this approach has also been widely applied in
telecommunication network planning. In order to solve more or
less complex planning problems, it is important to find the most
appropriate parameters for initializing the algorithm.
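The acceptance rule that lets Simulated Annealing escape local minima can be shown in a generic loop. A worse move (cost increase delta) is accepted with probability exp(-delta / T), and T shrinks over time; the concentrator-network cost function itself is not reproduced, so `cost` and `neighbor` below are caller-supplied placeholders and the parameter values are illustrative:

```python
import math
import random

def simulated_annealing(cost, neighbor, start, t0=10.0, cooling=0.95,
                        steps=2000, seed=0):
    """Generic Simulated Annealing loop.

    Accepts a worse neighbor with probability exp(-delta / T), which
    is what allows escaping local minima; T decays geometrically.
    """
    rng = random.Random(seed)
    current, best = start, start
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        delta = cost(cand) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best

# toy usage: minimize a 1-D multimodal function
f = lambda x: (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best = simulated_annealing(f, step, start=-5.0)
```

The Genetic Algorithm half of the hybrid (selection, crossover, mutation over a population) follows the same black-box pattern over the cost function and is omitted here.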
Abstract: This paper presents knowledge about the types of tests of
material properties for selected rapid prototyping technologies. The
rapid prototyping technologies used today for the production of
models and final parts employ materials whose initial state is a
solid, liquid or powder structure. In the solid state, various forms
are used, such as pellets, wire or laminates. The basic range of
materials includes paper, nylon, wax, resins, metals and ceramics. In
the Fused Deposition Modeling (FDM) rapid prototyping technology,
the basic materials are mainly ABS (Acrylonitrile Butadiene
Styrene), polyamide, polycarbonate, polyethylene and polypropylene.
For advanced FDM applications, special materials are used, such as
silicon nitride, PZT (piezoceramic lead zirconate titanate),
aluminium oxide, hydroxyapatite and stainless steel.
Abstract: Calcium is a vital second messenger used in signal transduction. Calcium controls secretion, cell movement, muscular contraction, cell differentiation, ciliary beating and so on. Two theories have been used to simplify the system of reaction-diffusion equations for calcium into a single equation. One is the excess buffer approximation (EBA), which assumes that mobile buffer is present in excess and cannot be saturated. The other is the rapid buffer approximation (RBA), which assumes that calcium binding to buffer is rapid compared with the calcium diffusion rate. In the present work, an attempt has been made to develop a model for calcium diffusion under the excess buffer approximation in neuron cells. This model incorporates the effect of [Na+] influx on [Ca2+] diffusion, variable calcium and sodium sources, the sodium-calcium exchange protein, the sarcolemmal calcium ATPase pump, and sodium and calcium channels. The proposed mathematical model leads to a system of partial differential equations, which have been solved numerically using the Forward Time Centered Space (FTCS) approach. The numerical results have been used to study the relationships among different parameters such as buffer concentration, association rate and calcium permeability.
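The FTCS scheme named above can be sketched on a drastically simplified version of the model: a 1-D diffusion equation u_t = D u_xx with a clamped point source at one boundary. The influx terms, pumps, exchangers and channels of the full EBA model are omitted, and all parameter values are illustrative:

```python
def ftcs_diffusion(n=50, steps=400, d=1.0, dx=1.0, dt=0.2,
                   source_value=1.0):
    """Forward Time Centered Space (FTCS) scheme for u_t = D u_xx,
    a simplified stand-in for the full EBA calcium model.

    The left boundary is clamped to `source_value` (a point calcium
    source); the right boundary is clamped to 0.
    Stability requires r = D * dt / dx^2 <= 0.5.
    """
    r = d * dt / dx ** 2
    assert r <= 0.5, "FTCS stability condition violated"
    u = [0.0] * n
    u[0] = source_value
    for _ in range(steps):
        new = u[:]
        for i in range(1, n - 1):
            # forward step in time, centered second difference in space
            new[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        new[0] = source_value
        new[-1] = 0.0
        u = new
    return u
```

The profile decays monotonically away from the source, as expected for pure diffusion; the full model would add the reaction and transport terms to the update at each grid point.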
Abstract: The introduction of haptic elements into graphic user interfaces is becoming more widespread. Since haptics are being introduced rapidly into computational tools, investigating how these models affect Human-Computer Interaction would help define how to integrate and model new modes of interaction. The interest of this paper is to discuss and investigate the issues surrounding Haptic and Graphic User Interface (GUI) designs as separate systems, as well as to understand how they work in tandem. The development of these systems is explored from a psychological perspective, based on how usability is addressed through learning and affordances, as defined by J. J. Gibson. Haptic design can be a powerful tool, aiding intuitive learning. The problem discussed in the text is how haptic interfaces can be integrated within a GUI without a sense of frivolity. Juxtaposing haptics and graphic user interfaces raises issues of motivation; GUIs tend to involve a performatory process, while haptic interfaces use affordances to learn tool use. On a deeper view, two modes of perception, foveal and ambient, dictate perception. These two modes were once thought to work in tandem; however, it has been discovered that they work independently of each other. One mode interprets orientation in space, providing for posture, locomotion, and motor skills, while the other, working with variations of the sensory information, instructs perceptions of object-task performance. It is contended here that object-task performance is a key element in the use of haptic interfaces, because exploratory learning uses affordances in order to use an object without mediating the experience cognitively. It is a direct experience that, through iteration, can lead to skill sets. It is also indicated that object-task performance will not work as efficiently without the use of exploratory or kinesthetic learning practices.
Therefore, object-task performance is not as congruently explored in GUIs as it is practiced in haptic interfaces.
Abstract: The modified Claus process is commonly used in oil
refining and gas processing to recover sulfur and destroy
contaminants formed in upstream processing. A Claus furnace feed
containing a relatively low concentration of H2S may be incapable of
producing a stable flame. Also, incomplete combustion of
hydrocarbons in the feed can lead to deterioration of the catalyst in
the reactors due to soot or carbon deposition. Therefore, special
consideration is necessary to achieve the appropriate overall sulfur
recovery. In this paper, some configurations available to treat lean
acid gas streams are described and the most appropriate ones are
studied to overcome low H2S concentration problems. As a result,
overall sulfur recovery is investigated for feed preheating and hot gas
configurations.
Abstract: High-performance Resistive Random Access Memory
(RRAM) based on HfOx has been prepared, and its temperature
instability has been investigated in this work. With increasing
temperature, it is found that: the leakage current at the high
resistance state increases, which can be explained by the higher
density of traps inside the dielectric (related to trap-assisted
tunneling) and leads to a smaller On/Off ratio; the set and reset
voltages decrease, which may be attributed to the higher oxygen ion
mobility, in addition to the reduced potential barrier to create /
recover oxygen ions (or oxygen vacancies); and the impact of
temperature on RRAM retention degradation is more serious than that
of electrical bias.