Abstract: This paper presents an applied study of a multivariate AR(p) process fitted, using Bayesian statistics, to daily data from U.S. commodity futures markets. The first part gives a detailed description of the methods used. In the second part, two BVAR models are chosen: one assuming a lognormal distribution of prices conditioned on the parameters, the other a normal distribution. For comparison, two simple benchmark models commonly used in today's financial mathematics are chosen. The article compares the predictive quality of all the models, tries to find an adequate rate of forgetting of information, and questions the validity of the Efficient Market Hypothesis in its semi-strong form.
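The Bayesian updating at the heart of such a model can be illustrated with a minimal sketch: a univariate AR(1) fit with a conjugate normal prior on the autoregressive coefficient and a known noise variance (all values below are illustrative assumptions, not the paper's data, prior, or model order).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic log-price series from an AR(1) process (illustrative values;
# the paper fits a multivariate AR(p) to futures data).
phi_true, sigma = 0.7, 0.1
y = np.zeros(500)
for t in range(1, 500):
    y[t] = phi_true * y[t - 1] + sigma * rng.standard_normal()

# Conjugate Bayesian update for the AR coefficient with known noise
# variance: prior phi ~ N(m0, v0)  ->  posterior N(m_post, v_post).
m0, v0 = 0.0, 1.0
x, target = y[:-1], y[1:]
v_post = 1 / (1 / v0 + (x @ x) / sigma**2)
m_post = v_post * (m0 / v0 + (x @ target) / sigma**2)
```

A rate of forgetting could be introduced by exponentially down-weighting older observations in the sums above before each update.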
Abstract: Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80% of cases. It can be identified as a red lesion and, at higher severity, the lesion is usually covered with rough scale. Psoriasis Area and Severity Index (PASI) scoring is the gold-standard method for measuring psoriasis severity. Scaliness is one of the PASI parameters that needs to be quantified in PASI scoring. The surface roughness of a lesion can be used as a scaliness feature, since scale on the lesion surface makes the lesion rougher. The dermatologist usually assesses severity through the tactile sense, so direct contact between doctor and patient is required; the problem is that the doctor may not assess the lesion objectively. In this paper, a digital image analysis technique is developed to objectively determine the scaliness of a psoriasis lesion and provide the PASI scaliness score. The psoriasis lesion is modelled as a rough surface, created by superimposing a triangular waveform on a smooth average (curved) surface. For roughness determination, polynomial surface fitting is used to estimate the average surface, followed by subtraction of the average surface from the rough surface to give the elevation surface (surface deviations). The roughness index is calculated by applying the average roughness equation to the height-map matrix. The roughness algorithm has been tested on 444 lesion models. From the roughness validation results, only 6 models could not be accepted (percentage error greater than 10%); these errors occur due to the scanned image quality. The roughness algorithm was also validated by roughness measurement on abrasive papers on a flat surface. The Pearson correlation coefficient between the grade value (G) of the abrasive paper and Ra is -0.9488, which shows a strong relation between G and Ra. The algorithm needs to be improved by surface filtering, especially to overcome problems with noisy data.
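The roughness computation described above can be sketched in a few lines, here for a 1-D profile rather than the paper's 2-D height-map matrix (the profile shape, polynomial degree and waveform amplitude are illustrative assumptions):

```python
import numpy as np

def average_roughness(height_profile, poly_degree=3):
    """Estimate average roughness Ra of a 1-D height profile.

    A polynomial fit approximates the smooth average surface; the
    absolute deviations from it are averaged to give Ra.
    (Illustrative sketch; the paper works on 2-D lesion surfaces.)
    """
    idx = np.arange(len(height_profile))
    coeffs = np.polyfit(idx, height_profile, poly_degree)  # average surface
    average_surface = np.polyval(coeffs, idx)
    elevation = height_profile - average_surface           # surface deviations
    return np.mean(np.abs(elevation))                      # Ra

# Synthetic rough profile: smooth curve plus a triangular waveform
x = np.linspace(0, 10, 500)
smooth = 0.5 * x**2
triangle = 0.2 * np.abs((x * 4) % 2 - 1)
ra = average_roughness(smooth + triangle)
```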
Abstract: A new class of percolation model in complex networks,
in which nodes are characterized by hidden variables reflecting their
properties and the occupation probability of each link is
determined by the hidden variables of its end nodes, is studied
in this paper. By mean-field theory, an analytical expression
for the percolation transition point is deduced. It is determined
by the distribution of the hidden variables of the nodes and the
occupation probability between pairs of them. Moreover, the analytical
expression obtained is checked by means of numerical simulations
on a particular model. Furthermore, the general model can be applied
to describe and control practical diffusion processes, such as disease
spreading and scientific collaboration networks.
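A minimal simulation of such a model can illustrate the transition: nodes carry hidden variables, each link of an underlying random graph is occupied with a probability determined by the hidden variables of its end nodes, and the size of the largest occupied cluster is measured. The uniform hidden-variable distribution and the product form of the occupation probability below are illustrative assumptions, not the paper's particular model.

```python
import random
from collections import Counter

def largest_cluster_fraction(n, mean_degree, r, seed=0):
    """Bond percolation with hidden variables on a random graph.

    Each node i carries a hidden variable h_i (here: uniform in (0, 1));
    an existing link (i, j) is occupied with probability
    min(1, r * h_i * h_j).  Returns the fraction of nodes in the
    largest occupied cluster (union-find over occupied links).
    """
    rng = random.Random(seed)
    h = [rng.random() for _ in range(n)]
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    p_link = mean_degree / (n - 1)          # underlying random-graph density
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_link and rng.random() < min(1.0, r * h[i] * h[j]):
                parent[find(i)] = find(j)   # union the two clusters

    sizes = Counter(find(i) for i in range(n))
    return max(sizes.values()) / n

# Sweeping the occupation strength r moves the system through the transition
low = largest_cluster_fraction(400, 8, 0.2)
high = largest_cluster_fraction(400, 8, 4.0)
```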
Abstract: A Celatom-zeolite Y composite (Cel-ZY) was used to
remove cobalt ions from an aqueous solution in batch mode.
Zeolite Y was successfully grown on the Celatom FW-14 surface
by hydrothermal treatment. The product was synthesized as a
novel hierarchical porous material. It was observed from the
results that Cel-ZY has a higher ability to remove cobalt ions than
pure zeolite Y powder (PZY) synthesized under the same conditions.
Several parameters, such as pH and initial cobalt concentration,
were studied in this project to investigate their effect on
cobalt ion removal. It was clearly observed that the uptake of
cobalt ions was affected by increasing these parameters. The results
proved that the product can be used effectively to remove Co2+ ions
from wastewater as an environmentally friendly alternative.
Abstract: In this study, the sorption of Malachite green (MG) on Hydrilla verticillata biomass, a submerged aquatic plant, was investigated in a batch system. The effects of operating parameters such as temperature, adsorbent dosage, contact time, adsorbent size, and agitation speed on the sorption of Malachite green were analyzed using response surface methodology (RSM). The proposed quadratic model for the central composite design (CCD) fitted the experimental data so well that, according to the ANOVA results, it could be used to navigate the design space. The optimum sorption conditions were determined as temperature 43.5 °C, adsorbent dosage 0.26 g, contact time 200 min, adsorbent size 0.205 mm (65 mesh), and agitation speed 230 rpm. The Langmuir and Freundlich isotherm models were applied to the equilibrium data. The maximum monolayer coverage capacity of Hydrilla verticillata biomass for MG was found to be 91.97 mg/g at an initial pH of 8.0, indicating that this is the optimum initial pH for sorption. The external and intraparticle diffusion models were also applied to the sorption data of Hydrilla verticillata biomass with MG, and it was found that both external and intraparticle diffusion contribute to the actual sorption process. The pseudo-second-order kinetic model described the MG sorption process with a good fit.
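The Langmuir fit mentioned above is commonly done on the linearised form Ce/qe = Ce/qmax + 1/(KL·qmax); a sketch with synthetic, noise-free data (the qmax and KL values below are assumed for illustration, not the paper's measurements):

```python
import numpy as np

# Illustrative synthetic equilibrium data (Ce in mg/L, qe in mg/g);
# q_max and K_L below are assumed values, not the paper's results.
q_max_true, K_L_true = 92.0, 0.15
Ce = np.array([5, 10, 20, 40, 80, 160], dtype=float)
qe = q_max_true * K_L_true * Ce / (1 + K_L_true * Ce)   # Langmuir isotherm

# Linearised Langmuir: Ce/qe = Ce/q_max + 1/(K_L * q_max),
# so a straight-line fit of Ce/qe against Ce recovers both constants.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
q_max_fit = 1 / slope
K_L_fit = slope / intercept
```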
Abstract: Snow cover is an important phenomenon in
hydrology; hence, modeling snow accumulation and melting is an
important issue in places where snowmelt contributes significantly to
runoff and has a significant effect on the water balance. The physics-based
models are invariably distributed, with the basin disaggregated into
zones or grid cells. Satellites images provide valuable data to verify
the accuracy of spatially distributed model outputs. In this study a
spatially distributed physically based model (WetSpa) was applied to
predict snow cover and melting in the Latyan dam watershed in Iran.
Snowmelt is simulated based on an energy balance approach. The
model is applied and calibrated with one year of observed daily
precipitation, air temperature, wind speed, and potential
evaporation. The predicted snow-covered area (SCA) is compared with
remotely sensed MODIS images. The results show that the simulated
SCA is in good agreement with the SCA derived from the
MODIS images. The model performance is also tested by statistical and
graphical comparison of simulated and measured discharges entering the
Latyan dam reservoir.
Abstract: The rate of production of the main products of the Fischer-Tropsch reactions over an Fe/HZSM5 bifunctional catalyst in a fixed-bed reactor is investigated over a broad range of temperature, pressure, space velocity, H2/CO feed molar ratio, and CO2, CH4 and water flow rates. Model discrimination and parameter estimation were performed according to the integral method of kinetic analysis. Due to the lack of established mechanisms for Fischer-Tropsch synthesis on bifunctional catalysts, 26 different models were tested and the best model was selected. Comprehensive one- and two-dimensional heterogeneous reactor models are developed to simulate the performance of fixed-bed Fischer-Tropsch reactors. To reduce computational time for optimization purposes, an Artificial Feed-Forward Neural Network (AFFNN) has been used to describe intraparticle mass and heat transfer in the catalyst pellet. It is seen that the product reaction rates are directly related to the H2 partial pressure and inversely related to the CO partial pressure. The results show that the hybrid model is in good agreement with the rigorous mechanistic model while being about 25-30 times faster.
Abstract: This paper presents a method to reduce power consumption in networks-on-chip (NoC). The method codes data before transfer in order to reduce power consumption, and uses data compression to reduce the data size. Power consumption in a NoC is calculated from known models and the transition activity at the input ports. The goal of the simulation is to evaluate the power cost of encoding, decoding and compression in Baseline networks, and the reduction in the number of switches in this type of network. Keywords: Networks-on-chip, Compression, Encoding, Baseline networks, Banyan networks.
Abstract: This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW Photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three Radial Basis Function Neural Networks (RBFNN). The inputs of the network (irradiance and temperature) are classified before being fed into the appropriate RBFNN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural network-based approach, is its distinct generalization ability with regard to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural network-based multi-model machine learning scheme that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations prove that the proposed MPPT method achieves the highest efficiency compared to a conventional single neural network.
Abstract: Software crisis refers to the situation in which developers are not able to complete projects within time and budget constraints; moreover, these over-scheduled and over-budget projects are of low quality as well. Several methodologies have been adopted from time to time to overcome this situation, and the current focus is component-based software engineering. In this approach, the emphasis is on the reuse of already existing software artifacts. But the results cannot be achieved just by preaching the principles; they need to be practiced as well. This paper highlights some of the very basic elements of this approach, which have to be in place to reach the desired goals of high-quality, low-cost software products with a shorter time-to-market.
Abstract: This paper presents knowledge about the types of
material-property tests for selected rapid prototyping technologies.
The rapid prototyping technologies used today for the production of
models and final parts use materials whose initial state is a solid,
liquid, or powder structure. In the solid state, various forms are
used, such as pellets, wire, or laminates. The basic range of
materials includes paper, nylon, wax, resins, metals and ceramics. In
Fused Deposition Modeling (FDM) rapid prototyping technology, the
basic materials mainly used are ABS (Acrylonitrile Butadiene
Styrene), polyamide, polycarbonate, polyethylene and polypropylene.
For advanced FDM applications, special materials are used, such as
silicon nitride, PZT (Piezoceramic Material - Lead Zirconate Titanate),
aluminium oxide, hydroxyapatite and stainless steel.
Abstract: The world is moving rapidly toward the deployment
of information and communication systems. Nowadays, computing
systems with their fast growth are found everywhere, and one of the
main challenges for these systems is the increasing number of attacks
and security threats against them. Thus, capturing, analyzing and
verifying security requirements becomes a very important activity in
the development process of computing systems, especially in developing
systems such as banking, military and e-business systems. For
developing every system, a process model which includes a process,
methods and tools is chosen. The Rational Unified Process (RUP) is
one of the most popular and complete process models, used by
developers in recent years. This process model should be extended for
use in developing secure software systems. In this paper, the
Requirements Discipline of RUP is extended to improve RUP for
developing secure software systems. The proposed extensions add and
integrate a number of Activities, Roles, and Artifacts into RUP in
order to capture, document and model the threats and security
requirements of the system. These extensions introduce a group of
clear and stepwise activities to developers. By following these
activities, developers ensure that security requirements are
captured and modeled. These models are used in design,
implementation and test activities.
Abstract: The last 20 years of dentistry were a period of transition
from a communist to a market economy, but Romanian doctors have
insufficient management knowledge. Recently, the need for modern
management has increased due to the appearance of superior
technologies and materials, as well as patients' demands.
The research goal is to increase efficiency by evaluating dental
office cost categories in real pricing procedures.
The empirical research is based on a guided study that includes
information about the association between categories of cost
perception and therapeutic procedures commonly used in dental
offices.
Based on the results obtained, costs were determined for each
procedure by identifying all the labour that makes up a settled
procedure.
Financial evaluation software was created with the main functions of
introducing and maintaining patient records, treatments and
appointments made, procedure costs, and monitoring office
productivity.
We believe that the study results can significantly improve the
financial management of dental offices, increasing the effectiveness
and quality of services.
Abstract: Rooted in the study of the social functioning of space in architecture, Space Syntax (SS) and the more recent Network Pattern (NP) research demonstrate the 'spatial structures' of the city, i.e. the hierarchical patterns of streets, junctions and alley ends. Applying SS and NP models, planners can conceptualize the real city's patterns. Although both models yield the optimal paths of the city, their underlying displays of the city's spatial configuration differ. The Axial Map analyzes the topological, non-distance-based connectivity structure, whereas the Central-Node Map and the Shortcut-Path Map analyze the metrical, distance-based structures. This research contrasts and combines them to understand various forms of the city's structures. It concludes that, while they reveal different spatial structures, the Space Syntax and Network Pattern urban models complement each other. Combined, they simulate both the global access structure and the locally compact structures, namely the central nodes and the shortcuts, for the city.
Abstract: The introduction of haptic elements into graphic user interfaces is becoming more widespread. Since haptics are being introduced rapidly into computational tools, investigating how these models affect Human-Computer Interaction helps define how to integrate and model new modes of interaction. The interest of this paper is to discuss and investigate the issues surrounding Haptic and Graphic User Interface (GUI) designs as separate systems, as well as to understand how they work in tandem. The development of these systems is explored from a psychological perspective, based on how usability is addressed through learning and affordances, as defined by J.J. Gibson. Haptic design can be a powerful tool, aiding intuitive learning. The problem discussed within the text is how haptic interfaces can be integrated within a GUI without a sense of frivolity. Juxtaposing haptics and graphic user interfaces raises issues of motivation; GUIs tend to have a performatory process, while haptic interfaces use affordances to learn tool use. In a deeper view, it is noted that two modes of perception, foveal and ambient, dictate perception. These two modes were once thought to work in tandem; however, it has been discovered that they work independently of each other. The foveal mode interprets orientation in space, which provides for posture, locomotion, and motor skills with variations of the sensory information, which informs perceptions of object-task performance. It is contended here that object-task performance is a key element in the use of haptic interfaces, because exploratory learning uses affordances in order to use an object without mediating the experience cognitively. It is a direct experience that, through iteration, can lead to skill sets. It is also indicated that object-task performance will not work as efficiently without the use of exploratory or kinesthetic learning practices. Therefore, object-task performance is not explored as thoroughly in GUIs as it is practiced in haptic interfaces.
Abstract: An on-line condition monitoring method for transmission lines using electrical circuit theory and IT technology is proposed in this paper. It is reasonable that the circuit parameters of a transmission line, such as resistance (R), inductance (L), conductance (g) and capacitance (C), expose the electrical condition and physical state of the line. Those parameters can be calculated from the linear equations composed of voltages and currents measured by the synchro-phasor measurement technique at both ends of the line. A set of linear voltage-drop equations containing the four terminal constants (A, B, C, D) is the mathematical model of the transmission line circuit. When at least two sets of those linear equations are established from different operating conditions of the line, they mathematically yield the circuit parameters of the line. The condition of line connectivity, including the state of the connecting or contacting parts of the switching device, may be monitored through resistance variations during operation. The insulation condition of the line can be monitored through conductance (g) and capacitance (C) measurements. Together with other condition monitoring devices, such as partial discharge sensors and visual sensing devices, they may give useful information to detect any incipient symptoms of faults. A prototype of the hardware system has been developed and tested on laboratory-level simulated transmission lines. The tests have shown enough evidence to put the proposed method to practical use.
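The parameter-identification step can be sketched as a linear solve: with Vs = A·Vr + B·Ir and Is = C·Vr + D·Ir, phasor measurements from two independent operating conditions determine the four constants (the phasor values below are synthetic, for illustration only).

```python
import numpy as np

# Two-port model of a transmission line: Vs = A*Vr + B*Ir, Is = C*Vr + D*Ir.
# Synthetic "true" constants for a short line (illustrative values only).
A_true, B_true = 0.98 + 0.001j, 2.0 + 25.0j
C_true, D_true = 1e-5j, 0.98 + 0.001j

def sending_end(Vr, Ir):
    """Simulate synchro-phasor measurements at the sending end."""
    return A_true * Vr + B_true * Ir, C_true * Vr + D_true * Ir

# Receiving-end phasors under two different operating conditions
cond = [(230e3, 400 - 100j), (225e3 + 5e3j, 300 + 50j)]
M = np.array([[Vr, Ir] for Vr, Ir in cond])             # coefficient matrix
Vs = np.array([sending_end(Vr, Ir)[0] for Vr, Ir in cond])
Is = np.array([sending_end(Vr, Ir)[1] for Vr, Ir in cond])

# Two linear solves recover (A, B) and (C, D) from the measurements
A_est, B_est = np.linalg.solve(M, Vs)
C_est, D_est = np.linalg.solve(M, Is)
```

From the recovered constants, the series impedance and shunt admittance (and hence R, L, g, C) follow from the standard two-port relations.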
Abstract: We present an implementation of an Online Exhibition System (OES) web service that reflects our experiences with web service development packages and software process models. The system provides the major functionality that exists in similar packages. While developing such a complex web service, we gained insightful experience (i) in the traditional software development processes, the waterfall model and evolutionary development, and their fitness for web services development, and (ii) in the fitness and effectiveness of a major web services development kit.
Abstract: Electronic commerce is growing rapidly, with on-line
sales already heading for hundreds of billions of dollars per year.
Due to the huge amount of money transferred every day, an increased
security level is required. In this work we present the architecture of
an intelligent speaker verification system, which is able to accurately
verify the registered users of an e-commerce service using only their
voices as input. According to the proposed architecture, a
transaction-based e-commerce application should be complemented
by a biometric server where each customer's unique set of speech
models (voiceprint) is stored. The verification procedure asks the
user to pronounce a personalized sequence of digits; after speech is
captured and voice features are extracted at the client side, they are
sent back to the biometric server. The biometric server uses pattern
recognition to decide whether the received features match the stored
voiceprint of the claimed customer, and grants verification
accordingly. The proposed architecture can provide e-commerce
applications with a higher degree of certainty regarding the identity
of a customer, and prevent impostors from executing fraudulent
transactions.
Abstract: A design flow of multi-standard down-conversion
CMOS mixers for three modern standards, Global System for Mobile
Communications, Digital Enhanced Cordless Telecommunications and
Universal Mobile Telecommunications System, is presented. Three
active mixer structures are studied. The first is based on the Gilbert
cell, which gives a tolerable noise figure and linearity with a low
conversion gain. The second and third structures use the
current-bleeding and charge-injection techniques in order to increase
the conversion gain. An improvement of about 2 dB in the conversion
gain is achieved without considerable degradation of the other
characteristics. The models used for noise figure, conversion gain
and IIP3 are studied. This study describes the nature of the
trade-offs inherent in such structures and gives insights that help in
identifying which structure is better for given conditions.
Abstract: Generation system reliability assessment is an
important task which can be performed using deterministic or
probabilistic techniques. The probabilistic approaches have
significant advantages over the deterministic methods. However,
more complicated modeling is required by the probabilistic
approaches. A power generation model is a basic requirement for this
assessment. One form of the generation model is the well-known
capacity outage probability table (COPT). Different analytical
techniques have been used to construct the COPT. These approaches
require considerable mathematical modeling of the generating units.
The units' models are combined to build the COPT, which adds further
burden to the process of creating the COPT. The Decimal-to-Binary
Conversion (DBC) technique is widely and commonly applied in
electronic systems and computing. This paper proposes a novel
utilization of the DBC to create the COPT without engaging in
analytical modeling or time-consuming simulations. The simple binary
representation, "0" and "1", is used to model the states of the
generating units. The proposed technique is proven to be an effective
approach to build the generation model.
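The DBC idea can be sketched directly: each integer from 0 to 2^N - 1, read in binary, encodes one outage state of the N generating units (the unit data below are illustrative; the paper's exact table layout may differ).

```python
from collections import defaultdict

def build_copt(units):
    """Build a capacity outage probability table (COPT) by
    decimal-to-binary state enumeration.

    `units` is a list of (capacity_MW, forced_outage_rate) pairs.
    Each integer 0 .. 2**N - 1, read in binary, encodes one system
    state: bit k = 1 means unit k is on outage.
    """
    n = len(units)
    table = defaultdict(float)
    for state in range(2 ** n):
        prob, outage = 1.0, 0.0
        for k, (cap, forced_outage_rate) in enumerate(units):
            if (state >> k) & 1:              # bit k set: unit k is out
                prob *= forced_outage_rate
                outage += cap
            else:
                prob *= 1 - forced_outage_rate
        table[outage] += prob                 # merge states with equal outage
    return dict(sorted(table.items()))

# Three units: 2 x 50 MW with FOR 0.02, 1 x 100 MW with FOR 0.05
copt = build_copt([(50, 0.02), (50, 0.02), (100, 0.05)])
```

Enumeration is exponential in the number of units, so for large systems the table would be built recursively; the sketch only illustrates the binary state-coding idea.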