Abstract: Cerium-doped lanthanum bromide LaBr3:Ce (5%)
crystals are considered to be one of the most advanced scintillator
materials used in PET scanning, combining a high light yield, fast
decay time and excellent energy resolution. Apart from the correct
choice of scintillator, it is also important to optimise the detector
geometry, not least in terms of source-to-detector distance in order to
obtain reliable measurements and efficiency. In this study a
commercially available 25 mm x 25 mm BrilLanCe™ 380 LaBr3:Ce (5%)
detector was characterised in terms of its efficiency at varying
source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and
137Cs were separately acquired at distances of 5, 10, 15, and 20 cm.
As a result of the change in solid angle subtended by the detector,
the geometric efficiency decreased with increasing distance. High
efficiencies at short distances can cause pulse pile-up when
subsequent photons are detected before previously detected events
have decayed. To reduce this systematic error, the source-to-detector
distance should balance efficiency against pulse pile-up suppression;
otherwise, pile-up corrections become necessary at short distances.
In addition to the experimental measurements, Monte Carlo simulations
have been carried out for the same setup, allowing a comparison of
results. The advantages and disadvantages of each approach have been
highlighted.
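The distance dependence described above follows the standard point-source solid-angle relation. As a minimal sketch (assuming an ideal point source on the axis of a circular detector face of radius r at distance d; a geometric estimate only, not a substitute for the full Monte Carlo treatment):

\[
\varepsilon_{\mathrm{geo}} = \frac{\Omega}{4\pi} = \frac{1}{2}\left(1 - \frac{d}{\sqrt{d^{2} + r^{2}}}\right)
\]

For r = 12.5 mm this gives roughly 1.5% at d = 5 cm but only about 0.1% at d = 20 cm, which illustrates the efficiency versus pile-up trade-off discussed above.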
Abstract: This paper argues that networks, such as the ECN and the American network, are affected by certain small events which are inherent to path dependence and preclude their full evolution towards efficiency. It is argued that the American network is superior to the ECN in many respects, owing to its greater flexibility and longer history. This stems in particular from the creation of the American network, which was based on a small number of cases. Such a structure encourages further changes and modifications which are not necessarily radical. The ECN, by contrast, was established by legislative action, which explains its rigid structure and resistance to change. This paper is an attempt to transpose the advantages of the American network onto the ECN. It looks at concepts such as judicial cooperation, harmonisation of procedure, peer review, regulatory impact assessments (RIAs), and dispute resolution procedures.
Abstract: PDMS (polydimethylsiloxane) is a suitable material for biological and MEMS (microelectromechanical systems) designers because of its biocompatibility, transparency, and high resistance under plasma treatment. Round PDMS channels have always been of great interest due to their ability to confine liquid with membrane-type microvalves. In this paper we present a very simple way to form round-shaped microfluidic channels, based on the reflow of the positive photoresist AZ® 40 XT. With this method, it is possible to obtain channels of different heights simply by varying the spin-coating parameters of the photoresist.
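As a rough sketch of the height control mentioned above (a common empirical spin-coating rule, not a relation taken from this paper; the constant k lumps together resist viscosity and solids content and must be calibrated for AZ® 40 XT):

\[
t \approx \frac{k}{\sqrt{\omega}}
\]

where t is the deposited resist thickness, and hence the channel height after reflow, and ω is the spin speed.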
Abstract: Recurrent event data are a special type of multivariate
survival data. Dynamic models and frailty models are two of the
approaches that deal with this kind of data. We compare these two
models using the empirical standard deviation of the standardized
martingale residual processes as a way of assessing the fit of the
two models, based on the Aalen additive regression model. We found
that both approaches take heterogeneity into account and produce
residual standard deviations close to each other, both in the
simulation study and in the real data set.
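In standard counting-process notation (a sketch of the quantities involved, not reproduced verbatim from the paper), the Aalen additive model specifies the intensity of individual i as

\[
\lambda_i(t) = Y_i(t)\Big[\beta_0(t) + \sum_{j=1}^{p} \beta_j(t)\,x_{ij}(t)\Big],
\qquad
\hat{M}_i(t) = N_i(t) - \int_0^{t} \hat{\lambda}_i(s)\,ds,
\]

where N_i is the counting process of recurrent events, Y_i the at-risk indicator, and \hat{M}_i the martingale residual process whose standardized empirical standard deviation serves as the fit criterion.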
Abstract: This article presents the simulation, parameterization and optimization of an electromagnet with a C-shaped configuration, intended for the study of the magnetic properties of materials. The electromagnet studied consists of a C-shaped yoke, which provides self-shielding that minimizes losses of magnetic flux density, two poles of high magnetic permeability, and power coils wound on the poles. The main physical variable studied was the static magnetic flux density in a column within the gap between the poles, with a square cross section of 4 cm² and a length of 5 cm, seeking a suitable set of parameters that allow a uniform magnetic flux density of 1×10⁴ Gauss or above to be achieved in the column when the system operates at room temperature with a current consumption not exceeding 5 A. By means of a magnetostatic analysis using the finite element method, the magnetic flux density and the distribution of the magnetic field lines were visualized and quantified. From the results obtained by simulating an initial configuration of the electromagnet, a structural optimization of the geometry of the adjustable caps for the ends of the poles was performed. The effect of the magnetic permeability of the soft magnetic materials used in the pole system, such as low-carbon steel (0.08% C), Permalloy (45% Ni, 54.7% Fe) and Mumetal (21.2% Fe, 78.5% Ni), was also evaluated. The intensity and uniformity of the magnetic field in the gap showed a high dependence on the factors described above. The magnetic field achieved in the column was uniform and its magnitude ranged between 1.5×10⁴ and 1.9×10⁴ Gauss depending on the pole material used, with the possibility of increasing the magnetic field by choosing a suitable cap geometry, introducing a cooling system for the coils, and adjusting the spacing between the poles. This makes the device a versatile and scalable tool to generate the magnetic field necessary to perform magnetic characterization of materials by techniques such as vibrating sample magnetometry (VSM), Hall-effect and Kerr-effect magnetometry, among others. Additionally, a CAD design of the modules of the electromagnet is presented in order to facilitate the construction and scaling of the physical device.
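As a first-order sanity check of such field levels, the standard magnetic-circuit approximation for a C-core (neglecting the fringing and leakage that the self-shielding yoke and the finite element analysis are meant to capture; g is the gap length, l_c the mean core path, μ_r the relative permeability of the core, and NI the ampere-turns) is

\[
B \approx \frac{\mu_0\, N I}{g + l_c/\mu_r}
\]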
Abstract: The increasing importance of data streams arising in a
wide range of advanced applications has led to the extensive study of
mining frequent patterns. Mining data streams poses many new
challenges, among them the one-scan nature, the unbounded memory
requirement, and the high arrival rate of data streams. In this
paper, we propose a new approach for mining itemsets on data
streams. Our approach, SFIDS, has been developed based on the FIDS
algorithm. The main aims were to keep some advantages of the
previous approach, resolve some of its drawbacks, and consequently
improve run time and memory consumption. Our approach has the
following advantages: it uses a lattice-like data structure for
keeping frequent itemsets; it separates regions from each other by
deleting common nodes, which decreases the search space, memory
consumption, and run time; and finally, considering the CPU
constraint, when an increasing data arrival rate overloads the
system, SFIDS automatically detects this situation and discards some
of the unprocessed data. We guarantee that the error of the results
is bounded by a user pre-specified threshold, based on a probability
technique. Final results show that the SFIDS algorithm attains about
a 50% run-time improvement over the FIDS approach.
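The probability technique is not spelled out in this abstract; a common choice for bounding the support-estimation error after discarding part of a stream is a Hoeffding-style bound, sketched below (the function name and parameters are ours, not from the paper):

```python
import math

def min_sample_size(epsilon: float, delta: float) -> int:
    """Hoeffding bound: a uniform random sample of this many transactions
    estimates any itemset's support within +/- epsilon of its true value
    with probability at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

# Example: keep the support error below 0.01 with 99% confidence.
print(min_sample_size(0.01, 0.01))  # -> 26492 transactions must be retained
```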
Abstract: The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed that of conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small blocklengths of up to 800 bits and up to 2000 iterations, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving with an increasing number of iterations.
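The recovery algorithm itself is not reproduced here; as a baseline sketch of the iteration-count experiment, a generic hard-decision bit-flipping decoder with a configurable iteration cap might look as follows (a textbook decoder, not the authors' method):

```python
import numpy as np

def bit_flip_decode(H: np.ndarray, y: np.ndarray, max_iters: int = 2000):
    """Generic hard-decision bit-flipping LDPC decoding sketch.
    H: parity-check matrix (m x n) over GF(2); y: received hard bits."""
    x = y.copy()
    for it in range(max_iters):
        syndrome = (H @ x) % 2                   # unsatisfied checks
        if not syndrome.any():
            return x, it                         # valid codeword found
        # Count, per bit, how many failing checks it participates in.
        fails = H.T @ syndrome
        # Flip the bit(s) involved in the most unsatisfied checks.
        x[fails == fails.max()] ^= 1
    return x, max_iters

# Toy example: (7,4) Hamming code as a tiny parity-check matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)
received = codeword.copy(); received[2] ^= 1     # inject one bit error
decoded, iters = bit_flip_decode(H, received)
print(decoded, iters)
```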
Abstract: The modeling paradigm places models at the center of the development process. These models are represented by languages such as UML, the language standardized by the OMG, which has become essential for development. Likewise, the ontology engineering paradigm places ontologies at the center of the development process; in this paradigm we find OWL, the principal language for knowledge representation. Building ontologies from scratch is generally a difficult task. The bridging between UML and OWL has appeared in several respects, such as classes and associations. In this paper, we profit from the convergence between UML and OWL to propose an approach, based on meta-modelling and graph grammars and framed within the MDA architecture, for the automatic generation of OWL ontologies from UML class diagrams. The transformation is based on transformation rules; the level of abstraction in these rules is close to the application in order to obtain usable ontologies. We illustrate this approach with an example.
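The class/generalization/association correspondences mentioned above can be illustrated with a small rdflib sketch (all IRIs and names are invented; the paper's actual rules operate on meta-models via graph grammars):

```python
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/model#")   # hypothetical ontology IRI
g = Graph()

# Rule sketch: UML class          -> owl:Class
#              UML generalization -> rdfs:subClassOf
g.add((EX.Person, RDF.type, OWL.Class))
g.add((EX.Student, RDF.type, OWL.Class))
g.add((EX.Student, RDFS.subClassOf, EX.Person))

# Rule sketch: UML association    -> owl:ObjectProperty with domain/range
g.add((EX.Course, RDF.type, OWL.Class))
g.add((EX.enrolledIn, RDF.type, OWL.ObjectProperty))
g.add((EX.enrolledIn, RDFS.domain, EX.Student))
g.add((EX.enrolledIn, RDFS.range, EX.Course))

print(g.serialize(format="turtle"))
```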
Abstract: Smart Dust particles are small smart materials used for generating weather maps. We investigate the question of the optimal number of Smart Dust particles necessary for generating precise, computationally feasible, and cost-effective 3-D weather maps. We also give an optimal matching algorithm for the generalized scenario in which there are N Smart Dust particles and M ground receivers.
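The paper's matching algorithm is not reproduced here; as a baseline sketch, optimal one-to-one matching between N particles and M receivers can be cast as a rectangular assignment problem and solved with the Hungarian method (toy coordinates; total particle-to-receiver distance is minimized):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
N, M = 6, 4                       # particles, ground receivers (toy sizes)
particles = rng.uniform(0, 100, size=(N, 2))
receivers = rng.uniform(0, 100, size=(M, 2))

# Cost = pairwise Euclidean distance; rectangular matrices are allowed,
# so min(N, M) pairs are matched at minimum total cost.
cost = np.linalg.norm(particles[:, None, :] - receivers[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)
for p, r in zip(rows, cols):
    print(f"particle {p} -> receiver {r} (distance {cost[p, r]:.1f})")
```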
Abstract: Results in one field necessarily give insight into the
others, and all have much potential for scientific and technological
application. The Hadamard-transform technique, once applied to
spectrometry, also has its use in the SNR enhancement of OTDR.
In this report, a new set of codes (Simplex codes) is discussed,
and the origin of the additional SNR gain is explained.
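For an n-bit Simplex code, the commonly cited averaging argument gives an SNR (coding) gain over single-pulse OTDR of

\[
G_{\mathrm{SNR}} = \frac{n+1}{2\sqrt{n}},
\]

which grows roughly as \(\sqrt{n}/2\) for large n; this is the additional gain whose origin the report discusses.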
Abstract: Target tracking and localization are important applications
in wireless sensor networks. In these applications, sensor nodes
collectively monitor and track the movement of a target. They have
limited energy supplied by batteries, so energy efficiency is essential
for sensor networks. Most existing target tracking protocols need to
wake up sensors periodically to perform tracking. Some unnecessary
energy waste is thus introduced. In this paper, an energy efficient
protocol for target localization is proposed. In order to preserve
energy, the protocol fixes the number of sensors for target tracking,
but it retains the quality of target localization at an acceptable
level. By selecting a set of sensors for target localization, the other
sensors can sleep rather than periodically wake up to track the target.
Simulation results show that the proposed protocol saves a significant
amount of energy and also prolongs the network lifetime.
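A minimal sketch of the fixed-size selection idea (names invented; the paper's actual selection criterion and wake-up protocol are not reproduced): keep the k sensors nearest the current target estimate awake and let the rest sleep.

```python
import numpy as np

def select_active_sensors(sensors: np.ndarray, target_est: np.ndarray, k: int):
    """Return the indices of the k sensors closest to the estimated
    target position; all other sensors can stay asleep."""
    d = np.linalg.norm(sensors - target_est, axis=1)
    return np.argsort(d)[:k]

sensors = np.random.default_rng(1).uniform(0, 100, size=(50, 2))
active = select_active_sensors(sensors, np.array([40.0, 60.0]), k=4)
print("awake sensors:", active)
```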
Abstract: Since injection engines account for a considerable portion
of energy consumption and environmental pollution, using an
alternative source of energy with lower pollutant effects is
necessary. Biodiesel fuel is a suitable alternative for gasoline in
diesel engines. In this research, the properties of biodiesel and
the performance and pollutant emissions of a diesel engine have been
investigated and compared when running on 100% biodiesel, 100%
gasoline, and blends of the two fuels. The research has shown that
using biodiesel fuel in a conventional diesel engine reduces
pollutants such as CO, unburned hydrocarbons, and suspended
particles, with a slight increase in nitrogen oxides, while power
output, fuel consumption, and thermal efficiency remain about the
same as with diesel fuel.
Abstract: When reconstructing a scenario, it is necessary to
know the structure of the elements present in the scene in order to
have an interpretation. In this work we link 3D scene reconstruction
to evolutionary algorithms through stereo vision theory. We consider
stereo vision as a method that provides the reconstruction of a
scene using only a couple of images of the scene and performing some
computation. Through several images of a scene, captured from
different positions, stereo vision can give us an idea of the
three-dimensional characteristics of the world. Stereo vision
usually requires two cameras, by analogy with the mammalian vision
system. In this work we employ only one camera, which is translated
along a path, capturing images at regular intervals. As we cannot
perform all the computations required for an exhaustive
reconstruction, we employ an evolutionary algorithm to partially
reconstruct the scene in real time. The algorithm employed is the
fly algorithm, which employs "flies" to reconstruct the principal
characteristics of the world following certain evolutionary rules.
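A minimal sketch of the fly-algorithm fitness under an invented pinhole camera model and toy images: a fly (a 3-D point) scores well when the image patches around its two projections agree; the evolutionary selection and mutation loop that evolves the population of flies is standard and omitted.

```python
import numpy as np

def project(point, cam):
    """Pinhole projection of a 3-D point for a camera translated to
    cam["t"] (f: focal length in pixels; optical axis along z)."""
    x, y, z = point - cam["t"]
    return np.array([cam["f"] * x / z, cam["f"] * y / z])

def fitness(point, img_a, cam_a, img_b, cam_b, half=2):
    """Fly fitness sketch: sum of squared differences between the image
    patches around the fly's two projections (lower is better)."""
    pa = project(point, cam_a).astype(int) + np.array(img_a.shape) // 2
    pb = project(point, cam_b).astype(int) + np.array(img_b.shape) // 2
    a = img_a[pa[0]-half:pa[0]+half+1, pa[1]-half:pa[1]+half+1]
    b = img_b[pb[0]-half:pb[0]+half+1, pb[1]-half:pb[1]+half+1]
    if a.shape != b.shape or a.size == 0:   # a projection left the image
        return float("inf")
    return float(np.sum((a - b) ** 2))

rng = np.random.default_rng(2)
img_a = rng.random((64, 64)); img_b = img_a.copy()    # toy image pair
cam_a = {"f": 40.0, "t": np.array([0.0, 0.0, 0.0])}
cam_b = {"f": 40.0, "t": np.array([1.0, 0.0, 0.0])}   # camera moved 1 unit
print(fitness(np.array([0.5, 0.0, 10.0]), img_a, cam_a, img_b, cam_b))
```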
Abstract: In the past years, the world has witnessed significant work in the field of manufacturing. Special efforts have been made in the implementation of new technologies and management and control systems, among many others, which have all evolved the field. Closely following all this, due to the scope of new projects and the need to turn the existing flexible ideas into more autonomous and intelligent ones, i.e., moving toward more intelligent manufacturing, the present paper emerges with the main aim of contributing to the analysis and a few customization issues of a new iCIM 3000 system at the IPSAM. In this process, special emphasis is made on the material flow problem. To this end, besides a description and analysis of the system and its main parts, some tips on how to define other possible alternative material flow scenarios and a partial analysis of the combinatorial nature of the problem are offered as well. All this is done with the intention of relating it to the use of simulation tools, which have been briefly addressed with a special focus on the Witness simulation package. For a better comprehension, the previous elements are supported by a few figures and expressions which help in obtaining the necessary data. Such data and others will be used in the future, when simulating the scenarios in the search for the best material flow configurations.
Abstract: Lately, significant work in the area of Intelligent
Manufacturing has become public and has mainly been applied within
the frame of industrial purposes. Special efforts have been made in
the implementation of new technologies and management and control
systems, among many others, which have all evolved the field. Aware
of all this, and due to the scope of new projects and the need to
turn the existing flexible ideas into more autonomous and
intelligent ones, i.e., Intelligent Manufacturing, the present paper
emerges with the main aim of contributing to the design and analysis
of the material flow in systems, cells, or workstations under this
new "intelligent" denomination. To this end, besides offering a
conceptual basis covering some of the key points to be taken into
account and some general principles to consider in the design and
analysis of the material flow, some tips on how to define other
possible alternative material flow scenarios and a classification of
the states a system, cell, or workstation can assume are offered as
well. All this is done with the intention of relating it to the use
of simulation tools, which have been briefly addressed with a
special focus on the Witness simulation package. For a better
comprehension, the previous elements are supported by a detailed
layout, other figures, and a few expressions which could help in
obtaining the necessary data. Such data and others will be used in
the future, when simulating the scenarios in the search for the best
material flow configurations.
Abstract: The belief K-modes method (BKM) is a new clustering
technique that handles uncertainty in the attribute values of
objects in both the cluster construction task and the classification
task. Like the standard version of this method, the BKM results
depend on the chosen initial modes. In this paper, a method for
selecting the initial modes is therefore developed, aiming at
improving the performance of the BKM approach. Experiments with
several real data sets show that, by using the developed
initial-mode selection method, the clustering algorithm produces
more accurate results.
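The selection method itself is not detailed in this abstract; as a generic sketch (not the paper's method), k-modes implementations often seed the modes so that they are mutually distant under the matching dissimilarity:

```python
import numpy as np

def mismatch(a, b):
    """Standard k-modes dissimilarity: number of differing categorical
    attributes (the BKM belief-function extension is not reproduced)."""
    return int(np.sum(a != b))

def pick_initial_modes(X, k):
    """Generic greedy initialization sketch: start from the object
    closest to all others, then repeatedly add the object farthest
    from the modes chosen so far."""
    modes = [min(range(len(X)),
                 key=lambda i: sum(mismatch(X[i], x) for x in X))]
    while len(modes) < k:
        modes.append(max(range(len(X)),
                         key=lambda i: min(mismatch(X[i], X[m])
                                           for m in modes)))
    return X[modes]

X = np.array([["red", "s"], ["red", "s"], ["blue", "m"],
              ["blue", "l"], ["green", "l"]])
print(pick_initial_modes(X, 2))
```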
Abstract: The subset selection approach to polynomial regression
model building assumes that the chosen fixed full set of predefined
basis functions contains a subset that describes the target relation
sufficiently well. However, in most cases the necessary set of basis
functions is not known and needs to be guessed, a potentially
non-trivial and lengthy trial-and-error process. In our research we
consider a potentially more efficient approach, Adaptive Basis
Function Construction (ABFC). It lets the model building method
itself construct the basis functions necessary for creating a model
of arbitrary complexity with adequate predictive performance.
However, two issues to some extent plague the methods of both subset
selection and the ABFC, especially when working with relatively
small data samples: selection bias and selection instability. We try
to correct these issues by model post-evaluation using
cross-validation and model ensembling. To evaluate the proposed
method, we empirically compare it to ABFC methods without
ensembling, to a widely used method of subset selection, as well as
to some other well-known regression modeling methods, using publicly
available data sets.
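A minimal sketch of the post-evaluation idea (a fixed-degree polynomial stands in for ABFC's adaptively constructed basis functions; names and settings are ours): fit one model per cross-validation training fold and average their predictions, which damps the instability of any single selected model.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def cv_ensemble_predict(X, y, X_new, n_splits=5):
    """Fit one model per CV training fold and average the predictions."""
    preds = []
    for train_idx, _ in KFold(n_splits, shuffle=True,
                              random_state=0).split(X):
        model = make_pipeline(PolynomialFeatures(degree=3),
                              LinearRegression())
        model.fit(X[train_idx], y[train_idx])
        preds.append(model.predict(X_new))
    return np.mean(preds, axis=0)

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
print(cv_ensemble_predict(X, y, np.array([[0.5]])))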
Abstract: Selection of the best possible set of suppliers has a
significant impact on the overall profitability and success of any
business. For this reason, it is usually necessary to optimize all
business processes and to make use of cost-effective alternatives
for additional savings. This paper proposes a new efficient
context-aware supplier selection model that takes into account
possible changes in the environment while significantly reducing
selection costs. The proposed model is based on data clustering
techniques and draws on certain principles of online algorithms for
the optimal selection of suppliers. Unlike common selection models,
which re-run the selection algorithm from scratch for every
decision-making sub-period over the whole environment, our model
considers only the changes and superimposes them on the previously
determined best set of suppliers to obtain a new best set.
Therefore, any re-computation of unchanged elements of the
environment is avoided, and selection costs are consequently reduced
significantly. A numerical evaluation confirms the applicability of
this model and shows that it is a better solution than common static
selection models in this field.
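A toy sketch of the delta idea (names invented; the clustering machinery of the actual model is not reproduced): the expensive evaluation is redone only for suppliers whose context changed, and previously computed scores are reused for everyone else.

```python
def reselect(scores, changes, k):
    """Apply only the changed supplier scores to the previous ones and
    re-select the top k; unchanged suppliers are never re-evaluated."""
    scores = {**scores, **changes}      # touch changed suppliers only
    return sorted(scores, key=scores.get, reverse=True)[:k], scores

scores = {"s1": 0.9, "s2": 0.7, "s3": 0.6, "s4": 0.4}
best, scores = reselect(scores, {"s3": 0.95}, k=2)
print(best)   # -> ['s3', 's1']
```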
Abstract: Evaluation and survey of curriculum quality, as one of the most important components of the university system, is necessary at different levels of higher education. The main purpose of this study was to survey the curriculum quality of the actuarial science field. Case: Shahid Beheshti University and the Higher Education Institute of Eco Insurance (according to the viewpoints of students, alumni, employers, and faculty members). Descriptive statistics (means, tables, percentages, and frequency distributions) and inferential statistics (chi-square) were used to analyze the data. Six criteria were considered for the quality of the curriculum: objectives, content, teaching and learning methods, space and facilities, time, and assessment of learning. The content, teaching and learning methods, space and facilities, and assessment of learning criteria were at a relatively desirable level, while the objectives and time criteria were at a desirable level. Overall, the quality of the curriculum of the actuarial science field was at a relatively desirable level.
Abstract: Pretreatment of oil palm empty fruit bunch (OPEFB) with N-methylmorpholine-N-oxide (NMMO) to enhance biogas production was investigated. The pretreatments were performed at 90 and 120 °C for 1, 3, and 5 h using three different NMMO concentrations of 73%, 79%, and 85%. The pretreated OPEFB was subsequently anaerobically digested to produce biogas. After pretreatment, there were no significant changes in the main composition of the OPEFB, and the maximum total solid recovery was 92%. The amorphous fraction increased up to 78% for pretreatment with 85% NMMO solution for 3 h at 120 °C. In general, a higher NMMO concentration and a higher temperature resulted in an increased amorphous fraction and higher biogas production. The best result was a 148% enhancement of the methane yield compared to the untreated OPEFB and a degree of digestion of 94% compared to starch as a reference.