Abstract: Landfill leachates contain a number of persistent pollutants, including heavy metals. These pollutants can spread through ecosystems and accumulate in fish, many of which are top consumers of trophic chains. Although fish are freely swimming organisms, their species-specific ecological and behavioral properties lead them to prefer the most suitable biotopes, and they therefore do not necessarily avoid harmful substances or environments. It is consequently necessary to evaluate persistent pollutant dispersion in a hydroecosystem using fish tissue metal concentrations. In hydroecosystems of hybrid type (e.g., river-pond-river), the distance from the pollution source can serve as a good indicator of such metal distribution. The studies were carried out in the hybrid-type ecosystem neighboring the Kairiai landfill, located 5 km east of Šiauliai City. Fish tissue (gills, liver, and muscle) metal concentration measurements were performed on two ecologically different types of fish according to their feeding characteristics: benthophagous (Gibel carp, roach) and predatory (Northern pike, perch). A number of mathematical models (linear, non-linear, using log and other transformations) were applied in order to identify the most satisfactory description of the interdependence between fish tissue metal concentration and the distance from the pollution source. However, only a log-multiple regression model revealed the pattern: the distance from the pollution source is closely and positively correlated with metal concentration in all predatory fish tissues studied (gills, liver, and muscle).
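The log-regression idea referred to above can be illustrated with a minimal sketch: a one-predictor log-log least-squares fit relating tissue concentration to distance. All numbers below are invented for illustration; the study's actual measurements and multi-predictor model terms are not reproduced here.

```python
import math

def fit_log_log(distances_km, concentrations):
    """Fit log(C) = a + b*log(d) by ordinary least squares."""
    xs = [math.log(d) for d in distances_km]
    ys = [math.log(c) for c in concentrations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical tissue concentrations rising with distance from the source
dist = [0.5, 1.0, 2.0, 4.0]   # km
conc = [1.1, 1.6, 2.3, 3.4]   # arbitrary units
a, b = fit_log_log(dist, conc)
print(round(b, 2))  # → 0.54 : positive slope, concentration rises with distance
```

A positive slope b in this form corresponds to the positive distance-concentration correlation the abstract reports for predatory fish.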
Abstract: The increasing availability of information about earth surface elevation (Digital Elevation Models, DEM) generated from different sources (remote sensing, aerial images, LiDAR) raises the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to the high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the earth. Hence the need to merge large-scale height maps, which are typically made available for free at the worldwide level, with very specific high-resolution datasets. On the other hand, the third dimension enriches the user experience and the data representation quality, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environmental monitoring, etc. The open-source 3D virtual globes, which are trending topics in Geovisual Analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. However, 3D virtual globes typically do not offer an open-source tool that allows the generation of a terrain elevation data structure starting from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called "Terrain Builder". This tool is able to merge heterogeneous-resolution datasets and to provide a multi-resolution worldwide terrain service fully compatible with CesiumJS and therefore accessible via the web using a traditional browser without any additional plug-in.
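The core merge step of such a Terrain Builder can be sketched as a resolution-priority lookup: prefer the high-resolution local survey where it has coverage and fall back to the coarse worldwide grid elsewhere. The grids, extents, and elevations below are invented, and real terrain tiling (e.g., Cesium's quantized-mesh format) is considerably more involved.

```python
# Hypothetical sketch: merging a coarse worldwide grid with a fine local survey.

def make_grid(x0, y0, step, rows):
    """A regular elevation grid with origin (x0, y0) and cell size `step`."""
    return {"x0": x0, "y0": y0, "step": step, "rows": rows}

def sample(grid, x, y):
    """Nearest-neighbour elevation lookup; None outside the grid extent."""
    i = round((y - grid["y0"]) / grid["step"])
    j = round((x - grid["x0"]) / grid["step"])
    if 0 <= i < len(grid["rows"]) and 0 <= j < len(grid["rows"][0]):
        return grid["rows"][i][j]
    return None

# Invented data: a 10 m-step worldwide grid and a 1 m-step local survey patch
coarse = make_grid(0.0, 0.0, 10.0,
                   [[100 + i + j for j in range(4)] for i in range(4)])
fine = make_grid(10.0, 10.0, 1.0, [[200.0] * 5 for _ in range(5)])

def merged_sample(x, y):
    value = sample(fine, x, y)        # high-resolution survey first
    return value if value is not None else sample(coarse, x, y)

print(merged_sample(12.0, 12.0), merged_sample(30.0, 30.0))
# fine dataset wins inside its extent; coarse grid answers elsewhere
```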
Abstract: Accounting for 40% of total world energy consumption, building systems are developing into technically complex large energy consumers suitable for the application of sophisticated power management approaches that can greatly increase energy efficiency and even make buildings active energy market participants. A centralized control system for building heating and cooling, managed by economically optimal model predictive control, shows promising results, with an estimated 30% increase in energy efficiency. The research is focused on the implementation of such a method in a case study performed on two floors of our faculty building, with corresponding wireless sensor data acquisition, remote heating/cooling units, and a central climate controller. Building walls are mathematically modeled with their corresponding material types, surface shapes, and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, occupant behavior, and comfort demands are all taken into account in deriving price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries, and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control: the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by microgrid energy flow management. Such an approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
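The price-optimal predictive control idea can be sketched in miniature, assuming a first-order thermal zone model, invented energy prices and comfort band, and an exhaustive search over a short horizon in place of the paper's hierarchical MPC formulation.

```python
from itertools import product

# Toy sketch of price-optimal predictive climate control. Model constants,
# prices, comfort band, and heating levels are all invented.
a, b = 0.875, 0.5        # zone dynamics: T[k+1] = a*T[k] + b*u[k] + (1-a)*T_out
t_out, t0 = 12.0, 20.0   # outdoor and initial indoor temperature (deg C)
prices = [0.10, 0.30, 0.10]   # hypothetical energy price per step
comfort = (19.0, 23.0)        # allowed indoor temperature band
levels = [0.0, 2.0, 4.0]      # available heating powers

def simulate(u_seq):
    """Cost of a heating sequence, or None if comfort is ever violated."""
    t, cost = t0, 0.0
    for u, p in zip(u_seq, prices):
        t = a * t + b * u + (1 - a) * t_out
        if not (comfort[0] <= t <= comfort[1]):
            return None
        cost += p * u
    return cost

best = min((seq for seq in product(levels, repeat=len(prices))
            if simulate(seq) is not None), key=simulate)
print(best, round(simulate(best), 2))
```

The optimizer shifts heating away from the expensive middle step while keeping the zone inside the comfort band, which is the essence of the economically optimal control the abstract describes.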
Abstract: In this paper, the regression dependence of dancing intensity on wind speed and span length was established from statistical data obtained in multi-year observations of line wire dancing accumulated by the power systems of Kazakhstan and the Russian Federation. The lower and upper bounds of the equation parameters were estimated, as was the adequacy of the regression model. The constructed model will be used in research on dancing phenomena for the development of methods and means of protection against dancing and for zoning territories according to line wire dancing.
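A two-predictor linear regression of dancing intensity on wind speed and span length can be sketched on synthetic data; the paper's actual functional form, parameter bounds, and coefficients are not reproduced here, and the numbers below are placeholders.

```python
import numpy as np

# Illustrative only: fit intensity ~ intercept + c1*wind + c2*span by
# ordinary least squares on synthetic observations.
rng = np.random.default_rng(0)
wind = rng.uniform(5, 25, 50)        # wind speed, m/s (hypothetical)
span = rng.uniform(100, 500, 50)     # span length, m (hypothetical)
intensity = 0.3 * wind + 0.01 * span + rng.normal(0, 0.5, 50)

X = np.column_stack([np.ones_like(wind), wind, span])
coef, *_ = np.linalg.lstsq(X, intensity, rcond=None)
print(coef.round(3))  # [intercept, wind coefficient, span coefficient]
```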
Abstract: The cities of Johannesburg and Pretoria, both located in the Gauteng province, are separated by a distance of 58 km. The traffic queues on the Ben Schoeman freeway, which connects these two cities, can stretch for almost 1.5 km. Vehicle traffic congestion impacts negatively on business and on commuters' quality of life. The goal of this paper is to identify variables that influence the flow of traffic and to design a vehicle traffic prediction model that will predict the traffic flow pattern in advance, enabling motorists to make appropriate travel decisions ahead of time. The data used were collected by Mikro's Traffic Monitoring (MTM). A Multi-Layer Perceptron (MLP) was used individually to construct the model, and the MLP was also combined with the Bagging ensemble method to train on the data. The cross-validation method was used for evaluating the models. The results obtained from the techniques were compared using their prediction costs, computed from a combination of the loss matrix and the confusion matrix. The prediction models designed show that the status of the traffic flow on the freeway can be predicted using the following parameters: travel time, average speed, traffic volume, and day of month. The implications of this work are that commuters will be able to spend less time travelling on the route and more time with their families, and that the logistics industry will save more than twice what it is currently spending.
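The MLP-plus-Bagging setup with cross-validation can be sketched with scikit-learn on synthetic data. The MTM features, class labels, loss matrix, and hyperparameters from the paper are not available here, so everything below is a placeholder stand-in for the four named traffic features.

```python
# Hedged sketch: bagged MLP classifier evaluated by k-fold cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Four synthetic features stand in for travel time, average speed,
# traffic volume, and day of month.
X, y = make_classification(n_samples=300, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

clf = BaggingClassifier(
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0),
    n_estimators=5, random_state=0)

scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
print(round(scores.mean(), 2))
```

In the paper the models were instead scored by a prediction cost built from the loss and confusion matrices; plain accuracy is used here only to keep the sketch self-contained.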
Abstract: Patient-specific models are instance-based learning algorithms that take advantage of the particular features of the patient case at hand to predict an outcome. We introduce two patient-specific algorithms based on the decision tree paradigm that use the area under the ROC curve (AUC) as the metric for selecting an attribute. We apply the patient-specific algorithms to predict outcomes in several datasets, including medical datasets. Compared to the entropy-based patient-specific decision path (PSDP) and CART methods, the AUC-based patient-specific decision path models performed equivalently on AUC. Our results provide support for patient-specific methods being a promising approach for making clinical predictions.
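The AUC-based attribute selection can be sketched as follows, using the Mann-Whitney rank formulation of AUC. The attributes, values, and labels below are invented, not taken from the paper's datasets, and this is not the authors' implementation.

```python
# Sketch: pick the splitting attribute whose values best rank positive
# cases above negative ones, i.e. the attribute with the highest AUC.
def auc(scores, labels):
    """AUC of real-valued scores against binary labels via the
    Mann-Whitney U statistic (ties count one half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1, 1, 0]
attributes = {                          # hypothetical attribute values
    "age":     [30, 35, 60, 62, 58, 40],
    "glucose": [90, 95, 92, 98, 91, 96],
}
best = max(attributes, key=lambda a: auc(attributes[a], labels))
print(best)  # → age
```

Here "age" separates the classes perfectly (AUC = 1.0), so an AUC-driven decision path would branch on it first.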
Abstract: This paper discusses the applicability of a numerical model for predicting damage from an accidental hydrogen explosion occurring in a hydrogen facility. The numerical model was based on an unstructured finite volume method (FVM) code, "NuFD/FrontFlowRed". For simulating unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model expressed the propagation of the premixed flame surface and the diffusion combustion process, respectively. To validate this numerical model, we simulated two previous types of hydrogen explosion tests. The first is an open-space explosion test, in which the source was a prismatic 5.27 m³ volume with a 30% hydrogen-air mixture. A reinforced concrete wall was set 4 m away from the front surface of the source, which was ignited at the bottom center by a spark. The second is a vented-enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m² on one side. The test was performed with ignition at the center of the wall opposite the vent, using hydrogen-air mixtures with hydrogen concentrations close to 18 vol%. The results of the numerical simulations were compared with the previous experimental data to assess the accuracy of the numerical model, and we verified that the simulated overpressures and flame time-of-arrival data were in good agreement with the results of the two previous explosion tests.
Abstract: Data fusion technology can be an effective way to extract useful information from multiple sources of data, and it has been widely applied in various applications. This paper presents a data fusion approach for multimedia data, applied to event detection in Twitter using the Dempster-Shafer theory of evidence. The methodology applies a mining algorithm to detect the event. Two types of data enter the fusion. The first consists of features extracted from text using the bag-of-words method and weighted by term frequency-inverse document frequency (TF-IDF). The second consists of visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments have indicated that, compared to approaches using an individual data source, the proposed data fusion approach can increase the prediction accuracy for event detection. The experimental results showed that the proposed method achieved a high accuracy of 0.97, compared with 0.93 for texts only and 0.86 for images only.
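Dempster's rule of combination, the fusion step named above, can be sketched directly. The mass assignments for the text and image sources are invented for illustration; the paper's actual basic probability assignments are not reproduced.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: normalized intersection of two mass functions
    whose focal elements are frozensets."""
    fused, conflict = {}, 0.0
    for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            fused[inter] = fused.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2          # mass assigned to the empty set
    k = 1.0 - conflict                   # normalization constant
    return {s: w / k for s, w in fused.items()}

E, N = frozenset({"event"}), frozenset({"no_event"})
theta = E | N                            # frame of discernment (ignorance)
m_text  = {E: 0.7, N: 0.2, theta: 0.1}   # hypothetical TF-IDF evidence
m_image = {E: 0.6, N: 0.3, theta: 0.1}   # hypothetical SIFT evidence

fused = combine(m_text, m_image)
print(round(fused[E], 3))  # → 0.821
```

Note how the fused belief in "event" (≈0.82) exceeds either single source's mass (0.7 and 0.6), mirroring the accuracy gain the abstract reports for fusion over individual modalities.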
Abstract: Nowadays, education cannot be imagined without digital technologies, which broaden the horizons of teaching and learning processes. Several universities are offering online courses, and for evaluation purposes e-examination systems are being widely adopted in academic environments. Multiple-choice tests are extremely popular. In the move away from traditional examinations to e-examination, Moodle, a Learning Management System (LMS), is being used. Moodle logs every click that students make for answering and navigation during an e-examination. Data mining has been applied in various domains, including retail sales and bioinformatics, and in recent years there has been increasing interest in its use in e-learning environments, where it has been applied to discover, extract, and evaluate parameters related to students' learning performance. The combination of data mining and e-learning is still in its infancy. Log data generated by students during an online examination can be used to discover knowledge with the help of data mining techniques. In web-based applications, the number of right and wrong answers on a test is not sufficient to assess and evaluate a student's performance, so assessment techniques must be intelligent. If a student cannot answer the question asked, an easier question can be posed; otherwise, a more difficult question on a similar topic can be posed. To do so, it is necessary to identify the difficulty level of the questions, and the proposed work concentrates on this issue. A specific data mining technique, clustering, is used in this work. The method decides the difficulty levels of the questions and categorizes them as tough, moderate, or easy; questions are later served to students based on their performance. The proposed experiment categorizes the question set and also groups the students based on their performance in the examination, which will help the instructor guide the students more specifically. In short, the mined knowledge helps to support, guide, facilitate, and enhance learning as a whole.
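The clustering step can be sketched with a one-dimensional k-means over per-question success rates. The rates and the three seed centers below are invented, not derived from Moodle logs, and the paper's actual feature set may be richer.

```python
# Sketch: group questions into tough / moderate / easy by 1-D k-means
# on the fraction of students who answered each question correctly.
def kmeans_1d(values, centers, iters=20):
    for _ in range(iters):
        groups = {c: [] for c in centers}
        for v in values:
            nearest = min(centers, key=lambda c: abs(v - c))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return sorted(centers)

success = [0.92, 0.88, 0.55, 0.60, 0.15, 0.20, 0.85, 0.50]  # invented rates
tough, moderate, easy = kmeans_1d(success, [0.1, 0.5, 0.9])

def label(v):
    """Difficulty category of a question with success rate v."""
    return min([(abs(v - tough), "tough"), (abs(v - moderate), "moderate"),
                (abs(v - easy), "easy")])[1]

print(label(0.18), label(0.58), label(0.90))  # → tough moderate easy
```

A question most students miss lands in the "tough" cluster and can be followed by easier ones, matching the adaptive questioning described above.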
Abstract: Many cluster-based routing protocols have been proposed in the field of wireless sensor networks, in which groups of nodes are formed into clusters. A cluster head is selected from among those nodes based on residual energy, coverage area, and number of hops; the cluster head performs data gathering from the various sensor nodes and forwards the aggregated data to the base station or to a relay node (another cluster head), which forwards the packet, along with its own data packet, toward the base station. Here, a Game Theory based Diligent Energy Utilization Algorithm (GTDEA) for routing is proposed. In GTDEA, cluster head selection is done with the help of game theory, a decision-making process, which selects a cluster head based on three parameters: residual energy (RE), Received Signal Strength Index (RSSI), and Packet Reception Rate (PRR). Finding a feasible path to the destination with minimum utilization of the available energy improves the network lifetime, and this is achieved by the proposed approach. In GTDEA, packets are forwarded toward the base station using an inter-cluster routing technique, in which intermediate cluster heads relay them onward. Simulation results reveal that GTDEA improves the network performance in terms of throughput, lifetime, and power consumption.
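The game-theoretic selection is reduced here to a plain utility maximization over the three named parameters. The weights, node values, and the actual payoff structure of GTDEA are invented for illustration; the paper's game formulation is richer than a single weighted sum.

```python
# Hypothetical sketch: each node's utility combines normalized residual
# energy (RE), signal strength (RSSI) and packet reception rate (PRR);
# the node with the highest utility becomes cluster head.
def utility(node, w_re=0.5, w_rssi=0.25, w_prr=0.25):
    return w_re * node["re"] + w_rssi * node["rssi"] + w_prr * node["prr"]

nodes = {                      # invented normalized measurements
    "n1": {"re": 0.9, "rssi": 0.6, "prr": 0.8},
    "n2": {"re": 0.4, "rssi": 0.9, "prr": 0.9},
    "n3": {"re": 0.7, "rssi": 0.7, "prr": 0.7},
}
cluster_head = max(nodes, key=lambda n: utility(nodes[n]))
print(cluster_head)  # → n1
```

Weighting residual energy most heavily favors nodes that can serve longest, which is the energy-diligent behavior the algorithm targets.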
Abstract: Although Mobile Wireless Sensor Networks (MWSNs), which consist of mobile sensor nodes (MSNs), can cover a wide observation region using a small number of sensor nodes, they need to construct a network that collects the sensing data at the base station by moving the MSNs. As an effective method, a network construction method based on Virtual Rails (VRs), referred to as the VR method, has been proposed. In this paper, we propose two effective techniques for the VR method. They can prolong the operation time of the network, which is limited by the battery capacities of the MSNs and their energy consumption. The first technique, an effective arrangement of VRs, nearly equalizes the number of MSNs belonging to each VR. The second technique, an adaptive movement method for MSNs, takes into account the residual energy of the battery. In simulations, we demonstrate that each technique can improve the network lifetime and that the combination of both techniques is the most effective.
Abstract: Energy consumption data, in particular those involving public buildings, are affected by many factors: the building structure, climate/environmental parameters, construction, system operating conditions, and user behavior patterns. Traditional methods of data analysis are insufficient. This paper examines data mining technology to determine its application in the analysis of building energy consumption data, including energy consumption prediction, fault diagnosis, and optimal operation. Recent literature is reviewed and summarized, the problems faced by data mining technology in the area of energy consumption data analysis are enumerated, and directions for future studies are given.
Abstract: Model updating is an inverse eigenvalue problem that concerns the modification of an existing but inaccurate model using measured modal data. In this paper, an efficient gradient-based iterative method is developed for updating the mass, damping, and stiffness matrices simultaneously using a few complex measured modal data. Convergence analysis indicates that the iterative solutions always converge to the unique minimum-Frobenius-norm symmetric solution of the model updating problem when a special kind of initial matrices is chosen.
Abstract: Introduction: The aim is to update ourselves on, and understand, the latest electronic formats available to healthcare providers and how they can be used and developed according to standards. The idea is to compare the keeping of patients' manual medical records with the maintenance of patients' electronic information in a healthcare setup, and to adopt the right technology for the organization so as to improve the quality and quantity of the healthcare we provide. Objective: To explain the terms Electronic Medical Record (EMR), Electronic Health Record (EHR), and Personal Health Record (PHR), and to select the best option from the available electronic sources and software before implementation, ensuring that the technology can be used by end users without doubts or difficulties. The idea is also to examine the uses of, and barriers to, EMR, EHR, and PHR. Aim and Scope: The target is to enable healthcare providers such as physicians, nurses, and therapists, as well as medical bill reimbursement services, insurers, and government, to access patient information in an easy and systematic manner without diluting the confidentiality of patient information. Method: Health information technology can be implemented with the help of organizations that provide legal guidelines and stand by the healthcare provider. The main objective is to select correct, embedded, and affordable database management software capable of generating large-scale data, and in parallel to keep abreast of the latest software available in the market. Conclusion: The question lies in implementing the electronic information system with healthcare providers and organizations. Clinicians are the main users of the technology and enable us to "go paperless". Technology changes from day to day, so systems must be kept sound and up to date. Basically, the idea is to show how to store data electronically in a safe and secure way. All three formats exemplify the fact that an electronic format has its own benefits as well as barriers.
Abstract: Torrefaction of biomass pellets is considered a useful pretreatment technology for converting them into a high-quality solid biofuel that is more suitable for pyrolysis, gasification, combustion, and co-firing applications. In the course of torrefaction, the temperature varies across the pellet, and therefore chemical reactions proceed unevenly within it. Nevertheless, a uniform thermal distribution along the pellet is generally assumed. The torrefaction process of a single cylindrical pellet is modeled here, accounting for heat transfer coupled with chemical kinetics; a drying sub-model is also introduced. The non-stationary process of wood pellet decomposition is described by a system of non-linear partial differential equations for the temperature and mass. The model captures well the main features of the experimental data.
Abstract: This paper links together the concepts of job satisfaction, work engagement, trust, job meaningfulness, and loyalty to the organisation, focusing on a specific type of employment: academic jobs. The research investigates the relationships between job satisfaction, work engagement, and loyalty, as well as the impact of trust and job meaningfulness on work engagement and loyalty. The survey was conducted in one of the largest Latvian higher education institutions, and the sample was drawn from academic staff (n=326). A structured questionnaire with 44 reflective-type questions was developed to measure the constructs. Data were analysed using SPSS and SmartPLS software. A variance-based structural equation modelling (PLS-SEM) technique was used to test the model and to identify the most important factors relevant to employee engagement and loyalty. The first-order model included two endogenous constructs (loyalty, comprising intention to stay and to recommend working in the organisation, and employee engagement) as well as six exogenous constructs (feeling of fair treatment and trust in management; career growth opportunities; compensation, pay, and benefits; management; colleagues and teamwork; and job meaningfulness). Job satisfaction was modelled as a second-order construct, and both first- and second-order models were used in the data analysis. It was found that academics are more engaged than satisfied with their work, and the main reason was found to be job meaningfulness, which is a significant predictor of work engagement but not of job satisfaction. Compensation is not significantly related to work engagement, only to job satisfaction. Trust was significantly related neither to engagement nor to satisfaction; however, it appeared to be a significant predictor of loyalty and of intention to stay with the university. The paper revealed academic jobs as a specific kind of employment in which employees can be more engaged than satisfied, and highlighted the specific role of job meaningfulness in university settings.
Abstract: This paper outlines the development of an experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique involves using high-speed cameras to capture Schlieren images of the supersonic jet shear layers, which are then processed by an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical-flow-based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 4 bar and an exit Mach number of 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As the application of optical flow techniques to supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the robustness and accuracy of the optical flow algorithm can be further improved. Despite these challenges, this supersonic flow measurement technique may offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
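The classical Horn-Schunck scheme underlying the adapted algorithm can be sketched compactly. The regularization weight, the simple derivative approximations, and the synthetic image pair below are illustrative only, not the paper's implementation or its Schlieren data.

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, iters=100):
    """Classical Horn-Schunck optical flow: data term plus global
    smoothness, solved by Jacobi-style iterations."""
    im1, im2 = im1.astype(float), im2.astype(float)
    Ix = np.gradient(im1, axis=1)        # spatial derivatives
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1                       # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(iters):
        u_avg, v_avg = avg(u), avg(v)
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v

# Synthetic pair: a bright block shifted one pixel to the right
frame1 = np.zeros((16, 16))
frame1[6:10, 4:8] = 1.0
frame2 = np.roll(frame1, 1, axis=1)

u, v = horn_schunck(frame1, frame2)
print(round(u[7, 6], 2))  # horizontal flow is positive near the moving block
```

The paper's validation on synthetic images follows the same spirit: apply the algorithm to an image pair with a known displacement and check that the recovered flow field matches it.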
Abstract: The present study investigates the effectiveness of newly designed clayey pellets (fired clay pellets with diameters of 5 and 8 mm, and unfired clay pellets with a diameter of 15 mm) as beds in a column adsorption process. Adsorption experiments in batch mode were performed before the column experiment in order to determine the packing order of the adsorbents in the column to be designed in the investigation. The column experiment was performed using a known mass of the clayey beds and a known volume of the waste printing developer to be purified. The column was filled in the following order: fired clay pellets of 5 mm diameter, fired clay pellets of 8 mm diameter, and unfired clay pellets of 15 mm diameter. The selected order of the adsorbents showed high removal efficiencies for zinc (97.8%) and copper (81.5%) ions, better than those obtained with the previously existing adsorption mode. The experimental data obtained present a good basis for the selection of an appropriate column fill, but further testing is necessary in order to obtain more accurate results.
Abstract: At the Savonia University of Applied Sciences (UAS), the curriculum and studies have been improved by applying an Open Innovation Space (OIS) approach based on multidisciplinary action learning. The key elements of the OIS ideology are work-life orientation and student-centric communal learning. In this approach, every participant can learn from the others, and innovations are created. In this social-innovation educational approach, all practices are carried out in close collaboration with enterprises in real-life settings, not in classrooms. As an example, this paper shows how Savonia UAS's Future Food RDI hub (FF) implements OIS practices by providing food product development and consumer research services for enterprises in close collaboration with academicians, students, and consumers. In particular, one example of OIS experimentation in the field is provided by a consumer study carried out using a verbal analysis protocol combined with audio-visual observation (VAP-WAVO). In this case, all co-learners acted together in supermarket settings to collect the relevant data for the product development and marketing departments of a company. The company benefitted from the results obtained; students were more satisfied with their studies; and educators and academicians obtained good evidence for further collaboration as well as for renewing curriculum contents based on the requirements of working life. In addition, society will benefit over time as young university graduates find careers more easily through their OIS-related food science studies. This knowledge interaction model also renews educational practices and brings working life closer to educational research institutes.
Abstract: In the present study, a numerical approach describing the pyrolysis of a single solid wood particle is used to study the influence of conditions such as particle size, heat transfer coefficient, reactor temperature, and heating rate on the duration of the pyrolysis cycle. Mathematical modeling was employed to simulate the heat transfer, mass transfer, and kinetic processes inside the reactor. The evolution of the mass loss as well as the evolution of the temperature inside the thick piece are investigated numerically. The elaborated model was also employed to study the effect of the reactor temperature and the heating rate on the change of the temperature and the local mass loss inside the piece of wood. The obtained results are in good agreement with experimental data available in the literature.