Abstract: While the feature sizes of recent Complementary Metal
Oxide Semiconductor (CMOS) devices decrease, static power
increasingly dominates their energy consumption. Thus, power savings
obtained from Dynamic Voltage and Frequency Scaling (DVFS) are
diminishing, and the temporary shutdown of cores or other microchip
components becomes more worthwhile. A consequence of powering off unused parts of a chip is that the
relative difference between idle and fully loaded power consumption
increases. This means that future chips and whole server systems gain
more power-saving potential through power-aware load balancing,
whereas in the past this power-saving approach had only
limited effect and thus was not widely adopted. While powering
off complete servers has been used to save energy, it will become superfluous
in many cases once cores can be powered down. An important
advantage that comes with this is a largely reduced time to respond
to increased computational demand. We include the above developments in a server power model
and quantify the advantage. Our conclusion is that strategies used in
datacenters to decide when to power off server systems might be applied in the
future at the core level, while load balancing mechanisms previously
used at the core level might be used in the future at the server level.
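As a rough illustration of the trade-off this abstract describes, the following sketch contrasts a server with and without core-level power gating; all wattage figures are invented assumptions chosen only to show the shape of the idle/loaded gap, not measurements from the paper's model.

```python
# Toy server power model: with core gating, idle cores cost almost
# nothing, so the gap between idle and fully loaded power widens and
# power-aware load balancing at core level becomes worthwhile.

P_IDLE_SERVER = 100.0   # W, fixed server overhead (assumed)
P_CORE_ACTIVE = 20.0    # W, per fully loaded core (assumed)
P_CORE_GATED = 1.0      # W, per power-gated idle core (assumed)
CORES = 8

def server_power(active_cores, core_gating):
    """Power draw of one running server with `active_cores` under load.
    Without gating, an idle core still draws a fraction of active power."""
    idle_cores = CORES - active_cores
    idle_cost = idle_cores * (P_CORE_GATED if core_gating else P_CORE_ACTIVE * 0.3)
    return P_IDLE_SERVER + active_cores * P_CORE_ACTIVE + idle_cost

print(server_power(2, core_gating=False))  # about 176 W
print(server_power(2, core_gating=True))   # about 146 W
```

Under these assumed figures, gating six idle cores saves about 30 W per lightly loaded server while keeping it available, which is the responsiveness advantage over shutting the whole server down.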
Abstract: Flash floods occur over short rainfall intervals,
from 1 hour to 12 hours, in small and medium basins. Flash floods
typically have two characteristics: large water flow and high flow
velocity. Flash floods occur at hill-valley sites (strips of lowland
terrain) in catchments with a sufficiently large drainage area, steep
basin slope, and heavy rainfall. The risk of flash floods is determined
through the Gridded Basin Flash Flood Potential Index (GBFFPI). The Flash
Flood Potential Index (FFPI) is determined from the terrain-slope,
soil-erosion, land-cover, land-use, and rainfall flash flood indices.
To determine the GBFFPI, each cell in a map is considered as the outlet
of a water accumulation basin. The GBFFPI of the cell is determined as the
basin-average value of the FFPI over the corresponding water accumulation
basin. Based on GIS, a tool was developed to compute the GBFFPI using the
ArcObjects SDK for .NET. The GBFFPI maps are built in two
types: GBFFPI including the rainfall flash flood index (for real-time flash
flood warning) or GBFFPI excluding the rainfall flash flood index.
The GBFFPI tool can be used to identify high flash flood potential
sites in a large region as quickly as possible. The GBFFPI improves on the
conventional FFPI. The advantage of the GBFFPI is that it
takes the basin response (the interaction of cells) into account and
identifies flash flood sites (strips of lowland terrain) more accurately,
whereas the conventional FFPI considers each cell in isolation and does
not account for the interaction between cells. The GBFFPI map of
QuangNam, QuangNgai, DaNang, and Hue was built and exported to
Google Earth. The obtained map confirms the scientific basis of the GBFFPI.
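The per-cell basin-average computation described above can be sketched as follows. The D8-style flow-direction encoding, the toy grids, and the brute-force upstream traversal are illustrative assumptions for a tiny raster, not the ArcObjects implementation.

```python
import numpy as np

def upstream_cells(flow_dir, outlet):
    """Collect all cells draining (directly or indirectly) into `outlet`.
    `flow_dir[r, c]` holds the (dr, dc) step toward the downstream neighbor."""
    rows, cols = flow_dir.shape[:2]
    basin = {outlet}
    stack = [outlet]
    while stack:
        r, c = stack.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= nr < rows and 0 <= nc < cols:
                    # neighbor drains into (r, c) if its step points back at us
                    if tuple(flow_dir[nr, nc]) == (-dr, -dc) and (nr, nc) not in basin:
                        basin.add((nr, nc))
                        stack.append((nr, nc))
    return basin

def gbffpi(ffpi, flow_dir):
    """Basin-average FFPI for every cell, each treated as a basin outlet."""
    out = np.zeros_like(ffpi, dtype=float)
    for r in range(ffpi.shape[0]):
        for c in range(ffpi.shape[1]):
            basin = upstream_cells(flow_dir, (r, c))
            out[r, c] = np.mean([ffpi[rc] for rc in basin])
    return out

# Toy 2x2 raster where every cell drains toward the corner cell (1, 1)
ffpi = np.array([[1.0, 2.0], [3.0, 4.0]])
flow_dir = np.array([[[1, 1], [1, 0]], [[0, 1], [0, 0]]])
print(gbffpi(ffpi, flow_dir))  # cell (1,1) averages over all four cells
```

The corner cell, whose accumulation basin is the whole grid, gets the mean of all four FFPI values (2.5), while cells with no upstream area keep their own FFPI, which is exactly the cell-versus-basin distinction the abstract draws.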
Abstract: Jet noise is generated by the rocket exhaust plume during rocket engine testing. A domain decomposition approach is applied to jet noise prediction in this paper. The aeroacoustic coupling is based on splitting the problem into acoustic source generation and sound propagation in separate physical domains. Large Eddy Simulation (LES) is used to simulate the supersonic jet flow. Based on the simulated flow-fields, the sound pressure level distribution of the jet noise is obtained by applying the Ffowcs Williams-Hawkings (FW-H) acoustic equation and the Fourier transform. The calculation results show that complex structures of expansion waves, compression waves, and the turbulent boundary layer can occur due to the strong interaction between the gas jet and the ambient air. In addition, the jet core region, the shock cells, and the sound pressure level of the gas jet increase with increasing nozzle size. Importantly, the numerical simulation results for the far-field sound are in good agreement with the experimental measurements in directivity.
Abstract: Magnetic Resonance Imaging Contrast Agents
(MRI-CM) are significant in clinical and biological imaging, as
they have the ability to alter the normal tissue contrast, thereby
affecting the signal intensity to enhance the visibility and detectability
of images. Superparamagnetic Iron Oxide (SPIO) nanoparticles,
coated with dextran or carboxydextran, are currently available for
clinical MR imaging of the liver. Most SPIO contrast agents are
T2-shortening agents, and Resovist (Ferucarbotran) is a
clinically tested, organ-specific SPIO agent with a low-molecular-weight
carboxydextran coating. The enhancement effect of
Resovist depends on its relaxivity, which in turn depends on factors
such as magnetic field strength, concentration, nanoparticle properties,
pH, and temperature. Therefore, this study was conducted to
investigate the impact of field strength and different contrast
concentrations on the enhancement effects of Resovist. The study
explored the MRI signal intensity of Resovist in the physiological
range of plasma from a T2-weighted spin echo sequence at three
magnetic field strengths, 0.47 T (r1=15, r2=101), 1.5 T (r1=7.4,
r2=95), and 3 T (r1=3.3, r2=160), and across a range of contrast
concentrations by mathematical simulation. The relaxivities r1 and r2
(L mmol^-1 s^-1) were obtained from a previous study, and the selected
concentrations were 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5,
0.6, 0.7, 0.8, 0.9, 1.0, 2.0, and 3.0 mmol/L. T2-weighted images were
simulated using a TR/TE of 2000 ms/100 ms. According to the
reference literature, the r1 relaxivity tends to decrease with
increasing magnetic field strength, while r2 did not show any
systematic relationship with the selected field strengths. In parallel,
the results of this study revealed that the signal intensity of Resovist
tends to be higher at lower concentrations than at higher concentrations. The
highest signal intensity was observed at the low field strength
of 0.47 T. The maximum signal intensities for 0.47 T, 1.5 T, and 3 T
were found at concentration levels of 0.05, 0.06, and 0.05 mmol/L,
respectively. Furthermore, it was revealed that at concentrations
higher than these, the signal intensity decreased
exponentially. An inverse relationship can be found between field
strength and T2 relaxation time: as the field strength
increased, the T2 relaxation time decreased accordingly. However,
the resulting T2 relaxation times were not significantly different between
0.47 T and 1.5 T in this study. Moreover, a linear correlation of the
transverse relaxation rate (1/T2, s^-1) with the concentration of
Resovist can be observed. According to these results, it can be concluded
that the concentration of SPIO nanoparticle contrast agents and the
field strength of MRI are two important parameters that affect the signal intensity of the T2-weighted SE sequence. Therefore, these
two parameters should be considered prudently in MR imaging.
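The kind of simulation the abstract describes can be sketched with a standard spin-echo signal model in which the contrast agent shortens T1 and T2 through its relaxivities. The baseline plasma relaxation times T10 and T20 below are assumed values for illustration, not the ones used in the study.

```python
import numpy as np

TR, TE = 2000.0, 100.0      # ms, as stated in the abstract
T10, T20 = 1500.0, 250.0    # ms, assumed baseline plasma values (not from the study)

def se_signal(conc, r1, r2, t10=T10, t20=T20):
    """T2-weighted spin-echo signal for concentration `conc` (mmol/L).
    r1, r2 are in L mmol^-1 s^-1, so rates are converted to 1/ms."""
    R1 = 1.0 / t10 + r1 * conc / 1000.0   # longitudinal rate, 1/ms
    R2 = 1.0 / t20 + r2 * conc / 1000.0   # transverse rate, 1/ms
    return (1.0 - np.exp(-TR * R1)) * np.exp(-TE * R2)

concs = np.array([0.05, 0.1, 0.5, 1.0, 3.0])
for field, (r1, r2) in {"0.47 T": (15, 101), "1.5 T": (7.4, 95), "3 T": (3.3, 160)}.items():
    print(field, np.round(se_signal(concs, r1, r2), 3))
```

Because R2 grows linearly with concentration while the T1 recovery term saturates, the exp(-TE*R2) factor dominates and the signal falls off roughly exponentially at higher concentrations, matching the behaviour the study reports.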
Abstract: The increasing availability of information about earth
surface elevation (Digital Elevation Models, DEM) generated from
different sources (remote sensing, aerial images, Lidar) poses the
question of how to integrate this huge amount of data and make it
available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the
quality of data management plays a fundamental role. Due to the high
acquisition costs and the huge amount of generated data, high-resolution
terrain surveys tend to be small or medium sized and
available only for limited portions of the Earth. Hence the need to merge
large-scale height maps, which are typically made available for free at the
worldwide level, with very specific high-resolution datasets. On the
other hand, the third dimension improves the user experience and the
quality of data representation, unlocking new possibilities in data
analysis for civil protection, real estate, urban planning, environmental
monitoring, etc. Open-source 3D virtual globes, which are
trending topics in Geovisual Analytics, aim at improving the
visualization of geographical data provided by standard web services
or in proprietary formats. Typically, however, 3D virtual globes do not
offer an open-source tool that allows the generation of a terrain
elevation data structure starting from heterogeneous-resolution terrain
datasets. This paper describes a technological solution aimed at setting
up a so-called “Terrain Builder”. This tool is able to merge
heterogeneous-resolution datasets and to provide a multi-resolution
worldwide terrain service fully compatible with CesiumJS and
therefore accessible via the web using a traditional browser without any
additional plug-in.
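A minimal sketch of the heightmap-merging idea is to prefer high-resolution samples wherever a survey exists and fall back to an upsampled worldwide layer elsewhere. The grids, the NaN no-data convention, and the nearest-neighbor upsampling below are illustrative assumptions, not the Terrain Builder's actual tiling scheme.

```python
import numpy as np

def merge_dems(low_res, high_res, scale):
    """Merge two DEMs covering the same extent. `high_res` has `scale`x
    the resolution of `low_res`; NaN marks cells the high-resolution
    survey did not cover."""
    # nearest-neighbor upsample of the coarse worldwide layer
    upsampled = np.kron(low_res, np.ones((scale, scale)))
    # keep surveyed heights, fill gaps from the upsampled layer
    return np.where(np.isnan(high_res), upsampled, high_res)

low = np.array([[1.0, 2.0], [3.0, 4.0]])    # coarse worldwide heights
high = np.full((4, 4), np.nan)              # sparse local survey
high[0, 0] = 10.0
print(merge_dems(low, high, 2))
```

A production tool would additionally reproject, feather the seams, and emit quantized-mesh tiles for CesiumJS, but the core decision per cell is this same "survey wins, else fall back" rule.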
Abstract: Accounting for 40% of total world energy consumption,
building systems are developing into technically complex large
energy consumers suitable for the application of sophisticated power
management approaches that greatly increase energy efficiency
and even make buildings active energy market participants. A centralized
control system for building heating and cooling, managed by
economically optimal model predictive control, shows promising
results, with an estimated 30% increase in energy efficiency. The research
focuses on the implementation of such a method in a case study
performed on two floors of our faculty building, with corresponding
wireless sensor data acquisition, remote heating/cooling units, and a
central climate controller. The building walls are mathematically modeled
with the corresponding material types, surface shapes, and sizes. The models
are then exploited to predict thermal characteristics and changes in
different building zones. External influences such as environmental
conditions and weather forecasts, occupant behavior, and comfort
demands are all taken into account when deriving price-optimal climate
control. Finally, a DC microgrid with photovoltaics, a wind turbine, a
supercapacitor, batteries, and fuel cell stacks is added to make the
building a unit capable of active participation in a price-varying
energy market. The computational burden of applying model predictive
control to such a complex system is relaxed through a hierarchical
decomposition of the microgrid and climate control, where the
former is designed as the higher hierarchical level with pre-calculated
price-optimal power flow control, and the latter as the lower-level
control responsible for ensuring thermal comfort and exploiting
the optimal supply conditions enabled by microgrid energy flow
management. Such an approach is expected to enable the inclusion
of more complex building subsystems in order to
further increase energy efficiency.
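The lower-level, price-optimal climate control idea can be sketched as a small linear program on a first-order zone model driven by a time-varying energy price. All parameters below (thermal coefficients, prices, comfort band) are illustrative assumptions, not the case study's identified building model.

```python
import numpy as np
from scipy.optimize import linprog

# Zone model: T[k+1] = a*T[k] + b*u[k] + (1-a)*T_out, heating input u >= 0
a, b, T_out, T0 = 0.9, 0.5, 5.0, 20.0   # assumed coefficients and states
N = 6
price = np.array([1.0, 1.0, 0.2, 0.2, 1.0, 1.0])   # time-varying price (assumed)
T_min, T_max, u_max = 19.0, 23.0, 4.0              # comfort band and actuator limit

# Stack the dynamics: T[k] = a^k*T0 + (1 - a^k)*T_out + sum_j a^(k-1-j)*b*u[j]
free = np.array([a**k * T0 + (1 - a**k) * T_out for k in range(1, N + 1)])
G = np.array([[a**(k - 1 - j) * b if j < k else 0.0
               for j in range(N)] for k in range(1, N + 1)])

# Minimize energy cost subject to comfort: T_min <= free + G u <= T_max
A_ub = np.vstack([G, -G])
b_ub = np.hstack([T_max - free, free - T_min])
res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, u_max)] * N)
print(res.status, np.round(res.x, 2))   # status 0 = optimal heating schedule
```

The optimizer naturally shifts heating into the cheap hours while the comfort constraints hold, which is the same price-responsive behaviour the hierarchical scheme delegates to the climate-control layer.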
Abstract: Seeking and sharing knowledge on online forums
have made them popular in recent years. Although online forums are
valuable sources of information, retrieving reliable threads with
high-quality content is an issue due to the variety of message sources.
The majority of existing information retrieval systems ignore
the quality of retrieved documents, particularly in the field of thread
retrieval. In this research, we present an approach that employs
various quality features in order to assess the quality of retrieved
threads. Different aspects of content quality, including completeness,
comprehensiveness, and politeness, are assessed using these features,
which leads to finding not only textually but also conceptually relevant
threads for a user query within a forum. To analyse the influence of
the features, we used an adapted version of the voting model for thread
search as the retrieval system. We equipped it with each feature individually,
and also with various combinations of features, over multiple
runs. The results show that incorporating the quality features
significantly enhances the effectiveness of the utilised retrieval
system.
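One way to picture combining a voting-model score with quality features is the sketch below: each retrieved message "votes" for its thread, and thread-level quality features re-weight the aggregate. The feature names mirror the abstract, but the linear combination and weights are hypothetical, not the paper's actual scoring function.

```python
# Hypothetical quality-aware thread scoring in the spirit of the voting
# model: relevance votes from messages, boosted by quality features.

def thread_score(message_scores, quality, weights):
    """message_scores: relevance of each retrieved message in the thread;
    quality: thread-level quality features, each in [0, 1]."""
    votes = sum(message_scores)                       # voting model: sum of votes
    q = sum(weights[f] * quality[f] for f in weights) # weighted quality boost
    return votes * (1.0 + q)

weights = {"completeness": 0.4, "comprehensiveness": 0.4, "politeness": 0.2}
quality = {"completeness": 1.0, "comprehensiveness": 0.5, "politeness": 1.0}
print(thread_score([0.8, 0.5], quality, weights))
```

A complete, polite thread thus outranks an equally relevant but low-quality one, which is the effect the evaluation in the paper measures.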
Abstract: The 21st century has transformed the labor market
landscape, posing new and different demands on
university graduates as well as university lecturers. This means that
the knowledge and academic skills students acquire in the course of
their studies should be applicable and transferable from the higher
education context to their future professional careers. Given the
context of the Languages for Specific Purposes (LSP) classroom, the
teachers’ objective is not only to teach the language itself, but also to
prepare students to use that language as a medium to develop generic
skills and competences. These include media and information
literacy, critical and creative thinking, problem-solving and analytical
skills, effective written and oral communication, as well as
collaborative work and social skills, all of which are necessary to
make university graduates more competitive in everyday professional
environments. On the other hand, due to limitations of time and large
numbers of students in classes, the frequently topic-centered syllabus
of LSP courses places considerable focus on acquiring the subject
matter and specialist vocabulary instead of sufficient development of
skills and competences required by students’ prospective employers.
This paper intends to explore some of those issues as viewed both by
LSP lecturers and by business professionals in their respective
surveys. The surveys were conducted among more than 50 LSP
lecturers at higher education institutions in Croatia, more than 40 HR
professionals and more than 60 university graduates with degrees in
economics and/or business working in management positions in
mainly large and medium-sized companies in Croatia. Various elements of LSP course content have been taken into
consideration in this research, including reading and listening
comprehension of specialist texts, acquisition of specialist vocabulary
and grammatical structures, as well as presentation and negotiation
skills. The ability to hold meetings, conduct business correspondence,
write reports, academic texts, case studies and take part in debates
were also taken into consideration, as well as informal business
communication, business etiquette and core courses delivered in a
foreign language. The results of the surveys conducted among LSP
lecturers will be analyzed with reference to what extent those
elements are included in their courses and how consistently and
thoroughly they are evaluated according to their course requirements.
Their opinions will be compared to the results of the surveys
conducted among professionals from a range of industries in Croatia
so as to examine how useful and important they perceive those same
elements of LSP course content to be in their working environments.
Such comparative analysis will thus show to what extent the syllabi
of LSP courses meet the demands of the employment market when it
comes to the students’ language skills and competences, as well as
transferable skills. Finally, the findings will also be compared to the
observations based on practical teaching experience and the relevant
sources that have been used in this research. In conclusion, the ideas and observations in this paper are merely
open-ended questions that do not have conclusive answers, but might
prompt LSP lecturers to re-evaluate the content and objectives of
their course syllabi.
Abstract: Patient-specific models are instance-based learning
algorithms that take advantage of the particular features of the patient
case at hand to predict an outcome. We introduce two patient-specific
algorithms based on the decision tree paradigm that use the area under
the ROC curve (AUC) as the metric to select an attribute. We apply the
patient-specific algorithms to predict outcomes in several datasets,
including medical datasets. Compared to the entropy-based
patient-specific decision path (PSDP) and CART methods, the
AUC-based patient-specific decision path models performed
equivalently on AUC. Our results provide support for patient-specific
methods being a promising approach for making clinical predictions.
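The attribute-selection idea can be sketched as follows: score each candidate attribute by the AUC obtained when it alone ranks the training outcomes, and pick the best. The toy data and the orientation-free scoring are illustrative assumptions; the paper's algorithms build a full patient-specific decision path rather than this single step.

```python
# Illustrative AUC-based attribute selection for one step of a
# decision-path algorithm, using the rank-sum (Mann-Whitney) AUC.

def auc(labels, scores):
    """Area under the ROC curve via pairwise comparisons."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_attribute(rows, labels):
    """Pick the binary attribute whose values best rank the outcomes."""
    scores = {}
    for a in range(len(rows[0])):
        s = auc(labels, [r[a] for r in rows])
        scores[a] = max(s, 1.0 - s)   # a perfectly reversed split is equally useful
    return max(scores, key=scores.get)

rows = [(1, 0), (1, 1), (0, 0), (0, 1)]   # attribute 0 predicts perfectly
labels = [1, 1, 0, 0]
print(best_attribute(rows, labels))
```

In an entropy-based variant, only the `auc` scoring function would change, which is exactly the comparison the abstract reports.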
Abstract: This paper discusses the applicability of a numerical model for predicting damage from an accidental hydrogen explosion in a hydrogen facility. The numerical model was based on the unstructured finite volume method (FVM) code “NuFD/FrontFlowRed”. To simulate unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model expressed the propagation of the premixed flame surface and the diffusion combustion process, respectively. To validate this numerical model, we simulated two previous hydrogen explosion tests. One is an open-space explosion test, in which the source was a prismatic 5.27 m3 volume containing a 30% hydrogen-air mixture. A reinforced concrete wall was set 4 m away from the front surface of the source. The source was ignited at the bottom center by a spark. The other is a vented-enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening on one side. A vent area of 5.4 m2 was used. The test was performed with ignition at the center of the wall opposite the vent. Hydrogen-air mixtures with hydrogen concentrations close to 18 vol.% were used in the tests. The results from the numerical simulations are compared with the previous experimental data to assess the accuracy of the numerical model, and we verified that the simulated overpressures and flame time-of-arrival data were in good agreement with the results of the two explosion tests.
Abstract: Nowadays, education cannot be imagined without digital technologies, which broaden the horizons of teaching and learning processes. Several universities are offering online courses. For evaluation purposes, e-examination systems are being widely adopted in academic environments, and multiple-choice tests are extremely popular. In the move from traditional examinations to e-examination, Moodle is being used as a Learning Management System (LMS). Moodle logs every click that students make for answering and navigation purposes in an e-examination. Data mining has been applied in various domains, including retail sales and bioinformatics. In recent years, there has been increasing interest in the use of data mining in e-learning environments, where it has been applied to discover, extract, and evaluate parameters related to students' learning performance. The combination of data mining and e-learning is still in its infancy. Log data generated by students during an online examination can be used to discover knowledge with the help of data mining techniques. In web-based applications, the number of right and wrong answers in the test result is not sufficient to assess and evaluate a student's performance, so assessment techniques must be intelligent. If a student cannot answer the question asked by the instructor, an easier question can be asked; otherwise, a more difficult question on a similar topic can be posed. To do so, it is necessary to identify the difficulty level of the questions, and the proposed work concentrates on this issue. Among data mining techniques, clustering is used in this work. The method decides the difficulty levels of the questions, categorizes them as tough, easy, or moderate, and later serves them to students based on their performance. The proposed experiment categorizes the question set and also groups the students based on their performance in the examination. This will help the instructor to guide the students more specifically.
In short, the mined knowledge helps to support, guide, facilitate, and enhance learning as a whole.
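The clustering step described above can be sketched as follows. The features (fraction of wrong answers and mean answering time mined from the logs), the toy data, and the deterministic seeding are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def kmeans(X, init, iters=20):
    """Plain k-means with explicit initial-center indices for determinism."""
    centers = X[list(init)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# per question: wrong-answer fraction, mean answering time in minutes
X = np.array([[0.10, 0.5], [0.15, 0.6], [0.50, 1.5],
              [0.55, 1.4], [0.90, 3.0], [0.85, 2.8]])
labels, centers = kmeans(X, init=(0, 2, 4))   # seed one center per presumed band
order = np.argsort(centers[:, 0])             # rank clusters by wrong fraction
names = {int(order[0]): "easy", int(order[1]): "moderate", int(order[2]): "tough"}
print([names[int(l)] for l in labels])
```

Questions with few wrong answers and short answering times land in the "easy" cluster and those with many wrong answers in "tough", which is the categorization that then drives the adaptive question selection.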
Abstract: In the aviation industry, many faults may occur during the maintenance processes and assembly operations of complex structured aircraft because of the strong dependencies among their components. These faults adversely affect the quality of aircraft parts or developed modules. Technical employees require a long time and considerable labor to check the correctness of each component. In addition, personnel must be trained regularly because of ever-growing and changing technology, and the cost of this training is generally very high. Augmented Reality (AR) technology radically reduces the cost of training and improves its effectiveness. In this study, the usage of AR technology in the aviation industry has been investigated, and the effectiveness of AR with heads-up display glasses has been examined. An application has been developed to compare the production process using AR with the manual one.
Abstract: For several hundred years, the design of railway tracks
has practically remained unchanged. Traditionally, rail tracks are
placed on a ballast layer due to several reasons, including economy,
rapid drainage, and high load-bearing capacity. The primary function
of ballast is to distribute dynamic track loads to the sub-ballast and
subgrade layers, while also providing lateral resistance and allowing
for rapid drainage. Under repeated train loads, the ballast becomes
fouled due to ballast degradation and the intrusion of fines, which
adversely affects the strength and deformation behaviour of the ballast.
This paper presents the use of three-dimensional discrete element
method (DEM) in studying the shear behaviour of the fouled ballast
subjected to direct shear loading. Irregularly shaped particles of
ballast were modelled by grouping many spherical balls together in
appropriate sizes to simulate representative ballast aggregates. Fouled
ballast was modelled by injecting a specified number of miniature
spherical particles into the void spaces. The DEM simulation
highlights that the peak shear stress of the ballast assembly decreases
and the dilation of fouled ballast increases with an increasing level of
fouling. Additionally, the distributions of the contact force chains and
particle displacement vectors were captured during the shearing process,
explaining the formation of the shear band and the evolution of the
volumetric change of the fouled ballast.
Abstract: Aerobic dance has become a popular mode of
exercise, especially among women, due to its fun nature. With a catchy
music background and joyful dance steps, aerobic dancers are
able to have fun while sweating it out. Depending on its level of
intensity, aerobic dance may also improve and maintain
cardiorespiratory fitness, besides being a great tool for weight loss.
This study intends to show that aerobic dance activity can bring the
same, if not better, health benefits as other types of
cardiovascular exercise such as jogging and cycling. The objective of
this study was to evaluate and identify the effect of six weeks of aerobic
dance on cardiovascular fitness and weight loss among women. This
study, which was held in the Seremban Fit Challenge, used a
quasi-experimental design. The subjects comprised a total of 14
women (n = 14) with mean age (32.4 ± 9.1 years), weight (65.93 ±
11.24 kg), and height (165.36 ± 3.46 cm) who joined the Seremban Fit
Challenge Season 13. The subjects were asked to join an aerobic
dance class with a duration of one hour for six weeks in a row. As for
the outcomes, cardiovascular fitness was measured with a 1-mile run
test, while any changes in weight were measured using a weighing
scale. The result showed that there was a significant difference
between the pre- and post-test for cardiovascular fitness (p = 0.02).
Abstract: This paper describes the ab-initio design, development, and calibration results of an Optical Sensor Ground Reaction Force Measurement Platform (OSGRFP) for gait and geriatric studies. The developed system employs an array of FBG sensors to measure the respective ground reaction forces along all three mutually perpendicular axes (X, Y, and Z). The novelty of this work is twofold. One aspect is its unique ability to resolve the triaxial resultant forces during stance into the respective pure-axis loads, and the other is the applicability of inherently advantageous FBG sensors, which are most suitable for biomechanical instrumentation. To validate the response of the FBG sensors installed in the OSGRFP and to measure the cross-sensitivity to forces applied in other directions, load sensors with indicators were used. Further, relevant mathematical formulations are presented for extracting the respective ground reaction forces from the wavelength shifts/strains of the FBG sensors on the OSGRFP. This device has implications for understanding foot function, identifying issues in the gait cycle, and measuring discrepancies between the left and right foot. The device also provides a method to quantify and compare the relative postural stability of different subjects under test, which has implications for post-surgical rehabilitation, geriatrics, and optimizing training protocols for sports personnel.
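The force-extraction step mentioned above can be sketched as a linear calibration problem: the wavelength shifts of the FBG channels are mapped to the three axis loads through a calibration matrix that includes cross-sensitivity terms. The matrix values below are invented for illustration, not the OSGRFP's actual calibration.

```python
import numpy as np

# Assumed calibration matrix (nm per kN): each row is one FBG channel's
# sensitivity to Fx, Fy, Fz, with small off-diagonal cross-sensitivities.
C = np.array([[1.20, 0.05, 0.02],
              [0.04, 1.10, 0.03],
              [0.01, 0.02, 1.30]])

def forces_from_shifts(dlam):
    """Solve C @ F = dlam for the pure-axis forces F (kN)."""
    return np.linalg.solve(C, dlam)

F_true = np.array([0.3, 0.1, 0.8])   # kN, a resultant stance load
dlam = C @ F_true                    # simulated wavelength shifts (nm)
print(np.round(forces_from_shifts(dlam), 3))
```

In practice the calibration matrix would be identified by loading each axis with the reference load cells, which is exactly the cross-sensitivity measurement the abstract describes.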
Abstract: Parental expectations often differ from those of their children, and the influence and involvement of parents at home may affect student performance in the classroom. This paper presents results from a survey of secondary school mathematics students of Asian and European background (N=128) in Melbourne, Australia. Student responses to survey questions were analysed using confirmatory factor analysis, followed by t-tests and ANOVA. The aim of the analysis was to identify similarities and differences in parental expectations in relation to ethnicity, gender, and the year level of the students. The notable findings from the analysis showed no significant difference (at the 0.05 level) in parental expectations or student performance in relation to ethnicity or gender. Conversely, there was a significant difference in both parental expectations and student performance between year 7 and year 12 students. Further, while there was a significant difference in parental expectations between year 7 and year 11 students, the students' performances were not significantly different. The results suggest further research may be needed to understand parental expectations and student performance between lower and upper secondary school mathematics students.
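The year-level comparison described above rests on a two-sample t-test at the 0.05 level; a minimal sketch of that step follows. The factor-score vectors are simulated with assumed means and spread, not the Melbourne survey data.

```python
import numpy as np
from scipy.stats import ttest_ind

# Simulated parental-expectation factor scores for two year levels
# (means, spread, and sample sizes are assumptions for illustration).
rng = np.random.default_rng(1)
year7 = rng.normal(3.2, 0.6, 40)
year12 = rng.normal(3.9, 0.6, 40)

t, p = ttest_ind(year7, year12)   # two-sample t-test, as in the paper
print(f"t = {t:.2f}, p = {p:.4f}",
      "significant at 0.05" if p < 0.05 else "not significant")
```

The same call, applied per pair of groups after the confirmatory factor analysis has produced the factor scores, yields the significance pattern the abstract reports.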
Abstract: With the strengthened regulation on the mandatory use
of recycled aggregate, development of construction materials using
recycled aggregate has recently increased. This study aimed to secure
the performance of asphalt concrete mixture by developing
recycled-modified asphalt using recycled basalt aggregate from the
Jeju area. The strength of the basalt aggregate from the Jeju area used
in this study was similar to that of general aggregate, while the specific
surface area was larger due to the development of pores. Modified
asphalt was developed using a general aggregate-recycled aggregate
ratio of 7:3, and the results indicated that the Marshall stability
increased by 27% compared to that of asphalt concrete mixture using
only general aggregate, and the flow values showed similar levels.
Also, the indirect tensile strength increased by 79%, and the toughness
increased by more than 100%. In addition, the TSR for examining
moisture resistance was 0.95, indicating that the reduction in
indirect tensile strength due to moisture was very low (about 5%), and
the developed recycled-modified asphalt satisfied all the quality
standards for asphalt concrete mixture.
Abstract: Energy consumption data, in particular those involving
public buildings, are impacted by many factors: the building structure,
climate/environmental parameters, construction, system operating
condition, and user behavior patterns. Traditional methods for data
analysis are insufficient. This paper delves into the data mining
technology to determine its application in the analysis of building
energy consumption data including energy consumption prediction,
fault diagnosis, and optimal operation. Recent literature is reviewed
and summarized, the problems faced by data mining technology in the
area of energy consumption data analysis are enumerated, and research
points for future studies are given.
Abstract: Introduction: The aim is to update ourselves on and
understand the latest electronic formats available to health care
providers, and how they could be used and developed according to standards.
The idea is to compare patients' manual medical record
keeping with maintaining patients' electronic information in a health
care setup. Furthermore, this involves adopting the
right technology, depending on the organization, to improve the
quality and quantity of healthcare provided. Objective: The
objective is to explain the terms Electronic Medical
Record (EMR), Electronic Health Record (EHR), and Personal Health
Record (PHR), and to select the best technique among the available
electronic sources and software before implementation. A further goal is
to guide end users and ensure the technology is used without any
doubts or difficulties, and to evaluate the uses
and barriers of EMR, EHR, and PHR. Aim and Scope: The target is to
enable health care providers such as physicians, nurses, therapists,
medical billing staff, insurers, and government agencies to access
patient information in an easy and systematic manner without
diluting the confidentiality of patient information. Method: Health
information technology can be implemented with the help of
organizations that provide legal guidelines and support for
the health care provider. The main objective is to select correct,
embedded, and affordable database management software for
generating large-scale data. A parallel need is to know the
latest software available on the market. Conclusion: The question lies
in implementing the electronic information system with
healthcare providers and organizations. Clinicians are the main
users of the technology and encourage us to “go paperless”. The fact is
that present-day technological change is rapid and continuous.
Basically, the idea is to describe how to store data electronically, safely
and securely. All three formats exemplify the fact that an electronic format
has its own benefits as well as barriers.
Abstract: This paper proposes a method of learning topics for
broadcasting contents. There are two kinds of texts related to
broadcasting contents. One is the broadcasting script, which is a series of
texts including directions and dialogues. The other is blogposts, which
contain relatively abstract content, stories, and diverse
information about the broadcasting contents. Although the two kinds of
text cover similar broadcasting contents, the words used in blogposts and
broadcasting scripts differ. When unseen words appear, a method is needed
to reflect them in the existing topics. In this paper, we introduce a semantic
vocabulary expansion method to reflect unseen words. We expand
topics of the broadcasting script by incorporating the words in
blogposts. Each word in blogposts is added to the most semantically
correlated topics. We use word2vec to get the semantic correlation
between words in blogposts and topics of scripts. The vocabularies of
topics are updated and then posterior inference is performed to
rearrange the topics. In experiments, we verified that the proposed
method can discover more salient topics for broadcasting contents.
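The assignment step, adding each blogpost word to its most semantically correlated topic, can be sketched as below. The tiny embedding table stands in for word2vec vectors trained on the corpus, and the words, topics, and mean-similarity rule are illustrative assumptions.

```python
import numpy as np

emb = {                                  # toy stand-ins for word2vec vectors
    "goal":    np.array([0.90, 0.10]),
    "match":   np.array([0.80, 0.20]),
    "striker": np.array([0.85, 0.15]),   # unseen blogpost word
    "recipe":  np.array([0.10, 0.90]),
    "chef":    np.array([0.20, 0.80]),
    "sauce":   np.array([0.15, 0.85]),   # unseen blogpost word
}
topics = {"sports": ["goal", "match"], "cooking": ["recipe", "chef"]}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign(word):
    """Add `word` to the topic with the highest mean similarity to its words."""
    sims = {t: np.mean([cos(emb[word], emb[w]) for w in ws])
            for t, ws in topics.items()}
    best = max(sims, key=sims.get)
    topics[best].append(word)
    return best

print(assign("striker"), assign("sauce"))   # expands topics with unseen words
```

After this expansion, the paper's pipeline re-runs posterior inference over the enlarged vocabularies so the topic-word distributions absorb the new words.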