Abstract: In situ modified cyclohexanone-formaldehyde resins
were prepared by addition of alendronic acid during resin
preparation. Clay nanocomposites in ketonic resins were achieved by
adding clay into the flask at the beginning of the resin preparation.
The prepared resins were used for the synthesis of fire-resistant
polyurethane foams. Both the phosphorus-containing modifier
alendronic acid and the nanoclay increase the fire resistance of
the cyclohexanone-formaldehyde resin and thus of the polyurethanes
produced from these resins. The effect of the concentrations of alendronic acid
and clay on the fire resistance and physical properties of
polyurethanes was studied.
Abstract: Testing first-year students of Informatics at the
University of Debrecen revealed that students start their tertiary
studies in programming with a low level of programming knowledge
and algorithmic skills. The possible reasons for this unfortunate
result were examined. The results of the test
were compared to the students’ results in the school leaving exams
and to their self-assessment values. It was found that there is only a
slight connection between the students’ results in the test and in the
school leaving exams, especially at intermediate level. Beyond this,
the school leaving exams do not seem to enable students to evaluate
their own abilities.
Abstract: In the scope of application of technical textiles, Non-
Crimp Fabrics are increasingly used. In general, NCF exhibit
excellent load bearing properties, but caused by the manufacturing
process, there are some remaining disadvantages which have to be
reduced. In this regard, a novel technique of processing NCF was
developed, replacing the binding thread with an adhesive. This
stitch-free method requires a new manufacturing concept as well as
new basic methods to verify the adhesion of the glue to fibres and
textiles. To improve the adhesion properties and the wettability of
carbon fibres by the adhesive, oxy-fluorination was used. The
modification of carbon fibres by oxy-fluorination was investigated
via scanning electron microscopy, X-ray photoelectron spectroscopy and single fibre
tensiometry. Special tensile tests were developed to determine the
maximum force required for detachment.
Abstract: An exploration of the related literature reveals that all
instruction methods aim at training autonomous learners. After
second language pedagogy turned toward learner-oriented strategies,
learners' needs received greater attention. Yet the historical, social and
political aspects of learning were still neglected. The present study
investigates the notion of autonomous learning and explains its
various facets from a pedagogical point of view. Furthermore,
different elements, fields and scopes of autonomous learning are
explored. After exploring these different aspects of autonomy, it is
argued that liberatory autonomy deserves particular attention, since
it not only covers social autonomy but also reveals learners'
capabilities and human potential. It is also recommended that learners consider
different elements of autonomy such as motivation, knowledge,
confidence, and skills.
Abstract: In-memory database systems are becoming popular
due to the availability and affordability of sufficiently large RAM and
processors in modern high-end servers with the capacity to manage
large in-memory database transactions. While fast and reliable
in-memory systems are still being developed to overcome cache misses,
CPU/IO bottlenecks and distributed transaction costs, disk-based data
stores still serve as the primary persistence. In addition, with the
recent growth in multi-tenancy cloud applications and associated
security concerns, many organizations consider the trade-offs and
continue to require fast and reliable transaction processing of
disk-based database systems as an available choice. For these
organizations, the only way of increasing throughput is by improving
the performance of disk-based concurrency control. This warrants a
hybrid database system with the ability to selectively apply an
enhanced disk-based data management within the context of
in-memory systems that would help improve overall throughput.
The general view is that in-memory systems substantially
outperform disk-based systems. We question this assumption and
examine how a modified variation of access invariance that we call
enhanced memory access (EMA) can be used to allow very high
levels of concurrency in the pre-fetching of data in disk-based
systems. We demonstrate how this prefetching in disk-based systems
can yield close to in-memory performance, which paves the way for
improved hybrid database systems. This paper proposes a novel EMA
technique and presents a comparative study between disk-based EMA
systems and in-memory systems running on hardware configurations
of equivalent power in terms of the number of processors and their
speeds. The results of the experiments conducted clearly substantiate
that when used in conjunction with all concurrency control
mechanisms, EMA can increase the throughput of disk-based systems
to levels quite close to those achieved by in-memory systems. The
promising results of this work show that enhanced disk-based
systems help improve hybrid data management within the
broader context of in-memory systems.
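The abstract does not define the EMA technique itself. As a generic, hypothetical illustration of the underlying idea (issuing disk reads ahead of time so that transaction processing rarely waits on I/O), consider this sketch; the page latency and page IDs are simulated, and none of the names below come from the paper:

```python
import time
from concurrent.futures import ThreadPoolExecutor

PAGE_LATENCY = 0.02  # simulated disk read time per page (seconds)

def read_page(page_id):
    """Simulated disk read: blocks for PAGE_LATENCY, returns page data."""
    time.sleep(PAGE_LATENCY)
    return f"page-{page_id}"

def run_sequential(pages):
    """Baseline: each read waits for the previous one to finish."""
    return [read_page(p) for p in pages]

def run_with_prefetch(pages, workers=8):
    """Issue all reads up front; the reads overlap, hiding disk latency."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(read_page, p) for p in pages]
        return [f.result() for f in futures]

data = run_with_prefetch(list(range(8)))  # pays ~1 latency instead of 8
```

With eight pages, the sequential loop pays eight disk latencies while the prefetched version pays roughly one; this latency-hiding gap is the kind that aggressive prefetching can close.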
Abstract: Verification of crack width control is a basic condition
for ensuring suitable performance in the serviceability limit state.
Cracking in concrete can occur at any time, from the time of casting
to years after the concrete has been set in place. Most codes struggle
to offer a procedure for crack width calculation, and there is a lack
of design charts available for designers to compute crack width
with ease. The focus of this study is to utilize design charts and
parametric equations to calculate crack width with minimum error.
The paper contains a simplified procedure to calculate crack width
for reinforced concrete (RC) sections subjected to bending with axial
tensile force following the guidelines of the Eurocode [DS EN-1992-1-1
& DS EN-1992-1-2]. Numerical examples demonstrate the
application of the suggested procedure. Comparison with parallel
analytical tools supports the validity of the results and shows the
percentage deviation of crack width between the two procedures. The
technique is simple, user-friendly and ready to evolve for a greater
spectrum of section sizes and materials.
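As an illustration of the kind of calculation being charted, here is a minimal sketch of the crack width expression of EN 1992-1-1 Section 7.3.4, using the code's recommended constants (k1 = 0.8 for high-bond bars, k2 = 0.5 for bending, kt = 0.4 for long-term loading). The input values in the example are hypothetical, and national annexes may override the constants, so this is not a substitute for the paper's procedure:

```python
def crack_width(sigma_s, E_s, f_ct_eff, rho_eff, alpha_e,
                c, phi, k1=0.8, k2=0.5, kt=0.4, k3=3.4, k4=0.425):
    """Characteristic crack width w_k per EN 1992-1-1 (length units of c, phi)."""
    # Mean strain difference between steel and concrete, Eq. (7.9),
    # bounded below by 0.6*sigma_s/E_s
    eps_diff = (sigma_s - kt * f_ct_eff / rho_eff
                * (1.0 + alpha_e * rho_eff)) / E_s
    eps_diff = max(eps_diff, 0.6 * sigma_s / E_s)
    # Maximum crack spacing, Eq. (7.11)
    s_r_max = k3 * c + k1 * k2 * k4 * phi / rho_eff
    return s_r_max * eps_diff  # Eq. (7.8)

# Hypothetical inputs: sigma_s = 280 MPa, E_s = 200 GPa,
# f_ct,eff = 2.9 MPa, rho_p,eff = 0.05, alpha_e = 6.67,
# cover c = 30 mm, bar diameter phi = 16 mm
w = crack_width(280.0, 200e3, 2.9, 0.05, 6.67, 30.0, 16.0)  # about 0.19 mm
```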
Abstract: Food as perishable goods represents a specific and
sensitive part in the supply chain theory, since changing physical or
chemical characteristics considerably influence the approach to stock
management. The most delicate phase of this process is
transportation, where it becomes difficult to ensure the stable
conditions which limit deterioration, since the value of the
deterioration rate could be easily influenced by the mode of
transportation. The fuzzy definition of variables allows one to take
these variations into account. Furthermore, an appropriate choice of
the defuzzification method permits one to adapt results to real
conditions as far as possible. In this article those methods which take
into account the relationship between the deterioration rate of
perishable goods and transportation by ship will be applied with the
aim of (a) minimizing the total cost function, defined as the sum of
the ordering, holding, disposal and transportation
costs, and (b) improving the supply chain sustainability by reducing
environmental impact and waste disposal costs.
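As a minimal sketch of the fuzzy machinery involved (not the paper's actual model), the deterioration rate can be represented as a triangular fuzzy number and defuzzified by the centroid method; the numeric values below are hypothetical:

```python
def triangular(a, b, c):
    """Membership function of a triangular fuzzy number (a, b, c)."""
    def mu(x):
        if a < x <= b:
            return (x - a) / (b - a)
        if b < x < c:
            return (c - x) / (c - b)
        return 1.0 if x == b else 0.0
    return mu

def centroid(mu, lo, hi, n=10000):
    """Discretized centroid defuzzification: integral(x*mu) / integral(mu)."""
    dx = (hi - lo) / n
    xs = [lo + i * dx for i in range(n + 1)]
    num = sum(x * mu(x) for x in xs) * dx
    den = sum(mu(x) for x in xs) * dx
    return num / den

# Deterioration rate believed to be "around 0.05/day, between 0.02 and 0.10",
# reflecting variability introduced by the mode of transportation
mu = triangular(0.02, 0.05, 0.10)
theta = centroid(mu, 0.02, 0.10)  # crisp rate fed into the cost function
```

The crisp value `theta` would then enter the total cost function (ordering, holding, disposal and transportation costs) in place of a fixed deterioration rate.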
Abstract: Electric vehicles are among the most complicated
electric devices to simulate, owing to the significant number of
different processes involved in their electrical structure. Concurrent
processes of energy consumption and generation across different
onboard systems make simulation tasks more complicated to
perform. More accurate simulation of energy consumption can
provide a better understanding of all energy management for electric
transport. As a result of all those processes, electric transport can
allow for a more sustainable future and become more convenient in
relation to the distance range and recharging time. This paper
discusses the problems of energy consumption simulations for
electric vehicles using different software packages to provide ideas
on how to make this process more precise, which can help engineers
create better energy management strategies for electric vehicles.
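As a hedged illustration of what such simulations compute at their core, a minimal longitudinal model integrates rolling, aerodynamic and inertial power demand over a speed trace. All parameters below are hypothetical, and real simulation packages model the powertrain, auxiliaries and regeneration in far more detail:

```python
def trip_energy_kwh(speeds, dt=1.0, m=1600.0, crr=0.01,
                    cd=0.29, area=2.3, rho=1.2, g=9.81, eta=0.9):
    """Integrate traction power over a speed trace [m/s] sampled every dt s."""
    e = 0.0
    for v0, v1 in zip(speeds, speeds[1:]):
        v = 0.5 * (v0 + v1)                 # mean speed over the interval
        a = (v1 - v0) / dt                  # acceleration
        # inertia + rolling resistance + aerodynamic drag
        force = m * a + m * g * crr + 0.5 * rho * cd * area * v * v
        p = force * v / eta                 # battery power demand [W]
        if p > 0:                           # regeneration ignored in this sketch
            e += p * dt
    return e / 3.6e6                        # J -> kWh

# constant 20 m/s (72 km/h) cruise for 10 minutes
cruise = [20.0] * 601
energy = trip_energy_kwh(cruise)
```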
Abstract: Web-based Cognitive Writing Instruction (WeCWI)’s
contribution towards language development can be divided into
linguistic and non-linguistic perspectives. From the linguistic
perspective, WeCWI focuses on literacy and language discoveries,
while cognitive and psychological discoveries are the hubs of the
non-linguistic perspective. On the linguistic side, WeCWI draws
attention to free reading and enterprises, which are supported by
language acquisition theories. In addition, the adoption of a process
genre approach as a hybrid guided writing approach fosters literacy
development.
Literacy and language developments are interconnected in the
communication process; hence, WeCWI encourages meaningful
discussion based on the interactionist theory that involves input,
negotiation, output, and interactional feedback. Rooted in the
e-learning interaction-based model, WeCWI promotes online
discussion via synchronous and asynchronous communications,
which allows interactions to happen among the learners, instructor,
and digital content. From the non-linguistic perspective, WeCWI
highlights the contribution of reading, discussion, and writing towards
cognitive development. Based on inquiry models, learners'
critical thinking is fostered during the information exploration process
through interaction and questioning. Lastly, to lower writing anxiety,
WeCWI develops the instructional tool with supportive features to
facilitate the writing process. To bring a positive user experience to
the learner, WeCWI aims to create the instructional tool with
different interface designs based on two different types of perceptual
learning style.
Abstract: In this paper we describe the Levenberg-Marquardt
(LM) algorithm for identification and equalization of CDMA
signals received by an antenna array in communication channels.
The synthesis explains the digital separation and equalization of
signals after propagation through multipath generating intersymbol
interference (ISI). Exploiting the discrete transmitted data and three
diversities induced at the reception, the problem can be formulated
as the Block Component Decomposition (BCD) of a third-order
tensor, a new tensor decomposition generalizing the PARAFAC
decomposition. Optimizing the BCD by the Levenberg-Marquardt
method gives encouraging results compared with the classical
alternating least squares (ALS) algorithm. In the equalization part,
we use the Minimum Mean Square Error (MMSE) criterion to
complete the presented method. The simulation results obtained
with the LM algorithm are significant.
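For readers unfamiliar with the optimizer, here is a generic Levenberg-Marquardt iteration on a toy two-parameter least-squares fit. This only illustrates the damping logic (Gauss-Newton step when progress is made, gradient-like step otherwise); the paper's application to tensor BCD is far more involved:

```python
import math

def lm_fit(xs, ys, a=1.0, b=0.0, lam=1e-3, iters=100):
    """Fit y ~ a*exp(b*x) by a basic Levenberg-Marquardt loop."""
    def residuals(a, b):
        return [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
    def sse(a, b):
        return sum(r * r for r in residuals(a, b))
    for _ in range(iters):
        # Jacobian of the residuals with respect to (a, b)
        J = [(-math.exp(b * x), -a * x * math.exp(b * x)) for x in xs]
        r = residuals(a, b)
        # Damped normal equations: (J^T J + lam*I) delta = -J^T r
        g11 = sum(j[0] * j[0] for j in J) + lam
        g12 = sum(j[0] * j[1] for j in J)
        g22 = sum(j[1] * j[1] for j in J) + lam
        h1 = -sum(j[0] * ri for j, ri in zip(J, r))
        h2 = -sum(j[1] * ri for j, ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da = (h1 * g22 - h2 * g12) / det    # 2x2 solve by Cramer's rule
        db = (g11 * h2 - g12 * h1) / det
        if sse(a + da, b + db) < sse(a, b):  # accept step, trust more
            a, b, lam = a + da, b + db, lam * 0.5
        else:                                # reject step, damp more
            lam *= 10.0
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]  # noiseless synthetic data
a, b = lm_fit(xs, ys)                        # should recover (2.0, 0.7)
```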
Abstract: Cost of governance in Nigeria has become a challenge
to development and concern to practitioners and scholars alike in the
field of business and social science research. In the 2010 national
budget of NGN4.6 trillion or USD28.75 billion, for instance, only a
paltry sum of NGN1.8 trillion or USD11.15 billion was earmarked for
capital expenditure. Similarly, in 2013, out of a total national budget
of NGN4.92 trillion or USD30.75 billion, only the sum of
NGN1.50 trillion or USD9.38 billion was voted for capital expenditure.
Therefore, based on the data sourced from the Nigerian Office of
Statistics, the Central Bank of Nigeria Statistical Bulletin, as well as from
the United Nations Development Programme, this study examined
the causes of the high cost of governance in Nigeria. It found that the
high cost of governance in the country is in the interest of the ruling
class, arising from their unethical behaviour – corrupt practices and
the poor management of public resources. As a result, the study
recommends the need to intensify the war against corruption and
mismanagement of public resources by government officials as
possible solution to overcome the high cost of governance in Nigeria.
This could be achieved by strengthening the constitutional powers of
the various anti-corruption agencies in the area of arrest, investigation
and prosecution of offenders without the interference of the executive
arm of government either at the local, state or federal level.
Abstract: Validity, integrity, and impacts of the IT systems of
the US federal courts have been studied as part of the Human Rights
Alert-NGO (HRA) submission for the 2015 Universal Periodic
Review (UPR) of human rights in the United States by the Human
Rights Council (HRC) of the United Nations (UN). The current
report includes an overview of IT system analysis, data-mining and case
studies. System analysis and data-mining show: Development and
implementation with no lawful authority, servers of unverified
identity, invalidity in implementation of electronic signatures,
authentication instruments and procedures, authorities and
permissions; discrimination in access against the public and
unrepresented (pro se) parties and in favor of attorneys; widespread
publication of invalid judicial records and dockets, leading to their
false representation and false enforcement. A series of case studies
documents the impacts on individuals' human rights, on banking
regulation, and on international matters. Significance is discussed in
the context of various media and expert reports, which opine
unprecedented corruption of the US justice system today, and which
question, whether the US Constitution was in fact suspended. Similar
findings were previously reported in IT systems of the State of
California and the State of Israel, which were incorporated, subject to
professional HRC staff review, into the UN UPR reports (2010 and
2013). Solutions are proposed, based on the principles of publicity of
the law and the separation of power: Reliance on US IT and legal
experts under accountability to the legislative branch, enhancing
transparency, ongoing vigilance by human rights and internet
activists. IT experts should assume more prominent civic duties in the
safeguard of civil society in our era.
Abstract: There is a decagram of strategic decisions of operations
and production/service management (POSM) within operational
research (OR) which must be collated, namely: design, inventory,
quality, location, process and capacity, layout, scheduling,
maintenance, and supply chain. This paper presents an architectural
configuration conceptual framework for this decagram of decision
sets in the form of a mathematical complete graph and an abelian graph.
Mathematically, a complete graph is an undirected (UDG) or
directed (DG) relationship in which every pair of vertices is
connected, collated, confluent, and holomorphic.
No study conducted so far, however, has prioritized the
holomorphic sets of POSM within the OR field of study. This study
utilizes a structured OR technique known as the Analytic Hierarchy
Process (AHP) for organizing, sorting and prioritizing (ranking) the
sets within the decagram of POSM according to their attribution
(propensity), and provides an analysis of how the prioritization has
real-world application in the 21st century.
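As a minimal sketch of the AHP prioritization step, assuming a hypothetical 4x4 pairwise comparison matrix over four of the ten decision areas (the study itself ranks all ten, and the judgments below are invented for illustration on Saaty's 1-9 scale):

```python
def prod(xs):
    """Product of a sequence of numbers."""
    p = 1.0
    for x in xs:
        p *= x
    return p

def ahp_priorities(M):
    """Approximate the principal eigenvector by geometric means of rows."""
    n = len(M)
    gm = [prod(row) ** (1.0 / n) for row in M]
    s = sum(gm)
    return [g / s for g in gm]  # normalized priority weights

# quality vs design vs supply chain vs scheduling (hypothetical judgments)
M = [
    [1.0,   3.0,   5.0,   7.0],
    [1/3.,  1.0,   3.0,   5.0],
    [1/5.,  1/3.,  1.0,   3.0],
    [1/7.,  1/5.,  1/3.,  1.0],
]
w = ahp_priorities(M)  # weights sum to 1, in decreasing order of priority
```

A full AHP analysis would also check the consistency ratio of each judgment matrix before trusting the resulting ranking.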
Abstract: This paper presents a combination of a robust
nonlinear controller and a nonlinear controller for a class of
nonlinear 4Y Octorotor UAVs, using back-stepping and sliding
mode control. Robustness against internal and external disturbances
and decoupling control are the merits of the proposed approach. The
proposed controller decouples the Octorotor dynamical system. The
controller is then applied to a 4Y Octorotor UAV and its features
are demonstrated.
Abstract: The Roma (Gypsies) are a transnational minority with a
high degree of consanguineous marriages. Similar to other
genetically isolated founder populations, the Roma harbor a number
of unique or rare genetic disorders. This paper discusses a rare
form of Charcot-Marie-Tooth disease, type 4G (CMT4G), also
called Hereditary Motor and Sensory Neuropathy type Russe, an
autosomal recessive disease caused by a mutation private to the
Roma and characterized by an abnormally increased density of
non-myelinated axons. CMT4G was originally found in Bulgarian Roma, and in 2009
two putative causative mutations in the HK1 gene were identified.
Since then, several cases were reported in Roma families mainly
from Bulgaria and Spain. Here we present a Slovak Roma family in
which CMT4G was diagnosed on the basis of clinical examination
and genetic testing. This case is further proof of the role of the HK1
gene in the pathogenesis of the disease. It confirms that a mutation
in the HK1 gene is a common cause of autosomal recessive CMT
disease in the Roma, and testing for it should be considered a
standard part of the diagnostic procedure.
Abstract: Environmental impacts of six 3D printers using
various materials were compared to determine if material choice
drove sustainability, or if other factors such as machine type, machine
size, or machine utilization dominated. Cradle-to-grave life-cycle
assessments were performed, comparing a commercial-scale FDM
machine printing in ABS plastic, a desktop FDM machine printing in
ABS, a desktop FDM machine printing in PET and PLA plastics, a
polyjet machine printing in its proprietary polymer, an SLA machine
printing in its polymer, and an inkjet machine hacked to print in salt
and dextrose. All scenarios were scored using ReCiPe Endpoint H
methodology to combine multiple impact categories, comparing
environmental impacts per part made for several scenarios per
machine. Results showed that most printers’ ecological impacts were
dominated by electricity use, not materials, and the changes in
electricity use due to different plastics were not significant compared
to variation from one machine to another. Variation in machine idle
time determined impacts per part most strongly. However, material
impacts were quite important for the inkjet printer hacked to print in
salt: in its optimal scenario, it had as little as 1/38th the impact per
part of the worst-performing machine in the same scenario. If salt
parts were infused with epoxy to make them more physically robust,
then much of this advantage disappeared, and material impacts
actually dominated or equaled electricity use. Future studies should
also measure DMLS and SLS processes / materials.
Abstract: One of the major difficulties introduced with wind
power penetration is the inherent uncertainty in production originating
from uncertain wind conditions. This uncertainty impacts many
different aspects of power system operation, especially the balancing
power requirements. For this reason, in power system development
planning, it is necessary to evaluate the potential uncertainty in future
wind power generation. For this purpose, simulation models are
required, reproducing the performance of wind power forecasts.
This paper presents wind power forecast error simulation models
based on stochastic process simulation. The proposed models
capture the most important statistical parameters recognized in
wind power forecast error time series. Furthermore, two distinct
models are presented based on data availability. The first model uses
wind speed measurements at potential or existing wind power plant
locations, while the second model uses the statistical distribution of
wind speeds.
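A minimal sketch of such a stochastic simulation (not the paper's models): forecast errors generated as a stationary AR(1) process whose standard deviation and lag-1 autocorrelation would, in practice, be estimated from measured forecast error series. The parameter values here are hypothetical:

```python
import math
import random

def simulate_forecast_error(n, sigma=0.12, phi=0.85, seed=1):
    """n-step error series with stationary std ~sigma and lag-1 autocorrelation ~phi."""
    rng = random.Random(seed)
    # innovation std chosen so the stationary process variance equals sigma^2
    noise_std = sigma * math.sqrt(1.0 - phi * phi)
    e, out = 0.0, []
    for _ in range(n):
        e = phi * e + rng.gauss(0.0, noise_std)  # AR(1) recursion
        out.append(e)
    return out

errs = simulate_forecast_error(5000)  # e.g. per-unit forecast errors
```

Such a synthetic error series can then be superimposed on a wind power production scenario to evaluate balancing power requirements.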
Abstract: It is difficult to study the effect of various variables on
cycle fitting through actual experiments. To overcome this difficulty,
the forward dynamics of a musculoskeletal model was applied to cycle
fitting in this study. The measured EMG data were compared with
the muscle activities of the musculoskeletal model through forward
dynamics. EMG data were measured from five cyclists without
musculoskeletal diseases during three minutes of pedaling at a
constant load (150 W) and cadence (90 RPM). The muscles used for
the analysis were the Vastus Lateralis (VL), Tibialis Anterior (TA),
Biceps Femoris (BF), and Gastrocnemius Medialis (GM). Pearson's
correlation coefficients of the muscle activity patterns, the peak timing
of the maximum muscle activities, and the total muscle activities were
calculated and compared. The BIKE3D model of AnyBody (Anybodytech,
Denmark) was used for the musculoskeletal model simulation. The
comparisons of the actual experiments with the simulation results
showed significant correlations in the muscle activity patterns (VL:
0.789, TA: 0.503, BF: 0.468, GM: 0.670). The peak timings of the
maximum muscle activities were distributed at particular phases. The
total muscle activities were compared with the normalized muscle
activities, and the comparison showed about 10% difference in the VL
(+10%), TA (+9.7%), and BF (+10%), excluding the GM (+29.4%).
Thus, it can be concluded that the model and the experiment
showed similar muscle activities. The results of this study indicate
that it is possible to apply the simulation of a further improved
musculoskeletal model to cycle fitting.
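The comparison above relies on Pearson's correlation coefficient. A minimal implementation follows, with hypothetical normalized activity curves standing in for the measured EMG and simulated activations:

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical normalized activity samples over one crank cycle
emg   = [0.1, 0.4, 0.9, 0.7, 0.3, 0.1]   # measured EMG envelope
model = [0.2, 0.5, 0.8, 0.6, 0.2, 0.1]   # simulated muscle activity
r = pearson_r(emg, model)                # close to 1 for similar patterns
```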
Abstract: Alkylated silicon nanocrystals (C11-SiNCs) were
prepared successfully by galvanostatic etching of p-Si(100) wafers
followed by a thermal hydrosilation reaction of 1-undecene in
refluxing toluene in order to extract C11-SiNCs from porous silicon.
Erbium trichloride was added to alkylated SiNCs using a simple
mixing chemical route. To the best of our knowledge, this is the first
investigation on mixing SiNCs with erbium ions (III) by this
chemical method. The chemical characterization of C11-SiNCs and
their mixtures with Er3+(Er/C11-SiNCs) were carried out using X-ray
photoemission spectroscopy (XPS). The optical properties of C11-
SiNCs and their mixtures with Er3+ were investigated using Raman
spectroscopy and photoluminescence (PL). The erbium-mixed
alkylated SiNCs show an orange PL emission peak at around 595
nm that originates from radiative recombination in Si. The Er/C11-SiNCs
mixture also exhibits a weak PL emission peak at 1536 nm that
originates from the intra-4f transition in erbium ions (Er3+). The PL
peak of Si in the Er/C11-SiNCs mixture increases in intensity up to
three times as compared to pure C11-SiNCs. The collected data
suggest that this chemical mixing route leads instead to a transfer of
energy from erbium ions to alkylated SiNCs.
Abstract: A simulation-based VLSI implementation of the
FELICS (Fast Efficient Lossless Image Compression System)
algorithm is proposed to provide lossless image compression,
implemented in simulation-oriented VLSI (Very Large Scale
Integration). The aim is to analyze the performance of lossless
image compression, reducing image size without losing image
quality, in a VLSI-based FELICS implementation. The FELICS
algorithm uses a simplified adjusted binary code for image
compression; the compressed image is converted into pixels and
then implemented in the VLSI domain. This design is used to
achieve high processing speed and to minimize area and power.
The simplified adjusted binary code reduces the number of
arithmetic operations and achieves high processing speed. Color
difference preprocessing is also proposed to improve coding
efficiency with simple arithmetic operations. The VLSI-based
FELICS algorithm provides an effective hardware architecture with
a regular pipelined data flow and four-stage parallelism. With
two-level parallelism, consecutive pixels can be classified into even
and odd samples, with an individual hardware engine dedicated to
each. This method can be further enhanced by multilevel
parallelism.
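The simplified adjusted binary code can be sketched as follows. This is a minimal illustration of truncated ("adjusted") binary coding as used in FELICS, where a value within a neighbour-defined range R receives a codeword of floor(log2 R) or ceil(log2 R) bits; the exact rotation that assigns the shorter codes to the more probable middle values varies between descriptions, so treat the mapping below as one plausible choice:

```python
def adjusted_binary(v, R):
    """Encode v in [0, R) with floor(log2 R) or ceil(log2 R) bits (as a bit string)."""
    k = R.bit_length() - 1              # floor(log2 R)
    if (1 << k) == R:                   # power of two: plain k-bit binary
        return format(v, f"0{k}b") if k else ""
    short = (1 << (k + 1)) - R          # how many values get the short k-bit codes
    # rotate so the middle of the range receives the short codes
    v = (v - (R - short) // 2) % R
    if v < short:
        return format(v, f"0{k}b")      # short codeword
    return format(v + short, f"0{k + 1}b")  # long codeword, prefix-free

codes = [adjusted_binary(v, 5) for v in range(5)]  # 3 short + 2 long codes
```

For R = 5 this yields three 2-bit codewords for the middle values and two 3-bit codewords for the extremes, and the resulting code is prefix-free.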