Abstract: Smart metering and demand response are gaining
ground in industrial and residential applications, and smart
appliances have attracted attention as a step towards the smart
home. The success of smart grid development relies on the successful
implementation of Information and Communication Technology (ICT)
in the power sector. Smart appliances have been under development,
and many new contributions to their realization have been reported
in the last few years. The role of ICT here is to capture data in real
time, thereby allowing a bi-directional flow of information between
the producing and utilization points; this paves the way for smart
appliances, where home appliances can communicate among
themselves and control themselves (switching on and off) using the
signals obtained from the grid. This paper presents the background
on ICT for smart appliances, paying particular attention to the
current technology and identifying future ICT trends for load
monitoring, through which smart appliances can be achieved to
facilitate an efficient smart home system that promotes demand
response programs. The paper groups and reviews the recent
contributions in order to establish the current state of the art and
the trends of the technology, so as to provide the reader with a
comprehensive and insightful review of where ICT for smart
appliances stands and where it is heading. The paper also presents a
brief overview of communication types, and then narrows the
discussion to load monitoring (non-intrusive appliance load
monitoring, NALM). Finally, some future trends and challenges in
the further development of the ICT framework are discussed to
motivate future contributions that address open problems and
explore new possibilities.
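As context for the load monitoring techniques this review surveys, the classic first step in NALM is edge detection on the aggregate power signal. The sketch below is illustrative only; the threshold and the synthetic signal are our assumptions, not a method from the paper.

```python
import numpy as np

def detect_events(power, threshold=50.0):
    # Flag step changes in the aggregate power signal larger than `threshold` W
    deltas = np.diff(power)
    return [(i + 1, float(d)) for i, d in enumerate(deltas) if abs(d) >= threshold]

# Aggregate load: a 100 W appliance switches on at sample 3 and off at sample 7
signal = np.array([20, 20, 20, 120, 120, 120, 120, 20, 20, 20], dtype=float)
events = detect_events(signal)   # positive step = ON, negative step = OFF
```

Each detected (index, step) pair marks a candidate appliance ON or OFF transition that later NALM stages would cluster and label.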
Abstract: Recent economic instability has been found to influence
the situation in Malaysia, whether directly or indirectly. Taking this
into consideration, the government urgently needs to find the best
approach to balance its citizens' socio-economic strata. The
education platform is among the efforts planned and acted upon for
this purpose, through exposing youth, especially those at the
higher-institution level, to social entrepreneurial activity. Armed
with the knowledge and skills they gain, and supported by an
entrepreneurial culture and environment while on campus, students
will lean more towards making social entrepreneurship a career
option when they graduate. Given the increasingly dire issues of
marketability and employability of current graduates, research on
students' willingness to create social innovation that contributes to
society, without focusing solely on personal gain, is relevant.
Accordingly, this research was conducted to identify the levels of
entrepreneurial intention and social entrepreneurship among
higher-institution students in Malaysia. Stratified random sampling
was used to select 355 undergraduate students from five public
universities as respondents, and data were collected through surveys.
The data were then analyzed descriptively using mean scores and
standard deviations. The study found that the entrepreneurial
intention of higher education students is at a moderate level;
however, the contrary holds for social entrepreneurship activities,
which were at a high level. This means that while the students have
only a moderate level of willingness to become social entrepreneurs,
they are very committed to creating social innovation through the
social entrepreneurship activities conducted. The implications of this
study can help higher-institution authorities predict the tendency of
students to become social entrepreneurs. Thus, opportunities and
facilities for courses related to social entrepreneurship must be
created expansively so that the vision of creating as many social
entrepreneurs as possible can be achieved.
Abstract: We present an approach to triangle mesh simplification
designed to be executed on the GPU. We use a quadric error metric
to calculate an error value for each vertex of the mesh and order all
vertices based on this value. This step is followed by the parallel
removal of a number of vertices with the lowest calculated error
values. To allow for the parallel removal of multiple vertices we use
a set of per-vertex boundaries that prevent mesh foldovers even when
simplification operations are performed on neighbouring vertices. We
execute multiple iterations of the calculation of the vertex errors,
ordering of the error values and removal of vertices until either a
desired number of vertices remains in the mesh or a minimum error
value is reached. This parallel approach is used to speed up the
simplification process while maintaining mesh topology and avoiding
foldovers at every step of the simplification.
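The quadric error metric step described above can be sketched as follows. This is a serial CPU illustration in Python rather than the paper's GPU implementation, and the function names are ours.

```python
import numpy as np

def face_plane(p0, p1, p2):
    # Plane [a, b, c, d] of the triangle: unit normal n, offset d = -n.p0
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    return np.append(n, -np.dot(n, p0))

def vertex_quadric(incident_faces):
    # Sum of fundamental error quadrics K_p = p p^T over the incident planes
    Q = np.zeros((4, 4))
    for p0, p1, p2 in incident_faces:
        p = face_plane(p0, p1, p2)
        Q += np.outer(p, p)
    return Q

def vertex_error(v, Q):
    # Quadric error v^T Q v: sum of squared distances to the incident planes
    v4 = np.append(v, 1.0)
    return float(v4 @ Q @ v4)
```

A vertex lying on all of its incident face planes has zero error, so flat regions are simplified first; vertices with the lowest errors are the ones removed in parallel each iteration.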
Abstract: Cloud computing can reduce the start-up expenses of implementing EHR (Electronic Health Records). However, many healthcare institutions are yet to implement cloud computing due to the associated privacy and security issues. In this paper, we analyze the challenges and opportunities of implementing cloud computing in healthcare. We also analyze data from over 5000 US hospitals that use telemedicine applications. This analysis helps to understand the importance of smartphones over desktop systems in different departments of healthcare institutions. The wide usage of smartphones and cloud computing allows ubiquitous and affordable access to health data by authorized persons, including patients and doctors. Cloud computing will prove beneficial to a majority of departments in healthcare. Through this analysis, we attempt to identify the healthcare departments that may benefit significantly from the implementation of cloud computing.
Abstract: E-Learning enables users to learn anywhere at any
time. In E-Learning systems, authenticating the E-Learning user
raises security issues. The selection of appropriate communication
networks for providing internet connectivity for E-Learning is
another challenge. WiMAX networks provide Broadband Wireless
Access through the Multicast Broadcast Service, making these
networks well suited to E-Learning applications. The authentication
of E-Learning users is vulnerable to session hijacking; repeated
authentication of users can overcome this issue. In this paper, a
session-based Profile Caching Authentication scheme is proposed, in
which the credentials of E-Learning users are cached at the
authentication server during the initial authentication through the
appropriate subscriber station. The proposed cache-based
authentication scheme performs fast authentication using the cached
user profile. Thus, the proposed authentication protocol reduces the
delay of repeated authentication and enhances the security of
E-Learning.
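The idea of caching user credentials at the authentication server so that repeated authentications are fast can be sketched as follows. This is a hypothetical illustration; the class, TTL policy and hashing choice are our assumptions, not the paper's protocol.

```python
import hashlib
import time

class ProfileCache:
    """Illustrative session-based profile cache at the authentication server."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._cache = {}  # user_id -> (credential_hash, expiry_time)

    def _digest(self, credential):
        return hashlib.sha256(credential.encode()).hexdigest()

    def initial_auth(self, user_id, credential):
        # The full (slow) authentication exchange would happen here;
        # on success, cache the user's profile for the session lifetime
        self._cache[user_id] = (self._digest(credential), time.time() + self.ttl)
        return True

    def fast_reauth(self, user_id, credential):
        # Repeated authentication: check the cached profile instead of
        # repeating the full handshake
        entry = self._cache.get(user_id)
        if entry is None or time.time() > entry[1]:
            return False  # cache miss or expired -> fall back to initial_auth
        return entry[0] == self._digest(credential)
```

The latency saving comes from `fast_reauth` being a local lookup and hash comparison, avoiding the round trips of the initial exchange.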
Abstract: Intellectual capital is one of the most valuable and
important parts of the intangible assets of enterprises, especially
knowledge-based enterprises. With respect to the increasing gap
between the market value and the book value of companies,
intellectual capital is one of the components that can explain this
gap. This paper uses the value added efficiency of three components,
capital employed, human capital and structural capital, to measure
the intellectual capital efficiency of Iranian industry groups listed on
the Tehran Stock Exchange (TSE), using an 8-year data set from
2005 to 2012. In order to analyze the effect of intellectual capital on
the market-to-book value ratio of the companies, the data set was
divided into 10 industries, Banking, Pharmaceutical, Metals &
Mineral Nonmetallic, Food, Computer, Building, Investments,
Chemical, Cement and Automotive, and the panel data method was
applied to estimate pooled OLS. The results showed that the value
added of capital employed has a significant positive relation with
increasing market value in the Banking, Metals & Mineral
Nonmetallic, Food, Computer, Chemical and Cement industries, and
that the value added efficiency of structural capital has a significant
positive relation with increasing market value in the Banking,
Pharmaceutical and Computer industry groups. The value added
results also showed a negative relation for the Banking and
Pharmaceutical industry groups and a positive relation for the
Computer and Automotive industry groups. Among the studied
industries, the Computer industry exhibits the widest gap between
market value and book value attributable to its intellectual capital.
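The three value added efficiency components mentioned above are commonly computed with Pulic-style VAIC ratios; the sketch below uses that standard formulation with invented figures, which may differ in detail from the paper's exact variable definitions.

```python
def vaic_components(value_added, human_capital, capital_employed):
    # Pulic-style efficiency ratios for the three components
    hce = value_added / human_capital                   # human capital efficiency
    sce = (value_added - human_capital) / value_added   # structural capital efficiency
    cee = value_added / capital_employed                # capital employed efficiency
    return hce, sce, cee

# Illustrative firm-year figures (arbitrary monetary units)
hce, sce, cee = vaic_components(200.0, 80.0, 500.0)
```

In a study like this, the three ratios would be computed per firm-year and used as regressors against the market-to-book ratio in the pooled OLS panel.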
Abstract: In this paper, a method is developed to construct the
membership surfaces of the row and column vectors and the
arithmetic operations of an imprecise matrix. A matrix with
imprecise elements is called an imprecise matrix. The membership
surface of an imprecise vector has already been derived based on the
Randomness-Impreciseness Consistency Principle, which leads to
the definition of a normal law of impreciseness using two different
laws of randomness. In this paper, the author presents the row and
column membership surfaces and the arithmetic operations of an
imprecise matrix, demonstrated with the help of a numerical
example.
Abstract: The inverted pendulum system is a classic control
problem that is used in universities around the world. It is a suitable
process to test prototype controllers due to its high non-linearities and
lack of stability. The inverted pendulum represents a challenging
control problem, which continually moves toward an uncontrolled
state. This paper investigates balancing an inverted pendulum
system using sliding mode control (SMC). The goal is to determine
which control strategy delivers better performance with respect to
the pendulum's angle and the cart's position; therefore, a
proportional-integral-derivative (PID) controller is used for
comparison. Results show that SMC produces a better response than
PID control in both noise-free and noisy systems.
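A minimal sliding mode controller of the kind compared here can be sketched for a simplified pendulum. The cart dynamics are omitted, and the gains, boundary layer and plant parameters are illustrative assumptions, not the paper's design.

```python
import numpy as np

def smc_control(theta, theta_dot, lam=5.0, k=20.0, phi=0.05):
    # Sliding surface s = lam*theta + theta_dot; saturating s/phi inside a
    # boundary layer (instead of a hard sign function) reduces chattering
    s = lam * theta + theta_dot
    return -k * np.clip(s / phi, -1.0, 1.0)

def simulate(theta0, steps=2000, dt=0.002, g=9.81, L=0.5):
    # Simplified inverted pendulum about the upright: theta'' = (g/L)sin(theta) + u
    theta, theta_dot = theta0, 0.0
    for _ in range(steps):
        u = smc_control(theta, theta_dot)
        theta_ddot = (g / L) * np.sin(theta) + u
        theta_dot += theta_ddot * dt
        theta += theta_dot * dt
    return theta
```

Once the state reaches the surface s = 0, the angle decays as theta_dot = -lam*theta regardless of the (bounded) gravity disturbance, which is the robustness property that motivates comparing SMC against PID.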
Abstract: Dan C. Lortie’s Schoolteacher: A sociological study is
one of the best works on the sociology of teaching since W. Waller’s
classic study, and it is a book worthy of review. Following the
tradition of the symbolic interactionists who studied occupations,
Lortie examined the qualities of the teaching occupation. Using
several methods to gather rich data, Lortie portrayed the ethos of
the teaching profession.
Therefore, the work is an important book on the teaching profession
and teacher culture. Though outstanding, Lortie’s work is also flawed
in that his perspectives and methodology were adopted largely from
symbolic interactionism. First, Lortie in his work analyzed many
points regarding teacher culture; for example, he was interested in
exploring “sentiment,” “cathexis,” and “ethos.” Thus, he was more a
psychologist than a sociologist. Second, symbolic interactionism led
him to discern the teacher culture from a micro view, thereby missing
the structural aspects. For example, he did not fully discuss the issue of
gender and he ignored the issue of race. Finally, following the
qualitative sociological tradition, Lortie employed many qualitative
methods to gather data but focused only on obtaining and presenting
interview data. Moreover, the measurement methods he used were
too simplistic to analyze the quantitative data fully.
Abstract: As the feature sizes of recent Complementary Metal
Oxide Semiconductor (CMOS) devices decrease, the influence of
static power comes to dominate their energy consumption. Thus, the
power savings obtainable from Dynamic Voltage and Frequency
Scaling (DVFS) are diminishing, and the temporary shutdown of
cores or other microchip components becomes more worthwhile. A
consequence of powering off unused parts of a chip is that the
relative difference between idle and fully loaded power consumption
is increased. This means that future chips and whole server systems gain
more power saving potential through power-aware load balancing,
whereas in former times this power saving approach had only
limited effect, and thus, was not widely adopted. While powering
off complete servers was used to save energy, it will be superfluous
in many cases when cores can be powered down. An important
advantage that comes with that is a largely reduced time to respond
to increased computational demand. We include the above developments in a server power model
and quantify the advantage. Our conclusion is that strategies from
datacenters when to power off server systems might be used in the
future on core level, while load balancing mechanisms previously
used at core level might be used in the future at server level.
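The argument can be quantified with a simple linear server power model. The wattage figures below are illustrative assumptions, not values from the paper's model.

```python
def server_power(u, p_idle, p_peak):
    # Linear power model: idle floor plus a load-proportional dynamic part
    return p_idle + (p_peak - p_idle) * u

def fleet_power(loads, p_idle, p_peak):
    # Servers (or cores) at zero load are assumed powered off entirely
    return sum(server_power(u, p_idle, p_peak) for u in loads if u > 0.0)

# Same total load, different placement strategies (illustrative figures):
balanced = fleet_power([0.5, 0.5], 150, 250)       # high idle floor: 400 W
consolidated = fleet_power([1.0, 0.0], 150, 250)   # power one box off: 250 W
gated = fleet_power([0.5, 0.5], 30, 250)           # low idle floor via gating: 280 W
```

With a high idle floor, consolidating and powering off the idle machine saves the full 150 W floor; aggressive core gating shrinks the floor itself, so even the balanced placement becomes cheap — which is why load placement strategies migrate between the core and server levels as the idle/peak ratio changes.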
Abstract: Flash floods occur over short rainfall intervals, from 1
to 12 hours, in small and medium basins. Flash floods typically have
two characteristics: large water flow and high flow velocity. A flash
flood occurs at a hill-valley site (a strip of low-lying terrain) in a
catchment with a sufficiently large drainage area, steep basin slope,
and heavy rainfall. The risk of flash floods is determined through the
Gridded Basin Flash Flood Potential Index (GBFFPI). The Flash
Flood Potential Index (FFPI) is determined from the terrain-slope,
soil-erosion, land-cover, land-use and rainfall flash flood indices. To
determine the GBFFPI, each cell in a map is considered as the outlet
of a water accumulation basin, and the GBFFPI of the cell is the
basin-average FFPI of the corresponding water accumulation basin.
Based on GIS, a tool is developed to compute the GBFFPI using the
ArcObjects SDK for .NET. The GBFFPI maps are built in two
variants: including the rainfall flash flood index (for real-time flash
flood warning) or excluding it. The GBFFPI tool can be used to
identify high flash flood potential sites in a large region as quickly
as possible. The GBFFPI improves on the conventional FFPI: its
advantage is that it takes into account the basin response (the
interaction of cells) and identifies flash-flood-prone sites (strips of
low-lying terrain) more accurately, whereas the conventional FFPI
considers each cell in isolation and does not consider the interaction
between cells. The GBFFPI map of QuangNam, QuangNgai,
DaNang and Hue is built and exported to Google Earth. The
resulting map demonstrates the scientific basis of the GBFFPI.
Abstract: Mumbai is traditionally the epicenter of India's trade
and commerce, and the major ports situated in the Thane estuary,
Mumbai Port and Jawaharlal Nehru Port (JN), are developing their
waterfront facilities. Various developments in this region over the
past decades have changed the tidal flux entering and leaving the
estuary. The intake at Pir-Pau faces a shortage of water owing to the
advancement of the shoreline, while the jetty near Ulwe faces
ship-scheduling problems due to shallow depths between JN Port
and Ulwe Bunder. In
order to solve these problems, long-duration information about tide
levels from field measurements is essential. However, field
measurement is tedious and costly; hence, artificial intelligence was
applied to predict water levels
by training the network for the measured tide data for one lunar tidal
cycle. The application of two layered feed forward Artificial Neural
Network (ANN) with back-propagation training algorithms such as
Gradient Descent (GD) and Levenberg-Marquardt (LM) was used to
predict the yearly tide levels at waterfront structures namely at Ulwe
Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe,
and Vashi for a period of lunar tidal cycle (2013) was used to train,
validate and test the neural networks. These trained networks,
having high correlation coefficients (R = 0.998), were used to
predict the tide at Ulwe and Vashi for verification against the
measured tide for the years 2000 and 2013. The results indicate that
the tide levels predicted by the ANN give a reasonably accurate
estimate of the tide. Hence, the
trained network is used to predict the yearly tide data (2015) for
Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau was
predicted by using the neural network which was trained with the
help of measured tide data (2000) of Apollo and Pir-Pau. The analysis of measured data and study reveals that: The
measured tidal data at Pir-Pau, Vashi and Ulwe indicate that there is
maximum amplification of tide by about 10-20 cm with a phase lag
of 10-20 minutes with reference to the tide at Apollo Bunder
(Mumbai). LM training algorithm is faster than GD and with increase
in number of neurons in hidden layer and the performance of the
network increases. The predicted tide levels by ANN at Pir-Pau and
Ulwe provides valuable information about the occurrence of high and
low water levels to plan the operation of pumping at Pir-Pau and
improve ship schedule at Ulwe.
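A two-layer feed-forward network trained with gradient descent back-propagation, as used here, can be sketched on a synthetic tide-like signal. The data, network size and learning rate are illustrative assumptions, and the LM algorithm is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one lunar cycle of measured tide levels
t = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(t)                        # single tidal constituent
x = (t - t.mean()) / t.std()         # normalized network input

# Two-layer feed-forward ANN: tanh hidden layer, linear output
n_hidden = 10
W1 = rng.normal(0.0, 1.0, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, (n_hidden, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(x)
mse_before = float(np.mean((pred - y) ** 2))

for _ in range(3000):
    h, pred = forward(x)
    err = pred - y                          # output-layer error
    gh = (err @ W2.T) * (1.0 - h ** 2)      # back-propagated through tanh
    W2 -= lr * (h.T @ err) / len(x); b2 -= lr * err.mean(axis=0)   # GD updates
    W1 -= lr * (x.T @ gh) / len(x); b1 -= lr * gh.mean(axis=0)

_, pred = forward(x)
mse_after = float(np.mean((pred - y) ** 2))
```

In the study itself, the inputs would be measured water levels over a lunar cycle and the LM variant replaces these plain gradient steps with Gauss-Newton-damped updates, which is why it trains faster.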
Abstract: In this paper, we investigate the low-lying energy
levels of the two-dimensional parabolic graphene quantum dots
(GQDs) in the presence of topological defects with long range
Coulomb impurity and subjected to an external uniform magnetic
field. The low-lying energy levels of the system are obtained within
the framework of the perturbation theory. We theoretically
demonstrate that a valley splitting can be controlled by geometrical
parameters of the graphene quantum dots and/or by tuning a uniform
magnetic field, as well as topological defects. It is found that, for
parabolic graphene dots, the valley splitting occurs due to the
introduction of spatial confinement. The corresponding splitting is
enhanced by the introduction of a uniform magnetic field and it
increases by increasing the angle of the cone in subcritical regime.
Abstract: Open jet testing is a valuable testing technique which
provides the desired results with reasonable accuracy. It has been
used in the past for airships and has recently been applied to hybrid
ones, which derive more non-buoyant force from the wings,
empennage and fuselage. In the present review, an effort has been
made to survey the challenges involved in open jet testing. In order
to shed light on the application of this technique, the experimental
results of two different configurations are presented. Although the
aerodynamic results of such vehicles are unique to their own
designs, they provide a starting point for planning any future testing.
A few important testing areas which need more attention are also
highlighted. Most hybrid buoyant aerial vehicles are unconventional
in shape, and the experimental data generated is therefore unique to
each design.
Abstract: Our purpose is to investigate how the relationship
between employees and innovation management processes can drive
organizations to successful innovations. This research is deeply
related to a new way of thinking about human resources management
practices. It’s not simply about improving the employees’
engagement, but rather about a different and more radical
commitment: the employee can take on the role traditionally played
by the customer, namely to become the first tester of an innovative
product or service, the first user/customer and eventually the first
investor in the innovation. This new perception of employees could
form the basis of a novel innovation process, taking innovation to
the next level by balancing the problems of customer-driven
innovation on the one hand and employee-driven innovation on the
other. This research identifies an effective
approach to innovation where the employees will participate
throughout the whole innovation process, not only in the idea
creation but also in the idea definition and development by giving
feedback in parallel to that provided by customers and lead-users.
Abstract: Magnetic Resonance Imaging Contrast Agents
(MRI-CM) are significant in the clinical and biological imaging as
they have the ability to alter the normal tissue contrast, thereby
affecting the signal intensity to enhance the visibility and detectability
of images. Superparamagnetic Iron Oxide (SPIO) nanoparticles,
coated with dextran or carboxydextran are currently available for
clinical MR imaging of the liver. Most SPIO contrast agents are
T2-shortening agents, and Resovist (Ferucarbotran) is a clinically
tested, organ-specific SPIO agent with a low-molecular-weight
carboxydextran coating. The enhancement effect of
Resovist depends on its relaxivity which in turn depends on factors
like magnetic field strength, concentrations, nanoparticle properties,
pH and temperature. Therefore, this study was conducted to
investigate the impact of field strength and different contrast
concentrations on enhancement effects of Resovist. The study
explored the MRI signal intensity of Resovist in the physiological
range of plasma from T2-weighted spin echo sequence at three
magnetic field strengths: 0.47 T (r1=15, r2=101), 1.5 T (r1=7.4,
r2=95), and 3 T (r1=3.3, r2=160) and the range of contrast
concentrations by a mathematical simulation. Relaxivities of r1 and r2
(L mmol-1 s-1) were obtained from a previous study, and the selected
concentrations were 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5,
0.6, 0.7, 0.8, 0.9, 1.0, 2.0, and 3.0 mmol/L. T2-weighted images were
simulated using TR/TE ratio as 2000 ms /100 ms. According to the
reference literature, with increasing magnetic field strengths, the
r1 relaxivity tends to decrease while the r2 did not show any
systematic relationship with the selected field strengths. In parallel,
the results of this study revealed that the signal intensity of Resovist
is higher at lower concentrations than at higher ones. The highest
signal intensity was observed at the low field strength of 0.47 T. The
maximum signal intensities for 0.47 T, 1.5 T and 3 T were found at
concentrations of 0.05, 0.06 and 0.05 mmol/L, respectively.
Furthermore, at concentrations above these levels the signal
intensity decreased exponentially. An inverse relationship was found
between field strength and T2 relaxation time: as the field strength
increased, the T2 relaxation time decreased accordingly. However,
the resulting T2 relaxation times were not significantly different
between 0.47 T and 1.5 T in this study. Moreover, a linear
correlation of the transverse relaxation rate (1/T2, s-1) with the
concentration of Resovist was observed. From these results, it can
be concluded that the concentration of SPIO nanoparticle contrast
agents and the field strength of the MRI system are two important
parameters that affect the signal intensity of the T2-weighted SE
sequence. Therefore, in MR imaging these two parameters should be
considered prudently.
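The simulation described above amounts to evaluating the spin-echo signal equation at each concentration. Below is a hedged sketch: the baseline plasma relaxation rates R1_0 and R2_0 are our assumptions, while the relaxivities and TR/TE are the values quoted in the abstract.

```python
import numpy as np

def se_signal(C, r1, r2, TR=2.0, TE=0.1, R1_0=0.6, R2_0=1.0):
    # Spin-echo signal model S = (1 - exp(-TR*R1)) * exp(-TE*R2), where the
    # agent increases relaxation rates linearly: R = R_0 + r * C
    R1 = R1_0 + r1 * C   # s^-1; C in mmol/L, r1 and r2 in L mmol^-1 s^-1
    R2 = R2_0 + r2 * C
    return (1.0 - np.exp(-TR * R1)) * np.exp(-TE * R2)

C = np.array([0.05, 0.1, 0.5, 1.0, 3.0])     # subset of the studied range
s_047T = se_signal(C, r1=15.0, r2=101.0)     # relaxivities quoted for 0.47 T
s_3T = se_signal(C, r1=3.3, r2=160.0)        # relaxivities quoted for 3 T
```

Because r2 greatly exceeds r1, the exp(-TE*R2) term dominates over the studied range, so the T2-weighted signal falls as concentration rises, consistent with the maxima reported at the lowest concentrations.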
Abstract: The increasing availability of information about earth
surface elevation (Digital Elevation Models, DEM) generated from
different sources (remote sensing, aerial images, Lidar) poses the
question of how to integrate this huge amount of data and make it
available to the widest possible audience. In order to exploit the
potential of 3D elevation representation, the quality of data
management plays a fundamental role. Due to the high acquisition
costs and the huge amount of generated data, high-resolution terrain
surveys tend to be small or medium sized and available only for
limited portions of the earth. Hence the need to merge the
large-scale height maps that are typically made available for free at
the worldwide level with very specific high-resolution datasets. On
the other hand, the third dimension improves the user experience
and the quality of data representation, unlocking new possibilities in
data analysis for civil protection, real estate, urban planning,
environment monitoring, etc. Open-source 3D virtual globes, which
are a trending topic in Geovisual Analytics, aim at improving the
visualization of geographical data provided by standard web
services or in proprietary formats. Typically, however, 3D virtual
globes do not offer an open-source tool that allows the generation of
a terrain elevation data structure starting from
heterogeneous-resolution terrain datasets. This paper describes a
technological solution aimed at setting up a so-called “Terrain
Builder”. This tool is able to merge heterogeneous-resolution
datasets and to provide multi-resolution worldwide terrain services
fully compatible with CesiumJS and therefore accessible via the
web using a traditional browser without any additional plug-in.
Abstract: Accounting for 40% of total world energy
consumption, building systems are developing into technically
complex, large energy consumers suited to sophisticated power
management approaches that can substantially increase energy
efficiency and even make buildings active energy market
participants. A centralized control system for building heating and
cooling managed by economically optimal model predictive control
shows promising results, with an estimated 30% increase in energy
efficiency. This research focuses on implementing such a method in
a case study performed on two floors of our faculty building, with
corresponding wireless sensor data acquisition, remote
heating/cooling units and a central climate controller. Building
walls are mathematically modeled
with corresponding material types, surface shapes and sizes. Models
are then exploited to predict thermal characteristics and changes in
different building zones. Exterior influences such as environmental
conditions and weather forecast, people behavior and comfort
demands are all taken into account for deriving price-optimal climate
control. Finally, a DC microgrid with photovoltaics, wind turbine,
supercapacitor, batteries and fuel cell stacks is added to make the
building a unit capable of active participation in a price-varying
energy market. Computational burden of applying model predictive
control on such a complex system is relaxed through a hierarchical
decomposition of the microgrid and climate control: the former is
designed as the higher hierarchical level with pre-calculated
price-optimal power flow control, and the latter as the lower-level
control responsible for ensuring thermal comfort and exploiting the
optimal supply conditions enabled by microgrid energy flow
management. Such an approach is expected to enable the inclusion
of more complex building subsystems into consideration in order to
further increase the energy efficiency.
Abstract: This paper describes a simple way to control the speed
of a PMBLDC motor using a fuzzy logic control (FLC) method.
With the conventional approach, the performance of the motor
system is simulated and the speed is regulated using a PI controller.
Such methods improve the performance of PMSM drives, but under
varying operating conditions the system dynamics change over
time, with changes in reference speed, parameter variations and
load disturbances. The simulation is carried out in MATLAB to
obtain a reliable and flexible simulation. To highlight the
effectiveness of the speed control method, the FLC method is used.
The proposed method targets improved dynamic performance and
rejects variations in the motor drive. The drive offers high accuracy
and robust operation from near zero to high speed. The
effectiveness and flexibility of the individual speed control
techniques are thoroughly discussed in terms of merits and demerits,
and finally verified through simulation and experimental results for
comparative analysis.
Abstract: The cities of Johannesburg and Pretoria, both located in the Gauteng province, are separated by a distance of 58 km. The traffic queues on the Ben Schoeman freeway which connects these two cities can stretch for almost 1.5 km. Vehicle traffic congestion impacts negatively on business and on commuters' quality of life. The goal of this paper is to identify variables that influence the flow of traffic and to design a vehicle traffic prediction model which will predict the traffic flow pattern in advance. The model will enable motorists to make appropriate travel decisions ahead of time. The data used was collected by Mikro's Traffic Monitoring (MTM). A multi-layer perceptron (MLP) was used individually to construct the model, and the MLP was also combined with the bagging ensemble method to train on the data. The cross-validation method was used for evaluating the models. The results obtained from the techniques were compared using predictive performance and prediction costs, the cost being computed using a combination of the loss matrix and the confusion matrix. The designed prediction models show that the status of the traffic flow on the freeway can be predicted using the following parameters: travel time, average speed, traffic volume and day of month. The implication of this work is that commuters will be able to spend less time travelling on the route and more time with their families, and the logistics industry will save more than twice what it is currently spending.
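The MLP-plus-bagging setup can be illustrated with the bagging half of the pipeline. In this self-contained sketch a decision stump stands in for the MLP base learner, and the features, labels and threshold are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_stump(X, y):
    # Base learner: best single-feature threshold split (stands in for the MLP)
    best = (0, 0.0, 1, -1.0)  # (feature, threshold, polarity, accuracy)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = (pol * (X[:, f] - thr) > 0).astype(int)
                acc = float((pred == y).mean())
                if acc > best[3]:
                    best = (f, thr, pol, acc)
    return best[:3]

def predict_stump(model, X):
    f, thr, pol = model
    return (pol * (X[:, f] - thr) > 0).astype(int)

def bagging(X, y, n_models=15):
    # Bagging: each base learner is trained on a bootstrap resample of the data
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))
        models.append(train_stump(X[idx], y[idx]))
    return models

def predict_bagged(models, X):
    # Ensemble prediction: majority vote across the bootstrap-trained learners
    votes = np.mean([predict_stump(m, X) for m in models], axis=0)
    return (votes > 0.5).astype(int)

# Toy data: "congested" (1) whenever average speed is below 50 km/h
X = rng.uniform(20, 80, (100, 2))        # columns: [travel_time, average_speed]
y = (X[:, 1] < 50).astype(int)
models = bagging(X, y)
accuracy = float((predict_bagged(models, X) == y).mean())
```

Averaging the votes of learners trained on different bootstrap resamples reduces the variance of any single model, which is the rationale for combining the MLP with bagging in the paper.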