Abstract: Pyrolysis of waste oil is an effective process for producing high-quality liquid fuels. In this work, pyrolysis experiments on waste oil over Y zeolite were carried out in a semi-batch reactor under a flow of nitrogen at atmospheric pressure and at different reaction temperatures (350-450 °C). The products were gas, liquid fuel, and residue. Only the liquid fuel was further characterized for its composition and properties, using gas chromatography, a thermogravimetric analyzer, and a bomb calorimeter. The experimental results indicated that the pyrolysis reaction temperature significantly affected both the yield and the composition distribution of the pyrolysis oil. An increase in reaction temperature increased the fuel yield, especially the gasoline fraction. To obtain a high fuel yield, the reaction temperature should be above 350 °C. The presence of Y zeolite in the system enhanced the cracking activity. In addition, the pyrolysis oil yield was proportional to the catalyst quantity.
Abstract: One of the areas that present an opportunity to reduce national carbon emissions is the energy management of public buildings. To the best of our knowledge, there is no easy-to-use, centralized mechanism that enables the government to monitor the overall energy performance, as well as the carbon footprint, of Malaysia’s public buildings. Therefore, the Public Works Department Malaysia (PWD) has developed a web-based energy performance reporting tool called JENOSYS (JKR Energy Online System), which incorporates a database of utility account numbers acquired from the utility service provider for analysis and reporting. As a test case, 23 buildings under PWD were selected and monitored for their monthly energy performance (in kWh), carbon emission reduction (in tCO₂eq) and utility cost (in MYR), against the baseline. This paper demonstrates the simplicity with which buildings without energy metering can be monitored centrally, shows the benefits the government can accrue in terms of building energy disclosure, and concludes by recommending that the system be expanded to all public buildings in Malaysia.
Abstract: Recent innovations in technology have led to the use of wireless sensor networks in various applications. Such networks consist of a number of small, low-cost, non-tamper-proof and resource-constrained sensor nodes. These nodes are often distributed and deployed in unattended environments, collaborating with each other to share data or information. Among the various applications, wireless sensor networks play a major role in battlefield monitoring for the military. As these non-tamper-proof nodes are deployed in unattended locations, they are vulnerable to many security attacks. Among these, the node replication attack is one of the most threatening to network users. A node replication attack is mounted by an attacker who captures one legitimate node, duplicates its original credentials and cryptographic materials, makes one or more copies of the captured node, and places them at key positions in the network to monitor or disrupt network operations. Preventing the occurrence of such node replication attacks in a network is a challenging task. In this survey article, we provide a classification of detection schemes and explore the various schemes proposed in each category. We also compare the detection schemes against certain evaluation parameters and discuss their limitations. Finally, we provide some suggestions for future research against such attacks.
Abstract: The purpose of this research is to construct a watching system that monitors human activity in a room and detects abnormalities at an early stage to prevent unattended deaths of people living alone. In this article, we propose a method whereby highly urgent abnormal conditions of a person are determined from changes in the concentration of CO2 generated by activity and respiration in a room. We also discuss the effect the amount of activity has on this determination. The results showed that the discrimination method does not depend on the amount of activity and is effective in judging highly urgent abnormal conditions.
Abstract: Usability testing (UT) is one of the vital steps in the user-centred design (UCD) process when designing a product. In an e-commerce ecosystem, UT becomes essential because new products, features, and services are launched very frequently, and the company incurs losses if an unusable, inefficient product is put on the market and rejected by customers. This paper examines why UT is important in the product life-cycle of an e-commerce ecosystem. Secondary user research was conducted to find out the work patterns, development methods, types of stakeholders, and technology constraints of a typical e-commerce company. Qualitative user interviews were conducted with product managers and designers to understand the structure, project planning, product management method, and role of the design team in a mid-level company. The paper addresses the usual apprehensions companies have about inculcating UT within the team, stressing factors such as limited monetary resources, the lack of usability experts, narrow timelines, and the lack of understanding among higher management as primary reasons. Outsourcing UT to vendors is also very prevalent among mid-level e-commerce companies, but it has severe repercussions: very little team involvement, high cost, misinterpretation of the findings, elongated timelines, and a lack of empathy towards the customer. The shortfalls of having no in-house UT process, or of conducting UT only through vendors, are bad user experiences for customers interacting with the product and badly designed products that are neither useful nor usable. As a result, companies see declining conversion rates in apps and websites, high bounce rates, and increased uninstall rates. There was thus a need for a leaner UT system that could solve these issues for the company. This paper focuses on optimizing the UT process with a collaborative method.
The degree of optimization and the structure of the collaborative method are the highlights of this paper. In the collaborative method of UT, the centralised design team of the company takes responsibility for conducting and analysing the UT. The UT is usually formative: designers take the findings into account and use them in the ideation process. The success of the collaborative method of UT lies in its ability to sync with the product management method employed by the company or team. The collaborative method focuses on engaging various teams (design, marketing, product, administration, IT, etc.), each with its own defined roles and responsibilities, in conducting a smooth in-house UT with users. The paper finally highlights the positive results of the collaborative UT method after more than 100 in-lab interviews with users across different lines of business. These include improved interaction between stakeholders and the design team, empathy towards users, improved design iteration, better sanity checks of design solutions, optimization of time and money, and effective and efficient design solutions. The future scope of collaborative UT is to make the method leaner by reducing the number of days needed to complete the entire project, from planning between teams to publishing the UT report.
Abstract: An automated fibre placement method has been developed to build through-thickness reinforcement into carbon fibre reinforced plastic laminates during their production, with the goal of increasing delamination fracture toughness while circumventing the additional costs and defects imposed by post-layup stitching and z-pinning. Termed ‘inter-weaving’, the method uses custom placement sequences of thermoset prepreg tows to distribute regular fibre link regions in traditionally clean ply interfaces. Inter-weaving’s impact on mode I delamination fracture toughness was evaluated experimentally through double cantilever beam tests (ASTM standard D5528-13) on [±15°]9 laminates made from Park Electrochemical Corp. E-752-LT 1/4” carbon fibre prepreg tape. Unwoven and inter-woven automated fibre placement samples were compared to traditional laminates produced from standard uni-directional plies of the same material system. Unwoven automated fibre placement laminates were found to suffer a mostly constant 3.5% decrease in mode I delamination fracture toughness compared to flat uni-directional plies. Inter-weaving caused significant local fracture toughness increases (up to 50%), though these were offset by a matching overall reduction. These positive and negative behaviours of inter-woven laminates were found to be caused, respectively, by fibre breakage and matrix deformation at inter-weave sites, and by the 3D layering of inter-woven ply interfaces providing numerous paths of least resistance for crack propagation.
Abstract: This paper brings to the fore the inherent advantages of applying mobile agents to procure software products rather than downloading software content over the Internet. It proposes a system whereby products are delivered on compact disk with a mobile agent as the deliverable. The client/user purchases a software product but must connect to the remote server of the software developer before installation. The user provides an activation code that activates the mobile agent, which is part of the software product on the compact disk. The validity of the activation code is checked on connection at the developer’s end to ascertain authenticity and prevent piracy. The system was evaluated by downloading two different software products and comparing this with installing the same products from compact disk using the mobile agent approach. Downloading software content from the developer’s database, as in the traditional method, requires a continuously open connection between the client and the developer’s end, which is not always economically or technically feasible over a fixed network. A mobile agent, after being dispatched into the network, becomes independent of the creating process and can operate asynchronously and autonomously; it can reconnect after completing its task and return to deliver its results. Response time and network load are very low with the mobile agent approach.
Abstract: This paper presents a thirteen-level asymmetrical cascaded H-bridge single-phase inverter. In this configuration, the desired output voltage level is achieved by connecting the DC sources in different combinations by triggering the switches. The modes of operation are explained in detail for positive level generation. Moreover, a comparison is made with the conventional diode-clamped, flying-capacitor, and cascaded H-bridge topologies, as well as with some recently proposed topologies, to show the significance of the proposed topology in terms of reduced part count. The simulation work has been carried out in the MATLAB/Simulink environment. Experimental work was also carried out at a lower rating to verify the performance and feasibility of the proposed topology. Further, results are presented for different loading conditions.
Abstract: Most accidents occur in urban areas, and most of the related casualties are vulnerable road users (pedestrians and cyclists). Traffic calming measures (TCMs) are widely used and considered successful in reducing speed and traffic volume. However, TCMs create unwanted effects, including noise, emissions, energy consumption, vehicle delays and increased emergency response time (ERT). Different vertical and horizontal TCMs have already been applied nationally (in Sweden) and internationally, with different impacts. It is a big challenge for traffic engineers, planners, and policy-makers to choose and prioritize the best TCMs to implement. This study will assess the existing guidelines for TCMs in relation to safety and ERT, with a focus on data from the city of Norrköping in Sweden. The expected results will save lives, time, and money, particularly on Swedish roads. The study will also review new technologies and how they can improve safety and reduce ERT.
Abstract: Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage the complexity of the building stock, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other reasons because this information is often easily available. This segmentation also makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock entails a loss of detail. Thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, using a simplified representation of the building stock could come at the expense of model accuracy. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes’ ability to emulate the average energy demands of the buildings they are meant to represent. This is done for the buildings’ energy demands as a whole as well as for relevant sub-demands, both evaluated in relation to the type and age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to use of the archetype method.
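As a minimal illustration of the archetype segmentation described above (with invented buildings and demand figures, not data from the study), the following sketch groups buildings by type and a coarse age band, uses the segment mean as the archetype demand, and reports the per-building error this simplification introduces:

```python
# Sketch of archetype-based building stock segmentation (hypothetical data).
# Buildings are grouped by (type, age band); each archetype's energy demand
# is the mean demand of its segment, which is then compared with the
# demands of the individual buildings it represents.
from collections import defaultdict
from statistics import mean

# Hypothetical building records: (building_type, construction_year, kWh/m2/yr)
buildings = [
    ("detached", 1965, 165.0),
    ("detached", 1972, 158.0),
    ("detached", 2004, 95.0),
    ("apartment", 1968, 120.0),
    ("apartment", 2010, 70.0),
    ("apartment", 2012, 66.0),
]

def age_band(year):
    """Coarse age segmentation, a common archetype criterion."""
    return "pre-1980" if year < 1980 else "post-1980"

# Form archetypes by segmenting on (type, age band).
segments = defaultdict(list)
for btype, year, demand in buildings:
    segments[(btype, age_band(year))].append(demand)

archetype_demand = {seg: mean(vals) for seg, vals in segments.items()}

# Accuracy check: error of the archetype value for each building it represents.
for btype, year, demand in buildings:
    est = archetype_demand[(btype, age_band(year))]
    print(f"{btype:9s} {year}: actual {demand:6.1f}, archetype {est:6.1f}, "
          f"error {est - demand:+6.1f}")
```

The per-building errors illustrate the loss of detail discussed in the abstract: the archetype value is exact only when a segment is homogeneous.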
Abstract: One of the major shortcomings of widely used scientometric indicators is that different disciplines cannot be compared with each other. The issue of cross-disciplinary normalization has long been discussed, but even the classification of publications into scientific domains poses problems. Structural properties of citation networks offer new possibilities; however, the large size and constant growth of these networks call for caution. Here we present a new tool that relies on the structural properties of citation networks to perform cross-field normalization of the scientometric indicators of individual publications. Due to the large size of the networks, a systematic procedure for identifying scientific domains based on a local community detection algorithm is proposed. The algorithm is tested on different benchmark and real-world networks. Then, using this algorithm, the mechanism of the scientometric indicator normalization process is demonstrated for a few indicators, such as the citation number, the P-index and a local version of the PageRank indicator. The fat-tailed distribution of the article indicators enables us to successfully perform the indicator normalization process.
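To make the normalization idea concrete, the following toy sketch (hypothetical publications and community labels, not the tool itself) divides each publication's citation count by the mean citation count of its detected community, so that papers from low- and high-citation fields become comparable:

```python
# Minimal sketch of cross-field indicator normalization (hypothetical data).
# Each publication's raw citation count is divided by the mean citation
# count of the scientific domain (community) it belongs to.
from statistics import mean

# Hypothetical publications: id -> (community label, citation count)
pubs = {
    "p1": ("mathematics", 8),
    "p2": ("mathematics", 2),
    "p3": ("mathematics", 5),
    "p4": ("biomedicine", 120),
    "p5": ("biomedicine", 40),
    "p6": ("biomedicine", 80),
}

# Mean citation count per detected community.
communities = {}
for cid, cites in pubs.values():
    communities.setdefault(cid, []).append(cites)
field_mean = {cid: mean(v) for cid, v in communities.items()}

# Normalized indicator: raw citations relative to the field average.
normalized = {pid: cites / field_mean[cid] for pid, (cid, cites) in pubs.items()}

for pid, score in sorted(normalized.items()):
    print(pid, round(score, 2))
```

In this toy example the most-cited mathematics paper and the most-cited biomedicine paper end up with comparable normalized scores (1.6 and 1.5) despite an order-of-magnitude gap in raw citations.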
Abstract: Human motion recognition has attracted increasing attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, and content-based video compression and retrieval. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use Discrete Hidden Markov Models (DHMMs) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification helps avoid the misclassifications that can occur when recognizing similar motions. Two experiments were conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we built a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. The experimental results demonstrate that our method outperforms most existing methods evaluated on the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
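The two-direction DHMM scoring can be sketched as follows. The parameters below are hand-picked toy values rather than models trained on motion data: each class holds one emission model for the forward direction and one for the time-reversed direction, and a sequence is classified by the sum of the two log-likelihoods.

```python
# Toy sketch of two-direction discrete-HMM classification (not the
# authors' trained models): one forward-direction and one
# backward-direction emission model per class.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

pi = np.array([1.0, 0.0])          # always start in state 0
A = np.array([[0.7, 0.3],          # left-to-right transitions
              [0.0, 1.0]])

# Class 0 represents a "0s then 1s" motion; class 1 is its mirror image.
# Each class: (forward emission model, backward emission model), where the
# backward model stands in for one trained on time-reversed sequences.
models = {
    0: (np.array([[0.9, 0.1], [0.1, 0.9]]),
        np.array([[0.1, 0.9], [0.9, 0.1]])),
    1: (np.array([[0.1, 0.9], [0.9, 0.1]]),
        np.array([[0.9, 0.1], [0.1, 0.9]])),
}

def predict(obs):
    """Classify by summed forward + backward-direction log-likelihood."""
    scores = {c: forward_loglik(obs, pi, A, Bf)
                 + forward_loglik(obs[::-1], pi, A, Bb)
              for c, (Bf, Bb) in models.items()}
    return max(scores, key=scores.get)

print(predict([0, 0, 1, 1, 1]))  # rising pattern -> class 0
print(predict([1, 1, 1, 0, 0]))  # falling pattern -> class 1
```

Scoring the reversed sequence against a dedicated backward model is what separates the two mirror-image classes here; a single forward model would find them much harder to tell apart.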
Abstract: What people say on social media has turned into a rich source of information for understanding social behavior. Specifically, the growing use of the Twitter social medium for political communication has created opportunities to learn the opinions of large numbers of politically active individuals in real time and to predict the global political tendencies of a specific country, leading to an increasing body of research on this topic. The majority of these studies have focused on polarized political contexts characterized by only two alternatives. Unlike them, this paper tackles the challenge of forecasting Spanish political trends, characterized by multiple political parties, by analyzing Twitter users’ political tendency. To this end, a new strategy, named the Tweets Analysis Strategy (TAS), is proposed. It is based on analyzing users’ tweets by discovering their sentiment (positive, negative or neutral) and classifying them according to the political party they support. From this individual political tendency, the global political prediction for each political party is calculated. For this purpose, two different strategies for the sentiment analysis are proposed: one based on Positive and Negative word Matching (PNM) and a second based on a Neural Network Strategy (NNS). The complete TAS strategy has been implemented in a Big Data environment. The experimental results presented in this paper reveal that the NNS strategy performs much better than the PNM strategy at analyzing tweet sentiment. In addition, this research analyzes the viability of the TAS strategy for obtaining the global trend in a political context made up of multiple parties, with an error lower than 23%.
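A minimal sketch of the PNM idea follows; the word lists and tweets are invented for illustration and are not the lexicons used in the study:

```python
# Toy sketch of Positive/Negative word Matching (PNM): a tweet's sentiment
# is decided by counting matches against positive and negative word lists.
POSITIVE = {"great", "support", "win", "good", "love"}
NEGATIVE = {"bad", "corrupt", "lose", "against", "hate"}

def pnm_sentiment(tweet):
    """Return 'positive', 'negative' or 'neutral' by word matching."""
    words = tweet.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

tweets = [
    "great rally today we will win",
    "this policy is bad and corrupt",
    "the debate is tonight",
]
for t in tweets:
    print(pnm_sentiment(t), "->", t)
```

The simplicity of this matching scheme also suggests why a learned model such as the NNS can outperform it: negation, sarcasm and out-of-lexicon words are invisible to pure word counting.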
Abstract: In order to study the topic of cities crossing rivers, a four-dimensional analysis method consisting of a timeline, X-axis, Y-axis, and Z-axis is proposed. Policies, plans, and their implications are summarized and researched along the timeline. The X-axis is the direction parallel to the river. The research area was chosen because of its important connection function. It is proposed that more of the surface water network should be built because of the ecological orientation of the research area, and the analysis of groundwater confirms that this proposal is feasible. After the blue water network is settled, the green landscape network surrounded by it can be planned. The direction transversal to the river (Y-axis) should run through the transportation axis so that the urban texture can stretch in an ecological way. Therefore, it is suggested that the work of the planning bureau and the river bureau be coordinated. The Z-axis research concerns the section view of the river, especially the Yellow River’s special feature of being a perched river. Based on water control safety demands, river parks could be constructed in the embankment buffer zone, and many kinds of ornamental trees could be used to build the buffer zone. The city crossing a river is a typical case in which landscaping is used to build a symbiotic relationship between urban landscape architecture and the environment. The local environment should be respected in the process of city expansion. The planning order of "Benefit - Flood Control Safety" should be replaced by "Flood Control Safety - Landscape Architecture - People - Benefit".
Abstract: Various processes are modelled using a discrete phase, where particles are seeded from a source. Such particles can represent liquid water droplets, which affect the continuous phase by exchanging thermal energy, momentum, species, etc. Discrete phases are typically modelled using parcels, each of which represents a collection of particles sharing properties such as temperature, velocity, etc. When coupling the phases, the exchange rates are integrated over the cell in which the parcel is located. This can cause spikes and fluctuating exchange rates. This paper presents an alternative method of coupling a discrete and a continuous plug flow phase. This is done using triangular parcels, which span between nodes following the dynamics of single droplets; the triangular parcels are thus propagated using their corner nodes. At each time step, the exchange rates are spatially integrated over the surface of the triangular parcels, which yields a smooth, continuous exchange rate to the continuous phase. The results show that the method is more stable, converges slightly faster, and yields smoother exchange rates than the stream tube approach. However, the computational requirements are about five times greater, so the applicability of the alternative method should be limited to processes where the exchange rates are important. The overall balances of the exchanged properties did not change significantly with the new approach.
Abstract: In this paper, we present a contribution to the modeling and control of a wind energy conversion system based on a Doubly Fed Induction Generator (DFIG). Since the wind speed is random, the system has to deliver optimal electrical power to the network while ensuring strength and stability. In this work, a backstepping controller is used to control the generator via two converters linked by a DC bus capacitor and connected to the grid through an R-L filter, in order to optimize the capture of wind energy. The whole system is simulated in the MATLAB/Simulink environment to show the performance and robustness of the proposed controller.
Abstract: Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, and image analytics. Most of the data in the field are well structured and available in numerical or categorical formats which can be used for experiments directly. But at the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; it can be found in the form of discharge summaries, clinical notes, and procedural notes, which are in human-written narrative format and have neither a relational model nor any standard grammatical structure. An important step in the utilization of these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information rapidly and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, making it both fast and configurable. The authors also briefly examine its performance relative to MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
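The dictionary-based matching can be sketched as follows; the vocabulary, concept identifiers, and indexing scheme below are illustrative assumptions, not the actual Q-Map implementation:

```python
# Minimal sketch of dictionary-based concept retrieval from free clinical
# text. A curated vocabulary is indexed by its first token so candidate
# phrases can be checked quickly against the note.
from collections import defaultdict

# Hypothetical curated knowledge source: surface form -> concept id
VOCAB = {
    "myocardial infarction": "C0027051",
    "diabetes mellitus": "C0011849",
    "hypertension": "C0020538",
}

# Index phrases by their first token for fast candidate lookup.
index = defaultdict(list)
for phrase, cid in VOCAB.items():
    index[phrase.split()[0]].append((phrase.split(), cid))

def extract_concepts(text):
    """Return (phrase, concept id) pairs found in the note."""
    tokens = text.lower().replace(".", " ").replace(",", " ").split()
    hits = []
    for i, tok in enumerate(tokens):
        for phrase, cid in index.get(tok, []):
            if tokens[i:i + len(phrase)] == phrase:
                hits.append((" ".join(phrase), cid))
    return hits

note = "Patient has a history of diabetes mellitus and hypertension."
print(extract_concepts(note))
```

Indexing on the first token keeps lookup cost proportional to the note length rather than the vocabulary size, which is the kind of property that makes such matching fast and configurable at scale.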
Abstract: Underground structures are among those structures whose design procedures involve uncertainty, due to the complexity of the surrounding soil conditions. Underpass tunnels are among the structures so affected. Despite geotechnical site investigations, many uncertainties remain in the soil properties due to unknown events. As a result, numerically computed settlements can conflict with the values recorded in the project. This paper reports a case study of a specific underpass tunnel constructed by the New Austrian Tunnelling Method in Iran. The tunnel has an overburden of about 11.3 m, a height of 12.2 m, and a width of 14.4 m, with 2.5 traffic lanes. The numerical model was developed in a 2D finite element program (PLAXIS Version 8). Comparing displacement histories at the ground surface during the entire installation of the initial lining, the estimated surface settlement was about four times the field-recorded value, which indicates that some unknown local events affected that value. The displacement ratios also differed greatly between the numerical and field data. Consequently, by running several numerical back-analyses using laboratory and field test data, the geotechnical parameters were revised to match the monitoring data. Finally, it was found that the values of soil parameters are usually conservatively underestimated, by up to 40 percent, by typical engineering judgment. The discrepancy could additionally be attributed to constitutive models inappropriate for the specific soil condition.
Abstract: This paper presents a study of Lamb wave damage diagnosis of composite delamination using instantaneous phase data. Numerical experiments are performed using the finite element method. Delamination damages of different sizes are modeled using the finite element package ABAQUS. Lamb wave excitation and response data are obtained using a pitch-catch configuration. Empirical mode decomposition is employed to extract the intrinsic mode functions (IMFs). The Hilbert–Huang transform is applied to each of the resulting IMFs to obtain the instantaneous phase information. Baseline data for healthy plates are also generated using the same procedure. The size of the delamination is correlated with the instantaneous phase change for damage diagnosis. It is observed that the unwrapped instantaneous phase shows a consistent behavior with increasing delamination size.
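The instantaneous phase extraction can be sketched as follows. The paper applies empirical mode decomposition first, so the synthetic tone below merely stands in for an IMF; `scipy.signal.hilbert` provides the analytic signal, and the sampling rate and tone frequency are arbitrary choices:

```python
# Sketch of extracting the unwrapped instantaneous phase of a signal, the
# quantity used above as the damage indicator (a synthetic tone stands in
# for an IMF obtained from empirical mode decomposition).
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                       # sampling rate, Hz (arbitrary)
t = np.arange(0, 0.2, 1 / fs)
f0 = 50.0                         # tone frequency, Hz (arbitrary)
signal = np.sin(2 * np.pi * f0 * t)

analytic = hilbert(signal)                      # analytic signal
phase = np.unwrap(np.angle(analytic))           # unwrapped instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz

# For a clean tone, the instantaneous frequency recovers f0 away from edges.
print("median instantaneous frequency:", round(float(np.median(inst_freq)), 1))
```

In a diagnosis setting, one would compare the unwrapped phase of the damaged-plate response against the healthy baseline rather than inspect a single signal in isolation.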
Abstract: A critical problem in wireless sensor networks is the limited battery and memory of nodes. Therefore, each node in the network can maintain only a subset of its neighbors to communicate with. This increases battery usage in the network, because each packet must take more hops to reach its destination. In order to tackle these problems, spanner graphs have been defined. Since each node has a small degree in a spanner graph, and the distance in the graph is not much greater than the actual geographical distance, spanner graphs are suitable candidates for the topology of a wireless sensor network. In this paper, we study Yao graphs and their behavior for randomly selected sets of points. We generate several random point sets and compare the properties of their Yao graphs with those of the complete graph. Based on our data sets, we obtain several charts demonstrating how Yao graphs behave for randomly chosen point sets. As the results show, the stretch factor of a Yao graph follows a normal distribution. Furthermore, the stretch factor is on average far less than the worst-case stretch factor proved for Yao graphs in previous results. Finally, we use a Yao graph for a realistic point set and study its stretch factor in a real-world setting.
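A Yao graph construction can be sketched as follows; the cone count k = 6 and the random point count are arbitrary illustrative choices:

```python
# Sketch of Yao graph construction on random points: around each point the
# plane is split into k equal cones, and a directed edge goes to the
# nearest neighbour inside each cone.
import math
import random

def yao_graph(points, k=6):
    """Return the Yao graph as a set of directed edges (i, j)."""
    edges = set()
    for i, (xi, yi) in enumerate(points):
        best = {}  # cone index -> (distance, nearest neighbour j)
        for j, (xj, yj) in enumerate(points):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            angle = math.atan2(dy, dx) % (2 * math.pi)
            cone = int(angle / (2 * math.pi / k)) % k
            d = math.hypot(dx, dy)
            if cone not in best or d < best[cone][0]:
                best[cone] = (d, j)
        for _, j in best.values():
            edges.add((i, j))
    return edges

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(20)]
g = yao_graph(pts, k=6)
# Each node has out-degree at most k, far below the complete graph's n - 1.
print(len(g), "directed edges for", len(pts), "points")
```

The bounded out-degree is exactly the property that makes Yao graphs attractive as sensor network topologies: each node keeps at most k neighbours while the graph distance stays close to the geographical distance.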