Abstract: The use of natural lighting in constructed buildings has
come to prominence over the last decade within the scope of energy
efficiency. Natural lighting methods aim to take maximum advantage
of daylight and to reduce the use of artificial lighting. Increasing
the amount of daylight in buildings with suitable methods yields
optimum results in terms of comfort and energy saving when the
daylight-artificial light integration is ensured by a suitable control
system. Using natural light in spaces that require lighting saves
energy to a great extent. This study aims to save the energy used for
lighting. To this end, the lighting of a scanning laboratory of a
hospital was realized with a lighting automation system combining
natural and artificial lighting. Light pipes were used for natural
lighting, and dimmable power LED modules for artificial lighting.
The need for lighting was tracked with motion sensors. The
lighting automation combining natural and artificial light was
achieved with fuzzy logic control. Energy savings in lighting were
obtained in the scanning laboratory where this application was
realized.
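The abstract does not publish the controller's rule base; the sketch below, with hypothetical membership breakpoints and rules, only illustrates how a fuzzy controller of this kind might map measured daylight to a dimming level for the power LED modules.

```python
# Minimal fuzzy dimming sketch.  The membership breakpoints and the
# three rules below are hypothetical illustrations -- the paper does
# not publish its actual rule base.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def led_dim_level(daylight_lux):
    """Map measured daylight (lux) to an LED output fraction in [0, 1]."""
    # Fuzzify daylight into three illustrative sets.
    low = tri(daylight_lux, -1, 0, 250)
    med = tri(daylight_lux, 100, 300, 500)
    high = tri(daylight_lux, 350, 600, 10**9)
    # Rules: low daylight -> full LED output, medium -> half, high -> off.
    num = low * 1.0 + med * 0.5 + high * 0.0
    den = low + med + high
    # Defuzzify by weighted average of the rule consequents.
    return num / den if den else 0.0
```

With these breakpoints, a dark room drives the LEDs to full output and bright daylight turns them off, with a smooth blend in between.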
Abstract: Addressing the problems in the low-carbon technology of Chinese manufacturing industries, such as an irrational energy structure, a lack of technological innovation, and financial constraints, this paper proposes combining the leading role of the government with the roles of enterprises and the market. That is, through increased governmental funding, the adjustment of industrial structures and the enhancement of legal supervision are supported; technological innovation is accelerated by the enterprises; and carbon trading is promoted so as to trigger a low-carbon revolution in Chinese manufacturing.
Abstract: This paper presents a constrained valley detection
algorithm. The intent is to find valleys in the map for path planning
that enables a robot or a vehicle to move safely. The constraints on a
valley are a desired width and a desired depth, which ensure the space
for movement when a vehicle passes through the valley. We propose an
algorithm to find valleys satisfying these two dimensional constraints.
The merit of our algorithm is that no pre-processing or
post-processing is necessary to eliminate undesired small valleys.
The algorithm is validated through simulation using digitized
elevation data.
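The paper's algorithm runs on a 2-D elevation map; as a rough 1-D illustration of the width and depth constraints (not the authors' actual algorithm), one can scan an elevation profile for local minima and accept only valleys whose rims rise at least the required depth above the floor over a span of the required width:

```python
# Sketch of constrained valley detection on a 1-D elevation profile
# (an illustrative simplification of the 2-D problem in the paper).

def find_valleys(profile, min_depth, min_width):
    """Return (start, end) index spans whose rims rise at least
    min_depth above the valley floor and which stay below the rim
    level over at least min_width samples."""
    valleys = []
    n = len(profile)
    i = 0
    while i < n:
        is_min = ((i == 0 or profile[i] <= profile[i - 1]) and
                  (i == n - 1 or profile[i] <= profile[i + 1]))
        if not is_min:
            i += 1
            continue
        level = profile[i] + min_depth   # rim elevation required
        l = i
        while l > 0 and profile[l - 1] < level:
            l -= 1
        r = i
        while r < n - 1 and profile[r + 1] < level:
            r += 1
        # both rims must actually reach the required level
        has_rims = l > 0 and r < n - 1
        if has_rims and r - l + 1 >= min_width:
            valleys.append((l, r))
            i = r + 1                    # skip past the accepted valley
        else:
            i += 1
    return valleys
```

Note how valleys that are too shallow or too narrow are simply never emitted, which mirrors the claim that no separate post-processing step is needed to remove small valleys.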
Abstract: CIM is the standard formalism for modeling management
information developed by the Distributed Management Task
Force (DMTF) in the context of its WBEM proposal, designed to
provide a conceptual view of the managed environment. In this
paper, we propose the inclusion of formal knowledge representation
techniques, based on Description Logics (DLs) and the Web Ontology
Language (OWL), in CIM-based conceptual modeling, and then we
examine the benefits of such a decision. The proposal is specified
as a CIM metamodel level mapping to a highly expressive subset
of DLs capable of capturing all the semantics of the models. The
paper shows how the proposed mapping provides CIM diagrams with
precise semantics and can be used for automatic reasoning about the
management information models, as a design aid, by means of
new-generation CASE tools, thanks to the use of state-of-the-art automatic
reasoning systems that support the proposed logic and use algorithms
that are sound and complete with respect to the semantics. Such a
CASE tool framework has been developed by the authors and its
architecture is also introduced. The proposed formalization is not
only useful at design time, but also at run time through the use of
rational autonomous agents, in response to a need recently recognized
by the DMTF.
Abstract: This work presents the mixed-mode II/III prestressed split-cantilever beam specimen for the fracture testing of composite materials. In accordance with the concept of prestressed composite beams, one of the two fracture modes is provided by the prestressed state of the specimen, and the other one is increased up to fracture initiation by using a testing machine. The novel beam-like specimen is able to provide any combination of the mode-II and mode-III energy release rates. A simple closed-form solution is developed using beam theory as a data reduction scheme and for the calculation of the energy release rates in the new configuration. The applicability and the limitations of the novel fracture mechanical test are demonstrated using unidirectional glass/polyester composite specimens. If only the onset of crack propagation is involved, the mixed-mode beam specimen can be used to obtain the fracture criterion of transparent composite materials in the GII - GIII plane in a relatively simple way.
Abstract: This paper proposes a scheduling scheme using feedback
control to reduce the response time of aperiodic tasks with soft
real-time constraints. We design an algorithm based on the proposed
scheduling scheme and the Total Bandwidth Server (TBS), a
conventional server technique for scheduling aperiodic tasks. We then
describe the feedback controller of the algorithm and give methods
for tuning the control parameters. The simulation study demonstrates
that the algorithm can reduce the mean response time by up to 26%
compared to TBS, in exchange for slight deadline misses.
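The feedback controller and its tuning are not given in the abstract, but the underlying TBS deadline-assignment rule is standard (Spuri and Buttazzo): each aperiodic job k with release time r_k and execution time C_k receives the deadline d_k = max(r_k, d_{k-1}) + C_k / U_s, where U_s is the server bandwidth. A minimal sketch of that baseline rule:

```python
# Plain Total Bandwidth Server (TBS) deadline assignment.  The paper's
# feedback controller adjusts this scheme; its gains are not given in
# the abstract, so only the baseline TBS rule is sketched here.

class TotalBandwidthServer:
    def __init__(self, utilization):
        self.u = utilization      # server bandwidth U_s for aperiodic tasks
        self.last_deadline = 0.0  # d_{k-1}

    def assign_deadline(self, release_time, exec_time):
        """d_k = max(r_k, d_{k-1}) + C_k / U_s."""
        d = max(release_time, self.last_deadline) + exec_time / self.u
        self.last_deadline = d
        return d
```

For example, with U_s = 0.25, a job released at t = 0 with C = 1 gets deadline 4.0, and a second job released at t = 2 with C = 0.5 gets deadline max(2, 4) + 2 = 6.0.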
Abstract: Human-computer interaction has progressed
considerably beyond the traditional modes of interaction. Vision-based
interfaces are a revolutionary technology, allowing interaction
through human actions and gestures. Researchers have developed
numerous accurate techniques; however, with few exceptions,
these techniques have not been evaluated using standard HCI methods. In
this paper we present a comprehensive framework to address this
issue. Our evaluation of a computer vision application shows that, in
addition to accuracy, it is vital to address human factors.
Abstract: Capacitive electrocardiogram (ECG) measurement is an attractive approach for long-term health monitoring. However, there is little literature available on its implementation, especially for multichannel systems in standard ECG leads. This paper begins from the design criteria for capacitive ECG measurement and presents a multichannel limb-lead capacitive ECG system with conductive fabric tapes pasted on a double-layer PCB as the capacitive sensors. The proposed prototype system incorporates a capacitive driven-body (CDB) circuit to reduce common-mode power-line interference (PLI). The prototype system has been verified to be stable by theoretical analysis and practical long-term experiments. The signal quality is competitive with that acquired by commercial ECG machines. The feasible size of and distance to the capacitive sensor have also been evaluated by a series of tests, which suggest a sensor area greater than 60 cm2 and a coupling distance smaller than 1.5 mm for capacitive ECG measurement.
Abstract: Cyber attacks pose a serious threat to all states. Therefore, states constantly seek various methods to counter those threats. In addition, recent changes in the nature of cyber attacks and their more complicated methods have created a new concept: active cyber defense (ACD). This article first tries to answer why ACD is important to NATO and to establish NATO's viewpoint towards ACD. Secondly, infrastructure protection is essential to cyber defense, and critical infrastructure protection with ACD means is even more important. It is assumed that by implementing active cyber defense, NATO may not only be able to repel attacks but also to deter them. Hence, the use of ACD has a direct positive effect on the future of all international organizations, including NATO.
Abstract: The nexus between language and culture is so
intertwined and significant that language is largely seen as a
vehicle for cultural transmission. Culture itself refers to the aggregate
belief system of a people, embellishing its corporate national image
or brand. If we conceive national rebranding as a campaign to
rekindle the patriotic flame in the consciousness of a people towards
its sociocultural imperatives and values, then, Nigerian indigenous
linguistic flame has not been ignited. Consequently, the paper
contends that the current national rebranding policy remains a myth
in the confines of the elitists' intellectual squabble. It however
recommends that the use of our indigenous languages should be
supported by adequate legislation and also propagated by Nollywood
in order to revamp and sustain the people’s interest in their local
languages. Finally, the use of the indigenous Nigerian languages
demonstrates patriotism, an important ingredient for actualizing a
genuine national rebranding.
Abstract: This is the second part of the paper. Aside from the
core subroutine test reported previously, it focuses on the simulation of
turbulence governed by the full STF Navier-Stokes equations on a
large scale. The law of the wall is found plausible in this study as a model
of the boundary-layer dynamics. Model validations proceed to
include velocity profiles of a stationary turbulent Couette flow, pure
sloshing flow simulations, and the identification of water-surface
inclination due to fluid accelerations. Errors resulting from the
irrotational and hydrostatic assumptions are explored by studying
a wind-driven water circulation with no shaking. Illustrative
examples show that this numerical strategy works for the simulation
of sloshing-shear mixed flow in a 3-D rigid rectangular-base tank.
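The law of the wall invoked as the boundary-layer model is the standard logarithmic relation for the mean velocity in the inner region:

```latex
u^+ = \frac{1}{\kappa} \ln y^+ + B, \qquad
u^+ = \frac{u}{u_\tau}, \quad y^+ = \frac{y\, u_\tau}{\nu},
```

where u_τ is the friction velocity, ν the kinematic viscosity, and the commonly used constants are κ ≈ 0.41 and B ≈ 5.0.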
Abstract: Multiple sequence alignment is a fundamental part in
many bioinformatics applications such as phylogenetic analysis.
Many alignment methods have been proposed. Each method gives a
different result for the same data set, and consequently generates a
different phylogenetic tree. Hence, the chosen alignment method
affects the resulting tree. However, in the literature there is no
evaluation of multiple alignment methods based on the comparison of
their phylogenetic trees. This work evaluates the following eight
aligners: ClustalX, T-Coffee, SAGA, MUSCLE, MAFFT, DIALIGN,
ProbCons, and Align-m, based on their phylogenetic trees (test trees)
produced on a given data set. The Neighbor-Joining method is used
to estimate trees. Three criteria, namely the dNNI, the dRF, and the
Id_Tree, are established to test the ability of the different alignment
methods to produce a test tree closer to the reference one
(true tree). Results show that the method which produces the most
accurate alignment gives the test tree nearest to the reference tree.
MUSCLE outperforms all aligners with respect to the three criteria
and on all datasets, performing particularly well when sequence
identities are within 10-20%. It is followed by T-Coffee at lower
sequence identities. At around 30% identity, the tree scores of all
methods become similar.
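Of the three criteria, the dRF is the Robinson-Foulds distance between the test tree and the reference tree. For rooted trees, it can be sketched as the size of the symmetric difference of the two trees' clade sets; the nested-tuple encoding below is an illustrative assumption, not the paper's data format:

```python
# Robinson-Foulds distance sketch for rooted trees given as nested
# tuples with string leaves, e.g. (("A", "B"), ("C", "D")).

def clades(tree):
    """Return the set of clades (as frozensets of leaf names)."""
    out = set()
    def walk(node):
        if isinstance(node, str):               # leaf
            return frozenset([node])
        leaves = frozenset().union(*(walk(c) for c in node))
        out.add(leaves)
        return leaves
    walk(tree)
    return out

def rf_distance(t1, t2):
    """Count clades present in one tree but not the other."""
    return len(clades(t1) ^ clades(t2))
```

Two trees that group the same leaves everywhere have distance 0; every disagreeing clade adds to the distance.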
Abstract: Wireless sensor networks can be used to measure and monitor many challenging problems and typically involve monitoring, tracking, and controlling areas such as battlefield monitoring, object tracking, habitat monitoring, and home sentry systems. However, wireless sensor networks pose unique security challenges, including forgery of sensor data, eavesdropping, denial-of-service attacks, and the physical compromise of sensor nodes. A node in a sensor network may vanish due to power exhaustion or malicious attacks. To extend the life span of the sensor network, new node deployment is needed. In military scenarios, an intruder may directly deploy malicious nodes or manipulate existing nodes to set up malicious new nodes through many kinds of attacks. To prevent malicious nodes from joining the sensor network, security is required in the design of sensor network protocols. In this paper, we propose a security framework to provide a complete security solution against the known attacks in wireless sensor networks. Our framework accomplishes node authentication for new nodes with recognition of malicious nodes. When deployed as a framework, a high degree of security is reachable compared with conventional sensor network security solutions. The proposed framework can protect against most of the notorious attacks in sensor networks and attain better computation and communication performance. It differs from conventional authentication methods based on node identity alone: it includes both the identity of nodes and a node security time stamp in the authentication procedure. Hence the security protocols not only verify the identity of each node but also distinguish between new nodes and old nodes.
Abstract: Magnesium alloys have gained increased attention in recent years in the automotive, electronics, and medical industries. This is because magnesium alloys have better properties than aluminum alloys and steels in respect of their low density and high strength-to-weight ratio. However, the main problems in magnesium alloy welding are crack formation and the appearance of porosity during solidification. This paper proposes a unique technique to weld two thin sheets of AZ31B magnesium alloy using a paste containing Ag nanoparticles. The paste, containing Ag nanoparticles of 5 nm average diameter and an organic solvent, was used to coat the surface of an AZ31B thin sheet. The coated sheet was heated at 100 °C for 60 s to evaporate the solvent. The dried sheet was set as the lower AZ31B sheet on the jig, and then lap fillet welding was carried out using a pulsed Nd:YAG laser in a closed box filled with argon gas. The characteristics of the microstructure and the corrosion behavior of the joints were analyzed by optical microscopy (OM), energy dispersive spectrometry (EDS), electron probe micro-analysis (EPMA), scanning electron microscopy (SEM), and an immersion corrosion test. The experimental results show that the wrought AZ31B magnesium alloy can be joined successfully using Ag nanoparticles. The Ag nanoparticle insert promotes grain refinement, a narrower HAZ width, and a wider bond width compared to welds without an insert. The corrosion rate of welded AZ31B with Ag nanoparticles is reduced by up to 44% compared to the base metal. The improvement in the corrosion resistance of welded AZ31B with Ag nanoparticles is due to the finer grains and the larger grain-boundary area with high Al content. The β-phase Mg17Al12 could serve as an effective barrier and suppress further propagation of corrosion. Furthermore, the Ag distribution in the fusion zone provides much finer grains and may stabilize the magnesium solid solution, making it less soluble or less anodic in aqueous environments.
Abstract: Stochastic resonance (SR) is a phenomenon whereby
signal transmission or signal processing through certain nonlinear
systems can be improved by adding noise. This paper discusses SR in
nonlinear signal detection by a simple test statistic, which can be
computed from multiple noisy data in a binary decision problem based
on a maximum a posteriori probability criterion. The performance of
detection is assessed by the probability of detection error Per. When
the input signal is subthreshold, we establish that a benefit from
noise can be gained for different noises and confirm further that
subthreshold SR exists in nonlinear signal detection. The efficacy of
SR is significantly improved, and the minimum of Per
approaches zero as the sample number increases. These
results show the robustness of SR in signal detection and extend the
applicability of SR in signal processing.
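As a hedged illustration of the threshold-SR effect described above (with illustrative parameters and a hard-threshold nonlinearity, not the paper's exact system), a Monte-Carlo sketch: a constant subthreshold signal is observed through a hard threshold, the number of crossings in n samples feeds a MAP count detector with equal priors, and the error probability falls both when noise is added and as n grows:

```python
# Monte-Carlo sketch of threshold stochastic resonance.  A constant
# signal below the threshold theta produces no crossings without
# noise; adding Gaussian noise makes it detectable, and more samples
# drive the MAP detector's error probability toward zero.
import math
import random

def q(x):
    """Gaussian upper-tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def error_probability(signal, theta, sigma, n, trials=4000):
    """Estimate P_e of a MAP count detector deciding signal
    presence from threshold crossings in n noisy samples."""
    if sigma == 0.0:
        return 0.5        # no noise: a subthreshold signal never crosses
    p0 = q(theta / sigma)             # crossing prob. when absent
    p1 = q((theta - signal) / sigma)  # crossing prob. when present
    a = math.log(p1 / p0)
    b = math.log((1.0 - p1) / (1.0 - p0))
    rng = random.Random(7)
    errors = 0
    for _ in range(trials):
        present = rng.random() < 0.5          # equal priors
        s = signal if present else 0.0
        k = sum(s + rng.gauss(0.0, sigma) > theta for _ in range(n))
        decide = a * k + b * (n - k) > 0.0    # log-likelihood ratio test
        errors += decide != present
    return errors / trials
```

With signal 0.8, threshold 1.0, and noise standard deviation 0.5, the error probability drops from 0.5 (no noise) to a few percent at n = 16, consistent with the abstract's claim that the minimum error approaches zero as the sample number increases.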
Abstract: Let k ≥ 3 be an integer, and let G be a graph of order n with n ≥ 9k + 3 - 4√(2(k - 1)² + 2). Then a spanning subgraph F of G is called a k-factor if dF(x) = k for each x ∈ V(G). A fractional k-factor is a way of assigning weights to the edges of a graph G (with all weights between 0 and 1) such that for each vertex the sum of the weights of the edges incident with that vertex is k. A graph G is a fractional k-deleted graph if there exists a fractional k-factor after deleting any edge of G. In this paper, it is proved that G is a fractional k-deleted graph if G satisfies δ(G) ≥ k + 1 and |NG(x) ∪ NG(y)| ≥ (1/2)(n + k - 2) for each pair of nonadjacent vertices x, y of G.
Abstract: Energy efficiency management lies at the heart of a
worldwide problem. The capability of a multi-agent system as a
technology to manage micro-grid operation has already been
proved. This paper deals with the implementation of a decisional
pattern applied to a multi-agent system which provides intelligence to
a distributed local energy network considered at the local consumer
level. Development of a multi-agent application involves agent
specification, analysis, design, and realization. Furthermore, it can
be implemented by following several decisional patterns. The
purpose of the present article is to suggest a new approach for a
decisional pattern involving a multi-agent system to control a
distributed local energy network in a decentralized competitive
system. The proposed solution is the result of a dichotomous
approach based on environment observation. It uses an iterative
process to solve automatic learning problems and converges
monotonically, and very fast, to the system's attracting operating point.
Abstract: In large Internet backbones, service providers
typically have to explicitly manage the traffic flows in order to
optimize the use of network resources. This process is often referred
to as Traffic Engineering (TE). Common objectives of traffic
engineering include balancing the traffic distribution across the
network and avoiding congestion hot spots. Raj P H and S V K Raja
designed a Bayesian network approach to identify congestion hot spots
in MPLS. In this approach, a Conditional Probability Distribution
(CPD) is specified for every node in the network. Based on
the CPD, the congestion hot spots are identified. Then the traffic can
be distributed so that no link in the network is either over-utilized or
under-utilized. Although the Bayesian network approach has been
implemented in operational networks, it has a number of well-known
scaling issues.
This paper proposes a new approach, which we call the Pragati
(meaning Progress) Node Popularity (PNP) approach, to identify
congestion hot spots from the network topology alone. In the
PNP approach, IP routing runs natively over the
physical topology rather than depending on the CPD of each node as
in the Bayesian network. We first illustrate our approach with a simple
network, then present a formal analysis of the PNP
approach. Our PNP approach shows that, for any given
network handled by the Bayesian approach, it identifies exactly the
same hot spots with minimum effort. We further extend the result to a
more generic one: it applies to any network topology, even when the
network is loopy. A theoretical insight of our result is that the
optimal routing is always shortest-path routing with respect to some
consideration of hot spots in the network.
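The abstract does not define how node popularity is computed. One plausible topology-only stand-in (an assumption for illustration, not the authors' PNP metric) is shortest-path betweenness, which flags the nodes that many shortest paths traverse as potential hot spots. A sketch using Brandes' algorithm on an unweighted adjacency-dict graph:

```python
# Shortest-path betweenness as a topology-only hot-spot indicator
# (illustrative stand-in; the paper's actual PNP metric is not given).
from collections import deque

def betweenness(graph):
    """Unnormalized betweenness for an unweighted graph given as
    {node: [neighbors]}, via Brandes' algorithm."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        # BFS from s: shortest-path counts (sigma) and predecessors.
        dist = {s: 0}
        sigma = {v: 0.0 for v in graph}
        sigma[s] = 1.0
        preds = {v: [] for v in graph}
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in graph[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in graph}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

On a simple path A-B-C, the middle node B carries all A-C shortest paths and scores highest, matching the intuition that topologically central nodes are the congestion candidates.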
Abstract: Efficient handoff algorithms are a cost-effective way
of enhancing the capacity and QoS of cellular systems. A higher
value of hysteresis effectively prevents unnecessary handoffs but
causes undesired cell dragging, which in turn causes
interference or can lead to dropped calls in a microcellular
environment. The problems are further exacerbated by the corner
effect phenomenon, which causes the signal level to drop by 20-30 dB
over 10-20 meters. Thus, in order to maintain reliable communication
in a microcellular system, new and better handoff algorithms must be
developed. A fuzzy-based handoff algorithm is proposed in this paper
as a solution to this problem. Handoff on the basis of the ratio of the
slope of the normal signal loss to that of the actual signal loss is
presented. The fuzzy-based solution is supported by comparing its
results with those obtained from an analytical solution.
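The ratio-of-slopes criterion can be sketched as follows; the window length, sampling model, and trigger threshold here are illustrative assumptions, not the paper's values:

```python
# Slope-ratio handoff trigger sketch: the measured RSS decay slope is
# compared with the expected (normal) path-loss slope; a ratio well
# above 1 indicates an abrupt drop such as the 20-30 dB corner effect.
# Window length and threshold are illustrative, not from the paper.

def slope(samples):
    """Least-squares slope of RSS samples taken at unit intervals."""
    n = len(samples)
    xm = (n - 1) / 2.0
    ym = sum(samples) / n
    num = sum((i - xm) * (y - ym) for i, y in enumerate(samples))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

def corner_handoff(rss_window, normal_slope, ratio_threshold=3.0):
    """Trigger handoff when the actual decay is much steeper than the
    normal path-loss decay (both slopes are negative in dB)."""
    actual = slope(rss_window)
    if actual >= 0:                  # signal not decaying: no handoff
        return False
    return actual / normal_slope >= ratio_threshold
```

For instance, with a normal loss of 0.5 dB per sample, an RSS window falling 3 dB per sample gives a ratio of 6 and triggers the handoff, while an RSS window following the normal slope does not.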
Abstract: The present work concerns the synthesis and
characterization of composites with Al alloy A 384.1 as the matrix
and Al-5% MgO as the main reinforcement in a metal matrix
composite. Its practical implication is a low-cost processing
route for the fabrication of Al alloy A 384.1 composites, given the
operational difficulties of presently available manufacturing
processes based on liquid manipulation methods. As with all new
developments, a complete understanding of the influence of processing
variables on the final quality of the product is required. The
specific heat of the material was measured with the
aid of thermographs. Products were evaluated with respect to relative
particle size and mechanical behavior under tensile loading.
Furthermore, the Taguchi technique was employed to identify the
optimum experimental results, owing to the effectiveness of
this approach.