Abstract: CIM is the standard formalism for modeling management
information developed by the Distributed Management Task
Force (DMTF) in the context of its WBEM proposal, designed to
provide a conceptual view of the managed environment. In this
paper, we propose the inclusion of formal knowledge representation
techniques, based on Description Logics (DLs) and the Web Ontology
Language (OWL), in CIM-based conceptual modeling, and then we
examine the benefits of such a decision. The proposal is specified
as a CIM metamodel level mapping to a highly expressive subset
of DLs capable of capturing all the semantics of the models. The
paper shows how the proposed mapping provides CIM diagrams with
precise semantics and can be used for automatic reasoning about the
management information models, as a design aid, by means of
new-generation CASE tools, thanks to the use of state-of-the-art automatic
reasoning systems that support the proposed logic and use algorithms
that are sound and complete with respect to the semantics. Such a
CASE tool framework has been developed by the authors and its
architecture is also introduced. The proposed formalization is not
only useful at design time, but also at run time through the use of
rational autonomous agents, in response to a need recently recognized
by the DMTF.
Abstract: The development of many measurement and inspection systems for products based on real-time image processing cannot be carried out entirely in a laboratory, owing to the size or temperature of the manufactured products. Such systems must be developed in successive phases. First, the system is installed in the production line with only an operational service to acquire images of the products and other complementary signals. Next, a recording service for the images and signals must be developed and integrated into the system. Only once a large set of product images is available can the development of the real-time image processing algorithms for measurement or inspection be accomplished under realistic conditions. Finally, the recording service is turned off or removed, and the system operates only with the real-time services for image acquisition and processing. This article presents a systematic performance evaluation of the image compression algorithms currently available for implementing a real-time recording service. The results allow a trade-off to be established between the compression of the image size and the CPU time required to reach that compression level.
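The trade-off described above, compression level versus CPU time, can be sketched with a generic benchmark. This is a minimal illustration using Python's `zlib` on synthetic data; the function name and the stand-in frame are assumptions, since the paper's actual algorithms and image data are not specified here.

```python
import time
import zlib

def benchmark_compression(data: bytes, levels=(1, 6, 9)):
    """Measure compressed size ratio and CPU time for several zlib levels.

    A minimal sketch of the trade-off study; a real recording service
    would compress acquired camera frames, not synthetic bytes.
    """
    results = []
    for level in levels:
        start = time.process_time()
        compressed = zlib.compress(data, level)
        elapsed = time.process_time() - start
        results.append((level, len(compressed) / len(data), elapsed))
    return results

# Synthetic, compressible stand-in for an acquired image frame (1 MiB).
frame = bytes(range(256)) * 4096
for level, ratio, seconds in benchmark_compression(frame):
    print(f"level={level}  ratio={ratio:.3f}  cpu={seconds:.4f}s")
```

Higher levels typically yield a smaller ratio at the cost of more CPU time, which is exactly the trade-off a real-time recording service must balance.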
Abstract: This paper proposes a scheduling scheme using feedback
control to reduce the response time of aperiodic tasks with soft
real-time constraints. We design an algorithm based on the proposed
scheduling scheme and the Total Bandwidth Server (TBS), a
conventional server technique for scheduling aperiodic tasks. We then
describe the feedback controller of the algorithm and give methods
for tuning its control parameters. The simulation study demonstrates
that the algorithm can reduce the mean response time by up to 26%
compared to TBS, in exchange for a small number of deadline misses.
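For context, the deadline-assignment rule of the Total Bandwidth Server that the abstract builds on can be sketched as follows. Only the standard TBS rule d_k = max(r_k, d_{k-1}) + C_k/U_s is assumed here; the class and parameter names are illustrative, and the paper's feedback extension is not reproduced.

```python
class TotalBandwidthServer:
    """Deadline assignment rule of the Total Bandwidth Server (TBS).

    Each aperiodic job k with arrival time r_k and execution time C_k
    receives deadline d_k = max(r_k, d_{k-1}) + C_k / U_s, where U_s is
    the bandwidth reserved for the server.  A minimal sketch; the
    feedback controller described in the abstract would adapt the
    behavior online.
    """
    def __init__(self, utilization: float):
        assert 0.0 < utilization <= 1.0
        self.u_s = utilization
        self.last_deadline = 0.0

    def assign_deadline(self, arrival: float, exec_time: float) -> float:
        deadline = max(arrival, self.last_deadline) + exec_time / self.u_s
        self.last_deadline = deadline
        return deadline

tbs = TotalBandwidthServer(utilization=0.25)
print(tbs.assign_deadline(arrival=0.0, exec_time=1.0))  # 4.0
print(tbs.assign_deadline(arrival=2.0, exec_time=0.5))  # 6.0
```

Jobs scheduled under these deadlines by EDF consume at most the reserved bandwidth U_s, which is what makes the periodic task set's guarantees hold.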
Abstract: Capacitive electrocardiogram (ECG) measurement is an attractive approach for long-term health monitoring. However, there is little literature available on its implementation, especially for multichannel systems in standard ECG leads. This paper begins from the design criteria for capacitive ECG measurement and presents a multichannel limb-lead capacitive ECG system with conductive fabric tapes pasted on a double-layer PCB as the capacitive sensors. The proposed prototype system incorporates a capacitive driven-body (CDB) circuit to reduce common-mode power-line interference (PLI). The prototype system has been verified to be stable by theoretical analysis and long-term practical experiments. The signal quality is comparable to that acquired by commercial ECG machines. The feasible size and coupling distance of the capacitive sensors have also been evaluated by a series of tests. From the test results, a sensor area greater than 60 cm2 and a coupling distance smaller than 1.5 mm are suggested for capacitive ECG measurement.
Abstract: Cyber attacks pose a serious threat to all states. States therefore constantly seek methods to counter those threats. Moreover, recent changes in the nature of cyber attacks and their increasingly sophisticated methods have given rise to a new concept: active cyber defense (ACD). This article first examines why ACD is important to NATO and establishes NATO's viewpoint towards ACD. Second, infrastructure protection is essential to cyber defense, and critical infrastructure protection by ACD means is even more important. It is argued that by implementing active cyber defense, NATO may not only be able to repel attacks but also act as a deterrent. Hence, the use of ACD has a direct positive effect on the future of all international organizations, including NATO.
Abstract: Five lignin samples were fractionated with
Acetone/Water mixtures and the obtained fractions were subjected to
extensive structural characterization, including Fourier Transform
Infrared (FT-IR), Gel permeation Chromatography (GPC) and
Phosphorus-31 NMR spectroscopy (31P-NMR). The results showed
that, for all lignins studied, solubility increases with increasing
acetone concentration. Wheat straw lignin has the highest
solubility in a 90/10 (v/v) Acetone/Water mixture, with 400 mg of
lignin dissolving in 1 mL of mixture. The weight-average molecular
weight of the obtained fractions increased with increasing acetone
concentration and thus with solubility. 31P-NMR analysis based on
lignin modification by reactive phospholane into phosphitylated
compounds was used to differentiate and quantify the different types
of OH groups (aromatic, aliphatic, and carboxylic) found in the
fractions obtained with 70/30 (v/v) Acetone/Water mixture.
Abstract: Wireless sensor networks can be used to measure and monitor many challenging problems and typically involve monitoring, tracking, and control in areas such as battlefield monitoring, object tracking, habitat monitoring, and home sentry systems. However, wireless sensor networks pose unique security challenges, including forgery of sensor data, eavesdropping, denial-of-service attacks, and the physical compromise of sensor nodes. Nodes in a sensor network may disappear due to power exhaustion or malicious attacks. To extend the life span of the sensor network, deployment of new nodes is needed. In military scenarios, an intruder may directly introduce malicious nodes or manipulate existing nodes to set up malicious new nodes through many kinds of attacks. To prevent malicious nodes from joining the sensor network, security must be built into the design of sensor network protocols. In this paper, we propose a security framework that provides a complete security solution against the known attacks in wireless sensor networks. Our framework accomplishes node authentication for new nodes with recognition of malicious nodes. When deployed as a framework, a high degree of security is achievable compared with conventional sensor network security solutions. The proposed framework can protect against most of the notorious attacks in sensor networks and attain better computation and communication performance. It differs from conventional authentication methods based on node identity alone: it incorporates both the identity of nodes and a node security time stamp into the authentication procedure. Hence, the security protocols not only verify the identity of each node but also distinguish between new nodes and old nodes.
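As a hypothetical illustration of authentication that combines node identity with a security time stamp, the following sketch computes an HMAC over both fields so that stale join requests are rejected. The key name, freshness window, and message format are assumptions for illustration, not the framework's actual protocol.

```python
import hashlib
import hmac
import time

# Assumed illustration values, not part of the proposed framework.
NETWORK_KEY = b"pre-shared-deployment-key"
FRESHNESS_WINDOW = 30.0  # seconds a join request stays valid

def make_join_token(node_id: str, timestamp: float) -> bytes:
    """Token binding a node's identity to its security time stamp."""
    msg = f"{node_id}|{timestamp:.3f}".encode()
    return hmac.new(NETWORK_KEY, msg, hashlib.sha256).digest()

def verify_join(node_id: str, timestamp: float, token: bytes,
                now: float) -> bool:
    """Accept only fresh, correctly authenticated join requests."""
    if not 0.0 <= now - timestamp <= FRESHNESS_WINDOW:
        return False  # stale or future-dated request
    expected = make_join_token(node_id, timestamp)
    return hmac.compare_digest(expected, token)

t = time.time()
token = make_join_token("node-42", t)
print(verify_join("node-42", t, token, now=t + 1.0))    # True
print(verify_join("node-42", t, token, now=t + 120.0))  # False
```

The time stamp check is what lets the verifier distinguish a fresh (new) node from a replayed credential of an old one, mirroring the new-node/old-node distinction described in the abstract.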
Abstract: As the slider process in the hard disk drive industry
becomes more complex, defect diagnosis for yield improvement
becomes more complicated and time-consuming. Manufacturing data
analysis with a data mining approach is widely used to solve this
problem. An existing mining approach combining K-Means
clustering, the machine-oriented Kruskal-Wallis test, and
multivariate charts has been applied for defect diagnosis, but it
remains a semiautomatic diagnosis system. This article aims to modify the
algorithm to support automatic decisions within the existing approach.
Based on the research framework, the new approach performs
automatic diagnosis and helps engineers find the defective
factors about 50% faster than the existing approach.
Abstract: As the dysfunctions of the information society and
social development progress, intrusion problems such as malicious
replies, spam mail, private information leakage, phishing, and
pharming, and side effects such as the spread of unwholesome
information and privacy invasion are becoming serious social
problems. Illegal access to information is also becoming a problem as
the exchange and sharing of information increases on the basis of the
extension of the communication network. On the other hand, as the
communication network has been constructed as an international,
global system, the legal response against invasion and cyber-attack
from abroad is reaching its limits. In addition, in an environment where
the important infrastructures are managed and controlled on the basis
of the information communication network, such problems pose a
threat to national security. Countermeasures to such threats are
developed and implemented on a yearly basis to protect the major
infrastructures of information communication. As a part of such
measures, we have developed a methodology for assessing the
information protection level which can be used to establish the
quantitative object setting method required for the improvement of the
information protection level.
Abstract: In recent decades, a number of robust fuzzy clustering algorithms have been proposed to partition data sets affected by noise and outliers. Robust fuzzy C-means (robust-FCM) is certainly one of the best known among these algorithms. In robust-FCM, noise is modeled as a separate cluster characterized by a prototype that has a constant distance δ from all data points. The distance δ determines the boundary of the noise cluster and is therefore a critical parameter of the algorithm. Although some approaches have been proposed to automatically determine the most suitable δ for a specific application, to date no efficient and fully satisfactory solution exists. The aim of this paper is to propose a novel method to compute the optimal δ based on analysis of the distribution of the percentage of objects assigned to the noise cluster in repeated executions of robust-FCM with decreasing values of δ. The extremely encouraging results obtained on some data sets from the literature are shown and discussed.
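The noise-cluster model described above, a prototype at a constant distance δ from every point, admits a compact sketch of one membership-update step. This is the generic noise-clustering rule only, not the paper's δ-selection method; function and variable names are illustrative.

```python
def noise_fcm_memberships(distances, delta, m=2.0):
    """One membership-update step of robust-FCM (noise clustering).

    distances[i][j] is the distance of point j from cluster prototype i;
    the noise cluster is a virtual prototype at constant distance `delta`
    from every point.  A sketch of the update rule only, not the full
    iterative algorithm.
    """
    n_clusters, n_points = len(distances), len(distances[0])
    expo = 2.0 / (m - 1.0)
    cluster_u = [[0.0] * n_points for _ in range(n_clusters)]
    noise_u = [0.0] * n_points
    for j in range(n_points):
        # Append the noise "prototype" as an extra constant distance.
        d = [max(distances[i][j], 1e-12) for i in range(n_clusters)] + [delta]
        inv = [x ** -expo for x in d]
        total = sum(inv)  # memberships are normalized to sum to 1
        for i in range(n_clusters):
            cluster_u[i][j] = inv[i] / total
        noise_u[j] = inv[-1] / total
    return cluster_u, noise_u

# A nearby point keeps high cluster membership; a far outlier
# (distances well beyond delta) is absorbed by the noise cluster.
clusters, noise = noise_fcm_memberships([[0.5, 10.0], [0.6, 12.0]], delta=2.0)
print(noise)
```

Points whose distances to all real prototypes exceed δ receive most of their membership from the noise cluster, which is why the choice of δ directly controls how many objects are declared noise.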
Abstract: The primary objective of this paper is to propose a new method for solving the assignment problem under uncertainty. In the classical assignment problem (AP), z_pq denotes the cost of assigning the qth job to the pth person, and it is deterministic in nature. Here, in an uncertain situation, we assign a cost in the form of a composite relative degree F_pq instead of z_pq, and this replacement cost is in maximization form. The problem is solved and validated by two proposed algorithms, and a new mathematical formulation of the IVIF assignment problem is presented, in which the cost is considered to be an interval-valued intuitionistic fuzzy number (IVIFN) and the membership of elements in the set can be explained by positive and negative evidence. To determine the composite relative degree of similarity of IVIFSs, the concepts of similarity measure and score function are used to validate the solution obtained by the composite relative similarity degree method. Furthermore, a hypothetical numerical illustration is conducted to clarify the effectiveness and feasibility of the method developed in the study. Finally, conclusions and suggestions for future work are presented.
Abstract: In large Internet backbones, Service Providers
typically have to explicitly manage the traffic flows in order to
optimize the use of network resources. This process is often referred
to as Traffic Engineering (TE). Common objectives of traffic
engineering include balancing traffic distribution across the network
and avoiding congestion hot spots. Raj P H and SVK Raja designed
the Bayesian network approach to identify congestion hot spots in
MPLS. In this approach, for every node in the network, the
Conditional Probability Distribution (CPD) is specified. Based on
the CPD, the congestion hot spots are identified. Then the traffic can
be distributed so that no link in the network is either over-utilized or
under-utilized. Although the Bayesian network approach has been
implemented in operational networks, it has a number of well-known
scaling issues.
This paper proposes a new approach, which we call the Pragati
(means Progress) Node Popularity (PNP) approach to identify the
congestion hot spots with the network topology alone. In the new
Pragati Node Popularity approach, IP routing runs natively over the
physical topology rather than depending on the CPD of each node as
in Bayesian network. We first illustrate our approach with a simple
network, then present a formal analysis of the Pragati Node
Popularity approach. For any given network, our PNP approach
identifies exactly the same result as the Bayesian approach, with
minimal effort. We further extend the result to a more generic one: it
holds for any network topology, even when the network contains
loops. A theoretical insight of our result is that the optimal routing
is always shortest-path routing with respect to some consideration of
hot spots in the network.
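As a hypothetical sketch of shortest-path routing that takes identified hot spots into account, the following Dijkstra variant adds a per-node penalty. The penalty map merely stands in for hot-spot information; the PNP popularity metric itself is not reproduced here, and all names are illustrative.

```python
import heapq

def shortest_path(graph, source, target, hot_spot_penalty=None):
    """Dijkstra shortest-path search over the physical topology.

    `graph` maps node -> {neighbor: link_cost}.  `hot_spot_penalty`
    optionally maps a node to an extra cost, a hypothetical way of
    steering traffic around identified congestion hot spots.
    """
    penalty = hot_spot_penalty or {}
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost + penalty.get(nbr, 0.0)
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Reconstruct the path back from the target.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]

g = {"A": {"B": 1, "C": 2}, "B": {"D": 1}, "C": {"D": 2}, "D": {}}
print(shortest_path(g, "A", "D"))              # prefers A-B-D
print(shortest_path(g, "A", "D", {"B": 5}))    # hot spot at B: A-C-D
```

Penalizing a hot-spot node simply inflates every path through it, so the ordinary shortest-path computation automatically routes around it, matching the insight that optimal routing remains shortest-path routing under a suitable cost.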
Abstract: There are many applications of power and free
conveyors in logistics today; they are the most frequently used
conveyor systems worldwide. Overhead conveyor technologies such as
power and free systems are used in most intra-logistics
applications in trade and industry. The automotive, food, beverage,
and textile industries, as well as aeronautic catering and engineering,
are among the applications. Power and free systems serve as
temporary storage and buffers between different manufacturing
intervals in manufacturing and production. Depending on the application
area, power and free conveyors are equipped with target controls enabling
complex distribution and sorting tasks. This article introduces a new
power and free conveyor design in intra-logistics and explains its
components. According to the explanation of the components, a
model is created by means of their technical characteristics. Through
the CAD software, the model is visualized. After that, the static
analysis is evaluated. This analysis helps the calculation of the
mandatory state of structures under force action. This powerful model
helps companies achieve lower development costs as well as quicker
market maturity.
Abstract: The stability of a software system is one of the most
important quality attributes affecting the maintenance effort. Many
techniques have been proposed to support the analysis of software
stability at the architecture, file, and class level of software systems,
but little effort has been made at the feature (i.e., method and
attribute) level. Moreover, the assumptions on which the existing
techniques are based often do not hold in practice. Considering
this, in this paper we present a novel metric, Stability of Software
(SoS), to measure the stability of object-oriented software systems
by simulating software change propagation in feature-level
software dependency networks. The approach is
evaluated by case studies on eight open-source Java programs, using
different software structures (one employing design patterns versus
one that does not) for the same object-oriented program. The results of the
case studies validate the effectiveness of the proposed metric. The
approach has been fully automated by a tool written in Java.
Abstract: The present work focuses on the synthesis and
characterization of composites based on Al alloy A 384.1 as the
matrix, with an Al/Al-5% MgO alloy based metal matrix composite
as the main ingredient. As practical implications, it addresses a
low-cost processing route for the fabrication of Al alloy A 384.1 and
the operational difficulties of presently available manufacturing
processes based on liquid manipulation methods. As with all new
developments, a complete understanding of the influence of the
processing variables on the final quality of the product is required.
The composite is examined comprehensively to obtain information
concerning the specific heat of the material with the aid of
thermographs. The products are evaluated with respect to relative
particle size and mechanical behavior under tensile loading.
Furthermore, the Taguchi technique was employed, and optimum
experimental results were achieved owing to the effectiveness of
this approach.
Abstract: With the aim of knowing whether curriculum and sex
differences exist in academic stress arising from perceived
expectations, high school students were asked to respond to the
Academic Expectations Stress Inventory (AESI). AESI is a nine-item
inventory with two domains, namely: expectations of
teachers/parents and expectations of self. Out of the 504 officially
enrolled high school students in a state college, 469 responded to the
inventory. Responses were analyzed using independent-samples
t-tests. Significant differences were found between the mean scores of
the respondents coming from the Science and the Vocational
curriculum. The respondents from the Science curriculum
consistently registered higher mean scores. Likewise, significant
differences were found between the male and the female respondents.
The female respondents consistently registered higher mean scores.
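The independent-samples t-test applied in the study can be sketched generically. The sample values below are made-up illustrative scores, not the AESI data, and Welch's variant is shown as one common choice; the actual study's test configuration is not specified here.

```python
import math
from statistics import mean, variance

def independent_t(sample_a, sample_b):
    """Welch's independent-samples t statistic and degrees of freedom.

    A generic sketch of the analysis; it does not assume equal
    variances between the two groups.
    """
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    se2 = va / na + vb / nb                      # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

science = [38, 41, 40, 37, 42, 39]      # hypothetical AESI totals
vocational = [33, 35, 31, 36, 34, 32]   # hypothetical AESI totals
t_stat, dof = independent_t(science, vocational)
print(f"t = {t_stat:.2f}, df = {dof:.1f}")
```

A positive t with a large magnitude relative to the degrees of freedom indicates that the first group's mean is significantly higher, which is the form of conclusion the abstract reports for the Science-curriculum and female respondents.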
Abstract: This paper reviews the present situation and the problems of
experimental teaching for mathematics majors in recent years, and
puts forward and demonstrates experimental teaching methods for
different levels of education. Regarding content and teaching
approach, it uses the course "Experiment for Program Designing &
Algorithmic Language" as an example and discusses teaching practice
and laboratory course work. In addition, a series of successful
methods and measures for experimental teaching are introduced.
Abstract: This paper presents the development of a MODAPTS-based cost estimating system to help designers estimate the manufacturing cost of assembled products, which otherwise depends on workers in the field. Competition on manufacturing cost is becoming harder because of developments in information and telecommunication technology as well as globalization. Therefore, the accuracy of assembly cost estimation is becoming more important. DFA and MODAPTS are useful methods for measuring working hours, but these two methods are used only as timetables. Therefore, in this paper, we suggest a process for measuring working hours with MODAPTS that includes accurate information from the working field. In addition, we present a method for estimating the assembly cost accurately with this real information. This research can be useful for designers, who can estimate the assembly cost more accurately, and also effective for companies concerned with reducing product cost.
Abstract: Overhead conveyor systems are distinguished by their simple
construction, wide application range, and full compatibility with
other manufacturing systems, and they are designed according to
international standards. Ultra-light overhead conveyor systems are
rope-based conveying systems with individually driven vehicles. The
vehicles can move automatically on the rope, which also supplies
energy and signals. Crossings are realized by switches. Overhead
conveyor systems are particularly used in the automotive industry but
also at post offices. Overhead conveyor systems must always be
integrated into a logistical process, finding the best way to achieve
cheaper material flow and to guarantee precise and fast
workflows. With their help, any transport can take place without
wasting ground and space, without excessive company capacity, lost
or damaged products, erroneous delivery, endless travels and without
wasting time. Ultra-light overhead conveyor systems provide optimal
material flow, which produces profit and saves time. This article
illustrates the advantages of the structure of the ultra-light overhead
conveyor systems in logistics applications and explains the steps of
their system design. After an illustration of these steps, systems
currently available on the market are presented by means of their
technical characteristics. Finally, the demands placed on an ultra-light
overhead conveyor system, given its simple construction, are illustrated.
Abstract: The Beshar River is an aquatic ecosystem affected by
pollutants. This study was conducted to evaluate the effects of
human activities on the water quality of the Beshar River. The
river is approximately 190 km long and is situated at the
geographical position 51° 20' to 51° 48' E and 30° 18' to 30° 52'
N; it is one of the most important aquatic ecosystems of Kohkiloye
and Boyerahmad province, next to the city of Yasuj in southern Iran.
The Beshar river has been contaminated by industrial, agricultural
and other activities in this region such as factories, hospitals,
agricultural farms, urban surface runoff and effluent of wastewater
treatment plants. In order to evaluate the effects of these pollutants
on the quality of the Beshar river, five monitoring stations were
selected along its course. The first station is located upstream of
Yasuj near the Dehnow village; stations 2 to 4 are located east, south
and west of city; and the 5th station is located downstream of Yasuj.
Several water quality parameters were sampled. These include pH,
dissolved oxygen, biological oxygen demand (BOD), temperature,
conductivity, turbidity, total dissolved solids and discharge or flow
measurements. Water samples from the five stations were collected
and analysed to determine the following physicochemical
parameters: EC, pH, TDS, TH, NO2, DO, BOD5, and COD during 2008
to 2009. The study shows that the BOD5 value at station 1 is at a
minimum (1.5 ppm) and increases downstream from stations 2 to 4 to
a maximum (7.2 ppm), and then decreases at station 5. The DO
value at station 1 is at a maximum (9.55 ppm) and decreases downstream
to stations 2-4, which are at a minimum (3.4 ppm), before increasing
at station 5. The amounts of BOD and TDS are highest at the 4th
station and the amount of DO is lowest at this station, marking the
4th station as more highly polluted than the other stations. The
physicochemical parameters improve at the 5th station due to
pollutant degradation and dilution. Finally, the point and nonpoint
pollutant sources of the Beshar River were determined and compared
with the monitoring results.