Abstract: This paper presents an incremental formal development of the Wireless Transaction Protocol (WTP) in Event-B. WTP is part of the Wireless Application Protocol (WAP) architecture and provides a reliable request-response service. To model and verify the protocol, we use Event-B, a formal technique that provides an accessible and rigorous development method. The interaction between modelling and proving reduces complexity and helps to eliminate misunderstandings, inconsistencies, and specification gaps. As a result, the verification of WTP allows us to find some deficiencies in the current specification.
Abstract: We present a chronological evolution of naval telecommunication networks. We distinguish several periods: without and with multiplexers, with switch systems, with federative systems, with medium switching, and with medium switching combined with wireless networks. This highlights the introduction of new layers and technologies into the architecture. These architectures are presented using layered transmission models, in a unified way, which enables us to integrate pre-existing models. A ship of a naval fleet has internal communications (i.e. the application networks on board) and external communications (i.e. the use of transmission means between ships). We propose architectures, deduced from the layer model, which are the point of convergence between the on-board networks and the HF, UHF radio, and satellite resources. This modelling allows us to consider naval communications end to end and, in a more global way, from the user on board to the user on shore, including transmission and networks on the shore side. The new architectures must take care of quality of service for end-to-end communications, all the more so as remote control is developing rapidly and will continue to do so. Naval telecommunications will become increasingly complex and will use increasingly advanced technologies; it will thus be necessary to establish clear global communication schemes to guarantee the consistency of the architectures. Our latest model has been implemented in a military naval situation and serves as the basic architecture for the RIFAN2 network.
Abstract: We have developed a distributed computing capability, Digital Forensics Compute Cluster (DFORC2) to speed up the ingestion and processing of digital evidence that is resident on computer hard drives. DFORC2 parallelizes evidence ingestion and file processing steps. It can be run on a standalone computer cluster or in the Amazon Web Services (AWS) cloud. When running in a virtualized computing environment, its cluster resources can be dynamically scaled up or down using Kubernetes. DFORC2 is an open source project that uses Autopsy, Apache Spark and Kafka, and other open source software packages. It extends the proven open source digital forensics capabilities of Autopsy to compute clusters and cloud architectures, so digital forensics tasks can be accomplished efficiently by a scalable array of cluster compute nodes. In this paper, we describe DFORC2 and compare it with a standalone version of Autopsy when both are used to process evidence from hard drives of different sizes.
Abstract: The aim of this paper is to present a QoE (Quality of Experience) IPTV SDN-based media streaming server architecture, enhanced for configuring, controlling, managing, and provisioning the improved delivery of IPTV service applications with low cost, low bandwidth, and high security. Furthermore, a virtual QoE IPTV SDN-based topology is given to provide an improved IPTV service based on QoE control and management of multimedia service functionalities. Inside the OpenFlow SDN controller, two highly flexible and efficient service load-balancing systems are enabled: one based on the Load-Balance module and one based on the GeoIP service. These two load-balancing systems greatly improve the IPTV end-users' Quality of Experience (QoE) through optimal management of resources. Through the key functionalities of the OpenFlow SDN controller, this approach produced several important features and opportunities for overcoming the critical QoE metrics for the IPTV service, such as achieving a very fast zapping time (channel switching time) of under 0.1 seconds. This approach enabled an easy and powerful transcoding system via the FFMPEG encoder. It has the ability to customize streaming dimensions, bitrates, latency management, and maximum transfer rates, ensuring the delivery of IPTV streaming services (audio and video) with high flexibility, low bandwidth, and the required performance. Unlike other architectures, this QoE IPTV SDN-based media streaming architecture provides the possibility of channel exchange between several IPTV service providers all over the world. This new functionality brings many benefits, such as increasing the number of TV channels received by end-users at low cost, decreasing the stream failure time (channel failure time < 0.1 seconds), and improving the quality of streaming services.
Abstract: In aircraft design, the jump from the conceptual to
preliminary design stage introduces a level of complexity which
cannot be realistically handled by a single optimiser, be that a
human (chief engineer) or an algorithm. The design process is often
partitioned along disciplinary lines, with each discipline given a level
of autonomy. This introduces a number of challenges including, but
not limited to: coupling of design variables; coordinating disciplinary
teams; handling of large amounts of analysis data; reaching an
acceptable design within time constraints. A number of classical
Multidisciplinary Design Optimisation (MDO) architectures exist in
academia specifically designed to address these challenges. Their
limited use in the industrial aircraft design process has inspired
the authors of this paper to develop an alternative strategy based
on well-established ideas from Decision Support Systems. The
proposed rule-based architecture sacrifices possibly elusive guarantees
of convergence for an attractive return in simplicity. The method
is demonstrated on analytical and aircraft design test cases and its
performance is compared to a number of classical distributed MDO
architectures.
Abstract: Big Data (BD) is associated with a new generation of technologies and architectures which can harness the value of extremely large volumes of very varied data through real-time processing and analysis. It involves changes in (1) data types, (2) accumulation speed, and (3) data volume. This paper presents the main concepts related to the BD paradigm and introduces architectures and technologies for BD and BD sets. The integration of BD with the Hadoop framework is also underlined. BD has attracted a lot of attention in the public sector thanks to newly emerging technologies that make network access widely available. The volume of different types of data has increased exponentially. Some applications of BD in the public sector in Romania are briefly presented.
Abstract: This paper is a survey of recent works that propose a baseband processor architecture for software-defined radio. A classification of the different approaches is proposed. The performance of each architecture is also discussed in order to clarify the suitable approaches that meet software-defined radio constraints.
Abstract: Fiber-Wireless (FiWi) networks are a promising candidate for future broadband access networks. These networks combine an optical network as the back end, where different passive optical network (PON) technologies are realized, and a wireless network as the front end, where different wireless technologies are adopted, e.g. LTE, WiMAX, Wi-Fi, and Wireless Mesh Networks (WMNs). The convergence of optical and wireless technologies requires designing architectures with robust, efficient, and effective bandwidth allocation schemes. Different bandwidth allocation algorithms have been proposed for FiWi networks, aiming to enhance the different segments of FiWi networks, including the wireless and optical subnetworks. In this survey, we focus on differentiating between bandwidth allocation algorithms according to the segment of the FiWi network they enhance. We classify these techniques into wireless, optical, and hybrid bandwidth allocation techniques.
Abstract: In this study, a quad-electrical-rotor-driven unmanned aerial vehicle system is designed and modeled using fundamental dynamic equations. After that, the mechanical, electronic, and control systems of the air vehicle are designed and implemented. Brushless motor speeds are altered via electronic speed controllers in order to achieve the desired controllability. The vehicle's fundamental Euler angles (i.e., roll, pitch, and yaw) are obtained via an AHRS sensor. These angles are provided as input to the control algorithm that runs on the soft processor of the electronic card, where the vehicle control algorithm is implemented. A controller is designed and tuned for each Euler angle. Finally, flight tests have been performed to observe and improve the flight characteristics.
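The abstract does not specify the controller structure used for each Euler angle; a minimal per-axis PID sketch in Python, with all gains, setpoints, and the time step purely hypothetical, could look like:

```python
class PID:
    """Simple PID controller for one Euler angle (gains are hypothetical)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One independent controller per Euler angle, as the abstract describes
controllers = {axis: PID(1.0, 0.1, 0.05) for axis in ("roll", "pitch", "yaw")}

# A positive roll reading against a zero setpoint yields a negative correction
correction = controllers["roll"].update(setpoint=0.0, measurement=0.1, dt=0.01)
```

In a real loop, each correction would be mixed into the four motor speed commands sent to the electronic speed controllers.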
Abstract: Web application architecture is important for achieving the desired performance of an application. Performance analysis studies are conducted to evaluate existing or planned systems. Web applications are used by hundreds of thousands of users simultaneously, which sometimes increases the risk of server failure in real-time operations. We use Coloured Petri Nets (CPNs), a very powerful tool for modelling the dynamic behaviour of a web application system. CPNs extend the vocabulary of ordinary Petri nets and add features that make them suitable for modelling large systems. The major focus of this work is the server side of web applications. The presented work focuses on modelling restructuring aspects, with a major focus on concurrency and architecture, using CPNs. It also focuses on determining the appropriate architecture for web and database servers given the number of concurrent users.
Abstract: An innovative approach to developing modified scaling-free CORDIC based two-parallel pipelined Multipath Delay Commutator (MDC) FFT and IFFT architectures for the radix-2^2 FFT algorithm is presented. Multipliers and adders are the most important data paths in FFT and IFFT architectures. Multipliers occupy a large area and consume more power. In order to optimize the area and power overhead, a modified scaling-free CORDIC based complex multiplier is utilized in the proposed design. In general, twiddle factor values are stored in a RAM block. In the proposed work, a modified scaling-free CORDIC based twiddle factor generator unit is used to generate the twiddle factors, and efficient switching units are used. In addition, the four-point FFT operations are performed without complex multiplication, which helps to reduce area and power in the last two stages of the pipelined architectures. The design proposed in this paper is based on the multipath delay commutator method. The proposed design can be extended to any radix-2^n based FFT/IFFT algorithm to improve the throughput. The work is synthesized with Synopsys Design Compiler using the TSMC 90-nm library. The proposed method proves to be better than the reference design in terms of area, throughput, and power consumption. A comparative analysis of the proposed design on the Xilinx FPGA platform is also discussed in the paper.
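The modified scaling-free variant is specific to this paper, but the underlying idea of generating a twiddle factor e^{-jθ} by CORDIC rotation can be illustrated with the classic rotation-mode algorithm; note the constant scaling step shown below is exactly what scaling-free variants eliminate:

```python
import math

def cordic_twiddle(theta, iterations=24):
    """Approximate the twiddle factor e^{-j*theta} as (cos theta, -sin theta)
    using classic rotation-mode CORDIC. The paper's modified scaling-free
    CORDIC avoids the explicit gain compensation K computed here."""
    # Gain compensation: K = prod 1/sqrt(1 + 2^(-2i))
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    # Iteratively rotate (1, 0) toward angle theta using shift-add steps
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return K * x, -K * y  # (cos theta, -sin theta)

c, s = cordic_twiddle(math.pi / 4)
```

Each iteration uses only shifts and adds in hardware, which is why CORDIC-based twiddle generation can replace both the RAM block and the complex multiplier.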
Abstract: In this work, we propose an algorithm, developed in the Python language, for the modeling of ordinary scalar Bessel beams and their discrete superpositions, and for the subsequent calculation of the optical forces exerted on dielectric spherical particles. The mathematical formalism, based on the generalized Lorenz-Mie theory, is implemented in Python because of its large number of free mathematical (such as SciPy and NumPy), data visualization (Matplotlib and PyJamas), and multiprocessing libraries. We also propose an approach, provided by synchronized Software as a Service (SaaS) in cloud computing, to develop a user interface embedded in a mobile application, thus providing users with the necessary means to easily enter the desired unknowns and parameters and see the graphical outcomes of the simulations right on their mobile devices. Initially proposed as a free Android-based application, such an app enables data post-processing in cloud-based architectures and the visualization of results, figures, and numerical tables.
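The paper's Lorenz-Mie force computation is far more involved than can be shown here, but the transverse profile of the zeroth-order scalar Bessel beam at its core is simple to sketch. The following dependency-free snippet (a stand-in for the scipy.special routines the paper would use; the radial wavenumber k_rho is a hypothetical value) evaluates J0 by quadrature:

```python
import math

def bessel_j0(x, samples=1000):
    """J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt, via the midpoint rule.
    In practice one would call scipy.special.jv(0, x) instead."""
    h = math.pi / samples
    total = sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(samples))
    return total * h / math.pi

def bessel_beam_intensity(rho, k_rho):
    """Transverse intensity |J0(k_rho * rho)|^2 of an ideal zeroth-order
    scalar Bessel beam; k_rho is an illustrative axicon parameter."""
    return bessel_j0(k_rho * rho) ** 2

# The on-axis intensity is maximal, since J0(0) = 1
peak = bessel_beam_intensity(0.0, k_rho=1.0e4)
```

Discrete superpositions, as in the paper, would sum such fields over several axicon angles with complex weights before squaring.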
Abstract: Wavelength Division Multiplexing (WDM) technology is the most promising technology for the proper utilization of the huge raw bandwidth provided by an optical fiber. One of the key problems in implementing an all-optical WDM network is packet contention. This problem can be solved by several different techniques. In the time-domain approach, packet contention can be reduced by incorporating Fiber Delay Lines (FDLs) as optical buffers in the switch architecture. Different types of buffering architectures are reported in the literature. In the present paper, a comparative performance analysis of the three most popular FDL architectures is presented in order to obtain the best contention resolution performance. The analysis is further extended to consider the effect of different fiber non-linearities on the network performance.
Abstract: This paper focuses on the questions raised through the work of Unit 5: ‘In/Out Crisis, emergent and adaptive’, an architectural research-based studio at [ARC] University of Nicosia. Students were asked to delve into state-of-the-art technologies in order to propose sustainable emergent and adaptive architectures and urbanities; the resulting unprecedented spatial conditions and atmospheres of the emergent new ways of living are deemed to be the ultimate aim of the investigation. Students explored a variety of sites and crisis conditions seen through their primary ingredient, identified as soil, water, or air, and their paired combinations. Within this methodology, crisis is seen as a mechanism for allowing the emergence of new and fascinating, ultimately sustainable, future cultures and cities by taking advantage of the primary materiality of the sites.
Abstract: This work is the first piece of a rather wide research activity, in collaboration with the Euro Mediterranean Center for Climate Changes, aimed at introducing scalable approaches in Ocean Circulation Models. We discuss the design and implementation of a parallel algorithm for solving the Variational Data Assimilation (DA) problem on Graphics Processing Units (GPUs). The algorithm is based on the fully scalable 3DVar DA model, previously proposed by the authors, which uses a Domain Decomposition approach (we refer to this model as the DD-DA model). We proceed with an incremental porting process consisting of three distinct stages: requirements and source code analysis, incremental development of CUDA kernels, and testing and optimization. Experiments confirm the theoretical performance analysis based on the so-called scale-up factor, demonstrating that the DD-DA model can be suitably mapped onto GPU architectures.
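The abstract does not detail the decomposition itself; the general idea behind a domain-decomposition approach such as DD-DA can be sketched as splitting the grid into overlapping subdomains that independent workers (e.g. GPU blocks) can process. The 1-D grid, subdomain count, and overlap width below are purely illustrative:

```python
def decompose(domain, n_sub, overlap=1):
    """Split a 1-D grid into overlapping subdomains (illustrative sketch of
    the domain-decomposition idea, not the paper's DD-DA implementation).
    The overlap cells let neighbouring subdomains exchange boundary data."""
    size = len(domain) // n_sub
    subs = []
    for i in range(n_sub):
        start = max(0, i * size - overlap)
        end = min(len(domain), (i + 1) * size + overlap)
        subs.append(domain[start:end])
    return subs

grid = list(range(12))
subdomains = decompose(grid, n_sub=3)  # three overlapping pieces covering the grid
```

In the DA setting, each subdomain would carry its own local 3DVar minimisation, with the overlap regions enforcing consistency between neighbours.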
Abstract: Scheduling and mapping of tasks on a set of processors is considered a critical problem in parallel and distributed computing systems. This paper deals with the problem of dynamic scheduling on a special type of multiprocessor architecture known as the Linear Crossed Cube (LCQ) network. This proposed multiprocessor is a hybrid network which combines the features of both linear and cube-based architectures. Two standard dynamic scheduling schemes, namely Minimum Distance Scheduling (MDS) and Two Round Scheduling (TRS), are implemented on the LCQ network. Parallel tasks are mapped, and the load imbalance is evaluated on different sets of processors in the LCQ network. The simulation results are evaluated, and an effort is made, by means of a thorough analysis of the results, to obtain the best solution for the given network in terms of the residual load imbalance and execution time. Other performance metrics, such as speedup and efficiency, are also evaluated with the given dynamic algorithms.
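The abstract names MDS but not its mechanics; a common reading of minimum-distance scheduling is that a task stays on its arrival node unless a directly connected (minimum-distance) processor is strictly less loaded. The toy topology and loads below are illustrative, not the LCQ network of the paper:

```python
def minimum_distance_schedule(loads, adjacency, source):
    """Hedged sketch of a Minimum Distance Scheduling step: place the task
    on the least-loaded processor among `source` and its direct neighbours.
    `loads` maps node -> task count; `adjacency` maps node -> neighbour list."""
    best = source
    for neighbor in adjacency[source]:
        if loads[neighbor] < loads[best]:
            best = neighbor
    loads[best] += 1  # commit the task to the chosen processor
    return best

loads = {0: 3, 1: 1, 2: 2, 3: 0}
adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
target = minimum_distance_schedule(loads, adjacency, source=0)
```

Repeating this step as tasks arrive yields the kind of load-imbalance trace the paper evaluates across processor sets.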
Abstract: Flash Floods, together with landslides, are a common
natural threat for people living in mountainous regions and foothills.
One way to deal with this constant menace is the use of Early
Warning Systems, which have become a very important mitigation
strategy for natural disasters.
In this work we present our proposal for a pilot Flash Flood Early Warning System for Santiago, Chile, the first stage of a more ambitious project that, in a future stage, will also include early warning of landslides. To give context for our approach, we first analyze three existing Flash Flood Early Warning Systems, focusing on their general architectures. We then present our proposed system, with the main focus on the decision support system, which integrates empirical models and fuzzy expert systems to achieve reliable risk estimations.
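A fuzzy expert system of the kind mentioned above combines membership functions through rules; a single illustrative rule might be "risk is high when rainfall is heavy AND soil is saturated". All membership breakpoints below are hypothetical, not those of the Santiago pilot system:

```python
def triangular(x, a, b, c):
    """Triangular membership function with breakpoints a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def flood_risk(rain_mm_per_h, soil_saturation):
    """Hedged sketch of one fuzzy rule: min() is used as the AND t-norm.
    Thresholds are illustrative, not calibrated values."""
    heavy_rain = triangular(rain_mm_per_h, 10.0, 30.0, 50.0)
    saturated = triangular(soil_saturation, 0.5, 0.9, 1.3)
    return min(heavy_rain, saturated)

# Both antecedents fully fire, so the rule output is 1.0
risk = flood_risk(rain_mm_per_h=30.0, soil_saturation=0.9)
```

A full system would aggregate many such rules and defuzzify the result, then blend it with the empirical models' estimates.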
Abstract: In this paper, the problem of fault detection and isolation in the attitude control subsystem of spacecraft formation flying is considered. In order to design the fault detection method, an extended Kalman filter, a nonlinear stochastic state estimation method, is utilized. Three fault detection architectures, namely centralized, decentralized, and semi-decentralized, are designed based on extended Kalman filters. Moreover, residual generation and threshold selection techniques are proposed for these architectures.
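The residual-and-threshold idea common to all three architectures can be sketched compactly: the filter's innovation (residual) stays small under nominal operation, and a fault is declared when its norm crosses a threshold. The k-sigma threshold below is one common choice, given for illustration; the paper derives its residuals from the extended Kalman filters themselves:

```python
import math

def detect_fault(residuals, sigma, k=3.0):
    """Flag a fault when the residual norm exceeds a k-sigma threshold.
    `residuals` is the innovation vector from a state estimator and `sigma`
    its nominal noise level; both are given directly here for illustration."""
    threshold = k * sigma
    norm = math.sqrt(sum(r * r for r in residuals))
    return norm > threshold

nominal = detect_fault([0.1, 0.2], sigma=0.5)   # small residual: no alarm
faulty = detect_fault([2.0, 2.0], sigma=0.5)    # large residual: alarm
```

In the centralized architecture one filter sees all spacecraft, while the (semi-)decentralized variants run such checks per spacecraft on local or partially shared measurements.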
Abstract: The Cone Penetration Test (CPT) is a common in-situ test which generally investigates a much greater volume of soil, more quickly, than is possible with sampling and laboratory tests. Therefore, it has the potential to realize both cost savings and rapid, continuous assessment of soil properties. The principal objective of this paper is to demonstrate the feasibility and efficiency of using artificial neural networks (ANNs) to predict the soil angle of internal friction (Φ) and the soil modulus of elasticity (E) from CPT results, considering the uncertainties and non-linearities of the soil. In addition, ANNs are used to study the influence of different parameters and to recommend which parameters should be included as inputs to improve the prediction. Neural networks discover relationships in the input data sets through the iterative presentation of the data and the intrinsic mapping characteristics of neural topologies. The General Regression Neural Network (GRNN), one of the most powerful neural network architectures, is utilized in this study. A large amount of field and experimental data, including CPT results, plate load tests, direct shear box tests, grain size distributions, and calculated overburden pressures, was obtained from a large project in the United Arab Emirates. This data was used for the training and validation of the neural network. A comparison was made between the results obtained from the ANN approach and some common traditional correlations that predict Φ and E from CPT results, with respect to the actual results of the collected data. The results show that the ANN is a very powerful tool. Very good agreement was obtained between the results estimated by the ANN and the actual measured results, in comparison with other correlations available in the literature. The study recommends some easily available parameters that should be included in the estimation of the soil properties to improve the prediction models. It is shown that the use of the friction ratio in the estimation of Φ and the use of the fines content in the estimation of E considerably improve the prediction models.
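A GRNN prediction (Specht's formulation) is a Gaussian-kernel weighted average of the training targets, which makes its core easy to sketch. The training pairs and smoothing parameter below are toy values, not the paper's CPT dataset:

```python
import math

def grnn_predict(x, train_x, train_y, sigma=1.0):
    """General Regression Neural Network prediction for a scalar input:
    each training sample contributes its target value, weighted by a
    Gaussian kernel centred on that sample (sigma is the spread)."""
    weights = [math.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Toy example: predict a friction angle from a single CPT-derived feature
train_x = [1.0, 2.0, 3.0]
train_y = [30.0, 33.0, 36.0]
phi = grnn_predict(2.0, train_x, train_y, sigma=0.5)  # interpolates near 33
```

In the study's setting, x would be a vector of CPT readings (e.g. tip resistance and friction ratio), and sigma would be tuned on the validation set.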
Abstract: The practice of freeing monuments from subsequent additions runs through the entire history of conservation and is traditionally connected to the aim of valorisation, both for cultural and educational purposes and, more recently, for touristic exploitation. Defence heritage has been widely affected by these cultural and technical trends, from philological restoration to critical innovation. A renewed critical analysis of Italian episodes, and in particular the Sardinian case of the area of San Pancrazio in Cagliari, constitutes an important lesson about the limits of this practice and the uncertainty of its results, towards the definition of a sustainable good practice in the restoration of military architectures.