Abstract: An aggregate signature scheme can aggregate n signatures on n distinct messages from n distinct signers into a single signature. Thus, n verification equations can be reduced to one, which makes aggregate signatures well suited to Mobile Ad hoc Networks (MANETs). In this paper, we propose an efficient ID-based aggregate signature scheme with constant pairing computations. Compared with existing ID-based aggregate signature schemes, this scheme greatly improves the efficiency of signature communication and verification. In addition, we apply our ID-based aggregate signature to an authenticated routing protocol to present a secure routing scheme. Our scheme not only provides sound authentication and a secure routing protocol in ad hoc networks, but also suits the nature of MANETs.
Abstract: The construction industry is a very dynamic field. Every day, new technologies and methods are developed to speed up the process and increase its efficiency. Hence, a project that uses fewer resources is more efficient.
This paper examines the recycling of concrete construction and demolition (C&D) waste to reuse it as aggregates in on-site applications for construction projects in Egypt and possibly in the Middle East. The study focuses on a stationary plant setting. The machinery set-up used in the plant is analyzed technically and financially.
The findings are gathered and grouped to obtain a comprehensive cost-benefit financial model to demonstrate the feasibility of establishing and operating a concrete recycling plant. Furthermore, a detailed business plan, including the timeline and organizational hierarchy, is proposed.
Abstract: Pervious concrete is a green alternative to conventional pavements with minimal fine aggregate and a high void content. Pervious concrete allows water to infiltrate through the pavement, thereby reducing the runoff and the requirement for stormwater management systems.
Seashell By-Products (SBP) are produced in large quantities in France and are considered waste. This work investigated the use of SBP in pervious concrete to produce an even more environmentally friendly product, Pervious Concrete Pavers.
The research methodology involved substituting the coarse aggregate in the pervious concrete mix design with 20%, 40% and 60% SBP. The testing showed that pervious concrete containing less than 40% SBP had strength, permeability and void content comparable to pervious concrete containing only natural aggregate. The samples that contained 40% SBP or more showed a significant loss in strength and an increase in permeability and void content relative to the control mix. On the basis of the results of this research, it was found that natural aggregate can be substituted by SBP without affecting the delicate balance of a pervious concrete mix. Additionally, it is recommended that the optimum replacement percentage for SBP in pervious concrete is 40% direct replacement of natural coarse aggregate, which maintains the structural performance and drainage capabilities of the pervious concrete.
Abstract: Most empirical studies have analyzed how liquidity risks faced by individual institutions turn into systemic risk. The recent banking crisis has highlighted the importance of grasping and controlling systemic risk, and the willingness of central banks to ease their monetary policies to save defaulting or illiquid banks. This last point suggests that banks may pay less attention to liquidity risk, which, in turn, can become a new important channel of loss. Financial regulation focuses on the most important and "systemic" banks in the global network. However, to quantify the expected loss associated with liquidity risk, it is worthwhile to analyze the sensitivity of the various elements of the global bank network to this channel. A small bank is not considered potentially systemic; however, the interaction of many small banks together can become a systemic element. This paper analyzes the impact of the interaction of medium and small banks on a set of banks considered the core of the network. The proposed method uses the structure of an agent-based model in a two-class environment. In the first class, data from the actual balance sheets of 22 large and systemic banks (such as BNP Paribas or Barclays) are collected. In the second, to model a network as close as possible to the actual interbank market, 578 fictitious banks smaller than those in the first class are split into two groups of small and medium banks. All banks are active on the European interbank network and have deposit and market activity. A simulation of 12 three-month periods, representing a medium-term interval of three years, is projected. In each period, there is a set of behavioral descriptions: repayment of matured loans, liquidation of deposits, income from securities, collection of new deposits, new demands for credit, and securities sales. The last two actions are part of the refunding process developed in this paper.
To strengthen the reliability of the proposed model, the dynamics of the random parameters are managed with stochastic equations; the rate variations are generated by the Vasicek model. The Central Bank is considered the lender of last resort, which allows banks to borrow at the REPO rate, and some conditions for ejecting banks from the system are introduced.
A liquidity crunch due to an exogenous crisis is simulated in the first class, and the loss impact on the other bank classes is analyzed through aggregate values representing the aggregate of loans and/or the aggregate of borrowing between classes. It is mainly shown that the three groups of the European interbank network do not have the same response, and that intermediate banks are the most sensitive to liquidity risk.
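The Vasicek rate dynamics used above can be sketched as follows; this is a minimal illustration with made-up parameter values, not the calibration used in the paper.

```python
import random

def vasicek_path(r0, a, b, sigma, dt, n_steps, seed=None):
    """Simulate one interest-rate path under the Vasicek model,
    dr = a*(b - r)*dt + sigma*dW, via Euler discretization."""
    rng = random.Random(seed)
    rates = [r0]
    r = r0
    for _ in range(n_steps):
        # mean-reverting drift toward b plus Gaussian shock
        r = r + a * (b - r) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        rates.append(r)
    return rates

# Twelve three-month periods over the three-year horizon, as in the simulation.
path = vasicek_path(r0=0.02, a=0.5, b=0.03, sigma=0.01, dt=0.25, n_steps=12, seed=7)
```

With `sigma = 0` the path drifts deterministically toward the long-run mean `b`, which is the mean-reversion property that makes the model natural for interbank rates.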
Abstract: A number of software quality measurement systems have been implemented over the past few years, but none of them focuses on the telecommunication industry. The software quality measurement system for the telecommunication industry presented here calculates the quality value of measured software entirely within a telecommunication context. Before designing the system, quality factors, quality attributes and quality metrics were identified based on a literature review and a survey. Then, using the identified quality factors, quality attributes and quality metrics, a quality model for the telecommunication industry was constructed. Each identified quality metric has its own formula. The quality value of the system is measured based on the quality metrics and aggregated by referring to the quality model. The system classifies the quality level of the software based on the Net Satisfaction Index (NSI). The system was designed using an object-oriented approach in a web-based environment. The existence of a software quality measurement system is thus important to both developers and users for producing high-quality software products for the telecommunication industry.
Abstract: At highly congested reinforcement regions, which are common at beam-column joints, the clear spacing between parallel bars becomes less than the maximum normal aggregate size (20 mm), a situation that has not been addressed in any design code or specification. Limited clear spacing between parallel bars (hereinafter, thin cover) is one of the causes affecting anchorage performance. In this study, an experimental investigation was carried out to understand the anchorage performance of reinforcement in Self-Compacting Concrete (SCC) and Normal Concrete (NC) at highly congested regions under uni-axial tensile loading. The column bar was pulled out, whereas the beam bars were offset from the column reinforcement, creating a thin cover as per site conditions. Two different sizes of coarse aggregate were used for NC (20 mm and 10 mm). Strain gauges were also installed along the bar in some specimens to understand the internal stress mechanism. Test results reveal that anchorage performance is affected at highly congested reinforcement regions in NC with a maximum aggregate size of 20 mm, whereas SCC and small-aggregate (10 mm) NC give better structural performance.
Abstract: Microscopic emission and fuel consumption models
have been widely recognized as effective tools to quantify real
traffic emission and energy consumption when they are applied with
microscopic traffic simulation models. This paper presents a
framework for developing the Microscopic Emission (HC, CO, NOx,
and CO2) and Fuel consumption (MEF) models for light-duty
vehicles. The variable of composite acceleration is introduced into
the MEF model with the purpose of capturing the effects of historical
accelerations interacting with current speed on emission and fuel
consumption. The MEF model is calibrated by multivariate
least-squares method for two types of light-duty vehicle using
on-board data collected in Beijing, China by a Portable Emission
Measurement System (PEMS). The instantaneous validation results show that the MEF model performs better, with a lower Mean Absolute Percentage Error (MAPE), than two other models. Moreover, the aggregate validation results indicate that the MEF model produces
reasonable estimations compared to actual measurements with
prediction errors within 12%, 10%, 19%, and 9% for HC, CO, NOx
emissions and fuel consumption, respectively.
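The MAPE criterion used in the validation above is a standard metric; a minimal sketch, with made-up measured and predicted emission values for illustration:

```python
def mape(measured, predicted):
    """Mean Absolute Percentage Error (%) between measured and predicted series."""
    if len(measured) != len(predicted) or not measured:
        raise ValueError("series must be non-empty and of equal length")
    return 100.0 * sum(abs((m - p) / m) for m, p in zip(measured, predicted)) / len(measured)

# Hypothetical CO2 emission rates (g/s): per-sample relative errors are averaged.
actual = [2.0, 4.0, 5.0]
estimated = [1.8, 4.4, 5.0]
print(round(mape(actual, estimated), 2))  # 6.67
```

A prediction error "within 9%" for fuel consumption, as reported above, corresponds to a MAPE value of at most 9.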
Abstract: This paper presents an analysis of IP traffic. The traffic
was collected from the network of Suranaree University of
Technology using the software based on the Simple Network
Management Protocol (SNMP). In particular, we analyze the
distribution of the aggregated traffic during the hours of peak load
and light load. Traffic profiles, including the parameters describing the traffic distributions, were derived. From the statistical analysis applying three different methods, namely the Kolmogorov-Smirnov test, the Anderson-Darling test, and the Chi-Squared test, we found that the
IP traffic distribution is a non-normal distribution and the
distributions during the peak load and the light load are different. The
experimental study and analysis show high uncertainty of the IP
traffic.
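A minimal sketch of the kind of normality check described above, assuming SciPy and a synthetic heavy-tailed traffic sample (the study itself used measured SNMP data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical aggregated traffic samples (Mbit/s); heavy-tailed, as IP traffic often is.
traffic = rng.lognormal(mean=3.0, sigma=0.8, size=500)

# Kolmogorov-Smirnov test against a normal distribution fitted to the sample.
mu, sd = traffic.mean(), traffic.std(ddof=1)
ks_stat, ks_p = stats.kstest(traffic, "norm", args=(mu, sd))
print(ks_p < 0.05)  # True: normality is rejected for this skewed sample
```

SciPy's `stats.anderson` and `stats.chisquare` cover the other two tests mentioned; agreement across all three strengthens the non-normality conclusion.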
Abstract: This work proposes an approach to address automatic
text summarization. This approach is a trainable summarizer, which
takes into account several features, including sentence position,
positive keyword, negative keyword, sentence centrality, sentence
resemblance to the title, sentence inclusion of named entities, sentence
inclusion of numerical data, sentence relative length, Bushy path of
the sentence and aggregated similarity for each sentence to generate
summaries. First, we investigate the effect of each sentence feature on the summarization task. Then we use the score functions of all features to train genetic algorithm (GA) and mathematical regression (MR) models to obtain a suitable combination of feature weights. The
proposed approach performance is measured at several compression
rates on a data corpus composed of 100 English religious articles.
The results of the proposed approach are promising.
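The feature-combination step can be illustrated with a minimal sketch; the feature names, scores and weights below are hypothetical (in the paper, the weights are learned by GA or regression over the ten listed features):

```python
def sentence_score(features, weights):
    """Linear combination of normalized sentence feature scores.
    Both arguments map feature names to values in [0, 1]."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical learned weights for four of the features.
weights = {"position": 0.3, "centrality": 0.2, "title_sim": 0.3, "length": 0.2}
s1 = {"position": 1.0, "centrality": 0.4, "title_sim": 0.7, "length": 0.5}
s2 = {"position": 0.2, "centrality": 0.9, "title_sim": 0.1, "length": 0.6}

# Rank sentences by combined score; the top sentences form the summary.
ranked = sorted([("s1", s1), ("s2", s2)],
                key=lambda kv: sentence_score(kv[1], weights), reverse=True)
print(ranked[0][0])  # s1
```

Varying the compression rate then simply changes how many top-ranked sentences are kept.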
Abstract: In this study, ZnO nanorods and ZnO ultrafine particles were synthesized by the gel-casting method. The synthesized ZnO powder has a hexagonal zincite structure. The ZnO aggregates with rod-like morphology are typically 1.4 μm in length and 120 nm in diameter, and consist of many small nanocrystals with diameters of 10 nm. Longer wires connected by many hexahedral ZnO nanocrystals were obtained after calcination at temperatures above 600 °C. The crystalline structures and morphologies of the powder were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The results show that preparation conditions such as H2O concentration, calcination time and calcination temperature strongly influence the properties of the nano ZnO powders: an increase in calcination temperature results in an increase of the grain size, and longer calcination times at high temperature also make the grains larger. The presence of excess water prevents nano grains from developing a rod-like morphology. We obtained the smallest grain size of ZnO powder by controlling the process conditions. Finally, under suitable conditions, a novel nanostructure, namely bi-rod-like ZnO nanorods, was found, which is different from known ZnO nanostructures.
Abstract: Granular computing deals with the representation of information in the form of aggregates and with related methods for transformation and analysis in problem solving. A granulation scheme based on clustering and Rough Set Theory, with a focus on structured conceptualization of information, is presented in this paper. Experiments with the proposed method on four labeled datasets exhibit good results with respect to the classification problem. The proposed granulation technique is semi-supervised, incorporating both global and local information during granulation. To represent the results of the attribute-oriented granulation, a tree structure is also proposed.
Abstract: A wireless sensor network (WSN) consists of many sensor nodes that are deployed in unattended environments, such as military sites, in order to collect important information. It is very important to implement a secure protocol that can prevent the forwarding of forged data and the modification of aggregated data, and that has low delay and low communication, computing and storage overhead. This paper presents a new protocol for concealed data aggregation (CDA). In this protocol, the network is divided into virtual cells, and the nodes within each cell produce a shared key to send and receive concealed data among themselves. Because data aggregation in each cell is performed locally and a secure authentication mechanism is implemented, the data aggregation delay is very low and the production of false data in the network by malicious nodes is not possible. To evaluate the performance of our proposed protocol, we present computational models that show its performance and low overhead.
Abstract: In this paper, we investigate multihop polling and data gathering schemes in layered sensor networks in order to extend the lifetime of the networks. A network consists of three layers. The lowest layer contains sensors. The middle layer contains so-called super nodes with higher computational power, a larger energy supply and a longer transmission range than sensor nodes. The top layer contains a sink node. A node in each layer controls a number of nodes in the layer below by a polling mechanism to gather data. We present four types of data gathering schemes, in which intermediate nodes do not queue data packets, queue a single packet, queue multiple packets, or aggregate data, to see which data gathering scheme is more energy efficient for multihop polling in layered sensor networks.
Abstract: During the process of compaction in Hot-Mix Asphalt
(HMA) mixtures, the distance between aggregate particles decreases
as they come together and eliminate air-voids. By measuring the
inter-particle distances in a cut-section of a HMA sample the degree
of compaction can be estimated. For this, a calibration curve is generated by a computer simulation technique when the gradation and
asphalt content of the HMA mixture are known. A two-dimensional
cross section of HMA specimen was simulated using the mixture
design information (gradation, asphalt content and air-void content).
Nearest neighbor distance methods such as Delaunay triangulation
were used to study the changes in inter-particle distance and area
distribution during the process of compaction in HMA. Such
computer simulations enable several hundred repetitions in a short period of time, without the necessity of compacting and analyzing laboratory specimens, in order to obtain good statistics on
the parameters defined. The distributions for the statistical
parameters based on computer simulations showed similar trends as
those of laboratory specimens.
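A minimal sketch of measuring inter-particle distances via Delaunay triangulation, assuming SciPy and randomly placed particle centroids rather than a simulated HMA cross section:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)
# Hypothetical 2-D aggregate particle centroids in a cut section (mm).
points = rng.uniform(0.0, 100.0, size=(200, 2))

tri = Delaunay(points)

# Collect unique Delaunay edges; edge lengths serve as inter-particle distances.
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

dists = [float(np.linalg.norm(points[a] - points[b])) for a, b in edges]
print(len(dists), round(float(np.mean(dists)), 1))
```

As compaction proceeds, the distribution of these edge lengths shifts toward smaller values, which is the basis of the calibration curve described above.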
Abstract: This study reports the results of a meta-analytic path analysis of an e-learning acceptance model with k = 27 studies. Databases searched included the Information Sciences Institute (ISI) website. Variables recorded included perceived usefulness, perceived ease of use, attitude toward behavior, and behavioral intention to use e-learning. A correlation matrix of these variables was derived from the meta-analytic data and then analyzed using structural path analysis to test the fitness of the e-learning acceptance model to the observed aggregated data. Results showed the revised hypothesized model to be a reasonably good fit to the aggregated data. Furthermore, discussions and implications are given in this article.
Abstract: Wireless sensor networks consist of hundreds or thousands of small sensors that have limited resources. Energy-efficient techniques are the main issue for wireless sensor networks. This paper proposes an energy-efficient agent-based framework for wireless sensor networks. We adopt biologically inspired approaches for wireless sensor networks. Agents operate autonomously, with their behavior policies acting as genes. An agent aggregates other agents to reduce communication and gives high priority to nodes that have enough energy to communicate. Agent behavior policies are optimized by genetic operations at the base station. Simulation results show that our proposed framework increases the lifetime of each node. Each agent selects a next-hop node using neighbor information and behavior policies. Our proposed framework provides self-healing, self-configuration and self-optimization properties to sensor nodes.
Abstract: We have studied the migration of a charged permeable aggregate in an electrolyte under the influence of an axial electric field and a pressure gradient. The migration of the positively charged aggregate leads to a deformation of the anionic cloud around it. The hydrodynamics of the aggregate is governed by the interaction of electroosmotic flow in and around the particle, hydrodynamic friction, and the electric force experienced by the aggregate. We have computed the non-linear Nernst-Planck equations coupled with the Darcy-Brinkman extended Navier-Stokes equations and the Poisson equation for the electric field through a finite volume method. The permeability of the aggregate enables counterion penetration. The penetration of counterions depends on the volume charge density of the aggregate and the ionic concentration of the electrolyte at a fixed field strength. The retardation effect due to double layer polarization increases the drag force compared to an uncharged aggregate. An increase in migration speed above the electrophoretic velocity of the aggregate produces further asymmetry in the charge cloud and reduces the electric body force exerted on the particle. The permeability of the particle has relatively little influence on the electric body force when the double layer is relatively thin. The impact of the key electrokinetic parameters on the hydrodynamics of the aggregate is analyzed.
Abstract: The automatic construction of large, high-resolution
image vistas (mosaics) is an active area of research in the fields of
photogrammetry [1,2], computer vision [1,4], medical image
processing [4], computer graphics [3] and biometrics [8]. Image
stitching is one of the possible options to get image mosaics. Vista
Creation in image processing is used to construct an image with a larger field of view than could be obtained with a single photograph. It refers to transforming and stitching multiple images
into a new aggregate image without any visible seam or distortion in
the overlapping areas. Vista creation process aligns two partial
images over each other and blends them together. Image mosaics
allow one to compensate for differences in viewing geometry. Thus
they can be used to simplify tasks by simulating the condition in
which the scene is viewed from a fixed position with single camera.
While obtaining partial images, geometric anomalies such as rotation and scaling are bound to occur. To nullify the effect of rotation of partial images on the process of vista creation, we propose a rotation-invariant vista creation algorithm in this paper. Rotation of partial image parts in the proposed method of vista creation may introduce missing regions in the vista. To correct this error, that is, to fill the missing regions, we have used an image inpainting method on the created vista. This missing view regeneration method also
overcomes the problem of missing view [31] in vista due to cropping,
irregular boundaries of partial image parts and errors in digitization
[35]. The method of missing view regeneration generates the missing
view of vista using the information present in vista itself.
Abstract: We study the problem of decision making with Dempster-Shafer belief structure. We analyze the previous work developed by Yager about using the ordered weighted averaging (OWA) operator in the aggregation of the Dempster-Shafer decision process. We discuss the possibility of aggregating with an ascending order in the OWA operator for the cases where the smallest value is the best result. We suggest the introduction of the ordered weighted geometric (OWG) operator in the Dempster-Shafer framework. In this case, we also discuss the possibility of aggregating with an ascending order and we find that it is completely necessary as the OWG operator cannot aggregate negative numbers. Finally, we give an illustrative example where we can see the different results obtained by using the OWA, the Ascending OWA (AOWA), the OWG and the Ascending OWG (AOWG) operator.
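The aggregation operators discussed above can be sketched as follows; the payoff values and weights are illustrative only:

```python
def owa(values, weights):
    """Ordered weighted averaging: weights apply to values sorted descending."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

def aowa(values, weights):
    """Ascending OWA: weights apply to values sorted ascending."""
    return sum(w * v for w, v in zip(weights, sorted(values)))

def owg(values, weights):
    """Ordered weighted geometric operator; requires positive arguments,
    which is why an ascending variant is needed for minimization problems."""
    if any(v <= 0 for v in values):
        raise ValueError("OWG cannot aggregate non-positive numbers")
    result = 1.0
    for w, v in zip(weights, sorted(values, reverse=True)):
        result *= v ** w
    return result

payoffs = [60.0, 30.0, 50.0]
w = [0.5, 0.3, 0.2]          # weights sum to 1
print(owa(payoffs, w))   # 0.5*60 + 0.3*50 + 0.2*30 = 51.0
print(aowa(payoffs, w))  # 0.5*30 + 0.3*50 + 0.2*60 = 42.0
```

When the smallest value is the best result (e.g. costs), the ascending variants reorder the arguments so that the largest weight still falls on the most preferred value.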
Abstract: The aim of this study was to examine the dynamics of the functional composition of a non-flooded Amazonian forest in response to drought stress, in terms of diameter growth, recruitment and mortality. The survey was carried out in the continuous forest of the Biological Dynamics of Forest Fragments Project, 90 km outside the city of Manaus, state of Amazonas, Brazil. All stems >10 cm dbh were identified to species level and monitored in 18 one-hectare permanent sample plots from 1981 to 2004. For statistical analysis, all species were aggregated into three ecological guilds. Two distinct drought events occurred, in 1983 and 1997. Results showed that early successional species performed better than later successional ones. The response was significant for both events, but it was more pronounced for the 1997 event, possibly because that event occurred in the middle of the dry period rather than the wet period, as the 1983 one did.