Abstract: This paper investigates the impact of ceiling height and window head height variation on daylighting inside an architectural teaching studio with a full-width window. In architectural education, the studio is used more than a normal classroom in most credit hours; therefore, the window position and size and the dimensions of the studio have a direct influence on daylighting levels. Daylighting design is a critical factor that improves student learning, concentration and behavior, and it also reduces energy consumption. The analysis methodology uses Radiance within the IES software under the overcast and cloudy sky conditions of Malaysia. It has been established that the daylighting performance of an architecture studio can be enhanced by changing the ceiling height and window level, because different ceiling heights and window head heights contribute to different ranges of daylight levels.
Abstract: Graph coloring is an important problem in computer
science and many algorithms are known for obtaining reasonably
good solutions in polynomial time. One method of comparing
different algorithms is to test them on a set of standard graphs where
the optimal solution is already known. This investigation analyzes a
set of 50 well-known graph coloring instances according to a set of
complexity measures. These instances come from a variety of
sources: some represent actual applications of graph coloring
(register allocation), and others (Mycielski and Leighton graphs)
are theoretically designed to be difficult to solve. The size of the
graphs ranges from a low of 11 vertices to a high of
864 vertices. The method used to solve the coloring problem was
the square of the adjacency (i.e., correlation) matrix. The results
show that the most difficult graphs to solve were the Leighton and the
queen graphs. Complexity measures such as density, mobility,
deviation from uniform color class size and number of block
diagonal zeros are calculated for each graph. The results showed that
the most difficult problems have low mobility (in the range of 0.2-0.5)
and relatively little deviation from uniform color class size.
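As a minimal sketch of two quantities used in this abstract, the fragment below builds the adjacency matrix of a tiny hypothetical graph (a 4-cycle, not one of the 50 instances), computes its density, and squares it; entry (i, j) of the squared matrix counts length-2 walks, the information the adjacency-square method works from.

```python
import numpy as np

# Tiny illustrative graph: a 4-cycle (not one of the paper's instances).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1          # undirected adjacency matrix

# Density: fraction of the possible edges that are present.
density = A.sum() / (n * (n - 1))  # A.sum() counts each edge twice

# Square of the adjacency matrix: (A @ A)[i, j] is the number of
# length-2 walks between vertices i and j.
A2 = A @ A
```

For the 4-cycle the density is 2/3, each diagonal entry of the square equals the vertex degree, and opposite vertices are connected by two length-2 walks.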
Abstract: This research evaluated the technical feasibility of
making single-layer experimental particleboard panels from the waste
of bamboo (Dendrocalamus asper Backer) that is converted into
strips for making laminated bamboo furniture. The variable
factors were density (600, 700 and 800 kg/m3) and conditioning
temperature (25, 40 and 55 °C). The experimental panels were tested for
their physical and mechanical properties, including modulus of
elasticity (MOE), modulus of rupture (MOR), internal bonding
strength (IB), screw holding strength (SH) and thickness swelling,
according to the procedures defined by the Japanese Industrial
Standard (JIS). The mechanical test results showed that the MOR,
MOE and IB values did not meet the set criteria, except for the
MOR values at a density of 700 kg/m3 at 25 °C and at a density
of 800 kg/m3 at 25 and 40 °C, and the IB values at a density of 600
kg/m3 at 40 °C and at a density of 800 kg/m3 at 55 °C. The SH
values met the set standard, except at a density of 600 kg/m3
at 40 and 55 °C. In conclusion, bamboo waste, a valuable
renewable biomass, can be used to manufacture boards.
Abstract: IEEE designed the 802.11i protocol to address the
security issues in wireless local area networks. Formal analysis is
important to ensure that the protocols work properly without having
to resort to tedious testing and debugging which can only show the
presence of errors, never their absence. In this paper, we present
the formal verification of an abstract protocol model of 802.11i.
We translate the 802.11i protocol into the Strand Space Model and
then prove the authentication property of the resulting model using
the Strand Space formalism. The intruder in our model is imbued
with powerful capabilities, and the repercussions of possible attacks are
evaluated. Our analysis proves that the authentication of 802.11i is
not compromised in the presented model. We further demonstrate
how changes in our model will yield a successful man-in-the-middle
attack.
Abstract: Lately, interest has grown greatly in the use of
RFID in unprecedented applications. This is evident in the adoption
of RFID capabilities by major software companies such as Microsoft,
IBM, and Oracle in their major software products. For example,
the Microsoft SharePoint 2010 workflow is now fully compatible with
RFID platforms. In addition, Microsoft BizTalk Server is also capable
of acquiring data from RFID sensors. This will lead to applications
that require a high bit rate, long range and multimedia content.
Higher frequencies of operation have been designated for
RFID tags, among them 2.45 and 5.8 GHz. A higher
frequency means longer range and a higher bit rate, but the drawback
is greater cost. In this paper we present a single-layer, low-profile
patch antenna operating at 5.8 GHz with a purely resistive input
impedance of 50 Ω and close-to-directive radiation. We also propose
a modification to the design in order to improve the operating
bandwidth from 8.7 to 13.8.
Abstract: The nozzle is the main part of various spinning systems
such as the air-jet and Murata air vortex systems. Recently, many
researchers have worked on the use of nozzles in other spinning
systems, such as conventional ring and compact spinning; in
these applications, the primary purpose is to improve yarn quality. In
the present study, yarns were produced with two different nozzle
types and the changes in yarn properties were determined. In order to
explain the effect of the nozzle, the airflow structure in the nozzle was
modelled and the airflow variables were determined. For the numerical
simulation, the ANSYS 12.1 package and its Fluid Flow (CFX)
analysis method were used. In contrast to the literature, the Shear
Stress Transport (SST) turbulence model was preferred. In addition,
the air pressure at the nozzle inlet was measured with an electronic
mass flow meter, and these values were used in the simulation of the
airflow. Finally, the yarn was modelled and the region through which
the yarn passes was included in the numerical analysis.
Abstract: Steel plate shear walls (SPSWs) in buildings are
known to be an effective means for resisting lateral forces. By using
unstiffened walls and allowing them to buckle, their energy
absorption capacity will increase significantly due to the post-buckling
capacity. The post-buckling tension field action of SPSWs
can provide substantial strength, stiffness and ductility. This paper
presents the Finite Element Analysis of low yield point (LYP) steel
shear walls. In this shear wall system, the LYP steel plate is used for
the steel panel and conventional structural steel is used for boundary
frames. A series of nonlinear cyclic analyses were carried out to
obtain the stiffness, strength, deformation capacity, and energy
dissipation capacity of the LYP steel shear wall. The effect of the
width-to-thickness ratio of the steel plate on buckling behavior and
energy dissipation capacity was studied. Good energy dissipation and
deformation capacities were obtained for all models.
Abstract: Scalability poses a severe threat to the existing
DRAM technology. The capacitors that are used for storing and
sensing charge in DRAM generally cannot be scaled beyond 42 nm,
because the capacitors must be sufficiently large for reliable
sensing and charge storage. This leaves DRAM memory
scaling in jeopardy, as charge sensing and storage mechanisms
become extremely difficult. In this paper we provide an overview of
the potential and the possibilities of using Phase Change Memory
(PCM) as an alternative for the existing DRAM technology. The
main challenges encountered in using PCM are its limited
endurance, high access latencies, and higher dynamic energy
consumption than that of conventional DRAM. We then provide
an overview of various methods that can be employed to
overcome these drawbacks. Hybrid memories involving both PCM
and DRAM can be used to achieve good tradeoffs between access
latency and storage density. We conclude by presenting the results
of these methods, which make PCM a potential replacement for the
current DRAM technology.
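The latency tradeoff of such a DRAM/PCM hybrid can be sketched with a simple weighted-average model; the read latencies below are illustrative placeholders, not measurements from any specific device.

```python
# Illustrative device read latencies in nanoseconds (assumed values,
# not vendor specifications).
DRAM_READ_NS = 50
PCM_READ_NS = 150

def avg_read_latency(dram_hit_rate):
    """Average read latency of a hybrid memory in which a fraction
    dram_hit_rate of accesses hit hot pages kept in DRAM and the
    remainder fall through to PCM."""
    return dram_hit_rate * DRAM_READ_NS + (1 - dram_hit_rate) * PCM_READ_NS
```

With a 90% DRAM hit rate the average read costs about 60 ns in this model, which illustrates why caching hot pages in DRAM recovers most of the latency gap while PCM supplies the density.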
Abstract: The study investigated the practices of organisations in Gulf Cooperation Council (GCC) countries with regard to G2C e-government maturity. It reveals that e-government G2C initiatives in the surveyed countries in particular, and arguably around the world in general, are progressing slowly because of the lack of a trusted and secure medium to authenticate the identities of online users. The authors conclude that national ID schemes will play a major role in helping governments reap the benefits of e-government if the three advanced technologies of smart cards, biometrics and public key infrastructure (PKI) are utilised to provide a reliable and trusted authentication medium for e-government services.
Abstract: This paper deals with wireless relay communication
systems in which multiple sources transmit information to the
destination node with the help of multiple relays. We consider a
signal forwarding technique based on the minimum mean-square
error (MMSE) approach with multiple antennas for each relay. A
source-relay-destination joint design strategy is proposed with power
constraints at the destination and the source nodes. Simulation results
confirm that the proposed joint design method improves the average
MSE performance compared with that of conventional MMSE relaying
schemes.
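As a minimal illustration of the MMSE criterion underlying such relaying schemes (the dimensions, channel, and noise power below are arbitrary assumptions, not the paper's system model), the sketch forms the linear MMSE receive filter for y = Hx + n and compares its closed-form MSE with zero-forcing:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear model y = H x + n with unit-variance x; all numbers here
# are illustrative assumptions, not the paper's relay configuration.
n_tx, n_rx = 2, 4
H = rng.normal(size=(n_rx, n_tx))
sigma2 = 0.1  # noise variance

# Linear MMSE receive filter: W = (H^T H + sigma^2 I)^{-1} H^T
W = np.linalg.inv(H.T @ H + sigma2 * np.eye(n_tx)) @ H.T

# Closed-form mean-square errors for unit-variance Gaussian x:
#   MMSE:         tr((I + H^T H / sigma^2)^{-1})
#   zero-forcing: sigma^2 * tr((H^T H)^{-1})
mse_mmse = np.trace(np.linalg.inv(np.eye(n_tx) + H.T @ H / sigma2))
mse_zf = sigma2 * np.trace(np.linalg.inv(H.T @ H))
```

Eigenvalue-wise, sigma^2/(sigma^2 + lambda) < sigma^2/lambda for every channel eigenvalue lambda > 0, so the MMSE filter never does worse than zero-forcing.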
Abstract: Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, to certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretical techniques, and in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game-theory-based mechanism is the study of what happens when independent agents act selfishly and how to control them to maximize the overall performance. A bidding mechanism asks how one can design systems so that agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution time. The comparisons are recorded against some well-known techniques such as greedy, branch and bound, game-theoretical auctions and genetic algorithms.
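The flavor of such a bidding mechanism can be sketched with a second-price (Vickrey) auction, a classic design in which truthful bidding is a dominant strategy for selfish agents; the site names and valuations below are hypothetical, and this is an illustrative mechanism, not necessarily the exact one the paper proposes:

```python
def vickrey_auction(bids):
    """Second-price sealed-bid auction: the highest bidder wins the
    replica but pays only the second-highest bid, which removes any
    incentive for a selfish agent to misreport its valuation."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Hypothetical agents bidding their value for hosting a data object.
bids = {"site_A": 8.0, "site_B": 5.0, "site_C": 9.5}
winner, price = vickrey_auction(bids)  # site_C wins and pays 8.0
```

Because the price paid does not depend on the winner's own bid, each agent's best strategy is to bid its true valuation, which is exactly the kind of control over selfish behavior the abstract describes.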
Abstract: In the artificial intelligence field, knowledge
representation and reasoning are important areas for intelligent
systems, especially knowledge-base systems and expert systems.
Knowledge representation methods play an important role in
designing such systems. There have been many models for knowledge,
such as semantic networks, conceptual graphs, and neural networks.
These models are useful tools for designing intelligent systems.
However, they are not suitable for representing knowledge in
real-world application domains. In this paper, new models for
knowledge representation called computational networks are
presented. They have been used in designing knowledge-base systems
in education for problem solving, such as a system that supports
studying and solving analytic geometry problems, a program for
studying and solving problems in plane geometry, and a program for
solving problems about alternating current in physics.
Abstract: The paper presents relations between air velocity values reproduced by a laser Doppler anemometer (LDA) and an ultrasonic anemometer (UA) and values calculated from flow-rate measurements using a gas meter whose calibration uncertainty is ±(0.15-0.30)%. The investigation was performed in a channel installed in an aerodynamic facility used as part of the national standard of air velocity. The relations defined in this research confirm the LDA and UA to be the most advantageous instruments for air velocity reproduction. The results affirm the ultrasonic anemometer to be a reliable and favourable instrument for measuring mean velocity or controlling velocity stability in the range of 0.05 m/s - 10 (15) m/s when the LDA is used. The main aim of this research is to investigate low-velocity regularities, starting from 0.05 m/s, in the turbulent, laminar and transitional air-flow regions. Theoretical and experimental results and a brief analysis of them are given in the paper. Maximum-to-mean velocity relations for transitional air flow, which has a unique distribution, are presented. Transitional flow, whose characteristics are distinct from those of laminar and turbulent flow, has not yet been analysed experimentally.
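The laminar/transitional/turbulent distinction drawn above can be illustrated with a pipe-flow Reynolds number check; the channel diameter, the air viscosity value, and the conventional thresholds Re < 2300 (laminar) and Re > 4000 (turbulent) are textbook assumptions, not the facility's actual parameters:

```python
NU_AIR = 1.5e-5  # kinematic viscosity of air at ~20 degC, m^2/s (approximate)

def flow_regime(velocity_m_s, diameter_m):
    """Classify a pipe flow by Reynolds number Re = v*D/nu, using the
    conventional thresholds 2300 and 4000."""
    re = velocity_m_s * diameter_m / NU_AIR
    if re < 2300:
        return "laminar"
    if re < 4000:
        return "transitional"
    return "turbulent"
```

In a hypothetical 0.1 m channel, 0.05 m/s gives Re of roughly 330 (laminar) while 10 m/s gives Re near 67000 (turbulent), so the velocity range studied spans all three regimes.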
Abstract: Hemorrhage Disease of Grass Carp (HDGC) is a commonly
occurring illness in summer, and its extremely high death rate
results in colossal losses to aquaculture. Because of the complex
connections among the factors that influence aquaculture diseases,
no fully adequate mathematical model for the problem exists at
present. A BP neural network, with its excellent nonlinear mapping
capability, was adopted to establish a mathematical model; easily
detected environmental factors, such as breeding density, water
temperature, pH and light intensity, were set as the main objects
of analysis. 25 groups of experimental data were used for training
and testing, and the accuracy of the model in predicting the trend
of HDGC was above 80%. It is demonstrated that a BP neural network
for predicting HDGC is objective and practical, and it can thus be
extended to other aquaculture diseases.
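A minimal sketch of a BP (backpropagation) network of the kind described, trained on synthetic stand-in data: the four inputs mimic breeding density, water temperature, pH and light intensity scaled to [0, 1], and the labels follow an invented rule, not real HDGC records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 4 environmental factors scaled to [0, 1];
# the outbreak label follows an invented rule, NOT real HDGC records.
X = rng.random((25, 4))
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, trained by plain gradient descent,
# i.e. error backpropagation (BP).
W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 1.0

losses = []
for _ in range(3000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    p = sigmoid(h @ W2 + b2)
    q = np.clip(p, 1e-12, 1 - 1e-12)      # guard the logs
    losses.append(float(-np.mean(y * np.log(q) + (1 - y) * np.log(1 - q))))
    dz2 = (p - y) / len(X)                # backward pass (cross-entropy)
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dz1 = dz2 @ W2.T * h * (1 - h)
    dW1 = X.T @ dz1; db1 = dz1.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
accuracy = float(np.mean((p > 0.5) == (y > 0.5)))
```

The training loss falls steadily and the fitted network separates the toy outbreak rule, mirroring (on invented data) the above-80% predictive accuracy the abstract reports.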
Abstract: A minimal-complexity version of component mode
synthesis is presented that requires simplified computer
programming but still provides adequate accuracy for modeling
the lower eigenproperties of large structures and their transient
responses. The novelty is that the structural separation into
components is done along a plane/surface that exhibits rigid-like
behavior, so that only the normal modes of each component are
needed, without computing any constraint, attachment, or
residual-attachment modes. The approach requires as input only a
few (lower) natural frequencies and the corresponding undamped
normal modes of each component. A novel technique for formulating
the equations of motion is shown, in which a double transformation
to generalized coordinates is employed, and the formulation of a
nonproportional damping matrix in generalized coordinates is presented.
Abstract: In this paper we develop an efficient numerical method for the finite-element model updating of damped gyroscopic systems based on incomplete complex modal measured data. It is assumed that the analytical mass and stiffness matrices are correct and only the damping and gyroscopic matrices need to be updated. By solving a constrained optimization problem, the optimal corrected symmetric damping matrix and skew-symmetric gyroscopic matrix that comply with the required eigenvalue equation are found in a weighted Frobenius norm sense.
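A constrained problem of the kind described can be written as follows; the notation here (analytical matrices with subscript a, measured eigendata (Λ, X), weight W) is an assumed generic form, not necessarily the paper's exact formulation.

```latex
% Generic weighted-Frobenius updating problem (notation assumed):
% M, K are the trusted analytical mass/stiffness matrices, D_a, G_a the
% analytical damping/gyroscopic matrices, and (Lambda, X) the incomplete
% measured complex eigendata.
\min_{D = D^{T},\; G = -G^{T}}
  \left\| W^{1/2} (D - D_{a}) W^{1/2} \right\|_{F}^{2}
  + \left\| W^{1/2} (G - G_{a}) W^{1/2} \right\|_{F}^{2}
\quad \text{s.t.} \quad
M X \Lambda^{2} + (D + G)\, X \Lambda + K X = 0 .
```

The constraint is the quadratic eigenvalue equation of the gyroscopic system evaluated at the measured eigenpairs, and the symmetry/skew-symmetry conditions preserve the physical structure of the corrected matrices.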
Abstract: Due to the three-dimensional flow pattern interacting with the bed material, the process of local scour around bridge piers is complex. Modeling the 3D flow field and scour-hole evolution around a bridge pier is more feasible nowadays because computational cost and computational time have decreased significantly. In order to evaluate the local flow and scouring around a bridge pier, a fully three-dimensional numerical model, the SSIIM program, was used. The model solves the 3D Navier-Stokes equations and a bed-load conservation equation. The model was applied to simulate local flow and scouring around a bridge pier in a large natural river with four piers. A computation for 1 day of flood conditions was carried out to predict the maximum local scour depth. The results show that the SSIIM program can be used efficiently for simulating scouring in natural rivers. The results also showed that, among the various turbulence models, the k-ω model gives more reasonable results.
Abstract: This experiment was carried out to study the effect of
AMF, drought stress and phosphorus on physiological growth indices of basil in Iran, using a split-plot design with three replications.
The main-plot factor comprised two levels of irrigation regime (control = no drought stress, and irrigation after 80 evaporation =
drought stress condition), while the sub-plot factors were
phosphorus (0, 35 and 70 kg/ha) and application versus non-application of Glomus fasciculatum. The results showed that total dry matter
(TDM), leaf area index (LAI), relative growth rate (RGR) and crop growth rate (CGR) all differed highly significantly among the
phosphorus levels, whereas drought stress had a practically
significant effect on TDM, LAI, RGR and CGR. The results also showed that the highest TDM, LAI, RGR and CGR were obtained from
the application of Glomus fasciculatum under the no-drought condition.
Abstract: Various methods based on regression ideas have been created to resolve the problem of data sets containing censored observations, e.g. the Buckley-James method, Miller's method, the Cox method, and the Koul-Susarla-Van Ryzin estimators. Even though comparison studies show that the Buckley-James method performs better than some other methods, it is still rarely used by researchers, mainly because of the limited diagnostic analysis developed for the Buckley-James method thus far. Therefore, a diagnostic tool for the Buckley-James method is proposed in this paper. It is called the renovated Cook's Distance (RD*_i) and has been developed based on Cook's idea. The renovated Cook's Distance (RD*_i) has advantages (depending on the analyst's demands) over (i) the change in the fitted value for a single case, DFIT*_i, as it measures the influence of case i on all n fitted values Ŷ* (not just the fitted value for case i, as DFIT*_i does), and (ii) the change in the estimate of the coefficients when the ith case is deleted, DBETA*_i, since DBETA*_i corresponds to the number of variables p, so it is usually easier to look at a diagnostic measure such as RD*_i, where information from the p variables can be considered simultaneously. Finally, an example using the Stanford Heart Transplant data is provided to illustrate the proposed diagnostic tool.
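Since the renovated distance builds on Cook's original idea, the sketch below computes the classical (uncensored, OLS) Cook's Distance and checks its closed form against a direct leave-one-out refit; the data are simulated, and this is the standard D_i, not the proposed RD*_i:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 3  # p counts the intercept as a column
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)

# OLS fit, hat matrix, residuals and residual variance.
H = X @ np.linalg.inv(X.T @ X) @ X.T
yhat = H @ y
e = y - yhat
s2 = (e @ e) / (n - p)
h = np.diag(H)

# Closed-form Cook's Distance for every case:
#   D_i = e_i^2 h_ii / (p s^2 (1 - h_ii)^2)
D = e ** 2 * h / (p * s2 * (1 - h) ** 2)

def cooks_direct(i):
    """Cook's D for case i computed by actually refitting without it:
    the squared change in all n fitted values, scaled by p*s^2."""
    mask = np.arange(n) != i
    b_i = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    diff = yhat - X @ b_i
    return (diff @ diff) / (p * s2)
```

Both routes agree case by case, which is the "influence of case i on all n fitted values" interpretation the abstract carries over to the renovated version.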
Abstract: In this study, the effects of machining parameters on
specific energy during surface grinding of 6061Al-SiC35P
composites are investigated. The vol% of SiC, feed and depth of cut
were chosen as process variables. The power needed for the
calculation of the specific energy is measured using the two-wattmeter
method. Experiments are conducted using a standard RSM design, the
central composite design (CCD). A second-order response surface
model was developed for the specific energy. The results identify the
factors with significant influence for minimizing the specific energy.
The confirmation results demonstrate the practicability and
effectiveness of the proposed approach.
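The structure of such a second-order response surface model can be sketched as follows; the factor values and coefficients are synthetic stand-ins (coded variables standing for vol% SiC, feed and depth of cut), used only to show that least squares recovers a quadratic surface of this form:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical factors coded to [-1, 1] as in a CCD; stand-ins for
# vol% SiC, feed and depth of cut, not the paper's actual runs.
X = rng.uniform(-1, 1, size=(20, 3))

def quad_features(X):
    """Second-order model terms: 1, x_i, x_i^2, and x_i * x_j."""
    n = len(X)
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i] ** 2 for i in range(3)]
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
    return np.column_stack(cols)

# Generate a response from a known quadratic surface, then recover it
# by ordinary least squares, as an RSM fit would.
true_coef = np.array([5.0, 1.0, -2.0, 0.5, 0.3, 0.0, 1.5, 0.2, -0.4, 0.1])
F = quad_features(X)
y = F @ true_coef

coef, *_ = np.linalg.lstsq(F, y, rcond=None)
```

With noise-free synthetic data the fitted coefficients match the generating surface exactly; on real measurements the same fit yields the model whose significant terms the abstract's analysis identifies.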