Parallel Discrete Fourier Transform for Fast FIR Filtering Based on Overlapped-save Block Structure

To successfully provide a fast FIR filter with FFT algorithms, overlapped-save algorithms can be used to lower the computational complexity and achieve the desired real-time processing. As the length of the input block is increased to improve efficiency, a larger volume of zero padding greatly increases the computation length of the FFT. In this paper, we use overlapped block digital filtering to construct a parallel structure. As long as the down-sampling (or up-sampling) factor is an exact multiple of the length of the impulse response of the FIR filter, we can process the input block using a parallel structure and thus achieve a low-complexity fast FIR filter with overlapped-save algorithms. With a long filter length, the performance and throughput of the digital filtering system are also greatly enhanced.
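As an illustration of the underlying overlap-save idea, a minimal single-threaded NumPy sketch is given below (the block length and test filter are chosen arbitrarily; this is not the authors' parallel block structure):

import numpy as np

def overlap_save_fir(x, h, N=256):
    """Filter x with FIR h using FFT-based overlap-save blocks of length N."""
    M = len(h)
    L = N - M + 1                                   # new samples consumed per block
    H = np.fft.fft(h, N)                            # filter frequency response
    x = np.concatenate([np.zeros(M - 1), x])        # prepend M-1 samples of history
    y = []
    for start in range(0, len(x) - M + 1, L):
        block = x[start:start + N]
        if len(block) < N:                          # zero-pad the final partial block
            block = np.concatenate([block, np.zeros(N - len(block))])
        Y = np.fft.ifft(np.fft.fft(block) * H)
        y.append(Y.real[M - 1:])                    # discard the first M-1 aliased samples
    return np.concatenate(y)[:len(x) - (M - 1)]

# Quick check against direct convolution
x = np.random.randn(1000)
h = np.ones(16) / 16.0
assert np.allclose(overlap_save_fir(x, h), np.convolve(x, h)[:len(x)], atol=1e-9)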

Proposed Developments of Elliptic Curve Digital Signature Algorithm

The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the DSA: a digital signature scheme designed to provide a signature based on a secret number known only to the signer and on the actual message being signed. These digital signatures are considered the digital counterparts to handwritten signatures and are the basis for validating the authenticity of a connection. The security of such schemes rests on the infeasibility of computing the signature without the private key. In this paper we introduce a proposed development of the original ECDSA with increased complexity.
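For reference, a textbook sketch of ECDSA signing and verification over the standard secp256k1 curve is shown below (this illustrates the original scheme only, not the proposed development; in practice the nonce k must come from a cryptographically secure source and a vetted library should be used):

import hashlib, secrets

# secp256k1 domain parameters
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P, Q):
    """Affine point addition on y^2 = x^3 + 7 over GF(p); None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = (3 * P[0] * P[0]) * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def scalar_mult(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1: R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def hash_int(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), 'big') % n

def sign(d, msg):
    e = hash_int(msg)
    while True:
        k = secrets.randbelow(n - 1) + 1
        r = scalar_mult(k, G)[0] % n
        if r == 0: continue
        s = pow(k, -1, n) * (e + r * d) % n
        if s: return (r, s)

def verify(Q, msg, sig):
    r, s = sig
    e, w = hash_int(msg), pow(s, -1, n)
    X = point_add(scalar_mult(e * w % n, G), scalar_mult(r * w % n, Q))
    return X is not None and X[0] % n == r

d = secrets.randbelow(n - 1) + 1          # private key
Q = scalar_mult(d, G)                     # public key
sig = sign(d, b"message to be signed")
assert verify(Q, b"message to be signed", sig)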

Generalized Method for Estimating Best-Fit Vertical Alignments for Profile Data

When the profile information of an existing road is missing or not up to date and the parameters of the vertical alignment are needed for engineering analysis, the engineer has to recreate the geometric design features of the road alignment using collected profile data. The profile data may be collected using traditional surveying methods, global positioning systems, or digital imagery. This paper develops a method that estimates the parameters of the geometric features that best characterize existing vertical alignments in terms of tangents and curve expressions, where the curves may be symmetrical, asymmetrical, reverse, or complex vertical curves. The method is implemented using an Excel-based optimization that minimizes the differences between the observed profile and the profiles estimated from the vertical curve equations. The method uses a 'wireframe' representation of the profile that makes it applicable to all types of vertical curves. A secondary contribution of this paper is to introduce the properties of the equal-arc asymmetrical curve that has recently been developed in the highway geometric design field.
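As a simple illustration of the fitting idea for the basic symmetrical (parabolic) case, a least-squares sketch in Python is given below (the authors' method is Excel-based and also handles asymmetrical, reverse, and complex curves; the station/elevation data here are invented):

import numpy as np

# Hypothetical profile observations along one curve: station x (m), elevation y (m)
x = np.array([0, 20, 40, 60, 80, 100, 120, 140, 160], dtype=float)
y = np.array([100.00, 100.54, 100.95, 101.24, 101.40, 101.44, 101.35, 101.14, 100.80])

# Symmetrical vertical curve model: y = y0 + g1*x + (g2 - g1) * x**2 / (2*L),
# which is simply a quadratic, so an ordinary least-squares polynomial fit applies.
L = x[-1] - x[0]
c2, c1, c0 = np.polyfit(x, y, deg=2)

g1 = c1                     # entry grade (tangent slope at x = 0)
g2 = c1 + 2.0 * c2 * L      # exit grade (tangent slope at x = L)
residuals = y - np.polyval([c2, c1, c0], x)

print(f"g1 = {g1:.4f}, g2 = {g2:.4f}, RMS error = {np.sqrt(np.mean(residuals**2)):.4f} m")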

A Quantum Algorithm of Constructing Image Histogram

The histogram plays an important statistical role in digital image processing. However, existing quantum image models are ill-suited to this kind of statistical processing because different gray scales cannot be distinguished. In this paper, a novel quantum image representation model is first proposed in which pixels with different gray scales can be distinguished and operated on simultaneously. Based on the new model, a fast quantum algorithm for constructing the histogram of a quantum image is designed. Performance comparison reveals that the new quantum algorithm achieves an approximately quadratic speedup over its classical counterpart. The proposed quantum model and algorithm are significant for future research in quantum image processing.

Neuro-Fuzzy System for Equalization of Channel Distortion

In this paper the application of a neuro-fuzzy system to the equalization of channel distortion is considered. The structure and operation algorithm of the neuro-fuzzy equalizer are described. The use of a neuro-fuzzy equalizer in digital signal transmission makes it possible to decrease the training time of the parameters and to reduce the complexity of the network. A simulation of the neuro-fuzzy equalizer is performed. The obtained results confirm the efficiency of applying neuro-fuzzy technology to channel equalization.

High Dynamic Range Resampling for Software Radio

The classic problem of recovering arbitrary values of a band-limited signal from its samples has an added complication in software radio applications: the resampling calculations inevitably fold aliases of the analog signal back into the original bandwidth. The phenomenon is quantified by the spur-free dynamic range (SFDR). We demonstrate how a novel application of the Remez (Parks-McClellan) algorithm permits optimal signal recovery and SFDR, far surpassing state-of-the-art resamplers.
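For context, a minimal sketch of designing an equiripple anti-imaging filter with the Parks-McClellan algorithm via SciPy's remez routine is shown below; the band edges, tap count, sample rate, and the use of stopband rejection as a proxy for alias suppression are illustrative choices, not the authors' design:

import numpy as np
from scipy.signal import remez, freqz

fs = 2.0e6            # illustrative sample rate (Hz)
numtaps = 127         # filter length (more taps -> deeper stopband -> higher dynamic range)
passband_edge = 0.40e6
stopband_edge = 0.55e6

# Equiripple lowpass prototype for resampling
taps = remez(numtaps,
             bands=[0, passband_edge, stopband_edge, fs / 2],
             desired=[1.0, 0.0],
             weight=[1.0, 10.0],      # weight the stopband to push aliases down
             fs=fs)

w, h = freqz(taps, worN=8192, fs=fs)
stop = w >= stopband_edge
print("worst-case stopband rejection: %.1f dB"
      % (-20 * np.log10(np.max(np.abs(h[stop])))))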

Parallel-computing Approach for FFT Implementation on Digital Signal Processor (DSP)

An efficient parallel form in a digital signal processor can improve algorithm performance. The butterfly structure plays an important role in the fast Fourier transform (FFT) because its symmetric form is suitable for hardware implementation. Although it offers a symmetric structure, performance is reduced by the data-dependent flow characteristic. Even though recent research known as the novel memory reference reduction method (NMRRM) for the FFT focuses on reducing memory references to twiddle factors, the data-dependent property still exists. In this paper, we propose a parallel-computing approach for FFT implementation on a digital signal processor (DSP) that is based on a data-independent property and still retains low memory reference. The proposed method combines the final two steps of the NMRRM FFT to form a novel data-independent structure, and it is well suited to multi-operation-unit digital signal processors and dual-core systems. We have applied the proposed method to a radix-2 FFT algorithm with low memory reference on a TI TMS320C64x DSP. Experimental results show the method can reduce clock cycles by 33.8% compared with the NMRRM FFT implementation while keeping the low-memory-reference property.
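As background on the butterfly structure referred to above, a minimal recursive radix-2 decimation-in-time FFT in Python is given below (purely illustrative of the butterfly; it is not the NMRRM method or the proposed DSP-oriented structure):

import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return x
    even = fft_radix2(x[0::2])            # FFT of even-indexed samples
    odd  = fft_radix2(x[1::2])            # FFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    t = twiddle * odd
    # Butterfly: combine each pair (even[k], t[k]) into outputs k and k + N/2
    return np.concatenate([even + t, even - t])

x = np.random.randn(1024) + 1j * np.random.randn(1024)
assert np.allclose(fft_radix2(x), np.fft.fft(x))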

Detection of Breast Cancer in the JPEG2000 Domain

Breast cancer detection techniques have been reported to aid radiologists in analyzing mammograms. We note that most techniques operate on uncompressed digital mammograms. Mammogram images are huge, necessitating the use of compression to reduce storage and transmission requirements. In this paper, we present an algorithm for the detection of microcalcifications in the JPEG2000 domain. The algorithm is based on the statistical properties of the wavelet transform that the JPEG2000 coder employs. Simulations were carried out at different compression ratios. The sensitivity of this algorithm ranges from 92% with a false positive rate of 4.7, using lossless compression, down to 66% with a false positive rate of 2.1, using lossy compression at a compression ratio of 100:1.
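A rough sketch of the kind of wavelet-domain statistical thresholding involved is given below, using PyWavelets; this is a generic illustration with an arbitrary biorthogonal wavelet and threshold rule, not the authors' JPEG2000-domain algorithm or its tuned parameters:

import numpy as np
import pywt

def detect_bright_spots(image, wavelet="bior4.4", levels=3, k=3.0):
    """Flag wavelet detail coefficients whose magnitude exceeds mean + k*std per subband."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    marked_coeffs = [np.zeros_like(coeffs[0])]          # drop the approximation subband
    for (cH, cV, cD) in coeffs[1:]:
        marked = []
        for band in (cH, cV, cD):
            thr = np.abs(band).mean() + k * np.abs(band).std()
            marked.append(np.where(np.abs(band) > thr, band, 0.0))
        marked_coeffs.append(tuple(marked))
    # Reconstruct only the flagged high-frequency content as a candidate map
    candidate = pywt.waverec2(marked_coeffs, wavelet)
    return np.abs(candidate) > 0

image = np.random.rand(256, 256)        # placeholder for a mammogram region
print(detect_bright_spots(image).sum(), "candidate pixels flagged")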

Biologically Inspired Artificial Neural Cortex Architecture and its Formalism

The paper attempts to elucidate the columnar structure of the cortex by answering the following questions. (1) Why do cortical neurons with similar interests tend to be vertically arrayed, forming what are known as cortical columns? (2) How can the cortex as a whole be described in concise mathematical terms? (3) How can efficient digital models of the cortex be designed?

Speech Activated Automation

This article presents a simple way to perform programmed voice commands for interfacing with commercial digital and analogue input/output PCI cards used in robotics and automation applications. Robots and automation equipment can "listen" to voice commands and perform several different tasks, approaching human behavior and improving human-machine interfaces for the automation industry. Since most PCI digital and analogue input/output cards are sold with several DLLs included (for use with different programming languages), it is possible to add speech recognition capability using a standard speech recognition engine compatible with the programming languages used. In this work, a Visual Basic 6 (the world's most popular language) application was created that listens to several voice commands and is capable of communicating directly with several standard 128-line digital I/O PCI cards used to control complete automation systems, with up to (number of boards used) x 128 sensors and/or actuators.

Performance of Compound Enhancement Algorithms on Dental Radiograph Images

The purpose of this research is to compare original intra-oral digital dental radiograph images with images that are enhanced using a combination of image processing algorithms. Intra-oral digital dental radiograph images are often noisy, have blurred edges, and are low in contrast. A combination of sharpening and enhancement methods is used to overcome these problems. The three proposed compound algorithms are Sharp Adaptive Histogram Equalization (SAHE), Sharp Median Adaptive Histogram Equalization (SMAHE) and Sharp Contrast-Limited Adaptive Histogram Equalization (SCLAHE). This paper presents an initial study of the perception of six dentists regarding the details of abnormal pathologies and the improvement of image quality in ten intra-oral radiographs. The research focuses on the detection of only three types of pathology: periapical radiolucency, widened periodontal ligament space and loss of lamina dura. The overall results show that SCLAHE slightly improves the appearance of dental abnormalities over the original image and also outperforms the other two proposed compound algorithms.
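A minimal sketch of a sharpening-plus-CLAHE pipeline of the kind described is shown below, using scikit-image; the kernels, clip limit and the ordering of the median step are illustrative assumptions, not the exact parameters of the authors' SAHE/SMAHE/SCLAHE variants:

import numpy as np
from skimage import img_as_float
from skimage.filters import unsharp_mask, median
from skimage.exposure import equalize_adapthist

def sclahe_like(radiograph, radius=2, amount=1.0, clip_limit=0.02, use_median=False):
    """Sharpen, optionally median-filter, then apply contrast-limited adaptive
    histogram equalization -- roughly the compound ordering described above."""
    img = img_as_float(radiograph)
    sharpened = unsharp_mask(img, radius=radius, amount=amount)
    if use_median:                          # SMAHE-style noise suppression
        sharpened = median(sharpened)
    return equalize_adapthist(np.clip(sharpened, 0, 1), clip_limit=clip_limit)

radiograph = np.random.rand(512, 512)       # placeholder for an intra-oral radiograph
enhanced = sclahe_like(radiograph)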

A Method of Protecting Relational Databases Copyright with Cloud Watermark

With the development of the Internet and database application techniques, the demand for many databases on the Internet to permit remote query and access by authorized users has become common, and the problem of how to protect the copyright of relational databases arises. This paper first briefly introduces the cloud model, including cloud generators and similar clouds. Then, combining the properties of the cloud with the idea of digital watermarking and the properties of relational databases, a method of protecting relational database copyright with a cloud watermark is proposed. The corresponding watermark algorithms, such as the cloud watermark embedding algorithm and the detection algorithm, are also proposed. Experiments are then run and the results are analyzed to validate the correctness and feasibility of the watermark scheme. Finally, the prospects of watermarking relational databases and directions for future research are discussed.

Why Are Entrepreneurs Resistant to E-tools?

Latvia is fourth in the world in terms of broadband internet speed. The total number of internet users in Latvia exceeds 70% of its population. The number of active mailboxes of the local internet e-mail service Inbox.lv accounts for 68% of the population and 97.6% of the total number of internet users. The Latvian portal Draugiem.lv is a social media phenomenon, as 58.4% of the population and 83.5% of internet users use it. A majority of Latvian company profiles are available on social networks, the most popular being Twitter.com. These and other parameters prove that consumers and companies are actively using the Internet. However, after the authors analyzed in a number of studies how enterprises are employing the e-environment, namely e-environment tools, they arrived at conclusions that are not as flattering as the aforementioned statistics. There is an obvious contradiction between the statistical data and the actual studies. As a result, the authors have posed a question: Why are entrepreneurs resistant to e-tools? In order to answer this question, the authors have turned to the Technology Acceptance Model (TAM). The authors analyzed each phase and determined several factors affecting the use of the e-environment, reaching the main conclusion that entrepreneurs do not have a sufficient level of e-literacy (digital literacy). The authors employ well-established quantitative and qualitative methods of research: grouping, analysis, statistical methods, factor analysis in the SPSS 20 environment, etc. The theoretical and methodological background of the research is formed by scientific research and publications, material from the mass media and professional literature, statistical information from legal institutions, as well as information collected by the authors during the survey.

Comparative Study of Evolutionary Model and Clustering Methods in Circuit Partitioning Pertaining to VLSI Design

Partitioning is a critical area of VLSI CAD. In order to build complex digital logic circuits, it is often essential to subdivide a multi-million-transistor design into manageable pieces. This paper looks at various partitioning techniques in VLSI CAD, targeted at various applications. We propose an evolutionary time-series model and a statistical glitch prediction system using a neural network with global feature selection by means of a clustering method, for partitioning a circuit. For the evolutionary time-series model, we made use of genetic, memetic and neuro-memetic techniques. Our work focused on the use of the clustering methods K-means and EM. A comparative study is provided for all techniques to solve the problem of circuit partitioning pertaining to VLSI design. The performance of all approaches is compared using benchmark data provided by the MCNC standard cell placement benchmark netlists. Analysis of the experimental results shows that the neuro-memetic model achieves better performance than the other models in recognizing sub-circuits with a minimum number of interconnections between them.
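As a small illustration of the clustering side of the comparison, the sketch below applies K-means to per-cell connectivity feature vectors with scikit-learn; the feature construction and cut estimate are toy stand-ins, not the feature selection or MCNC benchmark netlists used in the paper:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy stand-in: 200 cells, each described by a feature vector derived from its
# connectivity (e.g. fan-in, fan-out, and net-adjacency statistics).
features = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 4)),    # cells of one sub-circuit
    rng.normal(loc=4.0, scale=1.0, size=(100, 4)),    # cells of another sub-circuit
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_

# Cut-size proxy: how many "nets" (here, random cell pairs) straddle the partition.
pairs = rng.integers(0, len(features), size=(500, 2))
cut = np.sum(labels[pairs[:, 0]] != labels[pairs[:, 1]])
print("cells per part:", np.bincount(labels), "| cut estimate:", cut)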

Robust Digital Cinema Watermarking

With the advent of digital cinema and digital broadcasting, copyright protection of video data has become one of the most important issues. We present a novel method of watermarking for video image data based on hardware and discrete wavelet transform techniques and name it "traceable watermarking", because the watermarked data is constructed before the transmission process and traced after it has been received by an authorized user. In our method, we embed the watermark into the lowest part of each image frame in the decoded video using a hardware LSI. Digital cinema is an important application for traceable watermarking, since a digital cinema system makes use of watermarking technology during content encoding, encryption, transmission, decoding, and all the intermediate processes carried out in digital cinema systems. The watermark is embedded into randomly selected movie frames using hash functions. Embedded watermark information can be extracted from the decoded video data without access to the original movie data. Our experimental results show that the proposed traceable watermarking method for digital cinema systems is much better than conventional watermarking techniques in terms of robustness, image quality, speed, simplicity, and robustness of structure.
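A toy sketch of hash-based frame selection combined with least-significant-bit embedding, in the spirit of the scheme described, is shown below; the key, selection rule and payload layout are invented for illustration, and the actual method runs on a hardware LSI:

import hashlib
import numpy as np

KEY = b"example-secret-key"                       # hypothetical secret key

def frame_selected(frame_index, key=KEY, ratio=8):
    """Pseudo-randomly pick roughly 1 out of `ratio` frames from a keyed hash."""
    digest = hashlib.sha256(key + frame_index.to_bytes(8, "big")).digest()
    return digest[0] % ratio == 0

def embed_lsb(frame, payload_bits):
    """Write payload bits into the least significant bits of the first pixels."""
    flat = frame.reshape(-1).copy()
    flat[:len(payload_bits)] = (flat[:len(payload_bits)] & 0xFE) | payload_bits
    return flat.reshape(frame.shape)

def extract_lsb(frame, n_bits):
    return frame.reshape(-1)[:n_bits] & 1

payload = np.random.randint(0, 2, 64, dtype=np.uint8)          # 64-bit watermark
video = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(24)]

watermarked = [embed_lsb(f, payload) if frame_selected(i) else f
               for i, f in enumerate(video)]

for i, f in enumerate(watermarked):
    if frame_selected(i):
        assert np.array_equal(extract_lsb(f, 64), payload)      # recoverable without the original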

3D Digitalization of the Human Body for Use in Orthotics and Prosthetics

The motivation for this work was to find a suitable 3D scanner for digitizing human body parts in the field of prosthetics and orthotics. The main project objective is to compare three hand-held portable scanners (two optical and one laser) and two optical tripod scanners. The comparison was made with respect to scanning detail, simplicity of operation and the ability to scan directly on the human body. Testing was carried out on a plaster cast of the upper limb and directly on a few volunteers. The objectively monitored parameters were the time required for digitizing and post-processing of the 3D data and the resulting visual data quality. The level of usability and handling of each scanner was assessed subjectively. A new tripod was developed to improve face scanning conditions. The results provide an overview of the suitability of the different types of scanners.

A Reduced-Bit Multiplication Algorithm for Digital Arithmetic

A reduced-bit multiplication algorithm based on the ancient Vedic multiplication formulae is proposed in this paper. Both Vedic multiplication formulae, Urdhva tiryakbhyam and Nikhilam, are first discussed in detail. Urdhva tiryakbhyam, being a general multiplication formula, is equally applicable to all cases of multiplication. It is applied to digital arithmetic and is shown to yield a multiplier architecture that is very similar to the popular array multiplier. Due to its structure, it suffers from a high carry propagation delay in the multiplication of large numbers. The Nikhilam Sutra, on the other hand, is more efficient for the multiplication of large numbers as it reduces the multiplication of two large numbers to that of two smaller numbers. The framework of the proposed algorithm is taken from this Sutra and is further optimized through general arithmetic operations such as expansion and bit-shifting to take advantage of bit reduction in multiplication. We illustrate the proposed algorithm by reducing a general 4 x 4-bit multiplication to a single 2 x 2-bit multiplication operation.
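A small sketch of the Nikhilam idea in Python is given below: two numbers near a power-of-two base are multiplied via their deficits, so the large multiplication reduces to a smaller one plus shifts and additions (the bit width and base choice are illustrative, not the optimized architecture of the paper):

def nikhilam_multiply(x, y, bits=8):
    """Multiply x and y (both close to base = 2**bits) using Nikhilam-style deficits."""
    base = 1 << bits
    dx, dy = base - x, base - y        # deficits from the base
    # x*y = (x - dy) * base + dx * dy : one small multiplication plus a shift and add
    return ((x - dy) << bits) + dx * dy

# Example: an 8x8-bit multiplication reduced to a small deficit product
x, y = 250, 243
print(nikhilam_multiply(x, y), "==", x * y)
assert nikhilam_multiply(x, y) == x * y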

Application of Geographic Information Systems (GIS) in the History of Cartography

This paper discusses applications of a revolutionary information technology, Geographic Information Systems (GIS), in the field of the history of cartography through examples, including assessing the accuracy of early maps, establishing a database of places and historical administrative units, integrating early maps into GIS as digital images, and analyzing social, political, and economic information related to the production of early maps. GIS provides a new means to evaluate the accuracy of early maps. Four basic steps in using GIS for this type of study are discussed. In addition, several historical geographic information systems are introduced, including the China Historical Geographic Information System (CHGIS), the United States National Historical Geographic Information System (NHGIS), and the Great Britain Historical Geographical Information System. GIS also provides digital means to display and analyze the spatial information on early maps or to layer them with modern spatial data. How the GIS relational data structure may be used to analyze social, political, and economic information related to the production of early maps is also discussed. Through these examples, the paper reveals the value of GIS applications in this field.

Weld Defect Detection in Industrial Radiography Based on Digital Image Processing

Industrial radiography is a well-known technique for the identification and evaluation of discontinuities, or defects, such as cracks, porosity and foreign inclusions found in welded joints. Although this technique has been well developed, improving both the inspection process and operating time, it suffers from several drawbacks. The poor quality of radiographic images is due to the physical nature of radiography as well as the small size of the defects and their poor orientation relative to the size and thickness of the evaluated parts. Digital image processing techniques allow the interpretation of the image to be automated, avoiding the need for human operators and making the inspection system more reliable, reproducible and faster. This paper describes our attempt to develop and implement digital image processing algorithms for the purpose of automatic defect detection in radiographic images. Because of the complex nature of the considered images, and in order for the detected defect region to represent the real defect as accurately as possible, the choice of global and local preprocessing and segmentation methods must be appropriate.
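A generic sketch of the preprocessing and segmentation chain such a system might use is given below: median denoising, local thresholding and connected-component labelling with scikit-image. The filter sizes and thresholds are placeholders; the paper's point is precisely that these choices must be adapted to the images at hand:

import numpy as np
from skimage import img_as_float
from skimage.filters import median, threshold_local
from skimage.measure import label, regionprops
from skimage.morphology import disk

def detect_defect_candidates(radiograph, block_size=51, min_area=20):
    """Return labelled regions that stand out from the local weld background."""
    img = img_as_float(radiograph)
    denoised = median(img, disk(2))                      # suppress film-grain noise
    local_thr = threshold_local(denoised, block_size)    # local (adaptive) threshold surface
    binary = denoised > local_thr
    labelled = label(binary)
    return [r for r in regionprops(labelled) if r.area >= min_area]

radiograph = np.random.rand(400, 600)                    # placeholder weld radiograph
print(len(detect_defect_candidates(radiograph)), "candidate regions")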

Arriving at an Optimum Value of Tolerance Factor for Compressing Medical Images

Medical imaging takes advantage of digital technology in imaging and teleradiology. In teleradiology systems a large amount of data is acquired, stored and transmitted. A major technology that may help to solve the problems associated with massive data storage and data transfer capacity is data compression and decompression. There are many methods of image compression available, classified as lossless and lossy compression methods. In lossy compression the decompressed image contains some distortion. Fractal image compression (FIC) is a lossy compression method in which an image is coded as a set of contractive transformations in a complete metric space. The set of contractive transformations is guaranteed to produce an approximation to the original image. In this paper FIC is achieved by a partitioned iterated function system (PIFS) using quadtree partitioning. PIFS is applied to different modalities such as ultrasound, CT scan, angiogram, X-ray and mammogram images. In each modality approximately twenty images are considered and the average values of compression ratio and PSNR are computed. In this method of fractal encoding, the tolerance factor Tmax is varied from 1 to 10, keeping the other standard parameters constant. For all modalities the compression ratio and peak signal-to-noise ratio (PSNR) are computed and studied. The quality of the decompressed image is assessed by the PSNR value. From the results it is observed that the compression ratio increases with the tolerance factor and that mammograms have the highest compression ratio. Because of the properties of fractal compression, image quality is not degraded up to an optimum tolerance factor value of Tmax = 8.
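For reference, the two quality measures used throughout the study can be computed as below (a plain NumPy sketch; 8-bit images are assumed, and the compressed size would come from the length of the actual PIFS code):

import numpy as np

def psnr(original, decompressed, max_value=255.0):
    """Peak signal-to-noise ratio in dB between the original and decompressed images."""
    mse = np.mean((original.astype(np.float64) - decompressed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """Ratio of raw image size to the size of the encoded PIFS description."""
    return original_bytes / compressed_bytes

original = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
decompressed = np.clip(original.astype(int) + np.random.randint(-3, 4, original.shape), 0, 255)
print("PSNR = %.2f dB" % psnr(original, decompressed))
print("CR   = %.1f : 1" % compression_ratio(original.size, 4096))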