Watermarking techniques can protect the copyright of relational databases by embedding ownership information into them. Difference expansion (DE) is one of the most common reversible watermarking techniques for numerical relational databases. However, most previous DE-based schemes suffer from low embedding capacity when the difference values between attributes are relatively large. In this paper, we propose a novel reversible watermarking scheme to solve this problem. In the scheme, a mapping difference expansion (MDE) method is proposed to convert the differences between attributes into small mapping differences. Based on MDE, an attribute and tuple selection algorithm is designed to select suitable data for watermarking, which increases embedding capacity and reduces distortion. In addition, majority voting is used to enhance the robustness of the watermark at high embedding capacity. Experimental results show that the proposed scheme provides higher embedding capacity, lower distortion and stronger robustness than other schemes.
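For background, the classical difference expansion transform on an integer pair can be sketched as follows. This is the standard DE step that MDE builds on, not the paper's mapping method itself; function names are illustrative:

```python
def de_embed(x, y, b):
    """Embed one bit b into the integer pair (x, y) by difference expansion."""
    l = (x + y) // 2          # integer average (invariant under embedding)
    h = x - y                 # difference between the two attribute values
    h2 = 2 * h + b            # expanded difference carrying the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the embedded bit and the original pair (fully reversible)."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    b = h2 % 2                # the bit sits in the parity of the difference
    h = h2 // 2               # undo the expansion
    return b, l + (h + 1) // 2, l - h // 2
```

The sketch also makes the capacity problem visible: the expanded difference `2*h + b` doubles `h`, so large inter-attribute differences quickly overflow the attribute's value range, which is exactly what mapping the differences to small values is meant to avoid.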
Digital watermarking has been recognized as a useful technology for the copyright protection and authentication of digital information. However, few previous methods focus on the key content of the digital carrier. The idea of protecting the key content is more targeted and can be applied to different kinds of digital information, including text, image and video. In this paper, we take text as the research object and propose a text zero-watermarking method that uses keyword dense intervals (KDIs) as the key content. First, we construct the zero-watermarking model by introducing the concept of the KDI and giving a method for KDI extraction. Second, we design a detection model that includes secondary generation of the zero-watermark and a similarity computing method for the keyword distribution. Experiments show that the proposed method outperforms other available methods, especially under sentence-transformation and synonym-substitution attacks.
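The KDI idea can be illustrated with a minimal sketch: find intervals of the token stream that are dense in keywords, then hash their content into a zero-watermark without modifying the text at all. The window size, density criterion, and hash choice below are illustrative assumptions, not the paper's exact construction:

```python
import hashlib

def keyword_dense_intervals(tokens, keywords, win=5, min_hits=2):
    """Return (start, end) token-index intervals where a window of `win`
    tokens contains at least `min_hits` keywords (hypothetical criterion)."""
    hits = [i for i, t in enumerate(tokens) if t in keywords]
    intervals = []
    for i in hits:
        near = [j for j in hits if i <= j < i + win]
        if len(near) >= min_hits:
            iv = (near[0], near[-1])
            if not intervals or iv[0] > intervals[-1][1]:  # skip overlaps
                intervals.append(iv)
    return intervals

def zero_watermark(tokens, intervals):
    """Hash the key content (the KDIs) into a zero-watermark string;
    the carrier text itself is left untouched."""
    content = " ".join(" ".join(tokens[a:b + 1]) for a, b in intervals)
    return hashlib.sha256(content.encode()).hexdigest()
```

Because only the dense intervals feed the hash, edits outside them (a common situation under sentence transformation) leave the watermark unchanged, which is the intuition behind targeting key content.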
This study introduces a novel depth estimation method that automatically generates a plausible depth map from a single image of an unstructured environment. Our goal is to estimate a depth map with a more correct, rich, and distinct depth order, one that is both quantitatively accurate and visually pleasing. Building on the existing DepthTransfer algorithm, our approach transfers depth information at the level of superpixels from the most photometrically similar retrieval images under a non-parametric learning framework. Subsequently, we warp the corresponding superpixels concurrently at multiple scales, employing an improved SLIC technique to segment the RGBD images from coarse to fine. A modified cross bilateral filter is then used to refine the final depth field. For training and evaluation, we perform experiments on the popular Make3D dataset and demonstrate that our method outperforms the state of the art in both accuracy and computational efficiency. In particular, the final results show that, in qualitative evaluation, our results are visually more realistic and more immersive.
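The refinement step rests on the cross (joint) bilateral filter, which smooths the depth field while respecting edges of the guidance image. A 1-D toy version of the classical filter (not the paper's modified variant) is sketched below, with illustrative parameter values:

```python
import math

def cross_bilateral_1d(depth, guide, sigma_s=1.0, sigma_r=10.0, radius=2):
    """Cross bilateral filter: smooth `depth` using range weights computed
    from the guidance signal `guide` (e.g. image intensities), so depth
    discontinuities are kept aligned with image edges. 1-D toy version."""
    out = []
    for p in range(len(depth)):
        num = den = 0.0
        for q in range(max(0, p - radius), min(len(depth), p + radius + 1)):
            w = (math.exp(-((p - q) ** 2) / (2 * sigma_s ** 2))      # spatial
                 * math.exp(-((guide[p] - guide[q]) ** 2)
                            / (2 * sigma_r ** 2)))                   # range
            num += w * depth[q]
            den += w
        out.append(num / den)
    return out
```

On a signal with a sharp guidance edge, the filter averages depth values within each side of the edge but barely mixes across it, which is why it cleans up noisy transferred depth without blurring object boundaries.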
Proc. SPIE. 10033, Eighth International Conference on Digital Image Processing (ICDIP 2016)
KEYWORDS: Image compression, Digital filtering, Distortion, Linear filtering, Digital watermarking, Discrete wavelet transforms, Feature extraction, Image quality, Signal processing, Information security
Digital watermarking is an efficient technique for copyright protection in the current digital and network era. In this paper, a novel robust watermarking scheme is proposed based on singular value decomposition (SVD), Arnold scrambling (AS), the scale-invariant feature transform (SIFT) and a majority voting mechanism (MVM). The watermark is embedded into each image block three times in a novel way to enhance the robustness of the scheme, while Arnold scrambling is used to improve its security. During extraction, SIFT feature points are used to detect and correct possible geometric attacks, and majority voting is performed to improve the accuracy of the extracted watermark. Our analyses and experimental results demonstrate that the proposed watermarking scheme is not only robust to a wide range of common signal processing attacks (such as noise, compression and filtering attacks), but also resists geometric attacks well.
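Two of the building blocks, Arnold scrambling and majority voting, can be sketched in a few lines; block sizes and round counts here are illustrative, and the SVD embedding itself is not reproduced:

```python
def arnold_scramble(block, rounds=1):
    """Arnold cat map permutation of an N x N block (list of lists):
    (x, y) -> ((x + y) mod N, (x + 2y) mod N). The map is periodic in N,
    so applying it for a full period restores the block (invertibility)."""
    n = len(block)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = block[x][y]
        block = out
    return block

def majority_vote(copies):
    """Fuse several extracted watermark copies bit-by-bit: a bit is 1 if
    it is 1 in the majority of copies (here, at least 2 of 3)."""
    return [1 if sum(triple) >= 2 else 0 for triple in zip(*copies)]
```

Scrambling decorrelates the watermark before embedding (an attacker without the round count recovers only a permuted pattern), while voting over the three embedded copies corrects isolated bit errors caused by attacks.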
In this paper, a new robust multiple description image coding method is proposed that uses a modified interleaving sampling scheme and a modified interpolation method based on block compressed sensing. In the encoding process, the original image is decomposed into several sub-images by the modified interleaving sampling, and redundant bits are added to improve reconstruction accuracy. For each sub-image, a description is obtained by block compressed sensing (BCS). In the decoding process, the signal is reconstructed from the sparse measurements by an optimization algorithm. Our analysis and simulation results show that the proposed method is a balanced multiple description coding scheme with higher reconstruction accuracy and higher coding efficiency.
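The interleaving-sampling step can be illustrated by classical polyphase decomposition, which splits an image into sub-images of spatially interleaved pixels; the paper's modified variant adds redundancy, which this sketch omits:

```python
def interleave_split(img, step=2):
    """Polyphase (interleaved) sampling: split an image (list of rows) into
    step*step sub-images, where sub-image (i, j) takes every pixel (r, c)
    with r % step == i and c % step == j."""
    return {(i, j): [row[j::step] for row in img[i::step]]
            for i in range(step) for j in range(step)}
```

Because neighboring pixels are highly correlated, each sub-image is a coarse version of the whole picture; losing one description degrades quality gracefully instead of destroying a contiguous region, which is the point of balanced multiple description coding.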
Since web born-digital images have low resolution and dense text atoms, text-region over-merging and missed detection remain two open issues. In this paper, a novel iterative algorithm is proposed to locate and segment text regions. In each iteration, candidate text regions are generated by detecting Maximally Stable Extremal Regions (MSERs) with diminishing thresholds and categorized into groups based on a new similarity graph, and the text region groups are identified by applying several features and rules. With our proposed overlap checking method, the final well-segmented text regions are selected from these groups across all iterations. Experiments on the web born-digital image datasets used for the robust reading competitions at ICDAR 2011 and 2013 demonstrate that our scheme significantly reduces both the number of over-merged regions and the loss rate of target atoms, and that its overall performance exceeds the best methods in the two competitions in terms of recall and f-score, at the cost of slightly higher computational complexity.
In this paper, a novel background subtraction approach is proposed to prevent stationary foreground objects from being merged into the background during target detection and tracking. An improved background model is designed using virtual frames, which attenuates the blur that arises when an object moves again after remaining stationary for a long time. Moreover, the proposed model is fused with eigenbackgrounds to improve environmental adaptability. Our experimental results indicate that the proposed approach enhances the performance of target detection and tracking in intelligent surveillance and is superior to some state-of-the-art methods under the precision-recall measurement.
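For context, the classical running-average background model that such approaches extend can be sketched as follows; the virtual-frame and eigenbackground components of the paper are not reproduced, and the learning rate and threshold are illustrative. The sketch also shows the failure mode being addressed: a stationary object is steadily absorbed into `bg` by the update, which is what the improved model avoids.

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background update on grayscale images (lists of rows).
    Larger alpha adapts faster but absorbs stationary objects sooner."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=25):
    """Per-pixel foreground mask: 1 where the frame deviates from the model."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```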
Load modeling is recognized as a difficult issue in the field of power system digital simulation. The reliability of the simulation results depends on the accuracy of the load model, which in turn affects power system planning and decision making. To increase the accuracy of the load model, the composite loads of power-consuming industries are classified in this paper by their industry attributes, and their components are analyzed. A mathematical model of load composition is then established on the basis of typical daily load profiles, and an identification algorithm implemented in the C language is used to identify the parameters of composite loads from data collected during the corresponding characteristic time periods of the typical day. Based on the model vector machine theory and the identified parameters, the parameters of the composite load model of power-consuming industries are calculated by least-squares approximation, and a BP neural network is used to forecast the parameters of the composite loads of power-consuming industries. Finally, an example demonstrates the validity of the proposed scheme.
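The least-squares step can be illustrated for two industry classes: given each class's typical daily profile and the measured composite profile, the mixture weights follow from the normal equations. The profiles, weights, and two-class restriction below are a toy sketch (solved by Cramer's rule), not the paper's full model:

```python
def fit_load_composition(profiles, measured):
    """Least-squares estimate of weights (w1, w2) minimizing
    || w1*profiles[0] + w2*profiles[1] - measured ||^2
    via the 2x2 normal equations (A^T A) w = A^T b."""
    p1, p2 = profiles
    a11 = sum(x * x for x in p1)                       # entries of A^T A
    a12 = sum(x * y for x, y in zip(p1, p2))
    a22 = sum(y * y for y in p2)
    b1 = sum(x * m for x, m in zip(p1, measured))      # entries of A^T b
    b2 = sum(y * m for y, m in zip(p2, measured))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det
```

With more industry classes the same normal equations grow to a K x K system, which is where a general linear-algebra solver would replace the hand-written Cramer's rule.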