Compressed sensing (CS) theory shows that sparse signals can be reconstructed from far fewer measurements than the Nyquist sampling limit requires. Early CS algorithms implicitly assume that the coefficients in the sparsity domain are independently distributed, yet accounting for and exploiting the statistical dependencies present in sparse signals can improve recovery performance. Here, wavelets and their theoretical principles, together with structural statistical modeling of dependencies, are applied to improve feature optimization in the presence of non-linear mixtures. Sparsifying transforms such as the discrete wavelet transform (DWT) capture spatial dependencies, such as those in natural images: their hierarchical, multiscale subbands of frequency and orientation expose dependencies both across and within scales. Bayes least squares Gaussian scale mixtures (BLS-GSM) accurately describe the statistical dependencies of wavelet coefficients in images and can therefore be incorporated to model these dependencies and improve performance. In this work, sparsifying transforms and BLS-GSM are combined to model and account for dependency characteristics during the coefficient-weight construction at each iteration of a CS recovery algorithm. The resulting accuracy and performance improvements in image reconstruction are demonstrated both quantitatively and qualitatively.
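The weighted-iteration idea described above can be illustrated with a minimal, self-contained sketch. This is not the paper's BLS-GSM method: the per-coefficient weights below are derived from the current estimate's own magnitudes (iteratively reweighted l1-style shrinkage), standing in for the dependency-informed weights the abstract describes, and the problem is a toy 1D recovery rather than a wavelet-domain image reconstruction. All sizes and constants are illustrative assumptions.

```python
import numpy as np

# Toy compressed-sensing recovery with per-coefficient weighted shrinkage.
# Sketch only: weights come from the estimate's magnitudes, a stand-in for
# the dependency-informed (BLS-GSM-style) weights in the abstract.
rng = np.random.default_rng(0)

n, m, k = 128, 64, 5                       # signal length, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))

A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # Gaussian sensing matrix
y = A @ x_true                             # noiseless measurements

L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
lam = 0.01                                 # base regularization strength
x = np.zeros(n)
for _ in range(500):
    z = x - (A.T @ (A @ x - y)) / L        # gradient step on ||Ax - y||^2 / 2
    w = 1.0 / (np.abs(x) + 1e-2)           # small weight where evidence is strong
    x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # weighted shrink

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.4f}")
```

In the paper's setting, the weight for each wavelet coefficient would instead be informed by its statistical dependencies on parent and neighboring coefficients across scales, as modeled by BLS-GSM, rather than by its own magnitude alone.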