Multi-dimensional algorithms are hard to implement on classical platforms. Pipelining can exploit instruction-level
parallelism, but not data parallelism; threads optimize only within the restrictions they are given. Tiled
architectures add a dimension to the solution space: with a large local register store, data parallelism is handled, but
only up to a point. 3-D technologies are meant to add a dimension in the physical realization. Applied at the device level, they
make each computational node smaller; the interconnections become shorter, and the network is therefore condensed.
Such advantages are easily lost at higher implementation levels unless 3-D technologies such as multi-cores or chip
stacking are introduced there as well. 3-D technologies scale in space, whereas (partial) reconfiguration scales in time. The optimal
selection over the various implementation levels is algorithm dependent. This paper discusses these principles as
applied to the scaling of cellular neural networks (CNN). It illustrates how stacking of reconfigurable chips supports
many algorithmic requirements in a defect-insensitive manner. Further, the paper explores the potential of chip stacking
for multi-modal implementations in a reconfigurable approach to heterogeneous architectures for algorithm domains.
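The per-cell data parallelism that motivates tiled and stacked realizations can be made concrete with a minimal discrete-time CNN step. The sketch below follows the standard Chua-Yang state equation with an Euler update; the template values and grid sizes are illustrative assumptions, not values from this paper.

```python
def output(x):
    # Standard CNN piecewise-linear output function.
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step of the CNN state equation on an H x W grid.

    x: state grid, u: input grid (lists of lists); A, B: 3x3 templates;
    z: bias. Each cell update depends only on its 3x3 neighborhood, so
    all cells can be computed simultaneously -- the data parallelism
    that local register stores and chip stacking aim to exploit.
    """
    H, W = len(x), len(x[0])
    y = [[output(v) for v in row] for row in x]
    nxt = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            acc = -x[i][j] + z
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:  # zero boundary
                        acc += A[di + 1][dj + 1] * y[ni][nj]
                        acc += B[di + 1][dj + 1] * u[ni][nj]
            nxt[i][j] = x[i][j] + dt * acc
    return nxt
```

Because the inner double loop over cells has no cross-cell dependencies within a step, it maps directly onto an array of small computational nodes, one cell (or tile of cells) per node.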