Reservoir computing is a machine-learning scheme that solves computational problems by harnessing the dynamics of a driven dynamical system. In this contribution we investigate and quantitatively compare the two reservoir architectures that predominate today: delay systems and network models. We also investigate hybrid concepts called 'multiplexed networks', which incorporate elements of both approaches. By constructing reservoir computers with identical numbers of readout dimensions, we can compare their performance quantitatively. We find that the time-multiplexing procedure of the classical delay approach can be extended to hybrid delay-network systems without loss of computational power, which enables the construction of faster reservoir computers.