Watermarking software is difficult to evaluate because its desired features must be assessed in a multidimensional space. Furthermore, the required characteristics depend strongly on the scenario in which the watermarking scheme is to be deployed. Although several benchmarking systems have been proposed that include attacks as well as perceptual and statistical evaluations, none of them has become an established reference. Given the difficulty of this benchmarking problem, we propose a web-based, open-source suite of tools that would allow the watermarking research community to carry out fair and reproducible benchmarking tests. This paper describes the required basic architecture. A benchmarking session is parameterized with several options relevant to media, embedding, decoding, attacks, etc. The session is divided into tests (each of which may encompass several runs), and the results are collected and assembled into multidimensional curves.
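The session/test/run hierarchy outlined above could be sketched as follows. This is a minimal illustration, not the system's actual interface: all class names, fields, and the choice of bit-error rate as the collected metric are assumptions made for the example.

```python
# Hypothetical sketch of the session/test/run hierarchy described in the
# abstract. Names and the BER metric are illustrative assumptions only.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Run:
    """One execution: embed, attack, decode, and record a quality metric."""
    attack: str
    attack_strength: float
    bit_error_rate: float  # decoded payload compared against embedded payload

@dataclass
class Test:
    """A group of runs sharing one configuration (e.g. one attack family)."""
    name: str
    runs: list = field(default_factory=list)

    def average_ber(self):
        return mean(r.bit_error_rate for r in self.runs)

@dataclass
class Session:
    """A benchmarking session parameterized by media, embedder, decoder, attacks."""
    media: str
    embedder: str
    decoder: str
    tests: list = field(default_factory=list)

    def curve(self, test_name):
        """Collect (attack_strength, BER) points for one curve of the results."""
        test = next(t for t in self.tests if t.name == test_name)
        return sorted((r.attack_strength, r.bit_error_rate) for r in test.runs)

# Usage: one session with a single JPEG-compression test of three runs.
session = Session(media="lena.png", embedder="spread_spectrum", decoder="correlator")
jpeg = Test(name="jpeg")
for quality, ber in [(90, 0.01), (70, 0.05), (50, 0.12)]:
    jpeg.runs.append(Run(attack="jpeg", attack_strength=quality, bit_error_rate=ber))
session.tests.append(jpeg)
print(session.curve("jpeg"))
```

Aggregating runs per test and then collecting (attack strength, metric) pairs per session is one natural way to produce the multidimensional curves the paper refers to.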