Understanding the nature of data communications requires tools that collect data from the computer, from the network, and from their interaction; in particular, a tool is needed to better understand the processes that generate network traffic. The main components of the toolbox are an instrumented Linux kernel, a synthetic benchmark program, a system call tracer, and a set of analysis programs for post-processing. The data collected from the network can be synchronized with the data collected from the computer with adequate accuracy and without expensive hardware. Changes to the operating system (e.g. the scheduling algorithm) or to the network can be easily evaluated with the synthetic benchmark, in which the CPU/IO-intensity ratio and the number of processes of each type can be varied to emulate different real-world applications. The data and code sizes can also be modified to evaluate memory system performance over different working set sizes. Early measurements on Ethernet indicate that the toolbox is useful: they have revealed how network traffic is affected as the number of processes changes. Toolbox development continues in an ATM environment.