The problem is that timing the execution of either the TCP Read or the TCP Write node will not give you any meaningful timing values that you could use to calculate the data transfer speed, especially if you do it only once and not continuously in a loop.
Let's look at what happens when you issue a TCP Write (the timing is just a very rough and somewhat exaggerated indication to show the problem you will see):
0 us: TCP Write starts.
0.5 us: Calling some internal functions to translate the TCP/IP refnum to a Windows socket.
1 us: Calling the Windows socket function write() with the pointer to the data buffer.
2 us: write() allocates a socket buffer and copies the data into it.
8 us: write() signals to the Windows socket handler that the socket has a new buffer that needs to be transmitted over the network.
9 us: The Windows socket handler checks the status of the network card and, if it is valid, starts transferring the first chunk of memory into the network card buffer, signalling to write() that everything is fine.
9.5 us: The network card places the first byte of the data on the wire.
10 us: write() returns with success, as it has buffered the entire data internally in the socket and the socket handler indicated that it will handle the data since there is a valid connection.
11 us: TCP Write returns happily to your diagram, indicating success.
50 us: The Windows socket handler finishes transferring the last chunk of data to the network card, signalling internally in the socket that the connection is still valid and all data has been transmitted.
55 us: The last byte in the network card has been sent out over the wire.
So while your timing will indicate that the TCP Write took about 11 us, the actual data transfer in the network card took about 45 us and, incidentally, had almost no overlap at all with what your TCP Write timing measured.
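To make this concrete, here is a minimal C sketch of that naive measurement. It uses Winsock directly; send() is the actual Windows call behind what I loosely called write() above. The host address, port and buffer size are arbitrary placeholders, and error handling is stripped to the bare minimum. All the timed section really captures is the copy of the buffer into the socket:

    /* Naive single-shot timing of a socket send(), analogous to timing
       one TCP Write in LabVIEW. Link with ws2_32.lib. */
    #include <winsock2.h>
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        /* Placeholder connection setup, error checks omitted for brevity */
        SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(6340);                      /* placeholder port */
        addr.sin_addr.s_addr = inet_addr("192.168.1.10"); /* placeholder host */
        connect(s, (struct sockaddr *)&addr, sizeof(addr));

        static char buf[65536];              /* 64 kB of payload to send */
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        QueryPerformanceCounter(&t0);
        /* Returns as soon as the data is buffered in the socket,
           NOT when it has been transmitted on the wire */
        send(s, buf, (int)sizeof(buf), 0);
        QueryPerformanceCounter(&t1);

        /* This is the buffer copy time in microseconds; dividing the byte
           count by it does NOT give you the network transfer speed */
        printf("send() took %.1f us\n",
               (t1.QuadPart - t0.QuadPart) * 1e6 / freq.QuadPart);

        closesocket(s);
        WSACleanup();
        return 0;
    }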
This is what I mean by asynchronous operation. The execution time of TCP Write is more or less completely independent of the execution time of the real data transfer on the network wire. As such it will be way off from what you can see in the Task Manager, since the Task Manager shows internal network performance counters that are updated in the low-level network card driver and indicate more accurately what is really happening on the wire.
You can get a timing that matches the Windows network measurements more closely by writing lots and lots of data to the network in a loop and measuring the average data transfer speed, but even that is only accurate up to a point. What you measure at the application level is the TCP/IP payload; what the network layer measures is the actual length of the IP frames being transferred. Each TCP/IP frame carries an additional TCP header and IP header in front of your payload data, so the network always transfers more data than what you send and receive at the application level.
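As a rough sketch of that averaging approach (plain Winsock C again, assuming a socket connected as in the previous sketch): send a large, known amount of payload in a loop and divide by the elapsed time. Even then expect the result to sit a few percent below the Task Manager number because of the header overhead: with a typical 1460-byte TCP payload per Ethernet frame, the 20-byte TCP header, the 20-byte IP header and 18 bytes of Ethernet framing add up to roughly 1518 wire bytes per 1460 payload bytes, about 4% extra on the wire.

    /* Measure average payload throughput by sending 'total' bytes over an
       already connected socket in 64 kB chunks. Returns application payload
       MB/s; the wire carries roughly 4% more because of the TCP, IP and
       Ethernet headers. */
    #include <winsock2.h>
    #include <windows.h>

    double measure_send_throughput(SOCKET s, long long total)
    {
        static char buf[65536];
        long long sent = 0;
        LARGE_INTEGER freq, t0, t1;

        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t0);
        while (sent < total)
        {
            int n = send(s, buf, (int)sizeof(buf), 0);
            if (n <= 0)
                break;                /* error or connection closed */
            sent += n;
        }
        QueryPerformanceCounter(&t1);

        double secs = (double)(t1.QuadPart - t0.QuadPart) / freq.QuadPart;
        return sent / secs / 1e6;
    }

Run this with a total of a few hundred megabytes so the loop runs long enough for the socket buffering effects at the start and end to average out.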
Reading has a similar but in fact opposite issue. TCP Read will generally wait some time until data happens to arrive and will return when the requested amount of data has arrived (or an error has occurred), NOT when everything the sender transmitted has fully arrived, which could be a lot more.
So while your timing of the TCP Write mainly measures the execution time of copying the memory buffer into the socket, rather than the transfer time over the network, timing the TCP Read mainly measures the time the function spends waiting for data to arrive, plus the time to copy that data out of the socket into your LabVIEW string buffer. There is no real correlation with the network transfer at all here.
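For completeness, the read side as a sketch (recv() is the Winsock call underneath TCP Read): the loop below blocks until exactly the requested number of bytes has arrived, much like TCP Read in its standard mode. A timer wrapped around a call to this function mostly measures the waiting, which depends on when the sender happens to transmit, not on the wire speed:

    /* Read exactly 'want' bytes from a connected socket, similar to what
       TCP Read does in standard mode. Most of the time spent in here is
       blocking in recv() waiting for data to show up, plus the copy out
       of the socket buffer, not the network transfer itself. */
    #include <winsock2.h>

    int read_exact(SOCKET s, char *dst, int want)
    {
        int got = 0;
        while (got < want)
        {
            int n = recv(s, dst + got, want - got, 0); /* blocks for data */
            if (n <= 0)
                return n;             /* error or remote close */
            got += n;
        }
        return got;
    }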
Once you start doing this in a loop that can saturate the network link (or the LabVIEW data handling in either TCP Write or TCP Read), you get closer to a meaningful value, but it is still nothing you can easily compare with what the Task Manager reports for your network interface.