1. Can anyone explain why this data transfer over the network is downgraded to 23Mbit/s?
Yes - it is most likely down to how your disks are configured and how many disks are in the RAID sets. Can you provide more information about this, please?
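To see whether the disks are the bottleneck, it helps to measure raw sequential throughput on the server itself, with the network out of the picture. Here's a rough sketch in Python (the function name and sizes are my own; for serious array benchmarking you'd use a proper tool like bonnie++ or iozone, since the OS page cache and a short run like this only give a ballpark figure):

```python
import os
import tempfile
import time

def disk_write_mb_s(size_mb: int = 64, block_kb: int = 1024) -> float:
    """Rough sequential-write throughput in MB/s.

    Illustrative only: fsync forces the data to disk, but a short
    single-stream test is not a substitute for a real benchmark tool.
    """
    block = b"\0" * (block_kb * 1024)
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(size_mb * 1024 // block_kb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # don't stop the clock until it's on disk
        elapsed = time.perf_counter() - start
        return size_mb / elapsed
    finally:
        os.remove(path)

print(f"sequential write: {disk_write_mb_s():.1f} MB/s")
```

If this number (times 8, to get Mbit/s) is in the same region as your 23Mbit/s transfer rate, the RAID set is the place to look.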
2. Is 23Mbit/s normal real world speed for 100Mbit/s network? What is your experience?
It is unlikely that the network is the problem - unless you have cheap and nasty NICs and cheap and nasty switches that simply can't cope with the workload.
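You can rule the network in or out by testing it in isolation. Between two hosts the usual tool is iperf; if you just want to confirm the TCP stack itself is nowhere near the bottleneck, a quick loopback sketch like the one below works (this is my own illustration - loopback measures the stack, not the wire, so treat the number as an upper bound, and use iperf between the actual server and client for the real test):

```python
import socket
import threading
import time

def loopback_mbit_s(total_mb: int = 32) -> float:
    """Push bytes through a local TCP socket and report Mbit/s.

    Loopback only: this shows what the TCP stack can do on this box,
    not what the physical link delivers.
    """
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # any free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink():
        # Receive and discard everything the client sends.
        conn, _ = srv.accept()
        while conn.recv(65536):
            pass
        conn.close()

    t = threading.Thread(target=sink)
    t.start()

    cli = socket.create_connection(("127.0.0.1", port))
    chunk = b"\0" * 65536
    start = time.perf_counter()
    for _ in range(total_mb * 1024 * 1024 // len(chunk)):
        cli.sendall(chunk)
    cli.close()
    t.join()
    srv.close()
    elapsed = time.perf_counter() - start
    return total_mb * 8 / elapsed

print(f"loopback TCP: {loopback_mbit_s():.0f} Mbit/s")
```

If iperf between the two real machines shows something close to 90-plus Mbit/s on your 100Mbit/s link, the network is fine and the disks are the culprit.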
3. Does this mean that, to improve the overall data transfer from server to end user, network speed is more important than disk block-level data access speed?
You'll see this question (or a variation of it) asked a lot at EE - "SATA II is 3Gb/s, so why can't I get 3Gb/s?" - that sort of thing. Real-world disk performance is *always* slower than the speed of the interface to the disk. You get good disk performance by aggregating multiple drives into a RAID set - which also protects the data. If you want to improve overall performance, the first place to look is the disk arrays.
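It's worth doing the unit conversion explicitly, because "Mbit/s" (how links are rated) and "MB/s" (what a file copy reports) differ by a factor of 8, and that trips a lot of people up. A quick sanity check on the numbers in this question:

```python
def mbit_to_mb(mbit_per_s: float) -> float:
    """Convert a link rate in Mbit/s to a file-transfer rate in MB/s."""
    return mbit_per_s / 8

# The wire rate is a ceiling, not a promise:
print(mbit_to_mb(100))   # 12.5  -> best case on a 100 Mbit/s link
print(mbit_to_mb(23))    # ~2.9  -> what your 23 Mbit/s transfer feels like
print(mbit_to_mb(3000))  # 375   -> SATA II's 3 Gb/s ceiling, which no
                         #          single spinning disk comes close to sustaining
```

So your transfer is running at roughly a quarter of what the 100Mbit/s link could carry - which is exactly the pattern you see when the disks, not the network, are setting the pace.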