Question : Comparison of data transfer rates between SATA and SCSI over a network

I'm planning to expand the disk storage in one of our servers. My idea is to simply attach a new mirrored disk set so that the server has more disk space. I ran a small test to compare data transfer rates between the two disk configurations below. The two configurations showed little difference between each other, but the transfer rate over the network was much lower than I expected.

1.
HP SCSI Ultra 320 (2560 Mbit/s)
HP Smart Array 532 (1028 Mbit/s)
Configured as RAID 5.

2.
WD SATA 3.0 (3 Gbit/s)
Norcor SATA controller (3 Gbit/s)
PCI-X slot (~1064 Mbit/s)

Both were tested over a 100 Mbit/s network.

Test: transfer 170 MB of data over the network to a host connected via 100 Mbit/s Ethernet.

The results were close: 55 s for setup 1 versus 58 s for setup 2, which works out to roughly 23-25 Mbit/s of effective throughput in both cases.

As the test shows, the transfer rates over the 100 Mbit/s network were not much different between the two setups. The controllers between the disk sets and the system board are roughly equivalent (Smart Array at 1028 Mbit/s vs the PCI-X slot at ~1064 Mbit/s), so that much is understandable. However, the transfer rate over the network was only around 23 Mbit/s on 100 Mbit/s Ethernet.
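For reference, the effective throughput can be worked out directly from the measured times. A quick sketch in Python (treating 170 MB as 170 × 10^6 bytes, which is an assumption about how the size was counted):

```python
def throughput_mbit_per_s(bytes_transferred, seconds):
    """Effective throughput in Mbit/s (1 Mbit = 10**6 bits)."""
    return bytes_transferred * 8 / seconds / 1e6

size = 170 * 10**6  # 170 MB payload

print(round(throughput_mbit_per_s(size, 55), 1))  # setup 1 -> 24.7
print(round(throughput_mbit_per_s(size, 58), 1))  # setup 2 -> 23.4
```

Either way, the wire is nominally capable of 100 Mbit/s, so roughly three-quarters of the bandwidth is going unused.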

Here are my questions.

1. Can anyone explain why the transfer rate over the network drops to around 23 Mbit/s?

2. Is 23 Mbit/s a normal real-world speed for a 100 Mbit/s network? What is your experience?

3. Does this mean that, to improve overall data transfer from server to end user, network speed matters more than disk block-level access speed?

Thanks very much in advance.



Answer : Comparison of data transfer rates between SATA and SCSI over a network

1. Can anyone explain why the transfer rate over the network drops to around 23 Mbit/s?
Yes - it is most likely the way your disks are configured and the number of disks in the RAID sets. Can you provide more information about this, please?

2. Is 23 Mbit/s a normal real-world speed for a 100 Mbit/s network? What is your experience?
It is unlikely that the network is the problem - unless you have cheap and nasty NICs and cheap and nasty switches that simply can't cope with the workload.
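One way to settle whether the network or the disks are at fault is to measure raw TCP throughput with no disk involved - iperf between the two hosts is the usual tool. As a rough illustration of the idea, here is a minimal Python sketch that does the same thing over loopback (the port number is arbitrary; over loopback you measure the TCP stack rather than the wire, so for a real test run the server and client halves on the two actual hosts):

```python
import socket
import threading
import time

def serve(port, nbytes, result):
    """Accept one connection and count the bytes received."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    total = 0
    while total < nbytes:
        chunk = conn.recv(65536)
        if not chunk:
            break
        total += len(chunk)
    conn.close()
    srv.close()
    result["received"] = total

def measure(port=5001, nbytes=10 * 2**20):
    """Send nbytes of zeros over TCP and return the rate in Mbit/s."""
    result = {}
    t = threading.Thread(target=serve, args=(port, nbytes, result))
    t.start()
    time.sleep(0.2)  # give the server a moment to start listening
    cli = socket.create_connection(("127.0.0.1", port))
    payload = b"\0" * 65536
    sent = 0
    start = time.time()
    while sent < nbytes:
        cli.sendall(payload)
        sent += len(payload)
    cli.close()
    elapsed = max(time.time() - start, 1e-9)
    t.join()
    return sent * 8 / elapsed / 1e6
```

If a memory-to-memory transfer like this between the real hosts runs close to wire speed, the bottleneck is the disks; if it also sits near 23 Mbit/s, look at the NICs, switch, or duplex settings.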

3. Does this mean that, to improve overall data transfer from server to end user, network speed matters more than disk block-level access speed?
You'll see this question (or a variation of it) asked a lot at EE - "SATA II is 3 Gb/s, so why can't I get 3 Gb/s?" - that sort of thing. Disk performance is *always* slower than the performance of the connections to the disk. You get good disk performance by aggregating multiple drives into a RAID set - which also protects the data. If you want to improve overall performance, the first place to look is at the disk arrays.
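To make the "slowest link" point concrete, here is a small sketch using the nominal figures from the question. Note that real sustained disk throughput is usually far below the interface rating, which is why the disks - not the 100 Mbit/s wire - can still end up as the practical bottleneck:

```python
# Nominal link speeds in Mbit/s, taken from the question (setup 1).
# End-to-end throughput is capped by the weakest stage; on paper that
# is the network, but measured disk throughput is typically far below
# the interface rating, so in practice the disks may cap it instead.
stages = {
    "disk interface": 2560,   # Ultra320 SCSI
    "controller/bus": 1028,   # Smart Array 532
    "network": 100,           # Fast Ethernet
}
bottleneck = min(stages, key=stages.get)
print(bottleneck, stages[bottleneck])  # -> network 100
```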