Take a look at this one instead of the one below.
GTX 680 2-way SLI: PCI-E 2.0 vs PCI-E 3.0

http://cdn.overclock.net/1/10/10b499c8_DIFFERENCE.jpeg
A GTX 680 2-way SLI setup benchmarks normally:
the difference is roughly 8 fps up to 30+ fps
between PCI-E 2.0 and PCI-E 3.0.
As for the 4-way SLI results below, the difference between PCI-E 2.0 and 3.0 appears to come down to this:
PCI-E 2.0 16x/8x/8x/8x
PCI-E 3.0 16x/8x/8x/8x
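For context, the theoretical per-direction bandwidth of those slot layouts can be worked out from each generation's link rate and line coding (5 GT/s with 8b/10b for PCI-E 2.0, 8 GT/s with 128b/130b for PCI-E 3.0). A quick Python sketch, using only the published link rates:

```python
# Theoretical per-direction bandwidth for the slot layouts listed above.
# These are raw link rates after encoding overhead; real-world throughput
# is lower once protocol overhead is included.

GT_PER_S = {"2.0": 5.0, "3.0": 8.0}           # giga-transfers/s per lane
ENCODING = {"2.0": 8 / 10, "3.0": 128 / 130}  # 8b/10b vs 128b/130b line coding

def slot_bandwidth_gbps(gen: str, lanes: int) -> float:
    """GB/s per direction for a given PCI-E generation and lane count."""
    bits_per_s = GT_PER_S[gen] * 1e9 * ENCODING[gen] * lanes
    return bits_per_s / 8 / 1e9

for gen in ("2.0", "3.0"):
    for lanes in (16, 8):
        print(f"PCI-E {gen} x{lanes}: {slot_bandwidth_gbps(gen, lanes):.2f} GB/s")
```

A 2.0 x8 slot tops out at about 4 GB/s while a 3.0 x8 slot reaches roughly 7.9 GB/s, which is why the quoted post treats PCI-E 3.0 at 8x and PCI-E 2.0 at 16x as roughly equivalent.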
The reasoning is as follows:
PCI-E 2.0 vs 3.0: who says there's no difference? The latest test results are out, and they're a real eye-opener.
After several months,
the secret is out. PCI-E 2.0 vs 3.0: who says there's no difference?
In short, just look at the test results; no commentary needed. Hehe
PCI-E 2.0 vs PCI-E 3.0 BATTLE ROYAL!!?!
Well, the results of my PCI Express 2.0 versus 3.0 comparison on my 4-way SLI GTX 680 FW900 Surround setup are in. The results are so incredible I had to start the tests over from scratch and run them multiple times for confirmation!
Test setup:
3960X @ 5.0 GHz (temporary slow speed)
Asus Rampage IV Extreme with PCI-E slots running 16x/8x/8x/8x
(4) EVGA GTX 680's running 1191 MHz core, 3402 MHz memory
nVidia Driver 301.10 with PCI-E 3.0 registry adjustment turned on and off for each applicable test
GPU-Z 0.6.0

After each PCI-E setting change, the link speed was confirmed with GPU-Z: PCI-E 2.0

PCI-E 3.0

All settings in the nVidia control panel, in-game, in the benchmarks, and in EVGA Precision were left UNTOUCHED between benchmark runs. The only setting adjusted was PCI-E 2.0 versus 3.0, switched back and forth for confirmation (with reboots, obviously, for the registry edit).

I kid you not, that is how much PCI-E 2.0 running at 16x/8x/8x/8x bottlenecks BF3 and Heaven 2.5 at these resolutions compared with PCI-E 3.0. I attribute this to the massive bandwidth being transferred over the PCI-E bus. We are talking 4-way SLI at up to 10 megapixels in alternate frame rendering. Entire frames at high FPS are being swapped, and PCI-E 2.0 falls on its face.
The interesting part was that while running PCI-E 2.0, GPU utilization dropped way down, as is typically seen when you are CPU limited. In this instance I am not CPU limited, nor GPU limited. We are really at a point now where you can be PCI-E limited unless you go PCI-E 3.0 8x (equivalent to 16x PCI-E 2.0) or faster on all GPUs in the system. GPU utilization dropped into the ~50% range because PCI-E 2.0 was choking the cards to death. As soon as I enabled PCI-E 3.0, GPU utilization skyrocketed to 95+% on all cards. I was going to run more benchmarks and games, but the results are such blow-outs that it seems pretty pointless to do any more. This may interest those out there running these new PCI-E 3.0 GPUs who think they are CPU limited (below 95% GPU utilization) yet might actually have PCI-E bandwidth issues.
Down to the nitty-gritty: if you run a single GPU, yes, a single 16x PCI-E 2.0 slot will be fine. When you start to run multiple GPUs and/or run these new cards at 8x, especially in Surround/Eyefinity, make sure to get PCI-E 3.0.
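To make the "entire frames being swapped" point concrete, here is a back-of-the-envelope estimate of the framebuffer traffic in 4-way alternate frame rendering at the roughly 10-megapixel Surround resolution the post mentions. The 32-bit color assumption and the round fps figures are mine, not the author's:

```python
# Back-of-the-envelope estimate of the frame traffic described above:
# 4-way SLI in alternate frame rendering at ~10 megapixels, where finished
# frames cross the PCI-E bus. Assumes an RGBA8 (4 bytes/pixel) framebuffer;
# real traffic also includes textures, geometry, and sync, so this is a floor.

BYTES_PER_PIXEL = 4    # assumed 32-bit color
MEGAPIXELS = 10e6      # triple-screen Surround resolution from the post

def frame_traffic_gbps(fps: float) -> float:
    """GB/s of raw framebuffer data moved at a given frame rate."""
    return MEGAPIXELS * BYTES_PER_PIXEL * fps / 1e9

PCIE2_X8_GBPS = 4.0    # theoretical per-direction bandwidth of a 2.0 x8 slot
for fps in (30, 60, 100):
    gb = frame_traffic_gbps(fps)
    share = gb / PCIE2_X8_GBPS * 100
    print(f"{fps:>3} fps -> {gb:.1f} GB/s of frames ({share:.0f}% of a PCI-E 2.0 x8 link)")
```

At 60 fps the raw frame data alone is about 2.4 GB/s, more than half of a PCI-E 2.0 x8 link before any textures, geometry, or SLI synchronization traffic, so the reported stalls on 2.0 are plausible.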
Sources: http://www.overclock.net and http://www.evga.com
http://www.overclock.net/t/1220962/v...r-edition-2012
http://www.evga.com/forums/tm.aspx?m...e=1&print=true

PCI-E 2.0 vs 3.0:
an fps difference of nearly 61+ fps.
No commentary needed on this one.
Real play, real use, real tests.
Please use your own judgment when viewing; careful, it could be hazardous to you. Hehe