VOGONS

First post, by xplus93

User metadata
Rank: Oldbie

Seriously? Why not? Having GPUs socketed and handled more like the processors or FPUs of old would be a genius concept. It would be easier to mount larger, quieter heatsinks and custom cooling solutions, and easy to have cheaply upgradeable RAM. If Intel can have the GPU on-chip, then why can't we have socketed GPUs?

The bandwidth issues have been resolved, since that's the only thing Intel has focused on for the past 3-4 years. Even they know CPUs have reached the practical limits of usable speed (not counting cryptography/security). So we should easily be able to develop a modern local bus (yeah, I know it's called QPI) that can be dedicated to "visual processing". The only downside I see is a limit to the number of sockets on a board, but that isn't too dissimilar to the number of x16 slots a motherboard provides.
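
For a rough sense of scale, here's a back-of-the-envelope comparison of the two links, per direction (just a sketch: it assumes the fastest 9.6 GT/s QPI variant and a plain Gen3 x16 slot, nothing exotic):

```python
# Back-of-the-envelope peak bandwidth per direction.
# Assumes the fastest QPI variant (9.6 GT/s, 2 bytes of payload per transfer
# per direction) and a PCIe 3.0 x16 link (8 GT/s per lane, 128b/130b encoding).

qpi_gt_s = 9.6                       # giga-transfers per second
qpi_bytes_per_transfer = 2           # 16 data bits per direction per transfer
qpi_gb_s = qpi_gt_s * qpi_bytes_per_transfer

pcie3_gt_s = 8.0                     # giga-transfers per second, per lane
pcie3_encoding = 128 / 130           # 128b/130b line-code efficiency
pcie3_lanes = 16
pcie3_gb_s = pcie3_gt_s * pcie3_encoding / 8 * pcie3_lanes  # bits -> bytes

print(f"QPI @ 9.6 GT/s: ~{qpi_gb_s:.1f} GB/s per direction")
print(f"PCIe 3.0 x16:   ~{pcie3_gb_s:.2f} GB/s per direction")
```

So per direction the two are within about a factor of two of each other.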

Also imagine the new form factors that could be developed. A liquid-cooled six-socket board (2x CPU, 4x GPU) could make for a really thin, flat system. Not to mention that if you mount the board upside-down, you remove the biggest issue in liquid cooling (leakage).

Imagine a 4-5 inch thick slab sitting on your desk with the board mounted in the middle, radiators on either side and an ultra-wide screen sitting/mounted above it. Then imagine that slab having the power of a full ATX system that sounds like a jet engine.

(God-damnit, I sound like Steve Jobs. Don't I? Well, the man had at least a few good ideas)

XPS 466V|486-DX2|64MB|#9 GXE 1MB|SB32 PnP
Presario 4814|PMMX-233|128MB|Trio64
XPS R450|PII-450|384MB|TNT2 Pro| TB Montego
XPS B1000r|PIII-1GHz|512MB|GF2 PRO 64MB|SB Live!
XPS Gen2|P4 EE 3.4|2GB|GF 6800 GT OC|Audigy 2

Reply 1 of 27, by subhuman@xgtx

User metadata
Rank: Oldbie

Lower profit margins, higher production costs and an increased chance of failure per card sold that don't make it worthwhile for the consumer market?
BGA sockets are already with us, but they are mostly used to debug prototype hardware in the very, very early stages rather than to fulfill the desires of the masses :p


Reply 2 of 27, by luckybob

User metadata
Rank: l33t

Have you seen the socket for the newest AMD chips?

PCB is cheap. There is nothing on it to make it worthwhile.

It is a mistake to think you can solve any major problems just with potatoes.

Reply 3 of 27, by xplus93

User metadata
Rank: Oldbie
luckybob wrote:

Have you seen the socket for the newest AMD chips?

PCB is cheap. There is nothing on it to make it worthwhile.

From an end-design perspective the benefits are amazing. Again, I'd like to point out that I'm the furthest thing from an Apple fanboy, but the new Mac "Pro" proves that a unified cooling design can be more efficient and compact, even though others have done it before, including Apple themselves (G4 Cube).

subhuman@xgtx wrote:

Lower profit margins, higher production costs and an increased chance of failure per card sold that don't make it worthwhile for the consumer market?
BGA sockets are already with us, but they are mostly used to debug prototype hardware in the very, very early stages rather than to fulfill the desires of the masses :p

I'm guessing those drawbacks are referencing traditional expansion cards. Otherwise you are way off.

A chip on an interposer has far fewer issues when it comes to manufacturing and failure rate, especially when you consider longevity (warped PCBs, broken traces...). When universal formats are introduced, everybody in the marketplace wins, and I think we are coming to a point where acceptable throughput is possible if the design effort were put towards it. That's the biggest issue of them all: linking the VRAM and GPU at an acceptable speed while keeping them separately upgradeable. Or we could just have VRAM on-chip and not care, which is how it would most likely end up. We have completely standardized digital video standards; you can see the result of this in MXM cards. So why can't we have the fully realized desktop equivalent of an MXM card? Maybe the issue is with die size? But Intel went with cards first when they wanted to move towards on-die cache, and then they went back to chips because it was simply better.
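
To put rough numbers on why that GPU-to-VRAM link is the hard part, here's a quick sketch (the figures assume a typical dual-channel DDR3-1600 desktop and a mid-range 256-bit GDDR5 card at 6 Gb/s per pin; both are illustrative, not specific parts):

```python
# Rough peak-bandwidth comparison: socketed system RAM vs. soldered GDDR5.
# Figures assume dual-channel DDR3-1600 and a 256-bit GDDR5 bus at 6 Gb/s
# per pin -- illustrative values, not any specific board or card.

ddr3_rate_mt_s = 1600                # mega-transfers per second per channel
ddr3_bus_bits = 64 * 2               # two 64-bit channels
ddr3_gb_s = ddr3_rate_mt_s * ddr3_bus_bits / 8 / 1000

gddr5_gbps_per_pin = 6               # gigabits per second per data pin
gddr5_bus_bits = 256
gddr5_gb_s = gddr5_gbps_per_pin * gddr5_bus_bits / 8

print(f"Dual-channel DDR3-1600:  ~{ddr3_gb_s:.1f} GB/s")
print(f"256-bit GDDR5 @ 6 Gb/s:  ~{gddr5_gb_s:.0f} GB/s")
```

The soldered, wide GDDR5 bus ends up close to an order of magnitude ahead, which is the gap any socketed or slotted VRAM scheme would have to close.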

XPS 466V|486-DX2|64MB|#9 GXE 1MB|SB32 PnP
Presario 4814|PMMX-233|128MB|Trio64
XPS R450|PII-450|384MB|TNT2 Pro| TB Montego
XPS B1000r|PIII-1GHz|512MB|GF2 PRO 64MB|SB Live!
XPS Gen2|P4 EE 3.4|2GB|GF 6800 GT OC|Audigy 2

Reply 4 of 27, by Kreshna Aryaguna Nurzaman

User metadata
Rank: l33t

Will the socketed GPU use system RAM?

Never thought this thread would be that long, but now, for something different.....
Kreshna Aryaguna Nurzaman.

Reply 5 of 27, by xplus93

User metadata
Rank: Oldbie
Kreshna Aryaguna Nurzaman wrote:

Will the socketed GPU use system RAM?

If I were designing it, that would be a secondary option, since from my understanding there is still a significant difference between video and system RAM technologies and their throughput. I see it going the same path as cache in the P5/P6 era: we had COASt modules, then the Pentium Pro with a separate cache die in the same package, then cartridges because of the lower manufacturing cost compared to the Pentium Pro, and after that it became possible to just put it all on one die. I know AMD has APUs and Intel has on-chip graphics already, but that is only a half measure compared to the benefits a fully modular small-package solution could offer. It could even go as far as blurring the lines further when looking at ultra-high-end portables, although that is more in the realm of fantasy.

XPS 466V|486-DX2|64MB|#9 GXE 1MB|SB32 PnP
Presario 4814|PMMX-233|128MB|Trio64
XPS R450|PII-450|384MB|TNT2 Pro| TB Montego
XPS B1000r|PIII-1GHz|512MB|GF2 PRO 64MB|SB Live!
XPS Gen2|P4 EE 3.4|2GB|GF 6800 GT OC|Audigy 2

Reply 6 of 27, by Kreshna Aryaguna Nurzaman

User metadata
Rank: l33t
xplus93 wrote:
Kreshna Aryaguna Nurzaman wrote:

Will the socketed GPU use system RAM?

If I were designing it, that would be a secondary option, since from my understanding there is still a significant difference between video and system RAM technologies and their throughput.

So... the primary option is for the socketed GPU to have its own RAM. As such, the motherboard would have to be equipped with video RAM slots in addition to system RAM slots.

Oookay...

Never thought this thread would be that long, but now, for something different.....
Kreshna Aryaguna Nurzaman.

Reply 7 of 27, by subhuman@xgtx

User metadata
Rank: Oldbie
xplus93 wrote:

So why can't we have the fully realized desktop equivalent of an MXM card?

A format whose cards are 85% the same as their desktop counterparts (albeit with a soldered BGA GPU package and RAM chips), aside from their smaller form factor?

🤣


Reply 9 of 27, by xplus93

User metadata
Rank: Oldbie
Jade Falcon wrote:

This has bad idea written all over it; a GPU is not a CPU. The video card PCB is nowhere near the cost of a mobo.

Also
[image: rampage_bga.jpg]

The only major difference is that GPUs are more closely tied to their memory and usually still have DACs to drive VGA monitors, etc. Move the DAC to the motherboard (like Intel did), standardize the power supply even more, and solve the memory throughput problem. At the end of the day it's just a piece of silicon.

A lower-end board like this one makes it a bit easier to see everything. You've got the GPU (red), memory (blue), VRM (yellow; possibly just for the memory), and the analog section (purple; includes support circuitry for the analog output, and it looks like the main voltage-regulator section is here as well).

Motherboards already have power supply circuitry for the CPU and RAM. It shouldn't be much trouble to add another section or VRM to handle the GPU. As I've been saying, the real difficulty I see is in implementing upgradeable VRAM. The other would be convincing Intel to design a chipset purely to add features that support a competing product.

Attachments

  • photo (14).JPG (1.3 MiB, fair use/fair dealing exception)

XPS 466V|486-DX2|64MB|#9 GXE 1MB|SB32 PnP
Presario 4814|PMMX-233|128MB|Trio64
XPS R450|PII-450|384MB|TNT2 Pro| TB Montego
XPS B1000r|PIII-1GHz|512MB|GF2 PRO 64MB|SB Live!
XPS Gen2|P4 EE 3.4|2GB|GF 6800 GT OC|Audigy 2

Reply 10 of 27, by Jade Falcon

User metadata
Rank: BANNED

Most if not all new video cards don't have a DAC.

Also, this is just a bad, bad, bad idea. It's not cost effective, and given the big differences between GPUs it will never work as intended. To make it work, one would end up having to change everything but the PCB every generation.
Even then you'd have to change the RAM for different cores. You end up paying way more in the long run and the only benefit would be simpler repairs, because every core needs different VRMs, which would mean the VRMs would have to be built like a motherboard's, upping the cost. Different cores need different RAM configurations too, and a different PCB configuration as a result.

This indeed can be done and has been; most dev cards are socketed. But it will never work how you want it to. It's like running two PCI Voodoo5 5500s in SLI: it can technically be done, but it will be slower.

Reply 11 of 27, by luckybob

User metadata
Rank: l33t
Jade Falcon wrote:

This indeed can be done and has been; most dev cards are socketed. But it will never work how you want it to. It's like running two PCI Voodoo5 5500s in SLI: it can technically be done, but it will be slower.

I'm curious to know if you have ever actually done that. I'm genuinely curious.

It is a mistake to think you can solve any major problems just with potatoes.

Reply 12 of 27, by xplus93

User metadata
Rank: Oldbie
Jade Falcon wrote:

Most if not all new video cards don't have a DAC.

Also, this is just a bad, bad, bad idea. It's not cost effective, and given the big differences between GPUs it will never work as intended. To make it work, one would end up having to change everything but the PCB every generation.
Even then you'd have to change the RAM for different cores. You end up paying way more in the long run and the only benefit would be simpler repairs, because every core needs different VRMs, which would mean the VRMs would have to be built like a motherboard's, upping the cost. Different cores need different RAM configurations too, and a different PCB configuration as a result.

This indeed can be done and has been; most dev cards are socketed. But it will never work how you want it to. It's like running two PCI Voodoo5 5500s in SLI: it can technically be done, but it will be slower.

The DAC being gone just makes it more possible, and GPU RAM is very standardized; or it could be on an adjacent die, like the cache in a Pentium Pro. As for the rest, I guess I wasn't clear. What I'm thinking of is a new standard: a chip with data, video and possibly memory lines coming out of it (already being done to a certain extent). Yeah, the VRM would most likely need to be removable. As far as costs go, I'll agree motherboard prices would go up to almost match multi-socket workstation boards, but GPU costs would go down as well. There are a lot of benefits too: a shorter path to the CPU and more freedom for cooling options, for starters. Plus the supercomputing industry would benefit.

Hmmm, here's a first step.

https://en.wikipedia.org/wiki/LGA_3647

EDIT: Actually more than a first step; that's EXACTLY what I'm talking about.

https://en.wikipedia.org/wiki/Xeon_Phi#Knights_Landing

Knights Landing will be built using up to 72 Airmont (Atom) cores with four threads per core,[66][67] using the LGA 3647 socket,[68] supporting up to 384 GB of "far" DDR4 RAM and 8–16 GB of stacked "near" 3D MCDRAM, a version of High Bandwidth Memory. Each core will have two 512-bit vector units and will support AVX-512 SIMD instructions, specifically the Intel AVX-512 Foundational Instructions (AVX-512F) with Intel AVX-512 Conflict Detection Instructions (AVX-512CD), Intel AVX-512 Exponential and Reciprocal Instructions (AVX-512ER), and Intel AVX-512 Prefetch Instructions (AVX-512PF).
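
As a rough illustration of why a socketed part like that plays in GPU territory, here's a back-of-the-envelope peak-throughput estimate based on the figures quoted above (the clock speed is my own assumption for illustration, not something from the article):

```python
# Rough peak FP64 throughput for a Knights Landing-style part, using the
# figures quoted above: 72 cores, two 512-bit vector units per core.
# The clock speed below is an assumption for illustration only.

cores = 72
vpus_per_core = 2
fp64_lanes = 512 // 64               # 8 doubles per 512-bit vector unit
flops_per_fma = 2                    # a fused multiply-add counts as 2 FLOPs
clock_ghz = 1.4                      # assumed AVX clock, purely illustrative

peak_tflops = cores * vpus_per_core * fp64_lanes * flops_per_fma * clock_ghz / 1000
print(f"Peak FP64: ~{peak_tflops:.1f} TFLOPS")
```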

XPS 466V|486-DX2|64MB|#9 GXE 1MB|SB32 PnP
Presario 4814|PMMX-233|128MB|Trio64
XPS R450|PII-450|384MB|TNT2 Pro| TB Montego
XPS B1000r|PIII-1GHz|512MB|GF2 PRO 64MB|SB Live!
XPS Gen2|P4 EE 3.4|2GB|GF 6800 GT OC|Audigy 2

Reply 13 of 27, by cyclone3d

User metadata
Rank: l33t++

I think it would end up being more expensive overall in the long run.

If you have the boards separate from the GPUs and RAM, then the mfgs have to make a huge guesstimate on how many of each they need to make.

And for each board, you would be stuck with ONE generation of GPUs, and not much better for the RAM, especially when you consider how speeds increase over the lifetime of a particular type of RAM.

And the boards would ALL have to support the maximum power draw of any GPU the mfgs are going to make over the life of the socket.

You would end up paying way more and mfgs would end up milking upgrades even more than they do now.

And people would no doubt end up breaking a bunch more hardware than they already do.

The whole process of making GPUs modularly upgradable like that would open up a whole new can of worms.

Mass manufacturing complete boards is much, much cheaper for the consumer than a piece-it-together setup.

Oh, and another thing I forgot: either the GPUs themselves or the GPU boards would have to have a firmware chip on them. Probably the GPU chips, because... how would you update the firmware to be able to use your new GPU without the GPU chip already in it? And that would add even more problems, with Joe Consumer bricking more hardware and then RMA-ing or returning it because he can't figure out how to get it to work.

It would just make higher end hardware even more of a niche market and thus raise the prices even more. I don't see the mfgs ever doing something like this.

Yamaha modified setupds and drivers
Yamaha XG repository
YMF7x4 Guide
Aopen AW744L II SB-LINK

Reply 14 of 27, by xplus93

User metadata
Rank: Oldbie

The point isn't modularity of the graphics processor though, apart from cooling. Separate RAM would be neat but, as I've said, unlikely and with little benefit. The point is to lose the expansion cards entirely and add modularity to the base system. GPUs are the only thing that truly needs PCIe x16 outside of the enterprise market.

A lot of the drawbacks are being created by poor understanding. Why do you think the socket has to support only one generation? It's called abstraction (at least in software): power and data in, pretty pictures out. That's all it needs to be as far as board/chipset manufacturers are concerned. There's no issue with VBIOS flashing, since your main CPU would have basic graphics already.

You do have a point that board traces would have to be designed to handle somewhat higher loads, though a little less than now, since you don't have to power an entire card. Shorter paths generally mean speed AND power-efficiency gains: with a longer path you have more pipe to fill before the bits start flowing, and you're working with leaky pipes, so less pipe means less leakage. The GPU manufacturers will definitely milk it, but that can go a lot of ways.

What I see is something like boards with two or more sockets where you can put in any combination of CPUs or GPUs that you want, with no card connectors, but with an internal Thunderbolt bus to support things like 5.25" bay specialized interface adapters such as audio I/O and video capture.

Maybe even take that further by using Thunderbolt networking, and get rid of SATA by using Thunderbolt-native drives.

Now that I think all of this out, this is pretty much Intel's plan.

XPS 466V|486-DX2|64MB|#9 GXE 1MB|SB32 PnP
Presario 4814|PMMX-233|128MB|Trio64
XPS R450|PII-450|384MB|TNT2 Pro| TB Montego
XPS B1000r|PIII-1GHz|512MB|GF2 PRO 64MB|SB Live!
XPS Gen2|P4 EE 3.4|2GB|GF 6800 GT OC|Audigy 2

Reply 15 of 27, by dexvx

User metadata
Rank: Oldbie
xplus93 wrote:

Hmmm, here's a first step.

https://en.wikipedia.org/wiki/LGA_3647

You realize that LGA-3647 is massive and expensive. The ASP of the chip packages going into that form factor is literally >$2000.

You can argue that high-end GPUs could be socketed; they have a high TDP and ASP. However, even then, you're talking at most one generation (Fermi/Fermi-shrink, GTX 480 > 580; Kepler/Kepler-shrink, GTX 680 > 770). You run into problems with memory traces (GTX 680 > 780 is unfeasible because the bus goes from 256 bits wide to 384 bits wide). You also run into potential cooling problems - socketed chips have a higher clearance, and high-end GPUs already have a massive copper block confined to two slots.
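
To put numbers on the memory-trace point, here's a quick sketch of how bus width alone moves peak bandwidth (assuming 6 Gb/s GDDR5 on both for the comparison, which is an illustrative simplification):

```python
# Peak GDDR5 bandwidth scales directly with bus width, so a socket and board
# routed for a 256-bit part can't feed a 384-bit part.
# 6 Gb/s per pin is an illustrative speed, not a specific card's spec.

def gddr5_bandwidth_gb_s(bus_width_bits, gbps_per_pin=6):
    """Peak bandwidth in GB/s for a GDDR5 bus of the given width."""
    return bus_width_bits * gbps_per_pin / 8

print(f"256-bit bus: ~{gddr5_bandwidth_gb_s(256):.0f} GB/s")
print(f"384-bit bus: ~{gddr5_bandwidth_gb_s(384):.0f} GB/s")
```

A socket and board routed for a 256-bit bus simply cannot be rewired into a 384-bit one.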

xplus93 wrote:

What I'm thinking of is a new standard: a chip with data, video and possibly memory lines coming out of it (already being done to a certain extent). Yeah, the VRM would most likely need to be removable. As far as costs go, I'll agree motherboard prices would go up to almost match multi-socket workstation boards, but GPU costs would go down as well. There are a lot of benefits too: a shorter path to the CPU and more freedom for cooling options, for starters. Plus the supercomputing industry would benefit.

Also, the bottleneck is not between the GPU and the CPU. The GPU does the processing itself and sends the formatted data back to the CPU; this is easily seen with gaming or HPC benchmarks. You can literally force the PCI-E link to gen1 and performance will barely degrade (most of the degradation comes from the fact that gen1 has worse encoding overhead). The latency gain from physically moving it closer would be entirely offset by your network devices (which live on PCI-E) uploading your formatted data to the local cluster.
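
For scale, here's what forcing the link down to gen1 actually costs in raw bandwidth (the encoding efficiencies are the published 8b/10b and 128b/130b line-code ratios; the rest is just arithmetic):

```python
# Usable per-direction bandwidth for PCIe gen1 vs. gen3 on an x16 slot.
# Gen1 uses 8b/10b encoding (80% efficient); gen3 uses 128b/130b (~98.5%).

def pcie_bandwidth_gb_s(rate_gt_s, encoding_efficiency, lanes=16):
    """Usable bandwidth per direction in GB/s for a PCIe link."""
    return rate_gt_s * encoding_efficiency / 8 * lanes  # bits -> bytes

gen1 = pcie_bandwidth_gb_s(2.5, 8 / 10)
gen3 = pcie_bandwidth_gb_s(8.0, 128 / 130)

print(f"Gen1 x16: ~{gen1:.1f} GB/s per direction")
print(f"Gen3 x16: ~{gen3:.2f} GB/s per direction")
```

That's roughly a 4x drop in link bandwidth, yet frame rates barely move, which is the clearest hint that the slot-to-CPU link isn't the bottleneck.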

I think in the coming years the GPGPU will be effectively dead. Different workloads (e.g. AI) scale differently according to GPU design. Although a GPGPU can do it better than a CPU, a dedicated tensor-minded design would fare better for AI. In the same vein, it would not fare well for something like nuclear simulations (I assume people doing that actually care about precision).

TL;DR: not worth it except for extremely high-end products in niche scenarios (with limited upgrade paths). The GPU is not being bottlenecked by PCI-E; moving it physically closer to the CPU will achieve little.

All IMO

Edit:

xplus93 wrote:

The point isn't modularity of the graphics processor though, apart from cooling. Separate ram would be neat, but as i've said. Unlikely, and with little benefit. The point is to loose the expansion cards entirely and add modularity to the base system. GPUs are the only thing that truly need PCIex16 outside of the enterprise market.

Yeah, that makes no sense. The whole point of PCI-E was modularity. It's far cheaper to make PCI-E traces with a standardized data transport protocol than to add extra sockets to the mainboard where there is no standard whatsoever.

Are you just concerned that PCI-E link length is a problem? We have external PCI-E connectors, and you can literally run a GPU from 5 m away (at a gen3 x8 link) and you wouldn't notice the difference in gaming FPS or 'feel'.

Reply 16 of 27, by xplus93

User metadata
Rank: Oldbie
dexvx wrote:
xplus93 wrote:

Hmmm, here's a first step.

https://en.wikipedia.org/wiki/LGA_3647

You realize that LGA-3647 is massive and expensive. The ASP of the chip packages going into that form factor is literally >$2000.

You can argue that high-end GPUs could be socketed; they have a high TDP and ASP. However, even then, you're talking at most one generation (Fermi/Fermi-shrink, GTX 480 > 580; Kepler/Kepler-shrink, GTX 680 > 770). You run into problems with memory traces (GTX 680 > 780 is unfeasible because the bus goes from 256 bits wide to 384 bits wide). You also run into potential cooling problems - socketed chips have a higher clearance, and high-end GPUs already have a massive copper block confined to two slots.


Of course I know how expensive it is. Anything labeled Xeon is expensive until you try to sell it used after decommissioning it. How can you say the die size or core architecture has any impact? Do you not remember Socket 775? That's not even a good comparison, for various reasons, but not the ones you mentioned. Socketed would actually mean more possibilities for cooling. And I'm starting to see where your confusion comes in: I'm not saying to put a socket on a GPU card and just make it an expansion planar, although I already explained that. Yeah, having the memory separate is a bit pointless and only a minor, brainstorming-only addition.

dexvx wrote:
xplus93 wrote:

What I'm thinking of is a new standard: a chip with data, video and possibly memory lines coming out of it (already being done to a certain extent). Yeah, the VRM would most likely need to be removable. As far as costs go, I'll agree motherboard prices would go up to almost match multi-socket workstation boards, but GPU costs would go down as well. There are a lot of benefits too: a shorter path to the CPU and more freedom for cooling options, for starters. Plus the supercomputing industry would benefit.

Also, the bottleneck is not between the GPU and the CPU. The GPU does the processing itself and sends the formatted data back to the CPU; this is easily seen with gaming or HPC benchmarks. You can literally force the PCI-E link to gen1 and performance will barely degrade (most of the degradation comes from the fact that gen1 has worse encoding overhead). The latency gain from physically moving it closer would be entirely offset by your network devices (which live on PCI-E) uploading your formatted data to the local cluster.

I think in the coming years the GPGPU will be effectively dead. Different workloads (e.g. AI) scale differently according to GPU design. Although a GPGPU can do it better than a CPU, a dedicated tensor-minded design would fare better for AI. In the same vein, it would not fare well for something like nuclear simulations (I assume people doing that actually care about precision).

TL;DR: not worth it except for extremely high-end products in niche scenarios (with limited upgrade paths). The GPU is not being bottlenecked by PCI-E; moving it physically closer to the CPU will achieve little.

All IMO

Edit:

It will open up new doors, to say the least, if you want to have the discussion in the context of HPC. You could have completely different chip designs for any application: a high-powered AI chip instead of a GPU, for example. What I'm saying is, imagine if you could tap into Intel's QPI for anything you wanted. You could argue that's not a consumer-applicable benefit, but I'm looking toward the future; for various reasons I see average consumers needing it eventually. I'm not a professional analyst, but I've frequently been either ahead of or right on time with predicting the course of technology. Let's hope you're wrong, for Intel's sake. They seem to be moving in the direction I see.

dexvx wrote:
xplus93 wrote:

The point isn't modularity of the graphics processor though, apart from cooling. Separate RAM would be neat but, as I've said, unlikely and with little benefit. The point is to lose the expansion cards entirely and add modularity to the base system. GPUs are the only thing that truly needs PCIe x16 outside of the enterprise market.

Yeah, that makes no sense. The whole point of PCI-E was modularity. It's far cheaper to make PCI-E traces with a standardized data transport protocol than to add extra sockets to the mainboard where there is no standard whatsoever.

Are you just concerned that PCI-E link length is a problem? We have external PCI-E connectors, and you can literally run a GPU from 5 m away (at a gen3 x8 link) and you wouldn't notice the difference in gaming FPS or 'feel'.

PCI-E is certainly modular, but really, how close is it to the CPU and main memory? Compare that to the relationship between CPUs and FPUs. We haven't needed anything like that until now, when more and more people need specialized data processing. What I'm saying is that we're moving towards the need for modularity in that context. Intel certainly thinks so.

XPS 466V|486-DX2|64MB|#9 GXE 1MB|SB32 PnP
Presario 4814|PMMX-233|128MB|Trio64
XPS R450|PII-450|384MB|TNT2 Pro| TB Montego
XPS B1000r|PIII-1GHz|512MB|GF2 PRO 64MB|SB Live!
XPS Gen2|P4 EE 3.4|2GB|GF 6800 GT OC|Audigy 2

Reply 17 of 27, by dexter311

User metadata
Rank: Member

There are some sorta cool concepts out there along these lines, mostly centred around the MXM graphics card format for mobile applications:

- The ASRock H110-STX MXM has an LGA1151 socket and an MXM socket for a mobile GPU, all mounted flat against the mobo.
- A dual MXM SLI carrier board from MSI holds 2x MXM graphics cards in SLI, with all the power phases on the carrier board.

Maybe the proliferation of MXM is what we might see in the future?

Reply 18 of 27, by Jade Falcon

User metadata
Rank: BANNED
luckybob wrote:
Jade Falcon wrote:

This indeed can be done and has been; most dev cards are socketed. But it will never work how you want it to. It's like running two PCI Voodoo5 5500s in SLI: it can technically be done, but it will be slower.

I'm curious to know if you have ever actually done that. I'm genuinely curious.

A few very bright folks in the 3dfx community had a few long discussions about it. We found that we would need a custom driver to pull it off and some back-end software to allow the cards to talk to each other. But due to how the PCI bus works, it would not hold up and would slow the system down by a lot. The other idea was to somehow tap into the SLI bridge on the card and run cables between the cards, but that would need a major rework of the cards' PCBs, if I recall.

Reply 19 of 27, by cyclone3d

User metadata
Rank: l33t++
Jade Falcon wrote:
luckybob wrote:
Jade Falcon wrote:

This indeed can be done and has been; most dev cards are socketed. But it will never work how you want it to. It's like running two PCI Voodoo5 5500s in SLI: it can technically be done, but it will be slower.

I'm curious to know if you have ever actually done that. I'm genuinely curious.

A few very bright folks in the 3dfx community had a few long discussions about it. We found that we would need a custom driver to pull it off and some back-end software to allow the cards to talk to each other. But due to how the PCI bus works, it would not hold up and would slow the system down by a lot. The other idea was to somehow tap into the SLI bridge on the card and run cables between the cards, but that would need a major rework of the cards' PCBs, if I recall.

What about using a board with multiple PCI-E slots and then using PCI-E-to-PCI adapter cards? That way the PCI bus would not be a problem; the limitation, if any, would be solely at the card level.

Yamaha modified setupds and drivers
Yamaha XG repository
YMF7x4 Guide
Aopen AW744L II SB-LINK