Hi,
Just want to find out: if a system has 2 GPUs, does MicroStation have a say in which GPU it will use? Or is that decided by Windows?
For example, I have the MicroStation window displayed on monitor 1, which is connected to GPU 1. Does that mean only GPU 1 is used by MicroStation? I'm aware that Windows 10 allows setting a specific GPU per app, but I just want to understand whether that means what I think it means.
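For reference, the Windows 10 per-app setting mentioned above (Settings > System > Display > Graphics settings) is stored in the registry. Below is a minimal sketch of what such an entry looks like; the executable path is a hypothetical example and will depend on your actual installation:

```reg
Windows Registry Editor Version 5.00

; Per-application GPU preference, as written by the Graphics settings page.
; GpuPreference=0 -> let Windows decide, 1 -> power saving, 2 -> high performance
[HKEY_CURRENT_USER\Software\Microsoft\DirectX\UserGpuPreferences]
"C:\\Program Files\\Bentley\\MicroStation\\MicroStation.exe"="GpuPreference=2;"
```

Setting a preference here (or through the Settings UI, which edits the same key) tells Windows which adapter to hand the application, which is exactly the mechanism you are asking about.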
Best regards,
Tuan Le
Hi Tuan,
Some comments to add to Marco's answer, because you did not share the complete picture of your HW configuration, and I also think one scenario is missing from Marco's explanation.
In my opinion, there are 3 different scenarios possible:
When you are not sure whether a GPU is used (or which GPU, when more are installed), use Windows Task Manager, where all basic statistics are displayed. Also, both AMD and Nvidia provide their own control panels, allowing you to monitor GPU parameters and utilization.
With regards,
Jan
Bentley Accredited Developer: iTwin Platform - Associate | Labyrinth Technology | dev.notes() | cad.point
Hi Jan,
Thanks for the insight. SLI isn't on our radar. A question was raised in our office: "any benefit if we have more GPUs?" For day-to-day 3D and 2D work, rendering aside, I think for most simple use cases the answer seems to be "yes". Just want to make sure that is the case.
Tuan Le said: I think for most simple use cases the answer seems to be "yes".
In my opinion the answer is "no":
Even average mid-range cards, especially when DirectX-optimized (targeting the gaming market), are able to handle MicroStation on multiple monitors at high resolution. I have to admit I have no experience with gigabyte-scale complex 3D datasets, but every time MicroStation looks slow (e.g., with large point clouds), I feel it is because of optimization, not because of insufficient GPU power.
An exception to this opinion is frequent VUE rendering, where having one or more of the best available GPUs makes sense. And even though I prefer AMD (my feeling is that they offer a better balance of cost, power consumption, and DirectX performance), an Nvidia GPU would be the better choice there, because of the availability of the Nvidia denoiser.
But what configuration and GPU are best depends a lot on local specifics and project requirements. My customers often use mid-range (or even standard) PCs for normal drawing and modeling (for 2D and average 3D, even an integrated GPU can provide enough power), with dedicated high-performance workstation(s) used by the "company visualization guru" ;-)
About denoising: we support both the Nvidia and Intel denoisers. The Nvidia denoiser requires an Nvidia card (Maxwell generation or newer, which is roughly four years old or younger), while the Intel denoiser will run on anything, including AMD video cards. The results are comparable, and the speed is great in both cases, as denoising generally takes only a fraction of a second.
We also plan to implement the AMD denoiser, which should run on any hardware, in 2022.
Thanks for the details.
Marco Salino said: We plan to also implement the AMD denoiser, which should run on any hardware, in 2022.
It sounds promising, I am looking forward to it! :-)
Regards,