Graphics in V8i vs. CE, and NVIDIA Quadro vs. GeForce

Hello,

Question about graphics.

We use HP ZBooks with NVIDIA Quadro P3200 (6 GB) graphics.

On paper everything looks OK, and we have tested extensively with many different NVIDIA Control Panel settings (and, of course, the latest drivers).

Does anyone have details about the differences in graphics between MicroStation V8i and CONNECT Edition?

  • With identical operations we see GPU activity in V8i but barely any in CE.
  • When changing reference files (dynamic cache rendering/presentation) in CE, the GPU seldom goes higher than 20%, while when rotating models dynamically we DO see GPU activity above 20%.
  • Is there information available anywhere on how this works, and when the GPU rather than the CPU is used and vice versa? Generating a PDF with hidden lines also uses more CPU than GPU in CE, which differs from V8i.
  • We also have an internal discussion about NVIDIA GeForce cards performing much better than NVIDIA Quadros, even though the new Quadro "P" family uses a GeForce chipset. HP ZBooks have no GeForce option, only Quadros; HP offers GeForce only in its Omen gaming machines, which do not match the quality level / (military) specs of the ZBook.

Will DirectX 12 be supported in MicroStation CE Update 15?

Sorry for the many questions, but we feel a bit lost and find it hard to get clear answers.

Many thanks,

Best,

Nico 

  • Hi Nicolas,

    a few comments, but be aware I am not an NVIDIA expert. In fact, I think AMD is the better choice when a DirectX application is used ;-). Also, technical details about MicroStation internals are not published (e.g. in a formal document), so my comments are based primarily on experience and on what I remember from various discussions.

    Question about graphics.

    The biggest problem when MicroStation graphics is discussed is, in my opinion, that we are not dealing with a simple chain like "software" > "driver" > "CPU/GPU", but with "version of Windows" > "version of application" > "version of driver" > "version of GPU/CPU".

    Every GPU (and every application) supports specific DirectX feature levels; e.g. for DirectX 11 these can be feature levels 11_0 and 11_1. See this Microsoft documentation for the DirectX feature levels table. Moreover, when a specific feature is supported, it can be implemented by the hardware directly or emulated inside the driver (parts of Windows can be involved as well). This creates many possible combinations, depending on both software and hardware.
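
    For illustration, here is a minimal C++ sketch (not MicroStation's code, just the standard Direct3D 11 pattern) of how an application negotiates a feature level with the driver at device creation:

        // Query the highest DirectX 11 feature level the GPU/driver supports.
        #include <windows.h>
        #include <d3d11.h>
        #include <cstdio>
        #pragma comment(lib, "d3d11.lib")

        int main()
        {
            // Ask for the highest levels first; the runtime grants the best
            // one the driver/hardware combination actually supports.
            const D3D_FEATURE_LEVEL requested[] = {
                D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
                D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0
            };
            const UINT count = sizeof(requested) / sizeof(requested[0]);

            ID3D11Device*        device  = nullptr;
            ID3D11DeviceContext* context = nullptr;
            D3D_FEATURE_LEVEL    granted = D3D_FEATURE_LEVEL_9_1;

            HRESULT hr = D3D11CreateDevice(
                nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                requested, count, D3D11_SDK_VERSION,
                &device, &granted, &context);

            // A pre-11.1 DirectX runtime rejects the 11_1 entry outright,
            // so the usual pattern retries without it.
            if (hr == E_INVALIDARG)
                hr = D3D11CreateDevice(
                    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                    &requested[1], count - 1, D3D11_SDK_VERSION,
                    &device, &granted, &context);

            if (SUCCEEDED(hr))
            {
                printf("Granted feature level: 0x%04x\n", granted);
                context->Release();
                device->Release();
            }
            return 0;
        }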

    Does anyone have details about the differences in graphics between MicroStation V8i and CONNECT Edition?

    In general:

    • MicroStation V8i uses the 32-bit Qvision 3 engine and supports DirectX 9 (plus DirectX 11 from V8i SS3).
    • MicroStation CE uses the 64-bit Qvision 5 engine and supports DirectX 11 only.

    Note: Qvision is the name of MicroStation's internal graphics engine/interface.

    As I mentioned already, it is not documented how and when MicroStation uses the GPU, but from what I remember it is used not only for graphics but also as part of the printing process.

    With identical operations we see GPU activity in V8i but barely any in CE.

    Two things come to mind:

    • The "meters lie" rule ;-) ... what they display is an average value without detailed context, so it is nothing more than "a value" providing vague information. The same applies to the GPU and performance info provided by Windows.
    • Does "more" (higher utilization) mean "better"? GPU utilization says nothing about performance, only about GPU load. E.g. when the driver or software is better (more efficient), it will lead to a smaller GPU load, but performance will certainly not be worse.

    It is better to measure performance subjectively and in terms of <something> per second (see the info about QV stats below). From what I see on my hardware (a notebook with an NVIDIA GeForce GTX 1050 and a desktop with an AMD Radeon RX 5700), V8i is always worse, both in feedback (rotation is laggier and less smooth) and in stats (fewer "draws" per second).
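
    As a generic illustration of "per second" measurement (this is not how Qvision does it internally, just the idea), counting completed operations over a fixed time window looks like this in C++:

        #include <chrono>
        #include <cstdio>

        int main()
        {
            using clock = std::chrono::steady_clock;
            const auto end = clock::now() + std::chrono::seconds(3);

            auto windowStart = clock::now();
            long long ops = 0;

            while (clock::now() < end)     // stand-in for a render/redraw loop
            {
                // ... issue one frame / draw call / view update here ...
                ++ops;

                double elapsed =
                    std::chrono::duration<double>(clock::now() - windowStart).count();
                if (elapsed >= 1.0)        // report roughly once per second
                {
                    printf("%.0f ops/sec\n", ops / elapsed);
                    ops = 0;
                    windowStart = clock::now();
                }
            }
            return 0;
        }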

    Is there information available anywhere on how this works, and when the GPU rather than the CPU is used and vice versa?

    Not much information is available. As I understand it, it is a bit complicated, because it depends both on what the software prefers (e.g. the GPU should be used for view content, tessellation, and rasterization, which can be used e.g. in printing) and on the driver used, which can emulate some operations in software (when the hardware does not support them).

    In my opinion you should compare the Qvision statistics. See this wiki article for how to obtain them. It is quite simple: set the variable, start V8i and CE, and open (I recommend) the same DGN file in both. Because the reports are nearly identical, it is quick to compare them row by row. E.g. on my desktop PC (AMD Ryzen 7 3700X + AMD Radeon RX 5700) the same DirectX 11 feature level is supported, but CE has better benchmark results (PixDraw...).
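
    If the two reports are saved as text files, a tiny helper can print just the rows that differ (the file names here are hypothetical; use whatever names you saved the reports under):

        #include <fstream>
        #include <iostream>
        #include <string>

        int main()
        {
            std::ifstream a("qv_stats_v8i.txt");   // hypothetical file names
            std::ifstream b("qv_stats_ce.txt");
            if (!a || !b) { std::cerr << "cannot open report files\n"; return 1; }

            // Walk both reports in lockstep and show only the differing rows.
            std::string la, lb;
            for (int row = 1; std::getline(a, la) && std::getline(b, lb); ++row)
                if (la != lb)
                    std::cout << "row " << row << ":\n  V8i: " << la
                              << "\n  CE:  " << lb << '\n';
            return 0;
        }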

    We also have an internal discussion about NVIDIA GeForce cards performing much better than NVIDIA Quadros.

    I am not surprised. I guess (though maybe it is a naive, layman's opinion) that NVIDIA Quadro cards are optimized for OpenGL and for computation, but not for DirectX. Whereas some professional-card features have a general impact (e.g. a wider memory bus, faster or even different memory technology), other features target the professional segment specifically (open standards like OpenGL and OpenCL, or proprietary CUDA...). Even though these cards often deliver great performance in DirectX as well, my feeling is that this is due to raw hardware performance, not because they are optimized for it (in either hardware or software).

    It makes sense, because professional and compute cards do not target the gaming market (which has different priorities and feature requirements).

    Will DirectX 12 be supported in MicroStation CE Update 15?

    There is no such information available in published documentation.

    Also, in this discussion (a few months old) it was mentioned that there are no plans to support DirectX 12 in MicroStation (anytime soon), but such plans do exist for LumenRT.

    As a developer, I understand that. Even though I have no personal experience with the DirectX API, what I know is that DirectX 8 ... 11 were primarily evolutionary, whereas DirectX 12 changed the API completely and moved from high-level to quite low-level access. This means migration is a huge task, and even skilled game developers mention in blogs and talks that it always takes several years to understand the API completely and to design and optimize an engine that even reaches the same performance as the old, but perfectly optimized, version.
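
    To show what "low-level" means in practice, here is a minimal Direct3D 12 sketch (plain DirectX 12 boilerplate, nothing MicroStation-specific): before drawing anything, the application itself must build the submission machinery that the DirectX 11 driver managed implicitly behind the immediate context. A real engine would additionally manage fences, swap chains, and pipeline state.

        #include <windows.h>
        #include <d3d12.h>
        #include <cstdio>
        #pragma comment(lib, "d3d12.lib")

        int main()
        {
            ID3D12Device* device = nullptr;
            if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                         IID_PPV_ARGS(&device))))
                return 1;

            // In DX11 the immediate context did all of this for you:
            // queue, allocator, and command list are now explicit objects.
            D3D12_COMMAND_QUEUE_DESC queueDesc = {};
            queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;

            ID3D12CommandQueue*        queue     = nullptr;
            ID3D12CommandAllocator*    allocator = nullptr;
            ID3D12GraphicsCommandList* list      = nullptr;

            device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocator));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocator, nullptr, IID_PPV_ARGS(&list));

            // Commands are recorded, closed, and submitted explicitly; the
            // engine, not the driver, now owns scheduling and synchronization
            // (a real engine would wait on a fence before releasing anything).
            list->Close();
            ID3D12CommandList* lists[] = { list };
            queue->ExecuteCommandLists(1, lists);

            printf("Created DX12 command machinery; submitted one empty list.\n");

            list->Release(); allocator->Release();
            queue->Release(); device->Release();
            return 0;
        }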

    we feel a bit lost and find it hard to get clear answers.

    I understand; so many dependencies in the chain create a complicated situation.

    As I wrote, I recommend analyzing the Qvision statistics. Also, if possible, test the same model(s) on a PC or notebook with an NVIDIA GeForce and also an AMD Radeon (and compare the results).

    With regards,

      Jan

  • Many thanks for your reply, Jan.

    Very much appreciated.

    Best regards,

    Nico
