Single-processor tasks

Good day

I've noticed that some processes within MicroStation CONNECT (and thus, by extension, ORD) are still single-processor tasks: among others, corridor processing, creation of drawings from named boundaries, element and model annotation, and plotting of PDFs from a pset file.

Is there a plan to redesign these processes to take advantage of all CPU cores? Even on a machine with the latest CPU, best GPU, SSD, and oodles of RAM, annotating cross-sections is unbearably slow. In the screenshot below, I'm running the Model Annotation tool, and it's evident that ORD is only using 1 out of my 6 cores. My previous machine had 8 slightly slower cores, which made things even worse.
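As a side note for anyone wanting to reproduce the observation outside of ORD: the pattern is easy to see with any CPU-bound batch of independent jobs. This is a minimal, illustrative Python sketch (nothing to do with ORD's internals; the job function is a made-up stand-in) comparing a serial run on one core with a process pool spread across all cores:

```python
import multiprocessing as mp
import os
import time

def burn(n: int) -> int:
    """CPU-bound busy loop standing in for one independent job
    (e.g. annotating one cross-section sheet)."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_serial(jobs: int, n: int) -> list:
    """One process, one core: the behaviour described above."""
    return [burn(n) for _ in range(jobs)]

def run_parallel(jobs: int, n: int) -> list:
    """One worker process per logical CPU."""
    with mp.Pool(processes=os.cpu_count()) as pool:
        return pool.map(burn, [n] * jobs)

if __name__ == "__main__":
    jobs, n = 8, 500_000
    t0 = time.perf_counter()
    serial = run_serial(jobs, n)
    t1 = time.perf_counter()
    parallel = run_parallel(jobs, n)
    t2 = time.perf_counter()
    assert serial == parallel  # same results, very different wall time
    print(f"serial:   {t1 - t0:.2f}s")
    print(f"parallel: {t2 - t1:.2f}s")
```

Watching Task Manager while this runs shows exactly the contrast I'm describing: the serial half pins a single logical processor, while the pool lights up all of them.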

Kind Regards

Parents
  • Has anyone looked at this at all?

    I have noticed this same issue on multiple machines, even different brands with different hardware (quad-core, six-core, etc.): ORD almost always uses only one logical processor.

    That is, on a six-core machine with two logical processors per core (12 logical processors total), CPU utilisation sits at only ~9%, and on a quad-core machine with two logical processors per core (8 total) it only reaches ~13%. In Resource Monitor it is clear that ORD uses only one processor at a time, sometimes switching between processors (most likely the OS scheduler rather than the program), but it stops using the previous processor as it moves to a new one.

    This is still happening in 2019 R2 Refresh.

    I have waited more than 30 minutes while ORD sits at 13% CPU usage, with 100% usage of one processor out of eight; if the program were able to use all of them, it would have been more like five minutes.

    This should be fixed across ALL of the software, not just in DGNs where large numbers of template drops exist. The single-processor bottleneck hits all sorts of tasks: processing point-cloud data, loading subsurface utilities, running drainage simulations, and reiterating a corridor with a single complex template, which takes minutes.

    OpenRoads Designer would be a completely different program to use if it was able to utilise multi-core processing.
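For what it's worth, the 30-versus-five-minute estimate above is roughly what Amdahl's law predicts if about 5% of the work is inherently serial (the 5% figure is my own assumption, purely for illustration):

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Amdahl's law: best-case speedup with `cores` workers when
    `serial_fraction` of the work cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# A 30-minute single-core run, assuming ~5% serial work, on 8 logical processors:
print(f"{30.0 / amdahl_speedup(0.05, 8):.1f} min")  # -> 5.1 min
```

Even with a generous serial fraction, the predicted runtime lands right around the five-minute mark, so the estimate is not unreasonable.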

Children
  • I talked with our IT guy about this once. He said it was due to Windows 7 being a 32-bit system, and that Windows 10 should resolve it. I have not looked at it since upgrading to 10; I will do that next time I bog it down. I have a few files with HUGE terrain and imagery attachments.

    Idaho Red

  • Unfortunately that change affects memory management, not CPU usage. Most of us are already using 64-bit systems and still struggling with speed for larger data sets.

    Regards,

    Mark


    OpenRoads Designer 2023  |  Microstation 2023.2  |  ProjectWise 2023

  • Correct, Mark. The fundamental difference between 32-bit and 64-bit architectures is the amount of memory (RAM) available for addressing. That matters when you have a large data set already loaded, such as a point cloud or a high-accuracy lidar terrain.

    However, it is the CPU (not the RAM) that matters most when actually processing the data: importing/converting terrain models, placing named boundaries, creating and annotating sheets such as cross sections, running a drainage simulation, and essentially all other processing in the program, the one exception being corridors with large numbers of templates (>100, IIRC).

    I've seen this CPU issue on 64-bit Windows 7 and 64-bit Windows 10 systems, all running 64-bit OpenRoads Designer.

    I can't help but think it's a huge mistake; why would you ever limit such a powerful program to using a small fraction of available resources?

    Regards,

    Aiden
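To put rough numbers on that addressing difference (a back-of-the-envelope sketch, nothing ORD-specific):

```python
# Maximum addressable memory for 32-bit vs 64-bit pointers.
GIB = 1024 ** 3
print(2**32 // GIB, "GiB")  # -> 4 GiB: the hard ceiling for any 32-bit process
print(2**64 // GIB, "GiB")  # for 64-bit, installed RAM (not the address
                            # space) is the practical limit
```

That 4 GiB ceiling is why large point clouds and terrains choked 32-bit builds, and why moving to 64-bit helps memory but does nothing for the single-core CPU bottleneck.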

  • I personally think it's an implementation issue. ORD already outperforms similar products in the market (ignoring legacy products), and stability finally seems to be sorted; sort out performance (modelling, drawing, and load times) and it will establish itself as an industry standard.

    Regards,

    Mark


    OpenRoads Designer 2023  |  Microstation 2023.2  |  ProjectWise 2023

  • At some point I found an old post asking this same question about the legacy products, and the answer was something to the tune of "human input is the bottleneck, so multi-core processing wasn't a priority". That may have been the case long ago, but the "human input" is currently sitting around watching that 9-13% in Task Manager...