C# Add-in access Interop Application/Methods

How do I get an instance of the interop Application class from within an add-in? This is for MicroStation CONNECT (Update 16). I want to access the interop API from within the add-in for code-reusability purposes (I have a decent amount of code from a few desktop apps that use the COM API), and I was also looking to do some benchmarking comparing interop vs. .NET API performance for basic scanning/processing/updating of elements.

With that, I understand I shouldn't be using GetObject, as that would give me an out-of-process instance of the Application. But is there a .NET or other API that will give me access to the interop Application? Something like Session.Instance.GetActiveDgnFile, but for the interop API?

Thanks.

  • Hi Viktor,

    With that, I understand I shouldn't be using GetObject, as that would give me an out-of-process instance of the Application.

    Exactly

    but is there a .NET or other API that will give me access to the interop Application?

    Utilities.ComApp.<whatever> (the ComApp property on the Utilities class in Bentley.MstnPlatformNET.InteropServices returns the in-process interop Application).
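    A minimal sketch of what this looks like inside a CONNECT add-in (assuming the usual references to the Bentley.MicroStationDGN interop assembly and Bentley.MstnPlatformNET; property and namespace names are as I understand the public API, so verify against your SDK version):

```csharp
using Bentley.MstnPlatformNET.InteropServices;  // home of Utilities.ComApp
using BCOM = Bentley.Interop.MicroStationDGN;   // COM interop namespace

public static class InteropAccess
{
    public static void Demo()
    {
        // In-process interop Application -- unlike GetObject(), this does
        // not attach to an out-of-process MicroStation instance.
        BCOM.Application app = Utilities.ComApp;

        // From here the familiar COM object model is available:
        BCOM.DesignFile file = app.ActiveDesignFile;
        BCOM.ModelReference model = app.ActiveModelReference;
    }
}
```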

    and was also looking to do some benchmarking comparing interop vs. .NET API performance for basic scanning/processing/updating of elements.

    I have several comments to this topic:

    • It makes sense (I did a couple of such measurements in the past), and a good rule of thumb is that Interop tends to be slower than the .NET API, because there are more layers between the "end point" and the public API. But it is not simple to evaluate the results, because the APIs and object models are completely different.
    • Especially in scanning, the biggest difference is not in the API used (.NET vs. Interop), but in the approach/class used (iteration, delegate, Interop scanning), where the fastest is (no surprise there) direct iteration using .NET enumeration. And my experience is that 90% of performance issues are not in the API at all, but in wrong code, wrong algorithms, and poor software and application architecture ;-)
    • Strictly speaking, it is impossible to do correct benchmarking on top of a hosting application as complex as MicroStation. Unfortunately, it is easy to obtain "some numbers" and think they are valid.
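    To make the "approach matters more than API" point concrete, here is a hedged sketch of the two scanning styles being compared (method and class names are per my reading of the public .NET and interop APIs, not verified against Update 16):

```csharp
using Bentley.DgnPlatformNET;
using Bentley.DgnPlatformNET.Elements;
using Bentley.MstnPlatformNET;
using BCOM = Bentley.Interop.MicroStationDGN;

public static class ScanStyles
{
    // .NET side: direct enumeration of the active model's graphic elements.
    public static int CountNet()
    {
        DgnModel model = Session.Instance.GetActiveDgnModel();
        int count = 0;
        foreach (Element element in model.GetGraphicElements())
            count++;
        return count;
    }

    // Interop side: classic ElementScanCriteria + ElementEnumerator scan.
    public static int CountInterop(BCOM.Application app)
    {
        var criteria = new BCOM.ElementScanCriteriaClass();
        criteria.ExcludeNonGraphical();
        BCOM.ElementEnumerator elements = app.ActiveModelReference.Scan(criteria);
        int count = 0;
        while (elements.MoveNext())
            count++;
        return count;
    }
}
```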
    comparing interop vs. .NET API performance

    I think the deciding factor here is not performance, but existing vs. new code, and the incompatibility between the APIs:

    • For new code, use the new .NET API wherever possible (fortunately there are only a few exceptions, like the discussed cell modification).
    • When there is a bug or missing functionality in .NET, decide what is better: to use Interop (also buggy, but well known, and it typically works where .NET does not) or to use C++/CLI to call the C++ API. Using Interop is simple, but the APIs cannot be mixed freely (in particular, an Interop Element is not a .NET Element), whereas C++/CLI is efficient and powerful, but requires extra skills and leads to more complex code and application structure (a separate assembly for the C++/CLI wrappers).
    • When there is existing code, use it when cost and time are priorities.

    With regards,

      Jan

Good points, Jan. I do agree that you can have performance issues in both .NET and Interop if your code is sloppy. At this time I am going to keep C++ out of the options for me, simply because I am not the only person accessing/maintaining code here, and trying to expect everyone to digest C++ is going to be a tall order (for me as well).

    With that said, most of my code actually converts .NET or Interop objects into my own LightObjects, and a shared business-logic layer then deals with those objects instead of dealing with .NET or Interop objects directly. This allows me to reuse code between AutoCAD/MicroStation and other platforms. Because of that, I think the benchmarking should be straightforward, but we'll see.
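    As an illustration only (the LightObject and converter names below are hypothetical, not from this thread), the layering described above amounts to one converter per platform feeding a single shared business-logic layer:

```csharp
// Platform-neutral element: the shared business logic only ever sees this.
public sealed class LightLine
{
    public double X1, Y1, X2, Y2;
    public int Level;
}

// One implementation per platform/API (MicroStation .NET, MicroStation
// Interop, AutoCAD, ...); only the converters touch platform types.
public interface ILightConverter<TPlatformElement>
{
    LightLine ToLight(TPlatformElement source);
}
```

    This also keeps the benchmark symmetric: the same downstream code runs in every case, so only the scanning and conversion costs differ between APIs.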

  • Hi Viktor,

    and trying to expect everyone to digest C++ is going to be a tall order (for me as well).

    C++ is always a challenge, especially when you learned C/C++ some time ago and now realize that the new standards look like a new language, with even more complicated syntax. C++ today is extremely powerful, allowing you to do anything and to optimize code in the tiniest detail. But mastering it is a lifelong task ... unless of course you are Bjarne Stroustrup, John Carmack, or somebody on the same level :-)))

    Because of that I think the benchmarking should be straightforward

    In my experience it is usually not complicated to write the benchmark code, but it can be challenging to design the tests in such a way that the returned values make sense (and are not just "numbers that look precise because they were calculated by a computer" ;-).

    Professional benchmarking tools (e.g. BenchmarkDotNet) often use a strictly statistical approach, where the required precision is calculated from percentiles and measurement deviation (usually a normal distribution is assumed). An advantage of this is that it filters out measurements with too big a deviation (which can be caused by the host system, when MStn/ACAD/whatever caches something, cleans memory, loads other internal modules, etc.).
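    A simplified stand-in for that idea (not BenchmarkDotNet's actual algorithm): repeat the measurement, trim the extreme samples that host-application noise produces, and report the median of the rest:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

public static class MicroBench
{
    public static double MedianMs(Action body, int warmup = 3, int runs = 30)
    {
        for (int i = 0; i < warmup; i++)
            body();                      // let JIT and host caches settle

        var samples = new double[runs];
        var sw = new Stopwatch();
        for (int i = 0; i < runs; i++)
        {
            sw.Restart();
            body();
            sw.Stop();
            samples[i] = sw.Elapsed.TotalMilliseconds;
        }

        // Drop the fastest and slowest 10% of samples (host caching,
        // module loads, GC pauses), then take the median of the rest.
        int trim = runs / 10;
        var kept = samples.OrderBy(t => t)
                          .Skip(trim)
                          .Take(runs - 2 * trim)
                          .ToArray();
        return kept[kept.Length / 2];
    }
}
```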

    With regards,

      Jan

