
WaterGEMS crashes if SCADA Datasource set for other than 24 hours

When I set up SCADA signals with a 24-hour period, it works and I am able to view SCADA historical results both in the SCADA Signal preview window and on SCADA elements in the model. However, if I change to another time frame (such as 48 hours), WaterGEMS crashes suddenly, without any error message.

I thought at first that it was due to the complexity of the model and the large number of signals, but the problem is reproducible with an empty model and only a single signal.

Version information: 10.01.01.04 32-bit

The following procedure works without issue:

  1. Create new file, with change tracking off.
  2. Open SCADA Signal
  3. Create new DataSource
    1. OLEDB
    2. Connection String: *entered appropriate connection string*
    3. Advanced Options:
      1. Date/Time Prefix/Suffix: double quote (")
      2. Field Name Prefix/Suffix: space ( )
    4. Test Connection succeeds
  4. Set the Table Name, Signal, Value and Timestamp Fields
  5. Set to Historical
  6. Set Custom SQL Statements:
    1. Available Signals: set rowcount=0 select Tagname from ihTags
    2. Historical: set rowcount=0, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData where (@requestedsignals)  and ((TimeStamp>=08/20/2018) and (TimeStamp<=08/21/2018)) and Quality = "Good NonSpecific"
    3. Time Range: set rowcount=0, samplingmode=rawbytime select Min(TimeStamp),Max(TimeStamp) from ihRawData
  7. Select SCADA Signals: select a single signal from the list
  8. Click OK, view the selected Signal in the preview window, and press Refresh
    1. The data points for the 24 hour period display correctly.

However, if the exact same procedure is followed, except that the second timestamp is changed to (TimeStamp<=08/22/2018), then when you click OK after entering the Custom SQL Statements, the program freezes for about 30 seconds and then crashes completely. There is no warning message or 'crash report' dialog.

If you change the date range to 8/21/2018 - 8/22/2018, it works fine, but a date range of 8/21/2018 - 8/23/2018 crashes. So there doesn't seem to be a problem with a specific date, just with ranges greater than 24 hours.
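
For reference, my understanding (confirmed by testing described further down this thread) is that at query time WaterGEMS substitutes @requestedsignals with a Tagname condition for the selected signals, so with a single signal selected the Historical statement above effectively reaches Historian as something like the following ('<tag>' is a placeholder for the selected tagname):

set rowcount=0, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData where (Tagname='<tag>') and ((TimeStamp>=08/20/2018) and (TimeStamp<=08/21/2018)) and Quality = "Good NonSpecific"

The behind-the-scenes query verification apparently omits that Tagname condition altogether (see the replies below), which presumably is why a wider hard-coded date range returns so much more data.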

  • Thanks for the detailed response, Stephen. Glad this is working. Feedback like this is invaluable to us. SCADA and real-time operations is one of the areas we are currently focusing on, so please be sure to let us know if you run into any other issues and we will do our best to assist.



  • Wayne,

    Thanks so much for the help - this solved the problem for me. I'll put some details below to help the next person with this issue.

    The query that I'm using now is: 

    set rowcount=5000, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData where (@requestedsignals) and (TimeStamp>=@startdatetime) and (TimeStamp<=@enddatetime) and Quality = "Good NonSpecific"

    A few observations:

    1. This also solved the lagging problem I was describing: now when I select a SCADA element to Quick Graph it responds very quickly, because the behind-the-scenes query verification (which omits the Tagname filter) is limited to 5000 records and therefore completes quickly.
    2. This query uses the Calculation Options (Start Date, Start Time, and Duration) to calculate the start date and end date, which is exactly what I'd like. It is working well to simply change the model Duration and have all the SCADA element Quick Graphs update appropriately. I've tested up to 168 hours.
    3. Our data is mostly either hourly or 15-minute, so 5000 records per signal covers about 52 days even for the 15-minute signals, which is far more than I would ever need to view. We do have a couple of signals with 1-minute resolution, which at 5000 records are limited to about 3.5 days. However, these signals don't cause any errors if a longer time range is selected; the graph is simply restricted to the duration for which data was returned (so even if the model duration is 168 hours, the Quick Graph only extends to about 83 hours for the 1-minute tags). Since only a few non-vital tags are affected, this is not a problem for me right now, and I don't currently need to investigate how to filter these to a lower timestep. In other words, a 5000 record limit is not expected to be a problem for me in the foreseeable future, but it could conceivably be an issue for other users with a higher data density.
    4. I use the samplingmode=rawbytime because without this, Historian will calculate and interpolate values, and I prefer to work with the actual raw values so I know exactly what I'm dealing with. Other people may decide to omit this and accept Historian's interpolations.
    5. Your query, which has "Tagname=(@requestedsignals)", did not work for me, because at query time that was getting translated to "Tagname=Tagname='<tag>'" (where '<tag>' is a selected tagname). Simply using (@requestedsignals) worked.
    6. If anyone does have a problem with the 5000 record limit, they could test how high it can be increased (see the example query after this list). The ceiling would be a function of the particular Historian setup and the row count at which it returns a resource limit error. However, increasing it too far would re-introduce the lag whenever WaterGEMS needs to verify the query.
    7. I've kept my other queries the same, so I still have the "set rowcount=0" in the Available Signals query. I'm doing this because I expect we will soon cross over 5000 tags, but I don't expect we will get high enough to cause a resource limit problem. In the future if I do testing to find out what that limit is, I would set that explicitly in the query.
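
    To illustrate point 6: the only change needed is the rowcount value. The 20000 below is just an arbitrary figure to test against a particular Historian setup, not a value I have verified:

    set rowcount=20000, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData where (@requestedsignals) and (TimeStamp>=@startdatetime) and (TimeStamp<=@enddatetime) and Quality = "Good NonSpecific"

    At hourly resolution 20000 rows would cover a little over two years of data, and at 1-minute resolution roughly two weeks, but whether a given Historian server will return that many rows without hitting its resource limit is something each user would need to test.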

    Once again, I very much appreciate both your and Jesse's help. This has been an interesting issue to delve into, and it has helped me understand the software better.

  • Hi Stephen,

    I authored an additional response with some extra information, including a modified query that seems to be working out fairly well for me; however, it was immediately deleted for being "spam or abuse". I am somewhat lost for words! I have appealed it, but I am not sure how long that process takes. If the post is not back soon, I will re-write it.

    Regards,

    Wayne.

    EDIT: Looks like the spam reply is back...



  • Hi Stephen,

    I have been playing around with the GE Historian some more.

    I was able to reproduce the same DB_E_ABORTLIMITREACHED issue, albeit without any crash. That seems to be a side effect, like you found, of specifying a date range that results in too much data being returned. The amount of data returned is limited to 5000 rows by default, but the "set rowcount=0" part of your query turns that limit off. My guess, then, is that keeping some kind of limit in place is a reasonable idea.

    Now, I'm not sure whether things have changed between GE versions, but the version I am using seems to work a lot better with a query like this:

    set rowcount=5000, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData where Tagname=(@requestedsignals) and (TimeStamp>=@startdatetime) and (TimeStamp<=@enddatetime) and Quality = "Good NonSpecific"

    I'm getting data back and it's coming back very fast now.

    Some issues I am still looking into...

    1. Trying to specify multiple tags in GE isn't working out for me, or at least I can't find any syntax that works with multiple @requestedsignals. Maybe the LIKE keyword is a possibility for keeping the number of returned rows to a minimum. Worst case, @requestedsignals could be left out of the query.
    2. The samplingmode in GE seems to return far too many points at too fine a time increment (one every second). I'm trying to figure out how to decrease the resolution of the returned data (see the statement after this list), but I'm not having much luck so far; nothing written in the GE documentation seems to have any effect. Given, however, that I am dealing with simulated data at 1-second intervals, maybe this is not an issue for you?
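
    For reference, this is the kind of statement I have been experimenting with to thin out the data (I dropped the Quality filter while testing). The interpolated sampling mode and the intervalmilliseconds setting are taken from my reading of the Historian documentation, so treat them as unverified assumptions rather than a working recipe; so far they have not behaved as I expected:

    set samplingmode=interpolated, intervalmilliseconds=60000, rowcount=5000 select Tagname,Value,TimeStamp,Quality from ihRawData where Tagname=(@requestedsignals) and (TimeStamp>=@startdatetime) and (TimeStamp<=@enddatetime)

    In theory that should return one interpolated value per tag per minute instead of the raw 1-second samples.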

    I'll post again if I find out anything new.

    Wayne.



    Answer Verified By: Stephen Jackson 

  • Hi Stephen,

    Sorry for the delay on getting back to you on this.

    Is there a specific reason to include hard-coded date ranges in that custom query? If not, you should be able to leverage the @startdatetime and @enddatetime variables; when SCADAConnect queries the custom statement for column metadata, it substitutes those variables so that the time range is tiny, and you should not run into this issue.
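
    As a rough sketch, this is simply your existing Historical statement with the hard-coded dates swapped for the variables (nothing else changed):

    set rowcount=0, samplingmode=rawbytime select Tagname,Value,TimeStamp,Quality from ihRawData where (@requestedsignals) and (TimeStamp>=@startdatetime) and (TimeStamp<=@enddatetime) and Quality = "Good NonSpecific"

    The variables should pick up their values from the model's calculation options (Start Date, Start Time, Duration), so the same statement keeps working as you change the simulated time range.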

    Regards,

    Wayne.

    P.S. More information on those date/time variables can be found here:

    https://communities.bentley.com/products/hydraulics___hydrology/w/hydraulics_and_hydrology__wiki/33335/how-to-define-real-time-custom-queries-in-scadaconnect