Difference between model and field data during average/low flow conditions

The tables below compare field test results with model predictions: Table 1 covers average/low flow conditions and Table 2 covers high flow conditions. In Table 1 the measured hydraulic grades differ from the model predictions, but on average they differ much less than the high flow results in Table 2. The values marked with an asterisk look like bad data, since the measured hydraulic grade is higher than that of the reservoir. One of the articles in the calibration tips explains that in the low flow scenario the model predictions should match the field data. My question is whether the differences in the average/low flow scenario are significant, or whether they are small enough that the model and the field data can be considered to match well enough.

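For reference, the percentage difference column appears to have been computed as (model prediction − measurement) / model prediction × 100; for example, in the first row of Table 1: (44.81 − 46.829) / 44.81 × 100 ≈ −4.51%.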
Table 1 – Average/low flow conditions
Hydraulic grade of reservoir: 44.91

Hydraulic grade from model prediction | Hydraulic grade from measurement | Percentage difference
44.81                                 | 46.829 *                         | -4.51%
44.81                                 | 43.686                           |  2.51%
44.81                                 | 43.524                           |  2.87%
44.81                                 | 43.639                           |  2.61%
44.81                                 | 44.353                           |  1.02%
44.81                                 | 42.516                           |  5.12%
44.81                                 | 46.67 *                          | -4.15%

* measured hydraulic grade higher than that of the reservoir (suspected bad data)

Table 2 – High flow conditions
Hydraulic grade of reservoir: 44.6

Hydraulic grade from model prediction | Hydraulic grade from measurement | Percentage difference
42.54                                 | 39.444                           |  7.28%
43.51                                 | 36.033                           | 17.18%
42.7                                  | 26.856                           | 37.11%
43.51                                 | 38.378                           | 11.79%
42.68                                 | 37.278                           | 12.66%
42.55                                 | 23.091                           | 45.73%
44.07                                 | 44.612 *                         | -1.23%

* measured hydraulic grade higher than that of the reservoir (suspected bad data)
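In case it helps to see the comparison in one place, below is a minimal Python sketch that reproduces the two tables, assuming the percentage difference is (model − measured) / model × 100 and that a measurement above the reservoir hydraulic grade is flagged as suspect. It also reports the mean absolute difference per scenario, which is the number I am trying to judge as "small enough" for the low flow case.

```python
# Minimal sketch. Assumptions: percentage difference = (model - measured) / model * 100,
# and a measured HGL above the reservoir HGL is flagged as suspect (bad data).
# Values are copied from Tables 1 and 2 above; units are whatever the field data uses.

low_flow = {
    "reservoir_hgl": 44.91,
    "pairs": [  # (model prediction, field measurement)
        (44.81, 46.829), (44.81, 43.686), (44.81, 43.524),
        (44.81, 43.639), (44.81, 44.353), (44.81, 42.516), (44.81, 46.67),
    ],
}

high_flow = {
    "reservoir_hgl": 44.6,
    "pairs": [
        (42.54, 39.444), (43.51, 36.033), (42.7, 26.856),
        (43.51, 38.378), (42.68, 37.278), (42.55, 23.091), (44.07, 44.612),
    ],
}

def summarize(name, scenario):
    diffs = []
    for model, measured in scenario["pairs"]:
        pct = (model - measured) / model * 100          # percentage difference
        suspect = measured > scenario["reservoir_hgl"]  # measured HGL above the reservoir
        diffs.append(pct)
        print(f"{name}: model={model:6.2f} measured={measured:7.3f} "
              f"diff={pct:6.2f}%{'  <- suspect' if suspect else ''}")
    mean_abs = sum(abs(d) for d in diffs) / len(diffs)
    print(f"{name}: mean absolute difference = {mean_abs:.2f}%\n")

summarize("low flow", low_flow)
summarize("high flow", high_flow)
```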