Prepare for CI#4775
…rBridge2mPulse/ThyristorBridge2mPulse_DC_Drive.mo Co-authored-by: Malte Lenz <malte.lenz@gmail.com>
Just to summarize what I tried:
The additional annotation is: TestCase(shouldPass = true, __MAPLib_ComparisonWindow={begin, end}). Ping @MatthiasBSchaefer: is the saving of file sizes (reference results) worth the invested work? Ping @christiankral @casella @HansOlsson @henrikt-ma @maltelenz @StephanZiegler
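For illustration, the proposed annotation on an example model might look as follows. This is a hypothetical sketch: the model name is a placeholder, and the window 4.5–5 s merely echoes the range mentioned later in this thread.

```modelica
model SomeExample "Placeholder example model (illustrative only)"
  // ... actual model content would go here ...
  annotation (
    experiment(StartTime = 0, StopTime = 10, Interval = 1e-4),
    TestCase(shouldPass = true,
      // Proposed vendor-specific annotation: only the time range
      // {begin, end} is used for reference-result comparison.
      __MAPLib_ComparisonWindow = {4.5, 5}));
end SomeExample;
```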
My recollection of the MAP-Lib meeting is that there was no consensus to proceed with the idea of comparison windows, as it was not clear that a simple design would actually solve our problems. As @casella pointed out, a tiny phase shift in the signals to which we were planning to apply the window would cause a result mismatch, so it would only work when the phase is locked against something which cannot drift. What to do instead and who should do it is not something I think we should discuss in this PR. What we noted could be done with the means available today was to make alternative test models (to go in ModelicaTest) where both …
|
@henrikt-ma you're right, but we already have the same phase-shift problem today when comparing over the whole (long) period until stop time. Do we want to solve two problems at once? In most of the original examples I do not want to cut stop time and interval, to preserve the normal "look-and-feel" for the end user, but we might reduce the ComparisonSignals. We extend from such an example in ModelicaTest and provide a different annotation. As I tried to explain, for a real comparison we need a longer stop time and a shorter interval, but we could cut out a ComparisonWindow. I have no idea which shorter stop time and longer interval I should use in these examples to achieve meaningful comparisons including relevant parts of the trajectory. I'll stop working on this draft until we agree on a solution.
|
I am not sure whether the TestCase annotation is the right place to store the ComparisonWindow information. At least in my head (and also in our ReSim tool), there is a strict separation between (a) running the simulation and (b) comparing the results. As such, storing information about the comparison (b) inside the model itself (a) is a bit misplaced. I would rather propose to store this information inside comparisonSignals.txt, where we already store the information about which signals to compare.
|
@MatthiasBSchaefer I'm not sure which is the right place, an annotation or a remark line in comparisonSignals.txt.
|
@henrikt-ma I understand that the comparison tool might detect a tiny phase shift during a short ComparisonWindow while it does not detect it when looking at the whole time range. It's a question of tolerance whether this tiny phase shift should be detected or not.
@MatthiasBSchaefer I strongly believe the TestCase annotation is the right place. I see no conflict between storing the information there and keeping the separation between running the test and comparing. We already have a very nicely structured way to store metadata together with classes, in the form of annotations. It makes no sense to me to invent yet another way to store such information.

Keeping the testing metadata together with the class itself also makes it possible for tools to later expose viewing and editing of this information, in the same (or a related/similar) place as they currently allow editing the experiment annotation. Keeping the class together with the metadata (by putting the data inside the class) also makes it trivially work if classes are renamed or moved around. If metadata is put in some external file, the connection is easily broken by such operations, adding additional maintenance burden on library developers.
If all of this were only about reducing the size of reference results, and we intended to just compare in the same way as before, only on a shorter time interval, then phase shifts wouldn't cause more problems than they do today. However, as noted in the meeting, there are probably many of these high-frequency signals that are almost not checked at all today due to the way the "tube" is set up. If we really care about these high-frequency signals, we will need to use a much stricter time tolerance in the comparison window, and I suppose this could reveal some challenges due to phase shifts that have been flying under our radar so far.
Yes, I also think that the vast majority of high-frequency signals will be driven by things that will not drift. We would probably encounter bigger problems if densely sampled time-windowed comparison were applied to trajectories of a non-periodic nature – but maybe we have no intention of doing so?
|
@AHaumer 2.) should we also reduce the stop time for the examples with comparison_range? It doesn't really make sense to simulate e.g. 10 s but only compare the range 4.5–5 s. Or are you afraid of changing the numerical behavior by adapting the stopTime?
Then we MUST also store the comparison signals, and where they differ, the simulation settings (output interval, tolerance, etc.) for testing, in this annotation. Storing the information for comparison at two different locations is a no-go in my opinion.
Wasn't the plan to add ModelicaTest models extending from the current examples? In that case it would seem completely natural to have one comparisonSignals.txt per example class, just like today. Regarding time window vs …
That is, the time window should really be about extracting a small part of a longer simulation, because the fine time-resolution is what we want the user to see in the main example, but we don't want to pay the price of a huge result file. It is the derived model in ModelicaTest that will use a coarser time grid, and where we can afford to compare more variables for the full duration.
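A sketch of what such a derived ModelicaTest model might look like. Everything here is a placeholder (class names, package path, and numeric values are not taken from this PR); it only illustrates the split between the fine-grid MSL example and the coarse-grid full-duration test:

```modelica
within ModelicaTest.Electrical.PowerConverters;
model SomeExampleComparison
  "Illustrative derived test model: coarser output grid, full-duration comparison"
  // Reuse the MSL example unchanged; only the experiment settings differ.
  extends Modelica.Electrical.PowerConverters.Examples.SomeExample;
  annotation (experiment(StopTime = 10, Interval = 1e-3));
end SomeExampleComparison;
```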
@MatthiasBSchaefer pls. wait a little bit until we have agreement!
You are right, silly me that I didn't see that!
|
@henrikt-ma my point of view:
At least for the examples I touched, this is the right solution (in my opinion) to catch possible deviations.
|
@MatthiasBSchaefer @maltelenz
What's wrong with this setup?
Short interval and "full" …
… in ModelicaTest/Electrical/PowerConverters/Examples/
|
@henrikt-ma that's not correct. I'm looking at one of the examples, e.g. Electrical.PowerConverters.Examples.ACDC. I discussed this with @christiankral (who authored a lot of the examples we're discussing) and we both agree on this point of view. BTW, #4779 is an example (the only one in Thermal with a large ReferenceResult) that can be cured by setting a meaningful Interval – that one is not a draft but ready for review.
|
As I spoke with @AHaumer on the phone today and a couple of times before: I very much agree that utilizing a ComparisonWindow makes a lot of sense. So yes, I am in favor of this approach: a) Reduce the ComparisonSignals in MSL examples, in order to reduce the result file size. This still allows showing the intended results to the users. b) Utilize a ComparisonWindow in ModelicaTest to make sure high-quality results are compared in an efficient way. All the power electronics examples are based on PWM utilizing time events, so no issues with phase shifts should arise.
I don't see a problem here either. For the long term, I think I'd like to move the comparison signals list into annotations as well, but that is a larger discussion that we should have later. We have to take small steps, or we will never get anywhere.
|
Should we change the vendor specific annotation from …
I can see a situation where one wants to simulate after a comparison window, for example to verify that a …
I thought one use case was that you have a long-running simulation as Experiment and only want to compare a small part at the beginning for regression testing. For that use case it makes sense to simulate to the normal stop time (the most obvious reason is that we don't want that simulation to fail).
I suggest going for something that treats the start and end of the window symmetrically. Either both should be optional and default to experiment.StartTime and experiment.StopTime, or both should be mandatory. Personally, I am leaning towards making both mandatory, to encourage cutting away "unnecessary" data both at the beginning and the end. Also, I don't understand why we shouldn't reuse the … This is highly related to the already standardized … One thing which speaks in favor of making both start and end optional is that one could reuse the already established names. (One could of course require that either both or none of these are provided, but that would probably be perceived as an annoying artificial requirement.)
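The two symmetric alternatives could be sketched as follows. This is purely hypothetical syntax for illustration; in particular, the named-argument form and the reuse of the StartTime/StopTime names are assumptions, not an agreed design:

```modelica
// (a) Both bounds mandatory, positional array as proposed in this PR:
annotation (TestCase(shouldPass = true,
  __MAPLib_ComparisonWindow = {4.5, 5}));

// (b) Both bounds optional via named arguments, each defaulting to the
//     corresponding experiment bound when omitted (hypothetical names,
//     echoing the established experiment annotation):
annotation (TestCase(shouldPass = true,
  __MAPLib_ComparisonWindow(StartTime = 4.5, StopTime = 5)));
```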
|
Do we want to leave the door open to multiple comparison windows in the future? That would also influence the exact syntax of the annotation used. |
I think leaving that door open is good. However, I'm not sure how limiting it is, as we could support multiple annotations with the same name: … Obviously, if we already plan to use multiple windows (especially if we want to use them heavily), then it would be good to have something more suited to multiple windows. But if it is just a possibility, I don't see it as a major limitation.
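The repeated-annotation idea might look like the sketch below. Note this is hypothetical: duplicate named annotation entries are not ordinarily accepted in Modelica, so this form would rely on tools explicitly supporting the repetition, as suggested above.

```modelica
// Hypothetical: one window per repeated annotation entry
// (values are illustrative; tools would need to allow the duplicate name).
annotation (TestCase(shouldPass = true,
  __MAPLib_ComparisonWindow = {1.0, 1.5},
  __MAPLib_ComparisonWindow = {4.5, 5}));
```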
Oooff... Before doing that, I would choose to extend it to … I brought it up because it seemed to me that one of @henrikt-ma's alternatives (…) seemed less amenable to having multiple windows. Reusing …
I think we could afford the verbosity of a record array (avoiding the use of …). Or perhaps with positional constructor arguments?
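A minimal sketch of the record-array idea. The record name TimeSlot follows the later suggestion in this thread; the field names start/stop, the plural annotation name, and all values are assumptions for illustration only:

```modelica
// Record array with named constructor arguments (hypothetical design):
annotation (TestCase(shouldPass = true,
  __MAPLib_ComparisonWindows = {
    TimeSlot(start = 1.0, stop = 1.5),
    TimeSlot(start = 4.5, stop = 5.0)}));

// Or, more compactly, with positional constructor arguments:
annotation (TestCase(shouldPass = true,
  __MAPLib_ComparisonWindows = {TimeSlot(1.0, 1.5), TimeSlot(4.5, 5.0)}));
```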
|
Thanks a lot @HansOlsson @henrikt-ma @maltelenz, splendid idea to keep the annotation open to cover other cases.
I would recommend using an uppercase initial for the record constructor, as in TimeSlot. (When I search the specification, …)
|
I completely miss the aspect that a comparison always involves at least two results. In our case it is the model on the one hand and the reference results on the other. But in general you don't know in advance which results you want to compare. You can, for example, also compare the model with another model and want to ensure that they both reach the same steady state. In this case you need a different time window than for comparing the oscillations during the simulation time. From an automation point of view it's really hard to handle multiple time windows. Should both be compared at the same time? (CSV Compare cannot handle this.) Or are these two different comparisons? Then which one should be used in which case?
@MatthiasBSchaefer I can see that all of this would be nice to have eventually. For now, I don't understand why we would need anything new besides the time window information in the model?
I think we should stick to a single time window at this stage. I just wanted us to choose an annotation design that allows us to extend to multiple time windows in the future. The details of exactly what they mean could then also be discussed in the future.
|
@AHaumer: Are we only talking about periodic signals with high frequency, or also "generic" fast-alternating signals?
2 examples with (hopefully) reduced reference results.
Please have a look at the enclosed pdf with my considerations / workflow.
@GallLeo @MatthiasBSchaefer could you please check and compare the size of the reference results before / after?
Bear in mind we have now 2 reference results (1 in MSL, 1 in ModelicaTest).
I'll convert this to "draft" but we should discuss here!
We still have to decide about the vendor specific annotation.
As a first guess I have chosen __MAPLib_ComparisonWindow={begin, end}.
Should we gather some or all changes in one PR or do it in small portions?
Anyhow, it's a considerable amount of work which demands high concentration to avoid bugs.
So I am asking for your opinion, especially
@christiankral @casella @HansOlsson @henrikt-ma @maltelenz @StephanZiegler
PrepCI.pdf