
Prepare for CI#4775

Draft
AHaumer wants to merge 8 commits into modelica:master from AHaumer:PrepCI

Conversation

@AHaumer
Contributor

@AHaumer AHaumer commented Apr 21, 2026

2 examples with (hopefully) reduced reference results.
Please have a look at the enclosed pdf with my considerations / workflow.
@GallLeo @MatthiasBSchaefer could you please check and compare the size of the reference results before / after?
Bear in mind we have now 2 reference results (1 in MSL, 1 in ModelicaTest).
I'll convert this to "draft" but we should discuss here!
We still have to decide about the vendor specific annotation.
As a first guess I have chosen __MAPLib_ComparisonWindow={begin, end}.
Should we gather some or all changes in one PR or do it in small portions?
Anyhow, it is a considerable amount of work that demands high concentration to avoid bugs.

So I am asking for your opinion, especially
@christiankral @casella @HansOlsson @henrikt-ma @maltelenz @StephanZiegler

PrepCI.pdf

@AHaumer AHaumer added the L: Electrical.PowerConverters, discussion, and help wanted labels Apr 21, 2026
@AHaumer AHaumer marked this pull request as draft April 21, 2026 15:25
@AHaumer AHaumer added the ref-result Issue addresses the reference results label Apr 21, 2026
AHaumer and others added 2 commits April 21, 2026 17:45
…rBridge2mPulse/ThyristorBridge2mPulse_DC_Drive.mo

Co-authored-by: Malte Lenz <malte.lenz@gmail.com>
@AHaumer AHaumer requested a review from maltelenz April 21, 2026 16:18
@AHaumer
Contributor Author

AHaumer commented Apr 22, 2026

Just to summarize what I tried:
I am working on Modelica.Electrical.PowerConverters.Examples with reference results > 10 MB.

  • I left the original stop time and only reduced the interval where possible, but 5 kHz sampling is desired; the trajectories should look smooth for the end user.
  • Reduced the ComparisonSignals to (in most cases) 2 signals without too many transients.
  • Extended these examples in ModelicaTest.Electrical.PowerConverters.Examples.
  • Kept the original stop time and interval. A stop time shortly after the start time is not appropriate, since in most cases we start from zero and need some time until we reach the trajectory of interest. Therefore a "ComparisonWindow" is inevitable.
  • Added an annotation with a window length of (in most cases) 100 µs, to hit an "interesting" period within the whole simulation and to reduce the number of result points drastically.
  • Used the original full set of ComparisonSignals.

The additional annotation is: TestCase(shouldPass = true, __MAPLib_ComparisonWindow={begin, end})

Ping @MatthiasBSchaefer is the saving of file sizes (reference results) worth the invested work?

Ping @christiankral @casella @HansOlsson @henrikt-ma @maltelenz @StephanZiegler
Should I proceed at least with the PowerConverters?
Who will implement (and how) the ComparisonWindow in the comparison tool?
Would it be sufficient to write a small tool to delete all lines before begin and after end?
I suppose the comparison tool should work with a start time = begin > 0.

@henrikt-ma
Contributor

henrikt-ma commented Apr 22, 2026

Ping @christiankral @casella @HansOlsson @henrikt-ma @maltelenz @StephanZiegler
Should I proceed at least with the PowerConverters?
Who will implement (and how) the ComparisonWindow in the comparison tool?
Would it be sufficient to write a small tool to delete all lines before begin and after end?
I suppose the comparison tool should work with a start time = begin > 0.

My recollection of the MAP-Lib meeting is that there was no consensus to proceed with the idea of comparison windows, as it was not clear that a simple design would actually solve our problems. As @casella pointed out, a tiny phase shift in the signals to which we were planning to apply the window would cause a result mismatch, so it would only work when the phase is locked against something which cannot drift. What to do instead and who should do it is not something I think we should discuss in this PR.
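To make the phase-shift concern concrete, here is an illustrative sketch (all numbers are hypothetical, not taken from any actual MSL example): even a microsecond of drift in a 5 kHz signal produces a pointwise deviation of a few percent of the amplitude.

```python
# Illustrative only: how a tiny phase drift defeats a strict pointwise
# comparison of a high-frequency signal. The 5 kHz frequency and the 1 us
# drift are made-up numbers, not taken from any actual MSL example.
import math

f = 5000.0    # assumed switching/carrier frequency in Hz
drift = 1e-6  # assumed accumulated clock drift of 1 microsecond
dt = 1e-6     # sampling step of the comparison grid
n = 2000      # 2 ms comparison window

# Largest pointwise deviation between the signal and its drifted copy
err = max(abs(math.sin(2 * math.pi * f * k * dt)
              - math.sin(2 * math.pi * f * (k * dt + drift)))
          for k in range(n))
# err is roughly 2*pi*f*drift, i.e. about 3 % of the amplitude -- far more
# than a tight tolerance would allow, although the trajectories look identical.
print(err)
```

This is why a tight windowed comparison only works when the phase is driven by something that cannot drift.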

What we noted could be done with the means available today was to make alternative test models (to go in ModelicaTest) where both Interval and StopTime are shortened, and the selected comparison signals are few. Since we can only test correctness near the beginning of a simulation this way, I am personally not convinced that it is worth the effort of creating and maintaining these additional test models until we have better ways of expressing what we would really like to test.

@AHaumer
Contributor Author

AHaumer commented Apr 22, 2026

@henrikt-ma you're right, but we already have the same phase-shift problem today when comparing over the whole (long) period until stop time. Do we want to solve 2 problems at once?
Well, shortening the stop time is a nice trick for chaotic systems (which need some time to become chaotic) but not for power electronics and drives. Most of these models start at standstill, and during the first phase often nothing really interesting happens - shall I compare that, or more relevant periods later? I need the ComparisonWindow.

In most of the original examples I do not want to cut the stop time and interval, to preserve the normal "look and feel" for the end user, but we might reduce the ComparisonSignals. We extend from such an example in ModelicaTest and provide a different annotation. As I tried to explain, for a real comparison we need a longer stop time and a shorter interval, but we could cut out a ComparisonWindow.
In my humble opinion it should be easy to implement a small preprocessing tool that removes the lines before begin and after end of the ComparisonWindow from the csv; the comparison tool would then work on these shorter and smaller reference results.
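Such a preprocessing step could look like the following pure-Python sketch (a hypothetical helper, not an existing tool; it assumes the reference csv has a header row and time in the first column):

```python
# Hypothetical sketch of the proposed preprocessing step: keep only the rows
# of a reference-result csv whose time value lies inside the comparison
# window [begin, end]. Assumes a header row and time in the first column.
import csv
import io

def trim_to_window(csv_text, begin, end):
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    kept = [r for r in data if begin <= float(r[0]) <= end]
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    writer.writerow(header)
    writer.writerows(kept)
    return out.getvalue()

sample = "time,v1\n0.0,1\n0.5,2\n1.0,3\n1.5,4\n2.0,5\n"
print(trim_to_window(sample, 0.5, 1.5))  # keeps only the rows at 0.5, 1.0, 1.5
```

The comparison tool itself stays unchanged and simply sees a shorter file.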

I have no idea which shorter stop time and longer interval I should use in these examples to achieve meaningful comparisons including relevant parts of the trajectory. I stop working on that draft until we agree on a solution.

@AHaumer AHaumer requested a review from HansOlsson April 22, 2026 22:06
@MatthiasBSchaefer
Contributor

I am not sure whether the TestCase-annotation is the right place to store the ComparisonWindow information.

At least in my head (and also in our ReSim tool), there is a strict separation between
a) running the example models
and afterwards deciding
b) what to compare against (even if for MAP-LIB this is obviously the reference results)

Thus, storing information about the comparison (b) inside the model itself (a) is a bit misplaced.

I would rather propose to store this information next to / inside the comparisonSignals.txt, where we already store the information about the comparison

@AHaumer
Contributor Author

AHaumer commented Apr 23, 2026

@MatthiasBSchaefer I'm not sure where the right place is, an annotation or a remark line in ComparisonSignals.
I'm open to both solutions. I'm just thinking that the decision about the ComparisonWindow is made by investigating the model, not the comparison.
Could you please check the savings in file size?

@AHaumer
Contributor Author

AHaumer commented Apr 23, 2026

@henrikt-ma I understand that the comparison tool might detect a tiny phase shift within a short ComparisonWindow while it does not detect it when looking at the whole time range. It's a question of tolerance whether this tiny phase shift should be detected or not.
Besides that, where should the tiny phase shift come from? I can imagine that a "clock" purely depending on state events will accumulate errors, while time events should keep precision.
The PowerConverter examples should all rely on time events. State events occur, but they are not responsible for the "clock". There could be a few time events at the same point in time, e.g. one from the modulation carrier signal and others from calculating mean values over the switching period - but this should not lead to problems.
I'll have to check the other examples with large reference results (Electrical.Machines, Magnetic.FundamentalWave, Magnetic.QuasiStatic.FundamentalWave; one in Thermal.HeatTransfer should be easy to cure by correcting the interval) for whether we could run into such problems. If not - why not proceed now and decide whether the ComparisonWindow should be defined in an annotation or in ComparisonSignals? I remember we want to have a solution by the end of June.

@maltelenz
Contributor

I am not sure whether the TestCase-annotation is the right place to store the ComparisonWindow information.

At least in my head (and also in our ReSim tool), there is a strict separation between a) running the example models and afterwards deciding b) what to compare against (even if for MAP-LIB this is obviously the reference results)

Thus, storing information about the comparison (b) inside the model itself (a) is a bit misplaced.

I would rather propose to store this information next to / inside the comparisonSignals.txt, where we already store the information about the comparison

@MatthiasBSchaefer I strongly believe the TestCase annotation is the right place.

I see no conflict between storing the information there, and keeping the separation between running the test and comparing.

We already have a very nice structured way to store metadata together with classes in the form of annotations. It makes no sense to me to invent yet another way to store such information.

Keeping the testing metadata together with the class itself, also makes it a possibility for tools to later easily expose viewing and editing this information, together in the same (or related/similar) place as they currently allow editing the experiment annotation.

Keeping the class together with the metadata (by putting the data inside the class) also makes it trivially work if classes are renamed or moved around. If metadata is put in some external file, the connection is easily broken when doing such operations, adding additional maintenance burden on library developers.

@henrikt-ma
Contributor

@henrikt-ma I understand that the comparison tool might detect a tiny phase shift within a short ComparisonWindow while it does not detect it when looking at the whole time range. It's a question of tolerance whether this tiny phase shift should be detected or not.

If all of this were only about reducing the size of reference results and we intended to compare in the same way as before, only on a shorter time interval, then phase shifts wouldn't cause more problems than they do today. However, as noted in the meeting, there are probably many of these high-frequency signals that are almost not checked at all today due to the way the "tube" is set up. If we really care about these high-frequency signals, we will need to use a much stricter time tolerance in the comparison window, and I suppose this could reveal some challenges due to phase shifts that have been flying under our radar so far.

Besides that, where should the tiny phase shift come from? I can imagine that a "clock" purely depending on state events will accumulate errors while time events should keep precision.

Yes, I also think that the vast majority of high-frequency signals will be driven by things that will not drift. We would probably encounter bigger problems if densely sampled time-windowed data were compared for trajectories of a non-periodic nature – but maybe we have no intention of doing so?

@MatthiasBSchaefer
Contributor

@AHaumer
1.) At first glance the reduction in size seems to be successful.
It reduces the size of the csv files (sum over all changed and newly added examples) from 177 MB to 6 MB.
I still have to verify and double-check it, and then I will make a PR in the MAP-LIB_ReferenceResult repo.

2.) Should we also reduce the stop time for the examples with comparison_range? It doesn't really make sense to simulate e.g. 10 s but only compare the range 4.5-5 s. Or are you afraid of changing the numerical behavior by adapting the StopTime?

@maltelenz

@MatthiasBSchaefer I strongly believe the TestCase annotation is the right place.

I see no conflict between storing the information there, and keeping the separation between running the test and comparing.

We already have a very nice structured way to store metadata together with classes in the form of annotations. It makes no sense to me to invent yet another way to store such information.

Keeping the testing metadata together with the class itself, also makes it a possibility for tools to later easily expose viewing and editing this information, together in the same (or related/similar) place as they currently allow editing the experiment annotation.

Keeping the class together with the metadata (by putting the data inside the class) also makes it trivially work if classes are renamed or moved around. If metadata is put in some external file, the connection is easily broken when doing such operations, adding additional maintenance burden on library developers.

Then we MUST also store the comparison signals and, where applicable, different simulation settings (output interval, tolerance, etc.) for testing in this annotation. Storing the information for comparison at two different locations is a no-go in my opinion.

@henrikt-ma
Contributor

Then we MUST also store the comparison signals and, where applicable, different simulation settings (output interval, tolerance, etc.) for testing in this annotation. Storing the information for comparison at two different locations is a no-go in my opinion.

Wasn't the plan to add ModelicaTest models extending from the current examples? In that case it would seem completely natural to have one comparisonSignals.txt per example class, just like today.

Regarding time window vs StopTime, I believe this is the combination we would like to achieve:

  • Main example runs with short Interval, "full" StopTime, few entries in comparisonSignals.txt, and a short time window to test.
  • Derived example runs with long Interval, "full" StopTime, more entries in comparisonSignals.txt, and no time windowing.

That is, the time window should really be about extracting a small part of a longer simulation, because the fine time-resolution is what we want the user to see in the main example, but we don't want to pay the price of a huge result file. It is the derived model in ModelicaTest that will use a coarser time grid, and where we can afford to compare more variables for the full duration.

@AHaumer
Contributor Author

AHaumer commented Apr 24, 2026

@AHaumer 1.) At first glance the reduction in size seems to be successful. It reduces the size of the csv files (sum over all changed and newly added examples) from 177 MB to 6 MB. I still have to verify and double-check it, and then I will make a PR in the MAP-LIB_ReferenceResult repo.

@MatthiasBSchaefer please wait a little until we have agreement!

2.) Should we also reduce the stop time for the examples with comparison_range? It doesn't really make sense to simulate e.g. 10 s but only compare the range 4.5-5 s. Or are you afraid of changing the numerical behavior by adapting the StopTime?

You are right, silly me that I didn't see that!
StopTime could be = ComparisonWindow.end

@AHaumer
Contributor Author

AHaumer commented Apr 24, 2026

@henrikt-ma my point of view:

  • Main example runs with short Interval, "full" StopTime, few entries in comparisonSignals.txt, comparison over the whole time span.
  • Derived example runs with maybe the same Interval, "short" StopTime = ComparisonWindow.end , all entries in comparisonSignals.txt, and short ComparisonWindow.

At least for the examples I touched this is the right solution (in my opinion) to catch possible deviations.

@AHaumer
Contributor Author

AHaumer commented Apr 24, 2026

@MatthiasBSchaefer @maltelenz
Nowadays we have a split: the experiment annotation in the model + ComparisonSignals in a separate text file.
What we would get in the future:

  • experiment annotation in the original model + shortened ComparisonSignals in a separate text file.
  • TestCase annotation in the derived model + full ComparisonSignals in a separate text file.

What's wrong with this constellation?

@henrikt-ma
Contributor

henrikt-ma commented Apr 24, 2026

  • Main example runs with short Interval, "full" StopTime, few entries in comparisonSignals.txt, comparison over the whole time span.

Short interval and "full" StopTime means a ton of data. I thought we were trying hard to avoid that. I don't think that just keeping down the number of compared signals will give us the desired size reduction.

… in ModelicaTest/Electrical/PowerConverters/Examples/
@AHaumer
Contributor Author

AHaumer commented Apr 26, 2026

@henrikt-ma that's not correct. I'm looking at one of the examples, e.g. Electrical.PowerConverters.Examples.ACDC.RectifierCenterTap2mPulse.RectifierCenterTap2mPulse_RLV_Characteristic (and other examples are similar):
Reducing the ComparisonSignals from 20 to 2 means a reduction by 90%, even with a short interval (which is necessary to provide smooth trajectories for the end user!) and the full StopTime (which is necessary for such an example in order not to deceive the end user!). This comparison might catch some deviations over the whole time span.
Reducing the ComparisonWindow from 10 s to 0.1 s, even with the same short interval and StopTime = ComparisonWindow.end, means a reduction by 99% (but a second [small!] ReferenceResult). The later start of the ComparisonWindow is necessary because in many examples we start the drive with a switch later than StartTime, and we want to compare after the drive has begun to start.
So in my humble opinion we keep smooth examples for the end user and a high chance to catch deviations, and reduce the size of the biggest ReferenceResults (> 10 MB) down to 10%.
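The size arithmetic above can be double-checked with a back-of-the-envelope calculation (the row count is an illustrative figure; real csv sizes also depend on number formatting):

```python
# Back-of-the-envelope check of the claimed reductions. The row count is an
# illustrative figure, not measured from an actual reference-result file.
rows_full = 50000        # e.g. 10 s of simulation at a 0.2 ms output interval
signals_full = 20
signals_reduced = 2

cells_original = rows_full * signals_full
cells_fewer_signals = rows_full * signals_reduced  # full window, only 2 signals
rows_window = rows_full * 0.1 / 10                 # 0.1 s window out of 10 s
cells_windowed = rows_window * signals_full        # short window, all 20 signals

saving_signals = 1 - cells_fewer_signals / cells_original  # 0.9  -> "90 %"
saving_window = 1 - cells_windowed / cells_original        # 0.99 -> "99 %"
print(saving_signals, saving_window)
```

Both numbers match the percentages quoted in the comment.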

I discussed this with @christiankral (who authored a lot of the examples we're discussing) and we both agree on this point of view.

BTW #4779 is an example (the only one in Thermal with a large ReferenceResult) that can be cured by setting a meaningful Interval - that one is not a draft but ready for review.

@christiankral
Contributor

As I discussed with @AHaumer on the phone today and a couple of times before: I very much agree that utilizing a ComparisonWindow makes a lot of sense. So yes, I am in favor of this approach:

a) Reduce the ComparisonSignals in MSL examples in order to reduce the result file size. This still allows showing the intended results to the users.

b) Utilize a ComparisonWindow in ModelicaTest to make sure high-quality results are compared in an efficient way.

All the power electronics examples are based on PWM utilizing time events, so no issues with phase shifts should arise.

@maltelenz
Contributor

maltelenz commented Apr 27, 2026

@MatthiasBSchaefer @maltelenz Nowadays we have a split: the experiment annotation in the model + ComparisonSignals in a separate text file. What we would get in the future:

* experiment annotation in the original model + shortened ComparisonSignals in a separate text file.

* TestCase annotation in the derived model + full ComparisonSignals in a separate text file.

What's wrong with this constellation?

I don't see a problem here either.

For the long term, I think I'd like to move the comparison signals list into annotations as well, but that is a larger discussion that we should have later. We have to take small steps, or we will never get anywhere.

@AHaumer
Contributor Author

AHaumer commented Apr 27, 2026

Should we change the vendor specific annotation from __MAPLib_ComparisonWindow={begin, end} to __MAPLib_ComparisonStart=begin since it doesn't make sense to simulate longer than
StopTime = ComparisonWindow.end?
@MatthiasBSchaefer @maltelenz what's your opinion?

@maltelenz
Contributor

Should we change the vendor specific annotation from __MAPLib_ComparisonWindow={begin, end} to __MAPLib_ComparisonStart=begin since it doesn't make sense to simulate longer than StopTime = ComparisonWindow.end? @MatthiasBSchaefer @maltelenz what's your opinion?

I can see a situation where one wants to simulate after a comparison window, for example to verify that a terminate is triggered at the correct time. So my opinion is that we want to be explicit about the begin and end time of any comparison window(s).

@HansOlsson
Contributor

Should we change the vendor specific annotation from __MAPLib_ComparisonWindow={begin, end} to __MAPLib_ComparisonStart=begin since it doesn't make sense to simulate longer than StopTime = ComparisonWindow.end? @MatthiasBSchaefer @maltelenz what's your opinion?

I thought one use case was that you have a long-running simulation as Experiment and only want to compare a small part in the beginning for regression testing. For that use-case it makes sense to simulate to the normal stop-time (the most obvious reason is that we don't want that simulation to fail).

@henrikt-ma
Contributor

Should we change the vendor specific annotation from __MAPLib_ComparisonWindow={begin, end} to __MAPLib_ComparisonStart=begin since it doesn't make sense to simulate longer than
StopTime = ComparisonWindow.end?

I suggest going for something that treats the start and end of the window symmetrically. Either both should be optional and default to experiment.StartTime and experiment.StopTime, or both should be mandatory. Personally, I am leaning towards making both mandatory to encourage cutting away "unnecessary" data both at the beginning and the end.

Also, I don't understand why we shouldn't reuse the __ModelicaAssociation prefix, which is already in use by the language specification and, as far as I understand, not intended to be reserved specifically for the specification (in that case it should have been something like __MAPLang or __ModelicaLanguage instead).

This is highly related to the already standardized TestCase, so the logical place would be inside of it. Further, I see two sub-categories of things we might want to have inside TestCase, namely settings for generating reference results and settings for how to perform the result comparison. Thus:

TestCase(
  __ModelicaAssociation(
    Comparison(
      timeWindow = {5, 7},
      comparisonSignals = … // Example of future extension
    ),
    ReferenceCreation( // Example of future extension
      ToleranceFactor = 0.05, // Example of future extension
      IntervalFactor = 0.25 // Example of future extension
    )
  )
)

One thing which speaks in favor of making both start and end optional is that one could reuse the already established names StartTime and StopTime. Thus:

TestCase(
  __ModelicaAssociation(
    Comparison(
      StartTime = 5,
      StopTime = 7
    )
  )
)

(One could of course require that either both or none of these are provided, but that would probably be perceived as an annoying artificial requirement.)

@maltelenz
Contributor

Do we want to leave the door open to multiple comparison windows in the future? That would also influence the exact syntax of the annotation used.

@HansOlsson
Contributor

Do we want to leave the door open to multiple comparison windows in the future? That would also influence the exact syntax of the annotation used.

I think leaving that door open is good.

However, I'm not sure how limiting it is - as we could support multiple annotations with the same name:

   ComparisonWindow={1, 3}, 
   ComparisonWindow={5, 7}

Obviously, if we already plan to use multiple windows (especially if we want to use them massively), then it would be good to have something better suited for multiple windows. But if it is just a possibility, I don't see it as a major limitation.

@maltelenz
Contributor

However, I'm not sure how limiting it is - as we could support multiple annotations with the same name:

Oooff... Before doing that, I would choose to extend it to ComparisonWindow = {{1, 2}, {4, 5}}, I think.

I brought it up because it seemed to me that one of @henrikt-ma's alternatives:

TestCase(
  __ModelicaAssociation(
    Comparison(
      StartTime = 5,
      StopTime = 7
    )
  )
)

seemed less amenable to having multiple windows.

Reusing StartTime and StopTime might also create confusion, since they can be mistaken for experiment information, which they are not.

@henrikt-ma
Contributor

henrikt-ma commented Apr 27, 2026

Oooff... Before doing that, I would choose to extend it to ComparisonWindow = {{1, 2}, {4, 5}}, I think.

I think we could afford the verbosity of a record array (avoiding the use of StartTime and StopTime as suggested):

timeWindows = {Interval(begin = 1, end = 2), Interval(begin = 4, end = 5)}

Or perhaps with positional constructor arguments?

timeWindows = {Interval(1, 2), Interval(4, 5)}

@AHaumer
Contributor Author

AHaumer commented Apr 27, 2026

Thanks a lot @HansOlsson @henrikt-ma @maltelenz - splendid idea to keep the annotation open to cover other cases.
It looks like "end" is not the best choice, because "end" is a Modelica keyword and could confuse a parser.
I'd also like to avoid "StartTime", "StopTime" and "Interval".
Maybe I try something like
TestCase(__ModelicaAssociation(Comparison(timeWindows={timeSlot(1,2), timeSlot(4,5)})))

@henrikt-ma
Contributor

Thanks a lot @HansOlsson @henrikt-ma @maltelenz splendid idea to keep the annotation open to cover other cases. It looks like "end" is not the best choice because "end" is a Modelica keyword and could confuse a parser. I'd also like to avoid "StartTime", "StopTime" and "Interval". Maybe I try something like TestCase(__ModelicaAssociation(Comparison(timeWindows={timeSlot(1,2), timeSlot(4,5)})))

I would recommend using an uppercase initial for the record constructor, as in TimeSlot. (When I search the specification, experiment stands out as the only (but very prominent) exception to the rule.)

@MatthiasBSchaefer
Contributor

MatthiasBSchaefer commented Apr 28, 2026

I completely miss the aspect that a comparison always involves at least two results. In our case it is the model on the one hand and the reference results on the other. But in general you don't know in advance which results you want to compare. You can - for example - also compare the model with another model and want to ensure that they both reach the same steady state. In this case you need a different time window than for comparing the oscillations during the simulation time.
Thus - in my opinion - either we need to specify also the comparison partner (reference results) in the TestCase annotation of the model (together with all information needed for comparison: time window, comparison signals, ...), or specify it in a separate file (as done in comparison_signals.txt), particularly for the comparison with the reference results.


From an automation point of view it's really hard to handle multiple time windows. Should both be compared at the same time? (CSV Compare cannot handle this.) Or are these two different comparisons? Then which one should be used in which case?

@maltelenz
Contributor

I completely miss the aspect that a comparison always involves at least two results. In our case it is the model on the one hand and the reference results on the other. But in general you don't know in advance which results you want to compare. You can - for example - also compare the model with another model and want to ensure that they both reach the same steady state. In this case you need a different time window than for comparing the oscillations during the simulation time.
Thus - in my opinion - either we need to specify also the comparison partner (reference results) in the TestCase annotation of the model (together with all information needed for comparison: time window, comparison signals, ...), or specify it in a separate file (as done in comparison_signals.txt), particularly for the comparison with the reference results.

@MatthiasBSchaefer I can see that all of this would be nice to have eventually. For now, I don't understand why we would need anything new besides the time window information in the model?

From automation point of view it's really hard to handle multiple time windows. Should both be compared at the same time ? (CSV Compare can not handle this) or are these two different comparisons? Then, which one should be used in which case ?

I think we should stick to a single time window at this stage. I just wanted us to choose an annotation design that allows us to extend to multiple time windows in the future. The details of exactly what they mean could then also be discussed in the future.

@StephanZiegler
Contributor

@AHaumer: Are we only talking about periodic signals with high frequency, or also "generic" fast alternating signals?
Does a phase shift in a steady-state oscillation indicate a regression?
If not, would it make sense to identify one period within the defined comparison window and use that for comparing? This might be a future enhancement for the comparison tool.
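For illustration, such a period-based comparison could be sketched as follows (pure Python, hypothetical; a real tool would need robust period detection and interpolation). Cutting one period out at an upward zero crossing makes the comparison insensitive to a constant phase shift:

```python
# Hypothetical sketch: compare one extracted period of a steady-state
# oscillation instead of the raw, possibly phase-shifted trajectories.
import math

def one_period(samples):
    """Cut out one period between two successive upward zero crossings."""
    ups = [i for i in range(1, len(samples))
           if samples[i - 1] < 0 <= samples[i]]
    return samples[ups[0]:ups[1]]

n = 100  # samples per period
ref = [math.sin(2 * math.pi * k / n) for k in range(3 * n)]
shifted = [math.sin(2 * math.pi * k / n + 0.3) for k in range(3 * n)]  # phase-shifted copy

p_ref, p_new = one_period(ref), one_period(shifted)
err = max(abs(a - b) for a, b in zip(p_ref, p_new))   # per-period deviation
raw = max(abs(a - b) for a, b in zip(ref, shifted))   # naive pointwise deviation
print(err, raw)  # the per-period error is far smaller than the raw error
```

The phase shift that would fail a naive pointwise comparison largely disappears once both signals are re-anchored at their own zero crossings.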
