This page discusses a few critical points to optimize the run-time performance of a Persistent State model.

Info

If a service never processes more than a few dozen objects in parallel, only triggers a state transition every few seconds and does not rely on response times below one second, there is no need to fine-tune model and configuration for performance. Other criteria like readability of the state machine diagram, robustness of the service and maintainability of the persisted data are much more important.

Note

Activating asynchronous trace for a certain persistent state class will have a major performance impact on its processing! However, non-traced objects within the same service will behave normally, assuming there are enough system resources to handle the additional load.

Storage Medium

...

  • For models that generate a high CPU load through complex activities or the processing of large amounts of data, a low number of concurrent workers is beneficial, as it reduces the operating system's task-switching overhead.
  • For models that include long-running activities, a higher number of concurrent workers improves response time, as multiple shorter activities can be processed in parallel with the long-running one. This is especially true for long-running transactions that are not CPU-bound, e.g. when waiting for an external system to reply.
  • Additionally, the available license concurrency limits the useful number of workers.

    Info

    Each active worker requires one license slot (concurrent connection) to process activities.

    We recommend using no more than half of the license slots for persistent state workers. If the workers consume all license slots, the persistent state service in question will be severely affected by errors.
    For more information on concurrent connections and E2E Bridge licensing refer to License for Running xUML Services.

    In addition, the memory required to process data, the number of database connections, and other external resources used in the activities should be considered. In practice, all these factors limit the number of useful workers to a few dozen (see also xUML Runtime Resources further below).

We consider the default of 5 workers a reasonable compromise. Task switching between 5 concurrent threads is negligible, even with fully CPU-bound activities. Still, a certain amount of concurrency is possible should some of the workers be busy with long-running tasks.
Decrease this number if you want to limit resource usage, or if concurrency is not a requirement (in which case 1 worker is sufficient). Increase this number if you want lower latency for short-running transitions and your system resources are not yet exhausted.
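
To make the worker-count trade-off tangible, the following is a minimal sketch in Python, not E2E Bridge code: the ThreadPoolExecutor stands in for the persistent state workers, and the sleep calls stand in for transition activities of different lengths. It simply shows that with a single worker the short transitions wait behind a long-running, non-CPU-bound one, while a handful of workers lets them finish almost immediately.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical workload: one long-running, non-CPU-bound "transition"
# (e.g. waiting for an external system) plus several short ones.
def transition(task):
    name, duration = task
    time.sleep(duration)                      # stands in for the activity's work
    return name, time.perf_counter()

def run(workers):
    tasks = [("long", 2.0)] + [(f"short-{i}", 0.1) for i in range(5)]
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for name, finished in pool.map(transition, tasks):
            print(f"{workers} worker(s): {name} finished after {finished - start:.2f}s")

run(1)   # short transitions queue up behind the long-running one
run(5)   # short transitions finish while the long one is still running
```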

Event Selection Algorithm

The event selection algorithm defines in which order the xUML Runtime processes the incoming persistent state events:

  • Favour Signals
    The Runtime processes events according to signal appearance. Signals that have arrived earlier are processed before newer signals.

    The following figure shows a schematic overview on the object/signal processing in this case:

    (Figure: object/signal processing with Favour Signals)

  • Favour Objects
    The Runtime processes events according to object age. Signals of older objects are processed before signals of newer objects.

    The following figure shows a schematic overview on the object/signal processing in this case:

    (Figure: object/signal processing with Favour Objects)
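
The difference between the two selection strategies can be sketched in a few lines of Python. This is a simplified illustration only, not the Runtime's actual implementation, and the event records and field names are invented for the example: Favour Signals orders queued events purely by signal arrival, while Favour Objects sorts by the age of the owning object first.

```python
# Queued events of three persistent state objects (all values are made up).
events = [
    {"object": "A", "object_created": 1, "signal_arrived": 4},
    {"object": "B", "object_created": 2, "signal_arrived": 3},
    {"object": "A", "object_created": 1, "signal_arrived": 6},
    {"object": "C", "object_created": 5, "signal_arrived": 5},
]

# Favour Signals: oldest signal first, regardless of which object it belongs to.
favour_signals = sorted(events, key=lambda e: e["signal_arrived"])

# Favour Objects: signals of the oldest object first, then by signal arrival.
favour_objects = sorted(events, key=lambda e: (e["object_created"], e["signal_arrived"]))

print([e["object"] for e in favour_signals])   # ['B', 'A', 'C', 'A']
print([e["object"] for e in favour_objects])   # ['A', 'A', 'B', 'C']
```

With Favour Objects, both signals of the oldest object A are processed first, so A reaches its end state (and can be deleted) before B and C are touched.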

The overall throughput should be the same whatever algorithm you use, but:

  • Comparing the figures above, you can see that with Favour Objects, objects get deleted earlier. This reduces the load on the persistent state engine.
  • You should look at your process and decide what is important in your particular case: the arrival at a certain state (Favour Signals) or that objects get to the end state fast (Favour Objects).
  • Favour Objects can help if you have states that do not run very well in parallel.
  • In general, these considerations are relevant only for persistent state services with a huge number of events, especially if the number of workers is lower than the number of events. The event selection algorithm is only invoked if events are queued (for a definition of event, refer to Persistent States Concept).

Note, however, that these considerations are very abstract. How the persistent state engine processes the events in a given scenario is dependent on a variety of factors, e.g.

  • which activities are implemented in the particular states and how long they take
  • how many consecutive states are implemented in the state machine

Model

The model has an immediate impact on the performance of the resulting run-time service.

...

The number of workers limits the required number of database connections (or connections to any other external system accessed in state transition activities).
As of now, the E2E Bridge will only use one database connection per deployed service to access the persistent state database.
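
If the workers share this single connection, their accesses to the persistent state database are effectively serialized. The following Python sketch is purely illustrative (the lock and timings are hypothetical, not Bridge internals) and shows why adding workers does not multiply persistence throughput:

```python
import threading, time

db_connection = threading.Lock()       # stands in for the single shared connection

def persist_object_state(worker_id):
    with db_connection:                 # every worker waits for the one connection
        time.sleep(0.05)                # simulated database round trip
    print(f"worker {worker_id} persisted its object")

workers = [threading.Thread(target=persist_object_state, args=(i,)) for i in range(5)]
start = time.perf_counter()
for w in workers: w.start()
for w in workers: w.join()
# Elapsed time is roughly 5 * 0.05 s: the writes run one after another,
# regardless of how many workers there are.
print(f"elapsed: {time.perf_counter() - start:.2f}s")
```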