Service clients may create heavy load for service providers, depending on the number of clients running in parallel. A common way of distributing this load is to use more than one node to host the services. A load balancer then distributes the load among the identical services. The figure below depicts a scenario involving two servers and an external load balancer, for instance a hardware appliance. On each node, there is an independent Bridge installation. Each Bridge installation leads to two operating system services (Windows services, Unix daemons):
From an administration point of view, the E2E Console service is the only service that must be started and monitored by the operating system, and possibly also monitored by an external console such as HP OpenView.
When a service is deployed on a server node, the Bridge starts an xUML Runtime instance executing this service as an operating system process and starts monitoring this service (see Monitoring Load Balanced Nodes). The deployed service (= repository) is stored in the E2E_BRIDGE_DATA directory. Since both server installations on both nodes are independent of each other, each service repository must be deployed on both nodes separately. This approach has the benefit that it is possible to update services on the fly by directing the load balancer to one node while updating the other node with a new service version. This way, the service is online all the time without any interruption. However, the drawback of this approach is that all deployments must be done twice.
If the services hold state, either by using E2E Persistent State objects or by sharing persistent data, the data should be put into a shared external database. Of course, this makes the database a single point of failure. However, this is a common scenario most operations departments are used to.
As identical services run concurrently, all write operations of the same services must use resources that are safe for concurrent writes. This is guaranteed by databases, message queues, SOAP/HTTP services, and the like. In general, however, it does not apply to file systems or (s)ftp. For such non-safe resources, the modeler has to provide their own coordination mechanism, for example a persistent state object controlling access of the concurrent services to the shared resource. In the following figure, non-safe resources are flagged with a warning sign. We recommend not using non-safe backends at all in a load balanced setup.
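The coordination idea can be sketched as follows. This is an illustration only, not xUML and not part of the Bridge: all file, table, and function names are invented. It funnels every write to a non-safe resource (a shared file) through a database transaction, because the database itself is safe for concurrent access:

```python
import os
import sqlite3
import tempfile

# Invented paths for the demo: a coordination database and a shared file
# that would otherwise be unsafe under concurrent writes.
db_path = os.path.join(tempfile.gettempdir(), "coordination_demo.db")
shared_file = os.path.join(tempfile.gettempdir(), "shared_resource.txt")

def append_exclusively(node_name: str, line: str) -> None:
    """Write to the shared file only while holding a database write lock."""
    # isolation_level=None puts sqlite3 in autocommit mode, so the explicit
    # BEGIN IMMEDIATE below really controls the transaction.
    conn = sqlite3.connect(db_path, timeout=30, isolation_level=None)
    try:
        # BEGIN IMMEDIATE takes the database write lock; a concurrent node
        # blocks here until COMMIT, so file writes never interleave.
        conn.execute("BEGIN IMMEDIATE")
        with open(shared_file, "a") as f:
            f.write(f"{node_name}: {line}\n")
        conn.execute("COMMIT")
    finally:
        conn.close()

append_exclusively("node-1", "first record")
append_exclusively("node-2", "second record")
```

In a real load balanced setup the database would be the shared external one, so the lock is visible to both nodes; the same pattern can be expressed in the model with a persistent state object guarding the resource.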
Though this is not always feasible, in many use cases it makes sense to use the load balancing architecture for online services, but another configuration for batch processes, which typically have to access files and file protocols (ftp, sftp, ftps, ...). This use case is discussed on Batch Processing.
A big advantage of a pure load balanced approach is that the Bridge processes do not hold any state. This implies that this approach scales very well and does not require any special load balancing features (even simple DNS round robin works quite well).
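Why statelessness makes the balancer trivial can be sketched in a few lines. This is an illustration, not the Bridge: node URLs and the health-check callback are invented. Because any node can serve any request, plain round-robin selection is already a correct dispatch strategy:

```python
from itertools import cycle
from typing import Callable

# Invented node addresses for the example.
NODES = ["http://node1:21159", "http://node2:21159"]
_ring = cycle(NODES)

def pick_node(is_healthy: Callable[[str], bool] = lambda n: True) -> str:
    """Return the next node in round-robin order, skipping unhealthy ones."""
    for _ in range(len(NODES)):
        node = next(_ring)
        if is_healthy(node):
            return node
    raise RuntimeError("no healthy node available")

print(pick_node())  # → http://node1:21159
print(pick_node())  # → http://node2:21159
# During maintenance of node2, simply report it as unhealthy:
print(pick_node(is_healthy=lambda n: "node2" not in n))  # → http://node1:21159
```

DNS round robin achieves the first two lines with no code at all; the health-check variant mirrors what the external load balancer does when a node is taken offline for maintenance.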
Figure: Two Servers and an External Load Balancer
The file system contains two directories on each machine:
When maintaining one node, the following steps must be taken:
After that, the node to be maintained can be shut down. This is done by stopping the OS services E2E Proxy and E2E Console. The shutdown of the Console process triggers the shutdown of all deployed services. Each service being shut down will wait until it finishes the current request.
If one of the services cannot be properly shut down, because of, for example, a hanging database connection, this process must be killed using the Bridge.
All online services should be managed by the Bridge. It is technically possible to start and stop all online services by using OS scripts. However, the Bridge is the only entity that knows whether services have been newly deployed or deleted; external tools do not. So, if starting up or shutting down the services (via the Automatic Startup flag) is to happen automatically, the operator should start/stop the E2E Console service/daemon only.
Additionally, it is important that operators monitor the Console service/daemon, because if it is down, no management and monitoring of the online services will take place.
The Bridge monitors all deployed services. If a service writes a log entry of type ERROR or FATAL, or if a service terminates unexpectedly (crashes), the Bridge can call a monitoring service with all information found in the log file.
A monitoring service is a plain SOAP service implementing a given interface whose URL is registered at the Bridge (for details, see Monitoring Node Instances). Building monitoring services is described on the Monitoring pages.
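The notification flow can be sketched roughly as below. The real monitoring interface is a SOAP service defined by the Bridge; this stand-in uses plain JSON over HTTP only to illustrate the mechanics, and all field names (`service`, `level`, `message`) are invented:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # events the monitoring service has accepted

class MonitorHandler(BaseHTTPRequestHandler):
    """Toy monitoring endpoint: the Bridge would POST log information here."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        event = json.loads(body)
        if event.get("level") in ("ERROR", "FATAL"):
            received.append(event)  # e.g. raise an alert / page an operator
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MonitorHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the Bridge reporting a FATAL log entry to the registered URL:
url = f"http://127.0.0.1:{server.server_port}/monitor"
event = {"service": "OrderService", "level": "FATAL", "message": "crashed"}
req = urllib.request.Request(
    url, json.dumps(event).encode(), {"Content-Type": "application/json"})
urllib.request.urlopen(req).read()
server.shutdown()
print(received[0]["service"])  # → OrderService
```

The essential point is only the direction of the call: the Bridge pushes log information to a URL you registered, so the monitoring service itself stays a passive listener.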
The Bridge may register two URLs: a primary and a backup monitoring service. If the primary monitor fails, the backup will take over. In the load balancing scenario, the monitoring services on each node typically back each other up.
Figure: Monitoring Load Balanced Nodes
To set up a load balanced persistent state engine, you need to do the following:
Both services will create and process their own objects. These objects are identified not only by their primary persistent state key, but also by an owner name and owner ID reflecting the actual service that owns these objects. However, each service will be able to list and send signals to objects owned by the other service (having the same owner name).
If one of the services (e.g. Service A with owner ID 9) is stopped, all objects with owner ID 9 will not be processed anymore (transitions, do activities).
But:
The redundant Service A with owner ID 7 can take over the processing of the persistent state objects, but will not do this automatically. To enable Service A with owner ID 7 to identify the persistent state objects to process, you need to change the owner ID of the objects from 9 to 7 in this case.
This can be done on the Persistent State tab of the Bridge with button Manage Ownership, see Persistent State Ownership.
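Conceptually, the ownership change amounts to a bulk reassignment of the stalled objects. The sketch below is only an illustration of that idea: the persistent state storage schema is internal to the Bridge, and the table and column names here are invented:

```python
import sqlite3

# Invented schema standing in for the persistent state store.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ps_objects (
    key TEXT PRIMARY KEY, owner_name TEXT, owner_id INTEGER)""")
conn.executemany(
    "INSERT INTO ps_objects VALUES (?, ?, ?)",
    [("order-1", "ServiceA", 9),   # stalled: owner (ID 9) is stopped
     ("order-2", "ServiceA", 9),   # stalled as well
     ("order-3", "ServiceA", 7)])  # already owned by the running instance

def take_over(owner_name: str, from_id: int, to_id: int) -> int:
    """Reassign all objects of one owner ID to the redundant instance."""
    cur = conn.execute(
        "UPDATE ps_objects SET owner_id = ? "
        "WHERE owner_name = ? AND owner_id = ?",
        (to_id, owner_name, from_id))
    conn.commit()
    return cur.rowcount

moved = take_over("ServiceA", from_id=9, to_id=7)
print(moved)  # → 2
```

After the reassignment, the running Service A (owner ID 7) owns all three objects and resumes their transitions and do activities; in practice you perform exactly this step via Manage Ownership instead of touching the storage directly.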
Besides changing the owner ID of a redundant service to trigger processing of the stalled objects again, you can do the following: