Response-Time Load Balancing
|
The Web server
plug-in gathers response-time data as requests enter and leave
the plug-in, then uses that data to load balance future requests.
Because no interprocess communication is involved and per-request
resource usage is not factored into the decisions, this method
is simpler than server-based load balancing.
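The plug-in's approach can be sketched as follows. This is a minimal, hypothetical illustration, not the plug-in's actual implementation: the moving-average window, server names, and tie-breaking rules are all assumptions.

```python
import random
from collections import defaultdict

class ResponseTimeBalancer:
    """Hypothetical sketch: time each request and route new requests
    to the server with the lowest average response time."""

    def __init__(self, servers, window=50):
        self.servers = list(servers)
        self.window = window                  # bounded history per server
        self.samples = defaultdict(list)      # server -> recent response times

    def pick(self):
        # Prefer servers with no data yet, then the fastest on average.
        untried = [s for s in self.servers if not self.samples[s]]
        if untried:
            return random.choice(untried)
        return min(self.servers,
                   key=lambda s: sum(self.samples[s]) / len(self.samples[s]))

    def record(self, server, elapsed):
        # Called as the response leaves the plug-in.
        times = self.samples[server]
        times.append(elapsed)
        if len(times) > self.window:
            times.pop(0)
```

Note that all state lives inside the plug-in process, which is why no interprocess communication is needed.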
|
User-Defined Load Balancing
|
Alternatively,
each application server can load balance individual requests using
a combination of hardware resource profiles (including CPU load
and disk I/O) and request execution profiles (including result
caching and servlet execution rate). Server and request statistics
are communicated from one NAS machine to another in a cluster via
multicast. This method gives the administrator more control and
is suitable for sophisticated scenarios.
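One way to picture the user-defined method is as a weighted score per server, computed from the multicast statistics. The metric names and weights below are illustrative assumptions, not actual NAS property names:

```python
# Hypothetical weighting scheme combining hardware-resource metrics
# (CPU load, disk I/O) and request-execution metrics (cache hit rate,
# servlet execution rate) into one score per server.
WEIGHTS = {"cpu_load": 0.4, "disk_io": 0.2,
           "cache_hit_rate": 0.2, "servlet_rate": 0.2}

def score(stats):
    # Lower is better: load metrics count against a server, while
    # cache hits and servlet throughput count in its favor.
    return (WEIGHTS["cpu_load"] * stats["cpu_load"]
            + WEIGHTS["disk_io"] * stats["disk_io"]
            - WEIGHTS["cache_hit_rate"] * stats["cache_hit_rate"]
            - WEIGHTS["servlet_rate"] * stats["servlet_rate"])

def choose(cluster):
    # cluster: {server: stats dict}, as shared across NAS machines
    # over multicast; pick the server with the best (lowest) score.
    return min(cluster, key=lambda s: score(cluster[s]))
```

Because the administrator chooses the profiles and their relative weights, the scoring function above is where the extra control comes in.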
|
Load Monitor
|
The Load Monitor
takes a snapshot of server load at administrator-configured
intervals, then tabulates resource availability and system
performance.
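The snapshot loop might look like the sketch below. The `sample` and `publish` callbacks and the default interval are assumptions for illustration; the real monitor's configuration lives in NAS properties.

```python
import threading
import time

def start_load_monitor(sample, publish, interval=5.0):
    """Hypothetical sketch: sample() tabulates local resource
    availability and performance; publish() shares the snapshot;
    interval is the administrator-configured period in seconds."""
    def loop():
        while True:
            publish(sample())
            time.sleep(interval)
    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread
```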
|
Load Balancer
|
The Load Balancer
uses the load and performance statistics gathered by the Load Monitor
to determine which server has the most available resources to handle
an incoming request. Each NAS instance has its own load-balancing
module that makes routing decisions based on load-balancing properties.
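The routing decision itself reduces to picking the server with the most headroom in the Load Monitor's latest snapshot. The snapshot shape here is an assumption:

```python
def most_available(snapshot):
    """Hypothetical sketch: given the Load Monitor's latest per-server
    statistics (higher "available" = more free resources), return the
    server an incoming request should be routed to."""
    return max(snapshot, key=lambda server: snapshot[server]["available"])
```

Since each NAS instance runs its own copy of this decision against its own view of the statistics, no central coordinator is required.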
|
Fine Grained
|
There are many
load-balancing configuration parameters, which administrators
can tune using NAS Administrator.
|
Sticky Load Balancing
|
This option
is enabled per application component. When a session first requests
one of its sticky components, that request is load balanced normally,
but for the remaining life of the session, further requests to sticky
components are directed to the same server. This option is useful
for application components that cannot distribute session and state
information, or that can reuse cached results stored on a single
server.
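The sticky behavior described above can be sketched like this. The `balance` callback stands in for the regular load-balancing decision; the class and its names are hypothetical, not NAS APIs:

```python
class StickyRouter:
    """Hypothetical sketch of sticky load balancing: the first request
    a session makes to a sticky component is balanced normally; later
    requests from that session go to the same server."""

    def __init__(self, balance, sticky_components):
        self.balance = balance                # regular LB decision
        self.sticky = set(sticky_components)  # components enabled as sticky
        self.pinned = {}                      # session id -> server

    def route(self, session_id, component):
        if component not in self.sticky:
            return self.balance()             # non-sticky: balance every time
        if session_id not in self.pinned:
            # First sticky request: balance normally, then pin the result
            # for the remaining life of the session.
            self.pinned[session_id] = self.balance()
        return self.pinned[session_id]
```

Pinning the server keeps session state and any server-local cached results valid, which is exactly the case the option exists for.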
|