What is Load Balancing?
Load balancing is the process by which inbound internet protocol (IP) traffic can be distributed across multiple servers. Load balancing enhances the performance of the servers, leads to their optimal utilization and ensures that no single server is overwhelmed. Load balancing is particularly important for busy networks, where it is difficult to predict the number of requests that will be issued to a server.
Typically, two or more web servers are employed in a load balancing scheme. If one of the servers begins to get overloaded, requests are forwarded to another server. Load balancing brings down the service time by allowing multiple servers to handle requests; the load balancer reduces this service time by identifying which server has the availability to receive the traffic.
The process, very generally, is straightforward. A webpage request is sent to the load balancer, which forwards the request to one of the servers. That server responds to the load balancer, which in turn sends the response on to the end user.
What is Session Persistence and Why is It Important?
An important issue when operating a load-balanced service is how to handle information that must be kept across the multiple requests in a user’s session. If this information is stored locally on one backend server, then subsequent requests going to different backend servers would not be able to find it. This might be cached information that can be recomputed, in which case load-balancing a request to a different backend server merely introduces a performance issue. If the information cannot be recomputed, however (a login state or a shopping cart, for example), it would simply be lost. The usual solution is session persistence, also called sticky sessions: the load balancer routes every request in a given user's session to the same backend server.
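One simple way to achieve session persistence is to hash a session identifier and use the result to pick a backend, so that the same session always lands on the same server. The sketch below illustrates the idea; the backend hostnames are illustrative, and real load balancers typically track sessions via cookies or connection tables rather than a bare hash.

```python
import hashlib

def pick_backend(session_id, backends):
    """Route every request in a session to the same backend by
    hashing the session ID (a simple form of session persistence)."""
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["app1.internal", "app2.internal", "app3.internal"]
# The same session always maps to the same server:
assert pick_backend("sess-42", backends) == pick_backend("sess-42", backends)
```

Note that this scheme reshuffles sessions if the backend list changes; consistent hashing is the usual refinement for that problem.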
Load Balancing Algorithms
A variety of scheduling algorithms are used by load balancers to determine which backend server to send a request to. Simple algorithms include random choice or round robin. More sophisticated load balancers may take into account additional factors, such as a server’s reported load, recent response times, up/down status (determined by a monitoring poll of some kind), number of active connections, geographic location, capabilities, or how much traffic it has recently been assigned. High-performance systems may use multiple layers of load balancing.
Load balancing of servers by an IP sprayer can be implemented in different ways, configured in the load balancer according to the load balancing types it supports. Various algorithms are used to distribute the load among the available servers.
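The simplest of these algorithms, round robin, can be sketched in a few lines: each incoming request is sent to the next server in a repeating cycle. The server names here are placeholders.

```python
import itertools

class RoundRobinBalancer:
    """Hand out backends in a fixed rotation, one request per server."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["web1", "web2", "web3"])
print([lb.next_server() for _ in range(6)])
# → ['web1', 'web2', 'web3', 'web1', 'web2', 'web3']
```

Random choice is the other simple option: pick a server uniformly at random per request, which avoids keeping any rotation state.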
Weighted Round-Robin Allocation
Weighted round-robin is an advanced version of round robin that addresses a deficiency of the plain algorithm. With weighted round-robin, one can assign a weight to each server in the group, so that if one server is capable of handling twice as much load as another, the more powerful server gets a weight of 2. In that case, the IP sprayer will assign two requests to the powerful server for each request assigned to the weaker one.
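The 2:1 example above can be sketched as follows. This naive version simply repeats each server according to its weight; production balancers usually interleave the schedule more smoothly, but the per-cycle proportions are the same.

```python
def weighted_round_robin(servers):
    """servers: list of (name, weight) pairs. Yields each server
    'weight' times per full cycle, by simple schedule expansion."""
    schedule = [name for name, weight in servers for _ in range(weight)]
    while True:
        yield from schedule

gen = weighted_round_robin([("big", 2), ("small", 1)])
print([next(gen) for _ in range(6)])
# → ['big', 'big', 'small', 'big', 'big', 'small']
```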
What is Web Caching?
A Web cache is a temporary storage place for files requested from the Internet. After an original request for data has been successfully fulfilled, and that data has been stored in the cache, further requests for those files (a Web page complete with images, for example) result in the information being returned from the cache rather than the original location.
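That fetch-once, serve-many behavior can be sketched with a tiny in-memory cache. This is a toy illustration only; real web caches also honor HTTP headers such as Cache-Control and handle eviction.

```python
import time

class WebCache:
    """Serve a stored copy if present and fresh; otherwise fetch
    from the origin and store the result for later requests."""
    def __init__(self, fetch, ttl=60):
        self._fetch = fetch      # function that retrieves from the origin
        self._ttl = ttl          # freshness lifetime in seconds
        self._store = {}

    def get(self, url):
        entry = self._store.get(url)
        if entry and time.time() - entry[1] < self._ttl:
            return entry[0]      # cache hit
        body = self._fetch(url)  # cache miss: go to the origin
        self._store[url] = (body, time.time())
        return body

calls = []
def origin(url):
    calls.append(url)
    return "<html>content of %s</html>" % url

cache = WebCache(origin)
cache.get("/index.html")
cache.get("/index.html")
print(len(calls))  # → 1  (the second request never reached the origin)
```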
Types of Web Caches
Forward/transparent proxy servers, reverse proxy servers (which are what cache appliances actually run internally) and web servers commonly maintain web caches. The caches in web servers are RAM caches, since the resources they serve are already stored locally. The caches on proxy servers can use RAM, disk, or (usually) both. It is highly recommended to use a 15k RPM HDD or an SSD for a proxy server's disk cache.
What is SSL Offloading?
Secure Sockets Layer (SSL) certificates provide authentication between a server and a client computer in a Web application. Companies or businesses with a dedicated SSL certificate must host that certificate on a Web server. Heavy use of the certificate can put a strain on the machine and slow down the application.
SSL offloading takes all the processing of SSL encryption and decryption off the main Web server and moves it to a separate device designed specifically for the task. This allows the performance of the main Web server to increase and it handles the SSL certificate efficiently.
HTTP Compression
There’s a finite amount of bandwidth on most Internet connections, and anything administrators can do to speed up the process is worthwhile. One way to do this is via HTTP compression, a capability built into both browsers and servers that can dramatically improve site performance by reducing the amount of time required to transfer data between the server and the client. The principles are nothing new – the data is simply compressed. What is unique is that compression is done on the fly, straight from the server to the client, and often without users knowing.
Suitable File Types
Sites that have a lot of plain text content, including the main HTML files, XML, CSS, and RSS, may benefit from compression. It will still depend largely on the content of the file; most standard HTML text files will compress by about a half, sometimes more. Heavily formatted pages, for example those that make heavy use of tables (and therefore repetitive formatting content), may compress even further, sometimes to as little as one-third of the original size.
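The on-the-fly negotiation and the compression ratios described above can be demonstrated with Python's standard gzip module. The header handling is simplified to the single case that matters here: compress only when the client's Accept-Encoding advertises gzip support.

```python
import gzip

def compress_response(body, accept_encoding):
    """Gzip the body on the fly when the client supports it,
    mirroring a server's Content-Encoding negotiation."""
    if "gzip" in accept_encoding:
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    return body, {}  # client can't decompress: send as-is

# Repetitive, table-heavy HTML compresses very well:
html = b"<tr><td>row</td><td>data</td></tr>" * 500
compressed, headers = compress_response(html, "gzip, deflate")
print(len(html), len(compressed))  # the gzipped body is a small fraction
assert len(compressed) < len(html) // 3
```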
URL Translation & URL Rewrite
URL translation maps externally known URLs to internal locations.
URL Rewriting is a server-side technique for mapping URL requests to request handlers.
Typically there is a direct mapping between request URL and the handler for that request. All requests that end in .php will be handled by a PHP script with the given name. Similarly, request paths that end in .html will typically be handled by a static file handler. The mapping between URL and handler is typically static, and depends solely on the “extension” of the URL Request.
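The extension-based dispatch described above can be sketched as a static lookup table. The handler names here are illustrative, not any particular server's configuration.

```python
def pick_handler(path):
    """Choose a request handler purely from the URL's extension,
    as in a conventional static mapping."""
    handlers = {
        ".php": "php-script",    # e.g. hand off to the PHP interpreter
        ".html": "static-file",  # e.g. serve the file directly
    }
    for ext, handler in handlers.items():
        if path.endswith(ext):
            return handler
    return "default"

print(pick_handler("/shop/cart.php"))  # → php-script
print(pick_handler("/about.html"))     # → static-file
```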
Why Rewrite URLs?
There are many reasons to rewrite URLs:
* Search Engine Optimization (SEO)
SEO is a broad topic, but the main goal is to assist search engines in finding content on a web site. One aspect of that is optimizing the URLs themselves.
* Making user-friendly URLs
Similar in effect to SEO, this allows the use of friendly public URLs where they are observed by users in links and browser bars. Elements within URLs that are meaningful only to server-side technology, including the extension of the server-side script or web app platform, can be obscured from the public.
* Server-side technology migrations
When migrating from one technology to another in stages, URL rewriting can be used to keep the URL space stable while things change on the server back-end. URL Rewriting can also be used to support migration of “old” or stale URLs to the new URL namespace, when those changes occur.
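A rewrite engine for these cases is essentially an ordered list of pattern-to-target rules. The sketch below uses regular expressions to map friendly public URLs onto internal script URLs; the specific paths and rules are made up for illustration.

```python
import re

# Illustrative rules: friendly public URL → internal handler URL.
RULES = [
    (re.compile(r"^/products/(\d+)$"), r"/product.php?id=\1"),
    (re.compile(r"^/articles/([\w-]+)$"), r"/article.php?slug=\1"),
]

def rewrite(url):
    """Apply the first matching rule; pass unmatched URLs through."""
    for pattern, target in RULES:
        if pattern.match(url):
            return pattern.sub(target, url)
    return url

print(rewrite("/products/123"))  # → /product.php?id=123
```

Because the public URL never exposes `.php`, the same rules also support the migration case: the server-side target can change while the external URL space stays stable.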
ServeTrue IQProxy supports all of the above on Windows, with support available at a reasonable cost.