
OpenFlow-Based Server Load Balancing Gone Wild


Jiujian Ye, Paul Teran and Senthil Alagappan Ranganathan

Abstract

In today’s high-traffic internet, it is often desirable to have multiple servers representing a single logical destination server in order to share load. A typical configuration places multiple servers behind a load balancer that determines which server services each client request. Such hardware is expensive, prone to congestion, and a single point of failure. In this paper we implement and evaluate an alternative load-balancing architecture using an OpenFlow switch connected to a NOX controller, which gains flexibility in policy, costs less, and has the potential to be more robust to failure with future generations of switches. However, the simple approach of installing a separate rule for each client connection (microflow) leads to a huge number of rules in the switches and a heavy load on the controller. Instead, the controller should exploit switch support for wildcard rules to build a more scalable solution that directs large aggregates of client traffic to server replicas. We implement these algorithms on top of the NOX OpenFlow controller and evaluate their effectiveness.

Introduction

There are many scenarios in today’s increasingly cloud-service-based internet where a client sends a request to a URL, or logical server, and receives a response from one of potentially many servers acting as the logical server at that address. One example would be a Google web server: after a client resolves
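The wildcard-rule strategy outlined in the abstract can be made concrete with a small, self-contained sketch: split the client source-IP space into a handful of wildcard prefixes whose sizes are proportional to each replica’s weight, so that a few switch rules cover large aggregates of traffic instead of one rule per connection. The sketch below is only an illustration under that assumption; it is not the authors’ NOX code, and the replica names, weights, and the helper partition_by_weight are hypothetical.

# Illustrative sketch (assumed, not from the paper): partition the client
# source-IP space into wildcard prefixes sized in proportion to each
# replica's weight, so a few switch rules cover large traffic aggregates.
from ipaddress import ip_network, collapse_addresses

def partition_by_weight(weights, base="0.0.0.0/0"):
    """Map each replica name to a list of wildcard source-IP prefixes.

    weights: replica name -> integer weight; the weights are assumed to
    sum to a power of two so the prefix tree divides evenly.
    """
    total = sum(weights.values())
    assert total > 0 and total & (total - 1) == 0, "weights must sum to a power of two"
    depth = total.bit_length() - 1                       # levels of splitting
    net = ip_network(base)
    leaves = list(net.subnets(prefixlen_diff=depth)) if depth else [net]
    rules, i = {}, 0
    for replica, w in weights.items():
        chunk = leaves[i:i + w]                          # w equal-sized leaves
        i += w
        # Merge adjacent leaves back into larger prefixes to keep the
        # number of wildcard rules small.
        rules[replica] = list(collapse_addresses(chunk))
    return rules

if __name__ == "__main__":
    # Hypothetical replicas: A should receive 3/4 of clients, B 1/4.
    for replica, prefixes in partition_by_weight({"A": 3, "B": 1}).items():
        for prefix in prefixes:
            print(f"match src {prefix} -> forward to replica {replica}")
    # Prints three wildcard rules:
    #   match src 0.0.0.0/1 -> forward to replica A
    #   match src 128.0.0.0/2 -> forward to replica A
    #   match src 192.0.0.0/2 -> forward to replica B

A controller could then install one OpenFlow rule per printed prefix, rather than one rule per client connection, which is the scalability gain the abstract describes.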
