The first paper proposed a proactive push scheme based on the HTTP/2 server-push feature; the second paper improved the push method from a fixed number of pushed segments to an adaptive number based on multiple factors.
In this section, we compare these two papers, under different parameter settings, on the following four metrics:
(a) Number of requests
The number of requests is the total number of HTTP requests sent by the client while streaming the video. This metric measures the protocol overhead imposed on the network. Before HTTP/2, each segment the client needed required a separate HTTP request from client to server; the server then returned an HTTP response carrying the video segment as payload. With HTTP/2, the client no longer needs to send a new request for each segment: the server pushes the subsequent segments to the client, reducing the overhead on the network. We therefore want this metric to be as low as possible.
From the experimental results of the two selected papers, the numbers of requests are as follows:

    Metric               Method of [3]                 Method of [4]
                         N=1    N=2    N=3    N=4      a=0.6   A=0.4
    Number of requests   1000   500    334    250      518     383

Figure 9. Comparison of the number of requests for the methods of [3] and [4].
The parameter ‘N’ in the method of [3] is the number of segments to be pushed to the client each time.
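The request counts reported for the method of [3] in Figure 9 are consistent with pushing N segments per client request over a session of about 1000 segments (which matches the N = 1 column). A minimal sketch of that relationship, assuming a 1000-segment session:

```python
import math

def request_count(total_segments: int, push_n: int) -> int:
    """Requests needed when the server pushes `push_n` segments per client request."""
    return math.ceil(total_segments / push_n)

# Assuming a 1000-segment session (consistent with the N = 1 column):
print([request_count(1000, n) for n in (1, 2, 3, 4)])
# The values match Figure 9: 1000, 500, 334, 250
```

The segment total and the function name here are assumptions for illustration; the papers report only the measured counts.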
The parameter ‘a’ in the method of [4] is the key coefficient of the
In conclusion, Lighttpd is a high-quality web server that is suitable for websites with dynamic content. It is also an alternative web server that is easy to configure and more efficient on resource-limited servers.
IPv4 is the version of IP that is prevalent today; IPv6 is expected to become prevalent in the future.
In this paper, the authors focus on the gap in our understanding of how complex individual websites are and how this complexity impacts users' performance. They characterize each website both at the content level (e.g., the number and size of images) and at the service level (e.g., the number of servers/origins). Some categories, such as 'News', turn out to be more complex than others. Sixty percent of the websites studied fetched content from at least five non-origin sources, and these sources contributed more than 35% of the bytes downloaded. In addition, the authors examine which metrics are most suitable for predicting page render and load times and find that the number of objects requested is the most important factor. With respect to variability in load times, however, they find that the number of servers is the best indicator.
Table 3. List of counters for use in statistics messages [7].

    Counter                   Bits
    Per Table
      Active Entries          32
      Packet Lookups          64
      Packet Matches          64
    Per Flow
      Received Packets        64
      Received Bytes          64
      Duration (seconds)      32
      Duration (nanoseconds)  32
    Per Port
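The bit widths in Table 3 matter in practice because counters of a fixed width wrap around when they overflow. A minimal sketch of the per-flow counters (the class and method names are illustrative, not taken from the OpenFlow specification):

```python
class FlowCounters:
    """Per-flow counters with the bit widths listed in Table 3."""
    PKT_BITS = 64   # received packets: 64-bit counter
    BYTE_BITS = 64  # received bytes: 64-bit counter

    def __init__(self):
        self.received_packets = 0
        self.received_bytes = 0

    def record(self, packet_len: int) -> None:
        # Counters wrap modulo 2^width on overflow.
        self.received_packets = (self.received_packets + 1) % (1 << self.PKT_BITS)
        self.received_bytes = (self.received_bytes + packet_len) % (1 << self.BYTE_BITS)

c = FlowCounters()
c.record(1500)
c.record(64)
print(c.received_packets, c.received_bytes)  # 2 1564
```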
In this section, the important functions used to send requests to and receive responses from the traffic
Using the search engines and inclusion criteria, five appropriate papers were chosen. Within these papers a number of themes were noted. These themes were then chosen to be explored within the discussion section.
Stream identifiers identify which stream a packet belongs to; the class identifier identifies the transmitter and the information and packet classes. The timestamp fields precisely specify the reference point in time for the transmission of the first packet [14]. The trailer is an optional part of the structure that can be used to enable other processes, as well as to indicate states and events [14]. For this project, these optional fields are disregarded.
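As a rough illustration, the mandatory fields described above could be packed as fixed-width binary fields. The widths and layout below are assumptions for illustration only, not taken from [14]:

```python
import struct

# Hypothetical layout: 32-bit stream ID, 32-bit class ID,
# 64-bit timestamp (reference time of the first packet), big-endian.
HEADER_FMT = ">IIQ"

def pack_header(stream_id: int, class_id: int, timestamp_ns: int) -> bytes:
    return struct.pack(HEADER_FMT, stream_id, class_id, timestamp_ns)

def unpack_header(data: bytes):
    return struct.unpack(HEADER_FMT, data)

hdr = pack_header(7, 3, 1_000_000_000)
assert unpack_header(hdr) == (7, 3, 1_000_000_000)
print(len(hdr))  # 16 bytes: 4 + 4 + 8
```

The optional trailer is omitted here, mirroring the project's decision to disregard those fields.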
Using the appropriate queuing model, compute the server utilization (the probability that the server is busy) and the waiting time W (known as the response time in this application) as the number of clients M varies from 1 to 20 (use a data table). Plot W against M to show the effect of the number of clients on the system response time. At high server utilization the system is congested, each additional client increases the response time by its service time, and the plot of W against M becomes linear. From your computed results, calculate the change in W as M increases from 19 to 20.
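One way to carry out this computation is the finite-source (machine-repair) M/M/1//M model. The sketch below assumes a mean think time Z = 10 s and a mean service time S = 1 s purely for illustration; with these values the system saturates well before M = 20, so W grows by roughly one service time per extra client at the high end:

```python
def finite_source_queue(M: int, Z: float, S: float):
    """Single-server finite-population queue (M/M/1//M).

    M: number of clients, Z: mean think time, S: mean service time.
    Returns (server utilization, response time W) via the
    birth-death solution and the interactive response-time law.
    """
    r = S / Z                  # per-client lambda/mu
    terms, t = [1.0], 1.0
    for n in range(1, M + 1):
        t *= (M - n + 1) * r   # M!/(M-n)! * r^n, built incrementally
        terms.append(t)
    p0 = 1.0 / sum(terms)      # probability the server is idle
    util = 1.0 - p0
    X = util / S               # throughput (jobs/s)
    W = M / X - Z              # response time (interactive response-time law)
    return util, W

# Tabulate W for M = 1..20 and look at the change from 19 to 20 clients:
table = {m: finite_source_queue(m, Z=10.0, S=1.0) for m in range(1, 21)}
dW = table[20][1] - table[19][1]
print(round(dW, 2))            # close to the 1 s service time, as expected
```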
This article, written by Benoit Gomis, presents data that has been gathered to show the comparison
This is evident in my second screenshot, in the Receiving Header and Content sections of the page. The second screenshot also shows NVCC's web server software, which was Apache HTTP Server, as well as the content served: the content length was about 17,335 bytes, and the content was a public !DOCTYPE HTML file. The screenshot also shows the URL, REF, UAG, REQ, and AEN fields, and reports 17,886 total bytes received. The response from NVCC's web server was thus much larger than the 485 bytes the browser used to generate the request. Furthermore, the second screenshot of the HTTP viewer shows the Sending Request, Receiving Header, and End of Header sections. Each section provides information about the requested URL, such as the Host, User-Agent, Host IP Address, Server used, and
Also, the authors of both books have provided solution approaches to those problems. Though the focus areas of the two books are closely related, some differences can be found in their analysis of the problem and in the way it is presented.
The protocol is detailed and well organized, so it can be understood well. The method shown is easy to understand and to follow.
Cross-Fault Tolerance (XFT) is a new trending protocol that makes BFT more feasible and efficient by simplifying the attack model [17]. Most BFT protocols assume a powerful adversary that can both compromise nodes and control message delivery across the entire network. Handling such a powerful adversary introduces complexity and therefore reduces efficiency. XFT overcomes this issue by providing correct service as long as the replicas are correct and can communicate with each other continuously. It uses the same number of resources as protocols that handle crash failures and also can
A significant amount of research effort has been directed over the past decade at providing high-quality service for real-time applications over IP-based networks. The core idea of these techniques is to control the congestion level of the IP network, which is the main cause of delays and losses, and to impose upper bounds on these events. Traffic policing is one of the techniques used for this purpose. These techniques assume that users generating flows/calls specify, at setup time through Service Level Agreements (SLAs), the characteristics of the traffic they will emit into the network. The techniques are then employed to ensure that the generated traffic conforms to the previously agreed contracts. One of the most popular traditional traffic-policing techniques is the Leaky Bucket (LB). The average rate and the maximum burst size/length of a
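The Leaky Bucket policer mentioned above is often implemented in its token-bucket formulation: tokens accumulate at the agreed average rate up to the maximum burst size, and a packet conforms only if enough tokens are available. A minimal sketch, with illustrative rate and burst parameters rather than values from any real SLA:

```python
class LeakyBucketPolicer:
    """Polices traffic to an average rate `rate` (bytes/s) with a
    maximum burst of `burst` bytes (token-bucket formulation)."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.tokens = burst      # start with a full bucket
        self.last = 0.0          # arrival time of the previous packet

    def conforms(self, now: float, size: int) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size  # packet conforms to the contract
            return True
        return False             # non-conforming: drop or mark

p = LeakyBucketPolicer(rate=1000.0, burst=1500.0)
print(p.conforms(0.0, 1500))  # True: fits within the initial burst
print(p.conforms(0.1, 1500))  # False: only ~100 bytes of tokens refilled
print(p.conforms(1.5, 1400))  # True: 1.4 s of refill restores the bucket
```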