Statistical Analysis of Network Traffic

Statistical features of network traffic have made good contributions to the detection of DDoS attacks, and they can also be used in traceback schemes. One approach applies regression analysis: the strength of a DDoS attack is estimated and then compared with its actual strength. The comparison results were promising, indicating that this method is applicable for DDoS strength evaluation either in the router itself or in a separate unit communicating with the router. Another approach, the flow-volume based approach, can also be applied: a traffic profile is built under normal traffic conditions, and when the total traffic arriving at a router within a designated time window deviates from the profile, an attack is detected.
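The flow-volume idea can be sketched in a few lines. This is a minimal illustration, not the method from any specific paper: the profile here is simply the mean and standard deviation of per-window packet counts observed under normal conditions, and a window is flagged when its volume deviates by more than k standard deviations.

```python
import statistics

def build_profile(normal_window_counts):
    """Baseline traffic profile from per-window packet counts seen
    under normal conditions (illustrative profile: mean and stdev)."""
    return statistics.mean(normal_window_counts), statistics.stdev(normal_window_counts)

def is_attack(window_count, profile, k=3.0):
    """Flag a window whose volume deviates from the profile by more
    than k standard deviations (k is an assumed tuning parameter)."""
    mean, stdev = profile
    return abs(window_count - mean) > k * stdev

profile = build_profile([980, 1010, 995, 1005, 1002, 990])
assert not is_attack(1008, profile)   # within normal variation
assert is_attack(5000, profile)       # flood-sized deviation
```

A real deployment would use a richer profile (per-protocol or per-destination volumes, time-of-day effects), but the detection test has this same shape.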
The proposed scheme has an advantage over traditional packet marking schemes in terms of scalability and storage requirements at the victim or intermediate routers: it stores only short-term information about traffic entropy in order to detect the DDoS attack. The research also presented experimental analysis claiming that the method can perform accurate traceback in a large-scale DDoS attack scenario within a few seconds. In addition to the entropy-variation scheme, a few other methods exist to trace back DDoS sources; the classification is given below. In packet marking schemes, the idea is to trace the path through uplink routers up to the attack sources. This is a common method in traceback implementations but has inherent drawbacks. In the packet logging scheme, information about each packet is stored at the routers through which the packet passes. The stored information consists of the constant header fields and the first 8 bytes of the payload, which are hashed by several hash functions to produce digests; these digests are stored by the DGAs in a Bloom filter, a space-efficient data structure. In the pushback scheme, a router under congestion sends a rate-limit request to its upstream routers. It determines from which routes the stream of packets arrived and devises an attack signature for that traffic; the signature describes aggregate traffic sharing a common property, such as the same destination address. A mechanism called
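The packet logging idea can be sketched as follows. This is an illustrative Bloom-filter store, not the exact design from the scheme above: the caller is assumed to pass in the invariant bytes of a packet (constant header fields plus the first 8 bytes of payload), and k salted hash functions set bits in a shared bit array. Membership queries can return false positives but never false negatives.

```python
import hashlib

class PacketDigestFilter:
    """Space-efficient record of packets forwarded by a router
    (a sketch of the packet-logging idea; parameters are illustrative)."""

    def __init__(self, m_bits=8192, k=3):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits)

    def _positions(self, packet_bytes):
        # k salted hashes of the packet's invariant bytes.
        for salt in range(self.k):
            h = hashlib.sha256(bytes([salt]) + packet_bytes).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def record(self, packet_bytes):
        for pos in self._positions(packet_bytes):
            self.bits[pos] = 1

    def maybe_seen(self, packet_bytes):
        # True  -> possibly forwarded here (false positives possible);
        # False -> definitely never forwarded here.
        return all(self.bits[pos] for pos in self._positions(packet_bytes))
```

During traceback, the victim hands a suspect packet's digest to each upstream router; a router answering "definitely not seen" can be excluded from the attack path.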
The KDDCup99 dataset was introduced at the Third International Knowledge Discovery and Data Mining Tools Competition, held by DARPA in 1999. KDDCup99 is a refined version of the DARPA 1998 dataset, as it contains only network data [3]. KDDCup99 is commonly used by developers and implementers of new IDSs to evaluate their systems: an IDS takes the KDDCup99 dataset as input to train and test the system and to check its performance in classifying and detecting attack records. The KDDCup99 dataset is used by most researchers because it contains 22 different attack types, which can be classified into the four main network attack categories discussed in the previous section. The full DARPA dataset consists of approximately 4,900,000 connection vectors, where each single connection vector consists of 41 features and is labeled as either normal or an attack, with exactly one particular attack type [38]. Among the 41 features of a connection, only sixteen significant attributes are considered: A1, A5, A6, A8, A9, A10, A11, A13, A16, A17, A18, A19, A23, A24, A32, A33 [38]. The KDD 99
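Extracting those sixteen attributes from a connection vector is straightforward. The sketch below assumes the common file layout in which A1 is the first column, the A-numbering is 1-based, and the class label is the final column; adjust the indices if your copy of the dataset differs.

```python
# The sixteen attributes named above, by their A-numbers (1-based).
SELECTED = [1, 5, 6, 8, 9, 10, 11, 13, 16, 17, 18, 19, 23, 24, 32, 33]

def reduce_record(row):
    """row: one KDD connection vector (41 feature values + class label)."""
    features = [row[a - 1] for a in SELECTED]   # A-numbers are 1-based
    label = row[-1]                             # e.g. "normal." or "smurf."
    return features, label

# Synthetic 41-feature record for illustration.
record = [f"v{i}" for i in range(1, 42)] + ["normal."]
features, label = reduce_record(record)
assert len(features) == 16 and label == "normal."
```

With a real copy of the data, each line would be split on commas (the file is CSV-formatted) and passed to `reduce_record` before training or testing.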
After that, it uses the concept of a Bloom filter. A Bloom filter is a data structure used to test whether an element is a member of a given set. Here it takes the form of a two-dimensional bin table of k levels by m bins, with k independent hash functions, and is used to keep track of the recent arrival rates of packets with different destination IP addresses passing through a router within a sampling period t, as shown in fig. 4.2. In the proposed system, it stores the IP addresses in the data structure and checks them on behalf of the misuse detection method. Once all of the information is derived, the complete data is analyzed statistically using the association between the nodes with respect to the current node.
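The k-levels-by-m-bins counting table can be sketched as below. This is an illustrative structure in the spirit of the description above (it is essentially a count-min-style sketch), not the system's exact design: each level has its own hash function, every packet increments one counter per level for its destination IP, and a destination's rate estimate is the minimum of its k counters, which bounds the over-counting caused by collisions.

```python
import hashlib

class RateSketch:
    """k-level-by-m-bin counting table for per-destination arrival
    rates within a sampling period t (parameters are illustrative)."""

    def __init__(self, k=4, m=1024):
        self.k, self.m = k, m
        self.bins = [[0] * m for _ in range(k)]

    def _bin(self, level, dst_ip):
        # Each level uses an independently salted hash function.
        h = hashlib.sha256(bytes([level]) + dst_ip.encode()).digest()
        return int.from_bytes(h[:8], "big") % self.m

    def count_packet(self, dst_ip):
        for level in range(self.k):
            self.bins[level][self._bin(level, dst_ip)] += 1

    def rate(self, dst_ip):
        # Minimum over levels limits inflation from hash collisions.
        return min(self.bins[level][self._bin(level, dst_ip)]
                   for level in range(self.k))

    def reset(self):
        # Called at the end of each sampling period t.
        self.bins = [[0] * self.m for _ in range(self.k)]
```

A destination whose estimated rate jumps far above its recent history would then be handed to the statistical analysis stage.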
It is not clear from the article whether iPremier did any risk assessment; if they did, they did not anticipate that they could be victims of a DDoS attack. iPremier should have used a contingency planning standard such as NIST SP 800-34 to identify risks and develop policies and procedures for dealing with attacks like the one they faced. With these in place, they could have responded in a more orderly and effective fashion, and they would have had alternatives to overcome the negative impact.
A distributed denial of service attack is a coordinated denial of service attack against a computer or network, launched from multiple sources and locations to halt or disrupt legitimate use of its resources. A denial of service attack may affect software systems, networks, routers, equipment, servers, and personal PCs. In a distributed denial of service attack, a master program scans remote machines to find security holes. Once vulnerable hosts are exploited and injected with malicious code, the agents initiate the attack and infect further machines using client/server technology. Kinds of denial of service attacks include network-level attacks, which affect routers, IP switches, and firewalls; OS-level attacks, which affect equipment vendor OSs; and
An alternative type of attack is the Distributed Denial of Service (DDoS) attack. DDoS attacks are launched from numerous linked devices spread across the Internet. They are commonly harder to deflect because of the sheer volume of devices involved. Unlike DoS attacks, DDoS assaults tend to target the system infrastructure in an effort to drench it with huge volumes of traffic.
Network traffic monitoring and analysis is essential for troubleshooting and resolving issues effectively when they occur, so that network services are not brought to a standstill for extended periods. Numerous software tools are available to help administrators monitor network traffic and detect cyber threats within it. This paper will discuss software that can monitor network traffic and thereby help detect cyber threats. The following is software-based monitoring that detects cyber threats:
DDoS has become the attacker's method of choice for exploiting vulnerabilities in a bank's network system. In a DDoS attack, an attacker uses a botnet, remotely controlling multiple computers, to attack the traffic of a banking system network; the attack can lead to capturing the resource records of a domain name system (DNS). A DDoS attack disrupts some of the major components of the DNS, mainly the distributed database name server: it slows the servicing of client queries, and if a disruption takes place, clients will not be able to complete any Internet requests. The DNS uses a messaging protocol running over UDP for handling client queries and name server responses. When a malicious action such as a DDoS attack is launched, the DNS messaging protocol is vulnerable to criminals who can attach other hidden software within the DDoS
Internet Protocol (IP) datagrams may arrive as chunks in a seemingly random order, which the receiving IP entity must continuously collect until it can reconstruct the original datagram. Consider that the receiving IP entity possesses a buffer for assembling the original datagram's data field. The buffer will comprise chunks of data and "holes" between them corresponding to data not yet received.
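The hole-tracking bookkeeping can be sketched as follows, in the spirit of the classic reassembly algorithm of RFC 815 (the function and variable names are illustrative). The buffer starts as one hole covering all offsets; each arriving fragment splits or shrinks the holes it overlaps, and reassembly is complete when no holes remain after the final fragment fixes the datagram's total length.

```python
INF = float("inf")

def add_fragment(holes, first, last, is_last_fragment):
    """Update the hole list for a fragment covering byte offsets
    [first, last]. Each hole is an inclusive (first, last) pair."""
    new_holes = []
    for h_first, h_last in holes:
        if first > h_last or last < h_first:
            new_holes.append((h_first, h_last))     # no overlap: keep hole
            continue
        if first > h_first:
            new_holes.append((h_first, first - 1))  # hole before fragment
        if last < h_last and not is_last_fragment:
            new_holes.append((last + 1, h_last))    # hole after fragment
    return new_holes

holes = [(0, INF)]                               # nothing received yet
holes = add_fragment(holes, 0, 511, False)       # first chunk
holes = add_fragment(holes, 1024, 1535, True)    # last chunk (fixes length)
holes = add_fragment(holes, 512, 1023, False)    # middle chunk fills the gap
assert holes == []                               # datagram complete
```

Fragments may arrive in any order; the hole list shrinks monotonically regardless of the order in which the chunks appear.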
3) Some network-based intrusion detection systems also have problems dealing with network-based attacks that involve packet fragmentation. An anomalously fashioned packet can cause the intrusion detection system to become unstable and crash [3].
Denial of service attacks (DoS)- A denial of service attack attempts to make a system or service unavailable to its legitimate users. An attacker will often try to compromise many PCs and use them to "amplify" the attack volume, as well as to hide his or her tracks; this is called a Distributed Denial of Service attack (DDoS). Denial of service attacks have now become a well-known criminal activity. In an online form of the "protection racket" (pay us some protection money or we'll ruin your business), computer criminals have taken to
Therefore, if a network application is unnecessary it should be disabled or closed immediately (Joshi & Misra, 2010). The advantage of this approach is that it minimizes the attack surface, protecting the host from receiving requests on ports that could be used to flood the system. The disadvantage is that it limits the number of applications you may need to run your organization efficiently. Another method of preventing these attacks is a firewall. A firewall can help mitigate simple DDoS attacks through simple rules such as implicit deny, or deny-any for certain ports and IP addresses. However, the disadvantage of using a firewall arises when sophisticated attacks are launched on ports such as port 80, used for web traffic: a firewall cannot tell the difference between legitimate traffic and malicious traffic arriving through that port (Joshi & Misra, 2010). This can allow an attack to proceed if the firewall cannot decide what is good and what is bad traffic. One filtering technique discussed in the journal article is "History-Based IP Filtering." During normal operation, traffic tends to stay balanced and stable, yet most DoS attacks are carried out with IP addresses that have never been seen on the network before, in order to flood the system. This form of filtration relies
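The core of history-based IP filtering can be sketched in a few lines. This is an illustrative simplification of the technique, not the exact scheme from Joshi & Misra: during normal operation the filter learns which source addresses are usual for the network, and once an attack is detected, packets from addresses never seen before are dropped.

```python
class HistoryFilter:
    """Minimal sketch of history-based IP filtering (illustrative)."""

    def __init__(self):
        self.known = set()        # source IPs seen during normal operation
        self.under_attack = False # toggled by an external attack detector

    def observe_normal(self, src_ip):
        """Learn a legitimate source address during normal operation."""
        self.known.add(src_ip)

    def allow(self, src_ip):
        """During an attack, admit only previously seen sources."""
        if not self.under_attack:
            return True
        return src_ip in self.known

f = HistoryFilter()
for ip in ("192.0.2.10", "192.0.2.11"):
    f.observe_normal(ip)
f.under_attack = True
assert f.allow("192.0.2.10")        # previously seen: admitted
assert not f.allow("203.0.113.99")  # never seen before: dropped
```

A production version would age out stale entries and store the history in a compact structure (for example, a Bloom filter) rather than an exact set.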
While DDoS attacks tend to generate a lot of fear and media attention (especially when the perpetrators are acting out of a sense of political "hacktivism"), they are by no means the only form of DoS attack. Asymmetric application-level DoS attacks take advantage of vulnerabilities in web servers, databases, or other cloud resources, allowing a malicious individual to take down an application using a single, extremely small attack payload, in some cases less than 100 bytes long.
Replay- Timestamps or sequence numbers on packet transmissions can eliminate replay attacks. Replay attacks occur when an unauthorized user intercepts transmissions between authorized users and forwards the packets to the destination as if he were the original sender.
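The sequence-number defence can be sketched as follows. This is a deliberately simplified illustration (real protocols such as IPsec use a sliding window rather than a single high-water mark): the receiver accepts a packet only if its sequence number is strictly greater than the highest number already accepted from that sender, so a captured packet replayed later is rejected.

```python
# Highest sequence number accepted so far, per sender (illustrative state).
last_seen = {}

def accept(sender, seq):
    """Accept a packet only if its sequence number advances past the
    highest one already accepted from this sender."""
    if seq <= last_seen.get(sender, -1):
        return False              # replayed or stale packet: reject
    last_seen[sender] = seq
    return True

assert accept("alice", 1)
assert accept("alice", 2)
assert not accept("alice", 2)     # exact replay rejected
assert not accept("alice", 1)     # older packet rejected
```

Timestamps work the same way, with "greater sequence number" replaced by "timestamp within an accepted freshness window"; both require the values to be covered by the packet's integrity protection so an attacker cannot rewrite them.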
We also considered a random sample of 1,000 connection records corresponding to normal data in order to determine the false alarm rate. It is important to note that the sample used for testing purposes had the same distribution as the original set of normal connections. After the features were constructed and normalized, the anomaly detection schemes were tested separately on the attack bursts, mixed bursty attacks, and non-bursty attacks. In all the experiments, the percentage of outliers in the training data (the allowed false alarm rate) was set to approximately 2%.
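Calibrating a detector to a target false alarm rate like the 2% used above typically amounts to a quantile computation. The sketch below is illustrative (the scores and the percentile rule are assumptions, not the experiment's actual scheme): score the normal training connections, then set the threshold at the 98th percentile so that about 2% of normal traffic is flagged as outlying.

```python
def threshold_for_far(normal_scores, far=0.02):
    """Pick an anomaly-score threshold so roughly `far` of the normal
    training scores exceed it (empirical-quantile rule, illustrative)."""
    ranked = sorted(normal_scores)
    cut = int(len(ranked) * (1.0 - far))
    return ranked[min(cut, len(ranked) - 1)]

# Stand-in anomaly scores for 1,000 normal connections.
scores = [i / 1000 for i in range(1000)]
t = threshold_for_far(scores, far=0.02)
flagged = sum(1 for s in scores if s > t)
assert flagged / len(scores) <= 0.02   # false alarm rate held to ~2%
```

The detection rate on attack bursts is then measured with this threshold held fixed, which is what makes the false alarm rate comparable across the bursty and non-bursty experiments.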
The first category of attacks concerns vulnerabilities within the anatomy of the DNS protocol and the software implementation of the protocol. The second category of attacks concerns