CH 8
1. In a 5-stage pipelined processor, upon a page fault, what needs to happen in hardware for instruction re-start?
When a page fault occurs while fetching an instruction, the pipeline must be drained so that the instructions already executing can finish first. After that, the page fault is handled and the faulting instruction is re-fetched and restarted.
If instead the page fault occurs during the MEM stage, the younger instructions still in the fetch, decode, or execute stages can be squashed, since they have not yet made any changes to the registers. The page fault can then be handled and execution restarted from the faulting instruction.
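A minimal sketch of the MEM-stage case described above, with hypothetical stage indices and flag names (not any particular ISA): the younger stages are invalidated and the faulting PC is saved so the instruction can be re-fetched after the page is brought in.

```c
#include <stdbool.h>

/* Hypothetical pipeline latch: one entry per stage IF, ID, EX, MEM, WB. */
struct latch {
    unsigned pc;      /* address of the instruction in this stage */
    bool     valid;   /* false once the instruction has been squashed */
};

/* On a page fault raised in MEM, squash the younger stages (IF, ID, EX),
 * let older instructions in MEM/WB drain, and save the faulting PC so the
 * instruction can be re-fetched after the fault is serviced. */
unsigned handle_mem_page_fault(struct latch stages[5])
{
    unsigned restart_pc = stages[3].pc;   /* index 3 = MEM stage */
    for (int s = 0; s < 3; s++)           /* IF, ID, EX are squashed */
        stages[s].valid = false;
    return restart_pc;                    /* fetch restarts here after the fault is handled */
}
```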
2. Describe the role of the frame table and the disk map data structures in a demand paged memory manager.
The frame table records the state of each physical frame: which frames are free, which are in use, and which process (and virtual page) each occupied frame is allocated to.
The disk map records, for each page of a process, where that page resides on the swap device, so a page that has been swapped out can be located and brought back into memory when it is needed again.
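A minimal sketch in C of the two structures described above (the field names are illustrative assumptions, not a specific OS's layout):

```c
#include <stdbool.h>

/* Frame table: one entry per physical frame, telling whether the frame is
 * free and, if not, which process/page currently occupies it. */
struct frame_entry {
    bool free;          /* is this frame available for allocation?          */
    int  owner_pid;     /* process the frame is allocated to                */
    int  vpage;         /* virtual page of that process held in the frame   */
};

/* Disk map: one entry per virtual page of a process, recording where the
 * page lives on the swap device so it can be read back on a page fault. */
struct disk_map_entry {
    bool on_disk;       /* has the page been written out to swap?           */
    long swap_block;    /* block number on the swap device                  */
};
```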
4. Describe the interaction between the process scheduler and the memory manager.
The process scheduler and the memory manager are two pieces of operating system code that lie dormant while a user process is running. Periodically, the supervisory timer interrupt wakes the process scheduler, which decides which task should run on the CPU next. While a process is running it continually issues read and write accesses in its logical address space; when such an access causes a page fault, the memory manager is invoked to bring the page into memory, and the scheduler can give the CPU to another process while the faulting one waits.
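A rough sketch of that interaction, with assumed function names standing in for the real kernel routines: the timer interrupt invokes the scheduler, while a page fault invokes the memory manager, which blocks the faulting process and hands the CPU back to the scheduler until the page arrives.

```c
#include <stdio.h>

/* Stubs standing in for the real scheduler and memory manager (assumptions). */
static void schedule(void)                             { puts("scheduler: pick next process"); }
static int  allocate_frame(void)                       { return 42; }
static void start_disk_read(int p, unsigned v, int f)  { (void)p; (void)v; (void)f; }
static void block_process(int pid)                     { (void)pid; }

/* Timer interrupt: the scheduler decides which task runs on the CPU next. */
void timer_interrupt(void)
{
    schedule();
}

/* Page fault: the memory manager finds a frame, starts the disk read, blocks
 * the faulting process, and returns the CPU to the scheduler meanwhile. */
void page_fault_handler(int pid, unsigned vaddr)
{
    int frame = allocate_frame();
    start_disk_read(pid, vaddr, frame);
    block_process(pid);
    schedule();
}
```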
i) Memory: cache server (holds recently accessed web pages in its RAM, for speedier access)
a. The CPU tells the RAM which address holds the data that the CPU wants to read or write.
Assign a process identifier, allocate address space, initialize the process control block, set the appropriate linkages, and create or expand other data structures.
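The steps above could be expressed roughly as follows, as a sketch with assumed PCB fields rather than a real kernel's layout:

```c
#include <stdlib.h>

/* Minimal process control block; the fields mirror the steps listed above. */
struct pcb {
    int   pid;          /* assigned process identification          */
    void *addr_space;   /* allocated address space                  */
    int   state;        /* initialized PCB state (0 = ready)        */
    struct pcb *parent; /* linkage into the process tree            */
};

struct pcb *create_process(struct pcb *parent, size_t addr_space_size)
{
    static int next_pid = 1;
    struct pcb *p = malloc(sizeof *p);           /* create/expand data structures */
    if (!p) return NULL;
    p->pid        = next_pid++;                  /* assign a process identifier   */
    p->addr_space = calloc(1, addr_space_size);  /* allocate address space        */
    p->state      = 0;                           /* initialize the PCB            */
    p->parent     = parent;                      /* set appropriate linkage       */
    return p;
}
```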
A distributed system is an application that executes a collection of protocols to coordinate the actions of multiple processes on a network, where all components work together to perform a single set of related tasks. Given the combined capabilities of its distributed components, a distributed system can be much larger and more powerful than a combination of stand-alone systems. But this is not easy to achieve: for a distributed system to be useful, it must be reliable, and that is a difficult goal because of the complexity of the interactions between simultaneously running components. A distributed system must have the following characteristics:
Primary storage (also called main memory or internal memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively being operated on is also stored there in a uniform manner.
The main feature of HDFS is its built-in redundancy: it typically keeps multiple replicas of each block in the system. An HDFS cluster manages the addition and removal of nodes automatically, and a single operator can manage a cluster of up to 3,000 nodes. In HDFS, a few key areas of POSIX semantics have been traded away to increase the data throughput rate.
This separation provides greater flexibility in managing the storage. The virtualization system provides the logical space for data storage and maps it to the actual physical location. There can be several layers of virtualization and mapping, so the output of one layer can be passed on to the next higher layer. Virtualization maps between front-end and back-end resources, where the back-end refers to a logical unit number (LUN) that is not presented to the host system and the front-end refers to a LUN that is presented to a host system. The actual form of the mapping depends on the implementation. In block-based storage, a single block of information is addressed using a LUN identifier and an offset, which is known as logical block addressing (LBA). The virtualization software maintains the mapping information for the virtualized storage; this mapping information is called metadata and is stored in a mapping table. Some implementations do not use a mapping table at all; instead they use algorithms to calculate the storage location, and dynamic methods can be used for this.
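A minimal sketch of the mapping-table idea described above (the structure and names are assumptions, not a specific product's metadata format): a front-end LUN and logical block offset are looked up in a table that yields the back-end LUN and physical offset.

```c
#include <stdbool.h>
#include <stddef.h>

/* One metadata entry: maps a front-end (LUN, offset) to a back-end location. */
struct map_entry {
    int  front_lun;
    long front_offset;
    int  back_lun;
    long back_offset;
};

/* Look up the physical (back-end) location for a logical block address. */
bool virt_to_phys(const struct map_entry *table, size_t n,
                  int front_lun, long front_offset,
                  int *back_lun, long *back_offset)
{
    for (size_t i = 0; i < n; i++) {
        if (table[i].front_lun == front_lun &&
            table[i].front_offset == front_offset) {
            *back_lun    = table[i].back_lun;
            *back_offset = table[i].back_offset;
            return true;
        }
    }
    return false;   /* no mapping: this block has not been allocated yet */
}
```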
HDFS stores large files across multiple machines. It achieves reliability by replicating the data across multiple hosts, and hence in principle does not require redundant array of independent disks (RAID) storage on the hosts.
Ans: If two page-table entries point to the same page frame in memory, then two users (or two processes) can share the same copy of code, or sometimes data, instead of keeping duplicate copies in memory.
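A tiny sketch of that situation, using a hypothetical simplified page-table layout: two page-table entries in different processes point at the same physical frame, so the code or data in that frame is shared.

```c
#include <stdio.h>

/* Simplified page-table entry: just a frame number and a present bit. */
struct pte { int frame; int present; };

int main(void)
{
    struct pte table_a[4] = {0};   /* page table of process A */
    struct pte table_b[4] = {0};   /* page table of process B */

    /* Two entries point at the same physical frame 7, so both processes
     * see the same code or data without keeping two copies in memory. */
    table_a[1] = (struct pte){ .frame = 7, .present = 1 };
    table_b[3] = (struct pte){ .frame = 7, .present = 1 };

    printf("A page 1 -> frame %d, B page 3 -> frame %d\n",
           table_a[1].frame, table_b[3].frame);
    return 0;
}
```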
Today users have plenty of high-quality, high-resolution data available through various technologies, and more data keeps being generated across many domains and fields. The passage of huge data sets between the external memory and the internal memory of a computer is therefore becoming commonplace. However, there is a vast difference between data access speeds in internal and external memory: internal memory is very fast, while external memory is about 10^5 to 10^6 times slower at performing data accesses.
Today’s computers have different ways to store data. Some examples are devices such as the hard disk (aka magnetic disk), floppy disk, RAM, CD-ROM, tape, and flash (aka jump drive, USB memory stick, or thumb drive). Storage devices fall into two categories: primary and secondary. Each of these devices causes the computer to process data at a different speed. This paper will show how each of these devices stores data and how it affects the speed of the computer.
The major difficulty encountered with extensive use of parallelism is the presence of branch instructions, both conditional and unconditional, among the instructions presented to the processor for execution. If the instructions under execution in the pipeline do not change the control flow of the program, there is no problem at all. However, when a branch instruction causes the program to change its flow of control, the situation becomes a concern: the branch breaks the sequential flow of control, leading to what is called a pipeline stall and imposing heavy penalties on processing in the form of execution delays, breaks in the program flow, and an overall performance drop. Changes in the control flow hurt processor performance because many processor cycles are wasted flushing the pipeline, which has already been loaded with instructions from the wrong locations, and then reading in a new set of instructions from the right address. It is well known that in a highly parallel computer system, branch instructions can break the smooth flow of instruction fetching, decoding, and execution. This results in delay, because instruction issue must often wait until the actual branch outcome is known. To make things worse, the deeper the pipeline, the longer the delay, and thus the greater the performance loss.
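The performance loss can be quantified roughly as follows. The numbers used below are illustrative assumptions, not measured values: if a fraction of instructions are branches and each taken branch costs a few stall cycles, the effective CPI grows from the ideal value of 1 accordingly.

```c
#include <stdio.h>

int main(void)
{
    double base_cpi = 1.0;    /* ideal pipelined CPI                         */
    double branch_f = 0.20;   /* assumed: 20% of instructions are branches   */
    double taken_f  = 0.60;   /* assumed: 60% of those branches are taken    */
    double penalty  = 3.0;    /* assumed: stall cycles per taken branch      */

    /* effective CPI = base CPI + (branch frequency * taken fraction * penalty) */
    double effective_cpi = base_cpi + branch_f * taken_f * penalty;
    printf("effective CPI = %.2f\n", effective_cpi);   /* 1 + 0.20*0.60*3 = 1.36 */
    return 0;
}
```

Even with these modest assumptions, roughly a third of the machine's throughput is lost to branches, which is why deeper pipelines (larger penalties) suffer proportionally more.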
The steps listed below are followed for fault resolution in fault management:
Only one program at a time gets the CPU for execution while the others wait their turn. The whole idea of having a multi-programmed system is to optimize system utilization (more specifically, CPU time). The currently executing program gets interrupted by the operating system between tasks (for example, while waiting for I/O; recall the mail packaging example) and control is transferred to another program in line (another customer). A running program keeps executing until it voluntarily gives the CPU back or until it blocks for I/O. As you can see, the scheduling goal is very clear: processes waiting for I/O must not block other processes, since that wastes CPU time. The idea is to keep the CPU busy as long as there are processes ready to run.
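That scheduling goal can be made concrete with the classic utilization approximation (a standard textbook model, not taken from the text above): if each process waits for I/O a fraction p of the time, then with n processes in memory the CPU is idle only when all n are waiting at once, so utilization is roughly 1 - p^n.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double p = 0.8;                    /* assumed: each process waits on I/O 80% of the time */
    for (int n = 1; n <= 5; n++) {     /* degree of multiprogramming */
        double util = 1.0 - pow(p, n); /* CPU busy unless all n processes wait simultaneously */
        printf("n = %d  CPU utilization = %.0f%%\n", n, util * 100.0);
    }
    return 0;
}
```

With these assumed numbers, one process keeps the CPU only about 20% busy, while five processes push utilization close to 67%, which is exactly why multiprogramming pays off.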