CSN-1.E.1 The Internet has been engineered to be fault-tolerant, with abstractions for routing and transmitting data.
CSN-1.E.2 Redundancy is the inclusion of extra components that can be used to mitigate failure of a system if other components fail.
CSN-1.E.3 One way to accomplish network redundancy is by having more than one path between any two connected devices.
CSN-1.E.4 If a particular device or connection on the Internet fails, subsequent data will be sent via a different route, if possible.
CSN-1.E.5 When a system can support failures and still continue to function, it is called fault-tolerant. This is important because elements of complex systems fail at unexpected times, often in groups, and fault tolerance allows users to continue to use the network.
CSN-1.E.6 Redundancy within a system often requires additional resources but can provide the benefit of fault tolerance.
CSN-1.E.7 The redundancy of routing options between two points increases the reliability of the Internet and helps it scale to more devices and more people.
In these examples, the letters represent computing devices and the lines represent the wires that connect them.
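The redundancy idea in CSN-1.E.3 and CSN-1.E.4 can be sketched in a few lines of Python (this sketch is not part of the original notes). The devices A–E and the `WIRES` set below are made up, and breadth-first search only stands in for real Internet routing: when one wire fails, the search finds a different route if one exists.

```python
# A small made-up network: letters are computing devices, each pair in WIRES
# is a wire between two devices. Most device pairs have more than one path
# between them -- that is the redundancy described above.
from collections import deque

WIRES = {("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("C", "E")}

def neighbors(device, wires):
    """All devices directly wired to `device`."""
    return {b for a, b in wires if a == device} | {a for a, b in wires if b == device}

def find_route(start, end, wires):
    """Breadth-first search for any path from start to end, or None if none exists."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in neighbors(path[-1], wires):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_route("A", "E", WIRES))       # ['A', 'B', 'C', 'E']
broken = WIRES - {("B", "C")}            # the B-C wire fails
print(find_route("A", "E", broken))      # a different route: ['A', 'D', 'C', 'E']
```

Because the network has a redundant path through D, losing the B–C wire does not cut A off from E; the route simply changes, which is the fault tolerance described in CSN-1.E.5.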
Operating system tasks: the operating system has dozens of tasks, such as scheduling what it will do next, managing hardware, and working with the network.
User tasks: executing programs that the user has selected, such as MS Excel, MS Word, or computer games.
Sequential computing: a computational model in which operations are performed in order, one at a time. Tasks are done one after another.
Think of sequential computing as items moving down a conveyor belt one at a time.
Parallel computing: a computational model in which the program is broken into multiple smaller sequential operations, some of which are performed simultaneously (a short sketch follows these definitions).
Multi-core processors: modern hardware puts several processors, called cores, in one system. A single CPU can have 64 or more cores, and graphics cards for gaming can have thousands of cores.
SIMD (Single Instruction, Multiple Data): a form of parallelism in which a large amount of data is processed the same way, as in video game graphics.
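Here is a minimal sketch of the two models, assuming a multi-core machine. The `work()` function and the task sizes are made up for illustration, and `multiprocessing.Pool` stands in for however the operating system actually spreads work across cores.

```python
# Sequential vs. parallel computing on a multi-core machine (illustrative only).
import time
from multiprocessing import Pool

def work(n):
    """A stand-in task: sum the numbers 0..n-1."""
    return sum(range(n))

TASKS = [5_000_000] * 8   # eight identical made-up tasks

if __name__ == "__main__":
    start = time.perf_counter()
    sequential_results = [work(n) for n in TASKS]        # one task after another
    print("sequential:", time.perf_counter() - start, "seconds")

    start = time.perf_counter()
    with Pool() as pool:                                  # one worker process per core by default
        parallel_results = pool.map(work, TASKS)          # tasks run simultaneously
    print("parallel:  ", time.perf_counter() - start, "seconds")
```

On a machine with several cores, the parallel version finishes sooner because the tasks no longer wait in line for a single processor.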
Parallel | Sequential |
---|---|
time = the longest time taken on any one processor (faster) | time = the sum of all individual task times (slower) |
tasks done simultaneously | tasks done one at a time |
good for big problems | good for small problems |
harder to implement | easier to implement |
less portable | more portable |
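The first row of the table can be checked with simple arithmetic. The two task times below (labeled X and Y) are made up, and a two-processor machine with one task per processor is assumed:

```python
# Sequential time is the sum of all task times; parallel time is the longest
# time spent on any one processor. Task times are made-up values in seconds.
task_times = {"X": 30, "Y": 50}

sequential_time = sum(task_times.values())   # run one after another: 30 + 50 = 80 seconds
parallel_time = max(task_times.values())     # run at the same time: limited by the slower task, 50 seconds

print(sequential_time, parallel_time)        # 80 50
```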
NOTE: When increasing the use of parallel computing in a solution, the efficiency of the solution is still restricted by the sequential portion. This means that at some point, adding parallel portions will no longer significantly increase efficiency.
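This limit is often formalized as Amdahl's law (the name does not appear in the notes above). A sketch, using a made-up program in which 90% of the work can run in parallel:

```python
# Amdahl's law: if a fraction p of a program can run in parallel on n
# processors, the best possible speedup is 1 / ((1 - p) + p / n).
# The fraction 0.90 is a made-up example.
def speedup(p, n):
    return 1 / ((1 - p) + p / n)

for n in (2, 4, 16, 1_000_000):
    print(n, round(speedup(0.90, n), 2))
# Even with a million processors the speedup only approaches 1 / (1 - 0.90) = 10x,
# because the 10% sequential portion still runs one step at a time.
```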
Distributed computing: the sending of tasks from one computer to one or more others. It is a model in which multiple devices are used to run a program.
Distributed computing relies on special software to group different computers together so they can work as one system.
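Below is a minimal sketch of the idea, not of any particular distributed system: one "coordinator" sends a task to a "worker" over a socket and reads back the result. Everything here runs on one machine, with a thread standing in for the second computer, and the names (`run_worker`, `run_coordinator`, `HOST`, `PORT`) are made up for this sketch.

```python
# One computer (the coordinator) sends a task to another (the worker) and
# gets the answer back over the network. Illustrative only.
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007   # assumption: this local port is free

def run_worker():
    """Wait for one task over the network, compute it, and send the result back."""
    with socket.socket() as server:
        server.bind((HOST, PORT))
        server.listen(1)
        conn, _ = server.accept()
        with conn:
            # One recv is enough for this tiny message; real systems frame their data.
            task = json.loads(conn.recv(65536).decode())
            answer = sum(task["data"])                    # the worker does the actual computing
            conn.sendall(json.dumps({"result": answer}).encode())

def run_coordinator():
    """Send a task to the worker and print the result that comes back."""
    with socket.socket() as client:
        client.connect((HOST, PORT))
        client.sendall(json.dumps({"data": list(range(100))}).encode())
        print(json.loads(client.recv(65536).decode()))    # {'result': 4950}

if __name__ == "__main__":
    worker = threading.Thread(target=run_worker)
    worker.start()          # stands in for a second computer on the network
    time.sleep(0.5)         # give the worker a moment to start listening
    run_coordinator()
    worker.join()
```

In a real distributed system the worker would run on a different machine, there would be many workers, and the "special software" mentioned above would handle finding them, splitting up the tasks, and dealing with failures.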
Record your answers in the cell below, titled “Write Answers Here.”
Process | Execution Time on Either Processor |
---|---|
A | 25 seconds |
B | 45 seconds |
What is the difference in execution time between running the two processes in parallel (one on each processor) and running them one after the other on a single processor?
Process | Execution Time on Either Processor |
---|---|
A | 25 seconds |
B | 25 seconds |
C | 10 seconds |
D | 40 seconds |
How should the program assign the four processes to the two processors to optimize execution time?
After you record your answers in this cell, take a screenshot of the cell and DM it to Nikki Hekmat on Slack. There is a penalty of 10% off your grade for each day the assignment is late. The homework is due Monday, December 4, at 11:59 PM.
[50] milliseconds
Simply put A, B, C, or D next to each number to indicate the correct answers for the questions above.