Having studied how various kinds of data are represented in binary (Booleans, integers, real numbers, but also instructions), and having taken a look at how CPUs process that data (the basic idea of how transistors and logic gates work), we are now ready to add another important component to our computer: the memory. We already know that CPUs are equipped with registers, where data must be stored before the CPU can operate on it. Moreover, we already have a rough idea that the computer has some kind of main memory (not to be confused with the hard disk), where data, including a program's instructions, are kept while our programs run.
This first video introduces the "cache memory".
Why is cache memory actually able to improve our computer's performance? This video is an introduction to the two important concepts of temporal and spatial locality.
The next video explains the hierarchy of the different memory devices found in our computers. The cache memory itself is split into several levels with faster (or slower) access to the data. In general, the faster memory devices are also the smaller ones (they can hold less data).
Prof. Luis Ceze's lecture continues with an explanation of how data from main memory are mapped into the caches. This part is very technical and is not fundamental for the purposes of our course (we will study this kind of detail very soon, in the context of virtual memory). Curious students may want to watch this video for further information.
Is there any way to optimize our code so that it exploits the cache hierarchy as much as possible? The last video we watch this week answers this question!
To revise the content of the videos above, you can work through this set of simple exercises. The solutions to these exercises will be given to Rennes 1 students during the planned videoconferences.