There are actually several reasons.
First and probably foremost, the data stored in the instruction cache is generally somewhat different from what's stored in the data cache -- along with the instructions themselves, there are often annotations for things like where the next instruction starts, to help out the decoders. Some processors (e.g., NetBurst, some SPARCs) use a "trace cache", which stores the result of decoding an instruction rather than storing the original instruction in its encoded form.
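As a purely illustrative sketch (not any real processor's layout -- designs vary widely), the extra state an instruction-cache line might carry could look something like this:

```c
#include <stdint.h>

/* Hypothetical layout for illustration only. A 64-byte I-cache line
 * augmented with pre-decode annotations, so the decoders don't have to
 * rediscover instruction boundaries on every fetch. */
struct icache_line {
    uint8_t  bytes[64];        /* the raw instruction bytes                */
    uint64_t insn_start_mask;  /* bit i set: an instruction starts at byte
                                  i (x86 instructions are 1-15 bytes long) */
    uint64_t branch_mask;      /* bit i set: the instruction at byte i is
                                  a branch, to steer prediction early      */
};

/* A trace cache (e.g., NetBurst) goes further and skips the raw bytes
 * entirely, caching already-decoded micro-ops instead: */
struct trace_line {
    uint32_t uops[6];          /* decoded micro-ops, ready to issue
                                  (NetBurst held six per trace line)       */
    uint64_t start_ip;         /* instruction pointer the trace begins at  */
};
```

None of this metadata makes sense for ordinary data, which is part of why sharing one cache for both would waste space.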
Second, it simplifies the circuitry a bit -- the data cache has to deal with both reads and writes, but the instruction cache only deals with reads. (This is part of why self-modifying code is so expensive: instead of directly overwriting the data in the instruction cache, the write goes through the data cache to the L2 cache, and then the line in the instruction cache is invalidated and reloaded from L2.)
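To make that cost concrete, here's a minimal sketch of self-modifying code on a POSIX system with GCC or Clang. `patch_and_run` is a hypothetical helper, and it assumes `code` is a page-aligned executable buffer obtained from something like `mmap`; the stores travel through the D-cache exactly as described above.

```c
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

/* Hypothetical helper: assumes 'code' is a page-aligned buffer from
 * mmap() that is at least 'len' bytes long, and that 'new_bytes'
 * encodes a complete function returning int. */
int patch_and_run(uint8_t *code, size_t len,
                  const uint8_t *new_bytes, size_t n)
{
    /* Make the buffer writable as well as executable. */
    if (mprotect(code, len, PROT_READ | PROT_WRITE | PROT_EXEC) != 0)
        return -1;

    /* These stores land in the D-cache (and drain toward L2); they do
     * NOT update the instruction cache directly. */
    memcpy(code, new_bytes, n);

    /* GCC/Clang builtin: emits whatever cache maintenance the target
     * needs (nothing on x86, whose I-cache snoops stores; explicit
     * invalidate instructions on e.g. ARM). */
    __builtin___clear_cache((char *)code, (char *)code + n);

    /* Executing the patched bytes forces the I-cache to refetch the
     * invalidated line from L2. Casting data to a function pointer is
     * implementation-defined but standard practice in JITs. */
    return ((int (*)(void))code)();
}
```

Note that on x86 the flush builtin compiles to nothing, but the invalidate-and-refetch from L2 still happens inside the pipeline -- which is exactly where the expense comes from.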
Third, it increases bandwidth: most modern processors can fetch from the instruction cache and read from the data cache simultaneously. Most also have queues at the "entrance" to the cache, so they can actually do two reads and one write in a given cycle.
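For a feel of why that matters, consider an ordinary copy loop; the accesses split cleanly across the two caches, so neither starves the other (the comments describe the usual L1i/L1d split, not any specific chip):

```c
#include <stddef.h>

/* In the steady state of this loop, each cycle can service both an
 * instruction fetch of the loop body from the I-cache and a data read
 * of src[i] (plus a buffered write of dst[i]) through the D-cache and
 * its store queue, without the two contending for one port. */
void copy(int *dst, const int *src, size_t n)
{
    for (size_t i = 0; i < n; i++)   /* body fetched from L1i every iteration */
        dst[i] = src[i];             /* load from L1d, store via the queue    */
}
```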
Fourth, it can save power. While you have to keep the memory cells themselves powered to retain their contents, some processors can (and do) power down some of the associated circuitry (decoders and such) when it's not in use. With separate caches, these circuits can be powered up independently for instructions and data, increasing the chance that a given circuit stays unpowered during any given cycle (I'm not sure any x86 processors do this -- AFAIK, it's more of an ARM thing).