Update on Copilot Memory and Retention
I'm not going to have to change a retention policy after all, but it's not so simple.
A short missive today while I play catch-up after a week at the ILTA Conference, followed by a weekend adventure in DC with my wife.
I wanted to update you on something I wrote a few weeks back about Copilot Memory:
"My assumption (testing in progress to confirm) is that the data will be subject to the retention policy applied to Copilot interactions."

I have tested and confirmed that the “Memory” interactions, such as specifying my location and role at work, are stored in the same manner as other interactions. A search of my mailbox for these items returns them as interactions because they are indeed interactions. I prompted Copilot to remember details about me, and the response was to add them as memory items.
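For those who want to repeat the check in their own tenant, here is a rough sketch of how that kind of mailbox search could be scripted against the Microsoft Graph eDiscovery API instead of clicking through Purview. Treat it strictly as an illustration: the token, case ID, and keyword query are placeholders, and you should confirm the exact endpoints and property names against the current Graph documentation before relying on them.

```python
# Rough sketch: create a Purview eDiscovery (premium) search for Copilot
# "memory" interactions via Microsoft Graph, then kick off an estimate.
# Token, case ID, and keyword query are placeholders for illustration.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<token with eDiscovery.ReadWrite.All>"   # placeholder
CASE_ID = "<existing eDiscovery case id>"         # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# 1. Create a search keyed to a phrase I asked Copilot to remember.
#    Swap in whatever test phrase you used; this one is hypothetical.
resp = requests.post(
    f"{GRAPH}/security/cases/ediscoveryCases/{CASE_ID}/searches",
    headers=HEADERS,
    json={
        "displayName": "Copilot memory check",
        "contentQuery": '"remember that I work from"',  # hypothetical phrase
    },
)
resp.raise_for_status()
search_id = resp.json()["id"]

# 2. Run an estimate to see whether the interactions still exist in the
#    mailbox or whether retention has already purged them. (You would also
#    need to add the target mailbox as a data source on the search first.)
requests.post(
    f"{GRAPH}/security/cases/ediscoveryCases/{CASE_ID}/searches/{search_id}/estimateStatistics",
    headers=HEADERS,
).raise_for_status()

print(f"Estimate started for search {search_id}")
```

If the estimate comes back empty while the Memory setting still lists the facts, you are seeing the same split between the stored interactions and the memory that I describe below.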
Assuming that I’ve set a short retention policy due to the temporary nature of AI interactions, I am now presented with some interesting questions. How long should Copilot remember things about users by default? Do I want to stop auto-deleting Copilot interactions so that it always remembers facts about users until they explicitly remove them from its memory? If that sounds good to you, I’d also ask: do you want to store every interaction forever? That sounds like a data protection and eDiscovery nightmare.
Upon returning to my regular work schedule yesterday, I took a quick look at Copilot memory. I saw that, indeed, the things I had asked Copilot to remember were still showing up in the Memory setting. When prompted, Copilot returned them as things it knew about me.
Which was weird, because the auto-deletion policy should have removed those interactions long before now.
Naturally, I went looking for the interactions in my mailbox.
They’re gone, just as my retention policy requires. The memory, however, stays like a ghost in the walls of my personal Copilot settings.
This seems like good news. Data retention rules are being followed, and users can still customize their interactions with Copilot, keeping full control over what it remembers and what it forgets as they work with the AI.
The part of my brain that always considers where things can go wrong, however, had another thought. Will the memory that persists in Copilot, and influences the responses you get from it, become an issue in an investigation?
I admit, it may not seem like a significant risk, and maybe it isn’t. On the other hand, I wonder how we will show what memories were in place at the time of a Copilot response, should that response be what explains a business decision now in question. After all, an eDiscovery search will come up empty once my retention period has passed, regardless of what Copilot is still remembering.
Sure, it’s far-fetched. This is legal, though. We specialize in finding far-fetched scenarios to argue about. 😏
What do you think? Is my imagination running away with me, or is there a possibility that one day we’ll be reading about a matter where memory or custom instructions influenced a Copilot response that is key to the case?
If that happens, how will we show what instructions or memories were in place at the time?
These are the questions that sit deep in my brain. Scary place, huh?
I’ll be back next week with some details of a weird search quirk that I’ve seen in the new eDiscovery interface. As usual, paid subscribers will get all the details. As we head toward the end of August and the retirement of Premium eDiscovery in favor of this new UI, I’ll be looking closer at what’s going on in there to help you get as ready as you can be for the latest tools. If you’ve been testing in that new interface, let me know if you see any other weird behavior. The more eyes on it, the better!