Selective KV-Cache Sharing to Mitigate Timing Side-Channels in LLM Inference

Published as an arXiv preprint, 2025