Accelerating Local LLMs on Resource-Constrained Edge Devices via Distributed Prompt Caching

Because local LLM inference on resource-constrained edge devices faces a severe performance bottleneck, this paper proposes distributed prompt caching...
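To make the core idea concrete, the sketch below illustrates one generic form a distributed prompt cache could take: prompt prefixes are hashed to a key, the key determines which edge node holds the cached state, and subsequent devices reuse that state instead of recomputing it. This is only an illustrative sketch, not the paper's method; the class names, the modulo placement scheme, and the notion of a cached "KV state" dictionary are all assumptions introduced here for clarity.

```python
# Generic sketch of distributed prompt caching (hypothetical names throughout;
# not the method described in this paper).
import hashlib
from typing import Any, Dict, List, Optional


class CacheNode:
    """One edge device holding a shard of cached prompt-prefix states."""

    def __init__(self, name: str) -> None:
        self.name = name
        # prefix hash -> cached state (e.g., precomputed attention KV tensors)
        self.store: Dict[str, Any] = {}

    def get(self, key: str) -> Optional[Any]:
        return self.store.get(key)

    def put(self, key: str, state: Any) -> None:
        self.store[key] = state


class DistributedPromptCache:
    """Routes each prompt prefix to a node by hashing, so devices pool cache space."""

    def __init__(self, nodes: List[CacheNode]) -> None:
        self.nodes = nodes

    def _key(self, prefix: str) -> str:
        return hashlib.sha256(prefix.encode("utf-8")).hexdigest()

    def _node_for(self, key: str) -> CacheNode:
        # Simple modulo placement for illustration; a real system might use
        # consistent hashing to tolerate nodes joining and leaving.
        return self.nodes[int(key, 16) % len(self.nodes)]

    def lookup(self, prefix: str) -> Optional[Any]:
        key = self._key(prefix)
        return self._node_for(key).get(key)

    def insert(self, prefix: str, state: Any) -> None:
        key = self._key(prefix)
        self._node_for(key).put(key, state)


if __name__ == "__main__":
    cache = DistributedPromptCache([CacheNode("edge-0"), CacheNode("edge-1")])
    prefix = "You are a helpful assistant."
    if cache.lookup(prefix) is None:
        # Cache miss: compute the expensive prefix state once, then share it.
        cache.insert(prefix, {"kv": "precomputed attention state"})
    # Any device routing to the same node now reuses the state.
    print(cache.lookup(prefix))
```

The point of the sketch is the routing step: by agreeing on a deterministic mapping from prompt prefix to node, devices can reuse each other's cached computation without a central coordinator, which is one plausible way caching could be distributed across an edge cluster.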