Edge Computing

Definition

Processing data on or near the device where it is generated, rather than sending it to a remote cloud server.

Edge computing moves computation from centralized cloud data centers to the 'edge' of the network — the user's device or a nearby server. For speech recognition, this means running the ASR model directly on the user's computer or phone instead of streaming audio to a remote server.

Edge computing provides three key benefits for voice applications: privacy (audio never leaves the device), latency (no network round-trip), and reliability (works offline). The trade-off is that edge devices have limited computational resources compared to cloud servers, which can constrain model size and accuracy. Apple's Neural Engine and similar hardware accelerators are narrowing this gap.
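The latency benefit comes from eliminating the network round-trip entirely, which can outweigh the slower on-device inference. A minimal sketch of that budget, using assumed illustrative timings (not measurements):

```python
# Latency budget for a single voice query, cloud vs. edge.
# All numbers below are assumptions for illustration, not measurements.

CLOUD_RTT_MS = 150        # assumed network round trip to a remote ASR server
CLOUD_INFERENCE_MS = 30   # assumed inference time on powerful server hardware
EDGE_INFERENCE_MS = 80    # assumed inference time on constrained device hardware

cloud_total_ms = CLOUD_RTT_MS + CLOUD_INFERENCE_MS  # network dominates
edge_total_ms = EDGE_INFERENCE_MS                   # no network hop at all

print(f"cloud: {cloud_total_ms} ms, edge: {edge_total_ms} ms")
```

Under these assumptions the edge path wins even though its inference step is slower, because the network round-trip is the largest single cost; the balance shifts back toward the cloud only when the on-device model must be shrunk so far that accuracy suffers.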
