Vega Web App Best Practices - Web Workers

Web Workers are a feature in modern web browsers that allow JavaScript to run in the background, independently of the main browser thread. For more information, see Using Web Workers.

The execution model is true parallelism rather than concurrency, which only interleaves asynchronous operations on the main thread. Web Workers prevent UI blocking by offloading work to another thread, whereas concurrent code still blocks the UI when an asynchronous task is computationally heavy. Used correctly, Web Workers are a powerful tool.
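
As a rough sketch of the difference (the hash-worker.js file and the workload below are made up for illustration), an async function does not keep CPU-bound work off the main thread, while the same loop inside a worker leaves the UI free:

  // CPU-bound work in an async function still runs on the main thread, so the UI
  // freezes for the duration of the loop even though the function is "asynchronous".
  async function hashOnMainThread(input: string): Promise<number> {
    let hash = 0;
    for (let i = 0; i < 1_000_000_000; i++) {
      hash = (hash + input.length * i) % 0xffffffff; // heavy synchronous loop
    }
    return hash;
  }

  // The same loop inside a Web Worker (a hypothetical hash-worker.js) runs in
  // parallel on another thread, so rendering and input handling continue.
  const hashWorker = new Worker('hash-worker.js');
  hashWorker.onmessage = (event) => console.log('hash ready:', event.data);
  hashWorker.postMessage('some input');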

Concurrent vs. parallel diagram

Blink is the rendering engine used by Chromium. It uses a thread pool algorithm designed to maximize web app performance; it is more sophisticated than a fixed or cached thread pool and excels at bursts of small worker tasks. Blink balances parallelism and concurrency across CPU cores to maximize core usage, which means the threads and their resources, including resource limits and sandboxing, are managed by Chromium. It is the web app’s role to determine the appropriate number of Web Workers to create based on the task type, complexity, and available system resources. The web app also needs to manage communication between the workers and the main thread, and to minimize communication overhead with strategies such as batching messages, prioritizing tasks, and limiting the number of workers active at a time.
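
For example, batching is one way to reduce postMessage traffic. The sketch below (the BATCH_SIZE value and RESULTS message shape are illustrative, not part of any API) accumulates results inside a worker and posts them in a single message instead of one message per item:

  // Inside a worker: collect results and post them back in batches.
  const BATCH_SIZE = 50;
  let batch: number[] = [];

  function emit(result: number): void {
    batch.push(result);
    if (batch.length >= BATCH_SIZE) {
      self.postMessage({ type: 'RESULTS', payload: batch });
      batch = [];
    }
  }

  function flush(): void {
    // Send any remaining results when the work is finished.
    if (batch.length > 0) {
      self.postMessage({ type: 'RESULTS', payload: batch });
      batch = [];
    }
  }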

Web Workers in a Vega WebView

Vega often runs on devices with limited resources, so the number of threads should be restricted based on the number of cores on the device. JavaScript provides navigator.hardwareConcurrency to determine how many cores the device has. The lowest-end devices have 4 cores, while higher-end devices have more. The formula used is number_of_workers = (2 * number_of_cores) + 1. For example, a device with 4 cores would allow a maximum of 9 workers. In some cases the system can support fewer or more workers than this, depending on worker behavior. The Web Worker lifecycle helps explain some of these cases, as well as why resource oversubscription is problematic.
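
As a quick sketch, this cap can be computed at startup from navigator.hardwareConcurrency (the fallback of 4 cores is an assumption for devices that don’t report a value):

  // Size the worker budget from the device's core count.
  const cores = navigator.hardwareConcurrency || 4; // assume 4 cores if not reported
  const maxWorkers = (2 * cores) + 1;               // e.g. 4 cores -> at most 9 workers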

The normal life cycle of a Web Worker is: 1) the main thread creates a new worker, 2) the main thread posts a message to it, 3) the worker handles the message, 4) the worker posts a message back to the main thread, and 5) the main thread handles the returned message. Communication with the main thread affects main thread performance. If many small workers are created and finish around the same time, and they all send data back to the main thread to process, the main thread will be impacted, even if the responses are handled concurrently. The following are some strategies to maximize performance for Web Workers.
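
A minimal sketch of that life cycle (the worker.js file name and the message contents are illustrative):

  // worker.ts (loaded as worker.js) -- steps 3 and 4: handle the message, post a reply
  self.onmessage = (event: MessageEvent) => {
    const result = `processed: ${event.data}`;
    self.postMessage(result);
  };

  // main.ts -- steps 1, 2, and 5: create the worker, send a message, handle the reply
  const worker = new Worker('worker.js');
  worker.onmessage = (event: MessageEvent) => {
    console.log(event.data); // "processed: task payload"
  };
  worker.postMessage('task payload');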

Web Worker guidelines for Vega

  • Use Web Workers for genuinely beneficial tasks and be mindful of which events trigger them. Too many Web Workers can overwhelm low-resource devices such as the Fire TV Stick 4K Select.
  • Reduce the amount of data passed between workers and the main thread.
    • Use transferable objects to avoid copying large amounts of data.

      // Using transferable objects to avoid copying large amounts of data
      const data = new Uint8Array(1024 * 1024); // 1MB of data
      worker.postMessage(data, [data.buffer]); // Transfer the underlying ArrayBuffer
      
  • Manage your workers and know when they’re no longer needed. Use terminate to remove unneeded workers so they don’t finish later and flood the main thread.

    // Terminate a worker when it's no longer needed
    worker.terminate();
    
  • Reuse workers or worker pools when possible, instead of spawning new workers.

    const worker = new Worker('worker.js');
    worker.postMessage('Hello from main thread');
      
    // ... some time later ...
      
    // Reusing a worker by posting new tasks to it
    worker.postMessage('New task');
    
  • Use caching to reduce recalculations.

    const cache = new Map<string, CachedData>(); // Using a Map for efficient ID-based lookups
      
    self.onmessage = async (event: MessageEvent) => {
      const { type, payload } = event.data;
      const { id, forceRefresh } = payload;
      
      // Check if data is already in cache
      if (cache.has(id) && !forceRefresh) {
        self.postMessage({ type: 'DATA_RESPONSE', payload: cache.get(id).data });
        return;
      }
      // ... otherwise fetch or compute the data, store it in the cache, and post it back
    };
    
  • Know how many Web Workers can run well on a given number of cores. A quad-core CPU can run 1-9 simple workers well, 9-15 with some impact on the UI, 15-30 with a visible impact, and anything beyond that with difficulty.
    • Here’s an algorithm you can use for this:
      • Always safe: number_of_workers = navigator.hardwareConcurrency - 2.
      • Mostly safe: number_of_workers = (2 * navigator.hardwareConcurrency) + 1.
    • For persistent or heavy workers, consider running only 2-4 workers simultaneously.
    • If you have a CPU-intensive task, make sure there aren’t many other workers running at the same time, or manage the processing manually, for example with a worker pool, to prevent overloading the system (see the sketch after this list).
  • Make sure your HTTP caching headers are set up so workers don’t redownload the same files. Some libraries use workers to cache images or other content, and many use a predefined strategy paradigm.
    • TV apps often trigger workers on navigation to preload images that aren’t yet available, so navigation feels more fluid. If those images aren’t cached, quick navigation can flood the system with workers, causing stuttering or freezing. Make sure headers are set up properly so images are cached appropriately.
    • There are several strategies for caching. Many of these libraries use “cache only,” “cache first,” “network only,” “network first,” or “stale-while-revalidate,” as well as other options such as expiry.
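
The following is a minimal worker pool sketch tying several of these guidelines together (the Task shape, callback signature, and worker.js file are illustrative, not part of the Vega SDK). It caps the number of live workers and queues extra tasks instead of spawning a new worker per task:

  interface Task {
    id: number;
    payload: unknown;
  }

  type QueuedTask = { task: Task; onResult: (data: unknown) => void };

  class WorkerPool {
    private idle: Worker[] = [];
    private queue: QueuedTask[] = [];

    constructor(script: string, size: number) {
      for (let i = 0; i < size; i++) {
        this.idle.push(new Worker(script));
      }
    }

    run(task: Task, onResult: (data: unknown) => void): void {
      const worker = this.idle.pop();
      if (!worker) {
        // No free worker: queue the task instead of overloading the CPU.
        this.queue.push({ task, onResult });
        return;
      }
      worker.onmessage = (event: MessageEvent) => {
        onResult(event.data);       // hand the result back to the caller
        this.idle.push(worker);     // return the worker for reuse
        const next = this.queue.shift();
        if (next) {
          this.run(next.task, next.onResult);
        }
      };
      worker.postMessage(task);
    }

    // Terminate the idle workers once the pool is no longer needed.
    terminateIdle(): void {
      this.idle.forEach((worker) => worker.terminate());
      this.idle = [];
    }
  }

  // Size the pool with the "always safe" rule from the list above.
  const poolSize = Math.max(1, navigator.hardwareConcurrency - 2);
  const pool = new WorkerPool('worker.js', poolSize);
  pool.run({ id: 1, payload: 'some work' }, (result) => console.log(result));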

Last updated: Sep 30, 2025