
Requirements for Multimedia Apps (Fire TV)

For Fire TV applications to interact harmoniously with other applications on Fire TV, all applications must adhere to the same multimedia requirements. These multimedia requirements include how to correctly handle activity lifecycle and audio focus events, MediaSession, decoder instances, wake locks, and more. By adhering to the requirements outlined here, your app will contribute to a better experience for users as they navigate from application to application on Fire TV.

Cooperation of Applications on Fire TV

On Fire TV, the main purpose of most applications is to play multimedia. As such, applications across Fire TV must cooperate with one another.

In many ways, applications targeting TV devices can be more complex than general Android applications. The HOME button is always within reach, and voice interactions can start at any moment; either can immediately interrupt your application's playback. In contrast to tablet and phone apps, on Fire TV the status bar and notification bar aren't available to interact with an app. Stopping or killing a misbehaving application is also a more complicated task on TV devices. If you've ever closed an app playing music only to find the music continued playing while you started watching a movie, you know how multimedia interactions can collide in disastrous ways.

Because apps on Fire TV must cooperate harmoniously, it's imperative that all apps follow the same practices with multimedia. In general, follow the recommended guidelines on the Android Developer Pages. However, due to the complexity of the various interactions between applications on TV devices, we treat many of the suggestions listed in the Android Developer documentation as requirements. Where Fire TV's requirements are stricter, it's noted in the requirement details.

Background Android Knowledge

Concepts in this article assume familiarity with Android development. For a better understanding of these foundational concepts, see the Android documentation on the activity lifecycle, audio focus, media sessions, and wake locks.

Note that not all Android APIs mentioned here are available on Fire OS 5 (which supports up to API level 22). Some of the mentioned APIs are specific to Android API level 25 (Nougat), which Fire OS 6 supports. If the requirement refers to an API level your app doesn't support, follow the implementation guidelines available at your application's API level.

Requirements by Application Type and the Activity/Service Approach

Requirements differ by application type and whether the application plays media as an activity or a service. Multimedia applications can be divided into several types:

  • Applications that play audio only (for example, a music app)
  • Applications that play audio and video (for example, a television network station app)
  • Applications that play video only (for example, a game without audio that lets users listen to their favorite music application playing in the background)

These different categories of applications often have different requirements.

Additionally, apps can play media either as an Activity or a Service. Video applications must have a window to render on, so they generally use the Activity approach. Audio-only applications that play in the background (while being covered by other Activities) must use a foreground Service to avoid being interrupted. In both cases, audio focus management is necessary before outputting any audio. The requirements for each of these application types are specified in the following sections.

Activity-Based Application Requirements

When your multimedia application plays as an activity (rather than a service), you must handle Activity lifecycle events carefully. You must be aware of all possible transitions described in Android's Activity Lifecycle documentation. Handling each of these transitions is a requirement for your Fire TV app.

Requirement 1.1: Handling onPause()

onPause() is called when your Activity is partially covered by other Activities (such as when Fire TV shows a heads-up notification after a user long-presses HOME). Your activity is allowed to continue playback even after onPause() is called, but you must still handle audio focus change callbacks. (You are not allowed to play audio after losing audio focus, regardless of the lifecycle event.)

Requirement 1.2: Handling onResume()

onResume() must restore the original playback state (that is, revert any changes made in onPause()) without onCreate() being called. The “playback state” here can mean various things depending on your application. Generally, you should continue playback automatically (if you also have audio focus) using the original volume with restored UI showing an up-to-date progress bar. If you had to free any of the resources used for playback, you might need to re-initialize the pipeline and seek back to the last known position before playback was interrupted.

Requirement 1.3: Release Resources in onStop()

When onStop() is called, you must release all resources used for playback (AudioTrack, MediaCodec, etc.). When the user returns to your application after pressing HOME, your application needs to re-acquire all resources.

Be cautious when using a separate Activity for playback. Quickly returning to your application might mean that the order of lifecycle events will be onStop() → onRestart() → onStart() → onResume(). If you release resources in onStop(), you might consider closing the playback Activity by calling finish(). Alternatively, you can re-acquire all playback resources along these lifecycle events (in onRestart(), onStart(), or onResume()).

Requirement 1.4: Avoid Using onDestroy()

You must not rely on onDestroy() being called in order to release resources. Resources must already be released in onStop(). The timing of when onDestroy() is called (if ever) is decided by the device's operating system. It is most likely not called when quickly switching to another application.
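
The following sketch shows one way to wire these lifecycle requirements together, assuming a simple MediaPlayer-based pipeline. The content URI and position bookkeeping are placeholders for your own logic:

    import android.app.Activity
    import android.media.MediaPlayer
    import android.net.Uri

    class PlaybackActivity : Activity() {

        private var player: MediaPlayer? = null
        private var lastPositionMs = 0

        override fun onStart() {
            super.onStart()
            // Re-acquire the resources released in onStop() (Requirements 1.2 and 1.3).
            if (player == null) {
                player = MediaPlayer().apply {
                    // Placeholder content URI; replace with your own source.
                    setDataSource(this@PlaybackActivity, Uri.parse("https://example.com/stream"))
                    prepare()
                    seekTo(lastPositionMs)
                }
            }
        }

        override fun onPause() {
            super.onPause()
            // Requirement 1.1: playback may continue while partially covered,
            // but audio focus callbacks must keep working (see Requirements 4.4-4.8).
        }

        override fun onStop() {
            // Requirements 1.3 and 1.4: release everything here; never wait for onDestroy().
            player?.let {
                lastPositionMs = it.currentPosition
                it.release()
            }
            player = null
            super.onStop()
            // Alternatively, call finish() here so a quick return re-creates the
            // Activity instead of going through onRestart() → onStart() → onResume().
        }
    }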

Requirement 1.5: Audio Focus Is Independent of the Activity Lifecycle

You must not rely on specific audio focus change events being received at specific states in your Activity lifecycle. Lifecycle and audio focus change events are often received together — such as when your Activity is interrupted by a voice interaction. In this case, you will receive both an AUDIOFOCUS_LOSS and an onPause() activity lifecycle event, but these events are actually independent, and their order can change at any time.

Requirement 1.6: Overriding onKeyDown

If you decide to handle input events by overriding onKeyDown(), you must set the return value carefully. You must return true to prevent an event from being propagated any further. Returning false means you have not handled the event and the Android framework should keep propagating it.
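
For example, a minimal override inside your playback Activity might look like this (togglePlayPause() is a hypothetical helper):

    import android.view.KeyEvent

    override fun onKeyDown(keyCode: Int, event: KeyEvent): Boolean {
        return when (keyCode) {
            KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE -> {
                togglePlayPause() // hypothetical helper in your player code
                true              // consumed: the event is not propagated further
            }
            else -> super.onKeyDown(keyCode, event) // not handled: keep propagating
        }
    }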

Service-Based App Requirements

If your application plays audio without video, it's recommended that you implement playback using a foreground service as described in Building an Audio App. With this approach, audio playback is allowed in the background as the user browses around other activities.

Requirement 2.1: Starting the Service

Start your service as a foreground service using the startForeground() method to avoid being interrupted in case of low memory conditions.

Requirement 2.2: Stopping the Service

When finished playing, use stopForeground() to remove the foreground state of your service. Once the foreground state is removed, the system might kill your service if more memory is needed.
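
A minimal sketch of this pattern, assuming you build your own playback notification (buildNotification() and NOTIFICATION_ID are placeholders):

    import android.app.Notification
    import android.app.Service
    import android.content.Intent
    import android.os.IBinder

    class MusicService : Service() {

        override fun onBind(intent: Intent): IBinder? = null

        override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
            // Requirement 2.1: promote to foreground so low-memory conditions
            // do not interrupt playback.
            startForeground(NOTIFICATION_ID, buildNotification())
            return START_STICKY
        }

        private fun stopPlayback() {
            // Requirement 2.2: drop the foreground state when playback ends,
            // allowing the system to reclaim the process if memory is needed.
            stopForeground(true) // true also removes the notification
            stopSelf()
        }

        private fun buildNotification(): Notification =
            TODO("Build your playback notification here")

        companion object { private const val NOTIFICATION_ID = 1 }
    }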

Media Session Requirements

Media applications must handle incoming media session events regardless of where the event originates. Among others, events might originate from the following sources:

  • The remote control of the Fire TV device
  • Bluetooth headsets with NEXT/PREVIOUS buttons
  • Attached keyboards and mice with multimedia buttons
  • Smartwatches
  • Remote applications running on the phone of the user
  • Amazon services responding to voice interactions (such as the user saying “Alexa, pause!”)

To handle all these events universally, regardless of the Activity lifecycle state, AudioFocus events, and so on, your application must follow the guidelines in Working with a Media Session. Specifically, note the following:

“A media session lives alongside the player that it manages. You should create and initialize a media session in the onCreate() method of the activity or service that owns the media session and its associated player.”

The requirements for handling the media session events are as follows:

Requirement 3.1: Allowing Transport Control Callbacks

When you initialize MediaSession, set flags to allow callbacks from transport controllers and media buttons. (Note: As of API level 26, all sessions are expected to handle transport controls.)

Requirement 3.2: Publishing the MediaSession

Publish the media session by calling setActive(true) before starting any playback.
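
A sketch of session initialization using the support library's MediaSessionCompat, created in the onCreate() of the Activity or Service that owns the player (the callback bodies are placeholders):

    import android.support.v4.media.session.MediaSessionCompat

    val session = MediaSessionCompat(this, "PlaybackSession").apply {
        // Requirement 3.1: accept media buttons and transport controls.
        // (On API level 26+ these flags are deprecated and implied.)
        setFlags(
            MediaSessionCompat.FLAG_HANDLES_MEDIA_BUTTONS or
            MediaSessionCompat.FLAG_HANDLES_TRANSPORT_CONTROLS
        )
        setCallback(object : MediaSessionCompat.Callback() {
            override fun onPlay() { /* start or resume playback */ }
            override fun onPause() { /* pause playback */ }
        })
    }

    // Requirement 3.2: publish the session before starting any playback.
    session.isActive = true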

Requirement 3.3: Updating Playback State

Keep the PlaybackState up-to-date:

  • Maintain the list of valid controller actions. For example, “pausing” is only possible while playing; also, some applications might restrict fast-forwarding during commercials.

  • Update the player position to let any progress bars be correctly updated. Note: To avoid refreshing the state only to update the playback position, use the four-parameter PlaybackStateCompat.Builder.setState() overload (which includes updateTime), as shown in the sketch after this list.

  • Update the metadata information, such as artist name, track title, track duration, etc., when it changes. All UI displays should be updated (even the remote application on your phone).
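
A sketch of a state update using the four-parameter setState() overload (session is the MediaSessionCompat from the earlier sketch; currentPositionMs is a placeholder for your player's position):

    import android.os.SystemClock
    import android.support.v4.media.session.PlaybackStateCompat

    val state = PlaybackStateCompat.Builder()
        .setActions(
            PlaybackStateCompat.ACTION_PAUSE or  // valid only while playing
            PlaybackStateCompat.ACTION_SEEK_TO
        )
        .setState(
            PlaybackStateCompat.STATE_PLAYING,
            currentPositionMs,              // placeholder: current position in ms
            1.0f,                           // playback speed
            SystemClock.elapsedRealtime()   // update time: lets controllers
        )                                   // extrapolate the position themselves
        .build()
    session.setPlaybackState(state)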

Requirement 3.4: Releasing the MediaSession

When playback is finished, release the media session using the MediaSession.release() API.

Requirement 3.5: Duration of the MediaSession

The active period of your MediaSession must exactly match the period your application is allowed and willing to play. For example, if you play music in the background, you must handle the media buttons continuously while in the background. If you pause playback when temporarily losing audio focus, you must also deactivate the media session to give up handling of media buttons.
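
For example, pausing on a transient focus loss might deactivate the session as well (player and session are the hypothetical objects from the earlier sketches):

    fun onTransientFocusLoss() {
        player.pause()
        session.isActive = false // Requirement 3.5: give up media button handling
    }

    fun onFocusRegained() {
        session.isActive = true  // resume handling media buttons
        player.start()
    }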

Audio Focus Event Requirements

Through audio focus handling, applications play audio in well-coordinated ways. As explained in Managing Audio Focus:

“To avoid every music app playing at the same time, Android introduces the idea of audio focus. Only one app can hold audio focus at a time.”

On Fire TV, applications playing any kind of audio must have audio focus before playback can be started. There are various requirements around requesting audio focus, handling audio focus changes, and abandoning audio focus.

Requirement 4.1: Request Audio Focus Immediately Before Playback

Applications must call AudioManager.requestAudioFocus() immediately before playing audio. (Many developers overlook the “immediately” part of the requirement.) The period between your requestAudioFocus() and abandonAudioFocus() calls must exactly match the period during which you handle the focus change callback.

For example, here is an often incorrectly implemented scenario:

  1. User clicks PLAY on a screen listing various available content inside the application.
  2. App requests audio focus to check if playback is allowed.
  3. App starts to download the content showing a “buffering” screen for a few seconds.
  4. App switches to the Activity with the actual playback handling the audio focus changes.

Your application might also lose audio focus during the transient “buffering” screen (step 3) before the start of the real playback. If you requested audio focus at step 2, you must handle AudioFocus events from that point on continuously. When the user arrives at step 4 (the playback activity handling the AudioFocus events), your application might already have lost the privilege to play (for example, if the user quickly initiated a voice interaction after clicking PLAY at step 1).

Requirement 4.2: Verify that Focus is Granted

Applications must verify that the requestAudioFocus() call returns AUDIOFOCUS_REQUEST_GRANTED. Any other value returned means that your application is not allowed to start playback. Note: API level 26 introduces AUDIOFOCUS_REQUEST_DELAYED, but this value is not returned on Fire TV devices (API level 25) at the moment.
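A sketch of the request-and-verify sequence inside your playback Activity or Service, using the pre-API-26 requestAudioFocus() form appropriate for Fire OS 5 and 6 (startPlayback() is a placeholder; focusChangeListener is your OnAudioFocusChangeListener, sketched in the next section; the abandon step is Requirement 4.9):

    import android.content.Context
    import android.media.AudioManager

    val audioManager = getSystemService(Context.AUDIO_SERVICE) as AudioManager

    // Requirement 4.1: request focus immediately before outputting audio.
    val result = audioManager.requestAudioFocus(
        focusChangeListener,          // your OnAudioFocusChangeListener
        AudioManager.STREAM_MUSIC,
        AudioManager.AUDIOFOCUS_GAIN
    )

    // Requirement 4.2: play only if the request was granted.
    if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
        startPlayback()               // placeholder for your playback start
    }

    // Requirement 4.9: abandon focus with the same listener when playback finishes.
    fun onPlaybackFinished() {
        audioManager.abandonAudioFocus(focusChangeListener)
    }
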

Requirement 4.3: Abandoning Audio Focus

Every playback session must abandon the audio focus immediately after finishing playback. If your application has multiple playback sessions (for example, you start playing the next episode of a show on a new Activity), you do not need to request audio focus the second time. If you do, your first session will receive a callback with the parameter AUDIOFOCUS_LOSS; in response, your first session should release its resources (the ones not shared with the second session) and abandon the audio focus. As a result, your application will always be present only once in the AudioFocus stack (you can confirm this manually by running adb shell dumpsys audio in the command line).

Audio Focus Change Requirements

If your application requests audio focus, it must also handle changes in focus by implementing AudioManager.OnAudioFocusChangeListener, and it must handle all callback parameters correctly. The callback parameter is one of the following (a listener sketch follows the list):

  • AUDIOFOCUS_LOSS
  • AUDIOFOCUS_LOSS_TRANSIENT
  • AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK
  • AUDIOFOCUS_GAIN
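
A minimal listener sketch covering all four values (the helper functions are placeholders; the per-value details follow in Requirements 4.4 through 4.8):

    import android.media.AudioManager

    val focusChangeListener = AudioManager.OnAudioFocusChangeListener { change ->
        when (change) {
            AudioManager.AUDIOFOCUS_LOSS ->
                stopReleaseAndAbandon()  // Requirement 4.4
            AudioManager.AUDIOFOCUS_LOSS_TRANSIENT ->
                pauseOrMutePlayback()    // Requirement 4.5
            AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK ->
                duckVolume()             // Requirements 4.6 and 4.7
            AudioManager.AUDIOFOCUS_GAIN ->
                restoreAndResume()       // Requirement 4.8
        }
    }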

Requirement 4.4: Handling AUDIOFOCUS_LOSS

When your application receives an AUDIOFOCUS_LOSS event, it must completely stop audio playback because the system has granted another application full playback rights. You should not expect the audio focus to be returned to your application (you will never get an AUDIOFOCUS_GAIN after this). You should close your application, release resources (AudioTrack, MediaCodec, etc.), and notify the system that you finished playback by calling AudioManager.abandonAudioFocus().

Requirement 4.5: Handling AUDIOFOCUS_LOSS_TRANSIENT

When your application receives an AUDIOFOCUS_LOSS_TRANSIENT event, it must “pause” audio playback (but there is no hard requirement for video here). This usually happens when your application gets interrupted by short notifications, system sounds, or other temporary states, and the audio focus will be returned to your application shortly.

Pausing audio means you cannot output any audio after this point; your application must be silent. You may pause the entire playback or just reduce the audio volume to 0 and keep the video playing. Video applications playing on-demand content should opt to pause both video and audio playback completely. You can hold on to the resources used for playback. If fully pausing would result in a poor user experience (such as with live video feeds that have long buffering times after interruptions), you can keep the video playback ongoing while the audio is muted.

Requirement 4.6: Handling AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK

When your application receives an AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK event, it must reduce the volume (duck). The expected volume level might vary across use cases but must not interfere with any other sound played in parallel with your audio output. (For example, you can set the temporary volume level to around 30-40% of the original volume.)

This event is usually received when the system wants to play a short notification during which audio applications are allowed to keep playing music at reduced volume. Note: This means you are also allowed to set the temporary volume level to 0 or to completely pause playback, effectively handling this event the same way as AUDIOFOCUS_LOSS_TRANSIENT.

Requirement 4.7: Handling AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK with Encoded Audio

Applications that output encoded audio content (for example, Dolby pass-through) should pause the playback completely when AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK is received. Encoded content does not have the notion of volume without being decoded. For pass-through cases, decoding would only happen on the receiver (AVR), so setting the stream volume on the Fire TV would not influence the real output.

Amazon's audio-enhanced platforms might be able to decode the content, update the volume to the reduced level, re-encode the content, and then send it to the HDMI output, but this capability is not guaranteed. (You can contact Amazon for specifics and availability.)

Requirement 4.8: Handling AUDIOFOCUS_GAIN

When your application receives an AUDIOFOCUS_GAIN event, it can continue playback as your application originally intended. This event is delivered after a transient period (during which you were paused, muted, or playing at reduced volume) has ended. If you reduced the volume before, you must restore the original volume level.

When receiving this AUDIOFOCUS_GAIN event, audio applications and video applications playing live streams usually continue playback automatically. Applications playing on-demand video content can either continue playback automatically or stay in a paused state, showing a PLAY button the user can press to continue playback.

Requirement 4.9: Abandoning the Audio Focus

When you are finished with audio playback, if your application was ever granted audio focus, you must call AudioManager.abandonAudioFocus() with the same listener you used when requesting audio focus. You must also release the resources used for playback. (You can request audio focus again for your next playback.)

Secure Decoder Requirements

Requirement 5.1: Limitations on Secure Decoders

Applications must use only one secure decoder pipeline at any given time. Most platforms support only one secure decoder (MediaCodec) instance, so any attempt to decode multiple secure streams in parallel will fail.

Requirement 5.2: Release Resources

Secure decoders must be released immediately when the onStop() function of the playback Activity is called. Secure decoders are used for video playback, so applications requiring them will always have an Activity with the surface they render on. As soon as this Activity is stopped, the resources should be freed to let other applications acquire the single instance of the secure pipeline.

Wake Lock Requirements

In general, you can expect all Fire TV devices to be powered from wall power (not from a battery), so you are allowed to skip managing the power consumption and performance of your application. For example, you do not need to keep the CPU turned on, dim the screen, or manage efficient network usage or Wi-Fi locks. But you will need to follow the Keeping the Device Awake developer pages for managing wake locks.

Requirement 6.1: Keeping the Device Awake

You must keep the device awake while playback is expected. This usually means your application sets the Activity flag FLAG_KEEP_SCREEN_ON or implements dynamic wake lock management by acquiring FULL_WAKE_LOCK.

Requirement 6.2: Releasing Wake Locks

You must release your wake locks if playback is not active for any reason. (Playback might not be active if the movie ended, buffering screen is shown, or you lost the audio focus.) Your application is expected to let the device screensaver kick in (after a timeout) and then let the device go to sleep.
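
A sketch of the window-flag approach inside the playback Activity:

    import android.view.WindowManager

    // Playback started or resumed: keep the screen on (Requirement 6.1).
    window.addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON)

    // Movie ended, buffering screen shown, or audio focus lost:
    // let the screensaver and sleep timeouts apply again (Requirement 6.2).
    window.clearFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON)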

Output State Change Requirements

When the output state changes, your application must adhere to the following requirements. For more information, see Handling HDMI Events (Fire TV).

Requirement 7.1: HDMI Disconnection

When you receive an HDMI disconnection event, you must pause media playback. Most applications should not continue playback without a screen (the data usage of unnecessary 4K video playback alone can be enormous). Even audio-only applications should pause playback when an HDMI sink gets disconnected.

Requirement 7.2: Switching from Bluetooth Sink to HDMI Output

Headphones are often used at high volumes, while TV speakers and sound systems are generally set to a much lower level. When the user disconnects a Bluetooth headset in use, the audio stream is automatically rerouted to the speakers. Users usually expect audio and video applications to pause to avoid a sudden change in volume levels. To support this scenario, your application should handle the intent ACTION_AUDIO_BECOMING_NOISY.
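
A sketch of handling the intent inside the playback Activity or Service (pausePlayback() is a placeholder):

    import android.content.BroadcastReceiver
    import android.content.Context
    import android.content.Intent
    import android.content.IntentFilter
    import android.media.AudioManager

    private val noisyReceiver = object : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            if (intent.action == AudioManager.ACTION_AUDIO_BECOMING_NOISY) {
                pausePlayback() // avoid a sudden volume jump on the TV speakers
            }
        }
    }

    // Register while playing; unregister when playback stops.
    registerReceiver(noisyReceiver, IntentFilter(AudioManager.ACTION_AUDIO_BECOMING_NOISY))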

Requirement 7.3: Switching to Bluetooth Outputs

Applications that output encoded audio streams directly (for example, Dolby pass-through) must listen for changes in audio devices. Encoded audio streams can be sent to the HDMI output (TVs and AV receivers support most Dolby standards) without being decoded on the Fire TV. But when a Bluetooth headphone is turned on, the audio stream automatically reroutes to the headphones, and the Bluetooth standards do not allow encoded audio streams to be directed to these devices. As a result, applications must be prepared for changes in device capabilities and act appropriately. There are multiple options for recognizing these audio device changes and reacting to them.

To receive notifications, applications can register for AudioDeviceCallbacks (introduced in API level 23) or for the intent AudioManager.ACTION_HDMI_AUDIO_PLUG (the latter is slower and uses more resources, so using AudioDeviceCallback is suggested).

To react to the changes, among other options, applications can switch to PCM input streams or turn off pass-through (using MediaCodec to decode the encoded input stream to PCM) so that the Bluetooth headphones can play the content correctly.
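
A sketch of the callback approach (audioManager is the instance from the earlier sketch; switchToPcmOutput() is a placeholder for your own reconfiguration logic):

    import android.media.AudioDeviceCallback
    import android.media.AudioDeviceInfo
    import android.media.AudioManager

    val deviceCallback = object : AudioDeviceCallback() {
        override fun onAudioDevicesAdded(addedDevices: Array<AudioDeviceInfo>) {
            if (addedDevices.any { it.type == AudioDeviceInfo.TYPE_BLUETOOTH_A2DP }) {
                // A Bluetooth sink appeared: stop pass-through and decode to PCM.
                switchToPcmOutput()
            }
        }

        override fun onAudioDevicesRemoved(removedDevices: Array<AudioDeviceInfo>) {
            // Re-evaluate sink capabilities; pass-through might be possible again.
        }
    }

    audioManager.registerAudioDeviceCallback(deviceCallback, null) // null: main thread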

Requirement 7.4: Switching Between Different HDMI Outputs

Applications relying on output other than PCM (such as encodings like Dolby or DTS) must check whether the capabilities of the sink have changed between HDMI disconnect and reconnect events. If the capabilities have changed, they must recreate the audio objects.

Users can disconnect the HDMI cable from a display supporting various audio encodings and plug it into a display supporting different encodings. The change in capabilities might require your application to adjust its audio output.

Additional Requirements

Requirement 8.1: Do Not Modify Global Device State

Refrain from using APIs that impact the global state of the device — for example, AudioManager.setStreamVolume(). An application trying to update volume should use the setVolume() APIs of its AudioTrack or MediaPlayer instances instead of AudioManager APIs that affect all applications on the device.
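
For example (mediaPlayer and audioTrack being your own instances):

    // Adjust only your own output, not the device-wide stream volume.
    mediaPlayer.setVolume(0.3f, 0.3f)  // left/right gain for this MediaPlayer only
    audioTrack.setVolume(0.3f)         // AudioTrack equivalent (API level 21+)

    // Avoid: audioManager.setStreamVolume(...) changes the volume for every app.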

Requirement 8.2: Target the Latest API Level

Event handling and the behavior of the Android framework might depend on the API level your app targets. As a general rule, always target the highest API level your application is expected to run on. Currently, if you want your application to behave correctly on the latest Fire TV devices, you must target API level 28 (Android Pie).

Requirement 8.3: Avoid Extensive CPU Usage

If the device reaches a critically high temperature, it will start throttling CPU performance in an attempt to decrease the temperature. If the temperature still increases, as a last resort, the Fire TV device will reboot to avoid heat-related damage to the hardware.

Continuous multimedia playback keeps all system resources under constant pressure. We thoroughly test all common multimedia playback use cases, including all combinations of encryptions and encodings for both audio and video. All device capabilities are determined to be achievable and safe with optimal application code.

It is your responsibility to keep your application code efficient enough to avoid overheating the device. Short bursts of high CPU load are acceptable, but continuous high CPU usage will steadily increase the device temperature. Test your application with the most demanding use case (highest resolution, frame rate, encrypted playback, all overlays turned on) for an extended period of time.

For example, if your application plays movies, test the longest movies. It is not acceptable for an application to let the device temperature reach the point where the CPU gets throttled. Consider using more efficient encodings (for example, HEVC instead of AVC) and hardware solutions instead of software implementations to reduce pressure on the entire multimedia pipeline.

Requirement 8.4: Test Complex Scenarios

Consider complex scenarios of the previously described requirements. If you handle the AudioFocus events and Android Lifecycle events correctly, maintain the MediaSession, follow HDMI connection and disconnection states, etc., your application should handle arbitrarily complex use cases such as the following:

  1. Start playback in your app.
  2. Initiate voice search.
  3. Unplug the HDMI cable from the display.
  4. Press BACK on the remote to return to your application from voice search.
  5. Turn on a previously paired Bluetooth headphone.
  6. Replug the HDMI cable to a display.

At this point, most applications should be waiting for user interaction in a paused state. All user interface displays should reflect a paused state. For example, remote applications should disable the PAUSE buttons. Audio should be routed to the Bluetooth headphone while avoiding Dolby pass-through output.


Last updated: Oct 10, 2023