Set Up Voice Permissions for Timers


With voice permissions for timers, your skill can request the timers permission through a voice interaction – the user can accept the permission you're requesting by saying "I approve," instead of having to open the Alexa app. This can make your skill more convenient for your users.

How it works

  1. Your skill initiates the permission request.
  2. Alexa asks the user if they want to grant permission to create a specific timer.
  3. The user responds to the permission request.
    • If the user grants permission, your skill can then create the timer. You don't need to send a permissions card to the Alexa app.
    • If the user doesn't grant permission, your skill can't create a timer. Your skill should provide a fallback workflow for this case (one possible fallback is sketched after this list).
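
If the user declines, one possible fallback, sketched below with the ASK SDK for Node.js (v2), is to explain the situation and send a permissions consent card to the Alexa app so the user can grant the permission later. The speech text is illustrative, not part of the standard workflow.

// Hedged sketch: a possible fallback response when the user denies the timers
// permission. The speech text is illustrative; the permission value is the
// timers scope used throughout this page.
const TIMERS_PERMISSION = 'alexa::alerts:timers:skill:readwrite';

return handlerInput.responseBuilder
    .speak("No problem. If you change your mind, you can grant the timers permission in the Alexa app and ask me again.")
    .withAskForPermissionsConsentCard([TIMERS_PERMISSION])
    .getResponse();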

Standard voice permission workflows

Your skill initiates the prompt that asks whether the user wants to set a timer, and you determine when in your skill's flow to ask.

User grants permission by voice to timers

In this example, the skill links to a hypothetical third-party cooking app called Cooking Time. Some of the output speech is controlled by the skill, and some by Alexa.

Alexa (as determined by skill): Welcome to Cooking Time. To help you with your cooking, this skill requires the use of Timers on Alexa.

Alexa (as determined by Alexa): Do you give Cooking Time permission to update your timers? You can say I approve or no.

User: I approve.

User denies permission for timers after initial agreement

In this example, the user denies permission to set a timer after initially agreeing to do so.

Other than the initial prompt, you can't control the interaction between Alexa and the user in this permissions workflow.

Alexa (as determined by skill): Welcome to Cooking Time. To help you with your cooking, this skill requires the use of Timers on Alexa.

Alexa (as determined by Alexa): Do you give Cooking Time permission to update your timers? You can say I approve or no.

User: No.

User response is unintelligible

Suppose the user gives an unintelligible response when Alexa asks for permission to set a timer. Alexa re-prompts the user. If the user grants or denies permission after the re-prompt, the corresponding workflow follows from that point.

Alexa (as determined by skill): Welcome to Cooking Time. To help you with your cooking, this skill requires the use of Timers on Alexa.

Alexa (as determined by Alexa): Do you give Cooking Time permission to update your timers? You can say I approve or no.

User: <Unintelligible>

Alexa (as determined by Alexa): Do you give Cooking Time permission to update your timers? You can say I approve or no.

Standard prompts

A skill kicks off the voice permissions for timers workflow, and the skill controls the initial prompt to the user. Alexa controls the subsequent prompts in the workflow, which provides a consistent experience for users across skills. For reference, each prompt has a tag. Your skill should follow the AcceptConsentTimers prompt with a restatement of the timer, its date and time, and its purpose; your skill controls this portion of the response (a sketch of such a restatement follows the prompts below).

AskForConsentTimers: Alexa (as determined by Alexa): Do you give Cooking Time permission to update your timers? You can say I approve or no.

AskForConsentRetryTimers: Alexa (as determined by Alexa): Do you give Cooking Time permission to update your timers? You can say I approve or no.
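
For example, after the user approves, your skill's restatement might look like the following hedged sketch in the ASK SDK for Node.js (v2); the timer details and wording are hypothetical.

// Hedged sketch: a hypothetical restatement that follows the AcceptConsentTimers
// prompt, covering the timer, its time, and its purpose.
const speakOutput = "Great. I'll start a 20-minute pasta timer now, so you know when to drain the pot.";

return handlerInput.responseBuilder
    .speak(speakOutput)
    .getResponse();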

Send a Connections.SendRequest directive

When the user responds affirmatively to the skill's request to set timers, your skill service code can send a Connections.SendRequest directive, as shown here. The permissionScope value is the timers scope: alexa::alerts:timers:skill:readwrite.

Alexa doesn't use the token field in this directive, but the resulting Connections.Response request returns the token value. You can provide the token in a format that makes sense for your skill, and you can use an empty string if you don't need it.

The consentLevel parameter specifies the granularity of the user to ask for consent. Valid values are ACCOUNT and PERSON (one way to choose between them is sketched after the example directive):

  • ACCOUNT is the Amazon account holder to which the Alexa-enabled device is registered.
  • PERSON is the recognized speaker. For details about recognized speakers, see Add Personalization to Your Alexa Skill.
{
   "type": "Connections.SendRequest",
   "name": "AskFor",
   "payload": {
      "@type": "AskForPermissionsConsentRequest",
      "@version": "2",
      "permissionScopes": [
        {
          "permissionScope": "alexa::alerts:timers:skill:readwrite",
          "consentLevel": "ACCOUNT"
        }
      ]
   },
   "token": ""
}
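
As a hedged sketch, one way to choose between the two consentLevel values is to check whether the request context includes a recognized speaker; the helper below is hypothetical and assumes the ASK SDK for Node.js (v2) request envelope.

// Hedged sketch: use PERSON when Alexa recognized the speaker, otherwise ACCOUNT.
// context.System.person is present only when a recognized speaker is identified.
function getConsentLevel(handlerInput) {
    const person = handlerInput.requestEnvelope.context.System.person;
    return person ? 'PERSON' : 'ACCOUNT';
}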

After receiving this directive, Alexa asks the user to grant permission for the specified timers permission scope, which results in a Connections.Response request, as shown. The body.status value is one of:

  • ACCEPTED – the user grants the permissions, either in response to the last request or previously.
  • DENIED – the user refuses the permissions.
  • NOT_ANSWERED – the user didn't answer the request for permissions or the response wasn't understood. In this scenario, Alexa re-prompts the user for a response.
{
   "type": "Connections.Response",
   "requestId": "string",
   "timestamp": "string",
   "locale": "string",
   "name": "AskFor",
   "status": {
      "code": "string",
      "message": "string"
   },
   "token": "string",
   "payload": {
      "permissionScopes" : [
       {
         "permissionScope" : "alexa::alerts:timers:skill:readwrite",
         "consentLevel": "ACCOUNT"
       },
      "status" : <status enum> // ACCEPTED, DENIED, or NOT_ANSWERED
      ]
   }
}
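
The following is a minimal sketch, using the ASK SDK for Node.js (v2), of a handler for this Connections.Response request. It assumes the status is read from the request payload as shown above; the handler name and spoken responses are illustrative.

// Hedged sketch: handle the Connections.Response that Alexa sends after the
// permissions prompt. Handler name and speech strings are illustrative.
const AskForResponseHandler = {
    canHandle(handlerInput) {
        const request = handlerInput.requestEnvelope.request;
        return request.type === 'Connections.Response' && request.name === 'AskFor';
    },
    handle(handlerInput) {
        const status = handlerInput.requestEnvelope.request.payload.status;

        if (status === 'ACCEPTED') {
            // The user granted the timers permission. Restate the timer and create it here.
            return handlerInput.responseBuilder
                .speak("Thanks. I'll set your cooking timer now.")
                .getResponse();
        }
        if (status === 'DENIED') {
            // The user declined. Fall back gracefully.
            return handlerInput.responseBuilder
                .speak("No problem. You can grant the timers permission later in the Alexa app.")
                .getResponse();
        }
        // NOT_ANSWERED: the request wasn't answered or understood.
        return handlerInput.responseBuilder
            .speak("Okay, we can set up timers another time.")
            .getResponse();
    }
};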

As you can see in the examples, Alexa has a set of standard prompts that you can't change when you develop a skill. You don't have to code these prompts; they're included with the standard voice permissions workflow.

Code example for a voice permissions request

The following example shows how you can add code to an Amazon Web Services (AWS) Lambda function to send the Connections.SendRequest directive for a voice permissions request. You can use the token field to keep track of state. Any value that you provide in the token field is returned in the resulting Connections.Response request. For example, you could use the token field to store the userId value. You can also set the token to an empty string.


This code example uses the Alexa Skills Kit SDK for Node.js (v2).

return handlerInput.responseBuilder
	.addDirective({
		type: "Connections.SendRequest",
		name: "AskFor",
		payload: {
			"@type": "AskForPermissionsConsentRequest",
			"@version": "2",
			"permissionScopes": [
			  {
			    "permissionScope": "alexa::alerts:timers:skill:readwrite",
			    "consentLevel": "ACCOUNT" 
			  } 
			]
		},
		token: "<string>"
	})
	.getResponse();
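
For context, the following hedged sketch wraps the directive above in a complete ASK SDK for Node.js (v2) request handler; the handler and intent names are hypothetical.

// Hedged sketch: a complete request handler that asks for the timers permission.
// StartCookingIntent is a hypothetical intent name.
const AskTimerPermissionHandler = {
    canHandle(handlerInput) {
        const request = handlerInput.requestEnvelope.request;
        return request.type === 'IntentRequest'
            && request.intent.name === 'StartCookingIntent';
    },
    handle(handlerInput) {
        return handlerInput.responseBuilder
            .addDirective({
                type: 'Connections.SendRequest',
                name: 'AskFor',
                payload: {
                    '@type': 'AskForPermissionsConsentRequest',
                    '@version': '2',
                    permissionScopes: [
                        {
                            permissionScope: 'alexa::alerts:timers:skill:readwrite',
                            consentLevel: 'ACCOUNT'
                        }
                    ]
                },
                token: ''
            })
            .getResponse();
    }
};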


This code example uses the Alexa Skills Kit SDK for Node.js (v1).

this.handler.response = {
	'version': '1.0',
	'response': {
		'directives': [{
			'type': 'Connections.SendRequest',
			'name': 'AskFor',
			'payload': {
			   '@type': 'AskForPermissionsConsentRequest',
			   '@version': '2',
			   'permissionScopes': [
			    {
			      'permissionScope': 'alexa::alerts:timers:skill:readwrite',
			      'consentLevel': 'ACCOUNT' 
			    } 
			   ]
			},
			'token': '<string>'
		}],
		'shouldEndSession': true
	}
};
this.emit(':responseReady');


The following shows the JSON syntax for a Connections.SendRequest directive for a voice permissions request. In this case, the name is AskFor.

{
  "directives": [
    {
      "type": "Connections.SendRequest",
      "name": "AskFor",
      "payload": {
        "@type": "AskForPermissionsConsentRequest",
        "@version": "2",
        "permissionScopes": [
          {
            "permissionScope": "alexa::alerts:timers:skill:readwrite",
            "consentLevel": "ACCOUNT"
          } 
        ]
      },
      "token": "<string>"
    }
  ]
}
