export const meta = {
  title: `Custom business logic (Lambda function, HTTP, and VTL resolvers)`,
  description: `Define custom business logic with Lambda function, HTTP, or VTL resolvers and expose it through GraphQL queries and mutations.`,
};

Define your custom business logic in a Lambda function resolver, HTTP resolver, or VTL resolver and expose it through a GraphQL query or mutation. Extend or override Amplify-generated GraphQL resolvers to optimize for your specific use cases.

## Create a custom query or mutation

While `@model` automatically generates dedicated "create", "read", "update", "delete", and "subscription" queries and mutations for you, there are cases where you want to define a stand-alone query or mutation.

1. Define your custom query or mutation

```graphql
type Mutation {
  myCustomMutation(args: String): String # your custom mutations here
}

type Query {
  myCustomQuery(args: String): String # your custom queries here
}
```

2. Use one of these resolver choices to handle the query or mutation request:
   - [Lambda function resolver](#lambda-function-resolver): use a custom Lambda function to handle the query or mutation
   - [HTTP resolver](#http-resolver): call an HTTP endpoint upon a query or mutation
   - [VTL resolver](#vtl-resolver) (most advanced): use VTL mapping templates to customize the query and mutation logic

3. Secure your custom query or mutation with [field-level authorization rules](/cli/graphql/authorization-rules/)
   - Note: Dynamic authorization rules are not supported on a custom query or mutation.

## Lambda function resolver

The `@function` directive allows you to quickly and easily configure AWS Lambda resolvers for your GraphQL API. You can use any AWS Lambda function, whether it was created with the Amplify CLI, the AWS Lambda console, or any other tool.

For example, use `amplify add function` to add a Lambda function called "echofunction" with the following handler:

```js
exports.handler = async function(event, context) {
  return event.arguments.msg;
};
```

To connect an AWS Lambda resolver to the GraphQL API, add the `@function` directive to a field in your schema.

```graphql
type Query {
  echo(msg: String): String @function(name: "echofunction-${env}")
}
```

The Amplify CLI provides support for maintaining multiple environments. When you deploy a function via `amplify add function`, it will automatically add the environment suffix to your Lambda function name. For example, if you create a function named `echofunction` using `amplify add function` in the `dev` environment, the deployed function will be named `echofunction-dev`. The `@function` directive allows you to use `${env}` to reference the current Amplify CLI environment.

If you deployed your Lambda function without the Amplify CLI, you must provide the full Lambda function name in the `name` parameter. If you deployed the same function with the name `echofunction`, you would have:

```graphql
type Query {
  echo(msg: String): String @function(name: "echofunction")
}
```

### Structure of the function event

When writing Lambda functions that are connected via the `@function` directive, you can expect the following structure for the AWS Lambda `event` object.

| Key | Description |
| --- | --- |
| typeName | The name of the parent object type of the field being resolved. |
| fieldName | The name of the field being resolved. |
| arguments | A map containing the arguments passed to the field being resolved. |
| identity | A map containing identity information for the request. Contains a nested key 'claims' that contains the JWT claims if they exist. |
| source | When resolving a nested field in a query, the source contains the parent value at runtime. For example, when resolving `Post.comments`, the source will be the Post object. |
| request | The AppSync request object. Contains header information. |
| prev | When using pipeline resolvers, this contains the object returned by the previous function. You can return the previous value for auditing use cases. |
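
For illustration, here is a minimal sketch of a Node.js handler that reads several of these keys (the field, argument, and claim names are examples only, not values generated by Amplify):

```js
exports.handler = async (event) => {
  // Which field triggered this invocation, e.g. typeName "Query" and fieldName "echo"
  const { typeName, fieldName } = event;

  // Arguments passed to the field, e.g. { msg: "hello" }
  const { msg } = event.arguments;

  // JWT claims are available when the request was authenticated, e.g. with Cognito User Pools
  const username =
    event.identity && event.identity.claims
      ? event.identity.claims['cognito:username']
      : 'anonymous';

  return `${typeName}.${fieldName} called by ${username}: ${msg}`;
};
```
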
Your function should follow the Lambda handler guidelines for your specific language. See the developer guides in the [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) documentation for your chosen language. If you choose to use structured types, your type should serialize the AWS Lambda event object outlined above. For example, if you are using Golang, you should define a struct with the above fields.

### Calling functions in different regions

By default, the function is expected to be in the same region as the Amplify project. If you need to call a function in a different or specific region, you can provide the **region** argument.

```graphql
type Query {
  echo(msg: String): String @function(name: "echofunction", region: "us-east-1")
}
```

Calling functions in different AWS accounts is not supported via the `@function` directive but is supported by AWS AppSync.

### Chaining functions

You can chain together multiple `@function` resolvers such that they are invoked in series when your field's resolver is invoked. To create a pipeline resolver that calls multiple AWS Lambda functions in series, use multiple `@function` directives on the field. Similarly, `@function` can be combined with field-level `@auth`. When combining these field directives, the ordering in the schema matches the ordering in the pipeline resolver. You can choose to have functions before and/or after field-level authorization is applied.

> **Note:** Be careful when using `@auth` directives on a field in a root type. `@auth` directives on field definitions use the source object to perform authorization logic, and the source will be an empty object for fields on root types. Static group authorization should perform as expected.

```graphql
type Mutation {
  doSomeWork(msg: String): String @function(name: "worker-function") @function(name: "audit-function")
}
```

In the example above, when you run a mutation that calls the `Mutation.doSomeWork` field, the **worker-function** will be invoked first, then the **audit-function** will be invoked with an event that contains the results of the **worker-function** under the **event.prev.result** key. The **audit-function** would need to return **event.prev.result** if you want the result of **worker-function** to be returned for the field (see the sketch at the end of this section).

### How it works

Definition of the `@function` directive:

```graphql
directive @function(name: String!, region: String) on FIELD_DEFINITION
```

Under the hood, Amplify creates an `AppSync::FunctionConfiguration` for each unique instance of `@function` in a document and a pipeline resolver containing a pointer to a function for each `@function` on a given field.

The `@function` directive generates these resources as necessary:

1. An AWS IAM role that has permission to invoke the function as well as a trust policy with AWS AppSync.
2. An AWS AppSync data source that registers the new role and existing function with your AppSync API.
3. An AWS AppSync pipeline function that prepares the Lambda event and invokes the new data source.
4. An AWS AppSync resolver that attaches to the GraphQL field and invokes the new pipeline functions.
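
Continuing the chaining example above, an **audit-function** handler might simply record the incoming event and pass the **worker-function** result through unchanged. The following is a minimal sketch (the logging is illustrative; the function names come from the schema example above):

```js
exports.handler = async (event) => {
  // event.prev.result holds the value returned by the previous function
  // in the pipeline (here, the worker-function).
  console.log('Auditing', event.typeName, event.fieldName, JSON.stringify(event.prev.result));

  // Return the previous result unchanged so the field resolves
  // to the worker-function's output.
  return event.prev.result;
};
```
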
## HTTP resolver

The `@http` directive allows you to quickly configure HTTP resolvers within your GraphQL API. To connect to an endpoint, add the `@http` directive to a field in your GraphQL schema. The directive allows you to define URL path parameters and to specify a query string and/or a request body.

For example, given the definition of a Post type:

```graphql
type Post {
  id: ID!
  title: String
  description: String
  views: Int
}

type Query {
  listPosts: [Post] @http(url: "https://www.example.com/posts")
}
```

Amplify generates the definition below, which sends a request to the URL when the `listPosts` query is used.

```graphql
type Query {
  listPosts: [Post]
}
```

### Request headers

The `@http` directive generates resolvers that can handle XML and JSON responses. If an HTTP method is not defined, `GET` is used. You can specify a list of static headers to be passed with the HTTP requests to your backend in your directive definition.

```graphql
type Query {
  listPosts: [Post]
    @http(
      url: "https://www.example.com/posts"
      headers: [{ key: "X-Header", value: "X-Header-Value" }]
    )
}
```

### Path parameters

You can create dynamic paths by specifying parameters in the directive URL using the special `:` notation. Your set of parameters can then be specified in the `params` input object of the query. Note that path parameters are not added to the request body or query string. You can define multiple parameters.

```graphql
type Query {
  getPost: Post @http(url: "https://www.example.com/posts/:id")
}
```

In the example above, the `:id` parameter will generate the appropriate query input as shown below:

```graphql
type Query {
  getPost(params: QueryGetPostParamsInput!): Post
}

input QueryGetPostParamsInput {
  id: String!
}
```

You can fetch a specific post by enclosing the id in the `params` input object.

```graphql
query post {
  getPost(params: {id: "POST_ID"}) {
    id
    title
  }
}
```

This executes the following request:

```console
GET /posts/POST_ID
Host: www.example.com
```

### Query String

You can send a query string with your request by specifying variables for your query. The query string is supported with all request methods.

Given the definition

```graphql
type Query {
  listPosts(sort: String!, from: String!, limit: Int!): [Post]
    @http(url: "https://www.example.com/posts")
}
```

Amplify generates

```graphql
type Query {
  listPosts(query: QueryListPostsQueryInput!): [Post]
}

input QueryListPostsQueryInput {
  sort: String!
  from: String!
  limit: Int!
}
```

You can query for posts using the `query` input object

```graphql
query posts {
  listPosts(query: {sort: "DESC", from: "last-week", limit: 5}) {
    id
    title
    description
  }
}
```

which sends the following request:

```text
GET /posts?sort=DESC&from=last-week&limit=5
Host: www.example.com
```

### Request Body

The `@http` directive also allows you to specify the body of a request, which is used for `POST`, `PUT`, and `PATCH` requests. To create a new post, you can define the following.

```graphql
type Mutation {
  addPost(title: String!, description: String!, views: Int): Post
    @http(method: POST, url: "https://www.example.com/post")
}
```

Amplify generates the `addPost` mutation field with the `query` and `body` input objects, since this type of request also supports a query string. The generated resolver verifies that non-null arguments (e.g., `title` and `description`) are passed in at least one of the input objects; if not, an error is returned.
```graphql
type Mutation {
  addPost(query: QueryAddPostQueryInput, body: QueryAddPostBodyInput): Post
}

input QueryAddPostQueryInput {
  title: String
  description: String
  views: Int
}

input QueryAddPostBodyInput {
  title: String
  description: String
  views: Int
}
```

You can add a post by using the `body` input object:

```graphql
mutation add {
  addPost(body: {title: "new post", description: "fresh content"}) {
    id
  }
}
```

which will send

```text
POST /post
Host: www.example.com

{
  title: "new post"
  description: "fresh content"
}
```

### Reference Amplify environment name

The `@http` directive allows you to use `${env}` to reference the current Amplify CLI environment.

```graphql
type Query {
  listPosts: [Post] @http(url: "https://www.example.com/${env}/posts")
}
```

which, in the `DEV` environment, will send

```text
GET /DEV/posts
Host: www.example.com
```

**Combining the different components**

You can use a combination of parameters, query, body, headers, and environments in your `@http` directive definition.

Given the definition

```graphql
type Post {
  id: ID!
  title: String
  description: String
  views: Int
  comments: [Comment]
}

type Comment {
  id: ID!
  content: String
}

type Mutation {
  updatePost(
    title: String!
    description: String!
    views: Int
    withComments: Boolean
  ): Post
    @http(
      method: PUT
      url: "https://www.example.com/${env}/posts/:id"
      headers: [{ key: "X-Header", value: "X-Header-Value" }]
    )
}
```

you can update a post with

```graphql
mutation update {
  updatePost(
    body: {title: "new title", description: "updated description", views: 100}
    params: {id: "EXISTING_ID"}
    query: {withComments: true}
  ) {
    id
    title
    description
    comments {
      id
      content
    }
  }
}
```

which, in the `DEV` environment, will send

```text
PUT /DEV/posts/EXISTING_ID?withComments=true
Host: www.example.com
X-Header: X-Header-Value

{
  title: "new title"
  description: "updated description"
  views: 100
}
```

### Reference existing field data

In some cases, you may want to send a request based on existing field data. Take a scenario where you have a post and want to fetch the comments associated with the post in a single query. Let's use the previous definition of `Post` and `Comment`.

```graphql
type Post {
  id: ID!
  title: String
  description: String
  views: Int
  comments: [Comment]
}

type Comment {
  id: ID!
  content: String
}
```

A post can be fetched at `/posts/:id` and a post's comments at `/posts/:id/comments`. You can fetch the comments based on the post id with the following updated definition. `$ctx.source` is a map that contains the resolution of the parent field (`Post`) and gives access to `id`.

```graphql
type Post {
  id: ID!
  title: String
  description: String
  views: Int
  comments: [Comment]
    @http(url: "https://www.example.com/posts/${ctx.source.id}/comments")
}

type Comment {
  id: ID!
  content: String
}

type Query {
  getPost: Post @http(url: "https://www.example.com/posts/:id")
}
```

You can retrieve the comments of a specific post with the following query and selection set.

```graphql
query post {
  getPost(params: {id: "POST_ID"}) {
    id
    title
    description
    comments {
      id
      content
    }
  }
}
```

Assuming that `getPost` retrieves a post with the id `POST_ID`, the comments field is resolved by sending this request to the endpoint

```text
GET /posts/POST_ID/comments
Host: www.example.com
```

Note that there is no check to ensure that the reference variable (here, the post ID) exists. When using this technique, it is recommended to make sure the referenced field is non-null.
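
If you call the `getPost` query above from a JavaScript client with the Amplify libraries, the request could look roughly like the following. This is a sketch only: the inline GraphQL document and the `POST_ID` placeholder are illustrative, not code generated by Amplify.

```js
import { API, graphqlOperation } from 'aws-amplify';

const getPost = /* GraphQL */ `
  query post($params: QueryGetPostParamsInput!) {
    getPost(params: $params) {
      id
      title
      description
      comments {
        id
        content
      }
    }
  }
`;

async function fetchPostWithComments() {
  // The path parameter is supplied through the generated params input object.
  const result = await API.graphql(graphqlOperation(getPost, { params: { id: 'POST_ID' } }));
  return result.data.getPost;
}
```
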
### How it works

Definition of the `@http` directive:

```graphql
directive @http(method: HttpMethod, url: String!, headers: [HttpHeader]) on FIELD_DEFINITION
enum HttpMethod {
  PUT
  POST
  GET
  DELETE
  PATCH
}
input HttpHeader {
  key: String
  value: String
}
```

The `@http` transformer will create one HTTP data source for each identified base URL. For example, if multiple HTTP resolvers are created that interact with the "https://www.example.com" endpoint, only a single data source is created. Each directive generates one resolver. Depending on the definition, the appropriate `body`, `params`, and `query` input types are created.

Note that the `@http` transformer does not support calling other AWS services where the Signature Version 4 signing process is required.

## VTL resolver

You can use the AWS Cloud Development Kit (CDK) to define custom VTL resolvers for your GraphQL API. `@auth` directives are not supported for custom queries or mutations that are connected to a VTL resolver. This is because you are replacing Amplify's auto-generated capabilities for that particular query or mutation with custom-defined cloud resources.

```bash
amplify add custom
```

```console
? How do you want to define this custom resource?
❯ AWS CDK
? Provide a name for your custom resource
❯ MyCustomResolvers
```

Next, install the AppSync dependencies for your custom resource:

```bash
cd amplify/backend/custom/MyCustomResolvers
npm i @aws-cdk/aws-appsync@~1.172.0
```

> **Note:** Installations using the '\~' character do not modify the package.json. If you use '\~' with the default npm configuration, make sure your package.json reflects the right dependency to avoid compatibility errors in CDK.

Finally, add your custom resolvers into the `cdk-stack.ts` file. You can either add the VTL inline in your `cdk-stack.ts` file or define it externally in separate files. Review the [Resolver Mapping Template Programming Guide](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-programming-guide.html) to learn more about the VTL template.

#### Unit Resolver

```ts
import * as cdk from '@aws-cdk/core';
import * as AmplifyHelpers from '@aws-amplify/cli-extensibility-helper';
import * as appsync from '@aws-cdk/aws-appsync';
import { AmplifyDependentResourcesAttributes } from '../../types/amplify-dependent-resources-ref';

const requestVTL = `
`
const responseVTL = `
`

export class cdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps, amplifyResourceProps?: AmplifyHelpers.AmplifyResourceProps) {
    super(scope, id, props);
    /* Do not remove - Amplify CLI automatically injects the current deployment environment in this input parameter */
    new cdk.CfnParameter(this, 'env', {
      type: 'String',
      description: 'Current Amplify CLI env name',
    });

    // Access other Amplify Resources
    const retVal: AmplifyDependentResourcesAttributes = AmplifyHelpers.addResourceDependency(
      this,
      amplifyResourceProps.category,
      amplifyResourceProps.resourceName,
      [{ category: "api", resourceName: "" }]
    );

    const resolver = new appsync.CfnResolver(this, "custom-resolver", {
      // The GraphQL API ID is a CDK parameter passed from the root stack, so reference its
      // value with cdk.Fn.ref rather than using the parameter name directly.
      // See https://github.com/aws-amplify/amplify-cli/issues/9391#event-5843293887
      apiId: cdk.Fn.ref(retVal.api.replaceWithAPIName.GraphQLAPIIdOutput),
      fieldName: "querySomething",
      typeName: "Query", // Query | Mutation | Subscription
      requestMappingTemplate: requestVTL,
      responseMappingTemplate: responseVTL,
      dataSourceName: "TodoTable" // DataSource name
    })
  }
}
```

#### Pipeline Resolver

```ts
import * as cdk from '@aws-cdk/core';
import * as AmplifyHelpers from '@aws-amplify/cli-extensibility-helper';
import * as appsync from '@aws-cdk/aws-appsync';
import { AmplifyDependentResourcesAttributes } from '../../types/amplify-dependent-resources-ref';

const beforeMappingVTL = `
`
const afterMappingVTL = `
`
const function1requestVTL = `
`
const function1responseVTL = `
`
const function2requestVTL = `
`
const function2responseVTL = `
`

export class cdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps, amplifyResourceProps?: AmplifyHelpers.AmplifyResourceProps) {
    super(scope, id, props);
    /* Do not remove - Amplify CLI automatically injects the current deployment environment in this input parameter */
    new cdk.CfnParameter(this, 'env', {
      type: 'String',
      description: 'Current Amplify CLI env name',
    });

    // Access other Amplify Resources
    const retVal: AmplifyDependentResourcesAttributes = AmplifyHelpers.addResourceDependency(
      this,
      amplifyResourceProps.category,
      amplifyResourceProps.resourceName,
      [{ category: "api", resourceName: "" }]
    );

    const function1 = new appsync.CfnFunctionConfiguration(this, "function1", {
      apiId: cdk.Fn.ref(retVal.api.replaceWithAPIName.GraphQLAPIIdOutput),
      dataSourceName: "NONE_DS", // DataSource name
      functionVersion: "2018-05-29",
      name: "function1",
      requestMappingTemplate: function1requestVTL,
      responseMappingTemplate: function1responseVTL
    })

    const function2 = new appsync.CfnFunctionConfiguration(this, "function2", {
      apiId: cdk.Fn.ref(retVal.api.replaceWithAPIName.GraphQLAPIIdOutput),
      dataSourceName: "TodoTable", // DataSource name
      functionVersion: "2018-05-29",
      name: "function2",
      requestMappingTemplate: function2requestVTL,
      responseMappingTemplate: function2responseVTL
    })

    const resolver = new appsync.CfnResolver(this, "pipeline-resolver", {
      apiId: cdk.Fn.ref(retVal.api.replaceWithAPIName.GraphQLAPIIdOutput),
      fieldName: "querySomething",
      typeName: "Query", // Query | Mutation | Subscription
      kind: "PIPELINE",
      pipelineConfig: {
        functions: [
          function1.attrFunctionId,
          function2.attrFunctionId
        ]
      },
      requestMappingTemplate: beforeMappingVTL,
      responseMappingTemplate: afterMappingVTL
    })
  }
}
```

> **Note:** Users moving from ElasticSearch to OpenSearch will need to change the data source name from `ElasticSearchDomain` to `OpenSearchDataSource` if the upgrade process changes the source name. For new `@searchable` models the data source name will default to `OpenSearchDataSource`.

You can alternatively define the VTL templates in separate files such as `Query.querySomething.req.vtl` or `Query.querySomething.res.vtl` in `amplify/backend/custom/MyCustomResolvers/`. Then use the following code snippets to retrieve them:

```ts
// Requires `import * as path from 'path';` at the top of cdk-stack.ts
requestMappingTemplate: appsync.MappingTemplate.fromFile(path.join(__dirname, "..", "Query.querySomething.req.vtl")).renderTemplate(),
responseMappingTemplate: appsync.MappingTemplate.fromFile(path.join(__dirname, "..", "Query.querySomething.res.vtl")).renderTemplate(),
```

> **Note:** the `..` is added to the path because the path is always relative to the `build` folder of the custom resource.
## Add authorization rules to custom queries and mutations

Authorization rules can be applied with the `@auth` directive in the same way as field-level authorization rules. See [Field-level authorization rules](/cli/graphql/authorization-rules/#field-level-authorization-rules) for details.

In the example below, `myCustomMutation` can only be executed by signed-in customers who are authenticated with IAM:

```graphql
type Mutation {
  myCustomMutation(args: String): String @auth(rules: [{ allow: private, provider: iam }])
}
```

> **Known limitation:** You can't combine the `@auth` directive with a custom query or mutation that is using a VTL resolver.

## Override Amplify-generated resolvers

Amplify generates [AWS AppSync pipeline resolvers](https://docs.aws.amazon.com/appsync/latest/devguide/pipeline-resolvers.html) for your queries and mutations. The resolvers are listed in the API resource's folder `amplify/backend/api/<resource_name>/build/resolvers/`.

To override an Amplify-generated resolver:

1. Find the resolver file name you want to override under `build/resolvers`
2. Place a `.vtl` file with the same file name in the resource's `resolvers/` folder (not under `build/`)
3. Upon the next `amplify api gql-compile` or `amplify push`, the Amplify-generated resolver file will be replaced with your override resolver file

```console
amplify/backend/api/<resource_name>
├── build
│   ├── ...
│   ├── resolvers
│   │   ├── ...
│   │   ├── Query.searchTodos.req.vtl # Find resolver file name
│   │   └── ...
|   ...
├── resolvers
│   └── Query.searchTodos.req.vtl # Place resolver overrides with the same file name here
```

The example above shows how `Query.searchTodos.req.vtl` is overridden with a custom resolver. Review the [Resolver Mapping Template Programming Guide](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-programming-guide.html) to learn more about the VTL template.

## Extend Amplify-generated resolvers

Amplify generates [AWS AppSync pipeline resolvers](https://docs.aws.amazon.com/appsync/latest/devguide/pipeline-resolvers.html) for your queries and mutations. You can "slot" your custom business logic in between the Amplify-generated resolvers. You can find the Amplify-generated resolvers under your API resource's `build/resolvers/` folder. A resolver function's file name determines its placement within the slot sequence.

```console
File name convention: [Query|Mutation|Subscription].[field name].[slot name].[slot placement].[req|res].vtl

Example: Mutation.createTodo.postAuth.1.req.vtl
```

To extend an Amplify-generated resolver:

1. Find the [resolver slot](#supported-resolver-slots) you want to add your custom business logic to
2. Place a `.vtl` file that follows the file naming convention into `resolvers/` (not under `build/`)
3. Upon the next `amplify api gql-compile` or `amplify push`, your custom resolver function will be placed in the desired slot within the resolver sequence

```console
amplify/backend/api/<resource_name>
├── build
│   ├── ...
│   ├── resolvers
│   │   ├── ...
│   │   ├── Mutation.createTodo.postAuth.1.req.vtl # Amplify-generated resolvers
│   │   └── ...
|   ...
├── resolvers
│   └── Mutation.createTodo.postAuth.2.req.vtl # Custom resolver slotted in after postAuth.1 resolver
```

For example, a resolver function file named `Mutation.createTodo.postAuth.2.req.vtl` will be slotted in right after the `Mutation.createTodo.postAuth.1.req.vtl` resolver.
Review the [Resolver Mapping Template Programming Guide](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-programming-guide.html) to learn more about the VTL template.

### Supported resolver slots

#### Query

| Sequence | Slot name | Description |
| -------- | ------------ | ------------------------------------------------------------------------------------------ |
| 1 | init | Initial resolvers that are run. Usually used for initializing default values. |
| 2 | preAuth | Resolvers that are intended to run before authorization rule checks are applied. |
| 3 | auth | Resolvers that implement authorization rule checks. |
| 4 | postAuth | Resolvers that are run after authorization rule checks. |
| 5 | preDataLoad | Resolvers to configure values to make a request to the data source. |
| 6 | postDataLoad | Resolvers for post-processing after request to data source. |
| 7 | finish | Final set of resolvers before response is returned to client. Typically used for clean-up. |

#### Mutation

| Sequence | Slot name | Description |
| -------- | ---------- | ------------------------------------------------------------------------------------------ |
| 1 | init | Initial resolvers that are run. Usually used for initializing default values. |
| 2 | preAuth | Resolvers that are intended to run before authorization rule checks are applied. |
| 3 | auth | Resolvers that implement authorization rule checks. |
| 4 | postAuth | Resolvers that are run after authorization rule checks. |
| 5 | preUpdate | Resolvers to configure values to make a request to the data source. |
| 6 | postUpdate | Resolvers for post-processing after request to data source. |
| 7 | finish | Final set of resolvers before response is returned to client. Typically used for clean-up. |

#### Subscription

| Sequence | Slot name | Description |
| -------- | ------------ | -------------------------------------------------------------------------------- |
| 1 | init | Initial resolvers that are run. Usually used for initializing default values. |
| 2 | preAuth | Resolvers that are intended to run before authorization rule checks are applied. |
| 3 | auth | Resolvers that implement authorization rule checks. |
| 4 | postAuth | Resolvers that are run after authorization rule checks. |
| 5 | preSubscribe | Resolver slot that executes after auth but before the subscription returns. |