Leveraging EventBridge API Destinations to augment your application tracking
Often in the applications we develop, there is a need for API integration with an array of SaaS backend applications that serve a multitude of purposes, including tracking user activity and delivering events. These calls are often made in a fire-and-forget fashion, whereby the delivery of the event has no effect on the actual operation of the application logic. For instance, you might want to record a user sign-up event in an event tracking SaaS while making sure that it never blocks the execution of the actual sign-up operation.
Cue EventBridge API Destinations. This functionality allows you to forward events to any HTTP API, enabling you to route events between AWS services, partner applications supported by AWS, and custom applications outside of AWS that accept events delivered via HTTP. API Destinations also supports integration with other serverless AWS services that can act on the result of the delivery.
There are several useful features of AWS EventBridge API Destinations that can help with integrating against external APIs.
- Input Transformation: EventBridge rules natively support manipulating the input event in order to deliver the correct payload to the final API. This is done through JSON paths, which allow you to select parts of the original event to be delivered downstream (see the sketch after this list).
- Rate Limiting and Retry: Using API Destinations gives you automated retry and rate limiting built into the downstream API call. Retries are done using exponential backoff with jitter, which is typically the recommended way to manage these fire-and-forget calls. Rate limiting also helps to manage the use of the endpoint and keeps the number of invocations within service limits, something you would have to implement independently if you were making the call in application code.
- Partner Integration: API Destinations offers first-class support for some commonly used integrations with AWS Partners, which can reduce integration time altogether and get you on your way faster.
- Built-In Auth Management: Authentication to the endpoint is handled natively by AWS, with the underlying secret managed according to recommended best practices in AWS Secrets Manager.
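To make the input transformation point concrete, here is a minimal sketch of reshaping an event before delivery. The target payload field names here are assumptions for illustration only.

import { EventField, RuleTargetInput } from 'aws-cdk-lib/aws-events';

// Select parts of the original event with JSON paths and assemble the
// payload shape the downstream API expects. The field names
// (eventName, userId, occurredAt) are hypothetical.
const transformedInput = RuleTargetInput.fromObject({
  eventName: EventField.fromPath('$.detail.event'),
  userId: EventField.fromPath('$.detail.properties.userId'),
  occurredAt: EventField.time, // the event's own timestamp ($.time)
});

This transformedInput can then be passed as the event property of a rule target, similar to the fromEventPath call we will use in the walkthrough below.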
That being said, there are some drawbacks to using API Destinations.
- Lack of logging visibility: There is no direct logging or debugging visibility, though this can be improved by setting up some additional resources, as I will illustrate in the example below.
- 5 second timeout: There is a 5 second timeout on the API call when delivering the event, so this might not be ideal for API endpoints that you know to be long-running. To interact with those endpoints, consider using a Lambda function instead, as it can also receive events from EventBridge and integrates with other AWS services such as Secrets Manager (a sketch follows below).
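If you do need to call a slow endpoint, a minimal sketch of the Lambda alternative might look like the following. The function name, asset path, and rule pattern are assumptions for illustration.

import { Duration } from 'aws-cdk-lib';
import { Rule } from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';

// A Lambda function can call the slow endpoint itself and is not bound
// by the 5 second API Destination delivery timeout.
const slowApiCaller = new lambda.Function(this, 'SlowApiCaller', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('lambda/slow-api-caller'), // hypothetical asset path
  timeout: Duration.minutes(5),
});

// Attach it as a rule target, just as you would an API Destination.
new Rule(this, 'SlowApiRule', {
  eventPattern: { source: ['userService.tracking'] }, // hypothetical pattern
}).addTarget(new targets.LambdaFunction(slowApiCaller));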
Let’s run through a quick example of how to implement API Destinations using IaC via the AWS CDK.
import { SecretValue, aws_sqs } from 'aws-cdk-lib';
import {
  ApiDestination,
  Authorization,
  Connection,
  HttpMethod,
  HttpParameter,
  Rule,
  RuleTargetInput,
} from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';

// The imports above cover all the snippets in this walkthrough.
const connection = new Connection(this, 'apiConnection', {
  connectionName: 'exampleApiConnection',
  description: 'Connection to Example API',
  authorization: Authorization.apiKey(
    'x-api-key', // Replace this with your API key header
    SecretValue.secretsManager('your-secret-arn-here'),
  ),
  headerParameters: {
    'Content-Type': HttpParameter.fromString('application/json'),
    accept: HttpParameter.fromString('application/json'),
  },
  queryStringParameters: {
    projectId: HttpParameter.fromString('abc123'),
  },
});
In this code block, we create a Connection, which encapsulates the authorization, header and query parameters that will be used to call the API endpoint. In addition to using apiKey in the Authorization, you can use Basic Auth or OAuth as well. If your API does not need authorization, you can use apiKey and specify some redundant header instead, as the authorization field in this object is not nullable. Take note that doing this will still create the Secret, whose cost is already included in the price of the API Destination.
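For reference, a minimal sketch of the Basic Auth variant might look like this; the username and secret ARN are placeholders.

// Hypothetical alternative: HTTP Basic Auth instead of an API key.
const basicAuthConnection = new Connection(this, 'basicAuthConnection', {
  authorization: Authorization.basic(
    'api-user', // placeholder username
    SecretValue.secretsManager('your-basic-auth-secret-arn'),
  ),
});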
const apiDestination = new ApiDestination(this, 'ExampleEventApiDestination', {
  connection,
  endpoint: 'https://api.example.com/report',
  apiDestinationName: 'ExampleReportingAPI',
  httpMethod: HttpMethod.POST,
  rateLimitPerSecond: 100,
});
Here we define the actual API Destination and the parameters required to call it. Set the rate limit here per the documentation provided by your API provider; if the rate limit is reached, the remaining requests will be queued and sent later.
const rule = new Rule(this, 'ExampleEventRule', {
  ruleName: 'ExampleEventRule',
  eventPattern: {
    source: ['userService.tracking'],
    detailType: ['user_activity'],
  },
});

const dlq = new aws_sqs.Queue(this, 'ExampleEventDeadLetterQueue', {
  queueName: 'ExampleEventDeadLetterQueue',
});

rule.addTarget(
  new targets.ApiDestination(apiDestination, {
    event: RuleTargetInput.fromEventPath('$.detail'),
    deadLetterQueue: dlq,
    retryAttempts: 1,
  }),
);
Now we define the rule and add the event pattern that we will use to filter the events coming into the event bus for delivery downstream. We then add a target to the rule and define how the event payload should be shaped, in this case forwarding only the detail portion. We also set the number of retry attempts as well as a dead letter queue. Remember what we said earlier about not having visibility in the logs for debugging? This DLQ is essential for us to be able to view and act upon the messages that failed to send and have exhausted the maximum number of retries. These events will be sent to the queue with additional information about their state alongside the actual payload. This information includes specific error codes as well as the error returned by the downstream service, which can help in debugging the failure.
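As a rough sketch of that debugging workflow, here is how you might poll the DLQ with the AWS SDK for JavaScript v3 and inspect the failure metadata. The function name is hypothetical.

import { ReceiveMessageCommand, SQSClient } from '@aws-sdk/client-sqs';

const sqs = new SQSClient({});

async function inspectFailedDeliveries(queueUrl: string): Promise<void> {
  const { Messages } = await sqs.send(
    new ReceiveMessageCommand({
      QueueUrl: queueUrl,
      MessageAttributeNames: ['All'], // include the failure metadata
      MaxNumberOfMessages: 10,
    }),
  );
  for (const message of Messages ?? []) {
    // EventBridge attaches attributes such as ERROR_CODE and ERROR_MESSAGE
    // describing why delivery failed; the body is the original event payload.
    console.log(
      message.MessageAttributes?.ERROR_CODE?.StringValue,
      message.MessageAttributes?.ERROR_MESSAGE?.StringValue,
      message.Body,
    );
  }
}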
Here is an example of an event that will match the filter to be delivered downstream.
{
  "version": "0",
  "detail-type": "user_activity",
  "source": "userService.tracking",
  "account": "123456789012",
  "time": "2024-01-02T05:24:21Z",
  "region": "ap-southeast-1",
  "resources": [],
  "detail": [
    {
      "event": "user_login",
      "properties": {
        "orgId": "88648157-106d-4016-8b65-7fb6a0eefe37",
        "loginLocation": "US",
        "time": "1704167079",
        "userId": "a14f19ef-af8c-4834-829c-8ce63a1b611c",
        "method": "website"
      }
    }
  ]
}
The event filter in this case will match on detail-type and source and then forward the entire detail downstream to the API for processing. This use case would be one where the backend is sending information about user activity on the platform to a backend tracking service, such as one used to manage the user acquisition funnel.
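For completeness, here is a minimal sketch of how the application side might publish such an event with the AWS SDK for JavaScript v3. The function name is hypothetical, and the detail payload is a simplified object form of the sample above.

import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const eventBridge = new EventBridgeClient({});

// Fire-and-forget publish of a tracking event onto the default event bus.
async function publishLoginEvent(userId: string): Promise<void> {
  await eventBridge.send(
    new PutEventsCommand({
      Entries: [
        {
          Source: 'userService.tracking',
          DetailType: 'user_activity',
          Detail: JSON.stringify({
            event: 'user_login',
            properties: { userId, method: 'website' },
          }),
        },
      ],
    }),
  );
}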
If you add another API Destination, you can use it as an additional target to send the same event to another service, such as an auditing service that monitors user activity for abnormalities.
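That could look as simple as the following, where auditApiDestination stands in for a hypothetical second API Destination defined the same way as the first.

// Fan the same matched event out to a second, hypothetical destination.
rule.addTarget(
  new targets.ApiDestination(auditApiDestination, {
    event: RuleTargetInput.fromEventPath('$.detail'),
  }),
);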
Another functionality to note in this case is that we could also create an archive of all events sent to the event bus through EventBridge Archive, such that everything can be audited without the need to invoke any downstream services. The data can be exported later for analysis.
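A minimal sketch of such an archive in CDK might look like this; the archive pattern and retention period are assumptions.

import { Duration } from 'aws-cdk-lib';
import { Archive, EventBus } from 'aws-cdk-lib/aws-events';

// Archive every tracking event on the default bus for later replay or export.
const defaultBus = EventBus.fromEventBusName(this, 'DefaultBus', 'default');
new Archive(this, 'UserActivityArchive', {
  sourceEventBus: defaultBus,
  eventPattern: { source: ['userService.tracking'] }, // assumption: archive only tracking events
  retention: Duration.days(365), // assumption: keep one year of events
});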
In both cases, you would want something that operates silently in the background without breaking the user login flow or making it any longer. By utilising event buses in EventBridge together with API Destinations, you can now handle incoming information and send it along to backend services. This is a good example of how we can incrementally implement and convert more of our systems to use serverless event-driven architecture for cost optimization and a smoother user experience.