
Efficient Article Summarization with QStash: Handling API Rate Limits and Parallel Processing

Abdullah Enes Gules, Software Engineer (Guest Author)

In this article, we'll build an application to summarize hundreds of online articles at once. To create these summaries, we'll use QStash's LLM integration to call an OpenAI LLM. This not only allows us to bypass platform-specific function execution limits but also massively reduces our billed function execution duration.

You'll learn how to work around OpenAI's API rate limits, which could otherwise be a problem when making many calls in parallel. The result will be hundreds of neatly summarized online articles created at the same time, ready for you to read or process further.

Motivation

Almost all publicly available APIs have a rate limit applied to them: a maximum number of requests you can make in a given time frame. Depending on the API, hitting those limits can be surprisingly easy. For example, Twitter is known for having very restrictive API rate limits, even on expensive premium tiers of its API.

OpenAI, the service we'll use to summarize articles, implements a similar rate limit, albeit a less restrictive one than Twitter's. If you depend on a rate-limited API for your service, you're forced to implement some kind of workaround (e.g. throttling) that leads to a more complex codebase.

With Upstash QStash, a message scheduler for the serverless environment, we don't need to worry about throttling mechanisms under high API load. Our API requests are automatically retried when we hit rate limits, making sure every request eventually gets processed.

Prerequisites

To follow along, you'll need:

  • A basic understanding of Python and Django.
  • An Upstash account to obtain your QStash token and Redis URL.
  • A Vercel account to deploy the web application.
  • An OpenAI API key to use for summarization.

In this article, we use OpenAI's GPT-3.5 model for summarization. You can also use Upstash-hosted models, as sketched below.
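
For reference, switching to an Upstash-hosted model only changes the provider helper and the model name in the request body. Here's a minimal sketch, assuming the qstash SDK's upstash provider; the model name and callback URL are illustrative placeholders, so check the QStash docs for the currently hosted models:

from qstash import QStash
from qstash.chat import upstash
 
qstash_client = QStash("<QSTASH_TOKEN>")
 
# Same QStash LLM API as the OpenAI calls later in this article, but no
# provider API key is needed because the model is hosted by Upstash.
qstash_client.message.publish_json(
    api={"name": "llm", "provider": upstash()},
    body={
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",  # assumed model name
        "messages": [{"role": "user", "content": "Summarize: <article text>"}],
    },
    callback="https://your-app.vercel.app/callback",  # placeholder URL
)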

Project Overview

The project consists of two main components:

  1. A Django web application that receives article summaries and saves them to our Redis database. We'll deploy this application to Vercel.

  2. A Python script that sends articles to OpenAI for summarization using QStash's LLM API support. The script will iterate over 1000 articles stored in Redis and enqueue each one for summarization; the summaries are then saved back to Redis by our callback view. We'll use QStash's queue system to handle the parallel processing of these tasks and set the Retry-After header to handle API rate limits.

Project Setup

Install Necessary Packages

Install the QStash Python SDK, Upstash Redis, Django, and python-dotenv using pip:

pip install qstash upstash-redis django python-dotenv

The qstash package is used to interact with QStash services, upstash-redis communicates with our database, django powers the web application, and python-dotenv loads environment variables from a .env file.

To use a Redis database, create a free account on Upstash and set up a database by following the instructions in the Upstash Redis documentation; you'll need its REST URL and token in a moment.
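
A quick way to verify your credentials is a one-off round trip. A minimal sketch, assuming the UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN variables (which we add to a .env file below) are set:

from dotenv import load_dotenv
from upstash_redis import Redis
 
load_dotenv()
 
# Redis.from_env() reads UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN
redis = Redis.from_env()
redis.set("ping", "pong")
print(redis.get("ping"))  # prints "pong" if the credentials work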

Create a Django Project

First, we need to set up a new Django project. Navigate to where you'd like this project to live and run:

django-admin startproject article_summarizer
cd article_summarizer
django-admin startapp summarizer

Configure Django Settings

In our settings.py, we'll add summarizer to INSTALLED_APPS and set APPEND_SLASH to False. Also, add .vercel.app and 127.0.0.1 to ALLOWED_HOSTS to allow requests from Vercel and local development:

INSTALLED_APPS = [
    ...
    'summarizer',
]
 
ALLOWED_HOSTS = ['.vercel.app', '127.0.0.1', 'localhost']
 
APPEND_SLASH = False

Add QStash configurations, OpenAI API key, and other environment variables to a .env file in the project root:

OPENAI_API_KEY=your_openai_api_key
QSTASH_TOKEN=your_qstash_token
DEPLOYMENT_URL=your_deployment_url
UPSTASH_REDIS_REST_URL=your_upstash_redis_rest_url
UPSTASH_REDIS_REST_TOKEN=your_upstash_redis_rest_token

Load the environment variables into the project's settings.py:

import os
from dotenv import load_dotenv
 
load_dotenv()
 
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
QSTASH_TOKEN = os.getenv('QSTASH_TOKEN')
DEPLOYMENT_URL = os.getenv('DEPLOYMENT_URL')
UPSTASH_REDIS_REST_URL = os.getenv('UPSTASH_REDIS_REST_URL')
UPSTASH_REDIS_REST_TOKEN = os.getenv('UPSTASH_REDIS_REST_TOKEN')

Finally, add the following line at the end of the wsgi.py file to expose the application to Vercel:

app = application
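
For reference, the resulting article_summarizer/wsgi.py should look roughly like this (standard Django boilerplate plus our one extra line):

import os
 
from django.core.wsgi import get_wsgi_application
 
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'article_summarizer.settings')
 
application = get_wsgi_application()
 
# Vercel's Python runtime serves the WSGI application exposed as `app`
app = application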

Implementation

1. Creating a Django View to Use as a Callback URL

We'll create a Django view to use as our callback URL. This view handles the summary data that QStash sends back once OpenAI responds and saves it to our Redis database using the upstash_redis package. We also add the csrf_exempt decorator so that QStash can POST to the view without a CSRF token.

In the view, we decode the base64-encoded body of the callback, extract the summary from the OpenAI response, and save it to Redis using the article ID as the key.

import base64
import json
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from upstash_redis import Redis
 
@csrf_exempt
def redis_callback_view(request):
    if request.method == 'POST':
        # Parse the request body
        data = json.loads(request.body)
 
        # Decode the base64-encoded 'body' field from the callback
        encoded_body = data.get('body', '')
        decoded_body = base64.b64decode(encoded_body).decode('utf-8')
 
        # Parse the decoded body to JSON format
        decoded_data = json.loads(decoded_body)
 
        # Extract the summary from the decoded OpenAI response
        summary = decoded_data['choices'][0]['message']['content']
 
        # Extract the article ID from the query parameters
        article_id = request.GET.get('article_id')
        
        # Save the summary to Redis
        redis = Redis.from_env()
        redis.set(f"summary_{article_id}", summary)
 
        return JsonResponse({'status': 'Summary saved to Redis'})
    
    return JsonResponse({'error': 'Invalid request'}, status=400)
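
To try the view before wiring up QStash, you can simulate a callback locally. A minimal sketch, assuming the URL patterns from the next two steps are in place, the dev server is running via python manage.py runserver, and the requests package is installed; the payload mirrors the shape of a QStash callback, which base64-encodes the upstream response in its body field:

import base64
import json
 
import requests
 
# A fake OpenAI response, shaped like the real one
openai_response = {"choices": [{"message": {"content": "A short test summary."}}]}
payload = {"body": base64.b64encode(json.dumps(openai_response).encode()).decode()}
 
resp = requests.post(
    "http://127.0.0.1:8000/summarizer/redis-callback?article_id=1",
    json=payload,
)
print(resp.json())  # {'status': 'Summary saved to Redis'}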

2. Adding the URL Pattern for the Callback View

We'll create a summarizer/urls.py file in the summarizer app and add the URL pattern for the callback view:

from django.urls import path
from .views import redis_callback_view
 
urlpatterns = [
    path('redis-callback', redis_callback_view, name='redis_callback'), 
]

3. Update the Project's URL Configuration

We will include the URL pattern for the summarizer app in the project's article_summarizer/urls.py file:

from django.contrib import admin
from django.urls import path, include
 
urlpatterns = [
    path('admin/', admin.site.urls),
    path('summarizer/', include('summarizer.urls')),
]

4. Deploy the Django Application

We will use Vercel to deploy our application. Before deploying, we need to create a vercel.json file in the project root with the following configuration:

{
  "builds": [
    {
      "src": "article_summarizer/wsgi.py",
      "use": "@vercel/python",
      "config": { "maxLambdaSize": "15mb", "runtime": "python3.9" }
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "article_summarizer/wsgi.py"
    }
  ]
}

Then we'll create a requirements file to specify our dependencies. Run the following command to generate requirements.txt:

pip freeze > requirements.txt
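
Note that pip freeze captures everything installed in your current environment. If the file grows large (recall the 15 MB lambda size limit in vercel.json), a hand-written requirements.txt listing just our four direct dependencies also works:

django
python-dotenv
qstash
upstash-redis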

We are now ready to deploy!

The easiest way to deploy the app is to push the Django project to a GitHub repository, create a new project on Vercel, and connect it to that repository; Vercel handles the build for us. Once the deployment completes, we get a deployment URL to use as the callback URL. Finally, add the variables from our .env file under the project's Settings -> Environment Variables and redeploy from the Deployments tab so they take effect.
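
Alternatively, if you prefer the command line, the Vercel CLI can deploy straight from the project root (this assumes Node.js is installed):

npm install -g vercel
vercel        # first run links the project and creates a preview deployment
vercel --prod # deploy to production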

5. Creating the Queue and Sending Summarization Requests

We'll create a queue with parallelism set to 2, meaning two summarization tasks can run concurrently. Then, we'll iterate over 1000 articles stored in Redis (see the seeding sketch below), sending each one to OpenAI for summarization. We set the Retry-After header to 60 seconds to handle OpenAI's per-minute rate limit, and we point the callback URL at the /summarizer/redis-callback route of our deployed Django application, with the article ID as a query parameter.
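
The script assumes the articles are already stored in Redis under the keys article_1 through article_1000. If you need to seed them, a minimal sketch could look like this; the article texts here are hypothetical placeholders for whatever source you're working with:

from dotenv import load_dotenv
from upstash_redis import Redis
 
load_dotenv()
redis = Redis.from_env()
 
# Replace with your real article texts
articles = ["Full text of article one...", "Full text of article two..."]
 
for i, text in enumerate(articles, start=1):
    redis.set(f"article_{i}", text)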

from upstash_redis import Redis
from qstash import QStash
from qstash.chat import openai
from dotenv import load_dotenv
import os
 
load_dotenv()
redis = Redis.from_env()
qstash_client = QStash(os.getenv("QSTASH_TOKEN"))
 
qstash_client.queue.upsert("articles-queue", parallelism=2)
 
for i in range(1, 1001):
 
    article = redis.get(f"article_{i}")
 
    result = qstash_client.message.enqueue_json(
        queue="articles-queue",
        api={"name": "llm", "provider": openai(os.getenv("OPENAI_API_KEY"))},
        body={
            "model": "gpt-3.5-turbo",
            "messages": [
                {
                    "role": "user",
                    "content": f"Summarize the following article: {article} \n in 50-100 words, highlighting the main points and key findings. Please use your own words and avoid copying and pasting from the original text. If the article has multiple sections or parts, focus on the most important and relevant information. Thank you!",
                }
            ],
        },
        callback=f'{os.getenv("DEPLOYMENT_URL")}/summarizer/redis-callback?article_id={i}',
        headers={"Retry-After": "60"},
    )
 
    print(result)  # log each enqueue response
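
Once the queue drains, the summaries land in Redis under summary_<article_id> keys, written by our callback view. A quick sketch to check progress, using the same .env as above:

from dotenv import load_dotenv
from upstash_redis import Redis
 
load_dotenv()
redis = Redis.from_env()
 
# Count how many of the 1000 summaries have arrived so far
done = sum(1 for i in range(1, 1001) if redis.get(f"summary_{i}") is not None)
print(f"{done}/1000 summaries ready")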

Conclusion

And that's it! We now have an app that can summarize hundreds of web articles reliably and quickly, using parallelism and automatic retries when we hit rate limits. As a bonus, you can extend the app to send each summary straight to your email inbox.

For more details, explore the Upstash QStash documentation. You can find the complete source code for this project in the GitHub repository. For any questions or feedback, feel free to reach out to me on LinkedIn.