kgriffs/uvicorn (forked from encode/uvicorn)

uvicorn

A lightning-fast asyncio server for Python 3.


Installation

Install using pip:

pip install uvicorn

Examples

Hello, world...

app.py:

def hello_world(message):
    content = b'<html><h1>Hello, world</h1></html>'
    response = {
        'status': 200,
        'headers': [
            [b'content-type', b'text/html'],
        ],
        'content': content
    }
    message['reply_channel'].send(response)

Run the server:

uvicorn app:hello_world

Using async...

import asyncio


async def hello_world(message):
    await asyncio.sleep(1)
    content = b'<html><h1>Hello, world</h1></html>'
    response = {
        'status': 200,
        'headers': [
            [b'content-type', b'text/html'],
        ],
        'content': content
    }
    message['reply_channel'].send(response)

Run the server:

uvicorn app:hello_world

Discussion on django-dev.

The server is implemented as a Gunicorn worker class that interfaces with an ASGI Consumer callable, rather than a WSGI callable.
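The consumer contract can be illustrated with a plain callable. This is a minimal sketch: the message keys used here ('path', 'reply_channel') follow the Channels-style message format and are assumptions for illustration, not an authoritative spec.

```python
# A minimal sketch of an ASGI Consumer, assuming Channels-style message
# keys: the server passes a dict describing the request, and the consumer
# replies by sending a response dict over 'reply_channel'.
def echo_path(message):
    path = message.get('path', '/')
    message['reply_channel'].send({
        'status': 200,
        'headers': [
            [b'content-type', b'text/plain'],
        ],
        'content': ('Requested: %s' % path).encode(),
    })
```

Because the consumer is just a callable taking a message dict, it can be exercised directly in tests with a stub reply channel, no running server required.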

We use a couple of packages from MagicStack in order to achieve an extremely high-throughput and low-latency implementation:

  • uvloop as the event loop policy.
  • httptools as the HTTP request parser.
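Selecting uvloop as the event loop policy takes a single call at startup. A minimal sketch, with the import guarded since uvloop is an optional third-party package:

```python
import asyncio

try:
    import uvloop  # third-party: pip install uvloop
    # Replace the default asyncio event loop policy with uvloop's.
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
except ImportError:
    # Fall back to the stdlib event loop if uvloop is unavailable.
    pass

# Loops created after this point come from whichever policy is installed.
loop = asyncio.new_event_loop()
```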

You can use uvicorn to interface with either a traditional synchronous application codebase, or an asyncio application codebase.

These are the same packages used by the Sanic web framework.

Notes

  • I've modified the ASGI consumer contract slightly, to allow coroutine functions. This provides a nicer interface for asyncio implementations. It's not strictly necessary to make this change as it's possible to instead have the application be responsible for adding a new task to the event loop.
  • Streaming responses are supported, using "Response Chunk" ASGI messages.
  • Streaming requests are not currently supported.
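A streamed response under this contract might look like the sketch below. The 'more_content' flag marking chunk continuation is an assumption modeled on the Channels message format:

```python
import asyncio

async def stream_hello(message):
    channel = message['reply_channel']
    # Initial 'Response' message; 'more_content' signals chunks follow.
    channel.send({
        'status': 200,
        'headers': [
            [b'content-type', b'text/plain'],
        ],
        'content': b'chunk 0\n',
        'more_content': True,
    })
    for i in (1, 2):
        await asyncio.sleep(0)  # yield to the event loop between chunks
        # 'Response Chunk' message; the final chunk clears 'more_content'.
        channel.send({
            'content': ('chunk %d\n' % i).encode(),
            'more_content': i < 2,
        })
```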

Comparative performance vs Meinheld

Using wrk -d20s -t10 -c200 http://127.0.0.1:8080/ on a 2013 MacBook Air...

Server     Requests/sec   Avg latency
--------   ------------   -----------
Uvicorn    ~34,000        ~6ms
Meinheld   ~16,000        ~12ms

ASGI Consumers vs Channels

This worker class interfaces directly with an ASGI Consumer.

This is in contrast to Django Channels, where server processes communicate with worker processes via an intermediary channel layer.
