Asynchronous HTTP client/server framework for asyncio and Python

Key Features

  • Supports both the client and server side of the HTTP protocol.
  • Supports both client and server WebSockets out of the box and avoids Callback Hell.
  • Provides a web server with middlewares and pluggable routing (a middleware sketch follows the server example below).

Getting started

Client

To get something from the web:

import aiohttp
import asyncio

async def main():

    async with aiohttp.ClientSession() as session:
        async with session.get('http://python.org') as response:

            print("Status:", response.status)
            print("Content-type:", response.headers['content-type'])

            html = await response.text()
            print("Body:", html[:15], "...")

asyncio.run(main())

This prints:

Status: 200
Content-type: text/html; charset=utf-8
Body: <!doctype html> ...

Coming from requests? Read why we need so many lines.

Server

An example using a simple server:

# examples/server_simple.py
from aiohttp import web

async def handle(request):
    name = request.match_info.get('name', "Anonymous")
    text = "Hello, " + name
    return web.Response(text=text)

async def wshandle(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)

    async for msg in ws:
        if msg.type == web.WSMsgType.TEXT:
            await ws.send_str("Hello, {}".format(msg.data))
        elif msg.type == web.WSMsgType.BINARY:
            await ws.send_bytes(msg.data)
        elif msg.type == web.WSMsgType.CLOSE:
            break
            break

    return ws


app = web.Application()
app.add_routes([web.get('/', handle),
                web.get('/echo', wshandle),
                web.get('/{name}', handle)])

if __name__ == '__main__':
    web.run_app(app)
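
The middlewares mentioned in the feature list hook into request handling roughly like this (a short sketch using the web.middleware decorator; the header name is illustrative):

from aiohttp import web

@web.middleware
async def add_server_header(request, handler):
    # runs around every handler: adjust the response before it is sent
    response = await handler(request)
    response.headers['Server'] = 'aiohttp-example'
    return response

app = web.Application(middlewares=[add_server_header])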

Documentation

https://aiohttp.readthedocs.io/

Demos

https://github.com/aio-libs/aiohttp-demos

External links

Feel free to make a Pull Request adding your link to these pages!

Communication channels

aio-libs discourse group: https://aio-libs.discourse.group

Gitter chat: https://gitter.im/aio-libs/Lobby

We support Stack Overflow. Please add the aiohttp tag to your question there.

Requirements

Optionally you may install the cChardet and aiodns libraries (highly recommended for the sake of speed).

License

aiohttp is offered under the Apache 2 license.

Keepsafe

The aiohttp community would like to thank Keepsafe (https://www.getkeepsafe.com) for its support in the early days of the project.

Source code

The latest developer version is available in a GitHub repository: https://github.com/aio-libs/aiohttp

Benchmarks

If you are interested in efficiency, the AsyncIO community maintains a list of benchmarks on the official wiki: https://github.com/python/asyncio/wiki/Benchmarks

Comments
  • aiohttp 3.0 release


    I'd like to name the next release 3.0; 3.0 means a major version bump.

    Let's drop Python 3.4 and use async/await everywhere. Another important question: which 3.5 release do we cut off at?

    Honestly, I want to support 3.5.3+ only: that release fixes a well-known ugly bug in asyncio.get_event_loop(). Debian stable ships Python 3.5.3; not sure about RHEL. All other distributions with a faster release cycle support at least 3.5.3 too, or even 3.6.

    The transition should not be done all at once.

    1. Let's pin the minimal required Python version in setup.py first.
    2. Translate test suite to use async/await.
    3. Modify aiohttp itself to use new syntax.

    Every step except the first could be done in a long series of PRs, part by part.

    BTW, at the end I hope to get a minor performance increase :)
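
    A minimal sketch of step 1, assuming a classic setuptools-based setup.py (the metadata shown is illustrative):

    # setup.py (sketch): refuse installation on unsupported Pythons
    from setuptools import setup

    setup(
        name="aiohttp",
        version="3.0.0",
        python_requires=">=3.5.3",  # enforced at install time by pip >= 9.0
    )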

  • aiohttp ignoring SSL_CERT_DIR and or SSL_CERT_FILE environment vars. Results in [SSL: CERTIFICATE_VERIFY_FAILED]


    Long story short

    The CA file works with cURL and Python Requests, but not with aiohttp, when using the SSL_CERT_DIR and/or SSL_CERT_FILE environment variables.

    Our environment uses its own root CA to decode/encode HTTPS API requests/responses, providing a short-lived cache that prevents excessive external requests.

    The environment has the following set:

    $ (set -o posix; set) | egrep 'SSL|_CA'
    CURL_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    REQUESTS_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    SSL_CERT_DIR=/home/creslin/poo/freqcache/cert/
    SSL_CERT_FILE=/home/creslin/poo/freqcache/cert/ca.pem
    

    The ca.pem can be used successfully by cURL, with both a positive and a negative test shown:

    curl --cacert /home/creslin/poo/freqcache/cert/ca.pem https://api.binance.com/api/v1/time
    {"serverTime":1533719563552}
    
    curl --cacert /home/creslin/NODIRHERE/ca.pem https://api.binance.com/api/v1/time
    curl: (77) error setting certificate verify locations:
      CAfile: /home/creslin/NODIRHERE/ca.pem
      CApath: /etc/ssl/certs
    

    A simple Python requests script, req.py, also works as expected in positive and negative tests:

    cat req.py 
    import requests
    req=requests.get('https://api.binance.com/api/v1/time', verify=True)
    print(req.content)
    
    CURL_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    REQUESTS_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    SSL_CERT_DIR=/home/creslin/poo/freqcache/cert/
    SSL_CERT_FILE=/home/creslin/poo/freqcache/cert/ca.pem
    
    python3 req.py 
    b'{"serverTime":1533720141278}'
    
    CURL_CA_BUNDLE=/
    REQUESTS_CA_BUNDLE=/
    SSL_CERT_DIR=/
    SSL_CERT_FILE=/
    
    python3 req.py 
    Traceback (most recent call last):
      File "/home/creslin/freqt .......
    
    ..... ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)
    

    Using asyncio/aiohttp, [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833) is always returned. The environment settings pointing to the ca.pem, shown to work for both cURL and requests, are seemingly ignored:

    CURL_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    REQUESTS_CA_BUNDLE=/home/creslin/poo/freqcache/cert/ca.pem
    SSL_CERT_DIR=/home/creslin/poo/freqcache/cert/
    SSL_CERT_FILE=/home/creslin/poo/freqcache/cert/ca.pem
    

    I have the test script a.py as follows:

    cat a.py 
    import aiohttp
    import ssl
    import asyncio
    import requests
    
    print("\n requests.certs.where", requests.certs.where())
    print("\n ssl version", ssl.OPENSSL_VERSION)
    print("\n ssl Paths", ssl.get_default_verify_paths() ,"\n")
    f = open('/home/creslin/poo/freqcache/cert/ca.crt', 'r') # check perms are ok
    f.close()
    
    async def main():
        session = aiohttp.ClientSession()
        async with session.get('https://api.binance.com/api/v1/time') as response:
            print(await response.text())
        await session.close()
    
    if __name__ == "__main__":
        loop = asyncio.get_event_loop()
        loop.run_until_complete(main())
    

    Which will always produce the failure - output in full:

     requests.certs.where /home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/certifi/cacert.pem
    
     ssl version OpenSSL 1.1.0g  2 Nov 2017
    
     ssl Paths DefaultVerifyPaths(cafile=None, capath='/home/creslin/poo/freqcache/cert/', openssl_cafile_env='SSL_CERT_FILE', openssl_cafile='/usr/lib/ssl/cert.pem', openssl_capath_env='SSL_CERT_DIR', openssl_capath='/usr/lib/ssl/certs') 
    
    Traceback (most recent call last):
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 822, in _wrap_create_connection
        return await self._loop.create_connection(*args, **kwargs)
      File "/usr/lib/python3.6/asyncio/base_events.py", line 804, in create_connection
        sock, protocol_factory, ssl, server_hostname)
      File "/usr/lib/python3.6/asyncio/base_events.py", line 830, in _create_connection_transport
        yield from waiter
      File "/usr/lib/python3.6/asyncio/sslproto.py", line 505, in data_received
        ssldata, appdata = self._sslpipe.feed_ssldata(data)
      File "/usr/lib/python3.6/asyncio/sslproto.py", line 201, in feed_ssldata
        self._sslobj.do_handshake()
      File "/usr/lib/python3.6/ssl.py", line 689, in do_handshake
        self._sslobj.do_handshake()
    ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "a.py", line 20, in <module>
        loop.run_until_complete(main())
      File "/usr/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
        return future.result()
      File "a.py", line 14, in main
        async with session.get('https://api.binance.com/api/v1/time') as response:
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/client.py", line 843, in __aenter__
        self._resp = await self._coro
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/client.py", line 366, in _request
        timeout=timeout
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 445, in connect
        proto = await self._create_connection(req, traces, timeout)
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 757, in _create_connection
        req, traces, timeout)
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 879, in _create_direct_connection
        raise last_exc
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 862, in _create_direct_connection
        req=req, client_error=client_error)
      File "/home/creslin/freqtrade/freqtrade_mp/freqtrade_technical_jul29/freqtrade/.env/lib/python3.6/site-packages/aiohttp/connector.py", line 827, in _wrap_create_connection
        raise ClientConnectorSSLError(req.connection_key, exc) from exc
    aiohttp.client_exceptions.ClientConnectorSSLError: Cannot connect to host api.binance.com:443 ssl:None [[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)]
    Unclosed client session
    client_session: <aiohttp.client.ClientSession object at 0x7fa6bf9de898>
    
    

    Expected behaviour

    aiohttp does not reject the server certificate.

    Actual behaviour

    SSL verification error

    Steps to reproduce

    Use your own CA root certificate to trust the HTTPS server.

    Your environment

    Ubuntu 18.04
    Python 3.6.5
    OpenSSL 1.1.0g 2 Nov 2017
    aiohttp 3.3.2
    requests 2.19.1
    certifi 2018.4.16
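
    A common workaround (not from the original report) is to build an ssl.SSLContext from the custom CA bundle and pass it to aiohttp explicitly, sidestepping whatever default certificate lookup goes wrong:

    import asyncio
    import ssl

    import aiohttp

    # path from the report; create_default_context() loads the given CA file
    CA_FILE = "/home/creslin/poo/freqcache/cert/ca.pem"

    async def main():
        ctx = ssl.create_default_context(cafile=CA_FILE)
        async with aiohttp.ClientSession() as session:
            async with session.get("https://api.binance.com/api/v1/time",
                                   ssl=ctx) as response:
                print(await response.text())

    asyncio.run(main())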

  • ASGI support?


    Long story short

    Currently, most asyncio-based web frameworks embed an HTTP server into the web framework. If HTTP/WS servers used the ASGI standard, it would be possible to run your app with different HTTP/WS servers.

    That is why I think it is important for aiohttp to add ASGI support.

    Expected behaviour

    I would like to write aiohttp app:

    from aiohttp import web
    
    async def hello(request):
        return web.Response(text="Hello world!")
    
    app = web.Application()
    app.add_routes([web.get('/', hello)])
    

    And run it with any ASGI compatible server:

    > daphne app:app
    
    > http -b get :8080/
    Hello world!
    

    Actual behaviour

    > daphne app:app
    2018-04-02 12:48:51,097 ERROR    Traceback (most recent call last):
      File "daphne/http_protocol.py", line 158, in process
        "server": self.server_addr,
      File "daphne/server.py", line 184, in create_application
        application_instance = self.application(scope=scope)
    TypeError: __call__() got an unexpected keyword argument 'scope'
    
    127.0.0.1:41828 - - [02/Apr/2018:12:48:51] "GET /" 500 452
    
    > http -b get :8080/
    <html>
      <head>
        <title>500 Internal Server Error</title>
      </head>
      <body>
        <h1>500 Internal Server Error</h1>
        <p>Daphne HTTP processing error</p>
        <footer>Daphne</footer>
      </body>
    </html>
    

    ASGI resources

    https://github.com/django/asgiref/blob/master/specs/asgi.rst - ASGI specification.

    https://github.com/django/asgiref/blob/master/specs/www.rst - ASGI-HTTP and ASGI-WebSocket protocol specifications.

    Example ASGI app:

    import json
    
    def app(scope):
        async def channel(receive, send):
            message = await receive()
    
            if scope['method'] == 'POST':
                response = message
            else:
                response = scope
    
            await send({
                'type': 'http.response.start',
                'status': 200,
                'headers': [
                    [b'Content-Type', b'application/json'],
                ],
            })
            await send({
                'type': 'http.response.body',
                'body': json.dumps(response, default=bytes.decode).encode(),
            })
            await send({
                'type': 'http.disconnect',
            })
        return channel
    
    > daphne app:app
    2018-03-31 22:28:10,823 INFO     Starting server at tcp:port=8000:interface=127.0.0.1
    2018-03-31 22:28:10,824 INFO     HTTP/2 support enabled
    2018-03-31 22:28:10,824 INFO     Configuring endpoint tcp:port=8000:interface=127.0.0.1
    2018-03-31 22:28:10,825 INFO     Listening on TCP address 127.0.0.1:8000
    127.0.0.1:43436 - - [31/Mar/2018:22:28:17] "GET /" 200 347
    127.0.0.1:43440 - - [31/Mar/2018:22:28:22] "POST /" 200 43
    127.0.0.1:43446 - - [31/Mar/2018:22:28:42] "POST /" 200 54
    
    > http -b get :8000/
    {
        "type": "http"
        "http_version": "1.1",
        "method": "GET",
        "path": "/",
        "query_string": "",
        "root_path": "",
        "scheme": "http",
        "headers": [
            ["host", "localhost:8000"],
            ["user-agent", "HTTPie/0.9.9"],
            ["accept-encoding", "gzip, deflate"],
            ["accept", "*/*"],
            ["connection", "keep-alive"]
        ],
        "client": ["127.0.0.1", 43360],
        "server": ["127.0.0.1", 8000],
    }
    
    > http -b -f post :8000/ foo=bar
    {
        "body": "foo=bar",
        "type": "http.request"
    }
    
    > http -b -j post :8000/ foo=bar
    {
        "body": "{\"foo\": \"bar\"}",
        "type": "http.request"
    }
    
  • Degrading performance over time...


    I wrote a quick AsyncScraper class below:

    import logging, datetime, time
    import aiohttp
    import asyncio
    import uvloop
    
    # asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
    
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    logger.addHandler(logging.StreamHandler())
    
    class AsyncScraper(object):
    	headers = {"User-Agent" : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36'}
    	def __init__(self, max_connections=1000, timeout=10):
    		self.max_connections = max_connections
    		self.timeout = timeout
    		
    	async def get_response(self, url, session):
    		with aiohttp.Timeout(timeout=self.timeout):
    			async with session.get(url, allow_redirects=True, headers=AsyncScraper.headers, timeout=self.timeout) as response:
    				try:
    					content = await response.text()
    					return {'error': "", 'status': response.status, 'url':url, 'content': content, 'timestamp': str(datetime.datetime.utcnow())}
    				except Exception as err:
    					return {'error': err, 'status': "", 'url':url, 'content': "", 'timestamp': str(datetime.datetime.utcnow())}
    				finally:
    					response.close()
    
    	def get_all(self, urls):
    		loop = asyncio.get_event_loop()
    		with aiohttp.ClientSession(loop=loop, connector=aiohttp.TCPConnector(keepalive_timeout=10, limit=self.max_connections, verify_ssl=False)) as session:
    			tasks = asyncio.gather(*[self.get_response(url, session) for url in urls], return_exceptions=True)
    			results = loop.run_until_complete(tasks)
    			return results
    
    def chunks(l, n):
    	for i in range(0, len(l), n):
    		yield l[i:i + n]
    
    def process_urls(urls, chunk_size=1000):
    	scraper = AsyncScraper()
    
    	results = []
    	t0 = time.time()
    	for i, urls_chunk in enumerate(chunks(sorted(set(urls)), chunk_size)):
    		t1 = time.time()
    		result = scraper.get_all(urls_chunk)
    		success_size = len( [_ for _ in result if ((isinstance(_, Exception) is False) and (_['status']==200)) ] )
    		results.extend(result)
    		logger.debug("batch {} => success: {} => iteration time: {}s =>  total time: {}s => total processed {}".format(i+1, success_size, time.time()-t1, time.time()-t0, len(results)))
    	return results
    

    and I've run into two main issues:

    1. If I pass in a flat list of URLs, say 100k (via the get_all method), I get flooded with errors:

      2017-04-17 15:50:53,541 - asyncio - ERROR - Fatal error on SSL transport
      protocol: <asyncio.sslproto.SSLProtocol object at 0x10d5439b0>
      transport: <_SelectorSocketTransport closing fd=612 read=idle write=<idle, bufsize=0>>
      Traceback (most recent call last):
        File "/Users/vgoklani/anaconda3/lib/python3.6/asyncio/sslproto.py", line 639, in _process_write_backlog
          ssldata = self._sslpipe.shutdown(self._finalize)
        File "/Users/vgoklani/anaconda3/lib/python3.6/asyncio/sslproto.py", line 151, in shutdown
          raise RuntimeError('shutdown in progress')
      RuntimeError: shutdown in progress

    2. I then batched the URLs into chunks of 1,000 and timed the response between batches. I was clearly able to measure the performance decay over time (see below). Moreover, the number of errors increased over time... What am I doing wrong?

      iteration 0 done in 16.991s
      iteration 1 done in 39.376s
      iteration 2 done in 35.656s
      iteration 3 done in 19.716s
      iteration 4 done in 29.331s
      iteration 5 done in 19.708s
      iteration 6 done in 19.572s
      iteration 7 done in 29.907s
      iteration 8 done in 23.379s
      iteration 9 done in 21.762s
      iteration 10 done in 22.091s
      iteration 11 done in 22.940s
      iteration 12 done in 31.285s
      iteration 13 done in 24.549s
      iteration 14 done in 26.297s
      iteration 15 done in 23.816s
      iteration 16 done in 29.094s
      iteration 17 done in 24.885s
      iteration 18 done in 26.456s
      iteration 19 done in 27.412s
      iteration 20 done in 29.969s
      iteration 21 done in 28.503s
      iteration 22 done in 28.699s
      iteration 23 done in 31.570s
      iteration 26 done in 31.898s
      iteration 27 done in 33.553s
      iteration 28 done in 34.022s
      iteration 29 done in 33.866s
      iteration 30 done in 36.351s
      iteration 31 done in 40.060s
      iteration 32 done in 35.523s
      iteration 33 done in 36.607s
      iteration 34 done in 36.325s
      iteration 35 done in 38.425s
      iteration 36 done in 39.106s
      iteration 37 done in 38.972s
      iteration 38 done in 39.845s
      iteration 39 done in 40.393s
      iteration 40 done in 40.734s
      iteration 41 done in 47.799s
      iteration 42 done in 43.070s
      iteration 43 done in 43.365s
      iteration 44 done in 42.081s
      iteration 45 done in 44.118s
      iteration 46 done in 44.955s
      iteration 47 done in 45.400s
      iteration 48 done in 45.987s
      iteration 49 done in 46.041s
      iteration 50 done in 45.899s
      iteration 51 done in 49.008s
      iteration 52 done in 49.544s
      iteration 53 done in 55.432s
      iteration 54 done in 52.590s
      iteration 55 done in 50.185s
      iteration 56 done in 52.858s
      iteration 57 done in 52.698s
      iteration 58 done in 53.048s
      iteration 59 done in 54.120s
      iteration 60 done in 54.151s
      iteration 61 done in 55.465s
      iteration 62 done in 56.889s
      iteration 63 done in 56.967s
      iteration 64 done in 57.690s
      iteration 65 done in 57.052s
      iteration 66 done in 67.214s
      iteration 67 done in 58.457s
      iteration 68 done in 60.882s
      iteration 69 done in 58.440s
      iteration 70 done in 60.755s
      iteration 71 done in 58.043s
      iteration 72 done in 65.076s
      iteration 73 done in 63.371s
      iteration 74 done in 62.800s
      iteration 75 done in 62.419s
      iteration 76 done in 61.376s
      iteration 77 done in 63.164s
      iteration 78 done in 65.443s
      iteration 79 done in 64.616s
      iteration 80 done in 69.544s
      iteration 81 done in 68.226s
      iteration 82 done in 78.050s
      iteration 83 done in 67.871s
      iteration 84 done in 69.780s
      iteration 85 done in 67.812s
      iteration 86 done in 68.895s
      iteration 87 done in 71.086s
      iteration 88 done in 68.809s
      iteration 89 done in 70.945s
      iteration 90 done in 72.760s
      iteration 91 done in 71.773s
      iteration 92 done in 72.522s

    The time here corresponds to the iteration time to process 1,000 URLs. Please advise. Thanks
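
    A commonly suggested pattern for this kind of workload (a sketch, not from the original report) is to reuse one ClientSession for all URLs and bound concurrency with a semaphore, instead of creating a new session per chunk:

    import asyncio

    import aiohttp

    async def fetch(session, sem, url):
        # the semaphore caps in-flight requests; the session is shared
        async with sem:
            async with session.get(url) as response:
                return await response.text()

    async def fetch_all(urls, limit=100):
        sem = asyncio.Semaphore(limit)
        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(
                *(fetch(session, sem, url) for url in urls),
                return_exceptions=True,
            )

    # results = asyncio.run(fetch_all(urls))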

  • Memory leak in request


    Hi all,

    Since I upgraded to 0.14.4 (from 0.9.0) I am experiencing memory leaks in a Dropbox API long-poller. It is a single process that spawns a few thousand greenlets. Each greenlet performs a request() that blocks for 30 seconds, then parses the response and dies. Then a new greenlet is spawned.

    I am running on python 3.4.0, Ubuntu 14.04. I use the connection pool feature, passing the same connector singleton to each .request() call.

    I played with tracemalloc, dumping an <N>.dump stats file every minute, and found that the number of response parser instances keeps increasing (look at the third line of each stat):

    [email protected]:/src# python3 -c "import pprint; import tracemalloc; pprint.pprint(tracemalloc.Snapshot.load('5.dump').statistics('lineno')[:3])"
    [<Statistic traceback=<Traceback (<Frame filename='/usr/lib/python3.4/ssl.py' lineno=648>,)> size=6130540 count=82650>,
     <Statistic traceback=<Traceback (<Frame filename='<frozen importlib._bootstrap>' lineno=656>,)> size=3679906 count=31688>,
     <Statistic traceback=<Traceback (<Frame filename='/usr/local/lib/python3.4/dist-packages/aiohttp/parsers.py' lineno=198>,)> size=2176408 count=4437>]
    [email protected]:/src# 
    [email protected]:/src# 
    [email protected]:/src# python3 -c "import pprint; import tracemalloc; pprint.pprint(tracemalloc.Snapshot.load('6.dump').statistics('lineno')[:3])"
    [<Statistic traceback=<Traceback (<Frame filename='/usr/lib/python3.4/ssl.py' lineno=648>,)> size=6130476 count=82649>,
     <Statistic traceback=<Traceback (<Frame filename='<frozen importlib._bootstrap>' lineno=656>,)> size=3679906 count=31688>,
     <Statistic traceback=<Traceback (<Frame filename='/usr/local/lib/python3.4/dist-packages/aiohttp/parsers.py' lineno=198>,)> size=2199704 count=4463>]
    [email protected]:/src# 
    [email protected]:/src# 
    [email protected]:/src# python3 -c "import pprint; import tracemalloc; pprint.pprint(tracemalloc.Snapshot.load('7.dump').statistics('lineno')[:3])"
    [<Statistic traceback=<Traceback (<Frame filename='/usr/lib/python3.4/ssl.py' lineno=648>,)> size=6130476 count=82649>,
     <Statistic traceback=<Traceback (<Frame filename='<frozen importlib._bootstrap>' lineno=656>,)> size=3679906 count=31688>,
     <Statistic traceback=<Traceback (<Frame filename='/usr/local/lib/python3.4/dist-packages/aiohttp/parsers.py' lineno=198>,)> size=2231064 count=4498>]
    

    tracemalloc reports this stack trace:

    python3 -c "import pprint; import tracemalloc; pprint.pprint(tracemalloc.Snapshot.load('3.dump').filter_traces([tracemalloc.Filter(True, '*aiohttp/parsers.py')]).statistics('traceback')[0].traceback.format())"
    ['  File "/usr/local/lib/python3.4/dist-packages/aiohttp/parsers.py", line '
     '198',
     '    p = parser(output, self._buffer)',
     '  File "/usr/local/lib/python3.4/dist-packages/aiohttp/client.py", line 633',
     '    httpstream = self._reader.set_parser(self._response_parser)',
     '  File "/usr/local/lib/python3.4/dist-packages/aiohttp/client.py", line 108',
     '    yield from resp.start(conn, read_until_eof)',
     '  File "/src/xxxxxx/main.py", line 70',
     '    connector=LONGPOLL_CONNECTOR',
    

    Looks like something is keeping those parsers alive...

    Using force_close=True on the connector makes no difference.

    Then I tried calling gc.collect() after every single request, and it is going much better ~~but the leak has not~~ the leak has disappeared completely. This means (though it may be an unrelated issue) the library creates more reference cycles than the cyclic GC can handle.

    It may well be my own bug, or maybe something to do with Python 3.4.0 itself. I'm still digging into it.
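
    One quick way to test the reference-cycle hypothesis (a generic sketch, not from the report) is to make the collector keep what it frees:

    import gc

    gc.set_debug(gc.DEBUG_SAVEALL)  # collected cycles are kept in gc.garbage
    gc.collect()
    print(len(gc.garbage), "objects were only reachable through cycles")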

  • Add json_response function


    Should be derived from aiohttp.web.Response.

    Constructor signature is: def __init__(self, data, *, status=200, reason=None, headers=None)

    Should pack the data arg with json.dumps() and set the content type to application/json.

    People forget to specify the proper content type when sending JSON data.
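
    A minimal sketch of the requested helper, following the signature above (the json_response that later shipped in aiohttp.web grew more parameters):

    import json

    from aiohttp import web

    def json_response(data, *, status=200, reason=None, headers=None):
        # serialize once and set the content type people keep forgetting
        return web.Response(
            text=json.dumps(data),
            status=status,
            reason=reason,
            headers=headers,
            content_type="application/json",
        )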

  • ssl.SSLError: [SSL: KRB5_S_INIT] application data after close notify (_ssl.c:2605)


    The following very simple aiohttp client:

    #!/usr/bin/env python3
    
    import aiohttp
    import asyncio
    
    async def fetch(session, url):
        async with session.get(url) as response:
            print("%s launched" % url)
            return response
    
    async def main():
        async with aiohttp.ClientSession() as session:
            python = await fetch(session, 'https://python.org')
            print("Python: %s" % python.status)
            
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    

    produces the following exception:

    https://python.org launched
    Python: 200
    SSL error in data received
    protocol: <asyncio.sslproto.SSLProtocol object at 0x7fdec8d42208>
    transport: <_SelectorSocketTransport fd=8 read=polling write=<idle, bufsize=0>>
    Traceback (most recent call last):
      File "/usr/lib/python3.7/asyncio/sslproto.py", line 526, in data_received
        ssldata, appdata = self._sslpipe.feed_ssldata(data)
      File "/usr/lib/python3.7/asyncio/sslproto.py", line 207, in feed_ssldata
        self._sslobj.unwrap()
      File "/usr/lib/python3.7/ssl.py", line 767, in unwrap
        return self._sslobj.shutdown()
    ssl.SSLError: [SSL: KRB5_S_INIT] application data after close notify (_ssl.c:2605)
    

    I noticed bug #3477, but it is closed and the problem is still there (I have the latest version from pip).

    % python --version
    Python 3.7.2
    
    % pip show aiohttp
    Name: aiohttp
    Version: 3.5.4
    Summary: Async http client/server framework (asyncio)
    Home-page: https://github.com/aio-libs/aiohttp
    Author: Nikolay Kim
    Author-email: [email protected]
    License: Apache 2
    Location: /usr/lib/python3.7/site-packages
    Requires: chardet, multidict, attrs, async-timeout, yarl
    Required-by: 
    
  • replace http parser?


    @asvetlov should we replace the HTTP parser with https://github.com/MagicStack/httptools? Looks good: http://magic.io/blog/uvloop-make-python-networking-great-again/
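
    For reference, httptools exposes a callback-based parser, roughly like this (a sketch using its documented protocol-object API):

    import httptools

    class Protocol:
        # httptools calls these as it parses the raw request bytes
        def on_url(self, url: bytes) -> None:
            print("url:", url)

        def on_header(self, name: bytes, value: bytes) -> None:
            print("header:", name, value)

    parser = httptools.HttpRequestParser(Protocol())
    parser.feed_data(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")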

  • docs syntax highlighting missing


    Starting with 0.18.0, the syntax highlighting for all but the first code block on any page vanished.

    It seems some change in Read the Docs, docutils, or Sphinx made .. highlight:: python stop working properly.

    And index.rst doesn't have that directive, so the magic that autodetected the language (or whatever made it work before) is also gone.

  • Signals


    To support altering headers before responses are sent, among other things, there should be a signals framework so people can hook into the response process. This would be extensible to other uses too. This issue follows on from #393.

    The proposal is:

    Create a new signals module containing a Signal class. Its constructor takes a single argument, arguments, saying which arguments must be passed when dispatching the signal. This class has a method for checking that potential callbacks have a compatible signature. The module also contains instances of this class for individual types of signal (e.g. response_start = Signal({'request', 'response'}); I can't yet think of what other signals might be useful).

    Application gets a new _signals = defaultdict(list) and a @property that exposes a view of it. It gets an add_signal_callback(signal : Signal, callback : Callable) method, which checks that the callback is compatible (see above) and coerces the callback to a coroutine, before appending the callback to self._signals[signal].

    Signal dispatching is done by Application.dispatch_signal(signal : Signal, kwargs : dict), which iterates through the registered callbacks, calling each in turn.

    Response.start() is modified to dispatch the response_start signal after this line.

    I don't think there's any need to create signals for the request creation phase, as the request can be modified safely with middleware.
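
    A minimal sketch of the proposed Signal class and compatibility check (dispatching and coroutine coercion elided):

    import inspect

    class Signal:
        """A named signal plus the set of arguments its callbacks must accept."""

        def __init__(self, arguments):
            self.arguments = frozenset(arguments)

        def check_callback(self, callback):
            # compatible callbacks accept exactly the declared arguments
            params = set(inspect.signature(callback).parameters)
            if params != self.arguments:
                raise TypeError(
                    "callback must accept arguments %r" % sorted(self.arguments))

    response_start = Signal({'request', 'response'})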

    The use case of adding response headers from middleware becomes this slightly contrived example:

    import asyncio
    import time
    
    from aiohttp.signals import response_start
    from aiohttp.web import Application
    
    @asyncio.coroutine
    def add_header_middleware(app, handler):
        @asyncio.coroutine
        def middleware(request):
            request.started = time.time()
            return (yield from handler(request))
        return middleware
    
    def add_request_started(*, request, response):
        response.headers['X-Started'] = str(request.started)
    
    app = Application(middlewares=(add_header_middleware,))
    app.add_signal_callback(response_start, add_request_started)
    
  • Replace aiohttp.ClientSession with aiohttp.create_session()


    create_session() should accept the same params as ClientSession but return an object with __aenter__/__aexit__ and __await__ methods. Thus everything is supported except the client = aiohttp.ClientSession() syntax; client = await aiohttp.create_session() should be used instead.

    It prevents hard-to-debug things like creating a session in the global namespace:

    class A:
         client = aiohttp.ClientSession()
    

    The change is backward compatible: we don't get rid of the ClientSession object and even expose it via aiohttp.__all__. But it's a huge change from the user perspective: we will rewrite all documentation to encourage the new way.
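
    A minimal sketch of the proposed factory (hypothetical, assuming the semantics described above):

    import aiohttp

    class _SessionFactory:
        """Awaitable and async-context-manager wrapper over ClientSession."""

        def __init__(self, **kwargs):
            self._kwargs = kwargs
            self._session = None

        def __await__(self):
            async def create():
                self._session = aiohttp.ClientSession(**self._kwargs)
                return self._session
            return create().__await__()

        async def __aenter__(self):
            return await self

        async def __aexit__(self, *exc_info):
            await self._session.close()

    def create_session(**kwargs):
        # client = await create_session()  or  async with create_session() as client:
        return _SessionFactory(**kwargs)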

  • ClientSession.timeout has an incorrect typing


    Describe the bug

    The aiohttp.ClientSession.timeout attribute has a type of Union[object, aiohttp.ClientTimeout]; however, the code logic will never actually assign a bare object to the self._timeout attribute, making this typing over-inclusive. Using this attribute in typed code forces one to write cast(aiohttp.ClientTimeout, session.timeout), which is far from ideal considering one can just fix the typing in the library.

    I ran into this while using Python 3.8.10, but the exact same explanation above applies to the current master branch (and the version I'm using of course), as shown by the snippets below.

    3.8 branch __init__ parameter: https://github.com/aio-libs/aiohttp/blob/6243204a6a6a0e5ff84ac754218381b44a841e72/aiohttp/client.py#L217

    3.8 branch self._timeout assignment: https://github.com/aio-libs/aiohttp/blob/6243204a6a6a0e5ff84ac754218381b44a841e72/aiohttp/client.py#L261-L290 Note the # type: ignore comment on L278 there - it's because the "timeout is sentinel" check does not narrow down the timeout type. The correct way to go about this would be to use a cast there instead of ignoring the issue like that.

    3.8 branch timeout attribute declaration: https://github.com/aio-libs/aiohttp/blob/6243204a6a6a0e5ff84ac754218381b44a841e72/aiohttp/client.py#L1029-L1032

    Master branch __init__ parameter: https://github.com/aio-libs/aiohttp/blob/52fa599c5637dd1a38761afb6829b0439b1cf505/aiohttp/client.py#L215

    Master branch self._timeout assignment: https://github.com/aio-libs/aiohttp/blob/52fa599c5637dd1a38761afb6829b0439b1cf505/aiohttp/client.py#L260-L263 Due to a different handling of the sentinel value via an Enum member, no cast is needed here.

    Master branch timeout attribute declaration: https://github.com/aio-libs/aiohttp/blob/52fa599c5637dd1a38761afb6829b0439b1cf505/aiohttp/client.py#L1008-L1011 The attribute type is still over-inclusive here though.

    The solution would be quite simple:

        @property
        def timeout(self) -> ClientTimeout:
            """Timeout for the session."""
            return self._timeout
    

    Please let me know if you'd welcome a PR for this. I'd like to get this backported to 3.8 (which I'm using) if possible, but if not, just fixing it in the master branch so that it's correct going forward would be good enough for me.

    To Reproduce

    Utilize some kind of a type checker like MyPy.

    import asyncio
    import aiohttp
    
    async def main():
        session = aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=10))
        # read back the total time attribute
        total_time = session.timeout.total  # "object" type of "Union[object, ClientTimeout]" has no attribute "total"
        print(total_time)
    
    asyncio.run(main())
    

    Expected behavior

    The attribute having only the aiohttp.ClientTimeout type, so that no cast is needed when accessing it in user code.

    Logs/tracebacks

    Not applicable
    

    Python Version

    Python 3.8.10
    

    aiohttp Version

    Version: 3.8.1
    

    multidict Version

    Version: 6.0.2
    

    yarl Version

    Version: 1.7.2
    

    OS

    Windows

    Related component

    Client

    Additional context

    Related issues and PRs:

    #4191 #4193

    Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
  • Add missing __repr__ to streams.EmptyStreamReader


    EmptyStreamReader currently inherits __repr__ from StreamReader, which then tries to access non-existing fields, causing AttributeErrors. The bug causes issues like this in upstream code: https://github.com/miguelgrinberg/python-socketio/issues/1032

    This PR adds a custom __repr__ to EmptyStreamReader to fix the problem.
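
    The essence of the change is roughly this (a sketch, not the exact diff):

    from aiohttp.streams import StreamReader

    class EmptyStreamReader(StreamReader):
        def __repr__(self) -> str:
            # StreamReader.__repr__ reads buffer/EOF fields this subclass
            # never initializes, so report only the class name
            return "<%s>" % self.__class__.__name__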

  • [DRAFT] Support dynamically updating authentication headers


    Is your feature request related to a problem?

    See #6908 for discussion with @Dreamsorcerer

    We have a continuously running service that makes HTTP requests using aiohttp's client. These requests are sent with a bearer token in the Authorization header. This token needs to be refreshed from time to time.

    Therefore we cannot set the header once on the client object; we need to evaluate the header prior to making individual requests. This is cumbersome for an app developer.

    Describe the solution you'd like

    We would like to configure the client with the required data, which can be used at each request to populate an Authorization header. In its simplest form, this could be a simple coroutine that evaluates a token cache, generates a new token if required, and populates the header as required.

    For the avoidance of doubt, the proposal is that wherever aiohttp supports an auth argument (client session, request), it would support the "enhanced" auth handler.

    Currently, aiohttp supports a BasicAuth object, which is a customized namedtuple (fields: username, password) with an encode method that returns the base64-encoded header value. Runtime checks are performed to ensure the auth value is an instance of that class.

    TODO: expand on preferred option
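
    In the meantime, a workaround available today (a hypothetical helper, not the proposed API) is to evaluate the header just before each request:

    import time

    import aiohttp

    class TokenAuth:
        """Hypothetical helper: caches a bearer token, refreshing it on expiry."""

        def __init__(self, fetch_token):
            self._fetch_token = fetch_token  # coroutine returning (token, expires_at)
            self._token = None
            self._expires_at = 0.0

        async def headers(self):
            if time.time() >= self._expires_at:
                self._token, self._expires_at = await self._fetch_token()
            return {"Authorization": "Bearer %s" % self._token}

    async def fetch_json(session: aiohttp.ClientSession, auth: TokenAuth, url: str):
        # the header is computed per request, so a refreshed token is picked up
        async with session.get(url, headers=await auth.headers()) as response:
            return await response.json()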

    Describe alternatives you've considered

    Both the requests library and the httpx library have similar functionality. For reference:

    • https://requests.readthedocs.io/en/latest/user/authentication/#new-forms-of-authentication
    • https://www.python-httpx.org/advanced/#customizing-authentication

    The httpx approach supports a more defined base class/interface for an auth handler, whereas requests just requires a callable. Fundamentally they do the same thing and inject an authorization header into the request headers.

    Related component

    Client

    Additional context

    I am probably able to contribute a PR with an implementation, depending on the agreed scope of the changes.

    Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
  • aiohttp.client_exceptions.ClientOSError: [Errno 32] Broken pipe


    Describe the bug

    import asyncio
    import io
    import os
    
    import aiohttp
    from tqdm.asyncio import tqdm
    
    
    URL = 'http://your-ip:3000/upload'
    
    
    async def chunks(data, chunk_size):
        with tqdm.wrapattr(io.BytesIO(data), 'read', total=len(data)) as f:
            chunk = f.read(chunk_size)
            while chunk:
                yield chunk
                chunk = f.read(chunk_size)
    
    
    async def download(session, chunk_size):
        data_to_send = os.urandom(30_000_000)
        data_generator = chunks(data_to_send, chunk_size)
        await session.post(URL, data=data_generator)
    
            
    async def main():
        async with aiohttp.ClientSession() as session:
            tasks = [] 
            for _ in range(5):
                t = asyncio.create_task(download(session, 4096))
                tasks.append(t)
            await asyncio.gather(*tasks)
                
    
    asyncio.run(main())
    

    I am trying to make a CLI client for OpenSpeedTest-Server and am getting an error like the one below. To reproduce this, use our Docker image or Android app, then make a POST request to "http://your-ip:3000/upload". Issues: for the Docker image it will only send the first chunk; for the Android app it will throw an error like this.

    Traceback (most recent call last):
      File "r.py", line 35, in <module>
        asyncio.run(main())
      File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/asyncio/runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
        return future.result()
      File "r.py", line 32, in main
        await asyncio.gather(*tasks)
      File "r.py", line 23, in download
        await session.post(URL, data=data_generator)
      File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/client.py", line 559, in _request
        await resp.start(conn)
      File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/client_reqrep.py", line 898, in start
        message, payload = await protocol.read()  # type: ignore[union-attr]
      File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/streams.py", line 616, in read
        await self._waiter
    aiohttp.client_exceptions.ClientOSError: [Errno 32] Broken pipe
    

    It works fine when using the Electron apps of OpenSpeedTest-Server (Windows, Mac, and Linux GUI server apps), which use an Express server.

    The mobile apps use the Ionic web server; for Android it's a NanoHTTPD server and for iOS it's GCDWebServer. For Docker we use the Nginx web server. The configuration is posted on my profile.

    To Reproduce

    1. Download the OpenSpeedTest-Server Android app.
    2. Run the script I posted, after changing the IP address to the one shown in the app.
    3. Make sure you are making a POST request to http://your-ip:3000/upload.
    4. Error.

    Expected behavior

    If you run the same script against the GUI OpenSpeedTest-Server (Electron apps), the upload finishes without issue.

    Logs/tracebacks

    Traceback (most recent call last):
      File "r.py", line 35, in <module>
        asyncio.run(main())
      File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/asyncio/runners.py", line 44, in run
        return loop.run_until_complete(main)
      File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
        return future.result()
      File "r.py", line 32, in main
        await asyncio.gather(*tasks)
      File "r.py", line 23, in download
        await session.post(URL, data=data_generator)
      File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/client.py", line 559, in _request
        await resp.start(conn)
      File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/client_reqrep.py", line 898, in start
        message, payload = await protocol.read()  # type: ignore[union-attr]
      File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/streams.py", line 616, in read
        await self._waiter
    aiohttp.client_exceptions.ClientOSError: [Errno 32] Broken pipe
    
    
    
    Python Version

    Python 3.8.9
    

    aiohttp Version

    Name: aiohttp
    Version: 3.8.1
    

    multidict Version

    Name: multidict
    Version: 6.0.2
    

    yarl Version

    Name: yarl
    Version: 1.8.1
    

    OS

    MacOS 12.5.1 (21G83)

    Related component

    Client

    Additional context

    This is a CLI-Client

    Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
  • PONG not received


    Describe the bug

    While using aiohttp we have noticed a strange issue: the client sends a PING message and the server takes a long time to respond. The simple script below shows that the PONG reply is correlated with the next message being sent to the client, instead of being sent right away.

    To Reproduce

    Run this simple script - ping_pong.py:

    import argparse
    import asyncio
    import logging
    import time
    import traceback
    from contextvars import ContextVar
    from typing import AsyncGenerator
    from typing import Awaitable
    from typing import Callable
    from typing import Optional
    from typing import Union
    from uuid import uuid4
    
    import aiohttp
    from aiohttp import ClientSession
    from aiohttp import ClientWebSocketResponse
    from aiohttp import web
    from aiohttp import WSMessage
    from aiohttp import WSMsgType
    from aiohttp.web_request import Request
    from aiohttp.web_ws import WebSocketResponse
    
    logger = logging.getLogger(__name__)
    
    RECONNECTION_TIMEOUT = 1.0
    PING_SEND_TIME: ContextVar[dict[int, float]] = ContextVar('PING_SEND_TIME', default=dict())
    SERVER_HOST = "localhost"
    SERVER_PORT = 8080
    
    
    async def ws_connect(
        url: str, reconnect_timeout: float = RECONNECTION_TIMEOUT, autoping: bool = True
    ) -> tuple[ClientSession, ClientWebSocketResponse]:
        while True:
            logger.info("Connection attempt...")
            session = aiohttp.ClientSession()
            try:
                ws = await session.ws_connect(url, autoping=autoping)
                logger.info(f"Connected with autoping to set to {autoping}")
                return session, ws
            except aiohttp.client.ClientConnectorError:
                logger.info("CLOSING SESSION IN ws_connect!")
                await session.close()
                await asyncio.sleep(reconnect_timeout)
    
    
    async def send_ping(ws: Union[ClientWebSocketResponse, WebSocketResponse], ping_number: int) -> None:
        message = f"{ping_number}"
        logger.debug(f"[PING {ping_number}] Sending ping with message = '{message}'")
        await ws.ping(message=message.encode("utf-8"))
        PING_SEND_TIME.get()[ping_number] = time.time()
        logger.debug(f"[PING {ping_number}] Successfully sent ping with message = '{message}'")
    
    
    async def stream_from_ws(url: str, sink: Callable[[str], Awaitable[None]]) -> None:
        session, ws = await ws_connect(url=url, reconnect_timeout=1, autoping=False)
        try:
            await fetch_stream_with_ping(ws=ws, sink=sink)
        except asyncio.CancelledError:
            traceback.print_exc()
        finally:
            await session.close()
    
    
    async def wait_for_pong(
        ws: Union[ClientWebSocketResponse, WebSocketResponse], timeout: int = 20
    ) -> Optional[WSMessage]:
        listener_id = str(uuid4())
        logger.debug(f"[{listener_id}] Listening for PONG started!")
        try:
            response = await asyncio.wait_for(ws.receive(), timeout=timeout)
        except asyncio.TimeoutError:
            logger.error(f"[{listener_id}] PONG was never received and TimeoutError was raised!")
            return None
    
        if response.type == WSMsgType.PONG:
            pong_number = int(response.data.decode('utf-8'))
            logger.debug(f"[{listener_id}] PONG was successfully received - message = '{pong_number}'")
            pong_receive_time = time.time()
            ping_send_time = PING_SEND_TIME.get()[pong_number]
            logger.info(f"Client was waiting for PONG for - {pong_receive_time - ping_send_time}!")
            del PING_SEND_TIME.get()[pong_number]
            return None
        else:
            logger.debug(f"[{listener_id}] Received response that is not PONG")
            return response
    
    
    async def ack(data: WSMessage) -> str:
        return "ok"
    
    
    async def fetch_stream_with_ping(
        ws: Union[ClientWebSocketResponse, WebSocketResponse], sink: Callable[[str], Awaitable[None]]
    ) -> None:
    
        ping_number = 0
    
        while True:
            # send PING and wait for PONG
            await send_ping(ws, ping_number)
            ping_number += 1
            response1 = await wait_for_pong(ws, timeout=20)  # this response should be PONG
    
            response2 = await wait_for_pong(ws, timeout=20)  # this should be normal WS event
    
            responses = [response1, response2]
            true_events = [response for response in responses if response is not None]
    
            for event in true_events:
                await sink(event.data)
                await ws.send_str(await ack(event))
    
    
    async def send_stream(
        ws: Union[ClientWebSocketResponse, WebSocketResponse],
        stream: AsyncGenerator[str, None],
        get_message_func: Callable[[], Awaitable[WSMessage]],
        send_timeout: float = 1.0,
        ack_timeout: float = 1.0,
    ) -> None:
        async def receive_and_ack() -> None:
            ack_response = await get_message_func()
            await ack(ack_response)
    
        async def send_message(msg: str) -> None:
            await asyncio.wait_for(ws.send_str(data=msg), timeout=send_timeout)
            await asyncio.wait_for(receive_and_ack(), timeout=ack_timeout)
    
        sleep_time = 0
        async for message in stream:
            logger.info(f"[SERVER] Sending message = '{message}'")
            await send_message(message)
            await asyncio.sleep(sleep_time)
            sleep_time = (sleep_time + 1) % 5
        return None
    
    
    async def websocket_handler(request: Request) -> web.WebSocketResponse:
        ws = web.WebSocketResponse(autoping=True)
        await ws.prepare(request)
    
        stream = events_stream(100)
    
        await send_stream(ws, stream, get_message_func=ws.receive)
    
        async for msg in ws:
            if msg.type == aiohttp.WSMsgType.TEXT:
                if msg.data == 'close':
                    await ws.close()
                else:
                    await ws.send_str(msg.data + '/answer')
            elif msg.type == aiohttp.WSMsgType.ERROR:
                logger.error(f"ws connection closed with exception {ws.exception()}")
    
        logger.info('Websocket connection closed')
    
        return ws
    
    
    async def websocket_handler_custom_ping(request: Request) -> web.WebSocketResponse:
        ws = web.WebSocketResponse(autoping=False)
        await ws.prepare(request)
    
        stream = events_stream(100)
        message_queue: asyncio.Queue[WSMessage] = asyncio.Queue()
    
        async def consume_events(web_socket: web.WebSocketResponse, queue: asyncio.Queue[WSMessage]) -> None:
            async for msg in ws:
                if msg.type == aiohttp.WSMsgType.PING:
                    message = msg.data.decode("utf-8")
                    logger.debug(f"[SERVER] Received PING message = '{message}'")
                    await web_socket.pong(msg.data)
                elif msg.type == aiohttp.WSMsgType.TEXT:
                    if msg.data == 'close':
                        await ws.close()
                    else:
                        await queue.put(msg)
                elif msg.type == aiohttp.WSMsgType.ERROR:
                    logger.error(f"ws connection closed with exception {ws.exception()}")
                    raise ws.exception()
    
        events_consumption_task = asyncio.create_task(consume_events(ws, message_queue))
        stream_sending_task = asyncio.create_task(send_stream(ws, stream, get_message_func=message_queue.get))
    
        await asyncio.gather(events_consumption_task, stream_sending_task)
    
        logger.info('Websocket connection closed')
    
        return ws
    
    
    async def await_termination() -> None:
        # By design aiohttp server do not hang:
        # https://docs.aiohttp.org/en/stable/web_advanced.html#application-runners
        while True:
            await asyncio.sleep(3600.0)
    
    
    async def server() -> None:
        app = web.Application(logger=logger)
        app.add_routes([web.get('/ws-normal', websocket_handler)])
        app.add_routes([web.get('/ws-custom', websocket_handler_custom_ping)])
        app_runner = web.AppRunner(app)
        try:
            await app_runner.setup()
            site = web.TCPSite(app_runner, SERVER_HOST, SERVER_PORT)
            await site.start()
            logger.info(f"Successfully started server {SERVER_HOST}:{SERVER_PORT}")
            await await_termination()
        finally:
            logger.info(f"Cleaning up the server")
            await app_runner.cleanup()
    
    
    async def client(endpoint: str) -> None:
        async def client_sink(message: str) -> None:
            logger.info(f"[CLIENT] Received message = '{message}'")
    
        await stream_from_ws(url=f"ws://{SERVER_HOST}:{SERVER_PORT}/ws-{endpoint}", sink=client_sink)
    
    
    async def events_stream(events_number: int) -> AsyncGenerator[str, None]:
        for i in range(events_number):
            yield f"Message {i} from events stream"
            await asyncio.sleep(1)
    
    
    async def main(client_endpoint: str) -> None:
        server_task = asyncio.create_task(server())
        client_task = asyncio.create_task(client(client_endpoint))
    
        await asyncio.gather(server_task, client_task)
    
    
    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description='Show PING-PONG issue in aiohttp')
        parser.add_argument('--client-endpoint', default="normal", help="Define which endpoint client should use")
        parser.add_argument('--logging-level', default="INFO", help="Logging level as in standard Python logging library")
        args = parser.parse_args()
    
        endpoint = args.client_endpoint
        logging_level = logging._nameToLevel[args.logging_level]
    
        logging.basicConfig(format='%(asctime)s:%(levelname)s:%(message)s', level=logging_level)
        asyncio.run(main(endpoint))
    

    Expected behavior

    We are using a receive-acknowledge protocol:

    1. The client establishes a WebSocket connection to the server with a simple request.
    2. The server prepares an event stream for the client.
    3. The server sends one message at a time and then waits for an ACK message.
    4. The client receives the main message and sends the string "ok" as the ACK response.
    5. After receiving the ACK, the server knows the next message can be sent to the client.

    PING-PONG

    1. At any given time the client can send a PING message to the server.
    2. If the connection is established, the server should respond with a PONG right away.

    Logs/tracebacks

    By running `python ping_pong.py --client-endpoint=normal --logging-level=INFO` you can see that with `autoping=True`
    the PONG messages are correlated with the speed of the server stream.
    
    
    python ping_pong.py --client-endpoint=normal --logging-level=INFO
    2022-09-06 10:56:58,417:INFO:Connection attempt...
    2022-09-06 10:56:58,423:INFO:Successfully started server localhost:8080
    2022-09-06 10:56:58,425:INFO:[SERVER] Sending message = 'Message 0 from events stream'
    2022-09-06 10:56:58,425:INFO:Connected with autoping to set to False
    2022-09-06 10:56:58,426:INFO:Client was waiting for PONG for - 0.0004050731658935547!
    2022-09-06 10:56:58,426:INFO:[CLIENT] Received message = 'Message 0 from events stream'
    2022-09-06 10:56:59,427:INFO:[SERVER] Sending message = 'Message 1 from events stream'
    2022-09-06 10:56:59,429:INFO:Client was waiting for PONG for - 1.0026228427886963!
    2022-09-06 10:56:59,429:INFO:[CLIENT] Received message = 'Message 1 from events stream'
    2022-09-06 10:57:01,432:INFO:[SERVER] Sending message = 'Message 2 from events stream'
    2022-09-06 10:57:01,434:INFO:Client was waiting for PONG for - 2.004383087158203!
    2022-09-06 10:57:01,434:INFO:[CLIENT] Received message = 'Message 2 from events stream'
    2022-09-06 10:57:04,437:INFO:[SERVER] Sending message = 'Message 3 from events stream'
    2022-09-06 10:57:04,438:INFO:Client was waiting for PONG for - 3.004377841949463!
    2022-09-06 10:57:04,438:INFO:[CLIENT] Received message = 'Message 3 from events stream'
    2022-09-06 10:57:08,440:INFO:[SERVER] Sending message = 'Message 4 from events stream'
    2022-09-06 10:57:08,441:INFO:Client was waiting for PONG for - 4.002522945404053!
    2022-09-06 10:57:08,441:INFO:[CLIENT] Received message = 'Message 4 from events stream'
    
    
    By running a separate coroutine that responds right away I am able to overcome this issue, but it does feel like a hack:
    `python ping_pong.py --client-endpoint=custom --logging-level=INFO`
    
    python ping_pong.py --client-endpoint=custom --logging-level=INFO
    2022-09-06 10:57:35,876:INFO:Connection attempt...
    2022-09-06 10:57:35,882:INFO:Successfully started server localhost:8080
    2022-09-06 10:57:35,883:INFO:[SERVER] Sending message = 'Message 0 from events stream'
    2022-09-06 10:57:35,883:INFO:Connected with autoping to set to False
    2022-09-06 10:57:35,884:INFO:Client was waiting for PONG for - 0.0005218982696533203!
    2022-09-06 10:57:35,884:INFO:[CLIENT] Received message = 'Message 0 from events stream'
    2022-09-06 10:57:35,885:INFO:Client was waiting for PONG for - 0.0004208087921142578!
    2022-09-06 10:57:36,886:INFO:[SERVER] Sending message = 'Message 1 from events stream'
    2022-09-06 10:57:36,887:INFO:[CLIENT] Received message = 'Message 1 from events stream'
    2022-09-06 10:57:36,888:INFO:Client was waiting for PONG for - 0.0006530284881591797!
    2022-09-06 10:57:38,890:INFO:[SERVER] Sending message = 'Message 2 from events stream'
    2022-09-06 10:57:38,891:INFO:[CLIENT] Received message = 'Message 2 from events stream'
    2022-09-06 10:57:38,892:INFO:Client was waiting for PONG for - 0.00048804283142089844!
    2022-09-06 10:57:41,894:INFO:[SERVER] Sending message = 'Message 3 from events stream'
    2022-09-06 10:57:41,895:INFO:[CLIENT] Received message = 'Message 3 from events stream'
    2022-09-06 10:57:41,896:INFO:Client was waiting for PONG for - 0.0007541179656982422!
    2022-09-06 10:57:45,897:INFO:[SERVER] Sending message = 'Message 4 from events stream'
    2022-09-06 10:57:45,898:INFO:[CLIENT] Received message = 'Message 4 from events stream'
    2022-09-06 10:57:45,899:INFO:Client was waiting for PONG for - 0.0006568431854248047!
    ```
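
    One plausible shape of that separate-coroutine workaround, sketched here with made-up names rather than the actual `ping_pong.py` code: spawn a background task whose only job is to keep reading from the socket, so that `receive()` runs concurrently with the send loop and PINGs are answered immediately.

    ```python
    # Hypothetical sketch of a "custom" endpoint workaround: a background
    # task keeps draining incoming frames, so receive() -- where aiohttp
    # answers PINGs when autoping=True -- runs concurrently with the sends.
    import asyncio

    from aiohttp import web

    async def _drain(ws: web.WebSocketResponse) -> None:
        async for _ in ws:  # each iteration awaits receive()
            pass

    async def custom_handler(request: web.Request) -> web.WebSocketResponse:
        ws = web.WebSocketResponse(autoping=True)
        await ws.prepare(request)
        reader = asyncio.create_task(_drain(ws))
        try:
            for i in range(5):
                await ws.send_str(f"Message {i} from events stream")
                await asyncio.sleep(i + 1)
        finally:
            reader.cancel()
        return ws
    ```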
    
    
    
    ### Python Version
    
    ```console
    $ python --version
    Python 3.9.12
    ```
    

    ### aiohttp Version

    ```console
    $ python -m pip show aiohttp
    Name: aiohttp
    Version: 3.8.1
    Summary: Async http client/server framework (asyncio)
    Home-page: https://github.com/aio-libs/aiohttp
    Author: 
    Author-email: 
    License: Apache 2
    Location: /Users/jgorazda/anaconda3/envs/cv-test-orchestrator/lib/python3.9/site-packages
    Requires: attrs, yarl, multidict, aiosignal, charset-normalizer, async-timeout, frozenlist
    Required-by:
    ```
    

    ### multidict Version

    ```console
    $ python -m pip show multidict
    Name: multidict
    Version: 6.0.2
    Summary: multidict implementation
    Home-page: https://github.com/aio-libs/multidict
    Author: Andrew Svetlov
    Author-email: [email protected]
    License: Apache 2
    Location: /Users/jgorazda/anaconda3/envs/cv-test-orchestrator/lib/python3.9/site-packages
    Requires: 
    Required-by: yarl, aiohttp
    ```
    

    ### yarl Version

    ```console
    $ python -m pip show yarl
    Name: yarl
    Version: 1.7.2
    Summary: Yet another URL library
    Home-page: https://github.com/aio-libs/yarl/
    Author: Andrew Svetlov
    Author-email: [email protected]
    License: Apache 2
    Location: /Users/jgorazda/anaconda3/envs/cv-test-orchestrator/lib/python3.9/site-packages
    Requires: multidict, idna
    Required-by: aiohttp
    ```
    

    ### OS

    macOS

    ### Related component

    Server, Client

    ### Additional context

    No response

    ### Code of Conduct

    • [X] I agree to follow the aio-libs Code of Conduct
  • docs: Update redis to follow the latest spec

    docs: Update redis to follow the latest spec

    ### What do these changes do?

    I changed some Redis code in the documentation to follow the latest API. It is based on redis-py>=4.2.x, into which aioredis has been merged and where it is now maintained.
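
    For context, a hedged sketch of the kind of change such a docs update involves (not necessarily the exact snippets touched by this PR): since redis-py 4.2 the former aioredis API is available as `redis.asyncio`, so an example that used to `import aioredis` can become:

    ```python
    # Sketch only (not necessarily the exact snippet changed by this PR):
    # redis-py >= 4.2 ships the former aioredis API as redis.asyncio, so
    # `import aioredis` becomes the import below while the calls stay similar.
    from typing import Optional

    from redis import asyncio as aioredis

    async def read_key(key: str) -> Optional[bytes]:
        redis = aioredis.from_url("redis://localhost")
        try:
            return await redis.get(key)
        finally:
            await redis.close()
    ```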

    ### Are there changes in behavior for the user?

    Nope!

    ### Related issue number

    There is no issue related to this PR.

    ### Checklist

    • [x] I think the code is well written
    • [ ] Unit tests for the changes exist
    • [x] Documentation reflects the changes
    • [ ] If you provide code modification, please add yourself to CONTRIBUTORS.txt
      • The format is <Name> <Surname>.
      • Please keep alphabetical order, the file is sorted by names.
    • [x] Add a new news fragment into the CHANGES folder
      • name it <issue_id>.<type> for example (588.bugfix)
      • if you don't have an issue_id change it to the pr id after creating the pr
      • ensure type is one of the following:
        • .feature: Signifying a new feature.
        • .bugfix: Signifying a bug fix.
        • .doc: Signifying a documentation improvement.
        • .removal: Signifying a deprecation or removal of public API.
        • .misc: A ticket has been closed, but it is not of interest to users.
      • Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."

    Thank you for reading; any feedback is always welcome!

Screaming-fast Python 3.5+ HTTP toolkit integrated with pipelining HTTP server based on uvloop and picohttpparser.

Sep 17, 2022
EasyRequests is a minimalistic HTTP-Request Library that wraps aiohttp and asyncio in a small package that allows for sequential, parallel or even single requests

EasyRequests EasyRequests is a minimalistic HTTP-Request Library that wraps aiohttp and asyncio in a small package that allows for sequential, paralle

Jan 27, 2022
Asynchronous Python HTTP Requests for Humans using Futures

Asynchronous Python HTTP Requests for Humans Small add-on for the python requests http library. Makes use of python 3.2's concurrent.futures or the ba

Sep 22, 2022
Small, fast HTTP client library for Python. Features persistent connections, cache, and Google App Engine support. Originally written by Joe Gregorio, now supported by community.

Introduction httplib2 is a comprehensive HTTP client library, httplib2.py supports many features left out of other HTTP libraries. HTTP and HTTPS HTTP

Sep 20, 2022
An interactive command-line HTTP and API testing client built on top of HTTPie featuring autocomplete, syntax highlighting, and more. https://twitter.com/httpie

HTTP Prompt HTTP Prompt is an interactive command-line HTTP client featuring autocomplete and syntax highlighting, built on HTTPie and prompt_toolkit.

Sep 21, 2022
A next generation HTTP client for Python. πŸ¦‹

HTTPX - A next-generation HTTP client for Python. HTTPX is a fully featured HTTP client for Python 3, which provides sync and async APIs, and support

Sep 21, 2022
Python requests like API built on top of Twisted's HTTP client.

treq: High-level Twisted HTTP Client API treq is an HTTP library inspired by requests but written on top of Twisted's Agents. It provides a simple, hi

Sep 12, 2022
As easy as /aitch-tee-tee-pie/ πŸ₯§ Modern, user-friendly command-line HTTP client for the API era. JSON support, colors, sessions, downloads, plugins & more. https://twitter.com/httpie

HTTPie: human-friendly CLI HTTP client for the API era HTTPie (pronounced aitch-tee-tee-pie) is a command-line HTTP client. Its goal is to make CLI in

Sep 18, 2022
A minimal HTTP client. βš™οΈ

HTTP Core Do one thing, and do it well. The HTTP Core package provides a minimal low-level HTTP client, which does one thing only. Sending HTTP reques

Sep 18, 2022
Python HTTP library with thread-safe connection pooling, file post support, user friendly, and more.

urllib3 is a powerful, user-friendly HTTP client for Python. Much of the Python ecosystem already uses urllib3 and you should too. urllib3 brings many

Sep 21, 2022
πŸ”„ 🌐 Handle thousands of HTTP requests, disk writes, and other I/O-bound tasks simultaneously with Python's quintessential async libraries.

Sep 4, 2022
A Python obfuscator using HTTP Requests and Hastebin.

Jawbreaker Jawbreaker is a Python obfuscator written in Python3, using double encoding in base16, base32, base64, HTTP requests and a Hastebin-l

Sep 3, 2022
Probe and discover HTTP pathname using brute-force methodology and filtered by specific word or 2 words at once

pathprober Probe and discover HTTP pathname using brute-force methodology and filtered by specific word or 2 words at once. Purpose Brute-forcing webs

Jul 6, 2022
HTTP/2 for Python.

Hyper: HTTP/2 Client for Python This project is no longer maintained! Please use an alternative, such as HTTPX or others. We will not publish further

Sep 10, 2022
HTTP request/response parser for python in C

http-parser HTTP request/response parser for Python compatible with Python 2.x (>=2.7), Python 3 and Pypy. If possible a C parser based on http-parser

Sep 16, 2022
Python package for caching HTTP response based on etag

Etag cache implementation for HTTP requests, to save request bandwidth for a non-modified response. Returns high-speed accessed dictionary data as cache.

Apr 27, 2022
Some example code for using a raspberry pi to draw text (including emojis) and twitch emotes to a HUB75 RGB matrix via an HTTP post endpoint.

Jul 28, 2022
A simple, yet elegant HTTP library.

Requests Requests is a simple, yet elegant HTTP library. >>> import requests >>> r = requests.get('https://api.github.com/user', auth=('user', 'pass')

Sep 27, 2022