⚡ Fast • 🪶 Lightweight • 0️⃣ Dependency • 🔌 Pluggable • 😈 TLS interception • 🔒 DNS-over-HTTPS • 🔥 Poor Man's VPN • ⏪ Reverse & ⏩ Forward • 👮🏿 "Proxy Server" framework • 🌐 "Web Server" framework • ➵ ➶ ➷ ➠ "PubSub" framework • 👷 "Work" acceptor & executor framework

Proxy.Py


Tested with macOS, Ubuntu, Windows, Android, Android Emulator, iOS, and iOS Simulator


Contributions Welcome Need Help Sponsored by Jaxl Innovations Private Limited

Table of Contents

Features

  • Fast & Scalable

    • Scales by using all available cores on the system

    • Threadless executions using asyncio

    • Made to handle tens of thousands of connections per second

      # On Macbook Pro 2019 / 2.4 GHz 8-Core Intel Core i9 / 32 GB RAM
      ./helper/benchmark.sh
        CONCURRENCY: 100 workers, TOTAL REQUESTS: 100000 req
      
        Summary:
          Success rate:	1.0000
          Total:	2.5489 secs
          Slowest:	0.0443 secs
          Fastest:	0.0006 secs
          Average:	0.0025 secs
          Requests/sec:	39232.6572
      
          Total data:	1.81 MiB
          Size/request:	19 B
          Size/sec:	727.95 KiB
      
        Response time histogram:
          0.001 [5006]  |■■■■■
          0.001 [19740] |■■■■■■■■■■■■■■■■■■■■■
          0.002 [29701] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
          0.002 [21278] |■■■■■■■■■■■■■■■■■■■■■■
          0.003 [15376] |■■■■■■■■■■■■■■■■
          0.004 [6644]  |■■■■■■■
          0.004 [1609]  |■
          0.005 [434]   |
          0.006 [83]    |
          0.006 [29]    |
          0.007 [100]   |
      
        Latency distribution:
          10% in 0.0014 secs
          25% in 0.0018 secs
          50% in 0.0023 secs
          75% in 0.0030 secs
          90% in 0.0036 secs
          95% in 0.0040 secs
          99% in 0.0047 secs
      
        Details (average, fastest, slowest):
          DNS+dialup:	0.0025 secs, 0.0015 secs, 0.0030 secs
          DNS-lookup:	0.0000 secs, 0.0000 secs, 0.0001 secs
      
        Status code distribution:
          [200] 100000 responses
    • See Benchmark for more details and how to run them locally.

  • Lightweight

    • Uses ~5-20 MB RAM
    • Compressed container size is ~18.04 MB
    • No dependencies other than the Python standard library
  • Programmable

    • Customize proxy behavior using Proxy Server Plugins. Example:
      • --plugins proxy.plugin.ProxyPoolPlugin
    • Optionally, enable builtin Web Server Plugins. Example:
      • --plugins proxy.plugin.ReverseProxyPlugin
    • Plugin API is currently in the development phase; expect breaking changes
  • Real-time Dashboard

  • Secure

  • Private

    • Everyone deserves privacy. Browse with malware and adult content protection
    • See DNS-over-HTTPS
  • Man-In-The-Middle

    • Can decrypt TLS traffic between clients and upstream servers
    • See TLS Interception
  • Supported proxy protocols

    • http(s)
      • http1
      • http1.1 with pipeline
    • http2
    • websockets
  • Support for HAProxy Protocol

    • See --enable-proxy-protocol flag
  • Static file server support

    • See --enable-static-server and --static-server-dir flags
  • Optimized for large file uploads and downloads

    • See --client-recvbuf-size and --server-recvbuf-size flags
  • IPv4 and IPv6 support

    • See --hostname flag
  • Unix domain socket support

    • See --unix-socket-path flag
  • Basic authentication support

    • See --basic-auth flag
  • PAC (Proxy Auto-configuration) support

    • See --pac-file and --pac-file-url-path flags

Install

Using PIP

Stable Version with PIP

Install from PyPi

pip install --upgrade proxy.py

or from GitHub master branch

pip install git+https://github.com/abhinavsingh/proxy.py@master

Development Version with PIP

pip install git+https://github.com/abhinavsingh/proxy.py@develop

Using Docker

Stable version container releases are available for the following platforms:

  • linux/386
  • linux/amd64
  • linux/arm/v6
  • linux/arm/v7
  • linux/arm64/v8
  • linux/ppc64le
  • linux/s390x

Stable Version from Docker Hub

Run proxy.py latest container:

docker run -it -p 8899:8899 --rm abhinavsingh/proxy.py:latest

To run a specific target platform container on multi-platform supported servers:

docker run -it -p 8899:8899 --rm --platform linux/arm64/v8 abhinavsingh/proxy.py:latest

Build Development Version Locally

git clone https://github.com/abhinavsingh/proxy.py.git
cd proxy.py && make container
docker run -it -p 8899:8899 --rm abhinavsingh/proxy.py:latest

WARNING docker image is currently broken on macOS due to incompatibility with vpnkit.

Using HomeBrew

Updated formulae for HomeBrew are maintained in develop branch under the helper/homebrew directory.

  • stable formulae installs the package from master branch.
  • develop formulae installs the package from develop branch.

Stable Version with HomeBrew

brew install https://raw.githubusercontent.com/abhinavsingh/proxy.py/develop/helper/homebrew/stable/proxy.rb

Development Version with HomeBrew

brew install https://raw.githubusercontent.com/abhinavsingh/proxy.py/develop/helper/homebrew/develop/proxy.rb

Start proxy.py

From command line when installed using PIP

When proxy.py is installed using pip, an executable named proxy is placed under your $PATH.

Run it

Simply type proxy on the command line to start with the default configuration.

proxy
...[redacted]... - Loaded plugin proxy.http.proxy.HttpProxyPlugin
...[redacted]... - Started 8 threadless workers
...[redacted]... - Started 8 acceptors
...[redacted]... - Listening on 127.0.0.1:8899

Understanding logs

Things to notice from above logs:

  • Loaded plugin

    • proxy.py will load proxy.http.proxy.HttpProxyPlugin by default
    • As the name suggests, this core plugin adds http(s) proxy server capabilities to the proxy.py instance
  • Started N threadless workers

    • By default, proxy.py will start as many worker processes as there are CPU cores on the machine
    • Use --num-workers flag to customize number of worker processes
    • See Threads vs Threadless to understand how to control execution mode
  • Started N acceptors

    • By default, proxy.py will start as many acceptor processes as there are CPU cores on the machine
    • Use --num-acceptors flag to customize number of acceptor processes
    • See High Level Architecture to understand relationship between acceptors and workers
  • Started server on ::1:8899

    • By default, proxy.py listens on ::1, the IPv6 equivalent of IPv4 127.0.0.1
    • If you want to access proxy.py from external host, use --hostname :: or --hostname 0.0.0.0 or bind to any other interface available on your machine.
    • See CustomNetworkInterface for how to customize proxy.py public IP seen by upstream servers.
  • Port 8899

    • Use --port flag to customize default TCP port.

Enable DEBUG logging

All the logs above are INFO level logs, the default --log-level for proxy.py.

Let's start proxy.py with DEBUG level logging:

proxy --log-level d
...[redacted]... - Open file descriptor soft limit set to 1024
...[redacted]... - Loaded plugin proxy.http_proxy.HttpProxyPlugin
...[redacted]... - Started 8 workers
...[redacted]... - Started server on ::1:8899

You can use single letter to customize log level. Example:

  • d = DEBUG
  • i = INFO
  • w = WARNING
  • e = ERROR
  • c = CRITICAL
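Presumably this single-letter shorthand maps onto Python's standard logging levels. A minimal illustrative mapping (not proxy.py's actual code):

```python
import logging

# Illustrative mapping from single-letter shorthand to stdlib logging levels,
# mirroring the list above; not the actual proxy.py implementation.
LOG_LEVELS = {
    'd': logging.DEBUG,
    'i': logging.INFO,
    'w': logging.WARNING,
    'e': logging.ERROR,
    'c': logging.CRITICAL,
}

def resolve_log_level(value: str) -> int:
    # Accept either the single letter or the full name, case-insensitively.
    return LOG_LEVELS[value.strip().lower()[0]]
```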

As we can see from the above logs, before starting up:

  • proxy.py tries to set the open file limit (ulimit) on the system
  • The default value used for --open-file-limit is 1024
  • The --open-file-limit flag is a no-op on Windows operating systems

See flags for full list of available configuration options.

From command line using repo source

If you are trying to run proxy.py from source code, there is no binary file named proxy in the source code.

To start proxy.py from source code follow these instructions:

  • Clone repo

    git clone https://github.com/abhinavsingh/proxy.py.git
    cd proxy.py
  • Create a Python 3 virtual env

    python3 -m venv venv
    source venv/bin/activate
  • Install deps

    make lib-dep
  • Generate proxy/common/_scm_version.py

    NOTE: The following step is not necessary for editable installs.

    This step writes the SCM-detected version to the proxy/common/_scm_version.py file.

    ./write-scm-version.sh
  • Optionally, run tests

    make
  • Run proxy.py

    python -m proxy

See Plugin Developer and Contributor Guide if you plan to work with proxy.py source code.

Docker image

Customize startup flags

By default, the docker container starts proxy.py with IPv4 networking flags:

--hostname 0.0.0.0 --port 8899

You can override these flags from the command line when starting the docker container. For example, to check the proxy.py version within the docker container, run:

❯ docker run -it \
    -p 8899:8899 \
    --rm abhinavsingh/proxy.py:latest \
    -v

Plugin Examples

  • See the plugin module for full code.
  • All the bundled plugin examples also work with https traffic.
  • Plugin examples are also bundled with the Docker image.

HTTP Proxy Plugins

ShortLinkPlugin

Add support for short links in your favorite browsers / applications.

Shortlink Plugin

Start proxy.py as:

proxy \
    --plugins proxy.plugin.ShortLinkPlugin

Now you can speed up your daily browsing experience by visiting your favorite websites using single-character domain names :). This works across all browsers.

Following short links are enabled by default:

Short Link Destination URL
a/ amazon.com
i/ instagram.com
l/ linkedin.com
f/ facebook.com
g/ google.com
t/ twitter.com
w/ web.whatsapp.com
y/ youtube.com
proxy/ localhost:8899
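Conceptually, the plugin maintains a mapping from short hosts to destination domains. An illustrative sketch using the table above (not the plugin's actual source):

```python
# Mapping from single-character hosts to destination domains,
# values copied from the table above.
SHORT_LINKS = {
    b'a': b'amazon.com',
    b'i': b'instagram.com',
    b'l': b'linkedin.com',
    b'f': b'facebook.com',
    b'g': b'google.com',
    b't': b'twitter.com',
    b'w': b'web.whatsapp.com',
    b'y': b'youtube.com',
    b'proxy': b'localhost:8899',
}

def expand(host: bytes) -> bytes:
    # Unknown hosts pass through untouched.
    return SHORT_LINKS.get(host, host)
```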

ModifyPostDataPlugin

Modifies POST request body before sending request to upstream server.

Start proxy.py as:

proxy \
    --plugins proxy.plugin.ModifyPostDataPlugin

By default, the plugin replaces the POST body content with the hard-coded b'{"key": "modified"}' and enforces Content-Type: application/json.

Verify the same using curl -x localhost:8899 -d '{"key": "value"}' http://httpbin.org/post

{
  "args": {},
  "data": "{\"key\": \"modified\"}",
  "files": {},
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Content-Length": "19",
    "Content-Type": "application/json",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "json": {
    "key": "modified"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://httpbin.org/post"
}

Note the following from the response above:

  1. POST data was modified: "data": "{\"key\": \"modified\"}". The original curl command data was {"key": "value"}.
  2. Our curl command did not add any Content-Type header, but our plugin added one: "Content-Type": "application/json". The same can also be verified by looking at the json field in the output above:
    "json": {
     "key": "modified"
    },
    
  3. Our plugin also added a Content-Length header to match the length of the modified body.
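The header bookkeeping described in the points above can be sketched in a few lines. The names here are hypothetical, not the plugin's actual code:

```python
from typing import Dict, Tuple

def rewrite_post(headers: Dict[bytes, bytes], body: bytes) -> Tuple[Dict[bytes, bytes], bytes]:
    # Replace the POST body and recompute the headers to match,
    # as described in points 1-3 above.
    new_body = b'{"key": "modified"}'
    new_headers = dict(headers)
    new_headers[b'Content-Type'] = b'application/json'
    new_headers[b'Content-Length'] = str(len(new_body)).encode()
    return new_headers, new_body
```

Note that len(b'{"key": "modified"}') is 19, matching the Content-Length: 19 visible in the httpbin.org response above.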

MockRestApiPlugin

Mock responses for your server's REST API. Use it to test and develop client-side applications without the need of an actual upstream REST API server.

Start proxy.py as:

proxy \
    --plugins proxy.plugin.ProposedRestApiPlugin

Verify mock API response using curl -x localhost:8899 http://api.example.com/v1/users/

{"count": 2, "next": null, "previous": null, "results": [{"email": "[email protected]", "groups": [], "url": "api.example.com/v1/users/1/", "username": "admin"}, {"email": "[email protected]", "groups": [], "url": "api.example.com/v1/users/2/", "username": "admin"}]}

Verify the same by inspecting proxy.py logs:

2019-09-27 12:44:02,212 - INFO - pid:7077 - access_log:1210 - ::1:64792 - GET None:None/v1/users/ - None None - 0 byte

The access log shows None:None as the server ip:port. None simply means that the server connection was never made, since the response was returned by our plugin.

Now modify ProposedRestApiPlugin to return the REST API mock responses expected by your clients.
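One hypothetical way to structure such mock responses is a path-to-payload table; the data below is illustrative, not the plugin's built-in spec:

```python
import json

# Hypothetical path-to-payload table; ProposedRestApiPlugin's real spec differs.
REST_API_SPEC = {
    b'/v1/users/': {
        'count': 1,
        'next': None,
        'previous': None,
        'results': [
            {'email': 'admin@example.com', 'groups': [],
             'url': 'api.example.com/v1/users/1/', 'username': 'admin'},
        ],
    },
}

def mock_response(path: bytes) -> bytes:
    # Serialize the canned payload for a known path.
    return json.dumps(REST_API_SPEC[path]).encode()
```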

RedirectToCustomServerPlugin

Redirects all incoming http requests to a custom web server. By default, it redirects client requests to the inbuilt web server, also running on port 8899.

Start proxy.py and enable inbuilt web server:

proxy \
    --enable-web-server \
    --plugins proxy.plugin.RedirectToCustomServerPlugin

Verify using curl -v -x localhost:8899 http://google.com

... [redacted] ...
< HTTP/1.1 404 NOT FOUND
< Server: proxy.py v1.0.0
< Connection: Close
<
* Closing connection 0

The 404 response above was returned by the proxy.py web server.

Verify the same by inspecting the logs for proxy.py. Along with the proxy request log, you should also see an http web server request log.

2019-09-24 19:09:33,602 - INFO - pid:49996 - access_log:1241 - ::1:49525 - GET /
2019-09-24 19:09:33,603 - INFO - pid:49995 - access_log:1157 - ::1:49524 - GET localhost:8899/ - 404 NOT FOUND - 70 bytes

FilterByUpstreamHostPlugin

Drops traffic by inspecting the upstream host. By default, the plugin drops traffic for facebook.com and www.facebook.com.

Start proxy.py as:

proxy \
    --plugins proxy.plugin.FilterByUpstreamHostPlugin

Verify using curl -v -x localhost:8899 http://facebook.com:

... [redacted] ...
< HTTP/1.1 418 I'm a tea pot
< Proxy-agent: proxy.py v1.0.0
* no chunk, no close, no size. Assume close to signal end
<
* Closing connection 0

The 418 I'm a tea pot response above is sent by our plugin.

Verify the same by inspecting logs for proxy.py:

2019-09-24 19:21:37,893 - ERROR - pid:50074 - handle_readables:1347 - HttpProtocolException type raised
Traceback (most recent call last):
... [redacted] ...
2019-09-24 19:21:37,897 - INFO - pid:50074 - access_log:1157 - ::1:49911 - GET None:None/ - None None - 0 bytes

CacheResponsesPlugin

Caches Upstream Server Responses.

Start proxy.py as:

proxy \
    --plugins proxy.plugin.CacheResponsesPlugin

Verify using curl -v -x localhost:8899 http://httpbin.org/get:

... [redacted] ...
< HTTP/1.1 200 OK
< Access-Control-Allow-Credentials: true
< Access-Control-Allow-Origin: *
< Content-Type: application/json
< Date: Wed, 25 Sep 2019 02:24:25 GMT
< Referrer-Policy: no-referrer-when-downgrade
< Server: nginx
< X-Content-Type-Options: nosniff
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
< Content-Length: 202
< Connection: keep-alive
<
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://httpbin.org/get"
}
* Connection #0 to host localhost left intact

Get the path to the cache file from the proxy.py logs:

... [redacted] ... - GET httpbin.org:80/get - 200 OK - 556 bytes
... [redacted] ... - Cached response at /var/folders/k9/x93q0_xn1ls9zy76m2mf2k_00000gn/T/httpbin.org-1569378301.407512.txt

Verify the contents of the cache file: cat /path/to/your/cache/httpbin.org.txt

HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Type: application/json
Date: Wed, 25 Sep 2019 02:24:25 GMT
Referrer-Policy: no-referrer-when-downgrade
Server: nginx
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Content-Length: 202
Connection: keep-alive

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://httpbin.org/get"
}
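Judging by the log line above, cached files appear to follow a hostname-timestamp naming pattern. A hypothetical helper producing similar paths (the real plugin's scheme may differ):

```python
import os
import tempfile
import time

def cache_file_path(host: str) -> str:
    # Hypothetical: mirrors the '<host>-<unix timestamp>.txt' naming
    # visible in the logs above; not the plugin's actual code.
    return os.path.join(tempfile.gettempdir(), f'{host}-{time.time()}.txt')
```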

ManInTheMiddlePlugin

Modifies upstream server responses.

Start proxy.py as:

proxy \
    --plugins proxy.plugin.ManInTheMiddlePlugin

Verify using curl -v -x localhost:8899 http://google.com:

... [redacted] ...
< HTTP/1.1 200 OK
< Content-Length: 28
<
* Connection #0 to host localhost left intact
Hello from man in the middle

Response body Hello from man in the middle is sent by our plugin.

ProxyPoolPlugin

Forward incoming proxy requests to a set of upstream proxy servers.

Let's start upstream proxies first.

Start proxy.py on ports 9000 and 9001

proxy --port 9000
proxy --port 9001

Now, start proxy.py with ProxyPoolPlugin (on the default port 8899), pointing to our upstream proxies on ports 9000 and 9001.

proxy \
    --plugins proxy.plugin.ProxyPoolPlugin \
    --proxy-pool localhost:9000 \
    --proxy-pool localhost:9001

Make a curl request via 8899 proxy:

curl -v -x localhost:8899 http://httpbin.org/get

Verify that 8899 proxy forwards requests to upstream proxies by checking respective logs.

FilterByClientIpPlugin

Rejects traffic from specific IP addresses. By default, this plugin blocks traffic from 127.0.0.1 and ::1.

Start proxy.py as:

proxy \
    --plugins proxy.plugin.FilterByClientIpPlugin

Send a request using curl -v -x localhost:8899 http://google.com:

... [redacted] ...
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 418 I'm a tea pot
< Connection: close
<
* Closing connection 0

Modify the plugin to your taste, e.g. allow only specific IP addresses.

ModifyChunkResponsePlugin

This plugin demonstrates how to modify chunked encoded responses. To do so, it uses the proxy.py core to parse the chunked encoded response, then reconstructs the response using custom hard-coded chunks, ignoring the original chunks received from the upstream server.

Start proxy.py as:

proxy \
    --plugins proxy.plugin.ModifyChunkResponsePlugin

Verify using curl -v -x localhost:8899 http://httpbin.org/stream/5:

... [redacted] ...
modify
chunk
response
plugin
* Connection #0 to host localhost left intact
* Closing connection 0

Modify ModifyChunkResponsePlugin to your taste. For example, instead of sending hard-coded chunks, parse and modify the original JSON chunks received from the upstream server.
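For reference, chunked transfer encoding frames each chunk with its size in hex and terminates the stream with a zero-length chunk (RFC 7230, section 4.1). A minimal standalone encoder, illustrative rather than proxy.py's actual parser:

```python
from typing import Iterable

def to_chunked(chunks: Iterable[bytes]) -> bytes:
    # Each chunk is prefixed by its size in hexadecimal, followed by CRLF;
    # the stream is terminated by a zero-length chunk.
    out = b''
    for chunk in chunks:
        out += b'%x\r\n' % len(chunk) + chunk + b'\r\n'
    return out + b'0\r\n\r\n'
```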

CloudflareDnsResolverPlugin

This plugin uses the Cloudflare-hosted DNS-over-HTTPS API (json).

DoH requires an HTTP/2-compliant client. Unfortunately, proxy.py does not provide that yet, so this plugin uses a dependency. Install it:

pip install "httpx[http2]"

Now start proxy.py as:

proxy \
    --plugins proxy.plugin.CloudflareDnsResolverPlugin

By default, CloudflareDnsResolverPlugin runs in security mode and provides malware protection. Use --cloudflare-dns-mode family to also enable adult content protection.

CustomDnsResolverPlugin

This plugin demonstrates how to use a custom DNS resolution implementation with proxy.py. This example plugin currently uses Python's built-in resolution mechanism. Customize the code to your taste, e.g. query your custom DNS server, implement DoH or other mechanisms.

Start proxy.py as:

proxy \
    --plugins proxy.plugin.CustomDnsResolverPlugin

CustomNetworkInterface

The HttpProxyBasePlugin.resolve_dns callback can also be used to configure the network interface that must be used as the source_address for the connection to the upstream server.

See this thread for more details.

PS: There is no plugin named CustomNetworkInterface, but CustomDnsResolverPlugin can be easily customized according to your needs.

HTTP Web Server Plugins

Reverse Proxy

Extend in-built Web Server to add Reverse Proxy capabilities.

Start proxy.py as:

proxy --enable-web-server \
    --plugins proxy.plugin.ReverseProxyPlugin

With the default configuration, the ReverseProxyPlugin plugin is equivalent to the following Nginx config:

location /get {
    proxy_pass http://httpbin.org/get;
}

Verify using curl -v localhost:8899/get:

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "localhost",
    "User-Agent": "curl/7.64.1"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://localhost/get"
}

Web Server Route

Demonstrates inbuilt web server routing using a plugin.

Start proxy.py as:

proxy --enable-web-server \
    --plugins proxy.plugin.WebServerPlugin

Verify using curl -v localhost:8899/http-route-example; it should return:

HTTP route response

Plugin Ordering

When using multiple plugins, depending upon plugin functionality, it might be worth considering the order in which plugins are passed on the command line.

Plugins are called in the same order as they are passed. For example, say we are using both FilterByUpstreamHostPlugin and RedirectToCustomServerPlugin. The idea is to drop all incoming http requests for facebook.com and www.facebook.com, and to redirect all other http requests to our inbuilt web server.

Hence, in this scenario it is important to use FilterByUpstreamHostPlugin before RedirectToCustomServerPlugin. If we enable RedirectToCustomServerPlugin before FilterByUpstreamHostPlugin, facebook requests will also get redirected to inbuilt web server, instead of being dropped.
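The ordering rule above can be sketched as a simple plugin chain, where a plugin returning None drops the request. This is a conceptual model only, not proxy.py's actual plugin API:

```python
from typing import Callable, Iterable, Optional

Request = dict  # stand-in for proxy.py's request object

def run_chain(plugins: Iterable[Callable[[Request], Optional[Request]]],
              request: Request) -> Optional[Request]:
    # Plugins run in the order given on the command line; a plugin
    # returning None drops the request, and later plugins never see it.
    for plugin in plugins:
        result = plugin(request)
        if result is None:
            return None
        request = result
    return request

def filter_by_upstream_host(r: Request) -> Optional[Request]:
    # Analogue of FilterByUpstreamHostPlugin: drop facebook traffic.
    return None if 'facebook.com' in r['host'] else r

def redirect_to_custom_server(r: Request) -> Request:
    # Analogue of RedirectToCustomServerPlugin: rewrite the upstream host.
    return {**r, 'host': 'localhost:8899'}
```

With the filter first, facebook requests are dropped; with the redirect first, they are rewritten before the filter ever sees the facebook host.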

End-to-End Encryption

By default, proxy.py uses the http protocol for communication with clients, e.g. curl, browser. To enable end-to-end encryption using tls / https, first generate certificates. Check out the repository and run:

make https-certificates

Start proxy.py as:

proxy \
    --cert-file https-cert.pem \
    --key-file https-key.pem

Verify using curl -x https://localhost:8899 --proxy-cacert https-cert.pem https://httpbin.org/get:

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://httpbin.org/get"
}

If you want to avoid passing the --proxy-cacert flag, also consider signing the generated SSL certificates. Example:

First, generate CA certificates:

make ca-certificates

Then, sign SSL certificate:

make sign-https-certificates

Now restart the server with the --cert-file https-signed-cert.pem flag. Note that you must also trust the generated ca-cert.pem in your system keychain.

TLS Interception

By default, proxy.py will not decrypt https traffic between the client and the server. To enable TLS interception, first generate root CA certificates:

make ca-certificates

Let's also enable CacheResponsesPlugin so that we can verify the decrypted response from the server. Start proxy.py as:

proxy \
    --plugins proxy.plugin.CacheResponsesPlugin \
    --ca-key-file ca-key.pem \
    --ca-cert-file ca-cert.pem \
    --ca-signing-key-file ca-signing-key.pem

NOTE: Also provide an explicit CA bundle path needed for validation of peer certificates. See the --ca-file flag.

Verify TLS interception using curl

curl -v -x localhost:8899 --cacert ca-cert.pem https://httpbin.org/get
*  issuer: C=US; ST=CA; L=SanFrancisco; O=proxy.py; OU=CA; CN=Proxy PY CA; [email protected]
*  SSL certificate verify ok.
> GET /get HTTP/1.1
... [redacted] ...
< Connection: keep-alive
<
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://httpbin.org/get"
}

The issuer line confirms that the response was intercepted.

Also verify the contents of the cached response file. Get the path to the cache file from the proxy.py logs.

❯ cat /path/to/your/tmp/directory/httpbin.org-1569452863.924174.txt

HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Type: application/json
Date: Wed, 25 Sep 2019 23:07:05 GMT
Referrer-Policy: no-referrer-when-downgrade
Server: nginx
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Content-Length: 202
Connection: keep-alive

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://httpbin.org/get"
}

Voila!!! If you remove the CA flags, encrypted data will be found in the cached file instead of plain text.

Now use CA flags with other plugin examples to see them work with https traffic.

TLS Interception With Docker

Important notes about TLS Interception with Docker container:

  • Since v2.2.0, proxy.py docker container also ships with openssl. This allows proxy.py to generate certificates on the fly for TLS Interception.

  • For security reasons, proxy.py docker container does not ship with CA certificates.

Here is how to start a proxy.py docker container with TLS Interception:

  1. Generate CA certificates on host computer

    make ca-certificates
  2. Copy all generated certificates into a separate directory. We'll later mount this directory into our docker container

    mkdir /tmp/ca-certificates
    cp ca-cert.pem ca-key.pem ca-signing-key.pem /tmp/ca-certificates
  3. Start docker container

    docker run -it --rm \
        -v /tmp/ca-certificates:/tmp/ca-certificates \
        -p 8899:8899 \
        abhinavsingh/proxy.py:latest \
        --hostname 0.0.0.0 \
        --plugins proxy.plugin.CacheResponsesPlugin \
        --ca-key-file /tmp/ca-certificates/ca-key.pem \
        --ca-cert-file /tmp/ca-certificates/ca-cert.pem \
        --ca-signing-key-file /tmp/ca-certificates/ca-signing-key.pem
    • -v /tmp/ca-certificates:/tmp/ca-certificates flag mounts our CA certificate directory in container environment
    • --plugins proxy.plugin.CacheResponsesPlugin enables CacheResponsesPlugin so that we can inspect intercepted traffic
    • --ca-* flags enable TLS Interception.
  4. From another terminal, try TLS Interception using curl. You can omit --cacert flag if CA certificate is already trusted by the system.

    curl -v \
        --cacert ca-cert.pem \
        -x 127.0.0.1:8899 \
        https://httpbin.org/get
  5. Verify issuer field from response headers.

    * Server certificate:
    *  subject: CN=httpbin.org; C=NA; ST=Unavailable; L=Unavailable; O=Unavailable; OU=Unavailable
    *  start date: Jun 17 09:26:57 2020 GMT
    *  expire date: Jun 17 09:26:57 2022 GMT
    *  subjectAltName: host "httpbin.org" matched cert's "httpbin.org"
    *  issuer: CN=example.com
    *  SSL certificate verify ok.
  6. Back in the docker terminal, copy the response dump path from the logs.

    ...[redacted]... [I] access_log:338 - 172.17.0.1:56498 - CONNECT httpbin.org:443 - 1031 bytes - 1216.70 ms
    ...[redacted]... [I] close:49 - Cached response at /tmp/httpbin.org-ae1a927d064e4ab386ea319eb38fe251.txt
  7. In another terminal, cat the response dump:

    docker exec -it $(docker ps | grep proxy.py | awk '{ print $1 }') cat /tmp/httpbin.org-ae1a927d064e4ab386ea319eb38fe251.txt
    HTTP/1.1 200 OK
    ...[redacted]...
    {
      ...[redacted]...,
      "url": "http://httpbin.org/get"
    }

Proxy Over SSH Tunnel

This is a WIP and may not work as documented

Requires paramiko to work.

See requirements-tunnel.txt

Proxy Remote Requests Locally

                        |
+------------+          |            +----------+
|   LOCAL    |          |            |  REMOTE  |
|   HOST     | <== SSH ==== :8900 == |  SERVER  |
+------------+          |            +----------+
:8899 proxy.py          |
                        |
                     FIREWALL
                  (allow tcp/22)

What

Proxy HTTP(s) requests made on a remote server through proxy.py server running on localhost.

How

  • Requested remote port is forwarded over the SSH connection.
  • proxy.py running on the localhost handles and responds to remote proxy requests.

Requirements

  1. localhost MUST have SSH access to the remote server
  2. remote server MUST be configured to proxy HTTP(s) requests through the forwarded port number e.g. :8900.
    • remote and localhost ports CAN be same e.g. :8899.
    • :8900 is chosen in ascii art for differentiation purposes.

Try it

Start proxy.py as:

# On localhost
proxy --enable-tunnel \
    --tunnel-username username \
    --tunnel-hostname ip.address.or.domain.name \
    --tunnel-port 22 \
    --tunnel-remote-host 127.0.0.1 \
    --tunnel-remote-port 8899

Make an HTTP proxy request on the remote server and verify that the response contains the public IP address of localhost as origin:

# On remote
curl -x 127.0.0.1:8899 http://httpbin.org/get
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "origin": "x.x.x.x, y.y.y.y",
  "url": "https://httpbin.org/get"
}

Also, verify that the proxy.py logs on localhost contain the remote IP as the client IP.

access_log:328 - remote:52067 - GET httpbin.org:80

Proxy Local Requests Remotely

                        |
+------------+          |     +----------+
|   LOCAL    |          |     |  REMOTE  |
|   HOST     | === SSH =====> |  SERVER  |
+------------+          |     +----------+
                        |     :8899 proxy.py
                        |
                    FIREWALL
                 (allow tcp/22)

Embed proxy.py

Blocking Mode

Start proxy.py in embedded mode with the default configuration by using the proxy.main method. Example:

import proxy

if __name__ == '__main__':
  proxy.main()

Customize startup flags by passing a list of input arguments:

import proxy

if __name__ == '__main__':
  proxy.main([
    '--hostname', '::1',
    '--port', '8899'
  ])

or, customize startup flags by passing them as kwargs:

import ipaddress
import proxy

if __name__ == '__main__':
  proxy.main(
    hostname=ipaddress.IPv6Address('::1'),
    port=8899
  )

Note that:

  1. Calling main is simply equivalent to starting proxy.py from the command line.
  2. main will block until proxy.py shuts down.

Non-blocking Mode

Start proxy.py in non-blocking embedded mode with the default configuration by using the Proxy context manager. Example:

import proxy

if __name__ == '__main__':
  with proxy.Proxy([]) as p:
    # ... your logic here ...

Note that:

  1. Proxy is similar to main, except Proxy does not block.
  2. Internally Proxy is a context manager.
  3. It will start proxy.py when called and will shut it down once the scope ends.
  4. Just like main, startup flags with Proxy can be customized either by passing flags as a list of input arguments, e.g. Proxy(['--port', '8899']), or by passing flags as kwargs, e.g. Proxy(port=8899).

Ephemeral Port

Use --port=0 to bind proxy.py on a random port allocated by the kernel.

In embedded mode, you can access this port. Example:

import proxy

if __name__ == '__main__':
  with proxy.Proxy([]) as p:
    print(p.acceptors.flags.port)

acceptors.flags.port will give you access to the random port allocated by the kernel.

Loading Plugins

Users can use --plugins flag multiple times to load multiple plugins. See Unable to load plugins if you are running into issues.

When using embedded mode, you have a few more options. Example:

  1. Provide the fully-qualified name of the plugin class as bytes to the proxy.main method or the proxy.Proxy context manager.
  2. Provide the type instance of the plugin class. This is especially useful if you plan to define plugins at runtime.

For example, load a single plugin using the --plugins flag:

import proxy

if __name__ == '__main__':
  proxy.main([
    '--plugins', 'proxy.plugin.CacheResponsesPlugin',
  ])

For simplicity, you can also pass the list of plugins as a keyword argument to proxy.main or the Proxy constructor.

Example:

import proxy
from proxy.plugin import FilterByUpstreamHostPlugin

if __name__ == '__main__':
  proxy.main([], plugins=[
    b'proxy.plugin.CacheResponsesPlugin',
    FilterByUpstreamHostPlugin,
  ])

Unit testing with proxy.py

proxy.TestCase

To set up and tear down proxy.py for your Python unittest classes, simply use proxy.TestCase instead of unittest.TestCase. Example:

import proxy

class TestProxyPyEmbedded(proxy.TestCase):

    def test_my_application_with_proxy(self) -> None:
        self.assertTrue(True)

Note that:

  1. proxy.TestCase overrides the unittest.TestCase.run() method to set up and tear down proxy.py.
  2. The proxy.py server will listen on a random available port on the system. This random port is available as self.PROXY.acceptors.flags.port within your test cases.
  3. Only a single acceptor and worker is started by default (--num-workers 1 --num-acceptors 1) for faster setup and tear down.
  4. Most importantly, proxy.TestCase also ensures the proxy.py server is up and running before proceeding with the execution of tests. By default, proxy.TestCase will wait up to 10 seconds for the proxy.py server to start; on failure, a TimeoutError exception is raised.
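A readiness check similar in spirit to what proxy.TestCase does can be sketched with a simple polling loop. This is illustrative, not proxy.py source:

```python
import socket
import time

def wait_for_port(port: int, host: str = '127.0.0.1', timeout: float = 10.0) -> None:
    # Poll until something accepts connections on (host, port),
    # raising TimeoutError once `timeout` seconds have elapsed.
    deadline = time.monotonic() + timeout
    while True:
        try:
            socket.create_connection((host, port), timeout=0.1).close()
            return
        except OSError:
            if time.monotonic() > deadline:
                raise TimeoutError(f'port {port} not ready after {timeout}s')
            time.sleep(0.05)
```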

Override startup flags

To override default startup flags, define a PROXY_PY_STARTUP_FLAGS variable in your test class. Example:

class TestProxyPyEmbedded(TestCase):

    PROXY_PY_STARTUP_FLAGS = [
        '--num-workers', '2',
        '--num-acceptors', '1',
        '--enable-web-server',
    ]

    def test_my_application_with_proxy(self) -> None:
        self.assertTrue(True)

See test_embed.py for a full working example.

With unittest.TestCase

If for some reason you are unable to directly use proxy.TestCase, simply override unittest.TestCase.run yourself to set up and tear down proxy.py. Example:

import unittest
from typing import Any, Optional

import proxy


class TestProxyPyEmbedded(unittest.TestCase):

    def test_my_application_with_proxy(self) -> None:
        self.assertTrue(True)

    def run(self, result: Optional[unittest.TestResult] = None) -> Any:
        with proxy.start([
                '--num-workers', '1',
                '--num-acceptors', '1',
                '--port', '... random port ...']):
            super().run(result)

Or simply set up / tear down proxy.py within the setUpClass and tearDownClass class methods.

Utilities

TCP Sockets

new_socket_connection

Attempts to create an IPv4 connection first, then IPv6, and finally a dual-stack connection to the provided address.

>>> conn = new_socket_connection(('httpbin.org', 80))
>>> ...[ use connection ]...
>>> conn.close()
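The fallback behaviour can be pictured with a small stdlib-only sketch. This is an illustration of the idea, not proxy.py's actual implementation, and it exercises itself against a throwaway local listener:

```python
import socket
from typing import Tuple


def connect_with_fallback(addr: Tuple[str, int]) -> socket.socket:
    """Try an IPv4 connection first, then IPv6, then a dual-stack lookup."""
    for family in (socket.AF_INET, socket.AF_INET6):
        sock = socket.socket(family, socket.SOCK_STREAM)
        try:
            sock.connect(addr)
            return sock
        except OSError:
            sock.close()
    # Final attempt: let getaddrinfo() pick any usable address family.
    return socket.create_connection(addr)


# Exercise the helper against a local listener on an ephemeral port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(('127.0.0.1', 0))
listener.listen(1)
conn = connect_with_fallback(listener.getsockname())
conn.close()
listener.close()
```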

socket_connection

socket_connection is a convenient decorator + context manager around new_socket_connection which ensures conn.close() is called implicitly.

As a context manager:

>>> with socket_connection(('httpbin.org', 80)) as conn:
>>>   ... [ use connection ] ...

As a decorator:

>>> @socket_connection(('httpbin.org', 80))
>>> def my_api_call(conn, *args, **kwargs):
>>>   ... [ use connection ] ...
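One way such a dual-purpose helper can be built on the stdlib is a class that implements both the context-manager protocol and `__call__`. This is a sketch of the pattern only, not proxy.py's source; `tcp_connection` is an illustrative name:

```python
import functools
import socket
from typing import Any, Callable, Tuple


class tcp_connection:
    """Context manager + decorator that opens a TCP connection and
    guarantees it is closed afterwards."""

    def __init__(self, addr: Tuple[str, int]) -> None:
        self.addr = addr
        self.conn: socket.socket

    def __enter__(self) -> socket.socket:
        self.conn = socket.create_connection(self.addr)
        return self.conn

    def __exit__(self, *exc: Any) -> None:
        self.conn.close()

    def __call__(self, func: Callable[..., Any]) -> Callable[..., Any]:
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            # Connection is injected as the first positional argument.
            with self as conn:
                return func(conn, *args, **kwargs)
        return wrapper


# Exercise both modes against a local listener.
listener = socket.create_server(('127.0.0.1', 0))
addr = listener.getsockname()

with tcp_connection(addr) as conn:
    assert conn.fileno() != -1


@tcp_connection(addr)
def peer_name(conn: socket.socket) -> Tuple[str, int]:
    return conn.getpeername()


host, port = peer_name()
```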

HTTP Client

build_http_request

  • Generate HTTP GET request

    >>> build_http_request(b'GET', b'/')
    b'GET / HTTP/1.1\r\n\r\n'
  • Generate HTTP GET request with headers

    >>> build_http_request(b'GET', b'/',
            headers={b'Connection': b'close'})
    b'GET / HTTP/1.1\r\nConnection: close\r\n\r\n'
  • Generate HTTP POST request with headers and body

    >>> import json
    >>> build_http_request(b'POST', b'/form',
            headers={b'Content-type': b'application/json'},
            body=proxy.bytes_(json.dumps({'email': '[email protected]'})))
        b'POST /form HTTP/1.1\r\nContent-type: application/json\r\n\r\n{"email": "[email protected]"}'
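The bytes produced above are plain HTTP/1.1 wire format: a request line, CRLF-separated headers, a blank line, then the body. A minimal stdlib re-implementation makes the layout explicit (illustrative only; proxy.py's real builder handles many more cases):

```python
from typing import Dict, Optional


def build_request(method: bytes, url: bytes,
                  headers: Optional[Dict[bytes, bytes]] = None,
                  body: Optional[bytes] = None) -> bytes:
    """Assemble request line, headers, blank line, then optional body."""
    req = b'%s %s HTTP/1.1' % (method, url)
    for key, value in (headers or {}).items():
        req += b'\r\n%s: %s' % (key, value)
    req += b'\r\n\r\n'
    if body:
        req += body
    return req


assert build_request(b'GET', b'/') == b'GET / HTTP/1.1\r\n\r\n'
assert build_request(
    b'GET', b'/', headers={b'Connection': b'close'},
) == b'GET / HTTP/1.1\r\nConnection: close\r\n\r\n'
```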

build_http_response

build_http_response(
    status_code: int,
    protocol_version: bytes = HTTP_1_1,
    reason: Optional[bytes] = None,
    headers: Optional[Dict[bytes, bytes]] = None,
    body: Optional[bytes] = None) -> bytes
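The signature above maps directly onto the HTTP/1.1 response wire format: a status line, optional headers, a blank line, and an optional body. A hedged stdlib sketch of what such a builder emits (illustrative; the real build_http_response may also add headers such as Content-Length):

```python
from typing import Dict, Optional


def build_response(status_code: int,
                   protocol_version: bytes = b'HTTP/1.1',
                   reason: Optional[bytes] = None,
                   headers: Optional[Dict[bytes, bytes]] = None,
                   body: Optional[bytes] = None) -> bytes:
    """Assemble status line, headers, blank line, then optional body."""
    resp = b'%s %d' % (protocol_version, status_code)
    if reason:
        resp += b' %s' % reason
    for key, value in (headers or {}).items():
        resp += b'\r\n%s: %s' % (key, value)
    resp += b'\r\n\r\n'
    if body:
        resp += body
    return resp


assert build_response(200, reason=b'OK') == b'HTTP/1.1 200 OK\r\n\r\n'
```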

PKI

API Usage

  • gen_private_key

    gen_private_key(
        key_path: str,
        password: str,
        bits: int = 2048,
        timeout: int = 10) -> bool
  • gen_public_key

    gen_public_key(
        public_key_path: str,
        private_key_path: str,
        private_key_password: str,
        subject: str,
        alt_subj_names: Optional[List[str]] = None,
        extended_key_usage: Optional[str] = None,
        validity_in_days: int = 365,
        timeout: int = 10) -> bool
  • remove_passphrase

    remove_passphrase(
        key_in_path: str,
        password: str,
        key_out_path: str,
        timeout: int = 10) -> bool
  • gen_csr

    gen_csr(
        csr_path: str,
        key_path: str,
        password: str,
        crt_path: str,
        timeout: int = 10) -> bool
  • sign_csr

    sign_csr(
        csr_path: str,
        crt_path: str,
        ca_key_path: str,
        ca_key_password: str,
        ca_crt_path: str,
        serial: str,
        alt_subj_names: Optional[List[str]] = None,
        extended_key_usage: Optional[str] = None,
        validity_in_days: int = 365,
        timeout: int = 10) -> bool

See pki.py and test_pki.py for usage examples.

CLI Usage

Use the proxy.common.pki module for:

  1. Generation of public and private keys
  2. Generating CSR requests
  3. Signing CSR requests using a custom CA
python -m proxy.common.pki -h
usage: pki.py [-h] [--password PASSWORD] [--private-key-path PRIVATE_KEY_PATH]
              [--public-key-path PUBLIC_KEY_PATH] [--subject SUBJECT]
              action

proxy.py v2.2.0 : PKI Utility

positional arguments:
  action                Valid actions: remove_passphrase, gen_private_key,
                        gen_public_key, gen_csr, sign_csr

optional arguments:
  -h, --help            show this help message and exit
  --password PASSWORD   Password to use for encryption. Default: proxy.py
  --private-key-path PRIVATE_KEY_PATH
                        Private key path
  --public-key-path PUBLIC_KEY_PATH
                        Public key path
  --subject SUBJECT     Subject to use for public key generation. Default:
                        /CN=example.com

Internal Documentation

Code is well documented. You have a few options to browse the internal class hierarchy and documentation:

  1. Visit proxypy.readthedocs.io
  2. Build and open docs locally using make lib-doc
  3. Use pydoc3 locally using pydoc3 proxy

Run Dashboard

The dashboard is currently under development and not yet bundled with pip packages. To run the dashboard, you must check out the source.

The dashboard is written in TypeScript and SCSS, so let's build it first using:

make dashboard

Also build the embedded Chrome DevTools if you plan on using it:

make devtools

Now start proxy.py with the dashboard plugin, overriding the root directory for the static server:

proxy --enable-dashboard --static-server-dir dashboard/public
...[redacted]... - Loaded plugin proxy.http.server.HttpWebServerPlugin
...[redacted]... - Loaded plugin proxy.dashboard.dashboard.ProxyDashboard
...[redacted]... - Loaded plugin proxy.dashboard.inspect_traffic.InspectTrafficPlugin
...[redacted]... - Loaded plugin proxy.http.inspector.DevtoolsProtocolPlugin
...[redacted]... - Loaded plugin proxy.http.proxy.HttpProxyPlugin
...[redacted]... - Listening on ::1:8899
...[redacted]... - Core Event enabled

Currently, enabling dashboard will also enable all the dashboard plugins.

Visit dashboard:

open http://localhost:8899/dashboard/

Inspect Traffic

This is a WIP and may not work as documented

Wait for the embedded Chrome Dev Console to load. Currently, details about all traffic flowing through proxy.py are pushed to the Inspect Traffic tab. However, received payloads are not yet integrated with the embedded developer console.

Current functionality can be verified by opening the Dev Console of the dashboard and inspecting the websocket connection that the dashboard has established with the proxy.py server.

Proxy.Py Dashboard Inspect Traffic

Chrome DevTools Protocol

For scenarios where you want direct access to Chrome DevTools protocol websocket endpoint, start proxy.py as:

proxy --enable-devtools --enable-events

Now point your CDT instance to ws://localhost:8899/devtools.

Frequently Asked Questions

Stable vs Develop

  • master branch contains the latest stable code and is available via the PyPI repository and as Docker containers via hub.docker.com

    Issues reported against stable releases are treated with top priority. However, we currently don't back-port fixes into older releases. For example, if you reported an issue in v2.3.1 but the master branch now contains v2.4.0rc1, the fix will land in v2.4.0rc2.

  • develop branch contains cutting-edge changes

    The development branch is kept stable most of the time. But if you want 100% reliability and are serving users in a production environment, ALWAYS use the stable version.

Release Schedule

A vX.Y.ZrcN pull request is created once a month which merges develop → master. Find below how code flows from a pull request to the next stable release.

  1. Development release is deployed from develop → test.pypi.org after every pull request merge

  2. Alpha release is deployed from develop → pypi.org before merging the vX.Y.ZrcN pull request from develop → master. There can be multiple alpha releases made before merging the rc pull request

  3. Beta release is deployed from master → pypi.org. Beta releases are made in preparation of rc releases and can be skipped if unnecessary

  4. Release candidate is deployed from master → pypi.org. Release candidates are always made available before the final stable release

  5. Stable release is deployed from master → pypi.org

Threads vs Threadless

v1.x

proxy.py used to spawn new threads for handling client requests.

v2.0+

proxy.py added support for threadless execution of client requests using asyncio.

v2.4.0+

Threadless execution was turned ON by default for Python 3.8+ on macOS and Linux environments.

proxy.py threadless execution has been reported safe on these environments by our users. If you run into trouble, fall back to threaded mode using the --threaded flag.

For Windows and Python < 3.8, you can still try out threadless mode by starting proxy.py with the --threadless flag.

If threadless works for you, consider sending a PR by editing _env_threadless_compliant method in the proxy/common/constants.py file.

SyntaxError: invalid syntax

proxy.py is strictly typed and uses Python typing annotations. Example:

>>> my_strings : List[str] = []
>>> #############^^^^^^^^^#####

Hence a Python version that understands typing annotations is required. Make sure you are using Python 3.6+.

Verify the version before running proxy.py:

❯ python --version

All typing annotations can be replaced with comment-only annotations. Example:

>>> my_strings = [] # List[str]
>>> ################^^^^^^^^^^^

This would enable proxy.py to run on Python versions before 3.6, even on 2.7. However, since all current and future Python versions support typing annotations, supporting older versions has not been pursued.

Unable to load plugins

Make sure plugin modules are discoverable by adding them to PYTHONPATH. Example:

PYTHONPATH=/path/to/my/app proxy --plugins my_app.proxyPlugin

...[redacted]... - Loaded plugin proxy.HttpProxyPlugin
...[redacted]... - Loaded plugin my_app.proxyPlugin

OR, simply pass the fully-qualified path as a parameter, e.g.

proxy --plugins /path/to/my/app/my_app.proxyPlugin

Here is a quick working example:

  • Contents of /tmp/plug folder
$ ls -1 /tmp/plug
my_plugin.py
  • Custom MyPlugin class
$ cat /tmp/plug/my_plugin.py
from proxy.http.proxy import HttpProxyBasePlugin


class MyPlugin(HttpProxyBasePlugin):
  pass

This is an empty plugin for demonstrating external plugin usage. You must implement the necessary methods to make your plugins work for real traffic.

  • Start proxy.py with MyPlugin
$ PYTHONPATH=/tmp/plug proxy --plugins my_plugin.MyPlugin
...[redacted]... - Loaded plugin proxy.http.proxy.HttpProxyPlugin
...[redacted]... - Loaded plugin my_plugin.MyPlugin
...[redacted]... - Listening on ::1:8899

Unable to connect with proxy.py from remote host

Make sure proxy.py is listening on the correct network interface. Try the following flags:

  • For IPv6 --hostname ::
  • For IPv4 --hostname 0.0.0.0

Basic auth not working with a browser

Most likely it's a browser integration issue with the system keychain.

  • First, verify that basic auth is working using curl:

    curl -v -x username:[email protected]:8899 https://httpbin.org/get

  • See this thread for further details.

Docker image not working on macOS

It's a compatibility issue with vpnkit.

See moby/vpnkit exhausts docker resources and Connection refused: The proxy could not connect for some background.

GCE log viewer integration for proxy.py

A starter fluentd.conf template is available.

  1. Copy this configuration file as proxy.py.conf under /etc/google-fluentd/config.d/

  2. Update the path field to the log file path used with the --log-file flag. By default, the /tmp/proxy.log path is tailed.

  3. Reload google-fluentd:

    sudo service google-fluentd restart

Now proxy.py logs can be browsed using GCE log viewer.

ValueError: filedescriptor out of range in select

proxy.py is made to handle thousands of connections per second without any socket leaks.

  1. Make use of the --open-file-limit flag to customize ulimit -n.
  2. Make sure to adjust the --backlog flag for higher concurrency.

If nothing helps, open an issue with the requests-per-second rate you sent and the output of the following debug script:

./helper/monitor_open_files.sh <proxy-py-pid>

None:None in access logs

Sometimes you may see None:None in access logs. It simply means that an upstream server connection was never established, i.e. upstream_host=None, upstream_port=None.

There can be several reasons for no upstream connection; a few obvious ones include:

  1. Client established a connection but never completed the request.
  2. A plugin returned a response prematurely, avoiding a connection to the upstream server.

OSError when wrapping client for TLS Interception

With TLS interception on, you might occasionally see the following exceptions:

2021-11-06 23:33:34,540 - pid:91032 [E] server.intercept:678 - OSError when wrapping client
Traceback (most recent call last):
  ...[redacted]...
  ...[redacted]...
  ...[redacted]...
ssl.SSLError: [SSL: TLSV1_ALERT_UNKNOWN_CA] tlsv1 alert unknown ca (_ssl.c:997)
...[redacted]... - CONNECT oauth2.googleapis.com:443 - 0 bytes - 272.08 ms

Some clients may throw TLSV1_ALERT_UNKNOWN_CA when they cannot verify the server certificate because it is signed by an unknown issuer CA, which is exactly the case during TLS interception. Clients may refuse such certificates for a variety of reasons, e.g. certificate pinning.

Another exception you might see is CERTIFICATE_VERIFY_FAILED:

2021-11-06 23:36:02,002 - pid:91033 [E] handler.handle_readables:293 - Exception while receiving from client connection <socket.socket fd=28, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 8899), raddr=('127.0.0.1', 51961)> with reason SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)')
Traceback (most recent call last):
  ...[redacted]...
  ...[redacted]...
  ...[redacted]...
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)
...[redacted]... - CONNECT init.push.apple.com:443 - 0 bytes - 892.99 ms

In the future, we might support serving the original HTTPS content for such clients while still performing TLS interception in the background. This would keep such clients happy without impacting our ability to intercept. Unfortunately, this feature is not yet available.

Another example with SSLEOFError exception:

2021-11-06 23:46:40,446 - pid:91034 [E] server.intercept:678 - OSError when wrapping client
Traceback (most recent call last):
  ...[redacted]...
  ...[redacted]...
  ...[redacted]...
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:997)
...[redacted]... - CONNECT stock.adobe.io:443 - 0 bytes - 685.32 ms

Plugin Developer and Contributor Guide

High level architecture

                        +-------------+
                        |             |
                        |  Proxy([])  |
                        |             |
                        +------+------+
                               |
                               |
                   +-----------v--------------+
                   |                          |
                   |    AcceptorPool(...)     |
                   |                          |
                   +------------+-------------+
                                |
+-----------------+             |           +-----------------+
|                 |             |           |                 |
|   Acceptor(..)  <-------------+----------->  Acceptor(..)   |
|                 |                         |                 |
+---+-------------+                         +---------+-------+
    |                                                 |
    |                                                 |
    |    +------++------++------++------++------+     |
    |    |      ||      ||      ||      ||      |     |
    +---->      ||      ||      ||      ||      <-----+
         |      ||      ||      ||      ||      |
         +------++------++------++------++------+
                Threadless Worker Processes

proxy.py is made with performance in mind. By default, proxy.py will try to utilize all CPU cores available to it for accepting new client connections. This is achieved by starting an AcceptorPool which listens on the configured server port. AcceptorPool then starts Acceptor processes (--num-acceptors) to accept incoming client connections. Alongside, if --threadless is enabled, a ThreadlessPool is set up, which starts Threadless processes (--num-workers) to handle the incoming client connections.

Each Acceptor process delegates the accepted client connection to a threadless process via the Work class. Currently, HttpProtocolHandler is the default work class.

HttpProtocolHandler simply assumes that incoming clients will follow the HTTP specification. Specific HTTP proxy and HTTP server implementations are written as plugins of HttpProtocolHandler.

See the documentation of HttpProtocolHandlerPlugin for available lifecycle hooks. Use HttpProtocolHandlerPlugin to add new features for http(s) clients. For example, see HttpWebServerPlugin.
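The acceptor model above can be illustrated with a stdlib-only sketch: a parent binds the listening socket, and forked acceptor processes block in accept() on the shared descriptor, letting the kernel distribute connections. This is an illustration of the pre-fork accept pattern only, not proxy.py's actual AcceptorPool code, and it assumes a Unix-like OS (fork start method):

```python
import multiprocessing
import os
import socket


def acceptor(listener: socket.socket) -> None:
    # Each acceptor blocks in accept() on the *shared* listening socket;
    # the kernel hands each incoming connection to one blocked acceptor.
    conn, _ = listener.accept()
    with conn:
        conn.sendall(b'served by pid %d' % os.getpid())


ctx = multiprocessing.get_context('fork')  # children inherit the listener fd
listener = socket.create_server(('127.0.0.1', 0))
port = listener.getsockname()[1]

acceptors = [ctx.Process(target=acceptor, args=(listener,)) for _ in range(2)]
for proc in acceptors:
    proc.start()

# Two client connections; each acceptor serves exactly one of them.
replies = []
for _ in range(2):
    with socket.create_connection(('127.0.0.1', port)) as client:
        client.settimeout(5)
        replies.append(client.recv(64))

for proc in acceptors:
    proc.join()
listener.close()
```

proxy.py layers its Work / executor machinery on top of this basic idea, so accepted connections can also be handed off to separate threadless worker processes.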

Everything is a plugin

Within proxy.py everything is a plugin.

  • We enable proxy server plugins using the --plugins flag. The proxy server HttpProxyPlugin is itself a plugin of HttpProtocolHandler. Further, the proxy server allows plugins through the HttpProxyBasePlugin specification.

  • All of the proxy server plugin examples implement HttpProxyBasePlugin. See the documentation of HttpProxyBasePlugin for available lifecycle hooks. Use HttpProxyBasePlugin to modify the behavior of the http(s) proxy protocol between client and upstream server. For example, see FilterByUpstreamHostPlugin.

  • We also enable the inbuilt web server using --enable-web-server. The web server HttpWebServerPlugin is a plugin of HttpProtocolHandler and implements the HttpProtocolHandlerPlugin specification.

  • There is also a --disable-http-proxy flag, which disables the inbuilt proxy server. Use this flag together with --enable-web-server to run proxy.py as a programmable http(s) server.

Development Guide

Setup Local Environment

Contributors must start proxy.py from source to verify and develop new features / fixes.

See Run proxy.py from command line using repo source for details.

WARNING: On macOS you must install Python using pyenv, as Python installed via Homebrew tends to be problematic. See the linked thread for more details.

Setup Git Hooks

Pre-commit hook ensures tests are passing.

  1. cd /path/to/proxy.py
  2. ln -s $(pwd)/git-pre-commit .git/hooks/pre-commit

Pre-push hook ensures lint and tests are passing.

  1. cd /path/to/proxy.py
  2. ln -s $(pwd)/git-pre-push .git/hooks/pre-push

Sending a Pull Request

Every pull request is tested using GitHub actions.

See GitHub workflow for list of tests.

Benchmarks

See Benchmark directory on how to run benchmark comparisons with other OSS web servers.

To run standalone benchmark for proxy.py, use the following command from repo root:

./helper/benchmark.sh

Flags

proxy -h
usage: -m [-h] [--enable-events] [--enable-conn-pool] [--threadless]
          [--threaded] [--num-workers NUM_WORKERS] [--local-executor]
          [--backlog BACKLOG] [--hostname HOSTNAME] [--port PORT]
          [--unix-socket-path UNIX_SOCKET_PATH]
          [--num-acceptors NUM_ACCEPTORS] [--version] [--log-level LOG_LEVEL]
          [--log-file LOG_FILE] [--log-format LOG_FORMAT]
          [--open-file-limit OPEN_FILE_LIMIT]
          [--plugins PLUGINS [PLUGINS ...]] [--enable-dashboard]
          [--work-klass WORK_KLASS] [--pid-file PID_FILE]
          [--enable-proxy-protocol]
          [--client-recvbuf-size CLIENT_RECVBUF_SIZE] [--key-file KEY_FILE]
          [--timeout TIMEOUT] [--server-recvbuf-size SERVER_RECVBUF_SIZE]
          [--disable-http-proxy] [--disable-headers DISABLE_HEADERS]
          [--ca-key-file CA_KEY_FILE] [--ca-cert-dir CA_CERT_DIR]
          [--ca-cert-file CA_CERT_FILE] [--ca-file CA_FILE]
          [--ca-signing-key-file CA_SIGNING_KEY_FILE] [--cert-file CERT_FILE]
          [--auth-plugin AUTH_PLUGIN] [--basic-auth BASIC_AUTH]
          [--cache-dir CACHE_DIR]
          [--filtered-upstream-hosts FILTERED_UPSTREAM_HOSTS]
          [--enable-web-server] [--enable-static-server]
          [--static-server-dir STATIC_SERVER_DIR]
          [--min-compression-length MIN_COMPRESSION_LENGTH]
          [--pac-file PAC_FILE] [--pac-file-url-path PAC_FILE_URL_PATH]
          [--proxy-pool PROXY_POOL]
          [--filtered-client-ips FILTERED_CLIENT_IPS]
          [--filtered-url-regex-config FILTERED_URL_REGEX_CONFIG]
          [--cloudflare-dns-mode CLOUDFLARE_DNS_MODE]

proxy.py v2.3.2.dev190+ge60d80d.d20211124

options:
  -h, --help            show this help message and exit
  --enable-events       Default: False. Enables core to dispatch lifecycle
                        events. Plugins can be used to subscribe for core
                        events.
  --enable-conn-pool    Default: False. (WIP) Enable upstream connection
                        pooling.
  --threadless          Default: True. Enabled by default on Python 3.8+ (mac,
                        linux). When disabled a new thread is spawned to
                        handle each client connection.
  --threaded            Default: False. Disabled by default on Python < 3.8
                        and windows. When enabled a new thread is spawned to
                        handle each client connection.
  --num-workers NUM_WORKERS
                        Defaults to number of CPU cores.
  --local-executor      Default: False. Disabled by default. When enabled
                        acceptors will make use of local (same process)
                        executor instead of distributing load across remote
                        (other process) executors. Enable this option to
                        achieve CPU affinity between acceptors and executors,
                        instead of using underlying OS kernel scheduling
                        algorithm.
  --backlog BACKLOG     Default: 100. Maximum number of pending connections to
                        proxy server
  --hostname HOSTNAME   Default: ::1. Server IP address.
  --port PORT           Default: 8899. Server port.
  --unix-socket-path UNIX_SOCKET_PATH
                        Default: None. Unix socket path to use. When provided
                        --host and --port flags are ignored
  --num-acceptors NUM_ACCEPTORS
                        Defaults to number of CPU cores.
  --version, -v         Prints proxy.py version.
  --log-level LOG_LEVEL
                        Valid options: DEBUG, INFO (default), WARNING, ERROR,
                        CRITICAL. Both upper and lowercase values are allowed.
                        You may also simply use the leading character e.g.
                        --log-level d
  --log-file LOG_FILE   Default: sys.stdout. Log file destination.
  --log-format LOG_FORMAT
                        Log format for Python logger.
  --open-file-limit OPEN_FILE_LIMIT
                        Default: 1024. Maximum number of files (TCP
                        connections) that proxy.py can open concurrently.
  --plugins PLUGINS [PLUGINS ...]
                        Comma separated plugins. You may use --plugins flag
                        multiple times.
  --enable-dashboard    Default: False. Enables proxy.py dashboard.
  --work-klass WORK_KLASS
                        Default: proxy.http.HttpProtocolHandler. Work klass to
                        use for work execution.
  --pid-file PID_FILE   Default: None. Save "parent" process ID to a file.
  --enable-proxy-protocol
                        Default: False. If used, will enable proxy protocol.
                        Only version 1 is currently supported.
  --client-recvbuf-size CLIENT_RECVBUF_SIZE
                        Default: 1 MB. Maximum amount of data received from
                        the client in a single recv() operation. Bump this
                        value for faster uploads at the expense of increased
                        RAM.
  --key-file KEY_FILE   Default: None. Server key file to enable end-to-end
                        TLS encryption with clients. If used, must also pass
                        --cert-file.
  --timeout TIMEOUT     Default: 10.0. Number of seconds after which an
                        inactive connection must be dropped. Inactivity is
                        defined by no data sent or received by the client.
  --server-recvbuf-size SERVER_RECVBUF_SIZE
                        Default: 1 MB. Maximum amount of data received from
                        the server in a single recv() operation. Bump this
                        value for faster downloads at the expense of increased
                        RAM.
  --disable-http-proxy  Default: False. Whether to disable
                        proxy.HttpProxyPlugin.
  --disable-headers DISABLE_HEADERS
                        Default: None. Comma separated list of headers to
                        remove before dispatching client request to upstream
                        server.
  --ca-key-file CA_KEY_FILE
                        Default: None. CA key to use for signing dynamically
                        generated HTTPS certificates. If used, must also pass
                        --ca-cert-file and --ca-signing-key-file
  --ca-cert-dir CA_CERT_DIR
                        Default: ~/.proxy.py. Directory to store dynamically
                        generated certificates. Also see --ca-key-file, --ca-
                        cert-file and --ca-signing-key-file
  --ca-cert-file CA_CERT_FILE
                        Default: None. Signing certificate to use for signing
                        dynamically generated HTTPS certificates. If used,
                        must also pass --ca-key-file and --ca-signing-key-file
  --ca-file CA_FILE     Default: /Users/abhinavsingh/Dev/proxy.py/venv310/lib/
                        python3.10/site-packages/certifi/cacert.pem. Provide
                        path to custom CA bundle for peer certificate
                        verification
  --ca-signing-key-file CA_SIGNING_KEY_FILE
                        Default: None. CA signing key to use for dynamic
                        generation of HTTPS certificates. If used, must also
                        pass --ca-key-file and --ca-cert-file
  --cert-file CERT_FILE
                        Default: None. Server certificate to enable end-to-end
                        TLS encryption with clients. If used, must also pass
                        --key-file.
  --auth-plugin AUTH_PLUGIN
                        Default: proxy.http.proxy.AuthPlugin. Auth plugin to
                        use instead of default basic auth plugin.
  --basic-auth BASIC_AUTH
                        Default: No authentication. Specify colon separated
                        user:password to enable basic authentication.
  --cache-dir CACHE_DIR
                        Default: A temporary directory. Flag only applicable
                        when cache plugin is used with on-disk storage.
  --filtered-upstream-hosts FILTERED_UPSTREAM_HOSTS
                        Default: Blocks Facebook. Comma separated list of IPv4
                        and IPv6 addresses.
  --enable-web-server   Default: False. Whether to enable
                        proxy.HttpWebServerPlugin.
  --enable-static-server
                        Default: False. Enable inbuilt static file server.
                        Optionally, also use --static-server-dir to serve
                        static content from custom directory. By default,
                        static file server serves out of installed proxy.py
                        python module folder.
  --static-server-dir STATIC_SERVER_DIR
                        Default: "public" folder in directory where proxy.py
                        is placed. This option is only applicable when static
                        server is also enabled. See --enable-static-server.
  --min-compression-length MIN_COMPRESSION_LENGTH
                        Default: 20 bytes. Sets the minimum length of a
                        response that will be compressed (gzipped).
  --pac-file PAC_FILE   A file (Proxy Auto Configuration) or string to serve
                        when the server receives a direct file request. Using
                        this option enables proxy.HttpWebServerPlugin.
  --pac-file-url-path PAC_FILE_URL_PATH
                        Default: /. Web server path to serve the PAC file.
  --proxy-pool PROXY_POOL
                        List of upstream proxies to use in the pool
  --filtered-client-ips FILTERED_CLIENT_IPS
                        Default: 127.0.0.1,::1. Comma separated list of IPv4
                        and IPv6 addresses.
  --filtered-url-regex-config FILTERED_URL_REGEX_CONFIG
                        Default: No config. Comma separated list of IPv4 and
                        IPv6 addresses.
  --cloudflare-dns-mode CLOUDFLARE_DNS_MODE
                        Default: security. Either "security" (for malware
                        protection) or "family" (for malware and adult content
                        protection)

Proxy.py not working? Report at:
https://github.com/abhinavsingh/proxy.py/issues/new
Comments
  • [Ubuntu] Cannot use TLS interception

    Describe the bug: I am not able to use the TLS interception feature, as provided in the readme section. Even the basic

    To Reproduce Steps to reproduce the behavior:

    1. [Host machine] Install proxy.py v2.1.2: pip install proxy.py==2.1.2
    2. [Host machine] create ssl files
    export CA_KEY_FILE_PATH=ca-key.pem
    export CA_CERT_FILE_PATH=ca-cert.pem
    export CA_SIGNING_KEY_FILE_PATH=ca-signing-key.pem
    python -m proxy.common.pki gen_private_key --private-key-path $CA_KEY_FILE_PATH
    python -m proxy.common.pki remove_passphrase --private-key-path $CA_KEY_FILE_PATH
    python -m proxy.common.pki gen_public_key --private-key-path $CA_KEY_FILE_PATH --public-key-path $CA_CERT_FILE_PATH
    python -m proxy.common.pki gen_private_key --private-key-path $CA_SIGNING_KEY_FILE_PATH
    python -m proxy.common.pki remove_passphrase --private-key-path
    
    3. [Host machine] Run proxy.py
    $ proxy --plugins proxy.plugin.CacheResponsesPlugin --ca-key-file ca-key.pem --ca-cert-file ca-cert.pem --ca-signing-key-file ca-signing-key.pem --host 0.0.0.0 --log-level d
    2020-03-02 11:11:41,970 - pid:20523 [I] load_plugins:525 - Loaded plugin proxy.http.proxy.HttpProxyPlugin
    2020-03-02 11:11:41,971 - pid:20523 [I] load_plugins:525 - Loaded plugin proxy.plugin.CacheResponsesPlugin
    2020-03-02 11:11:41,972 - pid:20523 [I] listen:63 - Listening on 0.0.0.0:8899
    2020-03-02 11:11:41,975 - pid:20523 [D] start_workers:81 - Started acceptor#0 process 20525
    2020-03-02 11:11:41,976 - pid:20523 [I] start_workers:84 - Started 1 workers
    
    4. [Client machine] Send a request from the client system
    $ curl --proxy http://host_ip:8899 --cacert ~/work/varnish_docker_virtual/squid_docker/upstream_haproxy_certs/haproxy-ca-cert.pem https://httpbin.org/ip -vvv
    *   Trying host_ip...
    * TCP_NODELAY set
    * Connected to host_ip (host_ip) port 8899 (#0)
    * allocate connect buffer!
    * Establish HTTP proxy tunnel to httpbin.org:443
    > CONNECT httpbin.org:443 HTTP/1.1
    > Host: httpbin.org:443
    > User-Agent: curl/7.58.0
    > Proxy-Connection: Keep-Alive
    > 
    < HTTP/1.1 200 Connection established
    < 
    * Proxy replied 200 to CONNECT request
    * CONNECT phase completed!
    * ALPN, offering h2
    * ALPN, offering http/1.1
    * successfully set certificate verify locations:
    *   CAfile: /home/jitesh/work/varnish_docker_virtual/squid_docker/upstream_haproxy_certs/haproxy-ca-cert.pem
      CApath: /etc/ssl/certs
    * TLSv1.3 (OUT), TLS handshake, Client hello (1):
    * CONNECT phase completed!
    * CONNECT phase completed!
    * OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to httpbin.org:443 
    * stopped the pause stream!
    * Closing connection 0
    curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to httpbin.org:443 
    
    3. [Host machine] Check the error log on the host
    2020-03-02 11:11:47,320 - pid:20525 [D] initialize:145 - Handling connection <socket.socket fd=9, family=AddressFamily.AF_INET, type=2049, proto=0, laddr=('host_ip', 8899), raddr=('gateway_ip/client_ip', 19673)>
    2020-03-02 11:11:47,322 - pid:20525 [D] handle_readables:302 - Client is ready for reads, reading
    2020-03-02 11:11:47,322 - pid:20525 [D] recv:65 - received 114 bytes from client
    2020-03-02 11:11:47,324 - pid:20525 [D] connect_upstream:420 - Connecting to upstream httpbin.org:443
    2020-03-02 11:11:47,363 - pid:20525 [D] connect_upstream:425 - Connected to upstream httpbin.org:443
    2020-03-02 11:11:47,402 - pid:20525 [D] generate_upstream_certificate:362 - Generating certificates /home/ubuntu/.proxy.py/certificates/httpbin.org.pem
    2020-03-02 11:11:47,422 - pid:20525 [D] flush:91 - flushed 39 bytes to client
    2020-03-02 11:11:47,423 - pid:20525 [E] on_request_complete:278 - OSError when wrapping client
    2020-03-02 11:11:47,423 - pid:20525 [I] access_log:332 - gateway_ip/client_ip:19673 - CONNECT httpbin.org:443 - 0 bytes - 105.42 ms
    2020-03-02 11:11:47,424 - pid:20525 [I] close:48 - Cached response at /tmp/httpbin.org-89311d71dce24450b200947c9ef8ac1f.txt
    2020-03-02 11:11:47,424 - pid:20525 [D] on_client_connection_close:189 - Closed server connection, has buffer False
    2020-03-02 11:11:47,424 - pid:20525 [D] shutdown:217 - Closing client connection <socket.socket fd=9, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('host_ip', 8899), raddr=('gateway_ip/client_ip', 19673)> at address ('gateway_ip/client_ip', 19673) has buffer False
    2020-03-02 11:11:47,425 - pid:20525 [D] shutdown:225 - Client connection shutdown successful
    2020-03-02 11:11:47,425 - pid:20525 [D] shutdown:230 - Client connection closed
    

    Expected behavior The expected outcome of https://github.com/abhinavsingh/proxy.py#tls-interception

    Version information

    • OS: Ubuntu 18.04.4 LTS (GNU/Linux 4.15.0-88-generic x86_64)
    • Curl: curl 7.58.0 (x86_64-pc-linux-gnu) libcurl/7.58.0 OpenSSL/1.1.1 zlib/1.2.11 libidn2/2.0.4 libpsl/0.19.1 (+libidn2/2.0.4) nghttp2/1.30.0 librtmp/2.3
    • proxy.py Version 2.1.2

    Additional context I am able to get a response from http://httpbin.org/ip without any issues, so the problem occurs only with HTTPS. Do let me know if there are any procedures that I missed.

  • [TlsInterception] `OSError` when sending empty chunk to clients under `Python < 3.10`

    [TlsInterception] `OSError` when sending empty chunk to clients under `Python < 3.10`

    So far I have used proxy.py version 2.3.1 to develop a plugin. Since switching to version 2.4.0rc2, I get the following info/warning messages and TLS-based access is denied. I also tried rc5, which produces the same messages on both Windows and macOS.

    2022-01-11 15:13:15,840 - pid:1575 [W] handler.handle_readables:266 - Exception when receiving from client connection#29 with reason FileNotFoundError(2, 'No such file or directory')
    2022-01-11 15:13:15,840 - pid:1576 [W] handler.handle_readables:266 - Exception when receiving from client connection#29 with reason FileNotFoundError(2, 'No such file or directory')
    2022-01-11 15:13:15,840 - pid:1575 [I] server.access_log:406 - 127.0.0.1:50398 - CONNECT clientservices.googleapis.com:443 - 0 bytes - 50.59ms
    2022-01-11 15:13:15,841 - pid:1576 [I] server.access_log:406 - 127.0.0.1:50400 - CONNECT accounts.google.com:443 - 0 bytes - 45.82ms
    2022-01-11 15:13:16,290 - pid:1576 [W] handler.handle_readables:266 - Exception when receiving from client connection#29 with reason FileNotFoundError(2, 'No such file or directory')
    2022-01-11 15:13:16,291 - pid:1576 [I] server.access_log:406 - 127.0.0.1:50404 - CONNECT www.google.com:443 - 0 bytes - 44.86ms
    

    The browser states "The website is not reachable" and shows ERR_CONNECTION_CLOSED as the error message.

    To the best of my knowledge, both v2.3.1 and 2.4.0rcX run with the same configuration and the same certificates, but only v2.3.1 works. I am using Python 3.9.

    I also noticed that no server certificates are cached in ~/.proxy/certificates when using 2.4.0rcX.
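
    As the issue title suggests, the failure mode is a zero-length write to a wrapped socket, which can raise `OSError` on Python < 3.10. A generic guard for a send path (a hypothetical helper, not proxy.py API) looks like:

    ```python
    def safe_send(send, data: bytes) -> int:
        """Skip zero-length writes, which can raise OSError on wrapped
        (SSL) sockets under older Python versions; delegate otherwise."""
        if not data:
            return 0
        return send(data)
    ```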

  • network interface binding option

    network interface binding option

    Hi,

    I would like a bind-address option. Thank you.

    Example: the machine has the IPs 1.1.1.2, 1.1.1.3 and 1.1.1.4:

    # ip a
    2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether ****
        inet 1.1.1.2/24 brd 1.1.1.255 scope global enp2s0
        inet 1.1.1.3/24 brd 1.1.1.255 scope global enp2s0
        inet 1.1.1.4/24 brd 1.1.1.255 scope global enp2s0

    I would like to choose 1.1.1.3 as the source address:

    import http.client

    conn = http.client.HTTPConnection('xenosi.de', source_address=('1.1.1.3', 0))
    h = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36'}
    conn.request('GET', '/ip.php?json', headers=h)
    res = conn.getresponse()
    print(res.status, res.reason, res.read())

    Desired result: proxy --hostname=0.0.0.0 --bindip=1.1.1.3
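
    For reference, binding the outgoing side of a connection to a chosen local address is done by calling `bind()` before `connect()`; a generic sketch (the `--bindip` flag proposed above is hypothetical, and this helper is not proxy.py API):

    ```python
    import socket

    def connect_from(source_ip: str, host: str, port: int) -> socket.socket:
        """Open a TCP connection whose packets leave from source_ip."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((source_ip, 0))  # port 0: let the OS pick an ephemeral port
        s.connect((host, port))
        return s
    ```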

  • Virtual Hosts Plugin

    Virtual Hosts Plugin

    Is your feature request related to a problem? Please describe. I want to run a bunch of local services on different ports, and have a proxy that can connect to each of them

    Describe the solution you'd like For example, the proxy is on port 80, and I have my personal DNS that resolves *.test to 127.0.0.1. When I go to example.test, the proxy will then go to 127.0.0.1:5000 for example.
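
    The requested routing boils down to a Host-header lookup table; a minimal sketch of that mapping (the table and helper below are illustrative, not an existing proxy.py plugin):

    ```python
    # Illustrative host-based routing table for a virtual-hosts plugin.
    ROUTES = {
        b'example.test': ('127.0.0.1', 5000),
        b'api.test': ('127.0.0.1', 5001),
    }

    def resolve_upstream(host_header: bytes):
        """Strip any :port suffix and look up the upstream for this vhost."""
        host = host_header.split(b':', 1)[0].lower()
        return ROUTES.get(host)
    ```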

  • Proxy.py with Firefox on NetBSD

    Proxy.py with Firefox on NetBSD

    python proxy.py --port 12500 --ipv4

    then configure Firefox to use the proxy at 127.0.0.1:12500

    It gives errors and does not work:

     return recvfds(s, 1)[0]
      File "/usr/lib/python3.7/multiprocessing/reduction.py", line 161, in recvfds
        len(ancdata))
    RuntimeError: received 0 items of ancdata
    
  • [ReverseProxyPlugin] Fails to process request when response is too big

    [ReverseProxyPlugin] Fails to process request when response is too big

    Describe the bug I try the reverse proxy example from README.md by starting proxy.py with (hopefully) appropriate flags (see the invocation below) and sending simple requests using curl. For some reason the proxy sporadically fails and times out after ~10 secs, and the client doesn't receive the response body. This happens around 20% of the time; the rest of the time it works fine.

    To Reproduce

    1. Run proxy.py as python -m proxy --disable-http-proxy --enable-web-server --plugins proxy.plugin.ReverseProxyPlugin --hostname 127.0.0.1
    2. Use the curl example found in README.md curl -v http://127.0.0.1:8899/get. When reproducible I get:
    $ curl -v http://127.0.0.1:8899/get
    *   Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to 127.0.0.1 (127.0.0.1) port 8899 (#0)
    > GET /get HTTP/1.1
    > Host: 127.0.0.1:8899
    > User-Agent: curl/7.58.0
    > Accept: */*
    > 
    < HTTP/1.1 200 OK
    < Date: Mon, 10 Feb 2020 18:01:15 GMT
    < Content-Type: application/json
    < Content-Length: 250
    < Connection: keep-alive
    < Server: gunicorn/19.9.0
    < Access-Control-Allow-Origin: *
    < Access-Control-Allow-Credentials: true
    < 
    * transfer closed with 250 bytes remaining to read
    * Closing connection 0
    curl: (18) transfer closed with 250 bytes remaining to read
    
    3. The proxy.py output is:
    $ python -m proxy --disable-http-proxy --enable-web-server --plugins proxy.plugin.ReverseProxyPlugin --hostname 127.0.0.1
    2020-02-10 20:00:55,637 - pid:49866 [I] load_plugins:525 - Loaded plugin proxy.http.server.HttpWebServerPlugin
    2020-02-10 20:00:55,637 - pid:49866 [I] load_plugins:525 - Loaded plugin proxy.plugin.ReverseProxyPlugin
    2020-02-10 20:00:55,637 - pid:49866 [I] listen:63 - Listening on 127.0.0.1:8899
    2020-02-10 20:00:55,652 - pid:49866 [I] start_workers:84 - Started 8 workers
    2020-02-10 20:01:09,716 - pid:49870 [I] access_log:232 - 127.0.0.1:49854 - GET /get - 10381.46 ms
    2020-02-10 20:01:25,265 - pid:49870 [I] access_log:232 - 127.0.0.1:49917 - GET /get - 10297.06 ms
    2020-02-10 20:01:36,078 - pid:49867 [I] access_log:232 - 127.0.0.1:50003 - GET /get - 277.89 ms
    

    Notice the first 2 requests timing out at ~10 seconds and the 3rd going through fine.

    Expected behavior Was expecting to handle all requests without issues.

    Version information

    • OS: MacOS Catalina (10.15.2 (19C57))
    • Browser: N/A (using curl, but same happens for postman)
    • Device: MacBook Pro Mid 2015
    • proxy.py Version: v2.1.2, also tried latest from develop and results are the same

    Additional context Happy to help by debugging further & providing a patch for this (if indeed an issue), just need some pointers on where to look :)

    Screenshots N/A
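
    The curl error above means the connection closed with Content-Length bytes still unaccounted for; a correct relay loop must keep forwarding until the full body has been sent (a simplified illustration of the invariant, not proxy.py's actual implementation):

    ```python
    def relay_body(recv, send, content_length: int) -> None:
        """Forward exactly content_length body bytes from upstream to client."""
        remaining = content_length
        while remaining > 0:
            chunk = recv(min(remaining, 16 * 1024))
            if not chunk:  # premature close: exactly curl's error (18)
                raise ConnectionError(
                    f'upstream closed with {remaining} bytes remaining')
            send(chunk)
            remaining -= len(chunk)
    ```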

  • [CacheResponsesPlugin] Support serving out of cache

    [CacheResponsesPlugin] Support serving out of cache

    Currently the cache_responses.py plugin only caches responses but doesn't serve out of the cached data.

    Per @trianta2's request here, it would be a good idea to support this in future releases.

    However, for production-grade usage this feature will require significant work. If someone is interested in taking a stab at this one, please feel free to reach out. Happy to discuss it further.
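
    A serving path would roughly mean deriving the same cache key the writer uses and short-circuiting to disk on a hit; a hypothetical sketch (the key scheme below is illustrative, not the plugin's actual one):

    ```python
    import hashlib
    import os

    def cache_path(cache_dir: str, host: str, request_line: bytes) -> str:
        """Derive a deterministic cache file name for a request."""
        digest = hashlib.md5(request_line).hexdigest()
        return os.path.join(cache_dir, f'{host}-{digest}.txt')

    def serve_from_cache(cache_dir: str, host: str, request_line: bytes):
        """Return cached bytes if present, else None to fall through upstream."""
        path = cache_path(cache_dir, host, request_line)
        if os.path.exists(path):
            with open(path, 'rb') as f:
                return f.read()
        return None
    ```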

  • TLS Interception Cert Generation

    TLS Interception Cert Generation

    1. Fixes #299 where TLS interception not working as expected on Ubuntu was reported
    2. Closes #261 where we previously attempted a similar fix

    @Benouare @httpnotonly @ja8zyjits @roshanprince402 @tawmoto @whitespots Folks PTAL at this branch and give it a try. Please report if TLS interception is still an issue.

    • I personally tested it on MacOS where TLS interception was broken too.
    • I am using the following flags at my end: proxy --ca-key-file ca-key.pem --ca-cert-file ca-cert.pem --ca-signing-key-file ca-signing-key.pem --ca-file venv373/lib/python3.7/site-packages/certifi/cacert.pem --plugins proxy.plugin.CacheResponsesPlugin
    • CA certificates were generated using make ca-certificates.
    • You can omit --ca-file flag on Ubuntu.
    • New approach uses custom openssl.cnf so this should also address Ubuntu use cases. But I haven't yet given it a try on Ubuntu.

    Please let me know.

    Screenshot of TLS interception via Chrome on MacOS. As we can see, the certificate was signed by our custom CA (example.com).

  • [Core] Default send buffer size must be configurable

    [Core] Default send buffer size must be configurable

    I notice that queueing a huge response in a plugin's handle_client_request() method is very slow.

    def handle_client_request(self, request: HttpParser) -> Optional[HttpParser]:
        if some_condition:
            # serve the response locally; returning None stops forwarding
            return self.handle_request_locally()
        # else access the remote resource
        return request

    def handle_request_locally(self) -> None:
        with open('my_file', 'rb') as f:
            file_data = f.read()
        self.client.queue(
            okResponse(
                file_data,
                {b'Content-Type': b'application/octet-stream'},
                conn_close=True,
            ),
        )
    

    Connecting to the proxy via a browser and accessing a huge file this way results in very slow handling of the data. My question: is there a bottleneck somewhere in proxy.py, or is this simply the wrong approach? Even when running proxy.py on localhost, with no network transfer involved, the browser indicates that the transfer will take hours or days.
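
    Independent of where the bottleneck lies, queueing the whole file from one `read()` holds the entire payload in memory and in the send buffer at once; reading in fixed-size chunks is the usual alternative (a generic sketch; the 64 KiB chunk size is an assumption):

    ```python
    def iter_file_chunks(path: str, chunk_size: int = 64 * 1024):
        """Yield a large file as fixed-size chunks for incremental sending."""
        with open(path, 'rb') as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield chunk
    ```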

  • freeze_support() runtimeError on macOS with Python 3.8 using develop branch

    freeze_support() runtimeError on macOS with Python 3.8 using develop branch

    Describe the bug When I try to run proxy from the develop branch, I get this error:

    iMac-de-Benoit:proxy.py benoit$ proxy
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
        exitcode = _main(fd, parent_sentinel)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
        prepare(preparation_data)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
        _fixup_main_from_path(data['init_main_from_path'])
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
        main_content = runpy.run_path(main_path,
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 262, in run_path
        return _run_module_code(code, init_globals, run_name,
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 95, in _run_module_code
        _run_code(code, mod_globals, init_globals,
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/Library/Frameworks/Python.framework/Versions/3.8/bin/proxy", line 5, in <module>
        from proxy import entry_point
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/__init__.py", line 11, in <module>
        from .proxy import entry_point
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/proxy.py", line 22, in <module>
        from .core.acceptor import AcceptorPool
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/core/acceptor/__init__.py", line 11, in <module>
        from .acceptor import Acceptor
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/core/acceptor/acceptor.py", line 22, in <module>
        from ..threadless import ThreadlessWork, Threadless
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/core/threadless.py", line 26, in <module>
        from .event import EventQueue, eventNames
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/core/event.py", line 39, in <module>
        class EventQueue:
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/core/event.py", line 42, in EventQueue
        MANAGER: multiprocessing.managers.SyncManager = multiprocessing.Manager()
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 57, in Manager
        m.start()
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/managers.py", line 579, in start
        self._process.start()
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 121, in start
        self._popen = self._Popen(self)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 283, in _Popen
        return Popen(process_obj)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
        super().__init__(process_obj)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
        self._launch(process_obj)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 42, in _launch
        prep_data = spawn.get_preparation_data(process_obj._name)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 154, in get_preparation_data
        _check_not_importing_main()
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 134, in _check_not_importing_main
        raise RuntimeError('''
    RuntimeError: 
            An attempt has been made to start a new process before the
            current process has finished its bootstrapping phase.
    
            This probably means that you are not using fork to start your
            child processes and you have forgotten to use the proper idiom
            in the main module:
    
                if __name__ == '__main__':
                    freeze_support()
                    ...
    
            The "freeze_support()" line can be omitted if the program
            is not going to be frozen to produce an executable.
    Traceback (most recent call last):
      File "/Library/Frameworks/Python.framework/Versions/3.8/bin/proxy", line 5, in <module>
        from proxy import entry_point
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/__init__.py", line 11, in <module>
        from .proxy import entry_point
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/proxy.py", line 22, in <module>
        from .core.acceptor import AcceptorPool
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/core/acceptor/__init__.py", line 11, in <module>
        from .acceptor import Acceptor
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/core/acceptor/acceptor.py", line 22, in <module>
        from ..threadless import ThreadlessWork, Threadless
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/core/threadless.py", line 26, in <module>
        from .event import EventQueue, eventNames
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/core/event.py", line 39, in <module>
        class EventQueue:
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/proxy/core/event.py", line 42, in EventQueue
        MANAGER: multiprocessing.managers.SyncManager = multiprocessing.Manager()
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 57, in Manager
        m.start()
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/managers.py", line 583, in start
        self._address = reader.recv()
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/connection.py", line 250, in recv
        buf = self._recv_bytes()
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
        buf = self._recv(4)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/connection.py", line 383, in _recv
        raise EOFError
    EOFError
    
    

    To Reproduce

    $ pip install git+https://github.com/abhinavsingh/proxy.py@develop
    $ proxy

    Expected behavior The develop branch simply works.

    Version information

    • OS: OSX 10.13.6 High Sierra
    • Python : 3.8, Python 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27) [Clang 6.0 (clang-600.0.57)] on darwin
    • proxy.py Version : fresh develop branch

    Additional context If I do the same with the master/release branch, everything works fine.
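
    The RuntimeError in the traceback is Python's standard guard against starting processes while the main module is still being imported; under the spawn start method (the macOS default since Python 3.8), any multiprocessing entry point must sit behind the main-module idiom. A minimal self-contained illustration of that idiom (not proxy.py code):

    ```python
    import multiprocessing

    def worker(q):
        q.put('ready')

    def main() -> str:
        # Safe to start children here: interpreter bootstrapping is done.
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=worker, args=(q,))
        p.start()
        result = q.get()
        p.join()
        return result

    if __name__ == '__main__':
        multiprocessing.freeze_support()  # no-op unless frozen into an executable
        print(main())
    ```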

  • [RaspberryPi] OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to httpbin.org:443

    [RaspberryPi] OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to httpbin.org:443

    Describe the bug Unable to send request to httpbin.org:443 w/ TLS certificate provided

    To Reproduce Steps to reproduce the behavior:

    Run the commands below on the host machine:

    1. Run make ca-certificates in a folder at the path /home/pi/proxy.py/certificates/ because that is where I put it
    2. I ran python3 "/home/pi/proxy.py/proxy_start.py", but you need to run python3 -m proxy --enable-web-server --plugins proxy.plugin.ShortLinkPlugin --port 769 --hostname 0.0.0.0 --ca-key-file "/home/pi/proxy.py/certificates/ca-key.pem" --ca-cert-file "/home/pi/proxy.py/certificates/ca-cert.pem" --ca-signing-key-file "/home/pi/proxy.py/certificates/ca-signing-key.pem"
    3. Run curl -v -x "address":769 --cacert ca-cert.pem https://httpbin.org/get

    Expected behavior expected results from https://github.com/abhinavsingh/proxy.py#tls-interception

    Version information

    • OS: Raspbian GNU/Linux 10 (buster)
    • Browser: cURL
    • Device: Raspberry Pi 3 Model B
    • proxy.py Version: 2.4.1

    Screenshots

  • Bump setuptools from 59.0.1 to 65.5.1 in /docs

    Bump setuptools from 59.0.1 to 65.5.1 in /docs

    Bumps setuptools from 59.0.1 to 65.5.1.

    Release notes

    Sourced from setuptools's releases.

    v65.5.1

    No release notes provided.

    v65.5.0

    No release notes provided.

    v65.4.1

    No release notes provided.

    v65.4.0

    No release notes provided.

    v65.3.0

    No release notes provided.

    v65.2.0

    No release notes provided.

    v65.1.1

    No release notes provided.

    v65.1.0

    No release notes provided.

    v65.0.2

    No release notes provided.

    v65.0.1

    No release notes provided.

    v65.0.0

    No release notes provided.

    v64.0.3

    No release notes provided.

    v64.0.2

    No release notes provided.

    v64.0.1

    No release notes provided.

    v64.0.0

    No release notes provided.

    v63.4.3

    No release notes provided.

    v63.4.2

    No release notes provided.

    ... (truncated)

    Changelog

    Sourced from setuptools's changelog.

    v65.5.1

    Misc

    • #3638: Drop a test dependency on the mock package, always use :external+python:py:mod:unittest.mock -- by :user:hroncok
    • #3659: Fixed REDoS vector in package_index.

    v65.5.0

    Changes

    • #3624: Fixed editable install for multi-module/no-package src-layout projects.
    • #3626: Minor refactorings to support distutils using stdlib logging module.

    Documentation changes

    • #3419: Updated the example version numbers to be compliant with PEP-440 on the "Specifying Your Project’s Version" page of the user guide.

    Misc

    • #3569: Improved information about conflicting entries in the current working directory and editable install (in documentation and as an informational warning).
    • #3576: Updated version of validate_pyproject.

    v65.4.1

    Misc

    v65.4.0

    Changes

    v65.3.0

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the Security Alerts page.
  • pip prod(deps): bump autopep8 from 1.6.0 to 2.0.1

    pip prod(deps): bump autopep8 from 1.6.0 to 2.0.1

    Bumps autopep8 from 1.6.0 to 2.0.1.

    Release notes

    Sourced from autopep8's releases.

    v2.0.1

    What's Changed

    New Contributors

    Full Changelog: https://github.com/hhatto/autopep8/compare/v2.0.0...v2.0.1

    v2.0.0

    version 1.7.1 is yanked.

    release version 2.0.0

    v1.7.1

    What's Changed

    New Contributors

    Full Changelog: https://github.com/hhatto/autopep8/compare/v1.7.0...v1.7.1

    v1.7.0

    Change

    New Feature

    • Support E275

    Bug Fix


    What's Changed

    ... (truncated)

    Commits

  • Multiple protocols using

    Multiple protocols using

    Check FAQs Please check Frequently Asked Questions before opening a feature request.

    Is your feature request related to a problem? Please describe. As written in the documentation, proxy.py is "Capable of serving multiple protocols over the same port", but I configured an HTTPS proxy and it works only over HTTPS. If I try to connect via HTTP to this HTTPS proxy, I receive the error ERR_CONNECTION_RESET.

    Describe the solution you'd like I can't find in the documentation how to configure proxy.py to serve HTTP and HTTPS simultaneously, even on different ports. Did I miss how to do this, or is it not implemented?
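
    Serving multiple protocols on one port typically works by peeking at the first byte of a new connection: a TLS ClientHello record starts with 0x16, while plaintext HTTP starts with an uppercase ASCII method character. A generic classification sketch (an illustration of the technique, not proxy.py configuration):

    ```python
    def sniff_protocol(first_byte: int) -> str:
        """Classify a new connection by its first byte."""
        if first_byte == 0x16:  # TLS handshake record type
            return 'tls'
        if 0x41 <= first_byte <= 0x5A:  # uppercase ASCII, e.g. 'G' in GET
            return 'http'
        return 'unknown'
    ```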

  • Bump ncipollo/release-action from 1.11.1 to 1.12.0

    Bump ncipollo/release-action from 1.11.1 to 1.12.0

    Bumps ncipollo/release-action from 1.11.1 to 1.12.0.

    Release notes

    Sourced from ncipollo/release-action's releases.

    v1.12.0

    What's Changed

    New Contributor

    Full Changelog: https://github.com/ncipollo/release-action/compare/v1.11.2...v1.12.0

    v1.11.2

    • Security updates
    • Adds support for skipIfReleaseExists
    Commits

  • npm: bump qs from 6.5.2 to 6.5.3 in /dashboard

    npm: bump qs from 6.5.2 to 6.5.3 in /dashboard

    Bumps qs from 6.5.2 to 6.5.3.

    Changelog

    Sourced from qs's changelog.

    6.5.3

    • [Fix] parse: ignore __proto__ keys (#428)
    • [Fix] utils.merge: avoid a crash with a null target and a truthy non-array source
    • [Fix] correctly parse nested arrays
    • [Fix] stringify: fix a crash with strictNullHandling and a custom filter/serializeDate (#279)
    • [Fix] utils: merge: fix crash when source is a truthy primitive & no options are provided
    • [Fix] when parseArrays is false, properly handle keys ending in []
    • [Fix] fix for an impossible situation: when the formatter is called with a non-string value
    • [Fix] utils.merge: avoid a crash with a null target and an array source
    • [Refactor] utils: reduce observable [[Get]]s
    • [Refactor] use cached Array.isArray
    • [Refactor] stringify: Avoid arr = arr.concat(...), push to the existing instance (#269)
    • [Refactor] parse: only need to reassign the var once
    • [Robustness] stringify: avoid relying on a global undefined (#427)
    • [readme] remove travis badge; add github actions/codecov badges; update URLs
    • [Docs] Clean up license text so it’s properly detected as BSD-3-Clause
    • [Docs] Clarify the need for "arrayLimit" option
    • [meta] fix README.md (#399)
    • [meta] add FUNDING.yml
    • [actions] backport actions from main
    • [Tests] always use String(x) over x.toString()
    • [Tests] remove nonexistent tape option
    • [Dev Deps] backport from main
    Commits
    • 298bfa5 v6.5.3
    • ed0f5dc [Fix] parse: ignore __proto__ keys (#428)
    • 691e739 [Robustness] stringify: avoid relying on a global undefined (#427)
    • 1072d57 [readme] remove travis badge; add github actions/codecov badges; update URLs
    • 12ac1c4 [meta] fix README.md (#399)
    • 0338716 [actions] backport actions from main
    • 5639c20 Clean up license text so it’s properly detected as BSD-3-Clause
    • 51b8a0b add FUNDING.yml
    • 45f6759 [Fix] fix for an impossible situation: when the formatter is called with a no...
    • f814a7f [Dev Deps] backport from main
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

    You can disable automated security fix PRs for this repo from the Security Alerts page.
  • High CPU usage in idle state on Intel Mac when using Python3.8

    High CPU usage in idle state on Intel Mac when using Python3.8

    Check FAQs: Checked.

    Describe the bug I started the proxy from the command line on my Mac and noticed that the acceptor's CPU usage is very high even though there is zero load. It seems there is a busy while loop running inside the process?

    To Reproduce Steps to reproduce the behavior:

    1. clone the repo
    2. install dependencies
    3. check out v2.4.3
    4. python -m proxy --num-acceptors 1 --hostname 0.0.0.0 --log-level d
    develop ✔ $ python -m proxy --num-acceptors 1 --hostname 0.0.0.0 --log-level d
    2022-11-27 14:17:20,414 - pid:28662 [D] utils.set_open_file_limit:320 - Open file soft limit set to 1024
    2022-11-27 14:17:20,415 - pid:28662 [I] plugins.load:85 - Loaded plugin proxy.http.proxy.HttpProxyPlugin
    2022-11-27 14:17:20,416 - pid:28662 [I] tcp.listen:80 - Listening on 0.0.0.0:8899
    2022-11-27 14:17:20,421 - pid:28662 [D] pool._start:148 - Started acceptor#0 process 28680
    2022-11-27 14:17:20,423 - pid:28662 [I] pool.setup:105 - Started 1 acceptors in threadless (local) mode
    2022-11-27 14:17:20,673 - pid:28680 [D] selector_events.__init__:59 - Using selector: KqueueSelector
    2022-11-27 14:17:20,674 - pid:28680 [D] threadless.run:412 - Working on 0 works
    

    Expected behavior I expect the CPU usage of the proxy process to be very low when there is no load at all. However, the process is utilizing an entire core.

    Version information

    • OS: Mac OS Monterey
    • Browser [e.g. chrome, safari]: N/A
    • Device: Mac
    • proxy.py Version: v2.4.3

    Screenshots Screen Shot 2022-11-27 at 14 18 30
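    The symptom reported above (100% of a core while idle) is what a busy-polling event loop looks like: a selector polled with a zero or missing timeout returns immediately and spins the CPU. Below is a minimal, generic `selectors` sketch of the difference, not proxy.py's actual acceptor code, just an illustration of the failure mode the reporter suspects:

    ```python
    # Illustrative sketch only: busy-polling vs. blocking selector waits.
    # This is NOT proxy.py's implementation, just a demo of the idle-CPU issue.
    import selectors
    import socket

    sel = selectors.DefaultSelector()  # KqueueSelector on macOS
    server = socket.socket()
    server.bind(("127.0.0.1", 0))      # any free port
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ)

    # Busy-polling variant (burns a full core while idle):
    #   while True:
    #       events = sel.select(timeout=0)  # returns immediately, loop spins
    #
    # Blocking variant (near-zero CPU while idle): the call sleeps in the
    # kernel until a socket is ready or the timeout expires.
    events = sel.select(timeout=1.0)
    print(events)  # [] — nothing connected, the process slept for ~1s

    sel.unregister(server)
    server.close()
    ```

    With a blocking (or reasonably long) timeout, an idle acceptor should spend almost all of its time asleep in `kevent`/`epoll_wait` rather than spinning.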
