aka "Bayesian Methods for Hackers": An introduction to Bayesian methods + probabilistic programming with a computation/understanding-first, mathematics-second point of view. All in pure Python ;)

Bayesian Methods for Hackers

Using Python and PyMC

The Bayesian method is the natural approach to inference, yet it is hidden from readers behind chapters of slow, mathematical analysis. The typical text on Bayesian inference involves two to three chapters on probability theory before introducing what Bayesian inference is. Unfortunately, due to the mathematical intractability of most Bayesian models, the reader is only shown simple, artificial examples. This can leave the user with a so-what feeling about Bayesian inference. In fact, this was the author's own prior opinion.

After some recent successes of Bayesian methods in machine-learning competitions, I decided to investigate the subject again. Even with my mathematical background, it took me three straight days of reading examples and trying to put the pieces together to understand the methods. There was simply not enough literature bridging theory to practice. The root of my misunderstanding was the disconnect between Bayesian mathematics and probabilistic programming. That being said, I suffered then so the reader would not have to now. This book attempts to bridge the gap.

If Bayesian inference is the destination, then mathematical analysis is one particular path towards it. On the other hand, computing power is cheap enough that we can afford to take an alternate route via probabilistic programming. The latter path is much more useful, as it removes the necessity of mathematical intervention at each step; that is, we remove often-intractable mathematical analysis as a prerequisite to Bayesian inference. Simply put, the computational path proceeds via small intermediate jumps from beginning to end, whereas the mathematical path proceeds by enormous leaps, often landing far away from our target. Furthermore, without a strong mathematical background, the analysis required by the first path cannot even take place.

Bayesian Methods for Hackers is designed as an introduction to Bayesian inference from a computational/understanding-first, and mathematics-second, point of view. Of course as an introductory book, we can only leave it at that: an introductory book. The mathematically trained may cure the curiosity this text generates with other texts designed with mathematical analysis in mind. For the enthusiast with less mathematical background, or one who is interested not in the mathematics but simply in the practice of Bayesian methods, this text should be sufficient and entertaining.

The choice of PyMC as the probabilistic programming language is two-fold. First, as of this writing there is no central resource for examples and explanations in the PyMC universe; the official documentation assumes prior knowledge of Bayesian inference and probabilistic programming. We hope this book encourages users at every level to look at PyMC. Second, with recent core developments and the popularity of the scientific stack in Python, PyMC is likely to become a core component soon enough.

PyMC does have dependencies to run, namely NumPy and (optionally) SciPy. To not limit the user, the examples in this book will rely only on PyMC, NumPy, SciPy and Matplotlib.

Printed Version by Addison-Wesley

Bayesian Methods for Hackers is now available as a printed book! You can pick up a copy on Amazon. What are the differences between the online version and the printed version?

  • Additional Chapter on Bayesian A/B testing
  • Updated examples
  • Answers to the end of chapter questions
  • Additional explanations and rewritten sections to aid the reader

Contents

See the project homepage here for examples, too.

The chapters below are rendered via nbviewer at nbviewer.jupyter.org/; they are read-only and rendered in real time. Interactive notebooks + examples can be downloaded by cloning!

PyMC2

  • Prologue: Why we do it.

  • Chapter 1: Introduction to Bayesian Methods Introduction to the philosophy and practice of Bayesian methods and answering the question, "What is probabilistic programming?" Examples include:

    • Inferring human behaviour changes from text message rates
  • Chapter 2: A little more on PyMC We explore modeling Bayesian problems using Python's PyMC library through examples. How do we create Bayesian models? Examples include:

    • Detecting the frequency of cheating students, while avoiding liars
    • Calculating probabilities of the Challenger space-shuttle disaster
  • Chapter 3: Opening the Black Box of MCMC We discuss how MCMC operates and diagnostic tools. Examples include:

    • Bayesian clustering with mixture models
  • Chapter 4: The Greatest Theorem Never Told We explore an incredibly useful, and dangerous, theorem: The Law of Large Numbers. Examples include:

    • Exploring a Kaggle dataset and the pitfalls of naive analysis
    • How to sort Reddit comments from best to worst (not as easy as you think)
  • Chapter 5: Would you rather lose an arm or a leg? The introduction of loss functions and their (awesome) use in Bayesian methods. Examples include:

    • Solving the Price is Right's Showdown
    • Optimizing financial predictions
    • Winning solution to the Kaggle Dark World's competition
  • Chapter 6: Getting our prior-ities straight Probably the most important chapter. We draw on expert opinions to answer questions. Examples include:

    • Multi-Armed Bandits and the Bayesian Bandit solution.
    • What is the relationship between data sample size and prior?
    • Estimating financial unknowns using expert priors

    We explore useful tips to be objective in analysis as well as common pitfalls of priors.

PyMC3

  • Prologue: Why we do it.

  • Chapter 1: Introduction to Bayesian Methods Introduction to the philosophy and practice of Bayesian methods and answering the question, "What is probabilistic programming?" Examples include:

    • Inferring human behaviour changes from text message rates
  • Chapter 2: A little more on PyMC We explore modeling Bayesian problems using Python's PyMC library through examples. How do we create Bayesian models? Examples include:

    • Detecting the frequency of cheating students, while avoiding liars
    • Calculating probabilities of the Challenger space-shuttle disaster
  • Chapter 3: Opening the Black Box of MCMC We discuss how MCMC operates and diagnostic tools. Examples include:

    • Bayesian clustering with mixture models
  • Chapter 4: The Greatest Theorem Never Told We explore an incredibly useful, and dangerous, theorem: The Law of Large Numbers. Examples include:

    • Exploring a Kaggle dataset and the pitfalls of naive analysis
    • How to sort Reddit comments from best to worst (not as easy as you think)
  • Chapter 5: Would you rather lose an arm or a leg? The introduction of loss functions and their (awesome) use in Bayesian methods. Examples include:

    • Solving the Price is Right's Showdown
    • Optimizing financial predictions
    • Winning solution to the Kaggle Dark World's competition
  • Chapter 6: Getting our prior-ities straight Probably the most important chapter. We draw on expert opinions to answer questions. Examples include:

    • Multi-Armed Bandits and the Bayesian Bandit solution.
    • What is the relationship between data sample size and prior?
    • Estimating financial unknowns using expert priors

    We explore useful tips to be objective in analysis as well as common pitfalls of priors.

More questions about PyMC? Please post your modeling, convergence, or any other PyMC question on Cross Validated, the statistics Stack Exchange.

Using the book

The book can be read in three different ways, ordered from most recommended to least recommended:

  1. The most recommended option is to clone the repository to download the .ipynb files to your local machine. If you have Jupyter installed, you can view the chapters in your browser plus edit and run the code provided (and try some practice questions). This is the preferred option to read this book, though it comes with some dependencies.

    • Jupyter is a requirement to view the ipynb files. It can be downloaded here. Jupyter notebooks can be run from a chapter's directory:

      (your-virtualenv) ~/path/to/the/book/Chapter1_Introduction $ jupyter notebook
    • For Linux users, you should not have a problem installing NumPy, SciPy, Matplotlib and PyMC. For Windows users, check out pre-compiled versions if you have difficulty.
    • In the styles/ directory are a number of files (.matplotlibrc) that are used to make things pretty. These are not only designed for the book, but they offer many improvements over the default settings of matplotlib.
  2. The second option is to use the nbviewer.jupyter.org site, which displays Jupyter notebooks in the browser (example). The contents are updated synchronously as commits are made to the book. You can use the Contents section above to link to the chapters.

  3. PDFs are the least-preferred method to read the book, as PDFs are static and non-interactive. If PDFs are desired, they can be created dynamically using the nbconvert utility.
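
    For instance, a single chapter can be converted with something like the following (a sketch: the notebook path is illustrative, and PDF export additionally requires a LaTeX installation):

      jupyter nbconvert --to pdf Chapter1_Introduction/Ch1_Introduction_PyMC2.ipynb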

Installation and configuration

If you would like to run the Jupyter notebooks locally (option 1 above), you'll need to install the following:

  • Jupyter is a requirement to view the ipynb files. It can be downloaded here.

  • Necessary packages are PyMC, NumPy, SciPy and Matplotlib (see the example install command after this list).

  • New to Python or Jupyter, and need help with the namespaces? Check out this answer.

  • In the styles/ directory are a number of files that are customized for the notebook. These are not only designed for the book, but they offer many improvements over the default settings of matplotlib and the Jupyter notebook. The in-notebook style has not been finalized yet.
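
One possible way to install the necessary packages, assuming pip is available (a sketch; the PyMC2-era notebooks may need older pinned releases):

    pip install pymc numpy scipy matplotlib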

Development

This book has an unusual development design. The content is open-sourced, meaning anyone can be an author. Authors submit content or revisions using the GitHub interface.

How to contribute

What to contribute?

  • The current chapter list is not finalized. If you see something that is missing (MCMC, MAP, Bayesian networks, good prior choices, Potential classes etc.), feel free to start there.
  • Cleaning up Python code and making code more PyMC-esque
  • Giving better explanations
  • Spelling/grammar mistakes
  • Suggestions
  • Contributing to the Jupyter notebook styles

Committing

  • All commits are welcome, even if they are minor ;)
  • If you are unfamiliar with GitHub, you can email me your contributions at the email below.

Reviews

these are satirical, but real

"No, but it looks good" - John D. Cook

"I ... read this book ... I like it!" - Andrew Gelman

"This book is a godsend, and a direct refutation to that 'hmph! you don't know maths, piss off!' school of thought... The publishing model is so unusual. Not only is it open source but it relies on pull requests from anyone in order to progress the book. This is ingenious and heartening" - excited Reddit user

Contributions and Thanks

Thanks to all our contributing authors, including (in chronological order):

Cameron Davidson-Pilon, Stef Gibson, Vincent Ohprecio, Lars Buitinck, Paul Magwene, Matthias Bussonnier, Jens Rantil, y-p, Ethan Brown, Jonathan Whitmore, Mattia Rigotti, Colby Lemon, Gustav W Delius, Matthew Conlen, Jim Radford, Vannessa Sabino, Thomas Bratt, Nisan Haramati, Robert Grant, Matthew Wampler-Doty, Yaroslav Halchenko, Alex Garel, Oleksandr Lysenko, liori, ducky427, Pablo de Oliveira Castro, sergeyfogelson, Matt Bauman, Andrew Duberstein, Carsten Brandt, Bob Jansen, ugurthemaster, William Scott, Min RK, Bulwersator, elpres, Augusto Hack, Michael Feldmann, Youki, Kyle Meyer, Eric Martin, Inconditus, Kleptine, Stuart Layton, Antonino Ingargiola, vsl9, Tom Christie, bclow, Simon Potter, Garth Snyder, Daniel Beauchamp, Philipp Singer, gbenmartin, Peadar Coyle

We would like to thank the Python and statistics communities for building an amazing architecture.

Similarly, the book is only possible because of the PyMC library. A big thanks to the core devs of PyMC: Chris Fonnesbeck, Anand Patil, David Huard and John Salvatier.

One final thanks. This book was generated by Jupyter Notebook, a wonderful tool for developing in Python. We thank the IPython/Jupyter community for developing the Notebook interface. All Jupyter notebook files are available for download on the GitHub repository.

Contact

Contact the main author, Cam Davidson-Pilon at [email protected] or @cmrndp

Comments
  • Port to PyMC3

    Following the discussion in #155, I decided to open a new issue to track progress on the PyMC3 port. The pymc3 branch's last commit is really old now (Feb 2014), and rebasing notebook files is a nightmare!

    So I created a new branch on my fork new-pymc3.

    PyMC3 port

    • [x] : Chapter 1
      • [x] : SMS example
    • [ ] : Chapter 2
      • [ ] : A little more on PyMC
      • [ ] : Modeling approaches
      • [ ] : An algorithm for human deceit
    • [ ] : Chapter 3
      • [ ] : Opening the black box of MCMC
      • [ ] : Diagnosing Convergence
      • [ ] : Useful tips for MCMC
    • [ ] : Chapter 4 (to be detailed)
      • [ ] : The greatest theorem never told
      • [ ] : The Disorder of Small Numbers
    • [ ] : Chapter 5 (to be detailed) Loss Functions
      • [ ] : Machine Learning via Bayesian Methods
    • [ ] : Chapter 6 (to be detailed)
      • [ ] : Useful priors to know about
      • [ ] : Eliciting expert prior
      • [ ] : Effect of the prior as $N$ increases
      • [ ] : Bayesian Rugby

    Minor things

    • [X] : Update requirements
    • [X] : convert .ipynb to Notebook v4
    • [X] : replace last cell by %run "../styles/setup.py" at the beginning

    Contribute

    Anyone who wants to contribute should pull new-pymc3, make changes, and ask me to give them write permission on my fork, so the progress can be easily followed here.

    If you don't know what to contribute, you can search the repo for tasks: git grep TODO

  • Enhancing the notebooks with dynamic UIs

    Note: the dynamic notebooks live Here

    Hi Cameron,

    I released Exhibitionist about two weeks ago. It's a Python library for building dynamic HTML/UI views on top of live Python objects in interactive Python work. That's practically synonymous with the IPython Notebook today.

    Since then, I've been looking for a way to showcase the library and what's possible with it. I'd like to take your notebooks and use Exhibitionist to integrate interactive UIs (dynamic plots, etc.) so that readers can interact in real time with the concepts.

    I've already implemented the first view, allowing the user to visualize the exponential distribution while varying λ with a slider (snapshot omitted).

    I'll be working on this in the coming weeks. How would you feel about having this live in a fork under @Exhibitionist? Would that be OK?

  • Bandit example stopped working

    In Chapter 6 the interactive bandits example does not work anymore. The buttons are displayed, but the bar charts and pdf plots are not shown.

    Just looking over the code, the problem could be in these lines:

    <div id="paired-bar-chart" style="width: 600px; margin: auto;"> </div>
    <div id="beta-graphs" style="width: 600px; margin-left:125px; "> </div>
    

    The same is true for the solo BanditsD3.html file.

  • Added Tensorflow Probability Example

    This newly added Jupyter Notebook does away with mentions of PyMC2, PyMC3, and Theano, and uses Google's TensorFlow Probability for solving the same problems with the same concepts. The resolution of the matplotlib plots has also been increased to show more detail on retina screens.

  • Chapter 5: module 'pymc3.variational' has no attribute 'advi'

    In [8]:

    with model:
        mu, sds, elbo = pm.variational.advi(n=50000)
        step = pm.NUTS(scaling=model.dict_to_array(sds), is_cov=True)
        trace = pm.sample(5000, step=step, start=mu)

    Out:

    AttributeError                            Traceback (most recent call last)
    <ipython-input-8> in <module>()
          3
          4 with model:
    ----> 5     mu, sds, elbo = pm.variational.advi(n=50000)
          6     step = pm.NUTS(scaling=model.dict_to_array(sds), is_cov=True)
          7     trace = pm.sample(5000, step=step, start=mu)

    AttributeError: module 'pymc3.variational' has no attribute 'advi'


    pymc3: 3.4.1-py36_0
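
    For reference, pm.variational.advi was removed in later PyMC3 releases; the equivalent call since PyMC3 3.2 is pm.fit. A minimal sketch, assuming the chapter's model object:

    with model:
        # replaces: mu, sds, elbo = pm.variational.advi(n=50000)
        approx = pm.fit(n=50000, method="advi")
        trace = approx.sample(5000)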

  • Chapter 6: Mean-Variance Optimisation Loss Function

    Hi Cam,

    Could you explain how to use (or point to a source for) this loss function:

    [screenshot of the loss function from the notebook omitted]

    I'm assuming that the loss function is being minimised over the portfolio weights for the 4 stocks (which can be done using SciPy's optimize module, etc.), but I'm not sure what the lambda parameter is.

    Thanks!
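
    For context, this resembles a classical mean-variance objective, in which λ acts as a risk-aversion parameter trading expected return against variance. A standard form, with w the portfolio weights, μ the expected returns, and Σ their covariance, is:

    $$\min_{w}\; \lambda\, w^\top \Sigma w \;-\; \mu^\top w$$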

  • PDF documentation

    The front readme says "PDF versions are available! Look in the PDF/ directory.", but that directory does not exist anymore in the repository.

    How can I most easily generate a PDF of the latest version of the document? It is helpful to have a PDF of the book, so that it can be read on an iPad, ebook readers, etc.

  • Port code to PyMC3

    PyMC3 is coming along quite nicely and is a major improvement upon PyMC2. I thus think a port of PPfH to PyMC3 would be very useful, especially since PyMC3 is not well documented yet. All examples should be easy to port.

    I don't think such a PR should be merged into master, as PyMC2 is probably here to stay and is much more stable. But having a forked book would be quite nice. Certainly this point is up for debate.

  • Added Chapter 4 in Tensorflow Probability

    Added a notebook with a complete rewrite of Chapter 4 using TensorFlow Probability instead of PyMC3 or PyMC2. Also reduced the amount of NumPy and SciPy usage.

  • Matplotlib styles

    Hi,

    The plots in the notebooks rely on a custom matplotlibrc for visual style. Users who do not share the same matplotlibrc can't reproduce the graphs, in terms of visual style, as that information is not part of the notebook.

    To make it worse, the default style is horrible.

    fyi.

  • Ch 1: off by 1 error between pymc2 and 3

    I absolutely love this project, and my team is now using it as a professional development module. We're using pymc3, and got a little puzzled by the last exercise in Ch 1, which asks you to "consider all instances where tau_samples < 45", as all of our values of tau are less than 45. Comparing the pymc2 and pymc3 versions, the max value of tau is 45 in pymc2 and 44 in pymc3, with the distributions appearing the same other than being shifted by 1. Changing the >= to a > in the line lambda_ = pm.math.switch(tau >= idx, lambda_1, lambda_2) in the pymc3 version brings them back into alignment.

    Both versions state tau like this:

    tau = pm.DiscreteUniform("tau", lower=0, upper=n_count_data)
    

    n_count_data is 74, so tau can have any integer value 0-74 inclusive. To reason about this it helps to think of the values of tau as representing the boundaries between the days and not the days themselves, so there is always going to be one more than there are days.

    The pymc2 version sets up lambda_ like this:

    @pm.deterministic
    def lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):
        out = np.zeros(n_count_data)
        out[:tau] = lambda_1  # lambda before tau is lambda1
        out[tau:] = lambda_2  # lambda after (and including) tau is lambda2
        return out
    

    While pymc3 does this:

    with model:
        idx = np.arange(n_count_data) # Index
        lambda_ = pm.math.switch(tau >= idx, lambda_1, lambda_2)
    

    So when tau is zero, the pymc2 version executes out[:0] which returns an empty array. In real world terms, this is the case where all events happened under the lambda_2 regime. pymc3 by contrast executes pm.math.switch(0 >= idx, lambda_1, lambda_2) and returns lambda_1 for the first element of idx because 0 >= 0 is True. The comparison operator needs to be > so that you can evaluate 0 > 0 and get False for that and all other elements of idx for the case where you're always in the lambda_2 regime.

    I think I have all this right, but I'm a bit new to all of this so I wanted to lay out the reasoning before I put in a random PR to change a >= to a >. Thanks for reading, and let me know if this all makes sense.
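
    For reference, the proposed one-character fix, as a sketch of the pymc3 cell:

    with model:
        idx = np.arange(n_count_data)  # Index
        # strict inequality: when tau == 0, every day is assigned lambda_2,
        # matching the pymc2 behaviour where out[:0] = lambda_1 assigns nothing
        lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)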

  • Chapter 6: Bayesian Multi-armed Bandits Code

    After carefully studying the example code for the multi-armed bandit in chapter six, I found a piece of code which I believe is missing a parameter:

    def sample_bandits(self, n=1):
        bb_score = np.zeros(n)
        choices = np.zeros(n)

        for k in range(n):
            # sample from the bandits' priors, and select the largest sample
            choice = np.argmax(np.random.beta(1 + self.wins, 1 + self.trials - self.wins))

            # sample the chosen bandit
            result = self.bandits.pull(choice)

    Here, np.random.beta(1 + self.wins, 1 + self.trials - self.wins) is missing the size parameter, so it returns a single value, not an array. That makes using np.argmax() to pick a bandit useless, as it will always return 0.

    Shouldn't the code be np.random.beta(1 + self.wins, 1 + self.trials - self.wins, len(self.n_bandits)) ?
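
    Worth noting: if self.wins and self.trials are NumPy arrays (one entry per bandit), np.random.beta broadcasts its arguments and already returns one draw per bandit, so an explicit size mainly makes the intent unambiguous. A small self-contained sketch with hypothetical counts:

    import numpy as np

    wins = np.array([2., 5., 1.])     # hypothetical per-bandit win counts
    trials = np.array([10., 9., 4.])  # hypothetical per-bandit pull counts

    # one Beta posterior draw per bandit; size=len(wins) makes the shape explicit
    samples = np.random.beta(1 + wins, 1 + trials - wins, size=len(wins))
    choice = np.argmax(samples)       # index of the bandit with the largest draw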

  • Chapter 3 minor question

    I have a minor question about Chapter 3.

    Why was the probability of belonging to cluster 1 calculated as in [Figure 1] rather than as follows?

    v = (1 - p_trace) * norm_pdf(x, loc=center_trace[:, 1], scale=std_trace[:, 1]) > \
        p_trace * norm_pdf(x, loc=center_trace[:, 0], scale=std_trace[:, 0])

    [Figure 1: screenshot of the notebook's expression for the cluster-membership probability omitted]

    Thank you for your wonderful book!
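
    For context: by Bayes' rule, the posterior probability of cluster membership weights each component's likelihood by its mixture weight, so the prior weights p and 1 - p cannot be dropped from the comparison:

    $$P(\text{cluster } 0 \mid x) = \frac{p\, N(x;\, \mu_0, \sigma_0)}{p\, N(x;\, \mu_0, \sigma_0) + (1-p)\, N(x;\, \mu_1, \sigma_1)}$$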

  • Chapter 2: description regarding the separation plot for Fig. 2.3.2

    "The black vertical line is the expected number of defects we should observe, given this model. This allows the user to see how the total number of events predicted by the model compares to the actual number of events in the data." The above ordinates form the paragraph under the first separation plot https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC2.ipynb However, I suppose there might be some misunderstandings: the expected number of defects should be computed by the approach explained in Appendix 2.5, i.e. posterior_probability.sum() in my case, it's about 6.99753 which corresponds to the number of the realized defect 7. However, what you computed within separation_plot.py is N - \sum_i p_i , in my case, it about 16.0047. In my opinion, this makes sense to show how far the blue bar the blue bars should distribute. As you explained in the text: Ideally, all the blue bars should be close to the right-hand side.

    But, the description at the beginning of this issue, as I mentioned above, is not exact anymore. Maybe, we could say: The black vertical line is the expected number of defects (counting from right-hand side)

    Best wishes,

    Beinstein.
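
    A minimal sketch of the two quantities distinguished above, assuming posterior_probability is the array of per-point posterior defect probabilities from Appendix 2.5 (filled with stand-in values here):

    import numpy as np

    posterior_probability = np.random.uniform(size=23)  # stand-in for the notebook's N = 23 probabilities

    expected_defects = posterior_probability.sum()                  # ~7 in the issue's case
    line_position = len(posterior_probability) - expected_defects   # ~16: the same count, from the right-hand side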

  • Chapter 2 -- tfp.bijectors.AffineScalar is deprecated

    Executing the code for the model in the Challenger Space Shuttle Disaster example returns an AttributeError, roughly "tfp.bijectors has no attribute AffineScalar". After an unusually long search on the internet, I found that, as per this source (https://github.com/tensorflow/probability/releases), this attribute is indeed deprecated. The release notes say to use tfp.bijectors.Shift(shift) or tfp.bijectors.Scale(scale) instead. Since the code called for a multiplying factor, I substituted the line for tfp.bijectors.Scale(100.). It worked fine.
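
    A small sketch of the substitution (assuming the usual tfb alias; AffineScalar combined a shift and a scale, so a chained bijector covers the general case):

    import tensorflow_probability as tfp
    tfb = tfp.bijectors

    # old (removed): tfb.AffineScalar(scale=100.)
    scale_only = tfb.Scale(100.)  # pure multiplying factor, as in the notebook
    shift_and_scale = tfb.Chain([tfb.Shift(0.5), tfb.Scale(100.)])  # x -> 100*x + 0.5, if a shift were also needed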

  • Chapter 6; Example stock returns: ystockquote is not working

    I could not make ystockquote work no matter what I tried. Instead I used the yfinance package, which wraps Yahoo Finance and has a straightforward API. I'm recording the updated code here in case someone encounters the same problem.

    import yfinance as yf
    import pandas as pd

    n_observations = 100  # we will truncate to the most recent 100 days

    stocks = ["AAPL", "GOOG", "TSLA", "AMZN"]

    enddate = "2022-02-19"
    startdate = "2021-07-01"

    stock_closes = pd.DataFrame()

    for stock in stocks:
        x = yf.download(stock, startdate, enddate)
        stock_closes[stock] = x['Close']

    stock_returns = stock_closes.pct_change().iloc[1:, :]
    stock_returns = stock_returns.iloc[-n_observations:, :]
