Performance analysis of predictive (alpha) stock factors

https://media.quantopian.com/logos/open_source/alphalens-logo-03.png

Alphalens

Alphalens is a Python library for performance analysis of predictive (alpha) stock factors. Alphalens works great with the Zipline open source backtesting library and with Pyfolio, which provides performance and risk analysis of financial portfolios. You can try Alphalens at Quantopian -- a free, community-centered, hosted platform for researching and testing alpha ideas. Quantopian also offers a fully managed service for professionals that includes Zipline, Alphalens, Pyfolio, FactSet data, and more.

The main function of Alphalens is to surface the most relevant statistics and plots about an alpha factor, including:

  • Returns Analysis
  • Information Coefficient Analysis
  • Turnover Analysis
  • Grouped Analysis

Getting started

With a signal and pricing data, creating a factor "tear sheet" is a two-step process:

import alphalens

# Ingest and format data
factor_data = alphalens.utils.get_clean_factor_and_forward_returns(my_factor,
                                                                   pricing,
                                                                   quantiles=5,
                                                                   groupby=ticker_sector,
                                                                   groupby_labels=sector_names)

# Run analysis
alphalens.tears.create_full_tear_sheet(factor_data)
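
For reference, here is a minimal sketch of plausible inputs, with hypothetical tickers, sectors, and values chosen purely for illustration: my_factor is a Series indexed by (date, asset), pricing is a wide DataFrame of prices that extends past the factor dates, and the groupby arguments are optional mappings.

import numpy as np
import pandas as pd

# Hypothetical toy inputs, for illustration only
dates = pd.date_range('2020-01-02', periods=10, freq='B')
tickers = ['AAPL', 'MSFT', 'XOM', 'JPM', 'PG']

# my_factor: one factor value per asset per day, indexed by (date, asset)
index = pd.MultiIndex.from_product([dates, tickers], names=['date', 'asset'])
my_factor = pd.Series(np.random.randn(len(index)), index=index)

# pricing: dates on the index, assets as columns; it must extend beyond the
# factor dates so forward returns can be computed
price_dates = pd.date_range('2020-01-02', periods=20, freq='B')
pricing = pd.DataFrame(100 + np.random.randn(len(price_dates), len(tickers)).cumsum(axis=0),
                       index=price_dates, columns=tickers)

# optional grouping inputs: asset -> sector code, sector code -> label
ticker_sector = {'AAPL': 0, 'MSFT': 0, 'XOM': 1, 'JPM': 2, 'PG': 3}
sector_names = {0: 'tech', 1: 'energy', 2: 'financials', 3: 'staples'}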

Learn more

Check out the example notebooks for more on how to read and use the factor tear sheet; the notebooks in the examples directory are a good starting point.

Installation

Install with pip:

pip install alphalens

Install with conda:

conda install -c conda-forge alphalens

Install from the master branch of the Alphalens repository (development code):

pip install git+https://github.com/quantopian/alphalens

Alphalens depends on:

  • matplotlib
  • numpy
  • pandas
  • scipy
  • seaborn
  • statsmodels

Usage

A good way to get started is to run the examples in a Jupyter notebook.

To get set up with an example, you can:

Run a Jupyter notebook server via:

jupyter notebook

From the notebook list page (usually found at http://localhost:8888/), navigate to the examples directory and open any file with a .ipynb extension.

Execute the code in a notebook cell by clicking on it and hitting Shift+Enter.

Questions?

If you find a bug, feel free to open an issue on our GitHub issue tracker.

Contribute

If you want to contribute, a great place to start would be the help-wanted issues.

Credits

For a full list of contributors see the contributors page.

Example Tear Sheet

Example factor courtesy of ExtractAlpha

https://github.com/quantopian/alphalens/raw/master/alphalens/examples/table_tear.png

https://github.com/quantopian/alphalens/raw/master/alphalens/examples/returns_tear.png

https://github.com/quantopian/alphalens/raw/master/alphalens/examples/ic_tear.png

Owner
Quantopian, Inc.
Quantopian builds software tools and libraries for quantitative finance.
Comments
  • Feature request: Use alphalens with returns instead of prices

    Is there a way to run Alphalens using my own custom input matrix of returns (i.e. custom factor-adjusted) rather than inputting prices (i.e. the "pricing" table in the examples) for each period?

  • Integration with pyfolio and Quantopian's new Risk Model

    I have been thinking about the nice "Alpha decomposition" #180 feature. The information it provides is actually a small part of what is already available in pyfolio and Quantopian's new Risk Model. On one hand we cannot replicate all the information provided by those two tools, but on the other hand it would be great to have all that analysis without having to build an algorithm and run a backtest, something that could be integrated into Alphalens.

    Then, why don't we create a function in Alphalens that builds the input required by pyfolio and Quantopian's new Risk Model? Alphalens already simulates the cumulative returns of a portfolio weighted by factor values, so we only need to format that information in a way that is compatible with the other two tools. It would be a purely theoretical analysis, but apart from commissions and slippage the results would be realistic, and it would also serve as a benchmark for the algorithm (users can compare the algorithm results, after setting commission and slippage to 0, with these theoretical results and check whether they implemented the algorithm correctly).

    I haven't looked at pyfolio in detail so I don't know the specifics of the input, but if @twiecki can help me with those details I can work on this feature, and the same goes for Quantopian's new Risk Model (I don't know if that is part of pyfolio or a separate project).

  •  ENH: added positions computation in 'performance.create_pyfolio_input'

    'performance.create_pyfolio_input' now computes positions too. It is also now possible to select the 'period' to be used in the benchmark computation, and for factor returns/positions it is now possible to select equal weighting instead of factor weighting.
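
    As a hedged sketch only (parameter and return names are assumed from the description above, not verified against the final code), the updated function might be used like this:

    import alphalens
    import pyfolio

    # factor_data as returned by alphalens.utils.get_clean_factor_and_forward_returns(...)
    pf_returns, pf_positions, pf_benchmark = alphalens.performance.create_pyfolio_input(
        factor_data,
        period='1D',
        benchmark_period='1D',   # assumed name of the new benchmark-period option
        long_short=True,
        equal_weight=True)       # assumed name of the equal-weighting option

    pyfolio.tears.create_full_tear_sheet(pf_returns,
                                         positions=pf_positions,
                                         benchmark_rets=pf_benchmark)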

  • new api

    Here is where I'm headed with the new API thoughts from https://github.com/quantopian/alphalens/pull/110. To review: creating a tear sheet would be a two-step process,

    1. format the data
    2. call for the tear sheet

    where all of the tear sheets take a factor_data dataframe that looks something like the screenshot attached to the PR (see the sketch below).

    Right now the main blocking issue is the event-study-esque plots as that function requires raw prices. I think that the number of plots and the uniqueness of what they are saying probably merits them getting a separate tear sheet (which would be able to take raw prices).
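
    The screenshot is not reproduced here, but the factor_data frame being discussed is indexed by (date, asset), with one column per forward-return period plus the factor itself; roughly (values illustrative):

    #                              1D        5D       10D    factor  group  factor_quantile
    # date        asset
    # 2017-01-13  AAPL        0.00212   0.01523   0.02411   0.51231   TECH                5
    #             MSFT       -0.00131   0.00482   0.00912   0.01452   TECH                3
    #             ...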

  • API: run Alphalens with returns instead of prices (utils.get_clean_factor)

    For issue #253: refactor compute_forward_returns, add a get_clean_factor API, and refactor get_clean_factor_and_forward_returns as the composition of compute_forward_returns and get_clean_factor.
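
    A hedged usage sketch of the resulting split, assuming signatures along the lines of compute_forward_returns(factor, prices, periods) and get_clean_factor(factor, forward_returns, ...), with my_factor and pricing as in the README example:

    import alphalens

    # Option A: derive forward returns from prices, as before
    forward_returns = alphalens.utils.compute_forward_returns(my_factor, pricing,
                                                               periods=(1, 5, 10))

    # Option B: supply your own forward-returns frame (e.g. custom factor-adjusted
    # returns), indexed by (date, asset) with one column per period, skipping prices
    factor_data = alphalens.utils.get_clean_factor(my_factor, forward_returns,
                                                   quantiles=5)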

  • Create sub tearsheets

    This breaks down the full tear sheet into multiple smaller ones covering: returns, information, and turnover analysis.

    This PR is aimed mainly at issue #106, so Thomas, your thoughts would be great!

    This is a first start, so things are pretty rough, and it definitely won't pass tests, but I think there will be a lot of discussion, so I'm not too worried.
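
    For orientation, a sketch of how the smaller tear sheets might be called, assuming they keep the factor_data-based API of the full tear sheet:

    import alphalens

    # Each sub tear sheet takes the same factor_data frame as the full tear sheet
    alphalens.tears.create_returns_tear_sheet(factor_data)
    alphalens.tears.create_information_tear_sheet(factor_data)
    alphalens.tears.create_turnover_tear_sheet(factor_data)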

  • Two Factor Interaction Development: Initial Data Structure, Test Module, and Plot

    This begins the development of the "factors_interaction_tearsheet" from issue #219. The goal of this pull request is to get feedback on whether this branch seems to be going in the right direction.

    Description of Changes

    1. Create join_factor_with_factor_and_forward_returns function
      • Creates a function complementary to get_clean_factor_and_forward_returns that joins an additional factor to the factor_data dataframe returned by get_clean_factor_and_forward_returns (see the usage sketch after this list).
      • The new dataframe returned, call it "multi_factor_data", will be the core data structure providing the necessary data for the factors_interaction_tearsheet computations.
    2. Create an associated test module.
    3. Modify perf.mean_return_by_quantile to take an additional parameter so that it can group by multiple factor quantiles.
    4. Add first plotting function, plot_multi_factor_quantile_returns, to create an annotated heatmap of mean returns by two-factor quantile bins.
    5. Create the tears.create_factors_interaction_tear_sheet as the entry point to the multi-factor tearsheet.
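
    A purely hypothetical usage sketch of the proposed API (names taken from this PR only; they may never have shipped in this form, and factor_a, factor_b, and pricing are placeholder inputs):

    # Hypothetical: join a second factor onto an existing factor_data frame
    factor_data = alphalens.utils.get_clean_factor_and_forward_returns(factor_a, pricing,
                                                                       quantiles=5)
    multi_factor_data = alphalens.utils.join_factor_with_factor_and_forward_returns(
        factor_data, factor_b, quantiles=5)   # proposed helper

    alphalens.tears.create_factors_interaction_tear_sheet(multi_factor_data)  # proposed entry point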

    Requesting Feedback

    1. Comments and suggestions on the utils.join_factor_with_factor_and_forward_returns function
      1. Should there be a wrapper that builds the multi_factor_data dataframe in one step (i.e. wrap this function with get_clean_factor_and_forward_returns)?
    2. I'm not too familiar with creating effective unit tests, so any feedback on this module is appreciated.
    3. In regards to Change 3 above:
      1. My first thought, following suggestion of @luca-s, was to create a separate performance module which would contain all functions for this sort of computation.
      2. Since the existing performance module already contains a lot of the needed functionality, I thought maybe I would create a wrapper function in this new module that added the necessary functionality.
      3. However, in perf.mean_return_by_quantile, I needed to add a parameter to this function to make it work in a clean manner. Not sure how I could have done that with a wrapper.
      4. So I guess my question is, what are the community's thoughts on how I dealt with this particular issue, and also what are thoughts on related considerations going forward?
    4. Any other comments/guidance on path of development going forward is greatly appreciated.
    5. Also, let me know if there are too many changes in this pull request for efficient/easy review.
  • Added support for group neutral factor analysis

    I am working on a sector-neutral factor and I discovered that Alphalens analysis of a group neutral factor is currently limited. By a group neutral factor I mean a factor intended to rank stocks within the same group, so that it makes sense to compare the performance of top vs. bottom stocks in the same group, but it does not make sense to compare the performance of a stock in one group with the performance of another group.

    The main shortcoming is the return analysis: as there is no way to demean the returns by group, the statistics shown are of little use. Also when the plots are broken down by group, the results are not useful either as there are 2 bugs:

    • The API documentation claims that perf.mean_return_by_quantile performs the demeaning at group level when splitting the plots by group, but even in that case it is not true
    • The same goes for perf.mean_information_coefficient

    Also changed the API arguments names in a consistent way:

    • the tears.* functions use 'long_short' and 'group_neutral' throughout, as those functions are intended to be the top-level ones; 'by_group' is used if the function supports multiple plot outputs, one for each group
    • the remaining API (mostly performance.*) uses 'demeaned' and 'group_adjust' as before
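
    A hedged sketch of a sector-neutral run with the flags described above (argument names as used in this PR; binning_by_group is an assumed option for within-group quantiles, and my_factor, pricing, ticker_sector are as in the README example):

    factor_data = alphalens.utils.get_clean_factor_and_forward_returns(
        my_factor, pricing,
        groupby=ticker_sector,     # asset -> sector mapping
        binning_by_group=True,     # assumed: quantiles computed within each sector
        quantiles=5)

    alphalens.tears.create_full_tear_sheet(factor_data,
                                           long_short=True,
                                           group_neutral=True,  # demean returns within each group
                                           by_group=True)       # also break plots down by group
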
  • ENH Pyfolio integration

    Issue #225

    Added performance.create_pyfolio_input, which creates input suitable for pyfolio. Intended use:

     import alphalens
     import pyfolio

     factor_data = alphalens.utils.get_clean_factor_and_forward_returns(factor, prices)

     pf_returns = alphalens.performance.create_pyfolio_input(factor_data, '1D',
                                                             long_short=True,
                                                             group_neutral=False,
                                                             quantiles=None,
                                                             groups=None)

     pyfolio.tears.create_returns_tear_sheet(pf_returns)
    

    Also, I greatly improved the function that computes the cumulative returns, as that is the base on which the pyfolio integration feature is built. Mainly, I removed the legacy assumptions that required the factor to be computed at a specific frequency (daily, weekly, etc.) and the periods to be multiples of that frequency (2 days, 4 days, etc.). The function is now able to properly compute returns when there are gaps in the factor data or when we analyze an intraday factor.

  • BUG: Calculate mean returns by date before mean return by quantile

    Compute the mean return by date. If the by_date flag is false, then compute and return the average of the daily mean returns (and also the standard error of these daily mean returns).

    Resolves: Issue #309
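
    A hedged pandas sketch of the behaviour described, assuming the usual factor_data layout with a '1D' forward-return column and a (date, asset) index:

    daily_mean = (factor_data
                  .groupby(['factor_quantile',
                            factor_data.index.get_level_values('date')])['1D']
                  .mean())                                         # mean return per quantile per day

    mean_ret = daily_mean.groupby(level='factor_quantile').mean()  # average of the daily means
    std_err = daily_mean.groupby(level='factor_quantile').sem()    # standard error of daily means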

  • Will alphalens support multi-factor models in the future?

    alphalens is awesome! I have been using alphalens for some time to filter several effective factors from many candidates in the stock market. However, that only gives me several effective factors independently. In practice, the more common scenario is to build a multi-factor linear model (for example the Fama-French 3-factor model), regress against it, and do hypothesis testing on that multi-factor model, rather than on single-factor models. Will multi-factor models be considered for addition to alphalens in the future?
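
    Alphalens itself does not fit joint multi-factor regressions today; as a sketch of what the question describes, a Fama-French-style regression can be run directly with statsmodels on toy data (all numbers below are synthetic):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    factors = pd.DataFrame(rng.normal(size=(250, 3)), columns=['MKT', 'SMB', 'HML'])
    returns = 0.6 * factors['MKT'] + 0.2 * factors['SMB'] + rng.normal(scale=0.01, size=250)

    # Joint regression: loadings and hypothesis tests for all factors at once
    model = sm.OLS(returns, sm.add_constant(factors)).fit()
    print(model.summary())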

  • fix 'Index' object has no attribute 'get_values' bug

    Invoking alphalens.tears.create_turnover_tear_sheet() without passing the turnover_period parameter may raise AttributeError: 'Index' object has no attribute 'get_values'. This is because utils.get_forward_returns_columns() returns the columns as an Index object instead of a pd.Series, and Index no longer has a get_values() method.
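
    A minimal sketch of the incompatibility (Index.get_values() was removed in pandas 1.0, so the call has to go through a portable accessor):

    import pandas as pd

    cols = pd.Index(['1D', '5D', '10D'])   # what get_forward_returns_columns() hands back
    # cols.get_values()                    # AttributeError on pandas >= 1.0
    values = cols.to_numpy()               # portable replacement (or list(cols))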

  • importing alphalens 0.3.6 gets error "No module named 'pandas.util._decorators'"

    Problem Description

    I've tried importing alphalens but encountered "No module named 'pandas.util._decorators'" at the very first command, "import alphalens". Please provide a minimal, self-contained, and reproducible example:

    import alphalens
    

    Please provide the full traceback:

    ModuleNotFoundError                       Traceback (most recent call last)
    <ipython-input-21-6e4fa055c088> in <module>
    ----> 1 import alphalens
    
    e:\temp\Python36\lib\site-packages\alphalens\__init__.py in <module>
    ----> 1 from . import performance
          2 from . import plotting
          3 from . import tears
          4 from . import utils
          5 
    
    e:\temp\Python36\lib\site-packages\alphalens\performance.py in <module>
         20 from pandas.tseries.offsets import BDay
         21 from scipy import stats
    ---> 22 from statsmodels.regression.linear_model import OLS
         23 from statsmodels.tools.tools import add_constant
         24 from . import utils
    
    e:\temp\Python36\lib\site-packages\statsmodels\regression\__init__.py in <module>
    ----> 1 from .linear_model import yule_walker
          2 
          3 from statsmodels.tools._testing import PytestTester
          4 
          5 __all__ = ['yule_walker', 'test']
    
    e:\temp\Python36\lib\site-packages\statsmodels\regression\linear_model.py in <module>
         34 
         35 from statsmodels.compat.python import lrange, lzip
    ---> 36 from statsmodels.compat.pandas import Appender
         37 
         38 import numpy as np
    
    e:\temp\Python36\lib\site-packages\statsmodels\compat\__init__.py in <module>
    ----> 1 from statsmodels.tools._testing import PytestTester
          2 
          3 from .python import (
          4     PY37,
          5     asunicode, asbytes, asstr,
    
    e:\temp\Python36\lib\site-packages\statsmodels\tools\__init__.py in <module>
          1 from .tools import add_constant, categorical
    ----> 2 from statsmodels.tools._testing import PytestTester
          3 
          4 __all__ = ['test', 'add_constant', 'categorical']
          5 
    
    e:\temp\Python36\lib\site-packages\statsmodels\tools\_testing.py in <module>
          9 
         10 """
    ---> 11 from statsmodels.compat.pandas import assert_equal
         12 
         13 import os
    
    e:\temp\Python36\lib\site-packages\statsmodels\compat\pandas.py in <module>
          3 import numpy as np
          4 import pandas as pd
    ----> 5 from pandas.util._decorators import deprecate_kwarg, Appender, Substitution
          6 
          7 __all__ = ['assert_frame_equal', 'assert_index_equal', 'assert_series_equal',
    
    ModuleNotFoundError: No module named 'pandas.util._decorators'
    

    Please provide any additional information below: I'm using VS Code on Win10. I tried updating pip, pandas, numpy, and alphalens to newer versions, still no luck. Please give me a hand with this, thanks in advance.

    Versions

    • Alphalens version: 0.3.6
    • Python version: 3.6.8
    • Pandas version: 0.18.1
    • Matplotlib version: 3.3.4
    • Numpy: 1.17.0
    • Scipy: 1.0.0
    • Statsmodels: 0.12.2
    • Zipline: 1.2.0
  • MissingDataError: exog contains inf or nans

    I am getting the MissingDataError: exog contains inf or nans when I am trying to get the returns tearsheet. The input data from get_clean_factor_and_forward_returns does not have any nans or infs in it. I saw mentions of this error on the Quantopian forum (https://quantopian-archive.netlify.app/forum/threads/alphalens-giving-exog-contains-inf-or-nans.html) and a couple of other places, but no solution. Any thoughts? Thank you!
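
    A hedged diagnostic sketch: the OLS behind the returns tear sheet also sees the forward-return columns of factor_data, so it is worth checking those for non-finite values even when the factor column looks clean (column names below are the usual defaults):

    import numpy as np

    ret_cols = [c for c in factor_data.columns
                if c not in ('factor', 'factor_quantile', 'group')]
    print(np.isfinite(factor_data[ret_cols]).all())    # any False -> an inf/NaN slipped in
    print(factor_data[ret_cols].replace([np.inf, -np.inf], np.nan).isna().sum())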

  • Alphalens

    Problem Description

    UnboundLocalError: local variable 'period_len' referenced before assignment, line 319

    days_diffs = []
    for i in range(30):
        if i >= len(forward_returns.index):
            break
        p_idx = prices.index.get_loc(forward_returns.index[i])
        if p_idx is None or p_idx < 0 or (
                p_idx + period) >= len(prices.index):
            continue
        start = prices.index[p_idx]
        end = prices.index[p_idx + period]
        period_len = diff_custom_calendar_timedeltas(start, end, freq)
        days_diffs.append(period_len.components.days)

    # period_len is only assigned inside the loop, so if every iteration breaks or
    # continues, it is still unbound here and the line below raises the error
    delta_days = period_len.components.days - mode(days_diffs).mode[0]
    period_len -= pd.Timedelta(days=delta_days)
    label = timedelta_to_string(period_len)

    Please provide the full traceback:

    [Paste traceback here]
    

    Please provide any additional information below:

    Versions

    • Alphalens version:
    • Python version: 3.7
    • Pandas version:
    • Matplotlib version:
  • Problem: create_event_returns_tear_sheet

    Problem Description

    When I use alphalens.tears.create_event_returns_tear_sheet, it shows one error: unsupported operand type(s) for -: 'slice' and 'int', while the other functions work well. Looking forward to someone helping me with this. Thank you.

    Please provide a minimal, self-contained, and reproducible example:

    alphalens.tears.create_event_returns_tear_sheet(data,
                                                    returns,
                                                    avgretplot=(5, 15),
                                                    long_short=True,
                                                    group_neutral=False,
                                                    std_bar=True,
                                                    by_group=False)

    Please provide the full traceback:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    D:\Anaconda\lib\site-packages\pandas\core\groupby\groupby.py in apply(self, func, *args, **kwargs)
        734             try:
    --> 735                 result = self._python_apply_general(f)
        736             except TypeError:
    
    D:\Anaconda\lib\site-packages\pandas\core\groupby\groupby.py in _python_apply_general(self, f)
        750     def _python_apply_general(self, f):
    --> 751         keys, values, mutated = self.grouper.apply(f, self._selected_obj, self.axis)
        752 
    
    D:\Anaconda\lib\site-packages\pandas\core\groupby\ops.py in apply(self, f, data, axis)
        205             group_axes = group.axes
    --> 206             res = f(group)
        207             if not _is_indexed_like(res, group_axes):
    
    D:\Anaconda\lib\site-packages\pandas\core\groupby\groupby.py in f(g)
        718                     with np.errstate(all="ignore"):
    --> 719                         return func(g, *args, **kwargs)
        720 
    
    D:\Anaconda\lib\site-packages\alphalens\performance.py in average_cumulative_return(q_fact, demean_by)
        802     def average_cumulative_return(q_fact, demean_by):
    --> 803         q_returns = cumulative_return_around_event(q_fact, demean_by)
        804         q_returns.replace([np.inf, -np.inf], np.nan, inplace=True)
    
    D:\Anaconda\lib\site-packages\alphalens\performance.py in cumulative_return_around_event(q_fact, demean_by)
        798             mean_by_date=True,
    --> 799             demean_by=demean_by,
        800         )
    
    D:\Anaconda\lib\site-packages\alphalens\performance.py in common_start_returns(factor, returns, before, after, cumulative, mean_by_date, demean_by)
        701 
    --> 702         starting_index = max(day_zero_index - before, 0)
        703         ending_index = min(day_zero_index + after + 1,
    
    TypeError: unsupported operand type(s) for -: 'slice' and 'int'
    
    During handling of the above exception, another exception occurred:
    
    TypeError                                 Traceback (most recent call last)
    <ipython-input-46-6fc348201897> in <module>
          5                                                 group_neutral=False,
          6                                                 std_bar=True,
    ----> 7                                                 by_group=False)
    
    D:\Anaconda\lib\site-packages\alphalens\plotting.py in call_w_context(*args, **kwargs)
         43             with plotting_context(), axes_style(), color_palette:
         44                 sns.despine(left=True)
    ---> 45                 return func(*args, **kwargs)
         46         else:
         47             return func(*args, **kwargs)
    
    D:\Anaconda\lib\site-packages\alphalens\tears.py in create_event_returns_tear_sheet(factor_data, returns, avgretplot, long_short, group_neutral, std_bar, by_group)
        573         periods_after=after,
        574         demeaned=long_short,
    --> 575         group_adjust=group_neutral,
        576     )
        577 
    
    D:\Anaconda\lib\site-packages\alphalens\performance.py in average_cumulative_return_by_quantile(factor_data, returns, periods_before, periods_after, demeaned, group_adjust, by_group)
        858         elif demeaned:
        859             fq = factor_data['factor_quantile']
    --> 860             return fq.groupby(fq).apply(average_cumulative_return, fq)
        861         else:
        862             fq = factor_data['factor_quantile']
    
    D:\Anaconda\lib\site-packages\pandas\core\groupby\generic.py in apply(self, func, *args, **kwargs)
        222     )
        223     def apply(self, func, *args, **kwargs):
    --> 224         return super().apply(func, *args, **kwargs)
        225 
        226     @Substitution(
    
    D:\Anaconda\lib\site-packages\pandas\core\groupby\groupby.py in apply(self, func, *args, **kwargs)
        744 
        745                 with _group_selection_context(self):
    --> 746                     return self._python_apply_general(f)
        747 
        748         return result
    
    D:\Anaconda\lib\site-packages\pandas\core\groupby\groupby.py in _python_apply_general(self, f)
        749 
        750     def _python_apply_general(self, f):
    --> 751         keys, values, mutated = self.grouper.apply(f, self._selected_obj, self.axis)
        752 
        753         return self._wrap_applied_output(
    
    D:\Anaconda\lib\site-packages\pandas\core\groupby\ops.py in apply(self, f, data, axis)
        204             # group might be modified
        205             group_axes = group.axes
    --> 206             res = f(group)
        207             if not _is_indexed_like(res, group_axes):
        208                 mutated = True
    
    D:\Anaconda\lib\site-packages\pandas\core\groupby\groupby.py in f(g)
        717                 def f(g):
        718                     with np.errstate(all="ignore"):
    --> 719                         return func(g, *args, **kwargs)
        720 
        721             elif hasattr(nanops, "nan" + func):
    
    D:\Anaconda\lib\site-packages\alphalens\performance.py in average_cumulative_return(q_fact, demean_by)
        801 
        802     def average_cumulative_return(q_fact, demean_by):
    --> 803         q_returns = cumulative_return_around_event(q_fact, demean_by)
        804         q_returns.replace([np.inf, -np.inf], np.nan, inplace=True)
        805 
    
    D:\Anaconda\lib\site-packages\alphalens\performance.py in cumulative_return_around_event(q_fact, demean_by)
        797             cumulative=True,
        798             mean_by_date=True,
    --> 799             demean_by=demean_by,
        800         )
        801 
    
    D:\Anaconda\lib\site-packages\alphalens\performance.py in common_start_returns(factor, returns, before, after, cumulative, mean_by_date, demean_by)
        700             continue
        701 
    --> 702         starting_index = max(day_zero_index - before, 0)
        703         ending_index = min(day_zero_index + after + 1,
        704                            len(returns.index))
    
    TypeError: unsupported operand type(s) for -: 'slice' and 'int'
    
    <Figure size 432x288 with 0 Axes>

    Please provide any additional information below:

    Versions

    • Alphalens version: 0.4.0
    • Python version: 3.7.6
    • Pandas version: 1.0.1
    • Matplotlib version: 3.1.3
    