scikit-fda: Functional Data Analysis in Python

Functional Data Analysis, or FDA, is the field of Statistics that analyses data that depend on a continuous parameter.

This package offers classes, methods and functions to support FDA in Python. It includes a wide range of tools to work with functional data and its representation, as well as exploratory analysis and preprocessing, and other tasks such as inference, classification, regression and clustering of functional data. See the documentation for further information on the features included in the package.

Documentation

The documentation is available at fda.readthedocs.io/en/stable/, and includes detailed information on the different modules, classes and methods of the package, along with several examples showing different functionalities.

The documentation of the latest version, corresponding to the develop branch of the package, can be found at fda.readthedocs.io/en/latest/.

Installation

Currently, scikit-fda is available for Python 3.6 and 3.7, regardless of the platform. The stable version can be installed via PyPI:

pip install scikit-fda

Installation from source

It is possible to install the latest version of the package, available in the develop branch, by cloning this repository and doing a manual installation.

git clone https://github.com/GAA-UAM/scikit-fda.git
pip install ./scikit-fda

Make sure that your default Python version is currently supported, or invoke python and pip with an explicit version, such as python3.6:

git clone https://github.com/GAA-UAM/scikit-fda.git
python3.6 -m pip install ./scikit-fda

Requirements

scikit-fda's dependencies are listed in the package metadata; they are installed automatically along with the package.

Contributions

All contributions are welcome. You can help this project grow in multiple ways: creating an issue, reporting a bug or suggesting an improvement, or forking the repository and opening a pull request against the development branch.

The people involved at some point in the development of the package can be found in the contributors file.

License

The package is licensed under the BSD 3-Clause License. A copy of the license can be found along with the code.

Owner
Machine Learning Group (Grupo de Aprendizaje Automático), Universidad Autónoma de Madrid
Comments
  • Feature/improve visualization

    The plotting functions have been changed so that they no longer use global state (via pyplot) whenever possible. Instead, the plotting functions in skfda create and return a new figure if none is passed.

    Pyplot is still used for Sphinx-Gallery, as it has to scrape the produced images.

    All the examples have been changed to use the new API and the object-oriented functionality of Matplotlib.
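The pattern described above can be sketched with a hypothetical helper (for illustration only, not skfda's actual function):

```python
import matplotlib

matplotlib.use("Agg")  # headless backend, so no display is needed
import matplotlib.pyplot as plt


def plot_curve(x, y, *, fig=None):
    """Plot on the given figure, creating a new one only if none is passed.

    This avoids relying on pyplot's global "current figure" state.
    """
    if fig is None:
        fig = plt.figure()
    ax = fig.axes[0] if fig.axes else fig.add_subplot()
    ax.plot(x, y)
    return fig
```

A caller can then compose several plots on one figure simply by passing the same fig object to successive calls.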

  • Retrieve coefficients for function reconstruction

    Hey there!

    I fitted a KNN FDataGrid to my input data. It actually looks pretty good so far, and I would now like to "export" it so I can represent the function as numerical values (preferably in a numpy array).

    I saw that you offer some bases that can be used to "export" the underlying representation. Could you elaborate on which basis should be used when? My data represents a demand/supply curve. I tried the BSpline one, but it only constructs something close to a sine wave, which doesn't really represent my data.

    Here is an image of the graph itself: grafik

    Is there some way to get the raw representation instead of transforming it to another basis?
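For context: the raw discretized values of an FDataGrid live in its data_matrix attribute, while converting to a basis amounts to a least-squares fit whose coefficients can then be exported. A sketch of such a fit using scipy directly (not skfda's API):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

x = np.linspace(0, 1, 100)
y = x ** 2  # a sample curve

# Cubic B-spline knot vector: repeated boundary knots plus interior knots
t = np.concatenate(([0.0] * 4, [0.25, 0.5, 0.75], [1.0] * 4))
spline = make_lsq_spline(x, y, t, k=3)

coef = spline.c  # the exportable coefficients, a plain numpy array
print(coef.shape)  # (7,)
```

If a BSpline fit looks wrong (e.g. a sine-like shape), the basis is usually too small; adding interior knots (more basis functions) yields a closer fit.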

  • add inverse_transform #297 method to FPCA

    Reference issue: https://github.com/GAA-UAM/scikit-fda/issues/297#issue-775429060

    What does it address? The inverse_transform method maps the functional principal component scores of a sample (or set of samples), i.e. the coefficients w.r.t. the eigenfunction basis, back to the input space. In other words, FPCA.inverse_transform(FPCA.transform(X)) should approximate the identity map.

    Empirical tests: I've only tested the inverse_transform method on synthetic datasets (both FDataGrid and FDataBasis) and assessed the results in terms of 'identity regression', i.e. with QQ-plots of the distribution of the residuals, where residuals = X_input.value - X_recovered.value and:

    • X_recovered is computed as FPCA.inverse_transform(FPCA.transform(X_input)),
    • value is .coefficients or .data_matrix attribute, depending on the input data format.

    X_input is generated according to:

    import numpy as np

    from skfda.datasets import make_gaussian_process
    from skfda.misc.covariances import Exponential

    cov = Exponential(length_scale=0.5)
    n_grid = 100
    n_samples = 50

    fd = make_gaussian_process(
        start=0,
        stop=4,
        n_samples=n_samples,
        n_features=n_grid,
        mean=lambda t: np.power(t, 2) + 5,
        cov=cov,
    )
    

    Below are the resulting QQ-plots (computed with scipy.stats.probplot) when X_recovered is computed with 3 principal components:

    • When fd is left in FDataGrid format: slope ~ 0.675, intercept ~ -3.e-17 and R^2 ~ 0.9994. Note that the residuals are computed in terms of functional data values. qqplot-fdatagrid

    • When fd is mapped to FDataBasis format with a 4th-order BSpline basis of cardinality 50: slope ~ 1.08, intercept ~ -5.e-17 and R^2 ~ 0.9996. Note that the residuals are computed in terms of the coefficients in the (here BSpline) basis. qqplot-fdatabasis

    I can extend the PR and provide further empirical results, just tell me :)

  • ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

    Scikit-fda is successfully installed, but when I try to import it, I receive the following error:

    ValueError                                Traceback (most recent call last)
    <ipython-input-17-97c44e0d3210> in <module>
    ----> 1 import skfda
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/skfda/__init__.py in <module>
         35 from .representation._functional_data import concatenate
         36 
    ---> 37 from . import representation, datasets, preprocessing, exploratory, misc, ml, \
         38     inference
         39 
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/skfda/datasets/__init__.py in <module>
          5                              fetch_weather, fetch_aemet,
          6                              fetch_octane, fetch_gait)
    ----> 7 from ._samples_generators import (make_gaussian,
          8                                   make_gaussian_process,
          9                                   make_sinusoidal_process,
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/skfda/datasets/_samples_generators.py in <module>
          7 from .. import FDataGrid
          8 from .._utils import _cartesian_product
    ----> 9 from ..misc import covariances
         10 from ..preprocessing.registration import normalize_warping
         11 from ..representation.interpolation import SplineInterpolation
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/skfda/misc/__init__.py in <module>
    ----> 1 from . import covariances, kernels, metrics
          2 from . import operators
          3 from . import regularization
          4 from ._math import (log, log2, log10, exp, sqrt, cumsum,
          5                     inner_product, inner_product_matrix)
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/skfda/misc/covariances.py in <module>
          7 import sklearn.gaussian_process.kernels as sklearn_kern
          8 
    ----> 9 from ..exploratory.visualization._utils import _create_figure, _figure_to_svg
         10 
         11 
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/skfda/exploratory/__init__.py in <module>
          1 from . import depth
    ----> 2 from . import outliers
          3 from . import stats
          4 from . import visualization
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/skfda/exploratory/outliers/__init__.py in <module>
          4 )
          5 from ._iqr import IQROutlierDetector
    ----> 6 from .neighbors_outlier import LocalOutlierFactor
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/skfda/exploratory/outliers/neighbors_outlier.py in <module>
          2 from sklearn.base import OutlierMixin
          3 
    ----> 4 from ...misc.metrics import lp_distance
          5 from ...ml._neighbors_base import (
          6     KNeighborsMixin,
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/skfda/misc/metrics.py in <module>
          6 
          7 from .._utils import _pairwise_commutative
    ----> 8 from ..preprocessing.registration import normalize_warping, ElasticRegistration
          9 from ..preprocessing.registration._warping import _normalize_scale
         10 from ..preprocessing.registration.elastic import SRSF
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/skfda/preprocessing/__init__.py in <module>
    ----> 1 from . import registration
          2 from . import smoothing
          3 from . import dim_reduction
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/skfda/preprocessing/registration/__init__.py in <module>
         14 from ._warping import invert_warping, normalize_warping
         15 
    ---> 16 from .elastic import ElasticRegistration
         17 
         18 from . import validation, elastic
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/skfda/preprocessing/registration/elastic.py in <module>
          1 
    ----> 2 from fdasrsf.utility_functions import optimum_reparam
          3 import scipy.integrate
          4 from sklearn.base import BaseEstimator, TransformerMixin
          5 from sklearn.utils.validation import check_is_fitted
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/fdasrsf/__init__.py in <module>
         20 del sys
         21 
    ---> 22 from .time_warping import fdawarp, align_fPCA, align_fPLS, pairwise_align_bayes
         23 from .plot_style import f_plot, rstyle, plot_curve, plot_reg_open_curve, plot_geod_open_curve, plot_geod_close_curve
         24 from .utility_functions import smooth_data, optimum_reparam, f_to_srsf, gradient_spline, elastic_distance, invertGamma, srsf_to_f
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/fdasrsf/time_warping.py in <module>
          7 import numpy as np
          8 import matplotlib.pyplot as plt
    ----> 9 import fdasrsf.utility_functions as uf
         10 import fdasrsf.fPCA as fpca
         11 import fdasrsf.geometry as geo
    
    ~/opt/anaconda3/envs/elephant/lib/python3.7/site-packages/fdasrsf/utility_functions.py in <module>
         17 from joblib import Parallel, delayed
         18 import numpy.random as rn
    ---> 19 import optimum_reparamN2 as orN2
         20 import optimum_reparam_N as orN
         21 import cbayesian as bay
    
    src/optimum_reparamN2.pyx in init optimum_reparamN2()
    
    ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
    

    I'm using the standard format

    import skfda
    

    I don't know why this is happening, any help is greatly appreciated

    Version information

    • OS: MacOS
    • Python version: 3.7.9
    • scikit-fda version: 0.5
    • Version of other packages involved: [numpy: 1.19.2, scipy: 1.6.0, matplotlib: 3.3.4 , conda: 4.9.2 ]
  • ModuleNotFoundError: No module named 'optimum_reparam'

    Hi, I just forked/cloned the repository and found a module that is not listed in the dependencies. It's called "optimum_reparam" and it's imported right here, and maybe elsewhere. I failed to find the module myself; could someone provide details about it in the README requirements section, please?

    Thanks for this initiative!

  • Problem compiling binaries in macOS

    Describe the bug Problem compiling C code from fdasrsf:

    error: $MACOSX_DEPLOYMENT_TARGET mismatch: now "10.14" but "10.15" during configure

    The fdasrsf package pins the variable with os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.14' in its setup.py. After cloning the repository and removing that line, I get the same error. I also tried exporting the environment variable before running the installation, with no success.

    Complete trace:

    Building wheel for fdasrsf (PEP 517) ... error ERROR: Command errored out with exit status 1: command: /Users/pablomm/scikit_fda_test2/venv/bin/python /Users/pablomm/scikit_fda_test2/venv/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /var/folders/tk/bl_pbdpj6kb7r3s584k85jxw0000gn/T/tmpo2_rkkrs cwd: /private/var/folders/tk/bl_pbdpj6kb7r3s584k85jxw0000gn/T/pip-install-ym8nls3t/fdasrsf_a923c378ec5a4ec689f1d7459d35c8c7 Complete output (38 lines): generating build/_DP.c (already up-to-date) running bdist_wheel running build running build_py creating build/lib.macosx-10.15-x86_64-3.8 creating build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/plot_style.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/curve_stats.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/curve_functions.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/geodesic.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/utility_functions.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/regression.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/tolerance.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/pcr_regression.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/init.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/fPCA.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/umap_metric.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/time_warping.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/geometry.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/curve_regression.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/rbfgs.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/fPLS.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf copying fdasrsf/boxplots.py -> build/lib.macosx-10.15-x86_64-3.8/fdasrsf running build_ext cythoning src/optimum_reparamN2.pyx to 
src/optimum_reparamN2.c cythoning src/fpls_warp.pyx to src/fpls_warp.c cythoning src/mlogit_warp.pyx to src/mlogit_warp.c cythoning src/ocmlogit_warp.pyx to src/ocmlogit_warp.c cythoning src/oclogit_warp.pyx to src/oclogit_warp.c cythoning src/optimum_reparam_N.pyx to src/optimum_reparam_N.c cythoning src/cbayesian.pyx to src/cbayesian.cpp building 'optimum_reparamN2' extension creating build/temp.macosx-10.15-x86_64-3.8 creating build/temp.macosx-10.15-x86_64-3.8/src clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/private/var/folders/tk/bl_pbdpj6kb7r3s584k85jxw0000gn/T/pip-build-env-qnbgsf0y/overlay/lib/python3.8/site-packages/numpy/core/include -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/usr/local/opt/tcl-tk/include -I/Users/pablomm/scikit_fda_test2/venv/include -I/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/include/python3.8 -c src/optimum_reparamN2.c -o build/temp.macosx-10.15-x86_64-3.8/src/optimum_reparamN2.o not modified: 'build/_DP.c' error: $MACOSX_DEPLOYMENT_TARGET mismatch: now "10.14" but "10.15" during configure


    ERROR: Failed building wheel for fdasrsf Building wheel for findiff (setup.py) ... done Created wheel for findiff: filename=findiff-0.8.9-py3-none-any.whl size=29218 sha256=c2bc96e93c195fb5c2a1acbf932b5311e8e94e571edf74929eca6e019c66532d Stored in directory: /Users/pablomm/Library/Caches/pip/wheels/df/48/68/71cc95b16d5f7c5115a009f92f9a5a3896fb2ece31228b0aa5 Successfully built findiff Failed to build fdasrsf ERROR: Could not build wheels for fdasrsf which use PEP 517 and cannot be installed directly

    To Reproduce

    virtualenv venv
    source venv/bin/activate
    python --version # 3.8.8 and also tried 3.9.2
    pip install scikit-fda
    

    The same error occurs when the package is installed from source with python setup.py install.

    Version information

    • OS: macOS catalina 10.15.7
    • Python version: 3.9.2 and 3.8.8
    • scikit-fda version: stable master (0.5) and develop
    • gcc and g++ version: 12.0.0
  • Importing from skfda import FDataGrid gives you ValueError

    Describe the bug When importing FDataGrid from skfda, the following error is raised:

    ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

    To Reproduce We are installing scikit-fda==0.3 with the following docker image:

    FROM python:3.7-slim-buster
    
    # ------------------------------------------------------------------------------------Basic Linux Packages------------------------------------------------------------------------------------
    RUN apt-get update \
        && apt-get install -y --no-install-recommends \
        ca-certificates \
        cmake \
        build-essential \
        gcc \
        g++ \
        git \
        wget \
        curl \
        libffi-dev \
        python-dev \
        unixodbc-dev \
        && rm -rf /var/lib/apt/lists/*
    

    Expected behavior To be able to import the library.

    Version information

    • OS: Linux
    • Python version: python:3.7-slim-buster
    • scikit-fda version: 0.3
    • Version of other packages involved [e.g. numpy, scipy, matplotlib, ... ]: numpy==1.16.2 , scipy==1.3.1, matplotlib==3.1.1

    Additional context Since 01/02/2021 we have not been able to import the library. Before that date we could import it without problems, with the same requirements and environment.

    Error Thread:

    src/hvl/utils/model_utils.py:12: in <module>
        from skfda import FDataGrid
    /usr/local/lib/python3.7/site-packages/skfda/__init__.py:36: in <module>
        from . import representation, datasets, preprocessing, exploratory, misc, ml
    /bin/bash failed with return code: 1
    /usr/local/lib/python3.7/site-packages/skfda/datasets/__init__.py:6: in <module>
    return code: 1
        from ._samples_generators import (make_gaussian_process,
    /usr/local/lib/python3.7/site-packages/skfda/datasets/_samples_generators.py:8: in <module>
        from ..misc import covariances
    /usr/local/lib/python3.7/site-packages/skfda/misc/__init__.py:2: in <module>
        from . import covariances, kernels, metrics
    /usr/local/lib/python3.7/site-packages/skfda/misc/covariances.py:9: in <module>
        from ..exploratory.visualization._utils import _create_figure, _figure_to_svg
    /usr/local/lib/python3.7/site-packages/skfda/exploratory/__init__.py:4: in <module>
        from . import visualization
    /usr/local/lib/python3.7/site-packages/skfda/exploratory/visualization/__init__.py:1: in <module>
        from . import clustering, representation
    /usr/local/lib/python3.7/site-packages/skfda/exploratory/visualization/clustering.py:11: in <module>
        from ...ml.clustering.base_kmeans import FuzzyKMeans
    /usr/local/lib/python3.7/site-packages/skfda/ml/__init__.py:2: in <module>
        from . import classification, clustering, regression
    /usr/local/lib/python3.7/site-packages/skfda/ml/classification/__init__.py:3: in <module>
        from ..._neighbors import (KNeighborsClassifier, RadiusNeighborsClassifier,
    /usr/local/lib/python3.7/site-packages/skfda/_neighbors/__init__.py:11: in <module>
        from .unsupervised import NearestNeighbors
    >   ???
    E   ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
    
    deps/fdasrsf/optimum_reparam.pyx:1: ValueError
    
  • Feature/local outlier factor

    • Created LocalOutlierFactor (which wraps scikit-learn multivariate version)
    • Example in gallery of detection of outliers
    • New real dataset employed in the example (fetch_octane)
    • Test and Doctests added
  • Error when calling L2Regularization

    I have tried different python versions, but calling:

    regularization=L2Regularization(LinearDifferentialOperator())

    as in the docs, results in the following error:

    TypeError: __init__() takes 1 positional argument but 2 were given

  • Feature/neighbors

    Added the following neighbors estimators:

    • NearestNeighbors
    • KNeighborsClassifier
    • RadiusNeighborsClassifier
    • NearestCentroids
    • KNeighborsScalarRegressor
    • RadiusNeighborsScalarRegressor
    • KNeighborsFunctionalRegressor
    • RadiusNeighborsFunctionalRegressor

    I wrote some examples of KNeighborsClassifier, RadiusNeighborsClassifier and KNeighborsScalarRegressor, I will write other for the regressors with functional response and the nearest centroids, but in another PR.

    Also, I had to modify lp_distance to accept the infinity case, and the mean to accept a list of weights.

    There are some small things which can be improved after merging #101.
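These estimators follow scikit-learn's neighbors API; the multivariate workflow they mirror looks like this (plain scikit-learn, shown for comparison):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy 1-D multivariate data: two well-separated clusters
X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X, y)
print(clf.predict([[0.05]]))  # [0]
```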

  • Transformer to reduce the image dimension

    Created a transformer that receives a multivariate function from R^n -> R^d and applies a vector norm to reduce the image dimension to 1.

    There are two versions of the transformer: a procedural one and a sklearn-style transformer.

    I put the functions in skfda.preprocessing.dim_reduction, but maybe there is a better place.
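The effect of such a transformer can be sketched with numpy alone (illustration only, assuming the codomain values are stored on the last axis):

```python
import numpy as np

# Toy vector-valued functional data: 5 samples, 100 grid points, values in R^3
data_matrix = np.random.default_rng(0).normal(size=(5, 100, 3))

# Pointwise Euclidean norm reduces the image dimension from 3 to 1
reduced = np.linalg.norm(data_matrix, axis=-1)
print(reduced.shape)  # (5, 100)
```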

  • Add a variable selection example

    Currently we have 4 variable selection methods implemented: mRMR, RKVS, MH and RMH.

    It would be useful to have an example in the documentation that explains and compares these methods.

  • Refactor LinearDifferentialOperator

    The linear differential operator class presents several shortcomings:

    • It only allows unidimensional input.
    • It is old and hacky and does not use recent improvements.
    • It is slow (the slowest test by a large margin is because of the BSpline linear differential operator).

    It should be refactored to improve all of these deficiencies.

  • How to cite scikit-fda

    Thanks for developing this package. I would like to properly cite the work done, but I am unsure which article, proceedings paper, or thesis to include.

    Would it be possible to do something similar to scikit-learn in your documentation and code?

    Thank you,

  • ANOVA slow for FDataBasis

    ANOVA is slow for FDataBasis, causing unnecessary slowdowns in tests due to the fine grid used for GP generation. Consider generating GPs directly in basis form, if possible, to achieve faster and more accurate results.

  • Make knn regressor picklable

    Discussed in https://github.com/GAA-UAM/scikit-fda/discussions/460

    Originally posted by ecavan on July 12, 2022: Having trouble saving the trained model: "Can't pickle <function _to_multivariate_metric..multivariate_metric" error.
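The error arises because locally defined functions (closures) cannot be pickled, while a module-level callable class with the same behavior can. A minimal stdlib sketch of the difference (hypothetical names, not skfda's code):

```python
import pickle


def make_metric(p):
    def metric(x, y):  # local function: pickling this will fail
        return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)
    return metric


class PMetric:
    """Module-level callable: picklable, so estimators holding it are too."""

    def __init__(self, p):
        self.p = p

    def __call__(self, x, y):
        return sum(abs(a - b) ** self.p for a, b in zip(x, y)) ** (1 / self.p)


try:
    pickle.dumps(make_metric(2))
except (pickle.PicklingError, AttributeError):
    print("closure is not picklable")

restored = pickle.loads(pickle.dumps(PMetric(2)))
print(restored([0, 0], [3, 4]))  # 5.0
```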
