Meerkat provides fast and flexible data structures for working with complex machine learning datasets.

Meerkat logo

Getting Started | What is Meerkat? | Supported Columns | Docs | Contributing | About

Getting started

pip install meerkat-ml

Note: some parts of Meerkat rely on optional dependencies. If you know which optional dependencies you'd like to install, you can do so using something like pip install meerkat-ml[dev,text] instead. See setup.py for a full list of optional dependencies.

Load your dataset into a DataPanel and get going!

import meerkat as mk
dp = mk.DataPanel.from_csv("...")

What is Meerkat?

Meerkat makes it easier for ML practitioners to interact with high-dimensional, multi-modal data. It provides simple abstractions for data inspection, model evaluation, and model training, backed by efficient and robust I/O under the hood.

Meerkat's core contribution is the DataPanel, a simple columnar data abstraction. The Meerkat DataPanel can house columns of arbitrary type – from integers and strings to complex, high-dimensional objects like videos, images, medical volumes and graphs.

DataPanel loads high-dimensional data lazily. A full high-dimensional dataset typically won't fit in memory, so behind the scenes the DataPanel materializes these objects only when they are needed.

import meerkat as mk

# Images are NOT read from disk at DataPanel creation...
dp = mk.DataPanel({
    'text': ['The quick brown fox.', 'Jumped over.', 'The lazy dog.'],
    'image': mk.ImageColumn.from_filepaths(['fox.png', 'jump.png', 'dog.png']),
    'label': [0, 1, 0]
}) 

# ...only at this point is "fox.png" read from disk
dp["image"][0]

DataPanel supports advanced indexing. Using indexing patterns similar to those of Pandas and NumPy, we can access a subset of a DataPanel's rows and columns.

import meerkat as mk
import numpy as np
dp = ... # create DataPanel

# Pull a column out of the DataPanel
new_col: mk.ImageColumn = dp["image"]

# Create a new DataPanel from a subset of the columns in an existing one
new_dp: mk.DataPanel = dp[["image", "label"]] 

# Create a new DataPanel from a subset of the rows in an existing one
new_dp: mk.DataPanel = dp[10:20] 
new_dp: mk.DataPanel = dp[np.array([0,2,4,8])]

# Pull a column out of the DataPanel and get a subset of its rows 
new_col: mk.ImageColumn = dp["image"][10:20]

DataPanel supports map, update and filter operations. When training and evaluating our models, we often perform operations on each example in our dataset (e.g. compute a model's prediction on each example, tokenize each sentence, compute a model's embedding for each example) and store the results. The DataPanel makes it easy to perform these operations and produce new columns (via DataPanel.map), store the columns alongside the original data (via DataPanel.update), and extract an important subset of the dataset (via DataPanel.filter). Under the hood, dataloading is multiprocessed so that costly I/O doesn't bottleneck our computation. Consider the example below, where we use update to add two new columns holding model predictions and probabilities.

# A simple evaluation loop using Meerkat
import meerkat as mk
import torch
import torch.nn as nn

dp: mk.DataPanel = ... # get DataPanel
model: nn.Module = ... # get the model
model.to(0).eval() # prepare the model for evaluation

@torch.no_grad()
def predict(batch: dict):
    probs = torch.softmax(model(batch["input"].to(0)), dim=-1)
    return {"probs": probs.cpu(), "pred": probs.cpu().argmax(dim=-1)}

# updated_dp has two new `TensorColumn`s: one for probabilities and one
# for predictions
updated_dp: mk.DataPanel = dp.update(function=predict, batch_size=128, is_batched_fn=True)
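
map and filter follow the same pattern. A minimal sketch, assuming signatures that mirror update above:

import meerkat as mk

dp: mk.DataPanel = ... # get DataPanel

# map: compute a new column without modifying `dp`
lengths = dp.map(lambda x: len(x["text"]), is_batched_fn=False)

# filter: keep only the rows where the predicate holds
short_dp = dp.filter(lambda x: len(x["text"]) < 20, is_batched_fn=False)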

DataPanel is extendable. Meerkat makes it easy for you to create custom column types for your data. The easiest way to do this is by subclassing AbstractCell. Subclasses of AbstractCell represent one element in one column of a DataPanel. For example, say we want our DataPanel to include a column of videos we have stored on disk. We want these videos to be lazily loaded using scikit-video, so we implement a VideoCell class as follows:

from typing import Collection

import meerkat as mk
import skvideo.io

class VideoCell(mk.AbstractCell):

    # What information will we eventually need to materialize the cell?
    def __init__(self, filepath: str):
        super().__init__()
        self.filepath = filepath

    # How do we actually materialize the cell?
    def get(self):
        return skvideo.io.vread(self.filepath)

    # What attributes should be written to disk on `VideoCell.write`?
    @classmethod
    def _state_keys(cls) -> Collection:
        return {"filepath"}

# We don't need to define a `VideoColumn` class and can instead just
# create a CellColumn from a list of `VideoCell`s
vid_column = mk.CellColumn(map(VideoCell, ["vid1.mp4", "vid2.mp4", "vid3.mp4"]))
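
A usage sketch: dropping the column into a DataPanel keeps loading lazy. (Exact materialization defaults may vary by version; `.lz` is the lazy indexer.)

dp = mk.DataPanel({"video": vid_column, "label": [0, 1, 0]})
frames = dp["video"][0]   # materializes "vid1.mp4" via VideoCell.get
cell = dp["video"].lz[0]  # the VideoCell itself, no disk read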

Supported Columns

Meerkat ships with a number of core column types and the list is growing.

Core Columns

| Column | Description |
| --- | --- |
| ListColumn | Flexible; can hold any type of data. |
| NumpyArrayColumn | np.ndarray behavior for vectorized operations. |
| TensorColumn | torch.tensor behavior for vectorized operations on the GPU. |
| ImageColumn | Holds images stored on disk (e.g. as PNG or JPEG). |
| VideoColumn | Holds videos stored on disk (e.g. as MP4). |
| MedicalVolumeColumn | Optimized for medical images stored in DICOM or NIFTI format. |
| SpacyColumn | Holds text processed into spaCy Doc objects. |
| EmbeddingColumn | Holds embeddings and provides utility methods like umap and build_faiss_index. |
| ClassificationOutputColumn | Holds classifier predictions. |
| CellColumn | Like ListColumn, but optimized for AbstractCell objects. |
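
A quick sketch of a few core columns side by side (constructor details here are assumptions; consult the docs for the exact factory methods):

import numpy as np
import torch
import meerkat as mk

dp = mk.DataPanel({
    "meta": mk.ListColumn([{"a": 1}, {"a": 2}]),         # arbitrary objects
    "feats": mk.NumpyArrayColumn(np.random.rand(2, 4)),  # vectorized ops
    "logits": mk.TensorColumn(torch.zeros(2, 10)),       # GPU-friendly ops
})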

Contributed Columns

| Column | Supported | Description |
| --- | --- | --- |
| WILDSInputColumn | Yes | Build DataPanels for the WILDS benchmark. |

About

Meerkat is being developed at Stanford's Hazy Research Lab. Please reach out to kgoel [at] cs [dot] stanford [dot] edu if you would like to use or contribute to Meerkat.

Comments
  • [BUG] from_pandas without reset_index

    When using Meerkat's from_pandas, things break if you have just run a filter and do not call reset_index(): you get an ambiguous key error when calling from_pandas. I would add a check with a better error message if a user has a non-sequential index on the dataframe.
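
    A minimal sketch of the failure mode and the current workaround (the exact error depends on the Meerkat version):

    import pandas as pd
    import meerkat as mk

    df = pd.DataFrame({"a": [1, 2, 3, 4], "b": ["w", "x", "y", "z"]})
    filtered = df[df["a"] > 2]  # non-sequential index: [2, 3]

    # dp = mk.DataPanel.from_pandas(filtered)  # ambiguous key error
    dp = mk.DataPanel.from_pandas(filtered.reset_index(drop=True))  # workaround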

  • [WIP] Implement `BlockManager` backend

    Overhaul the internals of the Meerkat DataPanel. The changes seek to enable:

    1. Vectorized row-wise operations (e.g. slicing, reduction)
    2. Simplified I/O and improved latency
    3. Clarified view vs. copy behavior
      • We introduce a new spec detailing when users should expect to get views vs. copies (similar to this resource for NumPy) – I'm working on enforcing this spec throughout the codebase.

    The new internals are based primarily on the BlockManager class, a dict-like object that replaces the dictionary in which the DataPanel's columns were stored before. The BlockManager manages links between a DataPanel's columns and data blocks (AbstractBlock, NumpyBlock) where the data is actually stored. It implements consolidate, which takes columns of similar type in a DataPanel and stores their data together in a block, and apply, which applies row-wise operations (e.g. getitem) to the blocks in a vectorized fashion (see the sketch after the list below). Other important classes:

    • BlockRef objects link a block with the BlockManager. These are critical to the functioning of the BlockManager and are the primary type of object passed between the blocks and the block manager. Each consists of two things:
      1. A reference to the block (self.block)
      2. A set of columns in the BlockManager whose data live in the Block
    • BlockableMixin - a mixin used with AbstractColumn that holds references to a column's block and the column's index in the block
    • BlockView - a simple DataClass holding a block and an index into the block. It is typical for new columns to be created from a BlockView.
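
    A hypothetical sketch of what consolidation buys (dp.consolidate() appears elsewhere in this thread; the internals described in the comments are assumptions):

    import numpy as np
    import meerkat as mk

    dp = mk.DataPanel({
        "a": np.arange(8),
        "b": np.ones(8),
    })
    dp.consolidate()  # columns of like type now share one block
    rows = dp[2:5]    # sliced once at the block level, not once per column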

    Note: I marked this as a WIP because there are still a few more things to be done on this front.

    1. Make concat BlockManager aware

    Other major changes:

    • Removed visible_rows from AbstractColumn
    • Removed _cloneable_kwargs in favor of a unified _clone, _copy, and _view module (cloneable.py)
  • Make `DataPanel(dp)` return some shallow copied version of the original `dp`.

    Issue

    It is very natural for users (and developers) to construct new DataPanel objects from existing ones via DataPanel(dp).

    Important Aside

    An unexpected consequence of this issue is having to find a good way to stratify which attributes should be recomputed and which should simply be shallow copied over.

    As an example, two attributes that every DataPanel has are _data and _identifier. _data is typically large and heavyweight, so we will almost always want to shallow copy it. _identifier is quite lightweight and may be unique to different DataPanels, so maybe this is a property we recompute each time in __init__. Note this is just an example; we may want the identifier to persist.

    This is especially relevant for subclassing DataPanel. As of PR #57, self.from_batch() is used to construct new DataPanel containers from existing ones with shared underlying data. However, as the PR mentions, self.from_batch() is called by many other ops (_get, merge, concat, etc.), and none of these methods have a seamless way of passing arguments other than data to __init__.

    An example of this is EntityDataPanel, where the index_column should be passed from the current instance to the newly constructed instance. Because there is no way to plumb that information through different calls, the initializer of EntityDataPanel gets called with EntityDataPanel(index_column=None) even if the current instance has an index column. This results in a new column "_ent_index" being added to the new EntityDataPanel.

    Proposed Solution 1

    Implement a private instance method _clone(data=None, visible_columns=None, ...) -> DataPanel (or subclass) that implements the default functionality for constructing a new DataPanel, plumbing the relevant arguments from the current instance to the new instance. We can then call self._clone(data=data) (with visible_columns optional) instead of self.from_batch() in ops like _get, merge, concat, etc.

    Let's consider the EntityDataPanel case. We want to plumb self.index_column from a current EntityDataPanel to all EntityDataPanels constructed in its image. ._clone will look something like

    class EntityDataPanel:
        def _clone(self, data=None) -> EntityDataPanel:
            if data is None:
                data = self.data
            return EntityDataPanel(data, identifier=self.identifier, index_column=self.index_column)
    

    We can then have ops like DataPanel._get() use self._clone() instead of self.from_batch(). For example:

    class DataPanel:
        def _get(self, index, materialize=False):
            ...
            # example cases where `index` returns a datapanel
            elif isinstance(index, slice):
                # slice index => multiple row selection (DataPanel)
                # return self.from_batch(
                #    {
                #        k: self._data[k]._get(index, materialize=materialize)
                #        for k in self.visible_columns
                #    })
                return self._clone({
                    k: self._data[k]._get(index, materialize=materialize)
                    for k in self.visible_columns
                })
            ...
    

    Proposed Solution 2

    Instead of having developers reimplement ._clone(), we can have them implement something like _state_keys() but for init args. Call it ._clone_kwargs():

    class EntityDataPanel:
        def _clone_kwargs(self) -> dict:
            default_kwargs = super()._clone_kwargs()
            default_kwargs.update({"index_column": self.index_column})
            return default_kwargs

    class DataPanel:
        def _clone_kwargs(self) -> dict:
            return {"data": self.data, "identifier": self.identifier}

        def _clone(self, **kwargs):
            default_kwargs = self._clone_kwargs()
            if kwargs:
                default_kwargs.update(kwargs)
            return self.__class__(**default_kwargs)
    
  • [BUG] Indexing into DataPanel changes custom column type

    Bug Description

    When indexing to get a subset of rows from a DataPanel with a complex custom column type, the type of that column is changed to a ListColumn in the new subset DataPanel.

    To Reproduce

    May be difficult to reproduce, as it's only occurring for one custom column type that we have.

    1. Create complex custom column type (ours is a column where each cell is a time series with categorical values and subclasses mk.CellColumn)
    2. Create a DataPanel instance (dp) that has the above column and some data inside of it
    3. Index into the DataPanel (dp_subset = dp[0:1])
    4. The column type for that specific column in dp_subset has changed to a ListColumn

    System Information

    • OS: MacOS
  • Add args, kwargs to ColumnIOMixin._read_data

    @krandiash enable this code to run without errors:

    import meerkat as mk
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc1 = nlp("Apple is looking at buying U.K. startup for $1 billion")
    doc2 = nlp("Hello there")

    dp = mk.DataPanel({
        # 'text': ['The quick brown fox.', 'Jumped over.'],
        # 'spacy': mk.SpacyColumn([doc1, doc2]),
        'list': [{}, {}]
    })

    dp.write('meerkat.dataset')
    dp2 = dp.read('meerkat.dataset', nlp=nlp)

  • [FEATURE] Sort DataPanel by a column

    Add a sort function that can be used to sort the DataPanel by values in a column.

    dp = mk.DataPanel({'a': [1, 3, 2], 'b': ['a', 'c', 'b']})
    dp.sort('a') # sorted view into the dp
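
    A possible sketch of the implementation (assuming the column exposes its underlying array as .data): argsort the column and use the permutation as a row index, which already returns a new DataPanel.

    import numpy as np
    import meerkat as mk

    def sort_by(dp: mk.DataPanel, column: str) -> mk.DataPanel:
        order = np.argsort(dp[column].data)  # permutation of row indices
        return dp[order]                     # row selection -> new DataPanel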
    
  • Remove `visible_columns` from `DataPanel`

    DataPanels no longer rely on visible_columns to create views. This PR removes visible_columns entirely.

    Other changes:

    • Improve code coverage
      • Reactivate provenance tests
      • DataPanel batch tests
      • Concat tests
      • Merge tests
    • Remove Identifiers, Splits and Info from DataPanel and AbstractColumn
  • [BUG] Appending along columns not working without suffix argument

    Appending to a DataPanel along columns does not work without suffix argument even when the column names do not overlap.

    import meerkat as ms

    dp = ms.DataPanel({
        'text': ['The quick brown fox.', 'Jumped over.', 'The lazy dog.'],
        'label': [0, 1, 0]
    })
    dp2 = ms.DataPanel({
        'string': ['The quick brown fox.', 'Jumped over.', 'The lazy dog.'],
        'target': [0, 1, 0]
    })
    dp.append(dp2, axis=1)
    

    This code throws a ValueError. It works when I provide any suffixes, although they are not used.

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-18-5f32282aa054> in <module>()
    ----> 1 dp.append(dp2, axis=1)
    
    1 frames
    /usr/local/lib/python3.7/dist-packages/mosaic/datapanel.py in append(self, dp, axis, suffixes, overwrite)
        422             if not overwrite and shared:
        423                 if suffixes is None:
    --> 424                     raise ValueError()
        425                 left_suf, right_suf = suffixes
        426                 data = {
    
    ValueError:
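
    A workaround, per the report above: pass any suffixes; they go unused when the column names don't overlap.

    dp.append(dp2, axis=1, suffixes=("_x", "_y"))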
    
  • V1 Entity Data Panel

    Adds an entity data panel in the pipelines folder. Core ideas:

    • A data panel that has zero or more embedding columns
    • The data panel has an index panel for functions like iget and icontains on the unique entity id
    • Supports appending, from_datapanel, and other data panel methods
    • Supports embedding-based functions (e.g., cosine nearest neighbors) that return the metadata
  • [BUG] ValueError: Can only compare identically-labeled Series objects

    @hannahkim24

    import pandas as pd
    import meerkat as mk
    col = mk.PandasSeriesColumn(pd.Series([1,2,3,4]))
    col[[0,1,2]] == col[[0,1,3]]
    

    This gives "ValueError: Can only compare identically-labeled Series objects".
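
    The same failure occurs in plain pandas: positional selection keeps the original index labels, so the two Series fail to align. Resetting the index before comparing is one workaround.

    import pandas as pd

    s = pd.Series([1, 2, 3, 4])
    a, b = s[[0, 1, 2]], s[[0, 1, 3]]
    # a == b  # ValueError: Can only compare identically-labeled Series objects
    a.reset_index(drop=True) == b.reset_index(drop=True)  # elementwise compare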

  • [BUG?] Getting the column names returns a list and not a set

    Heya!

    I've been using your library and wanted to compare whether two DataPanels have the same column names. While doing so, I realized that columns returns a list of the column names, not a set.

    https://github.com/robustness-gym/meerkat/blob/e3b437d47809ef8e856a5f732ac1e11a1176ba1f/meerkat/datapanel.py#L151

    In my case that was a problem, as the two DataPanels had the same column names but in a different order, which caused the comparison of columns to fail. To be honest, I did not expect that, as I regarded the order of the columns as an implementation detail. As such, I wanted to ask if that was intended or just a bug. And if a bug, would you like a pull request that changes it to return a set?
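
    In the meantime, an order-insensitive comparison works in user code:

    same_columns = set(dp1.columns) == set(dp2.columns)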

  • Typo in setup.py for Cython dependency causes installation to sometimes fail

    Hi! I believe there's a typo in setup.py around the Cython dependency. This:

        "semver>=2.13.0",
      > "multiprocess>=0.70.11" "Cython>=0.29.21",
        "progressbar>=2.5",
    

    (source)

    is missing a comma, causing the strings to be concatenated; it should be:

        "semver>=2.13.0",
        "multiprocess>=0.70.11",
        "Cython>=0.29.21",
        "progressbar>=2.5",
    

    This is causing installation to fail in some cases (for some reason, with Poetry 1.2 or above but not with earlier versions of poetry) for us with an error like: "multiprocess: Could not parse version constraint: >=0.70.11Cython"

  • [BUG] Cannot download the imagenette dataset?

    Hello! I cannot download the imagenette dataset. This line of code fails: from meerkat.contrib.imagenette import download_imagenette

    #!pip install meerkat-ml
    import meerkat as mk
    from meerkat.contrib.imagenette import download_imagenette

    download_imagenette(".")
    dp = mk.DataPanel.from_csv("imagenette2-160/imagenette.csv")
    dp["img"] = mk.ImageColumn.from_filepaths(dp["img_path"])

    dp[["label", "split", "img"]].lz[:3]


    ModuleNotFoundError                       Traceback (most recent call last)
    in ()
          1 #!pip install meerkat-ml
          2 import meerkat as mk
    ----> 3 from meerkat.contrib.imagenette import download_imagenette
          4
          5 download_imagenette(".")

    ModuleNotFoundError: No module named 'meerkat.contrib'


  • [BUG] deepcopy corrupts block manager

    A call to copy.deepcopy on a datapanel corrupts _block_index of the columns:

    import copy

    import pandas as pd
    import meerkat as mk

    dp = mk.DataPanel({
        "a": pd.Series([0,1,2,3]),
        "b": pd.Series([0,1,2,3]),
        "c": pd.Series([0,1,2,3]),
    })
    dp.consolidate()
    print(dp["a"]._block_index)

    dp = copy.deepcopy(dp)
    print(dp["a"]._block_index)
    
  • [BUG] Check for empty examples in AudioSet

    There are some examples in AudioSet whose start time and end time are outside of the length of the video. For example:

    balanced_train_segments/YTID=kKf9OprN9nw_st=400.0_et=410.wav
    
    When creating the AudioSet DataPanel, we should check for this and remove those rows, as sketched below.
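
    One possible check (the column names here are assumptions about the AudioSet DataPanel schema):

    valid_dp = dp.filter(
        lambda x: x["start_time"] < x["end_time"] and x["end_time"] <= x["video_length"],
        is_batched_fn=False,
    )
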
  • [FEATURE] Add caching functionality to LambdaColumn

    What I'm envisioning is something in between a map and a LambdaColumn, where the computation happens lazily but is cached once it's computed. Right now, either you do it all up front or you don't get caching.

    This idea was raised by @ANarayan, who pointed out that it would be helpful for caching feature preprocessing in NLP pipelines.
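
    A hypothetical sketch of the requested behavior (names are illustrative, not Meerkat API): compute on first access, then serve the cached value.

    class CachedLambdaCell:
        def __init__(self, fn, inp):
            self.fn, self.inp = fn, inp
            self._cache = None

        def get(self):
            if self._cache is None:  # first access: compute and cache
                self._cache = self.fn(self.inp)
            return self._cache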
