
Petastorm

Petastorm is an open source data access library developed at Uber ATG. This library enables single machine or distributed training and evaluation of deep learning models directly from datasets in Apache Parquet format. Petastorm supports popular Python-based machine learning (ML) frameworks such as Tensorflow, PyTorch, and PySpark. It can also be used from pure Python code.

Documentation web site: https://petastorm.readthedocs.io

Installation

pip install petastorm

The petastorm package defines several extra dependencies that are not installed automatically. The extras are: tf, tf_gpu, torch, opencv, docs, test.

For example, to trigger installation of the GPU version of Tensorflow and opencv, use the following pip command:

pip install petastorm[opencv,tf_gpu]

Generating a dataset

A dataset created using Petastorm is stored in Apache Parquet format. On top of a Parquet schema, petastorm also stores higher-level schema information that makes multidimensional arrays into a native part of a petastorm dataset.

Petastorm supports extensible data codecs. These enable a user to use one of the standard data compressions (jpeg, png) or implement their own.

Generating a dataset is done using PySpark. PySpark natively supports Parquet format, making it easy to run on a single machine or on a Spark compute cluster. Here is a minimalistic example writing out a table with some random data.

import numpy as np
from petastorm.codecs import CompressedImageCodec, NdarrayCodec, ScalarCodec
from petastorm.etl.dataset_metadata import materialize_dataset
from petastorm.unischema import Unischema, UnischemaField, dict_to_spark_row
from pyspark.sql import SparkSession
from pyspark.sql.types import IntegerType


HelloWorldSchema = Unischema('HelloWorldSchema', [
   UnischemaField('id', np.int32, (), ScalarCodec(IntegerType()), False),
   UnischemaField('image1', np.uint8, (128, 256, 3), CompressedImageCodec('png'), False),
   UnischemaField('other_data', np.uint8, (None, 128, 30, None), NdarrayCodec(), False),
])


def row_generator(x):
   """Returns a single entry in the generated dataset. Return a bunch of random values as an example."""
   return {'id': x,
           'image1': np.random.randint(0, 255, dtype=np.uint8, size=(128, 256, 3)),
           'other_data': np.random.randint(0, 255, dtype=np.uint8, size=(4, 128, 30, 3))}


def generate_hello_world_dataset(output_url='file:///tmp/hello_world_dataset'):
   rows_count = 10
   rowgroup_size_mb = 256

   spark = SparkSession.builder.config('spark.driver.memory', '2g').master('local[2]').getOrCreate()
   sc = spark.sparkContext

   # Wrap dataset materialization portion. Will take care of setting up spark environment variables as
   # well as saving petastorm specific metadata
   with materialize_dataset(spark, output_url, HelloWorldSchema, rowgroup_size_mb):

       rows_rdd = sc.parallelize(range(rows_count))\
           .map(row_generator)\
           .map(lambda x: dict_to_spark_row(HelloWorldSchema, x))

       spark.createDataFrame(rows_rdd, HelloWorldSchema.as_spark_schema()) \
           .coalesce(10) \
           .write \
           .mode('overwrite') \
           .parquet(output_url)
  • HelloWorldSchema is an instance of a Unischema object. Unischema is capable of rendering types of its fields into different framework specific formats, such as: Spark StructType, Tensorflow tf.DType and numpy numpy.dtype.
  • To define a dataset field, you need to specify a type, shape, a codec instance and whether the field is nullable for each field of the Unischema.
  • We use PySpark for writing output Parquet files. In this example, we launch PySpark on a local box (.master('local[2]')). Of course for a larger scale dataset generation we would need a real compute cluster.
  • We wrap spark dataset generation code with the materialize_dataset context manager. The context manager is responsible for configuring row group size at the beginning and writing out petastorm specific metadata at the end.
  • The row generating code is expected to return a Python dictionary indexed by a field name. We use row_generator function for that.
  • dict_to_spark_row converts the dictionary into a pyspark.Row object while ensuring schema HelloWorldSchema compliance (shape, type and is-nullable condition are tested).
  • Once we have a pyspark.DataFrame we write it out to Parquet storage. The Parquet schema is automatically derived from HelloWorldSchema.

Plain Python API

The petastorm.reader.Reader class is the main entry point for user code that accesses the data from an ML framework such as Tensorflow or Pytorch. The reader has multiple features such as:

  • Selective column readout
  • Multiple parallelism strategies: thread, process, single-threaded (for debug)
  • N-grams readout support
  • Row filtering (row predicates)
  • Shuffling
  • Partitioning for multi-GPU training
  • Local caching

Reading a dataset is simple using the petastorm.reader.Reader class, which can be created using the petastorm.make_reader factory method:

from petastorm import make_reader

with make_reader('hdfs://myhadoop/some_dataset') as reader:
    for row in reader:
        print(row)

hdfs://... and file://... are supported URL protocols.

Once a Reader is instantiated, you can use it as an iterator.
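
The features listed above map onto keyword arguments of make_reader. The minimal sketch below (with a hypothetical dataset URL) shows how selective column readout, shuffling and multi-GPU partitioning might be combined; the exact argument names should be verified against the make_reader documentation:

from petastorm import make_reader

with make_reader('file:///tmp/hello_world_dataset',
                 schema_fields=['id', 'image1'],  # selective column readout
                 shuffle_row_groups=True,         # shuffle data at row-group granularity
                 num_epochs=2,                    # iterate over the dataset twice
                 cur_shard=0, shard_count=2) as reader:  # partitioning for multi-GPU training
    for row in reader:
        print(row.id)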

Tensorflow API

To hook up the reader to a Tensorflow graph, you can use the tf_tensors function:

import tensorflow as tf

from petastorm import make_reader
from petastorm.tf_utils import tf_tensors

with make_reader('file:///some/localpath/a_dataset') as reader:
   row_tensors = tf_tensors(reader)
   with tf.Session() as session:
       for _ in range(3):
           print(session.run(row_tensors))

Alternatively, you can use the new tf.data.Dataset API:

import tensorflow as tf

from petastorm import make_reader
from petastorm.tf_utils import make_petastorm_dataset

with make_reader('file:///some/localpath/a_dataset') as reader:
    dataset = make_petastorm_dataset(reader)
    iterator = dataset.make_one_shot_iterator()
    tensor = iterator.get_next()
    with tf.Session() as sess:
        sample = sess.run(tensor)
        print(sample.id)

Pytorch API

As illustrated in pytorch_example.py, reading a petastorm dataset from pytorch can be done via the adapter class petastorm.pytorch.DataLoader, which allows a custom pytorch collating function and transforms to be supplied.

Be sure you have torch and torchvision installed:

pip install torchvision

The minimalist example below assumes the definition of a Net class and train and test functions, included in pytorch_example.py:

import torch
from torchvision import transforms

from petastorm import make_reader
from petastorm.pytorch import DataLoader
from petastorm.transform import TransformSpec

torch.manual_seed(1)
device = torch.device('cpu')
model = Net().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

def _transform_row(mnist_row):
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])
    return (transform(mnist_row['image']), mnist_row['digit'])


transform = TransformSpec(_transform_row, removed_fields=['idx'])

with DataLoader(make_reader('file:///localpath/mnist/train', num_epochs=10,
                            transform_spec=transform), batch_size=64) as train_loader:
    train(model, device, train_loader, 10, optimizer, 1)
with DataLoader(make_reader('file:///localpath/mnist/test', num_epochs=10,
                            transform_spec=transform), batch_size=1000) as test_loader:
    test(model, device, test_loader)

If you are working with very large batch sizes and do not need support for Decimal/strings, we provide a petastorm.pytorch.BatchedDataLoader that can buffer using Torch tensors (cpu or cuda) with significantly higher throughput.
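
A minimal sketch of using BatchedDataLoader (with a hypothetical dataset path) is shown below; the constructor arguments used here mirror those of DataLoader, but check the class documentation for the exact signature:

from petastorm import make_reader
from petastorm.pytorch import BatchedDataLoader

with BatchedDataLoader(make_reader('file:///localpath/mnist/train', num_epochs=1),
                       batch_size=4096,
                       shuffling_queue_capacity=100000) as train_loader:
    for batch in train_loader:
        # Each batch field arrives as a torch tensor
        pass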

Spark Dataset Converter API

The Spark Dataset Converter API simplifies data conversion from Spark to TensorFlow or PyTorch. The input Spark DataFrame is first materialized in Parquet format and then loaded as a tf.data.Dataset or torch.utils.data.DataLoader.

The minimalist example below assumes the definition of a compiled tf.keras model and a Spark DataFrame containing a feature column followed by a label column.

from petastorm.spark import SparkDatasetConverter, make_spark_converter
import tensorflow.compat.v1 as tf  # pylint: disable=import-error

# specify a cache dir first.
# the dir is used to save materialized spark dataframe files
spark.conf.set(SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF, 'hdfs:/...')

df = ... # `df` is a spark dataframe

# create a converter from `df`
# it will materialize `df` to cache dir.
converter = make_spark_converter(df)

# make a tensorflow dataset from `converter`
with converter.make_tf_dataset() as dataset:
    # the `dataset` is `tf.data.Dataset` object
    # dataset transformation can be done if needed
    dataset = dataset.map(...)
    # we can train/evaluate model on the `dataset`
    model.fit(dataset)
    # when exiting the context, the reader of the dataset will be closed

# delete the cached files of the dataframe.
converter.delete()

The minimalist example below assumes the definition of a Net class and train and test functions, included in pytorch_example.py, and a Spark DataFrame containing a feature column followed by a label column.

from petastorm.spark import SparkDatasetConverter, make_spark_converter

# specify a cache dir first.
# the dir is used to save materialized spark dataframe files
spark.conf.set(SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF, 'hdfs:/...')

df_train, df_test = ... # `df_train` and `df_test` are spark dataframes
model = Net()

# create a converter_train from `df_train`
# it will materialize `df_train` to cache dir. (the same for df_test)
converter_train = make_spark_converter(df_train)
converter_test = make_spark_converter(df_test)

# make a pytorch dataloader from `converter_train`
with converter_train.make_torch_dataloader() as dataloader_train:
    # the `dataloader_train` is `torch.utils.data.DataLoader` object
    # we can train model using the `dataloader_train`
    train(model, dataloader_train, ...)
    # when exiting the context, the reader of the dataset will be closed

# the same for `converter_test`
with converter_test.make_torch_dataloader() as dataloader_test:
    test(model, dataloader_test, ...)

# delete the cached files of the dataframes.
converter_train.delete()
converter_test.delete()

Analyzing petastorm datasets using PySpark and SQL

A Petastorm dataset can be read into a Spark DataFrame using PySpark, where you can use a wide range of Spark tools to analyze and manipulate the dataset.

# Create a dataframe object from a parquet file
dataframe = spark.read.parquet(dataset_url)

# Show a schema
dataframe.printSchema()

# Count all
dataframe.count()

# Show a single column
dataframe.select('id').show()

SQL can be used to query a Petastorm dataset:

spark.sql(
   'SELECT count(id) '
   'from parquet.`file:///tmp/hello_world_dataset`').collect()

You can find a full code sample here: pyspark_hello_world.py.

Non Petastorm Parquet Stores

Petastorm can also be used to read data directly from Apache Parquet stores. To achieve that, use make_batch_reader (and not make_reader). The following summarizes the differences between the make_reader and make_batch_reader functions:

  • make_reader reads only Petastorm datasets (created using materialize_dataset); make_batch_reader reads any Parquet store (some native Parquet column types are not supported yet).
  • make_reader returns one record at a time; make_batch_reader returns batches of records, where the batch size is not fixed and is determined by the Parquet row-group size.
  • Predicates passed to make_reader are evaluated per single row; predicates passed to make_batch_reader are evaluated per batch.
  • Both can filter Parquet files based on the filters argument.
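
For example, reading batches from a plain (non-Petastorm) Parquet store might look like this minimal sketch, assuming a hypothetical store URL:

from petastorm import make_batch_reader

with make_batch_reader('hdfs://myhadoop/some_parquet_store') as reader:
    for batch in reader:
        # Each batch is a named tuple of column arrays covering up to one row group
        print(batch)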

Troubleshooting

See the Troubleshooting page and please submit a ticket if you can't find an answer.

See also

  1. Gruener, R., Cheng, O., and Litvin, Y. (2018) Introducing Petastorm: Uber ATG's Data Access Library for Deep Learning. URL: https://eng.uber.com/petastorm/
  2. QCon.ai 2019: "Petastorm: A Light-Weight Approach to Building ML Pipelines".

How to Contribute

We prefer to receive contributions in the form of GitHub pull requests. Please send pull requests against the github.com/uber/petastorm repository.

  • If you are looking for some ideas on what to contribute, check out github issues and comment on the issue.
  • If you have an idea for an improvement, or you'd like to report a bug but don't have time to fix it, please create a GitHub issue.

To contribute a patch:

  • Break your work into small, single-purpose patches if possible. It's much harder to merge in a large change with a lot of disjoint features.
  • Submit the patch as a GitHub pull request against the master branch. For a tutorial, see the GitHub guides on forking a repo and sending a pull request.
  • Include a detailed description of the proposed change in the pull request.
  • Make sure that your code passes the unit tests. You can find instructions on how to run the unit tests here.
  • Add new unit tests for your code.

Thank you in advance for your contributions!

See the Development page for development-related information.

Comments
  • Leverage pyarrow predicate filtering

    Pyarrow ParquetDataset supports predicate filtering. We should replace our own implementation and utilize theirs: https://github.com/apache/arrow/blob/master/python/pyarrow/parquet.py#L789
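
    For reference, pyarrow exposes predicate filtering through the filters argument of ParquetDataset; a minimal sketch with a hypothetical path and column name:

    import pyarrow.parquet as pq

    # Filter the dataset on the 'id' column (hypothetical path and column name)
    dataset = pq.ParquetDataset('/tmp/hello_world_dataset', filters=[('id', '>', 100)])
    table = dataset.read()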

  • Unischema supports Parquet schema with more than 255 fields

    Many of our datasets have more than 255 fields. This commit provides an alternative namedtuple implementation 'namedtuple2' to support more than 255 fields with the Python 3.6 interpreter.

  • Pytorch example with DataLoader adapter, using MNIST data

    This code includes an MNIST dataset generator, a pytorch training example that uses the resulting dataset, and a simple README.md.

    As can be seen from the main.py, there are a few limitations that come to light which could help us improve petastorm:

    • Batch shuffling
    • Support for custom transforms
    • Total data size (or some semblance of it?)

    Running pytorch/examples/mnist/main.py (in a Docker container) with the default 10 epochs yielded the following outcome (I just show the test output for the middle 8 epochs):

    ...
    Train Epoch: 1 [59520/60000 (99%)]	Loss: 0.505042
    
    Test set: Average loss: 0.2056, Accuracy: 9395/10000 (94%)
    
    ...
    Test set: Average loss: 0.1337, Accuracy: 9596/10000 (96%)
    Test set: Average loss: 0.1033, Accuracy: 9684/10000 (97%)
    Test set: Average loss: 0.0919, Accuracy: 9710/10000 (97%)
    Test set: Average loss: 0.0760, Accuracy: 9770/10000 (98%)
    Test set: Average loss: 0.0689, Accuracy: 9797/10000 (98%)
    Test set: Average loss: 0.0623, Accuracy: 9803/10000 (98%)
    Test set: Average loss: 0.0632, Accuracy: 9791/10000 (98%)
    Test set: Average loss: 0.0541, Accuracy: 9818/10000 (98%)
    
    ...
    Train Epoch: 10 [59520/60000 (99%)]	Loss: 0.040862
    
    Test set: Average loss: 0.0505, Accuracy: 9845/10000 (98%)
    
    real	3m3.021s
    user	20m4.680s
    sys	0m22.228s
    

    With the petastormed variant, the training accuracy looks on-par, with somewhat better runtime. I'll show just the test output:

    Test set: Average loss: 0.2035, Accuracy: 9385/10000 (94%)
    Test set: Average loss: 0.1326, Accuracy: 9591/10000 (96%)
    Test set: Average loss: 0.1040, Accuracy: 9675/10000 (97%)
    Test set: Average loss: 0.0887, Accuracy: 9705/10000 (97%)
    Test set: Average loss: 0.0761, Accuracy: 9752/10000 (98%)
    Test set: Average loss: 0.0715, Accuracy: 9774/10000 (98%)
    Test set: Average loss: 0.0627, Accuracy: 9797/10000 (98%)
    Test set: Average loss: 0.0606, Accuracy: 9810/10000 (98%)
    Test set: Average loss: 0.0582, Accuracy: 9824/10000 (98%)
    Test set: Average loss: 0.0548, Accuracy: 9828/10000 (98%)
    
    real	2m35.852s
    user	2m33.508s
    sys	0m6.576s
    
  • Added tests for test_parquet_reader.py

    Added tests for selecting specific columns and requesting invalid columns to test_parquet_reader. Modified the specific-column test to request specific column names rather than regex patterns, so it selects even columns rather than odd ones and always finds at least one. Added a comment explaining why regex patterns were a problem.

  • Error reading parquet files made by AWS Athena

    I made a bunch of parquet files using an Amazon Athena CTAS query. I downloaded these files to first test locally (the end goal is to access the data from S3).

    If I run the code below:

    import s3fs
    from petastorm.reader import make_batch_reader
    from petastorm.tf_utils import make_petastorm_dataset
    
    dataset_url = "file:///Data/test-parquet"
    
    with make_batch_reader(dataset_url) as reader:
        dataset = make_petastorm_dataset(reader)
        for batch in dataset:
            break
    batch.correct
    

    I receive a lot of warnings and then an error at `for batch in dataset`:

    pyarrow.lib.ArrowIOError: The file only has 1 row groups, requested metadata for row group: 1

    If I look at dataset.take(1) or something similar, I do see the correct schema of the table. However, I don't seem to be able to access the data.

  • Add unit tests for compress in random shuffling buffer

    ~Compress remaining shuffling buffer should use remained size, that is, self._size.~

    self.size is actually a property defined afterwards. I just changed it to be consistent with other places in the code and to improve readability.

    Also, I added some unit tests to check compress results.

  • Expose the flag to disable Ømq copy buffers

    One of our engineers found an optimization involving disabling ZeroMQ copy buffers in the ProcessWorker, but this is not exposed in the top-level factory methods, make_reader and make_batch_reader. It's useful, and probably should be.

  • Problem with HelloWorld Example on Front Page of Repo

    Hi, I'm running the following code:

    from petastorm.unischema import Unischema, UnischemaField, dict_to_spark_row
    from petastorm.codecs import ScalarCodec, CompressedImageCodec, NdarrayCodec
    from petastorm.etl.dataset_metadata import materialize_dataset
    from pyspark.sql.types import IntegerType
    import numpy as np
    from petastorm.fs_utils import FilesystemResolver
    
    resolver=FilesystemResolver(output_url + 'test', spark.sparkContext._jsc.hadoopConfiguration(),
                                 hdfs_driver='libhdfs')
    fact = resolver.filesystem_factory()
    
    HelloWorldSchema = Unischema('HelloWorldSchema', [
       UnischemaField('id', np.int32, (), ScalarCodec(IntegerType()), False),
       UnischemaField('other_data', np.uint8, (None, 128, 30, None), NdarrayCodec(), False),
    ])
    
    
    def row_generator(x):
       """Returns a single entry in the generated dataset. Return a bunch of random values as an example."""
       return {'id': x,
               'other_data': np.random.randint(0, 255, dtype=np.uint8, size=(4, 128, 30, 3))}
    
    def generate_hello_world_dataset(output_url, spark, sc):
       rows_count = 1000
       rowgroup_size_mb = 256
    
       # Wrap dataset materialization portion. Will take care of setting up spark environment variables as
       # well as save petastorm specific metadata
       with materialize_dataset(spark, url, HelloWorldSchema, rowgroup_size_mb, filesystem_factory=fact):
    
           rows_rdd = sc.parallelize(range(rows_count))\
               .map(row_generator)\
               .map(lambda x: dict_to_spark_row(HelloWorldSchema, x))
    
           spark.createDataFrame(rows_rdd, HelloWorldSchema.as_spark_schema(), ) \
               .coalesce(10) \
               .write \
               .mode('overwrite') \
               .parquet(url)
        
    generate_hello_world_dataset(url, spark, sc)
    

    This is the only way that I can run with a libhdfs setup. I get the following error.

    org.apache.spark.api.python.PythonException: Traceback (most recent call last):
      File "/opt/cloudera/parcels/SPARK2-2.4.0.cloudera2-1.cdh5.13.3.p0.1041012/lib/spark2/python/pyspark/worker.py", line 377, in main
        process()
      File "/opt/cloudera/parcels/SPARK2-2.4.0.cloudera2-1.cdh5.13.3.p0.1041012/lib/spark2/python/pyspark/worker.py", line 372, in process
        serializer.dump_stream(func(split_index, iterator), outfile)
      File "/opt/cloudera/parcels/SPARK2-2.4.0.cloudera2-1.cdh5.13.3.p0.1041012/lib/spark2/python/pyspark/serializers.py", line 393, in dump_stream
        vs = list(itertools.islice(iterator, batch))
      File "/opt/cloudera/parcels/SPARK2-2.4.0.cloudera2-1.cdh5.13.3.p0.1041012/lib/spark2/python/pyspark/util.py", line 99, in wrapper
        return f(*args, **kwargs)
      File "/basedir/home/aredd/venvs/prometheus/lib64/python3.6/site-packages/petastorm/etl/dataset_metadata.py", line 216, in get_row_group_info
      File "/basedir/home/aredd/venvs/prometheus/lib64/python3.6/site-packages/petastorm/fs_utils.py", line 108, in <lambda>
      File "/basedir/tmp/mapred.tmp1/yarn/nm/usercache/username/appcache/application_1576215002453_189781/container_e15_1576215002453_189781_01_000003/PRO/pro/lib64/python3.6/site-packages/petastorm/hdfs/namenode.py", line 266, in hdfs_connect_namenode
        return pyarrow.hdfs.connect(hostname, url.port or 8020, driver=driver, user=user)
      File "/basedir/tmp/mapred.tmp1/yarn/nm/usercache/username/appcache/application_1576215002453_189781/container_e15_1576215002453_189781_01_000003/PRO/pro/lib64/python3.6/site-packages/pyarrow/hdfs.py", line 215, in connect
        extra_conf=extra_conf)
      File "/basedir/tmp/mapred.tmp1/yarn/nm/usercache/username/appcache/application_1576215002453_189781/container_e15_1576215002453_189781_01_000003/PRO/pro/lib64/python3.6/site-packages/pyarrow/hdfs.py", line 40, in __init__
        self._connect(host, port, user, kerb_ticket, driver, extra_conf)
      File "pyarrow/io-hdfs.pxi", line 105, in pyarrow.lib.HadoopFileSystem._connect
      File "pyarrow/error.pxi", line 80, in pyarrow.lib.check_status
    pyarrow.lib.ArrowIOError: HDFS connection failed
    
            at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
            at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:588)
            at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:571)
            at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
            at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
            at scala.collection.Iterator$class.foreach(Iterator.scala:891)
            at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
            at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
            at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
            at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
            at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
            at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
            at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
            at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
            at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
            at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
            at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
            at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
            at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
            at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
            at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
            at org.apache.spark.scheduler.Task.run(Task.scala:121)
            at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
            at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1405)
            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
            at java.lang.Thread.run(Thread.java:748)
    

    Thanks in advance

  • Train-Test Dataset Split

    Is there currently support for splitting a Petastorm dataset into train-test for PyTorch? In PyTorch, one would typically do this with a Dataset class, but since Petastorm only has the classes Reader and DataLoader (as below), I wonder if this feature has been implemented.

    trainloader = DataLoader(make_reader('file://' + filename), batch_size=128)

  • Predicting is slow and sometimes doesn't even work.

    Hi, I'm currently using PySpark 3.1.1 and I'm using petastorm to be able to use my TF models with Spark Dataframes. After much digging through the examples I'm struggling with some implementations. I'm trying to implement an AutoEncoder model and my dataset is as follows:

    +----------+-------------+--------------+------------+---------+--------------+----+
    |screw_id  |profile_1111|profile_2222   |profile_time|   gof   |profile_stepnr|rank|
    +----------+-------------+--------------+------------+---------+--------------+----+
    |12925510_1|0.0          |2.28          |1           |1.0      |0             |1   |
    |12925510_1|5.1          |0.0           |30          |1.0      |0             |1   |
    |12925510_1|10.3         |0.0           |40          |1.0      |0             |1   |
    |12925510_1|15.9         |0.0           |47          |1.0      |0             |1   |
    |12925510_1|21.0         |0.0           |52          |1.0      |0             |1   |
    |12925510_1|26.2         |2.16          |61          |1.0      |0             |1   |
    |12925510_1|31.4         |2.08          |68          |1.0      |0             |1   |
    |12925510_1|36.5         |2.2           |75          |1.0      |0             |1   |
    |12925510_1|41.7         |2.2           |87          |1.0      |0             |1   |
    +----------+-------------+--------------+------------+---------+--------------+----+
    

    After some feature engineering implemented via a pipeline, my features get encoded into a vector format in a new column named "features". I create the AE model (I don't think it is relevant for this use case to post it here, but I can add it if needed) and then the spark converter for both my training and validation dataset:

    converter_train = make_spark_converter(train_tf.select('features'))
    converter_val = make_spark_converter(val_tf.select('features'))

    Using the examples provided in this repo I have implemented the train_and_evaluate function as shown next. If I'm not mistaken, for unsupervised learning where no labels are provided I should use my 'features' for both X and Y or it will complain that I did not provide the gradients for any variable:

    BATCH_SIZE = 2**11
    #Epochs set to 1 for testing purposes
    NUM_EPOCHS = 1
    import os
    import tensorflow as tf
    
    def train_and_evaluate(lr=0.001):
        model = get_compiled_model(lr)
        
    
        with converter_train.make_tf_dataset(batch_size=BATCH_SIZE) as train_dataset, \
               converter_val.make_tf_dataset(batch_size=BATCH_SIZE) as val_dataset:
            
            # tf.keras only accept tuples, not namedtuples
            train_dataset = train_dataset.map(lambda x: (x.features, x.features))
            steps_per_epoch = len(converter_train) // BATCH_SIZE
    
            val_dataset = val_dataset.map(lambda x: (x.features, x.features))
            validation_steps = max(1, len(converter_test) // BATCH_SIZE)
    
            print(f"steps_per_epoch: {steps_per_epoch}, validation_steps: {validation_steps}")
    
            hist = model.fit(train_dataset,
                             steps_per_epoch=steps_per_epoch,
                             epochs=NUM_EPOCHS,
                             validation_data=val_dataset,
                             validation_steps=validation_steps,
                             callbacks=ae_callback(),
                             verbose=2)
                    
            return hist.history['val_loss'][-1], hist.history['val_accuracy'][-1], model 
      
    loss, accuracy, model = train_and_evaluate()
    print("Validation Accuracy: {}".format(accuracy))
    

    The model trains "fine" (performance is not as good as it did in Pandas but I haven't spent much time calibrating it) and relatively fast (2/3 min). With this trained model I now want to infer on a new dataset:

    def pred():
        with converter_unit.make_tf_dataset(batch_size=BATCH_SIZE) as t_dataset:
            te_dataset = t_dataset.map(lambda x: (x.features, x.features))
            return model.predict(te_dataset, verbose=2)
    

    I run this function and never (or almost never) get the results, and it never errors out. The test dataframe has only 400 lines so it should be pretty fast considering that training the model took only a couple of minutes. Any suggestions?

  • Allow users to use s3, s3a and s3n protocols when saving / reading datasets

    s3, s3a and s3n URL protocols can be explicitly specified when saving petastorm datasets.

    Fixed a bug on petastorm dataset write execution path previously preventing writing directly to s3 buckets.

    Tested: modified examples/generate_external_dataset.py and examples/python_hello_world.py to write/read from an s3 bucket using s3a and s3n URLs (wasn't able to properly configure s3 authentication to check that). Was able to write/read data successfully.

  • make_batch_reader loses dtype with list-of-strings columns, causing Tensorflow error when lists contain a None value

    The dtype is lost using make_batch_reader when a column is a list of strings. If a batch contains no None values, then Tensorflow is still able to infer the string type of the array. But if the batch contains any None values, then Tensorflow produces the following error:

    InternalError: Unsupported object type NoneType
    

    Note that this is similar to, but different from, issue https://github.com/uber/petastorm/issues/744.

    Example

    from pathlib import Path
    import numpy as np
    import pandas as pd
    import petastorm
    from petastorm.unischema import UnischemaField
    from petastorm import tf_utils
    from petastorm.transform import TransformSpec
    
    # Create parquet dataset
    data_path = Path('/path/to/data.parquet')
    data_pd = pd.DataFrame({'list_of_str': [['A', 'B'], ['C', 'D'], ['E', None], ['G', 'H']]})
    data_pd.to_parquet(data_path, row_group_size=2)
    
    noop_transform_spec = TransformSpec(lambda x: x, edit_fields=[UnischemaField('list_of_str', np.str_, (2, ), nullable=True)])
    
    reader = petastorm.make_batch_reader(data_path.as_uri(),
                                         workers_count=1,
                                         shuffle_row_groups=False,
                                         num_epochs=2,
                                         transform_spec=noop_transform_spec)
    
    reader.next() # output: inferred_schema_view(list_of_str=array([['A', 'B'], ['C', 'D']], dtype=object))
    reader.next() # output: inferred_schema_view(list_of_str=array([['E', None], ['G', 'H']], dtype=object))
    
    # Read with tensorflow
    dataset = tf_utils.make_petastorm_dataset(reader)
    dataset_itr = dataset.as_numpy_iterator()
    
    dataset_itr.next() # output: inferred_schema_view(list_of_str=array([[b'A', b'B'], [b'C', b'D']], dtype=object))
    dataset_itr.next() # InternalError: Unsupported object type NoneType
    

    Workaround

    Modify all TransformSpec funcs so that, in list-of-string columns, any missing string values are replaced with 'None' strings.

    def is_string_list(column_type: pyarrow.DataType) -> bool:
        return isinstance(column_type, pyarrow.ListType) and pyarrow.types.is_string(column_type.value_type)
    
    fields_str_list = [f for f in table.schema.names if is_string_list(table.column(f).type)]
    
    def transform_spec_with_workaround(rows_pd: pd.DataFrame) -> pd.DataFrame:
        ...  # custom transformation
    
        for f in fields_str_list:
            rows_pd[f] = rows_pd[f].map(lambda a: np.ma.masked_values(a, None).filled('None'))
    
        return rows_pd
    

    Full Trace

    ---------------------------------------------------------------------------
    InternalError                             Traceback (most recent call last)
    <ipython-input-50-55cc7ab782db> in <module>
    ----> 1 dataset_itr.next()
    
    ~/conda/lmigpuv0_17_1/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py in next(self)
       4693 
       4694   def next(self):
    -> 4695     return self.__next__()
       4696 
       4697 
    
    ~/conda/lmigpuv0_17_1/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py in __next__(self)
       4690       return numpy
       4691 
    -> 4692     return nest.map_structure(to_numpy, next(self._iterator))
       4693 
       4694   def next(self):
    
    ~/conda/lmigpuv0_17_1/lib/python3.7/site-packages/tensorflow/python/data/ops/iterator_ops.py in __next__(self)
        759   def __next__(self):
        760     try:
    --> 761       return self._next_internal()
        762     except errors.OutOfRangeError:
        763       raise StopIteration
    
    ~/conda/lmigpuv0_17_1/lib/python3.7/site-packages/tensorflow/python/data/ops/iterator_ops.py in _next_internal(self)
        745           self._iterator_resource,
        746           output_types=self._flat_output_types,
    --> 747           output_shapes=self._flat_output_shapes)
        748 
        749       try:
    
    ~/conda/lmigpuv0_17_1/lib/python3.7/site-packages/tensorflow/python/ops/gen_dataset_ops.py in iterator_get_next(iterator, output_types, output_shapes, name)
       2726       return _result
       2727     except _core._NotOkStatusException as e:
    -> 2728       _ops.raise_from_not_ok_status(e, name)
       2729     except _core._FallbackException:
       2730       pass
    
    ~/conda/lmigpuv0_17_1/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
       6939   message = e.message + (" name: " + name if name is not None else "")
       6940   # pylint: disable=protected-access
    -> 6941   six.raise_from(core._status_to_exception(e.code, message), None)
       6942   # pylint: enable=protected-access
       6943 
    
    ~/conda/lmigpuv0_17_1/lib/python3.7/site-packages/six.py in raise_from(value, from_value)
    
    InternalError: Unsupported object type NoneType
    	 [[{{node PyFunc}}]] [Op:IteratorGetNext]
    
  • dynamic padding via `collate_fn`

    I would like to dynamically pad my tensors by way of the collate_fn argument that can be passed to petastorm.pytorch.DataLoader, but I am seemingly thwarted by make_batch_reader here; it appears make_batch_reader prevents the user from shoring up tensor size through the dataloader.

    Or is this possible and I'm just missing how to do so? collate_fn can take care of the variable length values on a batch-by-batch basis. Otherwise it seems like I'd need to pad all the data in my spark dataframe, which increases data size substantially, slows training and, I assume, i/o through petastorm in general.

    What I would like to do looks something like below, where the function passed to collate_fn would dynamically pad my variable length values.

    reader = make_batch_reader(
            channel,
            workers_count=2,
            num_epochs=1,
            schema_fields=['input', 'labels']
        )
    
    dl = DataLoader(reader,
                    batch_size = 8,
                    shuffling_queue_capacity = 100000,
                    collate_fn=some_padding_function
                   )
    
  • Newer pyarrow versions?

    I see that in your workflow file you mention newer versions of pyarrow. Are these supported and, if so, is there a way to download a PyPI package referencing them?

    Many thanks!

  • Validate_schema keyword not supported yet

    Hi, I'm using petastorm to feed tensorflow models launched with Spark in an EMR cluster. The code is the basic setup to read parquet files on s3:

    from pyarrow import fs
    from petastorm.reader import Reader
    from petastorm.tf_utils import make_petastorm_dataset
    
    ratings_uri = "s3://path/to/parquet/file"
    
    s3, path = fs.FileSystem.from_uri(ratings_uri)
    with Reader(pyarrow_filesystem= s3, dataset_path=path) as ratings_r:
        r = make_petastorm_dataset(ratings_r)
    

    It throws the following error:

    Traceback (most recent call last):
      File "/home/hadoop/ai_script.py", line 105, in <module>
        with Reader(pyarrow_filesystem= s3, dataset_path=path) as ratings_r:
      File "/home/hadoop/.local/lib/python3.7/site-packages/petastorm/reader.py", line 406, in __init__
        filters=filters)
      File "/home/hadoop/.local/lib/python3.7/site-packages/pyarrow/parquet.py", line 1213, in __new__
        metadata_nthreads=metadata_nthreads)
      File "/home/hadoop/.local/lib/python3.7/site-packages/pyarrow/parquet.py", line 1466, in __init__
        "Dataset API".format(keyword))
    ValueError: Keyword 'validate_schema' is not yet supported with the new Dataset API
    

    How can this issue be solved? Thanks

  • Performance on large amounts of data

    Hello!

    I'm attempting to train a relatively simple transformer model on a large amount of data (35m rows, 20 features). The data have been materialized as parquet, where each column is an array of size ~30. These are just small enough that with some manipulation I can fit them into a pandas data frame and keep that in memory, but I'd like to be able to train on larger datasets -- and more workers -- in the future.

    At least with my naive use of petastorm, it appears that throughput is quite low. Simply iterating over a petastorm.pytorch.DataLoader can take hours, timings which make my use case somewhat intractable. Changing the worker type or number of workers did not seem to make things better or worse.

    I'm materializing the dataset this way:

    import numpy as np
    
    from petastorm.etl.dataset_metadata import materialize_dataset
    from petastorm.codecs import NdarrayCodec
    from petastorm.unischema import Unischema, UnischemaField
    
    train_path = f"{base_path}/train"
    
    fields = [
        UnischemaField(
            column,
            np.float32,
            (encoder_length if column.endswith("encoder") else decoder_length,),
            NdarrayCodec(),
            False,
        )
        for column in data_result.data_train.columns
    ]
    
    schema = Unischema("schema", fields)
    
    with materialize_dataset(spark, train_path, schema):
        data_result.data_train.write.mode("overwrite").parquet(train_path)
    

    And reading it this way:

    from petastorm import make_batch_reader
    from petastorm.pytorch import DataLoader
    
    train_reader = make_batch_reader(
        train_path,
        num_epochs=None,
        # reader_pool_type="process",
        # workers_count=15,
        # cur_shard=hvd.rank(),
        # shard_count=hvd.size(),
    )
    
    with DataLoader(train_reader, batch_size=batch_size) as train_dataloader:
        train_dataloader_iter = iter(train_dataloader)
    
        for _ in range(steps_per_epoch):
            batch = next(train_dataloader_iter)
    

    Any hints as to what I can do to improve throughput? Some option or technique I might be missing? Using BatchedDataLoader instead of DataLoader did help substantially, but I'm running into possibly memory-related errors when using that with Horovod (any insight into that would also be appreciated -- unfortunately Databricks doesn't give me much information other than telling me the process has died).
