
git-history


Tools for analyzing Git history using SQLite

Installation

Install this tool using pip:

$ pip install git-history

Usage

This tool can be run against a Git repository that holds a file containing JSON, CSV/TSV or some other format, with multiple versions of that file tracked in the Git history. See Git scraping to understand how you might create such a repository.

The file command analyzes the history of an individual file within the repository, and generates a SQLite database table that represents the different versions of that file over time.

The file is assumed to contain multiple objects - for example, the results of scraping an electricity outage map or a CSV file full of records.

Say you have a file called incidents.json that is a JSON array of objects, with multiple versions of that file recorded in a repository.

Change directory into the GitHub repository in question and run the following:

git-history file incidents.db incidents.json

This will create a new SQLite database in the incidents.db file with two tables:

  • commits containing a row for every commit, with a hash column and the commit_at date.
  • items containing a row for every item in every version of the incidents.json file - with an extra commit column that is a foreign key back to the commits table.

If you have 10 historic versions of the incidents.json file and each one contains 30 incidents, you will end up with 10 * 30 = 300 rows in your items table.
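That arithmetic is easy to sanity-check once the database has been built. The sketch below fakes the two-table schema described above using sqlite3 (simplified columns; the real schema produced by the tool has more):

```python
import sqlite3

# Simplified stand-ins for the commits and items tables described above
db = sqlite3.connect(":memory:")
db.execute('CREATE TABLE commits (id INTEGER PRIMARY KEY, hash TEXT, commit_at TEXT)')
db.execute('CREATE TABLE items ("commit" INTEGER REFERENCES commits(id), data TEXT)')

for commit_id in range(1, 11):       # 10 historic versions of incidents.json
    db.execute("INSERT INTO commits VALUES (?, ?, ?)",
               (commit_id, "hash%d" % commit_id, "2021-01-01"))
    for incident in range(30):       # 30 incidents in each version
        db.execute('INSERT INTO items ("commit", data) VALUES (?, ?)',
                   (commit_id, "incident %d" % incident))

total = db.execute("SELECT count(*) FROM items").fetchone()[0]
print(total)  # 10 * 30 = 300
```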

De-duplicating items using IDs

If your objects have a unique identifier - or multiple columns that together form a unique identifier - you can use the --id option to de-duplicate and track changes to each of those items over time.

If there is a unique identifier column called IncidentID you could run the following:

git-history file incidents.db incidents.json --id IncidentID

This will create three tables - commits, items and item_versions.

The items table will contain just the most recent version of each row, de-duplicated by ID.

The item_versions table will contain a row for each captured differing version of that item, plus the following columns:

  • item as a foreign key to the items table
  • commit as a foreign key to the commits table
  • version as the numeric version number, starting at 1 and incrementing for each captured version
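With that schema in place, the full history of a single record can be read back with an ordinary SQL join. This is a sketch against hand-built stand-in tables - the column names follow the description above, but check your generated schema before relying on them:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Minimal stand-ins for the three tables described above
db.executescript("""
CREATE TABLE commits (id INTEGER PRIMARY KEY, hash TEXT, commit_at TEXT);
CREATE TABLE items (id INTEGER PRIMARY KEY, IncidentID TEXT);
CREATE TABLE item_versions (
    item INTEGER REFERENCES items(id),
    "commit" INTEGER REFERENCES commits(id),
    version INTEGER,
    engines INTEGER
);
""")
db.execute("INSERT INTO commits VALUES (1, 'abc', '2021-11-01'), (2, 'def', '2021-11-02')")
db.execute("INSERT INTO items VALUES (1, '552')")
db.execute("INSERT INTO item_versions VALUES (1, 1, 1, 3), (1, 2, 2, 4)")

# Every captured version of incident 552, oldest first
rows = db.execute("""
    SELECT v.version, c.commit_at, v.engines
    FROM item_versions v
    JOIN commits c ON c.id = v."commit"
    JOIN items i ON i.id = v.item
    WHERE i.IncidentID = '552'
    ORDER BY v.version
""").fetchall()
print(rows)  # [(1, '2021-11-01', 3), (2, '2021-11-02', 4)]
```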

If you have already imported history, the command will skip any commits that it has seen already and just process new ones. This means that even though an initial import could be slow, subsequent imports should run a lot faster.

Additional options:

  • --repo DIRECTORY - the path to the Git repository, if it is not the current working directory.
  • --branch TEXT - the Git branch to analyze - defaults to main.
  • --id TEXT - as described above: pass one or more columns that uniquely identify a record, so that changes to that record can be calculated over time.
  • --ignore TEXT - one or more columns to ignore - they will not be included in the resulting database.
  • --csv - treat the data as CSV or TSV rather than JSON, and attempt to guess the correct dialect.
  • --convert TEXT - custom Python code for a conversion, see below.
  • --import TEXT - Python modules to import for --convert.
  • --ignore-duplicate-ids - if a single version of a file has the same ID in it more than once, the tool will exit with an error. Use this option to ignore this and instead pick just the first of the two duplicates.
  • --silent - don't show the progress bar.

Note that id, item, version, commit and rowid are reserved column names that are used by this tool. If your data contains any of these they will be renamed to id_, item_, version_, commit_ or rowid_ to avoid clashing with the reserved columns.

There is one exception: if you have an id column and use --id id without specifying more than one ID column, your id column will be used as the item ID but will not be renamed.
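The renaming can be sketched like this - a simplified reimplementation for illustration only, which skips the --id id exception and the handling of already-suffixed names in the real tool:

```python
RESERVED = {"id", "item", "version", "commit", "rowid"}

def fix_reserved_columns(item):
    # Append an underscore to any key that clashes with a reserved column
    return {
        (key + "_" if key in RESERVED else key): value
        for key, value in item.items()
    }

print(fix_reserved_columns({"id": 1, "name": "Hawthorne Fire"}))
# {'id_': 1, 'name': 'Hawthorne Fire'}
```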

CSV and TSV data

If the data in your repository is a CSV or TSV file you can process it by adding the --csv option. This will attempt to detect which delimiter is used by the file, so the same option works for both comma- and tab-separated values.

git-history file trees.db trees.csv --id TreeID
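Python's standard library can do this kind of dialect detection with csv.Sniffer - shown here as an illustration of the idea, not necessarily the exact logic the tool uses:

```python
import csv
import io

data = "TreeID\tSpecies\n1\tOak\n2\tMaple\n"

# Guess whether the file is comma- or tab-separated
dialect = csv.Sniffer().sniff(data, delimiters=",\t")
rows = list(csv.DictReader(io.StringIO(data), dialect=dialect))
print(dialect.delimiter == "\t")  # True - tab was detected
print(rows[0])  # {'TreeID': '1', 'Species': 'Oak'}
```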

Custom conversions using --convert

If your data is not already either CSV/TSV or a flat JSON array, you can reshape it using the --convert option.

The format needed by this tool is an array of dictionaries that looks like this:

[
    {
        "id": "552",
        "name": "Hawthorne Fire",
        "engines": 3
    },
    {
        "id": "556",
        "name": "Merlin Fire",
        "engines": 1
    }
]

If your data does not fit this shape, you can provide a snippet of Python code that converts the on-disk content of each stored file into a Python list of dictionaries.

For example, if your stored files each look like this:

{
    "incidents": [
        {
            "id": "552",
            "name": "Hawthorne Fire",
            "engines": 3
        },
        {
            "id": "556",
            "name": "Merlin Fire",
            "engines": 1
        }
    ]
}

You could use the following Python snippet to convert them to the required format:

json.loads(content)["incidents"]

(The json module is exposed to your custom function by default.)

You would then run the tool like this:

git-history file database.db incidents.json \
  --id id \
  --convert 'json.loads(content)["incidents"]'

The content variable is always a bytes object representing the content of the file at a specific moment in the repository's history.

You can import additional modules using --import. This example shows how you could read a CSV file that uses ; as the delimiter:

git-history file trees.db ../sf-tree-history/Street_Tree_List.csv \
  --repo ../sf-tree-history \
  --import csv \
  --import io \
  --convert '
    fp = io.StringIO(content.decode("utf-8"))
    return list(csv.DictReader(fp, delimiter=";"))
    ' \
  --id TreeID

If your Python code spans more than one line it needs to include a return statement.
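One way a snippet like this can be executed - a rough sketch of the mechanism, not the tool's exact implementation - is to compile it into the body of a function, which is why multi-line code needs its own return statement while a single expression can be wrapped in one automatically:

```python
import textwrap

def make_convert_function(code):
    # A single expression gets wrapped in "return (...)";
    # multi-line snippets must supply their own return statement
    if "\n" not in code.strip():
        code = "return ({})".format(code)
    source = "def _convert(content):\n" + textwrap.indent(
        textwrap.dedent(code), "    "
    )
    namespace = {"json": __import__("json")}  # json is exposed by default
    exec(source, namespace)
    return namespace["_convert"]

convert = make_convert_function('json.loads(content)["incidents"]')
print(convert(b'{"incidents": [{"id": "552"}]}'))  # [{'id': '552'}]
```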

Development

To contribute to this tool, first check out the code. Then create a new virtual environment:

cd git-history
python -m venv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
Comments
  • Add live demos

    I'm tempted to pull a bunch of different example repos on a schedule and bundle them into the same demo instance.

    Could have a recipes.md documentation page that shares the same demos and shows how they were built, using cog somehow.

  • Default to only storing columns that have changed in item_version

    (Original title: Option to store only columns that have changed in item_versions)

    When browsing a list of item versions like this one it's difficult to tell at a glance which rows have changed since the previous version:

(screenshot)

It would be neat if there was a mode that stored values in the versions table only for columns that have changed since the last version.

  • Use integer primary keys for smaller tables

    Refs #12. Still needs a bit more work:

    • [x] See if I can come up with a better column name than _item_hash_id (I went with _item_id)
    • [x] Ship sqlite-utils 3.19 and update dependency
    • [x] Update schema description in README
    • [x] Update reserved columns
  • Support history of more than one file in a single database

Provide a way to customize the names of the items and item_versions tables - but also need to work out how to reflect in the commits table that they may be related to other things (perhaps it's OK to have commits just live there - but they should probably indicate their repository somehow, maybe via a foreign key to a repos table).

  • Add a `--dialect` option for forcing a CSV dialect

    Running against https://github.com/simonw/fara-history

    (git-history) git-history % git-history file fara.db ../fara-history/FARA_All_Registrants.csv --repo ../fara-history --id Registration_Number --changed --branch master --csv
      [------------------------------------]  1/376    0%Traceback (most recent call last):
      File "/Users/simon/.local/share/virtualenvs/git-history-nXMauUZE/bin/git-history", line 33, in <module>
        sys.exit(load_entry_point('git-history', 'console_scripts', 'git-history')())
      File "/Users/simon/.local/share/virtualenvs/git-history-nXMauUZE/lib/python3.10/site-packages/click/core.py", line 1128, in __call__
        return self.main(*args, **kwargs)
      File "/Users/simon/.local/share/virtualenvs/git-history-nXMauUZE/lib/python3.10/site-packages/click/core.py", line 1053, in main
        rv = self.invoke(ctx)
      File "/Users/simon/.local/share/virtualenvs/git-history-nXMauUZE/lib/python3.10/site-packages/click/core.py", line 1659, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/Users/simon/.local/share/virtualenvs/git-history-nXMauUZE/lib/python3.10/site-packages/click/core.py", line 1395, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/Users/simon/.local/share/virtualenvs/git-history-nXMauUZE/lib/python3.10/site-packages/click/core.py", line 754, in invoke
        return __callback(*args, **kwargs)
      File "/Users/simon/Dropbox/Development/git-history/git_history/cli.py", line 246, in file
        item = fix_reserved_columns(item)
      File "/Users/simon/Dropbox/Development/git-history/git_history/utils.py", line 8, in fix_reserved_columns
        if not any(reserved_with_suffix_re.match(key) for key in item):
      File "/Users/simon/Dropbox/Development/git-history/git_history/utils.py", line 8, in <genexpr>
        if not any(reserved_with_suffix_re.match(key) for key in item):
    TypeError: expected string or bytes-like object
    

    After much debugging, it turns out the problem is running the CSV parser against this specific revision of the file: https://github.com/simonw/fara-history/blob/ab27087f642680697db6c914d094bf3d06b363f3/FARA_All_Registrants.csv

    Here's what's happening:

    >>> import csv, httpx, io
    >>> content = httpx.get("https://raw.githubusercontent.com/simonw/fara-history/ab27087f642680697db6c914d094bf3d06b363f3/FARA_All_Registrants.csv").content
    >>> decoded = content.decode("utf-8")
    >>> dialect = csv.Sniffer().sniff(decoded[:512])
    >>> (dialect.delimiter, dialect.doublequote, dialect.escapechar, dialect.lineterminator, dialect.quotechar, dialect.quoting, dialect.skipinitialspace)
    (',', False, None, '\r\n', '"', 0, False)
    >>> reader = csv.DictReader(io.StringIO(decoded), dialect=dialect)
    >>> items = list(reader)
    >>> [it for it in items if it["Registration_Number"] == '4797']
    [{'Registration_Number': '4797',
      'Registration_Date': '04/20/1993',
      'Termination_Date': '05/06/1993',
      'Name': 'National Petroleum Company, "Sudan""',
      'Business_Name': ' Ltd."',
      'Address_1': '',
      'Address_2': '525 South Lancaster Street',
      'City': '',
      'State': 'Arlington',
      'Zip': 'VA',
      None: ['22204']}]
    

    What is going on with that last item of None: ['22204']?
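That None key is documented csv.DictReader behaviour rather than anything specific to this tool: when a row contains more fields than the header, the surplus values are collected into a list under the restkey, which defaults to None. A minimal reproduction:

```python
import csv
import io

# Header declares two columns, but the second data row has three fields
data = "a,b\n1,2\n3,4,5\n"
rows = list(csv.DictReader(io.StringIO(data)))
print(rows[1])  # {'a': '3', 'b': '4', None: ['5']}
```

So the stray quoting in that revision made one row parse with an extra field, and the resulting non-string None key is what blew up the reserved_with_suffix_re.match() call in the traceback above.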

  • Item should link to commit

    There's a bug in the non-id branch. Each item in items is redefined, with a new object, then _commit is set, but never used anywhere.

In my use case, I need to know the commit each item comes from, and this fix allows it.

  • _version resets to 1 when command is run incrementally

    I restarted it to try out an optimization, but then realized there's a nasty bug: the _version resets to 1 if you restart it even if there are already commits in the database.

    Originally posted by @simonw in https://github.com/simonw/git-history/issues/21#issuecomment-983116364

  • Change how reserved columns work to have an underscore prefix

    Right now I've implemented it such that id and commit and version and item are reserved columns, and any user-provided data with those columns gets renamed to commit_ and id_ and so-on - see #8.

    I've changed my mind. I think this tool's columns should all have a _ prefix instead (like semi-private class properties in Python).

    I'm even going to do that for the _id column, mainly so I don't have to explain a special case for id.

  • Columns with list/dict JSON values always detected as changed

    It looks like a column with a JSON value such as:

     [
        "Bridge work",
        "Long-term construction"
    ]
    

    Gets recorded as a changed value for every version, even when it hasn't changed.
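One plausible fix (a guess at the approach, not the tool's actual patch) is to serialize nested values deterministically before comparing them, so that equal lists and dicts always produce identical strings:

```python
import json

def canonical(value):
    # Serialize lists/dicts deterministically so equal values
    # always compare equal, regardless of key order or whitespace
    if isinstance(value, (list, dict)):
        return json.dumps(value, sort_keys=True, separators=(",", ":"))
    return value

old = ["Bridge work", "Long-term construction"]
new = ["Bridge work", "Long-term construction"]
print(canonical(old) == canonical(new))  # True - no spurious change recorded
```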

  • `--start-at` and `--start-after` options

Found a bad commit which broke the script: https://github.com/simonw/sf-tree-history/commit/3fb63a99dfab8a75c83d341c67afc9abf484e0c4 in https://github.com/simonw/git-history/issues/21#issuecomment-983130553

    Solution: options to say "start at this commit" or "start at the commit AFTER this commit".
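The proposed options could be sketched as a generator that filters the commit log - a hypothetical helper, since neither option exists in the tool as described here:

```python
def skip_until(commits, start_at=None, start_after=None):
    # Yield commits beginning at (or just after) the given hash
    seen = start_at is None and start_after is None
    for commit in commits:
        if not seen:
            if commit == start_at:
                seen = True          # include this commit
            elif commit == start_after:
                seen = True
                continue             # skip this commit, include the rest
            else:
                continue
        yield commit

print(list(skip_until(["a", "b", "c"], start_at="b")))     # ['b', 'c']
print(list(skip_until(["a", "b", "c"], start_after="b")))  # ['c']
```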

  • Support for `--import xml.etree.ElementTree`

    https://github.com/simonw/neededge-history/blob/main/v1.xml

    This should work, but doesn't without a small change:

    git-history file neededge.db v1.xml --id url --convert '
    tree = xml.etree.ElementTree.fromstring(content)
    return [site.attrib for site in tree.iter("site")]
    ' --import xml.etree.ElementTree
    

    Originally posted by @simonw in https://github.com/simonw/git-history/issues/30#issuecomment-987285857

  • Ensure that shipping the item _commit fix doesn't break existing databases

    An interesting challenge with this change: since it modifies the schema, shipping a release with it could break existing databases the next time git-history file ... is run against them.

    This would affect my workflow here for example: https://github.com/simonw/scrape-instances-social/blob/main/.github/workflows/scrape.yml

    Originally posted by @simonw in https://github.com/simonw/git-history/issues/59#issuecomment-1321268605

  • Feature idea: --always columns

Sometimes you might find that you want to record a value for a column every time, even while using the mechanism that stores null for values that have not changed - for this project for example: https://github.com/simonw/scrape-instances-social

    Idea: a --always colname option which turns this on (and can be applied multiple times).

  • Running with --full-versions twice fails with an error

    scrape-instances-social % git-history file counts.db instances.json \
      --convert "
        instances = json.loads(content)
        return [
        {
            'id': 'all',
            'users': sum(d['users'] or 0 for d in instances),
            'statuses': sum(int(d['statuses'] or 0) for d in instances),
            'instances': len(instances)
        }
      ]" --id id --full-versions
  [####################################]  17/17  100%
scrape-instances-social %
    scrape-instances-social % 
    scrape-instances-social % git-history file counts.db instances.json \
      --convert "
        instances = json.loads(content)
        return [
        {
            'id': 'all',
            'users': sum(d['users'] or 0 for d in instances),
            'statuses': sum(int(d['statuses'] or 0) for d in instances),
            'instances': len(instances)
        }
      ]" --id id --full-versions
    Traceback (most recent call last):
      File "/Users/simon/.local/bin/git-history", line 8, in <module>
        sys.exit(cli())
      File "/Users/simon/.local/pipx/venvs/git-history/lib/python3.10/site-packages/click/core.py", line 1128, in __call__
        return self.main(*args, **kwargs)
      File "/Users/simon/.local/pipx/venvs/git-history/lib/python3.10/site-packages/click/core.py", line 1053, in main
        rv = self.invoke(ctx)
      File "/Users/simon/.local/pipx/venvs/git-history/lib/python3.10/site-packages/click/core.py", line 1659, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/Users/simon/.local/pipx/venvs/git-history/lib/python3.10/site-packages/click/core.py", line 1395, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/Users/simon/.local/pipx/venvs/git-history/lib/python3.10/site-packages/click/core.py", line 754, in invoke
        return __callback(*args, **kwargs)
      File "/Users/simon/.local/pipx/venvs/git-history/lib/python3.10/site-packages/git_history/cli.py", line 187, in file
        item_id_to_version, item_id_to_last_full_hash = get_versions_and_hashes(
      File "/Users/simon/.local/pipx/venvs/git-history/lib/python3.10/site-packages/git_history/cli.py", line 555, in get_versions_and_hashes
        for row in db.query(sql):
      File "/Users/simon/.local/pipx/venvs/git-history/lib/python3.10/site-packages/sqlite_utils/db.py", line 410, in query
        cursor = self.execute(sql, params or tuple())
      File "/Users/simon/.local/pipx/venvs/git-history/lib/python3.10/site-packages/sqlite_utils/db.py", line 422, in execute
        return self.conn.execute(sql, parameters)
    sqlite3.OperationalError: no such column: item_version._item_full_hash
    

    While working on:

    • https://github.com/simonw/scrape-instances-social/issues/2
  • Defining composed ids considering new lines as different items

    I'm a newbie to the datasette ecosystem and I'm particularly amazed by the git-scraping technique. Thanks Simon for sharing it!

I need help defining a composed id for the rows of this CSV, where I'm tracking power outage events in Buenos Aires's metropolitan area every 20 minutes.

    https://github.com/OpenDataCordoba/cortes_enre/blob/main/cortes_enre.csv

My problem is that there is no clear ID for each event, and I would like to track changes to it

    Consider this recent commit https://github.com/OpenDataCordoba/cortes_enre/commit/b3cde1c1d3b27dc0a76249d0025e3cbe68d914ed

Here it seems I could use all the columns but the last two as a composed id:

    latitud,longitud,nn,tipo,empresa,partido,localidad,subestacion,alimentador
    

Then the columns afectados (affected users) and normalizacion estimada (estimated time to normalization) could change over the next few updates, but eventually the line will be deleted.

    The problem is that the composed id basically describes the "place" where the outage is happening, and maybe in the future it could be a totally different event in the same place unrelated to the current event.

So, how could I distinguish different events in the same place? I'm wondering if there is a way to treat it as a new item if the composed id appears again (i.e. the commit is not updating an existing line but re-adding it).

  • Tables not being created, only `namespaces`

    Hi!

I am trying to run git-history on a repository containing a JSON file that has multiple versions over time (hundreds of commits to the same file).

When I run git-history file some_data.db data/latest.json --branch master, it creates some_data.db but no commits table is created:

(screenshot)

Only a single namespaces table with a single item is created. (screenshot)

    Not sure what I am missing here?

    Kind regards, Lasse

  • Floating point numbers seem to always be recorded as changed

    In this example:

(screenshot)

    I don't think latitude and longitude should be populated as they have not changed between records (unlike units).

    This is from a demo database built against https://github.com/simonw/scrape-san-mateo-fire-dispatch with:

    git-history file history.db incidents.json --id id
    

    Relevant code:

    https://github.com/simonw/git-history/blob/ce9e2f161f8037aab8f15dcffb4c7ff8f94ab3b4/git_history/cli.py#L344-L354

Related tags
Mongita is to MongoDB as SQLite is to SQL

Mongita is a lightweight embedded document database that implements a commonly-used subset of the MongoDB/PyMongo interface. Mongita differs from MongoDB in that instead of being a server, Mongita is a self-contained Python library. Mongita can be configured to store its documents either on disk or in memory.

Nov 25, 2022
Python function to extract all the rows from a SQLite database file while iterating over its bytes, such as while downloading it


Nov 9, 2022
Python function to query SQLite files stored on S3

sqlite-s3-query Python function to query a SQLite file stored on S3. It uses multiple HTTP range requests per query to avoid downloading the entire fi

Nov 21, 2022
Manage your sqlite database very easy (like django) ...


Feb 9, 2022
pickleDB is an open source key-value store using Python's json module.

pickleDB pickleDB is lightweight, fast, and simple database based on the json module. And it's BSD licensed! pickleDB is Fun >>> import pickledb >>>

Dec 1, 2022
AWS Tags As A Database is a Python library using AWS Tags as a Key-Value database.

AWS Tags As A Database is a Python library using AWS Tags as a Key-Value database. This database is completely free*

Nov 25, 2022
Oh-My-PickleDB is an open source key-value store using Python's json module.

OH-MY-PICKLEDB oh-my-pickleDB is a lightweight, fast, and intuitive data manager written in python. Table of Contents About Getting Started Deployme

Feb 20, 2022
Tiny Git is a simplified version of Git with only the basic functionalities to gain better understanding of git internals.

Tiny Git is a simplified version of Git with only the basic functionalities to gain better understanding of git internals. Implemented Functi

Oct 15, 2021
Crud-python-sqlite: used to manage telephone contacts through python and sqlite

crud-python-sqlite This program is used to manage telephone contacts through python and sqlite. Dependencicas python3 sqlite3 Installation Clone the r

Jan 24, 2022
Google-drive-to-sqlite - Create a SQLite database containing metadata from Google Drive

google-drive-to-sqlite Create a SQLite database containing metadata from Google

Nov 27, 2022
GEP (GDB Enhanced Prompt) - a GDB plug-in for GDB command prompt with fzf history search, fish-like autosuggestions, auto-completion with floating window, partial string matching in history, and more!

GEP (GDB Enhanced Prompt) GEP (GDB Enhanced Prompt) is a GDB plug-in which make your GDB command prompt more convenient and flexibility. Why I need th

Dec 1, 2022
Git Plan - a better workflow for git

git plan A better workflow for git. Git plan inverts the git workflow so that you can write your commit message first, before you start writing code.

Sep 23, 2022
git-cola: The highly caffeinated Git GUI

git-cola: The highly caffeinated Git GUI git-cola is a powerful Git GUI with a slick and intuitive user interface. Copyright (C) 2007-2020, David Agu

Dec 1, 2022
git git《Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking》(CVPR 2021) GitHub:git2] 《Masksembles for Uncertainty Estimation》(CVPR 2021) GitHub:git3]

Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking Ning Wang, Wengang Zhou, Jie Wang, and Houqiang Li Accepted by CVPR

Nov 17, 2022
git-partial-submodule is a command-line script for setting up and working with submodules while enabling them to use git's partial clone and sparse checkout features.

Partial Submodules for Git git-partial-submodule is a command-line script for setting up and working with submodules while enabling them to use git's

Sep 22, 2022
A terminal tool for git. When we use git, do you feel very uncomfortable with too long commands

PIGIT A terminal tool for git. When we use git, do you feel very uncomfortable with too long commands. For example: git status --short, this project c

Apr 9, 2022
Dicionario-git-github - Dictionary created to help train new users of Git and GitHub applications

Dictionary created with the goal of helping to train new

Feb 7, 2022
A collection of robust and fast processing tools for parsing and analyzing web archive data.

ChatNoir Resiliparse A collection of robust and fast processing tools for parsing and analyzing web archive data. Resiliparse is part of the ChatNoir

Nov 29, 2022
Tools for analyzing data collected with a custom unity-based VR for insects.

unityvr Tools for analyzing data collected with a custom unity-based VR for insects. Organization: The unityvr package contains the following submodul

Aug 1, 2022
Tools for analyzing Java JVM gc log files

gc_log This package consists of two separate utilities useful for : gc_log_visualizer.py regionsize.py GC Log Visualizer This was updated to run under

Jan 4, 2022