Overwrite an existing Big Table by clearing all rows and (optionally) adding new data. Row IDs for any added rows are returned in the response, in the same order as the rows are passed in the request. Row data may be passed in JSON, CSV, or TSV format. If a column is not included in the passed row data, it will be empty in the added row. If a column is passed that does not exist in the updated table schema, or with a value that does not match the column's type, the default behavior is to take no action and return an error. However, you can control this behavior with the onSchemaError query parameter.
There is currently no way to supply values for user-specific columns in the API. Those columns will be cleared when using this endpoint.
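As a minimal sketch of what such a request can look like from a client (the base URL, endpoint path, and authorization header below are placeholders, not values documented on this page):
import requests

# Placeholder values; the base URL, path, and auth header are assumptions, not
# taken from this page. Substitute the values from the API reference.
BASE_URL = "https://api.example.com"
TABLE_ID = "your-table-id"
TOKEN = "your-api-token"

body = {
    "rows": [
        {"fullName": "Alex Bard", "ageInYears": 30, "hiredOn": "2021-07-03"}
    ]
}

response = requests.put(
    f"{BASE_URL}/tables/{TABLE_ID}",             # hypothetical path for this endpoint
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
    # params={"onSchemaError": "updateSchema"},  # optional: change schema-mismatch handling
)
response.raise_for_status()
print(response.json())  # row IDs for the added rows, in the same order as the input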

Updating the Schema

You can optionally update the table schema at the same time by providing a new schema in the JSON request body. If you do not provide a schema, the existing schema will be used. When using a CSV or TSV request body, you cannot pass a schema. If you need to update the schema, use the onSchemaError=updateSchema query parameter, or stash the CSV/TSV data and pass a JSON request body referencing the stash ID.
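For instance, a rough sketch of overwriting a table from CSV data while allowing the schema to be updated might look like this; the endpoint path and Content-Type are assumptions, and only the onSchemaError=updateSchema parameter comes from this page:
import requests

# The path and Content-Type below are assumptions; only the onSchemaError=updateSchema
# query parameter is taken from this page.
BASE_URL = "https://api.example.com"
TABLE_ID = "your-table-id"
TOKEN = "your-api-token"

csv_body = "fullName,ageInYears,hiredOn\nAlex Bard,30,2021-07-03\n"

response = requests.put(
    f"{BASE_URL}/tables/{TABLE_ID}",
    params={"onSchemaError": "updateSchema"},  # let the schema be updated to fit the CSV
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "text/csv",            # assumed content type for CSV bodies
    },
    data=csv_body,
)
response.raise_for_status()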

Examples

To clear all rows from a table, send a PUT request with an empty array in the rows field:
{
    "rows": []
}
If you want to reset a table's data with a small number of rows, you can provide the data inline in the rows field (making sure each row object's structure matches the table schema):
{
    "rows": [
        {
            "fullName": "Alex Bard",
            "ageInYears": 30,
            "hiredOn": "2021-07-03"
        }
    ]
}
However, this is only appropriate for relatively small initial datasets (around a few hundred rows or fewer, depending on schema complexity). If you need to work with a larger dataset, you should use stashing.
Stashing is our process for handling the upload of large datasets. Break your dataset into smaller, more manageable pieces and upload them to a single stash ID. Then, to reset the table's data from the stash, use a $stashID reference in the rows field instead of providing the data inline:
{
    "rows": { "$stashID": "20240215-job32" }
}
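As an illustration, a sketch of chunking a large local dataset into one stash and then resetting the table from it could look like the following; the Stash Data endpoint path and request shape are assumptions, so check the Stash Data documentation for the exact form:
import requests

# The Stash Data endpoint path and request shape below are assumptions; see the
# Stash Data documentation for the exact form. Other values are placeholders.
BASE_URL = "https://api.example.com"
TABLE_ID = "your-table-id"
TOKEN = "your-api-token"
STASH_ID = "20240215-job32"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
CHUNK_SIZE = 500

large_dataset = [...]  # your full list of row objects

# Upload the dataset in smaller pieces, all under the same stash ID.
for serial, start in enumerate(range(0, len(large_dataset), CHUNK_SIZE)):
    chunk = large_dataset[start:start + CHUNK_SIZE]
    requests.put(
        f"{BASE_URL}/stashes/{STASH_ID}/{serial}",   # hypothetical Stash Data path
        headers=HEADERS,
        json=chunk,
    ).raise_for_status()

# Overwrite the table by referencing the stash instead of inlining the rows.
requests.put(
    f"{BASE_URL}/tables/{TABLE_ID}",                 # hypothetical path for this endpoint
    headers=HEADERS,
    json={"rows": {"$stashID": STASH_ID}},
).raise_for_status()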
This endpoint can be combined with the Get Rows endpoint and stashing to fetch the current table data, modify it, and then overwrite the table with the updated data:
  1. Call the Get Rows endpoint to fetch the current table data. Pass a reasonably high limit query parameter so that all rows are fetched in as few requests as possible.
  2. Modify the data as desired, and then stash the modified rows using the Stash Data endpoint.
  3. If a continuation was returned in step 1, repeat steps 1-3, passing the continuation query parameter to Get Rows until all rows have been fetched, modified, and stashed.
  4. Finally, call this endpoint with the same stash ID used in step 2. This will overwrite the table with the updated data:
{
    "rows": { "$stashID": "20240215-job32" }
}
Risk of Data Loss
We strongly recommend you use the If-Match header to ensure the table is not modified between steps 1 and 4. See the Data Versioning guide for more information.
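Putting the four steps together, a rough client-side sketch might look like the following; the endpoint paths, response field names, and the source of the If-Match value are assumptions rather than a definitive implementation, so consult the Get Rows, Stash Data, and Data Versioning documentation for the exact shapes:
import requests

# Placeholders throughout: endpoint paths, response field names, and the source of
# the If-Match value are assumptions based on the steps above, not documented here.
BASE_URL = "https://api.example.com"
TABLE_ID = "your-table-id"
TOKEN = "your-api-token"
STASH_ID = "20240215-job32"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def modify(row):
    return row  # apply whatever per-row changes you need

continuation = None
serial = 0
version = None
while True:
    # Step 1: fetch a page of current rows; use a high limit to minimize requests.
    params = {"limit": 500}
    if continuation:
        params["continuation"] = continuation
    resp = requests.get(f"{BASE_URL}/tables/{TABLE_ID}/rows", params=params, headers=HEADERS)
    resp.raise_for_status()
    page = resp.json()
    version = version or resp.headers.get("ETag")    # assumed source of the If-Match value

    # Step 2: modify the rows and stash them under a single stash ID.
    rows = [modify(r) for r in page.get("rows", [])]
    requests.put(
        f"{BASE_URL}/stashes/{STASH_ID}/{serial}",   # hypothetical Stash Data path/shape
        headers=HEADERS,
        json=rows,
    ).raise_for_status()
    serial += 1

    # Step 3: keep looping while a continuation token is returned.
    continuation = page.get("continuation")
    if not continuation:
        break

# Step 4: overwrite the table from the stash, guarding against concurrent changes.
headers = dict(HEADERS)
if version:
    headers["If-Match"] = version
requests.put(
    f"{BASE_URL}/tables/{TABLE_ID}",
    headers=headers,
    json={"rows": {"$stashID": STASH_ID}},
).raise_for_status()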