onSchemaError query parameter.
Updating the Schema
You can optionally update the table schema at the same time by providing a new schema in the JSON request body. If you do not provide a schema, the existing schema is used. When using a CSV or TSV request body, you cannot pass a schema. If you need to update the schema, use the onSchemaError=updateSchema query parameter, or stash the CSV/TSV data and pass a JSON request body referencing the stash ID.
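As a sketch, a JSON request body that supplies both a new schema and a stash reference might look like the following. The column-definition shape and the stash ID here are illustrative assumptions, not the API's exact format; consult the endpoint reference for the real schema structure.

```python
import json

# Both the schema shape and the stash ID below are illustrative
# assumptions; check the API reference for the exact column format.
body = {
    "schema": {
        "columns": [
            {"id": "name", "type": "string"},
            {"id": "age", "type": "number"},
        ]
    },
    "rows": {"$stashID": "20240601-full-dataset"},
}

payload = json.dumps(body)
```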
Examples
Clear table data
To clear all rows from a table, send a PUT request with an empty array in the
rows field:
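A minimal sketch of such a request in Python, using only the standard library; the endpoint URL and bearer token are placeholders you would replace with your own values:

```python
import json
import urllib.request

# Placeholder endpoint and token -- substitute your own values.
url = "https://api.example.com/tables/my-table"
token = "MY_API_TOKEN"

# An empty "rows" array clears all rows from the table.
body = json.dumps({"rows": []}).encode("utf-8")

req = urllib.request.Request(
    url,
    data=body,
    method="PUT",
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```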
Reset table data
If you want to reset a table's data with a small number of rows, you can do so by providing the data inline in the rows field (being sure that the row object structure matches the table schema). However, this is only appropriate for relatively small initial datasets (around a few hundred rows or less, depending on schema complexity). If you need to work with a larger dataset, you should utilize stashing.
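For instance, assuming a table whose schema has name and age columns (hypothetical names chosen for illustration), the JSON request body might be built like this:

```python
import json

# Hypothetical columns -- the keys of each row object must match
# your table's actual schema.
body = {
    "rows": [
        {"name": "Alice", "age": 30},
        {"name": "Bob", "age": 25},
    ]
}

payload = json.dumps(body)
```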
Reset table data from Stash
Stashing is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable pieces and upload them to a single stash ID. Then, to reset a table's data from the stash, use the $stashID reference in the rows field instead of providing the data inline:
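The request body then points at the stash rather than carrying the rows itself. A sketch, with a hypothetical stash ID standing in for the one your chunks were uploaded to:

```python
import json

# Hypothetical stash ID -- use the ID your chunks were uploaded to.
body = {"rows": {"$stashID": "20240601-full-dataset"}}

payload = json.dumps(body)
```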
Batch update table rows
This endpoint can be combined with the Get Rows endpoint and stashing to fetch the current table data, modify it, and then overwrite the table with the updated data:
1. Call the Get Rows endpoint to fetch the current table data. Pass a reasonably high limit query parameter, because you want to fetch all rows in as few requests as possible.
2. Modify the data as desired, and then stash the modified rows using the Stash Data endpoint.
3. If a continuation was returned in step 1, repeat steps 1-3, passing the continuation query parameter to Get Rows, until all rows have been fetched, modified, and stashed.
4. Finally, call this endpoint with the same stash ID used in step 2. This will overwrite the table with the updated data:
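The steps above can be sketched as follows. The helper functions are stand-ins for the real HTTP calls (Get Rows, Stash Data, and this endpoint); here they operate on in-memory data so the control flow — paging with a continuation, stashing each modified chunk under one stash ID, then overwriting — is runnable end to end:

```python
# In-memory stand-ins for the table and the stash service.
TABLE = {"rows": [{"id": i, "count": i} for i in range(250)]}
STASH = {}

def get_rows(limit, continuation=None):
    # Stand-in for the Get Rows endpoint: returns one page of rows plus
    # a continuation token when more rows remain.
    start = continuation or 0
    page = TABLE["rows"][start:start + limit]
    nxt = start + limit if start + limit < len(TABLE["rows"]) else None
    return page, nxt

def stash_data(stash_id, serial, rows):
    # Stand-in for the Stash Data endpoint: uploads one chunk to the stash.
    STASH.setdefault(stash_id, {})[serial] = rows

def overwrite_table(stash_id):
    # Stand-in for this endpoint, called with a $stashID reference.
    chunks = STASH[stash_id]
    TABLE["rows"] = [r for serial in sorted(chunks) for r in chunks[serial]]

stash_id = "batch-update-1"  # hypothetical stash ID
continuation, serial = None, 0
while True:
    rows, continuation = get_rows(limit=100, continuation=continuation)
    for row in rows:
        row["count"] += 1          # the modification we want to apply
    stash_data(stash_id, serial, rows)
    serial += 1
    if continuation is None:
        break

overwrite_table(stash_id)
```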