Limits to the number of updates to a storage record
There appears to be a limit on the number of times a single storage record can be updated. In testing, after roughly 8K-14K updates to one item, further updates fail with the error: `{"code":413,"reason":"Error writing record metadata to Datastore.","message":"The record metadata is too big"}`
Steps to reproduce:
- Insert a storage record using PUT /storage; give it a kind, metadata, and data structures. Take note of the ID returned.
- Now update the record in a loop using PUT /storage. Use the ID from step 1 and increment a property on each update to keep track of the count.
- After a certain number of updates (between 8K and 14K in the tests done so far), the 413 error above is returned and the record is no longer updated. A minimal reproduction sketch follows this list.
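For reference, a minimal repro sketch in Python, assuming a client using the `requests` library. The base URL, auth header, exact payload shape, and the `id` response field are placeholders I am assuming for illustration; only the PUT /storage endpoint, the kind/metadata/data fields, the returned ID, and the 413 failure come from the steps above.

```python
import requests

BASE_URL = "https://example.com/api"            # hypothetical base URL
HEADERS = {"Authorization": "Bearer <token>"}   # hypothetical auth header

# Step 1: insert a record and note its ID.
record = {
    "kind": "limit-test",                        # illustrative kind
    "metadata": {"label": "update-limit-repro"},
    "data": {"counter": 0},
}
resp = requests.put(f"{BASE_URL}/storage", json=record, headers=HEADERS)
resp.raise_for_status()
record_id = resp.json()["id"]                    # assumed response field name

# Step 2: update the same record in a loop, incrementing a property
# to keep track of how many updates have succeeded.
for i in range(1, 20_000):
    record["id"] = record_id                     # assumed way of addressing the record
    record["data"]["counter"] = i
    resp = requests.put(f"{BASE_URL}/storage", json=record, headers=HEADERS)
    if resp.status_code == 413:
        # Step 3: after roughly 8K-14K updates the service rejects the write
        # with "The record metadata is too big".
        print(f"Failed at update {i}: {resp.text}")
        break
```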
Desired:
- The limit should be documented & published.
- If the limit is exceeded and cannot be raised to a larger value, options must exist to:
  - purge/export the older versions, AND
  - roll over the oldest versions.
Note: This issue was discussed with the Core Services & Storage team. As suggested by @kelham, I am creating a new issue here.