
Conversion: validate records at the start using JSON schemas

Mykyta Savchuk requested to merge batch-spatial-location-array-500 into master

There is a bug: when SpatialLocation in a record is an array, storage returns a 500 on batch requests. Example record data:

"SpatialLocation": [
            {
                "SpatialGeometryTypeID": "Point",
....
]

There have been many similar bugs related to record geo data validation lately (e.g., misspelled field names that also resulted in 500 errors). The existing validation logic is not intuitive and is spread across the code as ad-hoc checks, so each fix meant adding another if-check in a different part of the code. To make this logic more robust and prevent future bugs, it was decided to refactor: move all validation into one place using the JSON schemas we already have, and remove the scattered checks. Also, since we already have a well-defined schema for the SpatialLocation geo attribute, I added validation for that case in addition to the existing generic geo attribute validation, which I also converted to a JSON schema (GenericGeoAttribute.json).
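
A minimal sketch of the consolidated fail-fast validation, assuming a Java service and the everit-json-schema library (the class name, schema resource path, and method are illustrative, not the actual implementation):

    import java.io.InputStream;
    import org.everit.json.schema.Schema;
    import org.everit.json.schema.ValidationException;
    import org.everit.json.schema.loader.SchemaLoader;
    import org.json.JSONObject;
    import org.json.JSONTokener;

    public class GeoSchemaValidator {

        private final Schema genericGeoAttributeSchema;

        public GeoSchemaValidator() {
            // Load the schema once at startup; GenericGeoAttribute.json is the
            // schema mentioned in this MR, the resource path is an assumption.
            InputStream in = getClass()
                .getResourceAsStream("/schemas/GenericGeoAttribute.json");
            JSONObject rawSchema = new JSONObject(new JSONTokener(in));
            this.genericGeoAttributeSchema = SchemaLoader.load(rawSchema);
        }

        // Called at the start of the conversion flow, so malformed geo data
        // (e.g. SpatialLocation given as an array where an object is expected)
        // is rejected up front instead of causing a 500 deeper in the code.
        public void validate(JSONObject geoAttribute) {
            genericGeoAttributeSchema.validate(geoAttribute); // throws ValidationException
        }
    }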

  • Changed the conversion logic to validate records against the schema at the start of the flow;
  • Removed the now-unnecessary validation checks;
  • On validation error, the message from the validation library is returned instead of predefined constant messages (see the sketch after this list);
  • Updated test data (unit and integration): the AnyCrsFeatureCollection schema requires that bbox, if present, must not be null, so a sample value was added.
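
The new error reporting (third bullet above) might then look roughly like this, reusing the GeoSchemaValidator sketch; ConversionPreflight and checkGeoAttribute are hypothetical names, not the actual code:

    import java.util.List;
    import org.everit.json.schema.ValidationException;
    import org.json.JSONObject;

    public class ConversionPreflight {

        private final GeoSchemaValidator validator = new GeoSchemaValidator();

        public void checkGeoAttribute(JSONObject geoAttribute) {
            try {
                validator.validate(geoAttribute);
            } catch (ValidationException e) {
                // getAllMessages() collects every violation the library found,
                // e.g. "#/SpatialLocation: expected type: JSONObject, found: JSONArray".
                List<String> messages = e.getAllMessages();
                // Surfaced to the client as a 400 instead of the old 500.
                throw new IllegalArgumentException(String.join("; ", messages));
            }
        }
    }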
