Implement dynamic content-schema conversion for all data endpoints

Story 1: As a data definition specialist, I want to work with the widely used JSON Schema format when defining content schemas.

Story 2: As an API client for bulk data, I want to rely on automatic schema conversion.

Read flow: Content schemas are stored in a schema registry in JSON Schema notation. When an API client requests bulk data of some kind, the service pulls the schema from the registry (or uses a cached copy), converts it to a Python model, and builds the response according to the specific schema (+ version).
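A minimal, stdlib-only sketch of the "pull (or cache) and convert" step. The registry lookup, the `dif-lib` sample schema, and the helper name `get_model` are assumptions for illustration; in the real service the registry would be a remote call and the model could be a Pydantic class instead of a dataclass.

```python
# Hypothetical sketch: registry contents and names are assumptions, not the real service.
from dataclasses import make_dataclass, field
from functools import lru_cache
from typing import Optional

# Stand-in for the schema registry; in practice this would be a network call.
SCHEMA_REGISTRY = {
    ("dif-lib", "v1"): {
        "type": "object",
        "required": ["DifferentialLiberationTestID"],
        "properties": {
            "DifferentialLiberationTestID": {"type": "string"},
            "Pressure": {"type": "number"},
        },
    }
}

_TYPE_MAP = {"string": str, "number": float, "integer": int, "boolean": bool}

@lru_cache(maxsize=None)  # caching per (name, version), as the flow describes
def get_model(name: str, version: str):
    """Pull the JSON schema (or use the cached one) and convert it to a Python model."""
    schema = SCHEMA_REGISTRY[(name, version)]
    required = set(schema.get("required", []))
    fields = []
    for prop, spec in schema["properties"].items():
        py_type = _TYPE_MAP.get(spec.get("type", "string"), str)
        if prop in required:
            fields.append((prop, py_type))
        else:
            fields.append((prop, Optional[py_type], field(default=None)))
    # dataclass rules: fields with defaults must come after required ones
    fields.sort(key=len)
    return make_dataclass(f"{name}_{version}".replace("-", "_"), fields)
```

Because of `lru_cache`, repeated requests for the same schema (+ version) reuse the already-converted model rather than hitting the registry again.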

Write flow: Content schemas are stored in a schema registry in JSON Schema notation. When an API client sends bulk data of some kind, the service pulls the schema from the registry (or uses a cached copy), converts it to a Python model, validates the payload, and stores it according to the specific schema (+ version).
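A minimal sketch of the validation step in the write flow, producing the 422 error body shown in this ticket when mandatory parameters are missing. The helper name `validate_payload` is hypothetical; a production implementation would likely use a full JSON Schema validator rather than checking `required` alone.

```python
def validate_payload(payload: dict, schema: dict):
    """Return None if the payload satisfies the schema's required fields,
    otherwise a 422-style error body listing the missing mandatory parameters."""
    missing = [f for f in schema.get("required", []) if f not in payload]
    if missing:
        return {
            "code": 422,
            "reason": "Data validation failed.",
            "errors": {"Mandatory parameters missing": missing},
        }
    return None  # payload is valid; proceed to store it

# Usage with a schema fragment shaped like the dif-lib example:
schema = {"required": ["DifferentialLiberationTestID"]}
error = validate_payload({}, schema)
```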

Acceptance criteria:

#TBU

As part of this task, the mandatory-fields error for the /data endpoints should be fixed as well:

Accepted result (example for dif-lib):

{
    "code": 422,
    "reason": "Data validation failed.",
    "errors": {
        "Mandatory parameters missing": [
            "DifferentialLiberationTestID"
        ]
    }
} 

Edited Jun 16, 2023 by Siarhei Khaletski