Build a plan for the full metadata validation against the schemas
Context
So far, the DDMS does not fully validate the WPC payload against the schemas.
Only basic validation takes place on the DDMS side. Since the DDMS proxies the payload to the Storage service, the latter does not thoroughly validate the payload against the schemas either. As a result, completely corrupted data can be saved.
The DDMS must have a protection mechanism and fully validate a payload against the schemas (for now, without integrity checking).
Moreover, when different schemas are registered in the Storage service (e.g. rafsddms:wks:RockSampleAnalysis:1.0.1 and osdu:wks:RockSampleAnalysis:1.0.1), the system must be able to manage the different kinds, even if the kinds' schemas have completely different sets of properties.
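To make the multi-kind requirement concrete, here is a minimal sketch of dispatching validation by kind. All names (`SCHEMAS`, `split_kind`, `schema_for`) and the two stub schemas are hypothetical, not from the codebase; the point is only that the schema is looked up per kind, so two kinds with the same entity type but different authorities can carry entirely different property sets.

```python
# Hypothetical per-kind schema registry; the real schemas would be
# fetched from the Schema service, not hard-coded.
SCHEMAS = {
    "rafsddms:wks:RockSampleAnalysis:1.0.1": {
        "type": "object",
        "required": ["SampleID"],
    },
    "osdu:wks:RockSampleAnalysis:1.0.1": {
        "type": "object",
        "required": ["SampleAnalysisTypeIDs"],
    },
}


def split_kind(kind: str) -> tuple:
    """Split 'authority:source:entity:version' into its four parts."""
    authority, source, entity, version = kind.split(":")
    return authority, source, entity, version


def schema_for(kind: str) -> dict:
    """Look up the registered schema for a record's kind, failing loudly."""
    try:
        return SCHEMAS[kind]
    except KeyError:
        raise ValueError(f"No schema registered for kind {kind!r}")
```

With this shape, adding a new kind is just registering its schema; the validation path itself stays unchanged.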
Scope
- Research JSON Schema usage for dynamic validation at runtime, as well as its alternatives
- Create a set of Jupyter notebooks to demonstrate the flow, as well as performance benchmarks
- Build a plan of implementation within #147 (closed)
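As a starting point for the research item, the sketch below is a toy recursive validator covering only the `type`, `required`, and `properties` keywords. It is not a real JSON Schema implementation; in practice a library such as `jsonschema` or `fastjsonschema` would be used, and the notebooks would benchmark things like their schema-compilation step against per-call validation.

```python
# Toy stand-in for a JSON Schema validator: supports only a tiny
# subset of keywords, enough to show the runtime-validation flow.
TYPE_MAP = {
    "object": dict,
    "array": list,
    "string": str,
    "number": (int, float),
    "integer": int,
    "boolean": bool,
}


def validate(payload, schema):
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    expected = schema.get("type")
    if expected and not isinstance(payload, TYPE_MAP[expected]):
        return [f"expected type {expected}"]
    # Check required property names (only meaningful for objects).
    for name in schema.get("required", []):
        if not isinstance(payload, dict) or name not in payload:
            errors.append(f"missing required property {name!r}")
    # Recurse into declared sub-schemas for present properties.
    for name, sub in schema.get("properties", {}).items():
        if isinstance(payload, dict) and name in payload:
            errors.extend(f"{name}: {e}" for e in validate(payload[name], sub))
    return errors
```

Returning a full error list rather than raising on the first failure mirrors how schema libraries report all violations of a payload at once, which is what the DDMS would need to surface to clients.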
Note
Validation code from the project can be plugged in ((!) note: it is not async code).
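Because the pluggable validation is synchronous, an async DDMS handler should offload it so it does not block the event loop. A minimal sketch using a thread pool (the `validate_sync` stub below is hypothetical, standing in for the project's actual validation routine):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Dedicated pool for CPU-bound/synchronous validation work, so the
# event loop thread is never blocked by it.
_pool = ThreadPoolExecutor(max_workers=4)


def validate_sync(payload: dict) -> bool:
    """Hypothetical stand-in for the project's synchronous validator."""
    return isinstance(payload, dict) and "kind" in payload


async def validate_async(payload: dict) -> bool:
    """Run the synchronous validator in the pool from async code."""
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(_pool, validate_sync, payload)
```

Whether a thread pool is acceptable (versus a process pool, if validation turns out to be CPU-heavy) is one of the questions the benchmarks should answer.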