Commit a2f07e60 authored by Paal Kvamme

Update comments after analyzing some thread safety issues.

parent b3792277
@@ -569,8 +569,8 @@ ZgyInternalBulk::readToExistingBuffer(
// thread safe.
//
// * The cloud backend doesn't honor the parallel_ok argument.
-    //   While this would be a very good idea it is also difficult
-    //   rather difficult to implement.
+    //   While this would be a very good idea it is also rather difficult
+    //   to implement.
//
// * There is a risk of creating too many threads if the application
// is doing its own multi threading. Ideally the user should
@@ -583,7 +583,7 @@ ZgyInternalBulk::readToExistingBuffer(
// cloud. Which the setting here won't.
//
// * See comments in LocalFileLinux::xx_readv() and _deliverOneBrick().
-    //   Abd GenLodImpl::_calculate().
+    //   And GenLodImpl::_calculate().
}
/**
@@ -919,6 +919,7 @@ ZgyInternalBulk::_partsNeeded(
* The function will always be called to deliver one full brick. If the raw
 * data only contains a single sample then this is a constant-value brick;
 * otherwise there should be exactly bricksize-in-bytes available.
+ * After any decompression has been done.
* If the brick size is (1,1,1) (not even possible right now) then the
* distinction between constant-value and regular goes away.
*
@@ -932,19 +933,55 @@ ZgyInternalBulk::_partsNeeded(
* Another caveat: Even though result is reference counted, its buffer
* might end up set to "raw" which means it must not be used after the
* delivery function returns. Mitigated by adding a clear() member to
- * DataBuffer. THat doesn't help if somebody already has a raw pointer though.
+ * DataBuffer. That doesn't help if somebody already has a raw pointer though.
*
- * Thread safety: Maybe.
+ * Thread safety: Yes, but TODO-Worry this is not trivial to verify.
*
* Multithreading by having multiple read requests from the API layer is
- * safe, as those requests don't share any mutable data.
- *
- * TODO-Worry: Multithreading by having the low level xx_readv() deliver
- * the requested bricks in parallel MIGHT be thread safe. But beware of
- * overlapping requests. Also, is it technically allowed for multiple
- * threads to share the same output array even if they write to different
- * parts of it? If the array isn't 8-byte aligned (on x86_64) then the
- * answer is probably no. In general I am not sure.
+ * safe. Those requests don't share any mutable data. Specifically,
+ * "result" will be different for each read request.
+ *
+ * Multithreading by having the low level xx_readv() deliver the
+ * requested bricks in parallel is also supposed to be thread safe.
+ * The result parameter is shared between threads, but different
+ * threads will be writing to different parts of the buffer due to
+ * "bpos" being different (see notes 1 and 2). The raw pointer might
+ * also be referring to shared data, e.g. a cache (see note 3). But
+ * since the data will be destined for different parts of the result
+ * buffer it shouldn't be possible for the same bytes to be sent to
+ * different threads.
+ *
+ * Note 1:
+ *
+ * No two threads should be trying to write to the same location in
+ * result. But there are no guarantees of alignment. E.g., thread #1
+ * might write the first 7 bytes and thread #2 the next 7 bytes. The
+ * assumption is that if the hardware needs to do a read/modify/write
+ * due to the memory bus or cache line being wider than 8 bits (which
+ * they definitely are) then the hardware or the compiler is
+ * responsible for preventing a race condition. For C++11 this appears
+ * to be the case. C++11 1.7 [intro.memory]/p2, irrelevant note
+ * omitted:
+ *
+ *     A memory location is either an object of scalar type or a
+ *     maximal sequence of adjacent bit-fields all having non-zero
+ *     width. Two or more threads of execution (1.10) can update
+ *     and access separate memory locations without interfering
+ *     with each other.
+ *
+ * Note 2:
+ *
+ * In the lower layers there are situations where the same block can
+ * be requested twice in a single request. The reason is that the
+ * caller needs two regions in the file and padding causes the two
+ * to overlap. Once we get here the padding shouldn't be visible.
+ *
+ * Note 3:
+ *
+ * If xx_readv() was called with immutable_ok=true then there
+ * will be no race condition because "raw" will only be read from. And
+ * if immutable_ok=false, e.g. because of byteswapping, then "raw" needs
+ * to be a private copy of the data anyway.
*/
void
ZgyInternalBulk::_deliverOneBrick(
@@ -1759,22 +1796,10 @@ ZgyInternalBulk::writeRegion(
_metadata->ih().datatype()))
throw OpenZGY::Errors::ZgyUserError("Invalid data type in writeRegion");
-  // TODO-Performamce:
-  // Getting the range (for updating min/max), testing for all in-use
-  // samples being the same (for storing empty bricks), and downcast
-  // from float to storage (may skip clipping or isfinite() or both)
-  // are all somewhat related. The function to get the range might tell
-  // us whether the buffer has any NaNs or Inf or both. Depending on
-  // the answer the scale to storage might use short cuts. If there are
-  // no infinites, no nan, and range inside what the target can accept,
-  // the conversion might become a lot faster. Another tweak that is
-  // maybe less clean is to just go ahead and downcast without worrying
-  // about NaN/Inf and just hope they don't mess up too much.
+  // TODO-Performance: Combining range() and _scaleDataToStorage()
+  // might save some time.
if (!is_storage) {
// TODO-Performance: Multi-threading of _scaleDataToStorage().
// Need to figure out the cutoff where the buffer is too small and the
// overhead of OpenMP starts getting in the way.
SimpleTimer t1(*_ststimer, timers_on());
data = _scaleDataToStorage(data);
}