this code was created in a REPL/ChatGPT/minidump-API/HITL session,
until I had something that "seemed to work". the present commit
is the result of rereading, refactoring for understanding, etc.
it's not a "pure refactoring" in the sense that it's behavior-changing,
but AFAICT for the better, e.g. "line 0" => just leave that out, and
many similar changes.
(the now-removed 'treat as pid' was hallucinated by the bot; the
taken-from-sentry version missed the guard against -1)
> The index of the thread that requested a dump be written in the
> threads vector. [..] If the dump was not produced as a result of an exception
> [..] this field will be set to -1,
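a minimal sketch of the missing guard described by the quoted docs; the
function and parameter names here are invented for illustration, not the
actual parsing code:

```python
def get_crashing_thread(requesting_thread, threads):
    # per the minidump docs quoted above, requesting_thread is -1 when
    # the dump was not produced as a result of an exception. without a
    # guard, threads[-1] would silently return the *last* thread.
    if requesting_thread == -1:
        return None
    return threads[requesting_thread]
```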
rather than try-and-recover, just look at the headers and decide what to
show (body/POST etc.). this avoids hard-to-reason-about situations where
either of those won't work because the other has already been executed;
in combination with reasoning about max size usage, the explicit solution
is simply easier to reason about.
further:
* makes api_catch_all one of the content_encoding-ready views.
* implements a max length for the ingest api view.
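a sketch of the "decide up front from the headers" idea; the function,
the callables, and the content-type list are illustrative stand-ins for
the actual view code:

```python
FORM_CONTENT_TYPES = ("application/x-www-form-urlencoded", "multipart/form-data")

def describe_request_data(content_type, get_post, get_body):
    # reading the parsed form data can consume the underlying stream and
    # make the raw body unavailable (and vice versa), so try-one-then-
    # fall-back is fragile. choosing explicitly, based on the
    # Content-Type header, avoids ever executing both.
    if content_type in FORM_CONTENT_TYPES:
        return get_post()
    return get_body()
```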
These tests were originally in what is now 1201f754e3,
but they were held back because they provide more information
to an attacker than strictly required.
The original (non-published, now published) commit message (which
goes with both the code and the tests) was:
As noted by @Cycloctane:
> The problem is that the infinite loop I was talking about is happening inside
> `brotli_generator`. Because brotli `decompressor.is_finished()` never returns
> True if the input is not valid brotli compressed data or is truncated. And
> `decompressor.process()` will keep returning empty bytes that won't be *yield*
> out, making the generator keep looping inside itself. `MaxDataReader` is not
> possible to limit it.
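a sketch of bounding the loop described above. the decompressor is
anything with the process()/is_finished() interface from the quote; the
function name and the limit are invented for illustration, not the
project's actual code:

```python
def bounded_decompress(decompressor, chunks, max_empty_rounds=1000):
    empty_rounds = 0
    for chunk in chunks:
        out = decompressor.process(chunk)
        if out:
            empty_rounds = 0
            yield out
        else:
            # on invalid or truncated input, process() can keep
            # returning b"" without is_finished() ever becoming True;
            # bound the number of no-progress rounds so the generator
            # cannot loop forever inside itself.
            empty_rounds += 1
            if empty_rounds > max_empty_rounds:
                raise ValueError("no decompression progress; giving up")
```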
this was exposed when dealing with things that potentially yield in
very big chunks (e.g. brotli bombs)
tests are now more directly on the GeneratorReader itself, rather than
integrated with particular generators-under-test.
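a sketch of what "testing the reader directly" looks like: feed it a
synthetic generator instead of a real gzip/brotli one. this minimal
GeneratorReader is invented for illustration; the project's class
differs:

```python
class GeneratorReader:
    # file-like read() over a byte-yielding generator (illustrative)
    def __init__(self, generator):
        self.generator = generator
        self.buffer = b""

    def read(self, size):
        # pull from the generator until we can serve `size` bytes,
        # or the generator is exhausted
        while len(self.buffer) < size:
            try:
                self.buffer += next(self.generator)
            except StopIteration:
                break
        result, self.buffer = self.buffer[:size], self.buffer[size:]
        return result

# a synthetic generator stands in for any generator-under-test:
reader = GeneratorReader(iter([b"abc", b"defg"]))
```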