
Wait for the full PDU before parsing#82

Merged
eigenein merged 1 commit into python-smpplib:master from Lynesth:patch-1
Jun 4, 2019

Conversation

@Lynesth (Contributor) commented May 25, 2019

After playing with and testing the lib, I noticed that under "stress" (sending around 10 SMS/s and receiving the DLRs on the same connection) I encountered failing PDUs out of nowhere.

I pinned the issue down to the fact that the current implementation expects all the data of a PDU to already be in the buffer, and sometimes it isn't. When that's the case, it retrieves less data than the expected length, which then messes up all following PDUs (if it doesn't raise an exception before that).

I first tried using the MSG_WAITALL flag, but that didn't seem to work (probably something platform-dependent), so I wrote the little loop in this PR, which fixes everything on my side.

Let me know if I missed something, or if you need more information.
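To illustrate the idea, here is a minimal sketch of that kind of loop over a plain socket; the helper and function names are illustrative, not the library's actual code:

```python
import struct

def _recv_exact(sock, length):
    """Call recv() repeatedly until exactly `length` bytes have arrived.

    recv() may legitimately return fewer bytes than requested, so a
    single call is not enough when a PDU spans several TCP segments.
    """
    chunks = []
    remaining = length
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:
            # The peer closed the connection before the full PDU arrived.
            raise ConnectionError("socket closed mid-PDU")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

def read_pdu(sock):
    # The first 4 bytes of an SMPP PDU hold the total command_length.
    raw_len = _recv_exact(sock, 4)
    (length,) = struct.unpack(">L", raw_len)
    # Read the rest of the PDU; without the loop, a slow link can hand
    # back a truncated buffer and corrupt every following PDU.
    body = _recv_exact(sock, length - 4)
    return raw_len + body
```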

@code-of-kpp (Member) left a comment


Unfortunately, some more changes are required. We support non-blocking mode by calling this function just once when there is something to read from the socket.
So the logic should be modified a bit to avoid the blocking `while len(...)` loop; this can be done later, in a separate PR, after this one gets merged.
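One way to drop the blocking loop later, sketched here only to illustrate the idea (the class name and interface are hypothetical, not the library's API), is to buffer partial reads and emit PDUs only once they are complete:

```python
class PduBuffer:
    """Accumulate bytes across read events and yield only complete PDUs.

    The caller feeds whatever a single non-blocking recv() returned,
    possibly a partial PDU, and gets back zero or more whole PDUs, so
    no code ever has to block in a loop waiting for the rest.
    """

    def __init__(self):
        self._buf = bytearray()

    def feed(self, data):
        self._buf.extend(data)
        pdus = []
        while len(self._buf) >= 4:
            # command_length is the big-endian 32-bit prefix of each PDU.
            length = int.from_bytes(self._buf[:4], "big")
            if length < 4:
                raise ValueError("malformed PDU length: %d" % length)
            if len(self._buf) < length:
                break  # incomplete; wait for the next read event
            pdus.append(bytes(self._buf[:length]))
            del self._buf[:length]
        return pdus
```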

@eigenein (Collaborator) commented

Should we merge it anyway?

@Lynesth (Contributor, Author) commented May 28, 2019

Hmmm, I'm genuinely not sure I get it.

If I understand correctly, the `raw_len = self._socket.recv(4)` call that retrieves the length of the next PDU blocks until it receives something or until the timeout is reached (at which point it stops blocking and raises a timeout exception), and my changes don't modify that behavior, right?

Then, once we have the length of the next PDU, we need all of it to keep working anyway, so we might as well just wait for it, no?

I'm not sure what you would like to do instead.
I thought of checking the length of the returned PDU in `read_once` and, if it is not correct, waiting until we get the rest, but wouldn't that amount to the same thing? And it would not work for `_bind` and `unbind`, since they call `read_pdu` directly.

Maybe a failsafe to make sure we don't get stuck in that loop forever would be a good addition, though (it would mean that the connection is not broken, but we have stopped receiving anything at all).
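As a rough illustration of such a failsafe (purely a sketch, with a hypothetical helper name and an arbitrary timeout, not the library's code), the loop can be bounded by an overall deadline:

```python
import time

def _recv_exact_with_deadline(sock, length, timeout=5.0):
    """Read exactly `length` bytes, but give up after `timeout` seconds.

    The deadline bounds the whole loop, so a peer that stops sending
    mid-PDU cannot trap us in the loop forever.
    """
    deadline = time.monotonic() + timeout
    data = bytearray()
    while len(data) < length:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError("gave up waiting for the rest of the PDU")
        sock.settimeout(remaining)  # cap each individual recv() as well
        chunk = sock.recv(length - len(data))
        if not chunk:
            raise ConnectionError("connection closed while reading a PDU")
        data.extend(chunk)
    return bytes(data)
```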

@eigenein (Collaborator) commented

Yes, I also vote for merging this. It doesn't look like it breaks any interface. Actually, it is documented that `recv` may return fewer bytes than requested, and the PR follows the advice from the docs.

eigenein merged commit ade431e into python-smpplib:master on Jun 4, 2019
@eigenein (Collaborator) commented Jun 4, 2019

@podshumok I'm merging this. If you still have objections, please raise a concern and I'll revert or fix it.

@stefanruijsenaars (Contributor) commented

> If I understand correctly, the `raw_len = self._socket.recv(4)` call that retrieves the length of the next PDU blocks until it receives something or until the timeout is reached (at which point it stops blocking and raises a timeout exception), and my changes don't modify that behavior, right?

`socket.recv(4)` might occasionally get only 1, 2, or 3 bytes, in which case the library currently raises a "Broken PDU" error, so we might want to wait for the full 4 bytes there as well. See:

https://stackoverflow.com/questions/59017085/socket-recv-receives-less-bytes-than-expected
https://stackoverflow.com/questions/2295737/case-when-blocking-recv-returns-less-than-requested-bytes
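A minimal sketch of what waiting for the full 4-byte header could look like (assuming a plain socket and a hypothetical helper name, not the library's actual wrapper):

```python
import struct

def read_command_length(sock):
    """Read the 4-byte command_length header, tolerating short reads.

    A bare sock.recv(4) can return only 1-3 bytes under load, which is
    where a "Broken PDU" style error would come from, so keep reading
    until all 4 header bytes are in hand.
    """
    header = b""
    while len(header) < 4:
        chunk = sock.recv(4 - len(header))
        if not chunk:
            raise ConnectionError("connection closed while reading PDU length")
        header += chunk
    return struct.unpack(">L", header)[0]
```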

@eigenein (Collaborator) commented

@stefanruijsenaars Yeah, actually, you're right. We need to take that into account. I wonder how it works now… 🤔
