From: Matt Caswell
Date: Tue, 29 May 2018 14:58:47 +0000 (+0100)
Subject: Only set TCP_NODELAY if the protocol is TCP
X-Git-Tag: OpenSSL_1_1_1-pre8~102
X-Git-Url: https://git.librecmc.org/?a=commitdiff_plain;h=6712ba9323cd9dc550ae3cc258cb61b5b23dcd83;p=oweals%2Fopenssl.git

Only set TCP_NODELAY if the protocol is TCP

This doesn't apply if we're doing DTLS, or using UNIX domain sockets.

Reviewed-by: Rich Salz
(Merged from https://github.com/openssl/openssl/pull/6373)
---

diff --git a/apps/s_socket.c b/apps/s_socket.c
index f4264cd9ff..76f9289002 100644
--- a/apps/s_socket.c
+++ b/apps/s_socket.c
@@ -147,7 +147,7 @@ int init_client(int *sock, const char *host, const char *port,
 #endif
 
         if (!BIO_connect(*sock, BIO_ADDRINFO_address(ai),
-                         type == SOCK_STREAM ? BIO_SOCK_NODELAY : 0)) {
+                         protocol == IPPROTO_TCP ? BIO_SOCK_NODELAY : 0)) {
            BIO_closesocket(*sock);
            *sock = INVALID_SOCKET;
            continue;
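
The essence of the fix is the predicate guarding the TCP_NODELAY request:
checking the socket type (SOCK_STREAM) was too broad, because AF_UNIX stream
sockets are also SOCK_STREAM, while checking the protocol (IPPROTO_TCP)
correctly excludes both DTLS-over-UDP and UNIX domain sockets. As a minimal
sketch of the same idea in plain POSIX C (not part of the patch; the helper
name maybe_set_nodelay and the direct setsockopt() call are assumptions for
illustration, the patch itself goes through BIO_connect() with
BIO_SOCK_NODELAY):

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/*
 * Hypothetical helper, not from the patch: only request TCP_NODELAY when
 * the socket's protocol is actually TCP.  TCP_NODELAY is an IPPROTO_TCP
 * level option, so setting it on a UDP (DTLS) or AF_UNIX socket typically
 * fails (e.g. ENOPROTOOPT or EOPNOTSUPP, depending on the platform).
 */
static int maybe_set_nodelay(int fd, int protocol)
{
    int on = 1;

    if (protocol != IPPROTO_TCP)
        return 1;   /* not TCP: nothing to do, report success */

    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
                      (const void *)&on, sizeof(on)) == 0;
}

A caller that has the protocol from getaddrinfo()/BIO_ADDRINFO would invoke
it as maybe_set_nodelay(fd, protocol) right after connecting, mirroring where
BIO_connect() applies BIO_SOCK_NODELAY in init_client().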