Unable to read data from the transport connection…TLS/SSL Problems in MIM

Hey y'all! It seems like MS are forcing TLS 1.2 or above on SMTP connections and other O365 web-based authentication services that connect to Azure. If you get an error like:

System.Web.Services: System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)

Then try applying these registry entries; they force TLS 1.2 or above for the 32-bit and 64-bit .NET v2.0.50727 runtime, which is what the MIM Portal uses. The later .NET versions seem to have this forced already. The same values also get applied as part of a hardening GPO, but its name escapes me at the moment.

[HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v2.0.50727]
"SystemDefaultTlsVersions"=dword:00000001
"SchUseStrongCrypto"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727]
"SystemDefaultTlsVersions"=dword:00000001
"SchUseStrongCrypto"=dword:00000001

MS have been phasing this in on SMTP connections via what they term a "speed bump".

FIM/MIM timeouts or SSL connection errors behind an Azure Load Balancer

Recently I had been setting up a load-balanced MIM Portal solution in the Azure cloud, and the setup seemed to work OK. After a while, though, I got reports that it was only working for some people. It turned out that if you were connected via Wi-Fi it worked fine, but if you connected via LAN it didn't... how bizarre! All the networks, gateways and site-to-site VPNs were set up correctly, and if you tried to access any of the portal servers directly it worked; only access via the Load Balancer was problematic.

The solution ended up being the network adapter setting "Large Send Offload", and I will explain why...

I collected traces on both the Azure server and my client PC. Initially I saw that the TCP 3-way handshake was successful, which means the port is open and packets are reaching the backend VM. I then noticed that, to complete the SSL handshake, my client PC sends a Client Hello but never receives a Server Hello.
When the Azure server's trace was checked, I noticed that the server does send a Server Hello, but with a payload larger than the 1350-byte MTU of the path... but why is that a problem?

Below is the architecture:

Azure Server -> Load Balancer -> Azure Gateway -> Source client PC

When the Azure server sends a payload greater than 1350 bytes, the Azure gateway sends an ICMP Destination Unreachable (fragmentation needed) message back, stating that the sender should decrease the payload size and send it again. That message is delivered to the Load Balancer, but the Load Balancer never forwards it to the backend VM, because it has no way of knowing which backend VM the message relates to; it always treats the packet as being addressed to itself. As a result, the backend VM never reduces its payload size and never re-sends the Server Hello to the gateway.
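
You can see the path MTU limit for yourself from the client PC with a couple of pings that set the Don't Fragment flag. This is just a quick sketch: 10.0.1.4 is a placeholder for one of the portal servers, pinged directly across the site-to-site VPN (which we already know works) rather than via the Load Balancer.

# Small ping payload with the Don't Fragment flag set: this should succeed
ping 10.0.1.4 -f -l 1200

# A payload above the ~1350-byte limit should come back with
# "Packet needs to be fragmented but DF set", or simply time out if the
# ICMP message is dropped along the way (the symptom described above)
ping 10.0.1.4 -f -l 1472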

Finally the Solution!

• Go to the network adapter
• Go to Properties
• Click Configure
• Go to the Advanced tab and disable the two options below (a PowerShell alternative is sketched after this list)

Disable "Large Send Offload"
Check the status of "Jumbo Frames" too, but this is usually disabled by default anyway
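
Here is the same change as a minimal PowerShell sketch, run on the backend portal VM. 'Ethernet' is an assumed adapter name, so check with Get-NetAdapter first, and be aware that changing offload settings can briefly bounce the NIC.

# List the adapters so you know which name to use
Get-NetAdapter

# Disable Large Send Offload (IPv4 and IPv6) on the portal VM's adapter
Disable-NetAdapterLso -Name 'Ethernet'

# Check the Jumbo Packet / Jumbo Frames setting; on most drivers it defaults
# to disabled (1514 bytes), which is what you want here
Get-NetAdapterAdvancedProperty -Name 'Ethernet' -DisplayName 'Jumbo*'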

After disabling those settings, everything started working. Apparently this mechanism is by design and can only be resolved by disabling the above two features, although Microsoft are reportedly working on making Azure Load Balancers accept and forward large packets.
