Known issue

TLS 1.3 seems to break ScreenConnect when using SSL on Mono

stylnchris 4 years ago updated by DaveE 2 years ago 136

TLS 1.3 seems to break ScreenConnect when using SSL on Mono.

Disabling it in Chrome/Firefox seems to be a quick fix; however, at some point I'm guessing it would be better for Mono to support TLS 1.3.

Version 6.5.16479.6613



Known issue

Hi all,

Sorry for the late response. Again, right now the best workaround is a reverse proxy.

I haven't worked on it myself, but I understand that this update would require a major effort. I'll talk to our developers who have worked on it to see how we might offer a better solution. (Will update here in the next 1-2 weeks.)

do you not have a bash shell?   

apt-get install bash ??

What version of ubuntu server?  

uname -a 

Also, I see a ^M, which means you're copying this via Windows and the script could contain foreign characters.

Download the script from GitHub using the Download ZIP option, then scp the script to your server rather than copying and pasting this...

Alright, so what I did now is:

I downloaded the ZIP file, uploaded it using cyberduck

Unzipped it, see:

ubuntu@remote01:~$ unzip sc_ssl_support-master.zip
Archive: sc_ssl_support-master.zip
creating: sc_ssl_support-master/
inflating: sc_ssl_support-master/README.md
inflating: sc_ssl_support-master/sc_ssl_script_v2

ubuntu@remote01:~/sc_ssl_support-master$ chmod +x sc_ssl_script_v2

ubuntu@remote01:~/sc_ssl_support-master$ mv sc_ssl_script_v2 sc_ssl_script_v2.sh

ubuntu@remote01:~/sc_ssl_support-master$ ./sc_ssl_script_v2.sh
-bash: ./sc_ssl_script_v2.sh: /bin/bash^M: bad interpreter: No such file or directory

Please tell me how to solve this, driving me nuts, thanks!

Linux remote01 4.4.0-137-generic #163-Ubuntu SMP Mon Sep 24 13:14:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

I was able to fix that issue using:
sed -i -e 's/\r$//' sc_ssl_script_v2.sh
Now I can run the script using sudo sh sc_ssl_script_v2.sh
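The ^M failure above comes from Windows CRLF line endings in the script; a minimal self-contained reproduction of the fix (the file name here is illustrative):

```shell
# A script saved with Windows line endings fails with
# "/bin/bash^M: bad interpreter"; stripping the trailing CR fixes it.
printf '#!/bin/bash\r\necho hello\r\n' > demo.sh
sed -i -e 's/\r$//' demo.sh     # same sed fix as above
head -n1 demo.sh                # interpreter line is now clean
```

dos2unix, where installed, does the same job.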

However, I got errors while running the script, and when I visit our ScreenConnect, HTTPS is not working.

Thanks for your assistance (below you can find the errors that I noticed while running the script)

invoke-rc.d: initscript nginx, action "start" failed.
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2018-10-03 07:42:26 CEST; 9ms ago
Process: 20298 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=1/FAILURE)
Process: 20293 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)

Oct 03 07:42:24 remote01 nginx[20298]: nginx: [emerg] bind() to failed (98: Address already in use)
Oct 03 07:42:24 remote01 nginx[20298]: nginx: [emerg] bind() to failed (98: Address already in use)
Oct 03 07:42:25 remote01 nginx[20298]: nginx: [emerg] bind() to failed (98: Address already in use)
Oct 03 07:42:25 remote01 nginx[20298]: nginx: [emerg] bind() to failed (98: Address already in use)
Oct 03 07:42:26 remote01 nginx[20298]: nginx: [emerg] bind() to failed (98: Address already in use)
Oct 03 07:42:26 remote01 nginx[20298]: nginx: [emerg] still could not bind()
Oct 03 07:42:26 remote01 systemd[1]: nginx.service: Control process exited, code=exited status=1
Oct 03 07:42:26 remote01 systemd[1]: Failed to start A high performance web server and a reverse proxy server.
Oct 03 07:42:26 remote01 systemd[1]: nginx.service: Unit entered failed state.
Oct 03 07:42:26 remote01 systemd[1]: nginx.service: Failed with result 'exit-code'.

Encountered exception during recovery:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/certbot/auth_handler.py", line 75, in handle_authorizations
resp = self._solve_challenges(aauthzrs)
File "/usr/lib/python3/dist-packages/certbot/auth_handler.py", line 126, in _solve_challenges
resp = self.auth.perform(all_achalls)
File "/usr/lib/python3/dist-packages/certbot_nginx/configurator.py", line 1048, in perform
File "/usr/lib/python3/dist-packages/certbot_nginx/configurator.py", line 858, in restart
nginx_restart(self.conf('ctl'), self.nginx_conf)
File "/usr/lib/python3/dist-packages/certbot_nginx/configurator.py", line 1118, in nginx_restart
"nginx restart failed:\n%s\n%s" % (out.read(), err.read()))
certbot.errors.MisconfigurationError: nginx restart failed:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/certbot/error_handler.py", line 108, in _call_registered
File "/usr/lib/python3/dist-packages/certbot/auth_handler.py", line 310, in _cleanup_challenges
File "/usr/lib/python3/dist-packages/certbot_nginx/configurator.py", line 1067, in cleanup
File "/usr/lib/python3/dist-packages/certbot_nginx/configurator.py", line 858, in restart
nginx_restart(self.conf('ctl'), self.nginx_conf)
File "/usr/lib/python3/dist-packages/certbot_nginx/configurator.py", line 1118, in nginx_restart
"nginx restart failed:\n%s\n%s" % (out.read(), err.read()))
certbot.errors.MisconfigurationError: nginx restart failed:
nginx restart failed:

You may need to stop your ScreenConnect service before running the script.

Verify that changes have been made to the ScreenConnect web.config file.

If changes were made, then try to run:

sudo service screenconnect restart

sudo service nginx start

Then it should be up and running.


You shouldn't have to restart the ScreenConnect service; once you modify web.config the service will automatically restart. It seems that CW/SC has some watcher for that file, and once it changes, it reloads itself.

I didn't know that changing the ports was enough, and as far as I can see, the documentation says nothing about restarting the service: https://docs.connectwise.com/ConnectWise_Control_Documentation/On-premises/Get_started_with_ConnectWise_Control_On-Premise/Change_ports_for_an_on-premises_installation

I think i stopped it before making the changes :)

Anyhow, you may need to change your script: have it start by changing the ScreenConnect port instead of that being the last thing it does.

If port 80 is used by ScreenConnect, you won't be able to start nginx on the same port.
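One way to check for that conflict up front is a quick local probe; this is a bash-only sketch (it uses bash's /dev/tcp pseudo-device, so no netstat is needed), not part of the script being discussed:

```shell
# port_in_use PORT: succeeds if something accepts TCP connections on
# 127.0.0.1:PORT (bash /dev/tcp pseudo-device).
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 80; then
  echo "port 80 busy - stop ScreenConnect before starting nginx"
else
  echo "port 80 free"
fi
```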

crap, valid point

I really wanted ScreenConnect to be the last thing I "touched," because if there were errors along the way I was hoping whoever ran it would be smart enough to Ctrl+C the script.

Perhaps I can get around this by stopping the screenconnect service first in the script?

The hard part is that not everyone installed the ScreenConnect service as "screenconnect"; some might have customized the name during the install.

what to do... open to suggestions...   

Perhaps I should instruct users to just shut down the ScreenConnect service before executing the script.


Posted version 3 of the script to include what we're talking about here...

Michael, would you mind reviewing it?

I like the changes you made. Just a side note: they may (hopefully) have a firewall that blocks port 8080, so they may not be able to see it remotely.

Perhaps you could check whether ScreenConnect is running as the first thing, and if you find it, stop it. If you don't find it as a running process, you can look for it in init.d as "screenconnect"; if it's there, you can safely begin. If not, ask for the name of the service and repeat the check with the new service name. If you still can't find it, exit the script with a message like "Seems you don't have ScreenConnect installed? Have you misspelled the service name?"

You may also give the option to append a parameter like --skip-shutdown-service, and then it won't look for a service :)
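The detection order suggested above could be sketched like this; the init.d path and service name are parameters purely so the logic is easy to dry-run (real use would pass /etc/init.d and the actual service name):

```shell
# detect_sc INITDIR NAME: report whether the service is running,
# merely installed, or absent, mirroring the check order proposed above.
detect_sc() {
  initdir=$1; name=$2
  if pgrep -x "$name" >/dev/null 2>&1; then
    echo "running"      # stop it before the script touches ports
  elif [ -e "$initdir/$name" ]; then
    echo "installed"    # present but stopped; safe to proceed
  else
    echo "not found"    # prompt for the real service name, or abort
  fi
}

detect_sc /etc/init.d screenconnect
```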

Error while running nginx -c /etc/nginx/nginx.conf -t.

nginx: [emerg] BIO_new_file("/etc/letsencrypt/live/<domain>/fullchain.pem") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/<domain>/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)

nginx: configuration file /etc/nginx/nginx.conf test failed

I get the error above, and the error below when running systemctl status nginx.service

It's not my field of expertise, but I found some references that sound like your issue:



The interesting part is whether there are any certificates at all.

Did you remember to run all commands with sudo?

Do you have a /etc/letsencrypt/live/ directory? Did you see whether Let's Encrypt pulled down a real certificate?

If Let's Encrypt didn't issue a certificate, that explains why nginx can't start; it's looking for the cert.

you can also try running this again -->  

certbot --nginx -d <hostname>      


certbot --nginx -d help.myservername.com 
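Before rerunning certbot, you can confirm whether a certificate was actually issued; the paths below follow certbot's default live/ layout, and the domain is the placeholder used above:

```shell
# cert_present LIVE_DIR DOMAIN: does a non-empty fullchain.pem exist?
cert_present() {
  [ -s "$1/$2/fullchain.pem" ]
}

if cert_present /etc/letsencrypt/live help.myservername.com; then
  echo "certificate found - nginx should be able to start"
else
  echo "no certificate - rerun certbot before starting nginx"
fi
```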

Everything as sudo, I checked /etc/nginx/sites-enabled/default and this is the content:

Seems like this has all been created by certbot, should I remove it again?

Glenn, one more thing, can you post your web.config?


cat /opt/screenconnect/web.config | grep -i Uri

<paste output here>



It's not a major effort. Upgrade the Mono you're using or move to .NET Core. That will fix the TLS issue as well as the random crash issue, which is also related to using a three-year-old version of Mono.

Not for nothing, but I believe them when they say it's a major effort; they probably have all kinds of custom "hooks" into that version of Mono, which would likely require a lot of rework and regression testing. You have to remember that ScreenConnect doesn't really come with SSL by default. DON'T GET ME WRONG, IT SHOULD, and it should provide nice integration with Let's Encrypt for free SSL certificates. It's not as simple as you're making it, though.

oh boy!

<add key="WebServerListenUri" value="http://+:80/">


<add key="RelayListenUri" value="relay://+:443/">


So your RelayListenUri is set to 443, which means all deployed unattended clients use port 443 to connect back to your ScreenConnect server. This is not an ideal configuration. Typically you want to move "WebServerListenUri" from 80 to port 8080 (when configuring a reverse proxy) so that both 80 and 443 are free for nginx (the reverse proxy). This also explains why you were not able to get a cert from Let's Encrypt and why nginx isn't firing up: the ports are in use. The hard part is that if you have unattended clients deployed out there, changing that port is not for the faint of heart. You can find documentation here: https://docs.connectwise.com/ConnectWise_Control_Documentation/On-premises/Advanced_setup/Change_the_relay_address_for_access_agents. I followed this procedure myself to change the default port of 8041 to another port in the lower range that isn't blocked by a lot of corporate firewalls. The bad part is that unless all your unattended clients are connected (and in the real world computers are sometimes shut down at night, or the agent is deployed to a laptop that isn't online at the moment, etc.), you will "miss" deploying this change to some of the clients, forcing you to reinstall them by hand anyway.

Depending on how many unattended clients/agents you have out there, it might be easier to rebuild the server from the ground up and then redeploy the agents, likely via a site visit or some other mechanism.

Alright, well, this is what ScreenConnect advised me to do when I set it up; not sure if it's somewhere on a page or what.

Can't I create two listen URIs, set the addressable URI to the new port, do a reinstall for all clients, and then remove :443 once I feel most of the devices are migrated?

I also found this:


But not sure if this would really help or just make it more complex.

Yes, you can do exactly what you're saying.

Thanks! Any advice on a port that would usually not be blocked?

I've had good luck with 554, but it can be a crap-shoot; some have better luck with 8080, which would just mean you'd have to run the SC web service on something else, 8040 even, then configure nginx to point to 8040 instead of 8080.

I'm using just port 443 on nginx to reverse proxy to another port on ScreenConnect, like 8041. I haven't touched port 80, so my ScreenConnect is listening on port 80 for the clients.

add key="WebServerListenUri" value=""

add key="RelayListenUri" value="relay://+:8041/"

add key="RelayAddressableUri" value="relay://+:80/"

in nginx i have the following:

server {

listen 443 default_server ssl;

location / {







and a lot of other stuff; I use a signed cert, so I don't have the same Let's Encrypt setup

it works perfectly for me :)

Michael, I'm pretty sure you have a misconfiguration here.

have you installed any unattended clients lately?

RelayAddressableUri shouldn't be in your config unless you're moving to that port; all reinstalls of the unattended client will happen with it thinking it needs to bind to port 80, which is wrong (imo). What is your relay port using now?

You may be right at some point, but my web service goes to 443, and relay from the agents goes to 80.

Though there is a connection to 8041; I don't know how that works, but it does.

I haven't tested it behind a firewall where the only open ports are 80 and 443. This is the documentation I used:


My firewall has 80,443,8041 and 8042 open.

I live with one error: the External Accessibility Check fails in Admin. I believe it is because of nginx.

So if you're on your SC box, you can do a netstat | grep -i 8041. I would expect you have a ton of connections to that IP. Again, you won't see any issues unless you are trying to build a new client from the web UI and then have that client installed on a remote server/PC. My guess is you will see no connections on 8042, and it doesn't seem like that one even needs to be open, because the port isn't listed anywhere in the config.
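For a concrete count rather than eyeballing grep output, something like this works; 8041 is the relay port from the config above, and the field positions match Linux net-tools netstat:

```shell
# Count ESTABLISHED TCP connections whose local port is 8041; expect
# roughly one per online unattended client. Prints 0 if netstat is
# absent or nothing matches.
netstat -tn 2>/dev/null | awk '$4 ~ /:8041$/ && $6 == "ESTABLISHED"' | wc -l
```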

If you see a ton of connections to 8041, I'd really remove the [add key = RelayAddressableUri] line from your web.config file. **Remember, it's a good idea to copy the whole file to another file for backup:

cp  /opt/screenconnect/web.config /opt/screenconnect/web.config.bak-20181005  

You'll run the risk of it trying to connect to port 80 in the future when/if you build a new installer, or if you do a "reinstall" on a client, say when you're upgrading ScreenConnect to a newer version.

nginx is likely proxying on ports 80 and 443, and port 80 in your SC config is there just to ruin your day (someday). All nginx is doing with port 80 is redirecting to 443.

Your unattended clients are connecting directly to the 8041 port (in your case); at least, this is what I believe, and netstat will prove/disprove it. Unattended clients don't technically use nginx at all; they're not being reverse proxied. All unattended traffic goes direct from the PC out there in the world to your server (SC claims to have some kind of encryption of its own here, though).

What we accomplish by using a reverse proxy with nginx is essentially to put a wrapper around the web server SC runs, so that when you log into that web server, or someone visits that page for support, no unencrypted information is passed over the Internet. Think of logging into the page in clear text, passing through hops on the Internet while you're out at a coffee shop helping someone with ScreenConnect: someone could spy on you along the way, or right inside the coffee shop itself, discover your password, and log in directly as you.

THAT'S WHY we SSL the webpage.

Another thing I should bring up is 2 factor auth -- https://docs.connectwise.com/ConnectWise_Control_Documentation/Get_started/Administration_page/Security_page/Enable_two-factor_authentication_for_host_accounts

I'd highly recommend implementing that as well.

Anyways hope that helps.    


I've tried your script a few times and all I get is 502 Bad Gateway NGINX 1.10.3.

My web.config file looks like this:

add key="WebServerListenUri" value="">
add key="RelayListenUri" value="relay://+:8041/">
add key="RedirectFromBaseUrl" value="http://*/">
add key="RedirectToBaseUrl" value="https://connect.companyname.com:443/">

ScreenConnect services start up correctly.

I'm running Ubuntu 16.04 LTS

Even during the script execution, when it gives 120 seconds to test http://hostname:8080, nothing rendered.

What could be incorrect here in my conf?

/var/log/nginx/error.log reports the following:

2018/11/15 23:42:16 [error] 13691#13691: *108 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: my_ip, server: , request: "GET / HTTP/1.1", upstream: "", host: "connect.companyname.com"
2018/11/15 23:42:16 [error] 13691#13691: *108 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: my_ip, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "", host: "connect.companyname.com"

ok, so mine is a bit different 

<add key="WebServerListenUri" value="http://+:8080/">


<add key="RelayListenUri" value="relay://+:8041/">


You need to remove these, as they were once needed when ScreenConnect/Mono handled HTTPS requests directly:

add key="RedirectFromBaseUrl" value="http://*/">
add key="RedirectToBaseUrl" value="https://connect.companyname.com:443/">

because NGINX is doing this redirect for you.   
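Removing those keys can also be scripted with sed; this is a sketch on a scratch file (for real use, point it at /opt/screenconnect/web.config and back that file up first, as suggested earlier in the thread):

```shell
# Build a scratch config and delete the two redirect keys that the
# reverse proxy now makes redundant.
printf '%s\n' \
  '<add key="WebServerListenUri" value="http://+:8080/">' \
  '<add key="RedirectFromBaseUrl" value="http://*/">' \
  '<add key="RedirectToBaseUrl" value="https://connect.companyname.com:443/">' \
  > web.config.demo
sed -i '/RedirectFromBaseUrl/d; /RedirectToBaseUrl/d' web.config.demo
cat web.config.demo   # only the WebServerListenUri line remains
```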

Essentially you're forwarding your traffic to NGINX rather than ScreenConnect. NGINX handles your HTTPS, and then NGINX talks to ScreenConnect directly over the loopback on a non-HTTPS port (in the above config, port 8080 HTTP), which is not exposed to the internet.
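That chain (browser to NGINX over HTTPS, NGINX to ScreenConnect over loopback HTTP) boils down to a server block along these lines; the hostname and certificate paths are placeholders, and the fuller config posted later in this thread covers the remaining hardening options:

```nginx
server {
    listen 443 ssl;
    server_name connect.example.com;            # placeholder hostname

    ssl_certificate     /etc/letsencrypt/live/connect.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/connect.example.com/privkey.pem;

    location / {
        # ScreenConnect's WebServerListenUri port, reachable only locally
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```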

Your RELAY PORT, RelayListenUri, is for the unattended clients to talk to your server, as well as anyone in an active session doing remote-desktop-type work. Do not change this port, as doing so will cause disruption.

Excellent, removing these two lines helped; there was also an AlternateListenUri listening on port 80 I had to remove:

add key="RedirectFromBaseUrl" value="http://*/">
add key="RedirectToBaseUrl" value="https://connect.companyname.com:443/">

But this all got it working finally.  Thanks for compiling that script, it's super helpful!

Happy to help. Unfortunately, I think I've realized it's impossible to script for every situation out there. The script sort of works best if you never set up SSL on ScreenConnect in the first place.

Maybe make a note of that in the script. We had a previous GoDaddy-issued SSL cert, already expired, installed into ScreenConnect.

Oh, one more thing: technically "a ton of connections" is somewhat misleading. You should have EXACTLY as many connections as you have unattended clients installed (out there) that are currently online in ScreenConnect, plus 1, 2, or 3 more if someone is supporting a session at the time you check netstat.

A coworker of mine investigated this and tried to determine root cause, below is his analysis:

As far as I can tell, the problem lies with the session ID. In the successful handshake, the browser sends a 0-length session ID and the server sends back a 0-length session ID. This is fine under the protocol and the handshake continues. However, the TLS 1.3 draft states that the client should always send a non-empty session ID in the ClientHello in order to resolve compatibility issues with TLS 1.3. The failed handshake shows this: the browser sends a 32-byte session ID, and the server sends the same session ID back to the client. This normally signals that the browser should resume an existing TLS session. As far as I can tell, the browser then fails to find such a session and throws an alert error. My conclusion is that it's one of two possible problems: 1) the ScreenConnect software is blindly mirroring the client's session ID in all cases, or 2) the software fails to forget TLS sessions that the browser closed, so it thinks it can resume a TLS session which the browser has already closed and invalidated.

From RFC 5246 (TLS 1.2):
The client sends a ClientHello using the Session ID of the session to
be resumed. The server then checks its session cache for a match.
If a match is found, and the server is willing to re-establish the
connection under the specified session state, it will send a
ServerHello with the same Session ID value. At this point, both
client and server MUST send ChangeCipherSpec messages and proceed
directly to Finished messages. Once the re-establishment is
complete, the client and server MAY begin to exchange application
layer data. (See flow chart below.) If a Session ID match is not
found, the server generates a new session ID, and the TLS client and
server perform a full handshake.

ClientHello - session_id
The ID of a session the client wishes to use for this connection.
This field is empty if no session_id is available, or if the
client wishes to generate new security parameters.

ServerHello – session_id
This is the identity of the session corresponding to this
connection. If the ClientHello.session_id was non-empty, the
server will look in its session cache for a match. If a match is
found and the server is willing to establish the new connection
using the specified session state, the server will respond with
the same value as was supplied by the client. This indicates a
resumed session and dictates that the parties must proceed
directly to the Finished messages. Otherwise, this field will
contain a different value identifying the new session. The server
may return an empty session_id to indicate that the session will
not be cached and therefore cannot be resumed. If a session is
resumed, it must be resumed using the same cipher suite it was
originally negotiated with. Note that there is no requirement
that the server resume any session even if it had formerly
provided a session_id. Clients MUST be prepared to do a full
negotiation -- including negotiating new cipher suites -- during
any handshake.

From the TLS 1.3 draft:
if the client sends a non-empty session ID,
the server MUST send the change_cipher_spec as described in this

Note: TLS defines two generic alerts (see Section 6) to use upon
failure to parse a message. Peers which receive a message which
cannot be parsed according to the syntax (e.g., have a length
extending beyond the message boundary or contain an out-of-range
length) MUST terminate the connection with a "decode_error" alert.
Peers which receive a message which is syntactically correct but
semantically invalid (e.g., a DHE share of p - 1, or an invalid enum)
MUST terminate the connection with an "illegal_parameter" alert.

illegal_parameter: A field in the handshake was incorrect or
inconsistent with other fields. This alert is used for errors
which conform to the formal protocol syntax but are otherwise

Now, I also noticed the following line in the TLS 1.3 draft which might be related as I could not rule it out. I mention this because the TLS 1.3 ClientHello includes a psk_key_exchange_modes extension, which the server seems to ignore.
Clients MUST verify that the server's selected_identity is within the
range supplied by the client, that the server selected a cipher suite
indicating a Hash associated with the PSK, and that a server
"key_share" extension is present if required by the ClientHello
"psk_key_exchange_modes" extension. If these values are not
consistent, the client MUST abort the handshake with an
"illegal_parameter" alert.

Please fix this so paying customers can use your product.

It seems that ConnectWise will end Linux support soon, given that they are ending macOS support due to the problematic Mono branch. They recommend transitioning from macOS to Windows Server or ConnectWise Control Cloud only, and they effectively do not recommend Linux either, because "many of the same Mono issues exist on Linux Server".

You can read the article here: https://docs.connectwise.com/ConnectWise_Control_Documentation/Technical_support_bulletins/Changes_to_macOS_on-premises_support

I don't believe that ConnectWise operates ConnectWise Control Cloud on a Windows platform, and it seems like they are forcing us to move to the cloud because it's much more expensive, and therefore more profitable for them. And I don't want to imagine what we would have to do, GDPR-wise, after a migration to the cloud.

It's sad, unbelievable, and unprofessional; ConnectWise has the worst reputation of all our partners.

Eric Davis (SDT) 1 month ago

Hi all,

Sorry for the late response. Again, right now the best workaround is a reverse proxy.

I haven't worked on it myself, but I understand that this update would require a major effort. I'll talk to our developers who have worked on it to see how we might offer a better solution. (Will update here in the next 1-2 weeks.)

Where's this update? Or is it time to tell my customers you don't care. 

Clearly they are trying to push people to a hosted (expensive) solution. Ever since ScreenConnect was bought out, it's been horrible support and little development.

I see this happen often, where the software ends up being "ghost ship" software.

I posted this 5 months ago:

"SC went with a custom fork of mono that is no longer developed... (We are really paying for someone's stupid decision to do that).

They need to dump MONO and go with native .NET."


We're practically in 2019 and you have no solution for your customers to employ SSL? You want us to run a reverse proxy? Put up step-by-step instructions, or integrate that with your existing product. You should be EMBARRASSED.

What will actually happen is they will drop Linux support like they dropped OSX support.


Honestly, if they drop Linux support it will be the last time I ever renew the license, and whatever version I end up on will be the last, type of thing. I can't imagine running Windows after using Linux servers for the past 10 years. I don't even run Windows domain controllers, thanks to Univention Corporate Server (UCS). Yes, Windows desktops are still in the environment, but I find it hard to move away from Windows as a desktop platform.

I hope ConnectWise/ScreenConnect really isn't thinking about moving away from Linux support. Honestly, I can't believe their hosted servers are running on Windows. Paying for a Windows Server license + this product? NO WAY!

Forget it.

The SSL thing to me isn't a big deal with a reverse proxy; it works and it's very stable. I'd rather have them keep working on and releasing updates for what we have than drop support for Linux.

I do agree though, that they really should have supported this out of the box, with LetsEncrypt integration and the whole bit.

Honestly, I'll make it work under newer Mono if they want to pay for my time.  Porting to .NET Core is likely a bit more work.


The ease of running under a Linux VM or on prem is precisely why I chose this to begin with. Dropping Linux support would be as stupid as not having a working SSL solution.

They just need to update to the latest version of Mono... the issue is not hard to fix...

It's ridiculous that this still hasn't been fixed. As of the latest version of Chrome on Android, I can't work around it in the flags, and the app doesn't work. How about fixing this?

How do you go about configuring NGINX in this fashion when certificates are already in place from the old configuration?

windows or linux?   what version of linux?

I tried running the script earlier today, but kept getting the same error as Glenn: https://control.product.connectwise.com/communities/6/topics/1691-tls-13-seems-to-breaks-screenconnect-when-using-ssl-on-mono#comment-6268

I tried just manually doing the steps from the tutorial and then pointing the NGINX config at the existing cert files; I also tried copying them into the default path from the instructions. Either way, NGINX wouldn't start because it said it couldn't find those files.

We already have a paid signed certificate, so do we have to set it all up and then re-register those certs?

ConnectWise has documented how to convert a PEM-format cert to a format usable within ScreenConnect, so no loss there. It will need to be installed into the Nginx reverse proxy in PEM format, though. Honestly, if the reverse proxy and ScreenConnect are running on the same host, you would just have Nginx listening on the public IP on both 80 and 443, then pass it off to localhost and have ScreenConnect listen to that instead. It will help performance because the host will not need to decode and re-encode traffic.

In short, don't use the TLS cert within ScreenConnect; install it into the reverse proxy. Also, don't use the script from the other thread; use the manual settings from this one, as people are reporting better luck. I've had ScreenConnect running for several months via the Nginx RP. It works great and sounds more painful than it really is.

I got it to work with the Tyler Woods post. The SSL Configurator that ScreenConnect provided originally deleted the original key, so all I had was the PVK-formatted one. I tried converting it but couldn't get it to work, so we just rekeyed the cert. Got it all working now.

Only downside is that the JNLP for supporting from a Linux system doesn't work properly, because it points to the internal port. I can make it work by downloading the .jnlp file and editing it with the correct address. I'd like to find a way to get that corrected by default, but it's not a huge problem.
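A hypothetical sketch of that manual .jnlp edit; both addresses are placeholders, since the real file embeds whatever your WebServerListenUri advertises:

```shell
# Fake launcher pointing at the internal port, then rewrite it to the
# public HTTPS address the proxy serves.
printf '<jnlp codebase="http://127.0.0.1:8080/Bin/"></jnlp>\n' > launcher.demo.jnlp
sed -i 's#http://127.0.0.1:8080#https://connect.example.com#g' launcher.demo.jnlp
cat launcher.demo.jnlp
```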

I got the reverse proxy working thanks to this thread. I wish ConnectWise had mentioned in their how-to guide that moving to HTTPS wouldn't work!

It's working fine, but the only issue I see is that when I create an on-demand session, the invite info contains the port for ScreenConnect behind the proxy (e.g. 54000, not 443), so the links I send don't work. Where is ScreenConnect picking that info up from, and can I change it? Thanks

This is all kinds of ridiculous. The average customer is supposed to wade through these posts and start messing with their configuration to run a reverse proxy? This needs to be addressed by ConnectWise or all my clients who use this will walk to the competitor.

I suspect they're hoping the average customer will move to their cloud platform....

Possible, but clearly we are not the "average customer" and our needs/requirements extend outside of the cloud platform.

OK, so I think it's ridiculous what this thread has become.

1) You're not going to go to another product, because you're all grandfathered into the prices, which are 90% less than other competitors' solutions. If you're going to do that, then go buy Bomgar's solution at 5k with yearly maintenance support.

I still agree that CW should have fixed this, but the reality is they are having a hard time because of the way they chose to fork Mono. It comes down to: do you really want new features and bug fixes, or do you want SSL working?

NGINX takes approx 10 minutes to put in front of this and handles SSL rather decently.

I've recently been playing with HAProxy, which basically does the same thing, only FAST. REALLY fast. The downside to HAProxy is you need to have a PURCHASED CERT.

ConnectWise.  Feel Free to delete this thread if you want or lock it off for future comments.  

If anyone needs help with SSL via NGINX Reverse Proxy or HAPROXY feel free to email me directly.  

Prerequisite: you must be running Linux (preferably Ubuntu 14-18, Debian, or a similar Debian-based platform).


I'd love to see a STEP BY STEP, literally a step by step for implementing this reverse proxy. I'm running Ubuntu 16.04 minimal.

It's already been outlined in this thread. Once you have ConnectWise installed and running, rebind the web server to port 10500. I bound a second global IP to the ConnectWise relay and have that listening on port 80 (relay traffic is inherently secure, so there is no need to waste resources encrypting it further).

Install Nginx and use the config below, editing the bold areas for your specific installation; your public IP goes in the empty values. I have two IPs, since I've run into some corporate firewalls locking down traffic over other ports. This keeps everything publicly aligned to 80 and 443.

web.config file...

<add key="WebServerListenUri" value="" />

<add key="RelayListenUri" value="relay://" />

<endpoint address="net.tcp://" binding="customBinding" contract="ScreenConnect.ISessionManagerChannel" />


<endpoint address="net.tcp://" binding="customBinding" contract="ScreenConnect.ISessionManager" />


/etc/nginx/conf.d/default.conf file...

server {
server_name connect.example.com;
return 301 https://$host$request_uri;
}

server {
listen default_server ssl;
server_name connect.example.com;

ssl on;

# ssl_certificate /etc/nginx/ssl/support.yourdomain.com.crt;
# ssl_certificate_key /etc/nginx/ssl/support.yourdomain.com.key;
ssl_certificate /etc/ssl/certs/wild_example_2019_combined.crt;
ssl_certificate_key /etc/ssl/certs/wild_example_2019.key;

ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;
keepalive_timeout 60;
## TLSv1 AND TLSv1.1 AND TLSv1.2;
# ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_protocols TLSv1.2;

# Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
ssl_dhparam /etc/nginx/ssl/dhparam.pem;

ssl_prefer_server_ciphers on;

# ssl_ecdh_curve secp521r1;

add_header Strict-Transport-Security max-age=15768000;

location / {
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;
client_max_body_size 50m;
client_body_buffer_size 256k;
proxy_connect_timeout 180;
proxy_send_timeout 180;
proxy_read_timeout 90;
proxy_buffer_size 16k;
proxy_buffers 4 64k;
proxy_busy_buffers_size 128k;
proxy_temp_file_write_size 128k;
}
}

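Before reloading, the dhparam file referenced by `ssl_dhparam` above has to exist, and it's worth letting nginx validate the config first. A quick sketch — the paths and hostname match the example config above, so adjust them to your install:

```shell
# Generate the Diffie-Hellman parameters referenced by ssl_dhparam
# (2048-bit generation can take a minute or two).
mkdir -p /etc/nginx/ssl
openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048

# Validate the config, then reload only if it parses cleanly.
nginx -t && systemctl reload nginx

# Sanity check from the server itself (connect.example.com is the
# placeholder hostname from the config above).
curl -skI https://connect.example.com/ | head -n 1
```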

Assumes you have  your own REAL cert.

Get your cert in PEM format, i.e.:

open all of your certs from whoever issued them in notepad or whatever and combine them in this order:



name the file so it ends in .pem, e.g. mycert.pem (probably not required, but too lazy to check if it is)
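As a sketch of that combine step (the filenames below are examples, not necessarily what your CA issues): haproxy's `bind ... ssl crt` expects one PEM bundle containing the server certificate first, then any intermediate/root certificates, then the private key.

```shell
# Example filenames only; substitute the files your CA actually issued.
# Order matters: server cert, intermediates/root, then the key.
cat your_domain.crt intermediate.crt root.crt your_domain.key > mycert.pem
chmod 600 mycert.pem   # the bundle now contains your private key
```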

as root:

apt-get install haproxy -y

mkdir -p  /etc/haproxy/certs

--> put your cert in the above directory

rm /etc/haproxy/haproxy.cfg 

vim or nano (your choice; nano if you're inexperienced) /etc/haproxy/haproxy.cfg


global
log local0 notice
maxconn 2000
user haproxy
group haproxy
tune.ssl.default-dh-param 2048
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
stats socket /run/haproxy/haproxy.sock mode 660 level admin
stats timeout 2m # Wait up to 2 minutes for input
ssl-server-verify none

defaults
log global
mode http
option dontlognull
retries 3
option redispatch
timeout connect 5000
timeout client 5m
timeout server 5m
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http

listen Web-Services
bind *:80
bind *:443 ssl crt /etc/haproxy/certs/
mode http
option httplog

redirect scheme https if { hdr(host) -i help.myscreenconnect.com } !{ ssl_fc }

stats enable
stats uri /stats
stats realm Strictly\ Private
stats auth <username>:<password> 

acl host_help hdr(host) -i help.myscreenconnect.com

use_backend ScreenConnect if host_help

backend ScreenConnect
balance roundrobin
option httpclose
option forwardfor
cookie JSESSIONID prefix
server node1 cookie A check
reqadd X-Forwarded-Proto:\ https
reqadd X-Forwarded-Port:\ 443


Grab the config between the <code> and </code> tags and copy it.

Paste it into the file.

Adjust the username and password on the stats line (stats auth blah blah).

Change help.myscreenconnect.com to whatever it is you've got going on.

Don't touch anything else (unless you would like to adjust the IP, rather than using the loopback, to say your box's internal IP).

verify that haproxy is good with your config:

haproxy -c -f /etc/haproxy/haproxy.cfg

(-c is check, -f is file)

Once the check passes, restart haproxy.

Now, if you get errors, or you're already using 80 and 443 for other things (mainly screenconnect), then you have to adjust
your web.config file.

This is where shit sorta gets interesting.

Typically, if you installed this and didn't change the default install path, it's...



web.config is there

make a backup of the file.

cp  /opt/screenconnect/web.config  /opt/screenconnect/web.config.backup_2019-03-25 

Yes, you should date the file even though unix will timestamp it anyway (it's a good habit).
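A sketch of the same backup with today's date filled in automatically (path taken from the backup command above):

```shell
# Append today's date (YYYY-MM-DD) to the backup name via command substitution.
cp /opt/screenconnect/web.config "/opt/screenconnect/web.config.backup_$(date +%F)"
```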

vim or nano /opt/screenconnect/web.config

look for a line like this:

<add key="WebServerListenUri" value="https://+:443/" />

change it to this:

<add key="WebServerListenUri" value="http://+:5000/" />

After all is said and done,

ScreenConnect will restart and be listening on port 5000. You can verify by going to <server's ip>:5000 in a browser, without HTTPS.

Now just restart haproxy to get it up and running again, now that the ports no longer conflict.
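Before that restart, it can be worth confirming from the shell that ScreenConnect really is answering on the new plain-HTTP port. A sketch, assuming port 5000 as configured above and that `ss` is available (use `netstat -tlnp` on older systems):

```shell
# Is anything listening on port 5000?
ss -tln | grep ':5000'

# Does it answer HTTP? Expect a status line such as "HTTP/1.1 200 OK".
curl -sI http://127.0.0.1:5000/ | head -n 1
```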




Thank you for this info and for the script you developed. I haven't applied haproxy, though I have used it before in a different environment.

I am having an issue I'm not sure how to tackle. I'm running nginx and screenconnect; I already had an SSL cert, so I was able to use the parts of your script that I needed.

the ssl portion is working and connecting to the server.

my issue is this

I get the 3 twirling things and the info never loads.

any ideas?

Thank you again for all your contributions

Thank you


OK, I finally got this working on CentOS (thanks @stylnchris!), but I really think this security should be built in to the product


It is absolutely ridiculous that this has been a "known issue" since March 26, 2018 and there's still no fix.

I contacted support and they suggested the reverse proxy setup described on here, but indicated it wasn't a setup they supported if something didn't work right. Another thing they suggested was to downgrade TLS... reducing security isn't a good solution.

Looks like this issue is finally going to be addressed. Got an email about upgrading/renewal prices about to go up. In the email is this:

Mono Updates: We're upgrading Mono for increased Linux security and sustainability. The updates will add support for more recent versions of TLS as well as a greater number of 64-bit Linux distros. You will also notice a general improvement in performance when connected to a large number of sessions.

I did not get this email...any word on the renewal price increase?

What is going on with the upgrade? I cannot use my product securely without putting some kind of reverse proxy in front of it.

Please release the update; even the beta has some problems. Right now, after finally getting SSL bound and working, I reset the password via the web.config, and now every time I run setup I get all the way to the end, and when I click Finish a timeout message comes up.

Anyone have this issue and if so how did you resolve it?

Thank you for reading this.

The new release 2019.4 does NOT fix this issue. The excuse given is the typical "it was too hard and we didn't want to delay this release". The problem is they forked mono... which now means 100% of the dev is on them... and since they are windows/cloud based, they REALLY do not want to update the Linux installs... and they REALLY do not want self-hosted installs any longer, either.

OK, so everyone is going to the cloud because it is easier. Well, I know one thing: when the cloud goes down, so does your business.

Money, money! Ah well.

By the way, for all those who want to host their own: use a reverse proxy. There is a way, I guarantee it!

The point is we shouldn't have to. That's another layer of software that has to be kept updated and another layer of complexity... this is not a hard problem to solve if SC really wanted to solve it. If they refuse to, I will have no choice but to do this; I really do not want to have to.

I did the setup, it took less than one hour, and the extra management really isn't there. I update the server regularly and it updates nginx at the same time.

I would recommend you spend the hour and have the system running smoothly again.

I'm not moving to the cloud with the price I pay yearly on my self-hosted server.


Michael, with respect, you are wrong.
Most of us are reasonably technical, but the key takeaway is that modern software should have security baked in, not as an optional extra and not as a 'roll your own' option.
It took me nearly 2 days to do the research, back up everything multiple times, plan a rollback strategy, and work out what another user meant in his post because there were NO official docs. I felt that the instructions could be improved so I made a blog post about it. This work potentially cost me $3000 because Connectwise aren't taking security seriously enough. 
Just because you did it in an hour doesn't make our arguments redundant.
I hope we get a version of Control soon that supports TLS 1.3 out of the box; it will help a lot of us.

You are absolutely right. They should fix it so the OnPremise version supports TLS 1.3!

You are also right in your point that there is no official documentation.

It seems they are trying to force us to the cloud which is - for me - a lot more expensive.

I'm not a million $ organisation, and I didn't take the time it took to investigate this into account.

My way of doing it was to take a snapshot of the server (my rollback plan), and then spend the time to install nginx and test that it's running.

Nonetheless, I totally agree with you that ConnectWise should fix this issue as a security priority, instead of spending time on design.

Cheers Michael, I apologise for being rude; this issue is very frustrating for all of us.

Any updates on this issue? 

They do have full support for Windows on-prem. That is what I am going to move to, as the release of the Chromium-based Edge is imminent... fewer moving parts that way...

William, while I don't blame you for moving to a product that is properly supported, I'd encourage on-prem Linux users to stay.
I recognise that updating to a newer version of Mono is a huge task, but allowing the platform to go from four different server options to one or two is hugely suboptimal. We need a usable Linux version.

Anyone know if this is fixed in the beta that just came out? 

Unfortunately, I cannot afford to wait. While some folks will run an nginx reverse proxy, that still means you are running a version that is not able to properly use TLS connections. It also means you now have to maintain the base OS + screenconnect + nginx and its associated components and configurations manually. No thanks. I do not have the time and do not wish to bear the costs of maintaining an unsupported configuration. Once chredge gets released, if the Linux version has not been released with a properly supported version of SC, then I have no choice but to move to Windows.

We have been using ConnectWise Control on Linux for years, and I understand your disappointment. The text below is NOT a defense of ConnectWise at all.

But I don't understand anyone who wants to run a service like ConnectWise Control on an on-premise Windows Server. I hope that you looked at the Microsoft license before you made that choice, the Product Terms especially, because you have to assign a CAL license to every device that will be connected to ConnectWise Control, or buy an External Connector to allow unlimited external devices and users. And be aware, you have to assign a CAL license to every internal user or device as well, even though you're using an External Connector.

So we have been running ConnectWise Control on Linux with an Apache proxy for more than a year. The OS (Ubuntu) and Apache (and all other packages) upgrade automatically, without any problems at all. Yes, we had to do some research at the start and invest a few hours to do it right, but it's the same with all Linux problems. And now a few of us here have created an easy manual and script to do it in a few minutes.
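For reference, a minimal sketch of the Apache variant mentioned above. This assumes Debian/Ubuntu, that ScreenConnect has been rebound to plain HTTP on 127.0.0.1:5000 as in the haproxy walkthrough earlier in the thread, and that the hostname and cert paths are placeholders:

```shell
# Enable the proxy and SSL modules (proxy_wstunnel for websocket traffic).
a2enmod proxy proxy_http proxy_wstunnel ssl

# Hypothetical vhost; hostname, cert paths, and backend port are examples.
cat > /etc/apache2/sites-available/screenconnect.conf <<'EOF'
<VirtualHost *:443>
    ServerName connect.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/connect.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/connect.example.com.key
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:5000/
    ProxyPassReverse / http://127.0.0.1:5000/
</VirtualHost>
EOF

a2ensite screenconnect
apachectl configtest && systemctl reload apache2
```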

I hope that ConnectWise does not "win" this situation by forcing every client to Windows, one by one, like you :(

I tried the 19.5 beta hoping to resolve this issue; unfortunately, SC was entirely non-functional on a CentOS 8 VPS. Mono was just running at 95% CPU for several hours. I could barely even get the login screen to come up.

The release notes for 19.6 state that ConnectWise Control will no longer be officially supported on distributions other than Ubuntu and Debian, though it may still be functional on them. CentOS is out.

I hope not either... the CALs I am not worried about, as I'll simply charge my clients for them... roll it into their fees.


I have upgraded to the pre-release/beta 19.6, and I am able to connect to my server using Chrome and Firefox again.


I upgraded my CentOS 8 VPS to the beta 19.6, and it's working great even without official support. Performance seems quite good, and the mono process isn't killing the CPU like before. I already have an nginx reverse proxy in front, so I haven't done anything with installing an SSL certificate directly on mono yet.
