
I've been stung by this multiple times because my stupid brain clicks Send Clipboard instead of Send C-A-D, and SC promptly starts dumping the entire contents of the clipboard. By the time I've clicked Send C-A-D it's still dumping it all into the password field, which blocks you from logging in. The only solution I've found is to restart the SC service on the remote machine.

You could use a script to make API calls to remove machines from this group, though we just do it manually when there's a number of them in there (about every 6 months for us). In fact, you don't even need the group - the script could query the API for each machine, check how long it's been offline, and if it's been offline long enough, call the API to end the session.
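A minimal sketch of that idea in Python. Note the record shape here (`session_id`, `last_seen`) and the commented-out URLs are placeholders I've made up, not ScreenConnect's actual API - check your instance's documentation for the real endpoints and field names; only the filtering logic is shown for real:

```python
from datetime import datetime, timedelta, timezone

def stale_sessions(machines, offline_days=180, now=None):
    """Return the session IDs of machines offline for longer than offline_days.

    `machines` is a list of dicts with hypothetical keys "session_id" and
    "last_seen" (a timezone-aware datetime, or None if never seen).
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=offline_days)
    return [
        m["session_id"]
        for m in machines
        if m["last_seen"] is not None and m["last_seen"] < cutoff
    ]

# The actual API calls would look something like this (placeholder URLs,
# not real SC endpoints):
#   machines = requests.get(f"{base_url}/api/sessions", auth=auth).json()
#   for sid in stale_sessions(machines):
#       requests.post(f"{base_url}/api/sessions/{sid}/end", auth=auth)
```

Keeping the "is it stale?" decision in a pure function like this also makes it easy to dry-run the script and print what it *would* end before letting it loose.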

We'd also find this useful. We have a fleet of Pis running Raspbian out on customer sites. We can rarely connect inbound to the network they're on, but they can connect out. We need to be able to access the GUI on the Pi for remote support and diagnostics.

If they're using LogMeIn, I don't know why you're posting here. 

For ScreenConnect, this information is in your Windows Application and Security event logs.
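If you want to pull those entries out programmatically, you can export them with `wevtutil` and parse the XML. A rough Python sketch - the provider name filter is something you'd need to confirm in Event Viewer first (it varies with instance branding), and this parser assumes you've wrapped the exported events in a single `<Events>` root element:

```python
import xml.etree.ElementTree as ET

# Windows event XML namespace (this one is standard).
NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def parse_events(xml_text):
    """Pull (timestamp, message) pairs out of exported Windows event XML.

    Expects the <Event> elements wrapped in one <Events> root, since
    raw wevtutil output is a bare sequence of elements.
    """
    out = []
    for ev in ET.fromstring(xml_text).findall("e:Event", NS):
        time = ev.find("e:System/e:TimeCreated", NS).get("SystemTime")
        data = [d.text for d in ev.findall("e:EventData/e:Data", NS)]
        out.append((time, " ".join(filter(None, data))))
    return out

# Export step on the server would be along the lines of (check the
# provider name in eventvwr first - it's branded per instance):
#   wevtutil qe Application /f:xml
#     "/q:*[System[Provider[@Name='ScreenConnect Client (...)']]]"
```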

As far as I know VNC doesn't log connections. 


Any MSP that's using LogMeIn, ScreenConnect, and VNC would worry me. There's no reason to have both LMI and SC, which suggests a lack of standardisation etc. If they're using LabTech (ConnectWise Automate) then unfortunately VNC cannot be blocked from installing, even though it hasn't been used for years.
See https://product.connectwise.com/communities/5/topics/12974-allow-prevention-of-installing-vnc-when-using-screenconnect 

ScreenConnect doesn't sit on top of a standard web server (such as Nginx / IIS / Apache) - it implements the web server as its own custom service. As such, the standard / known ways of configuring LE for standard servers aren't applicable. That means SC needs to take responsibility for building this support in, and I'd suggest they really should have already done this.

As you're not using a standard web server, support for LE should already be built in. 

That's quite a good idea! Though it might run into issues with customers who have it set to "Fred Bloggs | Some Long Jobname | Their Company Brand" - is that what then goes on their emails?

I still think this product needs this (vital for advanced auditing), but here are some steps that will help if you're struggling with DB size issues. SQLite relies entirely on the OS caching the file - there's no way to pin it into RAM like you can with a full DB engine.

1. Put the whole server on SSD, local NVMe for preference.  You should probably use RAID but make your own priority judgement here.  This should allow you to get away with less RAM, and just make the whole server fly. 

2. If you've still got issues, throw RAM at the problem. I'd estimate what you need is:

RAM used by the OS at idle (after running for a few days - just take whatever GB it's using now)
+ 1.2 x the size of your DB file
+ at least 2GB extra.

3. Consider whether you really need advanced auditing and / or to keep data for as long as you do now. Could you export more frequently and clear the DB down to keep the size under control?
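One gotcha with step 3: deleting rows from a SQLite file doesn't shrink it - the freed pages just go on an internal freelist until you run VACUUM. A quick standalone demonstration with Python's standard sqlite3 module (nothing SC-specific here; if you were doing this to a real SC database you'd stop the service first, never run it against a live file):

```python
import os
import sqlite3
import tempfile

# Build a throwaway DB with ~5MB of dummy audit rows.
path = os.path.join(tempfile.mkdtemp(), "audit.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE audit (entry TEXT)")
con.executemany("INSERT INTO audit VALUES (?)",
                [("x" * 1000,) for _ in range(5000)])
con.commit()
before = os.path.getsize(path)

# Deleting everything (e.g. after exporting it) leaves the file big...
con.execute("DELETE FROM audit")
con.commit()
after_delete = os.path.getsize(path)

# ...and only VACUUM rebuilds the file compactly.
con.execute("VACUUM")
after_vacuum = os.path.getsize(path)
con.close()

print(before, after_delete, after_vacuum)
```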

My rule for SC is if Windows resource monitor doesn't show at least 2GB of "free" (light blue - NOT the "available") RAM with the server under the highest load then you need to add more. Highest load may be when you're running reports etc. 
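The estimate in step 2 and the 2GB-free rule above, written out as a trivial calculator. The 1.2 multiplier and the 2GB figures come straight from this comment's rule of thumb, not from anything official:

```python
def recommended_ram_gb(os_idle_gb, db_file_gb, headroom_gb=2.0):
    """Rough RAM sizing per the rule above:
    OS idle usage + 1.2x the SQLite DB file size + at least 2GB spare."""
    return os_idle_gb + 1.2 * db_file_gb + headroom_gb

def needs_more_ram(free_gb_under_load, min_free_gb=2.0):
    """True if Resource Monitor's 'free' (light blue, NOT 'available')
    figure drops below 2GB with the server under its highest load."""
    return free_gb_under_load < min_free_gb
```

So an OS idling at 2GB with a 5GB DB file works out to about 10GB of RAM.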


Disclaimer - some of what follows is probably not technically "true", but it's the best I can recall from the last time I had to touch this, combined in a way that emphasises how I worked around the issue.

Remember you won't see a process using the RAM the DB is using - it's using file cache RAM, which isn't shown as belonging to a process, but is included in the dark blue section of the graph. If the server is short on RAM, the first thing the OS will sacrifice is the file caching you're relying on to speed up your DB access. Also check the Disk section of Resource Monitor while generating your "high load" scenario - if you see any significant read activity on the DB file, add RAM, because you want it to be able to serve all reads from cache. As a test, I just exported all our audit data and all you could see on the server was CPU at 80% with no file reads. It was quick, and returned so much data that Chrome ran out of memory and killed the tab.

If that all sounds expensive: we have 7GB RAM, 2 vCores & 50GB SSD in a public cloud for under GBP £20 (USD $25) per month, and this happily runs 1,500 agents as long as we don't turn on advanced auditing (which is suicidal in our experience). Shop around. If you want to use advanced auditing or keep the data for any length of time - size accordingly and hope.

We have two SC instances purely because we can't mix our support licenses with the access licenses we get via LT.

"in the event of an Active shooter scenario" - I think you may be trying to fix that problem the wrong way...