
yeah, the exploit that just happened still would have been an issue 

I've found no information in the past to support this.

I found reference to MSSQL in some documentation somewhere and when I asked about it I was told "Oh that's not supposed to be in there, we'll get that cleaned up"

It's obvious that the limitations are with SQLite. I just can't believe them when they say they have instances with 200 GB databases or 30k endpoints running fine on SQLite in their hosted platform, when so many people in this thread alone are having the same issues. I've basically built out a brand-new server with a new database, on flash storage, and have the exact same issues we had before the rebuild.

I could go throw my SC instance on a PC with a proc overclocked to 5 GHz and PCIe SSDs that push 6,000 MB/s read/write, and I bet I'd still experience the SAME problems we have now.

Given the number of people I've seen post around the internet, and even those in this thread alone who have posted about having issues, it blows my mind that we have to resort to basically no logging to have a functional product.

And scrolling back up in this same thread, we see various people talking about unsupported setups using MSSQL and the performance being way better than what the rest of us get with SQLite; the downside is that when the product gets updated, it generally breaks stuff.

We now have a thread here that's been open for 5-6 years, and because they are now owned by ConnectWi$e, I doubt they'll ever add support for the on-premise product to use MSSQL, because ConnectWi$e wants to make money by getting people onto the hosted platform.

I mean, they claim they have customers with 30k endpoints and 200 GB databases working just fine in their hosted cloud platform, but the amount of I/O that would require, even on SSD, would be insane. The best part is I've had multiple tickets open over the last few months to troubleshoot performance issues and they can't figure it out. They told me repeatedly, "You need to have SC on flash storage," so I moved it off our Nimble to an all-flash storage array and it made minimal difference, and our DB is only 2.5 GB.

Otherwise, their solution must be to throw ridiculous amounts of money at it by buying/building an insane setup for their hosted platform to run on between compute, networking, and storage. Or they have functions/features that exist up there that don't exist for the on-premise product, and they aren't owning up to it.

When we had SC on our Nimble, out of the 30 VMs we have (including some that read/write to MSSQL DBs all day), ScreenConnect was our most I/O-intensive VM by far.

You might be able to do it via the API, but that's nothing I have any experience with personally.

Next steps for this from Control's end, developmentally, would likely be adding the ability to report on authentication failures/successes, as well as visualizing successful/failed logon attempts on the dashboard (with the ability to click through from the dashboard to the details you need/want).

Good luck!

Hey Steve, 

Not sure if I follow what it is you're trying to accomplish.

If you're looking to have this information easily accessible/searchable, my suggestion would be to utilize the syslog output from Control and pipe the logs into your SIEM. Potentially even set alarms for patterns of behavior.
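As a rough illustration of the "alarms for patterns of behavior" idea, here's a minimal Python sketch of the kind of rule a SIEM would run over those forwarded logs. The message format and field names (`LoginAttempt`, `result=failure`, `src=`) are hypothetical, not Control's real syslog schema; adjust the regex to whatever your SIEM actually receives.

```python
import re
from collections import Counter

# Hypothetical message format -- Control's real syslog fields will differ.
FAIL_RE = re.compile(r"LoginAttempt.*result=failure.*src=(?P<ip>\d+\.\d+\.\d+\.\d+)")

def failed_logins_by_ip(lines):
    """Tally failed-login syslog lines per source IP."""
    counts = Counter()
    for line in lines:
        m = FAIL_RE.search(line)
        if m:
            counts[m.group("ip")] += 1
    return counts

def ips_over_threshold(lines, threshold=5):
    """Return source IPs whose failure count meets the alarm threshold."""
    return {ip for ip, n in failed_logins_by_ip(lines).items() if n >= threshold}
```

In a real SIEM you'd express the same thing as a correlation rule (N failures from one source within a time window) rather than a script, but the matching logic is the same.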

This feature has been implemented in the newer stable release versions of Control.


Logging is available via the Audit tab.

Where are we at with this? It's 5 years old and still "Pending review".

Support for another DB engine like MSSQL or MySQL would be great; we'd have more flexibility and could get better performance as well. I've heard stories of people getting it working with MSSQL on the backend as-is, but the problem is that eventually a patch breaks it because something changes.

We're running an instance with ~10k agents now. It does alright, but we've got over 100 people with access to the platform, so it's easy to have a lot going on at once, and when a lot of people start cruising through the web UI running searches to find stuff, the delays to process those searches get taxing on the server.

It really is probably one of the better products out there, with TONS of potential. It sucks to see CW running it into the ground. Some of the OG support staff, the people who know the product well, are still with them, but I think there's been some turnover too, and I haven't a clue what that looks like for the dev team.

This is a pretty huge pain point for a lot of people: not having logging or any way to audit failed logons.

We utilize syslog data from SC to track all events, and those are retained within our SIEM for 400 days, but that doesn't include failed logon attempts, which we'd need in order to look for abuse and try to mitigate it.

The only way I can think of is to stick a WAF in front of the SC web portal to protect against credential stuffing and connections from IPs with poor reputation. Fortinet has a pretty slick WAF product, but there's just a lot of cost associated with it.
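For anyone without budget for a commercial WAF, the core credential-stuffing defense is just a sliding-window rate limit per source IP. Here's a minimal Python sketch of that logic (the thresholds are illustrative, not a recommendation), assuming you can put something like this in a reverse proxy or login handler in front of SC:

```python
import time
from collections import defaultdict, deque

class LoginThrottle:
    """Sliding-window throttle per source IP -- the kind of rule a WAF
    applies to blunt credential stuffing. Thresholds are illustrative."""

    def __init__(self, max_attempts=10, window_seconds=60):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(deque)  # ip -> timestamps of recent attempts

    def allow(self, ip, now=None):
        """Record an attempt from `ip`; return False once it exceeds the limit."""
        now = time.monotonic() if now is None else now
        q = self.attempts[ip]
        # Drop attempts that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False
        q.append(now)
        return True
```

A real WAF layers IP-reputation feeds and bot detection on top of this, but even a plain per-IP throttle at the proxy kills the bulk of automated stuffing.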