Add support for MariaDB/MySQL or another DB engine for scalability
jp10 7 years ago • updated by Cody Arnold 2 years ago • 40 • 3 duplicates
The weakest link seems to be the SQLite DB. Once the DB gets to 1GB size, the SC application becomes unstable.
PLEASE do this! The existing SQLite DB engine is a real problem. Even when the DB is small it's not great, and as it grows it becomes a nightmare. Try running 90-day advanced auditing: every time the server is restarted, SC does 30-60 minutes of DB maintenance before it's usable. Please also give us the option NOT to run DB maintenance on startup. Currently I'm trying to run an audit report over 30 days of basic auditing, and it dies because there's too much data.
I just had my session.db corrupt because I was trying to read the session data for reporting and the DB browser died. This NEVER happens with proper DB engines. Thankfully that file only contains a month of auditing data that my boss wants a report on by the end of the day... Nothing important then! :-(
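For what it's worth, one way to keep external reporting tools from ever writing to (and potentially corrupting) the live file is to open it read-only. A minimal sketch; the path is a placeholder and none of this reflects SC's actual schema:

```python
import sqlite3

def open_readonly(db_path: str) -> sqlite3.Connection:
    # Opening via a URI with mode=ro means any attempt to write
    # through this connection fails instead of touching the file.
    return sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
```

A reporting job using this connection can still read everything, but an accidental INSERT or UPDATE raises an error rather than modifying the live session.db.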
Definitely needed! Our installation is struggling with SQLite.
This is a definite need - we use 5.6 right now on a MSSQL server and it handles A LOT better than SQLite could ever do!
I will second that, R.peters. We use an MSSQL as well now and the performance difference is quite large. SQLite was always a bottleneck as the DB grew to thousands of sessions, and frequently gave us problems during SC upgrades. Many of our clients require us to send them scheduled reports that tell them which of our admins has accessed their machines, when, and why. MSSQL has allowed us to easily create SSRS audit reports to meet this need, and having this officially built-in would be fantastic.
There is a forum post about doing this, but AFAICT it's not officially supported or even documented:
This should be a high priority; the Linux version really would be much better off with MySQL.
If this is running colocated on the LabTech server, then there already is a MySQL server to work with.
The SQLite code just can't handle the load; I can't even view my audit logs because the job times out.
Really need this functionality!
We also need PostgreSQL database support, urgently!
Please allow for Microsoft SQL Server as a supported database platform for CW Control
This is LONG overdue for a production product that many, many companies rely on for great remote connectivity to clients. Even if the first step is just ability to use MS SQL, this would be a huge improvement. Plus, it's not that it wouldn't work with it as the queries are almost identical (evidenced by the fact that other customers have switched to it). However, as it's not officially supported, we do not want to risk any potential "out of support" issues to switch until we know that the scenario will be supported, with updates as well. MS SQL Express supports 10GB databases at this point, so even if SQL licensing is too much a burden, this is a no cost option. Please at least give us the option of choosing the built-in SQLite or a MS SQL server at minimum. Thanks!
The application gets slower the more unattended agents that we have. Please add MS SQL support!
We also get this problem. We have on-prem unlimited licences, but the database fills up quickly over time and can't cope. Periodically we need to manually purge data out of session.db.
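A periodic purge along these lines can be scripted; SC's real schema isn't documented here, so the table and timestamp column names below are stand-ins for illustration only:

```python
import sqlite3
import time

def purge_older_than(db_path: str, table: str, ts_col: str, days: int) -> int:
    """Delete rows older than `days` and reclaim file space.

    The table/column names are hypothetical; adapt them to whatever
    your own inspection of session.db shows.
    """
    cutoff = time.time() - days * 86400
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(f"DELETE FROM {table} WHERE {ts_col} < ?", (cutoff,))
        con.commit()
        # VACUUM rewrites the file so the deleted pages actually
        # shrink it on disk, not just get marked free.
        con.execute("VACUUM")
        return cur.rowcount
    finally:
        con.close()
```

Run it against a copy first, and only with the application stopped, since SQLite allows a single writer at a time.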
What database maintenance plan do you currently have set up?
This needs to be added to the product asap. The current structure is unmanageable with more than 30,000 access sessions. Retention is already set to only a few days. When the db file gets to 2ish GB the web app completely crashes out.
I am very surprised as well that this has yet to be implemented. SQLite is just that lite. "SQLite emphasizes economy, efficiency, reliability, independence, and simplicity" to quote directly from the product page. It is not meant to scale to large data. Large data is meant for a RDBMS (relational database management system) that can reindex, defrag, etc.
Can SQLite do everything I have mentioned? Sure, and depending on memory (RAM) constraints, it can either do it well or, as many have seen, crash hard. I would suggest that they make this change. Otherwise there are always other RATs in the world (non-viral).
** BUMPITY BUMP BUMP **
We're only running 1,000 agents, and every time I try to run an audit report for a 4-day period it times out and crashes the browser. It's impossible to run this report; I have to run it one day at a time. This NEEDS a proper DB. My vote's for MySQL, but honestly I don't care as long as it's not SQLShite, which is totally unsuited to operating at this scale. We'd like advanced auditing, but every time we've tried turning it on it makes the system unusable due to DB size.
Sadly, this product's major downfall seems to be the database. We, along with many others as I see, have issues with SQLite simply not being able to handle it. I personally would love to see a MySQL or PostgreSQL implementation on ScreenConnect to tighten up this product.
If ported to MySQL, the devs would need to take care that full backups are handled properly (given the lack of VSS). We recently went through hell with LabTech because their DB backup process is only partial, and we didn't fully understand the implications of the lack of VSS support (the restored server had a completely useless DB). SC needs to design something better, even if it's just dumping the WHOLE database to a file-per-table zip at a configured time.
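The file-per-table dump idea could be sketched like this. It only builds the `mysqldump` command lines rather than running them, and the database and table names are examples, not SC's schema:

```python
def dump_commands(database: str, tables: list[str], out_dir: str) -> list[str]:
    """Build one mysqldump command per table.

    --single-transaction takes a consistent snapshot of InnoDB tables
    without blocking writers, which matters for a live server.
    """
    cmds = []
    for t in tables:
        cmds.append(
            f"mysqldump --single-transaction {database} {t} > {out_dir}/{t}.sql"
        )
    return cmds
```

The resulting `.sql` files can then be zipped and shipped off-box on whatever schedule suits you.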
Need this as well. Any blip with that file and our company is dead in the water. My file gets to 2GB almost monthly, even with a maintenance plan in place.
Support for something else would be a huge help.
I still think this product needs this (vital for advanced auditing), but here are some steps that will help you if you're struggling with DB size issues. SQLite relies entirely on OS caching the file - there's no way to force it into RAM like with a DB engine.
1. Put the whole server on SSD, local NVMe for preference. You should probably use RAID but make your own priority judgement here. This should allow you to get away with less RAM, and just make the whole server fly.
2. If you've still got issues, throw RAM at the problem. I'd estimate the RAM you need is:
- RAM required by the OS at idle (after running for a few days; just take whatever GB it's using now)
- 1.2 x the size of your DB file
- at least 2GB extra
3. Consider whether you really need advanced auditing and/or to keep data for as long as you do now. Could you export more frequently and clear the DB down to keep the size down?
My rule for SC is if Windows resource monitor doesn't show at least 2GB of "free" (light blue - NOT the "available") RAM with the server under the highest load then you need to add more. Highest load may be when you're running reports etc.
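That rule of thumb reduces to a one-line calculation (inputs and result in GB):

```python
def estimate_ram_gb(os_idle_gb: float, db_file_gb: float) -> float:
    # OS idle usage + 1.2x the DB file size + 2 GB headroom,
    # per the sizing rule above.
    return os_idle_gb + 1.2 * db_file_gb + 2.0
```

For example, a server whose OS idles at 2GB with a 2.5GB session.db comes out at 7GB, which is roughly the size of box described further down.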
Disclaimer: some of what follows is probably not technically "true", but it's the best I can recall from the last time I had to touch this, combined in a way that emphasises how I worked around the issue.
Remember you won't see a process using the RAM the DB is using: it's using file cache RAM, which isn't shown as belonging to a process but is included in the dark blue section of the graph. If the server is short on RAM, the first thing the OS will sacrifice is the file caching you're relying on to speed up your DB access. Also check the Disk section of resource monitor while generating your "high load" scenario; if you see any significant read activity on the DB file, add RAM, because you want it able to serve all reads from cache. As a test, I just exported all our audit data, and all you could see on the server was CPU at 80% with no file reads. It was quick, and returned so much data that Chrome ran out of memory and killed the tab.
If that all sounds expensive: we run 7GB RAM, 2 vCores and a 50GB SSD in a public cloud for under GBP £20 (USD $25) per month, and this happily runs 1,500 agents as long as we don't turn on advanced auditing (which is suicidal in our experience). Shop around. If you want to use advanced auditing or keep the data for any length of time, size accordingly and hope.
Where are we at with this? 5 years old and still "Pending review".
Support for another DB engine like MSSQL or MySQL would be great: we'd have more flexibility and better performance as well. I've heard stories of people getting it working with MSSQL on the backend as-is, but the problem is that eventually a patch breaks it because something changes.
We're running an instance with ~10k agents now. It does alright, but we've got over 100 people with access to the platform, so it's easy to have a lot going on at once; when a lot of people start cruising through the web UI, running searches to find things, it gets taxing on the server because of delays processing the searches.
Instances in our cloud, running current versions of CTRL, are using SQLite and performing well. We have examples of instances of CTRL in our cloud running 30K+ agents, instances running 250+ concurrent connections, and instances with DBs over 200GB.
There are no plans to add additional DB options for the product at this time, but we'll continue to look for ways to improve the product to scale for Enterprise. At the moment the DB is not the best opportunity to improve performance.
It would be helpful to learn more about your on-premise setup, @cody. Feel free to reach out to CTRLPM@connectwise.com.
What sort of retention do you have for logs and extended logging? We require at a minimum 90 days of extended auditing retention and would like to store more than that. This tends to bloat the database, and is where a more resilient database engine would shine. Also, how are you handling uptime? With each instance only able to run on a single server with read/write access to the SQLite database file, what happens when that server goes down or the application has issues? You can't simply fail over to another node that points to the same data.
Mike - I have to disagree. Extended auditing was operationally useless for us, and support have told us the reason for this is excessive SQLite DB size. When we restarted the server, it took over an hour for SC to become usable and even then it was slow for three more hours. IMO SQLite is not suited to anything like 200GB. It's suitable for more like 200MB. Regardless of the maximum size listed on https://www.sqlite.org/whentouse.html - that doesn't mean it's a good idea to use it at that scale. The lack of caching being one obvious problem (relying on OS FS cache is not as performant as a DB engine taking nGB of RAM away from the OS and dedicating it to the DB table cache). Another is the single writer limitation.
While certainly SQLite makes the system easy to deploy and work with, and while you may say you don't have issues running it at scale it's very clear that many customers do have issues running it at scale (or at least think they do). If you're SURE all these issues aren't related to the DB engine, perhaps you'd like to explain what is causing them and how to resolve them, because support have told us it's a DB scaling issue?
Alex, I had a semi-decent conversation with Sean White over at ConnectWise who is a senior Product Manager for ConnectWise Control but unfortunately I don't think it went anywhere. I'm mainly after high availability which is near impossible with SQLite. There are an abundance of code changes required for HA to be possible due to the fact that the DB is loaded into RAM and thus can't read updated information written to the DB by another node, but a standby node referencing the same DB could be started upon failure of the primary node and theoretically would work fine. I'm exploring this option independently with the current SQLite database but having a central database that both nodes could talk to would be ideal so we don't need to store the SQLite DB on shared/network storage, or implement some type of database syncing.
With the reporting, I basically gave up trying to run queries against the database file. The few times I've done this, it resulted in the application crashing and, in two cases, database loss.
Max, I wish I had a suggestion for HA, but I don't have anything in my toolbox that I'd trust. Don't quote me on this - but I think I've seen it written that it's possible to change SC to another DB engine like PostgreSQL, but it's not supported. As you want HA, I'd assume you'll be wanting to stay within support from CW.
I also had a database die because I tried to export the auditing reports. Dead, server gone, toasted.
Insert Coin To Start Game. That server was allowing our support desk to service our clients. That was a fun couple of days - NOT! As I recall, support's response was to tell us to start again with a fresh install and not be stupid enough to try and export the auditing logs again.
I'm looking forward to hearing from Mike how querying the DB causing complete loss of the DB is not a DB engine issue. There seems to be a disconnect between how he thinks the DB engine behaves and what you, we, and SC support have seen in real life.
For the record - I'm not in favour of MS SQL due to license / limitations, and MySQL needs careful backup / restore handling due to lack of VSS support (as we found out on our LabTech DB) though I expect that is true of all / most of the open source options.
Alex, no worries. Yes there have been reports of successfully switching to MSSQL but as you and Sean noted, these are not supported configurations. The general response from support for any DB issue is to wipe and rebuild. I've been troubleshooting a memory leak issue and the FIRST suggestion was to dump the entire DB and start from scratch. Um.... how is that a solution, never mind the first suggestion.
I'm honestly in favor of any database technology that will get us the features we're looking for. I'm an Azure shop so being able to use Azure based SQL databases whether they be MSSQL or MySQL would be a game changer but I'm pretty much open to anything. As you noted, I believe most if not all open source database solutions are not VSS friendly, however MySQL also gives us options in the way of database HA such that we can take a database node down for backup every hour.
I'm curious to see if we can get some traction going on this. It's clearly something the community wants/needs and aside from the labor it will take to rewrite the necessary code, I see no drawbacks to a change in DB technology especially if the application defaults to SQLite and gives us the option of using a more resilient database engine.
There has been a lot of talk here about wanting a database such as MySQL or MSSQL instead of SQLite for performance and reporting, which are valid points, but what about redundancy and HA? We have been plagued by application-level issues lately which ultimately caused us to lose our 5GB database. Had this information been stored in MySQL or MSSQL, not only could the data have been preserved, but we could even have had a secondary server so we could continue supporting our clients while we fixed the application issues on the main server. If Control is to be a proper remote support application, we need to be able to ensure uptime regardless of whether the issue is server, application, or network related. This is simply impossible with the current database design. ConnectWise, please implement this.
Our solution to the DB issue was to copy the SQLite DB and all recordings (SC native) to S3 storage every 2 hours.
A secondary server then transcodes all the videos and gathers any relevant information from the SQLite DB into MySQL storage.
An interface for reporting, auditing, searching, and viewing video sessions was then created.
This makes the DB on the SC server irrelevant: at most we lose 2 hours of information on a breakdown, and we can limit the amount of data stored in SQLite by keeping only a few days of information.
We handle about 10,000-15,000 videos a day this way.
Obviously the above should not be necessary, but we had no faith in ConnectWise to create a similar solution, as we seem to be one of the few that actually record ALL sessions.
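The riskiest part of a scheme like this is the copy step itself: grabbing the raw file while SC is writing to it can yield a torn copy. SQLite's online backup API avoids that by producing a consistent snapshot even while the source is in use. A minimal sketch of just that step (the upload to S3 and the MySQL consolidation are left out, and the paths are examples):

```python
import sqlite3

def snapshot(src_path: str, dst_path: str) -> None:
    """Take a consistent copy of a live SQLite DB.

    Connection.backup copies pages atomically with respect to
    concurrent writers, unlike a plain file copy.
    """
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        src.backup(dst)
    finally:
        dst.close()
        src.close()
```

The resulting snapshot file can then be uploaded and queried freely on the secondary server without ever touching the live session.db.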
Nikolaj, would it be possible for you to share a little more detail and possibly some code around how you're doing this? I'm very interested. We can leverage Azure SQL Databases, Azure Blob Storage, and Azure WebApps to accomplish the same but a starting point would be fantastic!
Let's all be honest here. ConnectWise is not going to fix this because they want you to use their cloud hosting, which doesn't have this problem.
If it's true that their cloud hosting doesn't use SQLite, I would be surprised.
I mean, they claim they have customers with 30k endpoints and 200GB databases working just fine in their hosted cloud platform, but the amount of I/O that would require, even on SSD, would be insane. The best part is I've had multiple tickets open over the last few months to troubleshoot performance issues and they can't figure it out. They told me repeatedly, "You need to have SC on flash storage," so I moved it off our Nimble to an all-flash storage array and it made minimal difference, and our DB is only 2.5GB.
Otherwise, their solution must be to throw ridiculous amounts of money at it, buying or building an insane setup for their hosted platform across compute, networking, and storage. Or they have functions/features up there that don't exist in the on-premise product and they aren't owning up to it.
When we had SC on our Nimble, out of the 30 VMs we have (including some that read/write to MSSQL DBs all day), ScreenConnect was our most I/O-intensive VM by far.
We are in a similar situation. We have about 15K endpoints and we've had to set the DB maintenance schedules to delete pretty much everything after two days otherwise we start feeling a lot of lag. We have an all flash Nimble but even tried moving the VM to a local PCIe SSD for testing. Made no difference.
It's obvious that the limitations are with SQLite. I just cannot believe them when they say they have instances with 200GB databases or 30k endpoints running fine on SQLite in their hosted platform, when so many people in this thread alone are having the same issues. I've basically built out a brand new server with a new database, on flash storage, and have the exact same issues we had before the rebuild.
I could throw my SC instance on a PC with a processor overclocked to 5GHz and PCIe SSDs that push 6,000MB/s read/write, and I bet I'd still experience the SAME problems we have now.
Given the number of people I've seen post around the internet, and even those in this thread alone, who have reported issues, it blows my mind that we have to resort to basically no logging to have a functional product.
And scrolling back up in this same thread, we see various people talking about unsupported methods of using MSSQL and the performance being way better than what the rest of us get with SQLite; the downside is that when the product gets updated, it generally breaks things.
We now have a thread here that's been open for 5-6 years, and because they are owned by ConnectWi$e, I doubt they'll ever add support for the on-premise product to use MSSQL; ConnectWi$e wants to make money by getting people onto the hosted platform.