We didn't bother to test, as the functionality wasn't there. MySQL's geospatial functionality (and procedural capabilities) are still in their infancy and relatively unused. We did some testing of geospatial data against MS SQL Server some years back, and PG was far easier to set up and several times faster, as well as having a far richer set of functionality that is widely accepted in the geo community. If you have specialized or more complex DB requirements, such as complex geospatial or time-series functionality, then something like PG is going to work much better, as it has a well-proven and extensive set of add-on modules such as PostGIS.

Also, if you have functionality that requires custom DB functions, PG is well ahead here, with its own well-tested procedural language, as well as allowing these functions and procedures to be written in C, Python, and many more.

My take on MySQL is that it works great for standardized ORM-style queries, which is what it was originally designed for: a relatively lightweight DB for object-oriented developers who couldn't be bothered with back-end database intricacies.

We use PG for things other than Traccar that MySQL just can't do, such as heavy lifting of geospatial data, functions, and queries, as well as a fair bit of JSON data, in our own system, of which Traccar is one of the feed systems, so we are really stuck with using PG.

Postgres is also amenable to tuning, maybe far more so than MySQL, which may explain some of the performance differentials observed. We are running PG13 at the moment; there have been some performance improvements.

This is a longish answer but, with DBs, one solution / size doesn't fit all.
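To make the PostGIS and procedural-language points concrete, here is a minimal sketch (table and column names are hypothetical, not from any actual Traccar schema): a radius query using PostGIS, and a small custom function written in PL/pgSQL, PostgreSQL's built-in procedural language.

```sql
-- Hypothetical schema: a positions table with a PostGIS geometry column (SRID 4326).
-- Find all positions within 500 metres of a point; casting to geography
-- makes ST_DWithin work in metres rather than degrees.
SELECT id, device_id, fix_time
FROM positions
WHERE ST_DWithin(
        geom::geography,
        ST_SetSRID(ST_MakePoint(-0.1276, 51.5074), 4326)::geography,
        500);

-- A simple custom function in PL/pgSQL: distance between two geometries in km.
CREATE OR REPLACE FUNCTION distance_km(a geometry, b geometry)
RETURNS double precision AS $$
BEGIN
    RETURN ST_Distance(a::geography, b::geography) / 1000.0;
END;
$$ LANGUAGE plpgsql;
```

The same function could equally be written in C or PL/Python (via the plpython3u extension) if it needed to do heavier lifting; MySQL has no comparable range of server-side languages.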