Managing databases used to be about micro-managing a small number of powerful database servers, with DBAs concentrating on performance optimizations to squeeze the last drop of power out of those systems, like hand-building a single car to win a race. Now you have hundreds - or even thousands - of servers instead of a handful of machines, often in the cloud or outsourced to hosters, and you’re running multiple database engines across a variety of platforms. And as the complexity increases, so does the risk, because it’s no longer a single database powering a single application, but database systems that large numbers of business processes depend on.
Managing databases effectively today is about co-ordination and efficiency, not just performance. Instead of that performance-tuned race car, you’re running an entire mass transit system of buses, trains and trams to move a whole city’s worth of people around efficiently.
To do that, you need database management tools built to tackle those problems – tools that give you a holistic view of your databases, so you can improve DBA productivity by managing the whole environment rather than a single database.
It’s not only DBAs who care about managing databases either; operations managers and IT managers have just as much responsibility for the environment those databases run in – and the resources they consume. dbWatch gives you the tools to improve efficiency and productivity for all these roles.
Making best practices better
Databases usually run happily with only a little maintenance, but you do need to run backups, truncate transaction logs and reorganize and rebuild indexes on the most heavily used tables. Even large IT departments tend to leave too much of this to manual processes, which means relying on overworked, fallible humans.
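The shape of that routine maintenance is the same on every engine, only the commands differ (BACKUP DATABASE and ALTER INDEX ... REBUILD on SQL Server, RMAN and ALTER INDEX ... REBUILD on Oracle). As a minimal illustration – using SQLite purely because it ships with Python, not because it is what dbWatch does – a nightly job boils down to:

```python
import sqlite3

def run_maintenance(db_path, backup_path):
    """Nightly-style maintenance sketch: take an online backup, then
    rebuild indexes. SQLite stands in here for any engine; on SQL Server
    or Oracle the same routine uses that engine's native commands."""
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    with dst:
        src.backup(dst)        # online backup, no downtime
    src.execute("REINDEX")     # rebuild all indexes
    src.close()
    dst.close()
```

The point is not the two statements; it is that the job runs unattended, on schedule, against every database that needs it.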
Depending on the version and edition of your database engine, the kind of data you store in your tables and whether you’re taking a full or differential backup, some of that can be done online; but with a different version or edition, some maintenance means stopping services. Automating that means the right policy always gets applied, and the maintenance gets done at the right time, when the systems aren’t under heavy load, to avoid unplanned downtime. dbWatch automatically scans subnets for new database instances that need to be managed and monitored. As part of that process, newly discovered instances are added to the backup set; when customers first start using dbWatch, it’s not uncommon to find ‘new’ databases that have been running for several months with no backup.
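Discovery of that kind typically comes down to probing the well-known listen ports of each database engine across a subnet. A simplified sketch of the idea (this is not dbWatch’s actual implementation; the ports and address range are illustrative):

```python
import socket

# Default listen ports for common database engines (illustrative)
DB_PORTS = {1433: "SQL Server", 1521: "Oracle", 5432: "PostgreSQL", 3306: "MySQL"}

def probe(host, port, timeout=0.5):
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_subnet(prefix, ports=DB_PORTS):
    """Probe every address in a /24 for database ports; yield candidates."""
    for last_octet in range(1, 255):
        host = f"{prefix}.{last_octet}"
        for port, engine in ports.items():
            if probe(host, port):
                yield host, port, engine
```

Anything the scan turns up that isn’t already in the inventory is a candidate for management – and, as the paragraph above notes, sometimes a database nobody knew was running.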
Automating routine tasks to fit industry best practices and getting a simple interface for dealing with multiple database engines across different platforms means DBAs can handle more instances, across more versions of those engines on all the platforms you use. Fully customizable reports show traditional performance indicators like response time and availability, down to the individual database process and SQL query, with preconfigured alerts for critical database components. dbWatch also shows metrics that give operations and IT managers the insights they need to run their database environment more efficiently – as dashboards and reports within dbWatch or via integration with key management platforms.
You can see what software licences your databases are consuming, what hardware you have available, which databases could be consolidated onto the same servers for better utilization, and even which are idle and can be taken offline. You can see what practices are working well and should be documented, what needs to change, and how your databases are growing so you can plan ahead for adding extra resources.
The dbWatch tools are designed to work at scale. You can tag database instances by function, department or any other way you want to organise them, and use attributes to help you search and filter large groups of databases. You can bulk import password and login information for instances from spreadsheets, autoscan your network to find new database instances and run queries and reports across hundreds or even thousands of database instances.
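Tag-and-filter at that scale is essentially attribute-based querying over an inventory. A hypothetical sketch of the idea (the class, field names and tags here are invented for illustration, not dbWatch’s data model):

```python
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    engine: str                 # e.g. "SQL Server", "Oracle"
    tags: frozenset = frozenset()

def by_tags(inventory, *required):
    """Filter a large inventory down to instances carrying all required tags."""
    want = set(required)
    return [i for i in inventory if want <= i.tags]

# A tiny illustrative inventory; in practice this holds thousands of entries
inventory = [
    Instance("erp-db01", "SQL Server", frozenset({"finance", "prod"})),
    Instance("crm-db02", "Oracle", frozenset({"sales", "prod"})),
    Instance("test-db03", "SQL Server", frozenset({"finance", "test"})),
]
```

Once every instance carries tags, a bulk operation – a report, a policy push, a backup check – is just a filter followed by a loop.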
That’s more important now that databases routinely run in virtualised environments, because you have to consider the performance of that whole environment; improving the performance of a single database server could easily come at the expense of the resources allocated to another instance.
Counting the costs
Simply knowing what database systems you’re using is invaluable, because the list is often longer than you think. dbWatch provides a full set of tools and reports to help you optimise your Oracle and SQL Server licence portfolio, showing you any under- or over-licensing, with details of your actual usage and projected requirements, so you can get full value out of the licences you have and potentially reduce licence costs.
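The arithmetic behind that comparison is simple once the inventory is complete. A rough sketch, assuming a per-core licensing model with a per-server minimum and licences sold in two-core packs (typical of SQL Server core licensing, but verify against your own agreement – this is an illustration, not licensing advice):

```python
import math

def cores_to_licence(cores, minimum=4, pack_size=2):
    """Cores that must be licensed for one server under an assumed
    per-core model: a per-server minimum, rounded up to whole packs."""
    return math.ceil(max(cores, minimum) / pack_size) * pack_size

def licence_gap(server_core_counts, licensed_cores):
    """Positive result = under-licensed cores; negative = shelfware."""
    needed = sum(cores_to_licence(c) for c in server_core_counts)
    return needed - licensed_cores
```

The hard part is not the formula but knowing every server and core count it should run over – which is exactly what the inventory provides.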
Monitoring in the cloud is just as important, for controlling costs if you already use cloud database services, or to predict what it will cost you to migrate. Moving to cloud services should be a strategic decision about how you evolve your software and services, not simply a cost-cutting exercise – because while the cloud is convenient, it isn’t necessarily cheap. If you lift-and-shift existing applications, you’ll save on the costs of buying and running hardware, but you need to know in advance how much it will cost you to get the performance you want in the cloud. Once you’re there, you pay by usage – so you want to monitor what you use to make sure you haven’t overprovisioned and you’re not leaving the engine running on instances or virtual machines you don’t need.
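Spotting overprovisioned cloud instances is, at its simplest, a threshold check over utilisation samples. A minimal sketch (the metric, threshold and instance names are illustrative assumptions, and real monitoring would weigh more than average CPU):

```python
from statistics import mean

def idle_candidates(cpu_samples, threshold=5.0):
    """Flag instances whose average CPU over the sampling window is below
    a threshold (percent) - candidates for downsizing or shutdown.
    cpu_samples maps instance name to a list of utilisation readings."""
    return sorted(name for name, samples in cpu_samples.items()
                  if mean(samples) < threshold)
```

Run regularly, a check like this is what stops you paying for engines left running on instances nobody uses.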
Any database, anywhere
Few businesses will use only on-premises databases – or only cloud services. Few will standardise on a single database engine or just one OS platform. That’s why dbWatch works with a wide range of database engines, on multiple platforms. Instead of needing a different agent for each database engine and platform, it’s written in the native SQL dialect for each engine, making database calls and running tasks and scripts through the native interface - so it doesn’t matter if you run your database engine on Windows Server, Linux or Solaris, or in the cloud. It includes the ODBC and Java drivers to talk to all the different versions of Oracle, SQL Server and the other database engines dbWatch supports.
That lets you use the same tool to monitor performance and resources on all the different database engines and versions you use, and compare the results in the same place. You can push the same policies and best practices to all your database instances, make sure they’re all backed up and run reports across them all, from the same place. That’s more efficient than using multiple tools, and gives you a clear view of your entire database environment.
dbWatch’s distributed architecture also means it doesn’t take up a lot of resources itself. Instead of a single central repository needing gigabytes of memory and terabytes of storage space (and the licence for another database instance to manage that), it doesn’t pull data from an instance until it’s needed for a report.
Role-based access controls respect your existing security and access policies (whether you’re using Active Directory or even Kerberos), and the fine-grained access controls mean you can control which admins can connect to an individual database, or even to individual procedures inside a database, to manage reporting on business-critical applications. dbWatch can connect across multiple subnets, each with their own policies and firewalls. As long as the database server has an IP connection, whatever the bandwidth and latency, dbWatch can connect to manage and monitor it – whether it’s in your own data centre, hosted in the cloud or in a truly remote location like an oil rig or a cruise ship.
Put it all together, and dbWatch gives you the tools to go beyond simple performance tuning to manage your entire database environment at scale for optimum efficiency and cost savings, while still getting the performance the business needs. In the age of bigger and bigger data, dbWatch gives you the big picture.