dbKoda Changelog

What's new in dbKoda 0.10.1

Apr 3, 2018
  • The dbKoda team is very pleased to announce the release of dbKoda 0.10! This release includes some important bug fixes and cool functionality — including a password manager and the ability to export query results to JSON or CSV. But for us the most exciting new feature is our new performance panel. This is a new performance dashboard which gives you a unique insight into the performance of your MongoDB instance.
  • Launching the performance panel:
  • To create the performance panel, select Create Performance Panel from the right-click menu on the connection you want to monitor. You can create as many panels as you have open connections.
  • From Data to information:
  • There are lots of options for graphically viewing the performance of a MongoDB server. Generally, they present the raw data from commands like db.serverStatus() in a succession of line charts, with time on the X-axis and the values of various metrics on the Y-axis.
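As a sketch of that conventional raw-data approach (the snapshot documents below are hypothetical, reduced to a single WiredTiger counter), charting a metric amounts to sampling it from successive db.serverStatus() snapshots and collecting (time, value) pairs:

```python
# Sketch of the conventional "raw data" view: pull one metric out of
# each db.serverStatus() snapshot and build a time series for a line
# chart. The snapshots below are hypothetical, trimmed-down examples.

samples = [
    # (seconds since monitoring started, simplified serverStatus() output)
    (0,  {"wiredTiger": {"cache": {"bytes read into cache": 1_000_000}}}),
    (5,  {"wiredTiger": {"cache": {"bytes read into cache": 1_450_000}}}),
    (10, {"wiredTiger": {"cache": {"bytes read into cache": 2_100_000}}}),
]

def metric_series(samples, path):
    """Extract one metric (named by its key path) from each snapshot."""
    series = []
    for t, status in samples:
        value = status
        for key in path:
            value = value[key]
        series.append((t, value))
    return series

series = metric_series(samples, ["wiredTiger", "cache", "bytes read into cache"])
```

Each pair in `series` becomes one point on the line chart: time on the X-axis, the metric's value on the Y-axis.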
  • There’s nothing wrong with representing performance data in this way; it’s been the state of the art for as long as I can remember (and sadly, I’ve got coming on three decades of memory on database performance management). But as Ian Lowe once said: “Data isn’t information; information isn’t knowledge; knowledge isn’t wisdom”. A line chart of a single metric is really just raw data; it’s not information or knowledge. What we’ve tried to do with the dbKoda performance panel is to turn data into information. We’ve done this by arranging metric values in such a way as to illuminate their meaning.
  • For instance, consider the following db.serverStatus() metric:
  • It’s easy to plot such a variable over time:
  • But doing so only tells you how the metric varies; it doesn’t tell you what the variable means. If, however, we display the metric as an arrow between the disk subsystem and the WiredTiger cache, its meaning is more obvious:
  • Now you can instantly understand that this metric shows the amount of data flowing from the disk subsystem into the WiredTiger Cache.
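The arithmetic behind that arrow is simple: “bytes read into cache” is a cumulative counter, so the flow rate the arrow represents is the difference between two samples divided by the sampling interval. A minimal sketch, with made-up counter values:

```python
def flow_rate(prev_value, curr_value, interval_seconds):
    """Rate of change of a cumulative counter (e.g. WiredTiger's
    "bytes read into cache") between two samples, in units/second."""
    return (curr_value - prev_value) / interval_seconds

# Hypothetical counter values sampled 5 seconds apart:
rate = flow_rate(2_000_000, 12_000_000, 5)   # 2,000,000 bytes/second
```

The same delta-over-interval calculation applies to any cumulative serverStatus counter.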
  • The historical context:
  • Now, it’s true that the default view only provides the real-time perspective, not the trend or historical context. In my opinion, presenting the time dimension for dozens of metrics simultaneously just overwhelms the viewer and decreases the signal-to-noise ratio. But trend is important, so we can show the history for all metrics with a single click.
  • Alarms and outlier detection:
  • Where possible, we advise on metrics whose value seems unreasonably high or which is statistically out of bounds. Here’s an example of some alarms raised on one of our test machines:
  • The first alarm is a statistical alarm. It is raised because the value of the metric is statistically out of line with what we’ve observed since we started monitoring. In this case, we see that sessions are spending about 11 million microseconds per second waiting for reads (i.e., there are 11 seconds of wait time observed every second). That is way above the normal amount, which is more like 2 seconds per second. If you are wondering how we can wait more than a second every second, the answer is that we have more than one session waiting: for example, 11 sessions each waiting for a second in any given second.
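The general idea behind a statistical alarm (this is a sketch of the technique, not dbKoda’s actual algorithm) is to flag an observation that falls well outside the distribution of values seen so far, for example more than a few standard deviations above the historical mean:

```python
import statistics

def is_statistical_alarm(history, value, n_sigma=3):
    """Flag a value lying more than n_sigma standard deviations
    above the mean of previously observed values."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return value > mean + n_sigma * sd

# Hypothetical history of read-wait figures, in microseconds of wait
# per second of elapsed time (normal here is ~2 s of wait per second):
history = [2_000_000, 1_900_000, 2_100_000, 2_050_000, 1_950_000]

alarm = is_statistical_alarm(history, 11_000_000)   # far outside history

# 11,000,000 microseconds of wait per second is 11 seconds of wait
# every second: e.g. 11 sessions each waiting a full second.
seconds_per_second = 11_000_000 / 1_000_000         # 11.0
```

Production-grade outlier detection would use a rolling window and guard against tiny or constant histories, but the shape of the check is the same.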
  • The second alarm is a threshold alarm. We see that for every document returned by all queries, over 7,000 documents are examined. This usually means that there are a lot of collection scans or inefficient index scans going on.
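A threshold alarm like this reduces to a ratio check over a sample interval. The sketch below uses hypothetical counter deltas, and the 7,000 figure is just the example value from above, not a recommended threshold:

```python
def docs_examined_per_returned(scanned, returned):
    """Documents examined per document returned over a sample interval.
    A high ratio suggests collection scans or inefficient index use."""
    if returned == 0:
        return float("inf") if scanned else 0.0
    return scanned / returned

# Hypothetical counter deltas for one sample interval:
ratio = docs_examined_per_returned(scanned=1_500_000, returned=200)  # 7500.0
threshold_alarm = ratio > 7_000                                      # True
```

When this alarm fires, the usual next step is to find the offending queries and check their plans with explain().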