dbdeployer release candidate

The latest release of dbdeployer is possibly the last one with a leading 0. If no serious bugs are found in the next two weeks, the next release will bear a glorious 1.0.

Latest news

The decision to get out of the stream of pre-releases published until now comes because I have implemented all the features that I wanted to add: mainly, the ones that I wished to add to MySQL-Sandbox but that would have been too hard to implement there.

The latest addition is the ability to run multi-source topologies. Now we can run four topologies:

  • master-slave is the default topology. It will install one master and two slaves. More slaves can be added with the option --nodes.
  • group will deploy three peer nodes in group replication. If you want to use a single primary deployment, add the option --single-primary (see the example after this list). Available for MySQL 5.7 and later.
  • fan-in is the opposite of master-slave. Here we have one slave and several masters. This topology requires MySQL 5.7 or higher.
  • all-masters is a special case of fan-in, where all nodes are masters and are also slaves of all nodes.
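For reference, here is what the two simpler topologies look like on the command line. This is a minimal sketch, assuming that 8.0.4 binaries are already installed where dbdeployer expects to find them; the flags are the ones described in the list above.

$ # default master-slave topology, with three slaves instead of two
$ # (--nodes counts all nodes, master included)
$ dbdeployer deploy replication --nodes=4 8.0.4

$ # group replication in single-primary mode
$ dbdeployer deploy replication --topology=group --single-primary 8.0.4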

It is possible to tune the flow of data in multi-source topologies. The default for fan-in is three nodes, where nodes 1 and 2 are masters, and node 3 is a slave. You can change the predefined settings by providing the lists of masters and slaves:

$ dbdeployer deploy replication \
--topology=fan-in \
--nodes=5 \
--master-list="1 2 3" \
--slave-list="4 5" \
8.0.4 \
--concurrent

In the above example, we get 5 nodes instead of 3. The first three are masters (--master-list="1 2 3") and the last two are slaves (--slave-list="4 5"), which will receive data from all the masters. A test script is generated automatically to check the replication flow. In our case it shows the following:

$ ~/sandboxes/fan_in_msb_8_0_4/test_replication
# master 1
# master 2
# master 3
# slave 4
ok - '3' == '3' - Slaves received tables from all masters
# slave 5
ok - '3' == '3' - Slaves received tables from all masters
# pass: 2
# fail: 0

The first three lines show that each master has done something. In our case, each master has created a different table. The slaves in nodes 4 and 5 then count how many tables they found, and if they got the tables from all masters, the test succeeds.
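To see the same mechanism by hand, one can create a table on one master and then look for it on a slave. This is only a sketch: it assumes the sandbox provides the usual per-node client scripts (n1 ... n5), and the schema and table names (demo, t1) are made up for the example.

$ # on master 1: create a schema and a table
$ ~/sandboxes/fan_in_msb_8_0_4/n1 -e "CREATE SCHEMA IF NOT EXISTS demo; CREATE TABLE demo.t1(id INT)"
$ # on slave 4: the table should arrive through replication
$ ~/sandboxes/fan_in_msb_8_0_4/n4 -e "SELECT table_name FROM information_schema.tables WHERE table_schema='demo'"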
Note that for the all-masters topology there is no need to specify master-list or slave-list. In fact, those lists will be auto-generated, and they will both include all deployed nodes.
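As a quick sketch of that case (again assuming 8.0.4 binaries are available), the deployment command reduces to the topology and the node count:

$ # all nodes are both masters and slaves; no master-list or slave-list needed
$ dbdeployer deploy replication --topology=all-masters --nodes=3 8.0.4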

What now?

Once I make sure that the current features are reasonably safe (for the next 10-15 days I will only write more tests), I will publish the first (non-pre) release of dbdeployer. From that moment on, I’d like to follow the recommendations of Semantic Versioning:

  • The initial version will be 1.0.0 (major, minor, revision).
  • The specs for 1.0 will be the API that needs to be maintained.
  • Bug fixes will increment the revision counter.
  • New features that don't break compatibility with the API will increment the minor counter.
  • New features or changes that break compatibility will trigger a major counter increment.

Using this method will give users a better idea of what to expect. An increase in the revision number means only bug fixes. An increase in the minor counter means that there are new features, but all previous features work as before. An increase in the major counter means that something will break, either because of a changed interface or because of changed behavior.
In practice, the tests released with 1.0.0 should run with any subsequent 1.x version. When those tests need changes to run correctly, we will need to bump up the major version.
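A hypothetical sequence of releases (the numbers are illustrative only, not a plan) would then read like this:

1.0.0 → 1.0.1   bug fixes only; everything behaves as before
1.0.1 → 1.1.0   a new feature; all 1.0.x features and tests keep working
1.1.0 → 2.0.0   an interface or behavior change; existing scripts or tests may break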

Let’s see if this method is sustainable. So far, I haven’t needed to make behavioural changes, which are usually provoked by new versions of MySQL that introduce incompatible behavior (MySQL definitely does not follow Semantic Versioning principles). When the next version becomes available, I will see if this RC of dbdeployer can stand its ground.
