The Race to the Tangle II. Teamwork

This post is a continuation of The Race to the Tangle, in which we discuss how we try to increase the number of transactions we send to the tangle with the hardware we have.

For those who are not sure what we are doing, we would like to remark that we are developing an IPS (Indoor Positioning System) and storing its locations in IOTA; we are not developing an IPS on top of IOTA.

Why do we store locations in the tangle? Because our goal is to use the locations as evidence of facts, and we believe that having them stored and signed in the tangle is ideal for this purpose. You can review the last part of this post for an example of what we do with the locations stored in the tangle (note that the images on that page are GIFs).

For us, the locations we store in the tangle are like a snapshot of the current state of the installation. Obviously, the more locations we store, the more representative the snapshot will be. Therefore, the underlying issue to address is increasing the number of locations stored in the tangle with the hardware we have. We are not talking about using cloud solutions or better hardware, but about optimizing the performance of the hardware we are already using.

After these considerations, it is time to move on to the interesting part.

First of all, the code. Keep in mind that it is code for carrying out tests (not for production). Before using it, you should analyse it in order to understand how it works.

We have the following Low-Cost Single Board Computers (LCSBCs):

  • 1 x UP Squared.
  • 1 x Raspberry Pi 2 Model B with a 32 GB Class 10 microSD card.
  • 3 x Raspberry Pi 3 Model B, each with an 8 GB Class 4 microSD card.

[Figure: teamwork]

We use Eclipse Kura on all the LCSBCs so they can easily communicate with each other over MQTT. Using the MQTT connection, we can create a cluster of LCSBCs (hereinafter, nodes). Our approach is as follows:

  • Each node runs the worker service.
  • The service adds itself to the cluster's list of workers by publishing a message to the topic /mide/iota/workers/<worker-id>.
  • The node that starts an execution uses the manager service. Strictly speaking, the execution uses the service (there is no public manager service).
  • We do not parallelize the PoW itself, but the transactions. Why? Because: i) parallelizing the PoW is a complex task, and ii) without a fast connection between the nodes it does not make sense (it could even slow down the system).
  • The manager assigns jobs to each worker by publishing them to the topic /mide/iota/todo/<worker-id>.
  • The workers publish the results of the jobs to the topic /mide/iota/done/<worker-id> (a minimal worker sketch follows this list).
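
As a minimal sketch of this message flow, the worker side could look as follows. It uses the Eclipse Paho MQTT client rather than Kura's own data services, purely to keep the example self-contained; the worker id, broker address, and the doPow helper are hypothetical placeholders.

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class WorkerSketch {
        public static void main(String[] args) throws MqttException {
            String workerId = "rpi3-1";                // hypothetical worker id
            String broker = "tcp://192.168.1.10:1883"; // hypothetical broker address
            MqttClient client = new MqttClient(broker, "worker-" + workerId);
            client.connect();

            // Register this node in the cluster's list of workers.
            client.publish("/mide/iota/workers/" + workerId,
                    new MqttMessage("ready".getBytes()));

            // Wait for jobs from the manager and publish each result.
            client.subscribe("/mide/iota/todo/" + workerId, (topic, job) -> {
                byte[] result = doPow(job.getPayload()); // stand-in for the real PoW/attach step
                client.publish("/mide/iota/done/" + workerId, new MqttMessage(result));
            });
        }

        // Hypothetical stand-in for the actual transaction + PoW logic.
        private static byte[] doPow(byte[] payload) { return payload; }
    }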

Analysing our previous results, we realised that the deviation of the average transaction time per LCSBC is high. Therefore, we follow two approaches:

  • Collaborative: each transaction is assigned to a free worker. This should be the most efficient approach.
  • Competitive: the workers compete with each other to be the fastest on each transaction (like blockchain mining). This should be the more stable approach. Both assignment strategies are sketched below.
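
The difference between the two modes lies only in how the manager publishes the jobs. A minimal sketch, assuming the manager tracks idle workers in a BlockingQueue (freeWorkers) fed by the /mide/iota/done messages; both variable names are hypothetical:

    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class ManagerSketch {
        // Collaborative: hand each transaction to the next free worker.
        static void assignCollaborative(MqttClient client, BlockingQueue<String> freeWorkers,
                                        byte[] job) throws MqttException, InterruptedException {
            String worker = freeWorkers.take(); // blocks until some worker reports itself idle
            client.publish("/mide/iota/todo/" + worker, new MqttMessage(job));
        }

        // Competitive: broadcast the transaction to every worker; the manager
        // keeps the first result published on /mide/iota/done/... and ignores the rest.
        static void assignCompetitive(MqttClient client, List<String> allWorkers,
                                      byte[] job) throws MqttException {
            for (String worker : allWorkers) {
                client.publish("/mide/iota/todo/" + worker, new MqttMessage(job));
            }
        }
    }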

We have conducted two experiments in which we sent 300 transactions to the following addresses:

Unfortunately, there is a bug in the IOTA library that sometimes causes an exception during the PoW. In our implementation we discard these transactions, so we do not consider them in the results. It is important to note that the error occurs during the PoW, so the results would be better without this bug.
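
As an illustration of the discard policy, a minimal sketch (attachToTangle is a hypothetical stand-in for the IOTA library call in which the PoW exception surfaces):

    public class DiscardSketch {
        static byte[] tryTransaction(byte[] job) {
            try {
                return attachToTangle(job); // the PoW happens inside; may throw because of the bug
            } catch (Exception e) {
                return null; // discarded: the caller skips null results entirely
            }
        }

        // Hypothetical stand-in for the IOTA library call that performs the PoW.
        static byte[] attachToTangle(byte[] job) { return job; }
    }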

The bug also affects the execution differently depending on the configured mode. While in the competitive mode the nodes work until the end of the execution, because every node is called for each particular job, in the collaborative mode this does not happen. To analyse the results, we have kept only the valid data in the competitive mode, and all the data up to the first occurrence of the bug in the collaborative mode. For the collaborative mode, we ran several tests until we obtained a run in which the bug appeared late (after many transactions).

The results are:

Competitive

  • Transactions: 232.
  • Time: 4708 seconds.
  • Time per transaction: 20.29 seconds.
  • Transactions per hour: 177.4.
  • Variance: 237.51 s².
  • Standard deviation: 15.41 seconds.
  • Maximum transaction time: 86 seconds.

[Figure: competitive mode results]

Collaborative

  • Transactions: 203.
  • Time: 3107 seconds.
  • Time per transaction: 15.6 seconds.
  • Transactions per hour: 230.75.
  • Variance: 9984.75 s².
  • Standard deviation: 99.92 seconds.
  • Maximum transaction time: 679 seconds.
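
For reference, the metrics in these lists can be derived from the per-transaction times as in the following sketch (the sample values are hypothetical; the real runs had 232 and 203 measurements):

    import java.util.Arrays;

    public class StatsSketch {
        public static void main(String[] args) {
            // Hypothetical per-transaction times in seconds.
            double[] times = {18.0, 21.5, 86.0, 14.2};

            double total = Arrays.stream(times).sum();
            double mean = total / times.length;     // time per transaction
            double perHour = 3600.0 / mean;         // transactions per hour
            double variance = Arrays.stream(times)
                    .map(t -> (t - mean) * (t - mean))
                    .average().orElse(0.0);
            double stdDev = Math.sqrt(variance);    // standard deviation

            System.out.printf("mean=%.2fs, tx/h=%.1f, var=%.2f, sd=%.2f%n",
                    mean, perHour, variance, stdDev);
        }
    }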

[Figure: collaborative mode results]

[Figure: comparison of the competitive and collaborative modes]

[Figure: comparison of the competitive and collaborative modes (2)]

Here you can find the comparison with our previous results:

[Figure: global comparison with previous results]
