Transaction Streaming

To obtain a stream of FIO Chain transactions and look for relevant data (for example, if you are a centralized exchange and want to know when a FIO Token deposit occurred to a specific account), you can use one of the methods below.

There are two approaches often seen in pre-processing: pulling data via repeated requests, and streaming data via websocket. Because of the overhead of making many repeated HTTP requests, and because nodeos does not support pipelining, the streaming options are significantly faster. (At some point a Unix domain socket option may become available, making the pull option less inefficient, but as of the time of writing it is not yet enabled in the http_plugin.) Each solution below is ranked in terms of complexity (the infrastructure required to run the solution), difficulty (how hard it is to handle the information the approach provides), and quality (is the data complete? Is it trustworthy?)

Use /get_block

(low complexity, low difficulty, low quality)

This is a common method with no additional plugins required. It is also slow and can result in missing information. We caution against using this method if accuracy is critical.

At a high level, the process for crawling blocks looks like:

  • Call /get_info to get the latest irreversible block's height "last_irreversible_block_num" (which never rolls back)
  • Call /get_block for each block height with {"block_num_or_id":height} to get "transactions" array, which contains all transactions on the block
  • Loop over the "transactions" array, taking each "transactions"[i]
  • Check whether "transactions"[i]."status" == "executed"; if yes, go to the next step, if no, skip this transaction
  • Get "transactions"[i]."trx"."transaction"."actions"[0]."data" as "data"
  • Check whether "data"."payee_public_key" is the correct deposit address. If yes, then "data"."actor" is the sender address and "data"."amount"/1000000000 is the amount of FIO sent
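A minimal sketch of those steps in Go, decoding an embedded sample of a /v1/chain/get_block response instead of making live HTTP calls; the deposit key, actor, and amount are invented, and only the fields the steps reference are modeled:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Only the fields used by the crawl steps above are modeled.
type getBlockResp struct {
	Transactions []struct {
		Status string `json:"status"`
		Trx    struct {
			Transaction struct {
				Actions []struct {
					Name string          `json:"name"`
					Data json.RawMessage `json:"data"`
				} `json:"actions"`
			} `json:"transaction"`
		} `json:"trx"`
	} `json:"transactions"`
}

type transferData struct {
	PayeePublicKey string      `json:"payee_public_key"`
	Actor          string      `json:"actor"`
	Amount         json.Number `json:"amount"` // SUFs; may arrive as number or string
}

// sufToFIO converts an amount in SUFs (1 FIO = 1,000,000,000 SUF)
// to a decimal string without floating-point rounding.
func sufToFIO(suf uint64) string {
	return fmt.Sprintf("%d.%09d", suf/1000000000, suf%1000000000)
}

func main() {
	// In practice this JSON would come from POSTing
	// {"block_num_or_id": height} to /v1/chain/get_block.
	sample := []byte(`{"transactions":[{"status":"executed","trx":{"transaction":{"actions":[{"name":"trnsfiopubky","data":{"payee_public_key":"FIO5DEPOSITKEYxxx","actor":"alice","amount":2000000000}}]}}}]}`)

	const depositKey = "FIO5DEPOSITKEYxxx" // placeholder deposit address

	var blk getBlockResp
	if err := json.Unmarshal(sample, &blk); err != nil {
		panic(err)
	}
	for _, tx := range blk.Transactions {
		if tx.Status != "executed" { // skip failed/expired transactions
			continue
		}
		for _, act := range tx.Trx.Transaction.Actions {
			if act.Name != "trnsfiopubky" {
				continue
			}
			var d transferData
			if err := json.Unmarshal(act.Data, &d); err != nil {
				continue // msig entries carry a string here, not an object
			}
			if d.PayeePublicKey == depositKey {
				suf, _ := d.Amount.Int64()
				fmt.Printf("deposit from %s: %s FIO\n", d.Actor, sufToFIO(uint64(suf)))
			}
		}
	}
}
```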

There are several issues with this approach:

  • Action traces are not included in the transactions, so it is not possible to see fees being charged or rewards being paid out.
  • Tracking multi-signature transactions requires special handling, because they arrive with a different data structure: in the "transactions" array an msig transaction has no structure at all, only a string holding the transaction ID. For these it is necessary to also call the get_transaction API, using both the block number (non-history nodes require a block hint) and the transaction ID.

get-block.go is an example of how to get transactions from get_block.

Use V1 History

(low complexity, low difficulty, high quality)

The major downside to this approach is that it requires many calls to get all of the transactions, but it does result in having full action traces available and ensures multi-sig transactions are not missed.
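Assuming the node has the v1 history plugin enabled, one iteration of that loop can be sketched as below. The request and response shapes (get_block_txids taking {"block_num": n} and returning an "ids" array) are assumptions based on the endpoint names, and the sketch decodes a canned response rather than calling a live node:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Assumed response shape for /v1/history/get_block_txids.
type blockTxIDs struct {
	IDs []string `json:"ids"`
}

// getTransactionBody builds the request body for
// /v1/history/get_transaction, including the block hint.
func getTransactionBody(id string, height uint32) []byte {
	b, _ := json.Marshal(map[string]interface{}{
		"id":             id,
		"block_num_hint": height,
	})
	return b
}

func main() {
	// In practice: POST {"block_num": height} to /v1/history/get_block_txids.
	sample := []byte(`{"ids":["aa11","bb22"]}`)

	var resp blockTxIDs
	if err := json.Unmarshal(sample, &resp); err != nil {
		panic(err)
	}
	for _, id := range resp.IDs {
		// Then POST this body to /v1/history/get_transaction to obtain
		// the full action traces of each transaction.
		fmt.Printf("%s\n", getTransactionBody(id, 12345))
	}
}
```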

v1history.go is an example of how to use the v1 history get_block_txids and get_transaction endpoints to retrieve transaction information.

Use websocket

(low complexity, high difficulty, high quality)

The state-history plugin is very fast and efficient at providing data, but it is difficult to understand and use directly. Queries are specified using ABI-encoded binary requests, and the data returned is also ABI encoded. Generally this is how many of the more advanced tools ingest the data before normalizing it.

Use Chronicle

(high complexity, low difficulty, high quality)

Chronicle is a tool that consumes the state-history plugin's data and converts it to JSON, sending it over an outgoing websocket for processing. There are some challenges here too: many of the numeric fields are changed to strings, which can be problematic for strongly-typed languages. Chronicle has many options, making it a very good choice when integrating into a custom data backend.
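In Go, the string-typed numerics can be absorbed with json.Number, which tolerates both quoted and unquoted numbers. The fragment below is made up in the general style of Chronicle output, not its exact schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type chronicleMsg struct {
	BlockNum json.Number `json:"block_num"` // json.Number accepts both "123" and 123
	Trace    struct {
		Action struct {
			Account string `json:"account"`
			Name    string `json:"name"`
		} `json:"action"`
		Amount json.Number `json:"amount"`
	} `json:"trace"`
}

func decodeMsg(b []byte) (chronicleMsg, error) {
	var m chronicleMsg
	err := json.Unmarshal(b, &m)
	return m, err
}

func main() {
	// A made-up fragment: note the numeric fields arriving as JSON strings.
	sample := []byte(`{"block_num":"18123456","trace":{"action":{"account":"fio.token","name":"trnsfiopubky"},"amount":"2000000000"}}`)

	m, err := decodeMsg(sample)
	if err != nil {
		panic(err)
	}
	amt, _ := m.Trace.Amount.Int64()
	blk, _ := m.BlockNum.Int64()
	fmt.Printf("block %d: %s::%s amount=%d\n", blk, m.Trace.Action.Account, m.Trace.Action.Name, amt)
}
```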

fio.etl is an example of a tool that uses Chronicle.

Use Hyperion

(high complexity, low difficulty, high quality)

Hyperion history adds a large number of capabilities, including streaming APIs with filtering support, v1-history-compatible APIs, and many additional useful endpoints. It is a somewhat complex app, involving message queues, key-value stores, ingest processes, and an Elasticsearch backend.
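As a sketch of the filtering APIs, the helper below builds a query for Hyperion's /v2/history/get_actions endpoint; the host name is a placeholder, and the contract:action filter syntax follows Hyperion's documented parameters:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildGetActions builds a Hyperion /v2/history/get_actions query URL,
// filtering to one account and one contract:action pair.
func buildGetActions(host, account, filter string, limit int) string {
	q := url.Values{}
	q.Set("account", account)
	q.Set("filter", filter)
	q.Set("limit", fmt.Sprint(limit))
	return host + "/v2/history/get_actions?" + q.Encode()
}

func main() {
	// A GET to this URL returns matching action traces as JSON.
	u := buildGetActions("https://fio.example", "myaccount", "fio.token:trnsfiopubky", 50)
	fmt.Println(u)
}
```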

Consume Blocks via P2P

(low complexity, high difficulty, low quality)

This method is only recommended for near-real-time monitoring. It is possible to have a node push blocks directly over a TCP connection using the EOS p2p protocol, and then to process each block using the ABI to decode the transactions. It has the same downsides as using get_block, plus the added complexity of handling the binary protocol, but it is useful for handling data in real time.

fiowatch is an example of a tool that consumes blocks via P2P.