REST API Benchmarks

BitCurb provides an enterprise-level engine for reconciliation and processing of financial data files.

We provide a Software-as-a-Service API that comprises Crunch API and ETL API functions, providing data matching and data processing capabilities, respectively.

With our flexible licensing and hosting model, we aim to meet the demanding requirements of mid-market clients while keeping BitCurb fit for the large data volume challenges your business may have.

If your business processes millions of data records daily and you cannot decide which of the market tools would serve you best, this article will help you make the best choice for your needs.

We measured the performance of our key engine components, Crunch and ETL, in two different controlled environments. Our analysis of the results is presented below.

We built two environments in Azure, representing two different machine series: a high-performing Standard DS5 v2 (16 vCPUs, 56 GiB memory) from the DSv2-series and a Standard D4s v3 (4 vCPUs, 16 GiB memory) from the Dv3-series.

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general

To minimize the impact of network latency and achieve higher performance, we installed the following on each of the servers in Azure:

  • OS: Windows Server 2016 Datacenter Edition
  • SQL Server: SQL Server 2017 Express
  • IIS

For the purpose of taking the performance metrics, we hosted our solution in IIS using the ASP.NET Core in-process hosting model.

Data reconciliation with BitCurb Crunch engine

We measured the performance of our crunch operations on both machines, generating data sets of 1, 2, 5 and 7 million rows and executing a single One-To-One matching rule against each.
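For readers unfamiliar with the operation, a one-to-one rule pairs each record on one side with at most one record on the other. The sketch below is a toy illustration only; the record layout (`id`, `amount`) and the matching key are assumptions for the example, not BitCurb's actual rule engine or schema:

```python
import random

def generate_rows(n, seed):
    """Generate n synthetic transaction rows (id, amount)."""
    rng = random.Random(seed)
    return [{"id": i, "amount": round(rng.uniform(1, 1000), 2)} for i in range(n)]

def one_to_one_match(left, right, key="id"):
    """Pair each left row with at most one right row sharing the same key."""
    index = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    matched, unmatched = [], []
    for row in left:
        candidates = index.get(row[key])
        if candidates:
            matched.append((row, candidates.pop(0)))  # consume one candidate only
        else:
            unmatched.append(row)
    return matched, unmatched

left = generate_rows(1000, seed=1)
right = generate_rows(900, seed=2)
matched, unmatched = one_to_one_match(left, right)
```

A production engine would match on business attributes (amounts, references, dates) rather than a synthetic id, but the one-to-one consumption of candidates is the defining behaviour.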

Key factors such as the complexity of the matching rule, the match rate of the data, and the network topology will determine the level of deviation in your case.

BitCurb Crunch

The timespan includes triggering the request, loading the data, performing the matching, and returning the JSON result to the client that started the operation.
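Measuring the same end-to-end window in your own environment can be as simple as a stopwatch around the client call. The snippet below is a generic sketch; the operation it times is a stand-in, not a real BitCurb request:

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn and return (result, elapsed seconds), covering the full
    request/load/match/respond window as seen by the client."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in workload; in practice fn would be the HTTP call to the API.
result, elapsed = timed_call(lambda: sum(range(1_000_000)))
```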

Standard D4s v3

The completion of the crunch operation on the D4s v3 machine took 20 seconds and 28 ms for 1 million rows and 43 seconds and 50 ms for 2 million rows.

The 16 GiB RAM machine was not able to process the 5 million and 7 million row data sets.

Standard DS5 v2

The DS5 v2 machine showed slightly better performance when reconciling 1 and 2 million rows: 15 seconds and 49 ms, and 32 seconds and 5 ms, respectively.

The higher memory available in this configuration was sufficient to process 5 and 7 million rows, in 99 seconds and 10 ms, and 141 seconds, respectively.

BitCurb ETL

We measured the performance of our ETL operations on both machines with generated CSV and SWIFT MT940 files containing 1, 2 and 5 million data rows.

Our test CSV files contain 8 columns.

In all of our executions, we used column expressions, which transform and enhance the result while the file formats are processed.
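To illustrate what a column expression does, the toy sketch below derives new columns while parsing a CSV stream. The expression names, input columns and sample data are assumptions for the example, not BitCurb's expression syntax:

```python
import csv
import io

# Hypothetical column expressions: derived columns computed per row
# while the file streams through the parser.
EXPRESSIONS = {
    "amount_usd": lambda row: float(row["amount"]) * float(row["fx_rate"]),
    "reference": lambda row: row["account"].strip().upper(),
}

def etl_csv(text):
    """Parse CSV rows and enrich each one with the expression results."""
    out = []
    for row in csv.DictReader(io.StringIO(text)):
        row.update({name: fn(row) for name, fn in EXPRESSIONS.items()})
        out.append(row)
    return out

sample = "account,amount,fx_rate\n de-001 ,100.0,1.1\n"
rows = etl_csv(sample)
```

Evaluating such expressions per row is what makes ETL throughput sensitive to expression complexity, as noted above.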

As noted earlier, the complexity of the column expressions and the network topology may affect the level of deviation in your case.

The timespan includes triggering the request, loading the data, performing the transformation, and returning the JSON result to the client that started the operation.

ETL for SWIFT MT 940 file format

BitCurb ETL

Result time is in minutes.
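MT940 is a tagged, line-oriented SWIFT format, which is considerably more expensive to parse than flat CSV. The simplified splitter below, which ignores message blocks and multi-line field content, hints at the shape of the work; the sample message is fabricated for illustration:

```python
import re

def parse_mt940_tags(text):
    """Split an MT940 message into (tag, content) pairs.
    Simplified sketch: real messages wrap fields in SWIFT blocks and
    allow multi-line field content, both of which this ignores."""
    return re.findall(r"^:(\d{2}[A-Z]?):(.*)$", text, flags=re.MULTILINE)

sample = (
    ":20:STMT-001\n"
    ":25:12345678/0001\n"
    ":61:2101040102D123,45NTRFNONREF\n"
    ":86:PAYMENT REFERENCE\n"
)
fields = parse_mt940_tags(sample)
```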

Standard D4s v3

The completion of the ETL operation on the D4s v3 machine took 14 minutes and 22 sec for 1 million transactions and 28 minutes and 40 sec for 2 million transactions.

The 16 GiB RAM machine was not able to process 5 million transactions.

Standard DS5 v2

The DS5 v2 machine showed slightly better performance when parsing the SWIFT file for 1 and 2 million transactions: 12 minutes and 22 sec, and 24 minutes and 40 sec, respectively.

The higher memory available in this configuration was sufficient to process 5 million transactions, taking 62 minutes and 1 sec.

ETL for CSV file format

BitCurb ETL CSV

Result time is in minutes.

Standard D4s v3

The completion of the ETL operation on the D4s v3 machine took 1 minute and 57 sec for 1 million rows, 4 minutes and 5 sec for 2 million rows, and 10 minutes and 9 sec for 5 million rows.

Standard DS5 v2

The DS5 v2 machine showed slightly better performance: 1 minute and 46 sec for 1 million rows, 3 minutes and 36 sec for 2 million rows, and 9 minutes and 2 sec for 5 million rows.

Conclusion

These benchmark statistics position the BitCurb API in the enterprise segment of the market, meeting the demands of enterprise businesses. Our solution can process millions of rows in a single workload, regardless of whether it is hosted in the cloud or on your premises.

If our solution matches your data capacity needs, please get in touch with us to discuss your business case.