
Data Mining - Fetching Data From APIs

Summary: Superalgos allows its users to fetch data from any API on the Internet without needing to write code. Instead, users design an API Map, which is a set of definitions for how a Bot can access a certain API and how to interpret the API Server responses. With this API Map, an API Data Fetcher Bot can be configured to extract data from any API Server and store Raw Data inside the system. This raw data can later be processed and consumed by other Bots.
Main Workflow
How to Fetch Data from an API?
The main workflow to pull this off is straightforward and requires no coding at all. Follow this guide and you will be using data from any source on the Internet in your trading strategies in no time.
  • 1. Choose an API: The current version is limited to public APIs only; support for secured APIs is coming soon.
  • 2. Design an API Map: Read the API's online documentation and identify the Endpoints that return the data you would like to fetch. Carefully map each Endpoint, defining the parameters the API Server expects on each call and how the API Server response is formatted (the sketch after this list shows the kind of response you would inspect).
  • 3. Set up an API Data Fetcher Bot: Create a Data Mine for this API and add an API Data Fetcher Bot. From that Bot's Record Properties, reference the fields returned by the API Server as defined in the API Map. This is how you define which fields from each response will be saved to disk. You might also need to define query or path parameters here.
  • 4. Convert the Raw Data: Aggregate the fetched Raw Data into a standard Dataset Type, as explained in the Data Aggregation section below.
  • 5. Set up Indicator Bots: Once the data is converted, you can extract specific Datasets from it and use them to define a set of indicators built from that data. These Indicators can be as simple as reorganizations of the data (with some adjustments to field names and the data split into different indicators), or they can include some data processing / transformations if you wish (this would require some coding).
  • 6. Set up Plotters: After you have set up your Indicators, you can design Plotters for them if you wish.
  • 7. Use Data for Trading Strategies: These Indicators can be consumed by your Trading Strategies.
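To make step 2 concrete, here is a minimal TypeScript sketch of inspecting a hypothetical public endpoint before mapping it. The URL, query parameters, and field names are invented for illustration; within Superalgos no such code is needed, since the API Data Fetcher Bot performs these calls based on your API Map.

```typescript
// Minimal sketch: inspect a hypothetical public endpoint before mapping it.
// The URL, query parameters, and field names are invented for illustration.
async function inspectEndpoint(): Promise<void> {
  // Query parameters like these are what you would later define as
  // parameters of the API Endpoint in the API Map.
  const url = "https://api.example.com/v1/metrics?asset=BTC&interval=1m";
  const response = await fetch(url);
  const payload = await response.json();

  // A typical raw response wraps an array of timestamped records, e.g.:
  // { "data": [ { "timestamp": 1650000000000, "value": 42.1 }, ... ] }
  // Each field of those records would be mapped in the API Map, and later
  // referenced from a Record Property of the API Data Fetcher Bot.
  console.log(JSON.stringify(payload, null, 2));
}

inspectEndpoint().catch(console.error);
```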
Data Aggregation
How Does Data Aggregation Work?
When Raw Data is fetched from the Internet, it needs to be converted into a Dataset Type of either the Multi-Time-Frame-Daily or the Multi-Time-Frame-Market type. Usually, Raw Data is a timestamped dataset with a timestamp field, or it has begin and end properties if records are fragmented into time slots of one minute.
From there, the conversion process needs to generate elements with time slots of 2 min, 3 min, 4 min, 5 min, 10 min, and so on, up to 24 hours. To achieve this, each field needs to be aggregated in a certain way.
In the Bot where this conversion happens, each Record Property node needs an aggregationMethod config property that defines how the aggregation is handled. These are the supported aggregation methods (a minimal sketch follows the list):
  • First: The first value found within the time range being processed defines the value of the Record Property.
  • Last: The last value found within the time range defines the value of the Record Property.
  • Min: The minimum of all values found in the time range is used as the value of the Record Property.
  • Max: The maximum of all values found in the time range is used as the value of the Record Property.
  • Sum: The sum of all values found in the time range is used as the value of the Record Property.
  • Avg: The average of all values found in the time range is used as the value of the Record Property.
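As an illustration of what each method computes, here is a minimal TypeScript sketch that consolidates five one-minute values into a single five-minute record. The function and the data are invented for the example and are not Superalgos internals.

```typescript
// Sketch of the aggregation methods described above, applied when
// consolidating one-minute records into a larger time slot.
type AggregationMethod = "First" | "Last" | "Min" | "Max" | "Sum" | "Avg";

function aggregate(values: number[], method: AggregationMethod): number {
  switch (method) {
    case "First": return values[0];
    case "Last":  return values[values.length - 1];
    case "Min":   return Math.min(...values);
    case "Max":   return Math.max(...values);
    case "Sum":   return values.reduce((a, b) => a + b, 0);
    case "Avg":   return values.reduce((a, b) => a + b, 0) / values.length;
  }
}

// Five one-minute values being consolidated into one 5-minute record:
const oneMinuteValues = [10, 12, 9, 15, 11];
console.log(aggregate(oneMinuteValues, "First")); // 10
console.log(aggregate(oneMinuteValues, "Last"));  // 11
console.log(aggregate(oneMinuteValues, "Min"));   // 9
console.log(aggregate(oneMinuteValues, "Max"));   // 15
console.log(aggregate(oneMinuteValues, "Sum"));   // 57
console.log(aggregate(oneMinuteValues, "Avg"));   // 11.4
```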
Step By Step
Step by Step Guide to Fetch Data from an API
This is a detailed guide on how to set up a data mining operation that fetches data from an Internet API.
  • Step #2: Open a second UI (this prevents the first one from saving changes, which you won't need to do anyway). You will use the first UI to check from time to time how nodes are configured there, and the second UI to build your own data mining operation from scratch. Do the rest of the steps within the second UI.
  • Step #3: Select an API from the Internet with data that might be useful in your Trading Strategy.
  • Step #4: Read the API's documentation; identify the Endpoints, the parameters they require, and the data they will provide. Another helpful way to see how the API responds is to type the API Endpoint into your browser and look at the raw output.
  • Step #5: In Superalgos, start from a clean workspace, without pre-installed plugins.
  • Step #6: Create an API Map node and start mapping the documented API by adding children to it. Pay special attention to unnamed arrays and objects that may wrap the data.
  • Step #7: For each node you add, read the node's docs page. This will save you the time you would otherwise spend guessing how all this works.
  • Step #8: Define one API Endpoint first; once you have it working, go for the rest of the available Endpoints.
  • Step #9: Define the query or path parameters, and the responses. Read each node's help for details on how to do it right.
  • Step #11: Create a new Data Mine with the name of the company that hosts the API.
  • Step #13: Install a new market. This procedure is going to set up many things for you: it will detect the API Map and the Data Mine you created, and will create the appropriate Tasks to run your API Data Fetcher Bot.
  • Step #14: Pay attention to the nodePath property at the API Response Schema node of the API Map (see the nodePath sketch after this list). Check the node's docs page for details.
  • Step #17: The timestamp Record Property is the only required property; it keeps the records you save organized. Note that the timestamp may live in the header of the API response; read the docs of the API Response Field Reference node to set up the nodePath property if necessary.
  • Step #20: Configure the aggregationMethod property for each Record Property (see the aggregationMethod sketch after this list). Check the docs of that node for details.
  • Step #21: If you wish, you can create multiple Indicator Bots to split the dataset into different groups of Record Properties.
  • Step #23: Now you are ready to run the Task where the Data Fetcher Bot Process Instance is defined.
  • Step #24: If something did not work, check the workspace map to see if there is a node with an error. Locate the node, click on the error, and read that error's page in the docs. Try to understand what is wrong, fix it, and try again.
  • Step #25: Iterate through step #24 until everything is working. Reread each node's documentation until you grasp the details it explains. Reread this guide, or this whole Topic from the start, until the concepts explained sink in.
  • Step #26: If, after digesting the docs and following all this advice, you still cannot get it to work, go to the Telegram Support group and ask for help.
  • Step #27: Once all is running well, set up your Trading Strategy in a way that allows you to test whether you have access to the processed data (the data provided by the latest Indicator Bots you created).
  • Step #29: Once the data is correctly consumed by your Trading Strategy, iterate through this guide, adding another API Endpoint to the API Map. You will need to add an additional API Data Fetcher Bot.
  • Step #30: Once you have successfully mapped the whole API and have all the needed API Data Fetcher Bots, Indicator Bots, and Plotters tested, you are ready to share your API Map and Data Mine with the community as a plugin, and receive SA Tokens in exchange for your contribution.
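Regarding the nodePath property of step #14: the config of the API Response Schema node tells the Bot where, inside the API Server response, the array of records lives. Below is a minimal TypeScript sketch of the idea, assuming a dot-separated path and reusing the invented response shape from the earlier sketch; check the node's docs page for the exact syntax Superalgos expects.

```typescript
// Sketch of how a nodePath locates the records inside a response.
// The payload shape and the path value are invented; the real value
// depends entirely on the API you are mapping.
const payload = {
  status: "ok",
  data: [
    { timestamp: 1650000000000, value: 42.1 },
    { timestamp: 1650000060000, value: 42.3 },
  ],
};

// Config of the API Response Schema node pointing at the array of records:
const apiResponseSchemaConfig = { nodePath: "data" };

// A dot-separated path walked from the root of the response
// (Superalgos resolves the path for you; this helper just shows the idea):
function resolveNodePath(root: unknown, nodePath: string): unknown {
  return nodePath
    .split(".")
    .reduce((node: any, key) => (node == null ? node : node[key]), root);
}

console.log(resolveNodePath(payload, apiResponseSchemaConfig.nodePath));
// -> the array of records the Bot will iterate over and save as Raw Data
```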
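Regarding the aggregationMethod property of step #20: each Record Property node carries a config where the method is chosen. The sketch below shows two such configs as TypeScript object literals; apart from aggregationMethod and the methods listed in the Data Aggregation section, the keys and values are invented for the example.

```typescript
// Illustrative configs for two Record Property nodes. In the UI, each
// config is edited as JSON on the node itself. Apart from
// aggregationMethod (documented above), keys and values are invented.
const timestampRecordPropertyConfig = {
  codeName: "timestamp",
  // First picks the timestamp that opens the larger time slot.
  aggregationMethod: "First",
};

const valueRecordPropertyConfig = {
  codeName: "value",
  // Last suits snapshot-like metrics; Sum suits additive ones, such as
  // counts or volumes; Avg smooths noisy readings.
  aggregationMethod: "Last",
};
```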