
Algo Trading for Dummies  -  Collecting & Storing The Market Data (Part 1)

Alpaca Team

The lifeblood of any algorithmic trading system is, of course, its data — so that’s what we’ll cover in the first two posts of the mini-series.

Always Collect Any Live Data

For the retail trader, most platforms and brokers are broadly the same: you'll be provided with a simple wrapper for a relatively simple REST or WebSocket API. It's usually worth modifying the provided wrapper to suit your purposes, and potentially creating your own custom wrapper; however, that can be done later, once you have a better understanding of the structure and requirements of your trading system.

Depending on the nature of the trading strategy, there are various types of data you may need to access and work with: OHLCV data (candlesticks), bid/ask quotes, and fundamental or exotic data. OHLCV is usually the easiest to get historical data for, which will be important later for back-testing strategies. While there are some sources for tick data and historical bid/ask or order book snapshots, they generally come at a high cost.

With this last point in mind, it’s always good to collect any live data which will be difficult or expensive to access at a later date. This can be done by setting up simple polling scripts to periodically pull and save any data that might be relevant for back-testing in the future, such as bid/ask spread. This data can provide helpful insight into the market structure, which you wouldn’t be able to track otherwise.
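
As a rough illustration, a polling script along these lines could snapshot the spread at a fixed interval. The fetch_quote function here is a placeholder for whichever quote endpoint your broker's wrapper exposes, not a specific Alpaca method:

import csv
import time
from datetime import datetime, timezone

def fetch_quote(symbol):
    # Placeholder: swap in your broker API's quote call,
    # returning (bid_price, ask_price) for the symbol
    raise NotImplementedError

symbols = ["SPY", "MSFT"]
pollIntervalSeconds = 60

while True:
    for symbol in symbols:
        bid, ask = fetch_quote(symbol)
        timestamp = datetime.now(timezone.utc).isoformat()
        # Appends one spread snapshot per poll to a per-symbol CSV
        with open(symbol + "_quotes.csv", "a", newline="") as f:
            csv.writer(f).writerow([timestamp, bid, ask, ask - bid])
    time.sleep(pollIntervalSeconds)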

Alpaca Python Wrapper Lets You Start Off Quickly

The Alpaca Python Wrapper provides a simple API wrapper to begin working with when creating your initial proof-of-concept scripts. It serves well both for downloading bulk historical data and for pulling live data for quick calculations, so it will need little modification to get going.

alpacahq/alpaca-trade-api-python - Python client for Alpaca's trade API: https://github.com/alpacahq/alpaca-trade-api-python
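
As a quick sanity check that the wrapper is working, the same get_bars call used in the full snippet later in this post can pull a handful of bars. The method signature here is taken from that snippet, so it may differ in newer versions of the library:

import alpaca_trade_api as tradeapi

api = tradeapi.REST(key_id="<your key id>", secret_key="<your secret key>")

# Pulls daily bars for SPY since the start of 2018, as in the snippet below
bars = api.get_bars("SPY", "1D", start_dt="2018-01-01T00:00:00.000Z").bars
print(bars[-1].close)  # close of the most recent bar returned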

Note also that the Alpaca Wrapper returns market data as pandas DataFrames, which have slightly different syntax from a standard Python array or dictionary; this is covered thoroughly in the documentation, so it shouldn't be an issue.
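
For instance, where you might index a plain dictionary of lists with loops and integer offsets, a DataFrame gives you labelled columns and positional accessors. A small illustration (the DataFrame here is hand-built, standing in for the wrapper's output):

import pandas as pd

# Hand-built OHLCV-style DataFrame shaped like the wrapper's output
bars = pd.DataFrame({
    "open": [270.1, 271.3], "high": [271.8, 272.0],
    "low": [269.9, 270.8], "close": [271.2, 271.6],
    "volume": [1200000, 980000],
})

last_close = bars["close"].iloc[-1]  # column label plus positional index
avg_volume = bars["volume"].mean()   # built-in column aggregation
records = bars.to_dict("records")    # back to a list of plain dictionaries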

Keeping A Local Cache Of Data

While data may be relatively quick and easy to access on the fly via the market API for live trading, even small delays become a serious slowdown when running batches of back-testing across large time periods or multiple trading symbols. As such, it's best to keep a local cache of data to work with. This also allows you to create consistent data samples to design and verify your algorithms against.
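
Once cached, a back-test can then load a symbol's history straight from disk instead of hitting the API on every run. A minimal sketch, assuming CSV files with the header used in the snippet later in this post:

import pandas as pd

def load_cached_bars(storageLocation, symbol):
    # Reads a cached OHLCV CSV into a time-indexed DataFrame
    df = pd.read_csv(storageLocation + symbol + ".csv",
                     index_col="time", parse_dates=True)
    return df.sort_index()

spy = load_cached_bars("<your folder location>", "SPY")
sample = spy.loc["2017-01-01":"2017-12-31"]  # a consistent, repeatable test window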

There are many different storage solutions available, and in most cases it will come down to what you're most familiar with. But we'll explore some of the options anyway.

No Traditional RDB For Financial Data Please

Financial data is time-series, meaning that each attribute is indexed by its associated time-stamp. Depending on the volume of data-points, traditional relational databases can quickly become impractical, as in many cases it is best to treat each data column as a list rather than the database as a collection of separate records.
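
To make that concrete, compare the row-oriented records a relational database hands back with the column-oriented layout most indicator calculations want:

# Record-oriented: each bar as a separate row, as an RDB returns it
rows = [{"time": "2018-06-01T14:30:00Z", "close": 271.2},
        {"time": "2018-06-01T14:31:00Z", "close": 271.6}]

# Column-oriented: each attribute as one list, as time-series work wants it
closes = [row["close"] for row in rows]
sma = sum(closes) / len(closes)  # e.g. a simple moving average over the column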

On top of this, a database manager can add a lot of unnecessary overhead and complexity to a simple project with limited scaling requirements. Sure, if you're planning to build a backend data storage solution which will be constantly queried by dozens of trading bots for large sets of data, you'll probably want a fully specialised time-series database.

However, in most cases you’ll be able to get away with simply storing the data in CSV files — at least initially.

Cutting Down Dev Time By Using CSVs

import csv

import alpaca_trade_api as tradeapi

api = tradeapi.REST(key_id="<your key id>", secret_key="<your secret key>")

storageLocation = "<your folder location>"
barTimeframe = "1H" # 1Min, 5Min, 15Min, 1H, 1D
assetsToDownload = ["SPY","MSFT","AAPL","NFLX"]

for symbol in assetsToDownload:
	lastDate = "2013-01-01T00:00:00.000Z" # ISO8601 date to start downloading from

	# Verifies if a data file for this symbol exists
	try: # If it exists, reads the time of the last stored bar
		dataFile = open(storageLocation + '{0}.csv'.format(symbol), 'a+')
		dataFile.seek(0) # 'a+' opens at the end of the file, so rewind before reading
		lastDate = list(csv.DictReader(dataFile))[-1]["time"]
	except (IOError, IndexError): # If not (or it is empty), initialises a new CSV file
		dataFile = open(storageLocation + '{0}.csv'.format(symbol), 'w')
		dataFile.write("time,open,high,low,close,volume\n")

	returned_data = api.get_bars(symbol, barTimeframe, start_dt=lastDate).bars

	# Reads, formats and stores the new bars
	for bar in returned_data:
		ret_time = str(bar.time)
		ret_open = str(bar.open)
		ret_high = str(bar.high)
		ret_low = str(bar.low)
		ret_close = str(bar.close)
		ret_volume = str(bar.volume)

		# Writes the formatted line to the CSV file, newline-terminated
		dataFile.write(ret_time + "," + ret_open + "," + ret_high + "," + ret_low + "," + ret_close + "," + ret_volume + "\n")

	dataFile.close()

(Code Snippet to download and store OHLCV data into a CSV) https://gist.github.com/yoshyoshi/5a35a23ac263747eabc70906fd037ff3

The use of CSVs, or another simple format, significantly cuts down on the usage of a key resource: development time. Unless you know that you will absolutely need a higher-speed storage solution in the future, it's better to keep the project as simple as possible. You're unlikely to be using enough data to make local storage speed much of an issue.

Even an SQL database can easily handle the storage and querying of hundreds of thousands of lines of data. To put that in perspective, 500k lines is roughly equivalent to the 1-minute bars for a single symbol between June 2013 and June 2018 (depending on trading hours). A well-optimised system which only pulls and processes the data it needs will have no problem with overheads, meaning that almost any storage solution should be fine, whether that be an SQL database, NoSQL, or a collection of CSV files in a folder.
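
That 500k figure checks out with some quick arithmetic on US equity trading hours:

minutesPerSession = 390 # 6.5 regular trading hours per day
sessionsPerYear = 252   # approximate number of US trading days per year
years = 5               # June 2013 to June 2018

print(minutesPerSession * sessionsPerYear * years) # 491400, i.e. roughly 500k bars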

Additionally, it's quite feasible to store the full working dataset in RAM while in use. The 500k lines of OHLCV data used just over 700MB of RAM when serialised into lists (tested in Python with data from the Alpaca client mentioned earlier).
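
If you want to sanity-check that figure against your own dataset, Python's tracemalloc module can measure the allocations directly. A minimal sketch, assuming a cached CSV like the ones written above:

import csv
import tracemalloc

tracemalloc.start()
with open("SPY.csv") as f:
    bars = [row for row in csv.reader(f)]  # serialises each CSV row into a plain list
peak = tracemalloc.get_traced_memory()[1]  # (current, peak) bytes since start()
print("Peak usage: {:.0f} MB for {} rows".format(peak / 1e6, len(bars)))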

When it comes to the building blocks of a piece of software, it's best to keep everything as simple and efficient as possible, while keeping the components suitably modular so they can be adjusted in the future if the design specification of the project changes.

By Matthew Tweed

Algorithmic Trading Basics · Market Data API · Python

Alpaca Team

API-first stock brokerage. *Securities are offered through Alpaca Securities LLC* http://alpaca.markets/#disclosures