I Built a Go Plugin for Alpaca’s MarketStore as a College Intern

Hey all! I’m Ethan, and I recently started working for Alpaca as a Software Engineering Intern! For my first task, I created a Go plugin for Alpaca’s open source MarketStore server that fetches and writes Binance minute-level bar data.


You might be wondering — What is MarketStore? MarketStore is a database server written in Go that helps users handle large amounts of financial data. Inside of MarketStore, there are Go plugins that allow users to gather important financial and crypto data from third party sources.

For this blog post, I’ll be going over how I created the plugin from start to finish in four sections: installing MarketStore, understanding MarketStore’s plugin structure, creating the Go plugin, and installing the Go plugin.

Experience Installing and Running MarketStore Locally

First, I set up MarketStore locally. I installed the latest version of Go and started going through the installation process outlined in MarketStore’s README. All the installation commands worked swimmingly, but when I tried to run marketstore using

ethanc@ethanc-Inspiron-5559:~/go/bin/src/github.com/alpacahq/marketstore$ marketstore -config mkts.yml

I got this weird error:

/usr/local/go/src/fmt/print.go:597:CreateFile/go/src/github.com/alpacahq/marketstore/executor/wal.go:87open /project/data/mktsdb/WALFile.1529203211246361858.walfile: no such file or directory: Error Creating WAL File

I was super confused and couldn’t find any other examples of this error online. After checking and changing permissions in the directory, I realized the root_directory setting in my mkts.yml configuration file was incorrect. To resolve this, I changed mkts.yml from

root_directory: /project/data/mktsdb

To

root_directory: /home/ethanc/go/bin/src/github.com/alpacahq/marketstore/project/data/mktsdb

and reran

ethanc@ethanc-Inspiron-5559:~/go/bin/src/github.com/alpacahq/marketstore$ marketstore -config mkts.yml

This time, everything worked fine and I got this output:

ethanc@ethanc-Inspiron-5559:~/go/bin/src/github.com/alpacahq/marketstore$ marketstore -config mkts.yml
…
I0621 11:37:52.067803 27660 log.go:14] Launching heartbeat service…
I0621 11:37:52.067856 27660 log.go:14] Enabling Query Access…
I0621 11:37:52.067936 27660 log.go:14] Launching tcp listener for all services
…

To enable the gdaxfeeder plugin, which grabs price data for specified cryptocurrencies from GDAX, I uncommented the bgworkers lines in the mkts.yml file. Judging from the log output below and the bgworker example later in this post, the section looked roughly like this (the exact symbols and start date here are illustrative):
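
bgworkers:
 - module: gdaxfeeder.so
   name: GdaxFetcher
   config:
     query_start: "2017-09-01 00:00"
     base_timeframe: "1Min"
     symbols:
       - BTC
       - ETH
       - LTC
       - BCH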

and reran

ethanc@ethanc-Inspiron-5559:~$ marketstore -config mkts.yml

which yielded:

…
I0621 11:44:27.248433 28089 log.go:14] Enabling Query Access…
I0621 11:44:27.248448 28089 log.go:14] Launching tcp listener for all services…
I0621 11:44:27.254118 28089 gdaxfeeder.go:123] lastTimestamp for BTC = 2017-09-01 04:59:00 +0000 UTC
I0621 11:44:27.254189 28089 gdaxfeeder.go:123] lastTimestamp for ETH = 0001-01-01 00:00:00 +0000 UTC
I0621 11:44:27.254242 28089 gdaxfeeder.go:123] lastTimestamp for LTC = 0001-01-01 00:00:00 +0000 UTC
I0621 11:44:27.254266 28089 gdaxfeeder.go:123] lastTimestamp for BCH = 0001-01-01 00:00:00 +0000 UTC
I0621 11:44:27.254283 28089 gdaxfeeder.go:144] Requesting BTC 2017-09-01 04:59:00 +0000 UTC - 2017-09-01 09:59:00 +0000 UTC
…

Now that I had MarketStore running, I used a Jupyter notebook to test out the commands listed in this Alpaca tutorial and got the same results. You can read more about how to run MarketStore in MarketStore’s README, Alpaca’s tutorial, and this thread.

Understanding how MarketStore Plugins work

After installing, I wanted to understand how the MarketStore repository is organized and how its current Go plugins work. Before working at Alpaca, I didn’t have any experience with the Go programming language, so I completed the “A Tour of Go” tutorial to get a general feel for the language. Having some experience with C++ and Python, I saw a lot of similarities and found that it wasn’t as difficult as I thought it would be.

Creating a MarketStore Plugin

To get started, I read the MarketStore Plugin README. To summarize at a very high level, there are two critical features which power plugins: Triggers and BgWorkers. You use triggers when you want your plugin to respond when certain types of data are written to your MarketStore database. You use BgWorkers when you want your plugin to run in the background.

I only needed to use the BgWorker feature because my plugin’s goal is to collect data outlined by the user in the mkts.yml configuration file.

Next, I read the code of the gdaxfeeder plugin, which is quite similar to what I wanted to do, except that I’m getting and writing data from the Binance exchange instead of the GDAX exchange.

I noticed that gdaxfeeder uses a GDAX Go wrapper to call GDAX’s public historical price endpoint. Luckily, I found a Go wrapper for Binance created by adshao that has endpoints which retrieve the currently supported symbols as well as Open, High, Low, Close, Volume (OHLCV) data for any timespan, duration, or set of symbols passed as parameters.

To get started, I first created a folder called binancefeeder, then created a file called binancefeeder.go inside of it. I then tested the Go wrapper for Binance to see how to create a client and call the Binance API’s kline endpoint to get data. My first test looked roughly like this (a simplified sketch rather than my exact code):
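
package main

import (
	"context"
	"fmt"

	binance "github.com/adshao/go-binance"
)

func main() {
	// Public market data endpoints need no API key or secret.
	client := binance.NewClient("", "")

	// Request 1-minute klines (candlesticks) for one symbol.
	klines, err := client.NewKlinesService().
		Symbol("BTCUSDT").
		Interval("1m").
		Do(context.Background())
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, k := range klines {
		fmt.Println(k) // each k is a *binance.Kline
	}
}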

I then ran this command in my root directory:

ethanc@ethanc-Inspiron-5559:~/go/bin/src/github.com/alpacahq/marketstore$ go run binancefeeder.go

and received the following response with Binance data:

&{1529553060000 6769.28000000 6773.91000000 6769.17000000 6771.34000000 32.95342700 1529553119999 223100.99470354 68 20.58056800 139345.00899491}
&{1529553120000 6771.33000000 6774.00000000 6769.66000000 6774.00000000 36.43794400 1529553179999 246732.39415947 93 20.42194600 138288.41850603}
…

So, it turns out that the Go Wrapper worked!

Next, I started brainstorming how I wanted to configure the Binance Go plugin. I ultimately chose symbols, queryStart, queryEnd, and baseTimeframe as my parameters, since I wanted the user to be able to query any specific symbol(s), start time, end time, and timespan (ex: 1Min). Then, right after my imports, I created the necessary configuration and worker structures for a MarketStore plugin, which in simplified form look like this:
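
// FetcherConfig mirrors the plugin settings in mkts.yml.
type FetcherConfig struct {
	Symbols       []string `json:"symbols"`
	QueryStart    string   `json:"query_start"`
	QueryEnd      string   `json:"query_end"`
	BaseTimeframe string   `json:"base_timeframe"`
}

// BinanceFetcher is the background worker itself.
type BinanceFetcher struct {
	config        map[string]interface{}
	symbols       []string
	queryStart    time.Time
	queryEnd      time.Time
	baseTimeframe *utils.Timeframe
}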

The FetcherConfig’s members are the settings a user can set in their configuration file (ex: mkts.yml) to start the plugin. The BinanceFetcher’s members are similar to FetcherConfig’s, with the addition of the config member, which will be used in the Run function later.

After creating those structures, I started to write the background worker function. To set it up, I created the necessary variables inside the background worker function and copied the recast function from the gdaxfeeder. The recast function uses Go’s json.Marshal to re-encode the generic config map it receives back into JSON, then unmarshals that JSON into an empty FetcherConfig value named ret and returns a pointer to it:
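
func recast(config map[string]interface{}) *FetcherConfig {
	// Re-encode the generic config map back into JSON bytes...
	data, _ := json.Marshal(config)
	// ...then parse those bytes into the typed FetcherConfig struct.
	ret := FetcherConfig{}
	json.Unmarshal(data, &ret)
	return &ret
}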

Then, inside the NewBgWorker function, I created a function to determine and return the correct time format, and set up the symbols, end time, start time, and time duration. If no symbols are set, the background worker by default retrieves all the valid cryptocurrencies and sets the symbols member to all of those currencies. It also checks the given times and duration and sets them to defaults if empty. At the end, it returns a pointer to a BinanceFetcher as the bgworker.BgWorker. A simplified sketch (helper names here are placeholders, not my exact code):
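
func NewBgWorker(conf map[string]interface{}) (bgworker.BgWorker, error) {
	config := recast(conf)

	symbols := config.Symbols
	if len(symbols) == 0 {
		// Placeholder helper: in the real plugin this queries Binance
		// for every currently supported symbol.
		symbols = allSupportedSymbols()
	}

	// Invalid or empty strings yield the zero time.Time value, which
	// the Run function later treats as "not set".
	queryStart, _ := time.Parse("2006-01-02 15:04", config.QueryStart)
	queryEnd, _ := time.Parse("2006-01-02 15:04", config.QueryEnd)

	timeframe := config.BaseTimeframe
	if timeframe == "" {
		timeframe = "1Min"
	}

	return &BinanceFetcher{
		config:        conf,
		symbols:       symbols,
		queryStart:    queryStart,
		queryEnd:      queryEnd,
		baseTimeframe: utils.NewTimeframe(timeframe),
	}, nil
}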

Then, I started creating the Run function, which is implemented by BgWorker (see bgworker.go for more details). To get a better sense of how to handle errors and write modular code in Go, I read the code for the gdaxfeeder and polygon plugins. The Run function receives the BinanceFetcher (dereferenced, since the bgworker.BgWorker returned earlier was a pointer to a BinanceFetcher). The goal of the Run function is to call the Binance API’s OHLCV endpoint with the given parameters, retrieve the data, and write it to your MarketStore database.

I first created a new Binance client with no API key or secret, since I’m only using the API’s public endpoints.

Then, to make sure that the BinanceFetcher doesn’t make any incorrectly formatted API calls, I created a function that checks the timeframe format using a regex and converts it to the correct one. I had to convert the user’s given timeframe to stay consistent with Alpaca’s utils.Timeframe, which has a lot of helpful functions but uses different strings than the ones Binance uses (ex: “1Min” vs. “1m”). If the user supplies an unrecognizable timeframe format, it sets the baseTimeframe value to 1 minute. The idea looks roughly like this:
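
// toBinanceInterval converts a MarketStore-style timeframe such as
// "1Min" or "1H" into Binance's format ("1m", "1h"). The function name
// and exact regex here are illustrative, not my exact code.
func toBinanceInterval(tf string) string {
	re := regexp.MustCompile(`^([0-9]+)(Min|H|D)$`)
	m := re.FindStringSubmatch(tf)
	if m == nil {
		return "1m" // unrecognized format: fall back to 1 minute
	}
	units := map[string]string{"Min": "m", "H": "h", "D": "d"}
	return m[1] + units[m[2]]
}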

The start and end times are already checked in the NewBgWorker function, which leaves them as zero time.Time values if invalid. So, inside Run, I only have to check whether the start time is empty and, if so, set it to a default of the current time. The end time isn’t checked here, since it will simply be ignored if incorrect, as explained in a later section. In sketch form:
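
// inside Run (sketch): fall back to the current time when no valid
// query_start was configured
timeStart := time.Time{}
if b.queryStart.IsZero() {
	timeStart = time.Now().UTC()
} else {
	timeStart = b.queryStart
}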

Now that the BinanceFetcher checks the validity of its parameters and falls back to defaults when they are invalid, I moved on to programming a way to call the Binance API.

To make sure we don’t over-call the Binance API and get IP banned, I used a for loop to fetch the data in intervals. I created a timeStart variable, set at first to the given start time, and a timeEnd variable set to timeStart plus 300 times the bar duration. At the beginning of each loop iteration after the first, timeStart is set to timeEnd, and timeEnd is again set to timeStart plus 300 times the duration. In sketch form (names illustrative):
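
// Pacing loop sketch: fetch at most 300 bars per request so we do not
// hammer the Binance API.
timeEnd := timeStart.Add(b.baseTimeframe.Duration * 300)
for {
	// ...request klines for [timeStart, timeEnd) and write them...

	// slide the 300-bar window forward for the next iteration
	timeStart = timeEnd
	timeEnd = timeStart.Add(b.baseTimeframe.Duration * 300)
}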

When it reaches the end time given by the user, it simply alerts the user through glog and continues onward, because as a background worker it needs to keep working in the background. It then writes the retrieved data to the MarketStore database. If the data is invalid, the plugin stops, because I don’t want to write garbage values to the database. The write step looks roughly like this (a sketch following the gdaxfeeder pattern; the column slices are assumed to have been parsed from the klines):
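
// Bundle the parsed kline columns into a ColumnSeries and write it.
cs := io.NewColumnSeries()
cs.AddColumn("Epoch", epochs) // []int64 UNIX timestamps
cs.AddColumn("Open", opens)   // []float64 parsed from the klines
cs.AddColumn("High", highs)
cs.AddColumn("Low", lows)
cs.AddColumn("Close", closes)
cs.AddColumn("Volume", volumes)

csm := io.NewColumnSeriesMap()
tbk := io.NewTimeBucketKey(symbol + "/" + b.baseTimeframe.String + "/OHLCV")
csm.AddColumnSeries(*tbk, cs)
if err := executor.WriteCSM(csm, false); err != nil {
	// stop rather than write garbage values to the database
	glog.Errorf("failed to write %v bars: %v", symbol, err)
	return
}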

Installing the Go Plugin

To install, I simply changed back to the root directory and ran:

ethanc@ethanc-Inspiron-5559:~/go/bin/src/github.com/alpacahq/marketstore$ make plugins

Then, to configure MarketStore to use my plugin, I changed my config file, mkts.yml, so the bgworkers section pointed at binancefeeder.so (reconstructed here from the log output below):
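
bgworkers:
 - module: binancefeeder.so
   name: BinanceFetcher
   config:
     symbols:
       - ETH
     query_start: "2018-01-01 00:00"
     query_end: "2018-01-02 00:00"
     base_timeframe: "1Min"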

Then, I ran MarketStore:

ethanc@ethanc-Inspiron-5559:~/go/bin/src/github.com/alpacahq/marketstore$ marketstore -config mkts.yml

And got the following:

…
I0621 14:48:46.944709 6391 plugins.go:42] InitializeBgWorkers
I0621 14:48:46.944801 6391 plugins.go:45] bgWorkerSetting = &{binancefeeder.so BinanceFetcher map[base_timeframe:1Min query_start:2018-01-01 00:00 query_end:2018-01-02 00:00 symbols:[ETH]]}
I0621 14:48:46.952424 6391 log.go:14] Trying to load module from path: /home/ethanc/go/bin/bin/binancefeeder.so…
I0621 14:48:47.650619 6391 log.go:14] Success loading module /home/ethanc/go/bin/bin/binancefeeder.so.
I0621 14:48:47.651571 6391 plugins.go:51] Start running BgWorker BinanceFetcher…
I0621 14:48:47.651633 6391 log.go:14] Launching heartbeat service…
I0621 14:48:47.651679 6391 log.go:14] Enabling Query Access…
I0621 14:48:47.651749 6391 log.go:14] Launching tcp listener for all services…
I0621 14:48:47.654961 6391 binancefeeder.go:198] Requesting ETH 2018-01-01 00:00:00 +0000 UTC - 2018-01-01 05:00:00 +0000 UTC
…

Testing

When I was editing my plugin and debugging, I often ran the binancefeeder.go file:

ethanc@ethanc-Inspiron-5559:~/go/bin/src/github.com/alpacahq/marketstore$ go run binancefeeder.go

If I ran into an issue I couldn’t resolve, I used Go’s equivalent of a print function (the fmt package). If there was an issue while running the plugin as part of MarketStore via the marketstore -config mkts.yml command, I used the glog.Infof() or glog.Errorf() functions to output the corresponding error or incorrect data value.

Lastly, I copied the gdaxfeeder test program and modified it for my binancefeeder test program.

You’ve made it to the end of the blog post! Here’s the link to the Binance plugin if you want to see the complete code. If you want to see all of MarketStore’s plugins, check out this folder.

To summarize, if you want to create a Go extension for any open source repository, I would first read the existing documentation, whether it is a README.md or a dedicated documentation website. Then, I would experiment with the repository’s code by changing certain parts and seeing which functions correspond to which actions. Lastly, I would look over previous extensions and refactor an existing one that seems close to your plugin idea.

Thanks for reading! I hope you take a look at the MarketStore repository and test it out. If you have any questions, feel free to comment below and I’ll try to answer!

Special thanks to Hitoshi, Sho, Chris, and the rest of Alpaca’s Engineering team for their code reviews and help, as well as Yoshi and Rao for providing feedback for this post.

By: Ethan Chiu

/

Algo Trading for Dummies  - 3 Useful Tips When Storing Trade Signals (Part 2)

Handling & Storing Trading Signals Are Hard

The calculation of simple trading indicators is made easy with the use of any one of the Technical Analysis libraries available; however, the efficient handling and storage of trading signals can be one of the most complex aspects of a live trading system.

Photo by Jeremy Thomas on Unsplash

Calculating Basic Indicators? No Problem

While it’s often necessary to create custom indicators and trading signals, there is still significant benefit in using a standard library such as TA-Lib for the basics. This saves a lot of time compared with reimplementing a set of common indicators in your language of choice. It also has the added bonus of increased processing speed, as opposed to calculations done in native Python, for example.

When it comes to moving averages and other simple time-series indicators, the process is fairly self explanatory — at every time step you calculate the next numerical value which is then used as the most up-to-date signal to trade against.

(Code Snippet to read data CSV files and process into trading indicators) https://gist.github.com/yoshyoshi/73f130026c25a7dcdb9d6909b1990277

The signals themselves will be stateless in that respect — you aren’t concerned with previous signals that have been made, only the combination of indicators present at that moment. However, you may still wish to store some of the information from the indicators, if only for external analysis at a later point.

Different Story For Advanced Pattern Recognition

Meanwhile, more advanced pattern recognition cannot be handled in such a simple manner. If, for example, your strategy relies on finding divergence between indicators, it’s possible to get a significant performance boost by storing some past data-points from which to construct the signal at each new step, rather than having to reprocess the full set of data in the look-back period every time.

This is the trade-off between storage/RAM efficiency and processing efficiency, with the latter also requiring greater software complexity to achieve.

How You Should Store Signals Depends On How Fast You Need It To Be

For optimal processing efficiency, you would not only store all the previously calculated signals from past time-stamps, but also the relevant information to calculate the next step in as few steps as possible.

While this would be completely unnecessary for any system with a polling rate above a second, it is exactly the kind of consideration you would have for a higher frequency strategy.

Meanwhile, a portfolio re-balancing system, or even most day-trading strategies, have all the time in the world (relatively). You could easily recalculate all the relevant signals at each time-step, which would cut down on the need for the handling of historical indicator sets.

Depending on the trading period of the system, it may also be worth using a hybrid approach to indicator and signal storage. Rather than permanently saving the data, you could calculate the full set of indicators at start-up and periodically dump and refresh the data to keep only what’s going to be used in RAM.

The precise design trade-offs should be considered on an individual basis, as holding more data in RAM may not be an option when running the software on lower-power cloud computing instances; nor, at the other end of the spectrum, would you be able to spare the seconds to recalculate everything for a market-making bot.

3 Useful Tips When Storing Trade Signals

As mentioned in part 1 of this series, there are a range of different storage solutions that can be used for trading data. However, there are several best practices which apply across all of them:

  1. Keep indicators in a numeric or boolean format where possible for storage. For example, split a more complex signal set into boolean components. This particular problem caused me several issues in projects I’ve had to work on in the past.
  2. Only store what is complex or time-consuming to recalculate. If a set of signals can be calculated quickly enough in a stateless manner, it’s probably easier to do so than to add the design complexity of storing extra information.
  3. Plan out the flow of data through your system before you start programming anything. What market data is going to be pulled for each time-step? What will then be calculated from this, and what is necessary to store? A well thought-out design will reduce complexity and hassle down the line.

Past this, common sense applies. It’s probably best to store the indicators and signals in the same time-series format as, and alongside, the underlying symbols they’re derived from. More complex signals, or indicators derived from multiple symbols, may even warrant their own calculation and storage process.

You could even go as far as to create a separate indicator feed script which calculates and stores everything separately from the trading bot software itself. The database could then be read by each bot as just another data feed. This not only has the benefit of keeping the system more modular, but also allows you to create a highly optimized calculation function without the complexity of direct integration into a live system.

Whatever flavour of system you end up using, make sure to plan out the data storage and access first and foremost, before starting the rest of the design and implementation process.

By Matthew Tweed

/

Algo Trading for Dummies  -  Collecting & Storing The Market Data (Part 1)

The lifeblood of any algorithmic trading system is, of course, its data — so that’s what we’ll cover in the first two posts of the mini-series.

Photo by Farzad Nazifi on Unsplash

Always Always Collect Any Live Data

For the retail trader, most platforms and brokers are broadly the same: you’ll be provided with a simple wrapper for a relatively simple REST or Websocket API. It’s usually worth modifying the provided wrapper to suit your purposes, and potentially creating your own custom wrapper; however, that can be done later, once you have a better understanding of the structure and requirements of your trading system.

Depending on the nature of the trading strategy, there are various types of data you may need to access and work with: OHLCV data (candlesticks), bid/asks, and fundamental or exotic data. OHLCV is usually the easiest to get historical data for, which will be important later for back-testing of strategies. While there are some sources for tick data and historic bid/ask or orderbook snapshots, they generally come at a high cost.

With this last point in mind, it’s always good to collect any live data which will be difficult or expensive to access at a later date. This can be done by setting up simple polling scripts to periodically pull and save any data that might be relevant for back-testing in the future, such as bid/ask spread. This data can provide helpful insight into the market structure, which you wouldn’t be able to track otherwise.

Alpaca Python Wrapper Lets You Start Off Quickly

The Alpaca Python Wrapper provides a simple API wrapper to begin working with when creating initial proof-of-concept scripts. It serves well both for downloading bulk historical data and for pulling live data for quick calculations, so it will need little modification to get going.

It’s also worth noting that the Alpaca Wrapper returns market data in the form of pandas DataFrames, which have slightly different syntax compared to a standard Python array or dictionary, although this is covered thoroughly in the documentation so it shouldn’t be an issue.

Keeping A Local Cache Of Data

While data may be relatively quick and easy to access on the fly via the market API for live trading, even small delays become a serious slowdown when running batches of backtests across large time periods or multiple trading symbols. As such, it’s best to keep a local cache of data to work with. This also allows you to create consistent data samples to design and verify your algorithms against.

There are many different storage solutions available, and in most cases it will come down to what you’re most familiar with. But, we’ll explore some of the options anyway.

No Traditional RDB For Financial Data Please

Financial data is time-series, meaning that each attribute is indexed by its associated time-stamp. Depending on the volume of data-points, traditional relational databases can quickly become impractical, as in many cases it is best to treat each data column as a list rather than the database as a collection of separate records.

On top of this, a database manager can add a lot of unnecessary overhead and complexity for a simple project that will have limited scaling requirements. Sure, if you’re planning to build a backend data storage solution which will be constantly queried by dozens of trading bots for large sets of data, you’ll probably want a fully specialised time-series database.

However, in most cases you’ll be able to get away with simply storing the data in CSV files — at least initially.

Cutting Down Dev Time By Using CSVs

(Code Snippet to download and store OHLCV data into a CSV) https://gist.github.com/yoshyoshi/5a35a23ac263747eabc70906fd037ff3

The use of CSVs, or another simple format, significantly cuts down on usage of a key resource: development time. Unless you know that you will absolutely need a higher speed storage solution in the future, it’s better to keep the project as simple as possible. You’re unlikely to be using enough data to make local storage speed much of an issue.

Even an SQL database can easily handle the storage and querying of hundreds of thousands of lines of data. To put that in perspective, 500k lines is equivalent to the 1 minute bars for a symbol between June 2013 and June 2018 (depending on trading hours). A well optimized system which only pulls and processes the necessary data will have no problem with overheads, meaning that any storage solution should be fine, whether that be an SQL database, NoSQL, or a collection of CSV files in a folder.

Additionally, it isn’t infeasible to store the full working dataset in RAM while in use. The 500k lines of OHLCV data used just over 700MB of RAM when serialized into lists (Tested in Python with data from the Alpaca client mentioned earlier).

When it comes to the building blocks of a piece of software, it’s best to keep everything as simple and efficient as possible, while keeping the components suitably modular so they may be adjusted in future if the design specification of the project changes.

By Matthew Tweed

/

So You Want to Trade Crypto - Hedging with Cryptocurrency and correlation structure (Part 6)

As a new asset class with historically low correlation to traditional financial products, many see Cryptocurrencies as a useful hedging tool against global downturns. However, the structure of Crypto volatility and correlation relative to market capitalization may prove somewhat detrimental to this use-case.

Photo by Tyler Milligan on Unsplash

A story of Volatility

(Raw data from coinmarketcap.com. These charts show the mean of the 60 day annualized volatility from 1st Jan 2017 to time of writing.)

As within equity markets, we see a small decrease in volatility as the market cap of coins increases (albeit with a relatively low correlation). This can be likened to blue-chip stocks vs mid-caps, with the former providing greater stability due to their established dominance in their respective sectors.

Although market cap is a slightly misleading metric when applied to Cryptocurrencies, it at least implies a higher value for a coin - thus requiring more money to shift its direction dramatically. That being said, volatility has been higher across the board over the last couple of years, as Crypto shifted from the post-2013 accumulation phase into the major bull run, finally pushing to record highs as we moved into the final phase of the bull run and the subsequent bear market entering 2018.

This structure of volatility allows Crypto portfolios and indexes to be constructed similarly to those of equities: high-cap only selection for reduced risk and volatility; mid-caps for higher risk and reward; or a more diversified index to try to capture a middle ground.

The Trend of Correlation

(Raw data from coinmarketcap.com. These charts show the mean of the 60 day Pearson’s Correlation Coefficient against Bitcoin USD from 1st Jan 2017 to time of writing.)

Here we see nearly zero correlation between the market capitalization of a coin and its average correlation to Bitcoin (the historical leader of the Cryptocurrency space).

While this disproves the theory of high cap Cryptos holding a closer correlation to Bitcoin, it highlights the extremely high levels of correlation present throughout the market. This, as mentioned in previous posts, is likely due to the highly speculative and sentiment driven nature of the market, along with its relative immaturity compared to more traditional traded assets.

Interestingly, there isn’t much difference between the mean of correlation and the mean of absolute (positive only) correlation, meaning that we rarely see any negative correlation between ALT/USD pairs and BTC/USD.

Cryptocurrency as an Asset Class for hedging

Crypto holds the useful property of historically low correlation to other asset classes, such as equities and commodities, suggesting it could be a good hedge against external global factors. However, there are two main issues with this plan: Cryptocurrency has never weathered a global financial crisis, and there is strong internal correlation within the Crypto space.

Since Bitcoin, and the rest of the Cryptocurrency market, has been experiencing its own market cycles due to its rapid growth over the past few years, any fluctuations due to correlation with equity markets have been almost unnoticeable - leading many to speculate that Cryptocurrency would continue this trend and make a good hedging tool against global downturns.

This observation happens to come on the back of a decade of huge growth in both US and global equity markets. Investors have been increasingly complacent in their gains over the past few years, and are happy to take greater and greater risks, betting money on more speculative assets such as Cryptocurrencies. However, such high yield assets are always the first to tumble at the onset of a recession, as investors scramble to claw back their risk as their other positions drop.

Always "Different This Time"

Many will claim that it’s somehow “different this time” - it always is, until the inevitable pullback. This was true of the dot-com bubble, and I wouldn’t be surprised if the same fate holds true for Cryptocurrency during a global dip. Not to say that Cryptocurrencies won’t be successful long term - the internet didn’t exactly disappear after 2000. But it should be approached with the same caution as any other high risk investment.

As alluded to in the first half of the article, the levels of volatility and correlation in Cryptocurrency make it difficult to create a well diversified portfolio - no matter what you pick you’re still at the mercy of Bitcoin and can incur the same volatility spikes and drawdowns.

While it may be possible to hedge a portfolio by shorting Bitcoin itself and creating synthetic ALT/BTC pairs, this won’t be able to eliminate the sensitivity of low-mid cap coins to shifts in market sentiment, so would have to be more actively managed.

All-in-all, Cryptocurrencies provide an interesting new opportunity for traders and investors alike - with high risk but much higher reward possibilities. They will not be a miracle financial product, nor a get rich quick scheme - but they can provide something truly new and different for those who have the time to understand and appreciate them.

By Matthew Tweed

/

So You Want to Trade Crypto - Exploiting Cryptocurrency Correlation (Part 5)

There is correlation within any sector or asset class, however there are particularly interesting patterns in Cryptocurrency due to the new and speculative nature of the market, along with its historical pairs structure.

Weakness and Strength

(Raw USD pairs from coinmarketcap.com, raw BTC pairs from poloniex.com)

Historically, the Cryptocurrency space has been dominated and led by Bitcoin, with Bitcoin’s 80%-90% share of total market cap only starting to be challenged in the last couple of years, as covered in “Market Cap Distribution and Rise of Altcoins”. This huge shift in capital distribution caused a bloom in many Altcoins during the major bull run of 2017.

However, despite this redistribution of power in the market, the correlation between different altcoins and Bitcoin stayed strong throughout 2017, suggesting that the market still centers around Bitcoin, both as an indicator of general sentiment and health, and as a safe haven asset.

During the bulk of the bull run, correlation of USD pairs stayed high, with the notable exception of periods prior to Bitcoin pullbacks, such as the dips from $3k and $5k. This seems to form a bit of a leading indicator (albeit a very noisy one), as a divergence in altcoin movement appears to precede a local top and a pullback.

Correlation and investor sentiment

The BTC pairs also tell an interesting tale, moving into negative correlation as the Bitcoin trend weakened before bouncing back once a bottom had been reached.

This view of combined market correlation can also give clues to the sentiment of investors. During bull markets, projects have a high-value premium based on the expectation of future success, meaning that while correlation stays generally positive, the price movement of projects will shift around based on their own news and merits — lowering overall correlation.

However, if we look to 2018 and the bearish trend, we see a very different pattern as fear enters the market. Correlation tends towards 1.0 in a bear market, as sell-offs are sharp across the board due to the cycle of panic. Individual projects are no longer valued on their own merits, instead being sold off at whatever price they’ll fetch as the market falls in unison.

This fear can be seen clearly as Bitcoin’s first sell-off from $19.6k to $6k takes effect, followed by a very slight regaining of hope after the bounce (drop in correlation) before tending back towards 1.0 as we moved for a retest of $6k.

ALT/BTC pairs and hedging

Historically, most Cryptocurrency trading was done with Bitcoin as the base pair, and even now we still see $100Ms daily through Bitcoin pairs. Back in the days when both regulation and market volume were limited, this made a lot of sense. A Crypto-Crypto exchange didn’t need to deal with the hassle of accepting and storing fiat currencies, nor the regulatory issues of handling money.

This had the effect of tying the USD value of altcoins closer to the shifts of Bitcoin, which is still a factor today (although to a lesser extent). This, along with the psychology of fear during a bear market, has led to the levels of correlation we see in the USD pairs during large pullbacks.

In theory, this makes ALT/BTC pairs extremely useful for trading: during a bull market you’re betting that your chosen coin does better than Bitcoin on its technical merits; during a bear market you expect the ratio to stay relatively level as your coin maintains a correlation near to 1.0 against BTC/USD.

However, in reality we also start to see correlations between the ALT/BTC pairs and Bitcoin also rising during major downturns, suggesting that Bitcoin is used as a safe haven asset to hedge against pullbacks, as people rotate out of higher risk and reward altcoins.

This causes issues for the assumption of a hedged market exposure from trading Bitcoin pairs during a pullback. It does, however, provide yet more signals as to market sentiment. Since you will often see sharp dips in BTC/USD being mirrored across ALT/BTC pairs, a deviation from this trend is a helpful indicator of market strength and bullish sentiment.

Trading the correlation

An understanding of how different Cryptocurrencies react within the market allows you to optimize portfolios and reduce beta to Bitcoin during downturns, while maintaining upside during the bulk of a bull run.

Well established coins with higher market caps tend to keep high correlations to Bitcoin throughout the market cycle along with lower volatility (at least in terms of crazy Crypto volatility). Meanwhile, low-mid cap coins tend to be used as high risk high reward speculation tools during bull markets, but drop sharply as market sentiment shifts.

The optimal portfolio would likely re-balance periodically between a mix of mid and high cap coins, weighted by a market sentiment metric, while also hedging some exposure by shorting Bitcoin to create synthetic ALT/BTC pairs.

While it is impossible to hedge away all risk in such a new and underdeveloped market, such a portfolio may help to ease the nerves of the more risk averse investor, while maintaining exposure to Cryptocurrency as a whole.

by Matthew Tweed

/

9 Most Commonly Asked Questions About MarketStore And Answers To Them

Photo by William Stitt on Unsplash

Each of these articles seeks to explain the technology we build alongside our Alpaca algo trading brokerage. These articles have led to active discussions on Reddit and Medium, and it became clear to us that there is a lot of interest, and a pretty large need, in the community for a timeseries database dedicated to financial market data. The database world and software engineering in general have changed so much over the last decade, as we’ve seen an explosion in open source programming and databases. We are now seeing some people actively using the open source project and contributing code in the GitHub repository.

In social media and offline, we’ve been answering questions and responding to comments, but today we wanted to take the opportunity to put all the queries and responses together and share them with the entire community, so everyone can get a look at the answers in a single post.

Q: Does MarketStore store data in memory?

A: No. MarketStore is designed to run on a reasonably sized host without a huge hardware investment. If you have lots of cash, software technology is irrelevant, but what software engineering can bring is the ability to do a much better job with cheaper hardware. MarketStore’s primary use case is to store and distribute years of data at second-level granularity for more than tens of thousands of series (US equities and crypto coins across exchanges can easily reach this size). The data size can be a few terabytes, and it is still not very common to have that much RAM in commodity hardware. MarketStore instead stores everything on disk, but the on-disk format is nearly identical to the layout in memory, and thanks to SSD evolution, MarketStore can load the data at a speed competitive with in-memory storage.

Q: How does it make sense to compare with PostgreSQL when the benchmark includes DataFrame loading?

A: Storing the data offloaded from application processes is not useful on its own if you cannot then use it. MarketStore is mainly used in the context of AI machine learning and backtesting, and such applications typically load the data into a tabular structure such as a Pandas DataFrame. That is why MarketStore’s network protocol is a byte sequence in MessagePack, so that inefficient JSON deserialization can be avoided. The client can load the delivered bytes into memory as a C array, which is what is used behind a DataFrame.

Q: How is it better compared to InfluxDB?

A: We have not compared the performance with InfluxDB, but InfluxDB and other general-purpose timeseries databases target use cases such as system metrics or activity log analysis. Those require a more flexible data structure and don’t necessarily need specific functions such as timezone-aware aggregates. The flexibility comes with overhead, as tradeoffs always do, and MarketStore should be much faster and more cost effective if the use case is financial market data.

Q: Why are you comparing with PostgreSQL when Timescale should be faster?

A: You can send us the benchmark results if you have them, but in our internal experiments, Timescale is even slower than PostgreSQL when compared with MarketStore. The loading time at the database server level for Timescale is 2-3x slower than PostgreSQL, since Timescale makes use of table partitioning (aka table constraint exclusion), which needs to open lots of files from disk. Partitioning gives an advantage when filtering a small slice out of a large amount of data, but it does not work better if you scan most of it. MarketStore stores the data in an optimal way on disk and reads sequentially, direct to memory, compared to those relational databases, so it is way faster.

Q: MarketStore can be used only for historical data but not for real-time data right?

A: There is a new feature coming soon to MarketStore that will allow streaming and realtime push on every new data write. MarketStore was originally designed to help our algo trading platform, which builds trading algorithms using deep learning and runs them in the real market, and it had JSON websocket streaming. The feature had been pulled out for the time being so that MarketStore could find a way to fit larger use cases, but thankfully it is now back in as a plugin. We have been testing this with thousands of updates every few seconds, and so far it is working perfectly.

Q: Why do I need this for machine learning? I can load the data from disk without a problem

A: If your training process doesn’t use much data (e.g. just daily bars from one stock), then you probably don’t need MarketStore for performance reasons. What we needed to do on the Alpaca trading platform required a server large enough to store intraday data across the entire market (which can be up to the terabyte range) and to load the necessary series data back and forth. If you are familiar with a typical machine learning training process, you can tell how the training iterations load random data from the pool. That said, MarketStore is not just about performance; it is also about the convenience of a uniform way to access historical and real-time timeseries data, without worrying about how to manage local files etc. And the built-in data ingestor can load the data without you even writing any code.

Q: Where is the installer?

A: Sorry, at the moment we are not providing a one-click installer! Instead, we package the server process into a docker container image, so if you have docker, you can start it in a second.

Q: Why is it open sourced?

A: Because there is a problem to be solved! MarketStore was implemented as proprietary software for our internal use and has been used in our production, but we have also seen the common problems affecting many people in the space. Our mission at Alpaca is to help individual investors with technology and improve the algo trading environment, regardless of whether we give that information away to users or offer it in a premium package. This kind of product has only been accessible to financial institutions with large capital resources, but now we are making it available to anyone who is eager to try it out! That’s awesome, isn’t it!?

Q: I found a bug

A: Please report it in the GitHub issue!

/

So You Want to Trade Crypto — Market Cap Distribution and Rise of Altcoins (Part 4)

Bitcoin dominating >90% of the total value of the market to <40%

From the start of 2016 to the end of 2017, we’ve gone from Bitcoin dominating >90% of the total value of the market to <40%.

This flow of capital has led to a boom in alternative Cryptocurrencies, which offer newer technologies and wider use-cases.

Market Cap Misconceptions

The first point to address is the issue with market capitalization as a metric when applied to Cryptocurrency. For a stock, the market cap is calculated as:

     Price per share * shares outstanding

Which makes sense, as each share represents a stake in the assets and profits of the company. This same calculation is applied to Cryptocurrencies:

     Price per token * tokens available

This starts to cause issues due to the ease with which a new token can be created and added to one of the dozens of small exchanges.

If someone creates a new coin with a total supply of 100B, manages to get it listed on a small exchange, and trades it a few times with their friends for $1 per coin, it technically has a market cap of $100B. But in reality, it has no true value and no trading volume to sustain any kind of selling pressure.

Maintaining an artificially inflated market cap

Adding to this, there are many coins that do have significant daily trading volumes while maintaining an artificially inflated market cap, as the majority of the supply is locked up by developers and isn’t tradeable. This raises serious questions about how investors and traders price in the total supply of a token, and whether the theoretical value of a project lines up with reality.

Many people also misunderstand what market capitalization means in terms of capital flow.

A market cap of $100B does not mean that $100B has been invested into the token, as shown earlier. Nor does a token’s market cap changing from $100B to $150B or $50B mean that $50B of capital has changed hands.

Not enough money in the system to redeem every token

The profit from a Cryptocurrency investment should be treated as “paper gains” until cashed out or hedged - there is simply not enough money in the system to redeem every token at anywhere near the value of its market cap.

Despite this, for a Crypto with sufficient trading volume and age, market cap can be useful for rough comparisons, but make sure to always take it with a grain of salt.

Shift in Market Cap Distribution

(Market Cap Values from coinmarketcap.com)

As we can see, the last few years have not been kind to Bitcoin’s historical dominance of the Crypto market, with many new projects taking off in the first half of 2017.

For many years, Bitcoin has held onto its “first mover advantage”. However, political issues surrounding the development of Bitcoin caused a slow down in advancement — creating a void for a multitude of altcoins to fill.

Result of “ICO Mania”

Over the past year this new crop of development has accelerated, with smart contract platforms taking many of the top spots. 2017 also saw the rise of “ICO Mania”, with dozens of new tokens and projects gaining investment from speculators looking for yet higher yields on their equity.

In the long term, Bitcoin will likely continue its decline in market share, as its older technology simply cannot compete with new offerings. As long as the political issues surrounding development continue, this will not change. Bitcoin made for an excellent proof of concept, but if it can’t adapt, it risks becoming the Myspace of the Crypto world.

Trading the Altcoin Boom

With altcoins making consistent gains in market share and the relative stagnation of Bitcoin development, Bitcoin is likely to drop from the top spot over the next couple of years (if not sooner) in favour of a newer generation Cryptocurrency.

This shift will likely see a huge change in the attitude and composition of the market as a whole, as everyone tries to pile into the new top coin and related technologies, so that they can ride the hype train.

As always, it is best to keep a level head and stick to your trading and investment strategies. A firm understanding of the underlying technology and use-case of a wide range of Cryptocurrencies will serve well in positioning yourself to take advantage of this shift.

There are a wide variety of projects which all have their use-cases

While smart contract focused Blockchains are some of the leaders at the moment, in the long term their value and success will be measured by the applications and businesses that run on top of them. Meanwhile, we shouldn’t forget the other uses of Blockchain, such as ledgers for supply chains, auditing or even Internet of Things devices. There are a wide variety of projects which all have their use-cases.

Cryptocurrency investments should be managed like a stock portfolio. You wouldn’t place your entire value into a single stock; similarly, you shouldn’t be overly attached to a single coin. A well balanced holding of different projects across different areas can help hedge against black swan events in the market while profiting from the broad growth of Crypto as an asset class.

by Matthew Tweed

/

50x Faster Bitcoin Price Data Powered by MarketStore for AI Trading

In our last post “How to Setup Bitcoin Historical Price Data for Algo Trading in Five Minutes”, we introduced how to set up bitcoin price data in five minutes and we got a lot of good feedback and contributions to the open source MarketStore.

Photo by chuttersnap on Unsplash

The data speed is really important

Today, I wanted to tell you how fast MarketStore is using the same data so that you can see the performance benefit of using the awesome open source financial timeseries database.

Faster data means more backtesting and more training in machine learning

Faster data means more backtesting and more training in machine learning for our trading algorithms. We are seeing a number of successful machine learning-based trading algos in the space, but one of the key points we learned is that data speed is really important. It’s important not just for backtesting, but also for training AI-style algorithms, since that by nature requires an iterative process.

This is another post to walk you through step by step. But TL;DR, it is really fast.

Setup


Last time, we showed how to set up historical daily bitcoin price data with MarketStore.

This time, we store all the minute-level historical prices using the same mechanism, called a background worker, but with a slightly different configuration.

 

root_directory: /project/data/mktsdb
listen_port: 5993
# timezone: "America/New_York"
log_level: info
queryable: true
stop_grace_period: 0
wal_rotate_interval: 5
enable_add: true
enable_remove: false
enable_last_known: false
triggers:
 - module: ondiskagg.so
   on: "*/1Min/OHLCV"
   config:
     destinations:
       - 5Min
       - 15Min
       - 1H
       - 1D
bgworkers:
 - module: gdaxfeeder.so
   name: GdaxFetcher
   config:
     query_start: "2016-01-01 00:00"
     base_timeframe: "1Min"
     symbols:
       - BTC

Almost 2.5 years with more than 1 million bars

The difference from last time is that the background worker is configured to fetch 1-minute bar data instead of 1-day bar data, starting from 2016-01-01. That is almost 2.5 years of data, with more than 1 million bars. You will need to keep the server up and running for a day or so to fill all the data, since GDAX’s historical price API does not allow you to fetch that many data points quickly.

Again, the data fetch worker carefully controls the fetch speed in case the API returns a “Rate Limit” error, so you can just sleep on it.

An additional configuration here is the “on-disk aggregate” trigger. It aggregates the 1-minute bar data into lower resolutions (here: 5 minutes, 15 minutes, 1 hour, and 1 day).

Check the longer time horizon to verify the entry/exit signals

In a typical trading strategy, you need to check a longer time horizon to verify entry/exit signals even if you are working at the minute level, so this is a pretty important feature. You would need a pretty complicated LEFT JOIN query to achieve the same time-windowed aggregates in SQL, but with MarketStore, all you need is this small section of the configuration file.

The machine we are using for this test is a typical Ubuntu virtual machine with 8 Intel(R) Xeon(R) E5-2673 v3 @ 2.40GHz CPUs, 32GB RAM, and an SSD.

The Benchmark

Unfortunately lots of people in this space are using some sort of SQL database

We are going to build a DataFrame object in Python which holds all the minute-level historical price data of bitcoin since January of 2016, fetched from the server. We compare MarketStore and PostgreSQL.

PostgreSQL is not really meant to be the data store for this type of data, but unfortunately lots of people in this space are using some sort of SQL database for this purpose, since there has been no other alternative. That’s why we built MarketStore.

The table definition of the bitcoin data in PostgreSQL side looks like this.

btc=# \d prices
              Table "public.prices"
 Column |            Type             | Modifiers
--------+-----------------------------+-----------
 t      | timestamp without time zone |
 open   | double precision            |
 high   | double precision            |
 low    | double precision            |
 close  | double precision            |
 volume | double precision            |

The code looks like this.

import pandas as pd            # used to build the DataFrame below
import pymarketstore as pymkts # MarketStore's Python client

# For postgres (conn is a standard DB-API connection, e.g. psycopg2)
def get_df_from_pg_one(conn, symbol):
    tbl = f'"{symbol}"'
    cur = conn.cursor()
    # order by timestamp, so the client doesn’t have to do it
    cur.execute(f"SELECT t, open, high, low, close, volume FROM {tbl} ORDER BY t")
    times = []
    opens = []
    highs = []
    lows = []
    closes = []
    volumes = []
    for t, open, high, low, close, volume in cur.fetchall():
        times.append(t)
        opens.append(open)
        highs.append(high)
        lows.append(low)
        closes.append(close)
        volumes.append(volume)

    return pd.DataFrame(dict(
        open=opens,
        high=highs,
        low=lows,
        close=closes,
        volume=volumes,
    ), index=times)

# For MarketStore
def get_df_from_mkts_one(symbol):
    params = pymkts.Params(symbol, '1Min', 'OHLCV')
    return pymkts.Client('http://localhost:6000/rpc'
                         ).query(params).first().df()

You don’t need much client code to get the DataFrame object

The input and output are basically the same: a symbol name is given, the remote server is queried over the network, and one DataFrame comes back. One strong benefit of MarketStore is that you don’t need much client code to get the DataFrame object, since the wire protocol is designed to deliver arrays of numbers efficiently.

The Result

First, PostgreSQL

%time df = example.get_df_from_pg_one(conn, 'prices')
CPU times: user 8.11 s, sys: 414 ms, total: 8.53 s
Wall time: 15.3 s

And MarketStore

%time df = example.get_df_from_mkts_one('BTC') 
CPU times: user 109 ms, sys: 69.5 ms, total: 192 ms
Wall time: 291 ms

Both results of course look the same, as shown below.

In [21]: df.head()
Out[21]:
                       open    high     low   close   volume
2016-01-01 00:00:00  430.35  430.39  430.35  430.39   0.0727
2016-01-01 00:01:00  430.38  430.40  430.38  430.40   0.9478
2016-01-01 00:02:00  430.40  430.40  430.40  430.40   1.6334
2016-01-01 00:03:00  430.39  430.39  430.36  430.36  12.5663
2016-01-01 00:04:00  430.39  430.39  430.39  430.39   1.9530

 

50 times difference

A bitcoin was about $430 back then… Anyway, you can see the difference between 0.3 and 15 seconds, which is about 50 times. Remember, you may need to get the same data again and again for different kinds of backtesting and optimization, as well as for ML training.

Also, you may want to query not just bitcoin but also other coins, stocks, and fiat currencies, since the entire database usually wouldn’t fit into your main memory.

Scalability advantage in MarketStore

MarketStore can serve multiple symbols/timeframes in one query pretty efficiently, whereas with PostgreSQL and other relational databases you need to query one table at a time, so there is also a scalability advantage in MarketStore when you need multiple instruments.

Querying 7.7K symbols for US stocks

To give some sense of this power, here is the result of querying 7.7K symbols for US stocks, done as an internal test.

%time dfs = example.get_dfs_from_pg(symbols)
CPU times: user 52.9 s, sys: 2.33 s, total: 55.3 s
Wall time: 1min 26s
%time dfs = example.get_dfs_from_mkts(symbols)
CPU times: user 814 ms, sys: 313 ms, total: 1.13 s
Wall time: 6.24 s

Again, the amount of data is the same, and in this case each DataFrame is not as large as in the bitcoin case, yet the difference when expanding to a large number of instruments is significant (more than 10 times). You can imagine how, in real life, these two factors (per-instrument and multi-instrument) multiply the data cost.

Alpaca has been using MarketStore in our production

Alpaca has been using MarketStore in production for algo trading use cases, both for our proprietary customers and for our own purposes. It is actually amazing that this software is available to everyone for free, and we leverage this technology to help our algo trading customers (early access signup is here).

Thanks for reading and enjoy algo trading!


 

/

MarketStore, the financial time series database, is now open source

We are happy to announce MarketStore is now open source! MarketStore is a database server optimized for financial timeseries data written in pure Go, designed and developed by Alpaca. You can think of it as an extensible DataFrame service that is accessible from anywhere in your system, at higher scalability.

Read More
/

Where Do We Stand in the AI Hype Cycle?

Working for an AI-centered algorithmic trading company has allowed me to gain insight into two of the most disruptive industries of the modern day: AI and finance. This precise positioning in the middle of so many up-and-coming industries has given me a unique perspective on the future of artificial intelligence and crypto trading.

What Is the Hype Cycle?


Great picture explaining the hype cycle from Wikipedia

To begin to understand the opportunities associated with these two technologies, we first must comprehend the hype behind them. Gartner, a prominent IT research firm, spearheaded the hype cycle concept, outlining 5 key phases that a trend goes through. This theory has been proven to work, as there are many examples of trends that have fallen into this established pattern. Not everything is the same, of course, but you can use the patterns of this cycle to predict where a particular trend will go. A fascinating part of this cycle is that in order to join the mainstream hype, a technology needs to experience both an upward peak and a downward trend of disillusionment, exhibiting an oscillating, volatile nature.

The Hype Cycle In Action: The DotCom Bubble a.k.a. Internet

One of the best parts of living in Silicon Valley is that you can hear the real, raw stories about the historical moments that have taken place in technology. I have several friends who experienced the notorious DotCom bubble hype.

The crazy uptrend started in the 90’s, when people, especially in the tech space, started to claim there would be a new type of economy within the digital world, one characterized not by tangible products and profit but rather by the idea of a new way of doing business. Some people compare it with today’s ICO hype, as many of the current ICO projects don’t have real products yet manage to raise considerable amounts of money.

It took roughly 20 years for the Nasdaq index to reach its previous peak in 2017. During that time period, companies like Google, Apple, Amazon and Facebook grew and flourished, and our entire lifestyle was transformed by the Internet. Let me emphasize this again. It took a full 20 years for the original vision of the internet to come to fruition, even with super-smart, hard-working innovators.

How did Crypto Start?

Now let’s turn to crypto. 2017 was a great year for the crypto space, as bitcoin prices not only soared 1,000%, but, more importantly, the philosophy behind bitcoin and blockchain technology traveled into the mainstream hype. Even my mom has now heard about it.

It is easy to mistakenly think that crypto is quite a recent trend, but the Bitcoin paper by Satoshi Nakamoto was actually first published in 2008. It took almost ten years for this trend to enter people’s daily lives and affect the common person. Over the last decade, so many risk-takers have put in enormous efforts to push this once-naive technology to such a level, making it applicable to a wide range of things, from easy-to-use wallet systems to merchant spending infrastructure. If you haven’t checked out the documentary Banking on Bitcoin by Christopher Cannucciari, which offers a fantastic overview of the origins and path of Bitcoin, I strongly recommend watching it.

And Where Is Crypto Today?

If you are not too young, you might remember a prominent event in bitcoin history: the Mt. Gox case. It was 2011 when the firm suffered a security breach and lost almost all of its customer assets. By 2013, I was starting my own startup and had a couple of friends in the bitcoin startup community, but I was completely out of the loop regarding the growing mainstream bitcoin hype. I never imagined bitcoin would be something my mom would talk about within 5 years. Note that this 2011-2013 time horizon was a full 5 years after the publishing of the Bitcoin paper, and even bitcoin connoisseurs never fathomed the recent crypto craze would occur.

With this being said, I still have no clue what future crypto will make over the next 10 years. Who could have imagined, 20 years back, that you would be able to connect with your high school friends through Facebook, or that Amazon would start something called a cloud business leveraging its online bookstore infrastructure? I’ll be humble and admit I probably don’t know how crypto technology will change the world exactly. People are excited about the opportunity behind this technology as well as how it can change the economy, and some anarchists go as far as to say that our entire sense of government will be disrupted. The only thing I can say at this point is that this crypto trend has just passed the peak of excitement and will probably see a huge depression over the next few years, as the hype cycle predicts, but will have a bigger impact over the next 10-20 years.

When Current AI Boom Started?

It is a very well known fact that the current AI trend is actually the third one in AI history. The first one started right after the modern computer was born, in the 60’s-70’s, and the second one arose in the 90’s, when new theories emerged. These two AI booms were significant but never reached full fruition, as the computing power was simply not enough to accomplish what they aimed for.

The third era emerged from the memorable 2012 ImageNet competition, when the Deep Learning approach by the Toronto team outperformed all previous techniques by far and approached the human recognition level. Later research showed that the use of GPUs could realize the theoretical ideas at a realistic cost. The GPU, of course, is only one of many hardware approaches, alongside others like FPGAs, but it did prove that computation power had caught up with theory to some extent.

Since then, the chip maker NVIDIA has jumped into the space, turning itself from a gaming company into an AI business. Google established the Google Brain project, hiring many top-notch brains from academia, competing with companies like Baidu in the self-driving car space, and beating the human champion of Go with AlphaGo, backed by many trials and errors with acq-hired startups. Around 2014-2015, we also saw the birth of many Deep Learning startups, many of which either no longer exist or were acquired by big players; Alpaca was born around this time as well.

And Where Is AI Today?

It’s 2018, and it’s been less than 6 years since the ImageNet shock. If you compare it with the bitcoin space, it is around the time Mt. Gox was in trouble and I had no clue what people were talking about. I can see that AI may have some trouble soon; we are already starting to see technologies in this space fall short of what we expect, such as underperforming chatbots and self-driving cars, but we will just have to wait and see exactly how the AI trend as a whole plays out.

The best time to invest in AI is right now, based on the lessons learned from crypto. If you compare this trend with the internet boom, we are either even before the bubble or, from another angle, only around 2003-2004, when things like Google came out to the mainstream. I sometimes see people think AI means Deep Learning, but that is not true; nor is it just playing Go or self-driving cars. Artificial intelligence possesses a myriad of opportunities and applications and has the potential to change every aspect of human life, including finance; we have no idea of the potential impact of this monumental technology. There are many leaders who offer specific insights and arguments regarding the future of AI technology, such as Elon Musk or Mark Zuckerberg. They predict it could kill people, or that there will be a singularity. The only thing I can say for sure is that we are underestimating the impact of this trend, and we will only be able to truly determine its effects 20 years from now. Today, however, Alpaca can take pride in the fact that we are among the ones pushing the boundaries of this undefined space, paving the way for a new world full of possibility and innovation.

“There is only a ‘one in billions’ chance that we’re not living in a computer simulation. Our lives are almost certainly being conducted within an artificial world powered by AI and highly-powered computers, like in The Matrix” — Elon Musk
/