
Tiingo’s New End-of-Day (EOD) Price Engine


Announcing Tiingo Composite Price Feeds

You can access the data here: Tiingo API – EOD Daily Data


This year our company hit a major turning point, with revenue rising rapidly. The first thing on our list? Get better data. Announcing our new End-of-Day (EOD) Price Data Engine and the API it powers.

Because of each and every one of you, we were able to expand our data budget 15-fold in the past couple of months. And today, I am proud to announce our new Data Engine initiative. As of June 28th, 2017, we have converted 98% of equities and 60% of mutual funds over to the new engine; the rest are being migrated this week.

But what is the new methodology? Glad you asked. We went back to the drawing board and asked ourselves, "If ISPs and web hosts have redundancy, why don't we as a data firm?" We started there and expanded on it.

So we broke our process into 4 phases, as you can read below. In summary, each ticker must pass through all 4 phases before its prices are made available:

Phase 1: Each ticker is covered by at least 2 different data providers. This ensures redundancy and also gives us a way to cross-check updates.

Phase 2: Each data provider's data must then pass our statistical error checks. If there are any errors, our system tries to auto-correct them. One example of what our statistical engine does is detect duplicates.

Phase 3: Human intervention. Companies do weird things, and markets haven't always been automated. This makes it very hard for computers to detect things like re-listings, sparse data on less liquid companies, or companies that pre-date the computer era. Our systems alert us when the statistical engine can't auto-correct. Each of our human steps is documented, so we can explain what decisions were made and why.

Phase 4: AI. Once we have enough data from Phases 2 and 3, our systems can start auto-correcting certain errors. Note: readers of the blog know we are skeptical of full automation, and of most of the AI methods out there, when it comes to financial data. The AI will always be conservative, but it is an important step in error-checking.

Only after the above 4 phases do we release price data for a ticker. Now imagine that times 40,000.

Just a quick note: EOD data is very hard to get right, especially with companies doing weird things with listings, delistings, and restructurings. So if you identify an issue, please let us know. We are actively working hard on transparency and on creating better data for all, but it will require a joint effort. We look forward to working on this together!

Here are some graphics illustrating the process:

What It Takes For Prices To Be Published On Just 1 Ticker

(Now multiply this by 40,000)

Phase 1: Source data from multiple providers for both redundancy and error-checking

We've gone to a variety of different data vendors, each with different methods of access, to ensure that the data feeds remain as unique as possible. Our goal is to have a minimum of 2 data providers per ticker. We are using AAPL in the examples below.

4 Different Data Providers for AAPL

We then extend and compare the historical EOD data from each provider. We even use some datasets that are no longer around but offer historical coverage that far surpasses the others.
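To make the idea concrete, here is a toy sketch of a provider cross-check (a hypothetical function with made-up prices, not our production code): two feeds for the same ticker are compared date by date, and any date that disagrees beyond a tolerance gets flagged.

```javascript
// Toy cross-check: compare two providers' closing prices for one ticker
// and flag any date where they disagree by more than a relative tolerance.
function crossCheck(feedA, feedB, tolerance) {
  const discrepancies = [];
  for (const date of Object.keys(feedA)) {
    if (!(date in feedB)) continue; // only compare overlapping dates
    const relDiff = Math.abs(feedA[date] - feedB[date]) / feedA[date];
    if (relDiff > tolerance) discrepancies.push(date);
  }
  return discrepancies;
}

// Made-up AAPL closes from two hypothetical providers
const providerA = { "2017-06-26": 145.82, "2017-06-27": 143.73 };
const providerB = { "2017-06-26": 145.82, "2017-06-27": 143.01 };
console.log(crossCheck(providerA, providerB, 0.001)); // flags "2017-06-27"
```

A flagged date would then move on to the kind of statistical checks described in Phase 2.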

Phase 2: Run Statistical Error Checking on Each Data Source

We then use a proprietary suite of statistical tools to clean each data feed and detect issues or errors within it. This helps us score and keep track of each feed, and also automate fixes for common errors we find, e.g. duplicate values.
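As a flavor of the duplicate-value example, a minimal de-duplication pass might look like this (illustrative only; the actual engine is more involved):

```javascript
// Illustrative duplicate check: keep only the first bar seen for each date.
function removeDuplicateBars(rows) {
  const seen = new Set();
  const cleaned = [];
  for (const row of rows) {
    if (seen.has(row.date)) continue; // a second bar for the same date is a duplicate
    seen.add(row.date);
    cleaned.push(row);
  }
  return cleaned;
}

const feed = [
  { date: "2017-06-26", close: 145.82 },
  { date: "2017-06-26", close: 145.82 }, // duplicate row from the provider
  { date: "2017-06-27", close: 143.73 },
];
console.log(removeDuplicateBars(feed).length); // → 2
```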

Phase 3: Good-ole Human Intervention

Computers are smart, but they don't understand qualitative history quite well enough yet. When our statistical engine catches a discrepancy that it can't auto-fix, we go on a mission to dig into what happened. This involves anything from scanning through historical press releases and financial statements to making phone calls, or whatever else we need to do to get to the bottom of it.

When you alert us of an error, we go out of our way to fix it. We built this entire engine because users identified an error and we realized one data source wasn’t going to cut it. We take these reports that seriously.

(We spent all of our money on data, not clothes)


Phase 4: AI

In order for robots to take over, they need to learn. We keep track of and audit every override we've made in Phases 2 and 3, so once we have enough data, we will implement an AI that learns how to auto-correct better and when to alert us to issues. Those of you who frequent the blog, or know our team, know we are very wary of AI's ability to fix data errors. When this is implemented, it will be incredibly conservative, as we will always prefer Phase 3.



With all the data now cleaned, derived, and made into composite indices, we release it to you all in a single EOD data source.

Making the World’s Best Screener for Our Users Pt. 2


If you haven’t seen part one – read it here: Making the World’s Best Screener for Our Users

As we've improved our screener, we couldn't stand idly by without updating our custom metrics creator. Tiingo was the first major fintech company to allow any user to create their own stock screening metric.

And as time passed, we realized we could make it so much better for you.

Announcing: The Sexy, Newly Revamped Custom Metrics Creator:

The New “IDE”

Programmers use IDEs to code, and we wanted to make that simple for everybody. We created our own version that is so simple that if you know Excel, you know how to make a custom metric.

And the best part? Each line will give you the number calculated so you get feedback immediately.
IDE Example

We even added autocomplete:

IDE Autocomplete


The Shifting Distribution

One of the most impressive features we've ever coded on Tiingo is taking your custom metric and calculating analytics on it immediately. The horsepower this took was incredible, and it pushed our coding abilities. Coding the shifting distributions, especially with custom metrics, took 80% of our time.

As you enter your metric, you will see the distribution of your metric across the entire Tiingo Universe:


And now when you screen, just like in the screener, the distribution of your metric will shift:


The Docs

This blog post couldn't cover all of the metrics and functions (like calculating the mean EPS over the past two years), so we created comprehensive documentation that lets you harness the full power of the new screener.
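As an illustration of the kind of formula a custom metric expresses, here is the "mean EPS over the past two years" example as plain code (the function name and data are hypothetical; in the creator you would write the equivalent Excel-style formula):

```javascript
// Mean EPS over the trailing two years, i.e. the last eight quarters.
function meanEpsTwoYears(quarterlyEps) {
  const lastEight = quarterlyEps.slice(-8);
  const sum = lastEight.reduce((total, eps) => total + eps, 0);
  return sum / lastEight.length;
}

// Ten quarters of made-up EPS figures; only the last eight are averaged.
const eps = [1.1, 0.9, 1.3, 1.2, 1.4, 1.0, 1.5, 1.6, 1.8, 2.0];
console.log(meanEpsTwoYears(eps)); // mean of the final eight values
```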

The Docs


We know you’re going to love our new Custom Metrics: Tiingo Custom Metrics

Making the World’s Best Screener for Our Users Pt. 1


It's been over a year since Tiingo launched its first screener. We were attempting to advance the power of screeners, and we had grandiose ideas of how to do it. We were the first to:

  1. Allow users to create their own metrics
  2. Create a new UI that challenged existing assumptions of screeners

We're never happy with the status quo, so we decided to challenge ourselves further. We were going to make the custom metrics more intuitive, the screener more informative, and the user experience so seamless that you would have no idea you had just screened through ten million datapoints, because it took 200ms.

Announcing: The Sexy, Newly Revamped Screener:

Tiingo Screener

The New Notebook

We’ve consolidated the screener overview page into a notebook format. This allows for easy switching among screens and reduces clutter while saving you clicks. We strive for beautiful minimalism here at Tiingo:

Tiingo Screener Notebook format

Searchable Filters

While the old drag and drop was nice, we wanted to come up with a new way to add/remove filters. We’ve created a beautiful searchable table, organized by the type of metric.

Metric Selection Table

Shifting Distributions

We believe data visualization should be done with a level of minimalism. We don't want charts for the sake of charts. And research has shown, time and time again, that less is more when conducting analysis with numbers.

So we started off with the concept that when somebody screens, they should have context.

Is a filter for a P/E Ratio between 10-25 too common?

PE Between 10 and 25 with Distribution

But that wasn’t enough.

If we’re looking at a PE Ratio of 10-25, what kind of companies are we choosing? Are they small-cap or large cap? The Market Cap distribution has shifted and we want to know that.


The Shifting Distribution


We can see that a P/E of 10-25 leans slightly toward larger companies.

How about a P/E ratio of -20 to 0?

PE -20 to 10 Market Cap

The distribution has shifted largely toward smaller market caps.
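Conceptually, the shifting distribution is just "filter on one metric, then re-bucket the survivors by another." A toy sketch (hypothetical tickers and arbitrary cap thresholds; the real screener computes full distributions, not three buckets):

```javascript
// Screen a universe by P/E, then bucket the survivors by market cap
// to see how the market-cap distribution shifts under the filter.
function marketCapBuckets(universe, peMin, peMax) {
  const buckets = { small: 0, mid: 0, large: 0 };
  for (const stock of universe) {
    if (stock.pe < peMin || stock.pe > peMax) continue; // apply the screen
    if (stock.mktCap < 2e9) buckets.small += 1;         // under $2B
    else if (stock.mktCap < 10e9) buckets.mid += 1;     // $2B to $10B
    else buckets.large += 1;                            // over $10B
  }
  return buckets;
}

const universe = [
  { ticker: "AAA", pe: 12, mktCap: 50e9 },
  { ticker: "BBB", pe: 18, mktCap: 1e9 },
  { ticker: "CCC", pe: -5, mktCap: 0.5e9 },
];
console.log(marketCapBuckets(universe, 10, 25)); // → { small: 1, mid: 0, large: 1 }
```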

Suddenly, you have context for all of your screening metrics. And the best part? It's all done in a flash. If you don't believe us, check it out here (no registration required): Tiingo Screener

The Results

We wanted our users to be able to see the latest data – quickly. Now the results show you metrics seamlessly and beautifully:

Screen Results

And you can simply click to see more about a company:

Screen Results Expanded

We know you’re going to love this new screener: Tiingo Screener

Presenting the Tiingo API


It’s here, it’s finally here.

The Official Tiingo API has launched, after months and months of people requesting it, followed by months and months of dev time. The reason it took so long? We didn't just do standard API work; we built infrastructure in exchange data centers to significantly reduce costs for everyone from everyday users to FinTech and institutional players.

For example, EOD data is included in the Tiingo price, while for FinTech users, real-time data is $500/month instead of $4,200/month.

In summary: the entire API was built around the question, "how much can we give and get away with it?" instead of, "how much can we charge and get away with it?"

And with that here’s the lowdown:


There are a few limitations:

  1. Every user is entitled to 40GB of bandwidth a month. Yes, we realize that's insane; that's why we did it.
  2. Every user gets 10k requests an hour and 100k requests a day. We plan to increase these as more datasets come online and as we phase out of beta. You can monitor your usage at:
  3. In order for us to track these limitations, we will need you to create an account (hope that’s ok!)
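If you want your client to stay under these caps, a simple sliding-window counter does the trick (an illustrative sketch, not an official client library):

```javascript
// Sliding-window request guard: allow at most maxPerHour requests
// in any trailing one-hour window.
function makeRateLimiter(maxPerHour) {
  let timestamps = [];
  return function allowRequest(nowMs) {
    const hourAgo = nowMs - 3600 * 1000;
    timestamps = timestamps.filter((t) => t > hourAgo); // keep only the last hour
    if (timestamps.length >= maxPerHour) return false;  // over the cap: hold off
    timestamps.push(nowMs);
    return true;
  };
}

const allow = makeRateLimiter(2); // tiny cap just for demonstration
const t0 = Date.now();
console.log(allow(t0), allow(t0 + 1), allow(t0 + 2)); // → true true false
```

The same idea extends to the daily cap by tracking a second, 24-hour window.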


We've worked hard to make the documentation super simple to use. You can view it here:


Here are our datasets:

Included in the Tiingo price:

  • EOD Data 
    • End of Day price data for over 37,000 tickers including ADRs and Chinese stocks
  • Mutual Fund Data
    • Getting ready for launch
  • Technicals
    • Getting ready for launch

Additional (for FinTech)

  • IEX Price Data
    • Tiingo is the first FinTech company to bring IEX Real-time data to the mainstream public
    • IEX Real-time data for $500/month versus $4200+/month for other services
    • Websockets and REST implementation



How to set up Hosted Web Apps with Windows Live Tiles


For those of you who have been keeping up with this blog, the Javascript container process is something I've been following closely for the past decade. Earlier in the year, Peter Kruger from Microsoft reached out asking if I could test their latest implementation, which we presented at //Build. It was an honor, and since then I've been advocating for Microsoft's and OpenFin's implementations as my favorites.

In a nutshell: the Javascript container process lets you take a JS website and make it feel native to the operating system, whether that's iOS, Android, or Windows. We're going to cover the Windows Live Tile implementation here.

For those of you with Windows machines, tablets, or phones (okay, Surface users and PCs), you may have seen what Windows calls "Live Tiles." Windows 8 may have overdone it 😉 but Windows 10 nailed it. Tiles let you get a snapshot of what an app is doing without having to open it. I always found the Android implementation of this clumsy from a UI standpoint, and the feature is mostly non-existent on iOS, with the exception of Apple apps like Weather and Clock. I use an iPhone, for the record.

But Microsoft nailed it, IMO, with the perfect amount of structure and dynamic content. Whereas Android has widgets in all sorts of shapes, Microsoft enforces structure and lets you "snap" tiles together.


Windows Live Tiles

Notice the Tiingo one? Yeah, we like it too 🙂

We're going to cover how we got these going in our pure-Javascript implementation; it didn't require any native coding, which was nice. It turns out that if you're using Hosted Web Apps, which let you convert your Javascript web app into a Windows app, Microsoft injects a Windows library that you can use to interact with the OS.

This GitHub page gives a good overview, but we're going to go a little more in-depth. It's still a good read-through:


Step 1 – Download the source code/generate the manifest

You need to generate source code, or a manifest file, for this to work. If you don't know what that is (like me initially), you can use App Studio, which has a wizard that takes care of this for you. Visit here:, make an account, and then create a "Hosted Web App" via this URL:

When you’re done with the wizard, click “Generate” and download the Source code.

Download Source Code

Once you have the source code, you can open it up in Visual Studio. You can download the Community edition for $0 here:

Step 2 – Choose a Template

Microsoft has pre-generated templates that you can "fill in." In reality, these are XML templates where you change the content and then push the update. So we're going to choose a template, populate it with data, and then send the notification update to the Windows Notification library.

Find a template that you like; we're going to change its content to present the data we want. You can see the catalog here:

For Tiingo, we went with tileWide310x150Text05. Keep track of this "identifier" code, as we will need it in our javascript code.

I like the idea of clean text, and for financial data, images are not as necessary. Maybe later we will include them for news stories, but to start I wanted text only.

Once you choose the template, you can scroll down and see the XML. For tileWide310x150Text05 it looks like this (taken from MSFT's website):

    <visual version="2">
      <binding template="TileWide310x150Text05" fallback="TileWideText05">
        <text id="1">Text Field 1</text>
        <text id="2">Text Field 2</text>
        <text id="3">Text Field 3</text>
        <text id="4">Text Field 4</text>
        <text id="5">Text Field 5</text>
      </binding>
    </visual>

Step 3 – Update the tile in your JS code

Next we have to tell Windows when to update the data and what to do.
We used this snippet, check the comments to see what each line means:

//See if the Windows namespace is available (injected by Windows for Hosted Web Apps)
if (typeof Windows !== 'undefined' && typeof Windows.UI !== 'undefined' &&
typeof Windows.UI.Notifications !== 'undefined') {
     //Setting dummy market data
     var marketData = {spy: {returns: .05}, newsLinks: [{title: "Headline 1"}, {title: "Headline 2"}]};
     //Get the Windows UI Notifications namespace
     var windowsNotifications = Windows.UI.Notifications;
     //Load in the template, which will contain the XML we can modify
     var tileTemplate = windowsNotifications.TileTemplateType.tileWide310x150Text05;
     var tileXML = windowsNotifications.TileUpdateManager.getTemplateContent(tileTemplate);
     //We now get all the text elements and append text nodes
     var tileText = tileXML.getElementsByTagName('text');
     //First line will be a header
     tileText[0].appendChild(tileXML.createTextNode("Market Snapshot"));
     //Next we get the returns and prepend a "+" sign if the return is > 0.
     //For negative numbers, JS already prepends a "-"
     if (marketData.spy.returns > 0)
          tileText[1].appendChild(tileXML.createTextNode("S&P 500 +" + (marketData.spy.returns * 100).toFixed(2) + "%"));
     else
          tileText[1].appendChild(tileXML.createTextNode("S&P 500 " + (marketData.spy.returns * 100).toFixed(2) + "%"));
     //Next we add the news headlines
     tileText[2].appendChild(tileXML.createTextNode(marketData.newsLinks[0].title));
     tileText[3].appendChild(tileXML.createTextNode(marketData.newsLinks[1].title));
     //Create the TileNotification, passing our modified XML template, and then send the update command
     var tileNotification = new windowsNotifications.TileNotification(tileXML);
     windowsNotifications.TileUpdateManager.createTileUpdaterForApplication().update(tileNotification);
}

Since we are using Angular, we wrapped the initial call in a $timeout() and then set an $interval to fetch the marketData JSON object from our back-end every 30 seconds.
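Framework details aside, that wiring reduces to a poll-and-update loop. A framework-agnostic sketch, where fetchMarketData and updateTile are hypothetical stand-ins for our back-end call and the tile code above:

```javascript
// Paint the tile immediately, then keep refreshing it on a fixed interval.
function startTilePolling(fetchMarketData, updateTile, intervalMs) {
  updateTile(fetchMarketData()); // initial paint (our $timeout)
  // periodic refresh (our $interval); returns the id so callers can stop it
  return setInterval(() => updateTile(fetchMarketData()), intervalMs);
}

// Usage sketch: refresh every 30 seconds.
// const id = startTilePolling(getMarketData, renderTile, 30000);
// clearInterval(id); // stop polling when the app closes
```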


Step 4 – Test the app by running it in Visual Studio, pin the app to your start menu, and voila!


Our Example Tile
The Protagonists Fixing the Problem that Apps Created (Part 2)


This is part 2 of the blog post: Apps Have Recreated the Problem the Web Was Trying to Fix


In this post we’re going to discuss the protagonists who are creating tools and frameworks to unify the “App” experience across desktop and mobile. If successful, this will mean we are getting closer to mobile and desktop cross-platform and cross-browser compatibility. Please read part 1 if you are curious as to what this problem has meant for firms and developers.


All UX engineers will tell you that a mobile interface is fundamentally different from a desktop application. After all, we all know what the "three lines" mean, right?

The three-lines we came to know as the “Hamburger Menu”

It is universal code for, “There are more features that will show themselves if you click us. Do it. Click us.”

What Google is doing, therefore, is creating a design specification that sets a unified standard across both desktop and mobile applications. For a very comprehensive description, check out their website: Material Design Introduction. It's a wonderful read on their philosophy and great information for those of us learning UX, like myself.

One example of Material Design, for those of us familiar with Google’s Hangouts App, is this menu:


Here we can see Google is attempting to unify the experience of the “Hamburger Menu,” by creating both a mobile and desktop interface for it.

But Google isn’t the first to attempt this.

Note: The hamburger menu has its critics, but that is beyond the scope of this blog post.

Twitter (Bootstrap)

Twitter created a framework known as Bootstrap that has become ubiquitous and set a new standard for a unified desktop/mobile experience, otherwise known as "responsive" design. It laid the foundation for many of the design frameworks you see today, and almost all responsive web applications still rely on it.

It popularized the "grid layout" and always had a "mobile first" philosophy. It even helped establish the mobile and web icons you see today. For a full list of features, please visit:

If there were a museum of web development, I would argue Bootstrap would have its own exhibit. The impact it's had is absolutely awe-inspiring, and all of the criticisms people have of it come with an implicit asterisk:

*We are not insulting Bootstrap. It's amazing. The whole reason we can criticize it is because it set a new standard that got people thinking differently.

Please visit: as no matter what images I post, they will not do it justice.


Microsoft

Microsoft has been the platform I am most excited about. Close friends of mine have heard my rants on unified web experiences, so it felt like kismet when a senior product individual reached out asking me to test their Web App Studio.

I was impressed with the premise: they are letting individuals create their own apps while also building a container process that takes HTML5 web apps and makes them feel like native experiences. While they are not the first (as we will discuss below), they are the major web company actively supporting this approach, given the deprecation of "Mobile Chrome Apps."

The premise of the App Studio is twofold (Fed dual mandate, anyone? …sorry):

  1. Allow users to create their own apps in a point-and-click manner
  2. Allow your HTML5 web application to feel like a native app

While this post won’t get into 1, it does help many small businesses who want an app alongside their product.

With respect to 2, I found the app submission process relatively easy, with the majority of my time spent typing out app descriptions, ratings, etc. The actual wrapping of Tiingo took all of about 15 minutes.

Here is a screenshot of Tiingo running as a native desktop application in Windows 10:

Tiingo Running in their Web App Studio Container

For those of you who've never published an app to the Windows Store before, use the videos in the middle of the page: Web App Studio. I find it difficult to sit still and watch videos, so I will be posting a graphical walk-through of how to do this.

Having been around web development and seen multiple container processes come and go, I can say this has been the easiest experience to date. So far I have not found the memory leaks that have plagued the forked Chrome projects built with a similar premise in mind.

Also, a thank-you to Microsoft for their Edge browser. Seriously: the company that brought you IE6 has launched a new browser that is challenging other browsers in benchmarks (including Google's own benchmarking tests), and they have recently open-sourced their javascript engine: While it has a ways to go, especially with extensions and feature compatibility, initial results are more than promising; they're exciting. And thankfully, this performant javascript engine is powering their Web App container.

Apache Cordova

The 500 lb gorilla in the room: Apache Cordova

I love what this platform is doing, but I detest that it’s had to exist because the major tech giants couldn’t get together to hammer out a standard (looking at you Apple….from my iPhone).

The goal of this platform is to take an HTML5 web application and wrap it so it can be pushed to the app stores of Google, Apple, and Microsoft. This has benefits, as it means a native feel and interaction with a phone's hardware and interfaces such as cameras, GPS, and notifications.

The downside, similar to the Java Virtual Machine, is that these programs run in Javascript, and the performance is noticeably slower, since native code will always be faster than Javascript (although the gap doesn't have to be this wide, something Java has closed decently well).

Compatibility Features with Native Applications



The open source and web-dev communities are doing wonderful things to address the problem of cross-platform/browser compatibility, but ultimately it is the platforms with app stores that should be pushing forward a solution. If Apple continues down this road, it will only be a matter of time before development becomes more inconvenient, and if market share shifts, iOS will become the second platform we develop for instead of the first. Even more so, the Safari browser is arguably becoming more difficult to work with. As Microsoft can tell you, that's a hard reputation to brush off.

Ultimately, projects like Apache Cordova are wonderful, but I hope they go the direction of jQuery, where they are no longer necessary or become components of higher-level frameworks like Angular. The work jQuery did set a new standard, and I hope Cordova goes the same way.

I applaud both Google and Microsoft for tackling this problem head-on with different solutions: support for Cordova, a unified UX, and explicit support for Web App Containers to save developers time.

Well done.


Why 13-F filings are Poor for Replicating Funds


I've seen hedge fund and trader replication ETFs and strategies for some time now, and I realized a lot of them are based on 13-F filings. I thought I would go into why these are poor for replication. I hope it's helpful for some readers out there. And in case I missed something, please feel free to add more points.

I originally made this post on Reddit, but decided to put it here as well for the readers of this blog. A few Redditors responded, and their comments appear below the "Edit" portion.

1) They aggregate the positions of many different people

Typically the funds being replicated have a portfolio-manager structure. Just as on the mutual fund side you have many different types of funds, on the hedge fund side you have something similar, except with a ton of different individuals. A 13-F filing is an aggregation of the entire fund, so you are seeing the fund's aggregated thesis. You may also be looking at the position of a portfolio manager who views the world entirely differently than you do and understands the company in a context you may not. Some people may view this as "crowdsourcing" within hedge funds, but then I present a couple of other points.

2) They are delayed

The filings are quarterly, so you are getting lagging data. It's not uncommon for a fund to change positions every month. If you are using 13-F filings, make sure the fund has very long holding periods to account for this. Even then, if there is market-moving news, you won't really know their position until the next report.

3) They show you an incomplete picture

A long/short equity fund will often have a short component. Traders often use pairs trades, or short trades, to build a trade structure. 13-F filings, though, only represent the long positions.

For example, a 13-F filing may show a fund long Comcast, when the fund could also be short Time Warner against it. Both companies make up the trade thesis. So even if Comcast loses money, the fund may be making money on the entire trade, since Time Warner is the other leg. It may appear they are "in it for the long haul" when really you can only see one side of the trade. It's true that long/short equity funds tend to make more money on the long side, but some of that is beta exposure.
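A toy P&L calculation (entirely made-up numbers) shows why the visible long leg alone can mislead:

```javascript
// P&L of a pairs trade: long one stock, short another, equal share counts.
// The long leg shows up in a 13-F; the short leg does not.
function pairTradePnl(longEntry, longExit, shortEntry, shortExit, shares) {
  const longPnl = (longExit - longEntry) * shares;    // visible in the filing
  const shortPnl = (shortEntry - shortExit) * shares; // invisible in the filing
  return { longPnl, shortPnl, total: longPnl + shortPnl };
}

// The long leg (say, Comcast) falls 2 points, but the short leg
// (say, Time Warner) falls 5: the filing shows a loser, the trade won.
console.log(pairTradePnl(40, 38, 90, 85, 100)); // → { longPnl: -200, shortPnl: 500, total: 300 }
```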

What I have used 13-F filings for

1) Trade idea generation.

Sometimes smaller hedge funds will find stocks that I haven't heard of. I will do my own research, though, and form my own thesis. It's almost like a screener, I suppose. If I know a hedge fund is a value fund, a long position is probably a value position.

2) To get a hedge fund gig

In college I would look up 13-F filings for local small hedge funds, then research the companies, and cold E-mail hedge funds to discuss the idea. This tended to be received well.

Did I miss anything?


Here is what Reddit commenters added; please make sure to give them the karma they deserve:

>Yes, 13-F following works best for idea generation from funds with very concentrated portfolios and known for mostly long positions.
One metric that isn't used much, but that I like to estimate, is the % of a company's overall shares that the fund holds (not the % the position represents of the fund's own portfolio). This may give you an even better sense of their conviction in the business. When they start owning close to 20% of a company (many don't go over this limit because of poison pill arrangements and filing requirements), it implies a high level of conviction, even if it's a relatively smaller portion of their overall portfolio.
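The commenter's metric is easy to compute once you know shares outstanding; a quick sketch with made-up numbers:

```javascript
// Conviction proxy: the fund's stake as a percentage of the company's
// shares outstanding (not as a percentage of the fund's own portfolio).
function ownershipPct(sharesHeld, sharesOutstanding) {
  return (sharesHeld / sharesOutstanding) * 100;
}

// 12.5M shares held of 50M outstanding: a 25% stake in the company,
// even if the position is a small slice of the fund's book.
console.log(ownershipPct(12.5e6, 50e6)); // → 25
```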

(Expanding upon delayed releases)
>Not only that, they will often wait the full 45-day time limit after quarter end to file, so when you see that report you're already looking at 45-day-old data.

>Nice post
Could be long the CDS or puts and long the stock to tweak the risk. The 13-F makes it look like they like the position.

That’s Enough Machine Learning – thanks!


Alright, I'm going to hammer on one specific topic that's been bothering me in the tech scene: machine learning being thrown at everything. "Need a t-shirt? Let's use machine learning to find our different habits and predict our tastes." Or, you know, you could go to a store and see what appeals to you. OK, that's an exaggeration, and going to stores to check merchandise doesn't scale across the variety the web offers you. But I like this analogy, so I'm going to keep it.

The problem I see with machine learning, and why I think it's used inappropriately in markets, is that it cannot explain in the way human consciousness can. What I mean is that traditional science tells us to form a hypothesis before conducting an experiment. The idea is that by forming an explanation before seeing the data, we are forced to take current observations and make a rational expectation. This of course leads to biases, which is shown quantitatively by the inability to replicate research, as well as by the number of papers that seem to support their hypotheses. What "big data" (I throw up a little in my mouth when I use that phrase) gives us, though, is the ability to get instant iterative feedback, and A/B testing lets us test our samples in the real world and see if our models hold up.

This is how it "should" be done. What happens, though, is that machine learning, instead of being used as an optimization method, becomes a method of finding explanations. Many of us are using it to find relationships and then backfilling a hypothesis that appears to hold. While the current method of science is far from perfect, this approach seems far, far worse. I have seen some who can master this, but they often have very strict processes in place to ensure the models hold up. Some enforce it via risk management while others run statistical tests; usually it's a combination of the two.

But do we really need to use advanced machine learning to create explanatory relationships instead of treating it as an optimization method? After speaking with many people using it this way and reading papers on it, it seems like many doing it drastically overfit, and their live results/trading do not match their out-of-sample. A common response to this idea is that "machine learning should work if we properly run out-of-sample tests." Well, something taught to me by Josh + Steve @ AlphaParity (on this list) was that many people run out-of-sample tests inappropriately. What people often do is start with an in-sample and an out-of-sample, but when the out-of-sample doesn't match the in-sample performance, they re-parameterize on the in-sample until the out-of-sample matches what they want. This creates just one big in-sample and no out-of-sample.
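The discipline being described can be reduced to one rule: carve out the out-of-sample data once, before any tuning, and never fit against it. A minimal sketch:

```javascript
// Split a return series chronologically: tune on the in-sample portion,
// touch the out-of-sample portion exactly once at the very end.
function splitSample(series, inSampleFraction) {
  const cut = Math.floor(series.length * inSampleFraction);
  return {
    inSample: series.slice(0, cut),  // parameterize here only
    outOfSample: series.slice(cut),  // never re-fit against this
  };
}

const returns = [0.01, -0.02, 0.03, 0.0, 0.01, -0.01, 0.02, 0.01];
const { inSample, outOfSample } = splitSample(returns, 0.75);
console.log(inSample.length, outOfSample.length); // → 6 2
```

Re-tuning the in-sample until the out-of-sample "looks right" silently merges the two sets, which is exactly the failure mode described above.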

Using machine learning as an explanatory-relationship finder often leads to complex models, which further increases the probability of overfitting. A secondary problem with markets is that regime shifts can happen rapidly, making machine learning less effective over longer time periods where new macro drivers emerge. While it absolutely can be done, I know only one person who has pulled it off, and I have no idea how they do it. The question is: is all of this complexity worth it? The largest hedge funds out there, like AQR, do not use it to find explanatory relationships but use it for what it was meant to be: an optimization algorithm that slightly boosts performance. The simplicity of models like this reduces the chances of overfitting and also lets us know when a model will break: when there is a regime shift. Knowing when it fails allows us to assign higher odds as to when to size down risk (or weighting, in non-market cases), or to use portfolio construction to provide correlation/diversification benefits.

So before we go crazy with machine learning trying to be predictive from the start, I think it’s worthwhile to test the relationships, run studies, and then consider ML at a “tweaking” stage. When used properly, it can be an effective tool; I just don’t think it’s as effective as the mass adoption of this phrase implies for the vast majority of cases. I think a good example of those who properly used it is the winning team behind the Netflix Prize, whose solution is public. Their initial papers explored the biases and preferences people had when ranking movies. Their final solution combined different ML and statistical methods to push results over the edge. Reading Team BellKor’s Pragmatic Chaos’s papers in sequential order is good fun: Direct link to final paper. Ignoring the math, their logic and explanations are fantastic displays of the scientific method + optimization methods.

Podcast: Ch.1 Sifting Through the Noise and Taking Action – A Chat with Garrett Baldwin


When I started out in finance, and even now, I get bogged down whenever I read certain financial news outlets. Even after years in the industry, it is tough to weed out what’s important and who is credible.
That’s why I asked Garrett Baldwin, an esteemed financial journalist, academic, and the managing editor of AlphaPages.com, Futures Magazine, Modern Trader, and FinAlternatives, to be a guest on the podcast.

In this episode, we talk about a variety of topics, including Garrett’s journalistic process, holding Wall St. analysts, journalists, and bloggers accountable, and tips on building an investment process.

Check out the podcast to learn how financial journalism is changing and how the latest FinTech tools can help us sift through the noise to find meaningful, actionable data.

Garrett also mentions the Tiingo community in the cover story of his newest publication, Modern Trader (available June 23rd at Barnes & Noble; an e-mail will be sent out).


Here are a few resources we discussed in the episode:
Modern Trader

Garrett is the managing editor of Futures Magazine, Modern Trader, and FinAlternatives. In this episode, we touch upon a variety of topics, including the journalistic process in finance, holding Wall Street analysts and bloggers accountable, and tips on building an investment process. Learn how financial journalism is changing today and how the latest FinTech tools can sift through the noise to find meaningful, actionable data.

iTunes Link

Non-iTunes (

Given the back-and-forth nature of this Episode, there is no transcript.

Podcast: Ep.7 Our First Hedge Fund Strategy



In this episode we cover not only what hedge funds are, but also one of the most popular hedge fund allocation strategies in use today: risk parity. The largest quantitative hedge funds are using this method, and it is now presenting some real dangers. We use this example to touch upon how we can skeptically look at performance and what to beware of with 13F filings. This episode synthesizes everything we’ve learned into a single practical episode.

iTunes Link

Non-iTunes (

Here is the script that was used in today’s episode.

Note: I don’t follow scripts word-for-word as they can sound unnatural, but the episodes do closely follow them.

Get excited, listeners. We’re going to synthesize everything we’ve learned to create our first hedge fund strategy and go over what a hedge fund is. If you haven’t listened to the other episodes, that’s okay, because this can be a good test to see if you need to brush up on anything. For the most part, though, this will be a very simple explanation, so relax and enjoy listening. Oh! And I even made an entirely new feature and initiative on Tiingo to aid in this episode. Actually, I had this podcast all scripted out and then I realized, “I should just make this hedge fund tool for everyone.” So… this is going to be a really fun episode.

I consider this an important episode because we’re going to be using some metrics we’ve learned about and touching upon new ideas like risk management and position sizing and what they mean. We’re also going to discuss criticisms of the hedge fund strategy we’re covering, which will give you a look into how we should all view markets and claims made by individuals. One of the most important skills you can develop as an investor and trader is skepticism.

Here is a fun story that upsets me quite often. I used to work at a big bank, and there was a Managing Director there. A managing director is the most senior title you can get at a bank before you get into roles like CEO or CTO. In other fields it may be called a Principal, Partner, and so on. Point is, it’s a very high title. Well, this MD (managing director, not medical doctor) was followed across Wall Street because his research was popular. What the bank didn’t advertise was that this MD originally traded, but because he lost money for 7 years straight, they no longer allowed him to trade with bank money and instead let him publish research, because it helped their relationships with clients. Another fun point? Of the people who read his research, half of them mocked him and used him as a joke for everything wrong in market analysis. This MD would literally look at a price graph and then draw arrows. That’s it. He would circle things and draw arrows where he thought things were going.

As you know, I rarely trash-talk on this podcast, but I bring up this example to highlight how important skepticism is. Even if you think somebody is a pundit or brilliant, fact-checking is incredibly important. Misinformation is dangerous because it means we can lose our money. It’s one thing if the misinformation is a genuine mistake and a person tried; it’s another if an institution knows a person has bad research yet still promotes him for sales. I will never stand for the latter and will continue to be vocal about this.

So to recap: always be skeptical. Even of me. Verify everything I say. I try my best but I am human so if you think I’m wrong, please check. If you don’t think I’m wrong, then definitely fact check me! Haha, that’s an important lesson!

OK, moving on to some quick Tiingo announcements. This week we revamped the entire fundamental database, so the data is structured in tables as well as graphs. The data is now also more accurate and has extensive coverage for over 3,500 stocks. Secondly, I have started the Tiingo Labs initiative, which contains a powerful tool you can use with this podcast. And thirdly, I just added a chat reputation system, as well as something called a Tiinglet. I realized some of the best conversations among friends happen within chats, but we don’t have a way to save them. Enter the Tiinglet: it lets you turn your discussion about markets into something you formalize and share with the public to help others learn. If you open the Tiingo chat and click the username of a message you like, a box will come up, and within a few clicks you will make a page centered around your dialogue.

For example, if you and a friend are talking about Apple and one of you comes up with great analysis that you think could help others, you can simply click the text, and a message box comes up that lets you turn the conversation into a page accessible to others who may have the same questions as you do.

In addition, if you like the Tiingo project – the mission, podcast, web app, and so on – please consider paying for Tiingo. Once again, I have a pay-what-you-can model so nobody is excluded, but in order to exist, we need people to pay for the product.

So let’s move on to our first hedge fund strategy!
To begin, let’s discuss what a hedge fund actually is and how the news often misinterprets what hedge funds do.

A hedge fund’s goal is to make money that’s uncorrelated to other assets like stocks, bonds, and so on. Think of it as if you invested in real estate. If you bought a condo, you probably wouldn’t compare it to stocks. In fact, many times people invest in property to build equity or to have investments besides stocks and bonds.

So it’s not so much that hedge funds have to make more money than the stock market, like the S&P 500 or NASDAQ index funds, but that they have to have a return stream that differs from those. They are a tool used by pension funds, wealthy people, banks, other institutions, and so on to diversify away their risk. For example, if you had 10 billion dollars, stocks and bonds may be nice, but you may want other investments too, like real estate. So think of a hedge fund as a tool used by wealthy investors to diversify away some of their risk.

You may often see headlines that say, “the stock market returned 20% this year, but hedge funds only returned 12%.” But that’s not a bad thing. A hedge fund’s goal isn’t to beat stocks; it’s to be uncorrelated with stocks. For example, if stocks were up 20% and a hedge fund was up 20%, and if stocks were down 10% and a hedge fund was down 10%, why would you pay fees to a hedge fund when you could own an index fund?

So to create strategies uncorrelated to the stock market or bond market, a hedge fund will trade in different styles. They are considered active managers. They also have a tool called leverage. This simply means they can borrow money: if they have $10,000, they may trade as if they had $50,000. They can also sell short, a topic we covered in the Q&A. This differs significantly from mutual funds and index funds, which tend not to use leverage in the same way and generally don’t sell short. Because of this, hedge funds are often classified as an “alternative investment.” They are alternatives to traditional assets like stocks and bonds, and they manage money in what are considered non-traditional ways.

Some hedge funds may be long one stock while short another. This is called a long/short equity fund. Others may trade commodities or FX; these are often called global macro funds. Some hedge funds employ quantitative strategies, building computer programs that decide what to invest in.


The fee structure for a hedge fund is often more aggressive than that of a mutual fund or index fund. It’s typically assumed a fund takes 2/20 (“2 and 20”), though you may also see 1.5/15. Let’s use 2/20 as an example. The first number, 2, is the management fee. This is similar to a mutual fund’s fee. If you invested $1mm, you would pay 2% of what you invested. In this case it would be 2% of $1mm, or $20,000. The second number, 20, is the cut they get based on performance. For example, if they make 15% on $1mm, or $150,000, they will get a cut of that $150,000. The second number represents the percent cut they get. So if it’s 20%, they would get 20% of $150,000, which is $30,000. So 2/20 (2 and 20) is a 2% management fee on what’s invested, plus a 20% performance fee shaved off the additional money they make. If the hedge fund doesn’t make money, or loses money, they still get the management fee but do not get the performance bonus. They get the 2% but not the 20%.
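The fee math above is easy to sketch in a few lines. This is a simplified illustration (the function name is my own, and real funds add wrinkles like high-water marks and hurdle rates that are ignored here):

```python
def hedge_fund_fees(invested, gross_return, mgmt=0.02, perf=0.20):
    """Classic '2 and 20' fee math: a flat management fee on assets,
    plus a performance cut of any positive profit."""
    management_fee = invested * mgmt
    profit = invested * gross_return
    performance_fee = perf * profit if profit > 0 else 0.0
    return management_fee, performance_fee

# $1mm invested at +15%: roughly $20,000 management fee
# and roughly $30,000 performance fee, as in the episode.
m, p = hedge_fund_fees(1_000_000, 0.15)
```

In a losing year the same call returns the management fee but a zero performance fee, matching the “2% but not the 20%” point above.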

So a hedge fund is a pooled investment, like a mutual fund or index fund, but they take investors’ money and then use alternative strategies to make money in different ways. Their goal is to make money regardless of market conditions while also being uncorrelated to other assets. That’s how it should work in theory, but often it doesn’t in practice.

Anyway, this is what a hedge fund is. It often has a mystique to it, as if hedge fund traders are brilliant. But just like any profession, you have people who are very good and others who may not be so good. Often I find the media portrays hedge fund managers, especially quants, as super-brilliant mathematicians. Having been on that side, I can assure you: unless it’s high-frequency trading, the Ph.D.s and the chess champions don’t make a difference. They’re just normal people who are incredibly passionate about markets.

Now that we know what a hedge fund is, we are going to discuss a popular strategy using the knowledge we’ve gained. We need to understand volatility, correlation, stock indexes, and ETFs.

So a hedge fund takes a non-traditional approach to investing. Do not try what we’re discussing at home. There are a lot of caveats to a strategy like this, some of which we’ll get into, but making sure it is done right takes a lot of practice. I don’t want to be responsible for any execution errors or mishaps. This strategy is not guaranteed to make money, and in fact could very well lose you money. Anyway, with this very scary yet important disclaimer aside, let’s move forward, woo-hoo!

We’re going to discuss a strategy called risk parity. Actually, risk parity is not a strategy but an allocation method. That simply means it’s a method to determine how much money you should put in each asset you own. What I mean is, if you own a stock index fund and a bond index fund, how much should you put in each? In episode 3 we discussed two different ways to determine this; one was simply always keeping 60% of your cash in stocks and 40% of your cash in bonds. We spoke about how this is naïve because it stays the same regardless of other factors. For example, if you are younger, you may be able to take greater risks, which would let you hold more stocks.

In the same way, a risk parity strategy helps you decide how much to put in each stock. We’re going to use the 60/40, 60% stock, 40% bond, portfolio as an example for this strategy.

So a big trend among large hedge funds, like AQR and Bridgewater, is to determine how much to put in each asset using a risk-parity strategy. They may add a few twists to the idea, but at its core, a lot of the allocation is determined by this method.

So what is risk parity? Well it simply means equal-volatility weighting your portfolio. Before you shut off this podcast, I will actually explain what that means. I can’t stand when people define terms using equally difficult terms or phrases so I won’t do that to you.

So you know how in the 60/40 stock/bond portfolio 60% of our cash was in stocks and 40% was in bonds? Well, we generally assume stocks move around a lot more than bonds do. Bonds are assumed to be a bit more stable. This is a concept we call volatility. We say, on average, stocks are more volatile than bonds. Typically, many people measure risk as volatility: something that moves around a lot could be said to be more risky, so volatility and risk are sometimes said to be synonymous. Breaking down the term “risk parity,” we can say “volatility parity,” and parity means for things to be equal. Using these definitions, “risk parity” roughly translates to “volatility equal,” or more naturally, “equal volatility.” Risk parity means equal volatility.

But what does that mean practically? A common example: if you take a 60/40 stock/bond portfolio and measure the volatility, we see 90% of the volatility comes from stocks and 10% comes from bonds. Going forward we are going to use the term “cash.” This means exactly that: if we put 60% of our cash in something, and we had $1,000, we would take $600 and invest it in stocks. We would then take $400 and put that in bonds. A 60/40 portfolio is 60% cash in stocks, 40% cash in bonds.

If we took 60% of our cash and put it in stocks, and 40% of our cash and put it in bonds, 90% of the movement would come from stocks. Only 10% of the movement would come from bonds. Because stocks are said to be higher risk, or higher volatility in this case, they would make up 90% of the risk in your portfolio, even though they were only 60% of the cash.

So what risk parity says is that we should make stocks take up only 50% of the risk and bonds make up the other 50%. If 60% cash results in 90% of the risk, how much would we have to scale back? Well, if we put roughly 33% of our cash in stocks, stocks would take up about 50% of the portfolio’s risk.

What about bonds? Well, since 40% cash results in 10% risk, if we multiply our bond position by 5, we get 50% risk. That means we take the 40% cash / 10% risk position and multiply both by 5. We can see that 200% cash in bonds results in 50% risk.
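The scaling in the last two paragraphs can be written out directly. A caveat: this linear scaling of risk contributions is the episode’s simplification (real risk contributions depend on correlations and are not perfectly linear in the weights), so treat it as a back-of-the-envelope sketch:

```python
# The episode's numbers: in a 60/40 cash portfolio, stocks contribute
# ~90% of the volatility and bonds ~10%.
cash = {"stocks": 0.60, "bonds": 0.40}
risk = {"stocks": 0.90, "bonds": 0.10}

# Scale each position so its risk contribution becomes 50%.
target = 0.50
new_cash = {asset: cash[asset] * (target / risk[asset]) for asset in cash}

# stocks: 0.60 * (0.50 / 0.90) ~ 0.33 -> cut the stock position back
# bonds:  0.40 * (0.50 / 0.10) = 2.00 -> lever bonds to 200% of cash
```

This reproduces the numbers above: roughly 33% cash in stocks and 200% cash in bonds, with the bond leg requiring leverage.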

But how do we put 200% of our cash in something? Well, this is a concept called leverage. As we mentioned earlier, hedge funds can essentially borrow money to multiply their returns. Individuals can do this too through margin and futures, but we’re not going to cover that quite yet, as it is a more advanced topic with serious risks involved.

So to recap, in order to take a 60/40 stock/bond cash portfolio, and make the portfolio 50/50 in volatility/risk, we have to cut the position of stocks and lever up the position in bonds.

Notice how we are using the volatility of an asset to determine how much to allocate? This is a dynamic method, but it is still just an allocation method, like 60/40 or any other. So risk parity just tells us how much to put into each asset. The strategy will tell you what assets to hold, and risk parity will tell you how much to put in each.

So let’s get down to it, how much would this strategy make vs a 60/40 strategy? And here is where things are gonna get SO fun.

The risk parity strategy returned 45% total over the past 12 years. The 60/40 portfolio returned 61% total. This wasn’t assuming reinvested dividends, for those wondering – if you want to ask me why, shoot me an e-mail.

So you may be thinking, “Rishi, you said this was profitable… but I would make less? What is wrong with you?” Well, here is the key information and why hedge funds can do this better than 60/40. We have to look at how this strategy performed relative to the path it took. In episode 5 we talked about volatility and how the path to the return we got matters. For example, if we invested $100,000 and doubled our money to $200,000, that’s awesome. But what if halfway through, that $100,000 turned into $50,000?

Likewise, what if you invested $100,000 and made $150,000, but the lowest your portfolio ever got was $99,000? Which would you prefer? Even if you’re telling me the down-$50,000 scenario, here is why it’s still worse if you’re a hedge fund.

The risk parity strategy had a volatility of about 5.5%. The volatility of the 60/40 was about 11%, almost double. So what a hedge fund will do is apply more leverage, because investors want a higher return. To compare apples to apples, a hedge fund may use leverage to double the amount of money in risk parity, so you take that 5.5% volatility and double it, and now you have 11% volatility. But you also have double the return.
So if we want to compare apples to apples, we should also compare the volatility, or the path it took us to get to the return we have. If you double the leverage of a strategy, you double not only the volatility but also the return. So that 45% we made on risk parity becomes 90%. That’s 90% on risk parity vs. 61% on a 60/40 portfolio. There are more nuances to this strategy that actually improve the performance of risk parity, but we’ll get to those soon enough in this podcast series.
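The apples-to-apples comparison above is just linear scaling, and it can be checked in a couple of lines (using the episode’s round numbers and ignoring borrowing costs, which real leverage would incur):

```python
# Returns and volatilities from the episode (total over ~12 years).
rp_return, rp_vol = 0.45, 0.055   # risk parity
bm_return, bm_vol = 0.61, 0.11    # 60/40 benchmark

# Leverage needed so risk parity matches the 60/40's volatility.
lev = bm_vol / rp_vol             # = 2.0

# Leverage scales return and volatility together (funding costs ignored).
levered_return = rp_return * lev  # 0.45 * 2.0 = 0.90, vs 0.61 for 60/40
```

At equal volatility (11%), the levered risk parity return of 90% beats the 60/40’s 61%, which is exactly the comparison the episode is making.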

If you want to play with this risk parity allocation method, I mentioned I created a tool to help you do this. Know that this is an informational tool and you should not trade on the results. I have not put in tradeable assumptions, but it is a good informational, off-the-cuff proof of concept. Please treat it as such; it’s not a full replication of the strategy, nor a guide to how much you should invest. So with that disclaimer, check out the tool on Tiingo Labs. You’ll see a link for risk parity. It’s a sweet tool that may give you an idea of how this works. You just type in the tickers you want in your portfolio and press enter. Maybe you want to include the S&P 500, bonds, but also small-cap stocks? The possibilities are endless, and I hope you find joy and fun in playing around with this!

The next question we have to ask ourselves, is why does this strategy perform so well?

This is where skepticism in markets is so critical. If a strategy performs very well, it’s important to ask ourselves why. What conditions are allowing it to perform so well? Is it the economy? Maybe government policy? Certain changes in technology?

In this case, the common explanation of why risk parity does so well comes especially from the bond market. In the U.S., for the past 30 years, bonds have done extremely well. They’ve never really gone down for an extended period of time the way stocks have. And after the 2008 crisis, the Federal Reserve, which sets an interest rate that affects bonds, pushed rates down. The Federal Reserve, or Fed, did this to promote credit and boost the economy. We will get into how that works later, but the takeaway is that Fed policy has allowed rates, like loan or mortgage rates, to stay low. Not only that, a while ago the Fed committed to keeping them low for a while.

In Episode 5 we mentioned how uncertainty creates volatility. Well, when a big government agency that influences rates says, “we’re going to do this for a long time,” it removes a lot of uncertainty. This in turn removes volatility from bonds.

So what we’ve seen is that bonds are performing very well; the price goes up. If you’re new to bonds, it’s said that the price of a bond is inversely related to the interest rate. What that means is that if rates, like those you see on loans, go down, the bond is worth more. We will cover this more in depth later, but if rates are up, bond prices are down; if rates are down, bond prices are up.
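The inverse price/rate relationship falls straight out of discounting a bond’s cash flows. Here is a minimal sketch for an annual-pay coupon bond (the function and its numbers are illustrative, not a pricing library):

```python
def bond_price(face, coupon_rate, years, yield_rate):
    """Price an annual-pay coupon bond by discounting each cash flow
    at the prevailing yield."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t
                     for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face

# Same 10-year, 5%-coupon bond at two different market rates:
at_par = bond_price(1000, 0.05, 10, 0.05)   # yield == coupon -> priced at par
rates_fell = bond_price(1000, 0.05, 10, 0.03)  # rates fell -> price rises
```

Lower the yield and every discounted cash flow is worth more, so the price rises; raise the yield and the opposite happens, which is the “rates up, prices down” rule above.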

So since the Fed committed to keeping rates low, you’ve seen bond prices go up. Secondly, you’ve seen a lot of uncertainty removed from the bond market, resulting in low volatility. And since risk parity equal-volatility weights, in order for the volatility to be 50-50 between stocks and bonds, hedge funds take bigger positions in bonds.

So the argument against risk parity is that it applies an unfair amount of leverage to bonds. To mitigate this, hedge funds look at the volatility every month and do what we call a “rebalance.” If volatility was higher for an asset the previous month, they will put less money in that asset the next month. For example, every month they may take the average volatility for the past 3 months and add to or reduce their position in each asset.

However, rebalancing happens once a month. What if the price of bonds fell quickly within a month? Let’s explore that for a moment.

In our example, we borrowed double our money to invest in bonds. Let’s say we had $1,000 and borrowed another $1,000. The $1,000 we had is our equity. So if bonds fell 50%, we would lose 50% of the combined value of $2,000. We would lose half, so $1,000, which would completely wipe out our equity.

If we levered 300%, so we had $1,000, but borrowed another $2,000, a 33% fall in bonds would be a loss of $1,000 and wipe out our equity. The equity is what we actually have, so if we lose all of it, we go bankrupt.
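The wipeout arithmetic in the last two paragraphs is simple enough to write down (the helper name is my own, purely for illustration):

```python
def equity_after_drop(equity, borrowed, drop):
    """Remaining equity after the total levered position falls by `drop`.
    Losses hit the whole position, but only the equity absorbs them."""
    position = equity + borrowed
    return equity - position * drop

# $1,000 equity + $1,000 borrowed (2x): a 50% fall wipes out the equity.
two_x = equity_after_drop(1000, 1000, 0.50)
# $1,000 equity + $2,000 borrowed (3x): a ~33% fall does the same.
three_x = equity_after_drop(1000, 2000, 1 / 3)
```

The general pattern: at N-times leverage, a fall of 1/N in the underlying wipes out the equity entirely, which is why heavily levered bond positions are the worry here.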

The truth is, though, that with hedge funds, if they are down 20%, investors get scared and often pull their money out. If your mutual fund were down 20%, you would probably rethink the investment.

Right now, a big worry among investors in hedge funds is that the economy has been doing pretty well. So when is the Fed going to allow interest rates to rise? And what if it happens really quickly? If rates rise quickly, the price of bonds will fall quickly. And remember, the top hedge funds are using this strategy, and the top few alone manage $200bn. If that $200bn is highly levered, imagine how many billions could be wiped out if bonds fall just 10%.

This is similar to the 2008 crisis. Many people were hurt because banks were offering low down-payment mortgages, which just means people were levered. A 10% down payment means you are levered 10:1, or 1000% on your equity. If your house dropped 10% in value, you were wiped out. This is what hurt people.

In the same way, a big worry is that because funds are so levered on bonds, if bonds fall in price, you could see billions of dollars wiped out. Funds have come out to acknowledge this problem and are taking steps to address it. I won’t comment publicly on whether I think these steps are appropriate, so if you want to have that discussion, shoot me an e-mail!

The caveat, though, is that leverage needs to be used appropriately, and many people think the reason this strategy has done so well is that bonds have done incredibly well over the past 20 or 30 years. Stocks have also done very well in the past 6 years, and this keeps adding to the returns of the strategy.

These are the drawbacks, and every time you see performance numbers, always ask yourself why. Asking why an opportunity exists is a powerful tool not just in business but also in markets. Maybe there is a reason this strategy works that makes you feel uncomfortable.

Either way, I know this is a lot to take in, so if you have to replay a few parts, I apologize. But this example is truly an expression of how we can combine the things we’ve learned so far into a strategy the largest hedge funds are using.

This has been fun and if you have any feedback please E-mail me at