FCA Bans Commission Payments for Corporate Access

May 8th, 2014

The UK Financial Conduct Authority (FCA) released its updated rules on payments for investment research, eliminating the use of client commissions to pay for corporate access.  Moreover, the rules go into effect in less than a month, reflecting the FCA’s view that they are simply a clarification and that investment managers should have been following them all along.

The full text of Policy Statement (PS) 14/7 can be found at http://www.fca.org.uk/your-fca/documents/policy-statements/ps14-07.

Corporate Access

The FCA is implacable in its aversion to corporate access as a form of research. The FCA received 62 comment letters and met with investment managers, investment banks and corporate issuers prior to issuing its final rules.  In the end, the FCA did not make major changes to the proposed rules.

Some respondents argued that the definition of corporate access was too broad.  Nevertheless, the FCA held to its definition, which covers any service of arranging or bringing about contact between an investment manager and an issuer or potential issuer.

The FCA also pointedly addressed the issue of bundling corporate access with other services, such as conferences or analyst input.  Investment managers will need to explicitly disaggregate the research portion and make a ‘fair assessment’ of the charge to pass on to their customers.

The FCA will also scrutinize situations where corporate access is deemed a ‘free good’.  Effectively, investment managers will need to fully justify any other payments made to a broker arranging corporate access as a non-monetary benefit, and demonstrate that the corporate access is not being subsidized by those other payments.

Bundled research

With or without corporate access, investment managers will need to scrutinize unpriced bundled services more carefully.  The FCA expanded the final rules to place more emphasis on the need for investment managers to proactively value bundled services, whether or not investment banks are cooperative.

Investment managers should do “fact-based analysis” of bundled services.  This would include comparisons to priced services such as independent research (which is often an order of magnitude less expensive).  Asset managers are also encouraged to estimate what it would cost them internally to perform the service, or to determine what they would be willing, in good faith, to pay for it.
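As a rough illustration of this kind of fact-based comparison, the hypothetical sketch below triangulates a value for an unpriced bundled research service from the three reference points described above: a priced independent-research comparable, an internal cost estimate, and a good-faith willingness to pay.  The figures and the simple averaging approach are our own illustrative assumptions, not FCA guidance.

```python
# Hypothetical sketch: triangulating a value for an unpriced bundled research
# service from the three reference points the FCA mentions. All figures and
# the simple averaging approach are illustrative assumptions, not guidance.

def fair_value_estimate(independent_comparable, internal_cost, willingness_to_pay):
    """Return the midpoint of the three reference valuations."""
    references = [independent_comparable, internal_cost, willingness_to_pay]
    return sum(references) / len(references)

# Example inputs (annual figures in GBP, purely hypothetical)
independent_comparable = 40_000   # price of a comparable independent research product
internal_cost = 65_000            # estimated cost of producing the analysis in-house
willingness_to_pay = 50_000       # good-faith amount the manager would pay if priced

estimate = fair_value_estimate(independent_comparable, internal_cost, willingness_to_pay)
print(f"Estimated research charge to pass on to clients: £{estimate:,.0f}")
```

However the valuation is constructed, the point of the exercise is that the manager can show its workings if the FCA asks how the charge passed on to clients was determined.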

The FCA seems unsympathetic to the investment manager lament that investment banks are not forthcoming in providing pricing for their services:

Where an investment manager feels they need information from the broker or third party provider to assist them in a valuation process, we would expect a reasonable level of transparency to be present as part of any commercial arrangement. This could include the nature of a good or service provided, and the extent to which it has been used.

Primary research

The FCA singled out corporate access but is less prescriptive about other forms of research, leaving it up to investment managers to justify whether primary research meets the test.  Primary research providers were particularly troubled by the requirement that research needs to “present” meaningful conclusions.  The FCA stuck to its guns on this point, arguing that the phrase ‘meaningful conclusions’ has been part of the UK’s definition of eligible research since 2006.  Moreover, the FCA “would not expect an investment manager to accept as substantive research a good or service that only has a purely ‘artificial’ conclusion added by a broker or third party.”

On the plus side, the FCA says it does not intend to rule out “research that is used by the investment manager to feed into their own further research or assessment of investment and trading ideas.”  However, it concludes that eligible research must meet its cumulative definition of research, including presenting meaningful conclusions:

In the UK, the ability to use dealing commissions to acquire research is not intended to cover all non-execution related inputs into the investment manager’s decision-making process, but only additional third party research that can meet our cumulative evidential criteria.

Conclusion

Those firms hoping the FCA would moderate its position will be disappointed in the published rules, which, if anything, are tougher in their final form.  PS 14/7 sends the signal that the FCA will err on the side of less rather than more eligible research, and none of the arguments presented to date seem to have softened its position.

Investment banks will be hardest hit by the new rules.  The FCA intends to erase the £500 million (US$800 million) in commissions it estimates are spent on corporate access in the UK, most of which goes to investment banks.  However, if the FCA succeeds in pressuring investment managers to put a fair value on bundled investment bank research, the damage could be larger.  Priced independent research generally costs significantly less than what investment banks have traditionally been paid for their research.  Or, to use the other metric suggested by the FCA, investment managers could hire a lot of internal staff with the amounts paid for bank research.

The FCA has not only been fielding comments to CP 13/17.  From November 2013 to February 2014, it conducted a thematic supervision review examining both asset managers and sell-side brokers to review current market practices and business models linked to the use of dealing commissions.  Reading between the lines, it does not appear that this review relaxed the FCA’s perspective.   The FCA promises to report on this review later this year.

The Damoclean sword over everyone’s head is MiFID II, which could result in an outright ban on paying for any research with client commissions.  As the FCA puts it, “MiFID II has the potential to impact the ability of portfolio managers to receive third party inducements, which may include research acquired in return for dealing commissions linked to execution services.”

The FCA is a party in the MiFID II discussions, but, judging by PS 14/7, a ban on research commissions would not break its heart.


A Vision for Big Data

May 7th, 2014

Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it…

To help those of us innocent to the thrills of big data, we’ve asked Harry Blount, founder of DISCERN, to share his views in this guest article.  Harry Blount was a member of the Committee on Forecasting Disruptive Technologies sponsored by the National Academy of Sciences, and since 2009 has been running an investment research firm dedicated to harnessing big data insights.

As a technologist, Blount has focused not only on gathering big data sources but also on developing analytic tools to help process them.  By comparison, Majestic Research, now part of ITG Research, ended up hiring analysts to interpret the data and provide insights.  Here is Blount’s perspective, which we have excerpted from a DISCERN white paper, “Big Data for Better Decisions”, which can be obtained at http://www.aboutdiscern.com/white-paper.

A vision of the big data future

It is 2015. Acme Commercial Drones has launched its private drone fleet following a 2013 congressional mandate ordering the FAA to integrate drones into commercial U.S. air space. Acme has begun conducting daily flyovers over the major U.S. ports counting the number of cargo ships manifested to carry corn to foreign ports. The average ship is floating 2-3 feet lower in the water than the prior month, suggesting higher than expected corn exports. Another Acme drone flying over the U.S. corn-growing region registers a subtle and non-trendline shift towards rust coloration on the crop suggesting that the late-growth corn season may not be as robust as consensus expectation. Finally, live camera feeds from the newly opened “New Panamax-class” locks at the Panama Canal show that a cargo ship has damaged one of the hydraulic gates, slowing transit to a fraction of capacity.

The purchasing officer of a large soft-drink company uses DISCERN’s Personal Analytics Cloud (PAC) to monitor the global corn transport infrastructure and major corn-growing regions. His PAC generates signals notifying him of these non-trendline events. He immediately executes purchase orders to secure longer-term contracts at current spot prices anticipating that the price of corn is about to spike. His fast action locks in a six-month margin advantage over the competition, allowing the company to report record profits.
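The ‘non-trendline’ signals in this scenario are, at bottom, anomaly detection.  As a minimal sketch of the idea, the hypothetical Python snippet below flags observations that deviate from their recent history by more than a chosen number of standard deviations; the data, window and threshold are invented, and this is not a description of DISCERN’s actual methodology.

```python
# Minimal sketch of "non-trendline" event detection: flag observations that
# deviate from the recent mean by more than `threshold` standard deviations.
# Data, window and threshold are invented; this is not DISCERN's actual method.
from statistics import mean, stdev

def non_trendline_events(series, window=6, threshold=2.0):
    alerts = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            alerts.append((i, series[i]))
    return alerts

# Hypothetical weekly draft-depth readings for corn cargo ships (in feet)
draft_readings = [31.0, 31.2, 30.9, 31.1, 31.0, 31.2, 31.1, 33.8]
print(non_trendline_events(draft_readings))  # flags the jump at index 7
```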

What is Big Data?

The phrase “Big Data” refers to massive bundles of datasets, allowing for powerful aggregation and analysis at unprecedented speeds. Some 2.5 quintillion (2.5×10^18) bytes of data are created every day. Mobile devices, RFID readers, wireless sensing networks, video cameras, medical organizations, government entities – to name just a few – are collecting an ever-growing torrent of data.

Source: BigData-Startups.com

It is our belief that most decision-making tools, analytics and processes commonly applied to business have not kept pace with the explosion of data and the visualization capabilities of big data aggregation engines. Just as there is a growing digital divide between those with access to the internet and those without, we believe there is an equally important divide between organizations with big data strategies in place and those without.

One of the more popular definitions of big data is encompassed by the 4Vs – Volume, Velocity, Variety and Veracity. They sound more like bullet points in a brochure advertising high-end sports cars than a comprehensive description of what your big data package should bring to the table. While the 4Vs may help a vendor sell data feeds and speeds, nowhere does this definition speak to the customer’s need for more insights earlier and more often.

Evaluating Big Data

The real question is – “How can you get more insights more often from big data?” In our opinion, the most critical aspect organizations need to assess when selecting a big data vendor is the vendor’s ability to convert noise to signals. Said differently, most big data technology vendors will be able to bring you “the world of data” via a big data aggregation platform, but only a few will have a process capable of delivering all the relevant data (e.g. structured, unstructured, internal, external) in deep context, in a way that is personalized and persistent for your requirements.

In analyzing a big data vendor, it would be wise to ask a few questions:

1) Can the vendor deliver a solution without selling you hardware and without expensive and extensive IT configuration?
2) Does the vendor aggregate data of any type – internal and external, structured and unstructured, public and commercial?
3) What is the data model for organizing the data?
4) What is the process for providing data in its original and curated context?
5) Can the vendor deliver the data in accordance with the context and unique needs of individual users within the organization?
6) Is the data accessible from any browser-enabled device?
7) Does the solution fit into your existing workflow?

Conclusion

During my involvement with the National Academy of Sciences study “Persistent Forecasting of Disruptive Technologies”, I came to strongly believe that in most cases, all of the information required to make an early and informed decision was available in the public domain or from inexpensive commercial sources. However, there simply was not a tool or service that was optimized to persistently scan and process the data to surface these insights to decision-makers.

We believe human interaction will always be critical to the decision-making process. However, too much time is spent searching, aggregating, cleansing and processing data, and not enough time is spent on value-added activities such as analysis and optimization.

We believe big data tools such as DISCERN’s personal analytics clouds (PACs) and other persistent and personalized decision-making frameworks have begun to gain traction because they address the shortcomings of traditional tools and processes. These platforms combine the power of big data platforms with the increasingly sophisticated capabilities of advertising networks and on-line shopping vendors.

Harry Blount is the CEO and Founder of DISCERN, Inc., a cloud-based, big data analytics company that delivers signals as a service to decision-makers.  The vision for DISCERN was inspired, in part, based on the work of Harry and Paul Saffo, DISCERN’s head of Foresight, during their tenures as members of the National Academy of Science Committee for Forecasting Disruptive Technologies.  Prior to founding DISCERN, Harry spent more than 20 years on Wall Street, including senior roles at Lehman Brothers, Credit Suisse First Boston, Donaldson Lufkin & Jenrette, and CIBC Oppenheimer. Harry has been named an Institutional Investor All-American in both Information Technology Hardware and Internet Infrastructure Services, and The Wall Street Journal has recognized him as an All-Star covering the Computer Hardware sector.  In addition, he is Chairman of the Futures Committee for The Tech Museum of Innovation in San Jose, California and is on the Silicon Valley Advisory Board of the Commonwealth Club. Harry graduated from the University of Wisconsin – La Crosse in 1986 with a B.S. in Finance.


Layoffs Surge In April, Though Signs of Improvement Emerge

May 5th, 2014

Layoffs in the financial services industry surged in April as Wall Street firms continued to slash jobs due to weakness in mortgage applications and overall rightsizing at many of the nation’s largest banks.  However, the data also shows some signs of improvement, signaling that the employment picture may be turning around in the financial services industry.

April Challenger, Gray & Christmas Report

According to Challenger, Gray & Christmas’ monthly Job Cuts Report released last week, the financial services industry announced a surge in planned layoffs during April with plans to cut 4,124 jobs in the coming months.  This is a 490% increase from the 698 planned job cuts announced in March, and is an almost 300% gain from the number of layoffs announced in April of 2013.

Despite the surge in announced April layoffs, Wall Street firms have announced 19,430 planned layoffs on a year-to-date basis – a 44% decline from the number of layoffs announced during the same period in the previous year.  This suggests that the pace of rightsizing in the financial services industry might finally be slowing.

John Challenger, CEO of Challenger, Gray & Christmas explained this development, “We are seeing some stabilization in the banking industry. We may continue to see cutbacks in the mortgage departments, as banks shed the extra workers hired to handle the flood of foreclosures, but those areas are getting back to normal staffing levels.”

Planned hiring at financial services firms rose 133% in April to 350 jobs from a modest 150 new jobs announced in March.  In addition, the number of new April hiring plans represents a 600% gain from the level seen in April of the prior year.  However, it is important to note that year-to-date hiring in the financial services industry remains anemic as it is 53% below the levels seen during the same period in 2013.

Major Industry Moves

The one major financial services firm to announce layoffs in April was Citigroup, which said it was cutting 200 to 300 jobs (2% of its staff) in its global markets business.  The reduction was primarily a response to a slump in profits caused by losses in its bond trading unit.

Another major evolving story on the Wall Street layoff front is Barclays Plc’s planned revamp of its investment bank, which is expected to be presented to shareholders on May 8 and which executives have said will include significant job cuts.  Analysts at Sanford C. Bernstein estimate that Barclays Plc could eliminate 7,500 jobs at its investment bank to improve returns at its securities unit. Bernstein analysts note that the European fixed-income, currencies and commodities business may be the hardest hit, with about 5,000 job losses. Cuts of 6,500 to 7,500 would represent a reduction of between 25% and 30% of the unit’s employees.

Positives for Equity Research

As we have written recently, the outlook for Wall Street’s equities businesses has become more upbeat in the past few months.  For example, Bank of America, Morgan Stanley, Credit Suisse, and JP Morgan have all recently reported growth in their equities revenues (http://www.integrity-research.com/cms/2014/04/23/upbeat-equities-environment/).

In addition, institutional investors awarded $431 bln in new equity mandates in 2013, with a significant portion of these asset inflows going into international equities.   Another report shows that as of March 31st, hedge fund assets have hit new highs of $2.7 trillion thanks to healthy net capital inflows (http://www.integrity-research.com/cms/2014/04/30/more-positives-for-equity-research/).

Relevance for the Research Industry

Despite the surge in layoffs in April, it looks like most Wall Street firms are starting to achieve more stable employment levels, eliminating the need for large layoffs in the near future.  In addition, a number of developments suggest that the sell-side and buy-side equities businesses are starting to turn around.

Consequently, we are becoming more constructive about the employment outlook for research analysts, research sales people, and other research support staff at investment banks and buy-side firms.  In addition, we suspect that a future increase in research staffing levels at sell-side and buy-side firms will also create higher demand for alternative research, unique datasets, and other analytic inputs – a virtuous cycle that has been evident in the past.  Certainly, this would be great news for an industry that has been buffeted over the past few years.



More Positives for Equity Research

April 30th, 2014

Increased equity mandates, growing hedge fund assets and humbled momentum players equal more good news for equity research providers.

Increased equity assets

According to a recent article in Pensions & Investments, equities have received growing net inflows from institutional investors.  U.S. and international active equity mandates rebounded in 2013, according to a report released in January by consultant Eager, Davis & Holmes.  In 2013, 373 U.S. active equity hires were made, up from the 305 made in 2012 and the 288 in 2011.

Investment consultant Mercer reported that $431 billion in new assets were committed to equities in 2013.  This was an order of magnitude greater than previous years, which averaged  about $40 billion each year from 2009 to 2012 inclusive.  Comparatively, Mercer saw $1.2 billion go into bond funds in 2013.

Of the investors returning to equities, many are turning toward international equities, particularly those within Europe and emerging markets, which are expected to continue to rise. Data from Eager, Davis & Holmes shows that 291 active international equity hires were made in 2013 — a record high — compared to 271 hires made in 2012 and 210 in 2011. Of these hires, the majority of them — 136 — were in emerging markets equity, also a record high.

The flows into equities have tended to be from new money, rather than a dramatic shift from fixed income.  Most money managers and consultants contacted by P&I agreed they hadn’t been seeing flows coming out of fixed income and into equities as much as they have seen investors reallocating within their fixed-income and equity portfolios.

Momentum strategies punished

At the same time there has been a growing shift away from momentum strategies and growth stocks, and into value.  According to a recent article in the Financial Times, trend-following hedge funds have suffered further outflows amid weak investment performance, raising questions about their survival.

Commodity trading advisers (CTAs), hedge funds that tend to follow trends in markets using computers to place bets, suffered their 10th consecutive month of net outflows in March, according to Eurekahedge, the data provider.

Overall, investors pulled $5.3bn from CTAs in the first quarter. This follows continued performance difficulties for these funds, which were down 1.8 per cent in the first three months of the year – the worst return of any hedge fund strategy.

CTA funds have experienced losses of 3.3 per cent over the past five years, a sharp reversal from the 11 per cent a year such funds made between 2000 and 2008, according to Hedge Fund Research, the data provider.   Last year the strategy was down 0.61 per cent, Eurekahedge figures show.

Matthew Beddall, chief investment officer of Winton Capital Management, believes that CTAs will need to embrace research more aggressively to survive: “There is nothing about the CTA business model that I see as broken. For CTAs that have invested less heavily in research, I am maybe inclined to say yes. It is just not easy to make money in financial markets, so our saying is: evolve or die.”

Growth in hedge fund assets

Despite troubles in the CTA segment, overall hedge fund assets are hitting new highs.  Pensions & Investments reported industry assets hit a new peak of $2.7 trillion thanks to healthy net inflows.

Despite slight dips in aggregate performance in the months of January and March, investment performance and net capital inflows throughout the quarter were sufficient to push hedge fund industry assets to a new peak of $2.7 trillion as of March 31, according to Hedge Fund Research.

The first quarter was the industry’s seventh consecutive quarter of record-breaking net growth, HFR said in a recent research report. The jump in aggregate hedge fund industry assets was the result of the combination of $26.3 billion in net inflows and $47 billion of net investment performance gains.

HFR noted that net inflows in the three-month period ended March 31 were the highest quarterly inflow since the second quarter of 2011 when net inflows totaled $32.5 billion.

If the current pace of new money continues, net inflows for 2014 will top $105 billion, making it the best year since 2007, when net asset growth totaled $195 billion, historical HFR data show.

Equity hedge fund strategies experienced the highest net inflows in the first quarter, at $16.3 billion, followed by relative value strategies, $11.2 billion, and event-driven approaches, $4.1 billion. Macro hedge funds had net outflows of $5.3 billion, according to HFR tracking data.

Conclusion

Asset owners’ transfer of new assets into equities in 2013 was reflected in the excellent equities markets last year.  Nevertheless, the mandates tend to be relatively sticky, suggesting a continued positive for equities market participants.

Even more bullish for equity research providers is the growth in hedge fund assets, particularly inflows to equity hedge fund strategies.  In their search for alpha, hedge funds are the most aggressive and innovative users of investment research.

Nor does it hurt research providers that CTAs are being pressured to modify their pure quantitative momentum-oriented strategies to incorporate more research.

Combined with the positive commission trends we noted in recent earnings announcements of the large investment banks, the environment for equity research is looking very robust indeed.


New Model for Economic Consensus Estimates

April 28th, 2014

News agencies and specialized newsletters have traditionally surveyed a limited number of economists to ascertain the market consensus for various economic indicators.  However, last week crowd-sourcing data provider Estimize added economic indicators to its platform in order to obtain a more comprehensive picture of what the market expects will be reported in regular government economic releases.

Value of Consensus Estimates

Determining the “consensus estimates” for corporate earnings announcements, FDA drug trial results, political elections, or government economic releases has historically been an important part of the investment research process as this has helped investors determine what “the market” has discounted into existing securities prices.

Consequently, once an investor knows what the market consensus is for a particular event (like a corporate earnings release), then he/she has a good idea how investors will respond if the actual earnings release for that stock is better or worse than that market expectation.  A better than expected earnings report will generally lead to an improvement in the price of that particular stock while a worse than expected earnings report will lead to a decline in the price of that stock.

Of course, it is important to remember that different types of assets like stocks, bonds, or commodities will respond differently to these expectation surprises, as misses have different implications for each specific security.

In addition, some investors use consensus estimates to help them position trades before the announcement of a specific event – particularly if they expect an outcome that is markedly different from the consensus view.

Traditional Way to Discover Market Consensus

In the past, various news agencies or newsletter providers have determined the consensus estimate for any formal event or release by periodically surveying a limited number of “experts” like sell-side analysts or economists and calculating the mean or median estimate of that survey.
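For readers who want the mechanics spelled out, the sketch below shows the traditional calculation in its simplest form: gather a set of estimates and report the mean or median as the ‘consensus’.  The figures are hypothetical.

```python
# Minimal sketch of a traditional consensus calculation: survey a handful of
# economists and report the mean and median estimate. Figures are hypothetical.
from statistics import mean, median

nonfarm_payroll_estimates = [215_000, 220_000, 205_000, 230_000, 210_000, 225_000]

print(f"Mean consensus:   {mean(nonfarm_payroll_estimates):,.0f}")
print(f"Median consensus: {median(nonfarm_payroll_estimates):,.0f}")
```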

This practice has been well established for most of the data reported in quarterly corporate earnings announcements.  In fact, over the years a number of consensus earnings databases and calendars have been established enabling investors to get a good handle on current market expectations and how these have changed over time.

However, consensus estimates for monthly and quarterly government economic releases have not been tracked as rigorously.  Of course, most major newswires including the Wall Street Journal, Thomson Reuters, Bloomberg and CBS MarketWatch all conduct regular surveys of the major government economic releases a week or two before the release date.

Typically, these surveys include the estimates of a few dozen well-known Wall Street economists at that point in time.  Unfortunately, none of these surveys track how these forecasters’ estimates have changed over time.  This is important because most economists revise their forecasts as they see evidence from other data releases about actual economic conditions.  For example, many economists might publish a preliminary forecast for monthly retail sales, which they later revise after auto sales, same-store sales, and other private sales data are made available.
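One straightforward way to address this revision problem is to store every estimate with a timestamp and compute the consensus from each forecaster’s most recent submission.  The sketch below is a hypothetical illustration of that approach, not a description of how Estimize or any survey provider actually works.

```python
# Hypothetical sketch: store every estimate with a timestamp and build the
# consensus from each forecaster's most recent revision. Illustrative only.
from datetime import date
from statistics import median

# (forecaster, as_of_date, estimate) -- e.g. monthly retail sales growth, in %
estimates = [
    ("econ_a", date(2014, 4, 1), 0.4),
    ("econ_b", date(2014, 4, 2), 0.6),
    ("econ_a", date(2014, 4, 10), 0.7),   # revised up after auto sales data
    ("econ_c", date(2014, 4, 11), 0.5),
]

latest = {}
for forecaster, as_of, value in sorted(estimates, key=lambda e: e[1]):
    latest[forecaster] = value            # later submissions overwrite earlier ones

print("Revision-aware consensus:", median(latest.values()))   # 0.6
```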

Of course, there are a few more formal surveys of government economic releases that are commercially available to customers, including Consensus Economics, Focus Economics, and Blue Chip Economic Indicators.  These products typically focus on longer-term estimates like quarterly GDP, but they do collect estimates for monthly time series like retail sales, housing starts and unemployment.  The good news about these consensus surveys is they generally include estimates from a much larger universe of economists.  However, the frequency of these surveys (at best monthly) means the estimates included might be preliminary and not take advantage of the most up-to-date data.

New Estimize Offering

Strangely, very little changed in the way investors collected economic consensus estimates for decades – that is, until last week, when Estimize announced it was adding economic indicators to its crowd-sourcing financial data platform.

The new service will include market estimates for 17 US economic indicators:

  • Nonfarm Payrolls
  • Unemployment Rate
  • Average Weekly Earnings
  • GDP
  • Producer Price Index
  • Consumer Price Index
  • Retail Sales
  • Purchasing Manager’s Index
  • Existing Home Sales
  • New Single Family Home Sales
  • Case-Shiller Home Price Index
  • Housing Starts
  • Durable Goods Orders
  • ISM Non-Manufacturing Index
  • Initial Claims for Unemployment Insurance
  • Manufacturing New Orders
  • Industrial Production


Estimize expects to add other US indicators and international economic time series as this service gains traction.

Estimize CEO Leigh Drogen explained the reason for adding this new service, saying, “The currently available Economic Indicator estimate data sets are extremely poor, to put it lightly. They are literally a poll, a few weeks before the report, from 30 economists, with no revision history, and no forward looking estimates. One has to wonder how this has been the case for so long.  We’re going to change all of that now, by using the philosophies that have made Estimize so successful to this point. We believe a more accurate, and more importantly, more representative data set will be produced, with greater depth of information by collecting data the way we believe it should be collected.”

Summary

Investors have had to put up with getting consensus estimates for economic indicators in the same old way for decades – by conducting a point in time survey of a limited number of Wall Street economists.  However, last week’s announcement by Estimize that it was going to start collecting forecasts on US economic indicators by leveraging the “wisdom of the crowds” could completely change this practice.

It will be extremely interesting to see if Estimize can attract the same volume of forecasts that it currently gets for corporate earnings, and whether this consensus can be as accurate in aggregate as its earnings data has been.  There is no reason to believe that the consensus estimates for economic indicators will be any less accurate than those for corporate earnings.  In fact, given the real-time nature of the Estimize platform, the economic consensus might be even more accurate, since forecasters will be able to update their estimates to reflect the most recent data releases.

If this proves to be true, then we don’t expect too many serious investors will continue relying on obtaining their consensus economic indicator estimates from traditional sources in the future.  It will be exciting to see how this all pans out.



Upbeat Equities Environment

April 23rd, 2014

Earnings reports from major investment banks are showing continued improvement in the equities environment during the first quarter of 2014.  The longer term trend also suggests a more stable context for equity research.

Quarterly equities growth

Bank of America, Morgan Stanley, Credit Suisse, and JP Morgan have reported sequential growth in their equities revenues and all except JP Morgan had higher equities revenues than the first quarter of 2013.  Citigroup was the outlier.  Although its overall earnings report was well received, its equities business is down 12% relative to comparable quarters.

Goldman Sachs equity revenues were also down in aggregate, which it explained as the result of the sale of its reinsurance business and lower derivatives revenues.  However, its commission revenues were up 4% compared to the first quarter of last year and 11% sequentially.

The average for the banks reporting so far shows slight growth over the first quarter of 2013 and a robust 21% uptick from the fourth quarter.  Still to report: Barclays, Deutsche Bank and UBS.

Commissions as a component of equities

Unlike its competitors, Goldman provides detail on the components of its equities revenues.  Its report of equity commission revenues excludes the ‘noise’ from other equities products less directly related to investment research.  This is a reminder that although overall cash equities revenues are the best metric we have, it isn’t a perfect proxy for commission revenues.

The negative press surrounding high frequency trading has led to speculation that banks may be under pressure to close their dark pools.  During the first quarter conference call, Goldman’s CFO told analysts that there were “no strategic plans” to close its Sigma X dark pool.  Closure of dark pools would negatively impact overall equities revenues, but not the commission revenues tied to investment research.

Equities trend

Looking at the longer-term trend in equities revenues for one of the top equities houses, Credit Suisse, we see a near doubling of equities revenues in the run-up to the financial crisis followed by an equally quick decline.   By 2011, Credit Suisse’s equities revenues were back to 2005 levels.  However, since then equities revenues have stabilized and showed moderate growth (10%) in 2013.  (Note: one Swiss franc is currently worth about US$1.13 and has trended around US$1.)

The quarterly revenue picture is more complex, but paints a similarly encouraging picture of an improving near-term trend.   The quarterly revenues also show decreasing volatility, which is the trend across all the major investment banks.

The quarterly pattern also suggests some seasonality to the revenues with stronger revenues earlier in the year and weaker results later in the year.

Conclusion

Overall, it is premature to break out the champagne.  Not all banks have reported yet, so the picture could alter. More importantly, past experience tells us that equity revenues are volatile and a strong first quarter does not necessarily presage a strong year.   Nevertheless, the trend suggests a generally more stable equities environment, at least for the leading equities players, which is a positive for the research segment.


Big Data and Investment Research: Part 2

April 21st, 2014

Last week we wrote about why buy-side firms are considering adding “big data” analytic techniques to their research process.  This week, we will investigate the impact that “big data” is now having on the sell-side and alternative research industry.

The “Big Data” Evolution

As we mentioned last week, the term “big data” when applied to investment research is the practice of regularly collecting, storing and analyzing huge amounts of structured and unstructured data sources to generate predictive insights with the sole purpose of generating consistent investment returns.

Despite pronouncements to the contrary, this is not really a new phenomenon in the research business, as most of the best sell-side and alternative research firms have been doing just this for many years — albeit to a more limited extent.  Countless research analysts have conducted formal channel checks, surveys, or shopping bag counts; licensed unique commercial data feeds; warehoused this data; and analyzed these unique data points as part of their research process.  In fact, some research franchises like Midwest Research and its many spinoffs were widely known for this type of data-driven research process.

However, in recent years, as computing power has grown, storage costs have fallen, and the internet has exponentially increased the availability of both structured and unstructured data, both the capability and the interest in expanding the data-driven research model have increased among buy-side research consumers and the third-party research producers that serve them.

History of “Big Data” Investment Research

As mentioned in last week’s article, one of the first third-party “big data” firms to serve institutional investors was Majestic Research, founded in 2002, which collected, warehoused, and analyzed a number of unique data sets to identify investable signals for asset managers.  However, Majestic Research found this business model difficult to maintain, as the quantitatively oriented hedge funds that initially used the service felt the predictive edge of the analysis deteriorated as more investors used it.  In other words, Majestic Research could not scale its research business.

In response, the firm decided to hire fundamental securities analysts who could leverage its proprietary data and statistical analysis to produce data-driven fundamental research.  It found the market for this type of research was much broader than pure quantitative investors, and these buy-side clients were less worried that the data was too widely distributed.  They valued the insight Majestic provided more than the underlying data that led to it.  As discussed last week, Majestic was acquired by ITG in 2010, and this data-driven research product has been the foundation of ITG’s research offering ever since.

However, other firms saw that the Majestic model could enhance the value of the traditional fundamental research product.  The largest single “big data” research initiative was rolled out by Morgan Stanley in 2008 with its AlphaWise initiative.  Initially, AlphaWise conducted customized primary research for hedge funds and asset managers on a paid basis, including market research, web bots, and expert surveys.  Eventually, however, AlphaWise morphed into a unique data-driven research product (Morgan Stanley calls it evidence-based) that the firm’s clients could access, built on hundreds of terabytes of proprietary survey and web-scraped data.

Then, in 2009, San Francisco-based alternative research firm DISCERN started up to build a “big data” research model to meet the varied needs of different types of institutional investors.  As mentioned last week, DISCERN is an institutional equity research firm covering a wide range of sectors, including specialty pharmaceuticals, financial services, energy, and real estate investment trusts.  Its research is based on the statistical analysis of huge amounts of structured and unstructured data to identify unique investment signals, which it then overlays with contextual insights generated by experienced sell-side analysts to identify key trends, market inflection points, and other potential investable opportunities.  DISCERN provides a wide range of unbundled services, allowing buy-side clients to license its data, the predictive signals it has developed, or the extensive research it generates.

Consequences for the Research Industry

So, does the adoption of “big data” by these three research firms have any long-term consequence for the research industry?  We think so.  Clearly, these three firms have found extensive market acceptance of their data centric research products.  Based on our own analysis, DISCERN has been one of the faster growing alternative research firms in the industry over the past few years.  In addition, research produced by the AlphaWise team has been some of the most highly demanded research produced by Morgan Stanley.

If investment research firms want to compete with growing research firms like Majestic (now ITG), Morgan Stanley’s AlphaWise, or DISCERN, they are going to have to make significant changes to their traditional research businesses.  They will need to invest in building (or renting) scalable data warehouses and other technical infrastructure.  In addition, they will need to start thinking about what public data they want to collect; what commercial data they want to license; and more importantly, what proprietary data they should create by conducting longitudinal surveys, implementing web scraping technology, or utilizing other primary research techniques.  They will also need to hire data scientists or analysts skilled in building predictive analytics to work alongside their traditional analysts with deep company or industry expertise.

In addition, they will need to think through their business model and product set.  Do they only want to sell access to their analysts and the research reports they write, as they did in the past?  Or do they want to unbundle their research process and allow customers to cherry-pick exactly what they want, including data, investment signals, full research reports, custom work, etc.?

And of course, how do they want their clients to pay for this research – through hard dollar subscriptions, CSAs, trading, or some combination of the above? It is interesting to note that currently, all three of the companies mentioned earlier in this article, ITG, Morgan Stanley and DISCERN enable their clients to pay for these “big data” research services by trading through their trading desks.

Winners & Losers in “Big Data” Research?

Given the significant financial investment required in people, data, and technology, we suspect the obvious winners in rolling out “big data” research services are likely to be sell-side investment banks.  Clearly, many of these firms have produced data driven research in the past based on the analysts they have hired.  The move to a “big data” focus will really be a commitment on the part of the firm to basing all their investment research on a deep statistical analysis of underlying data sources.

However, it is important to note that many sell-side firms will not choose to make the switch to a “big data” based research process, nor will everyone that tries to do so succeed.  One of the major impediments is a bank’s unwillingness to make the financial commitment necessary to succeed.  Certainly, the downturn in equity commissions over the past five years and the struggle to make equity research pay could convince many management teams that an investment in “big data” research is just too risky with no obvious payoff.   Another reason some firms will fail in their big data efforts is an inability to adapt the firm’s culture to this new approach to investment research.

So, can alternative research firms succeed in developing “big data” research services?  We think so, though we do not think it will be a quick road to success.  In the past, both Majestic Research and DISCERN were alternative research firms that became successful in this space, though in each case, the firms developed their coverage and their product offering incrementally rather than trying to roll out broad coverage at the outset.

Similarly, we suspect that other alternative research firms will be successful in the future by initially focusing on a limited number of sectors based on a discrete set of proprietary data they have collected and a few predictive signals they have developed.  These firms will be able to expand their businesses as they gain commercial traction by adding new sectors, more proprietary data, and additional analytic signals.

Another possible winner in the “big data” research space could be existing data aggregators or predictive analytics providers who decide to move up the value chain by hiring investment research analysts to add context and investment insight to their data products.  Unfortunately, we think very few data aggregators or analytics providers will take the risk of stepping outside their domain expertise to enter the investment research business, so we don’t expect to see many of these firms make that move.

Summary

In our view, the acceptance of “big data” techniques in the investment research industry is a foregone conclusion.  Buy-side investors have increasingly exhibited their appetite for unique datasets and for data-driven research over the past decade.  To meet this hunger, a number of sell-side and alternative research firms have well-developed efforts under way, and we suspect that a few significant new entrants are likely to announce initiatives in 2014.

The real question remaining is how existing research firms plan to respond to this trend.  As we mentioned last week, the adoption of “big data” techniques enables a firm to develop a proprietary edge – whether through the collection of unique datasets or the development of proprietary signals that have predictive value.  We believe that as more research firms adopt a “big data” oriented research process, it will become increasingly hard for traditional research firms with no discernible proprietary edge to compete.  The days when a research firm could succeed solely on the basis of having a cadre of analysts with deep industry experience might be over.



High Frequency Research: How HFT Impacts Investment Research

April 16th, 2014

The hubbub over high frequency trading (HFT) has implications for investment research.  The future direction of HFT will shape the form research takes, and the Michael Lewis phenomenon is a cautionary tale for the research industry.

Low carb trading

It is estimated that HFT accounts for about half of trading in the US (down from two-thirds).  From the perspective of research, HFT is low-carb trading.  In other words, it doesn’t pay the research bills because it is too low margin and HFT firms don’t use Wall Street research.

From the HFT perspective, research is one of the obstacles to HFT growth.  In Michael Lewis’s controversial new book, the founder of IEX, a new exchange designed to counteract the evils of HFT, was challenged by buy-side inertia in directing trades because of the need to pay for research.  According to Lewis, the buy-side traders were outraged by HFT predatory tactics, yet continued to send orders to the HFT-infested dark pools operated by Wall Street banks like lambs to the slaughter.  All because of bank research.

The rise of high frequency research

What Lewis does not mention is that HFT has declined from its heyday, when it accounted for two-thirds of US market trading.  In 2009, high-frequency traders moved about 3.25 billion shares a day. In 2012, it was 1.6 billion a day, according to a Bloomberg BusinessWeek article.   Volumes have declined because fierce competition among HFT firms has compressed margins.   Average profits have fallen from about a tenth of a penny per share to a twentieth of a penny.

Lewis misses the fundamental point about HFT: it is simply automated trading.  Yes, some HFT practices are predatory, but that is not the core.  The core is computerized trading.  An unnerving aspect of HFT given surprisingly short shrift by Lewis is the frequency of flash crashes, which occur regularly but so quickly that humans can’t detect them.  Check out this excellent TED talk on HFT:  http://youtu.be/V43a-KxLFcg

For researchers, it is worth noting the current direction of HFT: high frequency research (HFR).  HFT firms are using sophisticated programs to analyze newswires and headlines and instantly interpret their implications. Some are scanning Twitter feeds, as evidenced by the Hash Crash, the sudden selloff that followed a hacked Associated Press Twitter account reporting explosions at the White House.  We can expect further developments and innovations in HFR as algorithms get more sophisticated.
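As a toy illustration of headline scanning, the hypothetical sketch below scores incoming headlines against small positive and negative word lists and emits a signal when the score crosses a threshold.  Real HFR systems rely on far more sophisticated natural language processing; the word lists, threshold and headlines here are invented.

```python
# Toy sketch of headline scanning for "high frequency research": score each
# headline against small word lists and emit a signal on strong scores.
# Word lists, threshold and headlines are invented for illustration.
POSITIVE = {"beats", "surges", "upgrade", "record", "approves"}
NEGATIVE = {"misses", "plunges", "downgrade", "explosion", "recall"}

def score_headline(headline):
    words = set(headline.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def signal(headline, threshold=1):
    score = score_headline(headline)
    if score >= threshold:
        return "BUY"
    if score <= -threshold:
        return "SELL"
    return "HOLD"

print(signal("Acme beats estimates, sets record quarterly revenue"))  # BUY
print(signal("Regulator orders recall after plant explosion"))        # SELL
```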

Much has been written about the automation of trading and the declining role of human traders.  The automation of research is yet to be written.

A cautionary tale

One last point.  Flash Boys is a compelling story of an outsider who uncovers HFT abuses and works to counteract them.   While it makes for a great read, it stretches belief that Brad Katsuyama was the first to discover the pernicious effects of HFT.

Like many other aspects of Wall Street, HFT was yet another open secret (although perhaps not understood in the exacting detail that Brad pursues).  Can you think of another generally accepted quirk that applies to research, just waiting for the next Michael Lewis tome?

HFT & Research

While the Wall Street banks have used HFT to augment cash equities revenues, that game is declining.  HFT is fundamentally hostile to traditional bank research.  Its trades don’t pay research bills, and ultimately HFT leads to a very different form of research that sends chills down the spine of every fundamental analyst.

Wall Street offers opportunities for talented writers to prosper by spotlighting commonly accepted idiosyncrasies in the markets.  Or for talented politicians seeking greater fame.  Will Soft Boys be the next Lewis opus?


Big Data and Investment Research: Part 1

April 14th, 2014

One of the key themes in the institutional investment research business this year is the growing importance of warehousing and generating meaningful signals and insights from “big data”.  This week we will discuss the impact this trend is starting to have on the internal investment research process at buy-side firms and next week we will write about how these developments will transform the sell-side and independent research business.

So What is Big Data?

The term “big data” as applied to investment research is used by countless journalists, vendors, and consultants, though few really agree on a definition.  We see “big data” as the regular collection, storage and analysis of numerous structured and unstructured data sources to generate predictive insights and consistent investment returns.

One of the first real “big data” firms to serve U.S. institutional investors was Majestic Research, founded in 2002 by Seth Goldstein and Tony Berkman.  Majestic Research entered into exclusive licenses with proprietary third-party data providers and gathered freely available data from the web, enabling it to generate data-driven insights on sales trends in industries such as telecommunications, real estate and airlines.  The firm was sold to ITG in 2010 for $56 mln.

[IBM infographic: the four V’s of big data – Volume, Velocity, Variety, Veracity]

As you can see from the well-known IBM slide above, many analysts break “big data” down into four dimensions – Volume, Velocity, Variety, and Veracity.  However, the team at Integrity Research believes that one additional “V”, Validity, should also be added when looking at “big data” from an institutional investor’s perspective.

Applying the 5 V’s to the Buy-Side

As you might guess, Integrity Research has interacted with many buy-side investors who either have already started a “big data” initiative or who are considering implementing one in the near-term.  So, what issues should buy-side investors be aware of as they plan to develop a “big data” effort to enhance their current investment research processes?

Volume: Consistent with the term “big data” one of the obvious characteristics of any big data initiative is the volume of data that investors must be prepared to collect and analyze.  As you can see from the slide above, 2.3 trillion gigabytes of data are created every day, with estimates suggesting that 43 trillion gigabytes of data will be created by 2020.  Consequently, buy-side investors looking to develop a big data strategy must be prepared to warehouse and analyze huge amounts of data – considerably more than they have ever worked with in the past.

Velocity: Not only is the volume of data huge, but most big data initiatives require that investors analyze this data in real-time to identify meaningful signals.  Fortunately, most buy-side investors are used to working with real-time data.

Variety: One of the key characteristics of “big data” initiatives is the variety of data types that buy-side investors can collect, including both structured and unstructured data.  A few of the major external data types that we have identified for buy-side clients include data from public sources, social media sites, crowd sourcing efforts, various transaction types, sensors, commercial industry sources, primary research vendors, exchanges and market data vendors.

Veracity: All investors understand the problem of poor data quality when trying to build a reliable research process.  Clearly, this becomes an exponentially more difficult issue for buy-side investors as they try to identify and ingest terabytes of data from numerous public and private sources, all of which have different data collection and cleansing processes.  Consequently, investors often have to implement sophisticated data quality checks to make sure that the data they warehouse is reasonably accurate.
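The sketch below illustrates the kind of basic ingest-time quality checks alluded to here – completeness, plausible ranges, and staleness – applied to a single hypothetical record.  A production pipeline would be considerably more elaborate; the field names, ranges and record are assumptions for illustration.

```python
# Minimal sketch of ingest-time "veracity" checks: completeness, plausible
# ranges, and staleness. Field names, ranges and the record are hypothetical.
from datetime import date, timedelta

def quality_issues(record, required, ranges, max_age_days=7, today=None):
    today = today or date.today()
    issues = []
    for field in required:
        if record.get(field) is None:
            issues.append(f"missing field: {field}")
    for field, (lo, hi) in ranges.items():
        value = record.get(field)
        if value is not None and not lo <= value <= hi:
            issues.append(f"out of range: {field}={value}")
    if today - record["as_of"] > timedelta(days=max_age_days):
        issues.append("stale record")
    return issues

record = {"ticker": "XYZ", "same_store_sales_pct": 45.0, "as_of": date(2014, 4, 1)}
print(quality_issues(
    record,
    required=["ticker", "same_store_sales_pct", "store_count"],
    ranges={"same_store_sales_pct": (-30.0, 30.0)},
    today=date(2014, 4, 5),
))
```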

Validity: One important concern for buy-side investors when deciding what data to acquire or collect is whether the data is actually useful in helping predict the movement of securities or asset prices.  Warehousing irrelevant data only increases cost and complexity without contributing value to the research process.  Consequently, buy-side investors need to think carefully about the potential validity of a dataset before acquiring it.
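A simple first-pass validity screen is to check whether a candidate dataset’s signal bears any relationship to subsequent returns on a historical sample, for example via a correlation against forward returns.  The sketch below is a bare-bones illustration with invented numbers; genuine validation would involve proper backtesting, out-of-sample tests and transaction-cost assumptions.

```python
# Bare-bones "validity" screen: does the candidate signal correlate with the
# next period's return on a historical sample? All numbers are invented.
def pearson_corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

signal_values = [0.2, -0.1, 0.5, 0.3, -0.4, 0.1]    # e.g. weekly web-traffic growth
forward_returns = [0.8, -0.3, 1.2, 0.5, -0.9, 0.2]  # next week's stock return, in %

corr = pearson_corr(signal_values, forward_returns)
print(f"Signal vs. forward-return correlation: {corr:.2f}")
print("worth warehousing" if abs(corr) > 0.3 else "probably not worth the cost")
```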

Big Data Benefits for the Buy Side

So why are buy-side investors starting to jump on the big data bandwagon?  Is it a fad, or is it a long-term trend for institutional investors?  In our mind, the adoption of “big data” methods is merely the next logical step for investors looking for a systematic way to generate consistent returns.

One of the most obvious benefits of rolling out a “big data” initiative is that it enables investors to create a systematic, repeatable research process rather than an investment process that is overly reliant on specific individuals.  Clearly, this has been the benefit of the quantitative investment models used by some asset managers for years.  What is really interesting is that a number of traditionally qualitative investors are now looking into “big data” techniques as an overlay to their primary investment strategies.

A related benefit of implementing a “big data” project is the ability for buy-side investors to develop proprietary predictive signals from the data they are warehousing for individual stocks, sectors, ETFs or other market indices, which can help generate consistent returns.  In fact, an investor’s ability to develop predictive signals is often limited only by their ingenuity in finding existing datasets, their willingness and skill in building new proprietary datasets, and their creativity in analyzing this data.

Adopting a “big data” driven research process should also carry lower research risk for institutional investors than many traditional primary-research-driven approaches.  Clearly, an investor is unlikely to receive and trade on illicit inside information when using “big data” techniques.

Costs of Implementing Buy-Side Big Data Initiatives

Of course, adopting a “big data” research program is not without significant costs for a buy-side firm.  A few of these costs are discussed below.

Obviously, firms that have never implemented a significant quantitative investment strategy are unlikely to have the expertise or knowledge to effectively implement a “big data” program.  This includes the expertise to find unique data sources, technically integrate these data sources, clean and validate this data, warehouse huge volumes of data, analyze this data and develop predictive signals, etc.  Consequently, buy-side firms looking to build “big data” initiatives will be forced to hire different types of talent than they have ever had to hire before.  Some of these professionals will need data integration skills, advanced analytics and predictive analysis skills, complex event processing skills, rule management skills, and experience with business intelligence tools.  Unfortunately, the current supply of high-quality data scientists is considerably smaller than the exploding demand for their skills.

Hiring workers with these new skill sets is also likely to create a different issue for buy-side firms, and this is a management and corporate culture issue.  Clearly, these new employees will often need to be managed differently than their peers given their skills, experiences and personalities.  Consequently, finding managers who can effectively manage and motivate these new employees will be critical in recruiting, developing and keeping this talent.

Of course, one of the most significant costs of implementing a “big data” initiative at a buy-side firm is the upfront and ongoing financial investment required to be successful.  Not only does the firm have to hire the right talent (discussed previously), but they also have to acquire and/or build the right technical infrastructure, and they need to identify and acquire the right data.  In some instances, buy-side firms also need to invest in “creating” unique proprietary time series (e.g. by conducting longitudinal surveys or employing other primary research techniques) which will also require specialized know-how and a significant financial investment.

Alternatives to Building Big Data Programs In-House

Does this mean that only the largest buy-side firms have the management, technical or financial resources to successfully implement a “big data” program?  Well, the answer is yes and no.  If a buy-side firm wants to build this type of program in-house, then it will take considerable resources to pull off.  However, if a buy-side firm is willing to outsource some of this initiative to other partners, then it is possible to build a “big data” program more cost effectively.

In fact, there are a growing number of vendors who can provide buy-side investors with various components of a “big data” research program such as sourcing and aggregating unique data sets, leveraging a data warehouse for both external and proprietary data, and even building custom signals for investors.

One such vendor is DISCERN, a San Francisco-based research firm which collects large amounts of publicly available information.  The firm was founded in 2010 by former Lehman Brothers IT analyst Harry Blount.  Besides producing data-driven research, DISCERN has also leveraged cloud-based machine-learning algorithms and persistent queries to automate data aggregation, enhance data discovery and visualization, signal decision-makers about new business developments, and deliver masses of disparate data in a manageable format.

In addition to DISCERN, a number of new vendors have sprung up in recent years providing asset managers with aggregated sources of structured and unstructured public data, access to proprietary industry or transaction data covering various sectors, or investment focused signals based on various social media sources.  

Summary

As we mentioned earlier, adopting a “big data” research strategy is not without its own issues and costs for any buy-side investor considering this course of action, including deciding whether to “build or buy”, acquiring the relevant know-how, finding appropriate management, and resourcing the project sufficiently to succeed.

Despite these issues, the use of “big data” research techniques is likely to transform a significant part of the investment community in the coming years as the buy-side looks to implement a research process which produces repeatable and consistent returns, and which does so at a lower risk than traditional approaches.

In our view, one of the most exciting aspects of this trend will be seeing which new data vendors, infrastructure providers, and analytic tool suppliers spring up to help the buy-side adopt “big data” as an integral part of its research process more easily and cost effectively.


Financial Social Media Coming of Age?

April 9th, 2014

Gnip, a data provider specializing in social media feeds, released a whitepaper saying that mining social media for financial market insights is three years behind branding analysis, and is reaching a tipping point.  Social media traffic relating to financial markets is growing exponentially.  Social media analysis is spreading from equity markets to foreign exchange, energy and commodities markets.  Gnip argues that financial analysis of social media is reaching a point of accelerating adoption equivalent to the earlier adoption of social media branding analysis.

Founded in 2008, Gnip provides data feeds of various social media sources.  Gnip was the first to distribute Twitter data and Twitter archives, and has added feeds of Foursquare, WordPress, Facebook and others.  Its primary client base uses its feeds for brand analysis, but Gnip sees growth in financial data mining of social media.

Growth in financial analysis

Early adopters of social data analytics in finance were a small set of hedge funds and high frequency traders.  In 2013, a few things happened to increase financial market interest in social media:

  • The SEC blessed social media outlets for corporate announcements in compliance with Regulation Fair Disclosure.
  • The Hash Crash in April took 140 points off the Dow in two minutes after the AP Twitter account was hacked and tweeted about explosions in the White House.
  • A tweet by Carl Icahn sent Apple’s stock price higher, adding $12.5 billion to the company’s market value.

We also noted that a tweet by a Hedgeye analyst caused Kinder Morgan Inc (KMI) shares to drop 6 percent, taking $4 billion off the company’s market capitalization.

Financial discussions on social media have grown over the past three years, in part thanks to the “Cashtag”.  Cashtagging is the convention of adding one or more “$TICKER” tags to content to associate the discussion with tradable equities.  According to Gnip, in comparable periods from 2011 to 2014, cashtagged conversations on Twitter around Russell 1000 securities increased more than 550%, reaching several million messages per quarter.  Use of cashtags has expanded beyond equities to FX, futures, and commodities.
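
As a simple illustration of the convention, the sketch below pulls cashtags out of raw message text with a regular expression and counts mentions per ticker.  The sample messages and the tag pattern are our own assumptions for illustration; production systems would work against a licensed feed from a provider such as Gnip.

```python
# Minimal sketch of the cashtag convention: extract "$TICKER" tags from raw
# message text and count mentions per ticker. Messages below are invented.
import re
from collections import Counter

# Cashtag: "$" followed by 1-5 letters, with an optional ".X" share-class suffix.
CASHTAG = re.compile(r"\$([A-Za-z]{1,5}(?:\.[A-Za-z])?)\b")

messages = [
    "Adding to $AAPL ahead of earnings, trimming $MSFT",
    "$BRK.B looking cheap relative to book value",
    "Crude rally dragging the whole energy complex higher",
]

counts = Counter(
    tag.upper() for msg in messages for tag in CASHTAG.findall(msg)
)
print(counts)   # e.g. Counter({'AAPL': 1, 'MSFT': 1, 'BRK.B': 1})
```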

A tipping point?

Gnip claims that financial analysis of social media is now entering the second stage of a product S-curve, the stage of rapid adoption.  It argues that adoption is about three years behind the use of social media data for branding analysis, which accelerated around 2010.

Gnip is seeing two primary use cases for financial analysis of social media.  One is to mine social media (mainly Twitter) for news.  Bloomberg and Thomson Reuters have added filtered data from Twitter and StockTwits to their platforms.  News-oriented startups include Eagle Alpha, Hedge Chatter, Market Prophit and Finmaven.

The second use case is to apply analytics to social media to create scores, signals and other derived data from Twitter or other social media.  These companies include Social Market Analytics, Contix, Eagle Alpha, Market Prophit, Infinigon, TheySay, Knowsis, Dataminr, PsychSignal and mBlast.
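
A minimal sketch of this second use case is shown below: each message is scored with a deliberately naive word list, and the scores are aggregated per cashtagged ticker.  The word lists, scoring rule and sample messages are hypothetical; commercial providers such as those named above use far richer models.

```python
# Minimal illustration of deriving per-ticker sentiment scores from cashtagged
# messages. Word lists and sample messages are naive stand-ins for illustration.
import re
from collections import defaultdict

CASHTAG = re.compile(r"\$([A-Za-z]{1,5})\b")
POSITIVE = {"beat", "upgrade", "bullish", "long"}
NEGATIVE = {"miss", "downgrade", "bearish", "short"}

def message_score(text: str) -> int:
    """Score one message: +1 per positive word present, -1 per negative word."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return len(words & POSITIVE) - len(words & NEGATIVE)

def ticker_scores(messages):
    """Aggregate message scores for every cashtagged ticker."""
    totals = defaultdict(int)
    for text in messages:
        score = message_score(text)
        for ticker in CASHTAG.findall(text):
            totals[ticker.upper()] += score
    return dict(totals)

sample = [
    "Bullish on $AAPL after the earnings beat",
    "Analyst downgrade hitting $KMI hard, staying short",
]
print(ticker_scores(sample))   # e.g. {'AAPL': 2, 'KMI': -2}
```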

Our take

Although Gnip has a clear incentive to talk its book, there is no question that the intersection of social media and finance is growing.  However, there are still some formidable barriers to be breached before social media becomes a mainstream financial markets vehicle.  One is regulatory.  Although the SEC condoned the use of social media for company announcements, the use of social media for the distribution of sell-side stock research is still problematic, not least because of required disclosures.  More importantly, most producers of sell-side research (the banks) strive to control and measure access to their research.

Which isn’t to say that social media won’t ultimately disrupt the current research business model.   Academic studies suggest that crowd-sourced models such as Seeking Alpha and SumZero are outperforming Street research.   Deutsche Bank’s Quantitative Research Group recently validated the use of Estimize’s crowd-sourced estimates data in quantitative research models.  However, it is difficult to disrupt a business model subsidized by client commissions totaling over $10 billion globally.

Difficult, but not impossible.  Financial analysis of social media will continue to grow, investors will increasingly mine big data, whether derived from social media or other sources, and crowd sourcing of financial analysis will increase.  The tipping point, however, is still ahead of us and has some obstacles to surmount.
