Financial Python

Studies in Finance and Python

Google Wave is built for sales & trading desks (and a little on Chrome OS)

I finally got a Google Wave invitation (yaay) and have fooled around with it a bit. It’s tough to really kick the tires when most of the people you would wave with don’t have an account yet. The only other option is to wade into massive public waves that appear a bit chaotic. It’s like when I first discovered usenet and electronic bulletin boards way back when. I had no idea what was going on and the geek factor was kicked up a notch. But it was also sort of cool. Anyway, here’s former Lifehacker Gina Trapani explaining Google Wave at W2E:

Nevertheless, is it just me, or does Google Wave cry out for a trading desk application? I can see an enterprising outfit using Google's open source Wave protocol to bring trading communications into the 21st century. Between the persistent state of wave "documents" and the extensibility offered by bots and gadgets, I could see Google Wave replacing many solutions firms currently depend on for internal and external communication. There are good structural reasons why it probably won't happen, but a little speculation doesn't hurt.

From my experience, investment banks currently use a patchwork of communication channels. Most have their own internal chat system, Bloomberg messaging/chat, email, AIM (well they used to use AIM), and the telephone. From a research perspective, notes are syndicated via email, Bloomberg, internal chat, proprietary blog-like systems, and (of course) hardcopy.

So what does Google Wave offer? From an inside-the-firm perspective, it's easy to see Wave helping traders, analysts, and salespeople collaborate around a central hub of information. That's the whole point of having a "desk" where people sit right next to each other – to improve communication. In a global enterprise, however, it can be difficult to achieve the immediacy market-making demands. Using a centralized wave to manage communications would certainly reduce the number of tools in use and provide a replayable record of the day's activity. For example, currency traders in New York could replay or review a shared global wave as they take over from London. Wave gadgets could also be created for the ever-popular polls that get sent out to clients and other traders in the bank. In-line responses would also help organize the information in a single place rather than forcing people to switch among chat, email, and Bloomberg throughout the day. I could see a salesperson subscribing to a trading wave (he might spot a risk-free trade by crossing with another salesperson) and maintaining a client wave (for those clients who choose to participate).

For firms with strong data infrastructures, I could see Wave paired with plotting and analytical extensions that could be used to share data and potential insights. Before Lehman’s demise, LehmanLive was a great example of a firm moving to the web in a way that allowed the entire firm to leverage its data and analytics. For those of you who remember, imagine LehmanLive, POINT, and Google Wave all wrapped up into a single package, and you get where I’m going with this.

Many of the same benefits could be enjoyed by clients in separate, sandboxed waves. And since firms can implement their own Wave system, client accounts could be created that access the firm's servers rather than Google's. Compliance will love it, since waves are persistent (again, see the playback feature). Those who want to do something shady will probably stick to the phone…

Of course, it's probably a long shot that any of this will happen. The Bloomberg network effect has been well-documented. Everyone uses it because everyone uses it! As such, it can crowd out patience for another system. Furthermore, the wave approach isn't immediately familiar (though I have no doubt Wall Street would adopt the technology if it thought it would make more money). One might argue that, in liquid markets, information is already traveling pretty darn fast (particularly as computers cut humans out of the loop). In less liquid, over-the-counter markets, there's actually an incentive to fight transparency, since it has a direct negative impact on profitability…though the push to gain volume and sustain the market often drives it toward transparency in the end. Finally, for structured products, the process is so darn long and complicated, who cares? Just tell the lawyers to hurry up!

A final thought on Chrome OS. I watched the presentation today and was tickled by a pointed question from a member of the audience who essentially asked, "What can I do on Chrome OS that I can't do in a regular browser?" The answer was along the lines of "uh, nothing really…but you get the really fast boot-up!" From an IT perspective, however, I could see Chrome OS being a godsend. Again, because it's an open source project, a firm could build Chrome OS into a netbook for use with a distributed workforce. If you are the aforementioned firm with a strong, web-enabled infrastructure (using Wave, even!), an analyst or salesperson in the field could have instant access to most or all relevant data on the road, using either local storage or a Wi-Fi connection and VPN. Since all data on the netbook is encrypted (at least according to the keynote), it's essentially worthless (from a corporate perspective) to anyone who steals it. And netbooks are CHEAP.

Anyway, my two cents…

Written by DK

November 19, 2009 at 11:52 pm

Posted in Finance

Trefis decomposes stock price

via TechCrunch:

Started by three engineers and math whizzes from MIT and Cornell (Manish Jhunjhunwala, Adam Donovan, and Cem Ozkaynak) who did time at McKinsey and UBS bank, Trefis breaks down a stock price by the contribution of a company’s major products and businesses. For instance, 51.3 percent of Apple’s stock price is attributed to the iPhone, 25.5 percent to the Macintosh, and only 7.7 percent to iTunes and iPhone apps. Don’t agree? You can change the underlying assumptions by simply dragging lines on charts forecasting the future price of the iPhone, its market share going out to 2016, and so forth. Every time you change an assumption, the price target changes accordingly.

So let’s take a company we all love to hate, AT&T. The screenshot above shows how Trefis decomposes the company’s stock price. You can click through to get a more in-depth breakdown of AT&T’s business. There’s also a social component to the service where subscribers can contribute their own customized models.

There aren’t that many companies to choose from, but Trefis is still in the free period. I imagine users will have to pay for full access in the future. In any case, it seems like a neat toy.

Written by DK

November 17, 2009 at 12:59 pm

Posted in Finance

Stock Ticker Orbital Comparison = COOL

Care of Flowing Data, Stock Ticker Orbital Comparison (STOC) is one of the coolest representations of the market I’ve seen. Although I can’t see anyone really trading on top of this visualization metaphor, it does make one think of how correlations and other parameters might be represented via animation.

STOC was built using Processing, a Java-based visualization IDE developed at MIT. I understand Scala and JavaScript versions are in development as well. The closest Python equivalents I can think of are NodeBox and Mayavi. In any case, STOC has swerve. Respect.

Written by DK

October 13, 2009 at 11:50 am

Posted in Finance

Import AntiGravity

Just saw this…

Written by DK

October 12, 2009 at 2:38 pm

Posted in Python

Palantir Finance looks promising

Garry (one of the Posterous founders) highlights the latest offering from Palantir – Palantir Finance. It looks like it has pretty powerful charting tools. I've signed up for an account and will report back once I've fiddled with it. Although his reference to a "Make Money" button is a little naive, I'm excited to explore this new tool's data-analysis capabilities (and, of course, whether there's an API).

Written by DK

October 1, 2009 at 11:56 am

Posted in Finance

Parsing DTCC Part 1: PITA

In a previous post, I complained about the DTCC’s CDS data website and the one week lifespan of the data published there. For those of you who don’t know, the DTCC clears and settles a massive number of transactions every day for multiple asset classes. It’s one of those financial institutions that doesn’t get much press but underpins the entire capital market.

Anyway, the recent crisis motivated the DTCC to publish weekly CDS (single name, index, and tranche) exposure data. A good idea, until one realizes the data goes up in smoke when the next week’s data arrives. Although DTCC recently added links to data for “a week ago”, “a month ago”, and “a year ago,” it’s still pretty inconvenient. So, if you want the data, you have to parse it yourself. I originally wanted to write a smart parser that would dynamically react to whatever format it encountered…I came to my senses and adopted a simpler approach.

The approach thus far:

  • Download the raw HTML pages/files via curl. urllib2 is the usual way to pull web pages in Python, but I didn't have the patience to figure out how to handle redirects. curl is a command-line utility included with OS X that handles the redirects for me. So I created a short Python script that shells out to curl to download the HTML for all the tables of interest each week.
  • Use BeautifulSoup to parse the HTML. Other libraries, such as html5lib and lxml, seem to be gaining ground on BeautifulSoup, particularly as its author wants to get out of the parsing game altogether. Nevertheless, I couldn't be bothered to figure out the Unicode issues I experienced with html5lib or lxml's logic. BeautifulSoup is straightforward and "gives you unicode, dammit!" (quoting the author).
  • Use NumPy for easier data manipulation. Since my HTML, CSS, DOM, etc. knowledge is basic, I thought it would be better to use NumPy to manipulate the table data rather than rely solely on the parser. This meant flattening the HTML table data into a 1D array, cleaning it up, and generally preparing it for future reshaping. NumPy, how did I ever live without you?
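
The steps above can be sketched roughly like this. This is a minimal sketch, not my actual script: the URL and table layout are hypothetical, and it uses bs4, the current incarnation of BeautifulSoup.

```python
import subprocess

import numpy as np
from bs4 import BeautifulSoup


def fetch_html(url, outfile):
    # Shell out to curl: -L follows redirects, -s silences the progress bar
    subprocess.run(["curl", "-sL", "-o", outfile, url], check=True)


def table_to_vector(html):
    # Flatten every <th>/<td> cell of the first table into a 1D array of
    # strings, ready for cleanup and reshaping with NumPy
    soup = BeautifulSoup(html, "html.parser")
    table = soup.find("table")
    cells = [cell.get_text(strip=True) for cell in table.find_all(["th", "td"])]
    return np.array(cells, dtype=object)


# Hypothetical usage (the URL is made up):
# fetch_html("https://www.dtcc.com/some-cds-table-page", "table.html")
# vec = table_to_vector(open("table.html").read())
```

Keeping everything as a flat vector of strings defers the "which column is which" question until the reshaping step, which is handy when the tables don't all share one format.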

This would’ve been much easier if all the tables were exactly the same format. Unfortunately, that’s never the case. An extra cell here or there, or weird characters, can throw things off. This isn’t an issue if you are parsing individual pieces of data or a single table. But what if you need to parse ten, 20, 100, etc. tables? It can get ugly fast. The DTCC data is broken into 23 pages, some of which have multiple tables. Luckily, most of my pain was self-inflicted (hey, I’m a parsing virgin). I only had to account for a few different table formats in the end.

One downside to my approach is that I don't dynamically produce headers for the data I'm pulling. I plan to set the headers manually for each table (the ultimate destination for the data right now is a set of CSV files). If there's a better way, please let me know.
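For what it's worth, the manual-headers approach might look something like this. The header names below are hypothetical placeholders, not the actual DTCC column names, and `write_table` is an illustrative helper rather than code from my script.

```python
import csv

import numpy as np

# Hypothetical column names; in practice these would be set by hand,
# one list per DTCC table
HEADERS = ["Reference Entity", "Gross Notional", "Net Notional", "Contracts"]


def write_table(vec, path, headers=HEADERS):
    # Reshape the flat vector of cell strings into rows, one column per header
    rows = np.asarray(vec, dtype=object).reshape(-1, len(headers))
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(headers)
        writer.writerows(rows.tolist())
```

The `reshape(-1, len(headers))` call also acts as a sanity check: if a table had a stray extra cell, the reshape fails loudly instead of silently misaligning columns.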

You can find the code here via pastebin (feedback is welcome).
You can find the DTCC tables here (if you want to view the html source).

Part 2 will cover the process of reformatting the data with numpy and perhaps feature some charts. I’m very curious to see what the numbers show!

[A few screenshots of a terminal session using the code so far appeared here.]

Written by DK

September 15, 2009 at 7:15 am

Posted in Finance, Python

Planet Money Panel on Financial Innovation

Tyler Cowen, Felix Salmon, and Rortybomb duke it out on Planet Money:
http://www.npr.org/blogs/money/2009/08/podcast_where_financial_innova.html

Written by DK

August 26, 2009 at 10:57 pm

Posted in Finance
