Thursday, September 28, 2006

IBM WebSphere Front Office for Financial Markets

Discussed here on IBM's website.

and from the press release on IBM's Haifa Research Lab (bold text is my highlighting):

...IBM announced the availability of WebSphere Front Office for Financial Markets, a flexible, high-throughput, low-latency platform. The WebSphere Front Office platform is built on an award-winning portfolio of IBM middleware products that provide an integrated environment optimized for high-volume trading.

Several innovative technologies from the IBM Research Lab in Haifa enabled the platform's performance characteristics and the high availability support including detection, notification and recovery.

"This is IBM's first appearance in the financial front office space for stock exchanges and large institutional customers, which is characterized by extreme data rates measured in hundreds of thousands of messages per second, and by sub-millisecond delivery latency requirements."

The Reliable Multicast Messaging (RMM) technology and TurboFlow technologies have enabled IBM to address these performance goals and to build an infrastructure that supports the extremely challenging demands of front office financial customers. In addition to high throughput and low latency, RMM is characterized by significant scalability that allows the delivery of financial information to multiple traders at the same time.

Combined with the ITRA (Inter-Tier Relationship Architecture) technology, it allows for sub-second data stream failover.

Considering that OPRA (options) data is forecast to be coming in at 456,000 messages per second, it would be interesting to see whether this new product could handle it.

An article in the Inside Market Data newsletter makes specific mention of competition against Reuters and Wombat.

©2006 Marc Adler - All Rights Reserved

Sunday, September 24, 2006

Wanted: A Data Pumping Tool

In the companies that I have been involved with in my Wall Street consulting career, it is remarkable how many systems do not have Unit Testing set up.

Concepts like Unit Testing, TDD, code metrics, etc. are just starting to make their way into development groups in IBs. However, one area that has been ignored is stress and soak testing.

One of the tools that we need is what I refer to as a generic Data Pumper. This is a service that can be run to generate data of a certain shape, and pump the data into waiting applications. Some types of data that we may need to pump include quotes, executions, risk, etc.

Here are the features that I would like to see from a Data Pumper:

Playback Modes

We need to have the data replayed in certain temporal patterns. We can also apply a distribution curve to the replay interval.

- Burst Mode: Play back data all at once, as fast as we can.
- Interval Mode: Play the data back at certain intervals. For example, play back 500 messages per second. We can also put some sort of distribution on the interval, so that the intervals are shortest at the beginning and at the end of the playback period (simulating the bursts at market open and close).
- Timed Mode: This would cause playback at the exact timings that actual data was generated. In this mode, we would have to first capture real data and record the exact time that the real data was received. Then we would play back the simulated data using the timings of the real data.
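The three playback modes above can be sketched in a few lines. This is a minimal illustration, not a real pumper: I am assuming messages arrive as (capture-time, payload) tuples and that delivery is just a `send` callback standing in for whatever transport is configured.

```python
import time

def pump(messages, send, mode="burst", rate=500):
    """Replay pre-captured messages through the send callback.

    messages: list of (capture_time_in_seconds, payload) tuples
    mode: "burst"    - send everything as fast as possible
          "interval" - send at a fixed rate (messages per second)
          "timed"    - honor the original inter-message gaps
    """
    if mode == "burst":
        for _, payload in messages:
            send(payload)
    elif mode == "interval":
        delay = 1.0 / rate
        for _, payload in messages:
            send(payload)
            time.sleep(delay)
    elif mode == "timed":
        for i, (ts, payload) in enumerate(messages):
            if i > 0:
                # sleep for the gap observed between the real messages
                time.sleep(max(0.0, ts - messages[i - 1][0]))
            send(payload)
    else:
        raise ValueError("unknown mode: " + mode)
```

A distribution curve for Interval Mode would replace the fixed `delay` with one drawn per message from whatever curve simulates the open/close bursts.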


Transports

We need to be able to configure the transport mechanism by which the data is delivered to the waiting application.

- Tibco RV or EMS (right now, most IBs use Tibco for the distribution of high-frequency data)
- LBM (a Tibco competitor)
- Sockets (or SmartSockets)
- MQ or MSMQ
- CPS (Morgan Stanley)

Data Generation

- Capture actual data for several days in order to provide some reference data
- We can tag certain fields for random data generation. For example, we can vary the prices of the various instruments.
- We can generate completely random data.
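The "tagged fields" idea is the interesting one: replay real captured data, but perturb the fields we have marked for randomization so every run looks slightly different. A rough sketch, assuming captured messages are plain dicts and the perturbation is a simple +/- percentage jitter (both assumptions are mine, not a spec):

```python
import random

def generate(reference, tagged_fields, jitter=0.01):
    """Yield copies of captured reference messages, with each tagged
    numeric field randomly perturbed by up to +/- jitter (fractional).
    Untagged fields pass through untouched."""
    for msg in reference:
        out = dict(msg)
        for field in tagged_fields:
            out[field] = round(msg[field] * (1 + random.uniform(-jitter, jitter)), 4)
        yield out
```

For example, tagging just "price" lets us vary the prices of the various instruments while the symbols and sizes stay faithful to the captured day.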


Message Formats

XML is used in many places, but it carries the latency cost of deserialization. Binary objects are fast, but necessitate a homogeneous environment.

- Tibco binary message map
- delimited strings
- binary object
- Fixed-length ASCII
- Reuters (Craig will tell me about the legality of simulating data in Reuters format)
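To make the trade-off concrete, here is what the delimited-string option might look like for a quote message (field layout and delimiter are my own invention for illustration). Cheap to produce and human-readable on the wire, but the consumer pays a parse cost on every single message:

```python
def encode_quote(symbol, bid, ask, delim="|"):
    """Serialize a quote as a delimited string."""
    return delim.join([symbol, f"{bid:.2f}", f"{ask:.2f}"])

def decode_quote(wire, delim="|"):
    """Parse a delimited quote back into typed fields --
    this split/float work happens once per message, per consumer."""
    symbol, bid, ask = wire.split(delim)
    return symbol, float(bid), float(ask)
```

Multiply that per-message parse cost by hundreds of thousands of messages per second and the appeal of a binary object format in a homogeneous environment becomes obvious.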

Other Considerations

- Instead of sending data directly to the end application, we can send it to an object cache, and let the object cache handle distribution.

- We need a GUI for monitoring the transmission of data, and controls to let the user dynamically modify the timing intervals.

- We need to have probes in the target application so we can monitor its performance in real time under various loads.

Thursday, September 21, 2006

Decode the Marketing Blurb

Here is a fun game for all of you. A certain vendor sells decision systems over a bunch of vertical industries. Here is a blurb from one of their webpages that outlines their offerings for the financial industry:

Modeling: Price and Risk Models
We model the equity market as an open, irreversible, far from equilibrium thermodynamic model subject to dynamic constraints. This approach results in a bi-linear model composed of two dynamical sub-models: price evolution and risk evolution. The price evolution sub-model represents the behavior of pricing of commodities and a market aggregate as a function of exogenous demand and control actions. The risk sub-model represents the behavior of risk as a function of exogenous uncertainty and actions. Further, the risk sub-model represents the uncertainty range of the values computed by the price evolution model.

The game here is to decode the blurb and tell me what this system does.

©2006 Marc Adler - All Rights Reserved

Sunday, September 17, 2006

Microsoft ESB

Microsoft in the Enterprise Service Bus (ESB) space? It should be an interesting development to watch, especially for shops that are heavily tied to Tibco RV and EMS. Microsoft will have to really exceed Tibco EMS's performance in order for people to take notice. Also, Microsoft will have to throw the EMS people a bone and support JMS. I might suggest that Microsoft come out with patterns to support synchronous calls over JMS easily.

I can imagine some interesting tie-ins with SQL Server and Excel. You could have database events published on the message bus. You could also have Excel subscribing to the message bus in order to receive real-time stock quotes and position updates, and Excel publishing risk calculations back out to the bus. If Microsoft were to have this trio (DB, Excel, bus) tied in seamlessly, then this would show Wall Street a real commitment.

Are you an RV, EMS, or Sonic shop? What would it take for you to transition to a Microsoft ESB?

By the way .... A few weeks ago, I asked a Microsoft rep what they are looking at for messaging, and they said that they will be supporting WS-Events. Is this an alternative to JMS for async messaging? What we don't need is to divide the messaging community at this point.

©2006 Marc Adler - All Rights Reserved

Saturday, September 16, 2006


I am starting an evaluation of object cache technology, starting with GemFire. The target app is a legacy C++ app, so the fact that Gemstone has a C++ version of GemFire is a big plus. They also have .NET bindings, and I will be checking those out too.

One gotcha .... GemFire does not work on Windows 2000 because of underlying dependencies on 29West's LBM message broker. This is a real nasty surprise if you want to do an evaluation on your desktop at work, but your company is still in Windows 2000-land. So I had to load GemFire onto my home laptop, which runs XP Pro, and will do the evaluation there.

The plan is to use the object cache as a "data fabric" in order to speed up some of our calc engines. Object caches like GigaSpaces are already used in a lot of Wall Street IBs for just that purpose. I have heard a little rumbling about GigaSpaces implementations from some former colleagues, so we are hoping that GemFire will be worry-free. Already, I am impressed with their support staff (thanks to Mike and Santiago for timely responses).

I would be interested in comments from any of you people who have evaluated or used object caches in your financial apps, especially C++ or C# apps. Feel free to comment here or email me privately.

©2006 Marc Adler - All Rights Reserved

Friday, September 08, 2006

Roy Buchanan

If you are a fan of guitar, blues or just plain old great music, check this out.

I am not a guitarist nor do I really like the blues, but like everyone else who saw this on YouTube, it hit me in the right place. I actually saw Roy years ago when a school chum got tickets to a taping of ABC's In Concert TV Series. The bill was Uriah Heep, Roy, The Persuasions, and Savoy Brown. Roy blew everyone away.

I continue to be astounded by the things I find on YouTube. My YouTube id is VanderTop2, so you can browse my Favorites list and see what kind of things I am unearthing.

©2006 Marc Adler - All Rights Reserved

Thursday, September 07, 2006

Celllllllllebration, Yeah, Come On!

We have been given the go-ahead for the .NET Client Framework!

Thanks to all who contributed ideas, both publicly and privately. And thanks to various open-minded individuals at my IB.

My ex-colleague, Chris, will be embarking on an effort to do the same at another IB. Will it be Service Locator vs Spring.Net? Stay tuned!

Beers are on me.....

©2006 Marc Adler - All Rights Reserved

Sunday, September 03, 2006

Thoughts on Performance in Trading Systems and .NET

Rico has done it again with a thought-provoking post.

With the advent of object-oriented languages (C++) and higher-level languages (C#, Java), most developers try to craft elegant, object-oriented frameworks, complete with reflection, heavyweight classes, and lots of properties. I am one of these developers.

However, I remember the days when I had to cram all of the functionality of New York Word (my very first commercial product) into 512K of RAM. Poring over code, trying to see if I could save a few bytes here and a few bytes there. Looking at the output of the Microsoft linker to see if I could save space. Looking over the disassembly of a crucial module, such as the one that performed word wrapping.

In the next few years, we are going to start seeing a predicted rate of 456,000 messages per second from some of the market data feeds. The goal is to get these messages, transformed into viable data, into another, trader-facing system, with as little delay as possible. There are additional needs to get this data into algorithmic trading systems and black-box trading systems with the lowest possible latency. The time taken to serialize and deserialize data, construct objects, and perform garbage collection can mean a precious few milliseconds added onto the path that your data takes. If you are writing array-handling code in a managed environment, then the time it takes to perform array checking might add further delay. Even the slightest delay can mean millions of dollars in lost opportunities.
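The object-construction cost is easy to demonstrate in any managed runtime. This is not .NET, but a rough Python sketch of the same idea: measure the per-tick cost of allocating a fresh quote object for every message versus mutating one reused instance (the `Quote` class and the numbers pumped through it are made up for the benchmark):

```python
import time

class Quote:
    __slots__ = ("symbol", "bid", "ask")
    def __init__(self, symbol, bid, ask):
        self.symbol = symbol
        self.bid = bid
        self.ask = ask

def time_per_message(n=100_000):
    """Return (alloc_cost, reuse_cost) in seconds per message."""
    # One fresh object per tick: allocation now, garbage collection later
    start = time.perf_counter()
    for _ in range(n):
        q = Quote("IBM", 81.00, 81.02)
    alloc = time.perf_counter() - start

    # One preallocated object, mutated in place on every tick
    q = Quote("IBM", 0.0, 0.0)
    start = time.perf_counter()
    for _ in range(n):
        q.bid, q.ask = 81.00, 81.02
    reuse = time.perf_counter() - start
    return alloc / n, reuse / n
```

Per message the difference is fractions of a microsecond, but at hundreds of thousands of messages per second, and with the deferred garbage-collection bill for all those short-lived objects, it is exactly the kind of overhead the paragraph above is worrying about.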

An incredible amount of effort has been spent writing high-performance, low-latency graphics toolkits for rendering objects in video games. Have similar efforts been made to "render" the flow of market data, where every market tick counts?

I would love to hear about any experiences that you have had in getting market data into a client as fast as possible. Things like compressing data, conflating data, choice of transport, choice of GUI grids, high-performance threading, etc.

Microsoft has oodles of articles that deal with performance in .NET. I am anxious to see any performance improvements throughout the entire .NET stack. I am also interested to see how .NET/C#/C++ stacks up against a similar Java/Linux stack, given the same hardware. The PetShop demo might work well for a typical developer, but for trading systems developers, we need to see something a bit more substantial.

©2006 Marc Adler - All Rights Reserved

On the Beach

I have been trying to fly out to a beach for the past few weeks, but every time I reserve a plane, the weather craps out. Had a plane reserved for today, but we have the remnants of Hurricane Ernesto. It is supposed to be an amazing day tomorrow, so I will try to make Ocean City, NJ.

There are a few beaches on the East Coast that you can fly to. Our favorite is Block Island, Rhode Island, right off the eastern tip of Long Island and about 1h15m from MMU. For everyone else, Block Island is reachable only by ferry, and hence the beaches are less crowded than your typical East Coast beach.

Other beaches that have airports within walking distance include Provincetown (Mass.), Ocean City (NJ), and Katama (Martha's Vineyard).

©2006 Marc Adler - All Rights Reserved

Market Data Tech Conf - NYC - Sept 28,29 2006

Hope Craig sees this one.

We should have more fans of Market Data where I work, seeing how Craig kept everyone until after 5PM on the Friday before Labor Day with an amazing lecture on Market Data.

In the past, I have just built trader workstations that merely hooked up to a market data feed through some sort of communication mechanism (sockets, Tibco RV, EMS) without having to be concerned about the ins and outs of the data. However, Craig has been doing market data for 15 years, and it's a pleasure to work alongside someone who is so passionate about the area.

©2006 Marc Adler - All Rights Reserved

Investment and Trading System Documentation Project

While browsing through the FIX forums, I saw mention of an effort called the Investment and Trading System Documentation Project. Interesting idea, but it looks as if it never really got off the ground. Still, they have a repository of some articles on electronic trading.

©2006 Marc Adler - All Rights Reserved