Sunday, June 29, 2008

Windows Stagnation

Good article in the New York Times about the stagnation of the Windows operating system as compared to the evolution of Mac OS X.

Last year, my son was the first to venture into Apple territory with a purchase of a MacBook. My daughter, tired of the constant viruses that plague her Dell laptop, is demanding a MacBook too. So is my wife.

Can I be far behind?

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Saturday, June 28, 2008

Recent Buyouts - Tibco/SPlus, Progress/Iona

Progress (aka Computer Associates Lite (hi Louie!)) just bought out Iona, the makers of Orbix. Like most companies, we have pockets of legacy apps that use Orbix, and most companies that I know (including ours) are on a mission to replace Orbix.

A strategy that Computer Associates used to pursue (maybe they still do) was to buy legacy products that were entrenched in the mainframe-based systems that big companies used, and then to extract exorbitant license and maintenance fees from the existing customers. I am sure that Progress will not go with that kind of strategy, but it will be interesting to see what they do with Orbix.

The more interesting acquisition is Tibco's purchase of SPlus. Despite what the various pundits of the CEP world say, I still think that analytics are an integral part of the CEP stack. SPlus is used by a good part of the Quant world today. So, it is with a bit of alarm that I view the acquisition of the SPlus line by a single CEP vendor, especially since some of the CEP vendors already have (or are planning) hooks into SPlus.

I consider a "CEP-ish" product like Tibco Business Events to be in a different category than a pure CEP-play like Coral8 and Aleri. I think of Business Events as being a more workflow-oriented product, something that you would NOT use to pump Level2 quotes through and create algo apps with. There are certainly synergies between Business Events and SPlus, and the acquisition makes perfect sense from a business standpoint. However, my concern is what it means for people who are using it with other CEP engines.

Hopefully, Tibco will treat SPlus like it treats Spotfire .... kind of like an independent entity.

(Thanks to Paul for correcting me about the Tibco product names ... I originally called Business Events by another name ...)

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Wednesday, June 25, 2008

More Velocity

I had quite an interesting conversation the other day with Anil Nori, who is the "Distinguished Engineer" in charge of the Microsoft Velocity effort.

Anil comes to Microsoft from Oracle, so he is not driven by blind devotion to the Church of Microsoft. Anil is on a mission, wants to do what's right, and has a vision very similar to mine ... Velocity as the center of the "data universe".

One of the things that I was concerned about was interoperability in Velocity. It was good to know that this is a topic that Anil has been thinking about. In most investment banks, it's usually the native Java object caches that are installed, and each of the object caches has to find some way to interop with .NET clients. With Velocity, it is the other way around. Anil knows that, in order to be successful in our world, he has to provide interop with Java.

Quite truthfully, I have to say that Velocity has most .NET developers jumping up and down. I was on a call with the ".NET leadership" of my company today, and when I started talking about Velocity, everyone wanted to know how they could start playing with it. I have not seen this reaction caused by any Microsoft product in quite a while. WPF, WCF, and LINQ were greeted by "ho-hums" within my company. But Velocity was a different story.

This shows you how enterprise .NET architects and developers yearn to get the same tools that our Java brethren have had access to for a while. Yes, I know that Tangosol and Gigaspaces and Gemfire have .NET caches now. But, there is a certain glimmer in people's eyes when they hear that one of these enterprise technologies is coming from Microsoft. It's like welcoming home a long-lost relative.

Can we use Velocity to cache an entire day's worth of orders, and easily let a client retrieve any order by ID or a list of orders in a certain time range or a list of orders that use a certain algo?
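Velocity's query API is still taking shape, so here is a minimal sketch of the three retrieval shapes I have in mind, using a plain in-memory dictionary as a stand-in for the cache (the Order class and its fields are hypothetical, not part of the Velocity API):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical order record; the cache would store these by key.
public class Order
{
    public string Id { get; set; }
    public DateTime Timestamp { get; set; }
    public string Algo { get; set; }
}

public class OrderCache
{
    // Stand-in for the distributed cache region.
    private readonly Dictionary<string, Order> cache = new Dictionary<string, Order>();

    public void Put(Order order) { cache[order.Id] = order; }

    // 1) Retrieve a single order by ID - the natural key/value case.
    public Order GetById(string id)
    {
        Order order;
        return cache.TryGetValue(id, out order) ? order : null;
    }

    // 2) Retrieve all orders in a time range.
    public List<Order> GetByTimeRange(DateTime from, DateTime to)
    {
        return cache.Values.Where(o => o.Timestamp >= from && o.Timestamp <= to).ToList();
    }

    // 3) Retrieve all orders that use a certain algo.
    public List<Order> GetByAlgo(string algo)
    {
        return cache.Values.Where(o => o.Algo == algo).ToList();
    }
}
```

The interesting question is whether Velocity will eventually run queries like (2) and (3) on the cache servers themselves, instead of forcing the client to pull every object over the wire and filter locally.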

Welcome home, Microsoft.... now, don't disappoint me!

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Sunday, June 22, 2008

A few links

Scott, who works for me on the Complex Event Processing project, has created a Powershell reader for Coral8 streams.

JOS recommends that I take a look at SCRAMNET. Craig also chimes in with his thoughts (see the comments section on my recent post about Vhayu's HA).

Redmond Developer News has some tidbits from SIFMA about the IT slowdown on Wall Street. Stevan Vidich chimes in with some thoughts on CEP and realtime BI. Jeffrey Schwartz chimes in with some thoughts on Velocity. (Stevan just got a nice promotion at Microsoft. If you are reading this, Stevan ... congrats!)

Robert chimes in with a link to a resignation-letter-writer from some of the employees at Yahoo.

On a personal note, I am going to be involved again in looking at cool and interesting companies that will allow us to make more money. This is why I originally joined my current company, before taking the deep dive into the world of CEP last fall. Now that my CEP team is growing and cooking, I can start to direct part of my thoughts towards other compelling technologies.

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Friday, June 20, 2008

Vhayu and Hardware Acceleration

Vhayu's Velocity (too many Velocity's out there!) now has data compression performed by FPGAs. This looks like pretty impressive stuff, and sets the bar a bit higher for KDB.

With on-the-fly data compression, you can have more tick and trade data available for backtesting, and since the compression is done in hardware, this should reduce the latency compared with software-based on-the-fly compression.

Even though this is pretty good news, I wonder if Compliance Departments might have an issue with storing zipped data, as it is not readily available for inspection and analysis by non-Vhayu applications. But that is a trade-off that you will need to make. I am sure Ross Dubin can further enlighten us on this...

I would rename the product from Velocity to ZIP-ON-A-CHIP.


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

A Visualization of Few Words

Stephen Few's name is starting to be mentioned in hushed tones, much in the same way that Edward Tufte's name is now invoked. Stephen's website and blog are mandatory reading for anyone interested in visualizations. Few has a wonderful sense of aesthetics, and always seems to ask the hard questions about visualizations.

It was more than interesting to learn that Few had set his sights on Panopticon in a new article that is available from his website.

Panopticon hosted Few at their Stockholm headquarters for a few days. This is a fact that Few is up front about at the beginning of his article, thus removing any perception of a paid-consultancy-for-favorable-review. Among other things, Few reviewed a visualization that Panopticon came up with called a Horizon Chart.

Few relates the arc of his encounter with the Horizon Chart, from initial puzzlement to an appreciation of the visualization. I have to admit that, when the Panopticon guys demoed the Horizon Chart to me, I was similarly befuddled. It is something of an acquired taste, but as you get used to it, the Horizon Chart becomes more compelling.

The only thing that I would like to see for this particular visualization is the ability to better organize the vertical axis. For example, instead of just listing the 50 stocks vertically, I would like to be able to group the stocks by sector or by supply chain or by trading pairs, so I can quickly compare how one stock did against its peers.

Panopticon is striving for new and interesting ways to do visualizations for the financial industry. They started off with Ben Shneiderman's heatmap, and have started to progress outwards. Heatmaps are becoming more standard in the industry, and I am sure it won't be long before .NET charting vendors start to come out with their own variations of the heatmap theme.

Panopticon needs to continue to push the envelope and come up with compelling visualizations for the capital markets industry. I am anxious to see what fruit is borne out of Stephen Few's consultation.

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Accelerated Computing Solutions

Last year, former colleague Larry Cohen hung out a shingle and formed a consultancy called Accelerated Computing Solutions. The purpose of ACS is to consult on all aspects of hardware acceleration for Wall Street companies.

Larry has an interesting stack lined up for providing acceleration for trading applications. He is especially targeting algo trading.

Larry has done extensive research into the latest companies providing HA (including RapidMind, Intel, Tervela, etc), has formed partnerships with a number of these companies, and he is ready to bring his stack to life. He is looking for financial services companies who would be willing to retain his services in order to apply HA to speed up their trading apps. This might appeal to certain hedge funds who are looking to get an extra edge, and perhaps take advantage of latency arbitrage.

If you are interested in Larry, please go to his website and contact him.

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Wednesday, June 18, 2008

Tibco and Hardware Acceleration

It is interesting to note Tibco's sudden renewed interest in Rendezvous (RV). A few months ago, I had lunch with an ex-Tibco consultant who now consults independently for various Wall Street companies. This fellow told me that Tibco was really paying attention to the shots fired across its bow by companies like RTI, 29West, and Tervela. Because of this, Tibco would try to breathe some new life into RV, after spending so much focus on EMS.

Coincidentally, two people independently told me to take a look at Solace Systems. Solace seems to do XML content-based parsing and routing. This was the same route taken by Xambala, a company that I have not heard much about lately.

Hardware-based routing of XML packets would seem to fit in well with Morgan Stanley's CPS message bus. It would also fit in well with CEP engines that like to deal with XML, such as Coral8. Imagine a hardware-based XML-to-Coral8-Tuple processor. Or something that would take XML-based newsfeeds and do some kind of parsing that could be used with semantic processing engines, like Semlab. Or, can there be any kind of synergies between hardware-based XML transformations and WPF/XAML?

I would be interested to hear anyone's ideas around messaging and hardware acceleration.

Here are some links to articles about the Solace/Tibco partnership:

I hope that Tibco also starts thinking about hardware acceleration for EMS as well as RV.


As per Brian Theodore's comment below, the hardware acceleration that Solace is providing to RV does not involve any XML parsing.

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Saturday, June 14, 2008

The Coral8 Community

During the Coral8 dinner the other night, I told Terry Cunningham that a stronger sense of community would help their product. There are two things that I would like to see:

1) An annual user conference. This can coincide with the annual Gartner Conference on CEP. This year's Gartner conference is in Stamford, Connecticut, which is very close to all of the Wall Street firms (and the home to many hedge funds).

One of the Coral8 customers at the dinner was extolling the virtues of Parallel Queries. Another user was giving tricks and tips for pumping massive amounts of data into Coral8. Plus, some of the non-financial customers of Coral8 are doing some pretty cool things with event processing. There are a number of compelling stories that could be told at a users conference.

2) A user forum on the Coral8 website.

Oh ... and a full-time New York-based sales engineer would be fantastic, but I know that Coral8 is working hard on that. However, you need to be able to pass Mark's tech screening, so study up!

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Friday, June 13, 2008

Consumed with Dashboards

Here is a great site that gives examples of many dashboards:

and the accompanying blog

On another note, here is a page on recommended string methods in .NET 2.0:

I wonder if Resharper 4.0 will recommend these methods for us.

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

NCache Thread

After talking here about Velocity, and mentioning NCache, I was eager to read Jonathan Allen's new article on NCache. In the comments section, this thread was mentioned:

Hmm .......

The .NET caching space might just about surpass the messaging and market data space in terms of the mudslinging going on :-)

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

2008 SIFMA Report

I spent a few exhausting hours at SIFMA on Tuesday, the opening day.

Lots of attendees, but at times, it seemed like the marketing people outnumbered the attendees.

I spent a while at the Aleri booth, looking at their new 3.0 version. I am glad to see that they addressed some of my concerns about usability. Configuring adapters seems to be dead-easy now. They have come out with a new doc set. Their Aleri Studio has some major improvements. Jeff Wooten told me that they did not have to make too many improvements in their engine, as they felt that the engine was already pretty performant. That's good to hear .... but the STAC benchmarks will be the final authority on that.

I ended up spending quite a bit of time at the Progress Apama booth, mostly in the company of fellow STAC Council member Louie Louvas. I finally got to meet John Bates, who had been convincing me through email to come and pay Apama a visit. I got to see some MonitorScript in action. Very powerful and flexible, but as I told Louie, it is so close to C#/Java that, at first blush, it might compete against a custom C#/Nesper or Java/Esper solution. I would need to dive into Apama in depth to see it in all of its glory, but it definitely piqued my interest.

Of course, the next day, there was the blockbuster announcement of the pair-up between Wombat and Apama. Even though it must have come as a shock to Coral8, I was not surprised. Coral8 is one of the CEP companies that does not have a separate vertical application for trading. I, for one, would love for Coral8 to come out with a commercial product which shows that they eat their own dog food with regards to trading apps.

I spent some time with my old friends Neil and Stevan from Microsoft at their booth. Nothing much new going on there. I finally got to meet Robert from Panopticon. The STAC guys were there in their lab coats. Ran into Brian from Reuters, where I found out that he has taken over responsibility for the market data APIs. Nice to reconnect with all of these people. SIFMA is all about schmoozing and re-establishing connections.

Lots of messaging and hosting solutions. Not on my radar right now, although they will be next year.

I am so glad that I don't have to do booth duty at trade shows any more. My first booth was in 1984, at the very first Unix Expo at the Javits Center in New York, where I was showing off my Unix word processor.

After SIFMA closed for the day, there was a nice party given by the Coral8 folks. I was glad to meet people like Jaime, Eric, Paul, Josh, Ted, Tom, and others. I sat next to Terry Cunningham, and we swapped airplane stories. It was heartening to hear that other customers are pumping a lot of data at fast rates through Coral8 without many problems. It is also good to hear that we were not the only company that gives the Coral8 support and engineering staff a hard time.

To cap off the night, while the Coral8 party was going on, there were severe storms in New Jersey with 75 mph winds. Lots of downed trees and storm damage, and New Jersey Transit cancelled all of the trains. I had an interesting time trying to find my way home.... Next time, I won't hesitate to use the company car service.

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

SSDS and Velocity

I am talking to the SSDS team later today, but after watching some of their webcasts and looking at some of the code examples, I can see an absolutely perfect fit between SSDS and Velocity.

SSDS claims that they are ideal (right now) for apps that do not require ultra-low latency, mainly due to the transmission rates between the remote data center and the app. Velocity can sit right between the app and SSDS, so all data access goes through the cache.

Now, what we need SSDS to do is to asynchronously push updates to the SSDS containers out to the corresponding Velocity cache. This would hopefully minimize cache misses.

If used this way, then Velocity would need to adopt the SSDS concept of "Authority".

Meanwhile, I was very interested in this use of the Amazon S3 cloud by NASDAQ.

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Thursday, June 12, 2008

Consultant Wanted - UI Design, WPF (nice to have)

I have some $$$ in my consulting budget to hire a great UI designer for about 6 months to help us in our Complex Event Processing effort. We want to design a UI that will blow the traders and the business folks away. Something far removed from the endless myriad of grids that you see on most trading floors. If you know the names Tufte and Few, then you are pointing in the right direction.

For the UI, we are using WPF (ie: sorry, no web developers). We are dealing with alerting on real-time data, so experience in working with real-time is desired, but not required.

The consultant has to have a track record of designing compelling user interfaces, and prior capital markets experience is highly desirable. To be considered, the consultant needs to show us samples of prior work, and we will check references to convince ourselves that the consultant was the one who did the major design work on the UI, not someone who dragged a ChartFX control into a form.

The consultant does not have to be a developer (in fact, we would prefer someone who wasn't a developer), but has to be aware of what it takes to get your designs into code. This means that the consultant cannot give us an artistic masterpiece that is impossible to implement.

The consultant will work with us 4 or 5 days a week, ON-SITE, on the Equities trading floor in the Tribeca section of Manhattan. This means we don't want to hear from consulting firms in Toronto and India who want to know if the work can be outsourced. I am telling you right now ... we will never be outsourcing any part of this project. The consultant needs to see the whites of the eyes of our traders.

Here is a little more from Scott, who is doing UI development on my team:

The ability to gather user requirements and to design and create the visual assets of the application is critical. Comfort with XAML design tools such as Microsoft Expression Blend is highly preferred. Prior experience with Adobe Photoshop, Illustrator, Dreamweaver, and Flash is a good indicator.

Person will be responsible for designing the look and feel of different business event indicators, along with overall application styling.

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Tuesday, June 10, 2008

The Importance of Tibco EMS Monitoring

If you are using Tibco EMS, you should be aware that there is a decent tool that comes with the Tibco SDK that allows you to monitor all activity that goes on in your broker. In the directory c:\tibco\ems\bin, you will find a command-line application called tibemsmonitor.exe. If you run this utility, you can see every connect/disconnect, every creation and destruction of a MessageProducer and MessageConsumer, every creation of a topic or queue.

In a quest to optimize our code, I started spying on the interaction between EMS and our application. I found that we were creating MessageProducers too many times ... way too many times for an application that did a lot of real-time message processing.

I was curious to see what went on behind the act of creating a message producer and consumer. So, I fired up the invaluable Reflector (from Lutz Roeder) and peeked into the Tibco.EMS.dll assembly. What I found was that, every time a message producer or consumer is created, the Tibco.EMS._CreateMessageProducer function constructs a message, sends it into the broker, and synchronously waits for a response. This takes a lot of time and produces a lot of overhead.

I changed our code so that we now cache and reuse MessageProducers. I have to say that our code looks like it runs a lot faster now. A lot faster....
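The fix amounts to a per-destination cache, so that the synchronous round-trip inside CreateMessageProducer is paid only once per destination. Here is a minimal sketch; the factory delegate stands in for a call like session.CreateProducer(destination), so the same pattern works with any JMS-style API (the class and member names here are my own, not Tibco's):

```csharp
using System;
using System.Collections.Generic;

// Caches one producer per destination name, so the expensive
// create-producer round-trip to the broker happens only once.
public class ProducerCache<TProducer>
{
    private readonly Dictionary<string, TProducer> producers =
        new Dictionary<string, TProducer>();
    private readonly Func<string, TProducer> factory;

    // How many producers have actually been created (for sanity checks).
    public int CreateCount { get; private set; }

    // In real code, factory would be: dest => session.CreateProducer(...)
    public ProducerCache(Func<string, TProducer> factory)
    {
        this.factory = factory;
    }

    public TProducer GetProducer(string destination)
    {
        TProducer producer;
        if (!producers.TryGetValue(destination, out producer))
        {
            producer = factory(destination);   // the synchronous broker round-trip
            producers[destination] = producer;
            CreateCount++;
        }
        return producer;
    }
}
```

Note that in a real EMS app you would also need to think about thread safety (a lock around the lookup) and about disposing the cached producers when the session closes.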

Since the code in question was something I wrote two years ago, I took it for granted that it was working fine, since about a dozen groups in our company use this code. And, it was working fine, except that it produced a lot of needless overhead. I will gladly accept 10 lashes with a wet noodle for this one ....

The lessons learned are:

1) Optimize, optimize, optimize
2) That old code that you are using that forms the basis of your framework probably can do with monitoring.
3) Use Reflector to see what the underlying libraries are doing.

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Sunday, June 08, 2008

What is the future Wall Street Stack?

Can Microsoft eventually work its way to producing the future Wall Street Stack? A series of components that are tightly integrated and supported. Here is what I imagine:

High speed intelligent message bus (not related to any Biztalk technologies) that can transform data directly into .NET objects. The message bus might have some SQL-like filtering built into it.

The bus delivers data directly to the Velocity object cache.

Velocity is directly integrated with a CEP engine.

When events are detected, some Windows Workflow is triggered and monitored.

Velocity and the message bus have built-in, in-process adapters for market data and FIX messages.

Velocity can be hooked into apps through WCF for push notifications. WCF-based services are available so apps and users can query the state of the cache.

A Service Broker so apps can find out what kind of data is available in the cache.

A monitoring stack is available right out of the box that will monitor the entire stack and alert when abnormal conditions are detected.

How far off can this be, given Microsoft's current and future directions?

©2008 Marc Adler - All Rights Reserved

Tibco EMS Powershell Cmdlet

A little Powershell cmdlet for finding out information about your Tibco EMS topics (or a specific topic).

using System;
using System.Management.Automation;
using TIBCO.EMS.ADMIN;    // Admin, TopicInfo

namespace MarcsPowershellCmdlets
{
    [Cmdlet(VerbsCommon.Get, "TibcoTopics", SupportsShouldProcess = true)]
    public class GetTibcoTopicsCmdlet : Cmdlet
    {
        #region Parameters

        [Parameter(Position = 0,
            Mandatory = false,
            ValueFromPipelineByPropertyName = true,
            HelpMessage = "The URL of the Tibco EMS broker")]
        public string URL { get; set; }

        [Parameter(Position = 1,
            Mandatory = false,
            ValueFromPipelineByPropertyName = true,
            HelpMessage = "The UserName of the Tibco EMS broker")]
        public string User { get; set; }

        [Parameter(Position = 2,
            Mandatory = false,
            ValueFromPipelineByPropertyName = true,
            HelpMessage = "The Password of the Tibco EMS broker")]
        public string Password { get; set; }

        [Parameter(Position = 3,
            Mandatory = false,
            ValueFromPipelineByPropertyName = true,
            HelpMessage = "The name of the topic to look at")]
        public string Topic { get; set; }

        #endregion

        protected override void ProcessRecord()
        {
            if (string.IsNullOrEmpty(this.URL))
                this.URL = "tcp://localhost:7222";
            if (string.IsNullOrEmpty(this.User))
                this.User = "admin";

            try
            {
                Admin admin = new Admin(this.URL, this.User, this.Password);
                TopicInfo[] topics = admin.Topics;

                if (string.IsNullOrEmpty(this.Topic))
                {
                    // No topic specified - emit all of them to the pipeline
                    this.WriteObject(topics, true);
                }
                else
                {
                    foreach (TopicInfo topic in topics)
                    {
                        if (topic.Name.Equals(this.Topic, StringComparison.InvariantCultureIgnoreCase))
                            this.WriteObject(topic);
                    }
                }
            }
            catch (Exception)
            {
                this.WriteWarning("Cannot connect to the Tibco EMS broker");
            }
        }
    }
}

©2008 Marc Adler - All Rights Reserved

Microsoft Velocity

I have blogged a number of times about Distributed Object Caches. Almost all large financial firms have investments in object cache products. The three biggies are Gemfire, Tangosol Coherence (now part of Oracle), and GigaSpaces, and traditionally, these companies have targeted the Java and C++ marketplaces.

There have been niche products like NCache and memcached, but I have not seen an incredible amount of use of these out there on Wall Street.

If you have been reading this blog for any length of time, you know that I constantly lament the fact that Microsoft did not have what I refer to as "The Wall Street Stack".

However, last week, Microsoft landed with both big boots on the Wall Street Stack with the introduction of Velocity, their first attempt at a distributed object cache. Velocity, combined with some other efforts that Microsoft is working on (I have been sworn to secrecy on these efforts), dispels my ideas that Microsoft is ignoring enterprise technologies that matter to Wall Street, and especially, the area of trading systems.

I won't go into my thoughts on Distributed Object Caches, as I have blogged about them many times. But, I just want to list of few things that went through my mind as I read the Velocity announcement. I have a call this week with the Velocity team, and I hope to ask them these questions:

1) Will it remain free? I hope so. For the companies who pay lots and lots of money to Gemstone, Tangosol, and GigaSpaces, a free entry into this arena from a major vendor is sure to make the other vendors think about lowering their prices. In these economic times, financial companies will certainly welcome the chance to embrace this technology without having to spend a lot of money.

2) Will it be supported? If so, for how long, and by what product group? Microsoft is well known for coming out with technology that they let languish or deprecate. Right now, it seems like there is a small group in Microsoft that is working on Velocity. Does Microsoft have the appetite to build a support organization to properly support the product?

3) What are the synergies with other Microsoft technologies? LINQ? SQL Server? Gemfire has a SQL query language which you can apply to the data in their cache.

4) The Velocity team already said that they do not have push notifications yet. But, when they do, can we integrate a version of streaming LINQ (CLINQ?) with it? Push notifications are an extremely important feature that Velocity is missing right now, and it seems that a lot of people have told the Velocity team that this is a major shortfall.

5) Support for different messaging systems. Once we are able to get push notifications from Velocity, can we use JMS/Tibco EMS or RV?

6) Interface with Grids. It is no secret that the most prevalent use of object caches on Wall Street is with Grid Computing. How will Velocity interface with Compute Cluster? How about Digipede (if I know John Powers, he probably has support for Velocity already)?

7) Interoperability with non .NET-based systems?

8) What external database systems do they support? Will they support Oracle and Sybase? How about KDB?

©2008 Marc Adler - All Rights Reserved

Sunday, June 01, 2008

Bravo, Tim

©2008 Marc Adler - All Rights Reserved

SIFMA again

It's that time of the year again, and the second week of June usually means the annual SIFMA conference at the Hilton in New York. June 10th is the first day of the conference, and the one that features the good parties. I will be walking around the show that day, most likely with my badge in my pocket so that I don't get accosted by vendors.

It will be interesting to hear how vendors are coping with the IT spending slowdown in Capital Markets. You cannot turn around without hearing of another round of layoffs or budget cuts at some financial firm. There will be 7,000 fewer people in the financial sector in a few weeks, as JPM severs the tie with half of Bear Stearns. Lehman and Morgan Stanley have been having stealth layoffs. (I am quite amazed at some of the people who have been laid off from Morgan Stanley in the past few weeks, including a few Executive Directors.) One site is reporting that most IBs have cut consulting rates by 10 to 20%.

However, I am hoping for a good show, and a chance to reconnect with former colleagues and friends. I am most interested in cool data visualizations and analysis tools, new products in the CEP space, and most of all, T-Shirts !!!

©2008 Marc Adler - All Rights Reserved

On Code Optimization and the Laziness of Developers

A very wise man, someone who runs the group that owns our more important trading framework, once said to me “I don’t know why we spend all of this effort in investigating Hardware Acceleration. If we actually went into our code, refactored it, and optimized it, we could probably speed up the end-to-end performance by 50%.”

I have always maintained that the continued use of high-level, object-oriented programming languages has made the average programmer lazy. Perhaps it is the pressure of continually churning out releases, but some of the code that I have seen lately is abysmal. Game programmers would cringe if they ever saw some of this code. So would a good number of people who started their lives programming in C (and even C++).

I started programming professionally in 1984. My first commercial product, a word processor for UNIX and the PC, had to run in 128K of RAM. I can’t tell you the number of hours I spent riding the subway, looking over a file of code to see what optimizations I could make in a function or two. That whole optimization-conscious culture seems to have fallen by the wayside.

However, take heart, Mr. Corporate Programmer. It’s not only your code that I am finding fault with.

Last Friday, I dropped down from Manager Mode into Code Optimization Mode. I wanted to optimize a particularly crucial part of our system. Our system is written in C#, so we are using the .NET SDK that is provided by one of the vendors. I notice that vendors who provide SDKs for Java, C++, and C# usually tend to treat the C# SDK as a second-class citizen. I have blogged about this before, and in this case, I felt that it was no different with this vendor. As I debugged into the vendor code, I was amazed at some of the things that I found, and in about 30 minutes worth of time, I had fired off six emails to the vendor.

I found things like this:

[Snippets of Code removed for now]

However, I just scratched the surface of their .Net SDK. I did not give it a full profiling, nor did I bother to give it a code review. This is not my job. It is the vendor’s job to do this.

Just to make sure that I wasn’t bitching about nothing, I showed the code to a colleague who runs one of our departments. He was an old C game developer way back when. When I showed him the code, he shook his head, and said that there was no excuse for this laziness.

Since this code is called about 2 million times per day, I am very concerned. If I found these kinds of anomalies in source code that is made public, what could be lurking under the covers of the actual engine of the product?

In summary, I have some advice for vendors and for coders everywhere:

  • 1) Continually optimize and refactor
  • 2) Put your code through a profiler. Use the .NET CLR Profiler. Use DevPartner Studio. Use the built-in profiler in Visual Studio 2008.
  • 3) Don’t treat C# as a pariah. If you are going to put out a .NET SDK, make sure that it is just as good as your Java and C++ SDKs. And, when you refactor your Java SDK, refactor the C# SDK as well.
  • 4) Quality control. In this case, we see that the vendor did not do their due diligence with regards to the .NET SDK. Code coverage and unit tests must be done.
  • 5) If you make an enhancement in one superclass, make sure you enhance all of the other superclasses as well.
  • 6) Code reviews. Get more pairs of eyes on that code!

For my part, I just had one of our junior developers refactor some of the code that parses our FIX messages and converts them into C# objects. He replaced 3 calls to string.Substring() with one call to string.Split(). After running some tests, he said that we only got a 2% improvement. For code that is called a few million times per day, I will gladly take that 2%, not to mention fewer small objects going onto the heap.
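As a rough sketch of that refactoring (the tags and field layout here are simplified; real FIX parsing has many more edge cases), one Split over the SOH-delimited message replaces the repeated IndexOf/Substring arithmetic:

```csharp
using System;
using System.Collections.Generic;

public static class FixParser
{
    private const char Soh = '\u0001';  // FIX field delimiter

    // One Split over the whole message instead of walking it with
    // repeated IndexOf/Substring calls per field.
    public static Dictionary<string, string> Parse(string message)
    {
        var fields = new Dictionary<string, string>();
        foreach (string field in message.Split(new[] { Soh },
                 StringSplitOptions.RemoveEmptyEntries))
        {
            int eq = field.IndexOf('=');
            if (eq > 0)
                fields[field.Substring(0, eq)] = field.Substring(eq + 1);
        }
        return fields;
    }
}
```

Note that Split still allocates one string per field, so for truly hot paths the next step would be to parse in place over the raw bytes and skip the intermediate strings entirely.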

We need to start working on code coverage, but right now, we need to get a system out there to the traders .... and we have the luxury that nobody except us will be looking at our code!

©2008 Marc Adler - All Rights Reserved