Chris is the new columnist for Automated Trader Magazine.
C'mon Chris .... when are you going to get that venture funding!
©2006 Marc Adler - All Rights Reserved
Wednesday, December 27, 2006
Tuesday, December 26, 2006
Penn-Lehman Automated Trading Project
Here
Plus, an internship at the Prop Trading group at Lehman.
Another reason to make sure that your kids study hard for the SATs.
©2006 Marc Adler - All Rights Reserved
News Mining Agent for Automated Stock Trading
Here is a thesis on semantic recognition of financial news items.
... as we slowly move towards phasing out traders (as per IBM's prediction).
©2006 Marc Adler - All Rights Reserved
Coding for Humanity
At this time of year, one's thoughts sometimes turn to the larger things in life. You might ask yourself what your legacy is going to be. Do your coding and architecture skills somehow contribute to the greater good of the world, and will humanity benefit from your efforts?
I never put this on my resume, but I consulted part-time for 4 1/2 years for a company called Classroom Inc. From January 1997 to October 2001, I would devote part of my time to writing "simulations" for CRI, and probably over one million school children have used my programs.
Classroom Inc (CRI) was originally formed as a non-profit partnership between IBM, JP Morgan, and Bear Stearns. Lewis Bernard, who was very high up at JPMC, was the CEO of Classroom Inc. The mission was to provide educational computer software to inner-city and rural schools where the children could benefit from an "alternative learning experience".
Each one of the "simulations" was an interactive game, where the student was put in a certain life situation. For example, one simulation put you in the role of a bank manager, while another put you in the role of the CEO of a paper company. Each simulation consisted of 12 or 15 "chapters", where each chapter was devoted to a certain issue.
When I started consulting for CRI, each simulation went out to over 100,000 students, and by the time I finished up, each simulation reached more than 250,000 students.
The entire framework was written in MFC/C++. A lot of the internal design came from the old Macromedia Director, which was very popular at the time for creating interactive storyboards. A typical simulation took 6 to 8 months to write. The team consisted of a producer, 2 writers, 2 artists, a QA tester, and myself. Every few weeks, I would get a ZIP file in the mail consisting of the script for a new chapter, and all of the graphics, plus some haphazard directions for one or more "activities" that the kids would have to do in the chapter.
My involvement with CRI ended in 2001 when IBM, who was one of the partners, decided that they wanted to move to a new, internal Flash-based system, and wanted to end all C++ development. It took the IBM consultants quite a while to get the first simulation out there in Flash, but they finally managed to recreate all of the simulations and re-release them.
I was proud to have been a part of this effort for such a long time.
I was inspired to write this by a post that was on Joel on Software a few weeks ago. Joel talked about some cool new jobs that his jobsite was advertising, and one of the jobs dealt with medical technology, which is one of the noblest enterprises. Another worthy position is at DonorsChoice, which reminds me a bit of CRI.
©2006 Marc Adler - All Rights Reserved
Automatic Resource Refactoring Tool
Here
Move all of your hard-coded strings automatically into resource files.
©2006 Marc Adler - All Rights Reserved
Skyler Technology
Yet another entry in the time-series, object-cache, feed handler world ... Skyler Technology's C3 Database. This is definitely a hot area to be in right now, as Skyler, Vhayu, Streambase, etc all seem to be vying for a piece of the pie.
Skyler has a nice use case for order book management.
©2006 Marc Adler - All Rights Reserved
Saturday, December 09, 2006
DevExpress XtraPivotGridSuite
Recommended by a colleague who is very interested in OLAP tools....
How long before Infragistics comes up with something similar?
©2006 Marc Adler - All Rights Reserved
OLAP/Analysis Services
Had a very interesting presentation from Microsoft on Analysis Services and OLAP.
What are traders and risk managers using OLAP for at your bank?
Any experience using OLAP in a real-time scenario to perform up-to-the-second reporting? Our feeling is that OLAP cannot be used successfully in a real-time environment unless you have very small cubes.
©2006 Marc Adler - All Rights Reserved
Wednesday, December 06, 2006
FPG911
Brilliant quote from a colleague:
If you need time off from work to test-drive a Porsche, just tell your boss that you are investigating hardware acceleration.
©2006 Marc Adler - All Rights Reserved
Less Reliance on Vendors
Even though there is a lot to be desired about working at Morgan Stanley, I must say that they have the right idea about lessening their reliance on vendors. Their EIA group has its own XML-based pub/sub message bus (CPS), its own market data infrastructure (Filter), its own .NET client-side framework, and more. What they have done is cut companies like Tibco out of the loop, and they are no longer beholden to vendor release cycles, upgrade fees, and huge licensing costs. Morgan owns the source, and has the staff to maintain and enhance their IP. In fact, a friend of mine at Morgan told me that, if they wanted to, Morgan could take CPS and give Tibco a run for their money.
©2006 Marc Adler - All Rights Reserved
Sunday, December 03, 2006
CAB, EventBroker, and Wildcards
I will be blogging about the CAB EventBroker soon. But, I think that I like mine (previously published here) better. I would like to see wildcard support in the EventBroker's subscription strings.
In CAB, you can define an event to be published like this:
[EventPublication("event://Trade/Update", PublicationScope.Global)]
public event EventHandler<InstrumentUpdatedEventArgs> TradeUpdated;
........
public void TradeIsUpdated(Trade trade)
{
if (this.TradeUpdated != null)
{
this.TradeUpdated(this, new InstrumentUpdatedEventArgs(trade));
}
}
In some other module, you can subscribe to an event like this:
[EventSubscription("event://Trade/Update")]
public void OnTradeUpdated(object sender, InstrumentUpdatedEventArgs e)
{
}
I need wildcards. I might like to have a function that gets called when any operation happens to a Trade object. So, I would like to see subscription topics like these:
"event://Trade/*"
or
"event://Trade"
Both of these would cover the case when any operation happens to a trade. The subscription string would catch the following topics:
event://Trade/Updated
event://Trade/Deleted
event://Trade/Created
I need Tibco-lite as my internal message bus.
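CAB's EventBroker matches subscription topics by exact string comparison, so wildcard behavior like the above would have to be layered on top. Here is a minimal sketch of the matching logic itself; TopicMatcher is a hypothetical helper, not part of CAB:

```csharp
using System;

// Hypothetical helper: matches a subscription pattern against a published topic.
// Both "event://Trade/*" and the bare prefix "event://Trade" should match any
// event://Trade/... topic (Updated, Deleted, Created, etc.).
public static class TopicMatcher
{
    public static bool Matches(string pattern, string topic)
    {
        if (pattern.EndsWith("/*"))
        {
            // Keep the trailing '/' so "event://Trade/*" doesn't match "event://Trades/X".
            string prefix = pattern.Substring(0, pattern.Length - 1);
            return topic.StartsWith(prefix, StringComparison.Ordinal);
        }

        // A bare prefix matches itself or any child topic.
        return topic == pattern ||
               topic.StartsWith(pattern + "/", StringComparison.Ordinal);
    }
}
```

A wildcard-aware broker would run every published topic through a check like this instead of a dictionary lookup, which is exactly the trade-off Tibco-style subject matching makes.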
©2006 Marc Adler - All Rights Reserved
Saturday, December 02, 2006
CAB and Status Bars?
I have not gotten into the Smart Client Factory yet (preferring to learn the underlying CAB framework), so this question might be answered by the SCSF .... but has anyone made a UIExtensionSite for a statusbar object yet?
©2006 Marc Adler - All Rights Reserved
Grid Computing and UBS
Grid computing webcast here, featuring people from UBS, Microsoft and Digipede.
Our head quant tells us that UBS has an internal website that can price exotics amazingly fast. Something that all of us can strive for...
©2006 Marc Adler - All Rights Reserved
Thursday, November 30, 2006
CAB and WorkItems
WorkItems
A WorkItem is considered to represent a “use case” in CAB terminology. Ignore this. It is really just a container of other kinds of objects along with some state information.
A CAB application has a tree of WorkItems. The main CabApplication class contains a reference to the root WorkItem, which is referred to by the RootWorkItem property. Given a WorkItem, you can go up one level to its ParentWorkItem, or down to the next level by accessing the workItem.WorkItems collection.
Recall that the main application class is defined like this:
class CABQuoteViewerApplication : FormShellApplication<QuoteViewerWorkItem, MainForm>
The first argument in the generic’s argument list is the type of WorkItem that will be the RootWorkItem of our entire application. All other WorkItems will be descendants of this RootWorkItem. A WorkItem has access to any of its descendants’ properties; however, sibling WorkItems cannot access each other’s properties directly.
A WorkItem also contains various other collections. It has collections of:
- Workspaces
- SmartParts
- Commands
- EventTopics
- Services
- Items (you can stick any object in this collection, including state, views, etc)
You can Activate/Deactivate a WorkItem, Terminate it, and persist it (Save and Load).
There is a virtual function called OnRunStarted() that you can override in order to create views, read data, etc.
WorkItemExtensions
A WorkItemExtension is a way of extending the behavior of a WorkItem without having to change the WorkItem’s code nor resorting to subclassing the WorkItem. It is a class that just receives certain events that happen to the associated WorkItem. These events are:
Initialized
RunStarted
Activated
Deactivated
Terminated
To extend a WorkItem, you need to create a new subclass of WorkItemExtension. Then you need to create an object of that class; this object is associated with an underlying WorkItem. Then you just handle certain events that happen to the WorkItem.
public class QuoteViewWorkItemExtension : WorkItemExtension
{
public QuoteViewWorkItemExtension() {}
protected override void OnActivated()
{
PerformanceTimer.StartPerformanceTiming();
}
protected override void OnDeactivated()
{
PerformanceTimer.StopPerformanceTiming();
}
}
And, to use this new WorkItemExtension, you would do the following:
QuoteViewWorkItemExtension wix = new QuoteViewWorkItemExtension();
wix.Initialize(myWorkItem);
This concept is very much akin to the “advice” that AOP containers like Spring.Net provide for you, except that you do not have any runtime code injection.
©2006 Marc Adler - All Rights Reserved
CAB and Workspaces
Workspaces
You might be familiar with various kinds of layout managers that automatically arrange the windows that the manager contains. If you are a Java developer, you might be used to layout managers like the FlowLayout manager and the GridLayout manager. The layout manager works in conjunction with a container. The container holds the controls, and the layout manager positions and sizes the controls as they are added to the container.
In CAB, we have Workspaces and SmartParts. A Workspace is a container for holding SmartParts. A WorkItem contains a list of zero or more Workspaces, so you can have workspaces within workspaces.
Most CAB applications will need to create a root Workspace within the main form.
The different kinds of Workspaces in CAB are:
- WindowWorkspace: a vanilla area for holding SmartParts
- DeckWorkspace: stacks SmartParts in an overlapping manner
- MdiWorkspace: a regular MDI container; derives from WindowWorkspace, and automatically creates a Form to hold each SmartPart
- TabWorkspace: tabbed windows
- ZoneWorkspace: allows tiling of window areas; good for implementing an Outlook-type layout
There are two ways of adding zones. One is to use the Visual Studio .Net designer, and drag a workspace from the Toolbox onto a form.
The other way is to dynamically create the workspace in the FormShellApplication’s AfterShellCreate() override.
Here is an example of creating various types of workspaces using the second method (the code is “unwound” for the sake of this article):
using System;
using System.Windows.Forms;
using CABQuoteViewer.WorkItems;
using Microsoft.Practices.CompositeUI.SmartParts;
using Microsoft.Practices.CompositeUI.WinForms;
namespace CABQuoteViewer
{
class CABQuoteViewerApplication : FormShellApplication<QuoteViewerWorkItem, MainForm>
{
private IWorkspace m_workspace;
private QuoteViewerWorkItemExtension m_quoteWorkItemExt;
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
new CABQuoteViewerApplication().Run();
}
protected override void BeforeShellCreated()
{
base.BeforeShellCreated();
if (this.RootWorkItem != null)
{
this.m_quoteWorkItemExt = new QuoteViewerWorkItemExtension();
this.m_quoteWorkItemExt.Initialize(this.RootWorkItem);
}
}
protected override void AfterShellCreated()
{
base.AfterShellCreated();
this.CreateWorkspace("Deck");
}
private void CreateWorkspace(string wsTypeName)
{
if (wsTypeName == "Mdi")
{
this.m_workspace = new MdiWorkspace(this.Shell);
this.RootWorkItem.Workspaces.Add(this.m_workspace, "ClientWorkspace");
}
else if (wsTypeName == "Tab")
{
this.m_workspace = this.RootWorkItem.Workspaces.AddNew<TabWorkspace>("ClientWorkspace");
TabWorkspace tabWorkspace = this.m_workspace as TabWorkspace;
tabWorkspace.Dock = DockStyle.Fill;
this.Shell.Controls.Add(tabWorkspace);
}
else if (wsTypeName == "Deck")
{
this.m_workspace = this.RootWorkItem.Workspaces.AddNew<DeckWorkspace>("ClientWorkspace");
DeckWorkspace deckWorkspace = this.m_workspace as DeckWorkspace;
deckWorkspace.Dock = DockStyle.Fill;
this.Shell.Controls.Add(deckWorkspace);
}
else if (wsTypeName == "Zone")
{
this.m_workspace = this.RootWorkItem.Workspaces.AddNew<ZoneWorkspace>("ClientWorkspace");
ZoneWorkspace zoneWorkspace = this.m_workspace as ZoneWorkspace;
zoneWorkspace.Dock = DockStyle.Fill;
this.Shell.Controls.Add(zoneWorkspace);
}
else
{
throw new Exception("Cannot create workspace");
}
}
}
}
When you add a SmartPart to a Workspace, you can pass along hints that tell the Workspace how to layout and decorate the SmartPart. The Workspace class has functions for
- Showing a SmartPart (which also adds the SmartPart to the Workspace as well)
- Hiding a SmartPart
- Activating a SmartPart
- Closing a SmartPart
There are events that get fired when a SmartPart is activated within a Workspace, and when a SmartPart is closing within a Workspace.
You can create new, custom workspaces in CAB, and a later article will cover this.
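To make the Workspace/SmartPart contract concrete, here is a small self-contained sketch of the deck-style behavior described above. These are stand-in types for illustration, not CAB's actual classes; CAB's real DeckWorkspace works against Control-derived SmartParts:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for a CAB SmartPart: anything a workspace can show.
public interface ISmartPart
{
    string Name { get; }
}

public class QuotePanel : ISmartPart
{
    public string Name { get; }
    public QuotePanel(string name) { Name = name; }
}

// Deck-style workspace: showing a part puts it on top of the stack;
// closing the top part reveals (re-activates) the one underneath.
public class DeckWorkspaceSketch
{
    private readonly List<ISmartPart> _deck = new List<ISmartPart>();

    // Fired whenever a SmartPart becomes the active (topmost) one.
    public event Action<ISmartPart> SmartPartActivated;

    public ISmartPart Active => _deck.Count > 0 ? _deck[_deck.Count - 1] : null;

    public void Show(ISmartPart part)
    {
        _deck.Remove(part);   // re-showing an existing part moves it to the top
        _deck.Add(part);
        SmartPartActivated?.Invoke(part);
    }

    public void Close(ISmartPart part)
    {
        _deck.Remove(part);
        if (Active != null) SmartPartActivated?.Invoke(Active);
    }
}
```

Showing "A" then "B" leaves B active; closing B re-activates A, which is the overlapping-deck behavior CAB's DeckWorkspace gives you for free.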
©2006 Marc Adler - All Rights Reserved
The CAB Application Class Hierarchy
CAB Application Classes
The hierarchy of the CABApplication classes is shown in the diagram below. The top three classes should never be derived from. In almost all cases, your WinForms-based application will derive from FormShellApplication.
A CAB Application will be driven by the base CabApplication class. This class does most of the work to get the CAB application started.
The only field that the CabApplication class has is the root WorkItem, which as the name implies, is the root node of the entire application’s WorkItem tree. I will discuss WorkItems later.
The constructor of CabApplication does nothing; it is the Run() method (which is called by your application’s Main() function) that bootstraps the entire application. The Run() method will do the following:
- Creates the pre-defined services.
- Authenticates the user.
- Reads the config file (ProfileCatalog.xml) that lists all of the plug-in modules that the app wants to load. Each module can have an optional list of Roles; the module will be loaded only if the current user belongs to one of those roles. If a module has no role information associated with it, the module will always be loaded.
- Creates the Shell (i.e., the MainForm).
- Loads all of the modules that are permitted to be loaded. In each class in the module that derives from ModuleInit, all fields tagged with the [ServiceDependency] attribute are created, and all classes tagged with the [Service] attribute are added to the application’s list of services.
- Calls RootWorkItem.Run() for the application’s root WorkItem.
- In the FormShellApplication class, calls the WinForms method Application.Run() to start the WinForms app.
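For reference, a ProfileCatalog.xml following the steps above might look like this. The assembly names and the role value are hypothetical; the element names follow the CAB profile schema:

```xml
<SolutionProfile xmlns="http://schemas.microsoft.com/pag/cab-profile">
  <Modules>
    <!-- No role information: loaded for every user -->
    <ModuleInfo AssemblyFile="QuoteViewer.Module.dll" />
    <!-- Loaded only if the current user belongs to the Traders role -->
    <ModuleInfo AssemblyFile="OrderEntry.Module.dll">
      <Roles>
        <Role Allow="Traders" />
      </Roles>
    </ModuleInfo>
  </Modules>
</SolutionProfile>
```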
The Derived Application Classes
The CabShellApplication class extends the CabApplication by
- Keeping a reference to the Shell (our MainForm) by adding it to the RootWorkItem’s list of items.
- Adding BeforeShellCreated and AfterShellCreated virtual methods, which can be overridden in your application class.
- Using a WindowsForms Visualizer.
- Adding some UI and command-routing services.
The FormShellApplication class merely overrides the Start() method and calls the familiar WinForms method Application.Run(mainform).
©2006 Marc Adler - All Rights Reserved
A Minimal CAB Application
First CAB Application
1) Create a new Windows Form Application solution named CABQuoteViewer.
a. Change the name of the form to MainForm.
2) Add the existing CAB projects to the solution.
a. Browse to C:\Program Files\Microsoft Composite UI App Block\CSharp\Source
b. Add the existing projects for ObjectBuilder, CompositeUI, and CompositeUI.WinForms.
c. In the CABQuoteViewer project, add references to the above 3 projects.
3) Rename the file Program.cs to CABQuoteViewerApplication.cs
4) In the file CABQuoteViewerApplication.cs,
a. Add references to the CAB namespace
using Microsoft.Practices.CompositeUI;
using Microsoft.Practices.CompositeUI.WinForms;
b. Change the definition of the class from
static class Program
to
class CABQuoteViewerApplication : FormShellApplication<WorkItem, MainForm>
c. The body of the Main() function should just be
new CABQuoteViewerApplication().Run();
The final version of CABQuoteViewerApplication.cs is:
using System;
using Microsoft.Practices.CompositeUI;
using Microsoft.Practices.CompositeUI.WinForms;
namespace CABQuoteViewer
{
class CABQuoteViewerApplication : FormShellApplication<WorkItem, MainForm>
{
[STAThread]
static void Main()
{
new CABQuoteViewerApplication().Run();
}
}
}
After compiling and running this application, you will see the empty MainForm appear.
©2006 Marc Adler - All Rights Reserved
Tuesday, November 28, 2006
Getting Started with CAB
It looks like, for various reasons that shall accompany me to the grave, we will be bootstrapping the "top part" of our client-side framework with Microsoft's Composite Application Block. I am excited to finally be given the chance to learn CAB, and to lead the team doing the new client-side framework for my investment bank.
I will try to document my learning process with CAB so that I can save my successors some pain.
Installation of CAB and Accessories
Download and install the following files in order (make sure to install GAX before installing GAT):
1) Download Composite Application Block for C# (CAB)
2) Download Enterprise Library 2006 (EntLib)
3) Download the Guidance Automation Extensions (GAX)
4) Download the Guidance Automation Toolkit (GAT)
(At this point, before installing the Smart Client Software Factory, you must build CAB using the CompositeUI.sln solution file located in the directory C:\Program Files\Microsoft Composite UI App Block\CSharp. You must also close Visual Studio before installing SCSF.)
5) Download the Smart Client Software Factory (June 2006 version) (SCSF)
6) Download the CAB Hands-on Lab
7) Download the Intro to CAB document and the CAB help files (you might want to create shortcuts on the desktop for these files.)
8) Download the Sample Visualizations
If you have problems uninstalling GAX or GAT, read this.
Resources
· The CAB home on Microsoft Patterns and Practices area on GotDotNet
· CabPedia
· MSDN Magazine Article
· Getting Started with CAB on the Fear and Loathing blog
· Understanding CAB series on Szymon’s blog
Class Hierarchy
Here is a diagram of the major classes in CAB:
©2006 Marc Adler - All Rights Reserved
Monday, November 27, 2006
Free Market Data
Free ECN real-time data from OpenTick.
Very cheap ($1) monthly fees for some other data.
They have APIs in various languages, including C# for .NET 2.0. This seems like a good way to test the async data-handling part of a framework. Here is some example documentation for one of their callbacks:
static void onQuote(OTQuote quote)
Description
Provides real-time and historical quotes.
Provides
OTQuote
©2006 Marc Adler - All Rights Reserved
Eclipse Trader
A colleague pointed out the open-source Eclipse Trader, built upon the Eclipse Rich Client Platform (RCP).
There exist people in my company whose number > 1 who would be most pleased if I would fully embrace the Eclipse RCP.
©2006 Marc Adler - All Rights Reserved
Downloading Yahoo Quotes
Here is a great page for composing URLs to get all sorts of delayed quote data from Yahoo.
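To make the URL composition concrete, here is a small sketch in Java. The host, path, and single-letter format tags below reflect Yahoo's historic CSV quote interface as I understand it, so treat them as assumptions rather than a documented API:

```java
// Sketch of composing a Yahoo delayed-quote CSV URL. The base URL and the
// format tags ('s' = symbol, 'l1' = last trade, 'd1' = date, 't1' = time)
// are assumptions based on the historic interface.
import java.util.List;

public class YahooQuoteUrl {
    private static final String BASE = "http://finance.yahoo.com/d/quotes.csv";

    public static String build(List<String> symbols, String formatTags) {
        // Multiple symbols are joined with '+' in the query string.
        String joined = String.join("+", symbols);
        return BASE + "?s=" + joined + "&f=" + formatTags;
    }

    public static void main(String[] args) {
        System.out.println(build(List.of("MSFT", "IBM"), "sl1d1t1"));
    }
}
```

Fetching that URL returns one CSV line per symbol, which is trivial to parse.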
©2006 Marc Adler - All Rights Reserved
Saturday, November 25, 2006
Windows Shutdown Crapfest
Via Joel, there is required reading on Microsoft here.
I am currently involved with Microsoft's MCS and DPE in my job. I know that many of the people that I interact with are fervent readers of both Joel and of Mini-Microsoft, and that they will eventually read the above-mentioned article.
What has happened to my beloved Microsoft? Thank the lord that we are engaging some of the most talented Microsoft partners on the planet.
©2006 Marc Adler - All Rights Reserved
Friday, November 24, 2006
Object Cache Considerations (Part 3)
Distribution and Subscriptions
An out-of-process object cache should not only have a storage component, but a messaging system as well. One of the architects in my group, who is a well-known messaging guru, told me that the ideal object cache should have a state-of-the-art messaging system attached to it.
Our object caches should be distributed and subscribable.
A logical cache can be distributed amongst several different servers. We can do this for load balancing and for failover. Applications also have local caches that communicate with the master, distributed cache(s).
Let's say that we are storing information about each position that our company maintains. We might want to have 3 distributed caches, one that stores positions for our customers in the US, one for customers in Europe and the Middle East/Africa, and one for Asia. Upon startup, the master cache loader will read all of the positions from the database and will populate each of the three caches.
This is an example of a very simple load balancer for the distributed caches. Other load balancing schemes include partitioning the positions by the first digit of the position id, a date range, etc.
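The two partitioning schemes above can be sketched in a few lines. Everything here (the Region enum, the country-code routing table) is an illustrative assumption, not a real product API:

```java
// Sketch of two load-balancing schemes for distributed master caches:
// routing by geographic region, and partitioning by the first digit of
// the position id. The routing table is an illustrative assumption.
public class CachePartitioner {
    public enum Region { US, EMEA, ASIA }

    // Scheme 1: route each position to the master cache for its region.
    public static Region route(String countryCode) {
        switch (countryCode) {
            case "US": case "CA":            return Region.US;
            case "GB": case "DE": case "ZA": return Region.EMEA;
            default:                         return Region.ASIA;
        }
    }

    // Scheme 2: partition by the first digit of the position id across
    // however many master caches we are running.
    public static int routeByFirstDigit(String positionId, int cacheCount) {
        return Character.digit(positionId.charAt(0), 10) % cacheCount;
    }
}
```

Whatever scheme is chosen, it must be deterministic, so that every writer and reader agrees on which master cache owns a given position.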
Each application that uses positions will have its own local cache. These local caches will usually contain a subset of the data that is in the master caches. For example, the US Derivatives Desk might just need to cache positions from US portfolios that have been active in the last 30 days.
When an application updates or deletes a position in one of the master caches, we need to update all of the other master caches that we are using for failover purposes, and any other master caches that contain that particular position. Similarly, when we create a new position, we need to propagate that new position to any redundant caches or any caches that might be interested in the new position.
We might need to push the new or updated position to any of the local caches that are interested in that position. We have a choice of architecture for distributing updates to local caches.
1) We do not distribute the updated object at all. An application won’t know that there is new data in the master cache until it retrieves that object again.
2) We push the update to the local caches right away. We can push out full objects or just the deltas (changes to the object).
There are disadvantages of both schemes. Under scheme (1), the application could be working with old data. Under scheme (2), we could be updating an object in the local cache while the application is working on that same object. Also, under scheme (2), we now have to worry about messaging more.
The master caches have to have some way of communicating with the local caches. We can communicate with each application by one of the familiar messaging mechanisms: Tibco EMS, Tibco RV, LBM, sockets, etc.
We need to make sure that the messaging is reliable. Each subscriber must receive the update of the object from the master cache. There is no tolerance for dropped messages. Otherwise, different applications might be working with different versions of an object.
We do not have to make the message durable. In other words, if a client goes offline for a while, then the messaging part of the cache does not have to save the update until a time where that client decided to reconnect. So, this saves us the need of storing out-of-date messages.
Using a JMS-based messaging scheme also means that we can use JMS Selectors to filter out objects that an application is not interested in. Selectors carry some overhead, but they make it easy to set up a filter-based pub/sub mechanism between the master caches and any local caches. For example, one application might only be interested in updates to position objects whose position id starts with the prefix “A23”. It is easy to set up a JMS selector with the pattern “positionId LIKE ‘A23%’”.
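A tiny sketch of what that filtering amounts to. The property name "positionId" is an assumption; a JMS broker would evaluate the LIKE clause server-side when the consumer is created with the selector string, and the local predicate below just mirrors that match for a simple prefix:

```java
// Sketch of selector-based filtering between master and local caches.
// The message property name "positionId" is an assumption.
public class SelectorFilter {
    // The selector string a subscriber would hand to the broker,
    // e.g. session.createConsumer(topic, selectorFor("A23")).
    public static String selectorFor(String prefix) {
        return "positionId LIKE '" + prefix + "%'";
    }

    // What the broker's LIKE 'A23%' evaluation amounts to for a plain
    // prefix (no '_' or '%' wildcards inside the prefix itself).
    public static boolean matches(String positionId, String prefix) {
        return positionId.startsWith(prefix);
    }
}
```

The win is that filtering happens at the broker, so uninterested local caches never see the traffic at all.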
©2006 Marc Adler - All Rights Reserved
Sunday, November 19, 2006
So long Mike
Our friend Mike was a 25-year veteran of the Financial Service practice of a very large consulting firm. He was just informed that he was to be dismissed, on the basis of not making his (unrealistic) sales quota. The new management of the division and Mike did not agree on things, and the easy way to get rid of a person is to set him up for failure.
Mike will easily find a new position, as he is known and respected in the financial services area. And, opportunity is also borne out of adversity.
Know the warning signs when you are being set up to fail. Mike saw them a mile away, and was prepared.
©2006 Marc Adler - All Rights Reserved
SQL Server 2005 Performance Tips
Here
Good performance hints if you are doing a lot of OLTP processing. Most of the tips will work for Sybase as well.
It is remarkable what performance improvements you can make in an OLTP system if you hire a true database tuning expert to go over your systems with a fine-toothed comb. Unfortunately, the tuning does not usually extend into refactoring the data model, because so many apps touch the databases that you would then have to go in and start refactoring app code as well.
SQL Server 2005 comes with new, easy-to-use profiling tools, and I encourage you to take advantage of them as you are developing new apps. If you are a dev manager, try to get funding for two weeks of a SQL Server 2005 expert's time once your data model is written.
In addition, try to get your SANs tuned correctly, and optimize the interaction between frequently used database indexes and the spindles.
How about hardware acceleration, like Solid State Disks?
Kudos go out to this site, completely devoted to SQL Server performance tuning.
Final word - if you have a mission-critical application that involves heavy database access, spend the bucks and hire a DB architect who knows how to tune databases like a 1964 Fender Strat.
©2006 Marc Adler - All Rights Reserved
Sunday, November 12, 2006
Object Cache Considerations (Part 2)
Object Versioning
You can consider the ‘version’ of an object to be two different things.
In the first case, the ‘version’ of an object could represent the number of times a particular object has been written to. When an object is first created, its version number is set to 1, and each time a client updates the object, the version number is incremented. So, we can have an API call in our object cache that tests whether we are holding on to the most recent version:
if (!ObjectCache.IsCurrent(object))
object = ObjectCache.Get(object.Key);
or, if we are using an object that has a proxy to the cache, we can do something like this:
if (!object.IsCurrent())
object.Refresh();
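A runnable sketch (in Java, for variety) of the write-counter idea behind those calls. The class and method names are assumptions, not a real product API:

```java
// Minimal sketch of write-count versioning: the first write stores
// version 1, and every subsequent update bumps the counter atomically.
import java.util.concurrent.ConcurrentHashMap;

public class VersionedCache {
    public static class Entry {
        public final Object value;
        public final long version;
        Entry(Object value, long version) { this.value = value; this.version = version; }
    }

    private final ConcurrentHashMap<String, Entry> map = new ConcurrentHashMap<>();

    public void put(String key, Object value) {
        // compute() makes the read-increment-write atomic per key.
        map.compute(key, (k, old) ->
            new Entry(value, old == null ? 1 : old.version + 1));
    }

    public Entry get(String key) { return map.get(key); }

    // A client that remembers the version it last saw can cheaply test
    // for staleness without re-fetching the whole object.
    public boolean isCurrent(String key, long seenVersion) {
        Entry e = map.get(key);
        return e != null && e.version == seenVersion;
    }
}
```

The IsCurrent check only moves the version number over the wire; the full object is re-fetched only when the check fails.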
In the second (and substantially more complex) case, the ‘version’ can represent the actual layout or shape of an object’s class. Consider a Trade object:
public class Trade : CachedObject
{
public int SecurityId;
public double Price;
}
This would be version 1.0 of the class.
Let’s say that we have a trading system that has to run 24x7, as is the current rage. Systems that run 24x7 theoretically have no chance to be bounced. Even a system that runs 24x6.5 has a window for maintenance. Our trading system has version 1.0 of the Trade object.
Now let’s say that we have a request to add sales attribution to the trading system, so now we need to add the id of the sales trader that took the trade request.
public class Trade : CachedObject
{
public int SecurityId;
public double Price;
public int BrokerId;
}
This is now version 1.1 of the class.
Our object cache holds version 1.0 objects, and all of our subscribers also hold version 1.0 objects. But, now let’s say that the system that writes new trades into the object cache now has to write version 1.1 objects. What do we do?
There are several things to consider here. How do we represent the object in the cache? Because we are using name/value pairs, all new objects will just have the BrokerId field added. The old 1.0 objects that are in the cache do not have to change.
The object cache might want to broadcast a message to all subscribers, telling them that the version number of the Trade object has changed. Since the subscribers may be systems that must run 24x7, then the systems might not be able to be bounced in order to rebuild their trade caches. The systems must be able to read and write the new version 1.1 objects as well as continue to support the older 1.0 objects. But, we cannot reconfigure the layout of the objects dynamically, can we?
Instead of using C# objects, we might consider using dictionaries of dictionaries to represent an app’s object cache. But this is a different kind of programming model. Instead of coding:
Trade obj = ObjectCache.Get(“102374”);
int broker = obj.BrokerId;
We might have to do the following (taking advantage of C# 2.0’s nullable types):
TradeDictionaryObject obj = ObjectCache.Get(“1002374”);
int? broker = obj.GetInt(“BrokerId”);
What a mess!
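For concreteness, here is what the dictionary-backed object looks like in a runnable (Java) sketch. The class name mirrors the snippet above, but the implementation is my assumption; the point is that a 1.0 object stored before BrokerId existed simply has no entry, so the getter returns null instead of breaking the reader:

```java
// Sketch of a dictionary-backed cached object: fields are name/value
// pairs, and a field absent from an old-version object reads as null.
import java.util.HashMap;
import java.util.Map;

public class TradeDictionaryObject {
    private final Map<String, String> fields = new HashMap<>();

    public void set(String name, String value) { fields.put(name, value); }

    // Returns null when the field is absent (e.g. BrokerId on a 1.0
    // object), the Java analogue of C#'s nullable int?.
    public Integer getInt(String name) {
        String v = fields.get(name);
        return v == null ? null : Integer.valueOf(v);
    }
}
```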
What does this tell us? When using an object cache for a 24x7 system, make sure you get your class definition right the first time, and avoid object versioning!
©2006 Marc Adler - All Rights Reserved
Considerations for an Object Cache (Part 1)
Let’s say that we wanted to write our own multi-platform, distributed, subscription-based object cache. What would we need to do to write the ultimate object cache?
Let’s consider a variety of issues that we would have to consider when writing caching middleware. I am sure that vendors like Gemstone have gone through this exercise already, but why not go through it again!
Multiplatform support
Most Investment Banks have a combination of C++ (both Win32 and Unix), C#/.Net, and Java (both Win32 and Unix) applications. It is common to have a .Net front-end talking to a Java server, which, in turn, communicates with a C++-based pricing engine. We need to be able to represent the object data in some form that can be easily accessed by applications on all of the various platforms.
The most universal representation would be to represent the object as pure text, and to send it across the wire as text. What kind of text representation would we use?
XML – quasi-universal. We would have to ensure that XML written by one system is readable by other systems. XML serialization is well-known between Java and C# apps, but what about older C++ apps? For C++, would we use Xerces? Also, there is the cost of serialization and deserialization, not to mention the amount of data that is sent over the wire.
Name/Value Pairs – easy to generate and parse. Same costs as XML. We would have to write our own serialization and deserialization schemes for each system. How about complex, hierarchical data? Can simple name/value pairs represent complex data efficiently? Or would we just end up rewriting the XML spec?
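A minimal sketch of the flat name/value wire format, just to show how little machinery it needs per platform. Escaping, type tags, and nesting are deliberately ignored here; handling those is exactly where a homegrown format starts drifting toward reinventing XML:

```java
// Sketch of a flat name/value wire format: one key=value pair per line.
// Real implementations would need escaping rules and type information.
import java.util.LinkedHashMap;
import java.util.Map;

public class NameValueCodec {
    public static String serialize(Map<String, String> fields) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            sb.append(e.getKey()).append('=').append(e.getValue()).append('\n');
        }
        return sb.toString();
    }

    public static Map<String, String> deserialize(String text) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String line : text.split("\n")) {
            if (line.isEmpty()) continue;
            int eq = line.indexOf('=');
            out.put(line.substring(0, eq), line.substring(eq + 1));
        }
        return out;
    }
}
```

An equivalent twenty-line parser is writable in C++ and C#, which is the whole multi-platform appeal.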
Instead of text, we can store objects as true binary objects. What kind of binary object do we store? Native or system-agnostic? If you have a variety of platforms writing into the object cache, do we store the object in the binary format of the system that created the object, or do we pick one platform and use that as master?
Master Format – We pick one format, either C++, C#, or Java binary objects. We would need a series of adapters to transform between binary formats. We would also need an indication of the platform that is doing the reading or writing. Let’s say that we were to store all objects as binary Java objects. If a Java app reads an object, then there would be no costs associated with object transformation, so we can just send a binary Java object down the wire (although we may have to worry about differences between the various versions of Java … can a Java 1.5 object with Java 1.5-specific types or classes be read by a Java 1.4 app?). If a C# app wants to read the Java object, then we must perform some translation. (Do we use something like CodeMesh to do this?) We also need to ensure that the adaptors can support all of the features of the various languages. For example, let’s say that Java came up with a new data type that C# did not support … would we try to find some representation of that type in C#, or would we just not translate that particular data type?
Native Format – We store pure binary objects, without regards to the system that is reading or writing the object. There is no translation layer. Apps are responsible for doing translation themselves. This is the fastest, most efficient way of storing objects. However, different teams might end up writing their own versions of the translation layer.
What other factors might we consider when choosing a native object format?
How about deltas in subscriptions? If we are storing large objects, then we might only want to broadcast changes to the object instead of resending the entire object. Delta transmission favors sending the changes out in text, and we can save the cost of translating the binary into text if we were just to store the objects as text. And, in this case, name/value pairs are favored.
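Computing a delta over name/value objects is cheap, which is part of why they suit delta transmission. A sketch, assuming deletions would be handled by some tombstone convention not shown here:

```java
// Sketch of delta computation for name/value objects: only keys whose
// values changed (or are new) are included in the broadcast. Deleted
// keys would need a tombstone convention, omitted here.
import java.util.HashMap;
import java.util.Map;

public class DeltaCalculator {
    public static Map<String, String> delta(Map<String, String> oldObj,
                                            Map<String, String> newObj) {
        Map<String, String> changes = new HashMap<>();
        for (Map.Entry<String, String> e : newObj.entrySet()) {
            if (!e.getValue().equals(oldObj.get(e.getKey()))) {
                changes.put(e.getKey(), e.getValue());
            }
        }
        return changes;
    }
}
```

Subscribers apply the delta by merging it into their local copy, which is a simple map-putAll for this representation.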
Large sets of name/value pairs can be compressed if necessary, but we have to consider the time needed to compress and decompress.
Can our object cache store both text and binary? Sure, why not. We can tag a cache region as supporting binary or text, and have appropriate plugins for various operations on each.
As always, comments are welcome.
©2006 Marc Adler - All Rights Reserved
Saturday, November 11, 2006
Tourist Warning : City Pride
While staying in Canary Wharf last week, a colleague from Microsoft and I went to the City Pride pub, which is one of the few pubs around the area, and close to the Hilton.
After sitting at a table for 15 minutes without being served, some kind soul told us that you actually have to go up to the bar to order your beers (Warning #1). We Americans like to be coddled, and demand waitress service.
Then I ordered a Black and Tan (Guinness and Bass, standard fare at any Irish pub in New York). The bartender had no idea what I was talking about (Warning #2).
To top it off, the bartender gave me the bill, and said that we could feel free to add a tip onto it ... which I did (a 10%, 2 Pound tip) (Warning #3 ... I was told the next day never to tip the bartender).
I guess this was my Lost In Translation moment that every tourist has when visiting a foreign country .... even though Bush treats Britain as our 51st state. (Yo Blair!)
©2006 Marc Adler - All Rights Reserved
Wanted : JMX to .Net Bridge
We need a way for a .Net GUI to speak JMX to a Java server. Anyone come up with anything yet?
Doing a Google search, it looks like we are not the only ones with that need.
Has anyone checked out the WS-JMX Connector?
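The Java side of the problem is the easy half. Here's a local sketch of querying the platform MBeanServer; any bridge (WS-JMX or otherwise) essentially has to expose calls like this over a wire protocol that a .Net GUI can speak:

```java
// Sketch of the Java side of a JMX bridge: reading an attribute from
// the platform MBeanServer. A bridge would expose this over a wire
// protocol (SOAP, in the WS-JMX case) for non-Java clients.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxPeek {
    public static int availableProcessors() {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // Standard platform MBean, present in every JVM.
            ObjectName os = new ObjectName("java.lang:type=OperatingSystem");
            return (Integer) server.getAttribute(os, "AvailableProcessors");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The hard part the bridge has to solve is marshalling ObjectNames and arbitrary attribute types across the Java/.Net boundary.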
©2006 Marc Adler - All Rights Reserved
Office 2007 Compatibility Pack is available
If you are like me, and you keep getting these Office 2007 files sent to you by your local Microsoft reps, but you are only running Office 2003, then you need this.
©2006 Marc Adler - All Rights Reserved
Wednesday, November 08, 2006
What does a Database Architect do?
Even though I was the very first developer on the Microsoft SQL Server Team, I have to admit that databases don't thrill me .... You have to have a special mindset to deal with databases all day, and to tell you the truth, my interests lie elsewhere. In fact, the sure way to get me to fail an interview is to ask me to write any moderately-complicated SQL query.
I firmly believe that, for major systems, the developers should not be allowed to design the data model, set up the databases, nor write the DDL. I have seen a number of instances in the past where systems that have had their database components designed by non-database experts have performed very poorly. Slow queries, no indexes, lock contention, etc. The best projects that I have been involved in have had a separate person just devoted to the database. If I am leading a major project, I will always have a dedicated db expert as part of the effort.
Let's say that we want to hire a database expert for our Architecture Team. What duties would they have?
1) Advise teams on best practices.
2) Come up with DDL coding standards.
3) Review existing systems and provide guidance on performance improvements.
4) Know the competitive landscape (Sybase vs SQL Server vs Oracle) and influence corporate standards.
5) Be expert at OLAP, MDX, Analysis Services, etc.
6) Know how to tune databases and hardware in order to provide optimal performance.
7) Advise all of the DBAs.
8) Monitor upcoming technologies, like Complex Event Processing, time-series databases, etc. Be familiar with KDB+, StreamBase, Vhayu, etc.
A Database Architect is a full-time job that I think that all Architecture groups should have a slot for.
Know anyone who wants to join us?
©2006 Marc Adler - All Rights Reserved
Monday, November 06, 2006
DrKW and Cross-Asset Trading
Dresdner folded its much-hyped Digital Markets division, which they liked to view as the "Bell Labs" of DrKW. All of the major players involved in the Digital Markets group have left or are in the process of leaving.
According to the DWT newsletter, there were several charters to the Digital Markets group:
1) Provide synergies across all lines-of-business at DrKW, and stop the siloing.
2) Provide a system for cross-asset class trading.
Eugene Grygo, the editor of the DWT newsletter, devoted his "Before the Spin" column to this news, and brought up some questions with regards to the future of DrKW. In particular, what will happen to the dream of cross-asset class trading? Grygo mentions that HSBC is actively exploring this space, and I know a few other IBs doing the same. Is it impossible to coalesce the silos and provide true cross-asset class trading? If it is technically feasible, then is it politically feasible?
In these times where everyone is predicting the reduction of traders due to automation, will cross-asset trading be the last field of battle as the silos struggle to maintain their autonomy?
I also wonder what becomes of DrKW's grid project. Maybe Matt or Deglan can illuminate us...
©2006 Marc Adler - All Rights Reserved
Sunday, November 05, 2006
Grid in Financial Markets
A presentation by JP Morgan on their use of Grid Computing.
Here is a page of PDFs from a February 2006 conference in Italy on Grid Computing in financial markets. There is even a paper on using grid for semantic analysis of financial news feeds. I need to get our London team to read some of this stuff.
There must be synergies between Complex Event Processing and Grids. Anyone looking at this space?
©2006 Marc Adler - All Rights Reserved
Friday, October 27, 2006
Mini-Guide to .Net/Java Interop
Here
Terry is sure to have this stuff in our Wiki before I get to London :-)
©2006 Marc Adler - All Rights Reserved
Terry is sure to have this stuff in our Wiki before I get to London :-)
©2006 Marc Adler - All Rights Reserved
Pricing Turbo Warrants
Here
Today was the first I have ever heard about Turbo Warrants.
A Turbo Warrant call is:
- a barrier knock-out option
- that pays a small rebate to the holder if the barrier is hit, and
- whose barrier is typically in-the-money.
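Since this was new to me too, here is a back-of-the-envelope sketch of the payoff in Java (my own illustration, not taken from the linked paper; the class name, parameter names, and sample numbers are all invented):

```java
// Hypothetical sketch of a turbo-warrant call payoff along a price path.
// Strike K, barrier H with H > K (the barrier is in-the-money), small rebate R.
public class TurboCallPayoff {

    public static double payoff(double[] path, double strike, double barrier, double rebate) {
        for (double spot : path) {
            if (spot <= barrier) {
                return rebate;   // knocked out: holder gets only the small rebate
            }
        }
        double terminal = path[path.length - 1];
        return Math.max(terminal - strike, 0.0);   // survived: vanilla call payoff
    }

    public static void main(String[] args) {
        double[] survives = {120, 115, 118, 125};  // never touches the barrier
        double[] knocked  = {120, 104, 118, 125};  // dips through the barrier
        System.out.println(payoff(survives, 100, 105, 0.5)); // 25.0
        System.out.println(payoff(knocked, 100, 105, 0.5));  // 0.5
    }
}
```

The in-the-money barrier is the interesting part: the instant the barrier is touched, the holder forfeits all the intrinsic value above the strike and walks away with just the rebate.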
©2006 Marc Adler - All Rights Reserved
Thursday, October 26, 2006
The Bile Blog
The Bile Blog is even more cynical and caustic than I am.
Thanks to the ThoughtWorkers at my office for pointing this out. Especially funny are the posts that harpoon the Fowlbots. And, I think that he has even biled yours truly.
This blog is going to take me quite a while to go through ... would be great if Virgin Atlantic had internet connectivity ... I think reading this blog would occupy my entire flight.
By the way ... kudos to our own Fowlbots ... James, Chris, Dave and Alistair. Job well done, boys!
©2006 Marc Adler - All Rights Reserved
C++/CLI Opposition
C++/CLI == C++ Divided By CLI
Here
Interesting reading, especially in light of the fact that investment banks have a ton of old Visual C++ 6, non-MFC code. The choice is to move to C++/CLI, or to start fresh with C#.
©2006 Marc Adler - All Rights Reserved
Wednesday, October 25, 2006
I am scared of The Wharf
Earlier this year, my former consulting company closed its London office without warning. In a mailing to the staff, the partners said that there was just no business to be had for very smart .Net and Java consultants in Canary Wharf.
I am imagining what kind of place this Wharf could be. When I get there, will I see a lone seaman, in a yellow raincoat, yelling "Ahoy Matey" to me from a distant pier? Are there thugs and hooligans behind every dark corner, waiting to roll me for my wallet? Will I see rows of cars, inhabited by hormonal teenagers, "watching the submarine races" (a popular saying in the 1950's)?
I am very afraid of this Wharf place. Perhaps JohnOS, Matt, Deglan and Pin can organize a bile-night to keep me off the streets? Perhaps Rut the Nut is reading this blog and will get some of our old Citicorp/EBS gang together for a slosh-up?
©2006 Marc Adler - All Rights Reserved
Tuesday, October 24, 2006
Funny Code
What is the funniest line of code you ever saw? One of the developers at Barcap wrote the following line in a C# function:
if (this == null)
{
}
It absolutely cracked me up (well... I guess you had to be there).
©2006 Marc Adler - All Rights Reserved
London Bound
I will be in London all next week, checking out the happenings in Canary Wharf. I have not been to London for a long time, so it will be interesting to see how built up the Wharf is. More interesting will be to see if they built any quality pubs. Last time I was in London, all of the pubs closed at 11 at night.
One nice thing will be to test out Virgin Atlantic's Business Class. I have been hearing tales of free massages .... hmmm ...
©2006 Marc Adler - All Rights Reserved
Sunday, October 22, 2006
Separated at Birth
Chris and I came from the same company, and we were both equally dissatisfied with the kind of body shop work that we were doing at our last client. We both struck out simultaneously to find a place where we could do meaningful work. We landed in very similar positions at two of the largest investment banks in the world.
Funnier still is that we share almost the same technology stack, from front to back. I am sure that right after the conference call we have with our market data infrastructure vendor, Chris is on the same call an hour later.
Craig mentions that there are only about 200 market data specialists in the whole world. Everyone probably knows what everyone else is doing. The same thing goes on across Wall Street, and by extension, the City. The after-bonus shuffle will take place in another 4 months, and it's an opportunity for every company to find out what every other company is doing.
When you come down to it, we all have very similar technology stacks. We all know which vendors are out there, and we have all done the same kinds of performance and stress comparisons between similar vendors. The goal is to squeeze out that one extra millisecond of performance so that your order is hit before your competitor's order. It could come down to one extra 'lock' in a piece of code. It will be a race to recruit the Joe Duffys and Rico Marianis of the world. All of the IBs will need to recognize the need for these kinds of people, and adapt themselves so that these people will not feel stifled within a Wall Street environment.
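To make the "one extra 'lock'" point concrete, here is a toy Java counter (purely my own illustration, not from any real trading system; real hot paths are obviously more involved) contrasting a monitor-based increment with a lock-free atomic one:

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy illustration of the cost of an extra lock on a hot path:
// a monitor-based tick counter versus a lock-free one.
public class TickCounters {
    private final Object lock = new Object();
    private long locked;                                 // guarded by 'lock'
    private final AtomicLong atomic = new AtomicLong();  // lock-free

    public void onTickLocked() {
        synchronized (lock) {        // every tick pays monitor enter/exit,
            locked++;                // and contention serializes the threads
        }
    }

    public void onTickAtomic() {
        atomic.incrementAndGet();    // a single compare-and-swap, no monitor
    }

    public long lockedCount() { synchronized (lock) { return locked; } }
    public long atomicCount() { return atomic.get(); }
}
```

Both counters are correct; under heavy contention the monitor version is the kind of place where that extra millisecond quietly disappears.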
©2006 Marc Adler - All Rights Reserved
NCache and db4o
Matt is looking at NCache and Chris recommends db4o. Plus, Geva is telling us to look at GigaSpaces.
Has anyone done an in-depth comparison of NCache, db4o, Giga, GemFire, Tangosol, and any others? Roll-your-owns are also welcome in the comparison.
Hey, hey Microsoft ... what 'ya got cookin'? How 'bout cooking something up for me? Here is the JSR-107 spec. Don't deviate too far from it.
©2006 Marc Adler - All Rights Reserved
Saturday, October 21, 2006
.Net - Java Integration
http://www.infoq.com/articles/java-dotnet-integration-intro
Especially interesting are the comments from Roger Voss, formerly of the Aldus PageMaker portability team. He has chosen .Net for the front end and Tibco EMS as the messaging layer between his Java and .Net tiers.
©2006 Marc Adler - All Rights Reserved
Free .NET Object Cache
Free .NET object cache at CodeProject.
Wonder if the code has been touched recently, or whether anyone is using this cache...
©2006 Marc Adler - All Rights Reserved
Concurrency, Joe Duffy and Wall Street
Joe Duffy is soliciting ideas for a new book on concurrency.
Joe's name comes up a lot in my talks with the Microsoft team that supports me. He and Rico Mariani are two resources from Redmond that I would love to get on an advisory basis. My vision is that they would help us out with optimal .Net architectures for low-latency, high-performance systems.
The elephant in the room that everyone is worried about is the projected OPRA feed of 456,000 messages per second. As I am the Global Architect for Equity Derivatives at one of the largest investment banks in the world, this is something that I am responsible for. As such, the things I need for our stack include high-speed market data feeds with caching and conflation, complex event processing, hardware acceleration, object caches, threading models, efficient GUIs and client-side frameworks, super-tight code generation, and efficient messaging between components (and did I mention hardware acceleration?).
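To illustrate the "caching and conflation" item above, here is a minimal sketch of a conflating last-value cache (my own invention, in Java for brevity; the class and method names are hypothetical, not any vendor's API). When consumers can't keep up with the feed, intermediate ticks for a symbol are simply overwritten and only the latest value is delivered:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch of a conflating market-data cache: the producer overwrites
// any undelivered tick per symbol, so a slow consumer sees only the latest
// price for each symbol rather than an ever-growing backlog.
public class ConflatingCache {
    private final Map<String, Double> latest = new LinkedHashMap<>();

    // Producer side: called on every inbound tick.
    public synchronized void onTick(String symbol, double price) {
        latest.put(symbol, price);   // conflation: replaces any pending value
    }

    // Consumer side: drain everything pending; each symbol appears at most once.
    public synchronized Map<String, Double> drain() {
        Map<String, Double> snapshot = new LinkedHashMap<>(latest);
        latest.clear();
        return snapshot;
    }
}
```

The trade-off is that the consumer loses the intermediate prints, which is fine for a quote display but not for anything that must see every tick.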
From a concurrency point of view, I would like to know what Microsoft considers to be best practices to implement quasi-real-time data processing. Also, what anti-patterns exist and what to avoid in the .Net framework. How to use PerfMon and other third-party performance tools to get the most out of my systems. Coding rules that maximize performance. Things that Microsoft will be releasing (or thinking about) five years down the road.
And, for my own selfish reasons, I would love to see Joe implement a trading system to test out his ideas!
©2006 Marc Adler - All Rights Reserved
Friday, October 20, 2006
Financial Systems Books
I just got the new Third Edition of the classic After the Trade is Made. At first glance, it looks like they have fattened up the book a bunch and added a chapter on trading systems. I will report later on the quality of this chapter if I can get a block of free time to read it.
A decent companion to this book is Practical .Net for Financial Markets. I ordered this book after reading my colleague Ted's glowing review of it on Amazon. I have to admit that, for me, the book was a bit on the disappointing side. The book has a little bit of everything ... a simple crossing engine, some messaging (designed to illustrate how STP works), some encryption, some good examples of networking, etc.
However, what seems to be missing from the marketplace is a great book that deals with the entire spectrum of financial instruments from a developer's point of view. Something that will not only discuss the business domain and underlying technology, but will also point to real products that implement the various systems. A deep dive for the developer to become completely immersed in Wall Street systems.
For example, let's take Equities. From a systems standpoint, I want to know what goes on from the time that a trade is entered until someone receives confirmation through snail mail. I want to know what an order entry system does, how trades are routed to the exchanges, how FIX messaging is used, how crossing engines and auto-execution engines work, how stat arb and algorithmic trading factor in, how market-making functions, how settlement is done, how positions are maintained and how P&L is calculated, how market data gets into a system, how risk is calculated, etc. I want to know how systems and vendors like Bloomberg, Fidessa, Reuters, Wombat, Vhayu, etc. fit into this space.
In addition, I would like to see topics that are tangential in nature, but geared towards the financial systems developer. How to develop low-latency systems. How to write UIs with fast-updating grids of market data. How to use complex event processing to implement stat arb trading. How to do order routing efficiently using rules engines.
I want a complete end-to-end picture. I want FinancialSystemsPedia.
My former colleague Matt was thinking about writing this kind of book several years ago. I think that there is a real need for this kind of knowledge to be put on paper.
©2006 Marc Adler - All Rights Reserved
Thursday, October 19, 2006
Good Luck to Matt Devlin
One of our favorite poachers, Matt Devlin, has just left Finetix to strike out on his own.
Good luck Matt (but stay away from my team!). Poaching is one of the toughest jobs on The Street, especially when it is so close to bonus season.
©2006 Marc Adler - All Rights Reserved
Tuesday, October 17, 2006
Shortage of Indian Engineers
From today's New York Times:
As its technology companies soar to the outsourcing skies, India is bumping up against an improbable challenge. In a country once regarded as a bottomless well of low-cost, ready-to-work, English-speaking engineers, a shortage looms.
India still produces plenty of engineers, nearly 400,000 a year at last count. But their competence has become the issue.
...found only one in four engineering graduates to be employable. The rest were deficient in the required technical skills, fluency in English or ability to work in a team or deliver basic oral presentations.
©2006 Marc Adler - All Rights Reserved
Monday, October 16, 2006
.NET/C# Trading System
Amazing Aeronautical Charts Site
http://skyvector.com/
If you are a real pilot or a virtual one, this is the site to go to for charts.
©2006 Marc Adler - All Rights Reserved
Sunday, October 15, 2006
Wall Street meets Hollywood
From the New York Times article here.
Hedge funds and Wall Street investment banks are plowing money into Hollywood films, paying producers like Joel Silver and Ivan Reitman to produce hits.
I guess that we have to write a market data source that monitors the turnstiles of each of the movies that we invest in. Craig, can you come up with a feed handler in a week?
(I wonder how we will hedge this .... Long Ivan Reitman and short Delta House?)
©2006 Marc Adler - All Rights Reserved
Wanted: Easy way to set multiple breakpoints in VS 2005
I am tracing through some legacy code where a certain class has about 20 different constructors. I would like a way to tell VS.NET 2005 to set a breakpoint on each of the constructors.
Likewise, given the name of a method, I would like a way to set a breakpoint at the entry to all of its overloads. For instance, if I have a method called GetValue(), and we have a large number of overloads such as
void GetValue(ref bool b)
void GetValue(ref int i)
void GetValue(ref double d)
void GetValue(ref string s)
there should be a way to set a breakpoint at the entry point to each one of the GetValue() methods with one right-click of the mouse.
Sounds like a possible enhancement for ReSharper? Or maybe someone has written a VS macro to do this?
©2006 Marc Adler - All Rights Reserved
Sunday, October 08, 2006
Fall Foliage
I flew up to Bennington, Vermont yesterday. The fall foliage is absolutely magnificent. There is no place in the United States like Vermont.
It's about a 1-1/4 hour flight from Morristown (MMU) Airport to Stephen Morse Airport in Bennington. Fly over the Catskills to Albany, then make a right turn at the 083 radial of the Albany VOR. It's about another 10 minutes.
©2006 Marc Adler - All Rights Reserved
Saturday, October 07, 2006
What are you doing with your MFC Apps?
An awful lot of IBs still have a lot of front ends written in Microsoft Foundation Classes (MFC).
1) Are you considering MFC to be an end-of-life product?
2) What are your plans for migrating the front ends to other frameworks? Are you going to .NET/C#, .NET/C++, or Java?
3) How will you be doing the migration? Are you devoting a year's worth of time to rewriting the app from scratch? And, while you are doing that, will you be refactoring all of the business logic that you wished you had not embedded in the GUI into a new middle tier?
4) Are you going straight for Vista/WPF for your rewrite?
5) Do you wish that Microsoft had a strategy for upgrading your massive MFC codebase to .NET?
Update: John responds here
©2006 Marc Adler - All Rights Reserved
Microsoft ESB News
http://www.intelligententerprise.com/showArticle.jhtml?articleID=193104592
©2006 Marc Adler - All Rights Reserved
Thursday, October 05, 2006
Sungard's Grid for Analytics
It was brought to my attention today that Sungard has Adaptiv Analytics, a grid-enabled calculation framework. The marketing literature says that it has pricing and risk simulations right out of the box.
What might be attractive about this is:
1) Sungard has a long history of dealing with financial companies
2) The product seems to be tuned for a specific vertical industry and a specific purpose .... to speed up pricing and risk.
3) They claim to be based totally on .NET !!!!!!!!!!
Anybody using this thing yet? Wonder what the other grid vendors have to say about this?
©2006 Marc Adler - All Rights Reserved
Thanks, Mark Pollack
The co-head of the Spring.Net consortium very generously gave a 2-hour lecture on Spring to my company. There are interesting things coming down the pike for Spring.Net, especially in the area of messaging.
An ex-colleague of mine was developing a GUI framework for Spring.Net, but now that he is gainfully employed by a major IB, I wonder what will become of that effort. I thought that it could give CAB some real competition. Matt is also delving deep into the CAB world. Wonder if he will take over the framework.
©2006 Marc Adler - All Rights Reserved
Monday, October 02, 2006
Newest Joshi Paper
Thanks to Mark Joshi for pointing out that he has an updated version of his paper on "A Day in the Life of a Quant".
http://www.markjoshi.com/downloads/advice.pdf
Now maybe I'll be able to talk to Ryan! (Inside joke)
©2006 Marc Adler - All Rights Reserved
Thursday, September 28, 2006
IBM WebSphere Front Office for Financial Markets
Discussed here on IBM's website.
and from the press release on IBM's Haifa Research Lab (bold text is my highlighting) :
...IBM announced the availability of WebSphere Front Office for Financial Markets, a flexible, high-throughput, low-latency platform. The WebSphere Front Office platform is built on an award-winning portfolio of IBM middleware products that provide an integrated environment optimized for high-volume trading.
Several innovative technologies from the IBM Research Lab in Haifa enabled the platform's performance characteristics and the high availability support including detection, notification and recovery.
"This is IBM's first appearance in the financial front office space for stock exchanges and large institutional customers, which is characterized by extreme data rates measured in hundreds thousands messages per second, and by sub-millisecond delivery latency requirements."
The Reliable Multicast Messaging (RMM) technology and TurboFlow technologies have enabled IBM to address these performance goals and to build an infrastructure that supports the extremely challenging demands of front office financial customers. In addition to high throughput and low latency, RMM is characterized by significant scalability that allows the delivery of financial information to multiple traders at the same time.
Combined with the ITRA (Inter-Tier Relationship Architecture) technology, it allows for subsecond data stream failover.
Considering that OPRA (options) data is forecast to be coming in at 456,000 messages per second, it would be interesting to see if this new product could handle it.
An article in the Inside Market Data newsletter makes specific mention of competition against Reuters and Wombat.
©2006 Marc Adler - All Rights Reserved
Sunday, September 24, 2006
Wanted: A Data Pumping Tool
In the companies that I have been involved with in my Wall Street consulting career, it is remarkable how many systems do not have Unit Testing set up.
Concepts like Unit Testing, TDD, and code metrics are just starting to make their way into development groups at IBs. However, one of the areas that has been ignored is stress and soak testing.
One of the tools that we need is what I refer to as a generic Data Pumper. This is a service that can be run to generate data of a certain shape, and pump the data into waiting applications. Some types of data that we may need to pump include quotes, executions, risk, etc.
Here are the features that I would like to see from a Data Pumper:
Playback Modes
We need to have the data replayed in certain temporal formats. We can also apply a distribution curve to the replay interval.
- Burst Mode: Play back data all at once, as fast as we can.
- Interval Mode: Play the data back at certain intervals. For example, play back 500 messages per second. We can also put some sort of distribution on the interval, so that the intervals would be lowest at the beginning and the end of the playback period (simulating market open and close).
- Timed Mode: This would cause playback at the exact timings that actual data was generated. In this mode, we would have to first capture real data and record the exact time that the real data was received. Then we would play back the simulated data using the timings of the real data.
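These modes mostly differ in how the inter-message delay schedule is computed. Here is a minimal sketch in Python of what I have in mind (the function name, mode names, and U-shaped curve are my own invention, not any existing tool):

```python
def replay_delays(n, mode, rate=None, timestamps=None):
    """Return a list of inter-message delays (in seconds) for one playback run."""
    if mode == "burst":
        return [0.0] * n                              # all at once, as fast as we can
    if mode == "interval":
        base = 1.0 / rate                             # e.g. rate=500 -> 2 ms between messages
        span = max(n - 1, 1)
        # U-shaped curve: shortest gaps at the start and end of the window,
        # simulating the heavier message flow around market open and close
        return [base * (1.0 - abs(i / span - 0.5)) for i in range(n)]
    if mode == "timed":
        # replay with the exact spacing captured from real data;
        # timestamps are the receive times recorded during capture
        return [b - a for a, b in zip(timestamps, timestamps[1:])]
    raise ValueError(f"unknown mode: {mode}")
```

A real pumper would sleep for each delay before publishing the next message on whatever transport is configured.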
Transports
We need to configure the transport mechanism by which the data is delivered to the waiting application.
- Tibco RV or EMS (Right now, most IBs use Tibco for the distribution of high-frequency data)
- LBM (a Tibco competitor)
- Sockets (or SmartSockets)
- MQ or MSMQ
- CPS (Morgan Stanley)
Data Generation
- Capture actual data for several days in order to provide some reference data
- We can tag certain fields for random data generation. For example, we can vary the prices of the various instruments.
- We can generate completely random data.
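For the field-tagging idea, the randomization can be as simple as bumping each captured price by a few basis points so replayed data varies run to run. A sketch (the field names are hypothetical):

```python
import random

def randomize_prices(quotes, field="price", max_bp=10, seed=7):
    """Copy captured quotes, nudging the tagged field by up to +/- max_bp
    basis points. The originals are left untouched as reference data."""
    rng = random.Random(seed)
    out = []
    for q in quotes:
        bump = 1.0 + rng.uniform(-max_bp, max_bp) / 10_000.0
        out.append({**q, field: round(q[field] * bump, 4)})
    return out
```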
Formats
XML seems to be used in many places, but you have the latency involved in deserialization. Binary objects are fast, but necessitate a homogeneous environment.
- XML
- Tibco binary message map
- delimited strings
- binary object
- Fixed-length ASCII
- Reuters (Craig will tell me about the legality of simulating data in Reuters format)
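To make the trade-offs concrete, here is the same quote rendered in three of these formats, with the cheap parsers alongside (the field layout is invented purely for illustration):

```python
import xml.etree.ElementTree as ET

# XML: self-describing, but must be parsed on every message
as_xml = '<quote sym="IBM" bid="81.25" ask="81.27"/>'

# Delimited string: cheap to split, but both sides must agree on field order
as_delim = "IBM|81.25|81.27"

# Fixed-length ASCII: no parsing beyond slicing at known offsets
# (6-char symbol, then two 7-digit prices with two implied decimals)
as_fixed = "IBM   00081250008127"

def parse_xml(s):
    e = ET.fromstring(s)
    return e.get("sym"), float(e.get("bid")), float(e.get("ask"))

def parse_delim(s):
    sym, bid, ask = s.split("|")
    return sym, float(bid), float(ask)

def parse_fixed(s):
    return s[:6].strip(), int(s[6:13]) / 100.0, int(s[13:20]) / 100.0
```

The delimited and fixed-length parsers do a fraction of the work of the XML parse, which is exactly the latency argument above.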
Other Considerations
- Instead of sending data directly to the end application, we can send it to an object cache, and let the object cache handle distribution.
- We need a GUI for monitoring the transmission of data, and controls to let the user dynamically modify the timing intervals.
- We need to have probes in the target application so we can monitor its performance in real time under various loads.
Thursday, September 21, 2006
Decode the Marketing Blurb
Here is a fun game for all of you. A certain vendor sells decision systems over a bunch of vertical industries. Here is a blurb from one of their webpages that outlines their offerings for the financial industry:
Modeling: Price and Risk Models
We model the equity market as an open, irreversible, far from equilibrium thermodynamic model subject to dynamic constraints. This approach results in a bi-linear model composed of two dynamical sub-models: price evolution and risk evolution. The price evolution sub-model represents the behavior of pricing of commodities and a market aggregate as a function of exogenous demand and control actions. The risk sub-model represents the behavior of risk as a function of exogenous uncertainty and actions. Further, the risk sub-model represents the uncertainty range of the values computed by the price evolution model.
The game here is to decode the blurb and tell me what this system does.
©2006 Marc Adler - All Rights Reserved
Sunday, September 17, 2006
Microsoft ESB
Microsoft in the Enterprise Service Bus (ESB) space? Ot should be an interesting development to watch, especially for shops who are heavily tied to Tibco RV and EMS. Microsoft will have to really exceed Tibco EMS's performance in order for people to take notice. Also, Microsoft will have to throw the EMS people a bone and support JMS. I might suggest that Microsoft come out with patterns to support synchronous calls over JMS easily.
I can imagine some interesting tie-ins with SQL Server and Excel. You could have database events published on the message bus. You could also have Excel subscribing to the message bus in order to receive real-time stock quotes and position updates, and publishing risk calculations back out to the bus. If Microsoft were to have this trio (DB, Excel, bus) tied in seamlessly, then this would show Wall Street a real commitment.
Are you an RV, EMS, or Sonic shop? What would it take for you to transition to a Microsoft ESB?
By the way .... A few weeks ago, I asked a Microsoft rep about what they are looking at for messaging, and they said that they will be supporting WS-Events. Is this an alternative to JMS for async messaging? What we don't need is to divide the messaging community at this point.
©2006 Marc Adler - All Rights Reserved
Saturday, September 16, 2006
GemFire
I am starting an evaluation of object cache technology, starting with GemFire. The target app is a legacy C++ app, so the fact that Gemstone has a C++ version of GemFire is a big plus. They also have .NET bindings, and I will be checking those out too.
One gotcha .... GemFire does not work on Windows 2000 because of the underlying dependencies on 29West's LBM message broker. This is a real nasty if you want to do an evaluation on your desktop at work, but your company is still in Windows 2000-land. So, I had to load up GemFire on my home laptop, which runs XP Pro, and will do the evaluation on my laptop.
The plan is to use the object cache as a "data fabric" in order to speed up some of our calc engines. Object caches like GigaSpaces are already used in a lot of Wall Street IBs for just that purpose. I have heard a little rumbling about GigaSpaces implementations from some former colleagues, so we are hoping that GemFire will be worry-free. Already, I am impressed with their support staff (thanks to Mike and Santiago for timely responses).
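Whatever product wins, the basic shape we want from a "data fabric" in front of a calc engine is cache-aside: check the shared cache before recomputing. A toy sketch of the pattern (the real GemFire and GigaSpaces APIs differ, and add distribution, eviction, and failover on top):

```python
class DataFabric:
    """Minimal cache-aside store: serve from cache, compute only on a miss."""
    def __init__(self):
        self._store = {}
        self.misses = 0

    def get_or_compute(self, key, compute):
        if key not in self._store:
            self.misses += 1
            self._store[key] = compute()   # expensive calc runs only once per key
        return self._store[key]
```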
I would be interested in comments from any of you people who have evaluated or used object caches in your financial apps, especially C++ or C# apps. Feel free to comment here or email me privately.
©2006 Marc Adler - All Rights Reserved
Friday, September 08, 2006
Roy Buchanan
If you are a fan of guitar, blues or just plain old great music, check this out.
I am not a guitarist nor do I really like the blues, but like everyone else who saw this on YouTube, it hit me in the right place. I actually saw Roy years ago when a school chum got tickets to a taping of ABC's In Concert TV Series. The bill was Uriah Heep, Roy, The Persuasions, and Savoy Brown. Roy blew everyone away.
I continue to be astounded by the things I find on YouTube. My YouTube id is VanderTop2, so you can browse my Favorites list and see what kind of things I am unearthing.
©2006 Marc Adler - All Rights Reserved
Thursday, September 07, 2006
Celllllllllebration, Yeah, Come On!
We have been given the go-ahead for the .NET Client Framework!
Thanks to all who contributed ideas, both publicly and privately. And thanks to various open-minded individuals at my IB.
My ex-colleague, Chris, will be embarking on an effort to do the same at another IB. Will it be Service Locator vs Spring.Net? Stay tuned!
Beers are on me.....
©2006 Marc Adler - All Rights Reserved
Sunday, September 03, 2006
Thoughts on Performance in Trading Systems and .NET
Rico has done it again with a post that provides much thought.
With the advent of object-oriented languages (C++) and higher-level languages (C#, Java), most developers try to craft elegant, object-oriented frameworks, complete with reflection, heavyweight classes, and lots of properties. I am one of them.
However, I remember the days when I had to cram all of the functionality of New York Word (my very first commercial product) into 512K of RAM. Poring over code, trying to see if I could save a few bytes here and a few bytes there. Looking at the output of the Microsoft linker to see if I could save space. Looking over the disassembly of a crucial module, such as the one that performed word wrapping.
In the next few years, we are going to start seeing a predicted rate of 456,000 messages per second from some of the market data feeds. The goal is to get these messages, transformed into viable data, into another, trader-facing system, with as little delay as possible. There are additional needs to get this data into algorithmic trading systems and black-box trading systems with the lowest possible latency. The time taken to serialize and deserialize data, construct objects, and perform garbage collection can mean a precious few milliseconds added onto the path that your data takes. If you are writing array-handling code in a managed environment, then the time it takes to perform array checking might add further delay. Even the slightest delay can mean millions of dollars in lost opportunities.
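One standard mitigation in any managed runtime is to preallocate and reuse message objects, so the hot path never touches the allocator or the garbage collector. A sketch of the pattern (in Python purely to show the shape; the same idea applies in C# or Java):

```python
class Tick:
    __slots__ = ("symbol", "price", "size")   # fixed layout, no per-instance dict

class TickPool:
    """Preallocated ring of Tick objects: the feed handler overwrites slots
    in place instead of constructing a new object per message, so the
    steady-state path generates no garbage to collect."""
    def __init__(self, capacity):
        self._ring = [Tick() for _ in range(capacity)]
        self._next = 0

    def acquire(self, symbol, price, size):
        t = self._ring[self._next]
        self._next = (self._next + 1) % len(self._ring)
        t.symbol, t.price, t.size = symbol, price, size
        return t
```

The trade-off is that a consumer must finish with a tick before the ring wraps around and reuses it, which is why these pools are usually sized generously.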
An incredible amount of effort has been spent writing high-performance, low-latency graphics toolkits for rendering objects in video games. Have similar efforts been made to "render" the flow of market data, where every market tick counts?
I would love to hear about any experiences that you have had in getting market data into a client as fast as possible. Things like compressing data, conflating data, choice of transport, choice of GUI grids, high-performance threading, etc.
Microsoft has oodles of articles that deal with performance in .NET. I am anxious to see any performance improvements throughout the entire .NET stack. I am also interested to see how .NET/C#/C++ stacks up against a similar Java/Linux stack, given the same hardware. The PetShop demo might work well for a typical developer, but for trading systems developers, we need to see something a bit more substantial.
©2006 Marc Adler - All Rights Reserved
On the Beach
I have been trying to fly out to a beach for the past few weeks, but every time I reserve a plane, the weather craps out. Had a plane reserved for today, but we have the remnants of Hurricane Ernesto. It is supposed to be an amazing day tomorrow, so I will try to make Ocean City, NJ.
There are a few beaches on the East Coast that you can fly to. Our favorite is Block Island, Rhode Island, right off the eastern tip of Long Island and about 1h15m from MMU. Block Island is only reachable by ferry, and hence, the beaches are relatively less crowded than your typical East Coast beach.
Other beaches that have airports within walking distance include Provincetown (Mass.), Ocean City (NJ), and Katama (Martha's Vineyard).
©2006 Marc Adler - All Rights Reserved
Market Data Tech Conf - NYC - Sept 28,29 2006
Hope Craig sees this one.
We should have more fans of Market Data where I work, seeing how Craig kept everyone until after 5PM on the Friday before Labor Day with an amazing lecture on Market Data.
In the past, I have just built trader workstations that merely hooked up to a market data feed through some sort of communication mechanism (sockets, Tibco RV, EMS) without having to be concerned about the ins and outs of the data. However, Craig has been doing market data for 15 years, and it's a pleasure to work alongside someone who is so passionate about the area.
©2006 Marc Adler - All Rights Reserved
Investment and Trading System Documentation Project
While browsing through the FIX forums, I saw mention of an effort called the Investment and Trading System Documentation Project. Interesting idea, but it looks as if it really never got off the ground. Still, they have a repository of some articles on Electronic Trading.
©2006 Marc Adler - All Rights Reserved
Wednesday, August 30, 2006
The Bronx Cheers
Random Seinfeld-ish Thought...
Why are Americans suddenly using the signatory line of "Cheers"? People I know who were born and raised in Brooklyn are suddenly signing email messages with "Cheers".
The next time you say "Cheers" to me, you had better have a scotch in your hand!
©2006 Marc Adler - All Rights Reserved
Monday, August 28, 2006
Pair Programming and Market Data?
I was talking about Agile/Scrum with our head of market data systems. He advised me that, with regards to Pair Programming, if you have two sets of eyes looking at a computer at the same time, then the exchanges would consider this grounds for charging double for the market data!
OK, Messrs Fowler, Beck, and Schwaber! What do you have to say about that!
©2006 Marc Adler - All Rights Reserved
Sunday, August 27, 2006
Java vs C#/.NET in Investment Banks
Although there are definite pockets of .NET development in most investment banks on Wall Street, the majority of the development efforts are still in Java, especially on the server side. This is nowhere more apparent than in the investment bank that I am currently working for.
I am becoming more and more impressed with the open software that has been developed for the Java community, and I have been similarly impressed with the Java community's sense of camaraderie. Most of the interesting .NET tools have origins in the Java world (NUnit, Spring, Hibernate, etc.).
One of my tasks in my current position is to look at what our company has in terms of client-side frameworks and to come up with a proposal for a company-wide framework. This means looking at the toolkits that various groups are using or currently developing, looking at what I have done in the past, and making some recommendations by taking the best-of-breed of the various efforts. In addition to looking at .NET frameworks, I have been asked to look at some of the Java frameworks. And, I have been impressed by some of the Java efforts.
Some people have even asked me to look at things like the Eclipse RCP and the NetBeans RCP. Several of my colleagues have blogged in the past about the viability of the Eclipse RCP as a client-side framework. I feel that I need to take a serious look at it.
You all know that I have been a staunch advocate of .NET in the past. However, I am finding it more difficult to beat back the Java supporters. Some of the arguments that I have used against Java in the past (slowness, not as nice a UI, etc) have been colored by my past experiences with AWT, Swing, and JFC. But, Java GUI development has definitely evolved.
I would love to hear from my blog readers as to why I should continue to push .NET over Java.
Some reasons that I can think of for preferring .NET over Java:
1) I feel C# is a stronger language, and will continue to evolve into an even more powerful entity.
2) Closeness of .NET to the O/S, especially when Vista comes out.
3) Support for .NET inside of SQL Server 2005.
4) Possible business reasons are Microsoft non-support of a JVM and the precarious nature of Sun's balance sheet.
5) Better integration with the standard desktop tools (Excel, Word, etc.)
©2006 Marc Adler - All Rights Reserved
Saturday, August 26, 2006
Before you start subscribing to market data...
... read this.
Several Wall Street companies have gotten big fines when the exchanges have found out that people or "interrogation devices" that were not entitled to use market data were actually using it. Try looking at this document and see if you can figure out what you need to license if you have a single PC with several different WinForms apps, each app capable of letting one user see the NASDAQ data feed. And, what happens if there are multiple users on that PC?
My colleague Craig tells me that the fee structure imposed by the various exchanges is one of the main driving points for my "Common Wall Street Client Stack", with various applets running inside of one desktop shell.
Wonder if NASDAQ will start charging per AppDomain?
©2006 Marc Adler - All Rights Reserved
Friday, August 25, 2006
We have 4 Open Positions
Want to work with me at a large Investment Bank in NY and NJ?
Here are 4 positions that we have open right now:
- CI and Agile Champion who ideally can also act as a test architect
- Java master with focus on messaging
- Middle Office architect
- Algo architect, which we may re-position as a generalist Java pro
The Middle Office architect can probably be .NET or Java based.
These positions are all for full-timers.
If you are interested, or know anyone who is, then email me at
magmasystems AT yahoo dot com.
©2006 Marc Adler - All Rights Reserved
Sunday, August 20, 2006
An Article on Trading
A bit dated (1995), but here is an article on the trading process. Useful for understanding a bit on order matching and execution algorithms.
©2006 Marc Adler - All Rights Reserved
Thursday, August 17, 2006
Comments Added and WMI Blues
I spaced out on approving and publishing the moderated comments (thanks, Craig). So now there are about 30 comments, dating back to late May.
I was wondering why nobody congratulated me on my new job :-)
Starting to play around with Microsoft's WMI for instrumenting applications. I need to beat up on Microsoft for not letting you author .NET-based WMI providers that allow method invocation and property setters. All you can do with a .NET WMI provider is fire events and expose property gets. You cannot change a value through a property setter, nor can you manage an application by calling a method on a provider.
Seems that if you want to write a fully-functional WMI provider, then you have to drop down to COM (even using ATL) and C++.
This limitation is severe: it means that I cannot write a .NET app that can be managed through a .NET provider. If I want to write a monitoring and management dashboard (for, let's say, a .NET-based trading system), I have to drag out my old COM/ATL books and write unmanaged code. Ugh!
It seems as if plenty of people on the newsgroups share my pain.
©2006 Marc Adler - All Rights Reserved
Friday, August 11, 2006
Silicon on Wall Street (followup)
As a response to my previous posting, my man John weighs in with some words on FPGA's on Wall Street.
As an aside, I notice that there are some good blogs on issues that Wall Street technologists come across every day... John, Chris, DLG, JO'S, and the daddy of them all, Matt. Know of any others?
©2006 Marc Adler - All Rights Reserved
Thursday, August 10, 2006
Exotics Computations on Silicon?
Someone floated an idea that sounded interesting ...
Has anyone heard of people writing their calc functions in C and burning them onto field-programmable gate arrays (FPGAs)? The calc models would have to be fairly static, but this might give you orders-of-magnitude speed improvements.
I confess that I know very little about this technology, but it bears some investigation.
Hopefully J.O'S can weigh in with something on his blog....
Here is an interesting article.
©2006 Marc Adler - All Rights Reserved
Wednesday, August 09, 2006
Barrier Options
Barrier Options are considered to be exotic derivatives. Exotics usually take a long time to price, so if you are architecting a trading system, you generally want to put a lot of computing power behind the exotics calculations. Many trading firms look to compute grids and data caches to help speed up some of these calculations.
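As a rough illustration of why exotics are compute-hungry, here is a hedged sketch of pricing a down-and-out barrier call by Monte Carlo simulation. Every parameter here is illustrative, and a production model would use variance reduction and a proper barrier-crossing correction; this just shows the shape of the work a compute grid would be farming out.

```java
import java.util.Random;

// Illustrative Monte Carlo pricer for a down-and-out barrier call under
// geometric Brownian motion. The option pays like a vanilla call unless
// the spot touches the barrier at any monitoring step, in which case it
// "knocks out" and pays nothing.
public class BarrierMonteCarlo {
    public static double downAndOutCall(double spot, double strike, double barrier,
                                        double rate, double vol, double maturity,
                                        int steps, int trials, long seed) {
        Random rng = new Random(seed);
        double dt = maturity / steps;
        double drift = (rate - 0.5 * vol * vol) * dt;   // risk-neutral drift per step
        double diffusion = vol * Math.sqrt(dt);
        double payoffSum = 0.0;
        for (int i = 0; i < trials; i++) {
            double s = spot;
            boolean knockedOut = false;
            for (int j = 0; j < steps; j++) {
                s *= Math.exp(drift + diffusion * rng.nextGaussian());
                if (s <= barrier) { knockedOut = true; break; }  // option dies at the barrier
            }
            if (!knockedOut) payoffSum += Math.max(s - strike, 0.0);
        }
        // Discount the average surviving payoff back to today
        return Math.exp(-rate * maturity) * payoffSum / trials;
    }
}
```

Even this toy version does steps × trials exponentials per price; multiply that by thousands of positions and intraday repricing, and the appeal of a compute grid is obvious.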
This explanation of Barrier Options has been taken from the Bank of New York website:
SRDF
SRDF (Symmetrix Remote Data Facility) is an EMC product family that provides synchronous or asynchronous data replication between LUNs (logical units) of two SANs (Storage Area Networks).
Here is a story about how data replication saved the day in a trading environment at a Tier-1 bank.
Here is a description of a high-performance trading system that uses SRDF. The brochure contains some good points to consider when designing the infrastructure of a trading system. These kinds of infrastructure components were things that I never had to consider previously when I was concentrating solely on writing good software and managing development teams, but in the architecture group, I have to pay attention to this stuff now.
©2006 Marc Adler - All Rights Reserved
The First Week and Many New Terms
Sitting here relaxing on the couch after the 2nd day of a grueling offsite. Switching the TV between the Yankee game and the MSG network showing old WWWF highlights from Madison Square Garden (yes, I used to be a major WWWF fan when I was a kid). Perfect TV for unwinding.
It's strange being on the other side of the full-time/employee equation. Being part of the Equities Architecture team, you have various vendors trying to get a piece of your ear. There is no better PR than having your product as a piece of a major equities trading system. As such, I am wondering why Microsoft never devoted major effort to attacking the trading system space. Microsoft would be a major force if they came out with the infrastructure to do low-latency, high-volume market data distribution, pub/sub messaging, ultra-fast UI grids, the ability to take an Excel model and compile it into a DLL, etc. Microsoft always seems to be on the cusp of doing something in the trading system space, but has never come through with a compelling, end-to-end story. However, new technologies like Windows Compute Cluster and Excel Services seem to be moving the company in the right direction from a technological standpoint. Plus, their recent announcement to invade the healthcare space has shown a new willingness to attack certain verticals. It still is an uphill battle to sell Microsoft within Capital Markets for anything but UI work ... but people are starting to listen!
I always have a page in the back of my notebook where I write down terms that I need to explore more. Being a .NET specialist and an architecture generalist, I am bound to come up against unfamiliar territory. Here are the terms from the two-day offsite that I attended:
1) Barrier Knockout
2) Bishop Algorithm
3) Monte Carlo Trials
4) SSH
5) Alpha Partition
6) MPI (the Message Passing Interface, an API for parallel computing on clusters and grids)
7) SRDF
8) SANs and Spindle Optimization with regards to Databases
9) Multicast Storms
10) Java NIO, especially with regards to memory-mapped files
By the way, anyone know of a .NET/C# port of NIO?
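For the Java NIO item above, here is a minimal sketch of memory-mapped file I/O: map a scratch file into the process address space and read/write it through a MappedByteBuffer instead of stream calls. The file name and values are illustrative.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Minimal sketch of Java NIO memory-mapped I/O.
public class MmapDemo {
    public static int roundTrip() throws IOException {
        File file = File.createTempFile("mmap-demo", ".bin");
        file.deleteOnExit();
        RandomAccessFile raf = new RandomAccessFile(file, "rw");
        try {
            FileChannel ch = raf.getChannel();
            // Map the first 1 KB read/write; the file is grown to fit the mapping
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            buf.putInt(0, 42);       // the write goes straight to the mapped pages
            return buf.getInt(0);    // read it back through the same mapping
        } finally {
            raf.close();
        }
    }
}
```

The appeal for market data is that reads and writes become plain memory access backed by the OS page cache, with no per-call copy through a stream, which is why a .NET equivalent would be so welcome.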
©2006 Marc Adler - All Rights Reserved