Citi Prime Finance’s 2011 IT Trends & Benchmarks Survey
Hedge funds launching in this period could connect to these
off-premises networks in ways that made a higher level of data
replication among environments possible. It was becoming
increasingly affordable to copy critical information from a hedge
fund's locally hosted data center to the off-premises data center.
For most hedge funds, these emerging capabilities were seen
as an opportunity to create a much more robust and secure
disaster recovery environment, as will be discussed in a moment.
With bandwidth availability still evolving, however, managers
launching in the second wave continued to rely primarily on their
locally hosted on-premises data centers, which continued to be
built and serviced by third-party IT integration firms.
Servicing of the hedge fund manager's local data centers
was becoming more robust, however, due to the build-out of
sophisticated Network Operations Centers (NOCs). These NOCs
allowed the outsourced infrastructure firms to better monitor the
health of their clients' networks remotely, administering patches
and performing other maintenance without having to visit their
clients' physical locations. Integration firms such as Gravitas,
which rose to prominence during this wave, were able to service
their clients more efficiently by deploying these service models.
Managers who had launched in the Hedge Fund 1.0 model,
meanwhile, were likely to bring some support staff in-house,
often converting contractors into full-time employees.
Hedge funds were not the only audience able to take advantage
of cheaper bandwidth and the expanded availability of data
centers. Software vendors launching during this period, such as
Imagine and Backstop, began to leverage these data centers as
well and to offer a new model for their applications. Rather than
pursuing the traditional approach, whereby someone purchasing
their system would need to install the software locally, these
emerging firms would host their application in their own data
center and give users remote access via a web browser or
remote-access technologies like Citrix. This model became
known as "software as a service."
Other established software providers began to follow suit,
seeing this as a lightweight deployment option that reduced
the need to directly install their product in each individual
client's facility. One example of a vendor that evolved its
approach in this period is Advent with its Geneva portfolio
accounting solution. Advent decided not to maintain its own
data centers, but instead recommended hosting partners. This
was a welcome innovation, given the UNIX platform underlying
Geneva, which many hedge funds were not equipped to support.
As hedge funds began to leverage these software-as-a-service
models, they in turn increased their exposure to cloud-based
solutions.
The result was a mixed approach. Some of a hedge fund's
software, particularly custom-developed solutions, was housed
in its local data center, while other software was accessed
remotely via software-as-a-service from the vendor's hosted
data center.
One impact of this hybrid configuration was that the market data
required to feed systems became much more diffused. In the
Hedge Fund 1.0 approach, investment firms would license their
data from providers such as Bloomberg or Thomson Reuters
and pipe that data directly into locally hosted software
applications. In Hedge Fund 2.0 models, where some of a fund's
software was hosted locally within its offices and some was
hosted in remote data centers, market data would in some cases
need to be licensed twice. Furthermore, execution management
systems, licensed by hedge funds but funded by their
broker-dealer counterparties in return for trade flow, would also
receive market data feeds directly from the exchanges. The
consequence was that funds would often be charged for
receiving multiple instances of the same data from different
sources in different physical locations.
This shift in the dynamics of physical infrastructure, and its
impact on market data charges, heightened the need for funds
to focus on their data costs. Specialty firms, such as Done Plus
(formerly Market Data Insights), were formed to provide business
process outsourcing to address these effects. Through careful
analysis and allocation of market data expense at the user level,
a third-party firm can identify duplicate charges and file for
rebates with the exchanges, so that any one user is charged for
a given set of market data only once. These so-called MISU
(Multiple Installation Single User) credits have yielded significant
savings for some of the largest funds, whose extensive use of
data makes it one of their greatest expenditures.
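To make the MISU mechanics concrete, the sketch below is a
minimal, hypothetical illustration of the user-level analysis just
described; it is not any particular vendor's actual process, and
all field names, sources, and dollar figures are invented. It groups
invoice lines by user and dataset, flags any user billed more than
once for the same data across different sources, and totals the
duplicate charges that could be filed as rebate candidates.

```python
from collections import defaultdict

# Hypothetical invoice lines: (user, dataset, source, monthly_charge_usd).
# A MISU (Multiple Installation Single User) situation arises when the
# same user is billed for the same dataset via more than one source,
# e.g. a local terminal, a hosted SaaS application, and an EMS feed.
invoice_lines = [
    ("trader_a", "NYSE Level 1", "local terminal",   120.0),
    ("trader_a", "NYSE Level 1", "hosted EMS feed",  120.0),
    ("trader_a", "NYSE Level 1", "SaaS application", 120.0),
    ("trader_b", "NYSE Level 1", "local terminal",   120.0),
]

def misu_credits(lines):
    """For each (user, dataset) billed more than once, treat every
    charge beyond the first as a rebate candidate."""
    charges = defaultdict(list)
    for user, dataset, source, amount in lines:
        charges[(user, dataset)].append((source, amount))
    detail = {}
    credit_total = 0.0
    for key, billed in charges.items():
        if len(billed) > 1:
            # Keep the first installation; the rest are duplicates.
            duplicates = billed[1:]
            detail[key] = duplicates
            credit_total += sum(amount for _, amount in duplicates)
    return credit_total, detail

total, detail = misu_credits(invoice_lines)
print(f"Rebate candidates: ${total:,.2f}/month")  # trader_a: 2 x $120
for (user, dataset), dups in detail.items():
    for source, amount in dups:
        print(f"  {user} / {dataset}: duplicate via {source} (${amount:,.2f})")
```

In practice the analysis runs across many providers and physical
locations, but the principle is the same: once charges are
allocated to individual users, duplicates become visible and can
be claimed back.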
"For our DR, we have a direct line out to our IT partner, and
we continuously replicate our data via this 50 MB line. A
year ago, there was a couple of days' worth of latency, and
data would get backed up on a lag. Now, capacity is a bit
cheaper, and our need for real-time replication has grown
as we've added to our infrastructure. This coincided with
AUM growth; we could support the added expense as our
revenues have grown."
– COO of US-based Hedge Fund Managing
between $3 billion and $5 billion
“As you grow, you have multiple providers, and it becomes
hard to keep on top of data costs.”
– CTO of US-based Hedge Fund Managing between $3 billion and $5 billion