

In my role over the years as a designer and developer of client reporting packages for dozens of investment advisors, I typically work with decision-makers to facilitate the creation of new client presentations. Many of my clients already know what they want and just need help making it happen.

Though I have an excellent understanding of what is important to most investment advisors and their clients, my opinion is seldom solicited. I speak up when an issue demands it, but most of the time I defer to advisors, listen, and do my best to create what my clients (investment managers) want. In many cases, the bulk of the project is spent on individual report exhibits, with little emphasis on the way reports are organized and presented to clients.

I have worked with firms that wanted to do the bare minimum for their clients (an appraisal and an invoice) as well as firms that go above and beyond their duty to report. However, even those with the best reporting intentions can err by including a level of complexity and detail that will not benefit their clients.

On occasion, I am lucky enough to work with investment professionals who are modern thinkers and savvy marketers. The combination of these important characteristics leads to engaging projects and sophisticated report packages. These advisors apparently understand what their clients want to see, and are determined to make the desired reports a reality.

Instead of reporting only what is required, these advisors are trying to exceed the reporting expectations of their clients, and in doing so they engender trust. Their reports are comprehensive and transparent. As such, they may highlight poor performance, but that is a risk most advisors need to take. Clients recognize the significance of this level of disclosure, and it should improve client communications.

In terms of presentation, reports should be bound and well organized, with a cover and/or a table of contents. When a client opens the report, the most important things come first, and less important details follow. For example, there is a hierarchy to the way the reports are organized in the package such that the relationship is reported first and individual account reports later. You can view a full report sample illustrating this approach here. In this specific example, the physical report package opens to display pages two and three of the PDF document, which are a relationship summary. The pages that follow provide account-level information.

Reports are typically bound electronically (i.e. as PDFs) for those who deliver reports through portals or encrypted email, but due to low adoption rates, firms still send most of their reports out on paper. Paper copies should look professional, and there are cost-efficient options to make this possible, whether that means printing report packages on 11×17 stock with a saddle stitch or binding reports manually after they have been printed. Some of the manual binding options are fairly quick, but shops with hundreds or thousands of reports should not bind reports manually.

Another key to producing impressive report packages is the one-page summary, which allows a client to look at a single page if that is all they want to see. Usually, it is an exhibit that shows them where their investments are, how much they are worth, how they have grown, and how they have performed over various time periods. One-page summaries are also produced to provide information about specific asset types and performance. The idea is to create an executive summary. Clients really want a concise overview of their investments, and rarely look at all the other details that get sent to them on a quarterly basis.

How will you know if your reports have made an impression?

You will hear it from your clients. Even hard-to-please clients should appreciate these types of report improvements. So get to work now, and your new report packages could be ready for next quarter.

About the Author: Kevin Shea is President of InfoSystems Integrated, Inc. (ISI); ISI provides a wide variety of outsourced IT solutions to investment advisors nationwide.

For details, please visit isitc.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@isitc.com.

As a provider of technology solutions for financial services firms small and large nationwide, I frequently come in contact with investment firms of diverse dynamics and decision-making processes. I am, of course, familiar with the process and discipline of getting three separate quotes for goods and services, but even after decades of bidding on projects, it is still unclear to me what investment firms actually do with this information.

In some cases, it seems like the decision has already been made and prospects are just going through the motions to satisfy a procedure and process established by their firm. Gut decisions sometimes overrule common sense.

One of my clients actually adheres to this discipline for everything and, if the rumors are true, even gets three prices for paper clips. In my own experience with them, they did, in fact, get three quotes for a single piece of computer equipment that cost about $75. Considering current wage and consulting rates, this is arguably not a good use of time or money. Perhaps it's the more altruistic goal of keeping our economy competitive that drives their policy.

 

Opportunity

Recently, I was contacted by a firm looking for assistance with some Axys report modifications.  One of our competitors provided them with a quote for the work they needed.  The prospect felt that the price was too high and they solicited my opinion.  I never saw the quote from my competitor, but heard from the prospect that they wanted 3-4k up front and expected it would cost 7-8k.  In another conversation, I was told that there was also a local company bidding on the work.  That made sense to me – three bids.

I was provided with a detailed specification of what needed to be done and asked to provide a quote. The firm was looking to make some modifications to the Axys report that generates Advent's performance history data and stores it as Net of Fees (PRF) and Gross of Fees (PBF) data. Though the requirements seemed complicated initially, it eventually became clear to me that the job simply required filtering in a couple of REPLANG routines, plus some minor additions.
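To make the scope concrete, here is a rough sketch, in Python rather than REPLANG, of the kind of record filtering the job called for. The comma-delimited record layout (account, date, value) and the exclusion rule are hypothetical simplifications for illustration, not Advent's actual PRF/PBF format.

# Illustrative stand-in for the real REPLANG change, using a simplified
# comma-delimited layout (account, date, value) that is NOT Advent's
# actual PRF/PBF format.

EXCLUDED_ACCOUNTS = {"ACCT001", "ACCT002"}  # hypothetical accounts to filter out

def filter_performance_file(in_path, out_path):
    # Copy every performance record except those for excluded accounts.
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            account = line.split(",")[0].strip()
            if account not in EXCLUDED_ACCOUNTS:
                dst.write(line)

filter_performance_file("perf_gross.txt", "perf_gross_filtered.txt")

The point of the sketch is the modest scale of the change: a filter condition applied while records are written, not a rewrite of the report.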

I shared my impression with the prospect and ball-parked our bid at 3k (a 12-hour block of time), less than half of our known competitor's bid. I explained that the actual work was likely to take three to four hours, and the rest of the time would be spent on testing, support and maintenance. My expectation was that we would get the work done in a half day to a day at most, and the remainder of our time could be used for any required maintenance or modifications later in the year.

 

Follow-Up

After about a week, I called to follow up and found out that the firm was strongly considering having the work done by their local vendor, who told them it could be done in seven to ten days. “Excuse me,” I said. “Don't you mean seven to ten hours?”

“No,” he replied.  He further explained that they really like using the local vendor and would probably use them for the job, which I fully understand.  I have, no doubt, benefited from this sentiment in Boston for years.  At that point in the call, I was thinking that it was more like seven to ten lines of code, but thankfully I didn’t start laughing.  I waited until the call ended.

 

No Risk, No Reward

In the end, your firm’s decision to select one bid over another is a personal one, similar in some respects to the one that dictates an investment adviser’s success attracting new clients and retaining them.  It’s about trust, performance, and the ability to continually communicate that you are worthy of one and capable of the other.  To succeed long-term in the financial services business, you need both.  Through good performance, we gain a measure of trust.  However, without a measure of initial trust or risk, there is no opportunity to perform.

About the Author: Kevin Shea is President of InfoSystems Integrated, Inc. (ISI); ISI provides a wide variety of outsourced IT solutions to investment advisors nationwide. For details, please visit isitc.com or contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@isitc.com.

Yesterday, I fielded a call from one of our clients that generates reports for several thousand accounts using our reporting engine.  As part of their reporting process, they extract data from Axys and import it into a database that facilitates data quality reviews and enhanced PDF reporting via Crystal Reports.

My primary contact at the site phoned me to let me know that part of our process, which took 20 minutes last quarter, was still running after two hours.  We immediately established a remote session to review the issue.   In the past, we have experienced some issues with individual PCs processing at slower speeds due to poor network infrastructure, but more recently this firm invested in better network hardware to support their rapidly growing business.

We play a limited role for this client and focus on their quarterly reporting and billing systems.  Though we are IT experts, it is not our responsibility to oversee and manage their IT infrastructure; however, at quarter end, a processing issue where systems are operating at a fraction of their normal speed rapidly becomes our problem.

I am very familiar with the bottlenecks that can slow Axys performance. The most critical of these is network speed. 100Mbps Ethernet (full-duplex) is an older standard that we still find in limited use at many offices. Gigabit Ethernet (full-duplex) is the current standard and should be in use at nearly all investment firms. Theoretically, gigabit is ten times faster, but you won't see that in practice. Over decent cabling, you actually get six to seven times the performance of 100Mbps Ethernet.
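If you want to sanity-check the effective throughput between a workstation and a file server, one quick method is to time a large file copy. A minimal Python sketch follows; the file path and UNC share are placeholders for your own environment, and the result will include protocol overhead, so expect numbers well below the theoretical line rate.

import os
import shutil
import time

# Hypothetical paths: point these at a large test file and a share on the
# server you want to test (e.g. a 500MB file created just for this).
SRC = r"C:\temp\testfile.bin"
DST = r"\\fileserver\share\testfile.bin"

size_bytes = os.path.getsize(SRC)
start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start

mbps = (size_bytes * 8) / elapsed / 1_000_000
print(f"Copied {size_bytes / 1e6:.0f} MB in {elapsed:.1f}s: ~{mbps:.0f} Mbps effective")

On a healthy gigabit path you should see several hundred Mbps here; a number around 90 Mbps is the signature of a 100Mbps link somewhere in between.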

Our system is normally connected to the file server that hosts Axys via gigabit network connections.  A quick check of the system revealed that it was connected to a gigabit switch.  We reviewed a few other things to make sure that there wasn’t a performance issue specific to our system.  Everything we looked at pointed to a problem with their environment.  I was fairly certain that, somewhere between our system and the file server hosting Axys, we were not connected at gigabit speed.  We still needed to identify where the breakdown was occurring.

My technical contact at the firm first assured me that all of the systems were connected to gigabit switches and that nothing had changed since last quarter. We discussed the wiring of the network in detail, and I was eventually able to find out that they had added a new Dell switch in the server room, but I was assured again that it was a gigabit switch. I asked them to double-check the switch and let me know the model.

Though most of our own experiences purchasing equipment from Dell have been good, Dell isn't perfect. Perhaps the Dell sales rep didn't know one gigabit switch from another. Our client thought they had purchased a managed switch on which all ports were gigabit. They had, in fact, bought and installed a 100Mbps managed switch with two gigabit uplink ports. Further discussion revealed that the gigabit uplink ports were not being used either.

For those not familiar with network nomenclature, the primary switch to which all of your other switches, routers and servers are connected is considered your network backbone.  It is a best practice to implement a backbone that has throughput greater than or equal to that of the devices connected to it. 

When two network devices auto-negotiate to communicate with each other, the maximum speed is usually the highest speed commonly supported by both devices. Other environment-specific issues, such as the quality of the cabling between two devices, can further degrade the speed at which they communicate.
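You can check what speed and duplex each adapter actually negotiated without leaving your desk. This sketch uses the third-party psutil library (not part of the Python standard library), which reports the negotiated link speed in Mbps:

import psutil  # third-party: pip install psutil

# net_if_stats() returns per-adapter stats, including the negotiated
# link speed (in Mbps) and the duplex mode.
for name, stats in psutil.net_if_stats().items():
    if stats.isup:
        print(f"{name}: {stats.speed} Mbps, duplex={stats.duplex.name}")

A gigabit adapter reporting 100 Mbps here points at the switch port, cabling, or adapter settings in between.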

In this particular case, our client had unknowingly installed a switch that was forcing all of their servers with Gigabit Ethernet to communicate with the rest of the network at 100Mbps instead of gigabit. Users that were not connected directly to the backbone had a gigabit connection through another switch and assumed that everything was fine.

The short-term fix for this client was to connect the file server hosting Axys to one of the new switch's gigabit uplink ports and use the other uplink to connect to their larger gigabit switches. They also called Dell and had the right switch sent overnight, which they installed today.

Having an up-to-date network diagram is a best practice.  If you don’t have one, have your technical staff or IT provider create and maintain a network diagram documenting your systems, so you can proactively manage problems with network performance and reliability.

About the Author:
Kevin Shea is President of InfoSystems Integrated, Inc. (ISI); ISI provides a wide variety of outsourced IT solutions to investment advisors nationwide. For details, please visit isitc.com or contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@isitc.com.

Is It Time to Upgrade IT?

Today, a trip to your local computer store to buy a new PC can be an eye-opening experience. Notebooks, for example, come in many shapes and sizes: desktop replacement, notebook, sub-notebook, and netbook. The relative processing power of PCs also varies greatly. This may not matter to users who just want to use their PC for basic office applications and web browsing. However, power users that never want to wait for their computers still care very much about processing power.

When PCs were first introduced, it was far easier to understand the relative processing power of workstations. Each new PC model that was released was a quantum leap beyond its predecessors. If you were around back then, you may remember XTs, ATs, 386s, 486s and 586s. Thanks to those classifications and the relative clock speeds, you didn’t need to be a rocket scientist to determine the approximate speed of one of these PCs.

You knew when to buy a new one.

In later years, other speed-related issues became increasingly important: memory, hard drive, front-side bus, hard drive interface, PCI Express slots, USB version, hyper-threading, multitasking, operating systems, et cetera. Somewhere along the way, the ability to easily differentiate the relative power of various PCs became blurred. Today, even technology experts have to scrutinize specific benchmarks to be sure of exactly what processing power they are getting. The difficulty lies in understanding your primary applications' infrastructure needs, knowing the potential bottlenecks, and navigating the vast array of choices that can satisfy your business requirements.

Best Practice
Many firms in the financial industry regularly replace their equipment after two or three years of use. This strategy has as much to do with leasing and depreciation as it does with proactive maintenance and a commitment to technology standards. It is considered a best practice to replace equipment that is older than three years. This practice provides an opportunity to implement more efficient technology, limit future maintenance costs, and reduce the risk of catastrophic system failures. Though we occasionally see firms stretch equipment into a fourth or fifth year, we don’t recommend it.

Our advice is to establish a regular routine for replacing equipment, with priority on shared resources. For instance, a firm might replace all servers every two years and workstations every three years. As game-changing technology emerges, we also make additional recommendations for purchases when appropriate.

Simplified Hierarchy of Processing Speed Factors

Assessing your systems

For business applications, the most important factors in determining your system's operating speed are CPU, memory, hard drive, and operating system (OS). Internet bandwidth and network speed also contribute to how fast your systems process data. In the remainder of this article, we will take a closer, slightly more technical look at these individual factors, offer some specific recommendations, and give you instructions on how to evaluate certain components. A software program can affect your perception of system performance too, but we won't be getting into that.

In order to get a more comprehensive evaluation of your individual systems, you can download a trial of Passmark's benchmarking software and see how your machines compare with other users' benchmarked systems:
http://www.passmark.com/products/pt.htm

CPU
Passmark's extensive database includes benchmarks for over 1,300 CPUs. Some are specifically designed for virtualized server environments, while others are designed to maximize the battery life of notebooks. Understanding where your current CPU fits within the benchmarks will help you gauge what type of benefit you would see from a faster processor.

Assuming you are using a Windows operating system, you can identify the processor your PC uses by holding down the WINDOWS key and pressing the BREAK key, which is usually in the upper right corner of your keyboard. Doing so opens the System properties window, which includes a line identifying your processor.

Find that line, then click on the link below and see if you can find your processor on one of the lists.
http://www.cpubenchmark.net/

Using this resource, you should be able to compare the benchmark scores of your processor to those of prospective new PC replacements and approximate the relative processing speed gain.
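If you would rather script this than click through dialogs, Python can report the same processor string. A small sketch using only the standard library; note that the PROCESSOR_IDENTIFIER environment variable is Windows-specific:

import os
import platform

# platform.processor() works on all platforms; PROCESSOR_IDENTIFIER
# is set by Windows and gives a similar CPU description string.
print(platform.processor())
print(os.environ.get("PROCESSOR_IDENTIFIER", "n/a"))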

When purchasing new PCs, we prefer to buy the fastest processors we can without paying an unreasonable premium. We expect the cost to be relatively proportional to the processing speed of various CPU options; we might pay 15% more for a processor that is 20% faster, but we would not pay 66% more for a processor that’s only 10% faster.
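One way to keep that judgment honest is to compute the cost per unit of benchmark performance for each candidate CPU. A quick sketch, with made-up prices and benchmark scores standing in for real quotes:

# Hypothetical candidates: (name, price in $, benchmark score).
candidates = [
    ("CPU A", 300, 6000),
    ("CPU B", 345, 7200),  # 15% more money, 20% faster: a good trade
    ("CPU C", 498, 6600),  # 66% more money, 10% faster: a poor trade
]

for name, price, score in candidates:
    print(f"{name}: ${price / score * 1000:.0f} per 1,000 benchmark points")

Run against real Passmark scores and street prices, a table like this makes the unreasonable premiums obvious at a glance.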

Memory
Memory is relatively cheap. Accessing information from random access memory (RAM) rather than hard drive space or network storage is ideal, since accessing RAM is much quicker than pulling data from your hard drive or network. PCs running XP should have 3-4GB. XP cannot access all 4GB, but typically uses a little more than 3GB. Machines running Windows 7 should have at least 4GB, or even better, 8GB. In some cases, you can add 8GB of memory to an older PC for as little as $100.

For optimal performance, memory speeds should match the maximum supported by your PC.
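To see how much RAM a machine has and how heavily it is being used, a two-line check with the third-party psutil library (the same one used in the link-speed sketch earlier) works on any workstation:

import psutil  # third-party: pip install psutil

# virtual_memory() reports total installed RAM and current utilization.
vm = psutil.virtual_memory()
print(f"Installed: {vm.total / 2**30:.1f} GB, in use: {vm.percent}%")

If utilization sits near 100% during normal work, adding memory is likely the cheapest upgrade available.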

Hard Drive
Buy the fastest hard drives you can afford. You are unlikely to regret it. We have long enjoyed using Western Digital’s Raptor drives (10k RPM) on our workstations. More recently, we have selectively switched to OCZ’s Solid State Drive (SSD).

The link below will take you to Passmark’s list of benchmarked hard drives:
http://www.harddrivebenchmark.net/

Hopefully, you can find your workstation's hard drive in the “High-End Drive Chart.” If you cannot, you should strongly consider upgrading to an SSD because:

1. SSDs use 80% less power.
2. SSDs are silent.
3. SSDs are much faster than traditional hard drives. (An OCZ Vertex 2 SSD drive is about twice as fast as a 10k Western Digital Raptor drive.)
4. SSDs are more durable and reliable.
5. SSDs are affordable. An 80GB drive, which should be enough for most workstations, costs $150.

If you want to compare your current hard drive's benchmark to drives with which you could replace it, open Windows Explorer by holding down the WINDOWS key and pressing the “E” key, then right-click on your C: drive and select Properties. The Hardware tab should contain the model number of your hard drive, and using this information you should be able to find the benchmark of your current hard drive.
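The same information is scriptable. On the generation of Windows systems discussed in this article, the built-in wmic command-line tool can list drive models, and Python can wrap it; a small sketch:

import subprocess

# "wmic diskdrive get model,size" lists each physical drive's model
# string and capacity on XP/Windows 7-era systems.
output = subprocess.check_output(
    ["wmic", "diskdrive", "get", "model,size"], text=True
)
print(output)

The model string printed here is what you would look up in Passmark's hard drive benchmark charts.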

 

Operating System
In the investment business, the reliability of systems is paramount. Selecting the right operating system for your workstations may be one of the most important things you can do to improve systems infrastructure. The majority of RIAs have been stuck on Windows XP for quite some time. Torn between staying on what works with all their existing software and switching to the latest Microsoft OS, many have done nothing.

Vista was a nightmare for early adopters. We upgraded our best system when it came out, and it subsequently became dedicated to IE browsing and Office 2007 use. In all other respects, it was a pain.

In contrast to Windows XP and Vista, Windows 7 is a rock-solid product. We have been using Windows 7 Ultimate (64-bit) heavily for about a year. Configured with 4GB to 8GB of RAM and high-end hard drives (the SSDs and Raptors mentioned earlier), these systems have yet to seize up the way Windows XP and Vista might. They consistently and fluidly respond to user requests.

When Advent Software proclaims support of Windows 7 with Axys, we expect that many RIAs will finally upgrade to Windows 7 Professional. Before you decide to move to Windows 7, you should verify that all of your software is compatible with the specific version of Windows 7 you intend to implement.

Choosing the right network operating system (NOS) is also extremely important. A large number of firms are still using Windows Server 2003, but they should be planning to migrate to Windows Server 2008 R2 within the next year. The prevalence of DR sites makes switching an RIA's NOS a more complicated and expensive venture, but newer systems offer valuable features, such as increased security and integration with Windows 7, that provide meaningful incentives to upgrade.

Upgrading the “brains” of your IT infrastructure needs to be carefully planned, scheduled and executed to ensure a successful outcome. In-place upgrades of mission-critical servers are an absolute “no-no” without redundant systems to fall back on.

For systems that aren't virtualized, the best practice is to buy new equipment with the new NOS for both your primary site and your DR site. Virtualized systems offer more flexibility: the ability to store server images allows you to easily back up virtual machines and revert to a previous image if necessary.

Internet Bandwidth
Sometimes users mistake slow Internet access for slow processing speed on their PC. Identifying these problems correctly is an important part of assessing the speed of your systems.

You can use the link below to test your Internet speed, but in order to get a truly accurate reading you will need to be the only user connected to the Internet. In any event, this test should give you a general idea of your Internet connection’s upload and download speeds.

http://www.speakeasy.net/speedtest/

If you are experiencing a processing problem on your system, try running this test to see what your upload and download speeds are at the time.
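If you would rather script the measurement, the third-party speedtest-cli package (which tests against speedtest.net servers rather than the Speakeasy page linked above) gives comparable numbers:

import speedtest  # third-party: pip install speedtest-cli

st = speedtest.Speedtest()
st.get_best_server()                # pick the nearest test server
down = st.download() / 1_000_000    # results are reported in bits per second
up = st.upload() / 1_000_000
print(f"Download: {down:.1f} Mbps, upload: {up:.1f} Mbps")

As with the browser-based test, run it when no one else is using the connection if you want a clean reading.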

Domain Name Server (DNS)
When you type a URL into a web browser, the domain name you type needs to be resolved to an IP address before the page can be downloaded. By default, a DNS server provided by your Internet Service Provider (ISP) handles this. If you haven't already done so, you should consider establishing a local DNS server to accelerate domain name resolution.
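To see how much resolution latency matters in your environment, you can time lookups directly with the standard library. The domains below are just examples; substitute the sites your firm actually uses:

import socket
import time

# Each lookup goes through whatever DNS server the OS is configured to use.
for host in ("www.advent.com", "www.google.com"):
    start = time.perf_counter()
    addr = socket.gethostbyname(host)
    ms = (time.perf_counter() - start) * 1000
    print(f"{host} -> {addr} in {ms:.0f} ms")

Note that repeat lookups may be served from cache, so the first run is the telling one.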

Network Speed
Network speed is critical for clients that do processing-intensive work on their PCs. Firms using flat-file programs like Axys can see a dramatic improvement in processing by upgrading their LAN technology, but firms that utilize client-server databases locally or cloud-based apps may not.

Gigabit Ethernet (1G) is the standard. Ten Gigabit (10G) Ethernet is available, but with an estimated entry-level hardware cost of $1,500 per user (based on 24 users), the technology is cost-prohibitive for small to medium-sized RIAs and is typically found in enterprise server rooms, not small and medium-sized businesses. To be implemented in most office environments, special cabling (category 6a or category 7) is required. With the future in mind, those moving into new office space should consider paying the premium to install category 6a or category 7 cabling instead of category 5e or category 6, but should do their own cost-benefit analysis.

There are situations where decentralized use of 10G Ethernet could make sense (e.g. an Axys user with more than 10,000 accounts), but most firms will wait for the cost to come down to a more reasonable level. Since faster localized data processing is in demand at the enterprise level, prices may remain where they are for some time.

Many notebooks still do not have gigabit ports. If you are shopping for a notebook, make sure it has a Gigabit Ethernet port. If you still haven't standardized on Gigabit Ethernet at your office, you should be able to do so at a hardware cost of less than $75 per user.

New systems or new parts?
The best configuration for your new workstations and servers is an affordable one that you never have to upgrade during the useful life of the equipment. While some of the recommendations we have made in this article can be applied individually, it is usually more cost-efficient to buy new equipment that has the right configuration of OS, memory, CPU and hard drive.

Before you spend money upgrading older technology, find out how much your existing equipment is worth. If you aren't certain, you can look it up on eBay and see what the approximate replacement cost is. This is usually a good indication of how desirable your equipment is, as well as its relative processing power by today's standards, and it may validate further investment in the equipment or help solidify plans to upgrade to new equipment in the near future.

About the Author:
Kevin Shea is President of InfoSystems Integrated, Inc. (ISI); ISI provides a wide variety of outsourced IT solutions to investment advisors nationwide. For details, please visit isitc.com or contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@isitc.com.