
Gamers are having a rough go of it this year and understandably feeling betrayed by one of their long-time hardware darlings, Nvidia.  As you may have heard, Nvidia and other companies like Micron are prioritizing the AI requirements of big business over gamers and consumers who don’t wield as much sway over their bottom lines. This blog post isn’t going to make gamers-at-large any happier, but in my defense, this really isn’t anything new.  For as long as I can remember, I have considered buying a decent GPU for a new desktop PC a prudent and reasonable business expense.

A close-up view of an Alienware gaming desktop PC, showcasing its internal components including a cooling system, graphics card labeled 'GEFORCE RTX', and glowing purple LEDs.

Early on, the GPUs I purchased were intended to ensure support for multiple monitors, but as that technology became ubiquitous, I continued to buy GPUs for special circumstances where I knew users like me could benefit from enhanced GPU processing.  If you value your time and that of your fellow employees and clients, you need to champion investments that not only enable your team to meet ongoing technology challenges but also give them the tools to exceed expectations in the future.

There is perhaps no better example of this than the implementation of AI at your office, and I am not talking about using an AIPC with Copilot. I mean real-world implementation: running multiple local LLMs simultaneously, LLM orchestration and coding agents (e.g., Claude Code), building and using AI agents (e.g., OpenClaw), using, creating and hosting MCP servers, implementing REST API integration, et cetera. While AI cloud resources, such as frontier foundation models operating within AI factories, can be dramatically more powerful and appear less expensive than purchasing local hardware, the larger issue of data privacy is the elephant in the room. For me, this issue is twofold: I cannot put my intellectual property or any part of my clients’ private data at the mercy of what may turn out to be false security promises as AI use agreements with providers continue to evolve.

The overriding concern of data security limits what users can do with cloud resources.  Users may not feel comfortable attempting certain things on cloud resources due to concerns over security, and rightly so. The answer to these concerns is clear AI use policies and systems that dictate acceptable use of cloud and local AI resources. Those same policies and systems should simultaneously facilitate the ability to use AI in productive ways and enforce data security without handicapping technological progress. AI is not the be-all and end-all of productivity, but it can be a valuable tool when used responsibly.

A smiling man in a business suit stands in an office environment, holding his hands up in a welcoming gesture.
Apple Intelligence’s handiwork via Playground clearly illustrating why we need to check AI work.

Game-Changing Technology

It is easy to ignore minor year-to-year changes in processing power, but when true paradigm-shifting tech becomes available and affordable, we need to act on it. That is what makes me buy new hardware.  The Nvidia GeForce RTX 5090 (“5090”) and hardware of its ilk are game-changing. Their affordability may be debatable, but if you aren’t able to use them, or better options, you are operating at a technological and competitive disadvantage to your peers.  With these issues in mind, I strongly recommend systems on par with the Alienware Area-51 Gaming Desktop (model AAT2265) or better for complex local AI use cases.

Six Reasons to Consider Buying the Dell Alienware Area-51 Gaming Desktop for Local AI Use Cases

  1. CPU – The AMD Ryzen 9 9950X3D CPU has excellent single-thread processing speed, superior multithreaded processing speed, and a large cache. It offers power without compromise. One of my aims when purchasing a new desktop is to never have to upgrade the equipment during the life of the purchase, and that should be possible with this system. There is an option to get an Intel Core Ultra 9 285K, but I am not a huge fan of using the Arrow Lake architecture for AI. Additionally, being able to select a PCIe 5 NVMe for primary/OS storage means that you can remove the most obvious remaining local processing speed bottleneck.
  2. Market forces – The expectation of constrained future supply due to AI data center demands taking precedence over SMBs and consumers makes buying now more appealing than waiting until later, when scarcity and corresponding increased demand could impact buying power.
  3. 5090 availability – This local LLM beast facilitates private use of decent-size LLMs (30B-parameter models run very fast; 70B-parameter models are usable).  AI is a tool we use to get our jobs done as efficiently as possible. This is simply a cost of doing business. There are other options, but this is currently the fastest GPU you can buy short of enterprise-level hardware, where the cost increases significantly. Due to 5090 availability issues, buying the GPU bundled in a PC gaming build may be the easiest way to get one.
  4. Competitive pricing – Dell’s Alienware pricing is reasonable given the current premiums on 5090 GPUs.  You could get similarly configured gaming Desktop PCs for considerably less, but the Alienware price point offers superior build quality.  You could also spend a lot more money buying similarly configured “workstation” hardware, which might provide a better upgrade path, but you would likely be paying enterprise prices.
  5. Silence and build quality – When you set it up you should notice a deafening silence in comparison to similar systems. The case is extremely well-designed to keep the system cool and quiet. 
  6. Onsite support and hardware/driver continuity – You can be confident that Dell will show up to service the PC if needed.  It weighs a ton. Nobody from your office will want to carry it anywhere for service… ever.  Dell is also very good at making updated drivers available when they become necessary.

Alienware Area-51 Gaming Desktop with AMD Ryzen 9 9950X3D processor, GeForce RTX 5090 GPU, and 64GB memory.

The latest Area-51 build has been out since January of 2025 with Intel CPU options, but Dell added AMD options to the configuration in November of 2025. In my experience, Dell shipped quicker than the roughly one-month window they quoted: the system I ordered in early January 2026 arrived in less than two weeks. It comes with a single year of onsite support, but I added three years to it, and if you buy one, you probably should too.  For those curious about the benchmarks, I ran PassMark’s PerformanceTest on it and have included the results below.

PerformanceTest 11.1 PassMark Rating dashboard displaying a total score of 18876.3, indicating the 99th percentile. The breakdown includes CPU Mark (73008.7), 2D Graphics Mark (1498.6), 3D Graphics Mark (46723.2), Memory Mark (3753.9), and Disk Mark (94890.6).
Dell Alienware Area-51 Gaming Desktop (model AAT2265)
Passmark PerformanceTest results. Compare your PC here.

The Evolution of Local AI Use Cases

Back in 2020, during the crypto boom, I bought an Nvidia GeForce RTX 2060 Super GPU with 8GB VRAM, which cost $500 at the time.  It is not a barnburner by today’s standards, but it can run the OpenAI/gpt-oss-20b model well enough on LM Studio.  I also have a notebook with an NVIDIA GeForce RTX 4060 Laptop GPU.  That too has 8GB of VRAM and can run local LLMs way faster than the old desktop.

These systems enabled me to run, use, and test local LLMs to a certain point, but the results weren’t fantastic.  I am short on patience when it comes to waiting for computers to do things.  As I tried increasingly complex models and tasks locally, I reached some predictable limitations: context length, time to first token, and tokens per second.  Watching my computer render characters in slow motion while using larger LLMs made me wonder how much of a difference running those same models on a 5090 would make. The difference is night and day.  I have zero regrets about this purchase.

Bar graph showing decode speed in tokens per second for different systems: Old Desktop (RTX 2060 Super) at 9.2, Legion Notebook (RTX 4060 Laptop) at 27, and New Desktop (RTX 5090) at 285 tokens/sec.
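If you want to reproduce a rough decode-speed comparison on your own hardware, a short script is all it takes. The sketch below is a minimal example that assumes LM Studio’s local server is enabled on its default port (http://localhost:1234/v1) and that the response includes OpenAI-style usage counts; the model identifier is a placeholder for whatever you have loaded. Note that the elapsed time includes prompt processing, so the figure it prints slightly understates pure decode speed.

Python
# Rough tokens-per-second check against a local LM Studio server.
# Assumes the server is running on its default port and that the model
# identifier below matches one you have loaded; both are assumptions.
import time
import requests

BASE_URL = "http://localhost:1234/v1"   # LM Studio default; change if needed
MODEL = "openai/gpt-oss-20b"            # placeholder; use your loaded model

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Summarize why VRAM matters for local LLMs."}],
    "max_tokens": 512,
    "temperature": 0,
}

start = time.time()
resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=600)
resp.raise_for_status()
elapsed = time.time() - start

usage = resp.json().get("usage", {})
completion_tokens = usage.get("completion_tokens", 0)
if completion_tokens:
    print(f"{completion_tokens} tokens in {elapsed:.1f}s "
          f"= {completion_tokens / elapsed:.1f} tokens/sec")
else:
    print(f"Response took {elapsed:.1f}s; no usage counts returned.")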

One interesting takeaway from using the 5090 and running many tests across my various systems is that model results can change when the same model is run on different hardware. Ideally, they won’t, but your hardware affects how the model is executed by a local AI model runner, which can influence its output. For example, I ran the same version of LM Studio with identical models and settings to provide both my old and new desktop systems with the same prompt. Logically, you might expect identical results, but in fact you get different ones.

The result from my old desktop was terse and simple, while the result from my new desktop was comprehensive. Though I theoretically understand how AI works and could have anticipated some differences between the results due to the variability of calculations between hardware, I was admittedly surprised. Seeing the difference firsthand adds context to my understanding.

I wanted to attribute this positive difference to my faster hardware, but that would be incorrect. Mathematically speaking, the output is simply different because the hardware is different, and the fact that the response is comprehensive on my new desktop should be purely coincidental. On closer inspection, the model I used (OpenAI/gpt-oss-20b) likely ran the prompt under constraints when it was run on the 2060 Super with 8GB VRAM.  That would have caused GPU offloading (since the model size is 12GB), noise, and numerical degradation in calculations.  Those issues likely created a bias towards a less comprehensive answer.

Moving Forward

Given the opportunity cost, ongoing demands of AI data centers for PC memory, storage and GPUs, and a perceived scarcity issue that will persist for years, now seems like a better time to purchase a 5090 than later, when it may not be possible. Please note this computer makes sense for me and other power users who can benefit from having a 5090 for local AI use cases, but it wouldn’t be a good choice for users who don’t fit that profile. If you are interested in learning about using local AI resources, almost any Nvidia GeForce RTX 50-series GPU with at least 8GB of VRAM could be a good starting point.

In the PC/GPU world, VRAM ultimately determines how large a model you can use fully on the GPU and how many models you can use simultaneously. A larger model size typically corresponds with greater training depth, capability, and sophistication, which often equates to less iterative work and greater user productivity in the end. When you run out of VRAM, your system attempts to compensate by offloading portions of the model to RAM and CPU (aka GPU offloading), which slows down processing noticeably due to lower bandwidth and higher latency. If you attempt to use more total memory than is available, the model may fail to load or the system may slow dramatically.

Using a Mac with unified memory instead of a PC with a discrete GPU removes the hard VRAM boundary and reduces the performance cliff associated with GPU offloading, but you are still limited to whatever unified memory your Mac has. Assuming you can fit the model(s) in use and their associated KV (Key-Value) cache — which scales with context length — into the 5090’s 32GB of VRAM, your typical Mac isn’t going to outperform a 5090 in raw inference speed.
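As a back-of-envelope illustration of the “will it fit” question, the sketch below estimates memory for quantized weights plus a rough KV cache allowance. The formulas are simplifications and the layer/hidden dimensions are hypothetical; actual usage depends on the quantization format, the model architecture (grouped-query attention shrinks the KV cache considerably), and runtime overhead, so treat the output as a ballpark only.

Python
# Ballpark VRAM estimate: quantized weights plus a rough KV-cache allowance.
# Simplified approximations with hypothetical model dimensions, not exact
# figures for any particular model or runtime.

def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    # e.g., a 20B model at ~4.5 bits/weight is roughly 20e9 * 4.5 / 8 bytes
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(layers: int, hidden: int, context_tokens: int,
                bytes_per_value: int = 2) -> float:
    # Two tensors (K and V) per layer, per token, stored at fp16 (2 bytes).
    # Assumes full multi-head attention; grouped-query attention would shrink this.
    return 2 * layers * hidden * context_tokens * bytes_per_value / 1e9

# Hypothetical 20B-class dense model at ~4.5-bit quantization with 8k context.
w = weights_gb(20, 4.5)
kv = kv_cache_gb(layers=48, hidden=5120, context_tokens=8192)
print(f"weights ~ {w:.1f} GB, KV cache ~ {kv:.1f} GB, total ~ {w + kv:.1f} GB")
print("Fits in a 5090's 32GB of VRAM?", (w + kv) < 32)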

If you are serious about working with AI locally, you may want to step up to an Nvidia GeForce RTX 50-series GPU with at least 16GB of VRAM, which would provide a longer runway for experimentation.  Either option (8GB or 16GB) shouldn’t break the bank compared to a 5090.  Buying a cheaper GPU will allow you to work with local AI resources and become familiar with the tools, but if all goes well, you may wish you had purchased a 5090 or something capable of running even larger models concurrently, such as a high-end Mac Studio (M3 Ultra).


A close-up portrait of a smiling man with brown hair, wearing a green sweater and an orange lanyard around his neck.

About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of agile technology solutions to investors and the financial services community at large.

To learn more, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.

Over the years I have published many blog posts.  Almost none of them are as frequently visited as Getting Data In and Out of Advent APX and Axys. That is a good indicator that the topic remains relevant, but a long time has passed since it was published. If you have a recent version of APX today, there is another option that was not available back then: the RESTful API.  Though I knew the functionality existed within more recent versions of APX, I hadn’t had the chance to implement it with a client until recently.

Abstract illustration of interconnected blue and orange lines resembling data flow or network connections on a dark background.

Last year, an APX user approached me seeking to utilize Advent Software’s API to create data pipelines between APX and their in-house MS SQL Server Data Warehouse (DW).  The APX user identified this work as a prerequisite for fully migrating from Axys to APX.  They wanted to maintain, in APX, the high-level integration they had created between their DW and Axys.

I was somewhat concerned because I had researched the API in previous APX versions and knew of some of the issues early adopters had encountered. Given that information, I had some trepidation about obligating myself to a project that required implementing the API. At the time, I had no first-hand evidence that the API would work reliably for their planned application of it.

I tempered the prospective client’s expectations and proposed a flat-rate job focused on determining the feasibility of doing APX API integration in their environment. Our end goal was to develop a couple of data pipelines between the DW and APX, with the caveat that those deliverables, developed in Python and/or JavaScript, would be proof-of-concept work and not necessarily ready for production use.

Together we successfully completed the project in an APX v21.x environment self-hosted by the client. The majority of the work was done over the course of a couple of months and the deliverables were ready to be moved into production almost immediately afterwards, but there were some challenges along the way.  In most of the instances detailed below, we looped Advent in for assistance, and they did a commendable job helping us resolve the issues promptly.

  • Error 403 – Initially, we were getting an error when attempting to use the API.  We reached out to Advent, and they noted that the most recent Cumulative Hot Fixes (CHF) update wasn’t applied and recommended that we install it.  Applying the CHF update resolved the error, and the API worked as expected.

  • Postman functionality – There were a couple of days where Postman was completely unresponsive.  During that brief period, we had difficulty doing even the most basic API testing.  This issue seemed to resolve itself, but we may also have logged out of Postman and logged back in.

  • Error 500 writing data to APX – During development, reading APX data worked very well, but we found that attempting to write data to APX generated an Internal Server Error.  I assumed that this meant the data was not being written to APX.  After looping Advent in for another call, we discovered that although the error was being generated, the data was being successfully written to APX. Advent indicated that they would put a fix request in, but it might not happen because v21.x had been sunset. With some reservations, I updated my code to ignore error 500 when we wrote selective data to APX via the API.

If you do reach out to Advent for assistance, make sure you have Postman installed.  Advent has no desire to review your code.  They will want to test the functionality of the API with you using Postman.

Screenshot of a code editor displaying a Python script for updating data via an API. The interface includes files and folders related to API components, test logs, and a main script for API interaction.
Visual Studio screenshot of Python code sample illustrating API use.

What is Required to Get Started with the API?

Utilizing the API requires some detailed setup and work to get up to speed.  It probably won’t just work without some troubleshooting, and there is a bit of a learning curve.  The following list may not cover everything you need to do to get up and running with the API, but it is a good place to start.  I wish there had been a better resource for me when I started working with APX’s REST API.

Here are some tips that should help those interested in implementing the API:

  1. Make sure you are on the latest CHF for your current version of APX.  If the latest hot fixes have not been installed, you may have problems trying to utilize the API.
  2. Download the Advent Portfolio Exchange REST APIs Postman Collection from the Advent Community website.
  3. Create a Postman account if you don’t already have one, and locally install the Postman software.
  4. Load the collection into your Postman profile and review the documentation completely.
  5. Do a search on the API in the Advent community site and read through some of the threads.  The code samples there were simple, but helpful.
  6. Create the client/credential and verify its existence via SSMS.  The client is persistent, so once you have created it, you shouldn’t have to create it again unless you update APX.  Verify the existence of the client (e.g., cc.postman) in the APX dbo.clients table.  If you have trouble creating the client using your code, try using the PowerShell script to create the client.
  7. The user profile you are using needs to have appropriate rights.  Though we escalated my individual user rights in all the documented required areas, I eventually started using the admin user profile, which worked more reliably in our environment.  I believe Advent recommends using the admin user profile if possible.
  8. Test basic APX API functionality in Postman to make sure it works before attempting to create code via C#, Python, JavaScript, et cetera that leverages the API.

Once you have completed the required setup and can use the API to read and write data to APX, you are ready to build out your solution.  If you have trouble with your implementation, validate the specific functionality of the API with Postman.

Calling the API

Almost any use case of the API to write or read data to or from APX requires the following steps (a minimal Python sketch of the token portion follows the list):

  1. Get IdentityServer base address from APX authentication configuration.
  2. Get token endpoint from IdentityServer configuration.
  3. Get token with client_credentials grant type.
  4. Perform whatever API action you want (multiple calls to the API with the access_token are fine).
  5. End your API Session.  The API utilizes one of your APX seats while the session is active.
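The following is a minimal Python sketch of steps 2 through 4. The IdentityServer base address, client ID, and secret are placeholders you would pull from your own APX authentication configuration (step 1), and the discovery-document lookup assumes standard OpenID Connect behavior; your APX version and environment may differ.

Python
# Sketch of fetching an access token via the client_credentials grant.
# The base address, client id, and secret are placeholders; pull the real
# values from your APX authentication configuration (step 1 above).
import requests

IDENTITY_SERVER = "https://your-apx-host/identityserver"   # placeholder
CLIENT_ID = "cc.postman"                                    # example client name
CLIENT_SECRET = "your-client-secret"                        # placeholder

# Step 2: a standard OpenID Connect discovery document exposes the token endpoint.
discovery = requests.get(
    f"{IDENTITY_SERVER}/.well-known/openid-configuration", timeout=30
).json()
token_endpoint = discovery["token_endpoint"]

# Step 3: request a token using the client_credentials grant type.
token_resp = requests.post(
    token_endpoint,
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
    timeout=30,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Step 4: reuse the token on subsequent API calls.
headers = {"Authorization": f"Bearer {access_token}"}
# ...make your API calls here, then end the session (step 5) so the APX seat
# is released; the exact call for that depends on your APX version.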

Those familiar with API use and Python are likely aware that manipulating data can necessitate working with JSON as well as Python dictionaries.  As an example, in order to read data from APX and write data from the DW into APX that is different from what is already in APX, you may need to:

  1. Query APX for the relevant data via the API, which creates a JSON file.
  2. Query the DW for the relevant data.
  3. Load the JSON data received from APX into a Python dictionary.
  4. Parse and compare the APX data from the Python dictionary with the records from the DW.
  5. Add the records that meet the criteria to the JSON payload.
  6. Send a patch request via the APX API.

The following diagram details this workflow; a rough Python sketch of the comparison logic follows it.

Flowchart illustrating the data pipeline between Advent APX and a Data Warehouse using REST API, detailing various processing steps and data storage interactions.
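For those who prefer code to diagrams, here is a rough Python sketch of the same comparison-and-patch logic. The endpoint path, field names, DW table, and connection string are hypothetical placeholders; the point is the shape of the workflow, not the actual APX resource names.

Python
# Hypothetical shape of the compare-and-patch workflow. The endpoint path,
# field names, DW table, and connection string are placeholders only.
import requests
import pyodbc  # assumes an ODBC driver/DSN for your SQL Server data warehouse

APX_API = "https://your-apx-host/api"                   # placeholder base URL
ACCESS_TOKEN = "paste-or-fetch-a-token-here"            # see the earlier token sketch
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1. Query APX for the relevant data via the API (returned as JSON).
apx_rows = requests.get(f"{APX_API}/portfolios", headers=headers, timeout=60).json()

# 2. Query the DW for the corresponding records.
conn = pyodbc.connect("DSN=DataWarehouse")              # placeholder connection
dw_rows = conn.cursor().execute(
    "SELECT PortfolioCode, ReportTitle FROM dbo.Portfolios"  # hypothetical table
).fetchall()

# 3. Index the APX data by a shared key for comparison.
apx_by_code = {row["code"]: row for row in apx_rows}

# 4./5. Build a patch payload containing only the records that actually differ.
payload = []
for code, title in dw_rows:
    apx_rec = apx_by_code.get(code)
    if apx_rec and apx_rec.get("reportTitle") != title:
        payload.append({"code": code, "reportTitle": title})

# 6. Send the patch request. In our v21.x environment a 500 could come back
# even when the write succeeded, so verify the data rather than trusting
# the status code alone.
if payload:
    resp = requests.patch(f"{APX_API}/portfolios", headers=headers,
                          json=payload, timeout=60)
    print(resp.status_code)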

To wrap up the project, I created a PowerPoint presentation summarizing what we did and how it all works, so the internal development team can understand, troubleshoot, and replicate my work if they need to in the future. I am always available to support the solutions I create, but I prefer that my customers call me because they want to, not because they need to.

Why would you want to use the API instead of IMEX?

There are pros and cons to using the API. It presents an opportunity to use a single unified methodology to integrate data in your environment but may fall short of that depending on the specific needs of your firm.

The pros of using the API include the fact that it is a more modern approach to extracting and importing data at a granular level. The API can be used to pull data such as holdings, select time-period performance, etc. In some use cases, APX users are likely extracting and transforming data that they drop into a DW. Some of those transformations, such as recalculating performance figures, may not be necessary when utilizing the API. The API has the potential to be more secure, but given that the default password for the admin user in APX frequently doesn’t get changed, it probably isn’t any more secure than IMEX in most self-hosted APX environments.

The cons of using the API are that some data elements may still be in flux. Reading and writing certain data points may not be possible via the API, which could force you to use IMEX or other methods (e.g., Replang, public views, stored procedures, SSRS) in addition to the API. It may also be difficult for developers who aren’t Advent APX Subject Matter Experts (SMEs) to bridge this gap. Conversely, SMEs who are not developers familiar with the API may find it difficult to implement on their own.

Using well-established APX import and export methods like IMEX may still be the most efficient and reliable way to import and extract certain data elements from APX. However, going forward, the growing maturity of Advent’s REST API should force tech-savvy management, users, and integrators to ask “Should we be using the APX API to do this?” as they look to forge a modern data stack that integrates APX data, and meets AI-driven demands for more robust data access.


Kevin Shea Impact 2010

About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of agile technology solutions to investors and the financial services community at large.

To learn more, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.

Image created using AI query for Python code to create word art with Replang keywords.

The State of Reporting Development for Axys and APX Users

Advent users continue to benefit from many different report development options. There is a tantalizing and sometimes dizzying array of reporting options both within Advent’s architecture and provided by third-party solution providers, products and platforms.  In most cases, leveraging the most enticing options takes a commitment of time, money and patience.

At the top, management may envision staff using a single transformative technology that unifies all the data and makes it easier to push, pull or outright access data from portfolio accounting and ancillary systems. However, the truth, at least where Advent is concerned, is that the most effective way of making all those wonderful connections between applications and other data sources is a blended approach that uses the best-suited method for each data element.  A cohesive strategy and well-organized approach to data gathering and sharing should be implemented, but it is not critical or realistic that all data elements be delivered via one approach or method.

APX users have the ability to tap data from APX’s underlying SQL Server database using a growing combination of data integration options within the framework of APX.  These options include Stored Accounting Functions, Public Views, SSRS and REST API – as well as any other reporting tools and systems that can make use of that infrastructure.  APX users have a lot of capabilities baked into the platform that Axys users don’t have, but from what I typically see out in the wild, most firms using APX aren’t leveraging those features as well as they could.

Evolving Report Development Options for Axys and APX Users

Axys, APX and other portfolio accounting system users who have taken the time to use ETL tools like xPort to populate their own data warehouses will have similar data schemas focused on the data most critical to their respective businesses (e.g., clients, agreements, revenue, portfolios, transactions, performance, holdings).  Depending on firm size and budget constraints, these users may benefit from tapping that data with a visual analytics platform like Pyramid Analytics, Microsoft Fabric or Tableau.

I am excited about the latest emerging tech and am currently working with what I see as some of the best platforms and tech available.  Newer tech isn’t going away, but for someone with their feet firmly planted on the ground who needs to generate a relatively simple report today, it probably makes sense to hit the snooze bar on the newest options and simply do what needs to be done now.  Though it may appear outdated by comparison, Axys and APX users can also create reports using Report Writer Pro or via updates to Replang source code directly.

While advanced reporting tools can be extremely powerful and, in fact, instrumental for some types of reporting requirements, I am a fan of Occam and his razor. In many cases, there is just no need to complicate reporting any more than is useful to accomplish the end goal. Replang, which was established in Advent Software’s infancy, is still very much part of the reporting architecture of Axys and APX and will likely remain part of it forever.

Like many Advent users out there, I have used Notepad and/or Notepad++ to modify Advent Axys, APX and Report Writer Pro reports. I was modifying these files via the MS-DOS Edit command way back when they were part of The Professional Portfolio. Any of the tools are sufficient, but plain old Notepad and Edit don’t even display line numbers; Notepad++ is a step in the right direction, as it provides line numbers and the ability to use plug-ins, but neither option could be considered a modern tool for source code modifications.

Visual Studio Code

That’s where Visual Studio Code (VSCode) comes in. VSCode, which is perhaps one of the most popular and versatile utilities for source code updates, offers support for many of today’s most popular languages and a few of the older ones as well. When I first started using VSCode, I did a quick search for a Replang extension. Unfortunately, Replang wasn’t one of the supported programming languages, but VSCode does allow developers to build extensions, which are similar to plug-ins in Notepad++.

Prior to creating the extension, I also tried a number of the available supported languages in VSCode to see if anything came close. Some of the best candidates helped a little, but I was disappointed with the results. Out of the gate, VSCode provides line numbering and many other useful features. Frankly, the only reason to ever use Notepad again is because it is always there and it is simple to use.

In order to provide language support for Replang in VSCode, I needed to create an extension with knowledge of Replang’s keywords. Replang for Axys has roughly a hundred keywords, and the most current versions of APX add another hundred-plus keywords. Building a truly robust extension for Replang would mean spending more time than I put into it on the day I created it. Ideally, you could provide keyword-specific information with examples that would appear when you hover over a keyword. Eventually, I may build that into the extension, but the most critical feature in my mind is to provide contrast between keywords, comments and dialog to highlight the syntax and make it easier to read.


Example: Modifying Replang code with Visual Studio Code using the Replanguist extension.

If you routinely modify Advent Reports and are looking for an improved tool to do so, you may want to check out the Replanguist extension I built and published to facilitate Replang edits. You should be able to find it in the list of available VSCode extensions from Microsoft.

As always, if you have questions or suggestions, please feel free to reach out and connect with me.


Kevin Shea Impact 2010

About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of technology solutions to investment advisors nationwide.

For details, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.

I have been approached many times over the years regarding portfolio accounting system conversions and projects brought about by startups, breakaway advisors, mergers of firms that use Advent Software products, and impending migrations away from Advent products. Each of these project types requires similar know-how to extract, transform and load data from the source system into the destination system, but the requisite integration of portfolio accounting records from two different firms and datasets into a single dataset post-merger or acquisition is, by definition, at another level of complexity.

In initial conversations, it is not uncommon for firms to ask me if they can just do it themselves and the answer is … maybe.  If they have the necessary knowledge and skillset, they could, but the experience of doing it multiple times breeds competence, confidence, tools, and valuable insights, making future projects more turnkey.  At most of the firms I work with, the person asking me this question already has a job, and this isn’t it.

Unifying Portfolio Accounting Systems and Records

In portfolio accounting terms, merging companies together is about putting like with like, identifying both common and unique asset classes, security types and securities.   Reviewing this data carefully and making decisions about how you want things to appear in the merged environment creates the foundation for the work that needs to be done.  This process is much easier if one party can clearly be identified as the primary firm that the secondary party’s data is being merged into.

The most fundamental data to the project are the asset class and security type settings.  The differences in data here determine the overall complexity of the work required.  Assuming these data were identical, there would be little to do.  You could simply review the security master, find any ticker naming inconsistencies, and rename those securities.  You would still need to merge the prices, splits, groups, portfolios, performance, indexes, and composites, but the process would be relatively easy.

Unfortunately, it is usually not that simple.  You basically need to get the two systems to speak the same language through the process of reorganizing and renaming securities while maintaining the integrity of the portfolio accounting systems.  You may add, remove, or modify asset classes and security types in the merged environment, but you need to do it with the knowledge of what can be done and what you may be giving up in the new environment.  Most obviously, removing an asset class means that you won’t be able to update performance for that asset class any longer in the merged environment.  That may be okay if asset class performance isn’t in use.

Specific care must be taken not to invalidate performance records or transactions based on security type parameters.  Reclassing a security type as another asset class impacts historic performance.  Renaming a security and its security type can also invalidate performance history for the asset classes involved, unless performance history is regenerated with the new configuration afterwards.

Most of this article is written in reference to work that I have performed merging Axys datasets, but the work required for APX is very similar.  I gloss over certain areas of the process in an effort to keep this blog under 2,500 words, since the purpose of this blog is to shed light on the process involved and not necessarily to give readers an A-to-Z guide on how to do their own portfolio accounting merge project – though I suspect some may use it as such.

How to Merge Portfolio Accounting Records With Another Firm
I categorize the work required into the following phases:

  • Preparation: Backup, Profiles and Initial Assessment
  • Reorganization and Renaming
  • Merge
  • Validation

Preparation
There is no better way to get started on a project like this than making a backup of the systems involved prior to any work that you perform.  There are typically other backups being run on these systems, but I want to make sure I can restore systems to their original state prior to any work I do, and you should, too.  In an Axys environment, you should be able to simply zip the entire folder (e.g. f:\axys3) from Windows Explorer or run a PowerShell script command like this:

PowerShell
compress-archive -LiteralPath f:\axys3\ -DestinationPath f:\axys3\backup.zip -Force

As part of the preparation process, I typically create multiple partitioned workspaces.  In a recent job where there were already two Axys profiles for separate business lines, I created two additional profiles for my client during the merge project: one as the environment that I would transform to be like or compatible with the firm it would be merged into, and the other environment as the destination for the final merged portfolio accounting system.

As a result of this approach, the pre-merge profiles are all accessible after the merge is completed, and the client can easily test the results of the merge and verify that everything has been correctly merged before cutting over to the new environment.

I also export the INF files for each of the Axys profiles to be merged and do some automated comparisons between the files to help me determine how much work is involved in merging the environments.
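If you want to automate that comparison yourself, even a simple line-level diff of the exported files goes a long way toward sizing the job. A minimal sketch, assuming the exports are plain text with one record per line and using placeholder file names:

Python
# Quick comparison of two exported INF files, one record per line.
# File names are placeholders; adjust for your own exports.
from pathlib import Path

def records(path: str) -> set:
    # Normalize case and whitespace so cosmetic differences don't register as diffs.
    return {line.strip().lower()
            for line in Path(path).read_text().splitlines()
            if line.strip()}

firm_a = records("firm_a_type.inf")   # placeholder export from profile A
firm_b = records("firm_b_type.inf")   # placeholder export from profile B

print("Only in firm A:", sorted(firm_a - firm_b))
print("Only in firm B:", sorted(firm_b - firm_a))
print("Records in common:", len(firm_a & firm_b))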

Reorganization and Renaming
Reorganization is the crux of any merger project.  It is the reorganization of asset classes and security types itself that leads to much of the renaming, but some of the renaming takes place because it is required to eliminate security duplicates.  The phases are intended to be separate and distinct where you would finish one and not repeat it, but in practice the process within and between phases tends to be more iterative.

Asset Classes
There are no shortcuts here.  Ideally, my preference would be to leave the asset classes alone, but I realize that isn’t possible in all situations.  You may be able to avoid changing the asset class definitions of the firm you are merging data into, and you probably should, but you will likely need to change some of the asset class definitions in either the source or destination to accommodate the merge.

Security Types
As far as I know, you cannot import security types through IMEX, and that is probably a good thing.  I suspect I could figure out a way to force this, but it would be a bad idea.  Along with asset classes, the security type table is the heart of how your portfolio accounting system is organized. The definition of security types determines how each security type is treated in your portfolio accounting system.  Edits must be made manually, and some edits are not allowed.  If you are creating a security type that is a lot like another, you can speed the process by inserting a row and adding a copy of a previously defined security type.

Industry Groups and Industry Sectors
In order to merge security masters, you need to have already merged the industry group and industry sector files, because Advent won’t let you import a security with an invalid industry group or industry sector.  Merging isn’t quite the word for it, because you are unlikely to merge these tables like you would splits.  A more apt description of this merge process would be standardizing and/or reclassifying one set of securities to match the industry groups and industry sectors definitions defined by another profile.

Any changes made to the industry sectors in the target environment could impact the integrity of performance history by industry sector (if that data was generated in either of the source environments), and necessitate regenerating that performance history with the new industry sector definitions.

Renaming Securities
You will find that the reclassification of security types (e.g. efus fmagx versus mfus fmagx) forces you to rename the impacted securities.  Differences in the way firms may name symbols (e.g. swvxx versus swvx.x, ibm versus IBM, 34393t401 versus 34393T401) are also likely to create a slew of necessary renames.

Once again, there is no reason to create something new to do this, because the capability already exists.  Renaming securities can be accomplished by running the process manually. That works and is fine for a handful of securities, but when you want to rename dozens or hundreds of securities – never mind thousands, it just won’t do.  The real job here is to utilize the existing renaming capabilities through a script, leveraging Advent’s chgsym command to perform the renames in bulk (a hedged sketch of generating such a script follows).
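To give a sense of the approach without publishing my own tooling, here is a hedged sketch that turns a CSV mapping of old and new symbols into a rename script. The chgsym line template is an assumption made for illustration; verify the correct syntax against Advent’s documentation before running anything, and only run it against a backed-up profile.

Python
# Generate a bulk-rename script from a CSV mapping of old -> new symbols.
# NOTE: the line template below is an assumed syntax for illustration only;
# confirm correct chgsym usage in Advent's documentation before using it.
import csv

LINE_TEMPLATE = "chgsym {old} {new}"   # assumption, not verified syntax

with open("rename_map.csv", newline="") as src, open("rename.scr", "w") as out:
    for row in csv.DictReader(src):    # expects columns: old_symbol,new_symbol
        out.write(LINE_TEMPLATE.format(old=row["old_symbol"],
                                       new=row["new_symbol"]) + "\n")

print("Wrote rename.scr; review it before running it against a backed-up profile.")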

Merge

This is where it all comes together.  Once more, you can utilize basic script functions built into Axys and APX to efficiently merge certain files with minimal effort. Much of what is required can be automated. However, in some cases, it may be simpler to just do the work manually.


Prices
The exported price file formats for Axys and APX are simple enough that you could easily write something to merge price files, but you shouldn’t, because that functionality already exists in Advent’s mergepri script command.  Instead, you write code that generates a script to merge the necessary price files.  The mergepri command allows you to specify a destination and multiple sources.  The first source is the primary: prices in the first source file will not be overwritten by prices found in the secondary source files.

Splits
You could do this manually, but I have included some sample code to do it so you don’t have to.  The program works with exported CSV copies of each firm’s split.inf file and creates a CSV file that must be imported into the merged Advent environment.

VB
' The MergeSplits subroutine and the functions it calls (IsRecordInFile
' & TruncateZeros) are used to merge two exported Advent Axys split.inf
' files into a single split file ready for import into a merged Axys
' environment.

' written in VBA by Kevin Shea (aka AdventGuru) & updated 02/24/2024

' Disclaimer: This routine works fine for the specific instance it was
' created for, but could need additional modifications for different
' circumstances.

Sub MergeSplits(SourceFile1 As String, SourceFile2 As String, DestinationFile As String)

Dim sf1 As Integer
Dim sf2 As Integer
Dim df As Integer
Dim Record As String

df = FreeFile
Open DestinationFile For Output As #df

sf1 = FreeFile
Open SourceFile1 For Input As #sf1

Do While Not EOF(sf1)

  Line Input #sf1, Record
  Print #df, TruncateZeros(Record, ",")

Loop

Close #sf1

sf2 = FreeFile
Open SourceFile2 For Input As #sf2

Do While Not EOF(sf2)

  Line Input #sf2, Record
  Record = TruncateZeros(Record, ",")
  If Not IsRecordInFile(SourceFile1, Record) Then Print #df, Record

Loop

Close #sf2
Close #df

Debug.Print "done."

End Sub

Function IsRecordInFile(SourceFile As String, RecordPassed As String) As Boolean

Dim ff As Integer
Dim RecordToCompare As String
Dim tempIsRecordInFile As Boolean
tempIsRecordInFile = False

ff = FreeFile

Open SourceFile For Input As #ff

Do While Not EOF(ff)

  Line Input #ff, RecordToCompare
  If TruncateZeros(RecordToCompare, ",") = RecordPassed Then
    tempIsRecordInFile = True
    Exit Do
  End If
  
Loop

Close #ff

IsRecordInFile = tempIsRecordInFile

End Function

Function TruncateZeros(SplitRecord As String, FieldSeparator As String) As String

Dim tempTruncateZeros As String
Dim ZeroEnd As Integer
Dim Cursor As Integer
Dim DecimalFound As Boolean
Dim SplitFields() As String

tempTruncateZeros = SplitRecord
DecimalFound = False
ZeroEnd = 0

SplitFields() = Split(SplitRecord, FieldSeparator)
'to standardize split records for comparison this routine gets rid of extra zeros that can exist in split records

If InStr(SplitFields(2), ".") > 0 Then DecimalFound = True
If DecimalFound Then

  For Cursor = Len(SplitFields(2)) To 1 Step -1
    If Mid$(SplitFields(2), Cursor, 1) <> "0" Then
      ZeroEnd = Cursor
      Exit For
    End If
  Next Cursor
  tempTruncateZeros = SplitFields(0) + FieldSeparator + SplitFields(1) + FieldSeparator + Left(SplitFields(2), ZeroEnd)
End If

If Right$(tempTruncateZeros, 1) = "." Then tempTruncateZeros = Left(tempTruncateZeros, Len(tempTruncateZeros) - 1)
TruncateZeros = tempTruncateZeros

End Function

The code above does a little more than just combine two files and remove the duplicates; it also truncates any trailing zeros in the split quantity to reduce the likelihood of duplicate split records. The same end goal could be achieved using one of the most basic SQL queries – if the data for the split files was already loaded into tables as illustrated below.

SQL
SELECT SplitDate, SplitSymbol, SplitFactor FROM SplitSource1
UNION SELECT SplitDate, SplitSymbol, SplitFactor FROM SplitSource2
ORDER BY SplitDate;

This approach certainly looks more direct, but you would need to define the database tables properly. You would also need to extract the data to bring it into the database and then store the results of the query in one of the accepted file formats (TSV or CSV) to import it back into the system.

Dataport

Any redundant symbol (??sym.inf) and account (??act.inf) translation tables need to be merged (e.g. vsact.inf and vssym.inf); a similar approach to the merging of the splits can be used here, but these files are already in fixed text format, so they don’t need to be exported.

After merging the translation tables you may find that you need to update selected interface account number labels. For example, if you needed to create Schwab $vsact labels for newly merged portfolios using the values from existing $csact portfolio labels, and retain the $csact label, you could accomplish that in a few minutes by using the following REPLANG code to produce a script to perform the label updates.

REPLANG
outfile f:\axys3\auto\addvslab.scr n
load cli
.addlabel -files $:file.cli -labelrec \$\vsact,$csact\n
$csact ?
next cli
fclose

Please note, this is something that works in Axys that would not work in APX since the addlabel script command is not a valid APX script command. In APX, you would post the new label through the trade blotter.

Security Masters

You need to update the security master by merging the unique records from the secondary firm into the primary firm’s security master, and then import the new security master into the merged profile.  If the security master imports without errors, you are ready to move on to the simpler aspects of the merge. I do this with code I have written specifically for merging security records and then do an import with a full replace, but it could potentially be done manually or through IMEX’s optional import of unique records (a rough sketch of the idea follows).
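For those curious about the mechanics, the general idea can be sketched in a few lines of Python. This assumes both security masters have been exported to CSV with the symbol in the first column; the file names and column layout are placeholders, and you would still review the output before importing it.

Python
# Merge two exported security master CSVs, keeping every primary-firm record
# and only those secondary-firm records whose symbol isn't already present.
# File names and the first-column-symbol assumption are placeholders.
import csv

def read_rows(path: str) -> list:
    with open(path, newline="") as f:
        return [row for row in csv.reader(f) if row]

primary = read_rows("primary_sec.csv")
secondary = read_rows("secondary_sec.csv")

known = {row[0].strip().lower() for row in primary}
merged = primary + [row for row in secondary
                    if row[0].strip().lower() not in known]

with open("merged_sec.csv", "w", newline="") as f:
    csv.writer(f).writerows(merged)

print(f"{len(merged)} records written; review before importing via IMEX.")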

 

Portfolios, Groups, Performance, Indexes and Composites

It is worth mentioning that Advent has a script command mergecli that merges portfolios, but that’s not applicable here.  That command only merges portfolios from the same database, but it can be a useful tool to aggregate portfolios for other purposes.

If you have performed the previous steps, the now-standardized data for portfolios, groups, performance, indexes, and composites can be copied to the container with the other merged records, but you may need to rename some of these objects and any objects with dependencies if any of the names are redundant with the environment you are merging them into.  For example, if a portfolio (code) was already in use, you would need to rename the portfolio itself, its performance records, and the occurrence of that portfolio in any groups.

 

Validation

When the previous phases are complete, you are finally ready to verify and eventually validate the results.  Initially, this can be done quickly by spot-checking various reports and portfolios.  If you find issues, you may need to revisit the previous phases, make fixes, and rerun processes that you have already created to merge the files again.  When you reach a higher level of confidence about the merged set of data, you should reconcile consolidated appraisals from each of the systems.  If you have made significant changes to asset classes and/or their underlying securities, you may need to regenerate performance history and validate that, too.
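Part of that reconciliation can be automated. The sketch below compares two consolidated appraisal exports and flags positions whose market values differ; the column names, file paths, and tolerance are hypothetical and should be adjusted to your actual export layout.

Python
# Compare two consolidated appraisal exports and flag position differences.
# Column names, file paths, and the tolerance are hypothetical placeholders.
import csv

def positions(path: str) -> dict:
    with open(path, newline="") as f:
        return {row["symbol"]: float(row["market_value"].replace(",", ""))
                for row in csv.DictReader(f)}

source = positions("appraisal_source.csv")
merged = positions("appraisal_merged.csv")

for symbol in sorted(set(source) | set(merged)):
    a, b = source.get(symbol, 0.0), merged.get(symbol, 0.0)
    if abs(a - b) > 0.01:   # tolerance for rounding differences
        print(f"{symbol}: source {a:,.2f} vs merged {b:,.2f}")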

 

How long does it take?

Surprisingly, these projects can go very quickly if key personnel make themselves available to do their part.  Those folks need to be able to tell you how they want things organized and be ready to participate in the validation process.  So long as key personnel are motivated, the process can go as fast as they want it to go.

In the past few months, I have done a couple of these projects.  The jobs themselves were very similar; one was done over three months – not by any means a rush job.  I did the other one in less than two weeks, but I probably could have completed the work in less than a week if we needed to.

In my opinion, the relative size of the databases doesn’t significantly add to or subtract from the amount of time or work required.  Either way, you are performing the same processes.  In other words, it isn’t about how much data there is; it is about the processes you run to make one set of data ready to be merged into the other.

 

How much does it cost?

Prices for this service are all over the map.  When firms merge, they tend to be somewhat price-insensitive regarding the cost of certain things they deem critical to the merger.  One firm I talked with a few years back told me they paid 100k to merge their portfolio records after a recent merger, and they didn’t bat an eye at it. Another firm I am familiar with was given a quote for 40k by a competitor.  To me, these quotes sound like someone throwing spaghetti at the wall to see if it sticks.  In the latter case, I charged a justifiable premium for my expertise and was still able to do the work easily for less than half of what they were originally quoted.

In closing, there are many other data types that may need to be merged depending on how your firm uses Advent, such as extended data, FXs, FFXs, and factors, but the purpose of this blog is to explain what is involved in a typical portfolio accounting system merger. While some may be tempted to do this on their own, I don’t think anyone who has asked me to assist them with the process of merging portfolio accounting records has ever regretted it.


Kevin Shea Impact 2010

About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of technology solutions to investment advisors nationwide.

For details, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.