Category: Best Practices


Gamers are having a rough go of it this year and understandably feeling betrayed by one of their long-time hardware darlings, Nvidia.  As you may have heard, Nvidia and other companies, like Micron, are prioritizing the AI requirements of big business over gamers and consumers who don’t wield as much sway over their bottom lines. This blog post isn’t going to make gamers-at-large any happier, but in my defense, this really isn’t anything new.  For as long as I can remember, I have considered buying a decent GPU for a new desktop PC a prudent and reasonable business expense.

A close-up view of an Alienware gaming desktop PC, showcasing its internal components including a cooling system, graphics card labeled 'GEFORCE RTX', and glowing purple LEDs.

Early on, the GPUs I purchased were intended to ensure support for multiple monitors, but as the technology required to support multiple monitors became ubiquitous, I continued to buy GPUs for special circumstances where I knew users like me could benefit from enhanced GPU processing.  If you value your time and that of your fellow employees and clients, you need to champion investments that empower and facilitate your team’s ability to not only meet ongoing technology challenges but also provide them with the tools that will enable them to exceed expectations in the future.

There is perhaps no better example of this than the implementation of AI at your office, and I am not talking about using an AIPC with Copilot. I mean real-world implementation: running multiple local LLMs simultaneously, LLM orchestration and coding agents (e.g., Claude Code), building and using AI agents (e.g., OpenClaw), using, creating and hosting MCP servers, implementing REST API integration, et cetera. While AI cloud resources, such as frontier foundation models operating within AI factories, can be dramatically more powerful and appear less expensive than purchasing local hardware, the larger issue of data privacy is the elephant in the room. For me, this issue is twofold: I cannot put my intellectual property or any part of my clients’ private data at the mercy of what may turn out to be false security promises as AI use agreements with providers continue to evolve.
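The REST API integration mentioned above can be as simple as posting a chat request to a local model runner. The sketch below assumes LM Studio’s OpenAI-compatible server running at its default local address (http://localhost:1234/v1) and uses the gpt-oss-20b model name as an example; adjust both for your own setup.

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local_llm(prompt: str,
                  base_url: str = "http://localhost:1234/v1",
                  model: str = "openai/gpt-oss-20b") -> str:
    """POST the prompt to a local OpenAI-compatible endpoint and return the reply text."""
    data = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Requires the local server to be running, e.g.:
# print(ask_local_llm("Summarize our AI use policy in one sentence."))
```

Because the endpoint is local, no prompt text or client data ever leaves your machine, which is the whole point of the exercise.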

The overriding concern of data security puts users in a situation where they are limited in what they can do while using cloud resources.  Users may not feel comfortable attempting certain things on cloud resources due to concerns over security, and rightly so. The answer to these concerns is clear AI use policies and systems that dictate acceptable use of cloud and local AI resources. Those same policies and systems should simultaneously facilitate the ability to use AI in productive ways and enforce data security without handicapping technological progress. AI is not the be-all and end-all of productivity, but it can be a valuable tool when used responsibly.

A smiling man in a business suit stands in an office environment, holding his hands up in a welcoming gesture.
Apple Intelligence’s handiwork via Playground clearly illustrating why we need to check AI work.

Game-Changing Technology

It is easy to ignore minor changes in processing power year to year, but when true paradigm-shifting tech becomes available and affordable, we need to act on it. This is the thing that makes me buy new hardware.  The Nvidia GeForce RTX 5090 (“5090”) and hardware of its ilk are game-changing. Their affordability may be debatable, but if you aren’t able to use them, or superior tech options, you are operating at a technological and competitive disadvantage to your peers.  With these issues in mind, I strongly recommend systems on par with the Alienware Area-51 Gaming Desktop (model AAT2265) or better for complex local AI use cases.

Six Reasons to Consider Buying the Dell Alienware Area-51 Gaming Desktop for Local AI Use Cases

  1. CPU – The AMD Ryzen 9 9950X3D CPU has excellent single-thread processing speed, superior multithreaded processing speed, and a large cache. It offers power without compromise. One of my aims when purchasing a new desktop is to never have to upgrade the equipment during the life of the purchase, and that should be possible with this system. There is an option to get an Intel Core Ultra 9 285K, but I am not a huge fan of using the Arrow Lake architecture for AI. Additionally, being able to select a PCIe 5.0 NVMe drive for primary/OS storage means that you can remove the most obvious remaining local processing speed bottleneck.
  2. Market forces – The expectation of constrained future supply due to AI data center demands taking precedence over SMBs and consumers makes buying now more appealing than waiting until later, when scarcity and corresponding increased demand could impact buying power.
  3. 5090 availability – This local LLM beast facilitates private use of decent-size LLMs (30B-parameter models run very fast; 70B-parameter models are usable).  AI is a tool we use to get our jobs done as efficiently as possible. This is simply a cost of doing business. There are other options, but this is currently the fastest GPU you can buy short of enterprise-level hardware, where the cost increases significantly. Due to 5090 availability issues, buying the GPU bundled in a PC gaming build may be the easiest way to get one.
  4. Competitive pricing – Dell’s Alienware pricing is reasonable given the current premiums on 5090 GPUs.  You could get similarly configured gaming Desktop PCs for considerably less, but the Alienware price point offers superior build quality.  You could also spend a lot more money buying similarly configured “workstation” hardware, which might provide a better upgrade path, but you would likely be paying enterprise prices.
  5. Silence and build quality – When you set it up you should notice a deafening silence in comparison to similar systems. The case is extremely well-designed to keep the system cool and quiet. 
  6. Onsite support and hardware/driver continuity – You can be confident that Dell will show up to service the PC if needed.  It weighs a ton. Nobody from your office will want to carry it anywhere for service… ever.  Dell is also very good at making updated drivers available when they become necessary.

Alienware Area-51 Gaming Desktop with AMD Ryzen 9 9950X3D processor, GeForce RTX 5090 GPU, and 64GB memory.

The latest Area-51 build has been out since January of 2025 with Intel CPU options, but Dell added AMD options to the configuration in November of 2025. Based on my experience, even though Dell quoted shipping at roughly a month, they shipped more quickly than that. The system I ordered in early January 2026 arrived in less than two weeks. It comes with a single year of onsite support, but I added three years to it, and if you buy one, you probably should too.  For those curious about the benchmarks, I ran PassMark’s PerformanceTest on it and have included the results below.

PerformanceTest 11.1 PassMark Rating dashboard displaying a total score of 18876.3, indicating the 99th percentile. The breakdown includes CPU Mark (73008.7), 2D Graphics Mark (1498.6), 3D Graphics Mark (46723.2), Memory Mark (3753.9), and Disk Mark (94890.6).
Dell Alienware Area-51 Gaming Desktop (model AAT2265)
PassMark PerformanceTest results. Compare your PC here.

The Evolution of Local AI Use Cases

Back in 2020, during the crypto boom, I bought a Nvidia GeForce RTX 2060 Super GPU with 8GB VRAM, which cost $500 at the time.  It is not a barnburner by today’s standards, but it can run the OpenAI/gpt-oss-20b model well enough on LM Studio.  I also have a notebook with an NVIDIA GeForce RTX 4060 Laptop GPU.  That too has 8GB of VRAM and can run local LLMs way faster than the old desktop.

These systems enabled me to run, use, and test local LLMs to a certain point, but the results weren’t fantastic.  I am short on patience when it comes to waiting for computers to do things.  As I tried increasingly complex models and tasks locally, I reached some predictable limitations: context size, time to first token, and tokens per second.  Watching my computer render characters in slow motion while using larger LLMs made me wonder how much of a difference running those same models on a 5090 would make. The difference is night and day.  I have zero regrets about this purchase.

Bar graph showing decode speed in tokens per second for different systems: Old Desktop (RTX 2060 Super) at 9.2, Legion Notebook (RTX 4060 Laptop) at 27, and New Desktop (RTX 5090) at 285 tokens/sec.
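Putting the chart’s numbers side by side makes the gap concrete (decode speeds taken from my tests above):

```python
# Decode speeds (tokens/sec) from the chart above
speeds = {
    "RTX 2060 Super (old desktop)": 9.2,
    "RTX 4060 Laptop (notebook)": 27.0,
    "RTX 5090 (new desktop)": 285.0,
}

def speedup(baseline_tps: float, target_tps: float) -> float:
    """How many times faster the target decodes than the baseline."""
    return target_tps / baseline_tps

baseline = speeds["RTX 2060 Super (old desktop)"]
for name, tps in speeds.items():
    print(f"{name}: {tps:g} tok/s ({speedup(baseline, tps):.1f}x the 2060 Super)")
```

The 5090 decodes roughly 31 times faster than the 2060 Super and more than 10 times faster than the 4060 Laptop GPU.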

One interesting takeaway from the experience of using the 5090 and running many tests across the various systems I have is that a model’s results can change when it is run on different hardware. Ideally, they wouldn’t, but your hardware affects how the model is executed by a local AI model runner, which can influence its output. For example, I ran the same version of LM Studio with identical models and settings to provide both my old and new desktop systems with the same prompt. Logically, you might expect to get the same results, but in fact you get different results.

The result from my old desktop was terse and simple, while the result from my new desktop was comprehensive. Though I theoretically understand how AI works and could have anticipated some differences between the results due to the variability of calculations between hardware, I was admittedly surprised. Seeing the difference firsthand adds context to my understanding.

I wanted to attribute this positive difference to my faster hardware, but that would be incorrect. Mathematically speaking, the output is simply different because the hardware is different, and the fact that the response is comprehensive on my new desktop should be purely coincidental. On closer inspection, the model I used (OpenAI/gpt-oss-20b) likely ran under constraints on the 2060 Super with 8GB of VRAM.  Since the model is roughly 12GB, that would have forced GPU offloading, introducing noise and numerical degradation into the calculations.  Those issues likely created a bias toward a less comprehensive answer.
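A tiny Python illustration of why identical models can diverge on different hardware: floating-point addition is not associative, so when different GPUs, or a GPU/CPU split caused by offloading, accumulate the same numbers in a different order, the low-order bits differ, and over billions of operations those differences can tip a token choice.

```python
import math

# Floating-point addition is not associative: regrouping the same
# operands changes the rounded result.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)   # 0.6000000000000001
print(a + (b + c))   # 0.6

# Accumulated over many values, naive left-to-right summation drifts
# from the correctly rounded sum that math.fsum computes.
values = [0.1] * 10
print(sum(values))        # 0.9999999999999999
print(math.fsum(values))  # 1.0
```

Different kernels, batch splits, and precision modes are effectively different accumulation orders, which is why bit-identical outputs across hardware are the exception, not the rule.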

Moving Forward

Given the opportunity cost, the ongoing demands of AI data centers for PC memory, storage and GPUs, and a perceived scarcity issue that will persist for years, now seems like a better time to purchase a 5090 than later, when it may not be possible. Please note this computer makes sense for me and other power users who can benefit from having a 5090 for local AI use cases, but it wouldn’t be a good choice for users who don’t fit that profile. If you are interested in learning about using local AI resources, almost any Nvidia GeForce RTX 50 series GPU with at least 8GB of VRAM could be a good starting point.

In the PC/GPU world, VRAM ultimately determines how large a model you can use fully on the GPU and how many models you can use simultaneously. A larger model size typically corresponds with greater training depth, capability, and sophistication, which often equates to less iterative work and greater user productivity in the end. When you run out of VRAM, your system attempts to compensate by offloading portions of the model to RAM and CPU (aka GPU offloading), which slows down processing noticeably due to lower bandwidth and higher latency. If you attempt to use more total memory than is available, the model may fail to load or the system may slow dramatically.
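A rough way to reason about the fit is simple arithmetic: weight memory is parameter count times bytes per weight, and the KV cache adds two tensors (K and V) per layer per token. The sketch below uses illustrative numbers of my own choosing, not the actual gpt-oss-20b architecture, just to show the shape of the estimate.

```python
def weight_gb(n_params: float, bytes_per_weight: float) -> float:
    """Approximate weight memory in GB for a quantized model."""
    return n_params * bytes_per_weight / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_len: int, bytes_per_elem: float = 2.0) -> float:
    """Approximate KV-cache size: K and V tensors per layer, per token."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem / 1e9

# Illustrative: a 20B-parameter model at ~0.56 bytes/weight is ~11.2 GB,
# consistent with the ~12GB model file mentioned above.
weights = weight_gb(20e9, 0.56)
# Hypothetical architecture: 48 layers, 8 KV heads, head_dim 128, 16k context.
cache = kv_cache_gb(48, 8, 128, 16_384)
print(f"~{weights:.1f} GB weights + ~{cache:.1f} GB KV cache")
print("Fits in a 5090 (32GB)?", weights + cache <= 32)  # True
print("Fits in 8GB of VRAM?", weights + cache <= 8)     # False
```

The same arithmetic explains why longer context windows eat VRAM even when the model itself fits: the KV cache scales linearly with context length.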

Using a Mac with unified memory instead of a PC with a discrete GPU removes the hard VRAM boundary and reduces the performance cliff associated with GPU offloading, but you are still limited to whatever unified memory your Mac has. Assuming you can fit the model(s) in use and their associated KV (Key-Value) cache — which scales with context length — into the 5090’s 32GB of VRAM, your typical Mac isn’t going to outperform a 5090 in raw inference speed.

If you are serious about working with AI locally, you may want to step up to an Nvidia GeForce RTX 50 series GPU with at least 16GB of VRAM, which would provide a longer runway for experimentation.  Either option (8GB or 16GB) shouldn’t break the bank compared to a 5090.  Buying a cheaper GPU will allow you to work with local AI resources and become familiar with the tools, but if all goes well, you may wish you had purchased a 5090 or something capable of running even larger models concurrently, such as a high-end Mac Studio (M3 Ultra).


A close-up portrait of a smiling man with brown hair, wearing a green sweater and an orange lanyard around his neck.

About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of agile technology solutions to investors and the financial services community at large.

To learn more, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.

I have said it before, and I’ll say it again: if Advent does well, it follows that someone like me should do well, too.  I profit directly from Advent customers when they hire me, and indirectly when companies that provide services to Advent customers hire me.  That said, there is a certain amount of Advent’s dysfunction that helps my business.  In short, I offer what Advent cannot and/or does not want to provide to their customers, and many of my customers hire me specifically because I do not work for Advent.

 

Steeplechase race in 1912, Celtic Park, N.Y., through water

On the other hand, there is a limit to what is reasonable.  If Advent proverbially lights their hair on fire and jumps off a tall building, that is not good for me or my business. With thirty-five years of experience dealing with Advent, acting as an advocate to clients, I have seen some absurd things and have a high tolerance for some of it, but even I will occasionally find myself flummoxed.  It pains me to say so, since I really want Advent to succeed… at least enough that they aren’t losing business to competitors without good reason.  That is bad for their business and mine.

Advent Options

If you are to believe the natural progression of things as they are presented to the masses, it follows that some Axys customers who are frustrated by its limitations can be better served by Black Diamond or APX.  From what I have seen personally, Advent hasn’t had great success moving Axys customers to Black Diamond.  I have seen a few good Axys clients go to Orion and Tamarac, but I have yet to see one of my clients go to Black Diamond successfully. This is not meant as a criticism of Black Diamond – it’s just an observation that my typical clients haven’t found the complete solution they are looking for in Black Diamond.

For those firms where Axys is no longer the answer and Black Diamond cannot meet their expectations, APX may offer a viable upgrade path.  Aside from its cost, APX has always been a relatively easy upgrade choice for Axys users to make because it is an Advent option that supports the legacy of Axys, which includes knowledge of portfolio management and performance fundamentals, transactions, processes, reporting, and scripts.  That means that when those customers move to APX, much of the reporting, infrastructure and established workflows can remain the same.

In short, APX offers everything that Axys does, plus the benefits of an Enterprise/SQL server platform.  The incremental learning required for operations staff to go from Axys to APX is very manageable, and things pretty much work like they did in Axys.

Among APX offerings, I know of at least five possible permutations:

  1. APX Self-Hosted on Premise
  2. APX Self-Hosted in the Cloud by a Third Party
  3. APX Dedicated with AOS
  4. APX Dedicated without AOS
  5. APX Multi-Tenant (hosted by Advent)

I have listed these APX environment options in order of my personal preference, based on specific experience with all of the options and the ease with which one can effectively manage, integrate, automate, and enhance systems. From my perspective, the first two options clearly give you the greatest degree of control and autonomy over your own systems. Choosing one of the other three options puts you in a place where Advent is enforcing various controls over your system – good and bad. Firms that have always had complete control over their systems and want to continue to do so should bristle at the very idea of this.

Advent’s Dedicated Hosting Service for APX Users

Someone used to hosting their own APX system on premise might think that hosting via Advent’s dedicated environment would be nothing but a boon, but reality quickly shatters that dream for savvy, hands-on APX users. Advent’s value add here is clearly AOS, but if that is the case, why would they ever sell someone dedicated hosting without the AOS service?  When they do, the Dedicated Hosting service provided is arguably no better than what a third-party vendor can deliver.  Oh, wait, that’s not true.  It is potentially worse, because the system will be locked down in such a way that you won’t be able to do the things you could do if your APX hosted environment were provided by a third-party resource that needed to make sure you were satisfied with their service.

This is because, without an AOS resource, some things that a firm would want to do to automate and enhance their systems simply cannot be done because they fall under the responsibility of the AOS silo.  You can call Advent support all you want, but they cannot resolve your problem, because only AOS can do these things.

Want to schedule a process to run at a certain time?  You can’t do that.  Do you want to install a third-party product?  You can’t do that.  Do you want to log into the server directly?  You can’t do that either. All of these things are only possible with the cooperation of an assigned AOS resource.  Even then, you still cannot do them yourself; you must ask your AOS resource to do them for you.

A fitting analogy for the work required in their locked-down environment versus what one might otherwise do in a self-hosted environment is comparing the 100-yard dash to a steeplechase.  As a result, any automation you create is more likely to resemble a Rube Goldberg machine than a typical streamlined process, due to Advent’s forced assistance and rules regarding what can and cannot be done.

 

Unfortunately, as you invite a higher degree of involvement from Advent vis-à-vis Advent’s dedicated (a.k.a. “managed”) hosting model, you lose control of the systems you are entrusted to manage and improve unless you had the foresight to have Advent agree upfront to the access rights required, or are willing to spend countless hours dickering with Advent about those rights, which may ultimately end in frustration anyway.  This comment is not based on my direct experience with these systems alone, but also on what I have heard amongst my peers.

Advent basically has the keys to the kingdom in this scenario, and the users are at the mercy of Advent. It’s almost plausible that they cannot allow you access to certain areas of the system that you would usually have, but at the end of the day, when you cannot easily perform work that you were able to do in the past when you self-hosted APX, it feels much more like a ruse intended to ensure that Advent gets not just what clients have agreed to pay them, but also any other work you might want to perform in the environment related to automation.

However, the problem is that they cannot necessarily perform the same work as a contractor with specific experience that Advent may lack.  From my perspective, Advent’s focus in their Dedicated Hosting seems to be maintaining the status quo, not constantly striving to build a better mousetrap to service your business processes.  The latter is the directive I am looking for from my clients.

Anyone who can’t envision how Advent could consume their money, time, and resources while providing this service may lack experience working with Advent, or the imagination necessary to take their own client experiences with Advent and extrapolate the possibilities once Advent has a greater degree of control over their systems.  The frustration this arrangement creates can be amplified if the firms facing this entanglement are committed to long-term, bajillion-dollar contracts.  These large, multi-year contracts could be part of the reason Advent feels comfortable repeatedly saying the one word my clients never want to hear: no.

Over the years, Tamarac, Orion, Addepar and Ridgeline have all made inroads to capture market share from what was once predominantly Advent’s business to keep or lose, and they will continue to do so until Advent makes improving its rapport with clients a priority.  You may have already guessed, but Advent’s worst enemy and biggest threat to the future of their business may be Advent’s hubris, and winning the WatersTechnology Buy-Side technology award for the Best Portfolio Accounting Provider two years in a row is unlikely to change that.  Even so, if you have deep pockets and are truly ready to hand the reins over to Advent, you may be happy with the results.

 


Kevin Shea Impact 2010

About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of technology solutions to investment advisors nationwide.

For details, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.

“What we have here is a failure to communicate.” –The Captain, Cool Hand Luke

Rackspace played an important role in part of the tech stack I implemented for many of my IT customers for nearly ten years. We started implementing Rackspace’s Hosted Exchange solution back before Microsoft Office 365 hit its stride, and their service offering was truly first-rate at the time.  Unfortunately, that time is gone, punctuated by Friday’s dismal service breakdown and Rackspace’s complete failure to communicate with their customers in real-time as things unfolded.

If I am managing the Exchange server for a single company, never mind thousands of companies – which is likely what Rackspace is doing – and that server is not working, I have one responsibility that is just as important as getting the server back online. I must communicate with managers to give them information about what is going on to create reasonable expectations for when and how the issue will be resolved and facilitate their ability to mitigate risk.  In a normal situation, doing so makes perfect sense.

There is no good reason that wouldn’t be done.  The fact that it wasn’t done throughout the day on 12/2 can only mean a few things: absolute chaos, inadequate staffing, a lack of information, or some combination of the three.  Almost anyone managing IT and Exchange knows this.  I realize that Rackspace was likely determining the scope and severity of the issue, but in not communicating anything meaningful for the entire business day, Rackspace failed its customers.  They put the IT workers who support their solution in the unenviable position of only being able to tell their managers and customers that Rackspace wasn’t communicating with them.

To those who called Rackspace multiple times, listened to incessant jazzy hold music, and kept a vigilant eye on their status page most of the day, it no doubt became clear that this issue wasn’t something they could count on Rackspace to resolve in the short-term.  We will eventually know more about what happened, but the real story so far is Rackspace’s poor communication about what was going on in the moment.

For those still monitoring the status at status.apps.rackspace.com on 12/3, there was an update at 1:57am.  Any lingering hope of Rackspace resolving the issue sometime soon died with this update: “security incident … do not have an ETA for resolution … may take several days.”  So too did any other plans that IT workers utilizing Rackspace as part of their tech stack to provide Hosted Exchange had for their weekends.

The full message as provided from Rackspace at 1:57am on 12/3 follows.

What happened?

On Friday, Dec 2, 2022, we became aware of an issue impacting our Hosted Exchange environment. We proactively powered down and disconnected the Hosted Exchange environment while we triaged to understand the extent and the severity of the impact. After further analysis, we have determined that this is a security incident.

The known impact is isolated to a portion of our Hosted Exchange platform. We are taking necessary actions to evaluate and protect our environments.

Has my account been affected?

We are working through the environment with our security teams and partners to determine the full scope and impact. We will keep customers updated as more information becomes available.

Has there been an impact to the Rackspace Email platform?

We have not experienced an impact to our Rackspace Email product line and platform. At this time, Hosted Exchange accounts are impacted, and not Rackspace Email.

When will I be able to access my Hosted Exchange account?

We currently do not have an ETA for resolution. We are actively working with our support teams and anticipate our work may take several days. We will be providing information on this page as it becomes available, with updates at least every 12 hours.

As a result, we are encouraging admins to configure and set up their users accounts on Microsoft 365 so they can begin sending and receiving mail immediately. If you need assistance, please contact our support team. We are available to help you set it up.

Is there an alternative solution?

At no cost to you, we will be providing access to Microsoft Exchange Plan 1 licenses on Microsoft 365 until further notice.

To activate, please use the below link for instructions on how to set up your account and users.

https://docs.rackspace.com/support/how-to/how-to-set-up-O365-via-your-cloud-office-control-panel

Please note that your account administrator will need to manually set up each individual user on your account. Once your users have been set up and all appropriate DNS records are configured, their email access will be reactivated, and they will start receiving emails and can send emails. Please note, that DNS changes take approximately 30 minutes to provision and in rare cases can take up to 24 hours.

IMPORTANT: If you utilize a hybrid Hosted environment (Rackspace Email and Exchange on a single domain) then you will be required to move all mailboxes (Rackspace Email and Exchange) to M365 for mail flow to work properly. To preserve your data, it is critical that you do not delete your original mailboxes when making this change.

I don’t know how to setup Microsoft 365. How can I get help?

Please leverage our support channels by either joining us in chat or by calling +1 (855) 348-9064. (INTL: +44 (0) 203 917 4743).

Can I access my Hosted Exchange inbox from before the service was brought offline?

If you access your Hosted Exchange inbox via a local client application on your laptop or phone (like Outlook or Mail), your local device is likely configured to store your messages. However, while the Hosted Exchange environment is down, you will be unable to connect to the Hosted Exchange service to sync new mail or send mail using Hosted Exchange.

If you regularly access your inbox via Outlook Web Access (OWA), you will not have access to Hosted Exchange via OWA while the platform is offline.

As a result, we are encouraging admins to configure and set up their user’s accounts on Microsoft 365 so they can begin sending and receiving mail immediately. If you need assistance, please contact our support team. We are available to help you set it up.

Will I receive mail in Hosted Exchange sent to me during the time the service has been shut down?

Possibly. We intend to update further as we get more information.

As a result, we are encouraging admins to configure and set up their user’s accounts on Microsoft 365 so they can begin sending and receiving mail immediately. If you need assistance, please contact our support team. We are available to help you set it up.

IT workers likely spent much of Saturday and Sunday migrating email to another provider, such as Microsoft, and some may still not be done today.  Depending on the readiness of contingency plans in place at various firms and/or the extent of local OST caching, some firms may now be depending on Rackspace to recover their email records.  It is a little late to look at the SLA, but it is probably worth another glance now.

Though nearly all investment professionals utilize email journaling due to compliance requirements, I am not sure that everyone doing so has a complete backup of their current active email accounts.  They may have the ability to query their email records for compliance analysis using the journal, but recovering all of the records that were stored at Rackspace as they were on 12/1 may be more complicated and drawn out.

Based on what customers currently know, it is possible that some users may not be able to recover some emails.  Remember that users are waiting for Rackspace to resolve a security issue.  Security is as much about protecting data from being lost as it is about it being compromised.  So there may be an issue with data loss rather than potential hacking that could have exposed passwords or data.  Rackspace hasn’t divulged the exact nature of the security incident.

One obvious takeaway from this issue is that you should be caching all Exchange data for your account in your local environment if you can.  To check your settings in Outlook, navigate to the screen shown below by doing the following:

  1. Click on File, Account Settings, Account Settings (again).
  2. Select the email account you want to verify and click on the Change button.
  3. The default for downloading email for the past is typically “1 year.” If yours is set to “1 year”, you probably want to drag the control to the right until it says “All” as shown below; however, I would defer to your IT people on this, because if they aren’t downloading all of your data, they could have a good reason.
  4. Once you have updated the setting, click the Next button and then the Done button to commit the changes.

Migration, Initial Recovery and Complete Recovery

For the companies faced with this issue, restoring complete functionality of email and supporting applications will take time. If they haven’t already, they need to initiate migration by redirecting their DNS records so that email flows to another service provider and perform an initial recovery to get email running on computer/phones. They may also need to do a more complete recovery that includes all of the records that were stored in the users’ email and any specific email profile configuration settings that might have been lost.

Assuming the migration process goes smoothly, my estimation of the time required is roughly 2+ hours to update the DNS records necessary to point your email to a new service provider, wait for that info to propagate, and make sure all users are set up in the new service provider’s environment and everything is working properly.  Let’s be pessimistic and say this takes four hours.  Beyond that, you would still need to do the following items for each individual user:

  1. Have a backup of the PST on hand and ready to import, or create one from existing cached copies.
  2. Create new mail profiles to replace individual accounts within the current email profile. (My recommendation would be new profiles because I would want to maintain the old ones with their email records.)
  3. Depending on how things are configured, that might be a process that you would have to do once per user, or multiple times if they have notebooks and desktops with separate email profiles.
  4. Additionally, any mail accounts on Apple iOS and Android devices would need to be deleted and recreated.

Expecting to spend less than an hour per user on average would be overly optimistic; two hours is probably a reasonable guesstimate, and some of the processing could likely be done for multiple users simultaneously. But things like this almost never go smoothly.  These times could potentially be reduced with third-party tools and automation, but let’s assume you don’t have access to those. A relatively small ten-person office that was using Rackspace could require 24 hours of IT work done over the weekend to bring them back online with most of their email on a new service.
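The arithmetic above is simple enough to sketch out. The numbers below are my guesstimates from this scenario (four hours of shared DNS/provider setup, roughly two hours per user), not fixed costs; plug in your own:

```python
def recovery_hours(users, shared_setup=4, per_user=2):
    """Rough IT-time estimate for an email-service migration:
    a fixed block for DNS changes and provider setup, plus a
    per-user block for mail profiles, PST imports and mobile devices."""
    return shared_setup + users * per_user

# A ten-person office under these assumptions:
print(recovery_hours(10))  # 24 hours of IT work
```

Doubling the per-user figure, or adding users, scales the estimate accordingly; the fixed setup block stays the same regardless of headcount.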

What happened with Rackspace should also be a wake-up call to firms utilizing any cloud services and depending on them for real-time business continuity without necessarily having a full understanding of what will happen in certain contingency scenarios.  Any service, whether it is cloud-based or on-premises, is only as good as the people managing it and your SLA.

Thankfully, the number of customers I service with a dependency on Rackspace has shrunk to almost none; most have moved to Office 365.  Given this latest issue, it appears to me that Rackspace has been treading water with their Hosted Exchange service for the past year or so.  During that time, using Multi-Factor Authentication (MFA) with email has become a critical business requirement, and Rackspace hasn’t answered that call on their Hosted Exchange platform.  Their recommended solution for Hosted Exchange customers has been to buy Office 365 via Rackspace to get that MFA functionality from Microsoft.

To Rackspace’s credit, they did eventually start to give more useful information and constructive advice regarding the situation at 8:19 pm EST on Friday, but they went a whole day without providing anything of note. I don’t think I have ever seen a critical IT issue handled quite this way. If you are dealing with a Rackspace employee today, or with someone at your office who has been impacted by this event, try to be patient and kind. Doing anything else is pointless and counterproductive. These people are in an unpleasant and untenable situation today.


Kevin Shea Impact 2010

About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of technology solutions to investment advisors nationwide.

For details, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.

Given my long-standing history as a seasoned and impartial technology consultant catering to the wide-ranging needs of Advent users, it should come as no surprise that companies that have moved away from Advent call me for help with Advent-specific needs after their agreements with Advent have lapsed.  In those cases, I suspect my independence from Advent is one of the most appealing features of my service, but many Advent users with ongoing agreements also retain me to provide a level of service that Advent seems unwilling or unable to provide.

One of the things I get regular calls about is getting Axys running again.  These calls occur either when firms upgrade their servers or when firms that have moved on to competing portfolio management systems dust off their old Axys files with hopes of tapping into Axys again.  My thirty-plus years of consulting for financial services firms using Advent software make it easy for me to resolve issues like these.

Many of those calls start with the caller telling me, “We reinstalled Axys on the server and it isn’t working.”  Inevitably, this tells me more about the underlying issue than the caller ever could.  You certainly can reinstall Axys, but you probably don’t need to, because Axys on the server is just a collection of files that you access from another PC.  Aside from a proper installation having been done at some point in the past, the most important thing to keep Axys working properly is making sure that users have all necessary rights to the shared folders.

This article explains the requirements so that you or your firm can resurrect Axys.  As usual, I’ll be providing a level of information that may be more than you need to solve any immediate problem, in the hope that it will be useful to you in the future.

Axys Versions

There are two fundamental versions of Axys: the multi-user version and the single-user version.  To add a little confusion, the multi-user version is frequently referred to as the network version, but both fundamental versions are regularly installed on networks, so “network version” is a bit of a misnomer.  Beyond these two fundamental versions, there is also the release version of the software, which at this point is typically 3.8, 3.8.5, 3.8.6 or 3.8.7.  In addition, there are Monocurrency, Multicurrency and Variable Rate versions, to name a few.  Suffice it to say, there are a lot of different versions.

Axys Licensing Model

The concurrent licensing model that Axys implements applies to both single-user and multi-user versions.  In both instances, the number of real Axys users typically exceeds the total licensed users, but the multi-user version allows more than one user to run Axys simultaneously and adds certain multi-user features, such as user-specific settings and separate blotters.

Understanding How Axys is Installed

Initially, the single-user version is simpler to install because the primary program (Axys) and supporting programs (Dataport, Data Exchange, Report Writer, et al.) hypothetically only need to be installed once.  That would be true if there were literally only one user using the software on one PC.  In actuality, the single-user version of Axys and its supporting programs get installed multiple times in a network environment: once for every user, albeit to the same destination each time (e.g., F:\Axys3).

During the Axys install process, certain required files are copied to the user’s PC and/or profile, and Axys creates registry keys in HKEY_CURRENT_USER\SOFTWARE\Advent.  The most critical Axys registry keys are stored in HKEY_CURRENT_USER\SOFTWARE\Advent\Axys\3.  Although there are several important Axys files, firmwide.inf is perhaps the most crucial.  In a single-user installation, this text file, found in the root folder of Axys (e.g., F:\Axys3), details certain settings in use and where all of the other Axys files can be found.

The multi-user version must also be installed multiple times, but the initial install differs: you install it once to the network/primary destination folder (e.g., F:\Axys3) and then again for each of the remaining users (e.g., F:\Axys3\users\kevin, where a firmwide.inf file will be created).  As with the single-user version, supporting programs such as Dataport, Data Exchange and Report Writer need to be installed if the user needs them, or if you are trying to make sure all of the users have access to all of the supporting apps. The multi-user install uses the same registry keys as the single-user version, but the multi-user (a.k.a. network) version adds an additional critical file: netwide.inf.

Netwide.inf versus Firmwide.inf

These two files are closely related.  The netwide.inf file should only be found in the root Axys folder of a network install, but firmwide.inf files exist in both single-user and multi-user environments.  The multi-user version is designed to use the settings in netwide.inf as the system default and have any settings in firmwide.inf supersede them.  As a rule, you should never see a firmwide.inf in the root Axys folder of a network install, and you should almost never see a netwide.inf file in the root of a single-user installation.
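The precedence rule boils down to: firmwide.inf wins. Here is a minimal sketch of that merge logic; the setting names below are invented for illustration and are not actual Axys keys:

```python
def effective_settings(netwide, firmwide):
    """Multi-user Axys behavior in miniature: start from the
    system-wide defaults in netwide.inf, then let any setting
    present in the user's firmwide.inf supersede them."""
    merged = dict(netwide)   # netwide.inf = system default
    merged.update(firmwide)  # firmwide.inf supersedes
    return merged

# Hypothetical keys, purely illustrative:
netwide = {"data_path": r"F:\Axys3\data", "blotter": "shared"}
firmwide = {"blotter": r"F:\Axys3\users\kevin\blotter"}
print(effective_settings(netwide, firmwide)["blotter"])
```

Any setting absent from the user’s firmwide.inf falls through to the netwide.inf default, which is why a stray firmwide.inf in the root folder of a network install causes so much confusion.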


A Recurring Axys Installation Bug

With regard to installing Axys, there is a rather annoying issue that has persisted for several years: the Axys installer will not recognize certain network locations and/or mapped drives.  The fix requires the following registry settings:

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"EnableLUA"=dword:00000001
"EnableLinkedConnections"=dword:00000001

Once those settings have been applied, the Axys install program will be able to find the mapped drives.  This is an issue Advent should have addressed a long, long time ago.
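If you would rather script the change than edit the registry by hand, the same values can be saved as a .reg file (with the standard header line shown below) and imported via regedit. Note that a reboot is typically required before the EnableLinkedConnections change takes effect:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"EnableLUA"=dword:00000001
"EnableLinkedConnections"=dword:00000001
```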

Understanding Those Axys Shortcuts and Corresponding Registry Entries

The working folder of the Axys shortcut needs to point to the folder containing the appropriate firmwide.inf file.  That means an Axys shortcut for a single-user version should have a “Start in” folder like F:\Axys3, whereas the multi-user version would have a “Start in” folder like F:\Axys3\users\kevin.  Assuming the same install folder was used, the target for both shortcuts would be the same: F:\Axys3\Axys32.exe.  Likewise, the registry entries associated with Axys should match these settings.  When I am looking at a system, I can usually determine whether Axys has been installed properly by checking for consistency between the shortcuts and the following registry entries: ExePath, NetPath and UserPath.

In summary, your Axys install depends on a few things: the files themselves, access to the location where they are stored, and proper mapping to that location in the registry, firmwide.inf and, if applicable, netwide.inf.  Hopefully, you can get things back online on your own, but if you need assistance with your Advent installation, reach out to me and I’ll do my best to assist you.

