

Gamers are having a rough go of it this year, and they understandably feel betrayed by one of their long-time hardware darlings, Nvidia. As you may have heard, Nvidia and other companies like Micron are prioritizing big businesses' AI requirements over gamers and consumers who don't wield as much sway over their bottom line. This blog post isn't going to make gamers-at-large any happier, but in my defense, this really isn't anything new. For as long as I can remember, I have considered buying a decent GPU for a new desktop PC a prudent and reasonable business expense.

A close-up view of an Alienware gaming desktop PC, showcasing its internal components including a cooling system, graphics card labeled 'GEFORCE RTX', and glowing purple LEDs.

Early on, the GPUs I purchased were intended to ensure support for multiple monitors, but as multi-monitor support became ubiquitous, I continued to buy GPUs for special circumstances where I knew users like me could benefit from enhanced GPU processing. If you value your time and that of your fellow employees and clients, you need to champion investments that empower your team to meet ongoing technology challenges and give them the tools to exceed expectations in the future.

There is perhaps no better example of this than the implementation of AI at your office, and I am not talking about using an AI PC with Copilot. I mean real-world implementation: running multiple local LLMs simultaneously, LLM orchestration and coding agents (e.g., Claude Code), building and using AI agents (e.g., OpenClaw), using, creating, and hosting MCP servers, implementing REST API integration, et cetera. While AI cloud resources, such as frontier foundation models operating within AI factories, can be dramatically more powerful and appear less expensive than purchasing local hardware, the larger issue of data privacy is the elephant in the room. For me, the issue is twofold: I cannot put my intellectual property, or any part of my clients' private data, at the mercy of what may turn out to be false security promises as AI use agreements with providers continue to evolve.
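
To make REST API integration with a local model concrete, here is a minimal sketch in Python. It assumes LM Studio's built-in local server on its default port with an OpenAI-compatible chat endpoint; the model name is a placeholder for whatever you have loaded:

    import requests  # pip install requests

    # Local OpenAI-compatible endpoint (LM Studio's server default);
    # prompts and responses never leave your machine.
    URL = "http://localhost:1234/v1/chat/completions"

    payload = {
        "model": "openai/gpt-oss-20b",  # placeholder: whichever model you loaded
        "messages": [
            {"role": "user", "content": "Summarize this meeting note for the client file."},
        ],
        "temperature": 0.2,
    }

    response = requests.post(URL, json=payload, timeout=300)
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])

Because the endpoint mirrors the OpenAI API schema, the same code can target either local or cloud resources, which makes enforcing the AI use policies discussed below largely a matter of which URLs you allow.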

The overriding concern of data security limits what users can do with cloud resources. Users may not feel comfortable attempting certain things on cloud resources due to security concerns, and rightly so. The answer is clear AI use policies and systems that dictate acceptable use of cloud and local AI resources. Those same policies and systems should simultaneously facilitate productive AI use and enforce data security without handicapping technological progress. AI is not the be-all and end-all of productivity, but it can be a valuable tool when used responsibly.

A smiling man in a business suit stands in an office environment, holding his hands up in a welcoming gesture.
Apple Intelligence’s handiwork via Playground clearly illustrating why we need to check AI work.

Game-Changing Technology

It is easy to ignore minor year-to-year changes in processing power, but when true paradigm-shifting tech becomes available and affordable, we need to act on it. That is what makes me buy new hardware. The Nvidia GeForce RTX 5090 ("5090") and hardware of its ilk are game-changing. Their affordability may be debatable, but if you aren't able to use them, or superior options, you are operating at a technological and competitive disadvantage to your peers. With these issues in mind, I strongly recommend systems on par with the Alienware Area-51 Gaming Desktop (model AAT2265) or better for complex local AI use cases.

Six Reasons to Consider Buying the Dell Alienware Area-51 Gaming Desktop for Local AI Use Cases

  1. CPU – The AMD Ryzen 9 9950X3D CPU has excellent single-threaded processing speed, superior multithreaded processing speed, and a large cache. It offers power without compromise. One of my aims when purchasing a new desktop is to never have to upgrade the equipment during the life of the purchase, and that should be possible with this system. There is an option to get an Intel Core Ultra 9 285K, but I am not a huge fan of using the Arrow Lake architecture for AI. Additionally, being able to select a PCIe 5.0 NVMe drive for primary/OS storage means that you can remove the most obvious remaining local processing speed bottleneck.
  2. Market forces – The expectation of constrained future supply due to AI data center demands taking precedence over SMBs and consumers makes buying now more appealing than waiting until later, when scarcity and corresponding increased demand could impact buying power.
  3. 5090 availability – This local LLM beast facilitates private use of decent-size LLMs (30B-parameter models run very fast; 70B-parameter models are usable). AI is a tool we use to get our jobs done as efficiently as possible. This is simply a cost of doing business. There are other options, but this is currently the fastest GPU you can buy short of enterprise-level hardware, where the cost increases significantly. Due to 5090 availability issues, buying the GPU bundled in a PC gaming build may be the easiest way to get one.
  4. Competitive pricing – Dell's Alienware pricing is reasonable given the current premiums on 5090 GPUs. You could get similarly configured gaming desktop PCs for considerably less, but the Alienware price point offers superior build quality. You could also spend a lot more money buying similarly configured "workstation" hardware, which might provide a better upgrade path, but you would likely be paying enterprise prices.
  5. Silence and build quality – When you set it up, you should notice a deafening silence in comparison to similar systems. The case is extremely well designed to keep the system cool and quiet.
  6. Onsite support and hardware/driver continuity – You can be confident that Dell will show up to service the PC if needed.  It weighs a ton. Nobody from your office will want to carry it anywhere for service… ever.  Dell is also very good at making updated drivers available when they become necessary.

Alienware Area-51 Gaming Desktop with AMD Ryzen 9 9950X3D processor, GeForce RTX 5090 GPU, and 64GB memory.

The latest Area-51 build has been out since January of 2025 with Intel CPU options, but Dell added AMD options to the configuration in November of 2025. Even though Dell quoted shipping at roughly a month, they shipped quicker: the system I ordered in early January 2026 arrived in less than two weeks. It comes with a single year of onsite support, but I added three years to it, and if you buy one, you probably should too. For those curious about the benchmarks, I ran PassMark's PerformanceTest on it and have included the results below.

PerformanceTest 11.1 PassMark Rating dashboard displaying a total score of 18876.3, indicating the 99th percentile. The breakdown includes CPU Mark (73008.7), 2D Graphics Mark (1498.6), 3D Graphics Mark (46723.2), Memory Mark (3753.9), and Disk Mark (94890.6).
Dell Alienware Area-51 Gaming Desktop (model AAT2265)
PassMark PerformanceTest results. Compare your PC here.

The Evolution of Local AI Use Cases

Back in 2020, during the crypto boom, I bought an Nvidia GeForce RTX 2060 Super GPU with 8GB VRAM, which cost $500 at the time. It is not a barnburner by today's standards, but it can run the OpenAI/gpt-oss-20b model well enough on LM Studio. I also have a notebook with an NVIDIA GeForce RTX 4060 Laptop GPU. That too has 8GB of VRAM and can run local LLMs way faster than the old desktop.

These systems enabled me to run, use, and test local LLMs to a certain point, but the results weren't fantastic. I am short on patience when it comes to waiting for computers to do things. As I tried increasingly complex models and tasks locally, I reached some predictable limitations: context length, time to first token, and tokens per second. Watching my computer render characters in slow motion while using larger LLMs made me wonder how much of a difference running those same models on a 5090 would make. The difference is night and day. I have zero regrets about this purchase.

Bar graph showing decode speed in tokens per second for different systems: Old Desktop (RTX 2060 Super) at 9.2, Legion Notebook (RTX 4060 Laptop) at 27, and New Desktop (RTX 5090) at 285 tokens/sec.
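
For those who want to reproduce the numbers above, decode speed is straightforward to approximate. Here is a rough sketch, again assuming LM Studio's local OpenAI-compatible server; the figure is conservative because it lumps prompt processing (time to first token) in with decoding:

    import time
    import requests

    URL = "http://localhost:1234/v1/chat/completions"  # assumed local server

    payload = {
        "model": "openai/gpt-oss-20b",  # placeholder model name
        "messages": [{"role": "user", "content": "Write 300 words about GPUs."}],
        "max_tokens": 512,
        "temperature": 0,
    }

    start = time.perf_counter()
    reply = requests.post(URL, json=payload, timeout=600).json()
    elapsed = time.perf_counter() - start

    # usage.completion_tokens is part of the OpenAI-style response schema
    tokens = reply["usage"]["completion_tokens"]
    print(f"{tokens} tokens in {elapsed:.1f}s = {tokens / elapsed:.1f} tokens/sec")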

One interesting takeaway from using the 5090 and running many tests across my various systems is that a model's results can change when it is run on different hardware. Ideally, they won't, but your hardware affects how the model is executed by a local AI model runner, which can influence its output. For example, I ran the same version of LM Studio with identical models and settings to give both my old and new desktop systems the same prompt. Logically, you might expect identical results, but in fact the results differ.

The result from my old desktop was terse and simple, while the result from my new desktop was comprehensive. Though I understand in theory how AI works and could have anticipated some differences due to the variability of calculations across hardware, I was admittedly surprised. Seeing the difference firsthand adds context to my understanding.

I wanted to attribute this positive difference to my faster hardware, but that would be incorrect. Mathematically speaking, the output is simply different because the hardware is different, and the fact that the response is more comprehensive on my new desktop is likely coincidental. On closer inspection, the model I used (OpenAI/gpt-oss-20b) likely ran under constraints on the 2060 Super with 8GB VRAM. Since the model is roughly 12GB, that would have forced GPU offloading, introducing noise and numerical degradation in the calculations. Those issues likely created a bias toward a less comprehensive answer.
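
If you want to run this comparison yourself, it helps to remove sampling randomness first, so that any remaining differences are attributable to hardware and runtime rather than chance. A minimal sketch under the same local-server assumptions as above; note that a fixed seed is honored by some runtimes and ignored by others:

    import requests

    URL = "http://localhost:1234/v1/chat/completions"  # assumed local server

    def ask(prompt: str) -> str:
        payload = {
            "model": "openai/gpt-oss-20b",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,  # greedy decoding removes sampling randomness
            "seed": 42,        # best-effort; not all runtimes honor it
        }
        r = requests.post(URL, json=payload, timeout=600)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

    # Run this on each machine and diff the output; with greedy decoding,
    # remaining differences come from numerical effects such as offloading,
    # kernel selection, and floating-point accumulation order.
    print(ask("Explain GPU offloading in two sentences."))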

Moving Forward

Given the opportunity cost, the ongoing demands of AI data centers for PC memory, storage, and GPUs, and a perceived scarcity issue that will persist for years, now seems like a better time to purchase a 5090 than later, when it may not be possible. Please note that this computer makes sense for me and other power users who can benefit from having a 5090 for local AI use cases, but it wouldn't be a good choice for users who don't fit that profile. If you are interested in learning about using local AI resources, almost any Nvidia GeForce RTX 50 series GPU with at least 8GB of VRAM could be a good starting point.

In the PC/GPU world, VRAM ultimately determines how large a model you can use fully on the GPU and how many models you can use simultaneously. A larger model size typically corresponds with greater training depth, capability, and sophistication, which often equates to less iterative work and greater user productivity in the end. When you run out of VRAM, your system attempts to compensate by offloading portions of the model to RAM and CPU (aka GPU offloading), which slows down processing noticeably due to lower bandwidth and higher latency. If you attempt to use more total memory than is available, the model may fail to load or the system may slow dramatically.
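
The arithmetic behind that guidance is simple enough to sketch. The model dimensions and quantization figures below are illustrative assumptions, not the specs of any particular model:

    # Back-of-the-envelope VRAM estimate: quantized weights plus KV cache.
    # All model dimensions below are illustrative assumptions.

    def weights_gb(params_billions: float, bits_per_weight: float) -> float:
        # e.g., a 4-bit quantization averages ~4.5 bits/weight with overhead
        return params_billions * bits_per_weight / 8

    def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                    context_len: int, bytes_per_value: int = 2) -> float:
        # 2 tensors (K and V) per layer, kv_heads * head_dim values per token,
        # stored at fp16 (2 bytes) unless the cache itself is quantized
        return 2 * layers * kv_heads * head_dim * context_len * bytes_per_value / 1e9

    w = weights_gb(20, 4.5)  # hypothetical 20B-parameter model, ~11 GB
    kv = kv_cache_gb(layers=48, kv_heads=8, head_dim=128, context_len=32_768)
    print(f"weights ~{w:.1f} GB + KV cache ~{kv:.1f} GB = ~{w + kv:.1f} GB "
          f"vs. 32 GB on a 5090")

Once the estimated total exceeds your VRAM, the offloading penalty described above kicks in.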

Using a Mac with unified memory instead of a PC with a discrete GPU removes the hard VRAM boundary and reduces the performance cliff associated with GPU offloading, but you are still limited to whatever unified memory your Mac has. Assuming you can fit the model(s) in use and their associated KV (Key-Value) cache — which scales with context length — into the 5090’s 32GB of VRAM, your typical Mac isn’t going to outperform a 5090 in raw inference speed.

If you are serious about working with AI locally, you may want to step up to an Nvidia GeForce RTX 50 series GPU with at least 16GB of VRAM, which would provide a longer runway for experimentation. Either option (8GB or 16GB) shouldn't break the bank compared to a 5090. Buying a cheaper GPU will allow you to work with local AI resources and become familiar with the tools, but if all goes well, you may wish you had purchased a 5090 or something capable of running even larger models concurrently, such as a high-end Mac Studio (M3 Ultra).



About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of agile technology solutions to investors and the financial services community at large.

To learn more, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.

Around this same time last year, many of us said our final goodbyes to Windows XP and Exchange 2003. This year, Microsoft's latest End-Of-Life (EOL) event – along with good sense – will force most of the firms that are still using Windows Server 2003 to replace it with a newer version of the Windows Server operating system (OS). July 14th, 2015 marks the end of extended support for the 2003 product line – after that date, there won't be any more security updates.

For those unfamiliar with the issue this raises, compliance regulations and standards related to private information and security dictate that firms keep the software and hardware that power their businesses up to date with regular patches. Your firm's Written Information Security Program (WISP) should detail a policy of adherence to these standards, among many others, and in there somewhere you have almost certainly indicated that you are keeping your systems updated with respect to security.

Like Windows XP, Windows Server 2003 has been around long enough and really should be replaced, so there is not much point in delaying the switch.  Most firms have likely changed over to Windows Server 2008 or 2012, but those that haven’t made the change yet should be planning on upgrading their server(s) in Q2 of 2015.


Alternatives to Windows 2003?

Assuming your firm is committed to Microsoft Server products, you have two choices:

1. Windows Server 2008 R2 (2008)

2008 is a mature operating system that is still in use at a large number of firms today. However, mainstream support for 2008 ended earlier this year (1/13/2015), and though extended support is available until 1/14/2020, it probably doesn't make sense to move from 2003 to 2008 in 2015. Firms that have existing 2008 software licenses may not want to incur the additional expense of 2012 licenses, and those with significant compatibility concerns may opt to install Windows 2008 on new server hardware.

2. Windows Server 2012 R2 (2012)

2012 is the latest and greatest from Microsoft. It has a shiny new interface and a bevy of neat features like deduplication. My experience with 2012 has been overwhelmingly positive. Though worries about 2012 compatibility with legacy applications may delay widespread acceptance of this operating system, many firms will ultimately choose to make the switch to 2012.

What happens if we stay on Windows 2003?

Your server will still work, but you will not get any more security updates from Microsoft, and your firm will technically be out of compliance.

What else could happen?

Software companies and other parties your firm interfaces with will assume that you are making these updates.  Your firm’s failure to upgrade to a later version of Windows Server could cause problems that you and your staff may not be able to anticipate.

As an example of this, one of my clients that was slow to upgrade all of their Windows XP systems last year found that the latest version of Orion’s desktop software, which was automatically updated sometime in Q1 of 2014, was incompatible with Windows XP.  Unfortunately for the client, there wasn’t a way to reverse the update or use an older version.

At the time, I was surprised, especially because the customer wasn’t given any notice of the “feature enhancement.”  It didn’t make sense that a software company would launch an update incompatible with existing customer desktops that were still supported by Microsoft.  Thankfully, Orion addressed the issue quickly by providing the users affected with remote desktop (RDP) connections to Orion servers for an interim period.

About the Author: Kevin Shea is the Founder and Principal Consultant of Quartare; Quartare provides a wide variety of technology solutions to investment management and financial services firms nationwide.

For details, please visit Quartare.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@quartare.com.

Windows XP was a mainstay at many financial services firms for nearly a decade. In keeping with the Microsoft Lifecycle Support Policy, support for Windows XP and similarly aged software must eventually end. You can learn more about the policy here.

According to Microsoft, extended support for Windows XP is scheduled to end on 04/08/2014.  If your office is using Windows XP, you should be working on plans to phase out XP by replacing those systems with new PCs or upgrading the PCs to a more recent workstation operating system in the next six to nine months.  There is no good reason to wait until or beyond April 2014 to perform these upgrades.

Why should you care?

Most security standards – for instance, 201 CMR 17.00 – require that you apply security patches on a regular basis.  It is the extended support from Microsoft that allows you to do this.  After extended support has ended, there is no guarantee that any security patches will be released for these systems.  In order to stay compliant with security standards, firms using Windows XP will need to upgrade to other systems.

Hasta la vista, Vista!


Currently, we are recommending that business users implement Windows 7 Professional on workstations.  Windows 8 makes sense for home users with touch screens, but we prefer not to implement operating systems before they have become mainstream in the workplace; Windows 8 just isn’t there yet.

Vista extended support is good through 04/11/2017, but Vista has always been a dog, and any business users still using Vista should strongly consider moving to Windows 7 Professional immediately.

Server-based systems affected by the Microsoft Lifecycle Support Policy

Windows Server 2003 extended support is good through 07/14/2015. Nevertheless, Windows Server 2008 R2 will likely be the most widely used network operating system among investment advisors by the end of 2013. Windows Server 2012 was released on 09/04/2012 and hasn't yet been widely implemented among the SMBs we are familiar with.

Exchange Server 2003 extended support also ends on 04/08/2014. The implications for security updates are the same as those detailed above regarding XP. If you know which version of Exchange is in use at your office, you can check Microsoft's site here to determine when the end of extended support will affect your firm.

Like Vista, Exchange Server 2007 has extended support through 4/11/2017, so there is no need to upgrade in the near term. Exchange 2010 adds OWA support for Firefox and Chrome. In addition, Exchange 2010 makes better use of lower-cost disk subsystems, allowing you to get a performance boost over 2007 without spending a premium. Those are nice features, but not nice enough to push an Exchange upgrade before a normal IT lifecycle replacement demands it.

Exchange Server 2003 will be phased out by many advisors this year, and most will move to Exchange Server 2010.  Though Exchange Server 2013 was technically released in November 2012, it may be premature for the SMBs that dominate the investment industry to adopt Exchange Server 2013 over Exchange Server 2010.  Presently, there is no direct migration path from Exchange 2003 to Exchange 2013.  A number of small investment advisors will move to hosted Exchange solutions and no longer keep Exchange servers at their offices.

With this many possible changes slated for the next ten months, now is a good time to make sure your firm has addressed the issues or has a plan to upgrade any systems affected.

About the Author: Kevin Shea is President of InfoSystems Integrated, Inc. (ISI); ISI provides a wide variety of outsourced IT solutions to investment advisors nationwide.

For details, please visit isitc.com, contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@isitc.com.

As a provider of technology solutions for financial services firms small and large nationwide, I frequently come in contact with investment firms of diverse dynamics and decision-making processes. I am, of course, familiar with the process and discipline of getting three separate quotes for goods and services, but even after decades of bidding on projects, it is still unclear to me what investment firms actually do with this information.

In some cases, it seems like the decision has already been made and prospects are just going through the motions to satisfy a procedure established by their firm. Gut decisions sometimes overrule common sense.

One of my clients actually adheres to this discipline for everything and, if the rumors are true, even gets three prices for paper clips. In my own experience with them, they did, in fact, get three quotes for a single piece of computer equipment that cost about $75. Considering current wage and consulting rates, that is arguably not a good use of time or money. Perhaps it's a more altruistic goal of keeping our economy competitive that drives their policy.


Opportunity

Recently, I was contacted by a firm looking for assistance with some Axys report modifications. One of our competitors had provided them with a quote for the work they needed. The prospect felt the price was too high and solicited my opinion. I never saw the competitor's quote, but heard from the prospect that they wanted $3-4k up front and expected the job to cost $7-8k. In another conversation, I was told that there was also a local company bidding on the work. That made sense to me – three bids.

I was provided with a detailed specification of what needed to be done and asked to provide a quote. The firm was looking to make some modifications to the Axys report that generates Advent's performance history data and stores it as Net of Fees (PRF) and Gross of Fees (PBF) data. Though the requirements seemed complicated initially, it eventually became clear to me that the job simply required filtering in a couple of REPLANG routines and some minor additions.

I shared my impression with the prospect and ballparked our bid at $3k (a 12-hour block of time), less than half of our known competitor's bid. I explained that the actual work was likely to take three to four hours, and the rest of the time would be spent on testing, support, and maintenance. My expectation was that we would get the work done in a half day to a day at most, and the remainder of our time could be used for any required maintenance or modifications later in the year.


Follow-Up

After about a week, I called to follow up and found out that the firm was strongly considering having the work done by their local vendor, who told them it could be done in seven to ten days.  “Excuse me,” I said.  “Don’t you mean seven to ten hours?”

“No,” he replied.  He further explained that they really like using the local vendor and would probably use them for the job, which I fully understand.  I have, no doubt, benefited from this sentiment in Boston for years.  At that point in the call, I was thinking that it was more like seven to ten lines of code, but thankfully I didn’t start laughing.  I waited until the call ended.


No Risk, No Reward

In the end, your firm’s decision to select one bid over another is a personal one, similar in some respects to the one that dictates an investment adviser’s success attracting new clients and retaining them.  It’s about trust, performance, and the ability to continually communicate that you are worthy of one and capable of the other.  To succeed long-term in the financial services business, you need both.  Through good performance, we gain a measure of trust.  However, without a measure of initial trust or risk, there is no opportunity to perform.

About the Author: Kevin Shea is President of InfoSystems Integrated, Inc. (ISI); ISI provides a wide variety of outsourced IT solutions to investment advisors nationwide. For details, please visit isitc.com or contact Kevin Shea via phone at 617-720-3400 x202 or e-mail at kshea@isitc.com.